From 101d4a02f162b057ec1a0e1ef196e887314cab55 Mon Sep 17 00:00:00 2001
From: sigjo290 <sigjo290@student.liu.se>
Date: Wed, 11 Dec 2024 10:04:25 +0100
Subject: [PATCH] Lab 3: Better answer on assignment 1.2

---
 lab3/lab-notes.md | 10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/lab3/lab-notes.md b/lab3/lab-notes.md
index 299a716..f521d14 100644
--- a/lab3/lab-notes.md
+++ b/lab3/lab-notes.md
@@ -8,7 +8,15 @@
 This is known as the kernel trick: If x enters the model as φ(x)^T φ(x') only,
 we can choose a kernel κ(x, x') instead of choosing φ(x). p. 194
 
 - In the literature, it is common to see a formulation of SVMs that makes use of a hyperparameter. What is the purpose of this hyperparameter?
-The purpose is to regularize. p. 211
+The hyperparameter C controls the amount of regularization in the dual formulation of the SVM:
+\[
+\alpha = \arg \min_\alpha \left( \frac{1}{2} \alpha^T K(X, X) \alpha - \alpha^T y \right)
+\]
+\[
+\text{subject to } \lvert \alpha_i \rvert \leq \frac{1}{2n\lambda} \quad \text{and} \quad 0 \leq \alpha_i y_i,
+\]
+with \(y(x^\star) = \operatorname{sign} \left( b + \alpha^T K(X, x^\star) \right)\).
+Here \(C = \frac{1}{2n\lambda}\). p. 211
 
 - In neural networks, what do we mean by mini-batch and epoch?
-- 
GitLab
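
Supplementary note on the answer above: the sketch below (an illustration, not part of the patch) shows the regularizing effect of C empirically, using scikit-learn's `SVC`, whose `C` parameter is the same bound on the dual variables as in the formulation above. The synthetic dataset and the particular C values are assumptions chosen only to make the trend visible.

```python
# Hedged sketch: the regularizing effect of the SVM hyperparameter C.
# In the dual above, C = 1/(2*n*lambda) bounds the dual variables alpha_i;
# in scikit-learn this bound is the SVC(C=...) parameter.
# Assumptions: scikit-learn installed; dataset is synthetic.
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=2, n_redundant=0,
                           n_clusters_per_class=1, flip_y=0.1, random_state=0)

for C in (0.01, 1.0, 100.0):
    clf = SVC(kernel="rbf", C=C).fit(X, y)
    # Small C -> tighter bound on alpha_i -> stronger regularization:
    # typically more support vectors and lower training accuracy.
    print(f"C={C:>6}: support vectors={clf.n_support_.sum()}, "
          f"train acc={clf.score(X, y):.3f}")
```

A small C (a tight bound on the α_i, i.e. large λ) keeps the decision boundary smooth at the cost of training accuracy, while a large C lets the model fit the training data more aggressively, matching the interpretation of C as the inverse of the regularization strength.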