Commit 101d4a02 authored by Signe Jörnö Hammarström's avatar Signe Jörnö Hammarström
Lab 3: Better answer on assignment 1.2

parent 888c8096
@@ -8,7 +8,15 @@ This is known as the kernel trick:
If x enters the model only through 𝝓(x)^T𝝓(x'), we can choose a kernel 𝜅(x, x') directly instead of choosing 𝝓(x). p. 194
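A minimal numeric sketch of the kernel trick, using the degree-2 polynomial kernel 𝜅(x, x') = (x^T x')² on 2-D inputs, whose explicit feature map is 𝝓(x) = (x₁², √2 x₁x₂, x₂²); the specific vectors are made-up illustration values:

```python
import math

def phi(v):
    # explicit feature map for the degree-2 polynomial kernel (2-D input)
    x1, x2 = v
    return (x1 * x1, math.sqrt(2) * x1 * x2, x2 * x2)

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

x, xp = (1.0, 2.0), (3.0, -1.0)     # arbitrary example inputs
lhs = dot(phi(x), phi(xp))          # inner product computed in feature space
rhs = dot(x, xp) ** 2               # kernel evaluated directly, no phi needed
```

Both routes give the same number, but the kernel route never forms 𝝓(x), which is what makes high- (even infinite-) dimensional feature spaces tractable.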
- In the literature, it is common to see a formulation of SVMs that makes use of a hyperparameter. What is the purpose of this hyperparameter?
The purpose is to control the amount of regularization. p. 211
The hyperparameter C appears in the dual formulation of SVMs:
\[
\alpha = \arg \min_\alpha \left( \frac{1}{2} \alpha^T K(X, X) \alpha - \alpha^T y \right)
\]
\[
\text{subject to } \lvert \alpha_i \rvert \leq \frac{1}{2n\lambda} \quad \text{and} \quad 0 \leq \alpha_i y_i
\]
with \( \hat{y}(x^\star) = \operatorname{sign} \left( b + \alpha^T K(X, x^\star) \right) \).
Here \( C = \frac{1}{2n\lambda} \). p. 211
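A quick sanity check of the correspondence C = 1/(2nλ): a large C (small λ) means weak regularization, a small C (large λ) means strong regularization. The n and λ values below are hypothetical, chosen only to illustrate the conversion:

```python
# Convert between the lambda-regularized and C-parameterized SVM formulations.
n = 100          # number of training samples (hypothetical)
lam = 0.005      # regularization parameter lambda (hypothetical)

C = 1 / (2 * n * lam)          # C = 1/(2*100*0.005) = 1.0
lam_back = 1 / (2 * n * C)     # inverting recovers the original lambda
```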
- In neural networks, what do we mean by mini-batch and epoch?
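As a sketch of those two terms: one epoch is a full pass over the training set, and within an epoch the (shuffled) data is split into mini-batches, with one gradient step taken per mini-batch. A minimal illustration with a toy dataset and made-up sizes:

```python
import random

def minibatches(data, batch_size):
    """Shuffle once per epoch, then yield consecutive mini-batches."""
    idx = list(range(len(data)))
    random.shuffle(idx)
    for start in range(0, len(idx), batch_size):
        yield [data[i] for i in idx[start:start + batch_size]]

data = list(range(10))        # toy dataset of 10 samples
n_epochs, batch_size = 3, 4   # hypothetical training settings

for epoch in range(n_epochs):                      # one epoch = one full pass
    for batch in minibatches(data, batch_size):
        pass                                       # one gradient step per mini-batch
```

With 10 samples and batch size 4, each epoch yields three mini-batches (sizes 4, 4, 2), and every sample is seen exactly once per epoch.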