Tutorial 1: Optimizers
By Yinggan XU Dibbla

The tutorial video can be found here.

This notebook will only cover the basic optimizers and their ideas. However, optimizers for DL remain a very interesting question.

Background Knowledge

$\mu$-strong Convexity

We can refer to this note. A function $f$ is $\mu$-strongly convex if:

$$f(y)\ge f(x)+\nabla f(x)^T(y-x) + \frac{\mu}{2}\lVert y-x\rVert^2 \quad \text{for a fixed } \mu>0 \text{ and for all } x,y$$

Note that strong convexity doesn't necessarily require the function to be differentiable; the gradient is replaced by the sub-gradient when the function is non-smooth....
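To make the definition concrete, here is a small numerical sketch (not part of the original tutorial) that checks the strong-convexity inequality for a quadratic $f(x)=\tfrac{1}{2}x^TAx$, whose strong-convexity constant is the smallest eigenvalue of $A$. The matrix, points, and tolerance below are arbitrary choices for illustration.

```python
# Numerical check of mu-strong convexity for a quadratic f(x) = 0.5 * x^T A x.
# For symmetric positive-definite A, f is mu-strongly convex with mu = lambda_min(A).
import numpy as np

rng = np.random.default_rng(0)

# Build a random symmetric positive-definite matrix A.
M = rng.standard_normal((5, 5))
A = M @ M.T + 0.1 * np.eye(5)

f = lambda x: 0.5 * x @ A @ x        # quadratic objective
grad = lambda x: A @ x               # its gradient
mu = np.linalg.eigvalsh(A).min()     # strong-convexity constant

# Check f(y) >= f(x) + grad(x)^T (y - x) + (mu/2) * ||y - x||^2 at random points.
for _ in range(1000):
    x, y = rng.standard_normal(5), rng.standard_normal(5)
    lower_bound = f(x) + grad(x) @ (y - x) + 0.5 * mu * np.linalg.norm(y - x) ** 2
    assert f(y) >= lower_bound - 1e-9  # small tolerance for floating-point error

print("mu-strong convexity inequality holds at all sampled points, mu =", mu)
```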