{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Non-Linear Least Squares #\n",
     "This notebook covers the theory behind non-linear least squares: where such problems come from and the logic behind the different ways of solving them. We start with the general least squares optimization problem and steepest gradient descent, and then move on to the non-linear case and the Gauss-Newton method for solving it."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\newcommand{\\x}{\\boldsymbol{\\mathrm{x}}}$\n",
    "$\\newcommand{\\h}{\\boldsymbol{\\mathrm{h}}}$\n",
    "$\\newcommand{\\g}{\\boldsymbol{\\mathrm{g}}}$\n",
    "$\\newcommand{\\J}{\\boldsymbol{\\mathrm{J}}}$\n",
    "$\\newcommand{\\H}{\\boldsymbol{\\mathrm{H}}}$\n",
    "$\\newcommand{\\f}{\\boldsymbol{\\mathrm{f}}}$\n",
    "$\\newcommand{\\F}{\\boldsymbol{\\mathrm{F}}}$\n",
    "$\\newcommand{\\l}{\\boldsymbol{l}}$\n",
    "$\\newcommand{\\L}{\\boldsymbol{\\mathrm{L}}}$\n",
    "$\\newcommand{\\norm}[1]{\\left\\lVert#1\\right\\rVert}$\n",
    "## Least Squares Problem\n",
    "We want to find a **local minimizer** $\\x^*$ for\n",
    "\\begin{equation}\n",
    "F(\\x) = \\frac{1}{2}\\sum_{i=1}^{m}(f_i(\\x))^2,\n",
    "\\end{equation}\n",
    "where $f_i: {\\rm I\\!R}^n \\rightarrow {\\rm I\\!R}, i = 1, \\dots, m$ are given functions, and $m \\geq n$.\n",
    "\n",
    "We assume that the cost function $F$ is differentiable and so smooth that the following Taylor expansion holds:\n",
    "\\begin{equation}\n",
    "F(\\x + \\h) = F(\\x) + \\h^T\\g + \\frac{1}{2}\\h^T\\H\\h + O(\\norm{\\h}^3),\n",
    "\\end{equation}\n",
     "where $\\g$ is the *gradient*: $\n",
     "\\g \\equiv \\F^\\prime(\\x) = \\begin{bmatrix}\n",
     "    \\frac{\\partial{F}}{\\partial x_1}(\\x) \\\\\n",
     "    \\vdots \\\\\n",
     "    \\frac{\\partial{F}}{\\partial x_n}(\\x)\n",
     "  \\end{bmatrix}$, and $\\H$ is the *Hessian*: $\\H \\equiv \\F^{\\prime \\prime}(\\x) = \\begin{bmatrix} \n",
     "      \\frac{\\partial^2 F}{\\partial x_i \\partial x_j}(\\x)\n",
     "  \\end{bmatrix}$.\n",
    "  \n",
    "### Necessary condition for a local minimizer\n",
    "If $\\x^*$ is a local minimizer, then $\\x^*$ is a stationary point, i.e., $\\g^* \\equiv \\F^\\prime(\\x^*) = 0$.\n",
    "\n",
    "However, it can also be a *saddle point*, i.e. a point which is a local minimum in one direction and a local maximum in another direction. To determine if a given stationary point $\\x_s$ is a local minimizer we need to include the second order term in the Taylor series:\n",
    "\\begin{equation}\n",
     "F(\\x_s + \\h) = F(\\x_s) + \\frac{1}{2}\\h^T \\H_s \\h + O(\\norm{\\h}^3),\n",
    "\\end{equation}\n",
     "where $\\H_s = F^{\\prime \\prime}(\\x_s)$. From the fact that $F(\\x_s + \\h)$ must be bigger than $F(\\x_s)$ for any sufficiently small $\\h \\neq 0$ we can formulate:\n",
    "\n",
    "### Sufficient condition for a local minimizer\n",
    "If $\\x_s$ is a stationary point and $F^{\\prime \\prime}(\\x_s)$ is positive definite then $\\x_s$ is a local minimizer."
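,
     "\n",
     "As a small illustration (with a hypothetical two-variable cost), both conditions can be checked numerically: the gradient vanishes at the candidate point and all Hessian eigenvalues are positive:\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "# Hypothetical cost F(x) = x1^2 + 2 * x2^2 with stationary point x_s = (0, 0).\n",
     "def gradient(x):\n",
     "    return np.array([2 * x[0], 4 * x[1]])\n",
     "\n",
     "hessian = np.array([[2.0, 0.0], [0.0, 4.0]])  # constant for this quadratic F\n",
     "\n",
     "x_s = np.array([0.0, 0.0])\n",
     "is_stationary = np.allclose(gradient(x_s), 0.0)\n",
     "is_pos_def = np.all(np.linalg.eigvalsh(hessian) > 0)\n",
     "print(is_stationary and is_pos_def)  # True: x_s is a local minimizer\n",
     "```"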
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\newcommand\\myeq{\\stackrel{\\tiny{\\mbox{dot product}}}{=}}$\n",
    "\n",
    "## Descent methods\n",
    "Now that we know *what* we are searching for we need to find *how* to search for a solution. We do it by (iteratively) moving in the correct direction to a local minimizer. Roughly we describe what we need to do as follows:\n",
    "\n",
    "- find the direction of descent $\\h$.\n",
    "- make a step of some size $\\alpha$ in that direction.\n",
    "- repeat\n",
    "\n",
    "We then consider the variation of the function $F$ with the step $\\h$:\n",
    "\\begin{eqnarray}\n",
    "    F(\\x + \\alpha \\h) \n",
    "        &=& F(\\x) + \\alpha \\h^T \\F^\\prime(\\x) + O(\\alpha^2) \\\\\n",
    "        &\\simeq& F(\\x) + \\alpha \\h^T \\F^\\prime(\\x)\n",
    "\\end{eqnarray}\n",
    "\n",
    "### Steepest descent\n",
     "How big should the step be? This is an open question. We can say that we perform a step $\\alpha \\h$, where $\\alpha$ is positive. Then the relative gain in function value is:\n",
    "\n",
    "\\begin{eqnarray}\n",
    "    \\lim_{\\alpha \\rightarrow 0}\\frac{F(\\x) - F(\\x + \\alpha \\h)}{\\alpha \\norm{\\h}} \n",
    "        = \\lim_{\\alpha \\rightarrow 0}\\frac{F(\\x) - (F(\\x) + \\alpha \\h^T \\F^\\prime(\\x))}{\\alpha \\norm{\\h}} \n",
    "        = \\lim_{\\alpha \\rightarrow 0}\\frac{- \\alpha \\h^T \\F^\\prime(\\x)}{\\alpha \\norm{\\h}} \n",
    "        = -\\frac{1}{\\norm{\\h}} \\h^T \\F^\\prime(\\x)\n",
    "        \\myeq -\\norm{\\F^\\prime(\\x)} \\cos{\\theta}\n",
    "\\end{eqnarray}\n",
    "\n",
     "Here, $\\theta$ is the angle between $\\h$ and $\\F^\\prime(\\x)$. Therefore, we see that the biggest decrease in function value is reached if $\\theta = \\pi$, i.e. $\\h = -\\F^\\prime(\\x)$: the direction of steepest descent.\n",
    "\n",
     "This method has one downside: since the magnitude of the step depends on the magnitude of the derivative, the final convergence (close to a stationary point, where the derivative approaches zero) is slow, even though the method performs well in the initial stage of the iterative process.\n",
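     "\n",
     "As a minimal sketch (assuming a simple quadratic cost and a fixed hypothetical step size `alpha`), steepest descent might look like:\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "# Example cost: F(x) = 1/2 * ||x||^2, with gradient F'(x) = x.\n",
     "def grad_F(x):\n",
     "    return x\n",
     "\n",
     "x = np.array([4.0, -2.0])  # hypothetical starting point\n",
     "alpha = 0.1  # fixed step size (an assumption; often chosen by a line search)\n",
     "for _ in range(100):\n",
     "    h = -grad_F(x)  # steepest descent direction: h = -F'(x)\n",
     "    x = x + alpha * h\n",
     "print(x)  # approaches the minimizer [0, 0]\n",
     "```\n",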
    "\n",
    "### Newton's method\n",
     "To get good convergence in the final stage we can look at Newton's method. We derive it from the fact that $\\x^*$ is a stationary point: writing $\\x^* = \\x + \\h$, we require $\\F^\\prime(\\x + \\h) = 0$. We can then consider the Taylor expansion of the gradient:\n",
    "\n",
    "\\begin{eqnarray}\n",
    "    \\F^\\prime(\\x + \\h) &=& \\F^\\prime(\\x) + \\F^{\\prime \\prime}(\\x) \\h + O(\\norm{\\h}^2) \\\\\n",
    "    &\\simeq& \\F^\\prime(\\x) + \\F^{\\prime \\prime}(\\x) \\h\n",
    "\\end{eqnarray}\n",
    "\n",
     "Setting the left-hand side of this equation to $0$ gives us a linear system whose solution is the step $\\h$: $\\H \\h = -\\F^{\\prime}(\\x)$.\n",
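     "\n",
     "For a quadratic cost the Newton step reaches the stationary point in a single iteration. A small sketch (assuming a hypothetical quadratic $F$ with known gradient and Hessian):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "# Hypothetical quadratic cost F(x) = 1/2 x^T A x - b^T x,\n",
     "# so F'(x) = A x - b and F''(x) = A (a constant Hessian).\n",
     "A = np.array([[3.0, 1.0], [1.0, 2.0]])\n",
     "b = np.array([1.0, 1.0])\n",
     "\n",
     "x = np.array([5.0, 5.0])    # starting point\n",
     "g = A @ x - b               # gradient F'(x)\n",
     "h = np.linalg.solve(A, -g)  # Newton step: solve H h = -F'(x)\n",
     "x = x + h\n",
     "print(A @ x - b)  # gradient at the new point: ~[0, 0]\n",
     "```\n",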
    "\n",
     "Note that Newton's method does not really care whether the direction of optimization is towards a minimum, a maximum or a saddle point. It just guides you towards the \"closest\" stationary point. So it works best when you know you are already in the vicinity of a minimizer.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Non-linear Least Squares\n",
     "These methods are more efficient than general optimization methods when applied to least squares problems, and they avoid computing second derivatives. More formally, we want to find $\\x^* = \\mathrm{argmin}_\\x\\{F(\\x)\\}$, where \n",
    "\\begin{equation}\n",
    "    F(\\x) = \\frac{1}{2}\\sum_{i = 1}^{m}(f_i(\\x))^2 = \\frac{1}{2} \\f(\\x)^T \\f(\\x)\n",
    "\\end{equation}\n",
    "\n",
    "Provided, that $\\f$ has continuous second partial derivatives, we can write its Taylor expansion:\n",
    "\n",
    "\\begin{equation}\n",
    "    \\f(\\x + \\h) = \\f(\\x) + \\J(\\x)\\h + O(\\norm{\\h}^2),\n",
    "\\end{equation}\n",
    "\n",
    "where $\\J$ is the *Jacobian* and $\\h$ is some step. \n",
    "\n",
     "The partial derivatives of $F$ follow directly from its definition at the beginning of this cell:\n",
    "\n",
    "\\begin{eqnarray}\n",
    "    \\frac{\\partial F}{\\partial x_j}(\\x) \n",
    "    &=& \\frac{\\partial (\\frac{1}{2}\\sum_{i = 1}^{m}(f_i(\\x))^2) }{\\partial x_j} \\\\\n",
    "    &=& \\sum_{i = 1}^{m} {f_i(\\x) \\frac{\\partial f_i}{\\partial x_j}(\\x) }.\n",
    "\\end{eqnarray}\n",
    "\n",
    "Thus the gradient is $\\F^\\prime(\\x) = \\J(\\x)^T \\f(\\x)$.\n",
    "\n",
     "Following the same logic, we can also compute the Hessian: $\\F^{\\prime \\prime}(\\x) = \\J(\\x)^T \\J(\\x) + \\sum_{i=1}^{m}{f_i(\\x)\\, f_i^{\\prime \\prime}(\\x)}$.\n",
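     "\n",
     "We can sanity-check the gradient formula $\\F^\\prime(\\x) = \\J(\\x)^T \\f(\\x)$ numerically on a small example (the residual functions below are hypothetical):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "# Hypothetical residuals f(x) = [x1^2 - x2, x1 + x2, x2 - 1] for x in R^2.\n",
     "def f(x):\n",
     "    return np.array([x[0]**2 - x[1], x[0] + x[1], x[1] - 1.0])\n",
     "\n",
     "def J(x):  # Jacobian of f: J[i, j] = d f_i / d x_j\n",
     "    return np.array([[2 * x[0], -1.0], [1.0, 1.0], [0.0, 1.0]])\n",
     "\n",
     "def F(x):  # F(x) = 1/2 f(x)^T f(x)\n",
     "    return 0.5 * f(x) @ f(x)\n",
     "\n",
     "x = np.array([1.3, -0.7])\n",
     "analytic = J(x).T @ f(x)  # gradient via J^T f\n",
     "\n",
     "# Central finite differences of F for comparison\n",
     "eps = 1e-6\n",
     "numeric = np.array([(F(x + eps * e) - F(x - eps * e)) / (2 * eps)\n",
     "                    for e in np.eye(2)])\n",
     "print(np.allclose(analytic, numeric, atol=1e-5))  # True\n",
     "```\n",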
    "\n",
    "Depending on the underlying functions, such a Hessian can be too expensive to compute. To avoid direct Hessian computation we can use:\n",
    "\n",
    "### The Gauss-Newton Method\n",
     "This method is just a non-linear least squares optimization method that **linearizes** $\\f$ in the neighborhood of $\\x$, i.e. for small $\\norm{\\h}$, the Taylor expansion of $\\f$ is:\n",
    "\n",
    "\\begin{equation}\n",
    "    \\f(\\x + \\h) \\simeq \\l(\\h) \\equiv \\f(\\x) + \\J(\\x)\\h\n",
    "\\end{equation}\n",
    "\n",
    "And we can do the same for function $F$:\n",
    "\n",
    "\\begin{eqnarray}\n",
    "    F(\\x + \\h) \\simeq L(\\h) &\\equiv& \\frac{1}{2}\\l(\\h)^T \\l(\\h) \\\\\n",
    "    &=& \\frac{1}{2} \\f^T \\f + \\h^T \\J^T \\f + \\frac{1}{2}\\h^T \\J^T \\J \\h \\\\\n",
    "    &=& F(\\x) + \\h^T \\J^T \\f + \\frac{1}{2}\\h^T \\J^T \\J \\h\n",
    "\\end{eqnarray}\n",
    "\n",
    "From this it is easy to derive that the gradient and the Hessian of $L$ are:\n",
    "\n",
    "\\begin{eqnarray}\n",
    "    \\L^\\prime(\\h) &=& \\J^T \\f + \\J^T \\J\\h \\\\\n",
    "    \\L^{\\prime \\prime}(\\h) &=& \\J^T \\J\n",
    "\\end{eqnarray}\n",
    "\n",
     "Now, provided that $\\J$ has full column rank (so that $\\J^T \\J$ is positive definite), we can find the unique minimizer of $L$ by solving for $\\h$:\n",
    "\n",
    "\\begin{eqnarray}\n",
    "    (\\J^T \\J)\\h = -\\J^T \\f\n",
    "\\end{eqnarray}\n",
    "\n",
    "And the typical next step is then: $\\x \\rightarrow \\x + \\alpha \\h$. The classical Gauss-Newton method uses $\\alpha = 1$."
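,
     "\n",
     "Putting it all together, a minimal Gauss-Newton loop with $\\alpha = 1$ (fitting a hypothetical exponential model $y = e^{x_1 t} + x_2$ to synthetic, noise-free data) could look like:\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "# Synthetic data from the hypothetical model y = exp(x1 * t) + x2.\n",
     "t = np.linspace(0.0, 1.0, 10)\n",
     "x_true = np.array([0.5, 1.0])\n",
     "y = np.exp(x_true[0] * t) + x_true[1]\n",
     "\n",
     "def residuals(x):  # f(x): model minus data\n",
     "    return np.exp(x[0] * t) + x[1] - y\n",
     "\n",
     "def jacobian(x):  # J[i, j] = d f_i / d x_j\n",
     "    return np.column_stack([t * np.exp(x[0] * t), np.ones_like(t)])\n",
     "\n",
     "x = np.array([0.3, 0.5])  # initial guess\n",
     "for _ in range(20):\n",
     "    Jx, fx = jacobian(x), residuals(x)\n",
     "    h = np.linalg.solve(Jx.T @ Jx, -Jx.T @ fx)  # (J^T J) h = -J^T f\n",
     "    x = x + h  # classical Gauss-Newton step: alpha = 1\n",
     "print(x)  # converges to x_true = [0.5, 1.0]\n",
     "```"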
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.5.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}