{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 1. Gradients and Hessians (and Jacobians)\n",
    "\n",
     "Let us recall that for a differentiable function $f=(f_1,\\ldots, f_m):\\mathbb{R}^n\\to \\mathbb{R}^m$ (where $f_i:\\mathbb{R}^n\\to \\mathbb{R}$ is the $i$-th component function of $f$), we can define at every point $x\\in \\mathbb{R}^n$ the differential ${\\rm d}_xf:\\mathbb{R}^n\\to \\mathbb{R}^m$ (we identify the tangent space at $x$ with the vector space $\\mathbb{R}^n$ and the tangent space at $f(x)$ with the vector space $\\mathbb{R}^m$). \n",
    "\n",
    "${\\rm d}_xf$ is a linear map. In the standard basis we can represent this map by the following $m\\times n$-matrix (called Jacobian matrix):\n",
    "\n",
    "$$\n",
    "{\\rm d}_xf=\n",
    "\\begin{bmatrix} \n",
    " \\frac{\\partial f_1}{\\partial x_1}(x) & \\cdots & \\frac{\\partial f_1}{\\partial x_n}(x)\\\\\n",
    "    \\vdots & \\ddots & \\vdots\\\\\n",
    "    \\frac{\\partial f_m}{\\partial x_1} (x)& \\cdots & \\frac{\\partial f_m}{\\partial x_n}(x) \n",
    " \\end{bmatrix}\n",
    "= \n",
    "\\begin{bmatrix}\n",
     "(\\nabla f_1(x))^T\\\\ \\vdots \\\\(\\nabla f_m(x))^T\n",
    "\\end{bmatrix}\n",
    "$$\n",
    "\n",
    "Note that for $m=1$ (i.e. $f=f_1$) the gradient is the transpose of the Jacobian:\n",
    "$$\\begin{bmatrix}\n",
    "\\frac{\\partial f}{\\partial x_1}(x) & \\cdots & \\frac{\\partial f}{\\partial x_n}(x) \n",
    "\\end{bmatrix}^T= \\nabla f(x)$$\n",
    "\n",
    "Also note that (still for $m=1$) if we view the gradient as a function $\\nabla f:\\mathbb{R}^n\\to \\mathbb{R}^n$ the hessian $\\nabla^2 f(x)$ is the Jacobian of the gradient at $x$.\n",
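     "\n",
     "These relations are easy to sanity-check numerically. The following sketch (not part of the original text; the map $f$ below is an arbitrary example) compares a central-difference Jacobian with the hand-computed one:\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "def jacobian_fd(f, x, h=1e-6):\n",
     "    # column j of the Jacobian: central difference along the j-th basis vector\n",
     "    n = len(x)\n",
     "    cols = [(f(x + h * np.eye(n)[j]) - f(x - h * np.eye(n)[j])) / (2 * h)\n",
     "            for j in range(n)]\n",
     "    return np.stack(cols, axis=1)\n",
     "\n",
     "# arbitrary example map f(x) = (x0*x1, x0 + x1**2)\n",
     "f = lambda x: np.array([x[0] * x[1], x[0] + x[1] ** 2])\n",
     "x = np.array([1.0, 2.0])\n",
     "J_analytic = np.array([[x[1], x[0]], [1.0, 2 * x[1]]])  # rows are (grad f_i)^T\n",
     "assert np.allclose(jacobian_fd(f, x), J_analytic, atol=1e-4)\n",
     "```\n",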
    "\n",
    "More generally for a differentiable map $f:V\\to W$ between finite dimensional vector spaces we can define for all $x\\in V$ the differential ${\\rm d}_x f: V\\to W$ by mapping $v\\in V$ as follows:\n",
    "$$ v\\mapsto ({\\rm d}_x f)v := \\lim_{h\\to 0}\\frac{f(x+hv)-f(x)}h.$$\n",
    "${\\rm d}_x f$ is a linear map between the vector spaces $V$ and $W$. \n",
    "\n",
    "If $W = \\mathbb{R}$, then ${\\rm d}_x f:V\\to \\mathbb{R}$ is a linear form. If $\\langle-,-\\rangle: V\\times V \\to \\mathbb{R}, (x,y)\\mapsto \\langle x,y\\rangle$ is an inner product on $V$ (i.e. a symmetric positive definite bilinear form), then there is a unique vector $\\nabla f(x)\\in V$ (called gradient) such that \n",
    "$$({\\rm d}_x f)v = \\langle \\nabla f(x), v\\rangle$$\n",
    "holds for all $v\\in V$.\n",
    "\n",
    "Here are a few facts, that will let us easily compute Jacobians (and therefore gradients and Hessians):\n",
    "\n",
    "1. **Linear functions**: \n",
    "If $f:V\\to W$ is a linear map, then we have \n",
     "$$({\\rm d}_x f) v = f(v),$$\n",
    "i.e. the differential of a linear map is the map itself.  \n",
    "E.g. matrix transposition is a linear map $\\mathbb{R}^{m\\times n}\\to \\mathbb{R}^{n\\times m}$ and hence $({\\rm d}_x x^T)v = v^T$. \n",
    "<br>\n",
    "If $f:\\mathbb{R}^n\\to \\mathbb{R}^m$ is an affine linear map, i.e. $f(x)=Ax+b$ for a $m\\times n$-matrix $A$ and a vector $b\\in \\mathbb{R}^m$, then the Jacobian of $f$ is $A$:\n",
     "$${\\rm d}_x(Ax+b) = A.$$\n",
    "\n",
    "2. **Bilinear functions**: For a bilinear map $B:U\\times V\\to W, (x,y)\\mapsto B(x,y)$ and any tangent vector $(u,v)$ at $(x,y)$ we have: \n",
     "$$\\left({\\rm d}_{(x,y)}B\\right)(u,v) = B(u,y)+B(x,v).$$\n",
    "\n",
    "3. **Chain rule**: \n",
    "${\\rm d}_x (f\\circ g) = ({\\rm d}_{g(x)}f) \\circ ({\\rm d}_{x}g)$\n",
    "\n",
    "4. **Product rule**: matrix multiplication is a bilinear map. Therefore by combining 2. and 3. we get for functions $f:\\mathbb{R}^{n}\\to \\mathbb{R}^{a\\times b}, g: \\mathbb{R}^n\n",
    "\\to \\mathbb{R}^{b\\times c}$ and a tangent vector $v\\in \\mathbb{R}^n$:\n",
    "$${\\rm d}_x(f(x)\\cdot g(x))v = ({\\rm d}_x f(x) v)\\cdot g(x) + f(x) \\cdot ({\\rm d}_x g(x) v) $$\n",
     "Compare this with the well-known one-dimensional formula \n",
    "$$(f(x)g(x))' = f'(x)g(x)+f(x)g'(x).$$\n",
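     "\n",
     "As an aside (not in the original text), the product rule can be checked numerically; here $f$ and $g$ are arbitrary matrix-valued example functions of a scalar:\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "# arbitrary example: f(x) = x*A (so df_x v = v*A), g(x) = x^2*B (so dg_x v = 2*x*v*B)\n",
     "rng = np.random.default_rng(0)\n",
     "A, B = rng.standard_normal((2, 3)), rng.standard_normal((3, 2))\n",
     "f = lambda x: x * A\n",
     "g = lambda x: x ** 2 * B\n",
     "x, v, h = 1.5, 1.0, 1e-6\n",
     "# central difference of x -> f(x) @ g(x) along the direction v\n",
     "lhs = (f(x + h * v) @ g(x + h * v) - f(x - h * v) @ g(x - h * v)) / (2 * h)\n",
     "rhs = (v * A) @ g(x) + f(x) @ (2 * x * v * B)\n",
     "assert np.allclose(lhs, rhs, atol=1e-4)\n",
     "```\n",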
    "\n",
    "These rules suffice to compute differentials, gradients and Hessians of many functions without ever having to get into an \"index battle\" by looking at the components $A_{ij}$ of a matrix $A$ etc.\n",
    "\n",
    "As an example let us derive the formula for the gradient of the function $A\\mapsto \\text{tr } AB$, where $A,B$ are suitable matrices (this formula was presented in class at the end of lecture 2). Clearly this is a linear map. <br>\n",
    "The standard inner product on $\\mathbb{R}^{m\\times n}$ is given by $\\langle x,y\\rangle := \\text{tr } x^Ty$ (check that this is indeed an inner product and that for $m=1$ we recover the canonical inner product on $\\mathbb{R}^m = \\mathbb{R}^{m\\times 1}$).  \n",
    "Therefore for any tangent vector (tangent matrix?) $v$ ($v$ is a matrix of the same shape as $A$) we get\n",
    "$$ {\\rm d}_A(\\text{tr } AB)v = \\text{tr } vB = \\text{tr } Bv = \\langle B^T,v\\rangle,$$\n",
    "i.e. $\\nabla_A(\\text{tr } AB) = B^T$.\n",
    "\n",
    "In the lecture we were also presented with a formula for the gradient of $A\\mapsto \\text{tr } AA^TC$. Using rules 2 and 3 above this can be done as follows (we use that $\\text{tr } X^T = \\text{tr } X$ and $\\text{tr } XY = \\text{tr } YX$):\n",
    "$$\\begin{align*}\n",
    "{\\rm d}_A(\\text{tr } AA^TC) v &= {\\rm d}_A \\langle A^T, A^TC\\rangle v \\\\\n",
    "&= \\langle v^T, A^TC\\rangle + \\langle A^T, v^TC\\rangle \\\\\n",
    "& = \\text{tr } vA^TC + \\text{tr } Av^TC\\\\\n",
    "&= \\text{tr } A^TCv +\\text{tr } v^TCA\\\\\n",
    "&= \\text{tr } A^TCv +\\text{tr } A^TC^Tv\\\\\n",
    "&= \\langle C^TA, v\\rangle + \\langle CA,v\\rangle\\\\\n",
    "&= \\langle C^TA +CA, v\\rangle.\n",
    "\\end{align*}$$\n",
    "Therefore $\\nabla_A \\text{tr } AA^TC = C^TA+CA$.\n",
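     "\n",
     "Both trace gradients can be verified with entrywise finite differences (a numerical sketch on random matrices, not part of the original argument):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "def grad_fd(f, A, h=1e-6):\n",
     "    # entrywise central differences of a scalar function of a matrix\n",
     "    G = np.zeros_like(A)\n",
     "    for i in range(A.shape[0]):\n",
     "        for j in range(A.shape[1]):\n",
     "            E = np.zeros_like(A)\n",
     "            E[i, j] = h\n",
     "            G[i, j] = (f(A + E) - f(A - E)) / (2 * h)\n",
     "    return G\n",
     "\n",
     "rng = np.random.default_rng(1)\n",
     "A = rng.standard_normal((3, 3))\n",
     "B = rng.standard_normal((3, 3))\n",
     "C = rng.standard_normal((3, 3))\n",
     "assert np.allclose(grad_fd(lambda A: np.trace(A @ B), A), B.T, atol=1e-4)\n",
     "assert np.allclose(grad_fd(lambda A: np.trace(A @ A.T @ C), A),\n",
     "                   C.T @ A + C @ A, atol=1e-4)\n",
     "```\n",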
    "\n",
    "## Solutions to the actual problems:\n",
    "### (a) \n",
    "First compute the differential (applied to a tangent vector $v$) using the product rule:\n",
    "$${\\rm d}_x\\left(\\frac {1}{2} x^TAx+b^Tx\\right)v =\\frac {1}{2}( v^TAx + x^TAv) + b^Tv.$$\n",
     "Because $v^TAx$ is a real number, it is equal to its own transpose: \n",
    "$$v^TAx = (v^TAx)^T = x^TA^Tv = x^TAv,$$\n",
    "where the last equality used that $A=A^T$ is symmetric.\n",
    "Therefore we can further simplify:\n",
    "$${\\rm d}_x\\left(\\frac {1}{2} x^TAx+b^Tx\\right)v = \\frac {1}{2}(x^TAv + x^TAv) + b^Tv = (x^TA+b^T)v.$$\n",
    "This proves ${\\rm d}_x (f(x))= x^TA+ b^T$. Taking the transpose we get the gradient:\n",
    "$$\\nabla f(x) = (x^TA+ b^T)^T = A^Tx + b = Ax+ b.$$\n",
    "\n",
    "### (b)\n",
    "Compute the Jacobian\n",
     "$${\\rm d}_x(g(h(x))) = ({\\rm d}_{h(x)}g)\\, {\\rm d}_x h = g'(h(x))\\, {\\rm d}_x h(x).$$\n",
     "Take the transpose to get the gradient ($g'(h(x))\\in \\mathbb{R}$ is a scalar):\n",
    "$$\\nabla f(x) = (g'(h(x)) {\\rm d}_x h(x))^T = g'(h(x)) \\nabla h(x).$$\n",
    "\n",
    "### (c)\n",
    "The hessian is the Jacobian of the gradient, which we computed in (a):\n",
    "$$ \\nabla^2 f(x) = {\\rm d}_x (Ax+b) = A.$$\n",
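     "\n",
     "Parts (a) and (c) can be confirmed numerically on random data (a sketch, not part of the original solution):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "rng = np.random.default_rng(2)\n",
     "M = rng.standard_normal((4, 4))\n",
     "A = (M + M.T) / 2                      # symmetric, as assumed in (a)\n",
     "b = rng.standard_normal(4)\n",
     "x = rng.standard_normal(4)\n",
     "f = lambda x: 0.5 * x @ A @ x + b @ x\n",
     "\n",
     "h = 1e-4\n",
     "grad_fd = np.array([(f(x + h * e) - f(x - h * e)) / (2 * h) for e in np.eye(4)])\n",
     "assert np.allclose(grad_fd, A @ x + b, atol=1e-4)   # (a): grad f(x) = Ax + b\n",
     "\n",
     "# (c): the Hessian is the (constant) Jacobian of the gradient map x -> Ax + b\n",
     "hess_fd = np.array([[(f(x + h*ei + h*ej) - f(x + h*ei - h*ej)\n",
     "                      - f(x - h*ei + h*ej) + f(x - h*ei - h*ej)) / (4 * h * h)\n",
     "                     for ej in np.eye(4)] for ei in np.eye(4)])\n",
     "assert np.allclose(hess_fd, A, atol=1e-3)\n",
     "```\n",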
    "\n",
    "### (d)\n",
     "$${\\rm d}_x(g(a^Tx)) = g'(a^Tx)a^T \\Longrightarrow \\nabla f(x) = ag'(a^Tx).$$\n",
    "$$ \\nabla^2f(x) = {\\rm d}_x (ag'(a^Tx)) = ag''(a^Tx)a^T = aa^T g''(a^Tx). $$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 2. Positive definite matrices\n",
    "\n",
    "### (a)\n",
    "Clearly $A^T = (zz^T)^T = zz^T = A$.\n",
    "\n",
    "For any vector $x\\in \\mathbb R^n$ we have $x^Tz = z^Tx\\in \\mathbb R$. Therefore\n",
    "$$ x^TAx = x^Tzz^Tx = (z^Tx)^2 \\geq 0,$$\n",
    "hence $A$ is PSD.\n",
    "\n",
    "### (b)\n",
    "Every vector $x\\in \\mathbb R^n$ can be written in the form $x= x_1 +x_2$, where $x_1 = \\lambda z \\in \\mathbb R z$ is a multiple of $z$ and $x_2 \\in(\\mathbb R z)^\\perp$ is orthogonal to $z$ (i.e. $z^Tx_2 = 0$).\n",
    "Therefore \n",
    "$$ Ax = Ax_1 +Ax_2 = zz^T\\lambda z + zz^Tx_2 = \\lambda (z^Tz) z \\in \\mathbb R z.$$\n",
     "Because $z^Tz >0$, we conclude that $x$ is in the null space iff $\\lambda = 0$, i.e. $\\ker A = (\\mathbb R z)^\\perp$, and that the image of $A$ is the one-dimensional space $\\mathbb R z$, i.e. $A$ has rank one.\n",
    "\n",
    "**Remark**: Another way to see that $A$ has at most rank one is the observation that $A$ represents a composition $\\mathbb R^n\\to \\mathbb R \\to \\mathbb R^n$ of linear maps $\\mathbb R^n\\to \\mathbb R, x \\mapsto z^Tx$ and $\\mathbb R\\to \\mathbb R^n, \\lambda\\mapsto z\\lambda = \\lambda z$. Because the middle vector space $\\mathbb R$ has dimension one, the rank of $A$ is at most one.\n",
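     "\n",
     "These claims are easy to confirm numerically (a sketch on random data, not part of the original solution):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "rng = np.random.default_rng(3)\n",
     "z = rng.standard_normal(5)\n",
     "A = np.outer(z, z)                            # A = z z^T\n",
     "assert np.allclose(A, A.T)                    # symmetric\n",
     "assert np.linalg.eigvalsh(A).min() >= -1e-10  # PSD: no negative eigenvalues\n",
     "assert np.linalg.matrix_rank(A) == 1          # rank one\n",
     "x2 = rng.standard_normal(5)\n",
     "x2 -= (z @ x2) / (z @ z) * z                  # project onto the complement of z\n",
     "assert np.allclose(A @ x2, 0)                 # the complement of z lies in ker A\n",
     "```\n",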
    "\n",
    "### (c)\n",
    "The matrix is PSD:\n",
    "\n",
     "*Symmetry*: $(BAB^T)^T = BA^TB^T = BAB^T$, because $A$ is PSD and in particular symmetric.  \n",
     "*Positivity*: Let $x\\in \\mathbb R^n$. For $y = B^Tx$ we have\n",
    "$$ x^TBAB^Tx = y^T A y \\geq 0. $$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 3. Eigenvectors, eigenvalues, and the spectral theorem\n",
    "\n",
    "### (a)\n",
    "Remember that if $e_1,\\ldots, e_n$ is the standard basis of $\\mathbb R^n$, then $Te_i = t_i$.\n",
    "Therefore \n",
     "$$ At_i = T\\Lambda T^{-1} Te_i = T \\Lambda e_i = T \\lambda_i e_i = \\lambda_i Te_i = \\lambda_i t_i.$$\n",
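     "\n",
     "A numerical illustration of (a) with a random $T$ (not part of the original solution; a random square matrix is invertible with probability one):\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "rng = np.random.default_rng(4)\n",
     "T = rng.standard_normal((3, 3))\n",
     "lam = np.array([1.0, 2.0, 3.0])\n",
     "A = T @ np.diag(lam) @ np.linalg.inv(T)\n",
     "for i in range(3):\n",
     "    # column i of T is an eigenvector of A with eigenvalue lam[i]\n",
     "    assert np.allclose(A @ T[:, i], lam[i] * T[:, i])\n",
     "```\n",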
    "\n",
    "### (b)\n",
    "Follows from (a), because $U^T=U^{-1}$.\n",
    "\n",
    "### (c)\n",
    "By the spectral theorem we can write $A = U\\Lambda U^T$ for an orthogonal matrix $U$ and a diagonal matrix $\\Lambda = \\rm{diag}(\\lambda_1,\\ldots, \\lambda_n)$.\n",
     "Applying problem 2(c) with $B = U^T$, we conclude that $\\Lambda = U^TAU$ is PSD. In particular\n",
    "$$\\lambda_i = e_i^T\\Lambda e_i \\geq 0.$$"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
