{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# ROA Estimation for the Time-Reversed Van der Pol Oscillator"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Notebook Setup\n",
    "The following cell will install Drake, check out the underactuated repository, and set up the path (only if necessary).\n",
    "- On Google's Colaboratory, this **will take approximately two minutes** the first time it runs (to provision the machine), but should only need to reinstall once every 12 hours.  Colab will ask you to \"Reset all runtimes\"; say no to save yourself the reinstall.\n",
    "- On Binder, the machines should already be provisioned by the time you can run this; it should return (almost) instantly.\n",
    "\n",
    "More details are available [here](http://underactuated.mit.edu/drake.html)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "try:\n",
    "    import pydrake\n",
    "    import underactuated\n",
    "except ImportError:\n",
    "    !curl -s https://raw.githubusercontent.com/RussTedrake/underactuated/master/scripts/setup/jupyter_setup.py > jupyter_setup.py\n",
    "    from jupyter_setup import setup_underactuated\n",
    "    setup_underactuated()\n",
    "\n",
    "# Setup matplotlib.\n",
    "from IPython import get_ipython\n",
    "if get_ipython() is not None: get_ipython().run_line_magic(\"matplotlib\", \"inline\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# python libraries\n",
    "import numpy as np\n",
    "from matplotlib import pyplot as plt\n",
    "\n",
    "# pydrake imports\n",
    "from pydrake.all import (LinearQuadraticRegulator, MathematicalProgram, Variables,\n",
    "                         Solve, RealContinuousLyapunovEquation)\n",
    "from pydrake.examples.van_der_pol import VanDerPolOscillator\n",
    "\n",
    "# underactuated imports\n",
    "from underactuated import plot_2d_phase_portrait\n",
    "\n",
    "# increase the default size of matplotlib figures\n",
    "from matplotlib import rcParams\n",
    "rcParams['figure.figsize'] = (6, 6)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Problem Description\n",
    "In this notebook we will study the time-reversed Van der Pol oscillator.\n",
    "The equations of motion for this system are polynomial, and read as follows:\n",
    "\\begin{align}\n",
    "\\dot x_1 &= - x_2, \\\\\n",
    "\\dot x_2 &= x_1 + (x_1^2 - 1) x_2.\n",
    "\\end{align}\n",
    "We compactly represent the latter as $\\dot{\\mathbf{x}} = f(\\mathbf{x})$, with $\\mathbf{x} = [x_1, x_2]^T$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# function that implements the time-reversed Van der Pol dynamics\n",
    "f = lambda x: [- x[1], x[0] + (x[0]**2 - 1) * x[1]]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Here is the phase portrait of the time-reversed Van der Pol oscillator.\n",
    "\n",
    "As you can see, the origin of this system is locally asymptotically stable whereas, outside the Region Of Attraction (ROA) of the origin, the trajectories escape to infinity.\n",
    "The boundary of the ROA is an *unstable periodic orbit*:\n",
    "if the system state at time $t=0$ is exactly on this curve, the oscillator will orbit around the origin forever.\n",
    "However, any disturbance will make the system either converge to the origin or escape to infinity.\n",
    "Notice that the shape of this ROA is nontrivial (it is not even a convex set) and no analytic description of it is available.\n",
    "\n",
    "**Note:**\n",
    "Reversing the sign of $f$, we obtain the [classical Van der Pol oscillator](https://en.wikipedia.org/wiki/Van_der_Pol_oscillator), for which the above periodic orbit is a (globally) asymptotically stable [limit cycle](http://underactuated.mit.edu/simple_legs.html#section1) and the origin is an unstable equilibrium.\n",
    "Here we reverse time (i.e. change the sign of $f$) to make the origin a stable equilibrium."
   ]
  },
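  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make this concrete, here is a minimal simulation sketch (plain forward-Euler integration; the initial conditions are chosen just for illustration): a state starting near the origin converges toward it, while one starting well outside the periodic orbit grows rapidly.\n",
    "```python\n",
    "# numpy and f are already defined above; repeated here so the sketch is self-contained\n",
    "import numpy as np\n",
    "f = lambda x: [- x[1], x[0] + (x[0]**2 - 1) * x[1]]\n",
    "\n",
    "def euler_simulate(x0, dt=1e-3, steps=5000):\n",
    "    # integrate xdot = f(x) with forward Euler\n",
    "    x = np.array(x0, dtype=float)\n",
    "    for _ in range(steps):\n",
    "        x = x + dt * np.array(f(x))\n",
    "    return x\n",
    "\n",
    "# inside the ROA: the state shrinks toward the origin\n",
    "print(euler_simulate([0.5, 0.5]))\n",
    "\n",
    "# outside the ROA: the state grows rapidly\n",
    "print(euler_simulate([3.0, 3.0], steps=200))\n",
    "```"
   ]
  },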
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# compute and plot the unstable periodic orbit\n",
    "limit_cycle = VanDerPolOscillator.CalcLimitCycle()\n",
    "plt.plot(limit_cycle[0], limit_cycle[1], color='b', linewidth=3, label='ROA boundary')\n",
    "plt.legend(loc=1)\n",
    "\n",
    "# plot the phase portrait\n",
    "xlim = (-3, 3)\n",
    "plot_2d_phase_portrait(f, x1lim=xlim, x2lim=xlim)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this notebook we will use Sums-Of-Squares (SOS) optimization to find an inner approximation of the ROA of the equilibrium point at the origin.\n",
    "We will write three different SOS optimizations, and we will analyze their pros and cons."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Lyapunov Function via Linearization\n",
    "The approach we will follow to estimate the ROA of the oscillator is the following:\n",
    "\n",
    "- We linearize the dynamics $\\dot{\\mathbf{x}} = f(\\mathbf{x})$ around the origin, to get $\\dot{\\mathbf{x}} = A \\mathbf{x}$.\n",
    "\n",
    "- We solve the Lyapunov equation $$A^T P + P A = - Q$$ to get the matrix $P$, and the Lyapunov function $V(\\mathbf{x}) = \\mathbf{x}^T P \\mathbf{x}$ for the linearized system.\n",
    "\n",
    "- Lyapunov theory tells us that if $A$ is strictly stable (all its eigenvalues have strictly negative real part) then the origin is a locally asymptotically stable equilibrium point for the nonlinear system $\\dot{\\mathbf{x}} = f(\\mathbf{x})$.\n",
    "Moreover, a conservative approximation of the ROA can be obtained using the Lyapunov function $V(\\mathbf{x})$ derived in the linear analysis.\n",
    "\n",
    "- We consider the level sets $$L(\\rho) = \\{ \\mathbf{x} : V(\\mathbf{x}) \\leq \\rho \\},$$ and we look for the maximum value of $\\rho$ such that $$\\dot{V}(\\mathbf{x}) = \\frac{\\partial V}{\\partial \\mathbf{x}} f(\\mathbf{x}) = 2 \\mathbf{x}^T P f(\\mathbf{x}) < 0, \\quad \\forall \\mathbf{x} \\in L(\\rho)\\backslash \\{0\\}.$$\n",
    "In words, we try to find the largest level set $L(\\rho)$ entirely contained in the region of state space where $\\dot{V}(\\mathbf{x})$ is negative.\n",
    "Lyapunov theory tells us that any trajectory that starts inside such a set will eventually converge to the origin.\n",
    "\n",
    "We start by deriving $V(\\mathbf{x})$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# linear approximation A x of f(x)\n",
    "# for x sufficiently close to the origin\n",
    "# (if you don't see this immediately, do the math!)\n",
    "A = np.array([[0, -1], [1, -1]])\n",
    "\n",
    "# rhs of the Lyapunov equation (standard choice)\n",
    "Q = np.eye(2)\n",
    "\n",
    "# positive definite matrix of the Lyapunov function\n",
    "P = RealContinuousLyapunovEquation(A, Q)"
   ]
  },
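  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check (optional, purely illustrative): $A$ should be Hurwitz, $P$ should solve the Lyapunov equation, and $P$ should be positive definite. A sketch, using the `A`, `P`, and `Q` computed above:\n",
    "```python\n",
    "print(np.linalg.eigvals(A))   # real parts should both be -0.5\n",
    "print(A.T @ P + P @ A + Q)    # residual should be numerically zero\n",
    "print(np.linalg.eigvals(P))   # should both be strictly positive\n",
    "```"
   ]
  },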
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Manual Check\n",
    "The advantage of working in 2D is that we can plot things!\n",
    "Before starting with complicated optimizations, let us plot $V(\\mathbf{x})$ and $\\dot{V}(\\mathbf{x})$ to get a sense of what we are actually looking for in this analysis."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# function that given rho plots the boundary\n",
    "# of the set L(rho) defined above\n",
    "def plot_V(rho):\n",
    "    \n",
    "    # grid of the state space\n",
    "    x1 = np.linspace(*xlim)\n",
    "    x2 = np.linspace(*xlim)\n",
    "    X1, X2 = np.meshgrid(x1, x2)\n",
    "    \n",
    "    # function that evaluates V(x) at a given x\n",
    "    # (looks bad, but it must accept meshgrids)\n",
    "    eval_V = lambda x: sum(sum(x[i]*x[j]*Pij for j, Pij in enumerate(Pi)) for i, Pi in enumerate(P))\n",
    "    \n",
    "    # contour plot with only the rho level set\n",
    "    cs = plt.contour(X1, X2, eval_V([X1, X2]), levels=[rho], colors='r', linewidths=3, zorder=3)\n",
    "    \n",
    "    # misc plot settings\n",
    "    plt.xlabel(r'$x_1$')\n",
    "    plt.ylabel(r'$x_2$')\n",
    "    plt.gca().set_aspect('equal')\n",
    "    \n",
    "    # fake plot for legend\n",
    "    plt.plot(0, 0, color='r', linewidth=3, label=r'$\\{ \\mathbf{x} : V(\\mathbf{x}) = \\rho \\}$')\n",
    "    plt.legend()\n",
    "    \n",
    "    return cs\n",
    "    \n",
    "# function that plots the level sets of Vdot(x)\n",
    "def plot_Vdot():\n",
    "    \n",
    "    # grid of the state space\n",
    "    x1 = np.linspace(*xlim)\n",
    "    x2 = np.linspace(*xlim)\n",
    "    X1, X2 = np.meshgrid(x1, x2)\n",
    "    \n",
    "    # function that evaluates Vdot(x) at a given x\n",
    "    eval_Vdot = lambda x: 2*sum(sum(x[i]*f(x)[j]*Pij for j, Pij in enumerate(Pi)) for i, Pi in enumerate(P))\n",
    "    \n",
    "    # contour plot with only the rho level set\n",
    "    cs = plt.contour(X1, X2, eval_Vdot([X1, X2]), colors='b', levels=np.linspace(-10, 40, 11))\n",
    "    plt.gca().clabel(cs, inline=1, fontsize=10)\n",
    "    \n",
    "    # misc plot settings\n",
    "    plt.xlabel(r'$x_1$')\n",
    "    plt.ylabel(r'$x_2$')\n",
    "    plt.gca().set_aspect('equal')\n",
    "    \n",
    "    # fake plot for legend\n",
    "    plt.plot(0, 0, color='b', label=r'$\\dot{V}(\\mathbf{x})$')\n",
    "    plt.legend()\n",
    "    \n",
    "    return cs"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "By playing with the code below, we see that the largest $\\rho$ we can find is $\\approx 2.3$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# tune rho by hand to make it as big as possible\n",
    "# while staying in the region where Vdot(x) is negative\n",
    "rho_max = 2.3\n",
    "\n",
    "# plot Vdot(x) and V(x) = rho\n",
    "Vdot_cs = plot_Vdot()\n",
    "V_cs = plot_V(rho_max)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Superimposing the level set on the phase portrait, we see that this is a pretty good approximation of the ROA."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plot_2d_phase_portrait(f, x1lim=xlim, x2lim=xlim)\n",
    "plot_V(rho_max)\n",
    "plt.plot(limit_cycle[0], limit_cycle[1], color='b', linewidth=3, label='ROA boundary')\n",
    "plt.legend(loc=1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Of course, when $\\text{dim}(\\mathbf{x}) > 2$, a \"manual\" maximization of $\\rho$ has no hope of working.\n",
    "That's where SOS programming really makes the difference!\n",
    "The goal of this notebook is to experiment with these tools on a case where things can actually be visualized, so that we get a better sense of the power of this technique."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Method 1: Line-Search on $\\rho$\n",
    "The first method we use to estimate the ROA is the one from the textbook example \"[Region of attraction for the one-dimensional cubic system](http://underactuated.mit.edu/lyapunov.html#roa_cubic_system)\".\n",
    "We look for the largest $\\rho$ for which there exists a SOS polynomial $\\lambda(\\mathbf{x})$ such that $$- \\dot{V}(\\mathbf{x}) - \\lambda(\\mathbf{x}) (\\rho - V(\\mathbf{x})) - \\epsilon \\mathbf{x}^T \\mathbf{x} \\ \\text{is SOS},$$\n",
    "with $\\epsilon$ very small.\n",
    "This problem cannot be written as a single SOS program, since both the polynomial $\\lambda$ and scalar $\\rho$ are decision variables, and here they multiply.\n",
    "Hence we naively solve a sequence of SOS programs with increasing value of $\\rho$.\n",
    "\n",
    "The intuition behind this formulation is the following.\n",
    "Think of the condition \"is SOS\" as \"$\\geq 0$\" (actually, the first is sufficient for the second).\n",
    "Then what we are asking is $- \\dot{V}(\\mathbf{x}) \\geq \\lambda(\\mathbf{x}) (\\rho - V(\\mathbf{x})) + \\epsilon \\mathbf{x}^T \\mathbf{x}$.\n",
    "Inside the level set $L(\\rho)$, we have $\\rho - V(\\mathbf{x}) \\geq 0$ and, since $\\lambda(\\mathbf{x})$ is SOS, $\\lambda(\\mathbf{x}) (\\rho - V(\\mathbf{x})) \\geq 0$.\n",
    "Thus the condition above is just saying that, for all $\\mathbf{x}$ in $L(\\rho)$, we must have $- \\dot{V}(\\mathbf{x})\\geq \\epsilon \\mathbf{x}^T \\mathbf{x}$, i.e., $\\dot{V}(\\mathbf{x})$ negative definite."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# function that verifies the condition described above\n",
    "# for the level set L(rho) for a given rho\n",
    "def is_verified(rho):\n",
    "    \n",
    "    # initialize optimization problem\n",
    "    # (with Drake there is no need to specify that\n",
    "    # this is going to be a SOS program!)\n",
    "    prog = MathematicalProgram()\n",
    "    \n",
    "    # SOS indeterminates\n",
    "    x = prog.NewIndeterminates(2, 'x')\n",
    "    \n",
    "    # Lyapunov function\n",
    "    V = x.dot(P).dot(x)\n",
    "    V_dot = 2*x.dot(P).dot(f(x))\n",
    "    \n",
    "    # degree of the polynomial lambda(x)\n",
    "    # no need to change it, but if you really want to,\n",
    "    # keep l_deg even (why?) and do not set l_deg greater than 10\n",
    "    # (otherwise optimizations will take forever)\n",
    "    l_deg = 4\n",
    "    assert l_deg % 2 == 0\n",
    "\n",
    "    # SOS Lagrange multipliers\n",
    "    l = prog.NewSosPolynomial(Variables(x), l_deg)[0].ToExpression()\n",
    "    \n",
    "    # main condition above\n",
    "    eps = 1e-3 # do not change\n",
    "    prog.AddSosConstraint(- V_dot - l * (rho - V) - eps*x.dot(x))\n",
    "    \n",
    "    # solve SOS program\n",
    "    # no objective function in this formulation\n",
    "    result = Solve(prog)\n",
    "    \n",
    "    # return True if feasible, False if infeasible\n",
    "    return result.is_success()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that we have the building block of our algorithm, it's your turn to write the line search to find the maximum $\\rho$ for which the condition described above holds.\n",
    "\n",
    "Implement the line search in the next cell.\n",
    "Start with `rho = 0`, and check whether the level set $L(\\rho)$ is verified (i.e. the function `is_verified(rho)` returns `True`); if yes, increase `rho` by `rho_step = .01`; if not, assign to the variable `rho_method_1` the maximum verified `rho` you've found with this procedure."
   ]
  },
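  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you get stuck, here is one possible way to structure the search (a sketch, not the only valid implementation; try it yourself first!):\n",
    "```python\n",
    "# keep increasing rho while the next step is still verified\n",
    "rho = 0\n",
    "rho_step = .01\n",
    "while is_verified(rho + rho_step):\n",
    "    rho += rho_step\n",
    "rho_method_1 = rho\n",
    "```"
   ]
  },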
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# line-search parameters\n",
    "rho = 0 # do not modify\n",
    "rho_step = .01 # do not modify\n",
    "\n",
    "# implement your line-search here\n",
    "# modify here\n",
    "\n",
    "# set the maximum value of rho you've found with line search\n",
    "rho_method_1 = 0 # modify here\n",
    "    \n",
    "# print maximum rho\n",
    "print(f'Method 1 verified rho = {rho_method_1}.')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Did this method do a good job in approximating (from below) the maximum $\\rho$ we have found by hand?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Method 2: Single-Shot SOS Program\n",
    "With the previous formulation we had to solve a sequence of SOS programs; now we consider an equivalent formulation of the SOS problem in which we can directly maximize $\\rho$.\n",
    "In the previous case we wrote a SOS program to check the implication\n",
    "$$\\mathbf{x} \\in L(\\rho) \\Rightarrow \\dot{V}(\\mathbf{x}) < 0.$$\n",
    "This, however, can be equivalently stated as\n",
    "$$\\dot{V}(\\mathbf{x}) \\geq 0 \\Rightarrow \\mathbf{x} \\not\\in L(\\rho).$$\n",
    "Expressing $\\mathbf{x} \\not\\in L(\\rho)$ as $V(\\mathbf{x}) - \\rho \\geq 0$, it turns out that the latter condition can be verified with a single SOS program.\n",
    "(Actually, we should say $V(\\mathbf{x}) - \\rho > 0$, but working with a computer there is no difference.)\n",
    "\n",
    "The new problem reads as follows.\n",
    "Find an SOS polynomial $\\lambda(\\mathbf{x})$ such that\n",
    "$$V(\\mathbf{x}) - \\rho - \\lambda(\\mathbf{x}) \\dot{V}(\\mathbf{x}) \\ \\text{is SOS}.$$\n",
    "In fact, this implies $V(\\mathbf{x}) - \\rho \\geq \\lambda(\\mathbf{x}) \\dot{V}(\\mathbf{x})$ and, for all $\\mathbf{x}$ where $\\dot{V}(\\mathbf{x}) \\geq 0$, we get $V(\\mathbf{x}) - \\rho \\geq 0$.\n",
    "\n",
    "Notice that now $\\lambda$ does not multiply $\\rho$, and we can search over both of them at the same time.\n",
    "Hence we can ask the optimizer to maximize $\\rho$.\n",
    "There is however an issue with the current problem formulation...\n",
    "\n",
    "### Not quite there yet...\n",
    "\n",
    "Do you see anything wrong with the problem formulation we put together so far? What do you think the maximum $\\rho$ will be?\n",
    "\n",
    "As stated so far, the problem will always return $\\rho = 0$!\n",
    "To see why, first notice that for $\\rho = \\lambda = 0$ the SOS condition above would become \"$V(\\mathbf{x})$ is SOS\", which holds since $V(\\mathbf{x}) = \\mathbf{x}^T P \\mathbf{x}$.\n",
    "Now consider a positive $\\rho$.\n",
    "Since $V(0) = \\dot{V} (0) = 0$, evaluating the SOS condition at the origin, we would get $-\\rho \\geq 0$, which can never hold!\n",
    "\n",
    "Not everything is lost: we have a neat fix for you.\n",
    "Certainly, if $V(\\mathbf{x}) - \\rho$ is nonnegative, so is $\\mathbf{x}^T\\mathbf{x}(V(\\mathbf{x}) - \\rho)$.\n",
    "Hence we consider the SOS condition\n",
    "$$\\mathbf{x}^T\\mathbf{x}(V(\\mathbf{x}) - \\rho) - \\lambda(\\mathbf{x}) \\dot{V}(\\mathbf{x}) \\ \\text{is SOS},$$\n",
    "with $\\lambda(\\mathbf{x})$ SOS.\n",
    "Now the issue at the origin is fixed, since for $\\mathbf{x} = 0$ we get \"$0$ is SOS\", which is always true.\n",
    "Moreover, where $\\mathbf{x}$ is such that $\\dot{V}(\\mathbf{x}) \\geq 0$, the new SOS condition requires $\\mathbf{x}^T\\mathbf{x}(V(\\mathbf{x}) - \\rho) \\geq 0$ and hence $V(\\mathbf{x}) - \\rho \\geq 0$ as desired.\n",
    "\n",
    "Now we are good to go!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the next cell you need to code the SOS program we just described.\n",
    "We have already set up the problem for you.\n",
    "You only have to write two lines of code:\n",
    "\n",
    "- A line where you add the SOS constraint described in the \"Not quite there yet...\" subsection above.\n",
    "To do this, use the method `AddSosConstraint` of `MathematicalProgram` (same method we've used in the previous case).\n",
    "\n",
    "- A line where you set the objective function of the SOS program.\n",
    "Remember that we'd like to maximize `rho`.\n",
    "To this end, use the method `AddLinearCost` of `MathematicalProgram`, but notice that writing `prog.AddLinearCost(rho)` the variable `rho` will be *minimized*.\n",
    "Any idea for a quick workaround?\n",
    "Hint: it shouldn't take more than one character!"
   ]
  },
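  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you get stuck, the two missing lines could look like this (a sketch of one possible answer; try it yourself first!):\n",
    "```python\n",
    "# SOS condition from the \"Not quite there yet...\" section\n",
    "prog2.AddSosConstraint(x.dot(x) * (V - rho) - l * V_dot)\n",
    "\n",
    "# maximize rho: AddLinearCost minimizes, so flip the sign\n",
    "prog2.AddLinearCost(-rho)\n",
    "```"
   ]
  },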
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# initialize optimization problem\n",
    "prog2 = MathematicalProgram()\n",
    "\n",
    "# SOS indeterminates\n",
    "x = prog2.NewIndeterminates(2, 'x')\n",
    "\n",
    "# Lyapunov function\n",
    "V = x.dot(P).dot(x)\n",
    "V_dot = 2*x.dot(P).dot(f(x))\n",
    "\n",
    "# degree of the polynomial lambda(x)\n",
    "# no need to change it, but if you really want to,\n",
    "# keep l_deg even and do not set l_deg greater than 10\n",
    "l_deg = 4\n",
    "assert l_deg % 2 == 0\n",
    "\n",
    "# SOS Lagrange multipliers\n",
    "l = prog2.NewSosPolynomial(Variables(x), l_deg)[0].ToExpression()\n",
    "\n",
    "# level set as optimization variable\n",
    "rho = prog2.NewContinuousVariables(1, 'rho')[0]\n",
    "\n",
    "# write here the SOS condition described in the \"Not quite there yet...\" section above\n",
    "# modify here\n",
    "\n",
    "# insert here the objective function (maximize rho)\n",
    "# modify here\n",
    "\n",
    "# solve program only if the lines above are filled\n",
    "if len(prog2.GetAllConstraints()) != 0:\n",
    "\n",
    "    # solve SOS program\n",
    "    result = Solve(prog2)\n",
    "\n",
    "    # get maximum rho\n",
    "    assert result.is_success()\n",
    "    rho_method_2 = result.GetSolution(rho)\n",
    "\n",
    "    # print maximum rho\n",
    "    print(f'Method 2 verified rho = {rho_method_2}.')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Method 3: Smarter Single-Shot SOS Program\n",
    "The SOS program we just wrote was already a satisfactory solution, but it turns out we can do even better!\n",
    "From the textbook chapter \"[Lyapunov analysis with convex optimization](http://underactuated.mit.edu/lyapunov.html#section2)\", you know that every SOS constraint brings with it a lot of optimization variables and an SDP constraint.\n",
    "So, whenever we can, removing redundant SOS requirements is always a good thing to do.\n",
    "\n",
    "We claim that in the previous formulation we don't need $\\lambda(\\mathbf{x})$ to be SOS. How is this possible?\n",
    "\n",
    "We start by noticing that, in a neighborhood of the origin, excluding the origin itself, $\\dot{V}(\\mathbf{x})$ is strictly negative.\n",
    "(This is because $V(\\mathbf{x})$ is a Lyapunov function for the linearized system and hence, locally, it works also for the nonlinear system.)\n",
    "\n",
    "In light of the latter observation, instead of asking that $\\dot{V}(\\mathbf{x})$ is negative for all $\\mathbf{x} \\neq 0$ in $L(\\rho)$, we can equivalently ask that all the points $\\mathbf{x} \\neq 0$ where $\\dot{V}(\\mathbf{x}) = 0$ must be outside the level set $L(\\rho)$.\n",
    "This might take a second to parse!\n",
    "\n",
    "The latter condition can be enforced exactly as the one above:\n",
    "$$\\mathbf{x}^T\\mathbf{x}(V(\\mathbf{x}) - \\rho) - \\lambda(\\mathbf{x}) \\dot{V}(\\mathbf{x}) \\ \\text{is SOS},$$\n",
    "but this time we do not require $\\lambda(\\mathbf{x})$ to be SOS.\n",
    "\n",
    "Here is the reasoning.\n",
    "First, notice that this condition implies $\\mathbf{x}^T\\mathbf{x}(V(\\mathbf{x}) - \\rho) \\geq \\lambda(\\mathbf{x}) \\dot{V}(\\mathbf{x})$.\n",
    "Then, observe that for all $\\mathbf{x}$ such that $\\dot{V}(\\mathbf{x}) = 0$, we get $\\mathbf{x}^T\\mathbf{x}(V(\\mathbf{x}) - \\rho) \\geq 0$.\n",
    "This implies $V(\\mathbf{x}) - \\rho \\geq 0$, i.e., $\\mathbf{x} \\not\\in L(\\rho)$ as desired.\n",
    "(As before, no need to care about what happens at the boundary of the level set.)\n",
    "\n",
    "This trick can make a huge difference when you need to verify high-dimensional systems!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To try this new idea:\n",
    "- In the following cell, copy and paste the optimization problem you just wrote above for \"Method 2\". Attention: this time give the name `prog3` to the `MathematicalProgram` you write (important for autograding).\n",
    "\n",
    "- Change the definition of the polynomial $\\lambda$ from `l = prog.NewSosPolynomial(Variables(x), l_deg)[0].ToExpression()` to `l = prog.NewFreePolynomial(Variables(x), l_deg).ToExpression()`.\n",
    "\n",
    "- Run the new SOS program.\n",
    "\n",
    "- Define the variable `rho_method_3` to be the optimal value of `rho` for this new optimization problem.\n",
    "\n",
    "If you have done things correctly, `rho_method_3` should \"closely match\" `rho_method_2`!"
   ]
  },
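  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If you get stuck, here is a condensed sketch of what the cell below could contain (same structure as Method 2, reusing the names from the cells above; the only substantive change is the free multiplier):\n",
    "```python\n",
    "x = prog3.NewIndeterminates(2, 'x')\n",
    "V = x.dot(P).dot(x)\n",
    "V_dot = 2*x.dot(P).dot(f(x))\n",
    "\n",
    "# free (not SOS) multiplier: fewer variables, one less SDP constraint\n",
    "l = prog3.NewFreePolynomial(Variables(x), 4).ToExpression()\n",
    "\n",
    "rho = prog3.NewContinuousVariables(1, 'rho')[0]\n",
    "prog3.AddSosConstraint(x.dot(x) * (V - rho) - l * V_dot)\n",
    "prog3.AddLinearCost(-rho)\n",
    "\n",
    "result = Solve(prog3)\n",
    "rho_method_3 = result.GetSolution(rho)\n",
    "```"
   ]
  },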
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# initialize optimization problem\n",
    "prog3 = MathematicalProgram()\n",
    "\n",
    "rho_method_3 = 0 # modify here\n",
    "\n",
    "# print maximum rho\n",
    "print(f'Method 3 verified rho = {rho_method_3}.')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Final Note\n",
    "More advanced techniques to approximate ROAs are available.\n",
    "Generally they require some sort of alternation between optimizing over the multiplier $\\lambda$ (as we did here) and modifying the shape of the Lyapunov function $V(\\mathbf{x})$, e.g., by considering higher-order polynomials (here we stuck to the quadratic one coming from linear analysis).\n",
    "The level set $\\rho$ is generally kept fixed (e.g. equal to unity) since, when reshaping the Lyapunov function, the optimizer is allowed to scale the range of this function arbitrarily.\n",
    "\n",
    "Here is an image of SOS in its full glory approximating the ROA of the Van der Pol oscillator.\n",
    "Impressive, isn't it?!\n",
    "\n",
    "![figure](https://raw.githubusercontent.com/RussTedrake/underactuated/master/figures/exercises/van_der_pol_roa.png)\n",
    "(Courtesy of Shen Shen.)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Autograding\n",
    "\n",
    "You can check your work by running the following cell:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from underactuated.exercises.lyapunov.van_der_pol.test_van_der_pol import TestVanDerPol\n",
    "from underactuated.exercises.grader import Grader\n",
    "Grader.grade_output([TestVanDerPol], [locals()], 'results.json')\n",
    "Grader.print_test_results('results.json')"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}