{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Value Iteration for Minimum Time Control"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Notebook Setup \n",
    "The following cell will install Drake, checkout the underactuated repository, and set up the path (only if necessary).\n",
    "- On Google's Colaboratory, this **will take approximately two minutes** the first time it runs (to provision the machine), but should only need to reinstall once every 12 hours.  Colab will ask you to \"Reset all runtimes\"; say no to save yourself the reinstall.\n",
    "- On Binder, the machines should already be provisioned by the time you can run this; it should return (almost) instantly.\n",
    "\n",
    "More details are available [here](http://underactuated.mit.edu/drake.html)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "try:\n",
    "    import pydrake\n",
    "    import underactuated\n",
    "except ImportError:\n",
    "    !curl -s https://raw.githubusercontent.com/RussTedrake/underactuated/master/scripts/setup/jupyter_setup.py > jupyter_setup.py\n",
    "    from jupyter_setup import setup_underactuated\n",
    "    setup_underactuated()\n",
    "\n",
    "# Setup matplotlib.\n",
    "from IPython import get_ipython\n",
    "if get_ipython() is not None: get_ipython().run_line_magic(\"matplotlib\", \"inline\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# python libraries\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "import matplotlib.animation as ani\n",
    "from matplotlib import cm\n",
    "from mpl_toolkits.mplot3d import Axes3D\n",
    "from IPython.display import HTML\n",
    "\n",
    "# pydrake imports\n",
    "from pydrake.all import (LinearSystem, VectorSystem, Simulator, DynamicProgrammingOptions,\n",
    "                         FittedValueIteration, DiagramBuilder, LogOutput)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Problem Description\n",
    "In this problem you will analyze the performance of the value-iteration algorithm on the minimum-time problem for the double integrator.\n",
    "Don't worry, the value iteration algorithm is provided by Drake, and you won't have to code it!\n",
    "You will be asked to analyze the policy it produces and to understand the algorithmic reasons behind the poor performance of the closed-loop system.\n",
    "Then you will implement the closed-form controller we studied in class on your own, and compare it with the one obtained numerically.\n",
    "\n",
    "**These are the main steps of the notebook:**\n",
    "1. Construct the double integrator system.\n",
    "2. Define the objective function for the minimum time problem.\n",
    "3. Run the value-iteration algorithm.\n",
    "4. Animate the intermediate steps of the algorithm.\n",
    "5. Simulate the double integrator in closed loop with the controller from the value iteration.\n",
    "6. Write down a controller that implements the closed form solution, and test it."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Dynamics of the Double Integrator\n",
    "We start by writing a function that returns the double-integrator system.\n",
    "We write the dynamics in state-space linear form\n",
    "$$\\dot{\\mathbf{x}} = A \\mathbf{x} + B u,$$\n",
    "where $\\mathbf{x} = [q, \\dot{q}]^T$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# we write a function since we will need to call\n",
    "# this a handful of times\n",
    "def get_double_integrator():\n",
    "    A = np.array([[0, 1], [0, 0]])\n",
    "    B = np.array([[0], [1]])\n",
    "    C = np.eye(2)\n",
    "    D = np.zeros((2, 1))\n",
    "    return LinearSystem(A, B, C, D)"
   ]
  },
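  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (not needed for the rest of the notebook, where Drake does the integration), the matrices above encode $\\ddot{q} = u$: starting from rest with full throttle $u = 1$, we expect $\\dot{q}(t) = t$ and $q(t) = t^2/2$. A minimal standalone sketch with forward-Euler integration in pure `numpy`:\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# same A and B as in get_double_integrator above\n",
    "A = np.array([[0., 1.], [0., 0.]])\n",
    "B = np.array([[0.], [1.]])\n",
    "\n",
    "# integrate xdot = A x + B u with forward Euler\n",
    "dt, u = 1e-4, 1.\n",
    "x = np.zeros((2, 1))\n",
    "for _ in range(int(1. / dt)):\n",
    "    x = x + dt * (A @ x + B * u)\n",
    "\n",
    "# after 1 second of full throttle from rest:\n",
    "# qdot = 1 exactly, q = 0.5 up to the O(dt) Euler error\n",
    "assert abs(x[1, 0] - 1.) < 1e-6\n",
    "assert abs(x[0, 0] - .5) < 1e-3\n",
    "```"
   ]
  },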
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Integrand of the Cost Function\n",
    "Remember that the minimum-time objective can be written in integral form\n",
    "$$\\int_{0}^{\\infty} \\ell(\\mathbf{x}) dt,$$\n",
    "by defining\n",
    "$$\\ell(\\mathbf{x}) = \\begin{cases} 0 & \\text{if} \\quad \\mathbf{x} =0,\\\\ 1 & \\text{otherwise}. \\end{cases}$$\n",
    "(See also [the example from the textbook](http://underactuated.csail.mit.edu/dp.html#example2).)\n",
    "In the following cell we approximate this with the `numpy` function `isclose`, to be able to handle small numerical errors."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# approximation of the indicator function for the origin\n",
    "def cost_function(context):\n",
    "    \n",
    "    # extract the state from the context of the plant\n",
    "    x = context.get_continuous_state_vector().CopyToVector()\n",
    "    \n",
    "    return 0 if np.isclose(np.linalg.norm(x), 0.) else 1"
   ]
  },
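  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before wiring this into Drake, we can check the indicator logic in isolation. The following is a standalone sketch mirroring `cost_function` above, with the Drake context replaced by a plain state vector (`indicator` is a hypothetical helper, used only here):\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# indicator of the origin, with np.isclose absorbing\n",
    "# small numerical errors (default atol is 1e-8)\n",
    "def indicator(x):\n",
    "    return 0 if np.isclose(np.linalg.norm(x), 0.) else 1\n",
    "\n",
    "assert indicator([0., 0.]) == 0\n",
    "assert indicator([1e-9, 0.]) == 0 # a tiny numerical error still counts as the origin\n",
    "assert indicator([.5, 0.]) == 1\n",
    "```"
   ]
  },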
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Value Iteration Algorithm\n",
    "The value iteration is implemented in the Drake function `FittedValueIteration`. Take some time to have a look at [its documentation](https://drake.mit.edu/doxygen_cxx/namespacedrake_1_1systems_1_1controllers.html#a6c8d5619eb8dc7a8e280eb123deac6fe), and to go through the description of this algorithm in [Section \\\"Representing the cost-to-go on a mesh\\\" in the textbook](http://underactuated.csail.mit.edu/dp.html#section3).\n",
    "Before using it, we need to construct an appropriate discretization of the state and input space.\n",
    "\n",
    "**Important:** This code will still work if you change the input limits to be different from $u_{\\text{min}} = -1$ and $u_{\\text{max}} = 1$.\n",
    "However, be aware that the closed-form solution we derived in class (and that you'll have to implement at the end of this notebook) is assuming that!\n",
    "It's not hard to generalize the closed-form solution to the case with generic bounds $u_{\\text{min}}$ and $u_{\\text{max}}$.\n",
    "But if you don't want to do that, do not change `mesh['u_lim']` below!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# discretization mesh of state space, input space,\n",
    "# and time for the value-iteration algorithm\n",
    "mesh = {}\n",
    "\n",
    "# number of knot points in the grids\n",
    "# odd to have a point at the origin\n",
    "mesh['n_q'] = 31 # do not exceed ~51/101\n",
    "mesh['n_qdot'] = 31 # do not exceed ~51/101\n",
    "mesh['n_u'] = 11 # do not exceed ~11/21\n",
    "\n",
    "# grid limits\n",
    "mesh['q_lim'] = [-2., 2.]\n",
    "mesh['qdot_lim'] = [-2., 2.]\n",
    "mesh['u_lim'] = [-1., 1.] # do not change\n",
    "\n",
    "# axis discretization\n",
    "for s in ['q', 'qdot', 'u']:\n",
    "    mesh[f'{s}_grid'] = np.linspace(*mesh[f'{s}_lim'], mesh[f'n_{s}'])\n",
    "    \n",
    "    # important: ensure that a knot point is at the origin\n",
    "    # otherwise the value iteration cannot converge\n",
    "    assert 0. in mesh[f'{s}_grid']\n",
    "    \n",
    "# time discretization in the value-iteration algorithm\n",
    "mesh['timestep'] = 0.005"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In the following cell we wrap Drake's `FittedValueIteration` function with a function we call `run_value_iteration`.\n",
    "This returns the optimal value function, the optimal controller, and all the data we need for the upcoming animation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def run_value_iteration(cost_function, mesh, max_iter=10000):\n",
    "    \n",
    "    # to create an animation, we store the values of\n",
    "    # the cost to go and the optimal policy for each\n",
    "    # iteration of the value-iteration algorithm\n",
    "    J_grid = []\n",
    "    pi_grid = []\n",
    "    \n",
    "    # callback from the value-iteration algorithm\n",
    "    # that saves the intermediate values of J and pi\n",
    "    # and that ensures we do not exceed max_iter\n",
    "    # (iteration number i starts from 1)\n",
    "    def callback(i, unused, J, pi):\n",
    "\n",
    "        # check max iter is not exceeded\n",
    "        if i > max_iter:\n",
    "            raise RuntimeError(f'Value-iteration algorithm did not converge within {max_iter} iterations.')\n",
    "\n",
    "        # store cost to go for iteration i\n",
    "    # the 'F' order facilitates the plotting below\n",
    "        J_grid.append(np.reshape(J, (mesh['n_q'], mesh['n_qdot']), order='F'))\n",
    "        pi_grid.append(np.reshape(pi, (mesh['n_q'], mesh['n_qdot']), order='F'))\n",
    "\n",
    "    # set up a simulation\n",
    "    simulator = Simulator(get_double_integrator())\n",
    "    \n",
    "    # grids for the value-iteration algorithm\n",
    "    state_grid = [set(mesh['q_grid']), set(mesh['qdot_grid'])]\n",
    "    input_grid = [set(mesh['u_grid'])]\n",
    "    \n",
    "    # add custom callback function as a visualization_callback\n",
    "    options = DynamicProgrammingOptions()\n",
    "    options.visualization_callback = callback\n",
    "    \n",
    "    # run value-iteration algorithm \n",
    "    policy, cost_to_go = FittedValueIteration(\n",
    "        simulator,\n",
    "        cost_function,\n",
    "        state_grid,\n",
    "        input_grid,\n",
    "        mesh['timestep'],\n",
    "        options\n",
    "    )\n",
    "\n",
    "    # recast J and pi from lists to 3d arrays\n",
    "    J_grid = np.dstack(J_grid)\n",
    "    pi_grid = np.dstack(pi_grid)\n",
    "    \n",
    "    return policy, cost_to_go, J_grid, pi_grid"
   ]
  },
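  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To build intuition for what `FittedValueIteration` does under the hood, here is a toy value iteration in pure `numpy` (a standalone sketch, not Drake's implementation): minimum time to the origin for the 1D system $\\dot{x} = u$, with the cost-to-go interpolated linearly on the grid. For this system the true minimum time from $x$ is $|x|$, so the converged cost-to-go should approach it up to the discretization error.\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# 1D grid and control set for the toy problem\n",
    "x_grid = np.linspace(-1., 1., 21)\n",
    "u_grid = np.array([-1., 0., 1.])\n",
    "dt = .1\n",
    "\n",
    "# indicator cost: zero at the origin, one elsewhere\n",
    "ell = (x_grid != 0.).astype(float)\n",
    "\n",
    "J = np.zeros(len(x_grid))\n",
    "for _ in range(1000):\n",
    "\n",
    "    # next state for every (x, u) pair, clipped to the grid limits\n",
    "    x_next = np.clip(x_grid[:, None] + dt * u_grid[None, :], -1., 1.)\n",
    "\n",
    "    # fitted part: interpolate the cost-to-go at the next states\n",
    "    J_next = np.interp(x_next, x_grid, J)\n",
    "\n",
    "    # Bellman backup, minimizing over the control grid\n",
    "    J_new = np.min(dt * ell[:, None] + J_next, axis=1)\n",
    "    if np.allclose(J_new, J):\n",
    "        break\n",
    "    J = J_new\n",
    "\n",
    "# the converged cost-to-go approximates |x|\n",
    "assert np.allclose(J, np.abs(x_grid), atol=dt)\n",
    "```"
   ]
  },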
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Animation of the Value-Iteration Algorithm\n",
    "If the following code looks intimidating, no need to fully understand it.\n",
    "What it does can be summarized as follows:\n",
    "- runs value iteration,\n",
    "- initializes an empty 3D surface plot for the value function and the policy,\n",
    "- creates the function `update_surf` that when called updates the surface plots from the previous point,\n",
    "- creates a fancy animation by calling `update_surf` many times.\n",
    "\n",
    "We hope you'll appreciate the final result!"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# run value iteration to get the matrices J_grid and pi_grid\n",
    "policy, cost_to_go, J_grid, pi_grid = run_value_iteration(cost_function, mesh)\n",
    "\n",
    "# initialize figure for animation plus misc settings\n",
    "fig, ax = plt.subplots(1, 2, figsize=(12,5))\n",
    "plt.tight_layout()\n",
    "ax[0].axis('off')\n",
    "ax[1].axis('off')\n",
    "\n",
    "# cost-to-go plot and policy plots\n",
    "# overwrites the subplots axes\n",
    "ax[0] = fig.add_subplot(121, projection='3d')\n",
    "ax[1] = fig.add_subplot(122, projection='3d')\n",
    "ax[0].set_title(r'Optimal cost to go   $J^*(\\mathbf{x})$')\n",
    "ax[1].set_title(r'Optimal policy   $\\pi^*(\\mathbf{x})$')\n",
    "for axi in ax:\n",
    "    axi.set_xlabel(r'$q$')\n",
    "    axi.set_ylabel(r'$\\dot q$')\n",
    "\n",
    "# helper function for the surface plot\n",
    "Q, Qdot = np.meshgrid(mesh['q_grid'], mesh['qdot_grid'])\n",
    "plot_surf = lambda ax, Z : ax.plot_surface(Q, Qdot, Z.T, rstride=1, cstride=1, cmap=cm.jet)\n",
    "\n",
    "# first frame of the animation\n",
    "J_surf = [plot_surf(ax[0], J_grid[:, :, 0])]\n",
    "pi_surf = [plot_surf(ax[1], pi_grid[:, :, 0])]\n",
    "\n",
    "# video parameters\n",
    "frames = 50 # total number of frames\n",
    "duration = 10 # seconds\n",
    "interval = duration / (frames - 1) * 1000 # milliseconds\n",
    "\n",
    "# initialize title to be modified in the callback\n",
    "title = fig.text(.5, .95, \"\", fontsize='x-large', bbox={'facecolor':'w'}, ha='center')\n",
    "\n",
    "# callback function for the animation\n",
    "def update_surf(frame, J_grid, J_surf, pi_grid, pi_surf):\n",
    "    \n",
    "    # map the frame number to an iteration index (zero-based)\n",
    "    iters = J_grid.shape[2]\n",
    "    i = int(frame * (iters - 1) / (frames - 1))\n",
    "    \n",
    "    # update cost-to-go and policy\n",
    "    J_surf[0].remove()\n",
    "    pi_surf[0].remove()\n",
    "    J_surf[0] = plot_surf(ax[0], J_grid[:, :, i])\n",
    "    pi_surf[0] = plot_surf(ax[1], pi_grid[:, :, i])\n",
    "    \n",
    "    # update title with current iteration\n",
    "    # use base 1 as above\n",
    "    title.set_text(f'Value iteration {i+1}')\n",
    "    \n",
    "# create animation\n",
    "animate = ani.FuncAnimation(\n",
    "    fig,\n",
    "    update_surf,\n",
    "    frames=frames,\n",
    "    interval=interval,\n",
    "    fargs=(J_grid,J_surf,pi_grid,pi_surf)\n",
    ")\n",
    "\n",
    "# play video\n",
    "plt.close() # close any open figure\n",
    "HTML(animate.to_jshtml())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Performance of the Value-Iteration Policy\n",
    "Value iteration is an extremely powerful and very general algorithm.\n",
    "However, its performance on \"bang-bang\" problems (i.e. problems where the optimal control is always at the bounds) can be very poor.\n",
    "In this section we simulate the double integrator in closed-loop with the approximated optimal policy.\n",
    "We'll see that things do not go exactly how we expect..."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# function that simulates the double integrator\n",
    "# starting from the state (q0, qdot0) for sim_time\n",
    "# seconds in closed loop with the passed controller\n",
    "def simulate(q0, qdot0, sim_time, controller):\n",
    "        \n",
    "    # initialize block diagram\n",
    "    builder = DiagramBuilder()\n",
    "    \n",
    "    # add system and controller\n",
    "    double_integrator = builder.AddSystem(get_double_integrator())\n",
    "    controller = builder.AddSystem(controller)\n",
    "    \n",
    "    # wire system and controller\n",
    "    builder.Connect(double_integrator.get_output_port(0), controller.get_input_port(0))\n",
    "    builder.Connect(controller.get_output_port(0), double_integrator.get_input_port(0))\n",
    "    \n",
    "    # measure double-integrator state and input\n",
    "    state_logger = LogOutput(double_integrator.get_output_port(0), builder)\n",
    "    input_logger = LogOutput(controller.get_output_port(0), builder)\n",
    "    \n",
    "    # finalize block diagram\n",
    "    diagram = builder.Build()\n",
    "    \n",
    "    # instantiate simulator\n",
    "    simulator = Simulator(diagram)\n",
    "    simulator.set_publish_every_time_step(False) # makes sim faster\n",
    "    \n",
    "    # set initial conditions\n",
    "    context = simulator.get_mutable_context()\n",
    "    context.SetContinuousState([q0, qdot0])\n",
    "    \n",
    "    # run simulation\n",
    "    simulator.AdvanceTo(sim_time)\n",
    "    \n",
    "    # unpack sim results\n",
    "    q_sim, qdot_sim = state_logger.data()\n",
    "    u_sim = input_logger.data().flatten()\n",
    "    t_sim = state_logger.sample_times()\n",
    "    \n",
    "    return q_sim, qdot_sim, u_sim, t_sim"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In order to properly visualize the results of the simulator above, we need a few helper functions, listed below.\n",
    "Feel free to skip the next cell if you are not a `matplotlib` enthusiast..."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# function that plots the trajectory of the\n",
    "# double integrator in state space (q_sim vs qdot_sim)\n",
    "def plot_state_trajectory(q_sim, qdot_sim):\n",
    "    \n",
    "    # draw a white dot for the initial conditions\n",
    "    plt.scatter(q_sim[0], qdot_sim[0], s=100, edgecolor='k', c='w', zorder=3, label=r'$\\mathbf{x}(0)$')\n",
    "    \n",
    "    # black line for the trajectory in time\n",
    "    plt.plot(q_sim, qdot_sim, color='k', linewidth=2, label=r'$\\mathbf{x}(t)$')\n",
    "    \n",
    "    # misc\n",
    "    plt.xlabel(r'$q$')\n",
    "    plt.ylabel(r'$\\dot q$')\n",
    "    plt.legend()\n",
    "    \n",
    "# helper function for plot_policy that evaluates\n",
    "# the controller output at a given state\n",
    "def evaluate_controller(controller, q, qdot):\n",
    "\n",
    "    # get context and fix the controller input (= system state)\n",
    "    context = controller.CreateDefaultContext()\n",
    "    controller.get_input_port(0).FixValue(context, (q, qdot))\n",
    "\n",
    "    # compute input for the double integrator\n",
    "    u = controller.get_output_port(0).Eval(context)[0]\n",
    "\n",
    "    return u\n",
    "\n",
    "# function that produces a level plot of\n",
    "# the policy generated by the passed controller\n",
    "# it uses the grid defined by q_grid and qdot_grid\n",
    "def plot_policy(q_grid, qdot_grid, controller):\n",
    "    \n",
    "    # evaluate the policy on the grid\n",
    "    Pi_grid = np.array([[evaluate_controller(controller, q, qdot) for qdot in qdot_grid] for q in q_grid])\n",
    "    \n",
    "    # level plot of the policy\n",
    "    # note the transpose to align the policy to the grid\n",
    "    plt.contourf(*np.meshgrid(q_grid, qdot_grid), Pi_grid.T, cmap=cm.jet)\n",
    "    \n",
    "    # add a bar with the color scale on the right\n",
    "    plt.colorbar(label=r'$\\pi^*(\\mathbf{x})$')\n",
    "\n",
    "# function that plots the control signal\n",
    "# u_sim as a function of the time vector t\n",
    "def plot_input(u_sim, t_sim, u_lim):\n",
    "    \n",
    "    # plot the bounds for the control signal\n",
    "    plt.plot(t_sim, [u_lim[0]]*len(t_sim), c='r', linestyle='--', label='Input limits')\n",
    "    plt.plot(t_sim, [u_lim[1]]*len(t_sim), c='r', linestyle='--')\n",
    "    \n",
    "    # plot the control signal\n",
    "    plt.plot(t_sim, u_sim, label='Input from simulation')\n",
    "    \n",
    "    # misc\n",
    "    plt.xlabel(r'$t$')\n",
    "    plt.ylabel(r'$u$')\n",
    "    plt.xlim(min(t_sim), max(t_sim))\n",
    "    plt.grid(True)\n",
    "    plt.legend(loc=1)\n",
    "    \n",
    "# overall plot function for the state trajectory,\n",
    "# controller policy, and input signal\n",
    "def simulate_and_plot(q0, qdot0, sim_time, controller, u_lim, nq=201, nqdot=201):\n",
    "    \n",
    "    # get trajectories\n",
    "    q_sim, qdot_sim, u_sim, t_sim = simulate(q0, qdot0, sim_time, controller)\n",
    "    \n",
    "    # state figure\n",
    "    plt.figure()\n",
    "    plot_state_trajectory(q_sim, qdot_sim)\n",
    "    \n",
    "    # plot policy only in a rectangular region\n",
    "    # that tightly contains the trajectory\n",
    "    \n",
    "    # helper function that computes upper and\n",
    "    # lower bounds, with a small frame, for the\n",
    "    # passed signal s\n",
    "    def frame_signal(s, frame=.1):\n",
    "        ds = (max(s) - min(s)) * frame\n",
    "        return [min(s)-ds, max(s)+ds]\n",
    "    \n",
    "    # regrid state space for policy plot\n",
    "    # this grid must be much finer than the\n",
    "    # one used for value iteration\n",
    "    q_grid = np.linspace(*frame_signal(q_sim), nq)\n",
    "    qdot_grid = np.linspace(*frame_signal(qdot_sim), nqdot)\n",
    "    \n",
    "    # plot the policy from the passed controller\n",
    "    plot_policy(q_grid, qdot_grid, controller)\n",
    "    \n",
    "    # plot input as a function of time\n",
    "    plt.figure()\n",
    "    plot_input(u_sim, t_sim, u_lim)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We are finally ready to simulate and plot the trajectories of the double integrator controlled by the value-iteration policy.\n",
    "Running the following cell you'll see two plots:\n",
    "- The plot of the state-space trajectory of the double integrator superimposed on the level plot of the policy.\n",
    "In the red regions the controller selects the input $u=1$ (full gas), in the blue regions it selects $u=-1$ (full brake). The areas in between approximate the quadratic switching boundary we have seen in class, and are due to the discretization of the state space.\n",
    "- The plot of the control force as a function of time.\n",
    "\n",
    "Is this the optimal policy we expected to see?\n",
    "Take your time to understand why these plots look so strange!\n",
    "Does this get any better if you increase the number of knot points (finer discretization of $q$ and $\\dot{q}$)?\n",
    "If not, why?\n",
    "(Questions not graded, do not submit.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# initial state\n",
    "q0 = -1.\n",
    "qdot0 = 0.\n",
    "\n",
    "# verify that the given initial state is inside the value-iteration grid\n",
    "assert mesh['q_lim'][0] <= q0 <= mesh['q_lim'][1]\n",
    "assert mesh['qdot_lim'][0] <= qdot0 <= mesh['qdot_lim'][1]\n",
    "\n",
    "# duration of the simulation in seconds\n",
    "sim_time = 5.\n",
    "\n",
    "# sim and plot\n",
    "policy = run_value_iteration(cost_function, mesh)[0]\n",
    "simulate_and_plot(q0, qdot0, sim_time, policy, mesh['u_lim'])"
   ]
  },
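  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One way to probe the discretization question above is to rebuild the mesh with more knot points and rerun. A sketch of the grid refinement in pure `numpy` (`fine_mesh` is a hypothetical name; the limits and timestep are copied from the cells above):\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# finer state grids, same limits and input grid as before\n",
    "fine_mesh = {'n_q': 51, 'n_qdot': 51, 'n_u': 11,\n",
    "             'q_lim': [-2., 2.], 'qdot_lim': [-2., 2.], 'u_lim': [-1., 1.],\n",
    "             'timestep': .005}\n",
    "for s in ['q', 'qdot', 'u']:\n",
    "    fine_mesh[f'{s}_grid'] = np.linspace(*fine_mesh[f'{s}_lim'], fine_mesh[f'n_{s}'])\n",
    "\n",
    "    # the origin must still be a knot point\n",
    "    assert 0. in fine_mesh[f'{s}_grid']\n",
    "```\n",
    "You can then pass `fine_mesh` to `run_value_iteration` and `simulate_and_plot` exactly as above to compare the policies; expect a noticeably longer runtime."
   ]
  },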
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Implementation of the Closed-Form Solution\n",
    "Since value iteration didn't give us the results we wanted, in the next cell we ask you to implement [the closed-form solution we've derived in class](http://underactuated.csail.mit.edu/dp.html#example2).\n",
    "Note that in class we assumed the input to be bounded between $-1$ and $1$, so you can either do the math and generalize that result to generic bounds $u_{\\text{min}} < 0$ and $u_{\\text{max}} > 0$ (not hard), or double check that `mesh['u_lim']` is still set to `[-1., 1.]`.\n",
    "\n",
    "**Note 1:**\n",
    "To help you, we already partially filled the function.\n",
    "In a small neighborhood of the origin we return $u = - \\dot{q} - q$, even if the theoretical solution would say $u = 0$.\n",
    "This gives the closed-loop dynamics $\\ddot{q} = - q - \\dot{q}$, which makes the origin a stable equilibrium.\n",
    "This trick prevents the controller from chattering wildly between $u_{\\text{max}}$ and $u_{\\text{min}}$ because of small numerical errors.\n",
    "Do not cancel it.\n",
    "\n",
    "**Note 2:**\n",
    "To complete this function with [the control law from the textbook](http://underactuated.csail.mit.edu/dp.html#example2)\n",
    "you need to write two conditions on the state $[q, \\dot{q}]^T$: one for the full-gas region and one for the full-brake region.\n",
    "Notice that, for now, the function always returns $u = u_{\\text{max}}$ if the state is not close to the origin."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def policy_closed_form(q, qdot, atol=1.e-2):\n",
    "    \n",
    "    # system in a neighborhood of the origin\n",
    "    # up to the absolute tolerance atol\n",
    "    x_norm = np.linalg.norm([q, qdot])\n",
    "    if np.isclose(x_norm, 0., atol=atol):\n",
    "    \n",
    "        # little trick, do not modify: use a stabilizing controller in the \n",
    "        # neighborhood of the origin to prevent wild chattering\n",
    "        return - q - qdot\n",
    "    \n",
    "    # full-brake region\n",
    "    # check if the state of the system is\n",
    "    # such that u must be set to -1\n",
    "    elif False: # modify here\n",
    "        return mesh['u_lim'][0]\n",
    "    \n",
    "    # full-gas region\n",
    "    # if all the others do not apply,\n",
    "    # u must be set to 1\n",
    "    else: # modify here\n",
    "        return mesh['u_lim'][1]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now we just encapsulate the function you wrote in a Drake `VectorSystem` that can be sent to the simulator.\n",
    "Do this state trajectory and this control signal look more reasonable than the ones from the value-iteration algorithm? (Question not graded, do not submit.)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# controller which implements the closed-form solution\n",
    "class ClosedFormController(VectorSystem):\n",
    "    \n",
    "    # two inputs (system state)\n",
    "    # one output (system input)\n",
    "    def __init__(self):\n",
    "        VectorSystem.__init__(self, 2, 1)\n",
    "        \n",
    "    # just evaluate the function above\n",
    "    def DoCalcVectorOutput(self, context, x, controller_state, u):\n",
    "        u[:] = policy_closed_form(*x)\n",
    "\n",
    "# sim and plot\n",
    "simulate_and_plot(q0, qdot0, sim_time, ClosedFormController(), mesh['u_lim'])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Autograding\n",
    "You can check your work by running the following cell:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from underactuated.exercises.dp.minimum_time.test_minimum_time import TestMinimumTime\n",
    "from underactuated.exercises.grader import Grader\n",
    "Grader.grade_output([TestMinimumTime], [locals()], 'results.json')\n",
    "Grader.print_test_results('results.json')"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.10"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}