{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Guided Policy Search\n",
    "\n",
    "<h2 id=\"tocheading\">Index</h2>\n",
    "<div id=\"toc\"></div>\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/javascript": [
       "$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')"
      ],
      "text/plain": [
       "<IPython.core.display.Javascript object>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "%%javascript\n",
    "$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js')\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Introduction\n",
    "Guided Policy Search (GPS) is a technique that transforms the Reinforcement Learning (RL) task of policy search into a Supervised Learning problem, where the training set is generated by a simple trajectory-centric RL algorithm. \n",
    "\n",
    "This algorithm optimizes linear-Gaussian controllers $p_i (u_t | x_t)$. Each $p_i (u_t | x_t)$ succeeds in the task from different initial states, which helps the algorithm generalize to other states from the same distribution. The final policy $\\pi_\\theta(u_t | o_t )$ learned with GPS is only provided with observations $o_t$ of the full state $x_t$, and the system dynamics are assumed to be unknown. \n",
    "\n",
    "![](gps_illustration.png)\n",
    "\n",
    "We draw sample trajectories $\\tau_i^j$ for each initial state on the physical system by running the corresponding controller $p_i(u_t | x_t)$. The samples are used to fit the dynamics $p_i (x_{t+1} | x_t, u_t)$, which in turn are used to improve the controllers $p_i(u_t | x_t)$, and also serve as training data for the policy $\\pi_\\theta(u_t | o_t )$. The diagram above shows the loop that alternates between optimizing each trajectory distribution $p_i (\\tau)$ and optimizing the policy $\\pi_\\theta(u_t | o_t )$ to match these trajectory distributions.\n",
    "\n",
    "\n",
    "This work is based on *End-to-End Training of Deep Visuomotor Policies* (https://arxiv.org/abs/1504.00702). Refer to http://rll.berkeley.edu/gps/ for the original implementation."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Definitions and notations\n",
    "\n",
    "| symbol | definition |\n",
    "|--------|------------|\n",
    "| $p_i(u_t \\mid x_t)$ | linear-Gaussian controllers; they induce the trajectory distributions $p_i (\\tau)$ |\n",
    "| $\\hat{p_i}(u_t \\mid x_t)$ | controller from the previous iteration |\n",
    "| $\\pi_\\theta(u_t \\mid o_t )$ | final policy learned |\n",
    "| $p_i (\\tau)$ | trajectory distribution induced by the linear-Gaussian controllers (the guiding distribution) |\n",
    "| $\\tau_i^j$ | sample trajectories drawn from the distribution |\n",
    "| $o_t$ | observations |\n",
    "| $x_t$ | full state |\n",
    "| $p_i (x_{t+1} \\mid x_t, u_t)$ | system dynamics |"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Test environment\n",
    "The following test environment (OpenAI Gym's `Pendulum-v0`) will be used for implementing GPS.\n",
    "\n",
    "![](gps_testenv.png)"
   ]
  },
  {
   "cell_type": "raw",
   "metadata": {},
   "source": [
    "import gym\n",
    "\n",
    "env = gym.make('Pendulum-v0')\n",
    "env.reset()\n",
    "for _ in range(1000):\n",
    "    env.render()\n",
    "    env.step(env.action_space.sample()) # take a random action"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## GPS implementation"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Utils\n",
    "A set of utility functions used throughout the code."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def gauss_fit_joint_prior(pts, mu0, Phi, m, n0, dwts, dX, dU, sig_reg):\n",
    "    \"\"\" Perform Gaussian fit to data with a prior. \"\"\"\n",
    "    # Build weights matrix.\n",
    "    D = np.diag(dwts)\n",
    "    # Compute empirical mean and covariance.\n",
    "    mun = np.sum((pts.T * dwts).T, axis=0)\n",
    "    diff = pts - mun\n",
    "    empsig = diff.T.dot(D).dot(diff)\n",
    "    empsig = 0.5 * (empsig + empsig.T)\n",
    "    # MAP estimate of joint distribution.\n",
    "    N = dwts.shape[0]\n",
    "    mu = mun\n",
    "    sigma = (N * empsig + Phi + (N * m) / (N + m) *\n",
    "             np.outer(mun - mu0, mun - mu0)) / (N + n0)\n",
    "    sigma = 0.5 * (sigma + sigma.T)\n",
    "    # Add sigma regularization.\n",
    "    sigma += sig_reg\n",
    "    # Conditioning to get dynamics.\n",
    "    fd = np.linalg.solve(sigma[:dX, :dX], sigma[:dX, dX:dX+dU]).T\n",
    "    fc = mu[dX:dX+dU] - fd.dot(mu[:dX])\n",
    "    dynsig = sigma[dX:dX+dU, dX:dX+dU] - fd.dot(sigma[:dX, :dX]).dot(fd.T)\n",
    "    dynsig = 0.5 * (dynsig + dynsig.T)\n",
    "    return fd, fc, dynsig\n"
   ]
  },
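  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick, purely illustrative sanity check of `gauss_fit_joint_prior` (the synthetic data, the hand-built uninformative prior, and all constants below are assumptions, not values from the paper), we can draw joint $[x_t; u_t; x_{t+1}]$ points from a known linear model and verify that the conditioning step recovers roughly the matrix $[A \\; B]$:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "np.random.seed(0)\n",
    "dX, dU, N = 2, 1, 100\n",
    "# Hypothetical ground-truth linear dynamics: x_{t+1} = A x_t + B u_t + noise.\n",
    "A = np.array([[1.0, 0.1], [0.0, 1.0]])\n",
    "B = np.array([[0.0], [0.1]])\n",
    "X = np.random.randn(N, dX)\n",
    "U = np.random.randn(N, dU)\n",
    "Xn = X.dot(A.T) + U.dot(B.T) + 1e-3 * np.random.randn(N, dX)\n",
    "pts = np.c_[X, U, Xn]\n",
    "\n",
    "# Uninformative prior: empirical mean, small isotropic covariance, weak counts.\n",
    "mu0 = pts.mean(axis=0)\n",
    "Phi = 1e-3 * np.eye(dX + dU + dX)\n",
    "dwts = (1.0 / N) * np.ones(N)\n",
    "sig_reg = 1e-6 * np.eye(dX + dU + dX)\n",
    "\n",
    "# Note the call convention (as in DynamicsLRPrior.fit below): the input\n",
    "# dimension is dX+dU and the output dimension is dX.\n",
    "fd, fc, dynsig = gauss_fit_joint_prior(pts, mu0, Phi, 1, 1, dwts,\n",
    "                                       dX + dU, dX, sig_reg)\n",
    "print(fd.shape)  # (2, 3): fd approximates [A  B]"
   ]
  },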
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### `Sample`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "from gps_pb2 import ACTION\n",
    "\n",
    "\n",
    "class Sample(object):\n",
    "    \"\"\"\n",
    "    Class that handles the representation of a trajectory and stores a\n",
    "    single trajectory.\n",
    "    Note: must be serializable for easy saving, no C++ references!\n",
    "    \"\"\"\n",
    "    def __init__(self, agent):\n",
    "        self.agent = agent\n",
    "\n",
    "        self.T = agent.T\n",
    "        self.dX = agent.dX\n",
    "        self.dU = agent.dU\n",
    "        self.dO = agent.dO\n",
    "        self.dM = agent.dM\n",
    "\n",
    "        # Dictionary containing the sample data from various sensors.\n",
    "        self._data = {}\n",
    "\n",
    "        self._X = np.empty((self.T, self.dX))\n",
    "        self._X.fill(np.nan)\n",
    "        self._obs = np.empty((self.T, self.dO))\n",
    "        self._obs.fill(np.nan)\n",
    "        self._meta = np.empty(self.dM)\n",
    "        self._meta.fill(np.nan)\n",
    "\n",
    "    def set(self, sensor_name, sensor_data, t=None):\n",
    "        \"\"\" Set trajectory data for a particular sensor. \"\"\"\n",
    "        if t is None:\n",
    "            self._data[sensor_name] = sensor_data\n",
    "            self._X.fill(np.nan)  # Invalidate existing X.\n",
    "            self._obs.fill(np.nan)  # Invalidate existing obs.\n",
    "            self._meta.fill(np.nan)  # Invalidate existing meta data.\n",
    "        else:\n",
    "            if sensor_name not in self._data:\n",
    "                self._data[sensor_name] = \\\n",
    "                        np.empty((self.T,) + sensor_data.shape)\n",
    "                self._data[sensor_name].fill(np.nan)\n",
    "            self._data[sensor_name][t, :] = sensor_data\n",
    "            self._X[t, :].fill(np.nan)\n",
    "            self._obs[t, :].fill(np.nan)\n",
    "\n",
    "    def get(self, sensor_name, t=None):\n",
    "        \"\"\" Get trajectory data for a particular sensor. \"\"\"\n",
    "        return (self._data[sensor_name] if t is None\n",
    "                else self._data[sensor_name][t, :])\n",
    "\n",
    "    def get_X(self, t=None):\n",
    "        \"\"\" Get the state. Put it together if not precomputed. \"\"\"\n",
    "        X = self._X if t is None else self._X[t, :]\n",
    "        if np.any(np.isnan(X)):\n",
    "            for data_type in self._data:\n",
    "                if data_type not in self.agent.x_data_types:\n",
    "                    continue\n",
    "                data = (self._data[data_type] if t is None\n",
    "                        else self._data[data_type][t, :])\n",
    "                self.agent.pack_data_x(X, data, data_types=[data_type])\n",
    "        return X\n",
    "\n",
    "    def get_U(self, t=None):\n",
    "        \"\"\" Get the action. \"\"\"\n",
    "        return self._data[ACTION] if t is None else self._data[ACTION][t, :]\n",
    "\n",
    "    def get_obs(self, t=None):\n",
    "        \"\"\" Get the observation. Put it together if not precomputed. \"\"\"\n",
    "        obs = self._obs if t is None else self._obs[t, :]\n",
    "        if np.any(np.isnan(obs)):\n",
    "            for data_type in self._data:\n",
    "                if data_type not in self.agent.obs_data_types:\n",
    "                    continue\n",
    "                if data_type in self.agent.meta_data_types:\n",
    "                    continue\n",
    "                data = (self._data[data_type] if t is None\n",
    "                        else self._data[data_type][t, :])\n",
    "                self.agent.pack_data_obs(obs, data, data_types=[data_type])\n",
    "        return obs\n",
    "\n",
    "    def get_meta(self):\n",
    "        \"\"\" Get the meta data. Put it together if not precomputed. \"\"\"\n",
    "        meta = self._meta\n",
    "        if np.any(np.isnan(meta)):\n",
    "            for data_type in self._data:\n",
    "                if data_type not in self.agent.meta_data_types:\n",
    "                    continue\n",
    "                data = self._data[data_type]\n",
    "                self.agent.pack_data_meta(meta, data, data_types=[data_type])\n",
    "        return meta\n",
    "\n",
    "    # For pickling.\n",
    "    def __getstate__(self):\n",
    "        state = self.__dict__.copy()\n",
    "        state.pop('agent')\n",
    "        return state\n",
    "\n",
    "    # For unpickling.\n",
    "    def __setstate__(self, state):\n",
    "        self.__dict__ = state\n",
    "        self.__dict__['agent'] = None\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### `SampleList`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "#NOISE = 19  # initially generated by the protocol buffer compiler. \n",
    "from gps_pb2 import NOISE\n",
    "\n",
    "class SampleList(object):\n",
    "    \"\"\" Class that handles writes and reads to sample data. \"\"\"\n",
    "    def __init__(self, samples):\n",
    "        self._samples = samples\n",
    "\n",
    "    def get_X(self, idx=None):\n",
    "        \"\"\" Returns N x T x dX numpy array of states. \"\"\"\n",
    "        if idx is None:\n",
    "            idx = range(len(self._samples))\n",
    "        return np.asarray([self._samples[i].get_X() for i in idx])\n",
    "\n",
    "    def get_U(self, idx=None):\n",
    "        \"\"\" Returns N x T x dU numpy array of actions. \"\"\"\n",
    "        if idx is None:\n",
    "            idx = range(len(self._samples))\n",
    "        return np.asarray([self._samples[i].get_U() for i in idx])\n",
    "\n",
    "    def get_noise(self, idx=None):\n",
    "        \"\"\" Returns N x T x dU numpy array of noise generated during rollouts. \"\"\"\n",
    "        if idx is None:\n",
    "            idx = range(len(self._samples))\n",
    "        return np.asarray([self._samples[i].get(NOISE) for i in idx])\n",
    "\n",
    "    def get_obs(self, idx=None):\n",
    "        \"\"\" Returns N x T x dO numpy array of features. \"\"\"\n",
    "        if idx is None:\n",
    "            idx = range(len(self._samples))\n",
    "        return np.asarray([self._samples[i].get_obs() for i in idx])\n",
    "\n",
    "    def get_samples(self, idx=None):\n",
    "        \"\"\" Returns N sample objects. \"\"\"\n",
    "        if idx is None:\n",
    "            idx = range(len(self._samples))\n",
    "        return [self._samples[i] for i in idx]\n",
    "\n",
    "    def num_samples(self):\n",
    "        \"\"\" Returns number of samples. \"\"\"\n",
    "        return len(self._samples)\n",
    "\n",
    "    # Convenience methods.\n",
    "    def __len__(self):\n",
    "        return self.num_samples()\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        return self.get_samples([idx])[0]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Dynamics\n",
    "#### Dynamics superclass: `Dynamics`\n",
    "The dynamics model superclass, which assumes the dynamics are always linear in $x_t$:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import abc\n",
    "\n",
    "class Dynamics(object):\n",
    "    \"\"\" Dynamics superclass. \"\"\"\n",
    "    __metaclass__ = abc.ABCMeta\n",
    "\n",
    "    def __init__(self, hyperparams):\n",
    "    #def __init__(self):\n",
    "        self._hyperparams = hyperparams\n",
    "\n",
    "        # TODO - Currently assuming that dynamics will always be linear\n",
    "        #        with X.\n",
    "        # TODO - Allocate arrays using hyperparams dU, dX, T.\n",
    "\n",
    "        # Fitted dynamics: x_t+1 = Fm * [x_t;u_t] + fv.\n",
    "        self.Fm = np.array(np.nan)\n",
    "        self.fv = np.array(np.nan)\n",
    "        self.dyn_covar = np.array(np.nan)  # Covariance.\n",
    "\n",
    "    @abc.abstractmethod\n",
    "    def update_prior(self, sample):\n",
    "        \"\"\" Update dynamics prior. \"\"\"\n",
    "        raise NotImplementedError(\"Must be implemented in subclass.\")\n",
    "\n",
    "    @abc.abstractmethod\n",
    "    def get_prior(self):\n",
    "        \"\"\" Returns prior object. \"\"\"\n",
    "        raise NotImplementedError(\"Must be implemented in subclass.\")\n",
    "\n",
    "    @abc.abstractmethod\n",
    "    def fit(self, sample_list):\n",
    "        \"\"\" Fit dynamics. \"\"\"\n",
    "        raise NotImplementedError(\"Must be implemented in subclass.\")\n",
    "\n",
    "    def copy(self):\n",
    "        \"\"\" Return a copy of the dynamics estimate. \"\"\"\n",
    "        dyn = type(self)(self._hyperparams)\n",
    "        #dyn = type(self)()\n",
    "        dyn.Fm = np.copy(self.Fm)\n",
    "        dyn.fv = np.copy(self.fv)\n",
    "        dyn.dyn_covar = np.copy(self.dyn_covar)\n",
    "        return dyn"
   ]
  },
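  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the fitted form $x_{t+1} = F_m [x_t; u_t] + f_v$ concrete, the sketch below (hypothetical numbers, not taken from any real system) builds `Fm` and `fv` for a 1-D point mass and rolls the mean dynamics forward one step:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical 1-D point mass: state x = [position, velocity], control u = acceleration.\n",
    "dt = 0.1\n",
    "Fm_t = np.array([[1.0, dt, 0.0],\n",
    "                 [0.0, 1.0, dt]])\n",
    "fv_t = np.zeros(2)\n",
    "\n",
    "x_t = np.array([0.0, 1.0])  # at the origin, moving at 1 m/s\n",
    "u_t = np.array([2.0])       # constant acceleration\n",
    "x_next = Fm_t.dot(np.concatenate([x_t, u_t])) + fv_t\n",
    "print(x_next)  # -> [0.1 1.2]"
   ]
  },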
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Gaussian mixture model (GMM) class: `GMM`"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We define the Gaussian mixture model (GMM) class, along with a numerically stable log-sum-exp helper, `logsum`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import logging\n",
    "import scipy.linalg\n",
    "\n",
    "LOGGER = logging.getLogger(__name__)\n",
    "\n",
    "def logsum(vec, axis=0, keepdims=True):\n",
    "    \"\"\" Numerically stable log-sum-exp along the given axis. \"\"\"\n",
    "    maxv = np.max(vec, axis=axis, keepdims=keepdims)\n",
    "    maxv[maxv == -float('inf')] = 0\n",
    "    return np.log(np.sum(np.exp(vec-maxv), axis=axis, keepdims=keepdims)) + maxv\n",
    "\n",
    "class GMM(object):\n",
    "    \"\"\" Gaussian Mixture Model. \"\"\"\n",
    "    def __init__(self, init_sequential=False, eigreg=False, warmstart=True):\n",
    "        self.init_sequential = init_sequential\n",
    "        self.eigreg = eigreg\n",
    "        self.warmstart = warmstart\n",
    "        self.sigma = None\n",
    "\n",
    "    def inference(self, pts):\n",
    "        \"\"\"\n",
    "        Evaluate dynamics prior.\n",
    "        Args:\n",
    "            pts: A N x D array of points.\n",
    "        \"\"\"\n",
    "        # Compute posterior cluster weights.\n",
    "        logwts = self.clusterwts(pts)\n",
    "\n",
    "        # Compute posterior mean and covariance.\n",
    "        mu0, Phi = self.moments(logwts)\n",
    "\n",
    "        # Set hyperparameters.\n",
    "        m = self.N\n",
    "        n0 = m - 2 - mu0.shape[0]\n",
    "\n",
    "        # Normalize.\n",
    "        m = float(m) / self.N\n",
    "        n0 = float(n0) / self.N\n",
    "        return mu0, Phi, m, n0\n",
    "\n",
    "    def estep(self, data):\n",
    "        \"\"\"\n",
    "        Compute log observation probabilities under GMM.\n",
    "        Args:\n",
    "            data: A N x D array of points.\n",
    "        Returns:\n",
    "            logobs: A N x K array of log probabilities (for each point\n",
    "                on each cluster).\n",
    "        \"\"\"\n",
    "        # Constants.\n",
    "        N, D = data.shape\n",
    "        K = self.sigma.shape[0]\n",
    "\n",
    "        logobs = -0.5*np.ones((N, K))*D*np.log(2*np.pi)\n",
    "        for i in range(K):\n",
    "            mu, sigma = self.mu[i], self.sigma[i]\n",
    "            L = scipy.linalg.cholesky(sigma, lower=True)\n",
    "            logobs[:, i] -= np.sum(np.log(np.diag(L)))\n",
    "\n",
    "            diff = (data - mu).T\n",
    "            soln = scipy.linalg.solve_triangular(L, diff, lower=True)\n",
    "            logobs[:, i] -= 0.5*np.sum(soln**2, axis=0)\n",
    "\n",
    "        logobs += self.logmass.T\n",
    "        return logobs\n",
    "\n",
    "    def moments(self, logwts):\n",
    "        \"\"\"\n",
    "        Compute the moments of the cluster mixture with logwts.\n",
    "        Args:\n",
    "            logwts: A K x 1 array of log cluster probabilities.\n",
    "        Returns:\n",
    "            mu: A (D,) mean vector.\n",
    "            sigma: A D x D covariance matrix.\n",
    "        \"\"\"\n",
    "        # Exponentiate.\n",
    "        wts = np.exp(logwts)\n",
    "\n",
    "        # Compute overall mean.\n",
    "        mu = np.sum(self.mu * wts, axis=0)\n",
    "\n",
    "        # Compute overall covariance.\n",
    "        diff = self.mu - np.expand_dims(mu, axis=0)\n",
    "        diff_expand = np.expand_dims(self.mu, axis=1) * \\\n",
    "                np.expand_dims(diff, axis=2)\n",
    "        wts_expand = np.expand_dims(wts, axis=2)\n",
    "        sigma = np.sum((self.sigma + diff_expand) * wts_expand, axis=0)\n",
    "        return mu, sigma\n",
    "\n",
    "    def clusterwts(self, data):\n",
    "        \"\"\"\n",
    "        Compute cluster weights for specified points under GMM.\n",
    "        Args:\n",
    "            data: An N x D array of points\n",
    "        Returns:\n",
    "            A K x 1 array of average cluster log probabilities.\n",
    "        \"\"\"\n",
    "        # Compute probability of each point under each cluster.\n",
    "        logobs = self.estep(data)\n",
    "\n",
    "        # Renormalize to get cluster weights.\n",
    "        logwts = logobs - logsum(logobs, axis=1)\n",
    "\n",
    "        # Average the cluster probabilities.\n",
    "        logwts = logsum(logwts, axis=0) - np.log(data.shape[0])\n",
    "        return logwts.T\n",
    "\n",
    "    def update(self, data, K, max_iterations=100):\n",
    "        \"\"\"\n",
    "        Run EM to update clusters.\n",
    "        Args:\n",
    "            data: An N x D data matrix, where N = number of data points.\n",
    "            K: Number of clusters to use.\n",
    "        \"\"\"\n",
    "        # Constants.\n",
    "        N = data.shape[0]\n",
    "        Do = data.shape[1]\n",
    "\n",
    "        LOGGER.debug('Fitting GMM with %d clusters on %d points', K, N)\n",
    "\n",
    "        if (not self.warmstart or self.sigma is None or\n",
    "                K != self.sigma.shape[0]):\n",
    "            # Initialization.\n",
    "            LOGGER.debug('Initializing GMM.')\n",
    "            self.sigma = np.zeros((K, Do, Do))\n",
    "            self.mu = np.zeros((K, Do))\n",
    "            self.logmass = np.log(1.0 / K) * np.ones((K, 1))\n",
    "            self.mass = (1.0 / K) * np.ones((K, 1))\n",
    "            self.N = data.shape[0]\n",
    "            N = self.N\n",
    "\n",
    "            # Set initial cluster indices.\n",
    "            if not self.init_sequential:\n",
    "                cidx = np.random.randint(0, K, size=(1, N))\n",
    "            else:\n",
    "                raise NotImplementedError()\n",
    "\n",
    "            # Initialize.\n",
    "            for i in range(K):\n",
    "                cluster_idx = (cidx == i)[0]\n",
    "                mu = np.mean(data[cluster_idx, :], axis=0)\n",
    "                diff = (data[cluster_idx, :] - mu).T\n",
    "                sigma = (1.0 / K) * (diff.dot(diff.T))\n",
    "                self.mu[i, :] = mu\n",
    "                self.sigma[i, :, :] = sigma + np.eye(Do) * 2e-6\n",
    "\n",
    "        prevll = -float('inf')\n",
    "        for itr in range(max_iterations):\n",
    "            # E-step: compute cluster probabilities.\n",
    "            logobs = self.estep(data)\n",
    "\n",
    "            # Compute log-likelihood.\n",
    "            ll = np.sum(logsum(logobs, axis=1))\n",
    "            LOGGER.debug('GMM itr %d/%d. Log likelihood: %f',\n",
    "                         itr, max_iterations, ll)\n",
    "            if ll < prevll:\n",
    "                # TODO: Why does log-likelihood decrease sometimes?\n",
    "                LOGGER.debug('Log-likelihood decreased! Ending on itr=%d/%d',\n",
    "                             itr, max_iterations)\n",
    "                break\n",
    "            if np.abs(ll-prevll) < 1e-5*prevll:\n",
    "                LOGGER.debug('GMM converged on itr=%d/%d',\n",
    "                             itr, max_iterations)\n",
    "                break\n",
    "            prevll = ll\n",
    "\n",
    "            # Renormalize to get cluster weights.\n",
    "            logw = logobs - logsum(logobs, axis=1)\n",
    "            assert logw.shape == (N, K)\n",
    "\n",
    "            # Renormalize again to get weights for refitting clusters.\n",
    "            logwn = logw - logsum(logw, axis=0)\n",
    "            assert logwn.shape == (N, K)\n",
    "            w = np.exp(logwn)\n",
    "\n",
    "            # M-step: update clusters.\n",
    "            # Fit cluster mass.\n",
    "            self.logmass = logsum(logw, axis=0).T\n",
    "            self.logmass = self.logmass - logsum(self.logmass, axis=0)\n",
    "            assert self.logmass.shape == (K, 1)\n",
    "            self.mass = np.exp(self.logmass)\n",
    "            # Reboot small clusters.\n",
    "            w[:, (self.mass < (1.0 / K) * 1e-4)[:, 0]] = 1.0 / N\n",
    "            # Fit cluster means.\n",
    "            w_expand = np.expand_dims(w, axis=2)\n",
    "            data_expand = np.expand_dims(data, axis=1)\n",
    "            self.mu = np.sum(w_expand * data_expand, axis=0)\n",
    "            # Fit covariances.\n",
    "            wdata = data_expand * np.sqrt(w_expand)\n",
    "            assert wdata.shape == (N, K, Do)\n",
    "            for i in range(K):\n",
    "                # Compute weighted outer product.\n",
    "                XX = wdata[:, i, :].T.dot(wdata[:, i, :])\n",
    "                mu = self.mu[i, :]\n",
    "                self.sigma[i, :, :] = XX - np.outer(mu, mu)\n",
    "\n",
    "                if self.eigreg:  # Use eigenvalue regularization.\n",
    "                    raise NotImplementedError()\n",
    "                else:  # Use quick and dirty regularization.\n",
    "                    sigma = self.sigma[i, :, :]\n",
    "                    self.sigma[i, :, :] = 0.5 * (sigma + sigma.T) + \\\n",
    "                            1e-6 * np.eye(Do)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Gaussian mixture model (GMM) dynamics: `DynamicsPriorGMM`\n",
    "\n",
    "Optimizing the linear-Gaussian controllers $p_i(u_t | x_t)$ (that induce the trajectories $p_i (\\tau)$) requires fitting the system dynamics $p_i (x_{t+1} | x_t, u_t)$ at each iteration to samples generated on the physical system from the previous controller $\\hat{p_i}(u_t | x_t)$.\n",
    "\n",
    "![](trajectory_optimization.png)\n",
    "\n",
    "The linear-Gaussian dynamics are defined as $p_i (x_{t+1} | x_t, u_t) = \\mathcal{N} (f_{xt}x_t + f_{ut}u_t + f_{ct}, F_t)$, and the data that we obtain from the robot can be viewed as tuples $\\{x_t^i, u_t^i, x_{t+1}^i\\}$. A simple way to fit these linear-Gaussian dynamics is to use linear regression to determine $f_x$, $f_u$ and $f_c$, and to fit $F_t$ based on the residual errors. However, the sample complexity of linear regression scales with the dimensionality of the full state space $x_t$.\n",
    "\n",
    "Although this might be an issue for high-dimensional robotic systems, the dynamics at nearby time steps are strongly correlated, which means that we can dramatically reduce the sample complexity of the dynamics fitting by bringing in information from previous time steps. This implementation fits a global model to all of the transitions $\\{x_t^i, u_t^i, x_{t+1}^i\\}$, for all $t$ and all tuples from prior iterations, and then uses this model as **a prior for fitting the dynamics at each time step**.\n",
    "\n",
    "Below is the definition of the GMM prior for dynamics estimation."
   ]
  },
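  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To ground the discussion, here is the plain per-time-step linear-regression baseline mentioned above, on synthetic tuples (all matrices and noise levels are made up for illustration): $f_x$, $f_u$ and $f_c$ are recovered by least squares, and $F_t$ from the residuals. The prior-based fit below refines exactly this procedure."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Plain linear regression at a single time step (synthetic, illustrative only).\n",
    "np.random.seed(1)\n",
    "N, dX, dU = 50, 2, 1\n",
    "fx_true = np.array([[0.9, 0.1], [0.0, 0.95]])\n",
    "fu_true = np.array([[0.0], [0.2]])\n",
    "fc_true = np.array([0.05, -0.1])\n",
    "\n",
    "Xt = np.random.randn(N, dX)\n",
    "Ut = np.random.randn(N, dU)\n",
    "Xt1 = Xt.dot(fx_true.T) + Ut.dot(fu_true.T) + fc_true + 1e-2 * np.random.randn(N, dX)\n",
    "\n",
    "# Least squares on the augmented input [x_t, u_t, 1].\n",
    "Z = np.c_[Xt, Ut, np.ones(N)]\n",
    "W = np.linalg.lstsq(Z, Xt1, rcond=None)[0]\n",
    "fx, fu, fc = W[:dX].T, W[dX:dX+dU].T, W[-1]\n",
    "\n",
    "# F_t: covariance of the regression residuals.\n",
    "resid = Xt1 - Z.dot(W)\n",
    "Ft = resid.T.dot(resid) / N\n",
    "print(np.round(fx, 2))  # close to fx_true"
   ]
  },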
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import copy\n",
    "\n",
    "# DynamicsPriorGMM\n",
    "DYN_PRIOR_GMM = {\n",
    "    'min_samples_per_cluster': 20,\n",
    "    'max_clusters': 50,\n",
    "    'max_samples': 20,\n",
    "    'strength': 1.0,\n",
    "}\n",
    "\n",
    "# As defined in the code examples\n",
    "DYN_PRIOR_GMM_example = {\n",
    "    'min_samples_per_cluster': 40,\n",
    "    'max_clusters': 20,\n",
    "    'max_samples': 20,    \n",
    "    'strength': 1.0,\n",
    "}\n",
    "\n",
    "class DynamicsPriorGMM(object):\n",
    "    \"\"\"\n",
    "    A dynamics prior encoded as a GMM over [x_t, u_t, x_t+1] points.\n",
    "    See:\n",
    "        S. Levine*, C. Finn*, T. Darrell, P. Abbeel, \"End-to-end\n",
    "        training of Deep Visuomotor Policies\", arXiv:1504.00702,\n",
    "        Appendix A.3.\n",
    "    \"\"\"\n",
    "    def __init__(self, hyperparams):\n",
    "        \"\"\"\n",
    "        Hyperparameters:\n",
    "            min_samples_per_cluster: Minimum samples per cluster.\n",
    "            max_clusters: Maximum number of clusters to fit.\n",
    "            max_samples: Maximum number of trajectories to use for\n",
    "                fitting the GMM at any given time.\n",
    "            strength: Adjusts the strength of the prior.\n",
    "        \"\"\"\n",
    "        config = copy.deepcopy(DYN_PRIOR_GMM)\n",
    "        #config = copy.deepcopy(DYN_PRIOR_GMM_example)        \n",
    "        config.update(hyperparams)\n",
    "        self._hyperparams = config\n",
    "        self.X = None\n",
    "        self.U = None\n",
    "        self.gmm = GMM()\n",
    "        self._min_samp = self._hyperparams['min_samples_per_cluster']\n",
    "        self._max_samples = self._hyperparams['max_samples']\n",
    "        self._max_clusters = self._hyperparams['max_clusters']\n",
    "        self._strength = self._hyperparams['strength']\n",
    "\n",
    "        # Should we use copy.min_samples_per_cluster, etc. instead?\n",
    "        #self._min_samp = DYN_PRIOR_GMM_example.min_samples_per_cluster\n",
    "        #self._max_samples = DYN_PRIOR_GMM_example.max_samples\n",
    "        #self._max_clusters = DYN_PRIOR_GMM_example.max_clusters\n",
    "        #self._strength = DYN_PRIOR_GMM_example.strength\n",
    "        \n",
    "\n",
    "    def initial_state(self):\n",
    "        \"\"\" Return dynamics prior for initial time step. \"\"\"\n",
    "        # Compute mean and covariance.\n",
    "        mu0 = np.mean(self.X[:, 0, :], axis=0)\n",
    "        Phi = np.diag(np.var(self.X[:, 0, :], axis=0))\n",
    "\n",
    "        # Factor in multiplier.\n",
    "        n0 = self.X.shape[2] * self._strength\n",
    "        m = self.X.shape[2] * self._strength\n",
    "\n",
    "        # Multiply Phi by m (since it was normalized before).\n",
    "        Phi = Phi * m\n",
    "        return mu0, Phi, m, n0\n",
    "\n",
    "    def update(self, X, U):\n",
    "        \"\"\"\n",
    "        Update prior with additional data.\n",
    "        Args:\n",
    "            X: A N x T x dX matrix of sequential state data.\n",
    "            U: A N x T x dU matrix of sequential control data.\n",
    "        \"\"\"\n",
    "        # Constants.\n",
    "        T = X.shape[1] - 1\n",
    "\n",
    "        # Append data to dataset.\n",
    "        if self.X is None:\n",
    "            self.X = X\n",
    "        else:\n",
    "            self.X = np.concatenate([self.X, X], axis=0)\n",
    "\n",
    "        if self.U is None:\n",
    "            self.U = U\n",
    "        else:\n",
    "            self.U = np.concatenate([self.U, U], axis=0)\n",
    "\n",
    "        # Remove excess samples from dataset.\n",
    "        start = max(0, self.X.shape[0] - self._max_samples + 1)\n",
    "        self.X = self.X[start:, :]\n",
    "        self.U = self.U[start:, :]\n",
    "\n",
    "        # Compute cluster dimensionality.\n",
    "        Do = X.shape[2] + U.shape[2] + X.shape[2]  #TODO: Use Xtgt.\n",
    "\n",
    "        # Create dataset.\n",
    "        N = self.X.shape[0]\n",
    "        xux = np.reshape(\n",
    "            np.c_[self.X[:, :T, :], self.U[:, :T, :], self.X[:, 1:(T+1), :]],\n",
    "            [T * N, Do]\n",
    "        )\n",
    "\n",
    "        # Choose number of clusters.\n",
    "        K = int(max(2, min(self._max_clusters,\n",
    "                           np.floor(float(N * T) / self._min_samp))))\n",
    "        LOGGER.debug('Generating %d clusters for dynamics GMM.', K)\n",
    "\n",
    "        # Update GMM.\n",
    "        self.gmm.update(xux, K)\n",
    "\n",
    "    def eval(self, Dx, Du, pts):\n",
    "        \"\"\"\n",
    "        Evaluate prior.\n",
    "        Args:\n",
    "            pts: A N x Dx+Du+Dx matrix.\n",
    "        \"\"\"\n",
    "        # Construct query data point by rearranging entries and adding\n",
    "        # in reference.\n",
    "        assert pts.shape[1] == Dx + Du + Dx\n",
    "\n",
    "        # Perform query and fix mean.\n",
    "        mu0, Phi, m, n0 = self.gmm.inference(pts)\n",
    "\n",
    "        # Factor in multiplier.\n",
    "        n0 = n0 * self._strength\n",
    "        m = m * self._strength\n",
    "\n",
    "        # Multiply Phi by m (since it was normalized before).\n",
    "        Phi *= m\n",
    "        return mu0, Phi, m, n0"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Linear regression dynamics with an arbitrary prior: `DynamicsLRPrior`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "#regularization = 1e-6\n",
    "\n",
    "class DynamicsLRPrior(Dynamics):\n",
    "    \"\"\" Dynamics with linear regression, with arbitrary prior. \"\"\"\n",
    "    def __init__(self, hyperparams):\n",
    "    #def __init__(self):\n",
    "        Dynamics.__init__(self, hyperparams)\n",
    "        #Dynamics.__init__(self)\n",
    "        self.Fm = None\n",
    "        self.fv = None\n",
    "        self.dyn_covar = None        \n",
    "        #self.prior = DynamicsPriorGMM() # Refer to the corresponding class for a deeper understanding\n",
    "        self.prior = self._hyperparams['prior']['type'](self._hyperparams['prior'])\n",
    "\n",
    "    def update_prior(self, samples):\n",
    "        \"\"\" Update dynamics prior. \"\"\"\n",
    "        X = samples.get_X()\n",
    "        U = samples.get_U()\n",
    "        self.prior.update(X, U)\n",
    "\n",
    "    def get_prior(self):\n",
    "        \"\"\" Return the dynamics prior. \"\"\"\n",
    "        return self.prior\n",
    "\n",
    "    #TODO: Merge this with DynamicsLR.fit - lots of duplicated code.\n",
    "    def fit(self, X, U):\n",
    "        \"\"\" Fit dynamics. \"\"\"\n",
    "        N, T, dX = X.shape\n",
    "        dU = U.shape[2]\n",
    "\n",
    "        if N == 1:\n",
    "            raise ValueError(\"Cannot fit dynamics on 1 sample\")\n",
    "\n",
    "        self.Fm = np.zeros([T, dX, dX+dU])\n",
    "        self.fv = np.zeros([T, dX])\n",
    "        self.dyn_covar = np.zeros([T, dX, dX])\n",
    "\n",
    "        it = slice(dX+dU)\n",
    "        ip = slice(dX+dU, dX+dU+dX)\n",
    "        # Fit dynamics with least squares regression.\n",
    "        dwts = (1.0 / N) * np.ones(N)\n",
    "        for t in range(T - 1):\n",
    "            Ys = np.c_[X[:, t, :], U[:, t, :], X[:, t+1, :]]\n",
    "            # Obtain Normal-inverse-Wishart prior.\n",
    "            mu0, Phi, mm, n0 = self.prior.eval(dX, dU, Ys)\n",
    "            sig_reg = np.zeros((dX+dU+dX, dX+dU+dX))\n",
    "            sig_reg[it, it] = self._hyperparams['regularization']\n",
    "            Fm, fv, dyn_covar = gauss_fit_joint_prior(Ys,\n",
    "                        mu0, Phi, mm, n0, dwts, dX+dU, dX, sig_reg)\n",
    "            self.Fm[t, :, :] = Fm\n",
    "            self.fv[t, :] = fv\n",
    "            self.dyn_covar[t, :, :] = dyn_covar\n",
    "        return self.Fm, self.fv, self.dyn_covar\n"
   ]
  },
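  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The per-timestep fit in `DynamicsLRPrior.fit` is, at its core, Gaussian conditioning: stack each sample as `[x_t, u_t, x_{t+1}]`, estimate a joint Gaussian, and condition `x_{t+1}` on `[x_t, u_t]`. Below is a minimal sketch of that conditioning step with the prior terms dropped (the names `A`, `B` and `Xn` are ours, not part of the codebase):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Ground-truth linear dynamics x' = A x + B u (noise-free), dX=2, dU=1.\n",
    "rng = np.random.default_rng(0)\n",
    "A = np.array([[1.0, 0.1], [0.0, 1.0]])\n",
    "B = np.array([[0.0], [0.1]])\n",
    "N, dX, dU = 50, 2, 1\n",
    "X = rng.normal(size=(N, dX))\n",
    "U = rng.normal(size=(N, dU))\n",
    "Xn = X.dot(A.T) + U.dot(B.T)\n",
    "\n",
    "# Stack samples as [x_t, u_t, x_{t+1}] and estimate the joint Gaussian.\n",
    "Ys = np.c_[X, U, Xn]\n",
    "mu = np.mean(Ys, axis=0)\n",
    "sigma = np.cov(Ys, rowvar=False)\n",
    "\n",
    "# Condition: Fm = Sigma_yx Sigma_xx^{-1}, fv = mu_y - Fm mu_x.\n",
    "it, ip = slice(dX + dU), slice(dX + dU, dX + dU + dX)\n",
    "Fm = np.linalg.solve(sigma[it, it], sigma[it, ip]).T\n",
    "fv = mu[ip] - Fm.dot(mu[it])\n",
    "print(np.allclose(Fm, np.c_[A, B], atol=1e-6))  # True\n",
    "```\n",
    "\n",
    "With a Normal-inverse-Wishart prior, `gauss_fit_joint_prior` instead blends these empirical moments with the prior's `mu0` and `Phi` before conditioning, which is what keeps the fit stable when `N` is small."
   ]
  },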
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Algorithm\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### `BundleType`\n",
    "General utility classes and functions: the `BundleType` record, `check_shape`, and `finite_differences`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class BundleType(object):\n",
    "    \"\"\"\n",
    "    This class bundles many fields, similar to a record or a mutable\n",
    "    namedtuple.\n",
    "    \"\"\"\n",
    "    def __init__(self, variables):\n",
    "        for var, val in variables.items():\n",
    "            object.__setattr__(self, var, val)\n",
    "\n",
    "    # Freeze fields so new ones cannot be set.\n",
    "    def __setattr__(self, key, value):\n",
    "        if not hasattr(self, key):\n",
    "            raise AttributeError(\"%r has no attribute %s\" % (self, key))\n",
    "        object.__setattr__(self, key, value)\n",
    "\n",
    "\n",
    "def check_shape(value, expected_shape, name=''):\n",
    "    \"\"\"\n",
    "    Throws a ValueError if value.shape != expected_shape.\n",
    "    Args:\n",
    "        value: Matrix to shape check.\n",
    "        expected_shape: A tuple or list of integers.\n",
    "        name: An optional name to add to the exception message.\n",
    "    \"\"\"\n",
    "    if value.shape != tuple(expected_shape):\n",
    "        raise ValueError('Shape mismatch %s: Expected %s, got %s' %\n",
    "                         (name, str(expected_shape), str(value.shape)))\n",
    "\n",
    "\n",
    "def finite_differences(func, inputs, func_output_shape=(), epsilon=1e-5):\n",
    "    \"\"\"\n",
    "    Computes gradients via finite differences.\n",
    "    derivative = (func(x+epsilon) - func(x-epsilon)) / (2*epsilon)\n",
    "    Args:\n",
    "        func: Function to compute gradient of. Inputs and outputs can be\n",
    "            arbitrary dimension.\n",
    "        inputs: Vector value to compute gradient at.\n",
    "        func_output_shape: Shape of the output of func. Default is\n",
    "            empty-tuple, which works for scalar-valued functions.\n",
    "        epsilon: Difference to use for computing gradient.\n",
    "    Returns:\n",
    "        Gradient vector of each dimension of func with respect to each\n",
    "        dimension of input.\n",
    "    \"\"\"\n",
    "    gradient = np.zeros(inputs.shape+func_output_shape)\n",
    "    for idx, _ in np.ndenumerate(inputs):\n",
    "        test_input = np.copy(inputs)\n",
    "        test_input[idx] += epsilon\n",
    "        obj_d1 = func(test_input)\n",
    "        assert obj_d1.shape == func_output_shape\n",
    "        test_input = np.copy(inputs)\n",
    "        test_input[idx] -= epsilon\n",
    "        obj_d2 = func(test_input)\n",
    "        assert obj_d2.shape == func_output_shape\n",
    "        diff = (obj_d1 - obj_d2) / (2 * epsilon)\n",
    "        gradient[idx] += diff\n",
    "    return gradient"
   ]
  },
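  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`finite_differences` is easy to sanity-check against a function with a known gradient. The sketch below applies the same central-difference scheme to f(x) = sum(x^2), whose analytic gradient is 2x (`central_diff` is our stand-in name for a scalar-valued version of `finite_differences`):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def central_diff(func, x, eps=1e-5):\n",
    "    # Same scheme as finite_differences above, specialized to scalar func.\n",
    "    g = np.zeros_like(x)\n",
    "    for i in range(x.size):\n",
    "        xp, xm = x.copy(), x.copy()\n",
    "        xp[i] += eps\n",
    "        xm[i] -= eps\n",
    "        g[i] = (func(xp) - func(xm)) / (2 * eps)\n",
    "    return g\n",
    "\n",
    "f = lambda x: np.sum(x ** 2)  # analytic gradient: 2x\n",
    "x0 = np.array([1.0, -2.0, 0.5])\n",
    "print(np.allclose(central_diff(f, x0), 2 * x0, atol=1e-6))  # True\n",
    "```"
   ]
  },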
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### `IterationData` and `TrajectoryInfo`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class IterationData(BundleType):\n",
    "    \"\"\" Collection of iteration variables. \"\"\"\n",
    "    def __init__(self):\n",
    "        variables = {\n",
    "            'sample_list': None,  # List of samples for the current iteration.\n",
    "            'traj_info': None,  # Current TrajectoryInfo object.\n",
    "            'pol_info': None,  # Current PolicyInfo object.\n",
    "            'traj_distr': None,  # Initial trajectory distribution.\n",
    "            'new_traj_distr': None, # Updated trajectory distribution.\n",
    "            'cs': None,  # Sample costs of the current iteration.\n",
    "            'step_mult': 1.0,  # KL step multiplier for the current iteration.\n",
    "            'eta': 1.0,  # Dual variable used in LQR backward pass.\n",
    "        }\n",
    "        BundleType.__init__(self, variables)\n",
    "\n",
    "\n",
    "class TrajectoryInfo(BundleType):\n",
    "    \"\"\" Collection of trajectory-related variables. \"\"\"\n",
    "    def __init__(self):\n",
    "        variables = {\n",
    "            'dynamics': None,  # Dynamics object for the current iteration.\n",
    "            'x0mu': None,  # Mean for the initial state, used by the dynamics.\n",
    "            'x0sigma': None,  # Covariance for the initial state distribution.\n",
    "            'cc': None,  # Cost estimate constant term.\n",
    "            'cv': None,  # Cost estimate vector term.\n",
    "            'Cm': None,  # Cost estimate matrix term.\n",
    "            'last_kl_step': float('inf'),  # KL step of the previous iteration.\n",
    "        }\n",
    "        BundleType.__init__(self, variables)"
   ]
  },
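  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`TrajectoryInfo` stores the cost model as a time-indexed quadratic c(y) = cc + cv.y + 0.5 y^T Cm y in y = [x; u]. The `_eval_cost` method of the `Algorithm` class below builds it by Taylor-expanding the cost around each sample and re-centering at the origin with `rdiff = -yhat`. A sketch of that re-centering for a single timestep, on a cost that is exactly quadratic (all names here are ours):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(1)\n",
    "d = 3\n",
    "# A known quadratic cost c(y) = 0.5 y^T Cq y + v^T y + b.\n",
    "Cq = np.diag([1.0, 2.0, 3.0])\n",
    "v = np.array([0.5, -1.0, 0.0])\n",
    "b = 2.0\n",
    "cost = lambda y: 0.5 * y.dot(Cq).dot(y) + v.dot(y) + b\n",
    "\n",
    "# Taylor expansion around a sample yhat (exact for a quadratic).\n",
    "yhat = rng.normal(size=d)\n",
    "cc0 = cost(yhat)            # value at the sample\n",
    "cv0 = Cq.dot(yhat) + v      # gradient at the sample\n",
    "Cm = Cq                     # Hessian\n",
    "\n",
    "# Re-center at the origin, as _eval_cost does with rdiff = -yhat.\n",
    "rdiff = -yhat\n",
    "cv = cv0 + Cm.dot(rdiff)\n",
    "cc = cc0 + cv0.dot(rdiff) + 0.5 * rdiff.dot(Cm).dot(rdiff)\n",
    "\n",
    "# The re-centered model reproduces the true cost at any point.\n",
    "y = rng.normal(size=d)\n",
    "print(np.isclose(cc + cv.dot(y) + 0.5 * y.dot(Cm).dot(y), cost(y)))  # True\n",
    "```"
   ]
  },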
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### `PolicyInfo`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class PolicyInfo(BundleType):\n",
    "    \"\"\" Collection of policy-related variables. \"\"\"\n",
    "    def __init__(self, hyperparams):\n",
    "        T, dU, dX = hyperparams['T'], hyperparams['dU'], hyperparams['dX']\n",
    "        variables = {\n",
    "            'lambda_k': np.zeros((T, dU)),  # Dual variables.\n",
    "            'lambda_K': np.zeros((T, dU, dX)),  # Dual variables.\n",
    "            'pol_wt': hyperparams['init_pol_wt'] * np.ones(T),  # Policy weight.\n",
    "            'pol_mu': None,  # Mean of the current policy output.\n",
    "            'pol_sig': None,  # Covariance of the current policy output.\n",
    "            'pol_K': np.zeros((T, dU, dX)),  # Policy linearization.\n",
    "            'pol_k': np.zeros((T, dU)),  # Policy linearization.\n",
    "            'pol_S': np.zeros((T, dU, dU)),  # Policy linearization covariance.\n",
    "            'chol_pol_S': np.zeros((T, dU, dU)),  # Cholesky decomp of covar.\n",
    "            'prev_kl': None,  # Previous KL divergence.\n",
    "            'init_kl': None,  # The initial KL divergence, before the iteration.\n",
    "            'policy_samples': [],  # List of current policy samples.\n",
    "            'policy_prior': None,  # Current prior for policy linearization.\n",
    "        }\n",
    "        BundleType.__init__(self, variables)\n",
    "\n",
    "    def traj_distr(self):\n",
    "        \"\"\" Create a trajectory distribution object from policy info. \"\"\"\n",
    "        T, dU, dX = self.pol_K.shape\n",
    "        # Compute inverse policy covariances.\n",
    "        inv_pol_S = np.empty_like(self.chol_pol_S)\n",
    "        for t in range(T):\n",
    "            inv_pol_S[t, :, :] = np.linalg.solve(\n",
    "                self.chol_pol_S[t, :, :],\n",
    "                np.linalg.solve(self.chol_pol_S[t, :, :].T, np.eye(dU))\n",
    "            )\n",
    "        return LinearGaussianPolicy(self.pol_K, self.pol_k, self.pol_S,\n",
    "                self.chol_pol_S, inv_pol_S)\n"
   ]
  },
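  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`traj_distr` inverts each policy covariance with two triangular solves against its Cholesky factor rather than calling `np.linalg.inv`. Note that `scipy.linalg.cholesky` returns the upper factor U with S = U^T U by default, which is the convention `chol_pol_S` follows; a standalone sketch (the names `A` and `S` are ours):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import scipy.linalg\n",
    "\n",
    "rng = np.random.default_rng(2)\n",
    "dU = 3\n",
    "A = rng.normal(size=(dU, dU))\n",
    "S = A.dot(A.T) + dU * np.eye(dU)  # a symmetric positive-definite covariance\n",
    "\n",
    "# Upper Cholesky factor, S = chol^T chol, as in chol_pol_S above.\n",
    "chol = scipy.linalg.cholesky(S)\n",
    "# Two triangular solves: chol^{-1} chol^{-T} = (chol^T chol)^{-1} = S^{-1}.\n",
    "inv_S = np.linalg.solve(chol, np.linalg.solve(chol.T, np.eye(dU)))\n",
    "print(np.allclose(inv_S, np.linalg.inv(S)))  # True\n",
    "```"
   ]
  },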
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Algorithm superclass: `Algorithm`\n",
    "Base algorithm class"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "import random\n",
    "\n",
    "def extract_condition(hyperparams, m):\n",
    "    \"\"\"\n",
    "    Pull the relevant hyperparameters corresponding to the specified\n",
    "    condition, and return a new hyperparameter dictionary.\n",
    "    \"\"\"\n",
    "    return {var: val[m] if isinstance(val, list) else val\n",
    "            for var, val in hyperparams.items()}\n",
    "\n",
    "# Algorithm\n",
    "ALG = {\n",
    "    'inner_iterations': 1,  # Number of iterations.\n",
    "    'min_eta': 1e-5,  # Minimum initial Lagrange multiplier in DGD for\n",
    "                      # trajectory optimization.\n",
    "    'kl_step': 0.2,\n",
    "    'min_step_mult': 0.01,\n",
    "    'max_step_mult': 10.0,\n",
    "    'min_mult': 0.1,\n",
    "    'max_mult': 5.0,\n",
    "    # Trajectory settings.\n",
    "    'initial_state_var': 1e-6,\n",
    "    'init_traj_distr': None,  # A list of initial LinearGaussianPolicy\n",
    "                              # objects for each condition.\n",
    "    # Trajectory optimization.\n",
    "    'traj_opt': None,\n",
    "    # Weight of maximum entropy term in trajectory optimization.\n",
    "    'max_ent_traj': 0.0,\n",
    "    # Dynamics hyperparams.\n",
    "    'dynamics': None,\n",
    "    # Costs.\n",
    "    'cost': None,  # A list of Cost objects for each condition.\n",
    "    # Whether or not to sample with neural net policy (only for badmm/mdgps).\n",
    "    'sample_on_policy': False,\n",
    "    # Indicates whether the algorithm requires fitting of the dynamics.\n",
    "    'fit_dynamics': True,\n",
    "}\n",
    "\n",
    "class Algorithm(object):\n",
    "    \"\"\" Algorithm superclass. \"\"\"\n",
    "    __metaclass__ = abc.ABCMeta\n",
    "\n",
    "    def __init__(self, hyperparams):\n",
    "        config = copy.deepcopy(ALG)\n",
    "        config.update(hyperparams)\n",
    "        self._hyperparams = config\n",
    "\n",
    "        if 'train_conditions' in hyperparams:\n",
    "            self._cond_idx = hyperparams['train_conditions']\n",
    "            self.M = len(self._cond_idx)\n",
    "        else:\n",
    "            self.M = hyperparams['conditions']\n",
    "            self._cond_idx = range(self.M)\n",
    "            self._hyperparams['train_conditions'] = self._cond_idx\n",
    "            self._hyperparams['test_conditions'] = self._cond_idx\n",
    "        self.iteration_count = 0\n",
    "\n",
    "        # Grab a few values from the agent.\n",
    "        agent = self._hyperparams['agent']\n",
    "        self.T = self._hyperparams['T'] = agent.T\n",
    "        self.dU = self._hyperparams['dU'] = agent.dU\n",
    "        self.dX = self._hyperparams['dX'] = agent.dX\n",
    "        self.dO = self._hyperparams['dO'] = agent.dO\n",
    "\n",
    "        init_traj_distr = config['init_traj_distr']\n",
    "        init_traj_distr['x0'] = agent.x0\n",
    "        init_traj_distr['dX'] = agent.dX\n",
    "        init_traj_distr['dU'] = agent.dU\n",
    "        del self._hyperparams['agent']  # Don't want to pickle this.\n",
    "\n",
    "        # IterationData objects for each condition.\n",
    "        self.cur = [IterationData() for _ in range(self.M)]\n",
    "        self.prev = [IterationData() for _ in range(self.M)]\n",
    "\n",
    "        if self._hyperparams['fit_dynamics']:\n",
    "            dynamics = self._hyperparams['dynamics']\n",
    "\n",
    "        for m in range(self.M):\n",
    "            self.cur[m].traj_info = TrajectoryInfo()\n",
    "            if self._hyperparams['fit_dynamics']:\n",
    "                self.cur[m].traj_info.dynamics = dynamics['type'](dynamics)\n",
    "            init_traj_distr = extract_condition(\n",
    "                self._hyperparams['init_traj_distr'], self._cond_idx[m]\n",
    "            )\n",
    "            self.cur[m].traj_distr = init_traj_distr['type'](init_traj_distr)\n",
    "\n",
    "        self.traj_opt = hyperparams['traj_opt']['type'](\n",
    "            hyperparams['traj_opt']\n",
    "        )\n",
    "        if type(hyperparams['cost']) == list:\n",
    "            self.cost = [\n",
    "                hyperparams['cost'][i]['type'](hyperparams['cost'][i])\n",
    "                for i in range(self.M)\n",
    "            ]\n",
    "        else:\n",
    "            self.cost = [\n",
    "                hyperparams['cost']['type'](hyperparams['cost'])\n",
    "                for _ in range(self.M)\n",
    "            ]\n",
    "        self.base_kl_step = self._hyperparams['kl_step']\n",
    "\n",
    "    @abc.abstractmethod\n",
    "    def iteration(self, sample_list):\n",
    "        \"\"\" Run iteration of the algorithm. \"\"\"\n",
    "        raise NotImplementedError(\"Must be implemented in subclass\")\n",
    "\n",
    "    def _update_dynamics(self):\n",
    "        \"\"\"\n",
    "        Instantiate dynamics objects and update prior. Fit dynamics to\n",
    "        current samples.\n",
    "        \"\"\"\n",
    "        for m in range(self.M):\n",
    "            cur_data = self.cur[m].sample_list\n",
    "            X = cur_data.get_X()\n",
    "            U = cur_data.get_U()\n",
    "\n",
    "            # Update prior and fit dynamics.\n",
    "            self.cur[m].traj_info.dynamics.update_prior(cur_data)\n",
    "            self.cur[m].traj_info.dynamics.fit(X, U)\n",
    "\n",
    "            # Fit x0mu/x0sigma.\n",
    "            x0 = X[:, 0, :]\n",
    "            x0mu = np.mean(x0, axis=0)\n",
    "            self.cur[m].traj_info.x0mu = x0mu\n",
    "            self.cur[m].traj_info.x0sigma = np.diag(\n",
    "                np.maximum(np.var(x0, axis=0),\n",
    "                           self._hyperparams['initial_state_var'])\n",
    "            )\n",
    "\n",
    "            prior = self.cur[m].traj_info.dynamics.get_prior()\n",
    "            if prior:\n",
    "                mu0, Phi, priorm, n0 = prior.initial_state()\n",
    "                N = len(cur_data)\n",
    "                self.cur[m].traj_info.x0sigma += \\\n",
    "                        Phi + (N*priorm) / (N+priorm) * \\\n",
    "                        np.outer(x0mu-mu0, x0mu-mu0) / (N+n0)\n",
    "\n",
    "    def _update_trajectories(self):\n",
    "        \"\"\"\n",
    "        Compute new linear Gaussian controllers.\n",
    "        \"\"\"\n",
    "        if not hasattr(self, 'new_traj_distr'):\n",
    "            self.new_traj_distr = [\n",
    "                self.cur[cond].traj_distr for cond in range(self.M)\n",
    "            ]\n",
    "        for cond in range(self.M):\n",
    "            self.new_traj_distr[cond], self.cur[cond].eta = \\\n",
    "                    self.traj_opt.update(cond, self)\n",
    "\n",
    "    def _eval_cost(self, cond):\n",
    "        \"\"\"\n",
    "        Evaluate costs for all samples for a condition.\n",
    "        Args:\n",
    "            cond: Condition to evaluate cost on.\n",
    "        \"\"\"\n",
    "        # Constants.\n",
    "        T, dX, dU = self.T, self.dX, self.dU\n",
    "        N = len(self.cur[cond].sample_list)\n",
    "\n",
    "        # Compute cost.\n",
    "        cs = np.zeros((N, T))\n",
    "        cc = np.zeros((N, T))\n",
    "        cv = np.zeros((N, T, dX+dU))\n",
    "        Cm = np.zeros((N, T, dX+dU, dX+dU))\n",
    "        for n in range(N):\n",
    "            sample = self.cur[cond].sample_list[n]\n",
    "            # Get costs.\n",
    "            l, lx, lu, lxx, luu, lux = self.cost[cond].eval(sample)\n",
    "            cc[n, :] = l\n",
    "            cs[n, :] = l\n",
    "\n",
    "            # Assemble matrix and vector.\n",
    "            cv[n, :, :] = np.c_[lx, lu]\n",
    "            Cm[n, :, :, :] = np.concatenate(\n",
    "                (np.c_[lxx, np.transpose(lux, [0, 2, 1])], np.c_[lux, luu]),\n",
    "                axis=1\n",
    "            )\n",
    "\n",
    "            # Adjust for expanding cost around a sample.\n",
    "            X = sample.get_X()\n",
    "            U = sample.get_U()\n",
    "            yhat = np.c_[X, U]\n",
    "            rdiff = -yhat\n",
    "            rdiff_expand = np.expand_dims(rdiff, axis=2)\n",
    "            cv_update = np.sum(Cm[n, :, :, :] * rdiff_expand, axis=1)\n",
    "            cc[n, :] += np.sum(rdiff * cv[n, :, :], axis=1) + 0.5 * \\\n",
    "                    np.sum(rdiff * cv_update, axis=1)\n",
    "            cv[n, :, :] += cv_update\n",
    "\n",
    "        # Fill in cost estimate.\n",
    "        self.cur[cond].traj_info.cc = np.mean(cc, 0)  # Constant term (scalar).\n",
    "        self.cur[cond].traj_info.cv = np.mean(cv, 0)  # Linear term (vector).\n",
    "        self.cur[cond].traj_info.Cm = np.mean(Cm, 0)  # Quadratic term (matrix).\n",
    "\n",
    "        self.cur[cond].cs = cs  # True value of cost.\n",
    "\n",
    "    def _advance_iteration_variables(self):\n",
    "        \"\"\"\n",
    "        Move all 'cur' variables to 'prev', and advance iteration\n",
    "        counter.\n",
    "        \"\"\"\n",
    "        self.iteration_count += 1\n",
    "        self.prev = copy.deepcopy(self.cur)\n",
    "        # TODO: change IterationData to reflect new stuff better\n",
    "        for m in range(self.M):\n",
    "            self.prev[m].new_traj_distr = self.new_traj_distr[m]\n",
    "        self.cur = [IterationData() for _ in range(self.M)]\n",
    "        for m in range(self.M):\n",
    "            self.cur[m].traj_info = TrajectoryInfo()\n",
    "            self.cur[m].traj_info.dynamics = copy.deepcopy(self.prev[m].traj_info.dynamics)\n",
    "            self.cur[m].step_mult = self.prev[m].step_mult\n",
    "            self.cur[m].eta = self.prev[m].eta\n",
    "            self.cur[m].traj_distr = self.new_traj_distr[m]\n",
    "        delattr(self, 'new_traj_distr')\n",
    "\n",
    "    def _set_new_mult(self, predicted_impr, actual_impr, m):\n",
    "        \"\"\"\n",
    "        Adjust step size multiplier according to the predicted versus\n",
    "        actual improvement.\n",
    "        \"\"\"\n",
    "        # Model improvement as I = predicted_dI * KL + penalty * KL^2,\n",
    "        # where predicted_dI = pred/KL and penalty = (act-pred)/(KL^2).\n",
    "        # Optimize I w.r.t. KL: 0 = predicted_dI + 2 * penalty * KL =>\n",
    "        # KL' = (-predicted_dI)/(2*penalty) = (pred/(2*(pred-act))) * KL.\n",
    "        # Therefore, the new multiplier is given by pred/(2*(pred-act)).\n",
    "        new_mult = predicted_impr / (2.0 * max(1e-4,\n",
    "                                               predicted_impr - actual_impr))\n",
    "        new_mult = max(self._hyperparams['min_mult'],\n",
    "                       min(self._hyperparams['max_mult'], new_mult))\n",
    "        new_step = max(\n",
    "            min(new_mult * self.cur[m].step_mult,\n",
    "                self._hyperparams['max_step_mult']),\n",
    "            self._hyperparams['min_step_mult']\n",
    "        )\n",
    "        self.cur[m].step_mult = new_step\n",
    "\n",
    "        if new_mult > 1:\n",
    "            LOGGER.debug('Increasing step size multiplier to %f', new_step)\n",
    "        else:\n",
    "            LOGGER.debug('Decreasing step size multiplier to %f', new_step)\n",
    "\n",
    "    def _measure_ent(self, m):\n",
    "        \"\"\" Measure the entropy of the current trajectory. \"\"\"\n",
    "        ent = 0\n",
    "        for t in range(self.T):\n",
    "            ent = ent + np.sum(\n",
    "                np.log(np.diag(self.cur[m].traj_distr.chol_pol_covar[t, :, :]))\n",
    "            )\n",
    "        return ent\n",
    "\n",
    "    # For pickling.\n",
    "    def __getstate__(self):\n",
    "        state = self.__dict__.copy()\n",
    "        state['_random_state'] = random.getstate()\n",
    "        state['_np_random_state'] = np.random.get_state()\n",
    "        return state\n",
    "\n",
    "    # For unpickling.\n",
    "    def __setstate__(self, state):\n",
    "        self.__dict__ = state\n",
    "        random.setstate(state.pop('_random_state'))\n",
    "        np.random.set_state(state.pop('_np_random_state'))"
   ]
  },
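  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`extract_condition` lets a single hyperparameter dictionary cover several initial conditions: list-valued entries are indexed per condition, while everything else is shared. A small illustration (the dictionary contents here are made up):\n",
    "\n",
    "```python\n",
    "def extract_condition(hyperparams, m):\n",
    "    # Copied from the cell above: list-valued entries are per-condition.\n",
    "    return {var: val[m] if isinstance(val, list) else val\n",
    "            for var, val in hyperparams.items()}\n",
    "\n",
    "hyper = {'x0': [[0.0, 0.0], [1.0, 0.5]],  # one initial state per condition\n",
    "         'dX': 2}                          # shared across conditions\n",
    "print(extract_condition(hyper, 1))  # {'x0': [1.0, 0.5], 'dX': 2}\n",
    "```"
   ]
  },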
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### `AlgorithmBADMM`\n",
    "BADMM-based Guided Policy Search (GPS) algorithm"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "import scipy as sp\n",
    "\n",
    "# AlgorithmBADMM\n",
    "ALG_BADMM = {\n",
    "    'inner_iterations': 4,\n",
    "    'policy_dual_rate': 0.1,\n",
    "    'policy_dual_rate_covar': 0.0,\n",
    "    'fixed_lg_step': 0,\n",
    "    'lg_step_schedule': 10.0,\n",
    "    'ent_reg_schedule': 0.0,\n",
    "    'init_pol_wt': 0.01,\n",
    "    'policy_sample_mode': 'add',\n",
    "    'exp_step_increase': 2.0,\n",
    "    'exp_step_decrease': 0.5,\n",
    "    'exp_step_upper': 0.5,\n",
    "    'exp_step_lower': 1.0,\n",
    "}\n",
    "\n",
    "class AlgorithmBADMM(Algorithm):\n",
    "    \"\"\"\n",
    "    Sample-based joint policy learning and trajectory optimization with\n",
    "    BADMM-based guided policy search algorithm.\n",
    "    \"\"\"\n",
    "    def __init__(self, hyperparams):\n",
    "        config = copy.deepcopy(ALG_BADMM)\n",
    "        config.update(hyperparams)\n",
    "        Algorithm.__init__(self, config)\n",
    "\n",
    "        policy_prior = self._hyperparams['policy_prior']\n",
    "        for m in range(self.M):\n",
    "            self.cur[m].pol_info = PolicyInfo(self._hyperparams)\n",
    "            self.cur[m].pol_info.policy_prior = \\\n",
    "                    policy_prior['type'](policy_prior)\n",
    "\n",
    "        self.policy_opt = self._hyperparams['policy_opt']['type'](\n",
    "            self._hyperparams['policy_opt'], self.dO, self.dU\n",
    "        )\n",
    "\n",
    "    def iteration(self, sample_lists):\n",
    "        \"\"\"\n",
    "        Run iteration of BADMM-based guided policy search.\n",
    "\n",
    "        Args:\n",
    "            sample_lists: List of SampleList objects for each condition.\n",
    "        \"\"\"\n",
    "        for m in range(self.M):\n",
    "            self.cur[m].sample_list = sample_lists[m]\n",
    "\n",
    "        self._set_interp_values()\n",
    "        self._update_dynamics()  # Update dynamics model using all samples.\n",
    "        self._update_step_size()  # KL Divergence step size.\n",
    "\n",
    "        for m in range(self.M):\n",
    "            # Save initial KL divergence for debugging/visualization.\n",
    "            self.cur[m].pol_info.init_kl = self._policy_kl(m)[0]\n",
    "\n",
    "        # Run inner loop to compute new policies.\n",
    "        for inner_itr in range(self._hyperparams['inner_iterations']):\n",
    "            #TODO: Could start from init controller.\n",
    "            if self.iteration_count > 0 or inner_itr > 0:\n",
    "                # Update the policy.\n",
    "                self._update_policy(inner_itr)\n",
    "            for m in range(self.M):\n",
    "                self._update_policy_fit(m)  # Update policy priors.\n",
    "            if self.iteration_count > 0 or inner_itr > 0:\n",
    "                step = (inner_itr == self._hyperparams['inner_iterations'] - 1)\n",
    "                # Update dual variables.\n",
    "                for m in range(self.M):\n",
    "                    self._policy_dual_step(m, step=step)\n",
    "            self._update_trajectories()\n",
    "\n",
    "        self._advance_iteration_variables()\n",
    "\n",
    "    def _set_interp_values(self):\n",
    "        \"\"\"\n",
    "        Use iteration-based interpolation to set values of some\n",
    "        schedule-based parameters.\n",
    "        \"\"\"\n",
    "        # Compute temporal interpolation value.\n",
    "        t = min((self.iteration_count + 1.0) /\n",
    "                (self._hyperparams['iterations'] - 1), 1)\n",
    "        # Perform iteration-based interpolation of entropy penalty.\n",
    "        if type(self._hyperparams['ent_reg_schedule']) in (int, float):\n",
    "            self.policy_opt.set_ent_reg(self._hyperparams['ent_reg_schedule'])\n",
    "        else:\n",
    "            sch = self._hyperparams['ent_reg_schedule']\n",
    "            self.policy_opt.set_ent_reg(\n",
    "                np.exp(np.interp(t, np.linspace(0, 1, num=len(sch)),\n",
    "                                 np.log(sch)))\n",
    "            )\n",
    "        # Perform iteration-based interpolation of Lagrange multiplier.\n",
    "        if type(self._hyperparams['lg_step_schedule']) in (int, float):\n",
    "            self._hyperparams['lg_step'] = self._hyperparams['lg_step_schedule']\n",
    "        else:\n",
    "            sch = self._hyperparams['lg_step_schedule']\n",
    "            self._hyperparams['lg_step'] = np.exp(\n",
    "                np.interp(t, np.linspace(0, 1, num=len(sch)), np.log(sch))\n",
    "            )\n",
    "\n",
    "    def _update_step_size(self):\n",
    "        \"\"\" Evaluate costs on samples, and adjust the step size. \"\"\"\n",
    "        # Evaluate cost function for all conditions and samples.\n",
    "        for m in range(self.M):\n",
    "            self._update_policy_fit(m, init=True)\n",
    "            self._eval_cost(m)\n",
    "            # Adjust step size relative to the previous iteration.\n",
    "            if self.iteration_count >= 1 and self.prev[m].sample_list:\n",
    "                self._stepadjust(m)\n",
    "\n",
    "    def _update_policy(self, inner_itr):\n",
    "        \"\"\" Compute the new policy. \"\"\"\n",
    "        dU, dO, T = self.dU, self.dO, self.T\n",
    "        # Compute target mean, cov, and weight for each sample.\n",
    "        obs_data, tgt_mu = np.zeros((0, T, dO)), np.zeros((0, T, dU))\n",
    "        tgt_prc, tgt_wt = np.zeros((0, T, dU, dU)), np.zeros((0, T))\n",
    "        for m in range(self.M):\n",
    "            samples = self.cur[m].sample_list\n",
    "            X = samples.get_X()\n",
    "            N = len(samples)\n",
    "            if inner_itr > 0:\n",
    "                traj, pol_info = self.new_traj_distr[m], self.cur[m].pol_info\n",
    "            else:\n",
    "                traj, pol_info = self.cur[m].traj_distr, self.cur[m].pol_info\n",
    "            mu = np.zeros((N, T, dU))\n",
    "            prc = np.zeros((N, T, dU, dU))\n",
    "            wt = np.zeros((N, T))\n",
    "            # Get time-indexed actions.\n",
    "            for t in range(T):\n",
    "                # Compute actions along this trajectory.\n",
    "                prc[:, t, :, :] = np.tile(traj.inv_pol_covar[t, :, :],\n",
    "                                          [N, 1, 1])\n",
    "                for i in range(N):\n",
    "                    mu[i, t, :] = \\\n",
    "                            (traj.K[t, :, :].dot(X[i, t, :]) + traj.k[t, :]) - \\\n",
    "                            np.linalg.solve(\n",
    "                                prc[i, t, :, :] / pol_info.pol_wt[t],\n",
    "                                pol_info.lambda_K[t, :, :].dot(X[i, t, :]) + \\\n",
    "                                        pol_info.lambda_k[t, :]\n",
    "                            )\n",
    "                wt[:, t].fill(pol_info.pol_wt[t])\n",
    "            tgt_mu = np.concatenate((tgt_mu, mu))\n",
    "            tgt_prc = np.concatenate((tgt_prc, prc))\n",
    "            tgt_wt = np.concatenate((tgt_wt, wt))\n",
    "            obs_data = np.concatenate((obs_data, samples.get_obs()))\n",
    "        self.policy_opt.update(obs_data, tgt_mu, tgt_prc, tgt_wt)\n",
    "\n",
    "    def _update_policy_fit(self, m, init=False):\n",
    "        \"\"\"\n",
    "        Re-estimate the local policy values in the neighborhood of the\n",
    "        trajectory.\n",
    "        Args:\n",
    "            m: Condition\n",
    "            init: Whether this is the initial fitting of the policy.\n",
    "        \"\"\"\n",
    "        dX, dU, T = self.dX, self.dU, self.T\n",
    "        # Choose samples to use.\n",
    "        samples = self.cur[m].sample_list\n",
    "        N = len(samples)\n",
    "        pol_info = self.cur[m].pol_info\n",
    "        X = samples.get_X()\n",
    "        obs = samples.get_obs().copy()\n",
    "        pol_mu, pol_sig = self.policy_opt.prob(obs)[:2]\n",
    "        pol_info.pol_mu, pol_info.pol_sig = pol_mu, pol_sig\n",
    "\n",
    "        # Update policy prior.\n",
    "        policy_prior = pol_info.policy_prior\n",
    "        if init:\n",
    "            samples = SampleList(self.cur[m].sample_list)\n",
    "            mode = self._hyperparams['policy_sample_mode']\n",
    "        else:\n",
    "            samples = SampleList([])\n",
    "            mode = 'add'  # Don't replace with empty samples.\n",
    "        policy_prior.update(samples, self.policy_opt, mode)\n",
    "\n",
    "        # Fit linearization and store in pol_info.\n",
    "        pol_info.pol_K, pol_info.pol_k, pol_info.pol_S = \\\n",
    "                policy_prior.fit(X, pol_mu, pol_sig)\n",
    "        for t in range(T):\n",
    "            pol_info.chol_pol_S[t, :, :] = \\\n",
    "                    sp.linalg.cholesky(pol_info.pol_S[t, :, :])\n",
    "\n",
    "    def _policy_dual_step(self, m, step=False):\n",
    "        \"\"\"\n",
    "        Update the dual variables for the specified condition.\n",
    "        Args:\n",
    "            m: Condition\n",
    "            step: Whether or not to update pol_wt.\n",
    "        \"\"\"\n",
    "        dU, T = self.dU, self.T\n",
    "        samples = self.cur[m].sample_list\n",
    "        N = len(samples)\n",
    "        X = samples.get_X()\n",
    "        if hasattr(self, 'new_traj_distr'):\n",
    "            traj, pol_info = self.new_traj_distr[m], self.cur[m].pol_info\n",
    "        else:\n",
    "            traj, pol_info = self.cur[m].traj_distr, self.cur[m].pol_info\n",
    "\n",
    "        # Compute trajectory action at each sampled state.\n",
    "        traj_mu = np.zeros((N, T, dU))\n",
    "        for i in range(N):\n",
    "            for t in range(T):\n",
    "                traj_mu[i, t, :] = traj.K[t, :, :].dot(X[i, t, :]) + \\\n",
    "                        traj.k[t, :]\n",
    "        # Compute policy action at each sampled state.\n",
    "        pol_mu = pol_info.pol_mu\n",
    "        # Compute the difference and increment based on pol_wt.\n",
    "        for t in range(T):\n",
    "            tU, pU = traj_mu[:, t, :], pol_mu[:, t, :]\n",
    "            # Increment mean term.\n",
    "            pol_info.lambda_k[t, :] -= self._hyperparams['policy_dual_rate'] * \\\n",
    "                    pol_info.pol_wt[t] * \\\n",
    "                    traj.inv_pol_covar[t, :, :].dot(np.mean(tU - pU, axis=0))\n",
    "            # Increment covariance term.\n",
    "            t_covar, p_covar = traj.K[t, :, :], pol_info.pol_K[t, :, :]\n",
    "            pol_info.lambda_K[t, :, :] -= \\\n",
    "                    self._hyperparams['policy_dual_rate_covar'] * \\\n",
    "                    pol_info.pol_wt[t] * \\\n",
    "                    traj.inv_pol_covar[t, :, :].dot(t_covar - p_covar)\n",
    "        # Compute KL divergence.\n",
    "        kl_m = self._policy_kl(m)[0]\n",
    "        if step:\n",
    "            lg_step = self._hyperparams['lg_step']\n",
    "            # Increment pol_wt based on change in KL divergence.\n",
    "            if self._hyperparams['fixed_lg_step'] == 1:\n",
    "                # Take fixed size step.\n",
    "                pol_info.pol_wt = np.array([\n",
    "                    max(wt + lg_step, 0) for wt in pol_info.pol_wt\n",
    "                ])\n",
    "            elif self._hyperparams['fixed_lg_step'] == 2:\n",
    "                # (In/De)crease based on change in constraint\n",
    "                # satisfaction.\n",
    "                if hasattr(pol_info, 'prev_kl'):\n",
    "                    kl_change = kl_m / pol_info.prev_kl\n",
    "                    for i in range(len(pol_info.pol_wt)):\n",
    "                        if kl_change[i] < 0.8:\n",
    "                            pol_info.pol_wt[i] *= 0.5\n",
    "                        elif kl_change[i] >= 0.95:\n",
    "                            pol_info.pol_wt[i] *= 2.0\n",
    "            elif self._hyperparams['fixed_lg_step'] == 3:\n",
    "                # (In/De)crease based on difference from average.\n",
    "                if hasattr(pol_info, 'prev_kl'):\n",
    "                    lower = np.mean(kl_m) - \\\n",
    "                            self._hyperparams['exp_step_lower'] * np.std(kl_m)\n",
    "                    upper = np.mean(kl_m) + \\\n",
    "                            self._hyperparams['exp_step_upper'] * np.std(kl_m)\n",
    "                    for i in range(len(pol_info.pol_wt)):\n",
    "                        if kl_m[i] < lower:\n",
    "                            pol_info.pol_wt[i] *= \\\n",
    "                                    self._hyperparams['exp_step_decrease']\n",
    "                        elif kl_m[i] >= upper:\n",
    "                            pol_info.pol_wt[i] *= \\\n",
    "                                    self._hyperparams['exp_step_increase']\n",
    "            else:\n",
    "                # Standard DGD step.\n",
    "                pol_info.pol_wt = np.array([\n",
    "                    max(pol_info.pol_wt[t] + lg_step * kl_m[t], 0)\n",
    "                    for t in range(T)\n",
    "                ])\n",
    "            pol_info.prev_kl = kl_m\n",
    "\n",
    "    def _advance_iteration_variables(self):\n",
    "        \"\"\"\n",
    "        Move all 'cur' variables to 'prev', reinitialize 'cur'\n",
    "        variables, and advance iteration counter.\n",
    "        \"\"\"\n",
    "        Algorithm._advance_iteration_variables(self)\n",
    "        for m in range(self.M):\n",
    "            self.cur[m].traj_info.last_kl_step = \\\n",
    "                    self.prev[m].traj_info.last_kl_step\n",
    "            self.cur[m].pol_info = copy.deepcopy(self.prev[m].pol_info)\n",
    "\n",
    "    def _stepadjust(self, m):\n",
    "        \"\"\"\n",
    "        Calculate new step sizes.\n",
    "        Args:\n",
    "            m: Condition\n",
    "        \"\"\"\n",
    "\n",
    "        # Compute values under Laplace approximation. This is the policy\n",
    "        # that the previous samples were actually drawn from under the\n",
    "        # dynamics that were estimated from the previous samples.\n",
    "        prev_laplace_obj, prev_laplace_kl = self._estimate_cost(\n",
    "            self.prev[m].traj_distr, self.prev[m].traj_info, self.prev[m].pol_info, m\n",
    "        )\n",
    "        # This is the policy that we just used under the dynamics that\n",
    "        # were estimated from the previous samples (so this is the cost\n",
    "        # we thought we would have).\n",
    "        new_pred_laplace_obj, new_pred_laplace_kl = self._estimate_cost(\n",
    "            self.cur[m].traj_distr, self.prev[m].traj_info, self.prev[m].pol_info, m\n",
    "        )\n",
    "\n",
    "        # This is the actual cost we have under the current trajectory\n",
    "        # based on the latest samples.\n",
    "        new_actual_laplace_obj, new_actual_laplace_kl = self._estimate_cost(\n",
    "            self.cur[m].traj_distr, self.cur[m].traj_info, self.cur[m].pol_info, m\n",
    "        )\n",
    "\n",
    "        # Measure the entropy of the current trajectory (for printout).\n",
    "        ent = self._measure_ent(m)\n",
    "\n",
    "        # Compute actual objective values based on the samples.\n",
    "        prev_mc_obj = np.mean(np.sum(self.prev[m].cs, axis=1), axis=0)\n",
    "        new_mc_obj = np.mean(np.sum(self.cur[m].cs, axis=1), axis=0)\n",
    "\n",
    "        # Compute sample-based estimate of KL divergence between policy\n",
    "        # and trajectories.\n",
    "        new_mc_kl = self._policy_kl(m)[0]\n",
    "        if self.iteration_count >= 1 and self.prev[m].sample_list:\n",
    "            prev_mc_kl = self._policy_kl(m, prev=True)[0]\n",
    "        else:\n",
    "            prev_mc_kl = np.zeros_like(new_mc_kl)\n",
    "\n",
    "        # Compute full policy KL divergence objective terms by applying\n",
    "        # the Lagrange multipliers.\n",
    "        pol_wt = self.cur[m].pol_info.pol_wt\n",
    "        prev_laplace_kl_sum = np.sum(prev_laplace_kl * pol_wt)\n",
    "        new_pred_laplace_kl_sum = np.sum(new_pred_laplace_kl * pol_wt)\n",
    "        new_actual_laplace_kl_sum = np.sum(new_actual_laplace_kl * pol_wt)\n",
    "        prev_mc_kl_sum = np.sum(prev_mc_kl * pol_wt)\n",
    "        new_mc_kl_sum = np.sum(new_mc_kl * pol_wt)\n",
    "\n",
    "        LOGGER.debug(\n",
    "            'Trajectory step: ent: %f cost: %f -> %f KL: %f -> %f',\n",
    "            ent, prev_mc_obj, new_mc_obj, prev_mc_kl_sum, new_mc_kl_sum\n",
    "        )\n",
    "\n",
    "        # Compute predicted and actual improvement.\n",
    "        predicted_impr = np.sum(prev_laplace_obj) + prev_laplace_kl_sum - \\\n",
    "                np.sum(new_pred_laplace_obj) - new_pred_laplace_kl_sum\n",
    "        actual_impr = np.sum(prev_laplace_obj) + prev_laplace_kl_sum - \\\n",
    "                np.sum(new_actual_laplace_obj) - new_actual_laplace_kl_sum\n",
    "\n",
    "        # Print improvement details.\n",
    "        LOGGER.debug('Previous cost: Laplace: %f MC: %f',\n",
    "                     np.sum(prev_laplace_obj), prev_mc_obj)\n",
    "        LOGGER.debug('Predicted new cost: Laplace: %f MC: %f',\n",
    "                     np.sum(new_pred_laplace_obj), new_mc_obj)\n",
    "        LOGGER.debug('Actual new cost: Laplace: %f MC: %f',\n",
    "                     np.sum(new_actual_laplace_obj), new_mc_obj)\n",
    "        LOGGER.debug('Previous KL: Laplace: %f MC: %f',\n",
    "                     np.sum(prev_laplace_kl), np.sum(prev_mc_kl))\n",
    "        LOGGER.debug('Predicted new KL: Laplace: %f MC: %f',\n",
    "                     np.sum(new_pred_laplace_kl), np.sum(new_mc_kl))\n",
    "        LOGGER.debug('Actual new KL: Laplace: %f MC: %f',\n",
    "                     np.sum(new_actual_laplace_kl), np.sum(new_mc_kl))\n",
    "        LOGGER.debug('Previous w KL: Laplace: %f MC: %f',\n",
    "                     prev_laplace_kl_sum, prev_mc_kl_sum)\n",
    "        LOGGER.debug('Predicted w new KL: Laplace: %f MC: %f',\n",
    "                     new_pred_laplace_kl_sum, new_mc_kl_sum)\n",
    "        LOGGER.debug('Actual w new KL: Laplace %f MC: %f',\n",
    "                     new_actual_laplace_kl_sum, new_mc_kl_sum)\n",
    "        LOGGER.debug('Predicted/actual improvement: %f / %f',\n",
    "                     predicted_impr, actual_impr)\n",
    "\n",
    "        # Compute actual KL step taken at last iteration.\n",
    "        actual_step = self.cur[m].traj_info.last_kl_step / \\\n",
    "                (self._hyperparams['kl_step'] * self.T)\n",
    "        if actual_step < self.cur[m].step_mult:\n",
    "            self.cur[m].step_mult = max(actual_step,\n",
    "                                        self._hyperparams['min_step_mult'])\n",
    "\n",
    "        self._set_new_mult(predicted_impr, actual_impr, m)\n",
    "\n",
    "    def _policy_kl(self, m, prev=False):\n",
    "        \"\"\"\n",
    "        Monte-Carlo estimate of KL divergence between policy and\n",
    "        trajectory.\n",
    "        \"\"\"\n",
    "        dU, T = self.dU, self.T\n",
    "        if prev:\n",
    "            traj, pol_info = self.prev[m].traj_distr, self.cur[m].pol_info\n",
    "            samples = self.prev[m].sample_list\n",
    "        else:\n",
    "            traj, pol_info = self.cur[m].traj_distr, self.cur[m].pol_info\n",
    "            samples = self.cur[m].sample_list\n",
    "        N = len(samples)\n",
    "        X, obs = samples.get_X(), samples.get_obs()\n",
    "        kl, kl_m = np.zeros((N, T)), np.zeros(T)\n",
    "        kl_l, kl_lm = np.zeros((N, T)), np.zeros(T)\n",
    "        # Compute policy mean and covariance at each sample.\n",
    "        pol_mu, _, pol_prec, pol_det_sigma = self.policy_opt.prob(obs.copy())\n",
    "        # Compute KL divergence.\n",
    "        for t in range(T):\n",
    "            # Compute trajectory action at sample.\n",
    "            traj_mu = np.zeros((N, dU))\n",
    "            for i in range(N):\n",
    "                traj_mu[i, :] = traj.K[t, :, :].dot(X[i, t, :]) + traj.k[t, :]\n",
    "            diff = pol_mu[:, t, :] - traj_mu\n",
    "            tr_pp_ct = pol_prec[:, t, :, :] * traj.pol_covar[t, :, :]\n",
    "            k_ln_det_ct = 0.5 * dU + np.sum(\n",
    "                np.log(np.diag(traj.chol_pol_covar[t, :, :]))\n",
    "            )\n",
    "            ln_det_cp = np.log(pol_det_sigma[:, t])\n",
    "            # IMPORTANT: the last term assumes that pol_prec does not\n",
    "            #            depend on the state, so the precision from one\n",
    "            #            arbitrary sample (index 1) is reused for all N.\n",
    "            d_pp_d = np.sum(diff * (diff.dot(pol_prec[1, t, :, :])), axis=1)\n",
    "            kl[:, t] = 0.5 * np.sum(np.sum(tr_pp_ct, axis=1), axis=1) - \\\n",
    "                    k_ln_det_ct + 0.5 * ln_det_cp + 0.5 * d_pp_d\n",
    "            tr_pp_ct_m = np.mean(tr_pp_ct, axis=0)\n",
    "            kl_m[t] = 0.5 * np.sum(np.sum(tr_pp_ct_m, axis=0), axis=0) - \\\n",
    "                    k_ln_det_ct + 0.5 * np.mean(ln_det_cp) + \\\n",
    "                    0.5 * np.mean(d_pp_d)\n",
    "            # Compute trajectory action at sample with Lagrange\n",
    "            # multiplier.\n",
    "            traj_mu = np.zeros((N, dU))\n",
    "            for i in range(N):\n",
    "                traj_mu[i, :] = \\\n",
    "                        (traj.K[t, :, :] - pol_info.lambda_K[t, :, :]).dot(\n",
    "                            X[i, t, :]\n",
    "                        ) + (traj.k[t, :] - pol_info.lambda_k[t, :])\n",
    "            # Compute KL divergence with Lagrange multiplier.\n",
    "            diff_l = pol_mu[:, t, :] - traj_mu\n",
    "            d_pp_d_l = np.sum(diff_l * (diff_l.dot(pol_prec[1, t, :, :])),\n",
    "                              axis=1)\n",
    "            kl_l[:, t] = 0.5 * np.sum(np.sum(tr_pp_ct, axis=1), axis=1) - \\\n",
    "                    k_ln_det_ct + 0.5 * ln_det_cp + 0.5 * d_pp_d_l\n",
    "            kl_lm[t] = 0.5 * np.sum(np.sum(tr_pp_ct_m, axis=0), axis=0) - \\\n",
    "                    k_ln_det_ct + 0.5 * np.mean(ln_det_cp) + \\\n",
    "                    0.5 * np.mean(d_pp_d_l)\n",
    "        return kl_m, kl, kl_lm, kl_l\n",
    "\n",
    "    def _estimate_cost(self, traj_distr, traj_info, pol_info, m):\n",
    "        \"\"\"\n",
    "        Compute Laplace approximation to expected cost.\n",
    "        Args:\n",
    "            traj_distr: A linear Gaussian policy object.\n",
    "            traj_info: A TrajectoryInfo object.\n",
    "            pol_info: Policy linearization info.\n",
    "            m: Condition number.\n",
    "        \"\"\"\n",
    "        # Constants.\n",
    "        T, dU, dX = self.T, self.dU, self.dX\n",
    "\n",
    "        # Perform forward pass (note that we repeat this here, because\n",
    "        # traj_info may have different dynamics from the ones that were\n",
    "        # used to compute the distribution already saved in traj).\n",
    "        mu, sigma = self.traj_opt.forward(traj_distr, traj_info)\n",
    "\n",
    "        # Compute cost.\n",
    "        predicted_cost = np.zeros(T)\n",
    "        for t in range(T):\n",
    "            predicted_cost[t] = traj_info.cc[t] + 0.5 * \\\n",
    "                    (np.sum(sigma[t, :, :] * traj_info.Cm[t, :, :]) +\n",
    "                     mu[t, :].T.dot(traj_info.Cm[t, :, :]).dot(mu[t, :])) + \\\n",
    "                    mu[t, :].T.dot(traj_info.cv[t, :])\n",
    "\n",
    "        # Compute KL divergence.\n",
    "        predicted_kl = np.zeros(T)\n",
    "        for t in range(T):\n",
    "            inv_pS = np.linalg.solve(\n",
    "                pol_info.chol_pol_S[t, :, :],\n",
    "                np.linalg.solve(pol_info.chol_pol_S[t, :, :].T, np.eye(dU))\n",
    "            )\n",
    "            Ufb = pol_info.pol_K[t, :, :].dot(mu[t, :dX].T) + \\\n",
    "                    pol_info.pol_k[t, :]\n",
    "            diff = mu[t, dX:] - Ufb\n",
    "            Kbar = traj_distr.K[t, :, :] - pol_info.pol_K[t, :, :]\n",
    "            predicted_kl[t] = 0.5 * (diff).dot(inv_pS).dot(diff) + \\\n",
    "                    0.5 * np.sum(traj_distr.pol_covar[t, :, :] * inv_pS) + \\\n",
    "                    0.5 * np.sum(\n",
    "                        sigma[t, :dX, :dX] * Kbar.T.dot(inv_pS).dot(Kbar)\n",
    "                    ) + np.sum(\n",
    "                        np.log(np.diag(pol_info.chol_pol_S[t, :, :]))\n",
    "                    ) - np.sum(\n",
    "                        np.log(np.diag(traj_distr.chol_pol_covar[t, :, :]))\n",
    "                    ) + 0.5 * dU\n",
    "\n",
    "        return predicted_cost, predicted_kl\n",
    "\n",
    "    def compute_costs(self, m, eta, augment=True):\n",
    "        \"\"\" Compute cost estimates used in the LQR backward pass. \"\"\"\n",
    "        traj_info, traj_distr = self.cur[m].traj_info, self.cur[m].traj_distr\n",
    "        if not augment:  # Whether to augment cost with term to penalize KL\n",
    "            return traj_info.Cm, traj_info.cv\n",
    "\n",
    "        pol_info = self.cur[m].pol_info\n",
    "        multiplier = self._hyperparams['max_ent_traj']\n",
    "        T, dU, dX = traj_distr.T, traj_distr.dU, traj_distr.dX\n",
    "        Cm, cv = np.copy(traj_info.Cm), np.copy(traj_info.cv)\n",
    "\n",
    "        # Modify policy action via Lagrange multiplier.\n",
    "        cv[:, dX:] -= pol_info.lambda_k\n",
    "        Cm[:, dX:, :dX] -= pol_info.lambda_K\n",
    "        Cm[:, :dX, dX:] -= np.transpose(pol_info.lambda_K, [0, 2, 1])\n",
    "\n",
    "        # Pre-process the costs with KL-divergence terms.\n",
    "        TKLm = np.zeros((T, dX+dU, dX+dU))\n",
    "        TKLv = np.zeros((T, dX+dU))\n",
    "        PKLm = np.zeros((T, dX+dU, dX+dU))\n",
    "        PKLv = np.zeros((T, dX+dU))\n",
    "        fCm, fcv = np.zeros(Cm.shape), np.zeros(cv.shape)\n",
    "        for t in range(T):\n",
    "            K, k = traj_distr.K[t, :, :], traj_distr.k[t, :]\n",
    "            inv_pol_covar = traj_distr.inv_pol_covar[t, :, :]\n",
    "            # Trajectory KL-divergence terms.\n",
    "            TKLm[t, :, :] = np.vstack([\n",
    "                np.hstack([\n",
    "                    K.T.dot(inv_pol_covar).dot(K),\n",
    "                    -K.T.dot(inv_pol_covar)]),\n",
    "                np.hstack([-inv_pol_covar.dot(K), inv_pol_covar])\n",
    "            ])\n",
    "            TKLv[t, :] = np.concatenate([\n",
    "                K.T.dot(inv_pol_covar).dot(k), -inv_pol_covar.dot(k)\n",
    "            ])\n",
    "            # Policy KL-divergence terms.\n",
    "            inv_pol_S = np.linalg.solve(\n",
    "                pol_info.chol_pol_S[t, :, :],\n",
    "                np.linalg.solve(pol_info.chol_pol_S[t, :, :].T, np.eye(dU))\n",
    "            )\n",
    "            KB, kB = pol_info.pol_K[t, :, :], pol_info.pol_k[t, :]\n",
    "            PKLm[t, :, :] = np.vstack([\n",
    "                np.hstack([KB.T.dot(inv_pol_S).dot(KB), -KB.T.dot(inv_pol_S)]),\n",
    "                np.hstack([-inv_pol_S.dot(KB), inv_pol_S])\n",
    "            ])\n",
    "            PKLv[t, :] = np.concatenate([\n",
    "                KB.T.dot(inv_pol_S).dot(kB), -inv_pol_S.dot(kB)\n",
    "            ])\n",
    "            wt = pol_info.pol_wt[t]\n",
    "            fCm[t, :, :] = (Cm[t, :, :] + TKLm[t, :, :] * eta +\n",
    "                            PKLm[t, :, :] * wt) / (eta + wt + multiplier)\n",
    "            fcv[t, :] = (cv[t, :] + TKLv[t, :] * eta +\n",
    "                         PKLv[t, :] * wt) / (eta + wt + multiplier)\n",
    "\n",
    "        return fCm, fcv\n"
   ]
  },
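  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The per-timestep term assembled in `_policy_kl` above is the closed-form KL divergence between the two conditional Gaussian action distributions at a sampled state. As a sanity check, the cell below (a minimal sketch with synthetic means and covariances standing in for `traj.pol_covar`, `pol_info.pol_mu` and the policy precision) rebuilds the term the same way and compares it against the textbook Gaussian KL formula:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "dU = 2\n",
    "m_traj, m_pol = rng.standard_normal(dU), rng.standard_normal(dU)\n",
    "A = rng.standard_normal((dU, dU))\n",
    "S_traj = A.dot(A.T) + np.eye(dU)  # stand-in for traj.pol_covar[t]\n",
    "S_pol = 0.5 * np.eye(dU)          # stand-in for the policy covariance\n",
    "P_pol = np.linalg.inv(S_pol)      # stand-in for pol_prec at time t\n",
    "\n",
    "# Assemble the KL term the same way as _policy_kl does.\n",
    "chol_traj = np.linalg.cholesky(S_traj)\n",
    "k_ln_det_ct = 0.5 * dU + np.sum(np.log(np.diag(chol_traj)))\n",
    "ln_det_cp = np.log(np.linalg.det(S_pol))\n",
    "diff = m_pol - m_traj\n",
    "kl_code = (0.5 * np.sum(P_pol * S_traj) - k_ln_det_ct +\n",
    "           0.5 * ln_det_cp + 0.5 * diff.dot(P_pol).dot(diff))\n",
    "\n",
    "# Textbook KL(N(m_traj, S_traj) || N(m_pol, S_pol)).\n",
    "kl_closed = 0.5 * (np.trace(P_pol.dot(S_traj)) +\n",
    "                   diff.dot(P_pol).dot(diff) - dU +\n",
    "                   np.log(np.linalg.det(S_pol) / np.linalg.det(S_traj)))\n",
    "\n",
    "print(np.isclose(kl_code, kl_closed))  # True"
   ]
  },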
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### `AlgorithmMDGPS`\n",
    "Mirror-descent (MD) based GPS algorithm"
   ]
  },
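  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The S-step in `AlgorithmMDGPS._update_policy` below is plain supervised regression: the policy is fit to the actions $u_t = K_t x_t + k_t$ of the local controllers, weighted by their precisions. With a linear policy and noiseless targets, the problem is solvable exactly; a minimal sketch (ordinary least squares standing in for the `policy_opt.update` network fit, ignoring the precision weighting for simplicity) recovers the controller gains:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(2)\n",
    "dX, dU, N = 3, 2, 500\n",
    "K = rng.standard_normal((dU, dX))  # stand-in for traj.K[t]\n",
    "k = rng.standard_normal(dU)        # stand-in for traj.k[t]\n",
    "X = rng.standard_normal((N, dX))   # sampled states\n",
    "U = X.dot(K.T) + k                 # controller actions: the S-step targets\n",
    "\n",
    "# Ordinary least squares standing in for policy_opt.update.\n",
    "Xa = np.hstack([X, np.ones((N, 1))])  # append a bias column\n",
    "W, _, _, _ = np.linalg.lstsq(Xa, U, rcond=None)\n",
    "K_fit, k_fit = W[:dX].T, W[dX]\n",
    "print(np.allclose(K_fit, K), np.allclose(k_fit, k))  # True True"
   ]
  },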
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "# AlgorithmMDGPS\n",
    "ALG_MDGPS = {\n",
    "    # TODO: remove need for init_pol_wt in MDGPS\n",
    "    'init_pol_wt': 0.01,\n",
    "    'policy_sample_mode': 'add',\n",
    "    # Whether to use 'laplace' or 'mc' cost in step adjustment\n",
    "    'step_rule': 'laplace',\n",
    "}\n",
    "\n",
    "class AlgorithmMDGPS(Algorithm):\n",
    "    \"\"\"\n",
    "    Sample-based joint policy learning and trajectory optimization with\n",
    "    (approximate) mirror descent guided policy search algorithm.\n",
    "    \"\"\"\n",
    "    def __init__(self, hyperparams):\n",
    "        config = copy.deepcopy(ALG_MDGPS)\n",
    "        config.update(hyperparams)\n",
    "        Algorithm.__init__(self, config)\n",
    "\n",
    "        policy_prior = self._hyperparams['policy_prior']\n",
    "        for m in range(self.M):\n",
    "            self.cur[m].pol_info = PolicyInfo(self._hyperparams)\n",
    "            self.cur[m].pol_info.policy_prior = \\\n",
    "                    policy_prior['type'](policy_prior)\n",
    "\n",
    "        self.policy_opt = self._hyperparams['policy_opt']['type'](\n",
    "            self._hyperparams['policy_opt'], self.dO, self.dU\n",
    "        )\n",
    "\n",
    "    def iteration(self, sample_lists):\n",
    "        \"\"\"\n",
    "        Run iteration of MDGPS-based guided policy search.\n",
    "\n",
    "        Args:\n",
    "            sample_lists: List of SampleList objects for each condition.\n",
    "        \"\"\"\n",
    "        # Store the samples and evaluate the costs.\n",
    "        for m in range(self.M):\n",
    "            self.cur[m].sample_list = sample_lists[m]\n",
    "            self._eval_cost(m)\n",
    "\n",
    "        # Update dynamics linearizations.\n",
    "        self._update_dynamics()\n",
    "\n",
    "        # On the first iteration, need to catch policy up to init_traj_distr.\n",
    "        if self.iteration_count == 0:\n",
    "            self.new_traj_distr = [\n",
    "                self.cur[cond].traj_distr for cond in range(self.M)\n",
    "            ]\n",
    "            self._update_policy()\n",
    "\n",
    "        # Update policy linearizations.\n",
    "        for m in range(self.M):\n",
    "            self._update_policy_fit(m)\n",
    "\n",
    "        # C-step\n",
    "        if self.iteration_count > 0:\n",
    "            self._stepadjust()\n",
    "        self._update_trajectories()\n",
    "\n",
    "        # S-step\n",
    "        self._update_policy()\n",
    "\n",
    "        # Prepare for next iteration\n",
    "        self._advance_iteration_variables()\n",
    "\n",
    "    def _update_policy(self):\n",
    "        \"\"\" Compute the new policy. \"\"\"\n",
    "        dU, dO, T = self.dU, self.dO, self.T\n",
    "        # Compute target mean, cov, and weight for each sample.\n",
    "        obs_data, tgt_mu = np.zeros((0, T, dO)), np.zeros((0, T, dU))\n",
    "        tgt_prc, tgt_wt = np.zeros((0, T, dU, dU)), np.zeros((0, T))\n",
    "        for m in range(self.M):\n",
    "            samples = self.cur[m].sample_list\n",
    "            X = samples.get_X()\n",
    "            N = len(samples)\n",
    "            traj, pol_info = self.new_traj_distr[m], self.cur[m].pol_info\n",
    "            mu = np.zeros((N, T, dU))\n",
    "            prc = np.zeros((N, T, dU, dU))\n",
    "            wt = np.zeros((N, T))\n",
    "            # Get time-indexed actions.\n",
    "            for t in range(T):\n",
    "                # Compute actions along this trajectory.\n",
    "                prc[:, t, :, :] = np.tile(traj.inv_pol_covar[t, :, :],\n",
    "                                          [N, 1, 1])\n",
    "                for i in range(N):\n",
    "                    mu[i, t, :] = (traj.K[t, :, :].dot(X[i, t, :]) + traj.k[t, :])\n",
    "                wt[:, t].fill(pol_info.pol_wt[t])\n",
    "            tgt_mu = np.concatenate((tgt_mu, mu))\n",
    "            tgt_prc = np.concatenate((tgt_prc, prc))\n",
    "            tgt_wt = np.concatenate((tgt_wt, wt))\n",
    "            obs_data = np.concatenate((obs_data, samples.get_obs()))\n",
    "        self.policy_opt.update(obs_data, tgt_mu, tgt_prc, tgt_wt)\n",
    "\n",
    "    def _update_policy_fit(self, m):\n",
    "        \"\"\"\n",
    "        Re-estimate the local policy values in the neighborhood of the\n",
    "        trajectory.\n",
    "        Args:\n",
    "            m: Condition\n",
    "        \"\"\"\n",
    "        dX, dU, T = self.dX, self.dU, self.T\n",
    "        # Choose samples to use.\n",
    "        samples = self.cur[m].sample_list\n",
    "        N = len(samples)\n",
    "        pol_info = self.cur[m].pol_info\n",
    "        X = samples.get_X()\n",
    "        obs = samples.get_obs().copy()\n",
    "        pol_mu, pol_sig = self.policy_opt.prob(obs)[:2]\n",
    "        pol_info.pol_mu, pol_info.pol_sig = pol_mu, pol_sig\n",
    "\n",
    "        # Update policy prior.\n",
    "        policy_prior = pol_info.policy_prior\n",
    "        samples = SampleList(self.cur[m].sample_list)\n",
    "        mode = self._hyperparams['policy_sample_mode']\n",
    "        policy_prior.update(samples, self.policy_opt, mode)\n",
    "\n",
    "        # Fit linearization and store in pol_info.\n",
    "        pol_info.pol_K, pol_info.pol_k, pol_info.pol_S = \\\n",
    "                policy_prior.fit(X, pol_mu, pol_sig)\n",
    "        for t in range(T):\n",
    "            pol_info.chol_pol_S[t, :, :] = \\\n",
    "                    sp.linalg.cholesky(pol_info.pol_S[t, :, :])\n",
    "\n",
    "    def _advance_iteration_variables(self):\n",
    "        \"\"\"\n",
    "        Move all 'cur' variables to 'prev', reinitialize 'cur'\n",
    "        variables, and advance iteration counter.\n",
    "        \"\"\"\n",
    "        Algorithm._advance_iteration_variables(self)\n",
    "        for m in range(self.M):\n",
    "            self.cur[m].traj_info.last_kl_step = \\\n",
    "                    self.prev[m].traj_info.last_kl_step\n",
    "            self.cur[m].pol_info = copy.deepcopy(self.prev[m].pol_info)\n",
    "\n",
    "    def _stepadjust(self):\n",
    "        \"\"\"\n",
    "        Calculate new step sizes. This version uses the same step size\n",
    "        for all conditions.\n",
    "        \"\"\"\n",
    "        # Compute previous cost and previous expected cost.\n",
    "        prev_M = len(self.prev) # May be different in future.\n",
    "        prev_laplace = np.empty(prev_M)\n",
    "        prev_mc = np.empty(prev_M)\n",
    "        prev_predicted = np.empty(prev_M)\n",
    "        for m in range(prev_M):\n",
    "            prev_nn = self.prev[m].pol_info.traj_distr()\n",
    "            prev_lg = self.prev[m].new_traj_distr\n",
    "\n",
    "            # Compute values under Laplace approximation. This is the policy\n",
    "            # that the previous samples were actually drawn from under the\n",
    "            # dynamics that were estimated from the previous samples.\n",
    "            prev_laplace[m] = self.traj_opt.estimate_cost(\n",
    "                    prev_nn, self.prev[m].traj_info\n",
    "            ).sum()\n",
    "            # This is the actual cost that we experienced.\n",
    "            prev_mc[m] = self.prev[m].cs.mean(axis=0).sum()\n",
    "            # This is the policy that we just used under the dynamics that\n",
    "            # were estimated from the prev samples (so this is the cost\n",
    "            # we thought we would have).\n",
    "            prev_predicted[m] = self.traj_opt.estimate_cost(\n",
    "                    prev_lg, self.prev[m].traj_info\n",
    "            ).sum()\n",
    "\n",
    "        # Compute current cost.\n",
    "        cur_laplace = np.empty(self.M)\n",
    "        cur_mc = np.empty(self.M)\n",
    "        for m in range(self.M):\n",
    "            cur_nn = self.cur[m].pol_info.traj_distr()\n",
    "            # This is the actual cost we have under the current trajectory\n",
    "            # based on the latest samples.\n",
    "            cur_laplace[m] = self.traj_opt.estimate_cost(\n",
    "                    cur_nn, self.cur[m].traj_info\n",
    "            ).sum()\n",
    "            cur_mc[m] = self.cur[m].cs.mean(axis=0).sum()\n",
    "\n",
    "        # Compute predicted and actual improvement.\n",
    "        prev_laplace = prev_laplace.mean()\n",
    "        prev_mc = prev_mc.mean()\n",
    "        prev_predicted = prev_predicted.mean()\n",
    "        cur_laplace = cur_laplace.mean()\n",
    "        cur_mc = cur_mc.mean()\n",
    "        if self._hyperparams['step_rule'] == 'laplace':\n",
    "            predicted_impr = prev_laplace - prev_predicted\n",
    "            actual_impr = prev_laplace - cur_laplace\n",
    "        elif self._hyperparams['step_rule'] == 'mc':\n",
    "            predicted_impr = prev_mc - prev_predicted\n",
    "            actual_impr = prev_mc - cur_mc\n",
    "        else:\n",
    "            raise ValueError('Unknown step_rule: %s' %\n",
    "                             self._hyperparams['step_rule'])\n",
    "        LOGGER.debug('Previous cost: Laplace: %f, MC: %f',\n",
    "                     prev_laplace, prev_mc)\n",
    "        LOGGER.debug('Predicted cost: Laplace: %f', prev_predicted)\n",
    "        LOGGER.debug('Actual cost: Laplace: %f, MC: %f',\n",
    "                     cur_laplace, cur_mc)\n",
    "\n",
    "        for m in range(self.M):\n",
    "            self._set_new_mult(predicted_impr, actual_impr, m)\n",
    "\n",
    "    def compute_costs(self, m, eta, augment=True):\n",
    "        \"\"\" Compute cost estimates used in the LQR backward pass. \"\"\"\n",
    "        traj_info, traj_distr = self.cur[m].traj_info, self.cur[m].traj_distr\n",
    "        if not augment:  # Whether to augment cost with term to penalize KL\n",
    "            return traj_info.Cm, traj_info.cv\n",
    "\n",
    "        pol_info = self.cur[m].pol_info\n",
    "        multiplier = self._hyperparams['max_ent_traj']\n",
    "        T, dU, dX = traj_distr.T, traj_distr.dU, traj_distr.dX\n",
    "        Cm, cv = np.copy(traj_info.Cm), np.copy(traj_info.cv)\n",
    "\n",
    "        PKLm = np.zeros((T, dX+dU, dX+dU))\n",
    "        PKLv = np.zeros((T, dX+dU))\n",
    "        fCm, fcv = np.zeros(Cm.shape), np.zeros(cv.shape)\n",
    "        for t in range(T):\n",
    "            # Policy KL-divergence terms.\n",
    "            inv_pol_S = np.linalg.solve(\n",
    "                pol_info.chol_pol_S[t, :, :],\n",
    "                np.linalg.solve(pol_info.chol_pol_S[t, :, :].T, np.eye(dU))\n",
    "            )\n",
    "            KB, kB = pol_info.pol_K[t, :, :], pol_info.pol_k[t, :]\n",
    "            PKLm[t, :, :] = np.vstack([\n",
    "                np.hstack([KB.T.dot(inv_pol_S).dot(KB), -KB.T.dot(inv_pol_S)]),\n",
    "                np.hstack([-inv_pol_S.dot(KB), inv_pol_S])\n",
    "            ])\n",
    "            PKLv[t, :] = np.concatenate([\n",
    "                KB.T.dot(inv_pol_S).dot(kB), -inv_pol_S.dot(kB)\n",
    "            ])\n",
    "            fCm[t, :, :] = (Cm[t, :, :] + PKLm[t, :, :] * eta) / (eta + multiplier)\n",
    "            fcv[t, :] = (cv[t, :] + PKLv[t, :] * eta) / (eta + multiplier)\n",
    "\n",
    "        return fCm, fcv\n"
   ]
  },
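  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `PKLm`/`PKLv` blocks built in `compute_costs` above are the quadratic expansion, in the stacked variable $z = [x; u]$, of the policy penalty $0.5 (u - Kx - k)^T S^{-1} (u - Kx - k)$. A small numerical check (synthetic gains and an identity precision standing in for `pol_info.pol_K`, `pol_info.pol_k` and the inverse of `pol_info.pol_S`) confirms the identity up to the constant $0.5 k^T S^{-1} k$:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(1)\n",
    "dX, dU = 3, 2\n",
    "KB = rng.standard_normal((dU, dX))  # stand-in for pol_info.pol_K[t]\n",
    "kB = rng.standard_normal(dU)        # stand-in for pol_info.pol_k[t]\n",
    "inv_S = np.eye(dU)                  # stand-in for the policy precision\n",
    "\n",
    "# Same block construction as compute_costs.\n",
    "PKLm = np.vstack([\n",
    "    np.hstack([KB.T.dot(inv_S).dot(KB), -KB.T.dot(inv_S)]),\n",
    "    np.hstack([-inv_S.dot(KB), inv_S])\n",
    "])\n",
    "PKLv = np.concatenate([KB.T.dot(inv_S).dot(kB), -inv_S.dot(kB)])\n",
    "\n",
    "x, u = rng.standard_normal(dX), rng.standard_normal(dU)\n",
    "z = np.concatenate([x, u])\n",
    "\n",
    "diff = u - (KB.dot(x) + kB)\n",
    "quad_direct = 0.5 * diff.dot(inv_S).dot(diff)\n",
    "quad_stacked = (0.5 * z.dot(PKLm).dot(z) + z.dot(PKLv) +\n",
    "                0.5 * kB.dot(inv_S).dot(kB))\n",
    "print(np.isclose(quad_direct, quad_stacked))  # True"
   ]
  },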
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Trajectory optimization\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### iLQG-based trajectory optimization: `AlgorithmTrajOpt`\n",
    "Sample-based trajectory optimization algorithm"
   ]
  },
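  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Both `_stepadjust` implementations above, and the one in `AlgorithmTrajOpt` below, hand the predicted and actual improvement to `_set_new_mult`, which is defined in the base `Algorithm` class outside this section. The rule follows the usual trust-region pattern: grow the KL step multiplier when the actual improvement tracks the prediction, shrink it when the local models were over-optimistic. An illustrative sketch, with hypothetical thresholds and factors rather than the exact values used by `_set_new_mult`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def step_mult_update(step_mult, predicted_impr, actual_impr,\n",
    "                     min_mult=0.01, max_mult=10.0):\n",
    "    # Hypothetical trust-region-style rule: compare how much of the\n",
    "    # predicted improvement was actually realized.\n",
    "    ratio = actual_impr / max(abs(predicted_impr), 1e-6)\n",
    "    if ratio > 0.5:\n",
    "        new_mult = step_mult * 2.0  # model trustworthy: larger KL step\n",
    "    else:\n",
    "        new_mult = step_mult * 0.5  # model over-optimistic: shrink step\n",
    "    return float(np.clip(new_mult, min_mult, max_mult))\n",
    "\n",
    "print(step_mult_update(1.0, 10.0, 8.0))  # 2.0\n",
    "print(step_mult_update(1.0, 10.0, 1.0))  # 0.5"
   ]
  },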
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class AlgorithmTrajOpt(Algorithm):\n",
    "    \"\"\" Sample-based trajectory optimization. \"\"\"\n",
    "    def __init__(self, hyperparams):\n",
    "        Algorithm.__init__(self, hyperparams)\n",
    "\n",
    "    def iteration(self, sample_lists):\n",
    "        \"\"\"\n",
    "        Run iteration of LQR.\n",
    "        Args:\n",
    "            sample_lists: List of SampleList objects for each condition.\n",
    "        \"\"\"\n",
    "        for m in range(self.M):\n",
    "            self.cur[m].sample_list = sample_lists[m]\n",
    "\n",
    "        # Update dynamics model using all samples.\n",
    "        self._update_dynamics()\n",
    "\n",
    "        self._update_step_size()  # KL Divergence step size.\n",
    "\n",
    "        # Run inner loop to compute new policies.\n",
    "        for _ in range(self._hyperparams['inner_iterations']):\n",
    "            self._update_trajectories()\n",
    "\n",
    "        self._advance_iteration_variables()\n",
    "\n",
    "    def _update_step_size(self):\n",
    "        \"\"\" Evaluate costs on samples, and adjust the step size. \"\"\"\n",
    "        # Evaluate cost function for all conditions and samples.\n",
    "        for m in range(self.M):\n",
    "            self._eval_cost(m)\n",
    "\n",
    "        # Adjust step size relative to the previous iteration.\n",
    "        for m in range(self.M):\n",
    "            if self.iteration_count >= 1 and self.prev[m].sample_list:\n",
    "                self._stepadjust(m)\n",
    "\n",
    "    def _stepadjust(self, m):\n",
    "        \"\"\"\n",
    "        Calculate new step sizes.\n",
    "        Args:\n",
    "            m: Condition\n",
    "        \"\"\"\n",
    "        # Compute values under Laplace approximation. This is the policy\n",
    "        # that the previous samples were actually drawn from under the\n",
    "        # dynamics that were estimated from the previous samples.\n",
    "        previous_laplace_obj = self.traj_opt.estimate_cost(\n",
    "            self.prev[m].traj_distr, self.prev[m].traj_info\n",
    "        )\n",
    "        # This is the policy that we just used under the dynamics that\n",
    "        # were estimated from the previous samples (so this is the cost\n",
    "        # we thought we would have).\n",
    "        new_predicted_laplace_obj = self.traj_opt.estimate_cost(\n",
    "            self.cur[m].traj_distr, self.prev[m].traj_info\n",
    "        )\n",
    "\n",
    "        # This is the actual cost we have under the current trajectory\n",
    "        # based on the latest samples.\n",
    "        new_actual_laplace_obj = self.traj_opt.estimate_cost(\n",
    "            self.cur[m].traj_distr, self.cur[m].traj_info\n",
    "        )\n",
    "\n",
    "        # Measure the entropy of the current trajectory (for printout).\n",
    "        ent = self._measure_ent(m)\n",
    "\n",
    "        # Compute actual objective values based on the samples.\n",
    "        previous_mc_obj = np.mean(np.sum(self.prev[m].cs, axis=1), axis=0)\n",
    "        new_mc_obj = np.mean(np.sum(self.cur[m].cs, axis=1), axis=0)\n",
    "\n",
    "        LOGGER.debug('Trajectory step: ent: %f cost: %f -> %f',\n",
    "                     ent, previous_mc_obj, new_mc_obj)\n",
    "\n",
    "        # Compute predicted and actual improvement.\n",
    "        predicted_impr = np.sum(previous_laplace_obj) - \\\n",
    "                np.sum(new_predicted_laplace_obj)\n",
    "        actual_impr = np.sum(previous_laplace_obj) - \\\n",
    "                np.sum(new_actual_laplace_obj)\n",
    "\n",
    "        # Print improvement details.\n",
    "        LOGGER.debug('Previous cost: Laplace: %f MC: %f',\n",
    "                     np.sum(previous_laplace_obj), previous_mc_obj)\n",
    "        LOGGER.debug('Predicted new cost: Laplace: %f MC: %f',\n",
    "                     np.sum(new_predicted_laplace_obj), new_mc_obj)\n",
    "        LOGGER.debug('Actual new cost: Laplace: %f MC: %f',\n",
    "                     np.sum(new_actual_laplace_obj), new_mc_obj)\n",
    "        LOGGER.debug('Predicted/actual improvement: %f / %f',\n",
    "                     predicted_impr, actual_impr)\n",
    "\n",
    "        self._set_new_mult(predicted_impr, actual_impr, m)\n",
    "\n",
    "    def compute_costs(self, m, eta, augment=True):\n",
    "        \"\"\" Compute cost estimates used in the LQR backward pass. \"\"\"\n",
    "        traj_info, traj_distr = self.cur[m].traj_info, self.cur[m].traj_distr\n",
    "        if not augment:  # Whether to augment cost with term to penalize KL\n",
    "            return traj_info.Cm, traj_info.cv\n",
    "\n",
    "        multiplier = self._hyperparams['max_ent_traj']\n",
    "        fCm, fcv = traj_info.Cm / (eta + multiplier), traj_info.cv / (eta + multiplier)\n",
    "        K, ipc, k = traj_distr.K, traj_distr.inv_pol_covar, traj_distr.k\n",
    "\n",
    "        # Add in the trajectory divergence term.\n",
    "        for t in range(self.T - 1, -1, -1):\n",
    "            fCm[t, :, :] += eta / (eta + multiplier) * np.vstack([\n",
    "                np.hstack([\n",
    "                    K[t, :, :].T.dot(ipc[t, :, :]).dot(K[t, :, :]),\n",
    "                    -K[t, :, :].T.dot(ipc[t, :, :])\n",
    "                ]),\n",
    "                np.hstack([\n",
    "                    -ipc[t, :, :].dot(K[t, :, :]), ipc[t, :, :]\n",
    "                ])\n",
    "            ])\n",
    "            fcv[t, :] += eta / (eta + multiplier) * np.hstack([\n",
    "                K[t, :, :].T.dot(ipc[t, :, :]).dot(k[t, :]),\n",
    "                -ipc[t, :, :].dot(k[t, :])\n",
    "            ])\n",
    "\n",
    "        return fCm, fcv\n"
   ]
  },
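  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`_stepadjust` hands the predicted and actual improvement to `_set_new_mult`, which is not shown in this section. A minimal sketch of that step-size rule, assuming the usual GPS heuristic of modeling the improvement as quadratic in the KL step; the function name and clipping bounds below are illustrative:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def adjust_step_mult(step_mult, predicted_impr, actual_impr,\n",
    "                     min_mult=0.1, max_mult=5.0):\n",
    "    \"\"\" Rescale the KL step multiplier from predicted vs. actual improvement.\n",
    "    Model the improvement as I = predicted_dI * KL + penalty * KL^2;\n",
    "    optimizing over KL gives the multiplier pred / (2 * (pred - actual)).\n",
    "    \"\"\"\n",
    "    new_mult = predicted_impr / (2.0 * max(1e-4, predicted_impr - actual_impr))\n",
    "    # Clip the multiplier so a single bad model fit cannot blow up the step.\n",
    "    new_mult = max(min_mult, min(max_mult, new_mult))\n",
    "    return new_mult * step_mult\n",
    "\n",
    "# E.g. predicted 10, actual 5: the multiplier is 1.0 and the step is kept.\n",
    "adjust_step_mult(1.0, 10.0, 5.0)"
   ]
  },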
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### `TrajOpt`\n",
    "Base trajectory optimization class"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import abc\n",
    "\n",
    "class TrajOpt(object):\n",
    "    \"\"\" Trajectory optimization superclass. \"\"\"\n",
    "    __metaclass__ = abc.ABCMeta\n",
    "\n",
    "    def __init__(self, hyperparams):\n",
    "        self._hyperparams = hyperparams\n",
    "\n",
    "    @abc.abstractmethod\n",
    "    def update(self):\n",
    "        \"\"\" Update trajectory distributions. \"\"\"\n",
    "        raise NotImplementedError(\"Must be implemented in subclass.\")\n"
   ]
  },
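  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The per-timestep quantity computed by `traj_distr_kl` in the cell below is the closed-form KL divergence between two multivariate Gaussians. As a self-contained reference (the function below is our own illustration, independent of the trajectory classes):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def gaussian_kl(mu0, sig0, mu1, sig1):\n",
    "    \"\"\" KL( N(mu0, sig0) || N(mu1, sig1) ) in closed form. \"\"\"\n",
    "    d = mu0.shape[0]\n",
    "    inv1 = np.linalg.inv(sig1)\n",
    "    diff = mu1 - mu0\n",
    "    # Trace term + quadratic mean term - dimension + log-determinant ratio.\n",
    "    return 0.5 * (np.trace(inv1.dot(sig0)) + diff.dot(inv1).dot(diff) - d +\n",
    "                  np.log(np.linalg.det(sig1) / np.linalg.det(sig0)))\n",
    "\n",
    "# Identical distributions have zero divergence.\n",
    "gaussian_kl(np.zeros(2), np.eye(2), np.zeros(2), np.eye(2))"
   ]
  },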
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### `TrajOptLQRPython`\n",
    "iLQG-based trajectory optimization"
   ]
  },
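  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `update` method in the implementation below runs dual gradient descent on the dual variable `eta`, bracketing it with a geometric-mean bisection until the KL constraint is met. A toy sketch of that bracketing loop, assuming a scalar constraint `kl_div - kl_step` that decreases in `eta`; the function name and tolerance are illustrative:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import math\n",
    "\n",
    "def bracket_eta(constraint, eta=1.0, min_eta=1e-8, max_eta=1e16,\n",
    "                max_iter=50, tol=0.1):\n",
    "    \"\"\" Geometric bisection on the dual variable eta.\n",
    "    constraint(eta) returns kl_div - kl_step, assumed decreasing in eta.\n",
    "    \"\"\"\n",
    "    for _ in range(max_iter):\n",
    "        con = constraint(eta)\n",
    "        if abs(con) < tol:\n",
    "            break  # KL constraint satisfied to tolerance.\n",
    "        if con < 0:  # Eta too big: shrink, by at most 10x per step.\n",
    "            max_eta = eta\n",
    "            eta = max(math.sqrt(min_eta * max_eta), 0.1 * max_eta)\n",
    "        else:  # Eta too small: grow, by at most 10x per step.\n",
    "            min_eta = eta\n",
    "            eta = min(math.sqrt(min_eta * max_eta), 10.0 * min_eta)\n",
    "    return eta\n",
    "\n",
    "# With kl_div ~ 1/eta and kl_step = 1, the bracket closes on eta = 1.\n",
    "bracket_eta(lambda e: 1.0 / e - 1.0, eta=100.0)"
   ]
  },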
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "\"\"\" This file defines code for iLQG-based trajectory optimization. \"\"\"\n",
    "from numpy.linalg import LinAlgError\n",
    "import scipy as sp\n",
    "\n",
    "# Constants used in TrajOptLQR.\n",
    "DGD_MAX_ITER = 50\n",
    "DGD_MAX_LS_ITER = 20\n",
    "DGD_MAX_GD_ITER = 200\n",
    "\n",
    "ALPHA, BETA1, BETA2, EPS = 0.005, 0.9, 0.999, 1e-8  # Adam parameters\n",
    "\n",
    "def traj_distr_kl(new_mu, new_sigma, new_traj_distr, prev_traj_distr, tot=True):\n",
    "    \"\"\"\n",
    "    Compute KL divergence between new and previous trajectory\n",
    "    distributions.\n",
    "    Args:\n",
    "        new_mu: T x dX, mean of new trajectory distribution.\n",
    "        new_sigma: T x dX x dX, variance of new trajectory distribution.\n",
    "        new_traj_distr: A linear Gaussian policy object, new\n",
    "            distribution.\n",
    "        prev_traj_distr: A linear Gaussian policy object, previous\n",
    "            distribution.\n",
    "        tot: Whether or not to sum KL across all time steps.\n",
    "    Returns:\n",
    "        kl_div: The KL divergence between the new and previous\n",
    "            trajectories.\n",
    "    \"\"\"\n",
    "    # Constants.\n",
    "    T = new_mu.shape[0]\n",
    "    dU = new_traj_distr.dU\n",
    "\n",
    "    # Initialize vector of divergences for each time step.\n",
    "    kl_div = np.zeros(T)\n",
    "\n",
    "    # Step through trajectory.\n",
    "    for t in range(T):\n",
    "        # Fetch matrices and vectors from trajectory distributions.\n",
    "        mu_t = new_mu[t, :]\n",
    "        sigma_t = new_sigma[t, :, :]\n",
    "        K_prev = prev_traj_distr.K[t, :, :]\n",
    "        K_new = new_traj_distr.K[t, :, :]\n",
    "        k_prev = prev_traj_distr.k[t, :]\n",
    "        k_new = new_traj_distr.k[t, :]\n",
    "        chol_prev = prev_traj_distr.chol_pol_covar[t, :, :]\n",
    "        chol_new = new_traj_distr.chol_pol_covar[t, :, :]\n",
    "\n",
    "        # Compute log determinants and precision matrices.\n",
    "        logdet_prev = 2 * sum(np.log(np.diag(chol_prev)))\n",
    "        logdet_new = 2 * sum(np.log(np.diag(chol_new)))\n",
    "        prc_prev = sp.linalg.solve_triangular(\n",
    "            chol_prev, sp.linalg.solve_triangular(chol_prev.T, np.eye(dU),\n",
    "                                                  lower=True)\n",
    "        )\n",
    "        prc_new = sp.linalg.solve_triangular(\n",
    "            chol_new, sp.linalg.solve_triangular(chol_new.T, np.eye(dU),\n",
    "                                                 lower=True)\n",
    "        )\n",
    "\n",
    "        # Construct matrix, vector, and constants.\n",
    "        M_prev = np.r_[\n",
    "            np.c_[K_prev.T.dot(prc_prev).dot(K_prev), -K_prev.T.dot(prc_prev)],\n",
    "            np.c_[-prc_prev.dot(K_prev), prc_prev]\n",
    "        ]\n",
    "        M_new = np.r_[\n",
    "            np.c_[K_new.T.dot(prc_new).dot(K_new), -K_new.T.dot(prc_new)],\n",
    "            np.c_[-prc_new.dot(K_new), prc_new]\n",
    "        ]\n",
    "        v_prev = np.r_[K_prev.T.dot(prc_prev).dot(k_prev),\n",
    "                       -prc_prev.dot(k_prev)]\n",
    "        v_new = np.r_[K_new.T.dot(prc_new).dot(k_new), -prc_new.dot(k_new)]\n",
    "        c_prev = 0.5 * k_prev.T.dot(prc_prev).dot(k_prev)\n",
    "        c_new = 0.5 * k_new.T.dot(prc_new).dot(k_new)\n",
    "\n",
    "        # Compute KL divergence at timestep t.\n",
    "        kl_div[t] = max(\n",
    "            0,\n",
    "            -0.5 * mu_t.T.dot(M_new - M_prev).dot(mu_t) -\n",
    "            mu_t.T.dot(v_new - v_prev) - c_new + c_prev -\n",
    "            0.5 * np.sum(sigma_t * (M_new-M_prev)) - 0.5 * logdet_new +\n",
    "            0.5 * logdet_prev\n",
    "        )\n",
    "\n",
    "    # Add up divergences across time to get total divergence.\n",
    "    return np.sum(kl_div) if tot else kl_div\n",
    "\n",
    "def traj_distr_kl_alt(new_mu, new_sigma, new_traj_distr, prev_traj_distr, tot=True):\n",
    "    \"\"\"\n",
    "    This function computes the same quantity as the function above.\n",
    "    However, it is easier to modify and understand this function, i.e.,\n",
    "    passing in a different mu and sigma to this function will behave properly.\n",
    "    \"\"\"\n",
    "    T, dX, dU = new_mu.shape[0], new_traj_distr.dX, new_traj_distr.dU\n",
    "    kl_div = np.zeros(T)\n",
    "\n",
    "    for t in range(T):\n",
    "        K_prev = prev_traj_distr.K[t, :, :]\n",
    "        K_new = new_traj_distr.K[t, :, :]\n",
    "\n",
    "        k_prev = prev_traj_distr.k[t, :]\n",
    "        k_new = new_traj_distr.k[t, :]\n",
    "\n",
    "        sig_prev = prev_traj_distr.pol_covar[t, :, :]\n",
    "        sig_new = new_traj_distr.pol_covar[t, :, :]\n",
    "\n",
    "        chol_prev = prev_traj_distr.chol_pol_covar[t, :, :]\n",
    "        chol_new = new_traj_distr.chol_pol_covar[t, :, :]\n",
    "\n",
    "        inv_prev = prev_traj_distr.inv_pol_covar[t, :, :]\n",
    "        inv_new = new_traj_distr.inv_pol_covar[t, :, :]\n",
    "\n",
    "        logdet_prev = 2 * sum(np.log(np.diag(chol_prev)))\n",
    "        logdet_new = 2 * sum(np.log(np.diag(chol_new)))\n",
    "\n",
    "        K_diff, k_diff = K_prev - K_new, k_prev - k_new\n",
    "        mu, sigma = new_mu[t, :dX], new_sigma[t, :dX, :dX]\n",
    "\n",
    "        kl_div[t] = max(\n",
    "                0,\n",
    "                0.5 * (logdet_prev - logdet_new - new_traj_distr.dU +\n",
    "                       np.sum(np.diag(inv_prev.dot(sig_new))) +\n",
    "                       k_diff.T.dot(inv_prev).dot(k_diff) +\n",
    "                       mu.T.dot(K_diff.T).dot(inv_prev).dot(K_diff).dot(mu) +\n",
    "                       np.sum(np.diag(K_diff.T.dot(inv_prev).dot(K_diff).dot(sigma))) +\n",
    "                       2 * k_diff.T.dot(inv_prev).dot(K_diff).dot(mu))\n",
    "        )\n",
    "\n",
    "    return np.sum(kl_div) if tot else kl_div\n",
    "\n",
    "# TrajOptLQRPython\n",
    "TRAJ_OPT_LQR = {\n",
    "    # Dual variable updates for non-PD Q-function.\n",
    "    'del0': 1e-4,\n",
    "    'eta_error_threshold': 1e16,\n",
    "    'min_eta': 1e-8,\n",
    "    'max_eta': 1e16,\n",
    "    'cons_per_step': False,  # Whether or not to enforce separate KL constraints at each time step.\n",
    "    'use_prev_distr': False,  # Whether or not to measure expected KL under the previous traj distr.\n",
    "    'update_in_bwd_pass': True,  # Whether or not to update the TVLG controller during the bwd pass.\n",
    "}\n",
    "\n",
    "class TrajOptLQRPython(TrajOpt):\n",
    "    \"\"\" LQR trajectory optimization, Python implementation. \"\"\"\n",
    "    def __init__(self, hyperparams):\n",
    "        config = copy.deepcopy(TRAJ_OPT_LQR)\n",
    "        config.update(hyperparams)\n",
    "\n",
    "        TrajOpt.__init__(self, config)\n",
    "\n",
    "        self.cons_per_step = config['cons_per_step']\n",
    "        self._use_prev_distr = config['use_prev_distr']\n",
    "        self._update_in_bwd_pass = config['update_in_bwd_pass']\n",
    "\n",
    "    # TODO - Add arg and return spec on this function.\n",
    "    def update(self, m, algorithm):\n",
    "        \"\"\" Run dual gradient decent to optimize trajectories. \"\"\"\n",
    "        T = algorithm.T\n",
    "        eta = algorithm.cur[m].eta\n",
    "        if self.cons_per_step and type(eta) in (int, float):\n",
    "            eta = np.ones(T) * eta\n",
    "        step_mult = algorithm.cur[m].step_mult\n",
    "        traj_info = algorithm.cur[m].traj_info\n",
    "\n",
    "        if isinstance(algorithm, AlgorithmMDGPS):\n",
    "            # For MDGPS, constrain to previous NN linearization\n",
    "            prev_traj_distr = algorithm.cur[m].pol_info.traj_distr()\n",
    "        else:\n",
    "            # For BADMM/trajopt, constrain to previous LG controller\n",
    "            prev_traj_distr = algorithm.cur[m].traj_distr\n",
    "\n",
    "        # Set KL-divergence step size (epsilon).\n",
    "        kl_step = algorithm.base_kl_step * step_mult\n",
    "        if not self.cons_per_step:\n",
    "            kl_step *= T\n",
    "\n",
    "        # We assume at min_eta, kl_div > kl_step, opposite for max_eta.\n",
    "        if not self.cons_per_step:\n",
    "            min_eta = self._hyperparams['min_eta']\n",
    "            max_eta = self._hyperparams['max_eta']\n",
    "            LOGGER.debug(\"Running DGD for trajectory %d, eta: %f\", m, eta)\n",
    "        else:\n",
    "            min_eta = np.ones(T) * self._hyperparams['min_eta']\n",
    "            max_eta = np.ones(T) * self._hyperparams['max_eta']\n",
    "            LOGGER.debug(\"Running DGD for trajectory %d, avg eta: %f\", m,\n",
    "                         np.mean(eta[:-1]))\n",
    "\n",
    "        max_itr = (DGD_MAX_LS_ITER if self.cons_per_step else\n",
    "                   DGD_MAX_ITER)\n",
    "        for itr in range(max_itr):\n",
    "            if not self.cons_per_step:\n",
    "                LOGGER.debug(\"Iteration %d, bracket: (%.2e , %.2e , %.2e)\", itr,\n",
    "                             min_eta, eta, max_eta)\n",
    "\n",
    "            # Run fwd/bwd pass, note that eta may be updated.\n",
    "            # Compute KL divergence constraint violation.\n",
    "            traj_distr, eta = self.backward(prev_traj_distr, traj_info,\n",
    "                                            eta, algorithm, m)\n",
    "\n",
    "            if not self._use_prev_distr:\n",
    "                new_mu, new_sigma = self.forward(traj_distr, traj_info)\n",
    "                kl_div = traj_distr_kl(\n",
    "                        new_mu, new_sigma, traj_distr, prev_traj_distr,\n",
    "                        tot=(not self.cons_per_step)\n",
    "                )\n",
    "            else:\n",
    "                prev_mu, prev_sigma = self.forward(prev_traj_distr, traj_info)\n",
    "                kl_div = traj_distr_kl_alt(\n",
    "                        prev_mu, prev_sigma, traj_distr, prev_traj_distr,\n",
    "                        tot=(not self.cons_per_step)\n",
    "                )\n",
    "\n",
    "            con = kl_div - kl_step\n",
    "\n",
    "            # Convergence check - constraint satisfaction.\n",
    "            if self._conv_check(con, kl_step):\n",
    "                if not self.cons_per_step:\n",
    "                    LOGGER.debug(\"KL: %f / %f, converged iteration %d\", kl_div,\n",
    "                                 kl_step, itr)\n",
    "                else:\n",
    "                    LOGGER.debug(\n",
    "                            \"KL: %f / %f, converged iteration %d\",\n",
    "                            np.mean(kl_div[:-1]), np.mean(kl_step[:-1]), itr\n",
    "                    )\n",
    "                break\n",
    "\n",
    "            if not self.cons_per_step:\n",
    "                # Choose new eta (bisect bracket or multiply by constant)\n",
    "                if con < 0: # Eta was too big.\n",
    "                    max_eta = eta\n",
    "                    geom = np.sqrt(min_eta*max_eta)  # Geometric mean.\n",
    "                    new_eta = max(geom, 0.1*max_eta)\n",
    "                    LOGGER.debug(\"KL: %f / %f, eta too big, new eta: %f\",\n",
    "                                 kl_div, kl_step, new_eta)\n",
    "                else: # Eta was too small.\n",
    "                    min_eta = eta\n",
    "                    geom = np.sqrt(min_eta*max_eta)  # Geometric mean.\n",
    "                    new_eta = min(geom, 10.0*min_eta)\n",
    "                    LOGGER.debug(\"KL: %f / %f, eta too small, new eta: %f\",\n",
    "                                 kl_div, kl_step, new_eta)\n",
    "\n",
    "                # Logarithmic mean: log_mean(x,y) = (y - x)/(log(y) - log(x))\n",
    "                eta = new_eta\n",
    "            else:\n",
    "                for t in range(T):\n",
    "                    if con[t] < 0:\n",
    "                        max_eta[t] = eta[t]\n",
    "                        geom = np.sqrt(min_eta[t]*max_eta[t])\n",
    "                        eta[t] = max(geom, 0.1*max_eta[t])\n",
    "                    else:\n",
    "                        min_eta[t] = eta[t]\n",
    "                        geom = np.sqrt(min_eta[t]*max_eta[t])\n",
    "                        eta[t] = min(geom, 10.0*min_eta[t])\n",
    "                if itr % 10 == 0:\n",
    "                    LOGGER.debug(\"avg KL: %f / %f, avg new eta: %f\",\n",
    "                                 np.mean(kl_div[:-1]), np.mean(kl_step[:-1]),\n",
    "                                 np.mean(eta[:-1]))\n",
    "\n",
    "        if (self.cons_per_step and not self._conv_check(con, kl_step)):\n",
    "            m_b, v_b = np.zeros(T-1), np.zeros(T-1)\n",
    "\n",
    "            for itr in range(DGD_MAX_GD_ITER):\n",
    "                traj_distr, eta = self.backward(prev_traj_distr, traj_info,\n",
    "                                                eta, algorithm, m)\n",
    "\n",
    "                if not self._use_prev_distr:\n",
    "                    new_mu, new_sigma = self.forward(traj_distr, traj_info)\n",
    "                    kl_div = traj_distr_kl(\n",
    "                            new_mu, new_sigma, traj_distr, prev_traj_distr,\n",
    "                            tot=False\n",
    "                    )\n",
    "                else:\n",
    "                    prev_mu, prev_sigma = self.forward(prev_traj_distr,\n",
    "                                                       traj_info)\n",
    "                    kl_div = traj_distr_kl_alt(\n",
    "                            prev_mu, prev_sigma, traj_distr, prev_traj_distr,\n",
    "                            tot=False\n",
    "                    )\n",
    "\n",
    "                con = kl_div - kl_step\n",
    "                if self._conv_check(con, kl_step):\n",
    "                    LOGGER.debug(\n",
    "                            \"KL: %f / %f, converged iteration %d\",\n",
    "                            np.mean(kl_div[:-1]), np.mean(kl_step[:-1]), itr\n",
    "                    )\n",
    "                    break\n",
    "\n",
    "                m_b = (BETA1 * m_b + (1-BETA1) * con[:-1])\n",
    "                m_u = m_b / (1 - BETA1 ** (itr+1))\n",
    "                v_b = (BETA2 * v_b + (1-BETA2) * np.square(con[:-1]))\n",
    "                v_u = v_b / (1 - BETA2 ** (itr+1))\n",
    "                eta[:-1] = np.minimum(\n",
    "                        np.maximum(eta[:-1] + ALPHA * m_u / (np.sqrt(v_u) + EPS),\n",
    "                                   self._hyperparams['min_eta']),\n",
    "                        self._hyperparams['max_eta']\n",
    "                )\n",
    "\n",
    "                if itr % 10 == 0:\n",
    "                    LOGGER.debug(\"avg KL: %f / %f, avg new eta: %f\",\n",
    "                                 np.mean(kl_div[:-1]), np.mean(kl_step[:-1]),\n",
    "                                 np.mean(eta[:-1]))\n",
    "\n",
    "        if (np.mean(kl_div) > np.mean(kl_step) and\n",
    "            not self._conv_check(con, kl_step)):\n",
    "            LOGGER.warning(\n",
    "                    \"Final KL divergence after DGD convergence is too high.\"\n",
    "            )\n",
    "        return traj_distr, eta\n",
    "\n",
    "    def estimate_cost(self, traj_distr, traj_info):\n",
    "        \"\"\" Compute Laplace approximation to expected cost. \"\"\"\n",
    "        # Constants.\n",
    "        T = traj_distr.T\n",
    "\n",
    "        # Perform forward pass (note that we repeat this here, because\n",
    "        # traj_info may have different dynamics from the ones that were\n",
    "        # used to compute the distribution already saved in traj).\n",
    "        mu, sigma = self.forward(traj_distr, traj_info)\n",
    "\n",
    "        # Compute cost.\n",
    "        predicted_cost = np.zeros(T)\n",
    "        for t in range(T):\n",
    "            predicted_cost[t] = traj_info.cc[t] + 0.5 * \\\n",
    "                    np.sum(sigma[t, :, :] * traj_info.Cm[t, :, :]) + 0.5 * \\\n",
    "                    mu[t, :].T.dot(traj_info.Cm[t, :, :]).dot(mu[t, :]) + \\\n",
    "                    mu[t, :].T.dot(traj_info.cv[t, :])\n",
    "        return predicted_cost\n",
    "\n",
    "    def forward(self, traj_distr, traj_info):\n",
    "        \"\"\"\n",
    "        Perform LQR forward pass. Computes state-action marginals from\n",
    "        dynamics and policy.\n",
    "        Args:\n",
    "            traj_distr: A linear Gaussian policy object.\n",
    "            traj_info: A TrajectoryInfo object.\n",
    "        Returns:\n",
    "            mu: A T x dX mean action vector.\n",
    "            sigma: A T x dX x dX covariance matrix.\n",
    "        \"\"\"\n",
    "        # Compute state-action marginals from specified conditional\n",
    "        # parameters and current traj_info.\n",
    "        T = traj_distr.T\n",
    "        dU = traj_distr.dU\n",
    "        dX = traj_distr.dX\n",
    "\n",
    "        # Constants.\n",
    "        idx_x = slice(dX)\n",
    "\n",
    "        # Allocate space.\n",
    "        sigma = np.zeros((T, dX+dU, dX+dU))\n",
    "        mu = np.zeros((T, dX+dU))\n",
    "\n",
    "        # Pull out dynamics.\n",
    "        Fm = traj_info.dynamics.Fm\n",
    "        fv = traj_info.dynamics.fv\n",
    "        dyn_covar = traj_info.dynamics.dyn_covar\n",
    "\n",
    "        # Set initial covariance (initial mu is always zero).\n",
    "        sigma[0, idx_x, idx_x] = traj_info.x0sigma\n",
    "        mu[0, idx_x] = traj_info.x0mu\n",
    "\n",
    "        for t in range(T):\n",
    "            sigma[t, :, :] = np.vstack([\n",
    "                np.hstack([\n",
    "                    sigma[t, idx_x, idx_x],\n",
    "                    sigma[t, idx_x, idx_x].dot(traj_distr.K[t, :, :].T)\n",
    "                ]),\n",
    "                np.hstack([\n",
    "                    traj_distr.K[t, :, :].dot(sigma[t, idx_x, idx_x]),\n",
    "                    traj_distr.K[t, :, :].dot(sigma[t, idx_x, idx_x]).dot(\n",
    "                        traj_distr.K[t, :, :].T\n",
    "                    ) + traj_distr.pol_covar[t, :, :]\n",
    "                ])\n",
    "            ])\n",
    "            mu[t, :] = np.hstack([\n",
    "                mu[t, idx_x],\n",
    "                traj_distr.K[t, :, :].dot(mu[t, idx_x]) + traj_distr.k[t, :]\n",
    "            ])\n",
    "            if t < T - 1:\n",
    "                sigma[t+1, idx_x, idx_x] = \\\n",
    "                        Fm[t, :, :].dot(sigma[t, :, :]).dot(Fm[t, :, :].T) + \\\n",
    "                        dyn_covar[t, :, :]\n",
    "                mu[t+1, idx_x] = Fm[t, :, :].dot(mu[t, :]) + fv[t, :]\n",
    "        return mu, sigma\n",
    "\n",
    "    def backward(self, prev_traj_distr, traj_info, eta, algorithm, m):\n",
    "        \"\"\"\n",
    "        Perform LQR backward pass. This computes a new linear Gaussian\n",
    "        policy object.\n",
    "        Args:\n",
    "            prev_traj_distr: A linear Gaussian policy object from\n",
    "                previous iteration.\n",
    "            traj_info: A TrajectoryInfo object.\n",
    "            eta: Dual variable.\n",
    "            algorithm: Algorithm object needed to compute costs.\n",
    "            m: Condition number.\n",
    "        Returns:\n",
    "            traj_distr: A new linear Gaussian policy.\n",
    "            new_eta: The updated dual variable. Updates happen if the\n",
    "                Q-function is not PD.\n",
    "        \"\"\"\n",
    "        # Constants.\n",
    "        T = prev_traj_distr.T\n",
    "        dU = prev_traj_distr.dU\n",
    "        dX = prev_traj_distr.dX\n",
    "\n",
    "        if self._update_in_bwd_pass:\n",
    "            traj_distr = prev_traj_distr.nans_like()\n",
    "        else:\n",
    "            traj_distr = prev_traj_distr.copy()\n",
    "\n",
    "        # Store pol_wt if necessary\n",
    "        if type(algorithm) == AlgorithmBADMM:\n",
    "            pol_wt = algorithm.cur[m].pol_info.pol_wt\n",
    "\n",
    "        idx_x = slice(dX)\n",
    "        idx_u = slice(dX, dX+dU)\n",
    "\n",
    "        # Pull out dynamics.\n",
    "        Fm = traj_info.dynamics.Fm\n",
    "        fv = traj_info.dynamics.fv\n",
    "\n",
    "        # Non-SPD correction terms.\n",
    "        del_ = self._hyperparams['del0']\n",
    "        if self.cons_per_step:\n",
    "            del_ = np.ones(T) * del_\n",
    "        eta0 = eta\n",
    "\n",
    "        # Run dynamic programming.\n",
    "        fail = True\n",
    "        while fail:\n",
    "            fail = False  # Flip to true on non-symmetric PD.\n",
    "\n",
    "            # Allocate.\n",
    "            Vxx = np.zeros((T, dX, dX))\n",
    "            Vx = np.zeros((T, dX))\n",
    "            Qtt = np.zeros((T, dX+dU, dX+dU))\n",
    "            Qt = np.zeros((T, dX+dU))\n",
    "\n",
    "            if not self._update_in_bwd_pass:\n",
    "                new_K, new_k = np.zeros((T, dU, dX)), np.zeros((T, dU))\n",
    "                new_pS = np.zeros((T, dU, dU))\n",
    "                new_ipS, new_cpS = np.zeros((T, dU, dU)), np.zeros((T, dU, dU))\n",
    "\n",
    "            fCm, fcv = algorithm.compute_costs(\n",
    "                    m, eta, augment=(not self.cons_per_step)\n",
    "            )\n",
    "\n",
    "            # Compute state-action-state function at each time step.\n",
    "            for t in range(T - 1, -1, -1):\n",
    "                # Add in the cost.\n",
    "                Qtt[t] = fCm[t, :, :]  # (X+U) x (X+U)\n",
    "                Qt[t] = fcv[t, :]  # (X+U) x 1\n",
    "\n",
    "                # Add in the value function from the next time step.\n",
    "                if t < T - 1:\n",
    "                    if type(algorithm) == AlgorithmBADMM:\n",
    "                        multiplier = (pol_wt[t+1] + eta)/(pol_wt[t] + eta)\n",
    "                    else:\n",
    "                        multiplier = 1.0\n",
    "                    Qtt[t] += multiplier * \\\n",
    "                            Fm[t, :, :].T.dot(Vxx[t+1, :, :]).dot(Fm[t, :, :])\n",
    "                    Qt[t] += multiplier * \\\n",
    "                            Fm[t, :, :].T.dot(Vx[t+1, :] +\n",
    "                                            Vxx[t+1, :, :].dot(fv[t, :]))\n",
    "\n",
    "                # Symmetrize quadratic component.\n",
    "                Qtt[t] = 0.5 * (Qtt[t] + Qtt[t].T)\n",
    "\n",
    "                if not self.cons_per_step:\n",
    "                    inv_term = Qtt[t, idx_u, idx_u]\n",
    "                    k_term = Qt[t, idx_u]\n",
    "                    K_term = Qtt[t, idx_u, idx_x]\n",
    "                else:\n",
    "                    inv_term = (1.0 / eta[t]) * Qtt[t, idx_u, idx_u] + \\\n",
    "                            prev_traj_distr.inv_pol_covar[t]\n",
    "                    k_term = (1.0 / eta[t]) * Qt[t, idx_u] - \\\n",
    "                            prev_traj_distr.inv_pol_covar[t].dot(prev_traj_distr.k[t])\n",
    "                    K_term = (1.0 / eta[t]) * Qtt[t, idx_u, idx_x] - \\\n",
    "                            prev_traj_distr.inv_pol_covar[t].dot(prev_traj_distr.K[t])\n",
    "                # Compute Cholesky decomposition of Q function action\n",
    "                # component.\n",
    "                try:\n",
    "                    U = sp.linalg.cholesky(inv_term)\n",
    "                    L = U.T\n",
    "                except LinAlgError as e:\n",
    "                    # Error thrown when Qtt[idx_u, idx_u] is not\n",
    "                    # symmetric positive definite.\n",
    "                    LOGGER.debug('LinAlgError: %s', e)\n",
    "                    fail = t if self.cons_per_step else True\n",
    "                    break\n",
    "\n",
    "                if self._hyperparams['update_in_bwd_pass']:\n",
    "                    # Store conditional covariance, inverse, and Cholesky.\n",
    "                    traj_distr.inv_pol_covar[t, :, :] = inv_term\n",
    "                    traj_distr.pol_covar[t, :, :] = sp.linalg.solve_triangular(\n",
    "                        U, sp.linalg.solve_triangular(L, np.eye(dU), lower=True)\n",
    "                    )\n",
    "                    traj_distr.chol_pol_covar[t, :, :] = sp.linalg.cholesky(\n",
    "                        traj_distr.pol_covar[t, :, :]\n",
    "                    )\n",
    "\n",
    "                    # Compute mean terms.\n",
    "                    traj_distr.k[t, :] = -sp.linalg.solve_triangular(\n",
    "                        U, sp.linalg.solve_triangular(L, k_term, lower=True)\n",
    "                    )\n",
    "                    traj_distr.K[t, :, :] = -sp.linalg.solve_triangular(\n",
    "                        U, sp.linalg.solve_triangular(L, K_term, lower=True)\n",
    "                    )\n",
    "                else:\n",
    "                    # Store conditional covariance, inverse, and Cholesky.\n",
    "                    new_ipS[t, :, :] = inv_term\n",
    "                    new_pS[t, :, :] = sp.linalg.solve_triangular(\n",
    "                        U, sp.linalg.solve_triangular(L, np.eye(dU), lower=True)\n",
    "                    )\n",
    "                    new_cpS[t, :, :] = sp.linalg.cholesky(\n",
    "                        new_pS[t, :, :]\n",
    "                    )\n",
    "\n",
    "                    # Compute mean terms.\n",
    "                    new_k[t, :] = -sp.linalg.solve_triangular(\n",
    "                        U, sp.linalg.solve_triangular(L, k_term, lower=True)\n",
    "                    )\n",
    "                    new_K[t, :, :] = -sp.linalg.solve_triangular(\n",
    "                        U, sp.linalg.solve_triangular(L, K_term, lower=True)\n",
    "                    )\n",
    "\n",
    "                # Compute value function.\n",
    "                if (self.cons_per_step or\n",
    "                    not self._hyperparams['update_in_bwd_pass']):\n",
    "                    Vxx[t, :, :] = Qtt[t, idx_x, idx_x] + \\\n",
    "                            traj_distr.K[t].T.dot(Qtt[t, idx_u, idx_u]).dot(traj_distr.K[t]) + \\\n",
    "                            (2 * Qtt[t, idx_x, idx_u]).dot(traj_distr.K[t])\n",
    "                    Vx[t, :] = Qt[t, idx_x].T + \\\n",
    "                            Qt[t, idx_u].T.dot(traj_distr.K[t]) + \\\n",
    "                            traj_distr.k[t].T.dot(Qtt[t, idx_u, idx_u]).dot(traj_distr.K[t]) + \\\n",
    "                            Qtt[t, idx_x, idx_u].dot(traj_distr.k[t])\n",
    "                else:\n",
    "                    Vxx[t, :, :] = Qtt[t, idx_x, idx_x] + \\\n",
    "                            Qtt[t, idx_x, idx_u].dot(traj_distr.K[t, :, :])\n",
    "                    Vx[t, :] = Qt[t, idx_x] + \\\n",
    "                            Qtt[t, idx_x, idx_u].dot(traj_distr.k[t, :])\n",
    "                Vxx[t, :, :] = 0.5 * (Vxx[t, :, :] + Vxx[t, :, :].T)\n",
    "\n",
    "            if not self._hyperparams['update_in_bwd_pass']:\n",
    "                traj_distr.K, traj_distr.k = new_K, new_k\n",
    "                traj_distr.pol_covar = new_pS\n",
    "                traj_distr.inv_pol_covar = new_ipS\n",
    "                traj_distr.chol_pol_covar = new_cpS\n",
    "\n",
    "            # Increment eta on non-SPD Q-function.\n",
    "            if fail:\n",
    "                if not self.cons_per_step:\n",
    "                    old_eta = eta\n",
    "                    eta = eta0 + del_\n",
    "                    LOGGER.debug('Increasing eta: %f -> %f', old_eta, eta)\n",
    "                    del_ *= 2  # Increase del_ exponentially on failure.\n",
    "                else:\n",
    "                    old_eta = eta[fail]\n",
    "                    eta[fail] = eta0[fail] + del_[fail]\n",
    "                    LOGGER.debug('Increasing eta %d: %f -> %f',\n",
    "                                 fail, old_eta, eta[fail])\n",
    "                    del_[fail] *= 2  # Increase del_ exponentially on failure.\n",
    "                if self.cons_per_step:\n",
    "                    fail_check = (eta[fail] >= 1e16)\n",
    "                else:\n",
    "                    fail_check = (eta >= 1e16)\n",
    "                if fail_check:\n",
    "                    if np.any(np.isnan(Fm)) or np.any(np.isnan(fv)):\n",
    "                        raise ValueError('NaNs encountered in dynamics!')\n",
    "                    raise ValueError('Failed to find PD solution even for very \\\n",
    "                            large eta (check that dynamics and cost are \\\n",
    "                            reasonably well conditioned)!')\n",
    "        return traj_distr, eta\n",
    "\n",
    "    def _conv_check(self, con, kl_step):\n",
    "        \"\"\"Function that checks whether dual gradient descent has converged.\"\"\"\n",
    "        if self.cons_per_step:\n",
    "            return all([abs(con[t]) < (0.1*kl_step[t]) for t in range(con.size)])\n",
    "        return abs(con) < 0.1 * kl_step\n"
   ]
  },
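  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Reading `_conv_check` above as a formula (a paraphrase of the code, with $\\epsilon$ denoting the KL step size and `con` the difference between the realized KL divergence and $\\epsilon$): dual gradient descent is treated as converged when\n",
    "\n",
    "$$ \\left| D_{KL}\\big(p_{new}(\\tau) \\,\\|\\, p_{old}(\\tau)\\big) - \\epsilon \\right| < 0.1\\,\\epsilon, $$\n",
    "\n",
    "i.e. the new trajectory distribution lands within $10\\%$ of the KL step, checked either per time step (when `cons_per_step` is set) or once for the whole trajectory."
   ]
  },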
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Cost"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "First, utilities and functions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [],
   "source": [
    "RAMP_CONSTANT = 1\n",
    "RAMP_LINEAR = 2\n",
    "RAMP_QUADRATIC = 3\n",
    "RAMP_FINAL_ONLY = 4\n",
    "\n",
    "# CostAction\n",
    "COST_ACTION = {\n",
    "    'wu': np.array([]),  # Torque penalties, must be 1 x dU numpy array.\n",
    "}\n",
    "\n",
    "# CostState\n",
    "COST_STATE = {\n",
    "    'ramp_option': RAMP_CONSTANT,  # How target cost ramps over time.\n",
    "    'l1': 0.0,\n",
    "    'l2': 1.0,\n",
    "    'alpha': 1e-2,\n",
    "    'wp_final_multiplier': 1.0,  # Weight multiplier on final time step.\n",
    "    'data_types': {\n",
    "        'JointAngle': {\n",
    "            'target_state': None,  # Target state - must be set.\n",
    "            'wp': None,  # State weights - must be set.\n",
    "        },\n",
    "    },\n",
    "}\n",
    "\n",
    "# CostSum\n",
    "COST_SUM = {\n",
    "    'costs': [],  # A list of hyperparam dictionaries for each cost.\n",
    "    'weights': [],  # Weight multipliers for each cost.\n",
    "}\n",
    "\n",
    "def get_ramp_multiplier(ramp_option, T, wp_final_multiplier=1.0):\n",
    "    \"\"\"\n",
    "    Return a time-varying multiplier.\n",
    "    Returns:\n",
    "        A (T,) float vector containing weights for each time step.\n",
    "    \"\"\"\n",
    "    if ramp_option == RAMP_CONSTANT:\n",
    "        wpm = np.ones(T)\n",
    "    elif ramp_option == RAMP_LINEAR:\n",
    "        wpm = (np.arange(T, dtype=np.float32) + 1) / T\n",
    "    elif ramp_option == RAMP_QUADRATIC:\n",
    "        wpm = ((np.arange(T, dtype=np.float32) + 1) / T) ** 2\n",
    "    elif ramp_option == RAMP_FINAL_ONLY:\n",
    "        wpm = np.zeros(T)\n",
    "        wpm[T-1] = 1.0\n",
    "    else:\n",
    "        raise ValueError('Unknown cost ramp requested!')\n",
    "    wpm[-1] *= wp_final_multiplier\n",
    "    return wpm\n",
    "\n",
    "def evall1l2term(wp, d, Jd, Jdd, l1, l2, alpha):\n",
    "    \"\"\"\n",
    "    Evaluate and compute derivatives for combined l1/l2 norm penalty.\n",
    "    loss = (0.5 * l2 * d^2) + (l1 * sqrt(alpha + d^2))\n",
    "    Args:\n",
    "        wp: T x D matrix with weights for each dimension and time step.\n",
    "        d: T x D states to evaluate norm on.\n",
    "        Jd: T x D x Dx Jacobian - derivative of d with respect to state.\n",
    "        Jdd: T x D x Dx x Dx Jacobian - 2nd derivative of d with respect\n",
    "            to state.\n",
    "        l1: l1 loss weight.\n",
    "        l2: l2 loss weight.\n",
    "        alpha: Constant added in square root.\n",
    "    \"\"\"\n",
    "    # Get trajectory length.\n",
    "    T, _ = d.shape\n",
    "\n",
    "    # Compute scaled quantities.\n",
    "    sqrtwp = np.sqrt(wp)\n",
    "    dsclsq = d * sqrtwp\n",
    "    dscl = d * wp\n",
    "    dscls = d * (wp ** 2)\n",
    "\n",
    "    # Compute total cost.\n",
    "    l = 0.5 * np.sum(dsclsq ** 2, axis=1) * l2 + \\\n",
    "            np.sqrt(alpha + np.sum(dscl ** 2, axis=1)) * l1\n",
    "\n",
    "    # First order derivative terms.\n",
    "    d1 = dscl * l2 + (\n",
    "        dscls / np.sqrt(alpha + np.sum(dscl ** 2, axis=1, keepdims=True)) * l1\n",
    "    )\n",
    "    lx = np.sum(Jd * np.expand_dims(d1, axis=2), axis=1)\n",
    "\n",
    "    # Second order terms.\n",
    "    psq = np.expand_dims(\n",
    "        np.sqrt(alpha + np.sum(dscl ** 2, axis=1, keepdims=True)), axis=1\n",
    "    )\n",
    "    d2 = l1 * (\n",
    "        (np.expand_dims(np.eye(wp.shape[1]), axis=0) *\n",
    "         (np.expand_dims(wp ** 2, axis=1) / psq)) -\n",
    "        ((np.expand_dims(dscls, axis=1) *\n",
    "          np.expand_dims(dscls, axis=2)) / psq ** 3)\n",
    "    )\n",
    "    d2 += l2 * (\n",
    "        np.expand_dims(wp, axis=2) * np.tile(np.eye(wp.shape[1]), [T, 1, 1])\n",
    "    )\n",
    "\n",
    "    d1_expand = np.expand_dims(np.expand_dims(d1, axis=-1), axis=-1)\n",
    "    sec = np.sum(d1_expand * Jdd, axis=1)\n",
    "\n",
    "    Jd_expand_1 = np.expand_dims(np.expand_dims(Jd, axis=2), axis=4)\n",
    "    Jd_expand_2 = np.expand_dims(np.expand_dims(Jd, axis=1), axis=3)\n",
    "    d2_expand = np.expand_dims(np.expand_dims(d2, axis=-1), axis=-1)\n",
    "    lxx = np.sum(np.sum(Jd_expand_1 * Jd_expand_2 * d2_expand, axis=1), axis=1)\n",
    "\n",
    "    lxx += 0.5 * sec + 0.5 * np.transpose(sec, [0, 2, 1])\n",
    "\n",
    "    return l, lx, lxx\n",
    "\n",
    "def evallogl2term(wp, d, Jd, Jdd, l1, l2, alpha):\n",
    "    \"\"\"\n",
    "    Evaluate and compute derivatives for combined l1/l2 norm penalty.\n",
    "    loss = (0.5 * l2 * d^2) + (0.5 * l1 * log(alpha + d^2))\n",
    "    Args:\n",
    "        wp: T x D matrix with weights for each dimension and time step.\n",
    "        d: T x D states to evaluate norm on.\n",
    "        Jd: T x D x Dx Jacobian - derivative of d with respect to state.\n",
    "        Jdd: T x D x Dx x Dx Jacobian - 2nd derivative of d with respect\n",
    "            to state.\n",
    "        l1: l1 loss weight.\n",
    "        l2: l2 loss weight.\n",
    "        alpha: Constant added in square root.\n",
    "    \"\"\"\n",
    "    # Get trajectory length.\n",
    "    T, _ = d.shape\n",
    "\n",
    "    # Compute scaled quantities.\n",
    "    sqrtwp = np.sqrt(wp)\n",
    "    dsclsq = d * sqrtwp\n",
    "    dscl = d * wp\n",
    "    dscls = d * (wp ** 2)\n",
    "\n",
    "    # Compute total cost.\n",
    "    l = 0.5 * np.sum(dsclsq ** 2, axis=1) * l2 + \\\n",
    "            0.5 * np.log(alpha + np.sum(dscl ** 2, axis=1)) * l1\n",
    "    # First order derivative terms.\n",
    "    d1 = dscl * l2 + (\n",
    "        dscls / (alpha + np.sum(dscl ** 2, axis=1, keepdims=True)) * l1\n",
    "    )\n",
    "    lx = np.sum(Jd * np.expand_dims(d1, axis=2), axis=1)\n",
    "\n",
    "    # Second order terms.\n",
    "    psq = np.expand_dims(\n",
    "        alpha + np.sum(dscl ** 2, axis=1, keepdims=True), axis=1\n",
    "    )\n",
    "    #TODO: Need * 2.0 somewhere in following line, or * 0.0 which is\n",
    "    #      wrong but better.\n",
    "    d2 = l1 * (\n",
    "        (np.expand_dims(np.eye(wp.shape[1]), axis=0) *\n",
    "         (np.expand_dims(wp ** 2, axis=1) / psq)) -\n",
    "        ((np.expand_dims(dscls, axis=1) *\n",
    "          np.expand_dims(dscls, axis=2)) / psq ** 2)\n",
    "    )\n",
    "    d2 += l2 * (\n",
    "        np.expand_dims(wp, axis=2) * np.tile(np.eye(wp.shape[1]), [T, 1, 1])\n",
    "    )\n",
    "\n",
    "    d1_expand = np.expand_dims(np.expand_dims(d1, axis=-1), axis=-1)\n",
    "    sec = np.sum(d1_expand * Jdd, axis=1)\n",
    "\n",
    "    Jd_expand_1 = np.expand_dims(np.expand_dims(Jd, axis=2), axis=4)\n",
    "    Jd_expand_2 = np.expand_dims(np.expand_dims(Jd, axis=1), axis=3)\n",
    "    d2_expand = np.expand_dims(np.expand_dims(d2, axis=-1), axis=-1)\n",
    "    lxx = np.sum(np.sum(Jd_expand_1 * Jd_expand_2 * d2_expand, axis=1), axis=1)\n",
    "\n",
    "    lxx += 0.5 * sec + 0.5 * np.transpose(sec, [0, 2, 1])\n",
    "\n",
    "    return l, lx, lxx\n"
   ]
  },
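  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In scalar form (ignoring the per-dimension weights $w_p$), the two penalties implemented above are\n",
    "\n",
    "$$ \\ell_{l1/l2}(d) = \\tfrac{1}{2} l_2 d^2 + l_1 \\sqrt{\\alpha + d^2}, \\qquad \\ell_{log}(d) = \\tfrac{1}{2} l_2 d^2 + \\tfrac{1}{2} l_1 \\log(\\alpha + d^2), $$\n",
    "\n",
    "with first derivatives\n",
    "\n",
    "$$ \\ell_{l1/l2}'(d) = l_2 d + \\frac{l_1 d}{\\sqrt{\\alpha + d^2}}, \\qquad \\ell_{log}'(d) = l_2 d + \\frac{l_1 d}{\\alpha + d^2}, $$\n",
    "\n",
    "which is what the `d1` terms in `evall1l2term` and `evallogl2term` compute once $d$ is rescaled by the weights; the constant $\\alpha$ keeps the gradient finite at $d = 0$. As a concrete check of the ramp options: with $T = 4$, `RAMP_LINEAR` produces the weights $(0.25,\\, 0.5,\\, 0.75,\\, 1.0)$ before the final-step multiplier is applied."
   ]
  },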
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "class Cost(object):\n",
    "    \"\"\" Cost superclass. \"\"\"\n",
    "    __metaclass__ = abc.ABCMeta\n",
    "\n",
    "    def __init__(self, hyperparams):\n",
    "        self._hyperparams = hyperparams\n",
    "\n",
    "    @abc.abstractmethod\n",
    "    def eval(self, sample):\n",
    "        \"\"\"\n",
    "        Evaluate cost function and derivatives.\n",
    "        Args:\n",
    "            sample:  A single sample.\n",
    "        \"\"\"\n",
    "        raise NotImplementedError(\"Must be implemented in subclass.\")\n",
    "\n",
    "class CostAction(Cost):\n",
    "    \"\"\" Computes torque penalties. \"\"\"\n",
    "    def __init__(self, hyperparams):\n",
    "        config = copy.deepcopy(COST_ACTION)\n",
    "        config.update(hyperparams)\n",
    "        Cost.__init__(self, config)\n",
    "\n",
    "    def eval(self, sample):\n",
    "        \"\"\"\n",
    "        Evaluate cost function and derivatives on a sample.\n",
    "        Args:\n",
    "            sample: A single sample\n",
    "        \"\"\"\n",
    "        sample_u = sample.get_U()\n",
    "        T = sample.T\n",
    "        Du = sample.dU\n",
    "        Dx = sample.dX\n",
    "        l = 0.5 * np.sum(self._hyperparams['wu'] * (sample_u ** 2), axis=1)\n",
    "        lu = self._hyperparams['wu'] * sample_u\n",
    "        lx = np.zeros((T, Dx))\n",
    "        luu = np.tile(np.diag(self._hyperparams['wu']), [T, 1, 1])\n",
    "        lxx = np.zeros((T, Dx, Dx))\n",
    "        lux = np.zeros((T, Du, Dx))\n",
    "        return l, lx, lu, lxx, luu, lux\n",
    "\n",
    "class CostState(Cost):\n",
    "    \"\"\" Computes l1/l2 distance to a fixed target state. \"\"\"\n",
    "    def __init__(self, hyperparams):\n",
    "        config = copy.deepcopy(COST_STATE)\n",
    "        config.update(hyperparams)\n",
    "        Cost.__init__(self, config)\n",
    "\n",
    "    def eval(self, sample):\n",
    "        \"\"\"\n",
    "        Evaluate cost function and derivatives on a sample.\n",
    "        Args:\n",
    "            sample:  A single sample\n",
    "        \"\"\"\n",
    "        T = sample.T\n",
    "        Du = sample.dU\n",
    "        Dx = sample.dX\n",
    "\n",
    "        final_l = np.zeros(T)\n",
    "        final_lu = np.zeros((T, Du))\n",
    "        final_lx = np.zeros((T, Dx))\n",
    "        final_luu = np.zeros((T, Du, Du))\n",
    "        final_lxx = np.zeros((T, Dx, Dx))\n",
    "        final_lux = np.zeros((T, Du, Dx))\n",
    "\n",
    "        for data_type in self._hyperparams['data_types']:\n",
    "            config = self._hyperparams['data_types'][data_type]\n",
    "            wp = config['wp']\n",
    "            tgt = config['target_state']\n",
    "            x = sample.get(data_type)\n",
    "            _, dim_sensor = x.shape\n",
    "\n",
    "            wpm = get_ramp_multiplier(\n",
    "                self._hyperparams['ramp_option'], T,\n",
    "                wp_final_multiplier=self._hyperparams['wp_final_multiplier']\n",
    "            )\n",
    "            wp = wp * np.expand_dims(wpm, axis=-1)\n",
    "            # Compute state penalty.\n",
    "            dist = x - tgt\n",
    "\n",
    "            # Evaluate penalty term.\n",
    "            l, ls, lss = evall1l2term(\n",
    "                wp, dist, np.tile(np.eye(dim_sensor), [T, 1, 1]),\n",
    "                np.zeros((T, dim_sensor, dim_sensor, dim_sensor)),\n",
    "                self._hyperparams['l1'], self._hyperparams['l2'],\n",
    "                self._hyperparams['alpha']\n",
    "            )\n",
    "\n",
    "            final_l += l\n",
    "\n",
    "            sample.agent.pack_data_x(final_lx, ls, data_types=[data_type])\n",
    "            sample.agent.pack_data_x(final_lxx, lss,\n",
    "                                     data_types=[data_type, data_type])\n",
    "        return final_l, final_lx, final_lu, final_lxx, final_luu, final_lux\n",
    "\n",
    "class CostSum(Cost):\n",
    "    \"\"\" A wrapper cost function that adds other cost functions. \"\"\"\n",
    "    def __init__(self, hyperparams):\n",
    "        config = copy.deepcopy(COST_SUM)\n",
    "        config.update(hyperparams)\n",
    "        Cost.__init__(self, config)\n",
    "\n",
    "        self._costs = []\n",
    "        self._weights = self._hyperparams['weights']\n",
    "\n",
    "        for cost in self._hyperparams['costs']:\n",
    "            self._costs.append(cost['type'](cost))\n",
    "\n",
    "    def eval(self, sample):\n",
    "        \"\"\"\n",
    "        Evaluate cost function and derivatives.\n",
    "        Args:\n",
    "            sample:  A single sample\n",
    "        \"\"\"\n",
    "        l, lx, lu, lxx, luu, lux = self._costs[0].eval(sample)\n",
    "\n",
    "        # Compute weighted sum of each cost value and derivatives.\n",
    "        weight = self._weights[0]\n",
    "        l = l * weight\n",
    "        lx = lx * weight\n",
    "        lu = lu * weight\n",
    "        lxx = lxx * weight\n",
    "        luu = luu * weight\n",
    "        lux = lux * weight\n",
    "        for i in range(1, len(self._costs)):\n",
    "            pl, plx, plu, plxx, pluu, plux = self._costs[i].eval(sample)\n",
    "            weight = self._weights[i]\n",
    "            l = l + pl * weight\n",
    "            lx = lx + plx * weight\n",
    "            lu = lu + plu * weight\n",
    "            lxx = lxx + plxx * weight\n",
    "            luu = luu + pluu * weight\n",
    "            lux = lux + plux * weight\n",
    "        return l, lx, lu, lxx, luu, lux"
   ]
  },
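  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sketch of how these pieces compose: `CostSum.__init__` instantiates each entry of `'costs'` via `cost['type'](cost)`, so a summed cost is configured as nested hyperparameter dictionaries. The dimension (7) and the weight values below are illustrative placeholders, not values from the original experiments."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical cost configuration: a weighted sum of a torque penalty\n",
    "# and a distance-to-target state cost (placeholder values).\n",
    "example_cost = {\n",
    "    'type': CostSum,\n",
    "    'costs': [\n",
    "        {'type': CostAction, 'wu': 1e-3 * np.ones(7)},\n",
    "        {'type': CostState, 'data_types': {\n",
    "            'JointAngle': {'target_state': np.zeros(7), 'wp': np.ones(7)},\n",
    "        }},\n",
    "    ],\n",
    "    'weights': [1.0, 1.0],\n",
    "}"
   ]
  },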
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### GUI and Datalogger\n",
    "Code related to the custom GUI coded by the group at Berkeley"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def generate_experiment_info(config):\n",
    "    \"\"\"\n",
    "    Generate experiment info, to be displayed by GPS Trainig GUI.\n",
    "    Assumes config is the config created in hyperparams.py\n",
    "    \"\"\"\n",
    "    common = config['common']\n",
    "    algorithm = config['algorithm']\n",
    "\n",
    "    if type(algorithm['cost']) == list:\n",
    "        algorithm_cost_type = algorithm['cost'][0]['type'].__name__\n",
    "        if (algorithm_cost_type) == 'CostSum':\n",
    "            algorithm_cost_type += '(%s)' % ', '.join(\n",
    "                    map(lambda cost: cost['type'].__name__,\n",
    "                        algorithm['cost'][0]['costs']))\n",
    "    else:\n",
    "        algorithm_cost_type = algorithm['cost']['type'].__name__\n",
    "        if (algorithm_cost_type) == 'CostSum':\n",
    "            algorithm_cost_type += '(%s)' % ', '.join(\n",
    "                    map(lambda cost: cost['type'].__name__,\n",
    "                        algorithm['cost']['costs']))\n",
    "\n",
    "    if 'dynamics' in algorithm:        \n",
    "        alg_dyn = str(algorithm['dynamics']['type'].__name__)\n",
    "    else:\n",
    "        alg_dyn = 'None'       \n",
    "\n",
    "    return (\n",
    "        'exp_name:   ' + str(common['experiment_name'])              + '\\n' +\n",
    "        'alg_type:   ' + str(algorithm['type'].__name__)             + '\\n' +\n",
    "        'alg_dyn:    ' + alg_dyn + '\\n' +\n",
    "        'alg_cost:   ' + str(algorithm_cost_type)                    + '\\n' +\n",
    "        'iterations: ' + str(config['iterations'])                   + '\\n' +\n",
    "        'conditions: ' + str(algorithm['conditions'])                + '\\n' +\n",
    "        'samples:    ' + str(config['num_samples'])                  + '\\n'\n",
    "    )"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### PS3 config "
   ]
  },
  {
   "cell_type": "raw",
   "metadata": {},
   "source": [
    "### DO NOT CHANGE THE CONTENTS OF THIS FILE ###\n",
    "# PS3 Joystick Buttons and Axes\n",
    "# documentation: http://wiki.ros.org/ps3joy\n",
    "\n",
    "# Mappings from PS3 buttons to their corresponding array indices.\n",
    "PS3_BUTTON = {\n",
    "    'select': 0,\n",
    "    'stick_left': 1,\n",
    "    'stick_right': 2,\n",
    "    'start': 3,\n",
    "    'cross_up': 4,\n",
    "    'cross_right': 5,\n",
    "    'cross_down': 6,\n",
    "    'cross_left': 7,\n",
    "    'rear_left_2': 8,\n",
    "    'rear_right_2': 9,\n",
    "    'rear_left_1': 10,\n",
    "    'rear_right_1': 11,\n",
    "    'action_triangle': 12,\n",
    "    'action_circle': 13,\n",
    "    'action_cross': 14,\n",
    "    'action_square': 15,\n",
    "    'pairing': 16,\n",
    "}\n",
    "INVERTED_PS3_BUTTON = {value: key for key, value in PS3_BUTTON.items()}\n",
    "\n",
    "# Mappings from PS3 axes to their corresponding array indices.\n",
    "PS3_AXIS = {\n",
    "    'stick_left_leftwards': 0,\n",
    "    'stick_left_upwards': 1,\n",
    "    'stick_right_leftwards': 2,\n",
    "    'stick_right_upwards': 3,\n",
    "    'button_cross_up': 4,\n",
    "    'button_cross_right': 5,\n",
    "    'button_cross_down': 6,\n",
    "    'button_cross_left': 7,\n",
    "    'button_rear_left_2': 8,\n",
    "    'button_rear_right_2': 9,\n",
    "    'button_rear_left_1': 10,\n",
    "    'button_rear_right_1': 11,\n",
    "    'button_action_triangle': 12,\n",
    "    'button_action_circle': 13,\n",
    "    'button_action_cross': 14,\n",
    "    'button_action_square': 15,\n",
    "    'acceleratometer_left': 16,\n",
    "    'acceleratometer_forward': 17,\n",
    "    'acceleratometer_up': 18,\n",
    "    'gyro_yaw': 19,\n",
    "}\n",
    "INVERTED_PS3_AXIS = {value: key for key, value in PS3_AXIS.items()}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### GUI config stuff"
   ]
  },
  {
   "cell_type": "raw",
   "metadata": {},
   "source": [
    "\"\"\" Default configuration and hyperparameter values for GUI objects. \"\"\"\n",
    "import itertools\n",
    "\n",
    "from gps_pb2 import TRIAL_ARM, AUXILIARY_ARM\n",
    "#from gps.gui.ps3_config import PS3_BUTTON, INVERTED_PS3_BUTTON\n",
    "\n",
    "\n",
    "# Mappings from actions to their corresponding keyboard bindings.\n",
    "# WARNING: keybindings must be unique\n",
    "keyboard_bindings = {\n",
    "    # Target Setup.\n",
    "    'ptn': 'left',  # previous target number\n",
    "    'ntn': 'right', # next target number\n",
    "    'pat': 'down',  # previous actuator type\n",
    "    'nat': 'up',    # next actuator type\n",
    "\n",
    "    'sip': 'i',     # set initial position\n",
    "    'stp': 't',     # set target position\n",
    "    'sii': 'z',     # set initial image\n",
    "    'sti': 'x',     # set target image\n",
    "\n",
    "    'mti': 'm',     # move to initial\n",
    "    'mtt': 'n',     # move to target\n",
    "    'rc': 'c',      # relax controller\n",
    "    'mm': 'q',      # mannequin mode\n",
    "\n",
    "    # GPS Training.\n",
    "    'stop' : 's',   # stop\n",
    "    'reset': 'r',   # reset\n",
    "    'go'   : 'g',   # go\n",
    "    'fail' : 'f',   # fail\n",
    "\n",
    "    # Image Visualizer\n",
    "    'oii'  : 'o',   # overlay initial image\n",
    "    'oti'  : 'p',   # overlay target image\n",
    "}\n",
    "inverted_keyboard_bindings = {value: key\n",
    "                              for key, value in keyboard_bindings.items()}\n",
    "\n",
    "# Mappings from actions to their corresponding PS3 controller bindings.\n",
    "ps3_bindings = {\n",
    "    # Target Setup\n",
    "    'ptn': (PS3_BUTTON['rear_right_1'], PS3_BUTTON['cross_left']),\n",
    "    'ntn': (PS3_BUTTON['rear_right_1'], PS3_BUTTON['cross_right']),\n",
    "    'pat': (PS3_BUTTON['rear_right_1'], PS3_BUTTON['cross_down']),\n",
    "    'nat': (PS3_BUTTON['rear_right_1'], PS3_BUTTON['cross_up']),\n",
    "\n",
    "    'sip': (PS3_BUTTON['rear_right_1'], PS3_BUTTON['action_square']),\n",
    "    'stp': (PS3_BUTTON['rear_right_1'], PS3_BUTTON['action_circle']),\n",
    "    'sii': (PS3_BUTTON['rear_right_1'], PS3_BUTTON['action_cross']),\n",
    "    'sti': (PS3_BUTTON['rear_right_1'], PS3_BUTTON['action_triangle']),\n",
    "\n",
    "    'mti': (PS3_BUTTON['rear_right_2'], PS3_BUTTON['cross_left']),\n",
    "    'mtt': (PS3_BUTTON['rear_right_2'], PS3_BUTTON['cross_right']),\n",
    "    'rc' : (PS3_BUTTON['rear_right_2'], PS3_BUTTON['cross_down']),\n",
    "    'mm' : (PS3_BUTTON['rear_right_2'], PS3_BUTTON['cross_up']),\n",
    "\n",
    "    # GPS Training\n",
    "    'stop' : (PS3_BUTTON['rear_right_2'], PS3_BUTTON['action_square']),\n",
    "    'reset': (PS3_BUTTON['rear_right_2'], PS3_BUTTON['action_triangle']),\n",
    "    'go'   : (PS3_BUTTON['rear_right_2'], PS3_BUTTON['action_circle']),\n",
    "    'fail' : (PS3_BUTTON['rear_right_2'], PS3_BUTTON['action_cross']),\n",
    "\n",
    "    # Image Visualizer\n",
    "    'oii'  : (PS3_BUTTON['cross_up']    ,),\n",
    "    'oti'  : (PS3_BUTTON['cross_down']  ,),\n",
    "}\n",
    "inverted_ps3_bindings = {value: key for key, value in ps3_bindings.items()}\n",
    "\n",
    "permuted_inverted_ps3_bindings = {}\n",
    "for key, value in list(inverted_ps3_bindings.items()):\n",
    "    for permuted_key in itertools.permutations(key, len(key)):\n",
    "        permuted_inverted_ps3_bindings[permuted_key] = value\n",
    "\n",
    "config = {\n",
    "    # Keyboard shortcuts bindings\n",
    "    'keyboard_bindings': keyboard_bindings,\n",
    "    'inverted_keyboard_bindings': inverted_keyboard_bindings,\n",
    "\n",
    "    # PS3 controller bindings\n",
    "    'ps3_topic': 'joy',\n",
    "    'ps3_process_rate': 20,  # Only process 1/20 of PS3 messages.\n",
    "    'ps3_button': PS3_BUTTON,\n",
    "    'inverted_ps3_button': INVERTED_PS3_BUTTON,\n",
    "    'ps3_bindings': ps3_bindings,\n",
    "    'inverted_ps3_bindings': inverted_ps3_bindings,\n",
    "    'permuted_inverted_ps3_bindings': permuted_inverted_ps3_bindings,\n",
    "\n",
    "    # Images\n",
    "    'image_on': True,\n",
    "    'image_topic': '/camera/rgb/image_color',\n",
    "    'image_size': (240, 240),\n",
    "    'image_overlay_actuator': 'trial_arm',\n",
    "    'image_overlay_alpha': 0.3,\n",
    "\n",
    "    # Both GUIs\n",
    "    'figsize': (12, 12),\n",
    "\n",
    "    # Target Setup\n",
    "    'num_targets': 10,\n",
    "    'actuator_types': [TRIAL_ARM, AUXILIARY_ARM],\n",
    "    'actuator_names': ['trial_arm', 'auxiliary_arm'],\n",
    "    'target_output_fontsize': 10,\n",
    "\n",
    "    # GPS Training\n",
    "    'initial_mode': 'run',\n",
    "    'algthm_output_fontsize': 10,\n",
    "    'algthm_output_max_display_size': 15,\n",
    "}\n",
    "\n",
    "def generate_experiment_info(config):\n",
    "    \"\"\"\n",
    "    Generate experiment info, to be displayed by GPS Trainig GUI.\n",
    "    Assumes config is the config created in hyperparams.py\n",
    "    \"\"\"\n",
    "    common = config['common']\n",
    "    algorithm = config['algorithm']\n",
    "\n",
    "    if type(algorithm['cost']) == list:\n",
    "        algorithm_cost_type = algorithm['cost'][0]['type'].__name__\n",
    "        if (algorithm_cost_type) == 'CostSum':\n",
    "            algorithm_cost_type += '(%s)' % ', '.join(\n",
    "                    map(lambda cost: cost['type'].__name__,\n",
    "                        algorithm['cost'][0]['costs']))\n",
    "    else:\n",
    "        algorithm_cost_type = algorithm['cost']['type'].__name__\n",
    "        if (algorithm_cost_type) == 'CostSum':\n",
    "            algorithm_cost_type += '(%s)' % ', '.join(\n",
    "                    map(lambda cost: cost['type'].__name__,\n",
    "                        algorithm['cost']['costs']))\n",
    "\n",
    "    if 'dynamics' in algorithm:        \n",
    "        alg_dyn = str(algorithm['dynamics']['type'].__name__)\n",
    "    else:\n",
    "        alg_dyn = 'None'       \n",
    "\n",
    "    return (\n",
    "        'exp_name:   ' + str(common['experiment_name'])              + '\\n' +\n",
    "        'alg_type:   ' + str(algorithm['type'].__name__)             + '\\n' +\n",
    "        'alg_dyn:    ' + alg_dyn + '\\n' +\n",
    "        'alg_cost:   ' + str(algorithm_cost_type)                    + '\\n' +\n",
    "        'iterations: ' + str(config['iterations'])                   + '\\n' +\n",
    "        'conditions: ' + str(algorithm['conditions'])                + '\\n' +\n",
    "        'samples:    ' + str(config['num_samples'])                  + '\\n'\n",
    "    )\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### `Action` and `ActionPanel`"
   ]
  },
  {
   "cell_type": "raw",
   "metadata": {},
   "source": [
    "\"\"\"\n",
    "Action Panel\n",
    "\n",
    "The Action Panel contains a set of Action's which can be activated through\n",
    "one of three methods: a clickable button, a keyboard shortcut, or a ps3\n",
    "controller binding (with ps3 controller button presses read through ROS).\n",
    "\"\"\"\n",
    "import numpy as np\n",
    "import matplotlib.gridspec as gridspec\n",
    "import matplotlib.pyplot as plt\n",
    "from matplotlib.widgets import Button\n",
    "\n",
    "#from gps.gui.config import config\n",
    "\n",
    "import logging\n",
    "LOGGER = logging.getLogger(__name__)\n",
    "ROS_ENABLED = False\n",
    "try:\n",
    "    import rospkg\n",
    "    import roslib\n",
    "    import rospy\n",
    "    from sensor_msgs.msg import Joy\n",
    "\n",
    "    roslib.load_manifest('gps_agent_pkg')\n",
    "    ROS_ENABLED = True\n",
    "except ImportError as e:\n",
    "    LOGGER.debug('Import ROS failed: %s', e)\n",
    "except rospkg.common.ResourceNotFound as e:\n",
    "    LOGGER.debug('No gps_agent_pkg: %s', e)\n",
    "\n",
    "\n",
    "class Action:\n",
    "    \"\"\"\n",
    "    An action is defined by a key (used to identify it), a name, and a function.\n",
    "    It is called by placing it on an matplotlib Axis object (as specified by\n",
    "    axis_pos), giving it a keyboard_binding, or giving it a ps3_binding.\n",
    "    \"\"\"\n",
    "    def __init__(self, key, name, func, axis_pos=None, keyboard_binding=None, ps3_binding=None):\n",
    "        self.key = key\n",
    "        self.name = name\n",
    "        self.func = func\n",
    "        self.axis_pos = axis_pos\n",
    "        self.kb = keyboard_binding\n",
    "        self.pb = ps3_binding\n",
    "\n",
    "class ActionPanel:\n",
    "\n",
    "    def __init__(self, fig, gs, rows, cols, actions_arr):\n",
    "        \"\"\"\n",
    "        Constructs an ActionPanel assuming actions_arr is an array of\n",
    "        fully initialized actions.\n",
    "        Each action must have: key, name, func.\n",
    "        Each action can have: axis_pos, keyboard_binding, ps3_binding.\n",
    "        \"\"\"\n",
    "        assert len(actions_arr) <= rows*cols, 'Too many actions to put into gridspec.'\n",
    "\n",
    "        self._fig = fig\n",
    "        self._gs = gridspec.GridSpecFromSubplotSpec(rows, cols, subplot_spec=gs)\n",
    "        self._axarr = [plt.subplot(self._gs[i]) for i in range(len(actions_arr))]\n",
    "        \n",
    "        # Read keyboard_bindings and ps3_bindings from config\n",
    "        self._actions = {action.key: action for action in actions_arr}\n",
    "        for key, action in self._actions.items():\n",
    "            if key in config['keyboard_bindings']:\n",
    "                action.kb = config['keyboard_bindings'][key]\n",
    "            if key in config['ps3_bindings']:\n",
    "                action.pb = config['ps3_bindings'][key]\n",
    "\n",
    "        self._buttons = None\n",
    "        self._initialize_buttons()\n",
    "        self._cid = self._fig.canvas.mpl_connect('key_press_event', self.on_key_press)\n",
    "        if ROS_ENABLED:\n",
    "            self._ps3_count = 0\n",
    "            rospy.Subscriber(config['ps3_topic'], Joy, self.ps3_callback)\n",
    "\n",
    "    def _initialize_buttons(self):\n",
    "        self._buttons = {}\n",
    "        for key, action in self._actions.items():\n",
    "            if action.axis_pos is None:\n",
    "                continue\n",
    "            \n",
    "            button_name = '%s\\n(%s)' % (action.name, action.kb)\n",
    "            if ROS_ENABLED and action.pb:\n",
    "                ps3_buttons = [config['inverted_ps3_button'][i] for i in action.pb]\n",
    "                button_name += '\\n(%s)' % ',\\n'.join(ps3_buttons)\n",
    "\n",
    "            self._buttons[key] = Button(self._axarr[action.axis_pos], button_name)\n",
    "            self._buttons[key].on_clicked(action.func)\n",
    "\n",
    "    def on_key_press(self, event):\n",
    "        if event.key in config['inverted_keyboard_bindings']:\n",
    "            key = config['inverted_keyboard_bindings'][event.key]\n",
    "            if key in self._actions:\n",
    "                self._actions[key].func()\n",
    "        else:\n",
    "            LOGGER.debug('Unrecognized keyboard input: %s', str(event.key))\n",
    "\n",
    "    def ps3_callback(self, joy_msg):\n",
    "        self._ps3_count += 1\n",
    "        if self._ps3_count % config['ps3_process_rate'] != 0:\n",
    "            return\n",
    "        \n",
    "        buttons_pressed = tuple(np.nonzero(joy_msg.buttons)[0])\n",
    "        if buttons_pressed in config['permuted_inverted_ps3_bindings']:\n",
    "            self._actions[config['permuted_inverted_ps3_bindings'][buttons_pressed]].func()\n",
    "        else:\n",
    "            if ((len(buttons_pressed) == 1 and buttons_pressed[0] not in (\n",
    "                config['ps3_button']['rear_right_1'], config['ps3_button']['rear_right_2']))\n",
    "                or len(buttons_pressed) >= 2):\n",
    "                LOGGER.debug('Unrecognized ps3 controller input:\\n%s',\n",
    "                        str([config['inverted_ps3_button'][b] for b in buttons_pressed]))\n"
   ]
  },
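  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the keyboard dispatch above (the bindings and action names here are illustrative, not the real `config` contents): an inverted bindings dict maps a pressed key back to an action name, whose registered handler is then invoked.\n",
    "\n",
    "```python\n",
    "pressed = []\n",
    "actions = {'stop': lambda: pressed.append('stop'),\n",
    "           'go':   lambda: pressed.append('go')}\n",
    "inverted_keyboard_bindings = {'s': 'stop', 'g': 'go'}\n",
    "\n",
    "def on_key_press(key):\n",
    "    if key in inverted_keyboard_bindings:\n",
    "        name = inverted_keyboard_bindings[key]\n",
    "        if name in actions:\n",
    "            actions[name]()\n",
    "    # unrecognized keys would only be logged\n",
    "\n",
    "on_key_press('g')\n",
    "on_key_press('x')  # ignored\n",
    "print(pressed)     # ['go']\n",
    "```"
   ]
  },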
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Textbox"
   ]
  },
  {
   "cell_type": "raw",
   "metadata": {},
   "source": [
    "\"\"\"\n",
    "Textbox\n",
    "\n",
    "A Textbox represents the standard textbox. It has basic capabilities for\n",
    "setting the text, appending text, and changing the background color.\n",
    "If a log filename is given, all text displayed by the Textbox is also\n",
    "appended to the log file.\n",
    "\"\"\"\n",
    "import matplotlib as mpl\n",
    "import matplotlib.pyplot as plt\n",
    "import matplotlib.gridspec as gridspec\n",
    "from matplotlib.colors import ColorConverter\n",
    "\n",
    "\n",
    "class Textbox:\n",
    "\n",
    "    def __init__(self, fig, gs, log_filename=None, max_display_size=10,\n",
    "        border_on=False, bgcolor=mpl.rcParams['figure.facecolor'], bgalpha=1.0,\n",
    "        fontsize=12, font_family='sans-serif'):\n",
    "        self._fig = fig\n",
    "        self._gs = gridspec.GridSpecFromSubplotSpec(1, 1, subplot_spec=gs)\n",
    "        self._ax = plt.subplot(self._gs[0])\n",
    "        self._log_filename = log_filename\n",
    "\n",
    "        self._text_box = self._ax.text(0.01, 0.95, '', color='black',\n",
    "                va='top', ha='left', transform=self._ax.transAxes,\n",
    "                fontsize=fontsize, family=font_family)\n",
    "        self._text_arr = []\n",
    "        self._max_display_size = max_display_size\n",
    "\n",
    "        self._ax.set_xticks([])\n",
    "        self._ax.set_yticks([])\n",
    "        if not border_on:\n",
    "            self._ax.spines['top'].set_visible(False)\n",
    "            self._ax.spines['right'].set_visible(False)\n",
    "            self._ax.spines['bottom'].set_visible(False)\n",
    "            self._ax.spines['left'].set_visible(False)\n",
    "\n",
    "        self._fig.canvas.draw()\n",
    "        self._fig.canvas.flush_events()     # Fixes bug with Qt4Agg backend\n",
    "        self.set_bgcolor(bgcolor, bgalpha)  # this must come after fig.canvas.draw()\n",
    "\n",
    "    def set_text(self, text):\n",
    "        \"\"\"Replace the displayed text, log it, and redraw.\"\"\"\n",
    "        self._text_arr = [text]\n",
    "        self._text_box.set_text('\\n'.join(self._text_arr))\n",
    "        self.log_text(text)\n",
    "        self.draw()\n",
    "\n",
    "    def append_text(self, text):\n",
    "        \"\"\"Append a line of text, keeping only the most recent\n",
    "        max_display_size lines on screen; log it and redraw.\"\"\"\n",
    "        self._text_arr.append(text)\n",
    "        if len(self._text_arr) > self._max_display_size:\n",
    "            self._text_arr = self._text_arr[-self._max_display_size:]\n",
    "        self._text_box.set_text('\\n'.join(self._text_arr))\n",
    "        self.log_text(text)\n",
    "        self.draw()\n",
    "\n",
    "    def log_text(self, text):\n",
    "        if self._log_filename is not None:\n",
    "            with open(self._log_filename, 'a') as f:\n",
    "                f.write(text + '\\n')\n",
    "\n",
    "    def set_bgcolor(self, color, alpha=1.0):\n",
    "        self._ax.set_axis_bgcolor(ColorConverter().to_rgba(color, alpha))\n",
    "        self.draw()\n",
    "\n",
    "    def draw(self):\n",
    "        color, alpha = self._ax.get_axis_bgcolor(), self._ax.get_alpha()\n",
    "        self._ax.set_axis_bgcolor(mpl.rcParams['figure.facecolor'])\n",
    "        self._ax.draw_artist(self._ax.patch)\n",
    "        self._ax.set_axis_bgcolor(ColorConverter().to_rgba(color, alpha))\n",
    "\n",
    "        self._ax.draw_artist(self._ax.patch)\n",
    "        self._ax.draw_artist(self._text_box)\n",
    "        self._fig.canvas.draw()\n",
    "        self._fig.canvas.flush_events()   # Fixes bug with Qt4Agg backend\n"
   ]
  },
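  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The rolling display buffer used by `append_text` can be exercised without any matplotlib state. This sketch (the class name is illustrative) keeps only the most recent `max_display_size` lines, as the Textbox does:\n",
    "\n",
    "```python\n",
    "class TextBuffer:\n",
    "    def __init__(self, max_display_size=3):\n",
    "        self._text_arr = []\n",
    "        self._max_display_size = max_display_size\n",
    "\n",
    "    def append_text(self, text):\n",
    "        self._text_arr.append(text)\n",
    "        # Keep only the most recent max_display_size lines on screen.\n",
    "        if len(self._text_arr) > self._max_display_size:\n",
    "            self._text_arr = self._text_arr[-self._max_display_size:]\n",
    "\n",
    "    def displayed(self):\n",
    "        return '\\n'.join(self._text_arr)\n",
    "\n",
    "buf = TextBuffer(max_display_size=3)\n",
    "for line in ['a', 'b', 'c', 'd']:\n",
    "    buf.append_text(line)\n",
    "print(buf.displayed().split('\\n'))  # ['b', 'c', 'd']\n",
    "```"
   ]
  },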
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### GUI util"
   ]
  },
  {
   "cell_type": "raw",
   "metadata": {},
   "source": [
    "import os\n",
    "import numpy as np\n",
    "\n",
    "DEFAULT_JOINT_ANGLES = np.zeros(7)\n",
    "DEFAULT_END_EFFECTOR_POSITIONS = np.zeros(3)\n",
    "DEFAULT_END_EFFECTOR_ROTATIONS = np.zeros((3, 3))\n",
    "\n",
    "def buffered_axis_limits(amin, amax, buffer_factor=1.0):\n",
    "    \"\"\"\n",
    "    Expands the range (amin, amax) by (buffer_factor - 1) times its width\n",
    "    on each side, then rounds each limit to one-tenth the order of\n",
    "    magnitude of the larger absolute bound.\n",
    "    Used for generating good plotting limits.\n",
    "    For example, (0, 100) with buffer_factor 1.1 is buffered to (-10, 110)\n",
    "    and then rounded to the nearest 10.\n",
    "    \"\"\"\n",
    "    diff = amax - amin\n",
    "    amin -= (buffer_factor-1)*diff\n",
    "    amax += (buffer_factor-1)*diff\n",
    "    magnitude = np.floor(np.log10(np.amax(np.abs((amin, amax)) + 1e-100)))\n",
    "    precision = np.power(10, magnitude-1)\n",
    "    amin = np.floor(amin/precision) * precision\n",
    "    amax = np.ceil (amax/precision) * precision\n",
    "    return (amin, amax)\n",
    "\n",
    "def save_pose_to_npz(filename, actuator_name, target_number, data_time, pose):\n",
    "    \"\"\"\n",
    "    Saves a pose for the specified actuator name, target number, and data time.\n",
    "    Args:\n",
    "        filename - the target file ('../target.npz')\n",
    "        actuator_name - the actuator name ('trial_arm', 'auxiliary_arm', etc.)\n",
    "        target_number - the target number ('0', '1', '2', etc.)\n",
    "        data_time - either 'initial' or 'final'\n",
    "        pose - (joint angle, end effector position, end effector rotation) tuple\n",
    "    Return:\n",
    "        None\n",
    "    \"\"\"\n",
    "    ja, ee_pos, ee_rot = pose\n",
    "    save_data_to_npz(filename, actuator_name, target_number, data_time,\n",
    "                     'ja', ja)\n",
    "    save_data_to_npz(filename, actuator_name, target_number, data_time,\n",
    "                     'ee_pos', ee_pos)\n",
    "    save_data_to_npz(filename, actuator_name, target_number, data_time,\n",
    "                     'ee_rot', ee_rot)\n",
    "\n",
    "\n",
    "def save_data_to_npz(filename, actuator_name, target_number, data_time,\n",
    "                     data_name, value):\n",
    "    \"\"\"\n",
    "    Save data to the specified file with key\n",
    "    (actuator_name, target_number, data_time, data_name).\n",
    "    \"\"\"\n",
    "    key = '_'.join((actuator_name, target_number, data_time, data_name))\n",
    "    save_to_npz(filename, key, value)\n",
    "\n",
    "\n",
    "def save_to_npz(filename, key, value):\n",
    "    \"\"\"\n",
    "    Save a (key,value) pair to a npz dictionary.\n",
    "    Args:\n",
    "        filename: The file containing the npz dictionary.\n",
    "        key: The key (string).\n",
    "        value: The value (numpy array).\n",
    "    \"\"\"\n",
    "    tmp = {}\n",
    "    if os.path.exists(filename):\n",
    "        with np.load(filename) as f:\n",
    "            tmp = dict(f)\n",
    "    tmp[key] = value\n",
    "    np.savez(filename, **tmp)\n",
    "\n",
    "\n",
    "def load_pose_from_npz(filename, actuator_name, target_number, data_time,\n",
    "        default_ja=DEFAULT_JOINT_ANGLES,\n",
    "        default_ee_pos=DEFAULT_END_EFFECTOR_POSITIONS,\n",
    "        default_ee_rot=DEFAULT_END_EFFECTOR_ROTATIONS):\n",
    "    \"\"\"\n",
    "    Loads a pose for the specified actuator name, target number, and data time.\n",
    "    Args:\n",
    "        filename - the target file ('../target.npz')\n",
    "        actuator_name - the actuator name ('trial_arm', 'auxiliary_arm', etc.)\n",
    "        target_number - the target number ('0', '1', '2', etc.)\n",
    "        data_time - either 'initial' or 'final'\n",
    "    Return:\n",
    "        pose - (joint angle, end effector position, end effector rotation) tuple \n",
    "    \"\"\"\n",
    "    ja = load_data_from_npz(filename, actuator_name, target_number, data_time,\n",
    "                            'ja', default=default_ja)\n",
    "    ee_pos = load_data_from_npz(filename, actuator_name, target_number,\n",
    "                                data_time, 'ee_pos', default=default_ee_pos)\n",
    "    ee_rot = load_data_from_npz(filename, actuator_name, target_number,\n",
    "                                data_time, 'ee_rot', default=default_ee_rot)\n",
    "    return (ja, ee_pos, ee_rot)\n",
    "\n",
    "\n",
    "def load_data_from_npz(filename, actuator_name, target_number, data_time,\n",
    "                       data_name, default=None):\n",
    "    \"\"\"\n",
    "    Load data from the specified file with key\n",
    "    (actuator_name, target_number, data_time, data_name).\n",
    "    \"\"\"\n",
    "    key = '_'.join((actuator_name, target_number, data_time, data_name))\n",
    "    return load_from_npz(filename, key, default)\n",
    "\n",
    "\n",
    "def load_from_npz(filename, key, default=None):\n",
    "    \"\"\"\n",
    "    Load a (key,value) pair from a npz dictionary. Returns default if failed.\n",
    "    Args:\n",
    "        filename: The file containing the npz dictionary.\n",
    "        key: The key (string).\n",
    "        value: The default value to return, if key or file not found.\n",
    "    \"\"\"\n",
    "    try:\n",
    "        with np.load(filename) as f:\n",
    "            return f[key]\n",
    "    except (IOError, KeyError):\n",
    "        pass\n",
    "    return default\n"
   ]
  },
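  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The worked example in the `buffered_axis_limits` docstring can be checked directly, since the function only depends on numpy:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def buffered_axis_limits(amin, amax, buffer_factor=1.0):\n",
    "    # Widen the range by (buffer_factor - 1) * width on each side.\n",
    "    diff = amax - amin\n",
    "    amin -= (buffer_factor - 1) * diff\n",
    "    amax += (buffer_factor - 1) * diff\n",
    "    # Round to one-tenth the order of magnitude of the larger bound.\n",
    "    magnitude = np.floor(np.log10(np.amax(np.abs((amin, amax)) + 1e-100)))\n",
    "    precision = np.power(10, magnitude - 1)\n",
    "    amin = np.floor(amin / precision) * precision\n",
    "    amax = np.ceil(amax / precision) * precision\n",
    "    return (amin, amax)\n",
    "\n",
    "print(buffered_axis_limits(0, 100, buffer_factor=1.1))  # (-10.0, 110.0)\n",
    "```"
   ]
  },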
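  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`save_to_npz` and `load_from_npz` treat an `.npz` file as a persistent `{key: array}` dictionary: each save reloads any existing entries and writes back the merged dict. A round-trip sketch (the file path and keys are illustrative):\n",
    "\n",
    "```python\n",
    "import os\n",
    "import tempfile\n",
    "import numpy as np\n",
    "\n",
    "def save_to_npz(filename, key, value):\n",
    "    # Merge the new entry into whatever the file already holds.\n",
    "    tmp = {}\n",
    "    if os.path.exists(filename):\n",
    "        with np.load(filename) as f:\n",
    "            tmp = dict(f)\n",
    "    tmp[key] = value\n",
    "    np.savez(filename, **tmp)\n",
    "\n",
    "def load_from_npz(filename, key, default=None):\n",
    "    try:\n",
    "        with np.load(filename) as f:\n",
    "            return f[key]\n",
    "    except (IOError, KeyError):\n",
    "        return default\n",
    "\n",
    "path = os.path.join(tempfile.mkdtemp(), 'target.npz')\n",
    "save_to_npz(path, 'trial_arm_0_initial_ja', np.zeros(7))\n",
    "save_to_npz(path, 'trial_arm_0_final_ja', np.ones(7))  # merges with the first entry\n",
    "print(load_from_npz(path, 'trial_arm_0_initial_ja'))   # [0. 0. 0. 0. 0. 0. 0.]\n",
    "print(load_from_npz(path, 'missing', default='fallback'))  # fallback\n",
    "```"
   ]
  },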
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### MeanPlotter"
   ]
  },
  {
   "cell_type": "raw",
   "metadata": {},
   "source": [
    "\"\"\"\n",
    "Mean Plotter\n",
    "\n",
    "The Mean Plotter plots data along with its mean. The data is plotted as dots\n",
    "whereas the mean is a connected line.\n",
    "\n",
    "This is used to plot the mean cost after each iteration, along with the initial\n",
    "costs for each sample and condition.\n",
    "\"\"\"\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "import matplotlib.gridspec as gridspec\n",
    "\n",
    "#from gps.gui.util import buffered_axis_limits\n",
    "\n",
    "\n",
    "class MeanPlotter:\n",
    "\n",
    "    def __init__(self, fig, gs, label='mean', color='black', alpha=1.0, min_itr=10):\n",
    "        self._fig = fig\n",
    "        self._gs = gridspec.GridSpecFromSubplotSpec(1, 1, subplot_spec=gs)\n",
    "        self._ax = plt.subplot(self._gs[0])\n",
    "\n",
    "        self._label = label\n",
    "        self._color = color\n",
    "        self._alpha = alpha\n",
    "        self._min_itr = min_itr\n",
    "\n",
    "        self._ts = np.empty((1, 0))\n",
    "        self._data_mean = np.empty((1, 0))\n",
    "        self._plots_mean = self._ax.plot([], [], '-x', markeredgewidth=1.0,\n",
    "                color=self._color, alpha=1.0, label=self._label)[0]\n",
    "\n",
    "        self._ax.set_xlim(0-0.5, self._min_itr+0.5)\n",
    "        self._ax.set_ylim(0, 1)\n",
    "        self._ax.minorticks_on()\n",
    "        self._ax.legend(loc='upper right', bbox_to_anchor=(1, 1))\n",
    "\n",
    "        self._init = False\n",
    "\n",
    "        self._fig.canvas.draw()\n",
    "        self._fig.canvas.flush_events()   # Fixes bug with Qt4Agg backend\n",
    "\n",
    "    def init(self, data_len):\n",
    "        \"\"\"\n",
    "        Initialize plots based off the length of the data array.\n",
    "        \"\"\"\n",
    "        self._t = 0\n",
    "        self._data_len = data_len\n",
    "        self._data = np.empty((data_len, 0))\n",
    "        self._plots = [self._ax.plot([], [], '.', markersize=4, color='black',\n",
    "            alpha=self._alpha)[0] for _ in range(data_len)]\n",
    "\n",
    "        self._init = True\n",
    "\n",
    "    def update(self, x, t=None):\n",
    "        \"\"\"\n",
    "        Update the plots with new data x. Assumes x is a one-dimensional array.\n",
    "        \"\"\"\n",
    "        x = np.ravel([x])\n",
    "\n",
    "        if not self._init:\n",
    "            self.init(x.shape[0])\n",
    "\n",
    "        if t is None:\n",
    "            t = self._t\n",
    "\n",
    "        assert x.shape[0] == self._data_len\n",
    "        t = np.array([t]).reshape((1, 1))\n",
    "        x = x.reshape((self._data_len, 1))\n",
    "        mean = np.mean(x).reshape((1, 1))\n",
    "\n",
    "        self._t += 1\n",
    "        self._ts = np.append(self._ts, t, axis=1)\n",
    "        self._data = np.append(self._data, x, axis=1)\n",
    "        self._data_mean = np.append(self._data_mean, mean, axis=1)\n",
    "\n",
    "        for i in range(self._data_len):\n",
    "            self._plots[i].set_data(self._ts, self._data[i, :])\n",
    "        self._plots_mean.set_data(self._ts, self._data_mean[0, :])\n",
    "\n",
    "        self._ax.set_xlim(self._ts[0, 0]-0.5, max(self._ts[0, -1], self._min_itr)+0.5)\n",
    "\n",
    "        y_min, y_max = np.amin(self._data), np.amax(self._data)\n",
    "        self._ax.set_ylim(buffered_axis_limits(y_min, y_max, buffer_factor=1.1))\n",
    "        self.draw()\n",
    "\n",
    "    def draw(self):\n",
    "        self._ax.draw_artist(self._ax.patch)\n",
    "        for plot in self._plots:\n",
    "            self._ax.draw_artist(plot)\n",
    "        self._ax.draw_artist(self._plots_mean)\n",
    "        self._fig.canvas.draw()\n",
    "        self._fig.canvas.flush_events()   # Fixes bug with Qt4Agg backend\n",
    "\n",
    "    def draw_ticklabels(self):\n",
    "        \"\"\"\n",
    "        Redraw the ticklabels. Because they lie outside the axis, they must\n",
    "        be redrawn whenever something else is drawn over them.\n",
    "        \"\"\"\n",
    "        for item in self._ax.get_xticklabels() + self._ax.get_yticklabels():\n",
    "            self._ax.draw_artist(item)\n",
    "        self._fig.canvas.draw()\n",
    "        self._fig.canvas.flush_events()   # Fixes bug with Qt4Agg backend\n"
   ]
  },
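  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A sketch of the bookkeeping inside `update`: each call appends one column of per-condition values and one scalar mean, which is what the dots and the connected mean line are drawn from (plotting omitted; the sample values are illustrative):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "data_len = 3                        # e.g. number of conditions\n",
    "data = np.empty((data_len, 0))\n",
    "data_mean = np.empty((1, 0))\n",
    "\n",
    "for x in ([4.0, 6.0, 8.0], [3.0, 5.0, 7.0]):  # costs at two iterations\n",
    "    x = np.asarray(x).reshape((data_len, 1))\n",
    "    data = np.append(data, x, axis=1)\n",
    "    data_mean = np.append(data_mean, np.mean(x).reshape((1, 1)), axis=1)\n",
    "\n",
    "print(data.shape)      # (3, 2): one column per iteration\n",
    "print(data_mean)       # [[6. 5.]]\n",
    "```"
   ]
  },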
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Plotter3D"
   ]
  },
  {
   "cell_type": "raw",
   "metadata": {},
   "source": [
    "\"\"\"\n",
    "Plotter 3D\n",
    "\n",
    "The Plotter 3D plots data in 3D. It has options for setting a title and legend,\n",
    "plotting 3D points or 3D Gaussians, and clipping data based off axis limits.\n",
    "\n",
    "This is used to plot the 3D trajectories, including the trajectory samples,\n",
    "policy samples, and the linear Gaussian controllers.\n",
    "\"\"\"\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "import matplotlib.gridspec as gridspec\n",
    "from mpl_toolkits.mplot3d import Axes3D\n",
    "\n",
    "class Plotter3D:\n",
    "    def __init__(self, fig, gs, num_plots, rows=None, cols=None):\n",
    "        if cols is None:\n",
    "            cols = int(np.floor(np.sqrt(num_plots)))\n",
    "        if rows is None:\n",
    "            rows = int(np.ceil(float(num_plots)/cols))\n",
    "        assert num_plots <= rows*cols, 'Too many plots to put into gridspec.'\n",
    "\n",
    "        self._fig = fig\n",
    "        self._gs = gridspec.GridSpecFromSubplotSpec(8, 1, subplot_spec=gs)\n",
    "        self._gs_legend = self._gs[0:1, 0]\n",
    "        self._gs_plot   = self._gs[1:8, 0]\n",
    "\n",
    "        self._ax_legend = plt.subplot(self._gs_legend)\n",
    "        self._ax_legend.get_xaxis().set_visible(False)\n",
    "        self._ax_legend.get_yaxis().set_visible(False)\n",
    "\n",
    "        self._gs_plots = gridspec.GridSpecFromSubplotSpec(rows, cols, subplot_spec=self._gs_plot)\n",
    "        self._axarr = [plt.subplot(self._gs_plots[i], projection='3d') for i in range(num_plots)]\n",
    "        self._lims = [None for i in range(num_plots)]\n",
    "        self._plots = [[] for i in range(num_plots)]\n",
    "\n",
    "        for ax in self._axarr:\n",
    "            ax.tick_params(pad=0)\n",
    "            ax.locator_params(nbins=5)\n",
    "            for item in (ax.get_xticklabels() + ax.get_yticklabels() + ax.get_zticklabels()):\n",
    "                item.set_fontsize(10)\n",
    "\n",
    "        self._fig.canvas.draw()\n",
    "        self._fig.canvas.flush_events()   # Fixes bug with Qt4Agg backend\n",
    "\n",
    "    def set_title(self, i, title):\n",
    "        self._axarr[i].set_title(title)\n",
    "        self._axarr[i].title.set_fontsize(10)\n",
    "\n",
    "    def add_legend(self, linestyle, marker, color, label):\n",
    "        self._ax_legend.plot([], [], linestyle=linestyle, marker=marker,\n",
    "                color=color, label=label)\n",
    "        self._ax_legend.legend(ncol=2, mode='expand', fontsize=10)\n",
    "\n",
    "    def plot(self, i, xs, ys, zs, linestyle='-', linewidth=1.0, marker=None,\n",
    "            markersize=5.0, markeredgewidth=1.0, color='black', alpha=1.0, label=''):\n",
    "        # Manually clip at xlim, ylim, zlim (MPL doesn't support axis limits for 3D plots)\n",
    "        if self._lims[i]:\n",
    "            xlim, ylim, zlim = self._lims[i]\n",
    "            xs[np.any(np.c_[xs < xlim[0], xs > xlim[1]], axis=1)] = np.nan\n",
    "            ys[np.any(np.c_[ys < ylim[0], ys > ylim[1]], axis=1)] = np.nan\n",
    "            zs[np.any(np.c_[zs < zlim[0], zs > zlim[1]], axis=1)] = np.nan\n",
    "\n",
    "        # Create and add plot\n",
    "        plot = self._axarr[i].plot(xs, ys, zs=zs, linestyle=linestyle,\n",
    "                linewidth=linewidth, marker=marker, markersize=markersize,\n",
    "                markeredgewidth=markeredgewidth, color=color, alpha=alpha,\n",
    "                label=label)[0]\n",
    "        self._plots[i].append(plot)\n",
    "\n",
    "    def plot_3d_points(self, i, points, linestyle='-', linewidth=1.0,\n",
    "            marker=None, markersize=5.0, markeredgewidth=1.0, color='black',\n",
    "            alpha=1.0, label=''):\n",
    "        self.plot(i, points[:, 0], points[:, 1], points[:, 2],\n",
    "                linestyle=linestyle, linewidth=linewidth, marker=marker,\n",
    "                markersize=markersize, markeredgewidth=markeredgewidth,\n",
    "                color=color, alpha=alpha, label=label)\n",
    "\n",
    "    def plot_3d_gaussian(self, i, mu, sigma, edges=100, linestyle='-.',\n",
    "            linewidth=1.0, color='black', alpha=0.1, label=''):\n",
    "        \"\"\"\n",
    "        Plots ellipses in the xy plane representing the Gaussian distributions\n",
    "        specified by mu and sigma.\n",
    "        Args:\n",
    "            mu    - Tx3 mean vector for (x, y, z)\n",
    "            sigma - Tx3x3 covariance matrix for (x, y, z)\n",
    "            edges - the number of edges to use to construct each ellipse\n",
    "        \"\"\"\n",
    "        p = np.linspace(0, 2*np.pi, edges)\n",
    "        xy_ellipse = np.c_[np.cos(p), np.sin(p)]\n",
    "        T = mu.shape[0]\n",
    "\n",
    "        sigma_xy = sigma[:, 0:2, 0:2]\n",
    "        u, s, v = np.linalg.svd(sigma_xy)\n",
    "\n",
    "        for t in range(T):\n",
    "            xyz = np.repeat(mu[t, :].reshape((1, 3)), edges, axis=0)\n",
    "            xyz[:, 0:2] += np.dot(xy_ellipse, np.dot(np.diag(\n",
    "                    np.sqrt(s[t, :])), u[t, :, :].T))\n",
    "            self.plot_3d_points(i, xyz, linestyle=linestyle,\n",
    "                    linewidth=linewidth, color=color, alpha=alpha, label=label)\n",
    "\n",
    "    def set_lim(self, i, xlim, ylim, zlim):\n",
    "        \"\"\"\n",
    "        Sets the xlim, ylim, and zlim for plot i\n",
    "        WARNING: limits must be set before adding data to plots\n",
    "        Args:\n",
    "            xlim - a tuple of (x_start, x_end)\n",
    "            ylim - a tuple of (y_start, y_end)\n",
    "            zlim - a tuple of (z_start, z_end)\n",
    "        \"\"\"\n",
    "        self._lims[i] = [xlim, ylim, zlim]\n",
    "\n",
    "    def clear(self, i):\n",
    "        for plot in self._plots[i]:\n",
    "            plot.remove()\n",
    "        self._plots[i] = []\n",
    "\n",
    "    def clear_all(self):\n",
    "        for i in range(len(self._plots)):\n",
    "            self.clear(i)\n",
    "\n",
    "    def draw(self):\n",
    "        for ax in self._axarr:\n",
    "            ax.draw_artist(ax.patch)\n",
    "        for i in range(len(self._plots)):\n",
    "            for plot in self._plots[i]:\n",
    "                self._axarr[i].draw_artist(plot)\n",
    "        self._fig.canvas.draw()\n",
    "        self._fig.canvas.flush_events()   # Fixes bug with Qt4Agg backend\n"
   ]
  },
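  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The ellipse construction in `plot_3d_gaussian` can be checked numerically: a unit circle is scaled by the square roots of the singular values and rotated by `u`, giving a contour of the xy covariance centered at the mean. A sketch for a single timestep with an axis-aligned covariance (values illustrative):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "edges = 100\n",
    "p = np.linspace(0, 2 * np.pi, edges)\n",
    "xy_circle = np.c_[np.cos(p), np.sin(p)]   # unit circle\n",
    "\n",
    "mu = np.array([1.0, 2.0, 0.5])            # single (x, y, z) mean\n",
    "sigma_xy = np.array([[4.0, 0.0],\n",
    "                     [0.0, 1.0]])         # xy covariance: radii 2 and 1\n",
    "\n",
    "u, s, v = np.linalg.svd(sigma_xy)\n",
    "ellipse = xy_circle.dot(np.diag(np.sqrt(s))).dot(u.T)\n",
    "\n",
    "xyz = np.repeat(mu.reshape((1, 3)), edges, axis=0)\n",
    "xyz[:, 0:2] += ellipse\n",
    "print(xyz.shape)                  # (100, 3)\n",
    "print(round(xyz[:, 0].max(), 3))  # 3.0, i.e. mu_x + sqrt(4)\n",
    "```"
   ]
  },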
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### ImageVisualizer"
   ]
  },
  {
   "cell_type": "raw",
   "metadata": {},
   "source": [
    "\"\"\"\n",
    "Image Visualizer\n",
    "\n",
    "The Image Visualizer class expects to be given images in real time (via the\n",
    "update function), and displays those images onto the screen. Optionally,\n",
    "images can be supplied via a rostopic.\n",
    "\n",
    "The Image Visualizer can also overlay an initial image or target image on top \n",
    "of the realtime images. This is used for resetting the task space after each\n",
    "sample if the robot disturbed it during sampling.\n",
    "\"\"\"\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "import matplotlib.gridspec as gridspec\n",
    "import logging\n",
    "\n",
    "#from gps.gui.action_panel import Action, ActionPanel\n",
    "\n",
    "LOGGER = logging.getLogger(__name__)\n",
    "ROS_ENABLED = False\n",
    "try:\n",
    "    import rospkg\n",
    "    import roslib\n",
    "    import rospy\n",
    "    from sensor_msgs.msg import Image\n",
    "\n",
    "    roslib.load_manifest('gps_agent_pkg')\n",
    "    ROS_ENABLED = True\n",
    "except ImportError as e:\n",
    "    LOGGER.debug('Import ROS failed: %s', e)\n",
    "except rospkg.common.ResourceNotFound as e:\n",
    "    LOGGER.debug('No gps_agent_pkg: %s', e)\n",
    "\n",
    "\n",
    "class ImageVisualizer(object):\n",
    "\n",
    "    def __init__(self, fig, gs, cropsize=None, rostopic=None, show_overlay_buttons=False):\n",
    "        \"\"\"\n",
    "        If rostopic is given to this constructor, the image visualizer\n",
    "        automatically updates with images received on that topic. Otherwise,\n",
    "        the update method must be called manually to supply images.\n",
    "        \"\"\"\n",
    "        # Real-time image\n",
    "        self._t = 0\n",
    "        self._current_image = None\n",
    "        self._crop_size = cropsize\n",
    "\n",
    "        # Image overlay\n",
    "        self._initial_image_overlay_on = False\n",
    "        self._target_image_overlay_on = False\n",
    "        self._initial_image = None\n",
    "        self._initial_alpha = None\n",
    "        self._target_image = None\n",
    "        self._target_alpha = None\n",
    "        self._default_image = np.zeros((1, 1, 3))\n",
    "        self._default_alpha = 0.0\n",
    "\n",
    "        # Actions\n",
    "        actions_arr = [\n",
    "            Action('oii', 'overlay_initial_image', self.toggle_initial_image_overlay, axis_pos=0),\n",
    "            Action('oti', 'overlay_target_image',  self.toggle_target_image_overlay,  axis_pos=1),\n",
    "        ]\n",
    "\n",
    "        # GUI Components\n",
    "        self._fig = fig\n",
    "        self._gs = gridspec.GridSpecFromSubplotSpec(8, 1, subplot_spec=gs)\n",
    "        self._gs_action_panel = self._gs[0:1, 0]\n",
    "        self._gs_image_axis  = self._gs[1:8, 0]\n",
    "\n",
    "        if show_overlay_buttons:\n",
    "            self._action_panel = ActionPanel(self._fig, self._gs_action_panel, 1, 2, actions_arr)\n",
    "\n",
    "        self._ax_image = plt.subplot(self._gs_image_axis)\n",
    "        self._ax_image.set_axis_off()\n",
    "        self._plot = self._ax_image.imshow(self._default_image)\n",
    "        self._overlay_plot_initial = self._ax_image.imshow(self._default_image, alpha=self._default_alpha)\n",
    "        self._overlay_plot_target  = self._ax_image.imshow(self._default_image, alpha=self._default_alpha)\n",
    "\n",
    "        self._fig.canvas.draw()\n",
    "        self._fig.canvas.flush_events()   # Fixes bug with Qt4Agg backend\n",
    "\n",
    "        # ROS subscriber for real-time images\n",
    "        if rostopic and ROS_ENABLED:\n",
    "            rospy.Subscriber(rostopic, Image, self.update_ros, queue_size=1,\n",
    "                             buff_size=2**24)\n",
    "\n",
    "    def update(self, image):\n",
    "        \"\"\" Update images. \"\"\"\n",
    "        if image is None:\n",
    "            return\n",
    "        image = np.array(image, dtype=float)\n",
    "        if self._crop_size:\n",
    "            h, w = image.shape[0], image.shape[1]\n",
    "            ch, cw = self._crop_size[0], self._crop_size[1]\n",
    "            image = image[(h//2-ch//2):(h//2-ch//2+ch), (w//2-cw//2):(w//2-cw//2+cw), :]\n",
    "\n",
    "        self._current_image = image\n",
    "        self._plot.set_array(image)\n",
    "        self.draw()\n",
    "\n",
    "    def update_ros(self, image_msg):\n",
    "        # Extract image.\n",
    "        image = np.frombuffer(image_msg.data, np.uint8)\n",
    "        # Convert from ros image format to matplotlib image format.\n",
    "        image = image.reshape(image_msg.height, image_msg.width, 3)[:, :, ::-1]\n",
    "        image = 255 - image\n",
    "        # Update visualizer.\n",
    "        self.update(image)\n",
    "\n",
    "    def get_current_image(self):\n",
    "        return self._current_image\n",
    "\n",
    "    def set_initial_image(self, image, alpha=0.3):\n",
    "        if image is None:\n",
    "            return\n",
    "        self._initial_image = np.array(image, dtype=float)\n",
    "        self._initial_alpha = alpha\n",
    "\n",
    "    def set_target_image(self, image, alpha=0.3):\n",
    "        if image is None:\n",
    "            return\n",
    "        self._target_image = np.array(image, dtype=float)\n",
    "        self._target_alpha = alpha\n",
    "\n",
    "    def toggle_initial_image_overlay(self, event=None):\n",
    "        self._initial_image_overlay_on = not self._initial_image_overlay_on\n",
    "        if self._initial_image is not None and self._initial_image_overlay_on:\n",
    "            image = self._initial_image\n",
    "        else:\n",
    "            image = self._default_image\n",
    "        if self._initial_alpha is not None and self._initial_image_overlay_on:\n",
    "            alpha = self._initial_alpha\n",
    "        else:\n",
    "            alpha = self._default_alpha\n",
    "        self._overlay_plot_initial.set_array(image)\n",
    "        self._overlay_plot_initial.set_alpha(alpha)\n",
    "        self.draw()\n",
    "\n",
    "    def toggle_target_image_overlay(self, event=None):\n",
    "        self._target_image_overlay_on = not self._target_image_overlay_on\n",
    "        if self._target_image is not None and self._target_image_overlay_on:\n",
    "            image = self._target_image\n",
    "        else:\n",
    "            image = self._default_image\n",
    "        if self._target_alpha is not None and self._target_image_overlay_on:\n",
    "            alpha = self._target_alpha\n",
    "        else:\n",
    "            alpha = self._default_alpha\n",
    "        self._overlay_plot_target.set_array(image)\n",
    "        self._overlay_plot_target.set_alpha(alpha)\n",
    "        self.draw()\n",
    "\n",
    "    def draw(self):\n",
    "        self._ax_image.draw_artist(self._ax_image.patch)\n",
    "        self._ax_image.draw_artist(self._plot)\n",
    "        self._ax_image.draw_artist(self._overlay_plot_initial)\n",
    "        self._ax_image.draw_artist(self._overlay_plot_target)\n",
    "        self._fig.canvas.update()\n",
    "        self._fig.canvas.flush_events()   # Fixes bug with Qt4Agg backend\n"
   ]
  },
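  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The center crop in `update` is easy to verify on its own. This standalone sketch uses floor division so it also runs under Python 3 (the function name is illustrative):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def center_crop(image, crop_size):\n",
    "    # Take a (ch, cw) window around the image center.\n",
    "    h, w = image.shape[0], image.shape[1]\n",
    "    ch, cw = crop_size\n",
    "    top, left = h // 2 - ch // 2, w // 2 - cw // 2\n",
    "    return image[top:top + ch, left:left + cw, :]\n",
    "\n",
    "img = np.arange(6 * 8 * 3).reshape((6, 8, 3))\n",
    "cropped = center_crop(img, (4, 4))\n",
    "print(cropped.shape)     # (4, 4, 3)\n",
    "print(cropped[0, 0, 0])  # 30, i.e. img[1, 2, 0]\n",
    "```"
   ]
  },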
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### GPSTrainingGUI"
   ]
  },
  {
   "cell_type": "raw",
   "metadata": {},
   "source": [
    "\"\"\"\n",
    "GPS Training GUI\n",
    "\n",
    "The GPS Training GUI is used to interact with the GPS algorithm during training.\n",
    "It provides the following seven components:\n",
    "\n",
    "Action Panel                contains buttons for stop, reset, go, fail\n",
    "Action Status Textbox       displays action status\n",
    "Algorithm Status Textbox    displays algorithm status\n",
    "Cost Plot                   displays costs after each iteration\n",
    "Algorithm Output Textbox    displays algorithm output after each iteration\n",
    "3D Trajectory Visualizer    displays 3D trajectories after each iteration\n",
    "Image Visualizer            displays images received from a rostopic\n",
    "\n",
    "For more detailed documentation, visit: rll.berkeley.edu/gps/gui\n",
    "\"\"\"\n",
    "import time\n",
    "import threading\n",
    "\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "import matplotlib.gridspec as gridspec\n",
    "\n",
    "#from gps.gui.config import config\n",
    "#from gps.gui.action_panel import Action, ActionPanel\n",
    "#from gps.gui.textbox import Textbox\n",
    "#from gps.gui.mean_plotter import MeanPlotter\n",
    "#from gps.gui.plotter_3d import Plotter3D\n",
    "#from gps.gui.image_visualizer import ImageVisualizer\n",
    "#from gps.gui.util import buffered_axis_limits, load_data_from_npz\n",
    "\n",
    "from gps_pb2 import END_EFFECTOR_POINTS\n",
    "\n",
    "# Needed for typechecks\n",
    "#from gps.algorithm.algorithm_badmm import AlgorithmBADMM\n",
    "#from gps.algorithm.algorithm_mdgps import AlgorithmMDGPS\n",
    "\n",
    "class GPSTrainingGUI(object):\n",
    "\n",
    "    def __init__(self, hyperparams):\n",
    "        self._hyperparams = hyperparams\n",
    "        self._log_filename = self._hyperparams['log_filename']\n",
    "        if 'target_filename' in self._hyperparams:\n",
    "            self._target_filename = self._hyperparams['target_filename']\n",
    "        else:\n",
    "            self._target_filename = None\n",
    "\n",
    "        # GPS Training Status.\n",
    "        self.mode = config['initial_mode']  # Modes: run, wait, end, request, process.\n",
    "        self.request = None                 # Requests: stop, reset, go, fail, None.\n",
    "        self.err_msg = None\n",
    "        self._colors = {\n",
    "            'run': 'cyan',\n",
    "            'wait': 'orange',\n",
    "            'end': 'red',\n",
    "\n",
    "            'stop': 'red',\n",
    "            'reset': 'yellow',\n",
    "            'go': 'green',\n",
    "            'fail': 'magenta',\n",
    "        }\n",
    "        self._first_update = True\n",
    "\n",
    "        # Actions.\n",
    "        actions_arr = [\n",
    "            Action('stop',  'stop',  self.request_stop,  axis_pos=0),\n",
    "            Action('reset', 'reset', self.request_reset, axis_pos=1),\n",
    "            Action('go',    'go',    self.request_go,    axis_pos=2),\n",
    "            Action('fail',  'fail',  self.request_fail,  axis_pos=3),\n",
    "        ]\n",
    "\n",
    "        # Setup figure.\n",
    "        plt.ion()\n",
    "        plt.rcParams['toolbar'] = 'None'\n",
    "        for key in plt.rcParams:\n",
    "            if key.startswith('keymap.'):\n",
    "                plt.rcParams[key] = ''\n",
    "\n",
    "        self._fig = plt.figure(figsize=config['figsize'])\n",
    "        self._fig.subplots_adjust(left=0.01, bottom=0.01, right=0.99, top=0.99,\n",
    "                wspace=0, hspace=0)\n",
    "\n",
    "        # Assign GUI component locations.\n",
    "        self._gs = gridspec.GridSpec(16, 8)\n",
    "        self._gs_action_panel           = self._gs[0:2,  0:8]\n",
    "        self._gs_action_output          = self._gs[2:3,  0:4]\n",
    "        self._gs_status_output          = self._gs[3:4,  0:4]\n",
    "        self._gs_cost_plotter           = self._gs[2:4,  4:8]\n",
    "        self._gs_algthm_output          = self._gs[4:8,  0:8]\n",
    "        if config['image_on']:\n",
    "            self._gs_traj_visualizer    = self._gs[8:16, 0:4]\n",
    "            self._gs_image_visualizer   = self._gs[8:16, 4:8]\n",
    "        else:\n",
    "            self._gs_traj_visualizer    = self._gs[8:16, 0:8]\n",
    "\n",
    "        # Create GUI components.\n",
    "        self._action_panel = ActionPanel(self._fig, self._gs_action_panel, 1, 4, actions_arr)\n",
    "        self._action_output = Textbox(self._fig, self._gs_action_output, border_on=True)\n",
    "        self._status_output = Textbox(self._fig, self._gs_status_output, border_on=False)\n",
    "        self._algthm_output = Textbox(self._fig, self._gs_algthm_output,\n",
    "                max_display_size=config['algthm_output_max_display_size'],\n",
    "                log_filename=self._log_filename,\n",
    "                fontsize=config['algthm_output_fontsize'],\n",
    "                font_family='monospace')\n",
    "        self._cost_plotter = MeanPlotter(self._fig, self._gs_cost_plotter,\n",
    "                color='blue', label='mean cost')\n",
    "        self._traj_visualizer = Plotter3D(self._fig, self._gs_traj_visualizer,\n",
    "                num_plots=self._hyperparams['conditions'])\n",
    "        if config['image_on']:\n",
    "            self._image_visualizer = ImageVisualizer(self._fig,\n",
    "                    self._gs_image_visualizer, cropsize=config['image_size'],\n",
    "                    rostopic=config['image_topic'], show_overlay_buttons=True)\n",
    "\n",
    "        # Setup GUI components.\n",
    "        self._algthm_output.log_text('\\n')\n",
    "        self.set_output_text(self._hyperparams['info'])\n",
    "        if config['initial_mode'] == 'run':\n",
    "            self.run_mode()\n",
    "        else:\n",
    "            self.wait_mode()\n",
    "\n",
    "        # Setup 3D Trajectory Visualizer plot titles and legends\n",
    "        for m in range(self._hyperparams['conditions']):\n",
    "            self._traj_visualizer.set_title(m, 'Condition %d' % (m))\n",
    "        self._traj_visualizer.add_legend(linestyle='-', marker='None',\n",
    "                color='green', label='Trajectory Samples')\n",
    "        self._traj_visualizer.add_legend(linestyle='-', marker='None',\n",
    "                color='blue', label='Policy Samples')\n",
    "        self._traj_visualizer.add_legend(linestyle='None', marker='x',\n",
    "                color=(0.5, 0, 0), label='LG Controller Means')\n",
    "        self._traj_visualizer.add_legend(linestyle='-', marker='None',\n",
    "                color='red', label='LG Controller Distributions')\n",
    "\n",
    "        self._fig.canvas.draw()\n",
    "\n",
    "        # Display calculating thread\n",
    "        def display_calculating(delay, run_event):\n",
    "            while True:\n",
    "                if not run_event.is_set():\n",
    "                    run_event.wait()\n",
    "                if run_event.is_set():\n",
    "                    self.set_status_text('Calculating.')\n",
    "                    time.sleep(delay)\n",
    "                if run_event.is_set():\n",
    "                    self.set_status_text('Calculating..')\n",
    "                    time.sleep(delay)\n",
    "                if run_event.is_set():\n",
    "                    self.set_status_text('Calculating...')\n",
    "                    time.sleep(delay)\n",
    "\n",
    "        self._calculating_run = threading.Event()\n",
    "        self._calculating_thread = threading.Thread(target=display_calculating,\n",
    "                args=(1, self._calculating_run))\n",
    "        self._calculating_thread.daemon = True\n",
    "        self._calculating_thread.start()\n",
    "\n",
    "    # GPS Training functions\n",
    "    def request_stop(self, event=None):\n",
    "        self.request_mode('stop')\n",
    "\n",
    "    def request_reset(self, event=None):\n",
    "        self.request_mode('reset')\n",
    "\n",
    "    def request_go(self, event=None):\n",
    "        self.request_mode('go')\n",
    "\n",
    "    def request_fail(self, event=None):\n",
    "        self.request_mode('fail')\n",
    "\n",
    "    def request_mode(self, request):\n",
    "        \"\"\"\n",
    "        Sets the request mode (stop, reset, go, fail). The request is read by\n",
    "        gps_main before sampling, and the appropriate action is taken.\n",
    "        \"\"\"\n",
    "        self.mode = 'request'\n",
    "        self.request = request\n",
    "        self.set_action_text(self.request + ' requested')\n",
    "        self.set_action_bgcolor(self._colors[self.request], alpha=0.2)\n",
    "\n",
    "    def process_mode(self):\n",
    "        \"\"\"\n",
    "        Completes the current request, after it is first read by gps_main.\n",
    "        Displays visual confirmation that the request was processed,\n",
    "        displays any error messages, and then switches into mode 'run' or 'wait'.\n",
    "        \"\"\"\n",
    "        self.mode = 'process'\n",
    "        self.set_action_text(self.request + ' processed')\n",
    "        self.set_action_bgcolor(self._colors[self.request], alpha=1.0)\n",
    "        if self.err_msg:\n",
    "            self.set_action_text(self.request + ' processed' + '\\nERROR: ' +\n",
    "                                 self.err_msg)\n",
    "            self.err_msg = None\n",
    "            time.sleep(1.0)\n",
    "        else:\n",
    "            time.sleep(0.5)\n",
    "        if self.request in ('stop', 'reset', 'fail'):\n",
    "            self.wait_mode()\n",
    "        elif self.request == 'go':\n",
    "            self.run_mode()\n",
    "        self.request = None\n",
    "\n",
    "    def wait_mode(self):\n",
    "        self.mode = 'wait'\n",
    "        self.set_action_text('waiting')\n",
    "        self.set_action_bgcolor(self._colors[self.mode], alpha=1.0)\n",
    "\n",
    "    def run_mode(self):\n",
    "        self.mode = 'run'\n",
    "        self.set_action_text('running')\n",
    "        self.set_action_bgcolor(self._colors[self.mode], alpha=1.0)\n",
    "\n",
    "    def end_mode(self):\n",
    "        self.mode = 'end'\n",
    "        self.set_action_text('ended')\n",
    "        self.set_action_bgcolor(self._colors[self.mode], alpha=1.0)\n",
    "\n",
    "    def estop(self, event=None):\n",
    "        self.set_action_text('estop: NOT IMPLEMENTED')\n",
    "\n",
    "    # GUI functions\n",
    "    def set_action_text(self, text):\n",
    "        self._action_output.set_text(text)\n",
    "        self._cost_plotter.draw_ticklabels()    # redraw overflow ticklabels\n",
    "\n",
    "    def set_action_bgcolor(self, color, alpha=1.0):\n",
    "        self._action_output.set_bgcolor(color, alpha)\n",
    "        self._cost_plotter.draw_ticklabels()    # redraw overflow ticklabels\n",
    "\n",
    "    def set_status_text(self, text):\n",
    "        self._status_output.set_text(text)\n",
    "        self._cost_plotter.draw_ticklabels()    # redraw overflow ticklabels\n",
    "\n",
    "    def set_output_text(self, text):\n",
    "        self._algthm_output.set_text(text)\n",
    "        self._cost_plotter.draw_ticklabels()    # redraw overflow ticklabels\n",
    "\n",
    "    def append_output_text(self, text):\n",
    "        self._algthm_output.append_text(text)\n",
    "        self._cost_plotter.draw_ticklabels()    # redraw overflow ticklabels\n",
    "\n",
    "    def start_display_calculating(self):\n",
    "        self._calculating_run.set()\n",
    "\n",
    "    def stop_display_calculating(self):\n",
    "        self._calculating_run.clear()\n",
    "\n",
    "    def set_image_overlays(self, condition):\n",
    "        \"\"\"\n",
    "        Sets up the image visualizer with what images to overlay if\n",
    "        \"overlay_initial_image\" or \"overlay_target_image\" is pressed.\n",
    "        \"\"\"\n",
    "        if not config['image_on'] or not self._target_filename:\n",
    "            return\n",
    "        initial_image = load_data_from_npz(self._target_filename,\n",
    "                config['image_overlay_actuator'], str(condition),\n",
    "                'initial', 'image', default=None)\n",
    "        target_image  = load_data_from_npz(self._target_filename,\n",
    "            config['image_overlay_actuator'], str(condition),\n",
    "                'target',  'image', default=None)\n",
    "        self._image_visualizer.set_initial_image(initial_image,\n",
    "                alpha=config['image_overlay_alpha'])\n",
    "        self._image_visualizer.set_target_image(target_image,\n",
    "                alpha=config['image_overlay_alpha'])\n",
    "\n",
    "    # Iteration update functions\n",
    "    def update(self, itr, algorithm, agent, traj_sample_lists, pol_sample_lists):\n",
    "        \"\"\"\n",
    "        After each iteration, update the iteration data output, the cost plot,\n",
    "        and the 3D trajectory visualizations (if end effector points exist).\n",
    "        \"\"\"\n",
    "        if self._first_update:\n",
    "            self._output_column_titles(algorithm)\n",
    "            self._first_update = False\n",
    "\n",
    "        costs = [np.mean(np.sum(algorithm.prev[m].cs, axis=1)) for m in range(algorithm.M)]\n",
    "        self._update_iteration_data(itr, algorithm, costs, pol_sample_lists)\n",
    "        self._cost_plotter.update(costs, t=itr)\n",
    "        if END_EFFECTOR_POINTS in agent.x_data_types:\n",
    "            self._update_trajectory_visualizations(algorithm, agent,\n",
    "                    traj_sample_lists, pol_sample_lists)\n",
    "\n",
    "        self._fig.canvas.draw()\n",
    "        self._fig.canvas.flush_events() # Fixes bug in Qt4Agg backend\n",
    "\n",
    "    def _output_column_titles(self, algorithm, policy_titles=False):\n",
    "        \"\"\"\n",
    "        Setup iteration data column titles: iteration, average cost, and for\n",
    "        each condition the mean cost over samples, step size, linear Guassian\n",
    "        controller entropies, and initial/final KL divergences for BADMM.\n",
    "        \"\"\"\n",
    "        self.set_output_text(self._hyperparams['experiment_name'])\n",
    "        if isinstance(algorithm, AlgorithmMDGPS) or isinstance(algorithm, AlgorithmBADMM):\n",
    "            condition_titles = '%3s | %8s %12s' % ('', '', '')\n",
    "            itr_data_fields  = '%3s | %8s %12s' % ('itr', 'avg_cost', 'avg_pol_cost')\n",
    "        else:\n",
    "            condition_titles = '%3s | %8s' % ('', '')\n",
    "            itr_data_fields  = '%3s | %8s' % ('itr', 'avg_cost')\n",
    "        for m in range(algorithm.M):\n",
    "            condition_titles += ' | %8s %9s %-7d' % ('', 'condition', m)\n",
    "            itr_data_fields  += ' | %8s %8s %8s' % ('  cost  ', '  step  ', 'entropy ')\n",
    "            if isinstance(algorithm, AlgorithmBADMM):\n",
    "                condition_titles += ' %8s %8s %8s' % ('', '', '')\n",
    "                itr_data_fields  += ' %8s %8s %8s' % ('pol_cost', 'kl_div_i', 'kl_div_f')\n",
    "            elif isinstance(algorithm, AlgorithmMDGPS):\n",
    "                condition_titles += ' %8s' % ('')\n",
    "                itr_data_fields  += ' %8s' % ('pol_cost')\n",
    "        self.append_output_text(condition_titles)\n",
    "        self.append_output_text(itr_data_fields)\n",
    "\n",
    "    def _update_iteration_data(self, itr, algorithm, costs, pol_sample_lists):\n",
    "        \"\"\"\n",
    "        Update iteration data information: iteration, average cost, and for\n",
    "        each condition the mean cost over samples, step size, linear Guassian\n",
    "        controller entropies, and initial/final KL divergences for BADMM.\n",
    "        \"\"\"\n",
    "        avg_cost = np.mean(costs)\n",
    "        if pol_sample_lists is not None:\n",
    "            test_idx = algorithm._hyperparams['test_conditions']\n",
    "            # pol_sample_lists is a list of singletons\n",
    "            samples = [sl[0] for sl in pol_sample_lists]\n",
    "            pol_costs = [np.sum(algorithm.cost[idx].eval(s)[0])\n",
    "                    for s, idx in zip(samples, test_idx)]\n",
    "            itr_data = '%3d | %8.2f %12.2f' % (itr, avg_cost, np.mean(pol_costs))\n",
    "        else:\n",
    "            itr_data = '%3d | %8.2f' % (itr, avg_cost)\n",
    "        for m in range(algorithm.M):\n",
    "            cost = costs[m]\n",
    "            step = np.mean(algorithm.prev[m].step_mult * algorithm.base_kl_step)\n",
    "            entropy = 2*np.sum(np.log(np.diagonal(algorithm.prev[m].traj_distr.chol_pol_covar,\n",
    "                    axis1=1, axis2=2)))\n",
    "            itr_data += ' | %8.2f %8.2f %8.2f' % (cost, step, entropy)\n",
    "            if isinstance(algorithm, AlgorithmBADMM):\n",
    "                kl_div_i = algorithm.cur[m].pol_info.init_kl.mean()\n",
    "                kl_div_f = algorithm.cur[m].pol_info.prev_kl.mean()\n",
    "                itr_data += ' %8.2f %8.2f %8.2f' % (pol_costs[m], kl_div_i, kl_div_f)\n",
    "            elif isinstance(algorithm, AlgorithmMDGPS):\n",
    "                # TODO: Change for test/train better.\n",
    "                if test_idx == algorithm._hyperparams['train_conditions']:\n",
    "                    itr_data += ' %8.2f' % (pol_costs[m])\n",
    "                else:\n",
    "                    itr_data += ' %8s' % (\"N/A\")\n",
    "        self.append_output_text(itr_data)\n",
    "\n",
    "    def _update_trajectory_visualizations(self, algorithm, agent,\n",
    "                traj_sample_lists, pol_sample_lists):\n",
    "        \"\"\"\n",
    "        Update 3D trajectory visualizations information: the trajectory samples,\n",
    "        policy samples, and linear Gaussian controller means and covariances.\n",
    "        \"\"\"\n",
    "        xlim, ylim, zlim = self._calculate_3d_axis_limits(traj_sample_lists, pol_sample_lists)\n",
    "        for m in range(algorithm.M):\n",
    "            self._traj_visualizer.clear(m)\n",
    "            self._traj_visualizer.set_lim(i=m, xlim=xlim, ylim=ylim, zlim=zlim)\n",
    "            if algorithm._hyperparams['fit_dynamics']:\n",
    "                self._update_linear_gaussian_controller_plots(algorithm, agent, m)                                \n",
    "            self._update_samples_plots(traj_sample_lists, m, 'green', 'Trajectory Samples')\n",
    "            if pol_sample_lists:\n",
    "                self._update_samples_plots(pol_sample_lists,  m, 'blue',  'Policy Samples')\n",
    "        self._traj_visualizer.draw()    # this must be called explicitly\n",
    "\n",
    "    def _calculate_3d_axis_limits(self, traj_sample_lists, pol_sample_lists):\n",
    "        \"\"\"\n",
    "        Calculate the 3D axis limits shared between trajectory plots,\n",
    "        based on the minimum and maximum xyz values across all samples.\n",
    "        \"\"\"\n",
    "        all_eept = np.empty((0, 3))\n",
    "        sample_lists = traj_sample_lists\n",
    "        if pol_sample_lists:\n",
    "            sample_lists += traj_sample_lists\n",
    "        for sample_list in sample_lists:\n",
    "            for sample in sample_list.get_samples():\n",
    "                ee_pt = sample.get(END_EFFECTOR_POINTS)\n",
    "                for i in range(ee_pt.shape[1]/3):\n",
    "                    ee_pt_i = ee_pt[:, 3*i+0:3*i+3]\n",
    "                    all_eept = np.r_[all_eept, ee_pt_i]\n",
    "        min_xyz = np.amin(all_eept, axis=0)\n",
    "        max_xyz = np.amax(all_eept, axis=0)\n",
    "        xlim = buffered_axis_limits(min_xyz[0], max_xyz[0], buffer_factor=1.25)\n",
    "        ylim = buffered_axis_limits(min_xyz[1], max_xyz[1], buffer_factor=1.25)\n",
    "        zlim = buffered_axis_limits(min_xyz[2], max_xyz[2], buffer_factor=1.25)\n",
    "        return xlim, ylim, zlim\n",
    "\n",
    "    def _update_linear_gaussian_controller_plots(self, algorithm, agent, m):\n",
    "        \"\"\"\n",
    "        Update the linear Guassian controller plots with iteration data,\n",
    "        for the mean and covariances of the end effector points.\n",
    "        \"\"\"\n",
    "        # Calculate mean and covariance for end effector points\n",
    "        eept_idx = agent.get_idx_x(END_EFFECTOR_POINTS)\n",
    "        start, end = eept_idx[0], eept_idx[-1]\n",
    "        mu, sigma = algorithm.traj_opt.forward(algorithm.prev[m].traj_distr, algorithm.prev[m].traj_info)\n",
    "        mu_eept, sigma_eept = mu[:, start:end+1], sigma[:, start:end+1, start:end+1]\n",
    "\n",
    "        # Linear Gaussian Controller Distributions (Red)\n",
    "        for i in range(mu_eept.shape[1]/3):\n",
    "            mu, sigma = mu_eept[:, 3*i+0:3*i+3], sigma_eept[:, 3*i+0:3*i+3, 3*i+0:3*i+3]\n",
    "            self._traj_visualizer.plot_3d_gaussian(i=m, mu=mu, sigma=sigma,\n",
    "                    edges=100, linestyle='-', linewidth=1.0, color='red',\n",
    "                    alpha=0.15, label='LG Controller Distributions')\n",
    "\n",
    "        # Linear Gaussian Controller Means (Dark Red)\n",
    "        for i in range(mu_eept.shape[1]/3):\n",
    "            mu = mu_eept[:, 3*i+0:3*i+3]\n",
    "            self._traj_visualizer.plot_3d_points(i=m, points=mu, linestyle='None',\n",
    "                    marker='x', markersize=5.0, markeredgewidth=1.0,\n",
    "                    color=(0.5, 0, 0), alpha=1.0, label='LG Controller Means')\n",
    "\n",
    "    def _update_samples_plots(self, sample_lists, m, color, label):\n",
    "        \"\"\"\n",
    "        Update the samples plots with iteration data, for the trajectory samples\n",
    "        and the policy samples.\n",
    "        \"\"\"\n",
    "        samples = sample_lists[m].get_samples()\n",
    "        for sample in samples:\n",
    "            ee_pt = sample.get(END_EFFECTOR_POINTS)\n",
    "            for i in range(ee_pt.shape[1]/3):\n",
    "                ee_pt_i = ee_pt[:, 3*i+0:3*i+3]\n",
    "                self._traj_visualizer.plot_3d_points(m, ee_pt_i, color=color, label=label)\n",
    "\n",
    "    def save_figure(self, filename):\n",
    "        self._fig.savefig(filename)\n"
   ]
  },
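  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `entropy` column printed by `_update_iteration_data` above is the summed log-determinant of the controller covariances: for a covariance $\\Sigma_t$ with Cholesky factor $L_t$, $\\log\\det\\Sigma_t = 2\\sum_j \\log (L_t)_{jj}$, so summing $2\\log$ of the Cholesky diagonals over time yields $\\sum_t \\log\\det\\Sigma_t$. A minimal self-contained check of this identity (the `(T, dU, dU)` shape mirrors `chol_pol_covar`; all names here are illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "T, dU = 5, 3\n",
    "rng = np.random.RandomState(0)\n",
    "# Build T random SPD covariances, mimicking chol_pol_covar's (T, dU, dU) layout.\n",
    "A = rng.randn(T, dU, dU)\n",
    "sigma = np.einsum('tij,tkj->tik', A, A) + 1e-3 * np.eye(dU)\n",
    "chol = np.linalg.cholesky(sigma)\n",
    "\n",
    "# The GUI's entropy term: 2 * sum of log Cholesky diagonals.\n",
    "entropy = 2 * np.sum(np.log(np.diagonal(chol, axis1=1, axis2=2)))\n",
    "\n",
    "# Equivalent direct computation: sum over t of log det Sigma_t.\n",
    "logdet = np.sum(np.linalg.slogdet(sigma)[1])\n",
    "assert np.allclose(entropy, logdet)\n"
   ]
  },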
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### `DataLogger`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import logging\n",
    "try:\n",
    "   import cPickle as pickle\n",
    "except:\n",
    "   import pickle\n",
    "\n",
    "\n",
    "LOGGER = logging.getLogger(__name__)\n",
    "\n",
    "\n",
    "class DataLogger(object):\n",
    "    \"\"\"\n",
    "    This class pickles data into files and unpickles data from files.\n",
    "    TODO: Handle logging text to terminal, GUI text, and/or log file at\n",
    "        DEBUG, INFO, WARN, ERROR, FATAL levels.\n",
    "    TODO: Handle logging data to terminal, GUI text/plots, and/or data\n",
    "          files.\n",
    "    \"\"\"\n",
    "    def __init__(self):\n",
    "        pass\n",
    "\n",
    "    def pickle(self, filename, data):\n",
    "        \"\"\" Pickle data into file specified by filename. \"\"\"\n",
    "        pickle.dump(data, open(filename, 'wb'))\n",
    "\n",
    "    def unpickle(self, filename):\n",
    "        \"\"\" Unpickle data from file specified by filename. \"\"\"\n",
    "        try:\n",
    "            return pickle.load(open(filename, 'rb'))\n",
    "        except IOError:\n",
    "            LOGGER.debug('Unpickle error. Cannot find file: %s', filename)\n",
    "            return None\n"
   ]
  },
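  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick usage sketch for `DataLogger` (the file name and logged dictionary are illustrative; the file is written to a temporary directory):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import tempfile\n",
    "\n",
    "logger = DataLogger()\n",
    "path = os.path.join(tempfile.mkdtemp(), 'example.pkl')\n",
    "\n",
    "logger.pickle(path, {'itr': 0, 'costs': [1.5, 0.9]})\n",
    "restored = logger.unpickle(path)\n",
    "assert restored['costs'] == [1.5, 0.9]\n",
    "\n",
    "# A missing file logs a debug message and returns None.\n",
    "assert logger.unpickle(os.path.join(tempfile.gettempdir(), 'no_such_file.pkl')) is None\n"
   ]
  },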
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Agents"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### `Agent`\n",
    "Base agent class"
   ]
  },
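  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The constructor below builds one contiguous index range per sensor to lay out the state vector $x_t$ (and likewise the observation and meta vectors). A minimal sketch of that indexing scheme, with made-up sensor names and dimensions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sensor layout; the names and sizes here are made up.\n",
    "sensor_dims = {'JOINT_ANGLES': 2, 'JOINT_VELOCITIES': 2, 'END_EFFECTOR_POINTS': 3}\n",
    "x_data_types = ['JOINT_ANGLES', 'JOINT_VELOCITIES', 'END_EFFECTOR_POINTS']\n",
    "\n",
    "# Same loop as Agent.__init__: each sensor gets the next dim indices.\n",
    "state_idx, i = [], 0\n",
    "for sensor in x_data_types:\n",
    "    dim = sensor_dims[sensor]\n",
    "    state_idx.append(list(range(i, i + dim)))\n",
    "    i += dim\n",
    "dX = i\n",
    "\n",
    "x_data_idx = dict(zip(x_data_types, state_idx))\n",
    "assert dX == 7\n",
    "assert x_data_idx['END_EFFECTOR_POINTS'] == [4, 5, 6]\n"
   ]
  },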
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "from gps_pb2 import ACTION\n",
    "\n",
    "# Agent\n",
    "AGENT = {\n",
    "    'dH': 0,\n",
    "    'x0var': 0,\n",
    "    'noisy_body_idx': np.array([]),\n",
    "    'noisy_body_var': np.array([]),\n",
    "    'pos_body_idx': np.array([]),\n",
    "    'pos_body_offset': np.array([]),\n",
    "    'smooth_noise': True,\n",
    "    'smooth_noise_var': 2.0,\n",
    "    'smooth_noise_renormalize': True,\n",
    "}\n",
    "\n",
    "class Agent(object):\n",
    "    \"\"\"\n",
    "    Agent superclass. The agent interacts with the environment to\n",
    "    collect samples.\n",
    "    \"\"\"\n",
    "    __metaclass__ = abc.ABCMeta\n",
    "\n",
    "    def __init__(self, hyperparams):\n",
    "        config = copy.deepcopy(AGENT)\n",
    "        config.update(hyperparams)\n",
    "        self._hyperparams = config\n",
    "\n",
    "        # Store samples, along with size/index information for samples.\n",
    "        self._samples = [[] for _ in range(self._hyperparams['conditions'])]\n",
    "        self.T = self._hyperparams['T']\n",
    "        self.dU = self._hyperparams['sensor_dims'][ACTION]\n",
    "\n",
    "        self.x_data_types = self._hyperparams['state_include']\n",
    "        self.obs_data_types = self._hyperparams['obs_include']\n",
    "        if 'meta_include' in self._hyperparams:\n",
    "            self.meta_data_types = self._hyperparams['meta_include']\n",
    "        else:\n",
    "            self.meta_data_types = []\n",
    "\n",
    "        # List of indices for each data type in state X.\n",
    "        self._state_idx, i = [], 0\n",
    "        for sensor in self.x_data_types:\n",
    "            dim = self._hyperparams['sensor_dims'][sensor]\n",
    "            self._state_idx.append(list(range(i, i+dim)))\n",
    "            i += dim\n",
    "        self.dX = i\n",
    "\n",
    "        # List of indices for each data type in observation.\n",
    "        self._obs_idx, i = [], 0\n",
    "        for sensor in self.obs_data_types:\n",
    "            dim = self._hyperparams['sensor_dims'][sensor]\n",
    "            self._obs_idx.append(list(range(i, i+dim)))\n",
    "            i += dim\n",
    "        self.dO = i\n",
    "\n",
    "        # List of indices for each data type in meta data.\n",
    "        self._meta_idx, i = [], 0\n",
    "        for sensor in self.meta_data_types:\n",
    "            dim = self._hyperparams['sensor_dims'][sensor]\n",
    "            self._meta_idx.append(list(range(i, i+dim)))\n",
    "            i += dim\n",
    "        self.dM = i\n",
    "\n",
    "        self._x_data_idx = {d: i for d, i in zip(self.x_data_types,\n",
    "                                                 self._state_idx)}\n",
    "        self._obs_data_idx = {d: i for d, i in zip(self.obs_data_types,\n",
    "                                                   self._obs_idx)}\n",
    "        self._meta_data_idx = {d: i for d, i in zip(self.meta_data_types,\n",
    "                                                   self._meta_idx)}\n",
    "\n",
    "    @abc.abstractmethod\n",
    "    def sample(self, policy, condition, verbose=True, save=True, noisy=True):\n",
    "        \"\"\"\n",
    "        Draw a sample from the environment, using the specified policy\n",
    "        and under the specified condition, with or without noise.\n",
    "        \"\"\"\n",
    "        raise NotImplementedError(\"Must be implemented in subclass.\")\n",
    "\n",
    "    def reset(self, condition):\n",
    "        \"\"\" Reset environment to the specified condition. \"\"\"\n",
    "        pass  # May be overridden in subclass.\n",
    "\n",
    "    def get_samples(self, condition, start=0, end=None):\n",
    "        \"\"\"\n",
    "        Return the requested samples based on the start and end indices.\n",
    "        Args:\n",
    "            start: Starting index of samples to return.\n",
    "            end: End index of samples to return.\n",
    "        \"\"\"\n",
    "        return (SampleList(self._samples[condition][start:]) if end is None\n",
    "                else SampleList(self._samples[condition][start:end]))\n",
    "\n",
    "    def clear_samples(self, condition=None):\n",
    "        \"\"\"\n",
    "        Reset the samples for a given condition, defaulting to all conditions.\n",
    "        Args:\n",
    "            condition: Condition for which to reset samples.\n",
    "        \"\"\"\n",
    "        if condition is None:\n",
    "            self._samples = [[] for _ in range(self._hyperparams['conditions'])]\n",
    "        else:\n",
    "            self._samples[condition] = []\n",
    "\n",
    "    def delete_last_sample(self, condition):\n",
    "        \"\"\" Delete the last sample from the specified condition. \"\"\"\n",
    "        self._samples[condition].pop()\n",
    "\n",
    "    def get_idx_x(self, sensor_name):\n",
    "        \"\"\"\n",
    "        Return the indices corresponding to a certain state sensor name.\n",
    "        Args:\n",
    "            sensor_name: The name of the sensor.\n",
    "        \"\"\"\n",
    "        return self._x_data_idx[sensor_name]\n",
    "\n",
    "    def get_idx_obs(self, sensor_name):\n",
    "        \"\"\"\n",
    "        Return the indices corresponding to a certain observation sensor name.\n",
    "        Args:\n",
    "            sensor_name: The name of the sensor.\n",
    "        \"\"\"\n",
    "        return self._obs_data_idx[sensor_name]\n",
    "\n",
    "    def pack_data_obs(self, existing_mat, data_to_insert, data_types,\n",
    "                      axes=None):\n",
    "        \"\"\"\n",
    "        Update the observation matrix with new data.\n",
    "        Args:\n",
    "            existing_mat: Current observation matrix.\n",
    "            data_to_insert: New data to insert into the existing matrix.\n",
    "            data_types: Name of the sensors to insert data for.\n",
    "            axes: Which axes to insert data. Defaults to the last axes.\n",
    "        \"\"\"\n",
    "        num_sensor = len(data_types)\n",
    "        if axes is None:\n",
    "            # If axes not specified, assume indexing on last dimensions.\n",
    "            axes = list(range(-1, -num_sensor - 1, -1))\n",
    "        else:\n",
    "            # Make sure number of sensors and axes are consistent.\n",
    "            if num_sensor != len(axes):\n",
    "                raise ValueError(\n",
    "                    'Length of sensors (%d) must equal length of axes (%d)',\n",
    "                    num_sensor, len(axes)\n",
    "                )\n",
    "\n",
    "        # Shape checks.\n",
    "        insert_shape = list(existing_mat.shape)\n",
    "        for i in range(num_sensor):\n",
    "            # Make sure to slice along X.\n",
    "            if existing_mat.shape[axes[i]] != self.dO:\n",
    "                raise ValueError('Axes must be along an dX=%d dimensional axis',\n",
    "                                 self.dO)\n",
    "            insert_shape[axes[i]] = len(self._obs_data_idx[data_types[i]])\n",
    "        if tuple(insert_shape) != data_to_insert.shape:\n",
    "            raise ValueError('Data has shape %s. Expected %s',\n",
    "                             data_to_insert.shape, tuple(insert_shape))\n",
    "\n",
    "        # Actually perform the slice.\n",
    "        index = [slice(None) for _ in range(len(existing_mat.shape))]\n",
    "        for i in range(num_sensor):\n",
    "            index[axes[i]] = slice(self._obs_data_idx[data_types[i]][0],\n",
    "                                   self._obs_data_idx[data_types[i]][-1] + 1)\n",
    "        existing_mat[index] = data_to_insert\n",
    "\n",
    "    def pack_data_meta(self, existing_mat, data_to_insert, data_types,\n",
    "                       axes=None):\n",
    "        \"\"\"\n",
    "        Update the meta data matrix with new data.\n",
    "        Args:\n",
    "            existing_mat: Current meta data matrix.\n",
    "            data_to_insert: New data to insert into the existing matrix.\n",
    "            data_types: Name of the sensors to insert data for.\n",
    "            axes: Which axes to insert data. Defaults to the last axes.\n",
    "        \"\"\"\n",
    "        num_sensor = len(data_types)\n",
    "        if axes is None:\n",
    "            # If axes not specified, assume indexing on last dimensions.\n",
    "            axes = list(range(-1, -num_sensor - 1, -1))\n",
    "        else:\n",
    "            # Make sure number of sensors and axes are consistent.\n",
    "            if num_sensor != len(axes):\n",
    "                raise ValueError(\n",
    "                    'Length of sensors (%d) must equal length of axes (%d)',\n",
    "                    num_sensor, len(axes)\n",
    "                )\n",
    "\n",
    "        # Shape checks.\n",
    "        insert_shape = list(existing_mat.shape)\n",
    "        for i in range(num_sensor):\n",
    "            # Make sure to slice along X.\n",
    "            if existing_mat.shape[axes[i]] != self.dM:\n",
    "                raise ValueError('Axes must be along an dX=%d dimensional axis',\n",
    "                                 self.dM)\n",
    "            insert_shape[axes[i]] = len(self._meta_data_idx[data_types[i]])\n",
    "        if tuple(insert_shape) != data_to_insert.shape:\n",
    "            raise ValueError('Data has shape %s. Expected %s',\n",
    "                             data_to_insert.shape, tuple(insert_shape))\n",
    "\n",
    "        # Actually perform the slice.\n",
    "        index = [slice(None) for _ in range(len(existing_mat.shape))]\n",
    "        for i in range(num_sensor):\n",
    "            index[axes[i]] = slice(self._meta_data_idx[data_types[i]][0],\n",
    "                                   self._meta_data_idx[data_types[i]][-1] + 1)\n",
    "        existing_mat[index] = data_to_insert\n",
    "\n",
    "    def pack_data_x(self, existing_mat, data_to_insert, data_types, axes=None):\n",
    "        \"\"\"\n",
    "        Update the state matrix with new data.\n",
    "        Args:\n",
    "            existing_mat: Current state matrix.\n",
    "            data_to_insert: New data to insert into the existing matrix.\n",
    "            data_types: Name of the sensors to insert data for.\n",
    "            axes: Which axes to insert data. Defaults to the last axes.\n",
    "        \"\"\"\n",
    "        num_sensor = len(data_types)\n",
    "        if axes is None:\n",
    "            # If axes not specified, assume indexing on last dimensions.\n",
    "            axes = list(range(-1, -num_sensor - 1, -1))\n",
    "        else:\n",
    "            # Make sure number of sensors and axes are consistent.\n",
    "            if num_sensor != len(axes):\n",
    "                raise ValueError(\n",
    "                    'Length of sensors (%d) must equal length of axes (%d)',\n",
    "                    num_sensor, len(axes)\n",
    "                )\n",
    "\n",
    "        # Shape checks.\n",
    "        insert_shape = list(existing_mat.shape)\n",
    "        for i in range(num_sensor):\n",
    "            # Make sure to slice along X.\n",
    "            if existing_mat.shape[axes[i]] != self.dX:\n",
    "                raise ValueError('Axes must be along a dX=%d dimensional axis',\n",
    "                                 self.dX)\n",
    "            insert_shape[axes[i]] = len(self._x_data_idx[data_types[i]])\n",
    "        if tuple(insert_shape) != data_to_insert.shape:\n",
    "            raise ValueError('Data has shape %s. Expected %s',\n",
    "                             data_to_insert.shape, tuple(insert_shape))\n",
    "\n",
    "        # Actually perform the slice.\n",
    "        index = [slice(None) for _ in range(len(existing_mat.shape))]\n",
    "        for i in range(num_sensor):\n",
    "            index[axes[i]] = slice(self._x_data_idx[data_types[i]][0],\n",
    "                                   self._x_data_idx[data_types[i]][-1] + 1)\n",
    "        existing_mat[tuple(index)] = data_to_insert\n",
    "\n",
    "    def unpack_data_x(self, existing_mat, data_types, axes=None):\n",
    "        \"\"\"\n",
    "        Returns the requested data from the state matrix.\n",
    "        Args:\n",
    "            existing_mat: State matrix to unpack from.\n",
    "            data_types: Names of the sensors to unpack.\n",
    "            axes: Which axes to unpack along. Defaults to the last axes.\n",
    "        \"\"\"\n",
    "        num_sensor = len(data_types)\n",
    "        if axes is None:\n",
    "            # If axes not specified, assume indexing on last dimensions.\n",
    "            axes = list(range(-1, -num_sensor - 1, -1))\n",
    "        else:\n",
    "            # Make sure number of sensors and axes are consistent.\n",
    "            if num_sensor != len(axes):\n",
    "                raise ValueError(\n",
    "                    'Length of sensors (%d) must equal length of axes (%d)',\n",
    "                    num_sensor, len(axes)\n",
    "                )\n",
    "\n",
    "        # Shape checks.\n",
    "        for i in range(num_sensor):\n",
    "            # Make sure to slice along X.\n",
    "            if existing_mat.shape[axes[i]] != self.dX:\n",
    "                raise ValueError('Axes must be along a dX=%d dimensional axis',\n",
    "                                 self.dX)\n",
    "\n",
    "        # Actually perform the slice.\n",
    "        index = [slice(None) for _ in range(len(existing_mat.shape))]\n",
    "        for i in range(num_sensor):\n",
    "            index[axes[i]] = slice(self._x_data_idx[data_types[i]][0],\n",
    "                                   self._x_data_idx[data_types[i]][-1] + 1)\n",
    "        return existing_mat[tuple(index)]\n"
   ]
  },
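  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The slicing logic behind `pack_data_x`/`unpack_data_x` can be illustrated with a self-contained toy example (the index layout and values below are hypothetical, not taken from the GPS code): each sensor owns a contiguous index range inside the state vector, and packing/unpacking are just writes and reads of that slice."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Hypothetical layout: joint angles occupy x[0:2], velocities x[2:4].\n",
    "toy_x_data_idx = {'JOINT_ANGLES': [0, 1], 'JOINT_VELOCITIES': [2, 3]}\n",
    "toy_dX = 4\n",
    "\n",
    "state = np.zeros(toy_dX)\n",
    "\n",
    "# \"Pack\": write sensor data into its contiguous slice of the state.\n",
    "idx = toy_x_data_idx['JOINT_ANGLES']\n",
    "state[idx[0]:idx[-1] + 1] = np.array([0.5, -0.5])\n",
    "\n",
    "# \"Unpack\": read the same slice back out.\n",
    "angles = state[idx[0]:idx[-1] + 1]"
   ]
  },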
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### `AgentBox2D`\n",
    "An agent for the Box2D simulator."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import scipy.ndimage as sp_ndimage\n",
    "\n",
    "def generate_noise(T, dU, hyperparams):\n",
    "    \"\"\"\n",
    "    Generate a T x dU Gaussian-distributed noise matrix. This will\n",
    "    approximately have mean 0 and variance 1, ignoring smoothing.\n",
    "\n",
    "    Args:\n",
    "        T: Number of time steps.\n",
    "        dU: Dimensionality of actions.\n",
    "    Hyperparams:\n",
    "        smooth_noise: Whether or not to perform smoothing of noise.\n",
    "        smooth_noise_var: If smooth_noise=True, applies a Gaussian\n",
    "            filter with this variance.\n",
    "        smooth_noise_renormalize: If smooth_noise=True, renormalizes\n",
    "            data to have variance 1 after smoothing.\n",
    "    \"\"\"\n",
    "    smooth, var = hyperparams['smooth_noise'], hyperparams['smooth_noise_var']\n",
    "    renorm = hyperparams['smooth_noise_renormalize']\n",
    "    noise = np.random.randn(T, dU)\n",
    "    if smooth:\n",
    "        # Smooth noise. This violates the controller assumption, but\n",
    "        # might produce smoother motions.\n",
    "        for i in range(dU):\n",
    "            noise[:, i] = sp_ndimage.gaussian_filter(noise[:, i], var)\n",
    "        if renorm:\n",
    "            variance = np.var(noise, axis=0)\n",
    "            noise = noise / np.sqrt(variance)\n",
    "    return noise\n",
    "\n",
    "\n",
    "def setup(value, n):\n",
    "    \"\"\" Replicate a hyperparameter across n conditions, unless it is\n",
    "    already a per-condition list of length n. \"\"\"\n",
    "    if not isinstance(value, list):\n",
    "        try:\n",
    "            return [value.copy() for _ in range(n)]\n",
    "        except AttributeError:\n",
    "            return [value for _ in range(n)]\n",
    "    assert len(value) == n, \\\n",
    "            'Number of elements must match number of conditions or 1.'\n",
    "    return value\n"
   ]
  },
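  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see what the smoothing branch of `generate_noise` does, here is a self-contained sketch of the same idea (variable names here are illustrative): Gaussian-filter white noise along the time axis, then renormalize each action dimension back to unit variance."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import scipy.ndimage as sp_ndimage\n",
    "\n",
    "T_demo, dU_demo = 200, 2\n",
    "raw = np.random.randn(T_demo, dU_demo)\n",
    "smoothed = np.empty_like(raw)\n",
    "for i in range(dU_demo):\n",
    "    # The Gaussian filter correlates nearby time steps, giving smoother actions.\n",
    "    smoothed[:, i] = sp_ndimage.gaussian_filter(raw[:, i], sigma=2.0)\n",
    "# Smoothing shrinks the variance, so rescale each column back to 1.\n",
    "renormed = smoothed / np.sqrt(np.var(smoothed, axis=0))"
   ]
  },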
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "from copy import deepcopy\n",
    "from gps_pb2 import ACTION\n",
    "\n",
    "AGENT_BOX2D = {\n",
    "    'render': True,\n",
    "}\n",
    "\n",
    "class AgentBox2D(Agent):\n",
    "    \"\"\"\n",
    "    All communication between the algorithms and Box2D is done through\n",
    "    this class.\n",
    "    \"\"\"\n",
    "    def __init__(self, hyperparams):\n",
    "        config = deepcopy(AGENT_BOX2D)\n",
    "        config.update(hyperparams)\n",
    "        Agent.__init__(self, config)\n",
    "\n",
    "        self._setup_conditions()\n",
    "        self._setup_world(self._hyperparams[\"world\"],\n",
    "                          self._hyperparams[\"target_state\"],\n",
    "                          self._hyperparams[\"render\"])\n",
    "\n",
    "    def _setup_conditions(self):\n",
    "        \"\"\"\n",
    "        Helper method for setting some hyperparameters that may vary by\n",
    "        condition.\n",
    "        \"\"\"\n",
    "        conds = self._hyperparams['conditions']\n",
    "        for field in ('x0', 'x0var', 'pos_body_idx', 'pos_body_offset',\n",
    "                      'noisy_body_idx', 'noisy_body_var'):\n",
    "            self._hyperparams[field] = setup(self._hyperparams[field], conds)\n",
    "\n",
    "    def _setup_world(self, world, target, render):\n",
    "        \"\"\"\n",
    "        Helper method for handling setup of the Box2D world.\n",
    "        \"\"\"\n",
    "        self.x0 = self._hyperparams[\"x0\"]\n",
    "        self._worlds = [world(self.x0[i], target, render)\n",
    "                        for i in range(self._hyperparams['conditions'])]\n",
    "\n",
    "\n",
    "    def sample(self, policy, condition, verbose=False, save=True, noisy=True):\n",
    "        \"\"\"\n",
    "        Runs a trial and constructs a new sample containing information\n",
    "        about the trial.\n",
    "\n",
    "        Args:\n",
    "            policy: Policy to be used in the trial.\n",
    "            condition (int): Which condition setup to run.\n",
    "            verbose (boolean): Whether or not to plot the trial (not used here).\n",
    "            save (boolean): Whether or not to store the trial into the samples.\n",
    "            noisy (boolean): Whether or not to use noise during sampling.\n",
    "        \"\"\"\n",
    "        self._worlds[condition].run()\n",
    "        self._worlds[condition].reset_world()\n",
    "        b2d_X = self._worlds[condition].get_state()\n",
    "        new_sample = self._init_sample(b2d_X)\n",
    "        U = np.zeros([self.T, self.dU])\n",
    "        if noisy:\n",
    "            noise = generate_noise(self.T, self.dU, self._hyperparams)\n",
    "        else:\n",
    "            noise = np.zeros((self.T, self.dU))\n",
    "        for t in range(self.T):\n",
    "            X_t = new_sample.get_X(t=t)\n",
    "            obs_t = new_sample.get_obs(t=t)\n",
    "            U[t, :] = policy.act(X_t, obs_t, t, noise[t, :])\n",
    "            if (t+1) < self.T:\n",
    "                for _ in range(self._hyperparams['substeps']):\n",
    "                    self._worlds[condition].run_next(U[t, :])\n",
    "                b2d_X = self._worlds[condition].get_state()\n",
    "                self._set_sample(new_sample, b2d_X, t)\n",
    "        new_sample.set(ACTION, U)\n",
    "        if save:\n",
    "            self._samples[condition].append(new_sample)\n",
    "        return new_sample\n",
    "\n",
    "    def _init_sample(self, b2d_X):\n",
    "        \"\"\"\n",
    "        Construct a new sample and fill in the first time step.\n",
    "        \"\"\"\n",
    "        sample = Sample(self)\n",
    "        self._set_sample(sample, b2d_X, -1)\n",
    "        return sample\n",
    "\n",
    "    def _set_sample(self, sample, b2d_X, t):\n",
    "        for sensor in b2d_X.keys():\n",
    "            sample.set(sensor, np.array(b2d_X[sensor]), t=t+1)\n"
   ]
  },
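  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Abstracting away Box2D, the control flow of `sample` above is: query the policy at each time step, apply the action for `substeps` sub-iterations of the simulator, and record the resulting state. A toy, self-contained sketch with made-up linear dynamics (all names here are hypothetical stand-ins):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "T_demo, dU_demo, dX_demo, substeps_demo = 5, 1, 2, 2\n",
    "\n",
    "def toy_policy_act(x, t):\n",
    "    # Stand-in for policy.act(): a simple proportional controller.\n",
    "    return -0.5 * x[:dU_demo]\n",
    "\n",
    "X = np.zeros((T_demo, dX_demo))\n",
    "U = np.zeros((T_demo, dU_demo))\n",
    "X[0] = np.array([1.0, 0.0])\n",
    "x = X[0].copy()\n",
    "for t in range(T_demo):\n",
    "    U[t] = toy_policy_act(X[t], t)\n",
    "    if t + 1 < T_demo:\n",
    "        for _ in range(substeps_demo):\n",
    "            # Stand-in for world.run_next(): toy linear dynamics.\n",
    "            x = x + 0.1 * np.array([x[1], U[t, 0]])\n",
    "        X[t + 1] = x"
   ]
  },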
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Worlds"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### `fwSettings`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class fwSettings(object):\n",
    "    \"\"\" This class contains the settings for Box2D's framework. \"\"\"\n",
    "    backend = 'pygame'\n",
    "\n",
    "    # Physics options\n",
    "    hz = 20.0\n",
    "    velocityIterations = 8\n",
    "    positionIterations = 3\n",
    "    enableWarmStarting = True\n",
    "    enableContinuous = True\n",
    "    enableSubStepping = False\n",
    "\n",
    "    # Drawing\n",
    "    drawStats = True\n",
    "    drawShapes = True\n",
    "    drawJoints = True\n",
    "    drawCoreShapes = False\n",
    "    drawAABBs = False\n",
    "    drawOBBs = False\n",
    "    drawPairs = False\n",
    "    drawContactPoints = False\n",
    "    maxContactPoints = 100\n",
    "    drawContactNormals = False\n",
    "    drawFPS = True\n",
    "    drawMenu = True             # toggle by pressing F1\n",
    "    drawCOMs = False            # Centers of mass\n",
    "    pointSize = 2.5             # pixel radius for drawing points\n",
    "\n",
    "    # Miscellaneous testbed options\n",
    "    pause = False\n",
    "    singleStep = False\n",
    "    onlyInit = False\n",
    "\n",
    "#             text                  variable\n",
    "checkboxes = ((\"Warm Starting\", \"enableWarmStarting\"),\n",
    "              (\"Time of Impact\", \"enableContinuous\"),\n",
    "              (\"Sub-Stepping\", \"enableSubStepping\"),\n",
    "              (\"Draw\", None),\n",
    "              (\"Shapes\", \"drawShapes\"),\n",
    "              (\"Joints\", \"drawJoints\"),\n",
    "              (\"AABBs\", \"drawAABBs\"),\n",
    "              (\"Pairs\", \"drawPairs\"),\n",
    "              (\"Contact Points\", \"drawContactPoints\"),\n",
    "              (\"Contact Normals\", \"drawContactNormals\"),\n",
    "              (\"Center of Masses\", \"drawCOMs\"),\n",
    "              (\"Statistics\", \"drawStats\"),\n",
    "              (\"FPS\", \"drawFPS\"),\n",
    "              (\"Control\", None),\n",
    "              (\"Pause\", \"pause\"),\n",
    "              (\"Single Step\", \"singleStep\"))\n",
    "\n",
    "sliders = [\n",
    "    {'name' : 'hz', 'text' : 'Hertz', 'min' : 5, 'max' : 200},\n",
    "    {'name' : 'positionIterations', 'text' :\n",
    "     'Pos Iters', 'min' : 0, 'max' : 100},\n",
    "    {'name' : 'velocityIterations', 'text' :\n",
    "     'Vel Iters', 'min' : 1, 'max' : 500},\n",
    "]\n",
    "\n",
    "\n",
    "list_options = [i for i in dir(fwSettings) if not i.startswith('_')]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Framework: `fwQueryCallback` and `FrameworkBase`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [],
   "source": [
    "import Box2D as b2\n",
    "from time import time\n",
    "\n",
    "class fwQueryCallback(b2.b2QueryCallback):\n",
    "    \"\"\"\n",
    "    This callback is invoked for each fixture found in the query area.\n",
    "    \"\"\"\n",
    "    def __init__(self, p):\n",
    "        super(fwQueryCallback, self).__init__()\n",
    "        self.point = p\n",
    "        self.fixture = None\n",
    "\n",
    "    def ReportFixture(self, fixture):\n",
    "        \"\"\"\n",
    "        Called for each fixture the query finds; keeps the first dynamic\n",
    "        fixture that contains the query point.\n",
    "        \"\"\"\n",
    "        body = fixture.body\n",
    "        if body.type == b2.b2_dynamicBody:\n",
    "            inside = fixture.TestPoint(self.point)\n",
    "            if inside:\n",
    "                self.fixture = fixture\n",
    "                # We found the object, so stop the query\n",
    "                return False\n",
    "        # Continue the query\n",
    "        return True\n",
    "\n",
    "class FrameworkBase(b2.b2ContactListener):\n",
    "    \"\"\"\n",
    "    The base of the main Box2D GUI framework.\n",
    "    \"\"\"\n",
    "    name = \"None\"\n",
    "    description = None\n",
    "    TEXTLINE_START = 30\n",
    "    colors = {\n",
    "        'joint_line' : b2.b2Color(0.8, 0.8, 0.8),\n",
    "        'contact_add' : b2.b2Color(0.3, 0.95, 0.3),\n",
    "        'contact_persist' : b2.b2Color(0.3, 0.3, 0.95),\n",
    "        'contact_normal' : b2.b2Color(0.4, 0.9, 0.4),\n",
    "    }\n",
    "\n",
    "    def __reset(self):\n",
    "        \"\"\" Reset all of the variables to their starting values.\n",
    "        Not to be called except at initialization.\"\"\"\n",
    "        # Box2D-related\n",
    "        self.points = []\n",
    "        self.settings = fwSettings\n",
    "        self.using_contacts = False\n",
    "        self.stepCount = 0\n",
    "\n",
    "        # Box2D-callbacks\n",
    "        self.destructionListener = None\n",
    "        self.renderer = None\n",
    "\n",
    "    def __init__(self):\n",
    "        super(FrameworkBase, self).__init__()\n",
    "\n",
    "        self.__reset()\n",
    "\n",
    "        # Box2D Initialization\n",
    "        self.world = b2.b2World(gravity=(0, -10), doSleep=True)\n",
    "\n",
    "        self.world.contactListener = self\n",
    "        self.t_steps, self.t_draws = [], []\n",
    "\n",
    "    def __del__(self):\n",
    "        pass\n",
    "\n",
    "    def Step(self, settings, action=None):\n",
    "        \"\"\"\n",
    "        The main physics step.\n",
    "\n",
    "        Takes care of physics drawing (callbacks are executed after\n",
    "        world.Step()) and of drawing additional information.\n",
    "        \"\"\"\n",
    "        assert action is None,\\\n",
    "            'action should only be used in subclass'\n",
    "\n",
    "        self.stepCount += 1\n",
    "        # Use a zero time step if the settings' hz is <= 0.\n",
    "        if settings.hz > 0.0:\n",
    "            timeStep = 1.0 / settings.hz\n",
    "        else:\n",
    "            timeStep = 0.0\n",
    "\n",
    "        # Set the flags based on what the settings show\n",
    "        if self.renderer:\n",
    "            self.renderer.flags = dict(\n",
    "                drawShapes=settings.drawShapes,\n",
    "                drawJoints=settings.drawJoints,\n",
    "                drawAABBs=settings.drawAABBs,\n",
    "                drawPairs=settings.drawPairs,\n",
    "                drawCOMs=settings.drawCOMs,\n",
    "                convertVertices=isinstance(self.renderer, b2.b2DrawExtended)\n",
    "            )\n",
    "\n",
    "        # Set the other settings that aren't contained in the flags\n",
    "        self.world.warmStarting = settings.enableWarmStarting\n",
    "        self.world.continuousPhysics = settings.enableContinuous\n",
    "        self.world.subStepping = settings.enableSubStepping\n",
    "\n",
    "        # Reset the collision points\n",
    "        self.points = []\n",
    "\n",
    "        # Tell Box2D to step\n",
    "        t_step = time()\n",
    "        self.world.Step(timeStep, settings.velocityIterations,\n",
    "                        settings.positionIterations)\n",
    "        t_step = time()-t_step\n",
    "\n",
    "        # Update the debug draw settings so that the vertices will be properly\n",
    "        # converted to screen coordinates\n",
    "        t_draw = time()\n",
    "        if self.renderer:\n",
    "            self.renderer.StartDraw()\n",
    "\n",
    "        self.world.DrawDebugData()\n",
    "\n",
    "        if self.renderer:\n",
    "            # Draw each of the contact points in different colors.\n",
    "            if self.settings.drawContactPoints:\n",
    "                for point in self.points:\n",
    "                    if point['state'] == b2.b2_addState:\n",
    "                        self.renderer.DrawPoint(self.renderer.to_screen(\n",
    "                            point['position']), settings.pointSize,\n",
    "                                                self.colors['contact_add'])\n",
    "                    elif point['state'] == b2.b2_persistState:\n",
    "                        self.renderer.DrawPoint(self.renderer.to_screen(\n",
    "                            point['position']), settings.pointSize,\n",
    "                                                self.colors['contact_persist'])\n",
    "\n",
    "            if settings.drawContactNormals:\n",
    "                for point in self.points:\n",
    "                    p1 = self.renderer.to_screen(point['position'])\n",
    "                    p2 = self.renderer.axisScale * point['normal'] + p1\n",
    "                    self.renderer.DrawSegment(p1, p2,\n",
    "                                              self.colors['contact_normal'])\n",
    "\n",
    "            self.renderer.EndDraw()\n",
    "            t_draw = time()-t_draw\n",
    "\n",
    "            t_draw = max(b2.b2_epsilon, t_draw)\n",
    "            t_step = max(b2.b2_epsilon, t_step)\n",
    "\n",
    "            self.t_draws.append(1.0/t_draw)\n",
    "            self.t_steps.append(1.0/t_step)\n",
    "\n",
    "    def SimulationLoop(self, action):\n",
    "        \"\"\"\n",
    "        The main simulation loop. Don't override this, override Step instead.\n",
    "        \"\"\"\n",
    "\n",
    "        # Reset the text line to start the text from the top\n",
    "        self.textLine = self.TEXTLINE_START\n",
    "\n",
    "        # Draw the name of the test running\n",
    "        self.Print(self.name, (127, 127, 255))\n",
    "\n",
    "        if self.description:\n",
    "            # Draw the name of the test running\n",
    "            for s in self.description.split('\\n'):\n",
    "                self.Print(s, (127, 255, 127))\n",
    "\n",
    "        self.Step(self.settings, action)\n",
    "\n",
    "    def PreSolve(self, contact, old_manifold):\n",
    "        \"\"\"\n",
    "        This is a critical function when there are many contacts in the world.\n",
    "        It should be optimized as much as possible.\n",
    "        \"\"\"\n",
    "        if not (self.settings.drawContactPoints or\n",
    "                self.settings.drawContactNormals or self.using_contacts):\n",
    "            return\n",
    "        elif len(self.points) > self.settings.maxContactPoints:\n",
    "            return\n",
    "\n",
    "        manifold = contact.manifold\n",
    "        if manifold.pointCount == 0:\n",
    "            return\n",
    "\n",
    "        _, state2 = b2.b2GetPointStates(old_manifold, manifold)\n",
    "        if not state2:\n",
    "            return\n",
    "\n",
    "        worldManifold = contact.worldManifold\n",
    "\n",
    "        for i, _ in enumerate(state2):\n",
    "            self.points.append(\n",
    "                {\n",
    "                    'fixtureA' : contact.fixtureA,\n",
    "                    'fixtureB' : contact.fixtureB,\n",
    "                    'position' : worldManifold.points[i],\n",
    "                    'normal' : worldManifold.normal,\n",
    "                    'state' : state2[i]\n",
    "                })\n",
    "\n",
    "# In the original GPS repository the rendering framework is imported\n",
    "# dynamically from the configured backend, e.g.:\n",
    "#framework_module = __import__('gps.agent.box2d.'+'%s_framework' %\n",
    "#                              (fwSettings.backend.lower()),\n",
    "#                              fromlist=['%sFramework'\n",
    "#                                        %fwSettings.backend.capitalize()])\n",
    "#Framework = getattr(framework_module, '%sFramework' %\n",
    "#                    fwSettings.backend.capitalize())\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [],
   "source": [
    "import Box2D as b2\n",
    "import pygame\n",
    "\n",
    "GUIEnabled = False\n",
    "\n",
    "class PygameDraw(b2.b2DrawExtended):\n",
    "    \"\"\"\n",
    "    This debug draw class accepts callbacks from Box2D and\n",
    "    handles all of the rendering.\n",
    "    \"\"\"\n",
    "    surface = None\n",
    "    axisScale = 10.0\n",
    "    def __init__(self, test=None, **kwargs):\n",
    "        b2.b2DrawExtended.__init__(self, **kwargs)\n",
    "        self.flipX = False\n",
    "        self.flipY = True\n",
    "        self.convertVertices = True\n",
    "        self.test = test\n",
    "\n",
    "    def StartDraw(self):\n",
    "        \"\"\"\n",
    "        Called by renderer before drawing.\n",
    "        \"\"\"\n",
    "        self.zoom = self.test.viewZoom\n",
    "        self.center = self.test.viewCenter\n",
    "        self.offset = self.test.viewOffset\n",
    "        self.screenSize = self.test.screenSize\n",
    "\n",
    "    def EndDraw(self):\n",
    "        \"\"\"\n",
    "        Called by renderer when finished drawing.\n",
    "        \"\"\"\n",
    "\n",
    "        pass\n",
    "\n",
    "    def DrawPoint(self, p, size, color):\n",
    "        \"\"\"\n",
    "        Draw a single point at point p given a pixel size and color.\n",
    "        \"\"\"\n",
    "        self.DrawCircle(p, size/self.zoom, color, drawwidth=0)\n",
    "\n",
    "    def DrawAABB(self, aabb, color):\n",
    "        \"\"\"\n",
    "        Draw a wireframe around the AABB with the given color.\n",
    "        \"\"\"\n",
    "        points = [(aabb.lowerBound.x, aabb.lowerBound.y),\n",
    "                  (aabb.upperBound.x, aabb.lowerBound.y),\n",
    "                  (aabb.upperBound.x, aabb.upperBound.y),\n",
    "                  (aabb.lowerBound.x, aabb.upperBound.y)]\n",
    "\n",
    "        pygame.draw.aalines(self.surface, color, True, points)\n",
    "\n",
    "    def DrawSegment(self, p1, p2, color):\n",
    "        \"\"\"\n",
    "        Draw the line segment from p1-p2 with the specified color.\n",
    "        \"\"\"\n",
    "        pygame.draw.aaline(self.surface, color.bytes, p1, p2)\n",
    "\n",
    "    def DrawTransform(self, xf):\n",
    "        \"\"\"\n",
    "        Draw the transform xf on the screen\n",
    "        \"\"\"\n",
    "        p1 = xf.position\n",
    "        p2 = self.to_screen(p1 + self.axisScale * xf.R.col1)\n",
    "        p3 = self.to_screen(p1 + self.axisScale * xf.R.col2)\n",
    "        p1 = self.to_screen(p1)\n",
    "\n",
    "        pygame.draw.aaline(self.surface, (255, 0, 0), p1, p2)\n",
    "        pygame.draw.aaline(self.surface, (0, 255, 0), p1, p3)\n",
    "\n",
    "    def DrawCircle(self, center, radius, color, drawwidth=1):\n",
    "        \"\"\"\n",
    "        Draw a wireframe circle given the center, radius, and color.\n",
    "        \"\"\"\n",
    "        radius *= self.zoom\n",
    "        if radius < 1:\n",
    "            radius = 1\n",
    "        else: radius = int(radius)\n",
    "\n",
    "        pygame.draw.circle(self.surface, color.bytes, center, radius, drawwidth)\n",
    "\n",
    "    def DrawSolidCircle(self, center, radius, axis, color):\n",
    "        \"\"\"\n",
    "        Draw a solid circle given the center, radius, and color.\n",
    "        \"\"\"\n",
    "        radius *= self.zoom\n",
    "        if radius < 1:\n",
    "            radius = 1\n",
    "        else: radius = int(radius)\n",
    "\n",
    "        pygame.draw.circle(self.surface, (color/2).bytes+[127],\n",
    "                           center, radius, 0)\n",
    "        pygame.draw.circle(self.surface, color.bytes, center, radius, 1)\n",
    "        pygame.draw.aaline(self.surface, (255, 0, 0), center,\n",
    "                           (center[0] - radius*axis[0], center[1] +\n",
    "                            radius*axis[1]))\n",
    "\n",
    "    def DrawPolygon(self, vertices, color):\n",
    "        \"\"\"\n",
    "        Draw a wireframe polygon given the screen vertices with the given color.\n",
    "        \"\"\"\n",
    "        if not vertices:\n",
    "            return\n",
    "\n",
    "        if len(vertices) == 2:\n",
    "            pygame.draw.aaline(self.surface, color.bytes, vertices[0],\n",
    "                               vertices[1])\n",
    "        else:\n",
    "            pygame.draw.polygon(self.surface, color.bytes, vertices, 1)\n",
    "\n",
    "    def DrawSolidPolygon(self, vertices, color):\n",
    "        \"\"\"\n",
    "        Draw a filled polygon given the screen vertices with the given color.\n",
    "        \"\"\"\n",
    "        if not vertices:\n",
    "            return\n",
    "\n",
    "        if len(vertices) == 2:\n",
    "            pygame.draw.aaline(self.surface, color.bytes, vertices[0],\n",
    "                               vertices[1])\n",
    "        else:\n",
    "            pygame.draw.polygon(self.surface, (color/2).bytes+[127],\n",
    "                                vertices, 0)\n",
    "            pygame.draw.polygon(self.surface, color.bytes, vertices, 1)\n",
    "\n",
    "class PygameFramework(FrameworkBase):\n",
    "    \"\"\"\n",
    "    This class is the framework for running the simulation\n",
    "    \"\"\"\n",
    "\n",
    "    def __reset(self):\n",
    "        # Screen/rendering-related\n",
    "        self._viewZoom = 10.0\n",
    "        self._viewCenter = None\n",
    "        self._viewOffset = None\n",
    "        self.screenSize = None\n",
    "        self.rMouseDown = False\n",
    "        self.textLine = 30\n",
    "        self.font = None\n",
    "        self.fps = 0\n",
    "\n",
    "        # GUI-related (PGU)\n",
    "        self.gui_app = None\n",
    "        self.gui_table = None\n",
    "\n",
    "    def __init__(self):\n",
    "        super(PygameFramework, self).__init__()\n",
    "\n",
    "        self.__reset()\n",
    "        print('Initializing pygame framework...')\n",
    "        # Pygame Initialization\n",
    "        pygame.init()\n",
    "        caption = \"Python Box2D Testbed - \" + self.name\n",
    "        pygame.display.set_caption(caption)\n",
    "\n",
    "        self.screen = pygame.display.set_mode((640, 480))\n",
    "        self.screenSize = b2.b2Vec2(*self.screen.get_size())\n",
    "\n",
    "        self.renderer = PygameDraw(surface=self.screen, test=self)\n",
    "        self.world.renderer = self.renderer\n",
    "\n",
    "\n",
    "        self.font = pygame.font.Font(None, 15)\n",
    "\n",
    "        self.viewCenter = (0, 20.0)\n",
    "        self.groundbody = self.world.CreateBody()\n",
    "\n",
    "    def setCenter(self, value):\n",
    "        \"\"\"\n",
    "        Updates the view offset based on the center of the screen.\n",
    "        \"\"\"\n",
    "        self._viewCenter = b2.b2Vec2(*value)\n",
    "        self._viewCenter *= self._viewZoom\n",
    "        self._viewOffset = self._viewCenter - self.screenSize/2\n",
    "\n",
    "    def setZoom(self, zoom):\n",
    "        \"\"\"\n",
    "        Tells the display the zoom.\n",
    "        \"\"\"\n",
    "        self._viewZoom = zoom\n",
    "\n",
    "    viewZoom = property(lambda self: self._viewZoom, setZoom,\n",
    "                        doc='Zoom factor for the display')\n",
    "    viewCenter = property(lambda self: self._viewCenter/self._viewZoom,\n",
    "                          setCenter, doc='Screen center in camera coordinates')\n",
    "    viewOffset = property(lambda self: self._viewOffset,\n",
    "                          doc='Offset of the top-left corner of the screen')\n",
    "\n",
    "\n",
    "    def run(self):\n",
    "        \"\"\"\n",
    "        Begins the draw loop and tells the GUI to paint itself.\n",
    "        \"\"\"\n",
    "\n",
    "        # If any of the test constructors update the settings, reflect\n",
    "        # those changes on the GUI before running\n",
    "        if GUIEnabled:\n",
    "            self.gui_table.updateGUI(self.settings)\n",
    "        self.clock = pygame.time.Clock()\n",
    "        self.screen.fill((0, 0, 0))\n",
    "\n",
    "        # Run the simulation loop\n",
    "        self.SimulationLoop([0, 0, 0])\n",
    "\n",
    "        if GUIEnabled and self.settings.drawMenu:\n",
    "            self.gui_app.paint(self.screen)\n",
    "\n",
    "        pygame.display.flip()\n",
    "        self.clock.tick(self.settings.hz)\n",
    "        self.fps = self.clock.get_fps()\n",
    "\n",
    "    def run_next(self, action):\n",
    "        \"\"\"\n",
    "        Updates the screen and tells the GUI to paint itself.\n",
    "        \"\"\"\n",
    "        self.screen.fill((0, 0, 0))\n",
    "\n",
    "        # Run the simulation loop\n",
    "        self.SimulationLoop(action)\n",
    "        if GUIEnabled and self.settings.drawMenu:\n",
    "            self.gui_app.paint(self.screen)\n",
    "\n",
    "        pygame.display.flip()\n",
    "        self.clock.tick(self.settings.hz)\n",
    "        self.fps = self.clock.get_fps()\n",
    "\n",
    "    def Step(self, settings):\n",
    "        \"\"\"\n",
    "        Updates the simulation\n",
    "        \"\"\"\n",
    "        if GUIEnabled:\n",
    "            self.gui_table.updateSettings(self.settings)\n",
    "\n",
    "        super(PygameFramework, self).Step(settings)\n",
    "\n",
    "        if GUIEnabled:\n",
    "            self.gui_table.updateGUI(self.settings)\n",
    "\n",
    "    def ConvertScreenToWorld(self, x, y):\n",
    "        \"\"\"\n",
    "        Converts the display screen to the simulation's coordinates.\n",
    "        \"\"\"\n",
    "        return b2.b2Vec2((x + self.viewOffset.x) / self.viewZoom,\n",
    "                         ((self.screenSize.y - y + self.viewOffset.y)\n",
    "                          / self.viewZoom))\n",
    "\n",
    "    def DrawStringAt(self, x, y, s, color=(229, 153, 153, 255)):\n",
    "        \"\"\"\n",
    "        Draw some text, str, at screen coordinates (x, y).\n",
    "        \"\"\"\n",
    "        self.screen.blit(self.font.render(s, True, color), (x, y))\n",
    "\n",
    "    def Print(self, s, color=(229, 153, 153, 255)):\n",
    "        \"\"\"\n",
    "        Draw some text at the top status lines\n",
    "        and advance to the next line.\n",
    "        \"\"\"\n",
    "        self.screen.blit(self.font.render(s, True, color), (5, self.textLine))\n",
    "        self.textLine += 15\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### `ArmWorld`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [],
   "source": [
    "import Box2D as b2\n",
    "#from framework import Framework\n",
    "from gps_pb2 import JOINT_ANGLES, JOINT_VELOCITIES, END_EFFECTOR_POINTS\n",
    "\n",
    "#class ArmWorld(Framework):\n",
    "class ArmWorld(PygameFramework):\n",
    "    \"\"\" This class defines the 2 Link Arm and its environment.\"\"\"\n",
    "    name = \"2 Link Arm\"\n",
    "    def __init__(self, x0, target, render):\n",
    "        self.render = render\n",
    "        if self.render:\n",
    "            super(ArmWorld, self).__init__()\n",
    "        else:\n",
    "            self.world = b2.b2World(gravity=(0, -10), doSleep=True)\n",
    "\n",
    "        self.world.gravity = (0.0, 0.0)\n",
    "\n",
    "        fixture_length = 5\n",
    "        self.x0 = x0\n",
    "\n",
    "        rectangle_fixture = b2.b2FixtureDef(\n",
    "            shape=b2.b2PolygonShape(box=(.5, fixture_length)),\n",
    "            density=.5,\n",
    "            friction=1,\n",
    "        )\n",
    "        square_fixture = b2.b2FixtureDef(\n",
    "            shape=b2.b2PolygonShape(box=(1, 1)),\n",
    "            density=100.0,\n",
    "            friction=1,\n",
    "        )\n",
    "        self.base = self.world.CreateBody(\n",
    "            position=(0, 15),\n",
    "            fixtures=square_fixture,\n",
    "        )\n",
    "\n",
    "        self.body1 = self.world.CreateDynamicBody(\n",
    "            position=(0, 2),\n",
    "            fixtures=rectangle_fixture,\n",
    "            angle=b2.b2_pi,\n",
    "        )\n",
    "\n",
    "        self.body2 = self.world.CreateDynamicBody(\n",
    "            fixtures=rectangle_fixture,\n",
    "            position=(0, 2),\n",
    "            angle=b2.b2_pi,\n",
    "        )\n",
    "        self.target1 = self.world.CreateDynamicBody(\n",
    "            fixtures=rectangle_fixture,\n",
    "            position=(0, 0),\n",
    "            angle=b2.b2_pi,\n",
    "        )\n",
    "        self.target2 = self.world.CreateDynamicBody(\n",
    "            fixtures=rectangle_fixture,\n",
    "            position=(0, 0),\n",
    "            angle=b2.b2_pi,\n",
    "        )\n",
    "\n",
    "        self.joint1 = self.world.CreateRevoluteJoint(\n",
    "            bodyA=self.base,\n",
    "            bodyB=self.body1,\n",
    "            localAnchorA=(0, 0),\n",
    "            localAnchorB=(0, fixture_length),\n",
    "            enableMotor=True,\n",
    "            maxMotorTorque=400,\n",
    "            enableLimit=False,\n",
    "        )\n",
    "\n",
    "        self.joint2 = self.world.CreateRevoluteJoint(\n",
    "            bodyA=self.body1,\n",
    "            bodyB=self.body2,\n",
    "            localAnchorA=(0, -(fixture_length - 0.5)),\n",
    "            localAnchorB=(0, fixture_length - 0.5),\n",
    "            enableMotor=True,\n",
    "            maxMotorTorque=400,\n",
    "            enableLimit=False,\n",
    "        )\n",
    "\n",
    "        self.set_joint_angles(self.body1, self.body2, x0[0], x0[1])\n",
    "        self.set_joint_angles(self.target1, self.target2, target[0], target[1])\n",
    "        self.target1.active = False\n",
    "        self.target2.active = False\n",
    "\n",
    "        self.joint1.motorSpeed = x0[2]\n",
    "        self.joint2.motorSpeed = x0[3]\n",
    "\n",
    "    def set_joint_angles(self, body1, body2, angle1, angle2):\n",
    "        \"\"\" Converts the given absolute angle of the arms to joint angles\"\"\"\n",
    "        pos = self.base.GetWorldPoint((0, 0))\n",
    "        body1.angle = angle1 + np.pi\n",
    "        new_pos = body1.GetWorldPoint((0, 5))\n",
    "        body1.position += pos - new_pos\n",
    "        body2.angle = angle2 + body1.angle\n",
    "        pos = body1.GetWorldPoint((0, -4.5))\n",
    "        new_pos = body2.GetWorldPoint((0, 4.5))\n",
    "        body2.position += pos - new_pos\n",
    "\n",
    "\n",
    "    def run(self):\n",
    "        \"\"\"Runs the render loop if rendering is enabled;\n",
    "        otherwise takes the first time step.\n",
    "        \"\"\"\n",
    "        if self.render:\n",
    "            super(ArmWorld, self).run()\n",
    "        else:\n",
    "            self.run_next(None)\n",
    "\n",
    "    def run_next(self, action):\n",
    "        \"\"\"Moves forward in time one step. Calls the renderer if applicable.\"\"\"\n",
    "        if self.render:\n",
    "            super(ArmWorld, self).run_next(action)\n",
    "        else:\n",
    "            if action is not None:\n",
    "                self.joint1.motorSpeed = action[0]\n",
    "                self.joint2.motorSpeed = action[1]\n",
    "            self.world.Step(1.0 / fwSettings.hz, fwSettings.velocityIterations,\n",
    "                            fwSettings.positionIterations)\n",
    "\n",
    "    def Step(self, settings, action):\n",
    "        \"\"\"Moves forward in time one step. Called by the renderer\"\"\"\n",
    "        self.joint1.motorSpeed = action[0]\n",
    "        self.joint2.motorSpeed = action[1]\n",
    "\n",
    "        super(ArmWorld, self).Step(settings)\n",
    "\n",
    "    def reset_world(self):\n",
    "        \"\"\"Returns the world to its initial state\"\"\"\n",
    "        self.world.ClearForces()\n",
    "        self.joint1.motorSpeed = 0\n",
    "        self.joint2.motorSpeed = 0\n",
    "        self.body1.linearVelocity = (0, 0)\n",
    "        self.body1.angularVelocity = 0\n",
    "        self.body2.linearVelocity = (0, 0)\n",
    "        self.body2.angularVelocity = 0\n",
    "        self.set_joint_angles(self.body1, self.body2, self.x0[0], self.x0[1])\n",
    "\n",
    "\n",
    "    def get_state(self):\n",
    "        \"\"\"Retrieves the current state of the arm\"\"\"\n",
    "        state = {JOINT_ANGLES: np.array([self.joint1.angle,\n",
    "                                         self.joint2.angle]),\n",
    "                 JOINT_VELOCITIES: np.array([self.joint1.speed,\n",
    "                                             self.joint2.speed]),\n",
    "                 END_EFFECTOR_POINTS: np.append(np.array(self.body2.position),[0])}\n",
    "\n",
    "        return state"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Policy"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### `guess_dynamics`\n",
    "Initial guess at the model using position-velocity assumption."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "def guess_dynamics(gains, acc, dX, dU, dt):\n",
    "    \"\"\"\n",
    "    Initial guess at the model using position-velocity assumption.\n",
    "    Note: This code assumes joint positions occupy the first dU state\n",
    "          indices and joint velocities occupy the next dU.\n",
    "    Args:\n",
    "        gains: dU dimensional joint gains.\n",
    "        acc: dU dimensional joint acceleration.\n",
    "        dX: Dimensionality of the state.\n",
    "        dU: Dimensionality of the action.\n",
    "        dt: Length of a time step.\n",
    "    Returns:\n",
    "        Fd: A dX by dX+dU transition matrix.\n",
    "        fc: A dX bias vector.\n",
    "    \"\"\"\n",
    "    #TODO: Use packing instead of assuming which indices are the joint\n",
    "    #      angles.\n",
    "    Fd = np.vstack([\n",
    "        np.hstack([\n",
    "            np.eye(dU), dt * np.eye(dU), np.zeros((dU, dX - dU*2)),\n",
    "            dt ** 2 * np.diag(gains)\n",
    "        ]),\n",
    "        np.hstack([\n",
    "            np.zeros((dU, dU)), np.eye(dU), np.zeros((dU, dX - dU*2)),\n",
    "            dt * np.diag(gains)\n",
    "        ]),\n",
    "        np.zeros((dX - dU*2, dX+dU))\n",
    "    ])\n",
    "    fc = np.hstack([acc * dt ** 2, acc * dt, np.zeros((dX - dU*2))])\n",
    "    return Fd, fc\n"
   ]
  },
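  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The guessed model is just an Euler-integrated double integrator, $x_{t+1} = F_d [x_t; u_t] + f_c$. The following standalone sketch (a hypothetical 1-DOF system, so `dX = 2`, `dU = 1` and the zero-padding rows vanish) shows what the blocks encode: position advances by `dt * velocity`, velocity by `dt * gain * action`.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Minimal 1-DOF example: state = [position, velocity], action = torque.\n",
    "dX, dU, dt = 2, 1, 0.05\n",
    "gains, acc = np.ones(dU), np.zeros(dU)\n",
    "\n",
    "# Same construction as guess_dynamics for the dX == 2*dU case\n",
    "# (the np.zeros((dX - dU*2, ...)) padding blocks are empty here).\n",
    "Fd = np.vstack([\n",
    "    np.hstack([np.eye(dU), dt * np.eye(dU), dt**2 * np.diag(gains)]),\n",
    "    np.hstack([np.zeros((dU, dU)), np.eye(dU), dt * np.diag(gains)]),\n",
    "])\n",
    "fc = np.hstack([acc * dt**2, acc * dt])\n",
    "\n",
    "# One-step prediction from position 0, unit velocity, action u = 2.\n",
    "x = np.array([0.0, 1.0])\n",
    "u = np.array([2.0])\n",
    "x_next = Fd.dot(np.r_[x, u]) + fc\n",
    "print(x_next)  # pos = dt*1 + dt**2*2 = 0.055, vel = 1 + dt*2 = 1.1\n",
    "```"
   ]
  },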
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### `Policy`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class Policy(object):\n",
    "    \"\"\" Computes actions from states/observations. \"\"\"\n",
    "    __metaclass__ = abc.ABCMeta\n",
    "\n",
    "    @abc.abstractmethod\n",
    "    def act(self, x, obs, t, noise):\n",
    "        \"\"\"\n",
    "        Args:\n",
    "            x: State vector.\n",
    "            obs: Observation vector.\n",
    "            t: Time step.\n",
    "            noise: A dU-dimensional noise vector.\n",
    "        Returns:\n",
    "            A dU dimensional action vector.\n",
    "        \"\"\"\n",
    "        raise NotImplementedError(\"Must be implemented in subclass.\")\n",
    "\n",
    "    def set_meta_data(self, meta):\n",
    "        \"\"\"\n",
    "        Set meta data for the policy (e.g., image size, multi-modal observation sizes).\n",
    "        Args:\n",
    "            meta: meta data.\n",
    "        \"\"\"\n",
    "        return\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### `LinearGaussianPolicy`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class LinearGaussianPolicy(Policy):\n",
    "    \"\"\"\n",
    "    Time-varying linear Gaussian policy.\n",
    "    u = K*x + k + noise, where noise ~ N(0, pol_covar) is produced by\n",
    "    scaling a unit Gaussian sample with chol_pol_covar.\n",
    "    \"\"\"\n",
    "    def __init__(self, K, k, pol_covar, chol_pol_covar, inv_pol_covar):\n",
    "        Policy.__init__(self)\n",
    "\n",
    "        # Assume K has the correct shape, and make sure others match.\n",
    "        self.T = K.shape[0]\n",
    "        self.dU = K.shape[1]\n",
    "        self.dX = K.shape[2]\n",
    "\n",
    "        check_shape(k, (self.T, self.dU))\n",
    "        check_shape(pol_covar, (self.T, self.dU, self.dU))\n",
    "        check_shape(chol_pol_covar, (self.T, self.dU, self.dU))\n",
    "        check_shape(inv_pol_covar, (self.T, self.dU, self.dU))\n",
    "\n",
    "        self.K = K\n",
    "        self.k = k\n",
    "        self.pol_covar = pol_covar\n",
    "        self.chol_pol_covar = chol_pol_covar\n",
    "        self.inv_pol_covar = inv_pol_covar\n",
    "\n",
    "    def act(self, x, obs, t, noise=None):\n",
    "        \"\"\"\n",
    "        Return an action for a state.\n",
    "        Args:\n",
    "            x: State vector.\n",
    "            obs: Observation vector.\n",
    "            t: Time step.\n",
    "            noise: Action noise. This will be scaled by the variance.\n",
    "        \"\"\"\n",
    "        u = self.K[t].dot(x) + self.k[t]\n",
    "        if noise is not None:\n",
    "            u += self.chol_pol_covar[t].T.dot(noise)\n",
    "        return u\n",
    "\n",
    "    def fold_k(self, noise):\n",
    "        \"\"\"\n",
    "        Fold noise into k.\n",
    "        Args:\n",
    "            noise: A T x dU noise matrix with mean 0 and variance 1.\n",
    "        Returns:\n",
    "            k: A T x dU bias vector.\n",
    "        \"\"\"\n",
    "        k = np.zeros_like(self.k)\n",
    "        for i in range(self.T):\n",
    "            scaled_noise = self.chol_pol_covar[i].T.dot(noise[i])\n",
    "            k[i] = scaled_noise + self.k[i]\n",
    "        return k\n",
    "\n",
    "    def nans_like(self):\n",
    "        \"\"\"\n",
    "        Returns:\n",
    "            A new linear Gaussian policy object with the same dimensions\n",
    "            but all values filled with NaNs.\n",
    "        \"\"\"\n",
    "        policy = LinearGaussianPolicy(\n",
    "            np.zeros_like(self.K), np.zeros_like(self.k),\n",
    "            np.zeros_like(self.pol_covar), np.zeros_like(self.chol_pol_covar),\n",
    "            np.zeros_like(self.inv_pol_covar)\n",
    "        )\n",
    "        policy.K.fill(np.nan)\n",
    "        policy.k.fill(np.nan)\n",
    "        policy.pol_covar.fill(np.nan)\n",
    "        policy.chol_pol_covar.fill(np.nan)\n",
    "        policy.inv_pol_covar.fill(np.nan)\n",
    "        return policy\n"
   ]
  },
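  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "What `act` computes at a single time step, as a standalone numeric sketch (toy sizes `dX = 2`, `dU = 1`, `T = 1`; all numbers hypothetical):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "K = np.array([[[0.5, -1.0]]])      # (T, dU, dX) feedback gains\n",
    "k = np.array([[0.1]])              # (T, dU) bias terms\n",
    "chol = 0.2 * np.ones((1, 1, 1))    # (T, dU, dU) Cholesky factor of covariance\n",
    "\n",
    "x = np.array([1.0, 2.0])           # current state\n",
    "noise = np.array([1.0])            # a unit-Gaussian sample, fixed here\n",
    "\n",
    "# u = K[t] x + k[t] + chol[t]^T noise, as in LinearGaussianPolicy.act.\n",
    "t = 0\n",
    "u = K[t].dot(x) + k[t] + chol[t].T.dot(noise)\n",
    "print(u)  # 0.5*1 - 1.0*2 + 0.1 + 0.2*1 = [-1.2]\n",
    "```\n",
    "\n",
    "Scaling unit noise by the Cholesky factor is what makes the sampled actions distributed as $N(K_t x_t + k_t, \\Sigma_t)$, with $\\Sigma_t$ = `pol_covar[t]`."
   ]
  },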
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Initializations for linear Gaussian controllers"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Initial Linear Gaussian Trajectory Distributions, PD-based initializer.\n",
    "# Note, PD is the default initializer type.\n",
    "INIT_LG_PD = {\n",
    "    'init_var': 10.0,\n",
    "    'pos_gains': 10.0, # position gains\n",
    "    'vel_gains_mult': 0.01,  # velocity gains multiplier on pos_gains\n",
    "    'init_action_offset': None,\n",
    "}\n",
    "\n",
    "# Initial Linear Gaussian Trajectory distribution, LQR-based initializer.\n",
    "INIT_LG_LQR = {\n",
    "    'init_var': 1.0,\n",
    "    'stiffness': 1.0,\n",
    "    'stiffness_vel': 0.5,\n",
    "    'final_weight': 1.0,\n",
    "    # Parameters for guessing dynamics\n",
    "    'init_acc': [],  # dU vector of accelerations, default zeros.\n",
    "    'init_gains': [],  # dU vector of gains, default ones.\n",
    "}\n",
    "\n",
    "def init_lqr(hyperparams):\n",
    "    \"\"\"\n",
    "    Return initial gains for a time-varying linear Gaussian controller\n",
    "    that tries to hold the initial position.\n",
    "    \"\"\"\n",
    "    config = copy.deepcopy(INIT_LG_LQR)\n",
    "    config.update(hyperparams)\n",
    "\n",
    "    x0, dX, dU = config['x0'], config['dX'], config['dU']\n",
    "    dt, T = config['dt'], config['T']\n",
    "\n",
    "    #TODO: Use packing instead of assuming which indices are the joint\n",
    "    #      angles.\n",
    "\n",
    "    # Notation notes:\n",
    "    # L = loss, Q = q-function (dX+dU dimensional),\n",
    "    # V = value function (dX dimensional), F = dynamics\n",
    "    # Vectors are lower-case, matrices are upper case.\n",
    "    # Derivatives: x = state, u = action, t = state+action (trajectory).\n",
    "    # The time index is denoted by _t after the above.\n",
    "    # Ex. Ltt_t = Loss, 2nd derivative (w.r.t. trajectory),\n",
    "    # indexed by time t.\n",
    "\n",
    "    # Constants.\n",
    "    idx_x = slice(dX)  # Slices out state.\n",
    "    idx_u = slice(dX, dX+dU)  # Slices out actions.\n",
    "\n",
    "    if len(config['init_acc']) == 0:\n",
    "        config['init_acc'] = np.zeros(dU)\n",
    "\n",
    "    if len(config['init_gains']) == 0:\n",
    "        config['init_gains'] = np.ones(dU)\n",
    "\n",
    "    # Set up simple linear dynamics model.\n",
    "    Fd, fc = guess_dynamics(config['init_gains'], config['init_acc'],\n",
    "                            dX, dU, dt)\n",
    "\n",
    "    # Setup a cost function based on stiffness.\n",
    "    # Ltt = (dX+dU) by (dX+dU) - Hessian of loss with respect to\n",
    "    # trajectory at a single timestep.\n",
    "    Ltt = np.diag(np.hstack([\n",
    "        config['stiffness'] * np.ones(dU),\n",
    "        config['stiffness'] * config['stiffness_vel'] * np.ones(dU),\n",
    "        np.zeros(dX - dU*2), np.ones(dU)\n",
    "    ]))\n",
    "    Ltt = Ltt / config['init_var']  # Cost function - quadratic term.\n",
    "    lt = -Ltt.dot(np.r_[x0, np.zeros(dU)])  # Cost function - linear term.\n",
    "\n",
    "    # Perform dynamic programming.\n",
    "    K = np.zeros((T, dU, dX))  # Controller gains matrix.\n",
    "    k = np.zeros((T, dU))  # Controller bias term.\n",
    "    PSig = np.zeros((T, dU, dU))  # Covariance of noise.\n",
    "    cholPSig = np.zeros((T, dU, dU))  # Cholesky decomposition.\n",
    "    invPSig = np.zeros((T, dU, dU))  # Inverse of covariance.\n",
    "    vx_t = np.zeros(dX)  # Vx = dV/dX. Derivative of value function.\n",
    "    Vxx_t = np.zeros((dX, dX))  # Vxx = ddV/dXdX.\n",
    "\n",
    "    #TODO: A lot of this code is repeated with traj_opt_lqr_python.py\n",
    "    #      backward pass.\n",
    "    for t in range(T - 1, -1, -1):\n",
    "        # Compute Q function at this step.\n",
    "        if t == (T - 1):\n",
    "            Ltt_t = config['final_weight'] * Ltt\n",
    "            lt_t = config['final_weight'] * lt\n",
    "        else:\n",
    "            Ltt_t = Ltt\n",
    "            lt_t = lt\n",
    "        # Qtt = (dX+dU) by (dX+dU) 2nd Derivative of Q-function with\n",
    "        # respect to trajectory (dX+dU).\n",
    "        Qtt_t = Ltt_t + Fd.T.dot(Vxx_t).dot(Fd)\n",
    "        # Qt = (dX+dU) 1st Derivative of Q-function with respect to\n",
    "        # trajectory (dX+dU).\n",
    "        qt_t = lt_t + Fd.T.dot(vx_t + Vxx_t.dot(fc))\n",
    "\n",
    "        # Compute preceding value function.\n",
    "        U = sp.linalg.cholesky(Qtt_t[idx_u, idx_u])\n",
    "        L = U.T\n",
    "\n",
    "        invPSig[t, :, :] = Qtt_t[idx_u, idx_u]\n",
    "        PSig[t, :, :] = sp.linalg.solve_triangular(\n",
    "            U, sp.linalg.solve_triangular(L, np.eye(dU), lower=True)\n",
    "        )\n",
    "        cholPSig[t, :, :] = sp.linalg.cholesky(PSig[t, :, :])\n",
    "        K[t, :, :] = -sp.linalg.solve_triangular(\n",
    "            U, sp.linalg.solve_triangular(L, Qtt_t[idx_u, idx_x], lower=True)\n",
    "        )\n",
    "        k[t, :] = -sp.linalg.solve_triangular(\n",
    "            U, sp.linalg.solve_triangular(L, qt_t[idx_u], lower=True)\n",
    "        )\n",
    "        Vxx_t = Qtt_t[idx_x, idx_x] + Qtt_t[idx_x, idx_u].dot(K[t, :, :])\n",
    "        vx_t = qt_t[idx_x] + Qtt_t[idx_x, idx_u].dot(k[t, :])\n",
    "        Vxx_t = 0.5 * (Vxx_t + Vxx_t.T)\n",
    "\n",
    "    return LinearGaussianPolicy(K, k, PSig, cholPSig, invPSig)\n",
    "\n",
    "\n",
    "def init_pd(hyperparams):\n",
    "    \"\"\"\n",
    "    Initializes the linear-Gaussian controller as a\n",
    "    proportional-derivative (PD) controller with Gaussian noise.\n",
    "    The position gains are set by pos_gains and the velocity\n",
    "    gains by pos_gains * vel_gains_mult.\n",
    "    \"\"\"\n",
    "    config = copy.deepcopy(INIT_LG_PD)\n",
    "    config.update(hyperparams)\n",
    "\n",
    "    dU, dQ, dX = config['dU'], config['dQ'], config['dX']\n",
    "    x0, T = config['x0'], config['T']\n",
    "\n",
    "    # Choose initialization mode.\n",
    "    Kp = 1.0\n",
    "    Kv = config['vel_gains_mult']\n",
    "    if dU < dQ:\n",
    "        K = -config['pos_gains'] * np.tile(\n",
    "            np.hstack([\n",
    "                np.eye(dU) * Kp, np.zeros((dU, dQ-dU)),\n",
    "                np.eye(dU) * Kv, np.zeros((dU, dQ-dU))\n",
    "            ]), [T, 1, 1]\n",
    "        )\n",
    "    else:\n",
    "        K = -config['pos_gains'] * np.tile(\n",
    "            np.hstack([\n",
    "                np.eye(dU) * Kp, np.eye(dU) * Kv,\n",
    "                np.zeros((dU, dX - dU*2))\n",
    "            ]), [T, 1, 1]\n",
    "        )\n",
    "    k = np.tile(-K[0, :, :].dot(x0), [T, 1])\n",
    "    PSig = config['init_var'] * np.tile(np.eye(dU), [T, 1, 1])\n",
    "    cholPSig = np.sqrt(config['init_var']) * np.tile(np.eye(dU), [T, 1, 1])\n",
    "    invPSig = (1.0 / config['init_var']) * np.tile(np.eye(dU), [T, 1, 1])\n",
    "\n",
    "    return LinearGaussianPolicy(K, k, PSig, cholPSig, invPSig)\n"
   ]
  },
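  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see what the PD initializer amounts to, here is a standalone sketch with hypothetical numbers (`dU = dQ = 1`, `dX = 2`): the resulting controller $u = K_t x + k_t$ servoes the state back toward `x0`, producing zero action at `x0` itself.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "dU, dX, T = 1, 2, 3\n",
    "pos_gains, Kp, Kv = 10.0, 1.0, 0.01   # Kv plays the role of vel_gains_mult\n",
    "x0 = np.array([0.5, 0.0])             # position/velocity to hold\n",
    "\n",
    "# Same construction as the dU >= dQ branch of init_pd\n",
    "# (dX == 2*dU, so the zero-padding block vanishes).\n",
    "K = -pos_gains * np.tile(\n",
    "    np.hstack([np.eye(dU) * Kp, np.eye(dU) * Kv]), [T, 1, 1])\n",
    "k = np.tile(-K[0].dot(x0), [T, 1])\n",
    "\n",
    "u_at_x0 = K[0].dot(x0) + k[0]          # zero action at the setpoint\n",
    "u_off = K[0].dot(np.array([0.6, 0.0])) + k[0]\n",
    "print(u_at_x0, u_off)  # [0.] [-1.]  (pushes back toward x0)\n",
    "```"
   ]
  },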
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Run"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### GPS main class and code: `GPSMain`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [],
   "source": [
    "import matplotlib as mpl\n",
    "\n",
    "# Prefer a Qt backend when one is available.\n",
    "qt_found = False\n",
    "try:\n",
    "    import PyQt4\n",
    "    mpl.use('Qt4Agg')\n",
    "    qt_found = True\n",
    "except ImportError:\n",
    "    qt_found = False\n",
    "if not qt_found:\n",
    "    try:\n",
    "        import PyQt5\n",
    "        mpl.use('Qt5Agg')\n",
    "        qt_found = True\n",
    "    except ImportError:\n",
    "        qt_found = False\n",
    "\n",
    "import logging\n",
    "import imp\n",
    "import os\n",
    "import os.path\n",
    "import sys\n",
    "import argparse\n",
    "import threading\n",
    "import time\n",
    "import traceback\n",
    "\n",
    "# Add gps/python to path so that imports work.\n",
    "#sys.path.append('/'.join(str.split(__file__, '/')[:-2]))\n",
    "\n",
    "from gps.gui.gps_training_gui import GPSTrainingGUI\n",
    "#from gps.utility.data_logger import DataLogger\n",
    "#from gps.sample.sample_list import SampleList\n",
    "\n",
    "\n",
    "class GPSMain(object):\n",
    "    \"\"\" Main class to run algorithms and experiments. \"\"\"\n",
    "    def __init__(self, config, quit_on_end=False):\n",
    "        \"\"\"\n",
    "        Initialize GPSMain\n",
    "        Args:\n",
    "            config: Hyperparameters for experiment\n",
    "            quit_on_end: When true, quit automatically on completion\n",
    "        \"\"\"\n",
    "        self._quit_on_end = quit_on_end\n",
    "        self._hyperparams = config\n",
    "        self._conditions = config['common']['conditions']\n",
    "        if 'test_conditions' in config['common']:\n",
    "            self._train_idx = config['common']['train_conditions']\n",
    "            self._test_idx = config['common']['test_conditions']\n",
    "        else:\n",
    "            self._train_idx = range(self._conditions)\n",
    "            config['common']['train_conditions'] = config['common']['conditions']\n",
    "            self._hyperparams = config\n",
    "            self._test_idx = self._train_idx\n",
    "\n",
    "        self._data_files_dir = config['common']['data_files_dir']\n",
    "\n",
    "        self.agent = config['agent']['type'](config['agent'])\n",
    "        self.data_logger = DataLogger()\n",
    "        self.gui = GPSTrainingGUI(config['common']) if config['gui_on'] else None\n",
    "\n",
    "        config['algorithm']['agent'] = self.agent\n",
    "        self.algorithm = config['algorithm']['type'](config['algorithm'])\n",
    "\n",
    "    def run(self, itr_load=None):\n",
    "        \"\"\"\n",
    "        Run training by iteratively sampling and taking an iteration.\n",
    "        Args:\n",
    "            itr_load: If specified, loads algorithm state from that\n",
    "                iteration, and resumes training at the next iteration.\n",
    "        Returns: None\n",
    "        \"\"\"\n",
    "        try:\n",
    "            itr_start = self._initialize(itr_load)\n",
    "\n",
    "            for itr in range(itr_start, self._hyperparams['iterations']):\n",
    "                for cond in self._train_idx:\n",
    "                    for i in range(self._hyperparams['num_samples']):\n",
    "                        self._take_sample(itr, cond, i)\n",
    "\n",
    "                traj_sample_lists = [\n",
    "                    self.agent.get_samples(cond, -self._hyperparams['num_samples'])\n",
    "                    for cond in self._train_idx\n",
    "                ]\n",
    "\n",
    "                # Clear agent samples.\n",
    "                self.agent.clear_samples()\n",
    "\n",
    "                self._take_iteration(itr, traj_sample_lists)\n",
    "                pol_sample_lists = self._take_policy_samples()\n",
    "                self._log_data(itr, traj_sample_lists, pol_sample_lists)\n",
    "        except Exception:\n",
    "            traceback.print_exception(*sys.exc_info())\n",
    "        finally:\n",
    "            self._end()\n",
    "\n",
    "    def test_policy(self, itr, N):\n",
    "        \"\"\"\n",
    "        Take N policy samples of the algorithm state at iteration itr,\n",
    "        for testing the policy to see how it is behaving.\n",
    "        (Called directly from the command line --policy flag).\n",
    "        Args:\n",
    "            itr: the iteration from which to take policy samples\n",
    "            N: the number of policy samples to take\n",
    "        Returns: None\n",
    "        \"\"\"\n",
    "        algorithm_file = self._data_files_dir + 'algorithm_itr_%02d.pkl' % itr\n",
    "        self.algorithm = self.data_logger.unpickle(algorithm_file)\n",
    "        if self.algorithm is None:\n",
    "            print(\"Error: cannot find '%s.'\" % algorithm_file)\n",
    "            os._exit(1) # called instead of sys.exit(), since this is in a thread\n",
    "        traj_sample_lists = self.data_logger.unpickle(self._data_files_dir +\n",
    "            ('traj_sample_itr_%02d.pkl' % itr))\n",
    "\n",
    "        pol_sample_lists = self._take_policy_samples(N)\n",
    "        self.data_logger.pickle(\n",
    "            self._data_files_dir + ('pol_sample_itr_%02d.pkl' % itr),\n",
    "            copy.copy(pol_sample_lists)\n",
    "        )\n",
    "\n",
    "        if self.gui:\n",
    "            self.gui.update(itr, self.algorithm, self.agent,\n",
    "                traj_sample_lists, pol_sample_lists)\n",
    "            self.gui.set_status_text(('Took %d policy sample(s) from ' +\n",
    "                'algorithm state at iteration %d.\\n' +\n",
    "                'Saved to: data_files/pol_sample_itr_%02d.pkl.\\n') % (N, itr, itr))\n",
    "\n",
    "    def _initialize(self, itr_load):\n",
    "        \"\"\"\n",
    "        Initialize from the specified iteration.\n",
    "        Args:\n",
    "            itr_load: If specified, loads algorithm state from that\n",
    "                iteration, and resumes training at the next iteration.\n",
    "        Returns:\n",
    "            itr_start: Iteration to start from.\n",
    "        \"\"\"\n",
    "        if itr_load is None:\n",
    "            if self.gui:\n",
    "                self.gui.set_status_text('Press \\'go\\' to begin.')\n",
    "            return 0\n",
    "        else:\n",
    "            algorithm_file = self._data_files_dir + 'algorithm_itr_%02d.pkl' % itr_load\n",
    "            self.algorithm = self.data_logger.unpickle(algorithm_file)\n",
    "            if self.algorithm is None:\n",
    "                print(\"Error: cannot find '%s.'\" % algorithm_file)\n",
    "                os._exit(1) # called instead of sys.exit(), since this is in a thread\n",
    "\n",
    "            if self.gui:\n",
    "                traj_sample_lists = self.data_logger.unpickle(self._data_files_dir +\n",
    "                    ('traj_sample_itr_%02d.pkl' % itr_load))\n",
    "                if self.algorithm.cur[0].pol_info:\n",
    "                    pol_sample_lists = self.data_logger.unpickle(self._data_files_dir +\n",
    "                        ('pol_sample_itr_%02d.pkl' % itr_load))\n",
    "                else:\n",
    "                    pol_sample_lists = None\n",
    "                self.gui.set_status_text(\n",
    "                    ('Resuming training from algorithm state at iteration %d.\\n' +\n",
    "                    'Press \\'go\\' to begin.') % itr_load)\n",
    "            return itr_load + 1\n",
    "\n",
    "    def _take_sample(self, itr, cond, i):\n",
    "        \"\"\"\n",
    "        Collect a sample from the agent.\n",
    "        Args:\n",
    "            itr: Iteration number.\n",
    "            cond: Condition number.\n",
    "            i: Sample number.\n",
    "        Returns: None\n",
    "        \"\"\"\n",
    "        if self.algorithm._hyperparams['sample_on_policy'] \\\n",
    "                and self.algorithm.iteration_count > 0:\n",
    "            pol = self.algorithm.policy_opt.policy\n",
    "        else:\n",
    "            pol = self.algorithm.cur[cond].traj_distr\n",
    "        if self.gui:\n",
    "            self.gui.set_image_overlays(cond)   # Must call for each new cond.\n",
    "            redo = True\n",
    "            while redo:\n",
    "                while self.gui.mode in ('wait', 'request', 'process'):\n",
    "                    if self.gui.mode in ('wait', 'process'):\n",
    "                        time.sleep(0.01)\n",
    "                        continue\n",
    "                    # 'request' mode.\n",
    "                    if self.gui.request == 'reset':\n",
    "                        try:\n",
    "                            self.agent.reset(cond)\n",
    "                        except NotImplementedError:\n",
    "                            self.gui.err_msg = 'Agent reset unimplemented.'\n",
    "                    elif self.gui.request == 'fail':\n",
    "                        self.gui.err_msg = 'Cannot fail before sampling.'\n",
    "                    self.gui.process_mode()  # Complete request.\n",
    "\n",
    "                self.gui.set_status_text(\n",
    "                    'Sampling: iteration %d, condition %d, sample %d.' %\n",
    "                    (itr, cond, i)\n",
    "                )\n",
    "                self.agent.sample(\n",
    "                    pol, cond,\n",
    "                    verbose=(i < self._hyperparams['verbose_trials'])\n",
    "                )\n",
    "\n",
    "                if self.gui.mode == 'request' and self.gui.request == 'fail':\n",
    "                    redo = True\n",
    "                    self.gui.process_mode()\n",
    "                    self.agent.delete_last_sample(cond)\n",
    "                else:\n",
    "                    redo = False\n",
    "        else:\n",
    "            self.agent.sample(\n",
    "                pol, cond,\n",
    "                verbose=(i < self._hyperparams['verbose_trials'])\n",
    "            )\n",
    "\n",
    "    def _take_iteration(self, itr, sample_lists):\n",
    "        \"\"\"\n",
    "        Take an iteration of the algorithm.\n",
    "        Args:\n",
    "            itr: Iteration number.\n",
    "        Returns: None\n",
    "        \"\"\"\n",
    "        if self.gui:\n",
    "            self.gui.set_status_text('Calculating.')\n",
    "            self.gui.start_display_calculating()\n",
    "        self.algorithm.iteration(sample_lists)\n",
    "        if self.gui:\n",
    "            self.gui.stop_display_calculating()\n",
    "\n",
    "    def _take_policy_samples(self, N=None):\n",
    "        \"\"\"\n",
    "        Take samples from the policy to see how it's doing.\n",
    "        Args:\n",
    "            N  : number of policy samples to take per condition\n",
    "        Returns: A list of SampleList objects holding the policy\n",
    "            samples, or None for trajectory-only algorithms.\n",
    "        \"\"\"\n",
    "        if 'verbose_policy_trials' not in self._hyperparams:\n",
    "            # Trajectory-only algorithms (e.g. AlgorithmTrajOpt) have no\n",
    "            # global policy to sample.\n",
    "            return None\n",
    "        verbose = self._hyperparams['verbose_policy_trials']\n",
    "        if self.gui:\n",
    "            self.gui.set_status_text('Taking policy samples.')\n",
    "        pol_samples = [[None] for _ in range(len(self._test_idx))]\n",
    "        # Since this isn't noisy, just take one sample.\n",
    "        # TODO: Make this noisy? Add hyperparam?\n",
    "        # TODO: Take at all conditions for GUI?\n",
    "        for cond in range(len(self._test_idx)):\n",
    "            pol_samples[cond][0] = self.agent.sample(\n",
    "                self.algorithm.policy_opt.policy, self._test_idx[cond],\n",
    "                verbose=verbose, save=False, noisy=False)\n",
    "        return [SampleList(samples) for samples in pol_samples]\n",
    "\n",
    "    def _log_data(self, itr, traj_sample_lists, pol_sample_lists=None):\n",
    "        \"\"\"\n",
    "        Log data and algorithm, and update the GUI.\n",
    "        Args:\n",
    "            itr: Iteration number.\n",
    "            traj_sample_lists: trajectory samples as SampleList object\n",
    "            pol_sample_lists: policy samples as SampleList object\n",
    "        Returns: None\n",
    "        \"\"\"\n",
    "        if self.gui:\n",
    "            self.gui.set_status_text('Logging data and updating GUI.')\n",
    "            self.gui.update(itr, self.algorithm, self.agent,\n",
    "                traj_sample_lists, pol_sample_lists)\n",
    "            self.gui.save_figure(\n",
    "                self._data_files_dir + ('figure_itr_%02d.png' % itr)\n",
    "            )\n",
    "        if 'no_sample_logging' in self._hyperparams['common']:\n",
    "            return\n",
    "        self.data_logger.pickle(\n",
    "            self._data_files_dir + ('algorithm_itr_%02d.pkl' % itr),\n",
    "            copy.copy(self.algorithm)\n",
    "        )\n",
    "        self.data_logger.pickle(\n",
    "            self._data_files_dir + ('traj_sample_itr_%02d.pkl' % itr),\n",
    "            copy.copy(traj_sample_lists)\n",
    "        )\n",
    "        if pol_sample_lists:\n",
    "            self.data_logger.pickle(\n",
    "                self._data_files_dir + ('pol_sample_itr_%02d.pkl' % itr),\n",
    "                copy.copy(pol_sample_lists)\n",
    "            )\n",
    "\n",
    "    def _end(self):\n",
    "        \"\"\" Finish running and exit. \"\"\"\n",
    "        if self.gui:\n",
    "            self.gui.set_status_text('Training complete.')\n",
    "            self.gui.end_mode()\n",
    "            if self._quit_on_end:\n",
    "                # Quit automatically (for running sequential expts)\n",
    "                os._exit(1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [],
   "source": [
    "from gps_pb2 import JOINT_ANGLES, JOINT_VELOCITIES, END_EFFECTOR_POINTS, ACTION\n",
    "from datetime import datetime\n",
    "import os\n",
    "\n",
    "SENSOR_DIMS = {\n",
    "    JOINT_ANGLES: 2,\n",
    "    JOINT_VELOCITIES: 2,\n",
    "    END_EFFECTOR_POINTS: 3,\n",
    "    ACTION: 2\n",
    "}\n",
    "\n",
    "#BASE_DIR = '/'.join(str.split(gps_filepath, '/')[:-2])\n",
    "BASE_DIR = '.'\n",
    "#EXP_DIR = BASE_DIR + '/../experiments/box2d_arm_example/'\n",
     "EXP_DIR = BASE_DIR + '/experiments/'\n",
    "\n",
    "\n",
    "common = {\n",
    "    'experiment_name': 'box2d_arm_example' + '_' + \\\n",
    "            datetime.strftime(datetime.now(), '%m-%d-%y_%H-%M'),\n",
    "    'experiment_dir': EXP_DIR,\n",
    "    'data_files_dir': EXP_DIR + 'data_files/',\n",
    "    'log_filename': EXP_DIR + 'log.txt',\n",
    "    'conditions': 1,\n",
    "}\n",
    "\n",
    "if not os.path.exists(common['data_files_dir']):\n",
    "    os.makedirs(common['data_files_dir'])\n",
    "\n",
    "agent = {\n",
    "    'type': AgentBox2D,\n",
    "    'target_state' : np.array([0, 0]),\n",
    "    'world' : ArmWorld,\n",
    "    'render' : False,\n",
    "    'x0': np.array([0.75*np.pi, 0.5*np.pi, 0, 0, 0, 0, 0]),\n",
    "    'rk': 0,\n",
    "    'dt': 0.05,\n",
    "    'substeps': 1,\n",
    "    'conditions': common['conditions'],\n",
    "    'pos_body_idx': np.array([]),\n",
    "    'pos_body_offset': np.array([]),\n",
    "    'T': 100,\n",
    "    'sensor_dims': SENSOR_DIMS,\n",
    "    'state_include': [JOINT_ANGLES, JOINT_VELOCITIES, END_EFFECTOR_POINTS],\n",
    "    'obs_include': [],\n",
    "}\n",
    "\n",
    "algorithm = {\n",
    "    'type': AlgorithmTrajOpt,\n",
    "    'conditions': common['conditions'],\n",
    "}\n",
    "\n",
    "algorithm['init_traj_distr'] = {\n",
    "    'type': init_lqr,\n",
    "    'init_gains': np.zeros(SENSOR_DIMS[ACTION]),\n",
    "    'init_acc': np.zeros(SENSOR_DIMS[ACTION]),\n",
    "    'init_var': 0.1,\n",
    "    'stiffness': 0.01,\n",
    "    'dt': agent['dt'],\n",
    "    'T': agent['T'],\n",
    "}\n",
    "\n",
    "action_cost = {\n",
    "    'type': CostAction,\n",
    "    'wu': np.array([1, 1])\n",
    "}\n",
    "\n",
    "state_cost = {\n",
    "    'type': CostState,\n",
    "    'data_types' : {\n",
    "        JOINT_ANGLES: {\n",
    "            'wp': np.array([1, 1]),\n",
     "            'target_state': agent['target_state'],\n",
    "        },\n",
    "    },\n",
    "}\n",
    "\n",
    "algorithm['cost'] = {\n",
    "    'type': CostSum,\n",
    "    'costs': [action_cost, state_cost],\n",
    "    'weights': [1e-5, 1.0],\n",
    "}\n",
    "\n",
    "algorithm['dynamics'] = {\n",
    "    'type': DynamicsLRPrior,\n",
    "    'regularization': 1e-6,\n",
    "    'prior': {\n",
    "        'type': DynamicsPriorGMM,\n",
    "        'max_clusters': 20,\n",
    "        'min_samples_per_cluster': 40,\n",
    "        'max_samples': 20,\n",
    "    },\n",
    "}\n",
    "\n",
    "algorithm['traj_opt'] = {\n",
    "    'type': TrajOptLQRPython,\n",
    "}\n",
    "\n",
    "algorithm['policy_opt'] = {}\n",
    "\n",
    "config_gps = {\n",
    "    'iterations': 10,\n",
    "    'num_samples': 5,\n",
    "    'verbose_trials': 5,\n",
    "    'common': common,\n",
    "    'agent': agent,\n",
    "    'gui_on': True,\n",
    "    'algorithm': algorithm,\n",
    "}\n",
    "\n",
    "common['info'] = generate_experiment_info(config_gps)"
   ]
  },
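   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The cost above is a weighted sum: `CostSum` combines the torque penalty (`CostAction`) with the distance-to-target penalty (`CostState`), using weights `1e-5` and `1.0`. As a rough sketch of the weighting (illustrative only, not the actual `gps` cost classes), the per-timestep cost behaves like:\n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "# Illustrative sketch: l(q, u) = w_u * ||u||^2 + w_x * ||q - target||^2,\n",
     "# mirroring the weights [1e-5, 1.0] passed to CostSum above.\n",
     "def composite_cost(q, u, target, w_u=1e-5, w_x=1.0):\n",
     "    return w_u * np.sum(u ** 2) + w_x * np.sum((q - target) ** 2)\n",
     "\n",
     "# Zero torque at the target state gives zero cost.\n",
     "print(composite_cost(np.zeros(2), np.zeros(2), np.zeros(2)))\n",
     "```\n",
     "\n",
     "The tiny action weight leaves the optimizer nearly free to use torque; the state term dominates and drives the joint angles toward `target_state`.\n"
    ]
   },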
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/Victor/basic_reinforcement_learning/tutorial10/gps/gui/textbox.py:64: MatplotlibDeprecationWarning: The set_axis_bgcolor function was deprecated in version 2.0. Use set_facecolor instead.\n",
      "  self._ax.set_axis_bgcolor(ColorConverter().to_rgba(color, alpha))\n",
      "/Users/Victor/basic_reinforcement_learning/tutorial10/gps/gui/textbox.py:68: MatplotlibDeprecationWarning: The get_axis_bgcolor function was deprecated in version 2.0. Use get_facecolor instead.\n",
      "  color, alpha = self._ax.get_axis_bgcolor(), self._ax.get_alpha()\n",
      "/Users/Victor/basic_reinforcement_learning/tutorial10/gps/gui/textbox.py:69: MatplotlibDeprecationWarning: The set_axis_bgcolor function was deprecated in version 2.0. Use set_facecolor instead.\n",
      "  self._ax.set_axis_bgcolor(mpl.rcParams['figure.facecolor'])\n",
      "/Users/Victor/basic_reinforcement_learning/tutorial10/gps/gui/textbox.py:71: MatplotlibDeprecationWarning: The set_axis_bgcolor function was deprecated in version 2.0. Use set_facecolor instead.\n",
      "  self._ax.set_axis_bgcolor(ColorConverter().to_rgba(color, alpha))\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "3\n",
      "1\n",
      "3\n",
      "1\n",
      "3\n",
      "1\n",
      "3\n",
      "1\n",
      "3\n",
      "1\n",
      "3\n",
      "1\n",
      "3\n",
      "1\n",
      "3\n",
      "1\n",
      "3\n",
      "1\n",
      "3\n",
      "1\n"
     ]
    }
   ],
   "source": [
    "exp_name = \"box2d_arm_example\"\n",
    "\n",
    "import random\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
     "import threading\n",
    "\n",
    "seed = config_gps.get('random_seed', 0)\n",
    "random.seed(seed)\n",
    "np.random.seed(seed)\n",
    "\n",
    "gps = GPSMain(config_gps)\n",
    "if config_gps['gui_on']:\n",
    "    run_gps = threading.Thread(\n",
    "        #target=lambda: gps.run(itr_load=resume_training_itr) # no iteration load used at this stage\n",
    "        target=lambda: gps.run()\n",
    "    )\n",
    "    run_gps.daemon = True\n",
    "    run_gps.start()\n",
    "\n",
    "    plt.ioff()\n",
    "    plt.show()\n",
    "else:\n",
    "    gps.run()"
   ]
  },
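   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "When `gui_on` is set, training runs in a daemon thread while the main thread blocks in `plt.show()`, keeping the GUI responsive. A minimal sketch of the same pattern (generic, not tied to `GPSMain`):\n",
     "\n",
     "```python\n",
     "import threading\n",
     "\n",
     "results = []\n",
     "def background_job():\n",
     "    # Stand-in for gps.run(): heavy work off the main thread.\n",
     "    results.append(sum(range(10)))\n",
     "\n",
     "worker = threading.Thread(target=background_job)\n",
     "worker.daemon = True  # like run_gps: don't keep the process alive on exit\n",
     "worker.start()\n",
     "worker.join()  # here we join; the notebook instead blocks in plt.show()\n",
     "print(results[0])  # -> 45\n",
     "```\n"
    ]
   },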
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.1"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
