{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Implementing IPP in a Simulated GP Environment\n",
    "This notebook is intended to serve as a GP playground for different informative path planning algorithms.\n",
    "\n",
    "Please note that you may need to install the following packages in order to use the [dubins library](https://github.com/AndrewWalker/Dubins-Curves) and the [Gaussian Process model](https://github.com/SheffieldML/GPy), respectively:\n",
    "\n",
    "```\n",
    "pip install dubins\n",
    "pip install GPy\n",
    "```\n",
    "\n",
    "For some environments, it may instead be necessary to install GPy as:\n",
    "\n",
    "```\n",
    "conda install -c omnia gpy\n",
    "```\n",
    "\n",
    "Finally, please note that this code is largely DEPRECATED; the code in the main repository is built directly from this notebook and should be used instead.\n",
    "\n",
    "This library allows for the generation of a Gaussian environment with known kernel parameters and provides a point-robot interface for planning informative paths through this initially unknown environment, while collecting noisy samples of the phenomenon of interest. The robot's trajectories are represented as concatenations of Dubins curves, assuming that the turning radius is known. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# Necessary imports\n",
    "%matplotlib inline\n",
    "\n",
    "from matplotlib import pyplot as plt\n",
    "import matplotlib\n",
    "from matplotlib.colors import LogNorm\n",
    "from matplotlib import cm\n",
    "from sklearn import mixture\n",
    "from IPython.display import display\n",
    "from scipy.stats import multivariate_normal\n",
    "import numpy as np\n",
    "import scipy as sp\n",
    "import math\n",
    "import os\n",
    "import GPy as GPy\n",
    "import dubins\n",
    "import time\n",
    "from itertools import chain"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The GPModel Class\n",
    "This custom GPModel class is the model representation used to generate a simulated Gaussian environment, and is also used to represent the robot's current model of the environment. The class is a wrapper on top of the GPy library that provides several additional features, including:\n",
    "* Saving and loading trained kernel parameters\n",
    "* Initializing a GP model without any prior data and adding new sample points\n",
    "* Incrementally adding sample points to a previous dataset, reusing previous computation [TODO]\n",
    "* Sparse, streaming GP posterior updates [TODO]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The Squared Exponential Kernel\n",
    "\n",
    "The GPModel currently assumes a squared exponential kernel. Here is a guide to what the different kernel hyperparameters mean, from [this post](http://evelinag.com/Ariadne/covarianceFunctions.html). \n",
    "\n",
    "\\begin{equation}\n",
    "\\kappa(x_i, x_j) = \\sigma^2 \\exp\\Big(- \\frac{(x_i - x_j)^2}{2l^2}\\Big) + \\sigma_{\\text{noise}}^2 \\delta_{ij}\n",
    "\\end{equation} \n",
    "\n",
    "where $\\sigma^2 > 0$ is the signal variance, $l > 0$ is the lengthscale, and $\\sigma_{\\text{noise}}^2 \\geq 0$ is the noise variance. The Kronecker delta $\\delta_{ij}$ applies the noise variance only when $i = j$.\n",
    "\n",
    "Squared exponential is appropriate for modelling very smooth functions. The parameters have the following interpretation:\n",
    "\n",
    "* **Lengthscale $l$** describes how smooth a function is. A small lengthscale means that function values can change quickly; large values characterize functions that change only slowly. The lengthscale also determines how far we can reliably extrapolate from the training data.\n",
    "\n",
    "* **Signal variance $\\sigma^2$** is a scaling factor. It determines the variation of function values from their mean. Small values of $\\sigma^2$ characterize functions that stay close to their mean value; larger values allow more variation. If the signal variance is too large, the modelled function will be free to chase outliers.\n",
    "\n",
    "* **Noise variance** $\\sigma_{\\text{noise}}^2$ is formally not a part of the covariance function itself. It is used by the Gaussian process model to allow for noise present in training data. This parameter specifies how much noise is expected to be present in the data."
   ]
  },
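  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check, the kernel above can be evaluated directly in numpy. This is a minimal sketch, independent of GPy; the function name `sq_exp_kernel` and the parameter values are illustrative only.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def sq_exp_kernel(xi, xj, variance = 1.0, lengthscale = 1.0, noise = 0.0001):\n",
    "    # kappa(x_i, x_j) = sigma^2 * exp(-|x_i - x_j|^2 / (2 l^2)),\n",
    "    # with the noise variance added only when x_i == x_j\n",
    "    xi, xj = np.asarray(xi, dtype = float), np.asarray(xj, dtype = float)\n",
    "    k = variance * np.exp(-np.sum((xi - xj) ** 2) / (2.0 * lengthscale ** 2))\n",
    "    if np.array_equal(xi, xj):\n",
    "        k += noise\n",
    "    return k\n",
    "\n",
    "# At zero distance the kernel equals variance + noise; it decays with distance\n",
    "print(sq_exp_kernel([0.0, 0.0], [0.0, 0.0]))\n",
    "print(sq_exp_kernel([0.0, 0.0], [3.0, 0.0]))\n",
    "```"
   ]
  },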
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class GPModel:\n",
    "    '''The GPModel class, which is a wrapper on top of GPy.\n",
    "    \n",
    "    Inputs:\n",
    "    * ranges (tuple of floats): a tuple representing the max/min of the 2D rectangular domain, i.e. (-10, 10, -50, 50)\n",
    "    * lengthscale (float): the lengthscale parameter of the squared exponential kernel\n",
    "    * variance (float): the variance parameter of the squared exponential kernel\n",
    "    * noise (float): the sensor noise parameter of the squared exponential kernel\n",
    "    * dimension (int): the dimension of the environment (currently, only 2D environments are supported)\n",
    "    * kernel (string): the type of kernel (currently, only 'rbf' kernels are supported) '''     \n",
    "    \n",
    "    def __init__(self, ranges, lengthscale, variance, noise = 0.0001, dimension = 2, kernel = 'rbf'):\n",
    "        '''Initialize a GP regression model with given kernel parameters. '''\n",
    "        \n",
    "        # The noise parameter of the sensor\n",
    "        self.noise = noise\n",
    "        self.lengthscale = lengthscale\n",
    "        self.variance = variance\n",
    "        \n",
    "        self.ranges = ranges\n",
    "        \n",
    "        # The Gaussian dataset\n",
    "        self.xvals = None\n",
    "        self.zvals = None\n",
    "        \n",
    "        # The dimension of the environment\n",
    "        if dimension == 2:\n",
    "            self.dim = dimension\n",
    "        else:\n",
    "            raise ValueError('Environment must have dimension 2')\n",
    "\n",
    "        if kernel == 'rbf':\n",
    "            self.kern = GPy.kern.RBF(input_dim = self.dim, lengthscale = lengthscale, variance = variance) \n",
    "        else:\n",
    "            raise ValueError('Kernel type must be \\'rbf\\'')\n",
    "            \n",
    "        # Initially, before any data has been collected, the model is empty\n",
    "        self.model = None\n",
    "         \n",
    "    def predict_value(self, xvals):\n",
    "        ''' Public method returns the mean and variance predictions at a set of input locations.\n",
    "        Inputs:\n",
    "        * xvals (float array): an nparray of floats representing observation locations, with dimension NUM_PTS x 2\n",
    "        \n",
    "        Returns: \n",
    "        * mean (float array): an nparray of floats representing predictive mean, with dimension NUM_PTS x 1         \n",
    "        * var (float array): an nparray of floats representing predictive variance, with dimension NUM_PTS x 1 '''        \n",
    "\n",
    "        assert(xvals.shape[0] >= 1)            \n",
    "        assert(xvals.shape[1] == self.dim)    \n",
    "        \n",
    "        n_points, input_dim = xvals.shape\n",
    "        \n",
    "        # With no observations, predict the prior: zero mean and prior variance everywhere\n",
    "        if self.model is None:\n",
    "            return np.zeros((n_points, 1)), np.ones((n_points, 1)) * self.variance\n",
    "        \n",
    "        # Otherwise, return the GP posterior mean and variance at the query points\n",
    "        mean, var = self.model.predict(xvals, full_cov = False, include_likelihood = True)\n",
    "        return mean, var        \n",
    "    \n",
    "\n",
    "    def set_data(self, xvals, zvals):\n",
    "        ''' Public method that updates the data in the GP model.\n",
    "        Inputs:\n",
    "        * xvals (float array): an nparray of floats representing observation locations, with dimension NUM_PTS x 2\n",
    "        * zvals (float array): an nparray of floats representing sensor observations, with dimension NUM_PTS x 1 ''' \n",
    "        \n",
    "        # Save the data internally\n",
    "        self.xvals = xvals\n",
    "        self.zvals = zvals\n",
    "        \n",
    "        # If the model hasn't been created yet (can't be created until we have data), create a GPy model\n",
    "        if self.model is None:\n",
    "            self.model = GPy.models.GPRegression(np.array(xvals), np.array(zvals), self.kern)\n",
    "        # Otherwise, update the existing model\n",
    "        else:\n",
    "            self.model.set_XY(X = np.array(xvals), Y = np.array(zvals))\n",
    "    \n",
    "    def add_data(self, xvals, zvals):\n",
    "        ''' Public method that adds new data to the GP model.\n",
    "        Inputs:\n",
    "        * xvals (float array): an nparray of floats representing observation locations, with dimension NUM_PTS x 2\n",
    "        * zvals (float array): an nparray of floats representing sensor observations, with dimension NUM_PTS x 1 ''' \n",
    "        \n",
    "        if self.xvals is None:\n",
    "            self.xvals = xvals\n",
    "        else:\n",
    "            self.xvals = np.vstack([self.xvals, xvals])\n",
    "            \n",
    "        if self.zvals is None:\n",
    "            self.zvals = zvals\n",
    "        else:\n",
    "            self.zvals = np.vstack([self.zvals, zvals])\n",
    "\n",
    "        # If the model hasn't been created yet (can't be created until we have data), create a GPy model\n",
    "        if self.model is None:\n",
    "            self.model = GPy.models.GPRegression(np.array(self.xvals), np.array(self.zvals), self.kern)\n",
    "        # Otherwise, update the existing model\n",
    "        else:\n",
    "            self.model.set_XY(X = np.array(self.xvals), Y = np.array(self.zvals))\n",
    "            \n",
    "    def simulate_prediction(self, xvals_add, zvals_add, xvals_test):\n",
    "        ''' Public method that returns the mean and variance at test locations, as if the additional\n",
    "        data had been incorporated into the model, without permanently modifying the model.\n",
    "        Inputs:\n",
    "        * xvals_add (float array): an nparray of floats representing hypothetical observation locations, with dimension NUM_PTS x 2\n",
    "        * zvals_add (float array): an nparray of floats representing hypothetical sensor observations, with dimension NUM_PTS x 1\n",
    "        * xvals_test (float array): an nparray of floats representing test locations, with dimension NUM_PTS x 2 ''' \n",
    "        \n",
    "        if self.xvals is None:\n",
    "            xvals = xvals_add\n",
    "        else:\n",
    "            xvals = np.vstack([self.xvals, xvals_add])\n",
    "            \n",
    "        if self.zvals is None:\n",
    "            zvals = zvals_add\n",
    "        else:\n",
    "            zvals = np.vstack([self.zvals, zvals_add])\n",
    "\n",
    "\n",
    "        temp_model = GPy.models.GPRegression(np.array(xvals), np.array(zvals), self.kern)\n",
    "        mean, var = temp_model.predict(xvals_test, full_cov = False, include_likelihood = True)\n",
    "\n",
    "        return mean, var                   \n",
    "\n",
    "    def load_kernel(self, kernel_file = 'kernel_model.npy'):\n",
    "        ''' Public method that loads kernel parameters from file.\n",
    "        Inputs:\n",
    "        * kernel_file (string): a filename string with the location of the kernel parameters '''    \n",
    "        \n",
    "        # Read pre-trained kernel parameters from file, if available\n",
    "        if os.path.isfile(kernel_file):\n",
    "            print \"Loading kernel parameters from file\"\n",
    "            self.kern[:] = np.load(kernel_file)\n",
    "        else:\n",
    "            raise ValueError(\"Failed to load kernel. Kernel parameter file not found.\")\n",
    "            \n",
    "        return\n",
    "\n",
    "    def train_kernel(self, xvals = None, zvals = None, kernel_file = 'kernel_model.npy'):\n",
    "        ''' Public method that optimizes kernel parameters based on input data and saves them to file.\n",
    "        Inputs:\n",
    "        * xvals (float array): an nparray of floats representing observation locations, with dimension NUM_PTS x 2\n",
    "        * zvals (float array): an nparray of floats representing sensor observations, with dimension NUM_PTS x 1        \n",
    "        * kernel_file (string): a filename string with the location to save the kernel parameters '''      \n",
    "        \n",
    "        # Optimize the kernel parameters, if training data is provided\n",
    "        if xvals is not None and zvals is not None:\n",
    "            print \"Optimizing kernel parameters given data\"\n",
    "            # Initialize a GP model (used only for optimizing kernel hyperparameters)\n",
    "            self.m = GPy.models.GPRegression(np.array(xvals), np.array(zvals), self.kern)\n",
    "            self.m.initialize_parameter()\n",
    "\n",
    "            # Constrain the hyperparameters during optimization\n",
    "            self.m.constrain_positive('')\n",
    "            #self.m['rbf.variance'].constrain_bounded(0.01, 10)\n",
    "            #self.m['rbf.lengthscale'].constrain_bounded(0.01, 10)\n",
    "            self.m['Gaussian_noise.variance'].constrain_fixed(self.noise)\n",
    "\n",
    "            # Train the kernel hyperparameters\n",
    "            self.m.optimize_restarts(num_restarts = 2, messages = True)\n",
    "\n",
    "            # Save the hyperparameters to file\n",
    "            np.save(kernel_file, self.kern[:])\n",
    "        else:\n",
    "            raise ValueError(\"Failed to train kernel. No training data provided.\")\n",
    "            \n",
    "    def visualize_model(self, x1lim, x2lim, title = ''):\n",
    "        ''' Public method that plots the current GP model within the specified bounds. '''\n",
    "        if self.model is None:\n",
    "            print 'No samples have been collected. World model is equivalent to prior.'\n",
    "            return None\n",
    "        else:\n",
    "            print \"Sample set size:\", self.xvals.shape\n",
    "            fig = self.model.plot(figsize=(8, 6), title = title, xlim = x1lim, ylim = x2lim)\n",
    "            \n",
    "    def kernel_plot(self):\n",
    "        ''' Visualize the learned GP kernel '''        \n",
    "        _ = self.kern.plot()\n",
    "        plt.ylim([-10, 10])\n",
    "        plt.xlim([-10, 10])\n",
    "        plt.show()"
   ]
  },
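  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For intuition, the posterior that `predict_value` delegates to GPy can be written out in a few lines of numpy. This is an illustrative sketch of the standard GP regression equations with a zero prior mean; the helper names `rbf` and `gp_posterior` are ours, not part of the class.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def rbf(A, B, variance, lengthscale):\n",
    "    # Pairwise squared exponential covariance between the rows of A and B\n",
    "    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A.dot(B.T)\n",
    "    return variance * np.exp(-d2 / (2.0 * lengthscale**2))\n",
    "\n",
    "def gp_posterior(X, z, Xstar, variance = 100.0, lengthscale = 1.0, noise = 0.0001):\n",
    "    # Posterior mean and variance at the test points Xstar, given data (X, z)\n",
    "    K = rbf(X, X, variance, lengthscale) + noise * np.eye(X.shape[0])\n",
    "    Ks = rbf(X, Xstar, variance, lengthscale)\n",
    "    alpha = np.linalg.solve(K, z)\n",
    "    mean = Ks.T.dot(alpha)\n",
    "    var = variance + noise - np.sum(Ks * np.linalg.solve(K, Ks), axis = 0)[:, None]\n",
    "    return mean, var\n",
    "```\n",
    "\n",
    "Far from all observations, `gp_posterior` reverts to the prior (zero mean, variance $\\sigma^2 + \\sigma_{\\text{noise}}^2$), matching the behavior of `predict_value` before any data has been collected."
   ]
  },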
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The Environment Class\n",
    "This custom Environment class is used to represent environments of interest. Currently, the environment is constrained to be a rectangular region in $\\mathbb{R}^2$. Upon initialization, the Environment class draws a random function from a Gaussian Process with the input covariance function. The Environment has one public method, ``sample_value``, which returns a noisy sample of the function value at a specified location. \n"
   ]
  },
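  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Sampling the grid sequentially, conditioning each draw on the previous samples, is equivalent to a single joint draw from a multivariate normal whose covariance is the kernel matrix over the grid. Here is a compact numpy sketch of that equivalent joint draw (the grid size and hyperparameter values are illustrative):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "np.random.seed(0)\n",
    "variance, lengthscale = 100.0, 1.0\n",
    "\n",
    "# Discrete grid over a rectangular domain\n",
    "x1, x2 = np.meshgrid(np.linspace(-10, 10, 10), np.linspace(-10, 10, 10))\n",
    "X = np.vstack([x1.ravel(), x2.ravel()]).T\n",
    "\n",
    "# Squared exponential covariance over the grid, with jitter for numerical stability\n",
    "d2 = np.sum(X**2, 1)[:, None] + np.sum(X**2, 1)[None, :] - 2.0 * X.dot(X.T)\n",
    "K = variance * np.exp(-d2 / (2.0 * lengthscale**2)) + 1e-8 * np.eye(X.shape[0])\n",
    "\n",
    "# One joint sample of the zero-mean field at all grid points\n",
    "zsamples = np.random.multivariate_normal(np.zeros(X.shape[0]), K)\n",
    "```"
   ]
  },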
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class Environment:\n",
    "    '''The Environment class, which represents a rectangular Gaussian world.\n",
    "    \n",
    "    Input:\n",
    "    * ranges (tuple of floats): a tuple representing the max/min of 2D rectangular domain i.e. (-10, 10, -50, 50)\n",
    "    * NUM_PTS (int): the number of points in each dimension to sample for initialization, \n",
    "                    resulting in a sample grid of size NUM_PTS x NUM_PTS\n",
    "    * variance (float): the variance parameter of the squared exponential kernel\n",
    "    * lengthscale (float): the lengthscale parameter of the squared exponential kernel\n",
    "    * noise (float): the sensor noise parameter of the squared exponential kernel\n",
    "    * visualize (boolean): a boolean flag to plot the surface of the resulting environment \n",
    "    * seed (int): an integer seed for the random draws. If set to \\'None\\', no seed is used ''' \n",
    "    def __init__(self, ranges, NUM_PTS, variance, lengthscale, noise = 0.0001, visualize = True, seed = None, dim = 2):\n",
    "        ''' Initialize a random Gaussian environment using the input kernel, assuming zero mean'''\n",
    "        # Save the parameters of the GP model\n",
    "        self.variance = variance\n",
    "        self.lengthscale = lengthscale\n",
    "        self.dim = dim\n",
    "        \n",
    "        # Expect ranges to be a 4-tuple consisting of x1min, x1max, x2min, and x2max\n",
    "        self.x1min = float(ranges[0])\n",
    "        self.x1max = float(ranges[1])\n",
    "        self.x2min = float(ranges[2])\n",
    "        self.x2max = float(ranges[3]) \n",
    "        \n",
    "        # Initialize a GP model of the environment\n",
    "        self.GP = GPModel(ranges = ranges, lengthscale = lengthscale, variance = variance)         \n",
    "                            \n",
    "        # Generate a set of discrete grid points over the environment\n",
    "        x1 = np.linspace(self.x1min, self.x1max, NUM_PTS)\n",
    "        x2 = np.linspace(self.x2min, self.x2max, NUM_PTS)\n",
    "        x1vals, x2vals = np.meshgrid(x1, x2, sparse = False, indexing = 'xy') # dimension: NUM_PTS x NUM_PTS\n",
    "        data = np.vstack([x1vals.ravel(), x2vals.ravel()]).T # dimension: NUM_PTS*NUM_PTS x 2\n",
    "\n",
    "        # Take an initial sample in the GP prior, conditioned on no other data\n",
    "        xsamples = np.reshape(np.array(data[0, :]), (1, dim)) # dimension: 1 x 2        \n",
    "        mean, var = self.GP.predict_value(xsamples)   \n",
    "        \n",
    "        if seed is not None:\n",
    "            np.random.seed(seed)\n",
    "            seed += 1\n",
    "        zsamples = np.random.normal(loc = 0, scale = np.sqrt(var))\n",
    "        zsamples = np.reshape(zsamples, (1,1)) # dimension: 1 x 1 \n",
    "                            \n",
    "        # Add new data point to the GP model\n",
    "        self.GP.set_data(xsamples, zsamples)                            \n",
    "                                 \n",
    "        # Iterate through the rest of the grid sequentially and sample z values, conditioned on previous samples\n",
    "        for index, point in enumerate(data[1:, :]):\n",
    "            # Get a new sample point\n",
    "            xs = np.reshape(np.array(point), (1, dim))\n",
    "    \n",
    "            # Compute the predicted mean and variance\n",
    "            mean, var = self.GP.predict_value(xs)\n",
    "            \n",
    "            # Sample a new observation, given the mean and variance\n",
    "            if seed is not None:\n",
    "                np.random.seed(seed)\n",
    "                seed += 1            \n",
    "            zs = np.random.normal(loc = mean, scale = np.sqrt(var))\n",
    "            \n",
    "            # Add new sample point to the GP model\n",
    "            zsamples = np.vstack([zsamples, np.reshape(zs, (1, 1))])\n",
    "            xsamples = np.vstack([xsamples, np.reshape(xs, (1, dim))])\n",
    "            self.GP.set_data(xsamples, zsamples)\n",
    "      \n",
    "        # Plot the surface mesh and scatter plot representation of the samples points\n",
    "        if visualize:\n",
    "            fig = plt.figure(figsize=(8, 6))\n",
    "            ax = fig.add_subplot(111, projection = '3d')\n",
    "            ax.set_title('Surface of the Simulated Environment')\n",
    "            surf = ax.plot_surface(x1vals, x2vals, zsamples.reshape(x1vals.shape), cmap = cm.coolwarm, linewidth = 1)\n",
    "            \n",
    "            #ax2 = fig.add_subplot(212, projection = '3d')\n",
    "            \n",
    "            fig2 = plt.figure(figsize=(8, 6))\n",
    "            ax2 = fig2.add_subplot(111)\n",
    "            ax2.set_title('Contour Plot of the Simulated Environment')     \n",
    "            plot = ax2.contourf(x1vals, x2vals, zsamples.reshape(x1vals.shape), cmap = 'viridis', vmin = -25, vmax = 25)\n",
    "            scatter = ax2.scatter(data[:, 0], data[:, 1], c = zsamples.ravel(), s = 4.0, cmap = 'viridis')\n",
    "            maxind = np.argmax(zsamples)\n",
    "            ax2.scatter(xsamples[maxind, 0], xsamples[maxind,1], color = 'k', marker = '*', s = 500)\n",
    "            plt.show()           \n",
    "        \n",
    "        print \"Environment initialized with bounds X1: (\", self.x1min, \",\", self.x1max, \")  X2:(\", self.x2min, \",\", self.x2max, \")\" \n",
    "      \n",
    "    def sample_value(self, xvals):\n",
    "        ''' The public interface to the Environment class. Returns a noisy sample of the true value of the environment \n",
    "        at a set of points. \n",
    "        Input:\n",
    "        * xvals (float array): an nparray of floats representing observation locations, with dimension NUM_PTS x 2 \n",
    "        \n",
    "        Returns:\n",
    "        * mean (float array): an nparray of floats representing predictive mean, with dimension NUM_PTS x 1 '''\n",
    "        assert(xvals.shape[0] >= 1)            \n",
    "        assert(xvals.shape[1] == self.dim)        \n",
    "        mean, var = self.GP.predict_value(xvals)\n",
    "        return mean\n",
    "    "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Representing Path Sets: Path Generator Classes\n",
    "This custom class leverages some of the nice features of the ```dubins``` library to generate a number of options and sample sets for the vehicle to choose from."
   ]
  },
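  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The straight-line generator below fans goal poses across roughly 75% of a circle ahead of the vehicle and samples each ray at fixed steps. The underlying geometry can be sketched on its own (the horizon, step, and frontier values here are illustrative):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "horizon, step, n_goals = 5.0, 0.5, 15\n",
    "cp = (0.0, 0.0, 0.0)  # current (x, y, heading) of the vehicle\n",
    "\n",
    "# Goal poses on an arc spanning +/- 2.35 rad around the current heading\n",
    "angles = np.linspace(-2.35, 2.35, n_goals)\n",
    "goals = [(horizon * np.cos(cp[2] + a) + cp[0],\n",
    "          horizon * np.sin(cp[2] + a) + cp[1],\n",
    "          cp[2] + a) for a in angles]\n",
    "\n",
    "# Sample points along the ray to one goal, excluding the start pose\n",
    "gx, gy, ga = goals[n_goals // 2]  # the goal straight ahead\n",
    "n_samples = int(round(np.hypot(gx - cp[0], gy - cp[1]) / step))\n",
    "samples = [(cp[0] + (j + 1) * step * np.cos(ga),\n",
    "            cp[1] + (j + 1) * step * np.sin(ga)) for j in range(n_samples)]\n",
    "```"
   ]
  },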
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class Path_Generator:\n",
    "    '''The Path_Generator class, which creates naive point-to-point straight-line paths'''\n",
    "    \n",
    "    def __init__(self, frontier_size, horizon_length, turning_radius, sample_step, extent):\n",
    "        '''\n",
    "        frontier_size (int) the number of points on the frontier to consider for navigation\n",
    "        horizon_length (float) distance between the vehicle and the horizon to consider\n",
    "        turning_radius (float) the feasible turning radius for the vehicle\n",
    "        sample_step (float) the unit length along the path from which to draw a sample\n",
    "        extent (tuple of floats) the bounds of the environment, i.e. (x1min, x1max, x2min, x2max)\n",
    "        '''\n",
    "\n",
    "        self.fs = frontier_size\n",
    "        self.hl = horizon_length\n",
    "        self.tr = turning_radius\n",
    "        self.ss = sample_step\n",
    "        self.extent = extent\n",
    "\n",
    "        # Global variables\n",
    "        self.goals = [] #The frontier coordinates\n",
    "        self.samples = {} #The sample points which form the paths\n",
    "        self.cp = (0,0,0) #The current pose of the vehicle\n",
    "        \n",
    "    def generate_frontier_points(self):\n",
    "        '''From the frontier_size and horizon_length, generate the frontier points to goal'''\n",
    "        angle = np.linspace(-2.35,2.35,self.fs) #fix the possibilities to 75% of the unit circle, ignoring points directly behind the vehicle\n",
    "        goals = [(self.hl*np.cos(self.cp[2]+a)+self.cp[0], self.hl*np.sin(self.cp[2]+a)+self.cp[1], self.cp[2]+a) for a in angle]\n",
    "        self.goals = goals#[coordinate for coordinate in goals if coordinate[0] < self.extent[1] and coordinate[0] > self.extent[0] and coordinate[1] < self.extent[3] and coordinate[1] > self.extent[2]]\n",
    "        return self.goals\n",
    "        \n",
    "    def make_sample_paths(self):\n",
    "        '''Connect the current_pose to the goal places'''\n",
    "        cp = np.array(self.cp)\n",
    "        coords = {}\n",
    "        for i,goal in enumerate(self.goals):\n",
    "            g = np.array(goal)\n",
    "            distance = np.sqrt((cp[0]-g[0])**2 + (cp[1]-g[1])**2)\n",
    "            samples = int(round(distance/self.ss))\n",
    "            \n",
    "            # Don't include the start location but do include the end point\n",
    "            for j in range(0,samples):\n",
    "                x = cp[0]+((j+1)*self.ss)*np.cos(g[2])\n",
    "                y = cp[1]+((j+1)*self.ss)*np.sin(g[2])\n",
    "                a = g[2]\n",
    "                try: \n",
    "                    coords[i].append((x,y,a))\n",
    "                except KeyError:\n",
    "                    coords[i] = []\n",
    "                    coords[i].append((x,y,a))\n",
    "        self.samples = coords\n",
    "        return coords\n",
    "    \n",
    "    def get_path_set(self, current_pose):\n",
    "        '''Primary interface for getting list of path sample points for evaluation'''\n",
    "        self.cp = current_pose\n",
    "        self.generate_frontier_points()\n",
    "        paths = self.make_sample_paths()\n",
    "        return paths\n",
    "    \n",
    "    def get_frontier_points(self):\n",
    "        return self.goals\n",
    "    \n",
    "    def get_sample_points(self):\n",
    "        return self.samples"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class Dubins_Path_Generator(Path_Generator):\n",
    "    '''\n",
    "    The Dubins_Path_Generator class, which inherits from the Path_Generator class. Replaces the make_sample_paths\n",
    "    method with paths generated using the dubins library\n",
    "    '''\n",
    "        \n",
    "    def make_sample_paths(self):\n",
    "        '''Connect the current_pose to the goal places'''\n",
    "        coords = {}\n",
    "        for i,goal in enumerate(self.goals):\n",
    "            g = (goal[0],goal[1],self.cp[2])\n",
    "            path = dubins.shortest_path(self.cp, goal, self.tr)\n",
    "            configurations, _ = path.sample_many(self.ss)\n",
    "            coords[i] = [config for config in configurations if config[0] > self.extent[0] and config[0] < self.extent[1] and config[1] > self.extent[2] and config[1] < self.extent[3]]\n",
    "        \n",
    "        self.samples = coords\n",
    "        return coords"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class Dubins_EqualPath_Generator(Path_Generator):\n",
    "    '''\n",
    "    The Dubins_EqualPath_Generator class which inherits from Path_Generator. Modifies Dubin Curve paths so that all\n",
    "    options have an equal number of sampling points\n",
    "    '''\n",
    "        \n",
    "    def make_sample_paths(self):\n",
    "        '''Connect the current_pose to the goal places'''\n",
    "        coords = {}\n",
    "        for i,goal in enumerate(self.goals):\n",
    "            g = (goal[0],goal[1],self.cp[2])\n",
    "            path = dubins.shortest_path(self.cp, goal, self.tr)\n",
    "            configurations, _ = path.sample_many(self.ss)\n",
    "            coords[i] = [config for config in configurations if config[0] > self.extent[0] and config[0] < self.extent[1] and config[1] > self.extent[2] and config[1] < self.extent[3]]\n",
    "        \n",
    "        # find the \"shortest\" path in sample space\n",
    "        current_min = 1000\n",
    "        for key,path in coords.items():\n",
    "            if len(path) < current_min and len(path) > 1:\n",
    "                current_min = len(path)\n",
    "        \n",
    "        # limit all paths to the shortest path in sample space\n",
    "        for key,path in coords.items():\n",
    "            if len(path) > current_min:\n",
    "                path = path[0:current_min]\n",
    "                coords[key]=path\n",
    "        \n",
    "        self.samples = coords\n",
    "        return coords"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The Monte Carlo Tree Search Class\n",
    "For non-myopic planning horizons, we use Monte Carlo tree search (MCTS) to select an action plan in a global setting. We create an MCTS class for convenience."
   ]
  },
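  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The tree policy in the class below scores each child with the UCB rule `avg_r + c_p*sqrt(2*log(N)/n)`, trading off exploitation (average rollout reward) against exploration (a bonus for rarely visited children). Here is a standalone sketch with made-up visit statistics:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def ucb(avg_reward, n_parent, n_child, c_p = 0.1):\n",
    "    # Upper confidence bound: exploitation term + exploration bonus\n",
    "    return avg_reward + c_p * np.sqrt(2.0 * np.log(n_parent) / n_child)\n",
    "\n",
    "# Three children with (average reward, visit count); parent visited 10 times\n",
    "children = {'child 0': (0.5, 6), 'child 1': (0.4, 1), 'child 2': (0.55, 3)}\n",
    "scores = dict((k, ucb(r, 10, n)) for k, (r, n) in children.items())\n",
    "\n",
    "# The child maximizing the UCB score is expanded next\n",
    "best = max(scores, key = scores.get)\n",
    "```"
   ]
  },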
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class MCTS:\n",
    "    '''Class that establishes a MCTS for nonmyopic planning'''\n",
    "    def __init__(self, computation_budget, belief, initial_pose, planning_limit, frontier_size, path_generator, aquisition_function, f_rew, time):\n",
    "        '''Initialize with constraints for the planning, including whether there is \n",
    "           a budget or planning horizon\n",
    "           budget - length, time, etc to consider\n",
    "           belief - GP model of the robot current belief state\n",
    "           initial_pose - (x,y,rho) for vehicle'''\n",
    "        self.budget = computation_budget\n",
    "        self.GP = belief\n",
    "        self.cp = initial_pose\n",
    "        self.limit = planning_limit\n",
    "        self.frontier_size = frontier_size\n",
    "        self.path_generator = path_generator\n",
    "        self.spent = 0\n",
    "        self.tree = None\n",
    "        self.aquisition_function = aquisition_function\n",
    "        self.f_rew = f_rew\n",
    "        self.t = time\n",
    "\n",
    "    def choose_trajectory(self):\n",
    "        self.tree = self.initialize_tree()        \n",
    "        max_val = max_locs = None\n",
    "        nK = 2\n",
    "        nFeatures = 300\n",
    "        if self.f_rew == 'mes':\n",
    "            max_val, max_locs = sample_max_vals(self.GP, nK = nK, nFeatures = nFeatures)\n",
    "            \n",
    "        time_start = time.clock()            \n",
    "            \n",
    "        i = 0\n",
    "        while time.clock() - time_start < self.budget:\n",
    "            i += 1\n",
    "            current_node = self.tree_policy()\n",
    "            sequence = self.rollout_policy(current_node, self.budget)\n",
    "            reward = self.get_reward(sequence, max_val)\n",
    "            self.update_tree(reward, sequence)\n",
    "       \n",
    "        paths = self.path_generator.get_path_set(self.cp)                \n",
    "        best_sequence, best_val, all_vals = self.get_best_child()\n",
    "        print \"\\t Number of rollouts:\", i, \"\\t Size of tree:\", len(self.tree)\n",
    "        return self.tree[best_sequence][0], best_val, paths, all_vals, max_locs\n",
    "\n",
    "    def initialize_tree(self):\n",
    "        '''Creates a tree instance, which is a dictionary, that keeps track of the nodes in the world'''\n",
    "        tree = {}\n",
    "        #(pose, number of queries)\n",
    "        tree['root'] = (self.cp, 0)\n",
    "        actions = self.path_generator.get_path_set(self.cp)\n",
    "        for action, samples in actions.items():\n",
    "            #(samples, cost, reward, number of times queried)\n",
    "            cost = np.sqrt((self.cp[0]-samples[-1][0])**2 + (self.cp[1]-samples[-1][1])**2)\n",
    "            tree['child '+ str(action)] = (samples, cost, 0, 0)\n",
    "        return tree\n",
    "\n",
    "    def tree_policy(self):\n",
    "        '''Implements the UCB policy to select the child to expand and forward simulate'''\n",
    "        # According to Arora:\n",
    "        #avg_r average reward of all rollouts that have passed through node n\n",
    "        #c_p some constant , 0.1 in literature\n",
    "        #N number of times parent has been evaluated\n",
    "        #n number of time node n has been evaluated\n",
    "        #ucb = avg_r + c_p*np.sqrt(2*np.log(N)/n)\n",
    "        leaf_eval = {}\n",
    "        for i in xrange(self.frontier_size):\n",
    "            node = 'child '+ str(i)\n",
    "            leaf_eval[node] = self.tree[node][2] + 0.1*np.sqrt(2*(np.log(self.tree['root'][1]))/self.tree[node][3])\n",
    "#         print max(leaf_eval, key=leaf_eval.get)\n",
    "        return max(leaf_eval, key=leaf_eval.get)\n",
    "\n",
    "    def rollout_policy(self, node, budget):\n",
    "        '''Select random actions to expand the child node'''\n",
    "        sequence = [node] #include the child node\n",
    "        #TODO use the cost metric to signal action termination, for now using horizon\n",
    "        for i in xrange(self.limit):\n",
    "            actions = self.path_generator.get_path_set(self.tree[node][0][-1]) #plan from the last point in the sample\n",
    "            a = np.random.randint(0,len(actions)) #choose a random path\n",
    "            #TODO add cost metrics\n",
    "#             best_path = actions[a]\n",
    "#             if len(best_path) == 1:\n",
    "#                 best_path = [(best_path[-1][0],best_path[-1][1],best_path[-1][2]-1.14)]\n",
    "#             elif best_path[-1][0] < -9.5 or best_path[-1][0] > 9.5:\n",
    "#                 best_path = (best_path[-1][0],best_path[-1][1],best_path[-1][2]-1.14)\n",
    "#             elif best_path[-1][1] < -9.5 or best_path[-1][1] > 9.5:\n",
    "#                 best_path = (best_path[-1][0],best_path[-1][1],best_path[-1][2]-1.14)\n",
    "#             else:\n",
    "#                 best_path = best_path[-1]\n",
    "            self.tree[node + ' child ' + str(a)] = (actions[a], 0, 0, 0) #add random path to the tree\n",
    "            node = node + ' child ' + str(a)\n",
    "            sequence.append(node)\n",
    "        return sequence #return the sequence of nodes that are made\n",
    "\n",
    "    def get_reward(self, sequence, maxes):\n",
    "        '''Evaluate the sequence to get the reward, defined by the percentage of entropy reduction'''\n",
    "        # The process is iterated until the last node of the rollout sequence is reached \n",
    "        # and the total information gain is determined by subtracting the entropies \n",
    "        # of the initial and final belief space.\n",
    "        # reward = infogain / Hinit (joint entropy of current state of the mission)\n",
    "        sim_world = self.GP\n",
    "        samples = []\n",
    "        obs = []\n",
    "        for seq in sequence:\n",
    "            samples.append(self.tree[seq][0])\n",
    "        obs = list(chain.from_iterable(samples))\n",
    "        \n",
    "        if self.f_rew == 'mes':\n",
    "            if maxes is None:\n",
    "                return 1.0\n",
    "            else:\n",
    "                return self.aquisition_function(time = self.t, xvals = obs, robot_model = sim_world, maxes = maxes)         \n",
    "                \n",
    "        else:\n",
    "            return self.aquisition_function(time = self.t, xvals = obs, robot_model = sim_world)\n",
    "    \n",
    "    def update_tree(self, reward, sequence):\n",
    "        '''Propagate the reward for the sequence'''\n",
    "        #TODO update costs as well\n",
    "        self.tree['root'] = (self.tree['root'][0], self.tree['root'][1]+1)\n",
    "        for seq in sequence:\n",
    "            samples, cost, rew, queries = self.tree[seq]\n",
    "            queries += 1\n",
    "            n = queries\n",
    "            rew = ((n-1)*rew+reward)/n\n",
    "            self.tree[seq] = (samples, cost, rew, queries)\n",
    "\n",
    "    def get_best_child(self):\n",
    "        '''Query the tree for the best child in the actions'''\n",
    "        best = -1000\n",
    "        best_child = None\n",
    "        value = {}\n",
    "        for i in xrange(self.frontier_size):\n",
    "            r = self.tree['child '+ str(i)][2]\n",
    "            value[i] = r\n",
    "            if r > best:\n",
    "                best = r\n",
    "                best_child = 'child '+ str(i)\n",
    "        return best_child, best, value"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The Evaluation Class\n",
    "This custom Evaluation class stores performance metrics for the robot during mission execution. It is initialized with a ground truth world model, which is used only for the purpose of real-time performance tracking. The Evaluation class includes a variety of different evaluation metrics.\n",
    "\n",
    "### The Model: \n",
    "We have some ground truth scalar, stationary function $m: \\mathbb{X} \\to \\mathbb{R}$ e.g. methane concentration, temperature, etc. We place a GP prior on the unknown function $m$ with mean $\\mu$ and covariance function $\\kappa$:\n",
    "$$ m \\sim \\mathcal{GP}(\\mu, \\kappa) $$\n",
    "\n",
    "We define a reward function $f: 2^{\\mathbb{X}} \\to \\mathbb{R}$ that quantifies the true reward of a robot being in a set of states $\\mathcal{A} \\subset \\mathbb{X}$:\n",
    "$$ f(\\mathcal{A}) = IG(m; \\mathcal{A}) + \\lambda \\sum_{\\mathbf{x} \\in \\mathcal{A}} m(\\mathbf{x}) $$\n",
    "where $IG: 2^{\\mathbb{X}} \\to \\mathbb{R}$ is the information gain of a set of points with respect to the underlying function $m$.\n",
    "\n",
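    "For reference, when the observations carry Gaussian noise of variance $\\sigma^2$, the information gain of a finite sample set $\\mathcal{A}$ has a standard closed form in terms of the kernel matrix $K_{\\mathcal{A}}$ evaluated at those locations (this is the usual GP result; the code below computes a log-determinant of this form):\n",
    "$$ IG(m; \\mathcal{A}) = \\frac{1}{2} \\log \\det \\left( I + \\sigma^{-2} K_{\\mathcal{A}} \\right) $$\n",
    "\n",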
    "We also define an acquisition function $\\alpha: 2^{\\mathbb{X}} \\to \\mathbb{R}$, by which the robot can select promising trajectories:\n",
    "$$ \\alpha(\\mathcal{A}) = f(\\mathcal{A}) + \\beta_t \\sum_{\\mathbf{x} \\in \\mathcal{A}} \\sigma_{t-1}(\\mathbf{x})$$\n",
    "$$ \\alpha(\\mathcal{A}) = IG(m; \\mathcal{A}) + \\lambda \\sum_{\\mathbf{x} \\in \\mathcal{A}} \\mu_{t-1}(\\mathbf{x}) + \\beta_t \\sum_{\\mathbf{x} \\in \\mathcal{A}} \\sigma_{t-1}(\\mathbf{x})$$\n",
    "\n",
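    "The exploration weight $\\beta_t$ used by the UCB-style acquisition functions in this notebook follows the schedule below, with confidence parameter $\\delta$ and a domain-size constant $d$ (the code uses $\\delta = 0.9$ and $d = 20$):\n",
    "$$ \\beta_t = 2 \\log \\left( \\frac{d \\, \\pi^2 (t+1)^2}{6 \\delta} \\right) $$\n",
    "\n",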
    "Here is an outline of the metrics:\n",
    "\n",
    "* Information Gain $\\mathcal{I}(m; \\mathbf{y}_A \\mid \\mathbf{y}_{obs})$: The mutual information between a set of potential observations $\\mathbf{y}_A$ and the underlying function $m$, given a set of previous observations $\\mathbf{y}_{obs}$.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class Evaluation:\n",
    "    ''' The Evaluation class, which includes the ground truth world model and a selection of reward criteria.\n",
    "    \n",
    "    Inputs:\n",
    "        * world (Environment object): an environment object that represents the ground truth environment\n",
    "        * reward_function (string): the reward function. One of {hotspot_info, mean, info_gain, mes}  '''\n",
    "    def __init__(self, world, reward_function = 'mean'):\n",
    "        ''' Initialize the evaluation module and select reward function'''\n",
    "        self.world = world\n",
    "        \n",
    "        self.metrics = {'aquisition_function': {},\n",
    "                        'mean_reward': {}, \n",
    "                        'info_gain_reward': {},                         \n",
    "                        'hotspot_info_reward': {}, \n",
    "                        'MSE': {},                         \n",
    "                        'instant_regret': {},\n",
    "                        'regret_bound': {}\n",
    "                       }\n",
    "        \n",
    "        self.reward_function = reward_function\n",
    "        \n",
    "        if reward_function == 'hotspot_info':\n",
    "            self.f_rew = self.hotspot_info_reward\n",
    "            self.f_aqu = hotspot_info_UCB\n",
    "        elif reward_function == 'mean':\n",
    "            self.f_rew = self.mean_reward\n",
    "            self.f_aqu = mean_UCB      \n",
    "        elif reward_function == 'info_gain':\n",
    "            self.f_rew = self.info_gain_reward\n",
    "            self.f_aqu = info_gain   \n",
    "        # TODO: fix these two reward and acquisition functions for evaluation\n",
    "        elif reward_function == 'mes':\n",
    "            self.f_rew = self.info_gain_reward\n",
    "            self.f_aqu = info_gain                 \n",
    "        else:\n",
    "            raise ValueError('Only \\'mean\\', \\'hotspot_info\\', \\'info_gain\\', and \\'mes\\' reward functions currently supported.')    \n",
    "    \n",
    "    ''' %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n",
    "                                Reward Functions - should have the form:\n",
    "    def reward(time, xvals, robot_model), where:\n",
    "    * time (int): the current timestep of planning\n",
    "    * xvals (list of float tuples): representing a path i.e. [(3.0, 4.0), (5.6, 7.2), ... ])\n",
    "    * robot_model (GPModel)\n",
    "    \n",
    "    %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%'''\n",
    "    def mean_reward(self, time, xvals, robot_model):\n",
    "        ''' Predicted mean (true) reward function'''\n",
    "        data = np.array(xvals)\n",
    "        x1 = data[:,0]\n",
    "        x2 = data[:,1]\n",
    "        queries = np.vstack([x1, x2]).T   \n",
    "        \n",
    "        mu, var = self.world.GP.predict_value(queries)\n",
    "        return np.sum(mu)   \n",
    "\n",
    "    def hotspot_info_reward(self, time, xvals, robot_model):\n",
    "        ''' The reward information gathered plus the exploitation value gathered'''    \n",
    "        LAMBDA = 1.0  # TODO: should depend on time\n",
    "        data = np.array(xvals)\n",
    "        x1 = data[:,0]\n",
    "        x2 = data[:,1]\n",
    "        queries = np.vstack([x1, x2]).T   \n",
    "        \n",
    "        mu, var = self.world.GP.predict_value(queries)    \n",
    "        return self.info_gain_reward(time, xvals, robot_model) + LAMBDA * np.sum(mu)\n",
    "    \n",
    "    def info_gain_reward(self, time, xvals, robot_model):\n",
    "        ''' The information reward gathered '''\n",
    "        return info_gain(time, xvals, robot_model)\n",
    "    \n",
    "    ''' %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n",
    "                                               End Reward Functions\n",
    "        %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%'''        \n",
    "    def inst_regret(self, t, all_paths, selected_path, robot_model):\n",
    "        ''' The instantaneous Kapoor regret of a selected path, according to the specified reward function\n",
    "        Input:\n",
    "        * all_paths: the set of all available paths to the robot at time t\n",
    "        * selected_path: the path selected by the robot at time t '''\n",
    "        \n",
    "        value_omni = {}        \n",
    "        for path, points in all_paths.items():           \n",
    "            value_omni[path] =  self.f_rew(time = t, xvals = points, robot_model = robot_model)  \n",
    "        value_max = value_omni[max(value_omni, key = value_omni.get)]\n",
    "        \n",
    "        value_selected = self.f_rew(time = t, xvals = selected_path, robot_model = robot_model)\n",
    "\n",
    "        #assert(value_max - value_selected >= 0)\n",
    "        return value_max - value_selected\n",
    "    \n",
    "    def regret_bound(self, t, T):\n",
    "        pass\n",
    "        \n",
    "        \n",
    "    def MSE(self, robot_model, NTEST = 10):\n",
    "        ''' Compute the MSE on a set of test points, sampled uniformly at random (with a fixed seed) throughout the environment'''\n",
    "        np.random.seed(0)\n",
    "        x1 = np.random.random_sample((NTEST, 1)) * (self.world.x1max - self.world.x1min) + self.world.x1min\n",
    "        x2 = np.random.random_sample((NTEST, 1)) * (self.world.x2max - self.world.x2min) + self.world.x2min\n",
    "        data = np.hstack((x1, x2))\n",
    "        \n",
    "        pred_world, var_world = self.world.GP.predict_value(data)\n",
    "        pred_robot, var_robot = robot_model.predict_value(data)      \n",
    "        \n",
    "        return ((pred_world - pred_robot) ** 2).mean()\n",
    "    \n",
    "    def update_metrics(self, t, robot_model, all_paths, selected_path):\n",
    "        ''' Function to update available metrics'''    \n",
    "        # Compute aquisition function\n",
    "        self.metrics['aquisition_function'][t] = self.f_aqu(t, selected_path, robot_model)\n",
    "        \n",
    "        # Compute reward functions\n",
    "        self.metrics['mean_reward'][t] = self.mean_reward(t, selected_path, robot_model)\n",
    "        self.metrics['info_gain_reward'][t] = self.info_gain_reward(t, selected_path, robot_model)\n",
    "        self.metrics['hotspot_info_reward'][t] = self.hotspot_info_reward(t, selected_path, robot_model)\n",
    "        \n",
    "        # Compute other performance metrics\n",
    "        self.metrics['MSE'][t] = self.MSE(robot_model, NTEST = 25)\n",
    "        self.metrics['instant_regret'][t] = self.inst_regret(t, all_paths, selected_path, robot_model)\n",
    "    \n",
    "    def plot_metrics(self):\n",
    "        ''' Plots the performance metrics computed over the course of a mission'''\n",
    "        # Assume that all metrics have the same timestamps as MSE; not strictly necessary\n",
    "        time = np.array(self.metrics['MSE'].keys())\n",
    "        \n",
    "        ''' Metrics that require a ground truth global model to compute'''        \n",
    "        MSE = np.array(self.metrics['MSE'].values())\n",
    "        regret = np.cumsum(np.array(self.metrics['instant_regret'].values()))\n",
    "        mean = np.cumsum(np.array(self.metrics['mean_reward'].values()))\n",
    "        hotspot_info = np.cumsum(np.array(self.metrics['hotspot_info_reward'].values()))\n",
    "        \n",
    "        ''' Metrics that the robot can compute online '''\n",
    "        info_gain = np.cumsum(np.array(self.metrics['info_gain_reward'].values()))        \n",
    "        UCB = np.cumsum(np.array(self.metrics['aquisition_function'].values()))\n",
    "        \n",
    "        fig, ax = plt.subplots(figsize=(8, 6))\n",
    "        ax.set_title('Accumulated UCB Aquisition Function')             \n",
    "        plt.plot(time, UCB, 'g')\n",
    "        \n",
    "        fig, ax = plt.subplots(figsize=(8, 6))\n",
    "        ax.set_title('Accumulated Information Gain')                             \n",
    "        plt.plot(time, info_gain, 'k')        \n",
    "        \n",
    "        fig, ax = plt.subplots(figsize=(8, 6))\n",
    "        ax.set_title('Accumulated Mean Reward')                     \n",
    "        plt.plot(time, mean, 'b')      \n",
    "        \n",
    "        fig, ax = plt.subplots(figsize=(8, 6))\n",
    "        ax.set_title('Accumulated Hotspot Information Gain Reward')                             \n",
    "        plt.plot(time, hotspot_info, 'r')          \n",
    "        \n",
    "        fig, ax = plt.subplots(figsize=(8, 6))\n",
    "        ax.set_title('Average Regret w.r.t. ' + self.reward_function + ' Reward')                     \n",
    "        plt.plot(time, regret/time, 'b')        \n",
    "        \n",
    "        fig, ax = plt.subplots(figsize=(8, 6))\n",
    "        ax.set_title('Map MSE at 25 Random Test Points')                             \n",
    "        plt.plot(time, MSE, 'r')  \n",
    "   \n",
    "        plt.show()          \n",
    "    \n",
    "                             \n",
    "'''%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%\n",
    "                                    Aquisition Functions - should have the form:\n",
    "    def alpha(time, xvals, robot_model), where:\n",
    "    * time (int): the current timestep of planning\n",
    "    * xvals (list of float tuples): representing a path i.e. [(3.0, 4.0), (5.6, 7.2), ... ])\n",
    "    * robot_model (GPModel object): the robot's current model of the environment\n",
    "    \n",
    "%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% '''\n",
    "\n",
    "def info_gain(time, xvals, robot_model):\n",
    "    ''' Compute the information gain of a set of potential sample locations with respect to the underlying function,\n",
    "        conditioned on previous samples xobs'''        \n",
    "    data = np.array(xvals)\n",
    "    x1 = data[:,0]\n",
    "    x2 = data[:,1]\n",
    "    queries = np.vstack([x1, x2]).T   \n",
    "    xobs = robot_model.xvals\n",
    "\n",
    "    # If the robot hasn't taken any observations yet, simply return the entropy of the potential set\n",
    "    if xobs is None:\n",
    "        Sigma_after = robot_model.kern.K(queries)\n",
    "        sign_after, entropy_after = np.linalg.slogdet(np.eye(Sigma_after.shape[0], Sigma_after.shape[1]) \\\n",
    "                                    + robot_model.variance * Sigma_after)\n",
    "        #print \"Entropy with no obs: \", entropy_after\n",
    "        return 0.5 * sign_after * entropy_after\n",
    "\n",
    "    all_data = np.vstack([xobs, queries])\n",
    "    \n",
    "    # The covariance matrices of the previous observations and combined observations respectively\n",
    "    Sigma_before = robot_model.kern.K(xobs) \n",
    "    Sigma_total = robot_model.kern.K(all_data)       \n",
    "\n",
    "    # The term H(y_obs)\n",
    "    sign_before, entropy_before = np.linalg.slogdet(np.eye(Sigma_before.shape[0], Sigma_before.shape[1]) \\\n",
    "                                    + robot_model.variance * Sigma_before)\n",
    "    \n",
    "    # The term H(y_a, y_obs)\n",
    "    sign_after, entropy_after = np.linalg.slogdet(np.eye(Sigma_total.shape[0], Sigma_total.shape[1]) \\\n",
    "                                    + robot_model.variance * Sigma_total)\n",
    "\n",
    "    # The difference H(y_a, y_obs) - H(y_obs) = H(y_a | y_obs); factor of 0.5 matches the no-observation branch above\n",
    "    entropy_total = 0.5 * sign_after * entropy_after - 0.5 * sign_before * entropy_before\n",
    "    #print \"Entropy: \", entropy_total\n",
    "\n",
    "\n",
    "    ''' TODO: this term seems like it should still be in the equation, but it makes the IG negative'''\n",
    "    #entropy_const = 0.5 * np.log(2 * np.pi * np.e * robot_model.variance)\n",
    "    entropy_const = 0.0\n",
    "\n",
    "    # This assert should be true, but it's not :(\n",
    "    #assert(entropy_after - entropy_before - entropy_const > 0)\n",
    "    return entropy_total - entropy_const\n",
    "\n",
    "    \n",
    "def mean_UCB(time, xvals, robot_model):\n",
    "    ''' Computes the UCB for a set of points along a trajectory '''\n",
    "    data = np.array(xvals)\n",
    "    x1 = data[:,0]\n",
    "    x2 = data[:,1]\n",
    "    queries = np.vstack([x1, x2]).T   \n",
    "                              \n",
    "    # The GPy interface can predict mean and variance at an array of points; this will be an overestimate\n",
    "    mu, var = robot_model.predict_value(queries)\n",
    "    \n",
    "    delta = 0.9\n",
    "    d = 20\n",
    "    pit = np.pi**2 * (time + 1)**2 / 6.\n",
    "    beta_t = 2 * np.log(d * pit / delta)\n",
    "\n",
    "    return np.sum(mu) + np.sqrt(beta_t) * np.sum(np.fabs(var))\n",
    "\n",
    "def hotspot_info_UCB(time, xvals, robot_model):\n",
    "    ''' The reward information gathered plus the estimated exploitation value gathered'''\n",
    "    data = np.array(xvals)\n",
    "    x1 = data[:,0]\n",
    "    x2 = data[:,1]\n",
    "    queries = np.vstack([x1, x2]).T   \n",
    "                              \n",
    "    LAMBDA = 1.0  # TODO: should depend on time\n",
    "    mu, var = robot_model.predict_value(queries)\n",
    "    \n",
    "    delta = 0.9\n",
    "    d = 20\n",
    "    pit = np.pi**2 * (time + 1)**2 / 6.\n",
    "    beta_t = 2 * np.log(d * pit / delta)\n",
    "\n",
    "    return info_gain(time, xvals, robot_model) + LAMBDA * np.sum(mu) + np.sqrt(beta_t) * np.sum(np.fabs(var))\n",
    "\n",
    "\n",
    "def mves(time, xvals, robot_model, maxes):\n",
    "    ''' Define the Acquisition Function of MES'''\n",
    "    # Compute the acquisition function value f and gradient g at the queried point x using MES, given sampled\n",
    "    # function maxes and a previous set of function maxes\n",
    "    data = np.array(xvals)\n",
    "    x1 = data[:,0]\n",
    "    x2 = data[:,1]\n",
    "    queries = np.vstack([x1, x2]).T        \n",
    "    \n",
    "    #print \"Evaluation points:\", queries\n",
    "    d = queries.shape[1] # The dimension of the points (should be 2D)     \n",
    "\n",
    "    # Initialize f, g\n",
    "    f = 0\n",
    "    for i in xrange(maxes.shape[0]):\n",
    "        # Compute the posterior mean/variance predictions and gradients.\n",
    "        #[meanVector, varVector, meangrad, vargrad] = mean_var(x, xx, ...\n",
    "        #    yy, KernelMatrixInv{i}, l(i,:), sigma(i), sigma0(i));\n",
    "        mean, var = robot_model.predict_value(queries)\n",
    "        std_dev = np.sqrt(var)\n",
    "        \n",
    "        # Compute the acquisition function of MES.        \n",
    "        gamma = (maxes[i] - mean) / std_dev\n",
    "        pdfgamma = sp.stats.norm.pdf(gamma)\n",
    "        cdfgamma = sp.stats.norm.cdf(gamma)\n",
    "        f += sum(gamma * pdfgamma / (2.0 * cdfgamma) - np.log(cdfgamma))        \n",
    "    # Average f\n",
    "    f = f / maxes.shape[0]\n",
    "    # f is an np array; return scalar value\n",
    "    return f[0]\n",
    "    \n",
    "def entropy_of_n(var):    \n",
    "    ''' Differential entropy of a normal distribution with variance var '''\n",
    "    return np.log(np.sqrt(2.0 * np.pi * np.e * var))\n",
    "\n",
    "def entropy_of_tn(a, b, mu, var):\n",
    "    ''' Differential entropy of a normal distribution truncated to [a, b]\n",
    "        a (float) is the lower bound\n",
    "        b (float) is the upper bound '''\n",
    "    if a is None:\n",
    "        Phi_alpha = 0\n",
    "        phi_alpha = 0\n",
    "        alpha = 0\n",
    "    else:\n",
    "        alpha = (a - mu) / np.sqrt(var)        \n",
    "        Phi_alpha = sp.stats.norm.cdf(alpha)\n",
    "        phi_alpha = sp.stats.norm.pdf(alpha)\n",
    "    if b is None:\n",
    "        Phi_beta = 1\n",
    "        phi_beta = 0\n",
    "        beta = 0\n",
    "    else:\n",
    "        beta = (b - mu) / np.sqrt(var)        \n",
    "        Phi_beta = sp.stats.norm.cdf(beta)\n",
    "        phi_beta = sp.stats.norm.pdf(beta)\n",
    "\n",
    "    Z = Phi_beta - Phi_alpha\n",
    "    \n",
    "    return np.log(Z * np.sqrt(2.0 * np.pi * np.e * var)) + (alpha * phi_alpha - beta * phi_beta) / (2.0 * Z)\n",
    "    \n",
    "def sample_max_vals(robot_model, nK, nFeatures, visualize = True):\n",
    "    ''' Sample possible values and locations of the function maxima from the posterior GP'''\n",
    "    # If the robot has no samples yet, return a constant value\n",
    "    if robot_model.xvals is None:\n",
    "        return None, None\n",
    "    \n",
    "    d = robot_model.xvals.shape[1] # The dimension of the points (should be 2D)     \n",
    "    \n",
    "    ''' Sample Maximum values i.e. return sampled max values for the posterior GP, conditioned on \n",
    "    current observations. Construct random features and optimize functions drawn from the posterior GP.'''\n",
    "    samples = np.zeros((nK, 1))\n",
    "    locs = np.zeros((nK, 2))\n",
    "    delete_locs = []\n",
    "\n",
    "    for i in xrange(nK):\n",
    "        # Draw the weights for the random features\n",
    "        W = np.random.normal(loc = 0.0, scale = robot_model.lengthscale, size = (nFeatures, d))\n",
    "        b = 2 * np.pi * np.random.uniform(low = 0.0, high = 1.0, size = (nFeatures, 1))\n",
    "        \n",
    "        # Compute the features for xx\n",
    "        Z = np.sqrt(2 * robot_model.variance / nFeatures) * np.cos(np.dot(W, robot_model.xvals.T) + b)\n",
    "        \n",
    "        # Draw the coefficient theta\n",
    "        noise = np.random.normal(loc = 0.0, scale = 1.0, size = (nFeatures, 1))\n",
    "    \n",
    "        # TODO: Figure this code out\n",
    "        if robot_model.xvals.shape[0] < nFeatures:\n",
    "            #We adopt the formula $theta \\sim \\N(Z(Z'Z + \\sigma^2 I)^{-1} y, I-Z(Z'Z + \\sigma^2 I)Z')$.            \n",
    "            Sigma = np.dot(Z.T, Z) + robot_model.noise * np.eye(robot_model.xvals.shape[0])\n",
    "            mu = np.dot(np.dot(Z, np.linalg.inv(Sigma)), robot_model.zvals)\n",
    "            [D, U] = np.linalg.eig(Sigma)\n",
    "            D = np.reshape(D, (D.shape[0], 1))\n",
    "            R = np.reciprocal((np.sqrt(D) *(np.sqrt(D) + np.sqrt(robot_model.noise))))\n",
    "            theta = noise - np.dot(Z, np.dot(U, R*(np.dot(U.T, np.dot(Z.T, noise))))) + mu\n",
    "        else:\n",
    "            # $theta \\sim \\N((ZZ'/\\sigma^2 + I)^{-1} Z y / \\sigma^2, (ZZ'/\\sigma^2 + I)^{-1})$.            \n",
    "            Sigma = np.dot(Z, Z.transpose()) / robot_model.noise + np.eye(nFeatures)\n",
    "            Sigma = np.linalg.inv(Sigma)\n",
    "            mu = np.dot(np.dot(Sigma, Z), robot_model.zvals) / robot_model.noise\n",
    "            theta = mu + np.dot(np.linalg.cholesky(Sigma), noise)            \n",
    "            #theta = np.random.multivariate_normal(mean = np.reshape(mu, (nFeatures,)), cov = Sigma, size = (nFeatures, 1))\n",
    "            \n",
    "        # Obtain a function samples from posterior GP\n",
    "        target = lambda x: np.dot(theta.T * np.sqrt(2.0 * robot_model.variance / nFeatures), \\\n",
    "                        np.cos(np.dot(W, x.T) + b)).transpose()\n",
    "        target_vector_n = lambda x: -float(target(x)[0,0])\n",
    "        \n",
    "        # Can only take a 1D input\n",
    "        target_gradient = lambda x: np.dot(theta.T * -np.sqrt(2.0 * robot_model.variance / nFeatures), \\\n",
    "                        np.sin(np.dot(W, x.reshape((2,1))) + b) * W)\n",
    "        target_vector_gradient_n = lambda x: -np.asarray(target_gradient(x).reshape(2,))\n",
    "                                                                  \n",
    "        # Optimize the function\n",
    "        status = False\n",
    "        count = 0\n",
    "        # Retry optimization up to 5 times; if it hasn't converged by then, give up on this simulated world\n",
    "        while status == False and count < 5:\n",
    "            maxima, max_val, max_inv_hess, status = global_maximization(target, target_vector_n, target_gradient, \n",
    "                                target_vector_gradient_n, robot_model.ranges, robot_model.xvals, visualize)\n",
    "            count += 1\n",
    "        if status == False:\n",
    "            delete_locs.append(i)\n",
    "            continue\n",
    "        \n",
    "        samples[i] = max_val\n",
    "        print \"Max val in optimization \\t \\t\", samples[i]\n",
    "        locs[i, :] = maxima\n",
    "        \n",
    "        if max_val < np.max(robot_model.zvals) + 5.0 * np.sqrt(robot_model.noise):\n",
    "            samples[i] = np.max(robot_model.zvals) + 5.0 * np.sqrt(robot_model.noise)\n",
    "            print \"Max observed is bigger than max in opt:\", samples[i]\n",
    "            locs[i, :] = robot_model.xvals[np.argmax(robot_model.zvals)]\n",
    "            #locs[i, :] = locs[i-1, :]\n",
    "                \n",
    "    samples = np.delete(samples, delete_locs, axis = 0)\n",
    "    locs = np.delete(locs, delete_locs, axis = 0)\n",
    "    \n",
    "    return samples, locs\n",
    "            \n",
    "        \n",
    "def global_maximization(target, target_vector_n, target_grad, target_vector_gradient_n, ranges, guesses, visualize):\n",
    "    ''' Perform efficient global maximization'''\n",
    "    gridSize = 30\n",
    "    \n",
    "    # Uniformly sample gridSize points per dimension within the domain bounds\n",
    "    x1 = np.random.uniform(ranges[0], ranges[1], size = gridSize)\n",
    "    x2 = np.random.uniform(ranges[2], ranges[3], size = gridSize)\n",
    "    x1, x2 = np.meshgrid(x1, x2, sparse = False, indexing = 'xy')  \n",
    "    \n",
    "    Xgrid = np.vstack([x1.ravel(), x2.ravel()]).T    \n",
    "    Xgrid = np.vstack([Xgrid, guesses])   \n",
    "    \n",
    "    # Get the function value at Xgrid locations\n",
    "    y = target(Xgrid)\n",
    "    max_index = np.argmax(y)   \n",
    "    start = np.asarray(Xgrid[max_index, :])\n",
    "    #print \"Starting optimization at\", start\n",
    "    res = sp.optimize.minimize(fun = target_vector_n, x0 = start, method = 'SLSQP', jac = target_vector_gradient_n, \\\n",
    "                               bounds = ((ranges[0], ranges[1]), (ranges[2], ranges[3])))\n",
    "    if res['success'] == False:\n",
    "        print \"Failed to converge!\"\n",
    "        #print res\n",
    "        return 0, 0, 0, False\n",
    "    \n",
    "    \n",
    "    # Generate a set of observations from robot model with which to make contour plots\n",
    "    x1vals = np.linspace(ranges[0], ranges[1], 100)\n",
    "    x2vals = np.linspace(ranges[2], ranges[3], 100)\n",
    "    x1, x2 = np.meshgrid(x1vals, x2vals, sparse = False, indexing = 'xy') # dimension: NUM_PTS x NUM_PTS       \n",
    "    data = np.vstack([x1.ravel(), x2.ravel()]).T\n",
    "    observations = target(data)\n",
    "    \n",
    "    if visualize:\n",
    "        fig2, ax2 = plt.subplots(figsize=(8, 6))\n",
    "        #ax2 = fig2.add_subplot(111)\n",
    "        ax2.set_xlim(ranges[0:2])\n",
    "        ax2.set_ylim(ranges[2:])        \n",
    "        ax2.set_title('Contour Plot of the Approximated World Model')     \n",
    "        plot = ax2.contourf(x1, x2, observations.reshape(x1.shape), cmap = 'viridis')\n",
    "        scatter = ax2.scatter(guesses[:, 0], guesses[:, 1], color = 'k', s = 20.0)\n",
    "        scatter = ax2.scatter(res['x'][0], res['x'][1], color = 'r', s = 100.0)      \n",
    "        \n",
    "        plt.show()\n",
    "        #print res\n",
    "        \n",
    "    return res['x'], -res['fun'], res['jac'], True\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## The Robot Class\n",
    "This custom Robot class is used to represent a point robot with all of the requisite modeling and planning algorithms necessary to perform informative path planning in unknown Gaussian environments. The Robot class includes:\n",
    "* A ``GPModel`` object, which represents the robot's current knowledge about the world, including prior information, kernel, and previously sampled points\n",
    "* A ``sample_world`` function handle, which allows it to acquire a noisy sample from the environment at a location/set of locations in the environment\n",
    "* A ``Path_Generator`` object, which represents the set of actions/paths available to a robot from its current pose\n",
    "* A ``myopic_planner`` method, which allows the robot to plan myopic trajectories\n",
    "* A [TODO], which allows the robot to plan nonmyopic trajectories\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class Robot(object):\n",
    "    ''' The Robot class, which includes the vehicle's current model of the world and IPP algorithms.\n",
    "\n",
    "    Inputs:\n",
    "        * sample_world (method) a function handle that takes a set of locations as input and returns a set of observations\n",
    "        * start_loc (tuple of floats) the location of the robot initially in 2-D space e.g. (0.0, 0.0)\n",
    "        * ranges (tuple of floats): a tuple representing the max/min of 2D rectangular domain i.e. (-10, 10, -50, 50)\n",
    "        * kernel_file (string) a filename specifying the location of the stored kernel values\n",
    "        * kernel_dataset (tuple of nparrays) a tuple (xvals, zvals), where xvals is a Npoint x 2 nparray of type float\n",
    "            and zvals is a Npoint x 1 nparray of type float \n",
    "        * prior_dataset (tuple of nparrays) a tuple (xvals, zvals), where xvals is a Npoint x 2 nparray of type float\n",
    "            and zvals is a Npoint x 1 nparray of type float                \n",
    "        * init_variance (float) the variance parameter of the squared exponential kernel\n",
    "        * init_lengthscale (float) the lengthscale parameter of the squared exponential kernel\n",
    "        * noise (float) the sensor noise parameter of the squared exponential kernel \n",
    "        * path_generator (string): one of default, dubins, or equal_dubins. Robot path parameterization. \n",
    "        * frontier_size (int): the number of paths in the generated path set\n",
    "        * horizon_length (float): the length of the paths generated by the robot \n",
    "        * turning_radius (float): the turning radius (in units of distance) of the robot\n",
    "        * sample_step (float): the step size (in units of distance) between sequential samples on a trajectory\n",
    "        * evaluation (Evaluation object): an evaluation object for performance metric computation\n",
    "        * f_rew (string): the reward function. One of {hotspot_info, mean, info_gain, mes}\n",
    "        * create_animation (boolean): save the generated world model and trajectory to file at each timestep '''\n",
    "    def __init__(self, sample_world, start_loc = (0.0, 0.0, 0.0), ranges = (-10., 10., -10., 10.), kernel_file = None, \n",
    "            kernel_dataset = None, prior_dataset = None, init_lengthscale = 10.0, init_variance = 100.0, noise = 0.05, \n",
    "            path_generator = 'default', frontier_size = 6, horizon_length = 5, turning_radius = 1, sample_step = 0.5, \n",
    "            evaluation = None, f_rew = 'mean', create_animation = False):\n",
    "        ''' Initialize the robot class with a GP model, initial location, path sets, and prior dataset'''\n",
    "        self.ranges = ranges\n",
    "        self.create_animation = create_animation\n",
    "        self.eval = evaluation\n",
    "        self.loc = start_loc # Initial location of the robot      \n",
    "        self.sample_world = sample_world # A function handle that allows the robot to sample from the environment \n",
    "        self.f_rew = f_rew\n",
    "        self.fs = frontier_size\n",
    "        \n",
    "        if f_rew == 'hotspot_info':\n",
    "            self.aquisition_function = hotspot_info_UCB\n",
    "        elif f_rew == 'mean':\n",
    "            self.aquisition_function = mean_UCB  \n",
    "        elif f_rew == 'info_gain':\n",
    "            self.aquisition_function = info_gain\n",
    "        elif f_rew == 'mes':\n",
    "            self.aquisition_function = mves\n",
    "        else:\n",
    "            raise ValueError('Only \\'hotspot_info\\', \\'mean\\', \\'info_gain\\', and \\'mes\\' reward functions are supported.')\n",
    "\n",
    "        # Initialize the robot's GP model with the initial kernel parameters\n",
    "        self.GP = GPModel(ranges = ranges, lengthscale = init_lengthscale, variance = init_variance)\n",
    "                \n",
    "        # If both a kernel training dataset and a prior dataset are provided, train the kernel using both\n",
    "        if  kernel_dataset is not None and prior_dataset is not None:\n",
    "            data = np.vstack([prior_dataset[0], kernel_dataset[0]])\n",
    "            observations = np.vstack([prior_dataset[1], kernel_dataset[1]])\n",
    "            self.GP.train_kernel(data, observations, kernel_file) \n",
    "        # Train the kernel using the provided kernel dataset\n",
    "        elif kernel_dataset is not None:\n",
    "            self.GP.train_kernel(kernel_dataset[0], kernel_dataset[1], kernel_file)\n",
    "        # If a kernel file is provided, load the kernel parameters\n",
    "        elif kernel_file is not None:\n",
    "            self.GP.load_kernel()\n",
    "        # No kernel information was provided, so the kernel will be initialized with provided values\n",
    "        else:\n",
    "            pass\n",
    "        \n",
    "        # Incorporate the prior dataset into the model\n",
    "        if prior_dataset is not None:\n",
    "            self.GP.set_data(prior_dataset[0], prior_dataset[1]) \n",
    "        \n",
    "        # The path generation class for the robot\n",
    "        path_options = {'default':Path_Generator(frontier_size, horizon_length, turning_radius, sample_step, ranges),\n",
    "                        'dubins': Dubins_Path_Generator(frontier_size, horizon_length, turning_radius, sample_step, ranges),\n",
    "                        'equal_dubins': Dubins_EqualPath_Generator(frontier_size, horizon_length, turning_radius, sample_step, ranges)}\n",
    "        self.path_generator = path_options[path_generator]\n",
    "\n",
    "    def choose_trajectory(self, t):\n",
    "        ''' Select the best trajectory available to the robot at the current pose, according to the acquisition function.\n",
    "        Input: \n",
    "        * t (int > 0): the current planning iteration (value of a point can change with algorithm progress)'''\n",
    "        paths = self.path_generator.get_path_set(self.loc)\n",
    "        value = {}    \n",
    "    \n",
    "        nK = 2\n",
    "        nFeatures = 300\n",
    "        max_locs = max_val = None      \n",
    "        if self.f_rew == 'mes':\n",
    "            max_val, max_locs = sample_max_vals(self.GP, nK = nK, nFeatures = nFeatures)\n",
    "            \n",
    "        for path, points in paths.items():\n",
    "            if self.f_rew == 'mes':\n",
    "                if max_val is None or max_locs is None:\n",
    "                    value[path] = 1.0\n",
    "                else:\n",
    "                    value[path] = self.aquisition_function(time = t, xvals = points, robot_model = self.GP, maxes = max_val)\n",
    "            else:\n",
    "                value[path] = self.aquisition_function(time = t, xvals = points, robot_model = self.GP)     \n",
    "        try:\n",
    "            best_path = max(value, key = value.get)\n",
    "            return paths[best_path], value[best_path], paths, value, max_locs\n",
    "        except ValueError:\n",
    "            # No valid paths; return a tuple so that callers can still unpack the result\n",
    "            return None, None, paths, value, max_locs\n",
    "    \n",
    "    def collect_observations(self, xobs):\n",
    "        ''' Gather noisy samples of the environment and update the robot's GP model.\n",
    "        Input: \n",
    "        * xobs (float array): an nparray of floats representing observation locations, with dimension NUM_PTS x 2 '''\n",
    "        zobs = self.sample_world(xobs)       \n",
    "        self.GP.add_data(xobs, zobs)\n",
    "\n",
    "    def myopic_planner(self, T):\n",
    "        ''' Execute the myopic planning loop: select, sample, and update the model for T iterations.\n",
    "        Input: \n",
    "        * T (int > 0): the length of the planning horizon (number of planning iterations)'''\n",
    "        self.trajectory = []\n",
    "        \n",
    "        for t in xrange(T):\n",
    "            # Select the best trajectory according to the robot's aquisition function\n",
    "            print \"[\", t, \"] Current Location:  \", self.loc\n",
    "            best_path, best_val, all_paths, all_values, max_locs = self.choose_trajectory(t = t)\n",
    "            if best_path is None:\n",
    "                break\n",
    "            \n",
    "            # Given this choice, update the evaluation metrics \n",
    "            self.eval.update_metrics(t, self.GP, all_paths, best_path) \n",
    "            \n",
    "            data = np.array(best_path)\n",
    "            x1 = data[:,0]\n",
    "            x2 = data[:,1]\n",
    "            xlocs = np.vstack([x1, x2]).T           \n",
    "            \n",
    "            if len(best_path) != 1:\n",
    "                self.collect_observations(xlocs)\n",
    "            self.trajectory.append(best_path)\n",
    "            \n",
    "            if self.create_animation:\n",
    "                self.visualize_trajectory(screen = False, filename = t, best_path = best_path, \n",
    "                        maxes = max_locs, all_paths = all_paths, all_vals = all_values)            \n",
    "\n",
    "            if len(best_path) == 1:\n",
    "                # Single-point path (e.g., the robot is blocked); rotate in place by 0.45 radians\n",
    "                self.loc = (best_path[-1][0], best_path[-1][1], best_path[-1][2]-0.45)\n",
    "            else:\n",
    "                self.loc = best_path[-1]\n",
    "    \n",
    "    def visualize_trajectory(self, screen = True, filename = 'SUMMARY', best_path = None, \n",
    "        maxes = None, all_paths = None, all_vals = None):      \n",
    "        ''' Visualize the set of paths chosen by the robot \n",
    "        Inputs:\n",
    "        * screen (boolean): determines whether the figure is plotted to the screen or saved to file \n",
    "        * filename (string or int): string appended to the figure filename '''\n",
    "        \n",
    "        # Generate a set of observations from robot model with which to make contour plots\n",
    "        x1vals = np.linspace(self.ranges[0], self.ranges[1], 100)\n",
    "        x2vals = np.linspace(self.ranges[2], self.ranges[3], 100)\n",
    "        x1, x2 = np.meshgrid(x1vals, x2vals, sparse = False, indexing = 'xy') # dimension: NUM_PTS x NUM_PTS       \n",
    "        data = np.vstack([x1.ravel(), x2.ravel()]).T\n",
    "        observations, var = self.GP.predict_value(data)        \n",
    "        \n",
    "        fig, ax = plt.subplots(figsize=(8, 6))\n",
    "        ax.set_xlim(self.ranges[0:2])\n",
    "        ax.set_ylim(self.ranges[2:])\n",
    "        plot = ax.contourf(x1, x2, observations.reshape(x1.shape), cmap = 'viridis', vmin = -25, vmax = 25)\n",
    "        if self.GP.xvals is not None:\n",
    "            scatter = ax.scatter(self.GP.xvals[:, 0], self.GP.xvals[:, 1], c='k', s = 20.0, cmap = 'viridis')                \n",
    "\n",
    "        color = iter(plt.cm.cool(np.linspace(0,1,len(self.trajectory))))\n",
    "        \n",
    "        for i, path in enumerate(self.trajectory):\n",
    "            c = next(color)\n",
    "            f = np.array(path)\n",
    "            plt.plot(f[:,0], f[:,1], c=c, marker='*')\n",
    "\n",
    "        if all_paths is not None:\n",
    "            #print \"All paths:\", all_paths\n",
    "            #print \"Values:\", all_vals\n",
    "            all_vals = [x for x in all_vals.values()]   \n",
    "\n",
    "            path_color = iter(plt.cm.autumn(np.linspace(0, max(all_vals),len(all_vals))/ max(all_vals)))        \n",
    "            path_order = np.argsort(all_vals)\n",
    "            \n",
    "            for index in path_order:\n",
    "                c = next(path_color)                \n",
    "                points = all_paths[index]\n",
    "                f = np.array(points)\n",
    "                plt.plot(f[:,0], f[:,1], c = c, marker='.')\n",
    "                \n",
    "        if best_path is not None:\n",
    "            f = np.array(best_path)\n",
    "            plt.plot(f[:,0], f[:,1], c = 'g', marker='*')\n",
    "            \n",
    "        if maxes is not None:\n",
    "            plt.scatter(maxes[:, 0], maxes[:, 1], color = 'r', marker = '*', s = 500.0)\n",
    "            \n",
    "        if screen:\n",
    "            plt.show()           \n",
    "        else:\n",
    "            if not os.path.exists('./figures/' + str(self.f_rew)):\n",
    "                os.makedirs('./figures/' + str(self.f_rew))\n",
    "            fig.savefig('./figures/' + str(self.f_rew)+ '/trajectory-N.' + str(filename) + '.png')\n",
    "            plt.close()\n",
    "            \n",
    "        \n",
    "    def visualize_world_model(self, screen = True, filename = 'SUMMARY', maxes = None):\n",
    "        ''' Visualize the robot's current world model by sampling points uniformly in space and \n",
    "        plotting the predicted function value at those locations.\n",
    "        Inputs:\n",
    "        * screen (boolean): determines whether the figure is plotted to the screen or saved to file \n",
    "        * filename (string or int): string appended to the figure filename'''\n",
    "        # Generate a set of observations from robot model with which to make contour plots\n",
    "        x1vals = np.linspace(self.ranges[0], self.ranges[1], 100)\n",
    "        x2vals = np.linspace(self.ranges[2], self.ranges[3], 100)\n",
    "        x1, x2 = np.meshgrid(x1vals, x2vals, sparse = False, indexing = 'xy') # dimension: NUM_PTS x NUM_PTS       \n",
    "        data = np.vstack([x1.ravel(), x2.ravel()]).T\n",
    "        observations, var = self.GP.predict_value(data)        \n",
    "        \n",
    "        fig2, ax2 = plt.subplots(figsize=(8, 6))\n",
    "        #ax2 = fig2.add_subplot(111)\n",
    "        ax2.set_xlim(self.ranges[0:2])\n",
    "        ax2.set_ylim(self.ranges[2:])        \n",
    "        ax2.set_title('Contour Plot of the Robot\\'s World Model')     \n",
    "    \n",
    "        plot = ax2.contourf(x1, x2, observations.reshape(x1.shape), cmap = 'viridis', vmin = -25, vmax = 25)\n",
    "        # Plot the samples taken by the robot\n",
    "        if self.GP.xvals is not None:\n",
    "            scatter = ax2.scatter(self.GP.xvals[:, 0], self.GP.xvals[:, 1], c=self.GP.zvals.ravel(), s = 10.0, cmap = 'viridis')        \n",
    "        if maxes is not None:\n",
    "            print \"Plotting maxes\"            \n",
    "            ax2.scatter(maxes[:, 0], maxes[:, 1], color = 'r', s = 50.0)\n",
    "        if screen:\n",
    "            plt.show()           \n",
    "        else:\n",
    "            if not os.path.exists('./figures/' + str(self.f_rew)):\n",
    "                os.makedirs('./figures/' + str(self.f_rew))\n",
    "            fig2.savefig('./figures/' + str(self.f_rew)+ '/world_model.' + str(filename) + '.png')\n",
    "            plt.close()\n",
    "    \n",
    "    def plot_information(self):\n",
    "        ''' Visualizes the accumulation of reward and acquisition functions ''' \n",
    "        self.eval.plot_metrics()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class Nonmyopic_Robot(Robot):\n",
    "    '''This robot inherits from the Robot class, but uses Monte Carlo tree search (MCTS) to perform global-horizon planning'''\n",
    "    \n",
    "    def __init__(self, sample_world, start_loc = (0.0, 0.0, 0.0), ranges = (-10., 10., -10., 10.), kernel_file = None, \n",
    "            kernel_dataset = None, prior_dataset = None, init_lengthscale = 10.0, init_variance = 100.0, noise = 0.05, \n",
    "            path_generator = 'default', frontier_size = 6, horizon_length = 5, turning_radius = 1, sample_step = 0.5, \n",
    "            evaluation = None, f_rew = 'mean', create_animation = False, computation_budget = 60, rollout_length = 6):\n",
    "        ''' Initialize the robot class with a GP model, initial location, path sets, and prior dataset'''\n",
    "        \n",
    "        super(Nonmyopic_Robot, self).__init__(sample_world, start_loc, ranges, kernel_file, kernel_dataset, \n",
    "            prior_dataset, init_lengthscale, init_variance, noise, path_generator, frontier_size, \n",
    "            horizon_length, turning_radius, sample_step, evaluation, f_rew, create_animation)        \n",
    "        \n",
    "        # Computation limits\n",
    "        self.comp_budget = computation_budget\n",
    "        self.roll_length = rollout_length\n",
    "\n",
    "    def nonmyopic_planner(self, T = 3):\n",
    "        ''' Use a Monte Carlo tree search to perform long-horizon planning'''\n",
    "        self.trajectory = []\n",
    "                 \n",
    "        for t in xrange(T):\n",
    "            print \"[\", t, \"] Current Location:  \", self.loc            \n",
    "            #computation_budget, belief, initial_pose, planning_limit, frontier_size, path_generator, aquisition_function, time\n",
    "            mcts = MCTS(self.comp_budget, self.GP, self.loc, self.roll_length, self.fs, self.path_generator, self.aquisition_function, self.f_rew, t)\n",
    "            best_path, best_val, all_paths, all_vals, max_locs = mcts.choose_trajectory()\n",
    "            \n",
    "            if self.create_animation:\n",
    "                #robot.visualize_world_model(screen = False, filename = t, maxes = max_locs)\n",
    "                self.visualize_trajectory(screen = False, filename = str(t),  best_path = best_path, \n",
    "                        maxes = max_locs, all_paths = all_paths, all_vals = all_vals)                        \n",
    "            \n",
    "            # print best_path\n",
    "            \n",
    "            # Update relevant metrics with the selected path\n",
    "            data = np.array(best_path)\n",
    "            x1 = data[:,0]\n",
    "            x2 = data[:,1]\n",
    "            xlocs = np.vstack([x1, x2]).T\n",
    "            all_paths = self.path_generator.get_path_set(self.loc)\n",
    "            self.eval.update_metrics(t, self.GP, all_paths, best_path) \n",
    "            self.collect_observations(xlocs)\n",
    "            self.trajectory.append(best_path)        \n",
    "            \n",
    "            if len(best_path) == 1:\n",
    "                # If the best path returned ran into a wall, rotate by 1.14 radians clockwise\n",
    "                self.loc = (best_path[-1][0], best_path[-1][1], best_path[-1][2] - 1.14)\n",
    "            elif best_path[-1][0] < self.ranges[0]+0.5 or best_path[-1][0] > self.ranges[1]-0.5:\n",
    "                # If the best path ends too near the x-edge of the space, rotate by 1.14 radians clockwise\n",
    "                self.loc = (best_path[-1][0],best_path[-1][1],best_path[-1][2]-1.14)\n",
    "            elif best_path[-1][1] < self.ranges[2]+0.5 or best_path[-1][1] > self.ranges[3]-0.5:\n",
    "                # If the best path ends too near the y-edge of the space, rotate by 1.14 radians clockwise\n",
    "                self.loc = (best_path[-1][0],best_path[-1][1],best_path[-1][2]-1.14)\n",
    "            else:\n",
    "                self.loc = best_path[-1]\n",
    "        "
   ]
  },
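  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "``nonmyopic_planner`` delegates the search itself to an ``MCTS`` class defined elsewhere in this notebook. As a much simpler, purely illustrative stand-in for that search, the sketch below scores at most ``budget`` candidate action sequences of a fixed rollout ``depth`` and returns the first action of the best sequence found; the function and its arguments are hypothetical, not the repository's API.\n",
    "\n",
    "```python\n",
    "from itertools import product, islice\n",
    "\n",
    "def budgeted_plan(actions, reward, budget = 100, depth = 3):\n",
    "    # Illustrative stand-in for MCTS: enumerate at most 'budget' action\n",
    "    # sequences of length 'depth' and keep the best-scoring one\n",
    "    best_seq, best_val = None, float('-inf')\n",
    "    for seq in islice(product(actions, repeat = depth), budget):\n",
    "        val = reward(seq)\n",
    "        if val > best_val:\n",
    "            best_seq, best_val = seq, val\n",
    "    return best_seq[0], best_val\n",
    "\n",
    "first_action, val = budgeted_plan(actions = [-1, 0, 1],\n",
    "                                  reward = lambda seq: sum(seq))\n",
    "```"
   ]
  },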
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Example initialization: Myopic Robot\n",
    "To use this code, first create a global Gaussian Environment object. Then, create a Robot object, which can optionally be supplied with a prior dataset, a dataset that can be used to train kernel parameters, or a filename from which to load kernel parameters. If the ``visualize`` flag is set in the Environment model, the resulting Gaussian surface will be plotted.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# Create a random environment sampled from a GP with an RBF kernel, specified hyperparameters, and mean function 0 \n",
    "# The environment will be constrained by a set of uniformly distributed sample points of size NUM_PTS x NUM_PTS\n",
    "''' Options include mean, info_gain, hotspot_info, and mes'''\n",
    "reward_function = 'mes'\n",
    "ranges = (0., 10., 0., 10.)\n",
    "\n",
    "world = Environment(ranges = ranges, # x1min, x1max, x2min, x2max constraints\n",
    "                    NUM_PTS = 20, \n",
    "                    variance = 100.0, \n",
    "                    lengthscale = 1.0, \n",
    "                    visualize = True,\n",
    "                    seed = 3)\n",
    "\n",
    "evaluation = Evaluation(world = world, \n",
    "                        reward_function = reward_function)\n",
    "\n",
    "# Gather some prior observations to train the kernel (optional)\n",
    "x1observe = np.linspace(ranges[0], ranges[1], 8)\n",
    "x2observe = np.linspace(ranges[2], ranges[3], 8)\n",
    "x1observe, x2observe = np.meshgrid(x1observe, x2observe, sparse = False, indexing = 'xy')  \n",
    "data = np.vstack([x1observe.ravel(), x2observe.ravel()]).T\n",
    "observations = world.sample_value(data)\n",
    "\n",
    "# Create the point robot\n",
    "robot = Robot(sample_world = world.sample_value, \n",
    "              start_loc = (5.0, 5.0, 0.0), \n",
    "              ranges = ranges,\n",
    "              kernel_file = None,\n",
    "              kernel_dataset = None,\n",
    "              prior_dataset =  None, \n",
    "              #prior_dataset =  (data, observations), \n",
    "              init_lengthscale = 1.0, \n",
    "              init_variance = 100.0, \n",
    "              noise = 0.0001,\n",
    "              path_generator = 'dubins',\n",
    "              frontier_size = 20, \n",
    "              horizon_length = 2.0, \n",
    "              turning_radius = 0.05, \n",
    "              sample_step = 1.0,\n",
    "              evaluation = evaluation, \n",
    "              f_rew = reward_function, \n",
    "              create_animation = True)\n",
    "\n",
    "robot.myopic_planner(T = 150)\n",
    "robot.visualize_world_model(screen = True)\n",
    "robot.visualize_trajectory(screen = True)\n",
    "robot.plot_information()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Example initialization: Nonmyopic Robot\n",
    "To use this code, first create a global Gaussian Environment object. Then, create a Robot object, which can optionally be supplied with a prior dataset, a dataset that can be used to train kernel parameters, or a filename from which to load kernel parameters. If the ``visualize`` flag is set in the Environment model, the resulting Gaussian surface will be plotted.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "# Create a random environment sampled from a GP with an RBF kernel, specified hyperparameters, and mean function 0 \n",
    "# The environment will be constrained by a set of uniformly distributed sample points of size NUM_PTS x NUM_PTS\n",
    "''' Options include mean, info_gain, hotspot_info, mes, aes'''\n",
    "reward_function = 'mes'\n",
    "ranges = (0., 10., 0., 10.)\n",
    "\n",
    "world = Environment(ranges = ranges, # x1min, x1max, x2min, x2max constraints\n",
    "                    NUM_PTS = 20, \n",
    "                    variance = 100.0, \n",
    "                    lengthscale = 1.0, \n",
    "                    visualize = True,\n",
    "                    seed = 3)\n",
    "\n",
    "evaluation = Evaluation(world = world, \n",
    "                        reward_function = reward_function)\n",
    "\n",
    "# Gather some prior observations to train the kernel (optional)\n",
    "x1observe = np.linspace(ranges[0]+0.5, ranges[1]-0.5, 8)\n",
    "x2observe = np.linspace(ranges[2]+0.5, ranges[3]-0.5, 8)\n",
    "x1observe, x2observe = np.meshgrid(x1observe, x2observe, sparse = False, indexing = 'xy')  \n",
    "data = np.vstack([x1observe.ravel(), x2observe.ravel()]).T\n",
    "observations = world.sample_value(data)\n",
    "\n",
    "# Create the point robot\n",
    "robot = Nonmyopic_Robot(sample_world = world.sample_value, \n",
    "              start_loc = (5.0, 5.0, 0.0), \n",
    "              ranges = ranges,\n",
    "              kernel_file = None,\n",
    "              kernel_dataset = None,\n",
    "              prior_dataset =  None, \n",
    "              #prior_dataset =  (data, observations), \n",
    "              init_lengthscale = 1.0, \n",
    "              init_variance = 100.0, \n",
    "              noise = 0.0001,\n",
    "              path_generator = 'dubins',\n",
    "              frontier_size = 20, \n",
    "              horizon_length = 1.5, \n",
    "              turning_radius = 0.05, \n",
    "              sample_step = 0.5,\n",
    "              evaluation = evaluation, \n",
    "              f_rew = reward_function, \n",
    "              create_animation = True,\n",
    "              computation_budget = 5,\n",
    "              rollout_length = 3) \n",
    "\n",
    "robot.nonmyopic_planner(T = 5)\n",
    "robot.visualize_world_model(screen = True)\n",
    "robot.visualize_trajectory(screen = True)\n",
    "robot.plot_information()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": true
   },
   "source": [
    "# Implementation Notes\n",
    "* Should thoroughly comment everything at some point\n",
    "* TODO: add \"information visualization\" to the system\n",
    "* Change the acquisition function interface\n",
    "* Add a reward computation interface (UCB accumulation, exploit accumulation, explore accumulation, map entropy, access to the final map (MSE?)) [G]\n",
    "* Check out MCTS (performance guarantees and implementation) [V]"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 2",
   "language": "python",
   "name": "python2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.14"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
