{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Discrete-Time Black Scholes\n",
    "Welcome to your first assignment in Reinforcement Learning in Finance. This exercise introduces the Black-Scholes model through the lens of pricing an option as a discrete-time replicating portfolio of stock and bond.\n",
    "\n",
    "**Instructions:**\n",
    "- You will be using Python 3.\n",
    "- Avoid using for-loops and while-loops, unless you are explicitly told to do so.\n",
    "- Do not modify the (# GRADED FUNCTION [function name]) comment in some cells. Your work will not be graded if you change this. Each cell containing that comment should contain only one function.\n",
    "- After coding your function, run the cell right below it to check if your result is correct.\n",
    "\n",
    "\n",
    "Let's get started!"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## About IPython Notebooks ##\n",
    "\n",
    "IPython Notebooks are interactive coding environments embedded in a webpage. You will be using IPython notebooks in this class. You only need to write code between the ### START CODE HERE ### and ### END CODE HERE ### comments. After writing your code, you can run the cell by either pressing \"SHIFT\"+\"ENTER\" or by clicking on \"Run Cell\" (denoted by a play symbol) in the upper bar of the notebook.\n",
    "\n",
    "We will often specify \"(≈ X lines of code)\" in the comments to tell you about how much code you need to write. It is just a rough estimate, so don't feel bad if your code is longer or shorter."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline\n",
    "\n",
    "from numpy.random import standard_normal, seed\n",
    "\n",
    "import scipy.stats as stats\n",
    "from scipy.stats import norm\n",
    "\n",
    "import sys\n",
    "\n",
    "sys.path.append(\"..\")\n",
    "import grading\n",
    "\n",
    "import datetime \n",
    "import time\n",
    "import bspline\n",
    "import bspline.splinelab as splinelab"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "### ONLY FOR GRADING. DO NOT EDIT ###\n",
    "submissions=dict()\n",
    "assignment_key=\"J_L65CoiEeiwfQ53m1Mlug\" \n",
    "all_parts=[\"9jLRK\",\"YoMns\",\"Wc3NN\",\"fcl3r\"]\n",
    "### ONLY FOR GRADING. DO NOT EDIT ###"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "COURSERA_TOKEN = ''  # the token provided to you on the submission page\n",
    "COURSERA_EMAIL = ''  # your Coursera account email"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "# The Black-Scholes prices\n",
    "def bs_put(t, S0, K, r, sigma, T):\n",
    "    d1 = (np.log(S0/K) + (r + 1/2 * sigma**2) * (T-t)) / sigma / np.sqrt(T-t)\n",
    "    d2 = (np.log(S0/K) + (r - 1/2 * sigma**2) * (T-t)) / sigma / np.sqrt(T-t)\n",
    "    price = K * np.exp(-r * (T-t)) * norm.cdf(-d2) - S0 * norm.cdf(-d1)\n",
    "    return price\n",
    "\n",
    "def bs_call(t, S0, K, r, sigma, T):\n",
    "    d1 = (np.log(S0/K) + (r + 1/2 * sigma**2) * (T-t)) / sigma / np.sqrt(T-t)\n",
    "    d2 = (np.log(S0/K) + (r - 1/2 * sigma**2) * (T-t)) / sigma / np.sqrt(T-t)\n",
    "    price = S0 * norm.cdf(d1) - K * np.exp(-r * (T-t)) * norm.cdf(d2)\n",
    "    return price\n",
    "\n",
    "def d1(S0, K, r, sigma, T):\n",
    "    return (np.log(S0/K) + (r + sigma**2 / 2) * T)/(sigma * np.sqrt(T))\n",
    " \n",
    "def d2(S0, K, r, sigma, T):\n",
    "    return (np.log(S0 / K) + (r - sigma**2 / 2) * T) / (sigma * np.sqrt(T))\n",
    " "
   ]
  },
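  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (illustrative only, not part of the graded assignment), the pricers above should satisfy put-call parity, $C - P = S_0 - Ke^{-rT}$. A minimal self-contained sketch with arbitrary parameter values:\n",
    "\n",
    "```python\n",
    "# Illustrative check: put-call parity C - P = S0 - K*exp(-r*T).\n",
    "# The pricers are restated so the snippet is self-contained.\n",
    "import numpy as np\n",
    "from scipy.stats import norm\n",
    "\n",
    "def bs_call(t, S0, K, r, sigma, T):\n",
    "    d1 = (np.log(S0 / K) + (r + 0.5 * sigma ** 2) * (T - t)) / (sigma * np.sqrt(T - t))\n",
    "    d2 = d1 - sigma * np.sqrt(T - t)\n",
    "    return S0 * norm.cdf(d1) - K * np.exp(-r * (T - t)) * norm.cdf(d2)\n",
    "\n",
    "def bs_put(t, S0, K, r, sigma, T):\n",
    "    d1 = (np.log(S0 / K) + (r + 0.5 * sigma ** 2) * (T - t)) / (sigma * np.sqrt(T - t))\n",
    "    d2 = d1 - sigma * np.sqrt(T - t)\n",
    "    return K * np.exp(-r * (T - t)) * norm.cdf(-d2) - S0 * norm.cdf(-d1)\n",
    "\n",
    "S0, K, r, sigma, T = 100.0, 95.0, 0.05, 0.2, 1.0  # arbitrary test values\n",
    "parity_gap = bs_call(0, S0, K, r, sigma, T) - bs_put(0, S0, K, r, sigma, T)\n",
    "print(np.isclose(parity_gap, S0 - K * np.exp(-r * T)))\n",
    "```"
   ]
  },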
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Simulate $N_{MC}$ stock price sample paths with $T$ steps under the classical Black-Scholes (geometric Brownian motion) dynamics.\n",
    "\n",
    "$$dS_t=\\mu S_tdt+\\sigma S_tdW_t\\quad\\quad S_{t+1}=S_te^{\\left(\\mu-\\frac{1}{2}\\sigma^2\\right)\\Delta t+\\sigma\\sqrt{\\Delta t}Z}$$\n",
    "\n",
    "where $Z$ is a standard normal random variable.\n",
    "\n",
    "MC paths are simulated by the gen_paths() method of the DiscreteBlackScholes class."
   ]
  },
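  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The recursion above can be sketched in vectorized NumPy (a minimal illustration with hypothetical parameters; the graded implementation belongs in gen_paths() below):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "np.random.seed(0)\n",
    "num_paths, num_steps = 4, 252              # hypothetical sizes\n",
    "s0, mu, sigma, T = 100.0, 0.05, 0.2, 1.0   # hypothetical parameters\n",
    "dt = T / num_steps\n",
    "\n",
    "# one standard normal draw Z per path and per step\n",
    "Z = np.random.standard_normal((num_paths, num_steps))\n",
    "log_increments = (mu - 0.5 * sigma ** 2) * dt + sigma * np.sqrt(dt) * Z\n",
    "# cumulative sums of the log-increments give log(S_t / S_0)\n",
    "S = s0 * np.exp(np.hstack([np.zeros((num_paths, 1)),\n",
    "                           np.cumsum(log_increments, axis=1)]))\n",
    "print(S.shape)  # (4, 253)\n",
    "```"
   ]
  },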
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Part 1\n",
    "\n",
    "\n",
    "The DiscreteBlackScholes class implements the above calculations, using the following mapping from class variables to math symbols:\n",
    "\n",
    "$$\\Delta S_t=S_{t+1} - e^{r\\Delta t} S_t \\quad t=T-1,...,0$$\n",
    " \n",
    "**Instructions:**\n",
    "Some portions of the code in DiscreteBlackScholes have been removed. You are to implement the missing portions of the DiscreteBlackScholes class.\n",
    "\n",
    "$$\\Pi_t=e^{-r\\Delta t}\\left[\\Pi_{t+1}-u_t \\Delta S_t\\right]\\quad t=T-1,...,0$$\n",
    "\n",
    "- implement DiscreteBlackScholes.function_A_vec() method\n",
    "$$A_{nm}^{\\left(t\\right)}=\\sum_{k=1}^{N_{MC}}{\\Phi_n\\left(X_t^k\\right)\\Phi_m\\left(X_t^k\\right)\\left(\\Delta\\hat{S}_t^k\\right)^2}\\quad\\quad$$ \n",
    "\n",
    "- implement DiscreteBlackScholes.function_B_vec() method\n",
    "$$B_n^{\\left(t\\right)}=\\sum_{k=1}^{N_{MC}}{\\Phi_n\\left(X_t^k\\right)\\left[\\hat\\Pi_{t+1}^k\\Delta\\hat{S}_t^k+\\frac{1}{2\\gamma\\lambda}\\Delta S_t^k\\right]}$$\n",
    "- implement DiscreteBlackScholes.gen_paths() method using the following relation:\n",
    "$$S_{t+1}=S_te^{\\left(\\mu-\\frac{1}{2}\\sigma^2\\right)\\Delta t+\\sigma\\sqrt{\\Delta t}Z}$$\n",
    "where $Z \\sim N(0,1)$\n",
    "- implement parts of DiscreteBlackScholes.roll_backward()\n",
    "    - DiscreteBlackScholes.bVals corresponds to $B_t$ and is computed as $$B_t = e^{-r\\Delta t}\\left[B_{t+1} + (u_{t+1} - u_t)S_{t+1}\\right]\\quad t=T-1,...,0$$\n",
    "    \n",
    "DiscreteBlackScholes.opt_hedge corresponds to $\\phi_t$ and is computed as \n",
    "     $$\\phi_t=\\mathbf A_t^{-1}\\mathbf B_t$$"
   ]
  },
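  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The final step, $\\phi_t=\\mathbf A_t^{-1}\\mathbf B_t$, is a regularized least-squares solve. A minimal sketch with random stand-in data (the array names below are illustrative, not class attributes):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "np.random.seed(1)\n",
    "num_mc, num_basis = 500, 12\n",
    "Phi = np.random.rand(num_mc, num_basis)   # stand-in for basis values Phi_n(X_t^k)\n",
    "dS_hat = np.random.randn(num_mc)          # stand-in for de-meaned Delta S_t^k\n",
    "Pi_hat = np.random.randn(num_mc)          # stand-in for de-meaned Pi_{t+1}^k\n",
    "\n",
    "reg_param = 1e-3\n",
    "# A_nm = sum_k Phi_n Phi_m (dS_hat_k)^2, plus ridge regularization\n",
    "A = Phi.T @ (Phi * (dS_hat ** 2)[:, None]) + reg_param * np.eye(num_basis)\n",
    "# B_n = sum_k Phi_n Pi_hat_k dS_hat_k (risk-aversion term dropped for a pure hedge)\n",
    "B = Phi.T @ (Pi_hat * dS_hat)\n",
    "phi = np.linalg.solve(A, B)  # numerically preferable to forming inv(A) explicitly\n",
    "print(phi.shape)  # (12,)\n",
    "```"
   ]
  },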
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "class DiscreteBlackScholes:\n",
    "    \"\"\"\n",
    "    Class implementing discrete-time Black-Scholes\n",
    "    DiscreteBlackScholes is a class for pricing and hedging under\n",
    "    the real-world measure in a one-dimensional Black-Scholes setting\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self,\n",
    "                 s0,\n",
    "                 strike,\n",
    "                 vol,\n",
    "                 T,\n",
    "                 r,\n",
    "                 mu,\n",
    "                 numSteps,\n",
    "                 numPaths):\n",
    "        \"\"\"\n",
    "        :param s0: initial price of the underlying\n",
    "        :param strike: option strike\n",
    "        :param vol: volatility\n",
    "        :param T: time to maturity, in years\n",
    "        :param r: risk-free rate\n",
    "        :param mu: real-world (physical) drift of the asset\n",
    "        :param numSteps: number of time steps\n",
    "        :param numPaths: number of Monte Carlo paths\n",
    "        \"\"\"\n",
    "        self.s0 = s0\n",
    "        self.strike = strike\n",
    "        self.vol = vol\n",
    "        self.T = T\n",
    "        self.r = r\n",
    "        self.mu = mu\n",
    "        self.numSteps = numSteps\n",
    "        self.numPaths = numPaths\n",
    "\n",
    "        self.dt = self.T / self.numSteps  # time step\n",
    "        self.gamma = np.exp(-r * self.dt)  # discount factor for one time step, i.e. gamma in the QLBS paper\n",
    "\n",
    "        self.sVals = np.zeros((self.numPaths, self.numSteps + 1), 'float')  # matrix of stock values\n",
    "\n",
    "        # optionally initialize half of the paths with stock prices from 0.5 * s0 to 1.5 * s0\n",
    "        # (the branch below is disabled, so all paths start at s0)\n",
    "        half_paths = int(numPaths / 2)\n",
    "\n",
    "        if False:\n",
    "            # Grau (2010) \"Applications of Least-Squares Regressions to Pricing and Hedging of Financial Derivatives\"\n",
    "            self.sVals[:, 0] = (np.hstack((np.linspace(0.5 * s0, 1.5 * s0, half_paths),\n",
    "                                           s0 * np.ones(half_paths, 'float')))).T\n",
    "\n",
    "        self.sVals[:, 0] = s0 * np.ones(numPaths, 'float')\n",
    "        self.optionVals = np.zeros((self.numPaths, self.numSteps + 1), 'float')  # matrix of option values\n",
    "        self.intrinsicVals = np.zeros((self.numPaths, self.numSteps + 1), 'float')\n",
    "\n",
    "        self.bVals = np.zeros((self.numPaths, self.numSteps + 1), 'float')  # matrix of cash position values\n",
    "        self.opt_hedge = np.zeros((self.numPaths, self.numSteps + 1),\n",
    "                              'float')  # matrix of optimal hedges calculated from cross-sectional information F_t\n",
    "        self.X = None\n",
    "        self.data = None  # matrix of features, i.e. self.X as sum of basis functions\n",
    "        self.delta_S_hat = None\n",
    "\n",
    "        # coef = 1.0/(2 * gamma * risk_lambda)\n",
    "        # override it by zero to have pure risk hedge\n",
    "        self.coef = 0.\n",
    "\n",
    "    def gen_paths(self):\n",
    "        \"\"\"\n",
    "        The simplest path generator\n",
    "        \"\"\"\n",
    "        np.random.seed(42)\n",
    "        # Spline basis of order p on knots k\n",
    "\n",
    "        ### START CODE HERE ### (≈ 3-4 lines of code)\n",
    "        # self.sVals = your code goes here ...\n",
    "        # a for-loop or while-loop is allowed here\n",
    "        \n",
    "        Z = np.random.normal(0, 1, size=(self.numSteps + 1, self.numPaths)).T\n",
    "        for t in range(0, self.numSteps):\n",
    "            self.sVals[:, t + 1] = self.sVals[:, t] * np.exp((self.mu - 0.5 * self.vol**2) * self.dt + (self.vol * np.sqrt(self.dt) * Z[:, t + 1]))\n",
    "        \n",
    "        print(self.sVals)\n",
    "        ### END CODE HERE ###\n",
    "\n",
    "        # like in QLBS\n",
    "        delta_S = self.sVals[:, 1:] - np.exp(self.r * self.dt) * self.sVals[:, :self.numSteps]\n",
    "        self.delta_S_hat = np.apply_along_axis(lambda x: x - np.mean(x), axis=0, arr=delta_S)\n",
    "\n",
    "        # state variable\n",
    "        # delta_t here is due to their conventions\n",
    "        self.X = - (self.mu - 0.5 * self.vol ** 2) * np.arange(self.numSteps + 1) * self.dt + np.log(self.sVals)\n",
    "\n",
    "        X_min = np.min(np.min(self.X))\n",
    "        X_max = np.max(np.max(self.X))\n",
    "\n",
    "        print('X.shape = ', self.X.shape)\n",
    "        print('X_min, X_max = ', X_min, X_max)\n",
    "\n",
    "        p = 4  # order of spline (as-is; 3 = cubic, 4: B-spline?)\n",
    "        ncolloc = 12\n",
    "        tau = np.linspace(X_min, X_max, ncolloc)  # These are the sites to which we would like to interpolate\n",
    "\n",
    "        # k is a knot vector that adds endpoints repeats as appropriate for a spline of order p\n",
    "        # To get meaningful results, one should have ncolloc >= p+1\n",
    "        k = splinelab.aptknt(tau, p)\n",
    "        basis = bspline.Bspline(k, p)\n",
    "\n",
    "        num_basis = ncolloc  # len(k) #\n",
    "        self.data = np.zeros((self.numSteps + 1, self.numPaths, num_basis))\n",
    "\n",
    "        print('num_basis = ', num_basis)\n",
    "        print('dim self.data = ', self.data.shape)\n",
    "\n",
    "        # fill it, expand function in finite dimensional space\n",
    "        # in neural network the basis is the neural network itself\n",
    "        t_0 = time.time()\n",
    "        for ix in np.arange(self.numSteps + 1):\n",
    "            x = self.X[:, ix]\n",
    "            self.data[ix, :, :] = np.array([basis(el) for el in x])\n",
    "        t_end = time.time()\n",
    "        print('\\nTime Cost of basis expansion:', t_end - t_0, 'seconds')\n",
    "\n",
    "    def function_A_vec(self, t, reg_param=1e-3):\n",
    "        \"\"\"\n",
    "        function_A_vec - compute the matrix A_{nm} from Eq. (52) (with a regularization!)\n",
    "        Eq. (52) in QLBS Q-Learner in the Black-Scholes-Merton article\n",
    "\n",
    "        Arguments:\n",
    "        t - time index, a scalar, an index into time axis of data_mat\n",
    "        reg_param - a scalar, regularization parameter\n",
    "\n",
    "        Return:\n",
    "        - np.array, i.e. matrix A_{nm} of dimension num_basis x num_basis\n",
    "        \"\"\"\n",
    "        X_mat = self.data[t, :, :]\n",
    "        num_basis_funcs = X_mat.shape[1]\n",
    "        this_dS = self.delta_S_hat[:, t]\n",
    "        hat_dS2 = (this_dS ** 2).reshape(-1, 1)\n",
    "        A_mat = np.dot(X_mat.T, X_mat * hat_dS2) + reg_param * np.eye(num_basis_funcs)\n",
    "        return A_mat\n",
    "\n",
    "    def function_B_vec(self, t, Pi_hat):\n",
    "        \"\"\"\n",
    "        function_B_vec - compute vector B_{n} from Eq. (52) QLBS Q-Learner in the Black-Scholes-Merton article\n",
    "\n",
    "        Arguments:\n",
    "        t - time index, a scalar, an index into time axis of delta_S_hat\n",
    "        Pi_hat - np.array of dimension N_MC, de-meaned portfolio values at time t+1\n",
    "        Return:\n",
    "        B_vec - np.array() of dimension num_basis x 1\n",
    "        \"\"\"\n",
    "        tmp = Pi_hat * self.delta_S_hat[:, t] + self.coef * (np.exp((self.mu - self.r) * self.dt)) * self.sVals[:, t]\n",
    "        X_mat = self.data[t, :, :]  # matrix of dimension N_MC x num_basis\n",
    "\n",
    "        B_vec = np.dot(X_mat.T, tmp)\n",
    "        return B_vec\n",
    "\n",
    "    def seed_intrinsic(self, strike=None, cp='P'):\n",
    "        \"\"\"\n",
    "        Initialize the option value and intrinsic value for each node\n",
    "        \"\"\"\n",
    "        if strike is not None:\n",
    "            self.strike = strike\n",
    "\n",
    "        if cp == 'P':\n",
    "            # payoff function at maturity T: max(K - S(T),0) for all paths\n",
    "            self.optionVals = np.maximum(self.strike - self.sVals[:, -1], 0).copy()\n",
    "            # payoff function for all paths, at all time slices\n",
    "            self.intrinsicVals = np.maximum(self.strike - self.sVals, 0).copy()\n",
    "        elif cp == 'C':\n",
    "            # payoff function at maturity T: max(S(T) -K,0) for all paths\n",
    "            self.optionVals = np.maximum(self.sVals[:, -1] - self.strike, 0).copy()\n",
    "            # payoff function for all paths, at all time slices\n",
    "            self.intrinsicVals = np.maximum(self.sVals - self.strike, 0).copy()\n",
    "        else:\n",
    "            raise Exception('Invalid parameter: %s' % cp)\n",
    "\n",
    "        self.bVals[:, -1] = self.intrinsicVals[:, -1]\n",
    "\n",
    "    def roll_backward(self):\n",
    "        \"\"\"\n",
    "        Roll the price and optimal hedge back in time starting from maturity\n",
    "        \"\"\"\n",
    "\n",
    "        for t in range(self.numSteps - 1, -1, -1):\n",
    "\n",
    "            # determine the expected portfolio value at the next time node\n",
    "            piNext = self.bVals[:, t+1] + self.opt_hedge[:, t+1] * self.sVals[:, t+1]\n",
    "            pi_hat = piNext - np.mean(piNext)\n",
    "\n",
    "            A_mat = self.function_A_vec(t)\n",
    "            B_vec = self.function_B_vec(t, pi_hat)\n",
    "            phi = np.dot(np.linalg.inv(A_mat), B_vec)\n",
    "            self.opt_hedge[:, t] = np.dot(self.data[t, :, :], phi)\n",
    "\n",
    "            ### START CODE HERE ### (≈ 1-2 lines of code)\n",
    "            # implement code to update self.bVals\n",
    "            # self.bVals[:,t] = your code goes here ....\n",
    "            self.bVals[:,t] = np.exp(-self.r * self.dt) * (self.bVals[:, t+1] + (self.opt_hedge[:, t+1] - self.opt_hedge[:, t]) * self.sVals[:, t+1])\n",
    "      \n",
    "\n",
    "            ### END CODE HERE ###\n",
    "\n",
    "        # calculate the initial portfolio value\n",
    "        initPortfolioVal = self.bVals[:, 0] + self.opt_hedge[:, 0] * self.sVals[:, 0]\n",
    "\n",
    "        # all paths start at S0 (the mixed initialization is disabled), so average over all of them\n",
    "        optionVal = np.mean(initPortfolioVal)\n",
    "        optionValVar = np.std(initPortfolioVal)\n",
    "        delta = np.mean(self.opt_hedge[:, 0])\n",
    "\n",
    "        return optionVal, delta, optionValVar"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[ 100.           98.23650359   98.6842395  ...,  111.52820537\n",
      "   111.93345414  111.50088104]\n",
      " [ 100.           99.47538589  100.18466561 ...,   69.58859259\n",
      "    69.36721589   68.46903615]\n",
      " [ 100.           99.57310236  100.94511135 ...,  110.66761375\n",
      "   110.53260244  110.37282496]\n",
      " ..., \n",
      " [ 100.          100.19783913  100.59050962 ...,  151.7887043   151.63565543\n",
      "   152.14692905]\n",
      " [ 100.          100.07733423  101.11151453 ...,  103.08321744\n",
      "   101.41095506  101.46651123]\n",
      " [ 100.           98.57422289   99.36322314 ...,   91.31429149\n",
      "    91.06798685   92.50219743]]\n",
      "X.shape =  (100, 253)\n",
      "X_min, X_max =  4.10743882917 5.16553756345\n",
      "num_basis =  12\n",
      "dim self.data =  (253, 100, 12)\n",
      "\n",
      "Time Cost of basis expansion: 7.709317445755005 seconds\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<matplotlib.text.Text at 0x7fb330f2cac8>"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    },
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAWkAAAEcCAYAAAAFlEU8AAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAGrtJREFUeJzt3X+U3HV97/HnK5tN3KCycAlqFmK8/ginihIaPbT0B7+u\nsf4iordSwZZrKae9tS2WEyTiVejRIzQ9V+y1p23Aej38kCDEFbytASv01p6CJmxiDJArys8BZCms\ntWSBzeZ9/5jvhMlkZndmdma+n+/s63HOHnbmOzP7/gby2g/v7+fz+SoiMDOzNC3IuwAzM2vMIW1m\nljCHtJlZwhzSZmYJc0ibmSXMIW1mljCHtJlZwhzSZmYJc0ib1ZD0vyV9Ju86zMAhbR0g6UFJk5L+\no+rriznXc1oXP7v2XJ+T9N1u/Lx2SDpL0jZJE5KekXSHpJG867L2OKStU94TES+t+vpo3gV10QHn\nCvx+3gVVSDoHuBQ4DzgMeD1wDfBMjmXZHDikrWskvVbS05KOzx4vkzQu6aTs8YOS1ku6JxvxfVnS\nS6rev0zSTdl7HpD0x1XHjpa0OTv2b5WRu6SrgeXALdko98KZPid7zypJd0v6uaRNwEvoMkkfl3Rj\nzXNfkPSXVcdLWU27JZ3a5Ef/LvC3EbEtyp6KiKsiYk+nz8F6wyFtXRMRPwY+DlwjaQnwZeArEXFH\n1cvOAtYArwXeAHwSQNIC4BZgBzACnAqcL2mNpAHgm8BDwIrs+PXZz/ww8DDZaBf4i0afk/2cRcAo\ncDVwOPA14P2d/9M4yPXAOyW9LKtjAPhN4DpJK4GPAm+NiJdR/vN5sMnPnQQ+Iuk3JR3R+bKt1xzS\n1imjWQ+08vV7ABFxJXA/cBfwKuDimvd9MSIeiYingc8Cv5U9/1ZgaUT8WUS8EBE/Aa4EzgTeBiwD\n1kXEsxHxXEQ06gnP9DkAJwCDwBURMRURNwLfn8sfhKTLJf2zpKslDdZ7TUQ8BNwNvC976hRgT0Tc\nCUwDi4FfkDQYEQ9mv/Ca8dvAFsq/nH4q6RZJR1bVdrKk5W2emuXAIW2dsjYihqu+rqw6diXwJuB/\nRcTzNe97pOr7hyiHL8CrgWXVwQ98AngFcDTwUETsbaKumT6H7OeV4sA9ex9q4nPrkvQWYCQifhW4\nD/jADC+/jhd/KX0oe0xE3A+cD1wCPCnpeknL6n5CjYh4IiLOj4jllH+ZvZny/81UfATw/sQF4pC2\nrpL0UuAK4EvAJZIOr3nJ0VXfLwcey75/BHigJvhfFhHvzI4tl7SwwY+tDqGZPgfgcWBEkmrqaNcv\nA7dm338LOHGG134NOEnSUZRH1NftP4GI6yLiVyj/kgng8lYLiYhtwE7gEABJ7wXeDVwt6cOtfp7l\nwyFt3fYFYGtEnAv8H+Bvao7/oaSjsvC+GNiUPf894OfZBbQhSQOS3iTprdmxx4HLJB0i6SWSqsPw\np8B/buJzAP4V2Av8saRBSWdQHoG26zDg37Pvf0a5z11XRIwDd1Du1T8QEfcCSFop6RRJi4HnKPeZ\n9832gyVdJOlESYuzr3OAk7LPh3Iff1tEnBQRV7dzctZ7DmnrlMpsisrX1yWdDrwD+IPsNX8KHC/p\nrKr3XUd55PkT4MfAZwAiYpryqO844AHgKeAq4NDs2HuA11G+SPgo8MGqz/wc8MmstfGxRp+T/ZwX\ngDOAc4Cns8/ZPIc/hwng5dn3h2afOZPrgNOoGkVT7kdfltX6BHAksB5A0j9I+kSDz3o55UD+N8p/\nLh8ETo2Iu7LjrwN+1MrJWP7k22dZXiQ9CJwbEd/Ou5ZOkXQc8KcR8dtZmD4QEV/Nuy4ASe8DXh0R\nV+RdizXPI2mzDoqI7ZRnVfwz8EbgppxLqrYbOFeSQ7pAPJK23PTjSNqs0xzSZmYJc7vDzCxhDmkz\ns4Q1WgxQCEcccUSsWLEi7zLMzFq2bd
u2pyJi6WyvK3RIr1ixgq1bt+ZdhplZyyQ1tf2A2x1mZglz\nSJuZJcwhbWaWMIe0mVnCHNJmZgkr9OwOM7O8jI6V2LBlN49NTLJseIh1a1aydlXnb8rukDYza9Ho\nWIn1m3cyOTUNQGlikvWbdwJ0PKjd7jAza9GGLbv3B3TF5NQ0G7bs7vjPckibmbXosYnJlp6fC4e0\nmVmLlg0PtfT8XDikzcxatG7NSoYGBw54bmhwgHVrVnb8Z/nCoZlZiyoXBz27w8wsUWtXjXQllGu5\n3WFmljCHtJlZwhzSZmYJc0ibmSXMIW1mljCHtJlZwhzSZmYJc0ibmSUsqZCWNCzpRkn3SbpX0i/l\nXZOZWZ5SW3H4BeBbEfEBSYuAJXkXZGaWp2RCWtKhwK8B5wBExAvAC3nWZGaWt5TaHa8BxoEvSxqT\ndJWkQ/IuyswsTymF9ELgeOCvI2IV8CxwUe2LJJ0naaukrePj472u0cysp1IK6UeBRyPiruzxjZRD\n+wARsTEiVkfE6qVLl/a0QDOzXksmpCPiCeARSZVds08F7smxJDOz3CVz4TDzR8C12cyOnwD/Led6\nzMxylVRIR8R2YHXedZiZpSKZdoeZmR3MIW1mljCHtJlZwhzSZmYJc0ibmSXMIW1mljCHtJlZwhzS\nZmYJc0ibmSXMIW1mljCHtJlZwpLau8PM+t/oWIkNW3bz2MQky4aHWLdmJWtXjeRdVrIc0mbWM6Nj\nJdZv3snk1DQApYlJ1m/eCeCgbsDtDjPrmQ1bdu8P6IrJqWk2bNmdU0Xpc0ibWc88NjHZ0vPmkDaz\nHlo2PNTS8+aQNrMeWrdmJUODAwc8NzQ4wLo1Kxu8w3zh0Mx6pnJx0LM7mueQNrOeWrtqxKHcAoe0\nmc2J5z13l0PazNrmec/d5wuHZtY2z3vuPoe0mbXN8567zyFtZm3zvOfuc0ibWds877n7fOHQzNrm\nec/d55A2sznxvOfuSq7dIWlA0pikb+Zdi5lZ3pILaeBPgHvzLsLMLAVJhbSko4B3AVflXYuZWQqS\nCmngCuBCYF+jF0g6T9JWSVvHx8d7V5mZWQ6SuXAo6d3AkxGxTdJJjV4XERuBjQCrV6+OHpVn1pe8\n70b6UhpJnwi8V9KDwPXAKZKuybcks/5V2XejNDFJUN534/xN21n1Z7cyOlbKuzzLJBPSEbE+Io6K\niBXAmcB3IuLsnMsy60ujYyUuuGHHQftuADyzZ4r1m3c6qBORTEibWW9URtDT0bhb6E2S0pFMT7pa\nRNwB3JFzGWZ9qd7OdfV4k6Q0eCRtNs80G77eJCkNDmmzeaaZ8PUmSelwSJvNM/V2rhtcIA5bMoiA\nkeEhPnfGsZ6Kl4gke9Jm1j2V8L30ll08s2cKgEMWL+TT73mjgzlBHkmbzVPPTb24sHdi0tPuUuWQ\nNpuHfG/C4nC7w6zg2lna7XsTFodH0mYFVm9pdzNtC9+bsDgc0mYF1m7bwvcmLA63O8wKrN22he9N\nWBwOabMCWzY8RKlOIDfTtvC9CYvB7Q6zAnPbov95JG1WYG5b9D+HtFnBuW3R3xzSZjnwbausWQ5p\nsx6rzG2uTJ2rzG0GHNR2EF84NOsxL8m2VjikzXrMS7KtFQ5psx7zkmxrhUParMc8t9la4QuHZj3m\nuc3WCoe0WQ48t9ma5ZA26wHPi7Z2OaTNuszzom0uHNJmXVIZPdfbpa4yL9ohbbNxSJt1SHVL49Ch\nQZ59YS9T09Hw9Z4Xbc1IZgqepKMl3S7pHkm7JP1J3jWZNav2NlYTk1MzBjR4XrQ1J6WR9F7ggoi4\nW9LLgG2SbouIe/IuzGw29ZZ6z8Tzoq1ZyYykI+LxiLg7+/7nwL2AG3ZWCK20LgS8/xc9Bc+ak0xI\nV5O0AlgF3FXn2HmStkraOj4+3uvSzA4yOlZigdT06wO4/T7/t2vNSandAYCklwI3AedHxL/XHo+I\njc
BGgNWrV8/c9DPrgupZG6Icuq3yRUNrVlIhLWmQckBfGxGb867HDGaetdHuKMEXDa1ZyYS0JAFf\nAu6NiP+Zdz1mo2MlLr1lF8/smdr/3MTk1AzvaI4vGlorUupJnwh8GDhF0vbs6515F2XzU2VKXXVA\nt2NkeIizT1jOyPAQyh5/7oxjfdHQmpbMSDoivkv5wrdZ7lqdUlePgH+56JTOFGTzVkojabNkdOLC\nnvvO1gkOabM65hqw7jtbp7Qc0pI+JOl6SddKuk7Sb3WjMLM8nXzM0pbfM5DNlXbf2TqpnZ70r0fE\nmZUHkv4K+GrnSjLLX6PFJgtUDuOpfS9OvhsaHHAoW9e0E9KLJb0LeAQ4GnDjzfpOo570voCBBTA8\nNMjPJqe8gb91XTsh/d+BM4A3AY8CH+1oRWY5qyzzno76S1WmpoNDFi9k+6ff3uPKbD6aNaQl/WWj\nQ8BIRFzb2ZLM8lOZH90ooCu8rNt6pZmR9OnAp7pdiFkvzHavwWbnR3t6nfVKMyH9dER8peuVmHVZ\nvXsNnr9pO5fesot3vflV3H7feN1bXdXy9DrrpWZC2jvNWV9oNEp+Zs8U19z58IzvHZDYF+ELhdZz\nySwLN+u2ZkbJ9XiKneWpmZB+i6SD9nWmfOEwIuLlHa7JrC31+s1Awzt2N2PEI2fL2awhHREDvSjE\nbC7q9ZvX3biD6X3BvjYbdiPDQ94gyXLndof1hUtv2XVQv3m2u3XPxBcHLRXeYMkKb3SsNOd9n73n\ns6XKI2krvEtv2TWn9x+yaIDPrD22Q9WYdZZH0lZocx1FDywQn32fA9rS5ZG0FUrtDI5nn9/b9md5\n5oYVgUPaCqPeDI5WHbZkkE+/540OZisMh7Qlq3bUvOeFvW3fd/DBy97V4erMesMhbUnqxKi5YsSb\nIVmB+cKhJakTd+sGGByQ5ztboXkkbUnqxH7N7j9bP3BIW5KWDQ/Nab8NL+e2fuF2hyVp3ZqVDA22\nvm2Ml3Nbv/FI2pJUaVFcesuuGRerHLZkkCWLFja804pZ0SUV0pLeAXwBGACuiojLci7Jumi2W1kB\nPDe1r+H7hwYH3HO2vpdMSEsaAP4K+C+U70L+fUk3R8Q9+VZm3VBvit36zTuBF0fRM83w8GpBmy+S\nCWngbcD9EfETAEnXU74JrkO6D9UL4Mmpac7ftJ3zN21nZIYLhwJfGLR5I6ULhyPAI1WPH82esz4y\nOlbixMu+M+vMjdLEJGpwzHfqtvkkpZF0UySdB5wHsHz58pyrsVbUtjhmE2T3aKt6zrM3bL5JaSRd\nAo6uenxU9twBImJjRKyOiNVLly7tWXE2N6NjJS64YUfLqwgDvBm/zWspjaS/D7xe0msoh/OZwIfy\nLck6oTKCno7Wb2flhSk23yUT0hGxV9JHgS2Up+D9XUTM7ZYblpvq6XULpLYCWuDWhs17yYQ0QET8\nPfD3eddhc1Pbe243oM86YblbGzbvJRXSVlxzGTlXLg4OZO/zHGizFzmkbc7mMnJ2IJvNzCFtc3bJ\nzbva2vvZFwXNZpfSFDwrmNGxEsddeisTk63frdvznc2a45C2tlRaHM0G9NknLPd8Z7M2uN1hbWnl\n9lYjw0N8Zu2xXa7IrD95JG1tafb2Vm5rmM2NQ9ra0swmR25rmM2d2x3WlNoN+k8+Zik3bSvVbXkM\nLhAb/utbHM5mHeCRtM2qcpGwNDFJUN5G9KZtJd7/iyOMZCPqAZU3Fh0ZHnJAm3WQR9I2q0Yb9F9z\n58OMDA9xxQePcyibdYlD2vYbHSsdcOPX2r2c66l32ysz6xy3OwwoB/S6G3cccGfuZhd3T05Ns2HL\n7u4UZjbPOaQNKLc0pqZb362uotkpeWbWGoe0AXMPWd930Kw7HNIGzC1kvWDFrHsc0gbAycc0f7/I\nBYLhoUHvw2HWA57dMQ/UztoYHhrkkve+cX+wjo6V2PT9R5r6rKHB
AYeyWQ85pPtcZdZG9UXBickp\n1n1tB1CeNjfTRcPDlgyyZNHC/SsNvUG/WW85pPtcowCe2hds2LKbtatGZrxoOLFnirFPvb2bJZrZ\nDNyT7nOlGQK4Es4zXTT0rA2zfDmk+1xlT416KgG8bs1KBgcOft3gAnnWhlnO3O7oczPdFLYSwJUe\n80wXF80sHw7pPjcg1Q1q6cC9NtauGnEgmyXI7Y4+12gkPcMA28wS4pF0gdXbiP/2+8YPmC43MjxU\n9+LhiC8ImhWCR9IF9cnRnXxs0/YDNuK/5s6HD3i87ms7OPmYpQwNDhzwXi/jNiuOJEJa0gZJ90n6\ngaSvSxrOu6aUjY6VuPbOh2fdSnRqX/DNHY/zuTOOZWR4yMu4zQoolXbHbcD6iNgr6XJgPfDxnGtK\n1oYtu5ve63licsoXBc0KLImRdETcGhF7s4d3AkflWU/qWt1WdHSs1KVKzKzbUhlJV/sIsKnRQUnn\nAecBLF++vFc15a76IuGCBtPqGqks/zaz4ulZSEv6NvDKOocujohvZK+5GNgLXNvocyJiI7ARYPXq\n1fNiIlntJkmtBDT4rilmRdazkI6I02Y6Lukc4N3AqRGexVs9cpZg3xz+RLz/hllxJdHukPQO4ELg\n1yNiT9715G10rMT6zTuZnJoG5rbwxNPtzIotiZAGvggsBm5TeUOgOyPi9/MtKT8btuzeH9CtGh4a\nRCpvMer9n82KL4mQjojX5V1DStrtIY8MD/EvF53S4WrMLE9JhPR8Vru0e92alSxrsJR7Jm5rmPWn\nJOZJz1eV3nP1Uu71m3fWXco9OKADbv569gnLvYrQbB7wSDono2MlLrhhx0HT6Sanpvcv5a4dYTuE\nzeYfh3SPjY6VuOTmXUxMTjV8TeWY+8tm5nZHD1XaGzMFdMWGLbt7UJGZpc4h3UOtTK3zKkEzA4d0\nT7USvF4laGbgkO6pZoPX0+nMrMIh3UPr1qw8aGpdLU+nM7Nqnt3RQ5Xg3bBlN6WJyf138h7xFDsz\na8Ah3WO+S4qZtcLtDjOzhHkkPQe1+26cfMxSbr9v3KsEzaxjHNJtqt3zuTQxyTV3Prz/eGUfDsBB\nbWZtc7ujTZfcvGvWhSmTU9NeOWhmc+KRdBPqtTWaWdoNXjloZnPjkJ5FvbbGtVVtjdl45aCZzYXb\nHbOot99Gs7cc9MpBM5srj6TrqG5vtHIP2EMWDTC8ZJFnd5hZxzika9S2N5o1NDjAZ9/n5dxm1llu\nd9Ro907d3m/DzLrBIV2jndkYI8NDDmgz6wqHdI1WZ2P44qCZdZNDukYz24lWeFtRM+s2XzisUb2d\n6EyzO4RvFGtm3eeQrqN6O9ETL/sOpTp9ai9SMbNeSKrdIekCSSHpiLxrqajX/nAf2sx6JZmRtKSj\ngbcDza+57oHa9ocXqZhZLyUT0sDngQuBb+RdSC3fTcXM8pJEu0PS6UApInY08drzJG2VtHV8fLwH\n1ZmZ5adnI2lJ3wZeWefQxcAnKLc6ZhURG4GNAKtXr25la42Dthx128LMUtezkI6I0+o9L+lY4DXA\nDkkARwF3S3pbRDzRqZ9fb8tR3znFzFKXe7sjInZGxJERsSIiVgCPAsd3MqCh/p4cvnOKmaUu95Du\nlUZ7cvjOKWaWsuRCOhtRP9Xpz220+MSLUswsZcmFdLd4UYqZFVFK86S7yotSzKyI5k1IgxelmFnx\nzJt2h5lZETmkzcwS5pA2M0uYQ9rMLGEOaTOzhCmipT2KkiJpHHioweEjgI4vismJzyU9/XIe4HPJ\ny6sjYulsLyp0SM9E0taIWJ13HZ3gc0lPv5wH+FxS53aHmVnCHNJmZgnr55DemHcBHeRzSU+/nAf4\nXJLWtz1pM7N+0M8jaTOzwnNIm5klrG9CWtKwpBsl3SfpXkm/JOlwSbdJ+lH2z8PyrnM2kj4maZek\nH0r6qqSXFOU8JP2dpCcl/bDq
uYa1S1ov6X5JuyWtyafq+hqcy4bsv68fSPq6pOGqY4U6l6pjF0gK\nSUdUPVe4c5H0R9m/m12S/rzq+WTPpWkR0RdfwFeAc7PvFwHDwJ8DF2XPXQRcnneds5zDCPAAMJQ9\nvgE4pyjnAfwacDzww6rn6tYO/AKwA1hM+UbEPwYG8j6HWc7l7cDC7PvLi3wu2fNHA1soLwg7oqjn\nApwMfBtYnD0+sgjn0uxXX4ykJR1K+V/elwAi4oWImABOpxzeZP9cm0+FLVkIDElaCCwBHqMg5xER\n/xd4uubpRrWfDlwfEc9HxAPA/cDbelJoE+qdS0TcGhF7s4d3Ur6zPRTwXDKfBy4EqmcPFPFc/gC4\nLCKez17zZPZ80ufSrL4Iacq/JceBL0sak3SVpEOAV0TE49lrngBekVuFTYiIEvAXwMPA48DPIuJW\nCnYeNRrVPgI8UvW6R7PniuIjwD9k3xfuXCSdDpQiYkfNocKdC/AG4Fcl3SXpnyS9NXu+iOdykH4J\n6YWU/xforyNiFfAs5f+13i/K//+T9HzDrF97OuVfOsuAQySdXf2aIpxHI0WuvZqki4G9wLV519IO\nSUuATwCfyruWDlkIHA6cAKwDbpCkfEvqnH4J6UeBRyPiruzxjZRD+6eSXgWQ/fPJBu9PxWnAAxEx\nHhFTwGbglyneeVRrVHuJck+04qjsuaRJOgd4N3BW9ksHincur6U8ENgh6UHK9d4t6ZUU71yg/Pd/\nc5R9D9hHeaOlIp7LQfoipCPiCeARSZVbf58K3APcDPxO9tzvAN/IobxWPAycIGlJNhI4FbiX4p1H\ntUa13wycKWmxpNcArwe+l0N9TZP0Dso93PdGxJ6qQ4U6l4jYGRFHRsSKiFhBOeSOz/4eFepcMqOU\nLx4i6Q2UJw48RTHP5WB5X7ns1BdwHLAV+AHlf2mHAf8J+EfgR5Sv/h6ed51NnMelwH3AD4GrKV+Z\nLsR5AF+l3EufovwX/3dnqh24mPIV993Ab+RdfxPncj/lHuf27OtvinouNccfJJvdUcRzoRzK12R/\nZ+4GTinCuTT75WXhZmYJ64t2h5lZv3JIm5klzCFtZpYwh7SZWcIc0mZmCXNIm5klzCFtlpH0KknX\nS9oq6f9Juj3vmswW5l2AWUKuBq6MiE0Ako7NuR4zL2YxA5A0ADwPHBXl5dFmSXC7wwyIiGnKy9Z3\nSPpbSSdWjqV6JxybHxzSZi/6DeD9wM+Ab0mq3KDg8/mVZPOde9JmmSj3/r4LfDcbPb9Z0nPAMZLW\nRcSGfCu0+cgjaTNA0hpJi7LvjwR+BbiN8paX1zigLS8OabOyDwD3StoBfBP4HxHxr8CbKd/M1CwX\nbneYARHxew0OPQWcK+mpiLi3lzWZgafgmZklze0OM7OEOaTNzBLmkDYzS5hD2swsYQ5pM7OEOaTN\nzBLmkDYzS5hD2swsYQ5pM7OE/X/tRcYsgGPH/wAAAABJRU5ErkJggg==\n",
      "text/plain": [
       "<matplotlib.figure.Figure at 0x7fb33b441438>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "np.random.seed(42)\n",
    "strike_k = 95\n",
    "test_vol = 0.2\n",
    "test_mu = 0.03\n",
    "dt = 0.01\n",
    "rfr = 0.05\n",
    "num_paths = 100\n",
    "num_periods = 252\n",
    "\n",
    "hMC = DiscreteBlackScholes(100, strike_k, test_vol, 1., rfr, test_mu, num_periods, num_paths)\n",
    "hMC.gen_paths()\n",
    "\n",
    "t = hMC.numSteps - 1\n",
    "piNext = hMC.bVals[:, t+1] + 0.1 * hMC.sVals[:, t+1]\n",
    "pi_hat = piNext - np.mean(piNext)\n",
    "\n",
    "A_mat = hMC.function_A_vec(t)\n",
    "B_vec = hMC.function_B_vec(t, pi_hat)\n",
    "phi = np.dot(np.linalg.inv(A_mat), B_vec)\n",
    "opt_hedge = np.dot(hMC.data[t, :, :], phi)\n",
    "\n",
    "# plot the results\n",
    "fig = plt.figure(figsize=(12,4))\n",
    "ax1 = fig.add_subplot(121)\n",
    "\n",
    "ax1.scatter(hMC.sVals[:,t], pi_hat)\n",
    "ax1.set_title(r'Expected $\\Pi_0$ vs. $S_t$')\n",
    "ax1.set_xlabel(r'$S_t$')\n",
    "ax1.set_ylabel(r'$\\Pi_0$')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Submission successful, please check on the coursera grader page for the status\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "array([ 0.81274895, -3.49043554,  0.69994334,  1.61239986, -0.25153316,\n",
       "       -3.19082265,  0.8848621 , -2.0380868 ,  0.45033564,  3.74872863,\n",
       "       -0.6568227 ,  1.74148929,  0.94314331, -4.19716113,  1.72135256,\n",
       "       -0.66188482,  6.95675041, -2.20512677, -0.14942482,  0.30067272,\n",
       "        3.33419402,  0.68536713,  1.65097153,  2.69898611,  1.22528159,\n",
       "        1.47188744, -2.48129898, -0.37360224,  0.81064666, -1.05269459,\n",
       "        0.02476551, -1.88267258,  0.11748169, -0.9038195 ,  0.69753811,\n",
       "       -0.54805029,  1.97594593, -0.44331403,  0.62134931, -1.86191032,\n",
       "       -3.21226413,  2.24508097, -2.23451292, -0.13488281,  3.64364848,\n",
       "       -0.11270281, -1.15582237, -3.30169455,  1.74454841, -1.10425448,\n",
       "        2.10192819,  1.80570507, -1.68587001, -1.42113397, -2.70292006,\n",
       "        0.79454199, -2.05396827,  3.13973887, -1.08786662,  0.42347686,\n",
       "        1.32787012,  0.55924965, -3.54140814, -3.70258632,  2.14853641,\n",
       "        1.11495458,  3.69639676,  0.62864736, -2.62282995, -0.05315552,\n",
       "        1.05789698,  1.8023196 , -3.35217374, -2.30436466, -2.68609519,\n",
       "        0.95284884, -1.35963013, -0.56273408, -0.08311276,  0.79044269,\n",
       "        0.46247485, -1.04921463, -2.18122285,  1.82920128,  1.05635272,\n",
       "        0.90161346, -1.93870347, -0.37549305, -1.96383274,  1.9772888 ,\n",
       "       -1.37386984,  0.95230068,  0.88842589, -1.42214528, -2.60256696,\n",
       "       -1.53509699,  4.47491253,  4.87735375, -0.19068803, -1.08711941])"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "### GRADED PART (DO NOT EDIT) ###\n",
    "\n",
    "part_1 = list(pi_hat)\n",
    "try:\n",
    "    part1 = \" \".join(map(repr, part_1))\n",
    "except TypeError:\n",
    "    part1 = repr(part_1)\n",
    "submissions[all_parts[0]]=part1\n",
    "grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:1],all_parts,submissions)\n",
    "pi_hat\n",
    "### GRADED PART (DO NOT EDIT) ###"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[ 100.          101.44740793  119.84140463 ...,  192.78653975  210.7076386\n",
      "   167.37134738]\n",
      " [ 100.           98.79378416   81.67103247 ...,   78.75163254\n",
      "   104.69106128  114.29766651]\n",
      " [ 100.          116.62110943  127.89787986 ...,   85.9631909    79.72061217\n",
      "    78.03372489]\n",
      " ..., \n",
      " [ 100.          106.73222875  103.49782882 ...,  108.30352919\n",
      "    96.76512324  114.08668191]\n",
      " [ 100.           96.45073828   98.70345177 ...,   89.5899346    75.07626471\n",
      "    91.91332688]\n",
      " [ 100.          101.81014094  115.21893111 ...,   68.72837469\n",
      "    64.71929858   65.04500528]]\n",
      "X.shape =  (50000, 7)\n",
      "X_min, X_max =  2.96880459823 6.37164911461\n",
      "num_basis =  12\n",
      "dim self.data =  (7, 50000, 12)\n",
      "\n",
      "Time Cost of basis expansion: 96.63819098472595 seconds\n",
      "Option value =  13.1083499076\n",
      "Option value variance =  5.17079676287\n",
      "Option delta =  -0.356133722933\n",
      "BS value 13.1458939003\n"
     ]
    }
   ],
   "source": [
    "# input parameters\n",
    "s0 = 100.0\n",
    "strike = 100.0\n",
    "r = 0.05\n",
    "mu = 0.07 # 0.05\n",
    "vol = 0.4\n",
    "T = 1.0\n",
    "\n",
    "# Simulation Parameters\n",
    "numPaths = 50000  # number of Monte Carlo trials\n",
    "numSteps = 6\n",
    "\n",
    "# create the class object\n",
    "hMC = DiscreteBlackScholes(s0, strike, vol, T, r, mu, numSteps, numPaths)\n",
    "\n",
    "# calculation\n",
    "hMC.gen_paths()\n",
    "hMC.seed_intrinsic()\n",
    "option_val, delta, option_val_variance = hMC.roll_backward()\n",
    "bs_call_value = bs_put(0, s0, K=strike, r=r, sigma=vol, T=T)\n",
    "print('Option value = ', option_val)\n",
    "print('Option value variance = ', option_val_variance)\n",
    "print('Option delta = ', delta)  \n",
    "print('BS value', bs_call_value)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Submission successful, please check on the coursera grader page for the status\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "13.10834990762385"
      ]
     },
     "execution_count": 18,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "### GRADED PART (DO NOT EDIT) ###\n",
    "part2 = str(option_val)\n",
    "submissions[all_parts[1]]=part2\n",
    "grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:2],all_parts,submissions)\n",
    "option_val\n",
    "### GRADED PART (DO NOT EDIT) ###"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[ 100.          101.44740793  119.84140463 ...,  192.78653975  210.7076386\n",
      "   167.37134738]\n",
      " [ 100.           98.79378416   81.67103247 ...,   78.75163254\n",
      "   104.69106128  114.29766651]\n",
      " [ 100.          116.62110943  127.89787986 ...,   85.9631909    79.72061217\n",
      "    78.03372489]\n",
      " ..., \n",
      " [ 100.          106.73222875  103.49782882 ...,  108.30352919\n",
      "    96.76512324  114.08668191]\n",
      " [ 100.           96.45073828   98.70345177 ...,   89.5899346    75.07626471\n",
      "    91.91332688]\n",
      " [ 100.          101.81014094  115.21893111 ...,   68.72837469\n",
      "    64.71929858   65.04500528]]\n",
      "X.shape =  (50000, 7)\n",
      "X_min, X_max =  2.96880459823 6.37164911461\n",
      "num_basis =  12\n",
      "dim self.data =  (7, 50000, 12)\n",
      "\n",
      "Time Cost of basis expansion: 95.9269437789917 seconds\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "array([  6.70326307,   8.59543726,  10.74614496,  13.1458939 ,\n",
       "        15.78197485,  18.63949388])"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "strikes = np.linspace(85, 110, 6)\n",
    "results = [None] * len(strikes)\n",
    "bs_prices = np.zeros(len(strikes))\n",
    "bs_deltas = np.zeros(len(strikes))\n",
    "numPaths = 50000\n",
    "hMC = DiscreteBlackScholes(s0, strike, vol, T, r, mu, numSteps, numPaths)\n",
    "hMC.gen_paths()\n",
    "for ix, k_strike in enumerate(strikes):\n",
    "    hMC.seed_intrinsic(k_strike)\n",
    "    results[ix] = hMC.roll_backward()\n",
    "    bs_prices[ix] = bs_put(0, s0, K=k_strike, r=r, sigma=vol, T=T)\n",
    "    bs_deltas[ix] = norm.cdf(d1(s0, K=k_strike, r=r, sigma=vol, T=T)) - 1\n",
    "bs_prices"
   ]
  },
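  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check on the `bs_put` benchmark used above, the closed-form Black-Scholes put price can be written out directly. The sketch below is not part of the assignment (the helper name `bs_put_closed_form` is ours); it assumes the same parameters as the first pricing run:\n",
    "```python\n",
    "import numpy as np\n",
    "from scipy.stats import norm\n",
    "\n",
    "def bs_put_closed_form(s0, K, r, sigma, T):\n",
    "    # standard Black-Scholes d1 and d2 terms\n",
    "    d1 = (np.log(s0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * np.sqrt(T))\n",
    "    d2 = d1 - sigma * np.sqrt(T)\n",
    "    # put price: K * exp(-r*T) * N(-d2) - S0 * N(-d1)\n",
    "    return K * np.exp(-r * T) * norm.cdf(-d2) - s0 * norm.cdf(-d1)\n",
    "\n",
    "bs_put_closed_form(100.0, 100.0, 0.05, 0.4, 1.0)  # approx. 13.1459, the 'BS value' printed earlier\n",
    "```\n",
    "The corresponding put delta, `norm.cdf(d1) - 1`, is what `bs_deltas` computes in the cell above."
   ]
  },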
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": [
    "mc_prices = np.array([x[0] for x in results])\n",
    "mc_deltas = np.array([x[1] for x in results])\n",
    "price_variances = np.array([x[-1] for x in results])\n",
    "prices_diff = mc_prices - bs_prices\n",
    "deltas_diff = mc_deltas - bs_deltas\n",
    "# price_variances"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Submission successful, please check on the coursera grader page for the status\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "array([-0.03641511, -0.04034139, -0.03996597, -0.03754399, -0.03240009,\n",
       "       -0.02997062])"
      ]
     },
     "execution_count": 21,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "### GRADED PART (DO NOT EDIT) ###\n",
    "\n",
    "part_3 = list(prices_diff)\n",
    "try:\n",
    "    part3 = \" \".join(map(repr, part_3))\n",
    "except TypeError:\n",
    "    part3 = repr(part_3)\n",
    "submissions[all_parts[2]]=part3\n",
    "grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:3],all_parts,submissions)\n",
    "prices_diff\n",
    "### GRADED PART (DO NOT EDIT) ###"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Submission successful, please check on the coursera grader page for the status\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "array([ 0.01279798,  0.01416019,  0.01532701,  0.01645681,  0.01715345,\n",
       "        0.01780652])"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "### GRADED PART (DO NOT EDIT) ###\n",
    "part_4 = list(deltas_diff)\n",
    "try:\n",
    "    part4 = \" \".join(map(repr, part_4))\n",
    "except TypeError:\n",
    "    part4= repr(part_4)\n",
    "submissions[all_parts[3]]=part4\n",
    "grading.submit(COURSERA_EMAIL, COURSERA_TOKEN, assignment_key,all_parts[:4],all_parts,submissions)\n",
    "deltas_diff\n",
    "### GRADED PART (DO NOT EDIT) ###"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": true
   },
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "coursera": {
   "course_slug": "reinforcement-learning-in-finance"
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
