{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "_kFoQyUpUAGb"
   },
   "outputs": [],
   "source": [
    "# ML_in_Finance-Interpretability\n",
    "# Author: Matthew Dixon\n",
    "# Version: 1.0 (08.09.2019)\n",
    "# License: MIT\n",
    "# Email: matthew.dixon@iit.edu\n",
    "# Notes: tested on Mac OS X with Python 3.6.9 and the following packages:\n",
    "# numpy=1.18.1, keras=2.3.1, tensorflow=2.0.0, statsmodels=0.10.1, scikit-learn=0.22.1\n",
    "# Citation: Please cite the following reference if this notebook is used for research purposes:\n",
    "# Dixon M.F., I. Halperin and P. Bilokon, Machine Learning in Finance: From Theory to Practice, Springer Graduate textbook Series, 2020. "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "UudbSI59UAGm"
   },
   "source": [
    "# Overview\n",
    "The purpose of this notebook is to illustrate a neural network interpretability method which is compatible with linear regression. \n",
    "\n",
    "In linear regression, provided the independent variables are scaled, one can view the regression coefficients as a measure of importance of the variables. Equivalently, the dependent variable can be differentiated w.r.t. the inputs to give the coefficient. \n",
    "\n",
    "Similarly, the derivatives of the network w.r.t. the inputs are a non-linear generalization of interpretability in linear regression. Moreover, we should expect the neural network gradients to approximate the linear regression coefficients when the data is generated by a linear regression model. \n",
    "\n",
    "Various simple experimental tests, corresponding to Section 3 of Chpt 5, are performed to illustrate the properties of network interpretability."
   ]
  },
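  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make this concrete (a minimal illustrative sketch, using hypothetical coefficients rather than fitted ones): for a linear model, central finite differences of the output w.r.t. the inputs recover the coefficients exactly. This is the property that the network gradients generalize."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: the gradient of a linear model w.r.t. its inputs\n",
    "# equals the regression coefficients. Coefficients here are hypothetical.\n",
    "import numpy as np\n",
    "\n",
    "beta = np.array([0.5, 1.0, 2.0])  # [intercept, beta_1, beta_2]\n",
    "f = lambda x: beta[0] + beta[1:] @ x\n",
    "\n",
    "# Central finite differences approximate the gradient at any point x\n",
    "x, h = np.array([0.3, -0.7]), 1e-6\n",
    "grad = np.array([(f(x + h*e) - f(x - h*e)) / (2*h) for e in np.eye(2)])\n",
    "print(grad)  # close to beta[1:] = [1., 2.]"
   ]
  },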
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 89
    },
    "colab_type": "code",
    "id": "Zfd1onEAUAGn",
    "outputId": "fbc1c8da-b7b9-453e-aeab-e75cecfa8ce7"
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from keras.models import Sequential\n",
    "from keras.layers import Dense\n",
    "from keras.callbacks import EarlyStopping\n",
    "from keras.wrappers.scikit_learn import KerasRegressor\n",
    "import statsmodels.api as sm\n",
    "import sklearn"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "3zwrQkfYUAGy"
   },
   "source": [
    "## Simple Data Generation Process (DGP)\n",
    "\n",
    "\n",
    "Let us generate data from the following linear regression model\n",
    "\n",
    "$Y=X_1+X_2 + \\epsilon~, ~~X_1, X_2 \\sim N(0,1)~, ~~\\epsilon \\sim N(0,1)$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "diSrymSRUAGz"
   },
   "outputs": [],
   "source": [
    "M = 5000 # Number of samples\n",
    "np.random.seed(7) # Set NumPy's random seed for reproducibility\n",
    "X = np.zeros(shape=(M, 2))\n",
    "X[:int(M/2), 0] = np.random.randn(int(M/2))\n",
    "X[:int(M/2), 1] = np.random.randn(int(M/2))\n",
    "\n",
    "# Use antithetic sampling to reduce the bias in the mean\n",
    "X[int(M/2):, 0] = -X[:int(M/2), 0]\n",
    "X[int(M/2):, 1] = -X[:int(M/2), 1]\n",
    "\n",
    "eps = np.zeros(shape=(M,1))\n",
    "eps[:int(M/2)] = np.random.randn(int(M/2), 1)\n",
    "eps[int(M/2):] = -eps[:int(M/2)]\n",
    "Y = X[:, 0] + X[:, 1] + eps.flatten()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "kx6JfSujUAHC"
   },
   "source": [
    "## Use ordinary least squares to fit a linear model to the data\n",
    "For a baseline, let us compare the neural network with OLS regression. \n",
    "\n",
    "We fit statsmodels' OLS model to the data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "LJTP5gc1UAHD"
   },
   "outputs": [],
   "source": [
    "ols_results = sm.OLS(Y, sm.add_constant(X)).fit()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For each input, get the predicted $Y$ value according to the model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "qqS2vAz_UAHL"
   },
   "outputs": [],
   "source": [
    "y_ols = ols_results.predict(sm.add_constant(X))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "View characteristics of the resulting model. You should observe that the intercept is close to zero and the other coefficients are close to one."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 473
    },
    "colab_type": "code",
    "id": "i8fHdPH3UAHQ",
    "outputId": "c2297cf9-91f8-443e-c61d-b63f1fb6bb85"
   },
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table class=\"simpletable\">\n",
       "<caption>OLS Regression Results</caption>\n",
       "<tr>\n",
       "  <th>Dep. Variable:</th>            <td>y</td>        <th>  R-squared:         </th> <td>   0.678</td> \n",
       "</tr>\n",
       "<tr>\n",
       "  <th>Model:</th>                   <td>OLS</td>       <th>  Adj. R-squared:    </th> <td>   0.677</td> \n",
       "</tr>\n",
       "<tr>\n",
       "  <th>Method:</th>             <td>Least Squares</td>  <th>  F-statistic:       </th> <td>   5249.</td> \n",
       "</tr>\n",
       "<tr>\n",
       "  <th>Date:</th>             <td>Mon, 18 May 2020</td> <th>  Prob (F-statistic):</th>  <td>  0.00</td>  \n",
       "</tr>\n",
       "<tr>\n",
       "  <th>Time:</th>                 <td>16:36:00</td>     <th>  Log-Likelihood:    </th> <td> -7020.4</td> \n",
       "</tr>\n",
       "<tr>\n",
       "  <th>No. Observations:</th>      <td>  5000</td>      <th>  AIC:               </th> <td>1.405e+04</td>\n",
       "</tr>\n",
       "<tr>\n",
       "  <th>Df Residuals:</th>          <td>  4997</td>      <th>  BIC:               </th> <td>1.407e+04</td>\n",
       "</tr>\n",
       "<tr>\n",
       "  <th>Df Model:</th>              <td>     2</td>      <th>                     </th>     <td> </td>    \n",
       "</tr>\n",
       "<tr>\n",
       "  <th>Covariance Type:</th>      <td>nonrobust</td>    <th>                     </th>     <td> </td>    \n",
       "</tr>\n",
       "</table>\n",
       "<table class=\"simpletable\">\n",
       "<tr>\n",
       "    <td></td>       <th>coef</th>     <th>std err</th>      <th>t</th>      <th>P>|t|</th>  <th>[0.025</th>    <th>0.975]</th>  \n",
       "</tr>\n",
       "<tr>\n",
       "  <th>const</th> <td>   4.1e-18</td> <td>    0.014</td> <td> 2.94e-16</td> <td> 1.000</td> <td>   -0.027</td> <td>    0.027</td>\n",
       "</tr>\n",
       "<tr>\n",
       "  <th>x1</th>    <td>    0.9858</td> <td>    0.014</td> <td>   70.409</td> <td> 0.000</td> <td>    0.958</td> <td>    1.013</td>\n",
       "</tr>\n",
       "<tr>\n",
       "  <th>x2</th>    <td>    1.0190</td> <td>    0.014</td> <td>   72.695</td> <td> 0.000</td> <td>    0.992</td> <td>    1.047</td>\n",
       "</tr>\n",
       "</table>\n",
       "<table class=\"simpletable\">\n",
       "<tr>\n",
       "  <th>Omnibus:</th>       <td> 0.800</td> <th>  Durbin-Watson:     </th> <td>   1.941</td>\n",
       "</tr>\n",
       "<tr>\n",
       "  <th>Prob(Omnibus):</th> <td> 0.670</td> <th>  Jarque-Bera (JB):  </th> <td>   0.750</td>\n",
       "</tr>\n",
       "<tr>\n",
       "  <th>Skew:</th>          <td> 0.000</td> <th>  Prob(JB):          </th> <td>   0.687</td>\n",
       "</tr>\n",
       "<tr>\n",
       "  <th>Kurtosis:</th>      <td> 3.060</td> <th>  Cond. No.          </th> <td>    1.02</td>\n",
       "</tr>\n",
       "</table><br/><br/>Warnings:<br/>[1] Standard Errors assume that the covariance matrix of the errors is correctly specified."
      ],
      "text/plain": [
       "<class 'statsmodels.iolib.summary.Summary'>\n",
       "\"\"\"\n",
       "                            OLS Regression Results                            \n",
       "==============================================================================\n",
       "Dep. Variable:                      y   R-squared:                       0.678\n",
       "Model:                            OLS   Adj. R-squared:                  0.677\n",
       "Method:                 Least Squares   F-statistic:                     5249.\n",
       "Date:                Mon, 18 May 2020   Prob (F-statistic):               0.00\n",
       "Time:                        16:36:00   Log-Likelihood:                -7020.4\n",
       "No. Observations:                5000   AIC:                         1.405e+04\n",
       "Df Residuals:                    4997   BIC:                         1.407e+04\n",
       "Df Model:                           2                                         \n",
       "Covariance Type:            nonrobust                                         \n",
       "==============================================================================\n",
       "                 coef    std err          t      P>|t|      [0.025      0.975]\n",
       "------------------------------------------------------------------------------\n",
       "const         4.1e-18      0.014   2.94e-16      1.000      -0.027       0.027\n",
       "x1             0.9858      0.014     70.409      0.000       0.958       1.013\n",
       "x2             1.0190      0.014     72.695      0.000       0.992       1.047\n",
       "==============================================================================\n",
       "Omnibus:                        0.800   Durbin-Watson:                   1.941\n",
       "Prob(Omnibus):                  0.670   Jarque-Bera (JB):                0.750\n",
       "Skew:                           0.000   Prob(JB):                        0.687\n",
       "Kurtosis:                       3.060   Cond. No.                         1.02\n",
       "==============================================================================\n",
       "\n",
       "Warnings:\n",
       "[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n",
       "\"\"\""
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "ols_results.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "m_M_l4RFUAHV"
   },
   "source": [
    "## Compare with a feedforward NN with no hidden layers\n",
    "\n",
    "Recall that the feedforward network with no hidden layers or activation function is a linear regression model.\n",
    "\n",
    "Create a build function for the linear perceptron, which transforms the inputs directly to a single output"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "gK7cKS-fUAHW"
   },
   "outputs": [],
   "source": [
    "def linear_NN0_model(l1_reg=0.0):    \n",
    "    model = Sequential()\n",
    "    model.add(Dense(1, input_dim=2, kernel_initializer='normal'))\n",
    "    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mae', 'mse'])\n",
    "    return model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "An early stopping callback to terminate training once the weights appear to have converged to an optimum. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "FVNCG9q_UAHc"
   },
   "outputs": [],
   "source": [
    "es = EarlyStopping(monitor='loss', mode='min', verbose=1, patience=10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Passing the build function for our model and training parameters to the `KerasRegressor` constructor to create a Scikit-learn-compatible regression model. This allows you to take advantage of the library's built-in tools and estimator methods, and to incorporate it into Scikit-learn pipelines. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "hlK793fFUAHh"
   },
   "outputs": [],
   "source": [
    "lm = KerasRegressor(build_fn=linear_NN0_model, epochs=40, batch_size=10, verbose=1, callbacks=[es])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Train the model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 955
    },
    "colab_type": "code",
    "id": "2KDwpplKUAHm",
    "outputId": "6f4c4451-df7d-4415-98b6-f6527f319a2e",
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/40\n",
      "5000/5000 [==============================] - 3s 601us/step - loss: 2.4874 - mae: 1.2604 - mse: 2.4874\n",
      "Epoch 2/40\n",
      "5000/5000 [==============================] - 2s 364us/step - loss: 1.5458 - mae: 0.9998 - mse: 1.5458\n",
      "Epoch 3/40\n",
      "5000/5000 [==============================] - 2s 318us/step - loss: 1.1422 - mae: 0.8552 - mse: 1.1422\n",
      "Epoch 4/40\n",
      "5000/5000 [==============================] - 1s 261us/step - loss: 1.0092 - mae: 0.8027 - mse: 1.0092\n",
      "Epoch 5/40\n",
      "5000/5000 [==============================] - 2s 303us/step - loss: 0.9773 - mae: 0.7890 - mse: 0.9773 1s - loss: 1.0402 - ma\n",
      "Epoch 6/40\n",
      "5000/5000 [==============================] - 2s 319us/step - loss: 0.9719 - mae: 0.7869 - mse: 0.9719\n",
      "Epoch 7/40\n",
      "5000/5000 [==============================] - 2s 303us/step - loss: 0.9712 - mae: 0.7865 - mse: 0.9712\n",
      "Epoch 8/40\n",
      "5000/5000 [==============================] - 1s 286us/step - loss: 0.9713 - mae: 0.7865 - mse: 0.9713\n",
      "Epoch 9/40\n",
      "5000/5000 [==============================] - 1s 290us/step - loss: 0.9716 - mae: 0.7868 - mse: 0.9716\n",
      "Epoch 10/40\n",
      "5000/5000 [==============================] - 1s 257us/step - loss: 0.9714 - mae: 0.7868 - mse: 0.9714\n",
      "Epoch 11/40\n",
      "5000/5000 [==============================] - 1s 263us/step - loss: 0.9714 - mae: 0.7868 - mse: 0.9714\n",
      "Epoch 12/40\n",
      "5000/5000 [==============================] - 1s 264us/step - loss: 0.9713 - mae: 0.7867 - mse: 0.9713\n",
      "Epoch 13/40\n",
      "5000/5000 [==============================] - 1s 266us/step - loss: 0.9713 - mae: 0.7867 - mse: 0.9713\n",
      "Epoch 14/40\n",
      "5000/5000 [==============================] - 1s 266us/step - loss: 0.9714 - mae: 0.7866 - mse: 0.9714\n",
      "Epoch 15/40\n",
      "5000/5000 [==============================] - 1s 271us/step - loss: 0.9716 - mae: 0.7868 - mse: 0.9716\n",
      "Epoch 16/40\n",
      "5000/5000 [==============================] - 1s 265us/step - loss: 0.9712 - mae: 0.7867 - mse: 0.9712\n",
      "Epoch 17/40\n",
      "5000/5000 [==============================] - 1s 279us/step - loss: 0.9716 - mae: 0.7870 - mse: 0.9716\n",
      "Epoch 18/40\n",
      "5000/5000 [==============================] - 1s 271us/step - loss: 0.9714 - mae: 0.7869 - mse: 0.9714 0s - loss: 0.9701 - mae: 0.7854 - mse: \n",
      "Epoch 19/40\n",
      "5000/5000 [==============================] - 1s 281us/step - loss: 0.9714 - mae: 0.7866 - mse: 0.9714\n",
      "Epoch 20/40\n",
      "5000/5000 [==============================] - 2s 399us/step - loss: 0.9716 - mae: 0.7868 - mse: 0.9716 1s - loss: 0.9538 -\n",
      "Epoch 21/40\n",
      "5000/5000 [==============================] - 1s 290us/step - loss: 0.9716 - mae: 0.7867 - mse: 0.9716\n",
      "Epoch 22/40\n",
      "5000/5000 [==============================] - 1s 286us/step - loss: 0.9714 - mae: 0.7868 - mse: 0.9714\n",
      "Epoch 23/40\n",
      "5000/5000 [==============================] - 1s 279us/step - loss: 0.9713 - mae: 0.7866 - mse: 0.9713\n",
      "Epoch 24/40\n",
      "5000/5000 [==============================] - 1s 279us/step - loss: 0.9714 - mae: 0.7867 - mse: 0.9714\n",
      "Epoch 25/40\n",
      "5000/5000 [==============================] - 1s 274us/step - loss: 0.9716 - mae: 0.7870 - mse: 0.9716\n",
      "Epoch 26/40\n",
      "5000/5000 [==============================] - 1s 271us/step - loss: 0.9714 - mae: 0.7866 - mse: 0.9714\n",
      "Epoch 00026: early stopping\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<keras.callbacks.callbacks.History at 0x1a3e26dc50>"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "lm.fit(X, Y)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "iokoBVzTUAHs"
   },
   "source": [
    "### Check that the weights are close to one\n",
    "The weights should be close to unity. The bias term is the second entry and should be close to zero."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 69
    },
    "colab_type": "code",
    "id": "ErrGUmQ7UAHt",
    "outputId": "d1ec8bb8-80e2-468d-fe8b-2751f9206be8"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "weights: [[0.9869313]\n",
      " [1.0236046]]\n",
      "bias: [0.00017654]\n"
     ]
    }
   ],
   "source": [
    "print(\"weights: \" + str(lm.model.layers[0].get_weights()[0]))\n",
    "print(\"bias: \" + str(lm.model.layers[0].get_weights()[1]))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "XeY2UlfbUAHz"
   },
   "source": [
    "## Compare with a FFW Neural Network with one hidden layer (unactivated)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This time we create a neural network with a hidden layer with 10 units."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "EEz8Yig7UAHz"
   },
   "outputs": [],
   "source": [
    "n = 10 # number of hidden units"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "9DXvB6wUUAH4"
   },
   "outputs": [],
   "source": [
    "def linear_NN1_model(l1_reg=0.0):    \n",
    "    model = Sequential()\n",
    "    model.add(Dense(n, input_dim=2, kernel_initializer='normal')) \n",
    "    model.add(Dense(1, kernel_initializer='normal', activation='linear'))\n",
    "    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mae', 'mse'])\n",
    "    return model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "cX3ZHz0tUAH-"
   },
   "outputs": [],
   "source": [
    "lm = KerasRegressor(build_fn=linear_NN1_model, epochs=50, batch_size=10, verbose=1, callbacks=[es])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "Zq6A4WMzUAIC",
    "outputId": "74a5c317-fa81-4b4e-f8ad-6c374837ca4e",
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/50\n",
      "5000/5000 [==============================] - 3s 542us/step - loss: 1.6197 - mae: 0.9937 - mse: 1.6197\n",
      "Epoch 2/50\n",
      "5000/5000 [==============================] - 2s 384us/step - loss: 0.9737 - mae: 0.7876 - mse: 0.9737\n",
      "Epoch 3/50\n",
      "5000/5000 [==============================] - 2s 329us/step - loss: 0.9730 - mae: 0.7868 - mse: 0.9730\n",
      "Epoch 4/50\n",
      "5000/5000 [==============================] - 2s 376us/step - loss: 0.9744 - mae: 0.7878 - mse: 0.9744\n",
      "Epoch 5/50\n",
      "5000/5000 [==============================] - 2s 346us/step - loss: 0.9721 - mae: 0.7876 - mse: 0.9721\n",
      "Epoch 6/50\n",
      "5000/5000 [==============================] - 2s 333us/step - loss: 0.9746 - mae: 0.7883 - mse: 0.9746\n",
      "Epoch 7/50\n",
      "5000/5000 [==============================] - 2s 424us/step - loss: 0.9727 - mae: 0.7866 - mse: 0.9727\n",
      "Epoch 8/50\n",
      "5000/5000 [==============================] - 2s 327us/step - loss: 0.9733 - mae: 0.7874 - mse: 0.9732\n",
      "Epoch 9/50\n",
      "5000/5000 [==============================] - 1s 294us/step - loss: 0.9745 - mae: 0.7880 - mse: 0.9745\n",
      "Epoch 10/50\n",
      "5000/5000 [==============================] - 1s 298us/step - loss: 0.9733 - mae: 0.7877 - mse: 0.9733\n",
      "Epoch 11/50\n",
      "5000/5000 [==============================] - 1s 294us/step - loss: 0.9736 - mae: 0.7878 - mse: 0.9736\n",
      "Epoch 12/50\n",
      "5000/5000 [==============================] - 1s 300us/step - loss: 0.9740 - mae: 0.7879 - mse: 0.9740\n",
      "Epoch 13/50\n",
      "5000/5000 [==============================] - 1s 296us/step - loss: 0.9730 - mae: 0.7872 - mse: 0.9730\n",
      "Epoch 14/50\n",
      "5000/5000 [==============================] - 1s 300us/step - loss: 0.9737 - mae: 0.7879 - mse: 0.9737\n",
      "Epoch 15/50\n",
      "5000/5000 [==============================] - 1s 295us/step - loss: 0.9726 - mae: 0.7873 - mse: 0.9726\n",
      "Epoch 00015: early stopping\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<keras.callbacks.callbacks.History at 0x1a3eb94fd0>"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "lm.fit(X, Y)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "kcwH9k3tUAIF",
    "outputId": "82b9016e-cde4-4cfa-fb58-eeb579287e2f"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[ 0.21191542 -0.3510564   0.32653695  0.30687252  0.27739337  0.30587277\n",
      "   0.26292333  0.24803898  0.250718   -0.3381098 ]\n",
      " [ 0.33603936 -0.28845066  0.25735727  0.3006563   0.32002118  0.28473574\n",
      "   0.35539797  0.28512466  0.28092325 -0.26294112]] [[ 0.34959066]\n",
      " [-0.33088237]\n",
      " [ 0.29290077]\n",
      " [ 0.3498653 ]\n",
      " [ 0.26861414]\n",
      " [ 0.3326019 ]\n",
      " [ 0.35045084]\n",
      " [ 0.37188262]\n",
      " [ 0.40377927]\n",
      " [-0.34295896]]\n"
     ]
    }
   ],
   "source": [
    "W1 = lm.model.get_weights()[0]\n",
    "b1 = lm.model.get_weights()[1]\n",
    "W2 = lm.model.get_weights()[2]\n",
    "b2 = lm.model.get_weights()[3]\n",
    "print(W1, W2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "pgdJ8GHBUAIJ"
   },
   "source": [
    "### Check that the coefficients are close to one and the intercept is close to zero"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "4QUD6mBTUAIK"
   },
   "outputs": [],
   "source": [
    "beta_0 = np.dot(np.transpose(W2), b1) + b2\n",
    "beta_1 = np.dot(np.transpose(W2), W1[0])\n",
    "beta_2 = np.dot(np.transpose(W2), W1[1])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "GUE_8PjFUAIO",
    "outputId": "1e01c6b4-8d11-4d4f-e70f-24ae7c1bb70e"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[-0.04406428] [0.97107023] [1.0083461]\n"
     ]
    }
   ],
   "source": [
    "print(beta_0, beta_1, beta_2)"
   ]
  },
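  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The identity used above can be checked directly (an illustrative sketch with hypothetical random weights, independent of the trained model): composing the two affine layers $W_2^\\top(W_1^\\top x + b_1) + b_2$ reproduces the collapsed linear model with intercept $\\beta_0 = W_2^\\top b_1 + b_2$ and coefficients $\\beta_j = W_2^\\top W_1[j]$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: an unactivated two-layer network collapses to a\n",
    "# linear model. Uses hypothetical random weights, not the trained model.\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "W1, b1 = rng.standard_normal((2, 10)), rng.standard_normal(10)\n",
    "W2, b2 = rng.standard_normal((10, 1)), rng.standard_normal(1)\n",
    "\n",
    "x = rng.standard_normal(2)\n",
    "net = W2.T @ (W1.T @ x + b1) + b2                  # two affine layers\n",
    "collapsed = (W2.T @ b1 + b2) + (W2.T @ W1.T) @ x   # beta_0 + beta @ x\n",
    "print(np.allclose(net, collapsed))  # True"
   ]
  },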
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "YLXKtdM8UAIS"
   },
   "source": [
    "## Compare with a feedforward NN with one hidden layer ($tanh$ activated)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, we create another model with a 10 unit hidden layer, this time with a $tanh$ activation function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "zwVeJX1KUAIU"
   },
   "outputs": [],
   "source": [
    "# number of hidden neurons\n",
    "n = 10"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "QTrU8L9_UAIY"
   },
   "outputs": [],
   "source": [
    "# with non-linear activation\n",
    "def linear_NN1_model_act(l1_reg=0.0):    \n",
    "    model = Sequential()\n",
    "    model.add(Dense(n, input_dim=2, kernel_initializer='normal', activation='tanh'))\n",
    "    model.add(Dense(1, kernel_initializer='normal')) \n",
    "    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mae', 'mse'])\n",
    "    return model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "ga1tsGQAUAIc"
   },
   "outputs": [],
   "source": [
    "lm = KerasRegressor(build_fn=linear_NN1_model_act, epochs=100, batch_size=10, verbose=1, callbacks=[es])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Train the model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "YZwN89X8UAIg",
    "outputId": "9189b597-0a6b-4035-885d-d61ec1647f28"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/100\n",
      "5000/5000 [==============================] - 2s 491us/step - loss: 1.6220 - mae: 0.9961 - mse: 1.6220\n",
      "Epoch 2/100\n",
      "5000/5000 [==============================] - 1s 281us/step - loss: 0.9934 - mae: 0.7969 - mse: 0.9934\n",
      "Epoch 3/100\n",
      "5000/5000 [==============================] - 1s 284us/step - loss: 0.9909 - mae: 0.7954 - mse: 0.9909\n",
      "Epoch 4/100\n",
      "5000/5000 [==============================] - 1s 278us/step - loss: 0.9882 - mae: 0.7949 - mse: 0.9882\n",
      "Epoch 5/100\n",
      "5000/5000 [==============================] - 1s 288us/step - loss: 0.9871 - mae: 0.7942 - mse: 0.9871\n",
      "Epoch 6/100\n",
      "5000/5000 [==============================] - 1s 290us/step - loss: 0.9851 - mae: 0.7934 - mse: 0.9851\n",
      "Epoch 7/100\n",
      "5000/5000 [==============================] - 1s 292us/step - loss: 0.9842 - mae: 0.7928 - mse: 0.9842\n",
      "Epoch 8/100\n",
      "5000/5000 [==============================] - 1s 295us/step - loss: 0.9828 - mae: 0.7916 - mse: 0.9828\n",
      "Epoch 9/100\n",
      "5000/5000 [==============================] - 2s 316us/step - loss: 0.9825 - mae: 0.7920 - mse: 0.9825\n",
      "Epoch 10/100\n",
      "5000/5000 [==============================] - 2s 375us/step - loss: 0.9811 - mae: 0.7914 - mse: 0.9811\n",
      "Epoch 11/100\n",
      "5000/5000 [==============================] - 2s 340us/step - loss: 0.9817 - mae: 0.7919 - mse: 0.9817\n",
      "Epoch 12/100\n",
      "5000/5000 [==============================] - 2s 353us/step - loss: 0.9805 - mae: 0.7909 - mse: 0.9805\n",
      "Epoch 13/100\n",
      "5000/5000 [==============================] - 2s 341us/step - loss: 0.9795 - mae: 0.7909 - mse: 0.9795\n",
      "Epoch 14/100\n",
      "5000/5000 [==============================] - 2s 348us/step - loss: 0.9790 - mae: 0.7904 - mse: 0.9790\n",
      "Epoch 15/100\n",
      "5000/5000 [==============================] - 2s 365us/step - loss: 0.9802 - mae: 0.7911 - mse: 0.9802\n",
      "Epoch 16/100\n",
      "5000/5000 [==============================] - 2s 391us/step - loss: 0.9790 - mae: 0.7910 - mse: 0.9790\n",
      "Epoch 17/100\n",
      "5000/5000 [==============================] - 2s 420us/step - loss: 0.9773 - mae: 0.7908 - mse: 0.9773\n",
      "Epoch 18/100\n",
      "5000/5000 [==============================] - 2s 351us/step - loss: 0.9794 - mae: 0.7903 - mse: 0.9794\n",
      "Epoch 19/100\n",
      "5000/5000 [==============================] - 2s 324us/step - loss: 0.9780 - mae: 0.7904 - mse: 0.9780\n",
      "Epoch 20/100\n",
      "5000/5000 [==============================] - 2s 314us/step - loss: 0.9783 - mae: 0.7907 - mse: 0.9783\n",
      "Epoch 21/100\n",
      "5000/5000 [==============================] - 1s 299us/step - loss: 0.9774 - mae: 0.7904 - mse: 0.9774\n",
      "Epoch 22/100\n",
      "5000/5000 [==============================] - 1s 298us/step - loss: 0.9782 - mae: 0.7899 - mse: 0.9782\n",
      "Epoch 23/100\n",
      "5000/5000 [==============================] - 1s 295us/step - loss: 0.9785 - mae: 0.7907 - mse: 0.9785\n",
      "Epoch 24/100\n",
      "5000/5000 [==============================] - 1s 295us/step - loss: 0.9772 - mae: 0.7896 - mse: 0.9772\n",
      "Epoch 25/100\n",
      "5000/5000 [==============================] - 1s 297us/step - loss: 0.9769 - mae: 0.7891 - mse: 0.9769\n",
      "Epoch 26/100\n",
      "5000/5000 [==============================] - 2s 304us/step - loss: 0.9773 - mae: 0.7900 - mse: 0.9773\n",
      "Epoch 27/100\n",
      "5000/5000 [==============================] - 1s 293us/step - loss: 0.9768 - mae: 0.7899 - mse: 0.9768\n",
      "Epoch 28/100\n",
      "5000/5000 [==============================] - 1s 294us/step - loss: 0.9783 - mae: 0.7905 - mse: 0.9783\n",
      "Epoch 29/100\n",
      "5000/5000 [==============================] - 1s 285us/step - loss: 0.9750 - mae: 0.7891 - mse: 0.9750\n",
      "Epoch 30/100\n",
      "5000/5000 [==============================] - 1s 288us/step - loss: 0.9774 - mae: 0.7903 - mse: 0.9774\n",
      "Epoch 31/100\n",
      "5000/5000 [==============================] - 1s 292us/step - loss: 0.9770 - mae: 0.7894 - mse: 0.9770\n",
      "Epoch 32/100\n",
      "5000/5000 [==============================] - 1s 295us/step - loss: 0.9771 - mae: 0.7900 - mse: 0.9771\n",
      "Epoch 33/100\n",
      "5000/5000 [==============================] - 1s 296us/step - loss: 0.9763 - mae: 0.7890 - mse: 0.9763\n",
      "Epoch 34/100\n",
      "5000/5000 [==============================] - 2s 301us/step - loss: 0.9764 - mae: 0.7895 - mse: 0.9764\n",
      "Epoch 35/100\n",
      "5000/5000 [==============================] - 1s 293us/step - loss: 0.9764 - mae: 0.7894 - mse: 0.9764\n",
      "Epoch 36/100\n",
      "5000/5000 [==============================] - 1s 292us/step - loss: 0.9765 - mae: 0.7892 - mse: 0.9765\n",
      "Epoch 37/100\n",
      "5000/5000 [==============================] - 1s 287us/step - loss: 0.9761 - mae: 0.7895 - mse: 0.9761\n",
      "Epoch 38/100\n",
      "5000/5000 [==============================] - 1s 289us/step - loss: 0.9767 - mae: 0.7901 - mse: 0.9767\n",
      "Epoch 39/100\n",
      "5000/5000 [==============================] - 1s 285us/step - loss: 0.9762 - mae: 0.7893 - mse: 0.9762\n",
      "Epoch 00039: early stopping\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<keras.callbacks.callbacks.History at 0x1a3eb43160>"
      ]
     },
     "execution_count": 23,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "lm.fit(X, Y)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "mU3sy8asUAIk"
   },
   "source": [
    "### Compute the Sensitivities"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "xNOwwjCfUAIk"
   },
   "outputs": [],
   "source": [
    "# Assumes that the activation function is tanh\n",
    "def sensitivities(lm, X):\n",
    "    \n",
    "    W1 = lm.model.get_weights()[0]\n",
    "    b1 = lm.model.get_weights()[1]\n",
    "    W2 = lm.model.get_weights()[2]\n",
    "    b2 = lm.model.get_weights()[3]\n",
    "    \n",
    "    \n",
    "    M = np.shape(X)[0]\n",
    "    p = np.shape(X)[1]\n",
    "\n",
    "    beta = np.array([0]*M*(p+1), dtype='float32').reshape(M,p+1)\n",
    "    \n",
    "    beta[:, 0] = (np.dot(np.transpose(W2), np.tanh(b1)) + b2)[0] # intercept \\beta_0= F_{W,b}(0)\n",
    "    for i in range(M):\n",
    " \n",
    "        Z1 = np.tanh(np.dot(np.transpose(W1),np.transpose(X[i,])) + b1)\n",
    "      \n",
    "        D = np.diag(1 - Z1**2)\n",
    "        \n",
    "        for j in range(p):  \n",
    "            beta[i, j+1] = np.dot(np.transpose(W2), np.dot(D, W1[j]))\n",
    "            \n",
    "    return beta"
   ]
  },
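  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before applying it to the trained model, we can sanity-check the closed-form derivative $\\frac{\\partial F}{\\partial x_j} = W_2^\\top \\mathrm{diag}(1 - Z_1^2)\\, W_1[j]$ against central finite differences, using hypothetical random weights (an illustrative sketch in pure NumPy)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: validate the tanh sensitivity formula against\n",
    "# central finite differences, using hypothetical random weights.\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(1)\n",
    "W1, b1 = rng.standard_normal((2, 10)), rng.standard_normal(10)\n",
    "W2, b2 = rng.standard_normal((10, 1)), rng.standard_normal(1)\n",
    "f = lambda x: (W2.T @ np.tanh(W1.T @ x + b1) + b2)[0]\n",
    "\n",
    "x = rng.standard_normal(2)\n",
    "Z1 = np.tanh(W1.T @ x + b1)\n",
    "analytic = np.array([W2.T @ np.diag(1 - Z1**2) @ W1[j] for j in range(2)]).ravel()\n",
    "\n",
    "h = 1e-6\n",
    "numeric = np.array([(f(x + h*e) - f(x - h*e)) / (2*h) for e in np.eye(2)])\n",
    "print(np.allclose(analytic, numeric, atol=1e-4))  # True"
   ]
  },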
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "ux3ey5dfUAIn"
   },
   "outputs": [],
   "source": [
    "beta = sensitivities(lm, X)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "jiOcNG7CUAIr"
   },
   "source": [
    "### Check that the intercept is close to one and the coefficients are close to one"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "kWMqCNwhUAIt",
    "outputId": "6c9a3082-517e-4153-d524-defc44c592b0"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[-0.01284499  0.9332601   1.05014   ]\n"
     ]
    }
   ],
   "source": [
    "print(np.mean(beta, axis=0))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "colab": {},
    "colab_type": "code",
    "id": "pHy9jOlEUAIw",
    "outputId": "6c00e092-00d7-4d1a-fd86-b6c73b9afe5e"
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[5.578546e-07 6.809256e-02 7.658932e-02]\n"
     ]
    }
   ],
   "source": [
    "print(np.std(beta, axis=0))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "colab": {
   "collapsed_sections": [],
   "name": "ML_in_Finance-Interpretability.ipynb",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
