{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# ML_in_Finance-Interaction\n",
    "# Author: Matthew Dixon\n",
    "# Version: 1.0 (08.09.2019)\n",
    "# License: MIT\n",
    "# Email: matthew.dixon@iit.edu\n",
    "# Notes: tested on Mac OS X with Python 3.6.9 and the following packages:\n",
    "# numpy=1.18.1, keras=2.3.1, tensorflow=2.0.0, statsmodels=0.10.1, scikit-learn=0.22.1\n",
    "# Citation: Please cite the following reference if this notebook is used for research purposes:\n",
    "# Dixon M.F., I. Halperin and P. Bilokon, Machine Learning in Finance: From Theory to Practice, Springer Graduate textbook Series, 2020. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from keras.models import Sequential\n",
    "from keras.layers import Dense\n",
    "from keras.callbacks import EarlyStopping\n",
    "from keras.wrappers.scikit_learn import KerasRegressor\n",
    "import statsmodels.api as sm\n",
    "import sklearn"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Overview\n",
    "The purpose of this notebook is to illustrate a neural network interpretability method that is consistent with linear regression, including an interaction term. \n",
    "\n",
    "In linear regression, provided the independent variables are scaled, one can view the regression coefficients as a measure of the importance of the variables and their interaction effect. Equivalently, the dependent variable can be differentiated w.r.t. the inputs to recover the coefficients, with the interaction obtained from the cross-term in the Hessian. \n",
    "\n",
    "Similarly, the derivatives of the network w.r.t. the inputs are a non-linear generalization of interpretability in a linear regression model with interaction effects. Moreover, we should expect the neural network gradients to approximate the regression model coefficients when the data is generated by a linear regression model with interaction terms. \n",
    "\n",
    "Various simple experimental tests, corresponding to Section 4 of Chapter 5, are performed to illustrate the properties of network interpretability."
   ]
  },
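  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of the derivative-based view (an illustrative aside, not part of the original experiments), the cell below applies central finite differences to the noiseless function $f(x_1,x_2)=x_1+x_2+x_1x_2$ to confirm that $\\partial f/\\partial x_1 = 1+x_2$ and that the mixed second derivative, i.e. the interaction effect, equals one."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def f(x1, x2):\n",
    "    # Noiseless version of the DGP used in this notebook\n",
    "    return x1 + x2 + x1*x2\n",
    "\n",
    "h = 1e-4\n",
    "x1, x2 = 0.3, -0.7\n",
    "# Central difference for df/dx1: should equal 1 + x2\n",
    "df_dx1 = (f(x1 + h, x2) - f(x1 - h, x2)) / (2*h)\n",
    "# Mixed second derivative d2f/(dx1 dx2): should equal the interaction coefficient, 1\n",
    "d2f = (f(x1 + h, x2 + h) - f(x1 + h, x2 - h)\n",
    "       - f(x1 - h, x2 + h) + f(x1 - h, x2 - h)) / (4*h**2)\n",
    "print(df_dx1, d2f)"
   ]
  },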
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Simple Data Generation Process (DGP)\n",
    "\n",
    "Generate data from a regression model with an interaction term \n",
    "\n",
    "$Y=X_1+X_2 + X_1X_2+\\epsilon~, ~~X_1, X_2 \\sim N(0,1)~, ~~\\epsilon \\sim N(0,\\sigma_n^2)$"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "M = 5000 # Number of samples\n",
    "np.random.seed(7) # Set NumPy's random seed for reproducibility\n",
    "X = np.zeros(shape=(M, 2))\n",
    "sigma_n = 0.01\n",
    "X[:int(M/2), 0] = np.random.randn(int(M/2))\n",
    "X[:int(M/2), 1] = np.random.randn(int(M/2))\n",
    "# Use antithetic sampling to reduce sample bias in the mean\n",
    "X[int(M/2):, 0] = -X[:int(M/2), 0]\n",
    "X[int(M/2):, 1] = -X[:int(M/2), 1]\n",
    "\n",
    "eps = np.random.randn(M)\n",
    "Y = X[:, 0] + X[:, 1] + X[:, 0]*X[:, 1] + sigma_n*eps"
   ]
  },
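  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The antithetic construction above pairs each draw with its negation, which forces the sample mean of each input column to be zero by construction; a minimal self-contained sketch of the device:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "np.random.seed(0)\n",
    "z = np.random.randn(100)\n",
    "x = np.concatenate([z, -z])  # antithetic pair: each draw and its negation\n",
    "print(x.mean())  # zero up to floating-point rounding"
   ]
  },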
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Use ordinary least squares to fit a linear model to the data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For a baseline, let us compare the neural network with OLS regression. \n",
    "\n",
    "We fit statsmodels' OLS model to the data. Note that the design matrix contains only the intercept and the two inputs; the interaction term is deliberately omitted."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
    "ols_results = sm.OLS(Y, sm.add_constant(X)).fit()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For each input, get the predicted $Y$ value according to the model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "y_ols = ols_results.predict(sm.add_constant(X))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "View characteristics of the resulting model. You should observe that the intercept is close to zero and the other coefficients are close to one. Because the interaction term is omitted from the linear model, the R-squared is well below one."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<table class=\"simpletable\">\n",
       "<caption>OLS Regression Results</caption>\n",
       "<tr>\n",
       "  <th>Dep. Variable:</th>            <td>y</td>        <th>  R-squared:         </th> <td>   0.669</td> \n",
       "</tr>\n",
       "<tr>\n",
       "  <th>Model:</th>                   <td>OLS</td>       <th>  Adj. R-squared:    </th> <td>   0.669</td> \n",
       "</tr>\n",
       "<tr>\n",
       "  <th>Method:</th>             <td>Least Squares</td>  <th>  F-statistic:       </th> <td>   5052.</td> \n",
       "</tr>\n",
       "<tr>\n",
       "  <th>Date:</th>             <td>Mon, 18 May 2020</td> <th>  Prob (F-statistic):</th>  <td>  0.00</td>  \n",
       "</tr>\n",
       "<tr>\n",
       "  <th>Time:</th>                 <td>16:09:50</td>     <th>  Log-Likelihood:    </th> <td> -7103.5</td> \n",
       "</tr>\n",
       "<tr>\n",
       "  <th>No. Observations:</th>      <td>  5000</td>      <th>  AIC:               </th> <td>1.421e+04</td>\n",
       "</tr>\n",
       "<tr>\n",
       "  <th>Df Residuals:</th>          <td>  4997</td>      <th>  BIC:               </th> <td>1.423e+04</td>\n",
       "</tr>\n",
       "<tr>\n",
       "  <th>Df Model:</th>              <td>     2</td>      <th>                     </th>     <td> </td>    \n",
       "</tr>\n",
       "<tr>\n",
       "  <th>Covariance Type:</th>      <td>nonrobust</td>    <th>                     </th>     <td> </td>    \n",
       "</tr>\n",
       "</table>\n",
       "<table class=\"simpletable\">\n",
       "<tr>\n",
       "    <td></td>       <th>coef</th>     <th>std err</th>      <th>t</th>      <th>P>|t|</th>  <th>[0.025</th>    <th>0.975]</th>  \n",
       "</tr>\n",
       "<tr>\n",
       "  <th>const</th> <td>    0.0243</td> <td>    0.014</td> <td>    1.713</td> <td> 0.087</td> <td>   -0.004</td> <td>    0.052</td>\n",
       "</tr>\n",
       "<tr>\n",
       "  <th>x1</th>    <td>    0.9999</td> <td>    0.014</td> <td>   70.236</td> <td> 0.000</td> <td>    0.972</td> <td>    1.028</td>\n",
       "</tr>\n",
       "<tr>\n",
       "  <th>x2</th>    <td>    1.0000</td> <td>    0.014</td> <td>   70.164</td> <td> 0.000</td> <td>    0.972</td> <td>    1.028</td>\n",
       "</tr>\n",
       "</table>\n",
       "<table class=\"simpletable\">\n",
       "<tr>\n",
       "  <th>Omnibus:</th>       <td>846.889</td> <th>  Durbin-Watson:     </th> <td>   2.024</td> \n",
       "</tr>\n",
       "<tr>\n",
       "  <th>Prob(Omnibus):</th> <td> 0.000</td>  <th>  Jarque-Bera (JB):  </th> <td>16614.377</td>\n",
       "</tr>\n",
       "<tr>\n",
       "  <th>Skew:</th>          <td> 0.168</td>  <th>  Prob(JB):          </th> <td>    0.00</td> \n",
       "</tr>\n",
       "<tr>\n",
       "  <th>Kurtosis:</th>      <td>11.924</td>  <th>  Cond. No.          </th> <td>    1.02</td> \n",
       "</tr>\n",
       "</table><br/><br/>Warnings:<br/>[1] Standard Errors assume that the covariance matrix of the errors is correctly specified."
      ],
      "text/plain": [
       "<class 'statsmodels.iolib.summary.Summary'>\n",
       "\"\"\"\n",
       "                            OLS Regression Results                            \n",
       "==============================================================================\n",
       "Dep. Variable:                      y   R-squared:                       0.669\n",
       "Model:                            OLS   Adj. R-squared:                  0.669\n",
       "Method:                 Least Squares   F-statistic:                     5052.\n",
       "Date:                Mon, 18 May 2020   Prob (F-statistic):               0.00\n",
       "Time:                        16:09:50   Log-Likelihood:                -7103.5\n",
       "No. Observations:                5000   AIC:                         1.421e+04\n",
       "Df Residuals:                    4997   BIC:                         1.423e+04\n",
       "Df Model:                           2                                         \n",
       "Covariance Type:            nonrobust                                         \n",
       "==============================================================================\n",
       "                 coef    std err          t      P>|t|      [0.025      0.975]\n",
       "------------------------------------------------------------------------------\n",
       "const          0.0243      0.014      1.713      0.087      -0.004       0.052\n",
       "x1             0.9999      0.014     70.236      0.000       0.972       1.028\n",
       "x2             1.0000      0.014     70.164      0.000       0.972       1.028\n",
       "==============================================================================\n",
       "Omnibus:                      846.889   Durbin-Watson:                   2.024\n",
       "Prob(Omnibus):                  0.000   Jarque-Bera (JB):            16614.377\n",
       "Skew:                           0.168   Prob(JB):                         0.00\n",
       "Kurtosis:                      11.924   Cond. No.                         1.02\n",
       "==============================================================================\n",
       "\n",
       "Warnings:\n",
       "[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.\n",
       "\"\"\""
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "ols_results.summary()"
   ]
  },
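  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The R-squared of 0.669 reflects the misspecification: the linear model omits the interaction term. As an illustrative aside (not part of the original experiments), augmenting the design matrix with the product column $X_1 X_2$ lets least squares recover all four coefficients of the DGP almost exactly; a self-contained sketch that regenerates data from the same DGP:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Regenerate data from the same DGP as above\n",
    "np.random.seed(7)\n",
    "M = 5000\n",
    "X_demo = np.random.randn(M, 2)\n",
    "y_demo = X_demo[:, 0] + X_demo[:, 1] + X_demo[:, 0]*X_demo[:, 1] + 0.01*np.random.randn(M)\n",
    "# Design matrix: intercept, X1, X2 and the interaction X1*X2\n",
    "A = np.column_stack([np.ones(M), X_demo, X_demo[:, 0]*X_demo[:, 1]])\n",
    "coef, _, _, _ = np.linalg.lstsq(A, y_demo, rcond=None)\n",
    "print(coef)  # approximately [0, 1, 1, 1]"
   ]
  },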
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Compare with a feedforward NN with no hidden layers"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Recall that a feedforward network with no hidden layers and no activation function is a linear regression model.\n",
    "\n",
    "Create a build function for the linear perceptron, which maps the inputs directly to a single output."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "def linear_NN0_model(l1_reg=0.0):\n",
    "    # The l1_reg argument is unused here; it is kept for interface consistency\n",
    "    model = Sequential()\n",
    "    model.add(Dense(1, input_dim=2, kernel_initializer='normal'))\n",
    "    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mae', 'mse'])\n",
    "    return model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "An early stopping callback terminates training once the training loss has stopped improving for `patience` consecutive epochs."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "es = EarlyStopping(monitor='loss', mode='min', verbose=1, patience=10)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Pass the build function for our model, together with the training parameters, to the `KerasRegressor` constructor to create a scikit-learn-compatible regression model. This allows you to take advantage of scikit-learn's built-in tools and estimator methods, and to incorporate the model into scikit-learn pipelines."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "lm = KerasRegressor(build_fn=linear_NN0_model, epochs=40, batch_size=10, verbose=1, callbacks=[es])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Train the model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/40\n",
      "5000/5000 [==============================] - 3s 528us/step - loss: 2.1870 - mae: 1.0363 - mse: 2.1870\n",
      "Epoch 2/40\n",
      "5000/5000 [==============================] - 1s 286us/step - loss: 1.4523 - mae: 0.7912 - mse: 1.4523\n",
      "Epoch 3/40\n",
      "5000/5000 [==============================] - 1s 281us/step - loss: 1.1474 - mae: 0.6760 - mse: 1.1474\n",
      "Epoch 4/40\n",
      "5000/5000 [==============================] - 1s 272us/step - loss: 1.0432 - mae: 0.6392 - mse: 1.0432\n",
      "Epoch 5/40\n",
      "5000/5000 [==============================] - 1s 273us/step - loss: 1.0133 - mae: 0.6288 - mse: 1.0133\n",
      "Epoch 6/40\n",
      "5000/5000 [==============================] - 1s 293us/step - loss: 1.0068 - mae: 0.6265 - mse: 1.0068\n",
      "Epoch 7/40\n",
      "5000/5000 [==============================] - 1s 270us/step - loss: 1.0052 - mae: 0.6260 - mse: 1.0052\n",
      "Epoch 8/40\n",
      "5000/5000 [==============================] - 1s 265us/step - loss: 1.0048 - mae: 0.6261 - mse: 1.0048\n",
      "Epoch 9/40\n",
      "5000/5000 [==============================] - 1s 259us/step - loss: 1.0047 - mae: 0.6259 - mse: 1.0047\n",
      "Epoch 10/40\n",
      "5000/5000 [==============================] - 2s 311us/step - loss: 1.0050 - mae: 0.6264 - mse: 1.0050\n",
      "Epoch 11/40\n",
      "5000/5000 [==============================] - 1s 266us/step - loss: 1.0047 - mae: 0.6258 - mse: 1.0047\n",
      "Epoch 12/40\n",
      "5000/5000 [==============================] - 1s 244us/step - loss: 1.0046 - mae: 0.6258 - mse: 1.0046\n",
      "Epoch 13/40\n",
      "5000/5000 [==============================] - 1s 211us/step - loss: 1.0045 - mae: 0.6260 - mse: 1.0045\n",
      "Epoch 14/40\n",
      "5000/5000 [==============================] - 3s 519us/step - loss: 1.0048 - mae: 0.6259 - mse: 1.0048\n",
      "Epoch 15/40\n",
      "5000/5000 [==============================] - 2s 458us/step - loss: 1.0048 - mae: 0.6263 - mse: 1.0048\n",
      "Epoch 16/40\n",
      "5000/5000 [==============================] - 2s 388us/step - loss: 1.0047 - mae: 0.6260 - mse: 1.0047\n",
      "Epoch 17/40\n",
      "5000/5000 [==============================] - 1s 280us/step - loss: 1.0047 - mae: 0.6260 - mse: 1.0047\n",
      "Epoch 18/40\n",
      "5000/5000 [==============================] - 1s 278us/step - loss: 1.0046 - mae: 0.6259 - mse: 1.0046\n",
      "Epoch 19/40\n",
      "5000/5000 [==============================] - 1s 243us/step - loss: 1.0048 - mae: 0.6261 - mse: 1.0048\n",
      "Epoch 20/40\n",
      "5000/5000 [==============================] - 1s 253us/step - loss: 1.0048 - mae: 0.6261 - mse: 1.0048\n",
      "Epoch 21/40\n",
      "5000/5000 [==============================] - 1s 215us/step - loss: 1.0048 - mae: 0.6258 - mse: 1.0048\n",
      "Epoch 22/40\n",
      "5000/5000 [==============================] - 1s 223us/step - loss: 1.0048 - mae: 0.6262 - mse: 1.0048\n",
      "Epoch 23/40\n",
      "5000/5000 [==============================] - 1s 211us/step - loss: 1.0049 - mae: 0.6260 - mse: 1.0049\n",
      "Epoch 00023: early stopping\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<keras.callbacks.callbacks.History at 0x14066a978>"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "lm.fit(X, Y)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Check that the weights are close to one\n",
    "The weights should be close to unity and the bias should be close to zero, matching the OLS coefficients and intercept."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Weights: [[0.99630827]\n",
      " [0.99237144]]\n",
      "Bias: [0.02239072]\n"
     ]
    }
   ],
   "source": [
    "W = lm.model.layers[0].get_weights()[0]\n",
    "b = lm.model.layers[0].get_weights()[1]\n",
    "print(\"Weights: \"+ str(W))\n",
    "print(\"Bias: \" + str(b))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Compare with a feedforward NN with one hidden layer (unactivated)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "This time we create a neural network with a single 10-unit hidden layer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [],
   "source": [
    "n = 10 # number of hidden units"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "def linear_NN1_model(l1_reg=0.0):    \n",
    "    model = Sequential()\n",
    "    # The first argument to Dense is the number of hidden units, n\n",
    "    model.add(Dense(n, input_dim=2, kernel_initializer='normal')) \n",
    "    model.add(Dense(1, kernel_initializer='normal', activation='linear'))\n",
    "    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mae', 'mse'])\n",
    "    return model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
    "lm = KerasRegressor(build_fn=linear_NN1_model, epochs=50, batch_size=10, verbose=1, callbacks=[es])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Train the model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/50\n",
      "5000/5000 [==============================] - 2s 490us/step - loss: 1.6932 - mae: 0.8501 - mse: 1.6932\n",
      "Epoch 2/50\n",
      "5000/5000 [==============================] - 2s 320us/step - loss: 1.0064 - mae: 0.6269 - mse: 1.0064\n",
      "Epoch 3/50\n",
      "5000/5000 [==============================] - 1s 265us/step - loss: 1.0069 - mae: 0.6264 - mse: 1.0069\n",
      "Epoch 4/50\n",
      "5000/5000 [==============================] - 2s 328us/step - loss: 1.0101 - mae: 0.6284 - mse: 1.0101\n",
      "Epoch 5/50\n",
      "5000/5000 [==============================] - 1s 276us/step - loss: 1.0089 - mae: 0.6268 - mse: 1.0089\n",
      "Epoch 6/50\n",
      "5000/5000 [==============================] - 2s 306us/step - loss: 1.0082 - mae: 0.6265 - mse: 1.0082\n",
      "Epoch 7/50\n",
      "5000/5000 [==============================] - 2s 305us/step - loss: 1.0069 - mae: 0.6275 - mse: 1.0069\n",
      "Epoch 8/50\n",
      "5000/5000 [==============================] - 1s 291us/step - loss: 1.0089 - mae: 0.6271 - mse: 1.0089\n",
      "Epoch 9/50\n",
      "5000/5000 [==============================] - 1s 278us/step - loss: 1.0096 - mae: 0.6282 - mse: 1.0096\n",
      "Epoch 10/50\n",
      "5000/5000 [==============================] - 1s 264us/step - loss: 1.0075 - mae: 0.6270 - mse: 1.0075\n",
      "Epoch 11/50\n",
      "5000/5000 [==============================] - 1s 265us/step - loss: 1.0097 - mae: 0.6284 - mse: 1.0097\n",
      "Epoch 12/50\n",
      "5000/5000 [==============================] - 1s 253us/step - loss: 1.0080 - mae: 0.6274 - mse: 1.0080\n",
      "Epoch 00012: early stopping\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<keras.callbacks.callbacks.History at 0x1a47f43eb8>"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "lm.fit(X, Y)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Extract the trained weights from the model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[-0.38235897  0.30883455 -0.27476394  0.37077746  0.33718014 -0.30436334\n",
      "  -0.2637127  -0.30291387 -0.29929903  0.28886756]\n",
      " [-0.26061237  0.2982374  -0.3480544   0.25751233  0.20222877 -0.3529237\n",
      "  -0.32209945 -0.2524027  -0.3108291   0.31747958]] [[-0.28780663]\n",
      " [ 0.30990782]\n",
      " [-0.32593182]\n",
      " [ 0.30765072]\n",
      " [ 0.3182374 ]\n",
      " [-0.3202046 ]\n",
      " [-0.2919182 ]\n",
      " [-0.36224043]\n",
      " [-0.33498996]\n",
      " [ 0.34460506]]\n"
     ]
    }
   ],
   "source": [
    "W1 = lm.model.get_weights()[0]\n",
    "b1 = lm.model.get_weights()[1]\n",
    "W2 = lm.model.get_weights()[2]\n",
    "b2 = lm.model.get_weights()[3]\n",
    "print(W1, W2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Check that the coefficients are close to one and the intercept is close to zero\n",
    "Since the hidden layer is unactivated, the network is a composition of two affine maps and hence itself affine: the implied intercept is $\\beta_0 = W_2^{\\top} b_1 + b_2$ and the implied coefficients are $\\beta_i = W_2^{\\top} W_1^{(i)}$, where $W_1^{(i)}$ denotes the weights leaving input $i$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [],
   "source": [
    "beta_0 = np.dot(np.transpose(W2), b1) + b2\n",
    "beta_1 = np.dot(np.transpose(W2), W1[0])\n",
    "beta_2 = np.dot(np.transpose(W2), W1[1])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[0.03100312] [1.0006595] [0.9364493]\n"
     ]
    }
   ],
   "source": [
    "print(beta_0, beta_1, beta_2)"
   ]
  },
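  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The collapse used above (composing two affine maps yields a single affine map) can be verified numerically with arbitrary weights; a minimal numpy sketch, with shapes following Keras's (inputs, units) convention:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "W1 = rng.normal(size=(2, 10))  # input-to-hidden weights\n",
    "b1 = rng.normal(size=10)       # hidden biases\n",
    "W2 = rng.normal(size=(10, 1))  # hidden-to-output weights\n",
    "b2 = rng.normal(size=1)        # output bias\n",
    "X_demo = rng.normal(size=(5, 2))\n",
    "# Unactivated two-layer network ...\n",
    "y_net = (X_demo @ W1 + b1) @ W2 + b2\n",
    "# ... equals one affine map with collapsed parameters\n",
    "beta = W1 @ W2         # implied coefficients, shape (2, 1)\n",
    "beta0 = b1 @ W2 + b2   # implied intercept\n",
    "y_affine = X_demo @ beta + beta0\n",
    "print(np.allclose(y_net, y_affine))  # True"
   ]
  },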
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Compare with a feedforward NN with one hidden layer ($\\tanh$ activated)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, we create another model with a 10-unit hidden layer, this time with a $\\tanh$ activation function."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
    "# number of hidden neurons\n",
    "n = 10"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [],
   "source": [
    "def linear_NN1_model_act(l1_reg=0.0):    \n",
    "    model = Sequential()\n",
    "    # Note the activation parameter passed to the layer constructor\n",
    "    model.add(Dense(n, input_dim=2, kernel_initializer='normal', activation='tanh'))\n",
    "    model.add(Dense(1, kernel_initializer='normal')) \n",
    "    model.compile(loss='mean_squared_error', optimizer='adam', metrics=['mae', 'mse'])\n",
    "    return model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [],
   "source": [
    "lm = KerasRegressor(build_fn=linear_NN1_model_act, epochs=100, batch_size=10, verbose=1, callbacks=[es])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Train the model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/100\n",
      "5000/5000 [==============================] - 2s 482us/step - loss: 1.7010 - mae: 0.8555 - mse: 1.7010\n",
      "Epoch 2/100\n",
      "5000/5000 [==============================] - 1s 251us/step - loss: 0.9881 - mae: 0.6248 - mse: 0.9881\n",
      "Epoch 3/100\n",
      "5000/5000 [==============================] - 1s 257us/step - loss: 0.9503 - mae: 0.6143 - mse: 0.9503\n",
      "Epoch 4/100\n",
      "5000/5000 [==============================] - 1s 246us/step - loss: 0.9102 - mae: 0.6005 - mse: 0.9102\n",
      "Epoch 5/100\n",
      "5000/5000 [==============================] - 1s 252us/step - loss: 0.8652 - mae: 0.5863 - mse: 0.8652\n",
      "Epoch 6/100\n",
      "5000/5000 [==============================] - 1s 273us/step - loss: 0.8183 - mae: 0.5731 - mse: 0.8183\n",
      "Epoch 7/100\n",
      "5000/5000 [==============================] - 1s 280us/step - loss: 0.7711 - mae: 0.5542 - mse: 0.7711\n",
      "Epoch 8/100\n",
      "5000/5000 [==============================] - 1s 284us/step - loss: 0.7274 - mae: 0.5408 - mse: 0.7274\n",
      "Epoch 9/100\n",
      "5000/5000 [==============================] - 1s 271us/step - loss: 0.6861 - mae: 0.5274 - mse: 0.6861\n",
      "Epoch 10/100\n",
      "5000/5000 [==============================] - 1s 248us/step - loss: 0.6520 - mae: 0.5135 - mse: 0.6520\n",
      "Epoch 11/100\n",
      "5000/5000 [==============================] - 1s 255us/step - loss: 0.6134 - mae: 0.5000 - mse: 0.6134\n",
      "Epoch 12/100\n",
      "5000/5000 [==============================] - 1s 250us/step - loss: 0.5442 - mae: 0.4655 - mse: 0.5442\n",
      "Epoch 13/100\n",
       "5000/5000 [==============================] - 1s 250us/step - loss: 0.4182 - mae: 0.3839 - mse: 0.4182\n",
      "Epoch 14/100\n",
      "5000/5000 [==============================] - 1s 253us/step - loss: 0.2872 - mae: 0.2807 - mse: 0.2872\n",
      "Epoch 15/100\n",
      "5000/5000 [==============================] - 1s 275us/step - loss: 0.2011 - mae: 0.1965 - mse: 0.2011\n",
      "Epoch 16/100\n",
      "5000/5000 [==============================] - 1s 257us/step - loss: 0.1531 - mae: 0.1587 - mse: 0.1531\n",
      "Epoch 17/100\n",
      "5000/5000 [==============================] - 1s 248us/step - loss: 0.1255 - mae: 0.1428 - mse: 0.1255\n",
      "Epoch 18/100\n",
      "5000/5000 [==============================] - 1s 249us/step - loss: 0.1059 - mae: 0.1319 - mse: 0.1059\n",
      "Epoch 19/100\n",
      "5000/5000 [==============================] - 1s 256us/step - loss: 0.0910 - mae: 0.1230 - mse: 0.0910\n",
      "Epoch 20/100\n",
      "5000/5000 [==============================] - 1s 283us/step - loss: 0.0800 - mae: 0.1155 - mse: 0.0800\n",
      "Epoch 21/100\n",
      "5000/5000 [==============================] - 2s 301us/step - loss: 0.0713 - mae: 0.1091 - mse: 0.0713\n",
      "Epoch 22/100\n",
      "5000/5000 [==============================] - 2s 308us/step - loss: 0.0643 - mae: 0.1030 - mse: 0.0643\n",
      "Epoch 23/100\n",
      "5000/5000 [==============================] - 2s 309us/step - loss: 0.0587 - mae: 0.0994 - mse: 0.0587\n",
      "Epoch 24/100\n",
      "5000/5000 [==============================] - 1s 295us/step - loss: 0.0545 - mae: 0.0981 - mse: 0.0545\n",
      "Epoch 25/100\n",
      "5000/5000 [==============================] - 2s 311us/step - loss: 0.0508 - mae: 0.0955 - mse: 0.0508\n",
      "Epoch 26/100\n",
      "5000/5000 [==============================] - 1s 294us/step - loss: 0.0479 - mae: 0.0931 - mse: 0.0479\n",
      "Epoch 27/100\n",
      "5000/5000 [==============================] - 2s 336us/step - loss: 0.0454 - mae: 0.0926 - mse: 0.0454\n",
      "Epoch 28/100\n",
      "5000/5000 [==============================] - 2s 333us/step - loss: 0.0431 - mae: 0.0890 - mse: 0.0431\n",
      "Epoch 29/100\n",
      "5000/5000 [==============================] - 2s 350us/step - loss: 0.0411 - mae: 0.0883 - mse: 0.0411\n",
      "Epoch 30/100\n",
      "5000/5000 [==============================] - 2s 325us/step - loss: 0.0393 - mae: 0.0880 - mse: 0.0393\n",
      "Epoch 31/100\n",
      "5000/5000 [==============================] - 1s 272us/step - loss: 0.0373 - mae: 0.0827 - mse: 0.0373\n",
      "Epoch 32/100\n",
      "5000/5000 [==============================] - 1s 246us/step - loss: 0.0358 - mae: 0.0814 - mse: 0.0358\n",
      "Epoch 33/100\n",
      "5000/5000 [==============================] - 1s 247us/step - loss: 0.0342 - mae: 0.0785 - mse: 0.0342\n",
      "Epoch 34/100\n",
      "5000/5000 [==============================] - 1s 258us/step - loss: 0.0326 - mae: 0.0759 - mse: 0.0326\n",
      "Epoch 35/100\n",
      "5000/5000 [==============================] - 1s 254us/step - loss: 0.0314 - mae: 0.0748 - mse: 0.0314\n",
      "Epoch 36/100\n",
      "5000/5000 [==============================] - 1s 251us/step - loss: 0.0300 - mae: 0.0733 - mse: 0.0300\n",
      "Epoch 37/100\n",
      "5000/5000 [==============================] - 1s 263us/step - loss: 0.0291 - mae: 0.0726 - mse: 0.0291\n",
      "Epoch 38/100\n",
      "5000/5000 [==============================] - 1s 249us/step - loss: 0.0282 - mae: 0.0712 - mse: 0.0282\n",
      "Epoch 39/100\n",
      "5000/5000 [==============================] - 1s 254us/step - loss: 0.0274 - mae: 0.0704 - mse: 0.0274\n",
      "Epoch 40/100\n",
      "5000/5000 [==============================] - 1s 272us/step - loss: 0.0267 - mae: 0.0696 - mse: 0.0267\n",
      "Epoch 41/100\n",
      "5000/5000 [==============================] - 1s 292us/step - loss: 0.0261 - mae: 0.0696 - mse: 0.0261\n",
      "Epoch 42/100\n",
      "5000/5000 [==============================] - 1s 284us/step - loss: 0.0256 - mae: 0.0691 - mse: 0.0256\n",
      "Epoch 43/100\n",
      "5000/5000 [==============================] - 1s 280us/step - loss: 0.0249 - mae: 0.0689 - mse: 0.0249\n",
      "Epoch 44/100\n",
      "5000/5000 [==============================] - 1s 262us/step - loss: 0.0245 - mae: 0.0682 - mse: 0.0245\n",
      "Epoch 45/100\n",
      "5000/5000 [==============================] - 1s 293us/step - loss: 0.0241 - mae: 0.0683 - mse: 0.0241\n",
      "Epoch 46/100\n",
      "5000/5000 [==============================] - 1s 284us/step - loss: 0.0239 - mae: 0.0682 - mse: 0.0239\n",
      "Epoch 47/100\n",
      "5000/5000 [==============================] - 1s 253us/step - loss: 0.0233 - mae: 0.0674 - mse: 0.0233\n",
      "Epoch 48/100\n",
      "5000/5000 [==============================] - 2s 309us/step - loss: 0.0232 - mae: 0.0676 - mse: 0.0232\n",
      "Epoch 49/100\n",
      "5000/5000 [==============================] - 2s 310us/step - loss: 0.0227 - mae: 0.0667 - mse: 0.0227\n",
      "Epoch 50/100\n",
      "5000/5000 [==============================] - 1s 284us/step - loss: 0.0224 - mae: 0.0669 - mse: 0.0224\n",
      "Epoch 51/100\n",
      "5000/5000 [==============================] - 1s 268us/step - loss: 0.0223 - mae: 0.0663 - mse: 0.0223\n",
      "Epoch 52/100\n",
      "5000/5000 [==============================] - 1s 274us/step - loss: 0.0219 - mae: 0.0652 - mse: 0.0219\n",
      "Epoch 53/100\n",
      "5000/5000 [==============================] - 1s 256us/step - loss: 0.0218 - mae: 0.0653 - mse: 0.0218\n",
      "Epoch 54/100\n",
      "5000/5000 [==============================] - 2s 305us/step - loss: 0.0214 - mae: 0.0647 - mse: 0.0214\n",
      "Epoch 55/100\n",
      "5000/5000 [==============================] - 1s 262us/step - loss: 0.0212 - mae: 0.0636 - mse: 0.0212\n",
      "Epoch 56/100\n",
      "5000/5000 [==============================] - 2s 302us/step - loss: 0.0210 - mae: 0.0649 - mse: 0.0210\n",
      "Epoch 57/100\n",
       "5000/5000 [==============================] - 1s 288us/step - loss: 0.0208 - mae: 0.0628 - mse: 0.0208\n",
      "Epoch 58/100\n",
      "5000/5000 [==============================] - 2s 396us/step - loss: 0.0205 - mae: 0.0627 - mse: 0.0205\n",
      "Epoch 59/100\n",
      "5000/5000 [==============================] - 2s 373us/step - loss: 0.0204 - mae: 0.0621 - mse: 0.0204\n",
      "Epoch 60/100\n",
      "5000/5000 [==============================] - 2s 333us/step - loss: 0.0201 - mae: 0.0621 - mse: 0.0201\n",
      "Epoch 61/100\n",
      "5000/5000 [==============================] - 2s 360us/step - loss: 0.0199 - mae: 0.0611 - mse: 0.0199\n",
      "Epoch 62/100\n",
      "5000/5000 [==============================] - 2s 323us/step - loss: 0.0199 - mae: 0.0607 - mse: 0.0199\n",
      "Epoch 63/100\n",
      "5000/5000 [==============================] - 1s 287us/step - loss: 0.0196 - mae: 0.0595 - mse: 0.0196\n",
      "Epoch 64/100\n",
      "5000/5000 [==============================] - 2s 311us/step - loss: 0.0194 - mae: 0.0604 - mse: 0.0194\n",
      "Epoch 65/100\n",
      "5000/5000 [==============================] - 2s 333us/step - loss: 0.0192 - mae: 0.0599 - mse: 0.0192\n",
      "Epoch 66/100\n",
      "5000/5000 [==============================] - 1s 290us/step - loss: 0.0190 - mae: 0.0583 - mse: 0.0190\n",
      "Epoch 67/100\n",
      "5000/5000 [==============================] - 1s 266us/step - loss: 0.0188 - mae: 0.0582 - mse: 0.0188\n",
      "Epoch 68/100\n",
      "5000/5000 [==============================] - 1s 252us/step - loss: 0.0186 - mae: 0.0586 - mse: 0.0186\n",
      "Epoch 69/100\n",
      "5000/5000 [==============================] - 2s 403us/step - loss: 0.0185 - mae: 0.0565 - mse: 0.0185\n",
      "Epoch 70/100\n",
      "5000/5000 [==============================] - 1s 268us/step - loss: 0.0182 - mae: 0.0570 - mse: 0.0182\n",
      "Epoch 71/100\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "5000/5000 [==============================] - 2s 305us/step - loss: 0.0182 - mae: 0.0563 - mse: 0.0182\n",
      "Epoch 72/100\n",
      "5000/5000 [==============================] - 2s 308us/step - loss: 0.0180 - mae: 0.0556 - mse: 0.0180\n",
      "Epoch 73/100\n",
      "5000/5000 [==============================] - 1s 280us/step - loss: 0.0180 - mae: 0.0556 - mse: 0.0180\n",
      "Epoch 74/100\n",
      "5000/5000 [==============================] - 1s 280us/step - loss: 0.0180 - mae: 0.0559 - mse: 0.0180\n",
      "Epoch 75/100\n",
      "5000/5000 [==============================] - 1s 274us/step - loss: 0.0176 - mae: 0.0548 - mse: 0.0176\n",
      "Epoch 76/100\n",
      "5000/5000 [==============================] - 1s 286us/step - loss: 0.0176 - mae: 0.0546 - mse: 0.0176\n",
      "Epoch 77/100\n",
      "5000/5000 [==============================] - 1s 292us/step - loss: 0.0174 - mae: 0.0546 - mse: 0.0174\n",
      "Epoch 78/100\n",
      "5000/5000 [==============================] - 1s 274us/step - loss: 0.0173 - mae: 0.0537 - mse: 0.0173\n",
      "Epoch 79/100\n",
      "5000/5000 [==============================] - 1s 289us/step - loss: 0.0171 - mae: 0.0532 - mse: 0.0171\n",
      "Epoch 80/100\n",
      "5000/5000 [==============================] - 1s 292us/step - loss: 0.0170 - mae: 0.0540 - mse: 0.0170\n",
      "Epoch 81/100\n",
      "5000/5000 [==============================] - 2s 320us/step - loss: 0.0170 - mae: 0.0525 - mse: 0.0170\n",
      "Epoch 82/100\n",
      "5000/5000 [==============================] - 1s 278us/step - loss: 0.0168 - mae: 0.0530 - mse: 0.0168\n",
      "Epoch 83/100\n",
      "5000/5000 [==============================] - 1s 279us/step - loss: 0.0168 - mae: 0.0527 - mse: 0.0168\n",
      "Epoch 84/100\n",
      "5000/5000 [==============================] - 1s 292us/step - loss: 0.0166 - mae: 0.0519 - mse: 0.0166\n",
      "Epoch 85/100\n",
      "5000/5000 [==============================] - 1s 289us/step - loss: 0.0166 - mae: 0.0529 - mse: 0.0166\n",
      "Epoch 86/100\n",
      "5000/5000 [==============================] - 1s 288us/step - loss: 0.0165 - mae: 0.0521 - mse: 0.0165\n",
      "Epoch 87/100\n",
      "5000/5000 [==============================] - 1s 284us/step - loss: 0.0164 - mae: 0.0515 - mse: 0.0164\n",
      "Epoch 88/100\n",
      "5000/5000 [==============================] - 1s 285us/step - loss: 0.0163 - mae: 0.0524 - mse: 0.0163\n",
      "Epoch 89/100\n",
      "5000/5000 [==============================] - 1s 281us/step - loss: 0.0162 - mae: 0.0511 - mse: 0.0162\n",
      "Epoch 90/100\n",
      "5000/5000 [==============================] - 1s 273us/step - loss: 0.0161 - mae: 0.0514 - mse: 0.0161\n",
      "Epoch 91/100\n",
      "5000/5000 [==============================] - 1s 293us/step - loss: 0.0161 - mae: 0.0517 - mse: 0.0161\n",
      "Epoch 92/100\n",
      "5000/5000 [==============================] - 1s 289us/step - loss: 0.0160 - mae: 0.0509 - mse: 0.0160\n",
      "Epoch 93/100\n",
      "5000/5000 [==============================] - 1s 280us/step - loss: 0.0158 - mae: 0.0515 - mse: 0.0158\n",
      "Epoch 94/100\n",
      "5000/5000 [==============================] - 2s 304us/step - loss: 0.0159 - mae: 0.0505 - mse: 0.0159\n",
      "Epoch 95/100\n",
      "5000/5000 [==============================] - 2s 311us/step - loss: 0.0158 - mae: 0.0510 - mse: 0.0158\n",
      "Epoch 96/100\n",
      "5000/5000 [==============================] - 2s 301us/step - loss: 0.0159 - mae: 0.0520 - mse: 0.0159\n",
      "Epoch 97/100\n",
      "5000/5000 [==============================] - 1s 289us/step - loss: 0.0156 - mae: 0.0501 - mse: 0.0156\n",
      "Epoch 98/100\n",
      "5000/5000 [==============================] - 1s 291us/step - loss: 0.0156 - mae: 0.0507 - mse: 0.0156\n",
      "Epoch 99/100\n",
      "5000/5000 [==============================] - 1s 288us/step - loss: 0.0156 - mae: 0.0504 - mse: 0.0156\n",
      "Epoch 100/100\n",
      "5000/5000 [==============================] - 1s 284us/step - loss: 0.0156 - mae: 0.0508 - mse: 0.0156\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<keras.callbacks.callbacks.History at 0x1a4851dda0>"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "lm.fit(X, Y)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Compute the sensitivities"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Assumes that the activation function is tanh\n",
    "def sensitivities(lm, X):\n",
    "    \n",
    "    W1 = lm.model.get_weights()[0]\n",
    "    b1 = lm.model.get_weights()[1]\n",
    "    W2 = lm.model.get_weights()[2]\n",
    "    b2 = lm.model.get_weights()[3]\n",
    "    \n",
    "    M = np.shape(X)[0]\n",
    "    p = np.shape(X)[1]\n",
    "\n",
     "    beta = np.zeros((M, p+1), dtype='float32')          # intercept and first-order sensitivities per observation\n",
     "    beta_interact = np.zeros((M, p, p), dtype='float32') # pairwise (Hessian) interaction terms per observation\n",
    "    \n",
    "    beta[:, 0] = (np.dot(np.transpose(W2), np.tanh(b1)) + b2)[0] # intercept \\beta_0= F_{W,b}(0)\n",
    "    for i in range(M):\n",
    " \n",
    "        Z1 = np.tanh(np.dot(np.transpose(W1), np.transpose(X[i,])) + b1)\n",
    "        \n",
     "        D = np.diag(1 - Z1**2)                     # diag of tanh'(z) = 1 - tanh(z)^2\n",
     "        D_prime = np.diag(-2 * Z1 * (1 - Z1**2))   # diag of tanh''(z), needed for the interaction term\n",
    "          \n",
    "        for j in range(p):  \n",
    "            beta[i, j+1] = np.dot(np.transpose(W2), np.dot(D, W1[j]))\n",
    "            # Interaction term\n",
    "            for k in range(p):\n",
    "                beta_interact[i, j, k ] = np.dot(np.transpose(W2), np.dot(np.diag(W1[j]), np.dot(D_prime, W1[k])))  \n",
    "    \n",
    "    return beta, beta_interact"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [],
   "source": [
    "beta, beta_inter = sensitivities(lm, X)"
   ]
  },
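  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check (not part of the original derivation), the analytic first-order sensitivities can be compared against central finite differences of the fitted model's predictions. The helper `fd_beta` and the step size `eps` below are illustrative choices, not part of the book's code; agreement with `beta[:, 1:]` should be close but not exact."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical sanity check: approximate the first-order sensitivities with\n",
    "# central finite differences and compare against the analytic beta[:, 1:].\n",
    "# eps is an illustrative step size.\n",
    "def fd_beta(lm, X, eps=1e-4):\n",
    "    M, p = X.shape\n",
    "    fd = np.zeros((M, p), dtype='float32')\n",
    "    for j in range(p):\n",
    "        Xp, Xm = X.copy(), X.copy()\n",
    "        Xp[:, j] += eps   # bump feature j up\n",
    "        Xm[:, j] -= eps   # bump feature j down\n",
    "        fd[:, j] = (lm.predict(Xp) - lm.predict(Xm)) / (2 * eps)\n",
    "    return fd\n",
    "\n",
    "print(np.max(np.abs(fd_beta(lm, X) - beta[:, 1:])))  # should be small\n"
   ]
  },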
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Check that the intercept is close to zero and the coefficients are close to one"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[-0.04780496  1.0240353   1.0067265 ]\n"
     ]
    }
   ],
   "source": [
    "print(np.mean(beta, axis=0))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[1.7955754e-06 9.7666699e-01 9.8022658e-01]\n"
     ]
    }
   ],
   "source": [
    "print(np.std(beta, axis=0))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[[-0.01265506  0.9733205 ]\n",
      " [ 0.9733205  -0.02740895]]\n"
     ]
    }
   ],
   "source": [
    "print(np.mean(beta_inter, axis=0)) # off-diagonals are interaction terms"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
