{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "fFIsBLlF7YFv"
   },
   "source": [
    "# Machine Learning Estimators for Wage Prediction"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ssGRQl-d7U9O"
   },
   "source": [
    "We illustrate how to predict an outcome variable $Y$ in a high-dimensional setting, where the number of covariates $p$ is large in relation to the sample size $n$. So far we have used linear prediction rules, e.g. Lasso regression, for estimation.\n",
    "Now, we also consider nonlinear prediction rules including tree-based methods."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "yYmcd6mN7VCV"
   },
   "source": [
    "## Data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "1n1-LWsu53N6"
   },
   "source": [
    "Again, we consider data from the U.S. March Supplement of the Current Population Survey (CPS) in 2015.\n",
     "The preprocessed sample consists of $5150$ never-married individuals."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "1or5aUNr7yTv"
   },
   "source": [
     "We load the data directly from the following URL (alternatively, download the file and point `pd.read_csv` to a local copy): https://raw.githubusercontent.com/CausalAIBook/MetricsMLNotebooks/main/data/wage2015_subsample_inference.csv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "57TFoHNk8BIg"
   },
   "outputs": [],
   "source": [
    "# Import relevant packages\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.model_selection import KFold\n",
    "from sklearn.pipeline import make_pipeline\n",
    "from sklearn.linear_model import LassoCV, RidgeCV, ElasticNetCV, LinearRegression\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "import patsy\n",
    "import warnings\n",
    "from sklearn.base import BaseEstimator\n",
    "warnings.simplefilter('ignore')\n",
    "np.random.seed(1234)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "5tu2dsStXzED"
   },
   "outputs": [],
   "source": [
    "file = \"https://raw.githubusercontent.com/CausalAIBook/MetricsMLNotebooks/main/data/wage2015_subsample_inference.csv\"\n",
    "data = pd.read_csv(file)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "i9zfg2VVXzEE"
   },
   "outputs": [],
   "source": [
    "data.describe()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "fUxf95F1B2EE"
   },
   "outputs": [],
   "source": [
    "y = np.log(data['wage']).values\n",
    "Z = data.drop(['wage', 'lwage'], axis=1)\n",
    "Z.columns"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "v2E-oxA8DQqH"
   },
   "source": [
     "The following figure shows the hourly wage distribution from the US survey data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "4QJc_mnlB2KN"
   },
   "outputs": [],
   "source": [
    "plt.hist(data.wage, bins=np.arange(0, 350, 20))\n",
    "plt.xlabel('hourly wage')\n",
    "plt.ylabel('Frequency')\n",
    "plt.title('Empirical wage distribution from the US survey data')\n",
    "plt.ylim((0, 3000))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "TvMhq0RNDL43"
   },
   "source": [
     "Wages show a high degree of skewness. Hence, almost all studies work with log-transformed wages."
   ]
  },
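  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick illustration of why the log transform helps (a minimal sketch on simulated lognormal data, not the survey sample; the helper `skewness` is defined here just for this example), taking logs removes most of the right skew:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "# lognormal draws: right-skewed, qualitatively like the wage distribution\n",
    "sim_wage = np.exp(rng.normal(3, 0.7, size=5000))\n",
    "\n",
    "\n",
    "def skewness(x):\n",
    "    # standardized third central moment\n",
    "    return np.mean((x - x.mean())**3) / x.std()**3\n",
    "\n",
    "\n",
    "print(skewness(sim_wage), skewness(np.log(sim_wage)))"
   ]
  },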
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "xBL1FmvgDV3f"
   },
   "source": [
    "## Analysis"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "nDbY6BAuDYVD"
   },
   "source": [
     "Due to the skewness of the data, we consider log wages, which leads to the following regression model\n",
    "\n",
    "$$\\log(\\operatorname{wage}) = g(Z) + \\epsilon.$$"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "jNACoPwVDdpK"
   },
   "source": [
     "We will estimate two sets of prediction rules: linear and nonlinear models.\n",
    "In linear models, we estimate the prediction rule of the form\n",
    "\n",
    "$$\\hat g(Z) = \\hat \\beta'X.$$\n",
    "Again, we generate $X$ in two ways:\n",
    "\n",
    "1. Basic Model:   $X$ consists of a set of raw regressors (e.g. gender, experience, education indicators, regional indicators).\n",
    "\n",
    "\n",
    "2. Flexible Model:  $X$ consists of all raw regressors from the basic model plus occupation and industry indicators, transformations (e.g., $\\operatorname{exp}^2$ and $\\operatorname{exp}^3$) and additional two-way interactions."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "iDM3apFhDlgf"
   },
   "source": [
     "To evaluate out-of-sample performance, we first split the data into training and test sets, and we use the following helper function to compute evaluation metrics."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "LQAKKE4iXzEF"
   },
   "outputs": [],
   "source": [
    "train_idx, test_idx = train_test_split(np.arange(len(y)), test_size=0.25, random_state=123)\n",
    "y_train, y_test = y[train_idx], y[test_idx]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "d8D89aBZXzEF"
   },
   "outputs": [],
   "source": [
    "Zbase = patsy.dmatrix('0 + sex + exp1 + shs + hsg+ scl + clg + mw + so + we + C(occ2) + C(ind2)',\n",
    "                      Z, return_type='dataframe').values\n",
    "X_train, X_test = Zbase[train_idx], Zbase[test_idx]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Iy9SILlkXzEF"
   },
   "outputs": [],
   "source": [
    "Zflex = patsy.dmatrix('0 + sex + (exp1+exp2+exp3+exp4)*(shs+hsg+scl+clg+C(occ2)+C(ind2)+mw+so+we)',\n",
    "                      Z, return_type='dataframe').values\n",
    "Xflex_train, Xflex_test = Zflex[train_idx], Zflex[test_idx]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "rlMPubjEXzEG"
   },
   "outputs": [],
   "source": [
    "def metrics(X_test, y_test, estimator):\n",
    "    mse = np.mean((y_test - estimator.predict(X_test))**2)\n",
    "    semse = np.std((y_test - estimator.predict(X_test))**2) / np.sqrt(len(y_test))\n",
    "    r2 = 1 - mse / np.var(y_test)\n",
     "    print(f'MSE: {mse:.4f}, S.E.: {semse:.4f}, R^2: {r2:.4f}')\n",
    "    return mse, semse, r2\n",
    "\n",
    "\n",
    "results = {}  # dictionary that will store all the metric results from each estimator"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "PPt0pZtgOMSn"
   },
   "source": [
     "We start with simple OLS regression. We fit the basic and flexible models to the training data and compute the MSE and R-squared on the test sample."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "whmRF4T5XzEG"
   },
   "source": [
    "### Low dimensional specification"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "DcTekwMGXzEG"
   },
   "outputs": [],
   "source": [
    "lr_base = LinearRegression().fit(X_train, y_train)\n",
    "ypred_ols = lr_base.predict(X_test)\n",
    "results['ols'] = metrics(X_test, y_test, lr_base)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "aUyNS3ypXzEG"
   },
   "source": [
    "### High-dimensional specification"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "jX09oKqhRJgz"
   },
   "source": [
    "We repeat the same procedure for the flexible model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "PZJ51jTwXzEG"
   },
   "outputs": [],
   "source": [
    "lr_flex = LinearRegression().fit(Xflex_train, y_train)\n",
    "ypred_ols_flex = lr_flex.predict(Xflex_test)\n",
    "results['ols_flex'] = metrics(Xflex_test, y_test, lr_flex)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "kP7rK1KhXzEG"
   },
   "source": [
    "### Penalized Regressions"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "-4QL8R2OUbT_"
   },
   "source": [
     "We observe that OLS regression works better for the basic model, which has a smaller $p/n$ ratio. We proceed with penalized regressions."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Sm9CM8vcXzEG"
   },
   "source": [
     "First, we try a pure `l1` penalty, tuned using cross-validation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Yb3ov7yPXzEH"
   },
   "outputs": [],
   "source": [
    "cv = KFold(n_splits=5, shuffle=True, random_state=123)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "VKtjuvvfXzEH"
   },
   "outputs": [],
   "source": [
    "lcv = make_pipeline(StandardScaler(), LassoCV(cv=cv, random_state=123)).fit(X_train, y_train)\n",
    "ypred_lcv = lcv.predict(X_test)\n",
    "results['lcv'] = metrics(X_test, y_test, lcv)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "g_wjI_W7XzEH"
   },
   "outputs": [],
   "source": [
    "lcv_flex = make_pipeline(StandardScaler(), LassoCV(cv=cv, random_state=123)).fit(Xflex_train, y_train)\n",
    "ypred_lcv_flex = lcv_flex.predict(Xflex_test)\n",
    "results['lcv_flex'] = metrics(Xflex_test, y_test, lcv_flex)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "WV4rW976XzEH"
   },
   "source": [
     "Then, we try a pure `l2` penalty, tuned using cross-validation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Ka-hKNo5XzEH"
   },
   "outputs": [],
   "source": [
    "rcv = make_pipeline(StandardScaler(), RidgeCV(cv=cv)).fit(X_train, y_train)\n",
    "ypred_rcv = rcv.predict(X_test)\n",
    "results['rcv'] = metrics(X_test, y_test, rcv)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "9QIH9QmPXzEH"
   },
   "outputs": [],
   "source": [
    "rcv_flex = make_pipeline(StandardScaler(), RidgeCV(cv=cv)).fit(Xflex_train, y_train)\n",
    "ypred_rcv_flex = rcv_flex.predict(Xflex_test)\n",
    "results['rcv_flex'] = metrics(Xflex_test, y_test, rcv_flex)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "gQveEbZPXzEH"
   },
   "source": [
     "Finally, we try an equal combination of the two penalties, with the overall penalty weight tuned using cross-validation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "z4C9ppBpXzEH"
   },
   "outputs": [],
   "source": [
    "ecv = make_pipeline(StandardScaler(), ElasticNetCV(cv=cv, random_state=123)).fit(X_train, y_train)\n",
    "ypred_ecv = ecv.predict(X_test)\n",
    "results['ecv'] = metrics(X_test, y_test, ecv)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "_9rY6AfzXzEH"
   },
   "outputs": [],
   "source": [
    "ecv_flex = make_pipeline(StandardScaler(), ElasticNetCV(cv=cv, random_state=123)).fit(Xflex_train, y_train)\n",
    "ypred_ecv_flex = ecv_flex.predict(Xflex_test)\n",
    "results['ecv_flex'] = metrics(Xflex_test, y_test, ecv_flex)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "iv7sJhnOXzEH"
   },
   "source": [
     "We can also try a variant of the `l1` penalty, where the penalty weight is chosen based on theoretical derivations. We use a Python implementation that replicates the main function of the `hdm` R package, written by [Max Huppertz](https://maxhuppertz.github.io/code/) and available in this [repository](https://github.com/maxhuppertz/hdmpy). We clone the repository below; it also requires the ***multiprocess*** package."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "FFgyTMz9YU1w"
   },
   "outputs": [],
   "source": [
    "!git clone https://github.com/maxhuppertz/hdmpy.git\n",
    "!pip install multiprocess"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "dLOYDwPtXzEL"
   },
   "outputs": [],
   "source": [
    "import hdmpy\n",
    "from sklearn.base import RegressorMixin\n",
    "\n",
    "\n",
    "# We wrap the package so that it has the familiar sklearn API\n",
    "class RLasso(BaseEstimator, RegressorMixin):\n",
    "\n",
    "    def __init__(self, *, post=True):\n",
    "        self.post = post\n",
    "\n",
    "    def fit(self, X, y):\n",
    "        self.rlasso_ = hdmpy.rlasso(X, y, post=self.post)\n",
    "        return self\n",
    "\n",
    "    @property\n",
    "    def coef_(self):\n",
    "        return np.array(self.rlasso_.est['beta']).flatten()\n",
    "\n",
    "    @property\n",
    "    def intercept_(self):\n",
    "        return np.array(self.rlasso_.est['intercept'])\n",
    "\n",
    "    def predict(self, X):\n",
    "        return X @ self.coef_ + self.intercept_"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "VUR0R5jkXzEM"
   },
   "outputs": [],
   "source": [
    "lasso = make_pipeline(StandardScaler(), RLasso(post=False)).fit(X_train, y_train)\n",
    "ypred_lasso = lasso.predict(X_test)\n",
    "results['lasso'] = metrics(X_test, y_test, lasso)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "gr5y5MdrXzEM"
   },
   "outputs": [],
   "source": [
    "lasso_flex = make_pipeline(StandardScaler(), RLasso(post=False)).fit(Xflex_train, y_train)\n",
    "ypred_lasso_flex = lasso_flex.predict(Xflex_test)\n",
    "results['lasso_flex'] = metrics(Xflex_test, y_test, lasso_flex)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "tQfiiFbcXzEM"
   },
   "outputs": [],
   "source": [
    "postlasso = make_pipeline(StandardScaler(), RLasso(post=True)).fit(X_train, y_train)\n",
    "ypred_postlasso = postlasso.predict(X_test)\n",
    "results['postlasso'] = metrics(X_test, y_test, postlasso)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "JwgiRcnOXzEM"
   },
   "outputs": [],
   "source": [
    "postlasso_flex = make_pipeline(StandardScaler(), RLasso(post=True)).fit(Xflex_train, y_train)\n",
    "ypred_postlasso_flex = postlasso_flex.predict(Xflex_test)\n",
    "results['postlasso_flex'] = metrics(Xflex_test, y_test, postlasso_flex)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "sy-KINKTXzEM"
   },
   "source": [
    "# Non-Linear Models"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "02dJMG8WXzEM"
   },
   "source": [
     "Besides linear regression models, we consider nonlinear regression models to build a predictive model. We apply regression trees, random forests, boosted trees, and neural nets to estimate the regression function $g(Z)$."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ZutkU9uqXzEM"
   },
   "source": [
    "## Regression Trees"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Iw8iY-JZXzEN"
   },
   "source": [
     "We fit a regression tree to the training data using the basic model. The parameter `ccp_alpha` controls the complexity of the regression tree, i.e. how aggressively the tree is pruned."
   ]
  },
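  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a side illustration of how `ccp_alpha` governs cost-complexity pruning (a minimal sketch on simulated data, not the wage sample), larger penalty values yield smaller trees:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.tree import DecisionTreeRegressor\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "Xs = rng.normal(size=(500, 3))\n",
    "ys = Xs[:, 0] + 0.1 * rng.normal(size=500)\n",
    "\n",
    "# the number of leaves shrinks as the cost-complexity penalty grows\n",
    "for alpha in [0.0, 0.001, 0.01, 0.1]:\n",
    "    tree = DecisionTreeRegressor(ccp_alpha=alpha, random_state=123).fit(Xs, ys)\n",
    "    print(alpha, tree.get_n_leaves())"
   ]
  },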
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Tv41p-N-XzEN"
   },
   "outputs": [],
   "source": [
    "from sklearn.tree import DecisionTreeRegressor"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "vS7hcZ4WXzEN"
   },
   "outputs": [],
   "source": [
    "dtr = DecisionTreeRegressor(ccp_alpha=0.001, min_samples_leaf=5, random_state=123).fit(X_train, y_train)\n",
    "ypred_dtr = dtr.predict(X_test)\n",
    "results['dtr'] = metrics(X_test, y_test, dtr)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "I7_I1E5MXzEN"
   },
   "source": [
    "## Random Forests"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "NK8q3UN-XzEN"
   },
   "outputs": [],
   "source": [
    "from sklearn.ensemble import RandomForestRegressor"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "xcDIsHugXzEN"
   },
   "outputs": [],
   "source": [
    "rf = RandomForestRegressor(n_estimators=2000, min_samples_leaf=5, random_state=123)\n",
    "rf.fit(X_train, y_train)\n",
    "ypred_rf = rf.predict(X_test)\n",
    "results['rf'] = metrics(X_test, y_test, rf)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "zdoBNOYlXzEN"
   },
   "source": [
    "## Gradient Boosted Forests"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Xzl07xdkXzEN"
   },
   "outputs": [],
   "source": [
    "from sklearn.ensemble import GradientBoostingRegressor"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "riShnZGtXzEN"
   },
   "outputs": [],
   "source": [
    "gbf = GradientBoostingRegressor(n_estimators=1000, learning_rate=.01,\n",
    "                                subsample=.5, max_depth=2, random_state=123)\n",
    "gbf.fit(X_train, y_train)\n",
    "ypred_gbf = gbf.predict(X_test)\n",
    "results['gbf'] = metrics(X_test, y_test, gbf)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "obDSF4NuXzEO"
   },
   "source": [
    "## NNets"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "kqY6JgTuXzEO"
   },
   "outputs": [],
   "source": [
    "from sklearn.neural_network import MLPRegressor"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "_P4V7pKqXzEO"
   },
   "outputs": [],
   "source": [
    "nnet = MLPRegressor((200, 20,), 'relu',\n",
    "                    learning_rate_init=0.01,\n",
    "                    batch_size=10, max_iter=10,\n",
    "                    random_state=123)\n",
    "nnet.fit(X_train, y_train)\n",
    "ypred_nnet = nnet.predict(X_test)\n",
    "results['nnet'] = metrics(X_test, y_test, nnet)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "sZXvw3M-c0qX"
   },
   "source": [
    "### Using the PyTorch Neural Network Library and its Sklearn API Skorch\n",
    "\n",
    "We first need to install skorch."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "eEESJO5Zc2fE"
   },
   "outputs": [],
   "source": [
    "!pip install skorch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "3GaKrfy3XzEO"
   },
   "outputs": [],
   "source": [
    "import skorch\n",
    "from skorch import NeuralNetRegressor\n",
    "import torch.nn as nn\n",
    "import torch"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "7Ea4BEJfXzEO"
   },
   "outputs": [],
   "source": [
    "arch = nn.Sequential(nn.Linear(X_train.shape[1], 200), nn.ReLU(),\n",
    "                     nn.Linear(200, 20), nn.ReLU(),\n",
    "                     nn.Linear(20, 1))\n",
    "nnet_early = NeuralNetRegressor(arch, lr=0.01, batch_size=10,\n",
    "                                max_epochs=100,\n",
    "                                optimizer=torch.optim.Adam,\n",
    "                                callbacks=[skorch.callbacks.EarlyStopping()])\n",
    "nnet_early.fit(X_train.astype(np.float32), y_train.reshape(-1, 1).astype(np.float32))\n",
    "ypred_nnet_early = nnet_early.predict(X_test.astype(np.float32)).flatten()\n",
    "results['nnet_early'] = metrics(X_test.astype(np.float32),\n",
    "                                y_test.reshape(-1, 1).astype(np.float32), nnet_early)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Il9blxMsXzEO"
   },
   "outputs": [],
   "source": [
    "df = pd.DataFrame(results).T\n",
    "df.columns = ['MSE', 'S.E. MSE', '$R^2$']\n",
    "df"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "uiNjydByXzEO"
   },
   "source": [
    "Above, we displayed the results for a single split of data into the training and testing part. The table shows the test MSE in column 1 as well as the standard error in column 2 and the test $R^2$\n",
    "in column 3. We see that the prediction rule produced by Cross-Validated Lasso using the flexible model performs the best here, giving the lowest test MSE. Cross-Validated Ridge performs nearly as well. For the majority of the considered methods, test MSEs are within one standard error of each other. Remarkably, OLS with just the basic variables performs extremely well. However, OLS on a flexible model with many regressors performs very poorly giving the highest test MSE. It is worth noticing that, as this is just a simple illustration that is meant to be relatively quick, the nonlinear methods are not tuned. Thus, there is potential to improve the performance of the nonlinear methods we used in the analysis."
   ]
  },
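  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To indicate one possible direction for improving the untuned nonlinear methods (a hedged sketch on simulated data, not part of the original analysis), hyperparameters can be tuned with `GridSearchCV`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.ensemble import RandomForestRegressor\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "Xs = rng.normal(size=(400, 5))\n",
    "ys = Xs[:, 0] * Xs[:, 1] + rng.normal(scale=0.5, size=400)\n",
    "\n",
    "# tune the leaf size of a small forest by 3-fold cross-validation\n",
    "search = GridSearchCV(RandomForestRegressor(n_estimators=100, random_state=123),\n",
    "                      {'min_samples_leaf': [1, 5, 20]}, cv=3)\n",
    "search.fit(Xs, ys)\n",
    "print(search.best_params_)"
   ]
  },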
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "5kqZBjN-XzEO"
   },
   "source": [
    "# Combining Predictions with Stacking"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ZajLaRcAXzEP"
   },
   "source": [
     "In the final step, we can build a prediction model by combining the strengths of the models we considered so far. We consider stacking, which forms its prediction rule as\n",
    "\t$$ f(x) = \\sum_{k=1}^K \\alpha_k f_k(x) $$\n",
    "where the $f_k$'s denote our prediction rules from the table above and the $\\alpha_k$'s are the corresponding weights. We choose to estimate the weights here without penalization."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "mRUGKnMxXzEP"
   },
   "outputs": [],
   "source": [
    "method_name = ['OLS', 'OLS (flexible)', 'CV Lasso', 'CV Lasso (flexible)',\n",
    "               'CV Ridge', 'CV Ridge (flexible)', 'CV ElasticNet', 'CV ElasticNet (flexible)',\n",
    "               'Lasso', 'Lasso (flexible)', 'Post-Lasso OLS', 'Post-Lasso OLS (flexible)',\n",
    "               'Decision Tree', 'Random Forest', 'Boosted Forest', 'Neural Net', 'Neural Net (early stopping)']\n",
    "ypreds = np.stack((ypred_ols, ypred_ols_flex, ypred_lcv, ypred_lcv_flex,\n",
    "                   ypred_rcv, ypred_rcv_flex, ypred_ecv, ypred_ecv_flex,\n",
    "                   ypred_lasso, ypred_lasso_flex, ypred_postlasso, ypred_postlasso_flex,\n",
    "                   ypred_dtr, ypred_rf, ypred_gbf, ypred_nnet, ypred_nnet_early), axis=-1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "eB2263h0XzEP"
   },
   "outputs": [],
   "source": [
    "stack_ols = LinearRegression().fit(ypreds, y_test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "xA-Z-89zXzEP"
   },
   "outputs": [],
   "source": [
    "pd.DataFrame({'weight': stack_ols.coef_}, index=method_name)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "F88Zs6FbXzEP"
   },
   "source": [
     "We can calculate the test sample MSE. Note that for an unbiased performance evaluation, we should have set aside a third sample on which to validate the stacked model, since the stacking weights were fit on the test sample."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "4TfZSUXUXzEP"
   },
   "outputs": [],
   "source": [
    "mse = np.mean((y_test - stack_ols.predict(ypreds))**2)\n",
    "r2 = 1 - mse / np.var(y_test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "qtxDHUUYXzEP"
   },
   "outputs": [],
   "source": [
    "mse, r2"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "mRpUh6GRXzEP"
   },
   "source": [
    "Alternatively, we can determine the weights via lasso regression."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "_vob37knXzEQ"
   },
   "outputs": [],
   "source": [
    "stack_lasso = RLasso(post=False).fit(ypreds, y_test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "9fgxOUysXzEQ"
   },
   "outputs": [],
   "source": [
    "pd.DataFrame({'weight': stack_lasso.coef_}, index=method_name)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "k-xiPNAXXzEQ"
   },
   "source": [
     "We can again calculate the test sample MSE. As before, for an unbiased performance evaluation, we should have set aside a third sample on which to validate the stacked model, since the stacking weights were fit on the test sample."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "lgwz7wpKXzEQ"
   },
   "outputs": [],
   "source": [
    "mse = np.mean((y_test - stack_lasso.predict(ypreds))**2)\n",
    "r2 = 1 - mse / np.var(y_test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "nU3_qwX5XzEQ"
   },
   "outputs": [],
   "source": [
    "mse, r2"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "kGj6WK3mXzEQ"
   },
   "source": [
     "# Redoing it in a more scikit-learn way"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "7tFniD-gXzEQ"
   },
   "source": [
     "We can also organize the analysis in a more idiomatic scikit-learn way, by defining a formula transformer and corresponding pipelines."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "bBKdXAUFXzEQ"
   },
   "outputs": [],
   "source": [
    "from sklearn.base import TransformerMixin, BaseEstimator\n",
    "\n",
    "\n",
    "class FormulaTransformer(TransformerMixin, BaseEstimator):\n",
    "\n",
    "    def __init__(self, formula):\n",
    "        self.formula = formula\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        mat = patsy.dmatrix(self.formula, X, return_type='matrix')\n",
    "        self.design_info = mat.design_info\n",
    "        return self\n",
    "\n",
    "    def transform(self, X, y=None):\n",
    "        return patsy.build_design_matrices([self.design_info], X)[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "Z43rHDifXzEQ"
   },
   "outputs": [],
   "source": [
    "base = FormulaTransformer('0 + sex + exp1 + shs + hsg+ scl + clg + mw + so + we + C(occ2) + C(ind2)')\n",
    "flex = FormulaTransformer('0 + sex + (exp1+exp2+exp3+exp4)*(shs+hsg+scl+clg+C(occ2)+C(ind2)+mw+so+we)')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "GWE7jPXhXzEQ"
   },
   "outputs": [],
   "source": [
    "methods = [('ols', make_pipeline(base, LinearRegression())),\n",
    "           ('ols_flex', make_pipeline(flex, LinearRegression())),\n",
    "           ('lasso', make_pipeline(base, StandardScaler(), RLasso(post=False))),\n",
    "           ('lasso_flex', make_pipeline(flex, StandardScaler(), RLasso(post=False))),\n",
    "           ('postlasso', make_pipeline(base, StandardScaler(), RLasso(post=True))),\n",
    "           ('postlasso_flex', make_pipeline(flex, StandardScaler(), RLasso(post=True))),\n",
    "           ('lcv', make_pipeline(base, StandardScaler(), LassoCV())),\n",
    "           ('lcv_flex', make_pipeline(flex, StandardScaler(), LassoCV())),\n",
    "           ('rcv', make_pipeline(base, StandardScaler(), RidgeCV())),\n",
    "           ('rcv_flex', make_pipeline(flex, StandardScaler(), RidgeCV())),\n",
    "           ('ecv', make_pipeline(base, StandardScaler(), ElasticNetCV())),\n",
    "           ('ecv_flex', make_pipeline(flex, StandardScaler(), ElasticNetCV())),\n",
    "           ('dtr', make_pipeline(base, DecisionTreeRegressor(ccp_alpha=0.001, min_samples_leaf=5,\n",
    "                                                             random_state=123))),\n",
    "           ('rf', make_pipeline(base, RandomForestRegressor(n_estimators=2000, min_samples_leaf=5,\n",
    "                                                            random_state=123))),\n",
    "           ('gbf', make_pipeline(base, GradientBoostingRegressor(n_estimators=1000, learning_rate=.01,\n",
    "                                                                 subsample=.5, max_depth=2,\n",
    "                                                                 random_state=123))),\n",
    "           ('nnet', make_pipeline(base, MLPRegressor((200, 20,), 'relu',\n",
    "                                                     learning_rate_init=0.01,\n",
    "                                                     batch_size=10, max_iter=10,\n",
    "                                                     random_state=123)))]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "OBz3vtXLXzER"
   },
   "outputs": [],
   "source": [
    "train_idx, test_idx = train_test_split(np.arange(len(y)), test_size=0.25, random_state=123)\n",
    "\n",
    "results = {}\n",
    "ypreds = np.zeros((len(test_idx), len(methods)))  # test predictions used for stacking\n",
    "\n",
    "for it, (name, estimator) in enumerate(methods):\n",
    "    estimator.fit(Z.iloc[train_idx], y[train_idx])\n",
    "    results[name] = metrics(Z.iloc[test_idx], y[test_idx], estimator)\n",
    "    ypreds[:, it] = estimator.predict(Z.iloc[test_idx])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "ytZaqT1DXzER"
   },
   "outputs": [],
   "source": [
    "df = pd.DataFrame(results).T\n",
    "df.columns = ['MSE', 'S.E. MSE', '$R^2$']\n",
    "df"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "TzQE9juDXzER"
   },
   "outputs": [],
   "source": [
    "stack_lasso = RLasso(post=False).fit(ypreds, y[test_idx])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "BdUXLQOcXzER"
   },
   "outputs": [],
   "source": [
    "pd.DataFrame({'weight': stack_lasso.coef_}, index=[name for name, _ in methods])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "GNHsG4trXzER"
   },
   "source": [
     "For an unbiased performance evaluation, we should have set aside a further evaluation sample that was not used to fit the stacking weights."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "-F6bq9oAXzER"
   },
   "outputs": [],
   "source": [
     "mse = np.mean((y[test_idx] - stack_lasso.predict(ypreds))**2)\n",
     "r2 = 1 - mse / np.var(y[test_idx])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "XJiaFjitXzER"
   },
   "outputs": [],
   "source": [
    "mse, r2"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "KySwyzsZXzER"
   },
   "source": [
    "### Sklearn also provides a Stacking API"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "i0Xg09jOXzER"
   },
   "source": [
     "The sklearn Stacking API wraps the stacking process. Here, we also use k-fold cross-validation instead of a single sample split."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "hF7Xj1IRXzES"
   },
   "outputs": [],
   "source": [
    "from sklearn.ensemble import StackingRegressor\n",
    "\n",
    "stack = StackingRegressor(methods,\n",
    "                          final_estimator=RLasso(),\n",
    "                          cv=3,\n",
    "                          verbose=3)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "6RVgVg8XXzES"
   },
   "source": [
     "We will construct a stacked ensemble using only the training data, for unbiased performance evaluation. The stacking regressor partitions the data into k folds, based on the `cv` parameter. For each fold, it trains each of the estimators in the `methods` parameter on the data outside the fold and predicts on the data in the fold. It then trains a `final_estimator` to predict the true outcome from the out-of-fold predictions of each method, which defines how the estimators are aggregated. In the end, all base estimators are re-fitted on the full training data, and the final predictor first predicts with each fitted base estimator and then aggregates using the fitted `final_estimator`."
   ]
  },
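  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the out-of-fold step concrete, here is a minimal sketch using `cross_val_predict`. This mirrors, but is not, the sklearn internals, and it assumes `methods`, `Z`, `y`, `train_idx`, and `RLasso` as defined above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch of the stacking mechanics described above:\n",
    "# 1. generate out-of-fold predictions for each base estimator,\n",
    "# 2. fit the final estimator on those predictions as features.\n",
    "from sklearn.model_selection import cross_val_predict\n",
    "\n",
    "oof = np.column_stack([cross_val_predict(est, Z.iloc[train_idx], y[train_idx], cv=3)\n",
    "                       for _, est in methods])\n",
    "manual_final = RLasso().fit(oof, y[train_idx])\n",
    "manual_final.coef_  # weights comparable to stack.final_estimator_.coef_ below"
   ]
  },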
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "po6zui0bXzES"
   },
   "outputs": [],
   "source": [
    "stack.fit(Z.iloc[train_idx], y[train_idx])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "umQ40SpSXzES"
   },
   "source": [
    "We can see the weight placed on each estimator by accessing the fitted final model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "8DOtieuPXzES"
   },
   "outputs": [],
   "source": [
    "pd.DataFrame({'weight': stack.final_estimator_.coef_}, index=[name for name, _ in methods])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "knIwZlwbXzES"
   },
   "source": [
    "Calculate out-of-sample performance metrics."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "t6mZE3SjXzES"
   },
   "outputs": [],
   "source": [
    "mse, semse, r2 = metrics(Z.iloc[test_idx], y[test_idx], stack)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "ILYgz-LGXzES"
   },
   "source": [
    "We find that this stacked estimator achieved the best out-of-sample performance."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "UEB4MPK7XzES"
   },
   "source": [
    "## FLAML AutoML Framework"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "HaAsbKvVXzES"
   },
   "outputs": [],
   "source": [
    "!pip install flaml"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "WamITPbbXzET"
   },
   "outputs": [],
   "source": [
    "from flaml import AutoML\n",
    "\n",
    "automl = make_pipeline(base, AutoML(task='regression', time_budget=60, early_stop=True,\n",
    "                                    eval_method='cv', n_splits=3, metric='r2',\n",
    "                                    verbose=3,))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "icvlA8NHXzET"
   },
   "outputs": [],
   "source": [
    "train_idx, test_idx = train_test_split(np.arange(len(y)), test_size=0.25, random_state=123)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "WMvkCNLjXzET"
   },
   "outputs": [],
   "source": [
    "automl.fit(Z.iloc[train_idx], y[train_idx])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "8NXhzJOQXzET"
   },
   "outputs": [],
   "source": [
    "mse, semse, r2 = metrics(Z.iloc[test_idx], y[test_idx], automl)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Xvb4i37KXzET"
   },
   "source": [
    "We see that the best model chosen by AutoML matches the performance of the stacked estimator we constructed on our own. Moreover, we can also do stacking within the AutoML framework."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "z3J6dBahXzET"
   },
   "outputs": [],
   "source": [
    "automl = make_pipeline(base, AutoML(task='regression', time_budget=60, early_stop=True,\n",
    "                                    eval_method='cv', n_splits=3, metric='r2',\n",
    "                                    verbose=3,\n",
    "                                    ensemble={'passthrough': False,  # False: final_estimator sees only the base predictions\n",
    "                                              'final_estimator': RLasso()}))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "zvi8uC-MXzET"
   },
   "outputs": [],
   "source": [
    "train_idx, test_idx = train_test_split(np.arange(len(y)), test_size=0.25, random_state=123)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "cGS9hrGUXzET"
   },
   "outputs": [],
   "source": [
    "automl.fit(Z.iloc[train_idx], y[train_idx])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "TBSQubj4XzET"
   },
   "outputs": [],
   "source": [
    "mse, semse, r2 = metrics(Z.iloc[test_idx], y[test_idx], automl)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "8iljQqWyXzEU"
   },
   "outputs": [],
   "source": [
    "automl = make_pipeline(base, AutoML(task='regression', time_budget=60, early_stop=True,\n",
    "                                    eval_method='cv', n_splits=3, metric='r2',\n",
    "                                    verbose=3,\n",
    "                                    ensemble={'passthrough': True,  # True: final_estimator also sees the raw features\n",
    "                                              'final_estimator': RLasso()}))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "68Ti3XnPXzEY"
   },
   "outputs": [],
   "source": [
    "train_idx, test_idx = train_test_split(np.arange(len(y)), test_size=0.25, random_state=123)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "wgju95y0XzEZ"
   },
   "outputs": [],
   "source": [
    "automl.fit(Z.iloc[train_idx], y[train_idx])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "wtjilTiQXzEZ"
   },
   "outputs": [],
   "source": [
    "mse, semse, r2 = metrics(Z.iloc[test_idx], y[test_idx], automl)"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "name": "PM2A_prediction",
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
