{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.display import display, HTML\n",
    "display(HTML(\"<style>.container { width:100% !important; }</style>\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Lecture 4 - Regression\n",
    "---\n",
    "\n",
    "### Content\n",
    "\n",
    "1. Intro to regression\n",
    "2. Assessing the goodness of fit\n",
    "\n",
    "### Learning Outcomes\n",
    "\n",
    "At the end of this lecture, you should be able to:\n",
    "\n",
    "* describe the purpose of linear regression\n",
    "* perform introductory regression model fitting using python libraries\n",
    "* explain the degree of fit of a linear regression model\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Linear Regression - An Introduction"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Often in real world applications we need to understand how the value of one variable is affected or determined by one or several other variables. \n",
    "\n",
    "This not only helps us to **understand and explain** the relationships of existing data and get a grasp of the situation, but it also enables us to make **predictions on future data**.\n",
    "\n",
    "We may for example want to know:\n",
    "\n",
    "* How does sales volume change with changes in price? How is this affected by changes in the weather?\n",
    "\n",
    "* How does the amount of a drug absorbed vary with the dosage and with the body weight of the patient? Does it depend on blood pressure?\n",
    "\n",
    "* How are the conversions on an e-commerce website affected by two different page titles in an A/B comparison? \n",
    "\n",
    "* How does the energy released by an earthquake vary with the depth of its epicenter?\n",
    "\n",
    "* How is the interest rate charged on a loan affected by credit history and by loan amount?\n",
    "\n",
    "* What exam mark is a student likely to achieve given their previous assignment marks?\n",
    "\n",
    "If we can find a **pattern** that accurately describes the relationship between these variables, then we can predict the latter, given the former. E.g. given a person's credit rating ($x_1$) and the requested loan amount ($x_2$), what will the likely interest rate ($y$) be?\n",
    "\n",
    "Answering the above questions requires us to create a **model** which describes the pattern in the data.  \n",
    "\n",
    "A model is a mathematical formula where one variable (response, usually $y$) varies depending on one or more independent variables (covariates, usually $x_i$). For example, the total number of Facebook friends a person has ($y$) might be related to the number of hours $x$ a person spends on Facebook a day.\n",
    "\n",
    "One of the most common and simplest models we can create is a **Linear Model**. With a linear model we make a big assumption: that the response variable changes linearly with changes in one or more other variables. \n",
    "\n",
    "\n",
    "While this is a big and simplistic assumption, it turns out that many real world problems can be modeled usefully in this way and the model thus works quite well.  Often data that don't appear to have a linear relationship can be transformed using simple mappings so that they do show a linear relationship.  This is very powerful and accordingly Linear Models have wide applicability. \n",
    "\n",
    "Note that linear modeling involves numerical and not categorical outcomes. Creating a Linear Model involves a technique known as **Linear Regression**. For numerical problems, linear regression is a wise initial choice of model; if a better fit is needed, other methods can be explored in subsequent attempts.\n",
    "\n",
    "Linear Regression is one of the foundational tools of Data Science. It is the first **machine learning algorithm** we will look at from the perspective of how to use it and interpret it, rather than from the mathematical perspective.\n",
    "\n",
    "---\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# patsy is used by statsmodels to parse R-style model formulas\n",
    "import patsy"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import matplotlib.pyplot as plt\n",
    "import matplotlib as mpl\n",
    "import seaborn as sns\n",
    "import numpy as np\n",
    "\n",
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "pd.options.display.max_columns = 50"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pylab import rcParams\n",
    "\n",
    "sns.set(style=\"ticks\")\n",
    "#sns.set_style(\"whitegrid\")\n",
    "rcParams['figure.dpi'] = 180\n",
    "rcParams['lines.linewidth'] = 2\n",
    "rcParams['axes.facecolor'] = 'white'\n",
    "rcParams['patch.edgecolor'] = 'white'\n",
    "rcParams['font.family'] = 'StixGeneral'\n",
    "rcParams['figure.figsize'] = 7,5\n",
    "rcParams['font.size'] = 15"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rcParams['axes.labelsize'] = 'large'\n",
    "rcParams['xtick.labelsize'] = 10\n",
    "rcParams['ytick.labelsize'] = 10"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rcParams['figure.figsize'] = 0.01,5\n",
    "x = [2.2, 4.3, 5.1, 5.8, 6.4, 8.0]\n",
    "y = [0.4, 10.1, 7.0, 10.9, 17.4, 18.5]\n",
    "x = np.array(x)\n",
    "y = np.array(y)\n",
    "plt.plot(x,y,'ro')\n",
    "plt.xlabel(\"X\")\n",
    "plt.ylabel(\"Y\")\n",
    "plt.ylim([-5,30])\n",
    "plt.xlim([0,0])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "rcParams['figure.figsize'] = 7,5\n",
    "\n",
    "x = [2.2, 4.3, 5.1, 5.8, 6.4, 8.0]\n",
    "y = [0.4, 10.1, 7.0, 10.9, 17.4, 18.5]\n",
    "x = np.array(x)\n",
    "y = np.array(y)\n",
    "plt.plot(x,y,'ro')\n",
    "plt.xlabel(\"X\")\n",
    "plt.ylabel(\"Y\")\n",
    "plt.ylim([-5,30])\n",
    "plt.xlim([0,10])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can build a model to characterize the relationship between $X$ and $Y$, recognizing that additional factors other than $X$ (the ones we have measured or are interested in) may influence the response variable $Y$.\n",
    "\n",
    "Our task is to find the optimal values for the $y$ intercept $\\beta_0$ and the coefficient for $x$ values $\\beta_1$:\n",
    "\n",
    "\n",
    "<div style=\"font-size: 150%;\">  \n",
    "$y_i = \\beta_0 + \\beta_1 x_i + \\epsilon_i$\n",
    "</div>\n",
    "\n",
    "where $\\epsilon_i$ represents the error term, so that the observed $y_i$ for a given $x_i$ will be in the vicinity of $\\beta_0 + \\beta_1 x_i \\pm \\epsilon_i$.\n",
    "\n",
    "The equation for the line in this example is approximately $y = -4.35 + 3.0 \\times x $. We will take this as our initial **model**. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "b0, b1 = (-4.35, 3.0)\n",
    "plt.plot(x, y, 'ro')\n",
    "plt.plot([0,10], [b0, b0+b1*10])\n",
    "plt.xlabel(\"X\")\n",
    "plt.ylabel(\"Y\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Model Assumptions:\n",
    "\n",
    "Each model/classification algorithm comes with some assumptions regarding the data it is applied to, and it is important to be aware of these since they directly affect the accuracy of the generated model. Generating regression models using the least squares approach comes with the following assumptions about the underlying data:\n",
    "\n",
    "1. The relationship between the two variables should take on a **linear functional form**.\n",
    "2. The variance around the regression line is the same for all values of X - this is called **homoscedasticity**.\n",
    "3. The *residuals* or the **errors of prediction are distributed normally** around the regression line.\n",
    "4. When there is more than one independent variable, the predictors should not be (strongly) correlated with each other - this is known as avoiding **multicollinearity**."
   ]
  },
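  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Assumptions 2 and 3 can be partially checked by inspecting the residuals of a fitted line. Below is a minimal sketch (an added illustration, not part of the original lecture code) which fits a least-squares line to the small sample from earlier with np.polyfit and summarizes its residuals; with real data you would plot the residuals rather than just print summary numbers."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# illustrative sketch: fit a least-squares line and inspect its residuals\n",
    "x_chk = np.array([2.2, 4.3, 5.1, 5.8, 6.4, 8.0])\n",
    "y_chk = np.array([0.4, 10.1, 7.0, 10.9, 17.4, 18.5])\n",
    "slope, intercept = np.polyfit(x_chk, y_chk, 1)  # degree-1 fit returns slope first\n",
    "residuals = y_chk - (intercept + slope * x_chk)\n",
    "print('mean residual (should be ~0):', residuals.mean())\n",
    "print('residual spread:', residuals.std())"
   ]
  },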
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If we believe that our data does not (strongly) violate the above assumptions, we can proceed with generating a linear regression model. There are a number of different libraries in Python which allow us to do this. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from IPython.display import display, Image\n",
    "Image(url='https://s3-us-west-2.amazonaws.com/courses-images/wp-content/uploads/sites/132/2016/04/21214901/Figure7_12.png')\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Measuring the Goodness-of-fit\n",
    "\n",
    "Once the model has been generated, the following questions should be raised:\n",
    "\n",
    "1. How well does the model fit the data?\n",
    "2. How accurate and reliable is this model likely to be when predicting new data? \n",
    "\n",
    "In order to give a robust evaluation, we need to define a few terms which will allow us to quantify the above.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Below is the formula that gives us the predicted values for $y$ given $x$.\n",
    "\n",
    "<div style=\"font-size: 150%;\">  \n",
    "$\\hat{y} = \\beta_0 + \\beta_1 x$\n",
    "</div>"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.plot(x, y, 'ro')\n",
    "plt.plot([0,10], [b0, b0+b1*10])\n",
    "for xi, yi in zip(x,y):\n",
    "    print([xi]*2, [yi, b0+b1*xi])\n",
    "    plt.plot([xi]*2, [yi, b0+b1*xi], 'k:')\n",
    "plt.xlim(2, 8.2); plt.ylim(0, 20)\n",
    "\n",
    "plt.text(6.5, 15.5, r'}',    fontsize=38)\n",
    "plt.text(8.2, 15.7, r'$(y_i - \\hat{y}_i)$ = residual',    fontsize=20)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The difference between the true value of $y_i$ and the predicted value of $\\hat{y}_i$ is the error of the regression line and is called the *residual*. The least squares procedure for fitting a regression line, optimizes the line so that the sum of squared residuals is minimized."
   ]
  },
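  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "We can verify this numerically: the least-squares line has a sum of squared residuals no larger than that of any other line, including our eyeballed model. Below is a small added sketch, reusing the x, y arrays and the eyeballed b0, b1 from the earlier cells:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# sum of squared residuals: eyeballed line vs. least-squares fit\n",
    "ssr_eyeball = np.sum((y - (b0 + b1 * x))**2)\n",
    "b1_ls, b0_ls = np.polyfit(x, y, 1)  # slope, intercept\n",
    "ssr_ls = np.sum((y - (b0_ls + b1_ls * x))**2)\n",
    "print('eyeballed SSR:', ssr_eyeball)\n",
    "print('least-squares SSR:', ssr_ls)  # never larger than the eyeballed SSR"
   ]
  },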
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(np.mean(y))\n",
    "plt.plot( x, [np.mean(y)]*len(x),'k:')\n",
    "plt.text(2, np.mean(y), r'$\\bar{y}$',    fontsize=20)\n",
    "\n",
    "plt.plot([2,6.4], [14.85, b0+b1*6.4], 'k:')\n",
    "plt.text(2, b0+b1*6.4, r'$\\hat{y}_i$',    fontsize=20)\n",
    "\n",
    "plt.plot([2,6.4], [17.4]*2, 'k:')\n",
    "plt.text(2, 17.4, r'$y_i$',    fontsize=20)\n",
    "\n",
    "plt.plot(x, y, 'ro')\n",
    "plt.plot([0,10], [b0, b0+b1*10])\n",
    "for xi, yi in zip(x,y):\n",
    "    print([xi]*2, [yi, b0+b1*xi])\n",
    "    plt.plot([xi]*2, [yi, b0+b1*xi], 'k:')\n",
    "plt.xlim(2, 8.2); plt.ylim(0, 20)\n",
    "\n",
    "plt.text(6.5, 15.5, r'}',    fontsize=38)\n",
    "plt.text(8.2, 15.7, r'$(y_i - \\hat{y}_i)$',    fontsize=20)\n",
    "\n",
    "plt.text(6.5, 11.6, r'}',    fontsize=75)\n",
    "plt.text(8.2, 13.7, r'$(y_i - \\bar{y})$',    fontsize=20)\n",
    "\n",
    "plt.text(7, 12.0, r'}',    fontsize=110)\n",
    "plt.text(8.2, 11.4, r'$(\\hat{y}_i - \\bar{y})$',    fontsize=20)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The above graph allows us to visually see how the total variation in $y$ can be explained by $x$, given the generated linear regression model. \n",
    "\n",
    "\n",
    "<div style=\"font-size: 120%;\">  \n",
    "$$ (y_i - \\bar{y}) = (\\hat{y}_i - \\bar{y}) + (y_i - \\hat{y}_i) $$\n",
    "</div>\n",
    "\n",
    "The above quantities are squared and summed over all observations. Squaring serves two purposes: (1) to prevent positive and negative values from canceling each other out and (2) to strongly penalize large deviations. Whether the latter is a good thing or not depends on the goals of the analysis. This gives us the following:\n",
    "\n",
    "<div style=\"font-size: 120%;\">  \n",
    "$$ SS_{Total} = \\sum_{i=1}^N(y_i - \\bar{y})^2  $$\n",
    "</div>\n",
    "\n",
    "<div style=\"font-size: 120%;\">  \n",
    "$$ SS_{Regression} = \\sum_{i=1}^N(\\hat{y}_i - \\bar{y})^2  $$\n",
    "</div>\n",
    "\n",
    "<div style=\"font-size: 120%;\">  \n",
    "$$ SS_{Residual} = \\sum_{i=1}^N(y_i - \\hat{y}_i)^2  $$\n",
    "</div>\n",
    "\n",
    "The total proportion of all the variation in $y$, explained by $x$ can therefore be defined as:\n",
    "\n",
    "<div style=\"font-size: 120%;\">  \n",
    "$$R^2 = \\frac{SS_{Regression}}{SS_{Total}} $$  \n",
    "</div>\n",
    "\n",
    "often called the **coefficient of determination** and used as the primary criterion for summarizing how well a linear regression model fits the data.\n",
    "\n",
    "If the predictions ($\\hat{y}$) are close to the actual values ($y$), we would expect $R^2$ to be close to 1. On the other hand, if the predictions are unrelated to the actual values, then $R^2 = 0$ . In all cases, $R^2$ lies between 0 and 1.\n",
    "\n",
    "The $R^2$ value is commonly used, often incorrectly, in forecasting. There are no set rules for what a good $R^2$ value is, and typical values of $R^2$ depend on the type of data used. \n",
    "\n",
    "Sometimes a regression line will still be useful and yield statistically significant results even when $R^2$ is low. This means that even when $R^2$ is low, low p-values can still indicate a real relationship between the significant predictors and the response variable. However, low-$R^2$ models will be less useful for precise predictions.\n",
    "\n",
    "Validating a model’s out-of-sample (test set) forecasting performance is a much better gauge than measuring the in-sample (training set) $R^2$ value. More on this, and how to set up robust experiments, later in the course.\n",
    "\n"
   ]
  },
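  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The decomposition above can be checked numerically on the small x/y sample from the start of the lecture (an added sketch). For a least-squares fit with an intercept, $SS_{Total} = SS_{Regression} + SS_{Residual}$ holds exactly, up to floating-point rounding:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# verify SS_Total = SS_Regression + SS_Residual and compute R^2\n",
    "b1_ls, b0_ls = np.polyfit(x, y, 1)\n",
    "y_hat = b0_ls + b1_ls * x\n",
    "ss_total = np.sum((y - y.mean())**2)\n",
    "ss_reg = np.sum((y_hat - y.mean())**2)\n",
    "ss_res = np.sum((y - y_hat)**2)\n",
    "print('SS_Total:', ss_total, ' SS_Reg + SS_Res:', ss_reg + ss_res)\n",
    "print('R^2:', ss_reg / ss_total)"
   ]
  },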
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Root Mean Squared Error\n",
    "\n",
    "Another way to express how far typical points or observations are, on average, from your regression line is through the Root Mean Squared Error (RMSE) or root-mean-square deviation (RMSD). \n",
    "\n",
    "Essentially, the **RMSE is the standard deviation of the residuals**, i.e. a measure of the spread of the residuals around the regression line on the vertical axis. Since the RMSE is measured on the same scale and in the same units as $y$, it can be interpreted in the same manner as a standard deviation: roughly 68% of the residuals can be expected to fall within $\\pm 1$ RMSE of the regression line, and roughly 95% within $\\pm 2$ RMSE.\n",
    "\n",
    "RMSE can be quantified as:\n",
    "\n",
    "<div style=\"font-size: 120%;\">  \n",
    "   $$ \\operatorname{RMSE}=\\sqrt{\\frac{\\sum_{i=1}^{N} (y_i - \\hat{y}_i)^2}{N}} $$\n",
    "</div>\n",
    "\n",
    "A shortcut for calculating the RMSE if you have the $R^2$ is as follows:\n",
    "<div style=\"font-size: 120%;\">  \n",
    "$$ RMSE = \\sigma_y \\times \\sqrt{1 - R^2}   $$\n",
    "</div>\n",
    "\n",
    "where $\\sigma_y$ is the standard deviation of the quantity you are trying to predict and $R^2$ is the coefficient of determination defined above. "
   ]
  },
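  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Both formulas can be checked numerically on the same small sample (an added sketch). Note that the shortcut is exact when $\\sigma_y$ is the population standard deviation, i.e. np.std with its default ddof=0:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# RMSE computed directly, and via the sigma_y * sqrt(1 - R^2) shortcut\n",
    "b1_ls, b0_ls = np.polyfit(x, y, 1)\n",
    "y_hat = b0_ls + b1_ls * x\n",
    "rmse_direct = np.sqrt(np.mean((y - y_hat)**2))\n",
    "r2 = 1 - np.sum((y - y_hat)**2) / np.sum((y - y.mean())**2)\n",
    "rmse_shortcut = np.std(y) * np.sqrt(1 - r2)\n",
    "print(rmse_direct, rmse_shortcut)  # the two values agree"
   ]
  },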
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Example\n",
    "\n",
    "Given a set of student grades in assignments/course work, is it possible to predict the score they will achieve in the exam?\n",
    "\n",
    "The example below is based on an anonymized dataset from a class. The values have been permuted with noise and thus no single observation represents a real student's assignment/exam grade. However, while the permutation and the introduction of noise has weakened the real relationships to some degree, the trends have nonetheless been preserved and are effective for analysis.\n",
    "\n",
    "A# refers to the assignment number, CW is the total course work (the sum of the assignment scores) and Exam represents the numeric exam score achieved."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#regression library\n",
    "import statsmodels.formula.api as smf\n",
    "\n",
    "grades = pd.read_csv(\"../datasets/grades_prediction_mode.csv\", index_col=0)\n",
    "grades.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's first take a look at the medians, min and max values for each of the course assessments to see if there are some patterns:\n",
    "\n",
    "**Exercise:** Use pivot tables to generate a table which summarizes all the medians, min and max values for all the assessments based on the final student grade. Can we tell if there are any obvious trends?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "If we wanted to drill deeper and see if there were any differences of note between the performances of internal and extramural students, we could do so using pivots.\n",
    "\n",
    "**Exercise:** Use pivot tables to generate the same pivots as above, except add Mode as an additional index. Are there any obvious trends?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's turn to a graphical representation of this data cloud and visually examine if there are strong relationships between the different features in this dataset.\n",
    "\n",
    "**Exercise:** Generate a graph that visualises the pair-wise relationships between all the features in the dataset. Are there correlations? Which ones are weak/strong? Why?"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Correlation is a statistical technique that numerically describes how strongly pairs of variables are related.\n",
    "\n",
    "We can calculate the strengths of the correlations from the above in a matrix format:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "grades.corr()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now that we see that there are relationships in the data and in particular between the CW as a predictor and Exam as a response variable, we can generate a regression model, visualise it, and evaluate its reliability as follows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#generate the x-axis values that are in range for the CW values\n",
    "x = pd.DataFrame({'CW': np.linspace(grades.CW.min(), grades.CW.max(), len(grades.CW))})\n",
    "\n",
    "#generate the model which uses the course work score to predict the exam mark - ols(...).fit() returns the fitted model\n",
    "mod = smf.ols(formula='Exam ~ 1 + CW', data=grades.dropna()).fit()\n",
    "\n",
    "#plot the actual data\n",
    "plt.scatter(grades.CW, grades.Exam, s=20, alpha=0.6)\n",
    "plt.xlabel('CW'); plt.ylabel('Exam')\n",
    "\n",
    "#render the regression line by predicting the ys using the generated model from above\n",
    "plt.plot(x.CW, mod.predict(x), 'b-', label='Linear $R^2$=%.2f' % mod.rsquared, alpha=0.9)\n",
    "\n",
    "#give the figure a meaningful legend\n",
    "plt.legend(loc='upper left', framealpha=0.5, prop={'size':'small'})\n",
    "plt.title(\"Predicting student exam results based on course work\", fontsize=20)\n",
    "\n",
    "#display the model statistics describing the goodness of fit\n",
    "mod.summary()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "An explanation of the output table of a statsmodels linear regression fit can be found here: http://connor-johnson.com/2014/02/18/linear-regression-with-python/"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Exercise:** Describe what you see in the regression output and the model fit? How good of a fit is the model? Is the relationship between the variables significant? Is the model likely to be accurate for predictions?"
   ]
  },
  {
   "cell_type": "raw",
   "metadata": {},
   "source": [
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Given the above regression model that we have generated, we can now make predictions for individual observations for which we do not know the true value of the dependent variable.\n",
    "\n",
    "Here is an example of a student who achieved 25/40 for the class course work; we are going to predict what they would have achieved in the exam, given the above model:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "#first create a series object with the value to predict\n",
    "student_course_work = pd.Series(data={'CW':25})\n",
    "student_course_work\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "mod.predict(student_course_work)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Exercise:** Let's assume that you are now interested in finding out how well you can predict a student's overall total mark for a paper, based on their exam result. Write code that does this and consider the interpretation of your findings."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Exercise:** Using the above model you generated, predict the total score students with the following exam marks [15, 40, 55] are likely to get."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# How to Calculate the Line of Best Fit "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "There are a number of ways to determine the regression line. Below is arguably the easiest approach (in the appendix of this notebook, a method of determining the regression line using gradient descent is demonstrated).\n",
    "\n",
    "Source Wikipedia: <img src=https://wikimedia.org/api/rest_v1/media/math/render/svg/c142dc313a360d32591a184474122ac1de87be81 width=400>\n",
    "\n",
    "The approach may seem complex at first glance, but can be broken down into just sums and squares of various quantities. \n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Exercise:** Implement a function that calculates the intercept and a single coefficient for values of x and y. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def basic_linear_regression(x, y):\n",
    "\n",
    "    # calculate the length of x\n",
    "    length = len(x)\n",
    "    \n",
    "    # Σx - calculate the sum of all values of x\n",
    "    sum_x = sum(x)\n",
    "    \n",
    "    # Σy - calculate the sum of all values of y\n",
    "    sum_y = sum(y)\n",
    "\n",
    "    # Σx^2  - calculate the sum of all xs squared\n",
    "    sum_x_squared = np.sum(np.square(x)) #sum(map(lambda a: a * a, x))\n",
    "\n",
    "    #Σxy  - calculate the sum of the products of x * y\n",
    "    sum_of_xy_products = np.sum(x * y) # sum([x[i] * y[i] for i in range(length)])\n",
    "\n",
    "    # calculate\n",
    "    # coefficient = (Σxy - (Σx * Σy) / len) / (Σx^2 - ( (Σx)^2 / len))\n",
    "    coef = (sum_of_xy_products - (sum_x * sum_y) / length) / (sum_x_squared - (sum_x ** 2) / length)\n",
    "    # calculate\n",
    "    # intercept = (Σy - coef * Σx) / len\n",
    "    intercept = (sum_y - coef * sum_x) / length\n",
    "    \n",
    "    return intercept, coef"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "basic_linear_regression(grades.dropna().Exam.values, grades.dropna().Total.values)"
   ]
  },
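  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick way to sanity-check the closed-form implementation (an added aside) is to compare its output against np.polyfit on a small synthetic sample - once the function body is filled in, the two should agree to floating-point precision:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# sanity check: closed-form coefficients vs. np.polyfit\n",
    "xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])\n",
    "ys = np.array([2.1, 3.9, 6.2, 7.8, 10.1])\n",
    "slope_np, intercept_np = np.polyfit(xs, ys, 1)\n",
    "print(basic_linear_regression(xs, ys))\n",
    "print((intercept_np, slope_np))  # should match the line above"
   ]
  },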
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Regression Line Confidence Intervals\n",
    "\n",
    "The **regression line is an estimation** of where the true model parameters should lie. \n",
    "\n",
    "Sometimes we would like to know the **region in which the true model parameters lie, given a certain confidence value**. For this we can define a confidence interval.\n",
    "\n",
    "The confidence interval lets us define a window in which the true regression line is likely to be situated, for example with 95% confidence. Another way of thinking about this is that our result, whatever it is, can be estimated with a 95% chance of lying in this range.\n",
    "\n",
    "For linear regression we can visually display a confidence interval in which we expect the true value to lie. \n",
    "\n",
    "Below is an example of a function defined by James Bagrow (http://nbviewer.ipython.org/url/bagrow.com/dsv/LEC10_notes_2014-02-13.ipynb) on how to calculate a specified confidence interval using Student's t Distribution:\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import scipy\n",
    "\n",
    "def linregress_CIs(xd,yd,conf=0.95):\n",
    "    \"\"\"Linear regression CIs FTW!\"\"\"\n",
    "    alpha=1.-conf   # significance\n",
    "    n = xd.size   # data sample size\n",
    "    x = np.linspace(xd.min(),xd.max(),1000)\n",
    "        \n",
    "    # Predicted values from fitted model:\n",
    "    a, b, r, p, err = scipy.stats.linregress(xd,yd)\n",
    "    y = a*x+b\n",
    "    \n",
    "    # standard deviation of the residuals (n-2 degrees of freedom)\n",
    "    sd = np.sqrt(1./(n-2.)*np.sum((yd-a*xd-b)**2))\n",
    "    sxd = np.sum((xd-xd.mean())**2) # sum of squared deviations of x\n",
    "    sx  = (x-xd.mean())**2 # squared deviation of each x from the mean\n",
    "    \n",
    "    # quantile of Student's t distribution for p=1-alpha/2\n",
    "    q = scipy.stats.t.ppf(1.-alpha/2, n-2)\n",
    "    \n",
    "    # get the upper and lower CI:\n",
    "    dy = q*sd*np.sqrt( 1./n + sx/sxd )\n",
    "    yl = y-dy\n",
    "    yu = y+dy\n",
    "    \n",
    "    return yl,yu,x"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "grades_no_NAN = grades.dropna()\n",
    "yl,yu,xd = linregress_CIs(grades_no_NAN.Exam,grades_no_NAN.Total, .95)\n",
    "mod = smf.ols(formula='Total ~ 1 + Exam', data=grades.dropna()).fit()\n",
    "\n",
    "plt.xlabel('Exam')\n",
    "plt.ylabel('Total Grade')\n",
    "plt.plot(grades.Exam,grades.Total, 'o')\n",
    "plt.plot(grades.Exam, mod.params[1]*grades.Exam+mod.params[0],'k-')\n",
    "plt.fill_between(xd, yl, yu, alpha=0.3, facecolor='blue',edgecolor='none')\n",
    "plt.show()\n",
    "mod.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The blue area is a 95% confidence interval on the line. **It does not mean that 95% of the data points fall inside the blue area**.\n",
    "\n",
    "The confidence bands are curved. This also does not mean that the confidence band includes the possibility of curves as well as straight lines. Rather, the curved lines are the boundaries of all possible straight lines that could be fit within this confidence interval.\n",
    "\n",
    "Given the assumptions of linear regression, **you can be 95% confident that the two curved confidence bands enclose the true best-fit linear regression line**, leaving a 5% chance that the true line is outside those boundaries.\n",
    "\n",
    "Many data points will be outside the 95% confidence bands. The confidence bands are 95% sure to contain the best-fit regression line. This is not the same as saying it will contain 95% of the data points.\n",
    "\n",
    "\n",
    "**Exercise:** Generate a scatter plot matrix for the above student grade problem. Select a different combination of variables and build a regression model together with the confidence intervals using the function provided."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Regression Line Prediction Bands\n",
    "\n",
    "The **prediction bands identify the region that specifies where a given percentage of points should fall**. \n",
    "\n",
    "The **prediction band** deals specifically with where the **predictions** should fall, while the **confidence intervals** define where the true **regression line** should fall. \n",
    "\n",
    "Use prediction bands when your intent is to depict the variation in your data. Meanwhile, use confidence intervals to visually analyze how precisely your data define the best-fit line.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import statsmodels.api as sm\n",
    "from statsmodels.sandbox.regression.predstd import wls_prediction_std\n",
    "\n",
    "#generate the model\n",
    "mod = smf.ols(formula='Total ~ 1 + Exam', data=grades.dropna()).fit()\n",
    "\n",
    "#extract the parameters for the confidence window\n",
    "x_pred = np.linspace(grades.Exam.min(), grades.Exam.max(), len(grades.Exam))\n",
    "x_pred2 = sm.add_constant(x_pred)\n",
    "\n",
    "#confidence = 95% (alpha=0.05)\n",
    "sdev, lower, upper = wls_prediction_std(mod, exog=x_pred2, alpha=0.05)\n",
    "\n",
    "#plot points and confidence window\n",
    "plt.scatter(grades.Exam, grades.Total, s=10, alpha=0.9)\n",
    "plt.fill_between(x_pred, lower, upper, color='#888888', alpha=0.2)\n",
    "\n",
    "#plot the regression line\n",
    "plt.plot(grades.Exam, mod.predict(grades[['Exam']] ), 'k-', label='Linear n=1 $R^2$=%.2f' % mod.rsquared, alpha=0.9)\n",
    "\n",
    "plt.xlabel('Exam')\n",
    "plt.ylabel('Total Grade')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Exercise:** Draw 99% prediction bands for the prediction model you generated in the previous exercise."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Exercise:**: Calculate the RMSE for the above model and plot a figure that displays the spread of the residuals at one standard deviation from the regression line (covering some 68% of the predictions)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Sometimes it is useful to visualise the residuals in respect to the response variable, in oder to inspect the normality of their distribution and unusual shapes which might be indicative of non-normality.\n",
    "\n",
    "The most useful way to plot the residuals, though, is with your predicted values on the x-axis, and your residuals on the y-axis.\n",
    "\n",
    "**Exercise:**: Use a scatter plot to visualise the distribution of the residuals."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Be Careful with Predictions\n",
    "\n",
    "We have to be very careful to understand the difference between **extrapolation** and **interpolation**. The two are subtly different. Interpolation is concerned with predicting points within your range of data, which is what regression is designed to do.\n",
    "\n",
    "Extrapolation is about making predictions that are outside the range of the data that your algorithm has been trained on. You must be very cautious of extrapolation. People extrapolate all the time. But if you're going to do it, you need to specify additional assumptions that make explicit your ignorance about what happens outside the data range.\n",
    "\n",
    "\n",
    "---\n",
    "\n",
    "---\n"
   ]
  },
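  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The danger can be demonstrated with a short sketch: fitting a cubic polynomial to noisy samples of a sine curve on $[0, 5]$ gives sensible interpolated values, but its extrapolated value at $x = 10$ is wildly wrong. The sine function, the noise level and the seed are assumptions chosen purely for illustration:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustrative sketch: a cubic fit that interpolates well can extrapolate\n",
    "# wildly. The data-generating function, ranges and seed are assumptions\n",
    "# chosen for demonstration.\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(1)\n",
    "x_syn = np.linspace(0, 5, 40)\n",
    "y_syn = np.sin(x_syn) + rng.normal(0, 0.1, size=40)\n",
    "\n",
    "coeffs = np.polyfit(x_syn, y_syn, deg=3)   # cubic fit on x in [0, 5]\n",
    "inside = np.polyval(coeffs, 2.5)           # interpolation: close to sin(2.5)\n",
    "outside = np.polyval(coeffs, 10.0)         # extrapolation: far outside the data\n",
    "\n",
    "print('sin(2.5) =', np.sin(2.5), '  model:', inside)\n",
    "print('sin(10)  =', np.sin(10.0), '  model:', outside)"
   ]
  },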
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Fitting Linear and Polynomial models\n",
    "\n",
    "We do not need to be restricted to a straight-line regression model. Despite its name, linear regression can be used to fit non-linear functions. A linear regression model is linear in the model parameters, not necessarily in the predictors. If you add non-linear transformations of your predictors to the linear regression model, the model will be non-linear in the predictors. For example we can represent a curved relationship between our variables by introducing **polynomial** terms. For example, a cubic model below can still be treated as a linear regression problem:\n",
    "\n",
    "<div style=\"font-size: 150%;\">  \n",
    "$y_i = \\beta_0 + \\beta_1 x_i + \\beta_2 x_i^2 + \\beta_3 x_i^3 + \\epsilon_i$\n",
    "</div>\n",
    "\n",
    "\n",
    "A very popular regression technique is [Polynomial Regression](http://en.wikipedia.org/wiki/Polynomial_regression) (a special case of multiple linear regression), a technique which models the relationship between the response and the predictors as an n-th order polynomial. The higher the order of the polynomial the more \"wigglier\" functions you can fit. Using higher order polynomial comes at a price, however. First, the computational complexity of model fitting grows as the number of adaptable parameters grows. Second, more complex models have a higher risk of **overfitting**. Overfitting refers to a situation in which the model fits the idiosyncrasies of the training data and loses the ability to generalize from the seen to predict the unseen.\n"
   ]
  },
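  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make concrete that the cubic model above is linear in its parameters, the sketch below builds the design matrix $[1, x, x^2, x^3]$ for synthetic data and recovers the coefficients with ordinary linear least squares. The true coefficients, noise level and seed are assumptions chosen for illustration:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: the cubic model is linear in its parameters, so it can be\n",
    "# solved with ordinary least squares on the design matrix [1, x, x^2, x^3].\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(2)\n",
    "x_syn = rng.uniform(-2, 2, size=100)\n",
    "y_syn = 1.0 - 0.5 * x_syn + 0.25 * x_syn**2 + 0.1 * x_syn**3 + rng.normal(0, 0.05, size=100)\n",
    "\n",
    "X = np.column_stack([np.ones_like(x_syn), x_syn, x_syn**2, x_syn**3])   # design matrix\n",
    "beta, *_ = np.linalg.lstsq(X, y_syn, rcond=None)   # linear least squares\n",
    "\n",
    "print('estimated betas:', np.round(beta, 2))   # close to [1.0, -0.5, 0.25, 0.1]"
   ]
  },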
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The examples below from the 'student grade' prediction dataset will show how linear regression can be used to fit linear and polynomial models using the `ols` method found in the  `statsmodels.formula.api` module."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "grades = pd.read_csv(\"../datasets/grades_prediction_mode.csv\", index_col=0)\n",
    "grades.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "grades_no_NaN = grades.fillna(grades.mean())\n",
    "grades_no_NaN.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.scatter( grades_no_NaN.CW, grades_no_NaN.Exam, s=10, alpha=0.3)\n",
    "plt.xlabel('CW')\n",
    "plt.ylabel('Exam')\n",
    "\n",
    "# points linearlyd space on lstats\n",
    "x = pd.DataFrame({'CW': np.linspace(grades_no_NaN.CW.min(), grades_no_NaN.CW.max(), len(grades_no_NaN.CW)) })\n",
    "\n",
    "# 1-st order polynomial\n",
    "poly_1 = smf.ols(formula='Exam ~ 1 + CW', data=grades_no_NaN).fit()\n",
    "plt.plot(x, poly_1.predict(x), 'b-', label='Poly n=1 $R^2$=%.2f' % poly_1.rsquared,  alpha=0.9)\n",
    "\n",
    "# 2-nd order polynomial\n",
    "poly_2 = smf.ols(formula='Exam ~ 1 + CW + I(CW ** 2.0)', data=grades_no_NaN).fit()\n",
    "plt.plot(x, poly_2.predict(x), 'g-', label='Poly n=2 $R^2$=%.2f' % poly_2.rsquared, alpha=0.9)\n",
    "\n",
    "# 3-rd order polynomial\n",
    "poly_3 = smf.ols(formula='Exam ~ 1 + CW + I(CW ** 2.0) + I(CW ** 3.0)', data=grades_no_NaN).fit()\n",
    "plt.plot(x, poly_3.predict(x), 'r-', alpha=0.9,\n",
    "         label='Poly n=3 $R^2$=%.2f' % poly_3.rsquared)\n",
    "\n",
    "plt.legend()\n",
    "#poly_1.mse_resid\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Exercise:** Describe the goodness-of-fit for each of the generated models. Which is better? Why?"
   ]
  },
  {
   "cell_type": "raw",
   "metadata": {},
   "source": [
    "\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Exercise:** For each of the above three generated models, write code that predicts exam scores for students who score [4, 25, 40,100] in their course work:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Exercise:** Attempt to create a polynomial model that better fits the data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Multiple linear regression\n",
    "\n",
    "The above were examples of linear and polynomial regression models. One feature (predictor variable) and one prediction output (response variable). Below is an example of multiple linear regression. Several different features and one prediction output value."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "grades_no_NaN"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "multi_linear = smf.ols(formula='Exam ~ 1 + A1 + A2 + A3', data=grades_no_NaN).fit()\n",
    "print(multi_linear.params[0:4])\n",
    "print('R-Squared: ', multi_linear.rsquared)\n",
    "multi_linear.summary()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Exercise:** For the above generated model, write code that predicts exam scores for a student who scores 4,5,8 in their first 3 assignments:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Exercise:** Experiment with a different combination of variables in order to generate a multiple linear regression model that better fits the data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Stepwise regression technique\n",
    "\n",
    "Stepwise regression is often used to build models automatically when there is uncertainly about which subset of variables to use. The technique either begins with no variables (forward stepwise) and increasingly adds new best scoring variables according to a criterion (eg. p-value). Likewise, stepwise regression can begin with a full model that uses all available variables which are pruned step by step (backward stepwise) using a given criterion.\n",
    "\n",
    "This technique is a heuristic only. It can assist in generating a good model though it is not guaranteed to return the best possible model. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "grades.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "grades.columns"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def stepwise_backwards_regression(response_var, exp_vars, df):\n",
    "    while len(exp_vars) > 0:\n",
    "        forml = response_var + ' ~ 1 +' + ' + '.join(exp_vars)\n",
    "        print(forml)\n",
    "        model = smf.ols(formula= forml, data=df).fit()\n",
    "\n",
    "        sorted_ps = model.pvalues.sort_values(ascending=False).drop('Intercept')\n",
    "        if (sorted_ps[0]) > 0.05:\n",
    "            exp_vars = sorted_ps.index[1:].values\n",
    "            drop = sorted_ps.index[0]\n",
    "            print(str(len(exp_vars)) + ' var model AIC: ' + str(model.aic) + ', adj Rsq: ' + str(model.rsquared_adj))\n",
    "            print('Dropped: ' + drop + ' with p-value ' + str(round(sorted_ps[0],3)))\n",
    "        else:\n",
    "            return model\n",
    "    return model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "mod = stepwise_backwards_regression('Total', [u'A1', u'A2', u'A3', u'A4', u'A5'], grades[[u'A1', u'A2', u'A3', u'A4', u'A5', 'Total']])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "mod.summary()"
   ]
  },
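  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A forward-selection counterpart to the backward routine above can be sketched as follows: starting from an intercept-only model, greedily add whichever remaining variable lowers the AIC the most, and stop when no addition improves it. The function name and the choice of AIC as the criterion are illustrative, not the only options:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hedged sketch of the complementary forward-selection variant.\n",
    "def stepwise_forwards_regression(response_var, candidate_vars, df):\n",
    "    chosen = []\n",
    "    best_model = smf.ols(formula=response_var + ' ~ 1', data=df).fit()\n",
    "    improved = True\n",
    "    while improved and candidate_vars:\n",
    "        improved = False\n",
    "        # score every candidate that could be added next\n",
    "        scores = []\n",
    "        for var in candidate_vars:\n",
    "            forml = response_var + ' ~ 1 + ' + ' + '.join(chosen + [var])\n",
    "            scores.append((smf.ols(formula=forml, data=df).fit().aic, var))\n",
    "        best_aic, best_var = min(scores)\n",
    "        if best_aic < best_model.aic:\n",
    "            # keep the addition only if it lowers the AIC\n",
    "            chosen.append(best_var)\n",
    "            candidate_vars = [v for v in candidate_vars if v != best_var]\n",
    "            forml = response_var + ' ~ 1 + ' + ' + '.join(chosen)\n",
    "            best_model = smf.ols(formula=forml, data=df).fit()\n",
    "            print('Added: ' + best_var + ', AIC: ' + str(best_model.aic))\n",
    "            improved = True\n",
    "    return best_model"
   ]
  },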
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Appendix - Calculate line of best fit using gradient descent"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Calculating regression using gradient descent is an optimisation procedure. I entails calculating the gradient at a given point, and moving down the gradient at a certain step size. There are four key operations:\n",
    "\n",
    "    1. Calculate the hypothesis h = X * theta (which tells us classification for each value of x)\n",
    "    2. Calculate the loss = h - y and maybe the squared cost (loss^2)/2m (which tells us how far off we are giving us the error)\n",
    "    3. Calculate the average gradient for all points = X' * loss / m\n",
    "    4. Move in the direction of the downward sloping gradient by updating the parameters theta = theta - alpha * gradient\n",
    "    \n",
    "Code below gives an example of this and is adapted from http://stackoverflow.com/questions/17784587/gradient-descent-using-python-and-numpy"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "grades.dropna()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": false
   },
   "outputs": [],
   "source": [
    "import random\n",
    "\n",
    "# m denotes the number of examples here, not the number of features\n",
    "def gradient_descent(x, y, theta, alpha, m, iterations):\n",
    "    transpose_x = x.transpose()\n",
    "    print('transpose_x:',transpose_x)\n",
    "    print('theta:',theta)\n",
    "    for i in range(0, iterations):\n",
    "        print(i)\n",
    "        hypothesis = np.dot(x, theta)\n",
    "        #print('y:',y)\n",
    "        #print('hypothesis:',hypothesis)\n",
    "        loss = hypothesis - y\n",
    "        #print('loss:',loss)\n",
    "        # avg cost per example (the 2 in 2*m doesn't really matter here.\n",
    "        # But to be consistent with the gradient, I include it)\n",
    "        cost = np.sum(loss ** 2) / (2 * m)\n",
    "        print(\"Iteration %d | Cost: %f\" % (i, cost))\n",
    "        \n",
    "        # avg gradient per example\n",
    "        gradient = np.dot(transpose_x, loss) / m\n",
    "        print('gradient:',gradient)\n",
    "        # update\n",
    "        print('- alpha * gradient', - alpha * gradient)\n",
    "        theta = theta - alpha * gradient\n",
    "        print('theta:',theta)\n",
    "        if i % 1 == 0:\n",
    "            plt.scatter(x[:,1], y)\n",
    "            xs = np.arange( np.min(x[:,1]), np.max(x[:,1]))\n",
    "            xs =  np.dstack( [np.ones(( len(xs), )), xs  ] )[0]\n",
    "            plt.plot(xs[:,1], np.dot(xs, theta) ) \n",
    "    return theta\n",
    "\n",
    "\n",
    "x, y = np.dstack( [np.ones(( len(grades.dropna()), )), grades.dropna().Exam ] )[0], grades.dropna().Total.values  \n",
    "m, n = np.shape(x)\n",
    "num_iterations= 15\n",
    "alpha = 0.001\n",
    "theta = np.ones(n)\n",
    "print(theta)\n",
    "theta = gradient_descent(x, y, theta, alpha, m, num_iterations)\n",
    "print(theta)"
   ]
  },
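  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check (a sketch, run on the same `x`, `y` and `theta` as above): gradient descent should approach the closed-form least-squares solution of the normal equation $\\theta^* = (X^T X)^{-1} X^T y$. With only 15 iterations and a small learning rate, the two may still differ noticeably."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# closed-form least-squares solution via the normal equation,\n",
    "# for comparison with the theta found by gradient descent above\n",
    "theta_closed = np.linalg.solve(x.T @ x, x.T @ y)\n",
    "print('closed-form theta:     ', theta_closed)\n",
    "print('gradient-descent theta:', theta)"
   ]
  },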
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.3"
  },
  "toc": {
   "base_numbering": 1,
   "nav_menu": {},
   "number_sections": true,
   "sideBar": true,
   "skip_h1_title": false,
   "title_cell": "Table of Contents",
   "title_sidebar": "Contents",
   "toc_cell": false,
   "toc_position": {
    "height": "calc(100% - 180px)",
    "left": "10px",
    "top": "150px",
    "width": "298.333px"
   },
   "toc_section_display": true,
   "toc_window_display": false
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
