{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Introduction\n",
    "In this discussion section we study the learning problem for the canonical case of one-dimensional function $f$ on the segment $[-1,1]$. It goes in parallel with problem 8 from HW1. We are going to look at two feature families, and see which properties the data has for those features.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline\n",
    "from ipywidgets import interactive\n",
    "import ipywidgets as widgets\n",
    "from ipywidgets import fixed\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Part a): Generating training data\n",
    "\n",
    "In this part we generate the data from which we attempt to learn the function. The function of our choice for this section is \n",
    "$$\n",
    "f(x) = \\begin{cases}\n",
    "8(x + 0.6)^2 - 0.72& x \\in [-1, -0.3]\\\\\n",
    "0& x\\in[-0.3, 0.3]\\\\\n",
    "4\\sin(2\\pi (x - 0.3))& x \\in [0.3, 1].\n",
    "\\end{cases}\n",
    "$$\n",
    "\n",
    "We are going to sample points $\\{x_i\\}_{i=1}^n$ from $[-1, 1]$ and learn $f$ from values $\\{f(x_i) + \\varepsilon_i\\}$, where $\\{\\varepsilon_i\\}_{i=1}^n$ is some additive noise."
   ]
  },
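  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check (a sketch, using the same closed-form pieces as the code below), we can verify that the branches of $f$ agree at the breakpoints $x = -0.3$ and $x = 0.3$, so $f$ is continuous on $[-1, 1]$:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Quadratic piece on [-1, -0.3] and sine piece on [0.3, 1]; the middle piece is 0.\n",
    "left = lambda x: 8 * (x + 0.6)**2 - 0.72\n",
    "right = lambda x: np.sin(8 * np.pi * (x - 0.3))\n",
    "print(np.isclose(left(-0.3), 0.0), np.isclose(right(0.3), 0.0))"
   ]
  },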
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "### Generating $x_i$\n",
    "In this section we consider two ways of sampling $x_i$ for training data. <br>\n",
    "1. $x_i$ sampled at random from the uniform distribution on $[-1,1]$.\n",
    "2. $x_i$ from an evenly spaced grid on the interval $[-1,1]$.\n",
    "   For example, for $n$ = 4, then we have the samples (-1, -0.5, 0, 0.5). Note that the endpoint 1, is not included in our training set. \n",
    "   This kind of evenly spaced samples gives rise to interesting properties of the feature matrix when using Fourier features as we will see in part c.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_x(n, x_type, x_low=-1, x_high=1):  \n",
    "    if x_type == 'grid':\n",
    "        x = np.linspace(x_low, x_high, n, endpoint = False).astype(np.float64)\n",
    "\n",
    "    elif x_type == 'uniform_random':\n",
    "        x = np.sort(np.random.uniform(x_low, x_high, n).astype(np.float64))\n",
    "        #Note that for making it easy for plotting we sort the randomly sampled x in ascending order\n",
    "    else:\n",
    "        raise ValueError   \n",
    "    \n",
    "    \n",
    "    return x"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Generating $y_i$\n",
    "\n",
    "        \n",
    "Here we generate our observed values $\\{f(x_i)\\}$. Note that we are not adding noise yet."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def generate_y(x):\n",
    "    y = np.zeros(len(x))\n",
    "    y[x < -0.3] = 8 * (x[x < -0.3] + 0.6)**2 - 0.72\n",
    "    y[x> 0.3] = np.sin(2 * np.pi * (x[x> 0.3]-0.3) * 4)\n",
    "    return y"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Visualizing training data\n",
    "\n",
    "The following cell plots $f$. The line shows the true function, and the dots indicate the data that we get from our measurements. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def plot_training_data(x_type, n=64): \n",
    "    x_true = generate_x(x_type = 'grid', n=1000)\n",
    "    x_train = generate_x(x_type=x_type, n=n)\n",
    "    labels = ['y']\n",
    "    \n",
    "    y_true = generate_y(x=x_true)\n",
    "    y_train = generate_y(x=x_train)\n",
    "    plt.plot(x_true, y_true, linewidth = 0.5)\n",
    "    plt.ylabel('y')\n",
    "    plt.xlabel('x')\n",
    "    plt.scatter(x_train, y_train, marker='o', label = 'f')\n",
    "    plt.legend(bbox_to_anchor  = (1.03, 0.97))\n",
    "    plt.show()\n",
    "    \n",
    "    \n",
    "\n",
    "slider = widgets.RadioButtons(\n",
    "    options=['uniform_random', 'grid'],\n",
    "    description='x_type:',\n",
    "    disabled=False\n",
    ")\n",
    "slider1 = widgets.IntSlider(\n",
    "    value=65,\n",
    "    min=65,\n",
    "    max=1000,\n",
    "    step=2,\n",
    "    description='n:',\n",
    "    disabled=False,\n",
    "    continuous_update=False,\n",
    "    orientation='horizontal',\n",
    "    readout=True,\n",
    "    readout_format='d'\n",
    ")\n",
    "\n",
    "\n",
    "interactive_plot = interactive(plot_training_data, x_type=slider, n=slider1)\n",
    "output = interactive_plot.children[-1]\n",
    "interactive_plot\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Noise in training data\n",
    "\n",
    "Now we add noise. We model this by assuming that our samples for y are corrupted by Additive White Gaussian Noise (AWGN).  The true function is given by $y = f(x)$. The amount of noise is controlled by the standard deviation of the noise which we denote by awgn_std. The noiseless case corresponds to awgn_std = 0"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def add_awgn_noise(y, awgn_std=0):\n",
    "    noise = np.random.normal(0, awgn_std, y.shape)\n",
    "    y_noisy = y + noise    \n",
    "    return y_noisy"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Visualizing noise in training data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def plot_noisy_training_data(awgn_std, n=64): \n",
    "    np.random.seed(7)\n",
    "    x_true = generate_x(x_type = 'grid', n=1000)\n",
    "    x_train = generate_x(x_type='uniform_random', n=n)\n",
    "    y_true = generate_y(x=x_true)\n",
    "    y_train_clean = generate_y(x=x_train)\n",
    "    y_train = add_awgn_noise(y_train_clean, awgn_std=awgn_std)\n",
    "    plt.plot(x_true, y_true, linewidth = 0.5, label = 'True function')\n",
    "    plt.ylabel('y')\n",
    "    plt.xlabel('x')\n",
    "    plt.ylim([-4,4])\n",
    "    plt.scatter(x_train, y_train, marker='o', label = 'Training samples')\n",
    "    plt.legend(loc = 'upper right', bbox_to_anchor  = (1.43, 0.97))\n",
    "\n",
    "    plt.show()\n",
    "    \n",
    "\n",
    "slider= widgets.FloatLogSlider(\n",
    "    value=-5,\n",
    "    base=2,\n",
    "    min=-5, # min exponent of base\n",
    "    max=5, # max exponent of base\n",
    "    step=0.2, # exponent step\n",
    "    description='awgn_std',\n",
    "    continuous_update= False\n",
    ")\n",
    "\n",
    "interactive_plot = interactive(plot_noisy_training_data, awgn_std=slider)\n",
    "output = interactive_plot.children[-1]\n",
    "interactive_plot"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Part b): Featurization- Lifting the training data\n",
    " \n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To apply linear regression in our learning problem we use \"lifting\" trick : instead of looking at data points $\\{x_i\\}_{i=1}^n$ we lift them into some high dimensional space with mapping ${\\phi}$. Then our features become  $n$ vectors $\\{{\\phi}(x_i)\\}_{i=1}^n$.\n",
    "\n",
    "In problem 7 of HW1 we show that under some rather general conditions any function $f$ can be approximated by either polynomials or trigonometric series. This motivates our choice of two feature families that we are going to use:"
   ]
  },
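  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the lifting trick (with a hypothetical toy target that is exactly linear in the lifted features): ordinary linear regression on $\\phi(x) = [1, x, x^2]$ recovers a quadratic function exactly."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Toy target that is linear in the lifted features phi(x) = [1, x, x^2].\n",
    "x = np.linspace(-1, 1, 10)\n",
    "y = 2 - 3 * x + 0.5 * x**2\n",
    "phi = np.stack([np.ones_like(x), x, x**2], axis=1)\n",
    "# Linear regression in the lifted space recovers the coefficients exactly.\n",
    "w, *_ = np.linalg.lstsq(phi, y, rcond=None)\n",
    "print(np.allclose(w, [2, -3, 0.5]))"
   ]
  },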
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Polynomial features\n",
    "We consider the d-dimensional features given by the Vandermonde polynomials:\n",
    "$${ \\phi}(x) = [1, x, x^2, \\dots, x^{d-1}].$$ The code in this cell lets you choose $k$ and look at the plot of $x^k$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from numpy.polynomial.polynomial import polyvander\n",
    "def featurize_vandermonde(x, d, normalize = False):\n",
    "    A = polyvander(x, d-1)\n",
    "    for d_ in range(A.shape[1]):\n",
    "        if normalize:\n",
    "            A[:,d_] *=  np.sqrt(2*d_+1)\n",
    "    return A\n",
    "\n",
    "\n",
    "def plot_poly_features(d): \n",
    "    n = 128  \n",
    "    d_max = 20\n",
    "    x_type = 'uniform_random'\n",
    "    np.random.seed(7)\n",
    "    x_true = generate_x(x_type = 'grid', n=1000)\n",
    "    x_train = generate_x(x_type=x_type, n=n)\n",
    "    phi_train = featurize_vandermonde(x_train, d_max)\n",
    "    phi_true = featurize_vandermonde(x_true, d_max)\n",
    "\n",
    "    plt.plot(x_true, phi_true[:,d], linewidth = 0.5)\n",
    "    plt.scatter(x_train, phi_train[:,d], marker='o')\n",
    "    plt.ylim([-1.2,1.2])\n",
    "    plt.xlabel('x')\n",
    "    plt.ylabel('$\\phi(x)$')\n",
    "    plt.show()\n",
    "\n",
    "\n",
    "slider = widgets.IntSlider(\n",
    "    value=0,\n",
    "    min=0,\n",
    "    max=10,\n",
    "    step=1,\n",
    "    description='Feature # k:',\n",
    "    disabled=False,\n",
    "    continuous_update=False,\n",
    "    orientation='horizontal',\n",
    "    readout=True,\n",
    "    readout_format='d'\n",
    ")\n",
    "\n",
    "\n",
    "interactive_plot = interactive(plot_poly_features, d=slider)\n",
    "output = interactive_plot.children[-1]\n",
    "interactive_plot"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Fourier features\n",
    "We consider the d-dimensional real Fourier features given by:\n",
    "    $$\\phi(x) = [1, \\sin(\\pi x), \\cos(\\pi x), \\sin(2 \\pi x), \\cos(2\\pi x), \\dots,  \\sin (r \\pi x), \\cos(r \\pi x)],$$\n",
    "    where $r = \\frac{d-1}{2}$.\n",
    "    \n",
    "Note that by this convention we require $d$ to be an odd integer. The code in this cell lets you choose $k$ and look at the plot of the $k$-th coordinate of $\\phi$ from above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def featurize_fourier(x, d, normalize = False):\n",
    "    assert (d-1) % 2 == 0, \"d must be odd\"\n",
    "    max_r = int((d-1)/2)\n",
    "    n = len(x)\n",
    "    A = np.zeros((n, d))\n",
    "    A[:,0] = 1\n",
    "    for d_ in range(1,max_r+1):\n",
    "        A[:,2*(d_-1)+1] =  np.sin(d_*x*np.pi)\n",
    "        A[:,2*(d_-1)+2] =  np.cos(d_*x*np.pi)\n",
    "    \n",
    "    if normalize:\n",
    "        A[:,0] *= (1/np.sqrt(2))\n",
    "        A *= np.sqrt(2)\n",
    "    return A\n",
    "\n",
    "def plot_fourier_features(x_type,d): \n",
    "    n = 128  \n",
    "    d_max = 21\n",
    "    np.random.seed(7)\n",
    "    x_true = generate_x(x_type = 'grid', n=1000)\n",
    "    x_train = generate_x(x_type=x_type, n=n)\n",
    "    phi_train = featurize_fourier(x_train, d_max)\n",
    "    phi_true = featurize_fourier(x_true, d_max)\n",
    "    \n",
    "    plt.plot(x_true, phi_true[:,d], linewidth = 0.5)\n",
    "    plt.scatter(x_train, phi_train[:,d], marker='o')\n",
    "    plt.ylim([-1.2,1.2])\n",
    "    plt.xlabel('x')\n",
    "    plt.ylabel('$\\phi(x)$')\n",
    "    plt.show()\n",
    "\n",
    "\n",
    "slider1 = widgets.RadioButtons(\n",
    "    options=['uniform_random', 'grid'],\n",
    "    description='x_type:',\n",
    "    disabled=False\n",
    ")\n",
    "    \n",
    "    \n",
    "slider2 = widgets.IntSlider(\n",
    "    value=0,\n",
    "    min=0,\n",
    "    max=20,\n",
    "    step=1,\n",
    "    description='Feature # k:',\n",
    "    disabled=False,\n",
    "    continuous_update=False,\n",
    "    orientation='horizontal',\n",
    "    readout=True,\n",
    "    readout_format='d'\n",
    ")\n",
    "\n",
    "interactive_plot = interactive(plot_fourier_features, d=slider2, x_type=slider1)\n",
    "output = interactive_plot.children[-1]\n",
    "interactive_plot"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def featurize(x, d, phi_type, normalize = False):\n",
    "    function_map = {'polynomial':featurize_vandermonde, 'fourier':featurize_fourier}\n",
    "    return function_map[phi_type](x,d,normalize)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Sanity check:** answer the following questions:\n",
    "1. How do we generate the data?\n",
    "2. Why is our data a matrix?\n",
    "3. What does approximating a function by polynomials have to do with linear regression with polynomial features?"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Part c) conditioning of $\\Phi^T \\Phi$\n",
    "\n",
    "As we will see in part g, a very important property of the data matrix is the shape of the array of its singular values. In this part we are going to explore how our choice of features and sampling influences those singular values. However, before we start the experiements, let's look at problem 9b from HW1. \n",
    "1. Do problem 9b from HW1 (you can assume part a). What does the result of part b mean in terms of singular values of the data matrix for measurements on a grid and Fourier features?\n",
    "2. Can you guess how the singular values will behave when $x_i$ are sampled uniformly at random for polynomial and Fourier features? (Hint: on the one hand, uniform random sampling seems somewhat similar to making measurements on a grid. On the other hand, look at the plots of the features above. Polynomial features have small values and look similar. What does it mean in terms of singular values?)\n",
    "\n",
    "Do the simulation below to check your intuition.\n"
   ]
  },
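  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before running the interactive plot, here is a small self-contained check (a sketch that rebuilds the Fourier features inline rather than calling `featurize_fourier`): on an evenly spaced grid the feature columns are mutually orthogonal, so $\\Phi^\\top \\Phi$ is exactly diagonal."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Evenly spaced grid (endpoint excluded) and real Fourier features, frequencies 1..r.\n",
    "n, r = 8, 2\n",
    "x = np.linspace(-1, 1, n, endpoint=False)\n",
    "cols = [np.ones(n)]\n",
    "for k in range(1, r + 1):\n",
    "    cols += [np.sin(k * np.pi * x), np.cos(k * np.pi * x)]\n",
    "phi = np.stack(cols, axis=1)\n",
    "gram = phi.T @ phi\n",
    "# Orthogonal columns: n for the constant feature, n/2 for each sin/cos feature.\n",
    "print(np.allclose(gram, np.diag([n] + [n / 2] * (2 * r))))"
   ]
  },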
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def plot_eig_values(n, d, seed, lambda_ridge):\n",
    "    np.random.seed(seed)\n",
    "    x_type_phi_type_pairs = [('uniform_random', 'polynomial'), ('uniform_random', 'fourier'), ('grid', 'fourier')]\n",
    "    \n",
    "    colors = ['blue', 'green', 'orange']\n",
    "    \n",
    "    for k, (x_type, phi_type) in enumerate(x_type_phi_type_pairs):\n",
    "        x_train = generate_x(x_type=x_type, n=n)\n",
    "        phi_train = featurize(x_train, d, phi_type)\n",
    "        eig_vals,_ = np.linalg.eig(phi_train.T @ phi_train + lambda_ridge*np.eye(d)) \n",
    "        \n",
    "\n",
    "        \n",
    "        eig_vals = np.sort(np.abs(eig_vals))[::-1]\n",
    "        plt.plot(eig_vals, 'o-', c = colors[k], label = 'x_type: ' + str(x_type) + ', phi_type: ' + str(phi_type))\n",
    "        \n",
    "        \n",
    "    plt.legend(bbox_to_anchor = (1.73, 0.97))\n",
    "    plt.yscale('log')\n",
    "    plt.ylim(1e-20, 1e4)\n",
    "    plt.xlim([-1, d+1])\n",
    "\n",
    "    plt.show()\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "seed = 1\n",
    "lambda_ridge = 0\n",
    "\n",
    "\n",
    "slider = widgets.IntSlider(\n",
    "    value=11,\n",
    "    min=1,\n",
    "    max=65,\n",
    "    step=2,\n",
    "    description='d:',\n",
    "    disabled=False,\n",
    "    continuous_update=False,\n",
    "    orientation='horizontal',\n",
    "    readout=True,\n",
    "    readout_format='d'\n",
    ")\n",
    "slider1 = widgets.IntSlider(\n",
    "    value=65,\n",
    "    min=65,\n",
    "    max=200,\n",
    "    step=2,\n",
    "    description='n:',\n",
    "    disabled=False,\n",
    "    continuous_update=False,\n",
    "    orientation='horizontal',\n",
    "    readout=True,\n",
    "    readout_format='d'\n",
    ")\n",
    "\n",
    "\n",
    "interactive_plot =interactive(plot_eig_values,n = slider1, d = slider, seed = fixed(seed),\n",
    "                              lambda_ridge = fixed(lambda_ridge))\n",
    "interactive_plot"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Part d): Linear Regression to learn the 1d-function in feature space\n",
    "\n",
    "Now that we are done with setting up our data, we start  defining the tools that we will use for learning: ordinary least squares and ridge regression.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "#### Least squares\n",
    "To learn the function we will perform linear regression in the lifted feature space, i.e. we learn a set of coefficients $w \\in \\mathbb{R}^d$ to minimize the least-squares loss:\n",
    "\n",
    "$$\\ell(w) = \\frac{1}{n} \\| y - \\phi w \\|_2^2.$$\n",
    "\n",
    "1. Add the code to the places marked with \"TODO\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.linear_model import LinearRegression\n",
    "def solve_ls(phi, y):\n",
    "    \n",
    "    LR = LinearRegression(fit_intercept=False, normalize=False)\n",
    "\n",
    "    LR.fit(TODO)\n",
    "    coeffs = TODO\n",
    "\n",
    "    \n",
    "    \n",
    "    loss = np.mean((y- phi@coeffs)**2)\n",
    "    return coeffs, loss"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Ridge\n",
    "\n",
    "Ridge regression is a celebrated tool to combat noise. We   add a regularizing penalty term to the least squares objective to perform ridge regression where we minimize the loss,\n",
    "\n",
    "$$\\ell(w) = \\frac{1}{n} (\\| y - \\phi w \\|_2^2+ \\lambda \\| w \\|_2^2).$$\n",
    "\n",
    "We will explore the effect of the regularizing term $\\lambda$ in part g\n",
    "\n",
    "2. Add the code to the places marked with \"TODO\"\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.linear_model import Ridge\n",
    "\n",
    "def solve_ridge(phi, y, lambda_ridge=0):\n",
    "    \n",
    "    Rdg = Ridge(fit_intercept=False, normalize=False, alpha = lambda_ridge)\n",
    "\n",
    "    Rdg.fit(TODO)\n",
    "    coeffs = TODO\n",
    "\n",
    "\n",
    "    \n",
    "    loss = np.mean((y- phi@coeffs)**2) + np.mean(coeffs**2)\n",
    "    return coeffs, loss\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Part e): Comparing polynomial and Fourier features\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In this part we compare our two families of features (Fourier and polynomial) in terms of approximating our chosen function. The sampling of data is chosen to be uniform random for both families. Note that we have no noise yet.\n",
    "\n",
    "Do the following:\n",
    "\n",
    "1. Choose $3$ different values of $n$: small, medium and large.  For each such $n$ find the number of features that is needed to approximate our function better. Wich feature family needs more features? Is it much more?\n",
    "2. Compare the learned weights. Try to explain, how what you see is related to the singular values of the data matrices. (You will be able to give a more rigorous answer to this question in part g.)\n",
    "2. Fix $d=17$ and vary $n$. What do you see as $n$ grows? \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_plot_data( phi_type, d, w, n_plot = 1000):\n",
    "    x_plot= generate_x(x_type = 'grid', n=n_plot)\n",
    "    y_plot_true = generate_y(x=x_plot)\n",
    "    phi_plot = featurize(x_plot, d, phi_type)\n",
    "    \n",
    "    return x_plot, y_plot_true, phi_plot @ w\n",
    "\n",
    "def gen_and_solve(n, d, x_type, phi_type, seed = 1, awgn_std = 0, lambda_ridge = 0):\n",
    "    np.random.seed(seed)\n",
    "    \n",
    "    x_train = generate_x(x_type=x_type, n=n)\n",
    "    phi_train = featurize(x_train, d, phi_type)\n",
    "    y_train = generate_y(x=x_train)\n",
    "    \n",
    "    if awgn_std != 0:\n",
    "        y_train = add_awgn_noise(y_train, awgn_std)\n",
    "        \n",
    "    if lambda_ridge == 0:\n",
    "        w, loss = solve_ls(phi_train, y_train)\n",
    "\n",
    "    else:\n",
    "        w, loss = solve_ridge(phi_train, y_train, lambda_ridge)\n",
    "\n",
    "\n",
    "    return x_train, y_train, w, loss\n",
    "\n",
    "\n",
    "def visualize_(x_train, y_train,  phi_type, d, w, loss, n_plot = 1000, n_fit = 1000):\n",
    "    x_plot, y_plot_true, y_plot_pred = get_plot_data( phi_type, d, w, n_plot)\n",
    "    plt.plot(x_plot, y_plot_true, label = 'True function')\n",
    "    plt.scatter(x_train, y_train, marker='o', s=20, label = 'Training samples')\n",
    "    plt.plot(x_plot, y_plot_pred, 'o-', ms=2, label = 'Learned function')\n",
    "\n",
    "    \n",
    "\n",
    "    plt.title(\"Train loss:\" + str(\"{:.2e}\".format(loss)))\n",
    "    plt.ylim([-1.5, 1.5])\n",
    "    plt.xlabel('x')\n",
    "    plt.ylabel('y')\n",
    "    plt.legend(bbox_to_anchor = (1.03, 0.97))\n",
    "    plt.show()\n",
    "        \n",
    "    markerlines, stemlines,  baseline = plt.stem(np.arange(d), w, 'b', 'o',  use_line_collection=True)\n",
    "    plt.setp(stemlines, 'color', plt.getp(markerlines,'color'))\n",
    "    plt.xlabel('feature #(k)')\n",
    "    plt.ylabel('weight')\n",
    "    plt.show()\n",
    "\n",
    "def plot_true_and_predicted(n, n_plot, n_fit, x_type, phi_type, seed, awgn_std, lambda_ridge, d): \n",
    "    x_train, y_train, w, loss = gen_and_solve(n, d, x_type,  phi_type, awgn_std = awgn_std, lambda_ridge = lambda_ridge, seed = seed)\n",
    "    fit_mse = visualize_(x_train, y_train,  phi_type, d, w, loss, n_plot , n_fit)   \n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "def get_params2():\n",
    "    n = 64\n",
    "    n_plot = 1000\n",
    "    n_fit = 10000\n",
    "    x_type = 'uniform_random'\n",
    "    seed = 1\n",
    "    awgn_std = 0\n",
    "    lambda_ridge = 0\n",
    "    return n, n_plot, n_fit, x_type, seed, awgn_std, lambda_ridge\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "n, n_plot, n_fit, x_type, seed, awgn_std, lambda_ridge = get_params2()\n",
    "slider1 = widgets.IntSlider(\n",
    "    value=1,\n",
    "    min=1,\n",
    "    max=65,\n",
    "    step=2,\n",
    "    description='d:',\n",
    "    disabled=False,\n",
    "    continuous_update=False,\n",
    "    orientation='horizontal',\n",
    "    readout=True,\n",
    "    readout_format='d'\n",
    ")\n",
    "slider2 = widgets.IntSlider(\n",
    "    value=65,\n",
    "    min=65,\n",
    "    max=1000,\n",
    "    step=2,\n",
    "    description='n:',\n",
    "    disabled=False,\n",
    "    continuous_update=False,\n",
    "    orientation='horizontal',\n",
    "    readout=True,\n",
    "    readout_format='d'\n",
    ")\n",
    "    \n",
    "\n",
    "phi_type = 'polynomial'\n",
    "print(\"Polynomial features\")\n",
    "\n",
    "interactive_plot =interactive(plot_true_and_predicted,n = slider2, n_plot = fixed(n_plot), n_fit = fixed(n_fit), \n",
    "                              x_type = fixed(x_type), phi_type = fixed(phi_type),seed = fixed(seed),\n",
    "                              awgn_std= fixed(awgn_std), lambda_ridge = fixed(lambda_ridge), d = slider1)\n",
    "interactive_plot\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "n, n_plot, n_fit, x_type, seed, awgn_std, lambda_ridge = get_params2()\n",
    "\n",
    "slider3 = widgets.IntSlider(\n",
    "    value=1,\n",
    "    min=1,\n",
    "    max=65,\n",
    "    step=2,\n",
    "    description='d:',\n",
    "    disabled=False,\n",
    "    continuous_update=False,\n",
    "    orientation='horizontal',\n",
    "    readout=True,\n",
    "    readout_format='d'\n",
    ")\n",
    "slider4 = widgets.IntSlider(\n",
    "    value=65,\n",
    "    min=65,\n",
    "    max=1000,\n",
    "    step=2,\n",
    "    description='n:',\n",
    "    disabled=False,\n",
    "    continuous_update=False,\n",
    "    orientation='horizontal',\n",
    "    readout=True,\n",
    "    readout_format='d'\n",
    ")\n",
    "\n",
    "phi_type = 'fourier'\n",
    "print(\"Fourier features\")\n",
    "\n",
    "interactive_plot =interactive(plot_true_and_predicted,n = slider4, n_plot = fixed(n_plot), n_fit = fixed(n_fit), \n",
    "                              x_type = fixed(x_type), phi_type = fixed(phi_type),seed = fixed(seed),\n",
    "                              awgn_std= fixed(awgn_std), lambda_ridge = fixed(lambda_ridge), d = slider3)\n",
    "interactive_plot\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Part f): Effect of noise\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    " \n",
    "\n",
    "Next we will see the effect of noise in our learning method. For this purpose we will consider the three kinds of input data/feature combinations:\n",
    "1. uniform randomly sampled x, polynomial features\n",
    "2. uniform randomly sampled x, fourier features\n",
    "3. evenly spaced x, fourier features\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Answer the following questions:\n",
    "1. How does the influence of the noise change as we increase the number of features?\n",
    "2. How does the influence of the noise change as we increase the number of samples?\n",
    "3. What is the shape of the dependence of the predicted function on $\\sigma$(e.g. quadratic, exponential, etc.)?  (Hint: recall that the solution to least squares is given by the formula $w^* = (\\Phi^\\top \\Phi)^{-1} \\Phi^\\top y$, and we generate $y$ by adding noise scaled by $\\sigma$ to the true signal)."
   ]
  },
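  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The hint in question 3 can be checked directly (a sketch with a hypothetical polynomial fit): with the noise direction held fixed, the least-squares solution $w^* = (\\Phi^\\top \\Phi)^{-1} \\Phi^\\top y$ is affine in $\\sigma$, so doubling $\\sigma$ exactly doubles the deviation from the noiseless fit."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "n, d = 50, 5\n",
    "x = np.sort(rng.uniform(-1, 1, n))\n",
    "phi = np.vander(x, d, increasing=True)  # polynomial features as an example\n",
    "y0 = np.sin(np.pi * x)                  # a noiseless signal\n",
    "e = rng.normal(0, 1, n)                 # fixed noise direction\n",
    "\n",
    "def fit(sigma):\n",
    "    w, *_ = np.linalg.lstsq(phi, y0 + sigma * e, rcond=None)\n",
    "    return w\n",
    "\n",
    "w0 = fit(0.0)\n",
    "# The coefficient error scales linearly with sigma.\n",
    "print(np.allclose(fit(0.2) - w0, 2 * (fit(0.1) - w0)))"
   ]
  },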
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_params3():\n",
    "    n = 100\n",
    "    n_plot = 1000\n",
    "    n_fit = 10000\n",
    "    seed = 1\n",
    "    awgn_std = 1e-1\n",
    "    lambda_ridge = 0\n",
    "    return n, n_plot, n_fit, seed, awgn_std, lambda_ridge"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "n, n_plot, n_fit, seed, awgn_std, lambda_ridge = get_params3()\n",
    "slider1 = slider = widgets.RadioButtons(\n",
    "    options=[11, 31, 51],\n",
    "    description='d:',\n",
    "    disabled=False\n",
    ")\n",
    "slider2= widgets.FloatSlider(\n",
    "    value=0.,\n",
    "    min=0., \n",
    "    max=1, \n",
    "    step=0.01,\n",
    "    description='$\\sigma$',\n",
    "    continuous_update= False\n",
    ")\n",
    "slider3 = widgets.IntSlider(\n",
    "    value=65,\n",
    "    min=65,\n",
    "    max=1000,\n",
    "    step=2,\n",
    "    description='n:',\n",
    "    disabled=False,\n",
    "    continuous_update=False,\n",
    "    orientation='horizontal',\n",
    "    readout=True,\n",
    "    readout_format='d'\n",
    ")\n",
    "x_type = 'uniform_random'\n",
    "\n",
    "\n",
    "phi_type = 'polynomial'\n",
    "print(\"Uniform random x, Polynomial features\")\n",
    "interactive_plot =interactive(plot_true_and_predicted,n = slider3, n_plot = fixed(n_plot), n_fit = fixed(n_fit), \n",
    "                              x_type = fixed(x_type), phi_type = fixed(phi_type),seed = fixed(seed),\n",
    "                              awgn_std= slider2, lambda_ridge = fixed(lambda_ridge), d = slider1)\n",
    "interactive_plot\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "n, n_plot, n_fit, seed, awgn_std, lambda_ridge = get_params3()\n",
    "slider4 = slider = widgets.RadioButtons(\n",
    "    options=[11, 31, 51],\n",
    "    description='d:',\n",
    "    disabled=False\n",
    ")\n",
    "\n",
    "slider5= widgets.FloatSlider(\n",
    "    value=0.,\n",
    "    min=0., \n",
    "    max=1, \n",
    "    step=0.01,\n",
    "    description='$\\sigma$',\n",
    "    continuous_update= False\n",
    ")\n",
    "slider6 = widgets.IntSlider(\n",
    "    value=65,\n",
    "    min=65,\n",
    "    max=1000,\n",
    "    step=2,\n",
    "    description='n:',\n",
    "    disabled=False,\n",
    "    continuous_update=False,\n",
    "    orientation='horizontal',\n",
    "    readout=True,\n",
    "    readout_format='d'\n",
    ")\n",
    "x_type = 'uniform_random'\n",
    "\n",
    "phi_type = 'fourier'\n",
    "print(\"Uniform random x, Fourier features\")\n",
    "\n",
    "interactive_plot =interactive(plot_true_and_predicted,n = slider6, n_plot = fixed(n_plot), n_fit = fixed(n_fit), \n",
    "                              x_type = fixed(x_type), phi_type = fixed(phi_type),seed = fixed(seed),\n",
    "                              awgn_std= slider5, lambda_ridge = fixed(lambda_ridge), d = slider4)\n",
    "interactive_plot"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "n, n_plot, n_fit, seed, awgn_std, lambda_ridge = get_params3()\n",
    "slider7 = widgets.RadioButtons(\n",
    "    options=[11, 31, 51],\n",
    "    description='d:',\n",
    "    disabled=False\n",
    ")\n",
    "slider8= widgets.FloatSlider(\n",
    "    value=0.,\n",
    "    min=0., \n",
    "    max=1, \n",
    "    step=0.01,\n",
    "    description='$\\sigma$',\n",
    "    continuous_update= False\n",
    ")\n",
    "slider9 = widgets.IntSlider(\n",
    "    value=65,\n",
    "    min=65,\n",
    "    max=1000,\n",
    "    step=2,\n",
    "    description='n:',\n",
    "    disabled=False,\n",
    "    continuous_update=False,\n",
    "    orientation='horizontal',\n",
    "    readout=True,\n",
    "    readout_format='d'\n",
    ")\n",
    "x_type = 'grid'\n",
    "\n",
    "\n",
    "phi_type = 'fourier'\n",
    "print(\"Evenly spaced x, Fourier features\")\n",
    "\n",
    "interactive_plot =interactive(plot_true_and_predicted,n = slider9, n_plot = fixed(n_plot), n_fit = fixed(n_fit), \n",
    "                              x_type = fixed(x_type), phi_type = fixed(phi_type),seed = fixed(seed),\n",
    "                              awgn_std= slider8, lambda_ridge = fixed(lambda_ridge), d = slider7)\n",
    "interactive_plot"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Part g) Ridge regression\n",
    "\n",
    "Ridge regression that we defined in part d is a celebrated tool to combat noise. Further in this course we will develop theoretical understanding of how adding penalty helps do that, but for now we will just observe how it influences the solution to our learning problem, and how that influence is related to the spectral properties of the data matrix.\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before we start, let's think about the problem from the theoretical point of view: problem 6c from HW1 states that after a suitable change of coordinates, the $i$-th coordinate of  the solution to ridge regression can be obtained from the corresponding coordinate of $\\bf{U}^\\top \\vec{y}$ by multiplication by $\\frac{\\sigma_i}{\\sigma_i^2 + \\lambda}$, where $\\sigma_i$ is the $i$-th singular value of $\\bf{X}$ (or zero if $i$ is greater than the rank of $\\bf{X}$.)\n",
    "\n",
    "1. What happens if $\\lambda = 0$ and $\\sigma_i$ is small?\n",
    "2. Suppose $\\sigma_i$ are all large. Will small $\\lambda$ influence the performance?\n",
    "3. How do you think $\\sigma_i$ will change as we add more data points (recall that we are in the regime where $n > d$)? How will this influence our choice of $\\lambda$? (Hint: think about the Frobenius norm. What happens to it when you add rows to a matrix? Now recall that the squared Frobenius norm is equal to the sum of squared singular values.)\n",
    "\n"
   ]
  },
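  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick sanity check of this identity (a sketch on random Gaussian data, not part of the HW solution), we can compare the closed-form ridge solution $(\\bf{X}^\\top\\bf{X} + \\lambda \\bf{I})^{-1}\\bf{X}^\\top \\vec{y}$ with the SVD formula, in which each coordinate of $\\bf{U}^\\top\\vec{y}$ is scaled by the shrinkage factor $\\frac{\\sigma_i}{\\sigma_i^2 + \\lambda}$:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sanity check (sketch): ridge via the normal equations vs. ridge via the SVD.\n",
    "rng = np.random.default_rng(0)\n",
    "n_chk, d_chk, lam = 50, 8, 1e-2\n",
    "X_chk = rng.standard_normal((n_chk, d_chk))\n",
    "y_chk = rng.standard_normal(n_chk)\n",
    "\n",
    "# closed-form ridge solution\n",
    "w_direct = np.linalg.solve(X_chk.T @ X_chk + lam * np.eye(d_chk), X_chk.T @ y_chk)\n",
    "\n",
    "# same solution through the SVD: each coordinate of U^T y is shrunk\n",
    "# by the factor sigma_i / (sigma_i^2 + lambda)\n",
    "U, s, Vt = np.linalg.svd(X_chk, full_matrices=False)\n",
    "w_svd = Vt.T @ ((s / (s**2 + lam)) * (U.T @ y_chk))\n",
    "\n",
    "print(np.allclose(w_direct, w_svd))  # True\n"
   ]
  },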
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Now run the experiments below and answer the following questions:\n",
    "1. How does the amount of regularization you need to apply depend on the number of samples?\n",
    "2. How does the amount of regularization you need to apply depend on the number of features?\n",
    "3. For which of the three experiments below is the learned function most sensitive to $\\lambda$?\n",
    "4. Do your observations align well with the intuition we developed above?"
   ]
  },
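  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The hint about the Frobenius norm can also be checked numerically. The sketch below (on random polynomial features, not our actual learning setup) shows that the singular values grow roughly like $\\sqrt{n}$ as rows are added, since the squared Frobenius norm $\\sum_i \\sigma_i^2$ grows linearly in $n$:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: singular values of a random polynomial feature matrix scale like sqrt(n).\n",
    "rng = np.random.default_rng(0)\n",
    "d_chk = 5\n",
    "for n_chk in (100, 400, 1600):\n",
    "    X_chk = np.vander(rng.uniform(-1, 1, n_chk), d_chk)  # degree-4 polynomial features\n",
    "    s = np.linalg.svd(X_chk, compute_uv=False)\n",
    "    # sigma_i / sqrt(n) stays roughly constant as n grows\n",
    "    print(n_chk, np.round(s / np.sqrt(n_chk), 2))\n"
   ]
  },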
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_params4():\n",
    "    n = 65\n",
    "    n_plot = 1000\n",
    "    n_fit = 10000\n",
    "    seed = 1\n",
    "    awgn_std = 3 * 1e-1\n",
    "    return n, n_plot, n_fit, seed, awgn_std\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "n, n_plot, n_fit, seed, awgn_std = get_params4()\n",
    "\n",
    "slider1 = widgets.RadioButtons(\n",
    "    options=[11, 31, 51],\n",
    "    description='d:',\n",
    "    disabled=False\n",
    ")\n",
    "\n",
    "slider2 = widgets.FloatLogSlider(\n",
    "    value=2**-50,  # start with effectively no regularization\n",
    "    base=2,\n",
    "    min=-50,\n",
    "    max=10,\n",
    "    step=1,\n",
    "    description=r'$\\lambda$',\n",
    "    continuous_update=False\n",
    ")\n",
    "\n",
    "slider3 = widgets.RadioButtons(\n",
    "    options=[65, 150, 500, 1000],\n",
    "    description='n:',\n",
    "    disabled=False\n",
    ")\n",
    "\n",
    "x_type = 'uniform_random'\n",
    "\n",
    "phi_type = 'polynomial'\n",
    "print(\"Uniform random x, Polynomial features\")\n",
    "interactive_plot = interactive(plot_true_and_predicted, n=slider3, n_plot=fixed(n_plot), n_fit=fixed(n_fit),\n",
    "                              x_type=fixed(x_type), phi_type=fixed(phi_type), seed=fixed(seed),\n",
    "                              awgn_std=fixed(awgn_std), lambda_ridge=slider2, d=slider1)\n",
    "interactive_plot\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "n, n_plot, n_fit, seed, awgn_std = get_params4()\n",
    "slider4 = widgets.RadioButtons(\n",
    "    options=[11, 31, 51],\n",
    "    description='d:',\n",
    "    disabled=False\n",
    ")\n",
    "\n",
    "slider5 = widgets.FloatLogSlider(\n",
    "    value=2**-50,  # start with effectively no regularization\n",
    "    base=2,\n",
    "    min=-50,\n",
    "    max=10,\n",
    "    step=1,\n",
    "    description=r'$\\lambda$',\n",
    "    continuous_update=False\n",
    ")\n",
    "\n",
    "slider6 = widgets.RadioButtons(\n",
    "    options=[65, 150, 500, 1000],\n",
    "    description='n:',\n",
    "    disabled=False\n",
    ")\n",
    "\n",
    "x_type = 'uniform_random'\n",
    "\n",
    "\n",
    "phi_type = 'fourier'\n",
    "print(\"Uniform random x, Fourier features\")\n",
    "\n",
    "interactive_plot = interactive(plot_true_and_predicted, n=slider6, n_plot=fixed(n_plot), n_fit=fixed(n_fit),\n",
    "                              x_type=fixed(x_type), phi_type=fixed(phi_type), seed=fixed(seed),\n",
    "                              awgn_std=fixed(awgn_std), lambda_ridge=slider5, d=slider4)\n",
    "interactive_plot"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "n, n_plot, n_fit, seed, awgn_std = get_params4()\n",
    "slider7 = widgets.RadioButtons(\n",
    "    options=[11, 31, 51],\n",
    "    description='d:',\n",
    "    disabled=False\n",
    ")\n",
    "\n",
    "slider8 = widgets.FloatLogSlider(\n",
    "    value=2**-50,  # start with effectively no regularization\n",
    "    base=2,\n",
    "    min=-50,\n",
    "    max=10,\n",
    "    step=1,\n",
    "    description=r'$\\lambda$',\n",
    "    continuous_update=False\n",
    ")\n",
    "\n",
    "slider9 = widgets.RadioButtons(\n",
    "    options=[65, 150, 500, 1000],\n",
    "    description='n:',\n",
    "    disabled=False\n",
    ")\n",
    "\n",
    "x_type = 'grid'\n",
    "\n",
    "\n",
    "phi_type = 'fourier'\n",
    "print(\"Evenly spaced x, Fourier features\")\n",
    "\n",
    "interactive_plot = interactive(plot_true_and_predicted, n=slider9, n_plot=fixed(n_plot), n_fit=fixed(n_fit),\n",
    "                              x_type=fixed(x_type), phi_type=fixed(phi_type), seed=fixed(seed),\n",
    "                              awgn_std=fixed(awgn_std), lambda_ridge=slider8, d=slider7)\n",
    "interactive_plot"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
