{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "colab_type": "text",
    "id": "view-in-github"
   },
   "source": [
    "<a href=\"https://colab.research.google.com/github/CoreTheGreat/HBPU-Machine-Learning-Course/blob/main/ML_Chapter2_Regression.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "lPboLx_o0UxI"
   },
   "source": [
    "# 第二章：回归\n",
    "湖北理工学院《机器学习》课程资料\n",
    "\n",
    "作者：李辉楚吴\n",
    "\n",
    "笔记内容概述:\n",
    "* 2.1 绕不开的房价预测：准备房价预测数据\n",
    "* 2.2 用线性回归预测房价\n",
    "* 2.3 梯度下降 Gradient Descent\n",
    "* 2.4 模型泛化 Generalization\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Ifpm7Sql4U09"
   },
   "source": [
    "## 2.1 绕不开的房价预测\n",
    "\n",
    "步骤1：从 http://lib.stat.cmu.edu/datasets/boston 导入房价预测数据（也可改用本地文件加载，见下方代码）\n",
    "\n",
    "原始数据是一个列数为11的矩阵，每两行内容为一条记录，包含描述房屋的13种特征以及房屋价格。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 363
    },
    "id": "sM-ziKb94S9_",
    "outputId": "c78f94c9-6a14-43da-b0b2-c4df281b75b3"
   },
   "outputs": [],
   "source": [
    "import pandas as pd # To load house price data\n",
    "import numpy as np # To manipulate data\n",
    "\n",
    "# Load from URL\n",
    "# data_url = 'http://lib.stat.cmu.edu/datasets/boston' # Set url of the dataset\n",
    "# raw_df = pd.read_csv(data_url, sep='\\s+', skiprows=22, header=None) # Load data\n",
    "\n",
    "# Load from local file\n",
    "data_url = './Data/houseprice.csv' # Set url of the dataset\n",
    "raw_df = pd.read_csv(data_url, sep=',', skiprows=1, header=None) # Load data\n",
    "\n",
    "raw_df.head(10) # Display the raw data"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Gvwwz2J7APC0"
   },
   "source": [
    "步骤2：重构数据，将原始数据分为一个特征矩阵 (data) 和房价向量 (target)。\n",
    "\n",
    "注意：特征矩阵的记录数量应与房价数量相同。\n",
    "\n",
    "特征描述：\n",
    "\n",
    "* [0] - 按城镇划分的犯罪率\n",
    "* [1] - 面积超过25,000平方英尺（2322.58平方米）的住宅用地占比\n",
    "* [2] - 非零售商业用地占比\n",
    "* [3] - 是否位于查尔斯河畔 ( 1为是，0为否 )\n",
    "* [4] - 氮氧化物浓度（空气质量）（单位：千万分之一）\n",
    "* [5] - 每个住宅的平均房间数\n",
    "* [6] - 1940年以前建造的自住房比例\n",
    "* [7] - 到波士顿五个就业中心的加权距离\n",
    "* [8] - 辐射路可达性指数\n",
    "* [9] - 每10,000美元的房产税率\n",
    "* [10] - 学生-教师比率\n",
    "* [11] - （忽略）1000(Bk - 0.63)^2，其中 Bk 为该城镇的黑人比例\n",
    "* [12] - 低层次人口占比\n",
    "\n",
    "房价描述:\n",
    "* 自住房房价中位数（单位：千美元）"
   ]
  },
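  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "下面先用一个小的合成矩阵演示“每两行为一条记录”的重构方式（示意代码，矩阵内容为假设数字，与真实数据无关）：偶数行取全部11列，奇数行取第[1]列，水平拼接后得到12个特征。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "demo = np.arange(22).reshape(2, 11) # 两行合成数据 = 一条记录（示意）\n",
    "rec = np.hstack([demo[::2, :], demo[1::2, 1:2]]) # 偶数行全部列 + 奇数行第[1]列\n",
    "print(rec.shape) # 一条记录，12个特征"
   ]
  },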
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "rmzRsgRV6YzQ",
    "outputId": "eae6a6c2-2f18-4b8d-8498-276b35bf495a"
   },
   "outputs": [],
   "source": [
    "import numpy as np # To manipulate data\n",
    "\n",
    "data = np.hstack([raw_df.values[::2, :], raw_df.values[1::2, 1:2]]) # Select features\n",
    "target = raw_df.values[1::2, 2] # Select target\n",
    "\n",
    "fts_names = [\n",
    "    '犯罪率（%）',\n",
    "    '大住宅用地占比（%）',\n",
    "    '非零售商业用地占比（%）',\n",
    "    '景观房',\n",
    "    '氮氧化物浓度（千万分之一）',\n",
    "    '平均房间数',\n",
    "    '老旧房屋占比（%）',\n",
    "    '离就业中心的加权距离',\n",
    "    '辐射路可达性指标',\n",
    "    '每万元房产税',\n",
    "    '学生-教师比',\n",
    "    '低层次人口占比（%）'] # Feature names\n",
    "\n",
    "print(f'Data shape: {data.shape}, Target shape: {target.shape}') # Display the shape of data and target"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "O4wQ192rVbd9"
   },
   "source": [
    "步骤3：用散点图（Scatter）展现各个特征与房价之间的关系。\n",
    "\n",
    "注意：需要安装中文字体才能在figure中显示中文（Colab环境中需要此步骤）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "A2wOqq4sWxDl",
    "outputId": "c02be50c-ca78-4cf8-d821-cc58ffcdba2f"
   },
   "outputs": [],
   "source": [
    "import matplotlib\n",
    "from matplotlib import font_manager\n",
    "\n",
    "font_manager.fontManager.addfont('./Data/simhei.ttf') # Add the font\n",
    "matplotlib.rc('font', family='SimHei') # Set the font"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "1wieWtKaW3SK"
   },
   "source": [
    "逐一完成各个特征和房价的散点图"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 591
    },
    "id": "h4wzLL4W6TYi",
    "outputId": "34a368da-0916-463f-e4e8-84ec02de72e0"
   },
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt # To draw figures\n",
    "\n",
    "plt.rcParams['font.sans-serif'] = ['SimHei']  # Support Chinese\n",
    "plt.rcParams['axes.unicode_minus'] = False  # Support negative sign\n",
    "\n",
    "num_fts = data.shape[1] # Get the number of features\n",
    "num_col = 6 # Number of columns in the figure\n",
    "num_row = int(np.ceil(num_fts / num_col)) # Number of rows in the figure\n",
    "\n",
    "label_size = 18 # Label size\n",
    "ticklabel_size = 14 # Tick label size\n",
    "\n",
    "_, axes = plt.subplots(num_row, num_col, figsize=(18, 3*num_row)) # Create a figure\n",
    "\n",
    "for i in range(num_fts): # Loop through all features\n",
    "    row = int(i / num_col) # Get the row index\n",
    "    col = i % num_col # Get the column index\n",
    "\n",
    "    ax = axes[row, col]\n",
    "    ax.scatter(data[:, i], target) # Plot scatter fig of i-th feature and target\n",
    "    ax.tick_params(axis='both', which='major', labelsize=ticklabel_size) # Set tick label size\n",
    "    ax.set_xlabel(fts_names[i], fontsize=label_size) # Label the x-axis\n",
    "    ax.set_ylabel('房价中位数（千美元）', fontsize=label_size) # Label the y-axis\n",
    "\n",
    "plt.tight_layout() # Adjust the layout of the figure\n",
    "plt.show() # Display the figure"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Ok1YZCFqNAuH"
   },
   "source": [
    "## 2.2 用线性回归预测房价\n",
    "\n",
    "本节内容包括三个部分：\n",
    "* 准备一大堆房价数据\n",
    "* 准备一堆房价预测模型\n",
    "* 选择最优的房价预测模型"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "A9QJr7SMOCBM"
   },
   "source": [
    "### 2.2.1 准备一大堆房价数据\n",
    "\n",
    "画出房价与房间数量的散点图，观察房价与房间数量的关系。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 490
    },
    "id": "m2_RSiywNtUV",
    "outputId": "eb5d66ab-62c7-442c-ee43-b592b36ebbc5"
   },
   "outputs": [],
   "source": [
    "x = data[:, 5] # Get the number of rooms\n",
    "y = target # Get the target\n",
    "\n",
    "def draw_scatter(x, y):\n",
    "    '''\n",
    "    This is a specific function to draw a scatter figure of room number and house price.\n",
    "    x: room number\n",
    "    y: house price\n",
    "    '''\n",
    "    global label_size, ticklabel_size # Set global variables of font size\n",
    "\n",
    "    fig, ax = plt.subplots() # Create a figure and a set of subplots.\n",
    "    ax.scatter(x, y) # Plot the data\n",
    "    ax.tick_params(axis='both', which='major', labelsize=ticklabel_size) # Set tick label size\n",
    "    ax.set_xlabel('平均房间数', fontsize=label_size) # Label the x-axis\n",
    "    ax.set_ylabel('房价中位数（千美元）', fontsize=label_size) # Label the y-axis\n",
    "\n",
    "    x_min = np.min(x)-0.1 # Get the minimum value of x\n",
    "    x_max = np.max(x)+0.1 # Get the maximum value of x\n",
    "    ax.set_xlim(x_min, x_max) # Set the x-axis limits\n",
    "    ax.set_ylim(np.min(y)-1, np.max(y)+1) # Set the y-axis limits\n",
    "\n",
    "    ax.set_position([0.12, 0.14, 0.85, 0.83]) # Set the position of the figure\n",
    "\n",
    "    x_linear = np.linspace(x_min, x_max, 100) # Create a sequence of x to draw prediction function\n",
    "\n",
    "    return fig, ax, x_linear\n",
    "\n",
    "fig, ax, _ = draw_scatter(x, y)\n",
    "\n",
    "# plt.savefig('room_price.png', dpi=300) # Make figure clearer\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "vK7Nd0ATTjpC"
   },
   "source": [
    "### 2.2.2 准备一堆房价预测模型\n",
    "\n",
    "定义预测函数并在图中增加对应的曲线。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 490
    },
    "id": "Pyu18U8POAqC",
    "outputId": "91125c42-ec34-401f-a849-2e0a15d4d391"
   },
   "outputs": [],
   "source": [
    "def linear_fun(x, w, b):\n",
    "    '''\n",
    "    This is a linear prediction function.\n",
    "    x: feature\n",
    "    b: bias\n",
    "    w: weight\n",
    "    '''\n",
    "\n",
    "    y = w * x + b\n",
    "\n",
    "    return y\n",
    "\n",
    "fig, ax, x_linear = draw_scatter(x, y) # Plot the scatter\n",
    "\n",
    "ax.plot(x_linear, linear_fun(x_linear, 1, 4), color='green', linewidth=2) # y = x + 4\n",
    "# plt.savefig('room_price_f1.png', dpi=300) # Make figure clearer\n",
    "\n",
    "ax.plot(x_linear, linear_fun(x_linear, 6, -4), color='orange', linewidth=2) # y = 6x - 4\n",
    "# plt.savefig('room_price_f2.png', dpi=300) # Make figure clearer\n",
    "\n",
    "ax.plot(x_linear, linear_fun(x_linear, 11, -47), color='red', linewidth=2) # y = 11x - 47\n",
    "# plt.savefig('room_price_f3.png', dpi=300) # Make figure clearer\n",
    "\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "m2njQ-yxpqju"
   },
   "source": [
    "练习：尝试构建其它的预测模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 490
    },
    "id": "vEwfIl7uazAA",
    "outputId": "6e49f6c4-a934-49e0-9fcf-fc1c5eafae33"
   },
   "outputs": [],
   "source": [
    "def nonlinear_fun(x, w, b):\n",
    "    '''\n",
    "    This is an example of a non-linear prediction function.\n",
    "    x: feature\n",
    "    b: bias\n",
    "    w: weight\n",
    "\n",
    "    Return:\n",
    "    y: prediction\n",
    "    '''\n",
    "    return w / x + b\n",
    "\n",
    "\n",
    "fig, ax, x_linear = draw_scatter(x, y) # Plot the scatter\n",
    "ax.plot(x_linear, nonlinear_fun(x_linear, 100, 10), color='red', linewidth=2) # 100 / x + 10\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "gapReol-qyYW"
   },
   "source": [
    "### 2.2.3 选择最优的房价预测模型"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "5Sjl2lRIOT3W"
   },
   "source": [
    "计算所有预测函数的损失"
   ]
  },
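  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "这里采用带 $\\frac{1}{2}$ 因子的均方误差（MSE）作为损失函数（与下方代码中的 `np.mean((y_pred - y) ** 2) / 2` 一致）：\n",
    "\n",
    "$$L(w, b) = \\frac{1}{2N}\\sum_{i=1}^{N}\\big(w x_i + b - y_i\\big)^2$$\n",
    "\n",
    "其中 $N$ 为样本数，$x_i$ 为第 $i$ 条记录的平均房间数，$y_i$ 为对应的房价。"
   ]
  },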
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "JKwTr3HTO_ZG"
   },
   "outputs": [],
   "source": [
    "w_min, w_max = -200, 200 # Weight range\n",
    "b_min, b_max = -200, 200 # Bias range\n",
    "param_num = 800 # Number of values per parameter\n",
    "\n",
    "w_list = np.linspace(w_min, w_max, param_num) # Create a sequence of w\n",
    "b_list = np.linspace(b_min, b_max, param_num) # Create a sequence of b\n",
    "\n",
    "w_grid, b_grid = np.meshgrid(w_list, b_list) # Create a grid of w and b\n",
    "loss_grid = np.zeros((param_num, param_num)) # Create a grid of loss\n",
    "\n",
    "for i in range(param_num):\n",
    "    for j in range(param_num):\n",
    "        # Compute loss\n",
    "        y_pred = linear_fun(x, w_grid[i, j], b_grid[i, j])\n",
    "        loss_temp = np.mean((y_pred - y) ** 2) / 2\n",
    "        loss_grid[i, j] = loss_temp / 10 ** 5 # Scale loss for display"
   ]
  },
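  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "上面的双重循环在 param_num 较大时较慢。下面给出等价的向量化写法（示意代码：为了能独立运行，使用假设的合成数据 x_demo、y_demo，实际使用时可换成上文的 x、y）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "x_demo = rng.uniform(4, 9, 50) # 假设的平均房间数\n",
    "y_demo = 9.0 * x_demo - 35.0 + rng.normal(0, 2, 50) # 假设的房价\n",
    "\n",
    "wv = np.linspace(-200, 200, 101) # 较小的网格便于演示\n",
    "bv = np.linspace(-200, 200, 101)\n",
    "wg, bg = np.meshgrid(wv, bv)\n",
    "\n",
    "# 广播出形状为 (101, 101, 50) 的预测值，再沿样本维取均值\n",
    "pred = wg[..., None] * x_demo + bg[..., None]\n",
    "loss_v = np.mean((pred - y_demo) ** 2, axis=-1) / 2\n",
    "\n",
    "print(loss_v.shape)"
   ]
  },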
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "oRXvzocEbGWj"
   },
   "source": [
    "用热力图展示loss、w、b之间的关系，并找出最优解"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 577
    },
    "id": "XP8-K-A7pwq_",
    "outputId": "552cb1b0-87ec-4319-a546-a22751eaa59a"
   },
   "outputs": [],
   "source": [
    "# Draw heatmap first\n",
    "def draw_heatmap(w_grid, b_grid, loss_grid):\n",
    "    '''\n",
    "    Display training process of w and b\n",
    "    '''\n",
    "    global label_size, ticklabel_size # Set global variables of font size\n",
    "\n",
    "    w_min, w_max = -200, 200\n",
    "    b_min, b_max = -200, 200\n",
    "\n",
    "    # Set figure\n",
    "    fig, ax = plt.subplots(figsize=(10,6))\n",
    "\n",
    "    # Plot the loss\n",
    "    im = ax.imshow(loss_grid, extent=[w_min, w_max, b_min, b_max], origin='lower', cmap='viridis', zorder=0)\n",
    "\n",
    "    # Set x-axis and y-axis\n",
    "    ax.set_xticks(np.linspace(w_min, w_max, 5))\n",
    "    ax.set_xticklabels(np.linspace(w_min, w_max, 5))\n",
    "    ax.set_yticks(np.linspace(b_min, b_max, 5)[1:])\n",
    "    ax.set_yticklabels(np.linspace(b_min, b_max, 5)[1:])\n",
    "    ax.set_xlabel('Weight (w)', fontsize=label_size)\n",
    "    ax.set_ylabel('Bias (b)', fontsize=label_size)\n",
    "\n",
    "    ax.set_xlim(w_min, w_max)\n",
    "    ax.set_ylim(b_min, b_max)\n",
    "\n",
    "    ax.set_position([0.15, 0.13, 0.60, 0.8]) # Set the position of the figure\n",
    "\n",
    "    # Set tick label size\n",
    "    ax.tick_params(axis='both', which='major', labelsize=ticklabel_size)\n",
    "\n",
    "    # Mark the point of lowest loss\n",
    "    min_loss_idx = np.unravel_index(np.argmin(loss_grid), loss_grid.shape)\n",
    "    min_w = w_grid[min_loss_idx]\n",
    "    min_b = b_grid[min_loss_idx]\n",
    "    ax.scatter(min_w, min_b, color='red', marker='x', linewidth=5, s=12**2)\n",
    "\n",
    "    return fig, ax\n",
    "\n",
    "fig, ax = draw_heatmap(w_grid, b_grid, loss_grid)\n",
    "\n",
    "# plt.savefig('loss_with_mark.png', dpi=300) # Make figure clearer\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "DHt56gT9lcJH"
   },
   "source": [
    "展示最优的房价预测函数"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 490
    },
    "id": "e69VMFy-lkg-",
    "outputId": "d5eea9be-5e77-4841-835a-8be83cc10402"
   },
   "outputs": [],
   "source": [
    "'''\n",
    "Drawing a figure of the best estimation function\n",
    "'''\n",
    "\n",
    "fig, ax, x_linear = draw_scatter(x, y) # Plot the scatter\n",
    "ax.plot(x_linear, linear_fun(x_linear, 9.26, -35.79), color='red', linewidth=2)\n",
    "# plt.savefig('room_price_best.png', dpi=300) # Make figure clearer\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "awWBO3bKYE9A"
   },
   "source": [
    "## 2.3 梯度下降 Gradient Descent\n",
    "\n",
    "本节包括三个内容：\n",
    "* 梯度下降的基本逻辑\n",
    "* 学习率的影响 - 练习：观察不同学习率、迭代次数条件下的训练效果\n",
    "* 自适应梯度 Adaptive Gradient"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "SdN2-UkhVuZ6"
   },
   "source": [
    "### 2.3.1 梯度下降的基本逻辑"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "m3JHFU_sW7Xu"
   },
   "source": [
    "为简化问题，先固定 b=-34.67，仅讨论 w 的变化"
   ]
  },
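  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "固定 $b$ 后，损失只是 $w$ 的函数。梯度下降每一步沿负梯度方向更新（对应后文 `learn_w` 中的 `dw = np.mean((y_pred - y) * x)`）：\n",
    "\n",
    "$$\\frac{\\partial L}{\\partial w} = \\frac{1}{N}\\sum_{i=1}^{N}\\big(w x_i + b - y_i\\big)x_i, \\qquad w \\leftarrow w - \\eta\\,\\frac{\\partial L}{\\partial w}$$\n",
    "\n",
    "其中 $\\eta$ 为学习率（learning rate）。"
   ]
  },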
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 557
    },
    "id": "fOKlTr1eWVEY",
    "outputId": "4866ea7e-b8fc-44e5-e12b-474f384018d4"
   },
   "outputs": [],
   "source": [
    "def loss_w_base():\n",
    "    '''\n",
    "    Draw the baseline of loss function when b is -34.67\n",
    "    '''\n",
    "    global label_size, ticklabel_size # Set global variables of font size\n",
    "\n",
    "    b = -34.67\n",
    "\n",
    "    w_min, w_max = -200, 200 # Weight range\n",
    "    param_num = 10000 # Number of values per parameter\n",
    "\n",
    "    w_base = np.linspace(w_min, w_max, param_num) # Create a sequence of w\n",
    "\n",
    "    loss_base = np.zeros(param_num) # Create a grid of loss\n",
    "\n",
    "    for i in range(param_num):\n",
    "        # Compute loss\n",
    "        y_pred = linear_fun(x, w_base[i], b)\n",
    "        loss_base[i] = np.mean((y_pred - y) ** 2) / 2\n",
    "\n",
    "    fig, ax = plt.subplots(figsize=(10,6))\n",
    "    ax.plot(w_base, loss_base / 10 ** 5, color='black', linewidth=2, zorder=0) # Scale loss for display\n",
    "    ax.set_xlim(-210, 210)\n",
    "    ax.set_ylim(-0.2, 10)\n",
    "    ax.set_xlabel('Weight', fontsize=label_size)\n",
    "    ax.set_ylabel('Loss ($x10^5$)', fontsize=label_size)\n",
    "    ax.tick_params(axis='both', which='major', labelsize=ticklabel_size)\n",
    "\n",
    "    return fig, ax\n",
    "\n",
    "fig, ax = loss_w_base()\n",
    "# plt.savefig('learning_rate_base.png', dpi=300) # Make figure clearer\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "KlYyzcO5aZBB"
   },
   "source": [
    "训练模型: learning rate = 0.00001"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 557
    },
    "id": "xgg9pzNlafIZ",
    "outputId": "28de4e33-9029-452e-bf57-e5470f8023b2"
   },
   "outputs": [],
   "source": [
    "def learn_w(x, y, lr=0.00001, max_epoch=100, batch_size=4, adagrad=None):\n",
    "    '''\n",
    "    Learning weight w with fixed bias b\n",
    "    w starts with -200\n",
    "\n",
    "    Return:\n",
    "    w_list: array of w\n",
    "    loss_list: array of loss\n",
    "    '''\n",
    "    w = -200 # Set initial value of w\n",
    "    b = -34.67 # Fixed bias\n",
    "\n",
    "    w_list = [w] # Create a list to store w\n",
    "    loss_list = [] # Create a list to store loss\n",
    "\n",
    "    # Training\n",
    "    lr0 = lr # Keep the initial learning rate for decay\n",
    "    for ie in range(max_epoch):\n",
    "        # Shuffle x\n",
    "        idx = np.random.permutation(x.shape[0])\n",
    "\n",
    "        # Time-based decay from the initial learning rate\n",
    "        if adagrad == 'temporal':\n",
    "            lr = lr0 / np.sqrt(ie+1)\n",
    "\n",
    "        # Split indices into batches\n",
    "        dw = 0\n",
    "        for ib in range(0, x.shape[0], batch_size):\n",
    "            batch_idx = idx[ib:ib+batch_size]\n",
    "            x_batch = x[batch_idx]\n",
    "            y_batch = y[batch_idx]\n",
    "\n",
    "            y_batch_pred = linear_fun(x_batch, w, b)\n",
    "\n",
    "            dw = dw + np.mean((y_batch_pred - y_batch) * x_batch) # Compute partial derivative of w\n",
    "\n",
    "        w -= lr * dw # Update w\n",
    "\n",
    "        loss_list.append(np.mean((linear_fun(x, w, b) - y) ** 2) / 2) # Record full-data loss of this epoch\n",
    "        w_list.append(w)\n",
    "\n",
    "    # Final loss\n",
    "    y_pred = linear_fun(x, w, b)\n",
    "    loss_list.append(np.mean((y - y_pred) ** 2) / 2) # Compute and record loss\n",
    "\n",
    "    w_list = np.array(w_list)\n",
    "    loss_list = np.array(loss_list) / 10 ** 5\n",
    "\n",
    "    return w_list, loss_list\n",
    "\n",
    "def add_loss_w(fig, ax, w_list, loss_list):\n",
    "    global label_size, ticklabel_size # Set global variables of font size\n",
    "\n",
    "    # Plot training losses (loss_list is already scaled by 10^5 in learn_w)\n",
    "    ax.plot(w_list, loss_list, color='blue', marker='.', markersize=10, linewidth=2, zorder=1)\n",
    "\n",
    "    # Mark final place\n",
    "    ax.scatter(w_list[-1], loss_list[-1], s=15**2, marker='*', facecolor='white', edgecolor='blue', linewidth=2, zorder=2)\n",
    "\n",
    "    return fig, ax\n",
    "\n",
    "# Display training result of lr = 0.00001\n",
    "lr = 0.00001 # Set learning rate\n",
    "\n",
    "w_list, loss_list = learn_w(x, y, lr) # Training model\n",
    "fig, ax = loss_w_base() # Draw base losses\n",
    "\n",
    "ax.plot(w_list, loss_list, color='blue', marker='.', markersize=10, linewidth=2, zorder=1) # Plot training losses\n",
    "ax.scatter(w_list[-1], loss_list[-1], s=15**2, marker='*', facecolor='white', edgecolor='blue', linewidth=2, zorder=2)\n",
    "\n",
    "# plt.savefig('learning_rate_D00001.png', dpi=300) # Make figure clearer\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Cv5RSQolBA-V"
   },
   "source": [
    "以较小的学习率训练模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 557
    },
    "id": "CqaJz3Xwe_3e",
    "outputId": "1e765f41-c0b9-46f3-d4e5-10e35f07dc5d"
   },
   "outputs": [],
   "source": [
    "lr = 0.000001 # Set learning rate\n",
    "\n",
    "w_list, loss_list = learn_w(x, y, lr) # Training model\n",
    "fig, ax = loss_w_base() # Draw base losses\n",
    "ax.plot(w_list, loss_list, color='blue', marker='.', markersize=10, linewidth=2, zorder=1) # Plot training losses\n",
    "ax.scatter(w_list[-1], loss_list[-1], s=15**2, marker='*', facecolor='white', edgecolor='blue', linewidth=2, zorder=2)\n",
    "\n",
    "# plt.savefig('learning_rate_D000001.png', dpi=300) # Make figure clearer\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "Hq3KgTjbEsDh"
   },
   "source": [
    "以更小的学习率训练模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 557
    },
    "id": "QDRa46syfWdx",
    "outputId": "646a24c1-949c-436a-d880-23c7e0d5a914"
   },
   "outputs": [],
   "source": [
    "lr = 0.0000001 # Set learning rate\n",
    "\n",
    "w_list, loss_list = learn_w(x, y, lr) # Training model\n",
    "fig, ax = loss_w_base() # Draw base losses\n",
    "ax.plot(w_list, loss_list, color='blue', marker='.', markersize=10, linewidth=2, zorder=1) # Plot training losses\n",
    "ax.scatter(w_list[-1], loss_list[-1], s=15**2, marker='*', facecolor='white', edgecolor='blue', linewidth=2, zorder=2)\n",
    "\n",
    "# plt.savefig('learning_rate_D0000001.png', dpi=300) # Make figure clearer\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "AV-I7jn8BwLb"
   },
   "source": [
    "以较大的学习率训练模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 557
    },
    "id": "Y04ll7ldfgd7",
    "outputId": "b06b3505-86c0-45b3-9572-192055498f52"
   },
   "outputs": [],
   "source": [
    "lr = 0.0001 # Set learning rate\n",
    "\n",
    "w_list, loss_list = learn_w(x, y, lr) # Training model\n",
    "fig, ax = loss_w_base() # Draw base losses\n",
    "ax.plot(w_list, loss_list, color='blue', marker='.', markersize=10, linewidth=2, zorder=1) # Plot training losses\n",
    "ax.scatter(w_list[-1], loss_list[-1], s=15**2, marker='*', facecolor='white', edgecolor='blue', linewidth=2, zorder=2)\n",
    "\n",
    "# plt.savefig('learning_rate_D0001.png', dpi=300) # Make figure clearer\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "X2jLjKQSD2n6"
   },
   "source": [
    "以更大的学习率训练模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 557
    },
    "id": "WFI6PR2KhX_j",
    "outputId": "398bf0ee-4abb-4520-bb46-6fe292762484"
   },
   "outputs": [],
   "source": [
    "lr = 0.0003 # Set learning rate\n",
    "\n",
    "w_list, loss_list = learn_w(x, y, lr) # Training model\n",
    "fig, ax = loss_w_base() # Draw base losses\n",
    "ax.plot(w_list, loss_list, color='blue', marker='.', markersize=10, linewidth=2, zorder=1) # Plot training losses\n",
    "ax.scatter(w_list[-1], loss_list[-1], s=15**2, marker='*', facecolor='white', edgecolor='blue', linewidth=2, zorder=2)\n",
    "\n",
    "# plt.savefig('learning_rate_D0003.png', dpi=300) # Make figure clearer\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "-8ibwsoxFufb"
   },
   "source": [
    "以再大一些的学习率训练模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 574
    },
    "id": "AiDOzP3OhuUd",
    "outputId": "7823cf09-6901-44cd-ffa6-e3ade7ff48ed"
   },
   "outputs": [],
   "source": [
    "lr = 0.001 # Set learning rate\n",
    "\n",
    "w_list, loss_list = learn_w(x, y, lr) # Training model\n",
    "fig, ax = loss_w_base() # Draw base losses\n",
    "fig, ax = add_loss_w(fig, ax, w_list, loss_list) # Add training losses\n",
    "\n",
    "# plt.savefig('learning_rate_D001.png', dpi=300) # Make figure clearer\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "SrwnYsyi-bAL"
   },
   "source": [
    "### 2.3.2 练习：观察不同学习率和迭代次数条件下的结果"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "id": "RBQl8QIIEIsK"
   },
   "outputs": [],
   "source": [
    "def compute_loss_and_derivative(x, y, w, b):\n",
    "    '''\n",
    "    Compute the loss and its partial derivatives for the linear model.\n",
    "\n",
    "    Return loss, dw, db:\n",
    "    loss: loss of the prediction\n",
    "    dw: partial derivative of w\n",
    "    db: partial derivative of b\n",
    "    '''\n",
    "    pass\n",
    "\n",
    "def init_param():\n",
    "    '''\n",
    "    Initialize the model parameters.\n",
    "\n",
    "    Return w, b:\n",
    "    w: weight\n",
    "    b: bias\n",
    "    '''\n",
    "    pass\n",
    "\n",
    "def train_linear_fun(x, y, lr, num_iter):\n",
    "    '''\n",
    "    Train the linear prediction function with gradient descent.\n",
    "\n",
    "    Return training log including losses and parameters:\n",
    "    loss_list: list of losses\n",
    "    param_list: list of parameters\n",
    "    '''\n",
    "    pass\n",
    "\n",
    "# Training models in different conditions"
   ]
  },
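  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "下面给出练习的一种参考实现（仅为示意，并非唯一写法；函数名加了 _demo 后缀以免覆盖上面的练习函数，并用假设的合成数据 y = 2x + 1 验证损失确实随迭代下降）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "def compute_loss_and_derivative_demo(x, y, w, b):\n",
    "    '''Compute loss and the partial derivatives of w and b'''\n",
    "    err = w * x + b - y # Prediction error\n",
    "    loss = np.mean(err ** 2) / 2\n",
    "    dw = np.mean(err * x) # Partial derivative of w\n",
    "    db = np.mean(err) # Partial derivative of b\n",
    "    return loss, dw, db\n",
    "\n",
    "def init_param_demo():\n",
    "    '''Initialize w and b with zeros'''\n",
    "    return 0.0, 0.0\n",
    "\n",
    "def train_linear_fun_demo(x, y, lr, num_iter):\n",
    "    '''Batch gradient descent; record losses and parameters'''\n",
    "    w, b = init_param_demo()\n",
    "    loss_list, param_list = [], []\n",
    "    for _ in range(num_iter):\n",
    "        loss, dw, db = compute_loss_and_derivative_demo(x, y, w, b)\n",
    "        w -= lr * dw\n",
    "        b -= lr * db\n",
    "        loss_list.append(loss)\n",
    "        param_list.append((w, b))\n",
    "    return loss_list, param_list\n",
    "\n",
    "x_demo = np.linspace(0, 1, 50)\n",
    "y_demo = 2.0 * x_demo + 1.0 # 假设的线性数据\n",
    "loss_list_demo, param_list_demo = train_linear_fun_demo(x_demo, y_demo, lr=0.5, num_iter=500)\n",
    "print(loss_list_demo[0], loss_list_demo[-1])"
   ]
  },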
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "cHfqmmQiTJu8"
   },
   "source": [
    "### 2.3.3 自适应梯度 Adaptive Gradient"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "HjNLheOiQvQk"
   },
   "source": [
    "\n",
    "同时对w和b进行更新"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 577
    },
    "id": "hbyLDAZTpR-v",
    "outputId": "aaa929b6-d3e0-4a9b-cd37-c989e5d0400c"
   },
   "outputs": [],
   "source": [
    "def learn_w_b(x, y, lr=0.0001, max_epoch=100000, batch_size=4, adagrad=False):\n",
    "    '''\n",
    "    Learning weight w and bias b\n",
    "    (w, b) starts with (-200, -200)\n",
    "\n",
    "    Return:\n",
    "    w_list: array of w\n",
    "    b_list: array of b\n",
    "    loss_final: final loss\n",
    "    '''\n",
    "    # Initialize parameters\n",
    "    w, b = -200, -200\n",
    "\n",
    "    if adagrad:\n",
    "        lr_w = 0.0 # Accumulator for squared gradients of w\n",
    "        lr_b = 0.0 # Accumulator for squared gradients of b\n",
    "\n",
    "    # Model training process\n",
    "    w_list = [w] # Create a list to store w\n",
    "    b_list = [b] # Create a list to store b\n",
    "    for ie in range(max_epoch):\n",
    "        # Shuffle x\n",
    "        idx = np.random.permutation(x.shape[0])\n",
    "\n",
    "        # Split indices into batches\n",
    "        dw, db = 0, 0\n",
    "        for ib in range(0, x.shape[0], batch_size):\n",
    "            batch_idx = idx[ib:ib+batch_size]\n",
    "            x_batch = x[batch_idx]\n",
    "            y_batch = y[batch_idx]\n",
    "\n",
    "            y_batch_pred = linear_fun(x_batch, w, b)\n",
    "\n",
    "            dw = dw + np.mean((y_batch_pred - y_batch) * x_batch) # Compute partial derivative of w\n",
    "            db = db + np.mean((y_batch_pred - y_batch) * 1.0) # Compute partial derivative of b\n",
    "\n",
    "        if adagrad:\n",
    "            lr_w = lr_w + dw ** 2 # Accumulate squared gradient of w\n",
    "            lr_b = lr_b + db ** 2 # Accumulate squared gradient of b\n",
    "\n",
    "            w -= lr / np.sqrt(lr_w) * dw # Update w\n",
    "            b -= lr / np.sqrt(lr_b) * db # Update b\n",
    "\n",
    "        else:\n",
    "            w -= lr * dw # Update w\n",
    "            b -= lr * db # Update b\n",
    "\n",
    "        w_list.append(w)\n",
    "        b_list.append(b)\n",
    "\n",
    "    # Change list into array for plotting\n",
    "    w_list = np.array(w_list)\n",
    "    b_list = np.array(b_list)\n",
    "\n",
    "    # Final loss\n",
    "    y_pred = linear_fun(x, w, b)\n",
    "    loss_final = np.mean((y - y_pred) ** 2) / 2 # Compute and record loss\n",
    "\n",
    "    return w_list, b_list, loss_final\n",
    "\n",
    "w_list, b_list, loss_final = learn_w_b(x, y, lr=0.00001, max_epoch=10000)\n",
    "\n",
    "fig, ax = draw_heatmap(w_grid, b_grid, loss_grid) # Draw base image\n",
    "ax.plot(w_list, b_list, color='yellow', marker='.', linewidth=2, markersize=4, zorder=1) # Plot training losses\n",
    "ax.scatter(w_list[0], b_list[0], color='yellow', facecolor='black', marker='o', linewidth=2, s=12**2, zorder=2) # Add start place\n",
    "\n",
    "# plt.savefig('f2_learning_rate_D00001.png', dpi=300) # Make figure clearer\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 577
    },
    "id": "8RPthtLpZYRR",
    "outputId": "382b6405-8b74-4907-9c55-71246a4ebf9d"
   },
   "outputs": [],
   "source": [
    "w_list, b_list, loss_final = learn_w_b(x, y, lr=0.0001, max_epoch=10000)\n",
    "\n",
    "fig, ax = draw_heatmap(w_grid, b_grid, loss_grid) # Draw base image\n",
    "ax.plot(w_list, b_list, color='yellow', marker='.', linewidth=2, markersize=4, zorder=1) # Plot training losses\n",
    "ax.scatter(w_list[0], b_list[0], color='yellow', facecolor='black', marker='o', linewidth=2, s=12**2, zorder=2) # Add start place\n",
    "\n",
    "# plt.savefig('f2_learning_rate_D0001.png', dpi=300) # Make figure clearer\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 646
    },
    "id": "XbEleb_sKWKW",
    "outputId": "f6f80c4c-3f1b-4310-f690-37339d160746"
   },
   "outputs": [],
   "source": [
    "w_list, b_list, loss_final = learn_w_b(x, y, lr=0.001, max_epoch=10000)\n",
    "\n",
    "fig, ax = draw_heatmap(w_grid, b_grid, loss_grid) # Draw base image\n",
    "ax.plot(w_list, b_list, color='yellow', marker='.', linewidth=2, markersize=4, zorder=1) # Plot training losses\n",
    "ax.scatter(w_list[0], b_list[0], color='yellow', facecolor='black', marker='o', linewidth=2, s=12**2, zorder=2) # Add start place\n",
    "\n",
    "# plt.savefig('f2_learning_rate_D001.png', dpi=300) # Make figure clearer\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "NnchbXjSC9Yb"
   },
   "source": [
    "使用自适应梯度（Adagrad），让每个参数的学习率随梯度累积自适应减小"
   ]
  },
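  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Adagrad 对每个参数分别累积历史梯度的平方，用其平方根缩放学习率（对应 `learn_w_b` 中的 `lr_w = lr_w + dw ** 2` 与 `w -= lr / np.sqrt(lr_w) * dw`）：\n",
    "\n",
    "$$w_{t+1} = w_t - \\frac{\\eta}{\\sqrt{\\sum_{\\tau=1}^{t} g_\\tau^2}}\\, g_t$$\n",
    "\n",
    "其中 $g_t$ 为第 $t$ 步对 $w$ 的梯度；$b$ 的更新同理。累积的梯度越多，步长越小。"
   ]
  },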
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 577
    },
    "id": "Y5Pd0mYADDW1",
    "outputId": "359f8e73-c2fd-49ec-cae5-24e9b387a7e2"
   },
   "outputs": [],
   "source": [
    "w_list, b_list, loss_final = learn_w_b(x, y, lr=100, max_epoch=10000, adagrad=True)\n",
    "\n",
    "fig, ax = draw_heatmap(w_grid, b_grid, loss_grid) # Draw base image\n",
    "ax.plot(w_list, b_list, color='yellow', marker='.', linewidth=2, markersize=4, zorder=1) # Plot training losses\n",
    "ax.scatter(w_list[0], b_list[0], color='yellow', facecolor='black', marker='o', linewidth=2, s=12**2, zorder=2) # Add start place\n",
    "\n",
    "# plt.savefig('f2_Adagrad_D001.png', dpi=300) # Make figure clearer\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "1Wn2i_r_Y8zn"
   },
   "source": [
    "## 2.4 模型泛化 Generalization"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "VLvq5KoYUTlU"
   },
   "source": [
    "构建训练集与测试集，观察模型复杂度与预测结果的关系"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 963
    },
    "id": "JqLWJHtoUAS_",
    "outputId": "6272a8c5-238c-45b6-8260-19cef23387b5"
   },
   "outputs": [],
   "source": [
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "def normalization(x, norm_type='none'):\n",
    "    '''\n",
    "    Normalize x with 'min-max', 'z-score', or 'none'; return x_norm and the parameters a, b\n",
    "    '''\n",
    "    if norm_type == 'min-max':\n",
    "        a, b = np.min(x), np.max(x)\n",
    "        x_norm = (x - a) / (b - a)\n",
    "\n",
    "    elif norm_type == 'z-score':\n",
    "        a, b = np.mean(x), np.std(x)\n",
    "        x_norm = (x - a) / b\n",
    "    \n",
    "    elif norm_type == 'none':\n",
    "        a, b = 0, 0\n",
    "        x_norm = x\n",
    "\n",
    "    return x_norm, a, b\n",
    "\n",
    "def norm_recover(x_norm, a, b, norm_type='none'):\n",
    "    '''\n",
    "    Recover the original values from normalized data (inverse of normalization)\n",
    "    '''\n",
    "    if norm_type == 'min-max':\n",
    "        x = x_norm * (b - a) + a\n",
    "\n",
    "    elif norm_type == 'z-score':\n",
    "        x = x_norm * b + a\n",
    "\n",
    "    elif norm_type == 'none':\n",
    "        x = x_norm\n",
    "\n",
    "    return x\n",
    "\n",
    "def draw_uniform_scatter(x, y, xlim, ylim):\n",
    "    '''\n",
    "    This is a helper function that draws a scatter plot of room number vs. house price.\n",
    "    x: room number\n",
    "    y: house price\n",
    "    '''\n",
    "    global label_size, ticklabel_size # Set global variables of font size\n",
    "\n",
    "    fig, ax = plt.subplots() # Create a figure and a set of subplots.\n",
    "    ax.scatter(x, y) # Plot the data\n",
    "    ax.tick_params(axis='both', which='major', labelsize=ticklabel_size) # Set tick label size\n",
    "    ax.set_xlabel('平均房间数', fontsize=label_size) # Label the x-axis\n",
    "    ax.set_ylabel('房价中位数（千美元）', fontsize=label_size) # Label the y-axis\n",
    "\n",
    "    ax.set_xlim(xlim[0], xlim[1]) # Set the x-axis limits\n",
    "    ax.set_ylim(ylim[0], ylim[1]) # Set the y-axis limits\n",
    "\n",
    "    ax.set_position([0.12, 0.14, 0.85, 0.83]) # Set the position of the figure\n",
    "\n",
    "    x_linear = np.linspace(xlim[0], xlim[1], 100) # Create a sequence of x to draw prediction function\n",
    "\n",
    "    return fig, ax, x_linear\n",
    "    \n",
    "norm_type = 'z-score'\n",
    "\n",
    "# Normalize x\n",
    "x_norm, x_a, x_b = normalization(x, norm_type=norm_type)\n",
    "\n",
    "# Split data into training and testing sets\n",
    "X_train, X_test, Y_train, Y_test = train_test_split(x_norm, y, test_size=0.2, random_state=42)\n",
    "\n",
    "num_samples = 10\n",
    "np.random.seed(42) # Set the random seed for reproducibility\n",
    "random_indices = np.random.choice(X_train.shape[0], num_samples, replace=False)\n",
    "x_train = X_train[random_indices]\n",
    "y_train = Y_train[random_indices]\n",
    "\n",
    "random_indices = np.random.choice(X_test.shape[0], num_samples, replace=False)\n",
    "x_test = X_test[random_indices]\n",
    "y_test = Y_test[random_indices]\n",
    "\n",
    "x_test_scatter = norm_recover(x_test, x_a, x_b, norm_type=norm_type)\n",
    "\n",
    "# Plot the training data\n",
    "x_lim_offset = 0.1\n",
    "x_train_scatter = norm_recover(x_train, x_a, x_b, norm_type=norm_type)\n",
    "x_scatter_min = np.min([np.min(x_train_scatter), np.min(x_test_scatter)]) - x_lim_offset\n",
    "x_scatter_max = np.max([np.max(x_train_scatter), np.max(x_test_scatter)]) + x_lim_offset\n",
    "x_lim = [x_scatter_min, x_scatter_max]\n",
    "\n",
    "y_lim_offset = 1\n",
    "y_scatter_min = np.min([np.min(y_train), np.min(y_test)]) - y_lim_offset\n",
    "y_scatter_max = np.max([np.max(y_train), np.max(y_test)]) + y_lim_offset\n",
    "y_lim = [y_scatter_min, y_scatter_max]\n",
    "\n",
    "fig, ax, _ = draw_uniform_scatter(x_train_scatter, y_train, x_lim, y_lim)\n",
    "# plt.savefig(f'scatter_train_data_{num_samples}.png', dpi=300) # Make figure clearer\n",
    "plt.show()\n",
    "\n",
    "# Plot the testing data\n",
    "fig, ax, _ = draw_uniform_scatter(x_test_scatter, y_test, x_lim, y_lim)\n",
    "# plt.savefig(f'scatter_test_data_{num_samples}.png', dpi=300) # Make figure clearer\n",
    "plt.show()"
   ]
  },
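  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "一个简单的自检示意（独立小例子，等价于上面 normalization / norm_recover 的 z-score 分支）：标准化后的数据均值约为 0、标准差约为 1，逆变换应能还原原始数据。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "x = np.array([3.0, 5.0, 6.0, 8.0])\n",
    "\n",
    "# z-score: subtract the mean, divide by the standard deviation\n",
    "a, b = np.mean(x), np.std(x)\n",
    "x_norm = (x - a) / b\n",
    "\n",
    "# Inverse transform recovers the original data\n",
    "x_back = x_norm * b + a\n",
    "print(np.allclose(x_back, x)) # True"
   ]
  },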
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/"
    },
    "id": "mTZNhm5oYgec",
    "outputId": "dd240eb8-8ae8-46f4-8d27-abe77ac38090"
   },
   "outputs": [],
   "source": [
    "def train_poly_mdl(x_tr, y_tr, x_te, y_te, lr=100, max_epoch=1000000, batch_size=1, p=0, regulization_method='none', regulization_lambda=0, disp=False, stop=False):\n",
    "    '''\n",
    "    Train a polynomial regression model with gradient descent (Adagrad)\n",
    "    p is the order of the polynomial (p=0 uses the input features as-is)\n",
    "\n",
    "    Return:\n",
    "    w: learned weight vector\n",
    "    b: learned bias\n",
    "    logs: dict of per-epoch losses and errors\n",
    "    '''\n",
    "\n",
    "    # Train logs\n",
    "    logs = {}\n",
    "    logs['epoch_itr'] = []\n",
    "    logs['loss_train'] = []\n",
    "    logs['loss_test'] = []\n",
    "    logs['loss_batch'] = []\n",
    "    logs['err_train'] = []\n",
    "    logs['err_test'] = []\n",
    "\n",
    "    # Initialize w, b, and exponential x\n",
    "    b = 0.0\n",
    "    if p > 0:\n",
    "        w = np.random.randn(p)\n",
    "        \n",
    "        x_tr = [x_tr ** (i+1) for i in range(p)]\n",
    "        x_tr = np.array(x_tr).T\n",
    "        \n",
    "        x_te = [x_te ** (i+1) for i in range(p)]\n",
    "        x_te = np.array(x_te).T\n",
    "        \n",
    "    else:\n",
    "        w = np.random.randn(x_tr.shape[1])\n",
    "\n",
    "    # Initialize learning rate\n",
    "    lr_w = np.zeros(w.size)\n",
    "    lr_b = 0.0\n",
    "\n",
    "    # For early stop\n",
    "    min_err = np.inf\n",
    "    err_no_improvement = 0\n",
    "    early_stop = 100\n",
    "\n",
    "    # Model training process\n",
    "    itr_num = 0\n",
    "    for ie in range(max_epoch):\n",
    "\n",
    "        idx = np.random.permutation(x_tr.shape[0]) # Shuffle x\n",
    "\n",
    "        dw = np.zeros(w.size)\n",
    "        db = 0.0\n",
    "\n",
    "        for ib in range(0, x_tr.shape[0], batch_size):\n",
    "            batch_idx = idx[ib:ib+batch_size]\n",
    "            x_batch = x_tr[batch_idx, :]\n",
    "            y_batch = y_tr[batch_idx]\n",
    "\n",
    "            y_batch_pred = np.matmul(x_batch, w) + b\n",
    "            y_batch_diff = y_batch_pred - y_batch\n",
    "            \n",
    "            # Regularization in loss and gradient\n",
    "            if regulization_method == 'L1':\n",
    "                loss_reg = regulization_lambda * np.sum(np.abs(w))\n",
    "                grad_reg = regulization_lambda * np.sign(w)\n",
    "            elif regulization_method == 'L2':\n",
    "                loss_reg = regulization_lambda * np.sum(w ** 2) / 2\n",
    "                grad_reg = regulization_lambda * w\n",
    "            else:\n",
    "                loss_reg = 0.0\n",
    "                grad_reg = np.zeros(w.size)\n",
    "\n",
    "            logs['loss_batch'].append(np.mean(y_batch_diff ** 2) / 2 + loss_reg)\n",
    "\n",
    "            # Compute partial derivative of w and b\n",
    "            for i in range(w.size):\n",
    "                dw[i] = dw[i] + np.mean(y_batch_diff * x_batch[:, i]) + grad_reg[i]\n",
    "            db = db + np.mean(y_batch_diff * 1.0) # Compute partial derivative of b\n",
    "            \n",
    "            itr_num += 1\n",
    "\n",
    "        # Regularization in loss and gradient\n",
    "        if regulization_method == 'L1':\n",
    "            loss_reg = regulization_lambda * np.sum(np.abs(w))\n",
    "            \n",
    "        elif regulization_method == 'L2':\n",
    "            loss_reg = regulization_lambda * np.sum(w ** 2) / 2\n",
    "            \n",
    "        else:\n",
    "            loss_reg = 0\n",
    "        \n",
    "        # Compute epoch loss and err\n",
    "        y_tr_pred = np.matmul(x_tr, w) + b\n",
    "        y_tr_diff = y_tr_pred - y_tr\n",
    "        \n",
    "        loss_train = np.mean(y_tr_diff ** 2) / 2 + loss_reg\n",
    "        logs['loss_train'].append(loss_train)\n",
    "        \n",
    "        err_train = np.mean(np.abs(y_tr_diff))\n",
    "        logs['err_train'].append(err_train)\n",
    "                \n",
    "        y_te_pred = np.matmul(x_te, w) + b\n",
    "        y_te_diff = y_te_pred - y_te\n",
    "        \n",
    "        loss_test = np.mean(y_te_diff ** 2) / 2 + loss_reg\n",
    "        logs['loss_test'].append(loss_test)\n",
    "        \n",
    "        err_test = np.mean(np.abs(y_te_diff))\n",
    "        logs['err_test'].append(err_test)\n",
    "        \n",
    "        # Display training process\n",
    "        if (ie + 1) % 10000 == 0 and disp == True:\n",
    "            print(f'Epoch {ie+1}: train loss ({loss_train:.8f}), test loss ({loss_test:.8f}), train error ({err_train:.4f}), test error ({err_test:.4f})')\n",
    "        \n",
    "        # Early stop\n",
    "        if stop == True:\n",
    "            if err_test < min_err:\n",
    "                min_err = err_test\n",
    "                err_no_improvement = 0\n",
    "            else:\n",
    "                err_no_improvement += 1\n",
    "\n",
    "            if err_no_improvement >= early_stop:\n",
    "                break\n",
    "\n",
    "        # Adagrad: accumulate squared gradients to scale the learning rate\n",
    "        lr_w = lr_w + dw ** 2\n",
    "        lr_b = lr_b + db ** 2\n",
    "\n",
    "        w -= lr / (np.sqrt(lr_w) + 1e-8) * dw # Update w\n",
    "        b -= lr / (np.sqrt(lr_b) + 1e-8) * db # Update b\n",
    "\n",
    "    # Regularization in loss and gradient\n",
    "    if regulization_method == 'L1':\n",
    "        loss_reg = regulization_lambda * np.sum(np.abs(w))\n",
    "        \n",
    "    elif regulization_method == 'L2':\n",
    "        loss_reg = regulization_lambda * np.sum(w ** 2) / 2\n",
    "        \n",
    "    else:\n",
    "        loss_reg = 0\n",
    "\n",
    "    # Compute epoch loss and err\n",
    "    y_tr_pred = np.matmul(x_tr, w) + b\n",
    "    y_tr_diff = y_tr_pred - y_tr\n",
    "\n",
    "    loss_train = np.mean(y_tr_diff ** 2) / 2 + loss_reg\n",
    "    logs['loss_train'].append(loss_train)\n",
    "    \n",
    "    err_train = np.mean(np.abs(y_tr_diff))\n",
    "    logs['err_train'].append(err_train)\n",
    "            \n",
    "    y_te_pred = np.matmul(x_te, w) + b\n",
    "    y_te_diff = y_te_pred - y_te\n",
    "\n",
    "    loss_test = np.mean(y_te_diff ** 2) / 2 + loss_reg\n",
    "    logs['loss_test'].append(loss_test)\n",
    "    \n",
    "    err_test = np.mean(np.abs(y_te_diff))\n",
    "    logs['err_test'].append(err_test)\n",
    "    \n",
    "    # Change logs to array\n",
    "    logs['loss_train'] = np.array(logs['loss_train'])\n",
    "    logs['err_train'] = np.array(logs['err_train'])\n",
    "    logs['loss_test'] = np.array(logs['loss_test'])\n",
    "    logs['err_test'] = np.array(logs['err_test'])\n",
    "\n",
    "    return w, b, logs"
   ]
  },
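  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "train_poly_mdl 中把一维输入升为 p 阶多项式特征，相当于构造一个设计矩阵：第 i 列为 x 的 i 次幂（i 从 1 到 p）。下面是一个独立的小示意："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "x = np.array([1.0, 2.0, 3.0])\n",
    "p = 3\n",
    "\n",
    "# Same construction as in train_poly_mdl: columns are x**1, x**2, ..., x**p\n",
    "X = np.array([x ** (i + 1) for i in range(p)]).T\n",
    "print(X.shape) # (3, 3)\n",
    "print(X)"
   ]
  },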
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "p_list = range(1, 11, 1) # Set the order of polynomial\n",
    "param_list = []\n",
    "err_list = []\n",
    "\n",
    "for p in p_list:\n",
    "    # Train model\n",
    "    w, b, logs = train_poly_mdl(x_train, y_train, x_test, y_test, p=p, lr=100, max_epoch=10000, batch_size=num_samples)\n",
    "    param_list.append((w, b))\n",
    "\n",
    "    err_train = logs['err_train'][-1]\n",
    "    err_test = logs['err_test'][-1]\n",
    "    err_list.append((err_train, err_test))\n",
    "\n",
    "    print(f'p={p}\\t Training avg_err: {err_train}\\t Testing avg_err: {err_test}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 560
    },
    "id": "6Kx6GQXtLSfi",
    "outputId": "b54a3f70-e7a2-4509-f31f-9cb5a2f207c1"
   },
   "outputs": [],
   "source": [
    "err_list = np.array(err_list)\n",
    "\n",
    "fig, ax = plt.subplots(figsize=(10,6))\n",
    "plt.plot(p_list, err_list[:, 0], color='blue', marker='.', markersize=10, linewidth=2, zorder=1, label='训练集') # Plot training errors\n",
    "plt.plot(p_list, err_list[:, 1], color='red', marker='.', markersize=10, linewidth=2, zorder=1, label='测试集') # Plot testing errors\n",
    "plt.legend(ncol=1, fontsize=label_size)\n",
    "\n",
    "ax.set_xticks(p_list)\n",
    "ax.set_ylim(0, 25)\n",
    "\n",
    "ax.set_xlabel('预测模型复杂度', fontsize=label_size)\n",
    "ax.set_ylabel('房价预测误差（千美元）', fontsize=label_size)\n",
    "ax.tick_params(axis='both', which='major', labelsize=ticklabel_size)\n",
    "\n",
    "# plt.savefig('complexity_vs_error.png', dpi=300) # Make figure clearer\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "colab": {
     "base_uri": "https://localhost:8080/",
     "height": 1000
    },
    "id": "lsjl0bkeIirV",
    "outputId": "6fe1c4cb-ef99-4aeb-f7aa-279412cfb03c"
   },
   "outputs": [],
   "source": [
    "x_min = np.min([np.min(x_train), np.min(x_test)]) - 1\n",
    "x_max = np.max([np.max(x_train), np.max(x_test)]) + 1\n",
    "\n",
    "x_linear = np.linspace(x_min, x_max, 10000) # Create a sequence of x to draw prediction function\n",
    "\n",
    "x_linear_scatter = norm_recover(x_linear, x_a, x_b, norm_type=norm_type)\n",
    "\n",
    "for i in range(len(p_list)):\n",
    "    p = p_list[i]\n",
    "    w, b = param_list[i]\n",
    "\n",
    "    # Draw prediction function\n",
    "    fig, ax, _ = draw_uniform_scatter(x_train_scatter, y_train, x_lim, y_lim) # Plot the scatter\n",
    "\n",
    "    x_linear_exp = [x_linear ** (i+1) for i in range(p)]\n",
    "    x_linear_exp = np.array(x_linear_exp).T\n",
    "\n",
    "    y_linear = np.matmul(x_linear_exp, w) + b\n",
    "\n",
    "    ax.plot(x_linear_scatter, y_linear, color='red', linewidth=2)\n",
    "\n",
    "    # plt.savefig(f'polylinearfit_p{p}.png', dpi=300) # Make figure clearer\n",
    "    plt.show()\n",
    "\n",
    "    # Draw prediction function\n",
    "    fig, ax, _ = draw_uniform_scatter(x_test_scatter, y_test, x_lim, y_lim) # Plot the scatter\n",
    "\n",
    "    x_linear_exp = [x_linear ** (i+1) for i in range(p)]\n",
    "    x_linear_exp = np.array(x_linear_exp).T\n",
    "\n",
    "    y_linear = np.matmul(x_linear_exp, w) + b\n",
    "\n",
    "    ax.plot(x_linear_scatter, y_linear, color='red', linewidth=2)\n",
    "\n",
    "    # plt.savefig(f'polylinearfit_test_p{p}.png', dpi=300) # Make figure clearer\n",
    "    plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.5 过拟合的解决办法 Overcoming Overfitting"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.5.1 数据增强 Data Augmentation\n",
    "\n",
    "扩展数据集规模"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "num_samples = 200 # Expand the training set to 200 samples\n",
    "np.random.seed(42) # Set the random seed for reproducibility\n",
    "random_indices = np.random.choice(X_train.shape[0], num_samples, replace=False)\n",
    "x_train = X_train[random_indices]\n",
    "y_train = Y_train[random_indices]\n",
    "\n",
    "# Train model\n",
    "p = 10\n",
    "w, b, logs = train_poly_mdl(x_train, y_train, x_test, y_test, p=p, lr=100, max_epoch=100000, batch_size=10)\n",
    "\n",
    "err_train = logs['err_train'][-1]\n",
    "err_test = logs['err_test'][-1]\n",
    "\n",
    "print(f'p={p}\\t Training avg_err: {err_train}\\t Testing avg_err: {err_test}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "print(f'Parameter b: {b}')\n",
    "print(f'Parameters w: {w}')\n",
    "\n",
    "x_min = np.min([np.min(x_train), np.min(x_test)]) - 1\n",
    "x_max = np.max([np.max(x_train), np.max(x_test)]) + 1\n",
    "\n",
    "x_linear = np.linspace(x_min, x_max, 10000) # Create a sequence of x to draw prediction function\n",
    "\n",
    "x_linear_scatter = norm_recover(x_linear, x_a, x_b, norm_type=norm_type)\n",
    "x_train_scatter = norm_recover(x_train, x_a, x_b, norm_type=norm_type)\n",
    "\n",
    "# Draw prediction function\n",
    "fig, ax, _ = draw_uniform_scatter(x_train_scatter, y_train, x_lim, y_lim) # Plot the scatter\n",
    "# plt.savefig(f'polylinearfit_scatter_p{p}_aug_{num_samples}.png', dpi=300) # Make figure clearer\n",
    "plt.show()\n",
    "\n",
    "# Draw prediction function\n",
    "fig, ax, _ = draw_uniform_scatter(x_train_scatter, y_train, x_lim, y_lim) # Plot the scatter\n",
    "\n",
    "x_linear_exp = [x_linear ** (i+1) for i in range(p)]\n",
    "x_linear_exp = np.array(x_linear_exp).T\n",
    "\n",
    "y_linear = np.matmul(x_linear_exp, w) + b\n",
    "\n",
    "ax.plot(x_linear_scatter, y_linear, color='red', linewidth=2)\n",
    "\n",
    "# plt.savefig(f'polylinearfit_p{p}_aug_{num_samples}.png', dpi=300) # Make figure clearer\n",
    "plt.show()\n",
    "\n",
    "# Draw prediction function\n",
    "fig, ax, _ = draw_uniform_scatter(x_test_scatter, y_test, x_lim, y_lim) # Plot the scatter\n",
    "\n",
    "x_linear_exp = [x_linear ** (i+1) for i in range(p)]\n",
    "x_linear_exp = np.array(x_linear_exp).T\n",
    "\n",
    "y_linear = np.matmul(x_linear_exp, w) + b\n",
    "\n",
    "ax.plot(x_linear_scatter, y_linear, color='red', linewidth=2)\n",
    "\n",
    "# plt.savefig(f'polylinearfit_test_p{p}_aug_{num_samples}.png', dpi=300) # Make figure clearer\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "引入更多的特征"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Split the full 13-feature data into training and testing sets\n",
    "X_tr1, X_te1, Y_tr1, Y_te1 = train_test_split(data, target, test_size=0.2, random_state=42)\n",
    "\n",
    "np.random.seed(42) # Set the random seed for reproducibility\n",
    "random_indices = np.random.choice(X_tr1.shape[0], 50, replace=False)\n",
    "x_tr1 = X_tr1[random_indices, :]\n",
    "y_tr1 = Y_tr1[random_indices]\n",
    "\n",
    "random_indices = np.random.choice(X_te1.shape[0], 10, replace=False)\n",
    "x_te1 = X_te1[random_indices, :]\n",
    "y_te1 = Y_te1[random_indices]\n",
    "\n",
    "# Train model\n",
    "w, b, logs = train_poly_mdl(x_tr1, y_tr1, x_te1, y_te1, lr=100, max_epoch=10000, batch_size=10, disp=True)\n",
    "err_train = logs['err_train'][-1]\n",
    "err_test = logs['err_test'][-1]\n",
    "\n",
    "print(f'p={x_tr1.shape[1]}\\t Training avg_err: {err_train}\\t Testing avg_err: {err_test}')\n",
    "print(f'Parameter b: {b}')\n",
    "print(f'Parameters w: {w}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "fig, ax = plt.subplots(figsize=(10,6))\n",
    "\n",
    "plt.plot(logs['err_train'], color='blue', linewidth=2, zorder=1, label='Training') # Plot training errors\n",
    "plt.plot(logs['err_test'], color='red', linewidth=2, zorder=1, label='Testing') # Plot testing errors\n",
    "plt.legend(ncol=1, fontsize=label_size)\n",
    "\n",
    "ax.set_xlim(0, 500)\n",
    "ax.set_ylim(0, 50)\n",
    "\n",
    "ax.set_xlabel('Epoch Number', fontsize=label_size)\n",
    "ax.set_ylabel('Prediction Error', fontsize=label_size)\n",
    "ax.tick_params(axis='both', which='major', labelsize=ticklabel_size)\n",
    "\n",
    "# plt.savefig('Expand_ftr_error.png', dpi=300) # Make figure clearer\n",
    "plt.show()\n",
    "\n",
    "fig, ax = plt.subplots(figsize=(10,6))\n",
    "\n",
    "plt.plot(logs['loss_train'], color='blue', linewidth=2, zorder=1, label='Training') # Plot training losses\n",
    "plt.plot(logs['loss_test'], color='red', linewidth=2, zorder=1, label='Testing') # Plot testing losses\n",
    "plt.legend(ncol=1, fontsize=label_size)\n",
    "\n",
    "ax.set_xlim(0, 600)\n",
    "ax.set_ylim(0, 200)\n",
    "\n",
    "ax.set_xlabel('Epoch Number', fontsize=label_size)\n",
    "ax.set_ylabel('Loss', fontsize=label_size)\n",
    "ax.tick_params(axis='both', which='major', labelsize=ticklabel_size)\n",
    "\n",
    "# plt.savefig('Expand_ftr_loss.png', dpi=300) # Make figure clearer\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.5.2 正则化 Regularization\n",
    "\n",
    "L2 Norm"
   ]
  },
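  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "在用梯度法训练之前，可以先用岭回归的闭式解直观感受 L2 正则化的收缩效应。以下为示意代码：闭式解 w = (X^T X + lambda * I)^(-1) X^T y 并非上文的 Adagrad 训练过程，数据也是随机生成的。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "X = rng.normal(size=(50, 5))\n",
    "y = X @ np.array([3.0, -2.0, 0.5, 0.0, 1.0]) + 0.1 * rng.normal(size=50)\n",
    "\n",
    "def ridge(X, y, lam):\n",
    "    # Closed-form ridge solution: w = (X^T X + lam * I)^(-1) X^T y\n",
    "    d = X.shape[1]\n",
    "    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)\n",
    "\n",
    "for lam in [0.0, 10.0, 1000.0]:\n",
    "    w = ridge(X, y, lam)\n",
    "    print(lam, np.linalg.norm(w)) # The norm of w shrinks as lambda grows"
   ]
  },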
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lambda_list = [0, 1, 10, 100, 1000, 10000, 100000]\n",
    "\n",
    "b_list = []\n",
    "w_list = []\n",
    "err_list = []\n",
    "loss_list = []\n",
    "\n",
    "for lbd in lambda_list:\n",
    "    # Train model\n",
    "    w, b, logs = train_poly_mdl(x_tr1, y_tr1, x_te1, y_te1, lr=1, max_epoch=100000, batch_size=10, disp=True, regulization_method='L2', regulization_lambda=lbd)\n",
    "    b_list.append(b)\n",
    "    w_list.append(w)\n",
    "    \n",
    "    err_train = logs['err_train'][-1]\n",
    "    err_test = logs['err_test'][-1]\n",
    "    err_list.append([err_train, err_test])\n",
    "    \n",
    "    loss_train = logs['loss_train'][-1]\n",
    "    loss_test = logs['loss_test'][-1]\n",
    "    loss_list.append([loss_train, loss_test])\n",
    "\n",
    "    print(f'Lambda={lbd}\\t Training avg_err: {err_train}\\t Testing avg_err: {err_test}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "b_array = np.array(b_list)\n",
    "w_array = np.array(w_list)\n",
    "err_array = np.array(err_list)\n",
    "loss_array = np.array(loss_list)\n",
    "\n",
    "# Draw errors vs lambda\n",
    "fig, ax = plt.subplots(figsize=(10,6))\n",
    "\n",
    "xticks = np.arange(len(lambda_list))\n",
    "xticklabels = lambda_list\n",
    "\n",
    "plt.plot(xticks, err_array[:, 0], color='blue', marker='.', markersize=10, linewidth=2, zorder=1, label='Train') # Plot training errors\n",
    "plt.plot(xticks, err_array[:, 1], color='red', marker='.', markersize=10, linewidth=2, zorder=1, label='Test') # Plot testing errors\n",
    "plt.legend(ncol=1, fontsize=label_size)\n",
    "\n",
    "ax.set_xticks(xticks)\n",
    "ax.set_xticklabels(xticklabels)\n",
    "# ax.set_ylim(0, 25)\n",
    "\n",
    "ax.set_xlabel('Lambda', fontsize=label_size)\n",
    "ax.set_ylabel('Prediction Error', fontsize=label_size)\n",
    "ax.tick_params(axis='both', which='major', labelsize=ticklabel_size)\n",
    "\n",
    "# plt.savefig('lambda_2_vs_error.png', dpi=300) # Make figure clearer\n",
    "plt.show()\n",
    "\n",
    "# Draw weights changes\n",
    "fig, ax = plt.subplots(figsize=(10,6))\n",
    "\n",
    "xticks = np.arange(len(lambda_list))\n",
    "xticklabels = lambda_list\n",
    "\n",
    "colormap = plt.get_cmap('jet', 12) # plt.cm.get_cmap is removed in newer Matplotlib\n",
    "linestyles = ['-', '--', '-.', ':', '-', '--', '-.', ':', '-', '--', '-.', ':']\n",
    "markers = ['o', 's', '^', 'v', '*', 'x', '+', 'D', 'h', 'p', '<', '>']\n",
    "for i in range(w_array.shape[1]):\n",
    "    plt.plot(xticks, w_array[:, i], color=colormap(i), linestyle=linestyles[i], marker=markers[i], markersize=7, linewidth=2, zorder=1, label=f'w{i+1}') # Plot training losses\n",
    "plt.legend(ncol=4, fontsize=label_size, loc='upper right')\n",
    "\n",
    "ax.set_xticks(xticks)\n",
    "ax.set_xticklabels(xticklabels)\n",
    "# ax.set_ylim(0, 25)\n",
    "\n",
    "ax.set_xlabel('Lambda', fontsize=label_size)\n",
    "ax.set_ylabel('Weight Value', fontsize=label_size)\n",
    "ax.tick_params(axis='both', which='major', labelsize=ticklabel_size)\n",
    "\n",
    "# plt.savefig('lambda_2_vs_weights.png', dpi=300) # Make figure clearer\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "L1 Norm"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "lambda_list = [0, 1, 10, 100, 1000, 10000, 100000]\n",
    "\n",
    "b_list = []\n",
    "w_list = []\n",
    "err_list = []\n",
    "loss_list = []\n",
    "\n",
    "for lbd in lambda_list:\n",
    "    # Train model\n",
    "    w, b, logs = train_poly_mdl(x_tr1, y_tr1, x_te1, y_te1, lr=1, max_epoch=100000, batch_size=10, disp=True, regulization_method='L1', regulization_lambda=lbd)\n",
    "    b_list.append(b)\n",
    "    w_list.append(w)\n",
    "    \n",
    "    err_train = logs['err_train'][-1]\n",
    "    err_test = logs['err_test'][-1]\n",
    "    err_list.append([err_train, err_test])\n",
    "    \n",
    "    loss_train = logs['loss_train'][-1]\n",
    "    loss_test = logs['loss_test'][-1]\n",
    "    loss_list.append([loss_train, loss_test])\n",
    "\n",
    "    print(f'Lambda={lbd}\\t Training avg_err: {err_train}\\t Testing avg_err: {err_test}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "b_array = np.array(b_list)\n",
    "w_array = np.array(w_list)\n",
    "err_array = np.array(err_list)\n",
    "loss_array = np.array(loss_list)\n",
    "\n",
    "# Draw errors vs lambda\n",
    "fig, ax = plt.subplots(figsize=(10,6))\n",
    "\n",
    "xticks = np.arange(len(lambda_list))\n",
    "xticklabels = lambda_list\n",
    "\n",
    "plt.plot(xticks, err_array[:, 0], color='blue', marker='.', markersize=10, linewidth=2, zorder=1, label='Train') # Plot training errors\n",
    "plt.plot(xticks, err_array[:, 1], color='red', marker='.', markersize=10, linewidth=2, zorder=1, label='Test') # Plot testing errors\n",
    "plt.legend(ncol=1, fontsize=label_size)\n",
    "\n",
    "ax.set_xticks(xticks)\n",
    "ax.set_xticklabels(xticklabels)\n",
    "# ax.set_ylim(0, 25)\n",
    "\n",
    "ax.set_xlabel('Lambda', fontsize=label_size)\n",
    "ax.set_ylabel('Prediction Error', fontsize=label_size)\n",
    "ax.tick_params(axis='both', which='major', labelsize=ticklabel_size)\n",
    "\n",
    "# plt.savefig('lambda_1_vs_error.png', dpi=300) # Make figure clearer\n",
    "plt.show()\n",
    "\n",
    "# Draw weights changes\n",
    "fig, ax = plt.subplots(figsize=(10,6))\n",
    "\n",
    "xticks = np.arange(len(lambda_list))\n",
    "xticklabels = lambda_list\n",
    "\n",
    "colormap = plt.get_cmap('jet', 12) # plt.cm.get_cmap is removed in newer Matplotlib\n",
    "linestyles = ['-', '--', '-.', ':', '-', '--', '-.', ':', '-', '--', '-.', ':']\n",
    "markers = ['o', 's', '^', 'v', '*', 'x', '+', 'D', 'h', 'p', '<', '>']\n",
    "for i in range(w_array.shape[1]):\n",
    "    plt.plot(xticks, w_array[:, i], color=colormap(i), linestyle=linestyles[i], marker=markers[i], markersize=7, linewidth=2, zorder=1, label=f'w{i+1}') # Plot training losses\n",
    "plt.legend(ncol=4, fontsize=label_size, loc='upper right')\n",
    "\n",
    "ax.set_xticks(xticks)\n",
    "ax.set_xticklabels(xticklabels)\n",
    "# ax.set_ylim(0, 25)\n",
    "\n",
    "ax.set_xlabel('Lambda', fontsize=label_size)\n",
    "ax.set_ylabel('Weight Value', fontsize=label_size)\n",
    "ax.tick_params(axis='both', which='major', labelsize=ticklabel_size)\n",
    "\n",
    "# plt.savefig('lambda_1_vs_weights.png', dpi=300) # Make figure clearer\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.5.3 交叉验证 Cross-Validation\n",
    "\n",
    "仅演示三种方法的数据划分思路"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Set cross validation methods\n",
    "cv_methods = ['leave-p-out', 'k-fold', 'bootstrapping']\n",
    "\n",
    "# Split data into train and test sets\n",
    "X_tr2, x_te2, Y_tr2, y_te2 = train_test_split(data, target, test_size=0.2)\n",
    "print(f'Training dataset size: {X_tr2.shape[0]}, testing dataset size: {x_te2.shape[0]}')\n",
    "\n",
    "# Leave-p-out (ratio form): split X_tr2 and Y_tr2 into train and cross-validation sets by a given ratio\n",
    "tr_val_ratio = 0.1\n",
    "x_tr2, x_val2, y_tr2, y_val2 = train_test_split(X_tr2, Y_tr2, test_size=tr_val_ratio)\n",
    "print(f'Training dataset size: {x_tr2.shape[0]}, cross-validation dataset size: {x_val2.shape[0]}')\n",
    "\n",
    "# Leave-p-out: Select p random samples as cross-validation data\n",
    "random_indices = np.random.permutation(X_tr2.shape[0])\n",
    "val_num = 10 # Set the number of test data\n",
    "train_idx = random_indices[val_num:]\n",
    "val_idx = random_indices[:val_num]\n",
    "x_tr2 = X_tr2[train_idx, :]\n",
    "y_tr2 = Y_tr2[train_idx]\n",
    "x_te2 = X_tr2[val_idx, :]\n",
    "y_te2 = Y_tr2[val_idx]\n",
    "print(f'Training dataset size: {x_tr2.shape[0]}, cross-validation dataset size: {x_te2.shape[0]}')\n",
    "\n",
    "# K-fold: Divide training dataset into K folds and leave one fold for CV in each time\n",
    "from sklearn.model_selection import KFold\n",
    "k = 5\n",
    "kf = KFold(n_splits=k)\n",
    "\n",
    "# Display kfold number and datasets size\n",
    "for ifold, (train_index, val_index) in enumerate(kf.split(X_tr2)):\n",
    "    x_tr2, x_te2 = X_tr2[train_index], X_tr2[val_index]\n",
    "    y_tr2, y_te2 = Y_tr2[train_index], Y_tr2[val_index]\n",
    "    print(f'Train-CV samples in fold {ifold+1}: {x_tr2.shape[0]}-{x_te2.shape[0]}')\n",
    "\n",
    "# K-fold: Manual\n",
    "k = 5\n",
    "fold_indices = np.array_split(random_indices, k) # Split the shuffled indices into k folds\n",
    "for ifold in range(k):\n",
    "    val_idx = fold_indices[ifold]\n",
    "    train_idx = np.concatenate([fold_indices[i] for i in range(k) if i != ifold])\n",
    "\n",
    "    x_tr2 = X_tr2[train_idx, :]\n",
    "    y_tr2 = Y_tr2[train_idx]\n",
    "    x_te2 = X_tr2[val_idx, :]\n",
    "    y_te2 = Y_tr2[val_idx]\n",
    "\n",
    "    print(f'Train-CV samples in fold {ifold+1}: {x_tr2.shape[0]}-{x_te2.shape[0]}')\n",
    "\n",
    "# Bootstrapping\n",
    "bt_set_num = 10\n",
    "\n",
    "# Generate random training and validation set sizes for each bootstrap round\n",
    "train_size_list = [np.random.randint(200, 500) for _ in range(bt_set_num)]\n",
    "print(f'Bootstrapping training set sizes: {train_size_list}')\n",
    "val_size_list = [np.random.randint(50, 100) for _ in range(bt_set_num)]\n",
    "print(f'Bootstrapping cross-validation set sizes: {val_size_list}')\n",
    "\n",
    "train_sample_dict = {}\n",
    "val_sample_dict = {}\n",
    "for ibt in range(bt_set_num):\n",
    "    \n",
    "    # Get indices of training and testing data\n",
    "    train_sample_dict[ibt] = np.random.choice(X_tr2.shape[0], train_size_list[ibt], replace=True)\n",
    "    val_sample_dict[ibt] = np.random.choice(X_tr2.shape[0], val_size_list[ibt], replace=True)\n",
    "\n",
    "for ibt in range(bt_set_num):\n",
    "    print(f'Training dataset size: {train_sample_dict[ibt].shape[0]}, cross-validation dataset size: {val_sample_dict[ibt].shape[0]}')  \n"
   ]
  },
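  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "上面的代码只展示了各折的数据规模。实际使用 K 折交叉验证时，需要在每一折上训练模型，并以各折验证误差的平均值作为评价指标。下面是一个独立的示意（用训练折的均值充当最简单的『模型』，数据为随机生成）："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(42)\n",
    "y = rng.normal(size=100)\n",
    "\n",
    "k = 5\n",
    "indices = rng.permutation(len(y))\n",
    "folds = np.array_split(indices, k)\n",
    "\n",
    "val_errors = []\n",
    "for ifold in range(k):\n",
    "    val_idx = folds[ifold]\n",
    "    train_idx = np.concatenate([folds[i] for i in range(k) if i != ifold])\n",
    "\n",
    "    # Placeholder model: predict the training-fold mean\n",
    "    y_pred = np.mean(y[train_idx])\n",
    "    val_errors.append(np.mean(np.abs(y[val_idx] - y_pred)))\n",
    "\n",
    "print(np.mean(val_errors)) # Average validation error across the k folds"
   ]
  },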
  {
   "cell_type": "markdown",
   "metadata": {
    "id": "0I7KcTW3ZKi8"
   },
   "source": [
    "## 2.6 实验：基于线性回归的房价预测\n",
    "\n",
    "此部分需要同学自行完成各个任务要求，训练并评估房价预测模型：\n",
    "* 数据读取及预处理\n",
    "* 模型设计：线性模型、非线性模型\n",
    "* 模型训练：蛮力法、梯度下降、学习率\n",
    "* 过拟合相关内容；数据增强、正则化、交叉验证"
   ]
  }
 ],
 "metadata": {
  "colab": {
   "authorship_tag": "ABX9TyOiUalPnPeYCEu6nF62YJIO",
   "include_colab_link": true,
   "provenance": []
  },
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.19"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
