{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### *Linear Regression*\n",
    "- Linear: the independent and dependent variables are related (approximately) linearly\n",
    "- Regression: we want to find the relationship between the independent and dependent variables"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "import torch "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Ground-truth linear function y = k*x + b\n",
    "k = 3\n",
    "b = 4\n",
    "def f(x):\n",
    "    return k * x + b\n",
    "n = 100\n",
    "X = np.linspace(1, n, n)\n",
    "Y = f(X)\n",
    "plt.plot(X,Y)\n",
    "#print(X,Y)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Manually Adding Noise"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# np.random.normal draws random samples from a Gaussian distribution:\n",
    "# mean 0, standard deviation 5, 100 samples\n",
    "D = np.random.normal(0,5,100)\n",
    "# D\n",
    "Y += D\n",
    "# plt.plot(X,Y)    # line plot\n",
    "plt.scatter(X,Y)    # scatter plot"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. Linear regression has a closed-form (analytical) solution, i.e. we can directly compute the parameters that minimize the error. But in the real world not every problem admits a closed-form solution, which is why we turn to deep learning (gradient-based optimization).\n",
    "2. General deep-learning workflow:\n",
    "    - Prepare the data: X, Y\n",
    "    - Choose a model"
   ]
  },
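  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check on the closed-form claim above, the least-squares line can be computed directly. A minimal sketch, assuming `X` and `Y` are still NumPy arrays at this point: `np.polyfit` fits a degree-1 polynomial by least squares, so the fitted slope and intercept should land near the true k = 3 and b = 4."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Closed-form least-squares fit: np.polyfit returns the coefficients\n",
    "# (slope, intercept) of the degree-1 polynomial minimizing squared error\n",
    "k_hat, b_hat = np.polyfit(X, Y, 1)\n",
    "k_hat, b_hat"
   ]
  },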
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The model is linear regression, so it has the form K*X + B\n",
    "# K is unknown, so initialize it randomly with torch; same for B\n",
    "# X and Y also need to be converted to torch tensors\n",
    "K = torch.normal(0,1,size=(1,1),requires_grad=True)\n",
    "B = torch.normal(0,1,size=(1,1),requires_grad=True)\n",
    "K,B\n",
    "X = torch.tensor(X)\n",
    "Y = torch.tensor(Y)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# define model\n",
    "def model(x,k,b):\n",
    "    return k * x + b"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "model(X,K,B)    # predictions with the randomly initialized K and B\n",
    "model(X,K,B) == Y    # compare with the true values: almost never equal"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Loss Function\n",
    "We need a metric that measures how far the predicted values are from the true values\n",
    "- mean squared error"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Measures how far the prediction (y_hat) is from the true value (y)\n",
    "def squared_loss(y_hat,y):\n",
    "    return (y_hat - y) ** 2 / 2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# model(X,K,B) gives the predicted values\n",
    "# Y holds the true values\n",
    "# evaluate the loss\n",
    "squared_loss(model(X,K,B),Y)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "- Core goal: to better fit the linear relationship, find the K and B that minimize the loss function\n",
    "- How do we adjust K and B so that squared_loss keeps shrinking?\n",
    "Think of squared_loss(model(X,K,B),Y) as a function of K and B alone; X and Y come from the real world and are treated as fixed. We want to change K and B so that the function value decreases. Mathematically this is done with derivatives (partial derivatives): because K and B were created with requires_grad=True, torch can compute the partial derivatives with respect to them, and moving K and B in the direction opposite to their partial derivatives makes the loss smaller\n",
    "- How far should we move?\n",
    "Too far and we overshoot the minimum; too little and training becomes inefficient\n",
    "- Training the model (in this example, simply updating the values of K and B)\n",
    "    1. Learning rate: learn_rate = 0.01"
   ]
  },
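  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To build intuition for the step size, here is a tiny self-contained sketch (plain Python, using the hypothetical one-dimensional function f(w) = w^2, whose derivative is 2w): a moderate learning rate converges toward the minimum, while one that is too large overshoots further on every step and diverges."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimize f(w) = w**2 by gradient descent; the gradient is 2*w\n",
    "def descend(lr, steps=20, w=1.0):\n",
    "    for _ in range(steps):\n",
    "        w -= lr * 2 * w    # step against the gradient\n",
    "    return w\n",
    "\n",
    "print(descend(0.1))    # small step: approaches the minimum at 0\n",
    "print(descend(1.1))    # step too large: each update overshoots further"
   ]
  },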
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# iterate over the (x, y) pairs in the dataset\n",
    "for x,y in zip(X,Y):\n",
    "    print(x,y)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# set the learning rate\n",
    "learn_rate = 0.01\n",
    "# load the data one sample at a time\n",
    "for x,y in zip(X,Y):      \n",
    "    # evaluate the loss\n",
    "    l = squared_loss(model(x,K,B),y)    \n",
    "    # compute the partial derivatives\n",
    "    l.sum().backward()\n",
    "    with torch.no_grad():  # do not track gradients while applying the update\n",
    "        # update the parameters\n",
    "        K -= K.grad * learn_rate\n",
    "        K.grad.zero_()\n",
    "        B -= B.grad * learn_rate\n",
    "        B.grad.zero_() \n",
    "    print(K,B) \n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Adding Mini-Batch Training to Capture the Overall Trend"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# set the learning rate\n",
    "learn_rate = 0.0002\n",
    "# learn 10 points per batch\n",
    "batch_size = 10\n",
    "# load the data in mini-batches\n",
    "for j in range(200):\n",
    "    for i in range(n // batch_size):\n",
    "        x = X[batch_size * i:batch_size * (i + 1)]\n",
    "        y = Y[batch_size * i:batch_size * (i + 1)]\n",
    "        # evaluate the loss\n",
    "        l = squared_loss(model(x,K,B),y)    \n",
    "        print(l.sum())\n",
    "        # compute the partial derivatives\n",
    "        l.sum().backward()\n",
    "        with torch.no_grad():  # do not track gradients while applying the update\n",
    "            # update the parameters, averaging the gradient over the batch\n",
    "            K -= K.grad * learn_rate / batch_size\n",
    "            K.grad.zero_()\n",
    "            B -= B.grad * learn_rate / batch_size\n",
    "            B.grad.zero_() \n",
    "        #print(K,B,l) "
   ]
  },
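  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The manual slicing above can also be done with PyTorch's built-in data utilities. A minimal sketch using `torch.utils.data.TensorDataset` and `DataLoader`, which additionally shuffles the samples each epoch (the manual loop always visits them in the same order):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from torch.utils.data import TensorDataset, DataLoader\n",
    "\n",
    "# Wrap X and Y so DataLoader can yield shuffled mini-batches\n",
    "dataset = TensorDataset(X, Y)\n",
    "loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)\n",
    "for x_batch, y_batch in loader:\n",
    "    print(x_batch.shape, y_batch.shape)\n",
    "    break"
   ]
  },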
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.plot(X,Y)    # noisy data\n",
    "Y_hat = model(X,K,B)\n",
    "# Y_hat has shape (1, 100) because K and B are (1, 1); detach before plotting\n",
    "plt.plot(X,Y_hat.detach()[0])\n",
    "plt.plot(X,f(X))    # true line"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "d0l",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
