{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. 对数据做数据探索分析（可参考EDA_BikeSharing.ipynb，不计分） \n",
    "2. 适当的特征工程（可参考FE_BikeSharing.ipynb，不计分） \n",
    "3. 对全体数据，随机选择其中80%做训练数据，剩下20%为测试数据，评价指标为RMSE。（10分） "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "['instant' 'dteday' 'season' 'yr' 'mnth' 'holiday' 'weekday' 'workingday'\n",
      " 'weathersit' 'temp' 'atemp' 'hum' 'windspeed' 'casual' 'registered' 'cnt']\n",
      "(584, 13) (146, 13) (584,) (146,)\n",
      "917.2044226858354 917.2044226858354\n"
     ]
    }
   ],
   "source": [
    "import numpy as np\n",
    "# from scipy.stats import pearsonr\n",
    "from sklearn.model_selection import train_test_split\n",
    "import sklearn.metrics as metrics\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.linear_model import LinearRegression\n",
    "import csv\n",
    "import time\n",
    "\n",
    "\n",
    "\n",
    "np.set_printoptions(precision=3, suppress=True)\n",
    "np.set_printoptions(formatter={'float': '{: 0.3f}'.format})\n",
    "\n",
    "\n",
    "def rmse(y_test,y_pred):\n",
    "    result = y_test.ravel() - y_pred.ravel()\n",
    "#     print(np.sum((result * result)),result.size)\n",
    "    return np.sqrt( np.sum((result * result))  / result.size )\n",
    "\n",
    "with open('day.csv') as f:\n",
    "    reader = csv.reader(f)\n",
    "    train = np.array([row for row in reader])\n",
    "    heat = train[0]\n",
    "    train = train[1:-1]\n",
    "print(heat)\n",
    "temp = train[0,1]\n",
    "d = train[:,1]\n",
    "daeday = [ time.mktime(time.strptime(temp,'%Y-%m-%d')) for temp in d ]\n",
    "\n",
    "train[:,1] = daeday\n",
    "train = train.astype('float')\n",
    "y = train[:,-1]\n",
    "x = train[:,0:-3]\n",
    "scaler = StandardScaler()\n",
    "x = scaler.fit_transform(x)\n",
    "x_train , x_test,y_train,y_test = train_test_split(x,y,test_size = 0.2)\n",
    "print(x_train.shape,x_test.shape,y_train.shape,y_test.shape)\n",
    "linearRegression = LinearRegression(copy_X = True)\n",
    "linearRegression.fit(x_train,y_train)\n",
    "y = linearRegression.predict(x_test)\n",
    "r0 = rmse(y_test,y)\n",
    "r1 = metrics.mean_squared_error(y,y_test)\n",
    "print(r0,np.sqrt(r1))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 4. 用训练数据训练最小二乘线性回归模型（20分）、岭回归模型、Lasso模型，其中岭回归模型（30分）和Lasso模型（30分），注意岭回归模型和Lasso模型的正则超参数调优。 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "6.8 921.4674779195548\n",
      "7865.0---(7040.6667947158  921.4674779195548)(7050.599994059088   921.4674779195548)(7050.858104984095  921.3286824677364)\n"
     ]
    }
   ],
   "source": [
    "from sklearn.linear_model import LassoCV\n",
    "from sklearn.linear_model import Lasso\n",
    "lassoCV = LassoCV(alphas=[13,6.8,7,7.2,10,10.5,12,1000],copy_X = True,cv=3,random_state = 0,max_iter=10000)\n",
    "lassoCV.fit(x_train,y_train)\n",
    "y0 = lassoCV.predict(x_test)\n",
    "r0 = rmse(y,y_test)\n",
    "print(lassoCV.alpha_,r)\n",
    "lassoCV = LassoCV(copy_X = True,cv=3,random_state = 0,max_iter=10000)\n",
    "lassoCV.fit(x_train,y_train)\n",
    "y1 = lassoCV.predict(x_test)\n",
    "r1 = rmse(y,y_test)\n",
    "lasso = Lasso()\n",
    "lasso.fit(x_train,y_train)\n",
    "y2 = lasso.predict(x_test)\n",
    "r2 = rmse(y2,y_test)\n",
    "print(\"{}---({}  {})({}   {})({}  {})\".format(y_test[0],y0[0],r0,y1[0],r1,y2[0],r2))\n",
    "# print(lassoCV.alpha_)\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "6.8\n",
      "7865.0---(7043.029838688803  921.4674779195548)(7037.0049006636655   921.4674779195548)(7050.844855404153  921.3111704646317)\n"
     ]
    }
   ],
   "source": [
    "from sklearn.linear_model import Ridge\n",
    "from sklearn.linear_model import RidgeCV\n",
    "ridgeCV = RidgeCV(alphas=[13,6.8,7,7.2,10,10.5,12,1000],cv=3)\n",
    "ridgeCV.fit(x_train,y_train)\n",
    "print(ridgeCV.alpha_)\n",
    "\n",
    "y0 = ridgeCV.predict(x_test)\n",
    "\n",
    "ridgeCV = RidgeCV(cv=3)\n",
    "ridgeCV.fit(x_train,y_train)\n",
    "y1 = ridgeCV.predict(x_test)\n",
    "r1 = rmse(y,y_test)\n",
    "\n",
    "\n",
    "ridge = Ridge()\n",
    "ridge.fit(x_train,y_train)\n",
    "y2 = ridge.predict(x_test)\n",
    "r2 = rmse(y2,y_test)\n",
    "print(\"{}---({}  {})({}   {})({}  {})\".format(y_test[0],y0[0],r0,y1[0],r1,y2[0],r2))\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 5. 比较用上述三种模型得到的各特征的系数，以及各模型在测试集上的性能。并简单说明原因。（10分） "
   ]
  },
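  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(Added sketch, not part of the original run; synthetic data only.) A closed-form, numpy-only illustration of why a small ridge penalty barely moves the coefficients: the ridge solution $(X^TX + \\alpha I)^{-1}X^Ty$ shrinks the OLS solution $(X^TX)^{-1}X^Ty$ toward zero, and the shrinkage vanishes as $\\alpha \\to 0$."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "# Synthetic data only -- a hypothetical illustration, not day.csv\n",
    "rng = np.random.default_rng(0)\n",
    "X = rng.normal(size=(200, 5))\n",
    "beta = np.array([3.0, -2.0, 1.0, 0.0, 0.5])\n",
    "y_syn = X @ beta + rng.normal(scale=0.5, size=200)\n",
    "\n",
    "def ridge_coef(X, y, alpha):\n",
    "    # closed-form ridge solution; alpha = 0 reduces to ordinary least squares\n",
    "    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)\n",
    "\n",
    "b_ols = ridge_coef(X, y_syn, 0.0)\n",
    "for a in [0.1, 10.0, 1000.0]:\n",
    "    b = ridge_coef(X, y_syn, a)\n",
    "    # the coefficient norm shrinks, and the distance from OLS grows, as alpha grows\n",
    "    print(a, np.linalg.norm(b), np.linalg.norm(b - b_ols))"
   ]
  },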
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "通过观察上诉几个测试结果,超参数和REMS都比较接近.应该是数据集上特征之间线性关系很小,在参数调优的作用下,使得超参较小,使其对模型影响不大所以会有上面的结果."
   ]
  },
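  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(Added sketch, not part of the original run; synthetic data only, assuming scikit-learn is available.) The hand-picked alpha lists used above can miss the best region of the regularization path. A common alternative is to search a logarithmic grid built with np.logspace:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.linear_model import LassoCV, RidgeCV\n",
    "\n",
    "# Synthetic data only -- a hypothetical illustration, not day.csv\n",
    "rng = np.random.default_rng(1)\n",
    "Xs = rng.normal(size=(300, 8))\n",
    "ys = Xs @ rng.normal(size=8) + rng.normal(scale=0.3, size=300)\n",
    "\n",
    "# 50 candidate strengths from 1e-4 to 1e3, evenly spaced on a log scale\n",
    "alphas = np.logspace(-4, 3, 50)\n",
    "lcv = LassoCV(alphas=alphas, cv=3, max_iter=10000).fit(Xs, ys)\n",
    "rcv = RidgeCV(alphas=alphas, cv=3).fit(Xs, ys)\n",
    "print(lcv.alpha_, rcv.alpha_)"
   ]
  },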
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
