{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from pyspark.context import SparkContext\n",
    "from pyspark.sql.session import SparkSession\n",
    "from pyspark.mllib.regression import LabeledPoint\n",
    "from pyspark.ml.feature import Normalizer"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### 1. Project Background\n",
     "    A city bike-sharing scheme deploys a number of self-service rental stations across a city. Within this network of stations, users can rent and return bicycles on their own. To date, there are more than 500 bike-sharing programs worldwide.\n",
     "    The data generated by such systems, such as rental duration and pickup/return location and time, has attracted considerable interest, and the rental network itself acts as a sensor network. In this project we use historical data from Washington, D.C. to predict rental demand on the bike-sharing system."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### 2. Project Description\n",
     "    The goal is to predict the number of bike rentals under given conditions from historical data (including weather, time, season, and other features). We build models with decision trees, SVMs (support vector machines), and random forests. After feature engineering, each model is trained on the training set and then used to make predictions on the test set. Accuracy is then improved by tuning parameters and using cross-validation. The closer the predictions are to the true values, the better; how predictions are evaluated is described later."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### 3. Dataset Description\n",
     "    The fields in the dataset are:\n",
     "       (1) datetime: the timestamp of the record, in year-month-day hour format.\n",
     "       (2) season: the season of the record. 1 = spring, 2 = summer, 3 = autumn, 4 = winter.\n",
     "       (3) holiday: whether the day is a holiday. 1 = yes, 0 = no.\n",
     "       (4) workingday: whether the day is a working day, i.e. neither a weekend nor a holiday. 1 = yes, 0 = no.\n",
     "       (5) weather: the weather on that day:\n",
     "              1: clear, or few/partly cloudy.\n",
     "              2: mist and cloudy/windy.\n",
     "              3: light snow/light rain with thunderstorm and scattered clouds.\n",
     "              4: heavy rain/hail/thunderstorm with heavy fog/heavy snow.\n",
     "       (6) temp: temperature in degrees Celsius.\n",
     "       (7) atemp: \"feels like\" temperature.\n",
     "       (8) humidity: humidity.\n",
     "       (9) windspeed: wind speed.\n",
     "       (10) casual: number of rentals by non-registered users.\n",
     "       (11) registered: number of rentals by registered users.\n",
     "       (12) count: total number of rentals, the value we need to predict; equals casual + registered."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### 4. Loading the Dataset\n",
     "    (1) Training set: data/BikeSharing/train.csv\n",
     "    (2) Test set: data/BikeSharing/test.csv"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "##### 4.1 Load and inspect the raw dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Create the SparkContext and SparkSession\n",
    "sc = SparkContext(\"local[*]\",\"BikeSharing\")\n",
    "spark = SparkSession(sc)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "First row of the data:\n",
       " ['datetime', 'season', 'holiday', 'workingday', 'weather', 'temp', 'atemp', 'humidity', 'windspeed', 'casual', 'registered', 'count']\n",
       "Number of samples: 10886\n"
     ]
    }
   ],
   "source": [
     "path = 'data/BikeSharing/train.csv'\n",
     "records = sc.textFile(path)\n",
     "records = records.map(lambda x: x.split(','))\n",
     "# Take the first row, which holds the field (feature) names\n",
     "header = records.first()\n",
     "# Drop the header row, keeping only the data rows\n",
     "records = records.filter(lambda x: x != header)\n",
     "num_data = records.count()\n",
     "print('First row of the data:\\n', header)\n",
     "print('Number of samples:', num_data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+-------------------+------+-------+----------+-------+-----+------+--------+---------+------+----------+-----+\n",
      "|           datetime|season|holiday|workingday|weather| temp| atemp|humidity|windspeed|casual|registered|count|\n",
      "+-------------------+------+-------+----------+-------+-----+------+--------+---------+------+----------+-----+\n",
      "|2011-01-01 00:00:00|     1|      0|         0|      1| 9.84|14.395|      81|        0|     3|        13|   16|\n",
      "|2011-01-01 01:00:00|     1|      0|         0|      1| 9.02|13.635|      80|        0|     8|        32|   40|\n",
      "|2011-01-01 02:00:00|     1|      0|         0|      1| 9.02|13.635|      80|        0|     5|        27|   32|\n",
      "|2011-01-01 03:00:00|     1|      0|         0|      1| 9.84|14.395|      75|        0|     3|        10|   13|\n",
      "|2011-01-01 04:00:00|     1|      0|         0|      1| 9.84|14.395|      75|        0|     0|         1|    1|\n",
      "|2011-01-01 05:00:00|     1|      0|         0|      2| 9.84| 12.88|      75|   6.0032|     0|         1|    1|\n",
      "|2011-01-01 06:00:00|     1|      0|         0|      1| 9.02|13.635|      80|        0|     2|         0|    2|\n",
      "|2011-01-01 07:00:00|     1|      0|         0|      1|  8.2| 12.88|      86|        0|     1|         2|    3|\n",
      "|2011-01-01 08:00:00|     1|      0|         0|      1| 9.84|14.395|      75|        0|     1|         7|    8|\n",
      "|2011-01-01 09:00:00|     1|      0|         0|      1|13.12|17.425|      76|        0|     8|         6|   14|\n",
      "|2011-01-01 10:00:00|     1|      0|         0|      1|15.58|19.695|      76|  16.9979|    12|        24|   36|\n",
      "|2011-01-01 11:00:00|     1|      0|         0|      1|14.76|16.665|      81|  19.0012|    26|        30|   56|\n",
      "|2011-01-01 12:00:00|     1|      0|         0|      1|17.22| 21.21|      77|  19.0012|    29|        55|   84|\n",
      "|2011-01-01 13:00:00|     1|      0|         0|      2|18.86|22.725|      72|  19.9995|    47|        47|   94|\n",
      "|2011-01-01 14:00:00|     1|      0|         0|      2|18.86|22.725|      72|  19.0012|    35|        71|  106|\n",
      "|2011-01-01 15:00:00|     1|      0|         0|      2|18.04| 21.97|      77|  19.9995|    40|        70|  110|\n",
      "|2011-01-01 16:00:00|     1|      0|         0|      2|17.22| 21.21|      82|  19.9995|    41|        52|   93|\n",
      "|2011-01-01 17:00:00|     1|      0|         0|      2|18.04| 21.97|      82|  19.0012|    15|        52|   67|\n",
      "|2011-01-01 18:00:00|     1|      0|         0|      3|17.22| 21.21|      88|  16.9979|     9|        26|   35|\n",
      "|2011-01-01 19:00:00|     1|      0|         0|      3|17.22| 21.21|      88|  16.9979|     6|        31|   37|\n",
      "+-------------------+------+-------+----------+-------+-----+------+--------+---------+------+----------+-----+\n",
      "only showing top 20 rows\n",
      "\n"
     ]
    }
   ],
   "source": [
     "# Convert the RDD to a DataFrame to inspect the dataset\n",
    "dataFrame = spark.createDataFrame(records,header)\n",
    "dataFrame.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "##### 4.2 Filter the dataset\n",
     "    We drop the datetime field, and also drop the two count variables casual and registered, keeping only count (the sum of casual and registered). This leaves 9 variables: the first 4 are categorical, the next 4 are real-valued, and the last is the target count. The 4 categorical variables will be one-hot encoded."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+------+-------+----------+-------+-----+------+--------+---------+-----+\n",
      "|season|holiday|workingday|weather| temp| atemp|humidity|windspeed|count|\n",
      "+------+-------+----------+-------+-----+------+--------+---------+-----+\n",
      "|     1|      0|         0|      1| 9.84|14.395|    81.0|      0.0|   16|\n",
      "|     1|      0|         0|      1| 9.02|13.635|    80.0|      0.0|   40|\n",
      "|     1|      0|         0|      1| 9.02|13.635|    80.0|      0.0|   32|\n",
      "|     1|      0|         0|      1| 9.84|14.395|    75.0|      0.0|   13|\n",
      "|     1|      0|         0|      1| 9.84|14.395|    75.0|      0.0|    1|\n",
      "|     1|      0|         0|      2| 9.84| 12.88|    75.0|   6.0032|    1|\n",
      "|     1|      0|         0|      1| 9.02|13.635|    80.0|      0.0|    2|\n",
      "|     1|      0|         0|      1|  8.2| 12.88|    86.0|      0.0|    3|\n",
      "|     1|      0|         0|      1| 9.84|14.395|    75.0|      0.0|    8|\n",
      "|     1|      0|         0|      1|13.12|17.425|    76.0|      0.0|   14|\n",
      "|     1|      0|         0|      1|15.58|19.695|    76.0|  16.9979|   36|\n",
      "|     1|      0|         0|      1|14.76|16.665|    81.0|  19.0012|   56|\n",
      "|     1|      0|         0|      1|17.22| 21.21|    77.0|  19.0012|   84|\n",
      "|     1|      0|         0|      2|18.86|22.725|    72.0|  19.9995|   94|\n",
      "|     1|      0|         0|      2|18.86|22.725|    72.0|  19.0012|  106|\n",
      "|     1|      0|         0|      2|18.04| 21.97|    77.0|  19.9995|  110|\n",
      "|     1|      0|         0|      2|17.22| 21.21|    82.0|  19.9995|   93|\n",
      "|     1|      0|         0|      2|18.04| 21.97|    82.0|  19.0012|   67|\n",
      "|     1|      0|         0|      3|17.22| 21.21|    88.0|  16.9979|   35|\n",
      "|     1|      0|         0|      3|17.22| 21.21|    88.0|  16.9979|   37|\n",
      "+------+-------+----------+-------+-----+------+--------+---------+-----+\n",
      "only showing top 20 rows\n",
      "\n"
     ]
    }
   ],
   "source": [
     "# Drop datetime (column 0) and casual/registered; cast categorical columns to int, numeric columns to float\n",
     "trainData = records.map(lambda row: [int(x1) for x1 in row[1:5]] + [float(x2) for x2 in row[5:9]] + [int(row[-1])])\n",
    "trainDataFrame = spark.createDataFrame(trainData,[ 'season', 'holiday', 'workingday', 'weather',\n",
    "                                                  'temp', 'atemp', 'humidity', 'windspeed', 'count'])\n",
    "trainDataFrame.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "##### 4.3 Normalize the numeric features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [],
   "source": [
     "from pyspark.ml.feature import Normalizer, VectorAssembler\n",
     "from pyspark.ml import Pipeline\n",
     "# Assemble the four numeric columns (temp, atemp, humidity, windspeed) into a single vector\n",
     "vecAssembler = VectorAssembler(inputCols=header[5:9], outputCol=\"norm_features\")\n",
     "# L2-normalize the assembled vector\n",
     "normalizer = Normalizer(p=2.0, inputCol=\"norm_features\", outputCol=\"norm_test\")\n",
     "pipeline = Pipeline(stages=[vecAssembler, normalizer])\n",
     "pipeline_fit = pipeline.fit(trainDataFrame)\n",
     "df = pipeline_fit.transform(trainDataFrame)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+------+-------+----------+-------+--------------------+-----+\n",
      "|season|holiday|workingday|weather|           norm_test|count|\n",
      "+------+-------+----------+-------+--------------------+-----+\n",
      "|     1|      0|         0|      1|[0.01206831907543...|   16|\n",
      "|     1|      0|         0|      1|[0.01224597289523...|   40|\n",
      "|     1|      0|         0|      1|[0.01224597289523...|   32|\n",
      "|     1|      0|         0|      1|[0.01298587233806...|   13|\n",
      "|     1|      0|         0|      1|[0.01298587233806...|    1|\n",
      "|     1|      0|         0|      2|[0.02605607202861...|    1|\n",
      "|     1|      0|         0|      1|[0.01224597289523...|    2|\n",
      "|     1|      0|         0|      1|[0.01144811296171...|    3|\n",
      "|     1|      0|         0|      1|[0.01298587233806...|    8|\n",
      "|     1|      0|         0|      1|[0.01264631356014...|   14|\n",
      "|     1|      0|         0|      1|[0.01249255604442...|   36|\n",
      "|     1|      0|         0|      1|[0.01190342940172...|   56|\n",
      "|     1|      0|         0|      1|[0.01223852472859...|   84|\n",
      "|     1|      0|         0|      2|[0.02569148433143...|   94|\n",
      "|     1|      0|         0|      2|[0.02569148433143...|  106|\n",
      "|     1|      0|         0|      2|[0.02435924850966...|  110|\n",
      "|     1|      0|         0|      2|[0.02313353733948...|   93|\n",
      "|     1|      0|         0|      2|[0.02303859663022...|   67|\n",
      "|     1|      0|         0|      3|[0.03254073154618...|   35|\n",
      "|     1|      0|         0|      3|[0.03254073154618...|   37|\n",
      "+------+-------+----------+-------+--------------------+-----+\n",
      "only showing top 20 rows\n",
      "\n"
     ]
    }
   ],
   "source": [
    "traindf = df.select(\"season\",\"holiday\",\"workingday\",\"weather\",\"norm_test\",\"count\")\n",
    "traindf.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[Row(season=1, holiday=0, workingday=0, weather=1, norm_test=DenseVector([0.0121, 0.1188, 0.1737, 0.9775]), count=16),\n",
       " Row(season=1, holiday=0, workingday=0, weather=1, norm_test=DenseVector([0.0122, 0.1105, 0.167, 0.9797]), count=40),\n",
       " Row(season=1, holiday=0, workingday=0, weather=1, norm_test=DenseVector([0.0122, 0.1105, 0.167, 0.9797]), count=32),\n",
       " Row(season=1, holiday=0, workingday=0, weather=1, norm_test=DenseVector([0.013, 0.1278, 0.1869, 0.9739]), count=13),\n",
       " Row(season=1, holiday=0, workingday=0, weather=1, norm_test=DenseVector([0.013, 0.1278, 0.1869, 0.9739]), count=1)]"
      ]
     },
     "execution_count": 25,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "traindf.rdd.take(5)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "MapPartitionsRDD[50] at javaToPython at NativeMethodAccessorImpl.java:0"
      ]
     },
     "execution_count": 26,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# Cache trainData, since it is used repeatedly below:\n",
    "trainData = traindf.rdd.cache()\n",
    "trainData"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### 5. Data Preprocessing\n",
     "##### 5.1 To represent each categorical feature in binary form, we map its values to the positions of the non-zero entries of a binary vector. The mapping function is defined below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_mapping(rdd, idx):\n",
    "    return rdd.map(lambda fields: fields[idx]).distinct().zipWithIndex().collectAsMap()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "The function above first deduplicates the values of column idx, then uses zipWithIndex to map each distinct value to a unique index, producing an RDD of key-value pairs in which the key is the feature value and the value is the index. That index is the feature's non-zero position in its binary vector. Finally, collectAsMap turns the RDD into a Python dictionary.\n",
     "\n",
     "Next, we test the mapping function on the third column (index 2) of the feature matrix:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Categorical encoding of the third feature: {0: 0, 1: 1} \n"
     ]
    }
   ],
   "source": [
     "print('Categorical encoding of the third feature: %s ' % get_mapping(trainData, 2))"
   ]
  },
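  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In plain Python, the same distinct-value-to-index mapping can be sketched as follows (a standalone illustration without Spark; note that Spark's distinct() does not guarantee ordering, so the assigned indices may differ):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy column of categorical values, mirroring distinct().zipWithIndex().collectAsMap()\n",
    "toy_column = [2, 1, 2, 3, 1]\n",
    "toy_mapping = {v: i for i, v in enumerate(dict.fromkeys(toy_column))}\n",
    "toy_mapping   # {2: 0, 1: 1, 3: 2}"
   ]
  },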
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Next, we apply the function to each of the categorical columns (the first four columns):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Categorical feature encoding dictionaries: [{2: 0, 4: 1, 1: 2, 3: 3}, {0: 0, 1: 1}, {0: 0, 1: 1}, {2: 0, 4: 1, 1: 2, 3: 3}]\n",
       "Number of categorical features: 12\n",
       "Number of numeric features: 4\n",
       "Total number of features: 16\n"
     ]
    }
   ],
   "source": [
     "mappings = [get_mapping(trainData, i) for i in range(0, 4)]   # apply the mapping function to the 4 categorical columns\n",
     "print('Categorical feature encoding dictionaries:', mappings)\n",
     "cat_len = sum(map(len, mappings))            # number of one-hot categorical features\n",
     "\n",
     "num_len = len(trainData.first()[4])          # number of numeric features\n",
     "total_len = num_len + cat_len                # total number of features\n",
     "print('Number of categorical features: %d' % cat_len)\n",
     "print('Number of numeric features: %d' % num_len)\n",
     "print('Total number of features: %d' % total_len)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "##### 5.2 Build feature vectors for the linear model\n",
     "    Next we use the mappings above to convert all categorical features into binary-encoded features. To make it easy to extract features and labels from each record, we define two helper functions, extract_features and extract_label. The implementation is below; note that numpy and MLlib's LabeledPoint are used to wrap the feature vector and target variable:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [],
   "source": [
     "def extract_features(record):\n",
     "    # Binary (one-hot) part for the 4 categorical columns\n",
     "    cat_vec = np.zeros(cat_len)\n",
     "    step = 0\n",
     "    for i, raw_feature in enumerate(record[0:4]):\n",
     "        dict_code = mappings[i]\n",
     "        index = dict_code[raw_feature]\n",
     "        cat_vec[index + step] = 1\n",
     "        step = step + len(dict_code)   # offset into the full binary vector\n",
     "    # Numeric part: the normalized features are already floats\n",
     "    num_vec = np.array(record[4])\n",
     "    return np.concatenate((cat_vec, num_vec))\n",
     "\n",
     "def extract_label(record):\n",
     "    return float(record[-1])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "In extract_features we iterate over the columns of each row and binary-encode each categorical feature using the mappings created earlier. The step variable ensures that each non-zero entry lands at the correct position in the overall feature vector (an alternative implementation would concatenate several short binary vectors). The numeric values, already converted to floats, are wrapped directly in a numpy array. Finally, the binary and numeric vectors are concatenated. extract_label simply converts the last column, count, to a float."
   ]
  },
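  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The step-offset logic can be checked with a small standalone sketch (the toy mappings and values below are made up for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy example: two categorical features with 3 and 2 distinct values\n",
    "toy_mappings = [{'a': 0, 'b': 1, 'c': 2}, {0: 0, 1: 1}]\n",
    "toy_cat_len = sum(map(len, toy_mappings))   # 5 slots in the binary vector\n",
    "toy_vec = np.zeros(toy_cat_len)\n",
    "step = 0\n",
    "for i, raw in enumerate(['b', 1]):\n",
    "    toy_vec[toy_mappings[i][raw] + step] = 1\n",
    "    step += len(toy_mappings[i])\n",
    "toy_vec   # array([0., 1., 0., 0., 1.])"
   ]
  },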
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Raw feature vector: [Row(season=1, holiday=0, workingday=0, weather=1, norm_test=DenseVector([0.0121, 0.1188, 0.1737, 0.9775]), count=16)]\n",
       "Label: 16.0\n",
       "Feature vector after one-hot encoding the categorical features: \n",
       "[0.0,0.0,1.0,0.0,1.0,0.0,1.0,0.0,0.0,0.0,1.0,0.0,0.012068319075437855,0.11875225970230849,0.1737234530909279,0.9775338451104663]\n",
       "Length of the one-hot encoded feature vector: 16\n"
     ]
    }
   ],
   "source": [
    "data = trainData.map(lambda point: LabeledPoint(extract_label(point),extract_features(point)))\n",
    "first_point = data.first()\n",
    "\n",
     "print('Raw feature vector: ' + str(trainData.take(1)))\n",
     "print('Label: ' + str(first_point.label))\n",
     "print('Feature vector after one-hot encoding the categorical features: \\n' + str(first_point.features))\n",
     "print('Length of the one-hot encoded feature vector: ' + str(len(first_point.features)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "##### 5.3 Build feature vectors for the decision tree\n",
     "    As we know, decision tree models can consume the raw data directly (categorical values do not need a binary-vector representation). So we only need a simple extraction function that converts every value to a float and wraps the result in a numpy array:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Decision tree feature vector: [1.0,0.0,0.0,1.0,0.012068319075437855,0.11875225970230849,0.1737234530909279,0.9775338451104663]\n",
       "Decision tree feature vector length: 8\n"
     ]
    }
   ],
   "source": [
     "def extract_features_dt(record):\n",
     "    # Categorical columns are used as-is (no one-hot encoding)\n",
     "    cat_vec = np.array([float(x) for x in record[0:4]])\n",
     "    num_vec = np.array(record[4])\n",
     "    return np.concatenate((cat_vec, num_vec))\n",
     "\n",
     "data_dt = trainData.map(lambda point: LabeledPoint(extract_label(point), extract_features_dt(point)))\n",
     "first_point_dt = data_dt.first()\n",
     "print('Decision tree feature vector: ' + str(first_point_dt.features))\n",
     "print('Decision tree feature vector length: ' + str(len(first_point_dt.features)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### 6. Training a Regression Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Help on method train in module pyspark.mllib.regression:\n",
      "\n",
      "train(data, iterations=100, step=1.0, miniBatchFraction=1.0, initialWeights=None, regParam=0.0, regType=None, intercept=False, validateData=True, convergenceTol=0.001) method of builtins.type instance\n",
      "    Train a linear regression model using Stochastic Gradient\n",
      "    Descent (SGD). This solves the least squares regression\n",
      "    formulation\n",
      "    \n",
      "        f(weights) = 1/(2n) ||A weights - y||^2\n",
      "    \n",
      "    which is the mean squared error. Here the data matrix has n rows,\n",
      "    and the input RDD holds the set of rows of A, each with its\n",
      "    corresponding right hand side label y.\n",
      "    See also the documentation for the precise formulation.\n",
      "    \n",
      "    :param data:\n",
      "      The training data, an RDD of LabeledPoint.\n",
      "    :param iterations:\n",
      "      The number of iterations.\n",
      "      (default: 100)\n",
      "    :param step:\n",
      "      The step parameter used in SGD.\n",
      "      (default: 1.0)\n",
      "    :param miniBatchFraction:\n",
      "      Fraction of data to be used for each SGD iteration.\n",
      "      (default: 1.0)\n",
      "    :param initialWeights:\n",
      "      The initial weights.\n",
      "      (default: None)\n",
      "    :param regParam:\n",
      "      The regularizer parameter.\n",
      "      (default: 0.0)\n",
      "    :param regType:\n",
      "      The type of regularizer used for training our model.\n",
      "      Supported values:\n",
      "    \n",
      "        - \"l1\" for using L1 regularization\n",
      "        - \"l2\" for using L2 regularization\n",
      "        - None for no regularization (default)\n",
      "    :param intercept:\n",
      "      Boolean parameter which indicates the use or not of the\n",
      "      augmented representation for training data (i.e., whether bias\n",
      "      features are activated or not).\n",
      "      (default: False)\n",
      "    :param validateData:\n",
      "      Boolean parameter which indicates if the algorithm should\n",
      "      validate data before training.\n",
      "      (default: True)\n",
      "    :param convergenceTol:\n",
      "      A condition which decides iteration termination.\n",
      "      (default: 0.001)\n",
      "    \n",
      "    .. versionadded:: 0.9.0\n",
      "\n"
     ]
    }
   ],
   "source": [
    "from pyspark.mllib.regression import LinearRegressionWithSGD\n",
    "help(LinearRegressionWithSGD.train)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "We have now extracted training features from the bike sharing data; next we train the models. First we train a linear model and check its predictions on the training data:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Linear model predictions for the first 5 samples:\n",
       " [(16.0, 146.79056148328095), (40.0, 146.55377615512685), (32.0, 146.55377615512685), (13.0, 147.13519243602354), (1.0, 147.13519243602354)]\n"
     ]
    }
   ],
   "source": [
     "linear_model = LinearRegressionWithSGD.train(data, iterations=10, step=0.1, intercept=False)\n",
     "true_vs_predicted = data.map(lambda point: (point.label, linear_model.predict(point.features)))\n",
     "print('Linear model predictions for the first 5 samples:\\n ' + str(true_vs_predicted.take(5)))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### 7. Training a Decision Tree Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Decision tree predictions for the first 5 samples: [(16.0, 45.57828282828283), (40.0, 45.57828282828283), (32.0, 45.57828282828283), (13.0, 45.57828282828283), (1.0, 45.57828282828283)]\n",
       "Decision tree depth: 5\n",
       "Decision tree number of nodes: 63\n"
     ]
    }
   ],
   "source": [
     "from pyspark.mllib.tree import DecisionTree\n",
     "dt_model = DecisionTree.trainRegressor(data_dt, {})\n",
     "preds = dt_model.predict(data_dt.map(lambda p: p.features))\n",
     "actual = data.map(lambda p: p.label)\n",
     "true_vs_predicted_dt = actual.zip(preds)\n",
     "print('Decision tree predictions for the first 5 samples: ' + str(true_vs_predicted_dt.take(5)))\n",
     "print('Decision tree depth: ' + str(dt_model.depth()))\n",
     "# numNodes() counts all nodes in the tree, not just the leaves\n",
     "print('Decision tree number of nodes: ' + str(dt_model.numNodes()))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### 8. Model Evaluation\n",
     "    Common metrics for evaluating regression models include the mean squared error (MSE), root mean squared error (RMSE), mean absolute error (MAE), and the R-squared coefficient.\n",
     "##### 8.1 Mean squared error and root mean squared error\n",
     "    MSE is the mean of the squared errors and is the loss function minimized by least squares regression:\n",
     " ![images](data/20160704172828611.gif)\n",
     "    That is, it sums the squared differences between predicted and actual values over all samples and divides by the number of samples; RMSE is the square root of MSE. Because the errors are squared, large errors are penalized more heavily.\n",
     "    To compute the average error of a model, we first make a prediction for each feature vector in the RDD of LabeledPoint instances, then compute the error between each prediction and its actual value, yielding an RDD of doubles, and finally call mean to average them. The squared error function is implemented as follows:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [],
   "source": [
    "def squared_error(actual, pred): \n",
    "    return (pred-actual)**2 "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "##### 8.2 Mean absolute error\n",
     "    MAE is the mean of the absolute differences between predicted and actual values:\n",
     "![images](data/20160704172908127.gif)\n",
     "MAE is broadly similar to MSE, except that it does not penalize large errors disproportionately. The code for computing it is:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [],
   "source": [
    "def abs_error(actual, pred): \n",
    "    return np.abs(pred-actual) "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "##### 8.3 Root mean squared log error\n",
     "    This metric is less widely used than MSE and MAE, but it is the one used in the Kaggle competition based on the bike sharing dataset. RMSLE is the RMSE computed after log-transforming the predicted and target values. It is suitable when the target spans a wide range of values and there is no need to penalize large absolute errors, as it effectively measures relative (percentage) error rather than absolute error. The code for computing RMSLE is:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [],
   "source": [
    "def squared_log_error(pred, actual): \n",
    "    return (np.log(pred+1)-np.log(actual+1))**2"
   ]
  },
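  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To see how the three metrics behave, here is a small standalone numeric sketch (the actual/predicted values below are made up for illustration):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy (actual, predicted) values, invented for illustration\n",
    "actual_toy = np.array([10.0, 20.0, 30.0])\n",
    "pred_toy = np.array([12.0, 18.0, 33.0])\n",
    "mse_toy = np.mean((pred_toy - actual_toy) ** 2)    # (4 + 4 + 9) / 3\n",
    "mae_toy = np.mean(np.abs(pred_toy - actual_toy))   # (2 + 2 + 3) / 3\n",
    "rmsle_toy = np.sqrt(np.mean((np.log(pred_toy + 1) - np.log(actual_toy + 1)) ** 2))\n",
    "print(mse_toy, mae_toy, rmsle_toy)"
   ]
  },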
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Linear Model - Mean Squared Error: 32858.2602\n",
      "Linear Model - Mean Absolute Error: 135.5146\n",
      "Linear Model - Root Mean Squared Log Error: 1.4852\n"
     ]
    }
   ],
   "source": [
     "# Apply each error function to every record of the true-vs-predicted RDD (pairs of label and prediction), then average:\n",
     "mse = true_vs_predicted.map(lambda row: squared_error(row[0], row[1])).mean()\n",
     "mae = true_vs_predicted.map(lambda row: abs_error(row[0], row[1])).mean()\n",
     "rmsle = np.sqrt(true_vs_predicted.map(lambda row: squared_log_error(row[0], row[1])).mean())\n",
     "print('Linear Model - Mean Squared Error: %2.4f' % mse)\n",
     "print('Linear Model - Mean Absolute Error: %2.4f' % mae)\n",
     "print('Linear Model - Root Mean Squared Log Error: %2.4f' % rmsle)"
   ]
  },
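  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For comparison, the same evaluation can be applied to the decision tree model, reusing the error functions and the true_vs_predicted_dt RDD defined above (a sketch; its output is not recorded here):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Apply the same error functions to the decision tree's (label, prediction) pairs\n",
    "mse_dt = true_vs_predicted_dt.map(lambda row: squared_error(row[0], row[1])).mean()\n",
    "mae_dt = true_vs_predicted_dt.map(lambda row: abs_error(row[0], row[1])).mean()\n",
    "rmsle_dt = np.sqrt(true_vs_predicted_dt.map(lambda row: squared_log_error(row[0], row[1])).mean())\n",
    "print('Decision Tree - Mean Squared Error: %2.4f' % mse_dt)\n",
    "print('Decision Tree - Mean Absolute Error: %2.4f' % mae_dt)\n",
    "print('Decision Tree - Root Mean Squared Log Error: %2.4f' % rmsle_dt)"
   ]
  },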
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
