{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 1. Project Overview"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### 1.1 Project goal\n",
    "The goal of this project is to predict the forest cover type from a labeled forest-cover dataset. The training data come from the US Forest Service (USFS) Resource Information System database and the US Geological Survey (USGS). The target variable, the forest cover type, takes one of the following seven values:\n",
    "+ 1 - Spruce/Fir\n",
    "+ 2 - Lodgepole Pine\n",
    "+ 3 - Ponderosa Pine\n",
    "+ 4 - Cottonwood/Willow\n",
    "+ 5 - Aspen\n",
    "+ 6 - Douglas-fir\n",
    "+ 7 - Krummholz"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### 1.2 Data overview\n",
    "+ Training set: data/CoverType/train.csv\n",
    "+ Test set: data/CoverType/test.csv\n",
    "\n",
    "The training set (train.csv) contains 15120 samples with both features and labels (the forest cover type); the test set (test.csv) contains features only, with no labels.\n",
    "##### 1.3 Field descriptions\n",
    "+ Elevation - elevation in meters\n",
    "+ Aspect - aspect in degrees azimuth\n",
    "+ Slope - slope in degrees\n",
    "+ Horizontal_Distance_To_Hydrology - horizontal distance to the nearest surface water\n",
    "+ Vertical_Distance_To_Hydrology - vertical distance to the nearest surface water\n",
    "+ Horizontal_Distance_To_Roadways - horizontal distance to the nearest roadway\n",
    "+ Hillshade_9am (0 to 255 index) - hillshade index at 9am on the summer solstice, range 0~255\n",
    "+ Hillshade_Noon (0 to 255 index) - hillshade index at noon on the summer solstice, range 0~255\n",
    "+ Hillshade_3pm (0 to 255 index) - hillshade index at 3pm on the summer solstice, range 0~255\n",
    "+ Horizontal_Distance_To_Fire_Points - horizontal distance to the nearest wildfire ignition point\n",
    "+ Wilderness_Area (4 binary columns, 0 = absence or 1 = presence) - wilderness area designation, one binary column per area:\n",
    "    + 1 - Rawah Wilderness Area\n",
    "    + 2 - Neota Wilderness Area\n",
    "    + 3 - Comanche Peak Wilderness Area\n",
    "    + 4 - Cache la Poudre Wilderness Area\n",
    "+ Soil_Type (40 binary columns, 0 = absence or 1 = presence) - soil type designation, one binary column per type\n",
    "+ Cover_Type (7 types, integers 1 to 7) - forest cover type, the target variable\n",
    "\n",
    "As listed above, the dataset has 55 columns including the target variable; the CSV files additionally contain an Id column, for 56 columns in total."
   ]
  },
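  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The Wilderness_Area and Soil_Type fields are one-hot encoded: a single categorical value is expanded into one binary column per category. A minimal plain-Python sketch of this encoding (the helper name here is ours, for illustration only):\n",
    "```python\n",
    "# Sketch: a wilderness-area id (1-4) becomes four binary columns.\n",
    "def one_hot_wilderness(area_id, num_areas=4):\n",
    "    return [1 if i == area_id else 0 for i in range(1, num_areas + 1)]\n",
    "\n",
    "one_hot_wilderness(3)  # -> [0, 0, 1, 0]\n",
    "```"
   ]
  },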
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2. Preparing the Dataset"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### 2.1 Loading the data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Import required modules\n",
    "from pyspark.context import SparkContext\n",
    "from pyspark.sql.session import SparkSession"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Create the SparkContext and SparkSession\n",
    "sc = SparkContext(\"local[*]\",\"Forest Cover Type\")\n",
    "spark = SparkSession(sc)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Load the training data\n",
    "covType = sc.textFile(\"data/CoverType/train.csv\")\n",
    "# Split each line on commas\n",
    "data = covType.map(lambda row : row.split(\",\"))\n",
    "# Grab the header row (field names)\n",
    "header = data.first()\n",
    "# Drop the header row from the dataset\n",
    "data = data.filter(lambda row : row != header). \\\n",
    "            map(lambda row : [float(x) for x in row]) # cast every value to float"
   ]
  },
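  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The filter/map chain above can be sketched in plain Python on a tiny in-memory sample (the rows below are made up for illustration):\n",
    "```python\n",
    "# Sketch of the header removal and float cast, without Spark.\n",
    "lines = [\"Id,Elevation,Cover_Type\", \"1,2596,5\", \"2,2590,5\"]\n",
    "rows = [line.split(\",\") for line in lines]\n",
    "header = rows[0]\n",
    "data = [[float(x) for x in row] for row in rows if row != header]\n",
    "# data == [[1.0, 2596.0, 5.0], [2.0, 2590.0, 5.0]]\n",
    "```"
   ]
  },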
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['Id',\n",
       " 'Elevation',\n",
       " 'Aspect',\n",
       " 'Slope',\n",
       " 'Horizontal_Distance_To_Hydrology',\n",
       " 'Vertical_Distance_To_Hydrology',\n",
       " 'Horizontal_Distance_To_Roadways',\n",
       " 'Hillshade_9am',\n",
       " 'Hillshade_Noon',\n",
       " 'Hillshade_3pm',\n",
       " 'Horizontal_Distance_To_Fire_Points',\n",
       " 'Wilderness_Area1',\n",
       " 'Wilderness_Area2',\n",
       " 'Wilderness_Area3',\n",
       " 'Wilderness_Area4',\n",
       " 'Soil_Type1',\n",
       " 'Soil_Type2',\n",
       " 'Soil_Type3',\n",
       " 'Soil_Type4',\n",
       " 'Soil_Type5',\n",
       " 'Soil_Type6',\n",
       " 'Soil_Type7',\n",
       " 'Soil_Type8',\n",
       " 'Soil_Type9',\n",
       " 'Soil_Type10',\n",
       " 'Soil_Type11',\n",
       " 'Soil_Type12',\n",
       " 'Soil_Type13',\n",
       " 'Soil_Type14',\n",
       " 'Soil_Type15',\n",
       " 'Soil_Type16',\n",
       " 'Soil_Type17',\n",
       " 'Soil_Type18',\n",
       " 'Soil_Type19',\n",
       " 'Soil_Type20',\n",
       " 'Soil_Type21',\n",
       " 'Soil_Type22',\n",
       " 'Soil_Type23',\n",
       " 'Soil_Type24',\n",
       " 'Soil_Type25',\n",
       " 'Soil_Type26',\n",
       " 'Soil_Type27',\n",
       " 'Soil_Type28',\n",
       " 'Soil_Type29',\n",
       " 'Soil_Type30',\n",
       " 'Soil_Type31',\n",
       " 'Soil_Type32',\n",
       " 'Soil_Type33',\n",
       " 'Soil_Type34',\n",
       " 'Soil_Type35',\n",
       " 'Soil_Type36',\n",
       " 'Soil_Type37',\n",
       " 'Soil_Type38',\n",
       " 'Soil_Type39',\n",
       " 'Soil_Type40',\n",
       " 'Cover_Type']"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Inspect the field names\n",
    "header"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "15120"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Count the training samples\n",
    "data.count()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[1.0,\n",
       " 2596.0,\n",
       " 51.0,\n",
       " 3.0,\n",
       " 258.0,\n",
       " 0.0,\n",
       " 510.0,\n",
       " 221.0,\n",
       " 232.0,\n",
       " 148.0,\n",
       " 6279.0,\n",
       " 1.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 1.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 0.0,\n",
       " 5.0]"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Inspect the first training row\n",
    "data.first()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3. Data Preprocessing\n",
    "##### 3.1 Removing the Id column\n",
    "Looking at the field names in header, the first column is a row Id. It carries no information useful for training, so we drop it."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "# The Id sits at index 0, so keep only the columns from index 1 onward\n",
    "dataWithoutId = data.map(lambda row : row[1:])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[[2596.0,\n",
       "  51.0,\n",
       "  3.0,\n",
       "  258.0,\n",
       "  0.0,\n",
       "  510.0,\n",
       "  221.0,\n",
       "  232.0,\n",
       "  148.0,\n",
       "  6279.0,\n",
       "  1.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  1.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  5.0],\n",
       " [2590.0,\n",
       "  56.0,\n",
       "  2.0,\n",
       "  212.0,\n",
       "  -6.0,\n",
       "  390.0,\n",
       "  220.0,\n",
       "  235.0,\n",
       "  151.0,\n",
       "  6225.0,\n",
       "  1.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  1.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  0.0,\n",
       "  5.0]]"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# After dropping the Id, inspect the first two samples\n",
    "dataWithoutId.take(2)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As shown above, the Id column has been removed from the dataset."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### 3.2 Preprocessing the Cover_Type target variable\n",
    "MLlib's decision tree classifier expects class labels to start from 0, but in the original dataset the target variable starts from 1, so we subtract 1 from every label.\n",
    "+ Note: the target variable is the last column of the dataset"
   ]
  },
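  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The shift itself is a one-liner per row; a plain-Python sketch with made-up rows:\n",
    "```python\n",
    "# Subtracting 1 from the last element turns labels 1-7 into 0-6.\n",
    "rows = [[2596.0, 5.0], [2590.0, 2.0]]  # [features..., label]\n",
    "shifted = [row[:-1] + [row[-1] - 1] for row in rows]\n",
    "# shifted == [[2596.0, 4.0], [2590.0, 1.0]]\n",
    "```"
   ]
  },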
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{2.0: 0, 6.0: 1, 4.0: 2, 5.0: 3, 1.0: 4, 7.0: 5, 3.0: 6}"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# List the distinct target labels (the zipWithIndex values are just arbitrary indices)\n",
    "dataWithoutId.map(lambda fields: fields[-1]).distinct().zipWithIndex().collectAsMap()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As shown above, the target variable has 7 classes labeled 1~7 rather than starting from 0, so it needs preprocessing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Subtract 1 from the last column (the label) of every row\n",
    "dataWithoutId = dataWithoutId.map(lambda row : row[:-1] + [row[-1] - 1])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{4.0: 0, 0.0: 1, 6.0: 2, 2.0: 3, 1.0: 4, 5.0: 5, 3.0: 6}"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Check the distinct target labels after preprocessing\n",
    "dataWithoutId.map(lambda fields: fields[-1]).distinct().zipWithIndex().collectAsMap()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As shown above, the 7 classes are now labeled 0~6"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### 3.3 Conversion and splitting\n",
    "After the preprocessing above, the data still cannot be fed directly to the model; two more steps are needed:\n",
    "+ wrap each sample in a LabeledPoint\n",
    "+ split the data into a training set and a test set"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "from pyspark.mllib.regression import LabeledPoint\n",
    "# Wrap each row as a LabeledPoint: label = last column, features = the rest\n",
    "labelPointRdd = dataWithoutId.map(lambda r: LabeledPoint(r[-1],r[:-1]))\n",
    "# Split into training and test sets (roughly 80%/20%)\n",
    "(trainData,testData) = labelPointRdd.randomSplit([8,2])"
   ]
  },
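  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that randomSplit normalizes its weights, so [8, 2] requests an approximately 80/20 split. The normalization amounts to:\n",
    "```python\n",
    "# randomSplit scales the weights to fractions summing to 1.\n",
    "weights = [8, 2]\n",
    "fractions = [w / sum(weights) for w in weights]  # -> [0.8, 0.2]\n",
    "```"
   ]
  },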
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[LabeledPoint(4.0, [2590.0,56.0,2.0,212.0,-6.0,390.0,220.0,235.0,151.0,6225.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,1.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0,0.0])]"
      ]
     },
     "execution_count": 16,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Inspect one training sample\n",
    "trainData.take(1)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 4. Model Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "DecisionTreeModel classifier of depth 5 with 53 nodes\n"
     ]
    }
   ],
   "source": [
    "# Import and train a decision tree classifier (7 classes, no categorical features declared)\n",
    "from pyspark.mllib.tree import DecisionTree\n",
    "model = DecisionTree.trainClassifier(trainData,7,{})\n",
    "print(model)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "DecisionTreeModel classifier of depth 5 with 53 nodes\n",
      "  If (feature 0 <= 2663.5)\n",
      "   If (feature 0 <= 2374.5)\n",
      "    If (feature 6 <= 195.5)\n",
      "     If (feature 3 <= 15.0)\n",
      "      Predict: 3.0\n",
      "     Else (feature 3 > 15.0)\n",
      "      If (feature 30 <= 0.5)\n",
      "       Predict: 2.0\n",
      "      Else (feature 30 > 0.5)\n",
      "       Predict: 3.0\n",
      "    Else (feature 6 > 195.5)\n",
      "     If (feature 3 <= 15.0)\n",
      "      If (feature 12 <= 0.5)\n",
      "       Predict: 3.0\n",
      "      Else (feature 12 > 0.5)\n",
      "       Predict: 5.0\n",
      "     Else (feature 3 > 15.0)\n",
      "      Predict: 3.0\n",
      "   Else (feature 0 > 2374.5)\n",
      "    If (feature 17 <= 0.5)\n",
      "     If (feature 10 <= 0.5)\n",
      "      Predict: 5.0\n",
      "     Else (feature 10 > 0.5)\n",
      "      If (feature 31 <= 0.5)\n",
      "       Predict: 1.0\n",
      "      Else (feature 31 > 0.5)\n",
      "       Predict: 4.0\n",
      "    Else (feature 17 > 0.5)\n",
      "     If (feature 1 <= 95.5)\n",
      "      If (feature 3 <= 280.0)\n",
      "       Predict: 5.0\n",
      "      Else (feature 3 > 280.0)\n",
      "       Predict: 2.0\n",
      "     Else (feature 1 > 95.5)\n",
      "      If (feature 3 <= 15.0)\n",
      "       Predict: 5.0\n",
      "      Else (feature 3 > 15.0)\n",
      "       Predict: 2.0\n",
      "  Else (feature 0 > 2663.5)\n",
      "   If (feature 0 <= 3203.5)\n",
      "    If (feature 0 <= 2926.5)\n",
      "     If (feature 5 <= 452.5)\n",
      "      If (feature 6 <= 199.5)\n",
      "       Predict: 0.0\n",
      "      Else (feature 6 > 199.5)\n",
      "       Predict: 4.0\n",
      "     Else (feature 5 > 452.5)\n",
      "      If (feature 12 <= 0.5)\n",
      "       Predict: 1.0\n",
      "      Else (feature 12 > 0.5)\n",
      "       Predict: 4.0\n",
      "    Else (feature 0 > 2926.5)\n",
      "     If (feature 0 <= 3029.5)\n",
      "      If (feature 3 <= 76.0)\n",
      "       Predict: 0.0\n",
      "      Else (feature 3 > 76.0)\n",
      "       Predict: 1.0\n",
      "     Else (feature 0 > 3029.5)\n",
      "      If (feature 7 <= 239.5)\n",
      "       Predict: 0.0\n",
      "      Else (feature 7 > 239.5)\n",
      "       Predict: 1.0\n",
      "   Else (feature 0 > 3203.5)\n",
      "    If (feature 0 <= 3291.5)\n",
      "     If (feature 52 <= 0.5)\n",
      "      If (feature 51 <= 0.5)\n",
      "       Predict: 0.0\n",
      "      Else (feature 51 > 0.5)\n",
      "       Predict: 6.0\n",
      "     Else (feature 52 > 0.5)\n",
      "      Predict: 6.0\n",
      "    Else (feature 0 > 3291.5)\n",
      "     If (feature 45 <= 0.5)\n",
      "      Predict: 6.0\n",
      "     Else (feature 45 > 0.5)\n",
      "      If (feature 2 <= 7.5)\n",
      "       Predict: 6.0\n",
      "      Else (feature 2 > 7.5)\n",
      "       Predict: 0.0\n",
      "\n"
     ]
    }
   ],
   "source": [
    "# Print the tree structure\n",
    "print(model.toDebugString())"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As the output above shows, the trained decision tree has depth 5 and 53 nodes, 27 of which are leaves"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 5. Model Evaluation"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[(1.0, 4.0), (1.0, 4.0), (4.0, 4.0), (4.0, 4.0), (1.0, 4.0)]"
      ]
     },
     "execution_count": 19,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Evaluate the model using AUC (area under the ROC curve)\n",
    "## Predict on the held-out test set and pair predictions with true labels\n",
    "predict = model.predict(testData.map(lambda p:p.features))\n",
    "predict_real = predict.zip(testData.map(lambda p: p.label))\n",
    "predict_real.take(5)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "AUC=0.8120227597749455\n"
     ]
    }
   ],
   "source": [
    "# Compute AUC with BinaryClassificationMetrics (note: this metric is\n",
    "# designed for binary problems; MulticlassMetrics would suit this\n",
    "# 7-class task better)\n",
    "from pyspark.mllib.evaluation import BinaryClassificationMetrics\n",
    "metrics = BinaryClassificationMetrics(predict_real)\n",
    "print(\"AUC=\"+str(metrics.areaUnderROC))"
   ]
  },
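  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A caveat: BinaryClassificationMetrics is designed for binary problems, so the AUC above should be read with caution on this 7-class task; MulticlassMetrics (e.g. its accuracy) is a better fit. Multiclass accuracy over (prediction, label) pairs is simply the fraction of exact matches, sketched here in plain Python on the five pairs shown above:\n",
    "```python\n",
    "# Fraction of pairs where the prediction equals the true label.\n",
    "pairs = [(1.0, 4.0), (1.0, 4.0), (4.0, 4.0), (4.0, 4.0), (1.0, 4.0)]\n",
    "accuracy = sum(1 for p, y in pairs if p == y) / len(pairs)  # -> 0.4\n",
    "```"
   ]
  },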
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 6. Making Predictions with the Model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Sample Id: 15174.0, prediction: Lodgepole Pine\n",
      "Sample Id: 15208.0, prediction: Lodgepole Pine\n",
      "Sample Id: 15270.0, prediction: Lodgepole Pine\n",
      "Sample Id: 15320.0, prediction: Lodgepole Pine\n",
      "Sample Id: 15436.0, prediction: Lodgepole Pine\n",
      "Sample Id: 15767.0, prediction: Lodgepole Pine\n",
      "Sample Id: 15830.0, prediction: Lodgepole Pine\n",
      "Sample Id: 15949.0, prediction: Lodgepole Pine\n",
      "Sample Id: 15971.0, prediction: Lodgepole Pine\n",
      "Sample Id: 16102.0, prediction: Lodgepole Pine\n",
      "Sample Id: 16219.0, prediction: Lodgepole Pine\n",
      "Sample Id: 16256.0, prediction: Lodgepole Pine\n",
      "Sample Id: 16301.0, prediction: Lodgepole Pine\n",
      "Sample Id: 16651.0, prediction: Lodgepole Pine\n",
      "Sample Id: 16756.0, prediction: Lodgepole Pine\n",
      "Sample Id: 16886.0, prediction: Aspen\n",
      "Sample Id: 16918.0, prediction: Lodgepole Pine\n",
      "Sample Id: 16935.0, prediction: Lodgepole Pine\n",
      "Sample Id: 16946.0, prediction: Lodgepole Pine\n",
      "Sample Id: 16968.0, prediction: Lodgepole Pine\n"
     ]
    }
   ],
   "source": [
    "def predict(sc,model):\n",
    "    # Load the unlabeled test set\n",
    "    testSet = sc.textFile(\"data/CoverType/test.csv\")\n",
    "    firstLine = testSet.first()\n",
    "    # Preprocess: drop the header, split on commas, cast to float;\n",
    "    # the Id (column 0) is stored in the label slot for display only\n",
    "    testSet = testSet.filter(lambda row : row != firstLine). \\\n",
    "                      map(lambda row : row.split(\",\")). \\\n",
    "                      map(lambda row : [float(x) for x in row]). \\\n",
    "                      map(lambda row : LabeledPoint(row[0],row[1:]))\n",
    "    # Map class index (0-6) to cover-type name\n",
    "    DescDict={0:\"Spruce/Fir\",\n",
    "              1:\"Lodgepole Pine\",\n",
    "              2:\"Ponderosa Pine\",\n",
    "              3:\"Cottonwood/Willow\",\n",
    "              4:\"Aspen\",\n",
    "              5:\"Douglas-fir\",\n",
    "              6:\"Krummholz\"}\n",
    "    for sample in testSet.sample(False,0.01,111).take(20):\n",
    "        dataId = sample.label\n",
    "        prediction = model.predict(sample.features)\n",
    "        coverType = DescDict[prediction]\n",
    "        print(\"Sample Id: {}, prediction: {}\".format(dataId,coverType))\n",
    "predict(sc,model)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
