{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. Why does Bagging improve model performance? (10 points)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### First, what is a Bagging model: several weak learners are built separately, and their results are averaged at the end.   \n",
    "1. The weak learners are independent of one another, so they can be trained in parallel.   \n",
    "2. Each weak learner is trained on a Bootstrap sample of the data; the best split point is found by scanning every threshold of every feature, and the weak learner is built from that split.   \n",
    "3. For a classification task, the class receiving the most votes from the weak learners is the final class; for a regression task, the average of the weak learners' outputs is the final output.   \n",
    "\n",
    "#### Second, why does Bagging improve model performance?   \n",
    "Start with bias and variance:   \n",
    "Bias: the difference between the expectation of the model's predictions and the true values; it reflects the model's fitting ability.   \n",
    "Variance: the change in performance caused by changes in the training set, i.e. how strongly data perturbations affect the model; an overfit model shows high variance.   \n",
    "Bagging averages several weak learners, which reduces the model's variance and thereby improves its stability.   "
   ]
  },
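  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The variance argument can be checked with a minimal numeric sketch (idealized: it treats the weak learners as independent noisy estimates of one true value, whereas real bagged trees are correlated, so the actual reduction is smaller):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "\n",
    "# Pretend each weak learner is a noisy estimate of the same true value 5.0.\n",
    "n_trials, n_learners = 5000, 50\n",
    "single = 5.0 + rng.normal(0.0, 1.0, size=(n_trials, n_learners))\n",
    "\n",
    "# Bagging averages the learners; for independent learners the variance\n",
    "# drops by roughly a factor of n_learners.\n",
    "bagged = single.mean(axis=1)\n",
    "print(single.var(), bagged.var())\n",
    "```\n"
   ]
  },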
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. In which respects is a random forest random? (10 points)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(1) The samples for each decision tree are chosen at random (Bootstrap sampling).   \n",
    "(2) The features for each decision tree are chosen at random.   \n",
    "(3) On top of that, at each split a random subset of features is drawn again, and only those features are scanned for the best split point. This is what distinguishes a random forest from a plain Bagging model. "
   ]
  },
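  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In scikit-learn these sources of randomness map directly onto `RandomForestClassifier` parameters; a small sketch on synthetic data:\n",
    "\n",
    "```python\n",
    "from sklearn.datasets import make_classification\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "\n",
    "X, y = make_classification(n_samples=300, n_features=20, random_state=0)\n",
    "\n",
    "# bootstrap=True      -> (1): each tree trains on a random Bootstrap sample\n",
    "# max_features='sqrt' -> (3): each split scans only a random feature subset\n",
    "rf = RandomForestClassifier(n_estimators=50, bootstrap=True,\n",
    "                            max_features='sqrt', random_state=0)\n",
    "rf.fit(X, y)\n",
    "print(rf.score(X, y))\n",
    "```\n"
   ]
  },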
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. Both random forests and GBDT use decision trees as base learners. How do the decision trees in the two models differ? (10 points)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Random forest: a special kind of Bagging model. Classification tasks use classification trees, and regression tasks use regression trees. The trees are constructed in parallel, meaning the decision trees are independent of one another.\n",
    "\n",
    "GBDT: uses regression trees, because GBDT is a boosting model: each tree is built on top of the previous one, in a serial fashion, and the results of all trees are summed at the end. If classification trees were used, summing the results would be meaningless. For example, when predicting gender, if the first tree predicts class 0 (male) and the second predicts class 1 (female), adding the two predictions has no real meaning. Only with regression trees does the sum make sense: when predicting age, say, if the first tree predicts 20 and the second predicts 3, the final result is 20 + 3 = 23. Even for classification tasks, GBDT therefore uses regression trees."
   ]
  },
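  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The serial, additive structure can be sketched with plain regression trees fitting residuals (a simplified sketch of boosting with the L2 loss, not LightGBM's or XGBoost's actual implementation):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.tree import DecisionTreeRegressor\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "X = rng.uniform(0, 10, size=(200, 1))\n",
    "y = 2.0 * X.ravel() + rng.normal(0.0, 0.5, 200)\n",
    "\n",
    "# Boosting with L2 loss: each regression tree fits the residual of the\n",
    "# running summed prediction, and the tree outputs are added up.\n",
    "pred = np.zeros_like(y)\n",
    "for _ in range(20):\n",
    "    tree = DecisionTreeRegressor(max_depth=2, random_state=0)\n",
    "    tree.fit(X, y - pred)            # residual = negative gradient for L2 loss\n",
    "    pred += 0.5 * tree.predict(X)    # shrunken contribution of this tree\n",
    "\n",
    "print(np.mean((y - pred) ** 2))\n",
    "```\n"
   ]
  },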
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. Briefly explain why LightGBM trains faster than XGBoost. (30 points)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(1) Histogram optimization:   \n",
    "When choosing a split point, XGBoost has to scan every feature and every threshold within each feature, and the feature values must be pre-sorted first; both steps cost a lot of time.   \n",
    "LightGBM instead uses a histogram algorithm: the values of each feature are bucketed into N bins, so the threshold search no longer visits every distinct feature value, only the N bin boundaries, and no sorting of feature values is needed. This greatly shortens training time.\n",
    "\n",
    "(2) The GOSS algorithm:   \n",
    "When XGBoost subsamples the training data for each tree, the random samples drawn for different trees still overlap heavily.   \n",
    "LightGBM selects samples for each tree with GOSS (Gradient-based One-Side Sampling): after the previous tree finishes its iteration, the gradient of every sample is computed; the n samples with the largest absolute gradients are kept, m further samples are drawn at random from the rest, and these m + n samples are passed to the next tree.   \n",
    "Why can samples be selected by gradient magnitude? Because the magnitude of the gradient indicates the size of the error. With the L2 loss, for example, the gradient is the prediction minus the true value: a large absolute value means a large error, while a small one means the prediction is already accurate and needs little further learning, which greatly speeds up iteration.\n",
    "\n",
    "(3) Exclusive Feature Bundling:   \n",
    "In a high-dimensional sparse feature space many features are mutually exclusive (that is, for any given sample at most one of them takes a non-zero value). Such features can be bundled into a single feature, the histogram is built on that bundle, and the feature scan is accelerated once more.\n",
    "\n",
    "(4) Histogram subtraction:  \n",
    "In the histogram algorithm of point (1), how is a candidate split judged to be the best one? The gain still has to be computed, with the formula:   \n",
    "gain = S(L)*S(L)/N(L) + S(R)*S(R)/N(R) - S(P)*S(P)/N(P)   \n",
    "where S is the sum of the sample gradients, N is the number of samples, and L/R/P denote the left/right/parent node. The best split point is the one with the largest gain.   \n",
    "Once the gradient sums of the parent node and the left node are known, the right node's sum needs no further pass over the samples: it is simply S(P) - S(L).   \n",
    "This is histogram subtraction, which again yields a large speedup.\n"
   ]
  },
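  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A toy sketch of the GOSS selection step described in point (2) (the ratios a and b here are hypothetical stand-ins, not LightGBM's `top_rate`/`other_rate` defaults):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "grad = rng.normal(0.0, 1.0, 10000)   # per-sample gradients from the previous tree\n",
    "\n",
    "# Keep the top a = 20% of samples by |gradient|, then draw b = 10% of the\n",
    "# remaining small-gradient samples at random.\n",
    "a, b = 0.2, 0.1\n",
    "n = grad.size\n",
    "order = np.argsort(-np.abs(grad))\n",
    "top = order[:int(a * n)]                                  # always kept\n",
    "small = rng.choice(order[int(a * n):], size=int(b * n), replace=False)\n",
    "\n",
    "# LightGBM up-weights the sampled small-gradient rows by (1 - a) / b so the\n",
    "# estimated gain stays approximately unbiased.\n",
    "weights = np.ones(n)\n",
    "weights[small] = (1 - a) / b\n",
    "selected = np.concatenate([top, small])\n",
    "print(selected.size)\n",
    "```\n"
   ]
  },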
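  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The histogram build, the gain formula from point (4), and histogram subtraction can be sketched for a single feature as follows (a simplified sketch, not LightGBM's actual code):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "x = rng.uniform(0.0, 1.0, 1000)      # one feature\n",
    "g = rng.normal(0.0, 1.0, 1000)       # per-sample gradients\n",
    "\n",
    "# Histogram: bucket the feature into n_bins bins and accumulate the\n",
    "# gradient sum S and the sample count N of each bin.\n",
    "n_bins = 16\n",
    "b = np.minimum((x * n_bins).astype(int), n_bins - 1)\n",
    "S = np.zeros(n_bins)\n",
    "N = np.zeros(n_bins)\n",
    "np.add.at(S, b, g)\n",
    "np.add.at(N, b, 1.0)\n",
    "\n",
    "S_P, N_P = S.sum(), N.sum()          # parent node totals\n",
    "\n",
    "# Scan only the n_bins - 1 bin boundaries; the right child comes from\n",
    "# histogram subtraction instead of a second pass over the samples.\n",
    "best_gain = -np.inf\n",
    "S_L = N_L = 0.0\n",
    "for i in range(n_bins - 1):\n",
    "    S_L += S[i]\n",
    "    N_L += N[i]\n",
    "    S_R, N_R = S_P - S_L, N_P - N_L\n",
    "    gain = S_L * S_L / N_L + S_R * S_R / N_R - S_P * S_P / N_P\n",
    "    best_gain = max(best_gain, gain)\n",
    "print(best_gain)\n",
    "```\n"
   ]
  },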
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 5. Using tfidf features and LightGBM (gbdt), complete the Otto product classification, tune the hyperparameters as close to optimal as possible, run the model on the test data, submit to the Kaggle website, and report the ranking. (20 points)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### The model has already been trained and saved locally, so the training code is not included here; the saved model is applied directly to the test set."
   ]
  },
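  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, the omitted training could look roughly like the sketch below. The grid values and `n_estimators` are hypothetical stand-ins (the actual tuned values are not shown in this notebook), and synthetic 9-class data replaces the real Otto features:\n",
    "\n",
    "```python\n",
    "from sklearn.datasets import make_classification\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "from lightgbm.sklearn import LGBMClassifier\n",
    "\n",
    "# Stand-in data with 9 classes like Otto; the real run uses org + tfidf features.\n",
    "X, y = make_classification(n_samples=500, n_features=20, n_informative=12,\n",
    "                           n_classes=9, random_state=0)\n",
    "\n",
    "param_grid = {'num_leaves': [31, 63], 'learning_rate': [0.05, 0.1]}\n",
    "gs = GridSearchCV(LGBMClassifier(boosting_type='gbdt', n_estimators=50),\n",
    "                  param_grid, scoring='neg_log_loss', cv=3)\n",
    "gs.fit(X, y)\n",
    "print(gs.best_params_)\n",
    "```\n"
   ]
  },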
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### First, import the required modules"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd \n",
    "import numpy as np\n",
    "\n",
    "import lightgbm as lgbm\n",
    "from lightgbm.sklearn import LGBMClassifier\n",
    "\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "\n",
    "import matplotlib.pyplot as plt\n",
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Load the test data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>id</th>\n",
       "      <th>feat_1</th>\n",
       "      <th>feat_2</th>\n",
       "      <th>feat_3</th>\n",
       "      <th>feat_4</th>\n",
       "      <th>feat_5</th>\n",
       "      <th>feat_6</th>\n",
       "      <th>feat_7</th>\n",
       "      <th>feat_8</th>\n",
       "      <th>feat_9</th>\n",
       "      <th>...</th>\n",
       "      <th>feat_84_tfidf</th>\n",
       "      <th>feat_85_tfidf</th>\n",
       "      <th>feat_86_tfidf</th>\n",
       "      <th>feat_87_tfidf</th>\n",
       "      <th>feat_88_tfidf</th>\n",
       "      <th>feat_89_tfidf</th>\n",
       "      <th>feat_90_tfidf</th>\n",
       "      <th>feat_91_tfidf</th>\n",
       "      <th>feat_92_tfidf</th>\n",
       "      <th>feat_93_tfidf</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>1</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.00000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>...</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.421803</td>\n",
       "      <td>0.052224</td>\n",
       "      <td>0.842245</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>2</td>\n",
       "      <td>0.032787</td>\n",
       "      <td>0.039216</td>\n",
       "      <td>0.21875</td>\n",
       "      <td>0.228571</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>...</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.143963</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.070171</td>\n",
       "      <td>0.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>3</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.019608</td>\n",
       "      <td>0.18750</td>\n",
       "      <td>0.014286</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>...</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.078248</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.071995</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>4</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.00000</td>\n",
       "      <td>0.014286</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>...</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.139311</td>\n",
       "      <td>0.034257</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>5</td>\n",
       "      <td>0.016393</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.00000</td>\n",
       "      <td>0.014286</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.026316</td>\n",
       "      <td>0.026316</td>\n",
       "      <td>0.0</td>\n",
       "      <td>...</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.556178</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>5 rows × 187 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "   id    feat_1    feat_2   feat_3    feat_4  feat_5  feat_6    feat_7  \\\n",
       "0   1  0.000000  0.000000  0.00000  0.000000     0.0     0.0  0.000000   \n",
       "1   2  0.032787  0.039216  0.21875  0.228571     0.0     0.0  0.000000   \n",
       "2   3  0.000000  0.019608  0.18750  0.014286     0.0     0.0  0.000000   \n",
       "3   4  0.000000  0.000000  0.00000  0.014286     0.0     0.0  0.000000   \n",
       "4   5  0.016393  0.000000  0.00000  0.014286     0.0     0.0  0.026316   \n",
       "\n",
       "     feat_8  feat_9      ...        feat_84_tfidf  feat_85_tfidf  \\\n",
       "0  0.000000     0.0      ...                  0.0       0.000000   \n",
       "1  0.000000     0.0      ...                  0.0       0.000000   \n",
       "2  0.000000     0.0      ...                  0.0       0.000000   \n",
       "3  0.000000     0.0      ...                  0.0       0.139311   \n",
       "4  0.026316     0.0      ...                  0.0       0.000000   \n",
       "\n",
       "   feat_86_tfidf  feat_87_tfidf  feat_88_tfidf  feat_89_tfidf  feat_90_tfidf  \\\n",
       "0       0.421803       0.052224       0.842245       0.000000            0.0   \n",
       "1       0.000000       0.000000       0.000000       0.143963            0.0   \n",
       "2       0.000000       0.000000       0.078248       0.000000            0.0   \n",
       "3       0.034257       0.000000       0.000000       0.000000            0.0   \n",
       "4       0.000000       0.000000       0.000000       0.000000            0.0   \n",
       "\n",
       "   feat_91_tfidf  feat_92_tfidf  feat_93_tfidf  \n",
       "0       0.000000       0.000000       0.000000  \n",
       "1       0.000000       0.070171       0.000000  \n",
       "2       0.000000       0.000000       0.071995  \n",
       "3       0.000000       0.000000       0.000000  \n",
       "4       0.556178       0.000000       0.000000  \n",
       "\n",
       "[5 rows x 187 columns]"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "dpath = './data/'\n",
    "test1 = pd.read_csv(dpath +\"Otto_FE_test_org.csv\")\n",
    "#test = pd.read_csv(dpath +\"Otto_FE_test_log.csv\")\n",
    "test2 = pd.read_csv(dpath +\"Otto_FE_test_tfidf.csv\")\n",
    "\n",
    "# drop the duplicate id column\n",
    "test2 = test2.drop([\"id\"], axis=1)\n",
    "test = pd.concat([test1, test2], axis=1, ignore_index=False)\n",
    "test.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "test_id = test['id']\n",
    "X_test = test.drop([\"id\"], axis=1)\n",
    "\n",
    "# keep the feature names for later use (visualization)\n",
    "feat_names = X_test.columns\n",
    "from scipy.sparse import csr_matrix\n",
    "X_test = csr_matrix(X_test)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Load the model trained with LightGBM's gbdt algorithm"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pickle\n",
    "model = pickle.load(open(\"./model/Otto_LightGBM_org_tfidf.pkl\", 'rb'))\n",
    "\n",
    "# output the probability of each class\n",
    "y_test_pred = model.predict_proba(X_test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(144368, 9)"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "y_test_pred.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Generate the submission file"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "out_df = pd.DataFrame(y_test_pred)\n",
    "\n",
    "# name the probability columns Class_1 ... Class_9, as Kaggle expects\n",
    "out_df.columns = ['Class_' + str(i + 1) for i in range(9)]\n",
    "\n",
    "out_df = pd.concat([test_id, out_df], axis=1)\n",
    "out_df.to_csv(\"./data/LightGBM_org_tfidf.csv\", index=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<img  style=\"float:left\" src=\"kaggle_resule.png\" width = \"300\" height = \"200\" >"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 6. Using tfidf features and LightGBM (goss), complete the Otto product classification, tune the hyperparameters as close to optimal as possible, run the model on the test data, submit to the Kaggle website, and report the ranking. (20 points)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Load the model trained with LightGBM's goss algorithm"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "model_goss = pickle.load(open(\"./model/Otto_LightGBM_goss_org_tfidf.pkl\", 'rb'))\n",
    "\n",
    "# output the probability of each class\n",
    "y_test_pred_goss = model_goss.predict_proba(X_test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(144368, 9)"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "y_test_pred_goss.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### Generate the submission file"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [],
   "source": [
    "out_df_goss = pd.DataFrame(y_test_pred_goss)\n",
    "\n",
    "# name the probability columns Class_1 ... Class_9, as Kaggle expects\n",
    "out_df_goss.columns = ['Class_' + str(i + 1) for i in range(9)]\n",
    "\n",
    "out_df_goss = pd.concat([test_id, out_df_goss], axis=1)\n",
    "out_df_goss.to_csv(\"./data/LightGBM_goss_org_tfidf.csv\", index=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<img  style=\"float:left\" src=\"kaggle_resule.png\" width = \"300\" height = \"200\" >"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
