{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "506032bf",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
     "#  21\\.  Bagging and Boosting Ensemble Learning  # \n",
     "\n",
     "##  21.1.  Introduction  # \n",
     "\n",
     "The previous experiments each explained the classification process of a single classifier in isolation. Every classifier has its own characteristics and suits certain kinds of data well, but in practice, because of uncertainty in the data, applying an individual classifier on its own may yield low classification accuracy. Ensemble learning was proposed to deal with this situation: it raises accuracy by combining multiple weak classifiers. \n",
     "\n",
     "##  21.2.  Key Points  # \n",
     "\n",
     "  * The concept of ensemble learning \n",
     "\n",
     "  * The bagging algorithm (Bagging) \n",
     "\n",
     "  * Random Forest \n",
     "\n",
     "  * The boosting algorithm (Boosting) \n",
     "\n",
     "  * Gradient boosted decision trees (GBDT) \n",
     "\n",
     "##  21.3.  The Concept of Ensemble Learning  # \n",
     "\n",
     "Before studying bagging and boosting, we first introduce a concept: ensemble learning. Ensemble learning, as the name suggests, completes a learning task by constructing multiple classifiers and using them together; it is also known as a multi-classifier system. Its defining feature is that it combines the strengths of the individual weak classifiers, so that the whole becomes stronger than any of its parts. \n",
     "\n",
     "Each weak classifier is called an \"individual learner\". The basic structure of ensemble learning is to generate a set of individual learners and then combine them with some strategy. \n",
     "\n",
     "By the type of the individual learners, ensembles usually fall into two categories: \n",
     "\n",
     "  * \"Homogeneous\" ensembles, in which all individual learners are of the same type, e.g. a \"decision tree ensemble\" where every individual learner is a decision tree. \n",
     "\n",
     "  * \"Heterogeneous\" ensembles, in which the individual learners are of different types, e.g. an ensemble that contains both decision tree models and support vector machine models. \n",
     "\n",
     "Likewise, by how the learners are generated and combined, ensemble learning also falls into two categories: \n",
     "\n",
     "  * Parallel methods: when no strong dependency exists between individual learners, they can be generated simultaneously; the representative algorithm is Bagging. \n",
     "\n",
     "  * Sequential methods: when strong dependencies exist between individual learners, they must be generated one after another; the representative algorithm is Boosting. \n",
     "\n",
     "[ ![https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid7506timestamp1553235184797.png](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid7506timestamp1553235184797.png) ](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid7506timestamp1553235184797.png)\n",
     "\n",
     "[ Source ](https://www.researchgate.net/publication/276549421_Argumentation_Based_Joint_Learning_A_Novel_Ensemble_Learning_Approach/figures?lo=1)\n",
     "\n",
     "##  21.4.  Combination Strategies  # \n",
     "\n",
     "In ensemble learning, once the data has been learned by multiple individual learners, how is the final result decided? Here, assuming the ensemble contains  $T$  individual learners  $\\\\{ h_{1},h_{2},…,h_{T}\\\\}$ , three strategies are commonly used. \n",
     "\n",
     "##  21.5.  Averaging  # \n",
     "\n",
     "For numerical outputs, the most common combination strategy is averaging (Averaging), which comes in two forms: \n",
     "\n",
     "  * Simple averaging: take the mean of the individual learners' outputs, i.e.  $H(x) = \\frac{1}{T}\\sum_{i=1}^{T} h_{i}(x)$ . \n",
     "\n",
     "  * Weighted averaging:  $H(x) = \\sum_{i=1}^{T} w_{i} h_{i}(x)$ , where  $w_{i}$  is the weight of individual learner  $h_{i}$ , usually with  $w_{i} \\geq 0$  and  $\\sum_{i=1}^{T} w_{i} = 1$ . \n",
     "\n",
     "##  21.6.  Voting  # \n",
     "\n",
     "For classification outputs, averaging is clearly a poor fit; the most common combination strategy is voting (Voting), of which two forms are used here: \n",
     "\n",
     "  * Plurality voting: after the individual learners classify a sample, the label that receives the most votes is chosen as the classification result. \n",
     "\n",
     "  * Weighted voting: analogous to weighted averaging,  $w_{i}$  is the weight of individual learner  $h_{i}$ , usually with  $w_{i}\\geq 0$ . \n",
    "\n",
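     "These strategies can be sketched in a few lines of NumPy; the per-learner outputs below are made up for the example: \n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "\n",
     "# Numeric outputs of T = 3 individual learners on 4 samples\n",
     "preds = np.array([[2.0, 1.0, 3.0, 2.0],\n",
     "                  [2.2, 0.8, 2.6, 1.8],\n",
     "                  [1.8, 1.2, 3.4, 2.2]])\n",
     "\n",
     "simple_avg = preds.mean(axis=0)  # simple averaging\n",
     "w = np.array([0.5, 0.3, 0.2])    # weights: w_i >= 0 and sum to 1\n",
     "weighted_avg = w @ preds         # weighted averaging\n",
     "\n",
     "# Class labels from the same learners: plurality voting per sample\n",
     "labels = np.array([[0, 1, 1, 2],\n",
     "                   [0, 1, 2, 2],\n",
     "                   [1, 1, 2, 0]])\n",
     "votes = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, labels)\n",
     "print(simple_avg, weighted_avg, votes)\n",
     "```\n",
     "\n",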
     "##  21.7.  Learning-Based Combination  # \n",
     "\n",
     "The two strategies above (averaging and voting) are relatively simple but may carry a large learning error. To address this there is a third, learning-based strategy, whose representative method is stacking. With stacking, instead of applying simple logic to the weak learners' outputs, an extra layer of learner is added: the weak learners' outputs on the training set are used as inputs to train a new learner, which produces the final result. \n",
     "\n",
     "In this setting, the weak learners are called first-level learners, and the learner used to combine them is called the second-level learner. For the test set, we first predict with the first-level learners to obtain the input samples of the second-level learner, then predict with the second-level learner to obtain the final prediction. \n",
    "\n",
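     "scikit-learn provides this strategy directly as `StackingClassifier`; here is a minimal sketch on synthetic data, with a decision tree and an SVM as first-level learners and a logistic regression as the second-level learner: \n",
     "\n",
     "```python\n",
     "from sklearn.datasets import make_classification\n",
     "from sklearn.ensemble import StackingClassifier\n",
     "from sklearn.linear_model import LogisticRegression\n",
     "from sklearn.svm import SVC\n",
     "from sklearn.tree import DecisionTreeClassifier\n",
     "\n",
     "X, y = make_classification(n_samples=200, random_state=7)\n",
     "\n",
     "# First-level (weak) learners; a logistic regression combines their outputs\n",
     "stack = StackingClassifier(\n",
     "    estimators=[('dt', DecisionTreeClassifier(random_state=7)), ('svc', SVC())],\n",
     "    final_estimator=LogisticRegression(),\n",
     ")\n",
     "stack.fit(X, y)\n",
     "print(stack.score(X, y))\n",
     "```\n",
     "\n",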
     "##  21.8.  The Bagging Algorithm  # \n",
     "\n",
     "With a general picture of ensemble learning in place, we now explain in detail one of its core algorithmic ideas: bagging. \n",
     "\n",
     "Bagging is the representative parallel ensemble method, and its principle is fairly simple. The algorithm steps are: \n",
     "\n",
     "  1. Data preparation: clean and organize the data as the situation requires. \n",
     "\n",
     "  2. Random sampling: randomly draw  $m$  samples to form one subsample set; repeat this  $T$  times with replacement to obtain  $T$  subsample sets. \n",
     "\n",
     "  3. Individual training: set up  $T$  individual learners and train each one on its corresponding subsample set. \n",
     "\n",
     "  4. Classification decision: combine the learners by voting to make the classification decision. \n",
    "\n",
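     "The four steps above can be sketched from scratch: bootstrap-sample `T` subsample sets, fit one tree per set, and vote. The synthetic data and the choice of `T` are only for illustration: \n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "from sklearn.datasets import make_classification\n",
     "from sklearn.tree import DecisionTreeClassifier\n",
     "\n",
     "X, y = make_classification(n_samples=300, random_state=0)\n",
     "rng = np.random.default_rng(0)\n",
     "T, m = 15, len(X)  # number of learners, subsample size\n",
     "\n",
     "trees = []\n",
     "for _ in range(T):\n",
     "    idx = rng.integers(0, len(X), size=m)  # sampling with replacement\n",
     "    trees.append(DecisionTreeClassifier().fit(X[idx], y[idx]))\n",
     "\n",
     "# Plurality vote of the T trees for every sample\n",
     "all_preds = np.array([t.predict(X) for t in trees])\n",
     "vote = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, all_preds)\n",
     "print((vote == y).mean())  # training accuracy of the ensemble\n",
     "```\n",
     "\n",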
     "##  21.9.  Bagging Tree  # \n",
     "\n",
     "As mentioned in the previous chapter on decision trees, a decision tree is a nearly \"perfect\" learner on its training data but is especially prone to overfitting, which ultimately lowers prediction accuracy. In fact, decision trees are often used as the weak classifier inside bagging. Below we run an experiment to compare a single decision tree with a bagging ensemble built from decision trees. \n",
     "\n",
     "This experiment uses the student grade prediction dataset from the previous chapter on decision trees. The preprocessing was described there in detail, so here we use the processed dataset directly. It is named `course-14-student.csv`; load and preview it: "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "8308cb63",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "File 'course-14-student.csv' already there; not retrieving.\n",
      "\n"
     ]
    }
   ],
   "source": [
    "!wget -nc https://cdn.aibydoing.com/aibydoing/files/course-14-student.csv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "cd567854",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>school</th>\n",
       "      <th>sex</th>\n",
       "      <th>address</th>\n",
       "      <th>Pstatus</th>\n",
       "      <th>Pedu</th>\n",
       "      <th>reason</th>\n",
       "      <th>guardian</th>\n",
       "      <th>traveltime</th>\n",
       "      <th>studytime</th>\n",
       "      <th>schoolsup</th>\n",
       "      <th>...</th>\n",
       "      <th>famrel</th>\n",
       "      <th>freetime</th>\n",
       "      <th>goout</th>\n",
       "      <th>Dalc</th>\n",
       "      <th>Walc</th>\n",
       "      <th>health</th>\n",
       "      <th>absences</th>\n",
       "      <th>G1</th>\n",
       "      <th>G2</th>\n",
       "      <th>G3</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>2</td>\n",
       "      <td>2</td>\n",
       "      <td>1</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>...</td>\n",
       "      <td>3</td>\n",
       "      <td>2</td>\n",
       "      <td>3</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>2</td>\n",
       "      <td>6</td>\n",
       "      <td>2</td>\n",
       "      <td>2</td>\n",
       "      <td>2</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>2</td>\n",
       "      <td>2</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>1</td>\n",
       "      <td>...</td>\n",
       "      <td>4</td>\n",
       "      <td>2</td>\n",
       "      <td>2</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>2</td>\n",
       "      <td>4</td>\n",
       "      <td>2</td>\n",
       "      <td>2</td>\n",
       "      <td>2</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>2</td>\n",
       "      <td>0</td>\n",
       "      <td>2</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>...</td>\n",
       "      <td>3</td>\n",
       "      <td>2</td>\n",
       "      <td>1</td>\n",
       "      <td>1</td>\n",
       "      <td>2</td>\n",
       "      <td>2</td>\n",
       "      <td>10</td>\n",
       "      <td>2</td>\n",
       "      <td>2</td>\n",
       "      <td>3</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>2</td>\n",
       "      <td>0</td>\n",
       "      <td>2</td>\n",
       "      <td>1</td>\n",
       "      <td>...</td>\n",
       "      <td>2</td>\n",
       "      <td>1</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>4</td>\n",
       "      <td>2</td>\n",
       "      <td>3</td>\n",
       "      <td>3</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>1</td>\n",
       "      <td>...</td>\n",
       "      <td>3</td>\n",
       "      <td>2</td>\n",
       "      <td>1</td>\n",
       "      <td>0</td>\n",
       "      <td>1</td>\n",
       "      <td>4</td>\n",
       "      <td>4</td>\n",
       "      <td>2</td>\n",
       "      <td>3</td>\n",
       "      <td>3</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>5 rows × 27 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "   school  sex  address  Pstatus  Pedu  reason  guardian  traveltime  \\\n",
       "0       0    0        0        0     1       2         2           1   \n",
       "1       0    0        0        1     2       2         0           0   \n",
       "2       0    0        0        1     2       0         2           0   \n",
       "3       0    0        0        1     0       1         2           0   \n",
       "4       0    0        0        1     0       1         0           0   \n",
       "\n",
       "   studytime  schoolsup  ...  famrel  freetime  goout  Dalc  Walc  health  \\\n",
       "0          1          0  ...       3         2      3     0     0       2   \n",
       "1          1          1  ...       4         2      2     0     0       2   \n",
       "2          1          0  ...       3         2      1     1     2       2   \n",
       "3          2          1  ...       2         1      1     0     0       4   \n",
       "4          1          1  ...       3         2      1     0     1       4   \n",
       "\n",
       "   absences  G1  G2  G3  \n",
       "0         6   2   2   2  \n",
       "1         4   2   2   2  \n",
       "2        10   2   2   3  \n",
       "3         2   3   3   1  \n",
       "4         4   2   3   3  \n",
       "\n",
       "[5 rows x 27 columns]"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import pandas as pd\n",
    "\n",
    "data = pd.read_csv(\"course-14-student.csv\", index_col=0)\n",
    "data.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a9994f97",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
     "With the preprocessed dataset loaded, we need to split it into a training set and a test set before applying bagging. Following common practice, the training set takes 70% of the data and the test set 30%. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "c31b50c2",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "((276, 26), (119, 26), (276,), (119,))"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "X_train, X_test, y_train, y_test = train_test_split(\n",
    "    data.iloc[:, :-1], data[\"G3\"], test_size=0.3, random_state=35\n",
    ")\n",
    "X_train.shape, X_test.shape, y_train.shape, y_test.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5819cb3a",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
     "For comparison, we first predict on this dataset with a single decision tree. Implementing decision tree prediction with scikit-learn was covered in detail in the previous chapter, so we use it directly here. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "6a51eb0a",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([3, 0, 3, 2, 1, 2, 3, 2, 3, 3, 0, 2, 1, 3, 3, 2, 3, 0, 1, 2, 1, 0,\n",
       "       1, 2, 3, 2, 3, 0, 3, 3, 3, 3, 2, 2, 3, 3, 0, 1, 2, 2, 2, 1, 3, 2,\n",
       "       1, 3, 2, 3, 3, 3, 3, 1, 1, 2, 2, 0, 1, 3, 2, 3, 3, 2, 2, 2, 2, 3,\n",
       "       2, 3, 2, 1, 0, 3, 2, 3, 3, 2, 1, 3, 0, 2, 3, 3, 3, 3, 0, 3, 3, 1,\n",
       "       3, 3, 1, 3, 3, 3, 2, 0, 2, 3, 0, 3, 1, 3, 1, 1, 3, 3, 3, 3, 3, 3,\n",
       "       2, 1, 1, 1, 3, 0, 3, 3, 3])"
      ]
     },
     "execution_count": 6,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn.tree import DecisionTreeClassifier\n",
    "\n",
    "dt_model = DecisionTreeClassifier(criterion=\"entropy\", random_state=34)\n",
     "dt_model.fit(X_train, y_train)  # Train the model on the training set\n",
    "\n",
    "dt_y_pred = dt_model.predict(X_test)\n",
    "dt_y_pred"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "62b269c7",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.8319327731092437"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn.metrics import accuracy_score\n",
    "\n",
     "accuracy_score(y_test, dt_y_pred)  # Accuracy of the single decision tree"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e5c9467f",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
     "Because the dataset used here has more features than the one in the previous chapter, the decision tree generalizes even less well. \n",
     "\n",
     "The prediction of a single decision tree is not satisfactory, so below we apply the bagging idea to raise accuracy, implementing the Bagging Tree algorithm with scikit-learn. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "eb8fc787",
   "metadata": {},
   "outputs": [],
   "source": [
    "BaggingClassifier(base_estimator=None, n_estimators=10, max_samples=1.0, max_features=1.0)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7bfcad4c",
   "metadata": {},
   "source": [
     "Where: \n",
     "\n",
     "  * `base_estimator`: the type of base (weak) classifier; the default is a decision tree. In recent scikit-learn versions this parameter is named `estimator`. \n",
     "\n",
     "  * `n_estimators`: the number of base learners (trees) to build; the default is 10. \n",
     "\n",
     "  * `max_samples`: the number of training samples drawn for each base learner; an int means a count, a float a proportion. The default is all samples. \n",
     "\n",
     "  * `max_features`: the number of features drawn; an int means a count, a float a proportion. The default is all features. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "d46b71c3",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([3, 0, 3, 2, 1, 2, 3, 2, 3, 3, 3, 2, 1, 3, 3, 2, 2, 0, 1, 2, 1, 0,\n",
       "       1, 3, 3, 2, 3, 0, 2, 3, 3, 3, 2, 2, 3, 3, 0, 1, 2, 2, 2, 1, 3, 3,\n",
       "       1, 3, 2, 3, 3, 3, 3, 3, 1, 2, 2, 0, 1, 3, 2, 3, 3, 2, 0, 2, 2, 3,\n",
       "       2, 3, 2, 3, 0, 3, 2, 2, 3, 2, 1, 2, 0, 2, 3, 1, 3, 3, 0, 3, 3, 1,\n",
       "       3, 3, 1, 3, 3, 3, 2, 0, 2, 3, 0, 3, 1, 3, 1, 1, 3, 3, 3, 3, 3, 3,\n",
       "       3, 1, 1, 1, 3, 0, 3, 3, 3])"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn.ensemble import BaggingClassifier\n",
    "\n",
     "tree = DecisionTreeClassifier(criterion=\"entropy\", random_state=34)  # Use a decision tree as the base learner\n",
    "bt_model = BaggingClassifier(tree, n_estimators=100, max_samples=1.0, random_state=3)\n",
    "\n",
    "bt_model.fit(X_train, y_train)\n",
    "bt_y_pred = bt_model.predict(X_test)\n",
    "bt_y_pred"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "3bd6a237",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.8907563025210085"
      ]
     },
     "execution_count": 9,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "accuracy_score(y_test, bt_y_pred)  # Accuracy of the Bagging Tree predictions"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e819a5cf",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
     "The accuracy shows a clear improvement once the decision trees are combined through bagging. \n",
     "\n",
     "##  21.10.  Random Forest  # \n",
     "\n",
     "The Bagging Tree algorithm builds each complete tree from all the features in its subsample set and then predicts by voting. Random forest improves further on the Bagging Tree algorithm. \n",
     "\n",
     "The idea of random forest is to process a large dataset with bootstrap sampling, i.e. randomly draw multiple subsample sets from the original data and grow one decision tree on each subsample set. In this way a \"forest\" made up of many small decision trees is built. Finally, the prediction chosen by the most trees is taken, by voting, as the output. \n",
     "\n",
     "[ ![https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid7506timestamp1553236039312.png](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid7506timestamp1553236039312.png) ](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid7506timestamp1553236039312.png)\n",
     "\n",
     "[ Source ](https://www.globalsoftwaresupport.com/random-forest-classifier-bagging-machine-learning/)\n",
     "\n",
     "So the name random forest comes from \"random sampling + a forest of decision trees\". \n",
     "\n",
     "##  21.11.  How Random Forest Works  # \n",
     "\n",
     "As a representative bagging algorithm, random forest works much like plain bagging but adds two refinements: \n",
     "\n",
     "  1. An ordinary decision tree picks the best split among all the features of its  $N$  samples; a random forest tree instead first randomly selects a subset of the features and then picks the best split within that subset. This further strengthens the model's generalization ability. \n",
     "\n",
     "  2. The size of that feature subset is chosen by cross-validation. \n",
     "\n",
     "Random forest algorithm flow: \n",
     "\n",
     "  1. Randomly draw  $n$  samples from the dataset with replacement. \n",
     "\n",
     "  2. Randomly select  $k$  of the features and build a decision tree on the drawn samples using those features. \n",
     "\n",
     "  3. Repeat the two steps above  $m$  times, generating  $m$  decision trees that form the random forest. \n",
     "\n",
     "  4. For new data, run it through every tree and take a vote to decide its class. \n",
    "\n",
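     "The flow can be sketched from scratch. Note the sketch picks `k` random features once per tree, matching the steps above, whereas scikit-learn's implementation resamples candidate features at every split; the synthetic data and forest size are only for illustration: \n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "from sklearn.datasets import make_classification\n",
     "from sklearn.tree import DecisionTreeClassifier\n",
     "\n",
     "X, y = make_classification(n_samples=300, n_features=12, random_state=1)\n",
     "rng = np.random.default_rng(1)\n",
     "m_trees, k = 25, int(np.sqrt(X.shape[1]))  # forest size, features per tree\n",
     "\n",
     "forest = []\n",
     "for _ in range(m_trees):\n",
     "    rows = rng.integers(0, len(X), size=len(X))           # step 1: bootstrap sample\n",
     "    cols = rng.choice(X.shape[1], size=k, replace=False)  # step 2: k random features\n",
     "    tree = DecisionTreeClassifier().fit(X[np.ix_(rows, cols)], y[rows])\n",
     "    forest.append((tree, cols))                           # step 3: repeat m times\n",
     "\n",
     "# Step 4: plurality vote over the forest\n",
     "preds = np.array([t.predict(X[:, c]) for t, c in forest])\n",
     "vote = np.apply_along_axis(lambda c: np.bincount(c).argmax(), 0, preds)\n",
     "print((vote == y).mean())  # training accuracy of the forest\n",
     "```\n",
     "\n",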
     "##  21.12.  Model Building and Prediction  # \n",
     "\n",
     "With the dataset split, the next step is to build the model and predict, which we implement with scikit-learn. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3cf488ba",
   "metadata": {},
   "outputs": [],
   "source": [
    "RandomForestClassifier(n_estimators, criterion, max_features, random_state=None)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b4d23264",
   "metadata": {},
   "source": [
     "Where: \n",
     "\n",
     "  * `n_estimators`: the number of trees to build; the default is 10 in older scikit-learn versions and 100 in recent ones. \n",
     "\n",
     "  * `criterion`: the feature split criterion; the default is `gini`, and `entropy` (information gain) can be selected. \n",
     "\n",
     "  * `max_features`: the number of features selected at random; the default is the square root of the number of features. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "787f963d",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([3, 0, 3, 2, 1, 2, 3, 2, 3, 3, 3, 2, 1, 3, 3, 2, 2, 0, 1, 2, 1, 0,\n",
       "       1, 3, 3, 2, 3, 0, 2, 3, 3, 3, 2, 2, 3, 3, 0, 1, 2, 2, 2, 1, 3, 3,\n",
       "       1, 3, 2, 3, 3, 3, 3, 3, 1, 2, 2, 0, 1, 3, 2, 3, 3, 2, 2, 2, 2, 3,\n",
       "       2, 3, 2, 3, 0, 3, 2, 2, 3, 2, 1, 2, 0, 2, 3, 1, 3, 3, 0, 3, 3, 1,\n",
       "       3, 3, 1, 3, 3, 3, 2, 0, 2, 3, 0, 3, 1, 3, 1, 1, 3, 3, 3, 3, 3, 3,\n",
       "       3, 1, 1, 1, 3, 0, 3, 3, 3])"
      ]
     },
     "execution_count": 10,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn.ensemble import RandomForestClassifier\n",
    "\n",
     "# Build 100 decision trees, using entropy to find the best split feature.\n",
     "rf_model = RandomForestClassifier(\n",
     "    n_estimators=100, max_features=None, criterion=\"entropy\"\n",
     ")\n",
     "\n",
     "rf_model.fit(X_train, y_train)  # Train the model\n",
    "rf_y_pred = rf_model.predict(X_test)\n",
    "rf_y_pred"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fe0716dd",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
     "Once the model is trained and classification predictions are made, comparing the predictions with the true labels gives the prediction accuracy. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "37892954",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.8823529411764706"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "accuracy_score(y_test, rf_y_pred)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a342ed09",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
     "The results show that on this dataset random forest's accuracy differs little from the Bagging Tree's; as the dataset grows and the number of features increases, random forest's advantage gradually emerges. \n",
     "\n",
     "##  21.13.  The Boosting Algorithm  # \n",
     "\n",
     "When strong dependencies exist between the individual learners, bagging is no longer appropriate; the best approach is then the sequential ensemble method: boosting (Boosting). \n",
     "\n",
     "Boosting algorithms can turn weak learners into a strong learner. The idea is to train an individual learner on the initial training set, then adjust the distribution of training samples according to that learner's performance, so that the samples it misclassified receive more attention afterwards; the next individual learner is then trained on the adjusted sample distribution. This repeats until the number of individual learners reaches a preset value  $T$ , and finally the outputs of these  $T$  individual learners are combined by weighting to produce the final output. \n",
     "\n",
     "##  21.14.  AdaBoost  # \n",
     "\n",
     "The most representative boosting algorithm is AdaBoost. \n",
     "\n",
     "AdaBoost (Adaptive Boosting) is adaptive in the following way: the weights of the samples misclassified by the previous individual learner are increased, the weights of correctly classified samples are decreased, and the reweighted samples are used to train the next base classifier. In every round a new weak classifier is added, until either a preset, sufficiently small error rate is reached or the preset maximum number of iterations is hit, at which point the final strong classifier is fixed. \n",
     "\n",
     "[ ![https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid7506timestamp1553236325799.png](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid7506timestamp1553236325799.png) ](https://cdn.aibydoing.com/aibydoing/images/document-uid214893labid7506timestamp1553236325799.png)\n",
     "\n",
     "[ Source ](https://subscription.packtpub.com/book/big_data_and_business_intelligence/9781788295758/4/ch04lvl1sec32/adaboost-classifier)\n",
     "\n",
     "Unlike generic boosting, AdaBoost does not need to know the weak classifiers' error rates in advance, and the accuracy of the final strong classifier depends on the accuracy of all the weak classifiers. \n",
     "\n",
     "AdaBoost algorithm flow: \n",
     "\n",
     "  1. Data preparation: obtain well-formed data through cleaning and organizing. \n",
     "\n",
     "  2. Initialize weights: with  $N$  training samples, each sample initially receives the same weight  $1/N$ . \n",
     "\n",
     "  3. Weak classifier prediction: feed the weighted training samples to a weak classifier for prediction. \n",
     "\n",
     "  4. Update weights: lower the weight of every correctly classified sample and raise the weight of every misclassified one; the reweighted sample set is then used to train the next classifier. \n",
     "\n",
     "  5. Combine into a strong classifier: repeat steps 3 and 4 until training ends. Weak classifiers with small classification error are given larger weights (these differ from the sample weights), so they play a bigger role in the final decision function, while those with large error are given smaller weights and play a smaller role; the combination yields the final output. \n",
    "\n",
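     "The flow above can be sketched as classic discrete AdaBoost with depth-1 decision stumps; the synthetic data and the 20 rounds are chosen arbitrarily for the example: \n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "from sklearn.datasets import make_classification\n",
     "from sklearn.tree import DecisionTreeClassifier\n",
     "\n",
     "X, y = make_classification(n_samples=200, random_state=2)\n",
     "y_pm = np.where(y == 1, 1, -1)  # labels as +1/-1\n",
     "\n",
     "n = len(X)\n",
     "w = np.full(n, 1 / n)           # step 2: equal initial weights 1/N\n",
     "stumps, alphas = [], []\n",
     "for _ in range(20):\n",
     "    stump = DecisionTreeClassifier(max_depth=1).fit(X, y_pm, sample_weight=w)\n",
     "    pred = stump.predict(X)     # step 3: weak classifier prediction\n",
     "    err = w[pred != y_pm].sum()\n",
     "    alpha = 0.5 * np.log((1 - err) / max(err, 1e-10))  # classifier weight\n",
     "    w *= np.exp(-alpha * y_pm * pred)  # step 4: reweight the samples\n",
     "    w /= w.sum()\n",
     "    stumps.append(stump)\n",
     "    alphas.append(alpha)\n",
     "\n",
     "# Step 5: weighted combination of the weak classifiers\n",
     "agg = sum(a * s.predict(X) for a, s in zip(alphas, stumps))\n",
     "print((np.sign(agg) == y_pm).mean())  # training accuracy\n",
     "```\n",
     "\n",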
     "##  21.15.  Model Building and Prediction  # \n",
     "\n",
     "With the dataset split, the next step is to build the model and predict, which we implement with scikit-learn. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b82f4263",
   "metadata": {},
   "outputs": [],
   "source": [
     "AdaBoostClassifier(base_estimator, n_estimators)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8437d137",
   "metadata": {},
   "source": [
     "Where: \n",
     "\n",
     "  * `base_estimator`: the type of weak classifier; the default is a CART classification tree (a depth-1 decision tree). In recent scikit-learn versions this parameter is named `estimator`. \n",
     "\n",
     "  * `n_estimators`: the maximum number of weak learners; the default is `50`. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "0bedf25a",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([3, 2, 3, 2, 1, 3, 3, 2, 3, 3, 3, 2, 1, 3, 3, 3, 2, 0, 1, 2, 1, 0,\n",
       "       1, 3, 3, 2, 3, 0, 2, 3, 3, 3, 2, 2, 3, 3, 0, 1, 2, 2, 2, 1, 3, 3,\n",
       "       1, 3, 2, 3, 3, 3, 3, 3, 1, 2, 3, 0, 1, 3, 3, 3, 3, 2, 0, 2, 2, 3,\n",
       "       2, 3, 3, 3, 0, 3, 3, 2, 3, 2, 1, 2, 0, 2, 2, 1, 3, 3, 0, 3, 3, 1,\n",
       "       3, 3, 1, 3, 3, 3, 2, 0, 2, 3, 0, 3, 1, 3, 1, 1, 3, 3, 3, 3, 3, 3,\n",
       "       3, 1, 1, 1, 3, 0, 3, 3, 3])"
      ]
     },
     "execution_count": 12,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn.ensemble import AdaBoostClassifier\n",
    "\n",
    "ad_model = AdaBoostClassifier(n_estimators=100)\n",
    "\n",
    "ad_model.fit(X_train, y_train)\n",
    "ad_y_pred = ad_model.predict(X_test)\n",
    "ad_y_pred"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "9a2f17c4",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.8403361344537815"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "accuracy_score(y_test, ad_y_pred)  # Accuracy of the AdaBoost predictions"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9752f5f6",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "source": [
     "The result shows that on this dataset the accuracy obtained with AdaBoost differs little from the single decision tree's. \n",
     "\n",
     "##  21.16.  Gradient Boosted Decision Trees (GBDT)  # \n",
     "\n",
     "The gradient boosted decision tree (Gradient Boosting Decision Tree, GBDT) is likewise a member of the boosting family. AdaBoost uses the previous round's weak learner error rate to update the training sample weights, whereas GBDT uses forward stagewise additive modeling, with its weak learners restricted to CART tree models. \n",
     "\n",
     "In a GBDT iteration, suppose the strong learner obtained in the previous round is  $f_{t-1}(x)$  with loss function  $L(y, f_{t-1}(x))$ . The goal of the current round is to find a CART regression tree weak learner  $h_{t}(x)$  that minimizes the current loss  $L(y, f_{t}(x)) = L(y, f_{t-1}(x) + h_{t}(x))$ . In other words, each round finds the decision tree that makes the samples' loss as small as possible. \n",
     "\n",
     "Algorithm flow: \n",
     "\n",
     "  1. Data preparation: obtain well-formed data through cleaning and organizing. \n",
     "\n",
     "  2. Initialization: initialize the strong learner, typically with the constant prediction that minimizes the loss over the training samples. \n",
     "\n",
     "  3. Gradient computation: for each sample, compute the negative gradient of the loss at the current model's prediction (for squared loss this is simply the residual). \n",
     "\n",
     "  4. CART fitting: fit a CART regression tree to the samples and their gradient values. \n",
     "\n",
     "  5. Update the strong learner: in each region of the fitted CART tree, compute the best fitted value via the loss function, and add the tree to the strong learner assembled so far. \n",
     "\n",
     "  6. Combine into a strong learner: repeat steps 3, 4 and 5 until training ends, yielding a strong learner that produces the final output. \n",
    "\n",
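     "For intuition, the loop can be sketched for regression with squared loss, where each round fits a small CART tree to the current residuals; the synthetic sine data, tree depth, and learning rate are only for illustration: \n",
     "\n",
     "```python\n",
     "import numpy as np\n",
     "from sklearn.tree import DecisionTreeRegressor\n",
     "\n",
     "rng = np.random.default_rng(3)\n",
     "X = rng.uniform(-3, 3, size=(200, 1))\n",
     "y = np.sin(X[:, 0]) + rng.normal(0, 0.1, size=200)\n",
     "\n",
     "lr = 0.1\n",
     "f = np.full_like(y, y.mean())  # initialize with a constant prediction\n",
     "trees = []\n",
     "for _ in range(100):\n",
     "    resid = y - f              # negative gradient of the squared loss\n",
     "    tree = DecisionTreeRegressor(max_depth=3).fit(X, resid)\n",
     "    f += lr * tree.predict(X)  # update the strong learner\n",
     "    trees.append(tree)\n",
     "\n",
     "print(np.mean((y - f) ** 2))   # training MSE of the boosted model\n",
     "```\n",
     "\n",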
     "With the dataset split, the next step is to build the model and predict, which we implement with scikit-learn. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5c749b5a",
   "metadata": {},
   "outputs": [],
   "source": [
     "GradientBoostingClassifier(max_depth=3, learning_rate=0.1, n_estimators=100, random_state=None)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d0f4df00",
   "metadata": {},
   "source": [
     "Where: \n",
     "\n",
     "  * `max_depth`: the maximum depth of each CART tree; the default is 3. \n",
     "\n",
     "  * `learning_rate`: the learning rate; the default is 0.1. \n",
     "\n",
     "  * `n_estimators`: the maximum number of weak learners; the default is 100. \n",
     "\n",
     "  * `random_state`: the random seed. "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "f36530ba",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "array([3, 0, 3, 2, 1, 2, 3, 2, 3, 3, 3, 2, 1, 3, 3, 2, 2, 0, 1, 2, 1, 1,\n",
       "       1, 3, 3, 2, 3, 0, 3, 3, 3, 3, 2, 2, 3, 3, 0, 1, 2, 2, 2, 1, 3, 3,\n",
       "       1, 3, 2, 3, 3, 3, 3, 3, 1, 2, 2, 0, 1, 3, 2, 3, 3, 2, 0, 2, 2, 3,\n",
       "       2, 3, 2, 3, 0, 2, 2, 2, 3, 2, 1, 2, 0, 2, 2, 1, 3, 3, 0, 3, 3, 1,\n",
       "       3, 3, 1, 3, 3, 3, 2, 0, 2, 3, 0, 3, 1, 3, 1, 1, 3, 3, 3, 3, 3, 3,\n",
       "       3, 1, 1, 1, 3, 0, 3, 3, 3])"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn.ensemble import GradientBoostingClassifier\n",
    "\n",
    "gb_model = GradientBoostingClassifier(\n",
    "    n_estimators=100, learning_rate=1.0, random_state=33\n",
    ")\n",
    "\n",
    "gb_model.fit(X_train, y_train)\n",
    "gb_y_pred = gb_model.predict(X_test)\n",
    "gb_y_pred"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "929b8756",
   "metadata": {
    "lines_to_next_cell": 0
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "0.8823529411764706"
      ]
     },
     "execution_count": 15,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "accuracy_score(y_test, gb_y_pred)  # Accuracy of the GBDT predictions"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3340ecc1",
   "metadata": {},
   "source": [
     "We can see that bagging and boosting usually produce better predictions, though sometimes they bring no improvement. Choosing a machine learning classifier is exactly like this: there is no best classifier, only the most suitable one, and different datasets, with their different characteristics, perform differently across classifiers. \n",
     "\n",
     "##  21.17.  Summary  # \n",
     "\n",
     "In this section we studied the principles of the bagging and boosting approaches to ensemble learning, covered the representative algorithms of each family (Bagging Tree and random forest; AdaBoost and gradient boosted trees), and implemented them with scikit-learn. \n",
     "\n",
     "Related links \n",
     "\n",
     "  * [ Random forest - Wikipedia ](https://zh.wikipedia.org/zh-hans/%E9%9A%8F%E6%9C%BA%E6%A3%AE%E6%9E%97)\n",
     "\n",
     "  * [ Bootstrap aggregating - Wikipedia ](https://en.wikipedia.org/wiki/Bootstrap_aggregating)\n"
   ]
  }
 ],
 "metadata": {
  "jupytext": {
   "cell_metadata_filter": "-all",
   "main_language": "python",
   "notebook_metadata_filter": "-all"
  },
  "kernelspec": {
   "display_name": ".venv",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
