{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "logical-profit",
   "metadata": {},
   "source": [
    "## 投票法的思路"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "proper-retail",
   "metadata": {},
   "source": [
    "投票法是集成学习中常用的技巧，可以帮助我们提高模型的泛化能力，减少模型的错误率。举个例子，在航空航天领域，每个零件发出的电信号都对航空器的成功发射起到重要作用。如果我们有一个二进制形式的信号：\n",
    "\n",
    "11101100100111001011011011011\n",
    "\n",
    "在传输过程中第二位发生了翻转\n",
    "\n",
    "10101100100111001011011011011\n",
    "\n",
    "这导致的结果可能是致命的。一个常用的纠错方法是重复多次发送数据，并以少数服从多数的方法确定正确的传输数据。一般情况下，错误总是发生在局部，因此融合多个数据是降低误差的一个好方法，这就是投票法的基本思路。\n",
    "\n",
    "对于回归模型来说，投票法最终的预测结果是多个其他回归模型预测结果的平均值。\n",
    "\n",
    "对于分类模型，硬投票法的预测结果是多个模型预测结果中出现次数最多的类别，软投票对各类预测结果的概率进行求和，最终选取概率之和最大的类标签。\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "contrary-france",
   "metadata": {},
   "source": [
    "## 投票法的原理分析"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ecological-dependence",
   "metadata": {},
   "source": [
    "投票法是一种遵循少数服从多数原则的集成学习模型，通过多个模型的集成降低方差，从而提高模型的鲁棒性。在理想情况下，投票法的预测效果应当优于任何一个基模型的预测效果。\n",
    "\n",
    "投票法在回归模型与分类模型上均可使用：\n",
    "\n",
    "- 回归投票法：预测结果是所有模型预测结果的平均值。\n",
    "- 分类投票法：预测结果是所有模型种出现最多的预测结果。\n",
    "\n",
    "分类投票法又可以被划分为硬投票与软投票：\n",
    "\n",
    "- 硬投票：预测结果是所有投票结果最多出现的类。\n",
    "- 软投票：预测结果是所有投票结果中概率加和最大的类。\n",
    "\n",
    "下面我们使用一个例子说明硬投票：\n",
    "\n",
    "> 对于某个样本：\n",
    ">\n",
    "> 模型 1 的预测结果是 类别 A\n",
    ">\n",
    "> 模型 2 的预测结果是 类别 B\n",
    ">\n",
    "> 模型 3 的预测结果是 类别 B\n",
    "\n",
    "有2/3的模型预测结果是B，因此硬投票法的预测结果是B\n",
    "\n",
    "同样的例子说明软投票：\n",
    "\n",
    "> 对于某个样本：\n",
    ">\n",
    "> 模型 1 的预测结果是 类别 A 的概率为 99%\n",
    ">\n",
    "> 模型 2 的预测结果是 类别 A 的概率为 49%\n",
    ">\n",
    "> 模型 3 的预测结果是 类别 A 的概率为 49%\n",
    "\n",
    "最终对于类别A的预测概率的平均是 (99 + 49 + 49) / 3 = 65.67%，因此软投票法的预测结果是A。\n",
    "\n",
    "从这个例子我们可以看出，软投票法与硬投票法可以得出完全不同的结论。相对于硬投票，软投票法考虑到了预测概率这一额外的信息，因此可以得出比硬投票法更加准确的预测结果。\n",
    "\n",
    "在投票法中，我们还需要考虑到不同的基模型可能产生的影响。理论上，基模型可以是任何已被训练好的模型。但在实际应用上，想要投票法产生较好的结果，需要满足两个条件：\n",
    "\n",
    "- 基模型之间的效果不能差别过大。当某个基模型相对于其他基模型效果过差时，该模型很可能成为噪声。\n",
    "- 基模型之间应该有较小的同质性。例如在基模型预测效果近似的情况下，基于树模型与线性模型的投票，往往优于两个树模型或两个线性模型。\n",
    "\n",
    "当投票合集中使用的模型能预测出清晰的类别标签时，适合使用硬投票。当投票集合中使用的模型能预测类别的概率时，适合使用软投票。软投票同样可以用于那些本身并不预测类成员概率的模型，只要他们可以输出类似于概率的预测分数值（例如支持向量机、k-最近邻和决策树）。\n",
    "\n",
    "投票法的局限性在于，它对所有模型的处理是一样的，这意味着所有模型对预测的贡献是一样的。如果一些模型在某些情况下很好，而在其他情况下很差，这是使用投票法时需要考虑到的一个问题。\n"
   ]
  },
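  {
   "cell_type": "markdown",
   "id": "voting-toy-sketch",
   "metadata": {},
   "source": [
    "The two worked examples above can be checked with a few lines of plain Python (a minimal sketch of the arithmetic only, not the sklearn API):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "voting-toy-sketch-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "from collections import Counter\n",
    "\n",
    "# hard voting: the majority class among the predicted labels\n",
    "preds = ['A', 'B', 'B']\n",
    "hard = Counter(preds).most_common(1)[0][0]\n",
    "\n",
    "# soft voting: average the predicted probability of class A across the models\n",
    "probs_a = [0.99, 0.49, 0.49]\n",
    "soft = 'A' if sum(probs_a) / len(probs_a) > 0.5 else 'B'\n",
    "print(hard, soft)  # hard voting picks B, soft voting picks A"
   ]
  },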
  {
   "cell_type": "markdown",
   "id": "measured-medline",
   "metadata": {},
   "source": [
    "## 投票法的案例分析(基于sklearn，介绍pipe管道的使用以及voting的使用)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "increased-discovery",
   "metadata": {},
   "source": [
    "Sklearn中提供了 [VotingRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.VotingRegressor.html) 与 [VotingClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.VotingClassifier.html) 两个投票方法。 这两种模型的操作方式相同，并采用相同的参数。使用模型需要提供一个模型列表，列表中每个模型采用Tuple的结构表示，第一个元素代表名称，第二个元素代表模型，需要保证每个模型必须拥有唯一的名称。\n",
    "\n",
    "例如这里，我们定义两个模型：\n",
    "\n",
    "    models = [('lr',LogisticRegression()),('svm',SVC())]\n",
    "    ensemble = VotingClassifier(estimators=models)\n",
    "    \n",
    "模型还提供了voting参数让我们选择软投票或者硬投票：\n",
    "\n",
    "    models = [('lr',LogisticRegression()),('svm',SVC())]\n",
    "    ensemble = VotingClassifier(estimators=models, voting='soft')\n",
    "    \n",
    "下面我们使用一个完整的例子演示投票法的使用：\n",
    "首先我们创建一个1000个样本，20个特征的随机数据集："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "little-exclusive",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-03-16T10:26:49.060430Z",
     "start_time": "2021-03-16T10:26:48.291432Z"
    }
   },
   "outputs": [],
   "source": [
    "from sklearn.datasets import make_classification\n",
    "\n",
    "def get_dataset():\n",
    "    # define dataset\n",
    "    X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=2)\n",
    "    # summarize the dataset\n",
    "#     print(X.shape, y.shape)\n",
    "    return X,y"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "tutorial-sacramento",
   "metadata": {},
   "source": [
    "我们使用多个KNN模型作为基模型演示投票法，其中每个模型采用不同的邻居值K参数："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "excessive-verification",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-03-16T10:26:50.222430Z",
     "start_time": "2021-03-16T10:26:50.131433Z"
    }
   },
   "outputs": [],
   "source": [
    "from sklearn.neighbors import KNeighborsClassifier\n",
    "from sklearn.ensemble import VotingClassifier\n",
    "\n",
    "def get_voting():\n",
    "    # define the base models\n",
    "    models,ensemble = list(), list()\n",
    "    models.append(('knn1', KNeighborsClassifier(n_neighbors=1)))\n",
    "    models.append(('knn3', KNeighborsClassifier(n_neighbors=3)))\n",
    "    models.append(('knn5', KNeighborsClassifier(n_neighbors=5)))\n",
    "    models.append(('knn7', KNeighborsClassifier(n_neighbors=7)))\n",
    "    models.append(('knn9', KNeighborsClassifier(n_neighbors=9)))\n",
    "    ensemble = [x for x in models]\n",
    "    # define the voting ensemble\n",
    "#     \tensemble = VotingClassifier(estimators=models, voting='hard')\n",
    "    ensemble.append(('hard_voting',VotingClassifier(estimators=models,voting = 'hard')))\n",
    "    return ensemble"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "enclosed-roads",
   "metadata": {},
   "source": [
    "然后，我们可以创建一个模型列表来评估投票带来的提升，包括KNN模型配置的每个独立版本和硬投票模型。下面的get_models()函数可以为我们创建模型列表进行评估。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "indie-width",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-03-16T10:26:51.306430Z",
     "start_time": "2021-03-16T10:26:51.301430Z"
    }
   },
   "outputs": [],
   "source": [
    "# evaluate a give model using cross-validation\n",
    "from sklearn.model_selection import RepeatedStratifiedKFold\n",
    "from sklearn.model_selection import cross_val_score\n",
    "\n",
    "def evaluate_model(model, X, y):\n",
    "    cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)\n",
    "#     print('{:*^40}'.format(' start'))\n",
    "#     print(len(X),len(y))\n",
    "    scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1, error_score='raise')\n",
    "    return scores"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "transparent-dating",
   "metadata": {},
   "source": [
    "然后，我们可以报告每个算法的平均性能，还可以创建一个箱形图和须状图来比较每个算法的精度分数分布。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "eastern-boost",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-03-16T10:26:57.546532Z",
     "start_time": "2021-03-16T10:26:52.752531Z"
    },
    "pycharm": {
     "is_executing": true
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "knn1: Acc:0.873, std:0.030\n",
      "knn3: Acc:0.889, std:0.038\n",
      "knn5: Acc:0.895, std:0.031\n",
      "knn7: Acc:0.899, std:0.035\n",
      "knn9: Acc:0.900, std:0.033\n",
      "hard_voting: Acc:0.902, std:0.034\n",
      "hard_voting: Acc:0.902, std:0.034\n"
     ]
    },
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXwAAAD5CAYAAAAk7Y4VAAAAOXRFWHRTb2Z0d2FyZQBNYXRwbG90bGliIHZlcnNpb24zLjMuNCwgaHR0cHM6Ly9tYXRwbG90bGliLm9yZy8QVMy6AAAACXBIWXMAAAsTAAALEwEAmpwYAAAZ/ElEQVR4nO3de5Ad5X3m8e+DQAYjLhKadTkahJRdkdUgCI6PRbyRIzCxLZRdyYBjNLYTK6VFccUitRiciBIVy2JVJA5x9mIZVg5EgZQly8SxtTaxICAKy4sTHSEkVowFY3mDLt4wBIiXEFu33/7RPXA4jGZ6zvScS/fzqTrF6e63T/9ezfCcnrdvigjMzKz4Tml1AWZm1hwOfDOzknDgm5mVhAPfzKwkHPhmZiVxaqsLqDd16tSYMWNGq8swM+soO3fufCEiuoZr03aBP2PGDKrVaqvLMDPrKJL+fqQ2HtIxMysJB76ZWUk48M3MSsKBb2ZWEg58M7OSyBT4khZI2iepX9LKIZZfIOlhSXskPSqpu2bZdEkPSuqT9LSkGTnWb2ZmGY0Y+JImAOuAq4AeoFdST12zO4B7I+ISYA1we82ye4E/iojZwFzg+TwKNzOz0cmyhz8X6I+I/RFxBNgELK5r0wM8kr7fNrg8/WI4NSIeAoiIVyLi1VwqNzOzUckS+NOAAzXTB9N5tXYD16TvrwbOknQecCHwsqSvSdol6Y/SvxjeQNJySVVJ1YGBgdH3ouQkNfyy1iv6z6/o/eskeR20vRmYL2kXMB84BBwnuZL3PenydwE/CyytXzki1kdEJSIqXV3DXhlsQ4iIk76yLLfWKvrPr+j96yRZAv8QcH7NdHc67zURcTgiromIdwCr0nkvk/w18GQ6HHQM+DrwCznUbWZmo5Ql8HcAsyTNlDQRWAJsqW0gaaqkwc+6BbinZt1zJQ3utr8XeHrsZZuZ2WiNGPjpnvkKYCvQB2yOiL2S1khalDa7HNgn6RngbcDadN3jJMM5D0t6ChDwpdx7YWZmI1K7jZNVKpXw3TLzI8ljoR2s6D+/ovevmSTtjIjKcG18pa2ZWUk48M3MSsKBb2ZWEg58M7OScOCbmZWEA9/MrCQc+GZmJXFqqwswM+tUY7nBWyuuP3Dgm5k1aLjQbseLyjykY2ZWEg58M7OScOCbmZWEA9/MrCQc+GZmJeHANzMrCQe+mVlJZAp8SQsk7ZPUL2nlEMsvkPSwpD2SHpXUXbf8bEkHJX0hr8LNzGx0Rgx8SROAdcBVQA/QK6mnrtkdwL0RcQmwBri9bvltwGNjL9fMzBqVZQ9/LtAfEfsj4giwCVhc16YHeCR9v612uaR3kjzn9sGxl2tmZo3KEvjTgAM10wfTebV2A9ek768GzpJ0nqRTgD8meZD5SUlaLqkqqTowMJCt8lGQ1PDLWs8/P7N85HXQ9mZgvqRdwHzgEHAc+G3ggYg4ONzKEbE+IioRUenq6sqppDd8/klfWZZba/nnZ5aPLDdPOwScXzPdnc57TUQcJt3DlzQJuDYiXpb0buA9kn4bmARMlPRKRLzpwK+ZmY2vLIG/A5glaSZJ0C8BPlLbQNJU4MWIOAHcAtwDEBEfrWmzFKg47M3MWmPEIZ2IOAasALYCfcDmiNgraY2kRWmzy4F9kp4hOUC7dpzqNTOzBqndxjkrlUpUq9Wmba8d71mdJ/evs7l/navZfZO0MyIqw7XxlbZmZiXhwDczKwkHvplZSTjwzcxKwoFvZlYSDnwzs5Jw4JuZlYQD38ysJBz4ZmYl4cA3MxvGlClTGr41dyPrTZkyZdz6kuXmaWZmpfXSSy81+xYJ4/bZ3sM3MysJB76ZWUk48M3MSsKBb2ZWEg58M7OSyBT4khZI2iepX9KbHlEo6QJJD0vaI+lRSd3p/EslPS5pb7rsurw7YGZm2YwY+JImAOuAq4AeoFdST12zO4B7I+ISYA1wezr/VeA3IuIiYAHwXySd
m1PtZmY2Cln28OcC/RGxPyKOAJuAxXVteoBH0vfbBpdHxDMR8Wz6/jDwPNCVR+FmZjY6WQJ/GnCgZvpgOq/WbuCa9P3VwFmSzqttIGkuMBH4Qf0GJC2XVJVUHRgYyFq7FUiRrmZ0/4rXv6LI60rbm4EvSFoKPAYcAo4PLpT0duA+4OMRcaJ+5YhYD6yH5CHmOdVkHaRIVzMOxf3LV7P7VxRZAv8QcH7NdHc67zXpcM01AJImAddGxMvp9NnAt4BVEfG9HGo2M7MGZBnS2QHMkjRT0kRgCbCltoGkqZIGP+sW4J50/kTgr0gO6N6fX9lmZjZaIwZ+RBwDVgBbgT5gc0TslbRG0qK02eXAPknPAG8D1qbzPwz8MrBU0pPp69Kc+2BmZhmomeNuWVQqlahWq03bnqSmjj02W6f0r9l1enveXjtuayzbk7QzIirDtfGVtmZmJeHANzMrCQe+mVlJOPDNzErCgW9mVhIOfDOzkvBDzDvElClTeOmllxpat5HL0CdPnsyLL77Y0PbMiiQ+czasPqe52xsnDvwO4XuVmLWGPvvj5p+Hv3p8PttDOmZmJeHANzMrCQe+mVlJOPDNzErCB22tLRTpTIiTbs/9y3d7Nmq+W6bvJunteXveXs7bGnh1gE8/9mnumH8HU8+YOu7bS9fz3TLNzJrtrj138cQ/PMFdu+9qdSlvkCnwJS2QtE9Sv6SVQyy/QNLDkvZIelRSd82yj0t6Nn19PM/izczazcCrA3yj/xsEwdf7v84L//JCq0t6zYiBL2kCsA64CugBeiX11DW7g+QxhpcAa4Db03WnAJ8BLgPmAp+RNDm/8s2Ka+DVAZZ+e2lbBUaeitq/u/bcxYk4AcCJONFWe/lZ9vDnAv0RsT8ijgCbgMV1bXqAR9L322qWfwB4KCJejIiXgIeABWMv26z42nVYIC9F7N/g3v3RE0cBOHriaFvt5WcJ/GnAgZrpg+m8WruBa9L3VwNnSTov47pIWi6pKqk6MDCQtXazwmrnYYE8FLV/tXv3g9ppLz+vg7Y3A/Ml7QLmA4eA41lXjoj1EVGJiEpXV1dOJZl1rnYeFshDUfu3+/ndr+3dDzp64ihPPv9kawqqk+U8/EPA+TXT3em810TEYdI9fEmTgGsj4mVJh4DL69Z9dAz1mhXeyYYFPvHznxj1KX7tqMj9u3/R/a0uYVhZ9vB3ALMkzZQ0EVgCbKltIGmqpMHPugW4J32/FXi/pMnpwdr3p/PM7CTafVhgrIrev3Y2YuBHxDFgBUlQ9wGbI2KvpDWSFqXNLgf2SXoGeBuwNl33ReA2ki+NHcCadF7upkyZgqRRv4CG1psyZcp4dMNGoahnebT7sMBYFb1/7awwV9oW+Uo/b29ot33vNr6676t8+Oc+zK2/eOu4b28svL3O3V6n9M1X2lphFfUsD7Px5MC3jlTUszzMxpMD3zpOu1/cYtauHPjWcXyWh1ljHPjWcXyWh1lj/ACUAhvLPbnbWbtf3GLWrryHX2BFvDmVmTXOgV9QPm3RzOo58AvKpy2aWT0HfgH5tEUzG4oDv4B82qKZDcWBX0A+bdHMhuLTMgvIpy2a2VAc+B0iPnM2rD6nudszs0Jx4HcIffbHzb9F6+qmbc7MmsBj+GZmJZEp8CUtkLRPUr+klUMsny5pm6RdkvZIWpjOP03Sn0t6SlKfpFvy7oCZmWUzYuBLmgCsA64CeoBeST11zW4lefThO0ieefvFdP6vAW+JiIuBdwK/JWlGTrWbmdkoZNnDnwv0R8T+iDgCbAIW17UJYPAo3znA4Zr5Z0o6FTgDOAL8eMxVm5nZqGU5aDsNOFAzfRC4rK7NauBBSTcAZwK/ks6/n+TL4UfAW4Ebh3qIuaTlwHKA6dOnj6L81/ksFmt3kpq2rcmTJzdtW4OK3L+i9C2vs3R6gQ0R8ceS3g3cJ2kOyV8Hx4GfASYD35H0NxGxv3bliFgPrIfkIeaNFOCzWKydNfq72ewHaDeqyP0r
Ut+yDOkcAs6vme5O59VaBmwGiIjHgdOBqcBHgG9HxNGIeB74LjDsU9XNzGx8ZAn8HcAsSTMlTSQ5KLulrs1zwJUAkmaTBP5AOv+96fwzgV8Evp9P6WZmNhojBn5EHANWAFuBPpKzcfZKWiNpUdrsJuB6SbuBjcDSSP6WWQdMkrSX5IvjzyJiz3h0xMzMhqd2G2OqVCpRrVZHvV4j42VjeQRgs8fnvL3O3l6jOqXORhW5fy34nd4ZEcMOmZf6Sls/AtDMyqS0ge9HAJpZ2ZQ28P0IQDMrm1IGvh8B2J4kNe3ViguTzFqtlIHvRwC2n4ho6NXoui+++KYLvs0Kr5SB70cAmlkZlfIBKH4EoJmVUSn38M3MysiBb2ZWEg58M7OScOCbmZWEA9/MrCQc+GZmJeHANzMrCQe+mVlJOPDNzEoiU+BLWiBpn6R+SSuHWD5d0jZJuyTtkbSwZtklkh6XtFfSU5JOz7MDZeKbi5nZWIx4awVJE0geVfg+4CCwQ9KWiHi6ptmtJI8+vFNSD/AAMEPSqcBfAL8eEbslnQccxUat0SfnFPmJQmY2Oln28OcC/RGxPyKOAJuAxXVtAjg7fX8OcDh9/35gT0TsBoiIf4yI42Mv28zMRitL4E8DDtRMH0zn1VoNfEzSQZK9+xvS+RcCIWmrpCck/e5QG5C0XFJVUnVgYGBUHTAzs2zyOmjbC2yIiG5gIXCfpFNIhozmAR9N/3u1pCvrV46I9RFRiYhKV1dXTiWZmVmtLIF/CDi/Zro7nVdrGbAZICIeB04HppL8NfBYRLwQEa+S7P3/wliLNjOz0csS+DuAWZJmSpoILAG21LV5DrgSQNJsksAfALYCF0t6a3oAdz7wNGZm1nQjnqUTEcckrSAJ7wnAPRGxV9IaoBoRW4CbgC9JupHkAO7SSE4NeUnS50m+NAJ4ICK+NV6dMTOzk1O7nbJXqVSiWq2Oer1mn37YKac7dkqdjXL/OluR+9eCTNoZEZXh2vhKWzOzknDgm5mVhAPfzKwkRjxo20kkNW1bvteMmXWawgS+7zVjZjY8D+mYmZWEA9/MrCQc+GZmJeHANzMrCQe+mVlJOPDNzErCgW9mVhIOfDOzknDgm5mVhAPfzKwkMgW+pAWS9knql7RyiOXTJW2TtEvSHkkLh1j+iqSb8yrczMxGZ8TAlzQBWAdcBfQAvZJ66prdCmyOiHeQPALxi3XLPw/89djLNTOzRmXZw58L9EfE/og4AmwCFte1CeDs9P05wOHBBZI+CPwQ2Dvmas3MrGFZAn8acKBm+mA6r9Zq4GOSDgIPADcASJoE/B7w2TFXamZmY5LXQdteYENEdAMLgfsknULyRfAnEfHKcCtLWi6pKqk6MDCQU0lmZlYry/3wDwHn10x3p/NqLQMWAETE45JOB6YClwEfkvQ54FzghKSfRMQXaleOiPXAekgeYt5AP8zMbARZAn8HMEvSTJKgXwJ8pK7Nc8CVwAZJs4HTgYGIeM9gA0mrgVfqw97MzJpjxCGdiDgGrAC2An0kZ+PslbRG0qK02U3A9ZJ2AxuBpeHHSJmZtRW1Wy5XKpWoVqtN217RH3Ho/nU2969zNbtvknZGRGW4Nr7S1sysJBz4ZmYl4cA3MysJB76ZWUk48M3MSsKBb2ZWEg58M7OSyHKlrZlZwyQ1vLyo5+i3igPfzMaVQ7t9eEjHzKwkHPhmZiXhwDczKwkHvplZSTjwzcxKwoFvZlYSDnwzs5LIFPiSFkjaJ6lf0sohlk+XtE3SLkl7JC1M579P0k5JT6X/fW/eHTAzs2xGvPBK0gRgHfA+4CCwQ9KWiHi6ptmtJI8+vFNSD/AAMAN4AfgPEXFY0hySxyROy7kPZmaWQZY9/LlAf0Tsj4gjwCZgcV2bAM5O358DHAaIiF0RcTidvxc4Q9Jbxl62mZmNVpZbK0wDDtRMHwQuq2uzGnhQ0g3AmcCvDPE51wJPRMRP6xdIWg4sB5g+fXqGksyKw/ea6Vyd
9rPL66BtL7AhIrqBhcB9kl77bEkXAX8I/NZQK0fE+oioRESlq6srp5LMOkNENPyy1uq0n12WwD8EnF8z3Z3Oq7UM2AwQEY8DpwNTASR1A38F/EZE/GCsBZuZWWOyBP4OYJakmZImAkuALXVtngOuBJA0myTwBySdC3wLWBkR382tajMzG7URAz8ijgErSM6w6SM5G2evpDWSFqXNbgKul7Qb2AgsjeRvlhXAvwF+X9KT6etfjUtPzMxsWGq3ccBKpRLVarVp25NU6LFQ98+sHCTtjIjKcG18pa2ZWUk48M3MSsKBb2ZWEg58M7OScOCbmZWEA9/MrCQc+GZmJZHl5mkdr9NucDRa7l9n98+sWUoR+EX/n979M7MsPKRjZlYSDnwzs5Jw4JuZlYQD38ysJBz4ZmYl4cA3MysJB76ZWUlkCnxJCyTtk9QvaeUQy6dL2iZpl6Q9khbWLLslXW+fpA/kWbxZUW3cuJE5c+YwYcIE5syZw8aNG1tdkhXAiBdeSZoArAPeBxwEdkjaEhFP1zS7leTRh3dK6gEeAGak75cAFwE/A/yNpAsj4njeHTErio0bN7Jq1Sruvvtu5s2bx/bt21m2bBkAvb29La7OOlmWPfy5QH9E7I+II8AmYHFdmwDOTt+fAxxO3y8GNkXETyPih0B/+nlmdhJr167l7rvv5oorruC0007jiiuu4O6772bt2rWtLs06XJbAnwYcqJk+mM6rtRr4mKSDJHv3N4xiXSQtl1SVVB0YGMhYulkx9fX1MW/evDfMmzdvHn19fS2qyIoir4O2vcCGiOgGFgL3Scr82RGxPiIqEVHp6urKqSSzzjR79my2b9/+hnnbt29n9uzZLarIiiJLKB8Czq+Z7k7n1VoGbAaIiMeB04GpGdc1sxqrVq1i2bJlbNu2jaNHj7Jt2zaWLVvGqlWrWl2adbgsd8vcAcySNJMkrJcAH6lr8xxwJbBB0mySwB8AtgBflvR5koO2s4C/y6l2s0IaPDB7ww030NfXx+zZs1m7dq0P2NqYjRj4EXFM0gpgKzABuCci9kpaA1QjYgtwE/AlSTeSHMBdGsk9bfdK2gw8DRwDPukzdMxG1tvb64C33Knd7jVeqVSiWq22ugwzs44iaWdEVIZr4yttzcxKwoFvZlYSDnwzs5Jw4JuZlUTbHbSVNAD8fRM3ORV4oYnbazb3r7O5f52r2X27ICKGvXK17QK/2SRVRzqy3cncv87m/nWuduybh3TMzErCgW9mVhIOfFjf6gLGmfvX2dy/ztV2fSv9GL6ZWVl4D9/MrCQc+GZmJVHIwJc0Q9L/zuFzflnSE5KOSfpQHrXlIcf+fULSU5KelLQ9fQZxS+XYt6WSBtK+PSnpP+ZR31jl2L8/qenbM5JezqG8McuxfxdIeljSHkmPSurOo76yK2Tg5+g5YCnw5RbXMV6+HBEXR8SlwOeAz7e4nrx9JSIuTV9/2upi8hQRNw72DfjvwNdaXFLe7gDujYhLgDXA7WP9wLy+jGo+75UcP+tSSQtrphdJWpnX5w8qfOBL+llJuyR9WtLXJH1b0rOSPlfT5hVJayXtlvQ9SW8DiIj/ExF7gBMt68AIxti/H9d81JkkzzJoG2PpWyfIsX+9wMbmVZ7NGPvXAzySvt8GLG52/bUkZXlY1FhcSvJ4WAAiYktE/EHeGyl04Ev6OeAvSfbSB0j+Ua8DLgaukzT4+MUzge9FxM8DjwHXN73YBuTRP0mflPQDkj3832la8SPI6Wd3bTokcH9N+7aQ1++mpAuAmbwejm0hh/7tBq5J318NnCXpvBxKmyDpS5L2SnpQ0hmSrpe0I/3S+UtJb037sEHSXZL+FvicpJmSHlcyDPqfR+j/Jkm/WjO9QdKHJJ0u6c/Sz9gl6QpJE0n+irlOyRDddUqGJL9Qs+5/k/S/JO1XOrws6RRJX5T0fUkPSXpAIww9Fznwu4BvAB+NiN3pvIcj4p8i4ickT+G6IJ1/BPhm+n4nMKOZ
hTYol/5FxLqI+NfA7wG3NqPwDPLo2/8EZqRDAg8Bf96MwjPK83dzCXB/mz1JLo/+3QzMl7QLmE/yeNU8+jgLWBcRFwEvA9cCX4uId6VfOn0kz+ge1A38u4j4FPBfgTsj4mLgRyNs5yvAhwHSQL8S+BbwSSDSz+gl+b08Bfh9Xh+C/MoQn/d2YB7w74HBPf9rSP69eoBfB949UueLHPj/RDIGP69m3k9r3h/n9Uc8Ho3XL0iond/O8u7fJuCDOdfYqDH3LSL+MSIG1/lT4J3jV+6o5fmzW0L7Defk8fM7HBHXRMQ7gFXpvJdzqO2HEfFk+n7wC2aOpO9Iegr4KHBRTfuv1nyZ/hKv/1vfN8J2/hq4QtJbgKuAxyLiX0j+Tf4CICK+T3KjyAsz1P31iDgREU8Dg8Ne89L6TkTE/yUZ+hpWJwRbo46Q/Cm4VTkeXGkjY+6fpFkR8Ww6+avAs8O1b6I8+vb2iBjcC1tEsufWLnL53ZT0b4HJwON5FZaTPH5+U4EXI+IEcAtwT0611X/xnAFsAD4YEbslLQUur2nzz3XrZzrOFRE/kfQo8AGSoaxNjZX7mtq61eiHFHkPn4j4Z5I/gW4Ezh7t+pLeJekg8GvA/5C0N+cSx2Ss/QNWpGOZTwKfAj6eY3ljkkPffift226SYxNLcyxvzHLoHyR795tq9pDbRg79uxzYJ+kZkj3atflV9yZnAT+SdBrJHv7JfJfk35wR2g36CvCbwHuAb6fzvjO4rqQLgenAPuD/pXWMxndJjlOdkh7svnykFXxrBTMrBUkzgG9GxJx0+mZgEvAPwO+SHFz+W+CsiFgqaUPa/v60/UySU7QnkRyj+E8RMWmY7Z2WfvY3IuI303mnA3cCFeAY8KmI2CZpCrAVOI3kFNQzgEpErBiijlciYpKkU4AvkgT9AZI9/z+MiIdOWpMD38ysM0maFBGvpGcw/R3wS+l4/pCKPIZvZlZ035R0LjARuG24sAfv4ZuZNUzSxbz5jJ2fRsRlrahnJA58M7OSKPRZOmZm9joHvplZSTjwzcxKwoFvZlYS/x+8q/3LUcsoggAAAABJRU5ErkJggg==\n",
      "text/plain": [
       "<Figure size 432x288 with 1 Axes>"
      ]
     },
     "metadata": {
      "needs_background": "light"
     },
     "output_type": "display_data"
    }
   ],
   "source": [
    "from numpy import mean,std\n",
    "import matplotlib.pyplot as plt\n",
    "# define dataset\n",
    "X, y = get_dataset()\n",
    "# get the models to evaluate\n",
    "models = get_voting()\n",
    "# evaluate the models and store results\n",
    "results, names = list(), list()\n",
    "for (name, model) in models:\n",
    "    scores = evaluate_model(model, X, y)\n",
    "    results.append(scores)\n",
    "    names.append(name)\n",
     "    print('{0}: Acc:{1:.3f}, std:{2:.3f}'.format(name, mean(scores), std(scores)))\n",
     "# plot model performance for comparison\n",
     "plt.boxplot(results, labels=names, showmeans=True)\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "satisfied-wheat",
   "metadata": {},
   "source": [
    "显然投票的效果略大于任何一个基模型。\n",
    "![Box Plot of Hard Voting Ensemble Compared to Standalone Models for Binary Classification](https://3qeqpr26caki16dnhd19sv6by6v-wpengine.netdna-ssl.com/wp-content/uploads/2020/02/Box-Plot-of-Hard-Voting-Ensemble-Compared-to-Standalone-Models-for-Binary-Classification.png)\n",
    "\n",
    "通过箱形图我们可以看到硬投票方法对交叉验证整体预测结果分布带来的提升。"
   ]
  },
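  {
   "cell_type": "markdown",
   "id": "soft-voting-variant",
   "metadata": {},
   "source": [
    "As a side note, a soft-voting variant can be built the same way (a brief sketch using the same dataset settings as above). KNN exposes predict_proba natively, so no extra configuration is needed; a model such as SVC would instead require probability=True to supply class probabilities:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "soft-voting-variant-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import make_classification\n",
    "from sklearn.neighbors import KNeighborsClassifier\n",
    "from sklearn.ensemble import VotingClassifier\n",
    "from sklearn.model_selection import cross_val_score\n",
    "\n",
    "X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=2)\n",
    "models = [('knn{}'.format(k), KNeighborsClassifier(n_neighbors=k)) for k in (1, 3, 5, 7, 9)]\n",
    "# average the predicted class probabilities instead of counting label votes\n",
    "soft = VotingClassifier(estimators=models, voting='soft')\n",
    "print('soft voting acc: {:.3f}'.format(cross_val_score(soft, X, y, cv=5).mean()))"
   ]
  },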
  {
   "cell_type": "markdown",
   "id": "specific-winter",
   "metadata": {},
   "source": [
    "## bagging的思路"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fluid-control",
   "metadata": {},
   "source": [
    "与投票法不同的是，Bagging不仅仅集成模型最后的预测结果，同时采用一定策略来影响基模型训练，保证基模型可以服从一定的假设。在上一章中我们提到，希望各个模型之间具有较大的差异性，而在实际操作中的模型却往往是同质的，因此一个简单的思路是通过不同的采样增加模型的差异性。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "double-czech",
   "metadata": {},
   "source": [
    "## bagging的原理分析"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "premier-tennessee",
   "metadata": {},
   "source": [
    "Bagging的核心在于自助采样(bootstrap)这一概念，即有放回的从数据集中进行采样，也就是说，同样的一个样本可能被多次进行采样。一个自助采样的小例子是我们希望估计全国所有人口年龄的平均值，那么我们可以在全国所有人口中随机抽取不同的集合（这些集合可能存在交集），计算每个集合的平均值，然后将所有平均值的均值作为估计值。\n",
    "\n",
    "首先我们随机取出一个样本放入采样集合中，再把这个样本放回初始数据集，重复K次采样，最终我们可以获得一个大小为K的样本集合。同样的方法， 我们可以采样出T个含K个样本的采样集合，然后基于每个采样集合训练出一个基学习器，再将这些基学习器进行结合，这就是Bagging的基本流程。\n",
    "\n",
    "对回归问题的预测是通过预测取平均值来进行的。对于分类问题的预测是通过对预测取多数票预测来进行的。Bagging方法之所以有效，是因为每个模型都是在略微不同的训练数据集上拟合完成的，这又使得每个基模型之间存在略微的差异，使每个基模型拥有略微不同的训练能力。\n",
    "\n",
    "Bagging同样是一种降低方差的技术，因此它在不剪枝决策树、神经网络等易受样本扰动的学习器上效果更加明显。在实际的使用中，加入列采样的Bagging技术对高维小样本往往有神奇的效果。\n"
   ]
  },
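  {
   "cell_type": "markdown",
   "id": "bootstrap-sketch",
   "metadata": {},
   "source": [
    "The age-estimation illustration above can be sketched with numpy (an illustrative example; the population and sample sizes are arbitrary):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bootstrap-sketch-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.RandomState(0)\n",
    "ages = rng.randint(0, 90, size=10000)  # stand-in for the population ages\n",
    "\n",
    "# draw T bootstrap samples of size K (with replacement) and average their means\n",
    "T, K = 50, 1000\n",
    "means = [ages[rng.randint(0, len(ages), size=K)].mean() for _ in range(T)]\n",
    "print('bootstrap estimate: {:.2f}, true mean: {:.2f}'.format(np.mean(means), ages.mean()))"
   ]
  },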
  {
   "cell_type": "markdown",
   "id": "minimal-prayer",
   "metadata": {},
   "source": [
    "## bagging的案例分析(基于sklearn，介绍随机森林的相关理论以及实例)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "front-trailer",
   "metadata": {},
   "source": [
    "Sklearn为我们提供了 [BaggingRegressor](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.BaggingRegressor.html) 与 [BaggingClassifier](https://scikit-learn.org/stable/modules/generated/sklearn.ensemble.BaggingClassifier.html) 两种Bagging方法的API，我们在这里通过一个完整的例子演示Bagging在分类问题上的具体应用。这里两种方法的默认基模型是树模型。\n",
    "\n",
    "我们创建一个含有1000个样本20维特征的随机分类数据集："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "manufactured-research",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-03-14T13:00:48.932432Z",
     "start_time": "2021-03-14T13:00:48.927432Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(1000, 20) (1000,)\n"
     ]
    }
   ],
   "source": [
    "# test classification dataset\n",
    "from sklearn.datasets import make_classification\n",
    "# define dataset\n",
    "X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=5)\n",
    "# summarize the dataset\n",
    "print(X.shape, y.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "genuine-poetry",
   "metadata": {},
   "source": [
    "我们将使用重复的分层k-fold交叉验证来评估该模型，一共重复3次，每次有10个fold。我们将评估该模型在所有重复交叉验证中性能的平均值和标准差。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "sustained-tonight",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2021-03-14T13:04:39.896814Z",
     "start_time": "2021-03-14T13:04:39.720811Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Accuracy: 0.853 +/- 0.036\n"
     ]
    }
   ],
   "source": [
    "# evaluate bagging algorithm for classification\n",
    "from numpy import mean\n",
    "from numpy import std\n",
    "from sklearn.datasets import make_classification\n",
    "from sklearn.model_selection import cross_val_score\n",
    "from sklearn.model_selection import RepeatedStratifiedKFold\n",
    "from sklearn.ensemble import BaggingClassifier\n",
    "# define dataset\n",
    "X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=5)\n",
    "# define the model\n",
    "model = BaggingClassifier()\n",
    "# evaluate the model\n",
    "cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=1)\n",
    "n_scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1, error_score='raise')\n",
    "# report performance\n",
     "print('Accuracy: {:.3f} +/- {:.3f}'.format(mean(n_scores), std(n_scores)))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.0"
  },
  "toc": {
   "base_numbering": 1,
   "nav_menu": {},
   "number_sections": true,
   "sideBar": true,
   "skip_h1_title": false,
   "title_cell": "Table of Contents",
   "title_sidebar": "Contents",
   "toc_cell": false,
   "toc_position": {},
   "toc_section_display": true,
   "toc_window_display": true
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
