{
 "nbformat": 4,
 "nbformat_minor": 2,
 "metadata": {
  "language_info": {
   "name": "python",
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   }
  },
  "orig_nbformat": 2,
  "file_extension": ".py",
  "mimetype": "text/x-python",
  "name": "python",
  "nbconvert_exporter": "python",
  "pygments_lexer": "ipython3",
  "version": 3
 },
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "from sklearn import neighbors\n",
    "from sklearn import datasets\n",
    "from sklearn import tree"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "def plot(model):\n",
    "    # Determine the range of the data values\n",
    "    x_min, x_max = x_data[:, 0].min() - 1, x_data[:, 0].max() + 1\n",
    "    y_min, y_max = x_data[:, 1].min() - 1, x_data[:, 1].max() + 1\n",
    "\n",
    "    # Generate the mesh grid\n",
    "    xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),\n",
    "                         np.arange(y_min, y_max, 0.02))\n",
    "\n",
    "    # ravel, like flatten, turns a multi-dimensional array into 1-D; flatten always returns a copy, while ravel returns a view of the original when possible\n",
    "    z = model.predict(np.c_[xx.ravel(), yy.ravel()])\n",
    "    z = z.reshape(xx.shape)\n",
    "    # Contour plot of the decision regions\n",
    "    cs = plt.contourf(xx, yy, z)\n",
    "    # Scatter plot of the test samples\n",
    "    plt.scatter(x_test[:, 0], x_test[:, 1], c=y_test)\n",
    "    plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Ensemble Learning:\n",
    "+ Ensemble learning combines multiple learners to obtain a learner that is stronger than any individual one.\n",
    "\n",
    "Ensemble learning algorithms:\n",
    "+ 1. No strong dependence between individual learners: bagging\n",
    "+ 2. Random Forest\n",
    "+ 3. Strong dependence between individual learners: boosting\n",
    "+ 4. Stacking"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Bagging:\n",
    "+ Intuition: the more data a learner sees, the better it tends to perform.\n",
    "+ Bagging, short for bootstrap aggregating, is a technique that builds S new<br>\n",
    "datasets by sampling from the original dataset S times, with replacement."
   ]
  },
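  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The bootstrap sampling behind bagging can be sketched directly with a toy array (a minimal illustration, assuming numpy is already imported as np):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: draw 3 bootstrap samples (with replacement) from a toy dataset\n",
    "rng = np.random.default_rng(0)\n",
    "data = np.arange(10)\n",
    "samples = [rng.choice(data, size=len(data), replace=True) for _ in range(3)]\n",
    "for s in samples:\n",
    "    # each bootstrap sample has the original size but may contain repeats\n",
    "    print(sorted(s))"
   ]
  },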
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.ensemble import BaggingClassifier"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.neighbors import KNeighborsClassifier\n",
    "\n",
    "iris = datasets.load_iris()\n",
    "x_data = iris.data[:, :2]\n",
    "y_data = iris.target\n",
    "x_train, x_test, y_train, y_test = train_test_split(x_data, y_data)\n",
    "# Bagging ensemble of 100 KNN base learners\n",
    "knn = KNeighborsClassifier()\n",
    "bagging_knn = BaggingClassifier(knn, n_estimators=100)\n",
    "# Fit the model on the training data\n",
    "bagging_knn.fit(x_train, y_train)\n",
    "plot(bagging_knn)\n",
    "# Accuracy on the test set\n",
    "bagging_knn.score(x_test, y_test)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Random Forest (RF):\n",
    "+ RF = decision trees + bagging + random feature selection\n",
    "+ 1. Random samples: use bagging to randomly draw n samples from the dataset.\n",
    "+ 2. Random features: from all d features, randomly choose k of them (k < d),<br>\n",
    "then use the best of those k features to split each node of a CART decision tree.\n",
    "+ 3. Repeat the two steps above m times to build m CART decision trees.\n",
    "+ 4. The m CART trees form the random forest; the final class of a sample is<br>\n",
    "decided by majority vote."
   ]
  },
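  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Step 4 above, the majority vote, can be sketched with hypothetical per-tree predictions (illustrative only, assuming numpy is imported as np):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: each row holds one tree's predicted classes for three samples\n",
    "votes = np.array([[0, 1, 1],\n",
    "                  [1, 1, 0],\n",
    "                  [1, 1, 0]])\n",
    "# For every sample (column), pick the class predicted by the most trees\n",
    "majority = np.array([np.bincount(col).argmax() for col in votes.T])\n",
    "print(majority)  # [1 1 0]"
   ]
  },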
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn import tree\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.ensemble import RandomForestClassifier"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A single decision tree as a baseline\n",
    "dtree = tree.DecisionTreeClassifier()\n",
    "dtree.fit(x_train, y_train)\n",
    "plot(dtree)\n",
    "print(dtree.score(x_test, y_test))\n",
    "# Random forest with 50 trees\n",
    "RF = RandomForestClassifier(n_estimators=50)\n",
    "RF.fit(x_train, y_train)\n",
    "plot(RF)\n",
    "RF.score(x_test, y_test)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "AdaBoost:\n",
    "+ AdaBoost stands for \"Adaptive Boosting\". It is adaptive in the sense that<br>\n",
    "samples misclassified by the previous base classifier get larger weights, while correctly<br>\n",
    "classified samples get smaller weights, and the reweighted samples are used to train the next<br>\n",
    "base classifier. A new weak classifier is added at each round, until the error rate falls below<br>\n",
    "a preset threshold or a preset maximum number of iterations is reached, yielding the final strong classifier.\n",
    "+ Intuition: focusing the learner on the samples that are \"easy\" to get wrong improves its performance.\n",
    "\n",
    "The AdaBoost algorithm can be summarized in three steps:\n",
    "+ (1) Initialize the weight distribution D1 over the training data. For N training<br>\n",
    "samples, every sample starts with the same weight: w1 = 1/N.\n",
    "+ (2) Train a weak classifier hi. If a training sample is classified correctly by hi,<br>\n",
    "its weight is decreased when constructing the next training set; if it is misclassified,<br>\n",
    "its weight is increased. The reweighted sample set is then used to train the next<br>\n",
    "classifier, and the whole training process iterates in this way.\n",
    "+ (3) Combine the trained weak classifiers into one strong classifier. After training,<br>\n",
    "weak classifiers with small error rates are given larger weights, so they play a larger role<br>\n",
    "in the final decision, while those with large error rates are given smaller weights.\n",
    "+ In other words, weak classifiers with lower error rates carry more weight in the final classifier."
   ]
  },
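  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One round of the weight update in steps (1)-(3) can be sketched with a hypothetical weak learner (illustrative only, assuming numpy is imported as np):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Step 1: uniform initial weights over N = 5 samples\n",
    "N = 5\n",
    "w = np.full(N, 1 / N)\n",
    "# Hypothetical mistakes made by one weak classifier\n",
    "wrong = np.array([False, True, False, False, True])\n",
    "err = w[wrong].sum()                    # weighted error rate\n",
    "alpha = 0.5 * np.log((1 - err) / err)   # classifier weight: smaller error -> larger alpha\n",
    "# Step 2: raise weights of misclassified samples, lower the others, renormalize\n",
    "w = w * np.exp(np.where(wrong, alpha, -alpha))\n",
    "w = w / w.sum()\n",
    "print(alpha, w)  # misclassified samples now carry more weight"
   ]
  },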
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.ensemble import AdaBoostClassifier\n",
    "from sklearn.metrics import classification_report\n",
    "from sklearn.tree import DecisionTreeClassifier\n",
    "from sklearn.datasets import make_gaussian_quantiles"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 500 samples with 2 features, drawn from a 2-D Gaussian and split into two classes by quantile\n",
    "x1, y1 = make_gaussian_quantiles(n_samples=500, n_features=2, n_classes=2)\n",
    "# Another 500 samples with 2 features, with both feature means shifted to 3\n",
    "x2, y2 = make_gaussian_quantiles(mean=(3, 3), n_samples=500, n_features=2, n_classes=2)\n",
    "# Merge the two sets into one dataset (labels of the second set are flipped)\n",
    "x_data = np.concatenate((x1, x2))\n",
    "y_data = np.concatenate((y1, -y2 + 1))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "plt.scatter(x_data[:, 0], x_data[:, 1], c=y_data)\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# AdaBoost model with shallow decision trees as weak learners\n",
    "model = AdaBoostClassifier(DecisionTreeClassifier(max_depth=3), n_estimators=10)\n",
    "# Train the model\n",
    "model.fit(x_data, y_data)\n",
    "\n",
    "# Determine the range of the data values\n",
    "x_min, x_max = x_data[:, 0].min() - 1, x_data[:, 0].max() + 1\n",
    "y_min, y_max = x_data[:, 1].min() - 1, x_data[:, 1].max() + 1\n",
    "\n",
    "# Generate the mesh grid\n",
    "xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02),\n",
    "                     np.arange(y_min, y_max, 0.02))\n",
    "\n",
    "# Get predictions over the grid\n",
    "z = model.predict(np.c_[xx.ravel(), yy.ravel()])\n",
    "z = z.reshape(xx.shape)\n",
    "# Contour plot of the decision regions\n",
    "cs = plt.contourf(xx, yy, z)\n",
    "# Scatter plot of the samples\n",
    "plt.scatter(x_data[:, 0], x_data[:, 1], c=y_data)\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "\n",
    "Stacking:\n",
    "+ Several different classifiers predict on the training set, and their predictions are<br>\n",
    "fed as input to a meta-classifier, whose output is the final prediction of the whole model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from mlxtend.classifier import StackingClassifier # pip install mlxtend\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.neighbors import KNeighborsClassifier\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn import model_selection\n",
    "\n",
    "# Load the dataset\n",
    "iris = datasets.load_iris()\n",
    "# Use only the features in columns 1 and 2\n",
    "x_data, y_data = iris.data[:, 1:3], iris.target\n",
    "\n",
    "# Define three different base classifiers\n",
    "clf1 = KNeighborsClassifier(n_neighbors=1)\n",
    "clf2 = DecisionTreeClassifier()\n",
    "clf3 = LogisticRegression()\n",
    "\n",
    "# Define the meta-classifier\n",
    "lr = LogisticRegression()\n",
    "sclf = StackingClassifier(classifiers=[clf1, clf2, clf3],\n",
    "                          meta_classifier=lr)\n",
    "\n",
    "for clf, label in zip([clf1, clf2, clf3, sclf],\n",
    "                      ['KNN', 'Decision Tree', 'LogisticRegression', 'StackingClassifier']):\n",
    "    scores = model_selection.cross_val_score(clf, x_data, y_data, cv=3, scoring='accuracy')\n",
    "    print(\"Accuracy: %0.2f [%s]\" % (scores.mean(), label))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.ensemble import VotingClassifier\n",
    "# The same idea via majority voting\n",
    "\n",
    "sclf = VotingClassifier([('knn', clf1), ('dtree', clf2), ('lr', clf3)])\n",
    "\n",
    "for clf, label in zip([clf1, clf2, clf3, sclf],\n",
    "                      ['KNN', 'Decision Tree', 'LogisticRegression', 'VotingClassifier']):\n",
    "    scores = model_selection.cross_val_score(clf, x_data, y_data, cv=3, scoring='accuracy')\n",
    "    print(\"Accuracy: %0.2f [%s]\" % (scores.mean(), label))"
   ]
  }
 ]
}