{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Check-in 5: Model Fusion\n",
    "\n",
    "Model fusion combines the strengths of several models. When the base models are fairly different from one another and each performs well on its own, fusing them often gives a substantial improvement.\n",
    "\n",
    "### Fusion methods\n",
    "\n",
    "|Averaging|Voting|Other|Stacking|Blending|\n",
    "|--|--|--|--|--|\n",
    "|simple average, weighted average|simple voting, weighted voting|rank fusion, log fusion|train multiple layers of base models, then fit a meta-model on their predictions|hold out part of the training data and use base-model predictions on it as new features for a second-stage model|\n",
    "\n"
   ]
  },
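  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "A minimal sketch of the rank fusion listed in the table above, assuming two hypothetical probability vectors `pred_a` and `pred_b` (made-up values, not this notebook's real model outputs): convert each model's predicted probabilities to ranks, average the ranks, and rescale. Only the ordering survives, which is useful when models output probabilities on different scales and the metric (such as AUC) depends only on ranking."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Rank fusion sketch: average per-model ranks instead of raw probabilities\r\n",
    "import numpy as np\r\n",
    "\r\n",
    "def to_rank(p):\r\n",
    "    # Rank of each prediction, 1 = smallest (assumes no ties)\r\n",
    "    return np.argsort(np.argsort(p)) + 1\r\n",
    "\r\n",
    "pred_a = np.array([0.2, 0.8, 0.6, 0.1])  # hypothetical model A probabilities\r\n",
    "pred_b = np.array([0.3, 0.9, 0.4, 0.2])  # hypothetical model B probabilities\r\n",
    "\r\n",
    "# Average rank, rescaled into (0, 1]\r\n",
    "fused = (to_rank(pred_a) + to_rank(pred_b)) / 2 / len(pred_a)\r\n",
    "print(fused)  # fused scores: 0.5, 1.0, 0.75, 0.25"
   ]
  },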
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### Averaging\n",
    "\n",
    "Simple averaging is just a weighted average with equal weights.\n",
    "\n",
    "A weighted average was already used earlier to combine the CatBoost, LightGBM, and XGBoost predictions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Fuse the model predictions and write the submission file\r\n",
    "\r\n",
    "rh_test = lgb_test * 0.3 + xgb_test * 0.4 + cat_test * 0.3\r\n",
    "testA['isDefault'] = rh_test\r\n",
    "testA[['id','isDefault']].to_csv('test_sub1.csv', index=False)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### Voting\n",
    "\n",
    "For weighted voting, pass `voting='soft', weights=[2, 1, 1]` to `VotingClassifier`; `weights` adjusts the contribution of each base model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Simple (hard) voting\r\n",
    "\r\n",
    "from xgboost import XGBClassifier\r\n",
    "from sklearn.linear_model import LogisticRegression\r\n",
    "from sklearn.ensemble import RandomForestClassifier, VotingClassifier\r\n",
    "clf1 = LogisticRegression(random_state=1)\r\n",
    "clf2 = RandomForestClassifier(random_state=1)\r\n",
    "clf3 = XGBClassifier(learning_rate=0.1, n_estimators=150, max_depth=4, min_child_weight=2, subsample=0.7, objective='binary:logistic')\r\n",
    "\r\n",
    "vclf = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('xgb', clf3)])\r\n",
    "vclf = vclf.fit(x_train, y_train)\r\n",
    "print(vclf.predict(x_test))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Weighted (soft) voting\r\n",
    "\r\n",
    "from xgboost import XGBClassifier\r\n",
    "from sklearn.linear_model import LogisticRegression\r\n",
    "from sklearn.ensemble import RandomForestClassifier, VotingClassifier\r\n",
    "clf1 = LogisticRegression(random_state=1)\r\n",
    "clf2 = RandomForestClassifier(random_state=1)\r\n",
    "clf3 = XGBClassifier(learning_rate=0.1, n_estimators=150, max_depth=4, min_child_weight=2, subsample=0.7, objective='binary:logistic')\r\n",
    "\r\n",
    "vclf = VotingClassifier(estimators=[('lr', clf1), ('rf', clf2), ('xgb', clf3)], voting='soft', weights=[2, 1, 1])\r\n",
    "vclf = vclf.fit(x_train, y_train)\r\n",
    "print(vclf.predict(x_test))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### Stacking\n",
    "\n",
    "First install mlxtend: `pip install mlxtend`\n",
    "\n",
    "Stacking takes the predictions produced by several base learners and uses them as a new training set on which a second-level (meta) learner is trained."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Accuracy: 0.91 (+/- 0.07) [KNN]\n",
      "Accuracy: 0.94 (+/- 0.04) [Random Forest]\n",
      "Accuracy: 0.91 (+/- 0.04) [Naive Bayes]\n",
      "Accuracy: 0.94 (+/- 0.04) [Stacking Classifier]\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "<Figure size 1000x800 with 4 Axes>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "import warnings\r\n",
    "warnings.filterwarnings('ignore')\r\n",
    "import itertools\r\n",
    "import numpy as np\r\n",
    "import seaborn as sns\r\n",
    "import matplotlib.pyplot as plt\r\n",
    "import matplotlib.gridspec as gridspec\r\n",
    "from sklearn import datasets\r\n",
    "from sklearn.linear_model import LogisticRegression\r\n",
    "from sklearn.neighbors import KNeighborsClassifier\r\n",
    "from sklearn.naive_bayes import GaussianNB \r\n",
    "from sklearn.ensemble import RandomForestClassifier\r\n",
    "from mlxtend.classifier import StackingClassifier\r\n",
    "from sklearn.model_selection import cross_val_score, train_test_split\r\n",
    "from mlxtend.plotting import plot_learning_curves\r\n",
    "from mlxtend.plotting import plot_decision_regions\r\n",
    "\r\n",
    "\r\n",
    "# Use the iris dataset bundled with scikit-learn as an example\r\n",
    "iris = datasets.load_iris()\r\n",
    "X, y = iris.data[:, 1:3], iris.target\r\n",
    "\r\n",
    "\r\n",
    "clf1 = KNeighborsClassifier(n_neighbors=1)\r\n",
    "clf2 = RandomForestClassifier(random_state=1)\r\n",
    "clf3 = GaussianNB()\r\n",
    "lr = LogisticRegression()\r\n",
    "sclf = StackingClassifier(classifiers=[clf1, clf2, clf3], \r\n",
    "                          meta_classifier=lr)\r\n",
    "\r\n",
    "\r\n",
    "labels = ['KNN', 'Random Forest', 'Naive Bayes', 'Stacking Classifier']\r\n",
    "clf_list = [clf1, clf2, clf3, sclf]\r\n",
    "\r\n",
    "fig = plt.figure(figsize=(10, 8))\r\n",
    "gs = gridspec.GridSpec(2, 2)\r\n",
    "grid = itertools.product([0, 1], repeat=2)\r\n",
    "\r\n",
    "\r\n",
    "clf_cv_mean = []\r\n",
    "clf_cv_std = []\r\n",
    "for clf, label, grd in zip(clf_list, labels, grid):\r\n",
    "        \r\n",
    "    scores = cross_val_score(clf, X, y, cv=5, scoring='accuracy')\r\n",
    "    print(\"Accuracy: %.2f (+/- %.2f) [%s]\" %(scores.mean(), scores.std(), label))\r\n",
    "    clf_cv_mean.append(scores.mean())\r\n",
    "    clf_cv_std.append(scores.std())\r\n",
    "        \r\n",
    "    clf.fit(X, y)\r\n",
    "    ax = plt.subplot(gs[grd[0], grd[1]])\r\n",
    "    fig = plot_decision_regions(X=X, y=y, clf=clf)\r\n",
    "    plt.title(label)\r\n",
    " \r\n",
    "\r\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "### Blending\n",
    "\n",
    "Blending splits the training data in two: the base learners are trained on one part, and their predictions on the held-out part become the features used to train a second-stage model."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Use the iris dataset bundled with scikit-learn as an example\r\n",
    "from sklearn.ensemble import ExtraTreesClassifier, GradientBoostingClassifier\r\n",
    "from sklearn.metrics import roc_auc_score\r\n",
    "\r\n",
    "data_0 = iris.data\r\n",
    "data = data_0[:100, :]\r\n",
    "\r\n",
    "target_0 = iris.target\r\n",
    "target = target_0[:100]\r\n",
    "\r\n",
    "# Base learners for the fusion\r\n",
    "clfs = [LogisticRegression(),\r\n",
    "        RandomForestClassifier(),\r\n",
    "        ExtraTreesClassifier(),\r\n",
    "        GradientBoostingClassifier()]\r\n",
    " \r\n",
    "# Hold out part of the data as a test set\r\n",
    "X, X_predict, y, y_predict = train_test_split(data, target, test_size=0.3, random_state=914)\r\n",
    "\r\n",
    "\r\n",
    "# Split the training data into two halves, d1 and d2\r\n",
    "X_d1, X_d2, y_d1, y_d2 = train_test_split(X, y, test_size=0.5, random_state=914)\r\n",
    "dataset_d1 = np.zeros((X_d2.shape[0], len(clfs)))\r\n",
    "dataset_d2 = np.zeros((X_predict.shape[0], len(clfs)))\r\n",
    " \r\n",
    "for j, clf in enumerate(clfs):\r\n",
    "    # Train each base model in turn\r\n",
    "    clf.fit(X_d1, y_d1)\r\n",
    "    y_submission = clf.predict_proba(X_d2)[:, 1]\r\n",
    "    dataset_d1[:, j] = y_submission\r\n",
    "    # For the test set, use the k base models' predicted probabilities directly as new features\r\n",
    "    dataset_d2[:, j] = clf.predict_proba(X_predict)[:, 1]\r\n",
    "    print(\"val auc Score: %f\" % roc_auc_score(y_predict, dataset_d2[:, j]))\r\n",
    "\r\n",
    "\r\n",
    "# Meta-model trained on the base-model predictions\r\n",
    "clf = GradientBoostingClassifier()\r\n",
    "clf.fit(dataset_d1, y_d2)\r\n",
    "y_submission = clf.predict_proba(dataset_d2)[:, 1]\r\n",
    "print(\"Val auc Score of Blending: %f\" % (roc_auc_score(y_predict, y_submission)))"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "PaddlePaddle 1.8.4 (Python 3.5)",
   "language": "python",
   "name": "py35-paddle1.2.0"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
