{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "<br>\n",
    "<center><font face=\"黑体\" size=4>Fundamentals and Practice of Machine Learning: Lab Manual</font></center>\n",
    "<br>\n",
    "<center><font face=\"黑体\" size=4>Chapter 6: Ensemble Learning</font></center>\n",
    "\n",
    "$\\textbf{1. Lab Objectives}$\n",
    "\n",
    "Understand the basic structure of ensemble learning, how ensembles are trained, and how they are implemented, and deepen this understanding through the random forest and AdaBoost algorithms.\n",
    "\n",
    "$\\textbf{2. Lab Content}$\n",
    "\n",
    "$\\textbf{6.1 Theoretical Foundations of Ensemble Learning}$\n",
    "\n",
    "Ensemble learning solves a learning task by building and combining multiple learning models. By combining several models, an ensemble can usually generalize better than any single model.\n",
    "\n",
    "Consider a binary classification problem with class labels $y \\in \\{-1,+1\\}$ and true target function $f$. Suppose each individual learner $h_i$ in the ensemble has classification error rate $\\varepsilon$, that is,\n",
    "\n",
    "\\begin{equation}\n",
    "P(h_{i}(\\mathbf{x}) \\ne f(\\mathbf{x})) = \\varepsilon\n",
    "\\end{equation}\n",
    "\n",
    "Suppose the $T$ individual learners are combined by simple voting: the ensemble $H$ predicts correctly whenever more than half of the individual classifiers do. The ensemble model $H$ can be written as\n",
    "\n",
    "\\begin{equation}\n",
    "H(\\mathbf{x}) = \\mathrm{sign}\\left(\\sum_{i=1}^{T}h_i(\\mathbf{x})\\right)\n",
    "\\end{equation}\n",
    "\n",
    "If the individual learners' errors are mutually independent, the error rate of the ensemble $H$ is\n",
    "\n",
    "\\begin{equation}\n",
    "P(H(\\mathbf{x}) \\ne f(\\mathbf{x})) = \\sum_{k=0}^{\\lfloor T/2 \\rfloor}\\binom{T}{k}(1-\\varepsilon)^{k}\\varepsilon^{T-k} \\le e^{-\\frac{1}{2}T(1-2\\varepsilon)^2}\n",
    "\\end{equation}\n",
    "\n",
    "where the sum runs over all outcomes in which at most half of the learners are correct. This shows that, provided $\\varepsilon < 1/2$ and the learners are independent, the ensemble error decreases exponentially as the number of individual classifiers $T$ grows; and the smaller the individual error $\\varepsilon$, the smaller the ensemble error.\n",
    "\n",
    "If, however, the individual learners differ little from one another, the ensemble is barely different from a single learner and brings little benefit. The figure below illustrates how the individual classifiers affect the ensemble result.\n",
    "\n",
    "<img src=picture/ensemble1.png>\n",
    "\n",
    "In summary, the core research question of ensemble learning is how to produce individual learners that are both accurate and diverse."
   ]
  },
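  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The bound above is easy to check numerically. The following sketch (an illustration added to this manual; the individual error rate $\\varepsilon=0.3$ is an assumption) computes the exact majority-vote error and the exponential bound for several ensemble sizes:\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def ensemble_error(T, eps):\n",
    "    # probability that at most floor(T/2) of the T independent base\n",
    "    # classifiers are correct, i.e. that the majority vote is wrong\n",
    "    return sum(math.comb(T, k) * (1 - eps) ** k * eps ** (T - k)\n",
    "               for k in range(T // 2 + 1))\n",
    "\n",
    "def hoeffding_bound(T, eps):\n",
    "    return math.exp(-0.5 * T * (1 - 2 * eps) ** 2)\n",
    "\n",
    "for T in (1, 5, 11, 21):\n",
    "    print(T, round(ensemble_error(T, 0.3), 4), round(hoeffding_bound(T, 0.3), 4))\n",
    "```\n",
    "\n",
    "For $\\varepsilon=0.3$ the exact error shrinks rapidly as $T$ grows and always stays below the bound."
   ]
  },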
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{6.2 Boosting}$\n",
    "\n",
    "Boosting first trains a base learner on the initial training set, then adjusts the distribution of the training samples according to that learner's performance, so that samples the earlier base learners misclassified receive more attention from later ones. The next base learner is trained on the adjusted distribution, and the process repeats until the number of base learners reaches a preset value $T$; the $T$ base learners are finally combined with weights into an ensemble model.\n",
    "\n",
    "<img src=picture/ensemble2.png>\n",
    "\n",
    "AdaBoost is the canonical Boosting algorithm. Let $T$ be the number of base learners and $\\alpha_{t}$ the weight of the $t$-th base learner; AdaBoost combines the $T$ base learners by linear weighting, as in Eq. (6.1).\n",
    "\n",
    "\\begin{equation}\n",
    "H(\\mathbf{x})=\\sum_{t=1}^{T}\\alpha_{t}h_{t}(\\mathbf{x}) \\quad (6.1)\n",
    "\\end{equation}\n",
    "\n",
    "In AdaBoost, the $t$-th base learner $h_t$ is trained on the updated sample distribution $D_t$, and its ensemble weight $\\alpha_t$ is chosen to minimize the exponential loss of $\\alpha_{t}h_{t}$. Since $h_t(\\mathbf{x}) \\in \\{-1,+1\\}$,\n",
    "\n",
    "\\begin{equation}\n",
    "L(\\alpha_{t}h_{t}\\,|\\,D_t) = \\mathbb{E}_{(\\mathbf{x},y) \\sim D_t}\\left[e^{-y\\alpha_{t}h_{t}(\\mathbf{x})}\\right]=e^{-\\alpha_t}P(y=h_t(\\mathbf{x}))+e^{\\alpha_t}P(y \\ne h_t(\\mathbf{x}))=e^{-\\alpha_t}(1-\\varepsilon_{t})+e^{\\alpha_t}\\varepsilon_{t} \\quad (6.2)\n",
    "\\end{equation}\n",
    "\n",
    "Taking the partial derivative of Eq. (6.2) with respect to $\\alpha_t$ and setting it to zero gives\n",
    "\n",
    "\\begin{equation}\n",
    "\\alpha_t = \\frac{1}{2}\\ln\\frac{1-\\varepsilon_t}{\\varepsilon_t} \\quad (6.3)\n",
    "\\end{equation}"
   ]
  },
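  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Eq. (6.3) can be sanity-checked numerically. The sketch below (illustrative; the error rate $\\varepsilon_t=0.2$ is an assumption) compares the closed-form weight with a grid search over the loss in Eq. (6.2):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "eps_t = 0.2  # assumed base-learner error rate\n",
    "alphas = np.linspace(0.01, 3.0, 10000)\n",
    "# exponential loss of alpha * h_t as a function of alpha, Eq. (6.2)\n",
    "loss = np.exp(-alphas) * (1 - eps_t) + np.exp(alphas) * eps_t\n",
    "alpha_closed = 0.5 * np.log((1 - eps_t) / eps_t)  # Eq. (6.3)\n",
    "alpha_grid = alphas[np.argmin(loss)]\n",
    "print(alpha_closed, alpha_grid)\n",
    "```\n",
    "\n",
    "The grid minimizer agrees with the closed form to within the grid spacing."
   ]
  },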
  {
   "cell_type": "markdown",
   "metadata": {
    "tags": []
   },
   "source": [
    "$\\textbf{6.3 Implementing AdaBoost}$\n",
    "\n",
    "The AdaBoost algorithm proceeds as follows:\n",
    "\n",
    "$\\textbf{Input:}$\n",
    "\n",
    "Training set $D=\\{(\\mathbf{x}_1,y_1), (\\mathbf{x}_2,y_2),\\dots,(\\mathbf{x}_m,y_m)\\}$ with labels $y_i \\in \\{-1,+1\\}$; number of base learners $T$.\n",
    "\n",
    "$\\textbf{Output:}$\n",
    "\n",
    "Ensemble model $H(\\mathbf{x})$\n",
    "\n",
    "$\\textbf{Procedure:}$\n",
    "\n",
    "(1) Initialize the sample weights $D_1=[w_{11},w_{12},\\dots,w_{1m}]$ with $w_{1i}=\\frac{1}{m}$, and set $t=1$.\n",
    "\n",
    "(2) Train the base learner $h_t(\\mathbf{x})$ on the training set $D$ with sample weights $D_t$.\n",
    "\n",
    "(3) Compute the error rate of $h_t(\\mathbf{x})$ on the training set: $\\varepsilon_t=\\sum_{i=1}^{m}w_{ti}I(f(\\mathbf{x}_i)\\ne h_t(\\mathbf{x}_i))$.\n",
    "\n",
    "(4) Compute the weight $\\alpha_{t}$ of $h_t(\\mathbf{x})$ from Eq. (6.3).\n",
    "\n",
    "(5) Update the sample weights: $w_{t+1,i}=\\frac{w_{t,i}}{Z_t}e^{-\\alpha_t y_i h_t(\\mathbf{x}_i)}$, where $Z_t=\\sum_{i=1}^{m}w_{t,i}e^{-\\alpha_t y_i h_t(\\mathbf{x}_i)}$ is a normalization factor.\n",
    "\n",
    "(6) If $t<T$, set $t=t+1$ and go back to (2); otherwise stop.\n",
    "\n",
    "This section implements an AdaBoost class in Python that encapsulates the algorithm, using a decision tree as the base learner. The skeleton is below; the marked blanks are to be filled in:\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.tree import DecisionTreeClassifier\n",
    "class AdaBoost:\n",
    "    def fit(self,train_x,train_y,clf_num):\n",
    "        self.weak_clfs = []\n",
    "        self.clf_alphas = []\n",
    "        n_train = len(train_x)\n",
    "        w = np.ones(n_train) / n_train\n",
    "        \n",
    "        for i in range(clf_num):\n",
    "            # train the i-th weak classifier\n",
    "            clf = DecisionTreeClassifier(max_depth=3)\n",
    "            clf.fit(train_x, train_y, sample_weight=w)\n",
    "            self.weak_clfs.append(clf)\n",
    "            # add code: classify the training set with the i-th base classifier,\n",
    "            # collect its misclassifications, and print its training accuracy\n",
    "            \n",
    "            # add code: compute the weighted error rate\n",
    "            \n",
    "            # add code: compute the ensemble coefficient alpha_i of the i-th weak classifier\n",
    "            \n",
    "            self.clf_alphas.append(alpha_i)\n",
    "            # add code: update the sample weights\n",
    "            \n",
    "    def predict(self,test_x):\n",
    "        n_test = len(test_x)\n",
    "        pred_test = np.zeros(n_test)\n",
    "        for i in range(len(self.weak_clfs)):\n",
    "            pred_test_i = self.weak_clfs[i].predict(test_x)\n",
    "            pred_test_i = [1 if x == 1 else -1 for x in pred_test_i]\n",
    "            pred_test = pred_test + np.multiply(self.clf_alphas[i], pred_test_i)\n",
    "        pred_test = (pred_test > 0) * 1\n",
    "        return pred_test"
   ]
  },
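  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For reference, one possible way to complete the loop body above is sketched as a self-contained demo below. The synthetic dataset from make_classification is an assumption (the lab's own data come later); labels are mapped to $\\{-1,+1\\}$ as in Section 6.1:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.datasets import make_classification\n",
    "from sklearn.tree import DecisionTreeClassifier\n",
    "\n",
    "X, y = make_classification(n_samples=200, n_features=5, random_state=0)\n",
    "y_pm = np.where(y == 1, 1, -1)  # map {0,1} labels to {-1,+1}\n",
    "w = np.ones(len(X)) / len(X)    # initial sample weights, w_1i = 1/m\n",
    "clfs, alphas = [], []\n",
    "for _ in range(10):\n",
    "    clf = DecisionTreeClassifier(max_depth=1).fit(X, y, sample_weight=w)\n",
    "    pred = np.where(clf.predict(X) == 1, 1, -1)\n",
    "    eps = float(np.dot(w, pred != y_pm))               # weighted error, step (3)\n",
    "    alpha = 0.5 * np.log((1 - eps) / max(eps, 1e-10))  # Eq. (6.3), step (4)\n",
    "    clfs.append(clf)\n",
    "    alphas.append(alpha)\n",
    "    w = w * np.exp(-alpha * y_pm * pred)               # step (5)\n",
    "    w = w / w.sum()                                    # normalize by Z_t\n",
    "agg = sum(a * np.where(c.predict(X) == 1, 1, -1) for a, c in zip(alphas, clfs))\n",
    "train_acc = float(np.mean((agg > 0).astype(int) == y))\n",
    "print(train_acc)\n",
    "```\n",
    "\n",
    "The boosted ensemble's training accuracy typically exceeds that of a single depth-1 stump."
   ]
  },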
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{6.4 Bagging}$\n",
    "\n",
    "Bagging samples the training set to produce several different subsets and then trains one base learner on each subset; because the training data differ, the resulting base learners are likely to differ substantially. The subsets are usually produced by bootstrap sampling: given a training set $D$ of $m$ samples, draw one sample from $D$ at random, add it to the sampled set, put it back into $D$, and draw again; after $m$ draws this yields a sampled training set that also contains $m$ samples. Bagging uses bootstrap sampling to draw $T$ such sampled sets from the original training set $D$, trains one base learner on each, and then combines the $T$ base learners by simple voting (for classification) or simple averaging (for regression) into the final ensemble model. The Bagging procedure is as follows:\n",
    "\n",
    "$\\textbf{The Bagging procedure}$\n",
    "\n",
    "$\\textbf{Input:}$\n",
    "\n",
    "Training set $D=\\{(\\mathbf{x}_1,y_1), (\\mathbf{x}_2,y_2),\\dots,(\\mathbf{x}_m,y_m)\\}$; number of base learners $T$.\n",
    "\n",
    "$\\textbf{Output:}$\n",
    "\n",
    "Ensemble model $H(\\mathbf{x})$\n",
    "\n",
    "$\\textbf{Procedure:}$\n",
    "\n",
    "(1) for $t=1:T$ do:\n",
    "\n",
    "(2) Draw a sampled set $D_t$ from the training set $D$ by bootstrap sampling.\n",
    "\n",
    "(3) Train a base learner $h_t(\\mathbf{x})$ on the sampled set $D_t$.\n",
    "\n",
    "(4) end for\n",
    "\n",
    "(5) return $H(\\mathbf{x}) = \\underset{y}{\\arg\\max}\\sum_{t=1}^{T}I(h_t(\\mathbf{x})=y)$.\n"
   ]
  },
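  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A useful property of bootstrap sampling is that a sample of size $m$ contains only about $1-1/e \\approx 63.2\\%$ of the distinct original samples, so each base learner sees a genuinely different subset. The sketch below (illustrative) verifies this empirically:\n",
    "\n",
    "```python\n",
    "import random\n",
    "\n",
    "rng = random.Random(0)\n",
    "m = 100000\n",
    "sample = [rng.randrange(m) for _ in range(m)]  # m draws with replacement\n",
    "unique_frac = len(set(sample)) / m\n",
    "print(unique_frac)  # close to 1 - 1/e = 0.632...\n",
    "```"
   ]
  },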
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{6.5 Random Forests}$\n",
    "\n",
    "A random forest (RF) is an extended variant of Bagging that uses decision trees as the base learners. Unlike a classical decision tree, which selects the best split attribute from all attributes, each tree in a random forest selects the best split attribute from a random subset of the attributes. Because both the training data and the candidate attributes differ across trees, the base learners in a random forest are quite diverse; as a result, random forests are simple and computationally cheap yet show strong performance on many practical tasks.\n",
    "\n",
    "The implementation skeleton of a random forest is shown below:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from random import randrange\n",
    "from random import randint\n",
    "from sklearn.tree import DecisionTreeClassifier\n",
    "import numpy as np\n",
    "class RandomForest:\n",
    "    # bootstrap sampling\n",
    "    def bootstrap_sampling(self,data_length):\n",
    "        sample_data_index = []        \n",
    "        # add code here: perform bootstrap sampling\n",
    "        return sample_data_index\n",
    "    # randomly select k of the feature_length attributes\n",
    "    def random_select_k_features(self, feature_length,k):\n",
    "        feature_index = []\n",
    "        # add code here: randomly select k distinct features out of feature_length\n",
    "        return feature_index\n",
    "    # sample the training set\n",
    "    def get_sampled_data(self,data_x,data_y,k):\n",
    "        data_len = data_x.shape[0]\n",
    "        feat_len = data_x.shape[1]\n",
    "        sample_data_index = self.bootstrap_sampling(data_len)\n",
    "        feature_index = self.random_select_k_features(feat_len,k)        \n",
    "        sample_data_x = data_x[sample_data_index]\n",
    "        sample_data_x = sample_data_x[:,feature_index]\n",
    "        sample_data_y = data_y[sample_data_index]\n",
    "        return sample_data_x,sample_data_y,feature_index\n",
    "    # train the random forest\n",
    "    def fit(self, train_x, train_y, tree_num, k, tree_depth):\n",
    "        \"\"\" \n",
    "        Parameters:\n",
    "        train_x: training-set features\n",
    "        train_y: training-set labels\n",
    "        tree_num: number of decision trees in the forest\n",
    "        k: number of features selected for each tree\n",
    "        tree_depth: maximum depth of each tree\n",
    "        \"\"\"\n",
    "        self.feature_list = []\n",
    "        self.trees = []\n",
    "        for i in range(tree_num):\n",
    "            sample_data_x,sample_data_y,feature_index = self.get_sampled_data(train_x, train_y, k)\n",
    "            self.feature_list.append(feature_index)\n",
    "            clf = DecisionTreeClassifier(criterion='gini',max_depth=tree_depth)\n",
    "            clf.fit(sample_data_x,sample_data_y)\n",
    "            self.trees.append(clf)\n",
    "    # random forest prediction\n",
    "    def predict(self,test_x):\n",
    "        pred_result = np.zeros((len(test_x),len(self.trees)),dtype=int)\n",
    "        labels = []\n",
    "        for i in range(len(self.trees)):\n",
    "            test_x_sub = test_x[:,self.feature_list[i]]\n",
    "            pred_y = self.trees[i].predict(test_x_sub)\n",
    "            pred_result[:,i] = pred_y\n",
    "        for i in range(len(test_x)):\n",
    "            label = self.majorityCount(pred_result[i,:])\n",
    "            labels.append(label)\n",
    "        return pred_result,labels\n",
    "    # pick the most frequent class among the votes\n",
    "    def majorityCount(self,votes):\n",
    "        class_list = []\n",
    "        for c in votes:\n",
    "            if c not in class_list:\n",
    "                class_list.append(c)\n",
    "        count = []\n",
    "        for c in class_list:\n",
    "            num = 0\n",
    "            for x in votes:\n",
    "                if x == c:\n",
    "                    num += 1\n",
    "            count.append(num)\n",
    "        max_count = 0 \n",
    "        max_index = 0\n",
    "        for i in range(len(count)):\n",
    "            if count[i] > max_count:\n",
    "                max_count =count[i]\n",
    "                max_index = i\n",
    "        return class_list[max_index]"
   ]
  },
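  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The two blanks above can be completed, for example, with logic like the following standalone sketch (one possible implementation, written as plain functions; the class methods would use the same bodies):\n",
    "\n",
    "```python\n",
    "import random\n",
    "\n",
    "def bootstrap_sampling(data_length, rng=None):\n",
    "    # draw data_length indices uniformly with replacement\n",
    "    rng = rng or random.Random()\n",
    "    return [rng.randrange(data_length) for _ in range(data_length)]\n",
    "\n",
    "def random_select_k_features(feature_length, k, rng=None):\n",
    "    # choose k distinct feature indices without replacement\n",
    "    rng = rng or random.Random()\n",
    "    return rng.sample(range(feature_length), k)\n",
    "\n",
    "idx = bootstrap_sampling(100, random.Random(0))\n",
    "feats = random_select_k_features(10, 4, random.Random(0))\n",
    "print(len(idx), sorted(feats))\n",
    "```"
   ]
  },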
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "$\\textbf{6.6 Practical Task}$\n",
    "\n",
    "More and more consumers now pay by credit card, and banks are investing heavily in growing their credit-card business, so the industry is expanding rapidly. Because competition is fierce and credit-card products are highly homogeneous, commercial banks urgently need more effective ways to enlarge their customer base through precision marketing, reducing costs and improving returns. This task uses 1,000 customer records from a credit-card marketing campaign (see \"信用卡精准营销模型.csv\") to build a targeted credit-card marketing model with the AdaBoost and random forest algorithms implemented in this chapter. The steps are as follows:"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(1) Load the credit-card marketing data and perform any necessary preprocessing"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "data = pd.read_csv(\"datasets/信用卡精准营销模型.csv\",encoding='gbk')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(2) Use 10-fold cross-validation to evaluate the predictive accuracy of the credit-card marketing model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "from sklearn.model_selection import KFold\n",
    "from sklearn.metrics import accuracy_score\n",
    "\n",
    "# add implementation code\n"
   ]
  },
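  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a reference for the structure of step (2), the sketch below runs 10-fold cross-validation on synthetic data (an assumption; the real task uses the credit-card CSV), with sklearn's RandomForestClassifier standing in for the models implemented in this chapter:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.datasets import make_classification\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "from sklearn.metrics import accuracy_score\n",
    "from sklearn.model_selection import KFold\n",
    "\n",
    "X, y = make_classification(n_samples=300, random_state=0)\n",
    "kf = KFold(n_splits=10, shuffle=True, random_state=0)\n",
    "accs = []\n",
    "for train_idx, test_idx in kf.split(X):\n",
    "    clf = RandomForestClassifier(n_estimators=50, random_state=0)\n",
    "    clf.fit(X[train_idx], y[train_idx])\n",
    "    accs.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))\n",
    "print(len(accs), float(np.mean(accs)))\n",
    "```"
   ]
  },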
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "(3) Compare the predictive performance of the AdaBoost and random forest algorithms implemented in this chapter with sklearn's AdaBoostClassifier and RandomForestClassifier on the same dataset."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.ensemble import AdaBoostClassifier\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "\n",
    "# add implementation code"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
