{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## sklearn GradientBoostingRegressor source-code walkthrough\n",
    "\n",
    "1. Initialize parameters and the number of boosting iterations\n",
    "2. Initialize the predictions (raw_predictions): for regression this is the mean of y\n",
    "3. Boosting steps  \n",
    "    3.1 for i in range(n_iterations):\n",
    "           compute the residuals from the current predictions, then fit a tree to them;\n",
    "           update the predictions: note they are the sum over ALL trees, not just the current one!\n",
    "    3.2 After the loop, keep every tree and the initial predictions for use at predict time"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "((506, 13), (506,))"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from sklearn.tree import DecisionTreeRegressor\n",
    "from sklearn.ensemble import GradientBoostingRegressor\n",
    "# note: load_boston was deprecated in scikit-learn 1.0 and removed in 1.2;\n",
    "# this notebook was written against an older version (Python 3.7 era)\n",
    "from sklearn.datasets import load_boston\n",
    "import numpy as np\n",
    "X, y = load_boston(return_X_y=True)\n",
    "X.shape,y.shape"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_init_raw_predictions(X,y):\n",
    "    # regression: the initial prediction is the mean of y\n",
    "    constant_ = np.average(y, axis=0)\n",
    "    raw_predictions = y.copy()\n",
    "    raw_predictions[:] = constant_\n",
    "    return raw_predictions\n",
    "\n",
    "def negative_gradient(y, raw_predictions):\n",
    "    # for squared-error loss the negative gradient is simply the residual\n",
    "    return y - raw_predictions.ravel()\n",
    "\n",
    "def loss_(y, raw_predictions):\n",
    "    # mean squared error over all samples\n",
    "    return np.mean((y - raw_predictions.ravel())** 2)\n",
    "\n",
    "# Update the predictions. Note: raw_predictions is the sum over ALL trees so far,\n",
    "# not just the current tree!\n",
    "def update_terminal_regions(tree, X, y, residual, raw_predictions):\n",
    "    # y and residual are unused here; the signature mirrors the classifier version below\n",
    "    pre = tree.predict(X)\n",
    "    raw_predictions += pre\n",
    "    return raw_predictions\n",
    "\n",
    "def predict(X,init_raw_predictions,estimators_):\n",
    "    # start from the initial prediction (the training-set mean)\n",
    "    raw_predictions = np.full((X.shape[0]),init_raw_predictions[0])\n",
    "    for tree in estimators_:\n",
    "        pre = tree.predict(X.astype(np.float32))\n",
    "        raw_predictions+= pre\n",
    "    return raw_predictions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Training loss decreases steadily: [46.19909168 31.57830339 28.84084433 25.38314259 23.19508896 20.92134251]\n",
      "\n",
      "Predictions: [23.19416871 23.19416871 40.69684411 40.69684411 35.29381512 23.19416871\n",
      " 23.19416871 15.18681546 15.18681546 15.18681546]\n"
     ]
    }
   ],
   "source": [
    "# initialize parameters: number of boosting iterations\n",
    "n_estimators = 6\n",
    "train_score_ = []\n",
    "# preallocate arrays for the trees and per-iteration losses (mirrors sklearn's bookkeeping)\n",
    "estimators_ = DecisionTreeRegressor()\n",
    "estimators_ = np.resize(estimators_,n_estimators)\n",
    "train_score_ = np.resize(train_score_,n_estimators)\n",
    "\n",
    "# initialization: for regression this is the mean; keep a copy for predict()\n",
    "raw_predictions = get_init_raw_predictions(X,y)\n",
    "init_raw_predictions = raw_predictions.copy()\n",
    "\n",
    "# boosting loop\n",
    "for iteration in range(n_estimators):\n",
    "    raw_predictions_copy = raw_predictions.copy()\n",
    "    residual = negative_gradient(y, raw_predictions_copy)\n",
    "\n",
    "    # fit a tree to the residuals; the simplest tree is a stump with two leaves\n",
    "    tree = DecisionTreeRegressor(max_leaf_nodes=2)\n",
    "    tree.fit(X, residual)\n",
    "\n",
    "    # update raw_predictions with the newly fitted tree\n",
    "    raw_predictions = update_terminal_regions(tree, X, y, residual, raw_predictions)\n",
    "   \n",
    "    # store the tree\n",
    "    estimators_[iteration] = tree\n",
    "    #print(tree.tree_.value)\n",
    "    \n",
    "    # overall training loss (mean squared error)\n",
    "    train_score_[iteration] = loss_(y, raw_predictions)\n",
    "    \n",
    "print('Training loss decreases steadily:',train_score_)\n",
    "print('\\nPredictions:',predict(X[:10,:],init_raw_predictions,estimators_).ravel())\n"
   ]
  },
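  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a sanity check (an addition, not part of the original walkthrough), the manual loop above can be compared against sklearn's own GradientBoostingRegressor. learning_rate=1.0 matches the unshrunk updates used above, and criterion='friedman_mse' makes the hand-fitted stumps use the same split criterion as sklearn's internal trees; synthetic data keeps the cell self-contained. The two sets of predictions should agree."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.tree import DecisionTreeRegressor\n",
    "from sklearn.ensemble import GradientBoostingRegressor\n",
    "\n",
    "rng = np.random.RandomState(0)\n",
    "Xs = rng.rand(100, 3)\n",
    "ys = Xs @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.randn(100)\n",
    "\n",
    "# manual loop, exactly as above (init = mean, unshrunk residual fitting)\n",
    "raw = np.full(ys.shape, ys.mean())\n",
    "for _ in range(6):\n",
    "    t = DecisionTreeRegressor(max_leaf_nodes=2, criterion='friedman_mse')\n",
    "    t.fit(Xs, ys - raw)\n",
    "    raw += t.predict(Xs)\n",
    "\n",
    "gbr = GradientBoostingRegressor(n_estimators=6, learning_rate=1.0,\n",
    "                                max_leaf_nodes=2)\n",
    "gbr.fit(Xs, ys)\n",
    "print('max abs difference:', np.abs(raw - gbr.predict(Xs)).max())"
   ]
  },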
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Training loss decreases steadily: [6.25000000e-03 2.91666667e-03 6.01851852e-04 3.44650206e-04\n",
      " 1.25171468e-04 5.20118884e-05]\n",
      "\n",
      "Predictions: [1.1        1.29012346 1.69938272 1.81049383]\n"
     ]
    }
   ],
   "source": [
    "# another example\n",
    "x1 = [5, 7, 21, 30]\n",
    "x2 = [20, 30, 70, 60]\n",
    "X = np.array([x1,x2]).T.reshape(4,2)\n",
    "y = np.array([1.1, 1.3, 1.7, 1.8])\n",
    "\n",
    "# X = np.array([1,2,3,4,5,6,7,8,9,10]).reshape(10,-1)\n",
    "# y = np.array([5.56,5.70,5.91,6.40,6.80,7.05,8.9,8.70,9.00,9.05])\n",
    "# initialization: for regression this is the mean; keep a copy for predict()\n",
    "# (n_estimators, estimators_ and train_score_ are reused from the cell above)\n",
    "raw_predictions = get_init_raw_predictions(X,y)\n",
    "init_raw_predictions = raw_predictions.copy()\n",
    "\n",
    "# boosting loop\n",
    "for iteration in range(n_estimators):\n",
    "    raw_predictions_copy = raw_predictions.copy()\n",
    "    residual = negative_gradient(y, raw_predictions_copy)\n",
    "\n",
    "    # fit a tree to the residuals; the simplest tree is a stump with two leaves\n",
    "    tree = DecisionTreeRegressor(max_leaf_nodes=2)\n",
    "    tree.fit(X, residual)\n",
    "\n",
    "    # update raw_predictions with the newly fitted tree\n",
    "    raw_predictions = update_terminal_regions(tree, X, y, residual, raw_predictions)\n",
    "   \n",
    "    # store the tree\n",
    "    estimators_[iteration] = tree\n",
    "    #print(tree.tree_.value)\n",
    "    \n",
    "    # overall training loss (mean squared error)\n",
    "    train_score_[iteration] = loss_(y, raw_predictions)\n",
    "    \n",
    "print('Training loss decreases steadily:',train_score_)\n",
    "print('\\nPredictions:',predict(X,init_raw_predictions,estimators_).ravel())\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## sklearn GradientBoostingClassifier source-code walkthrough\n",
    "\n",
    "1. Initialize parameters and the number of boosting iterations\n",
    "2. Initialize the predictions raw_predictions: for classification this is the log-odds of the class prior (for regression it would be the mean)\n",
    "3. Boosting steps  \n",
    "    3.1 for i in range(n_iterations):\n",
    "           compute the residuals from the current predictions, then fit a tree\n",
    "           \n",
    "           recompute the leaf values: here a leaf value is no longer the mean of the residuals in the leaf\n",
    "           (which is what the fitted tree stores); it is recomputed by update_terminal_region,\n",
    "           which amounts to a second-derivative (Newton) weighting; then update the predictions with the new leaf values\n",
    "           \n",
    "           update the predictions: note they are the sum over ALL trees, not just the current one!\n",
    "           \n",
    "           compute the training loss, here the log loss (see loss_)\n",
    "    3.2 After the loop, keep every tree and the initial predictions for use at predict time\n",
    "\n",
    "#### Regression vs. classification\n",
    "1. The key difference is in the negative gradient: for classification the raw predictions must first be mapped to probabilities with the sigmoid (logistic) function\n",
    "2. When updating leaf values, classification uses a second-derivative (Newton) weighting, much like XGBoost; regression simply averages\n",
    "3. For the cost function (over all samples it is called the cost function; for a single sample, the loss function), regression uses mean squared error and classification uses cross-entropy (log loss)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.ensemble import GradientBoostingClassifier\n",
    "from sklearn.datasets import load_iris\n",
    "from sklearn.tree import DecisionTreeRegressor\n",
    "import numpy as np\n",
    "from scipy.special import expit\n",
    "from sklearn.tree._tree import TREE_LEAF\n",
    "\n",
    "\n",
    "# log(x / (1 - x)) is the inverse of the sigmoid (expit) function\n",
    "# if the labels are not 0/1, they are re-indexed via the class labels\n",
    "# proba_pos_class is derived from the class prior:  prior = pos/(pos+neg)\n",
    "def get_init_raw_predictions(X,y):\n",
    "    classes_, y = np.unique(y, return_inverse=True)\n",
    "    y = y.astype(np.float64)\n",
    "    pos = len(y[np.where(y == 1)])\n",
    "    prior_1 = pos/len(y)\n",
    "    proba_pos_class = y.copy()\n",
    "    proba_pos_class[:] = prior_1\n",
    "    # map the prior probability to a raw prediction via the inverse sigmoid:\n",
    "    # log(x / (1 - x)) is the inverse of 1/(1+exp(-x))\n",
    "    raw_predictions = np.log(proba_pos_class / (1 - proba_pos_class))\n",
    "    return raw_predictions\n",
    "\n",
    "\n",
    "# compute the negative gradient\n",
    "# expit(x) = 1/(1+exp(-x))\n",
    "def negative_gradient(y,raw_predictions):\n",
    "    return y - expit(raw_predictions.ravel())\n",
    "\n",
    "# Compute the deviance (= 2 * negative log-likelihood).\n",
    "# logloss = - y*log(p) - (1-y)*log(1-p); here the input is raw_predictions = f, with p = 1/(1+exp(-f))\n",
    "# substituting and simplifying gives  - (y*f - log(1+exp(f)))\n",
    "# logaddexp(0, v) == log(1.0 + exp(v))\n",
    "def loss_(y,raw_predictions):\n",
    "    return  -2 * np.mean((y * raw_predictions) -np.logaddexp(0, raw_predictions))\n",
    "    \n",
    "\n",
    "# Update the predictions. Note: raw_predictions is the sum over ALL trees, not just the current one!\n",
    "def update_terminal_regions(tree, X, y, residual, raw_predictions):\n",
    "\n",
    "    terminal_regions = tree.apply(X.astype(np.float32))\n",
    "\n",
    "    # mask all which are not in sample mask.\n",
    "    sample_mask = np.ones((len(y), ), dtype=bool)    \n",
    "    masked_terminal_regions = terminal_regions.copy()\n",
    "    masked_terminal_regions[~sample_mask] = -1\n",
    "    \n",
    "    # compare the leaf values before and after the update\n",
    "    #print(tree.value[:, 0, 0].take(terminal_regions, axis=0))\n",
    "    \n",
    "    # update each leaf (= perform line search); the updated leaf value is not simply the fitted prediction\n",
    "    for leaf in np.where(tree.children_left == TREE_LEAF)[0]:\n",
    "        tree= update_terminal_region(tree, masked_terminal_regions,\n",
    "                                     leaf, X, y, residual,\n",
    "                                     raw_predictions )\n",
    "        \n",
    "    #print(tree.value[:, 0, 0].take(terminal_regions, axis=0))\n",
    "    raw_predictions += tree.value[:, 0, 0].take(terminal_regions, axis=0)\n",
    "    return raw_predictions\n",
    "\n",
    "\n",
    "'''\n",
    "Make a single Newton-Raphson step.\n",
    "value = sum((y - prob)) / sum(prob * (1 - prob))\n",
    "y - prob = residual\n",
    "\n",
    "p is the predicted probability:  p = sigmoid(raw_pre)\n",
    "logloss = -y*log(p) - (1-y)*log(1-p)\n",
    "first derivative  G: p - y\n",
    "second derivative H: p(1-p)\n",
    "value is the ratio of the first and second derivatives, -G/H, as in the formula above.\n",
    "XGBoost uses the same first/second derivatives plus regularization:  w = -G/(H+lambda)\n",
    "So the value GBDT stores in a leaf is not the raw residual but a Newton-weighted version of it.\n",
    "'''\n",
    "def update_terminal_region(tree, terminal_regions, leaf, X, y, residual, raw_predictions):\n",
    "    terminal_region = np.where(terminal_regions == leaf)[0]\n",
    "    residual = residual.take(terminal_region, axis=0)\n",
    "    y = y.take(terminal_region, axis=0)\n",
    "\n",
    "    numerator = np.sum(residual)\n",
    "    denominator = np.sum((y - residual) * (1 - y + residual))\n",
    "\n",
    "    # prevents overflow and division by zero\n",
    "    if abs(denominator) < 1e-150:\n",
    "        tree.value[leaf, 0, 0] = 0.0\n",
    "    else:\n",
    "        tree.value[leaf, 0, 0] = numerator / denominator\n",
    "    return tree\n",
    "\n",
    "# prediction\n",
    "def predict(X,init_raw_predictions,estimators_):\n",
    "    # start from the initial raw prediction (the prior log-odds)\n",
    "    raw_predictions = np.full((X.shape[0]),init_raw_predictions[0])\n",
    "    for tree in estimators_:\n",
    "        pre = tree.predict(X)\n",
    "        raw_predictions+= pre\n",
    "    return raw_predictions"
   ]
  },
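  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick numeric check (an addition): the initialization above maps the class prior to a raw prediction with the inverse sigmoid (the log-odds), and expit maps it back to a probability."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from scipy.special import expit\n",
    "\n",
    "y_toy = np.array([0, 0, 0, 1, 1])      # toy labels: prior = 2/5 = 0.4\n",
    "prior = y_toy.mean()\n",
    "raw0 = np.log(prior / (1 - prior))     # log-odds = inverse sigmoid of the prior\n",
    "print(raw0, expit(raw0))               # expit maps the raw score back to the prior"
   ]
  },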
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Training loss decreases steadily: [0.62563623 0.35924393 0.25562034 0.19766221 0.15276796 0.10551893]\n",
      "\n",
      "Predictions: [0.03972771 0.03972771 0.03972771 0.03972771 0.03972771 0.00477442\n",
      " 0.03972771 0.03972771 0.04572219 0.03972771 0.03972771 0.03972771\n",
      " 0.03972771 0.00233191 0.13598437 0.13598437 0.00477442 0.03972771\n",
      " 0.13598437 0.00477442 0.03972771 0.03972771 0.03972771 0.03972771\n",
      " 0.03972771 0.03972771 0.03972771 0.03972771 0.03972771 0.03972771\n",
      " 0.03972771 0.03972771 0.00477442 0.00477442 0.03972771 0.03972771\n",
      " 0.03972771 0.03972771 0.00233191 0.03972771 0.03972771 0.29029874\n",
      " 0.00233191 0.03972771 0.00477442 0.03972771 0.00477442 0.03972771\n",
      " 0.03972771 0.03972771 0.92052158 0.92052158 0.92052158 0.87864195\n",
      " 0.99580567 0.96530476 0.92052158 0.87864195 0.99580567 0.87864195\n",
      " 0.87864195 0.92052158 0.99950688 0.99580567 0.96530476 0.92052158\n",
      " 0.57578134 0.99580759 0.99950688 0.99580759 0.92052158 0.99580567\n",
      " 0.99950688 0.99580567 0.99580567 0.92052158 0.99580567 0.92052158\n",
      " 0.99580567 0.99580759]\n"
     ]
    }
   ],
   "source": [
    "X, y = load_iris(return_X_y=True)\n",
    "X = X[:80, :2]\n",
    "y = y[:80]\n",
    "\n",
    "# initialize parameters: number of boosting iterations\n",
    "n_estimators = 6\n",
    "train_score_ = []\n",
    "estimators_ = DecisionTreeRegressor()\n",
    "estimators_ = np.resize(estimators_,n_estimators)\n",
    "train_score_ = np.resize(train_score_,n_estimators)\n",
    "\n",
    "# initialization: for regression the mean, for classification the prior log-odds\n",
    "raw_predictions = get_init_raw_predictions(X,y)\n",
    "init_raw_predictions = raw_predictions.copy()\n",
    "\n",
    "# boosting loop\n",
    "for iteration in range(n_estimators):\n",
    "    # residuals from the current predictions\n",
    "    raw_predictions_copy = raw_predictions.copy()\n",
    "    residual = negative_gradient(y, raw_predictions_copy)\n",
    "\n",
    "    # fit a tree to the residuals; the simplest tree is a stump with two leaves\n",
    "    tree = DecisionTreeRegressor(max_leaf_nodes=2)\n",
    "    tree.fit(X, residual)\n",
    "\n",
    "    # update raw_predictions with the newly fitted tree (Newton-weighted leaves)\n",
    "    raw_predictions = update_terminal_regions(tree.tree_, X, y, residual, raw_predictions)\n",
    "   \n",
    "    # store the tree\n",
    "    estimators_[iteration] = tree\n",
    "    #print(tree.tree_.value)\n",
    "    \n",
    "    # overall training loss (log loss)\n",
    "    train_score_[iteration] = loss_(y, raw_predictions)\n",
    "    \n",
    "\n",
    "print('Training loss decreases steadily:',train_score_)\n",
    "print('\\nPredictions:',expit(predict(X,init_raw_predictions,estimators_)).ravel())\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Multiclass\n",
    "- Convert the labels to one-hot format and train one tree per class (three trees for three classes)\n",
    "- For each sample, the three trees give three raw predictions, which the softmax function turns into probabilities\n",
    "- Treat these probabilities as the current predictions; y - p gives the residuals for the next round  \n",
    "\n",
    "For three classes, raw_predictions has shape (n_samples, 3), i.e. three columns, so each round trains three trees"
   ]
  },
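  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The multiclass steps above can be sketched as follows. This is a simplified, self-contained sketch (an addition): plain residual fitting with learning rate 1.0 and no Newton leaf update, which sklearn's real implementation does apply per class."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.tree import DecisionTreeRegressor\n",
    "from sklearn.datasets import load_iris\n",
    "\n",
    "X_m, y_m = load_iris(return_X_y=True)\n",
    "K = 3\n",
    "Y = np.eye(K)[y_m]                   # one-hot labels, shape (n, K)\n",
    "raw = np.zeros((X_m.shape[0], K))    # raw_predictions: one column per class\n",
    "\n",
    "def softmax(r):\n",
    "    e = np.exp(r - r.max(axis=1, keepdims=True))\n",
    "    return e / e.sum(axis=1, keepdims=True)\n",
    "\n",
    "for _ in range(10):                  # boosting rounds\n",
    "    p = softmax(raw)                 # current probabilities\n",
    "    for k in range(K):               # one tree per class per round\n",
    "        t = DecisionTreeRegressor(max_leaf_nodes=2)\n",
    "        t.fit(X_m, Y[:, k] - p[:, k])   # residual = y_k - p_k\n",
    "        raw[:, k] += t.predict(X_m)\n",
    "\n",
    "pred = softmax(raw).argmax(axis=1)\n",
    "print('training accuracy:', (pred == y_m).mean())"
   ]
  },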
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "References:  \n",
    "GBDT source-code walkthrough and implementation (part 2): https://blog.csdn.net/jin_tmac/article/details/78954068"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
