{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Installation\n",
    "--\n",
    "\n",
    "```\n",
    "pip install lightgbm\n",
    "```\n",
    "\n",
    "GitHub repository: https://github.com/Microsoft/LightGBM\n",
    "\n",
    "# Chinese tutorial\n",
    "\n",
    "\n",
    "http://lightgbm.apachecn.org/cn/latest/index.html\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Looking in indexes: https://mirrors.tencent.com/pypi/simple/, https://mirrors.tencent.com/repository/pypi/tencent_pypi/simple\n",
      "Collecting lightgbm\n",
      "  Downloading https://mirrors.tencent.com/pypi/packages/a1/00/84c572ff02b27dd828d6095158f4bda576c124c4c863be7bf14f58101e53/lightgbm-3.3.2-py3-none-manylinux1_x86_64.whl (2.0 MB)\n",
      "     |████████████████████████████████| 2.0 MB 592 kB/s            \n",
      "\u001b[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from lightgbm) (1.19.4)\n",
      "Requirement already satisfied: scikit-learn!=0.22.0 in /usr/local/lib/python3.6/dist-packages (from lightgbm) (0.23.2)\n",
      "Requirement already satisfied: wheel in /usr/local/lib/python3.6/dist-packages (from lightgbm) (0.36.2)\n",
      "Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from lightgbm) (1.5.4)\n",
      "Requirement already satisfied: threadpoolctl>=2.0.0 in /usr/local/lib/python3.6/dist-packages (from scikit-learn!=0.22.0->lightgbm) (2.1.0)\n",
      "Requirement already satisfied: joblib>=0.11 in /usr/local/lib/python3.6/dist-packages (from scikit-learn!=0.22.0->lightgbm) (1.0.0)\n",
      "Installing collected packages: lightgbm\n",
      "Successfully installed lightgbm-3.3.2\n",
      "\u001b[33mWARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv\u001b[0m\n",
      "Note: you may need to restart the kernel to use updated packages.\n"
     ]
    }
   ],
   "source": [
    "pip install lightgbm"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Introduction to LightGBM\n",
    "\n",
    "\n",
    "The arrival of XGBoost let data practitioners move on from the traditional machine learning algorithms: RF, GBM, SVM, LASSO, and so on.\n",
    "\n",
    "Microsoft has since released a new boosting framework that aims to challenge XGBoost's position.\n",
    "\n",
    "As the name suggests, LightGBM combines two key ideas: 'light', meaning lightweight, and GBM, the gradient boosting machine.\n",
    "\n",
    "LightGBM is a gradient boosting framework that uses tree-based learning algorithms. It is distributed and efficient, with the following advantages:\n",
    "\n",
    " - Faster training speed\n",
    "\n",
    " - Lower memory usage\n",
    "\n",
    " - Better accuracy\n",
    "\n",
    " - Support for parallel learning\n",
    "\n",
    " - Capable of handling large-scale data\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Drawbacks of XGBoost\n",
    "\n",
    "\n",
    "Its drawbacks, or rather its weak points:\n",
    "\n",
    "Every boosting iteration requires several passes over the entire training set. Holding the whole training set in memory limits how much data can be used;\n",
    "\n",
    "not holding it in memory means repeatedly reading and writing the training data, which costs a great deal of time.\n",
    "\n",
    "The pre-sorted method:\n",
    "\n",
    "First, it consumes a lot of memory. The algorithm must store the feature values as well as the result of sorting each feature (e.g. the sorted indices, kept for fast split-point computation later), which requires roughly twice the memory of the training data.\n",
    "\n",
    "Second, it is expensive in time: the split gain must be computed at every candidate split point, which is costly.\n",
    "\n",
    "It is also cache-unfriendly. After pre-sorting, each feature accesses the gradients in a random order, and different features access them in different orders, so the cache cannot be optimized for.\n",
    "\n",
    "Likewise, when growing each level of the tree, a row-index-to-leaf-index array is accessed randomly, again in a different order per feature, causing frequent cache misses."
   ]
  },
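  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The cost structure of the pre-sorted method can be sketched in a few lines of NumPy (a simplified illustration with made-up numbers, not XGBoost's actual code):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Toy feature column and per-sample gradients (hypothetical values)\n",
    "x = np.array([3.1, 0.5, 2.2, 4.8, 1.7])\n",
    "grad = np.array([0.2, -1.0, 0.4, 0.7, -0.3])\n",
    "\n",
    "# Pre-sorting: the sorted index array is stored alongside the raw values,\n",
    "# which is the roughly 2x memory overhead described above\n",
    "order = np.argsort(x)\n",
    "\n",
    "# Every boundary between consecutive sorted samples is a candidate split,\n",
    "# so the gain computation touches O(#data) positions per feature\n",
    "left_grad_sums = np.cumsum(grad[order])[:-1]\n",
    "```\n"
   ]
  },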
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Features of LightGBM\n",
    "\n",
    "The points above are less XGBoost's shortcomings than the exact targets the LightGBM authors aimed at when designing the new algorithm.\n",
    "\n",
    "Whatever the new model solves, the fact that the old model did not solve it becomes the old model's weakness.\n",
    "\n",
    "In summary, LightGBM's main features are:\n",
    "\n",
    " - Histogram-based decision tree algorithm\n",
    "\n",
    " - Leaf-wise growth strategy with a maximum depth limit\n",
    "\n",
    " - Histogram subtraction for speed-up\n",
    "\n",
    " - Direct support for categorical features\n",
    "\n",
    " - Cache hit-rate optimization\n",
    "\n",
    " - Histogram-based sparse feature optimization\n",
    "\n",
    " - Multi-threading optimization\n",
    "\n",
    "The first two are the ones we care about most."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**The Histogram algorithm**\n",
    "\n",
    "The basic idea of the histogram algorithm: discretize the continuous floating-point feature values into k integers and build a histogram with k bins.\n",
    "\n",
    "While scanning the data, the discretized value is used as an index to accumulate statistics in the histogram. After one pass over the data, the histogram holds all the statistics needed, and the optimal split point is then found by scanning the histogram's k discrete values.\n",
    "\n",
    "**Leaf-wise growth with a maximum depth limit**\n",
    "\n",
    "Level-wise growth can split all leaves of the same level in a single pass over the data, which makes multi-threading easy, keeps model complexity under control, and resists overfitting.\n",
    "\n",
    "In practice, however, level-wise growth is inefficient: it treats all leaves of a level indiscriminately, wasting effort, because many leaves have low split gain and need not be searched or split at all.\n",
    "\n",
    "Leaf-wise growth is a more efficient strategy: at each step, find the leaf with the largest split gain among all current leaves, split it, and repeat.\n",
    "\n",
    "Compared with level-wise growth, leaf-wise growth therefore reduces the error more for the same number of splits, yielding better accuracy.\n",
    "\n",
    "The drawback of leaf-wise growth: it can produce rather deep trees and overfit. LightGBM therefore adds a maximum depth limit on top of leaf-wise growth, preventing overfitting while retaining its efficiency."
   ]
  },
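  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The bin-and-scan procedure above can be sketched with NumPy (a simplified sketch: `histogram_best_split`, the quantile binning, and the regularizer `lam` are illustrative assumptions, not LightGBM's actual implementation):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def histogram_best_split(x, grad, hess, k=8, lam=1.0):\n",
    "    # Step 1: discretize the continuous feature into k integer bins\n",
    "    edges = np.quantile(x, np.linspace(0, 1, k + 1)[1:-1])\n",
    "    bins = np.digitize(x, edges)\n",
    "    # Step 2: one pass over the data accumulates per-bin statistics\n",
    "    G = np.bincount(bins, weights=grad, minlength=k)\n",
    "    H = np.bincount(bins, weights=hess, minlength=k)\n",
    "    # Step 3: scan the k bins (not the n samples) for the best split\n",
    "    best_gain, best_bin = -np.inf, None\n",
    "    GL = HL = 0.0\n",
    "    for b in range(k - 1):\n",
    "        GL, HL = GL + G[b], HL + H[b]\n",
    "        GR, HR = G.sum() - GL, H.sum() - HL\n",
    "        gain = GL**2 / (HL + lam) + GR**2 / (HR + lam)\n",
    "        if gain > best_gain:\n",
    "            best_gain, best_bin = gain, b\n",
    "    return best_bin\n",
    "```\n",
    "\n",
    "Splitting to the left of bin `best_bin` separates the samples whose gradients differ most, at an O(k) scan cost per feature.\n"
   ]
  },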
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# XGBoost vs. LightGBM\n",
    "\n",
    "\n",
    "**Decision tree algorithm**\n",
    "\n",
    "XGBoost uses the pre-sorted algorithm, which finds split points more precisely:\n",
    "\n",
    " - First, pre-sort all features by value.\n",
    " \n",
    " - Then, at each split, find the optimal split point of each feature at a cost of O(#data).\n",
    " \n",
    " - Finally, take the best feature and split point and divide the data into left and right child nodes. \n",
    "\n",
    "Pros and cons: \n",
    "\n",
    "The pre-sorted algorithm finds split points exactly, but at a large cost in both space and time. \n",
    "\n",
    " - i. Because the features must be pre-sorted and the sorted indices stored (for fast split-point computation later), memory usage is twice the size of the training data.\n",
    " \n",
    " - ii. The split gain must be computed at every candidate split point, which is expensive.\n",
    "\n",
    "\n",
    "\n",
    "LightGBM uses the histogram algorithm, which needs less memory and has lower split-finding complexity.\n",
    "\n",
    "The idea is to discretize the continuous floating-point features into k discrete values and build a histogram with k bins.\n",
    "\n",
    "Then scan the training data once, accumulating statistics for each discrete value in the histogram.\n",
    "\n",
    "When searching for a split, only the histogram's k discrete values need to be scanned to find the optimal split point.\n",
    "\n",
    "Pros and cons of the Histogram algorithm:\n",
    "\n",
    " - The histogram algorithm is not perfect. Because the features are discretized, the split points found are not exact, which can affect the result. In practice, however, the discretized split points have little effect on final accuracy and are sometimes even slightly better. The reason is that a decision tree is a weak learner anyway; the histogram algorithm acts as a regularizer and effectively prevents overfitting.\n",
    " \n",
    " - The time cost drops from `O(#data * #features)` to `O(k * #features)`. Since `#bin` is far smaller than `#data`, this is a large speed-up.\n",
    " \n",
    " - The histogram algorithm can be accelerated further: a leaf's histogram can be obtained directly by subtracting its sibling's histogram from its parent's histogram.\n",
    " \n",
    " - Normally, building a histogram requires a pass over all the data in the leaf; with this trick, only the histogram's k bins need to be traversed, doubling the speed. "
   ]
  },
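  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The histogram-subtraction trick in the last two bullets is easy to see with toy numbers (made-up per-bin gradient sums, for illustration only):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "# Per-bin gradient sums for a parent node and its left child\n",
    "parent_hist = np.array([5.0, 3.0, 7.0, 2.0])\n",
    "left_hist = np.array([2.0, 1.0, 4.0, 0.0])\n",
    "\n",
    "# The right child's histogram needs no pass over the data:\n",
    "# every parent sample lands in exactly one of the two children\n",
    "right_hist = parent_hist - left_hist\n",
    "```\n"
   ]
  },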
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Tree growth strategy**\n",
    "\n",
    "XGBoost grows trees level-wise (depth-wise), as shown in Figure 1: it can split all leaves of the same level at once, which enables multi-threading and resists overfitting; but it treats all leaves of a level indiscriminately, wasting effort.\n",
    "\n",
    "In practice, many leaves have low split gain and need not be searched or split. \n",
    "\n",
    "![level-wise growth (Figure 1)](https://imgconvert.csdnimg.cn/aHR0cDovLzViMDk4OGU1OTUyMjUuY2RuLnNvaHVjcy5jb20vaW1hZ2VzLzIwMTcxMTIzL2NlOTQ4MjhlN2I3YTQ1ODBhNTlhMGY5MWQ5NWU1ZjgzLnBuZw?x-oss-process=image/format,png)\n",
    "\n",
    "LightGBM grows trees leaf-wise, as shown in Figure 2: at each step it finds, among all current leaves, the one with the largest split gain (usually also the one with the most data), splits it, and repeats.\n",
    "\n",
    "Compared with level-wise growth, leaf-wise growth therefore reduces the error more for the same number of splits, yielding better accuracy. Its drawback is that it can produce rather deep trees and overfit.\n",
    "\n",
    "LightGBM therefore adds a maximum depth limit on top of leaf-wise growth, preventing overfitting while retaining its efficiency. \n",
    "\n",
    "![leaf-wise growth (Figure 2)](https://imgconvert.csdnimg.cn/aHR0cDovLzViMDk4OGU1OTUyMjUuY2RuLnNvaHVjcy5jb20vaW1hZ2VzLzIwMTcxMTIzLzAxNTlhMzk5MGE1NDRkMDBhNmNkMDI1ZDlhMzNiNDk5LnBuZw?x-oss-process=image/format,png)"
   ]
  },
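  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The leaf-wise loop with a depth cap can be sketched with a max-heap (a toy model: `split_gain` is a hypothetical callback standing in for the real gain computation):\n",
    "\n",
    "```python\n",
    "import heapq\n",
    "\n",
    "def leaf_wise_growth(split_gain, num_leaves=4, max_depth=3):\n",
    "    # Max-heap (via negated gains) of the current leaves: (-gain, depth)\n",
    "    heap = [(-split_gain(0), 0)]\n",
    "    leaves, split_depths = 1, []\n",
    "    while leaves < num_leaves and heap:\n",
    "        neg_gain, depth = heapq.heappop(heap)\n",
    "        if depth >= max_depth or -neg_gain <= 0:\n",
    "            continue  # depth cap or no positive gain: keep it as a leaf\n",
    "        split_depths.append(depth)\n",
    "        leaves += 1  # splitting one leaf into two adds one net leaf\n",
    "        for _ in range(2):\n",
    "            heapq.heappush(heap, (-split_gain(depth + 1), depth + 1))\n",
    "    return split_depths\n",
    "```\n",
    "\n",
    "With `max_depth=1` the loop stops after a single split even if more leaves were requested, which is exactly the overfitting guard described above.\n"
   ]
  },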
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Network communication optimization**\n",
    "\n",
    "Because XGBoost uses the pre-sorted algorithm, its communication cost is very high, so it too falls back to the histogram algorithm when running in parallel.\n",
    "\n",
    "LightGBM's histogram algorithm has a low communication cost, and with collective communication algorithms it can achieve near-linear speed-up in parallel training.\n",
    "\n",
    "**LightGBM supports categorical features**\n",
    "\n",
    "Most machine learning tools cannot handle categorical features directly; they usually require one-hot encoding, which hurts both space and time efficiency.\n",
    "\n",
    "Yet categorical features are very common in practice. With this in mind, LightGBM optimizes its support for categorical features: they can be fed in directly, without the extra 0/1 expansion,\n",
    "\n",
    "and the decision tree algorithm is extended with split rules specific to categorical features."
   ]
  },
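  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of direct categorical support (toy data; the variable names and sizes are made up), it is enough to tell LightGBM which column is categorical instead of one-hot encoding it:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import lightgbm as lgb\n",
    "\n",
    "# Toy data: column 0 is a categorical feature encoded as integers 0..3\n",
    "rng = np.random.default_rng(0)\n",
    "cat = rng.integers(0, 4, size=200)\n",
    "num = rng.normal(size=200)\n",
    "X = np.column_stack([cat, num]).astype(float)\n",
    "y = (cat == 2).astype(float) + 0.1 * num\n",
    "\n",
    "# No 0/1 expansion: just declare which column is categorical\n",
    "train_set = lgb.Dataset(X, label=y, categorical_feature=[0])\n",
    "booster = lgb.train({'objective': 'regression', 'verbose': -1},\n",
    "                    train_set, num_boost_round=10)\n",
    "pred = booster.predict(X)\n",
    "```\n"
   ]
  },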
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Tuning LightGBM\n",
    "\n",
    "\n",
    "For the meaning of every parameter, see: http://lightgbm.apachecn.org/cn/latest/Parameters.html\n",
    "\n",
    "Tuning process:\n",
    "\n",
    "(1) num_leaves\n",
    "\n",
    "LightGBM grows trees leaf-wise, so tree complexity is tuned with num_leaves rather than max_depth.\n",
    "\n",
    "Rough correspondence: num_leaves = 2^(max_depth)\n",
    "\n",
    "(2) Imbalanced datasets: set params['is_unbalance'] = 'true'\n",
    "\n",
    "(3) Bagging parameters: bagging_fraction + bagging_freq (these must be set together), and feature_fraction\n",
    "\n",
    "(4) min_data_in_leaf and min_sum_hessian_in_leaf"
   ]
  },
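  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The tuning points above can be collected into one parameter dict (the values are hypothetical starting points, not recommendations from the LightGBM docs):\n",
    "\n",
    "```python\n",
    "params = {\n",
    "    'objective': 'binary',\n",
    "    'max_depth': 6,\n",
    "    'num_leaves': 63,                 # (1) keep below 2**max_depth = 64\n",
    "    'is_unbalance': True,             # (2) for imbalanced datasets\n",
    "    'bagging_fraction': 0.8,          # (3) must be set together with bagging_freq\n",
    "    'bagging_freq': 5,\n",
    "    'feature_fraction': 0.9,\n",
    "    'min_data_in_leaf': 20,           # (4) minimum samples per leaf\n",
    "    'min_sum_hessian_in_leaf': 1e-3   # (4) minimum hessian sum per leaf\n",
    "}\n",
    "```\n"
   ]
  },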
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "LightGBM via the sklearn interface\n",
    "--\n",
    "\n",
    "This example uses LightGBM through its sklearn-style API, covering model construction, training, prediction, and grid search for parameter tuning."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Load data...\n",
      "Start training...\n",
      "[1]\tvalid_0's l1: 0.588034\tvalid_0's l2: 0.557257\n",
      "[2]\tvalid_0's l1: 0.558601\tvalid_0's l2: 0.504\n",
      "[3]\tvalid_0's l1: 0.533366\tvalid_0's l2: 0.457982\n",
      "[4]\tvalid_0's l1: 0.507443\tvalid_0's l2: 0.41463\n",
      "[5]\tvalid_0's l1: 0.485721\tvalid_0's l2: 0.379804\n",
      "[6]\tvalid_0's l1: 0.465626\tvalid_0's l2: 0.344388\n",
      "[7]\tvalid_0's l1: 0.44743\tvalid_0's l2: 0.316163\n",
      "[8]\tvalid_0's l1: 0.429186\tvalid_0's l2: 0.287238\n",
      "[9]\tvalid_0's l1: 0.411935\tvalid_0's l2: 0.262292\n",
      "[10]\tvalid_0's l1: 0.395375\tvalid_0's l2: 0.238778\n",
      "[11]\tvalid_0's l1: 0.380425\tvalid_0's l2: 0.220261\n",
      "[12]\tvalid_0's l1: 0.365386\tvalid_0's l2: 0.201077\n",
      "[13]\tvalid_0's l1: 0.351844\tvalid_0's l2: 0.186131\n",
      "[14]\tvalid_0's l1: 0.338187\tvalid_0's l2: 0.170488\n",
      "[15]\tvalid_0's l1: 0.32592\tvalid_0's l2: 0.158447\n",
      "[16]\tvalid_0's l1: 0.313696\tvalid_0's l2: 0.14627\n",
      "[17]\tvalid_0's l1: 0.301835\tvalid_0's l2: 0.13475\n",
      "[18]\tvalid_0's l1: 0.291441\tvalid_0's l2: 0.126063\n",
      "[19]\tvalid_0's l1: 0.280924\tvalid_0's l2: 0.117155\n",
      "[20]\tvalid_0's l1: 0.270932\tvalid_0's l2: 0.109156\n",
      "Start predicting...\n",
      "The rmse of prediction is: 0.33038813204779693\n",
      "Feature importances: [8, 0, 41, 8]\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/lib/python3.6/dist-packages/lightgbm/sklearn.py:726: UserWarning: 'early_stopping_rounds' argument is deprecated and will be removed in a future release of LightGBM. Pass 'early_stopping()' callback via 'callbacks' argument instead.\n",
      "  _log_warning(\"'early_stopping_rounds' argument is deprecated and will be removed in a future release of LightGBM. \"\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Best parameters found by grid search are: {'learning_rate': 0.1, 'n_estimators': 40}\n"
     ]
    }
   ],
   "source": [
    "import lightgbm as lgb\n",
    "import pandas as pd\n",
    "from sklearn.metrics import mean_squared_error\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "from sklearn.datasets import load_iris\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.datasets import  make_classification\n",
    "# Load the data\n",
    "print('Load data...')\n",
    "\n",
    "iris = load_iris()\n",
    "data = iris.data\n",
    "target = iris.target\n",
    "X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.2)\n",
    "\n",
    "# df_train = pd.read_csv('../regression/regression.train', header=None, sep='\\t')\n",
    "# df_test = pd.read_csv('../regression/regression.test', header=None, sep='\\t')\n",
    "# y_train = df_train[0].values\n",
    "# y_test = df_test[0].values\n",
    "# X_train = df_train.drop(0, axis=1).values\n",
    "# X_test = df_test.drop(0, axis=1).values\n",
    "\n",
    "print('Start training...')\n",
    "# Build and train the model\n",
    "gbm = lgb.LGBMRegressor(objective='regression', num_leaves=31, learning_rate=0.05, n_estimators=20)\n",
    "gbm.fit(X_train, y_train, eval_set=[(X_test, y_test)], eval_metric='l1', early_stopping_rounds=5)\n",
    "\n",
    "print('Start predicting...')\n",
    "# Predict on the test set\n",
    "y_pred = gbm.predict(X_test, num_iteration=gbm.best_iteration_)\n",
    "# Evaluate the model\n",
    "print('The rmse of prediction is:', mean_squared_error(y_test, y_pred) ** 0.5)\n",
    "\n",
    "# feature importances\n",
    "print('Feature importances:', list(gbm.feature_importances_))\n",
    "\n",
    "# Grid search for parameter tuning\n",
    "estimator = lgb.LGBMRegressor(num_leaves=31)\n",
    "\n",
    "param_grid = {\n",
    "    'learning_rate': [0.01, 0.1, 1],\n",
    "    'n_estimators': [20, 40]\n",
    "}\n",
    "\n",
    "gbm = GridSearchCV(estimator, param_grid)\n",
    "\n",
    "gbm.fit(X_train, y_train)\n",
    "\n",
    "print('Best parameters found by grid search are:', gbm.best_params_)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Using LightGBM's native API"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Start training...\n",
      "[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.000061 seconds.\n",
      "You can set `force_col_wise=true` to remove the overhead.\n",
      "[LightGBM] [Info] Total Bins 88\n",
      "[LightGBM] [Info] Number of data points in the train set: 120, number of used features: 4\n",
      "[LightGBM] [Info] Start training from score 0.983333\n",
      "[LightGBM] [Warning] No further splits with positive gain, best gain: -inf\n",
      "[1]\tvalid_0's auc: 0.973545\tvalid_0's l2: 0.613932\n",
      "Training until validation scores don't improve for 5 rounds\n",
      "[LightGBM] [Warning] No further splits with positive gain, best gain: -inf\n",
      "[2]\tvalid_0's auc: 0.973545\tvalid_0's l2: 0.564205\n",
      "[LightGBM] [Warning] No further splits with positive gain, best gain: -inf\n",
      "[3]\tvalid_0's auc: 0.97619\tvalid_0's l2: 0.516368\n",
      "[LightGBM] [Warning] No further splits with positive gain, best gain: -inf\n",
      "[4]\tvalid_0's auc: 0.97619\tvalid_0's l2: 0.475329\n",
      "[LightGBM] [Warning] No further splits with positive gain, best gain: -inf\n",
      "[5]\tvalid_0's auc: 0.97619\tvalid_0's l2: 0.436004\n",
      "[LightGBM] [Warning] No further splits with positive gain, best gain: -inf\n",
      "[6]\tvalid_0's auc: 1\tvalid_0's l2: 0.40095\n",
      "[LightGBM] [Warning] No further splits with positive gain, best gain: -inf\n",
      "[7]\tvalid_0's auc: 1\tvalid_0's l2: 0.36919\n",
      "[LightGBM] [Warning] No further splits with positive gain, best gain: -inf\n",
      "[8]\tvalid_0's auc: 1\tvalid_0's l2: 0.338404\n",
      "[LightGBM] [Warning] No further splits with positive gain, best gain: -inf\n",
      "[9]\tvalid_0's auc: 1\tvalid_0's l2: 0.310487\n",
      "[LightGBM] [Warning] No further splits with positive gain, best gain: -inf\n",
      "[10]\tvalid_0's auc: 1\tvalid_0's l2: 0.285916\n",
      "[LightGBM] [Warning] No further splits with positive gain, best gain: -inf\n",
      "[11]\tvalid_0's auc: 1\tvalid_0's l2: 0.264549\n",
      "Early stopping, best iteration is:\n",
      "[6]\tvalid_0's auc: 1\tvalid_0's l2: 0.40095\n",
      "Save model...\n",
      "Start predicting...\n",
      "The rmse of prediction is: 0.6332058050413797\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/usr/local/lib/python3.6/dist-packages/lightgbm/engine.py:181: UserWarning: 'early_stopping_rounds' argument is deprecated and will be removed in a future release of LightGBM. Pass 'early_stopping()' callback via 'callbacks' argument instead.\n",
      "  _log_warning(\"'early_stopping_rounds' argument is deprecated and will be removed in a future release of LightGBM. \"\n"
     ]
    }
   ],
   "source": [
    "# coding: utf-8\n",
    "# pylint: disable = invalid-name, C0111\n",
    "import json\n",
    "import lightgbm as lgb\n",
    "import pandas as pd\n",
    "from sklearn.metrics import mean_squared_error\n",
    "from sklearn.datasets import load_iris\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.datasets import  make_classification\n",
    "\n",
    "iris = load_iris()\n",
    "data = iris.data\n",
    "target = iris.target\n",
    "X_train, X_test, y_train, y_test = train_test_split(data, target, test_size=0.2)\n",
    "\n",
    "\n",
    "# Load your own data\n",
    "# print('Load data...')\n",
    "# df_train = pd.read_csv('../regression/regression.train', header=None, sep='\\t')\n",
    "# df_test = pd.read_csv('../regression/regression.test', header=None, sep='\\t')\n",
    "#\n",
    "# y_train = df_train[0].values\n",
    "# y_test = df_test[0].values\n",
    "# X_train = df_train.drop(0, axis=1).values\n",
    "# X_test = df_test.drop(0, axis=1).values\n",
    "\n",
    "# Convert to LightGBM's Dataset format\n",
    "lgb_train = lgb.Dataset(X_train, y_train)\n",
    "lgb_eval = lgb.Dataset(X_test, y_test, reference=lgb_train)\n",
    "\n",
    "# Specify the parameters as a dict\n",
    "params = {\n",
    "    'task': 'train',\n",
    "    'boosting_type': 'gbdt',  # boosting type\n",
    "    'objective': 'regression',  # objective function\n",
    "    'metric': {'l2', 'auc'},  # evaluation metrics\n",
    "    'num_leaves': 31,  # number of leaves\n",
    "    'learning_rate': 0.05,  # learning rate\n",
    "    'feature_fraction': 0.9,  # fraction of features used per tree\n",
    "    'bagging_fraction': 0.8,  # fraction of samples used per tree\n",
    "    'bagging_freq': 5,  # perform bagging every k iterations (here k = 5)\n",
    "    'verbose': 1  # <0: fatal only, =0: errors (warnings), >0: info\n",
    "}\n",
    "\n",
    "print('Start training...')\n",
    "# Train the model\n",
    "gbm = lgb.train(params, lgb_train, num_boost_round=20, valid_sets=lgb_eval, early_stopping_rounds=5)\n",
    "\n",
    "print('Save model...')\n",
    "# Save the model to a file\n",
    "gbm.save_model('model.txt')\n",
    "\n",
    "print('Start predicting...')\n",
    "# Predict on the test set\n",
    "y_pred = gbm.predict(X_test, num_iteration=gbm.best_iteration)\n",
    "# Evaluate the model\n",
    "print('The rmse of prediction is:', mean_squared_error(y_test, y_pred) ** 0.5)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
