{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Ranking Models"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Through the recall stage, the problem size has already been reduced: for each user, N articles were selected as a candidate set. On top of these recalled candidates we built features related to the user's click history, the user's own attributes, the article's own attributes, and the user-article interactions. The next step is to use a machine learning model to learn from these features, predict on the test set to obtain the click probability of each candidate article for every user, and return the top-k articles with the highest predicted probability as the final result.\n",
    "\n",
    "The ranking stage uses three representative ranking models:\n",
    "\n",
    "1. A LightGBM (LGB) ranking model\n",
    "2. A LightGBM (LGB) classification model\n",
    "3. DIN, a deep-learning classification model\n",
    "\n",
    "After obtaining the outputs of these ranking models, two classic model-ensembling methods are applied:\n",
    "\n",
    "1. Weighted fusion of the output scores\n",
    "2. Stacking (feeding the model outputs into a simple second-level model that makes the final prediction)"
   ]
  },
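  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch of the weighted-fusion idea (illustrative only: the column names and weights below are made up, not the ones produced later in this notebook), each model's scores are first min-max normalized so their scales are comparable, then combined with chosen weights:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# toy scores from two hypothetical models for the same (user, article) pairs\n",
    "fusion_demo = pd.DataFrame({'score_a': [0.2, 0.8, 0.5], 'score_b': [10.0, 30.0, 20.0]})\n",
    "\n",
    "# min-max normalize each model's scores onto [0, 1]\n",
    "for col in ['score_a', 'score_b']:\n",
    "    s = fusion_demo[col]\n",
    "    fusion_demo[col] = (s - s.min()) / (s.max() - s.min())\n",
    "\n",
    "# weighted fusion of the normalized scores; the weights are arbitrary here\n",
    "fusion_demo['fused'] = 0.6 * fusion_demo['score_a'] + 0.4 * fusion_demo['score_b']"
   ]
  },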
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "import pickle\n",
    "from tqdm import tqdm\n",
    "import gc, os\n",
    "import time\n",
    "from datetime import datetime\n",
    "import lightgbm as lgb\n",
    "from sklearn.preprocessing import MinMaxScaler"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Load the Ranking Features"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "path = '../data/'\n",
    "pathcache = '../datacache/'\n",
    "# offline mode: when True, a validation set is loaded alongside the training set\n",
    "informal = True"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 59,
   "metadata": {},
   "outputs": [],
   "source": [
    "# When re-reading the data, click_article_id comes back as a float, so convert it to int\n",
    "trn_user_item_feats_df = pd.read_csv(pathcache + 'trn_user_item_feats_df.csv')\n",
    "trn_user_item_feats_df['click_article_id'] = trn_user_item_feats_df['click_article_id'].astype(int)\n",
    "\n",
    "if informal:\n",
    "    val_user_item_feats_df = pd.read_csv(pathcache + 'val_user_item_feats_df.csv')\n",
    "    val_user_item_feats_df['click_article_id'] = val_user_item_feats_df['click_article_id'].astype(int)\n",
    "else:\n",
    "    val_user_item_feats_df = None\n",
    "    \n",
    "tst_user_item_feats_df = pd.read_csv(pathcache + 'tst_user_item_feats_df.csv')\n",
    "tst_user_item_feats_df['click_article_id'] = tst_user_item_feats_df['click_article_id'].astype(int)\n",
    "\n",
    "# For convenience, the test set was also given a dummy label during feature building; just drop it here\n",
    "del tst_user_item_feats_df['label']"
   ]
  },
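  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The float issue mentioned above is easy to reproduce: `pd.read_csv` infers a float dtype for a numeric column written with decimal points (or one that ever contained NaN), so an explicit `astype(int)` is needed afterwards. A tiny self-contained illustration:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import io\n",
    "import pandas as pd\n",
    "\n",
    "# a column written as floats is read back as float64\n",
    "demo_csv = io.StringIO('click_article_id\\n1.0\\n2.0\\n')\n",
    "demo_df = pd.read_csv(demo_csv)\n",
    "\n",
    "# cast back to int, as done for the real feature files above\n",
    "demo_df['click_article_id'] = demo_df['click_article_id'].astype(int)"
   ]
  },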
  {
   "cell_type": "code",
   "execution_count": 97,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>user_id</th>\n",
       "      <th>click_article_id</th>\n",
       "      <th>sim0</th>\n",
       "      <th>time_diff0</th>\n",
       "      <th>word_diff0</th>\n",
       "      <th>sim_max</th>\n",
       "      <th>sim_min</th>\n",
       "      <th>sim_sum</th>\n",
       "      <th>sim_mean</th>\n",
       "      <th>score</th>\n",
       "      <th>...</th>\n",
       "      <th>click_region</th>\n",
       "      <th>click_referrer_type</th>\n",
       "      <th>user_time_hob1</th>\n",
       "      <th>user_time_hob2</th>\n",
       "      <th>words_hbo</th>\n",
       "      <th>category_id</th>\n",
       "      <th>created_at_ts</th>\n",
       "      <th>words_count</th>\n",
       "      <th>is_cat_hab</th>\n",
       "      <th>pred_score</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>200000</td>\n",
       "      <td>237870</td>\n",
       "      <td>0.323067</td>\n",
       "      <td>5706464000</td>\n",
       "      <td>26</td>\n",
       "      <td>0.323067</td>\n",
       "      <td>0.323067</td>\n",
       "      <td>0.323067</td>\n",
       "      <td>0.323067</td>\n",
       "      <td>1.971289</td>\n",
       "      <td>...</td>\n",
       "      <td>17</td>\n",
       "      <td>1</td>\n",
       "      <td>0.076379</td>\n",
       "      <td>0.989986</td>\n",
       "      <td>200.333333</td>\n",
       "      <td>375</td>\n",
       "      <td>1501929686000</td>\n",
       "      <td>228</td>\n",
       "      <td>0</td>\n",
       "      <td>0.248998</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>200171</td>\n",
       "      <td>237870</td>\n",
       "      <td>0.318483</td>\n",
       "      <td>5269269000</td>\n",
       "      <td>12</td>\n",
       "      <td>0.318483</td>\n",
       "      <td>0.318483</td>\n",
       "      <td>0.318483</td>\n",
       "      <td>0.318483</td>\n",
       "      <td>1.293362</td>\n",
       "      <td>...</td>\n",
       "      <td>17</td>\n",
       "      <td>1</td>\n",
       "      <td>0.048197</td>\n",
       "      <td>0.989659</td>\n",
       "      <td>210.285714</td>\n",
       "      <td>375</td>\n",
       "      <td>1501929686000</td>\n",
       "      <td>228</td>\n",
       "      <td>1</td>\n",
       "      <td>0.230704</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>200642</td>\n",
       "      <td>237870</td>\n",
       "      <td>0.223529</td>\n",
       "      <td>5081702000</td>\n",
       "      <td>51</td>\n",
       "      <td>0.223529</td>\n",
       "      <td>0.223529</td>\n",
       "      <td>0.223529</td>\n",
       "      <td>0.223529</td>\n",
       "      <td>2.190321</td>\n",
       "      <td>...</td>\n",
       "      <td>17</td>\n",
       "      <td>1</td>\n",
       "      <td>0.019076</td>\n",
       "      <td>0.989379</td>\n",
       "      <td>177.000000</td>\n",
       "      <td>375</td>\n",
       "      <td>1501929686000</td>\n",
       "      <td>228</td>\n",
       "      <td>0</td>\n",
       "      <td>0.265973</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>200920</td>\n",
       "      <td>237870</td>\n",
       "      <td>0.208142</td>\n",
       "      <td>5620301000</td>\n",
       "      <td>15</td>\n",
       "      <td>0.208142</td>\n",
       "      <td>0.208142</td>\n",
       "      <td>0.208142</td>\n",
       "      <td>0.208142</td>\n",
       "      <td>1.437069</td>\n",
       "      <td>...</td>\n",
       "      <td>17</td>\n",
       "      <td>1</td>\n",
       "      <td>0.080148</td>\n",
       "      <td>0.990005</td>\n",
       "      <td>221.142857</td>\n",
       "      <td>375</td>\n",
       "      <td>1501929686000</td>\n",
       "      <td>228</td>\n",
       "      <td>1</td>\n",
       "      <td>0.231461</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>201504</td>\n",
       "      <td>237870</td>\n",
       "      <td>0.432363</td>\n",
       "      <td>5674771000</td>\n",
       "      <td>29</td>\n",
       "      <td>0.432363</td>\n",
       "      <td>0.432363</td>\n",
       "      <td>0.432363</td>\n",
       "      <td>0.432363</td>\n",
       "      <td>1.164026</td>\n",
       "      <td>...</td>\n",
       "      <td>17</td>\n",
       "      <td>2</td>\n",
       "      <td>0.062804</td>\n",
       "      <td>0.989787</td>\n",
       "      <td>207.090909</td>\n",
       "      <td>375</td>\n",
       "      <td>1501929686000</td>\n",
       "      <td>228</td>\n",
       "      <td>1</td>\n",
       "      <td>0.233304</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>...</th>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>499995</th>\n",
       "      <td>249983</td>\n",
       "      <td>277242</td>\n",
       "      <td>0.692443</td>\n",
       "      <td>135540000</td>\n",
       "      <td>6</td>\n",
       "      <td>0.692443</td>\n",
       "      <td>0.692443</td>\n",
       "      <td>0.692443</td>\n",
       "      <td>0.692443</td>\n",
       "      <td>2.090675</td>\n",
       "      <td>...</td>\n",
       "      <td>25</td>\n",
       "      <td>1</td>\n",
       "      <td>0.131926</td>\n",
       "      <td>0.990347</td>\n",
       "      <td>184.500000</td>\n",
       "      <td>409</td>\n",
       "      <td>1507747629000</td>\n",
       "      <td>165</td>\n",
       "      <td>1</td>\n",
       "      <td>0.284488</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>499996</th>\n",
       "      <td>249983</td>\n",
       "      <td>276783</td>\n",
       "      <td>0.886860</td>\n",
       "      <td>132641000</td>\n",
       "      <td>15</td>\n",
       "      <td>0.886860</td>\n",
       "      <td>0.886860</td>\n",
       "      <td>0.886860</td>\n",
       "      <td>0.886860</td>\n",
       "      <td>1.901495</td>\n",
       "      <td>...</td>\n",
       "      <td>25</td>\n",
       "      <td>1</td>\n",
       "      <td>0.131926</td>\n",
       "      <td>0.990347</td>\n",
       "      <td>184.500000</td>\n",
       "      <td>409</td>\n",
       "      <td>1507750528000</td>\n",
       "      <td>156</td>\n",
       "      <td>1</td>\n",
       "      <td>0.284488</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>499997</th>\n",
       "      <td>249983</td>\n",
       "      <td>276744</td>\n",
       "      <td>0.751380</td>\n",
       "      <td>741760000</td>\n",
       "      <td>29</td>\n",
       "      <td>0.751380</td>\n",
       "      <td>0.751380</td>\n",
       "      <td>0.751380</td>\n",
       "      <td>0.751380</td>\n",
       "      <td>1.279787</td>\n",
       "      <td>...</td>\n",
       "      <td>25</td>\n",
       "      <td>1</td>\n",
       "      <td>0.131926</td>\n",
       "      <td>0.990347</td>\n",
       "      <td>184.500000</td>\n",
       "      <td>409</td>\n",
       "      <td>1507141409000</td>\n",
       "      <td>142</td>\n",
       "      <td>1</td>\n",
       "      <td>0.250896</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>499998</th>\n",
       "      <td>249983</td>\n",
       "      <td>277046</td>\n",
       "      <td>0.781130</td>\n",
       "      <td>747207000</td>\n",
       "      <td>1</td>\n",
       "      <td>0.781130</td>\n",
       "      <td>0.781130</td>\n",
       "      <td>0.781130</td>\n",
       "      <td>0.781130</td>\n",
       "      <td>1.238260</td>\n",
       "      <td>...</td>\n",
       "      <td>25</td>\n",
       "      <td>1</td>\n",
       "      <td>0.131926</td>\n",
       "      <td>0.990347</td>\n",
       "      <td>184.500000</td>\n",
       "      <td>409</td>\n",
       "      <td>1507135962000</td>\n",
       "      <td>172</td>\n",
       "      <td>1</td>\n",
       "      <td>0.250896</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>499999</th>\n",
       "      <td>249989</td>\n",
       "      <td>255488</td>\n",
       "      <td>0.173375</td>\n",
       "      <td>4948188000</td>\n",
       "      <td>55</td>\n",
       "      <td>0.173375</td>\n",
       "      <td>0.173375</td>\n",
       "      <td>0.173375</td>\n",
       "      <td>0.173375</td>\n",
       "      <td>1.213447</td>\n",
       "      <td>...</td>\n",
       "      <td>21</td>\n",
       "      <td>1</td>\n",
       "      <td>0.125578</td>\n",
       "      <td>0.990420</td>\n",
       "      <td>199.600000</td>\n",
       "      <td>389</td>\n",
       "      <td>1502969898000</td>\n",
       "      <td>149</td>\n",
       "      <td>1</td>\n",
       "      <td>0.230704</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>500000 rows × 28 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "        user_id  click_article_id      sim0  time_diff0  word_diff0   sim_max  \\\n",
       "0        200000            237870  0.323067  5706464000          26  0.323067   \n",
       "1        200171            237870  0.318483  5269269000          12  0.318483   \n",
       "2        200642            237870  0.223529  5081702000          51  0.223529   \n",
       "3        200920            237870  0.208142  5620301000          15  0.208142   \n",
       "4        201504            237870  0.432363  5674771000          29  0.432363   \n",
       "...         ...               ...       ...         ...         ...       ...   \n",
       "499995   249983            277242  0.692443   135540000           6  0.692443   \n",
       "499996   249983            276783  0.886860   132641000          15  0.886860   \n",
       "499997   249983            276744  0.751380   741760000          29  0.751380   \n",
       "499998   249983            277046  0.781130   747207000           1  0.781130   \n",
       "499999   249989            255488  0.173375  4948188000          55  0.173375   \n",
       "\n",
       "         sim_min   sim_sum  sim_mean     score  ...  click_region  \\\n",
       "0       0.323067  0.323067  0.323067  1.971289  ...            17   \n",
       "1       0.318483  0.318483  0.318483  1.293362  ...            17   \n",
       "2       0.223529  0.223529  0.223529  2.190321  ...            17   \n",
       "3       0.208142  0.208142  0.208142  1.437069  ...            17   \n",
       "4       0.432363  0.432363  0.432363  1.164026  ...            17   \n",
       "...          ...       ...       ...       ...  ...           ...   \n",
       "499995  0.692443  0.692443  0.692443  2.090675  ...            25   \n",
       "499996  0.886860  0.886860  0.886860  1.901495  ...            25   \n",
       "499997  0.751380  0.751380  0.751380  1.279787  ...            25   \n",
       "499998  0.781130  0.781130  0.781130  1.238260  ...            25   \n",
       "499999  0.173375  0.173375  0.173375  1.213447  ...            21   \n",
       "\n",
       "        click_referrer_type  user_time_hob1  user_time_hob2   words_hbo  \\\n",
       "0                         1        0.076379        0.989986  200.333333   \n",
       "1                         1        0.048197        0.989659  210.285714   \n",
       "2                         1        0.019076        0.989379  177.000000   \n",
       "3                         1        0.080148        0.990005  221.142857   \n",
       "4                         2        0.062804        0.989787  207.090909   \n",
       "...                     ...             ...             ...         ...   \n",
       "499995                    1        0.131926        0.990347  184.500000   \n",
       "499996                    1        0.131926        0.990347  184.500000   \n",
       "499997                    1        0.131926        0.990347  184.500000   \n",
       "499998                    1        0.131926        0.990347  184.500000   \n",
       "499999                    1        0.125578        0.990420  199.600000   \n",
       "\n",
       "        category_id  created_at_ts  words_count  is_cat_hab  pred_score  \n",
       "0               375  1501929686000          228           0    0.248998  \n",
       "1               375  1501929686000          228           1    0.230704  \n",
       "2               375  1501929686000          228           0    0.265973  \n",
       "3               375  1501929686000          228           1    0.231461  \n",
       "4               375  1501929686000          228           1    0.233304  \n",
       "...             ...            ...          ...         ...         ...  \n",
       "499995          409  1507747629000          165           1    0.284488  \n",
       "499996          409  1507750528000          156           1    0.284488  \n",
       "499997          409  1507141409000          142           1    0.250896  \n",
       "499998          409  1507135962000          172           1    0.250896  \n",
       "499999          389  1502969898000          149           1    0.230704  \n",
       "\n",
       "[500000 rows x 28 columns]"
      ]
     },
     "execution_count": 97,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "tst_user_item_feats_df"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Generate the Ranked Submission"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [],
   "source": [
    "def submit(recall_df, topk=5, model_name=None):\n",
    "    recall_df = recall_df.sort_values(by=['user_id', 'pred_score'])\n",
    "    recall_df['rank'] = recall_df.groupby(['user_id'])['pred_score'].rank(ascending=False, method='first')\n",
    "    \n",
    "    # check that every user has at least topk candidate articles\n",
    "    tmp = recall_df.groupby('user_id').apply(lambda x: x['rank'].max())\n",
    "    assert tmp.min() >= topk\n",
    "    \n",
    "    del recall_df['pred_score']\n",
    "    submit = recall_df[recall_df['rank'] <= topk].set_index(['user_id', 'rank']).unstack(-1).reset_index()\n",
    "    \n",
    "    submit.columns = [int(col) if isinstance(col, int) else col for col in submit.columns.droplevel(0)]\n",
    "    # rename the columns to match the submission format\n",
    "    submit = submit.rename(columns={'': 'user_id', 1: 'article_1', 2: 'article_2', \n",
    "                                                  3: 'article_3', 4: 'article_4', 5: 'article_5'})\n",
    "    \n",
    "    save_name = pathcache + model_name + '_' + datetime.today().strftime('%m-%d') + '.csv'\n",
    "    submit.to_csv(save_name, index=False, header=True)"
   ]
  },
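  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "To make the rank-and-unstack pattern inside `submit` concrete, here is a self-contained toy run of the same steps (made-up user ids, article ids, and scores; topk = 2):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "demo = pd.DataFrame({'user_id': [1, 1, 1, 2, 2],\n",
    "                     'click_article_id': [10, 11, 12, 20, 21],\n",
    "                     'pred_score': [0.9, 0.1, 0.5, 0.3, 0.7]})\n",
    "\n",
    "# rank candidates within each user by score, best first\n",
    "demo['rank'] = demo.groupby('user_id')['pred_score'].rank(ascending=False, method='first')\n",
    "\n",
    "# keep the top 2 per user and pivot the ranks into columns\n",
    "wide = demo[demo['rank'] <= 2].set_index(['user_id', 'rank'])['click_article_id'].unstack(-1)"
   ]
  },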
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [],
   "source": [
    "# normalize one ranker's prediction scores to [0, 1]\n",
    "def norm_sim(sim_df, weight=0.0):\n",
    "    # print(sim_df.head())\n",
    "    min_sim = sim_df.min()\n",
    "    max_sim = sim_df.max()\n",
    "    if max_sim == min_sim:\n",
    "        sim_df = sim_df.apply(lambda sim: 1.0)\n",
    "    else:\n",
    "        sim_df = sim_df.apply(lambda sim: 1.0 * (sim - min_sim) / (max_sim - min_sim))\n",
    "\n",
    "    sim_df = sim_df.apply(lambda sim: sim + weight)  # optional constant offset\n",
    "    return sim_df"
   ]
  },
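  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sanity check of the min-max transform that `norm_sim` applies in the `max != min` branch (re-implemented inline so the snippet stands alone):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "scores = pd.Series([2.0, 4.0, 6.0])\n",
    "# the same min-max transform norm_sim applies when max != min\n",
    "normed = (scores - scores.min()) / (scores.max() - scores.min())"
   ]
  },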
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# LightGBM Ranker"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [],
   "source": [
    "# work on copies so the data does not need to be re-read if something fails midway\n",
    "trn_user_item_feats_df_rank_model = trn_user_item_feats_df.copy()\n",
    "\n",
    "if informal:\n",
    "    val_user_item_feats_df_rank_model = val_user_item_feats_df.copy()\n",
    "    \n",
    "tst_user_item_feats_df_rank_model = tst_user_item_feats_df.copy()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>user_id</th>\n",
       "      <th>click_article_id</th>\n",
       "      <th>sim0</th>\n",
       "      <th>time_diff0</th>\n",
       "      <th>word_diff0</th>\n",
       "      <th>sim_max</th>\n",
       "      <th>sim_min</th>\n",
       "      <th>sim_sum</th>\n",
       "      <th>sim_mean</th>\n",
       "      <th>score</th>\n",
       "      <th>...</th>\n",
       "      <th>click_country</th>\n",
       "      <th>click_region</th>\n",
       "      <th>click_referrer_type</th>\n",
       "      <th>user_time_hob1</th>\n",
       "      <th>user_time_hob2</th>\n",
       "      <th>words_hbo</th>\n",
       "      <th>category_id</th>\n",
       "      <th>created_at_ts</th>\n",
       "      <th>words_count</th>\n",
       "      <th>is_cat_hab</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>0</td>\n",
       "      <td>324426</td>\n",
       "      <td>0.181872</td>\n",
       "      <td>17249000</td>\n",
       "      <td>12</td>\n",
       "      <td>0.181872</td>\n",
       "      <td>0.181872</td>\n",
       "      <td>0.181872</td>\n",
       "      <td>0.181872</td>\n",
       "      <td>0.330990</td>\n",
       "      <td>...</td>\n",
       "      <td>1</td>\n",
       "      <td>25</td>\n",
       "      <td>2</td>\n",
       "      <td>0.343715</td>\n",
       "      <td>0.992865</td>\n",
       "      <td>266.000000</td>\n",
       "      <td>434</td>\n",
       "      <td>1508167842000</td>\n",
       "      <td>150</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>297</td>\n",
       "      <td>324426</td>\n",
       "      <td>0.181872</td>\n",
       "      <td>17249000</td>\n",
       "      <td>12</td>\n",
       "      <td>0.181872</td>\n",
       "      <td>0.181872</td>\n",
       "      <td>0.181872</td>\n",
       "      <td>0.181872</td>\n",
       "      <td>0.635869</td>\n",
       "      <td>...</td>\n",
       "      <td>1</td>\n",
       "      <td>25</td>\n",
       "      <td>2</td>\n",
       "      <td>0.342064</td>\n",
       "      <td>0.992745</td>\n",
       "      <td>142.666667</td>\n",
       "      <td>434</td>\n",
       "      <td>1508167842000</td>\n",
       "      <td>150</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>430</td>\n",
       "      <td>324426</td>\n",
       "      <td>0.181872</td>\n",
       "      <td>17249000</td>\n",
       "      <td>12</td>\n",
       "      <td>0.181872</td>\n",
       "      <td>0.181872</td>\n",
       "      <td>0.181872</td>\n",
       "      <td>0.181872</td>\n",
       "      <td>0.330990</td>\n",
       "      <td>...</td>\n",
       "      <td>1</td>\n",
       "      <td>25</td>\n",
       "      <td>6</td>\n",
       "      <td>0.341853</td>\n",
       "      <td>0.992765</td>\n",
       "      <td>156.000000</td>\n",
       "      <td>434</td>\n",
       "      <td>1508167842000</td>\n",
       "      <td>150</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>493</td>\n",
       "      <td>324426</td>\n",
       "      <td>0.181872</td>\n",
       "      <td>17249000</td>\n",
       "      <td>12</td>\n",
       "      <td>0.181872</td>\n",
       "      <td>0.181872</td>\n",
       "      <td>0.181872</td>\n",
       "      <td>0.181872</td>\n",
       "      <td>0.330990</td>\n",
       "      <td>...</td>\n",
       "      <td>1</td>\n",
       "      <td>25</td>\n",
       "      <td>6</td>\n",
       "      <td>0.341286</td>\n",
       "      <td>0.992680</td>\n",
       "      <td>194.500000</td>\n",
       "      <td>434</td>\n",
       "      <td>1508167842000</td>\n",
       "      <td>150</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>53023</td>\n",
       "      <td>324426</td>\n",
       "      <td>0.806628</td>\n",
       "      <td>22331000</td>\n",
       "      <td>1</td>\n",
       "      <td>0.806628</td>\n",
       "      <td>0.806628</td>\n",
       "      <td>0.806628</td>\n",
       "      <td>0.806628</td>\n",
       "      <td>2.307003</td>\n",
       "      <td>...</td>\n",
       "      <td>1</td>\n",
       "      <td>25</td>\n",
       "      <td>1</td>\n",
       "      <td>0.264550</td>\n",
       "      <td>0.991852</td>\n",
       "      <td>196.285714</td>\n",
       "      <td>434</td>\n",
       "      <td>1508167842000</td>\n",
       "      <td>150</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>...</th>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "      <td>...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>305401</th>\n",
       "      <td>199866</td>\n",
       "      <td>41326</td>\n",
       "      <td>-0.016861</td>\n",
       "      <td>54483809000</td>\n",
       "      <td>37</td>\n",
       "      <td>-0.016861</td>\n",
       "      <td>-0.016861</td>\n",
       "      <td>-0.016861</td>\n",
       "      <td>-0.016861</td>\n",
       "      <td>1.576267</td>\n",
       "      <td>...</td>\n",
       "      <td>1</td>\n",
       "      <td>20</td>\n",
       "      <td>2</td>\n",
       "      <td>0.134634</td>\n",
       "      <td>0.977024</td>\n",
       "      <td>211.714286</td>\n",
       "      <td>67</td>\n",
       "      <td>1453405548000</td>\n",
       "      <td>219</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>305402</th>\n",
       "      <td>199868</td>\n",
       "      <td>323251</td>\n",
       "      <td>0.231573</td>\n",
       "      <td>28486380000</td>\n",
       "      <td>22</td>\n",
       "      <td>0.231573</td>\n",
       "      <td>0.231573</td>\n",
       "      <td>0.231573</td>\n",
       "      <td>0.231573</td>\n",
       "      <td>0.920399</td>\n",
       "      <td>...</td>\n",
       "      <td>11</td>\n",
       "      <td>28</td>\n",
       "      <td>1</td>\n",
       "      <td>0.020207</td>\n",
       "      <td>0.989268</td>\n",
       "      <td>218.000000</td>\n",
       "      <td>434</td>\n",
       "      <td>1478463230000</td>\n",
       "      <td>267</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>305403</th>\n",
       "      <td>199879</td>\n",
       "      <td>130009</td>\n",
       "      <td>0.398067</td>\n",
       "      <td>1052314000</td>\n",
       "      <td>44</td>\n",
       "      <td>0.398067</td>\n",
       "      <td>0.398067</td>\n",
       "      <td>0.398067</td>\n",
       "      <td>0.398067</td>\n",
       "      <td>0.919014</td>\n",
       "      <td>...</td>\n",
       "      <td>1</td>\n",
       "      <td>13</td>\n",
       "      <td>1</td>\n",
       "      <td>0.137565</td>\n",
       "      <td>0.990504</td>\n",
       "      <td>169.600000</td>\n",
       "      <td>252</td>\n",
       "      <td>1506587816000</td>\n",
       "      <td>175</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>305404</th>\n",
       "      <td>199912</td>\n",
       "      <td>271045</td>\n",
       "      <td>0.769143</td>\n",
       "      <td>37557000</td>\n",
       "      <td>64</td>\n",
       "      <td>0.769143</td>\n",
       "      <td>0.769143</td>\n",
       "      <td>0.769143</td>\n",
       "      <td>0.769143</td>\n",
       "      <td>0.674264</td>\n",
       "      <td>...</td>\n",
       "      <td>1</td>\n",
       "      <td>20</td>\n",
       "      <td>4</td>\n",
       "      <td>0.020497</td>\n",
       "      <td>0.989311</td>\n",
       "      <td>191.000000</td>\n",
       "      <td>399</td>\n",
       "      <td>1506976501000</td>\n",
       "      <td>262</td>\n",
       "      <td>1</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>305405</th>\n",
       "      <td>199949</td>\n",
       "      <td>87122</td>\n",
       "      <td>0.130967</td>\n",
       "      <td>1862018000</td>\n",
       "      <td>78</td>\n",
       "      <td>0.130967</td>\n",
       "      <td>0.130967</td>\n",
       "      <td>0.130967</td>\n",
       "      <td>0.130967</td>\n",
       "      <td>0.504939</td>\n",
       "      <td>...</td>\n",
       "      <td>11</td>\n",
       "      <td>28</td>\n",
       "      <td>2</td>\n",
       "      <td>0.070106</td>\n",
       "      <td>0.989719</td>\n",
       "      <td>213.250000</td>\n",
       "      <td>186</td>\n",
       "      <td>1505512981000</td>\n",
       "      <td>120</td>\n",
       "      <td>0</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>305406 rows × 28 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "        user_id  click_article_id      sim0   time_diff0  word_diff0  \\\n",
       "0             0            324426  0.181872     17249000          12   \n",
       "1           297            324426  0.181872     17249000          12   \n",
       "2           430            324426  0.181872     17249000          12   \n",
       "3           493            324426  0.181872     17249000          12   \n",
       "4         53023            324426  0.806628     22331000           1   \n",
       "...         ...               ...       ...          ...         ...   \n",
       "305401   199866             41326 -0.016861  54483809000          37   \n",
       "305402   199868            323251  0.231573  28486380000          22   \n",
       "305403   199879            130009  0.398067   1052314000          44   \n",
       "305404   199912            271045  0.769143     37557000          64   \n",
       "305405   199949             87122  0.130967   1862018000          78   \n",
       "\n",
       "         sim_max   sim_min   sim_sum  sim_mean     score  ...  click_country  \\\n",
       "0       0.181872  0.181872  0.181872  0.181872  0.330990  ...              1   \n",
       "1       0.181872  0.181872  0.181872  0.181872  0.635869  ...              1   \n",
       "2       0.181872  0.181872  0.181872  0.181872  0.330990  ...              1   \n",
       "3       0.181872  0.181872  0.181872  0.181872  0.330990  ...              1   \n",
       "4       0.806628  0.806628  0.806628  0.806628  2.307003  ...              1   \n",
       "...          ...       ...       ...       ...       ...  ...            ...   \n",
       "305401 -0.016861 -0.016861 -0.016861 -0.016861  1.576267  ...              1   \n",
       "305402  0.231573  0.231573  0.231573  0.231573  0.920399  ...             11   \n",
       "305403  0.398067  0.398067  0.398067  0.398067  0.919014  ...              1   \n",
       "305404  0.769143  0.769143  0.769143  0.769143  0.674264  ...              1   \n",
       "305405  0.130967  0.130967  0.130967  0.130967  0.504939  ...             11   \n",
       "\n",
       "        click_region  click_referrer_type  user_time_hob1  user_time_hob2  \\\n",
       "0                 25                    2        0.343715        0.992865   \n",
       "1                 25                    2        0.342064        0.992745   \n",
       "2                 25                    6        0.341853        0.992765   \n",
       "3                 25                    6        0.341286        0.992680   \n",
       "4                 25                    1        0.264550        0.991852   \n",
       "...              ...                  ...             ...             ...   \n",
       "305401            20                    2        0.134634        0.977024   \n",
       "305402            28                    1        0.020207        0.989268   \n",
       "305403            13                    1        0.137565        0.990504   \n",
       "305404            20                    4        0.020497        0.989311   \n",
       "305405            28                    2        0.070106        0.989719   \n",
       "\n",
       "         words_hbo  category_id  created_at_ts  words_count  is_cat_hab  \n",
       "0       266.000000          434  1508167842000          150           0  \n",
       "1       142.666667          434  1508167842000          150           1  \n",
       "2       156.000000          434  1508167842000          150           1  \n",
       "3       194.500000          434  1508167842000          150           0  \n",
       "4       196.285714          434  1508167842000          150           1  \n",
       "...            ...          ...            ...          ...         ...  \n",
       "305401  211.714286           67  1453405548000          219           1  \n",
       "305402  218.000000          434  1478463230000          267           0  \n",
       "305403  169.600000          252  1506587816000          175           1  \n",
       "305404  191.000000          399  1506976501000          262           1  \n",
       "305405  213.250000          186  1505512981000          120           0  \n",
       "\n",
       "[305406 rows x 28 columns]"
      ]
     },
     "execution_count": 35,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "trn_user_item_feats_df_rank_model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 定义特征列\n",
    "lgb_cols = ['sim0', 'time_diff0', 'word_diff0','sim_max', 'sim_min', 'sim_sum', \n",
    "            'sim_mean', 'score','click_size', 'time_diff_mean', 'active_level',\n",
    "            'click_environment','click_deviceGroup', 'click_os', 'click_country', \n",
    "            'click_region','click_referrer_type', 'user_time_hob1', 'user_time_hob2',\n",
    "            'words_hbo', 'category_id', 'created_at_ts','words_count']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 排序模型分组\n",
    "trn_user_item_feats_df_rank_model.sort_values(by=['user_id'], inplace=True)\n",
    "g_train = trn_user_item_feats_df_rank_model.groupby(['user_id'], as_index=False).count()[\"label\"].values\n",
    "\n",
    "if informal:\n",
    "    val_user_item_feats_df_rank_model.sort_values(by=['user_id'], inplace=True)\n",
    "    g_val = val_user_item_feats_df_rank_model.groupby(['user_id'], as_index=False).count()[\"label\"].values"
   ]
  },
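  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`LGBMRanker` does not take a per-row group id: `group` must be an array of consecutive group sizes, which is why the rows are sorted by `user_id` first. A minimal sketch of the same pattern on toy data (illustration only; the column names merely mirror the real ones):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustration only (toy data): how the `group` size array is derived\n",
    "import pandas as pd\n",
    "\n",
    "toy = pd.DataFrame({'user_id': [2, 1, 1, 2, 2], 'label': [0, 1, 0, 0, 1]})\n",
    "toy = toy.sort_values(by=['user_id'])  # groups must be contiguous\n",
    "g = toy.groupby(['user_id'], as_index=False).count()['label'].values\n",
    "print(g)  # [2 3]: user 1 has 2 candidate rows, user 2 has 3\n",
    "assert g.sum() == len(toy)  # the sizes must cover every row exactly once"
   ]
  },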
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 排序模型定义\n",
    "lgb_ranker = lgb.LGBMRanker(boosting_type='gbdt', num_leaves=31, reg_alpha=0.0, reg_lambda=1,\n",
    "                            max_depth=-1, n_estimators=100, subsample=0.7, colsample_bytree=0.7, subsample_freq=1,\n",
    "                            learning_rate=0.01, min_child_weight=50, random_state=2018, n_jobs= 16)  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[1]\tvalid_0's ndcg@1: 0.8448\tvalid_0's ndcg@2: 0.922531\tvalid_0's ndcg@3: 0.931931\tvalid_0's ndcg@4: 0.934773\tvalid_0's ndcg@5: 0.936166\n",
      "Training until validation scores don't improve for 50 rounds\n",
      "[2]\tvalid_0's ndcg@1: 0.845\tvalid_0's ndcg@2: 0.922731\tvalid_0's ndcg@3: 0.931731\tvalid_0's ndcg@4: 0.934573\tvalid_0's ndcg@5: 0.935927\n",
      "[3]\tvalid_0's ndcg@1: 0.8461\tvalid_0's ndcg@2: 0.923767\tvalid_0's ndcg@3: 0.933117\tvalid_0's ndcg@4: 0.935615\tvalid_0's ndcg@5: 0.936892\n",
      "[4]\tvalid_0's ndcg@1: 0.8479\tvalid_0's ndcg@2: 0.924747\tvalid_0's ndcg@3: 0.934147\tvalid_0's ndcg@4: 0.936602\tvalid_0's ndcg@5: 0.937608\n",
      "[5]\tvalid_0's ndcg@1: 0.8498\tvalid_0's ndcg@2: 0.926016\tvalid_0's ndcg@3: 0.935116\tvalid_0's ndcg@4: 0.937442\tvalid_0's ndcg@5: 0.938486\n",
      "[6]\tvalid_0's ndcg@1: 0.8514\tvalid_0's ndcg@2: 0.926607\tvalid_0's ndcg@3: 0.935757\tvalid_0's ndcg@4: 0.938126\tvalid_0's ndcg@5: 0.939054\n",
      "[7]\tvalid_0's ndcg@1: 0.8554\tvalid_0's ndcg@2: 0.92884\tvalid_0's ndcg@3: 0.93764\tvalid_0's ndcg@4: 0.93988\tvalid_0's ndcg@5: 0.940808\n",
      "[8]\tvalid_0's ndcg@1: 0.8575\tvalid_0's ndcg@2: 0.929741\tvalid_0's ndcg@3: 0.938491\tvalid_0's ndcg@4: 0.940645\tvalid_0's ndcg@5: 0.941573\n",
      "[9]\tvalid_0's ndcg@1: 0.8589\tvalid_0's ndcg@2: 0.930763\tvalid_0's ndcg@3: 0.939013\tvalid_0's ndcg@4: 0.941252\tvalid_0's ndcg@5: 0.942181\n",
      "[10]\tvalid_0's ndcg@1: 0.8582\tvalid_0's ndcg@2: 0.930694\tvalid_0's ndcg@3: 0.938844\tvalid_0's ndcg@4: 0.940997\tvalid_0's ndcg@5: 0.942003\n",
      "[11]\tvalid_0's ndcg@1: 0.8592\tvalid_0's ndcg@2: 0.931694\tvalid_0's ndcg@3: 0.939444\tvalid_0's ndcg@4: 0.94164\tvalid_0's ndcg@5: 0.942646\n",
      "[12]\tvalid_0's ndcg@1: 0.8602\tvalid_0's ndcg@2: 0.932189\tvalid_0's ndcg@3: 0.940039\tvalid_0's ndcg@4: 0.942149\tvalid_0's ndcg@5: 0.943078\n",
      "[13]\tvalid_0's ndcg@1: 0.8609\tvalid_0's ndcg@2: 0.932889\tvalid_0's ndcg@3: 0.940389\tvalid_0's ndcg@4: 0.94237\tvalid_0's ndcg@5: 0.943376\n",
      "[14]\tvalid_0's ndcg@1: 0.8607\tvalid_0's ndcg@2: 0.932689\tvalid_0's ndcg@3: 0.940439\tvalid_0's ndcg@4: 0.942291\tvalid_0's ndcg@5: 0.943258\n",
      "[15]\tvalid_0's ndcg@1: 0.86\tvalid_0's ndcg@2: 0.932494\tvalid_0's ndcg@3: 0.939994\tvalid_0's ndcg@4: 0.942104\tvalid_0's ndcg@5: 0.942994\n",
      "[16]\tvalid_0's ndcg@1: 0.8598\tvalid_0's ndcg@2: 0.932357\tvalid_0's ndcg@3: 0.939907\tvalid_0's ndcg@4: 0.94206\tvalid_0's ndcg@5: 0.94295\n",
      "[17]\tvalid_0's ndcg@1: 0.8613\tvalid_0's ndcg@2: 0.932974\tvalid_0's ndcg@3: 0.940374\tvalid_0's ndcg@4: 0.94257\tvalid_0's ndcg@5: 0.94346\n",
      "[18]\tvalid_0's ndcg@1: 0.86\tvalid_0's ndcg@2: 0.932368\tvalid_0's ndcg@3: 0.939768\tvalid_0's ndcg@4: 0.942093\tvalid_0's ndcg@5: 0.942944\n",
      "[19]\tvalid_0's ndcg@1: 0.8603\tvalid_0's ndcg@2: 0.932668\tvalid_0's ndcg@3: 0.940018\tvalid_0's ndcg@4: 0.9423\tvalid_0's ndcg@5: 0.943113\n",
      "[20]\tvalid_0's ndcg@1: 0.8619\tvalid_0's ndcg@2: 0.93288\tvalid_0's ndcg@3: 0.94043\tvalid_0's ndcg@4: 0.942755\tvalid_0's ndcg@5: 0.943645\n",
      "[21]\tvalid_0's ndcg@1: 0.8607\tvalid_0's ndcg@2: 0.932815\tvalid_0's ndcg@3: 0.939865\tvalid_0's ndcg@4: 0.942406\tvalid_0's ndcg@5: 0.943335\n",
      "[22]\tvalid_0's ndcg@1: 0.8616\tvalid_0's ndcg@2: 0.932958\tvalid_0's ndcg@3: 0.940208\tvalid_0's ndcg@4: 0.942577\tvalid_0's ndcg@5: 0.943544\n",
      "[23]\tvalid_0's ndcg@1: 0.862\tvalid_0's ndcg@2: 0.933232\tvalid_0's ndcg@3: 0.940432\tvalid_0's ndcg@4: 0.942758\tvalid_0's ndcg@5: 0.943725\n",
      "[24]\tvalid_0's ndcg@1: 0.8624\tvalid_0's ndcg@2: 0.933001\tvalid_0's ndcg@3: 0.940401\tvalid_0's ndcg@4: 0.942856\tvalid_0's ndcg@5: 0.943784\n",
      "[25]\tvalid_0's ndcg@1: 0.8618\tvalid_0's ndcg@2: 0.93259\tvalid_0's ndcg@3: 0.93999\tvalid_0's ndcg@4: 0.942531\tvalid_0's ndcg@5: 0.94346\n",
      "[26]\tvalid_0's ndcg@1: 0.8618\tvalid_0's ndcg@2: 0.93278\tvalid_0's ndcg@3: 0.94018\tvalid_0's ndcg@4: 0.942678\tvalid_0's ndcg@5: 0.943529\n",
      "[27]\tvalid_0's ndcg@1: 0.8617\tvalid_0's ndcg@2: 0.932364\tvalid_0's ndcg@3: 0.939964\tvalid_0's ndcg@4: 0.942419\tvalid_0's ndcg@5: 0.943386\n",
      "[28]\tvalid_0's ndcg@1: 0.8619\tvalid_0's ndcg@2: 0.932627\tvalid_0's ndcg@3: 0.940127\tvalid_0's ndcg@4: 0.942582\tvalid_0's ndcg@5: 0.943472\n",
      "[29]\tvalid_0's ndcg@1: 0.8617\tvalid_0's ndcg@2: 0.932743\tvalid_0's ndcg@3: 0.940193\tvalid_0's ndcg@4: 0.942561\tvalid_0's ndcg@5: 0.94349\n",
      "[30]\tvalid_0's ndcg@1: 0.862\tvalid_0's ndcg@2: 0.933043\tvalid_0's ndcg@3: 0.940243\tvalid_0's ndcg@4: 0.942654\tvalid_0's ndcg@5: 0.943622\n",
      "[31]\tvalid_0's ndcg@1: 0.8607\tvalid_0's ndcg@2: 0.932563\tvalid_0's ndcg@3: 0.939663\tvalid_0's ndcg@4: 0.942118\tvalid_0's ndcg@5: 0.943124\n",
      "[32]\tvalid_0's ndcg@1: 0.8604\tvalid_0's ndcg@2: 0.932578\tvalid_0's ndcg@3: 0.939628\tvalid_0's ndcg@4: 0.941954\tvalid_0's ndcg@5: 0.94296\n",
      "[33]\tvalid_0's ndcg@1: 0.8606\tvalid_0's ndcg@2: 0.932778\tvalid_0's ndcg@3: 0.939728\tvalid_0's ndcg@4: 0.942097\tvalid_0's ndcg@5: 0.943064\n",
      "[34]\tvalid_0's ndcg@1: 0.8604\tvalid_0's ndcg@2: 0.932705\tvalid_0's ndcg@3: 0.939555\tvalid_0's ndcg@4: 0.942052\tvalid_0's ndcg@5: 0.943097\n",
      "[35]\tvalid_0's ndcg@1: 0.8599\tvalid_0's ndcg@2: 0.932457\tvalid_0's ndcg@3: 0.939457\tvalid_0's ndcg@4: 0.941869\tvalid_0's ndcg@5: 0.942875\n",
      "[36]\tvalid_0's ndcg@1: 0.8599\tvalid_0's ndcg@2: 0.932394\tvalid_0's ndcg@3: 0.939394\tvalid_0's ndcg@4: 0.941806\tvalid_0's ndcg@5: 0.942773\n",
      "[37]\tvalid_0's ndcg@1: 0.8591\tvalid_0's ndcg@2: 0.93172\tvalid_0's ndcg@3: 0.93902\tvalid_0's ndcg@4: 0.941389\tvalid_0's ndcg@5: 0.942511\n",
      "[38]\tvalid_0's ndcg@1: 0.8595\tvalid_0's ndcg@2: 0.932057\tvalid_0's ndcg@3: 0.939257\tvalid_0's ndcg@4: 0.941583\tvalid_0's ndcg@5: 0.942666\n",
      "[39]\tvalid_0's ndcg@1: 0.8602\tvalid_0's ndcg@2: 0.932252\tvalid_0's ndcg@3: 0.939502\tvalid_0's ndcg@4: 0.941871\tvalid_0's ndcg@5: 0.942877\n",
      "[40]\tvalid_0's ndcg@1: 0.8596\tvalid_0's ndcg@2: 0.931905\tvalid_0's ndcg@3: 0.939205\tvalid_0's ndcg@4: 0.941616\tvalid_0's ndcg@5: 0.942661\n",
      "[41]\tvalid_0's ndcg@1: 0.8599\tvalid_0's ndcg@2: 0.931889\tvalid_0's ndcg@3: 0.939339\tvalid_0's ndcg@4: 0.941708\tvalid_0's ndcg@5: 0.94283\n",
      "[42]\tvalid_0's ndcg@1: 0.8599\tvalid_0's ndcg@2: 0.931952\tvalid_0's ndcg@3: 0.939302\tvalid_0's ndcg@4: 0.941628\tvalid_0's ndcg@5: 0.942827\n",
      "[43]\tvalid_0's ndcg@1: 0.8598\tvalid_0's ndcg@2: 0.932041\tvalid_0's ndcg@3: 0.939291\tvalid_0's ndcg@4: 0.941703\tvalid_0's ndcg@5: 0.942864\n",
      "[44]\tvalid_0's ndcg@1: 0.8601\tvalid_0's ndcg@2: 0.9319\tvalid_0's ndcg@3: 0.9393\tvalid_0's ndcg@4: 0.941712\tvalid_0's ndcg@5: 0.942872\n",
      "[45]\tvalid_0's ndcg@1: 0.8593\tvalid_0's ndcg@2: 0.931415\tvalid_0's ndcg@3: 0.938965\tvalid_0's ndcg@4: 0.941463\tvalid_0's ndcg@5: 0.942585\n",
      "[46]\tvalid_0's ndcg@1: 0.8595\tvalid_0's ndcg@2: 0.931615\tvalid_0's ndcg@3: 0.939065\tvalid_0's ndcg@4: 0.941563\tvalid_0's ndcg@5: 0.942685\n",
      "[47]\tvalid_0's ndcg@1: 0.8593\tvalid_0's ndcg@2: 0.931541\tvalid_0's ndcg@3: 0.938941\tvalid_0's ndcg@4: 0.941353\tvalid_0's ndcg@5: 0.942591\n",
      "[48]\tvalid_0's ndcg@1: 0.8601\tvalid_0's ndcg@2: 0.931647\tvalid_0's ndcg@3: 0.939197\tvalid_0's ndcg@4: 0.941738\tvalid_0's ndcg@5: 0.942822\n",
      "[49]\tvalid_0's ndcg@1: 0.8607\tvalid_0's ndcg@2: 0.931806\tvalid_0's ndcg@3: 0.939306\tvalid_0's ndcg@4: 0.941976\tvalid_0's ndcg@5: 0.942982\n",
      "[50]\tvalid_0's ndcg@1: 0.8614\tvalid_0's ndcg@2: 0.932001\tvalid_0's ndcg@3: 0.939551\tvalid_0's ndcg@4: 0.942221\tvalid_0's ndcg@5: 0.943188\n",
      "[51]\tvalid_0's ndcg@1: 0.8611\tvalid_0's ndcg@2: 0.931764\tvalid_0's ndcg@3: 0.939614\tvalid_0's ndcg@4: 0.942069\tvalid_0's ndcg@5: 0.943075\n",
      "[52]\tvalid_0's ndcg@1: 0.8612\tvalid_0's ndcg@2: 0.931675\tvalid_0's ndcg@3: 0.939675\tvalid_0's ndcg@4: 0.942216\tvalid_0's ndcg@5: 0.943106\n",
      "[53]\tvalid_0's ndcg@1: 0.8607\tvalid_0's ndcg@2: 0.931617\tvalid_0's ndcg@3: 0.939467\tvalid_0's ndcg@4: 0.942051\tvalid_0's ndcg@5: 0.94294\n",
      "[54]\tvalid_0's ndcg@1: 0.8602\tvalid_0's ndcg@2: 0.931495\tvalid_0's ndcg@3: 0.939245\tvalid_0's ndcg@4: 0.941743\tvalid_0's ndcg@5: 0.942749\n",
      "[55]\tvalid_0's ndcg@1: 0.8606\tvalid_0's ndcg@2: 0.931832\tvalid_0's ndcg@3: 0.939632\tvalid_0's ndcg@4: 0.941958\tvalid_0's ndcg@5: 0.942963\n",
      "[56]\tvalid_0's ndcg@1: 0.8608\tvalid_0's ndcg@2: 0.931969\tvalid_0's ndcg@3: 0.939619\tvalid_0's ndcg@4: 0.942117\tvalid_0's ndcg@5: 0.943045\n",
      "[57]\tvalid_0's ndcg@1: 0.8601\tvalid_0's ndcg@2: 0.931837\tvalid_0's ndcg@3: 0.939287\tvalid_0's ndcg@4: 0.941785\tvalid_0's ndcg@5: 0.94279\n",
      "[58]\tvalid_0's ndcg@1: 0.8602\tvalid_0's ndcg@2: 0.931874\tvalid_0's ndcg@3: 0.939374\tvalid_0's ndcg@4: 0.941785\tvalid_0's ndcg@5: 0.942791\n",
      "[59]\tvalid_0's ndcg@1: 0.8606\tvalid_0's ndcg@2: 0.932147\tvalid_0's ndcg@3: 0.939497\tvalid_0's ndcg@4: 0.941952\tvalid_0's ndcg@5: 0.942919\n",
      "[60]\tvalid_0's ndcg@1: 0.8608\tvalid_0's ndcg@2: 0.932095\tvalid_0's ndcg@3: 0.939695\tvalid_0's ndcg@4: 0.942021\tvalid_0's ndcg@5: 0.942988\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[61]\tvalid_0's ndcg@1: 0.8613\tvalid_0's ndcg@2: 0.932532\tvalid_0's ndcg@3: 0.939882\tvalid_0's ndcg@4: 0.942294\tvalid_0's ndcg@5: 0.943222\n",
      "[62]\tvalid_0's ndcg@1: 0.8595\tvalid_0's ndcg@2: 0.931805\tvalid_0's ndcg@3: 0.939255\tvalid_0's ndcg@4: 0.941666\tvalid_0's ndcg@5: 0.942517\n",
      "[63]\tvalid_0's ndcg@1: 0.8595\tvalid_0's ndcg@2: 0.931615\tvalid_0's ndcg@3: 0.939215\tvalid_0's ndcg@4: 0.94167\tvalid_0's ndcg@5: 0.942483\n",
      "[64]\tvalid_0's ndcg@1: 0.8591\tvalid_0's ndcg@2: 0.931468\tvalid_0's ndcg@3: 0.939068\tvalid_0's ndcg@4: 0.941566\tvalid_0's ndcg@5: 0.942378\n",
      "Early stopping, best iteration is:\n",
      "[14]\tvalid_0's ndcg@1: 0.8607\tvalid_0's ndcg@2: 0.932689\tvalid_0's ndcg@3: 0.940439\tvalid_0's ndcg@4: 0.942291\tvalid_0's ndcg@5: 0.943258\n"
     ]
    }
   ],
   "source": [
    "# 排序模型训练\n",
    "if informal:\n",
    "    lgb_ranker.fit(trn_user_item_feats_df_rank_model[lgb_cols], trn_user_item_feats_df_rank_model['label'], group=g_train,\n",
    "                eval_set=[(val_user_item_feats_df_rank_model[lgb_cols], val_user_item_feats_df_rank_model['label'])], \n",
    "                eval_group= [g_val], eval_at=[1, 2, 3, 4, 5], eval_metric=['ndcg', ], early_stopping_rounds=50, )\n",
    "else:\n",
    "    lgb_ranker.fit(trn_user_item_feats_df_rank_model[lgb_cols], trn_user_item_feats_df_rank_model['label'], group=g_train)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 64,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 模型预测\n",
    "tst_user_item_feats_df_rank_model['pred_score'] = lgb_ranker.predict(tst_user_item_feats_df_rank_model[lgb_cols], num_iteration=lgb_ranker.best_iteration_)\n",
    "\n",
    "# 将这里的排序结果保存一份，用户后面的模型融合\n",
    "tst_user_item_feats_df_rank_model[['user_id', 'click_article_id', 'pred_score']].to_csv(pathcache + 'lgb_ranker_score.csv', index=False)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 65,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "<ipython-input-65-0cb6afed208d>:4: SettingWithCopyWarning: \n",
      "A value is trying to be set on a copy of a slice from a DataFrame.\n",
      "Try using .loc[row_indexer,col_indexer] = value instead\n",
      "\n",
      "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n",
      "  rank_results['click_article_id'] = rank_results['click_article_id'].astype(int)\n"
     ]
    }
   ],
   "source": [
    "# 预测结果重新排序, 及生成提交结果\n",
    "# 单模型生成提交结果\n",
    "rank_results = tst_user_item_feats_df_rank_model[['user_id', 'click_article_id', 'pred_score']]\n",
    "rank_results['click_article_id'] = rank_results['click_article_id'].astype(int)\n",
    "submit(rank_results, topk=5, model_name='lgb_ranker')"
   ]
  },
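  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `submit` helper is defined earlier in the notebook; the core step it relies on is selecting, for each user, the `topk` articles with the highest `pred_score`. A minimal sketch of that step on toy data (illustration only, not the real helper):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Illustration only (toy data): per-user top-k selection by pred_score\n",
    "import pandas as pd\n",
    "\n",
    "toy = pd.DataFrame({'user_id': [1, 1, 1, 2, 2],\n",
    "                    'click_article_id': [10, 11, 12, 20, 21],\n",
    "                    'pred_score': [0.2, 0.9, 0.5, 0.3, 0.7]})\n",
    "# rank scores descending within each user, then keep the top 2\n",
    "toy['rank'] = toy.groupby('user_id')['pred_score'].rank(ascending=False, method='first')\n",
    "topk = toy[toy['rank'] <= 2].sort_values(['user_id', 'rank'])\n",
    "print(topk[['user_id', 'click_article_id']].values.tolist())\n",
    "# [[1, 11], [1, 12], [2, 21], [2, 20]]"
   ]
  },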
  {
   "cell_type": "code",
   "execution_count": 174,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "<ipython-input-174-4bc1148bd508>:22: SettingWithCopyWarning: \n",
      "A value is trying to be set on a copy of a slice from a DataFrame\n",
      "\n",
      "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n",
      "  train_idx.sort_values(by=['user_id'], inplace=True)\n",
      "<ipython-input-174-4bc1148bd508>:25: SettingWithCopyWarning: \n",
      "A value is trying to be set on a copy of a slice from a DataFrame\n",
      "\n",
      "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n",
      "  valid_idx.sort_values(by=['user_id'], inplace=True)\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[1]\tvalid_0's ndcg@1: 0.873763\tvalid_0's ndcg@2: 0.950504\tvalid_0's ndcg@3: 0.952517\tvalid_0's ndcg@4: 0.95271\tvalid_0's ndcg@5: 0.952761\n",
      "Training until validation scores don't improve for 50 rounds\n",
      "[2]\tvalid_0's ndcg@1: 0.877579\tvalid_0's ndcg@2: 0.952245\tvalid_0's ndcg@3: 0.953955\tvalid_0's ndcg@4: 0.95417\tvalid_0's ndcg@5: 0.954221\n",
      "[3]\tvalid_0's ndcg@1: 0.879842\tvalid_0's ndcg@2: 0.953229\tvalid_0's ndcg@3: 0.954874\tvalid_0's ndcg@4: 0.955044\tvalid_0's ndcg@5: 0.955105\n",
      "[4]\tvalid_0's ndcg@1: 0.8805\tvalid_0's ndcg@2: 0.953605\tvalid_0's ndcg@3: 0.955157\tvalid_0's ndcg@4: 0.955316\tvalid_0's ndcg@5: 0.955377\n",
      "[5]\tvalid_0's ndcg@1: 0.880895\tvalid_0's ndcg@2: 0.9538\tvalid_0's ndcg@3: 0.955327\tvalid_0's ndcg@4: 0.955474\tvalid_0's ndcg@5: 0.955535\n",
      "[6]\tvalid_0's ndcg@1: 0.882105\tvalid_0's ndcg@2: 0.95433\tvalid_0's ndcg@3: 0.955751\tvalid_0's ndcg@4: 0.955933\tvalid_0's ndcg@5: 0.955994\n",
      "[7]\tvalid_0's ndcg@1: 0.8825\tvalid_0's ndcg@2: 0.954526\tvalid_0's ndcg@3: 0.955907\tvalid_0's ndcg@4: 0.956089\tvalid_0's ndcg@5: 0.956139\n",
      "[8]\tvalid_0's ndcg@1: 0.883816\tvalid_0's ndcg@2: 0.955094\tvalid_0's ndcg@3: 0.956423\tvalid_0's ndcg@4: 0.956616\tvalid_0's ndcg@5: 0.956646\n",
      "[9]\tvalid_0's ndcg@1: 0.884184\tvalid_0's ndcg@2: 0.955214\tvalid_0's ndcg@3: 0.956556\tvalid_0's ndcg@4: 0.95676\tvalid_0's ndcg@5: 0.95678\n",
      "[10]\tvalid_0's ndcg@1: 0.8855\tvalid_0's ndcg@2: 0.955799\tvalid_0's ndcg@3: 0.957088\tvalid_0's ndcg@4: 0.95727\tvalid_0's ndcg@5: 0.9573\n",
      "[11]\tvalid_0's ndcg@1: 0.886395\tvalid_0's ndcg@2: 0.956079\tvalid_0's ndcg@3: 0.957448\tvalid_0's ndcg@4: 0.957595\tvalid_0's ndcg@5: 0.957626\n",
      "[12]\tvalid_0's ndcg@1: 0.887289\tvalid_0's ndcg@2: 0.956343\tvalid_0's ndcg@3: 0.957751\tvalid_0's ndcg@4: 0.95791\tvalid_0's ndcg@5: 0.95794\n",
      "[13]\tvalid_0's ndcg@1: 0.887658\tvalid_0's ndcg@2: 0.956462\tvalid_0's ndcg@3: 0.95787\tvalid_0's ndcg@4: 0.95804\tvalid_0's ndcg@5: 0.958071\n",
      "[14]\tvalid_0's ndcg@1: 0.887711\tvalid_0's ndcg@2: 0.956515\tvalid_0's ndcg@3: 0.95791\tvalid_0's ndcg@4: 0.958068\tvalid_0's ndcg@5: 0.958099\n",
      "[15]\tvalid_0's ndcg@1: 0.888526\tvalid_0's ndcg@2: 0.956866\tvalid_0's ndcg@3: 0.958221\tvalid_0's ndcg@4: 0.95838\tvalid_0's ndcg@5: 0.95841\n",
      "[16]\tvalid_0's ndcg@1: 0.889237\tvalid_0's ndcg@2: 0.957178\tvalid_0's ndcg@3: 0.958494\tvalid_0's ndcg@4: 0.958652\tvalid_0's ndcg@5: 0.958683\n",
      "[17]\tvalid_0's ndcg@1: 0.889526\tvalid_0's ndcg@2: 0.957268\tvalid_0's ndcg@3: 0.958584\tvalid_0's ndcg@4: 0.958754\tvalid_0's ndcg@5: 0.958785\n",
      "[18]\tvalid_0's ndcg@1: 0.889868\tvalid_0's ndcg@2: 0.957361\tvalid_0's ndcg@3: 0.95869\tvalid_0's ndcg@4: 0.95886\tvalid_0's ndcg@5: 0.958901\n",
      "[19]\tvalid_0's ndcg@1: 0.890184\tvalid_0's ndcg@2: 0.957461\tvalid_0's ndcg@3: 0.958817\tvalid_0's ndcg@4: 0.958975\tvalid_0's ndcg@5: 0.959016\n",
      "[20]\tvalid_0's ndcg@1: 0.891895\tvalid_0's ndcg@2: 0.958093\tvalid_0's ndcg@3: 0.959448\tvalid_0's ndcg@4: 0.959606\tvalid_0's ndcg@5: 0.959647\n",
      "[21]\tvalid_0's ndcg@1: 0.892868\tvalid_0's ndcg@2: 0.958469\tvalid_0's ndcg@3: 0.959784\tvalid_0's ndcg@4: 0.959954\tvalid_0's ndcg@5: 0.960005\n",
      "[22]\tvalid_0's ndcg@1: 0.893237\tvalid_0's ndcg@2: 0.958671\tvalid_0's ndcg@3: 0.959947\tvalid_0's ndcg@4: 0.960117\tvalid_0's ndcg@5: 0.960158\n",
      "[23]\tvalid_0's ndcg@1: 0.893026\tvalid_0's ndcg@2: 0.958543\tvalid_0's ndcg@3: 0.959859\tvalid_0's ndcg@4: 0.960018\tvalid_0's ndcg@5: 0.960069\n",
      "[24]\tvalid_0's ndcg@1: 0.893105\tvalid_0's ndcg@2: 0.958573\tvalid_0's ndcg@3: 0.959875\tvalid_0's ndcg@4: 0.960045\tvalid_0's ndcg@5: 0.960096\n",
      "[25]\tvalid_0's ndcg@1: 0.893368\tvalid_0's ndcg@2: 0.958703\tvalid_0's ndcg@3: 0.960019\tvalid_0's ndcg@4: 0.960155\tvalid_0's ndcg@5: 0.960206\n",
      "[26]\tvalid_0's ndcg@1: 0.893421\tvalid_0's ndcg@2: 0.958722\tvalid_0's ndcg@3: 0.960038\tvalid_0's ndcg@4: 0.960174\tvalid_0's ndcg@5: 0.960225\n",
      "[27]\tvalid_0's ndcg@1: 0.893632\tvalid_0's ndcg@2: 0.958783\tvalid_0's ndcg@3: 0.960099\tvalid_0's ndcg@4: 0.960258\tvalid_0's ndcg@5: 0.960299\n",
      "[28]\tvalid_0's ndcg@1: 0.894079\tvalid_0's ndcg@2: 0.958915\tvalid_0's ndcg@3: 0.960257\tvalid_0's ndcg@4: 0.960416\tvalid_0's ndcg@5: 0.960457\n",
      "[29]\tvalid_0's ndcg@1: 0.894605\tvalid_0's ndcg@2: 0.959176\tvalid_0's ndcg@3: 0.960479\tvalid_0's ndcg@4: 0.960615\tvalid_0's ndcg@5: 0.960665\n",
      "[30]\tvalid_0's ndcg@1: 0.894026\tvalid_0's ndcg@2: 0.958929\tvalid_0's ndcg@3: 0.960284\tvalid_0's ndcg@4: 0.960398\tvalid_0's ndcg@5: 0.960449\n",
      "[31]\tvalid_0's ndcg@1: 0.897211\tvalid_0's ndcg@2: 0.96022\tvalid_0's ndcg@3: 0.961444\tvalid_0's ndcg@4: 0.961603\tvalid_0's ndcg@5: 0.961644\n",
      "[32]\tvalid_0's ndcg@1: 0.897368\tvalid_0's ndcg@2: 0.960246\tvalid_0's ndcg@3: 0.961496\tvalid_0's ndcg@4: 0.961654\tvalid_0's ndcg@5: 0.961695\n",
      "[33]\tvalid_0's ndcg@1: 0.897158\tvalid_0's ndcg@2: 0.960135\tvalid_0's ndcg@3: 0.961424\tvalid_0's ndcg@4: 0.961571\tvalid_0's ndcg@5: 0.961612\n",
      "[34]\tvalid_0's ndcg@1: 0.897342\tvalid_0's ndcg@2: 0.960186\tvalid_0's ndcg@3: 0.961489\tvalid_0's ndcg@4: 0.961636\tvalid_0's ndcg@5: 0.961677\n",
      "[35]\tvalid_0's ndcg@1: 0.897\tvalid_0's ndcg@2: 0.96011\tvalid_0's ndcg@3: 0.961373\tvalid_0's ndcg@4: 0.961509\tvalid_0's ndcg@5: 0.96156\n",
      "[36]\tvalid_0's ndcg@1: 0.897237\tvalid_0's ndcg@2: 0.960131\tvalid_0's ndcg@3: 0.961446\tvalid_0's ndcg@4: 0.961582\tvalid_0's ndcg@5: 0.961633\n",
      "[37]\tvalid_0's ndcg@1: 0.897947\tvalid_0's ndcg@2: 0.960426\tvalid_0's ndcg@3: 0.961729\tvalid_0's ndcg@4: 0.961853\tvalid_0's ndcg@5: 0.961904\n",
      "[38]\tvalid_0's ndcg@1: 0.897658\tvalid_0's ndcg@2: 0.960319\tvalid_0's ndcg@3: 0.961622\tvalid_0's ndcg@4: 0.961746\tvalid_0's ndcg@5: 0.961797\n",
      "[39]\tvalid_0's ndcg@1: 0.898053\tvalid_0's ndcg@2: 0.960498\tvalid_0's ndcg@3: 0.961761\tvalid_0's ndcg@4: 0.961897\tvalid_0's ndcg@5: 0.961948\n",
      "[40]\tvalid_0's ndcg@1: 0.898921\tvalid_0's ndcg@2: 0.960835\tvalid_0's ndcg@3: 0.962098\tvalid_0's ndcg@4: 0.962223\tvalid_0's ndcg@5: 0.962274\n",
      "[41]\tvalid_0's ndcg@1: 0.899474\tvalid_0's ndcg@2: 0.961072\tvalid_0's ndcg@3: 0.962309\tvalid_0's ndcg@4: 0.962434\tvalid_0's ndcg@5: 0.962485\n",
      "[42]\tvalid_0's ndcg@1: 0.899605\tvalid_0's ndcg@2: 0.961121\tvalid_0's ndcg@3: 0.962371\tvalid_0's ndcg@4: 0.962484\tvalid_0's ndcg@5: 0.962535\n",
      "[43]\tvalid_0's ndcg@1: 0.899184\tvalid_0's ndcg@2: 0.960949\tvalid_0's ndcg@3: 0.962212\tvalid_0's ndcg@4: 0.962337\tvalid_0's ndcg@5: 0.962377\n",
      "[44]\tvalid_0's ndcg@1: 0.901868\tvalid_0's ndcg@2: 0.962023\tvalid_0's ndcg@3: 0.963194\tvalid_0's ndcg@4: 0.963341\tvalid_0's ndcg@5: 0.963382\n",
      "[45]\tvalid_0's ndcg@1: 0.902132\tvalid_0's ndcg@2: 0.962153\tvalid_0's ndcg@3: 0.963298\tvalid_0's ndcg@4: 0.963445\tvalid_0's ndcg@5: 0.963486\n",
      "[46]\tvalid_0's ndcg@1: 0.902184\tvalid_0's ndcg@2: 0.962172\tvalid_0's ndcg@3: 0.96333\tvalid_0's ndcg@4: 0.963466\tvalid_0's ndcg@5: 0.963507\n",
      "[47]\tvalid_0's ndcg@1: 0.902237\tvalid_0's ndcg@2: 0.962125\tvalid_0's ndcg@3: 0.963349\tvalid_0's ndcg@4: 0.963474\tvalid_0's ndcg@5: 0.963514\n",
      "[48]\tvalid_0's ndcg@1: 0.902632\tvalid_0's ndcg@2: 0.962271\tvalid_0's ndcg@3: 0.963482\tvalid_0's ndcg@4: 0.963618\tvalid_0's ndcg@5: 0.963658\n",
      "[49]\tvalid_0's ndcg@1: 0.903263\tvalid_0's ndcg@2: 0.962554\tvalid_0's ndcg@3: 0.963725\tvalid_0's ndcg@4: 0.963861\tvalid_0's ndcg@5: 0.963902\n",
      "[50]\tvalid_0's ndcg@1: 0.903816\tvalid_0's ndcg@2: 0.962741\tvalid_0's ndcg@3: 0.963926\tvalid_0's ndcg@4: 0.964062\tvalid_0's ndcg@5: 0.964102\n",
      "[51]\tvalid_0's ndcg@1: 0.904158\tvalid_0's ndcg@2: 0.962851\tvalid_0's ndcg@3: 0.964035\tvalid_0's ndcg@4: 0.964183\tvalid_0's ndcg@5: 0.964223\n",
      "[52]\tvalid_0's ndcg@1: 0.904421\tvalid_0's ndcg@2: 0.962965\tvalid_0's ndcg@3: 0.964136\tvalid_0's ndcg@4: 0.964283\tvalid_0's ndcg@5: 0.964324\n",
      "[53]\tvalid_0's ndcg@1: 0.903711\tvalid_0's ndcg@2: 0.962702\tvalid_0's ndcg@3: 0.963874\tvalid_0's ndcg@4: 0.964021\tvalid_0's ndcg@5: 0.964062\n",
      "[54]\tvalid_0's ndcg@1: 0.903789\tvalid_0's ndcg@2: 0.962715\tvalid_0's ndcg@3: 0.963899\tvalid_0's ndcg@4: 0.964047\tvalid_0's ndcg@5: 0.964087\n",
      "[55]\tvalid_0's ndcg@1: 0.904\tvalid_0's ndcg@2: 0.962809\tvalid_0's ndcg@3: 0.96398\tvalid_0's ndcg@4: 0.964128\tvalid_0's ndcg@5: 0.964168\n",
      "[56]\tvalid_0's ndcg@1: 0.904368\tvalid_0's ndcg@2: 0.962962\tvalid_0's ndcg@3: 0.96412\tvalid_0's ndcg@4: 0.964267\tvalid_0's ndcg@5: 0.964308\n",
      "[57]\tvalid_0's ndcg@1: 0.904289\tvalid_0's ndcg@2: 0.962933\tvalid_0's ndcg@3: 0.964091\tvalid_0's ndcg@4: 0.964238\tvalid_0's ndcg@5: 0.964279\n",
      "[58]\tvalid_0's ndcg@1: 0.904947\tvalid_0's ndcg@2: 0.963176\tvalid_0's ndcg@3: 0.964333\tvalid_0's ndcg@4: 0.964481\tvalid_0's ndcg@5: 0.964521\n",
      "[59]\tvalid_0's ndcg@1: 0.904789\tvalid_0's ndcg@2: 0.963101\tvalid_0's ndcg@3: 0.964285\tvalid_0's ndcg@4: 0.964421\tvalid_0's ndcg@5: 0.964462\n",
      "[60]\tvalid_0's ndcg@1: 0.904711\tvalid_0's ndcg@2: 0.963088\tvalid_0's ndcg@3: 0.964259\tvalid_0's ndcg@4: 0.964395\tvalid_0's ndcg@5: 0.964436\n",
      "[61]\tvalid_0's ndcg@1: 0.904605\tvalid_0's ndcg@2: 0.963033\tvalid_0's ndcg@3: 0.964217\tvalid_0's ndcg@4: 0.964342\tvalid_0's ndcg@5: 0.964392\n",
      "[62]\tvalid_0's ndcg@1: 0.906105\tvalid_0's ndcg@2: 0.963636\tvalid_0's ndcg@3: 0.964781\tvalid_0's ndcg@4: 0.964917\tvalid_0's ndcg@5: 0.964958\n",
      "[63]\tvalid_0's ndcg@1: 0.906447\tvalid_0's ndcg@2: 0.963779\tvalid_0's ndcg@3: 0.964911\tvalid_0's ndcg@4: 0.965047\tvalid_0's ndcg@5: 0.965087\n",
      "[64]\tvalid_0's ndcg@1: 0.907211\tvalid_0's ndcg@2: 0.964044\tvalid_0's ndcg@3: 0.965189\tvalid_0's ndcg@4: 0.965325\tvalid_0's ndcg@5: 0.965365\n",
      "[65]\tvalid_0's ndcg@1: 0.907132\tvalid_0's ndcg@2: 0.963998\tvalid_0's ndcg@3: 0.965156\tvalid_0's ndcg@4: 0.965292\tvalid_0's ndcg@5: 0.965333\n",
      "[66]\tvalid_0's ndcg@1: 0.907263\tvalid_0's ndcg@2: 0.964063\tvalid_0's ndcg@3: 0.965208\tvalid_0's ndcg@4: 0.965344\tvalid_0's ndcg@5: 0.965385\n",
      "[67]\tvalid_0's ndcg@1: 0.907026\tvalid_0's ndcg@2: 0.963976\tvalid_0's ndcg@3: 0.965121\tvalid_0's ndcg@4: 0.965257\tvalid_0's ndcg@5: 0.965297\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[68]\tvalid_0's ndcg@1: 0.907026\tvalid_0's ndcg@2: 0.963993\tvalid_0's ndcg@3: 0.965124\tvalid_0's ndcg@4: 0.96526\tvalid_0's ndcg@5: 0.965301\n",
      "[69]\tvalid_0's ndcg@1: 0.907526\tvalid_0's ndcg@2: 0.964244\tvalid_0's ndcg@3: 0.965323\tvalid_0's ndcg@4: 0.965459\tvalid_0's ndcg@5: 0.965499\n",
      "[70]\tvalid_0's ndcg@1: 0.907526\tvalid_0's ndcg@2: 0.964227\tvalid_0's ndcg@3: 0.965319\tvalid_0's ndcg@4: 0.965455\tvalid_0's ndcg@5: 0.965496\n",
      "[71]\tvalid_0's ndcg@1: 0.908132\tvalid_0's ndcg@2: 0.964467\tvalid_0's ndcg@3: 0.965546\tvalid_0's ndcg@4: 0.965682\tvalid_0's ndcg@5: 0.965723\n",
      "[72]\tvalid_0's ndcg@1: 0.909447\tvalid_0's ndcg@2: 0.965002\tvalid_0's ndcg@3: 0.966042\tvalid_0's ndcg@4: 0.966178\tvalid_0's ndcg@5: 0.966219\n",
      "[73]\tvalid_0's ndcg@1: 0.909342\tvalid_0's ndcg@2: 0.964964\tvalid_0's ndcg@3: 0.966003\tvalid_0's ndcg@4: 0.966139\tvalid_0's ndcg@5: 0.96618\n",
      "[74]\tvalid_0's ndcg@1: 0.909763\tvalid_0's ndcg@2: 0.965102\tvalid_0's ndcg@3: 0.966155\tvalid_0's ndcg@4: 0.966291\tvalid_0's ndcg@5: 0.966332\n",
      "[75]\tvalid_0's ndcg@1: 0.909763\tvalid_0's ndcg@2: 0.965086\tvalid_0's ndcg@3: 0.966152\tvalid_0's ndcg@4: 0.966288\tvalid_0's ndcg@5: 0.966328\n",
      "[76]\tvalid_0's ndcg@1: 0.909658\tvalid_0's ndcg@2: 0.965047\tvalid_0's ndcg@3: 0.966113\tvalid_0's ndcg@4: 0.966249\tvalid_0's ndcg@5: 0.966289\n",
      "[77]\tvalid_0's ndcg@1: 0.909974\tvalid_0's ndcg@2: 0.965197\tvalid_0's ndcg@3: 0.966236\tvalid_0's ndcg@4: 0.966372\tvalid_0's ndcg@5: 0.966413\n",
      "[78]\tvalid_0's ndcg@1: 0.909816\tvalid_0's ndcg@2: 0.965188\tvalid_0's ndcg@3: 0.966201\tvalid_0's ndcg@4: 0.966326\tvalid_0's ndcg@5: 0.966367\n",
      "[79]\tvalid_0's ndcg@1: 0.910474\tvalid_0's ndcg@2: 0.965431\tvalid_0's ndcg@3: 0.966444\tvalid_0's ndcg@4: 0.966569\tvalid_0's ndcg@5: 0.96661\n",
      "[80]\tvalid_0's ndcg@1: 0.910605\tvalid_0's ndcg@2: 0.965463\tvalid_0's ndcg@3: 0.966489\tvalid_0's ndcg@4: 0.966614\tvalid_0's ndcg@5: 0.966655\n",
      "[81]\tvalid_0's ndcg@1: 0.911711\tvalid_0's ndcg@2: 0.965904\tvalid_0's ndcg@3: 0.966904\tvalid_0's ndcg@4: 0.967029\tvalid_0's ndcg@5: 0.967069\n",
      "[82]\tvalid_0's ndcg@1: 0.911711\tvalid_0's ndcg@2: 0.965904\tvalid_0's ndcg@3: 0.966904\tvalid_0's ndcg@4: 0.967029\tvalid_0's ndcg@5: 0.967069\n",
      "[83]\tvalid_0's ndcg@1: 0.911816\tvalid_0's ndcg@2: 0.965943\tvalid_0's ndcg@3: 0.966943\tvalid_0's ndcg@4: 0.967068\tvalid_0's ndcg@5: 0.967108\n",
      "[84]\tvalid_0's ndcg@1: 0.912053\tvalid_0's ndcg@2: 0.966014\tvalid_0's ndcg@3: 0.967027\tvalid_0's ndcg@4: 0.967152\tvalid_0's ndcg@5: 0.967192\n",
      "[85]\tvalid_0's ndcg@1: 0.912184\tvalid_0's ndcg@2: 0.966062\tvalid_0's ndcg@3: 0.967075\tvalid_0's ndcg@4: 0.9672\tvalid_0's ndcg@5: 0.967241\n",
      "[86]\tvalid_0's ndcg@1: 0.912158\tvalid_0's ndcg@2: 0.966069\tvalid_0's ndcg@3: 0.967069\tvalid_0's ndcg@4: 0.967194\tvalid_0's ndcg@5: 0.967235\n",
      "[87]\tvalid_0's ndcg@1: 0.912447\tvalid_0's ndcg@2: 0.966193\tvalid_0's ndcg@3: 0.967179\tvalid_0's ndcg@4: 0.967304\tvalid_0's ndcg@5: 0.967345\n",
      "[88]\tvalid_0's ndcg@1: 0.912579\tvalid_0's ndcg@2: 0.966241\tvalid_0's ndcg@3: 0.967228\tvalid_0's ndcg@4: 0.967353\tvalid_0's ndcg@5: 0.967393\n",
      "[89]\tvalid_0's ndcg@1: 0.912763\tvalid_0's ndcg@2: 0.966326\tvalid_0's ndcg@3: 0.967299\tvalid_0's ndcg@4: 0.967424\tvalid_0's ndcg@5: 0.967465\n",
      "[90]\tvalid_0's ndcg@1: 0.912868\tvalid_0's ndcg@2: 0.966348\tvalid_0's ndcg@3: 0.967335\tvalid_0's ndcg@4: 0.96746\tvalid_0's ndcg@5: 0.9675\n",
      "[91]\tvalid_0's ndcg@1: 0.912737\tvalid_0's ndcg@2: 0.966333\tvalid_0's ndcg@3: 0.967293\tvalid_0's ndcg@4: 0.967418\tvalid_0's ndcg@5: 0.967459\n",
      "[92]\tvalid_0's ndcg@1: 0.912684\tvalid_0's ndcg@2: 0.96628\tvalid_0's ndcg@3: 0.967267\tvalid_0's ndcg@4: 0.967392\tvalid_0's ndcg@5: 0.967432\n",
      "[93]\tvalid_0's ndcg@1: 0.912711\tvalid_0's ndcg@2: 0.96629\tvalid_0's ndcg@3: 0.967277\tvalid_0's ndcg@4: 0.967401\tvalid_0's ndcg@5: 0.967442\n",
      "[94]\tvalid_0's ndcg@1: 0.912632\tvalid_0's ndcg@2: 0.966244\tvalid_0's ndcg@3: 0.967244\tvalid_0's ndcg@4: 0.967357\tvalid_0's ndcg@5: 0.967408\n",
      "[95]\tvalid_0's ndcg@1: 0.913053\tvalid_0's ndcg@2: 0.966399\tvalid_0's ndcg@3: 0.967399\tvalid_0's ndcg@4: 0.967513\tvalid_0's ndcg@5: 0.967564\n",
      "[96]\tvalid_0's ndcg@1: 0.913816\tvalid_0's ndcg@2: 0.966731\tvalid_0's ndcg@3: 0.967665\tvalid_0's ndcg@4: 0.967801\tvalid_0's ndcg@5: 0.967852\n",
      "[97]\tvalid_0's ndcg@1: 0.914105\tvalid_0's ndcg@2: 0.966854\tvalid_0's ndcg@3: 0.967775\tvalid_0's ndcg@4: 0.967911\tvalid_0's ndcg@5: 0.967962\n",
      "[98]\tvalid_0's ndcg@1: 0.913737\tvalid_0's ndcg@2: 0.966702\tvalid_0's ndcg@3: 0.967649\tvalid_0's ndcg@4: 0.967774\tvalid_0's ndcg@5: 0.967825\n",
      "[99]\tvalid_0's ndcg@1: 0.914026\tvalid_0's ndcg@2: 0.966842\tvalid_0's ndcg@3: 0.967763\tvalid_0's ndcg@4: 0.967887\tvalid_0's ndcg@5: 0.967938\n",
      "[100]\tvalid_0's ndcg@1: 0.913921\tvalid_0's ndcg@2: 0.96682\tvalid_0's ndcg@3: 0.967727\tvalid_0's ndcg@4: 0.967852\tvalid_0's ndcg@5: 0.967903\n",
      "Did not meet early stopping. Best iteration is:\n",
      "[97]\tvalid_0's ndcg@1: 0.914105\tvalid_0's ndcg@2: 0.966854\tvalid_0's ndcg@3: 0.967775\tvalid_0's ndcg@4: 0.967911\tvalid_0's ndcg@5: 0.967962\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "<ipython-input-174-4bc1148bd508>:38: SettingWithCopyWarning: \n",
      "A value is trying to be set on a copy of a slice from a DataFrame.\n",
      "Try using .loc[row_indexer,col_indexer] = value instead\n",
      "\n",
      "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n",
      "  valid_idx['pred_score'] = lgb_ranker.predict(valid_idx[lgb_cols], num_iteration=lgb_ranker.best_iteration_)\n",
      "<ipython-input-174-4bc1148bd508>:41: SettingWithCopyWarning: \n",
      "A value is trying to be set on a copy of a slice from a DataFrame.\n",
      "Try using .loc[row_indexer,col_indexer] = value instead\n",
      "\n",
      "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n",
      "  valid_idx['pred_score'] = valid_idx[['pred_score']].transform(lambda x: norm_sim(x))\n",
      "<ipython-input-174-4bc1148bd508>:44: SettingWithCopyWarning: \n",
      "A value is trying to be set on a copy of a slice from a DataFrame.\n",
      "Try using .loc[row_indexer,col_indexer] = value instead\n",
      "\n",
      "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n",
      "  valid_idx['pred_rank'] = valid_idx.groupby(['user_id'])['pred_score'].rank(ascending=False, method='first')\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[1]\tvalid_0's ndcg@1: 0.871842\tvalid_0's ndcg@2: 0.949895\tvalid_0's ndcg@3: 0.951697\tvalid_0's ndcg@4: 0.951958\tvalid_0's ndcg@5: 0.95204\n",
      "Training until validation scores don't improve for 50 rounds\n",
      "[2]\tvalid_0's ndcg@1: 0.877816\tvalid_0's ndcg@2: 0.952166\tvalid_0's ndcg@3: 0.953903\tvalid_0's ndcg@4: 0.954175\tvalid_0's ndcg@5: 0.954246\n",
      "[3]\tvalid_0's ndcg@1: 0.880237\tvalid_0's ndcg@2: 0.953342\tvalid_0's ndcg@3: 0.954947\tvalid_0's ndcg@4: 0.955174\tvalid_0's ndcg@5: 0.955225\n",
      "[4]\tvalid_0's ndcg@1: 0.880158\tvalid_0's ndcg@2: 0.953313\tvalid_0's ndcg@3: 0.954918\tvalid_0's ndcg@4: 0.955167\tvalid_0's ndcg@5: 0.955198\n",
      "[5]\tvalid_0's ndcg@1: 0.882447\tvalid_0's ndcg@2: 0.954174\tvalid_0's ndcg@3: 0.955766\tvalid_0's ndcg@4: 0.956027\tvalid_0's ndcg@5: 0.956047\n",
      "[6]\tvalid_0's ndcg@1: 0.883132\tvalid_0's ndcg@2: 0.954526\tvalid_0's ndcg@3: 0.956105\tvalid_0's ndcg@4: 0.956309\tvalid_0's ndcg@5: 0.95633\n",
      "[7]\tvalid_0's ndcg@1: 0.885605\tvalid_0's ndcg@2: 0.955522\tvalid_0's ndcg@3: 0.957049\tvalid_0's ndcg@4: 0.957241\tvalid_0's ndcg@5: 0.957262\n",
      "[8]\tvalid_0's ndcg@1: 0.887053\tvalid_0's ndcg@2: 0.95604\tvalid_0's ndcg@3: 0.957606\tvalid_0's ndcg@4: 0.957776\tvalid_0's ndcg@5: 0.957796\n",
      "[9]\tvalid_0's ndcg@1: 0.887\tvalid_0's ndcg@2: 0.956103\tvalid_0's ndcg@3: 0.95759\tvalid_0's ndcg@4: 0.957772\tvalid_0's ndcg@5: 0.957792\n",
      "[10]\tvalid_0's ndcg@1: 0.888289\tvalid_0's ndcg@2: 0.956513\tvalid_0's ndcg@3: 0.958\tvalid_0's ndcg@4: 0.958215\tvalid_0's ndcg@5: 0.958246\n",
      "[11]\tvalid_0's ndcg@1: 0.888632\tvalid_0's ndcg@2: 0.956888\tvalid_0's ndcg@3: 0.95823\tvalid_0's ndcg@4: 0.958423\tvalid_0's ndcg@5: 0.958433\n",
      "[12]\tvalid_0's ndcg@1: 0.888763\tvalid_0's ndcg@2: 0.956937\tvalid_0's ndcg@3: 0.958266\tvalid_0's ndcg@4: 0.95847\tvalid_0's ndcg@5: 0.95848\n",
      "[13]\tvalid_0's ndcg@1: 0.888921\tvalid_0's ndcg@2: 0.957012\tvalid_0's ndcg@3: 0.958327\tvalid_0's ndcg@4: 0.958531\tvalid_0's ndcg@5: 0.958542\n",
      "[14]\tvalid_0's ndcg@1: 0.889132\tvalid_0's ndcg@2: 0.957006\tvalid_0's ndcg@3: 0.958401\tvalid_0's ndcg@4: 0.958594\tvalid_0's ndcg@5: 0.958604\n",
      "[15]\tvalid_0's ndcg@1: 0.891132\tvalid_0's ndcg@2: 0.957695\tvalid_0's ndcg@3: 0.959089\tvalid_0's ndcg@4: 0.959282\tvalid_0's ndcg@5: 0.959323\n",
      "[16]\tvalid_0's ndcg@1: 0.891211\tvalid_0's ndcg@2: 0.957707\tvalid_0's ndcg@3: 0.959128\tvalid_0's ndcg@4: 0.959321\tvalid_0's ndcg@5: 0.959351\n",
      "[17]\tvalid_0's ndcg@1: 0.891421\tvalid_0's ndcg@2: 0.957868\tvalid_0's ndcg@3: 0.95921\tvalid_0's ndcg@4: 0.959425\tvalid_0's ndcg@5: 0.959446\n",
      "[18]\tvalid_0's ndcg@1: 0.892053\tvalid_0's ndcg@2: 0.958151\tvalid_0's ndcg@3: 0.959467\tvalid_0's ndcg@4: 0.959671\tvalid_0's ndcg@5: 0.959691\n",
      "[19]\tvalid_0's ndcg@1: 0.892184\tvalid_0's ndcg@2: 0.958216\tvalid_0's ndcg@3: 0.959545\tvalid_0's ndcg@4: 0.959726\tvalid_0's ndcg@5: 0.959747\n",
      "[20]\tvalid_0's ndcg@1: 0.892026\tvalid_0's ndcg@2: 0.958191\tvalid_0's ndcg@3: 0.95948\tvalid_0's ndcg@4: 0.959673\tvalid_0's ndcg@5: 0.959693\n",
      "[21]\tvalid_0's ndcg@1: 0.892579\tvalid_0's ndcg@2: 0.958378\tvalid_0's ndcg@3: 0.959694\tvalid_0's ndcg@4: 0.959875\tvalid_0's ndcg@5: 0.959896\n",
      "[22]\tvalid_0's ndcg@1: 0.893368\tvalid_0's ndcg@2: 0.958653\tvalid_0's ndcg@3: 0.959995\tvalid_0's ndcg@4: 0.960165\tvalid_0's ndcg@5: 0.960186\n",
      "[23]\tvalid_0's ndcg@1: 0.893132\tvalid_0's ndcg@2: 0.958549\tvalid_0's ndcg@3: 0.959904\tvalid_0's ndcg@4: 0.960074\tvalid_0's ndcg@5: 0.960095\n",
      "[24]\tvalid_0's ndcg@1: 0.893289\tvalid_0's ndcg@2: 0.958591\tvalid_0's ndcg@3: 0.959959\tvalid_0's ndcg@4: 0.960129\tvalid_0's ndcg@5: 0.960149\n",
      "[25]\tvalid_0's ndcg@1: 0.893184\tvalid_0's ndcg@2: 0.958585\tvalid_0's ndcg@3: 0.959888\tvalid_0's ndcg@4: 0.96008\tvalid_0's ndcg@5: 0.960111\n",
      "[26]\tvalid_0's ndcg@1: 0.893421\tvalid_0's ndcg@2: 0.958606\tvalid_0's ndcg@3: 0.959948\tvalid_0's ndcg@4: 0.960141\tvalid_0's ndcg@5: 0.960182\n",
      "[27]\tvalid_0's ndcg@1: 0.893711\tvalid_0's ndcg@2: 0.958779\tvalid_0's ndcg@3: 0.960056\tvalid_0's ndcg@4: 0.96026\tvalid_0's ndcg@5: 0.9603\n",
      "[28]\tvalid_0's ndcg@1: 0.894079\tvalid_0's ndcg@2: 0.958899\tvalid_0's ndcg@3: 0.960201\tvalid_0's ndcg@4: 0.960394\tvalid_0's ndcg@5: 0.960435\n",
      "[29]\tvalid_0's ndcg@1: 0.894974\tvalid_0's ndcg@2: 0.959179\tvalid_0's ndcg@3: 0.960521\tvalid_0's ndcg@4: 0.960714\tvalid_0's ndcg@5: 0.960755\n",
      "[30]\tvalid_0's ndcg@1: 0.894789\tvalid_0's ndcg@2: 0.959161\tvalid_0's ndcg@3: 0.96049\tvalid_0's ndcg@4: 0.960671\tvalid_0's ndcg@5: 0.960702\n",
      "[31]\tvalid_0's ndcg@1: 0.895974\tvalid_0's ndcg@2: 0.959631\tvalid_0's ndcg@3: 0.960921\tvalid_0's ndcg@4: 0.961125\tvalid_0's ndcg@5: 0.961145\n",
      "[32]\tvalid_0's ndcg@1: 0.895763\tvalid_0's ndcg@2: 0.959537\tvalid_0's ndcg@3: 0.96084\tvalid_0's ndcg@4: 0.961044\tvalid_0's ndcg@5: 0.961064\n",
      "[33]\tvalid_0's ndcg@1: 0.895763\tvalid_0's ndcg@2: 0.959504\tvalid_0's ndcg@3: 0.960859\tvalid_0's ndcg@4: 0.96104\tvalid_0's ndcg@5: 0.961061\n",
      "[34]\tvalid_0's ndcg@1: 0.895342\tvalid_0's ndcg@2: 0.959381\tvalid_0's ndcg@3: 0.960724\tvalid_0's ndcg@4: 0.960882\tvalid_0's ndcg@5: 0.960913\n",
      "[35]\tvalid_0's ndcg@1: 0.895079\tvalid_0's ndcg@2: 0.959284\tvalid_0's ndcg@3: 0.960626\tvalid_0's ndcg@4: 0.960785\tvalid_0's ndcg@5: 0.960816\n",
      "[36]\tvalid_0's ndcg@1: 0.895395\tvalid_0's ndcg@2: 0.959401\tvalid_0's ndcg@3: 0.960717\tvalid_0's ndcg@4: 0.960898\tvalid_0's ndcg@5: 0.960929\n",
      "[37]\tvalid_0's ndcg@1: 0.896553\tvalid_0's ndcg@2: 0.959845\tvalid_0's ndcg@3: 0.961147\tvalid_0's ndcg@4: 0.961329\tvalid_0's ndcg@5: 0.961359\n",
      "[38]\tvalid_0's ndcg@1: 0.898684\tvalid_0's ndcg@2: 0.960681\tvalid_0's ndcg@3: 0.961945\tvalid_0's ndcg@4: 0.962126\tvalid_0's ndcg@5: 0.962156\n",
      "[39]\tvalid_0's ndcg@1: 0.898947\tvalid_0's ndcg@2: 0.960762\tvalid_0's ndcg@3: 0.962038\tvalid_0's ndcg@4: 0.962208\tvalid_0's ndcg@5: 0.962249\n",
      "[40]\tvalid_0's ndcg@1: 0.899447\tvalid_0's ndcg@2: 0.96098\tvalid_0's ndcg@3: 0.962243\tvalid_0's ndcg@4: 0.962401\tvalid_0's ndcg@5: 0.962442\n",
      "[41]\tvalid_0's ndcg@1: 0.899868\tvalid_0's ndcg@2: 0.961135\tvalid_0's ndcg@3: 0.962398\tvalid_0's ndcg@4: 0.962568\tvalid_0's ndcg@5: 0.962599\n",
      "[42]\tvalid_0's ndcg@1: 0.899737\tvalid_0's ndcg@2: 0.96112\tvalid_0's ndcg@3: 0.96237\tvalid_0's ndcg@4: 0.96254\tvalid_0's ndcg@5: 0.96256\n",
      "[43]\tvalid_0's ndcg@1: 0.899737\tvalid_0's ndcg@2: 0.961136\tvalid_0's ndcg@3: 0.962386\tvalid_0's ndcg@4: 0.962545\tvalid_0's ndcg@5: 0.962565\n",
      "[44]\tvalid_0's ndcg@1: 0.900737\tvalid_0's ndcg@2: 0.961505\tvalid_0's ndcg@3: 0.962729\tvalid_0's ndcg@4: 0.962922\tvalid_0's ndcg@5: 0.962932\n",
      "[45]\tvalid_0's ndcg@1: 0.900526\tvalid_0's ndcg@2: 0.961444\tvalid_0's ndcg@3: 0.962668\tvalid_0's ndcg@4: 0.962838\tvalid_0's ndcg@5: 0.962858\n",
      "[46]\tvalid_0's ndcg@1: 0.900395\tvalid_0's ndcg@2: 0.961396\tvalid_0's ndcg@3: 0.962619\tvalid_0's ndcg@4: 0.962801\tvalid_0's ndcg@5: 0.962811\n",
      "[47]\tvalid_0's ndcg@1: 0.899921\tvalid_0's ndcg@2: 0.961271\tvalid_0's ndcg@3: 0.962455\tvalid_0's ndcg@4: 0.962636\tvalid_0's ndcg@5: 0.962646\n",
      "[48]\tvalid_0's ndcg@1: 0.900316\tvalid_0's ndcg@2: 0.961416\tvalid_0's ndcg@3: 0.962601\tvalid_0's ndcg@4: 0.962782\tvalid_0's ndcg@5: 0.962792\n",
      "[49]\tvalid_0's ndcg@1: 0.901316\tvalid_0's ndcg@2: 0.961736\tvalid_0's ndcg@3: 0.962972\tvalid_0's ndcg@4: 0.963131\tvalid_0's ndcg@5: 0.963151\n",
      "[50]\tvalid_0's ndcg@1: 0.901368\tvalid_0's ndcg@2: 0.961722\tvalid_0's ndcg@3: 0.962985\tvalid_0's ndcg@4: 0.963144\tvalid_0's ndcg@5: 0.963164\n",
      "[51]\tvalid_0's ndcg@1: 0.901526\tvalid_0's ndcg@2: 0.961797\tvalid_0's ndcg@3: 0.963007\tvalid_0's ndcg@4: 0.9632\tvalid_0's ndcg@5: 0.96322\n",
      "[52]\tvalid_0's ndcg@1: 0.901737\tvalid_0's ndcg@2: 0.961874\tvalid_0's ndcg@3: 0.963085\tvalid_0's ndcg@4: 0.963278\tvalid_0's ndcg@5: 0.963298\n",
      "[53]\tvalid_0's ndcg@1: 0.9015\tvalid_0's ndcg@2: 0.96182\tvalid_0's ndcg@3: 0.963004\tvalid_0's ndcg@4: 0.963197\tvalid_0's ndcg@5: 0.963217\n",
      "[54]\tvalid_0's ndcg@1: 0.901421\tvalid_0's ndcg@2: 0.961758\tvalid_0's ndcg@3: 0.962968\tvalid_0's ndcg@4: 0.963161\tvalid_0's ndcg@5: 0.963181\n",
      "[55]\tvalid_0's ndcg@1: 0.901526\tvalid_0's ndcg@2: 0.961797\tvalid_0's ndcg@3: 0.963007\tvalid_0's ndcg@4: 0.9632\tvalid_0's ndcg@5: 0.96322\n",
      "[56]\tvalid_0's ndcg@1: 0.901605\tvalid_0's ndcg@2: 0.961842\tvalid_0's ndcg@3: 0.96304\tvalid_0's ndcg@4: 0.963232\tvalid_0's ndcg@5: 0.963253\n",
      "[57]\tvalid_0's ndcg@1: 0.901526\tvalid_0's ndcg@2: 0.96183\tvalid_0's ndcg@3: 0.963014\tvalid_0's ndcg@4: 0.963207\tvalid_0's ndcg@5: 0.963227\n",
      "[58]\tvalid_0's ndcg@1: 0.902289\tvalid_0's ndcg@2: 0.962128\tvalid_0's ndcg@3: 0.963312\tvalid_0's ndcg@4: 0.963505\tvalid_0's ndcg@5: 0.963515\n",
      "[59]\tvalid_0's ndcg@1: 0.903447\tvalid_0's ndcg@2: 0.962605\tvalid_0's ndcg@3: 0.963763\tvalid_0's ndcg@4: 0.963945\tvalid_0's ndcg@5: 0.963955\n",
      "[60]\tvalid_0's ndcg@1: 0.903289\tvalid_0's ndcg@2: 0.962547\tvalid_0's ndcg@3: 0.963705\tvalid_0's ndcg@4: 0.963886\tvalid_0's ndcg@5: 0.963896\n",
      "[61]\tvalid_0's ndcg@1: 0.903184\tvalid_0's ndcg@2: 0.962508\tvalid_0's ndcg@3: 0.963666\tvalid_0's ndcg@4: 0.963847\tvalid_0's ndcg@5: 0.963858\n",
      "[62]\tvalid_0's ndcg@1: 0.904605\tvalid_0's ndcg@2: 0.963049\tvalid_0's ndcg@3: 0.964168\tvalid_0's ndcg@4: 0.964372\tvalid_0's ndcg@5: 0.964382\n",
      "[63]\tvalid_0's ndcg@1: 0.904921\tvalid_0's ndcg@2: 0.963166\tvalid_0's ndcg@3: 0.964284\tvalid_0's ndcg@4: 0.964488\tvalid_0's ndcg@5: 0.964498\n",
      "[64]\tvalid_0's ndcg@1: 0.905289\tvalid_0's ndcg@2: 0.963302\tvalid_0's ndcg@3: 0.96442\tvalid_0's ndcg@4: 0.964624\tvalid_0's ndcg@5: 0.964634\n",
      "[65]\tvalid_0's ndcg@1: 0.905421\tvalid_0's ndcg@2: 0.96335\tvalid_0's ndcg@3: 0.964482\tvalid_0's ndcg@4: 0.964675\tvalid_0's ndcg@5: 0.964685\n",
      "[66]\tvalid_0's ndcg@1: 0.905526\tvalid_0's ndcg@2: 0.963389\tvalid_0's ndcg@3: 0.964521\tvalid_0's ndcg@4: 0.964713\tvalid_0's ndcg@5: 0.964724\n",
      "[67]\tvalid_0's ndcg@1: 0.905789\tvalid_0's ndcg@2: 0.96352\tvalid_0's ndcg@3: 0.964612\tvalid_0's ndcg@4: 0.964816\tvalid_0's ndcg@5: 0.964826\n",
      "[68]\tvalid_0's ndcg@1: 0.905921\tvalid_0's ndcg@2: 0.963585\tvalid_0's ndcg@3: 0.964664\tvalid_0's ndcg@4: 0.964868\tvalid_0's ndcg@5: 0.964878\n",
      "[69]\tvalid_0's ndcg@1: 0.906184\tvalid_0's ndcg@2: 0.963615\tvalid_0's ndcg@3: 0.964747\tvalid_0's ndcg@4: 0.96494\tvalid_0's ndcg@5: 0.96496\n",
      "[70]\tvalid_0's ndcg@1: 0.906053\tvalid_0's ndcg@2: 0.9636\tvalid_0's ndcg@3: 0.964705\tvalid_0's ndcg@4: 0.964909\tvalid_0's ndcg@5: 0.96492\n",
      "[71]\tvalid_0's ndcg@1: 0.906211\tvalid_0's ndcg@2: 0.963675\tvalid_0's ndcg@3: 0.964767\tvalid_0's ndcg@4: 0.964971\tvalid_0's ndcg@5: 0.964981\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[72]\tvalid_0's ndcg@1: 0.907737\tvalid_0's ndcg@2: 0.964271\tvalid_0's ndcg@3: 0.96535\tvalid_0's ndcg@4: 0.965543\tvalid_0's ndcg@5: 0.965553\n",
      "[73]\tvalid_0's ndcg@1: 0.908079\tvalid_0's ndcg@2: 0.964398\tvalid_0's ndcg@3: 0.965477\tvalid_0's ndcg@4: 0.965669\tvalid_0's ndcg@5: 0.96568\n",
      "[74]\tvalid_0's ndcg@1: 0.908158\tvalid_0's ndcg@2: 0.964443\tvalid_0's ndcg@3: 0.965509\tvalid_0's ndcg@4: 0.965691\tvalid_0's ndcg@5: 0.965711\n",
      "[75]\tvalid_0's ndcg@1: 0.908421\tvalid_0's ndcg@2: 0.964541\tvalid_0's ndcg@3: 0.965606\tvalid_0's ndcg@4: 0.965788\tvalid_0's ndcg@5: 0.965808\n",
      "[76]\tvalid_0's ndcg@1: 0.909053\tvalid_0's ndcg@2: 0.964757\tvalid_0's ndcg@3: 0.965836\tvalid_0's ndcg@4: 0.966017\tvalid_0's ndcg@5: 0.966038\n",
      "[77]\tvalid_0's ndcg@1: 0.909158\tvalid_0's ndcg@2: 0.964813\tvalid_0's ndcg@3: 0.965878\tvalid_0's ndcg@4: 0.966071\tvalid_0's ndcg@5: 0.966081\n",
      "[78]\tvalid_0's ndcg@1: 0.909447\tvalid_0's ndcg@2: 0.964886\tvalid_0's ndcg@3: 0.965978\tvalid_0's ndcg@4: 0.966148\tvalid_0's ndcg@5: 0.966179\n",
      "[79]\tvalid_0's ndcg@1: 0.910526\tvalid_0's ndcg@2: 0.965218\tvalid_0's ndcg@3: 0.966363\tvalid_0's ndcg@4: 0.966533\tvalid_0's ndcg@5: 0.966563\n",
      "[80]\tvalid_0's ndcg@1: 0.910658\tvalid_0's ndcg@2: 0.965267\tvalid_0's ndcg@3: 0.966411\tvalid_0's ndcg@4: 0.966581\tvalid_0's ndcg@5: 0.966612\n",
      "[81]\tvalid_0's ndcg@1: 0.911211\tvalid_0's ndcg@2: 0.965504\tvalid_0's ndcg@3: 0.966622\tvalid_0's ndcg@4: 0.966792\tvalid_0's ndcg@5: 0.966823\n",
      "[82]\tvalid_0's ndcg@1: 0.911211\tvalid_0's ndcg@2: 0.965487\tvalid_0's ndcg@3: 0.966619\tvalid_0's ndcg@4: 0.966777\tvalid_0's ndcg@5: 0.966818\n",
      "[83]\tvalid_0's ndcg@1: 0.911316\tvalid_0's ndcg@2: 0.965526\tvalid_0's ndcg@3: 0.966644\tvalid_0's ndcg@4: 0.966814\tvalid_0's ndcg@5: 0.966855\n",
      "[84]\tvalid_0's ndcg@1: 0.911711\tvalid_0's ndcg@2: 0.965705\tvalid_0's ndcg@3: 0.966797\tvalid_0's ndcg@4: 0.966978\tvalid_0's ndcg@5: 0.967009\n",
      "[85]\tvalid_0's ndcg@1: 0.911763\tvalid_0's ndcg@2: 0.965724\tvalid_0's ndcg@3: 0.966816\tvalid_0's ndcg@4: 0.966998\tvalid_0's ndcg@5: 0.967028\n",
      "[86]\tvalid_0's ndcg@1: 0.911763\tvalid_0's ndcg@2: 0.965724\tvalid_0's ndcg@3: 0.966816\tvalid_0's ndcg@4: 0.966986\tvalid_0's ndcg@5: 0.967027\n",
      "[87]\tvalid_0's ndcg@1: 0.911895\tvalid_0's ndcg@2: 0.965773\tvalid_0's ndcg@3: 0.966878\tvalid_0's ndcg@4: 0.967037\tvalid_0's ndcg@5: 0.967077\n",
      "[88]\tvalid_0's ndcg@1: 0.912053\tvalid_0's ndcg@2: 0.965848\tvalid_0's ndcg@3: 0.96694\tvalid_0's ndcg@4: 0.96711\tvalid_0's ndcg@5: 0.96714\n",
      "[89]\tvalid_0's ndcg@1: 0.912\tvalid_0's ndcg@2: 0.965828\tvalid_0's ndcg@3: 0.966907\tvalid_0's ndcg@4: 0.967089\tvalid_0's ndcg@5: 0.967119\n",
      "[90]\tvalid_0's ndcg@1: 0.912105\tvalid_0's ndcg@2: 0.965867\tvalid_0's ndcg@3: 0.966946\tvalid_0's ndcg@4: 0.967127\tvalid_0's ndcg@5: 0.967158\n",
      "[91]\tvalid_0's ndcg@1: 0.912658\tvalid_0's ndcg@2: 0.966071\tvalid_0's ndcg@3: 0.96715\tvalid_0's ndcg@4: 0.967331\tvalid_0's ndcg@5: 0.967362\n",
      "[92]\tvalid_0's ndcg@1: 0.912395\tvalid_0's ndcg@2: 0.965974\tvalid_0's ndcg@3: 0.967053\tvalid_0's ndcg@4: 0.967246\tvalid_0's ndcg@5: 0.967266\n",
      "[93]\tvalid_0's ndcg@1: 0.912342\tvalid_0's ndcg@2: 0.965971\tvalid_0's ndcg@3: 0.967037\tvalid_0's ndcg@4: 0.96723\tvalid_0's ndcg@5: 0.96725\n",
      "[94]\tvalid_0's ndcg@1: 0.912553\tvalid_0's ndcg@2: 0.966065\tvalid_0's ndcg@3: 0.967118\tvalid_0's ndcg@4: 0.967311\tvalid_0's ndcg@5: 0.967331\n",
      "[95]\tvalid_0's ndcg@1: 0.913211\tvalid_0's ndcg@2: 0.966325\tvalid_0's ndcg@3: 0.967377\tvalid_0's ndcg@4: 0.967547\tvalid_0's ndcg@5: 0.967578\n",
      "[96]\tvalid_0's ndcg@1: 0.914158\tvalid_0's ndcg@2: 0.966708\tvalid_0's ndcg@3: 0.967734\tvalid_0's ndcg@4: 0.967904\tvalid_0's ndcg@5: 0.967935\n",
      "[97]\tvalid_0's ndcg@1: 0.914605\tvalid_0's ndcg@2: 0.96684\tvalid_0's ndcg@3: 0.967892\tvalid_0's ndcg@4: 0.968062\tvalid_0's ndcg@5: 0.968093\n",
      "[98]\tvalid_0's ndcg@1: 0.914474\tvalid_0's ndcg@2: 0.966808\tvalid_0's ndcg@3: 0.967847\tvalid_0's ndcg@4: 0.968017\tvalid_0's ndcg@5: 0.968048\n",
      "[99]\tvalid_0's ndcg@1: 0.914763\tvalid_0's ndcg@2: 0.966898\tvalid_0's ndcg@3: 0.967964\tvalid_0's ndcg@4: 0.968122\tvalid_0's ndcg@5: 0.968153\n",
      "[100]\tvalid_0's ndcg@1: 0.914789\tvalid_0's ndcg@2: 0.966908\tvalid_0's ndcg@3: 0.967973\tvalid_0's ndcg@4: 0.968132\tvalid_0's ndcg@5: 0.968163\n",
      "Did not meet early stopping. Best iteration is:\n",
      "[100]\tvalid_0's ndcg@1: 0.914789\tvalid_0's ndcg@2: 0.966908\tvalid_0's ndcg@3: 0.967973\tvalid_0's ndcg@4: 0.968132\tvalid_0's ndcg@5: 0.968163\n",
      "[1]\tvalid_0's ndcg@1: 0.872974\tvalid_0's ndcg@2: 0.950113\tvalid_0's ndcg@3: 0.952074\tvalid_0's ndcg@4: 0.952357\tvalid_0's ndcg@5: 0.952388\n",
      "Training until validation scores don't improve for 50 rounds\n",
      "[2]\tvalid_0's ndcg@1: 0.878868\tvalid_0's ndcg@2: 0.952787\tvalid_0's ndcg@3: 0.954366\tvalid_0's ndcg@4: 0.954615\tvalid_0's ndcg@5: 0.954646\n",
      "[3]\tvalid_0's ndcg@1: 0.880842\tvalid_0's ndcg@2: 0.953698\tvalid_0's ndcg@3: 0.955198\tvalid_0's ndcg@4: 0.955402\tvalid_0's ndcg@5: 0.955443\n",
      "[4]\tvalid_0's ndcg@1: 0.881474\tvalid_0's ndcg@2: 0.953848\tvalid_0's ndcg@3: 0.955414\tvalid_0's ndcg@4: 0.955618\tvalid_0's ndcg@5: 0.955669\n",
      "[5]\tvalid_0's ndcg@1: 0.8825\tvalid_0's ndcg@2: 0.954227\tvalid_0's ndcg@3: 0.955806\tvalid_0's ndcg@4: 0.95601\tvalid_0's ndcg@5: 0.95605\n",
      "[6]\tvalid_0's ndcg@1: 0.882658\tvalid_0's ndcg@2: 0.954202\tvalid_0's ndcg@3: 0.955873\tvalid_0's ndcg@4: 0.956054\tvalid_0's ndcg@5: 0.956105\n",
      "[7]\tvalid_0's ndcg@1: 0.884737\tvalid_0's ndcg@2: 0.955086\tvalid_0's ndcg@3: 0.956638\tvalid_0's ndcg@4: 0.956831\tvalid_0's ndcg@5: 0.956892\n",
      "[8]\tvalid_0's ndcg@1: 0.886553\tvalid_0's ndcg@2: 0.955839\tvalid_0's ndcg@3: 0.957339\tvalid_0's ndcg@4: 0.95752\tvalid_0's ndcg@5: 0.957561\n",
      "[9]\tvalid_0's ndcg@1: 0.885658\tvalid_0's ndcg@2: 0.955492\tvalid_0's ndcg@3: 0.957005\tvalid_0's ndcg@4: 0.957175\tvalid_0's ndcg@5: 0.957226\n",
      "[10]\tvalid_0's ndcg@1: 0.886974\tvalid_0's ndcg@2: 0.956077\tvalid_0's ndcg@3: 0.957538\tvalid_0's ndcg@4: 0.957719\tvalid_0's ndcg@5: 0.95775\n",
      "[11]\tvalid_0's ndcg@1: 0.886816\tvalid_0's ndcg@2: 0.955936\tvalid_0's ndcg@3: 0.957475\tvalid_0's ndcg@4: 0.957645\tvalid_0's ndcg@5: 0.957676\n",
      "[12]\tvalid_0's ndcg@1: 0.887395\tvalid_0's ndcg@2: 0.956199\tvalid_0's ndcg@3: 0.957699\tvalid_0's ndcg@4: 0.957858\tvalid_0's ndcg@5: 0.957899\n",
      "[13]\tvalid_0's ndcg@1: 0.887158\tvalid_0's ndcg@2: 0.956095\tvalid_0's ndcg@3: 0.957608\tvalid_0's ndcg@4: 0.957767\tvalid_0's ndcg@5: 0.957808\n",
      "[14]\tvalid_0's ndcg@1: 0.887711\tvalid_0's ndcg@2: 0.956349\tvalid_0's ndcg@3: 0.957836\tvalid_0's ndcg@4: 0.957995\tvalid_0's ndcg@5: 0.958025\n",
      "[15]\tvalid_0's ndcg@1: 0.889684\tvalid_0's ndcg@2: 0.957144\tvalid_0's ndcg@3: 0.958552\tvalid_0's ndcg@4: 0.958722\tvalid_0's ndcg@5: 0.958763\n",
      "[16]\tvalid_0's ndcg@1: 0.889263\tvalid_0's ndcg@2: 0.957022\tvalid_0's ndcg@3: 0.95839\tvalid_0's ndcg@4: 0.958583\tvalid_0's ndcg@5: 0.958613\n",
      "[17]\tvalid_0's ndcg@1: 0.889395\tvalid_0's ndcg@2: 0.957037\tvalid_0's ndcg@3: 0.958445\tvalid_0's ndcg@4: 0.958626\tvalid_0's ndcg@5: 0.958657\n",
      "[18]\tvalid_0's ndcg@1: 0.889368\tvalid_0's ndcg@2: 0.957027\tvalid_0's ndcg@3: 0.958435\tvalid_0's ndcg@4: 0.958628\tvalid_0's ndcg@5: 0.958648\n",
      "[19]\tvalid_0's ndcg@1: 0.889184\tvalid_0's ndcg@2: 0.956943\tvalid_0's ndcg@3: 0.958377\tvalid_0's ndcg@4: 0.958547\tvalid_0's ndcg@5: 0.958578\n",
      "[20]\tvalid_0's ndcg@1: 0.890711\tvalid_0's ndcg@2: 0.957423\tvalid_0's ndcg@3: 0.958897\tvalid_0's ndcg@4: 0.959089\tvalid_0's ndcg@5: 0.95912\n",
      "[21]\tvalid_0's ndcg@1: 0.891474\tvalid_0's ndcg@2: 0.957771\tvalid_0's ndcg@3: 0.959192\tvalid_0's ndcg@4: 0.959396\tvalid_0's ndcg@5: 0.959417\n",
      "[22]\tvalid_0's ndcg@1: 0.891289\tvalid_0's ndcg@2: 0.95772\tvalid_0's ndcg@3: 0.959128\tvalid_0's ndcg@4: 0.95932\tvalid_0's ndcg@5: 0.959351\n",
      "[23]\tvalid_0's ndcg@1: 0.891184\tvalid_0's ndcg@2: 0.957714\tvalid_0's ndcg@3: 0.959109\tvalid_0's ndcg@4: 0.959302\tvalid_0's ndcg@5: 0.959322\n",
      "[24]\tvalid_0's ndcg@1: 0.891474\tvalid_0's ndcg@2: 0.957838\tvalid_0's ndcg@3: 0.959219\tvalid_0's ndcg@4: 0.9594\tvalid_0's ndcg@5: 0.959431\n",
      "[25]\tvalid_0's ndcg@1: 0.892316\tvalid_0's ndcg@2: 0.958148\tvalid_0's ndcg@3: 0.95953\tvalid_0's ndcg@4: 0.959711\tvalid_0's ndcg@5: 0.959752\n",
      "[26]\tvalid_0's ndcg@1: 0.892395\tvalid_0's ndcg@2: 0.958194\tvalid_0's ndcg@3: 0.959562\tvalid_0's ndcg@4: 0.959744\tvalid_0's ndcg@5: 0.959785\n",
      "[27]\tvalid_0's ndcg@1: 0.892842\tvalid_0's ndcg@2: 0.958359\tvalid_0's ndcg@3: 0.959728\tvalid_0's ndcg@4: 0.959909\tvalid_0's ndcg@5: 0.95995\n",
      "[28]\tvalid_0's ndcg@1: 0.893211\tvalid_0's ndcg@2: 0.958495\tvalid_0's ndcg@3: 0.959864\tvalid_0's ndcg@4: 0.960045\tvalid_0's ndcg@5: 0.960086\n",
      "[29]\tvalid_0's ndcg@1: 0.894316\tvalid_0's ndcg@2: 0.958969\tvalid_0's ndcg@3: 0.960285\tvalid_0's ndcg@4: 0.960455\tvalid_0's ndcg@5: 0.960506\n",
      "[30]\tvalid_0's ndcg@1: 0.894395\tvalid_0's ndcg@2: 0.958982\tvalid_0's ndcg@3: 0.960311\tvalid_0's ndcg@4: 0.960481\tvalid_0's ndcg@5: 0.960532\n",
      "[31]\tvalid_0's ndcg@1: 0.899079\tvalid_0's ndcg@2: 0.960744\tvalid_0's ndcg@3: 0.96206\tvalid_0's ndcg@4: 0.96223\tvalid_0's ndcg@5: 0.962271\n",
      "[32]\tvalid_0's ndcg@1: 0.898553\tvalid_0's ndcg@2: 0.96055\tvalid_0's ndcg@3: 0.961866\tvalid_0's ndcg@4: 0.962024\tvalid_0's ndcg@5: 0.962075\n",
      "[33]\tvalid_0's ndcg@1: 0.898368\tvalid_0's ndcg@2: 0.960498\tvalid_0's ndcg@3: 0.961801\tvalid_0's ndcg@4: 0.96196\tvalid_0's ndcg@5: 0.962011\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[34]\tvalid_0's ndcg@1: 0.898105\tvalid_0's ndcg@2: 0.960418\tvalid_0's ndcg@3: 0.961707\tvalid_0's ndcg@4: 0.961866\tvalid_0's ndcg@5: 0.961917\n",
      "[35]\tvalid_0's ndcg@1: 0.898053\tvalid_0's ndcg@2: 0.960398\tvalid_0's ndcg@3: 0.961688\tvalid_0's ndcg@4: 0.961847\tvalid_0's ndcg@5: 0.961897\n",
      "[36]\tvalid_0's ndcg@1: 0.898947\tvalid_0's ndcg@2: 0.960745\tvalid_0's ndcg@3: 0.962022\tvalid_0's ndcg@4: 0.962192\tvalid_0's ndcg@5: 0.962232\n",
      "[37]\tvalid_0's ndcg@1: 0.899079\tvalid_0's ndcg@2: 0.960827\tvalid_0's ndcg@3: 0.962077\tvalid_0's ndcg@4: 0.962247\tvalid_0's ndcg@5: 0.962288\n",
      "[38]\tvalid_0's ndcg@1: 0.899263\tvalid_0's ndcg@2: 0.960895\tvalid_0's ndcg@3: 0.962158\tvalid_0's ndcg@4: 0.962317\tvalid_0's ndcg@5: 0.962358\n",
      "[39]\tvalid_0's ndcg@1: 0.899711\tvalid_0's ndcg@2: 0.961077\tvalid_0's ndcg@3: 0.962314\tvalid_0's ndcg@4: 0.962484\tvalid_0's ndcg@5: 0.962524\n",
      "[40]\tvalid_0's ndcg@1: 0.900474\tvalid_0's ndcg@2: 0.961342\tvalid_0's ndcg@3: 0.962579\tvalid_0's ndcg@4: 0.96276\tvalid_0's ndcg@5: 0.962801\n",
      "[41]\tvalid_0's ndcg@1: 0.901342\tvalid_0's ndcg@2: 0.961613\tvalid_0's ndcg@3: 0.962902\tvalid_0's ndcg@4: 0.963072\tvalid_0's ndcg@5: 0.963113\n",
      "[42]\tvalid_0's ndcg@1: 0.901421\tvalid_0's ndcg@2: 0.961691\tvalid_0's ndcg@3: 0.962955\tvalid_0's ndcg@4: 0.963102\tvalid_0's ndcg@5: 0.963153\n",
      "[43]\tvalid_0's ndcg@1: 0.900974\tvalid_0's ndcg@2: 0.961477\tvalid_0's ndcg@3: 0.962779\tvalid_0's ndcg@4: 0.962938\tvalid_0's ndcg@5: 0.962979\n",
      "[44]\tvalid_0's ndcg@1: 0.901789\tvalid_0's ndcg@2: 0.961844\tvalid_0's ndcg@3: 0.963094\tvalid_0's ndcg@4: 0.963241\tvalid_0's ndcg@5: 0.963292\n",
      "[45]\tvalid_0's ndcg@1: 0.901553\tvalid_0's ndcg@2: 0.96174\tvalid_0's ndcg@3: 0.963003\tvalid_0's ndcg@4: 0.963162\tvalid_0's ndcg@5: 0.963203\n",
      "[46]\tvalid_0's ndcg@1: 0.901263\tvalid_0's ndcg@2: 0.9616\tvalid_0's ndcg@3: 0.962889\tvalid_0's ndcg@4: 0.963037\tvalid_0's ndcg@5: 0.963088\n",
      "[47]\tvalid_0's ndcg@1: 0.901263\tvalid_0's ndcg@2: 0.961583\tvalid_0's ndcg@3: 0.962886\tvalid_0's ndcg@4: 0.963045\tvalid_0's ndcg@5: 0.963085\n",
      "[48]\tvalid_0's ndcg@1: 0.901395\tvalid_0's ndcg@2: 0.961649\tvalid_0's ndcg@3: 0.962938\tvalid_0's ndcg@4: 0.963097\tvalid_0's ndcg@5: 0.963137\n",
      "[49]\tvalid_0's ndcg@1: 0.901842\tvalid_0's ndcg@2: 0.961814\tvalid_0's ndcg@3: 0.96309\tvalid_0's ndcg@4: 0.963249\tvalid_0's ndcg@5: 0.9633\n",
      "[50]\tvalid_0's ndcg@1: 0.902132\tvalid_0's ndcg@2: 0.961854\tvalid_0's ndcg@3: 0.963183\tvalid_0's ndcg@4: 0.963342\tvalid_0's ndcg@5: 0.963393\n",
      "[51]\tvalid_0's ndcg@1: 0.902368\tvalid_0's ndcg@2: 0.961958\tvalid_0's ndcg@3: 0.963274\tvalid_0's ndcg@4: 0.963444\tvalid_0's ndcg@5: 0.963485\n",
      "[52]\tvalid_0's ndcg@1: 0.902421\tvalid_0's ndcg@2: 0.962027\tvalid_0's ndcg@3: 0.963317\tvalid_0's ndcg@4: 0.963475\tvalid_0's ndcg@5: 0.963516\n",
      "[53]\tvalid_0's ndcg@1: 0.902658\tvalid_0's ndcg@2: 0.962115\tvalid_0's ndcg@3: 0.963404\tvalid_0's ndcg@4: 0.963563\tvalid_0's ndcg@5: 0.963604\n",
      "[54]\tvalid_0's ndcg@1: 0.902605\tvalid_0's ndcg@2: 0.962079\tvalid_0's ndcg@3: 0.963368\tvalid_0's ndcg@4: 0.963538\tvalid_0's ndcg@5: 0.963579\n",
      "[55]\tvalid_0's ndcg@1: 0.902737\tvalid_0's ndcg@2: 0.96216\tvalid_0's ndcg@3: 0.96345\tvalid_0's ndcg@4: 0.963597\tvalid_0's ndcg@5: 0.963638\n",
      "[56]\tvalid_0's ndcg@1: 0.902974\tvalid_0's ndcg@2: 0.962248\tvalid_0's ndcg@3: 0.963524\tvalid_0's ndcg@4: 0.963683\tvalid_0's ndcg@5: 0.963724\n",
      "[57]\tvalid_0's ndcg@1: 0.903289\tvalid_0's ndcg@2: 0.962364\tvalid_0's ndcg@3: 0.963654\tvalid_0's ndcg@4: 0.963801\tvalid_0's ndcg@5: 0.963842\n",
      "[58]\tvalid_0's ndcg@1: 0.903553\tvalid_0's ndcg@2: 0.962511\tvalid_0's ndcg@3: 0.963748\tvalid_0's ndcg@4: 0.963907\tvalid_0's ndcg@5: 0.963948\n",
      "[59]\tvalid_0's ndcg@1: 0.903605\tvalid_0's ndcg@2: 0.962547\tvalid_0's ndcg@3: 0.963771\tvalid_0's ndcg@4: 0.963918\tvalid_0's ndcg@5: 0.963969\n",
      "[60]\tvalid_0's ndcg@1: 0.903368\tvalid_0's ndcg@2: 0.962427\tvalid_0's ndcg@3: 0.963677\tvalid_0's ndcg@4: 0.963835\tvalid_0's ndcg@5: 0.963876\n",
      "[61]\tvalid_0's ndcg@1: 0.903526\tvalid_0's ndcg@2: 0.962502\tvalid_0's ndcg@3: 0.963738\tvalid_0's ndcg@4: 0.963886\tvalid_0's ndcg@5: 0.963937\n",
      "[62]\tvalid_0's ndcg@1: 0.904395\tvalid_0's ndcg@2: 0.962822\tvalid_0's ndcg@3: 0.964059\tvalid_0's ndcg@4: 0.964206\tvalid_0's ndcg@5: 0.964257\n",
      "[63]\tvalid_0's ndcg@1: 0.904605\tvalid_0's ndcg@2: 0.962883\tvalid_0's ndcg@3: 0.964146\tvalid_0's ndcg@4: 0.964282\tvalid_0's ndcg@5: 0.964333\n",
      "[64]\tvalid_0's ndcg@1: 0.905105\tvalid_0's ndcg@2: 0.963084\tvalid_0's ndcg@3: 0.964334\tvalid_0's ndcg@4: 0.96447\tvalid_0's ndcg@5: 0.964521\n",
      "[65]\tvalid_0's ndcg@1: 0.905\tvalid_0's ndcg@2: 0.963046\tvalid_0's ndcg@3: 0.964296\tvalid_0's ndcg@4: 0.964432\tvalid_0's ndcg@5: 0.964482\n",
      "[66]\tvalid_0's ndcg@1: 0.905079\tvalid_0's ndcg@2: 0.963025\tvalid_0's ndcg@3: 0.964314\tvalid_0's ndcg@4: 0.96445\tvalid_0's ndcg@5: 0.964501\n",
      "[67]\tvalid_0's ndcg@1: 0.904921\tvalid_0's ndcg@2: 0.962967\tvalid_0's ndcg@3: 0.964256\tvalid_0's ndcg@4: 0.964403\tvalid_0's ndcg@5: 0.964444\n",
      "[68]\tvalid_0's ndcg@1: 0.904816\tvalid_0's ndcg@2: 0.962928\tvalid_0's ndcg@3: 0.964217\tvalid_0's ndcg@4: 0.964353\tvalid_0's ndcg@5: 0.964404\n",
      "[69]\tvalid_0's ndcg@1: 0.904947\tvalid_0's ndcg@2: 0.962976\tvalid_0's ndcg@3: 0.964266\tvalid_0's ndcg@4: 0.964402\tvalid_0's ndcg@5: 0.964453\n",
      "[70]\tvalid_0's ndcg@1: 0.904895\tvalid_0's ndcg@2: 0.962973\tvalid_0's ndcg@3: 0.96425\tvalid_0's ndcg@4: 0.964386\tvalid_0's ndcg@5: 0.964437\n",
      "[71]\tvalid_0's ndcg@1: 0.905316\tvalid_0's ndcg@2: 0.963212\tvalid_0's ndcg@3: 0.964422\tvalid_0's ndcg@4: 0.964558\tvalid_0's ndcg@5: 0.964609\n",
      "[72]\tvalid_0's ndcg@1: 0.906105\tvalid_0's ndcg@2: 0.963487\tvalid_0's ndcg@3: 0.96471\tvalid_0's ndcg@4: 0.964846\tvalid_0's ndcg@5: 0.964897\n",
      "[73]\tvalid_0's ndcg@1: 0.906105\tvalid_0's ndcg@2: 0.963487\tvalid_0's ndcg@3: 0.96471\tvalid_0's ndcg@4: 0.964858\tvalid_0's ndcg@5: 0.964898\n",
      "[74]\tvalid_0's ndcg@1: 0.906553\tvalid_0's ndcg@2: 0.963668\tvalid_0's ndcg@3: 0.964879\tvalid_0's ndcg@4: 0.965026\tvalid_0's ndcg@5: 0.965067\n",
      "[75]\tvalid_0's ndcg@1: 0.906605\tvalid_0's ndcg@2: 0.963704\tvalid_0's ndcg@3: 0.964915\tvalid_0's ndcg@4: 0.965051\tvalid_0's ndcg@5: 0.965092\n",
      "[76]\tvalid_0's ndcg@1: 0.906763\tvalid_0's ndcg@2: 0.963779\tvalid_0's ndcg@3: 0.964977\tvalid_0's ndcg@4: 0.965113\tvalid_0's ndcg@5: 0.965153\n",
      "[77]\tvalid_0's ndcg@1: 0.907158\tvalid_0's ndcg@2: 0.963908\tvalid_0's ndcg@3: 0.965119\tvalid_0's ndcg@4: 0.965266\tvalid_0's ndcg@5: 0.965297\n",
      "[78]\tvalid_0's ndcg@1: 0.907132\tvalid_0's ndcg@2: 0.963865\tvalid_0's ndcg@3: 0.965102\tvalid_0's ndcg@4: 0.965238\tvalid_0's ndcg@5: 0.965279\n",
      "[79]\tvalid_0's ndcg@1: 0.907632\tvalid_0's ndcg@2: 0.964083\tvalid_0's ndcg@3: 0.965294\tvalid_0's ndcg@4: 0.96543\tvalid_0's ndcg@5: 0.96547\n",
      "[80]\tvalid_0's ndcg@1: 0.907947\tvalid_0's ndcg@2: 0.964183\tvalid_0's ndcg@3: 0.965407\tvalid_0's ndcg@4: 0.965543\tvalid_0's ndcg@5: 0.965584\n",
      "[81]\tvalid_0's ndcg@1: 0.909605\tvalid_0's ndcg@2: 0.964795\tvalid_0's ndcg@3: 0.966032\tvalid_0's ndcg@4: 0.966157\tvalid_0's ndcg@5: 0.966197\n",
      "[82]\tvalid_0's ndcg@1: 0.909526\tvalid_0's ndcg@2: 0.964766\tvalid_0's ndcg@3: 0.966003\tvalid_0's ndcg@4: 0.966127\tvalid_0's ndcg@5: 0.966168\n",
      "[83]\tvalid_0's ndcg@1: 0.909526\tvalid_0's ndcg@2: 0.964782\tvalid_0's ndcg@3: 0.966006\tvalid_0's ndcg@4: 0.966131\tvalid_0's ndcg@5: 0.966172\n",
      "[84]\tvalid_0's ndcg@1: 0.909605\tvalid_0's ndcg@2: 0.964812\tvalid_0's ndcg@3: 0.966035\tvalid_0's ndcg@4: 0.96616\tvalid_0's ndcg@5: 0.966201\n",
      "[85]\tvalid_0's ndcg@1: 0.909868\tvalid_0's ndcg@2: 0.964892\tvalid_0's ndcg@3: 0.966129\tvalid_0's ndcg@4: 0.966254\tvalid_0's ndcg@5: 0.966294\n",
      "[86]\tvalid_0's ndcg@1: 0.909947\tvalid_0's ndcg@2: 0.964938\tvalid_0's ndcg@3: 0.966162\tvalid_0's ndcg@4: 0.966286\tvalid_0's ndcg@5: 0.966327\n",
      "[87]\tvalid_0's ndcg@1: 0.909684\tvalid_0's ndcg@2: 0.964857\tvalid_0's ndcg@3: 0.966068\tvalid_0's ndcg@4: 0.966193\tvalid_0's ndcg@5: 0.966233\n",
      "[88]\tvalid_0's ndcg@1: 0.910395\tvalid_0's ndcg@2: 0.965086\tvalid_0's ndcg@3: 0.966323\tvalid_0's ndcg@4: 0.966448\tvalid_0's ndcg@5: 0.966489\n",
      "[89]\tvalid_0's ndcg@1: 0.910579\tvalid_0's ndcg@2: 0.965154\tvalid_0's ndcg@3: 0.966391\tvalid_0's ndcg@4: 0.966516\tvalid_0's ndcg@5: 0.966557\n",
      "[90]\tvalid_0's ndcg@1: 0.910553\tvalid_0's ndcg@2: 0.965145\tvalid_0's ndcg@3: 0.966382\tvalid_0's ndcg@4: 0.966506\tvalid_0's ndcg@5: 0.966547\n",
      "[91]\tvalid_0's ndcg@1: 0.910368\tvalid_0's ndcg@2: 0.965077\tvalid_0's ndcg@3: 0.966314\tvalid_0's ndcg@4: 0.966438\tvalid_0's ndcg@5: 0.966479\n",
      "[92]\tvalid_0's ndcg@1: 0.910421\tvalid_0's ndcg@2: 0.965079\tvalid_0's ndcg@3: 0.966329\tvalid_0's ndcg@4: 0.966454\tvalid_0's ndcg@5: 0.966495\n",
      "[93]\tvalid_0's ndcg@1: 0.910368\tvalid_0's ndcg@2: 0.96506\tvalid_0's ndcg@3: 0.96631\tvalid_0's ndcg@4: 0.966435\tvalid_0's ndcg@5: 0.966475\n",
      "[94]\tvalid_0's ndcg@1: 0.910421\tvalid_0's ndcg@2: 0.965063\tvalid_0's ndcg@3: 0.966326\tvalid_0's ndcg@4: 0.966462\tvalid_0's ndcg@5: 0.966493\n",
      "[95]\tvalid_0's ndcg@1: 0.910474\tvalid_0's ndcg@2: 0.965099\tvalid_0's ndcg@3: 0.966349\tvalid_0's ndcg@4: 0.966485\tvalid_0's ndcg@5: 0.966515\n",
      "[96]\tvalid_0's ndcg@1: 0.911237\tvalid_0's ndcg@2: 0.965414\tvalid_0's ndcg@3: 0.966637\tvalid_0's ndcg@4: 0.966773\tvalid_0's ndcg@5: 0.966804\n",
      "[97]\tvalid_0's ndcg@1: 0.911605\tvalid_0's ndcg@2: 0.965566\tvalid_0's ndcg@3: 0.966777\tvalid_0's ndcg@4: 0.966913\tvalid_0's ndcg@5: 0.966943\n",
      "[98]\tvalid_0's ndcg@1: 0.911658\tvalid_0's ndcg@2: 0.965586\tvalid_0's ndcg@3: 0.966796\tvalid_0's ndcg@4: 0.966932\tvalid_0's ndcg@5: 0.966963\n",
      "[99]\tvalid_0's ndcg@1: 0.911737\tvalid_0's ndcg@2: 0.965632\tvalid_0's ndcg@3: 0.966829\tvalid_0's ndcg@4: 0.966965\tvalid_0's ndcg@5: 0.966995\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[100]\tvalid_0's ndcg@1: 0.912079\tvalid_0's ndcg@2: 0.965791\tvalid_0's ndcg@3: 0.966962\tvalid_0's ndcg@4: 0.967098\tvalid_0's ndcg@5: 0.967129\n",
      "Did not meet early stopping. Best iteration is:\n",
      "[100]\tvalid_0's ndcg@1: 0.912079\tvalid_0's ndcg@2: 0.965791\tvalid_0's ndcg@3: 0.966962\tvalid_0's ndcg@4: 0.967098\tvalid_0's ndcg@5: 0.967129\n",
      "[1]\tvalid_0's ndcg@1: 0.867711\tvalid_0's ndcg@2: 0.948304\tvalid_0's ndcg@3: 0.950277\tvalid_0's ndcg@4: 0.95047\tvalid_0's ndcg@5: 0.9505\n",
      "Training until validation scores don't improve for 50 rounds\n",
      "[2]\tvalid_0's ndcg@1: 0.872184\tvalid_0's ndcg@2: 0.950204\tvalid_0's ndcg@3: 0.951927\tvalid_0's ndcg@4: 0.952165\tvalid_0's ndcg@5: 0.952196\n",
      "[3]\tvalid_0's ndcg@1: 0.873605\tvalid_0's ndcg@2: 0.950878\tvalid_0's ndcg@3: 0.952509\tvalid_0's ndcg@4: 0.952758\tvalid_0's ndcg@5: 0.952779\n",
      "[4]\tvalid_0's ndcg@1: 0.878421\tvalid_0's ndcg@2: 0.952821\tvalid_0's ndcg@3: 0.95436\tvalid_0's ndcg@4: 0.954564\tvalid_0's ndcg@5: 0.954585\n",
      "[5]\tvalid_0's ndcg@1: 0.878816\tvalid_0's ndcg@2: 0.953\tvalid_0's ndcg@3: 0.954526\tvalid_0's ndcg@4: 0.954707\tvalid_0's ndcg@5: 0.954748\n",
      "[6]\tvalid_0's ndcg@1: 0.880158\tvalid_0's ndcg@2: 0.953279\tvalid_0's ndcg@3: 0.95495\tvalid_0's ndcg@4: 0.955143\tvalid_0's ndcg@5: 0.955194\n",
      "[7]\tvalid_0's ndcg@1: 0.881474\tvalid_0's ndcg@2: 0.953815\tvalid_0's ndcg@3: 0.955446\tvalid_0's ndcg@4: 0.955639\tvalid_0's ndcg@5: 0.95569\n",
      "[8]\tvalid_0's ndcg@1: 0.882658\tvalid_0's ndcg@2: 0.954401\tvalid_0's ndcg@3: 0.955901\tvalid_0's ndcg@4: 0.956105\tvalid_0's ndcg@5: 0.956156\n",
      "[9]\tvalid_0's ndcg@1: 0.883632\tvalid_0's ndcg@2: 0.95481\tvalid_0's ndcg@3: 0.956258\tvalid_0's ndcg@4: 0.956473\tvalid_0's ndcg@5: 0.956524\n",
      "[10]\tvalid_0's ndcg@1: 0.884579\tvalid_0's ndcg@2: 0.95506\tvalid_0's ndcg@3: 0.956587\tvalid_0's ndcg@4: 0.956802\tvalid_0's ndcg@5: 0.956853\n",
      "[11]\tvalid_0's ndcg@1: 0.885368\tvalid_0's ndcg@2: 0.955468\tvalid_0's ndcg@3: 0.956915\tvalid_0's ndcg@4: 0.957119\tvalid_0's ndcg@5: 0.95717\n",
      "[12]\tvalid_0's ndcg@1: 0.885026\tvalid_0's ndcg@2: 0.955375\tvalid_0's ndcg@3: 0.956796\tvalid_0's ndcg@4: 0.957011\tvalid_0's ndcg@5: 0.957052\n",
      "[13]\tvalid_0's ndcg@1: 0.885579\tvalid_0's ndcg@2: 0.955612\tvalid_0's ndcg@3: 0.956994\tvalid_0's ndcg@4: 0.957209\tvalid_0's ndcg@5: 0.95726\n",
      "[14]\tvalid_0's ndcg@1: 0.885789\tvalid_0's ndcg@2: 0.955723\tvalid_0's ndcg@3: 0.957078\tvalid_0's ndcg@4: 0.957305\tvalid_0's ndcg@5: 0.957346\n",
      "[15]\tvalid_0's ndcg@1: 0.886395\tvalid_0's ndcg@2: 0.955863\tvalid_0's ndcg@3: 0.957298\tvalid_0's ndcg@4: 0.957502\tvalid_0's ndcg@5: 0.957553\n",
      "[16]\tvalid_0's ndcg@1: 0.886316\tvalid_0's ndcg@2: 0.955884\tvalid_0's ndcg@3: 0.957305\tvalid_0's ndcg@4: 0.957498\tvalid_0's ndcg@5: 0.957539\n",
      "[17]\tvalid_0's ndcg@1: 0.886316\tvalid_0's ndcg@2: 0.955784\tvalid_0's ndcg@3: 0.957245\tvalid_0's ndcg@4: 0.957472\tvalid_0's ndcg@5: 0.957512\n",
      "[18]\tvalid_0's ndcg@1: 0.887184\tvalid_0's ndcg@2: 0.956105\tvalid_0's ndcg@3: 0.957579\tvalid_0's ndcg@4: 0.957794\tvalid_0's ndcg@5: 0.957835\n",
      "[19]\tvalid_0's ndcg@1: 0.887237\tvalid_0's ndcg@2: 0.956124\tvalid_0's ndcg@3: 0.957598\tvalid_0's ndcg@4: 0.957813\tvalid_0's ndcg@5: 0.957854\n",
      "[20]\tvalid_0's ndcg@1: 0.887579\tvalid_0's ndcg@2: 0.9563\tvalid_0's ndcg@3: 0.957774\tvalid_0's ndcg@4: 0.957956\tvalid_0's ndcg@5: 0.957996\n",
      "[21]\tvalid_0's ndcg@1: 0.889053\tvalid_0's ndcg@2: 0.956878\tvalid_0's ndcg@3: 0.958312\tvalid_0's ndcg@4: 0.958493\tvalid_0's ndcg@5: 0.958544\n",
      "[22]\tvalid_0's ndcg@1: 0.889053\tvalid_0's ndcg@2: 0.956795\tvalid_0's ndcg@3: 0.958281\tvalid_0's ndcg@4: 0.958463\tvalid_0's ndcg@5: 0.958524\n",
      "[23]\tvalid_0's ndcg@1: 0.888763\tvalid_0's ndcg@2: 0.956738\tvalid_0's ndcg@3: 0.958185\tvalid_0's ndcg@4: 0.958366\tvalid_0's ndcg@5: 0.958427\n",
      "[24]\tvalid_0's ndcg@1: 0.888342\tvalid_0's ndcg@2: 0.956582\tvalid_0's ndcg@3: 0.95803\tvalid_0's ndcg@4: 0.958222\tvalid_0's ndcg@5: 0.958273\n",
      "[25]\tvalid_0's ndcg@1: 0.888868\tvalid_0's ndcg@2: 0.956793\tvalid_0's ndcg@3: 0.958227\tvalid_0's ndcg@4: 0.958431\tvalid_0's ndcg@5: 0.958472\n",
      "[26]\tvalid_0's ndcg@1: 0.889105\tvalid_0's ndcg@2: 0.95688\tvalid_0's ndcg@3: 0.958315\tvalid_0's ndcg@4: 0.958519\tvalid_0's ndcg@5: 0.958559\n",
      "[27]\tvalid_0's ndcg@1: 0.889316\tvalid_0's ndcg@2: 0.957008\tvalid_0's ndcg@3: 0.958416\tvalid_0's ndcg@4: 0.958597\tvalid_0's ndcg@5: 0.958648\n",
      "[28]\tvalid_0's ndcg@1: 0.889921\tvalid_0's ndcg@2: 0.957215\tvalid_0's ndcg@3: 0.958636\tvalid_0's ndcg@4: 0.958817\tvalid_0's ndcg@5: 0.958868\n",
      "[29]\tvalid_0's ndcg@1: 0.890132\tvalid_0's ndcg@2: 0.957292\tvalid_0's ndcg@3: 0.958687\tvalid_0's ndcg@4: 0.958891\tvalid_0's ndcg@5: 0.958942\n",
      "[30]\tvalid_0's ndcg@1: 0.889789\tvalid_0's ndcg@2: 0.957166\tvalid_0's ndcg@3: 0.958574\tvalid_0's ndcg@4: 0.958767\tvalid_0's ndcg@5: 0.958818\n",
      "[31]\tvalid_0's ndcg@1: 0.895132\tvalid_0's ndcg@2: 0.959171\tvalid_0's ndcg@3: 0.960566\tvalid_0's ndcg@4: 0.960747\tvalid_0's ndcg@5: 0.960798\n",
      "[32]\tvalid_0's ndcg@1: 0.895079\tvalid_0's ndcg@2: 0.959168\tvalid_0's ndcg@3: 0.960563\tvalid_0's ndcg@4: 0.960744\tvalid_0's ndcg@5: 0.960785\n",
      "[33]\tvalid_0's ndcg@1: 0.894816\tvalid_0's ndcg@2: 0.959088\tvalid_0's ndcg@3: 0.960469\tvalid_0's ndcg@4: 0.960651\tvalid_0's ndcg@5: 0.960691\n",
      "[34]\tvalid_0's ndcg@1: 0.894868\tvalid_0's ndcg@2: 0.959157\tvalid_0's ndcg@3: 0.960486\tvalid_0's ndcg@4: 0.960678\tvalid_0's ndcg@5: 0.960719\n",
      "[35]\tvalid_0's ndcg@1: 0.894895\tvalid_0's ndcg@2: 0.9592\tvalid_0's ndcg@3: 0.960502\tvalid_0's ndcg@4: 0.960695\tvalid_0's ndcg@5: 0.960736\n",
      "[36]\tvalid_0's ndcg@1: 0.895105\tvalid_0's ndcg@2: 0.959311\tvalid_0's ndcg@3: 0.960574\tvalid_0's ndcg@4: 0.960778\tvalid_0's ndcg@5: 0.960819\n",
      "[37]\tvalid_0's ndcg@1: 0.895184\tvalid_0's ndcg@2: 0.959273\tvalid_0's ndcg@3: 0.960589\tvalid_0's ndcg@4: 0.960793\tvalid_0's ndcg@5: 0.960834\n",
      "[38]\tvalid_0's ndcg@1: 0.894974\tvalid_0's ndcg@2: 0.959212\tvalid_0's ndcg@3: 0.960515\tvalid_0's ndcg@4: 0.960719\tvalid_0's ndcg@5: 0.96076\n",
      "[39]\tvalid_0's ndcg@1: 0.895474\tvalid_0's ndcg@2: 0.959413\tvalid_0's ndcg@3: 0.960703\tvalid_0's ndcg@4: 0.960907\tvalid_0's ndcg@5: 0.960948\n",
      "[40]\tvalid_0's ndcg@1: 0.896263\tvalid_0's ndcg@2: 0.959721\tvalid_0's ndcg@3: 0.961011\tvalid_0's ndcg@4: 0.961204\tvalid_0's ndcg@5: 0.961244\n",
      "[41]\tvalid_0's ndcg@1: 0.896605\tvalid_0's ndcg@2: 0.959897\tvalid_0's ndcg@3: 0.961147\tvalid_0's ndcg@4: 0.96134\tvalid_0's ndcg@5: 0.961381\n",
      "[42]\tvalid_0's ndcg@1: 0.896895\tvalid_0's ndcg@2: 0.960021\tvalid_0's ndcg@3: 0.961258\tvalid_0's ndcg@4: 0.96145\tvalid_0's ndcg@5: 0.961491\n",
      "[43]\tvalid_0's ndcg@1: 0.896816\tvalid_0's ndcg@2: 0.959975\tvalid_0's ndcg@3: 0.961225\tvalid_0's ndcg@4: 0.961418\tvalid_0's ndcg@5: 0.961459\n",
      "[44]\tvalid_0's ndcg@1: 0.8975\tvalid_0's ndcg@2: 0.960194\tvalid_0's ndcg@3: 0.961458\tvalid_0's ndcg@4: 0.961662\tvalid_0's ndcg@5: 0.961702\n",
      "[45]\tvalid_0's ndcg@1: 0.897211\tvalid_0's ndcg@2: 0.960104\tvalid_0's ndcg@3: 0.961367\tvalid_0's ndcg@4: 0.96156\tvalid_0's ndcg@5: 0.961601\n",
      "[46]\tvalid_0's ndcg@1: 0.897132\tvalid_0's ndcg@2: 0.960059\tvalid_0's ndcg@3: 0.961335\tvalid_0's ndcg@4: 0.961528\tvalid_0's ndcg@5: 0.961568\n",
      "[47]\tvalid_0's ndcg@1: 0.896763\tvalid_0's ndcg@2: 0.959889\tvalid_0's ndcg@3: 0.961192\tvalid_0's ndcg@4: 0.961385\tvalid_0's ndcg@5: 0.961425\n",
      "[48]\tvalid_0's ndcg@1: 0.896868\tvalid_0's ndcg@2: 0.959978\tvalid_0's ndcg@3: 0.961241\tvalid_0's ndcg@4: 0.961445\tvalid_0's ndcg@5: 0.961476\n",
      "[49]\tvalid_0's ndcg@1: 0.897789\tvalid_0's ndcg@2: 0.960351\tvalid_0's ndcg@3: 0.961588\tvalid_0's ndcg@4: 0.961792\tvalid_0's ndcg@5: 0.961823\n",
      "[50]\tvalid_0's ndcg@1: 0.897605\tvalid_0's ndcg@2: 0.960316\tvalid_0's ndcg@3: 0.961514\tvalid_0's ndcg@4: 0.961729\tvalid_0's ndcg@5: 0.96176\n",
      "[51]\tvalid_0's ndcg@1: 0.897737\tvalid_0's ndcg@2: 0.960365\tvalid_0's ndcg@3: 0.961549\tvalid_0's ndcg@4: 0.961776\tvalid_0's ndcg@5: 0.961806\n",
      "[52]\tvalid_0's ndcg@1: 0.897868\tvalid_0's ndcg@2: 0.960364\tvalid_0's ndcg@3: 0.961587\tvalid_0's ndcg@4: 0.961814\tvalid_0's ndcg@5: 0.961845\n",
      "[53]\tvalid_0's ndcg@1: 0.897553\tvalid_0's ndcg@2: 0.960231\tvalid_0's ndcg@3: 0.961467\tvalid_0's ndcg@4: 0.961694\tvalid_0's ndcg@5: 0.961725\n",
      "[54]\tvalid_0's ndcg@1: 0.897789\tvalid_0's ndcg@2: 0.960351\tvalid_0's ndcg@3: 0.961562\tvalid_0's ndcg@4: 0.961788\tvalid_0's ndcg@5: 0.961819\n",
      "[55]\tvalid_0's ndcg@1: 0.897974\tvalid_0's ndcg@2: 0.960403\tvalid_0's ndcg@3: 0.961626\tvalid_0's ndcg@4: 0.961853\tvalid_0's ndcg@5: 0.961883\n",
      "[56]\tvalid_0's ndcg@1: 0.897921\tvalid_0's ndcg@2: 0.960383\tvalid_0's ndcg@3: 0.961607\tvalid_0's ndcg@4: 0.961833\tvalid_0's ndcg@5: 0.961864\n",
      "[57]\tvalid_0's ndcg@1: 0.897789\tvalid_0's ndcg@2: 0.960368\tvalid_0's ndcg@3: 0.961565\tvalid_0's ndcg@4: 0.961792\tvalid_0's ndcg@5: 0.961822\n",
      "[58]\tvalid_0's ndcg@1: 0.897816\tvalid_0's ndcg@2: 0.960394\tvalid_0's ndcg@3: 0.961578\tvalid_0's ndcg@4: 0.961805\tvalid_0's ndcg@5: 0.961835\n",
      "[59]\tvalid_0's ndcg@1: 0.8975\tvalid_0's ndcg@2: 0.960261\tvalid_0's ndcg@3: 0.961471\tvalid_0's ndcg@4: 0.961687\tvalid_0's ndcg@5: 0.961717\n",
      "[60]\tvalid_0's ndcg@1: 0.897447\tvalid_0's ndcg@2: 0.960208\tvalid_0's ndcg@3: 0.961445\tvalid_0's ndcg@4: 0.961649\tvalid_0's ndcg@5: 0.96169\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[61]\tvalid_0's ndcg@1: 0.897474\tvalid_0's ndcg@2: 0.960218\tvalid_0's ndcg@3: 0.961455\tvalid_0's ndcg@4: 0.961659\tvalid_0's ndcg@5: 0.9617\n",
      "[62]\tvalid_0's ndcg@1: 0.898974\tvalid_0's ndcg@2: 0.960805\tvalid_0's ndcg@3: 0.962015\tvalid_0's ndcg@4: 0.962219\tvalid_0's ndcg@5: 0.96226\n",
      "[63]\tvalid_0's ndcg@1: 0.899526\tvalid_0's ndcg@2: 0.960992\tvalid_0's ndcg@3: 0.962216\tvalid_0's ndcg@4: 0.96242\tvalid_0's ndcg@5: 0.962461\n",
      "[64]\tvalid_0's ndcg@1: 0.899684\tvalid_0's ndcg@2: 0.96105\tvalid_0's ndcg@3: 0.962274\tvalid_0's ndcg@4: 0.962478\tvalid_0's ndcg@5: 0.962519\n",
      "[65]\tvalid_0's ndcg@1: 0.899632\tvalid_0's ndcg@2: 0.961014\tvalid_0's ndcg@3: 0.962251\tvalid_0's ndcg@4: 0.962455\tvalid_0's ndcg@5: 0.962496\n",
      "[66]\tvalid_0's ndcg@1: 0.9\tvalid_0's ndcg@2: 0.96115\tvalid_0's ndcg@3: 0.962387\tvalid_0's ndcg@4: 0.962591\tvalid_0's ndcg@5: 0.962632\n",
      "[67]\tvalid_0's ndcg@1: 0.899842\tvalid_0's ndcg@2: 0.961092\tvalid_0's ndcg@3: 0.962329\tvalid_0's ndcg@4: 0.962533\tvalid_0's ndcg@5: 0.962574\n",
      "[68]\tvalid_0's ndcg@1: 0.899684\tvalid_0's ndcg@2: 0.961067\tvalid_0's ndcg@3: 0.962278\tvalid_0's ndcg@4: 0.962482\tvalid_0's ndcg@5: 0.962522\n",
      "[69]\tvalid_0's ndcg@1: 0.900079\tvalid_0's ndcg@2: 0.961213\tvalid_0's ndcg@3: 0.962423\tvalid_0's ndcg@4: 0.962627\tvalid_0's ndcg@5: 0.962668\n",
      "[70]\tvalid_0's ndcg@1: 0.900132\tvalid_0's ndcg@2: 0.961249\tvalid_0's ndcg@3: 0.962446\tvalid_0's ndcg@4: 0.96265\tvalid_0's ndcg@5: 0.962691\n",
      "[71]\tvalid_0's ndcg@1: 0.900105\tvalid_0's ndcg@2: 0.961239\tvalid_0's ndcg@3: 0.962436\tvalid_0's ndcg@4: 0.96264\tvalid_0's ndcg@5: 0.962681\n",
      "[72]\tvalid_0's ndcg@1: 0.901895\tvalid_0's ndcg@2: 0.961933\tvalid_0's ndcg@3: 0.963117\tvalid_0's ndcg@4: 0.96331\tvalid_0's ndcg@5: 0.96335\n",
      "[73]\tvalid_0's ndcg@1: 0.901921\tvalid_0's ndcg@2: 0.961942\tvalid_0's ndcg@3: 0.963127\tvalid_0's ndcg@4: 0.963319\tvalid_0's ndcg@5: 0.96336\n",
      "[74]\tvalid_0's ndcg@1: 0.902474\tvalid_0's ndcg@2: 0.962163\tvalid_0's ndcg@3: 0.963321\tvalid_0's ndcg@4: 0.963525\tvalid_0's ndcg@5: 0.963566\n",
      "[75]\tvalid_0's ndcg@1: 0.902579\tvalid_0's ndcg@2: 0.962202\tvalid_0's ndcg@3: 0.96336\tvalid_0's ndcg@4: 0.963564\tvalid_0's ndcg@5: 0.963604\n",
      "[76]\tvalid_0's ndcg@1: 0.902579\tvalid_0's ndcg@2: 0.962202\tvalid_0's ndcg@3: 0.96336\tvalid_0's ndcg@4: 0.963564\tvalid_0's ndcg@5: 0.963604\n",
      "[77]\tvalid_0's ndcg@1: 0.902842\tvalid_0's ndcg@2: 0.962266\tvalid_0's ndcg@3: 0.96345\tvalid_0's ndcg@4: 0.963654\tvalid_0's ndcg@5: 0.963695\n",
      "[78]\tvalid_0's ndcg@1: 0.903026\tvalid_0's ndcg@2: 0.962317\tvalid_0's ndcg@3: 0.963514\tvalid_0's ndcg@4: 0.963718\tvalid_0's ndcg@5: 0.963759\n",
      "[79]\tvalid_0's ndcg@1: 0.904132\tvalid_0's ndcg@2: 0.962692\tvalid_0's ndcg@3: 0.963942\tvalid_0's ndcg@4: 0.964123\tvalid_0's ndcg@5: 0.964164\n",
      "[80]\tvalid_0's ndcg@1: 0.904947\tvalid_0's ndcg@2: 0.962993\tvalid_0's ndcg@3: 0.964243\tvalid_0's ndcg@4: 0.964424\tvalid_0's ndcg@5: 0.964465\n",
      "[81]\tvalid_0's ndcg@1: 0.905868\tvalid_0's ndcg@2: 0.963333\tvalid_0's ndcg@3: 0.964583\tvalid_0's ndcg@4: 0.964764\tvalid_0's ndcg@5: 0.964805\n",
      "[82]\tvalid_0's ndcg@1: 0.905947\tvalid_0's ndcg@2: 0.963362\tvalid_0's ndcg@3: 0.964599\tvalid_0's ndcg@4: 0.964791\tvalid_0's ndcg@5: 0.964832\n",
      "[83]\tvalid_0's ndcg@1: 0.905711\tvalid_0's ndcg@2: 0.963308\tvalid_0's ndcg@3: 0.964518\tvalid_0's ndcg@4: 0.964711\tvalid_0's ndcg@5: 0.964752\n",
      "[84]\tvalid_0's ndcg@1: 0.905842\tvalid_0's ndcg@2: 0.96339\tvalid_0's ndcg@3: 0.964587\tvalid_0's ndcg@4: 0.96478\tvalid_0's ndcg@5: 0.96481\n",
      "[85]\tvalid_0's ndcg@1: 0.905816\tvalid_0's ndcg@2: 0.96338\tvalid_0's ndcg@3: 0.964577\tvalid_0's ndcg@4: 0.96477\tvalid_0's ndcg@5: 0.9648\n",
      "[86]\tvalid_0's ndcg@1: 0.905868\tvalid_0's ndcg@2: 0.963383\tvalid_0's ndcg@3: 0.96458\tvalid_0's ndcg@4: 0.964784\tvalid_0's ndcg@5: 0.964815\n",
      "[87]\tvalid_0's ndcg@1: 0.906368\tvalid_0's ndcg@2: 0.9636\tvalid_0's ndcg@3: 0.964758\tvalid_0's ndcg@4: 0.964985\tvalid_0's ndcg@5: 0.965005\n",
      "[88]\tvalid_0's ndcg@1: 0.906579\tvalid_0's ndcg@2: 0.963695\tvalid_0's ndcg@3: 0.964839\tvalid_0's ndcg@4: 0.965066\tvalid_0's ndcg@5: 0.965086\n",
      "[89]\tvalid_0's ndcg@1: 0.906395\tvalid_0's ndcg@2: 0.963627\tvalid_0's ndcg@3: 0.964771\tvalid_0's ndcg@4: 0.964998\tvalid_0's ndcg@5: 0.965018\n",
      "[90]\tvalid_0's ndcg@1: 0.906421\tvalid_0's ndcg@2: 0.963636\tvalid_0's ndcg@3: 0.964794\tvalid_0's ndcg@4: 0.964998\tvalid_0's ndcg@5: 0.965029\n",
      "[91]\tvalid_0's ndcg@1: 0.906605\tvalid_0's ndcg@2: 0.963638\tvalid_0's ndcg@3: 0.964835\tvalid_0's ndcg@4: 0.965051\tvalid_0's ndcg@5: 0.965081\n",
      "[92]\tvalid_0's ndcg@1: 0.906605\tvalid_0's ndcg@2: 0.963621\tvalid_0's ndcg@3: 0.964845\tvalid_0's ndcg@4: 0.96506\tvalid_0's ndcg@5: 0.965081\n",
      "[93]\tvalid_0's ndcg@1: 0.906553\tvalid_0's ndcg@2: 0.963619\tvalid_0's ndcg@3: 0.964829\tvalid_0's ndcg@4: 0.965033\tvalid_0's ndcg@5: 0.965064\n",
      "[94]\tvalid_0's ndcg@1: 0.906789\tvalid_0's ndcg@2: 0.963689\tvalid_0's ndcg@3: 0.964913\tvalid_0's ndcg@4: 0.965117\tvalid_0's ndcg@5: 0.965148\n",
      "[95]\tvalid_0's ndcg@1: 0.906974\tvalid_0's ndcg@2: 0.963774\tvalid_0's ndcg@3: 0.964984\tvalid_0's ndcg@4: 0.9652\tvalid_0's ndcg@5: 0.96522\n",
      "[96]\tvalid_0's ndcg@1: 0.907526\tvalid_0's ndcg@2: 0.964011\tvalid_0's ndcg@3: 0.965195\tvalid_0's ndcg@4: 0.965411\tvalid_0's ndcg@5: 0.965431\n",
      "[97]\tvalid_0's ndcg@1: 0.908\tvalid_0's ndcg@2: 0.964219\tvalid_0's ndcg@3: 0.96539\tvalid_0's ndcg@4: 0.965594\tvalid_0's ndcg@5: 0.965615\n",
      "[98]\tvalid_0's ndcg@1: 0.908237\tvalid_0's ndcg@2: 0.96429\tvalid_0's ndcg@3: 0.965474\tvalid_0's ndcg@4: 0.965667\tvalid_0's ndcg@5: 0.965697\n",
      "[99]\tvalid_0's ndcg@1: 0.908053\tvalid_0's ndcg@2: 0.964239\tvalid_0's ndcg@3: 0.96541\tvalid_0's ndcg@4: 0.965614\tvalid_0's ndcg@5: 0.965634\n",
      "[100]\tvalid_0's ndcg@1: 0.908184\tvalid_0's ndcg@2: 0.964271\tvalid_0's ndcg@3: 0.965481\tvalid_0's ndcg@4: 0.965662\tvalid_0's ndcg@5: 0.965683\n",
      "Did not meet early stopping. Best iteration is:\n",
      "[98]\tvalid_0's ndcg@1: 0.908237\tvalid_0's ndcg@2: 0.96429\tvalid_0's ndcg@3: 0.965474\tvalid_0's ndcg@4: 0.965667\tvalid_0's ndcg@5: 0.965697\n",
      "[1]\tvalid_0's ndcg@1: 0.871421\tvalid_0's ndcg@2: 0.949773\tvalid_0's ndcg@3: 0.951523\tvalid_0's ndcg@4: 0.951829\tvalid_0's ndcg@5: 0.951879\n",
      "Training until validation scores don't improve for 50 rounds\n",
      "[2]\tvalid_0's ndcg@1: 0.877737\tvalid_0's ndcg@2: 0.952303\tvalid_0's ndcg@3: 0.953895\tvalid_0's ndcg@4: 0.954178\tvalid_0's ndcg@5: 0.954239\n",
      "[3]\tvalid_0's ndcg@1: 0.879105\tvalid_0's ndcg@2: 0.952941\tvalid_0's ndcg@3: 0.954454\tvalid_0's ndcg@4: 0.954737\tvalid_0's ndcg@5: 0.954788\n",
      "[4]\tvalid_0's ndcg@1: 0.881605\tvalid_0's ndcg@2: 0.954013\tvalid_0's ndcg@3: 0.955447\tvalid_0's ndcg@4: 0.955696\tvalid_0's ndcg@5: 0.955747\n",
      "[5]\tvalid_0's ndcg@1: 0.881895\tvalid_0's ndcg@2: 0.95412\tvalid_0's ndcg@3: 0.955541\tvalid_0's ndcg@4: 0.955801\tvalid_0's ndcg@5: 0.955852\n",
      "[6]\tvalid_0's ndcg@1: 0.882842\tvalid_0's ndcg@2: 0.954536\tvalid_0's ndcg@3: 0.95593\tvalid_0's ndcg@4: 0.956168\tvalid_0's ndcg@5: 0.956219\n",
      "[7]\tvalid_0's ndcg@1: 0.883632\tvalid_0's ndcg@2: 0.954877\tvalid_0's ndcg@3: 0.956245\tvalid_0's ndcg@4: 0.956472\tvalid_0's ndcg@5: 0.956523\n",
      "[8]\tvalid_0's ndcg@1: 0.886053\tvalid_0's ndcg@2: 0.95592\tvalid_0's ndcg@3: 0.957183\tvalid_0's ndcg@4: 0.957398\tvalid_0's ndcg@5: 0.957449\n",
      "[9]\tvalid_0's ndcg@1: 0.885895\tvalid_0's ndcg@2: 0.955745\tvalid_0's ndcg@3: 0.957101\tvalid_0's ndcg@4: 0.957316\tvalid_0's ndcg@5: 0.957367\n",
      "[10]\tvalid_0's ndcg@1: 0.887184\tvalid_0's ndcg@2: 0.956288\tvalid_0's ndcg@3: 0.95759\tvalid_0's ndcg@4: 0.957806\tvalid_0's ndcg@5: 0.957856\n",
      "[11]\tvalid_0's ndcg@1: 0.888342\tvalid_0's ndcg@2: 0.956649\tvalid_0's ndcg@3: 0.957991\tvalid_0's ndcg@4: 0.958229\tvalid_0's ndcg@5: 0.958269\n",
      "[12]\tvalid_0's ndcg@1: 0.888474\tvalid_0's ndcg@2: 0.956697\tvalid_0's ndcg@3: 0.958026\tvalid_0's ndcg@4: 0.958275\tvalid_0's ndcg@5: 0.958316\n",
      "[13]\tvalid_0's ndcg@1: 0.888842\tvalid_0's ndcg@2: 0.956767\tvalid_0's ndcg@3: 0.958135\tvalid_0's ndcg@4: 0.958396\tvalid_0's ndcg@5: 0.958436\n",
      "[14]\tvalid_0's ndcg@1: 0.889026\tvalid_0's ndcg@2: 0.956868\tvalid_0's ndcg@3: 0.95821\tvalid_0's ndcg@4: 0.958471\tvalid_0's ndcg@5: 0.958511\n",
      "[15]\tvalid_0's ndcg@1: 0.889921\tvalid_0's ndcg@2: 0.957165\tvalid_0's ndcg@3: 0.95856\tvalid_0's ndcg@4: 0.958798\tvalid_0's ndcg@5: 0.958838\n",
      "[16]\tvalid_0's ndcg@1: 0.890421\tvalid_0's ndcg@2: 0.957333\tvalid_0's ndcg@3: 0.958741\tvalid_0's ndcg@4: 0.958979\tvalid_0's ndcg@5: 0.95903\n",
      "[17]\tvalid_0's ndcg@1: 0.890421\tvalid_0's ndcg@2: 0.957383\tvalid_0's ndcg@3: 0.958751\tvalid_0's ndcg@4: 0.958978\tvalid_0's ndcg@5: 0.959029\n",
      "[18]\tvalid_0's ndcg@1: 0.890868\tvalid_0's ndcg@2: 0.957581\tvalid_0's ndcg@3: 0.958923\tvalid_0's ndcg@4: 0.959161\tvalid_0's ndcg@5: 0.959202\n",
      "[19]\tvalid_0's ndcg@1: 0.890947\tvalid_0's ndcg@2: 0.957577\tvalid_0's ndcg@3: 0.958945\tvalid_0's ndcg@4: 0.959183\tvalid_0's ndcg@5: 0.959224\n",
      "[20]\tvalid_0's ndcg@1: 0.892316\tvalid_0's ndcg@2: 0.958115\tvalid_0's ndcg@3: 0.959497\tvalid_0's ndcg@4: 0.959712\tvalid_0's ndcg@5: 0.959743\n",
      "[21]\tvalid_0's ndcg@1: 0.892632\tvalid_0's ndcg@2: 0.958298\tvalid_0's ndcg@3: 0.959627\tvalid_0's ndcg@4: 0.959842\tvalid_0's ndcg@5: 0.959873\n",
      "[22]\tvalid_0's ndcg@1: 0.892895\tvalid_0's ndcg@2: 0.958379\tvalid_0's ndcg@3: 0.959721\tvalid_0's ndcg@4: 0.959925\tvalid_0's ndcg@5: 0.959965\n",
      "[23]\tvalid_0's ndcg@1: 0.892816\tvalid_0's ndcg@2: 0.958349\tvalid_0's ndcg@3: 0.959692\tvalid_0's ndcg@4: 0.959896\tvalid_0's ndcg@5: 0.959936\n",
      "[24]\tvalid_0's ndcg@1: 0.892947\tvalid_0's ndcg@2: 0.958431\tvalid_0's ndcg@3: 0.959747\tvalid_0's ndcg@4: 0.959951\tvalid_0's ndcg@5: 0.959992\n",
      "[25]\tvalid_0's ndcg@1: 0.893289\tvalid_0's ndcg@2: 0.958491\tvalid_0's ndcg@3: 0.959846\tvalid_0's ndcg@4: 0.960062\tvalid_0's ndcg@5: 0.960102\n",
      "[26]\tvalid_0's ndcg@1: 0.893342\tvalid_0's ndcg@2: 0.958527\tvalid_0's ndcg@3: 0.959869\tvalid_0's ndcg@4: 0.960085\tvalid_0's ndcg@5: 0.960125\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[27]\tvalid_0's ndcg@1: 0.894263\tvalid_0's ndcg@2: 0.958867\tvalid_0's ndcg@3: 0.960196\tvalid_0's ndcg@4: 0.960423\tvalid_0's ndcg@5: 0.960463\n",
      "[28]\tvalid_0's ndcg@1: 0.894211\tvalid_0's ndcg@2: 0.958864\tvalid_0's ndcg@3: 0.96018\tvalid_0's ndcg@4: 0.960407\tvalid_0's ndcg@5: 0.960447\n",
      "[29]\tvalid_0's ndcg@1: 0.894474\tvalid_0's ndcg@2: 0.958945\tvalid_0's ndcg@3: 0.960274\tvalid_0's ndcg@4: 0.9605\tvalid_0's ndcg@5: 0.960541\n",
      "[30]\tvalid_0's ndcg@1: 0.894526\tvalid_0's ndcg@2: 0.958931\tvalid_0's ndcg@3: 0.960313\tvalid_0's ndcg@4: 0.960517\tvalid_0's ndcg@5: 0.960557\n",
      "[31]\tvalid_0's ndcg@1: 0.898289\tvalid_0's ndcg@2: 0.960403\tvalid_0's ndcg@3: 0.961692\tvalid_0's ndcg@4: 0.961919\tvalid_0's ndcg@5: 0.96196\n",
      "[32]\tvalid_0's ndcg@1: 0.8985\tvalid_0's ndcg@2: 0.960481\tvalid_0's ndcg@3: 0.961757\tvalid_0's ndcg@4: 0.961995\tvalid_0's ndcg@5: 0.962036\n",
      "[33]\tvalid_0's ndcg@1: 0.898368\tvalid_0's ndcg@2: 0.960465\tvalid_0's ndcg@3: 0.961715\tvalid_0's ndcg@4: 0.961953\tvalid_0's ndcg@5: 0.961994\n",
      "[34]\tvalid_0's ndcg@1: 0.898132\tvalid_0's ndcg@2: 0.960378\tvalid_0's ndcg@3: 0.961628\tvalid_0's ndcg@4: 0.961866\tvalid_0's ndcg@5: 0.961907\n",
      "[35]\tvalid_0's ndcg@1: 0.898158\tvalid_0's ndcg@2: 0.960387\tvalid_0's ndcg@3: 0.961651\tvalid_0's ndcg@4: 0.961877\tvalid_0's ndcg@5: 0.961918\n",
      "[36]\tvalid_0's ndcg@1: 0.898658\tvalid_0's ndcg@2: 0.960589\tvalid_0's ndcg@3: 0.961839\tvalid_0's ndcg@4: 0.962065\tvalid_0's ndcg@5: 0.962106\n",
      "[37]\tvalid_0's ndcg@1: 0.899079\tvalid_0's ndcg@2: 0.960761\tvalid_0's ndcg@3: 0.961997\tvalid_0's ndcg@4: 0.962224\tvalid_0's ndcg@5: 0.962265\n",
      "[38]\tvalid_0's ndcg@1: 0.899079\tvalid_0's ndcg@2: 0.960794\tvalid_0's ndcg@3: 0.962004\tvalid_0's ndcg@4: 0.962231\tvalid_0's ndcg@5: 0.962272\n",
      "[39]\tvalid_0's ndcg@1: 0.899368\tvalid_0's ndcg@2: 0.960867\tvalid_0's ndcg@3: 0.962091\tvalid_0's ndcg@4: 0.962329\tvalid_0's ndcg@5: 0.96237\n",
      "[40]\tvalid_0's ndcg@1: 0.900158\tvalid_0's ndcg@2: 0.961142\tvalid_0's ndcg@3: 0.962379\tvalid_0's ndcg@4: 0.962617\tvalid_0's ndcg@5: 0.962658\n",
      "[41]\tvalid_0's ndcg@1: 0.900158\tvalid_0's ndcg@2: 0.961175\tvalid_0's ndcg@3: 0.962412\tvalid_0's ndcg@4: 0.962628\tvalid_0's ndcg@5: 0.962668\n",
      "[42]\tvalid_0's ndcg@1: 0.901026\tvalid_0's ndcg@2: 0.961546\tvalid_0's ndcg@3: 0.96273\tvalid_0's ndcg@4: 0.962957\tvalid_0's ndcg@5: 0.962997\n",
      "[43]\tvalid_0's ndcg@1: 0.900684\tvalid_0's ndcg@2: 0.961436\tvalid_0's ndcg@3: 0.962607\tvalid_0's ndcg@4: 0.962834\tvalid_0's ndcg@5: 0.962875\n",
      "[44]\tvalid_0's ndcg@1: 0.901\tvalid_0's ndcg@2: 0.961553\tvalid_0's ndcg@3: 0.962737\tvalid_0's ndcg@4: 0.962964\tvalid_0's ndcg@5: 0.962994\n",
      "[45]\tvalid_0's ndcg@1: 0.900895\tvalid_0's ndcg@2: 0.961497\tvalid_0's ndcg@3: 0.962708\tvalid_0's ndcg@4: 0.962923\tvalid_0's ndcg@5: 0.962954\n",
      "[46]\tvalid_0's ndcg@1: 0.900684\tvalid_0's ndcg@2: 0.96142\tvalid_0's ndcg@3: 0.96263\tvalid_0's ndcg@4: 0.962845\tvalid_0's ndcg@5: 0.962876\n",
      "[47]\tvalid_0's ndcg@1: 0.901842\tvalid_0's ndcg@2: 0.96188\tvalid_0's ndcg@3: 0.963064\tvalid_0's ndcg@4: 0.96328\tvalid_0's ndcg@5: 0.96331\n",
      "[48]\tvalid_0's ndcg@1: 0.901816\tvalid_0's ndcg@2: 0.961854\tvalid_0's ndcg@3: 0.963038\tvalid_0's ndcg@4: 0.963253\tvalid_0's ndcg@5: 0.963294\n",
      "[49]\tvalid_0's ndcg@1: 0.902368\tvalid_0's ndcg@2: 0.962108\tvalid_0's ndcg@3: 0.963252\tvalid_0's ndcg@4: 0.963468\tvalid_0's ndcg@5: 0.963508\n",
      "[50]\tvalid_0's ndcg@1: 0.902079\tvalid_0's ndcg@2: 0.961984\tvalid_0's ndcg@3: 0.963155\tvalid_0's ndcg@4: 0.963359\tvalid_0's ndcg@5: 0.9634\n",
      "[51]\tvalid_0's ndcg@1: 0.902842\tvalid_0's ndcg@2: 0.962316\tvalid_0's ndcg@3: 0.963447\tvalid_0's ndcg@4: 0.963651\tvalid_0's ndcg@5: 0.963692\n",
      "[52]\tvalid_0's ndcg@1: 0.903079\tvalid_0's ndcg@2: 0.96242\tvalid_0's ndcg@3: 0.963525\tvalid_0's ndcg@4: 0.96374\tvalid_0's ndcg@5: 0.963781\n",
      "[53]\tvalid_0's ndcg@1: 0.902737\tvalid_0's ndcg@2: 0.962243\tvalid_0's ndcg@3: 0.963401\tvalid_0's ndcg@4: 0.963605\tvalid_0's ndcg@5: 0.963646\n",
      "[54]\tvalid_0's ndcg@1: 0.902737\tvalid_0's ndcg@2: 0.962243\tvalid_0's ndcg@3: 0.963401\tvalid_0's ndcg@4: 0.963605\tvalid_0's ndcg@5: 0.963646\n",
      "[55]\tvalid_0's ndcg@1: 0.902947\tvalid_0's ndcg@2: 0.962338\tvalid_0's ndcg@3: 0.963483\tvalid_0's ndcg@4: 0.963687\tvalid_0's ndcg@5: 0.963727\n",
      "[56]\tvalid_0's ndcg@1: 0.903474\tvalid_0's ndcg@2: 0.962549\tvalid_0's ndcg@3: 0.963693\tvalid_0's ndcg@4: 0.963886\tvalid_0's ndcg@5: 0.963927\n",
      "[57]\tvalid_0's ndcg@1: 0.904105\tvalid_0's ndcg@2: 0.962798\tvalid_0's ndcg@3: 0.96393\tvalid_0's ndcg@4: 0.964123\tvalid_0's ndcg@5: 0.964163\n",
      "[58]\tvalid_0's ndcg@1: 0.904395\tvalid_0's ndcg@2: 0.962922\tvalid_0's ndcg@3: 0.96404\tvalid_0's ndcg@4: 0.964233\tvalid_0's ndcg@5: 0.964274\n",
      "[59]\tvalid_0's ndcg@1: 0.904237\tvalid_0's ndcg@2: 0.96283\tvalid_0's ndcg@3: 0.963975\tvalid_0's ndcg@4: 0.964168\tvalid_0's ndcg@5: 0.964208\n",
      "[60]\tvalid_0's ndcg@1: 0.904263\tvalid_0's ndcg@2: 0.96284\tvalid_0's ndcg@3: 0.963972\tvalid_0's ndcg@4: 0.964176\tvalid_0's ndcg@5: 0.964216\n",
      "[61]\tvalid_0's ndcg@1: 0.904263\tvalid_0's ndcg@2: 0.962857\tvalid_0's ndcg@3: 0.963988\tvalid_0's ndcg@4: 0.964181\tvalid_0's ndcg@5: 0.964222\n",
      "[62]\tvalid_0's ndcg@1: 0.904632\tvalid_0's ndcg@2: 0.963009\tvalid_0's ndcg@3: 0.964128\tvalid_0's ndcg@4: 0.964332\tvalid_0's ndcg@5: 0.964362\n",
      "[63]\tvalid_0's ndcg@1: 0.904711\tvalid_0's ndcg@2: 0.963038\tvalid_0's ndcg@3: 0.964157\tvalid_0's ndcg@4: 0.964361\tvalid_0's ndcg@5: 0.964391\n",
      "[64]\tvalid_0's ndcg@1: 0.904658\tvalid_0's ndcg@2: 0.963019\tvalid_0's ndcg@3: 0.964137\tvalid_0's ndcg@4: 0.964341\tvalid_0's ndcg@5: 0.964372\n",
      "[65]\tvalid_0's ndcg@1: 0.904553\tvalid_0's ndcg@2: 0.962963\tvalid_0's ndcg@3: 0.964095\tvalid_0's ndcg@4: 0.964299\tvalid_0's ndcg@5: 0.96433\n",
      "[66]\tvalid_0's ndcg@1: 0.904658\tvalid_0's ndcg@2: 0.963002\tvalid_0's ndcg@3: 0.964134\tvalid_0's ndcg@4: 0.964338\tvalid_0's ndcg@5: 0.964368\n",
      "[67]\tvalid_0's ndcg@1: 0.904632\tvalid_0's ndcg@2: 0.962993\tvalid_0's ndcg@3: 0.964137\tvalid_0's ndcg@4: 0.96433\tvalid_0's ndcg@5: 0.964361\n",
      "[68]\tvalid_0's ndcg@1: 0.904947\tvalid_0's ndcg@2: 0.963093\tvalid_0's ndcg@3: 0.96425\tvalid_0's ndcg@4: 0.964443\tvalid_0's ndcg@5: 0.964484\n",
      "[69]\tvalid_0's ndcg@1: 0.904947\tvalid_0's ndcg@2: 0.963093\tvalid_0's ndcg@3: 0.96425\tvalid_0's ndcg@4: 0.964443\tvalid_0's ndcg@5: 0.964484\n",
      "[70]\tvalid_0's ndcg@1: 0.904816\tvalid_0's ndcg@2: 0.963061\tvalid_0's ndcg@3: 0.964205\tvalid_0's ndcg@4: 0.964398\tvalid_0's ndcg@5: 0.964439\n",
      "[71]\tvalid_0's ndcg@1: 0.905237\tvalid_0's ndcg@2: 0.963266\tvalid_0's ndcg@3: 0.964371\tvalid_0's ndcg@4: 0.964564\tvalid_0's ndcg@5: 0.964604\n",
      "[72]\tvalid_0's ndcg@1: 0.906553\tvalid_0's ndcg@2: 0.963751\tvalid_0's ndcg@3: 0.964857\tvalid_0's ndcg@4: 0.965049\tvalid_0's ndcg@5: 0.96509\n",
      "[73]\tvalid_0's ndcg@1: 0.906447\tvalid_0's ndcg@2: 0.963696\tvalid_0's ndcg@3: 0.964814\tvalid_0's ndcg@4: 0.965007\tvalid_0's ndcg@5: 0.965048\n",
      "[74]\tvalid_0's ndcg@1: 0.906789\tvalid_0's ndcg@2: 0.963839\tvalid_0's ndcg@3: 0.964918\tvalid_0's ndcg@4: 0.965133\tvalid_0's ndcg@5: 0.965174\n",
      "[75]\tvalid_0's ndcg@1: 0.907289\tvalid_0's ndcg@2: 0.964023\tvalid_0's ndcg@3: 0.965102\tvalid_0's ndcg@4: 0.965318\tvalid_0's ndcg@5: 0.965358\n",
      "[76]\tvalid_0's ndcg@1: 0.907211\tvalid_0's ndcg@2: 0.964011\tvalid_0's ndcg@3: 0.96509\tvalid_0's ndcg@4: 0.965294\tvalid_0's ndcg@5: 0.965334\n",
      "[77]\tvalid_0's ndcg@1: 0.907737\tvalid_0's ndcg@2: 0.964222\tvalid_0's ndcg@3: 0.965287\tvalid_0's ndcg@4: 0.965491\tvalid_0's ndcg@5: 0.965532\n",
      "[78]\tvalid_0's ndcg@1: 0.907842\tvalid_0's ndcg@2: 0.964277\tvalid_0's ndcg@3: 0.96533\tvalid_0's ndcg@4: 0.965534\tvalid_0's ndcg@5: 0.965574\n",
      "[79]\tvalid_0's ndcg@1: 0.908658\tvalid_0's ndcg@2: 0.964595\tvalid_0's ndcg@3: 0.965634\tvalid_0's ndcg@4: 0.965838\tvalid_0's ndcg@5: 0.965879\n",
      "[80]\tvalid_0's ndcg@1: 0.908895\tvalid_0's ndcg@2: 0.964715\tvalid_0's ndcg@3: 0.965742\tvalid_0's ndcg@4: 0.965934\tvalid_0's ndcg@5: 0.965975\n",
      "[81]\tvalid_0's ndcg@1: 0.910026\tvalid_0's ndcg@2: 0.96515\tvalid_0's ndcg@3: 0.966163\tvalid_0's ndcg@4: 0.966355\tvalid_0's ndcg@5: 0.966396\n",
      "[82]\tvalid_0's ndcg@1: 0.910079\tvalid_0's ndcg@2: 0.965169\tvalid_0's ndcg@3: 0.966182\tvalid_0's ndcg@4: 0.966375\tvalid_0's ndcg@5: 0.966416\n",
      "[83]\tvalid_0's ndcg@1: 0.909895\tvalid_0's ndcg@2: 0.965101\tvalid_0's ndcg@3: 0.966114\tvalid_0's ndcg@4: 0.966307\tvalid_0's ndcg@5: 0.966348\n",
      "[84]\tvalid_0's ndcg@1: 0.909947\tvalid_0's ndcg@2: 0.965154\tvalid_0's ndcg@3: 0.966141\tvalid_0's ndcg@4: 0.966333\tvalid_0's ndcg@5: 0.966374\n",
      "[85]\tvalid_0's ndcg@1: 0.910132\tvalid_0's ndcg@2: 0.965222\tvalid_0's ndcg@3: 0.966195\tvalid_0's ndcg@4: 0.966399\tvalid_0's ndcg@5: 0.96644\n",
      "[86]\tvalid_0's ndcg@1: 0.910342\tvalid_0's ndcg@2: 0.965316\tvalid_0's ndcg@3: 0.966277\tvalid_0's ndcg@4: 0.966481\tvalid_0's ndcg@5: 0.966521\n",
      "[87]\tvalid_0's ndcg@1: 0.910395\tvalid_0's ndcg@2: 0.965335\tvalid_0's ndcg@3: 0.966296\tvalid_0's ndcg@4: 0.9665\tvalid_0's ndcg@5: 0.966541\n",
      "[88]\tvalid_0's ndcg@1: 0.911105\tvalid_0's ndcg@2: 0.965614\tvalid_0's ndcg@3: 0.966562\tvalid_0's ndcg@4: 0.966766\tvalid_0's ndcg@5: 0.966806\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[89]\tvalid_0's ndcg@1: 0.910947\tvalid_0's ndcg@2: 0.96549\tvalid_0's ndcg@3: 0.96649\tvalid_0's ndcg@4: 0.966694\tvalid_0's ndcg@5: 0.966734\n",
      "[90]\tvalid_0's ndcg@1: 0.910921\tvalid_0's ndcg@2: 0.965463\tvalid_0's ndcg@3: 0.966476\tvalid_0's ndcg@4: 0.96668\tvalid_0's ndcg@5: 0.966721\n",
      "[91]\tvalid_0's ndcg@1: 0.911263\tvalid_0's ndcg@2: 0.965606\tvalid_0's ndcg@3: 0.966606\tvalid_0's ndcg@4: 0.96681\tvalid_0's ndcg@5: 0.966851\n",
      "[92]\tvalid_0's ndcg@1: 0.911421\tvalid_0's ndcg@2: 0.965681\tvalid_0's ndcg@3: 0.966681\tvalid_0's ndcg@4: 0.966874\tvalid_0's ndcg@5: 0.966914\n",
      "[93]\tvalid_0's ndcg@1: 0.911447\tvalid_0's ndcg@2: 0.965707\tvalid_0's ndcg@3: 0.966694\tvalid_0's ndcg@4: 0.966887\tvalid_0's ndcg@5: 0.966928\n",
      "[94]\tvalid_0's ndcg@1: 0.911658\tvalid_0's ndcg@2: 0.965802\tvalid_0's ndcg@3: 0.966775\tvalid_0's ndcg@4: 0.966968\tvalid_0's ndcg@5: 0.967009\n",
      "[95]\tvalid_0's ndcg@1: 0.911579\tvalid_0's ndcg@2: 0.965756\tvalid_0's ndcg@3: 0.96673\tvalid_0's ndcg@4: 0.966934\tvalid_0's ndcg@5: 0.966974\n",
      "[96]\tvalid_0's ndcg@1: 0.912184\tvalid_0's ndcg@2: 0.965963\tvalid_0's ndcg@3: 0.96695\tvalid_0's ndcg@4: 0.967154\tvalid_0's ndcg@5: 0.967194\n",
      "[97]\tvalid_0's ndcg@1: 0.912526\tvalid_0's ndcg@2: 0.966122\tvalid_0's ndcg@3: 0.967083\tvalid_0's ndcg@4: 0.967287\tvalid_0's ndcg@5: 0.967327\n",
      "[98]\tvalid_0's ndcg@1: 0.9125\tvalid_0's ndcg@2: 0.966096\tvalid_0's ndcg@3: 0.96707\tvalid_0's ndcg@4: 0.967274\tvalid_0's ndcg@5: 0.967314\n",
      "[99]\tvalid_0's ndcg@1: 0.912526\tvalid_0's ndcg@2: 0.966122\tvalid_0's ndcg@3: 0.967083\tvalid_0's ndcg@4: 0.967287\tvalid_0's ndcg@5: 0.967327\n",
      "[100]\tvalid_0's ndcg@1: 0.912947\tvalid_0's ndcg@2: 0.966278\tvalid_0's ndcg@3: 0.967238\tvalid_0's ndcg@4: 0.967442\tvalid_0's ndcg@5: 0.967483\n",
      "Did not meet early stopping. Best iteration is:\n",
      "[100]\tvalid_0's ndcg@1: 0.912947\tvalid_0's ndcg@2: 0.966278\tvalid_0's ndcg@3: 0.967238\tvalid_0's ndcg@4: 0.967442\tvalid_0's ndcg@5: 0.967483\n"
     ]
    }
   ],
   "source": [
    "# Five-fold cross-validation, where the folds are split by user\n",
    "# This part is independent of the earlier standalone train/validation run\n",
    "def get_kfold_users(trn_df, n=5):\n",
    "    user_ids = trn_df['user_id'].unique()\n",
    "    user_set = [user_ids[i::n] for i in range(n)]\n",
    "    return user_set\n",
    "\n",
    "k_fold = 5\n",
    "trn_df = trn_user_item_feats_df_rank_model\n",
    "user_set = get_kfold_users(trn_df, n=k_fold)\n",
    "\n",
    "score_list = []\n",
    "score_df = trn_df[['user_id', 'click_article_id','label']]\n",
    "sub_preds = np.zeros(tst_user_item_feats_df_rank_model.shape[0])\n",
    "\n",
    "# Run five-fold cross-validation and save the intermediate results for stacking\n",
    "for n_fold, valid_user in enumerate(user_set):\n",
    "    train_idx = trn_df[~trn_df['user_id'].isin(valid_user)].copy()  # users outside this fold\n",
    "    valid_idx = trn_df[trn_df['user_id'].isin(valid_user)].copy()   # users inside this fold\n",
    "    \n",
    "    # Per-user group sizes for the training and validation sets (required by LGBMRanker)\n",
    "    train_idx.sort_values(by=['user_id'], inplace=True)\n",
    "    g_train = train_idx.groupby(['user_id'], as_index=False).count()[\"label\"].values\n",
    "    \n",
    "    valid_idx.sort_values(by=['user_id'], inplace=True)\n",
    "    g_val = valid_idx.groupby(['user_id'], as_index=False).count()[\"label\"].values\n",
    "    \n",
    "    # Define the model\n",
    "    lgb_ranker = lgb.LGBMRanker(boosting_type='gbdt', num_leaves=31, reg_alpha=0.0, reg_lambda=1,\n",
    "                            max_depth=-1, n_estimators=100, subsample=0.7, colsample_bytree=0.7, subsample_freq=1,\n",
    "                            learning_rate=0.01, min_child_weight=50, random_state=2018, n_jobs= 16)  \n",
    "    # Train the model\n",
    "    lgb_ranker.fit(train_idx[lgb_cols], train_idx['label'], group=g_train,\n",
    "                   eval_set=[(valid_idx[lgb_cols], valid_idx['label'])], eval_group= [g_val], \n",
    "                   eval_at=[1, 2, 3, 4, 5], eval_metric=['ndcg', ], early_stopping_rounds=50, )\n",
    "    \n",
    "    # Predict on the validation split\n",
    "    valid_idx['pred_score'] = lgb_ranker.predict(valid_idx[lgb_cols], num_iteration=lgb_ranker.best_iteration_)\n",
    "    \n",
    "    # Normalize the predicted scores\n",
    "    valid_idx['pred_score'] = valid_idx[['pred_score']].transform(lambda x: norm_sim(x))\n",
    "    \n",
    "    valid_idx.sort_values(by=['user_id', 'pred_score'], inplace=True)\n",
    "    valid_idx['pred_rank'] = valid_idx.groupby(['user_id'])['pred_score'].rank(ascending=False, method='first')\n",
    "    \n",
    "    # Collect this fold's validation predictions; they are concatenated after the loop\n",
    "    score_list.append(valid_idx[['user_id', 'click_article_id', 'pred_score', 'pred_rank']])\n",
    "    \n",
    "    # For the test set, accumulate each fold's predictions; they are averaged after the loop\n",
    "    if informal:\n",
    "        sub_preds += lgb_ranker.predict(tst_user_item_feats_df_rank_model[lgb_cols], num_iteration=lgb_ranker.best_iteration_)\n",
    "    \n",
    "score_df_ = pd.concat(score_list, axis=0)\n",
    "score_df = score_df.merge(score_df_, how='left', on=['user_id', 'click_article_id'])\n",
    "# Save the new features produced by cross-validation on the training set\n",
    "score_df[['user_id', 'click_article_id', 'pred_score', 'pred_rank', 'label']].to_csv(pathcache + 'trn_lgb_ranker_feats.csv', index=False)\n",
    "    \n",
    "# Average the test-set predictions over the folds, then save the score and rank as features for later stacking; more features could be constructed here as well\n",
    "tst_user_item_feats_df_rank_model['pred_score'] = sub_preds / k_fold\n",
    "tst_user_item_feats_df_rank_model['pred_score'] = tst_user_item_feats_df_rank_model['pred_score'].transform(lambda x: norm_sim(x))\n",
    "tst_user_item_feats_df_rank_model.sort_values(by=['user_id', 'pred_score'], inplace=True)\n",
    "tst_user_item_feats_df_rank_model['pred_rank'] = tst_user_item_feats_df_rank_model.groupby(['user_id'])['pred_score'].rank(ascending=False, method='first')\n",
    "\n",
    "# Save the new cross-validation features for the test set\n",
    "tst_user_item_feats_df_rank_model[['user_id', 'click_article_id', 'pred_score', 'pred_rank']].to_csv(pathcache + 'tst_lgb_ranker_feats.csv', index=False)"
   ]
  },
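  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A note on the `group` argument used above: `LGBMRanker` does not take a query/user id column; instead it takes the number of candidate rows per query (here, per user), in the same order as the user-sorted training rows. A minimal sketch with hypothetical toy data:\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# Toy candidate table: 2 candidates for user 1, 3 for user 2 (hypothetical data)\n",
    "toy_df = pd.DataFrame({'user_id': [1, 1, 2, 2, 2], 'label': [1, 0, 0, 1, 0]})\n",
    "\n",
    "# Sort so each user's rows are contiguous, then count the rows per user\n",
    "toy_df = toy_df.sort_values(by=['user_id'])\n",
    "g = toy_df.groupby(['user_id'], as_index=False).count()['label'].values\n",
    "\n",
    "print(list(g))  # [2, 3]; the group sizes must sum to the number of rows\n",
    "assert g.sum() == len(toy_df)\n",
    "```"
   ]
  },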
  {
   "cell_type": "code",
   "execution_count": 189,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Re-rank the predictions and generate the submission file\n",
    "# Submission from the single ranker model\n",
    "rank_results = tst_user_item_feats_df_rank_model[['user_id', 'click_article_id', 'pred_score']].copy()\n",
    "rank_results['click_article_id'] = rank_results['click_article_id'].astype(int)\n",
    "submit(rank_results, topk=5, model_name='lgb_ranker')"
   ]
  },
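  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `norm_sim` call above rescales each fold's raw ranker scores onto a common range before they are averaged across folds. Its definition lives earlier in the notebook; assuming it is plain min-max normalization, it can be sketched as:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def norm_sim_sketch(scores):\n",
    "    # Min-max scale a 1-D array of scores into [0, 1];\n",
    "    # if all scores are equal, fall back to a constant 1.0\n",
    "    scores = np.asarray(scores, dtype=float)\n",
    "    span = scores.max() - scores.min()\n",
    "    if span == 0:\n",
    "        return np.ones_like(scores)\n",
    "    return (scores - scores.min()) / span\n",
    "\n",
    "print(norm_sim_sketch([2.0, 4.0, 3.0]).tolist())  # [0.0, 1.0, 0.5]\n",
    "```"
   ]
  },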
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# LGB分类模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 190,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Model and hyper-parameter definition\n",
    "lgb_Classfication = lgb.LGBMClassifier(boosting_type='gbdt', num_leaves=31, reg_alpha=0.0, reg_lambda=1,\n",
    "                            max_depth=-1, n_estimators=500, subsample=0.7, colsample_bytree=0.7, subsample_freq=1,\n",
    "                            learning_rate=0.01, min_child_weight=50, random_state=2018, n_jobs= 16, verbose=10)  "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 191,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LightGBM] [Info] Number of positive: 91639, number of negative: 213767\n",
      "[LightGBM] [Debug] Dataset::GetMultiBinFromAllFeatures: sparse rate 0.042928\n",
      "[LightGBM] [Debug] init for col-wise cost 0.000031 seconds, init for row-wise cost 0.015462 seconds\n",
      "[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.018431 seconds.\n",
      "You can set `force_col_wise=true` to remove the overhead.\n",
      "[LightGBM] [Info] Total Bins 4126\n",
      "[LightGBM] [Info] Number of data points in the train set: 305406, number of used features: 23\n",
      "[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.300056 -> initscore=-0.847030\n",
      "[LightGBM] [Info] Start training from score -0.847030\n",
      "[LightGBM] [Debug] Re-bagging, using 213843 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[1]\tvalid_0's auc: 0.808183\tvalid_0's binary_logloss: 0.524411\n",
      "Training until validation scores don't improve for 50 rounds\n",
      "[LightGBM] [Debug] Re-bagging, using 213345 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[2]\tvalid_0's auc: 0.817812\tvalid_0's binary_logloss: 0.523369\n",
      "[LightGBM] [Debug] Re-bagging, using 213697 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[3]\tvalid_0's auc: 0.815602\tvalid_0's binary_logloss: 0.52183\n",
      "[LightGBM] [Debug] Re-bagging, using 213583 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[4]\tvalid_0's auc: 0.823904\tvalid_0's binary_logloss: 0.519856\n",
      "[LightGBM] [Debug] Re-bagging, using 213737 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[5]\tvalid_0's auc: 0.825051\tvalid_0's binary_logloss: 0.517924\n",
      "[LightGBM] [Debug] Re-bagging, using 214138 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[6]\tvalid_0's auc: 0.827419\tvalid_0's binary_logloss: 0.516729\n",
      "[LightGBM] [Debug] Re-bagging, using 213650 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[7]\tvalid_0's auc: 0.827392\tvalid_0's binary_logloss: 0.514898\n",
      "[LightGBM] [Debug] Re-bagging, using 214137 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[8]\tvalid_0's auc: 0.827523\tvalid_0's binary_logloss: 0.513874\n",
      "[LightGBM] [Debug] Re-bagging, using 213905 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[9]\tvalid_0's auc: 0.827685\tvalid_0's binary_logloss: 0.512071\n",
      "[LightGBM] [Debug] Re-bagging, using 213748 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[10]\tvalid_0's auc: 0.828443\tvalid_0's binary_logloss: 0.510729\n",
      "[LightGBM] [Debug] Re-bagging, using 213533 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[11]\tvalid_0's auc: 0.828614\tvalid_0's binary_logloss: 0.509356\n",
      "[LightGBM] [Debug] Re-bagging, using 213954 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[12]\tvalid_0's auc: 0.827947\tvalid_0's binary_logloss: 0.508429\n",
      "[LightGBM] [Debug] Re-bagging, using 214008 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[13]\tvalid_0's auc: 0.828962\tvalid_0's binary_logloss: 0.506756\n",
      "[LightGBM] [Debug] Re-bagging, using 213760 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[14]\tvalid_0's auc: 0.829089\tvalid_0's binary_logloss: 0.505075\n",
      "[LightGBM] [Debug] Re-bagging, using 213462 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[15]\tvalid_0's auc: 0.829045\tvalid_0's binary_logloss: 0.503909\n",
      "[LightGBM] [Debug] Re-bagging, using 214117 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[16]\tvalid_0's auc: 0.829241\tvalid_0's binary_logloss: 0.502428\n",
      "[LightGBM] [Debug] Re-bagging, using 213764 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[17]\tvalid_0's auc: 0.829951\tvalid_0's binary_logloss: 0.50135\n",
      "[LightGBM] [Debug] Re-bagging, using 214100 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[18]\tvalid_0's auc: 0.83006\tvalid_0's binary_logloss: 0.500147\n",
      "[LightGBM] [Debug] Re-bagging, using 213851 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[19]\tvalid_0's auc: 0.830242\tvalid_0's binary_logloss: 0.498584\n",
      "[LightGBM] [Debug] Re-bagging, using 214005 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[20]\tvalid_0's auc: 0.830623\tvalid_0's binary_logloss: 0.498299\n",
      "[LightGBM] [Debug] Re-bagging, using 213367 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[21]\tvalid_0's auc: 0.83039\tvalid_0's binary_logloss: 0.497168\n",
      "[LightGBM] [Debug] Re-bagging, using 213523 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[22]\tvalid_0's auc: 0.830706\tvalid_0's binary_logloss: 0.496224\n",
      "[LightGBM] [Debug] Re-bagging, using 213433 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[23]\tvalid_0's auc: 0.830781\tvalid_0's binary_logloss: 0.494684\n",
      "[LightGBM] [Debug] Re-bagging, using 214409 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[24]\tvalid_0's auc: 0.830781\tvalid_0's binary_logloss: 0.493192\n",
      "[LightGBM] [Debug] Re-bagging, using 213836 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[25]\tvalid_0's auc: 0.830703\tvalid_0's binary_logloss: 0.492093\n",
      "[LightGBM] [Debug] Re-bagging, using 214284 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[26]\tvalid_0's auc: 0.830801\tvalid_0's binary_logloss: 0.490697\n",
      "[LightGBM] [Debug] Re-bagging, using 213850 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[27]\tvalid_0's auc: 0.830559\tvalid_0's binary_logloss: 0.489657\n",
      "[LightGBM] [Debug] Re-bagging, using 213505 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[28]\tvalid_0's auc: 0.831018\tvalid_0's binary_logloss: 0.488255\n",
      "[LightGBM] [Debug] Re-bagging, using 213392 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[29]\tvalid_0's auc: 0.830968\tvalid_0's binary_logloss: 0.486894\n",
      "[LightGBM] [Debug] Re-bagging, using 214085 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[30]\tvalid_0's auc: 0.831222\tvalid_0's binary_logloss: 0.48554\n",
      "[LightGBM] [Debug] Re-bagging, using 213968 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[31]\tvalid_0's auc: 0.830923\tvalid_0's binary_logloss: 0.484622\n",
      "[LightGBM] [Debug] Re-bagging, using 213542 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[32]\tvalid_0's auc: 0.830942\tvalid_0's binary_logloss: 0.48328\n",
      "[LightGBM] [Debug] Re-bagging, using 213819 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[33]\tvalid_0's auc: 0.830876\tvalid_0's binary_logloss: 0.481979\n",
      "[LightGBM] [Debug] Re-bagging, using 213665 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[34]\tvalid_0's auc: 0.830725\tvalid_0's binary_logloss: 0.480994\n",
      "[LightGBM] [Debug] Re-bagging, using 213637 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[35]\tvalid_0's auc: 0.830716\tvalid_0's binary_logloss: 0.479707\n",
      "[LightGBM] [Debug] Re-bagging, using 213859 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[36]\tvalid_0's auc: 0.830714\tvalid_0's binary_logloss: 0.479019\n",
      "[LightGBM] [Debug] Re-bagging, using 213954 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[37]\tvalid_0's auc: 0.830574\tvalid_0's binary_logloss: 0.478065\n",
      "[LightGBM] [Debug] Re-bagging, using 213675 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[38]\tvalid_0's auc: 0.830538\tvalid_0's binary_logloss: 0.476843\n",
      "[LightGBM] [Debug] Re-bagging, using 213881 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[39]\tvalid_0's auc: 0.830353\tvalid_0's binary_logloss: 0.475988\n",
      "[LightGBM] [Debug] Re-bagging, using 214055 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[40]\tvalid_0's auc: 0.830335\tvalid_0's binary_logloss: 0.475364\n",
      "[LightGBM] [Debug] Re-bagging, using 214200 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[41]\tvalid_0's auc: 0.830336\tvalid_0's binary_logloss: 0.474394\n",
      "[LightGBM] [Debug] Re-bagging, using 213791 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[42]\tvalid_0's auc: 0.83044\tvalid_0's binary_logloss: 0.47328\n",
      "[LightGBM] [Debug] Re-bagging, using 213808 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[43]\tvalid_0's auc: 0.83065\tvalid_0's binary_logloss: 0.472136\n",
      "[LightGBM] [Debug] Re-bagging, using 213742 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[44]\tvalid_0's auc: 0.830426\tvalid_0's binary_logloss: 0.471351\n",
      "[LightGBM] [Debug] Re-bagging, using 214018 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[45]\tvalid_0's auc: 0.830448\tvalid_0's binary_logloss: 0.470529\n",
      "[LightGBM] [Debug] Re-bagging, using 213774 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[46]\tvalid_0's auc: 0.830581\tvalid_0's binary_logloss: 0.469462\n",
      "[LightGBM] [Debug] Re-bagging, using 213695 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[47]\tvalid_0's auc: 0.830409\tvalid_0's binary_logloss: 0.468406\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LightGBM] [Debug] Re-bagging, using 213055 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[48]\tvalid_0's auc: 0.83053\tvalid_0's binary_logloss: 0.467719\n",
      "[LightGBM] [Debug] Re-bagging, using 213553 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[49]\tvalid_0's auc: 0.83049\tvalid_0's binary_logloss: 0.467474\n",
      "[LightGBM] [Debug] Re-bagging, using 213876 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[50]\tvalid_0's auc: 0.830675\tvalid_0's binary_logloss: 0.467158\n",
      "[LightGBM] [Debug] Re-bagging, using 213487 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[51]\tvalid_0's auc: 0.830554\tvalid_0's binary_logloss: 0.466432\n",
      "[LightGBM] [Debug] Re-bagging, using 213417 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[52]\tvalid_0's auc: 0.830479\tvalid_0's binary_logloss: 0.465656\n",
      "[LightGBM] [Debug] Re-bagging, using 214201 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[53]\tvalid_0's auc: 0.830448\tvalid_0's binary_logloss: 0.464633\n",
      "[LightGBM] [Debug] Re-bagging, using 213514 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[54]\tvalid_0's auc: 0.830371\tvalid_0's binary_logloss: 0.463635\n",
      "[LightGBM] [Debug] Re-bagging, using 213963 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[55]\tvalid_0's auc: 0.830527\tvalid_0's binary_logloss: 0.463121\n",
      "[LightGBM] [Debug] Re-bagging, using 213455 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[56]\tvalid_0's auc: 0.830558\tvalid_0's binary_logloss: 0.462276\n",
      "[LightGBM] [Debug] Re-bagging, using 213923 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[57]\tvalid_0's auc: 0.830482\tvalid_0's binary_logloss: 0.461326\n",
      "[LightGBM] [Debug] Re-bagging, using 213980 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[58]\tvalid_0's auc: 0.830368\tvalid_0's binary_logloss: 0.460648\n",
      "[LightGBM] [Debug] Re-bagging, using 213631 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[59]\tvalid_0's auc: 0.830311\tvalid_0's binary_logloss: 0.459696\n",
      "[LightGBM] [Debug] Re-bagging, using 214422 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[60]\tvalid_0's auc: 0.830398\tvalid_0's binary_logloss: 0.458838\n",
      "[LightGBM] [Debug] Re-bagging, using 213568 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 6\n",
      "[61]\tvalid_0's auc: 0.830499\tvalid_0's binary_logloss: 0.457907\n",
      "[LightGBM] [Debug] Re-bagging, using 213716 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[62]\tvalid_0's auc: 0.83037\tvalid_0's binary_logloss: 0.457239\n",
      "[LightGBM] [Debug] Re-bagging, using 213833 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[63]\tvalid_0's auc: 0.830258\tvalid_0's binary_logloss: 0.456555\n",
      "[LightGBM] [Debug] Re-bagging, using 213886 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[64]\tvalid_0's auc: 0.830253\tvalid_0's binary_logloss: 0.45568\n",
      "[LightGBM] [Debug] Re-bagging, using 213512 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[65]\tvalid_0's auc: 0.830412\tvalid_0's binary_logloss: 0.454817\n",
      "[LightGBM] [Debug] Re-bagging, using 213857 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[66]\tvalid_0's auc: 0.830367\tvalid_0's binary_logloss: 0.454169\n",
      "[LightGBM] [Debug] Re-bagging, using 213754 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[67]\tvalid_0's auc: 0.83046\tvalid_0's binary_logloss: 0.453333\n",
      "[LightGBM] [Debug] Re-bagging, using 213805 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[68]\tvalid_0's auc: 0.83066\tvalid_0's binary_logloss: 0.452913\n",
      "[LightGBM] [Debug] Re-bagging, using 213482 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[69]\tvalid_0's auc: 0.83062\tvalid_0's binary_logloss: 0.452279\n",
      "[LightGBM] [Debug] Re-bagging, using 213419 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[70]\tvalid_0's auc: 0.830738\tvalid_0's binary_logloss: 0.451611\n",
      "[LightGBM] [Debug] Re-bagging, using 213512 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[71]\tvalid_0's auc: 0.830661\tvalid_0's binary_logloss: 0.451017\n",
      "[LightGBM] [Debug] Re-bagging, using 213743 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[72]\tvalid_0's auc: 0.830803\tvalid_0's binary_logloss: 0.450492\n",
      "[LightGBM] [Debug] Re-bagging, using 213643 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[73]\tvalid_0's auc: 0.830763\tvalid_0's binary_logloss: 0.449687\n",
      "[LightGBM] [Debug] Re-bagging, using 213362 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[74]\tvalid_0's auc: 0.830662\tvalid_0's binary_logloss: 0.449107\n",
      "[LightGBM] [Debug] Re-bagging, using 213595 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[75]\tvalid_0's auc: 0.830642\tvalid_0's binary_logloss: 0.448325\n",
      "[LightGBM] [Debug] Re-bagging, using 213856 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[76]\tvalid_0's auc: 0.830745\tvalid_0's binary_logloss: 0.44754\n",
      "[LightGBM] [Debug] Re-bagging, using 213837 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[77]\tvalid_0's auc: 0.830719\tvalid_0's binary_logloss: 0.446921\n",
      "[LightGBM] [Debug] Re-bagging, using 213926 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[78]\tvalid_0's auc: 0.830737\tvalid_0's binary_logloss: 0.446332\n",
      "[LightGBM] [Debug] Re-bagging, using 213709 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[79]\tvalid_0's auc: 0.830666\tvalid_0's binary_logloss: 0.445776\n",
      "[LightGBM] [Debug] Re-bagging, using 213881 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[80]\tvalid_0's auc: 0.830662\tvalid_0's binary_logloss: 0.445366\n",
      "Early stopping, best iteration is:\n",
      "[30]\tvalid_0's auc: 0.831222\tvalid_0's binary_logloss: 0.48554\n"
     ]
    }
   ],
   "source": [
    "# Train the model\n",
    "if informal:\n",
    "    lgb_Classfication.fit(trn_user_item_feats_df_rank_model[lgb_cols], trn_user_item_feats_df_rank_model['label'],\n",
    "                    eval_set=[(val_user_item_feats_df_rank_model[lgb_cols], val_user_item_feats_df_rank_model['label'])], \n",
    "                    eval_metric=['auc', ],early_stopping_rounds=50, )\n",
    "else:\n",
    "    lgb_Classfication.fit(trn_user_item_feats_df_rank_model[lgb_cols], trn_user_item_feats_df_rank_model['label'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 192,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Predict on the test set\n",
    "tst_user_item_feats_df['pred_score'] = lgb_Classfication.predict_proba(tst_user_item_feats_df[lgb_cols])[:,1]\n",
    "\n",
    "# Save a copy of these scores for the later model ensembling\n",
    "tst_user_item_feats_df[['user_id', 'click_article_id', 'pred_score']].to_csv(pathcache + 'lgb_cls_score.csv', index=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 193,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Re-rank the predictions and generate the submission file\n",
    "rank_results = tst_user_item_feats_df[['user_id', 'click_article_id', 'pred_score']].copy()\n",
    "rank_results['click_article_id'] = rank_results['click_article_id'].astype(int)\n",
    "submit(rank_results, topk=5, model_name='lgb_cls')"
   ]
  },
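  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Throughout these cells, per-user ranks come from `groupby(...)['pred_score'].rank(ascending=False, method='first')`. A small self-contained sketch (hypothetical toy scores) of how that rank yields a top-k list per user:\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# Toy scored candidates for two users (hypothetical data)\n",
    "scores = pd.DataFrame({\n",
    "    'user_id':          [1, 1, 1, 2, 2, 2],\n",
    "    'click_article_id': [10, 11, 12, 20, 21, 22],\n",
    "    'pred_score':       [0.2, 0.9, 0.5, 0.8, 0.1, 0.8],\n",
    "})\n",
    "\n",
    "# Rank within each user, highest score first; method='first' breaks ties by row order\n",
    "scores['pred_rank'] = scores.groupby(['user_id'])['pred_score'].rank(ascending=False, method='first')\n",
    "\n",
    "# Keep the top-2 candidates per user\n",
    "top2 = scores[scores['pred_rank'] <= 2].sort_values(by=['user_id', 'pred_rank'])\n",
    "print(top2[['user_id', 'click_article_id']].values.tolist())  # [[1, 11], [1, 12], [2, 20], [2, 22]]\n",
    "```"
   ]
  },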
  {
   "cell_type": "code",
   "execution_count": 194,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LightGBM] [Info] Number of positive: 73370, number of negative: 171060\n",
      "[LightGBM] [Debug] Dataset::GetMultiBinFromAllFeatures: sparse rate 0.042939\n",
      "[LightGBM] [Debug] init for col-wise cost 0.000031 seconds, init for row-wise cost 0.012362 seconds\n",
      "[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.002691 seconds.\n",
      "You can set `force_row_wise=true` to remove the overhead.\n",
      "And if memory is not enough, you can set `force_col_wise=true`.\n",
      "[LightGBM] [Debug] Using Dense Multi-Val Bin\n",
      "[LightGBM] [Info] Total Bins 4128\n",
      "[LightGBM] [Info] Number of data points in the train set: 244430, number of used features: 23\n",
      "[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.300168 -> initscore=-0.846499\n",
      "[LightGBM] [Info] Start training from score -0.846499\n",
      "[LightGBM] [Debug] Re-bagging, using 171226 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[1]\tvalid_0's auc: 0.781908\tvalid_0's binary_logloss: 0.608409\n",
      "Training until validation scores don't improve for 50 rounds\n",
      "[LightGBM] [Debug] Re-bagging, using 170895 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[2]\tvalid_0's auc: 0.788053\tvalid_0's binary_logloss: 0.607341\n",
      "[LightGBM] [Debug] Re-bagging, using 170820 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[3]\tvalid_0's auc: 0.789432\tvalid_0's binary_logloss: 0.605643\n",
      "[LightGBM] [Debug] Re-bagging, using 170831 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[4]\tvalid_0's auc: 0.794504\tvalid_0's binary_logloss: 0.603582\n",
      "[LightGBM] [Debug] Re-bagging, using 171014 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[5]\tvalid_0's auc: 0.795414\tvalid_0's binary_logloss: 0.601552\n",
      "[LightGBM] [Debug] Re-bagging, using 171645 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[6]\tvalid_0's auc: 0.796901\tvalid_0's binary_logloss: 0.600257\n",
      "[LightGBM] [Debug] Re-bagging, using 170965 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[7]\tvalid_0's auc: 0.79644\tvalid_0's binary_logloss: 0.598325\n",
      "[LightGBM] [Debug] Re-bagging, using 171325 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[8]\tvalid_0's auc: 0.797316\tvalid_0's binary_logloss: 0.597168\n",
      "[LightGBM] [Debug] Re-bagging, using 171284 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[9]\tvalid_0's auc: 0.797376\tvalid_0's binary_logloss: 0.595267\n",
      "[LightGBM] [Debug] Re-bagging, using 171110 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[10]\tvalid_0's auc: 0.798624\tvalid_0's binary_logloss: 0.593879\n",
      "[LightGBM] [Debug] Re-bagging, using 170888 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[11]\tvalid_0's auc: 0.79866\tvalid_0's binary_logloss: 0.592459\n",
      "[LightGBM] [Debug] Re-bagging, using 171229 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[12]\tvalid_0's auc: 0.798456\tvalid_0's binary_logloss: 0.59141\n",
      "[LightGBM] [Debug] Re-bagging, using 171320 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[13]\tvalid_0's auc: 0.798378\tvalid_0's binary_logloss: 0.589699\n",
      "[LightGBM] [Debug] Re-bagging, using 171046 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[14]\tvalid_0's auc: 0.798186\tvalid_0's binary_logloss: 0.58796\n",
      "[LightGBM] [Debug] Re-bagging, using 170873 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[15]\tvalid_0's auc: 0.799229\tvalid_0's binary_logloss: 0.586684\n",
      "[LightGBM] [Debug] Re-bagging, using 171464 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[16]\tvalid_0's auc: 0.798929\tvalid_0's binary_logloss: 0.585022\n",
      "[LightGBM] [Debug] Re-bagging, using 171138 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[17]\tvalid_0's auc: 0.799086\tvalid_0's binary_logloss: 0.583907\n",
      "[LightGBM] [Debug] Re-bagging, using 171314 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[18]\tvalid_0's auc: 0.799206\tvalid_0's binary_logloss: 0.582641\n",
      "[LightGBM] [Debug] Re-bagging, using 171122 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[19]\tvalid_0's auc: 0.799462\tvalid_0's binary_logloss: 0.580987\n",
      "[LightGBM] [Debug] Re-bagging, using 171195 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[20]\tvalid_0's auc: 0.800166\tvalid_0's binary_logloss: 0.580577\n",
      "[LightGBM] [Debug] Re-bagging, using 170739 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[21]\tvalid_0's auc: 0.800565\tvalid_0's binary_logloss: 0.579364\n",
      "[LightGBM] [Debug] Re-bagging, using 170918 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[22]\tvalid_0's auc: 0.800535\tvalid_0's binary_logloss: 0.578313\n",
      "[LightGBM] [Debug] Re-bagging, using 170862 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[23]\tvalid_0's auc: 0.800595\tvalid_0's binary_logloss: 0.576754\n",
      "[LightGBM] [Debug] Re-bagging, using 171691 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[24]\tvalid_0's auc: 0.800739\tvalid_0's binary_logloss: 0.575211\n",
      "[LightGBM] [Debug] Re-bagging, using 171161 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[25]\tvalid_0's auc: 0.801069\tvalid_0's binary_logloss: 0.574078\n",
      "[LightGBM] [Debug] Re-bagging, using 171544 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[26]\tvalid_0's auc: 0.801143\tvalid_0's binary_logloss: 0.572589\n",
      "[LightGBM] [Debug] Re-bagging, using 171158 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[27]\tvalid_0's auc: 0.801427\tvalid_0's binary_logloss: 0.571486\n",
      "[LightGBM] [Debug] Re-bagging, using 170758 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[28]\tvalid_0's auc: 0.801629\tvalid_0's binary_logloss: 0.570007\n",
      "[LightGBM] [Debug] Re-bagging, using 170878 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[29]\tvalid_0's auc: 0.801579\tvalid_0's binary_logloss: 0.56857\n",
      "[LightGBM] [Debug] Re-bagging, using 171162 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[30]\tvalid_0's auc: 0.801436\tvalid_0's binary_logloss: 0.567193\n",
      "[LightGBM] [Debug] Re-bagging, using 171066 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[31]\tvalid_0's auc: 0.80228\tvalid_0's binary_logloss: 0.566131\n",
      "[LightGBM] [Debug] Re-bagging, using 170918 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[32]\tvalid_0's auc: 0.80239\tvalid_0's binary_logloss: 0.564744\n",
      "[LightGBM] [Debug] Re-bagging, using 171029 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[33]\tvalid_0's auc: 0.802436\tvalid_0's binary_logloss: 0.563371\n",
      "[LightGBM] [Debug] Re-bagging, using 170864 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[34]\tvalid_0's auc: 0.80256\tvalid_0's binary_logloss: 0.562317\n",
      "[LightGBM] [Debug] Re-bagging, using 170975 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[35]\tvalid_0's auc: 0.802515\tvalid_0's binary_logloss: 0.560987\n",
      "[LightGBM] [Debug] Re-bagging, using 171055 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[36]\tvalid_0's auc: 0.802786\tvalid_0's binary_logloss: 0.560368\n",
      "[LightGBM] [Debug] Re-bagging, using 171182 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[37]\tvalid_0's auc: 0.802892\tvalid_0's binary_logloss: 0.559347\n",
      "[LightGBM] [Debug] Re-bagging, using 170900 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[38]\tvalid_0's auc: 0.803044\tvalid_0's binary_logloss: 0.558074\n",
      "[LightGBM] [Debug] Re-bagging, using 171110 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[39]\tvalid_0's auc: 0.803485\tvalid_0's binary_logloss: 0.55708\n",
      "[LightGBM] [Debug] Re-bagging, using 171177 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[40]\tvalid_0's auc: 0.804514\tvalid_0's binary_logloss: 0.556284\n",
      "[LightGBM] [Debug] Re-bagging, using 171606 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[41]\tvalid_0's auc: 0.80448\tvalid_0's binary_logloss: 0.555322\n",
      "[LightGBM] [Debug] Re-bagging, using 171142 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[42]\tvalid_0's auc: 0.804768\tvalid_0's binary_logloss: 0.554103\n",
      "[LightGBM] [Debug] Re-bagging, using 171111 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[43]\tvalid_0's auc: 0.804634\tvalid_0's binary_logloss: 0.552932\n",
      "[LightGBM] [Debug] Re-bagging, using 171089 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[44]\tvalid_0's auc: 0.806091\tvalid_0's binary_logloss: 0.551823\n",
      "[LightGBM] [Debug] Re-bagging, using 171313 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 12\n",
      "[45]\tvalid_0's auc: 0.806151\tvalid_0's binary_logloss: 0.551049\n",
      "[LightGBM] [Debug] Re-bagging, using 171108 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[46]\tvalid_0's auc: 0.806119\tvalid_0's binary_logloss: 0.549926\n",
      "[LightGBM] [Debug] Re-bagging, using 170987 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[47]\tvalid_0's auc: 0.806104\tvalid_0's binary_logloss: 0.548798\n",
      "[LightGBM] [Debug] Re-bagging, using 170599 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[48]\tvalid_0's auc: 0.806316\tvalid_0's binary_logloss: 0.548018\n",
      "[LightGBM] [Debug] Re-bagging, using 171150 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[49]\tvalid_0's auc: 0.806539\tvalid_0's binary_logloss: 0.547675\n",
      "[LightGBM] [Debug] Re-bagging, using 171028 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[50]\tvalid_0's auc: 0.806719\tvalid_0's binary_logloss: 0.547345\n",
      "[LightGBM] [Debug] Re-bagging, using 170851 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[51]\tvalid_0's auc: 0.807009\tvalid_0's binary_logloss: 0.5465\n",
      "[LightGBM] [Debug] Re-bagging, using 170698 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[52]\tvalid_0's auc: 0.807072\tvalid_0's binary_logloss: 0.545671\n",
      "[LightGBM] [Debug] Re-bagging, using 171351 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[53]\tvalid_0's auc: 0.807018\tvalid_0's binary_logloss: 0.544612\n",
      "[LightGBM] [Debug] Re-bagging, using 170915 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[54]\tvalid_0's auc: 0.807043\tvalid_0's binary_logloss: 0.543549\n",
      "[LightGBM] [Debug] Re-bagging, using 171314 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[55]\tvalid_0's auc: 0.807276\tvalid_0's binary_logloss: 0.543\n",
      "[LightGBM] [Debug] Re-bagging, using 170926 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[56]\tvalid_0's auc: 0.807402\tvalid_0's binary_logloss: 0.541965\n",
      "[LightGBM] [Debug] Re-bagging, using 171142 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[57]\tvalid_0's auc: 0.807424\tvalid_0's binary_logloss: 0.540919\n",
      "[LightGBM] [Debug] Re-bagging, using 171399 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[58]\tvalid_0's auc: 0.807419\tvalid_0's binary_logloss: 0.540191\n",
      "[LightGBM] [Debug] Re-bagging, using 170984 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[59]\tvalid_0's auc: 0.807422\tvalid_0's binary_logloss: 0.539203\n",
      "[LightGBM] [Debug] Re-bagging, using 171513 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[60]\tvalid_0's auc: 0.80733\tvalid_0's binary_logloss: 0.538277\n",
      "[LightGBM] [Debug] Re-bagging, using 171014 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[61]\tvalid_0's auc: 0.807307\tvalid_0's binary_logloss: 0.537315\n",
      "[LightGBM] [Debug] Re-bagging, using 171120 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[62]\tvalid_0's auc: 0.808205\tvalid_0's binary_logloss: 0.536393\n",
      "[LightGBM] [Debug] Re-bagging, using 171155 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[63]\tvalid_0's auc: 0.808597\tvalid_0's binary_logloss: 0.535587\n",
      "[LightGBM] [Debug] Re-bagging, using 171153 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[64]\tvalid_0's auc: 0.808721\tvalid_0's binary_logloss: 0.534645\n",
      "[LightGBM] [Debug] Re-bagging, using 170870 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[65]\tvalid_0's auc: 0.808671\tvalid_0's binary_logloss: 0.533758\n",
      "[LightGBM] [Debug] Re-bagging, using 171109 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[66]\tvalid_0's auc: 0.808848\tvalid_0's binary_logloss: 0.533037\n",
      "[LightGBM] [Debug] Re-bagging, using 171013 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[67]\tvalid_0's auc: 0.808763\tvalid_0's binary_logloss: 0.532179\n",
      "[LightGBM] [Debug] Re-bagging, using 171238 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[68]\tvalid_0's auc: 0.808859\tvalid_0's binary_logloss: 0.531769\n",
      "[LightGBM] [Debug] Re-bagging, using 170735 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[69]\tvalid_0's auc: 0.808984\tvalid_0's binary_logloss: 0.531079\n",
      "[LightGBM] [Debug] Re-bagging, using 170859 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[70]\tvalid_0's auc: 0.809001\tvalid_0's binary_logloss: 0.530411\n",
      "[LightGBM] [Debug] Re-bagging, using 170817 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[71]\tvalid_0's auc: 0.808967\tvalid_0's binary_logloss: 0.529797\n",
      "[LightGBM] [Debug] Re-bagging, using 170978 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[72]\tvalid_0's auc: 0.809187\tvalid_0's binary_logloss: 0.529203\n",
      "[LightGBM] [Debug] Re-bagging, using 171016 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[73]\tvalid_0's auc: 0.809169\tvalid_0's binary_logloss: 0.528378\n",
      "[LightGBM] [Debug] Re-bagging, using 170898 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[74]\tvalid_0's auc: 0.809264\tvalid_0's binary_logloss: 0.527747\n",
      "[LightGBM] [Debug] Re-bagging, using 170990 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[75]\tvalid_0's auc: 0.80932\tvalid_0's binary_logloss: 0.526901\n",
      "[LightGBM] [Debug] Re-bagging, using 171035 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[76]\tvalid_0's auc: 0.80937\tvalid_0's binary_logloss: 0.526081\n",
      "[LightGBM] [Debug] Re-bagging, using 171088 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[77]\tvalid_0's auc: 0.809352\tvalid_0's binary_logloss: 0.525451\n",
      "[LightGBM] [Debug] Re-bagging, using 171028 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[78]\tvalid_0's auc: 0.809446\tvalid_0's binary_logloss: 0.524834\n",
      "[LightGBM] [Debug] Re-bagging, using 170887 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[79]\tvalid_0's auc: 0.809674\tvalid_0's binary_logloss: 0.52419\n",
      "[LightGBM] [Debug] Re-bagging, using 171066 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[80]\tvalid_0's auc: 0.80976\tvalid_0's binary_logloss: 0.523736\n",
      "[LightGBM] [Debug] Re-bagging, using 171581 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[81]\tvalid_0's auc: 0.810806\tvalid_0's binary_logloss: 0.523205\n",
      "[LightGBM] [Debug] Re-bagging, using 170925 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[82]\tvalid_0's auc: 0.81099\tvalid_0's binary_logloss: 0.522579\n",
      "[LightGBM] [Debug] Re-bagging, using 171101 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[83]\tvalid_0's auc: 0.810943\tvalid_0's binary_logloss: 0.521834\n",
      "[LightGBM] [Debug] Re-bagging, using 171098 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[84]\tvalid_0's auc: 0.81115\tvalid_0's binary_logloss: 0.521047\n",
      "[LightGBM] [Debug] Re-bagging, using 171101 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[85]\tvalid_0's auc: 0.811476\tvalid_0's binary_logloss: 0.520399\n",
      "[LightGBM] [Debug] Re-bagging, using 170805 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[86]\tvalid_0's auc: 0.811499\tvalid_0's binary_logloss: 0.519683\n",
      "[LightGBM] [Debug] Re-bagging, using 171286 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[87]\tvalid_0's auc: 0.811622\tvalid_0's binary_logloss: 0.519112\n",
      "[LightGBM] [Debug] Re-bagging, using 171041 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[88]\tvalid_0's auc: 0.811602\tvalid_0's binary_logloss: 0.518561\n",
      "[LightGBM] [Debug] Re-bagging, using 170456 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[89]\tvalid_0's auc: 0.81161\tvalid_0's binary_logloss: 0.517848\n",
      "[LightGBM] [Debug] Re-bagging, using 171308 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[90]\tvalid_0's auc: 0.811571\tvalid_0's binary_logloss: 0.517164\n",
      "[LightGBM] [Debug] Re-bagging, using 171332 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[91]\tvalid_0's auc: 0.811542\tvalid_0's binary_logloss: 0.516777\n",
      "[LightGBM] [Debug] Re-bagging, using 171326 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[92]\tvalid_0's auc: 0.811527\tvalid_0's binary_logloss: 0.516086\n",
      "[LightGBM] [Debug] Re-bagging, using 171033 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[93]\tvalid_0's auc: 0.811519\tvalid_0's binary_logloss: 0.515421\n",
      "[LightGBM] [Debug] Re-bagging, using 171338 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[94]\tvalid_0's auc: 0.811496\tvalid_0's binary_logloss: 0.514774\n",
      "[LightGBM] [Debug] Re-bagging, using 171117 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[95]\tvalid_0's auc: 0.811584\tvalid_0's binary_logloss: 0.51426\n",
      "[LightGBM] [Debug] Re-bagging, using 171246 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[96]\tvalid_0's auc: 0.812039\tvalid_0's binary_logloss: 0.513873\n",
      "[LightGBM] [Debug] Re-bagging, using 171136 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[97]\tvalid_0's auc: 0.812409\tvalid_0's binary_logloss: 0.513453\n",
      "[LightGBM] [Debug] Re-bagging, using 170721 data to train\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[98]\tvalid_0's auc: 0.812372\tvalid_0's binary_logloss: 0.51284\n",
      "[LightGBM] [Debug] Re-bagging, using 170893 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[99]\tvalid_0's auc: 0.812441\tvalid_0's binary_logloss: 0.51257\n",
      "[LightGBM] [Debug] Re-bagging, using 170738 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[100]\tvalid_0's auc: 0.812472\tvalid_0's binary_logloss: 0.512077\n",
      "Did not meet early stopping. Best iteration is:\n",
      "[100]\tvalid_0's auc: 0.812472\tvalid_0's binary_logloss: 0.512077\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "<ipython-input-194-a71baaa13d59>:30: SettingWithCopyWarning: \n",
      "A value is trying to be set on a copy of a slice from a DataFrame.\n",
      "Try using .loc[row_indexer,col_indexer] = value instead\n",
      "\n",
      "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n",
      "  valid_idx['pred_score'] = lgb_Classfication.predict_proba(valid_idx[lgb_cols],\n",
      "<ipython-input-194-a71baaa13d59>:37: SettingWithCopyWarning: \n",
      "A value is trying to be set on a copy of a slice from a DataFrame.\n",
      "Try using .loc[row_indexer,col_indexer] = value instead\n",
      "\n",
      "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n",
      "  valid_idx['pred_rank'] = valid_idx.groupby(['user_id'])['pred_score'].rank(ascending=False, method='first')\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LightGBM] [Info] Number of positive: 73307, number of negative: 171165\n",
      "[LightGBM] [Debug] Dataset::GetMultiBinFromAllFeatures: sparse rate 0.042963\n",
      "[LightGBM] [Debug] init for col-wise cost 0.000029 seconds, init for row-wise cost 0.011644 seconds\n",
      "[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.014119 seconds.\n",
      "You can set `force_col_wise=true` to remove the overhead.\n",
      "[LightGBM] [Info] Total Bins 4120\n",
      "[LightGBM] [Info] Number of data points in the train set: 244472, number of used features: 23\n",
      "[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.299858 -> initscore=-0.847972\n",
      "[LightGBM] [Info] Start training from score -0.847972\n",
      "[LightGBM] [Debug] Re-bagging, using 171251 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[1]\tvalid_0's auc: 0.781762\tvalid_0's binary_logloss: 0.609459\n",
      "Training until validation scores don't improve for 50 rounds\n",
      "[LightGBM] [Debug] Re-bagging, using 170929 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[2]\tvalid_0's auc: 0.789617\tvalid_0's binary_logloss: 0.608376\n",
      "[LightGBM] [Debug] Re-bagging, using 170848 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[3]\tvalid_0's auc: 0.791484\tvalid_0's binary_logloss: 0.606693\n",
      "[LightGBM] [Debug] Re-bagging, using 170856 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[4]\tvalid_0's auc: 0.797048\tvalid_0's binary_logloss: 0.604628\n",
      "[LightGBM] [Debug] Re-bagging, using 171046 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[5]\tvalid_0's auc: 0.798455\tvalid_0's binary_logloss: 0.602594\n",
      "[LightGBM] [Debug] Re-bagging, using 171672 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[6]\tvalid_0's auc: 0.798612\tvalid_0's binary_logloss: 0.601306\n",
      "[LightGBM] [Debug] Re-bagging, using 171001 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[7]\tvalid_0's auc: 0.798921\tvalid_0's binary_logloss: 0.599343\n",
      "[LightGBM] [Debug] Re-bagging, using 171359 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[8]\tvalid_0's auc: 0.799462\tvalid_0's binary_logloss: 0.598176\n",
      "[LightGBM] [Debug] Re-bagging, using 171303 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[9]\tvalid_0's auc: 0.799379\tvalid_0's binary_logloss: 0.596271\n",
      "[LightGBM] [Debug] Re-bagging, using 171138 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[10]\tvalid_0's auc: 0.800564\tvalid_0's binary_logloss: 0.594878\n",
      "[LightGBM] [Debug] Re-bagging, using 170909 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[11]\tvalid_0's auc: 0.800716\tvalid_0's binary_logloss: 0.593462\n",
      "[LightGBM] [Debug] Re-bagging, using 171280 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[12]\tvalid_0's auc: 0.800753\tvalid_0's binary_logloss: 0.592394\n",
      "[LightGBM] [Debug] Re-bagging, using 171327 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[13]\tvalid_0's auc: 0.800501\tvalid_0's binary_logloss: 0.590673\n",
      "[LightGBM] [Debug] Re-bagging, using 171103 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[14]\tvalid_0's auc: 0.800821\tvalid_0's binary_logloss: 0.588915\n",
      "[LightGBM] [Debug] Re-bagging, using 170879 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[15]\tvalid_0's auc: 0.802057\tvalid_0's binary_logloss: 0.587629\n",
      "[LightGBM] [Debug] Re-bagging, using 171494 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[16]\tvalid_0's auc: 0.801786\tvalid_0's binary_logloss: 0.585946\n",
      "[LightGBM] [Debug] Re-bagging, using 171178 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[17]\tvalid_0's auc: 0.8018\tvalid_0's binary_logloss: 0.584832\n",
      "[LightGBM] [Debug] Re-bagging, using 171367 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[18]\tvalid_0's auc: 0.801877\tvalid_0's binary_logloss: 0.583582\n",
      "[LightGBM] [Debug] Re-bagging, using 171130 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[19]\tvalid_0's auc: 0.80191\tvalid_0's binary_logloss: 0.581956\n",
      "[LightGBM] [Debug] Re-bagging, using 171232 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[20]\tvalid_0's auc: 0.802202\tvalid_0's binary_logloss: 0.58161\n",
      "[LightGBM] [Debug] Re-bagging, using 170742 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[21]\tvalid_0's auc: 0.802733\tvalid_0's binary_logloss: 0.580399\n",
      "[LightGBM] [Debug] Re-bagging, using 170929 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[22]\tvalid_0's auc: 0.802547\tvalid_0's binary_logloss: 0.579365\n",
      "[LightGBM] [Debug] Re-bagging, using 170910 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[23]\tvalid_0's auc: 0.802428\tvalid_0's binary_logloss: 0.577797\n",
      "[LightGBM] [Debug] Re-bagging, using 171717 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[24]\tvalid_0's auc: 0.802603\tvalid_0's binary_logloss: 0.576242\n",
      "[LightGBM] [Debug] Re-bagging, using 171201 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[25]\tvalid_0's auc: 0.802953\tvalid_0's binary_logloss: 0.575093\n",
      "[LightGBM] [Debug] Re-bagging, using 171554 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[26]\tvalid_0's auc: 0.802906\tvalid_0's binary_logloss: 0.573607\n",
      "[LightGBM] [Debug] Re-bagging, using 171239 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[27]\tvalid_0's auc: 0.803061\tvalid_0's binary_logloss: 0.572507\n",
      "[LightGBM] [Debug] Re-bagging, using 170762 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[28]\tvalid_0's auc: 0.803068\tvalid_0's binary_logloss: 0.571063\n",
      "[LightGBM] [Debug] Re-bagging, using 170888 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[29]\tvalid_0's auc: 0.803143\tvalid_0's binary_logloss: 0.569616\n",
      "[LightGBM] [Debug] Re-bagging, using 171217 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[30]\tvalid_0's auc: 0.802964\tvalid_0's binary_logloss: 0.568246\n",
      "[LightGBM] [Debug] Re-bagging, using 171108 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[31]\tvalid_0's auc: 0.803338\tvalid_0's binary_logloss: 0.56722\n",
      "[LightGBM] [Debug] Re-bagging, using 170945 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[32]\tvalid_0's auc: 0.803287\tvalid_0's binary_logloss: 0.565845\n",
      "[LightGBM] [Debug] Re-bagging, using 171047 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[33]\tvalid_0's auc: 0.803282\tvalid_0's binary_logloss: 0.564488\n",
      "[LightGBM] [Debug] Re-bagging, using 170899 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[34]\tvalid_0's auc: 0.80356\tvalid_0's binary_logloss: 0.563403\n",
      "[LightGBM] [Debug] Re-bagging, using 170993 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[35]\tvalid_0's auc: 0.803595\tvalid_0's binary_logloss: 0.56207\n",
      "[LightGBM] [Debug] Re-bagging, using 171072 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[36]\tvalid_0's auc: 0.803892\tvalid_0's binary_logloss: 0.561376\n",
      "[LightGBM] [Debug] Re-bagging, using 171237 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[37]\tvalid_0's auc: 0.804045\tvalid_0's binary_logloss: 0.560332\n",
      "[LightGBM] [Debug] Re-bagging, using 170947 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[38]\tvalid_0's auc: 0.804316\tvalid_0's binary_logloss: 0.559032\n",
      "[LightGBM] [Debug] Re-bagging, using 171145 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 12\n",
      "[39]\tvalid_0's auc: 0.804592\tvalid_0's binary_logloss: 0.558055\n",
      "[LightGBM] [Debug] Re-bagging, using 171207 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[40]\tvalid_0's auc: 0.805579\tvalid_0's binary_logloss: 0.557252\n",
      "[LightGBM] [Debug] Re-bagging, using 171599 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[41]\tvalid_0's auc: 0.806191\tvalid_0's binary_logloss: 0.556224\n",
      "[LightGBM] [Debug] Re-bagging, using 171133 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[42]\tvalid_0's auc: 0.806103\tvalid_0's binary_logloss: 0.555027\n",
      "[LightGBM] [Debug] Re-bagging, using 171154 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[43]\tvalid_0's auc: 0.806025\tvalid_0's binary_logloss: 0.553871\n",
      "[LightGBM] [Debug] Re-bagging, using 171152 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[44]\tvalid_0's auc: 0.806545\tvalid_0's binary_logloss: 0.552931\n",
      "[LightGBM] [Debug] Re-bagging, using 171344 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[45]\tvalid_0's auc: 0.806693\tvalid_0's binary_logloss: 0.55215\n",
      "[LightGBM] [Debug] Re-bagging, using 171129 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[46]\tvalid_0's auc: 0.806645\tvalid_0's binary_logloss: 0.551008\n",
      "[LightGBM] [Debug] Re-bagging, using 171004 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[47]\tvalid_0's auc: 0.806779\tvalid_0's binary_logloss: 0.549861\n",
      "[LightGBM] [Debug] Re-bagging, using 170628 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 12\n",
      "[48]\tvalid_0's auc: 0.806854\tvalid_0's binary_logloss: 0.549087\n",
      "[LightGBM] [Debug] Re-bagging, using 171169 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[49]\tvalid_0's auc: 0.807026\tvalid_0's binary_logloss: 0.548759\n",
      "[LightGBM] [Debug] Re-bagging, using 171088 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[50]\tvalid_0's auc: 0.807148\tvalid_0's binary_logloss: 0.548441\n",
      "[LightGBM] [Debug] Re-bagging, using 170905 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[51]\tvalid_0's auc: 0.807586\tvalid_0's binary_logloss: 0.547561\n",
      "[LightGBM] [Debug] Re-bagging, using 170729 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[52]\tvalid_0's auc: 0.807698\tvalid_0's binary_logloss: 0.546735\n",
      "[LightGBM] [Debug] Re-bagging, using 171352 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[53]\tvalid_0's auc: 0.807651\tvalid_0's binary_logloss: 0.545674\n",
      "[LightGBM] [Debug] Re-bagging, using 170928 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[54]\tvalid_0's auc: 0.807672\tvalid_0's binary_logloss: 0.544622\n",
      "[LightGBM] [Debug] Re-bagging, using 171343 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[55]\tvalid_0's auc: 0.807844\tvalid_0's binary_logloss: 0.544092\n",
      "[LightGBM] [Debug] Re-bagging, using 170969 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[56]\tvalid_0's auc: 0.807937\tvalid_0's binary_logloss: 0.543077\n",
      "[LightGBM] [Debug] Re-bagging, using 171180 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[57]\tvalid_0's auc: 0.807907\tvalid_0's binary_logloss: 0.542059\n",
      "[LightGBM] [Debug] Re-bagging, using 171431 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[58]\tvalid_0's auc: 0.80797\tvalid_0's binary_logloss: 0.541329\n",
      "[LightGBM] [Debug] Re-bagging, using 171032 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[59]\tvalid_0's auc: 0.808065\tvalid_0's binary_logloss: 0.540305\n",
      "[LightGBM] [Debug] Re-bagging, using 171517 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[60]\tvalid_0's auc: 0.80794\tvalid_0's binary_logloss: 0.539383\n",
      "[LightGBM] [Debug] Re-bagging, using 171052 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[61]\tvalid_0's auc: 0.807864\tvalid_0's binary_logloss: 0.53842\n",
      "[LightGBM] [Debug] Re-bagging, using 171140 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[62]\tvalid_0's auc: 0.808152\tvalid_0's binary_logloss: 0.537633\n",
      "[LightGBM] [Debug] Re-bagging, using 171174 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[63]\tvalid_0's auc: 0.80832\tvalid_0's binary_logloss: 0.536866\n",
      "[LightGBM] [Debug] Re-bagging, using 171195 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[64]\tvalid_0's auc: 0.808442\tvalid_0's binary_logloss: 0.535893\n",
      "[LightGBM] [Debug] Re-bagging, using 170864 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[65]\tvalid_0's auc: 0.808498\tvalid_0's binary_logloss: 0.53499\n",
      "[LightGBM] [Debug] Re-bagging, using 171131 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[66]\tvalid_0's auc: 0.808776\tvalid_0's binary_logloss: 0.534241\n",
      "[LightGBM] [Debug] Re-bagging, using 171063 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[67]\tvalid_0's auc: 0.808758\tvalid_0's binary_logloss: 0.533383\n",
      "[LightGBM] [Debug] Re-bagging, using 171265 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[68]\tvalid_0's auc: 0.80879\tvalid_0's binary_logloss: 0.532975\n",
      "[LightGBM] [Debug] Re-bagging, using 170777 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[69]\tvalid_0's auc: 0.808967\tvalid_0's binary_logloss: 0.532275\n",
      "[LightGBM] [Debug] Re-bagging, using 170891 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[70]\tvalid_0's auc: 0.80898\tvalid_0's binary_logloss: 0.531612\n",
      "[LightGBM] [Debug] Re-bagging, using 170830 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[71]\tvalid_0's auc: 0.809012\tvalid_0's binary_logloss: 0.530985\n",
      "[LightGBM] [Debug] Re-bagging, using 170997 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[72]\tvalid_0's auc: 0.809076\tvalid_0's binary_logloss: 0.530414\n",
      "[LightGBM] [Debug] Re-bagging, using 171039 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[73]\tvalid_0's auc: 0.809037\tvalid_0's binary_logloss: 0.529589\n",
      "[LightGBM] [Debug] Re-bagging, using 170921 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[74]\tvalid_0's auc: 0.809277\tvalid_0's binary_logloss: 0.528914\n",
      "[LightGBM] [Debug] Re-bagging, using 170999 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[75]\tvalid_0's auc: 0.80937\tvalid_0's binary_logloss: 0.528075\n",
      "[LightGBM] [Debug] Re-bagging, using 171089 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[76]\tvalid_0's auc: 0.809402\tvalid_0's binary_logloss: 0.527259\n",
      "[LightGBM] [Debug] Re-bagging, using 171123 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[77]\tvalid_0's auc: 0.809445\tvalid_0's binary_logloss: 0.526619\n",
      "[LightGBM] [Debug] Re-bagging, using 171081 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[78]\tvalid_0's auc: 0.809529\tvalid_0's binary_logloss: 0.526001\n",
      "[LightGBM] [Debug] Re-bagging, using 170957 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 12\n",
      "[79]\tvalid_0's auc: 0.809825\tvalid_0's binary_logloss: 0.52535\n",
      "[LightGBM] [Debug] Re-bagging, using 171100 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[80]\tvalid_0's auc: 0.809893\tvalid_0's binary_logloss: 0.524894\n",
      "[LightGBM] [Debug] Re-bagging, using 171610 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[81]\tvalid_0's auc: 0.810704\tvalid_0's binary_logloss: 0.524457\n",
      "[LightGBM] [Debug] Re-bagging, using 170989 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[82]\tvalid_0's auc: 0.81086\tvalid_0's binary_logloss: 0.523844\n",
      "[LightGBM] [Debug] Re-bagging, using 171106 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[83]\tvalid_0's auc: 0.810878\tvalid_0's binary_logloss: 0.523089\n",
      "[LightGBM] [Debug] Re-bagging, using 171084 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[84]\tvalid_0's auc: 0.811167\tvalid_0's binary_logloss: 0.522286\n",
      "[LightGBM] [Debug] Re-bagging, using 171122 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[85]\tvalid_0's auc: 0.811193\tvalid_0's binary_logloss: 0.521708\n",
      "[LightGBM] [Debug] Re-bagging, using 170813 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[86]\tvalid_0's auc: 0.81125\tvalid_0's binary_logloss: 0.520991\n",
      "[LightGBM] [Debug] Re-bagging, using 171298 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 12\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[87]\tvalid_0's auc: 0.811382\tvalid_0's binary_logloss: 0.520415\n",
      "[LightGBM] [Debug] Re-bagging, using 171050 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[88]\tvalid_0's auc: 0.811513\tvalid_0's binary_logloss: 0.519832\n",
      "[LightGBM] [Debug] Re-bagging, using 170504 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[89]\tvalid_0's auc: 0.811485\tvalid_0's binary_logloss: 0.51913\n",
      "[LightGBM] [Debug] Re-bagging, using 171366 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[90]\tvalid_0's auc: 0.811427\tvalid_0's binary_logloss: 0.518447\n",
      "[LightGBM] [Debug] Re-bagging, using 171370 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[91]\tvalid_0's auc: 0.811488\tvalid_0's binary_logloss: 0.518037\n",
      "[LightGBM] [Debug] Re-bagging, using 171347 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[92]\tvalid_0's auc: 0.811474\tvalid_0's binary_logloss: 0.517357\n",
      "[LightGBM] [Debug] Re-bagging, using 171116 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[93]\tvalid_0's auc: 0.811448\tvalid_0's binary_logloss: 0.516691\n",
      "[LightGBM] [Debug] Re-bagging, using 171374 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[94]\tvalid_0's auc: 0.811434\tvalid_0's binary_logloss: 0.516041\n",
      "[LightGBM] [Debug] Re-bagging, using 171128 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[95]\tvalid_0's auc: 0.811684\tvalid_0's binary_logloss: 0.515491\n",
      "[LightGBM] [Debug] Re-bagging, using 171280 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[96]\tvalid_0's auc: 0.812116\tvalid_0's binary_logloss: 0.515106\n",
      "[LightGBM] [Debug] Re-bagging, using 171151 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[97]\tvalid_0's auc: 0.812416\tvalid_0's binary_logloss: 0.514702\n",
      "[LightGBM] [Debug] Re-bagging, using 170732 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[98]\tvalid_0's auc: 0.812389\tvalid_0's binary_logloss: 0.514081\n",
      "[LightGBM] [Debug] Re-bagging, using 170899 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[99]\tvalid_0's auc: 0.812415\tvalid_0's binary_logloss: 0.513816\n",
      "[LightGBM] [Debug] Re-bagging, using 170749 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[100]\tvalid_0's auc: 0.812864\tvalid_0's binary_logloss: 0.513224\n",
      "Did not meet early stopping. Best iteration is:\n",
      "[100]\tvalid_0's auc: 0.812864\tvalid_0's binary_logloss: 0.513224\n",
      "[LightGBM] [Info] Number of positive: 73359, number of negative: 170856\n",
      "[LightGBM] [Debug] Dataset::GetMultiBinFromAllFeatures: sparse rate 0.042936\n",
      "[LightGBM] [Debug] init for col-wise cost 0.000029 seconds, init for row-wise cost 0.013023 seconds\n",
      "[LightGBM] [Warning] Auto-choosing row-wise multi-threading, the overhead of testing was 0.003330 seconds.\n",
      "You can set `force_row_wise=true` to remove the overhead.\n",
      "And if memory is not enough, you can set `force_col_wise=true`.\n",
      "[LightGBM] [Debug] Using Dense Multi-Val Bin\n",
      "[LightGBM] [Info] Total Bins 4127\n",
      "[LightGBM] [Info] Number of data points in the train set: 244215, number of used features: 23\n",
      "[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.300387 -> initscore=-0.845456\n",
      "[LightGBM] [Info] Start training from score -0.845456\n",
      "[LightGBM] [Debug] Re-bagging, using 171079 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[1]\tvalid_0's auc: 0.781497\tvalid_0's binary_logloss: 0.607683\n",
      "Training until validation scores don't improve for 50 rounds\n",
      "[LightGBM] [Debug] Re-bagging, using 170749 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[2]\tvalid_0's auc: 0.789062\tvalid_0's binary_logloss: 0.60662\n",
      "[LightGBM] [Debug] Re-bagging, using 170674 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[3]\tvalid_0's auc: 0.789897\tvalid_0's binary_logloss: 0.604935\n",
      "[LightGBM] [Debug] Re-bagging, using 170658 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[4]\tvalid_0's auc: 0.795982\tvalid_0's binary_logloss: 0.602878\n",
      "[LightGBM] [Debug] Re-bagging, using 170872 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[5]\tvalid_0's auc: 0.796474\tvalid_0's binary_logloss: 0.600871\n",
      "[LightGBM] [Debug] Re-bagging, using 171488 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[6]\tvalid_0's auc: 0.797663\tvalid_0's binary_logloss: 0.599603\n",
      "[LightGBM] [Debug] Re-bagging, using 170821 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[7]\tvalid_0's auc: 0.797171\tvalid_0's binary_logloss: 0.597689\n",
      "[LightGBM] [Debug] Re-bagging, using 171188 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[8]\tvalid_0's auc: 0.797826\tvalid_0's binary_logloss: 0.596536\n",
      "[LightGBM] [Debug] Re-bagging, using 171136 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[9]\tvalid_0's auc: 0.797503\tvalid_0's binary_logloss: 0.594661\n",
      "[LightGBM] [Debug] Re-bagging, using 170965 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[10]\tvalid_0's auc: 0.798878\tvalid_0's binary_logloss: 0.593277\n",
      "[LightGBM] [Debug] Re-bagging, using 170741 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[11]\tvalid_0's auc: 0.798874\tvalid_0's binary_logloss: 0.591875\n",
      "[LightGBM] [Debug] Re-bagging, using 171105 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[12]\tvalid_0's auc: 0.798857\tvalid_0's binary_logloss: 0.590825\n",
      "[LightGBM] [Debug] Re-bagging, using 171152 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[13]\tvalid_0's auc: 0.798575\tvalid_0's binary_logloss: 0.589117\n",
      "[LightGBM] [Debug] Re-bagging, using 170900 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[14]\tvalid_0's auc: 0.798446\tvalid_0's binary_logloss: 0.587371\n",
      "[LightGBM] [Debug] Re-bagging, using 170708 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[15]\tvalid_0's auc: 0.799921\tvalid_0's binary_logloss: 0.586093\n",
      "[LightGBM] [Debug] Re-bagging, using 171316 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[16]\tvalid_0's auc: 0.799608\tvalid_0's binary_logloss: 0.584434\n",
      "[LightGBM] [Debug] Re-bagging, using 170985 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[17]\tvalid_0's auc: 0.799704\tvalid_0's binary_logloss: 0.583326\n",
      "[LightGBM] [Debug] Re-bagging, using 171168 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[18]\tvalid_0's auc: 0.799674\tvalid_0's binary_logloss: 0.58209\n",
      "[LightGBM] [Debug] Re-bagging, using 170961 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[19]\tvalid_0's auc: 0.799881\tvalid_0's binary_logloss: 0.580461\n",
      "[LightGBM] [Debug] Re-bagging, using 171027 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[20]\tvalid_0's auc: 0.800569\tvalid_0's binary_logloss: 0.58008\n",
      "[LightGBM] [Debug] Re-bagging, using 170588 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[21]\tvalid_0's auc: 0.801278\tvalid_0's binary_logloss: 0.57888\n",
      "[LightGBM] [Debug] Re-bagging, using 170774 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[22]\tvalid_0's auc: 0.801177\tvalid_0's binary_logloss: 0.577847\n",
      "[LightGBM] [Debug] Re-bagging, using 170730 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[23]\tvalid_0's auc: 0.801265\tvalid_0's binary_logloss: 0.57628\n",
      "[LightGBM] [Debug] Re-bagging, using 171547 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[24]\tvalid_0's auc: 0.801371\tvalid_0's binary_logloss: 0.574727\n",
      "[LightGBM] [Debug] Re-bagging, using 171016 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[25]\tvalid_0's auc: 0.801707\tvalid_0's binary_logloss: 0.573596\n",
      "[LightGBM] [Debug] Re-bagging, using 171399 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[26]\tvalid_0's auc: 0.801632\tvalid_0's binary_logloss: 0.572125\n",
      "[LightGBM] [Debug] Re-bagging, using 171051 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[27]\tvalid_0's auc: 0.80196\tvalid_0's binary_logloss: 0.57102\n",
      "[LightGBM] [Debug] Re-bagging, using 170587 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[28]\tvalid_0's auc: 0.801968\tvalid_0's binary_logloss: 0.569581\n",
      "[LightGBM] [Debug] Re-bagging, using 170739 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[29]\tvalid_0's auc: 0.801912\tvalid_0's binary_logloss: 0.568171\n",
      "[LightGBM] [Debug] Re-bagging, using 171034 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[30]\tvalid_0's auc: 0.801695\tvalid_0's binary_logloss: 0.566804\n",
      "[LightGBM] [Debug] Re-bagging, using 170894 data to train\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[31]\tvalid_0's auc: 0.802443\tvalid_0's binary_logloss: 0.565764\n",
      "[LightGBM] [Debug] Re-bagging, using 170752 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[32]\tvalid_0's auc: 0.802501\tvalid_0's binary_logloss: 0.564397\n",
      "[LightGBM] [Debug] Re-bagging, using 170850 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[33]\tvalid_0's auc: 0.802536\tvalid_0's binary_logloss: 0.563047\n",
      "[LightGBM] [Debug] Re-bagging, using 170705 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[34]\tvalid_0's auc: 0.802663\tvalid_0's binary_logloss: 0.561987\n",
      "[LightGBM] [Debug] Re-bagging, using 170820 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[35]\tvalid_0's auc: 0.802614\tvalid_0's binary_logloss: 0.560677\n",
      "[LightGBM] [Debug] Re-bagging, using 170881 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[36]\tvalid_0's auc: 0.802818\tvalid_0's binary_logloss: 0.560056\n",
      "[LightGBM] [Debug] Re-bagging, using 171042 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[37]\tvalid_0's auc: 0.802902\tvalid_0's binary_logloss: 0.559059\n",
      "[LightGBM] [Debug] Re-bagging, using 170746 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[38]\tvalid_0's auc: 0.802848\tvalid_0's binary_logloss: 0.557817\n",
      "[LightGBM] [Debug] Re-bagging, using 170910 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[39]\tvalid_0's auc: 0.803199\tvalid_0's binary_logloss: 0.556848\n",
      "[LightGBM] [Debug] Re-bagging, using 171041 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[40]\tvalid_0's auc: 0.803615\tvalid_0's binary_logloss: 0.556081\n",
      "[LightGBM] [Debug] Re-bagging, using 171441 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[41]\tvalid_0's auc: 0.803626\tvalid_0's binary_logloss: 0.55512\n",
      "[LightGBM] [Debug] Re-bagging, using 170972 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[42]\tvalid_0's auc: 0.8038\tvalid_0's binary_logloss: 0.553899\n",
      "[LightGBM] [Debug] Re-bagging, using 170954 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[43]\tvalid_0's auc: 0.803648\tvalid_0's binary_logloss: 0.552751\n",
      "[LightGBM] [Debug] Re-bagging, using 170957 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[44]\tvalid_0's auc: 0.804209\tvalid_0's binary_logloss: 0.551816\n",
      "[LightGBM] [Debug] Re-bagging, using 171188 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 13\n",
      "[45]\tvalid_0's auc: 0.804431\tvalid_0's binary_logloss: 0.551037\n",
      "[LightGBM] [Debug] Re-bagging, using 170961 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[46]\tvalid_0's auc: 0.804314\tvalid_0's binary_logloss: 0.549923\n",
      "[LightGBM] [Debug] Re-bagging, using 170833 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[47]\tvalid_0's auc: 0.804477\tvalid_0's binary_logloss: 0.548792\n",
      "[LightGBM] [Debug] Re-bagging, using 170455 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[48]\tvalid_0's auc: 0.804847\tvalid_0's binary_logloss: 0.548001\n",
      "[LightGBM] [Debug] Re-bagging, using 171005 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[49]\tvalid_0's auc: 0.80515\tvalid_0's binary_logloss: 0.547652\n",
      "[LightGBM] [Debug] Re-bagging, using 170884 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[50]\tvalid_0's auc: 0.805322\tvalid_0's binary_logloss: 0.547323\n",
      "[LightGBM] [Debug] Re-bagging, using 170713 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[51]\tvalid_0's auc: 0.805708\tvalid_0's binary_logloss: 0.546469\n",
      "[LightGBM] [Debug] Re-bagging, using 170552 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[52]\tvalid_0's auc: 0.805823\tvalid_0's binary_logloss: 0.545647\n",
      "[LightGBM] [Debug] Re-bagging, using 171181 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[53]\tvalid_0's auc: 0.805728\tvalid_0's binary_logloss: 0.544594\n",
      "[LightGBM] [Debug] Re-bagging, using 170740 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[54]\tvalid_0's auc: 0.805625\tvalid_0's binary_logloss: 0.543559\n",
      "[LightGBM] [Debug] Re-bagging, using 171166 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[55]\tvalid_0's auc: 0.805795\tvalid_0's binary_logloss: 0.543016\n",
      "[LightGBM] [Debug] Re-bagging, using 170794 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[56]\tvalid_0's auc: 0.805749\tvalid_0's binary_logloss: 0.542017\n",
      "[LightGBM] [Debug] Re-bagging, using 171017 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[57]\tvalid_0's auc: 0.805712\tvalid_0's binary_logloss: 0.541008\n",
      "[LightGBM] [Debug] Re-bagging, using 171247 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[58]\tvalid_0's auc: 0.805762\tvalid_0's binary_logloss: 0.540283\n",
      "[LightGBM] [Debug] Re-bagging, using 170852 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[59]\tvalid_0's auc: 0.805664\tvalid_0's binary_logloss: 0.539327\n",
      "[LightGBM] [Debug] Re-bagging, using 171365 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[60]\tvalid_0's auc: 0.805509\tvalid_0's binary_logloss: 0.538411\n",
      "[LightGBM] [Debug] Re-bagging, using 170836 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[61]\tvalid_0's auc: 0.805452\tvalid_0's binary_logloss: 0.537454\n",
      "[LightGBM] [Debug] Re-bagging, using 170966 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[62]\tvalid_0's auc: 0.805617\tvalid_0's binary_logloss: 0.536726\n",
      "[LightGBM] [Debug] Re-bagging, using 170990 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[63]\tvalid_0's auc: 0.805668\tvalid_0's binary_logloss: 0.535994\n",
      "[LightGBM] [Debug] Re-bagging, using 171002 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[64]\tvalid_0's auc: 0.805659\tvalid_0's binary_logloss: 0.535087\n",
      "[LightGBM] [Debug] Re-bagging, using 170707 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[65]\tvalid_0's auc: 0.805686\tvalid_0's binary_logloss: 0.534186\n",
      "[LightGBM] [Debug] Re-bagging, using 170963 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[66]\tvalid_0's auc: 0.805824\tvalid_0's binary_logloss: 0.533491\n",
      "[LightGBM] [Debug] Re-bagging, using 170886 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[67]\tvalid_0's auc: 0.805733\tvalid_0's binary_logloss: 0.532641\n",
      "[LightGBM] [Debug] Re-bagging, using 171078 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[68]\tvalid_0's auc: 0.805864\tvalid_0's binary_logloss: 0.532222\n",
      "[LightGBM] [Debug] Re-bagging, using 170592 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[69]\tvalid_0's auc: 0.806011\tvalid_0's binary_logloss: 0.531527\n",
      "[LightGBM] [Debug] Re-bagging, using 170726 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[70]\tvalid_0's auc: 0.806033\tvalid_0's binary_logloss: 0.530873\n",
      "[LightGBM] [Debug] Re-bagging, using 170646 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[71]\tvalid_0's auc: 0.806412\tvalid_0's binary_logloss: 0.530193\n",
      "[LightGBM] [Debug] Re-bagging, using 170824 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[72]\tvalid_0's auc: 0.806618\tvalid_0's binary_logloss: 0.529603\n",
      "[LightGBM] [Debug] Re-bagging, using 170867 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[73]\tvalid_0's auc: 0.806692\tvalid_0's binary_logloss: 0.528761\n",
      "[LightGBM] [Debug] Re-bagging, using 170754 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[74]\tvalid_0's auc: 0.806949\tvalid_0's binary_logloss: 0.528098\n",
      "[LightGBM] [Debug] Re-bagging, using 170848 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[75]\tvalid_0's auc: 0.806938\tvalid_0's binary_logloss: 0.527287\n",
      "[LightGBM] [Debug] Re-bagging, using 170918 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[76]\tvalid_0's auc: 0.807013\tvalid_0's binary_logloss: 0.526461\n",
      "[LightGBM] [Debug] Re-bagging, using 170951 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[77]\tvalid_0's auc: 0.807058\tvalid_0's binary_logloss: 0.525829\n",
      "[LightGBM] [Debug] Re-bagging, using 170906 data to train\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[78]\tvalid_0's auc: 0.807157\tvalid_0's binary_logloss: 0.525213\n",
      "[LightGBM] [Debug] Re-bagging, using 170743 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[79]\tvalid_0's auc: 0.807983\tvalid_0's binary_logloss: 0.524417\n",
      "[LightGBM] [Debug] Re-bagging, using 170906 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[80]\tvalid_0's auc: 0.80805\tvalid_0's binary_logloss: 0.523969\n",
      "[LightGBM] [Debug] Re-bagging, using 171417 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[81]\tvalid_0's auc: 0.808971\tvalid_0's binary_logloss: 0.52351\n",
      "[LightGBM] [Debug] Re-bagging, using 170788 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[82]\tvalid_0's auc: 0.80915\tvalid_0's binary_logloss: 0.522885\n",
      "[LightGBM] [Debug] Re-bagging, using 170930 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[83]\tvalid_0's auc: 0.809152\tvalid_0's binary_logloss: 0.522133\n",
      "[LightGBM] [Debug] Re-bagging, using 170928 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[84]\tvalid_0's auc: 0.809149\tvalid_0's binary_logloss: 0.521408\n",
      "[LightGBM] [Debug] Re-bagging, using 170953 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[85]\tvalid_0's auc: 0.809297\tvalid_0's binary_logloss: 0.520805\n",
      "[LightGBM] [Debug] Re-bagging, using 170647 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[86]\tvalid_0's auc: 0.809371\tvalid_0's binary_logloss: 0.52008\n",
      "[LightGBM] [Debug] Re-bagging, using 171112 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[87]\tvalid_0's auc: 0.809545\tvalid_0's binary_logloss: 0.519514\n",
      "[LightGBM] [Debug] Re-bagging, using 170915 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[88]\tvalid_0's auc: 0.81001\tvalid_0's binary_logloss: 0.518888\n",
      "[LightGBM] [Debug] Re-bagging, using 170339 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[89]\tvalid_0's auc: 0.810037\tvalid_0's binary_logloss: 0.518184\n",
      "[LightGBM] [Debug] Re-bagging, using 171152 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[90]\tvalid_0's auc: 0.809973\tvalid_0's binary_logloss: 0.517507\n",
      "[LightGBM] [Debug] Re-bagging, using 171196 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[91]\tvalid_0's auc: 0.810039\tvalid_0's binary_logloss: 0.517114\n",
      "[LightGBM] [Debug] Re-bagging, using 171166 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[92]\tvalid_0's auc: 0.810001\tvalid_0's binary_logloss: 0.516449\n",
      "[LightGBM] [Debug] Re-bagging, using 170908 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[93]\tvalid_0's auc: 0.810001\tvalid_0's binary_logloss: 0.515784\n",
      "[LightGBM] [Debug] Re-bagging, using 171207 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[94]\tvalid_0's auc: 0.810082\tvalid_0's binary_logloss: 0.515095\n",
      "[LightGBM] [Debug] Re-bagging, using 170924 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[95]\tvalid_0's auc: 0.810282\tvalid_0's binary_logloss: 0.514549\n",
      "[LightGBM] [Debug] Re-bagging, using 171093 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[96]\tvalid_0's auc: 0.810618\tvalid_0's binary_logloss: 0.514213\n",
      "[LightGBM] [Debug] Re-bagging, using 170984 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[97]\tvalid_0's auc: 0.810994\tvalid_0's binary_logloss: 0.513785\n",
      "[LightGBM] [Debug] Re-bagging, using 170544 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[98]\tvalid_0's auc: 0.810919\tvalid_0's binary_logloss: 0.513178\n",
      "[LightGBM] [Debug] Re-bagging, using 170731 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[99]\tvalid_0's auc: 0.811019\tvalid_0's binary_logloss: 0.512905\n",
      "[LightGBM] [Debug] Re-bagging, using 170579 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[100]\tvalid_0's auc: 0.811377\tvalid_0's binary_logloss: 0.512346\n",
      "Did not meet early stopping. Best iteration is:\n",
      "[100]\tvalid_0's auc: 0.811377\tvalid_0's binary_logloss: 0.512346\n",
      "[LightGBM] [Info] Number of positive: 73269, number of negative: 170987\n",
      "[LightGBM] [Debug] Dataset::GetMultiBinFromAllFeatures: sparse rate 0.042917\n",
      "[LightGBM] [Debug] init for col-wise cost 0.000030 seconds, init for row-wise cost 0.016294 seconds\n",
      "[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.019871 seconds.\n",
      "You can set `force_col_wise=true` to remove the overhead.\n",
      "[LightGBM] [Info] Total Bins 4126\n",
      "[LightGBM] [Info] Number of data points in the train set: 244256, number of used features: 23\n",
      "[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.299968 -> initscore=-0.847450\n",
      "[LightGBM] [Info] Start training from score -0.847450\n",
      "[LightGBM] [Debug] Re-bagging, using 171104 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[1]\tvalid_0's auc: 0.775615\tvalid_0's binary_logloss: 0.609129\n",
      "Training until validation scores don't improve for 50 rounds\n",
      "[LightGBM] [Debug] Re-bagging, using 170782 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[2]\tvalid_0's auc: 0.782811\tvalid_0's binary_logloss: 0.608039\n",
      "[LightGBM] [Debug] Re-bagging, using 170696 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[3]\tvalid_0's auc: 0.78446\tvalid_0's binary_logloss: 0.606403\n",
      "[LightGBM] [Debug] Re-bagging, using 170696 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[4]\tvalid_0's auc: 0.790471\tvalid_0's binary_logloss: 0.60438\n",
      "[LightGBM] [Debug] Re-bagging, using 170898 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[5]\tvalid_0's auc: 0.792037\tvalid_0's binary_logloss: 0.602399\n",
      "[LightGBM] [Debug] Re-bagging, using 171513 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 6\n",
      "[6]\tvalid_0's auc: 0.79247\tvalid_0's binary_logloss: 0.601131\n",
      "[LightGBM] [Debug] Re-bagging, using 170851 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[7]\tvalid_0's auc: 0.792913\tvalid_0's binary_logloss: 0.59925\n",
      "[LightGBM] [Debug] Re-bagging, using 171220 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[8]\tvalid_0's auc: 0.793438\tvalid_0's binary_logloss: 0.598128\n",
      "[LightGBM] [Debug] Re-bagging, using 171161 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[9]\tvalid_0's auc: 0.793436\tvalid_0's binary_logloss: 0.596277\n",
      "[LightGBM] [Debug] Re-bagging, using 170991 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[10]\tvalid_0's auc: 0.794426\tvalid_0's binary_logloss: 0.594892\n",
      "[LightGBM] [Debug] Re-bagging, using 170775 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[11]\tvalid_0's auc: 0.794454\tvalid_0's binary_logloss: 0.593529\n",
      "[LightGBM] [Debug] Re-bagging, using 171128 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[12]\tvalid_0's auc: 0.794517\tvalid_0's binary_logloss: 0.592495\n",
      "[LightGBM] [Debug] Re-bagging, using 171178 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[13]\tvalid_0's auc: 0.794425\tvalid_0's binary_logloss: 0.590817\n",
      "[LightGBM] [Debug] Re-bagging, using 170930 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[14]\tvalid_0's auc: 0.794617\tvalid_0's binary_logloss: 0.589116\n",
      "[LightGBM] [Debug] Re-bagging, using 170733 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[15]\tvalid_0's auc: 0.79593\tvalid_0's binary_logloss: 0.58786\n",
      "[LightGBM] [Debug] Re-bagging, using 171340 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[16]\tvalid_0's auc: 0.795767\tvalid_0's binary_logloss: 0.586217\n",
      "[LightGBM] [Debug] Re-bagging, using 171033 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[17]\tvalid_0's auc: 0.795841\tvalid_0's binary_logloss: 0.585129\n",
      "[LightGBM] [Debug] Re-bagging, using 171182 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[18]\tvalid_0's auc: 0.795839\tvalid_0's binary_logloss: 0.583897\n",
      "[LightGBM] [Debug] Re-bagging, using 170989 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[19]\tvalid_0's auc: 0.795928\tvalid_0's binary_logloss: 0.582302\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LightGBM] [Debug] Re-bagging, using 171074 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[20]\tvalid_0's auc: 0.796638\tvalid_0's binary_logloss: 0.581898\n",
      "[LightGBM] [Debug] Re-bagging, using 170592 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[21]\tvalid_0's auc: 0.79707\tvalid_0's binary_logloss: 0.580698\n",
      "[LightGBM] [Debug] Re-bagging, using 170802 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[22]\tvalid_0's auc: 0.796904\tvalid_0's binary_logloss: 0.57968\n",
      "[LightGBM] [Debug] Re-bagging, using 170753 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[23]\tvalid_0's auc: 0.796847\tvalid_0's binary_logloss: 0.578157\n",
      "[LightGBM] [Debug] Re-bagging, using 171594 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[24]\tvalid_0's auc: 0.796836\tvalid_0's binary_logloss: 0.576646\n",
      "[LightGBM] [Debug] Re-bagging, using 171071 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[25]\tvalid_0's auc: 0.797105\tvalid_0's binary_logloss: 0.575526\n",
      "[LightGBM] [Debug] Re-bagging, using 171414 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[26]\tvalid_0's auc: 0.797168\tvalid_0's binary_logloss: 0.574089\n",
      "[LightGBM] [Debug] Re-bagging, using 171083 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[27]\tvalid_0's auc: 0.797293\tvalid_0's binary_logloss: 0.572994\n",
      "[LightGBM] [Debug] Re-bagging, using 170621 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[28]\tvalid_0's auc: 0.797266\tvalid_0's binary_logloss: 0.571592\n",
      "[LightGBM] [Debug] Re-bagging, using 170746 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[29]\tvalid_0's auc: 0.797574\tvalid_0's binary_logloss: 0.570181\n",
      "[LightGBM] [Debug] Re-bagging, using 171031 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[30]\tvalid_0's auc: 0.797459\tvalid_0's binary_logloss: 0.568847\n",
      "[LightGBM] [Debug] Re-bagging, using 170938 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[31]\tvalid_0's auc: 0.797942\tvalid_0's binary_logloss: 0.567834\n",
      "[LightGBM] [Debug] Re-bagging, using 170779 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[32]\tvalid_0's auc: 0.798057\tvalid_0's binary_logloss: 0.566478\n",
      "[LightGBM] [Debug] Re-bagging, using 170875 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[33]\tvalid_0's auc: 0.79803\tvalid_0's binary_logloss: 0.565159\n",
      "[LightGBM] [Debug] Re-bagging, using 170742 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[34]\tvalid_0's auc: 0.798359\tvalid_0's binary_logloss: 0.564112\n",
      "[LightGBM] [Debug] Re-bagging, using 170837 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[35]\tvalid_0's auc: 0.798303\tvalid_0's binary_logloss: 0.562832\n",
      "[LightGBM] [Debug] Re-bagging, using 170908 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[36]\tvalid_0's auc: 0.798542\tvalid_0's binary_logloss: 0.562145\n",
      "[LightGBM] [Debug] Re-bagging, using 171092 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[37]\tvalid_0's auc: 0.798686\tvalid_0's binary_logloss: 0.561147\n",
      "[LightGBM] [Debug] Re-bagging, using 170788 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[38]\tvalid_0's auc: 0.798884\tvalid_0's binary_logloss: 0.559879\n",
      "[LightGBM] [Debug] Re-bagging, using 170960 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[39]\tvalid_0's auc: 0.799328\tvalid_0's binary_logloss: 0.558904\n",
      "[LightGBM] [Debug] Re-bagging, using 171029 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[40]\tvalid_0's auc: 0.800012\tvalid_0's binary_logloss: 0.55814\n",
      "[LightGBM] [Debug] Re-bagging, using 171468 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[41]\tvalid_0's auc: 0.800111\tvalid_0's binary_logloss: 0.557184\n",
      "[LightGBM] [Debug] Re-bagging, using 171003 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[42]\tvalid_0's auc: 0.80005\tvalid_0's binary_logloss: 0.556033\n",
      "[LightGBM] [Debug] Re-bagging, using 171019 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[43]\tvalid_0's auc: 0.79992\tvalid_0's binary_logloss: 0.554911\n",
      "[LightGBM] [Debug] Re-bagging, using 170988 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[44]\tvalid_0's auc: 0.801628\tvalid_0's binary_logloss: 0.553797\n",
      "[LightGBM] [Debug] Re-bagging, using 171221 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[45]\tvalid_0's auc: 0.801731\tvalid_0's binary_logloss: 0.553061\n",
      "[LightGBM] [Debug] Re-bagging, using 170974 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[46]\tvalid_0's auc: 0.801659\tvalid_0's binary_logloss: 0.55196\n",
      "[LightGBM] [Debug] Re-bagging, using 170860 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[47]\tvalid_0's auc: 0.80157\tvalid_0's binary_logloss: 0.550861\n",
      "[LightGBM] [Debug] Re-bagging, using 170484 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[48]\tvalid_0's auc: 0.801783\tvalid_0's binary_logloss: 0.550099\n",
      "[LightGBM] [Debug] Re-bagging, using 171025 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[49]\tvalid_0's auc: 0.802084\tvalid_0's binary_logloss: 0.549757\n",
      "[LightGBM] [Debug] Re-bagging, using 170904 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[50]\tvalid_0's auc: 0.80226\tvalid_0's binary_logloss: 0.549428\n",
      "[LightGBM] [Debug] Re-bagging, using 170748 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[51]\tvalid_0's auc: 0.803926\tvalid_0's binary_logloss: 0.548329\n",
      "[LightGBM] [Debug] Re-bagging, using 170592 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[52]\tvalid_0's auc: 0.804006\tvalid_0's binary_logloss: 0.547512\n",
      "[LightGBM] [Debug] Re-bagging, using 171218 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[53]\tvalid_0's auc: 0.803898\tvalid_0's binary_logloss: 0.546486\n",
      "[LightGBM] [Debug] Re-bagging, using 170792 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[54]\tvalid_0's auc: 0.803833\tvalid_0's binary_logloss: 0.545466\n",
      "[LightGBM] [Debug] Re-bagging, using 171214 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[55]\tvalid_0's auc: 0.804045\tvalid_0's binary_logloss: 0.544942\n",
      "[LightGBM] [Debug] Re-bagging, using 170803 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[56]\tvalid_0's auc: 0.803972\tvalid_0's binary_logloss: 0.543962\n",
      "[LightGBM] [Debug] Re-bagging, using 171020 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[57]\tvalid_0's auc: 0.803935\tvalid_0's binary_logloss: 0.542969\n",
      "[LightGBM] [Debug] Re-bagging, using 171263 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[58]\tvalid_0's auc: 0.803965\tvalid_0's binary_logloss: 0.542263\n",
      "[LightGBM] [Debug] Re-bagging, using 170850 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[59]\tvalid_0's auc: 0.803883\tvalid_0's binary_logloss: 0.541299\n",
      "[LightGBM] [Debug] Re-bagging, using 171379 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[60]\tvalid_0's auc: 0.8038\tvalid_0's binary_logloss: 0.540386\n",
      "[LightGBM] [Debug] Re-bagging, using 170895 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[61]\tvalid_0's auc: 0.803696\tvalid_0's binary_logloss: 0.539467\n",
      "[LightGBM] [Debug] Re-bagging, using 170993 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[62]\tvalid_0's auc: 0.803777\tvalid_0's binary_logloss: 0.53875\n",
      "[LightGBM] [Debug] Re-bagging, using 171028 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[63]\tvalid_0's auc: 0.803813\tvalid_0's binary_logloss: 0.53803\n",
      "[LightGBM] [Debug] Re-bagging, using 171040 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[64]\tvalid_0's auc: 0.803742\tvalid_0's binary_logloss: 0.537151\n",
      "[LightGBM] [Debug] Re-bagging, using 170738 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[65]\tvalid_0's auc: 0.803956\tvalid_0's binary_logloss: 0.536247\n",
      "[LightGBM] [Debug] Re-bagging, using 170976 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[66]\tvalid_0's auc: 0.804057\tvalid_0's binary_logloss: 0.535547\n",
      "[LightGBM] [Debug] Re-bagging, using 170899 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[67]\tvalid_0's auc: 0.803993\tvalid_0's binary_logloss: 0.53472\n",
      "[LightGBM] [Debug] Re-bagging, using 171118 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[68]\tvalid_0's auc: 0.804074\tvalid_0's binary_logloss: 0.534306\n",
      "[LightGBM] [Debug] Re-bagging, using 170636 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[69]\tvalid_0's auc: 0.804152\tvalid_0's binary_logloss: 0.53364\n",
      "[LightGBM] [Debug] Re-bagging, using 170754 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[70]\tvalid_0's auc: 0.804161\tvalid_0's binary_logloss: 0.533005\n",
      "[LightGBM] [Debug] Re-bagging, using 170688 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[71]\tvalid_0's auc: 0.804164\tvalid_0's binary_logloss: 0.532404\n",
      "[LightGBM] [Debug] Re-bagging, using 170884 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[72]\tvalid_0's auc: 0.804267\tvalid_0's binary_logloss: 0.531847\n",
      "[LightGBM] [Debug] Re-bagging, using 170879 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[73]\tvalid_0's auc: 0.804316\tvalid_0's binary_logloss: 0.53102\n",
      "[LightGBM] [Debug] Re-bagging, using 170778 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[74]\tvalid_0's auc: 0.804464\tvalid_0's binary_logloss: 0.530366\n",
      "[LightGBM] [Debug] Re-bagging, using 170860 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[75]\tvalid_0's auc: 0.804514\tvalid_0's binary_logloss: 0.529555\n",
      "[LightGBM] [Debug] Re-bagging, using 170933 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[76]\tvalid_0's auc: 0.804539\tvalid_0's binary_logloss: 0.528763\n",
      "[LightGBM] [Debug] Re-bagging, using 170974 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[77]\tvalid_0's auc: 0.804563\tvalid_0's binary_logloss: 0.528148\n",
      "[LightGBM] [Debug] Re-bagging, using 170921 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[78]\tvalid_0's auc: 0.804636\tvalid_0's binary_logloss: 0.527551\n",
      "[LightGBM] [Debug] Re-bagging, using 170793 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[79]\tvalid_0's auc: 0.805726\tvalid_0's binary_logloss: 0.526704\n",
      "[LightGBM] [Debug] Re-bagging, using 170939 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[80]\tvalid_0's auc: 0.80584\tvalid_0's binary_logloss: 0.526266\n",
      "[LightGBM] [Debug] Re-bagging, using 171462 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[81]\tvalid_0's auc: 0.806972\tvalid_0's binary_logloss: 0.525728\n",
      "[LightGBM] [Debug] Re-bagging, using 170826 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[82]\tvalid_0's auc: 0.807029\tvalid_0's binary_logloss: 0.525145\n",
      "[LightGBM] [Debug] Re-bagging, using 170945 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[83]\tvalid_0's auc: 0.806995\tvalid_0's binary_logloss: 0.524418\n",
      "[LightGBM] [Debug] Re-bagging, using 170970 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[84]\tvalid_0's auc: 0.807171\tvalid_0's binary_logloss: 0.523665\n",
      "[LightGBM] [Debug] Re-bagging, using 170978 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[85]\tvalid_0's auc: 0.807546\tvalid_0's binary_logloss: 0.523036\n",
      "[LightGBM] [Debug] Re-bagging, using 170670 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[86]\tvalid_0's auc: 0.807441\tvalid_0's binary_logloss: 0.522389\n",
      "[LightGBM] [Debug] Re-bagging, using 171146 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[87]\tvalid_0's auc: 0.807474\tvalid_0's binary_logloss: 0.521849\n",
      "[LightGBM] [Debug] Re-bagging, using 170913 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[88]\tvalid_0's auc: 0.807777\tvalid_0's binary_logloss: 0.521242\n",
      "[LightGBM] [Debug] Re-bagging, using 170354 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[89]\tvalid_0's auc: 0.807764\tvalid_0's binary_logloss: 0.520566\n",
      "[LightGBM] [Debug] Re-bagging, using 171185 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[90]\tvalid_0's auc: 0.807707\tvalid_0's binary_logloss: 0.519903\n",
      "[LightGBM] [Debug] Re-bagging, using 171221 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[91]\tvalid_0's auc: 0.807706\tvalid_0's binary_logloss: 0.519532\n",
      "[LightGBM] [Debug] Re-bagging, using 171180 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[92]\tvalid_0's auc: 0.807704\tvalid_0's binary_logloss: 0.51887\n",
      "[LightGBM] [Debug] Re-bagging, using 170940 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[93]\tvalid_0's auc: 0.807692\tvalid_0's binary_logloss: 0.518216\n",
      "[LightGBM] [Debug] Re-bagging, using 171242 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[94]\tvalid_0's auc: 0.807737\tvalid_0's binary_logloss: 0.517566\n",
      "[LightGBM] [Debug] Re-bagging, using 170960 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[95]\tvalid_0's auc: 0.807984\tvalid_0's binary_logloss: 0.516996\n",
      "[LightGBM] [Debug] Re-bagging, using 171130 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[96]\tvalid_0's auc: 0.808402\tvalid_0's binary_logloss: 0.516626\n",
      "[LightGBM] [Debug] Re-bagging, using 171000 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[97]\tvalid_0's auc: 0.808827\tvalid_0's binary_logloss: 0.516197\n",
      "[LightGBM] [Debug] Re-bagging, using 170586 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[98]\tvalid_0's auc: 0.808766\tvalid_0's binary_logloss: 0.515605\n",
      "[LightGBM] [Debug] Re-bagging, using 170766 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[99]\tvalid_0's auc: 0.808874\tvalid_0's binary_logloss: 0.515321\n",
      "[LightGBM] [Debug] Re-bagging, using 170595 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[100]\tvalid_0's auc: 0.808934\tvalid_0's binary_logloss: 0.514852\n",
      "Did not meet early stopping. Best iteration is:\n",
      "[100]\tvalid_0's auc: 0.808934\tvalid_0's binary_logloss: 0.514852\n",
      "[LightGBM] [Info] Number of positive: 73251, number of negative: 171000\n",
      "[LightGBM] [Debug] Dataset::GetMultiBinFromAllFeatures: sparse rate 0.042965\n",
      "[LightGBM] [Debug] init for col-wise cost 0.000035 seconds, init for row-wise cost 0.013713 seconds\n",
      "[LightGBM] [Warning] Auto-choosing col-wise multi-threading, the overhead of testing was 0.017228 seconds.\n",
      "You can set `force_col_wise=true` to remove the overhead.\n",
      "[LightGBM] [Info] Total Bins 4124\n",
      "[LightGBM] [Info] Number of data points in the train set: 244251, number of used features: 23\n",
      "[LightGBM] [Info] [binary:BoostFromScore]: pavg=0.299901 -> initscore=-0.847772\n",
      "[LightGBM] [Info] Start training from score -0.847772\n",
      "[LightGBM] [Debug] Re-bagging, using 171099 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[1]\tvalid_0's auc: 0.781582\tvalid_0's binary_logloss: 0.609313\n",
      "Training until validation scores don't improve for 50 rounds\n",
      "[LightGBM] [Debug] Re-bagging, using 170782 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[2]\tvalid_0's auc: 0.789489\tvalid_0's binary_logloss: 0.608209\n",
      "[LightGBM] [Debug] Re-bagging, using 170690 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[3]\tvalid_0's auc: 0.790869\tvalid_0's binary_logloss: 0.606511\n",
      "[LightGBM] [Debug] Re-bagging, using 170692 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[4]\tvalid_0's auc: 0.795083\tvalid_0's binary_logloss: 0.60445\n",
      "[LightGBM] [Debug] Re-bagging, using 170893 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[5]\tvalid_0's auc: 0.795862\tvalid_0's binary_logloss: 0.602439\n",
      "[LightGBM] [Debug] Re-bagging, using 171517 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[6]\tvalid_0's auc: 0.796692\tvalid_0's binary_logloss: 0.601171\n",
      "[LightGBM] [Debug] Re-bagging, using 170844 data to train\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[7]\tvalid_0's auc: 0.796619\tvalid_0's binary_logloss: 0.599233\n",
      "[LightGBM] [Debug] Re-bagging, using 171219 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[8]\tvalid_0's auc: 0.79688\tvalid_0's binary_logloss: 0.598092\n",
      "[LightGBM] [Debug] Re-bagging, using 171149 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[9]\tvalid_0's auc: 0.797235\tvalid_0's binary_logloss: 0.596196\n",
      "[LightGBM] [Debug] Re-bagging, using 170999 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[10]\tvalid_0's auc: 0.798605\tvalid_0's binary_logloss: 0.594803\n",
      "[LightGBM] [Debug] Re-bagging, using 170756 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[11]\tvalid_0's auc: 0.798612\tvalid_0's binary_logloss: 0.593382\n",
      "[LightGBM] [Debug] Re-bagging, using 171132 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[12]\tvalid_0's auc: 0.798351\tvalid_0's binary_logloss: 0.59233\n",
      "[LightGBM] [Debug] Re-bagging, using 171177 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[13]\tvalid_0's auc: 0.79806\tvalid_0's binary_logloss: 0.590629\n",
      "[LightGBM] [Debug] Re-bagging, using 170925 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[14]\tvalid_0's auc: 0.798053\tvalid_0's binary_logloss: 0.588875\n",
      "[LightGBM] [Debug] Re-bagging, using 170729 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[15]\tvalid_0's auc: 0.799505\tvalid_0's binary_logloss: 0.587584\n",
      "[LightGBM] [Debug] Re-bagging, using 171333 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[16]\tvalid_0's auc: 0.799186\tvalid_0's binary_logloss: 0.585919\n",
      "[LightGBM] [Debug] Re-bagging, using 171035 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[17]\tvalid_0's auc: 0.799279\tvalid_0's binary_logloss: 0.584821\n",
      "[LightGBM] [Debug] Re-bagging, using 171184 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[18]\tvalid_0's auc: 0.799274\tvalid_0's binary_logloss: 0.583572\n",
      "[LightGBM] [Debug] Re-bagging, using 170978 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[19]\tvalid_0's auc: 0.799251\tvalid_0's binary_logloss: 0.581937\n",
      "[LightGBM] [Debug] Re-bagging, using 171072 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[20]\tvalid_0's auc: 0.799863\tvalid_0's binary_logloss: 0.581543\n",
      "[LightGBM] [Debug] Re-bagging, using 170594 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[21]\tvalid_0's auc: 0.800477\tvalid_0's binary_logloss: 0.580342\n",
      "[LightGBM] [Debug] Re-bagging, using 170795 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[22]\tvalid_0's auc: 0.800443\tvalid_0's binary_logloss: 0.579314\n",
      "[LightGBM] [Debug] Re-bagging, using 170748 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[23]\tvalid_0's auc: 0.800546\tvalid_0's binary_logloss: 0.577723\n",
      "[LightGBM] [Debug] Re-bagging, using 171586 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[24]\tvalid_0's auc: 0.800546\tvalid_0's binary_logloss: 0.576174\n",
      "[LightGBM] [Debug] Re-bagging, using 171074 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[25]\tvalid_0's auc: 0.800793\tvalid_0's binary_logloss: 0.575054\n",
      "[LightGBM] [Debug] Re-bagging, using 171411 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[26]\tvalid_0's auc: 0.800691\tvalid_0's binary_logloss: 0.57357\n",
      "[LightGBM] [Debug] Re-bagging, using 171070 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[27]\tvalid_0's auc: 0.800897\tvalid_0's binary_logloss: 0.572468\n",
      "[LightGBM] [Debug] Re-bagging, using 170625 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[28]\tvalid_0's auc: 0.801214\tvalid_0's binary_logloss: 0.571003\n",
      "[LightGBM] [Debug] Re-bagging, using 170737 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[29]\tvalid_0's auc: 0.801252\tvalid_0's binary_logloss: 0.569574\n",
      "[LightGBM] [Debug] Re-bagging, using 171034 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[30]\tvalid_0's auc: 0.801125\tvalid_0's binary_logloss: 0.568192\n",
      "[LightGBM] [Debug] Re-bagging, using 170924 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[31]\tvalid_0's auc: 0.801699\tvalid_0's binary_logloss: 0.567159\n",
      "[LightGBM] [Debug] Re-bagging, using 170782 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[32]\tvalid_0's auc: 0.801596\tvalid_0's binary_logloss: 0.565793\n",
      "[LightGBM] [Debug] Re-bagging, using 170879 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[33]\tvalid_0's auc: 0.801612\tvalid_0's binary_logloss: 0.564443\n",
      "[LightGBM] [Debug] Re-bagging, using 170722 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[34]\tvalid_0's auc: 0.801797\tvalid_0's binary_logloss: 0.563393\n",
      "[LightGBM] [Debug] Re-bagging, using 170854 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[35]\tvalid_0's auc: 0.801787\tvalid_0's binary_logloss: 0.562073\n",
      "[LightGBM] [Debug] Re-bagging, using 170898 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[36]\tvalid_0's auc: 0.802174\tvalid_0's binary_logloss: 0.561393\n",
      "[LightGBM] [Debug] Re-bagging, using 171071 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[37]\tvalid_0's auc: 0.802293\tvalid_0's binary_logloss: 0.560365\n",
      "[LightGBM] [Debug] Re-bagging, using 170799 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[38]\tvalid_0's auc: 0.802405\tvalid_0's binary_logloss: 0.559096\n",
      "[LightGBM] [Debug] Re-bagging, using 170962 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[39]\tvalid_0's auc: 0.802856\tvalid_0's binary_logloss: 0.558114\n",
      "[LightGBM] [Debug] Re-bagging, using 171019 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[40]\tvalid_0's auc: 0.803784\tvalid_0's binary_logloss: 0.557298\n",
      "[LightGBM] [Debug] Re-bagging, using 171461 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[41]\tvalid_0's auc: 0.80383\tvalid_0's binary_logloss: 0.556337\n",
      "[LightGBM] [Debug] Re-bagging, using 171004 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[42]\tvalid_0's auc: 0.803769\tvalid_0's binary_logloss: 0.55516\n",
      "[LightGBM] [Debug] Re-bagging, using 171020 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[43]\tvalid_0's auc: 0.803683\tvalid_0's binary_logloss: 0.553988\n",
      "[LightGBM] [Debug] Re-bagging, using 170981 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[44]\tvalid_0's auc: 0.80503\tvalid_0's binary_logloss: 0.552916\n",
      "[LightGBM] [Debug] Re-bagging, using 171208 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[45]\tvalid_0's auc: 0.805113\tvalid_0's binary_logloss: 0.552155\n",
      "[LightGBM] [Debug] Re-bagging, using 170977 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[46]\tvalid_0's auc: 0.805008\tvalid_0's binary_logloss: 0.551044\n",
      "[LightGBM] [Debug] Re-bagging, using 170850 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[47]\tvalid_0's auc: 0.80511\tvalid_0's binary_logloss: 0.549912\n",
      "[LightGBM] [Debug] Re-bagging, using 170490 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[48]\tvalid_0's auc: 0.805308\tvalid_0's binary_logloss: 0.549148\n",
      "[LightGBM] [Debug] Re-bagging, using 171012 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[49]\tvalid_0's auc: 0.805627\tvalid_0's binary_logloss: 0.548792\n",
      "[LightGBM] [Debug] Re-bagging, using 170916 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[50]\tvalid_0's auc: 0.805737\tvalid_0's binary_logloss: 0.548466\n",
      "[LightGBM] [Debug] Re-bagging, using 170732 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[51]\tvalid_0's auc: 0.80741\tvalid_0's binary_logloss: 0.547353\n",
      "[LightGBM] [Debug] Re-bagging, using 170588 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[52]\tvalid_0's auc: 0.807482\tvalid_0's binary_logloss: 0.546525\n",
      "[LightGBM] [Debug] Re-bagging, using 171221 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[53]\tvalid_0's auc: 0.807356\tvalid_0's binary_logloss: 0.545466\n",
      "[LightGBM] [Debug] Re-bagging, using 170778 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[54]\tvalid_0's auc: 0.807323\tvalid_0's binary_logloss: 0.544406\n",
      "[LightGBM] [Debug] Re-bagging, using 171204 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[55]\tvalid_0's auc: 0.807569\tvalid_0's binary_logloss: 0.54388\n",
      "[LightGBM] [Debug] Re-bagging, using 170830 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[56]\tvalid_0's auc: 0.80768\tvalid_0's binary_logloss: 0.542839\n",
      "[LightGBM] [Debug] Re-bagging, using 170996 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[57]\tvalid_0's auc: 0.807646\tvalid_0's binary_logloss: 0.541824\n",
      "[LightGBM] [Debug] Re-bagging, using 171267 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[58]\tvalid_0's auc: 0.807719\tvalid_0's binary_logloss: 0.541099\n",
      "[LightGBM] [Debug] Re-bagging, using 170851 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[59]\tvalid_0's auc: 0.807868\tvalid_0's binary_logloss: 0.540086\n",
      "[LightGBM] [Debug] Re-bagging, using 171361 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[60]\tvalid_0's auc: 0.807722\tvalid_0's binary_logloss: 0.539153\n",
      "[LightGBM] [Debug] Re-bagging, using 170897 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[61]\tvalid_0's auc: 0.807645\tvalid_0's binary_logloss: 0.538204\n",
      "[LightGBM] [Debug] Re-bagging, using 170998 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[62]\tvalid_0's auc: 0.807773\tvalid_0's binary_logloss: 0.537467\n",
      "[LightGBM] [Debug] Re-bagging, using 171020 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[63]\tvalid_0's auc: 0.807795\tvalid_0's binary_logloss: 0.536731\n",
      "[LightGBM] [Debug] Re-bagging, using 171036 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 6\n",
      "[64]\tvalid_0's auc: 0.807767\tvalid_0's binary_logloss: 0.53582\n",
      "[LightGBM] [Debug] Re-bagging, using 170734 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[65]\tvalid_0's auc: 0.807883\tvalid_0's binary_logloss: 0.534893\n",
      "[LightGBM] [Debug] Re-bagging, using 170983 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[66]\tvalid_0's auc: 0.808051\tvalid_0's binary_logloss: 0.534188\n",
      "[LightGBM] [Debug] Re-bagging, using 170881 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[67]\tvalid_0's auc: 0.807975\tvalid_0's binary_logloss: 0.533324\n",
      "[LightGBM] [Debug] Re-bagging, using 171120 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[68]\tvalid_0's auc: 0.808105\tvalid_0's binary_logloss: 0.532907\n",
      "[LightGBM] [Debug] Re-bagging, using 170617 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 12\n",
      "[69]\tvalid_0's auc: 0.808237\tvalid_0's binary_logloss: 0.532217\n",
      "[LightGBM] [Debug] Re-bagging, using 170765 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[70]\tvalid_0's auc: 0.808259\tvalid_0's binary_logloss: 0.531568\n",
      "[LightGBM] [Debug] Re-bagging, using 170684 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[71]\tvalid_0's auc: 0.808284\tvalid_0's binary_logloss: 0.530949\n",
      "[LightGBM] [Debug] Re-bagging, using 170868 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[72]\tvalid_0's auc: 0.808283\tvalid_0's binary_logloss: 0.530403\n",
      "[LightGBM] [Debug] Re-bagging, using 170892 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[73]\tvalid_0's auc: 0.808315\tvalid_0's binary_logloss: 0.529558\n",
      "[LightGBM] [Debug] Re-bagging, using 170774 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[74]\tvalid_0's auc: 0.808524\tvalid_0's binary_logloss: 0.528891\n",
      "[LightGBM] [Debug] Re-bagging, using 170853 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[75]\tvalid_0's auc: 0.808536\tvalid_0's binary_logloss: 0.528065\n",
      "[LightGBM] [Debug] Re-bagging, using 170926 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[76]\tvalid_0's auc: 0.80848\tvalid_0's binary_logloss: 0.527281\n",
      "[LightGBM] [Debug] Re-bagging, using 170975 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[77]\tvalid_0's auc: 0.80852\tvalid_0's binary_logloss: 0.526654\n",
      "[LightGBM] [Debug] Re-bagging, using 170924 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[78]\tvalid_0's auc: 0.808575\tvalid_0's binary_logloss: 0.526045\n",
      "[LightGBM] [Debug] Re-bagging, using 170777 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 10\n",
      "[79]\tvalid_0's auc: 0.809269\tvalid_0's binary_logloss: 0.525244\n",
      "[LightGBM] [Debug] Re-bagging, using 170949 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[80]\tvalid_0's auc: 0.809337\tvalid_0's binary_logloss: 0.524799\n",
      "[LightGBM] [Debug] Re-bagging, using 171439 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[81]\tvalid_0's auc: 0.809886\tvalid_0's binary_logloss: 0.524429\n",
      "[LightGBM] [Debug] Re-bagging, using 170832 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 11\n",
      "[82]\tvalid_0's auc: 0.810061\tvalid_0's binary_logloss: 0.523812\n",
      "[LightGBM] [Debug] Re-bagging, using 170957 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[83]\tvalid_0's auc: 0.809996\tvalid_0's binary_logloss: 0.523074\n",
      "[LightGBM] [Debug] Re-bagging, using 170952 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[84]\tvalid_0's auc: 0.809976\tvalid_0's binary_logloss: 0.522336\n",
      "[LightGBM] [Debug] Re-bagging, using 170968 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[85]\tvalid_0's auc: 0.809965\tvalid_0's binary_logloss: 0.52177\n",
      "[LightGBM] [Debug] Re-bagging, using 170675 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[86]\tvalid_0's auc: 0.809918\tvalid_0's binary_logloss: 0.521084\n",
      "[LightGBM] [Debug] Re-bagging, using 171135 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 12\n",
      "[87]\tvalid_0's auc: 0.810029\tvalid_0's binary_logloss: 0.52052\n",
      "[LightGBM] [Debug] Re-bagging, using 170922 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[88]\tvalid_0's auc: 0.810073\tvalid_0's binary_logloss: 0.519954\n",
      "[LightGBM] [Debug] Re-bagging, using 170345 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[89]\tvalid_0's auc: 0.810052\tvalid_0's binary_logloss: 0.519247\n",
      "[LightGBM] [Debug] Re-bagging, using 171188 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 6\n",
      "[90]\tvalid_0's auc: 0.809995\tvalid_0's binary_logloss: 0.518575\n",
      "[LightGBM] [Debug] Re-bagging, using 171207 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[91]\tvalid_0's auc: 0.809995\tvalid_0's binary_logloss: 0.51819\n",
      "[LightGBM] [Debug] Re-bagging, using 171186 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[92]\tvalid_0's auc: 0.80991\tvalid_0's binary_logloss: 0.517539\n",
      "[LightGBM] [Debug] Re-bagging, using 170931 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[93]\tvalid_0's auc: 0.809901\tvalid_0's binary_logloss: 0.516873\n",
      "[LightGBM] [Debug] Re-bagging, using 171226 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[94]\tvalid_0's auc: 0.809873\tvalid_0's binary_logloss: 0.516211\n",
      "[LightGBM] [Debug] Re-bagging, using 170980 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 12\n",
      "[95]\tvalid_0's auc: 0.810303\tvalid_0's binary_logloss: 0.515579\n",
      "[LightGBM] [Debug] Re-bagging, using 171112 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 8\n",
      "[96]\tvalid_0's auc: 0.810736\tvalid_0's binary_logloss: 0.515199\n",
      "[LightGBM] [Debug] Re-bagging, using 171007 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[97]\tvalid_0's auc: 0.81128\tvalid_0's binary_logloss: 0.514732\n",
      "[LightGBM] [Debug] Re-bagging, using 170578 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 7\n",
      "[98]\tvalid_0's auc: 0.811209\tvalid_0's binary_logloss: 0.514129\n",
      "[LightGBM] [Debug] Re-bagging, using 170747 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 9\n",
      "[99]\tvalid_0's auc: 0.811323\tvalid_0's binary_logloss: 0.513846\n",
      "[LightGBM] [Debug] Re-bagging, using 170628 data to train\n",
      "[LightGBM] [Debug] Trained a tree with leaves = 31 and max_depth = 12\n",
      "[100]\tvalid_0's auc: 0.811773\tvalid_0's binary_logloss: 0.513272\n",
      "Did not meet early stopping. Best iteration is:\n",
      "[100]\tvalid_0's auc: 0.811773\tvalid_0's binary_logloss: 0.513272\n"
     ]
    }
   ],
   "source": [
    "# Five-fold cross-validation, where the folds are split by user\n",
    "# This part is independent of the single train/validation run above\n",
    "def get_kfold_users(trn_df, n=5):\n",
    "    user_ids = trn_df['user_id'].unique()\n",
    "    user_set = [user_ids[i::n] for i in range(n)]\n",
    "    return user_set\n",
    "\n",
    "k_fold = 5\n",
    "trn_df = trn_user_item_feats_df_rank_model\n",
    "user_set = get_kfold_users(trn_df, n=k_fold)\n",
    "\n",
    "score_list = []\n",
    "score_df = trn_df[['user_id', 'click_article_id', 'label']]\n",
    "sub_preds = np.zeros(tst_user_item_feats_df_rank_model.shape[0])\n",
    "\n",
    "# Run five-fold CV and keep the out-of-fold predictions for stacking\n",
    "for n_fold, valid_user in enumerate(user_set):\n",
    "    train_idx = trn_df[~trn_df['user_id'].isin(valid_user)]\n",
    "    valid_idx = trn_df[trn_df['user_id'].isin(valid_user)].copy()  # copy to avoid SettingWithCopyWarning\n",
    "    \n",
    "    # Model and hyper-parameter definition\n",
    "    lgb_Classfication = lgb.LGBMClassifier(boosting_type='gbdt', num_leaves=31, reg_alpha=0.0, reg_lambda=1,\n",
    "                            max_depth=-1, n_estimators=100, subsample=0.7, colsample_bytree=0.7, subsample_freq=1,\n",
    "                            learning_rate=0.01, min_child_weight=50, random_state=2018, n_jobs=16, verbose=10)\n",
    "    # Train the model\n",
    "    lgb_Classfication.fit(train_idx[lgb_cols], train_idx['label'], eval_set=[(valid_idx[lgb_cols], valid_idx['label'])], \n",
    "                          eval_metric=['auc', ], early_stopping_rounds=50)\n",
    "    \n",
    "    # Predict on the validation fold\n",
    "    valid_idx['pred_score'] = lgb_Classfication.predict_proba(valid_idx[lgb_cols], \n",
    "                                                              num_iteration=lgb_Classfication.best_iteration_)[:, 1]\n",
    "    \n",
    "    # No extra normalization is needed here: the classifier already outputs a probability\n",
    "    # valid_idx['pred_score'] = valid_idx[['pred_score']].transform(lambda x: norm_sim(x))\n",
    "    \n",
    "    valid_idx = valid_idx.sort_values(by=['user_id', 'pred_score'])\n",
    "    valid_idx['pred_rank'] = valid_idx.groupby(['user_id'])['pred_score'].rank(ascending=False, method='first')\n",
    "    \n",
    "    # Collect the fold predictions; they are concatenated below\n",
    "    score_list.append(valid_idx[['user_id', 'click_article_id', 'pred_score', 'pred_rank']])\n",
    "    \n",
    "    # In informal (offline) mode, accumulate each fold's test-set predictions and average them at the end\n",
    "    if informal:\n",
    "        sub_preds += lgb_Classfication.predict_proba(tst_user_item_feats_df_rank_model[lgb_cols], \n",
    "                                                     num_iteration=lgb_Classfication.best_iteration_)[:,1]\n",
    "    \n",
    "score_df_ = pd.concat(score_list, axis=0)\n",
    "score_df = score_df.merge(score_df_, how='left', on=['user_id', 'click_article_id'])\n",
    "# Save the out-of-fold features produced by cross-validation on the training set\n",
    "score_df[['user_id', 'click_article_id', 'pred_score', 'pred_rank', 'label']].to_csv(pathcache + 'trn_lgb_cls_feats.csv', index=False)\n",
    "    \n",
    "# Average the test-set predictions over the folds, then save the score and rank as features;\n",
    "# they can be used for stacking later, and more features could be built the same way\n",
    "tst_user_item_feats_df_rank_model['pred_score'] = sub_preds / k_fold\n",
    "tst_user_item_feats_df_rank_model['pred_score'] = tst_user_item_feats_df_rank_model['pred_score'].transform(lambda x: norm_sim(x))\n",
    "tst_user_item_feats_df_rank_model = tst_user_item_feats_df_rank_model.sort_values(by=['user_id', 'pred_score'])\n",
    "tst_user_item_feats_df_rank_model['pred_rank'] = tst_user_item_feats_df_rank_model.groupby(['user_id'])['pred_score'].rank(ascending=False, method='first')\n",
    "\n",
    "# Save the test-set cross-validation features\n",
    "tst_user_item_feats_df_rank_model[['user_id', 'click_article_id', 'pred_score', 'pred_rank']].to_csv(pathcache + 'tst_lgb_cls_feats.csv', index=False)"
   ]
  },
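  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The user-level fold split in `get_kfold_users` is just stride slicing over the unique user ids, so each user lands in exactly one fold; a quick illustration with toy ids (the real code runs on `user_id` values from the dataset):\n",
    "\n",
    "```python\n",
    "def get_kfold_users_demo(user_ids, n=5):\n",
    "    # Same stride-slicing split as get_kfold_users above\n",
    "    return [user_ids[i::n] for i in range(n)]\n",
    "\n",
    "folds = get_kfold_users_demo(list(range(10)), n=5)\n",
    "# folds == [[0, 5], [1, 6], [2, 7], [3, 8], [4, 9]]\n",
    "```"
   ]
  },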
  {
   "cell_type": "code",
   "execution_count": 195,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "<ipython-input-195-6078113a36c3>:3: SettingWithCopyWarning: \n",
      "A value is trying to be set on a copy of a slice from a DataFrame.\n",
      "Try using .loc[row_indexer,col_indexer] = value instead\n",
      "\n",
      "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n",
      "  rank_results['click_article_id'] = rank_results['click_article_id'].astype(int)\n"
     ]
    }
   ],
   "source": [
    "# Re-rank the predictions and generate the submission file\n",
    "rank_results = tst_user_item_feats_df_rank_model[['user_id', 'click_article_id', 'pred_score']].copy()\n",
    "rank_results['click_article_id'] = rank_results['click_article_id'].astype(int)\n",
    "submit(rank_results, topk=5, model_name='lgb_cls')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# DIN Model"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Building each user's historical click list\n",
    "This list is required as input for the DIN model below."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 113,
   "metadata": {},
   "outputs": [],
   "source": [
    "# if informal:\n",
    "#     all_data = pd.read_csv(path + 'train_click_log.csv')\n",
    "# else:\n",
    "trn_data = pd.read_csv(path + 'train_click_log.csv')\n",
    "tst_data = pd.read_csv(path + 'testA_click_log.csv')\n",
    "all_data = pd.concat([trn_data, tst_data])  # DataFrame.append is deprecated; concat is equivalent here"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 114,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Aggregate each user's clicks into an ordered list\n",
    "hist_click = all_data[['user_id', 'click_article_id']].groupby('user_id').agg(list).reset_index()\n",
    "his_behavior_df = pd.DataFrame()\n",
    "his_behavior_df['user_id'] = hist_click['user_id']\n",
    "his_behavior_df['hist_click_article_id'] = hist_click['click_article_id']"
   ]
  },
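  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A tiny example of the groupby-to-list aggregation above, on toy ids rather than the real click log:\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "demo = pd.DataFrame({'user_id': [1, 1, 2],\n",
    "                     'click_article_id': [10, 11, 12]})\n",
    "hist = demo.groupby('user_id')['click_article_id'].agg(list).reset_index()\n",
    "# One row per user, holding the full ordered click list:\n",
    "#    user_id click_article_id\n",
    "# 0        1         [10, 11]\n",
    "# 1        2             [12]\n",
    "```"
   ]
  },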
  {
   "cell_type": "code",
   "execution_count": 115,
   "metadata": {},
   "outputs": [],
   "source": [
    "trn_user_item_feats_df_din_model = trn_user_item_feats_df.copy()\n",
    "\n",
    "if informal:\n",
    "    val_user_item_feats_df_din_model = val_user_item_feats_df.copy()\n",
    "else: \n",
    "    val_user_item_feats_df_din_model = None\n",
    "    \n",
    "tst_user_item_feats_df_din_model = tst_user_item_feats_df.copy()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 116,
   "metadata": {},
   "outputs": [],
   "source": [
    "trn_user_item_feats_df_din_model = trn_user_item_feats_df_din_model.merge(his_behavior_df, on='user_id')\n",
    "\n",
    "if informal:\n",
    "    val_user_item_feats_df_din_model = val_user_item_feats_df_din_model.merge(his_behavior_df, on='user_id')\n",
    "else:\n",
    "    val_user_item_feats_df_din_model = None\n",
    "\n",
    "tst_user_item_feats_df_din_model = tst_user_item_feats_df_din_model.merge(his_behavior_df, on='user_id')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 149,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Import deepctr and the Keras utilities used below\n",
    "from deepctr.models import DIN\n",
    "from deepctr.feature_column import SparseFeat, VarLenSparseFeat, DenseFeat, get_feature_names\n",
    "from tensorflow.keras.preprocessing.sequence import pad_sequences\n",
    "\n",
    "from tensorflow.keras import backend as K\n",
    "from tensorflow.keras.layers import *\n",
    "from tensorflow.keras.models import *\n",
    "from tensorflow.keras.callbacks import * \n",
    "import tensorflow as tf\n",
    "\n",
    "import os\n",
    "os.environ[\"CUDA_DEVICE_ORDER\"] = \"PCI_BUS_ID\"\n",
    "os.environ[\"CUDA_VISIBLE_DEVICES\"] = \"2\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 163,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Data preparation function\n",
    "def get_din_feats_columns(df, dense_fea, sparse_fea, behavior_fea, his_behavior_fea, emb_dim=32, max_len=100):\n",
    "    \"\"\"\n",
    "    Prepare the model inputs and feature columns:\n",
    "    df: dataset\n",
    "    dense_fea: numerical feature columns\n",
    "    sparse_fea: categorical feature columns\n",
    "    behavior_fea: candidate-item behavior feature columns\n",
    "    his_behavior_fea: historical behavior feature columns\n",
    "    emb_dim: embedding dimension; for simplicity, all categorical features share the same size\n",
    "    max_len: maximum length of the user behavior sequence\n",
    "    \"\"\"\n",
    "    \n",
    "    sparse_feature_columns = [SparseFeat(feat, vocabulary_size=df[feat].max() + 1, embedding_dim=emb_dim) for feat in sparse_fea]\n",
    "    \n",
    "    dense_feature_columns = [DenseFeat(feat, 1, ) for feat in dense_fea]\n",
    "    \n",
    "    var_feature_columns = [VarLenSparseFeat(SparseFeat(feat, vocabulary_size=df['click_article_id'].max() + 1,\n",
    "                                    embedding_dim=emb_dim, embedding_name='click_article_id'), maxlen=max_len) for feat in his_behavior_fea]\n",
    "    \n",
    "    dnn_feature_columns = sparse_feature_columns + dense_feature_columns + var_feature_columns\n",
    "    \n",
    "    # Build x as a dict mapping feature names to input arrays\n",
    "    x = {}\n",
    "    for name in get_feature_names(dnn_feature_columns):\n",
    "        if name in his_behavior_fea:\n",
    "            # Historical behavior sequence: pad each list into a 2-D array\n",
    "            his_list = [l for l in df[name]]\n",
    "            x[name] = pad_sequences(his_list, maxlen=max_len, padding='post')\n",
    "        else:\n",
    "            x[name] = df[name].values\n",
    "    \n",
    "    return x, dnn_feature_columns"
   ]
  },
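  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`pad_sequences(..., maxlen=max_len, padding='post')` right-pads each history with zeros and, with the default `truncating='pre'`, keeps the most recent `maxlen` clicks; the equivalent logic in plain Python, for illustration only:\n",
    "\n",
    "```python\n",
    "def pad_post(seqs, maxlen):\n",
    "    # Keep the last maxlen items, then right-pad with 0 (mirrors padding='post')\n",
    "    out = []\n",
    "    for s in seqs:\n",
    "        s = list(s)[-maxlen:]\n",
    "        out.append(s + [0] * (maxlen - len(s)))\n",
    "    return out\n",
    "\n",
    "pad_post([[1, 2, 3], [4]], maxlen=2)  # [[2, 3], [4, 0]]\n",
    "```"
   ]
  },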
  {
   "cell_type": "code",
   "execution_count": 164,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Split the features into groups\n",
    "sparse_fea = ['user_id', 'click_article_id', 'category_id', 'click_environment', 'click_deviceGroup', \n",
    "              'click_os', 'click_country', 'click_region', 'click_referrer_type', 'is_cat_hab']\n",
    "\n",
    "behavior_fea = ['click_article_id']\n",
    "\n",
    "hist_behavior_fea = ['hist_click_article_id']\n",
    "\n",
    "dense_fea = ['sim0', 'time_diff0', 'word_diff0', 'sim_max', 'sim_min', 'sim_sum', 'sim_mean', 'score',\n",
    "             'rank','click_size','time_diff_mean','active_level','user_time_hob1','user_time_hob2',\n",
    "             'words_hbo','words_count']"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 165,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Normalize the dense features; neural-network training generally needs scaled inputs\n",
    "mm = MinMaxScaler()\n",
    "\n",
    "# Special handling: if invalid values (inf, etc.) slipped in elsewhere, normalization will fail.\n",
    "# Leave this commented out at first; if the code below raises an error, first try to find and\n",
    "# fix the source of the inf values before falling back to replacing them here.\n",
    "# trn_user_item_feats_df_din_model.replace([np.inf, -np.inf], 0, inplace=True)\n",
    "# tst_user_item_feats_df_din_model.replace([np.inf, -np.inf], 0, inplace=True)\n",
    "\n",
    "for feat in dense_fea:\n",
    "    # Fit the scaler on the training set only, then apply it to validation and test\n",
    "    trn_user_item_feats_df_din_model[feat] = mm.fit_transform(trn_user_item_feats_df_din_model[[feat]])\n",
    "    \n",
    "    if val_user_item_feats_df_din_model is not None:\n",
    "        val_user_item_feats_df_din_model[feat] = mm.transform(val_user_item_feats_df_din_model[[feat]])\n",
    "    \n",
    "    tst_user_item_feats_df_din_model[feat] = mm.transform(tst_user_item_feats_df_din_model[[feat]])"
   ]
  },
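  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "When normalizing, the scaler should be fitted on the training split only and then reused on the validation/test splits, so that all splits share one scale and no test statistics leak into training. A minimal sketch on toy frames (the column name `sim0` is just borrowed from the dense feature list):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "from sklearn.preprocessing import MinMaxScaler\n",
    "\n",
    "# Toy stand-ins for the train / test feature tables\n",
    "trn = pd.DataFrame({'sim0': [0.0, 2.0, 4.0]})\n",
    "tst = pd.DataFrame({'sim0': [1.0, 3.0]})\n",
    "\n",
    "mm = MinMaxScaler()\n",
    "# Fit on the training split only, then reuse the fitted min/max elsewhere\n",
    "trn['sim0'] = mm.fit_transform(trn[['sim0']])\n",
    "tst['sim0'] = mm.transform(tst[['sim0']])\n",
    "\n",
    "print(trn['sim0'].tolist())  # [0.0, 0.5, 1.0]\n",
    "print(tst['sim0'].tolist())  # [0.25, 0.75]\n",
    "```"
   ]
  },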
  {
   "cell_type": "code",
   "execution_count": 166,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Prepare the training data\n",
    "x_trn, dnn_feature_columns = get_din_feats_columns(trn_user_item_feats_df_din_model, dense_fea, \n",
    "                                               sparse_fea, behavior_fea, hist_behavior_fea, max_len=50)\n",
    "y_trn = trn_user_item_feats_df_din_model['label'].values\n",
    "\n",
    "if informal:\n",
    "    # Prepare the validation data\n",
    "    x_val, dnn_feature_columns = get_din_feats_columns(val_user_item_feats_df_din_model, dense_fea, \n",
    "                                                   sparse_fea, behavior_fea, hist_behavior_fea, max_len=50)\n",
    "    y_val = val_user_item_feats_df_din_model['label'].values\n",
    "    \n",
    "# Drop the label column if it slipped into the dense feature list\n",
    "dense_fea = [x for x in dense_fea if x != 'label']\n",
    "x_tst, dnn_feature_columns = get_din_feats_columns(tst_user_item_feats_df_din_model, dense_fea, \n",
    "                                               sparse_fea, behavior_fea, hist_behavior_fea, max_len=50)"
   ]
  },
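  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The padding step inside `get_din_feats_columns` turns the variable-length click histories into one `(n_samples, max_len)` matrix. A minimal NumPy sketch of what `pad_sequences(..., padding='post')` does here (`pad_post` is a hypothetical helper; note Keras truncates from the front by default, keeping the most recent `max_len` clicks):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def pad_post(seqs, maxlen, value=0):\n",
    "    # Right-pad with `value`; truncate from the front so the most recent\n",
    "    # items survive, like pad_sequences' default truncating='pre'\n",
    "    out = np.full((len(seqs), maxlen), value, dtype=np.int64)\n",
    "    for i, s in enumerate(seqs):\n",
    "        s = s[-maxlen:]\n",
    "        out[i, :len(s)] = s\n",
    "    return out\n",
    "\n",
    "print(pad_post([[3, 7], [1, 2, 3, 4, 5]], maxlen=4))\n",
    "# [[3 7 0 0]\n",
    "#  [2 3 4 5]]\n",
    "```"
   ]
  },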
  {
   "cell_type": "code",
   "execution_count": 167,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:\n",
      "The following Variables were used a Lambda layer's call (lambda_3), but\n",
      "are not present in its tracked objects:\n",
      "  <tf.Variable 'attention_sequence_pooling_layer_3/local_activation_unit_3/kernel:0' shape=(40, 1) dtype=float32>\n",
      "  <tf.Variable 'attention_sequence_pooling_layer_3/local_activation_unit_3/bias:0' shape=(1,) dtype=float32>\n",
      "It is possible that this is intended behavior, but it is more likely\n",
      "an omission. This is a strong indication that this layer should be\n",
      "formulated as a subclassed Layer rather than a Lambda layer.\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "WARNING:tensorflow:\n",
      "The following Variables were used a Lambda layer's call (lambda_3), but\n",
      "are not present in its tracked objects:\n",
      "  <tf.Variable 'attention_sequence_pooling_layer_3/local_activation_unit_3/kernel:0' shape=(40, 1) dtype=float32>\n",
      "  <tf.Variable 'attention_sequence_pooling_layer_3/local_activation_unit_3/bias:0' shape=(1,) dtype=float32>\n",
      "It is possible that this is intended behavior, but it is more likely\n",
      "an omission. This is a strong indication that this layer should be\n",
      "formulated as a subclassed Layer rather than a Lambda layer.\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model: \"functional_7\"\n",
      "__________________________________________________________________________________________________\n",
      "Layer (type)                    Output Shape         Param #     Connected to                     \n",
      "==================================================================================================\n",
      "user_id (InputLayer)            [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "click_article_id (InputLayer)   [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "category_id (InputLayer)        [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "click_environment (InputLayer)  [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "click_deviceGroup (InputLayer)  [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "click_os (InputLayer)           [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "click_country (InputLayer)      [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "click_region (InputLayer)       [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "click_referrer_type (InputLayer [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "is_cat_hab (InputLayer)         [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "sparse_emb_user_id (Embedding)  (None, 1, 32)        8000000     user_id[0][0]                    \n",
      "__________________________________________________________________________________________________\n",
      "sparse_seq_emb_hist_click_artic multiple             11649504    click_article_id[0][0]           \n",
      "                                                                 hist_click_article_id[0][0]      \n",
      "                                                                 click_article_id[0][0]           \n",
      "__________________________________________________________________________________________________\n",
      "sparse_emb_category_id (Embeddi (None, 1, 32)        14752       category_id[0][0]                \n",
      "__________________________________________________________________________________________________\n",
      "sparse_emb_click_environment (E (None, 1, 32)        160         click_environment[0][0]          \n",
      "__________________________________________________________________________________________________\n",
      "sparse_emb_click_deviceGroup (E (None, 1, 32)        192         click_deviceGroup[0][0]          \n",
      "__________________________________________________________________________________________________\n",
      "sparse_emb_click_os (Embedding) (None, 1, 32)        672         click_os[0][0]                   \n",
      "__________________________________________________________________________________________________\n",
      "sparse_emb_click_country (Embed (None, 1, 32)        384         click_country[0][0]              \n",
      "__________________________________________________________________________________________________\n",
      "sparse_emb_click_region (Embedd (None, 1, 32)        928         click_region[0][0]               \n",
      "__________________________________________________________________________________________________\n",
      "sparse_emb_click_referrer_type  (None, 1, 32)        256         click_referrer_type[0][0]        \n",
      "__________________________________________________________________________________________________\n",
      "sparse_emb_is_cat_hab (Embeddin (None, 1, 32)        64          is_cat_hab[0][0]                 \n",
      "__________________________________________________________________________________________________\n",
      "no_mask_15 (NoMask)             (None, 1, 32)        0           sparse_emb_user_id[0][0]         \n",
      "                                                                 sparse_seq_emb_hist_click_article\n",
      "                                                                 sparse_emb_category_id[0][0]     \n",
      "                                                                 sparse_emb_click_environment[0][0\n",
      "                                                                 sparse_emb_click_deviceGroup[0][0\n",
      "                                                                 sparse_emb_click_os[0][0]        \n",
      "                                                                 sparse_emb_click_country[0][0]   \n",
      "                                                                 sparse_emb_click_region[0][0]    \n",
      "                                                                 sparse_emb_click_referrer_type[0]\n",
      "                                                                 sparse_emb_is_cat_hab[0][0]      \n",
      "__________________________________________________________________________________________________\n",
      "hist_click_article_id (InputLay [(None, 50)]         0                                            \n",
      "__________________________________________________________________________________________________\n",
      "concatenate_12 (Concatenate)    (None, 1, 320)       0           no_mask_15[0][0]                 \n",
      "                                                                 no_mask_15[1][0]                 \n",
      "                                                                 no_mask_15[2][0]                 \n",
      "                                                                 no_mask_15[3][0]                 \n",
      "                                                                 no_mask_15[4][0]                 \n",
      "                                                                 no_mask_15[5][0]                 \n",
      "                                                                 no_mask_15[6][0]                 \n",
      "                                                                 no_mask_15[7][0]                 \n",
      "                                                                 no_mask_15[8][0]                 \n",
      "                                                                 no_mask_15[9][0]                 \n",
      "__________________________________________________________________________________________________\n",
      "no_mask_16 (NoMask)             (None, 1, 320)       0           concatenate_12[0][0]             \n",
      "__________________________________________________________________________________________________\n",
      "attention_sequence_pooling_laye (None, 1, 32)        13961       sparse_seq_emb_hist_click_article\n",
      "                                                                 sparse_seq_emb_hist_click_article\n",
      "__________________________________________________________________________________________________\n",
      "concatenate_13 (Concatenate)    (None, 1, 352)       0           no_mask_16[0][0]                 \n",
      "                                                                 attention_sequence_pooling_layer_\n",
      "__________________________________________________________________________________________________\n",
      "sim0 (InputLayer)               [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "time_diff0 (InputLayer)         [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "word_diff0 (InputLayer)         [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "sim_max (InputLayer)            [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "sim_min (InputLayer)            [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "sim_sum (InputLayer)            [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "sim_mean (InputLayer)           [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "score (InputLayer)              [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "rank (InputLayer)               [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "click_size (InputLayer)         [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "time_diff_mean (InputLayer)     [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "active_level (InputLayer)       [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "user_time_hob1 (InputLayer)     [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "user_time_hob2 (InputLayer)     [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "words_hbo (InputLayer)          [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "words_count (InputLayer)        [(None, 1)]          0                                            \n",
      "__________________________________________________________________________________________________\n",
      "flatten_9 (Flatten)             (None, 352)          0           concatenate_13[0][0]             \n",
      "__________________________________________________________________________________________________\n",
      "no_mask_18 (NoMask)             (None, 1)            0           sim0[0][0]                       \n",
      "                                                                 time_diff0[0][0]                 \n",
      "                                                                 word_diff0[0][0]                 \n",
      "                                                                 sim_max[0][0]                    \n",
      "                                                                 sim_min[0][0]                    \n",
      "                                                                 sim_sum[0][0]                    \n",
      "                                                                 sim_mean[0][0]                   \n",
      "                                                                 score[0][0]                      \n",
      "                                                                 rank[0][0]                       \n",
      "                                                                 click_size[0][0]                 \n",
      "                                                                 time_diff_mean[0][0]             \n",
      "                                                                 active_level[0][0]               \n",
      "                                                                 user_time_hob1[0][0]             \n",
      "                                                                 user_time_hob2[0][0]             \n",
      "                                                                 words_hbo[0][0]                  \n",
      "                                                                 words_count[0][0]                \n",
      "__________________________________________________________________________________________________\n",
      "no_mask_17 (NoMask)             (None, 352)          0           flatten_9[0][0]                  \n",
      "__________________________________________________________________________________________________\n",
      "concatenate_14 (Concatenate)    (None, 16)           0           no_mask_18[0][0]                 \n",
      "                                                                 no_mask_18[1][0]                 \n",
      "                                                                 no_mask_18[2][0]                 \n",
      "                                                                 no_mask_18[3][0]                 \n",
      "                                                                 no_mask_18[4][0]                 \n",
      "                                                                 no_mask_18[5][0]                 \n",
      "                                                                 no_mask_18[6][0]                 \n",
      "                                                                 no_mask_18[7][0]                 \n",
      "                                                                 no_mask_18[8][0]                 \n",
      "                                                                 no_mask_18[9][0]                 \n",
      "                                                                 no_mask_18[10][0]                \n",
      "                                                                 no_mask_18[11][0]                \n",
      "                                                                 no_mask_18[12][0]                \n",
      "                                                                 no_mask_18[13][0]                \n",
      "                                                                 no_mask_18[14][0]                \n",
      "                                                                 no_mask_18[15][0]                \n",
      "__________________________________________________________________________________________________\n",
      "flatten_10 (Flatten)            (None, 352)          0           no_mask_17[0][0]                 \n",
      "__________________________________________________________________________________________________\n",
      "flatten_11 (Flatten)            (None, 16)           0           concatenate_14[0][0]             \n",
      "__________________________________________________________________________________________________\n",
      "no_mask_19 (NoMask)             multiple             0           flatten_10[0][0]                 \n",
      "                                                                 flatten_11[0][0]                 \n",
      "__________________________________________________________________________________________________\n",
      "concatenate_15 (Concatenate)    (None, 368)          0           no_mask_19[0][0]                 \n",
      "                                                                 no_mask_19[1][0]                 \n",
      "__________________________________________________________________________________________________\n",
      "dnn_3 (DNN)                     (None, 80)           89880       concatenate_15[0][0]             \n",
      "__________________________________________________________________________________________________\n",
      "dense_3 (Dense)                 (None, 1)            80          dnn_3[0][0]                      \n",
      "__________________________________________________________________________________________________\n",
      "prediction_layer_3 (PredictionL (None, 1)            1           dense_3[0][0]                    \n",
      "==================================================================================================\n",
      "Total params: 19,770,834\n",
      "Trainable params: 19,770,594\n",
      "Non-trainable params: 240\n",
      "__________________________________________________________________________________________________\n"
     ]
    }
   ],
   "source": [
    "# Build the model\n",
    "model = DIN(dnn_feature_columns, behavior_fea)\n",
    "\n",
    "# Inspect the model structure\n",
    "model.summary()\n",
    "\n",
    "# Compile the model\n",
    "model.compile('adam', 'binary_crossentropy', metrics=['binary_crossentropy', tf.keras.metrics.AUC()])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 169,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/2\n",
      "1193/1193 [==============================] - 237s 199ms/step - loss: 0.0149 - binary_crossentropy: 0.0131 - auc_3: 0.9998 - val_loss: 2.0515 - val_binary_crossentropy: 2.0496 - val_auc_3: 0.6588\n",
      "Epoch 2/2\n",
      "1193/1193 [==============================] - 238s 199ms/step - loss: 0.0025 - binary_crossentropy: 7.4345e-04 - auc_3: 1.0000 - val_loss: 2.0641 - val_binary_crossentropy: 2.0625 - val_auc_3: 0.6579\n"
     ]
    }
   ],
   "source": [
    "# Train the model\n",
    "if informal:\n",
    "    history = model.fit(x_trn, y_trn, verbose=1, epochs=2, validation_data=(x_val, y_val), batch_size=256)\n",
    "else:\n",
    "    # Alternatively, hold out a validation split from the training data:\n",
    "    # history = model.fit(x_trn, y_trn, verbose=1, epochs=3, validation_split=0.3, batch_size=256)\n",
    "    history = model.fit(x_trn, y_trn, verbose=1, epochs=2, batch_size=256)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 170,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1954/1954 [==============================] - 28s 14ms/step\n"
     ]
    }
   ],
   "source": [
    "# Predict on the test set\n",
    "tst_user_item_feats_df_din_model['pred_score'] = model.predict(x_tst, verbose=1, batch_size=256)\n",
    "tst_user_item_feats_df_din_model[['user_id', 'click_article_id', 'pred_score']].to_csv(pathcache + 'din_rank_score.csv', index=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 171,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Re-rank the predictions and generate the submission file\n",
    "rank_results = tst_user_item_feats_df_din_model[['user_id', 'click_article_id', 'pred_score']]\n",
    "submit(rank_results, topk=5, model_name='din')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 178,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/2\n",
      "956/956 [==============================] - 206s 215ms/step - loss: 0.0022 - binary_crossentropy: 0.0012 - auc_3: 1.0000 - val_loss: 0.0038 - val_binary_crossentropy: 0.0028 - val_auc_3: 0.9999\n",
      "Epoch 2/2\n",
      "956/956 [==============================] - 201s 211ms/step - loss: 0.0027 - binary_crossentropy: 0.0017 - auc_3: 1.0000 - val_loss: 0.0080 - val_binary_crossentropy: 0.0070 - val_auc_3: 0.9999\n",
      "238/238 [==============================] - 3s 14ms/step\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "<ipython-input-178-cea30d132280>:38: SettingWithCopyWarning: \n",
      "A value is trying to be set on a copy of a slice from a DataFrame.\n",
      "Try using .loc[row_indexer,col_indexer] = value instead\n",
      "\n",
      "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n",
      "  valid_idx['pred_score'] = model.predict(x_val, verbose=1, batch_size=256)\n",
      "<ipython-input-178-cea30d132280>:41: SettingWithCopyWarning: \n",
      "A value is trying to be set on a copy of a slice from a DataFrame.\n",
      "Try using .loc[row_indexer,col_indexer] = value instead\n",
      "\n",
      "See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy\n",
      "  valid_idx['pred_rank'] = valid_idx.groupby(['user_id'])['pred_score'].rank(ascending=False, method='first')\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "1954/1954 [==============================] - 30s 15ms/step\n",
      "Epoch 1/2\n",
      "954/954 [==============================] - 203s 213ms/step - loss: 0.0038 - binary_crossentropy: 0.0028 - auc_3: 1.0000 - val_loss: 0.0023 - val_binary_crossentropy: 0.0012 - val_auc_3: 0.9999\n",
      "Epoch 2/2\n",
      "954/954 [==============================] - 201s 211ms/step - loss: 0.0018 - binary_crossentropy: 8.4140e-04 - auc_3: 1.0000 - val_loss: 0.0047 - val_binary_crossentropy: 0.0038 - val_auc_3: 0.9999\n",
      "240/240 [==============================] - 4s 15ms/step\n",
      "1954/1954 [==============================] - 30s 15ms/step\n",
      "Epoch 1/2\n",
      "955/955 [==============================] - 201s 210ms/step - loss: 0.0034 - binary_crossentropy: 0.0025 - auc_3: 0.9999 - val_loss: 0.0019 - val_binary_crossentropy: 8.6289e-04 - val_auc_3: 1.0000\n",
      "Epoch 2/2\n",
      "955/955 [==============================] - 203s 212ms/step - loss: 0.0013 - binary_crossentropy: 4.0795e-04 - auc_3: 1.0000 - val_loss: 0.0051 - val_binary_crossentropy: 0.0043 - val_auc_3: 0.9999\n",
      "239/239 [==============================] - 4s 16ms/step\n",
      "1954/1954 [==============================] - 31s 16ms/step\n",
      "Epoch 1/2\n",
      "955/955 [==============================] - 205s 215ms/step - loss: 0.0041 - binary_crossentropy: 0.0032 - auc_3: 0.9999 - val_loss: 0.0020 - val_binary_crossentropy: 9.3124e-04 - val_auc_3: 1.0000\n",
      "Epoch 2/2\n",
      "955/955 [==============================] - 205s 215ms/step - loss: 0.0015 - binary_crossentropy: 5.9034e-04 - auc_3: 1.0000 - val_loss: 0.0083 - val_binary_crossentropy: 0.0074 - val_auc_3: 0.9998\n",
      "239/239 [==============================] - 4s 16ms/step\n",
      "1954/1954 [==============================] - 30s 16ms/step\n",
      "Epoch 1/2\n",
      "954/954 [==============================] - 204s 214ms/step - loss: 0.0035 - binary_crossentropy: 0.0025 - auc_3: 0.9999 - val_loss: 0.0019 - val_binary_crossentropy: 8.6695e-04 - val_auc_3: 1.0000\n",
      "Epoch 2/2\n",
      "954/954 [==============================] - 204s 214ms/step - loss: 0.0015 - binary_crossentropy: 6.3749e-04 - auc_3: 1.0000 - val_loss: 0.0035 - val_binary_crossentropy: 0.0027 - val_auc_3: 1.0000\n",
      "240/240 [==============================] - 4s 16ms/step\n",
      "1954/1954 [==============================] - 31s 16ms/step\n"
     ]
    },
    {
     "ename": "NameError",
     "evalue": "name 'save_path' is not defined",
     "output_type": "error",
     "traceback": [
      "\u001b[0;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[0;31mNameError\u001b[0m                                 Traceback (most recent call last)",
      "\u001b[0;32m<ipython-input-178-cea30d132280>\u001b[0m in \u001b[0;36m<module>\u001b[0;34m\u001b[0m\n\u001b[1;32m     51\u001b[0m \u001b[0mscore_df\u001b[0m \u001b[0;34m=\u001b[0m \u001b[0mscore_df\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mmerge\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0mscore_df_\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mhow\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m'left'\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mon\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m'user_id'\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m'click_article_id'\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     52\u001b[0m \u001b[0;31m# 保存训练集交叉验证产生的新特征\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0;32m---> 53\u001b[0;31m \u001b[0mscore_df\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m[\u001b[0m\u001b[0;34m'user_id'\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m'click_article_id'\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m'pred_score'\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m'pred_rank'\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0;34m'label'\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m]\u001b[0m\u001b[0;34m.\u001b[0m\u001b[0mto_csv\u001b[0m\u001b[0;34m(\u001b[0m\u001b[0msave_path\u001b[0m \u001b[0;34m+\u001b[0m \u001b[0;34m'trn_din_cls_feats.csv'\u001b[0m\u001b[0;34m,\u001b[0m \u001b[0mindex\u001b[0m\u001b[0;34m=\u001b[0m\u001b[0;32mFalse\u001b[0m\u001b[0;34m)\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n\u001b[0m\u001b[1;32m     54\u001b[0m \u001b[0;34m\u001b[0m\u001b[0m\n\u001b[1;32m     55\u001b[0m \u001b[0;31m# 测试集的预测结果，多次交叉验证求平均,将预测的score和对应的rank特征保存，可以用于后面的staking，这里还可以构造其他更多的特征\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0;34m\u001b[0m\u001b[0m\n",
      "\u001b[0;31mNameError\u001b[0m: name 'save_path' is not defined"
     ]
    }
   ],
   "source": [
    "# 五折交叉验证，这里的五折交叉是以用户为目标进行五折划分\n",
    "#  这一部分与前面的单独训练和验证是分开的\n",
    "def get_kfold_users(trn_df, n=5):\n",
    "    user_ids = trn_df['user_id'].unique()\n",
    "    user_set = [user_ids[i::n] for i in range(n)]\n",
    "    return user_set\n",
    "\n",
    "k_fold = 5\n",
    "trn_df = trn_user_item_feats_df_din_model\n",
    "user_set = get_kfold_users(trn_df, n=k_fold)\n",
    "\n",
    "score_list = []\n",
    "score_df = trn_df[['user_id', 'click_article_id', 'label']]\n",
    "sub_preds = np.zeros(tst_user_item_feats_df_rank_model.shape[0])\n",
    "\n",
    "dense_fea = [x for x in dense_fea if x != 'label']\n",
    "x_tst, dnn_feature_columns = get_din_feats_columns(tst_user_item_feats_df_din_model, dense_fea, \n",
    "                                                   sparse_fea, behavior_fea, hist_behavior_fea, max_len=50)\n",
    "\n",
    "# 五折交叉验证，并将中间结果保存用于staking\n",
    "for n_fold, valid_user in enumerate(user_set):\n",
    "    train_idx = trn_df[~trn_df['user_id'].isin(valid_user)] # add slide user\n",
    "    valid_idx = trn_df[trn_df['user_id'].isin(valid_user)]\n",
    "    \n",
    "    # 准备训练数据\n",
    "    x_trn, dnn_feature_columns = get_din_feats_columns(train_idx, dense_fea, \n",
    "                                                       sparse_fea, behavior_fea, hist_behavior_fea, max_len=50)\n",
    "    y_trn = train_idx['label'].values\n",
    "\n",
    "    # 准备验证数据\n",
    "    x_val, dnn_feature_columns = get_din_feats_columns(valid_idx, dense_fea, \n",
    "                                                   sparse_fea, behavior_fea, hist_behavior_fea, max_len=50)\n",
    "    y_val = valid_idx['label'].values\n",
    "    \n",
    "    history = model.fit(x_trn, y_trn, verbose=1, epochs=2, validation_data=(x_val, y_val) , batch_size=256)\n",
    "    \n",
    "    # 预测验证集结果\n",
    "    valid_idx['pred_score'] = model.predict(x_val, verbose=1, batch_size=256)   \n",
    "    \n",
    "    valid_idx.sort_values(by=['user_id', 'pred_score'])\n",
    "    valid_idx['pred_rank'] = valid_idx.groupby(['user_id'])['pred_score'].rank(ascending=False, method='first')\n",
    "    \n",
    "    # 将验证集的预测结果放到一个列表中，后面进行拼接\n",
    "    score_list.append(valid_idx[['user_id', 'click_article_id', 'pred_score', 'pred_rank']])\n",
    "    \n",
    "    # 如果是线上测试，需要计算每次交叉验证的结果相加，最后求平均\n",
    "    if informal:\n",
    "        sub_preds += model.predict(x_tst, verbose=1, batch_size=256)[:, 0]   \n",
    "    \n",
    "score_df_ = pd.concat(score_list, axis=0)\n",
    "score_df = score_df.merge(score_df_, how='left', on=['user_id', 'click_article_id'])\n",
    "# 保存训练集交叉验证产生的新特征\n",
    "score_df[['user_id', 'click_article_id', 'pred_score', 'pred_rank', 'label']].to_csv(pathcache + 'trn_din_cls_feats.csv', index=False)\n",
    "    \n",
    "# 测试集的预测结果，多次交叉验证求平均,将预测的score和对应的rank特征保存，可以用于后面的staking，这里还可以构造其他更多的特征\n",
    "tst_user_item_feats_df_din_model['pred_score'] = sub_preds / k_fold\n",
    "tst_user_item_feats_df_din_model['pred_score'] = tst_user_item_feats_df_din_model['pred_score'].transform(lambda x: norm_sim(x))\n",
    "tst_user_item_feats_df_din_model.sort_values(by=['user_id', 'pred_score'])\n",
    "tst_user_item_feats_df_din_model['pred_rank'] = tst_user_item_feats_df_din_model.groupby(['user_id'])['pred_score'].rank(ascending=False, method='first')\n",
    "\n",
    "# 保存测试集交叉验证的新特征\n",
    "tst_user_item_feats_df_din_model[['user_id', 'click_article_id', 'pred_score', 'pred_rank']].to_csv(pathcache + 'tst_din_cls_feats.csv', index=False)"
   ]
  },
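  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The loop above relies on `get_kfold_users`, defined earlier in the notebook, which splits the unique users into `k_fold` disjoint groups so that all of one user's samples land in the same fold. A minimal sketch of such a helper (the striding pattern here is an assumption, not necessarily the exact implementation used earlier):\n",
    "\n",
    "```python\n",
    "def get_kfold_users(trn_df, n=5):\n",
    "    # split the unique user ids into n disjoint groups by striding\n",
    "    user_ids = trn_df['user_id'].unique()\n",
    "    return [user_ids[i::n] for i in range(n)]\n",
    "```\n",
    "\n",
    "Grouping by user rather than by row prevents the same user's samples from leaking between the training and validation folds."
   ]
  },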
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Model Ensembling"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Weighted Fusion"
   ]
  },
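  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The fusion and stacking code below call `norm_sim`, defined earlier in the notebook, to min-max normalize a score series onto [0, 1] so that scores from different models are comparable before they are summed. An equivalent sketch (treat the exact signature, including the `weight` offset, as an assumption):\n",
    "\n",
    "```python\n",
    "def norm_sim(sim_df, weight=0.0):\n",
    "    # min-max normalize a pandas Series of scores onto [0, 1], plus an optional offset\n",
    "    min_sim, max_sim = sim_df.min(), sim_df.max()\n",
    "    if max_sim == min_sim:\n",
    "        # degenerate case: all scores equal, map everything to 1.0\n",
    "        return sim_df.apply(lambda s: 1.0) + weight\n",
    "    return (sim_df - min_sim) / (max_sim - min_sim) + weight\n",
    "```\n",
    "\n",
    "Without this step, a model whose raw scores span a larger range would dominate the fused sum."
   ]
  },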
  {
   "cell_type": "code",
   "execution_count": 180,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Read the ranking results produced by each model\n",
    "lgb_ranker = pd.read_csv(pathcache + 'lgb_ranker_score.csv')\n",
    "lgb_cls = pd.read_csv(pathcache + 'lgb_cls_score.csv')\n",
    "din_ranker = pd.read_csv(pathcache + 'din_rank_score.csv')\n",
    "\n",
    "# The cross-validation test outputs could be used here instead for the weighted fusion"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 181,
   "metadata": {},
   "outputs": [],
   "source": [
    "rank_model = {'lgb_ranker': lgb_ranker, \n",
    "              'lgb_cls': lgb_cls, \n",
    "              'din_ranker': din_ranker}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 182,
   "metadata": {},
   "outputs": [],
   "source": [
    "def get_ensumble_predict_topk(rank_model, topk=5):\n",
    "    # Min-max normalize the LGB ranker scores so all three models are on the same scale\n",
    "    rank_model['lgb_ranker']['pred_score'] = norm_sim(rank_model['lgb_ranker']['pred_score'])\n",
    "    \n",
    "    # Stack the three models' results and sum the scores per (user, article) pair\n",
    "    final_recall = pd.concat([rank_model['lgb_cls'], rank_model['din_ranker'], rank_model['lgb_ranker']])\n",
    "    final_recall = final_recall.groupby(['user_id', 'click_article_id'])['pred_score'].sum().reset_index()\n",
    "    \n",
    "    submit(final_recall, topk=topk, model_name='ensemble_fuse')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 183,
   "metadata": {},
   "outputs": [],
   "source": [
    "get_ensumble_predict_topk(rank_model)"
   ]
  },
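  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The fusion above weights the three models equally. If offline validation shows one model is clearly stronger, the plain sum can be replaced by a weighted sum; a hypothetical helper sketching this (`weighted_fuse` and the weights are illustrative, not part of the original notebook):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "def weighted_fuse(score_dfs, weights):\n",
    "    # scale each model's (already normalized) scores by its weight, then sum per pair\n",
    "    parts = []\n",
    "    for name, df in score_dfs.items():\n",
    "        d = df[['user_id', 'click_article_id', 'pred_score']].copy()\n",
    "        d['pred_score'] *= weights[name]\n",
    "        parts.append(d)\n",
    "    merged = pd.concat(parts, ignore_index=True)\n",
    "    return merged.groupby(['user_id', 'click_article_id'])['pred_score'].sum().reset_index()\n",
    "```\n",
    "\n",
    "For example, `weighted_fuse(rank_model, {'lgb_ranker': 0.5, 'lgb_cls': 0.3, 'din_ranker': 0.2})` would emphasize the ranker before passing the result to `submit`."
   ]
  },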
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Stacking"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 196,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Read the out-of-fold result files generated by each model's cross-validation\n",
    "# Training set\n",
    "trn_lgb_ranker_feats = pd.read_csv(pathcache + 'trn_lgb_ranker_feats.csv')\n",
    "trn_lgb_cls_feats = pd.read_csv(pathcache + 'trn_lgb_cls_feats.csv')\n",
    "trn_din_cls_feats = pd.read_csv(pathcache + 'trn_din_cls_feats.csv')\n",
    "\n",
    "# Test set\n",
    "tst_lgb_ranker_feats = pd.read_csv(pathcache + 'tst_lgb_ranker_feats.csv')\n",
    "tst_lgb_cls_feats = pd.read_csv(pathcache + 'tst_lgb_cls_feats.csv')\n",
    "tst_din_cls_feats = pd.read_csv(pathcache + 'tst_din_cls_feats.csv')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 215,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sort every frame by the same keys so the rows align positionally when the features are concatenated\n",
    "trn_lgb_ranker_feats.sort_values(by=['user_id', 'click_article_id'], inplace=True, ignore_index=True)\n",
    "trn_lgb_cls_feats.sort_values(by=['user_id', 'click_article_id'], inplace=True, ignore_index=True)\n",
    "trn_din_cls_feats.sort_values(by=['user_id', 'click_article_id'], inplace=True, ignore_index=True)\n",
    "\n",
    "tst_lgb_ranker_feats.sort_values(by=['user_id', 'click_article_id'], inplace=True, ignore_index=True)\n",
    "tst_lgb_cls_feats.sort_values(by=['user_id', 'click_article_id'], inplace=True, ignore_index=True)\n",
    "tst_din_cls_feats.sort_values(by=['user_id', 'click_article_id'], inplace=True, ignore_index=True)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 216,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Concatenate the features output by the three models; the sort above guarantees the rows line up\n",
    "\n",
    "finall_trn_ranker_feats = trn_lgb_ranker_feats[['user_id', 'click_article_id', 'label']].copy()\n",
    "finall_tst_ranker_feats = tst_lgb_ranker_feats[['user_id', 'click_article_id']].copy()\n",
    "\n",
    "for idx, trn_model in enumerate([trn_lgb_ranker_feats, trn_lgb_cls_feats, trn_din_cls_feats]):\n",
    "    for feat in ['pred_score', 'pred_rank']:\n",
    "        col_name = feat + '_' + str(idx)\n",
    "        finall_trn_ranker_feats[col_name] = trn_model[feat]\n",
    "\n",
    "for idx, tst_model in enumerate([tst_lgb_ranker_feats, tst_lgb_cls_feats, tst_din_cls_feats]):\n",
    "    for feat in ['pred_score', 'pred_rank']:\n",
    "        col_name = feat + '_' + str(idx)\n",
    "        finall_tst_ranker_feats[col_name] = tst_model[feat]"
   ]
  },
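  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The positional assignment above works only because all six frames were sorted by the same keys in the previous cell. A key-based merge is a more defensive way to build the same feature matrix; a sketch (`stack_feats` is a hypothetical helper, with column suffixes following the indexed naming above):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "def stack_feats(base, model_dfs):\n",
    "    # join each model's pred_score / pred_rank onto the base keys by merging,\n",
    "    # so row order no longer has to match across frames\n",
    "    out = base.copy()\n",
    "    for idx, df in enumerate(model_dfs):\n",
    "        cols = df[['user_id', 'click_article_id', 'pred_score', 'pred_rank']].rename(\n",
    "            columns={'pred_score': 'pred_score_%d' % idx, 'pred_rank': 'pred_rank_%d' % idx})\n",
    "        out = out.merge(cols, how='left', on=['user_id', 'click_article_id'])\n",
    "    return out\n",
    "```\n",
    "\n",
    "The merge costs a little more than positional assignment but cannot silently mis-align features if one frame is sorted differently."
   ]
  },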
  {
   "cell_type": "code",
   "execution_count": 219,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Fit a logistic regression on the features produced by cross-validation, then predict on the test set\n",
    "# Note: during cross-validation, extra features derived from the predicted values can be constructed to enrich this simple second-level model\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "\n",
    "feat_cols = ['pred_score_0', 'pred_rank_0', 'pred_score_1', 'pred_rank_1', 'pred_score_2', 'pred_rank_2']\n",
    "\n",
    "trn_x = finall_trn_ranker_feats[feat_cols]\n",
    "trn_y = finall_trn_ranker_feats['label']\n",
    "\n",
    "tst_x = finall_tst_ranker_feats[feat_cols]\n",
    "\n",
    "# Define the model\n",
    "lr = LogisticRegression()\n",
    "\n",
    "# Train\n",
    "lr.fit(trn_x, trn_y)\n",
    "\n",
    "# Predict the positive-class (click) probability\n",
    "finall_tst_ranker_feats['pred_score'] = lr.predict_proba(tst_x)[:, 1]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 220,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Re-rank the predictions and generate the submission file\n",
    "rank_results = finall_tst_ranker_feats[['user_id', 'click_article_id', 'pred_score']]\n",
    "submit(rank_results, topk=5, model_name='ensemble_stacking')"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
