{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Problem Description\n",
    "Use an XGBoost model to complete the product-classification task (hyperparameter tuning is required).\n",
    "\n",
    "## Hints\n",
    "To lower the barrier of feature engineering and keep submissions comparable, please use the feature-engineered data provided on the course website, either as a CSV (RentListingInquries_FE_train.csv) or in sparse-encoded form (RentListingInquries_FE_train.bin). XGBoost can be called either through its native API or through the sklearn wrapper; choose whichever you prefer. If you use the native API, reading the sparse-format file is recommended.\n",
    "\n",
    "## Grading Criteria\n",
    "Either the native XGBoost API or the sklearn wrapper is acceptable.\n",
    "1. Model training: hyperparameter tuning\n",
    "    1. Initial estimate of the number of weak learners: 20 points\n",
    "    2. Tuning the maximum tree depth and min_child_weight (optional): 20 points\n",
    "    3. Tuning the regularization parameters: 20 points\n",
    "    4. Re-tuning the number of weak learners: 10 points\n",
    "    5. Tuning the row/column subsampling parameters: 10 points\n",
    "2. Running the model on the test set: 10 points\n",
    "3. Generating the test-result file: 10 points\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Because the full dataset is large, we sample the training data (for learning purposes) before tuning hyperparameters"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Import pandas, sample the data, and save the sample"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style>\n",
       "    .dataframe thead tr:only-child th {\n",
       "        text-align: right;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: left;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>bathrooms</th>\n",
       "      <th>bedrooms</th>\n",
       "      <th>price</th>\n",
       "      <th>price_bathrooms</th>\n",
       "      <th>price_bedrooms</th>\n",
       "      <th>room_diff</th>\n",
       "      <th>room_num</th>\n",
       "      <th>Year</th>\n",
       "      <th>Month</th>\n",
       "      <th>Day</th>\n",
       "      <th>...</th>\n",
       "      <th>walk</th>\n",
       "      <th>walls</th>\n",
       "      <th>war</th>\n",
       "      <th>washer</th>\n",
       "      <th>water</th>\n",
       "      <th>wheelchair</th>\n",
       "      <th>wifi</th>\n",
       "      <th>windows</th>\n",
       "      <th>work</th>\n",
       "      <th>interest_level</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>count</th>\n",
       "      <td>49352.00000</td>\n",
       "      <td>49352.000000</td>\n",
       "      <td>4.935200e+04</td>\n",
       "      <td>4.935200e+04</td>\n",
       "      <td>4.935200e+04</td>\n",
       "      <td>49352.000000</td>\n",
       "      <td>49352.000000</td>\n",
       "      <td>49352.0</td>\n",
       "      <td>49352.000000</td>\n",
       "      <td>49352.000000</td>\n",
       "      <td>...</td>\n",
       "      <td>49352.000000</td>\n",
       "      <td>49352.000000</td>\n",
       "      <td>49352.000000</td>\n",
       "      <td>49352.000000</td>\n",
       "      <td>49352.000000</td>\n",
       "      <td>49352.000000</td>\n",
       "      <td>49352.000000</td>\n",
       "      <td>49352.000000</td>\n",
       "      <td>49352.000000</td>\n",
       "      <td>49352.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>mean</th>\n",
       "      <td>1.21218</td>\n",
       "      <td>1.541640</td>\n",
       "      <td>3.830174e+03</td>\n",
       "      <td>1.697863e+03</td>\n",
       "      <td>1.657567e+03</td>\n",
       "      <td>-0.329460</td>\n",
       "      <td>2.753820</td>\n",
       "      <td>2016.0</td>\n",
       "      <td>5.014852</td>\n",
       "      <td>15.206881</td>\n",
       "      <td>...</td>\n",
       "      <td>0.003080</td>\n",
       "      <td>0.000385</td>\n",
       "      <td>0.186477</td>\n",
       "      <td>0.009361</td>\n",
       "      <td>0.000446</td>\n",
       "      <td>0.028165</td>\n",
       "      <td>0.002026</td>\n",
       "      <td>0.001013</td>\n",
       "      <td>0.000952</td>\n",
       "      <td>1.616895</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>std</th>\n",
       "      <td>0.50142</td>\n",
       "      <td>1.115018</td>\n",
       "      <td>2.206687e+04</td>\n",
       "      <td>1.100477e+04</td>\n",
       "      <td>7.817996e+03</td>\n",
       "      <td>0.947732</td>\n",
       "      <td>1.446091</td>\n",
       "      <td>0.0</td>\n",
       "      <td>0.824442</td>\n",
       "      <td>8.280749</td>\n",
       "      <td>...</td>\n",
       "      <td>0.055412</td>\n",
       "      <td>0.019618</td>\n",
       "      <td>0.389495</td>\n",
       "      <td>0.101625</td>\n",
       "      <td>0.021109</td>\n",
       "      <td>0.165446</td>\n",
       "      <td>0.044969</td>\n",
       "      <td>0.031814</td>\n",
       "      <td>0.030846</td>\n",
       "      <td>0.626035</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>min</th>\n",
       "      <td>0.00000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>4.300000e+01</td>\n",
       "      <td>2.150000e+01</td>\n",
       "      <td>4.300000e+01</td>\n",
       "      <td>-5.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>2016.0</td>\n",
       "      <td>4.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>...</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>25%</th>\n",
       "      <td>1.00000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>2.500000e+03</td>\n",
       "      <td>1.225000e+03</td>\n",
       "      <td>1.066667e+03</td>\n",
       "      <td>-1.000000</td>\n",
       "      <td>2.000000</td>\n",
       "      <td>2016.0</td>\n",
       "      <td>4.000000</td>\n",
       "      <td>8.000000</td>\n",
       "      <td>...</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>1.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>50%</th>\n",
       "      <td>1.00000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>3.150000e+03</td>\n",
       "      <td>1.500000e+03</td>\n",
       "      <td>1.383417e+03</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>2.000000</td>\n",
       "      <td>2016.0</td>\n",
       "      <td>5.000000</td>\n",
       "      <td>15.000000</td>\n",
       "      <td>...</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>2.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>75%</th>\n",
       "      <td>1.00000</td>\n",
       "      <td>2.000000</td>\n",
       "      <td>4.100000e+03</td>\n",
       "      <td>1.850000e+03</td>\n",
       "      <td>1.962500e+03</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>4.000000</td>\n",
       "      <td>2016.0</td>\n",
       "      <td>6.000000</td>\n",
       "      <td>22.000000</td>\n",
       "      <td>...</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>0.000000</td>\n",
       "      <td>2.000000</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>max</th>\n",
       "      <td>10.00000</td>\n",
       "      <td>8.000000</td>\n",
       "      <td>4.490000e+06</td>\n",
       "      <td>2.245000e+06</td>\n",
       "      <td>1.496667e+06</td>\n",
       "      <td>8.000000</td>\n",
       "      <td>13.500000</td>\n",
       "      <td>2016.0</td>\n",
       "      <td>6.000000</td>\n",
       "      <td>31.000000</td>\n",
       "      <td>...</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>2.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>1.000000</td>\n",
       "      <td>2.000000</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "<p>8 rows × 228 columns</p>\n",
       "</div>"
      ],
      "text/plain": [
       "         bathrooms      bedrooms         price  price_bathrooms  \\\n",
       "count  49352.00000  49352.000000  4.935200e+04     4.935200e+04   \n",
       "mean       1.21218      1.541640  3.830174e+03     1.697863e+03   \n",
       "std        0.50142      1.115018  2.206687e+04     1.100477e+04   \n",
       "min        0.00000      0.000000  4.300000e+01     2.150000e+01   \n",
       "25%        1.00000      1.000000  2.500000e+03     1.225000e+03   \n",
       "50%        1.00000      1.000000  3.150000e+03     1.500000e+03   \n",
       "75%        1.00000      2.000000  4.100000e+03     1.850000e+03   \n",
       "max       10.00000      8.000000  4.490000e+06     2.245000e+06   \n",
       "\n",
       "       price_bedrooms     room_diff      room_num     Year         Month  \\\n",
       "count    4.935200e+04  49352.000000  49352.000000  49352.0  49352.000000   \n",
       "mean     1.657567e+03     -0.329460      2.753820   2016.0      5.014852   \n",
       "std      7.817996e+03      0.947732      1.446091      0.0      0.824442   \n",
       "min      4.300000e+01     -5.000000      0.000000   2016.0      4.000000   \n",
       "25%      1.066667e+03     -1.000000      2.000000   2016.0      4.000000   \n",
       "50%      1.383417e+03      0.000000      2.000000   2016.0      5.000000   \n",
       "75%      1.962500e+03      0.000000      4.000000   2016.0      6.000000   \n",
       "max      1.496667e+06      8.000000     13.500000   2016.0      6.000000   \n",
       "\n",
       "                Day       ...                walk         walls           war  \\\n",
       "count  49352.000000       ...        49352.000000  49352.000000  49352.000000   \n",
       "mean      15.206881       ...            0.003080      0.000385      0.186477   \n",
       "std        8.280749       ...            0.055412      0.019618      0.389495   \n",
       "min        1.000000       ...            0.000000      0.000000      0.000000   \n",
       "25%        8.000000       ...            0.000000      0.000000      0.000000   \n",
       "50%       15.000000       ...            0.000000      0.000000      0.000000   \n",
       "75%       22.000000       ...            0.000000      0.000000      0.000000   \n",
       "max       31.000000       ...            1.000000      1.000000      1.000000   \n",
       "\n",
       "             washer         water    wheelchair          wifi       windows  \\\n",
       "count  49352.000000  49352.000000  49352.000000  49352.000000  49352.000000   \n",
       "mean       0.009361      0.000446      0.028165      0.002026      0.001013   \n",
       "std        0.101625      0.021109      0.165446      0.044969      0.031814   \n",
       "min        0.000000      0.000000      0.000000      0.000000      0.000000   \n",
       "25%        0.000000      0.000000      0.000000      0.000000      0.000000   \n",
       "50%        0.000000      0.000000      0.000000      0.000000      0.000000   \n",
       "75%        0.000000      0.000000      0.000000      0.000000      0.000000   \n",
       "max        2.000000      1.000000      1.000000      1.000000      1.000000   \n",
       "\n",
       "               work  interest_level  \n",
       "count  49352.000000    49352.000000  \n",
       "mean       0.000952        1.616895  \n",
       "std        0.030846        0.626035  \n",
       "min        0.000000        0.000000  \n",
       "25%        0.000000        1.000000  \n",
       "50%        0.000000        2.000000  \n",
       "75%        0.000000        2.000000  \n",
       "max        1.000000        2.000000  \n",
       "\n",
       "[8 rows x 228 columns]"
      ]
     },
     "execution_count": 38,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "\n",
    "dpath = './data/'\n",
    "train = pd.read_csv(dpath + 'RentListingInquries_FE_train.csv')\n",
    "\n",
    "train.describe()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "<class 'pandas.core.frame.DataFrame'>\n",
      "RangeIndex: 49352 entries, 0 to 49351\n",
      "Columns: 228 entries, bathrooms to interest_level\n",
      "dtypes: float64(9), int64(219)\n",
      "memory usage: 85.8 MB\n"
     ]
    }
   ],
   "source": [
    "train.info()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Sample 3000 training examples"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "image/png": "iVBORw0KGgoAAAANSUhEUgAAAfgAAAFXCAYAAABOYlxEAAAABHNCSVQICAgIfAhkiAAAAAlwSFlz\nAAALEgAACxIB0t1+/AAAIABJREFUeJzt3XtwVPXh/vFnk81ySTYmaGhVCMglRmHCLSAdBL9YadCR\nEhhuSQktQUHk0lCaBhAJNVyLgSmUYAHbHw2XEKHVWK81KAhmkIYCEolVBJRLNdyzGwgh2d8fjlup\nhA1Jdjf58H79lXP27DnPzmTm2c85Zz/H4nK5XAIAAEYJ8HcAAABQ/yh4AAAMRMEDAGAgCh4AAANR\n8AAAGIiCBwDAQFZ/B6hPJSWl/o4AAIDPRETYq32NETwAAAai4AEAMBAFDwCAgSh4AAAMRMEDAGAg\nCh4AAANR8AAAGIiCBwDAQBQ8AAAGouABADAQBQ8AgIEoeAAADETBAwBgIKOeJgcAptozfaq/I8BL\nemYu98p+GcEDAGAgCh4AAANR8AAAGIiCBwDAQBQ8AAAGouABADAQBQ8AgIEoeAAADETBAwBgIK/M\nZFdRUaFZs2bpxIkTunLliiZOnKgOHTpoxowZslgs6tixo9LT0xUQEKDc3Fzl5OTIarVq4sSJ6t+/\nvy5fvqzU1FSdOXNGwcHBWrx4sVq0aOGNqAAAGMkrI/i8vDyFhYVp48aNWrt2rTIyMrRw4UKlpKRo\n48aNcrlcys/PV0lJibKzs5WTk6MXX3xRS5cu1ZUrV7Rp0yZFRUVp48aNio+PV1ZWljdiAgBgLK+M\n4AcOHKi4uDhJksvlUmBgoIqKitSrVy9JUr9+/bRr1y4FBASoW7dustlsstlsioyMVHFxsQoLC/XE\nE0+4t6XgAQC4OV4p+ODgYEmSw+HQ1KlTlZKSosWLF8tisbhfLy0tlcPhkN1uv+Z9DofjmvXfblsT\n4eHNZbUG1vOnAQDAeyIi7J43qgWvPU3u1KlTmjRpkhITEzVo0CAtWbLE/ZrT6VRoaKhCQkLkdDqv\nWW+3269Z/+22NXHuXFn9fggAALyspKRmg9jrudGXA69cgz99+rSSk5OVmpqqYcOGSZLuv/9+7d69\nW5K0Y8cOxcbGKiYmRoWFhSovL1dpaakOHz6sqKgode/eXdu3b3dv26NHD2/EBADAWBaXy+Wq753O\nmzdPb7zxhtq1a+de98wzz2jevHmqqKhQu3btNG/ePAUGBio3N1ebN2+Wy+XShAkTFBcXp0uXLikt\nLU0lJSUKCgpSZmamIiIiPB63Lt+CAKAh43nw5qrL8+BvNIL3SsH7CwUPwFQUvLm8VfBMdAMAgIEo\neAAADETBAwBgIAoeAAADUfAAABiIggcAwEAUPAAABqLgAQAwEAUPAICBKHgAAAxEwQMAYCAKHgAA\nA1HwAAAYiIIHAMBAFDwAAAai4AEAMBAFDwCAgSh4AAAMRMEDAGAgCh4AAANR8AAAGIiCBwDAQBQ8\nAAAGouABADAQBQ8AgIEoeAAADGT15s7379+v559/XtnZ2Zo2bZpOnz4tSTpx4oS6dOmiZcuWad68\nedq7d6+Cg4MlSVlZWQoKClJqaqrOnDmj4OBgLV68WC1atPBmVAAAjOK1gl+zZo3y8vLUrFkzSdKy\nZcskSRcuXNCYMWM0c+ZMSVJRUZHWrl17TYH/+c9/VlRUlKZMmaLXXntNWVlZmj17treiAgBgHK+d\noo+MjNSKFSu+t37FihUaPXq0WrZsqaqqKh07dkxz5szRqFGjtGXLFklSYWGh+vbtK0nq16+fCgoK\nvBUTAAAjeW0EHxcXp+PHj1+z7syZMyooKHCP3svKyjR69GiNHTtWlZWVGjNmjDp37iyHwyG73S5J\nCg4OVmlpaY2OGR7eXFZrYP1+EAAAvCgiwu6V/Xr1Gvz/evPNN/X4448rMPCbEm7WrJnGjBnjPo3f\nu3dvFRcXKyQkRE6nU5LkdDoVGhpao/
2fO1fmneAAAHhJSUnNBrHXc6MvBz69i76goED9+vVzLx89\nelQJCQmqrKxURUWF9u7dq06dOql79+7avn27JGnHjh3q0aOHL2MCANDo+XQEf+TIEbVu3dq93L59\new0ePFgjRoxQUFCQBg8erI4dO6pVq1ZKS0tTQkKCgoKClJmZ6cuYAAA0ehaXy+Xyd4j6UpfTHADQ\nkO2ZPtXfEeAlPTOX1/q9DeYUPQAA8A0KHgAAA1HwAAAYiIIHAMBAFDwAAAai4AEAMBAFDwCAgSh4\nAAAMRMEDAGAgCh4AAANR8AAAGIiCBwDAQBQ8AAAGouABADAQBQ8AgIEoeAAADETBAwBgIAoeAAAD\nUfAAABiIggcAwEAUPAAABqLgAQAwEAUPAICBKHgAAAxEwQMAYCAKHgAAA3m14Pfv36+kpCRJ0scf\nf6y+ffsqKSlJSUlJev311yVJubm5Gjp0qEaMGKF3331XknT58mVNmTJFiYmJevLJJ3X27FlvxgQA\nwDhWb+14zZo1ysvLU7NmzSRJRUVFGjt2rJKTk93blJSUKDs7W1u3blV5ebkSExPVp08fbdq0SVFR\nUZoyZYpee+01ZWVlafbs2d6KCgCAcbw2go+MjNSKFSvcywcPHtR7772nn/3sZ5o1a5YcDocOHDig\nbt26yWazyW63KzIyUsXFxSosLFTfvn0lSf369VNBQYG3YgIAYCSvjeDj4uJ0/Phx93JMTIyGDx+u\nzp07a9WqVVq5cqWio6Nlt9vd2wQHB8vhcMjhcLjXBwcHq7S0tEbHDA9vLqs1sH4/CAAAXhQRYfe8\nUS14reD/14ABAxQaGur+OyMjQ7GxsXI6ne5tnE6n7Ha7QkJC3OudTqf7fZ6cO1dW/8EBAPCikpKa\nDWKv50ZfDnx2F/24ceN04MABSVJBQYE6deqkmJgYFRYWqry8XKWlpTp8+LCioqLUvXt3bd++XZK0\nY8cO9ejRw1cxAQAwgs9G8HPnzlVGRoaCgoJ0xx13KCMjQyEhIUpKSlJiYqJcLpemTZumJk2aKCEh\nQWlpaUpISFBQUJAyMzN9FRMAACNYXC6Xy98h6ktdTnMAQEO2Z/pUf0eAl/TMXF7r9zaIU/QAAMB3\nKHgAAAxEwQMAYCAKHgAAA1HwAAAYyGPBZ2RkfG9dWlqaV8IAAID6Ue3v4J955hl9+eWXOnjwoD79\n9FP3+qtXr9Z46lgAAOAf1Rb8xIkTdeLECc2fP1+TJ092rw8MDFT79u19Eg4AANROtafoW7VqpQce\neEB5eXlq27atevXqpYCAABUXF8tms/kyIwAAuEker8Gnp6dr1apV+uyzzzR9+nQVFRVxDR4AgAbO\nY8F/9NFHmjNnjt544w0NGzZMCxYs0MmTJ32RDQAA1JLHgq+srFRVVZXy8/PVr18/Xbp0SZcuXfJF\nNgAAUEseCz4+Pl4PPvig7r77bnXp0kVDhw7VyJEjfZENAADUUo2eJldZWanAwEBJ0tmzZ9WiRQuv\nB6sNniYHwFQ8Tc5cfnua3IkTJ/TEE0/oJz/5ib7++mulpKTo+PHjtQ4DAAC8z2PBz5kzR+PGjVPz\n5s0VERGhxx9/nLvoAQBo4DwW/Llz5/Tggw9KkiwWi0aMGCGHw+H1YAAAoPY8FnzTpk31n//8RxaL\nRZL0z3/+k4luAABo4KqdqvZbM2fO1IQJE/TFF19o8ODBunDhgn7/+9/7IhsAAKgljwV/5swZbdmy\nRUePHlVlZaXatWvHCB4AgAbO4yn6JUuWKCgoSB07dlR0dDTlDgBAI+BxBN+6dWvNnDlTXbp0UdOm\nTd3r4+PjvRoMAADUnseCDw8PlyTt37//mvUUPAAADZfHgl+4cKEk6cKFC7rtttu8HggAANSdx2vw\nxcXFGjhwoAYPHqyvvvpKAwYMUFFRkS+yAQCAWvJY8BkZGVq5cqXCwsL0gx/8QHPnzlV6erovsgEA\ngF
ryWPCXLl1S+/bt3ct9+vTRlStXarTz/fv3KykpSZJ06NAhJSYmKikpSePGjdPp06clSfPmzdPQ\noUOVlJSkpKQklZaW6vLly5oyZYoSExP15JNP6uzZs7X5bAAA3LI8FnxYWJiKi4vdM9nl5eXV6Fr8\nmjVrNHv2bJWXl0uS5s+fr2effVbZ2dkaMGCA1qxZI0kqKirS2rVrlZ2drezsbNntdm3atElRUVHa\nuHGj4uPjlZWVVZfPCADALcdjwc+dO1e//e1v9emnnyo2Nlbr1q3Tc88953HHkZGRWrFihXt56dKl\nuu+++yR98/jZJk2aqKqqSseOHdOcOXM0atQobdmyRZJUWFiovn37SpL69eungoKCWn04AABuVR7v\noi8vL9emTZtUVlamqqoqhYSEaN++fR53HBcXd81jZVu2bClJ2rt3r9avX68NGzaorKxMo0eP1tix\nY1VZWakxY8aoc+fOcjgcstu/ecZtcHCwSktr9pz38PDmsloDa7QtAAANwY2e6V4X1RZ8YWGhqqqq\nNHv2bM2fP18ul0uSdPXqVc2dO1dvvfXWTR/s9ddf16pVq7R69Wq1aNHCXerNmjWTJPXu3VvFxcUK\nCQmR0+mUJDmdToWGhtZo/+fOld10JgAA/KmkpGaD2Ou50ZeDagv+gw8+0Icffqivv/76mofLWK1W\njRw58qZDvPLKK9q8ebOys7MVFhYmSTp69KhSUlL08ssvq6qqSnv37tWQIUN09uxZbd++XTExMdqx\nY4d69Ohx08cDAOBWVm3BT5kyRZL08ssv13nWusrKSs2fP1933nmne789e/bU1KlTNXjwYI0YMUJB\nQUEaPHiwOnbsqFatWiktLU0JCQkKCgpSZmZmnY4PAMCtxuL69tx7NU6cOKH169frwoUL+u6m385w\n15DU5TQHADRke6ZP9XcEeEnPzOW1fm+tTtF/KyUlRbGxsYqNjXX/VA4AADRsHgv+6tWrSktL80UW\nAABQTzz+Dr5Hjx7atm1bjWevAwAA/udxBP/mm29q/fr116yzWCw6dOiQ10IBAIC68VjwO3fu9EUO\nAABQj6ot+M2bN2vkyJH6wx/+cN3XJ0+e7LVQAACgbqq9Bu/h13MAAKABq3YEP2rUKEmM1AEAaIw8\n3kUPAAAan2oLvqyMB7cAANBYVVvwSUlJkr55HjwAAGhcqr0GX1ZWpl//+td6//33VV5e/r3XG+Jc\n9AAA4BvVFvyf/vQn7d69W4WFherVq5cvMwEAgDqqtuDvvPNOxcfHKzo6Wu3bt9eRI0dUWVmpjh07\nymr1OD8OAADwI49NXVFRobi4OIWFhamqqkqnT5/WypUr1aVLF1/kAwAAteCx4OfPn69ly5a5C33f\nvn3KyMjQli1bvB4OAADUjsffwZeVlV0zWu/atet1b7oDAAANh8eCv+222/TOO++4l9955x2FhYV5\nNRQAAKgbj6foMzIylJqaqmeeeUaS1Lp1ay1ZssTrwQAAQO15LPi2bdvqpZdeUllZmaqqqhQSEuKL\nXAAAoA5q/Hu35s2bezMHAACoRzxsBgAAA3ks+E2bNvkiBwAAqEceC37Dhg2+yAEAAOqRx2vwP/zh\nDzVmzBh16dJFTZo0ca+fPHmyV4MBAIDa81jwXbt29UUOAABQjzwW/OTJk1VWVqYvvvhCUVFRunz5\nco3vqN+/f7+ef/55ZWdn69ixY5oxY4YsFos6duyo9PR0BQQEKDc3Vzk5ObJarZo4caL69++vy5cv\nKzU1VWfOnFFwcLAWL16sFi1a1PnDAgBwq/B4Db6goECDBw/W008/rdOnT+vhhx/Wzp07Pe54zZo1\nmj17tnta24ULFyolJUUbN26Uy+VSfn6+SkpKlJ2drZycHL344otaunSprly5ok2bNikqKkobN25U\nfHy8srKy6v5JAQC4hXgs+KVLl2rjxo0KDQ1Vy5YttX79ev3ud7/z
uOPIyEitWLHCvVxUVOR+rny/\nfv30wQcf6MCBA+rWrZtsNpvsdrsiIyNVXFyswsJC9e3b171tQUFBbT8fAAC3JI+n6KuqqhQREeFe\n7tChQ412HBcXp+PHj7uXXS6XLBaLJCk4OFilpaVyOByy2+3ubYKDg+VwOK5Z/+22NREe3lxWa2CN\ntgUAoCGIiLB73qgWanQX/bvvviuLxaKLFy9qw4YNuuuuu276QAEB/z1Z4HQ6FRoaqpCQEDmdzmvW\n2+32a9Z/u21NnDtXdtO5AADwp5KSmg1ir+dGXw48nqJ/7rnn9Oqrr+rUqVN65JFHdOjQIT333HM3\nHeL+++/X7t27JUk7duxQbGysYmJiVFhYqPLycpWWlurw4cOKiopS9+7dtX37dve2PXr0uOnjAQBw\nK/M4gr/99tu1dOlSORwOWa1WNW3atFYHSktL07PPPqulS5eqXbt2iouLU2BgoJKSkpSYmCiXy6Vp\n06apSZMmSkhIUFpamhISEhQUFKTMzMxaHRMAgFuVxeVyuW60wSeffKIZM2bo5MmTkqR27dpp8eLF\nioyM9EnAm1GX0xwA0JDtmT7V3xHgJT0zl9f6vXU6RZ+enq6UlBTt3r1bu3fvVnJysmbNmlXrMAAA\nwPs8Fnx5ebkeeugh9/KAAQPkcDi8GgoAANRNtQV/8uRJnTx5UtHR0Vq9erXOnj2rCxcuaP369YqN\njfVlRgAAcJOqvQb/8MMPy2Kx6HovWywW5efnez3czeIaPABTcQ3eXN66Bl/tXfTbtm2r9QEBAIB/\nefyZ3Oeff67c3FxduHDhmvULFy70WigAAFA3NXqa3GOPPaZ7773XF3kAAEA98FjwoaGhmjx5si+y\nAACAeuKx4IcMGaJly5apd+/eslr/u3nPnj29GgwAANSex4L/8MMP9dFHH2nv3r3udRaLRX/5y1+8\nGgwAANSex4I/ePCg3n77bV9kAQAA9cTjTHZRUVEqLi72RRYAAFBPPI7gv/zySw0ZMkQREREKCgqS\ny+VqsBPdAACAb3gs+JUrV/oiBwAAqEceC37Pnj3XXX/33XfXexgAAFA/PBb87t273X9XVFSosLBQ\nsbGxio+P92owAABQex4L/n+npD1//rymTZvmtUAAAKDuPN5F/7+aN2+uEydOeCMLAACoJx5H8ElJ\nSbJYLJIkl8ul48eP66GHHvJ6MAAAUHseC37KlCnuvy0Wi8LDw9WhQwevhgIAAHVTbcGfPHlSktSq\nVavrvnbXXXd5LxUAAKiTagt+9OjRslgscrlc7nUWi0Vff/21rl69qkOHDvkkIAAAuHnVFvy2bduu\nWXY6nVq8eLF27typjIwMrwcDAAC1V6O76AsKCvTTn/5UkpSXl6c+ffp4NRQAAKibG95kV1ZWpkWL\nFrlH7RQ7AACNQ7Uj+IKCAg0aNEiS9Oqrr1LuAAA0ItWO4MeOHSur1aqdO3dq165d7vV1eZrcX//6\nV/3tb3+TJJWXl+vQoUPavHmzJkyYoLZt20qSEhIS9Nhjjyk3N1c5OTmyWq2aOHGi+vfvf9PHAwDg\nVmVxffc2+e/wNFtdXR8289vf/lbR0dEKCAhQaWmpkpOT3a+VlJQoOTlZW7duVXl5uRITE7V161bZ\nbLYb7rOkpLROmQCgodozfaq/I8BLemYur/V7IyLs1b5W7Qjem0+L++ijj/TZZ58pPT1d6enpOnLk\niPLz89WmTRvNmjVLBw4cULdu3WSz2WSz2RQZGani4mLFxMR4LRMAACbxOJOdN/zxj3/UpEmTJEkx\nMTEaPny4OnfurFWrVmnlypWKjo6W3f7fbyXBwcFyOBwe9xse3lxWa6DXcgMAUN9uNAqvC58X/MWL\nF3XkyBH17t1bkjRgwACFhoa6/87IyFBsbKycTqf7PU6n85rCr865c2XeCQ0AgJfU5fLyjb4c3PTT\n5Opqz549+tGPfuReHjdunA4c
OCDpmzv3O3XqpJiYGBUWFqq8vFylpaU6fPiwoqKifB0VAIBGy+cj\n+CNHjlwzv/3cuXOVkZGhoKAg3XHHHcrIyFBISIiSkpKUmJgol8uladOmqUmTJr6OCgBAo1XtXfSN\nEXfRAzAVd9Gby1t30fv8FD0AAPA+Ch4AAANR8AAAGIiCBwDAQBQ8AAAG8stMdoDpUv8+298R4CVL\nHp/n7whAjTCCBwDAQBQ8AAAGouABADAQBQ8AgIEoeAAADETBAwBgIAoeAAADUfAAABiIggcAwEAU\nPAAABqLgAQAwEAUPAICBKHgAAAxEwQMAYCAKHgAAA1HwAAAYiIIHAMBAFDwAAAai4AEAMBAFDwCA\ngay+PuCQIUMUEhIiSWrVqpWeeuopzZgxQxaLRR07dlR6eroCAgKUm5urnJwcWa1WTZw4Uf379/d1\nVAAAGi2fFnx5eblcLpeys7Pd65566imlpKTogQce0Jw5c5Sfn6+uXbsqOztbW7duVXl5uRITE9Wn\nTx/ZbDZfxgUAoNHyacEXFxfr0qVLSk5O1tWrV/WrX/1KRUVF6tWrlySpX79+2rVrlwICAtStWzfZ\nbDbZbDZFRkaquLhYMTExvowLAECj5dOCb9q0qcaNG6fhw4fr6NGjevLJJ+VyuWSxWCRJwcHBKi0t\nlcPhkN1ud78vODhYDofD4/7Dw5vLag30Wn4AiIiwe94IuAne+p/yacHfc889atOmjSwWi+655x6F\nhYWpqKjI/brT6VRoaKhCQkLkdDqvWf/dwq/OuXNlXskNAN8qKSn1dwQYpi7/Uzf6cuDTu+i3bNmi\nRYsWSZK++uorORwO9enTR7t375Yk7dixQ7GxsYqJiVFhYaHKy8tVWlqqw4cPKyoqypdRAQBo1Hw6\ngh82bJhmzpyphIQEWSwWLViwQOHh4Xr22We1dOlStWvXTnFxcQoMDFRSUpISExPlcrk0bdo0NWnS\nxJdRAQBo1Hxa8DabTZmZmd9bv379+u+tGzFihEaMGOGLWAAAGIeJbgAAMBAFDwCAgSh4AAAMRMED\nAGAgCh4AAANR8AAAGIiCBwDAQBQ8AAAGouABADAQBQ8AgIEoeAAADETBAwBgIAoeAAADUfAAABiI\nggcAwEAUPAAABqLgAQAwEAUPAICBKHgAAAxEwQMAYCAKHgAAA1HwAAAYiIIHAMBAFDwAAAai4AEA\nMBAFDwCAgay+PFhFRYVmzZqlEydO6MqVK5o4caLuvPNOTZgwQW3btpUkJSQk6LHHHlNubq5ycnJk\ntVo1ceJE9e/f35dRAQBo1Hxa8Hl5eQoLC9OSJUt0/vx5xcfHa9KkSRo7dqySk5Pd25WUlCg7O1tb\nt25VeXm5EhMT1adPH9lsNl/GBQCg0fJpwQ8cOFBxcXGSJJfLpcDAQB08eFBHjhxRfn6+2rRpo1mz\nZunAgQPq1q2bbDabbDabIiMjVVxcrJiYGF/GBQCg0fJpwQcHB0uSHA6Hpk6dqpSUFF25ckXDhw9X\n586dtWrVKq1cuVLR0dGy2+3XvM/hcHjcf3h4c1mtgV7LDwAREXbPGwE3wVv/Uz4teEk6deqUJk2a\npMTERA0aNEgXL15UaGioJGnAgAHKyMhQbGysnE6n+z1Op/Oawq/OuXNlXssNAJJUUlLq7wgwTF3+\np2705cCnd9GfPn1aycnJSk1N1bBhwyRJ48aN04EDByRJBQUF6tSpk2JiYlRYWKjy8nKVlpbq8OHD\nioqK8mVUAAAaNZ+O4F944QVdvHhRWVlZysrKkiTNmDFDCxYsUFBQkO644w5lZGQoJCRESUlJSkxM\nlMvl0rRp09SkSRNfRgUAoFGzuFwul79D1Je6nOb45ZK8ekyChuT3qT/1+TFT/z7b58eEbyx5fJ5f\njrtn+lS/HBfe1zNzea3f22BO0QMAAN+g4AEAMBAFDwCAgSh4AAAMRMEDAGAgCh4AAANR8AAAGI
iC\nBwDAQBQ8AAAGouABADAQBQ8AgIEoeAAADETBAwBgIAoeAAADUfAAABiIggcAwEAUPAAABqLgAQAw\nEAUPAICBKHgAAAxEwQMAYCAKHgAAA1HwAAAYiIIHAMBAFDwAAAai4AEAMJDV3wGqU1VVpblz5+qT\nTz6RzWbTvHnz1KZNG3/HAgCgUWiwI/h33nlHV65c0ebNmzV9+nQtWrTI35EAAGg0GmzBFxYWqm/f\nvpKkrl276uDBg35OBABA49FgT9E7HA6FhIS4lwMDA3X16lVZrdVHjoiw1/p4G3/3s1q/F/hf/2/s\n7/0dAYZ57C9/9ncENDINdgQfEhIip9PpXq6qqrphuQMAgP9qsAXfvXt37dixQ5K0b98+RUVF+TkR\nAACNh8Xlcrn8HeJ6vr2L/t///rdcLpcWLFig9u3b+zsWAACNQoMteAAAUHsN9hQ9AACoPQoeAAAD\nUfC3kKqqKs2ZM0cjR45UUlKSjh075u9IMMD+/fuVlJTk7xgwQEVFhVJTU5WYmKhhw4YpPz/f35Ea\nNX53dgv57uyA+/bt06JFi7Rq1Sp/x0IjtmbNGuXl5alZs2b+jgID5OXlKSwsTEuWLNH58+cVHx+v\nH//4x/6O1Wgxgr+FMDsg6ltkZKRWrFjh7xgwxMCBA/XLX/5SkuRyuRQYGOjnRI0bBX8LqW52QKC2\n4uLimIAK9SY4OFghISFyOByaOnWqUlJS/B2pUaPgbyHMDgigoTt16pTGjBmjwYMHa9CgQf6O06hR\n8LcQZgcE0JCdPn1aycnJSk1N1bBhw/wdp9Fj+HYLGTBggHbt2qVRo0a5ZwcEgIbihRde0MWLF5WV\nlaWsrCxJ39zI2bRpUz8na5yYyQ4AAANxih4AAANR8AAAGIiCBwDAQBQ8AAAGouABADAQBQ/ArbS0\nVE8//bRXjzFz5kydOHHCq8cAQMED+I4LFy6ouLjYq8fYvXu3+HUu4H38Dh6A21NPPaWdO3fqoYce\nUocOHVRQUKALFy4oPDxcK1asUEREhHr37q1OnTrp9OnT2rJli5YvX6633npL4eHhioiI0MMPP6yh\nQ4fq5Zdf1rp161RVVaVOnTopPT1d69at0/LlyxUZGakNGzYoPDzc3x8ZMBYjeABus2fPVsuWLfWb\n3/xGn3/+uXJycvTWW28pMjJSr776qiTp3LlzGj9+vF555RW9//77Kiws1N///netXr1aH3/8sSTp\n008/VW5urnJycvTKK6/o9ttv14svvqjx48erZcuWWr16NeUOeBlT1QL4njZt2igtLU0vvfSSjhw5\non379ilQaMQuAAABoklEQVQyMtL9epcuXSRJH3zwgR599FHZbDbZbDY98sgjkr45DX/s2DGNGDFC\nklRRUaH777/f9x8EuIVR8AC+5+DBg5o+fbp+8YtfKC4uTgEBAddcN/92bvCAgABVVVV97/2VlZV6\n9NFHNXv2bEmS0+lUZWWlb8IDkMQpegDfYbVadfXqVe3Zs0e9evVSQkKCOnTooF27dl23oPv06aO3\n335bV65ckcPh0HvvvSeLxaIHHnhA//jHP3TmzBm5XC7NnTtX69atkyQFBgZS9oAPMIIH4Hb77bfr\nrrvu0rZt23T58mUNGjRIQUFBuvfee3X8+PHvbf/QQw9p7969GjJkiG677Ta1bNlSTZo0UXR0tCZP\nnqyf//znqqqq0n333afx48dLkv7v//5P48eP19q1a9W6dWtff0TglsFd9ABq7V//+peOHj2qIUOG\nqKKiQiNHjtSCBQsUHR3t72jALY+CB1Br58+f1/Tp01VSUiKXy6X4+HiNGzfO37EAiIIHAMBI3GQH\nAICBKHgAAAxEwQMAYCAKHgAAA1HwAAAYiIIHAMBA/x/1Up1FFKe8SAAAAABJRU5ErkJggg==\n",
      "text/plain": [
       "<matplotlib.figure.Figure at 0x10ce2ae10>"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "train = train.sample(n=3000, random_state=3)  # fix the seed so the sample is reproducible\n",
    "train.to_csv(dpath + 'RentListingInquries_FE_train_sample.csv')\n",
    "\n",
    "train_X = train.drop(\"interest_level\", axis=1)\n",
    "train_y = train[\"interest_level\"]\n",
    "\n",
    "from matplotlib import pyplot\n",
    "import seaborn as sns\n",
    "\n",
    "# Plot the class distribution of the sampled target\n",
    "sns.countplot(train_y)\n",
    "pyplot.xlabel('target')\n",
    "pyplot.ylabel('Number of interest')\n",
    "pyplot.show()"
   ]
  },
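  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A note on the sampling step above: `DataFrame.sample(n=3000)` draws rows uniformly, so a rare `interest_level` class can end up under-represented in the sample. A minimal stratified alternative is sketched below; `stratified_sample` and `df` are illustrative names introduced here, not part of the original pipeline."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch (an assumption, not used by the cells below): draw roughly n_total\n",
    "# rows from df while preserving the class proportions of the label column.\n",
    "def stratified_sample(df, n_total, label='interest_level', seed=3):\n",
    "    frac = float(n_total) / len(df)\n",
    "    return df.groupby(label, group_keys=False).apply(\n",
    "        lambda g: g.sample(frac=frac, random_state=seed))"
   ]
  },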
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Initial estimate of the number of weak learners (n_estimators)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [],
   "source": [
    "from xgboost import XGBClassifier\n",
    "import xgboost as xgb\n",
    "from sklearn.metrics import accuracy_score\n",
    "from sklearn.model_selection import GridSearchCV"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "best n_estimators\n",
      "62\n"
     ]
    }
   ],
   "source": [
    "def modelfit(alg, X_train, y_train, cv_folds=None, early_stopping_rounds=10):\n",
    "    xgb_param = alg.get_xgb_params()\n",
    "    xgb_param['num_class'] = 3\n",
    "\n",
    "    # Call xgboost directly rather than through the sklearn wrapper\n",
    "    xgtrain = xgb.DMatrix(X_train, label=y_train)\n",
    "\n",
    "    cvresult = xgb.cv(xgb_param, xgtrain, num_boost_round=alg.get_params()['n_estimators'],\n",
    "                      folds=cv_folds.split(X_train, y_train),\n",
    "                      metrics='mlogloss', early_stopping_rounds=early_stopping_rounds)\n",
    "\n",
    "    # With early stopping, cv returns one row per surviving boosting round,\n",
    "    # so the row count is the best n_estimators\n",
    "    n_estimators = cvresult.shape[0]\n",
    "\n",
    "    print(\"best n_estimators\")\n",
    "    print(n_estimators)\n",
    "\n",
    "    cvresult.to_csv('nestimators.csv', index_label='n_estimators')\n",
    "\n",
    "xgb1 = XGBClassifier(\n",
    "        learning_rate=0.1,\n",
    "        n_estimators=1000,  # a large cap is fine: cv with early stopping finds the right value\n",
    "        max_depth=5,\n",
    "        min_child_weight=1,\n",
    "        gamma=0,\n",
    "        subsample=0.3,\n",
    "        colsample_bytree=0.8,\n",
    "        colsample_bylevel=0.7,\n",
    "        objective='multi:softmax',\n",
    "        nthread=-1,\n",
    "        seed=3)\n",
    "\n",
    "from sklearn.model_selection import StratifiedKFold\n",
    "kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=3)\n",
    "modelfit(xgb1, train_X, train_y, cv_folds=kfold)"
   ]
  },
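  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The early-stopping logic inside `modelfit` can be sanity-checked against the cv log it saves: the file has one row of mlogloss statistics per boosting round that survived early stopping, and the row count equals the selected `n_estimators`. A quick sketch (assumes `nestimators.csv` was written by the cell above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inspect the cv log: its row count is the chosen n_estimators, and the\n",
    "# test mlogloss should bottom out near the last rows.\n",
    "cv_log = pd.read_csv('nestimators.csv', index_col='n_estimators')\n",
    "print(len(cv_log))\n",
    "print(cv_log.tail())"
   ]
  },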
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### The cv-selected n_estimators is 62; accuracy on the training data is 74.23%"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train Accuracy: 74.23%\n"
     ]
    }
   ],
   "source": [
    "xgb_param = xgb1.get_xgb_params()\n",
    "xgb_param['num_class'] = 3\n",
    "# Call xgboost directly rather than through the sklearn wrapper\n",
    "dtrain = xgb.DMatrix(train_X, label=train_y)\n",
    "bst = xgb.train(xgb_param, dtrain, num_boost_round=5)\n",
    "train_predictions = bst.predict(dtrain)\n",
    "y_train = dtrain.get_label()\n",
    "train_accuracy = accuracy_score(y_train, train_predictions)\n",
    "print(\"Train Accuracy: %.2f%%\" % (train_accuracy * 100.0))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "##### Tune max_depth and min_child_weight"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[mean: -0.63890, std: 0.01323, params: {'max_depth': 3, 'min_child_weight': 1}, mean: -0.64070, std: 0.01006, params: {'max_depth': 3, 'min_child_weight': 3}, mean: -0.63855, std: 0.00897, params: {'max_depth': 3, 'min_child_weight': 5}, mean: -0.64040, std: 0.01559, params: {'max_depth': 5, 'min_child_weight': 1}, mean: -0.63731, std: 0.01819, params: {'max_depth': 5, 'min_child_weight': 3}, mean: -0.64066, std: 0.01547, params: {'max_depth': 5, 'min_child_weight': 5}, mean: -0.64835, std: 0.02346, params: {'max_depth': 7, 'min_child_weight': 1}, mean: -0.64526, std: 0.01967, params: {'max_depth': 7, 'min_child_weight': 3}, mean: -0.64698, std: 0.02347, params: {'max_depth': 7, 'min_child_weight': 5}, mean: -0.66231, std: 0.02607, params: {'max_depth': 9, 'min_child_weight': 1}, mean: -0.64839, std: 0.02449, params: {'max_depth': 9, 'min_child_weight': 3}, mean: -0.63968, std: 0.02466, params: {'max_depth': 9, 'min_child_weight': 5}]\n",
      "{'max_depth': 5, 'min_child_weight': 3}\n",
      "-0.637309175578\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/mary/anaconda3/envs/py27/lib/python2.7/site-packages/sklearn/model_selection/_search.py:667: DeprecationWarning: The grid_scores_ attribute was deprecated in version 0.18 in favor of the more elaborate cv_results_ attribute. The grid_scores_ attribute will not be available from 0.20\n",
      "  DeprecationWarning)\n"
     ]
    }
   ],
   "source": [
    "estimators = 62\n",
    "\n",
    "xgb2 = XGBClassifier(\n",
    "        learning_rate=0.1,\n",
    "        n_estimators=estimators,\n",
    "        max_depth=5,\n",
    "        min_child_weight=1,\n",
    "        gamma=0,\n",
    "        subsample=0.3,\n",
    "        colsample_bytree=0.8,\n",
    "        colsample_bylevel=0.7,\n",
    "        objective='multi:softmax',\n",
    "        nthread=-1,\n",
    "        seed=3)\n",
    "\n",
    "# Coarse grid: max_depth in {3, 5, 7, 9}, min_child_weight in {1, 3, 5}\n",
    "max_depth = range(3, 10, 2)\n",
    "min_child_weight = range(1, 6, 2)\n",
    "param_test_2 = dict(max_depth=max_depth, min_child_weight=min_child_weight)\n",
    "\n",
    "gsearch2 = GridSearchCV(xgb2, param_grid=param_test_2, scoring='neg_log_loss', n_jobs=-1, cv=kfold)\n",
    "gsearch2.fit(train_X, train_y)\n",
    "\n",
    "print(gsearch2.grid_scores_)\n",
    "print(gsearch2.best_params_)\n",
    "print(gsearch2.best_score_)"
   ]
  },
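  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A quick sketch of what `param_test_2` expands to: `GridSearchCV` fits one model per parameter combination, so this 4 × 3 grid means 12 fits per cv fold. A stdlib-only illustration (not part of the tuning run):\n",
    "\n",
    "```python\n",
    "from itertools import product\n",
    "\n",
    "max_depth = range(3, 10, 2)        # 3, 5, 7, 9\n",
    "min_child_weight = range(1, 6, 2)  # 1, 3, 5\n",
    "\n",
    "# Enumerate every combination, as GridSearchCV does for param_test_2\n",
    "grid = [dict(max_depth=d, min_child_weight=w)\n",
    "        for d, w in product(max_depth, min_child_weight)]\n",
    "\n",
    "print(len(grid))  # 12 candidate parameter settings\n",
    "print(grid[0])\n",
    "```"
   ]
  },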
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 第一轮的结果是： max_depth': 5, 'min_child_weight': 3 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[mean: -0.63722, std: 0.01894, params: {'max_depth': 4, 'min_child_weight': 2}, mean: -0.63878, std: 0.01798, params: {'max_depth': 4, 'min_child_weight': 3}, mean: -0.63922, std: 0.01981, params: {'max_depth': 4, 'min_child_weight': 4}, mean: -0.63915, std: 0.01579, params: {'max_depth': 5, 'min_child_weight': 2}, mean: -0.63731, std: 0.01819, params: {'max_depth': 5, 'min_child_weight': 3}, mean: -0.63982, std: 0.01375, params: {'max_depth': 5, 'min_child_weight': 4}, mean: -0.64326, std: 0.01919, params: {'max_depth': 6, 'min_child_weight': 2}, mean: -0.64881, std: 0.01804, params: {'max_depth': 6, 'min_child_weight': 3}, mean: -0.64472, std: 0.02445, params: {'max_depth': 6, 'min_child_weight': 4}]\n",
      "{'max_depth': 4, 'min_child_weight': 2}\n",
      "-0.637223530607\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/mary/anaconda3/envs/py27/lib/python2.7/site-packages/sklearn/model_selection/_search.py:667: DeprecationWarning: The grid_scores_ attribute was deprecated in version 0.18 in favor of the more elaborate cv_results_ attribute. The grid_scores_ attribute will not be available from 0.20\n",
      "  DeprecationWarning)\n"
     ]
    }
   ],
   "source": [
    "# 开始第二轮的交叉验证\n",
    "max_depth = [4,5,6]\n",
    "min_child_weight = [2,3,4]\n",
    "param_test_3 = dict(max_depth=max_depth, min_child_weight=min_child_weight)\n",
    "\n",
    "gsearch3 = GridSearchCV(xgb2, param_grid = param_test_3, scoring='neg_log_loss',n_jobs=-1, cv=kfold)\n",
    "gsearch3.fit(train_X , train_y)\n",
    "\n",
    "print gsearch3.grid_scores_\n",
    "print gsearch3.best_params_\n",
    "print gsearch3.best_score_"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 第二轮的结果还是： max_depth': 4, 'min_child_weight': 2 ， 准确率 73.40%"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train Accuary: 73.40%\n"
     ]
    }
   ],
   "source": [
    "estimators = 62\n",
    "max_depth4 = 4\n",
    "min_child_weight4 = 2\n",
    "\n",
    "xgb4 = XGBClassifier(\n",
    "        learning_rate =0.1,\n",
    "        n_estimators=estimators,\n",
    "        max_depth= max_depth4,\n",
    "        min_child_weight= min_child_weight4,\n",
    "        gamma=0,\n",
    "        subsample=0.3,\n",
    "        colsample_bytree=0.8,\n",
    "        colsample_bylevel = 0.7,\n",
    "        objective= 'multi:softmax',\n",
    "        nthread=-1,\n",
    "        seed=3)\n",
    "\n",
    "xgb_param = xgb4.get_xgb_params()\n",
    "xgb_param['num_class'] = 3\n",
    "dtrain = xgb.DMatrix(train_X, label=train_y)\n",
    "bst = xgb.train(xgb_param, dtrain, 5)\n",
    "train_predictions = bst.predict(dtrain)\n",
    "y_train = dtrain.get_label()\n",
    "train_accuracy = accuracy_score(y_train, train_predictions)\n",
    "print (\"Train Accuary: %.2f%%\" % (train_accuracy * 100.0))\n"
   ]
  },
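  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The accuracy printed above is simply the fraction of training rows whose predicted class matches the label. A stdlib sketch of the computation on toy labels (0/1/2 standing in for the three interest levels; not the real data):\n",
    "\n",
    "```python\n",
    "def accuracy(y_true, y_pred):\n",
    "    # Fraction of positions where the prediction equals the label\n",
    "    hits = sum(1 for t, p in zip(y_true, y_pred) if t == p)\n",
    "    return hits / float(len(y_true))\n",
    "\n",
    "# Toy labels and predictions\n",
    "y_true = [0, 1, 2, 2, 1, 0, 2, 2]\n",
    "y_pred = [0, 1, 2, 1, 1, 0, 2, 0]\n",
    "print(\"Train Accuracy: %.2f%%\" % (accuracy(y_true, y_pred) * 100.0))  # 75.00%\n",
    "```"
   ]
  },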
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 正则参数调优"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[mean: -0.63982, std: 0.01939, params: {'reg_alpha': 0.001, 'reg_lambda': 0.001}, mean: -0.63920, std: 0.01952, params: {'reg_alpha': 0.001, 'reg_lambda': 0.01}, mean: -0.63698, std: 0.02003, params: {'reg_alpha': 0.001, 'reg_lambda': 0.05}, mean: -0.63966, std: 0.02033, params: {'reg_alpha': 0.001, 'reg_lambda': 0.1}, mean: -0.63942, std: 0.01928, params: {'reg_alpha': 0.01, 'reg_lambda': 0.001}, mean: -0.64026, std: 0.02107, params: {'reg_alpha': 0.01, 'reg_lambda': 0.01}, mean: -0.63808, std: 0.02203, params: {'reg_alpha': 0.01, 'reg_lambda': 0.05}, mean: -0.63867, std: 0.01986, params: {'reg_alpha': 0.01, 'reg_lambda': 0.1}, mean: -0.63736, std: 0.02230, params: {'reg_alpha': 0.05, 'reg_lambda': 0.001}, mean: -0.63787, std: 0.02204, params: {'reg_alpha': 0.05, 'reg_lambda': 0.01}, mean: -0.63953, std: 0.02134, params: {'reg_alpha': 0.05, 'reg_lambda': 0.05}, mean: -0.64037, std: 0.02001, params: {'reg_alpha': 0.05, 'reg_lambda': 0.1}, mean: -0.63961, std: 0.02209, params: {'reg_alpha': 0.1, 'reg_lambda': 0.001}, mean: -0.63923, std: 0.01981, params: {'reg_alpha': 0.1, 'reg_lambda': 0.01}, mean: -0.63977, std: 0.01751, params: {'reg_alpha': 0.1, 'reg_lambda': 0.05}, mean: -0.63981, std: 0.02048, params: {'reg_alpha': 0.1, 'reg_lambda': 0.1}]\n",
      "{'reg_alpha': 0.001, 'reg_lambda': 0.05}\n",
      "-0.636980423568\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/mary/anaconda3/envs/py27/lib/python2.7/site-packages/sklearn/model_selection/_search.py:667: DeprecationWarning: The grid_scores_ attribute was deprecated in version 0.18 in favor of the more elaborate cv_results_ attribute. The grid_scores_ attribute will not be available from 0.20\n",
      "  DeprecationWarning)\n"
     ]
    }
   ],
   "source": [
    "reg_alpha = [1e-3, 1e-2, 0.05, 0.1]    #default = 0\n",
    "reg_lambda = [1e-3, 1e-2, 0.05, 0.1]   #default = 1\n",
    "param_test_4 =  dict(reg_alpha=reg_alpha, reg_lambda=reg_lambda)\n",
    "gsearch4 = GridSearchCV(xgb4, param_grid = param_test_4, scoring='neg_log_loss',n_jobs=-1, cv=kfold)\n",
    "gsearch4.fit(train_X , train_y)\n",
    "\n",
    "print gsearch4.grid_scores_\n",
    "print gsearch4.best_params_\n",
    "print gsearch4.best_score_"
   ]
  },
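  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`scoring='neg_log_loss'` is the negated multiclass cross-entropy, so scores closer to 0 are better. A stdlib sketch of the metric itself, on toy probabilities (not model output):\n",
    "\n",
    "```python\n",
    "import math\n",
    "\n",
    "def neg_log_loss(y_true, probas):\n",
    "    # Negative mean log-probability assigned to the true class\n",
    "    total = sum(-math.log(p[label]) for label, p in zip(y_true, probas))\n",
    "    return -total / len(y_true)\n",
    "\n",
    "# Toy class-probability rows for 3 classes\n",
    "y_true = [0, 2, 1]\n",
    "probas = [[0.7, 0.2, 0.1],\n",
    "          [0.1, 0.3, 0.6],\n",
    "          [0.2, 0.6, 0.2]]\n",
    "print(round(neg_log_loss(y_true, probas), 5))\n",
    "```"
   ]
  },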
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 第一轮正则参数 {'reg_alpha': 0.001, 'reg_lambda': 0.01}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[mean: -0.63840, std: 0.02044, params: {'reg_alpha': 0.0005, 'reg_lambda': 0.005}, mean: -0.63952, std: 0.01969, params: {'reg_alpha': 0.0005, 'reg_lambda': 0.01}, mean: -0.64006, std: 0.01958, params: {'reg_alpha': 0.0005, 'reg_lambda': 0.02}, mean: -0.63550, std: 0.02179, params: {'reg_alpha': 0.0005, 'reg_lambda': 0.04}, mean: -0.63838, std: 0.02055, params: {'reg_alpha': 0.001, 'reg_lambda': 0.005}, mean: -0.63920, std: 0.01952, params: {'reg_alpha': 0.001, 'reg_lambda': 0.01}, mean: -0.63876, std: 0.02093, params: {'reg_alpha': 0.001, 'reg_lambda': 0.02}, mean: -0.63517, std: 0.02175, params: {'reg_alpha': 0.001, 'reg_lambda': 0.04}, mean: -0.63839, std: 0.02032, params: {'reg_alpha': 0.002, 'reg_lambda': 0.005}, mean: -0.63927, std: 0.02023, params: {'reg_alpha': 0.002, 'reg_lambda': 0.01}, mean: -0.64014, std: 0.02044, params: {'reg_alpha': 0.002, 'reg_lambda': 0.02}, mean: -0.63658, std: 0.02178, params: {'reg_alpha': 0.002, 'reg_lambda': 0.04}]\n",
      "{'reg_alpha': 0.001, 'reg_lambda': 0.04}\n",
      "-0.635167308978\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/mary/anaconda3/envs/py27/lib/python2.7/site-packages/sklearn/model_selection/_search.py:667: DeprecationWarning: The grid_scores_ attribute was deprecated in version 0.18 in favor of the more elaborate cv_results_ attribute. The grid_scores_ attribute will not be available from 0.20\n",
      "  DeprecationWarning)\n"
     ]
    }
   ],
   "source": [
    "reg_alpha = [0.0005,0.001, 0.002]    #default = 0\n",
    "reg_lambda = [0.005, 0.01, 0.02,0.04]   #default = 1\n",
    "param_test_5 =  dict(reg_alpha=reg_alpha, reg_lambda=reg_lambda)\n",
    "\n",
    "gsearch5 = GridSearchCV(xgb4, param_grid = param_test_5, scoring='neg_log_loss',n_jobs=-1, cv=kfold)\n",
    "gsearch5.fit(train_X , train_y)\n",
    "\n",
    "print gsearch5.grid_scores_\n",
    "print gsearch5.best_params_\n",
    "print gsearch5.best_score_"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 第二轮正则参数 {'reg_alpha': 0.001, 'reg_lambda': 0.01} ， 准确率 71.3%"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train Accuary: 71.30%\n"
     ]
    }
   ],
   "source": [
    "best_reg_alpha = 0.001    #default = 0\n",
    "best_reg_lambda = 0.01\n",
    "\n",
    "xgb6 = XGBClassifier(\n",
    "        learning_rate =0.1,\n",
    "        n_estimators=estimators,  #数值大没关系，cv会自动返回合适的n_estimators\n",
    "        max_depth=max_depth4,\n",
    "        min_child_weight=min_child_weight4,\n",
    "        reg_alpha = best_reg_alpha,\n",
    "        reg_lambda = best_reg_lambda,\n",
    "        gamma=0,\n",
    "        subsample=0.3,\n",
    "        colsample_bytree=0.8,\n",
    "        colsample_bylevel=0.7,\n",
    "        objective= 'multi:softmax',\n",
    "        nthread=-1,\n",
    "        seed=3)\n",
    "\n",
    "xgb_param = xgb6.get_xgb_params()\n",
    "xgb_param['num_class'] = 3\n",
    "dtrain = xgb.DMatrix(train_X, label=train_y)\n",
    "bst = xgb.train(xgb_param, dtrain, 5)\n",
    "train_predictions = bst.predict(dtrain)\n",
    "y_train = dtrain.get_label()\n",
    "train_accuracy = accuracy_score(y_train, train_predictions)\n",
    "print (\"Train Accuary: %.2f%%\" % (train_accuracy * 100.0))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 重新调整 estimates"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "best n_estimators\n",
      "98\n"
     ]
    }
   ],
   "source": [
    "xgb6 = XGBClassifier(\n",
    "        learning_rate =0.1,\n",
    "        n_estimators=1000,  #数值大没关系，cv会自动返回合适的n_estimators\n",
    "        max_depth=max_depth4,\n",
    "        min_child_weight=min_child_weight4,\n",
    "        reg_alpha = best_reg_alpha,\n",
    "        reg_lambda = best_reg_lambda,\n",
    "        gamma=0,\n",
    "        subsample=0.3,\n",
    "        colsample_bytree=0.8,\n",
    "        colsample_bylevel=0.7,\n",
    "        objective= 'multi:softmax',\n",
    "        nthread=-1,\n",
    "        seed=3)\n",
    "\n",
    "modelfit(xgb6, train_X, train_y, cv_folds = kfold)"
   ]
  },
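  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`modelfit` is a helper defined earlier in the notebook; such helpers typically run `xgb.cv` with early stopping and report the best round as `n_estimators`. The stopping rule itself can be sketched in plain Python (the loss curve below is hypothetical):\n",
    "\n",
    "```python\n",
    "def best_num_rounds(cv_losses, early_stopping_rounds=5):\n",
    "    # 1-based round with the lowest cv loss; stop once the loss has not\n",
    "    # improved for early_stopping_rounds consecutive rounds\n",
    "    best_loss, best_round, since_improved = float(\"inf\"), 0, 0\n",
    "    for i, loss in enumerate(cv_losses, start=1):\n",
    "        if loss < best_loss:\n",
    "            best_loss, best_round, since_improved = loss, i, 0\n",
    "        else:\n",
    "            since_improved += 1\n",
    "            if since_improved >= early_stopping_rounds:\n",
    "                break\n",
    "    return best_round\n",
    "\n",
    "# Hypothetical cv loss curve: improves, then plateaus\n",
    "losses = [0.90, 0.80, 0.72, 0.68, 0.66, 0.65, 0.651, 0.652, 0.653, 0.654, 0.655]\n",
    "print(best_num_rounds(losses))  # 6\n",
    "```"
   ]
  },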
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 最佳 estimators 为 98 ， 准确率 71.3%"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train Accuary: 71.30%\n"
     ]
    }
   ],
   "source": [
    "estimators = 98\n",
    "xgb7 = XGBClassifier(\n",
    "        learning_rate =0.1,\n",
    "        n_estimators=estimators,  #数值大没关系，cv会自动返回合适的n_estimators\n",
    "        max_depth=max_depth4,\n",
    "        min_child_weight=min_child_weight4,\n",
    "        reg_alpha = best_reg_alpha,\n",
    "        reg_lambda = best_reg_lambda,\n",
    "        gamma=0,\n",
    "        subsample=0.3,\n",
    "        colsample_bytree=0.8,\n",
    "        colsample_bylevel=0.7,\n",
    "        objective= 'multi:softmax',\n",
    "        nthread=-1,\n",
    "        seed=3)\n",
    "\n",
    "xgb_param = xgb7.get_xgb_params()\n",
    "xgb_param['num_class'] = 3\n",
    "dtrain = xgb.DMatrix(train_X, label=train_y)\n",
    "bst = xgb.train(xgb_param, dtrain, 5)\n",
    "train_predictions = bst.predict(dtrain)\n",
    "y_train = dtrain.get_label()\n",
    "train_accuracy = accuracy_score(y_train, train_predictions)\n",
    "print (\"Train Accuary: %.2f%%\" % (train_accuracy * 100.0))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 调整采样参数"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[mean: -0.64909, std: 0.02288, params: {'subsample': 0.3, 'colsample_bytree': 0.6}, mean: -0.64665, std: 0.02415, params: {'subsample': 0.4, 'colsample_bytree': 0.6}, mean: -0.64357, std: 0.02174, params: {'subsample': 0.5, 'colsample_bytree': 0.6}, mean: -0.64232, std: 0.01505, params: {'subsample': 0.6, 'colsample_bytree': 0.6}, mean: -0.64244, std: 0.01768, params: {'subsample': 0.7, 'colsample_bytree': 0.6}, mean: -0.64286, std: 0.01788, params: {'subsample': 0.8, 'colsample_bytree': 0.6}, mean: -0.65036, std: 0.02264, params: {'subsample': 0.3, 'colsample_bytree': 0.7}, mean: -0.64794, std: 0.01852, params: {'subsample': 0.4, 'colsample_bytree': 0.7}, mean: -0.64615, std: 0.01928, params: {'subsample': 0.5, 'colsample_bytree': 0.7}, mean: -0.64545, std: 0.01738, params: {'subsample': 0.6, 'colsample_bytree': 0.7}, mean: -0.64857, std: 0.01888, params: {'subsample': 0.7, 'colsample_bytree': 0.7}, mean: -0.64562, std: 0.01629, params: {'subsample': 0.8, 'colsample_bytree': 0.7}, mean: -0.64887, std: 0.02046, params: {'subsample': 0.3, 'colsample_bytree': 0.8}, mean: -0.64996, std: 0.01744, params: {'subsample': 0.4, 'colsample_bytree': 0.8}, mean: -0.64656, std: 0.01786, params: {'subsample': 0.5, 'colsample_bytree': 0.8}, mean: -0.64491, std: 0.01852, params: {'subsample': 0.6, 'colsample_bytree': 0.8}, mean: -0.64508, std: 0.01580, params: {'subsample': 0.7, 'colsample_bytree': 0.8}, mean: -0.64185, std: 0.02070, params: {'subsample': 0.8, 'colsample_bytree': 0.8}, mean: -0.64829, std: 0.02223, params: {'subsample': 0.3, 'colsample_bytree': 0.9}, mean: -0.64912, std: 0.02185, params: {'subsample': 0.4, 'colsample_bytree': 0.9}, mean: -0.64449, std: 0.01791, params: {'subsample': 0.5, 'colsample_bytree': 0.9}, mean: -0.64305, std: 0.01702, params: {'subsample': 0.6, 'colsample_bytree': 0.9}, mean: -0.64358, std: 0.01801, params: {'subsample': 0.7, 'colsample_bytree': 0.9}, mean: -0.64193, std: 0.01743, params: {'subsample': 0.8, 'colsample_bytree': 
0.9}]\n",
      "{'subsample': 0.8, 'colsample_bytree': 0.8}\n",
      "-0.641846370134\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/mary/anaconda3/envs/py27/lib/python2.7/site-packages/sklearn/model_selection/_search.py:667: DeprecationWarning: The grid_scores_ attribute was deprecated in version 0.18 in favor of the more elaborate cv_results_ attribute. The grid_scores_ attribute will not be available from 0.20\n",
      "  DeprecationWarning)\n"
     ]
    }
   ],
   "source": [
    "subsample = [i/10.0 for i in range(3,9)]\n",
    "colsample_bytree = [i/10.0 for i in range(6,10)]\n",
    "param_test7 = dict(subsample=subsample, colsample_bytree=colsample_bytree)\n",
    "\n",
    "gsearch7 = GridSearchCV(xgb7, param_grid = param_test7, scoring='neg_log_loss',n_jobs=-1, cv=kfold)\n",
    "gsearch7.fit(train_X , train_y)\n",
    "\n",
    "print gsearch7.grid_scores_\n",
    "print gsearch7.best_params_\n",
    "print gsearch7.best_score_"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 最佳参数 {'subsample': 0.8, 'colsample_bytree': 0.8}"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 最佳参数调整如下：\n",
    "```\n",
    " 'reg_alpha': 0.001\n",
    " 'n_estimators': 98\n",
    " 'subsample': 0.8,\n",
    " 'colsample_bylevel': 0.8,\n",
    " 'reg_lambda': 0.01\n",
    " 'min_child_weight': 3\n",
    " 'max_depth': 3\n",
    "```    "
   ]
  },
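  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`best_params_` is simply the argmax of the mean cv scores; with `neg_log_loss`, larger (closer to zero) means better. A sketch using a few of the (subsample, colsample_bytree) means copied from the grid output above:\n",
    "\n",
    "```python\n",
    "# A few (params, mean neg_log_loss) pairs from the grid output\n",
    "results = [\n",
    "    ({'subsample': 0.3, 'colsample_bytree': 0.6}, -0.64909),\n",
    "    ({'subsample': 0.6, 'colsample_bytree': 0.6}, -0.64232),\n",
    "    ({'subsample': 0.8, 'colsample_bytree': 0.8}, -0.64185),\n",
    "    ({'subsample': 0.8, 'colsample_bytree': 0.9}, -0.64193),\n",
    "]\n",
    "\n",
    "# neg_log_loss: take the maximum (least negative) mean score\n",
    "best_params, best_score = max(results, key=lambda r: r[1])\n",
    "print(best_params)  # {'subsample': 0.8, 'colsample_bytree': 0.8}\n",
    "```"
   ]
  },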
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [],
   "source": [
    "best_subsample = 0.8\n",
    "best_colsample = 0.8\n",
    "\n",
    "xgb8 = XGBClassifier(\n",
    "        learning_rate =0.1,\n",
    "        n_estimators=estimators,  #数值大没关系，cv会自动返回合适的n_estimators\n",
    "        max_depth=max_depth4,\n",
    "        min_child_weight=min_child_weight4,\n",
    "        reg_alpha = best_reg_alpha,\n",
    "        reg_lambda = best_reg_lambda,\n",
    "        gamma=0,\n",
    "        subsample=best_subsample,\n",
    "        colsample_bytree=best_colsample,\n",
    "        colsample_bylevel=best_colsample,\n",
    "        objective= 'multi:softmax',\n",
    "        nthread=-1,\n",
    "        seed=3)\n",
    "\n",
    "xgb_param = xgb8.get_xgb_params()\n",
    "xgb_param['num_class'] = 3\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'reg_alpha': 0.001, 'colsample_bytree': 0.8, 'silent': 1, 'colsample_bylevel': 0.8, 'scale_pos_weight': 1, 'learning_rate': 0.1, 'missing': None, 'max_delta_step': 0, 'base_score': 0.5, 'n_estimators': 98, 'subsample': 0.8, 'reg_lambda': 0.01, 'seed': 3, 'min_child_weight': 3, 'objective': 'multi:softmax', 'num_class': 3, 'max_depth': 3, 'gamma': 0}\n"
     ]
    }
   ],
   "source": [
    "print xgb_param"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 对训练数据进行验证  71.53%"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train Accuary: 71.53%\n"
     ]
    }
   ],
   "source": [
    "dtrain = xgb.DMatrix(train_X, label=train_y)\n",
    "bst = xgb.train(xgb_param, dtrain, 5)\n",
    "train_predictions = bst.predict(dtrain)\n",
    "y_train = dtrain.get_label()\n",
    "train_accuracy = accuracy_score(y_train, train_predictions)\n",
    "print (\"Train Accuary: %.2f%%\" % (train_accuracy * 100.0))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 对全量的训练数据进行验证 "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Train Accuary: 70.37%\n"
     ]
    }
   ],
   "source": [
    "dtrain2 = xgb.DMatrix(dpath + 'RentListingInquries_FE_train.bin')\n",
    "bst = xgb.train(xgb_param, dtrain2, 5)\n",
    "train_predictions = bst.predict(dtrain2)\n",
    "y_train2 = dtrain2.get_label()\n",
    "train_accuracy = accuracy_score(y_train2, train_predictions)\n",
    "print (\"Train Accuary: %.2f%%\" % (train_accuracy * 100.0))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Train Accuary: 70.37%"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 对测试数据进行预测 ， 结果保存到 RentListingInquries_FE_test_predict.cs"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[ 2.  2.  2. ...,  2.  2.  2.]\n"
     ]
    }
   ],
   "source": [
    "dpath = './data/'\n",
    "train2 = pd.read_csv(dpath + 'RentListingInquries_FE_train.csv')\n",
    "test_X = pd.read_csv(dpath + 'RentListingInquries_FE_test.csv')\n",
    "\n",
    "train2_X = train2.drop(\"interest_level\", axis=1)\n",
    "train2_y = train2[\"interest_level\"]\n",
    "\n",
    "#第二轮参数调整得到的n_estimators 是 277\n",
    "estimators = 98\n",
    "# 上一轮求得的参数是： {'max_depth': 3, 'min_child_weight': 3}\n",
    "max_depth4 = 3\n",
    "min_child_weight4 = 3\n",
    "best_reg_alpha = 0.001\n",
    "best_reg_lambda = 0.01\n",
    "best_subsample = 0.8\n",
    "best_colsample = 0.8\n",
    "\n",
    "xgb9 = XGBClassifier(\n",
    "        learning_rate =0.1,\n",
    "        n_estimators=estimators,  #数值大没关系，cv会自动返回合适的n_estimators\n",
    "        max_depth=max_depth4,\n",
    "        min_child_weight=min_child_weight4,\n",
    "        reg_alpha = best_reg_alpha,\n",
    "        reg_lambda = best_reg_lambda,\n",
    "        gamma=0,\n",
    "        subsample=best_subsample,\n",
    "        colsample_bytree=best_colsample,\n",
    "        colsample_bylevel=best_colsample,\n",
    "        objective= 'multi:softmax',\n",
    "        nthread=-1,\n",
    "        seed=3)\n",
    "\n",
    "gbm = xgb9.fit(train2_X, train2_y)\n",
    "test2_predict = gbm.predict(test_X)\n",
    "\n",
    "test_X[\"interest_level\"] = test2_predict\n",
    "print(test_X.head(5))\n",
    "test_X.to_csv(dpath + 'RentListingInquries_FE_test_predict.csv')\n",
    "\n"
   ]
  },
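  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "One caveat with the cell above: `DataFrame.to_csv` writes the index as an extra first column unless `index=False` is passed. A stdlib `csv` sketch of the intended file layout (the prediction values are hypothetical):\n",
    "\n",
    "```python\n",
    "import csv, io\n",
    "\n",
    "# Hypothetical predicted interest_level values for three test rows\n",
    "predictions = [2.0, 2.0, 1.0]\n",
    "\n",
    "buf = io.StringIO()\n",
    "writer = csv.writer(buf)\n",
    "writer.writerow([\"interest_level\"])   # header only, no index column\n",
    "for p in predictions:\n",
    "    writer.writerow([int(p)])         # cast float class ids back to int\n",
    "print(buf.getvalue())\n",
    "```"
   ]
  },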
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 2",
   "language": "python",
   "name": "python2"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
