{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Identifying Happy Customer Bank's target customers (customers whose loans are disbursed) with LightGBM/XGBoost\n",
    "\n",
    "**Task:**\n",
    "\n",
    "Identify Happy Customer Bank's target customers (customers who end up taking a disbursed loan).\n",
    "\n",
    "https://discuss.analyticsvidhya.com/t/hackathon-3-x-predict-customer-worth-for-happy-customer-bank/3802\n",
    "\n",
    "**Files:**\n",
    "\n",
    "Train.csv: training data\n",
    "\n",
    "Test.csv: test data\n",
    "\n",
    "**Fields:**\n",
    "\n",
    "The dataset has 26 fields: columns 1-24 are input features, and columns 25-26 are outputs.\n",
    "\n",
    "Input features:\n",
    "\n",
    "1. ID - unique ID (must not be used for prediction)\n",
    "2. Gender - gender\n",
    "3. City - city\n",
    "4. Monthly_Income - monthly income (in rupees)\n",
    "5. DOB - date of birth\n",
    "6. Lead_Creation_Date - date the (loan) lead was created\n",
    "7. Loan_Amount_Applied - loan amount requested (Indian rupees, INR)\n",
    "8. Loan_Tenure_Applied - loan tenure requested (in years)\n",
    "9. Existing_EMI - EMI of existing loans (EMI: equated monthly installment)\n",
    "10. Employer_Name - employer name\n",
    "11. Salary_Account - bank holding the salary account\n",
    "12. Mobile_Verified - whether the mobile number was verified (Y/N)\n",
    "13. VAR5 - continuous variable\n",
    "14. VAR1 - categorical variable\n",
    "15. Loan_Amount_Submitted - loan amount submitted (revised and chosen after seeing eligibility)\n",
    "16. Loan_Tenure_Submitted - loan tenure submitted (in years; revised and chosen after seeing eligibility)\n",
    "17. Interest_Rate - interest rate on the submitted loan amount\n",
    "18. Processing_Fee - processing fee of the submitted loan (INR)\n",
    "19. EMI_Loan_Submitted - EMI of the submitted loan (INR)\n",
    "20. Filled_Form - whether the application form was filled in after the quote\n",
    "21. Device_Type - device used to apply (browser/mobile)\n",
    "22. Var2 - categorical variable\n",
    "23. Source - categorical variable\n",
    "24. Var4 - categorical variable\n",
    "\n",
    "Outputs:\n",
    "\n",
    "25. LoggedIn - whether the customer logged in (only for understanding the problem; must not be used for prediction; absent from the test set)\n",
    "26. Disbursed - whether the loan was disbursed (target variable); 1 means disbursed (target customer)\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Assignment requirements:**\n",
    "\n",
    "1. Appropriate feature engineering (20 points)\n",
    "2. Complete the task with LightGBM, and use cross-validation to tune the model's hyperparameters (learning_rate, n_estimators, num_leaves, max_depth, min_data_in_leaf, colsample_bytree, subsample). (70 points)\n",
    "Alternatively, complete the task with XGBoost, and use cross-validation to tune the model's hyperparameters (learning_rate, n_estimators, max_depth, min_child_weight, colsample_bytree, subsample, reg_lambda, reg_).\n",
    "3. Report the feature importances of the final model (10 points)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Import packages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "from xgboost import XGBClassifier\n",
    "import xgboost as xgb\n",
    "\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "from sklearn.model_selection import StratifiedKFold\n",
    "\n",
    "from sklearn import metrics\n",
    "from sklearn.metrics import log_loss\n",
    "\n",
    "from matplotlib import pyplot\n",
    "import seaborn as sns\n",
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "dpath = './Datas/'\n",
    "train = pd.read_csv(dpath + \"train_modified_new.csv\", encoding=\"latin1\")\n",
    "\n",
    "# Disbursed is the target; ID is a unique identifier and must not be used for prediction\n",
    "y_train = train[\"Disbursed\"]\n",
    "X_train = train.drop([\"ID\", \"Disbursed\"], axis=1)"
   ]
  },
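  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before tuning, it is worth checking the class balance of the target: in this competition `Disbursed` is typically heavily skewed toward 0, which is why ROC AUC (rather than accuracy) is used as the CV metric below. A quick sketch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Fraction of each class in the target; with a rare positive class,\n",
    "# accuracy would be misleading and AUC is a more informative metric\n",
    "print(X_train.shape)\n",
    "y_train.value_counts(normalize=True)"
   ]
  },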
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Grid of L1/L2 regularization strengths to search\n",
    "param_test1 = {\n",
    "    'reg_alpha': [1e-5, 1e-2, 0.1, 1, 100],  # default = 0\n",
    "    'reg_lambda': [0.5, 1, 2]  # default = 1\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,\n",
       "        colsample_bytree=0.8, gamma=0, learning_rate=0.1, max_delta_step=0,\n",
       "        max_depth=4, min_child_weight=2, missing=None, n_estimators=142,\n",
       "        n_jobs=1, nthread=None, objective='binary:logistic', random_state=0,\n",
       "        reg_alpha=1e-05, reg_lambda=1, scale_pos_weight=1, seed=3,\n",
       "        silent=True, subsample=0.75),\n",
       " {'reg_alpha': 1e-05, 'reg_lambda': 1},\n",
       " 0.8425638195865206)"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "xgb1 = XGBClassifier(\n",
    "        learning_rate=0.1,\n",
    "        n_estimators=142,\n",
    "        max_depth=4,\n",
    "        min_child_weight=2,\n",
    "        gamma=0,\n",
    "        subsample=0.75,\n",
    "        colsample_bytree=0.8,\n",
    "        objective='binary:logistic',\n",
    "        seed=3)\n",
    "\n",
    "# 5-fold CV grid search, scored by ROC AUC\n",
    "gsearch1 = GridSearchCV(xgb1, param_grid=param_test1, scoring='roc_auc', n_jobs=4, cv=5)\n",
    "gsearch1.fit(X_train, y_train)\n",
    "\n",
    "gsearch1.best_estimator_, gsearch1.best_params_, gsearch1.best_score_"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The optimal values are:\n",
    "    \n",
    "reg_alpha: 1e-05, reg_lambda: 1."
   ]
  },
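  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Rather than looking only at the single best entry, the full grid of CV scores can show how close the candidates are (a sketch using the `gsearch1` object fitted above):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inspect mean and std of the CV AUC for every (reg_alpha, reg_lambda) pair\n",
    "cv_results = pd.DataFrame(gsearch1.cv_results_)\n",
    "cols = ['param_reg_alpha', 'param_reg_lambda', 'mean_test_score', 'std_test_score']\n",
    "cv_results[cols].sort_values('mean_test_score', ascending=False)"
   ]
  },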
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Let's fix reg_lambda = 1 and try some other values of reg_alpha."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "param_test2 = {\n",
    "    'reg_alpha': [0, 1e-4, 1e-3, 5e-3, 5e-2]  # default = 0\n",
    "}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(XGBClassifier(base_score=0.5, booster='gbtree', colsample_bylevel=1,\n",
       "        colsample_bytree=0.8, gamma=0, learning_rate=0.1, max_delta_step=0,\n",
       "        max_depth=4, min_child_weight=2, missing=None, n_estimators=142,\n",
       "        n_jobs=1, nthread=None, objective='binary:logistic', random_state=0,\n",
       "        reg_alpha=0, reg_lambda=1, scale_pos_weight=1, seed=3, silent=True,\n",
       "        subsample=0.75), {'reg_alpha': 0}, 0.8425637967188977)"
      ]
     },
     "execution_count": 7,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Fix reg_lambda = 1 and search a finer grid for reg_alpha\n",
    "xgb2 = XGBClassifier(\n",
    "        learning_rate=0.1,\n",
    "        n_estimators=142,\n",
    "        max_depth=4,\n",
    "        min_child_weight=2,\n",
    "        gamma=0,\n",
    "        subsample=0.75,\n",
    "        colsample_bytree=0.8,\n",
    "        reg_lambda=1,\n",
    "        objective='binary:logistic',\n",
    "        seed=3)\n",
    "\n",
    "gsearch2 = GridSearchCV(xgb2, param_grid=param_test2, scoring='roc_auc', n_jobs=4, cv=5)\n",
    "gsearch2.fit(X_train, y_train)\n",
    "\n",
    "gsearch2.best_estimator_, gsearch2.best_params_, gsearch2.best_score_"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The result above is slightly worse than before, so I will keep the values from the first search.\n",
    "\n",
    "**The optimal values are:**\n",
    "\n",
    "**reg_alpha: 1e-5, reg_lambda: 1.**"
   ]
  },
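  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Finally, the assignment asks for the feature importances of the final model. A minimal sketch, refitting an `XGBClassifier` with the hyperparameter values selected above:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Refit the final model with the tuned hyperparameters\n",
    "xgb_final = XGBClassifier(\n",
    "        learning_rate=0.1,\n",
    "        n_estimators=142,\n",
    "        max_depth=4,\n",
    "        min_child_weight=2,\n",
    "        gamma=0,\n",
    "        subsample=0.75,\n",
    "        colsample_bytree=0.8,\n",
    "        reg_alpha=1e-5,\n",
    "        reg_lambda=1,\n",
    "        objective='binary:logistic',\n",
    "        seed=3)\n",
    "xgb_final.fit(X_train, y_train)\n",
    "\n",
    "# Rank features by the importance the fitted booster assigns them\n",
    "feat_imp = pd.Series(xgb_final.feature_importances_, index=X_train.columns)\n",
    "feat_imp.sort_values(ascending=False).head(20).plot(kind='bar', title='Feature Importance')"
   ]
  },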
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
