{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Identifying Happy Customer Bank target customers (customers whose loans are disbursed) with LightGBM/XGBoost\n",
    "\n",
    "**Task description:**\n",
    "\n",
    "Identify Happy Customer Bank's target customers, i.e. the customers to whom a loan is ultimately disbursed.\n",
    "\n",
    "https://discuss.analyticsvidhya.com/t/hackathon-3-x-predict-customer-worth-for-happy-customer-bank/3802\n",
    "\n",
    "**Files:**\n",
    "\n",
    "Train.csv: training data\n",
    "\n",
    "Test.csv: test data\n",
    "\n",
    "**Fields:**\n",
    "\n",
    "The dataset has 26 fields: columns 1-24 are input features, and columns 25-26 are outputs.\n",
    "    \n",
    "Input features:\n",
    "\n",
    "1.\tID - unique ID (must not be used for prediction)\n",
    "2.\tGender - gender\n",
    "3.\tCity - city\n",
    "4.\tMonthly_Income - monthly income (in Indian rupees, INR)\n",
    "5.\tDOB - date of birth\n",
    "6.\tLead_Creation_Date - date the (loan) lead was created\n",
    "7.\tLoan_Amount_Applied - loan amount requested (INR)\n",
    "8.\tLoan_Tenure_Applied - loan tenure requested (in years)\n",
    "9.\tExisting_EMI - EMI of existing loans (EMI: Equated Monthly Installment)\n",
    "10.\tEmployer_Name - employer name\n",
    "11.\tSalary_Account - bank where the salary account is held\n",
    "12.\tMobile_Verified - whether the mobile number was verified (Y/N)\n",
    "13.\tVar5 - continuous variable\n",
    "14.\tVar1 - categorical variable\n",
    "15.\tLoan_Amount_Submitted - loan amount submitted (revised and chosen after seeing eligibility)\n",
    "16.\tLoan_Tenure_Submitted - loan tenure submitted (in years; revised and chosen after seeing eligibility)\n",
    "17.\tInterest_Rate - interest rate on the submitted loan amount\n",
    "18.\tProcessing_Fee - processing fee for the submitted loan (INR)\n",
    "19.\tEMI_Loan_Submitted - EMI of the submitted loan (INR)\n",
    "20.\tFilled_Form - whether the application form was filled in after the quote\n",
    "21.\tDevice_Type - device used to apply (browser/mobile)\n",
    "22.\tVar2 - categorical variable\n",
    "23.\tSource - categorical variable\n",
    "24.\tVar4 - categorical variable\n",
    "\n",
    "Outputs:\n",
    "\n",
    "25.\tLoggedIn - whether the customer logged in (for understanding the problem only; must not be used for prediction, and is absent from the test set)\n",
    "26.\tDisbursed - whether the loan was disbursed (target variable); 1 means disbursed (target customer)\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Assignment requirements:**\n",
    "\n",
    "1.\tAppropriate feature engineering (20 points)\n",
    "2.\tComplete the task with LightGBM, tuning its hyperparameters (learning_rate, n_estimators, num_leaves, max_depth, min_data_in_leaf, colsample_bytree, subsample) by cross-validation (70 points),\n",
    "or complete it with XGBoost, tuning its hyperparameters (learning_rate, n_estimators, max_depth, min_child_weight, colsample_bytree, subsample, reg_lambda, reg_) by cross-validation.\n",
    "3.\tReport the feature importances of the final model (10 points)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Import packages"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "from xgboost import XGBClassifier\n",
    "import xgboost as xgb\n",
    "\n",
    "import pandas as pd \n",
    "import numpy as np\n",
    "\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "from sklearn.model_selection import StratifiedKFold\n",
    "\n",
    "from sklearn import metrics \n",
    "\n",
    "from sklearn.metrics import log_loss\n",
    "\n",
    "from matplotlib import pyplot\n",
    "import seaborn as sns\n",
    "%matplotlib inline"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
    "dpath = './Datas/'\n",
    "train = pd.read_csv(dpath +\"train_modified_new.csv\", encoding = \"latin1\")\n",
    "\n",
    "y_train = train[\"Disbursed\"]\n",
    "X_train = train.drop([\"ID\", \"Disbursed\"], axis=1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "def modelfit(alg, X_train, y_train, cv_folds=5, early_stopping_rounds=50):\n",
    "    \"\"\"Pick n_estimators with xgb.cv early stopping, then refit alg and report.\"\"\"\n",
    "    xgb_param = alg.get_xgb_params()\n",
    "    \n",
    "    xgtrain = xgb.DMatrix(X_train, label = y_train)\n",
    "    \n",
    "    cvresult = xgb.cv(xgb_param, xgtrain, num_boost_round=alg.get_params()['n_estimators'], nfold=cv_folds,\n",
    "    metrics='auc', early_stopping_rounds=early_stopping_rounds)\n",
    "        \n",
    "    # The optimal n_estimators\n",
    "    n_estimators = cvresult.shape[0]\n",
    "        \n",
    "    # Use the optimal n_estimators to train model\n",
    "    alg.set_params(n_estimators=n_estimators)\n",
    "    alg.fit(X_train, y_train, eval_metric='auc')\n",
    "        \n",
    "    # Predict training set:\n",
    "    train_predictions = alg.predict(X_train)\n",
    "    train_predprob = alg.predict_proba(X_train)[:,1]\n",
    "        \n",
    "    # Print model report:\n",
    "    print(\"\\nModel Report\")\n",
    "    print('Best n_estimator: %i' % n_estimators )\n",
    "    print(\"Accuracy : %.4g\" % metrics.accuracy_score(y_train, train_predictions)) \n",
    "    print(\"AUC Score (Train): %f\" % metrics.roc_auc_score(y_train, train_predprob)) \n",
    "                    \n",
    "    #FeatureImportance = pd.Series(alg.get_booster().get_fscore()).sort_values(ascending=False)\n",
    "    #FeatureImportance.plot(kind='bar', title='Feature Importances')\n",
    "    #pyplot.ylabel('Feature Importance Score')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Use XGBoost's built-in cv directly to find the best n_estimators**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "Model Report\n",
      "Best n_estimator: 142\n",
      "Accuracy : 0.9854\n",
      "AUC Score (Train): 0.903325\n"
     ]
    }
   ],
   "source": [
    "xgb1 = XGBClassifier(\n",
    "         learning_rate =0.1,\n",
    "         n_estimators=1000,  # a large value is fine; xgb.cv picks a suitable n_estimators\n",
    "         max_depth=5,\n",
    "         min_child_weight=1,\n",
    "         gamma=0,\n",
    "         subsample=0.8,\n",
    "         colsample_bytree=0.8,\n",
    "         objective= 'binary:logistic',\n",
    "         seed=3)\n",
    "\n",
    "modelfit(xgb1, X_train, y_train)"
   ]
  },
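  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of the feature-importance report asked for in requirement 3, assuming `xgb1` has been fitted by `modelfit` above (it uses the booster's split-count importances, as in the code commented out inside `modelfit`):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: bar plot of split-count feature importances of the fitted model\n",
    "FeatureImportance = pd.Series(xgb1.get_booster().get_fscore()).sort_values(ascending=False)\n",
    "FeatureImportance.plot(kind='bar', title='Feature Importances')\n",
    "pyplot.ylabel('Feature Importance Score')"
   ]
  },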
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Based on the above results, let's fix **learning_rate = 0.1** and **n_estimators = 142** for now."
   ]
  },
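  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch (not run here) of what that tuning step could look like, assuming the same `X_train`/`y_train` as above; the candidate grids are illustrative, not prescriptive:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: grid-search max_depth and min_child_weight with stratified 5-fold CV,\n",
    "# keeping learning_rate=0.1 and n_estimators=142 fixed as found above\n",
    "param_grid = {'max_depth': [3, 5, 7, 9], 'min_child_weight': [1, 3, 5]}\n",
    "xgb2 = XGBClassifier(learning_rate=0.1, n_estimators=142, gamma=0,\n",
    "                     subsample=0.8, colsample_bytree=0.8,\n",
    "                     objective='binary:logistic', seed=3)\n",
    "kfold = StratifiedKFold(n_splits=5, shuffle=True, random_state=3)\n",
    "grid = GridSearchCV(xgb2, param_grid, scoring='roc_auc', cv=kfold)\n",
    "#grid.fit(X_train, y_train)\n",
    "#print(grid.best_params_, grid.best_score_)"
   ]
  },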
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Next, I will tune **max_depth** and **min_child_weight** in **XGBoost3_HappyBank_TreeDepth_ChildWeight.ipynb**."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.5"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
