{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "# Predicting Whether Borrowers Will Repay On Time with LightGBM\n",
    "\n",
    "**A lightweight LightGBM model learns from the raw tabular data and distills strong features that indicate whether a borrower will repay on time.**\n",
    "\n",
    "# I. Project Background\n",
    "\n",
    "**1. Replace purely manual judgments about whether to grant someone a loan. 2. Reduce opaque, discretionary decisions in traditional manual lending. 3. Help banks identify borrowers with better credit quality.**\n",
    "\n",
    "# II. Dataset Overview\n",
    "\n",
    "Dataset size: the training set provides basic information, deposit records, and loan records for 40,000 housing-fund contributors; the test set provides the same for 15,000.\n",
    "\n",
    "![](https://ai-studio-static-online.cdn.bcebos.com/b01edc93a44c45b186aa40caa42d4415d9b8cff951d646d884ee01690bf84683)\n",
    "\n",
    "\n",
    "## 1. Data Loading and Preprocessing\n",
    "\n",
    "\n",
    "```python\n",
    "train_df = df[df['label'].notna()].reset_index(drop=True)\n",
    "test_df = df[df['label'].isna()].reset_index(drop=True)\n",
    "display(train_df.shape, test_df.shape)\n",
    "```\n",
    "\n",
    "Training set shape: (40000, 1093); test set shape: (15000, 1093).\n",
    "\n",
    "\n",
    "## 2. Exploring the Data\n",
    "\n",
    "\n",
    "```python\n",
    "def get_daikuanYE(df,col):\n",
    "    df[col + '_genFeat1'] = (df[col] > 100000).astype(int)\n",
    "    df[col + '_genFeat2'] = (df[col] > 120000).astype(int)\n",
    "    df[col + '_genFeat3'] = (df[col] > 140000).astype(int)\n",
    "    df[col + '_genFeat4'] = (df[col] > 180000).astype(int)\n",
    "    df[col + '_genFeat5'] = (df[col] > 220000).astype(int)\n",
    "    df[col + '_genFeat6'] = (df[col] > 260000).astype(int)\n",
    "    df[col + '_genFeat7'] = (df[col] > 300000).astype(int)\n",
    "    return df, [col + f'_genFeat{i}' for i in range(1, 8)]\n",
    "\n",
    "df, genFeats2 = get_daikuanYE(df, col = 'DKYE')\n",
    "df, genFeats3 = get_daikuanYE(df, col = 'DKFFE')\n",
    "\n",
    "\n",
    "plt.figure(figsize = (8, 2))\n",
    "plt.subplot(1,2,1)\n",
    "sns.distplot(df['DKYE'][df['label'] == 1])\n",
    "plt.subplot(1,2,2)\n",
    "sns.distplot(df['DKFFE'][df['label'] == 1])\n",
    "\n",
    "```\n",
    "\n",
    "\n",
    "# III. Model Selection and Development\n",
    "\n",
    "## 1. Feature Engineering\n",
    "\n",
    "\n",
    "```python\n",
    "# df['GRYJCE_sum_DWYJCE'] = (df['GRYJCE'] + df['DWYJCE']) * 12 * (df['DKLL'] + 1)  # annual repayment amount\n",
    "# df['GRZHDNGJYE_GRZHSNJZYE'] = (df['GRZHDNGJYE'] + df['GRZHSNJZYE'] + df['GRZHYE']) - df['GRYJCE_sum_DWYJCE']\n",
    "# df['DWJJLX_DWYSSHY'] = df['DWJJLX'] * df['DWSSHY']  # employer economic type * employer industry\n",
    "\n",
    "# df['XINGBIEDKYE'] = df['XINGBIE'] * df['DKYE']\n",
    "\n",
    "# df['m2'] = (df['DKYE'] - ((df['GRYJCE'] + df['DWYJCE']) * 12) + df['GRZHDNGJYE']) / 12\n",
    "# df['KDKZGED'] = df['m2'] * (df['GRYJCE'] + df['DWYJCE'])\n",
    "\n",
    "# gen_feats = ['DKFFE_multi_DKLL','DKFFE_DKYE_DKFFE','DWYSSHY2GRYJCE','DWYSSHY2DWYJCE','ZHIYE_GRZHZT','GRZHDNGJYE_GRZHSNJZYE']\n",
    "\n",
    "\n",
    "df['DWYSSHY2GRYJCE'] = df['DWSSHY'] * df['DWSSHY'] * df['GRYJCE']  # helped; the combination kept\n",
    "gen_feats = ['DWYSSHY2GRYJCE']\n",
    "\n",
    "\n",
    "df.head()\n",
    "```\n",
    "\n",
    "## 2. Model Overview\n",
    "\n",
    "\n",
    "The arrival of XGBoost let practitioners move beyond traditional algorithms such as RF, GBM, SVM, and LASSO. LightGBM, as the name suggests, combines two ideas: \"light\" (lightweight) and GBM (gradient boosting machine). It is a distributed, efficient gradient-boosting framework built on decision trees, with the following advantages:\n",
    "\n",
    "* Faster training\n",
    "* Lower memory usage\n",
    "* Higher accuracy\n",
    "* Support for parallel learning\n",
    "* Ability to handle large-scale data\n",
    "\n",
    "In short, LightGBM's main features are:\n",
    "\n",
    "* Histogram-based decision tree algorithm\n",
    "* Leaf-wise tree growth with a depth limit\n",
    "* Histogram subtraction speedup\n",
    "* Native support for categorical features\n",
    "* Cache hit-rate optimization\n",
    "* Histogram-based sparse-feature optimization\n",
    "* Multi-threading optimization\n",
    "\n",
    "## 3. Model Training\n",
    "\n",
    "\n",
    "```python\n",
    "oof = np.zeros(train_df.shape[0])\n",
    "test_df['prob'] = 0\n",
    "clf = LGBMClassifier(\n",
    "    learning_rate=0.07,    # tried 0.05-0.1\n",
    "    n_estimators=1030,     # also tried 1300\n",
    "    num_leaves=37,         # tried 31/35/37/38/39/40; 38 -> (0.523177, 0.93799), 39 -> (0.519115, 0.93587)\n",
    "    subsample=0.8,         # also tried 0.85\n",
    "    colsample_bytree=0.8,  # also tried 0.85\n",
    "    random_state=11,\n",
    "    # reg_lambda=0.1,      # L2 regularization, tried\n",
    "    # min_split_gain=0.05, # minimum split gain, tried\n",
    "    # Class-imbalance handling: set EITHER is_unbalance OR scale_pos_weight,\n",
    "    # not both. The original run misspelled them as is_unbalace /\n",
    "    # sample_pos_weight, so LightGBM silently ignored them, hence the\n",
    "    # \"Unknown parameter\" warnings in the training log below.\n",
    "    # is_unbalance=True,\n",
    "    scale_pos_weight=13\n",
    ")\n",
    "```\n",
    "\n",
    "```\n",
    "--------------------- 0 fold ---------------------\n",
    "[LightGBM] [Warning] Unknown parameter: is_unbalace\n",
    "[LightGBM] [Warning] Unknown parameter: sample_pos_weight\n",
    "Training until validation scores don't improve for 200 rounds\n",
    "[200]\tvalid_0's auc: 0.944549\tvalid_0's binary_logloss: 0.110362\n",
    "Early stopping, best iteration is:\n",
    "[173]\tvalid_0's auc: 0.944278\tvalid_0's binary_logloss: 0.1097\n",
    "--------------------- 1 fold ---------------------\n",
    "[LightGBM] [Warning] Unknown parameter: is_unbalace\n",
    "[LightGBM] [Warning] Unknown parameter: sample_pos_weight\n",
    "Training until validation scores don't improve for 200 rounds\n",
    "[200]\tvalid_0's auc: 0.943315\tvalid_0's binary_logloss: 0.113508\n",
    "Early stopping, best iteration is:\n",
    "[161]\tvalid_0's auc: 0.943045\tvalid_0's binary_logloss: 0.113012\n",
    "```\n",
    "\n",
    "\n",
    "\n",
    "## 4. Model Prediction\n",
    "\n",
    "\n",
    "Batch prediction over the full test set happens inside the cross-validation loop via the model's predict_proba interface, averaging predictions across folds and seeds.\n",
    "\n",
    "\n",
    "```python\n",
    "val_aucs = []\n",
    "seeds = [11,22,33]\n",
    "for seed in seeds:\n",
    "    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)\n",
    "    for i, (trn_idx, val_idx) in enumerate(skf.split(train_df, train_df['label'])):\n",
    "        print('--------------------- {} fold ---------------------'.format(i))\n",
    "        t = time.time()\n",
    "        trn_x, trn_y = train_df[cols].iloc[trn_idx].reset_index(drop=True), train_df['label'].values[trn_idx]\n",
    "        val_x, val_y = train_df[cols].iloc[val_idx].reset_index(drop=True), train_df['label'].values[val_idx]\n",
    "        clf.fit(\n",
    "            trn_x, trn_y,\n",
    "            eval_set=[(val_x, val_y)],\n",
    "    #         categorical_feature=cate_cols,\n",
    "            eval_metric='auc',\n",
    "            early_stopping_rounds=200,\n",
    "            verbose=200\n",
    "        )\n",
    "    #     feat_imp_df['imp'] += clf.feature_importances_ / skf.n_splits\n",
    "        oof[val_idx] = clf.predict_proba(val_x)[:, 1]\n",
    "        test_df['prob'] += clf.predict_proba(test_df[cols])[:, 1] / skf.n_splits / len(seeds)\n",
    "\n",
    "    cv_auc = roc_auc_score(train_df['label'], oof)\n",
    "    val_aucs.append(cv_auc)\n",
    "    print('\\ncv_auc: ', cv_auc)\n",
    "print(val_aucs, np.mean(val_aucs))\n",
    "```\n",
    "\n",
    "# IV. Summary and Reflections\n",
    "\n",
    "**Takeaways: feature engineering is the most important part. Most of the gains came from feature combinations, with parameter tuning second. I tried essentially every parameter; tune one parameter at a time in controlled comparisons, and never change two at once. Regularization mainly guards against overfitting, so do not apply it before confirming the model actually overfits, or the score will most likely drop. Find the few parameters that genuinely push the model, such as `min_split_gain=0.05` (minimum split gain) above; tune the learning rate from large to small; and keep trying different combinations, because unexpected gains do happen.**\n",
    "\n",
    "# About the Author\n",
    "\n",
    "> I have reached Diamond level on AI Studio with 10 badges. Let's follow each other! [Alchemist_W](https://aistudio.baidu.com/aistudio/personalcenter/thirdview/546270)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "# AI Creator Camp (AI达人创造营) Assignment: Model Training and Tuning\n",
    "\n",
    "Once the dataset is ready, training can begin. Paddle offers a number of convenient toolkits that greatly shorten development time and raise developer productivity:\n",
    "\n",
    "* All-in-one toolkit: [PaddleHub](https://github.com/PaddlePaddle/PaddleHub)\n",
    "* Image classification: [PaddleClas](https://github.com/PaddlePaddle/PaddleClas)\n",
    "* Object detection: [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection)\n",
    "* Image segmentation: [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg)\n",
    "* Text recognition: [PaddleOCR](https://github.com/PaddlePaddle/PaddleOCR)\n",
    "\n",
    "For more toolkits, visit the [PaddlePaddle product overview](https://www.paddlepaddle.org.cn/overview)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "# Project Walkthrough: Predicting On-Time Loan Repayment with LightGBM\n",
    "![Feature descriptions](https://ai-studio-static-online.cdn.bcebos.com/74135877d9804120ac98eb43a89c789d63f0f188a6924997a40d78dac8570919)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "import pandas as pd\r\n",
    "import numpy as np\r\n",
    "from tqdm import tqdm\r\n",
    "import os\r\n",
    "import matplotlib.pyplot as plt\r\n",
    "import seaborn as sns\r\n",
    "import paddle"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "train = pd.read_csv('./work/train.csv')\r\n",
    "test = pd.read_csv('./work/test.csv')\r\n",
    "submit = pd.read_csv('./work/submit.csv')\r\n",
    "train.shape, test.shape, submit.shape"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "# 1. Inspect the Data Structure"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "train.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "cate_2_cols = ['XINGBIE', 'ZHIWU', 'XUELI']\r\n",
    "cate_cols = ['HYZK', 'ZHIYE', 'ZHICHEN', 'DWJJLX', 'DWSSHY', 'GRZHZT']\r\n",
    "train[cate_cols]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "num_cols = ['GRJCJS', 'GRZHYE', 'GRZHSNJZYE', 'GRZHDNGJYE', 'GRYJCE', 'DWYJCE','DKFFE', 'DKYE', 'DKLL']\r\n",
    "train[num_cols]"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "# 2. Feature Engineering"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Concatenate train and test so distributions can be inspected and features engineered consistently; the frames are split back apart later\r\n",
    "df = pd.concat([train, test], axis=0).reset_index(drop=True)\r\n",
    "df.head(10)"
   ]
  },
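  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "The concat-then-split pattern used above can be sketched on a toy frame (the column names here are illustrative, not from this dataset):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "tr = pd.DataFrame({'x': [1, 2, 3], 'label': [0, 1, 0]})\n",
    "te = pd.DataFrame({'x': [4, 5]})  # no 'label' column -> NaN after concat\n",
    "\n",
    "# Concatenate so every feature transform sees train and test identically\n",
    "both = pd.concat([tr, te], axis=0).reset_index(drop=True)\n",
    "both['x_sq'] = both['x'] ** 2  # example shared transform\n",
    "\n",
    "# Split back: rows whose label is NaN came from the test set\n",
    "tr_back = both[both['label'].notna()].reset_index(drop=True)\n",
    "te_back = both[both['label'].isna()].reset_index(drop=True)\n",
    "print(tr_back.shape, te_back.shape)  # (3, 3) (2, 3)\n",
    "```\n",
    "\n",
    "Because the split key is the missingness of `label`, no extra index bookkeeping is needed when the frames are separated again."
   ]
  },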
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "# 3. A Brief Introduction to LightGBM\n",
    "\n",
    "The arrival of XGBoost let practitioners move beyond traditional algorithms such as RF, GBM, SVM, and LASSO. LightGBM, as the name suggests, combines two ideas: \"light\" (lightweight) and GBM (gradient boosting machine). It is a distributed, efficient gradient-boosting framework built on decision trees, with the following advantages:\n",
    "\n",
    "* Faster training\n",
    "* Lower memory usage\n",
    "* Higher accuracy\n",
    "* Support for parallel learning\n",
    "* Ability to handle large-scale data\n",
    "\n",
    "In short, LightGBM's main features are:\n",
    "\n",
    "* Histogram-based decision tree algorithm\n",
    "* Leaf-wise tree growth with a depth limit\n",
    "* Histogram subtraction speedup\n",
    "* Native support for categorical features\n",
    "* Cache hit-rate optimization\n",
    "* Histogram-based sparse-feature optimization\n",
    "* Multi-threading optimization\n",
    "\n",
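    "The histogram algorithm listed first can be illustrated with a short NumPy sketch. This is a deliberate simplification of what LightGBM actually does, not its real implementation: bucket a continuous feature into a small number of bins, accumulate gradient statistics per bin in one pass, and evaluate candidate splits only at bin boundaries instead of at every raw value.\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "x = rng.normal(size=1000)      # one continuous feature\n",
    "g = x + rng.normal(size=1000)  # toy per-sample gradients\n",
    "\n",
    "# Bucket the feature into 16 bins; LightGBM uses up to 255\n",
    "bins = np.quantile(x, np.linspace(0, 1, 17))\n",
    "idx = np.clip(np.searchsorted(bins, x) - 1, 0, 15)\n",
    "\n",
    "# One O(n) pass accumulates gradient sums and counts per bin\n",
    "grad_hist = np.bincount(idx, weights=g, minlength=16)\n",
    "cnt_hist = np.bincount(idx, minlength=16)\n",
    "\n",
    "# Candidate splits: only the 15 interior bin boundaries, not 1000 raw values\n",
    "left_grad = np.cumsum(grad_hist)[:-1]\n",
    "left_cnt = np.cumsum(cnt_hist)[:-1]\n",
    "total_grad, total_cnt = grad_hist.sum(), cnt_hist.sum()\n",
    "gain = (left_grad**2 / np.maximum(left_cnt, 1)\n",
    "        + (total_grad - left_grad)**2 / np.maximum(total_cnt - left_cnt, 1))\n",
    "best = int(np.argmax(gain))  # index of the best boundary\n",
    "```\n",
    "\n",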
    "# 4. Pearson Correlation\n",
    "\n",
    "## 4.1 `DataFrame.corr()` computes pairwise correlations between columns (rows must be aligned). It supports three methods, with Pearson as the default: 'pearson', 'kendall', 'spearman'.\n",
    "\n",
    "## 4.2 The coefficient is defined only when both variables have nonzero standard deviation. Pearson correlation is appropriate when:\n",
    "\n",
    "1. the two variables are linearly related and both continuous;\n",
    "2. each variable is approximately normally distributed, or at least unimodal and near-normal;\n",
    "3. the observations come in pairs, and the pairs are mutually independent.\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Find correlations with the target and sort  \r\n",
    "correlations = df.corr()['label'].sort_values()\r\n",
    "\r\n",
    "# Display correlations\r\n",
    "print('Most Positive Correlations:\\n', correlations.tail(15))\r\n",
    "print('\\nMost Negative Correlations:\\n', correlations.head(15))\r\n",
    "\r\n",
    "# Larger values at the positive end indicate stronger positive correlation with the label; good candidates for combined features"
   ]
  },
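  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "The screening in the cell above can be automated: keep only columns whose absolute correlation with the label clears a threshold. A self-contained toy sketch (the column names and the 0.3 threshold are illustrative, not from this dataset):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "rng = np.random.default_rng(42)\n",
    "n = 500\n",
    "label = rng.integers(0, 2, n)\n",
    "toy = pd.DataFrame({\n",
    "    'label': label,\n",
    "    'strong': label + rng.normal(scale=0.3, size=n),  # built to correlate\n",
    "    'noise': rng.normal(size=n),                      # unrelated\n",
    "})\n",
    "\n",
    "corr = toy.corr()['label'].drop('label')\n",
    "candidates = corr[corr.abs() > 0.3].index.tolist()\n",
    "print(candidates)  # ['strong']\n",
    "```\n",
    "\n",
    "On the real frame the same two lines work with `df.corr()['label']`; the threshold is a judgment call, and low correlation does not rule out useful nonlinear features."
   ]
  },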
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Heatmap: examine how two categorical features jointly relate to the label\r\n",
    "summary = pd.pivot_table(data=df,\r\n",
    "                         index='GRZHZT',\r\n",
    "                         columns='ZHIYE',\r\n",
    "                         values='label',\r\n",
    "                         aggfunc='sum')\r\n",
    "\r\n",
    "sns.heatmap(data=summary,\r\n",
    "            cmap='rainbow',\r\n",
    "            annot=True,\r\n",
    "            # fmt='.2e',  # scientific notation with 2 decimals\r\n",
    "            linewidth=0.5)\r\n",
    "plt.title('Label')\r\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "# 5. Data Visualization: the Distribution of Age vs. Repayment\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "def get_age(df, col='age'):\r\n",
    "    df[col + '_genFeat1'] = (df[col] > 18).astype(int)\r\n",
    "    df[col + '_genFeat2'] = (df[col] > 25).astype(int)\r\n",
    "    df[col + '_genFeat3'] = (df[col] > 30).astype(int)\r\n",
    "    df[col + '_genFeat4'] = (df[col] > 35).astype(int)\r\n",
    "    df[col + '_genFeat5'] = (df[col] > 40).astype(int)\r\n",
    "    df[col + '_genFeat6'] = (df[col] > 45).astype(int)\r\n",
    "    return df, [col + f'_genFeat{i}' for i in range(1, 7)]\r\n",
    "\r\n",
    "# 1609430399 is 2020-12-31 23:59:59 (UTC+8); age in years from the CSNY birth timestamp\r\n",
    "df['age'] = ((1609430399 - df['CSNY']) / (365 * 24 * 3600)).astype(int)\r\n",
    "df, genFeats1 = get_age(df, col = 'age')\r\n",
    "\r\n",
    "sns.distplot(df['age'][df['age'] > 0])"
   ]
  },
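  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "In the cell above, 1609430399 is the Unix timestamp of 2020-12-31 23:59:59 in UTC+8, i.e. the end of 2020 Beijing time, so `age` is the number of whole 365-day years from the CSNY birth timestamp to the end of 2020. The same conversion on toy timestamps (the two birth dates are illustrative):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "REF = 1609430399                  # 2020-12-31 23:59:59 UTC+8\n",
    "SECONDS_PER_YEAR = 365 * 24 * 3600\n",
    "\n",
    "# Toy birth timestamps: 1990-01-01 and 1985-06-15 (both 00:00 UTC)\n",
    "csny = pd.Series([631152000, 487641600])\n",
    "age = ((REF - csny) / SECONDS_PER_YEAR).astype(int)\n",
    "print(age.tolist())  # [31, 35]\n",
    "```\n",
    "\n",
    "Ignoring leap days slightly inflates old ages, but for bucketed features like the `age_genFeat*` thresholds above the approximation is harmless."
   ]
  },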
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "def get_daikuanYE(df,col):\r\n",
    "    df[col + '_genFeat1'] = (df[col] > 100000).astype(int)\r\n",
    "    df[col + '_genFeat2'] = (df[col] > 120000).astype(int)\r\n",
    "    df[col + '_genFeat3'] = (df[col] > 140000).astype(int)\r\n",
    "    df[col + '_genFeat4'] = (df[col] > 180000).astype(int)\r\n",
    "    df[col + '_genFeat5'] = (df[col] > 220000).astype(int)\r\n",
    "    df[col + '_genFeat6'] = (df[col] > 260000).astype(int)\r\n",
    "    df[col + '_genFeat7'] = (df[col] > 300000).astype(int)\r\n",
    "    return df, [col + f'_genFeat{i}' for i in range(1, 8)]\r\n",
    "\r\n",
    "df, genFeats2 = get_daikuanYE(df, col = 'DKYE')\r\n",
    "df, genFeats3 = get_daikuanYE(df, col = 'DKFFE')\r\n",
    "\r\n",
    "\r\n",
    "plt.figure(figsize = (8, 2))\r\n",
    "plt.subplot(1,2,1)\r\n",
    "sns.distplot(df['DKYE'][df['label'] == 1])\r\n",
    "plt.subplot(1,2,2)\r\n",
    "sns.distplot(df['DKFFE'][df['label'] == 1])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "train_df = df[df['label'].notna()].reset_index(drop=True)\r\n",
    "test_df = df[df['label'].isna()].reset_index(drop=True)\r\n",
    "display(train_df.shape, test_df.shape)\r\n",
    "\r\n",
    "plt.figure(figsize = (8, 2))\r\n",
    "plt.subplot(1,2,1)\r\n",
    "sns.distplot(train_df['age'][train_df['age'] > 0])\r\n",
    "plt.subplot(1,2,2)\r\n",
    "sns.distplot(test_df['age'][test_df['age'] > 0])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "**Tip: compare each feature's distribution between the training and test sets. ZHIWU (job title) is distributed differently across the two sets, so it is not usable; feeding it to LGBM degrades classification.**"
   ]
  },
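  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "A quantitative version of this eyeball comparison is the two-sample Kolmogorov-Smirnov test: a small p-value means the two samples likely come from different distributions, flagging the feature as a drop candidate. A synthetic sketch (assumes `scipy` is installed; the data here is generated, not from this dataset):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from scipy.stats import ks_2samp\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "pairs = {\n",
    "    'stable': (rng.normal(0, 1, 2000), rng.normal(0, 1, 2000)),\n",
    "    'shifted': (rng.normal(0, 1, 2000), rng.normal(0.5, 1, 2000)),\n",
    "}\n",
    "\n",
    "results = {}\n",
    "for name, (tr, te) in pairs.items():\n",
    "    stat, p = ks_2samp(tr, te)  # H0: both samples share a distribution\n",
    "    results[name] = (stat, p)\n",
    "    print(name, round(stat, 3), p < 0.01)\n",
    "```\n",
    "\n",
    "Running the same test per feature on `train_df` vs. `test_df` would single out ZHIWU-like columns without plotting all 18 distributions."
   ]
  },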
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "gen_feats_fest = ['age','HYZK','ZHIYE','ZHICHEN','ZHIWU','XUELI','DWJJLX','DWSSHY','GRJCJS','GRZHZT','GRZHYE','GRZHSNJZYE','GRZHDNGJYE','GRYJCE','DWYJCE','DKFFE','DKYE','DKLL']\r\n",
    "\r\n",
    "for i in range(len(gen_feats_fest)):\r\n",
    "    plt.figure(figsize = (8, 2))\r\n",
    "    plt.subplot(1,2,1)\r\n",
    "    sns.distplot(train_df[gen_feats_fest[i]])\r\n",
    "    plt.subplot(1,2,2)\r\n",
    "    sns.distplot(test_df[gen_feats_fest[i]])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "**Combine features according to loan formulas, and according to the Pearson correlations discussed earlier**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "# Feature-combination experiments; each line is annotated with its observed effect on the CV score.\r\n",
    "\r\n",
    "# df['missing_rate'] = (df.shape[1] - df.count(axis=1)) / df.shape[1]  # hurt\r\n",
    "\r\n",
    "# df['DKFFE_DKYE'] = df['DKFFE'] + df['DKYE']  # slightly hurt\r\n",
    "# df['DKFFE_DKY_multi_DKLL'] = (df['DKFFE'] + df['DKYE']) * df['DKLL']  # slightly helped\r\n",
    "# df['DKFFE_multi_DKLL'] = df['DKFFE'] * df['DKLL']  # slightly helped\r\n",
    "# df['DKYE_multi_DKLL'] = df['DKYE'] * df['DKLL']  # slightly hurt\r\n",
    "# df['GRYJCE_DWYJCE'] = df['GRYJCE'] + df['DWYJCE']  # neutral\r\n",
    "# df['GRZHDNGJYE_GRZHSNJZYE'] = df['GRZHDNGJYE'] + df['GRZHSNJZYE']  # slightly hurt\r\n",
    "\r\n",
    "# df['DKFFE_multi_DKLL_ratio'] = df['DKFFE'] * df['DKLL'] / df['DKFFE_DKY_multi_DKLL']  # slightly hurt\r\n",
    "# df['DKYE_multi_DKLL_ratio'] = df['DKYE'] * df['DKLL'] / df['DKFFE_DKY_multi_DKLL']  # slightly hurt\r\n",
    "# df['DKYE_DKFFE_ratio'] = df['DKYE'] / (df['DKFFE'] + df['DKYE'])  # neutral\r\n",
    "# df['DKFFE_DKYE_ratio'] = df['DKFFE'] / (df['DKFFE'] + df['DKYE'])  # slightly hurt\r\n",
    "# df['GRZHYE_diff_GRZHDNGJYE'] = df['GRZHYE'] - df['GRZHDNGJYE']  # slightly hurt\r\n",
    "# df['GRZHYE_diff_GRZHSNJZYE'] = df['GRZHYE'] - df['GRZHSNJZYE']  # slightly hurt\r\n",
    "# df['GRYJCE_DWYJCE_ratio'] = df['GRYJCE'] / (df['GRYJCE'] + df['DWYJCE'])  # hurt\r\n",
    "# df['DWYJCE_GRYJCE_ratio'] = df['DWYJCE'] / (df['GRYJCE'] + df['DWYJCE'])  # slightly hurt\r\n",
    "\r\n",
    "# df['DWYSSHY2DKLL'] = df['DWSSHY'] * df['DWSSHY'] * df['DKLL']\r\n",
    "# df['DWYSSHY2GRJCJS2'] = df['DWSSHY'] * df['DWSSHY'] * df['GRYJCE'] * df['GRYJCE']\r\n",
    "\r\n",
    "# df['ZHIYE_GRZHZT'] = df['GRZHZT'] / (df['ZHIYE'] + 0.00000001)\r\n",
    "# gen_feats = ['DWYSSHY2GRYJCE', 'ZHIYE_GRZHZT']\r\n",
    "\r\n",
    "# df['DKFFE_multi_DKLL'] = df['DKFFE'] * df['DKLL']  # amount issued * loan interest rate\r\n",
    "# df['DKFFE-DKYE'] = df['DKFFE'] - df['DKYE']  # amount issued - remaining balance = amount already repaid\r\n",
    "# df['DKFFE_DKYE_DKFFE'] = df['DKFFE-DKYE'] * df['DKFFE']  # (amount issued - balance) * amount issued\r\n",
    "# df['DWYSSHY2GRYJCE'] = df['DWSSHY'] * df['DWSSHY'] * df['GRYJCE']  # industry^2 * personal monthly deposit ***\r\n",
    "# df['DWYSSHY2DWYJCE'] = df['DWSSHY'] * df['DWSSHY'] * df['DWYJCE']  # industry^2 * employer monthly deposit\r\n",
    "# df['ZHIYE_GRZHZT'] = df['GRZHZT'] / df['ZHIYE']\r\n",
    "# df['DWYSSHY3GRYJCE'] = (df['DWSSHY'] * df['DWSSHY'] * df['DWSSHY'] * df['GRYJCE']) * (df['GRZHZT'] / df['ZHIYE'])\r\n",
    "\r\n",
    "# df['GRYJCE_sum_DWYJCE'] = (df['GRYJCE'] + df['DWYJCE']) * 12 * (df['DKLL'] + 1)  # annual repayment amount\r\n",
    "# df['GRZHDNGJYE_GRZHSNJZYE'] = (df['GRZHDNGJYE'] + df['GRZHSNJZYE'] + df['GRZHYE']) - df['GRYJCE_sum_DWYJCE']\r\n",
    "# df['DWJJLX_DWYSSHY'] = df['DWJJLX'] * df['DWSSHY']  # employer economic type * employer industry\r\n",
    "\r\n",
    "# df['XINGBIEDKYE'] = df['XINGBIE'] * df['DKYE']\r\n",
    "\r\n",
    "# df['m2'] = (df['DKYE'] - ((df['GRYJCE'] + df['DWYJCE']) * 12) + df['GRZHDNGJYE']) / 12\r\n",
    "# df['KDKZGED'] = df['m2'] * (df['GRYJCE'] + df['DWYJCE'])\r\n",
    "\r\n",
    "# gen_feats = ['DKFFE_multi_DKLL','DKFFE_DKYE_DKFFE','DWYSSHY2GRYJCE','DWYSSHY2DWYJCE','ZHIYE_GRZHZT','GRZHDNGJYE_GRZHSNJZYE']\r\n",
    "\r\n",
    "df['DWYSSHY2GRYJCE'] = df['DWSSHY'] * df['DWSSHY'] * df['GRYJCE']  # helped; the combination kept\r\n",
    "gen_feats = ['DWYSSHY2GRYJCE']\r\n",
    "\r\n",
    "df.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "for f in tqdm(cate_cols):\r\n",
    "    df[f] = df[f].map(dict(zip(df[f].unique(), range(df[f].nunique()))))\r\n",
    "    df[f + '_count'] = df[f].map(df[f].value_counts())\r\n",
    "    df = pd.concat([df,pd.get_dummies(df[f],prefix=f\"{f}\")],axis=1)\r\n",
    "    \r\n",
    "    \r\n",
    "cate_cols_combine = [[cate_cols[i], cate_cols[j]] for i in range(len(cate_cols)) \\\r\n",
    "                     for j in range(i + 1, len(cate_cols))]\r\n",
    "\r\n",
    "\r\n",
    "for f1, f2 in tqdm(cate_cols_combine):\r\n",
    "    df['{}_{}_count'.format(f1, f2)] = df.groupby([f1, f2])['id'].transform('count')\r\n",
    "    df['{}_in_{}_prop'.format(f1, f2)] = df['{}_{}_count'.format(f1, f2)] / df[f2 + '_count']\r\n",
    "    df['{}_in_{}_prop'.format(f2, f1)] = df['{}_{}_count'.format(f1, f2)] / df[f1 + '_count']\r\n",
    "\r\n",
    "    \r\n",
    "for f1 in tqdm(cate_cols):\r\n",
    "    g = df.groupby(f1)\r\n",
    "    for f2 in num_cols + gen_feats:\r\n",
    "        for stat in ['sum', 'mean', 'std', 'max', 'min']:\r\n",
    "            df['{}_{}_{}'.format(f1, f2, stat)] = g[f2].transform(stat)\r\n",
    "    # Fix: this inner loop originally aggregated f2 instead of f3\r\n",
    "    for f3 in genFeats2 + genFeats3:\r\n",
    "        for stat in ['sum', 'mean']:\r\n",
    "            df['{}_{}_{}'.format(f1, f3, stat)] = g[f3].transform(stat)\r\n",
    "\r\n",
    "num_cols_gen_feats = num_cols + gen_feats\r\n",
    "for f1 in tqdm(num_cols_gen_feats):\r\n",
    "    g = df.groupby(f1)\r\n",
    "    for f2 in num_cols_gen_feats:\r\n",
    "        if f1 != f2:\r\n",
    "            for stat in ['sum', 'mean', 'std', 'max', 'min']:\r\n",
    "                df['{}_{}_{}'.format(f1, f2, stat)] = g[f2].transform(stat)\r\n",
    "\r\n",
    "for i in tqdm(range(len(num_cols_gen_feats))):\r\n",
    "    for j in range(i + 1, len(num_cols_gen_feats)):\r\n",
    "        df[f'numsOf_{num_cols_gen_feats[i]}_{num_cols_gen_feats[j]}_add'] = df[num_cols_gen_feats[i]] + df[num_cols_gen_feats[j]]\r\n",
    "        df[f'numsOf_{num_cols_gen_feats[i]}_{num_cols_gen_feats[j]}_diff'] = df[num_cols_gen_feats[i]] - df[num_cols_gen_feats[j]]\r\n",
    "        df[f'numsOf_{num_cols_gen_feats[i]}_{num_cols_gen_feats[j]}_multi'] = df[num_cols_gen_feats[i]] * df[num_cols_gen_feats[j]]\r\n",
    "        df[f'numsOf_{num_cols_gen_feats[i]}_{num_cols_gen_feats[j]}_div'] = df[num_cols_gen_feats[i]] / (df[num_cols_gen_feats[j]] + 0.0000000001)\r\n",
    "    \r\n",
    "            "
   ]
  },
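  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "The statistical features above all rely on `groupby(...).transform(stat)`, which broadcasts each group's statistic back to every member row (unlike `agg`, which returns one row per group). A minimal sketch on a toy frame:\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "toy = pd.DataFrame({\n",
    "    'cat': ['a', 'a', 'b', 'b', 'b'],\n",
    "    'val': [1.0, 3.0, 2.0, 4.0, 6.0],\n",
    "})\n",
    "\n",
    "# transform keeps the original length: each row receives its group's mean\n",
    "toy['cat_val_mean'] = toy.groupby('cat')['val'].transform('mean')\n",
    "print(toy['cat_val_mean'].tolist())  # [2.0, 2.0, 4.0, 4.0, 4.0]\n",
    "```\n",
    "\n",
    "Because the result aligns row-for-row with the original frame, it can be assigned directly as a new feature column, which is exactly how the loops above build their `{cat}_{num}_{stat}` features."
   ]
  },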
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "# 6. Splitting Back into Training and Test Sets"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(40000, 1093)"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "text/plain": [
       "(15000, 1093)"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    }
   ],
   "source": [
    "train_df = df[df['label'].notna()].reset_index(drop=True)\r\n",
    "test_df = df[df['label'].isna()].reset_index(drop=True)\r\n",
    "display(train_df.shape, test_df.shape)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(4,\n",
       " ['DWSSHY_DKYE_min',\n",
       "  'GRYJCE_DWYJCE_std',\n",
       "  'DWYJCE_GRYJCE_std',\n",
       "  'numsOf_GRYJCE_DWYJCE_diff'])"
      ]
     },
     "execution_count": null,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "drop_feats = [f for f in train_df.columns if train_df[f].nunique() == 1 or train_df[f].nunique() == 0]\r\n",
    "len(drop_feats), drop_feats"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "cols = [col for col in train_df.columns if col not in ['id', 'label'] + drop_feats]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "from sklearn.model_selection import StratifiedKFold\r\n",
    "from lightgbm.sklearn import LGBMClassifier\r\n",
    "from sklearn.metrics import f1_score, roc_auc_score\r\n",
    "from sklearn.ensemble import RandomForestClassifier,VotingClassifier\r\n",
    "from xgboost import XGBClassifier\r\n",
    "import time\r\n",
    "import lightgbm as lgb"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "--------------------- 0 fold ---------------------\n",
      "[LightGBM] [Warning] Unknown parameter: is_unbalace\n",
      "[LightGBM] [Warning] Unknown parameter: sample_pos_weight\n",
      "Training until validation scores don't improve for 200 rounds\n",
      "[200]\tvalid_0's auc: 0.944549\tvalid_0's binary_logloss: 0.110362\n",
      "Early stopping, best iteration is:\n",
      "[173]\tvalid_0's auc: 0.944278\tvalid_0's binary_logloss: 0.1097\n",
      "--------------------- 1 fold ---------------------\n",
      "[LightGBM] [Warning] Unknown parameter: is_unbalace\n",
      "[LightGBM] [Warning] Unknown parameter: sample_pos_weight\n",
      "Training until validation scores don't improve for 200 rounds\n",
      "[200]\tvalid_0's auc: 0.943315\tvalid_0's binary_logloss: 0.113508\n",
      "Early stopping, best iteration is:\n",
      "[161]\tvalid_0's auc: 0.943045\tvalid_0's binary_logloss: 0.113012\n",
      "--------------------- 2 fold ---------------------\n",
      "[LightGBM] [Warning] Unknown parameter: is_unbalace\n",
      "[LightGBM] [Warning] Unknown parameter: sample_pos_weight\n",
      "Training until validation scores don't improve for 200 rounds\n",
      "[200]\tvalid_0's auc: 0.942585\tvalid_0's binary_logloss: 0.119059\n",
      "Early stopping, best iteration is:\n",
      "[148]\tvalid_0's auc: 0.942207\tvalid_0's binary_logloss: 0.117848\n",
      "--------------------- 3 fold ---------------------\n",
      "[LightGBM] [Warning] Unknown parameter: is_unbalace\n",
      "[LightGBM] [Warning] Unknown parameter: sample_pos_weight\n",
      "Training until validation scores don't improve for 200 rounds\n",
      "[200]\tvalid_0's auc: 0.942192\tvalid_0's binary_logloss: 0.115931\n",
      "Early stopping, best iteration is:\n",
      "[123]\tvalid_0's auc: 0.942244\tvalid_0's binary_logloss: 0.114857\n",
      "--------------------- 4 fold ---------------------\n",
      "[LightGBM] [Warning] Unknown parameter: is_unbalace\n",
      "[LightGBM] [Warning] Unknown parameter: sample_pos_weight\n",
      "Training until validation scores don't improve for 200 rounds\n",
      "[200]\tvalid_0's auc: 0.939505\tvalid_0's binary_logloss: 0.113455\n",
      "Early stopping, best iteration is:\n",
      "[164]\tvalid_0's auc: 0.939654\tvalid_0's binary_logloss: 0.112933\n",
      "\n",
      "cv_auc:  0.9420797160267054\n",
      "--------------------- 0 fold ---------------------\n",
      "[LightGBM] [Warning] Unknown parameter: is_unbalace\n",
      "[LightGBM] [Warning] Unknown parameter: sample_pos_weight\n",
      "Training until validation scores don't improve for 200 rounds\n",
      "[200]\tvalid_0's auc: 0.937373\tvalid_0's binary_logloss: 0.119639\n",
      "Early stopping, best iteration is:\n",
      "[140]\tvalid_0's auc: 0.938125\tvalid_0's binary_logloss: 0.117851\n",
      "--------------------- 1 fold ---------------------\n",
      "[LightGBM] [Warning] Unknown parameter: is_unbalace\n",
      "[LightGBM] [Warning] Unknown parameter: sample_pos_weight\n",
      "Training until validation scores don't improve for 200 rounds\n",
      "[200]\tvalid_0's auc: 0.942087\tvalid_0's binary_logloss: 0.113331\n",
      "Early stopping, best iteration is:\n",
      "[182]\tvalid_0's auc: 0.942311\tvalid_0's binary_logloss: 0.112912\n",
      "--------------------- 2 fold ---------------------\n",
      "[LightGBM] [Warning] Unknown parameter: is_unbalace\n",
      "[LightGBM] [Warning] Unknown parameter: sample_pos_weight\n",
      "Training until validation scores don't improve for 200 rounds\n",
      "[200]\tvalid_0's auc: 0.93272\tvalid_0's binary_logloss: 0.120388\n",
      "Early stopping, best iteration is:\n",
      "[138]\tvalid_0's auc: 0.933033\tvalid_0's binary_logloss: 0.118682\n",
      "--------------------- 3 fold ---------------------\n",
      "[LightGBM] [Warning] Unknown parameter: is_unbalace\n",
      "[LightGBM] [Warning] Unknown parameter: sample_pos_weight\n",
      "Training until validation scores don't improve for 200 rounds\n",
      "[200]\tvalid_0's auc: 0.951504\tvalid_0's binary_logloss: 0.10742\n",
      "Early stopping, best iteration is:\n",
      "[178]\tvalid_0's auc: 0.951198\tvalid_0's binary_logloss: 0.107208\n",
      "--------------------- 4 fold ---------------------\n",
      "[LightGBM] [Warning] Unknown parameter: is_unbalace\n",
      "[LightGBM] [Warning] Unknown parameter: sample_pos_weight\n",
      "Training until validation scores don't improve for 200 rounds\n"
     ]
    }
   ],
   "source": [
    "# callback = paddle.callbacks.VisualDL(log_dir='visualdl_log_dir')  # local only\r\n",
    "\r\n",
    "oof = np.zeros(train_df.shape[0])\r\n",
    "# feat_imp_df = pd.DataFrame({'feat': cols, 'imp': 0})\r\n",
    "test_df['prob'] = 0\r\n",
    "clf = LGBMClassifier(\r\n",
    "    learning_rate=0.07,    # tried 0.05-0.1\r\n",
    "    n_estimators=1030,     # also tried 1300\r\n",
    "    num_leaves=37,         # tried 31/35/37/38/39/40; 38 -> (0.523177, 0.93799), 39 -> (0.519115, 0.93587)\r\n",
    "    subsample=0.8,         # also tried 0.85\r\n",
    "    colsample_bytree=0.8,  # also tried 0.85\r\n",
    "    random_state=11,\r\n",
    "    # reg_lambda=0.1,      # L2 regularization, tried\r\n",
    "    # min_split_gain=0.05, # minimum split gain, tried\r\n",
    "    # Class-imbalance handling: set EITHER is_unbalance OR scale_pos_weight,\r\n",
    "    # not both. The original run misspelled them as is_unbalace /\r\n",
    "    # sample_pos_weight, so LightGBM silently ignored them, hence the\r\n",
    "    # \"Unknown parameter\" warnings in the recorded output.\r\n",
    "    # is_unbalance=True,\r\n",
    "    scale_pos_weight=13\r\n",
    ")\r\n",
    "\r\n",
    "val_aucs = []\r\n",
    "seeds = [11,22,33]\r\n",
    "for seed in seeds:\r\n",
    "    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)\r\n",
    "    for i, (trn_idx, val_idx) in enumerate(skf.split(train_df, train_df['label'])):\r\n",
    "        print('--------------------- {} fold ---------------------'.format(i))\r\n",
    "        t = time.time()\r\n",
    "        trn_x, trn_y = train_df[cols].iloc[trn_idx].reset_index(drop=True), train_df['label'].values[trn_idx]\r\n",
    "        val_x, val_y = train_df[cols].iloc[val_idx].reset_index(drop=True), train_df['label'].values[val_idx]\r\n",
    "        clf.fit(\r\n",
    "            trn_x, trn_y,\r\n",
    "            eval_set=[(val_x, val_y)],\r\n",
    "    #         categorical_feature=cate_cols,\r\n",
    "            eval_metric='auc',\r\n",
    "            early_stopping_rounds=200,\r\n",
    "            verbose=200\r\n",
    "        )\r\n",
    "    #     feat_imp_df['imp'] += clf.feature_importances_ / skf.n_splits\r\n",
    "        oof[val_idx] = clf.predict_proba(val_x)[:, 1]\r\n",
    "        test_df['prob'] += clf.predict_proba(test_df[cols])[:, 1] / skf.n_splits / len(seeds)\r\n",
    "\r\n",
    "    cv_auc = roc_auc_score(train_df['label'], oof)\r\n",
    "    val_aucs.append(cv_auc)\r\n",
    "    print('\\ncv_auc: ', cv_auc)\r\n",
    "print(val_aucs, np.mean(val_aucs))"
   ]
  },
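  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "The loop above averages predictions over 3 random seeds × 5 stratified folds. A minimal, self-contained sketch of the same averaging scheme, on synthetic data and with sklearn's `GradientBoostingClassifier` as a stand-in for `LGBMClassifier` (only the seed/fold averaging logic is the point here):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.datasets import make_classification\n",
    "from sklearn.ensemble import GradientBoostingClassifier\n",
    "from sklearn.metrics import roc_auc_score\n",
    "from sklearn.model_selection import StratifiedKFold\n",
    "\n",
    "X, y = make_classification(n_samples=600, n_features=10, weights=[0.9, 0.1], random_state=0)\n",
    "X_test, _ = make_classification(n_samples=100, n_features=10, random_state=1)\n",
    "\n",
    "seeds = [11, 22, 33]\n",
    "test_prob = np.zeros(len(X_test))\n",
    "aucs = []\n",
    "for seed in seeds:\n",
    "    oof = np.zeros(len(X))  # out-of-fold predictions for this seed\n",
    "    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=seed)\n",
    "    for trn_idx, val_idx in skf.split(X, y):\n",
    "        clf = GradientBoostingClassifier(random_state=seed)\n",
    "        clf.fit(X[trn_idx], y[trn_idx])\n",
    "        oof[val_idx] = clf.predict_proba(X[val_idx])[:, 1]\n",
    "        # each of the 5 folds x 3 seeds contributes equally to the test prediction\n",
    "        test_prob += clf.predict_proba(X_test)[:, 1] / skf.n_splits / len(seeds)\n",
    "    aucs.append(roc_auc_score(y, oof))\n",
    "print(aucs, np.mean(aucs))\n",
    "```\n",
    "\n",
    "Averaging over several seeds smooths out the variance introduced by any single fold assignment, at the cost of proportionally more training time."
   ]
  },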
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "![2](https://ai-studio-static-online.cdn.bcebos.com/9be55d88c86942f697de89c258a6f1a5810bc69ada18458ba5663e5c9f44a2a5)"
   ]
  },
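  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "The classifier above tries to upweight the rare positive class; LightGBM's imbalance options are `is_unbalance` and `scale_pos_weight` (the warnings in the training output show misspelled variants being silently ignored). A hedged sketch of what positive-class weighting does, using sklearn's `class_weight` as an analogue on synthetic data:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.datasets import make_classification\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "\n",
    "# roughly 7% positives, similar in spirit to a rare-default problem\n",
    "X, y = make_classification(n_samples=2000, weights=[0.93, 0.07], random_state=0)\n",
    "\n",
    "# class_weight={0: 1, 1: 13} is sklearn's analogue of LightGBM's scale_pos_weight=13\n",
    "plain = LogisticRegression(max_iter=1000).fit(X, y)\n",
    "weighted = LogisticRegression(max_iter=1000, class_weight={0: 1, 1: 13}).fit(X, y)\n",
    "\n",
    "recall_plain = (plain.predict(X)[y == 1] == 1).mean()\n",
    "recall_weighted = (weighted.predict(X)[y == 1] == 1).mean()\n",
    "print(recall_plain, recall_weighted)\n",
    "```\n",
    "\n",
    "The 13× weight raises recall on the positive class in exchange for more false positives, which is the same trade-off `scale_pos_weight=13` buys in LightGBM."
   ]
  },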
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "print(val_aucs, np.mean(val_aucs))\r\n",
    "def tpr_weight_function(y_true, y_predict):\r\n",
    "    # Competition metric: weighted TPR at three fixed FPR levels\r\n",
    "    d = pd.DataFrame()\r\n",
    "    d['prob'] = list(y_predict)  # predicted probabilities\r\n",
    "    d['y'] = list(y_true)        # ground-truth labels\r\n",
    "    d = d.sort_values(['prob'], ascending=[0])  # sort by probability, descending\r\n",
    "    y = d.y\r\n",
    "    PosAll = pd.Series(y).value_counts()[1]  # total number of positives\r\n",
    "    NegAll = pd.Series(y).value_counts()[0]  # total number of negatives\r\n",
    "    pCumsum = d['y'].cumsum()                  # positives seen so far\r\n",
    "    nCumsum = np.arange(len(y)) - pCumsum + 1  # negatives seen so far\r\n",
    "    pCumsumPer = pCumsum / PosAll  # TPR (coverage rate)\r\n",
    "    nCumsumPer = nCumsum / NegAll  # FPR (disturbance rate)\r\n",
    "    TR1 = pCumsumPer[abs(nCumsumPer - 0.001).idxmin()]  # TPR at FPR = 0.001\r\n",
    "    TR2 = pCumsumPer[abs(nCumsumPer - 0.005).idxmin()]  # TPR at FPR = 0.005\r\n",
    "    TR3 = pCumsumPer[abs(nCumsumPer - 0.01).idxmin()]   # TPR at FPR = 0.01\r\n",
    "\r\n",
    "    return 0.4 * TR1 + 0.3 * TR2 + 0.3 * TR3\r\n",
    "\r\n",
    "tpr = round(tpr_weight_function(train_df['label'], oof), 6)\r\n",
    "tpr, round(np.mean(val_aucs), 5)"
   ]
  },
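  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "The hand-rolled metric above (weighted TPR at FPR ≈ 0.001/0.005/0.01) can be cross-checked against sklearn's ROC machinery. A sketch on synthetic scores, assuming the same 0.4/0.3/0.3 weighting:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.metrics import roc_curve\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "y_true = rng.integers(0, 2, 5000)\n",
    "# scores correlated with the label so low-FPR TPRs are non-trivial\n",
    "y_score = y_true * 0.6 + rng.random(5000)\n",
    "\n",
    "fpr, tpr, _ = roc_curve(y_true, y_score)\n",
    "# TPR at the threshold whose FPR is nearest each target, mirroring the idxmin lookup above\n",
    "weighted = sum(w * tpr[np.argmin(np.abs(fpr - f))]\n",
    "               for w, f in zip([0.4, 0.3, 0.3], [0.001, 0.005, 0.01]))\n",
    "print(round(weighted, 6))\n",
    "```\n",
    "\n",
    "Both implementations pick, for each target FPR, the operating point closest to it; differences come only from how ties and grid resolution are handled."
   ]
  },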
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "submit.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "collapsed": false
   },
   "outputs": [],
   "source": [
    "submit['id'] = test_df['id']\r\n",
    "submit['label'] = test_df['prob']\r\n",
    "\r\n",
    "submit.to_csv('./work/Sub62 {}_{}.csv'.format(tpr, round(np.mean(val_aucs), 6)), index = False)\r\n",
    "submit.head()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "## Assignment Requirements\n",
    "\n",
    "Write out the complete training code, and state the toolkit used, the optimizer used, which parameters you adjusted during training, and brief takeaways.\n",
    "\n",
    "Note: if you intend to complete this assignment in a new project, please attach a link to it when submitting."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "Toolkit used: the project runs in the PaddlePaddle AI Studio environment; the model itself is trained through LightGBM's scikit-learn API.\n",
    "\n",
    "Model used: a LightGBM (LGBM) gradient-boosted tree classifier.\n",
    "\n",
    "Parameters adjusted (two configurations were compared):\n",
    "\n",
    "Configuration 1:\n",
    "\n",
    "    learning_rate=0.066,   # learning rate\n",
    "    n_estimators=1032,     # number of boosted trees, i.e. training rounds\n",
    "    num_leaves=38,         # max leaves per tree; roughly 2^(max_depth) in xgboost terms\n",
    "    subsample=0.85,        # row subsample ratio\n",
    "    colsample_bytree=0.85, # feature (column) subsample ratio\n",
    "    random_state=17,       # random seed\n",
    "    reg_lambda=1e-1,       # L2 regularization coefficient\n",
    "    # min_split_gain=0.2   # minimum split gain\n",
    "\n",
    "Configuration 2:\n",
    "\n",
    "    learning_rate=0.07,    # learning rate\n",
    "    n_estimators=1032,     # number of boosted trees, i.e. training rounds\n",
    "    num_leaves=37,         # max leaves per tree\n",
    "    subsample=0.8,         # row subsample ratio\n",
    "    colsample_bytree=0.8,  # feature (column) subsample ratio\n",
    "    random_state=17,       # random seed\n",
    "    silent=True,           # whether to print logs during training\n",
    "    min_split_gain=0.05,   # minimum split gain\n",
    "    is_unbalance=True,     # handle class imbalance\n",
    "    scale_pos_weight=13    # weight of the positive class\n",
    "\n",
    "**Takeaways: feature engineering matters most; the bulk of the improvement came from feature combinations, with parameter tuning second. Tune one parameter per comparison experiment and never change two at once. Regularization mainly guards against overfitting, so do not apply it before you are sure the model actually overfits, or the score will most likely drop. Look for the few parameters that genuinely force the model to improve (such as min_split_gain=0.05 above), tune the learning rate from large to small, and try plenty of parameter combinations; some work surprisingly well.**"
   ]
  }
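  ,
  {
   "cell_type": "markdown",
   "metadata": {
    "collapsed": false
   },
   "source": [
    "The takeaways above argue for changing one hyperparameter at a time. A minimal sketch of such a one-factor-at-a-time comparison, on synthetic data with sklearn's `GradientBoostingClassifier` standing in for LightGBM (`max_leaf_nodes` plays the role of `num_leaves`):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.datasets import make_classification\n",
    "from sklearn.ensemble import GradientBoostingClassifier\n",
    "from sklearn.model_selection import cross_val_score\n",
    "\n",
    "X, y = make_classification(n_samples=800, n_features=12, random_state=0)\n",
    "\n",
    "results = {}\n",
    "for leaves in [31, 35, 37, 40]:            # vary ONE parameter...\n",
    "    clf = GradientBoostingClassifier(      # ...with everything else held fixed\n",
    "        n_estimators=100, learning_rate=0.07,\n",
    "        max_leaf_nodes=leaves, random_state=0)\n",
    "    results[leaves] = cross_val_score(clf, X, y, cv=3, scoring='roc_auc').mean()\n",
    "print(results)\n",
    "```\n",
    "\n",
    "Because only one knob moves per run, any change in cross-validated AUC can be attributed to that knob, which is exactly the discipline the takeaways recommend."
   ]
  }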
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "PaddlePaddle 2.1.2 (Python 3.5)",
   "language": "python",
   "name": "py35-paddle1.2.0"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 1
}
