{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "2fb8bb43",
   "metadata": {},
   "source": [
    "# Report 1: Traffic Accident Claim Review Prediction"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f30b81a0",
   "metadata": {},
   "source": [
    "# 1. Objective"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "96f0bc1f",
   "metadata": {},
   "source": [
    "Using what we know about binary classification, learn from the 36 features collected after each accident in the training set, then predict for each case in the test set whether the party receives compensation, and iterate on the model to improve its accuracy."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d8902c95",
   "metadata": {},
   "source": [
    "# 2. Background Knowledge"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5d87f288",
   "metadata": {},
   "source": [
    "LogisticRegression: although called a regression model, logistic regression is in fact a classification model, most often used for binary classification. It is simple, parallelizable, and highly interpretable; it assumes the data follow a logistic (Bernoulli) distribution and estimates the parameters by maximum likelihood.\n",
    "RandomForestClassifier: the random forest is the representative bagging ensemble; all of its base estimators are decision trees, and a forest of classification trees is called a random forest. Its main hyperparameters are criterion, max_depth, min_samples_leaf, min_samples_split, max_features, and min_impurity_decrease.\n",
    "Also needed: the confusion matrix, and how to tune each model's hyperparameters to reach the best configuration."
   ]
  },
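  {
   "cell_type": "markdown",
   "id": "1c2d3e4a",
   "metadata": {},
   "source": [
    "As a minimal sketch of the confusion matrix mentioned above (the labels here are made-up toy values, not data from this task):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2d3e4f5b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.metrics import confusion_matrix\n",
    "\n",
    "# Toy binary labels: 1 = compensated, 0 = not compensated\n",
    "y_true = [0, 1, 1, 0, 1]\n",
    "y_pred = [0, 1, 0, 0, 1]\n",
    "\n",
    "# Rows are true classes, columns are predicted classes:\n",
    "# [[TN, FP],\n",
    "#  [FN, TP]]\n",
    "cm = confusion_matrix(y_true, y_pred)\n",
    "print(cm)"
   ]
  },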
  {
   "cell_type": "markdown",
   "id": "a05520ca",
   "metadata": {},
   "source": [
    "# 3. Background\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d66d8c24",
   "metadata": {},
   "source": [
    "After a traffic collision, a claims adjuster surveys the scene and collects information that largely determines whether the car owner is compensated by the insurance company. Each training example contains the 36 pieces of information (already encoded) that the adjuster collected for one party to an accident, together with whether that party was ultimately compensated. Our task is to predict, from these 36 features, the probability that a party is denied compensation."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bb67ee5e",
   "metadata": {},
   "source": [
    "# 4. Approach"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "795f14cb",
   "metadata": {},
   "source": [
    "1. Classify with the two official baseline models; the random forest achieves the higher accuracy, so it is chosen as the model to refine and tune.\n",
    "2. Tune each random forest hyperparameter in turn, selecting the best value for each.\n",
    "3. Once the range of each parameter is fixed, analyze the parameters jointly rather than in isolation.\n",
    "4. Obtain the final model."
   ]
  },
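  {
   "cell_type": "markdown",
   "id": "3e4f5a6c",
   "metadata": {},
   "source": [
    "Step 2 above (picking the best value of each parameter) can be sketched with scikit-learn's GridSearchCV; the data and grid values here are illustrative stand-ins, not the ones actually used:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4f5a6b7d",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import make_classification\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "\n",
    "# Synthetic stand-in for the real training set\n",
    "X, y = make_classification(n_samples=300, n_features=10, random_state=0)\n",
    "\n",
    "# Illustrative grid over two of the parameters listed in Section 2\n",
    "param_grid = {'max_depth': [3, 5, None], 'min_samples_leaf': [1, 2, 4]}\n",
    "search = GridSearchCV(RandomForestClassifier(n_estimators=50, random_state=0),\n",
    "                      param_grid, cv=3, scoring='accuracy')\n",
    "search.fit(X, y)\n",
    "print(search.best_params_, search.best_score_)"
   ]
  },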
  {
   "cell_type": "markdown",
   "id": "f6cdc08b",
   "metadata": {},
   "source": [
    "# 5. Experiments"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4bdd1780",
   "metadata": {},
   "outputs": [],
   "source": [
    "# (1) Classify with the two baseline models\n",
    "# Baseline 1: logistic regression\n",
    "import pandas as pd\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "\n",
    "# Load the data with pandas\n",
    "train = pd.read_csv(r'C:\\Users\\Junjie Wang\\Desktop\\机器学习\\train.csv')\n",
    "test = pd.read_csv(r'C:\\Users\\Junjie Wang\\Desktop\\机器学习\\test.csv')\n",
    "submit = pd.read_csv(r'C:\\Users\\Junjie Wang\\Desktop\\机器学习\\sample_submit.csv')\n",
    "\n",
    "# Drop the CaseId column from both sets: it is an identifier, not a feature\n",
    "train.drop('CaseId', axis=1, inplace=True)\n",
    "test.drop('CaseId', axis=1, inplace=True)\n",
    "\n",
    "y_train = train.pop('Evaluation')  # separate the training labels\n",
    "\n",
    "# Fit an L1-regularized logistic regression (liblinear supports the L1 penalty)\n",
    "clf = LogisticRegression(penalty='l1', C=1.0, random_state=0, solver='liblinear')\n",
    "clf.fit(train, y_train)\n",
    "y_pred = clf.predict_proba(test)[:, 1]\n",
    "\n",
    "# Write the predicted probabilities to 01.csv\n",
    "submit['Evaluation'] = y_pred\n",
    "submit.to_csv(r'C:\\Users\\Junjie Wang\\Desktop\\机器学习\\01.csv', index=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7dff4075",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Baseline 2: random forest classifier\n",
    "import pandas as pd\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "\n",
    "# Load the data\n",
    "train = pd.read_csv(r'C:\\Users\\Junjie Wang\\Desktop\\机器学习\\train.csv')\n",
    "test = pd.read_csv(r'C:\\Users\\Junjie Wang\\Desktop\\机器学习\\test.csv')\n",
    "submit = pd.read_csv(r'C:\\Users\\Junjie Wang\\Desktop\\机器学习\\sample_submit.csv')\n",
    "\n",
    "# Drop the CaseId identifier from both sets\n",
    "train.drop('CaseId', axis=1, inplace=True)\n",
    "test.drop('CaseId', axis=1, inplace=True)\n",
    "\n",
    "y_train = train.pop('Evaluation')  # separate the training labels\n",
    "\n",
    "# Fit a random forest with 100 trees\n",
    "clf = RandomForestClassifier(n_estimators=100, random_state=0)\n",
    "clf.fit(train, y_train)\n",
    "y_pred = clf.predict_proba(test)[:, 1]\n",
    "\n",
    "# Write the predicted probabilities to 02.csv\n",
    "submit['Evaluation'] = y_pred\n",
    "submit.to_csv(r'C:\\Users\\Junjie Wang\\Desktop\\机器学习\\02.csv', index=False)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c7f77ec6",
   "metadata": {},
   "source": [
    "Comparing the two models' outputs, the random forest predictions are more accurate, so we decided to continue from the random forest baseline and tune model parameters in the hope of better accuracy, arriving at the final model."
   ]
  },
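  {
   "cell_type": "markdown",
   "id": "5a6b7c8e",
   "metadata": {},
   "source": [
    "The comparison above was made by submitting each model's predictions; a local alternative is to compare the two baselines with cross-validation. This is a sketch on synthetic data standing in for train.csv:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6b7c8d9f",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import make_classification\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.model_selection import cross_val_score\n",
    "\n",
    "# Synthetic stand-in for the 36-feature training set\n",
    "X, y = make_classification(n_samples=500, n_features=36, random_state=0)\n",
    "\n",
    "# Mean 5-fold AUC for each baseline\n",
    "lr_auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,\n",
    "                         cv=5, scoring='roc_auc').mean()\n",
    "rf_auc = cross_val_score(RandomForestClassifier(n_estimators=100, random_state=0),\n",
    "                         X, y, cv=5, scoring='roc_auc').mean()\n",
    "print('LR AUC:', lr_auc, 'RF AUC:', rf_auc)"
   ]
  },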
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7c5e6d53",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Final model: gradient-boosted trees (XGBoost)\n",
    "import pandas as pd\n",
    "import xgboost as xgb\n",
    "from sklearn.metrics import accuracy_score, roc_auc_score\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "# Load the data\n",
    "traindata = pd.read_csv(r'C:\\Users\\Junjie Wang\\Desktop\\机器学习\\train.csv')\n",
    "testdata = pd.read_csv(r'C:\\Users\\Junjie Wang\\Desktop\\机器学习\\test.csv')\n",
    "submitdata = pd.read_csv(r'C:\\Users\\Junjie Wang\\Desktop\\机器学习\\sample_submit.csv')\n",
    "\n",
    "# Drop the CaseId identifier, which carries no predictive meaning\n",
    "traindata.drop('CaseId', axis=1, inplace=True)\n",
    "testdata.drop('CaseId', axis=1, inplace=True)\n",
    "\n",
    "# Separate the labels from the training features\n",
    "trainlabel = traindata.pop('Evaluation')\n",
    "\n",
    "# Hold out 30% of the training data for validation\n",
    "X_train, X_test, y_train, y_test = train_test_split(\n",
    "    traindata.values, trainlabel.values, test_size=0.3, random_state=123457)\n",
    "\n",
    "# Train the model with the tuned hyperparameters\n",
    "model = xgb.XGBClassifier(max_depth=5,\n",
    "                          learning_rate=0.1,\n",
    "                          min_child_weight=4,\n",
    "                          gamma=0.5,\n",
    "                          n_estimators=5000,\n",
    "                          objective='binary:logistic',\n",
    "                          n_jobs=4,\n",
    "                          random_state=27,\n",
    "                          scale_pos_weight=1,\n",
    "                          subsample=0.9,\n",
    "                          colsample_bytree=0.6,\n",
    "                          colsample_bylevel=1,\n",
    "                          reg_alpha=10,\n",
    "                          base_score=0.5)\n",
    "model.fit(X_train, y_train)\n",
    "\n",
    "# Evaluate on the held-out validation split\n",
    "y_pred = model.predict(X_test)\n",
    "accuracy = accuracy_score(y_test, y_pred)\n",
    "print('Accuracy: %.2f%%' % (accuracy * 100))\n",
    "# AUC should be computed from predicted probabilities, not hard labels\n",
    "print('AUC (validation): %f' % roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))\n",
    "\n",
    "def run_predict():\n",
    "    # Predict the claim probability for the test set and write the submission\n",
    "    y_pred_test = model.predict_proba(testdata.values)[:, 1]\n",
    "    submitdata['Evaluation'] = y_pred_test\n",
    "    submitdata.to_csv(r'C:\\Users\\Junjie Wang\\Desktop\\机器学习\\xgboost.csv', index=False)\n",
    "\n",
    "run_predict()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "de178a3e",
   "metadata": {},
   "source": [
    "Result: the final model reaches 93% accuracy. (The full results are in the Gitee repository; the required libraries would not install in my Jupyter Notebook environment, so the final results were produced in PyCharm.)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "07836bb8",
   "metadata": {},
   "source": [
    "# Summary"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2ba369a5",
   "metadata": {},
   "source": [
    "1. The random forest is a supervised ensemble model for classification and regression. Ensemble learning aggregates many models so that the whole performs better than any single member: each tree on its own is a weak learner, but together they form a strong one. A random forest aggregates the outputs of many decision trees; majority voting (or probability averaging) smooths out the variance that makes an individual tree err, so the ensemble is less likely to stray far from the truth. In short, it turns a set of high-variance, low-bias trees into a low-variance, low-bias model.\n",
    "2. After classifying with an existing model, accuracy can be raised by tuning its hyperparameters. However, tuning each parameter in isolation and then combining all the individually optimal values did not noticeably improve accuracy, so tuning must also account for the interactions between parameters.\n",
    "3. I need to become more fluent with PyCharm and Jupyter and improve my English reading, so that I can diagnose error messages immediately, and I need to learn how to install and import the required libraries in both tools."
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
