{
 "cells": [
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "# 决策树的参数调优\n",
    "- gini：基尼系数，衡量分类的不确定性，越小越好\n",
    "- id3：信息增益，信息增益越大，分类的不确定性越小\n",
    "- c4.5：信息增益比，信息增益除以经验熵，经验熵越小，信息增益比越大\n",
    "\n",
    "class sklearn.tree.DecisionTreeClassifier(criterion=’gini’, max_depth=None,random_state=None)\n",
    "- 决策树分类器\n",
    "- criterion:默认是’gini’系数，也可以选择信息增益（即 id3）的熵’entropy’可以是 id3,或者 c4.5(sklearn 默认没有实现 C4.5)\n",
    "- max_depth:树的深度大小\n",
    "- random_state:随机数种子\n",
    "- min_samples_split int 或 float，默认为 2\n",
    "    - 拆分内部节点所需的最少样本数：\n",
    "- method:\n",
    "- decision_path:返回决策树的路径"
   ]
  },
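  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "The impurity measures above can be computed directly. A minimal sketch (the 70/30 class distribution below is made up for illustration):\n"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "outputs": [],
   "execution_count": null,
   "source": [
    "import numpy as np\n",
    "\n",
    "def gini(p):\n",
    "    # Gini impurity: 1 - sum(p_i^2); 0 for a pure node\n",
    "    p = np.asarray(p)\n",
    "    return 1.0 - np.sum(p ** 2)\n",
    "\n",
    "def entropy(p):\n",
    "    # Shannon entropy: -sum(p_i * log2(p_i)); information gain is the\n",
    "    # parent's entropy minus the weighted entropy of its children\n",
    "    p = np.asarray(p)\n",
    "    p = p[p > 0]\n",
    "    return -np.sum(p * np.log2(p))\n",
    "\n",
    "# A hypothetical node with a 70/30 class split\n",
    "print(gini([0.7, 0.3]))     # 0.42\n",
    "print(entropy([0.7, 0.3]))  # ~0.881\n"
   ]
  },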
  {
   "cell_type": "code",
   "source": [
    "# 分割数据集到训练集合测试集\n",
    "x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=4)\n",
    "# 进行处理（特征工程）特征-》类别-》one_hot编码\n",
    "dict = DictVectorizer(sparse=False)\n",
    "\n",
    "# 这一步是对字典进行特征抽取\n",
    "x_train = dict.fit_transform(x_train.to_dict(orient=\"records\"))\n",
    "x_test = dict.transform(x_test.to_dict(orient=\"records\"))\n",
    "\n",
    "# print(x_train)\n",
    "# # 用决策树进行预测，修改max_depth为10，发现提升了,min_impurity_decrease带来的增益要大于0.01才会进行划分\n",
    "dec = DecisionTreeClassifier(max_depth=7,min_impurity_decrease=0.01,min_samples_split=20)\n",
    "\n",
    "dec.fit(x_train, y_train)\n",
    "#\n",
    "# # 预测准确率\n",
    "print(\"预测的准确率：\", dec.score(x_test, y_test))\n",
    "#\n",
    "# # 导出决策树的结构\n",
    "export_graphviz(dec, out_file=\"tree1.dot\",\n",
    "                feature_names=dict.get_feature_names_out())"
   ],
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    },
    "ExecuteTime": {
     "end_time": "2025-01-22T05:38:06.760263Z",
     "start_time": "2025-01-22T05:38:06.743869Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "预测的准确率： 0.8206686930091185\n"
     ]
    }
   ],
   "execution_count": 81
  },
  {
   "cell_type": "code",
   "source": [
    "y_train.shape"
   ],
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    },
    "ExecuteTime": {
     "end_time": "2025-01-22T05:42:12.356095Z",
     "start_time": "2025-01-22T05:42:12.351570Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(984,)"
      ]
     },
     "execution_count": 82,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 82
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "# 决策树的优缺点以及改进\n",
    "优点：\n",
    "- 简单的理解和解释，树木可视化。\n",
    "- 需要很少的数据准备，其他技术通常需要数据归一化，标准化（决策树不需要进行归一化和标准化）\n",
    "\n",
    "缺点：\n",
    "- 决策树学习者可以创建不能很好地推广数据的过于复杂的树，\n",
    "这被称为过拟合。\n",
    "- 决策树可能不稳定，因为数据的小变化可能会导致完全不同的树被生成（弱分类器）\n",
    "\n",
    "改进：\n",
    "- 减枝 cart(Classification and regression tree)算法—这里我们来看下源码实现还有 png 图\n",
    "- 随机森林—解决过拟合"
   ]
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "### 预剪枝\n",
    "（1）每一个结点所包含的最小样本数目，例如 10，则该结点总样本数小于 10时，则不再分；(min_samples_split)\n",
    "\n",
    "（2）指定树的高度或者深度，例如树的最大深度为 4；（max_depth）\n",
    "\n",
    "（3）指定结点的熵小于某个值，不再划分。随着树的增长， 在训练样集上的精度是单调上升的， 然而在独立的测试样例上测出的精度先上升后下降。对应超参数是 min_impurity_decrease，这个值越大，树的高度越低\n",
    "\n",
    "### 后剪枝：\n",
    "后剪枝，在已生成过拟合决策树上进行剪枝，可以得到简化版的剪枝决策树。"
   ]
  },
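  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "sklearn implements post-pruning as minimal cost-complexity pruning via the `ccp_alpha` parameter of `DecisionTreeClassifier`. A self-contained sketch on synthetic data (separate from the dataset used above):\n"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "outputs": [],
   "execution_count": null,
   "source": [
    "from sklearn.datasets import make_classification\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.tree import DecisionTreeClassifier\n",
    "\n",
    "X, y = make_classification(n_samples=1000, random_state=0)\n",
    "X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)\n",
    "\n",
    "# Grow a full (likely overfit) tree, then compute its pruning path\n",
    "full = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)\n",
    "path = full.cost_complexity_pruning_path(X_tr, y_tr)\n",
    "\n",
    "# Larger ccp_alpha prunes more aggressively, yielding a smaller tree\n",
    "for alpha in path.ccp_alphas[::5]:\n",
    "    pruned = DecisionTreeClassifier(random_state=0, ccp_alpha=alpha).fit(X_tr, y_tr)\n",
    "    print(f\"alpha={alpha:.4f}  leaves={pruned.get_n_leaves()}  \"\n",
    "          f\"test acc={pruned.score(X_te, y_te):.3f}\")\n"
   ]
  },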
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "# 9 集成学习方法-随机森林\n",
    "#### 集成学习方法\n",
    "集成学习通过建立几个模型组合的来解决单一预测问题。它的工作原理是生成多个分类器/模型，各自独立地学习和作出预测。这些预测最后结合成单预测，因此优于任何一个单分类的做出预测。\n",
    "#### 随机森林\n",
    "定义：在机器学习中，随机森林是一个包含多个决策树的分类器，并且其输出的类别是由个别树输出的类别的众数而定。\n",
    "#### 为什么要随机抽样训练集？\n",
    "- 如果不进行随机抽样，每棵树的训练集都一样，那么最终训练出的树分类结果也是完全一样的\n",
    "#### 为什么要有放回地随机抽样？\n",
    "- 如果不是有放回的抽样，那么每棵树的训练样本都是不同的，都是没有交集的，这样每棵树都是“有偏的”，都是绝对“片面的”（当然这样说可能不对），也就是说每棵树训练出来都是有很大的差异的；而随机森林最后分类取决于多棵树（弱分类器）的\n",
    "投票表决。\n",
    "- 随机森林使用有放回的抽样"
   ]
  },
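  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "Sampling with replacement (the bootstrap) can be illustrated directly: each draw of n rows from n rows contains duplicates and omits roughly 1/e ≈ 37% of the rows, so every tree sees a different but overlapping subset. A minimal sketch:\n"
   ]
  },
  {
   "cell_type": "code",
   "metadata": {},
   "outputs": [],
   "execution_count": null,
   "source": [
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "n = 1000\n",
    "\n",
    "# One bootstrap sample per tree: n draws with replacement\n",
    "for tree in range(3):\n",
    "    idx = rng.integers(0, n, size=n)\n",
    "    unique = np.unique(idx).size\n",
    "    # Roughly 63% of rows appear in each sample; the rest are \"out-of-bag\"\n",
    "    print(f\"tree {tree}: {unique} unique rows of {n}\")\n"
   ]
  },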
  {
   "cell_type": "code",
   "source": [
    "x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.25, random_state=4)\n",
    "# 进行处理（特征工程）特征-》类别-》one_hot编码\n",
    "dict = DictVectorizer(sparse=False)\n",
    "\n",
    "# 这一步是对字典进行特征抽取\n",
    "x_train = dict.fit_transform(x_train.to_dict(orient=\"records\"))\n",
    "x_test = dict.transform(x_test.to_dict(orient=\"records\"))"
   ],
   "metadata": {
    "collapsed": false,
    "ExecuteTime": {
     "end_time": "2025-01-22T05:08:10.560237Z",
     "start_time": "2025-01-22T05:08:10.547877Z"
    }
   },
   "outputs": [],
   "execution_count": 79
  },
  {
   "cell_type": "code",
   "source": [
    "# 随机森林进行预测 （超参数调优），n_jobs充分利用多核的一个参数\n",
    "rf = RandomForestClassifier(n_jobs=-1)\n",
    "# 120, 200, 300, 500, 800, 1200,n_estimators森林中决策树的数目，也就是分类器的数目\n",
    "# max_samples  是最大样本数\n",
    "#bagging类型\n",
    "param = {\"n_estimators\": [1500,2000, 5000], \"max_depth\": [2, 3, 5, 8, 15, 25]}\n",
    "\n",
    "# 网格搜索与交叉验证\n",
    "gc = GridSearchCV(rf, param_grid=param, cv=3)\n",
    "\n",
    "gc.fit(x_train, y_train)\n",
    "\n",
    "print(\"准确率：\", gc.score(x_test, y_test))\n",
    "\n",
    "print(\"查看选择的参数模型：\", gc.best_params_)\n",
    "\n",
    "print(\"选择最好的模型是：\", gc.best_estimator_)\n"
   ],
   "metadata": {
    "collapsed": false,
    "pycharm": {
     "name": "#%%\n"
    },
    "ExecuteTime": {
     "end_time": "2025-01-22T05:10:49.209258Z",
     "start_time": "2025-01-22T05:08:10.561223Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "准确率： 0.8328267477203647\n",
      "查看选择的参数模型： {'max_depth': 3, 'n_estimators': 2000}\n",
      "选择最好的模型是： RandomForestClassifier(max_depth=3, n_estimators=2000, n_jobs=-1)\n"
     ]
    }
   ],
   "execution_count": 80
  },
  {
   "metadata": {},
   "cell_type": "code",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "每个超参数每次交叉验证的结果： {'mean_fit_time': array([1.42668621, 2.01992742, 5.17521898, 1.41602651, 1.8718152 ,\n",
      "       4.6288871 , 1.36712853, 1.81253672, 4.48539241, 1.32155291,\n",
      "       1.75371329, 4.55546188, 1.31463385, 1.76522319, 4.61754187,\n",
      "       1.32016627, 1.7773993 , 4.68354948]), 'std_fit_time': array([0.01800295, 0.09225985, 0.23373382, 0.01518977, 0.01241641,\n",
      "       0.0746935 , 0.01762606, 0.02440667, 0.0882879 , 0.01042789,\n",
      "       0.01423035, 0.12514934, 0.00368039, 0.02479667, 0.02346305,\n",
      "       0.0172283 , 0.01300723, 0.35609046]), 'mean_score_time': array([0.15451924, 0.20733738, 0.52302575, 0.15239255, 0.19084915,\n",
      "       0.46970463, 0.14551361, 0.18346604, 0.44726737, 0.14029741,\n",
      "       0.18422246, 0.45180448, 0.14296158, 0.18685961, 0.47240027,\n",
      "       0.14420644, 0.18640272, 0.46219746]), 'std_score_time': array([0.00616835, 0.01094967, 0.04247778, 0.00403438, 0.00097751,\n",
      "       0.01620811, 0.00782402, 0.00542539, 0.01374067, 0.00515704,\n",
      "       0.00485575, 0.00796891, 0.00493557, 0.00557881, 0.00737878,\n",
      "       0.00485275, 0.00474555, 0.00748113]), 'param_max_depth': masked_array(data=[2, 2, 2, 3, 3, 3, 5, 5, 5, 8, 8, 8, 15, 15, 15, 25, 25,\n",
      "                   25],\n",
      "             mask=[False, False, False, False, False, False, False, False,\n",
      "                   False, False, False, False, False, False, False, False,\n",
      "                   False, False],\n",
      "       fill_value=999999), 'param_n_estimators': masked_array(data=[1500, 2000, 5000, 1500, 2000, 5000, 1500, 2000, 5000,\n",
      "                   1500, 2000, 5000, 1500, 2000, 5000, 1500, 2000, 5000],\n",
      "             mask=[False, False, False, False, False, False, False, False,\n",
      "                   False, False, False, False, False, False, False, False,\n",
      "                   False, False],\n",
      "       fill_value=999999), 'params': [{'max_depth': 2, 'n_estimators': 1500}, {'max_depth': 2, 'n_estimators': 2000}, {'max_depth': 2, 'n_estimators': 5000}, {'max_depth': 3, 'n_estimators': 1500}, {'max_depth': 3, 'n_estimators': 2000}, {'max_depth': 3, 'n_estimators': 5000}, {'max_depth': 5, 'n_estimators': 1500}, {'max_depth': 5, 'n_estimators': 2000}, {'max_depth': 5, 'n_estimators': 5000}, {'max_depth': 8, 'n_estimators': 1500}, {'max_depth': 8, 'n_estimators': 2000}, {'max_depth': 8, 'n_estimators': 5000}, {'max_depth': 15, 'n_estimators': 1500}, {'max_depth': 15, 'n_estimators': 2000}, {'max_depth': 15, 'n_estimators': 5000}, {'max_depth': 25, 'n_estimators': 1500}, {'max_depth': 25, 'n_estimators': 2000}, {'max_depth': 25, 'n_estimators': 5000}], 'split0_test_score': array([0.73780488, 0.73780488, 0.73780488, 0.80182927, 0.80182927,\n",
      "       0.80182927, 0.81097561, 0.81097561, 0.81097561, 0.82012195,\n",
      "       0.82012195, 0.82012195, 0.82012195, 0.82012195, 0.82012195,\n",
      "       0.81402439, 0.82012195, 0.82012195]), 'split1_test_score': array([0.82621951, 0.82621951, 0.82621951, 0.81707317, 0.82317073,\n",
      "       0.82317073, 0.81402439, 0.81402439, 0.81402439, 0.81097561,\n",
      "       0.81402439, 0.80792683, 0.81402439, 0.81402439, 0.81402439,\n",
      "       0.81707317, 0.81402439, 0.81402439]), 'split2_test_score': array([0.81707317, 0.81707317, 0.81707317, 0.82926829, 0.82926829,\n",
      "       0.82926829, 0.82317073, 0.82317073, 0.82621951, 0.79268293,\n",
      "       0.79268293, 0.79268293, 0.79573171, 0.79573171, 0.79573171,\n",
      "       0.79573171, 0.79573171, 0.79573171]), 'mean_test_score': array([0.79369919, 0.79369919, 0.79369919, 0.81605691, 0.81808943,\n",
      "       0.81808943, 0.81605691, 0.81605691, 0.81707317, 0.80792683,\n",
      "       0.80894309, 0.80691057, 0.80995935, 0.80995935, 0.80995935,\n",
      "       0.80894309, 0.80995935, 0.80995935]), 'std_test_score': array([0.03969924, 0.03969924, 0.03969924, 0.01122496, 0.01176406,\n",
      "       0.01176406, 0.00518193, 0.00518193, 0.00658612, 0.01140749,\n",
      "       0.01176406, 0.01122496, 0.01036386, 0.01036386, 0.01036386,\n",
      "       0.00942441, 0.01036386, 0.01036386]), 'rank_test_score': array([16, 16, 16,  4,  1,  1,  4,  4,  3, 14, 12, 15,  7,  7,  7, 12,  7,\n",
      "        7], dtype=int32)}\n"
     ]
    }
   ],
   "execution_count": 83,
   "source": "print(\"每个超参数每次交叉验证的结果：\", gc.cv_results_)"
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 0
}
