{
 "cells": [
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ff5cbc6e",
   "metadata": {},
   "outputs": [],
   "source": [
    "import warnings\n",
    "warnings.filterwarnings('ignore') \n",
    "\n",
    "from IPython.display import display, HTML\n",
    "\n",
    "display(HTML(\"<style>.container { width:60% !important; }</style>\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "726e1d70",
   "metadata": {},
   "source": [
    "# Step 1: Identify the problem to solve and the type of algorithm it requires"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "80f306a6",
   "metadata": {},
   "source": [
    "There are only three common problem types: **classification**, **regression**, and **clustering**. Identifying which type a concrete problem belongs to is straightforward. For example:\n",
    "\n",
    "1. **Classification**: if you need to map the input data to a categorical variable, it is a classification problem. Two classes make it binary classification; more than two make it multi-class classification. Examples: deciding whether an email is spam, or telling whether a picture shows a cat or a dog.\n",
    "\n",
    "2. **Regression**: if you need to map the input data to a concrete continuous value, it is a regression problem, e.g. predicting house prices in a region.\n",
    "\n",
    "3. **Clustering**: if your dataset has no labels and your goal is to explore how the samples are distributed in feature space, e.g. which samples lie close together and which are far apart, it is a clustering problem.\n",
    "\n",
    "Common classification and regression algorithms include SVM (support vector machine), XGBoost, KNN, LR, SGD (stochastic gradient descent), Bayes (Bayesian estimation), and random forests. Most of these can solve both classification and regression problems.\n",
    "\n",
    "A common clustering algorithm is k-means."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c3242ffb",
   "metadata": {},
   "source": [
    "# Step 2: Build the model with scikit-learn"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8560f258",
   "metadata": {},
   "source": [
    "## 1. Load the dataset\n",
    "\n",
    "Raw data may contain missing values, outliers, and duplicates; after preprocessing, the cleaned dataset is used to build the machine-learning model.   \n",
    "Here we load one of scikit-learn's built-in datasets to demonstrate the template.\n",
    "\n",
    "Usually the data lives in a DataFrame, with each feature in its own column plus one column (target_col) holding the labels (the prediction target). You can typically obtain x and y with:   \n",
    "`x = df[feature_cols].to_numpy()`  \n",
    "`y = df[target_col].to_numpy()`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "af46e898",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import load_iris\n",
    "data = load_iris()\n",
    "x = data.data\n",
    "y = data.target"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d516c858",
   "metadata": {},
   "source": [
    "x is a two-dimensional array with 150 rows and 4 columns, holding the 4 features of the 150 samples."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4234291e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Inspect the first 5 samples\n",
    "x[:5]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d6946451",
   "metadata": {},
   "source": [
    "y is a one-dimensional array of 150 entries giving each sample's class; 0, 1 and 2 denote the three iris species."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "87f08dfe",
   "metadata": {},
   "outputs": [],
   "source": [
    "y"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ad097b5c",
   "metadata": {},
   "source": [
    "## 2. Split the dataset\n",
    "Splitting the dataset lets us check whether the model overfits by comparing its performance on the `training set` and the `test set`; train_test_split draws the test set uniformly at random from the data. Here we simply hold out 20% of the samples as the test set."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "39bb1f87",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.model_selection import train_test_split\n",
    "x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a182c928",
   "metadata": {},
   "source": [
    "## 3. Choose a machine-learning algorithm\n",
    "\n",
    "- Common **classification** and **regression** algorithms include linear regression, logistic regression, support vector machines, KNN, decision trees, naive Bayes, random forests, XGBoost, SGD (stochastic gradient descent), and neural networks. Most of these can solve both classification and regression problems.   \n",
    "\n",
    "\n",
    "- A common **clustering** algorithm is k-means.\n"
   ]
  },
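  {
   "cell_type": "markdown",
   "id": "kmeans-demo-md",
   "metadata": {},
   "source": [
    "The clustering branch mentioned above never gets a code cell of its own in this notebook, so the cell below is a minimal k-means sketch on the iris features (loaded again so the cell is self-contained). `n_clusters=3` is an assumption based on iris having three species; adjust it for your own data."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "kmeans-demo-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal k-means sketch (n_clusters=3 is an assumption: iris has three species)\n",
    "from sklearn.datasets import load_iris\n",
    "from sklearn.cluster import KMeans\n",
    "from sklearn.metrics import adjusted_rand_score\n",
    "\n",
    "data = load_iris()\n",
    "x_demo, y_demo = data.data, data.target\n",
    "\n",
    "# Fit k-means and assign each sample to a cluster\n",
    "kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)\n",
    "labels = kmeans.fit_predict(x_demo)\n",
    "\n",
    "# Compare the discovered clusters with the true species labels\n",
    "print('Adjusted Rand index:', adjusted_rand_score(y_demo, labels))"
   ]
  },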
  {
   "cell_type": "markdown",
   "id": "c29acf53",
   "metadata": {},
   "source": [
    "## 4. Choose an evaluation metric\n",
    "\n",
    "- Classification metrics: accuracy is a sensible default; choose other metrics as the task requires (their definitions are not covered here).\n",
    "  - Accuracy: accuracy, `from sklearn.metrics import accuracy_score`\n",
    "  - Precision: precision, `from sklearn.metrics import precision_score`\n",
    "  - Recall: recall, `from sklearn.metrics import recall_score`\n",
    "  - F1-score: f1, `from sklearn.metrics import f1_score`   \n",
    "  \n",
    "  \n",
    "- Regression metrics: MSE is a sensible default.\n",
    "  - Mean squared error (MSE): neg_mean_squared_error, `from sklearn.metrics import mean_squared_error`\n",
    "  - Mean absolute error (MAE): neg_mean_absolute_error, `from sklearn.metrics import mean_absolute_error`\n",
    "  - R-squared: r2, `from sklearn.metrics import r2_score`   \n",
    "  \n",
    "  \n",
    "- Clustering metrics:\n",
    "  - Adjusted mutual information: adjusted_mutual_info_score, `from sklearn.metrics import adjusted_mutual_info_score`\n",
    "  - Adjusted Rand index: adjusted_rand_score, `from sklearn.metrics import adjusted_rand_score`\n",
    "  - V-measure: v_measure_score, `from sklearn.metrics import v_measure_score`"
   ]
  },
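  {
   "cell_type": "markdown",
   "id": "metrics-demo-md",
   "metadata": {},
   "source": [
    "As a quick, self-contained illustration of the classification metrics listed above (hand-made toy labels, not part of the template): for multi-class targets, precision, recall and F1 need an `average` argument such as `'macro'`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "metrics-demo-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy illustration of the classification metrics above\n",
    "from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score\n",
    "\n",
    "y_true = [0, 1, 2, 2, 1, 0]\n",
    "y_pred = [0, 2, 2, 2, 1, 0]\n",
    "\n",
    "print('accuracy :', accuracy_score(y_true, y_pred))\n",
    "# Multi-class precision/recall/F1 are averaged over classes with average='macro'\n",
    "print('precision:', precision_score(y_true, y_pred, average='macro'))\n",
    "print('recall   :', recall_score(y_true, y_pred, average='macro'))\n",
    "print('f1       :', f1_score(y_true, y_pred, average='macro'))"
   ]
  },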
  {
   "cell_type": "markdown",
   "id": "2db0f44a",
   "metadata": {},
   "source": [
    "## 5. Model-building templates"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5485d30e",
   "metadata": {},
   "source": [
    "### (1) Template V1.0 (the simple version; fine for competitions)\n",
    "\n",
    "We train the model on the `training set`, then evaluate it on the `test set`; higher metric values are better. **Different problems call for different evaluation metrics (see Step 4).**\n",
    "\n",
    "<img src=\"机器学习模板v1.png\" width = \"500\" alt=\"ML template v1\" align=left />"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7cb0c17e",
   "metadata": {},
   "outputs": [],
   "source": [
    "# SVC: support vector machine classifier\n",
    "from sklearn.svm import SVC\n",
    "from sklearn.metrics import accuracy_score\n",
    "\n",
    "# Build the model\n",
    "model = SVC()\n",
    "\n",
    "# Train the model\n",
    "model.fit(x_train, y_train)\n",
    "\n",
    "# Evaluate on the training set\n",
    "pred1 = model.predict(x_train)\n",
    "accuracy1 = accuracy_score(y_train, pred1)\n",
    "print(f'Accuracy on the training set: {accuracy1}')\n",
    "\n",
    "# Evaluate on the test set\n",
    "pred2 = model.predict(x_test)\n",
    "accuracy2 = accuracy_score(y_test, pred2)\n",
    "print(f'Accuracy on the test set: {accuracy2}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7bc500d1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Logistic regression classifier\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.metrics import accuracy_score  # score with accuracy\n",
    "\n",
    "# Build the model (raise max_iter so the default lbfgs solver converges)\n",
    "model = LogisticRegression(max_iter=1000)\n",
    "\n",
    "# Train the model\n",
    "model.fit(x_train, y_train)\n",
    "\n",
    "# Evaluate on the training set\n",
    "pred1 = model.predict(x_train)\n",
    "accuracy1 = accuracy_score(y_train, pred1)\n",
    "print(f'Accuracy on the training set: {accuracy1}')\n",
    "\n",
    "# Evaluate on the test set\n",
    "pred2 = model.predict(x_test)\n",
    "accuracy2 = accuracy_score(y_test, pred2)\n",
    "print(f'Accuracy on the test set: {accuracy2}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f494cab6",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Random forest classifier\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "from sklearn.metrics import accuracy_score  # score with accuracy\n",
    "\n",
    "# Build the model\n",
    "model = RandomForestClassifier()\n",
    "\n",
    "# Train the model\n",
    "model.fit(x_train, y_train)\n",
    "\n",
    "# Evaluate on the training set\n",
    "pred1 = model.predict(x_train)\n",
    "accuracy1 = accuracy_score(y_train, pred1)\n",
    "print(f'Accuracy on the training set: {accuracy1}')\n",
    "\n",
    "# Evaluate on the test set\n",
    "pred2 = model.predict(x_test)\n",
    "accuracy2 = accuracy_score(y_test, pred2)\n",
    "print(f'Accuracy on the test set: {accuracy2}')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e210b6b7",
   "metadata": {},
   "source": [
    "### (2) Template V2.0 (adds cross-validation to the evaluation)\n",
    "\n",
    "**Cross-validation makes model evaluation more rigorous.**\n",
    "\n",
    "In the previous version we judged the model by its performance on the `test set`. In this version, if the `training set` has enough samples, we additionally use `cross-validation` for a more rigorous assessment: a model that scores well both in cross-validation and on the test set generalizes and predicts well. **Note that the animation below also cross-validates on the test set, which is unnecessary.**\n",
    "\n",
    "The most common scheme is k-fold cross-validation: split the training set into k smaller folds, train on k-1 of them, validate on the remaining fold, repeat k times, and take the mean of the k validation scores as the training-set evaluation score.\n",
    "\n",
    "<img src=\"机器学习模板v2.webp\" width = \"500\" alt=\"ML template v2\" align=left />"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9b88df28",
   "metadata": {},
   "outputs": [],
   "source": [
    "# SVR: support vector machine regressor\n",
    "# (iris is a classification dataset; its integer labels are used here only to demo the regression template)\n",
    "import numpy as np\n",
    "from sklearn.model_selection import cross_val_score\n",
    "from sklearn.metrics import mean_squared_error\n",
    "from sklearn.svm import SVR\n",
    "\n",
    "# Build the model\n",
    "model = SVR()\n",
    "\n",
    "# Train the model\n",
    "model.fit(x_train, y_train)\n",
    "\n",
    "# If the training set is large enough, evaluate with cross-validation on it\n",
    "scores = cross_val_score(model, x_train, y_train, cv=5, scoring='neg_mean_squared_error')\n",
    "rmse = np.sqrt(-scores).mean()\n",
    "print(f\"Mean cross-validation RMSE on the training set: {rmse}\")\n",
    "\n",
    "# Evaluate on the test set\n",
    "y_pred = model.predict(x_test)\n",
    "mse = mean_squared_error(y_test, y_pred)\n",
    "rmse = np.sqrt(mse)\n",
    "print(f'RMSE on the test set: {rmse}')\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ba44d3e5",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Random forest regressor\n",
    "import numpy as np\n",
    "from sklearn.model_selection import cross_val_score\n",
    "from sklearn.metrics import mean_squared_error\n",
    "from sklearn.ensemble import RandomForestRegressor\n",
    "\n",
    "# Build the model\n",
    "model = RandomForestRegressor()\n",
    "\n",
    "# Train the model\n",
    "model.fit(x_train, y_train)\n",
    "\n",
    "# If the training set is large enough, evaluate with cross-validation on it\n",
    "scores = cross_val_score(model, x_train, y_train, cv=5, scoring='neg_mean_squared_error')\n",
    "rmse = np.sqrt(-scores).mean()\n",
    "print(f\"Mean cross-validation RMSE on the training set: {rmse}\")\n",
    "\n",
    "# Evaluate on the test set\n",
    "y_pred = model.predict(x_test)\n",
    "mse = mean_squared_error(y_test, y_pred)\n",
    "rmse = np.sqrt(mse)\n",
    "print(f'RMSE on the test set: {rmse}')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e88e9bdf",
   "metadata": {},
   "source": [
    "### (3) Template V3.0 (adds grid search to find the best hyperparameters)\n",
    "\n",
    "The previous versions train with default parameters, which are not always a good fit. We can instead tune the parameters to find the model that performs best in-sample, then check its predictive power on the held-out `test set`.\n",
    "\n",
    "**Tuning takes the algorithm up another notch.**     \n",
    "\n",
    "The main tool is grid search (the GridSearchCV class), which systematically tries every parameter combination and picks the best one by cross-validation.\n",
    "\n",
    "<img src=\"机器学习模板v3.png\" width = \"500\" alt=\"ML template v3\" align=left />"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e3760935",
   "metadata": {},
   "outputs": [],
   "source": [
    "# SVR regressor with grid search\n",
    "import numpy as np\n",
    "from sklearn.model_selection import cross_val_score, GridSearchCV\n",
    "from sklearn.metrics import mean_squared_error\n",
    "from sklearn.svm import SVR\n",
    "\n",
    "model = SVR()\n",
    "\n",
    "params = [\n",
    "        {'kernel': ['linear'], 'C': [1, 10, 100, 1000]},\n",
    "        {'kernel': ['poly'], 'C': [1], 'degree': [2, 3]},\n",
    "        {'kernel': ['rbf'], 'C': [1, 10, 100, 1000], 'gamma': [1, 0.1, 0.01, 0.001]}\n",
    "        ]\n",
    "\n",
    "grid_search = GridSearchCV(model, param_grid=params, cv=5, refit=True, scoring='neg_mean_squared_error')\n",
    "grid_search.fit(x_train, y_train)\n",
    "\n",
    "print('Best parameters:', grid_search.best_params_)\n",
    "\n",
    "# Retrieve the best model\n",
    "final_model = grid_search.best_estimator_\n",
    "\n",
    "# Evaluate the best model with cross-validation on the training set\n",
    "scores = cross_val_score(final_model, x_train, y_train, cv=5, scoring='neg_mean_squared_error')\n",
    "rmse = np.sqrt(-scores).mean()\n",
    "print(f\"Mean cross-validation RMSE on the training set: {rmse}\")\n",
    "\n",
    "# Evaluate on the test set\n",
    "pred2 = final_model.predict(x_test)\n",
    "mse = mean_squared_error(y_test, pred2)\n",
    "rmse = np.sqrt(mse)\n",
    "print(f'RMSE on the test set: {rmse}')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "80a6d3c1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Random forest regressor with grid search\n",
    "import numpy as np\n",
    "from sklearn.model_selection import cross_val_score, GridSearchCV\n",
    "from sklearn.metrics import mean_squared_error\n",
    "from sklearn.ensemble import RandomForestRegressor\n",
    "\n",
    "model = RandomForestRegressor()\n",
    "\n",
    "# max_features must not exceed the number of features (iris has only 4)\n",
    "params = [\n",
    "        {'n_estimators': [3, 10, 30, 100, 300], 'max_features': [2, 3, 4]},\n",
    "        {'bootstrap': [False], 'n_estimators': [30, 100, 300], 'max_features': [2, 3, 4]}\n",
    "        ]\n",
    "\n",
    "grid_search = GridSearchCV(model, param_grid=params, cv=5, refit=True, scoring='neg_mean_squared_error')\n",
    "grid_search.fit(x_train, y_train)\n",
    "\n",
    "print('Best parameters:', grid_search.best_params_)\n",
    "\n",
    "# Retrieve the best model\n",
    "final_model = grid_search.best_estimator_\n",
    "\n",
    "# Evaluate the best model with cross-validation on the training set\n",
    "scores = cross_val_score(final_model, x_train, y_train, cv=5, scoring='neg_mean_squared_error')\n",
    "rmse = np.sqrt(-scores).mean()\n",
    "print(f\"Mean cross-validation RMSE on the training set: {rmse}\")\n",
    "\n",
    "# Evaluate on the test set\n",
    "pred2 = final_model.predict(x_test)\n",
    "mse = mean_squared_error(y_test, pred2)\n",
    "rmse = np.sqrt(mse)\n",
    "print(f'RMSE on the test set: {rmse}')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "de4c784f",
   "metadata": {},
   "source": [
    "### (4) How the three versions relate\n",
    "Versions 1 and 2 produce the same model and therefore the same result on the `test set`: both train with default parameters. Version 2 only adds cross-validation, which is an extra way of evaluating the model.\n",
    "\n",
    "Version 3 adds a grid-search step, which finds the best model among several parameter combinations."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "51b4d364",
   "metadata": {},
   "source": [
    "## Importing the various regression algorithms\n",
    "Below are the imports for the different regression models."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "91439e9a",
   "metadata": {},
   "outputs": [],
   "source": [
    "#### Linear regression\n",
    "from sklearn.linear_model import LinearRegression\n",
    "\n",
    "#### Bayesian ridge regression\n",
    "from sklearn.linear_model import BayesianRidge\n",
    "\n",
    "#### Support vector machine regression\n",
    "from sklearn.svm import SVR\n",
    "\n",
    "#### Decision tree regression\n",
    "from sklearn.tree import DecisionTreeRegressor\n",
    "\n",
    "#### Random forest regression\n",
    "from sklearn.ensemble import RandomForestRegressor\n",
    "\n",
    "#### Gradient boosting regression\n",
    "from sklearn.ensemble import GradientBoostingRegressor\n",
    "\n",
    "#### AdaBoost regression\n",
    "from sklearn.ensemble import AdaBoostRegressor"
   ]
  },
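  {
   "cell_type": "markdown",
   "id": "regressor-loop-md",
   "metadata": {},
   "source": [
    "Any of the regressors imported above can be dropped into the templates interchangeably. The cell below is a minimal sketch of comparing several of them in one loop; the synthetic dataset from `make_regression` is an assumption for illustration, not part of the original template."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "regressor-loop-code",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Compare several regressors by 5-fold cross-validated RMSE on a synthetic dataset\n",
    "import numpy as np\n",
    "from sklearn.datasets import make_regression\n",
    "from sklearn.model_selection import cross_val_score\n",
    "from sklearn.linear_model import LinearRegression, BayesianRidge\n",
    "from sklearn.tree import DecisionTreeRegressor\n",
    "from sklearn.ensemble import RandomForestRegressor\n",
    "\n",
    "# Synthetic regression data (an assumption; swap in your own x and y)\n",
    "x_demo, y_demo = make_regression(n_samples=200, n_features=4, noise=10.0, random_state=0)\n",
    "\n",
    "for model in [LinearRegression(), BayesianRidge(),\n",
    "              DecisionTreeRegressor(random_state=0),\n",
    "              RandomForestRegressor(random_state=0)]:\n",
    "    scores = cross_val_score(model, x_demo, y_demo, cv=5, scoring='neg_mean_squared_error')\n",
    "    rmse = np.sqrt(-scores).mean()\n",
    "    print(f'{type(model).__name__}: mean CV RMSE = {rmse:.2f}')"
   ]
  },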
  {
   "cell_type": "markdown",
   "id": "a9cf756f",
   "metadata": {},
   "source": [
    "## Importing the various classification algorithms\n",
    "Below are the imports for the different classification models."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "00459832",
   "metadata": {},
   "outputs": [],
   "source": [
    "#### Logistic regression\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "\n",
    "#### Support vector machine classification\n",
    "from sklearn.svm import SVC\n",
    "\n",
    "#### Decision tree classification\n",
    "from sklearn.tree import DecisionTreeClassifier\n",
    "\n",
    "#### Random forest classification\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "\n",
    "#### Gradient boosting classification\n",
    "from sklearn.ensemble import GradientBoostingClassifier\n",
    "\n",
    "#### AdaBoost classification\n",
    "from sklearn.ensemble import AdaBoostClassifier"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
