{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "36c6826d-17f6-4819-98c8-0373c8143dde",
   "metadata": {},
   "source": [
    "# Scikit-Learn Machine Learning: Quick Start and Practice\n",
    "\n",
    "**Learning goals for Scikit-Learn**:\n",
    "- Data preprocessing\n",
    "  - Standardization (StandardScaler), normalization (MinMaxScaler), and encoding categorical variables (LabelEncoder, OneHotEncoder).\n",
    "  - Handling missing values (SimpleImputer) and feature selection (SelectKBest, RFE, etc.).\n",
    "- Common algorithms\n",
    "  - Regression: linear regression, logistic regression, Lasso, Ridge, etc.\n",
    "  - Classification: support vector machines (SVM), decision trees, random forests, k-nearest neighbors, naive Bayes, etc.\n",
    "  - Clustering: k-means, DBSCAN, hierarchical clustering, etc.\n",
    "  - Dimensionality reduction: principal component analysis (PCA), t-SNE, etc.\n",
    "- Model training and evaluation\n",
    "  - Master the fit, predict, and score methods.\n",
    "  - Understand cross-validation (cross_val_score, KFold) and hyperparameter tuning (GridSearchCV, RandomizedSearchCV).\n",
    "  - Know the evaluation metrics: accuracy, precision, recall, and F1 score (classification); mean squared error and R² score (regression).\n",
    "- Pipelines and workflows\n",
    "  - Use Pipeline to combine data preprocessing and model training.\n",
    "  - Master ColumnTransformer for mixed-type data.\n",
    "- Model persistence\n",
    "  - Save and load models with joblib or pickle."
   ]
  },
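  {
   "cell_type": "markdown",
   "id": "a1f2b3c4-0d5e-4f6a-8b7c-persistdemo1",
   "metadata": {},
   "source": [
    "### Model Persistence Sketch\n",
    "A minimal sketch of the joblib save/load workflow listed above. The filename `model.joblib` is only an illustrative choice; after loading, the restored model predicts exactly like the original."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b2c3d4e5-1e6f-4a7b-9c8d-persistdemo2",
   "metadata": {},
   "outputs": [],
   "source": [
    "import joblib\n",
    "from sklearn.datasets import make_classification\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "\n",
    "# Train a small model so there is something to persist\n",
    "X, y = make_classification(n_samples=100, n_features=4, random_state=42)\n",
    "model = LogisticRegression().fit(X, y)\n",
    "\n",
    "# Save the fitted model to disk, then load it back\n",
    "joblib.dump(model, 'model.joblib')\n",
    "restored = joblib.load('model.joblib')\n",
    "\n",
    "# The restored model predicts identically to the original\n",
    "print((restored.predict(X) == model.predict(X)).all())"
   ]
  },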
  {
   "cell_type": "markdown",
   "id": "3aade844-a22f-4c19-96eb-a479c318639a",
   "metadata": {},
   "source": [
    "### API Consistency Example\n",
    "- Estimators share a uniform interface: fit(), transform(), predict(), fit_transform(), and so on.\n",
    "- Estimators include models, data transformers, Pipelines, and more."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9c2c194a-5a80-40d5-9809-224ace862817",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import make_classification, make_regression\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.ensemble import RandomForestRegressor\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.metrics import accuracy_score, r2_score\n",
    "\n",
    "# 1. Classification task\n",
    "# Generate a classification dataset\n",
    "X_clf, y_clf = make_classification(n_samples=100, n_features=4, n_classes=2, random_state=42)\n",
    "X_clf_train, X_clf_test, y_clf_train, y_clf_test = train_test_split(X_clf, y_clf, test_size=0.2, random_state=42)\n",
    "\n",
    "# Classification model: LogisticRegression\n",
    "clf = LogisticRegression()\n",
    "clf.fit(X_clf_train, y_clf_train)  # uniform fit interface\n",
    "y_clf_pred = clf.predict(X_clf_test)  # uniform predict interface\n",
    "clf_score = accuracy_score(y_clf_test, y_clf_pred)  # evaluate\n",
    "print(\"Classification Accuracy:\", clf_score)\n",
    "\n",
    "# 2. Regression task\n",
    "# Generate a regression dataset\n",
    "X_reg, y_reg = make_regression(n_samples=100, n_features=4, random_state=42)\n",
    "X_reg_train, X_reg_test, y_reg_train, y_reg_test = train_test_split(X_reg, y_reg, test_size=0.2, random_state=42)\n",
    "\n",
    "# Regression model: RandomForestRegressor\n",
    "reg = RandomForestRegressor(random_state=42)\n",
    "reg.fit(X_reg_train, y_reg_train)  # uniform fit interface\n",
    "y_reg_pred = reg.predict(X_reg_test)  # uniform predict interface\n",
    "reg_score = r2_score(y_reg_test, y_reg_pred)  # evaluate\n",
    "print(\"Regression R² Score:\", reg_score)\n",
    "\n",
    "# 3. Data transformation task\n",
    "# Standardize the classification dataset\n",
    "scaler = StandardScaler()\n",
    "scaler.fit(X_clf_train)  # uniform fit interface\n",
    "X_clf_train_scaled = scaler.transform(X_clf_train)  # uniform transform interface\n",
    "X_clf_test_scaled = scaler.transform(X_clf_test)\n",
    "print(\"Scaled Data (first sample):\", X_clf_test_scaled[0])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "27b79c6e-a64e-4ee5-b57a-d58be0950753",
   "metadata": {},
   "source": [
    "## Scikit-Learn Basic Practice"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "34865d68-9111-4087-858e-59c9cde93ba0",
   "metadata": {},
   "source": [
    "### Data Transformation Example\n",
    "- The effect of standardizing data with StandardScaler()."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2d3cd2e6-4dfe-4605-a317-63e62343ed21",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "\n",
    "# Create a simple labeled classification dataset\n",
    "# Features: 2-D (height, weight); labels: 0 (negative) or 1 (positive)\n",
    "X = np.array([[170, 60], [165, 55], [180, 80], [175, 70], [160, 50], [185, 85]])\n",
    "y = np.array([0, 0, 1, 1, 0, 1])\n",
    "\n",
    "# Initialize the StandardScaler\n",
    "scaler = StandardScaler()\n",
    "\n",
    "# Standardize the feature data\n",
    "X_transformed = scaler.fit_transform(X)\n",
    "\n",
    "# Print the original and transformed data\n",
    "print(\"Original features:\")\n",
    "print(X)\n",
    "print(\"\\nOriginal labels:\")\n",
    "print(y)\n",
    "print(\"\\nTransformed features:\")\n",
    "print(X_transformed)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8c7d3dce-ad1a-412b-9b0a-c978bd2e512f",
   "metadata": {},
   "source": [
    "### Imputer Data Transformation\n",
    "- SimpleImputer is a data-preprocessing component in the sklearn.impute module for handling missing values (NaN or None) in a dataset.\n",
    "- Its main job is to fill missing values automatically according to a chosen strategy, producing a complete dataset ready for model training.\n",
    "- Supported fill strategies:\n",
    "  - Mean (strategy='mean'): replace missing values in a column with that column's mean (numeric data only).\n",
    "  - Median (strategy='median'): replace missing values with the column median (numeric data; robust to outliers).\n",
    "  - Mode (strategy='most_frequent'): replace missing values with the most frequent value in the column (numeric or categorical data).\n",
    "  - Constant (strategy='constant'): replace missing values with a user-specified value set via the fill_value parameter (numeric or categorical data)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "2d67532f-2715-411b-b5b2-cfe0489b3fd5",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.impute import SimpleImputer\n",
    "\n",
    "# Create a simple labeled classification dataset\n",
    "# Features: 2-D (height, weight); labels: 0 (negative) or 1 (positive)\n",
    "# Use np.nan (not None) for the missing value so the array stays numeric\n",
    "X = np.array([[170, 60], [165, 55], [180, np.nan], [175, 70], [160, 50], [185, 85]])\n",
    "y = np.array([0, 0, 1, 1, 0, 1])\n",
    "\n",
    "# Fill missing values with the column mean via SimpleImputer\n",
    "imputer = SimpleImputer(strategy='mean')\n",
    "X_imputed = imputer.fit_transform(X)\n",
    "\n",
    "# Standardize with StandardScaler\n",
    "scaler = StandardScaler()\n",
    "X_transformed = scaler.fit_transform(X_imputed)\n",
    "\n",
    "# Print the original and transformed data\n",
    "print(\"Original features:\")\n",
    "print(X)\n",
    "print(\"\\nOriginal labels:\")\n",
    "print(y)\n",
    "print(\"\\nFeatures after imputation:\")\n",
    "print(X_imputed)\n",
    "print(\"\\nFeatures after standardization:\")\n",
    "print(X_transformed)"
   ]
  },
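  {
   "cell_type": "markdown",
   "id": "c3d4e5f6-2f7a-4b8c-9d0e-imputerdemo1",
   "metadata": {},
   "source": [
    "The cell above uses only strategy='mean'; as a minimal sketch, the other strategies listed earlier can be swapped in the same way (the tiny array here is just illustrative data):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4e5f6a7-3a8b-4c9d-0e1f-imputerdemo2",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.impute import SimpleImputer\n",
    "\n",
    "X = np.array([[170.0, 60.0], [165.0, np.nan], [180.0, 60.0], [np.nan, 70.0]])\n",
    "\n",
    "# Median fill: robust to outliers\n",
    "print(SimpleImputer(strategy='median').fit_transform(X))\n",
    "\n",
    "# Most-frequent fill: also works for categorical data\n",
    "print(SimpleImputer(strategy='most_frequent').fit_transform(X))\n",
    "\n",
    "# Constant fill: user-specified value via fill_value\n",
    "print(SimpleImputer(strategy='constant', fill_value=0).fit_transform(X))"
   ]
  },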
  {
   "cell_type": "markdown",
   "id": "c0369574-f6d3-4d5b-989a-84da1a3b0a56",
   "metadata": {},
   "source": [
    "### Train a Classification Model (Logistic Regression) on the Data Above\n",
    "\n",
    "This example uses LogisticRegression. Despite the word \"regression\" in its name, it is actually a classification algorithm, used mainly for binary classification (and extensible to multiclass). Building on linear regression, it maps the linear output through a sigmoid function to a probability in (0, 1), which is then used for classification.\n",
    "\n",
    "**How it works**\n",
    "\n",
    "1. Linear combination: the input features $X$ are combined linearly into a score $z$:\n",
    "\n",
    "   $z=w_0+w_1x_1+w_2x_2+...+w_nx_n$\n",
    "\n",
    "   where $w$ are the weights, $x$ the features, and $w_0$ the intercept.\n",
    "2. Sigmoid function (S-curve): feeding $z$ into the sigmoid converts it to a probability $P(y=1|X)$ between 0 and 1:\n",
    "\n",
    "   $P(y=1|X)=\\frac{1}{1+e^{-z}}$\n",
    "3. Decision rule: typically, if $P(y=1|X)>0.5$ the model predicts the positive class; otherwise the negative class.\n",
    "4. Loss function: logistic regression is usually trained with log loss (cross-entropy loss), which measures the gap between predictions and true labels; optimizers such as gradient descent minimize it to find the best weights $w$.\n",
    "\n",
    "**Common parameters**\n",
    "- penalty: regularization type ('l1', 'l2', 'elasticnet', 'none').\n",
    "- C: inverse of regularization strength; smaller C means stronger regularization.\n",
    "- solver: optimization algorithm ('liblinear', 'lbfgs', 'newton-cg', 'sag', 'saga').\n",
    "- max_iter: maximum number of iterations."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3c24b965-e3e0-471b-b6bb-c30378efca3b",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.impute import SimpleImputer\n",
    "from sklearn.linear_model import LogisticRegression  # import LogisticRegression\n",
    "\n",
    "# Create a simple labeled classification dataset\n",
    "# Features: 2-D (height, weight); labels: 0 (negative) or 1 (positive)\n",
    "X = np.array([[170, 60], [165, 55], [180, np.nan], [175, 70], [160, 50], [185, 85]])\n",
    "y = np.array([0, 0, 1, 1, 0, 1])\n",
    "\n",
    "# Fill missing values with the column mean via SimpleImputer\n",
    "imputer = SimpleImputer(strategy='mean')\n",
    "X_imputed = imputer.fit_transform(X)\n",
    "\n",
    "# Standardize with StandardScaler\n",
    "scaler = StandardScaler()\n",
    "X_transformed = scaler.fit_transform(X_imputed)\n",
    "\n",
    "# Initialize the LogisticRegression model\n",
    "model = LogisticRegression(random_state=42)\n",
    "\n",
    "# Train the model\n",
    "model.fit(X_transformed, y)\n",
    "\n",
    "# Create two dedicated test samples and apply the same transformation\n",
    "X_test = np.array([[172, 55], [178, 75]])\n",
    "X_test_scaled = scaler.transform(X_test)\n",
    "\n",
    "# Predict on the test samples\n",
    "y_test_pred = model.predict(X_test_scaled)\n",
    "\n",
    "# Print results\n",
    "print(\"\\nTest sample features:\")\n",
    "print(X_test)\n",
    "print(\"\\nStandardized test sample features:\")\n",
    "print(X_test_scaled)\n",
    "print(\"\\nPredicted test labels:\")\n",
    "print(y_test_pred)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "24525e79-1a28-407f-b150-05324a0ed8cc",
   "metadata": {},
   "source": [
    "### Another Classifier: SVC\n",
    "\n",
    "The Support Vector Machine (SVM) is a powerful and flexible algorithm for classification (and regression). For classification it is called the Support Vector Classifier (SVC). The core idea of SVC is to find an optimal hyperplane that separates the classes while maximizing the distance from the hyperplane to the nearest data points (the support vectors); this distance is called the margin.\n",
    "\n",
    "**Common parameters**\n",
    "- C: penalty parameter; larger C penalizes misclassification more heavily.\n",
    "- kernel: kernel type ('linear', 'poly', 'rbf', 'sigmoid', 'precomputed').\n",
    "- degree: degree of the polynomial kernel.\n",
    "- gamma: kernel coefficient for the RBF, poly, and sigmoid kernels. 'scale' (default) means 1/(n_features * X.var()); 'auto' means 1/n_features.\n",
    "- probability: whether to enable probability estimates (adds computational cost)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "985e6317-e9c5-479c-b17b-5a9448f404b5",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.impute import SimpleImputer\n",
    "from sklearn.svm import SVC  # import SVC\n",
    "\n",
    "# Create a simple labeled classification dataset\n",
    "# Features: 2-D (height, weight); labels: 0 (negative) or 1 (positive)\n",
    "X = np.array([[170, 60], [165, 55], [180, np.nan], [175, 70], [160, 50], [185, 85]])\n",
    "y = np.array([0, 0, 1, 1, 0, 1])\n",
    "\n",
    "# Fill missing values with the column mean via SimpleImputer\n",
    "imputer = SimpleImputer(strategy='mean')\n",
    "X_imputed = imputer.fit_transform(X)\n",
    "\n",
    "# Standardize with StandardScaler\n",
    "scaler = StandardScaler()\n",
    "X_transformed = scaler.fit_transform(X_imputed)\n",
    "\n",
    "# Initialize the SVC model\n",
    "model = SVC(kernel='rbf', C=1.0, gamma='scale', random_state=42, probability=True)\n",
    "\n",
    "# Train the model\n",
    "model.fit(X_transformed, y)\n",
    "\n",
    "# Create two dedicated test samples and apply the same transformation\n",
    "X_test = np.array([[172, 55], [178, 75]])\n",
    "X_test_scaled = scaler.transform(X_test)\n",
    "\n",
    "# Predict on the test samples\n",
    "y_test_pred = model.predict(X_test_scaled)\n",
    "\n",
    "# Print results\n",
    "print(\"\\nPredicted test labels:\")\n",
    "print(y_test_pred)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ec68133a-f5e9-4947-bcc5-cbea42eefe13",
   "metadata": {},
   "source": [
    "### Machine Learning on a Larger Version of the Classification Dataset\n",
    "Extend the earlier classification dataset to 200 samples and train and evaluate a scikit-learn LogisticRegression model, splitting the data into 90% training and 10% test sets.\n",
    "- Generate an extended classification dataset (200 samples) with height (160-190 cm) and weight (50-90 kg) as features; labels follow the rule height + weight > 250 → 1, else 0, with random noise added.\n",
    "- Print the first 5 rows of training and test features, the labels, the standardized features, the predicted labels, the training and test accuracy, and the test-set classification report.\n",
    "\n",
    "#### Evaluation Metrics\n",
    "In machine learning, evaluation metrics are the standards by which model performance is judged. They provide a quantitative way to understand how a model will behave on new data. Put simply, evaluation metrics are the scoring rules for how \"good\" a model is.\n",
    "\n",
    "**Why compute evaluation metrics?**\n",
    "\n",
    "After training, we normally evaluate a model on a test set that played no part in training and collect its metrics. This matters because:\n",
    "- Quantifying performance: metrics turn abstract model quality into concrete numbers, e.g. 85% accuracy or an F1 score of 0.78, enabling objective comparison across models and parameter settings.\n",
    "- Guiding model selection and tuning: without metrics we cannot tell whether the current model is good enough or which model suits the task. Metrics reveal over- or underfitting and guide feature engineering, model selection, and hyperparameter tuning.\n",
    "- Avoiding overfitting: a model that is perfect on the training set but poor on the test set is overfitting. Test-set metrics expose this and safeguard generalization, i.e. the ability to handle unseen data.\n",
    "- Informing business decisions: different scenarios weight performance differently. In disease diagnosis we may care most about recall (miss no patients), while in spam filtering we may care most about precision (never flag legitimate mail). Metrics align model performance with business goals.\n",
    "- Communication and collaboration: metrics give data scientists, engineers, and business stakeholders a shared, standardized language for a model's abilities and limits.\n",
    "\n",
    "**Common metrics for classification models**\n",
    "1. Confusion Matrix\n",
    "   The confusion matrix is a table that visualizes predicted classes against true classes. It contains four basic counts:\n",
    "   - True Positive (TP): truly positive, predicted positive.\n",
    "   - True Negative (TN): truly negative, predicted negative.\n",
    "   - False Positive (FP): truly negative, predicted positive (Type I error).\n",
    "   - False Negative (FN): truly positive, predicted negative (Type II error).\n",
    "2. Accuracy\n",
    "   Accuracy is the most intuitive metric: the fraction of samples predicted correctly.\n",
    "\n",
    "   $Accuracy=\\frac{TP+TN}{TP+FP+TN+FN}$\n",
    "   - Pros: easy to understand and compute.\n",
    "   - Cons: misleading under class imbalance. In a dataset that is 95% negative, a model that always predicts negative scores 95% accuracy yet is clearly useless.\n",
    "3. Precision\n",
    "   Precision measures, among samples predicted positive, the fraction that are truly positive.\n",
    "   \n",
    "   $Precision=\\frac{TP}{TP+FP}$\n",
    "   - When to use: when false positives are costly.\n",
    "   - Examples: spam filtering (do not mark legitimate mail as spam) or medical screening (do not misdiagnose the healthy as ill).\n",
    "4. Recall / Sensitivity\n",
    "   Recall measures, among all truly positive samples, the fraction the model correctly predicts as positive.\n",
    "   \n",
    "   $Recall=\\frac{TP}{TP+FN}$\n",
    "   - When to use: when false negatives are costly.\n",
    "   - Examples: disease diagnosis (do not miss patients) or fraud detection (do not miss real fraud).\n",
    "5. F1-Score\n",
    "   The F1 score is the harmonic mean of precision and recall, a single metric that balances both.\n",
    "   \n",
    "   $F1=2 \\times \\frac{Precision \\times Recall}{Precision+Recall}$\n",
    "   - Pros: more informative than accuracy on imbalanced data because it accounts for both false positives and false negatives.\n",
    "   - When to use: when you want precision and recall to both be high and in balance.\n",
    "\n",
    "**Classification metric functions**\n",
    "- accuracy_score: computes accuracy from the true test labels and predictions, e.g. accuracy_score(y_true, y_pred).\n",
    "- precision_score, recall_score, f1_score: compute precision, recall, and F1 respectively from y_true and y_pred.\n",
    "- classification_report: produces a text report with precision, recall, F1, and support, from y_true and y_pred.\n",
    "- confusion_matrix: computes the confusion matrix from y_true and y_pred.\n",
    "- roc_curve and roc_auc_score: compute ROC curve points and the ROC AUC from the true labels and predicted probabilities (y_true, y_proba).\n",
    "  - roc_curve: takes true labels and predicted probabilities (or decision-function values) and returns the true-positive-rate (TPR) and false-positive-rate (FPR) sequences for plotting the ROC curve.\n",
    "  - roc_auc_score: computes the area under the ROC curve (AUC), which measures how well the model separates the classes; closer to 1 is better, and 0.5 means random guessing."
   ]
  },
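  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e5f6a7b8-4b9c-4d0e-1f2a-metricsdemo1",
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch of the metric functions listed above, on hand-made labels\n",
    "from sklearn.metrics import (accuracy_score, precision_score, recall_score,\n",
    "                             f1_score, confusion_matrix)\n",
    "\n",
    "y_true = [1, 0, 1, 1, 0, 1, 0, 0]\n",
    "y_pred = [1, 0, 1, 0, 0, 1, 1, 0]\n",
    "\n",
    "# Rows are true classes, columns are predicted classes: [[TN, FP], [FN, TP]]\n",
    "print(confusion_matrix(y_true, y_pred))\n",
    "print(\"accuracy: \", accuracy_score(y_true, y_pred))\n",
    "print(\"precision:\", precision_score(y_true, y_pred))\n",
    "print(\"recall:   \", recall_score(y_true, y_pred))\n",
    "print(\"f1:       \", f1_score(y_true, y_pred))"
   ]
  },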
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "4eb0ffab-a78a-4a05-a43d-3d98f616c3d5",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.impute import SimpleImputer\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.metrics import accuracy_score, classification_report\n",
    "\n",
    "# Set the random seed for reproducibility\n",
    "np.random.seed(42)\n",
    "\n",
    "# Generate an extended classification dataset (200 samples)\n",
    "# Features: height (160-190 cm), weight (50-90 kg); labels: 0 or 1\n",
    "n_samples = 200\n",
    "height = np.random.uniform(160, 190, n_samples)\n",
    "weight = np.random.uniform(50, 90, n_samples)\n",
    "X = np.column_stack((height, weight))\n",
    "# Generate labels from a simple rule (height + weight > 250 -> 1, else 0) plus noise\n",
    "y = np.where(height + weight > 250, 1, 0)\n",
    "# Add some random noise by flipping about 10% of the labels\n",
    "noise = np.random.choice([0, 1], size=n_samples, p=[0.9, 0.1])\n",
    "y = np.logical_xor(y, noise).astype(int)\n",
    "\n",
    "# Split into training and test sets (90% train, 10% test)\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)\n",
    "\n",
    "# Initialize a StandardScaler and standardize the features\n",
    "scaler = StandardScaler()\n",
    "X_train_scaled = scaler.fit_transform(X_train)\n",
    "X_test_scaled = scaler.transform(X_test)\n",
    "\n",
    "# Initialize the LogisticRegression model\n",
    "model = LogisticRegression(random_state=42)\n",
    "\n",
    "# Train the model\n",
    "model.fit(X_train_scaled, y_train)\n",
    "\n",
    "# Predict on the training and test sets\n",
    "y_train_pred = model.predict(X_train_scaled)\n",
    "y_test_pred = model.predict(X_test_scaled)\n",
    "\n",
    "# Compute accuracy\n",
    "train_accuracy = accuracy_score(y_train, y_train_pred)\n",
    "test_accuracy = accuracy_score(y_test, y_test_pred)\n",
    "\n",
    "# Print results\n",
    "print(\"Training features (first 5 rows):\")\n",
    "print(X_train[:5])\n",
    "print(\"\\nTest features (first 5 rows):\")\n",
    "print(X_test[:5])\n",
    "print(\"\\nTraining labels (first 5):\")\n",
    "print(y_train[:5])\n",
    "print(\"\\nTest labels (first 5):\")\n",
    "print(y_test[:5])\n",
    "print(\"\\nStandardized training features (first 5 rows):\")\n",
    "print(X_train_scaled[:5])\n",
    "print(\"\\nStandardized test features (first 5 rows):\")\n",
    "print(X_test_scaled[:5])\n",
    "print(\"\\nPredicted training labels (first 5):\")\n",
    "print(y_train_pred[:5])\n",
    "print(\"\\nPredicted test labels (first 5):\")\n",
    "print(y_test_pred[:5])\n",
    "print(\"\\nTraining accuracy:\")\n",
    "print(f\"{train_accuracy:.2f}\")\n",
    "print(\"\\nTest accuracy:\")\n",
    "print(f\"{test_accuracy:.2f}\")\n",
    "print(\"\\nTest classification report:\")\n",
    "print(classification_report(y_test, y_test_pred))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "895f031a-7147-432e-9d5e-1d08a80bab77",
   "metadata": {},
   "source": [
    "### Cross-Validation Example\n",
    "\n",
    "**How it works**\n",
    "1. Split the original dataset into K (typically 5 or 10) disjoint subsets of similar size, called folds.\n",
    "2. In each iteration, hold out one fold as the validation set and merge the remaining K−1 folds into the training set.\n",
    "3. Train the model on the training set.\n",
    "4. Evaluate the trained model on the validation set and record the metrics (accuracy, F1 score, etc.).\n",
    "5. Repeat steps 2-4 K times so that each fold serves as the validation set exactly once.\n",
    "6. Finally, aggregate the K evaluation results (mean, and often median or standard deviation) as the estimate of model performance.\n",
    "\n",
    "**Common cross-validation schemes**\n",
    "1. K-Fold Cross-Validation\n",
    "2. Stratified K-Fold Cross-Validation\n",
    "3. Leave-One-Out Cross-Validation (LOOCV)\n",
    "4. Shuffle-Split Cross-Validation\n",
    "5. GroupKFold\n",
    "\n",
    "**Cross-validation functions**\n",
    "1. cross_val_score\n",
    "2. cross_validate"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bc6a21f9-52ca-4867-80ef-37426db09f17",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.impute import SimpleImputer\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.model_selection import train_test_split, cross_val_score\n",
    "from sklearn.metrics import accuracy_score, classification_report\n",
    "\n",
    "# Set the random seed for reproducibility\n",
    "np.random.seed(42)\n",
    "\n",
    "# Generate an extended classification dataset (200 samples)\n",
    "# Features: height (160-190 cm), weight (50-90 kg); labels: 0 or 1\n",
    "n_samples = 200\n",
    "height = np.random.uniform(160, 190, n_samples)\n",
    "weight = np.random.uniform(50, 90, n_samples)\n",
    "X = np.column_stack((height, weight))\n",
    "\n",
    "# Randomly set about 3% of the values to NaN\n",
    "mask = np.random.random(X.shape) < 0.03\n",
    "X[mask] = np.nan\n",
    "\n",
    "# Generate labels from a simple rule (height + weight > 250 -> 1, else 0) plus noise\n",
    "y = np.where(np.nansum(X, axis=1) > 250, 1, 0)\n",
    "# Add some random noise by flipping about 10% of the labels\n",
    "noise = np.random.choice([0, 1], size=n_samples, p=[0.9, 0.1])\n",
    "y = np.logical_xor(y, noise).astype(int)\n",
    "\n",
    "# Split into training and test sets (90% train, 10% test)\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)\n",
    "\n",
    "# Fill missing values with the column mean via SimpleImputer\n",
    "imputer = SimpleImputer(strategy='mean')\n",
    "X_train_imputed = imputer.fit_transform(X_train)\n",
    "X_test_imputed = imputer.transform(X_test)\n",
    "\n",
    "# Standardize with StandardScaler\n",
    "scaler = StandardScaler()\n",
    "X_train_scaled = scaler.fit_transform(X_train_imputed)\n",
    "X_test_scaled = scaler.transform(X_test_imputed)\n",
    "\n",
    "# Initialize the LogisticRegression model\n",
    "model = LogisticRegression(random_state=42)\n",
    "\n",
    "# Run 5-fold cross-validation\n",
    "cv_scores = cross_val_score(model, X_train_scaled, y_train, cv=5, scoring='accuracy')\n",
    "\n",
    "# Train the model\n",
    "model.fit(X_train_scaled, y_train)\n",
    "\n",
    "# Predict on the training and test sets\n",
    "y_train_pred = model.predict(X_train_scaled)\n",
    "y_test_pred = model.predict(X_test_scaled)\n",
    "\n",
    "# Compute accuracy\n",
    "train_accuracy = accuracy_score(y_train, y_train_pred)\n",
    "test_accuracy = accuracy_score(y_test, y_test_pred)\n",
    "\n",
    "# Print results\n",
    "print(\"\\nCross-validation accuracy (per fold):\")\n",
    "print(cv_scores)\n",
    "print(\"\\nMean cross-validation accuracy:\")\n",
    "print(f\"{np.mean(cv_scores):.2f} ± {np.std(cv_scores):.2f}\")\n",
    "print(\"\\nTraining accuracy:\")\n",
    "print(f\"{train_accuracy:.2f}\")\n",
    "print(\"\\nTest accuracy:\")\n",
    "print(f\"{test_accuracy:.2f}\")\n",
    "print(\"\\nTest classification report:\")\n",
    "print(classification_report(y_test, y_test_pred))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5dfdd76c-24cb-44b1-9594-d3b165349403",
   "metadata": {},
   "source": [
    "### Model Selection\n",
    "Model selection is more than picking an algorithm (say, logistic regression vs. random forest); it is about finding the model and configuration (hyperparameters) best suited to the dataset and business goal. It is an iterative process that typically involves:\n",
    "1. Choosing the model type: deciding which machine-learning algorithm to use (e.g., for classification, a support vector machine, a decision tree, or an ensemble).\n",
    "2. Choosing hyperparameters: every model has hyperparameters that are set before training rather than learned from data, e.g. the number of trees in a random forest (n_estimators), or the regularization parameter C and the kernel in a support vector machine. Choosing good hyperparameters is critical to performance.\n",
    "3. Evaluating performance: quantifying the model with appropriate metrics (accuracy, F1 score, AUC, etc.).\n",
    "4. Avoiding over- and underfitting: the goal is a model that fits the training data well and also generalizes to new data. Overfitting (great on training data, poor on new data) and underfitting (poor on both) are the common pitfalls to avoid.\n",
    "\n",
    "**Model selection mechanisms**\n",
    "1. Grid Search: try every combination in a predefined hyperparameter space, evaluate each with cross-validation, and keep the best-performing combination.\n",
    "   - Pros: simple and exhaustive; guaranteed to find the best combination within the given space.\n",
    "   - Cons: computationally expensive; the search space grows exponentially with the number of parameters and their ranges.\n",
    "2. Randomized Search: with limited compute, usually finds a good combination faster than grid search because it does not enumerate every combination.\n",
    "   - Pros: more efficient; better suited to large parameter spaces.\n",
    "   - Cons: no guarantee of the global optimum, though it usually finds a very good approximation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "39c4b608-8e50-4597-b4c2-08c39b220876",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.impute import SimpleImputer\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.model_selection import train_test_split, GridSearchCV\n",
    "from sklearn.metrics import accuracy_score, classification_report\n",
    "\n",
    "# Set the random seed for reproducibility\n",
    "np.random.seed(42)\n",
    "\n",
    "# Generate an extended classification dataset (200 samples)\n",
    "# Features: height (160-190 cm), weight (50-90 kg); labels: 0 or 1\n",
    "n_samples = 200\n",
    "height = np.random.uniform(160, 190, n_samples)\n",
    "weight = np.random.uniform(50, 90, n_samples)\n",
    "X = np.column_stack((height, weight))\n",
    "\n",
    "# Randomly set about 3% of the values to NaN\n",
    "mask = np.random.random(X.shape) < 0.03\n",
    "X[mask] = np.nan\n",
    "\n",
    "# Generate labels from a simple rule (height + weight > 250 -> 1, else 0) plus noise\n",
    "y = np.where(np.nansum(X, axis=1) > 250, 1, 0)\n",
    "# Add some random noise by flipping about 10% of the labels\n",
    "noise = np.random.choice([0, 1], size=n_samples, p=[0.9, 0.1])\n",
    "y = np.logical_xor(y, noise).astype(int)\n",
    "\n",
    "# Split into training and test sets (90% train, 10% test)\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)\n",
    "\n",
    "# Fill missing values with the column mean via SimpleImputer\n",
    "imputer = SimpleImputer(strategy='mean')\n",
    "X_train_imputed = imputer.fit_transform(X_train)\n",
    "X_test_imputed = imputer.transform(X_test)\n",
    "\n",
    "# Standardize with StandardScaler\n",
    "scaler = StandardScaler()\n",
    "X_train_scaled = scaler.fit_transform(X_train_imputed)\n",
    "X_test_scaled = scaler.transform(X_test_imputed)\n",
    "\n",
    "# Initialize the LogisticRegression model\n",
    "model = LogisticRegression(random_state=42)\n",
    "\n",
    "# Define the hyperparameter grid for GridSearchCV\n",
    "param_grid = {\n",
    "    'C': [0.1, 1.0, 10.0],  # regularization parameter\n",
    "    'solver': ['lbfgs', 'liblinear']  # optimizer\n",
    "}\n",
    "\n",
    "# Initialize GridSearchCV\n",
    "grid_search = GridSearchCV(\n",
    "    model,\n",
    "    param_grid,\n",
    "    cv=5,  # 5-fold cross-validation\n",
    "    scoring='accuracy',\n",
    "    n_jobs=-1  # use all available CPU cores\n",
    ")\n",
    "\n",
    "# Fit GridSearchCV\n",
    "grid_search.fit(X_train_scaled, y_train)\n",
    "\n",
    "# Get the best model\n",
    "best_model = grid_search.best_estimator_\n",
    "\n",
    "# Predict on the training and test sets\n",
    "y_train_pred = best_model.predict(X_train_scaled)\n",
    "y_test_pred = best_model.predict(X_test_scaled)\n",
    "\n",
    "# Compute accuracy\n",
    "train_accuracy = accuracy_score(y_train, y_train_pred)\n",
    "test_accuracy = accuracy_score(y_test, y_test_pred)\n",
    "\n",
    "# Print results\n",
    "print(\"\\nTest classification report:\")\n",
    "print(classification_report(y_test, y_test_pred))\n",
    "print(\"\\nBest hyperparameters:\")\n",
    "print(grid_search.best_params_)\n",
    "print(\"\\nBest cross-validation accuracy:\")\n",
    "print(f\"{grid_search.best_score_:.2f}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6c5144a2-2a91-4481-8c6b-14c74201c24f",
   "metadata": {},
   "source": [
    "### Building a Training Pipeline with Pipeline\n",
    "\n",
    "In machine learning we usually run a series of preprocessing steps (feature scaling, missing-value imputation, feature selection, etc.) before feeding the data to a model. Doing this by hand is tedious and error-prone, especially under cross-validation, where every step must be applied correctly to the training and test sets. Scikit-learn's Pipeline solves this by chaining preprocessing and model-training steps into a single workflow.\n",
    "- Pipeline is a scikit-learn utility that composes a sequence of transformers followed by a final estimator.\n",
    "  - Transformers handle data transformation, e.g. StandardScaler (standardization), MinMaxScaler (normalization), SimpleImputer (missing-value imputation), OneHotEncoder (one-hot encoding). They all implement fit and transform.\n",
    "  - Estimators are usually machine-learning models, e.g. LogisticRegression, RandomForestClassifier, SVC. They implement fit and predict (or predict_proba, decision_function, etc.).\n",
    "- A Pipeline wraps the entire workflow into a single estimator object:\n",
    "  - Calling fit on the Pipeline passes the data through each transformer in order, then trains the final estimator.\n",
    "  - Calling predict or transform processes the data through the same sequence.\n",
    "\n",
    "Creating a Pipeline is simple: pass a list of tuples, each containing a custom name and the corresponding transformer/estimator object.\n",
    "\n",
    "**Example**:\n",
    "- Generate a dataset like the earlier ones, but randomly blank out about 3% of the values; these NaNs call for a dedicated SimpleImputer.\n",
    "- The Pipeline therefore contains, in order:\n",
    "  - SimpleImputer (fill NaNs with the mean, strategy='mean').\n",
    "  - StandardScaler (standardize the features).\n",
    "  - LogisticRegression (the classifier, random_state=42).\n",
    "- The Pipeline's fit method then handles imputation, standardization, and model training automatically."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "df1b4253-b6fa-4e23-b695-5f199ba101b5",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.impute import SimpleImputer\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.metrics import accuracy_score, classification_report\n",
    "from sklearn.pipeline import Pipeline\n",
    "\n",
    "# Set the random seed for reproducibility\n",
    "np.random.seed(42)\n",
    "\n",
    "# Generate an extended classification dataset (200 samples)\n",
    "# Features: height (160-190 cm), weight (50-90 kg); labels: 0 or 1\n",
    "n_samples = 200\n",
    "height = np.random.uniform(160, 190, n_samples)\n",
    "weight = np.random.uniform(50, 90, n_samples)\n",
    "X = np.column_stack((height, weight))\n",
    "\n",
    "# Randomly set about 3% of the values to NaN\n",
    "mask = np.random.random(X.shape) < 0.03\n",
    "X[mask] = np.nan\n",
    "\n",
    "# Generate labels from a simple rule (height + weight > 250 -> 1, else 0) plus noise\n",
    "# NaNs are ignored in the sum\n",
    "y = np.where(np.nansum(X, axis=1) > 250, 1, 0)\n",
    "# Add some random noise by flipping about 10% of the labels\n",
    "noise = np.random.choice([0, 1], size=n_samples, p=[0.9, 0.1])\n",
    "y = np.logical_xor(y, noise).astype(int)\n",
    "\n",
    "# Split into training and test sets (90% train, 10% test)\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)\n",
    "\n",
    "# Build a Pipeline combining SimpleImputer, StandardScaler, and LogisticRegression\n",
    "pipeline = Pipeline([\n",
    "    ('imputer', SimpleImputer(strategy='mean')),\n",
    "    ('scaler', StandardScaler()),\n",
    "    ('classifier', LogisticRegression(random_state=42))\n",
    "])\n",
    "\n",
    "# Train the Pipeline\n",
    "pipeline.fit(X_train, y_train)\n",
    "\n",
    "# Predict on the training and test sets\n",
    "y_train_pred = pipeline.predict(X_train)\n",
    "y_test_pred = pipeline.predict(X_test)\n",
    "\n",
    "# Compute accuracy\n",
    "train_accuracy = accuracy_score(y_train, y_train_pred)\n",
    "test_accuracy = accuracy_score(y_test, y_test_pred)\n",
    "\n",
    "# Print results\n",
    "print(\"\\nTraining accuracy:\")\n",
    "print(f\"{train_accuracy:.2f}\")\n",
    "print(\"\\nTest accuracy:\")\n",
    "print(f\"{test_accuracy:.2f}\")\n",
    "print(\"\\nTest classification report:\")\n",
    "print(classification_report(y_test, y_test_pred))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4ca83dad-f4a7-42ee-ad16-6aa5df728a52",
   "metadata": {},
   "source": [
    "### Comprehensive Example\n",
    "A comprehensive example combining cross-validation, model selection, Pipelines, and more."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6144b088-fa23-40e9-a69a-3a9aec00bc68",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.impute import SimpleImputer\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.model_selection import train_test_split, GridSearchCV, cross_val_score\n",
    "from sklearn.metrics import accuracy_score, classification_report, roc_curve, auc\n",
    "from sklearn.pipeline import Pipeline\n",
    "\n",
    "# Set the random seed for reproducibility\n",
    "np.random.seed(42)\n",
    "\n",
    "# Generate an extended classification dataset (200 samples)\n",
    "# Features: height (160-190 cm), weight (50-90 kg); labels: 0 or 1\n",
    "n_samples = 200\n",
    "height = np.random.uniform(160, 190, n_samples)\n",
    "weight = np.random.uniform(50, 90, n_samples)\n",
    "X = np.column_stack((height, weight))\n",
    "\n",
    "# Randomly set about 3% of the values to NaN\n",
    "mask = np.random.random(X.shape) < 0.03\n",
    "X[mask] = np.nan\n",
    "\n",
    "# Generate labels from a simple rule (height + weight > 250 -> 1, else 0) plus noise\n",
    "y = np.where(np.nansum(X, axis=1) > 250, 1, 0)\n",
    "# Add some random noise by flipping about 10% of the labels\n",
    "noise = np.random.choice([0, 1], size=n_samples, p=[0.9, 0.1])\n",
    "y = np.logical_xor(y, noise).astype(int)\n",
    "\n",
    "# Split into training and test sets (90% train, 10% test)\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.1, random_state=42)\n",
    "\n",
    "# Build a Pipeline combining SimpleImputer, StandardScaler, and LogisticRegression\n",
    "pipeline = Pipeline([\n",
    "    ('imputer', SimpleImputer(strategy='mean')),\n",
    "    ('scaler', StandardScaler()),\n",
    "    ('classifier', LogisticRegression(random_state=42))\n",
    "])\n",
    "\n",
    "# Define the hyperparameter grid for GridSearchCV\n",
    "param_grid = {\n",
    "    'classifier__C': [0.1, 1.0, 10.0],  # regularization parameter\n",
    "    'classifier__solver': ['lbfgs', 'liblinear']  # optimizer\n",
    "}\n",
    "\n",
    "# Initialize GridSearchCV\n",
    "grid_search = GridSearchCV(\n",
    "    pipeline,\n",
    "    param_grid,\n",
    "    cv=5,  # 5-fold cross-validation\n",
    "    scoring='accuracy',\n",
    "    n_jobs=-1  # use all available CPU cores\n",
    ")\n",
    "\n",
    "# Fit GridSearchCV\n",
    "grid_search.fit(X_train, y_train)\n",
    "\n",
    "# Get the best model\n",
    "best_model = grid_search.best_estimator_\n",
    "\n",
    "# Run an additional cross-validation evaluation with cross_val_score\n",
    "cv_scores = cross_val_score(best_model, X_train, y_train, cv=5, scoring='accuracy')\n",
    "\n",
    "# Predict on the training and test sets\n",
    "y_train_pred = best_model.predict(X_train)\n",
    "y_test_pred = best_model.predict(X_test)\n",
    "\n",
    "# Compute accuracy\n",
    "train_accuracy = accuracy_score(y_train, y_train_pred)\n",
    "test_accuracy = accuracy_score(y_test, y_test_pred)\n",
    "\n",
    "# Compute the test-set ROC curve and AUC\n",
    "y_test_score = best_model.predict_proba(X_test)[:, 1]  # probability of the positive class\n",
    "fpr, tpr, _ = roc_curve(y_test, y_test_score)\n",
    "roc_auc = auc(fpr, tpr)\n",
    "\n",
    "# Print results\n",
    "print(\"\\nCross-validation accuracy (per fold):\")\n",
    "print(cv_scores)\n",
    "print(\"\\nMean cross-validation accuracy:\")\n",
    "print(f\"{np.mean(cv_scores):.2f} ± {np.std(cv_scores):.2f}\")\n",
    "print(\"\\nTraining accuracy:\")\n",
    "print(f\"{train_accuracy:.2f}\")\n",
    "print(\"\\nTest accuracy:\")\n",
    "print(f\"{test_accuracy:.2f}\")\n",
    "print(\"\\nTest classification report:\")\n",
    "print(classification_report(y_test, y_test_pred))\n",
    "print(\"\\nBest hyperparameters:\")\n",
    "print(grid_search.best_params_)\n",
    "print(\"\\nGridSearchCV best cross-validation accuracy:\")\n",
    "print(f\"{grid_search.best_score_:.2f}\")\n",
    "print(\"\\nTest AUC:\")\n",
    "print(f\"{roc_auc:.2f}\")\n",
    "\n",
    "# Plot the ROC curve\n",
    "plt.figure(figsize=(8, 6))\n",
    "plt.plot(fpr, tpr, color='darkorange', lw=2, label=f'ROC curve (AUC = {roc_auc:.2f})')\n",
    "plt.plot([0, 1], [0, 1], color='navy', lw=2, linestyle='--')\n",
    "plt.xlim([0.0, 1.0])\n",
    "plt.ylim([0.0, 1.05])\n",
    "plt.xlabel('False Positive Rate')\n",
    "plt.ylabel('True Positive Rate')\n",
    "plt.title('Receiver Operating Characteristic (ROC) Curve')\n",
    "plt.legend(loc=\"lower right\")\n",
    "plt.grid(True)\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b58c765f-e6ee-4818-ada1-031124735399",
   "metadata": {},
   "source": [
    "## 回归任务示例"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "37846fa0-c31d-494d-8e54-f86733526b26",
   "metadata": {},
   "source": [
    "### 数据集准备\n",
    "- 包含5000个样本，6个数值特征（房屋面积、卧室数量、距离市中心、建筑年份、地块面积、社区质量）和1个分类特征（社区类型：城市、郊区、农村）。\n",
    "- 目标变量是房价（price），范围在10万到200万之间，属于回归类型的任务。\n",
    "- 部分特征（如房屋面积、距离市中心、地块面积）包含约5%的缺失值，适合练习缺失值处理。\n",
    "\n",
    "**使用方法**\n",
    "- 运行代码生成housing_dataset.csv数据集\n",
    "- 使用Pandas加载数据集，结合scikit-learn进行数据处理、模型训练和评估。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b26176d9-cdc1-40b1-96a1-40d6c669d324",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "from sklearn.datasets import make_regression\n",
    "from sklearn.model_selection import train_test_split\n",
    "import random\n",
    "\n",
    "# 设置随机种子以确保可复现性\n",
    "np.random.seed(42)\n",
    "random.seed(42)\n",
    "\n",
    "# 生成基础回归数据集\n",
    "n_samples = 5000\n",
    "n_features = 6\n",
    "X, y = make_regression(n_samples=n_samples, n_features=n_features, noise=0.1, random_state=42)\n",
    "\n",
    "# 创建特征名称，模拟房价数据集\n",
    "feature_names = ['house_size_sqm', 'num_bedrooms', 'distance_to_city_km', \n",
    "                 'year_built', 'lot_size_sqm', 'neighborhood_quality']\n",
    "\n",
    "# 转换为 DataFrame\n",
    "df = pd.DataFrame(X, columns=feature_names)\n",
    "df['price'] = y\n",
    "\n",
    "# 模拟现实数据中的复杂性\n",
    "# 1. 将数值特征转换为更真实的范围\n",
    "df['house_size_sqm'] = df['house_size_sqm'] * 50 + 150  # 房屋面积：50-250平米\n",
    "df['num_bedrooms'] = np.round(df['num_bedrooms'] * 2 + 3).clip(1, 8)  # 卧室数量：1-8\n",
    "df['distance_to_city_km'] = df['distance_to_city_km'] * 10 + 20  # 距离市中心：0-40公里\n",
    "df['year_built'] = np.round(df['year_built'] * 20 + 1980).clip(1950, 2023)  # 建筑年份：1950-2023\n",
    "df['lot_size_sqm'] = df['lot_size_sqm'] * 100 + 300  # 地块面积：100-500平米\n",
    "df['neighborhood_quality'] = np.round(df['neighborhood_quality'] * 2 + 5).clip(1, 10)  # 社区质量：1-10\n",
    "\n",
    "# 2. 添加分类特征\n",
    "neighborhood_types = ['Urban', 'Suburban', 'Rural']\n",
    "df['neighborhood_type'] = [random.choice(neighborhood_types) for _ in range(n_samples)]\n",
    "\n",
    "# 3. 引入缺失值\n",
    "for col in ['house_size_sqm', 'distance_to_city_km', 'lot_size_sqm']:\n",
    "    mask = np.random.random(n_samples) < 0.05  # 5% 的缺失值\n",
    "    df.loc[mask, col] = np.nan\n",
    "\n",
    "# 4. 添加噪声到目标变量（房价）\n",
    "df['price'] = df['price'] * 1000 + 500000  # 缩放并平移房价，使其中心约为50万\n",
    "df['price'] = df['price'].clip(100000, 2000000)  # 限制房价范围\n",
    "\n",
    "# 显示数据的基本结构\n",
    "print(df.head())\n",
    "\n",
    "# 保存数据集为 CSV 文件\n",
    "df.to_csv('housing_dataset.csv', index=False)\n",
    "\n",
    "# 分离特征和目标变量\n",
    "X = df.drop('price', axis=1)\n",
    "y = df['price']\n",
    "\n",
    "# 分割训练集和测试集\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n",
    "\n",
    "print(\"\\n数据集已生成并保存为 'housing_dataset.csv'\")\n",
    "print(f\"训练集大小: {X_train.shape[0]} 样本, 测试集大小: {X_test.shape[0]} 样本\")\n",
    "print(\"特征:\", X.columns.tolist())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "deb428c6-035e-4e85-9d77-f206bfbdf790",
   "metadata": {},
   "source": [
    "### 基于上面的数据集训练线性回归模型\n",
    "\n",
    "#### Custom Transformer\n",
    "\n",
    "**Pipeline的层次结构**\n",
    "- numeric_transformer和categorical_transformer是两个子Pipeline，分别处理数值和分类特征。\n",
    "- ColumnTransformer整合这两个子Pipeline，分别应用于指定的特征列。\n",
    "- 顶层的pipeline将预处理（preprocessor）和模型（LinearRegression）结合。\n",
    "\n",
    "**数据预处理Pipeline**\n",
    "\n",
    "- 数值特征预处理Pipeline：使用SimpleImputer（均值填补缺失值）和StandardScaler（标准化）。\n",
    "  - 缺失值填补：使用SimpleImputer(strategy='mean') 将数值特征（house_size_sqm, num_bedrooms 等）中的缺失值替换为该特征的均值。\n",
    "  - 标准化：使用StandardScaler将数值特征缩放到均值为0、标准差为1的范围，以确保不同量纲的特征对模型的影响一致。\n",
    "- 分类特征预处理Pipeline：对neighborhood_type使用SimpleImputer（用 'missing' 填补缺失值）和OneHotEncoder（独热编码）。\n",
    "  - 缺失值填补：使用SimpleImputer(strategy='constant', fill_value='missing') 将分类特征（neighborhood_type）中的缺失值替换为'missing'。\n",
    "  - 独热编码：将分类特征转换为数值形式的独热编码，以便模型能够处理。\n",
    "- 整合处理（ColumnTransformer）\n",
    "  - 使用ColumnTransformer将数值特征和分类特征的处理步骤分别应用于对应的特征列，确保不同类型的特征得到适当处理。\n",
    "\n",
    "**关于Custom Transformer**\n",
    "\n",
    "Custom Transformer是一个用户定义的类，用于执行特定的数据转换操作。它遵循scikit-learn的转换器接口，能够像内置的StandardScaler、OneHotEncoder或SimpleImputer一样，集成到scikit-learn的工作流中。主要用于以下场景：\n",
    "- 自定义特征工程：例如，创建新特征（如特征组合、数学变换）、提取复杂特征（如从日期提取年份或星期几）。\n",
    "- 特殊数据清洗：处理特定类型的缺失值、异常值，或者应用特定领域的规则。\n",
    "- 复杂转换逻辑：实现 scikit-learn 内置转换器无法直接处理的逻辑（如基于业务规则的特征转换）。\n",
    "\n",
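    "下面是一个遵循上述转换器接口的Custom Transformer最小示例（仅为示意性草图；其中的 HouseAgeAdder 类名和 house_age 派生特征均为本文假设，并非scikit-learn内置）：\n",
    "\n",
    "```python\n",
    "from sklearn.base import BaseEstimator, TransformerMixin\n",
    "\n",
    "class HouseAgeAdder(BaseEstimator, TransformerMixin):\n",
    "    # 假设的自定义转换器：由 year_built 派生 house_age 特征\n",
    "    def __init__(self, current_year=2025):\n",
    "        self.current_year = current_year\n",
    "\n",
    "    def fit(self, X, y=None):\n",
    "        return self  # 无需学习任何参数\n",
    "\n",
    "    def transform(self, X):\n",
    "        X = X.copy()  # 不修改原始数据\n",
    "        X['house_age'] = self.current_year - X['year_built']\n",
    "        return X\n",
    "```\n",
    "\n",
    "它可以像内置转换器一样作为Pipeline中的一步使用，例如 Pipeline(steps=[('age', HouseAgeAdder()), ...])。\n",
    "\n",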
    "**图形解读**\n",
    "- 散点分布\n",
    "  - 如果散点紧密围绕红色虚线（y=x），说明模型预测准确，偏差小。\n",
    "  - 如果散点偏离虚线较多（如在高房价或低房价区域），说明模型在这些区域预测偏差较大。\n",
    "- 训练集vs测试集\n",
    "  - 训练集散点通常更集中（因模型在训练数据上优化），而测试集可能显示更大偏差。\n",
    "  - 如果测试集散点分散明显，可能表示模型过拟合或欠拟合。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b52cbaad-9dec-4e0c-993c-91fc319bd4c1",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.preprocessing import StandardScaler, OneHotEncoder\n",
    "from sklearn.impute import SimpleImputer\n",
    "from sklearn.compose import ColumnTransformer\n",
    "from sklearn.pipeline import Pipeline\n",
    "from sklearn.linear_model import LinearRegression\n",
    "from sklearn.metrics import mean_squared_error, r2_score\n",
    "\n",
    "# 设置随机种子以确保可复现性\n",
    "np.random.seed(42)\n",
    "\n",
    "# 加载数据集\n",
    "df = pd.read_csv('housing_dataset.csv')\n",
    "\n",
    "# 分离特征和目标变量\n",
    "X = df.drop('price', axis=1)\n",
    "y = df['price']\n",
    "\n",
    "# 划分训练集和测试集\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n",
    "\n",
    "# 定义数值特征和分类特征，以便后续进行不同的预处理机制\n",
    "numeric_features = ['house_size_sqm', 'num_bedrooms', 'distance_to_city_km', \n",
    "                    'year_built', 'lot_size_sqm', 'neighborhood_quality']\n",
    "categorical_features = ['neighborhood_type']\n",
    "\n",
    "# 创建预处理专用于处理数值特征的Pipeline\n",
    "numeric_transformer = Pipeline(steps=[\n",
    "    ('imputer', SimpleImputer(strategy='mean')),  # 填补缺失值\n",
    "    ('scaler', StandardScaler())  # 标准化\n",
    "])\n",
    "\n",
    "# 创建预处理专用于处理分类特征的Pipeline\n",
    "categorical_transformer = Pipeline(steps=[\n",
    "    ('imputer', SimpleImputer(strategy='constant', fill_value='missing')),  # 填补缺失值\n",
    "    ('onehot', OneHotEncoder(handle_unknown='ignore'))  # 独热编码\n",
    "])\n",
    "\n",
    "# 使用ColumnTransformer整合数值和分类特征处理功能于一体，成为一个独立的Transformer\n",
    "preprocessor = ColumnTransformer(\n",
    "    transformers=[\n",
    "        ('num', numeric_transformer, numeric_features),\n",
    "        ('cat', categorical_transformer, categorical_features)\n",
    "    ])\n",
    "\n",
    "# 创建完整的Pipeline（预处理 + 模型）\n",
    "pipeline = Pipeline(steps=[\n",
    "    ('preprocessor', preprocessor),\n",
    "    ('regressor', LinearRegression())\n",
    "])\n",
    "\n",
    "# 训练模型\n",
    "pipeline.fit(X_train, y_train)\n",
    "\n",
    "# 预测\n",
    "y_pred_train = pipeline.predict(X_train)\n",
    "y_pred_test = pipeline.predict(X_test)\n",
    "\n",
    "# 评估模型\n",
    "train_mse = mean_squared_error(y_train, y_pred_train)\n",
    "test_mse = mean_squared_error(y_test, y_pred_test)\n",
    "train_r2 = r2_score(y_train, y_pred_train)\n",
    "test_r2 = r2_score(y_test, y_pred_test)\n",
    "\n",
    "# 输出结果\n",
    "print(\"训练集MSE:\", train_mse)\n",
    "print(\"测试集MSE:\", test_mse)\n",
    "print(\"训练集R²分数:\", train_r2)\n",
    "print(\"测试集R²分数:\", test_r2)\n",
    "\n",
    "# 绘制预测值与真实值的散点图\n",
    "plt.figure(figsize=(10, 5))\n",
    "\n",
    "# 训练集散点图\n",
    "plt.subplot(1, 2, 1)\n",
    "plt.scatter(y_train, y_pred_train, alpha=0.5, color='blue', label='预测值')\n",
    "plt.plot([y_train.min(), y_train.max()], [y_train.min(), y_train.max()], 'r--', label='理想线')\n",
    "plt.xlabel('真实值')\n",
    "plt.ylabel('预测值')\n",
    "plt.title('训练集：预测值 vs 真实值')\n",
    "plt.legend()\n",
    "\n",
    "# 测试集散点图\n",
    "plt.subplot(1, 2, 2)\n",
    "plt.scatter(y_test, y_pred_test, alpha=0.5, color='green', label='预测值')\n",
    "plt.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], 'r--', label='理想线')\n",
    "plt.xlabel('真实值')\n",
    "plt.ylabel('预测值')\n",
    "plt.title('测试集：预测值 vs 真实值')\n",
    "plt.legend()\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "bf293d56-b626-445d-a7b0-9b539773d02e",
   "metadata": {},
   "source": [
    "#### （可选）显示前面代码中预处理后的数据"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "81fdccc8-2e85-4ea8-9933-95f82f15f3cc",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 获取预处理后的数据\n",
    "# 1. 从 Pipeline 中提取 preprocessor\n",
    "preprocessor = pipeline.named_steps['preprocessor']\n",
    "\n",
    "# 2. 转换训练集和测试集\n",
    "X_train_transformed = preprocessor.transform(X_train)\n",
    "X_test_transformed = preprocessor.transform(X_test)\n",
    "\n",
    "# 3. 获取特征名称\n",
    "# 数值特征保持原名称，分类特征通过 OneHotEncoder 获取新名称\n",
    "numeric_feature_names = numeric_features\n",
    "categorical_feature_names = preprocessor.named_transformers_['cat'].named_steps['onehot'].get_feature_names_out(categorical_features)\n",
    "all_feature_names = np.concatenate([numeric_feature_names, categorical_feature_names])\n",
    "\n",
    "# 4. 将转换后的数据转换为 DataFrame\n",
    "X_train_transformed_df = pd.DataFrame(X_train_transformed, columns=all_feature_names, index=X_train.index)\n",
    "X_test_transformed_df = pd.DataFrame(X_test_transformed, columns=all_feature_names, index=X_test.index)\n",
    "\n",
    "# 5. 显示预处理后的数据（前几行）\n",
    "print(\"\\n预处理后的训练集数据（前5行）：\")\n",
    "print(X_train_transformed_df.head())\n",
    "print(\"\\n预处理后的测试集数据（前5行）：\")\n",
    "print(X_test_transformed_df.head())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a5007447-d541-443c-bec2-e1f973ec8d58",
   "metadata": {},
   "source": [
    "### 回归模型的评估指标\n",
    "\n",
    "回归模型的评估指标用于量化模型预测值与真实值之间的误差，衡量模型的性能和预测能力。以下是回归模型常用的评估指标，涵盖其定义、计算公式、应用场景、优缺点以及在 scikit-learn 中的实现方法。\n",
    "1. 均方误差（Mean Squared Error, MSE）\n",
    "   - 计算预测值与真实值之间差的平方的平均值，强调较大的误差（通过平方放大）。\n",
    "   - 公式： $\\text{MSE} = \\frac{1}{n} \\sum_{i=1}^{n} (y_i - \\hat{y}_i)^2$\n",
    "\n",
    "     其中，$y_i$ 是真实值，$\\hat{y}_i$ 是预测值，$n$ 是样本数。\n",
    "   - 特点：\n",
    "     - 优点：对大误差敏感，易于优化（平方项可导），广泛用于回归任务。\n",
    "     - 缺点：对异常值敏感（大误差被平方放大）；量纲与目标变量的平方相同，难以直观解释。\n",
    "   - 应用场景：常用于优化模型（如线性回归的损失函数），适合需要关注大误差的场景。\n",
    "   - scikit-learn实现：mean_squared_error(y_test, y_pred_test)\n",
    "2. 均方根误差（Root Mean Squared Error, RMSE）\n",
    "   - RMSE是MSE的平方根，将误差恢复到与目标变量相同的量纲，便于解释。\n",
    "   - 公式：$\\text{RMSE} = \\sqrt{\\frac{1}{n} \\sum_{i=1}^{n} (y_i - \\hat{y}_i)^2} = \\sqrt{\\text{MSE}}$\n",
    "   - 特点：\n",
    "     - 优点：与目标变量量纲相同，易于解释（如房价预测中，RMSE 单位是美元）；对大误差敏感。\n",
    "     - 缺点：仍对异常值敏感；通常不直接作为优化目标（最小化MSE与最小化RMSE等价）。\n",
    "   - 应用场景：适合需要直观误差量纲的场景，如房价预测（RMSE 表示预测房价的平均误差）。\n",
    "   - scikit-learn实现：np.sqrt(mean_squared_error(y_test, y_pred_test))，较新版本也可直接使用root_mean_squared_error(y_test, y_pred_test)\n",
    "3. 平均绝对误差（Mean Absolute Error, MAE）\n",
    "   - 计算预测值与真实值之间差的绝对值的平均值。\n",
    "   - 公式：$\\text{MAE} = \\frac{1}{n} \\sum_{i=1}^{n} |y_i - \\hat{y}_i|$\n",
    "   - 特点\n",
    "     - 优点：对异常值不敏感（绝对值不放大误差）；量纲与目标变量相同，易于解释。\n",
    "     - 缺点：对所有误差一视同仁，可能忽略大误差的影响；不可导，优化较困难。\n",
    "   - 应用场景：适合异常值较多的数据集，或需要平均误差直观解释的场景。\n",
    "   - scikit-learn实现：mean_absolute_error(y_test, y_pred_test)\n",
    "4. R² 分数（R-squared, Coefficient of Determination）\n",
    "   - 衡量模型解释目标变量方差的比例，表示模型拟合的好坏。\n",
    "   - 公式：$R^2 = 1 - \\frac{\\sum_{i=1}^{n} (y_i - \\hat{y}_i)^2}{\\sum_{i=1}^{n} (y_i - \\bar{y})^2}$\n",
    "\n",
    "     其中，$\\bar{y}$ 是真实值的均值，分子是残差平方和，分母是总平方和。\n",
    "   - 特点\n",
    "     - 优点：无量纲，范围通常在 [0, 1]（可能为负，若模型比均值模型差）；易于比较模型。\n",
    "     - 缺点：对复杂模型可能高估（需结合调整后的 R²）；对非线性关系解释力有限。\n",
    "   - 应用场景：评估模型整体拟合效果，常用于比较不同模型。\n",
    "   - scikit-learn实现：r2_score(y_test, y_pred_test)\n",
    "\n",
    "**MAE和RMSE**\n",
    "\n",
    "均方根误差（RMSE）和平均绝对误差（MAE）在可解释性上确实有相似之处，它们都与原始数据的单位相同，这使得它们比均方误差（MSE）更容易理解。例如，如果模型预测的是房价，那么RMSE和MAE的值都可以直接解释为“平均预测偏差了多少万元”。但它们反映的“平均误差”的侧重点有所不同：\n",
    "- MAE 更好地代表了典型预测误差的平均大小。如果你的目标是让模型的平均预测误差最小，并且对所有误差都一视同仁，那么MAE可能更合适。\n",
    "- RMSE 则更强调大误差的影响。如果你的模型中出现大的预测误差会带来更高的成本或更严重的后果（例如，在金融预测中，大的错误可能导致巨大损失），那么RMSE将更好地反映这些“糟糕”的预测。\n",
    "\n",
    "另一方面，尽管它们都具有这种“单位一致性”带来的可解释性，但 RMSE和MAE的值可能会有较大的差别，这主要源于它们计算误差的方式不同：\n",
    "1. 对误差的惩罚机制不同\n",
    "   - MAE: 它计算的是预测值与真实值之间绝对差值的平均。这意味着每个误差对MAE的贡献是线性的。例如，一个误差为10的预测，对MAE的贡献就是10；一个误差为20的预测，对MAE的贡献就是20。\n",
    "   - RMSE: 它计算的是预测值与真实值之间差值平方的平均，然后再开方。因为误差被平方了，较大的误差会被赋予更大的权重。例如，一个误差为10的预测，其平方是100；一个误差为20的预测，其平方是400。显然，20的误差对RMSE的贡献（通过其平方值）是10的误差的4倍，而不是2倍。\n",
    "2. 对异常值的敏感度不同\n",
    "   - 由于RMSE会将误差平方，这意味着它对数据集中的异常值（outliers）更加敏感。即使只有一个或少数几个很大的预测误差，它们在平方后会显著增大RMSE的值，从而更能体现出这些大误差的存在。\n",
    "   - MAE由于是线性处理误差，所以对异常值相对不那么敏感。它提供的是一个所有误差大小的平均值，无论误差是大是小，都以同样的“权重”计入总和。\n",
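    "\n",
    "用一个小的数值例子可以直观看出这种差别（示意性草图，误差数据为虚构）：\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "from sklearn.metrics import mean_absolute_error, mean_squared_error\n",
    "\n",
    "y_true = np.array([100.0, 100.0, 100.0, 100.0])\n",
    "y_pred_a = np.array([110.0, 90.0, 110.0, 90.0])    # 四个误差均为10\n",
    "y_pred_b = np.array([100.0, 100.0, 100.0, 140.0])  # 仅一个误差为40的异常预测\n",
    "\n",
    "for name, y_pred in [('均匀误差', y_pred_a), ('单个大误差', y_pred_b)]:\n",
    "    mae = mean_absolute_error(y_true, y_pred)\n",
    "    rmse = np.sqrt(mean_squared_error(y_true, y_pred))\n",
    "    print(f'{name}: MAE={mae:.1f}, RMSE={rmse:.1f}')\n",
    "```\n",
    "\n",
    "两组预测的MAE都是10，但含单个大误差的那组RMSE达到20（均匀误差组的RMSE仍为10），体现了RMSE对大误差的放大作用。\n",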
    "\n",
    "\n",
    "\n",
    "#### 示例\n",
    "下面的指标示例，建立在前一节模型训练示例代码的运行结果之上。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6f46639a-d484-4a11-8713-6297f287a14e",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.metrics import mean_squared_error, mean_absolute_error, r2_score\n",
    "\n",
    "# 计算评估指标\n",
    "mse = mean_squared_error(y_test, y_pred_test)\n",
    "rmse = np.sqrt(mse)\n",
    "mae = mean_absolute_error(y_test, y_pred_test)\n",
    "r2 = r2_score(y_test, y_pred_test)\n",
    "\n",
    "# 输出结果\n",
    "print(\"回归模型评估指标：\")\n",
    "print(f\"MSE: {mse:.2f}\")\n",
    "# RMSE（以美元为单位）反映平均预测误差，例如RMSE=35289.95表示平均误差约3.53万美元。\n",
    "print(f\"RMSE: {rmse:.2f}\")\n",
    "print(f\"MAE: {mae:.2f}\")\n",
    "# R²表示模型解释房价方差的能力，例如 R²=0.94表示解释了94%的方差。\n",
    "print(f\"R² 分数: {r2:.4f}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3ddeaf24-f476-440c-9f04-b73a0d3a76c2",
   "metadata": {},
   "source": [
    "### SVR回归模型\n",
    "支持向量回归（SVR）是支持向量机（Support Vector Machine, SVM）在回归任务上的扩展。与经典的SVM主要用于分类不同，SVR旨在预测连续的数值型输出。\n",
    "\n",
    "SVR的核心思想是寻找一个函数（在特征空间中表现为一个超平面），该函数能尽可能地拟合训练数据，同时满足一个关键条件：它只关注那些超出特定误差范围(通常称为ϵ-不敏感带)的数据点。这意味着SVR不追求完美地拟合所有数据点，而是允许在一定误差范围内存在偏差，以此来提高模型的泛化能力。\n",
    "\n",
    "**核心概念**\n",
    "1. 超平面 (Hyperplane)\n",
    "   - 在SVR中，超平面就是要学习的预测函数。\n",
    "   - 在二维空间中，它就是一条直线；在更高维的空间中，它是一个多维的平面。\n",
    "   - SVR的目标是找到一个“最佳”超平面，使得数据点与该超平面的距离在一定程度上最小化。\n",
    "2. ϵ-不敏感带 (ϵ-Insensitive Tube)\n",
    "   - SVR引入了一个“容忍区”的独特概念，它由超平面及其上下各一个距离为ϵ的边界构成。\n",
    "     - 在$\\epsilon$-不敏感带内的数据点： SVR认为这些点被“很好地”预测了，即使它们不完全落在超平面上，也不会产生任何损失。\n",
    "     - 在$\\epsilon$-不敏感带外的数据点： 只有这些点才会产生损失，并且损失的大小与它们到不敏感带边界的距离成正比。SVR的目标就是最小化这些超出不敏感带的点的损失。\n",
    "3. 支持向量 (Support Vectors)\n",
    "   - 支持向量是SVR模型中最关键的数据点，是指那些落在$\\epsilon$-不敏感带边界上或边界之外的数据点，它们对于定义超平面的位置和方向至关重要。\n",
    "   - 模型一旦训练好，只需要这些支持向量就可以进行预测，而不需要整个训练数据集。这就是SVR具有稀疏性的原因，也是它在处理高维数据时效率高的原因之一。\n",
    "4. 核函数 (Kernel Function)\n",
    "   - SVR和SVM一样，可以通过使用核技巧（Kernel Trick）来处理非线性关系。\n",
    "   - 核函数可以将原始低维输入空间的数据映射到更高维的特征空间，在这个高维空间中，原本非线性可分/可回归的数据点可能变得线性可分/可回归。\n",
    "   - 常见的核函数：\n",
    "     - 线性核 (Linear Kernel)： 适用于处理数据存在线性关系的情况。\n",
    "     - 多项式核 (Polynomial Kernel)： 适用于数据存在多项式关系。\n",
    "     - 径向基函数核 (Radial Basis Function, RBF Kernel / Gaussian Kernel)： 这是最常用的核函数，因为它能够处理复杂的非线性关系，具有很强的泛化能力。\n",
    "     - Sigmoid 核 (Sigmoid Kernel)： 类似于神经网络中的激活函数。\n",
    "\n",
    "**主要超参数**\n",
    "1. C (惩罚参数/Regularization Parameter)：控制着模型对训练误差的惩罚强度，同时平衡了模型的复杂度和对训练数据的拟合程度。\n",
    "   - 对模型的影响： C值越大，对超出不敏感带的误差惩罚更严格，模型越复杂，越容易过拟合；C值越小，对超出不敏感带的误差惩罚较轻，模型越简单，越容易欠拟合。\n",
    "   - 通常取值为正数，如0.1, 1, 10, 100, 1000等。一般建议在对数尺度上进行搜索。\n",
    "2. epsilon (ϵ) (不敏感带宽度 / Epsilon-Insensitive Tube)：定义了SVR模型中的不敏感带的宽度，在这个带内的预测误差不会被模型惩罚，被认为是“足够好”的预测。\n",
    "   - 对模型的影响：epsilon值越小，不敏感带变窄，这意味着模型对小误差也变得敏感，模型也就越复杂，从而越容易过拟合；epsilon值越大，不敏感带变宽，会产生一个更平滑、更简单的模型，对训练数据的噪声不那么敏感，因此模型也越简单，也就越容易欠拟合。\n",
    "   - 取值范围：必须是非负数。通常可以从 0 开始，然后尝试一些小的值，例如 0.1, 0.01, 0.001 等。\n",
    "3. kernel (核函数)：决定了SVR如何将输入数据映射到高维特征空间，从而处理非线性关系。\n",
    "   - 常用选项如下：\n",
    "     - 'linear' (线性核)： 当数据存在线性关系时使用。模型相当于传统的线性回归，但依然保留了SVR的稀疏性和对异常值的鲁棒性。\n",
    "     - 'poly' (多项式核)： 适用于数据存在多项式关系。它通过 degree 参数来控制多项式的阶数。\n",
    "     - 'rbf' (径向基函数核)： 这是最常用且通常表现最好的核函数。它能够处理复杂的非线性关系，具有很强的灵活性。rbf核函数需要额外的gamma参数。\n",
    "     - 'sigmoid' (Sigmoid 核)： 类似于神经网络中的激活函数，但实际应用中不如 rbf 核常用。\n",
    "   - 对模型的影响： 核函数的选择直接决定了SVR处理非线性数据的能力，选择不合适的核函数可能导致模型性能不佳。\n",
    "4. gamma (γ) (核函数参数，尤其是RBF核)：定义了单个训练样本的影响范围，也就是核函数的作用半径。它只在非线性核（如rbf、poly和sigmoid）中有效。\n",
    "   - 以RBF核为例\n",
    "     - gamma值越大，单个训练样本的影响范围就越小，模型会更加关注局部数据。这会导致模型更加复杂，更容易过拟合训练数据，因为它试图精确拟合每个数据点。\n",
    "     - gamma值越小，单个训练样本的影响范围就越大，模型会考虑更广阔的数据区域。这会导致模型更平滑，更简单，不容易过拟合，但如果过小，可能导致欠拟合。\n",
    "   - 取值范围： 通常取值为正数。默认情况下，Scikit-learn中的gamma通常设置为'scale'，即 1/(n_features×X.var())。也可以手动指定数值，例如 0.001, 0.01, 0.1, 1, 10 等。\n",
    "5. degree (核函数参数，尤其是多项式核)：当kernel='poly'时，degree参数指定了多项式核的阶数。\n",
    "   - 它决定了多项式核在将数据映射到高维空间时所使用的特征组合的最高幂次。\n",
    "   - 取值范围： 整数，通常从2或3开始尝试。\n",
    "   - 对模型的影响： degree越高，模型越复杂，可以捕捉更复杂的非线性关系，但也越容易过拟合。\n",
    "\n",
    "#### 示例：基于房价数据训练SVR\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "56bcee92-eee2-437d-af53-566bb742d12c",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.preprocessing import StandardScaler, OneHotEncoder\n",
    "from sklearn.impute import SimpleImputer\n",
    "from sklearn.compose import ColumnTransformer\n",
    "from sklearn.pipeline import Pipeline\n",
    "from sklearn.svm import SVR\n",
    "from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score\n",
    "\n",
    "# 设置随机种子以确保可复现性\n",
    "np.random.seed(42)\n",
    "\n",
    "# 加载数据集\n",
    "df = pd.read_csv('housing_dataset.csv')\n",
    "\n",
    "# 分离特征和目标变量\n",
    "X = df.drop('price', axis=1)\n",
    "y = df['price']\n",
    "\n",
    "# 划分训练集和测试集\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n",
    "\n",
    "# 定义数值特征和分类特征\n",
    "numeric_features = ['house_size_sqm', 'num_bedrooms', 'distance_to_city_km', \n",
    "                    'year_built', 'lot_size_sqm', 'neighborhood_quality']\n",
    "categorical_features = ['neighborhood_type']\n",
    "\n",
    "# 创建预处理 Pipeline\n",
    "numeric_transformer = Pipeline(steps=[\n",
    "    ('imputer', SimpleImputer(strategy='mean')),  # 填补缺失值\n",
    "    ('scaler', StandardScaler())  # 标准化\n",
    "])\n",
    "\n",
    "categorical_transformer = Pipeline(steps=[\n",
    "    ('imputer', SimpleImputer(strategy='constant', fill_value='missing')),  # 填补缺失值\n",
    "    ('onehot', OneHotEncoder(handle_unknown='ignore'))  # 独热编码\n",
    "])\n",
    "\n",
    "# 使用 ColumnTransformer 整合数值和分类特征处理\n",
    "preprocessor = ColumnTransformer(\n",
    "    transformers=[\n",
    "        ('num', numeric_transformer, numeric_features),\n",
    "        ('cat', categorical_transformer, categorical_features)\n",
    "    ])\n",
    "\n",
    "# 创建完整的 Pipeline（预处理 + SVR模型）\n",
    "pipeline = Pipeline(steps=[\n",
    "    ('preprocessor', preprocessor),\n",
    "    ('regressor', SVR(kernel='rbf', C=100, epsilon=0.1))  # 使用RBF核的SVR\n",
    "])\n",
    "\n",
    "# 训练模型\n",
    "pipeline.fit(X_train, y_train)\n",
    "\n",
    "# 预测\n",
    "y_pred_train = pipeline.predict(X_train)\n",
    "y_pred_test = pipeline.predict(X_test)\n",
    "\n",
    "# 计算训练集和测试集的 MAE 和 R²\n",
    "train_mae = mean_absolute_error(y_train, y_pred_train)\n",
    "test_mae = mean_absolute_error(y_test, y_pred_test)\n",
    "train_r2 = r2_score(y_train, y_pred_train)\n",
    "test_r2 = r2_score(y_test, y_pred_test)\n",
    "\n",
    "# 输出评估指标\n",
    "print(\"SVR 回归模型评估指标：\")\n",
    "print(f\"训练集MAE: {train_mae:.2f}\")\n",
    "print(f\"测试集MAE: {test_mae:.2f}\")\n",
    "print(f\"训练集R²分数: {train_r2:.4f}\")\n",
    "print(f\"测试集R²分数: {test_r2:.4f}\")\n",
    "\n",
    "# 绘制预测值与真实值的散点图\n",
    "plt.figure(figsize=(10, 5))\n",
    "\n",
    "# 训练集散点图\n",
    "plt.subplot(1, 2, 1)\n",
    "plt.scatter(y_train, y_pred_train, alpha=0.5, color='blue', label='预测值')\n",
    "plt.plot([y_train.min(), y_train.max()], [y_train.min(), y_train.max()], 'r--', label='理想线')\n",
    "plt.xlabel('真实值')\n",
    "plt.ylabel('预测值')\n",
    "plt.title('训练集：预测值 vs 真实值')\n",
    "plt.legend()\n",
    "\n",
    "# 测试集散点图\n",
    "plt.subplot(1, 2, 2)\n",
    "plt.scatter(y_test, y_pred_test, alpha=0.5, color='green', label='预测值')\n",
    "plt.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], 'r--', label='理想线')\n",
    "plt.xlabel('真实值')\n",
    "plt.ylabel('预测值')\n",
    "plt.title('测试集：预测值 vs 真实值')\n",
    "plt.legend()\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d9c04c58-3ff9-4135-8347-8e61dc5bb924",
   "metadata": {},
   "source": [
    "#### 在SVR上使用模型搜索"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "abb7afd2-a1a0-4b50-b275-95ad6542fbb9",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "from sklearn.model_selection import train_test_split, GridSearchCV\n",
    "from sklearn.preprocessing import StandardScaler, OneHotEncoder\n",
    "from sklearn.impute import SimpleImputer\n",
    "from sklearn.compose import ColumnTransformer\n",
    "from sklearn.pipeline import Pipeline\n",
    "from sklearn.svm import SVR\n",
    "from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score\n",
    "\n",
    "# 设置随机种子以确保可复现性\n",
    "np.random.seed(42)\n",
    "\n",
    "# 加载数据集\n",
    "df = pd.read_csv('housing_dataset.csv')\n",
    "\n",
    "# 分离特征和目标变量\n",
    "X = df.drop('price', axis=1)\n",
    "y = df['price']\n",
    "\n",
    "# 划分训练集和测试集\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n",
    "\n",
    "# 定义数值特征和分类特征\n",
    "numeric_features = ['house_size_sqm', 'num_bedrooms', 'distance_to_city_km', \n",
    "                    'year_built', 'lot_size_sqm', 'neighborhood_quality']\n",
    "categorical_features = ['neighborhood_type']\n",
    "\n",
    "# 创建预处理 Pipeline\n",
    "numeric_transformer = Pipeline(steps=[\n",
    "    ('imputer', SimpleImputer(strategy='mean')),  # 填补缺失值\n",
    "    ('scaler', StandardScaler())  # 标准化\n",
    "])\n",
    "\n",
    "categorical_transformer = Pipeline(steps=[\n",
    "    ('imputer', SimpleImputer(strategy='constant', fill_value='missing')),  # 填补缺失值\n",
    "    ('onehot', OneHotEncoder(handle_unknown='ignore'))  # 独热编码\n",
    "])\n",
    "\n",
    "# 使用 ColumnTransformer 整合数值和分类特征处理\n",
    "preprocessor = ColumnTransformer(\n",
    "    transformers=[\n",
    "        ('num', numeric_transformer, numeric_features),\n",
    "        ('cat', categorical_transformer, categorical_features)\n",
    "    ])\n",
    "\n",
    "# 创建 SVR 模型的 Pipeline\n",
    "pipeline = Pipeline(steps=[\n",
    "    ('preprocessor', preprocessor),\n",
    "    ('regressor', SVR())  # SVR 模型，超参数将在 GridSearchCV 中指定\n",
    "])\n",
    "\n",
    "# 定义 GridSearchCV 的超参数网格\n",
    "param_grid = {\n",
    "    'regressor__kernel': ['rbf', 'linear', 'poly'],  # 核函数\n",
    "    'regressor__C': [10, 100, 1000],        # 正则化参数\n",
    "    'regressor__epsilon': [0.01, 0.1, 1.0]  # ε-不敏感区域宽度\n",
    "}\n",
    "\n",
    "# 使用 GridSearchCV 进行超参数搜索和交叉验证\n",
    "grid_search = GridSearchCV(\n",
    "    pipeline,\n",
    "    param_grid,\n",
    "    cv=5,  # 5 折交叉验证\n",
    "    scoring='neg_mean_squared_error',  # 使用负 MSE 作为评分标准\n",
    "    n_jobs=-1,  # 使用所有可用 CPU 核心\n",
    "    verbose=1\n",
    ")\n",
    "\n",
    "# 训练模型\n",
    "grid_search.fit(X_train, y_train)\n",
    "\n",
    "# 输出最佳参数和得分\n",
    "print(\"最佳参数:\", grid_search.best_params_)\n",
    "print(f\"最佳交叉验证 RMSE: {np.sqrt(-grid_search.best_score_):.2f}\")\n",
    "\n",
    "# 使用最佳模型进行预测\n",
    "best_model = grid_search.best_estimator_\n",
    "y_pred_train = best_model.predict(X_train)\n",
    "y_pred_test = best_model.predict(X_test)\n",
    "\n",
    "# 计算训练集和测试集的 MAE 和 R²\n",
    "train_mae = mean_absolute_error(y_train, y_pred_train)\n",
    "test_mae = mean_absolute_error(y_test, y_pred_test)\n",
    "train_r2 = r2_score(y_train, y_pred_train)\n",
    "test_r2 = r2_score(y_test, y_pred_test)\n",
    "\n",
    "# 输出评估指标\n",
    "print(\"\\n最佳 SVR 模型评估指标：\")\n",
    "print(f\"训练集MAE: {train_mae:.2f}\")\n",
    "print(f\"测试集MAE: {test_mae:.2f}\")\n",
    "print(f\"训练集R²分数: {train_r2:.4f}\")\n",
    "print(f\"测试集R²分数: {test_r2:.4f}\")\n",
    "\n",
    "# 绘制预测值与真实值的散点图\n",
    "plt.figure(figsize=(10, 5))\n",
    "\n",
    "# 训练集散点图\n",
    "plt.subplot(1, 2, 1)\n",
    "plt.scatter(y_train, y_pred_train, alpha=0.5, color='blue', label='预测值')\n",
    "plt.plot([y_train.min(), y_train.max()], [y_train.min(), y_train.max()], 'r--', label='理想线')\n",
    "plt.xlabel('真实值')\n",
    "plt.ylabel('预测值')\n",
    "plt.title('训练集：预测值 vs 真实值')\n",
    "plt.legend()\n",
    "\n",
    "# 测试集散点图\n",
    "plt.subplot(1, 2, 2)\n",
    "plt.scatter(y_test, y_pred_test, alpha=0.5, color='green', label='预测值')\n",
    "plt.plot([y_test.min(), y_test.max()], [y_test.min(), y_test.max()], 'r--', label='理想线')\n",
    "plt.xlabel('真实值')\n",
    "plt.ylabel('预测值')\n",
    "plt.title('测试集：预测值 vs 真实值')\n",
    "plt.legend()\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "59a8ddbd-0bac-4940-aa43-b1c7c29592a0",
   "metadata": {},
   "source": [
    "## 无监督学习"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6ed7758c-850e-4dad-aead-09fdf0169bcb",
   "metadata": {},
   "source": [
    "### 生成数据集\n",
    "\n",
    "如下Python脚本可用于生成包含5000行的合成数据集，模拟服务器性能指标（如CPU使用率、内存使用率、响应时间等），其中约5%的数据点被标记为异常（故障）。数据包括时间戳和多个性能指标，适合后续用于孤立森林算法的异常检测训练。生成的数据集保存于CSV文件中。\n",
    "\n",
    "**数据集包含的特征**\n",
    "- timestamp：时间戳（模拟时间序列数据）\n",
    "- cpu_usage：CPU使用率（百分比，0-100）\n",
    "- memory_usage：内存使用率（百分比，0-100）\n",
    "- response_time：服务响应时间（毫秒）\n",
    "- error_rate：错误率（每秒错误请求数）\n",
    "- is_anomaly：是否为异常（0表示正常，1表示异常）"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "044c6c42-9206-4ec4-b304-ba635651216b",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "from datetime import datetime, timedelta\n",
    "import random\n",
    "\n",
    "# 设置随机种子以确保可重现性\n",
    "np.random.seed(42)\n",
    "random.seed(42)\n",
    "\n",
    "# 数据集大小\n",
    "n_samples = 5000\n",
    "anomaly_ratio = 0.05  # 异常数据比例为5%\n",
    "\n",
    "# 生成时间戳（从固定起始时间开始，间隔1分钟）\n",
    "start_time = datetime(2025, 7, 21, 0, 0)\n",
    "timestamps = [start_time + timedelta(minutes=i) for i in range(n_samples)]\n",
    "\n",
    "# 生成正常数据\n",
    "cpu_usage = np.random.normal(50, 10, n_samples)  # 均值50，标准差10\n",
    "memory_usage = np.random.normal(60, 15, n_samples)  # 均值60，标准差15\n",
    "response_time = np.random.normal(200, 50, n_samples)  # 均值200ms，标准差50ms\n",
    "error_rate = np.random.exponential(0.1, n_samples)  # 错误率，指数分布，均值0.1\n",
    "\n",
    "# 确保数据在合理范围内\n",
    "cpu_usage = np.clip(cpu_usage, 0, 100)\n",
    "memory_usage = np.clip(memory_usage, 0, 100)\n",
    "response_time = np.clip(response_time, 50, 1000)\n",
    "error_rate = np.clip(error_rate, 0, 5)\n",
    "\n",
    "# 初始化异常标签\n",
    "is_anomaly = np.zeros(n_samples, dtype=int)\n",
    "\n",
    "# 随机选择异常点\n",
    "n_anomalies = int(n_samples * anomaly_ratio)\n",
    "anomaly_indices = random.sample(range(n_samples), n_anomalies)\n",
    "\n",
    "# 为异常点设置异常值\n",
    "for idx in anomaly_indices:\n",
    "    # 模拟故障：CPU或内存使用率激增、响应时间大幅增加、错误率升高\n",
    "    cpu_usage[idx] = np.random.uniform(90, 100)  # 异常高的CPU使用率\n",
    "    memory_usage[idx] = np.random.uniform(85, 100)  # 异常高的内存使用率\n",
    "    response_time[idx] = np.random.uniform(500, 2000)  # 异常高的响应时间\n",
    "    error_rate[idx] = np.random.uniform(2, 10)  # 异常高的错误率\n",
    "    is_anomaly[idx] = 1\n",
    "\n",
    "# 创建DataFrame\n",
    "data = {\n",
    "    'timestamp': timestamps,\n",
    "    'cpu_usage': cpu_usage,\n",
    "    'memory_usage': memory_usage,\n",
    "    'response_time': response_time,\n",
    "    'error_rate': error_rate,\n",
    "    'is_anomaly': is_anomaly\n",
    "}\n",
    "df = pd.DataFrame(data)\n",
    "\n",
    "# 保存到CSV文件\n",
    "output_file = 'sre_fault_detection_dataset.csv'\n",
    "df.to_csv(output_file, index=False)\n",
    "\n",
    "print(f\"Dataset with {n_samples} rows generated and saved to {output_file}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ff268e1f-9705-493c-aceb-3d77ad49fd79",
   "metadata": {},
   "source": [
    "### 训练孤立森林模型进行故障检测\n",
    "\n",
    "孤立森林（Isolation Forest）是一种基于树的异常检测算法，它利用随机分割数据的特性，通过构建多棵随机树来隔离数据点，检测异常点（outliers）。它的核心思想是：异常点（outliers）是那些与数据集中大多数点“显著不同”的点，因此它们更容易被“孤立”出来。\n",
    "\n",
    "孤立森林是一种无监督学习算法，但它不属于传统的分类、回归、聚类或降维任务，而是一个独立的机器学习任务：异常检测。\n",
    "\n",
    "**适用场景**\n",
    "\n",
    "- 金融欺诈检测： 识别信用卡欺诈、洗钱行为。\n",
    "- 网络安全： 检测网络入侵、恶意流量、僵尸网络活动。\n",
    "- 工业故障诊断： 识别设备异常运行模式，预测故障。\n",
    "- 健康监测： 发现传感器数据中的异常读数，预警健康问题。\n",
    "- 数据清洗： 识别并移除数据集中的噪声点或错误数据。\n",
    "\n",
    "在Scikit-learn中，孤立森林算法通过sklearn.ensemble.IsolationForest模块实现。常用参数有如下几个：\n",
    "- n_estimators：森林中决策树（iTree）的数量，默认值通常是100。更多的树通常能提供更稳定和更准确的异常分数，因为它们能从更多不同的随机划分中学习。然而，增加树的数量也会增加计算时间和内存消耗。\n",
    "- max_samples：每棵iTree在构建时随机抽取训练样本的数量，默认值为“auto”，意味着它将取 min(256, n_samples)。较小的 max_samples 值可以加快训练速度，但在某些情况下可能会导致信息丢失；较大的值可能包含更多的正常样本，使异常点更难被孤立，从而降低模型敏感度。\n",
    "- contamination：用于在模型训练后设定一个阈值，从而将样本划分为“正常”或“异常”，它表示模型预期数据集中异常值的比例。contamination是一个介于 0 和 0.5 之间的浮点数。\n",
    "- max_features：指定训练每棵iTree时随机抽取的特征数量，默认值为1.0，表示每棵树使用全部特征（每次分裂仍从中随机选取一个特征）。较小的 max_features 可以减少每棵树的计算量，并可能增加模型的多样性（因为每棵树看到的特征组合不同），从而提高鲁棒性。\n",
    "- random_state：用于控制随机性，确保实验的可重复性。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "67e74ffe-253a-4f88-afb1-743c5e8a2910",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "from sklearn.ensemble import IsolationForest\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.metrics import classification_report, confusion_matrix\n",
    "import matplotlib.pyplot as plt\n",
    "import seaborn as sns\n",
    "\n",
    "# 设置随机种子以确保可重现性\n",
    "np.random.seed(42)\n",
    "\n",
    "# 1. 加载数据集\n",
    "df = pd.read_csv('sre_fault_detection_dataset.csv')\n",
    "\n",
    "# 2. 选择特征（排除时间戳和标签）\n",
    "features = ['cpu_usage', 'memory_usage', 'response_time', 'error_rate']\n",
    "X = df[features]\n",
    "y_true = df['is_anomaly']  # 真实标签，仅用于评估\n",
    "\n",
    "# 3. 数据预处理：标准化\n",
    "scaler = StandardScaler()\n",
    "X_scaled = scaler.fit_transform(X)\n",
    "\n",
    "# 4. 训练孤立森林模型\n",
    "iso_forest = IsolationForest(contamination=0.05, random_state=42)\n",
    "iso_forest.fit(X_scaled)\n",
    "\n",
    "# 5. 预测异常（-1表示异常，1表示正常）\n",
    "y_pred = iso_forest.predict(X_scaled)\n",
    "y_pred = np.where(y_pred == -1, 1, 0)  # 转换为0（正常）/1（异常）以匹配标签\n",
    "\n",
    "# 6. 评估模型\n",
    "print(\"Classification Report:\")\n",
    "print(classification_report(y_true, y_pred, target_names=['Normal', 'Anomaly']))\n",
    "print(\"\\nConfusion Matrix:\")\n",
    "print(confusion_matrix(y_true, y_pred))\n",
    "\n",
    "# 7. 可视化结果\n",
    "plt.figure(figsize=(12, 5))\n",
    "\n",
    "# 散点图：CPU使用率 vs 响应时间，标注正常/异常点\n",
    "plt.subplot(1, 2, 1)\n",
    "normal = df[y_pred == 0]\n",
    "anomaly = df[y_pred == 1]\n",
    "plt.scatter(normal['cpu_usage'], normal['response_time'], c='blue', label='Normal', alpha=0.5)\n",
    "plt.scatter(anomaly['cpu_usage'], anomaly['response_time'], c='red', label='Anomaly', alpha=0.5)\n",
    "plt.xlabel('CPU Usage (%)')\n",
    "plt.ylabel('Response Time (ms)')\n",
    "plt.title('Anomaly Detection: CPU Usage vs Response Time')\n",
    "plt.legend()\n",
    "\n",
    "# 异常分数分布直方图\n",
    "plt.subplot(1, 2, 2)\n",
    "anomaly_scores = -iso_forest.score_samples(X_scaled)  # 取负后分数越高表示越异常\n",
    "plt.hist(anomaly_scores[y_true == 0], bins=50, alpha=0.5, label='Normal', color='blue')\n",
    "plt.hist(anomaly_scores[y_true == 1], bins=50, alpha=0.5, label='Anomaly', color='red')\n",
    "plt.xlabel('Anomaly Score')\n",
    "plt.ylabel('Frequency')\n",
    "plt.title('Anomaly Score Distribution')\n",
    "plt.legend()\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "774bb355-e263-444c-bd20-b2f3c5f8b679",
   "metadata": {},
   "source": [
    "### 增强实现版本\n",
    "\n",
    "使用Pipeline、交叉验证、模型选择来增强上面的孤立森林模型训练示例。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "851619aa-5132-426c-9a3e-4c6654391da8",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "from sklearn.ensemble import IsolationForest\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.pipeline import Pipeline\n",
    "from sklearn.model_selection import KFold\n",
    "from sklearn.metrics import classification_report, confusion_matrix\n",
    "import matplotlib.pyplot as plt\n",
    "import seaborn as sns\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "import joblib\n",
    "\n",
    "# 设置随机种子\n",
    "np.random.seed(42)\n",
    "\n",
    "# 1. 加载数据集\n",
    "df = pd.read_csv('sre_fault_detection_dataset.csv')\n",
    "features = ['cpu_usage', 'memory_usage', 'response_time', 'error_rate']\n",
    "X = df[features]\n",
    "y_true = df['is_anomaly']  # 真实标签，仅用于评估\n",
    "\n",
    "# 2. 创建Pipeline\n",
    "pipeline = Pipeline([\n",
    "    ('scaler', StandardScaler()),\n",
    "    ('isolation_forest', IsolationForest(random_state=42))\n",
    "])\n",
    "\n",
    "# 3. 定义参数网格用于模型选择\n",
    "param_grid = {\n",
    "    'isolation_forest__n_estimators': [50, 100, 200],\n",
    "    'isolation_forest__contamination': [0.01, 0.05, 0.1]\n",
    "}\n",
    "\n",
    "# 4. 自定义评分函数（基于异常分数的平均值）\n",
    "# 注意：GridSearchCV会最大化该得分，因此这里相当于偏好平均异常分数更高的参数组合；\n",
    "# 这只是无监督场景下的一种简化启发式，实际中可结合业务标签或稳定性指标来选择参数\n",
    "def anomaly_score(estimator, X):\n",
    "    return -estimator.score_samples(X).mean()\n",
    "\n",
    "# 5. 交叉验证与模型选择\n",
    "kf = KFold(n_splits=5, shuffle=True, random_state=42)\n",
    "grid_search = GridSearchCV(\n",
    "    pipeline,\n",
    "    param_grid,\n",
    "    cv=kf,\n",
    "    scoring=anomaly_score,\n",
    "    n_jobs=-1\n",
    ")\n",
    "\n",
    "# 6. 训练模型并选择最优参数\n",
    "grid_search.fit(X)\n",
    "\n",
    "# 7. 输出最优参数\n",
    "print(\"Best parameters:\", grid_search.best_params_)\n",
    "best_model = grid_search.best_estimator_\n",
    "\n",
    "# 8. 交叉验证评估\n",
    "print(\"\\nCross-Validation Results:\")\n",
    "cv_scores = []\n",
    "y_pred_all = np.zeros_like(y_true)  # 存储所有预测\n",
    "for train_idx, test_idx in kf.split(X):\n",
    "    X_train, X_test = X.iloc[train_idx], X.iloc[test_idx]\n",
    "    y_train_true, y_test_true = y_true.iloc[train_idx], y_true.iloc[test_idx]\n",
    "    \n",
    "    # 训练并预测\n",
    "    best_model.fit(X_train)\n",
    "    y_test_pred = best_model.predict(X_test)\n",
    "    y_test_pred = np.where(y_test_pred == -1, 1, 0)  # 转换为0（正常）/1（异常）\n",
    "    \n",
    "    # 存储预测结果\n",
    "    y_pred_all[test_idx] = y_test_pred\n",
    "    \n",
    "    # 计算异常分数\n",
    "    scores = -best_model.score_samples(X_test)\n",
    "    cv_scores.append(scores.mean())\n",
    "\n",
    "print(f\"\\nAverage CV anomaly score: {np.mean(cv_scores):.4f} ± {np.std(cv_scores):.4f}\")\n",
    "\n",
    "# 9. 全数据集预测与评估\n",
    "# 上面的交叉验证循环反复覆盖了best_model的拟合状态，这里先在全量数据上重新拟合\n",
    "best_model.fit(X)\n",
    "y_pred = best_model.predict(X)\n",
    "y_pred = np.where(y_pred == -1, 1, 0)\n",
    "print(\"\\nFull Dataset Classification Report:\")\n",
    "print(classification_report(y_true, y_pred, target_names=['Normal', 'Anomaly']))\n",
    "print(\"\\nConfusion Matrix:\")\n",
    "print(confusion_matrix(y_true, y_pred))\n",
    "\n",
    "# 10. 保存最优模型\n",
    "joblib.dump(best_model, 'best_isolation_forest_model.pkl')\n",
    "print(\"Best model saved to 'best_isolation_forest_model.pkl'\")\n",
    "\n",
    "# 11. 可视化结果\n",
    "plt.figure(figsize=(12, 5))\n",
    "\n",
    "# 散点图：CPU使用率 vs 响应时间\n",
    "plt.subplot(1, 2, 1)\n",
    "normal = df[y_pred == 0]\n",
    "anomaly = df[y_pred == 1]\n",
    "plt.scatter(normal['cpu_usage'], normal['response_time'], c='blue', label='Normal', alpha=0.5)\n",
    "plt.scatter(anomaly['cpu_usage'], anomaly['response_time'], c='red', label='Anomaly', alpha=0.5)\n",
    "plt.xlabel('CPU Usage (%)')\n",
    "plt.ylabel('Response Time (ms)')\n",
    "plt.title('Anomaly Detection: CPU Usage vs Response Time')\n",
    "plt.legend()\n",
    "\n",
    "# 异常分数分布直方图\n",
    "plt.subplot(1, 2, 2)\n",
    "anomaly_scores = -best_model.score_samples(X)\n",
    "plt.hist(anomaly_scores[y_true == 0], bins=50, alpha=0.5, label='Normal', color='blue')\n",
    "plt.hist(anomaly_scores[y_true == 1], bins=50, alpha=0.5, label='Anomaly', color='red')\n",
    "plt.xlabel('Anomaly Score')\n",
    "plt.ylabel('Frequency')\n",
    "plt.title('Anomaly Score Distribution')\n",
    "plt.legend()\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9e0e5d67-f2c7-490c-a265-4314a515e17a",
   "metadata": {},
   "source": [
    "## 模型保存与载入\n",
    "\n",
    "在scikit-learn中，保存和载入模型通常使用Python的joblib库（推荐）或pickle模块。这两种方法都可以高效地序列化scikit-learn模型（包括Pipeline和GridSearchCV的结果），以便在未来重复使用而无需重新训练。\n",
    "\n",
    "**保存和载入模型的方法**\n",
    "- 方法1：使用joblib\n",
    "  - joblib是scikit-learn官方推荐的工具，特别适合保存大型NumPy数组（如模型参数、Pipeline中的转换器参数），效率高于pickle。\n",
    "  - 保存模型：使用joblib.dump将模型保存为文件（通常以.joblib或.pkl扩展名）。\n",
    "  - 载入模型：使用joblib.load加载保存的模型文件。\n",
    "- 方法2：使用pickle\n",
    "  - pickle是Python的内置序列化工具，适合小型模型或不涉及大量NumPy数组的场景。\n",
    "  - 保存模型：使用pickle.dump保存模型。\n",
    "  - 载入模型：使用pickle.load加载模型。\n",
    "\n",
    "**安装依赖**\n",
    "\n",
    "运行如下命令，即可安装相关的依赖。"
   ]
  },
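  {
   "cell_type": "markdown",
   "id": "8c2d5e17-4f3a-4b6c-8a1d-9e0f2b3c4d01",
   "metadata": {},
   "source": [
    "在进入服务化示例之前，先用一个最小示意对比joblib与pickle的保存/载入写法（其中的文件名model.joblib、model.pkl仅为演示假设）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8c2d5e17-4f3a-4b6c-8a1d-9e0f2b3c4d02",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pickle\n",
    "import joblib\n",
    "from sklearn.datasets import load_iris\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "\n",
    "X, y = load_iris(return_X_y=True)\n",
    "model = LogisticRegression(max_iter=1000).fit(X, y)\n",
    "\n",
    "# 方法1：joblib（推荐，适合包含大型NumPy数组的模型）\n",
    "joblib.dump(model, 'model.joblib')\n",
    "m1 = joblib.load('model.joblib')\n",
    "\n",
    "# 方法2：pickle（Python内置序列化工具）\n",
    "with open('model.pkl', 'wb') as f:\n",
    "    pickle.dump(model, f)\n",
    "with open('model.pkl', 'rb') as f:\n",
    "    m2 = pickle.load(f)\n",
    "\n",
    "# 两种方式载入的模型，预测结果应与原模型完全一致\n",
    "assert (m1.predict(X) == model.predict(X)).all()\n",
    "assert (m2.predict(X) == model.predict(X)).all()\n",
    "print('joblib与pickle载入的模型预测结果一致')"
   ]
  },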
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "dff2b0a4-404c-4085-8d84-2bc73ec09a7b",
   "metadata": {},
   "outputs": [],
   "source": [
    "!pip install fastapi uvicorn joblib numpy pydantic nest_asyncio"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4741ff75-9a5d-43ea-9608-d4e797dafd3e",
   "metadata": {},
   "source": [
    "### 示例\n",
    "下面示例中的代码专为在JupyterLab（或Jupyter Notebook）中运行而设计：通过nest_asyncio复用Notebook已有的事件循环，并在单独线程中启动uvicorn服务。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "275419d0-dc50-4727-b7cb-95a418fa1330",
   "metadata": {},
   "outputs": [],
   "source": [
    "import nest_asyncio\n",
    "from fastapi import FastAPI, HTTPException\n",
    "from pydantic import BaseModel\n",
    "import joblib\n",
    "import numpy as np\n",
    "import uvicorn\n",
    "from threading import Thread\n",
    "\n",
    "# 应用nest_asyncio以支持Jupyter的事件循环\n",
    "nest_asyncio.apply()\n",
    "\n",
    "# 初始化FastAPI应用\n",
    "app = FastAPI()\n",
    "\n",
    "# 定义输入数据模型\n",
    "class InputData(BaseModel):\n",
    "    cpu_usage: float\n",
    "    memory_usage: float\n",
    "    response_time: float\n",
    "    error_rate: float\n",
    "\n",
    "# 加载保存的模型\n",
    "model = joblib.load('best_isolation_forest_model.pkl')\n",
    "\n",
    "@app.post(\"/predict\")\n",
    "async def predict(data: InputData):\n",
    "    try:\n",
    "        # 将输入数据转换为模型所需格式（模型若以DataFrame训练，传入ndarray会触发特征名警告，但不影响预测）\n",
    "        input_array = np.array([[\n",
    "            data.cpu_usage,\n",
    "            data.memory_usage,\n",
    "            data.response_time,\n",
    "            data.error_rate\n",
    "        ]])\n",
    "\n",
    "        # 使用模型进行预测\n",
    "        prediction = model.predict(input_array)\n",
    "\n",
    "        # 转换为0（正常）/1（异常）\n",
    "        is_anomaly = 1 if prediction[0] == -1 else 0\n",
    "\n",
    "        return {\"is_anomaly\": is_anomaly}\n",
    "    except Exception as e:\n",
    "        raise HTTPException(status_code=500, detail=f\"Prediction error: {str(e)}\")\n",
    "\n",
    "@app.get(\"/\")\n",
    "async def root():\n",
    "    return {\"message\": \"Isolation Forest API for SRE Fault Detection\"}\n",
    "\n",
    "# 定义运行服务器的函数\n",
    "def run_server():\n",
    "    uvicorn.run(app, host=\"0.0.0.0\", port=8000, log_level=\"info\")\n",
    "\n",
    "# 在单独线程中运行服务器\n",
    "server_thread = Thread(target=run_server)\n",
    "server_thread.start()\n",
    "\n",
    "print(\"FastAPI server is running on http://localhost:8000\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "51312d12-8b69-4609-bad4-ee064af30675",
   "metadata": {},
   "source": [
    "#### 测试命令\n",
    "下面是一个基于curl命令的模型服务请求示例，其期望的输出结果为：{\"is_anomaly\": 1}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "6bf7ef5f-1e7a-4a80-be16-0e3fd57d9922",
   "metadata": {},
   "outputs": [],
   "source": [
    "!curl -X POST \"http://localhost:8000/predict\" \\\n",
    "-H \"Content-Type: application/json\" \\\n",
    "-d '{\"cpu_usage\": 99.0, \"memory_usage\": 95.0, \"response_time\": 2000.0, \"error_rate\": 10.0}'"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "9e143625-e70f-4f67-a8c7-d5792ecec221",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "## 高级实践案例"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "54afab8f-fd9f-4600-8f40-297a3beafad7",
   "metadata": {},
   "source": [
    "### 典型的分类任务工作流示例"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "59c24e3b-ebc9-47d7-b97b-b7ce08e1a9af",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import load_digits\n",
    "from sklearn.model_selection import train_test_split, GridSearchCV\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.metrics import accuracy_score, classification_report\n",
    "\n",
    "# 1. 数据加载\n",
    "digits = load_digits()\n",
    "X, y = digits.data, digits.target\n",
    "\n",
    "# 2. 数据集拆分\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n",
    "\n",
    "# 3. 数据预处理（标准化）\n",
    "# 使用StandardScaler标准化特征（均值0，方差1），以提高LogisticRegression的性能\n",
    "scaler = StandardScaler()\n",
    "X_train_scaled = scaler.fit_transform(X_train)  # 拟合并转换训练数据\n",
    "X_test_scaled = scaler.transform(X_test)       # 转换测试数据\n",
    "\n",
    "# 4. 模型训练与超参数调优\n",
    "model = LogisticRegression(max_iter=1000, random_state=42)\n",
    "# 通过GridSearchCV调优超参数C（正则化强度）和solver（优化算法）\n",
    "param_grid = {'C': [0.1, 1, 10], 'solver': ['lbfgs', 'liblinear']}\n",
    "grid_search = GridSearchCV(model, param_grid, cv=5, scoring='accuracy')\n",
    "grid_search.fit(X_train_scaled, y_train)  # 训练模型\n",
    "\n",
    "# 5. 预测与评估\n",
    "y_pred = grid_search.predict(X_test_scaled)  # 预测\n",
    "# 使用accuracy_score计算准确率，classification_report提供详细的性能指标（精确率、召回率、F1分数）\n",
    "accuracy = accuracy_score(y_test, y_pred)    # 计算准确率\n",
    "print(\"Best Parameters:\", grid_search.best_params_)\n",
    "print(\"Test Accuracy:\", accuracy)\n",
    "print(\"\\nClassification Report:\\n\", classification_report(y_test, y_pred))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2ea2c0a4-445d-409d-b266-5277204c1a11",
   "metadata": {},
   "source": [
    "### 回归任务工作流示例"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e09a2d98-5c19-466b-85c7-b064ca99ab05",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import load_diabetes\n",
    "from sklearn.model_selection import train_test_split, GridSearchCV\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.ensemble import RandomForestRegressor\n",
    "from sklearn.metrics import mean_squared_error, r2_score\n",
    "\n",
    "# 1. 数据加载\n",
    "# load_diabetes 是一个回归数据集，包含 442 个样本，10 个特征（如年龄、BMI），目标是疾病进展的量化指标\n",
    "diabetes = load_diabetes()\n",
    "X, y = diabetes.data, diabetes.target\n",
    "\n",
    "# 2. 数据集拆分\n",
    "# 将数据集分为 80% 训练集和 20% 测试集，设置 random_state 确保结果可重现\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n",
    "\n",
    "# 3. 数据预处理\n",
    "# 使用 StandardScaler 标准化特征，使每个特征的均值为 0，方差为 1，有助于提高模型性能\n",
    "scaler = StandardScaler()\n",
    "# 对训练数据进行拟合（计算均值和标准差）并转换\n",
    "X_train_scaled = scaler.fit_transform(X_train)\n",
    "# 对测试数据仅进行转换（使用训练数据的均值和标准差）\n",
    "X_test_scaled = scaler.transform(X_test)\n",
    "\n",
    "# 4. 模型训练与超参数调优\n",
    "# 初始化 RandomForestRegressor，设置 random_state 确保结果可重现\n",
    "model = RandomForestRegressor(random_state=42)\n",
    "# 定义超参数网格，用于 GridSearchCV 搜索最优参数\n",
    "param_grid = {\n",
    "    'n_estimators': [50, 100],  # 树的数量\n",
    "    'max_depth': [None, 10],    # 树的最大深度\n",
    "    'min_samples_split': [2, 5] # 节点分裂的最小样本数\n",
    "}\n",
    "# 使用 GridSearchCV 进行 5 折交叉验证，优化均方误差（负值形式）\n",
    "grid_search = GridSearchCV(model, param_grid, cv=5, scoring='neg_mean_squared_error', n_jobs=-1)\n",
    "# 训练模型，自动搜索最佳超参数\n",
    "grid_search.fit(X_train_scaled, y_train)\n",
    "\n",
    "# 5. 预测与评估\n",
    "# 使用最佳模型对测试集进行预测\n",
    "y_pred = grid_search.predict(X_test_scaled)\n",
    "# 计算均方误差（MSE），评估预测误差\n",
    "mse = mean_squared_error(y_test, y_pred)\n",
    "# 计算 R² 分数，评估模型解释目标变量的程度\n",
    "r2 = r2_score(y_test, y_pred)\n",
    "# 输出最佳超参数、MSE 和 R² 分数\n",
    "print(\"Best Parameters:\", grid_search.best_params_)\n",
    "print(\"Test Mean Squared Error:\", mse)\n",
    "print(\"Test R² Score:\", r2)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6072adb0-f271-4b4d-827b-0e61f89e65ae",
   "metadata": {},
   "source": [
    "### Pipeline示例：修改上面的回归任务，基于Pipeline完成"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5f714a9f-8e1a-4b66-86fa-0f81f579fd8b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import load_diabetes\n",
    "from sklearn.model_selection import train_test_split, GridSearchCV\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.ensemble import RandomForestRegressor\n",
    "from sklearn.metrics import mean_squared_error, r2_score\n",
    "from sklearn.pipeline import Pipeline\n",
    "\n",
    "# 1. 数据加载\n",
    "diabetes = load_diabetes()\n",
    "X, y = diabetes.data, diabetes.target\n",
    "\n",
    "# 2. 数据集拆分\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n",
    "\n",
    "# 3. 构建Pipeline\n",
    "# Pipeline整合预处理（标准化）和模型训练（RandomForestRegressor）\n",
    "pipeline = Pipeline([\n",
    "    ('scaler', StandardScaler()),  # 标准化特征，使均值为 0，方差为 1\n",
    "    ('model', RandomForestRegressor(random_state=42))  # 随机森林回归模型\n",
    "])\n",
    "\n",
    "# 4. 超参数调优\n",
    "# 定义超参数网格，注意使用Pipeline的参数命名格式（如 'model__参数名'）\n",
    "param_grid = {\n",
    "    'model__n_estimators': [50, 100],  # 树的数量\n",
    "    'model__max_depth': [None, 10],    # 树的最大深度\n",
    "    'model__min_samples_split': [2, 5] # 节点分裂的最小样本数\n",
    "}\n",
    "# 使用 GridSearchCV 进行 5 折交叉验证，优化均方误差（负值形式）\n",
    "grid_search = GridSearchCV(pipeline, param_grid, cv=5, scoring='neg_mean_squared_error', n_jobs=-1)\n",
    "\n",
    "# 5. 训练 Pipeline\n",
    "# Pipeline自动按顺序执行scaler.fit_transform和model.fit\n",
    "grid_search.fit(X_train, y_train)\n",
    "\n",
    "# 6. 预测与评估\n",
    "# 使用最佳模型对测试集进行预测，Pipeline自动应用scaler.transform和model.predict\n",
    "y_pred = grid_search.predict(X_test)\n",
    "mse = mean_squared_error(y_test, y_pred)  # 计算均方误差（MSE），评估预测误差\n",
    "r2 = r2_score(y_test, y_pred)  # 计算 R² 分数，评估模型解释目标变量的程度\n",
    "print(\"Best Parameters:\", grid_search.best_params_)  # 输出最佳超参数、MSE 和 R² 分数\n",
    "print(\"Test Mean Squared Error:\", mse)\n",
    "print(\"Test R² Score:\", r2)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7a184a7c-1bf3-4fcb-ae5f-a42698272a78",
   "metadata": {},
   "source": [
    "### 聚类和降维的简单示例\n",
    "- 运行代码后，会生成一个散点图，展示降维后的数据点（2维），每个点根据 KMeans 的簇标签着色。\n",
    "- 由于load_digits数据包含10个数字类别，散点图通常会显示10个簇的分布，但簇的分离程度取决于KMeans和PCA的效果。\n",
    "- 输出示例（散点图）无法直接以文本形式展示，但点分布会反映数据的自然分组，颜色区分不同簇。"
   ]
  },
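  {
   "cell_type": "markdown",
   "id": "f5a7c9b1-2d4e-4a8f-b6c3-7e9d1a2b3c01",
   "metadata": {},
   "source": [
    "上面提到簇的分离程度取决于KMeans与PCA的效果；除了观察散点图，也可以用量化指标粗略评估聚类质量。下面是一个示意：用adjusted_rand_score对照真实数字标签（仅用于评估），用silhouette_score在无标签情况下衡量簇的紧凑度与分离度。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f5a7c9b1-2d4e-4a8f-b6c3-7e9d1a2b3c02",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import load_digits\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.cluster import KMeans\n",
    "from sklearn.metrics import adjusted_rand_score, silhouette_score\n",
    "\n",
    "X, y = load_digits(return_X_y=True)\n",
    "X_scaled = StandardScaler().fit_transform(X)\n",
    "\n",
    "labels = KMeans(n_clusters=10, n_init=10, random_state=42).fit_predict(X_scaled)\n",
    "\n",
    "# ARI对照真实标签：1表示与真实类别完全一致，接近0表示近似随机划分\n",
    "ari = adjusted_rand_score(y, labels)\n",
    "# 轮廓系数不需要真实标签，范围[-1, 1]，越大表示簇越紧凑、分离越好\n",
    "sil = silhouette_score(X_scaled, labels)\n",
    "print(f'Adjusted Rand Index: {ari:.3f}')\n",
    "print(f'Silhouette Score: {sil:.3f}')"
   ]
  },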
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bf466fb0-84a0-494d-aafc-a936f7764ece",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import load_digits\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.cluster import KMeans  # KMeans 聚类算法\n",
    "from sklearn.decomposition import PCA  # 主成分分析（PCA）降维算法\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "# 1. 数据加载\n",
    "X, y = load_digits(return_X_y=True)\n",
    "\n",
    "# 2. 数据预处理\n",
    "# 标准化有助于提高KMeans和PCA的性能，因为它们对特征尺度敏感\n",
    "scaler = StandardScaler()\n",
    "# fit_transform：对训练数据计算均值和标准差，并将数据转换为标准化形式\n",
    "X_scaled = scaler.fit_transform(X)\n",
    "\n",
    "# 3. 聚类：使用 KMeans 算法\n",
    "# KMeans将数据分为指定数量的簇（n_clusters=10，对应 0-9 十个数字）\n",
    "# random_state=42确保结果可重现，n_clusters=10是基于数据集的数字类别数\n",
    "kmeans = KMeans(n_clusters=10, random_state=42)\n",
    "# fit_predict：拟合KMeans模型并返回每个样本的簇标签\n",
    "# X_scaled是标准化后的数据，KMeans根据欧氏距离最小化簇内方差\n",
    "labels = kmeans.fit_predict(X_scaled)\n",
    "\n",
    "# 4. 降维：使用 PCA 算法\n",
    "# PCA将64维特征降到2维，以便在二维平面可视化\n",
    "# n_components=2表示保留2个主成分，捕捉数据的主要方差\n",
    "pca = PCA(n_components=2)\n",
    "# fit_transform：计算主成分并将数据投影到前两个主成分上\n",
    "X_reduced = pca.fit_transform(X_scaled)\n",
    "\n",
    "# 5. 可视化\n",
    "# 使用matplotlib绘制散点图，展示降维后的数据点\n",
    "# X_reduced[:, 0]和X_reduced[:, 1]分别是第一和第二主成分\n",
    "# c=labels表示用KMeans的簇标签为点着色，cmap='viridis'定义颜色映射\n",
    "plt.scatter(X_reduced[:, 0], X_reduced[:, 1], c=labels, cmap='viridis')\n",
    "plt.title(\"KMeans Clustering after PCA\")\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "71d5fbf6-f089-4be3-96f9-919e87239fc8",
   "metadata": {},
   "source": [
    "### 数据集拆分示例\n",
    "- 第一次拆分生成训练+验证集（80%）和测试集（20%）。\n",
    "- 第二次拆分将训练+验证集再分为训练集（60%）和验证集（20%）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5cf36d89-eecd-41a0-912d-1964e16fc067",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import load_breast_cancer\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "# 加载数据\n",
    "X, y = load_breast_cancer(return_X_y=True)\n",
    "\n",
    "# 第一次拆分：80% 训练+验证，20% 测试\n",
    "X_temp, X_test, y_temp, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)\n",
    "\n",
    "# 第二次拆分：从训练+验证中分出 25% 作为验证集（即总数据的 20%）\n",
    "X_train, X_val, y_train, y_val = train_test_split(X_temp, y_temp, test_size=0.25, random_state=42, stratify=y_temp)\n",
    "\n",
    "print(\"Train set size:\", len(X_train))  # 60% of total\n",
    "print(\"Validation set size:\", len(X_val))  # 20% of total\n",
    "print(\"Test set size:\", len(X_test))  # 20% of total"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1928f4a7-a855-4e3d-a655-0bbb36504ecd",
   "metadata": {},
   "source": [
    "### 交叉验证"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ce4fba4a-850a-4186-af56-a6429fe74bcc",
   "metadata": {},
   "source": [
    "#### cross_val_score函数"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "08d1448e-0a67-438f-b884-7dac060e4569",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import load_iris\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.model_selection import cross_val_score # 导入cross_val_score函数，用于交叉验证\n",
    "\n",
    "# 加载鸢尾花数据集，X表示特征数据，y表示目标（标签）数据\n",
    "X, y = load_iris(return_X_y=True) \n",
    "# 创建一个逻辑回归模型实例，并设置最大迭代次数为1000\n",
    "model = LogisticRegression(max_iter=1000) \n",
    "# 使用交叉验证评估模型性能，cv=5表示5折交叉验证，scoring='accuracy'表示使用准确率作为评估指标\n",
    "scores = cross_val_score(model, X, y, cv=5, scoring='accuracy') \n",
    "print(\"Cross-Validation Accuracy:\", scores.mean(), \"±\", scores.std()) # 打印交叉验证的平均准确率和标准差"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3f48bbae-c5e3-4ed8-a526-b9dd0287ee50",
   "metadata": {},
   "source": [
    "#### cross_validate函数"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c2e68193-f5b9-4f13-bd83-3a21f00ef474",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import load_iris\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.model_selection import cross_validate\n",
    "\n",
    "X, y = load_iris(return_X_y=True) \n",
    "model = LogisticRegression(max_iter=1000) \n",
    "\n",
    "# 换用cross_validate函数，并指定accuracy和f1_macro两个指标\n",
    "scores = cross_validate(model, X, y, cv=5, scoring=['accuracy', 'f1_macro'], return_train_score=True)\n",
    "print(\"Validation Accuracy:\", scores['test_accuracy'].mean())\n",
    "print(\"Train Accuracy:\", scores['train_accuracy'].mean())\n",
    "print(\"Validation F1 (macro):\", scores['test_f1_macro'].mean())"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a9428999-0b63-400f-94c7-c943db0e6ec9",
   "metadata": {},
   "source": [
    "#### 交叉验证与Pipeline相结合"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f88970e2-c3ef-4105-8314-b494512e134b",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import load_iris\n",
    "from sklearn.model_selection import cross_val_score, StratifiedKFold\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.svm import SVC\n",
    "from sklearn.pipeline import Pipeline\n",
    "\n",
    "# 加载鸢尾花数据集\n",
    "X, y = load_iris(return_X_y=True)\n",
    "\n",
    "# 构建机器学习工作流Pipeline\n",
    "pipeline = Pipeline([\n",
    "    ('scaler', StandardScaler()),  # 第一步: 特征标准化\n",
    "    ('svc', SVC())                 # 第二步: SVC，Pipeline中的最终模型，用于执行分类任务\n",
    "])\n",
    "\n",
    "# 定义交叉验证策略\n",
    "# StratifiedKFold是一种分层K折交叉验证，特别适用于分类任务和类别不平衡的数据集\n",
    "# 它确保每个折叠中各类别的样本比例与原始数据集保持一致\n",
    "cv = StratifiedKFold(n_splits=5,     # 将数据集分成5个折叠 (K=5)\n",
    "                     shuffle=True,   # 在分割前打乱数据，以增加随机性\n",
    "                     random_state=42)# 设置随机种子，确保每次运行分割结果一致，方便复现\n",
    "\n",
    "# 执行交叉验证\n",
    "# cross_val_score函数用于在指定交叉验证策略下评估模型的性能\n",
    "scores = cross_val_score(pipeline,  # 要评估的机器学习模型或Pipeline\n",
    "                         X,         # 特征数据\n",
    "                         y,         # 目标数据\n",
    "                         cv=cv,     # 指定使用的交叉验证策略\n",
    "                         scoring='accuracy', # 评估指标，这里使用准确率\n",
    "                         n_jobs=-1) # 使用所有可用的CPU核心进行并行计算，加速运行\n",
    "\n",
    "# 输出交叉验证结果\n",
    "print(\"Cross-Validation Scores:\", scores) # 打印每个折叠的准确率得分\n",
    "print(\"Mean Accuracy:\", scores.mean(),    # 打印所有折叠的平均准确率\n",
    "      \"±\", scores.std())                # 打印准确率的标准差，表示结果的稳定性"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "298711ef-40ec-412b-837d-522bb7ae1418",
   "metadata": {},
   "source": [
    "### GridSearchCV示例"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "d4f8b3ac-536f-476c-a7e3-926e95496bb6",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "import pandas as pd\n",
    "from sklearn import datasets\n",
    "from sklearn.model_selection import train_test_split, GridSearchCV\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.svm import SVC\n",
    "from sklearn.metrics import accuracy_score, classification_report, confusion_matrix\n",
    "import matplotlib.pyplot as plt\n",
    "import seaborn as sns\n",
    "\n",
    "# 1. 加载数据集\n",
    "iris = datasets.load_iris()\n",
    "X = iris.data\n",
    "y = iris.target\n",
    "feature_names = iris.feature_names\n",
    "target_names = iris.target_names\n",
    "\n",
    "print(f\"特征名称: {feature_names}\")\n",
    "print(f\"目标类别名称: {target_names}\")\n",
    "print(f\"数据集形状: X={X.shape}, y={y.shape}\")\n",
    "\n",
    "# 2. 数据分割\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42, stratify=y)\n",
    "print(f\"\\n训练集形状: {X_train.shape}, 测试集形状: {X_test.shape}\")\n",
    "\n",
    "# 3. 特征缩放 (SVM 对特征尺度敏感，必须进行缩放)\n",
    "scaler = StandardScaler()\n",
    "X_train_scaled = scaler.fit_transform(X_train)\n",
    "X_test_scaled = scaler.transform(X_test)\n",
    "\n",
    "# 4. 实例化和训练 SVC 模型 (使用默认 RBF 核)\n",
    "print(\"\\n--- 默认参数 SVC 模型 ---\")\n",
    "svc_default = SVC(random_state=42)\n",
    "svc_default.fit(X_train_scaled, y_train)\n",
    "\n",
    "# 5. 模型预测与评估\n",
    "y_pred_default = svc_default.predict(X_test_scaled)\n",
    "accuracy_default = accuracy_score(y_test, y_pred_default)\n",
    "print(f\"默认 SVC 模型准确率: {accuracy_default:.4f}\")\n",
    "print(\"\\n默认 SVC 模型分类报告:\\n\", classification_report(y_test, y_pred_default, target_names=target_names))\n",
    "print(\"默认 SVC 模型混淆矩阵:\\n\", confusion_matrix(y_test, y_pred_default))\n",
    "\n",
    "\n",
    "# 6. 参数调优 (GridSearchCV)\n",
    "print(\"\\n--- 使用 GridSearchCV 进行参数调优 ---\")\n",
    "# 定义参数网格\n",
    "param_grid = {\n",
    "    'C': [0.1, 1, 10, 100],            # 惩罚参数\n",
    "    'kernel': ['linear', 'rbf'],       # 核函数\n",
    "    'gamma': ['scale', 'auto', 0.1, 1] # 核系数 (仅 RBF 核相关)\n",
    "}\n",
    "\n",
    "# 实例化 GridSearchCV\n",
    "# cv=5 表示使用 5 折交叉验证\n",
    "# n_jobs=-1 表示使用所有可用的 CPU 核心并行计算\n",
    "grid_search = GridSearchCV(SVC(random_state=42), param_grid, cv=5, scoring='accuracy', n_jobs=-1, verbose=1)\n",
    "\n",
    "# 在缩放后的训练数据上执行网格搜索（包含内层的交叉验证过程）\n",
    "grid_search.fit(X_train_scaled, y_train)\n",
    "\n",
    "# 获取最佳超参数和模型\n",
    "print(f\"\\n最佳参数组合: {grid_search.best_params_}\")\n",
    "print(f\"最佳交叉验证准确率: {grid_search.best_score_:.4f}\")\n",
    "best_svc = grid_search.best_estimator_\n",
    "\n",
    "# 使用最佳参数的模型进行预测和评估（在独立的测试集上进行最终评估）\n",
    "y_pred_best = best_svc.predict(X_test_scaled)\n",
    "accuracy_best = accuracy_score(y_test, y_pred_best)\n",
    "print(f\"最佳 SVC 模型在测试集上的准确率: {accuracy_best:.4f}\")\n",
    "print(\"\\n最佳 SVC 模型分类报告:\\n\", classification_report(y_test, y_pred_best, target_names=target_names))\n",
    "print(\"最佳 SVC 模型混淆矩阵:\\n\", confusion_matrix(y_test, y_pred_best))\n",
    "\n",
    "\n",
    "# 7. 可视化 (仅限二维数据，这里为了演示，只取两个特征)\n",
    "# 为了简化可视化，我们只使用前两个特征\n",
    "X_reduced = X[:, :2] # 只取 sepal length 和 sepal width\n",
    "X_train_reduced, X_test_reduced, y_train_reduced, y_test_reduced = train_test_split(\n",
    "    X_reduced, y, test_size=0.3, random_state=42, stratify=y\n",
    ")\n",
    "\n",
    "# 对简化后的数据进行缩放\n",
    "scaler_reduced = StandardScaler()\n",
    "X_train_scaled_reduced = scaler_reduced.fit_transform(X_train_reduced)\n",
    "X_test_scaled_reduced = scaler_reduced.transform(X_test_reduced)\n",
    "\n",
    "# 使用最佳参数的 SVC 模型在二维数据上训练\n",
    "best_svc_reduced = SVC(C=best_svc.C, kernel=best_svc.kernel, gamma=best_svc.gamma, random_state=42)\n",
    "best_svc_reduced.fit(X_train_scaled_reduced, y_train_reduced)\n",
    "\n",
    "# 绘制决策边界\n",
    "def plot_decision_boundary(X, y, model, title):\n",
    "    h = .02  # 网格中的步长\n",
    "    x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1\n",
    "    y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1\n",
    "    xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h))\n",
    "\n",
    "    Z = model.predict(np.c_[xx.ravel(), yy.ravel()])\n",
    "    Z = Z.reshape(xx.shape)\n",
    "\n",
    "    plt.figure(figsize=(10, 7))\n",
    "    plt.contourf(xx, yy, Z, alpha=0.8, cmap=plt.cm.coolwarm)\n",
    "    plt.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.coolwarm, edgecolors='k', s=60)\n",
    "    \n",
    "    # 绘制支持向量\n",
    "    if hasattr(model, 'support_vectors_'):\n",
    "        plt.scatter(model.support_vectors_[:, 0], model.support_vectors_[:, 1], s=150,\n",
    "                    facecolors='none', edgecolors='green', linewidth=2, label='Support Vectors')\n",
    "    \n",
    "    plt.xlabel(feature_names[0])\n",
    "    plt.ylabel(feature_names[1])\n",
    "    plt.title(title)\n",
    "    plt.legend()\n",
    "    plt.show()\n",
    "\n",
    "plot_decision_boundary(X_train_scaled_reduced, y_train_reduced, best_svc_reduced, \"SVC Decision Boundary (Iris - 2 Features)\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2eafb2be-25ca-4a19-804f-41122185d0d9",
   "metadata": {},
   "source": [
    "#### GridSearchCV、交叉验证和Pipeline的组合示例"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "77612cc9-8274-4379-bf4a-c0403ab04dbd",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 导入必要的库\n",
    "from sklearn.datasets import load_iris\n",
    "from sklearn.model_selection import train_test_split, GridSearchCV, StratifiedKFold\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.svm import SVC\n",
    "from sklearn.pipeline import Pipeline\n",
    "from sklearn.metrics import accuracy_score, classification_report\n",
    "import numpy as np\n",
    "\n",
    "# 1. 加载鸢尾花数据集\n",
    "X, y = load_iris(return_X_y=True)\n",
    "# 分割训练集和测试集（80% 训练，20% 测试）\n",
    "X_train, X_test, y_train, y_test = train_test_split(\n",
    "    X, y, test_size=0.2, random_state=42, stratify=y\n",
    ")\n",
    "\n",
    "# 2. 构建 Pipeline\n",
    "pipeline = Pipeline([\n",
    "    ('scaler', StandardScaler()),  # 标准化：将特征缩放到均值为 0，方差为 1\n",
    "    ('svc', SVC())                 # 支持向量机分类器\n",
    "])\n",
    "\n",
    "# 3. 定义超参数网格\n",
    "param_grid = {\n",
    "    'scaler__with_mean': [True, False],   # 标准化是否中心化\n",
    "    'scaler__with_std': [True, False],    # 标准化是否缩放\n",
    "    'svc__C': [0.1, 1, 10],              # SVM 正则化参数\n",
    "    'svc__kernel': ['linear', 'rbf'],     # 核函数\n",
    "    'svc__gamma': ['scale', 'auto', 0.1]  # 核函数系数\n",
    "}\n",
    "\n",
    "# 4. 定义交叉验证策略\n",
    "cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)\n",
    "\n",
    "# 5. 初始化 GridSearchCV\n",
    "grid_search = GridSearchCV(\n",
    "    estimator=pipeline,\n",
    "    param_grid=param_grid,\n",
    "    cv=cv,\n",
    "    scoring='accuracy',  # 使用准确率作为评估指标\n",
    "    n_jobs=-1,           # 使用所有 CPU 核心加速\n",
    "    verbose=1            # 输出搜索进度\n",
    ")\n",
    "\n",
    "# 6. 在训练集上执行超参数调优和交叉验证\n",
    "grid_search.fit(X_train, y_train)\n",
    "\n",
    "# 7. 输出结果\n",
    "print(\"\\n=== 超参数调优结果 ===\")\n",
    "print(\"最佳参数:\", grid_search.best_params_)\n",
    "print(\"最佳交叉验证准确率:\", grid_search.best_score_)\n",
    "print(\"最佳模型:\", grid_search.best_estimator_)\n",
    "\n",
    "# 8. 在测试集上进行预测和评估\n",
    "y_pred = grid_search.predict(X_test)\n",
    "test_accuracy = accuracy_score(y_test, y_pred)\n",
    "print(\"\\n=== 测试集评估 ===\")\n",
    "print(\"测试集准确率:\", test_accuracy)\n",
    "print(\"\\n分类报告:\")\n",
    "print(classification_report(y_test, y_pred, target_names=load_iris().target_names))\n",
    "\n",
    "# 9. 可视化交叉验证结果（可选）\n",
    "# 输出所有参数组合的平均交叉验证评分\n",
    "print(\"\\n=== 所有参数组合的交叉验证评分 ===\")\n",
    "results = grid_search.cv_results_\n",
    "for mean_score, params in zip(results['mean_test_score'], results['params']):\n",
    "    print(f\"参数: {params}, 平均准确率: {mean_score:.4f}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c8780aa5-f465-490f-8cea-352a313f30e2",
   "metadata": {},
   "source": [
    "#### 上面示例的改进和对比版本\n",
    "- 添加特征选择：在Pipeline中加入SelectKBest进行特征选择，优化选择特征数量\n",
    "- 多指标评估：使用GridSearchCV的多指标评估（accuracy和f1_macro），并通过refit指定主要指标\n",
    "- 使用RandomizedSearchCV：展示如何替换GridSearchCV，使用随机搜索优化参数空间\n",
    "- 可视化结果：绘制交叉验证评分的分布图，分析参数组合的性能\n",
    "\n",
    "代码说明\n",
    "- RandomizedSearchCV：\n",
    "  - 使用 uniform 分布抽样 C 和 gamma，探索连续参数空间。\n",
    "  - n_iter=20 减少计算成本（20×5=100 次训练 vs. GridSearchCV 的 216×5=1080 次）。\n",
    "- 多指标评估：\n",
    "  - GridSearchCV同时评估accuracy和f1_macro，提供更全面的性能分析。\n",
    "  - f1_macro适合不平衡数据集（尽管鸢尾花数据集较为平衡）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a385a4c5-9eb3-4575-a3b9-a0fdc622b9d9",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 导入必要的库\n",
    "from sklearn.datasets import load_iris\n",
    "from sklearn.model_selection import train_test_split, GridSearchCV, RandomizedSearchCV, StratifiedKFold\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.feature_selection import SelectKBest, f_classif\n",
    "from sklearn.svm import SVC\n",
    "from sklearn.pipeline import Pipeline\n",
    "from sklearn.metrics import accuracy_score, classification_report\n",
    "from scipy.stats import uniform\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "# 1. 加载鸢尾花数据集\n",
    "X, y = load_iris(return_X_y=True)\n",
    "# 分割训练集和测试集（80% 训练，20% 测试）\n",
    "X_train, X_test, y_train, y_test = train_test_split(\n",
    "    X, y, test_size=0.2, random_state=42, stratify=y\n",
    ")\n",
    "\n",
    "# 2. 构建 Pipeline（添加特征选择）\n",
    "pipeline = Pipeline([\n",
    "    ('scaler', StandardScaler()),              # 标准化\n",
    "    ('select', SelectKBest(score_func=f_classif)),  # 特征选择\n",
    "    ('svc', SVC())                             # SVM 分类器\n",
    "])\n",
    "\n",
    "# 3. 定义超参数网格（用于 GridSearchCV）\n",
    "param_grid = {\n",
    "    'scaler__with_mean': [True, False],        # 标准化是否中心化\n",
    "    'scaler__with_std': [True, False],         # 标准化是否缩放\n",
    "    'select__k': [2, 3, 4],                   # 选择特征数量\n",
    "    'svc__C': [0.1, 1, 10],                   # 正则化参数\n",
    "    'svc__kernel': ['linear', 'rbf'],          # 核函数\n",
    "    'svc__gamma': ['scale', 'auto', 0.1]      # 核函数系数\n",
    "}\n",
    "\n",
    "# 4. 定义参数分布（用于 RandomizedSearchCV）\n",
    "param_dist = {\n",
    "    'scaler__with_mean': [True, False],\n",
    "    'scaler__with_std': [True, False],\n",
    "    'select__k': [2, 3, 4],\n",
    "    'svc__C': uniform(0.1, 10),                # 连续均匀分布：[0.1, 10.1]（loc=0.1, scale=10）\n",
    "    'svc__kernel': ['linear', 'rbf'],\n",
    "    'svc__gamma': uniform(0.01, 0.1)           # 连续均匀分布：[0.01, 0.11]\n",
    "}\n",
    "\n",
    "# 5. 定义交叉验证策略\n",
    "cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)\n",
    "\n",
    "# 6. 执行 GridSearchCV（多指标评估）\n",
    "grid_search = GridSearchCV(\n",
    "    estimator=pipeline,\n",
    "    param_grid=param_grid,\n",
    "    cv=cv,\n",
    "    scoring=['accuracy', 'f1_macro'],  # 多指标：准确率和 F1 分数（宏平均）\n",
    "    refit='accuracy',                  # 以准确率选择最佳模型\n",
    "    n_jobs=-1,\n",
    "    verbose=1\n",
    ")\n",
    "grid_search.fit(X_train, y_train)\n",
    "\n",
    "# 7. 执行 RandomizedSearchCV\n",
    "random_search = RandomizedSearchCV(\n",
    "    estimator=pipeline,\n",
    "    param_distributions=param_dist,\n",
    "    n_iter=20,                         # 随机抽样 20 次\n",
    "    cv=cv,\n",
    "    scoring='accuracy',\n",
    "    n_jobs=-1,\n",
    "    random_state=42,\n",
    "    verbose=1\n",
    ")\n",
    "random_search.fit(X_train, y_train)\n",
    "\n",
    "# 8. 输出 GridSearchCV 结果\n",
    "print(\"\\n=== GridSearchCV 结果 ===\")\n",
    "print(\"最佳参数:\", grid_search.best_params_)\n",
    "print(\"最佳交叉验证准确率:\", grid_search.best_score_)\n",
    "print(\"最佳交叉验证 F1 分数:\", grid_search.cv_results_['mean_test_f1_macro'][grid_search.best_index_])\n",
    "print(\"测试集准确率:\", accuracy_score(y_test, grid_search.predict(X_test)))\n",
    "print(\"\\nGridSearchCV 分类报告:\")\n",
    "print(classification_report(y_test, grid_search.predict(X_test), target_names=load_iris().target_names))\n",
    "\n",
    "# 9. 输出 RandomizedSearchCV 结果\n",
    "print(\"\\n=== RandomizedSearchCV 结果 ===\")\n",
    "print(\"最佳参数:\", random_search.best_params_)\n",
    "print(\"最佳交叉验证准确率:\", random_search.best_score_)\n",
    "print(\"测试集准确率:\", accuracy_score(y_test, random_search.predict(X_test)))\n",
    "print(\"\\nRandomizedSearchCV 分类报告:\")\n",
    "print(classification_report(y_test, random_search.predict(X_test), target_names=load_iris().target_names))\n",
    "\n",
    "# 10. 可视化交叉验证评分分布（GridSearchCV）\n",
    "plt.figure(figsize=(10, 6))\n",
    "plt.hist(grid_search.cv_results_['mean_test_accuracy'], bins=20, alpha=0.5, label='GridSearchCV Accuracy', color='blue')\n",
    "plt.hist(random_search.cv_results_['mean_test_score'], bins=20, alpha=0.5, label='RandomizedSearchCV Accuracy', color='orange')\n",
    "plt.xlabel('Cross-Validation Accuracy')\n",
    "plt.ylabel('Frequency')\n",
    "plt.title('Distribution of Cross-Validation Accuracy Scores')\n",
    "plt.legend()\n",
    "plt.grid(True)\n",
    "plt.show()"
   ]
  },
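  {
   "cell_type": "markdown",
   "id": "3f6d2a10-9a01-4c2e-b7a4-5a1e0c9d1a01",
   "metadata": {},
   "source": [
    "#### 可选：用对数均匀分布采样 C 和 gamma\n",
    "C 和 gamma 这类参数往往跨越多个数量级，线性的 uniform 分布会把大部分采样集中在大值区间；scipy.stats 的 loguniform 在对数尺度上均匀采样，覆盖通常更均匀。下面是一个独立的示意（参数范围为假设值，并非唯一做法）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "3f6d2a10-9a01-4c2e-b7a4-5a1e0c9d1a02",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 示意：用 loguniform 在对数尺度上采样 SVM 的 C 和 gamma（参数范围为假设值）\n",
    "from scipy.stats import loguniform\n",
    "from sklearn.datasets import load_iris\n",
    "from sklearn.model_selection import RandomizedSearchCV, StratifiedKFold, train_test_split\n",
    "from sklearn.pipeline import Pipeline\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.svm import SVC\n",
    "\n",
    "X, y = load_iris(return_X_y=True)\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42, stratify=y)\n",
    "\n",
    "pipe = Pipeline([('scaler', StandardScaler()), ('svc', SVC(kernel='rbf'))])\n",
    "\n",
    "# loguniform(a, b) 在 [a, b] 上按对数尺度均匀采样，适合跨数量级的参数\n",
    "param_dist = {\n",
    "    'svc__C': loguniform(1e-2, 1e2),\n",
    "    'svc__gamma': loguniform(1e-3, 1e0)\n",
    "}\n",
    "\n",
    "search = RandomizedSearchCV(\n",
    "    pipe, param_dist, n_iter=20,\n",
    "    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=42),\n",
    "    scoring='accuracy', random_state=42, n_jobs=-1\n",
    ")\n",
    "search.fit(X_train, y_train)\n",
    "print('最佳参数:', search.best_params_)\n",
    "print('最佳交叉验证准确率:', round(search.best_score_, 4))"
   ]
  },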
  {
   "cell_type": "markdown",
   "id": "57b36b82-c4ae-42db-a9d9-a0d64cc3c90b",
   "metadata": {},
   "source": [
    "#### 第二个示例：基于 RandomizedSearchCV、交叉验证和 Pipeline 的回归任务"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7d53decb-ab8b-4604-8b11-57089ab9c043",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import load_diabetes\n",
    "from sklearn.model_selection import RandomizedSearchCV, KFold, train_test_split\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.ensemble import RandomForestRegressor\n",
    "from sklearn.pipeline import Pipeline\n",
    "from sklearn.metrics import mean_squared_error\n",
    "from scipy.stats import randint, uniform\n",
    "\n",
    "# 加载数据\n",
    "X, y = load_diabetes(return_X_y=True)\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n",
    "\n",
    "# 构建 Pipeline\n",
    "pipeline = Pipeline([\n",
    "    ('scaler', StandardScaler()),\n",
    "    ('rf', RandomForestRegressor(random_state=42))\n",
    "])\n",
    "\n",
    "# 定义参数分布\n",
    "param_dist = {\n",
    "    'scaler__with_std': [True, False],         # 标准化是否缩放\n",
    "    'rf__n_estimators': randint(50, 200),      # 树数量\n",
    "    'rf__max_depth': [None, 10, 20],          # 最大深度\n",
    "    'rf__min_samples_split': randint(2, 10)    # 最小分裂样本数\n",
    "}\n",
    "\n",
    "# 定义交叉验证策略\n",
    "cv = KFold(n_splits=5, shuffle=True, random_state=42)\n",
    "\n",
    "# 初始化 RandomizedSearchCV\n",
    "random_search = RandomizedSearchCV(\n",
    "    pipeline,\n",
    "    param_dist,\n",
    "    n_iter=10,\n",
    "    cv=cv,\n",
    "    scoring='neg_mean_squared_error',\n",
    "    n_jobs=-1,\n",
    "    random_state=42\n",
    ")\n",
    "random_search.fit(X_train, y_train)\n",
    "\n",
    "# 输出结果\n",
    "print(\"Best Parameters:\", random_search.best_params_)\n",
    "print(\"Best CV Score (neg MSE):\", random_search.best_score_)\n",
    "print(\"Test MSE:\", mean_squared_error(y_test, random_search.predict(X_test)))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5fdc4dc9-6ceb-490f-b933-5ef6bf34bc64",
   "metadata": {},
   "source": [
    "### 案例：基于Iris数据集的分类任务\n",
    "- Iris数据集是一个经典的多分类数据集，包含150个样本，描述了三种鸢尾花（Setosa（山鸢尾）、Versicolor（变色鸢尾）、Virginica（弗吉尼亚鸢尾））的4个特征（萼片长度、萼片宽度、花瓣长度、花瓣宽度）。\n",
    "- 任务：基于4个特征预测鸢尾花的类别（三分类问题）。\n",
    "- 方法：使用Pipeline结合数据标准化（StandardScaler）和逻辑回归模型（LogisticRegression）进行分类。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a3b56337-5cce-468b-b6f7-bbf92c6b5197",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 导入必要的库\n",
    "from sklearn.datasets import load_iris\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.pipeline import Pipeline\n",
    "from sklearn.metrics import accuracy_score, classification_report\n",
    "import numpy as np\n",
    "\n",
    "# 1. 加载数据集\n",
    "# 使用 load_iris() 加载Iris数据集，得到特征矩阵X（150 行，4 列）和标签向量y（150 个标签，0/1/2 对应三种鸢尾花）\n",
    "# 数据包含4个数值型特征，无需额外处理缺失值或类别编码\n",
    "iris = load_iris()\n",
    "X = iris.data  # 特征：(150, 4)\n",
    "y = iris.target  # 标签：(150,)\n",
    "\n",
    "# 2. 划分训练集和测试集\n",
    "# 使用 train_test_split 将数据分为训练集（70%，105 个样本）和测试集（30%，45 个样本）\n",
    "# 参数 test_size=0.3 表示测试集占比 30%，random_state=42 确保划分可重现\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)\n",
    "\n",
    "# 3. 创建 Pipeline\n",
    "pipeline = Pipeline([\n",
    "    ('scaler', StandardScaler()),  # 标准化特征\n",
    "    ('clf', LogisticRegression(random_state=42))  # 逻辑回归模型\n",
    "])\n",
    "\n",
    "# 4. 训练模型\n",
    "pipeline.fit(X_train, y_train)\n",
    "\n",
    "# 5. 预测\n",
    "y_pred = pipeline.predict(X_test)\n",
    "\n",
    "# 6. 评估模型\n",
    "accuracy = accuracy_score(y_test, y_pred)\n",
    "print(f\"Accuracy: {accuracy:.2f}\")\n",
    "print(\"\\nClassification Report:\")\n",
    "print(classification_report(y_test, y_pred, target_names=iris.target_names))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2cca08a4-063d-4f27-8046-e0f3cb7757ac",
   "metadata": {},
   "source": [
    "#### 交叉验证\n",
    "为了更稳健地评估模型性能，可以使用交叉验证代替单一的 train-test split。\n",
    "- cross_val_score 将数据分为5折，每次使用4折训练、1折测试。\n",
    "- 输出每折的准确率以及平均准确率和标准差，评估模型的稳定性。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "ef0f4144-a2f8-444c-be34-76a546534ef4",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.model_selection import cross_val_score\n",
    "\n",
    "# 使用 5 折交叉验证评估 Pipeline\n",
    "scores = cross_val_score(pipeline, X, y, cv=5, scoring='accuracy')\n",
    "print(f\"Cross-validation scores: {scores}\")\n",
    "print(f\"Mean CV accuracy: {scores.mean():.2f} ± {scores.std():.2f}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "8f3ac53b-d19d-4c13-873d-161b12e5193b",
   "metadata": {},
   "source": [
    "### 案例：基于 MNIST 数据集的手写数字分类\n",
    "- MNIST 是一个经典的手写数字数据集，包含70000张28×28像素的灰度图像，每张图像表示一个手写数字（0-9）。每个样本有 784 个特征（28×28 像素展平为向量），标签为对应的数字类别。\n",
    "- 本示例的任务：基于图像的像素特征预测手写数字的类别（10 分类问题）。\n",
    "\n",
    "下面是一个完整的代码示例，涵盖数据加载、预处理、模型训练、测试和评估。为了简化计算，本示例使用load_digits()（Scikit-Learn提供的8×8像素简化版MNIST数据集，1797个样本，64个特征）。如果需要使用完整的MNIST数据集（70000个样本，784个特征），可以替换为 from sklearn.datasets import fetch_openml; X, y = fetch_openml('mnist_784', version=1, return_X_y=True, as_frame=False)，流程完全相同。\n",
    "#### 下面代码的训练过程说明\n",
    "训练过程由 Pipeline 自动管理，具体步骤如下：\n",
    "1. 数据加载\n",
    "   - 使用load_digits()加载简化版MNIST数据集，得到特征矩阵X（1797行，64列，每个样本是8×8像素展平后的向量）和标签向量 y（1797个标签，0-9）。\n",
    "   - 每个特征表示像素的灰度值（0-16），无需处理缺失值或类别编码。\n",
    "2. 数据集划分\n",
    "   - 使用train_test_split将数据分为训练集（70%，1257 个样本）和测试集（30%，540 个样本）。\n",
    "   - 参数test_size=0.3表示测试集占比 30%，random_state=42 确保划分可重现。\n",
    "3. Pipeline配置\n",
    "   - Pipeline 包含两个步骤\n",
    "     - StandardScaler：标准化特征（将像素值缩放到均值为 0、方差为 1），使 SVM 模型更有效，因为 SVM 对特征尺度敏感。\n",
    "     - SVC：支持向量分类器，使用径向基函数（RBF）核（kernel='rbf'），适合非线性可分数据。\n",
    "   - Pipeline 确保训练和测试数据使用相同的标准化参数。\n",
    "4. 训练\n",
    "   - 步骤 1：StandardScaler.fit_transform(X_train)\n",
    "     - 计算训练数据 X_train 每个特征的均值和标准差。\n",
    "     - 对 X_train 应用标准化：(X - mean) / std，生成标准化的特征矩阵 X_train_scaled。\n",
    "   - 步骤 2：SVC.fit(X_train_scaled, y_train)\n",
    "     - 使用标准化后的训练数据 X_train_scaled 和标签 y_train 训练 SVM 模型。\n",
    "     - SVM 学习支持向量和决策边界，优化分类超平面（RBF 核将数据映射到高维空间）。\n",
    "#### 测试方法说明\n",
    "测试过程用于评估模型在新数据上的性能，具体步骤如下：\n",
    "1. 预测\n",
    "   - 步骤 1：StandardScaler.transform(X_test)\n",
    "     - 使用训练阶段学习的均值和标准差，对测试数据 X_test 进行标准化，生成 X_test_scaled。\n",
    "     - 注意：测试数据只调用 transform()，不调用 fit()，以避免数据泄漏。\n",
    "   - 步骤 2：SVC.predict(X_test_scaled)\n",
    "     - 使用训练好的 SVM 模型对 X_test_scaled 进行预测，生成预测标签 y_pred。\n",
    "     - SVM 输出每个样本的预测类别（0-9）。\n",
    "2. 评估模型\n",
    "   - 准确率（Accuracy）\n",
    "     - 使用 accuracy_score(y_test, y_pred) 计算预测正确的比例。\n",
    "     - 公式：accuracy = (正确预测的样本数) / (总样本数)。\n",
    "   - 分类报告（Classification Report）\n",
    "     - 使用 classification_report 输出每个类别的精确率（Precision）、召回率（Recall）和F1 分数。\n",
    "     - 精确率：TP / (TP + FP)，表示预测为某类的样本中实际为该类的比例。\n",
    "     - 召回率：TP / (TP + FN)，表示实际为某类的样本中被正确预测的比例。\n",
    "     - F1 分数：2 * (Precision * Recall) / (Precision + Recall)。\n",
    "   - 混淆矩阵（Confusion Matrix）\n",
    "     - 使用 confusion_matrix 输出 10×10 矩阵，显示每个类别的预测情况（行是真实类别，列是预测类别）。\n",
    "     - 对角线表示正确预测的样本数，非对角线表示错误预测。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f04c17ad-f948-4c53-8d15-1f90ddefa878",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 导入必要的库\n",
    "from sklearn.datasets import load_digits  # 使用 digits 数据集（小型 MNIST）\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.svm import SVC\n",
    "from sklearn.pipeline import Pipeline\n",
    "from sklearn.metrics import accuracy_score, classification_report, confusion_matrix\n",
    "import numpy as np\n",
    "\n",
    "# 1. 加载数据集（使用 digits 数据集，简化版 MNIST）\n",
    "digits = load_digits()\n",
    "X = digits.data  # 特征：(1797, 64)，8x8 像素展平\n",
    "y = digits.target  # 标签：(1797,)，0-9 的类别\n",
    "\n",
    "# 2. 划分训练集和测试集\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)\n",
    "\n",
    "# 3. 创建 Pipeline\n",
    "pipeline = Pipeline([\n",
    "    ('scaler', StandardScaler()),  # 标准化特征\n",
    "    ('clf', SVC(kernel='rbf', random_state=42))  # 支持向量机分类器\n",
    "])\n",
    "\n",
    "# 4. 训练模型\n",
    "pipeline.fit(X_train, y_train)\n",
    "\n",
    "# 5. 预测\n",
    "y_pred = pipeline.predict(X_test)\n",
    "\n",
    "# 6. 评估模型\n",
    "accuracy = accuracy_score(y_test, y_pred)\n",
    "print(f\"Accuracy: {accuracy:.2f}\")\n",
    "print(\"\\nClassification Report:\")\n",
    "print(classification_report(y_test, y_pred))\n",
    "print(\"\\nConfusion Matrix:\")\n",
    "print(confusion_matrix(y_test, y_pred))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f4e93825-a704-4456-ae52-b8218b1d9082",
   "metadata": {},
   "source": [
    "#### 上面代码的输出结果的说明\n",
    "- Accuracy表示模型在测试集上的准确率，例如98%表明分类效果很好。\n",
    "- 分类报告显示每个类别的精确率、召回率和 F1 分数都在 0.92-1.00 之间，模型性能均衡。\n",
    "- 混淆矩阵显示大多数样本被正确分类，少数错误主要出现在形状相似的数字之间（如 8 和 3、9 和 5）。"
   ]
  },
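  {
   "cell_type": "markdown",
   "id": "7c2e5b44-2b11-4f3a-9d02-6b4f8e2c1b01",
   "metadata": {},
   "source": [
    "#### 补充：手工验证精确率、召回率和 F1\n",
    "为了把上面分类报告中的公式落到实处，下面用一个小的假设样本手工计算某一类别的精确率、召回率和 F1，并与 sklearn 的逐类别结果对照（样本标签为示意用的假设值）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7c2e5b44-2b11-4f3a-9d02-6b4f8e2c1b02",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 用一个小的假设样本手工验证精确率/召回率/F1 的计算公式\n",
    "import numpy as np\n",
    "from sklearn.metrics import confusion_matrix, precision_score, recall_score\n",
    "\n",
    "y_true = np.array([0, 0, 1, 1, 1, 2, 2, 2, 2, 1])\n",
    "y_pred = np.array([0, 1, 1, 1, 2, 2, 2, 2, 1, 1])\n",
    "\n",
    "cm = confusion_matrix(y_true, y_pred)  # 行是真实类别，列是预测类别\n",
    "print(cm)\n",
    "\n",
    "# 手工计算类别 1 的指标：TP 在对角线上，FP 看该列其余项，FN 看该行其余项\n",
    "tp = cm[1, 1]\n",
    "fp = cm[:, 1].sum() - tp  # 预测为 1 但真实不是 1\n",
    "fn = cm[1, :].sum() - tp  # 真实为 1 但预测不是 1\n",
    "precision_1 = tp / (tp + fp)\n",
    "recall_1 = tp / (tp + fn)\n",
    "f1_1 = 2 * precision_1 * recall_1 / (precision_1 + recall_1)\n",
    "print(f'类别 1: precision={precision_1:.3f}, recall={recall_1:.3f}, f1={f1_1:.3f}')\n",
    "\n",
    "# 与 sklearn 的逐类别结果对照，应当一致\n",
    "print('precision (per class):', precision_score(y_true, y_pred, average=None))\n",
    "print('recall (per class):', recall_score(y_true, y_pred, average=None))"
   ]
  },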
  {
   "cell_type": "markdown",
   "id": "f91ffe4a-caed-4c2f-8e48-ee44fb4d8d87",
   "metadata": {},
   "source": [
    "### 扩展：交叉验证和参数调优\n",
    "为了更稳健地评估模型性能并优化超参数，可以使用交叉验证和网格搜索。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a4bc04f9-703b-4b84-9040-11c4d0633e9d",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.model_selection import GridSearchCV\n",
    "\n",
    "# 定义参数网格\n",
    "param_grid = {\n",
    "    'clf__C': [0.1, 1, 10],  # SVM 的正则化参数\n",
    "    'clf__gamma': ['scale', 0.01, 0.1]  # RBF 核的宽度参数\n",
    "}\n",
    "\n",
    "# 创建 GridSearchCV\n",
    "grid_search = GridSearchCV(pipeline, param_grid, cv=5, scoring='accuracy', n_jobs=-1)\n",
    "\n",
    "# 训练和搜索最佳参数\n",
    "grid_search.fit(X_train, y_train)\n",
    "\n",
    "# 输出最佳参数和分数\n",
    "print(f\"Best parameters: {grid_search.best_params_}\")\n",
    "print(f\"Best cross-validation score: {grid_search.best_score_:.2f}\")\n",
    "\n",
    "# 使用最佳模型预测\n",
    "y_pred = grid_search.predict(X_test)\n",
    "print(f\"Test accuracy: {accuracy_score(y_test, y_pred):.2f}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5e3e3eab-f717-47ac-9b94-651479db2168",
   "metadata": {},
   "source": [
    "### 可视化\n",
    "为了直观理解模型表现，下面的代码使用 seaborn 绘制混淆矩阵的热图。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "bb8e597a-2742-45f1-ab1b-9ccf377f27c0",
   "metadata": {},
   "outputs": [],
   "source": [
    "import seaborn as sns\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "# 绘制混淆矩阵\n",
    "cm = confusion_matrix(y_test, y_pred)\n",
    "plt.figure(figsize=(8, 6))\n",
    "sns.heatmap(cm, annot=True, fmt='d', cmap='Blues', xticklabels=digits.target_names, yticklabels=digits.target_names)\n",
    "plt.xlabel('Predicted')\n",
    "plt.ylabel('True')\n",
    "plt.title('Confusion Matrix')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "83f20c95-5ca1-49ce-b69a-d322a6fe9e22",
   "metadata": {},
   "source": [
    "### 回归模型示例"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "828c33c0-cf4b-4ff4-88ff-5d58a54fda15",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 导入必要的库\n",
    "from sklearn.svm import SVR\n",
    "from sklearn.datasets import make_regression\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.metrics import mean_squared_error, r2_score\n",
    "import numpy as np\n",
    "\n",
    "# 使用 make_regression 生成一个包含 1000 个样本、10 个特征的合成回归数据集，带有少量噪声（noise=0.1）\n",
    "X, y = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=42)\n",
    "\n",
    "# 划分训练集和测试集\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)\n",
    "\n",
    "# 数据标准化（SVR对特征尺度敏感）\n",
    "# 使用 StandardScaler 对特征 X 进行标准化（均值为 0，标准差为 1），因为 SVR 对特征尺度敏感\n",
    "scaler_X = StandardScaler()\n",
    "X_train = scaler_X.fit_transform(X_train)\n",
    "X_test = scaler_X.transform(X_test)\n",
    "\n",
    "# 如果目标变量y的尺度变化较大，也可对其标准化（可选）\n",
    "scaler_y = StandardScaler()\n",
    "y_train = scaler_y.fit_transform(y_train.reshape(-1, 1)).ravel()\n",
    "y_test = scaler_y.transform(y_test.reshape(-1, 1)).ravel()\n",
    "\n",
    "# 创建并训练SVR模型（使用RBF核）\n",
    "svr = SVR(kernel='rbf', C=1.0, epsilon=0.1, gamma='scale')\n",
    "svr.fit(X_train, y_train)\n",
    "\n",
    "# 进行预测\n",
    "y_pred = svr.predict(X_test)\n",
    "\n",
    "# 反标准化预测结果（如果对y标准化了）\n",
    "y_pred = scaler_y.inverse_transform(y_pred.reshape(-1, 1)).ravel()\n",
    "y_test = scaler_y.inverse_transform(y_test.reshape(-1, 1)).ravel()\n",
    "\n",
    "# 评估模型\n",
    "# 使用均方误差（MSE）评估预测误差。\n",
    "# 使用 R^2 分数评估模型的解释能力（越接近 1 越好）。\n",
    "mse = mean_squared_error(y_test, y_pred)\n",
    "r2 = r2_score(y_test, y_pred)\n",
    "\n",
    "print(f\"Mean Squared Error (MSE): {mse:.2f}\")\n",
    "print(f\"R² Score: {r2:.2f}\")\n",
    "\n",
    "# 可视化预测结果（以第一个特征为例）\n",
    "import matplotlib.pyplot as plt\n",
    "\n",
    "plt.scatter(X_test[:, 0], y_test, color='blue', label='True values', alpha=0.5)\n",
    "plt.scatter(X_test[:, 0], y_pred, color='red', label='Predicted values', alpha=0.5)\n",
    "plt.xlabel('Feature 1 (Standardized)')\n",
    "plt.ylabel('Target (Original Scale)')\n",
    "plt.title('SVR: True vs Predicted Values')\n",
    "plt.legend()\n",
    "plt.show()"
   ]
  },
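  {
   "cell_type": "markdown",
   "id": "9e4a7d20-5c33-4b1d-8a77-2f0c3d5e9a01",
   "metadata": {},
   "source": [
    "#### 用 TransformedTargetRegressor 简化目标变量的标准化\n",
    "上面对 y 的手工标准化和反标准化也可以交给 sklearn.compose.TransformedTargetRegressor 自动完成：它在 fit 时对 y 做 transform，在 predict 时自动 inverse_transform，避免手工管理 scaler_y。下面是一个等价写法的示意（沿用同样的合成数据集和 SVR 参数）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "9e4a7d20-5c33-4b1d-8a77-2f0c3d5e9a02",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 等价写法示意：TransformedTargetRegressor 自动处理 y 的标准化与反标准化\n",
    "from sklearn.compose import TransformedTargetRegressor\n",
    "from sklearn.datasets import make_regression\n",
    "from sklearn.metrics import mean_squared_error, r2_score\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.pipeline import Pipeline\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.svm import SVR\n",
    "\n",
    "X, y = make_regression(n_samples=1000, n_features=10, noise=0.1, random_state=42)\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)\n",
    "\n",
    "# Pipeline 负责 X 的标准化；TransformedTargetRegressor 负责 y 的标准化（fit 时）和反标准化（predict 时）\n",
    "model = TransformedTargetRegressor(\n",
    "    regressor=Pipeline([\n",
    "        ('scaler', StandardScaler()),\n",
    "        ('svr', SVR(kernel='rbf', C=1.0, epsilon=0.1, gamma='scale'))\n",
    "    ]),\n",
    "    transformer=StandardScaler()\n",
    ")\n",
    "model.fit(X_train, y_train)\n",
    "y_pred = model.predict(X_test)  # 预测结果已自动反标准化回原始尺度\n",
    "\n",
    "print(f'MSE: {mean_squared_error(y_test, y_pred):.2f}')\n",
    "print(f'R²: {r2_score(y_test, y_pred):.2f}')"
   ]
  },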
  {
   "cell_type": "markdown",
   "id": "72d2ca0a-7ce0-4d24-acd6-a73bfb3a2862",
   "metadata": {},
   "source": [
    "### 模型选择示例\n",
    "模型选择（Model Selection）是指在机器学习任务中，从多个候选模型或同一模型的不同超参数配置中，选择性能最佳的模型或配置，以在给定任务上获得最优的预测能力。模型选择的目标是找到一个模型（或模型配置），在测试数据或未见过的数据上具有良好的泛化性能。模型选择通常涉及以下几个方面：\n",
    "1. 选择模型类型：比较不同类型的机器学习算法，例如逻辑回归、支持向量机（SVM）、随机森林、神经网络等。\n",
    "2. 选择超参数：为选定的模型调整超参数，例如 SVM 的正则化参数 C 和核函数参数 gamma，或随机森林的树数量 n_estimators。\n",
    "3. 评估泛化性能：使用验证集或交叉验证（Cross-Validation）评估模型在未见过数据上的表现，避免过拟合。\n",
    "4. 权衡计算成本与性能：在性能和计算复杂度之间找到平衡，特别是在大规模数据集（如 MNIST）上。\n",
    "\n",
    "模型选择通常通过以下方法实现：\n",
    "- 交叉验证：将训练数据分为多折（如 5 折或 10 折），在每折上训练和验证模型，计算平均性能指标（如准确率、F1 分数）。\n",
    "- 网格搜索（Grid Search）：系统地测试超参数的组合，找到最优配置。\n",
    "- 随机搜索（Random Search）：随机采样超参数组合，效率高于网格搜索，尤其在超参数空间较大时。\n",
    "- 自动化模型选择：使用工具（如 AutoML 或 Scikit-Learn 的 RandomizedSearchCV）自动选择模型和超参数。\n",
    "\n",
    "#### 在 MNIST 数据集分类任务上进行模型选择\n",
    "MNIST 是一个图像分类任务，适合的模型包括：\n",
    "- 逻辑回归（Logistic Regression）：简单、快速，适合线性可分数据。\n",
    "- 支持向量机（SVM）：通过核函数（如 RBF）处理非线性关系，效果好但计算成本高。\n",
    "- 随机森林（Random Forest）：集成方法，适合高维数据，训练和预测较快。\n",
    "- K 近邻（K-Nearest Neighbors, KNN）：基于距离的非参数方法，适合小型数据集。\n",
    "- （可选）神经网络：如多层感知机（MLP）或卷积神经网络（CNN），但 CNN 通常需要深度学习框架（如 TensorFlow/Keras）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "913b8bab-bd6e-4755-92dd-16e0e20df1c3",
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.datasets import load_digits\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.svm import SVC\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "from sklearn.pipeline import Pipeline\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "from sklearn.metrics import accuracy_score, classification_report\n",
    "import numpy as np\n",
    "\n",
    "# 加载简化版 MNIST 数据集\n",
    "digits = load_digits()\n",
    "X = digits.data  # 特征：(1797, 64)\n",
    "y = digits.target  # 标签：(1797,)\n",
    "\n",
    "# 划分训练集和测试集\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)\n",
    "\n",
    "# 定义候选模型及其超参数\n",
    "pipelines = {\n",
    "    'logistic': Pipeline([\n",
    "        ('scaler', StandardScaler()),\n",
    "        ('clf', LogisticRegression(max_iter=1000, random_state=42))\n",
    "    ]),\n",
    "    'svm': Pipeline([\n",
    "        ('scaler', StandardScaler()),\n",
    "        ('clf', SVC(random_state=42))\n",
    "    ]),\n",
    "    'random_forest': Pipeline([\n",
    "        ('scaler', StandardScaler()),\n",
    "        ('clf', RandomForestClassifier(random_state=42))\n",
    "    ])\n",
    "}\n",
    "\n",
    "# 定义超参数网格\n",
    "param_grids = {\n",
    "    'logistic': {\n",
    "        'clf__C': [0.1, 1.0, 10.0],  # 正则化强度的倒数\n",
    "        'clf__solver': ['lbfgs', 'liblinear']  # 优化算法\n",
    "    },\n",
    "    'svm': {\n",
    "        'clf__C': [0.1, 1.0, 10.0],  # 正则化参数\n",
    "        'clf__kernel': ['linear', 'rbf'],  # 核函数\n",
    "        'clf__gamma': ['scale', 0.01]  # RBF 核的宽度\n",
    "    },\n",
    "    'random_forest': {\n",
    "        'clf__n_estimators': [50, 100, 200],  # 树的数量\n",
    "        'clf__max_depth': [None, 10, 20]  # 最大深度\n",
    "    }\n",
    "}\n",
    "\n",
    "# 使用交叉验证选择最佳模型：对每个模型执行网格搜索，使用 5 折交叉验证评估性能\n",
    "# 存储最佳模型和分数\n",
    "best_models = {}\n",
    "best_scores = {}\n",
    "\n",
    "for model_name in pipelines:\n",
    "    print(f\"Training {model_name}...\")\n",
    "    grid_search = GridSearchCV(\n",
    "        pipelines[model_name],\n",
    "        param_grids[model_name],\n",
    "        cv=5,  # 5 折交叉验证\n",
    "        scoring='accuracy',  # 使用准确率作为评估指标\n",
    "        n_jobs=-1  # 使用所有 CPU 核心\n",
    "    )\n",
    "    grid_search.fit(X_train, y_train)\n",
    "    \n",
    "    # 保存最佳模型和分数\n",
    "    best_models[model_name] = grid_search.best_estimator_\n",
    "    best_scores[model_name] = grid_search.best_score_\n",
    "    print(f\"Best parameters for {model_name}: {grid_search.best_params_}\")\n",
    "    print(f\"Best CV accuracy for {model_name}: {grid_search.best_score_:.2f}\")\n",
    "\n",
    "# 在测试集上评估并选出最佳模型\n",
    "best_model_name = max(best_scores, key=best_scores.get)\n",
    "best_model = best_models[best_model_name]\n",
    "print(f\"\\nBest model: {best_model_name} with CV accuracy: {best_scores[best_model_name]:.2f}\")\n",
    "\n",
    "# 在测试集上预测\n",
    "y_pred = best_model.predict(X_test)\n",
    "\n",
    "# 评估测试集性能\n",
    "accuracy = accuracy_score(y_test, y_pred)\n",
    "print(f\"Test accuracy: {accuracy:.2f}\")\n",
    "print(\"\\nClassification Report:\")\n",
    "print(classification_report(y_test, y_pred))"
   ]
  },
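  {
   "cell_type": "markdown",
   "id": "b8d1f3a2-6e55-4c09-9f21-4a7b0c2d3e01",
   "metadata": {},
   "source": [
    "#### 补充：用 joblib 持久化训练好的模型\n",
    "学习目标中提到的模型持久化可以用 joblib 完成：把整个 Pipeline（含标准化参数和模型）序列化到磁盘，之后直接加载用于预测。下面是一个最小示意（文件名 best_model.joblib 为示例用途）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "b8d1f3a2-6e55-4c09-9f21-4a7b0c2d3e02",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 模型持久化示意：用 joblib 保存和加载训练好的 Pipeline（文件名为示例用途）\n",
    "import joblib\n",
    "from sklearn.datasets import load_digits\n",
    "from sklearn.linear_model import LogisticRegression\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.pipeline import Pipeline\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "\n",
    "X, y = load_digits(return_X_y=True)\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)\n",
    "\n",
    "model = Pipeline([\n",
    "    ('scaler', StandardScaler()),\n",
    "    ('clf', LogisticRegression(max_iter=1000, random_state=42))\n",
    "])\n",
    "model.fit(X_train, y_train)\n",
    "\n",
    "joblib.dump(model, 'best_model.joblib')    # 保存：整个 Pipeline（含标准化参数）一起序列化\n",
    "loaded = joblib.load('best_model.joblib')  # 加载：得到可直接 predict 的模型\n",
    "\n",
    "# 加载后的模型与原模型预测应当完全一致\n",
    "print('加载模型的测试集准确率:', loaded.score(X_test, y_test))"
   ]
  },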
  {
   "cell_type": "markdown",
   "id": "df3c4cdb-26c8-41c5-b89e-c589ea8fcd4c",
   "metadata": {},
   "source": [
    "### 机器学习在SRE中的应用示例\n",
    "以下示例结合Pipeline、交叉验证和GridSearchCV，实现SRE领域的异常检测任务。假设我们有一个系统指标数据集（模拟CPU和内存使用率），目标是检测异常点。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "fd71142d-0939-44cc-b1b5-32935745d5fb",
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from sklearn.ensemble import IsolationForest\n",
    "from sklearn.model_selection import GridSearchCV, train_test_split\n",
    "from sklearn.pipeline import Pipeline\n",
    "\n",
    "# 1. 模拟系统指标数据（CPU 和内存使用率）\n",
    "np.random.seed(42)\n",
    "X_normal = np.random.normal(loc=0.5, scale=0.1, size=(100, 2))  # 正常数据\n",
    "X_anomaly = np.random.uniform(low=0.9, high=1.0, size=(10, 2))   # 异常数据\n",
    "X = np.vstack([X_normal, X_anomaly])\n",
    "y = np.array([1] * 100 + [-1] * 10)  # 标签：1=正常，-1=异常（仅用于评估）\n",
    "\n",
    "# 2. 分割训练集和测试集\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)\n",
    "\n",
    "# 3. 构建 Pipeline\n",
    "pipeline = Pipeline([\n",
    "    ('scaler', StandardScaler()),           # 标准化\n",
    "    ('isolation_forest', IsolationForest())  # 孤立森林异常检测\n",
    "])\n",
    "\n",
    "# 4. 定义超参数网格\n",
    "param_grid = {\n",
    "    'scaler__with_mean': [True, False],\n",
    "    'isolation_forest__n_estimators': [50, 100, 200],  # 树数量\n",
    "    'isolation_forest__contamination': [0.05, 0.1, 0.2]  # 异常比例\n",
    "}\n",
    "\n",
    "# 5. 初始化 GridSearchCV\n",
    "grid_search = GridSearchCV(\n",
    "    pipeline,\n",
    "    param_grid,\n",
    "    cv=5,                   # 5 折交叉验证\n",
    "    scoring='accuracy',     # 假设有标签评估\n",
    "    n_jobs=-1,\n",
    "    verbose=1\n",
    ")\n",
    "\n",
    "# 6. 训练模型\n",
    "grid_search.fit(X_train, y_train)\n",
    "\n",
    "# 7. 输出结果\n",
    "print(\"\\n=== GridSearchCV 结果 ===\")\n",
    "print(\"最佳参数:\", grid_search.best_params_)\n",
    "print(\"最佳交叉验证准确率:\", grid_search.best_score_)\n",
    "print(\"测试集准确率:\", grid_search.score(X_test, y_test))\n",
    "\n",
    "# 8. 可视化结果\n",
    "import matplotlib.pyplot as plt\n",
    "y_pred = grid_search.predict(X_test)\n",
    "plt.scatter(X_test[:, 0], X_test[:, 1], c=['red' if x == -1 else 'blue' for x in y_pred])\n",
    "plt.xlabel('CPU Usage')\n",
    "plt.ylabel('Memory Usage')\n",
    "plt.title('Anomaly Detection Results')\n",
    "plt.show()"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.11"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
