{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "a9a1de56",
   "metadata": {},
   "source": [
    "# Day 11: 常见的调参方式\n",
    "\n",
    "## 📋 目录\n",
    "1. [课程概述](#1-课程概述)\n",
    "2. [数据预处理](#2-数据预处理)\n",
    "3. [数据集划分](#3-数据集划分)\n",
    "4. [调参方法介绍](#4-调参方法介绍)\n",
    "5. [实战：随机森林调参](#5-实战随机森林调参)\n",
    "   - 5.1 [基线模型（默认参数）](#51-基线模型默认参数)\n",
    "   - 5.2 [网格搜索优化](#52-网格搜索优化)\n",
    "   - 5.3 [贝叶斯优化（skopt）](#53-贝叶斯优化skopt)\n",
    "   - 5.4 [贝叶斯优化（bayesian-optimization）](#54-贝叶斯优化bayesian-optimization)\n",
    "6. [总结与对比](#6-总结与对比)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3ead736a",
   "metadata": {},
   "source": [
    "## 1. 课程概述\n",
    "\n",
    "### 核心知识点回顾\n",
    "\n",
    "1. **模型组成** = 算法 + 实例化设置的外参（超参数）+ 训练得到的内参\n",
    "\n",
    "2. **调参原则**：只要调参就需要**考2次**\n",
    "   - 传统方式：划分训练集、验证集、测试集\n",
    "   - 现代方式：很多调参函数自带交叉验证（可省去验证集）\n",
    "\n",
    "### 学习目标\n",
    "\n",
    "本节课将学习三种主流调参方法：\n",
    "- ✅ **网格搜索（GridSearchCV）**：穷举式搜索\n",
    "- ✅ **随机搜索（RandomizedSearchCV）**：随机采样---只是一种思想\n",
    "- ✅ **贝叶斯优化（BayesSearchCV）**：智能优化\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "761873db",
   "metadata": {},
   "source": [
    "## 2. 数据预处理\n",
    "\n",
    "运行之前学习过的数据预处理代码，包括：\n",
    "- 导入必要的库\n",
    "- 读取数据\n",
    "- 特征工程（标签编码、独热编码）\n",
    "- 缺失值处理"
   ]
  },
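  {
   "cell_type": "markdown",
   "id": "f1a2b3c4",
   "metadata": {},
   "source": [
    "One detail worth knowing before running the encoding code below: `Series.map` with a dict replaces matched keys and turns every *unmatched* value (including existing NaN) into NaN, which is why the missing-value fill happens after the encoding steps. A minimal sketch on toy data (not this notebook's dataset):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# Unmatched values and None both become NaN after map()\n",
    "s = pd.Series(['Own Home', 'Rent', None, 'Unknown'])\n",
    "mapping = {'Own Home': 1, 'Rent': 2, 'Have Mortgage': 3, 'Home Mortgage': 4}\n",
    "encoded = s.map(mapping)\n",
    "print(encoded.tolist())  # [1.0, 2.0, nan, nan]\n",
    "```"
   ]
  },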
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "42413c3f",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import pandas as pd    #用于数据处理和分析，可处理表格数据。\n",
    "import numpy as np     #用于数值计算，提供了高效的数组操作。\n",
    "import matplotlib.pyplot as plt    #用于绘制各种类型的图表\n",
    "import seaborn as sns   #基于matplotlib的高级绘图库，能绘制更美观的统计图形。\n",
    "import warnings\n",
    "warnings.filterwarnings('ignore')  #忽略警告信息，保持输出清洁。\n",
    " \n",
    " # 设置中文字体（解决中文显示问题）\n",
    "plt.rcParams['font.sans-serif'] = ['SimHei']  # Windows系统常用黑体字体\n",
    "plt.rcParams['axes.unicode_minus'] = False    # 正常显示负号\n",
    "data = pd.read_csv('E:\\study\\PythonStudy\\python60-days-challenge-master\\data.csv')    #读取数据\n",
    "\n",
    "# 先筛选字符串变量 \n",
    "discrete_features = data.select_dtypes(include=['object']).columns.tolist()\n",
    "# Home Ownership 标签编码\n",
    "home_ownership_mapping = {\n",
    "    'Own Home': 1,\n",
    "    'Rent': 2,\n",
    "    'Have Mortgage': 3,\n",
    "    'Home Mortgage': 4\n",
    "}\n",
    "data['Home Ownership'] = data['Home Ownership'].map(home_ownership_mapping)\n",
    "\n",
    "# Years in current job 标签编码\n",
    "years_in_job_mapping = {\n",
    "    '< 1 year': 1,\n",
    "    '1 year': 2,\n",
    "    '2 years': 3,\n",
    "    '3 years': 4,\n",
    "    '4 years': 5,\n",
    "    '5 years': 6,\n",
    "    '6 years': 7,\n",
    "    '7 years': 8,\n",
    "    '8 years': 9,\n",
    "    '9 years': 10,\n",
    "    '10+ years': 11\n",
    "}\n",
    "data['Years in current job'] = data['Years in current job'].map(years_in_job_mapping)\n",
    "\n",
    "# Purpose 独热编码，记得需要将bool类型转换为数值\n",
    "data = pd.get_dummies(data, columns=['Purpose'])\n",
    "data2 = pd.read_csv(\"E:\\study\\PythonStudy\\python60-days-challenge-master\\data.csv\") # 重新读取数据，用来做列名对比\n",
    "list_final = [] # 新建一个空列表，用于存放独热编码后新增的特征名\n",
    "for i in data.columns:\n",
    "    if i not in data2.columns:\n",
    "       list_final.append(i) # 这里打印出来的就是独热编码后的特征名\n",
    "for i in list_final:\n",
    "    data[i] = data[i].astype(int) # 这里的i就是独热编码后的特征名\n",
    "\n",
    "\n",
    "\n",
    "# Term 0 - 1 映射\n",
    "term_mapping = {\n",
    "    'Short Term': 0,\n",
    "    'Long Term': 1\n",
    "}\n",
    "data['Term'] = data['Term'].map(term_mapping)\n",
    "data.rename(columns={'Term': 'Long Term'}, inplace=True) # 重命名列\n",
    "continuous_features = data.select_dtypes(include=['int64', 'float64']).columns.tolist()  #把筛选出来的列名转换成列表\n",
    " \n",
    " # 连续特征用中位数补全\n",
    "for feature in continuous_features:     \n",
    "    mode_value = data[feature].mode()[0]            #获取该列的众数。\n",
    "    data[feature].fillna(mode_value, inplace=True)          #用众数填充该列的缺失值，inplace=True表示直接在原数据上修改。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "04b97981",
   "metadata": {},
   "source": [
    "## 3. 数据集划分\n",
    "\n",
    "### 3.1 方案一：三分法（训练集 + 验证集 + 测试集）\n",
    "\n",
    "当不使用交叉验证时，需要划分出验证集用于调参。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "afcc2495",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 划分训练集、验证集和测试集，因为需要考2次\n",
    "# 这里演示一下如何2次划分数据集，因为这个函数只能划分一次，所以需要调用两次才能划分出训练集、验证集和测试集。\n",
    "from sklearn.model_selection import train_test_split\n",
    "X = data.drop(['Credit Default'], axis=1)  # 特征，axis=1表示按列删除\n",
    "y = data['Credit Default']  # 标签\n",
    "# 按照8:1:1划分训练集、验证集和测试集\n",
    "X_train, X_temp, y_train, y_temp = train_test_split(X, y, test_size=0.2, random_state=42)  # 80%训练集，20%临时集\n",
    "X_val, X_test, y_val, y_test = train_test_split(X_temp, y_temp, test_size=0.5, random_state=42)  # 50%验证集，50%测试集"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "e5601598",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Data shapes:\n",
      "X_train: (6000, 31)\n",
      "y_train: (6000,)\n",
      "X_val: (750, 31)\n",
      "y_val: (750,)\n",
      "X_test: (750, 31)\n",
      "y_test: (750,)\n"
     ]
    }
   ],
   "source": [
    "# X_train, y_train (80%)\n",
    "# X_val, y_val (10%)\n",
    "# X_test, y_test (10%)\n",
    "\n",
    "print(\"Data shapes:\")\n",
    "print(\"X_train:\", X_train.shape)\n",
    "print(\"y_train:\", y_train.shape)\n",
    "print(\"X_val:\", X_val.shape)\n",
    "print(\"y_val:\", y_val.shape)\n",
    "print(\"X_test:\", X_test.shape)\n",
    "print(\"y_test:\", y_test.shape)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "fa237100",
   "metadata": {},
   "source": [
    "### 3.2 方案二：二分法（训练集 + 测试集）⭐ 推荐\n",
    "\n",
    "由于调参函数大多自带交叉验证，实际使用中只需要划分训练集和测试集。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "a6438975",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 最开始也说了 很多调参函数自带交叉验证，甚至是必选的参数，你如果想要不交叉反而实现起来会麻烦很多\n",
    "# 所以这里我们还是只划分一次数据集\n",
    "from sklearn.model_selection import train_test_split\n",
    "X = data.drop(['Credit Default'], axis=1)  # 特征，axis=1表示按列删除\n",
    "y = data['Credit Default'] # 标签\n",
    "# 按照8:2划分训练集和测试集\n",
    "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)  # 80%训练集，20%测试集\n"
   ]
  },
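  {
   "cell_type": "markdown",
   "id": "c8e1f0aa",
   "metadata": {},
   "source": [
    "Cross-validation is what replaces the validation set here. A minimal sketch on synthetic data (a stand-in, not this notebook's `X_train`), showing the \"second exam\" happening inside `cross_val_score`:\n",
    "\n",
    "```python\n",
    "from sklearn.datasets import make_classification\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "from sklearn.model_selection import cross_val_score\n",
    "\n",
    "# Synthetic stand-in for the notebook's training data\n",
    "X_demo, y_demo = make_classification(n_samples=300, n_features=10, random_state=42)\n",
    "\n",
    "# 5-fold CV: each fold takes a turn as the held-out 'validation' set,\n",
    "# so no separate X_val split is needed\n",
    "scores = cross_val_score(RandomForestClassifier(n_estimators=20, random_state=42),\n",
    "                         X_demo, y_demo, cv=5, scoring='accuracy')\n",
    "print(len(scores), round(scores.mean(), 3))  # 5 fold scores, averaged into one estimate\n",
    "```"
   ]
  },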
  {
   "cell_type": "markdown",
   "id": "32c1326b",
   "metadata": {},
   "source": [
    "### 3.3 导入评估工具"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "916d4710",
   "metadata": {},
   "outputs": [],
   "source": [
    "\n",
    "from sklearn.ensemble import RandomForestClassifier #随机森林分类器\n",
    "\n",
    "from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score # 用于评估分类器性能的指标\n",
    "from sklearn.metrics import classification_report, confusion_matrix #用于生成分类报告和混淆矩阵\n",
    "import warnings #用于忽略警告信息\n",
    "warnings.filterwarnings(\"ignore\") # 忽略所有警告信息"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f6eef27e",
   "metadata": {},
   "source": [
    "## 4. 调参方法介绍\n",
    "\n",
    "### 4.1 三种主流调参方法对比\n",
    "\n",
    "| 方法 | 原理 | 优点 | 缺点 | 适用场景 |\n",
    "|------|------|------|------|----------|\n",
    "| **网格搜索** | 穷举所有参数组合 | 能找到最优解 | 计算量大，维度灾难 | 参数空间小，计算资源充足 |\n",
    "| **随机搜索** | 随机采样参数组合 | 效率高于网格搜索 | 可能错过最优解 | 参数空间大，中等计算资源 |\n",
    "| **贝叶斯优化** | 基于概率模型智能搜索 | 高效，收敛快 | 实现复杂 | 参数空间大，计算资源有限 |\n",
    "\n",
    "### 4.2 基线模型（Baseline）\n",
    "\n",
    "在调参前，先建立基线模型：\n",
    "- 使用**默认参数**训练模型\n",
    "- 记录性能指标作为**对比基准**\n",
    "- 后续调参效果以此为参照\n",
    "\n",
    "### 4.3 详细说明\n",
    "\n",
    "1️⃣ 网格搜索 (GridSearchCV)\n",
    "- 需要定义参数的**固定列表**（param_grid）\n",
    "- 尝试所有可能的参数组合\n",
    "- ⚠️ 计算成本高，参数多时组合呈指数级增长\n",
    "\n",
    "2️⃣ 随机搜索 (RandomizedSearchCV)\n",
    "- 定义参数的**分布范围**\n",
    "- 随机采样指定次数（如 50-100 次）\n",
    "- ✅ 对于给定计算预算，通常比网格搜索更有效\n",
    "\n",
    "3️⃣ 贝叶斯优化 (BayesSearchCV)\n",
    "- 定义参数的**搜索空间**\n",
    "- 根据先验结果建立概率模型（高斯过程）\n",
    "- 智能选择下一个最有潜力的参数组合\n",
    "- ✅ 通常用更少迭代达到更好效果\n",
    "\n",
    "### 4.4 选择建议\n",
    "\n",
    "```\n",
    "计算资源充足 → 网格搜索\n",
    "计算资源有限 → 贝叶斯优化\n",
    "介于中间     → 随机搜索\n",
    "```"
   ]
  },
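  {
   "cell_type": "markdown",
   "id": "ab12cd34",
   "metadata": {},
   "source": [
    "To make the compute trade-off concrete, the number of model fits each method performs can be counted as (candidate combinations) × (CV folds); the grid size and `n_iter` values below match the settings used in section 5:\n",
    "\n",
    "```python\n",
    "# Grid from section 5.2: 3 * 4 * 3 * 3 = 108 combinations\n",
    "grid_combos = 3 * 4 * 3 * 3\n",
    "cv_folds = 5\n",
    "\n",
    "print('grid search fits:  ', grid_combos * cv_folds)  # 108 * 5 = 540\n",
    "print('random search fits:', 50 * cv_folds)           # n_iter=50 -> 250\n",
    "print('bayes search fits: ', 32 * cv_folds)           # n_iter=32 -> 160\n",
    "```"
   ]
  },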
  {
   "cell_type": "markdown",
   "id": "9d55ceba",
   "metadata": {},
   "source": [
    "## 5. 实战：随机森林调参\n",
    "\n",
    "使用三种方法对随机森林进行超参数优化，并对比效果。\n",
    "\n",
    "### 5.1 基线模型（默认参数）"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "a5222839",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "--- 1. 默认参数随机森林 (训练集 -> 测试集) ---\n",
      "训练与预测耗时: 1.8770 秒\n",
      "\n",
      "默认随机森林 在测试集上的分类报告：\n",
      "              precision    recall  f1-score   support\n",
      "\n",
      "           0       0.77      0.97      0.86      1059\n",
      "           1       0.79      0.30      0.43       441\n",
      "\n",
      "    accuracy                           0.77      1500\n",
      "   macro avg       0.78      0.63      0.64      1500\n",
      "weighted avg       0.77      0.77      0.73      1500\n",
      "\n",
      "默认随机森林 在测试集上的混淆矩阵：\n",
      "[[1023   36]\n",
      " [ 309  132]]\n"
     ]
    }
   ],
   "source": [
    "# --- 1. 默认参数的随机森林 ---\n",
    "# 评估基准模型，这里确实不需要验证集\n",
    "print(\"--- 1. 默认参数随机森林 (训练集 -> 测试集) ---\")\n",
    "import time # 这里介绍一个新的库，time库，主要用于时间相关的操作，因为调参需要很长时间，记录下会帮助后人知道大概的时长\n",
    "start_time = time.time() # 记录开始时间\n",
    "rf_model = RandomForestClassifier(random_state=42)\n",
    "rf_model.fit(X_train, y_train) # 在训练集上训练\n",
    "rf_pred = rf_model.predict(X_test) # 在测试集上预测\n",
    "end_time = time.time() # 记录结束时间\n",
    "\n",
    "print(f\"训练与预测耗时: {end_time - start_time:.4f} 秒\")\n",
    "print(\"\\n默认随机森林 在测试集上的分类报告：\")\n",
    "print(classification_report(y_test, rf_pred))\n",
    "print(\"默认随机森林 在测试集上的混淆矩阵：\")\n",
    "print(confusion_matrix(y_test, rf_pred))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "024a1d3c",
   "metadata": {},
   "source": [
    "### 5.2 网格搜索优化\n",
    "\n",
    "\n",
    "网格搜索是 scikit-learn 内置功能，无需额外安装。\n",
    "\n",
    "\n",
    "网格搜索会尝试所有参数组合，计算量较大但能找到局部最优解。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "8708baf7",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "--- 2. 网格搜索优化随机森林 (训练集 -> 测试集) ---\n",
      "网格搜索耗时: 34.8438 秒\n",
      "最佳参数:  {'max_depth': 20, 'min_samples_leaf': 1, 'min_samples_split': 2, 'n_estimators': 200}\n",
      "\n",
      "网格搜索优化后的随机森林 在测试集上的分类报告：\n",
      "              precision    recall  f1-score   support\n",
      "\n",
      "           0       0.76      0.97      0.86      1059\n",
      "           1       0.80      0.28      0.42       441\n",
      "\n",
      "    accuracy                           0.77      1500\n",
      "   macro avg       0.78      0.63      0.64      1500\n",
      "weighted avg       0.77      0.77      0.73      1500\n",
      "\n",
      "网格搜索优化后的随机森林 在测试集上的混淆矩阵：\n",
      "[[1028   31]\n",
      " [ 317  124]]\n"
     ]
    }
   ],
   "source": [
    "# --- 2. 网格搜索优化随机森林 ---\n",
    "print(\"\\n--- 2. 网格搜索优化随机森林 (训练集 -> 测试集) ---\")\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "\n",
    "# 定义要搜索的参数网格\n",
    "param_grid = {\n",
    "    'n_estimators': [50, 100, 200],\n",
    "    'max_depth': [None, 10, 20, 30],\n",
    "    'min_samples_split': [2, 5, 10],\n",
    "    'min_samples_leaf': [1, 2, 4]\n",
    "}\n",
    "\n",
    "# 创建网格搜索对象\n",
    "grid_search = GridSearchCV(estimator=RandomForestClassifier(random_state=42), # 随机森林分类器\n",
    "                           param_grid=param_grid, # 参数网格\n",
    "                           cv=5, # 5折交叉验证\n",
    "                           n_jobs=-1, # 使用所有可用的CPU核心进行并行计算\n",
    "                           scoring='accuracy') # 使用准确率作为评分标准\n",
    "\n",
    "start_time = time.time()\n",
    "# 在训练集上进行网格搜索\n",
    "grid_search.fit(X_train, y_train) # 在训练集上训练，模型实例化和训练的方法都被封装在这个网格搜索对象里了\n",
    "end_time = time.time()\n",
    "\n",
    "print(f\"网格搜索耗时: {end_time - start_time:.4f} 秒\")\n",
    "print(\"最佳参数: \", grid_search.best_params_) #best_params_属性返回最佳参数组合\n",
    "\n",
    "# 使用最佳参数的模型进行预测\n",
    "best_model = grid_search.best_estimator_ # 获取最佳模型\n",
    "best_pred = best_model.predict(X_test) # 在测试集上进行预测\n",
    "\n",
    "print(\"\\n网格搜索优化后的随机森林 在测试集上的分类报告：\")\n",
    "print(classification_report(y_test, best_pred))\n",
    "print(\"网格搜索优化后的随机森林 在测试集上的混淆矩阵：\")\n",
    "print(confusion_matrix(y_test, best_pred))"
   ]
  },
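  {
   "cell_type": "markdown",
   "id": "de56fa78",
   "metadata": {},
   "source": [
    "Beyond `best_params_`, a fitted search object also exposes `cv_results_`, which records every candidate's mean CV score — useful for seeing how close the runners-up were. A small sketch on a toy grid (synthetic data, not the `grid_search` above):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "from sklearn.datasets import make_classification\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "\n",
    "X_demo, y_demo = make_classification(n_samples=200, random_state=42)\n",
    "gs = GridSearchCV(RandomForestClassifier(random_state=42),\n",
    "                  {'n_estimators': [10, 30], 'max_depth': [3, None]},\n",
    "                  cv=3, scoring='accuracy')\n",
    "gs.fit(X_demo, y_demo)\n",
    "\n",
    "# One row per candidate combination, ranked by mean CV accuracy\n",
    "results = pd.DataFrame(gs.cv_results_)[['params', 'mean_test_score', 'rank_test_score']]\n",
    "print(results.sort_values('rank_test_score'))\n",
    "```"
   ]
  },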
  {
   "cell_type": "markdown",
   "id": "f974b204",
   "metadata": {},
   "source": [
    "### 5.3 随机搜索优化\n",
    "\n",
    "随机搜索在参数空间中随机采样，通常比网格搜索更高效。\n",
    "\n",
    "一般用随机搜索的很少，原因是如果你一般能跑30min，那5h你就认了；如果本来需要跑10000h，那么优化到3000h你也扛不住\n",
    "\n",
    "在复杂项目上随机优化比贝叶斯差很多，再简单场景比贝叶斯效率高，但是没必要\n",
    "\n",
    "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "daad3adc",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "--- 2. 随机搜索优化随机森林 (训练集 -> 测试集) ---\n",
      "随机搜索耗时: 14.0266 秒\n",
      "最佳参数:  {'max_depth': 20, 'min_samples_leaf': 3, 'min_samples_split': 2, 'n_estimators': 99}\n",
      "\n",
      "随机搜索优化后的随机森林 在测试集上的分类报告：\n",
      "              precision    recall  f1-score   support\n",
      "\n",
      "           0       0.76      0.98      0.86      1059\n",
      "           1       0.83      0.27      0.40       441\n",
      "\n",
      "    accuracy                           0.77      1500\n",
      "   macro avg       0.79      0.62      0.63      1500\n",
      "weighted avg       0.78      0.77      0.72      1500\n",
      "\n",
      "随机搜索优化后的随机森林 在测试集上的混淆矩阵：\n",
      "[[1034   25]\n",
      " [ 323  118]]\n"
     ]
    }
   ],
   "source": [
    "# --- 2. 随机搜索优化随机森林 ---\n",
    "print(\"\\n--- 2. 随机搜索优化随机森林 (训练集 -> 测试集) ---\")\n",
    "from sklearn.model_selection import RandomizedSearchCV\n",
    "from scipy.stats import randint\n",
    "\n",
    "# 定义参数分布（使用分布而非固定列表）\n",
    "param_distributions = {\n",
    "    'n_estimators': randint(50, 200),           # 从50-200之间随机整数\n",
    "    'max_depth': [None, 10, 20, 30],            # 也可以用固定列表\n",
    "    'min_samples_split': randint(2, 11),        # 从2-10之间随机整数\n",
    "    'min_samples_leaf': randint(1, 5)           # 从1-4之间随机整数\n",
    "}\n",
    "\n",
    "# 创建随机搜索对象\n",
    "random_search = RandomizedSearchCV(\n",
    "    estimator=RandomForestClassifier(random_state=42),\n",
    "    param_distributions=param_distributions,\n",
    "    n_iter=50,          # 随机采样50次（可调整）\n",
    "    cv=5,               # 5折交叉验证\n",
    "    n_jobs=-1,          # 使用所有CPU核心\n",
    "    scoring='accuracy',\n",
    "    random_state=42     # 保证结果可复现\n",
    ")\n",
    "\n",
    "start_time = time.time()\n",
    "# 在训练集上进行随机搜索\n",
    "random_search.fit(X_train, y_train)\n",
    "end_time = time.time()\n",
    "\n",
    "print(f\"随机搜索耗时: {end_time - start_time:.4f} 秒\")\n",
    "print(\"最佳参数: \", random_search.best_params_)\n",
    "\n",
    "# 使用最佳参数的模型进行预测\n",
    "best_model_random = random_search.best_estimator_\n",
    "best_pred_random = best_model_random.predict(X_test)\n",
    "\n",
    "print(\"\\n随机搜索优化后的随机森林 在测试集上的分类报告：\")\n",
    "print(classification_report(y_test, best_pred_random))\n",
    "print(\"随机搜索优化后的随机森林 在测试集上的混淆矩阵：\")\n",
    "print(confusion_matrix(y_test, best_pred_random))"
   ]
  },
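  {
   "cell_type": "markdown",
   "id": "0badc0de",
   "metadata": {},
   "source": [
    "A note on the scipy distributions used above: `randint(a, b)` is a frozen distribution whose upper bound is **exclusive**, and `RandomizedSearchCV` draws each candidate from it with `.rvs()`. A quick check:\n",
    "\n",
    "```python\n",
    "from scipy.stats import randint\n",
    "\n",
    "dist = randint(50, 200)\n",
    "draws = dist.rvs(size=1000, random_state=42)\n",
    "\n",
    "# All draws fall in [50, 199]; 200 itself is never drawn\n",
    "print(draws.min(), draws.max())\n",
    "```"
   ]
  },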
  {
   "cell_type": "markdown",
   "id": "4c6bc9cd",
   "metadata": {},
   "source": [
    "### 5.4 贝叶斯优化（skopt）\n",
    "\n",
    "使用 `scikit-optimize` 库的 `BayesSearchCV`，代码风格与网格搜索高度一致。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "a70a5b31",
   "metadata": {},
   "outputs": [],
   "source": [
    "# pip install scikit-optimize -i https://pypi.tuna.tsinghua.edu.cn/simple"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "3cfda225",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "--- 2. 贝叶斯优化随机森林 (训练集 -> 测试集) ---\n",
      "贝叶斯优化耗时: 40.7741 秒\n",
      "最佳参数:  OrderedDict([('max_depth', 24), ('min_samples_leaf', 4), ('min_samples_split', 10), ('n_estimators', 60)])\n",
      "\n",
      "贝叶斯优化后的随机森林 在测试集上的分类报告：\n",
      "              precision    recall  f1-score   support\n",
      "\n",
      "           0       0.76      0.97      0.85      1059\n",
      "           1       0.81      0.27      0.41       441\n",
      "\n",
      "    accuracy                           0.77      1500\n",
      "   macro avg       0.78      0.62      0.63      1500\n",
      "weighted avg       0.78      0.77      0.72      1500\n",
      "\n",
      "贝叶斯优化后的随机森林 在测试集上的混淆矩阵：\n",
      "[[1030   29]\n",
      " [ 321  120]]\n"
     ]
    }
   ],
   "source": [
    "# --- 2. 贝叶斯优化随机森林 ---\n",
    "print(\"\\n--- 2. 贝叶斯优化随机森林 (训练集 -> 测试集) ---\")\n",
    "from skopt import BayesSearchCV\n",
    "from skopt.space import Integer\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "from sklearn.metrics import classification_report, confusion_matrix\n",
    "import time\n",
    "\n",
    "# 定义要搜索的参数空间\n",
    "search_space = {\n",
    "    'n_estimators': Integer(50, 200),\n",
    "    'max_depth': Integer(10, 30),\n",
    "    'min_samples_split': Integer(2, 10),\n",
    "    'min_samples_leaf': Integer(1, 4)\n",
    "}\n",
    "\n",
    "# 创建贝叶斯优化搜索对象\n",
    "bayes_search = BayesSearchCV(\n",
    "    estimator=RandomForestClassifier(random_state=42),\n",
    "    search_spaces=search_space,\n",
    "    n_iter=32,  # 迭代次数，可根据需要调整\n",
    "    cv=5, # 5折交叉验证，这个参数是必须的，不能设置为1，否则就是在训练集上做预测了\n",
    "    n_jobs=-1,\n",
    "    scoring='accuracy'\n",
    ")\n",
    "\n",
    "start_time = time.time()\n",
    "# 在训练集上进行贝叶斯优化搜索\n",
    "bayes_search.fit(X_train, y_train)\n",
    "end_time = time.time()\n",
    "\n",
    "print(f\"贝叶斯优化耗时: {end_time - start_time:.4f} 秒\")\n",
    "print(\"最佳参数: \", bayes_search.best_params_)\n",
    "\n",
    "# 使用最佳参数的模型进行预测\n",
    "best_model = bayes_search.best_estimator_\n",
    "best_pred = best_model.predict(X_test)\n",
    "\n",
    "print(\"\\n贝叶斯优化后的随机森林 在测试集上的分类报告：\")\n",
    "print(classification_report(y_test, best_pred))\n",
    "print(\"贝叶斯优化后的随机森林 在测试集上的混淆矩阵：\")\n",
    "print(confusion_matrix(y_test, best_pred))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d5ae18ec",
   "metadata": {},
   "source": [
    "### 5.5 贝叶斯优化（bayesian-optimization）⭐ 进阶\n",
    "\n",
    "#### 方法特点\n",
    "\n",
    "使用 `bayesian-optimization` 库实现，相比 skopt 有以下优势：\n",
    "\n",
    "✅ **更灵活的自定义**\n",
    "- 可以自定义目标函数\n",
    "- 可以选择是否使用交叉验证\n",
    "- 评估指标可自由修改\n",
    "\n",
    "✅ **更好的可视化**\n",
    "- `verbose` 参数可输出详细的迭代过程\n",
    "- 实时查看优化进度\n",
    "\n",
    "✅ **更精细的控制**\n",
    "- `init_points`：初始随机采样点数\n",
    "- `n_iter`：优化迭代次数\n",
    "\n",
    "> 💡 **提示**：此方法仅供参考和知识拓展，不做强制要求。\n",
    "\n",
    "#### 安装依赖"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "18066c5d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# pip install bayesian-optimization -i https://mirrors.aliyun.com/pypi/simple/"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "970b70ab",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "\n",
      "--- 2. 贝叶斯优化随机森林 (训练集 -> 测试集) ---\n",
      "|   iter    |  target   | n_esti... | max_depth | min_sa... | min_sa... |\n",
      "-------------------------------------------------------------------------\n",
      "| \u001b[39m1        \u001b[39m | \u001b[39m0.78     \u001b[39m | \u001b[39m106.18101\u001b[39m | \u001b[39m29.014286\u001b[39m | \u001b[39m7.8559515\u001b[39m | \u001b[39m2.7959754\u001b[39m |\n",
      "| \u001b[35m2        \u001b[39m | \u001b[35m0.7801666\u001b[39m | \u001b[35m73.402796\u001b[39m | \u001b[35m13.119890\u001b[39m | \u001b[35m2.4646688\u001b[39m | \u001b[35m3.5985284\u001b[39m |\n",
      "| \u001b[35m3        \u001b[39m | \u001b[35m0.7818333\u001b[39m | \u001b[35m140.16725\u001b[39m | \u001b[35m24.161451\u001b[39m | \u001b[35m2.1646759\u001b[39m | \u001b[35m3.9097295\u001b[39m |\n",
      "| \u001b[39m4        \u001b[39m | \u001b[39m0.7798333\u001b[39m | \u001b[39m174.86639\u001b[39m | \u001b[39m14.246782\u001b[39m | \u001b[39m3.4545997\u001b[39m | \u001b[39m1.5502135\u001b[39m |\n",
      "| \u001b[39m5        \u001b[39m | \u001b[39m0.7796666\u001b[39m | \u001b[39m95.636336\u001b[39m | \u001b[39m20.495128\u001b[39m | \u001b[39m5.4555601\u001b[39m | \u001b[39m1.8736874\u001b[39m |\n",
      "| \u001b[39m6        \u001b[39m | \u001b[39m0.7796666\u001b[39m | \u001b[39m136.74202\u001b[39m | \u001b[39m12.179999\u001b[39m | \u001b[39m3.7531597\u001b[39m | \u001b[39m3.4559411\u001b[39m |\n",
      "| \u001b[39m7        \u001b[39m | \u001b[39m0.7815   \u001b[39m | \u001b[39m117.60857\u001b[39m | \u001b[39m22.977345\u001b[39m | \u001b[39m3.2291009\u001b[39m | \u001b[39m3.2778986\u001b[39m |\n",
      "| \u001b[35m8        \u001b[39m | \u001b[35m0.7821666\u001b[39m | \u001b[35m118.93021\u001b[39m | \u001b[35m14.047274\u001b[39m | \u001b[35m2.7242584\u001b[39m | \u001b[35m1.9451177\u001b[39m |\n",
      "| \u001b[39m9        \u001b[39m | \u001b[39m0.7801666\u001b[39m | \u001b[39m118.97801\u001b[39m | \u001b[39m13.352471\u001b[39m | \u001b[39m2.6269215\u001b[39m | \u001b[39m1.2405415\u001b[39m |\n",
      "| \u001b[35m10       \u001b[39m | \u001b[35m0.783    \u001b[39m | \u001b[35m128.79383\u001b[39m | \u001b[35m13.190843\u001b[39m | \u001b[35m8.6087443\u001b[39m | \u001b[35m3.2302918\u001b[39m |\n",
      "| \u001b[35m11       \u001b[39m | \u001b[35m0.7831666\u001b[39m | \u001b[35m192.35274\u001b[39m | \u001b[35m29.605227\u001b[39m | \u001b[35m6.8716504\u001b[39m | \u001b[35m2.4941064\u001b[39m |\n",
      "| \u001b[39m12       \u001b[39m | \u001b[39m0.7821666\u001b[39m | \u001b[39m88.231233\u001b[39m | \u001b[39m29.321446\u001b[39m | \u001b[39m2.6936871\u001b[39m | \u001b[39m3.0287579\u001b[39m |\n",
      "| \u001b[39m13       \u001b[39m | \u001b[39m0.7783333\u001b[39m | \u001b[39m64.051922\u001b[39m | \u001b[39m29.407630\u001b[39m | \u001b[39m9.4222027\u001b[39m | \u001b[39m3.7586886\u001b[39m |\n",
      "| \u001b[39m14       \u001b[39m | \u001b[39m0.7795   \u001b[39m | \u001b[39m131.50758\u001b[39m | \u001b[39m29.956430\u001b[39m | \u001b[39m5.5975075\u001b[39m | \u001b[39m2.6256721\u001b[39m |\n",
      "| \u001b[35m15       \u001b[39m | \u001b[35m0.7843333\u001b[39m | \u001b[35m115.38666\u001b[39m | \u001b[35m20.337056\u001b[39m | \u001b[35m3.6425684\u001b[39m | \u001b[35m3.3390254\u001b[39m |\n",
      "| \u001b[39m16       \u001b[39m | \u001b[39m0.7825000\u001b[39m | \u001b[39m139.68231\u001b[39m | \u001b[39m20.248577\u001b[39m | \u001b[39m7.8924204\u001b[39m | \u001b[39m2.3485132\u001b[39m |\n",
      "| \u001b[39m17       \u001b[39m | \u001b[39m0.7808333\u001b[39m | \u001b[39m130.39325\u001b[39m | \u001b[39m22.181066\u001b[39m | \u001b[39m5.0025705\u001b[39m | \u001b[39m3.0651501\u001b[39m |\n",
      "| \u001b[39m18       \u001b[39m | \u001b[39m0.7818333\u001b[39m | \u001b[39m157.93234\u001b[39m | \u001b[39m16.969411\u001b[39m | \u001b[39m2.3577145\u001b[39m | \u001b[39m2.0590190\u001b[39m |\n",
      "| \u001b[39m19       \u001b[39m | \u001b[39m0.781    \u001b[39m | \u001b[39m87.901862\u001b[39m | \u001b[39m21.260038\u001b[39m | \u001b[39m8.0092253\u001b[39m | \u001b[39m3.0670069\u001b[39m |\n",
      "| \u001b[39m20       \u001b[39m | \u001b[39m0.7793333\u001b[39m | \u001b[39m95.142647\u001b[39m | \u001b[39m12.963737\u001b[39m | \u001b[39m2.3614368\u001b[39m | \u001b[39m3.2323328\u001b[39m |\n",
      "| \u001b[39m21       \u001b[39m | \u001b[39m0.7776666\u001b[39m | \u001b[39m134.01230\u001b[39m | \u001b[39m12.200086\u001b[39m | \u001b[39m6.0082886\u001b[39m | \u001b[39m1.4874907\u001b[39m |\n",
      "| \u001b[39m22       \u001b[39m | \u001b[39m0.7818333\u001b[39m | \u001b[39m174.45464\u001b[39m | \u001b[39m27.802482\u001b[39m | \u001b[39m5.7588637\u001b[39m | \u001b[39m2.2435537\u001b[39m |\n",
      "| \u001b[39m23       \u001b[39m | \u001b[39m0.7808333\u001b[39m | \u001b[39m73.439057\u001b[39m | \u001b[39m22.636601\u001b[39m | \u001b[39m7.0346249\u001b[39m | \u001b[39m3.3215931\u001b[39m |\n",
      "| \u001b[39m24       \u001b[39m | \u001b[39m0.7816666\u001b[39m | \u001b[39m149.87878\u001b[39m | \u001b[39m29.245608\u001b[39m | \u001b[39m6.5382549\u001b[39m | \u001b[39m3.8124604\u001b[39m |\n",
      "| \u001b[39m25       \u001b[39m | \u001b[39m0.7803333\u001b[39m | \u001b[39m193.27657\u001b[39m | \u001b[39m22.450704\u001b[39m | \u001b[39m6.0085699\u001b[39m | \u001b[39m1.4398149\u001b[39m |\n",
      "| \u001b[39m26       \u001b[39m | \u001b[39m0.7821666\u001b[39m | \u001b[39m88.835766\u001b[39m | \u001b[39m29.239516\u001b[39m | \u001b[39m2.8324497\u001b[39m | \u001b[39m3.1080030\u001b[39m |\n",
      "| \u001b[39m27       \u001b[39m | \u001b[39m0.7798333\u001b[39m | \u001b[39m116.11903\u001b[39m | \u001b[39m20.807167\u001b[39m | \u001b[39m3.6104311\u001b[39m | \u001b[39m2.9658947\u001b[39m |\n",
      "| \u001b[39m28       \u001b[39m | \u001b[39m0.7825   \u001b[39m | \u001b[39m129.10549\u001b[39m | \u001b[39m13.204989\u001b[39m | \u001b[39m8.5504517\u001b[39m | \u001b[39m3.1265836\u001b[39m |\n",
      "| \u001b[39m29       \u001b[39m | \u001b[39m0.783    \u001b[39m | \u001b[39m143.95107\u001b[39m | \u001b[39m20.618952\u001b[39m | \u001b[39m7.0656536\u001b[39m | \u001b[39m3.3488727\u001b[39m |\n",
      "| \u001b[39m30       \u001b[39m | \u001b[39m0.7779999\u001b[39m | \u001b[39m177.40419\u001b[39m | \u001b[39m10.997618\u001b[39m | \u001b[39m7.7664111\u001b[39m | \u001b[39m1.3362187\u001b[39m |\n",
      "| \u001b[39m31       \u001b[39m | \u001b[39m0.7771666\u001b[39m | \u001b[39m113.00139\u001b[39m | \u001b[39m28.085376\u001b[39m | \u001b[39m4.5321138\u001b[39m | \u001b[39m1.2109450\u001b[39m |\n",
      "| \u001b[39m32       \u001b[39m | \u001b[39m0.7806666\u001b[39m | \u001b[39m127.14903\u001b[39m | \u001b[39m22.765652\u001b[39m | \u001b[39m2.1422097\u001b[39m | \u001b[39m1.9630526\u001b[39m |\n",
      "| \u001b[39m33       \u001b[39m | \u001b[39m0.7843333\u001b[39m | \u001b[39m115.05024\u001b[39m | \u001b[39m20.122707\u001b[39m | \u001b[39m3.6550208\u001b[39m | \u001b[39m3.5079185\u001b[39m |\n",
      "| \u001b[39m34       \u001b[39m | \u001b[39m0.7834999\u001b[39m | \u001b[39m115.48545\u001b[39m | \u001b[39m19.815823\u001b[39m | \u001b[39m4.0357784\u001b[39m | \u001b[39m3.5004071\u001b[39m |\n",
      "| \u001b[39m35       \u001b[39m | \u001b[39m0.7843333\u001b[39m | \u001b[39m115.04983\u001b[39m | \u001b[39m20.635684\u001b[39m | \u001b[39m4.1248397\u001b[39m | \u001b[39m3.7163762\u001b[39m |\n",
      "| \u001b[39m36       \u001b[39m | \u001b[39m0.7806666\u001b[39m | \u001b[39m114.75142\u001b[39m | \u001b[39m20.407546\u001b[39m | \u001b[39m4.1283014\u001b[39m | \u001b[39m2.9227331\u001b[39m |\n",
      "| \u001b[39m37       \u001b[39m | \u001b[39m0.7801666\u001b[39m | \u001b[39m115.38440\u001b[39m | \u001b[39m20.394713\u001b[39m | \u001b[39m3.6359716\u001b[39m | \u001b[39m4.0      \u001b[39m |\n",
      "=========================================================================\n",
      "贝叶斯优化耗时: 137.2709 秒\n",
      "最佳参数:  {'n_estimators': np.float64(115.38666933483324), 'max_depth': np.float64(20.33705686164553), 'min_samples_split': np.float64(3.642568424379064), 'min_samples_leaf': np.float64(3.33902548742676)}\n",
      "\n",
      "贝叶斯优化后的随机森林 在测试集上的分类报告：\n",
      "              precision    recall  f1-score   support\n",
      "\n",
      "           0       0.76      0.97      0.85      1059\n",
      "           1       0.81      0.26      0.40       441\n",
      "\n",
      "    accuracy                           0.76      1500\n",
      "   macro avg       0.78      0.62      0.63      1500\n",
      "weighted avg       0.77      0.76      0.72      1500\n",
      "\n",
      "贝叶斯优化后的随机森林 在测试集上的混淆矩阵：\n",
      "[[1031   28]\n",
      " [ 325  116]]\n"
     ]
    }
   ],
   "source": [
    "# --- 2. 贝叶斯优化随机森林 ---\n",
    "print(\"\\n--- 2. 贝叶斯优化随机森林 (训练集 -> 测试集) ---\")\n",
    "from bayes_opt import BayesianOptimization\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "from sklearn.model_selection import cross_val_score\n",
    "from sklearn.metrics import classification_report, confusion_matrix\n",
    "import time\n",
    "import numpy as np\n",
    "\n",
    "# 假设 X_train, y_train, X_test, y_test 已经定义好\n",
    "# 定义目标函数，这里使用交叉验证来评估模型性能\n",
    "def rf_eval(n_estimators, max_depth, min_samples_split, min_samples_leaf):\n",
    "    n_estimators = int(n_estimators)\n",
    "    max_depth = int(max_depth)\n",
    "    min_samples_split = int(min_samples_split)\n",
    "    min_samples_leaf = int(min_samples_leaf)\n",
    "    model = RandomForestClassifier(\n",
    "        n_estimators=n_estimators,\n",
    "        max_depth=max_depth,\n",
    "        min_samples_split=min_samples_split,\n",
    "        min_samples_leaf=min_samples_leaf,\n",
    "        random_state=42\n",
    "    )\n",
    "    scores = cross_val_score(model, X_train, y_train, cv=5, scoring='accuracy')\n",
    "    return np.mean(scores)\n",
    "\n",
    "# 定义要搜索的参数空间\n",
    "pbounds_rf = {\n",
    "    'n_estimators': (50, 200),\n",
    "   'max_depth': (10, 30),\n",
    "   'min_samples_split': (2, 10),\n",
    "   'min_samples_leaf': (1, 4)\n",
    "}\n",
    "\n",
    "# 创建贝叶斯优化对象，设置 verbose=2 显示详细迭代信息\n",
    "optimizer_rf = BayesianOptimization(\n",
    "    f=rf_eval, # 目标函数\n",
    "    pbounds=pbounds_rf, # 参数空间\n",
    "    random_state=42, # 随机种子\n",
    "    verbose=2  # 显示详细迭代信息\n",
    ")\n",
    "\n",
    "start_time = time.time()\n",
    "# 开始贝叶斯优化\n",
    "optimizer_rf.maximize(\n",
    "    init_points=5,  # 初始随机采样点数\n",
    "    n_iter=32  # 迭代次数\n",
    ")\n",
    "end_time = time.time()\n",
    "\n",
    "print(f\"贝叶斯优化耗时: {end_time - start_time:.4f} 秒\")\n",
    "print(\"最佳参数: \", optimizer_rf.max['params'])\n",
    "\n",
    "# 使用最佳参数的模型进行预测\n",
    "best_params = optimizer_rf.max['params']\n",
    "best_model = RandomForestClassifier(\n",
    "    n_estimators=int(best_params['n_estimators']),\n",
    "    max_depth=int(best_params['max_depth']),\n",
    "    min_samples_split=int(best_params['min_samples_split']),\n",
    "    min_samples_leaf=int(best_params['min_samples_leaf']),\n",
    "    random_state=42\n",
    ")\n",
    "best_model.fit(X_train, y_train)\n",
    "best_pred = best_model.predict(X_test)\n",
    "\n",
    "print(\"\\n贝叶斯优化后的随机森林 在测试集上的分类报告：\")\n",
    "print(classification_report(y_test, best_pred))\n",
    "print(\"贝叶斯优化后的随机森林 在测试集上的混淆矩阵：\")\n",
    "print(confusion_matrix(y_test, best_pred))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "18636cbe",
   "metadata": {},
   "source": [
    "## 6. 总结与对比\n",
    "\n",
    "### 6.1 性能对比表\n",
    "\n",
    "基于实际运行结果的完整对比：\n",
    "\n",
    "| 方法 | 准确率 | 精确率(类1) | 召回率(类1) | F1-Score(类1) | 耗时(秒) |\n",
    "|------|--------|-------------|-------------|---------------|----------|\n",
    "| 默认参数 | 0.77 | 0.79 | 0.30 | 0.43 | 0.83 |\n",
    "| 随机搜索 | 0.77 | **0.83** ⭐ | 0.27 | 0.40 | 14.03 |\n",
    "| 网格搜索 | 0.77 | 0.80 | 0.28 | 0.42 | 34.84 |\n",
    "| 贝叶斯优化(skopt) | 0.77 | 0.81 | 0.26 | 0.40 | 33.51 |\n",
    "| 贝叶斯优化(bayes-opt) | 0.76 | 0.81 | 0.26 | 0.40 | 126.02 |\n",
    "\n",
    "**注释**：\n",
    "- 精确率、召回率、F1-Score 均为**正类(类1)的指标**\n",
    "- 类1 代表**违约客户**，这是我们重点关注的目标\n",
    "- ⭐ **随机搜索精确率最高**：0.83，说明它找到的参数在识别违约客户时最准确\n",
    "\n",
    "### 6.2 最佳参数对比\n",
    "\n",
    "| 方法 | n_estimators | max_depth | min_samples_split | min_samples_leaf |\n",
    "|------|--------------|-----------|-------------------|------------------|\n",
    "| 默认参数 | 100 | None | 2 | 1 |\n",
    "| 随机搜索 | 99 | 20 | 2 | **3** |\n",
    "| 网格搜索 | 200 | 20 | 2 | 1 |\n",
    "| 贝叶斯优化(skopt) | 118 | 17 | 8 | 2 |\n",
    "| 贝叶斯优化(bayes-opt) | 115 | 20 | 4 | 3 |\n",
    "\n",
    "*\n",
    "\n",
    "```\n",
    "场景1：快速原型，先用默认参数\n",
    "```\n",
    "场景2：小参数空间 → 网格搜索（穷举最优）\n",
    "       ↓\n",
    "场景3：大参数空间 + 中等算力 → 随机搜索（效率高）\n",
    "       ↓  \n",
    "场景4：大参数空间 + 有限算力 → 贝叶斯优化(skopt)（智能搜索）\n",
    "       ↓\n",
    "场景5：需要可视化优化过程 → 贝叶斯优化(bayes-opt)（详细输出）\n",
    "```\n",
    "场景5：需要可视化优化过程 → 贝叶斯优化(bayes-opt)（详细输出）\n",
    "```\n",
    "\n",
    "### 6.3 关键要点\n",
    "\n",
    "1. **基线很重要**：先建立默认参数的基线模型\n",
    "2. **交叉验证**：调参函数通常自带 CV，无需单独划分验证集\n",
    "3. **时间成本**：根据实际需求选择方法\n",
    "4. **参数空间**：空间越大，贝叶斯优化和随机搜索优势越明显\n",
    "5. **类别不平衡**：本案例中最大的问题不是参数，而是数据不平衡\n",
    "\n"
   ]
  }
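  ,
  {
   "cell_type": "markdown",
   "id": "feedc0de",
   "metadata": {},
   "source": [
    "Following up on the class-imbalance point: one low-cost lever is `class_weight='balanced'`, which reweights the minority class during training and usually trades a little precision for better minority recall. A sketch on synthetic imbalanced data (a stand-in, not this notebook's dataset) — exact numbers will vary, so no particular outcome is guaranteed:\n",
    "\n",
    "```python\n",
    "from sklearn.datasets import make_classification\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "from sklearn.metrics import recall_score\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "# Synthetic 80/20 imbalanced problem as a stand-in for the credit data\n",
    "X_demo, y_demo = make_classification(n_samples=2000, weights=[0.8, 0.2], random_state=42)\n",
    "Xtr, Xte, ytr, yte = train_test_split(X_demo, y_demo, test_size=0.3, random_state=42)\n",
    "\n",
    "plain = RandomForestClassifier(random_state=42).fit(Xtr, ytr)\n",
    "balanced = RandomForestClassifier(class_weight='balanced', random_state=42).fit(Xtr, ytr)\n",
    "\n",
    "# Compare minority-class (class 1) recall with and without reweighting\n",
    "print('recall, plain:   ', round(recall_score(yte, plain.predict(Xte)), 3))\n",
    "print('recall, balanced:', round(recall_score(yte, balanced.predict(Xte)), 3))\n",
    "```"
   ]
  }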
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "vs",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
