{
 "cells": [
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "# 用随机森林对 20newsgroups数据集进行分类\n",
    "\n",
    "虽然随机森林在处理文本数据上非常强大，但它本身是一个**基于表格数据（特征向量）** 的算法。因此，我们不能直接把文本扔给随机森林。我们需要先将文本数据转换为数值特征向量（这个过程称为**文本向量化**），然后再使用随机森林进行分类。"
   ],
   "id": "d794c0af9adba895"
  },
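  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "As a minimal sketch of that pipeline (on a tiny made-up corpus with made-up labels, not the real dataset): vectorize a few sentences with TF-IDF, then fit a small random forest on the resulting matrix.",
   "id": "pipeline_sketch_md"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# Toy sketch (invented data): text -> TF-IDF vectors -> random forest\n",
    "from sklearn.feature_extraction.text import TfidfVectorizer\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "\n",
    "toy_texts = [\"the team won the game\", \"the player scored a goal\",\n",
    "             \"the cpu and gpu are fast\", \"install the new driver\"]\n",
    "toy_labels = [0, 0, 1, 1]  # 0 = sports, 1 = computers (made-up classes)\n",
    "\n",
    "toy_vec = TfidfVectorizer()\n",
    "toy_X = toy_vec.fit_transform(toy_texts)  # sparse matrix, one row per text\n",
    "toy_rf = RandomForestClassifier(n_estimators=10, random_state=0)\n",
    "toy_rf.fit(toy_X, toy_labels)  # the forest only ever sees numbers, never text\n",
    "print(toy_rf.predict(toy_vec.transform([\"the game was won\"])))"
   ],
   "id": "pipeline_sketch_code",
   "outputs": [],
   "execution_count": null
  },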
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 1.导入必要的库",
   "id": "2aff63e9cfc6c9c4"
  },
  {
   "metadata": {
    "collapsed": true
   },
   "cell_type": "code",
   "source": [
    "# 数据集获取\n",
    "from sklearn.datasets import fetch_20newsgroups\n",
    "\n",
    "# 文本特征提取\n",
    "from sklearn.feature_extraction.text import TfidfVectorizer\n",
    "\n",
    "# 随机森林模型\n",
    "from sklearn.ensemble import RandomForestClassifier\n",
    "\n",
    "# 评估模型性能\n",
    "from sklearn.metrics import classification_report, confusion_matrix, accuracy_score\n",
    "\n",
    "# 划分训练集和测试集\n",
    "from sklearn.model_selection import train_test_split\n",
    "\n",
    "# 可选：用于超参数调优\n",
    "from sklearn.model_selection import GridSearchCV\n",
    "\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt"
   ],
   "id": "initial_id",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "## 2.加载数据\n",
    "**关键点**：`remove=('headers', 'footers', 'quotes')`是一个常用的技巧，它移除了每篇新闻的元信息（如发件人、主题等），迫使模型更专注于新闻正文内容本身，这通常会稍微降低准确率但让模型更通用。你可以去掉这个参数来获得更高的准确率。"
   ],
   "id": "af2951dea67101a6"
  },
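  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "To see roughly what header stripping buys us, here is a crude illustration on a made-up message. (Splitting at the first blank line is *not* the exact logic `fetch_20newsgroups` uses; it is just the usual email convention.) Without it, header fields like `Subject:` feed the vectorizer giveaway tokens.",
   "id": "remove_demo_md"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# Crude sketch (invented message): email-style headers end at the first\n",
    "# blank line, so splitting there approximates remove=('headers',)\n",
    "raw_msg = \"From: someone@example.com\\nSubject: graphics card question\\n\\nWhich GPU should I buy?\"\n",
    "body = raw_msg.split(\"\\n\\n\", 1)[1]  # keep only the text after the headers\n",
    "print(body)"
   ],
   "id": "remove_demo_code",
   "outputs": [],
   "execution_count": null
  },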
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# 下载并加载20newsgroups数据集\n",
    "# subset='all' 表示获取全部数据，包括训练集和测试集\n",
    "# 我们之后自己划分训练测试集，所以这里先全部加载\n",
    "news = fetch_20newsgroups(subset='all',\n",
    "                          shuffle=True,\n",
    "                          random_state=42,  # 随机数种子,保证结果可复现\n",
    "                          )\n",
    "#remove=('headers', 'footers', 'quotes')  # 移除标头、脚注和引言,让文本更干净"
   ],
   "id": "f0db7eebddd9f88a",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# 查看数据的基本信息\n",
    "print(\"数据集键值:\", news.keys())\n",
    "print(\"类别名称:\", news.target_names)\n",
    "print(\"文档数量:\", len(news.data))\n",
    "print(\"数据类型:\", type(news.data))\n",
    "\n",
    "print(f\"\\n第一个文档的前500个字符:\")\n",
    "print(news.data[0][:500])"
   ],
   "id": "794f1a7db9ce5aef",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# 数据内容和对应的标签\n",
    "X_text = news.data  # 这是一个列表，里面是所有新闻的文本内容\n",
    "y = news.target  # 这是一个数组，里面是每个文本对应的类别编号（0-19）"
   ],
   "id": "7cf5605dc0c04688",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "## 3.文本向量化\n",
    "\n",
    "这是将文本转换为随机森林可以理解的数字格式的过程。我们使用 **TF-IDF** 方法，它是一种统计方法，用以评估一字词对于一个文件集或一个语料库中的其中一份文件的重要程度。\n",
    "\n",
    "TF-IDF（Term Frequency-Inverse Document Frequency）是一种在自然语言处理（NLP）和信息检索中广泛使用的文本特征提取方法，用于衡量一个词在文档集合中的**重要性**。它的核心思想是：**一个词的重要性与它在当前文档中出现的频率成正比，但与它在整个文档集合中出现的频率成反比**。\n",
    "\n",
    "- **TF（Term Frequency，词频）**：统计一个词在当前文档中出现的频率。\n",
    "- **IDF（Inverse Document Frequency，逆文档频率）**：衡量一个词在整个文档集合中的“稀有性”（即区分度）。"
   ],
   "id": "71acbe4bccf8a3c2"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# TF-IDF简单示例\n",
    "corpus = [\n",
    "    \"机器学习 深度学习 算法\",\n",
    "    \"机器学习 自然语言处理\",\n",
    "    \"深度学习 计算机视觉\"\n",
    "]\n",
    "tfidf = TfidfVectorizer()\n",
    "tfidf_matrix = tfidf.fit_transform(corpus)\n",
    "print(\"TF-IDF矩阵:\")\n",
    "print(tfidf_matrix.toarray())\n",
    "print(\"特征名称:\", tfidf.get_feature_names_out())"
   ],
   "id": "983875b6eb1804c",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "**关键参数解释**：\n",
    "\n",
    "- `max_features`：限制特征数量，防止维度爆炸，至关重要。\n",
    "- `stop_words`：移除常见但无实际意义的词。\n",
    "- `ngram_range=(1, 2)`：不仅考虑单个词，还考虑相邻词的组合，可以捕捉到像“not good”这样的短语，对性能提升很大。"
   ],
   "id": "ba8ea313d128d51b"
  },
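  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "A quick sanity check (on a tiny made-up corpus) that `ngram_range=(1, 2)` really adds two-word features such as \"not good\":",
   "id": "ngram_demo_md"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# Toy demo: with ngram_range=(1, 2), bigrams become features alongside unigrams\n",
    "demo_vec = TfidfVectorizer(ngram_range=(1, 2))\n",
    "demo_vec.fit([\"this movie is not good\", \"this movie is good\"])\n",
    "# Show only the bigram features (those containing a space)\n",
    "print([f for f in demo_vec.get_feature_names_out() if \" \" in f])\n",
    "# 'not good' shows up as its own feature, which a unigram-only model would miss"
   ],
   "id": "ngram_demo_code",
   "outputs": [],
   "execution_count": null
  },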
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# 初始化一个TF-IDF向量化器\n",
    "# max_features=5000 表示只考虑数据集中最常见的5000个单词作为特征。\n",
    "# 这可以降低特征维度，加快训练速度，有时甚至能提升模型表现（减少了噪声特征）。\n",
    "vectorizer = TfidfVectorizer(max_features=5000,\n",
    "                             stop_words='english',  # 移除英文停用词（如'the', 'is', 'and'）\n",
    "                             lowercase=True,  # 将所有字符转换为小写\n",
    "                             ngram_range=(1, 2)  # 考虑一元词组和二元词组（如 \"not good\"）\n",
    "                             )\n",
    "# 将全部文本数据进行拟合和转换\n",
    "X_features = vectorizer.fit_transform(X_text)\n",
    "\n",
    "# 查看转换后的特征矩阵形状\n",
    "print(f\"特征矩阵的形状为：{X_features.shape}\")\n",
    "print(f\"特征数量：{X_features.shape[1]}\")"
   ],
   "id": "5e42b23ff53932ad",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 4.划分训练集和测试集",
   "id": "3d66f8893b91d3a7"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# 划分训练集和测试集\n",
    "X_train, X_test, y_train, y_test = train_test_split(\n",
    "    X_features,\n",
    "    y,\n",
    "    test_size=0.2,\n",
    "    random_state=42,\n",
    "    stratify=y  # 保证训练集和测试集中各类别比例一致\n",
    ")"
   ],
   "id": "237e786bb1c2b3df",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "print(f\"训练集大小: {X_train.shape[0]}\")\n",
    "print(f\"测试集大小: {X_test.shape[0]}\")\n",
    "print(f\"训练集特征维度: {X_train.shape[1]}\")"
   ],
   "id": "fc72750b73ef545c",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 5.创建、训练随机森林模型",
   "id": "45c822bda38a8"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# 初始化随机森林分类器\n",
    "# n_estimators=100：森林中树的数量，通常100-500是个不错的选择\n",
    "# random_state=42：保证每次运行结果一致\n",
    "# n_jobs=-1：使用所有可用的CPU核心进行并行计算，加快训练速度\n",
    "rf_classifier = RandomForestClassifier(\n",
    "    n_estimators=500,\n",
    "    random_state=42,\n",
    "    n_jobs=-1)\n",
    "\n",
    "# 在训练集上训练模型\n",
    "print(\"开始训练随机森林模型...\")\n",
    "rf_classifier.fit(X_train, y_train)\n",
    "print(\"模型训练完成！\")\n",
    "\n",
    "# 查看模型参数\n",
    "print(\"\\n模型参数:\")\n",
    "print(rf_classifier.get_params())"
   ],
   "id": "f062bb45c31eef00",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 6.在测试集上进行预测并评估模型",
   "id": "17e4155900ad244b"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# 使用训练好的模型对测试集进行预测\n",
    "y_pred = rf_classifier.predict(X_test)\n",
    "\n",
    "# 计算并打印准确率\n",
    "accuracy = accuracy_score(y_test, y_pred)\n",
    "print(f\"随机森林模型的准确率为：{accuracy:.4f}\")\n",
    "\n",
    "# 打印详细的评估报告\n",
    "print(\"\\n===== 随机森林分类报告 =====\")\n",
    "print(classification_report(y_test,\n",
    "                            y_pred,\n",
    "                            target_names=news.target_names))\n",
    "\n",
    "# 计算混淆矩阵\n",
    "cm = confusion_matrix(y_test, y_pred)\n",
    "print(\"混淆矩阵形状:\", cm.shape)"
   ],
   "id": "ec20850cae6731fa",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 7.朴素贝叶斯分类",
   "id": "d0613f8bae3cafb8"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "from sklearn.naive_bayes import MultinomialNB\n",
    "\n",
    "# 初始化多项式朴素贝叶斯分类器\n",
    "# alpha=1.0 : 拉普拉斯平滑参数，用于处理训练集中未出现的单词，防止概率为0。\n",
    "#             alpha=1.0 是默认值，也是一个很好的起点。\n",
    "nb_classifier = MultinomialNB()\n",
    "\n",
    "# 在训练集上训练模型 (这一步非常快!)\n",
    "nb_classifier.fit(X_train, y_train)"
   ],
   "id": "345c3cf73e409aea",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 8.在测试集上预测并评估",
   "id": "2a5ec5815a24bf67"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# 使用训练好的模型对测试集进行预测 (这一步也非常快!)\n",
    "y_pred = nb_classifier.predict(X_test)\n",
    "\n",
    "# 计算并打印准确率\n",
    "accuracy = accuracy_score(y_test, y_pred)\n",
    "print(f\"朴素贝叶斯模型准确率: {accuracy:.4f}\\n\")\n",
    "\n",
    "# 打印详细的评估报告\n",
    "print(\"===== 分类报告 =====\")\n",
    "print(classification_report(\n",
    "    y_test,\n",
    "    y_pred,\n",
    "    target_names=news.target_names))"
   ],
   "id": "a591cfdee0806ced",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 9.随机森林特征重要性",
   "id": "58c17085413a888e"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# 获取特征重要性\n",
    "feature_importance_rf = rf_classifier.feature_importances_\n",
    "feature_names = vectorizer.get_feature_names_out()\n",
    "\n",
    "# 可视化最重要的20个特征\n",
    "n = 20\n",
    "top_indices = np.argsort(feature_importance_rf)[-n:][::-1]\n",
    "top_importance = feature_importance_rf[top_indices]\n",
    "top_features = feature_names[top_indices]\n",
    "\n",
    "plt.figure(figsize=(12, 8))\n",
    "plt.bar(range(n), top_importance)\n",
    "plt.xticks(range(n), top_features, rotation=45, ha='right')\n",
    "plt.title(\"Top 20 特征重要性\")\n",
    "plt.ylabel(\"重要性得分\")\n",
    "plt.tight_layout()\n",
    "# plt.savefig('../images/feature_importance.png', dpi=300, bbox_inches='tight')\n",
    "plt.show()\n",
    "\n",
    "# 打印重要特征\n",
    "print(\"最重要的20个特征:\")\n",
    "for i, (feature, importance) in enumerate(zip(top_features, top_importance)):\n",
    "    print(f\"{i+1:2d}. {feature}: {importance:.6f}\")"
   ],
   "id": "99cdea45569055c5",
   "outputs": [],
   "execution_count": null
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": "## 10.调整树的数量观察结果",
   "id": "772e5286a160b28c"
  },
  {
   "metadata": {},
   "cell_type": "code",
   "source": [
    "# 研究树的数量对性能的影响\n",
    "n_trees = [1, 5, 10, 50, 100, 200, 500]\n",
    "train_acc = []\n",
    "test_acc = []\n",
    "\n",
    "# 使用warm_start来增量训练，避免重复训练\n",
    "rf = RandomForestClassifier(warm_start=True, random_state=42, n_jobs=-1)\n",
    "\n",
    "for n in n_trees:\n",
    "    print(f\"训练 {n} 棵树...\")\n",
    "    rf.n_estimators = n\n",
    "    rf.fit(X_train, y_train)\n",
    "\n",
    "    # 预测\n",
    "    y_train_pred = rf.predict(X_train)\n",
    "    y_test_pred = rf.predict(X_test)\n",
    "\n",
    "    # 计算准确率\n",
    "    train_acc.append(accuracy_score(y_train, y_train_pred))\n",
    "    test_acc.append(accuracy_score(y_test, y_test_pred))\n",
    "    print(f\"  训练准确率: {train_acc[-1]:.4f}, 测试准确率: {test_acc[-1]:.4f}\")\n",
    "\n",
    "# 绘制结果\n",
    "plt.figure(figsize=(10, 6))\n",
    "plt.plot(n_trees, train_acc, 'o-', label='训练准确率', linewidth=2)\n",
    "plt.plot(n_trees, test_acc, 'o-', label='测试准确率', linewidth=2)\n",
    "plt.xlabel('树的数量')\n",
    "plt.ylabel('准确率')\n",
    "plt.title('树的数量对随机森林性能的影响')\n",
    "plt.legend()\n",
    "plt.grid(True, alpha=0.3)\n",
    "plt.xscale('log')\n",
    "plt.xticks(n_trees, n_trees)\n",
    "plt.tight_layout()\n",
    "# plt.savefig('../images/n_estimators_impact.png', dpi=300, bbox_inches='tight')\n",
    "plt.show()\n",
    "\n",
    "# 打印最佳参数\n",
    "best_n = n_trees[np.argmax(test_acc)]\n",
    "print(f\"\\n最佳树数量: {best_n}\")\n",
    "print(f\"对应的测试准确率: {max(test_acc):.4f}\")"
   ],
   "id": "b4e82b7121af5026",
   "outputs": [],
   "execution_count": null
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 2
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython2",
   "version": "2.7.6"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
