{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 2. 数据预处理与特征工程"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "数据预处理和特征工程是构建机器学习模型之前最关键的步骤之一。原始数据通常包含各种问题，如缺失值、不同的量纲或非数值格式，这些都会影响模型的性能。Scikit-learn 的 `preprocessing` 模块提供了丰富的工具来解决这些问题。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "import numpy as np"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.2 特征缩放\n",
    "\n",
    "特征缩放的目的是将所有特征调整到相似的尺度上，这样可以避免某些特征在模型训练中占据主导地位。这对于许多算法（如SVM、KNN和梯度下降）来说至关重要。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### StandardScaler (标准化)\n",
    "\n",
    "`StandardScaler` 通过移除均值并缩放到单位方差来标准化特征。标准化后的数据均值为0，标准差为1。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.preprocessing import StandardScaler\n",
    "\n",
    "data = np.array([[1, 2], [3, 4], [5, 6]], dtype=float)\n",
    "\n",
    "scaler = StandardScaler()\n",
    "scaled_data = scaler.fit_transform(data)\n",
    "\n",
    "print(\"原始数据:\")\n",
    "print(data)\n",
    "print(f\"\n",
    "原始数据均值: {data.mean(axis=0)}, 标准差: {data.std(axis=0)}\")\n",
    "\n",
    "print(\"\n",
    "标准化后的数据:\")\n",
    "print(scaled_data)\n",
    "print(f\"\n",
    "标准化数据均值: {scaled_data.mean(axis=0)}, 标准差: {scaled_data.std(axis=0)}\")"
   ]
  },
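  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "In practice, fit the scaler on the training set only and reuse the learned statistics on the test set; calling `fit_transform` on test data would leak information from it. A minimal sketch (the array values are illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "X_train = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])\n",
    "X_test = np.array([[2.0, 3.0]])\n",
    "\n",
    "scaler = StandardScaler()\n",
    "scaler.fit(X_train)  # learn mean and std from the training data only\n",
    "X_train_scaled = scaler.transform(X_train)\n",
    "X_test_scaled = scaler.transform(X_test)  # reuse the training statistics\n",
    "\n",
    "print(X_test_scaled)"
   ]
  },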
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### MinMaxScaler (归一化)\n",
    "\n",
    "`MinMaxScaler` 将特征缩放到一个给定的范围（通常是 [0, 1]）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.preprocessing import MinMaxScaler\n",
    "\n",
    "data = np.array([[-1], [2], [5], [10]], dtype=float)\n",
    "\n",
    "min_max_scaler = MinMaxScaler()\n",
    "scaled_data = min_max_scaler.fit_transform(data)\n",
    "\n",
    "print(\"原始数据:\")\n",
    "print(data)\n",
    "\n",
    "print(\"\n",
    "归一化后的数据:\")\n",
    "print(scaled_data)"
   ]
  },
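  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The target range is configurable through the `feature_range` parameter. A short sketch scaling the same data to [-1, 1] instead:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "data = np.array([[-1], [2], [5], [10]], dtype=float)\n",
    "\n",
    "# feature_range controls the output interval (the default is (0, 1))\n",
    "symmetric_scaler = MinMaxScaler(feature_range=(-1, 1))\n",
    "symmetric_data = symmetric_scaler.fit_transform(data)\n",
    "print(symmetric_data)"
   ]
  },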
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.3 编码分类特征\n",
    "\n",
    "机器学习模型通常只能处理数值数据，因此我们需要将文本类的分类特征转换为数值。"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### LabelEncoder\n",
    "\n",
    "`LabelEncoder` 将每个类别标签编码为一个整数。这种方法适用于目标变量（y），但不推荐用于特征（X），因为它会引入序数关系。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.preprocessing import LabelEncoder\n",
    "\n",
    "labels = np.array(['cat', 'dog', 'cat', 'fish', 'dog'])\n",
    "\n",
    "le = LabelEncoder()\n",
    "encoded_labels = le.fit_transform(labels)\n",
    "\n",
    "print(\"原始标签:\", labels)\n",
    "print(\"编码后标签:\", encoded_labels)\n",
    "\n",
    "# 可以使用 `inverse_transform` 进行解码\n",
    "decoded_labels = le.inverse_transform(encoded_labels)\n",
    "print(\"解码后标签:\", decoded_labels)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### OneHotEncoder (独热编码)\n",
    "\n",
    "`OneHotEncoder` 将每个类别转换为一个二进制向量，其中只有一个元素是1，其余都是0。这是处理分类特征最常用和推荐的方法。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.preprocessing import OneHotEncoder\n",
    "\n",
    "# 注意：OneHotEncoder 需要输入是二维的\n",
    "data = np.array([['Male'], ['Female'], ['Female'], ['Male']])\n",
    "\n",
    "ohe = OneHotEncoder(sparse_output=False) # sparse_output=False 返回 numpy 数组\n",
    "encoded_data = ohe.fit_transform(data)\n",
    "\n",
    "print(\"原始数据:\")\n",
    "print(data)\n",
    "\n",
    "print(\"独热编码后数据:\")\n",
    "print(encoded_data)"
   ]
  },
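  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "By default, `OneHotEncoder` raises an error if `transform` meets a category that was absent during `fit`. Passing `handle_unknown='ignore'` encodes such rows as all zeros instead, which is often the safer choice in deployed pipelines. A brief sketch:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "train = np.array([['Male'], ['Female']])\n",
    "new = np.array([['Other']])  # category not seen during fit\n",
    "\n",
    "tolerant_ohe = OneHotEncoder(sparse_output=False, handle_unknown='ignore')\n",
    "tolerant_ohe.fit(train)\n",
    "unknown_row = tolerant_ohe.transform(new)\n",
    "print(unknown_row)  # the unknown category becomes an all-zero row"
   ]
  },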
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.4 处理缺失值\n",
    "\n",
    "`SimpleImputer` 可以用指定的策略（如均值、中位数、众数或常量）填充缺失值。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.impute import SimpleImputer\n",
    "\n",
    "data = np.array([[1, 2, np.nan], [4, np.nan, 6], [7, 8, 9]])\n",
    "\n",
    "# 使用均值填充缺失值\n",
    "imputer = SimpleImputer(strategy='mean')\n",
    "imputed_data = imputer.fit_transform(data)\n",
    "\n",
    "print(\"原始数据:\")\n",
    "print(data)\n",
    "\n",
    "print(\"填充后数据:\")\n",
    "print(imputed_data)"
   ]
  },
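  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Besides the mean, `SimpleImputer` supports the `median`, `most_frequent`, and `constant` strategies; the median is usually more robust when the data contains outliers. A brief sketch on the same array:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "data = np.array([[1, 2, np.nan], [4, np.nan, 6], [7, 8, 9]])\n",
    "\n",
    "# The median is more robust to outliers than the mean\n",
    "median_imputer = SimpleImputer(strategy='median')\n",
    "median_data = median_imputer.fit_transform(data)\n",
    "print(median_data)\n",
    "\n",
    "# A constant fill value acts as an explicit marker for missingness\n",
    "constant_imputer = SimpleImputer(strategy='constant', fill_value=0)\n",
    "constant_data = constant_imputer.fit_transform(data)\n",
    "print(constant_data)"
   ]
  },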
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2.6 特征选择\n",
    "\n",
    "特征选择是从原始特征中选出最相关子集的过程，以提高模型性能并减少计算复杂性。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "from sklearn.feature_selection import SelectKBest, f_classif\n",
    "from sklearn.datasets import make_classification\n",
    "\n",
    "# 创建一个合成数据集\n",
    "X, y = make_classification(n_samples=100, n_features=10, n_informative=3, random_state=42)\n",
    "\n",
    "# 选择与目标最相关的 k=3 个特征\n",
    "selector = SelectKBest(score_func=f_classif, k=3)\n",
    "X_new = selector.fit_transform(X, y)\n",
    "\n",
    "print(\"原始特征形状:\", X.shape)\n",
    "print(\"选择后特征形状:\", X_new.shape)\n",
    "\n",
    "# 获取被选中的特征的索引\n",
    "selected_indices = selector.get_support(indices=True)\n",
    "print(\"被选中的特征索引:\", selected_indices)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.7"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
