{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "intro",
   "metadata": {},
   "source": [
    "# Data Cleaning in Practice\n",
    "\n",
    "## The Full Cleaning Pipeline\n",
    "\n",
    "```\n",
    "1. Load data → 2. Handle missing values → 3. Enforce consistency → 4. Handle outliers → 5. Validate results\n",
    "```\n",
    "\n",
    "---\n",
    "\n",
    "## 1. Loading the Data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "load-data",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "import re\n",
    "import warnings\n",
    "from datetime import datetime\n",
    "import os\n",
    "\n",
    "warnings.filterwarnings('ignore')\n",
    "\n",
    "# Load the messy dataset (created in Lecture 3; run that notebook first if the file is missing)\n",
    "try:\n",
    "    df = pd.read_csv('../data/messy_data.csv')\n",
    "    print(f\"Loaded data: {df.shape}\")\n",
    "except FileNotFoundError:\n",
    "    print(\"Error: data file not found. Run Lecture 3 first to create the messy dataset.\")\n",
    "    raise\n",
    "\n",
    "# Quick overview\n",
    "print(\"\\n=== Basic info ===\")\n",
    "print(f\"Shape: {df.shape}\")\n",
    "print(f\"Columns: {df.columns.tolist()}\")\n",
    "print(f\"\\nDtypes:\\n{df.dtypes}\")\n",
    "print(f\"\\nFirst 5 rows:\\n{df.head()}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "missing-detection",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "## 2. Handling Missing Values\n",
    "\n",
    "### Steps\n",
    "\n",
    "| Step | Action | Method |\n",
    "|------|--------|--------|\n",
    "| 1 | Identify special null markers | 'N/A', 'null', '', 'pending', etc. |\n",
    "| 2 | Convert them to standard NaN | `np.nan` |\n",
    "| 3 | Count missing values | `isnull().sum()` |\n",
    "| 4 | Fill or drop | depends on column type and missing ratio |"
   ]
  },
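  {
   "cell_type": "markdown",
   "id": "missing-ratio-example",
   "metadata": {},
   "source": [
    "Step 4 above leaves the fill-vs-drop decision to column type and missing ratio. A minimal sketch of that rule as a helper (the 50% drop threshold is an illustrative assumption, not a rule from this lesson):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "\n",
    "def fill_or_drop(df, col, drop_threshold=0.5):\n",
    "    # Columns missing more than drop_threshold of their values are dropped;\n",
    "    # otherwise numeric columns get median fill, other columns get mode fill.\n",
    "    ratio = df[col].isnull().mean()\n",
    "    if ratio > drop_threshold:\n",
    "        return df.drop(columns=[col])\n",
    "    if pd.api.types.is_numeric_dtype(df[col]):\n",
    "        return df.assign(**{col: df[col].fillna(df[col].median())})\n",
    "    return df.assign(**{col: df[col].fillna(df[col].mode()[0])})\n",
    "\n",
    "demo = pd.DataFrame({'a': [1.0, np.nan, 3.0], 'b': ['x', None, 'x']})\n",
    "demo = fill_or_drop(demo, 'a')  # median fill\n",
    "demo = fill_or_drop(demo, 'b')  # mode fill\n",
    "```"
   ]
  },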
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "handle-missing",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 1. Identify special null markers and convert them to NaN\n",
    "def is_special_null(value):\n",
    "    \"\"\"Return True if the value is one of the dataset's special null markers.\"\"\"\n",
    "    special_nulls = [\n",
    "        'N/A', 'NA', 'null', 'NULL', 'none', 'NONE', '', \n",
    "        'invalid_date', 'invalid_age', 'invalid_salary',\n",
    "        'pending', '待审核', '未评分', '待确认', 'error', 'hidden', '未知'\n",
    "    ]\n",
    "    return pd.isnull(value) or str(value).strip() in special_nulls\n",
    "\n",
    "# Convert special markers to NaN (pandas >= 2.1 deprecates applymap in favor of DataFrame.map)\n",
    "df_cleaned = df.copy()\n",
    "df_cleaned = df_cleaned.applymap(lambda x: np.nan if is_special_null(x) else x)\n",
    "\n",
    "print(\"=== Missing-value summary ===\")\n",
    "missing_stats = pd.DataFrame({\n",
    "    'missing_count': df_cleaned.isnull().sum(),\n",
    "    'missing_pct': (df_cleaned.isnull().sum() / len(df_cleaned) * 100).round(2)\n",
    "})\n",
    "print(missing_stats)\n",
    "\n",
    "# 2. Numeric columns: fill missing values with the median\n",
    "numeric_cols = ['年龄', '工资', '订单金额', '客户评分', '物流时效', '评价数量']\n",
    "\n",
    "for col in numeric_cols:\n",
    "    # Coerce to numeric (invalid values become NaN)\n",
    "    df_cleaned[col] = pd.to_numeric(df_cleaned[col], errors='coerce')\n",
    "    # Assign back instead of fillna(..., inplace=True) on a column,\n",
    "    # which triggers chained-assignment warnings in recent pandas\n",
    "    median_val = df_cleaned[col].median()\n",
    "    df_cleaned[col] = df_cleaned[col].fillna(median_val)\n",
    "    print(f\"{col}: filled missing values with median {median_val:.2f}\")\n",
    "\n",
    "# 3. Categorical columns: fill missing values with the mode\n",
    "categorical_cols = ['部门', '产品编码']\n",
    "\n",
    "for col in categorical_cols:\n",
    "    mode_val = df_cleaned[col].mode()[0]\n",
    "    df_cleaned[col] = df_cleaned[col].fillna(mode_val)\n",
    "    print(f\"{col}: filled missing values with mode '{mode_val}'\")\n",
    "\n",
    "# 4. Drop rows missing required fields (name and phone are mandatory)\n",
    "df_cleaned = df_cleaned.dropna(subset=['姓名', '手机号'])\n",
    "print(f\"\\nAfter dropping rows with missing 姓名/手机号: {len(df_cleaned)} records remain\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "consistency",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "## 3. Enforcing Consistency\n",
    "\n",
    "### Standardization Rules\n",
    "\n",
    "| Field | Target format | Method |\n",
    "|-------|---------------|--------|\n",
    "| 日期 (date) | YYYY-MM-DD | `pd.to_datetime()` |\n",
    "| 姓名 (name) | Chinese name, whitespace removed | `strip()` + pinyin mapping |\n",
    "| 手机号 (phone) | 11 digits | extract digits via regex |\n",
    "| 邮箱 (email) | lowercase, standard format | `lower()` + regex validation |\n",
    "| 部门 (department) | canonical Chinese name | mapping dict |\n",
    "| 产品编码 (product code) | PRD + 4 digits | regex extraction + formatting |"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "standardize",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 1. Standardize the date format\n",
    "def standardize_date(date_str):\n",
    "    try:\n",
    "        return pd.to_datetime(date_str).strftime('%Y-%m-%d')\n",
    "    except (ValueError, TypeError):  # unparseable dates become None\n",
    "        return None\n",
    "\n",
    "df_cleaned['日期'] = df_cleaned['日期'].apply(standardize_date)\n",
    "print(\"✓ date format standardized\")\n",
    "\n",
    "# 2. Standardize names\n",
    "def standardize_name(name):\n",
    "    if pd.isna(name):\n",
    "        return None\n",
    "    # Remove all whitespace\n",
    "    name = re.sub(r'\\s+', '', str(name))\n",
    "    # Map pinyin spellings back to the Chinese names\n",
    "    name_map = {\n",
    "        'zhangsan': '张三', 'lisi': '李四', 'wangwu': '王五',\n",
    "        'zhaoliu': '赵六', 'qianqi': '钱七'\n",
    "    }\n",
    "    return name_map.get(name.lower(), name)\n",
    "\n",
    "df_cleaned['姓名'] = df_cleaned['姓名'].apply(standardize_name)\n",
    "print(\"✓ names standardized\")\n",
    "\n",
    "# 3. Standardize phone numbers\n",
    "def standardize_phone(phone):\n",
    "    if pd.isna(phone):\n",
    "        return None\n",
    "    # Keep digits only\n",
    "    digits = re.sub(r'\\D', '', str(phone))\n",
    "    # A valid mobile number is 11 digits starting with 1\n",
    "    if len(digits) == 11 and digits.startswith('1'):\n",
    "        return digits\n",
    "    return None\n",
    "\n",
    "df_cleaned['手机号'] = df_cleaned['手机号'].apply(standardize_phone)\n",
    "print(\"✓ phone numbers standardized\")\n",
    "\n",
    "# 4. Standardize emails\n",
    "def standardize_email(email):\n",
    "    if pd.isna(email):\n",
    "        return None\n",
    "    email = str(email).strip().lower()\n",
    "    # Validate the email format\n",
    "    if re.match(r'^[a-z0-9._%+-]+@[a-z0-9.-]+\\.[a-z]{2,}$', email):\n",
    "        return email\n",
    "    return None\n",
    "\n",
    "df_cleaned['邮箱'] = df_cleaned['邮箱'].apply(standardize_email)\n",
    "print(\"✓ emails standardized\")\n",
    "\n",
    "# 5. Standardize department names\n",
    "dept_map = {\n",
    "    'Sales': '销售部', 'SALES': '销售部', 'sales': '销售部', '销售部门': '销售部',\n",
    "    'Tech': '技术部', 'TECH': '技术部', '技术部门': '技术部',\n",
    "    'Marketing': '市场部', 'MARKETING': '市场部', '市场部门': '市场部',\n",
    "    'HR': '人事部', '人事部门': '人事部'\n",
    "}\n",
    "df_cleaned['部门'] = df_cleaned['部门'].replace(dept_map)\n",
    "print(\"✓ department names standardized\")\n",
    "\n",
    "# 6. Standardize product codes\n",
    "def standardize_code(code):\n",
    "    if pd.isna(code):\n",
    "        return None\n",
    "    # Extract the 4-digit code and re-prefix it\n",
    "    match = re.search(r'\\d{4}', str(code))\n",
    "    if match:\n",
    "        return f\"PRD{match.group()}\"\n",
    "    return None\n",
    "\n",
    "df_cleaned['产品编码'] = df_cleaned['产品编码'].apply(standardize_code)\n",
    "print(\"✓ product codes standardized\")\n",
    "\n",
    "print(\"\\n=== Sample after standardization ===\")\n",
    "print(df_cleaned[['日期', '姓名', '手机号', '邮箱', '部门', '产品编码']].head())"
   ]
  },
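  {
   "cell_type": "markdown",
   "id": "vectorized-std-example",
   "metadata": {},
   "source": [
    "The `apply`-based helpers above are easy to read row by row; on larger tables the same standardization is usually done with vectorized pandas string and date operations. A sketch on toy data (column names mirror this dataset; `errors='coerce'` turns unparseable values into NaT/NaN):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "raw = pd.DataFrame({\n",
    "    '日期': ['2023/01/05', '2023/01/06', 'not a date'],\n",
    "    '手机号': ['138-0013-8000', '(139)0013 9000', '12345'],\n",
    "    '邮箱': ['  Alice@Example.COM ', 'bad-email', 'bob@test.cn'],\n",
    "})\n",
    "\n",
    "# Dates: unparseable values become NaT, then NaN after formatting\n",
    "raw['日期'] = pd.to_datetime(raw['日期'], errors='coerce').dt.strftime('%Y-%m-%d')\n",
    "\n",
    "# Phones: strip non-digits, keep only 11-digit numbers starting with 1\n",
    "digits = raw['手机号'].str.replace(r'\\D', '', regex=True)\n",
    "raw['手机号'] = digits.where((digits.str.len() == 11) & digits.str.startswith('1'))\n",
    "\n",
    "# Emails: normalize case and whitespace, then validate with a regex\n",
    "emails = raw['邮箱'].str.strip().str.lower()\n",
    "raw['邮箱'] = emails.where(emails.str.match(r'^[a-z0-9._%+-]+@[a-z0-9.-]+\\.[a-z]{2,}$'))\n",
    "```"
   ]
  },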
  {
   "cell_type": "markdown",
   "id": "outliers",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "## 4. Handling Outliers\n",
    "\n",
    "### Strategies\n",
    "\n",
    "| Method | When to use | Pros | Cons |\n",
    "|--------|-------------|------|------|\n",
    "| **Drop** | outlier ratio < 5% | simple and direct | loses data |\n",
    "| **Replace with median** | obvious anomalies | robust | may distort values |\n",
    "| **Winsorize (cap)** | need to keep the distribution | keeps sample size | may over-smooth |\n",
    "| **Business rules** | clear domain bounds | matches reality | needs domain knowledge |\n",
    "\n",
    "### Outlier Detection: the IQR Method\n",
    "\n",
    "```\n",
    "Q1 = 25th percentile\n",
    "Q3 = 75th percentile\n",
    "IQR = Q3 - Q1\n",
    "lower bound = Q1 - 1.5 × IQR\n",
    "upper bound = Q3 + 1.5 × IQR\n",
    "```"
   ]
  },
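  {
   "cell_type": "markdown",
   "id": "clip-example",
   "metadata": {},
   "source": [
    "The winsorize row in the strategy table maps directly onto `Series.clip`, which caps values at given bounds in one call; it is equivalent to the manual `loc`-based boundary assignment used in this notebook. A minimal sketch:\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "s = pd.Series([1, 2, 3, 4, 100])  # 100 is an obvious outlier\n",
    "q1, q3 = s.quantile(0.25), s.quantile(0.75)\n",
    "iqr = q3 - q1\n",
    "lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr  # fences: [-1.0, 7.0]\n",
    "clipped = s.clip(lower=lower, upper=upper)\n",
    "```"
   ]
  },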
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "handle-outliers",
   "metadata": {},
   "outputs": [],
   "source": [
    "df_final = df_cleaned.copy()\n",
    "\n",
    "print(\"=== Outlier handling ===\")\n",
    "\n",
    "# 1. Business-rule checks: replace out-of-range values with the in-range median\n",
    "print(\"\\n1. Business-rule checks and fixes:\")\n",
    "\n",
    "business_rules = {\n",
    "    '年龄': (18, 70),       # age in years\n",
    "    '工资': (3000, 50000),  # monthly salary in yuan\n",
    "    '客户评分': (0, 100),   # customer score\n",
    "    '物流时效': (0, 72),    # delivery time in hours\n",
    "}\n",
    "\n",
    "for col, (lo, hi) in business_rules.items():\n",
    "    outliers = (df_final[col] < lo) | (df_final[col] > hi)\n",
    "    if outliers.sum() > 0:\n",
    "        median_val = df_final.loc[~outliers, col].median()\n",
    "        df_final.loc[outliers, col] = median_val\n",
    "        print(f\"  {col}: fixed {outliers.sum()} out-of-range values → {median_val}\")\n",
    "\n",
    "# 2. Statistical method: IQR with winsorizing (cap at the fences)\n",
    "print(\"\\n2. IQR-based outlier handling (winsorize):\")\n",
    "\n",
    "for col in numeric_cols:\n",
    "    Q1 = df_final[col].quantile(0.25)\n",
    "    Q3 = df_final[col].quantile(0.75)\n",
    "    IQR = Q3 - Q1\n",
    "    lower = Q1 - 1.5 * IQR\n",
    "    upper = Q3 + 1.5 * IQR\n",
    "    \n",
    "    outlier_mask = (df_final[col] < lower) | (df_final[col] > upper)\n",
    "    outlier_count = outlier_mask.sum()\n",
    "    \n",
    "    if outlier_count > 0:\n",
    "        # Cap values at the IQR fences instead of dropping rows\n",
    "        df_final.loc[df_final[col] < lower, col] = lower\n",
    "        df_final.loc[df_final[col] > upper, col] = upper\n",
    "        print(f\"  {col}: capped {outlier_count} outliers (fences: [{lower:.2f}, {upper:.2f}])\")\n",
    "\n",
    "print(\"\\n✓ outlier handling complete\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "validation",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "## 5. Validating and Saving the Results"
   ]
  },
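  {
   "cell_type": "markdown",
   "id": "quality-checks-example",
   "metadata": {},
   "source": [
    "Validation can also be written as explicit rule checks rather than eyeballing summaries. A minimal sketch (the helper and its rule set are illustrative, mirroring the bounds used in this notebook; each check counts violating rows, so 0 everywhere means the data passes):\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "def quality_report(df):\n",
    "    # Each rule returns the number of violating rows; 0 means it passes\n",
    "    return {\n",
    "        'null_values': int(df.isnull().sum().sum()),\n",
    "        'duplicate_rows': int(df.duplicated().sum()),\n",
    "        'bad_age': int(((df['年龄'] < 18) | (df['年龄'] > 70)).sum()),\n",
    "        'bad_phone': int((~df['手机号'].astype(str).str.fullmatch(r'1\\d{10}')).sum()),\n",
    "    }\n",
    "\n",
    "demo = pd.DataFrame({'年龄': [25, 30], '手机号': ['13800138000', '13900139000']})\n",
    "assert all(v == 0 for v in quality_report(demo).values())\n",
    "```"
   ]
  },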
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "validate-save",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 1. Validate the cleaning results\n",
    "print(\"=== Cleaning validation ===\")\n",
    "\n",
    "print(\"\\n1. Data size:\")\n",
    "print(f\"  original: {len(df)} rows\")\n",
    "print(f\"  cleaned: {len(df_final)} rows\")\n",
    "print(f\"  removed: {len(df) - len(df_final)} rows ({(len(df) - len(df_final)) / len(df) * 100:.2f}%)\")\n",
    "\n",
    "print(\"\\n2. Missing values:\")\n",
    "missing_final = df_final.isnull().sum()\n",
    "print(missing_final[missing_final > 0] if missing_final.sum() > 0 else \"  none\")\n",
    "\n",
    "print(\"\\n3. Dtypes:\")\n",
    "print(df_final.dtypes)\n",
    "\n",
    "print(\"\\n4. Numeric column summary:\")\n",
    "print(df_final[numeric_cols].describe())\n",
    "\n",
    "# 2. Save the cleaned data\n",
    "output_dir = '../data/processed'\n",
    "os.makedirs(output_dir, exist_ok=True)\n",
    "\n",
    "output_file = os.path.join(output_dir, 'cleaned_data.csv')\n",
    "df_final.to_csv(output_file, index=False, encoding='utf-8')\n",
    "print(f\"\\n✓ cleaned data saved to: {output_file}\")\n",
    "\n",
    "# 3. Generate a cleaning report\n",
    "report = f\"\"\"\n",
    "========== Data Cleaning Report ==========\n",
    "Generated: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}\n",
    "\n",
    "1. Data size\n",
    "  original: {len(df)} rows × {len(df.columns)} columns\n",
    "  cleaned: {len(df_final)} rows × {len(df_final.columns)} columns\n",
    "  retention rate: {len(df_final) / len(df) * 100:.2f}%\n",
    "\n",
    "2. Cleaning steps\n",
    "  1. Missing values\n",
    "     - converted special null markers to NaN\n",
    "     - numeric columns: median fill\n",
    "     - categorical columns: mode fill\n",
    "     - required fields: dropped rows with missing values\n",
    "  \n",
    "  2. Consistency\n",
    "     - 日期: normalized to YYYY-MM-DD\n",
    "     - 姓名: whitespace removed, pinyin mapped to Chinese\n",
    "     - 手机号: 11-digit format\n",
    "     - 邮箱: lowercased, format validated\n",
    "     - 部门: canonical Chinese names\n",
    "     - 产品编码: PRD + 4 digits\n",
    "  \n",
    "  3. Outliers\n",
    "     - business rules: 年龄 (18-70), 工资 (3000-50000), 客户评分 (0-100), 物流时效 (0-72)\n",
    "     - statistical: IQR winsorizing\n",
    "\n",
    "3. Quality metrics\n",
    "  missing values: {df_final.isnull().sum().sum()}\n",
    "  duplicate rows: {df_final.duplicated().sum()}\n",
    "  \n",
    "4. Output\n",
    "  {output_file}\n",
    "\"\"\"\n",
    "\n",
    "report_file = os.path.join(output_dir, 'cleaning_report.txt')\n",
    "with open(report_file, 'w', encoding='utf-8') as f:\n",
    "    f.write(report)\n",
    "\n",
    "print(f\"✓ cleaning report saved to: {report_file}\")\n",
    "print(\"\\n\" + report)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "summary",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "## Cleaning Pipeline Cheat Sheet\n",
    "\n",
    "### Code Template\n",
    "\n",
    "```python\n",
    "# 1. Load\n",
    "df = pd.read_csv('data.csv')\n",
    "\n",
    "# 2. Missing values (assign back rather than fillna(..., inplace=True) on a column)\n",
    "df = df.applymap(lambda x: np.nan if x in ['N/A', ''] else x)\n",
    "df['numeric_col'] = df['numeric_col'].fillna(df['numeric_col'].median())\n",
    "df['cat_col'] = df['cat_col'].fillna(df['cat_col'].mode()[0])\n",
    "df = df.dropna(subset=['key_col'])\n",
    "\n",
    "# 3. Consistency\n",
    "df['date'] = pd.to_datetime(df['date']).dt.strftime('%Y-%m-%d')\n",
    "df['text'] = df['text'].str.strip().str.lower()\n",
    "df['category'] = df['category'].replace(mapping_dict)\n",
    "\n",
    "# 4. Outliers\n",
    "Q1 = df['col'].quantile(0.25)\n",
    "Q3 = df['col'].quantile(0.75)\n",
    "IQR = Q3 - Q1\n",
    "df = df[(df['col'] >= Q1 - 1.5 * IQR) & (df['col'] <= Q3 + 1.5 * IQR)]\n",
    "\n",
    "# 5. Save\n",
    "df.to_csv('cleaned_data.csv', index=False)\n",
    "```\n",
    "\n",
    "---\n",
    "\n",
    "## Takeaways\n",
    "\n",
    "**Core steps**:\n",
    "1. **Missing values**: identify → convert → fill/drop\n",
    "2. **Consistency**: standardize formats → unify names → convert types\n",
    "3. **Outliers**: business rules + statistical methods (IQR)\n",
    "4. **Validation**: check data quality → generate a report\n",
    "\n",
    "**Key principles**:\n",
    "- Understand the data before cleaning it\n",
    "- Record every cleaning step\n",
    "- Verify the results\n",
    "- Keep a backup of the raw data\n",
    "\n",
    "**Next up**: basic data operations (filtering, sorting, grouping, etc.)"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "name": "python",
   "version": "3.8.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
