{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 第三阶段 - 第1讲：数据分析总流程\n",
    "\n",
    "## 学习目标\n",
    "- 理解数据分析的完整流程\n",
    "- 掌握问题定义的方法\n",
    "- 识别常见的脏数据类型\n",
    "- 建立数据质量意识\n",
    "\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 一、数据分析六步法\n",
    "\n",
    "### 完整流程图解\n",
    "\n",
    "```\n",
    "问题定义 → 数据采集 → 数据准备 → 数据分析 → 可视化呈现 → 决策建议\n",
    "   ↓          ↓          ↓          ↓           ↓           ↓\n",
    " 明确目标   获取数据   清洗整理   统计建模    图表展示    业务洞察\n",
    "```\n",
    "\n",
    "### 详细说明\n",
    "\n",
    "#### 1️⃣ 问题定义 (Define the Problem)\n",
    "**核心**: 明确分析目标和业务问题\n",
    "\n",
    "**关键问题**:\n",
    "- 要解决什么业务问题？\n",
    "- 需要什么数据来回答这个问题？\n",
    "- 预期的决策或行动是什么？\n",
    "- 成功的标准是什么？\n",
    "\n",
    "**案例示例**:\n",
    "- ❌ 不好的问题：\"分析一下销售数据\"\n",
    "- ✅ 好的问题：\"为什么Q2销售额同比下降15%？哪些产品/地区下降最严重？\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 导入基础库\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "import seaborn as sns\n",
    "import warnings\n",
    "warnings.filterwarnings('ignore')\n",
    "\n",
    "# 设置中文显示\n",
    "plt.rcParams['font.sans-serif'] = ['Arial Unicode MS', 'SimHei']\n",
    "plt.rcParams['axes.unicode_minus'] = False\n",
    "\n",
    "print(\"✅ 环境配置完成\")\n",
    "print(f\"Pandas版本: {pd.__version__}\")\n",
    "print(f\"Numpy版本: {np.__version__}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 2️⃣ 数据采集 (Data Collection)\n",
    "**核心**: 获取相关数据源\n",
    "\n",
    "**常见数据源**:\n",
    "- 数据库 (MySQL, PostgreSQL, MongoDB)\n",
    "- Excel/CSV文件\n",
    "- API接口\n",
    "- 网页爬虫\n",
    "- 日志文件\n",
    "- 第三方平台导出\n",
    "\n",
    "**数据量级评估**:\n",
    "- 小数据 (<1GB): Excel + Pandas\n",
    "- 中等数据 (1-10GB): Pandas + 分块处理\n",
    "- 大数据 (>10GB): Spark, Dask, 数据库"
   ]
  },
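  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "中等数据量提到的\"分块处理\"可以先用一个小例子感受一下：`pd.read_csv()` 的 `chunksize` 参数按块返回 DataFrame，逐块聚合即可避免一次性加载全部数据。下面用 `io.StringIO` 模拟CSV文件（示意写法，数据内容为演示假设）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 分块读取示例: 用StringIO模拟CSV文件, 实际场景替换为文件路径\n",
    "import io\n",
    "\n",
    "csv_text = 'amount\\n100\\n200\\n300\\n400\\n500\\n'\n",
    "\n",
    "total = 0\n",
    "for chunk in pd.read_csv(io.StringIO(csv_text), chunksize=2):\n",
    "    # 每个chunk是一个小DataFrame, 逐块累加而不是整表求和\n",
    "    total += chunk['amount'].sum()\n",
    "\n",
    "print(f'分块累计金额: {total}')  # 1500"
   ]
  },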
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 模拟不同来源的数据采集\n",
    "\n",
    "# 场景1: 从CSV读取销售数据\n",
    "sales_data = {\n",
    "    'order_id': ['ORD001', 'ORD002', 'ORD003', 'ORD004', 'ORD005'],\n",
    "    'date': ['2024-01-15', '2024-01-16', '2024-01-17', '2024-01-18', '2024-01-19'],\n",
    "    'product': ['iPhone 15', 'MacBook Pro', 'iPad Air', 'iPhone 15', 'AirPods Pro'],\n",
    "    'amount': [6999, 12999, 4999, 6999, 1999],\n",
    "    'region': ['华东', '华北', '华南', '华东', '华中']\n",
    "}\n",
    "df_sales = pd.DataFrame(sales_data)\n",
    "\n",
    "print(\"销售数据示例:\")\n",
    "print(df_sales)\n",
    "print(f\"\\n数据规模: {len(df_sales)}行 × {len(df_sales.columns)}列\")\n",
    "print(f\"内存占用: {df_sales.memory_usage(deep=True).sum() / 1024:.2f} KB\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 3️⃣ 数据准备 (Data Preparation)\n",
    "**核心**: 清洗和整理数据，确保数据质量\n",
    "\n",
    "**主要任务**:\n",
    "- 缺失值处理\n",
    "- 异常值检测与处理\n",
    "- 重复值去除\n",
    "- 数据类型转换\n",
    "- 格式统一\n",
    "- 数据合并与拼接\n",
    "\n",
    "**时间占比**: 通常占据整个分析工作的 **60-70%**"
   ]
  },
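  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "上面列出的几项准备任务可以先用一个最小示例直观感受（数据与列名均为演示假设，具体清洗方法将在后面的小节详细展开）："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 数据准备小示例: 缺失值填充、类型转换、去重\n",
    "df_prep = pd.DataFrame({\n",
    "    'city': ['北京', '上海', None, '北京'],\n",
    "    'amount': ['100', '200', '300', '100']\n",
    "})\n",
    "\n",
    "# 1. 缺失值处理: 用占位符填充(也可用dropna()直接删除)\n",
    "df_prep['city'] = df_prep['city'].fillna('未知')\n",
    "\n",
    "# 2. 数据类型转换: 字符串金额转为数值\n",
    "df_prep['amount'] = df_prep['amount'].astype(int)\n",
    "\n",
    "# 3. 重复值去除: 第1行和第4行完全相同, 只保留一条\n",
    "df_prep = df_prep.drop_duplicates()\n",
    "\n",
    "print(df_prep)\n",
    "print(f\"剩余行数: {len(df_prep)}\")  # 3"
   ]
  },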
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 4️⃣ 数据分析 (Data Analysis)\n",
    "**核心**: 统计分析和模型构建\n",
    "\n",
    "**分析方法**:\n",
    "- **描述性分析 (Descriptive)**: 发生了什么？均值、中位数、标准差、分布\n",
    "- **诊断性分析 (Diagnostic)**: 为什么会这样？相关性、因果关系\n",
    "- **预测性分析 (Predictive)**: 未来会怎样？回归、时间序列\n",
    "- **指导性分析 (Prescriptive)**: 应该怎么做？优化、决策模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 简单分析示例\n",
    "print(\"=== 基础统计分析 ===\")\n",
    "print(f\"总销售额: ¥{df_sales['amount'].sum():,}\")\n",
    "print(f\"平均客单价: ¥{df_sales['amount'].mean():,.2f}\")\n",
    "print(f\"订单数量: {len(df_sales)}\")\n",
    "\n",
    "print(\"\\n=== 产品销量排名 ===\")\n",
    "product_sales = df_sales.groupby('product')['amount'].agg(['sum', 'count'])\n",
    "product_sales.columns = ['总销售额', '订单数']\n",
    "print(product_sales.sort_values('总销售额', ascending=False))\n",
    "\n",
    "print(\"\\n=== 地区分布 ===\")\n",
    "region_sales = df_sales.groupby('region')['amount'].sum().sort_values(ascending=False)\n",
    "print(region_sales)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 5️⃣ 可视化呈现 (Visualization)\n",
    "**核心**: 用图表讲故事\n",
    "\n",
    "**常用图表**:\n",
    "- 对比: 柱状图、条形图\n",
    "- 趋势: 折线图、面积图\n",
    "- 占比: 饼图、环形图、树状图\n",
    "- 分布: 直方图、箱线图、小提琴图\n",
    "- 关系: 散点图、热力图、气泡图\n",
    "\n",
    "**可视化原则**:\n",
    "- 简洁明了，一图一主题\n",
    "- 突出重点数据\n",
    "- 选择合适的配色\n",
    "- 添加清晰的标题和标签"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 可视化示例\n",
    "fig, axes = plt.subplots(1, 2, figsize=(14, 5))\n",
    "\n",
    "# 产品销售额对比\n",
    "product_sales_sorted = df_sales.groupby('product')['amount'].sum().sort_values(ascending=False)\n",
    "axes[0].bar(range(len(product_sales_sorted)), product_sales_sorted.values, color='steelblue', edgecolor='black')\n",
    "axes[0].set_xticks(range(len(product_sales_sorted)))\n",
    "axes[0].set_xticklabels(product_sales_sorted.index, rotation=45, ha='right')\n",
    "axes[0].set_title('Product Sales Comparison', fontsize=14, fontweight='bold')\n",
    "axes[0].set_ylabel('Sales Amount (¥)')\n",
    "axes[0].grid(axis='y', alpha=0.3)\n",
    "\n",
    "# 地区销售占比\n",
    "region_sales = df_sales.groupby('region')['amount'].sum()\n",
    "colors = ['#ff9999', '#66b3ff', '#99ff99', '#ffcc99', '#ff99cc']\n",
    "axes[1].pie(region_sales.values, labels=region_sales.index, autopct='%1.1f%%', \n",
    "            colors=colors[:len(region_sales)], startangle=90)\n",
    "axes[1].set_title('Regional Sales Distribution', fontsize=14, fontweight='bold')\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()\n",
    "\n",
    "print(\"✅ 可视化完成\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "#### 6️⃣ 决策建议 (Decision Making)\n",
    "**核心**: 从数据到行动\n",
    "\n",
    "**输出内容**:\n",
    "- 关键发现 (Key Findings)\n",
    "- 业务洞察 (Insights)\n",
    "- 可行动建议 (Actionable Recommendations)\n",
    "- 预期影响 (Expected Impact)\n",
    "\n",
    "**报告结构**:\n",
    "1. 执行摘要 (1页)\n",
    "2. 问题背景\n",
    "3. 数据说明\n",
    "4. 分析发现 (图表+文字)\n",
    "5. 结论与建议\n",
    "6. 附录 (详细数据)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 生成分析报告示例\n",
    "def generate_analysis_report(df):\n",
    "    \"\"\"\n",
    "    生成结构化分析报告\n",
    "    \"\"\"\n",
    "    report = []\n",
    "    report.append(\"=\"*60)\n",
    "    report.append(\"数据分析报告\")\n",
    "    report.append(\"=\"*60)\n",
    "    \n",
    "    # 1. 核心指标\n",
    "    report.append(\"\\n【核心指标】\")\n",
    "    report.append(f\"  · 总销售额: ¥{df['amount'].sum():,}\")\n",
    "    report.append(f\"  · 订单总数: {len(df):,}\")\n",
    "    report.append(f\"  · 平均客单价: ¥{df['amount'].mean():,.2f}\")\n",
    "    \n",
    "    # 2. 产品表现 (groupby结果只计算一次, 避免重复聚合)\n",
    "    report.append(\"\\n【产品表现】\")\n",
    "    product_totals = df.groupby('product')['amount'].sum()\n",
    "    top_product = product_totals.idxmax()\n",
    "    report.append(f\"  · 最佳产品: {top_product} (¥{product_totals.max():,})\")\n",
    "    \n",
    "    # 3. 地区分析\n",
    "    report.append(\"\\n【地区分析】\")\n",
    "    region_totals = df.groupby('region')['amount'].sum()\n",
    "    top_region = region_totals.idxmax()\n",
    "    report.append(f\"  · 最强地区: {top_region} (¥{region_totals.max():,})\")\n",
    "    \n",
    "    # 4. 建议\n",
    "    report.append(\"\\n【行动建议】\")\n",
    "    report.append(f\"  1. 加大{top_product}的库存和推广力度\")\n",
    "    report.append(f\"  2. 在{top_region}地区增加营销投入\")\n",
    "    report.append(\"  3. 分析低销售产品的原因，考虑优化或下架\")\n",
    "    report.append(\"  4. 建立客户画像，实施精准营销\")\n",
    "    \n",
    "    report.append(\"\\n\" + \"=\"*60)\n",
    "    \n",
    "    return \"\\n\".join(report)\n",
    "\n",
    "# 生成并打印报告\n",
    "print(generate_analysis_report(df_sales))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "## 二、脏数据的四大类型\n",
    "\n",
    "### 为什么关注数据质量？\n",
    "> \"Garbage In, Garbage Out\" - 垃圾数据输入，垃圾结果输出\n",
    "\n",
    "数据质量直接影响分析结果的可信度和决策的准确性。\n",
    "\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1. 缺失值 (Missing Values)\n",
    "\n",
    "**定义**: 数据集中某些字段没有记录值\n",
    "\n",
    "**表现形式**:\n",
    "- `NaN` (Not a Number)\n",
    "- `None`\n",
    "- 空字符串 `\"\"`\n",
    "- 特殊标记 (如 `9999`, `-1`, `N/A`)\n",
    "\n",
    "**产生原因**:\n",
    "- 数据采集失败\n",
    "- 用户未填写\n",
    "- 系统错误\n",
    "- 数据合并时匹配不上\n",
    "\n",
    "**Excel对比**:\n",
    "- Excel: 空白单元格，需要手动查找\n",
    "- Pandas: `isnull()`, `isna()` 快速检测"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 创建包含缺失值的数据\n",
    "data_missing = {\n",
    "    'customer_id': ['C001', 'C002', 'C003', 'C004', 'C005'],\n",
    "    'name': ['张三', '李四', None, '赵六', '王五'],\n",
    "    'age': [28, 35, 42, None, 31],\n",
    "    'city': ['北京', '上海', '广州', '深圳', None],\n",
    "    'purchase_amount': [1200, None, 3500, 800, 2100]\n",
    "}\n",
    "\n",
    "df_missing = pd.DataFrame(data_missing)\n",
    "\n",
    "print(\"包含缺失值的数据:\")\n",
    "print(df_missing)\n",
    "\n",
    "print(\"\\n缺失值统计:\")\n",
    "print(df_missing.isnull().sum())\n",
    "\n",
    "print(\"\\n缺失值占比:\")\n",
    "print((df_missing.isnull().sum() / len(df_missing) * 100).round(2))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 可视化缺失值\n",
    "plt.figure(figsize=(10, 4))\n",
    "\n",
    "# 缺失值热力图\n",
    "sns.heatmap(df_missing.isnull(), cbar=False, yticklabels=False, cmap='viridis')\n",
    "plt.title('Missing Values Heatmap (Yellow = Missing)', fontsize=14, fontweight='bold')\n",
    "plt.xlabel('Columns')\n",
    "plt.tight_layout()\n",
    "plt.show()\n",
    "\n",
    "print(\"✅ 缺失值可视化完成\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2. 异常值 (Outliers)\n",
    "\n",
    "**定义**: 明显偏离正常范围的数据点\n",
    "\n",
    "**常见异常**:\n",
    "- 负数年龄: `-5岁`\n",
    "- 超出范围: `体温150°C`\n",
    "- 极端值: `月薪1000万`\n",
    "- 逻辑错误: `出生日期晚于当前日期`\n",
    "\n",
    "**检测方法**:\n",
    "1. **统计法**: 3σ原则 (均值±3倍标准差)\n",
    "2. **IQR法**: 箱线图方法 (Q1-1.5×IQR, Q3+1.5×IQR)\n",
    "3. **业务规则**: 根据业务逻辑判断\n",
    "\n",
    "**Excel对比**:\n",
    "- Excel: 手动排序查看，使用条件格式高亮\n",
    "- Pandas: `describe()`, `quantile()`, 箱线图"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 创建包含异常值的数据\n",
    "np.random.seed(42)\n",
    "normal_data = np.random.normal(50000, 10000, 95)  # 95个正常值\n",
    "outliers = np.array([150000, 180000, -5000, 200000, 1000])  # 5个异常值\n",
    "salary_data = np.concatenate([normal_data, outliers])\n",
    "\n",
    "df_outliers = pd.DataFrame({\n",
    "    'employee_id': [f'E{str(i).zfill(3)}' for i in range(1, 101)],\n",
    "    'salary': salary_data\n",
    "})\n",
    "\n",
    "print(\"薪资数据统计:\")\n",
    "print(df_outliers['salary'].describe())\n",
    "\n",
    "print(\"\\n排序后的极端值:\")\n",
    "print(\"最低5个:\", df_outliers['salary'].nsmallest(5).values)\n",
    "print(\"最高5个:\", df_outliers['salary'].nlargest(5).values)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# IQR法检测异常值\n",
    "Q1 = df_outliers['salary'].quantile(0.25)\n",
    "Q3 = df_outliers['salary'].quantile(0.75)\n",
    "IQR = Q3 - Q1\n",
    "\n",
    "lower_bound = Q1 - 1.5 * IQR\n",
    "upper_bound = Q3 + 1.5 * IQR\n",
    "\n",
    "print(f\"IQR异常值检测:\")\n",
    "print(f\"  Q1 (25%分位数): ¥{Q1:,.2f}\")\n",
    "print(f\"  Q3 (75%分位数): ¥{Q3:,.2f}\")\n",
    "print(f\"  IQR: ¥{IQR:,.2f}\")\n",
    "print(f\"  正常范围: [¥{lower_bound:,.2f}, ¥{upper_bound:,.2f}]\")\n",
    "\n",
    "outliers_detected = df_outliers[(df_outliers['salary'] < lower_bound) | (df_outliers['salary'] > upper_bound)]\n",
    "print(f\"\\n检测到 {len(outliers_detected)} 个异常值:\")\n",
    "print(outliers_detected)"
   ]
  },
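  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "前面列出的3σ原则也可以顺手实现一版，与IQR法的结果做个对照（示意写法，沿用上面的 `df_outliers`）："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 3σ原则检测异常值: 均值±3倍标准差之外视为异常\n",
    "mean_salary = df_outliers['salary'].mean()\n",
    "std_salary = df_outliers['salary'].std()\n",
    "\n",
    "sigma_lower = mean_salary - 3 * std_salary\n",
    "sigma_upper = mean_salary + 3 * std_salary\n",
    "\n",
    "print(f\"3σ正常范围: [¥{sigma_lower:,.2f}, ¥{sigma_upper:,.2f}]\")\n",
    "\n",
    "outliers_3sigma = df_outliers[(df_outliers['salary'] < sigma_lower) | (df_outliers['salary'] > sigma_upper)]\n",
    "print(f\"3σ法检测到 {len(outliers_3sigma)} 个异常值\")\n",
    "print(\"提示: 极端值本身会抬高均值和标准差, 因此3σ法往往比IQR法漏检更多\")"
   ]
  },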
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 可视化异常值\n",
    "fig, axes = plt.subplots(1, 2, figsize=(14, 5))\n",
    "\n",
    "# 箱线图\n",
    "axes[0].boxplot(df_outliers['salary'], vert=True)\n",
    "axes[0].set_title('Salary Distribution (Boxplot)', fontsize=14, fontweight='bold')\n",
    "axes[0].set_ylabel('Salary (¥)')\n",
    "axes[0].axhline(y=lower_bound, color='r', linestyle='--', label=f'Lower Bound: ¥{lower_bound:,.0f}')\n",
    "axes[0].axhline(y=upper_bound, color='r', linestyle='--', label=f'Upper Bound: ¥{upper_bound:,.0f}')\n",
    "axes[0].legend()\n",
    "axes[0].grid(axis='y', alpha=0.3)\n",
    "\n",
    "# 直方图\n",
    "axes[1].hist(df_outliers['salary'], bins=30, edgecolor='black', alpha=0.7, color='skyblue')\n",
    "axes[1].axvline(x=lower_bound, color='r', linestyle='--', linewidth=2, label='Bounds')\n",
    "axes[1].axvline(x=upper_bound, color='r', linestyle='--', linewidth=2)\n",
    "axes[1].set_title('Salary Distribution (Histogram)', fontsize=14, fontweight='bold')\n",
    "axes[1].set_xlabel('Salary (¥)')\n",
    "axes[1].set_ylabel('Frequency')\n",
    "axes[1].legend()\n",
    "axes[1].grid(axis='y', alpha=0.3)\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3. 重复值 (Duplicates)\n",
    "\n",
    "**定义**: 数据集中出现完全相同或部分相同的记录\n",
    "\n",
    "**类型**:\n",
    "- **完全重复**: 所有字段完全相同\n",
    "- **部分重复**: 关键字段相同(如订单ID相同)\n",
    "\n",
    "**产生原因**:\n",
    "- 数据录入重复\n",
    "- 系统bug导致重复提交\n",
    "- 多数据源合并时重复\n",
    "- ETL流程错误\n",
    "\n",
    "**影响**:\n",
    "- 统计指标失真\n",
    "- 模型训练偏差\n",
    "- 存储空间浪费\n",
    "\n",
    "**Excel对比**:\n",
    "- Excel: 数据→删除重复项(需手动选择列)\n",
    "- Pandas: `drop_duplicates()` 灵活指定"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 创建包含重复值的数据\n",
    "data_duplicates = {\n",
    "    'order_id': ['O001', 'O002', 'O003', 'O002', 'O004', 'O001', 'O005'],\n",
    "    'customer': ['张三', '李四', '王五', '李四', '赵六', '张三', '钱七'],\n",
    "    'product': ['手机', '电脑', '平板', '电脑', '手机', '手机', '耳机'],\n",
    "    'amount': [5999, 8999, 3999, 8999, 5999, 5999, 899],\n",
    "    'date': ['2024-01-15', '2024-01-16', '2024-01-17', '2024-01-16', '2024-01-18', '2024-01-15', '2024-01-19']\n",
    "}\n",
    "\n",
    "df_duplicates = pd.DataFrame(data_duplicates)\n",
    "\n",
    "print(\"原始数据(含重复):\")\n",
    "print(df_duplicates)\n",
    "\n",
    "print(f\"\\n数据总行数: {len(df_duplicates)}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 检测完全重复\n",
    "print(\"完全重复的行:\")\n",
    "fully_duplicated = df_duplicates[df_duplicates.duplicated(keep=False)]\n",
    "print(fully_duplicated.sort_values('order_id'))\n",
    "print(f\"\\n完全重复记录数: {df_duplicates.duplicated().sum()}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 检测基于order_id的重复\n",
    "print(\"基于订单ID的重复:\")\n",
    "order_duplicated = df_duplicates[df_duplicates.duplicated(subset=['order_id'], keep=False)]\n",
    "print(order_duplicated.sort_values('order_id'))\n",
    "print(f\"\\n订单ID重复数: {df_duplicates.duplicated(subset=['order_id']).sum()}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 去重处理\n",
    "print(\"去重策略对比:\\n\")\n",
    "\n",
    "# 策略1: 保留第一条\n",
    "df_dedup_first = df_duplicates.drop_duplicates(subset=['order_id'], keep='first')\n",
    "print(f\"策略1 - 保留第一条: {len(df_dedup_first)}行\")\n",
    "\n",
    "# 策略2: 保留最后一条\n",
    "df_dedup_last = df_duplicates.drop_duplicates(subset=['order_id'], keep='last')\n",
    "print(f\"策略2 - 保留最后一条: {len(df_dedup_last)}行\")\n",
    "\n",
    "# 策略3: 删除所有重复\n",
    "df_dedup_none = df_duplicates.drop_duplicates(subset=['order_id'], keep=False)\n",
    "print(f\"策略3 - 删除所有重复: {len(df_dedup_none)}行\")\n",
    "\n",
    "print(\"\\n去重后数据(保留第一条):\")\n",
    "print(df_dedup_first)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4. 不一致值 (Inconsistent Data)\n",
    "\n",
    "**定义**: 相同含义的数据以不同格式或写法出现\n",
    "\n",
    "**常见情况**:\n",
    "- **大小写不一致**: `Beijing` vs `beijing` vs `BEIJING`\n",
    "- **格式不一致**: `2024-01-15` vs `2024/01/15` vs `15-Jan-2024`\n",
    "- **单位不一致**: `1000元` vs `1千元` vs `1K`\n",
    "- **空格问题**: `  张三  ` vs `张三`\n",
    "- **同义词**: `手机` vs `移动电话` vs `cellphone`\n",
    "- **缩写**: `北京市` vs `北京` vs `BJ`\n",
    "\n",
    "**影响**:\n",
    "- 分组统计错误\n",
    "- 匹配失败\n",
    "- 重复计数\n",
    "\n",
    "**Excel对比**:\n",
    "- Excel: 查找替换、分列、TRIM函数\n",
    "- Pandas: `str`方法、`replace()`、正则表达式"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 创建包含不一致值的数据\n",
    "data_inconsistent = {\n",
    "    'product_name': ['iPhone 15', 'iphone 15', 'IPHONE 15', 'iPhone15', '  iPhone 15  '],\n",
    "    'category': ['手机', '手机', '移动电话', 'Phone', '手机'],\n",
    "    'date': ['2024-01-15', '2024/01/15', '15-Jan-2024', '2024.01.15', '20240115'],\n",
    "    'price': ['6999元', '6999', '¥6999', '6999.0', '6,999'],\n",
    "    'region': ['北京市', '北京', 'Beijing', 'BJ', '  北京  ']\n",
    "}\n",
    "\n",
    "df_inconsistent = pd.DataFrame(data_inconsistent)\n",
    "\n",
    "print(\"包含不一致值的数据:\")\n",
    "print(df_inconsistent)\n",
    "\n",
    "print(\"\\n统计各列的唯一值数量:\")\n",
    "for col in df_inconsistent.columns:\n",
    "    print(f\"{col}: {df_inconsistent[col].nunique()}个唯一值\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 查看每列的唯一值\n",
    "print(\"各列的所有唯一值:\\n\")\n",
    "for col in df_inconsistent.columns:\n",
    "    print(f\"{col}:\")\n",
    "    print(f\"  {df_inconsistent[col].unique()}\")\n",
    "    print()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 清洗不一致值\n",
    "df_clean = df_inconsistent.copy()\n",
    "\n",
    "# 1. 产品名标准化: 统一大小写、去空格\n",
    "df_clean['product_name'] = df_clean['product_name'].str.strip().str.lower().str.replace(' ', '')\n",
    "\n",
    "# 2. 类别标准化: 统一为中文\n",
    "category_mapping = {\n",
    "    '移动电话': '手机',\n",
    "    'Phone': '手机',\n",
    "    'phone': '手机'\n",
    "}\n",
    "df_clean['category'] = df_clean['category'].replace(category_mapping)\n",
    "\n",
    "# 3. 日期标准化: 统一为YYYY-MM-DD格式\n",
    "# 这里简化处理，实际应使用pd.to_datetime()\n",
    "df_clean['date'] = '2024-01-15'  # 简化示例\n",
    "\n",
    "# 4. 价格标准化: 提取数字\n",
    "df_clean['price'] = df_clean['price'].str.replace('[^0-9.]', '', regex=True).astype(float)\n",
    "\n",
    "# 5. 地区标准化\n",
    "region_mapping = {\n",
    "    '北京市': '北京',\n",
    "    'Beijing': '北京',\n",
    "    'BJ': '北京',\n",
    "    '  北京  ': '北京'\n",
    "}\n",
    "df_clean['region'] = df_clean['region'].str.strip().replace(region_mapping)\n",
    "\n",
    "print(\"清洗后的数据:\")\n",
    "print(df_clean)\n",
    "\n",
    "print(\"\\n清洗后唯一值数量:\")\n",
    "for col in df_clean.columns:\n",
    "    print(f\"{col}: {df_clean[col].nunique()}个唯一值\")"
   ]
  },
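  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "上面对日期列做了硬编码简化，实际工作中可以交给 `pd.to_datetime()` 逐元素推断格式。注意 `format='mixed'` 需要 pandas 2.0 及以上版本（此处是版本假设），无法识别的值可用 `errors='coerce'` 转为 `NaT` 而不是直接报错："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 用pd.to_datetime统一日期格式 (format='mixed'为pandas 2.0+特性)\n",
    "dates_raw = pd.Series(['2024-01-15', '2024/01/15', '15-Jan-2024', '2024.01.15', '20240115'])\n",
    "\n",
    "dates_parsed = pd.to_datetime(dates_raw, format='mixed', errors='coerce')\n",
    "\n",
    "print(dates_parsed)\n",
    "print(f\"解析失败(NaT)数量: {dates_parsed.isna().sum()}\")"
   ]
  },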
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "## 三、数据质量检查清单\n",
    "\n",
    "### 拿到数据后的标准检查流程\n",
    "\n",
    "#### ✅ 第一步：整体概览\n",
    "```python\n",
    "df.head()        # 查看前几行\n",
    "df.info()        # 数据类型、缺失值\n",
    "df.shape         # 行列数\n",
    "df.columns       # 列名\n",
    "```\n",
    "\n",
    "#### ✅ 第二步：缺失值检查\n",
    "```python\n",
    "df.isnull().sum()                # 各列缺失数\n",
    "(df.isnull().sum() / len(df))   # 缺失占比\n",
    "```\n",
    "\n",
    "#### ✅ 第三步：重复值检查\n",
    "```python\n",
    "df.duplicated().sum()            # 完全重复数\n",
    "df.duplicated(subset=['id']).sum()  # 关键字段重复\n",
    "```\n",
    "\n",
    "#### ✅ 第四步：数据类型检查\n",
    "```python\n",
    "df.dtypes                        # 数据类型是否正确\n",
    "df.describe()                    # 数值列统计\n",
    "```\n",
    "\n",
    "#### ✅ 第五步：异常值检查\n",
    "```python\n",
    "df.describe()                    # 最大最小值是否合理\n",
    "df[col].value_counts()           # 频数分布\n",
    "```\n",
    "\n",
    "#### ✅ 第六步：一致性检查\n",
    "```python\n",
    "df[col].unique()                 # 唯一值是否存在不一致\n",
    "df[col].nunique()                # 唯一值数量是否合理\n",
    "```"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 完整的数据质量检查函数\n",
    "def data_quality_check(df, name=\"Dataset\"):\n",
    "    \"\"\"\n",
    "    全面的数据质量检查报告\n",
    "    \"\"\"\n",
    "    print(\"=\"*70)\n",
    "    print(f\"数据质量检查报告 - {name}\")\n",
    "    print(\"=\"*70)\n",
    "    \n",
    "    # 1. 基本信息\n",
    "    print(\"\\n【1. 基本信息】\")\n",
    "    print(f\"  · 数据规模: {df.shape[0]:,}行 × {df.shape[1]}列\")\n",
    "    print(f\"  · 内存占用: {df.memory_usage(deep=True).sum() / 1024 / 1024:.2f} MB\")\n",
    "    \n",
    "    # 2. 缺失值\n",
    "    print(\"\\n【2. 缺失值检查】\")\n",
    "    missing = df.isnull().sum()\n",
    "    missing_pct = (missing / len(df) * 100).round(2)\n",
    "    missing_df = pd.DataFrame({\n",
    "        '缺失数量': missing,\n",
    "        '缺失占比(%)': missing_pct\n",
    "    })\n",
    "    missing_df = missing_df[missing_df['缺失数量'] > 0].sort_values('缺失数量', ascending=False)\n",
    "    if len(missing_df) > 0:\n",
    "        print(missing_df)\n",
    "    else:\n",
    "        print(\"  ✓ 无缺失值\")\n",
    "    \n",
    "    # 3. 重复值\n",
    "    print(\"\\n【3. 重复值检查】\")\n",
    "    dup_count = df.duplicated().sum()\n",
    "    print(f\"  · 完全重复行数: {dup_count} ({dup_count/len(df)*100:.2f}%)\")\n",
    "    \n",
    "    # 4. 数据类型\n",
    "    print(\"\\n【4. 数据类型】\")\n",
    "    print(df.dtypes.value_counts())\n",
    "    \n",
    "    # 5. 数值列异常\n",
    "    print(\"\\n【5. 数值列检查】\")\n",
    "    numeric_cols = df.select_dtypes(include=[np.number]).columns\n",
    "    if len(numeric_cols) > 0:\n",
    "        for col in numeric_cols:\n",
    "            print(f\"\\n  {col}:\")\n",
    "            print(f\"    范围: [{df[col].min()}, {df[col].max()}]\")\n",
    "            print(f\"    均值: {df[col].mean():.2f}, 中位数: {df[col].median():.2f}\")\n",
    "            # 检查负数(如果不应该有)\n",
    "            if (df[col] < 0).any():\n",
    "                print(f\"    ⚠ 警告: 发现{(df[col] < 0).sum()}个负数值\")\n",
    "    \n",
    "    # 6. 分类列唯一值\n",
    "    print(\"\\n【6. 分类列唯一值】\")\n",
    "    categorical_cols = df.select_dtypes(include=['object']).columns\n",
    "    for col in categorical_cols:\n",
    "        nunique = df[col].nunique()\n",
    "        print(f\"  · {col}: {nunique}个唯一值\")\n",
    "        if nunique <= 10:  # 唯一值不超过10个时, 直接列出\n",
    "            print(f\"    → {list(df[col].unique())}\")\n",
    "    \n",
    "    print(\"\\n\" + \"=\"*70)\n",
    "    print(\"检查完成!\")\n",
    "    print(\"=\"*70)\n",
    "\n",
    "# 测试检查函数\n",
    "data_quality_check(df_missing, \"缺失值示例数据\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "## 四、课堂练习\n",
    "\n",
    "### 练习1: 识别脏数据\n",
    "下面的数据包含所有四种脏数据类型,请识别并标注出来:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 练习数据\n",
    "practice_data = {\n",
    "    'student_id': ['S001', 'S002', 'S003', 'S002', 'S004', 'S005'],\n",
    "    'name': ['张三', '李四', None, '李四', '王五', '  赵六  '],\n",
    "    'age': [20, 22, 21, 22, -5, 23],\n",
    "    'score': [85, 92, 78, 92, 88, None],\n",
    "    'city': ['北京', 'beijing', '上海', '北京', '广州', 'BEIJING']\n",
    "}\n",
    "\n",
    "df_practice = pd.DataFrame(practice_data)\n",
    "print(\"练习数据:\")\n",
    "print(df_practice)\n",
    "\n",
    "print(\"\\n请识别:\")\n",
    "print(\"1. 哪些是缺失值?\")\n",
    "print(\"2. 哪些是异常值?\")\n",
    "print(\"3. 哪些是重复值?\")\n",
    "print(\"4. 哪些是不一致值?\")\n",
    "\n",
    "# 答案在下方代码块"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 练习答案\n",
    "print(\"=== 练习答案 ===\")\n",
    "print(\"\\n1. 缺失值:\")\n",
    "print(\"  - name列: 第3行(索引2)\")\n",
    "print(\"  - score列: 第6行(索引5)\")\n",
    "\n",
    "print(\"\\n2. 异常值:\")\n",
    "print(\"  - age列: 第5行(索引4)的-5岁(年龄不能为负)\")\n",
    "\n",
    "print(\"\\n3. 重复值:\")\n",
    "print(\"  - student_id='S002'的学生记录重复(第2行和第4行)\")\n",
    "\n",
    "print(\"\\n4. 不一致值:\")\n",
    "print(\"  - city列: '北京', 'beijing', 'BEIJING'表示同一城市但写法不一致\")\n",
    "print(\"  - name列: '  赵六  '前后有空格\")\n",
    "\n",
    "# 使用检查函数验证\n",
    "data_quality_check(df_practice, \"练习数据\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "## 五、本讲总结\n",
    "\n",
    "### 核心要点\n",
    "\n",
    "1. **数据分析六步法**\n",
    "   - 问题定义 → 数据采集 → 数据准备 → 数据分析 → 可视化呈现 → 决策建议\n",
    "   - 数据准备占60-70%工作量\n",
    "\n",
    "2. **四大脏数据类型**\n",
    "   - 缺失值: `isnull()`, `fillna()`, `dropna()`\n",
    "   - 异常值: IQR法, 3σ法, 业务规则\n",
    "   - 重复值: `duplicated()`, `drop_duplicates()`\n",
    "   - 不一致值: `str`方法, `replace()`, 标准化\n",
    "\n",
    "3. **数据质量检查流程**\n",
    "   - 整体概览 → 缺失值 → 重复值 → 类型 → 异常值 → 一致性\n",
    "   - 建立检查清单,养成良好习惯\n",
    "\n",
    "### 与Excel对比\n",
    "\n",
    "| 任务 | Excel | Pandas |\n",
    "|------|-------|--------|\n",
    "| 查看数据 | 滚动浏览 | `head()`, `info()` |\n",
    "| 缺失值 | 手动查找空白 | `isnull().sum()` |\n",
    "| 去重 | 数据→删除重复项 | `drop_duplicates()` |\n",
    "| 异常值 | 排序+筛选 | `describe()`, IQR法 |\n",
    "| 数据量 | 上限约104万行(1,048,576) | 千万级别 |\n",
    "| 自动化 | 录制宏 | Python脚本 |\n",
    "\n",
    "### 下节预告\n",
    "**第2讲: 数据准备与Pandas基础**\n",
    "- 数据读取与导出\n",
    "- DataFrame核心操作\n",
    "- 数据选择与过滤\n",
    "- 排序与索引\n",
    "\n",
    "---"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 课后作业\n",
    "\n",
    "### 作业1: 数据质量报告\n",
    "使用提供的销售数据,完成一份完整的数据质量检查报告,包含:\n",
    "1. 基本信息统计\n",
    "2. 四类脏数据识别\n",
    "3. 数据质量评分(优/良/中/差)\n",
    "4. 清洗建议\n",
    "\n",
    "### 作业2: 设计分析问题\n",
    "针对电商销售场景,设计3个具体的分析问题,要求:\n",
    "- 问题明确,可量化\n",
    "- 说明需要哪些数据\n",
    "- 预期的分析方法\n",
    "- 可能的业务价值\n",
    "\n",
    "### 作业3: 编写检查函数\n",
    "改进`data_quality_check()`函数,增加:\n",
    "- 异常值自动检测(IQR法)\n",
    "- 不一致值提示\n",
    "- 生成HTML格式报告\n",
    "- 数据质量评分"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
