{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 第三阶段 - 第4讲：数据格式规范与去重\n",
    "\n",
    "## 学习目标\n",
    "- 掌握数据类型转换的各种方法\n",
    "- 熟练处理日期时间数据（解析、提取、运算）\n",
    "- 掌握字符串处理的常用方法\n",
    "- 学会使用正则表达式清洗文本数据\n",
    "- 掌握去重的多种策略\n",
    "- 完成数据标准化和规范化\n",
    "\n",
    "**重要性**: ⭐⭐⭐⭐ 数据清洗的最后一步，确保数据格式统一！\n",
    "\n",
    "---"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 导入必要的库\n",
    "import pandas as pd\n",
    "import numpy as np\n",
    "import matplotlib.pyplot as plt\n",
    "import seaborn as sns\n",
    "from datetime import datetime, timedelta\n",
    "import re  # 正则表达式\n",
    "import warnings\n",
    "warnings.filterwarnings('ignore')\n",
    "\n",
    "# 设置\n",
    "pd.set_option('display.max_columns', None)\n",
    "pd.set_option('display.max_rows', 100)\n",
    "pd.set_option('display.float_format', '{:.2f}'.format)\n",
    "\n",
    "# 中文显示\n",
    "plt.rcParams['font.sans-serif'] = ['Arial Unicode MS', 'SimHei']\n",
    "plt.rcParams['axes.unicode_minus'] = False\n",
    "\n",
    "print(\"✅ 环境配置完成\")\n",
    "print(f\"Pandas版本: {pd.__version__}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 一、数据类型转换\n",
    "\n",
    "### 1.1 为什么要关注数据类型？\n",
    "\n",
    "**重要性**:\n",
    "- 影响内存占用\n",
    "- 影响计算性能\n",
    "- 影响函数可用性\n",
    "- 影响数据准确性\n",
    "\n",
    "**常见问题**:\n",
    "- 数字被识别为字符串 → 无法计算\n",
    "- 日期被识别为字符串 → 无法排序和筛选\n",
    "- 类型不匹配 → 合并/比较出错\n",
    "\n",
    "### 1.2 Pandas主要数据类型\n",
    "\n",
    "| 类型 | 说明 | 示例 |\n",
    "|------|------|------|\n",
    "| `int64` | 整数 | 1, 100, -5 |\n",
    "| `float64` | 浮点数 | 3.14, -0.5 |\n",
    "| `object` | 字符串或混合 | 'abc', 'hello' |\n",
    "| `bool` | 布尔值 | True, False |\n",
    "| `datetime64` | 日期时间 | 2024-01-01 |\n",
    "| `category` | 分类 | '男', '女' |\n",
    "| `timedelta` | 时间差 | 3 days |"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 创建示例数据 - 包含类型问题\n",
    "messy_types = {\n",
    "    'id': ['1', '2', '3', '4', '5'],\n",
    "    'name': ['张三', '李四', '王五', '赵六', '钱七'],\n",
    "    'age': ['28', '35', '42', 'unknown', '31'],\n",
    "    'salary': ['8000', '12000.5', '15000', '10000', '9,500'],\n",
    "    'join_date': ['2020-01-15', '2019/03/20', '15-Jun-2021', '2022.05.10', '20230815'],\n",
    "    'is_active': ['yes', 'no', 'yes', 'yes', 'no'],\n",
    "    'score': [85, 92, 88, 95, 90]\n",
    "}\n",
    "\n",
    "df_types = pd.DataFrame(messy_types)\n",
    "\n",
    "print(\"原始数据:\")\n",
    "print(df_types)\n",
    "print(\"\\n数据类型:\")\n",
    "print(df_types.dtypes)\n",
    "print(\"\\n问题: 很多应该是数字的列被识别为object(字符串)\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1.3 类型转换方法\n",
    "\n",
    "**Excel对比**:\n",
    "- Excel: 手动设置单元格格式，或用VALUE()、TEXT()函数\n",
    "- Pandas: `astype()`, `to_numeric()`, `to_datetime()`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 方法1: astype() - 最基础的类型转换\n",
    "df_convert = df_types.copy()\n",
    "\n",
    "print(\"=== astype()类型转换 ===\")\n",
    "\n",
    "# 转换id为整数\n",
    "df_convert['id'] = df_convert['id'].astype(int)\n",
    "print(f\"1. id转换为int: {df_convert['id'].dtype}\")\n",
    "\n",
    "# 转换score为字符串\n",
    "df_convert['score_str'] = df_convert['score'].astype(str)\n",
    "print(f\"2. score转换为str: {df_convert['score_str'].dtype}\")\n",
    "\n",
    "print(\"\\n转换后的数据类型:\")\n",
    "print(df_convert.dtypes)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 方法2: to_numeric() - 智能数字转换\n",
    "print(\"\\n=== to_numeric()智能转换 ===\")\n",
    "\n",
    "# 问题: age列有'unknown'，直接astype会报错\n",
    "try:\n",
    "    df_convert['age'].astype(int)\n",
    "except ValueError as e:\n",
    "    print(f\"直接转换出错: {e}\")\n",
    "\n",
    "# 使用to_numeric处理\n",
    "# errors='coerce': 无法转换的变为NaN\n",
    "df_convert['age_numeric'] = pd.to_numeric(df_convert['age'], errors='coerce')\n",
    "print(f\"\\nto_numeric转换结果:\")\n",
    "print(df_convert[['age', 'age_numeric']])\n",
    "\n",
    "# errors='ignore': 无法转换的保持原样\n",
    "# 注意: errors='ignore'自Pandas 2.2起已弃用，新代码建议统一使用errors='coerce'\n",
    "df_convert['age_ignore'] = pd.to_numeric(df_convert['age'], errors='ignore')\n",
    "print(\"\\nerrors='ignore'结果:\")\n",
    "print(df_convert[['age', 'age_ignore']])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 方法3: 处理特殊格式的数字\n",
    "print(\"=== 处理特殊格式数字 ===\")\n",
    "\n",
    "# salary列有逗号分隔符\n",
    "print(\"\\n原始salary:\")\n",
    "print(df_convert['salary'])\n",
    "\n",
    "# 先去除逗号，再转换\n",
    "df_convert['salary_clean'] = df_convert['salary'].str.replace(',', '').astype(float)\n",
    "print(\"\\n清洗后的salary:\")\n",
    "print(df_convert[['salary', 'salary_clean']])\n",
    "\n",
    "print(f\"\\n类型: {df_convert['salary_clean'].dtype}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 方法4: 布尔值转换\n",
    "print(\"=== 布尔值转换 ===\")\n",
    "\n",
    "# 定义映射\n",
    "bool_mapping = {'yes': True, 'no': False}\n",
    "df_convert['is_active_bool'] = df_convert['is_active'].map(bool_mapping)\n",
    "\n",
    "print(df_convert[['is_active', 'is_active_bool']])\n",
    "print(f\"\\n类型: {df_convert['is_active_bool'].dtype}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 方法5: category类型 - 优化内存\n",
    "print(\"=== category类型 ===\")\n",
    "\n",
    "# 对于重复值多的列，使用category可以显著节省内存\n",
    "# 注意: 行数很少或唯一值很多时，category的额外开销可能得不偿失\n",
    "city_series = pd.Series(['北京', '上海', '广州', '深圳'] * 1000)\n",
    "city_cat = city_series.astype('category')\n",
    "\n",
    "print(f\"object类型内存: {city_series.memory_usage(deep=True)} bytes\")\n",
    "print(f\"category类型内存: {city_cat.memory_usage(deep=True)} bytes\")\n",
    "print(f\"节省: {(1 - city_cat.memory_usage(deep=True)/city_series.memory_usage(deep=True))*100:.1f}%\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 1.4 批量类型转换"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 创建更大的数据集\n",
    "large_data = {\n",
    "    'int_col': ['1', '2', '3', '4', '5'] * 20,\n",
    "    'float_col': ['1.5', '2.3', '3.7', '4.2', '5.8'] * 20,\n",
    "    'cat_col': ['A', 'B', 'C', 'A', 'B'] * 20\n",
    "}\n",
    "df_large = pd.DataFrame(large_data)\n",
    "\n",
    "print(\"=== 批量类型转换 ===\")\n",
    "print(\"\\n转换前:\")\n",
    "print(df_large.dtypes)\n",
    "print(f\"总内存: {df_large.memory_usage(deep=True).sum()} bytes\")\n",
    "\n",
    "# 批量转换\n",
    "df_large = df_large.astype({\n",
    "    'int_col': 'int32',      # 用int32而不是int64节省内存\n",
    "    'float_col': 'float32',  # 用float32而不是float64\n",
    "    'cat_col': 'category'    # 分类型\n",
    "})\n",
    "\n",
    "print(\"\\n转换后:\")\n",
    "print(df_large.dtypes)\n",
    "print(f\"总内存: {df_large.memory_usage(deep=True).sum()} bytes\")\n",
    "print(f\"\\n内存优化显著！\")"
   ]
  },
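  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "除了手动指定dtype，`pd.to_numeric()`的`downcast`参数也可以自动选择能容纳数据的最小数值类型，作为批量转换的一种补充写法。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 补充: 用downcast自动选择更小的数值类型\n",
    "s = pd.Series([1, 2, 3, 4, 5])\n",
    "print(f\"默认类型: {s.dtype}\")\n",
    "\n",
    "# downcast='integer'会选择能装下所有值的最小整数类型\n",
    "s_small = pd.to_numeric(s, downcast='integer')\n",
    "print(f\"downcast后: {s_small.dtype}\")"
   ]
  },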
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "## 二、日期时间处理\n",
    "\n",
    "### 2.1 为什么日期处理很重要？\n",
    "\n",
    "**业务场景**:\n",
    "- 时间序列分析\n",
    "- 计算时间差（用户留存、订单周期）\n",
    "- 按时间分组（日/周/月/季度/年）\n",
    "- 工作日/节假日判断\n",
    "\n",
    "**常见问题**:\n",
    "- 日期格式不统一\n",
    "- 字符串无法排序和计算\n",
    "- 时区问题\n",
    "\n",
    "### 2.2 日期解析 - to_datetime()\n",
    "\n",
    "**Excel对比**:\n",
    "- Excel: 日期序列值，或用DATEVALUE()函数\n",
    "- Pandas: `pd.to_datetime()` 自动识别多种格式"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 日期解析示例\n",
    "print(\"=== 日期解析 ===\")\n",
    "\n",
    "# 回到我们的数据\n",
    "print(\"\\n原始join_date（多种格式）:\")\n",
    "print(df_convert['join_date'])\n",
    "print(f\"类型: {df_convert['join_date'].dtype}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 自动解析（format='mixed'让每个元素单独推断格式，需Pandas>=2.0）\n",
    "df_convert['join_date_parsed'] = pd.to_datetime(df_convert['join_date'], format='mixed', errors='coerce')\n",
    "\n",
    "print(\"\\n解析后:\")\n",
    "print(df_convert[['join_date', 'join_date_parsed']])\n",
    "print(f\"\\n类型: {df_convert['join_date_parsed'].dtype}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 各种日期格式的解析\n",
    "date_formats = {\n",
    "    '标准格式': '2024-01-15',\n",
    "    '斜杠分隔': '2024/01/15',\n",
    "    '点分隔': '2024.01.15',\n",
    "    '中文': '2024年1月15日',\n",
    "    '无分隔': '20240115',\n",
    "    '美式': '01/15/2024',\n",
    "    '欧式': '15/01/2024',\n",
    "    '月份名': '15-Jan-2024',\n",
    "    '时间戳': '1705276800',\n",
    "    '带时间': '2024-01-15 14:30:00'\n",
    "}\n",
    "\n",
    "print(\"\\n=== 各种格式解析测试 ===\")\n",
    "for name, date_str in date_formats.items():\n",
    "    try:\n",
    "        if name == '时间戳':\n",
    "            parsed = pd.to_datetime(int(date_str), unit='s')\n",
    "        else:\n",
    "            parsed = pd.to_datetime(date_str)\n",
    "        print(f\"{name:12} {date_str:20} → {parsed}\")\n",
    "    except Exception:\n",
    "        print(f\"{name:12} {date_str:20} → 解析失败\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 指定格式解析（更快）\n",
    "print(\"\\n=== 指定格式解析 ===\")\n",
    "\n",
    "# 创建示例\n",
    "dates_specific = pd.Series(['15/01/2024', '20/03/2024', '25/06/2024'])\n",
    "\n",
    "# 方式1: 自动推断，dayfirst=True提示“日在前”（较慢）\n",
    "parsed_auto = pd.to_datetime(dates_specific, dayfirst=True)\n",
    "print(\"自动推断:\")\n",
    "print(parsed_auto)\n",
    "\n",
    "# 方式2: 指定格式（快）\n",
    "parsed_format = pd.to_datetime(dates_specific, format='%d/%m/%Y')\n",
    "print(\"\\n指定格式:\")\n",
    "print(parsed_format)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.3 日期提取\n",
    "\n",
    "**从日期中提取各种信息**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 创建完整的日期数据\n",
    "date_data = pd.DataFrame({\n",
    "    'date': pd.date_range('2024-01-01', periods=100, freq='D')\n",
    "})\n",
    "\n",
    "print(\"=== 日期提取 ===\")\n",
    "\n",
    "# 提取年月日\n",
    "date_data['year'] = date_data['date'].dt.year\n",
    "date_data['month'] = date_data['date'].dt.month\n",
    "date_data['day'] = date_data['date'].dt.day\n",
    "\n",
    "# 提取季度和周\n",
    "date_data['quarter'] = date_data['date'].dt.quarter\n",
    "date_data['week'] = date_data['date'].dt.isocalendar().week\n",
    "\n",
    "# 星期几\n",
    "date_data['weekday'] = date_data['date'].dt.weekday  # 0=周一\n",
    "date_data['day_name'] = date_data['date'].dt.day_name()  # 英文名称\n",
    "\n",
    "# 是否周末\n",
    "date_data['is_weekend'] = date_data['weekday'].isin([5, 6])\n",
    "\n",
    "# 月初月末\n",
    "date_data['is_month_start'] = date_data['date'].dt.is_month_start\n",
    "date_data['is_month_end'] = date_data['date'].dt.is_month_end\n",
    "\n",
    "# 一年中的第几天\n",
    "date_data['day_of_year'] = date_data['date'].dt.dayofyear\n",
    "\n",
    "print(\"\\n提取的日期信息:\")\n",
    "print(date_data.head(10))\n",
    "\n",
    "print(\"\\n周末天数:\")\n",
    "print(f\"周末: {date_data['is_weekend'].sum()}天\")\n",
    "print(f\"工作日: {(~date_data['is_weekend']).sum()}天\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.4 日期运算\n",
    "\n",
    "**计算时间差、日期偏移**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 创建示例：员工入职和离职数据\n",
    "employee_dates = pd.DataFrame({\n",
    "    'name': ['张三', '李四', '王五', '赵六', '钱七'],\n",
    "    'hire_date': pd.to_datetime(['2020-01-15', '2019-06-20', '2021-03-10', '2018-09-05', '2022-11-20']),\n",
    "    'leave_date': pd.to_datetime(['2023-12-31', None, None, '2024-01-15', None])\n",
    "})\n",
    "\n",
    "print(\"=== 日期运算 ===\")\n",
    "print(\"\\n员工日期数据:\")\n",
    "print(employee_dates)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 计算在职天数\n",
    "today = pd.Timestamp('2024-01-01')\n",
    "\n",
    "# 对于离职的用离职日期，否则用今天\n",
    "employee_dates['end_date'] = employee_dates['leave_date'].fillna(today)\n",
    "employee_dates['days_employed'] = (employee_dates['end_date'] - employee_dates['hire_date']).dt.days\n",
    "\n",
    "# 计算年数\n",
    "employee_dates['years_employed'] = (employee_dates['days_employed'] / 365.25).round(1)\n",
    "\n",
    "print(\"\\n在职时长:\")\n",
    "print(employee_dates[['name', 'hire_date', 'leave_date', 'days_employed', 'years_employed']])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 日期偏移\n",
    "print(\"\\n=== 日期偏移 ===\")\n",
    "\n",
    "base_date = pd.Timestamp('2024-01-15')\n",
    "print(f\"基准日期: {base_date}\")\n",
    "\n",
    "# 加减天数\n",
    "print(f\"\\n+7天: {base_date + pd.Timedelta(days=7)}\")\n",
    "print(f\"-30天: {base_date - pd.Timedelta(days=30)}\")\n",
    "\n",
    "# 加减月份（使用DateOffset）\n",
    "print(f\"+1月: {base_date + pd.DateOffset(months=1)}\")\n",
    "print(f\"+1年: {base_date + pd.DateOffset(years=1)}\")\n",
    "\n",
    "# 到月初/月末\n",
    "# 注意: MonthBegin(0)会向前滚动到下月1日，取本月第一天应使用replace(day=1)\n",
    "print(f\"\\n本月第一天: {base_date.replace(day=1)}\")\n",
    "print(f\"本月最后一天: {base_date + pd.offsets.MonthEnd(0)}\")\n",
    "print(f\"下月第一天: {base_date + pd.offsets.MonthBegin(1)}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 计算工作日\n",
    "print(\"\\n=== 工作日计算 ===\")\n",
    "\n",
    "start = pd.Timestamp('2024-01-01')\n",
    "end = pd.Timestamp('2024-01-31')\n",
    "\n",
    "# 生成日期范围\n",
    "date_range = pd.date_range(start, end, freq='D')\n",
    "\n",
    "# 计算工作日（周一到周五）\n",
    "workdays = date_range[date_range.weekday < 5]\n",
    "\n",
    "print(f\"时间范围: {start.date()} 至 {end.date()}\")\n",
    "print(f\"总天数: {len(date_range)}天\")\n",
    "print(f\"工作日: {len(workdays)}天\")\n",
    "print(f\"周末: {len(date_range) - len(workdays)}天\")"
   ]
  },
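  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "上面用`weekday`手动筛选工作日。作为补充，NumPy内置的`np.busday_count`可以直接统计区间内的工作日，并支持排除节假日（下面的节假日列表仅为演示假设，实际应使用官方节假日数据）。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 用NumPy的busday_count统计工作日\n",
    "# 注意: 区间为左闭右开 [start, end)\n",
    "workday_count = np.busday_count('2024-01-01', '2024-02-01')\n",
    "print(f\"2024年1月工作日: {workday_count}天\")\n",
    "\n",
    "# 排除自定义节假日（示例: 元旦）\n",
    "workday_no_holiday = np.busday_count('2024-01-01', '2024-02-01', holidays=['2024-01-01'])\n",
    "print(f\"扣除元旦后: {workday_no_holiday}天\")"
   ]
  },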
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 2.5 日期格式化输出"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 日期格式化\n",
    "print(\"=== 日期格式化 ===\")\n",
    "\n",
    "sample_date = pd.Timestamp('2024-01-15 14:30:45')\n",
    "\n",
    "formats = {\n",
    "    '标准格式': '%Y-%m-%d',\n",
    "    '斜杠分隔': '%Y/%m/%d',\n",
    "    '中文格式': '%Y年%m月%d日',\n",
    "    '带时间': '%Y-%m-%d %H:%M:%S',\n",
    "    '12小时制': '%Y-%m-%d %I:%M:%S %p',\n",
    "    '月份名': '%d-%b-%Y',\n",
    "    '完整月份': '%B %d, %Y',\n",
    "    '星期': '%A, %Y-%m-%d'\n",
    "}\n",
    "\n",
    "print(f\"\\n原始: {sample_date}\\n\")\n",
    "for name, fmt in formats.items():\n",
    "    formatted = sample_date.strftime(fmt)\n",
    "    print(f\"{name:12} {fmt:25} → {formatted}\")"
   ]
  },
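  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "2.1节提到的时区问题可以用`tz_localize`和`tz_convert`处理：先把“朴素”时间标注上时区，再转换到目标时区。下面是一个小示例。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 时区处理示例\n",
    "ts = pd.Series(pd.to_datetime(['2024-01-15 14:30:00']))\n",
    "\n",
    "ts_sh = ts.dt.tz_localize('Asia/Shanghai')  # 标注为北京时间\n",
    "ts_utc = ts_sh.dt.tz_convert('UTC')         # 转换为UTC\n",
    "\n",
    "print(f\"本地时间: {ts_sh[0]}\")\n",
    "print(f\"UTC时间: {ts_utc[0]}\")"
   ]
  },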
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "## 三、字符串处理\n",
    "\n",
    "### 3.1 字符串处理的重要性\n",
    "\n",
    "**常见问题**:\n",
    "- 前后空格\n",
    "- 大小写不一致\n",
    "- 特殊字符\n",
    "- 格式不统一\n",
    "- 需要提取信息\n",
    "\n",
    "**Excel对比**:\n",
    "- Excel: TRIM(), UPPER(), LOWER(), LEFT(), RIGHT(), MID(), SUBSTITUTE()\n",
    "- Pandas: `str`方法链，功能更强大\n",
    "\n",
    "### 3.2 基础字符串操作"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 创建包含各种字符串问题的数据\n",
    "messy_strings = pd.DataFrame({\n",
    "    'name': ['  张三  ', 'LI Si', '王五', '  ZHAO Liu  ', 'qian Qi'],\n",
    "    'email': ['ZHANG@COMPANY.COM', 'lisi@company.com', 'wangwu@COMPANY.com', 'zhaoliu@company.COM', 'qianqi@Company.Com'],\n",
    "    'phone': ['138-1234-5678', '13912345678', '139 1234 5678', '+86-139-1234-5678', '(139)1234-5678'],\n",
    "    'address': ['北京市朝阳区', '上海市 浦东新区', '  广州市天河区  ', '深圳市南山区！', '杭州市 西湖区？'],\n",
    "    'id_card': ['110101199001011234', '31010119850615432X', '440101199203051234', '440301198807121234', '330101199512251234']\n",
    "})\n",
    "\n",
    "print(\"=== 原始字符串数据 ===\")\n",
    "print(messy_strings)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 1. 去除空格\n",
    "print(\"\\n=== 1. 去除空格 ===\")\n",
    "\n",
    "df_str = messy_strings.copy()\n",
    "\n",
    "df_str['name_strip'] = df_str['name'].str.strip()  # 前后空格\n",
    "df_str['address_strip'] = df_str['address'].str.strip()  # 前后空格\n",
    "\n",
    "print(\"去除前后空格:\")\n",
    "print(df_str[['name', 'name_strip']])\n",
    "\n",
    "# 去除所有空格\n",
    "df_str['phone_no_space'] = df_str['phone'].str.replace(' ', '')\n",
    "print(\"\\n去除所有空格:\")\n",
    "print(df_str[['phone', 'phone_no_space']])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 2. 大小写转换\n",
    "print(\"\\n=== 2. 大小写转换 ===\")\n",
    "\n",
    "df_str['email_lower'] = df_str['email'].str.lower()  # 全小写\n",
    "df_str['email_upper'] = df_str['email'].str.upper()  # 全大写\n",
    "df_str['name_title'] = df_str['name'].str.strip().str.title()  # 首字母大写\n",
    "\n",
    "print(\"邮箱标准化（全小写）:\")\n",
    "print(df_str[['email', 'email_lower']].head())\n",
    "\n",
    "print(\"\\n姓名标准化（首字母大写）:\")\n",
    "print(df_str[['name', 'name_title']].head())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 3. 替换和删除\n",
    "print(\"\\n=== 3. 替换和删除 ===\")\n",
    "\n",
    "# 替换字符\n",
    "df_str['phone_clean'] = (df_str['phone']\n",
    "    .str.replace('-', '', regex=False)\n",
    "    .str.replace(' ', '', regex=False)\n",
    "    .str.replace('(', '', regex=False)\n",
    "    .str.replace(')', '', regex=False)\n",
    "    .str.replace('+86', '', regex=False)\n",
    ")\n",
    "\n",
    "print(\"清洗电话号码:\")\n",
    "print(df_str[['phone', 'phone_clean']])\n",
    "\n",
    "# 删除标点符号\n",
    "df_str['address_clean'] = df_str['address'].str.replace('[！？。，]', '', regex=True)\n",
    "print(\"\\n删除标点:\")\n",
    "print(df_str[['address', 'address_clean']])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.3 字符串提取和分割"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 字符串提取\n",
    "print(\"=== 字符串提取 ===\")\n",
    "\n",
    "# 从身份证号提取信息\n",
    "df_str['birth_year'] = df_str['id_card'].str[6:10]\n",
    "df_str['birth_month'] = df_str['id_card'].str[10:12]\n",
    "df_str['birth_day'] = df_str['id_card'].str[12:14]\n",
    "df_str['gender_code'] = df_str['id_card'].str[16:17]\n",
    "\n",
    "# 性别判断（倒数第二位，奇数为男，偶数为女）\n",
    "df_str['gender'] = df_str['gender_code'].apply(\n",
    "    lambda x: '男' if x.isdigit() and int(x) % 2 == 1 else '女'\n",
    ")\n",
    "\n",
    "print(\"从身份证号提取信息:\")\n",
    "print(df_str[['id_card', 'birth_year', 'birth_month', 'birth_day', 'gender']])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 字符串分割\n",
    "print(\"\\n=== 字符串分割 ===\")\n",
    "\n",
    "# 创建包含多部分的数据\n",
    "address_data = pd.DataFrame({\n",
    "    'full_address': [\n",
    "        '北京市-朝阳区-建国门外大街1号',\n",
    "        '上海市-浦东新区-世纪大道100号',\n",
    "        '广州市-天河区-天河路123号'\n",
    "    ]\n",
    "})\n",
    "\n",
    "# 分割成多列\n",
    "address_split = address_data['full_address'].str.split('-', expand=True)\n",
    "address_split.columns = ['省市', '区', '街道']\n",
    "\n",
    "result = pd.concat([address_data, address_split], axis=1)\n",
    "print(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.4 字符串查询和匹配"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 字符串查询\n",
    "print(\"=== 字符串查询 ===\")\n",
    "\n",
    "# 包含\n",
    "print(\"\\n地址中包含'区'的:\")\n",
    "contains_qu = df_str[df_str['address'].str.contains('区', na=False)]\n",
    "print(contains_qu[['name', 'address']])\n",
    "\n",
    "# 以...开头\n",
    "print(\"\\n地址以'北京'开头的:\")\n",
    "starts_beijing = df_str[df_str['address'].str.startswith('北京', na=False)]\n",
    "print(starts_beijing[['name', 'address']])\n",
    "\n",
    "# 以...结尾\n",
    "print(\"\\n邮箱以.com结尾的:\")\n",
    "ends_com = df_str[df_str['email'].str.endswith('.com', na=False)]\n",
    "print(ends_com[['name', 'email']])"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.5 正则表达式 (Regex)\n",
    "\n",
    "**强大的文本模式匹配工具**\n",
    "\n",
    "**常用模式**:\n",
    "- `\\d`: 数字\n",
    "- `\\w`: 字母数字下划线\n",
    "- `\\s`: 空白字符\n",
    "- `.`: 任意字符\n",
    "- `*`: 0次或多次\n",
    "- `+`: 1次或多次\n",
    "- `?`: 0次或1次\n",
    "- `[]`: 字符集\n",
    "- `^`: 开头\n",
    "- `$`: 结尾"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 正则表达式示例\n",
    "print(\"=== 正则表达式 ===\")\n",
    "\n",
    "# 创建测试数据\n",
    "text_data = pd.DataFrame({\n",
    "    'text': [\n",
    "        '我的手机号是13812345678',\n",
    "        '联系电话: 021-12345678',\n",
    "        '邮箱: user@example.com',\n",
    "        '订单号: ORD20240115001',\n",
    "        '价格：¥1,234.56元'\n",
    "    ]\n",
    "})\n",
    "\n",
    "print(\"原始文本:\")\n",
    "print(text_data)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 提取手机号（11位数字）\n",
    "text_data['mobile'] = text_data['text'].str.extract(r'(1[3-9]\\d{9})')\n",
    "print(\"\\n提取手机号:\")\n",
    "print(text_data[['text', 'mobile']])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 提取邮箱\n",
    "text_data['email_extracted'] = text_data['text'].str.extract(r'([\\w\\.-]+@[\\w\\.-]+\\.\\w+)')\n",
    "print(\"\\n提取邮箱:\")\n",
    "print(text_data[['text', 'email_extracted']].dropna())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 提取数字\n",
    "text_data['numbers'] = text_data['text'].str.findall(r'\\d+').str.join(',')\n",
    "print(\"\\n提取所有数字:\")\n",
    "print(text_data[['text', 'numbers']])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 验证格式\n",
    "print(\"\\n=== 格式验证 ===\")\n",
    "\n",
    "# 验证邮箱格式\n",
    "emails = pd.Series([\n",
    "    'user@example.com',\n",
    "    'invalid.email',\n",
    "    'another@test.co.uk',\n",
    "    '@wrong.com',\n",
    "    'good_email@company.com'\n",
    "])\n",
    "\n",
    "# 邮箱正则\n",
    "email_pattern = r'^[\\w\\.-]+@[\\w\\.-]+\\.\\w+$'\n",
    "emails_valid = emails.str.match(email_pattern)\n",
    "\n",
    "result_df = pd.DataFrame({\n",
    "    'email': emails,\n",
    "    'is_valid': emails_valid\n",
    "})\n",
    "print(result_df)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 3.6 字符串方法大全\n",
    "\n",
    "| 方法 | 说明 | Excel等价 |\n",
    "|------|------|----------|\n",
    "| `str.strip()` | 去除前后空格 | `TRIM()` |\n",
    "| `str.lower()` | 转小写 | `LOWER()` |\n",
    "| `str.upper()` | 转大写 | `UPPER()` |\n",
    "| `str.title()` | 首字母大写 | `PROPER()` |\n",
    "| `str.replace()` | 替换 | `SUBSTITUTE()` |\n",
    "| `str.split()` | 分割 | `分列` |\n",
    "| `str.contains()` | 包含 | `SEARCH()` |\n",
    "| `str.startswith()` | 以...开头 | `LEFT()+IF()` |\n",
    "| `str.endswith()` | 以...结尾 | `RIGHT()+IF()` |\n",
    "| `str.len()` | 长度 | `LEN()` |\n",
    "| `str[:n]` | 取前n个字符 | `LEFT()` |\n",
    "| `str[-n:]` | 取后n个字符 | `RIGHT()` |\n",
    "| `str[a:b]` | 取中间字符 | `MID()` |\n",
    "| `str.extract()` | 正则提取 | 复杂公式 |\n",
    "| `str.findall()` | 查找所有匹配 | 无直接等价 |"
   ]
  },
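  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "上表中的`str.len()`和切片方法前文没有单独演示，这里补一个简短示例。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 字符串长度与切片示例\n",
    "codes = pd.Series(['O001-手机', 'O002-电脑', 'O003-平板'])\n",
    "\n",
    "print(\"长度:\")\n",
    "print(codes.str.len())\n",
    "\n",
    "print(\"\\n前4个字符（类似LEFT）:\")\n",
    "print(codes.str[:4])\n",
    "\n",
    "print(\"\\n后2个字符（类似RIGHT）:\")\n",
    "print(codes.str[-2:])"
   ]
  },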
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "## 四、数据去重\n",
    "\n",
    "### 4.1 重复数据的危害\n",
    "\n",
    "**影响**:\n",
    "- 统计指标失真（计数、求和）\n",
    "- 存储空间浪费\n",
    "- 分析结果偏差\n",
    "- 用户体验差（重复通知）\n",
    "\n",
    "**产生原因**:\n",
    "- 数据录入重复\n",
    "- 系统bug\n",
    "- 数据合并时重复\n",
    "- ETL流程错误\n",
    "\n",
    "### 4.2 检测重复值"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 创建包含重复的数据（O001、O002为完全重复录入，O003为同单号但金额、日期不同）\n",
    "duplicate_data = pd.DataFrame({\n",
    "    'order_id': ['O001', 'O002', 'O003', 'O002', 'O004', 'O001', 'O005', 'O003'],\n",
    "    'customer': ['张三', '李四', '王五', '李四', '赵六', '张三', '钱七', '王五'],\n",
    "    'product': ['手机', '电脑', '平板', '电脑', '手机', '手机', '耳机', '平板'],\n",
    "    'amount': [3999, 8999, 2999, 8999, 3999, 3999, 299, 3199],\n",
    "    'date': pd.to_datetime(['2024-01-01', '2024-01-02', '2024-01-03', '2024-01-02', \n",
    "                             '2024-01-04', '2024-01-01', '2024-01-05', '2024-01-06'])\n",
    "})\n",
    "\n",
    "print(\"=== 包含重复的数据 ===\")\n",
    "print(duplicate_data)\n",
    "print(f\"\\n总行数: {len(duplicate_data)}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 检测完全重复的行\n",
    "print(\"\\n=== 检测完全重复 ===\")\n",
    "print(f\"完全重复的行数: {duplicate_data.duplicated().sum()}\")\n",
    "\n",
    "# 查看重复的行\n",
    "print(\"\\n完全重复的记录:\")\n",
    "fully_duplicated = duplicate_data[duplicate_data.duplicated(keep=False)]\n",
    "print(fully_duplicated.sort_values('order_id'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 基于特定列检测重复\n",
    "print(\"\\n=== 基于order_id检测重复 ===\")\n",
    "order_duplicated = duplicate_data.duplicated(subset=['order_id'], keep=False)\n",
    "print(f\"order_id重复数: {order_duplicated.sum()}\")\n",
    "\n",
    "print(\"\\norder_id重复的记录:\")\n",
    "print(duplicate_data[order_duplicated].sort_values('order_id'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 分析重复情况\n",
    "print(\"\\n=== 重复情况分析 ===\")\n",
    "\n",
    "# 统计每个order_id的出现次数\n",
    "order_counts = duplicate_data['order_id'].value_counts()\n",
    "print(\"\\norder_id出现次数:\")\n",
    "print(order_counts)\n",
    "\n",
    "# 重复的order_id\n",
    "duplicated_orders = order_counts[order_counts > 1]\n",
    "print(f\"\\n重复的order_id: {duplicated_orders.index.tolist()}\")\n",
    "print(f\"重复的订单数: {len(duplicated_orders)}\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4.3 去重策略\n",
    "\n",
    "**keep参数**:\n",
    "- `'first'`: 保留第一次出现的（默认）\n",
    "- `'last'`: 保留最后一次出现的\n",
    "- `False`: 删除所有重复（包括第一次）\n",
    "\n",
    "**Excel对比**:\n",
    "- Excel: 数据→删除重复项，只能保留第一条\n",
    "- Pandas: 灵活控制保留哪一条"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 策略1: 删除完全重复的行\n",
    "print(\"=== 策略1: 删除完全重复 ===\")\n",
    "df_dedup1 = duplicate_data.drop_duplicates()\n",
    "print(f\"原始: {len(duplicate_data)}行 → 去重后: {len(df_dedup1)}行\")\n",
    "print(f\"删除了{len(duplicate_data) - len(df_dedup1)}行\")\n",
    "print(\"\\n去重后数据:\")\n",
    "print(df_dedup1)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 策略2: 基于order_id去重，保留第一条\n",
    "print(\"\\n=== 策略2: 基于order_id去重(保留第一条) ===\")\n",
    "df_dedup2 = duplicate_data.drop_duplicates(subset=['order_id'], keep='first')\n",
    "print(f\"原始: {len(duplicate_data)}行 → 去重后: {len(df_dedup2)}行\")\n",
    "print(\"\\n去重后数据:\")\n",
    "print(df_dedup2.sort_values('order_id'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 策略3: 保留最后一条\n",
    "print(\"\\n=== 策略3: 基于order_id去重(保留最后一条) ===\")\n",
    "df_dedup3 = duplicate_data.drop_duplicates(subset=['order_id'], keep='last')\n",
    "print(f\"原始: {len(duplicate_data)}行 → 去重后: {len(df_dedup3)}行\")\n",
    "print(\"\\n去重后数据:\")\n",
    "print(df_dedup3.sort_values('order_id'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 策略4: 删除所有重复（包括第一次）\n",
    "print(\"\\n=== 策略4: 删除所有重复 ===\")\n",
    "df_dedup4 = duplicate_data.drop_duplicates(subset=['order_id'], keep=False)\n",
    "print(f\"原始: {len(duplicate_data)}行 → 去重后: {len(df_dedup4)}行\")\n",
    "print(\"\\n只保留不重复的记录:\")\n",
    "print(df_dedup4.sort_values('order_id'))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 策略5: 根据多列组合去重\n",
    "print(\"\\n=== 策略5: 基于客户+产品组合去重 ===\")\n",
    "df_dedup5 = duplicate_data.drop_duplicates(subset=['customer', 'product'], keep='first')\n",
    "print(f\"原始: {len(duplicate_data)}行 → 去重后: {len(df_dedup5)}行\")\n",
    "print(\"\\n去重后数据:\")\n",
    "print(df_dedup5.sort_values(['customer', 'product']))"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4.4 智能去重 - 保留最优记录\n",
    "\n",
    "**场景**: 重复记录中，希望保留最新的、最完整的、金额最大的等"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 智能去重：保留日期最新的\n",
    "print(\"=== 智能去重: 保留最新记录 ===\")\n",
    "\n",
    "# 先按date降序排序，再去重（会保留最新的）\n",
    "df_dedup_latest = (duplicate_data\n",
    "    .sort_values('date', ascending=False)\n",
    "    .drop_duplicates(subset=['order_id'], keep='first')\n",
    "    .sort_values('order_id')\n",
    ")\n",
    "\n",
    "print(\"\\n原始数据（按order_id排序）:\")\n",
    "print(duplicate_data.sort_values('order_id')[['order_id', 'date', 'amount']])\n",
    "\n",
    "print(\"\\n保留最新记录:\")\n",
    "print(df_dedup_latest[['order_id', 'date', 'amount']])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 智能去重：保留金额最大的\n",
    "print(\"\\n=== 智能去重: 保留金额最大的记录 ===\")\n",
    "\n",
    "# 先按amount降序排序，再去重\n",
    "df_dedup_max = (duplicate_data\n",
    "    .sort_values('amount', ascending=False)\n",
    "    .drop_duplicates(subset=['customer'], keep='first')\n",
    "    .sort_values('customer')\n",
    ")\n",
    "\n",
    "print(\"\\n每个客户保留最大金额的订单:\")\n",
    "print(df_dedup_max[['customer', 'order_id', 'amount']])"
   ]
  },
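  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "排序后去重之外，同样的需求也可以用`groupby`加`idxmax`实现：先找到每个客户金额最大那一行的索引，再用`loc`取出，是等价思路的另一种写法。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 另一种写法: groupby + idxmax 定位每个客户金额最大的行\n",
    "idx = duplicate_data.groupby('customer')['amount'].idxmax()\n",
    "df_max_alt = duplicate_data.loc[idx].sort_values('customer')\n",
    "\n",
    "print(\"每个客户金额最大的订单（groupby写法）:\")\n",
    "print(df_max_alt[['customer', 'order_id', 'amount']])"
   ]
  },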
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### 4.5 去重前后对比"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 可视化去重效果\n",
    "fig, axes = plt.subplots(1, 2, figsize=(14, 5))\n",
    "\n",
    "# 左图：各策略去重后的记录数\n",
    "strategies = ['原始', '完全重复', 'order_id\\n(first)', 'order_id\\n(last)', 'order_id\\n(False)', '客户+产品']\n",
    "counts = [\n",
    "    len(duplicate_data),\n",
    "    len(df_dedup1),\n",
    "    len(df_dedup2),\n",
    "    len(df_dedup3),\n",
    "    len(df_dedup4),\n",
    "    len(df_dedup5)\n",
    "]\n",
    "\n",
    "colors = ['lightblue', 'lightcoral', 'lightgreen', 'lightyellow', 'lightpink', 'lightgray']\n",
    "bars = axes[0].bar(strategies, counts, color=colors, edgecolor='black')\n",
    "axes[0].set_title('Deduplication Strategies Comparison', fontsize=14, fontweight='bold')\n",
    "axes[0].set_ylabel('Number of Records')\n",
    "axes[0].grid(axis='y', alpha=0.3)\n",
    "\n",
    "# 添加数值标签\n",
    "for bar, count in zip(bars, counts):\n",
    "    height = bar.get_height()\n",
    "    axes[0].text(bar.get_x() + bar.get_width()/2., height,\n",
    "                f'{int(count)}',\n",
    "                ha='center', va='bottom', fontweight='bold')\n",
    "\n",
    "# 右图：重复订单分布\n",
    "order_counts = duplicate_data['order_id'].value_counts().value_counts().sort_index()\n",
    "axes[1].bar(order_counts.index, order_counts.values, color='steelblue', edgecolor='black')\n",
    "axes[1].set_title('Duplicate Order Distribution', fontsize=14, fontweight='bold')\n",
    "axes[1].set_xlabel('Number of Occurrences')\n",
    "axes[1].set_ylabel('Number of Orders')\n",
    "axes[1].grid(axis='y', alpha=0.3)\n",
    "\n",
    "plt.tight_layout()\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "## 五、数据标准化综合案例\n",
    "\n",
    "### 完整的数据清洗和标准化流程"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 创建一个真实场景的脏数据\n",
    "messy_customer_data = pd.DataFrame({\n",
    "    'customer_id': ['C001', 'c002', 'C003', 'C002', 'C004', 'c001', 'C005'],\n",
    "    'name': ['  张三  ', 'LI Si', '王五', 'li si', '  赵六', 'Zhang San', '钱七'],\n",
    "    'gender': ['男', 'Male', 'M', 'male', '女', '1', 'Female'],\n",
    "    'age': ['28', '35', '42', '35', 'unknown', '28', '31'],\n",
    "    'phone': ['138-1234-5678', '13912345678', '139 1234 5678', '(139)1234-5678', '138-5678-1234', '13812345678', '139-8765-4321'],\n",
    "    'email': ['ZHANG@COMPANY.COM', 'lisi@company.com', 'wangwu@COMPANY.com', 'lisi@company.com', 'zhaoliu@company.COM', 'zhang@company.com', 'qianqi@Company.Com'],\n",
    "    'registration_date': ['2020-01-15', '2019/06/20', '15-Mar-2021', '2019/06/20', '2022.05.10', '2020-01-15', '20230815'],\n",
    "    'city': ['北京', 'beijing', '上海', 'Beijing', '  广州  ', '北京市', 'SHANGHAI'],\n",
    "    'total_purchase': ['1,234.56', '2345.67', '3,456', '2345.67', '0', '1234.56', '567.89']\n",
    "})\n",
    "\n",
    "print(\"=== 原始脏数据 ===\")\n",
    "print(messy_customer_data)\n",
    "print(f\"\\n数据规模: {messy_customer_data.shape}\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 开始清洗和标准化\n",
    "print(\"\\n\" + \"=\"*70)\n",
    "print(\"开始数据清洗和标准化\")\n",
    "print(\"=\"*70)\n",
    "\n",
    "df_clean = messy_customer_data.copy()\n",
    "\n",
    "# 步骤1: 统一customer_id大小写\n",
    "print(\"\\n步骤1: 统一customer_id格式\")\n",
    "df_clean['customer_id'] = df_clean['customer_id'].str.upper()\n",
    "print(f\"完成: {df_clean['customer_id'].unique()}\")\n",
    "\n",
    "# 步骤2: 清洗姓名（去除空格）\n",
    "print(\"\\n步骤2: 清洗姓名\")\n",
    "df_clean['name'] = df_clean['name'].str.strip()\n",
    "print(f\"完成: {df_clean['name'].tolist()}\")\n",
    "\n",
    "# 步骤3: 标准化性别\n",
    "print(\"\\n步骤3: 标准化性别\")\n",
    "gender_mapping = {\n",
    "    '男': '男', 'Male': '男', 'M': '男', 'male': '男', '1': '男',\n",
    "    '女': '女', 'Female': '女', 'F': '女', 'female': '女', '0': '女'\n",
    "}\n",
    "df_clean['gender'] = df_clean['gender'].map(gender_mapping)\n",
    "print(f\"完成: {df_clean['gender'].value_counts().to_dict()}\")\n",
    "\n",
    "# 步骤4: 转换年龄类型\n",
    "print(\"\\n步骤4: 转换年龄类型\")\n",
    "df_clean['age'] = pd.to_numeric(df_clean['age'], errors='coerce')\n",
    "print(f\"完成: 类型={df_clean['age'].dtype}, 缺失={df_clean['age'].isnull().sum()}\")\n",
    "\n",
    "# 步骤5: 标准化电话号码\n",
    "print(\"\\n步骤5: 标准化电话号码\")\n",
    "# 用一个正则一次去掉连字符、空格和括号，等价于链式多次 replace\n",
    "df_clean['phone'] = df_clean['phone'].str.replace(r'[-\\s()]', '', regex=True)\n",
    "print(f\"完成: 前3个={df_clean['phone'].head(3).tolist()}\")\n",
    "\n",
    "# 步骤6: 统一邮箱为小写\n",
    "print(\"\\n步骤6: 统一邮箱格式\")\n",
    "df_clean['email'] = df_clean['email'].str.lower()\n",
    "print(f\"完成: 前3个={df_clean['email'].head(3).tolist()}\")\n",
    "\n",
    "# 步骤7: 解析日期\n",
    "print(\"\\n步骤7: 解析日期\")\n",
    "# 注意: pandas 2.x 对混合日期格式需指定 format='mixed'，否则非首个格式的日期会被置为 NaT\n",
    "df_clean['registration_date'] = pd.to_datetime(df_clean['registration_date'], format='mixed', errors='coerce')\n",
    "print(f\"完成: 类型={df_clean['registration_date'].dtype}\")\n",
    "\n",
    "# 步骤8: 标准化城市名\n",
    "print(\"\\n步骤8: 标准化城市名\")\n",
    "city_mapping = {\n",
    "    '北京': '北京', 'beijing': '北京', 'Beijing': '北京', 'BEIJING': '北京', '北京市': '北京',\n",
    "    '上海': '上海', 'shanghai': '上海', 'Shanghai': '上海', 'SHANGHAI': '上海', '上海市': '上海',\n",
    "    '广州': '广州', 'guangzhou': '广州', 'Guangzhou': '广州', '广州市': '广州'\n",
    "}\n",
    "df_clean['city'] = df_clean['city'].str.strip().map(city_mapping)\n",
    "print(f\"完成: {df_clean['city'].value_counts().to_dict()}\")\n",
    "\n",
    "# 步骤9: 清洗金额（去除逗号并转换）\n",
    "print(\"\\n步骤9: 清洗金额\")\n",
    "df_clean['total_purchase'] = df_clean['total_purchase'].str.replace(',', '', regex=False).astype(float)\n",
    "print(f\"完成: 类型={df_clean['total_purchase'].dtype}, 总额=¥{df_clean['total_purchase'].sum():,.2f}\")\n",
    "\n",
    "# 步骤10: 去重（基于customer_id保留第一条）\n",
    "print(\"\\n步骤10: 去重\")\n",
    "before_dedup = len(df_clean)\n",
    "df_clean = df_clean.drop_duplicates(subset=['customer_id'], keep='first')\n",
    "after_dedup = len(df_clean)\n",
    "print(f\"完成: {before_dedup}行 → {after_dedup}行 (删除{before_dedup-after_dedup}行)\")\n",
    "\n",
    "print(\"\\n\" + \"=\"*70)\n",
    "print(\"✅ 数据清洗和标准化完成！\")\n",
    "print(\"=\"*70)"
   ]
  },
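  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 补充示例（演示用的假设数据）: 清洗后建议加一步格式校验\n",
    "# 用正则 fullmatch 检查手机号是否为以1开头的11位纯数字\n",
    "import pandas as pd\n",
    "\n",
    "phones = pd.Series(['13812345678', '1391234567', '139-1234-5678'])\n",
    "is_valid = phones.str.fullmatch(r'1\\d{10}')\n",
    "print(is_valid)"
   ]
  },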
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 查看清洗后的结果\n",
    "print(\"\\n=== 清洗后的数据 ===\")\n",
    "print(df_clean)\n",
    "\n",
    "print(\"\\n数据类型:\")\n",
    "print(df_clean.dtypes)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 清洗前后对比\n",
    "print(\"\\n=== 清洗前后对比 ===\")\n",
    "\n",
    "comparison = pd.DataFrame({\n",
    "    '指标': [\n",
    "        '总行数',\n",
    "        '数据类型一致性',\n",
    "        '姓名空格',\n",
    "        '性别标准化',\n",
    "        '电话格式统一',\n",
    "        '邮箱格式统一',\n",
    "        '日期格式',\n",
    "        '城市标准化',\n",
    "        '金额类型'\n",
    "    ],\n",
    "    '清洗前': [\n",
    "        len(messy_customer_data),\n",
    "        '混乱',\n",
    "        '有空格',\n",
    "        '多种写法',\n",
    "        '多种格式',\n",
    "        '大小写混乱',\n",
    "        '多种格式',\n",
    "        '多种写法',\n",
    "        'object'\n",
    "    ],\n",
    "    '清洗后': [\n",
    "        len(df_clean),\n",
    "        '统一',\n",
    "        '已去除',\n",
    "        '男/女',\n",
    "        '纯数字11位',\n",
    "        '全小写',\n",
    "        'datetime64',\n",
    "        '标准名称',\n",
    "        'float64'\n",
    "    ]\n",
    "})\n",
    "\n",
    "print(comparison.to_string(index=False))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 导出清洗后的数据（to_excel 需要已安装 openpyxl）\n",
    "df_clean.to_csv('cleaned_customer_data.csv', index=False, encoding='utf-8-sig')\n",
    "df_clean.to_excel('cleaned_customer_data.xlsx', index=False)\n",
    "\n",
    "print(\"\\n✅ 清洗后的数据已导出:\")\n",
    "print(\"   - cleaned_customer_data.csv\")\n",
    "print(\"   - cleaned_customer_data.xlsx\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "---\n",
    "\n",
    "## 六、本讲总结\n",
    "\n",
    "### 核心知识点\n",
    "\n",
    "#### 1. 数据类型转换\n",
    "\n",
    "**方法**:\n",
    "- `astype()`: 基础转换\n",
    "- `to_numeric()`: 智能数字转换\n",
    "- `to_datetime()`: 日期解析\n",
    "- `category`: 优化内存\n",
    "\n",
    "**原则**:\n",
    "- 数字就是数字，不是字符串\n",
    "- 日期就是日期，方便排序和计算\n",
    "- 重复多的用category节省内存\n",
    "\n",
    "#### 2. 日期时间处理\n",
    "\n",
    "**解析**: `pd.to_datetime()` 自动识别多种格式\n",
    "\n",
    "**提取**:\n",
    "- `dt.year`, `dt.month`, `dt.day`\n",
    "- `dt.quarter`, `dt.isocalendar().week`（`dt.week` 在新版pandas中已移除）\n",
    "- `dt.weekday`, `dt.day_name()`\n",
    "\n",
    "**运算**:\n",
    "- 日期差: `date1 - date2`\n",
    "- 日期偏移: `+ pd.Timedelta()`, `+ pd.DateOffset()`\n",
    "- 工作日计算\n",
    "\n",
    "#### 3. 字符串处理\n",
    "\n",
    "**基础操作**:\n",
    "- `str.strip()`: 去空格\n",
    "- `str.lower()`, `str.upper()`: 大小写\n",
    "- `str.replace()`: 替换\n",
    "- `str.split()`: 分割\n",
    "\n",
    "**查询匹配**:\n",
    "- `str.contains()`: 包含\n",
    "- `str.startswith()`: 开头\n",
    "- `str.endswith()`: 结尾\n",
    "\n",
    "**正则表达式**:\n",
    "- `str.extract()`: 提取\n",
    "- `str.findall()`: 查找所有\n",
    "- `str.match()`: 验证格式\n",
    "\n",
    "#### 4. 数据去重\n",
    "\n",
    "**检测**: `duplicated()`\n",
    "\n",
    "**去重**: `drop_duplicates()`\n",
    "- `keep='first'`: 保留第一条\n",
    "- `keep='last'`: 保留最后一条\n",
    "- `keep=False`: 删除所有重复\n",
    "\n",
    "**智能去重**:\n",
    "- 先排序再去重\n",
    "- 保留最新/最大/最完整的记录\n",
    "\n",
    "### Excel vs Pandas 对比\n",
    "\n",
    "| 任务 | Excel | Pandas |\n",
    "|------|-------|--------|\n",
    "| 类型转换 | 设置单元格格式 | `astype()` |\n",
    "| 日期解析 | DATEVALUE() | `to_datetime()` |\n",
    "| 日期提取 | YEAR(), MONTH() | `dt.year`, `dt.month` |\n",
    "| 去空格 | TRIM() | `str.strip()` |\n",
    "| 大小写 | UPPER(), LOWER() | `str.upper()`, `str.lower()` |\n",
    "| 替换 | SUBSTITUTE() | `str.replace()` |\n",
    "| 分列 | 数据→分列 | `str.split()` |\n",
    "| 去重 | 数据→删除重复项 | `drop_duplicates()` |\n",
    "| 正则 | 无（Microsoft 365 新增REGEX函数） | `str.extract()`, `str.findall()` |\n",
    "\n",
    "### 数据清洗完整流程\n",
    "\n",
    "```\n",
    "1. 数据质量检查\n",
    "   ↓\n",
    "2. 类型转换\n",
    "   ↓\n",
    "3. 缺失值处理\n",
    "   ↓\n",
    "4. 异常值处理\n",
    "   ↓\n",
    "5. 格式规范化\n",
    "   - 字符串清洗\n",
    "   - 日期统一\n",
    "   - 类别标准化\n",
    "   ↓\n",
    "6. 去重\n",
    "   ↓\n",
    "7. 验证结果\n",
    "   ↓\n",
    "8. 导出清洗后数据\n",
    "```\n",
    "\n",
    "### 最佳实践\n",
    "\n",
    "1. **先检查后处理**: 了解数据问题再动手\n",
    "2. **保留原始数据**: 在副本上操作\n",
    "3. **建立映射字典**: 统一的标准化规则\n",
    "4. **批量处理**: 用循环或apply处理多列\n",
    "5. **验证结果**: 处理后检查数据类型和分布\n",
    "\n",
    "### 第三阶段完成！\n",
    "\n",
    "经过4讲学习，你已经掌握:\n",
    "- ✅ 数据分析总流程和脏数据类型\n",
    "- ✅ Pandas基础操作和数据选择\n",
    "- ✅ 缺失值和异常值的检测处理\n",
    "- ✅ 数据格式规范化和去重\n",
    "\n",
    "现在你可以处理任何脏数据了！\n",
    "\n",
    "### 下一阶段预告\n",
    "**第四阶段: 数据分析与探索(EDA)**\n",
    "- 分析思维与指标设计\n",
    "- 描述性统计分析\n",
    "- 分组与聚合分析（核心！）\n",
    "- 探索性数据分析\n",
    "- 线性回归建模\n",
    "\n",
    "---"
   ]
  },
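  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 补充示例（演示用的假设数据）: 最佳实践4\"批量处理\"的具体写法\n",
    "# 对所有字符串(object)列统一去除首尾空格，避免逐列重复写 str.strip()\n",
    "import pandas as pd\n",
    "\n",
    "df = pd.DataFrame({\n",
    "    'name': ['  张三 ', '李四'],\n",
    "    'city': [' 北京', '上海 '],\n",
    "    'age': [28, 35]\n",
    "})\n",
    "\n",
    "obj_cols = df.select_dtypes(include='object').columns\n",
    "df[obj_cols] = df[obj_cols].apply(lambda s: s.str.strip())\n",
    "print(df)"
   ]
  },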
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 课后作业\n",
    "\n",
    "### 作业1: 类型转换实战\n",
    "给定一个数据集，完成:\n",
    "1. 识别所有类型不正确的列\n",
    "2. 将它们转换为正确的类型\n",
    "3. 处理转换过程中的错误\n",
    "4. 对比转换前后的内存占用\n",
    "5. 使用category优化内存\n",
    "\n",
    "### 作业2: 日期时间处理\n",
    "1. 解析多种格式的日期字符串\n",
    "2. 提取年、月、季度、星期等信息\n",
    "3. 计算两个日期之间的天数/月数/年数\n",
    "4. 计算某个时间范围内的工作日数量\n",
    "5. 生成未来12个月的月末日期序列\n",
    "\n",
    "### 作业3: 字符串清洗综合\n",
    "提供包含各种字符串问题的数据集:\n",
    "1. 清洗姓名（去空格、统一大小写）\n",
    "2. 标准化电话号码格式\n",
    "3. 验证邮箱格式\n",
    "4. 从地址中提取省市区\n",
    "5. 用正则表达式提取身份证号的生日和性别\n",
    "6. 清洗金额（去除货币符号和千分位）\n",
    "\n",
    "### 作业4: 去重策略对比\n",
    "1. 创建包含多种重复模式的数据\n",
    "2. 用不同策略去重并对比结果\n",
    "3. 实现智能去重（保留最优记录）\n",
    "4. 统计去重前后的数据质量变化\n",
    "5. 可视化去重效果\n",
    "\n",
    "### 作业5: 完整清洗项目\n",
    "提供一个真实的脏数据集（客户数据/订单数据），要求:\n",
    "1. 完整的数据质量报告\n",
    "2. 设计清洗方案（10个步骤）\n",
    "3. 实施清洗和标准化\n",
    "4. 验证清洗效果\n",
    "5. 导出清洗后数据和清洗日志\n",
    "6. 编写清洗脚本（可复用）"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.0"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
