{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "1f8e5950-d69d-4a66-8731-3d3143b01319",
   "metadata": {},
   "source": [
    "# Pandas简介\n",
    "\n",
    "## 第一章：Pandas简介与基础概念\n",
    "\n",
    "### 1.1 什么是Pandas？\n",
    "\n",
    "Pandas是Python数据科学生态系统中最重要的工具库之一。它的名字来源于\"Panel Data\"（面板数据），这暗示了它最初是为处理经济学和金融领域的数据而设计的。今天，Pandas已经发展成为一个通用的数据分析工具，被广泛应用于各个领域。\n",
    "\n",
    "想象你面前有一堆Excel表格、CSV文件、数据库导出文件，需要对它们进行清理、转换、分析和整合。这正是Pandas擅长的领域 - 它就像是一个强大的数据处理助手，能够帮你轻松处理这些任务。\n",
    "\n",
    "### 1.2 为什么选择Pandas？\n",
    "\n",
    "1. **数据处理效率**\n",
    "   - 比Excel处理大规模数据更快\n",
    "   - 支持自动化操作，减少重复工作\n",
    "   - 处理数百万行数据不在话下\n",
    "\n",
    "2. **灵活的数据操作**\n",
    "   - 强大的数据清洗功能\n",
    "   - 灵活的数据转换能力\n",
    "   - 丰富的数据分析工具\n",
    "\n",
    "3. **广泛的格式支持**\n",
    "   - Excel文件（.xlsx，.xls）\n",
    "   - CSV和文本文件\n",
    "   - SQL数据库\n",
    "   - JSON和HTML\n",
    "   - 压缩文件\n",
    "\n",
    "4. **与其他工具的集成**\n",
    "   - 无缝对接NumPy进行数值计算\n",
    "   - 与Matplotlib完美配合实现数据可视化\n",
    "   - 作为Scikit-learn机器学习的数据准备工具\n",
    "\n",
    "## 第二章：Pandas的核心数据结构\n",
    "\n",
    "Pandas的两个核心数据结构 - Series和DataFrame，就像是数据分析的\"积木\"，让我们能够构建复杂的数据分析流程。\n",
    "\n",
    "### 2.1 Series：一维数据的得力助手\n",
    "\n",
    "Series可以被看作是一个增强版的数组或字典，它不仅存储数据，还为每个数据点提供一个标签（索引）。\n",
    "\n",
    "#### 2.1.1 Series的特点\n",
    "\n",
    "1. **带标签的数组**\n",
    "   - 每个元素都有索引\n",
    "   - 支持通过位置和标签访问\n",
    "   - 可以像字典一样使用\n",
    "\n",
    "2. **灵活的数据类型**\n",
    "   - 可以存储任何Python对象\n",
    "   - 常用于存储数值、字符串、时间等\n",
    "   - 支持缺失值处理\n",
    "\n",
    "#### 2.1.2 创建Series的多种方式"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "38359037-f20e-419a-9d8b-6ae8dcaf3e68",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "0    1\n",
      "1    2\n",
      "2    3\n",
      "3    4\n",
      "4    5\n",
      "dtype: int64\n",
      "苹果    3\n",
      "香蕉    6\n",
      "橙子    2\n",
      "dtype: int64\n",
      "苹果    3\n",
      "香蕉    6\n",
      "橙子    2\n",
      "dtype: int64\n"
     ]
    }
   ],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# 从列表创建\n",
    "numbers = pd.Series([1, 2, 3, 4, 5])\n",
    "print(numbers)\n",
    "# 带自定义索引创建\n",
    "fruits = pd.Series([3, 6, 2], index=['苹果', '香蕉', '橙子'])\n",
    "print(fruits)\n",
    "# 从字典创建\n",
    "prices = pd.Series({'苹果': 3, '香蕉': 6, '橙子': 2})\n",
    "print(prices)"
   ]
  },
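  {
   "cell_type": "markdown",
   "id": "b7c2a9e1-4d3f-4a86-9c21-6e5f0a1d2b34",
   "metadata": {},
   "source": [
    "A Series also supports missing-value handling, as noted above. A minimal sketch (the variable name `stock` and the missing entry are illustrative, reusing the fruit labels from the examples):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "c8d3b0f2-5e4a-4b97-8d32-7f6a1b2c3d45",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# A None in the input becomes NaN, Pandas' missing-value marker\n",
    "stock = pd.Series({'苹果': 3, '香蕉': None, '橙子': 2})\n",
    "print(stock.isna())     # True where a value is missing\n",
    "print(stock.dropna())   # drop missing entries\n",
    "print(stock.fillna(0))  # replace missing entries with 0"
   ]
  },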
  {
   "cell_type": "markdown",
   "id": "af644b41-b697-400e-976f-4be4b3bf350b",
   "metadata": {},
   "source": [
    "Series 的索引与切片："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "9f4d34eb-2a69-4789-a0c2-7cdf652dbe3d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "3\n"
     ]
    }
   ],
   "source": [
    "print(fruits['苹果']) "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "15d094d5-fb61-419f-9f42-1936cff3c734",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "6\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "C:\\Users\\Administrator\\AppData\\Local\\Temp\\ipykernel_16984\\2539277926.py:1: FutureWarning: Series.__getitem__ treating keys as positions is deprecated. In a future version, integer keys will always be treated as labels (consistent with DataFrame behavior). To access a value by position, use `ser.iloc[pos]`\n",
      "  print(fruits[1])\n"
     ]
    }
   ],
   "source": [
    "print(fruits[1]) "
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "a76a3f49-47f0-47c9-9609-36eadf811dfa",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "6\n"
     ]
    }
   ],
   "source": [
    "print(fruits.iloc[1])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6cf20552-a137-4909-aec6-5f1c65eb7732",
   "metadata": {},
   "source": [
    "- 在Pandas中，iloc是我们在数据分析时最常用的\"数据定位器\"之一。它的名字来源于\"integer-location\"（整数位置），顾名思义，就是通过数字位置来精确定位和选择数据的工具。想象一下，如果把数据表格看作一个棋盘，那么iloc就像是棋手的手指，可以通过行号和列号（从0开始计数）精确地指向任何一个位置。\n",
    "\n",
    "  使用iloc时，我们采用的语法格式是df.iloc[行位置, 列位置]。这就像在说\"我要第几行第几列的数据\"。这里的行位置和列位置都是可选的参数 - 你可以只选择行，只选择列，或者同时选择两者。比如，df.iloc[0, 1]就是在说\"给我第一行第二列的数据\"（因为位置计数从0开始），而df.iloc[0]则是在说\"给我第一行的所有数据\"。这种灵活性使得iloc成为数据选择和切片操作中最实用的工具之一。\n",
    "\n",
    "  值得注意的是，和我们在日常生活中习惯的计数方式不同，iloc使用的是计算机的计数方式，即从0开始计数。这意味着第一行的索引是0，第二行是1，以此类推。这种设计虽然刚开始可能需要一点适应，但在编程中是最自然和高效的方式。通过iloc，我们可以像在Excel中选择单元格一样精确地操作数据，只不过是用代码而不是鼠标来完成这个过程。\n",
    "\n",
    "- 常用属性和方法：\n",
    "\n",
    "  - `index`：返回索引。\n",
    "  - `values`：返回所有元素的值。\n",
    "  - `mean()`、`sum()`、`count()`：对数据进行聚合计算。"
   ]
  },
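  {
   "cell_type": "markdown",
   "id": "d9e4c1a3-6f5b-4ca8-9e43-8a7b2c3d4e56",
   "metadata": {},
   "source": [
    "The `df.iloc[row, col]` pattern described above can be sketched on a small DataFrame (the data here is purely illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "e0f5d2b4-7a6c-4db9-8f54-9b8c3d4e5f67",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "df_demo = pd.DataFrame({'价格': [3, 6, 2], '数量': [10, 5, 8]},\n",
    "                       index=['苹果', '香蕉', '橙子'])\n",
    "print(df_demo.iloc[0, 1])    # first row, second column -> 10\n",
    "print(df_demo.iloc[0])       # the entire first row\n",
    "print(df_demo.iloc[0:2, 0])  # rows 0-1 of the first column"
   ]
  },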
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "5d86ea3f-687b-4ad2-8f80-7441a2d103ce",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "=== 基本信息 ===\n",
      "\n",
      "原始数据：\n",
      "周一    2380\n",
      "周二    3150\n",
      "周三    2980\n",
      "周四    4280\n",
      "周五    3520\n",
      "周六    5180\n",
      "周日    4920\n",
      "Name: 日销售额, dtype: int64\n",
      "\n",
      "索引：\n",
      "Index(['周一', '周二', '周三', '周四', '周五', '周六', '周日'], dtype='object')\n",
      "\n",
      "数值：\n",
      "[2380 3150 2980 4280 3520 5180 4920]\n",
      "\n",
      "=== 基础统计 ===\n",
      "平均日销售额：3772.86元\n",
      "总销售额：26410.00元\n",
      "销售天数：7天\n",
      "最高日销售额：5180.00元\n",
      "最低日销售额：2380.00元\n",
      "销售额标准差：1046.72元\n"
     ]
    }
   ],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "\n",
    "# 创建一个表示某商店一周销售额的Series\n",
    "daily_sales = pd.Series(\n",
    "    data=[2380, 3150, 2980, 4280, 3520, 5180, 4920],\n",
    "    index=['周一', '周二', '周三', '周四', '周五', '周六', '周日'],\n",
    "    name='日销售额'\n",
    ")\n",
    "\n",
    "# 1. 基本信息查看\n",
    "print(\"=== 基本信息 ===\")\n",
    "print(\"\\n原始数据：\")\n",
    "print(daily_sales)\n",
    "\n",
    "print(\"\\n索引：\")\n",
    "print(daily_sales.index)\n",
    "\n",
    "print(\"\\n数值：\")\n",
    "print(daily_sales.values)\n",
    "\n",
    "# 2. 基础统计计算\n",
    "print(\"\\n=== 基础统计 ===\")\n",
    "#.2f表示小数点后两位\n",
    "print(f\"平均日销售额：{daily_sales.mean():.2f}元\")\n",
    "print(f\"总销售额：{daily_sales.sum():.2f}元\")\n",
    "print(f\"销售天数：{daily_sales.count()}天\")\n",
    "print(f\"最高日销售额：{daily_sales.max():.2f}元\")\n",
    "print(f\"最低日销售额：{daily_sales.min():.2f}元\")\n",
    "print(f\"销售额标准差：{daily_sales.std():.2f}元\")\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c49623d4-d350-4fca-b467-aba43403f632",
   "metadata": {},
   "source": [
    "### 2.2 DataFrame：数据分析的核心工具\n",
    "\n",
    "DataFrame是Pandas最重要的数据结构，它像一个智能的电子表格，既能存储数据，又提供强大的数据处理功能。\n",
    "\n",
    "#### 2.2.1 DataFrame的本质特征\n",
    "\n",
    "1. **二维表格结构**\n",
    "   - 行和列都有标签\n",
    "   - 每列可以是不同的数据类型\n",
    "   - 类似于Excel工作表或SQL表\n",
    "\n",
    "2. **数据对齐机制**\n",
    "   - 自动对齐行和列\n",
    "   - 处理缺失值\n",
    "   - 保证数据一致性\n",
    "\n",
    "3. **灵活的索引系统**\n",
    "   - 支持单级和多级索引\n",
    "   - 可以用标签或位置访问\n",
    "   - 支持复杂的条件筛选\n",
    "\n",
    "#### 2.2.2 创建DataFrame的常用方法"
   ]
  },
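  {
   "cell_type": "markdown",
   "id": "f1a6e3c5-8b7d-4eca-9a65-0c9d4e5f6a78",
   "metadata": {},
   "source": [
    "Before generating the sample data files below, here is a minimal sketch of the two most common DataFrame constructors, plus a simple conditional filter (the example data is illustrative):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a2b7f4d6-9c8e-4fdb-8b76-1d0e5f6a7b89",
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "\n",
    "# From a dict of columns\n",
    "df1 = pd.DataFrame({'姓名': ['张三', '李四'], '工资': [8000, 12000]})\n",
    "\n",
    "# From a list of row dicts\n",
    "df2 = pd.DataFrame([{'姓名': '张三', '工资': 8000},\n",
    "                    {'姓名': '李四', '工资': 12000}])\n",
    "\n",
    "print(df1)\n",
    "print(df2)\n",
    "\n",
    "# Conditional filtering with a boolean mask\n",
    "print(df1[df1['工资'] > 9000])"
   ]
  },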
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "4bb4d1a1-5897-4209-b57a-f61caf6a9f48",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "sales_data.csv已创建\n",
      "company_data.xlsx已创建\n",
      "products.json已创建\n",
      "company.db已创建\n",
      "\n",
      "所有示例数据文件已创建完成。\n"
     ]
    }
   ],
   "source": [
    "# create_sample_data.py\n",
    "\n",
    "import pandas as pd\n",
    "import sqlite3\n",
    "import json\n",
    "import numpy as np\n",
    "from datetime import datetime, timedelta\n",
    "\n",
    "def create_csv_file():\n",
    "    \"\"\"创建销售数据CSV文件\"\"\"\n",
    "    sales_data = {\n",
    "        '日期': [(datetime(2024, 1, 1) + timedelta(days=x)).strftime('%Y-%m-%d') for x in range(10)],\n",
    "        '产品': ['手机', '笔记本', '平板', '耳机', '手表', '手机', '笔记本', '平板', '耳机', '手表'],\n",
    "        '价格': np.random.randint(1000, 10000, 10),\n",
    "        '数量': np.random.randint(1, 50, 10),\n",
    "        '销售员': ['张三', '李四', '王五', '赵六', '钱七'] * 2\n",
    "    }\n",
    "    df = pd.DataFrame(sales_data)\n",
    "    df.to_csv('data/sales_data.csv', index=False, encoding='utf-8')\n",
    "    print(\"sales_data.csv已创建\")\n",
    "\n",
    "def create_excel_file():\n",
    "    \"\"\"创建公司数据Excel文件\"\"\"\n",
    "    with pd.ExcelWriter('data/company_data.xlsx') as writer:\n",
    "        # 员工信息sheet\n",
    "        employee_data = {\n",
    "            '员工ID': ['EMP001', 'EMP002', 'EMP003', 'EMP004', 'EMP005'],\n",
    "            '姓名': ['张三', '李四', '王五', '赵六', '钱七'],\n",
    "            '部门': ['销售', '技术', '技术', '市场', '销售'],\n",
    "            '入职日期': ['2023-01-01', '2023-02-15', '2023-03-20', '2023-04-10', '2023-05-01'],\n",
    "            '工资': [8000, 12000, 12000, 9000, 8000]\n",
    "        }\n",
    "        pd.DataFrame(employee_data).to_excel(writer, sheet_name='员工信息', index=False)\n",
    "        \n",
    "        # 部门信息sheet\n",
    "        department_data = {\n",
    "            '部门': ['销售', '技术', '市场', '人事', '财务'],\n",
    "            '主管': ['王明', '李强', '张艳', '刘波', '周红'],\n",
    "            '人数': [20, 30, 15, 5, 8],\n",
    "            '预算': [200000, 300000, 150000, 100000, 120000]\n",
    "        }\n",
    "        pd.DataFrame(department_data).to_excel(writer, sheet_name='部门信息', index=False)\n",
    "    print(\"company_data.xlsx已创建\")\n",
    "\n",
    "def create_json_file():\n",
    "    \"\"\"创建产品数据JSON文件\"\"\"\n",
    "    product_data = {\n",
    "        \"products\": [\n",
    "            {\n",
    "                \"id\": \"P001\",\n",
    "                \"name\": \"iPhone 15\",\n",
    "                \"category\": \"手机\",\n",
    "                \"price\": 6999,\n",
    "                \"specs\": {\n",
    "                    \"color\": [\"黑色\", \"白色\", \"蓝色\"],\n",
    "                    \"storage\": [\"128GB\", \"256GB\", \"512GB\"],\n",
    "                    \"screen\": \"6.1英寸\"\n",
    "                }\n",
    "            },\n",
    "            {\n",
    "                \"id\": \"P002\",\n",
    "                \"name\": \"MacBook Pro\",\n",
    "                \"category\": \"笔记本\",\n",
    "                \"price\": 12999,\n",
    "                \"specs\": {\n",
    "                    \"color\": [\"深空灰\", \"银色\"],\n",
    "                    \"storage\": [\"256GB\", \"512GB\", \"1TB\"],\n",
    "                    \"screen\": \"14英寸\"\n",
    "                }\n",
    "            },\n",
    "            {\n",
    "                \"id\": \"P003\",\n",
    "                \"name\": \"iPad Air\",\n",
    "                \"category\": \"平板\",\n",
    "                \"price\": 4799,\n",
    "                \"specs\": {\n",
    "                    \"color\": [\"银色\", \"玫瑰金\", \"天蓝色\"],\n",
    "                    \"storage\": [\"64GB\", \"256GB\"],\n",
    "                    \"screen\": \"10.9英寸\"\n",
    "                }\n",
    "            }\n",
    "        ]\n",
    "    }\n",
    "    \n",
    "    with open('data/products.json', 'w', encoding='utf-8') as f:\n",
    "        json.dump(product_data, f, ensure_ascii=False, indent=4)\n",
    "    print(\"products.json已创建\")\n",
    "\n",
    "def create_sqlite_db():\n",
    "    \"\"\"创建公司数据SQLite数据库\"\"\"\n",
    "    conn = sqlite3.connect('data/company.db')\n",
    "    c = conn.cursor()\n",
    "    \n",
    "    # 创建表\n",
    "    c.execute('''CREATE TABLE IF NOT EXISTS employees\n",
    "                 (id TEXT PRIMARY KEY,\n",
    "                  name TEXT,\n",
    "                  department TEXT,\n",
    "                  salary REAL,\n",
    "                  hire_date TEXT)''')\n",
    "    \n",
    "    c.execute('''CREATE TABLE IF NOT EXISTS departments\n",
    "                 (name TEXT PRIMARY KEY,\n",
    "                  manager TEXT,\n",
    "                  budget REAL)''')\n",
    "    \n",
    "    # 插入示例数据\n",
    "    employees = [\n",
    "        ('EMP001', '张三', '销售', 8000, '2023-01-01'),\n",
    "        ('EMP002', '李四', '技术', 12000, '2023-02-15'),\n",
    "        ('EMP003', '王五', '技术', 12000, '2023-03-20'),\n",
    "        ('EMP004', '赵六', '市场', 9000, '2023-04-10'),\n",
    "        ('EMP005', '钱七', '销售', 8000, '2023-05-01')\n",
    "    ]\n",
    "    \n",
    "    departments = [\n",
    "        ('销售', '王明', 200000),\n",
    "        ('技术', '李强', 300000),\n",
    "        ('市场', '张艳', 150000),\n",
    "        ('人事', '刘波', 100000),\n",
    "        ('财务', '周红', 120000)\n",
    "    ]\n",
    "    \n",
    "    c.executemany('INSERT OR REPLACE INTO employees VALUES (?,?,?,?,?)', employees)\n",
    "    c.executemany('INSERT OR REPLACE INTO departments VALUES (?,?,?)', departments)\n",
    "    \n",
    "    conn.commit()\n",
    "    conn.close()\n",
    "    print(\"company.db已创建\")\n",
    "\n",
    "if __name__ == \"__main__\":\n",
    "    # 创建data目录\n",
    "    import os\n",
    "    if not os.path.exists('data'):\n",
    "        os.makedirs('data')\n",
    "        \n",
    "    # 创建所有示例文件\n",
    "    create_csv_file()\n",
    "    create_excel_file()\n",
    "    create_json_file()\n",
    "    create_sqlite_db()\n",
    "    \n",
    "    print(\"\\n所有示例数据文件已创建完成。\")"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f3d929ac-39d8-4305-b82c-71481c0b3c91",
   "metadata": {},
   "source": [
    "读取CSV文件："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "69ee18b7-da80-4786-bf55-b4a44e7eca8c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "=== 1. 基础CSV读取 ===\n",
      "\n",
      "1.1 查看前5行数据：\n",
      "           日期   产品    价格  数量 销售员\n",
      "0  2024-01-01   手机  6837  18  张三\n",
      "1  2024-01-02  笔记本  6591  17  李四\n",
      "2  2024-01-03   平板  8308  37  王五\n",
      "3  2024-01-04   耳机  3226  24  赵六\n",
      "4  2024-01-05   手表  5710  45  钱七\n",
      "\n",
      "1.2 数据基本信息：\n",
      "<class 'pandas.core.frame.DataFrame'>\n",
      "RangeIndex: 10 entries, 0 to 9\n",
      "Data columns (total 5 columns):\n",
      " #   Column  Non-Null Count  Dtype \n",
      "---  ------  --------------  ----- \n",
      " 0   日期      10 non-null     object\n",
      " 1   产品      10 non-null     object\n",
      " 2   价格      10 non-null     int64 \n",
      " 3   数量      10 non-null     int64 \n",
      " 4   销售员     10 non-null     object\n",
      "dtypes: int64(2), object(3)\n",
      "memory usage: 532.0+ bytes\n",
      "None\n",
      "\n",
      "=== 2. 数据类型控制 ===\n",
      "\n",
      "2.1 指定数据类型后的信息：\n",
      "<class 'pandas.core.frame.DataFrame'>\n",
      "RangeIndex: 10 entries, 0 to 9\n",
      "Data columns (total 5 columns):\n",
      " #   Column  Non-Null Count  Dtype         \n",
      "---  ------  --------------  -----         \n",
      " 0   日期      10 non-null     datetime64[ns]\n",
      " 1   产品      10 non-null     object        \n",
      " 2   价格      10 non-null     float64       \n",
      " 3   数量      10 non-null     int32         \n",
      " 4   销售员     10 non-null     category      \n",
      "dtypes: category(1), datetime64[ns](1), float64(1), int32(1), object(1)\n",
      "memory usage: 462.0+ bytes\n",
      "None\n",
      "\n",
      "=== 3. 索引控制 ===\n",
      "\n",
      "3.1 使用日期列作为索引：\n",
      "             产品    价格  数量 销售员\n",
      "日期                           \n",
      "2024-01-01   手机  6837  18  张三\n",
      "2024-01-02  笔记本  6591  17  李四\n",
      "2024-01-03   平板  8308  37  王五\n",
      "2024-01-04   耳机  3226  24  赵六\n",
      "2024-01-05   手表  5710  45  钱七\n",
      "\n",
      "3.2 使用多个列创建多级索引：\n",
      "                  价格  数量 销售员\n",
      "日期         产品               \n",
      "2024-01-01 手机   6837  18  张三\n",
      "2024-01-02 笔记本  6591  17  李四\n",
      "2024-01-03 平板   8308  37  王五\n",
      "2024-01-04 耳机   3226  24  赵六\n",
      "2024-01-05 手表   5710  45  钱七\n",
      "\n",
      "=== 4. 数据筛选 ===\n",
      "\n",
      "4.1 只读取指定列：\n",
      "           日期   产品    价格\n",
      "0  2024-01-01   手机  6837\n",
      "1  2024-01-02  笔记本  6591\n",
      "2  2024-01-03   平板  8308\n",
      "3  2024-01-04   耳机  3226\n",
      "4  2024-01-05   手表  5710\n",
      "\n",
      "4.2 使用列的位置选择（前三列）：\n",
      "           日期   产品    价格\n",
      "0  2024-01-01   手机  6837\n",
      "1  2024-01-02  笔记本  6591\n",
      "2  2024-01-03   平板  8308\n",
      "3  2024-01-04   耳机  3226\n",
      "4  2024-01-05   手表  5710\n",
      "\n",
      "=== 5. 大文件处理技巧 ===\n",
      "\n",
      "5.1 分块读取示例（每次读取3行）：\n",
      "\n",
      "第1块数据：\n",
      "           日期   产品    价格  数量 销售员\n",
      "0  2024-01-01   手机  6837  18  张三\n",
      "1  2024-01-02  笔记本  6591  17  李四\n",
      "2  2024-01-03   平板  8308  37  王五\n",
      "\n",
      "第2块数据：\n",
      "           日期  产品    价格  数量 销售员\n",
      "3  2024-01-04  耳机  3226  24  赵六\n",
      "4  2024-01-05  手表  5710  45  钱七\n",
      "5  2024-01-06  手机  1916   5  张三\n",
      "\n",
      "5.2 只读取前4行数据：\n",
      "           日期   产品    价格  数量 销售员\n",
      "0  2024-01-01   手机  6837  18  张三\n",
      "1  2024-01-02  笔记本  6591  17  李四\n",
      "2  2024-01-03   平板  8308  37  王五\n",
      "3  2024-01-04   耳机  3226  24  赵六\n",
      "\n",
      "=== 6. 高级参数使用 ===\n",
      "\n",
      "6.1 自定义缺失值处理：\n",
      "\n",
      "6.2 数据格式化：\n",
      "\n",
      "6.3 跳过特定行：\n",
      "\n",
      "=== 7. 数据验证 ===\n",
      "\n",
      "7.1 数值列的统计描述：\n",
      "           价格    数量\n",
      "count   10.00 10.00\n",
      "mean  5756.70 24.30\n",
      "std   2198.00 11.60\n",
      "min   1916.00  5.00\n",
      "25%   4360.00 17.25\n",
      "50%   6150.50 22.00\n",
      "75%   7188.75 31.50\n",
      "max   8635.00 45.00\n",
      "\n",
      "7.2 查看产品列的唯一值：\n",
      "['手机' '笔记本' '平板' '耳机' '手表']\n",
      "\n",
      "7.3 检查缺失值：\n",
      "日期     0\n",
      "产品     0\n",
      "价格     0\n",
      "数量     0\n",
      "销售员    0\n",
      "dtype: int64\n",
      "\n",
      "=== 8. 实用技巧 ===\n",
      "\n",
      "8.1 读取时使用自定义处理函数：\n",
      "           日期   产品    价格  数量 销售员\n",
      "0  2024-01-01   手机  6837  18  张三\n",
      "1  2024-01-02  笔记本  6591  17  李四\n",
      "2  2024-01-03   平板  8308  37  王五\n",
      "3  2024-01-04   耳机  3226  24  赵六\n",
      "4  2024-01-05   手表  5710  45  钱七\n"
     ]
    }
   ],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "\n",
    "# =============== 1. 基础读取演示 ===============\n",
    "# read_csv是Pandas最常用的数据导入函数之一\n",
    "# 这里我们使用Python引擎而不是默认的C引擎\n",
    "# Python引擎的优势：\n",
    "# - 支持更多高级功能（如skipfooter）\n",
    "# - 更好的错误处理能力\n",
    "# - 更灵活的解析选项\n",
    "# 虽然性能较C引擎稍慢，但功能更加完整\n",
    "print(\"=== 1. 基础CSV读取 ===\")\n",
    "df = pd.read_csv('data/sales_data.csv', engine='python')\n",
    "\n",
    "# head()函数默认显示前5行数据\n",
    "print(\"\\n1.1 查看前5行数据：\")\n",
    "print(df.head())\n",
    "\n",
    "print(\"\\n1.2 数据基本信息：\")\n",
    "print(df.info())\n",
    "\n",
    "# =============== 2. 数据类型控制 ===============\n",
    "print(\"\\n=== 2. 数据类型控制 ===\")\n",
    "\n",
    "# 使用Python引擎进行数据类型转换\n",
    "# Python引擎在处理数据类型转换时更加灵活\n",
    "df_typed = pd.read_csv('data/sales_data.csv',\n",
    "                      engine='python',\n",
    "                      dtype={\n",
    "                          '产品': str,           \n",
    "                          '价格': float,         \n",
    "                          '数量': int,           \n",
    "                          '销售员': 'category'   \n",
    "                      },\n",
    "                      parse_dates=['日期'])      \n",
    "\n",
    "print(\"\\n2.1 指定数据类型后的信息：\")\n",
    "print(df_typed.info())\n",
    "\n",
    "# =============== 3. 索引控制 ===============\n",
    "print(\"\\n=== 3. 索引控制 ===\")\n",
    "\n",
    "# Python引擎在处理索引时更加灵活\n",
    "print(\"\\n3.1 使用日期列作为索引：\")\n",
    "df_indexed = pd.read_csv('data/sales_data.csv', \n",
    "                        engine='python',\n",
    "                        index_col='日期')\n",
    "print(df_indexed.head())\n",
    "\n",
    "print(\"\\n3.2 使用多个列创建多级索引：\")\n",
    "df_multi_indexed = pd.read_csv('data/sales_data.csv', \n",
    "                              engine='python',\n",
    "                              index_col=['日期', '产品'])\n",
    "print(df_multi_indexed.head())\n",
    "\n",
    "# =============== 4. 数据筛选 ===============\n",
    "print(\"\\n=== 4. 数据筛选 ===\")\n",
    "\n",
    "print(\"\\n4.1 只读取指定列：\")\n",
    "df_selected = pd.read_csv('data/sales_data.csv', \n",
    "                         engine='python',\n",
    "                         usecols=['日期', '产品', '价格'])\n",
    "print(df_selected.head())\n",
    "\n",
    "print(\"\\n4.2 使用列的位置选择（前三列）：\")\n",
    "df_selected_by_position = pd.read_csv('data/sales_data.csv',\n",
    "                                    engine='python',\n",
    "                                    usecols=[0, 1, 2])\n",
    "print(df_selected_by_position.head())\n",
    "\n",
    "# =============== 5. 大文件处理 ===============\n",
    "print(\"\\n=== 5. 大文件处理技巧 ===\")\n",
    "\n",
    "# Python引擎在处理大文件时可能比C引擎慢\n",
    "# 但提供更好的错误处理和更多功能\n",
    "print(\"\\n5.1 分块读取示例（每次读取3行）：\")\n",
    "chunk_counter = 1\n",
    "for chunk in pd.read_csv('data/sales_data.csv', \n",
    "                        engine='python',\n",
    "                        chunksize=3):\n",
    "    print(f\"\\n第{chunk_counter}块数据：\")\n",
    "    print(chunk)\n",
    "    chunk_counter += 1\n",
    "    if chunk_counter > 2:  \n",
    "        break\n",
    "\n",
    "print(\"\\n5.2 只读取前4行数据：\")\n",
    "df_limited = pd.read_csv('data/sales_data.csv', \n",
    "                        engine='python',\n",
    "                        nrows=4)\n",
    "print(df_limited)\n",
    "\n",
    "# =============== 6. 高级参数使用 ===============\n",
    "print(\"\\n=== 6. 高级参数使用 ===\")\n",
    "\n",
    "# Python引擎在处理特殊情况时更加灵活\n",
    "print(\"\\n6.1 自定义缺失值处理：\")\n",
    "df_na = pd.read_csv('data/sales_data.csv',\n",
    "                    engine='python',\n",
    "                    na_values=['未知', 'N/A', ''],  \n",
    "                    keep_default_na=True)           \n",
    "\n",
    "print(\"\\n6.2 数据格式化：\")\n",
    "df_formatted = pd.read_csv('data/sales_data.csv',\n",
    "                          engine='python',\n",
    "                          thousands=',',    \n",
    "                          decimal='.')      \n",
    "\n",
    "# Python引擎原生支持skipfooter\n",
    "print(\"\\n6.3 跳过特定行：\")\n",
    "df_skipped = pd.read_csv('data/sales_data.csv',\n",
    "                        engine='python',    # Python引擎完全支持skipfooter\n",
    "                        skiprows=[1, 3],    \n",
    "                        skipfooter=1)       \n",
    "\n",
    "# =============== 7. 数据验证 ===============\n",
    "print(\"\\n=== 7. 数据验证 ===\")\n",
    "\n",
    "print(\"\\n7.1 数值列的统计描述：\")\n",
    "print(df.describe())\n",
    "\n",
    "print(\"\\n7.2 查看产品列的唯一值：\")\n",
    "print(df['产品'].unique())\n",
    "\n",
    "print(\"\\n7.3 检查缺失值：\")\n",
    "print(df.isnull().sum())\n",
    "\n",
    "# =============== 8. 实用技巧 ===============\n",
    "print(\"\\n=== 8. 实用技巧 ===\")\n",
    "\n",
    "# Python引擎在处理自定义函数时更加灵活\n",
    "print(\"\\n8.1 读取时使用自定义处理函数：\")\n",
    "df_processed = pd.read_csv('data/sales_data.csv',\n",
    "                          engine='python',\n",
    "                          converters={\n",
    "                              '产品': lambda x: x.strip().upper(),  \n",
    "                              '销售员': lambda x: x.strip()         \n",
    "                          })\n",
    "print(df_processed.head())\n",
    "\n",
    "# 8.2 设置显示选项\n",
    "pd.set_option('display.max_columns', None)  \n",
    "pd.set_option('display.max_rows', 10)       \n",
    "pd.set_option('display.float_format', lambda x: '%.2f' % x)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d6bbeb6f-f757-4f66-b3f8-77118a558350",
   "metadata": {},
   "source": [
    "读取EXCEL文件："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "d84dc1b4-0d3c-472f-918a-a141c8af13a9",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "=== 1. 基础Excel读取 ===\n",
      "\n",
      "1.1 读取第一个sheet：\n",
      "数据预览：\n",
      "     员工ID  姓名  部门        入职日期     工资\n",
      "0  EMP001  张三  销售  2023-01-01   8000\n",
      "1  EMP002  李四  技术  2023-02-15  12000\n",
      "2  EMP003  王五  技术  2023-03-20  12000\n",
      "3  EMP004  赵六  市场  2023-04-10   9000\n",
      "4  EMP005  钱七  销售  2023-05-01   8000\n",
      "\n",
      "基本信息：\n",
      "<class 'pandas.core.frame.DataFrame'>\n",
      "RangeIndex: 5 entries, 0 to 4\n",
      "Data columns (total 5 columns):\n",
      " #   Column  Non-Null Count  Dtype \n",
      "---  ------  --------------  ----- \n",
      " 0   员工ID    5 non-null      object\n",
      " 1   姓名      5 non-null      object\n",
      " 2   部门      5 non-null      object\n",
      " 3   入职日期    5 non-null      object\n",
      " 4   工资      5 non-null      int64 \n",
      "dtypes: int64(1), object(4)\n",
      "memory usage: 332.0+ bytes\n",
      "None\n",
      "\n",
      "1.2 读取指定sheet（部门信息）：\n",
      "   部门  主管  人数      预算\n",
      "0  销售  王明  20  200000\n",
      "1  技术  李强  30  300000\n",
      "2  市场  张艳  15  150000\n",
      "3  人事  刘波   5  100000\n",
      "4  财务  周红   8  120000\n",
      "\n",
      "=== 2. 多sheet处理 ===\n",
      "\n",
      "2.1 读取所有sheet：\n",
      "\n",
      "Sheet名称: 员工信息\n",
      "     员工ID  姓名  部门        入职日期     工资\n",
      "0  EMP001  张三  销售  2023-01-01   8000\n",
      "1  EMP002  李四  技术  2023-02-15  12000\n",
      "2  EMP003  王五  技术  2023-03-20  12000\n",
      "3  EMP004  赵六  市场  2023-04-10   9000\n",
      "4  EMP005  钱七  销售  2023-05-01   8000\n",
      "\n",
      "Sheet名称: 部门信息\n",
      "   部门  主管  人数      预算\n",
      "0  销售  王明  20  200000\n",
      "1  技术  李强  30  300000\n",
      "2  市场  张艳  15  150000\n",
      "3  人事  刘波   5  100000\n",
      "4  财务  周红   8  120000\n",
      "\n",
      "2.2 读取指定的多个sheet：\n",
      "\n",
      "Sheet名称: 员工信息\n",
      "     员工ID  姓名  部门        入职日期     工资\n",
      "0  EMP001  张三  销售  2023-01-01   8000\n",
      "1  EMP002  李四  技术  2023-02-15  12000\n",
      "2  EMP003  王五  技术  2023-03-20  12000\n",
      "3  EMP004  赵六  市场  2023-04-10   9000\n",
      "4  EMP005  钱七  销售  2023-05-01   8000\n",
      "\n",
      "Sheet名称: 部门信息\n",
      "   部门  主管  人数      预算\n",
      "0  销售  王明  20  200000\n",
      "1  技术  李强  30  300000\n",
      "2  市场  张艳  15  150000\n",
      "3  人事  刘波   5  100000\n",
      "4  财务  周红   8  120000\n",
      "\n",
      "=== 3. 数据类型控制 ===\n",
      "\n",
      "3.1 指定数据类型：\n",
      "<class 'pandas.core.frame.DataFrame'>\n",
      "RangeIndex: 5 entries, 0 to 4\n",
      "Data columns (total 5 columns):\n",
      " #   Column  Non-Null Count  Dtype   \n",
      "---  ------  --------------  -----   \n",
      " 0   员工ID    5 non-null      object  \n",
      " 1   姓名      5 non-null      object  \n",
      " 2   部门      5 non-null      category\n",
      " 3   入职日期    5 non-null      object  \n",
      " 4   工资      5 non-null      float64 \n",
      "dtypes: category(1), float64(1), object(3)\n",
      "memory usage: 429.0+ bytes\n",
      "None\n",
      "\n",
      "=== 4. 数据选择与过滤 ===\n",
      "\n",
      "4.1 只读取指定列：\n",
      "     员工ID  姓名     工资\n",
      "0  EMP001  张三   8000\n",
      "1  EMP002  李四  12000\n",
      "2  EMP003  王五  12000\n",
      "3  EMP004  赵六   9000\n",
      "4  EMP005  钱七   8000\n",
      "\n",
      "4.2 使用列的位置选择：\n",
      "     员工ID  姓名  部门\n",
      "0  EMP001  张三  销售\n",
      "1  EMP002  李四  技术\n",
      "2  EMP003  王五  技术\n",
      "3  EMP004  赵六  市场\n",
      "4  EMP005  钱七  销售\n",
      "\n",
      "=== 5. 日期处理 ===\n",
      "\n",
      "5.1 处理日期数据：\n",
      "员工ID            object\n",
      "姓名              object\n",
      "部门              object\n",
      "入职日期    datetime64[ns]\n",
      "工资               int64\n",
      "dtype: object\n",
      "\n",
      "=== 6. 高级参数使用 ===\n",
      "\n",
      "6.1 跳过特定行：\n",
      "     员工ID  姓名  部门        入职日期     工资\n",
      "0  EMP002  李四  技术  2023-02-15  12000\n",
      "1  EMP003  王五  技术  2023-03-20  12000\n",
      "2  EMP004  赵六  市场  2023-04-10   9000\n",
      "3  EMP005  钱七  销售  2023-05-01   8000\n",
      "\n",
      "6.2 自定义空值处理：\n",
      "员工ID    0\n",
      "姓名      0\n",
      "部门      0\n",
      "入职日期    0\n",
      "工资      0\n",
      "dtype: int64\n",
      "\n",
      "=== 7. 数据验证 ===\n",
      "\n",
      "7.1 数值列的统计描述：\n",
      "         人数        预算\n",
      "count  5.00      5.00\n",
      "mean  15.60 174000.00\n",
      "std    9.96  79874.90\n",
      "min    5.00 100000.00\n",
      "25%    8.00 120000.00\n",
      "50%   15.00 150000.00\n",
      "75%   20.00 200000.00\n",
      "max   30.00 300000.00\n",
      "\n",
      "7.2 查看部门的唯一值：\n",
      "['销售' '技术' '市场' '人事' '财务']\n",
      "\n",
      "7.3 检查缺失值：\n",
      "部门    0\n",
      "主管    0\n",
      "人数    0\n",
      "预算    0\n",
      "dtype: int64\n",
      "\n",
      "=== 8. 实用技巧 ===\n",
      "\n",
      "8.1 使用自定义转换器：\n",
      "     员工ID  姓名   部门        入职日期     工资\n",
      "0  EMP001  张三  销售部  2023-01-01   8000\n",
      "1  EMP002  李四  技术部  2023-02-15  12000\n",
      "2  EMP003  王五  技术部  2023-03-20  12000\n",
      "3  EMP004  赵六  市场部  2023-04-10   9000\n",
      "4  EMP005  钱七  销售部  2023-05-01   8000\n",
      "\n",
      "=== 9. Excel特有功能 ===\n",
      "\n",
      "9.1 读取特定单元格范围：\n",
      "     员工ID  姓名  部门\n",
      "0  EMP001  张三  销售\n",
      "1  EMP002  李四  技术\n",
      "2  EMP003  王五  技术\n",
      "3  EMP004  赵六  市场\n",
      "4  EMP005  钱七  销售\n",
      "\n",
      "9.2 处理合并单元格：\n",
      "     员工ID  姓名  部门        入职日期     工资\n",
      "0  EMP001  张三  销售  2023-01-01   8000\n",
      "1  EMP002  李四  技术  2023-02-15  12000\n",
      "2  EMP003  王五  技术  2023-03-20  12000\n",
      "3  EMP004  赵六  市场  2023-04-10   9000\n",
      "4  EMP005  钱七  销售  2023-05-01   8000\n"
     ]
    }
   ],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "\n",
    "# =============== 1. 基础读取演示 ===============\n",
    "# read_excel是Pandas中处理Excel文件的主要函数\n",
    "# 默认情况下，它会：\n",
    "# - 读取第一个工作表\n",
    "# - 使用第一行作为列名\n",
    "# - 自动推断数据类型\n",
    "# - 创建默认的数字索引（0,1,2...）\n",
    "print(\"=== 1. 基础Excel读取 ===\")\n",
    "\n",
    "# 1.1 读取默认sheet（第一个sheet）\n",
    "print(\"\\n1.1 读取第一个sheet：\")\n",
    "df = pd.read_excel('data/company_data.xlsx')\n",
    "print(\"数据预览：\")\n",
    "print(df.head())\n",
    "print(\"\\n基本信息：\")\n",
    "print(df.info())\n",
    "\n",
    "# 1.2 读取指定名称的sheet\n",
    "print(\"\\n1.2 读取指定sheet（部门信息）：\")\n",
    "df_dept = pd.read_excel('data/company_data.xlsx', \n",
    "                       sheet_name='部门信息')\n",
    "print(df_dept.head())\n",
    "\n",
    "# =============== 2. 多sheet处理 ===============\n",
    "print(\"\\n=== 2. 多sheet处理 ===\")\n",
    "\n",
    "# 2.1 读取所有sheet\n",
    "# sheet_name=None 会返回一个字典，键为sheet名，值为对应的DataFrame\n",
    "print(\"\\n2.1 读取所有sheet：\")\n",
    "all_sheets = pd.read_excel('data/company_data.xlsx', \n",
    "                          sheet_name=None)\n",
    "for sheet_name, df in all_sheets.items():\n",
    "    print(f\"\\nSheet名称: {sheet_name}\")\n",
    "    print(df.head())\n",
    "\n",
    "# 2.2 读取多个指定sheet\n",
    "print(\"\\n2.2 读取指定的多个sheet：\")\n",
    "selected_sheets = pd.read_excel('data/company_data.xlsx',\n",
    "                              sheet_name=['员工信息', '部门信息'])\n",
    "for sheet_name, df in selected_sheets.items():\n",
    "    print(f\"\\nSheet名称: {sheet_name}\")\n",
    "    print(df.head())\n",
    "\n",
    "# =============== 3. 数据类型控制 ===============\n",
    "print(\"\\n=== 3. 数据类型控制 ===\")\n",
    "\n",
    "# 3.1 指定列的数据类型\n",
    "# 明确指定数据类型可以：\n",
    "# - 确保数据的准确性\n",
    "# - 优化内存使用\n",
    "# - 防止数据类型推断错误\n",
    "print(\"\\n3.1 指定数据类型：\")\n",
    "df_typed = pd.read_excel('data/company_data.xlsx',\n",
    "                        dtype={\n",
    "                            '员工ID': str,        # 将ID转换为字符串\n",
    "                            '工资': float,        # 确保工资为浮点数\n",
    "                            '部门': 'category'    # 使用类别类型节省内存\n",
    "                        })\n",
    "print(df_typed.info())\n",
    "\n",
    "# =============== 4. 数据选择与过滤 ===============\n",
    "print(\"\\n=== 4. 数据选择与过滤 ===\")\n",
    "\n",
    "# 4.1 选择特定列\n",
    "print(\"\\n4.1 只读取指定列：\")\n",
    "df_selected = pd.read_excel('data/company_data.xlsx',\n",
    "                           usecols=['员工ID', '姓名', '工资'])\n",
    "print(df_selected.head())\n",
    "\n",
    "# 4.2 使用列的位置选择\n",
    "print(\"\\n4.2 使用列的位置选择：\")\n",
    "df_by_position = pd.read_excel('data/company_data.xlsx',\n",
    "                              usecols=[0, 1, 2])  # 选择前三列\n",
    "print(df_by_position.head())\n",
    "\n",
    "# =============== 5. 日期处理 ===============\n",
    "print(\"\\n=== 5. 日期处理 ===\")\n",
    "\n",
    "# 5.1 转换日期列\n",
    "print(\"\\n5.1 处理日期数据：\")\n",
    "df_dates = pd.read_excel('data/company_data.xlsx',\n",
    "                        parse_dates=['入职日期'])  # 将入职日期转换为datetime\n",
    "print(df_dates.dtypes)\n",
    "\n",
    "# =============== 6. 高级参数使用 ===============\n",
    "print(\"\\n=== 6. 高级参数使用 ===\")\n",
    "\n",
    "# 6.1 跳过行\n",
    "print(\"\\n6.1 跳过特定行：\")\n",
    "df_skipped = pd.read_excel('data/company_data.xlsx',\n",
    "                          skiprows=[1],    # 跳过第二行\n",
    "                          header=0)        # 使用第一行作为列名\n",
    "print(df_skipped.head())\n",
    "\n",
    "# 6.2 处理空值\n",
    "print(\"\\n6.2 自定义空值处理：\")\n",
    "df_na = pd.read_excel('data/company_data.xlsx',\n",
    "                      na_values=['无', 'N/A', '--'])  # 将这些值视为NA\n",
    "print(df_na.isnull().sum())\n",
    "\n",
    "# =============== 7. 数据验证 ===============\n",
    "print(\"\\n=== 7. 数据验证 ===\")\n",
    "\n",
    "# 7.1 基本统计信息\n",
    "print(\"\\n7.1 数值列的统计描述：\")\n",
    "print(df.describe())\n",
    "\n",
    "# 7.2 检查唯一值\n",
    "print(\"\\n7.2 查看部门的唯一值：\")\n",
    "print(df['部门'].unique())\n",
    "\n",
    "# 7.3 检查缺失值\n",
    "print(\"\\n7.3 检查缺失值：\")\n",
    "print(df.isnull().sum())\n",
    "\n",
    "# =============== 8. 实用技巧 ===============\n",
    "print(\"\\n=== 8. 实用技巧 ===\")\n",
    "\n",
    "# 8.1 转换器使用\n",
    "print(\"\\n8.1 使用自定义转换器：\")\n",
    "df_converted = pd.read_excel('data/company_data.xlsx',\n",
    "                            converters={\n",
    "                                '员工ID': str.upper,        # ID转大写\n",
    "                                '姓名': str.strip,          # 去除姓名中的空格\n",
    "                                '部门': lambda x: f\"{x}部\"   # 添加\"部\"字\n",
    "                            })\n",
    "print(df_converted.head())\n",
    "\n",
    "# 8.2 设置显示选项\n",
    "pd.set_option('display.max_columns', None)  # 显示所有列\n",
    "pd.set_option('display.max_rows', 10)       # 最多显示10行\n",
    "pd.set_option('display.float_format', lambda x: '%.2f' % x)  # 浮点数格式\n",
    "\n",
    "# =============== 9. Excel特有功能 ===============\n",
    "print(\"\\n=== 9. Excel特有功能 ===\")\n",
    "\n",
    "# 9.1 读取指定范围\n",
    "print(\"\\n9.1 读取特定单元格范围：\")\n",
    "df_range = pd.read_excel('data/company_data.xlsx',\n",
    "                        usecols=\"A:C\",     # 只读取A到C列\n",
    "                        nrows=5)           # 只读取前5行\n",
    "print(df_range)\n",
    "\n",
    "# 9.2 处理合并的单元格\n",
    "print(\"\\n9.2 处理合并单元格：\")\n",
    "df_merged = pd.read_excel('data/company_data.xlsx',\n",
    "                         keep_default_na=True,  # 保留默认的NA处理\n",
    "                         na_filter=True)        # 启用NA值过滤\n",
    "print(df_merged.head())"
   ]
  },
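  {
   "cell_type": "markdown",
   "id": "c1a2b3d4-1111-4e22-9a33-b44c55d66e77",
   "metadata": {},
   "source": [
    "补充示例：上面第3节提到用`category`类型节省内存。下面用一个纯内存的小例子验证这一点（不依赖Excel文件，数据为随手构造的示例）：\n",
    "\n",
    "```python\n",
    "import pandas as pd\n",
    "\n",
    "# 构造一个重复值很多的字符串列（示例数据）\n",
    "s_obj = pd.Series(['销售', '技术', '市场'] * 1000)\n",
    "\n",
    "# 转为category：底层存储为整数编码 + 少量类别标签\n",
    "s_cat = s_obj.astype('category')\n",
    "\n",
    "mem_obj = s_obj.memory_usage(deep=True)\n",
    "mem_cat = s_cat.memory_usage(deep=True)\n",
    "print(mem_obj, mem_cat, mem_cat < mem_obj)\n",
    "```\n",
    "\n",
    "当列中不同取值很少而行数很多时，`category`的内存优势最明显。"
   ]
  },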
  {
   "cell_type": "markdown",
   "id": "a25596b8-f427-43a8-bf5d-555babc2eb24",
   "metadata": {},
   "source": [
    "读取JSON文件："
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "db34bb6e-a027-42b6-a38d-81384eacfb22",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "=== 1. 基础JSON读取 ===\n",
      "\n",
      "1.1 基本读取：\n",
      "数据预览：\n",
      "                                            products\n",
      "0  {'id': 'P001', 'name': 'iPhone 15', 'category'...\n",
      "1  {'id': 'P002', 'name': 'MacBook Pro', 'categor...\n",
      "2  {'id': 'P003', 'name': 'iPad Air', 'category':...\n",
      "\n",
      "数据信息：\n",
      "<class 'pandas.core.frame.DataFrame'>\n",
      "RangeIndex: 3 entries, 0 to 2\n",
      "Data columns (total 1 columns):\n",
      " #   Column    Non-Null Count  Dtype \n",
      "---  ------    --------------  ----- \n",
      " 0   products  3 non-null      object\n",
      "dtypes: object(1)\n",
      "memory usage: 156.0+ bytes\n",
      "None\n",
      "\n",
      "1.2 指定数据类型读取：\n",
      "products    object\n",
      "dtype: object\n",
      "\n",
      "=== 2. 处理嵌套JSON ===\n",
      "\n",
      "2.1 使用json模块读取：\n",
      "原始JSON数据结构：\n",
      "{\n",
      "  \"products\": [\n",
      "    {\n",
      "      \"id\": \"P001\",\n",
      "      \"name\": \"iPhone 15\",\n",
      "      \"category\": \"手机\",\n",
      "      \"price\": 6999,\n",
      "      \"specs\": {\n",
      "        \"color\": [\n",
      "          \"黑色\",\n",
      "          \"白色\",\n",
      "          \"蓝色\"\n",
      "        ],\n",
      "        \"storage\": [\n",
      "          \"128GB\",\n",
      "          \"256GB\",\n",
      "          \"512GB\"\n",
      "        ],\n",
      "        \"screen\": \"6.1英寸\"\n",
      "      }\n",
      "    },\n",
      "    {\n",
      "      \"id\": \"P002\",\n",
      "      \"name\": \"MacBook Pro\",\n",
      "      \"category\": \"笔记本\",\n",
      "      \"price\": 12999,\n",
      "      \"specs\": {\n",
      "        \"color\": [\n",
      "          \"深空灰\",\n",
      "          \"银色\"\n",
      "        ],\n",
      "        \"storage\": [\n",
      "          \"256GB\",\n",
      "          \"512GB\",\n",
      "          \"1TB\"\n",
      "        ],\n",
      "        \"screen\": \"14英寸\"\n",
      "      }\n",
      "    },\n",
      "    {\n",
      "      \"id\": \"P003\",\n",
      "      \"name\": \"iPad Air\",\n",
      "      \"category\": \"平板\",\n",
      "      \"price\": 4799,\n",
      "      \"specs\": {\n",
      "        \"color\": [\n",
      "          \"银色\",\n",
      "          \"玫瑰金\",\n",
      "          \"天蓝色\"\n",
      "        ],\n",
      "        \"storage\": [\n",
      "          \"64GB\",\n",
      "          \"256GB\"\n",
      "        ],\n",
      "        \"screen\": \"10.9英寸\"\n",
      "      }\n",
      "    }\n",
      "  ]\n",
      "}\n",
      "\n",
      "2.2 展平嵌套结构：\n",
      "     id         name category  price     specs.color          specs.storage  \\\n",
      "0  P001    iPhone 15       手机   6999    [黑色, 白色, 蓝色]  [128GB, 256GB, 512GB]   \n",
      "1  P002  MacBook Pro      笔记本  12999       [深空灰, 银色]    [256GB, 512GB, 1TB]   \n",
      "2  P003     iPad Air       平板   4799  [银色, 玫瑰金, 天蓝色]          [64GB, 256GB]   \n",
      "\n",
      "  specs.screen  \n",
      "0        6.1英寸  \n",
      "1         14英寸  \n",
      "2       10.9英寸  \n",
      "\n",
      "2.3 处理多层嵌套：\n",
      "     id         name category  price     specs_color          specs_storage  \\\n",
      "0  P001    iPhone 15       手机   6999    [黑色, 白色, 蓝色]  [128GB, 256GB, 512GB]   \n",
      "1  P002  MacBook Pro      笔记本  12999       [深空灰, 银色]    [256GB, 512GB, 1TB]   \n",
      "2  P003     iPad Air       平板   4799  [银色, 玫瑰金, 天蓝色]          [64GB, 256GB]   \n",
      "\n",
      "  specs_screen  \n",
      "0        6.1英寸  \n",
      "1         14英寸  \n",
      "2       10.9英寸  \n",
      "\n",
      "=== 3. 高级JSON处理 ===\n",
      "\n",
      "3.1 选择性展平特定字段：\n",
      "     0    id         name  price specs.screen\n",
      "0   黑色  P001    iPhone 15   6999        6.1英寸\n",
      "1   白色  P001    iPhone 15   6999        6.1英寸\n",
      "2   蓝色  P001    iPhone 15   6999        6.1英寸\n",
      "3  深空灰  P002  MacBook Pro  12999         14英寸\n",
      "4   银色  P002  MacBook Pro  12999         14英寸\n",
      "5   银色  P003     iPad Air   4799       10.9英寸\n",
      "6  玫瑰金  P003     iPad Air   4799       10.9英寸\n",
      "7  天蓝色  P003     iPad Air   4799       10.9英寸\n",
      "\n",
      "3.2 处理数组类型字段：\n",
      "     id         name category  price specs.color          specs.storage  \\\n",
      "0  P001    iPhone 15       手机   6999          黑色  [128GB, 256GB, 512GB]   \n",
      "0  P001    iPhone 15       手机   6999          白色  [128GB, 256GB, 512GB]   \n",
      "0  P001    iPhone 15       手机   6999          蓝色  [128GB, 256GB, 512GB]   \n",
      "1  P002  MacBook Pro      笔记本  12999         深空灰    [256GB, 512GB, 1TB]   \n",
      "1  P002  MacBook Pro      笔记本  12999          银色    [256GB, 512GB, 1TB]   \n",
      "2  P003     iPad Air       平板   4799          银色          [64GB, 256GB]   \n",
      "2  P003     iPad Air       平板   4799         玫瑰金          [64GB, 256GB]   \n",
      "2  P003     iPad Air       平板   4799         天蓝色          [64GB, 256GB]   \n",
      "\n",
      "  specs.screen  \n",
      "0        6.1英寸  \n",
      "0        6.1英寸  \n",
      "0        6.1英寸  \n",
      "1         14英寸  \n",
      "1         14英寸  \n",
      "2       10.9英寸  \n",
      "2       10.9英寸  \n",
      "2       10.9英寸  \n",
      "\n",
      "=== 4. 数据验证和清理 ===\n",
      "\n",
      "4.1 数据类型检查：\n",
      "id               object\n",
      "name             object\n",
      "category         object\n",
      "price             int64\n",
      "specs.color      object\n",
      "specs.storage    object\n",
      "specs.screen     object\n",
      "dtype: object\n",
      "\n",
      "4.2 缺失值检查：\n",
      "id               0\n",
      "name             0\n",
      "category         0\n",
      "price            0\n",
      "specs.color      0\n",
      "specs.storage    0\n",
      "specs.screen     0\n",
      "dtype: int64\n",
      "\n",
      "=== 5. JSON特有功能 ===\n",
      "\n",
      "5.1 日期时间处理：\n",
      "\n",
      "5.2 处理不同JSON格式：\n"
     ]
     }
   ],
   "source": [
    "import pandas as pd\n",
    "import json\n",
    "import numpy as np\n",
    "\n",
    "# =============== 1. 基础JSON读取 ===============\n",
    "# JSON (JavaScript Object Notation) 是一种轻量级的数据交换格式\n",
    "# 它的特点是：\n",
    "# - 易于人阅读和编写\n",
    "# - 易于机器解析和生成\n",
    "# - 基于键值对和数组结构\n",
    "print(\"=== 1. 基础JSON读取 ===\")\n",
    "\n",
    "# 1.1 直接读取JSON文件\n",
    "print(\"\\n1.1 基本读取：\")\n",
    "df = pd.read_json('data/products.json')\n",
    "print(\"数据预览：\")\n",
    "print(df)\n",
    "print(\"\\n数据信息：\")\n",
    "print(df.info())\n",
    "\n",
    "# 1.2 读取并指定数据类型\n",
    "print(\"\\n1.2 指定数据类型读取：\")\n",
    "df_typed = pd.read_json('data/products.json',\n",
    "                       dtype={'id': str,\n",
    "                             'price': float})\n",
    "print(df_typed.dtypes)\n",
    "\n",
    "# =============== 2. 处理嵌套JSON ===============\n",
    "print(\"\\n=== 2. 处理嵌套JSON ===\")\n",
    "\n",
    "# 2.1 使用Python内置json模块读取\n",
    "print(\"\\n2.1 使用json模块读取：\")\n",
    "with open('data/products.json', 'r', encoding='utf-8') as f:\n",
    "    data = json.load(f)\n",
    "print(\"原始JSON数据结构：\")\n",
    "print(json.dumps(data, indent=2, ensure_ascii=False))\n",
    "\n",
    "# 2.2 使用json_normalize处理嵌套结构\n",
    "print(\"\\n2.2 展平嵌套结构：\")\n",
    "df_normalized = pd.json_normalize(data['products'])\n",
    "print(df_normalized)\n",
    "\n",
    "# 2.3 处理多层嵌套\n",
    "print(\"\\n2.3 处理多层嵌套：\")\n",
    "df_nested = pd.json_normalize(\n",
    "    data['products'],\n",
    "    sep='_',          # 指定分隔符\n",
    "    max_level=2       # 限制展开层级\n",
    ")\n",
    "print(df_nested)\n",
    "\n",
    "# =============== 3. 高级JSON处理 ===============\n",
    "print(\"\\n=== 3. 高级JSON处理 ===\")\n",
    "\n",
    "# 3.1 选择性展平\n",
    "print(\"\\n3.1 选择性展平特定字段：\")\n",
    "df_selected = pd.json_normalize(\n",
    "    data['products'],\n",
    "    record_path=['specs', 'color'],    # 展开color数组\n",
    "    meta=[                             # 保留的其他字段\n",
    "        'id',\n",
    "        'name',\n",
    "        'price',\n",
    "        ['specs', 'screen']            # 嵌套字段路径\n",
    "    ]\n",
    ")\n",
    "print(df_selected)\n",
    "\n",
    "# 3.2 处理数组类型\n",
    "print(\"\\n3.2 处理数组类型字段：\")\n",
    "# 展开specs.color数组\n",
    "df_exploded = df_normalized.explode('specs.color')\n",
    "print(df_exploded)\n",
    "\n",
    "# =============== 4. 数据验证和清理 ===============\n",
    "print(\"\\n=== 4. 数据验证和清理 ===\")\n",
    "\n",
    "# 4.1 检查数据类型\n",
    "print(\"\\n4.1 数据类型检查：\")\n",
    "print(df_normalized.dtypes)\n",
    "\n",
    "# 4.2 检查缺失值\n",
    "print(\"\\n4.2 缺失值检查：\")\n",
    "print(df_normalized.isnull().sum())\n",
    "\n",
    "# =============== 5. JSON特有功能 ===============\n",
    "print(\"\\n=== 5. JSON特有功能 ===\")\n",
    "\n",
    "# 5.1 处理日期时间\n",
    "print(\"\\n5.1 日期时间处理：\")\n",
    "# 假设JSON中包含日期字符串\n",
    "df_dates = pd.read_json('data/products.json',\n",
    "                       convert_dates=['update_time'])  # 转换日期字段\n",
    "\n",
    "# 5.2 处理不同的JSON格式\n",
    "print(\"\\n5.2 处理不同JSON格式：\")\n",
    "# 处理紧凑型JSON\n",
    "df_compact = pd.read_json('data/products.json',\n",
    "                         lines=True)  # 每行一个JSON对象\n",
    "\n",
    "# =============== 6. 实用技巧 ===============\n",
    "print(\"\\n=== 6. 实用技巧 ===\")\n",
    "\n",
    "# 6.1 自定义JSON解析\n",
    "print(\"\\n6.1 使用自定义解析：\")\n",
    "def custom_parser(json_str):\n",
    "    \"\"\"自定义JSON解析函数\"\"\"\n",
    "    data = json.loads(json_str)\n",
    "    # 在这里可以添加自定义的处理逻辑\n",
    "    return data\n",
    "\n",
    "with open('data/products.json', 'r', encoding='utf-8') as f:\n",
    "    custom_data = custom_parser(f.read())\n",
    "    df_custom = pd.DataFrame(custom_data['products'])\n",
    "print(df_custom)\n",
    "\n",
    "# 6.2 处理特殊字符\n",
    "print(\"\\n6.2 处理特殊字符：\")\n",
    "df_encoded = pd.read_json('data/products.json',\n",
    "                         encoding='utf-8')  # 指定编码\n",
    "\n",
    "# =============== 7. 数据转换 ===============\n",
    "print(\"\\n=== 7. 数据转换 ===\")\n",
    "\n",
    "# 7.1 JSON到其他格式\n",
    "print(\"\\n7.1 转换为其他格式：\")\n",
    "# 转换为CSV\n",
    "df_normalized.to_csv('data/products_converted.csv', index=False)\n",
    "# 转换为Excel\n",
    "df_normalized.to_excel('data/products_converted.xlsx', index=False)\n",
    "\n",
    "# 7.2 数据规范化\n",
    "print(\"\\n7.2 数据规范化：\")\n",
    "# 处理价格字段\n",
    "df_normalized['price_normalized'] = df_normalized['price'].apply(\n",
    "    lambda x: float(str(x).replace(',', ''))\n",
    ")\n",
    "\n",
    "# =============== 8. 设置显示选项 ===============\n",
    "print(\"\\n=== 8. 设置显示选项 ===\")\n",
    "\n",
    "# 设置显示选项以更好地查看JSON数据\n",
    "pd.set_option('display.max_columns', None)  # 显示所有列\n",
    "pd.set_option('display.max_rows', 10)       # 限制显示行数\n",
    "pd.set_option('display.max_colwidth', 100)  # 限制列宽\n",
    "pd.set_option('display.precision', 2)       # 设置小数精度"
   ]
  },
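  {
   "cell_type": "markdown",
   "id": "d2b3c4e5-2222-4f33-8b44-c55d66e77f88",
   "metadata": {},
   "source": [
    "补充示例：`lines=True`对应的是JSON Lines（每行一个独立JSON对象）格式，与products.json这类嵌套JSON不同。下面用内存中的小DataFrame演示JSON Lines的写出与读回（示例数据，不涉及本地文件）：\n",
    "\n",
    "```python\n",
    "import io\n",
    "import pandas as pd\n",
    "\n",
    "df = pd.DataFrame({'id': ['P001', 'P002'], 'price': [6999, 12999]})\n",
    "\n",
    "# 写出为JSON Lines：orient='records'配合lines=True，每行一个对象\n",
    "buf = io.StringIO()\n",
    "df.to_json(buf, orient='records', lines=True, force_ascii=False)\n",
    "\n",
    "# 读回：同样需要lines=True\n",
    "buf.seek(0)\n",
    "df_back = pd.read_json(buf, lines=True)\n",
    "print(df_back)\n",
    "```\n",
    "\n",
    "日志类数据常用这种格式，便于逐行追加和流式处理。"
   ]
  },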
  {
   "cell_type": "markdown",
   "id": "f6c65bab-3ced-4c12-8c19-4188605fea92",
   "metadata": {},
   "source": [
    "读取数据库"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "b7090d1f-ea6d-43b6-a882-5e51527db0d3",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "=== 1. 数据库连接 ===\n",
      "\n",
      "=== 2. 基本表读取 ===\n",
      "\n",
      "2.1 读取员工表：\n",
      "员工数据预览：\n",
      "       id name department   salary   hire_date\n",
      "0  EMP001   张三         销售  8000.00  2023-01-01\n",
      "1  EMP002   李四         技术 12000.00  2023-02-15\n",
      "2  EMP003   王五         技术 12000.00  2023-03-20\n",
      "3  EMP004   赵六         市场  9000.00  2023-04-10\n",
      "4  EMP005   钱七         销售  8000.00  2023-05-01\n",
      "\n",
      "数据信息：\n",
      "<class 'pandas.core.frame.DataFrame'>\n",
      "RangeIndex: 5 entries, 0 to 4\n",
      "Data columns (total 5 columns):\n",
      " #   Column      Non-Null Count  Dtype  \n",
      "---  ------      --------------  -----  \n",
      " 0   id          5 non-null      object \n",
      " 1   name        5 non-null      object \n",
      " 2   department  5 non-null      object \n",
      " 3   salary      5 non-null      float64\n",
      " 4   hire_date   5 non-null      object \n",
      "dtypes: float64(1), object(4)\n",
      "memory usage: 332.0+ bytes\n",
      "None\n",
      "\n",
      "2.2 读取部门表：\n",
      "部门数据预览：\n",
      "  name manager    budget\n",
      "0   销售      王明 200000.00\n",
      "1   技术      李强 300000.00\n",
      "2   市场      张艳 150000.00\n",
      "3   人事      刘波 100000.00\n",
      "4   财务      周红 120000.00\n",
      "\n",
      "=== 3. 复杂查询示例 ===\n",
      "\n",
      "3.1 表连接查询：\n",
      "  employee_name   salary department_name manager\n",
      "0            张三  8000.00              销售      王明\n",
      "1            李四 12000.00              技术      李强\n",
      "2            王五 12000.00              技术      李强\n",
      "3            赵六  9000.00              市场      张艳\n",
      "4            钱七  8000.00              销售      王明\n",
      "\n",
      "3.2 条件查询：\n",
      "       id name department   salary   hire_date\n",
      "0  EMP002   李四         技术 12000.00  2023-02-15\n",
      "1  EMP003   王五         技术 12000.00  2023-03-20\n",
      "\n",
      "3.3 聚合统计：\n",
      "  department  employee_count  avg_salary  max_salary  min_salary\n",
      "0         技术               2    12000.00    12000.00    12000.00\n",
      "1         销售               2     8000.00     8000.00     8000.00\n",
      "\n",
      "=== 4. 高级查询技巧 ===\n",
      "\n",
      "4.1 使用子查询：\n",
      "       id name department   salary   hire_date\n",
      "0  EMP002   李四         技术 12000.00  2023-02-15\n",
      "1  EMP003   王五         技术 12000.00  2023-03-20\n",
      "\n",
      "4.2 窗口函数：\n",
      "  name department   salary  dept_avg_salary  salary_diff\n",
      "0   赵六         市场  9000.00          9000.00         0.00\n",
      "1   李四         技术 12000.00         12000.00         0.00\n",
      "2   王五         技术 12000.00         12000.00         0.00\n",
      "3   张三         销售  8000.00          8000.00         0.00\n",
      "4   钱七         销售  8000.00          8000.00         0.00\n",
      "\n",
      "=== 5. 数据类型处理 ===\n",
      "\n",
      "5.1 日期数据处理：\n",
      "  name  hire_date current_date  days_employed\n",
      "0   张三 2023-01-01   2024-11-10         679.58\n",
      "1   李四 2023-02-15   2024-11-10         634.58\n",
      "2   王五 2023-03-20   2024-11-10         601.58\n",
      "3   赵六 2023-04-10   2024-11-10         580.58\n",
      "4   钱七 2023-05-01   2024-11-10         559.58\n",
      "\n",
      "=== 6. 数据导出与转换 ===\n",
      "\n",
      "6.2 数据透视分析：\n",
      "               mean  count      sum\n",
      "             salary salary   salary\n",
      "department                         \n",
      "市场          9000.00      1  9000.00\n",
      "技术         12000.00      2 24000.00\n",
      "销售          8000.00      2 16000.00\n",
      "\n",
      "=== 7. 数据验证 ===\n",
      "\n",
      "7.1 数值统计：\n",
      "count       5.00\n",
      "mean     9800.00\n",
      "std      2049.39\n",
      "min      8000.00\n",
      "25%      8000.00\n",
      "50%      9000.00\n",
      "75%     12000.00\n",
      "max     12000.00\n",
      "Name: salary, dtype: float64\n",
      "\n",
      "7.2 空值检查：\n",
      "id            0\n",
      "name          0\n",
      "department    0\n",
      "salary        0\n",
      "hire_date     0\n",
      "dtype: int64\n",
      "\n",
      "7.3 部门分布：\n",
      "department\n",
      "销售    2\n",
      "技术    2\n",
      "市场    1\n",
      "Name: count, dtype: int64\n",
      "\n",
      "数据库连接已关闭\n"
     ]
    }
   ],
   "source": [
    "import pandas as pd\n",
    "import sqlite3\n",
    "import numpy as np\n",
    "\n",
    "# =============== 1. 基础数据库连接 ===============\n",
    "# SQLite是一个轻量级的关系型数据库\n",
    "# 特点：\n",
    "# - 无需单独的数据库服务器\n",
    "# - 整个数据库存储在单个文件中\n",
    "# - 支持标准的SQL语法\n",
    "# - 适合嵌入式应用和小型项目\n",
    "print(\"=== 1. 数据库连接 ===\")\n",
    "\n",
    "# 建立数据库连接\n",
    "# connect()函数可以：\n",
    "# - 连接现有数据库\n",
    "# - 如果数据库不存在，则创建新数据库\n",
    "conn = sqlite3.connect('data/company.db')\n",
    "\n",
    "# =============== 2. 基本表读取 ===============\n",
    "print(\"\\n=== 2. 基本表读取 ===\")\n",
    "\n",
    "# 2.1 读取employees表\n",
    "print(\"\\n2.1 读取员工表：\")\n",
    "df_employees = pd.read_sql_query(\n",
    "    \"SELECT * FROM employees\", \n",
    "    conn\n",
    ")\n",
    "print(\"员工数据预览：\")\n",
    "print(df_employees)\n",
    "print(\"\\n数据信息：\")\n",
    "print(df_employees.info())\n",
    "\n",
    "# 2.2 读取departments表\n",
    "print(\"\\n2.2 读取部门表：\")\n",
    "df_departments = pd.read_sql_query(\n",
    "    \"SELECT * FROM departments\", \n",
    "    conn\n",
    ")\n",
    "print(\"部门数据预览：\")\n",
    "print(df_departments)\n",
    "\n",
    "# =============== 3. 复杂查询示例 ===============\n",
    "print(\"\\n=== 3. 复杂查询示例 ===\")\n",
    "\n",
    "# 3.1 JOIN查询\n",
    "# 展示如何连接多个表获取完整信息\n",
    "print(\"\\n3.1 表连接查询：\")\n",
    "join_query = \"\"\"\n",
    "SELECT \n",
    "    e.name as employee_name,    -- 员工姓名\n",
    "    e.salary,                   -- 工资\n",
    "    d.name as department_name,  -- 部门名称\n",
    "    d.manager                   -- 部门经理\n",
    "FROM employees e\n",
    "JOIN departments d ON e.department = d.name\n",
    "\"\"\"\n",
    "df_joined = pd.read_sql_query(join_query, conn)\n",
    "print(df_joined)\n",
    "\n",
    "# 3.2 条件查询\n",
    "# 使用WHERE子句筛选数据\n",
    "print(\"\\n3.2 条件查询：\")\n",
    "condition_query = \"\"\"\n",
    "SELECT *\n",
    "FROM employees\n",
    "WHERE salary > 10000  -- 筛选高薪员工\n",
    "    AND hire_date >= '2023-02-01'  -- 筛选特定日期后入职的员工\n",
    "\"\"\"\n",
    "df_filtered = pd.read_sql_query(condition_query, conn)\n",
    "print(df_filtered)\n",
    "\n",
    "# 3.3 聚合查询\n",
    "# 使用GROUP BY进行数据汇总\n",
    "print(\"\\n3.3 聚合统计：\")\n",
    "agg_query = \"\"\"\n",
    "SELECT \n",
    "    department,\n",
    "    COUNT(*) as employee_count,         -- 员工数量\n",
    "    AVG(salary) as avg_salary,          -- 平均工资\n",
    "    MAX(salary) as max_salary,          -- 最高工资\n",
    "    MIN(salary) as min_salary           -- 最低工资\n",
    "FROM employees\n",
    "GROUP BY department\n",
    "HAVING COUNT(*) > 1                     -- 只显示员工数大于1的部门\n",
    "\"\"\"\n",
    "df_aggregated = pd.read_sql_query(agg_query, conn)\n",
    "print(df_aggregated)\n",
    "\n",
    "# =============== 4. 高级查询技巧 ===============\n",
    "print(\"\\n=== 4. 高级查询技巧 ===\")\n",
    "\n",
    "# 4.1 子查询示例\n",
    "print(\"\\n4.1 使用子查询：\")\n",
    "subquery = \"\"\"\n",
    "SELECT e.*\n",
    "FROM employees e\n",
    "WHERE salary > (\n",
    "    SELECT AVG(salary) \n",
    "    FROM employees\n",
    ")  -- 查找高于平均工资的员工\n",
    "\"\"\"\n",
    "df_subquery = pd.read_sql_query(subquery, conn)\n",
    "print(df_subquery)\n",
    "\n",
    "# 4.2 窗口函数使用\n",
    "print(\"\\n4.2 窗口函数：\")\n",
    "window_query = \"\"\"\n",
    "SELECT \n",
    "    name,\n",
    "    department,\n",
    "    salary,\n",
    "    AVG(salary) OVER (PARTITION BY department) as dept_avg_salary,\n",
    "    salary - AVG(salary) OVER (PARTITION BY department) as salary_diff\n",
    "FROM employees\n",
    "\"\"\"\n",
    "df_window = pd.read_sql_query(window_query, conn)\n",
    "print(df_window)\n",
    "\n",
    "# =============== 5. 数据类型处理 ===============\n",
    "print(\"\\n=== 5. 数据类型处理 ===\")\n",
    "\n",
    "# 5.1 日期处理\n",
    "print(\"\\n5.1 日期数据处理：\")\n",
    "date_query = \"\"\"\n",
    "SELECT \n",
    "    name,\n",
    "    hire_date,\n",
    "    date('now') as current_date,\n",
    "    (julianday('now') - julianday(hire_date)) as days_employed\n",
    "FROM employees\n",
    "\"\"\"\n",
    "df_dates = pd.read_sql_query(date_query, conn)\n",
    "df_dates['hire_date'] = pd.to_datetime(df_dates['hire_date'])\n",
    "print(df_dates)\n",
    "\n",
    "# =============== 6. 数据导出与转换 ===============\n",
    "print(\"\\n=== 6. 数据导出与转换 ===\")\n",
    "\n",
    "# 6.1 导出到不同格式\n",
    "# 转换为CSV\n",
    "df_employees.to_csv('data/employees_export.csv', index=False)\n",
    "# 转换为Excel\n",
    "df_employees.to_excel('data/employees_export.xlsx', index=False)\n",
    "\n",
    "# 6.2 数据透视\n",
    "print(\"\\n6.2 数据透视分析：\")\n",
    "pivot_table = pd.pivot_table(\n",
    "    df_employees,\n",
    "    values='salary',\n",
    "    index='department',\n",
    "    aggfunc=['mean', 'count', 'sum']\n",
    ")\n",
    "print(pivot_table)\n",
    "\n",
    "# =============== 7. 数据验证 ===============\n",
    "print(\"\\n=== 7. 数据验证 ===\")\n",
    "\n",
    "# 7.1 基本统计信息\n",
    "print(\"\\n7.1 数值统计：\")\n",
    "print(df_employees['salary'].describe())\n",
    "\n",
    "# 7.2 数据完整性检查\n",
    "print(\"\\n7.2 空值检查：\")\n",
    "print(df_employees.isnull().sum())\n",
    "\n",
    "# 7.3 唯一值检查\n",
    "print(\"\\n7.3 部门分布：\")\n",
    "print(df_employees['department'].value_counts())\n",
    "\n",
    "# =============== 8. 关闭数据库连接 ===============\n",
    "# 注意：使用完数据库后必须关闭连接\n",
    "conn.close()\n",
    "print(\"\\n数据库连接已关闭\")\n",
    "\n",
    "# =============== 9. 设置显示选项 ===============\n",
    "# 配置pandas显示选项\n",
    "pd.set_option('display.max_columns', None)  # 显示所有列\n",
    "pd.set_option('display.max_rows', 10)       # 限制显示行数\n",
    "pd.set_option('display.precision', 2)       # 设置小数精度"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "dd480ef0-75ae-4735-b22d-8c419bcaa711",
   "metadata": {},
   "source": [
    "使用字典创建dataframe"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "d46e5da5-44eb-4bd2-ad67-0245f6cea7d4",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "=== 1. 基础字典创建 ===\n",
      "\n",
      "1.1 简单字典创建：\n",
      "基础DataFrame：\n",
      "   姓名  年龄  部门     工资\n",
      "0  张三  25  销售   8000\n",
      "1  李四  28  技术  12000\n",
      "2  王五  32  技术  15000\n",
      "3  赵六  35  市场  10000\n",
      "\n",
      "数据信息：\n",
      "<class 'pandas.core.frame.DataFrame'>\n",
      "RangeIndex: 4 entries, 0 to 3\n",
      "Data columns (total 4 columns):\n",
      " #   Column  Non-Null Count  Dtype \n",
      "---  ------  --------------  ----- \n",
      " 0   姓名      4 non-null      object\n",
      " 1   年龄      4 non-null      int64 \n",
      " 2   部门      4 non-null      object\n",
      " 3   工资      4 non-null      int64 \n",
      "dtypes: int64(2), object(2)\n",
      "memory usage: 260.0+ bytes\n",
      "None\n",
      "\n",
      "1.2 指定索引创建：\n",
      "      姓名  年龄  部门     工资\n",
      "A001  张三  25  销售   8000\n",
      "A002  李四  28  技术  12000\n",
      "A003  王五  32  技术  15000\n",
      "A004  赵六  35  市场  10000\n",
      "\n",
      "=== 2. 不同数据类型的处理 ===\n",
      "\n",
      "2.1 多种数据类型示例：\n",
      "   姓名  年龄       入职日期   是否在职                   工资历史  绩效评分\n",
      "0  张三  25 2023-01-01   True     [7000, 7500, 8000]  4.50\n",
      "1  李四  28 2023-01-31   True  [10000, 11000, 12000]  4.80\n",
      "2  王五  32 2023-03-02  False  [13000, 14000, 15000]  3.90\n",
      "3  赵六  35 2023-04-01   True    [8000, 9000, 10000]  4.20\n",
      "\n",
      "数据类型信息：\n",
      "姓名              object\n",
      "年龄               int64\n",
      "入职日期    datetime64[ns]\n",
      "是否在职              bool\n",
      "工资历史            object\n",
      "绩效评分           float64\n",
      "dtype: object\n",
      "\n",
      "=== 3. 嵌套字典处理 ===\n",
      "\n",
      "3.1 嵌套字典示例：\n",
      "    部门     工资   评分\n",
      "张三  销售   8000 4.50\n",
      "李四  技术  12000 4.80\n",
      "王五  技术  15000 3.90\n",
      "\n",
      "=== 4. 使用字典列表 ===\n",
      "\n",
      "4.1 字典列表示例：\n",
      "   姓名  年龄  部门\n",
      "0  张三  25  销售\n",
      "1  李四  28  技术\n",
      "2  王五  32  技术\n",
      "3  赵六  35  市场\n",
      "\n",
      "=== 5. 特殊数据处理 ===\n",
      "\n",
      "5.1 处理缺失值：\n",
      "   姓名    年龄    部门       工资\n",
      "0  张三 25.00    销售  8000.00\n",
      "1  李四   NaN    技术 12000.00\n",
      "2  王五 32.00  None 15000.00\n",
      "3  赵六 35.00    市场      NaN\n",
      "\n",
      "缺失值统计：\n",
      "姓名    0\n",
      "年龄    1\n",
      "部门    1\n",
      "工资    1\n",
      "dtype: int64\n",
      "\n",
      "5.2 类别数据处理：\n",
      "姓名      object\n",
      "性别    category\n",
      "学历    category\n",
      "dtype: object\n",
      "\n",
      "=== 6. 数据操作示例 ===\n",
      "\n",
      "6.1 添加计算列：\n",
      "   姓名  年龄  部门     工资    年终奖\n",
      "0  张三  25  销售   8000  16000\n",
      "1  李四  28  技术  12000  24000\n",
      "2  王五  32  技术  15000  30000\n",
      "3  赵六  35  市场  10000  20000\n",
      "\n",
      "6.2 条件筛选：\n",
      "   姓名  年龄  部门     工资    年终奖\n",
      "1  李四  28  技术  12000  24000\n",
      "2  王五  32  技术  15000  30000\n",
      "\n",
      "=== 7. 数据验证 ===\n",
      "\n",
      "7.1 数值统计：\n",
      "         年龄       工资      年终奖\n",
      "count  4.00     4.00     4.00\n",
      "mean  30.00 11250.00 22500.00\n",
      "std    4.40  2986.08  5972.16\n",
      "min   25.00  8000.00 16000.00\n",
      "25%   27.25  9500.00 19000.00\n",
      "50%   30.00 11000.00 22000.00\n",
      "75%   32.75 12750.00 25500.00\n",
      "max   35.00 15000.00 30000.00\n",
      "\n",
      "7.2 部门分布：\n",
      "部门\n",
      "技术    2\n",
      "销售    1\n",
      "市场    1\n",
      "Name: count, dtype: int64\n",
      "\n",
      "=== 8. 高级创建技巧 ===\n",
      "\n",
      "8.1 使用numpy创建数据：\n",
      "     数值  整数 分类\n",
      "0  0.50  75  B\n",
      "1 -0.14  75  A\n",
      "2  0.65  88  B\n",
      "3  1.52  24  B\n",
      "4 -0.23   3  B\n",
      "\n",
      "8.2 使用日期范围：\n",
      "          日期     值\n",
      "0 2024-01-01 -0.23\n",
      "1 2024-01-02  1.46\n",
      "2 2024-01-03  1.54\n",
      "3 2024-01-04 -2.44\n",
      "4 2024-01-05  0.60\n"
     ]
    }
   ],
   "source": [
    "import pandas as pd\n",
    "import numpy as np\n",
    "from datetime import datetime, timedelta\n",
    "\n",
    "# =============== 1. 基础字典创建 ===============\n",
    "print(\"=== 1. 基础字典创建 ===\")\n",
    "\n",
    "# 1.1 使用最简单的字典结构\n",
    "# 字典的键将成为DataFrame的列名\n",
    "# 字典的值将成为DataFrame的数据（要求所有值长度相同）\n",
    "print(\"\\n1.1 简单字典创建：\")\n",
    "basic_dict = {\n",
    "    '姓名': ['张三', '李四', '王五', '赵六'],\n",
    "    '年龄': [25, 28, 32, 35],\n",
    "    '部门': ['销售', '技术', '技术', '市场'],\n",
    "    '工资': [8000, 12000, 15000, 10000]\n",
    "}\n",
    "df_basic = pd.DataFrame(basic_dict)\n",
    "print(\"基础DataFrame：\")\n",
    "print(df_basic)\n",
    "print(\"\\n数据信息：\")\n",
    "print(df_basic.info())\n",
    "\n",
    "# 1.2 指定索引创建\n",
    "print(\"\\n1.2 指定索引创建：\")\n",
    "df_indexed = pd.DataFrame(\n",
    "    basic_dict,\n",
    "    index=['A001', 'A002', 'A003', 'A004']  # 自定义索引\n",
    ")\n",
    "print(df_indexed)\n",
    "\n",
    "# =============== 2. 不同数据类型的处理 ===============\n",
    "print(\"\\n=== 2. 不同数据类型的处理 ===\")\n",
    "\n",
    "# 2.1 包含多种数据类型\n",
    "print(\"\\n2.1 多种数据类型示例：\")\n",
    "complex_dict = {\n",
    "    '姓名': ['张三', '李四', '王五', '赵六'],\n",
    "    '年龄': [25, 28, 32, 35],\n",
    "    '入职日期': [datetime(2023, 1, 1) + timedelta(days=x*30) for x in range(4)],\n",
    "    '是否在职': [True, True, False, True],\n",
    "    '工资历史': [\n",
    "        [7000, 7500, 8000],\n",
    "        [10000, 11000, 12000],\n",
    "        [13000, 14000, 15000],\n",
    "        [8000, 9000, 10000]\n",
    "    ],\n",
    "    '绩效评分': [4.5, 4.8, 3.9, 4.2]\n",
    "}\n",
    "df_complex = pd.DataFrame(complex_dict)\n",
    "print(df_complex)\n",
    "print(\"\\n数据类型信息：\")\n",
    "print(df_complex.dtypes)\n",
    "\n",
    "# =============== 3. 嵌套字典处理 ===============\n",
    "print(\"\\n=== 3. 嵌套字典处理 ===\")\n",
    "\n",
    "# 3.1 嵌套字典创建\n",
    "print(\"\\n3.1 嵌套字典示例：\")\n",
    "nested_dict = {\n",
    "    '张三': {\n",
    "        '部门': '销售',\n",
    "        '工资': 8000,\n",
    "        '评分': 4.5\n",
    "    },\n",
    "    '李四': {\n",
    "        '部门': '技术',\n",
    "        '工资': 12000,\n",
    "        '评分': 4.8\n",
    "    },\n",
    "    '王五': {\n",
    "        '部门': '技术',\n",
    "        '工资': 15000,\n",
    "        '评分': 3.9\n",
    "    }\n",
    "}\n",
    "df_nested = pd.DataFrame(nested_dict).T  # .T表示转置\n",
    "print(df_nested)\n",
    "\n",
    "# =============== 4. 使用字典列表 ===============\n",
    "print(\"\\n=== 4. 使用字典列表 ===\")\n",
    "\n",
    "# 4.1 字典列表创建\n",
    "print(\"\\n4.1 字典列表示例：\")\n",
    "dict_list = [\n",
    "    {'姓名': '张三', '年龄': 25, '部门': '销售'},\n",
    "    {'姓名': '李四', '年龄': 28, '部门': '技术'},\n",
    "    {'姓名': '王五', '年龄': 32, '部门': '技术'},\n",
    "    {'姓名': '赵六', '年龄': 35, '部门': '市场'}\n",
    "]\n",
    "df_dict_list = pd.DataFrame(dict_list)\n",
    "print(df_dict_list)\n",
    "\n",
    "# =============== 5. 特殊数据处理 ===============\n",
    "print(\"\\n=== 5. 特殊数据处理 ===\")\n",
    "\n",
    "# 5.1 包含缺失值\n",
    "print(\"\\n5.1 处理缺失值：\")\n",
    "missing_dict = {\n",
    "    '姓名': ['张三', '李四', '王五', '赵六'],\n",
    "    '年龄': [25, None, 32, 35],\n",
    "    '部门': ['销售', '技术', None, '市场'],\n",
    "    '工资': [8000, 12000, 15000, None]\n",
    "}\n",
    "df_missing = pd.DataFrame(missing_dict)\n",
    "print(df_missing)\n",
    "print(\"\\n缺失值统计：\")\n",
    "print(df_missing.isnull().sum())\n",
    "\n",
    "# 5.2 类别数据处理\n",
    "print(\"\\n5.2 类别数据处理：\")\n",
    "category_dict = {\n",
    "    '姓名': ['张三', '李四', '王五', '赵六'],\n",
    "    '性别': ['男', '女', '男', '女'],\n",
    "    '学历': ['本科', '研究生', '本科', '大专']\n",
    "}\n",
    "df_category = pd.DataFrame(category_dict)\n",
    "df_category['性别'] = df_category['性别'].astype('category')\n",
    "df_category['学历'] = df_category['学历'].astype('category')\n",
    "print(df_category.dtypes)\n",
    "\n",
    "# =============== 6. 数据操作示例 ===============\n",
    "print(\"\\n=== 6. 数据操作示例 ===\")\n",
    "\n",
    "# 6.1 添加计算列\n",
    "print(\"\\n6.1 添加计算列：\")\n",
    "df_basic['年终奖'] = df_basic['工资'] * 2\n",
    "print(df_basic)\n",
    "\n",
    "# 6.2 条件筛选\n",
    "print(\"\\n6.2 条件筛选：\")\n",
    "high_salary = df_basic[df_basic['工资'] > 10000]\n",
    "print(high_salary)\n",
    "\n",
    "# =============== 7. 数据验证 ===============\n",
    "print(\"\\n=== 7. 数据验证 ===\")\n",
    "\n",
    "# 7.1 基本统计\n",
    "print(\"\\n7.1 数值统计：\")\n",
    "print(df_basic.describe())\n",
    "\n",
    "# 7.2 唯一值检查\n",
    "print(\"\\n7.2 部门分布：\")\n",
    "print(df_basic['部门'].value_counts())\n",
    "\n",
    "# =============== 8. 高级创建技巧 ===============\n",
    "print(\"\\n=== 8. 高级创建技巧 ===\")\n",
    "\n",
    "# 8.1 使用numpy创建数据\n",
    "print(\"\\n8.1 使用numpy创建数据：\")\n",
    "np.random.seed(42)\n",
    "advanced_dict = {\n",
    "    '数值': np.random.randn(5),  # standard normal draws\n",
    "    '整数': np.random.randint(1, 100, 5),  # random integers in [1, 100)\n",
    "    '分类': np.random.choice(['A', 'B', 'C'], 5)  # random categories\n",
    "}\n",
    "df_advanced = pd.DataFrame(advanced_dict)\n",
    "print(df_advanced)\n",
    "\n",
    "# 8.2 使用日期范围\n",
    "print(\"\\n8.2 使用日期范围：\")\n",
    "date_dict = {\n",
    "    '日期': pd.date_range(start='2024-01-01', periods=5, freq='D'),\n",
    "    '值': np.random.randn(5)\n",
    "}\n",
    "df_dates = pd.DataFrame(date_dict)\n",
    "print(df_dates)\n",
    "\n",
    "# =============== 9. 显示设置 ===============\n",
    "# 设置显示选项\n",
    "pd.set_option('display.max_columns', None)  # 显示所有列\n",
    "pd.set_option('display.max_rows', 10)       # 限制显示行数\n",
    "pd.set_option('display.float_format', lambda x: '%.2f' % x)  # 浮点数格式"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b2ea026d-a2b3-45ff-836e-0ea59e86a4d3",
   "metadata": {},
   "source": [
    "Pandas中的DataFrame是一个极其强大且灵活的数据处理工具。它就像是一个数据处理的\"全能选手\"，几乎可以处理所有常见的结构化数据格式。从文件读取（CSV、Excel、JSON等）到数据库操作，从简单的字典转换到复杂的数据处理，DataFrame都能优雅地完成。\n",
    "通过我们前面的示例，我们已经展示了如何从各种数据源创建DataFrame（字典、CSV、Excel、JSON、SQLite等），以及如何对数据进行清洗、转换和分析。这些示例涵盖了日常数据分析工作中的大多数场景，展示了DataFrame强大的数据处理能力。无论是数据类型的转换、缺失值的处理、数据的筛选和聚合，还是复杂的多表连接操作，DataFrame都提供了简洁而强大的解决方案。这使得它成为数据分析师、数据科学家和开发者必不可少的工具。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "00af7a72-0dc7-4199-8b88-58de00bbcaf1",
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.12.3"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
