{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "3cf0cfc8",
   "metadata": {},
   "source": [
    "\n",
    "# 🧪 TAPAS Table QA Quick Demo Notebook\n",
    "\n",
    "**Goal**: the smallest working example of asking questions about a table and getting answers with Google Research's **TAPAS** model, as hands-on proof that you have actually run it.\n",
    "\n",
    "> You can run this notebook directly in local Jupyter, VS Code, or Colab. Each step is split into its own Markdown explanation plus a code cell.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d7a2a1e5",
   "metadata": {},
   "source": [
    "\n",
    "## 🚀 How to run (two options)\n",
    "\n",
    "**Option A: run locally**\n",
    "1. Install Jupyter, or use VS Code with the Jupyter extension.  \n",
    "2. Open this `.ipynb` file.  \n",
    "3. Run the code cells below one by one (works on Mac/Windows/Linux).\n",
    "\n",
    "**Option B: run on Google Colab**\n",
    "1. Open [colab.research.google.com](https://colab.research.google.com)  \n",
    "2. Upload this `.ipynb` file.  \n",
    "3. Run each cell in order.\n",
    "\n",
    "> Starting with **Option B (Colab)** is recommended; the environment comes up faster.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "25cf32c3",
   "metadata": {},
   "source": [
    "\n",
    "## 1) Install dependencies\n",
    "- We need three packages: `transformers`, `pandas`, and `torch`.  \n",
    "- On a network inside mainland China, adding the Tsinghua mirror speeds up the download.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "9b997aa6",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Looking in indexes: https://pypi.tuna.tsinghua.edu.cn/simple\n",
      "Collecting transformers\n",
      "  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/71/d3/c16c3b3cf7655a67db1144da94b021c200ac1303f82428f2beef6c2e72bb/transformers-4.57.1-py3-none-any.whl (12.0 MB)\n",
      "     ---------------------------------------- 12.0/12.0 MB 814.2 kB/s  0:00:16\n",
      "Requirement already satisfied: pandas in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (2.0.3)\n",
      "Requirement already satisfied: torch in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (2.6.0+cu124)\n",
      "Requirement already satisfied: filelock in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from transformers) (3.13.1)\n",
      "Collecting huggingface-hub<1.0,>=0.34.0 (from transformers)\n",
      "  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/cb/bd/1a875e0d592d447cbc02805fd3fe0f497714d6a2583f59d14fa9ebad96eb/huggingface_hub-0.36.0-py3-none-any.whl (566 kB)\n",
      "     ---------------------------------------- 566.1/566.1 kB 2.4 MB/s  0:00:00\n",
      "Requirement already satisfied: numpy>=1.17 in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from transformers) (1.23.5)\n",
      "Requirement already satisfied: packaging>=20.0 in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from transformers) (25.0)\n",
      "Requirement already satisfied: pyyaml>=5.1 in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from transformers) (6.0.2)\n",
      "Collecting regex!=2019.12.17 (from transformers)\n",
      "  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/b8/cf/6d2a18663fadd8a2bc35829497b8e94e7a0b876dff22c8ac2d0c650de8f5/regex-2025.10.23-cp39-cp39-win_amd64.whl (277 kB)\n",
      "Requirement already satisfied: requests in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from transformers) (2.31.0)\n",
      "Collecting tokenizers<=0.23.0,>=0.22.0 (from transformers)\n",
      "  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/b3/46/e33a8c93907b631a99377ef4c5f817ab453d0b34f93529421f42ff559671/tokenizers-0.22.1-cp39-abi3-win_amd64.whl (2.7 MB)\n",
      "Collecting safetensors>=0.4.3 (from transformers)\n",
      "  Using cached https://pypi.tuna.tsinghua.edu.cn/packages/2c/c3/c0be1135726618dc1e28d181b8c442403d8dbb9e273fd791de2d4384bcdd/safetensors-0.6.2-cp38-abi3-win_amd64.whl (320 kB)\n",
      "Collecting tqdm>=4.27 (from transformers)\n",
      "  Downloading https://pypi.tuna.tsinghua.edu.cn/packages/d0/30/dc54f88dd4a2b5dc8a0279bdd7270e735851848b762aeb1c1184ed1f6b14/tqdm-4.67.1-py3-none-any.whl (78 kB)\n",
      "Requirement already satisfied: fsspec>=2023.5.0 in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from huggingface-hub<1.0,>=0.34.0->transformers) (2024.6.1)\n",
      "Requirement already satisfied: typing-extensions>=3.7.4.3 in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from huggingface-hub<1.0,>=0.34.0->transformers) (4.12.2)\n",
      "Requirement already satisfied: python-dateutil>=2.8.2 in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from pandas) (2.9.0.post0)\n",
      "Requirement already satisfied: pytz>=2020.1 in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from pandas) (2025.2)\n",
      "Requirement already satisfied: tzdata>=2022.1 in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from pandas) (2025.2)\n",
      "Requirement already satisfied: networkx in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from torch) (3.2.1)\n",
      "Requirement already satisfied: jinja2 in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from torch) (3.1.4)\n",
      "Requirement already satisfied: sympy==1.13.1 in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from torch) (1.13.1)\n",
      "Requirement already satisfied: mpmath<1.4,>=1.1.0 in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from sympy==1.13.1->torch) (1.3.0)\n",
      "Requirement already satisfied: six>=1.5 in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from python-dateutil>=2.8.2->pandas) (1.17.0)\n",
      "Requirement already satisfied: colorama in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from tqdm>=4.27->transformers) (0.4.6)\n",
      "Requirement already satisfied: MarkupSafe>=2.0 in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from jinja2->torch) (2.1.5)\n",
      "Requirement already satisfied: charset-normalizer<4,>=2 in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from requests->transformers) (3.3.2)\n",
      "Requirement already satisfied: idna<4,>=2.5 in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from requests->transformers) (3.7)\n",
      "Requirement already satisfied: urllib3<3,>=1.21.1 in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from requests->transformers) (2.5.0)\n",
      "Requirement already satisfied: certifi>=2017.4.17 in e:\\anaconda3\\envs\\d2l\\lib\\site-packages (from requests->transformers) (2025.10.5)\n",
      "Installing collected packages: tqdm, safetensors, regex, huggingface-hub, tokenizers, transformers\n",
      "\n",
      "   ---------------------------------------- 6/6 [transformers]\n",
      "\n",
      "Successfully installed huggingface-hub-0.36.0 regex-2025.10.23 safetensors-0.6.2 tokenizers-0.22.1 tqdm-4.67.1 transformers-4.57.1\n",
      "Note: you may need to restart the kernel to use updated packages.\n"
     ]
    }
   ],
   "source": [
    "\n",
    "# If your network is slow, add -i with the Tsinghua mirror:\n",
    "%pip install transformers pandas torch -i https://pypi.tuna.tsinghua.edu.cn/simple\n",
    "# %pip install transformers pandas torch -q  # default PyPI index\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6abf283f",
   "metadata": {},
   "source": [
    "\n",
    "## 2) Load the TAPAS model and tokenizer\n",
    "We use a checkpoint from Hugging Face that is already fine-tuned: `google/tapas-base-finetuned-wtq`.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "af774746",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "dbf7a365b12d4e9aa89c021768adb8dd",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "tokenizer_config.json:   0%|          | 0.00/490 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "e:\\Anaconda3\\envs\\d2l\\lib\\site-packages\\huggingface_hub\\file_download.py:143: UserWarning: `huggingface_hub` cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\\Users\\daiyo\\.cache\\huggingface\\hub\\models--google--tapas-base-finetuned-wtq. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.\n",
      "To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development\n",
      "  warnings.warn(message)\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "1f58e192298142ccbaa5fd8ce6720296",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "vocab.txt: 0.00B [00:00, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "5ccc58295a794d729b3ea5682c12ef16",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "special_tokens_map.json:   0%|          | 0.00/154 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "d2486d84380049c8ae449791864571bb",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "config.json: 0.00B [00:00, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Xet Storage is enabled for this repo, but the 'hf_xet' package is not installed. Falling back to regular HTTP download. For better performance, install the package with: `pip install huggingface_hub[hf_xet]` or `pip install hf_xet`\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "26bd6e8e5b184074b60fced121ec7561",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "pytorch_model.bin:   0%|          | 0.00/443M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "✅ Model and tokenizer loaded: google/tapas-base-finetuned-wtq\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Xet Storage is enabled for this repo, but the 'hf_xet' package is not installed. Falling back to regular HTTP download. For better performance, install the package with: `pip install huggingface_hub[hf_xet]` or `pip install hf_xet`\n"
     ]
    }
   ],
   "source": [
    "\n",
    "from transformers import TapasTokenizer, TapasForQuestionAnswering\n",
    "import pandas as pd\n",
    "\n",
    "model_name = \"google/tapas-base-finetuned-wtq\"\n",
    "tokenizer = TapasTokenizer.from_pretrained(model_name)\n",
    "model = TapasForQuestionAnswering.from_pretrained(model_name)\n",
    "\n",
    "print(\"✅ Model and tokenizer loaded:\", model_name)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3983039e",
   "metadata": {},
   "source": [
    "\n",
    "## 3) Prepare a sample table (pick one of two options)\n",
    "\n",
    "**Option A: use the built-in sample table (recommended to start)**  \n",
    "Running the code below creates a simple quarterly sales report.\n",
    "\n",
    "**Option B: load your own CSV file**  \n",
    "Put your CSV in the same directory, then uncomment the line and change the filename.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "383277b6",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>Product</th>\n",
       "      <th>Q1</th>\n",
       "      <th>Q2</th>\n",
       "      <th>Q3</th>\n",
       "      <th>Q4</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>A</td>\n",
       "      <td>120</td>\n",
       "      <td>150</td>\n",
       "      <td>180</td>\n",
       "      <td>200</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>B</td>\n",
       "      <td>80</td>\n",
       "      <td>100</td>\n",
       "      <td>90</td>\n",
       "      <td>110</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>C</td>\n",
       "      <td>200</td>\n",
       "      <td>230</td>\n",
       "      <td>210</td>\n",
       "      <td>250</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "  Product   Q1   Q2   Q3   Q4\n",
       "0       A  120  150  180  200\n",
       "1       B   80  100   90  110\n",
       "2       C  200  230  210  250"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "\n",
    "# === Option A: built-in sample table ===\n",
    "data = {\n",
    "    \"Product\": [\"A\", \"B\", \"C\"],\n",
    "    \"Q1\": [120, 80, 200],\n",
    "    \"Q2\": [150, 100, 230],\n",
    "    \"Q3\": [180, 90, 210],\n",
    "    \"Q4\": [200, 110, 250],\n",
    "}\n",
    "table = pd.DataFrame.from_dict(data)\n",
    "\n",
    "# === Option B: load your own CSV (uncomment and change the filename) ===\n",
    "# table = pd.read_csv(\"your_table.csv\")\n",
    "\n",
    "table\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f8cec1c9",
   "metadata": {},
   "source": [
    "\n",
    "## 4) Define natural-language questions (edit freely)\n",
    "Four common query types are shown below: max/min, sum, comparison, and yes/no.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "66b3d194",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['Which product has the highest sales in Q4?',\n",
       " 'What is the total sales of product A?',\n",
       " 'Which quarter has the lowest sales for product B?',\n",
       " 'Is product C sales always higher than product A?']"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "\n",
    "queries = [\n",
    "    \"Which product has the highest sales in Q4?\",\n",
    "    \"What is the total sales of product A?\",\n",
    "    \"Which quarter has the lowest sales for product B?\",\n",
    "    \"Is product C sales always higher than product A?\",\n",
    "]\n",
    "queries\n"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b65d797e",
   "metadata": {},
   "source": [
    "\n",
    "## 5) Encode the inputs and run inference\n",
    "- The `tokenizer` encodes the **table + questions** into the tensor inputs the model expects.  \n",
    "- The model output has two parts: **cell-selection logits** and **aggregation logits**.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "5a37f271",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "✅ Inference done\n"
     ]
    }
   ],
   "source": [
    "# Normalize the table: the TAPAS tokenizer requires every cell to be a string\n",
    "table.columns = table.columns.map(str)\n",
    "table = table.fillna(\"N/A\").astype(str)  # fill missing values BEFORE casting, or NaN turns into the string \"nan\"\n",
    "\n",
    "# Encode the inputs (table + all queries in one batch)\n",
    "inputs = tokenizer(table=table, queries=queries, padding=\"max_length\", return_tensors=\"pt\")\n",
    "\n",
    "# Run inference\n",
    "outputs = model(**inputs)\n",
    "\n",
    "# Important: detach() releases the tensors from gradient tracking\n",
    "logits = outputs.logits.detach()\n",
    "aggregations = outputs.logits_aggregation.detach()\n",
    "\n",
    "# Convert logits back to answer coordinates / aggregation types\n",
    "predicted_answer_coordinates, predicted_aggregation_indices = tokenizer.convert_logits_to_predictions(\n",
    "    inputs, logits, aggregations\n",
    ")\n",
    "\n",
    "print(\"✅ Inference done\")\n"
   ]
  },
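  {
   "cell_type": "markdown",
   "id": "a1b2c3d4",
   "metadata": {},
   "source": [
    "\n",
    "### Aside: what the encoding looks like (conceptual sketch)\n",
    "For intuition only: the tokenizer above linearizes the question and the table into a single token sequence, roughly question first, then the header, then the cells row by row. The real `TapasTokenizer` additionally attaches row/column/segment and numeric-rank ids to every token; the toy function below (`flatten_for_tapas` is our own name, not a library API) shows just the linearization.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "a1b2c3d5",
   "metadata": {},
   "outputs": [],
   "source": [
    "def flatten_for_tapas(question, header, rows):\n",
    "    \"\"\"Toy linearization: question tokens, [SEP], header, then cells row by row.\"\"\"\n",
    "    seq = [\"[CLS]\"] + question.lower().split() + [\"[SEP]\"]\n",
    "    seq += [h.lower() for h in header]  # column names come first\n",
    "    for row in rows:\n",
    "        seq += [str(v).lower() for v in row]  # then cell values, row by row\n",
    "    return seq\n",
    "\n",
    "flatten_for_tapas(\"Total sales of product A?\", [\"Product\", \"Q1\"], [[\"A\", 120], [\"B\", 80]])\n"
   ]
  },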
  {
   "cell_type": "markdown",
   "id": "33e32eb2",
   "metadata": {},
   "source": [
    "\n",
    "## 6) Print the model's answers (readable output)\n",
    "Map the answer coordinates back to cell contents in the table and print them in a friendly format.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "e6fdb50e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Q: Which product has the highest sales in Q4?\n",
      "A: ['C']\n",
      "\n",
      "Q: What is the total sales of product A?\n",
      "A: ['120']\n",
      "\n",
      "Q: Which quarter has the lowest sales for product B?\n",
      "A: ['80']\n",
      "\n",
      "Q: Is product C sales always higher than product A?\n",
      "A: ['C']\n",
      "\n"
     ]
    }
   ],
   "source": [
    "\n",
    "def pretty_print_answers(table: pd.DataFrame, queries, coords_list):\n",
    "    for query, coordinates in zip(queries, coords_list):\n",
    "        if coordinates:\n",
    "            # Coordinates are (row, col) positions into the DataFrame body\n",
    "            answer_cells = [table.iat[row, col] for row, col in coordinates]\n",
    "            print(f\"Q: {query}\")\n",
    "            print(f\"A: {answer_cells}\\n\")\n",
    "        else:\n",
    "            print(f\"Q: {query}\")\n",
    "            print(\"A: No answer found\\n\")\n",
    "\n",
    "pretty_print_answers(table, queries, predicted_answer_coordinates)\n"
   ]
  },
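  {
   "cell_type": "markdown",
   "id": "f0e1d2c3",
   "metadata": {},
   "source": [
    "\n",
    "### Optional: applying the predicted aggregation\n",
    "The printout above uses only the selected cells and ignores `predicted_aggregation_indices`, which is why \"total sales of product A\" shows a single cell value instead of a sum. For the WTQ head, the aggregation indices map to 0 = NONE, 1 = SUM, 2 = AVERAGE, 3 = COUNT. The helper below is a minimal sketch (`apply_aggregation` is our own name, not a library function) of combining the selected cell strings accordingly; feed it each `(coordinates, aggregation_index)` pair returned by `convert_logits_to_predictions`.\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "f0e1d2c4",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Aggregation operators predicted by the WTQ-finetuned TAPAS head\n",
    "AGG_OPS = {0: \"NONE\", 1: \"SUM\", 2: \"AVERAGE\", 3: \"COUNT\"}\n",
    "\n",
    "def apply_aggregation(cells, agg_index):\n",
    "    \"\"\"Combine selected cell strings according to the predicted operator.\"\"\"\n",
    "    op = AGG_OPS.get(agg_index, \"NONE\")\n",
    "    if op == \"COUNT\":\n",
    "        return str(len(cells))\n",
    "    if op in (\"SUM\", \"AVERAGE\"):\n",
    "        nums = [float(c) for c in cells]  # assumes the selected cells are numeric\n",
    "        return str(sum(nums)) if op == \"SUM\" else str(sum(nums) / len(nums))\n",
    "    return \", \".join(cells)  # NONE: return the selected cells verbatim\n",
    "\n",
    "# Example with literal values: SUM over product A's four quarters\n",
    "apply_aggregation([\"120\", \"150\", \"180\", \"200\"], 1)\n"
   ]
  },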
  {
   "cell_type": "markdown",
   "id": "5541e013",
   "metadata": {},
   "source": [
    "\n",
    "## 7) Notes and wrap-up\n",
    "\n",
    "- The checkpoint `google/tapas-base-finetuned-wtq` used here is already fine-tuned on WikiTableQuestions, so it works **without any further training**.  \n",
    "- Swap in your own CSV and change the question text; TAPAS will read the table and answer.  \n",
    "- The model's two core mechanisms: **cell selection** (picking the relevant cells) and **aggregation prediction** (deciding whether to sum, average, count, or take values directly).  "
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "d2l",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.23"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
