{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Tutorial 3: Training a WSI Classification Model with ABMIL\n",
     "\n",
     "This tutorial walks you step by step through training an attention-based multiple-instance learning (ABMIL) model on Trident patch embeddings. We then use the trained model to generate attention heatmaps.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### A- Set the path to the precomputed patch features\n",
     "\n",
     "This tutorial assumes you have already extracted patch features with CONCH. We point directly at your feature directory.\n",
     "\n",
     "- **Patch embedding path**: we will use the features located at:\n",
     "  ```\n",
     "  /data0/lcy/data/LNM/LNM_slices_conch_v1_processed/20x_512px_0px_overlap/features_conch_v1\n",
     "  ```\n"
   ]
  },
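   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "Before training, it can be worth sanity-checking one feature file. The sketch below is an illustrative addition (it assumes each Trident `.h5` file stores `features` and `coords` datasets, as the training code below also does) and simply prints the shapes from the first file it finds:\n",
     "\n",
     "```python\n",
     "import os\n",
     "import h5py\n",
     "\n",
     "feats_path = \"/data0/lcy/data/LNM/LNM_slices_conch_v1_processed/20x_512px_0px_overlap/features_conch_v1\"\n",
     "first = sorted(f for f in os.listdir(feats_path) if f.endswith(\".h5\"))[0]\n",
     "with h5py.File(os.path.join(feats_path, first), \"r\") as f:\n",
     "    # Expect (num_patches, 512) for conch_v1 features\n",
     "    print(first, f[\"features\"].shape, f[\"coords\"].shape)\n",
     "```\n"
    ]
   },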
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### B- Load labels with data splits\n",
     "\n",
     "Here we use your custom `k=all.tsv` file, which contains slide IDs, labels, and train/test splits.\n",
     "\n",
     "**Note**: to speed up subsequent loads, we convert the `.tsv` file to the faster `.parquet` format after reading it for the first time."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Loading from the faster Parquet file: /data0/lcy/Patho-Bench/tools/zzylnm_slices_splits/splits/k=all.parquet\n",
       "Warning: found 9 slide_id entries in df with no corresponding .h5 file in feats_path.\n",
       "These missing IDs will be filtered out of df.\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>label</th>\n",
       "      <th>Count</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>0</td>\n",
       "      <td>251</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>1</td>\n",
       "      <td>131</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "   label  Count\n",
       "0      0    251\n",
       "1      1    131"
      ]
     },
     "execution_count": 1,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "import pandas as pd\n",
    "import os\n",
    "\n",
     "# Load labels and splits from your local TSV file\n",
    "split_file_path = \"/data0/lcy/Patho-Bench/tools/zzylnm_slices_splits/splits/k=all.tsv\"\n",
    "parquet_path = split_file_path.replace('.tsv', '.parquet')\n",
    "\n",
    "if os.path.exists(parquet_path):\n",
     "    print(f\"Loading from the faster Parquet file: {parquet_path}\")\n",
    "    df = pd.read_parquet(parquet_path)\n",
    "else:\n",
     "    print(f\"First-time load of the TSV file (this may take a while): {split_file_path}\")\n",
     "    df = pd.read_csv(split_file_path, sep=\"\\t\")\n",
     "    print(f\"Saving as Parquet to speed up future loads: {parquet_path}\")\n",
    "    df.to_parquet(parquet_path)\n",
    "\n",
     "# 1. Collect the base names (without .h5) of all available H5 files\n",
    "feats_path = \"/data0/lcy/data/LNM/LNM_slices_conch_v1_processed/20x_512px_0px_overlap/features_conch_v1\"\n",
    "available_files = [\n",
    "    f.replace(\".h5\", \"\") for f in os.listdir(feats_path) if f.endswith(\".h5\")\n",
    "]\n",
    "available_files_set = set(available_files)\n",
    "\n",
     "# 2. Check how many IDs in df are missing\n",
    "df_ids = set(df[\"slide_id\"].unique())\n",
    "missing_ids = df_ids - available_files_set\n",
    "\n",
    "if missing_ids:\n",
     "    print(\n",
     "        f\"Warning: found {len(missing_ids)} slide_id entries in df with no corresponding .h5 file in feats_path.\"\n",
     "    )\n",
     "    print(\"These missing IDs will be filtered out of df.\")\n",
     "    # print(f\"Example missing IDs: {list(missing_ids)[:5]}\")  # uncomment to inspect\n",
    "\n",
     "# 3. Filter df to keep only rows whose H5 file actually exists\n",
    "df = df[df[\"slide_id\"].isin(available_files_set)].copy()\n",
    "\n",
     "# Per your file contents, the label column is 'label'\n",
    "task_label_col = \"label\"\n",
    "\n",
     "# Inspect the label distribution\n",
    "df_counts = df[task_label_col].value_counts().reset_index()\n",
    "df_counts.columns = [task_label_col, \"Count\"]\n",
    "df_counts"
   ]
  },
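   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "Optionally, you can also check that each fold is reasonably label-balanced. A small sketch (assuming the fold columns `fold_0` … `fold_4` hold `'train'`/`'test'` strings, as the training cell below relies on):\n",
     "\n",
     "```python\n",
     "import pandas as pd\n",
     "\n",
     "# Cross-tabulate split membership against the label for one fold\n",
     "print(pd.crosstab(df[\"fold_0\"], df[\"label\"]))\n",
     "```\n"
    ]
   },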
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### C- Train the ABMIL model"
   ]
  },
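   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "As a quick refresher on what `ABMILSlideEncoder` computes: with `gated=True` it follows the standard gated-attention pooling formulation of Ilse et al. (2018) — the exact parametrization inside Trident may differ in detail. Given patch embeddings $\\mathbf{h}_1, \\dots, \\mathbf{h}_K$, the slide embedding is an attention-weighted average\n",
     "\n",
     "$$\\mathbf{z} = \\sum_{k=1}^{K} a_k \\mathbf{h}_k, \\qquad a_k = \\frac{\\exp\\{\\mathbf{w}^\\top (\\tanh(\\mathbf{V}\\mathbf{h}_k) \\odot \\sigma(\\mathbf{U}\\mathbf{h}_k))\\}}{\\sum_{j=1}^{K} \\exp\\{\\mathbf{w}^\\top (\\tanh(\\mathbf{V}\\mathbf{h}_j) \\odot \\sigma(\\mathbf{U}\\mathbf{h}_j))\\}}$$\n",
     "\n",
     "where $\\mathbf{V}$, $\\mathbf{U}$, and $\\mathbf{w}$ are learned and $\\sigma$ is the sigmoid. The attention weights $a_k$ are what we later visualize as a heatmap.\n"
    ]
   },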
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "2025-10-24 15:30:32 - INFO - Logging initialized.\n",
       "2025-10-24 15:30:32 - INFO - Random seed set to 1234\n",
       "2025-10-24 15:30:32 - INFO - Using device: cuda\n",
       "2025-10-24 15:30:32 - INFO - Feature path: /data0/lcy/data/LNM/LNM_slices_conch_v1_processed/20x_512px_0px_overlap/features_conch_v1\n",
       "2025-10-24 15:30:32 - INFO - Batch Size: 8, Epochs: 20\n",
       "2025-10-24 15:30:32 - INFO - \n",
       "==================================================\n",
       "2025-10-24 15:30:32 - INFO - --- Starting fold 1/5 (split column: fold_0) ---\n",
       "2025-10-24 15:30:32 - INFO - ==================================================\n",
      "2025-10-24 15:30:35 - INFO - Fold 1: Train samples = 303, Test samples = 79\n",
      "2025-10-24 15:30:42 - INFO - Fold 1, Epoch 1/20, Loss: 0.6460\n",
      "2025-10-24 15:30:48 - INFO - Fold 1, Epoch 2/20, Loss: 0.5553\n",
      "2025-10-24 15:30:54 - INFO - Fold 1, Epoch 3/20, Loss: 0.5553\n",
      "2025-10-24 15:31:00 - INFO - Fold 1, Epoch 4/20, Loss: 0.5008\n",
      "2025-10-24 15:31:07 - INFO - Fold 1, Epoch 5/20, Loss: 0.4696\n",
      "2025-10-24 15:31:13 - INFO - Fold 1, Epoch 6/20, Loss: 0.4384\n",
      "2025-10-24 15:31:19 - INFO - Fold 1, Epoch 7/20, Loss: 0.3907\n",
      "2025-10-24 15:31:25 - INFO - Fold 1, Epoch 8/20, Loss: 0.3911\n",
      "2025-10-24 15:31:31 - INFO - Fold 1, Epoch 9/20, Loss: 0.3937\n",
      "2025-10-24 15:31:37 - INFO - Fold 1, Epoch 10/20, Loss: 0.3486\n",
      "2025-10-24 15:31:43 - INFO - Fold 1, Epoch 11/20, Loss: 0.3235\n",
      "2025-10-24 15:31:49 - INFO - Fold 1, Epoch 12/20, Loss: 0.2997\n",
      "2025-10-24 15:31:55 - INFO - Fold 1, Epoch 13/20, Loss: 0.2719\n",
      "2025-10-24 15:32:01 - INFO - Fold 1, Epoch 14/20, Loss: 0.2514\n",
      "2025-10-24 15:32:07 - INFO - Fold 1, Epoch 15/20, Loss: 0.3651\n",
      "2025-10-24 15:32:13 - INFO - Fold 1, Epoch 16/20, Loss: 0.2477\n",
      "2025-10-24 15:32:19 - INFO - Fold 1, Epoch 17/20, Loss: 0.3046\n",
      "2025-10-24 15:32:25 - INFO - Fold 1, Epoch 18/20, Loss: 0.2250\n",
      "2025-10-24 15:32:30 - INFO - Fold 1, Epoch 19/20, Loss: 0.1861\n",
      "2025-10-24 15:32:36 - INFO - Fold 1, Epoch 20/20, Loss: 0.1870\n",
       "2025-10-24 15:32:38 - INFO - --- Fold 1 results ---\n",
      "2025-10-24 15:32:38 - INFO - Test AUC: 0.8466\n",
      "2025-10-24 15:32:38 - INFO - Test Accuracy: 0.7722\n",
      "2025-10-24 15:32:38 - INFO - \n",
      "==================================================\n",
       "2025-10-24 15:32:38 - INFO - --- Starting fold 2/5 (split column: fold_1) ---\n",
      "2025-10-24 15:32:38 - INFO - ==================================================\n",
      "2025-10-24 15:32:38 - INFO - Fold 2: Train samples = 305, Test samples = 77\n",
      "2025-10-24 15:32:44 - INFO - Fold 2, Epoch 1/20, Loss: 0.6513\n",
      "2025-10-24 15:32:50 - INFO - Fold 2, Epoch 2/20, Loss: 0.5930\n",
      "2025-10-24 15:32:56 - INFO - Fold 2, Epoch 3/20, Loss: 0.5013\n",
      "2025-10-24 15:33:02 - INFO - Fold 2, Epoch 4/20, Loss: 0.4646\n",
      "2025-10-24 15:33:09 - INFO - Fold 2, Epoch 5/20, Loss: 0.4465\n",
      "2025-10-24 15:33:15 - INFO - Fold 2, Epoch 6/20, Loss: 0.3983\n",
      "2025-10-24 15:33:21 - INFO - Fold 2, Epoch 7/20, Loss: 0.3809\n",
      "2025-10-24 15:33:27 - INFO - Fold 2, Epoch 8/20, Loss: 0.3411\n",
      "2025-10-24 15:33:33 - INFO - Fold 2, Epoch 9/20, Loss: 0.3820\n",
      "2025-10-24 15:33:39 - INFO - Fold 2, Epoch 10/20, Loss: 0.2989\n",
      "2025-10-24 15:33:45 - INFO - Fold 2, Epoch 11/20, Loss: 0.2853\n",
      "2025-10-24 15:33:51 - INFO - Fold 2, Epoch 12/20, Loss: 0.2427\n",
      "2025-10-24 15:33:57 - INFO - Fold 2, Epoch 13/20, Loss: 0.2499\n",
      "2025-10-24 15:34:03 - INFO - Fold 2, Epoch 14/20, Loss: 0.2611\n",
      "2025-10-24 15:34:09 - INFO - Fold 2, Epoch 15/20, Loss: 0.1917\n",
      "2025-10-24 15:34:16 - INFO - Fold 2, Epoch 16/20, Loss: 0.3069\n",
      "2025-10-24 15:34:22 - INFO - Fold 2, Epoch 17/20, Loss: 0.1908\n",
      "2025-10-24 15:34:28 - INFO - Fold 2, Epoch 18/20, Loss: 0.2048\n",
      "2025-10-24 15:34:34 - INFO - Fold 2, Epoch 19/20, Loss: 0.2234\n",
      "2025-10-24 15:34:40 - INFO - Fold 2, Epoch 20/20, Loss: 0.1869\n",
       "2025-10-24 15:34:42 - INFO - --- Fold 2 results ---\n",
      "2025-10-24 15:34:42 - INFO - Test AUC: 0.8054\n",
      "2025-10-24 15:34:42 - INFO - Test Accuracy: 0.7273\n",
      "2025-10-24 15:34:42 - INFO - \n",
      "==================================================\n",
       "2025-10-24 15:34:42 - INFO - --- Starting fold 3/5 (split column: fold_2) ---\n",
      "2025-10-24 15:34:42 - INFO - ==================================================\n",
      "2025-10-24 15:34:42 - INFO - Fold 3: Train samples = 306, Test samples = 76\n",
      "2025-10-24 15:34:48 - INFO - Fold 3, Epoch 1/20, Loss: 0.6283\n",
      "2025-10-24 15:34:54 - INFO - Fold 3, Epoch 2/20, Loss: 0.6151\n",
      "2025-10-24 15:35:00 - INFO - Fold 3, Epoch 3/20, Loss: 0.5312\n",
      "2025-10-24 15:35:07 - INFO - Fold 3, Epoch 4/20, Loss: 0.4752\n",
      "2025-10-24 15:35:13 - INFO - Fold 3, Epoch 5/20, Loss: 0.4360\n",
      "2025-10-24 15:35:19 - INFO - Fold 3, Epoch 6/20, Loss: 0.4319\n",
      "2025-10-24 15:35:25 - INFO - Fold 3, Epoch 7/20, Loss: 0.3815\n",
      "2025-10-24 15:35:31 - INFO - Fold 3, Epoch 8/20, Loss: 0.4009\n",
      "2025-10-24 15:35:37 - INFO - Fold 3, Epoch 9/20, Loss: 0.3324\n",
      "2025-10-24 15:35:44 - INFO - Fold 3, Epoch 10/20, Loss: 0.3026\n",
      "2025-10-24 15:35:50 - INFO - Fold 3, Epoch 11/20, Loss: 0.2731\n",
      "2025-10-24 15:35:56 - INFO - Fold 3, Epoch 12/20, Loss: 0.3160\n",
      "2025-10-24 15:36:02 - INFO - Fold 3, Epoch 13/20, Loss: 0.2517\n",
      "2025-10-24 15:36:08 - INFO - Fold 3, Epoch 14/20, Loss: 0.2210\n",
      "2025-10-24 15:36:14 - INFO - Fold 3, Epoch 15/20, Loss: 0.2704\n",
      "2025-10-24 15:36:21 - INFO - Fold 3, Epoch 16/20, Loss: 0.1927\n",
      "2025-10-24 15:36:27 - INFO - Fold 3, Epoch 17/20, Loss: 0.1527\n",
      "2025-10-24 15:36:33 - INFO - Fold 3, Epoch 18/20, Loss: 0.1498\n",
      "2025-10-24 15:36:40 - INFO - Fold 3, Epoch 19/20, Loss: 0.1813\n",
      "2025-10-24 15:36:46 - INFO - Fold 3, Epoch 20/20, Loss: 0.2147\n",
       "2025-10-24 15:36:47 - INFO - --- Fold 3 results ---\n",
      "2025-10-24 15:36:47 - INFO - Test AUC: 0.8492\n",
      "2025-10-24 15:36:47 - INFO - Test Accuracy: 0.7632\n",
      "2025-10-24 15:36:47 - INFO - \n",
      "==================================================\n",
       "2025-10-24 15:36:47 - INFO - --- Starting fold 4/5 (split column: fold_3) ---\n",
      "2025-10-24 15:36:47 - INFO - ==================================================\n",
      "2025-10-24 15:36:47 - INFO - Fold 4: Train samples = 304, Test samples = 78\n",
      "2025-10-24 15:36:53 - INFO - Fold 4, Epoch 1/20, Loss: 0.6180\n",
      "2025-10-24 15:36:59 - INFO - Fold 4, Epoch 2/20, Loss: 0.5446\n",
      "2025-10-24 15:37:05 - INFO - Fold 4, Epoch 3/20, Loss: 0.4743\n",
      "2025-10-24 15:37:11 - INFO - Fold 4, Epoch 4/20, Loss: 0.4525\n",
      "2025-10-24 15:37:17 - INFO - Fold 4, Epoch 5/20, Loss: 0.4480\n",
      "2025-10-24 15:37:23 - INFO - Fold 4, Epoch 6/20, Loss: 0.4091\n",
      "2025-10-24 15:37:29 - INFO - Fold 4, Epoch 7/20, Loss: 0.4485\n",
      "2025-10-24 15:37:35 - INFO - Fold 4, Epoch 8/20, Loss: 0.3564\n",
      "2025-10-24 15:37:41 - INFO - Fold 4, Epoch 9/20, Loss: 0.2643\n",
      "2025-10-24 15:37:47 - INFO - Fold 4, Epoch 10/20, Loss: 0.2417\n",
      "2025-10-24 15:37:53 - INFO - Fold 4, Epoch 11/20, Loss: 0.2410\n",
      "2025-10-24 15:37:59 - INFO - Fold 4, Epoch 12/20, Loss: 0.2219\n",
      "2025-10-24 15:38:05 - INFO - Fold 4, Epoch 13/20, Loss: 0.1430\n",
      "2025-10-24 15:38:11 - INFO - Fold 4, Epoch 14/20, Loss: 0.2375\n",
      "2025-10-24 15:38:17 - INFO - Fold 4, Epoch 15/20, Loss: 0.1871\n",
      "2025-10-24 15:38:23 - INFO - Fold 4, Epoch 16/20, Loss: 0.1869\n",
      "2025-10-24 15:38:29 - INFO - Fold 4, Epoch 17/20, Loss: 0.1626\n",
      "2025-10-24 15:38:35 - INFO - Fold 4, Epoch 18/20, Loss: 0.0707\n",
      "2025-10-24 15:38:41 - INFO - Fold 4, Epoch 19/20, Loss: 0.0625\n",
      "2025-10-24 15:38:47 - INFO - Fold 4, Epoch 20/20, Loss: 0.2276\n",
       "2025-10-24 15:38:49 - INFO - --- Fold 4 results ---\n",
      "2025-10-24 15:38:49 - INFO - Test AUC: 0.8206\n",
      "2025-10-24 15:38:49 - INFO - Test Accuracy: 0.7179\n",
      "2025-10-24 15:38:49 - INFO - \n",
      "==================================================\n",
       "2025-10-24 15:38:49 - INFO - --- Starting fold 5/5 (split column: fold_4) ---\n",
      "2025-10-24 15:38:49 - INFO - ==================================================\n",
      "2025-10-24 15:38:49 - INFO - Fold 5: Train samples = 310, Test samples = 72\n",
      "2025-10-24 15:38:55 - INFO - Fold 5, Epoch 1/20, Loss: 0.6372\n",
      "2025-10-24 15:39:01 - INFO - Fold 5, Epoch 2/20, Loss: 0.5419\n",
      "2025-10-24 15:39:07 - INFO - Fold 5, Epoch 3/20, Loss: 0.4796\n",
      "2025-10-24 15:39:14 - INFO - Fold 5, Epoch 4/20, Loss: 0.4530\n",
      "2025-10-24 15:39:20 - INFO - Fold 5, Epoch 5/20, Loss: 0.4298\n",
      "2025-10-24 15:39:26 - INFO - Fold 5, Epoch 6/20, Loss: 0.3917\n",
      "2025-10-24 15:39:32 - INFO - Fold 5, Epoch 7/20, Loss: 0.3891\n",
      "2025-10-24 15:39:38 - INFO - Fold 5, Epoch 8/20, Loss: 0.3198\n",
      "2025-10-24 15:39:44 - INFO - Fold 5, Epoch 9/20, Loss: 0.3357\n",
      "2025-10-24 15:39:51 - INFO - Fold 5, Epoch 10/20, Loss: 0.2757\n",
      "2025-10-24 15:39:57 - INFO - Fold 5, Epoch 11/20, Loss: 0.2888\n",
      "2025-10-24 15:40:03 - INFO - Fold 5, Epoch 12/20, Loss: 0.3247\n",
      "2025-10-24 15:40:09 - INFO - Fold 5, Epoch 13/20, Loss: 0.3240\n",
      "2025-10-24 15:40:15 - INFO - Fold 5, Epoch 14/20, Loss: 0.2031\n",
      "2025-10-24 15:40:22 - INFO - Fold 5, Epoch 15/20, Loss: 0.2062\n",
      "2025-10-24 15:40:28 - INFO - Fold 5, Epoch 16/20, Loss: 0.1719\n",
      "2025-10-24 15:40:34 - INFO - Fold 5, Epoch 17/20, Loss: 0.2324\n",
      "2025-10-24 15:40:40 - INFO - Fold 5, Epoch 18/20, Loss: 0.2348\n",
      "2025-10-24 15:40:46 - INFO - Fold 5, Epoch 19/20, Loss: 0.1499\n",
      "2025-10-24 15:40:53 - INFO - Fold 5, Epoch 20/20, Loss: 0.1542\n",
       "2025-10-24 15:40:54 - INFO - --- Fold 5 results ---\n",
      "2025-10-24 15:40:54 - INFO - Test AUC: 0.8307\n",
      "2025-10-24 15:40:54 - INFO - Test Accuracy: 0.7361\n",
      "2025-10-24 15:40:54 - INFO - \n",
      "==================================================\n",
       "2025-10-24 15:40:54 - INFO - --- 5-fold cross-validation summary ---\n",
      "2025-10-24 15:40:54 - INFO - ==================================================\n",
       "2025-10-24 15:40:54 - INFO - Mean AUC: 0.8305 ± 0.0164\n",
       "2025-10-24 15:40:54 - INFO - Mean Accuracy: 0.7433 ± 0.0209\n",
      "2025-10-24 15:40:54 - INFO - \n",
       "Per-fold AUC:\n",
      "2025-10-24 15:40:54 - INFO -   Fold 1: 0.8466\n",
      "2025-10-24 15:40:54 - INFO -   Fold 2: 0.8054\n",
      "2025-10-24 15:40:54 - INFO -   Fold 3: 0.8492\n",
      "2025-10-24 15:40:54 - INFO -   Fold 4: 0.8206\n",
      "2025-10-24 15:40:54 - INFO -   Fold 5: 0.8307\n",
      "2025-10-24 15:40:54 - INFO - \n",
       "Per-fold Accuracy:\n",
      "2025-10-24 15:40:54 - INFO -   Fold 1: 0.7722\n",
      "2025-10-24 15:40:54 - INFO -   Fold 2: 0.7273\n",
      "2025-10-24 15:40:54 - INFO -   Fold 3: 0.7632\n",
      "2025-10-24 15:40:54 - INFO -   Fold 4: 0.7179\n",
      "2025-10-24 15:40:54 - INFO -   Fold 5: 0.7361\n",
       "2025-10-24 15:40:54 - INFO - Log saved to cross_val_training.log\n"
     ]
     }
   ],
   "source": [
    "import os\n",
    "import numpy as np\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "import torch.optim as optim\n",
    "import h5py\n",
    "from torch.utils.data import Dataset, DataLoader\n",
    "from sklearn.metrics import roc_auc_score\n",
     "import logging\n",
     "import sys\n",
    "from trident.slide_encoder_models import ABMILSlideEncoder\n",
    "\n",
     "# --- Logging setup ---\n",
    "log_file_path = \"cross_val_training.log\"\n",
    "\n",
     "# 1. Configure the logger\n",
     "# Clear any existing handlers in case this cell is run more than once in a notebook\n",
    "logging.getLogger().handlers = []\n",
    "\n",
    "logging.basicConfig(\n",
    "    level=logging.INFO,\n",
     "    format=\"%(asctime)s - %(levelname)s - %(message)s\",  # log format, including timestamp\n",
    "    datefmt=\"%Y-%m-%d %H:%M:%S\",\n",
    "    handlers=[\n",
     "        logging.FileHandler(log_file_path, mode=\"w\"),  # write to file; mode='w' overwrites on each run\n",
     "        logging.StreamHandler(sys.stdout),  # echo to the console\n",
    "    ],\n",
    ")\n",
    "\n",
     "# Get the logger instance\n",
     "logger = logging.getLogger()\n",
     "logger.info(\"Logging initialized.\")\n",
     "# --- End logging setup ---\n",
    "\n",
    "\n",
     "# --- Configuration ---\n",
     "# Task label column (from your TSV file)\n",
    "task_label_col = \"label\"\n",
    "\n",
     "# Feature dimension (conch_v1 is 512)\n",
    "feature_dim = 512\n",
    "\n",
     "# Column names for the 5-fold cross-validation splits\n",
    "fold_columns = [\"fold_0\", \"fold_1\", \"fold_2\", \"fold_3\", \"fold_4\"]\n",
    "# ---------------------\n",
    "\n",
     "# Make runs deterministic\n",
    "SEED = 1234\n",
    "np.random.seed(SEED)\n",
    "torch.manual_seed(SEED)\n",
    "torch.cuda.manual_seed_all(SEED)\n",
    "torch.backends.cudnn.deterministic = True\n",
    "torch.backends.cudnn.benchmark = False\n",
     "logger.info(f\"Random seed set to {SEED}\")\n",
    "\n",
    "\n",
    "class BinaryClassificationModel(nn.Module):\n",
     "    # Uses the correct conch_v1 feature dimension (512)\n",
    "    def __init__(\n",
    "        self,\n",
    "        input_feature_dim=feature_dim,\n",
    "        n_heads=1,\n",
    "        head_dim=512,\n",
    "        dropout=0.0,\n",
    "        gated=True,\n",
    "        hidden_dim=256,\n",
    "    ):\n",
    "        super().__init__()\n",
    "        self.feature_encoder = ABMILSlideEncoder(\n",
    "            freeze=False,\n",
    "            input_feature_dim=input_feature_dim,\n",
    "            n_heads=n_heads,\n",
    "            head_dim=head_dim,\n",
    "            dropout=dropout,\n",
    "            gated=gated,\n",
    "        )\n",
    "        self.classifier = nn.Sequential(\n",
    "            nn.Linear(input_feature_dim, hidden_dim),\n",
    "            nn.ReLU(),\n",
    "            nn.Linear(hidden_dim, 1),\n",
    "        )\n",
    "\n",
    "    def forward(self, x, return_raw_attention=False):\n",
    "        if return_raw_attention:\n",
    "            features, attn = self.feature_encoder(x, return_raw_attention=True)\n",
    "        else:\n",
    "            features = self.feature_encoder(x)\n",
    "        logits = self.classifier(features).squeeze(1)\n",
    "\n",
    "        if return_raw_attention:\n",
    "            return logits, attn\n",
    "\n",
    "        return logits\n",
    "\n",
    "\n",
     "# Initialize the device\n",
    "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
     "logger.info(f\"Using device: {device}\")\n",
    "\n",
    "\n",
     "# Custom dataset\n",
     "class H5Dataset(Dataset):\n",
     "    # Note: split_col is a parameter so each fold can use its own column\n",
     "    def __init__(self, feats_path, df, split, split_col, num_features=512):\n",
     "        # Filter the dataframe on the given split_col and its 'train'/'test' values\n",
    "        self.df = df[df[split_col] == split]\n",
    "        self.feats_path = feats_path\n",
    "        self.num_features = num_features\n",
    "        self.split = split\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.df)\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        row = self.df.iloc[idx]\n",
    "        with h5py.File(\n",
    "            os.path.join(self.feats_path, row[\"slide_id\"] + \".h5\"), \"r\"\n",
    "        ) as f:\n",
    "            features = torch.from_numpy(f[\"features\"][:])\n",
    "\n",
    "        if self.split == \"train\":\n",
    "            num_available = features.shape[0]\n",
    "            if num_available >= self.num_features:\n",
    "                indices = torch.randperm(\n",
    "                    num_available, generator=torch.Generator().manual_seed(SEED)\n",
    "                )[: self.num_features]\n",
    "            else:\n",
    "                indices = torch.randint(\n",
    "                    num_available,\n",
    "                    (self.num_features,),\n",
    "                    generator=torch.Generator().manual_seed(SEED),\n",
     "            )  # oversample with replacement\n",
    "            features = features[indices]\n",
    "\n",
     "        # Use the task label column (task_label_col)\n",
    "        label = torch.tensor(row[task_label_col], dtype=torch.float32)\n",
    "        return features, label\n",
    "\n",
    "\n",
     "# --- Cross-validation loop ---\n",
    "\n",
    "feats_path = \"/data0/lcy/data/LNM/LNM_slices_conch_v1_processed/20x_512px_0px_overlap/features_conch_v1\"\n",
    "batch_size = 8\n",
    "num_epochs = 20\n",
    "\n",
     "logger.info(f\"Feature path: {feats_path}\")\n",
    "logger.info(f\"Batch Size: {batch_size}, Epochs: {num_epochs}\")\n",
    "\n",
     "# Lists to store per-fold results\n",
    "all_fold_aucs = []\n",
    "all_fold_accuracies = []\n",
    "\n",
     "# Make sure 'df' is defined\n",
     "if \"df\" not in locals():\n",
     "    logger.error(\"=\" * 50)\n",
     "    logger.error(\"Error: DataFrame 'df' is not defined.\")\n",
     "    logger.error(\"Run the loading code in section B to create 'df' before this cell.\")\n",
     "    logger.error(\"=\" * 50)\n",
    "else:\n",
     "    # Iterate over each column in fold_columns\n",
    "    for fold_idx, current_split_col in enumerate(fold_columns):\n",
    "        logger.info(f\"\\n{'='*50}\")\n",
     "        logger.info(\n",
     "            f\"--- Starting fold {fold_idx + 1}/{len(fold_columns)} (split column: {current_split_col}) ---\"\n",
     "        )\n",
    "        logger.info(f\"{'='*50}\")\n",
    "\n",
     "        # 1. Re-initialize the model and optimizer for this fold\n",
    "        model = BinaryClassificationModel().to(device)\n",
    "        optimizer = optim.Adam(model.parameters(), lr=4e-4)\n",
    "        criterion = nn.BCEWithLogitsLoss()\n",
    "\n",
     "        # 2. Create the DataLoaders for this fold\n",
     "        # Pass current_split_col through to H5Dataset\n",
    "        train_loader = DataLoader(\n",
    "            H5Dataset(feats_path, df, \"train\", split_col=current_split_col),\n",
    "            batch_size=batch_size,\n",
    "            shuffle=True,\n",
    "            worker_init_fn=lambda _: np.random.seed(SEED),\n",
    "        )\n",
    "        test_loader = DataLoader(\n",
    "            H5Dataset(feats_path, df, \"test\", split_col=current_split_col),\n",
    "            batch_size=1,\n",
    "            shuffle=False,\n",
    "            worker_init_fn=lambda _: np.random.seed(SEED),\n",
    "        )\n",
    "\n",
    "        logger.info(\n",
    "            f\"Fold {fold_idx + 1}: Train samples = {len(train_loader.dataset)}, Test samples = {len(test_loader.dataset)}\"\n",
    "        )\n",
    "\n",
     "        # 3. Training loop (same as before, but inside the fold loop)\n",
    "        for epoch in range(num_epochs):\n",
    "            model.train()\n",
    "            total_loss = 0.0\n",
    "            for features, labels in train_loader:\n",
    "                features, labels = {\"features\": features.to(device)}, labels.to(device)\n",
    "                optimizer.zero_grad()\n",
    "                outputs = model(features)\n",
    "                loss = criterion(outputs, labels)\n",
    "                loss.backward()\n",
    "                optimizer.step()\n",
    "                total_loss += loss.item()\n",
    "            logger.info(\n",
    "                f\"Fold {fold_idx + 1}, Epoch {epoch+1}/{num_epochs}, Loss: {total_loss/len(train_loader):.4f}\"\n",
    "            )\n",
    "\n",
     "        # 4. Evaluation (same as before, but inside the fold loop)\n",
    "        model.eval()\n",
    "        all_labels, all_outputs = [], []\n",
    "        correct = 0\n",
    "        total = 0\n",
    "\n",
    "        with torch.no_grad():\n",
    "            for features, labels in test_loader:\n",
    "                features, labels = {\"features\": features.to(device)}, labels.to(device)\n",
    "                outputs = model(features)\n",
    "\n",
     "                # Convert raw logits to binary predictions\n",
     "                # (logit > 0 is equivalent to probability > 0.5, since BCEWithLogitsLoss works on raw logits)\n",
     "                predicted = (outputs > 0).float()\n",
    "                correct += (predicted == labels).sum().item()\n",
    "                total += labels.size(0)\n",
    "\n",
    "                all_outputs.append(outputs.cpu().numpy())\n",
    "                all_labels.append(labels.cpu().numpy())\n",
    "\n",
     "        # 5. Compute and store metrics for this fold\n",
    "        all_outputs = np.concatenate(all_outputs)\n",
    "        all_labels = np.concatenate(all_labels)\n",
    "\n",
    "        auc = np.nan\n",
     "        if len(np.unique(all_labels)) > 1:  # AUC needs at least two classes\n",
    "            auc = roc_auc_score(all_labels, all_outputs)\n",
    "        else:\n",
    "            logger.warning(\n",
    "                f\"Fold {fold_idx + 1} Warning: Test set only contains one class. AUC cannot be calculated.\"\n",
    "            )\n",
    "\n",
    "        accuracy = correct / total\n",
    "\n",
     "        logger.info(f\"--- Fold {fold_idx + 1} results ---\")\n",
    "        logger.info(f\"Test AUC: {auc:.4f}\")\n",
    "        logger.info(f\"Test Accuracy: {accuracy:.4f}\")\n",
    "\n",
    "        all_fold_aucs.append(auc)\n",
    "        all_fold_accuracies.append(accuracy)\n",
    "\n",
     "    # --- Cross-validation summary ---\n",
    "    logger.info(f\"\\n{'='*50}\")\n",
     "    logger.info(\"--- 5-fold cross-validation summary ---\")\n",
    "    logger.info(f\"{'='*50}\")\n",
    "\n",
     "    # Use nanmean/nanstd to ignore any possible 'nan' AUC values\n",
    "    mean_auc = np.nanmean(all_fold_aucs)\n",
    "    std_auc = np.nanstd(all_fold_aucs)\n",
    "    mean_accuracy = np.nanmean(all_fold_accuracies)\n",
    "    std_accuracy = np.nanstd(all_fold_accuracies)\n",
    "\n",
     "    logger.info(f\"Mean AUC: {mean_auc:.4f} \\u00b1 {std_auc:.4f}\")\n",
     "    logger.info(f\"Mean Accuracy: {mean_accuracy:.4f} \\u00b1 {std_accuracy:.4f}\")\n",
    "\n",
     "    logger.info(\"\\nPer-fold AUC:\")\n",
    "    for i, auc in enumerate(all_fold_aucs):\n",
    "        logger.info(f\"  Fold {i+1}: {auc:.4f}\")\n",
    "\n",
     "    logger.info(\"\\nPer-fold Accuracy:\")\n",
    "    for i, acc in enumerate(all_fold_accuracies):\n",
    "        logger.info(f\"  Fold {i+1}: {acc:.4f}\")\n",
    "\n",
     "    logger.info(f\"Log saved to {log_file_path}\")"
   ]
  },
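   {
    "cell_type": "markdown",
    "metadata": {},
    "source": [
     "The heatmap section below reuses `model`, which at this point holds the weights from the final (fifth) fold. If you want those weights to survive a kernel restart, a minimal sketch (the checkpoint filename here is an arbitrary choice, not something Trident prescribes):\n",
     "\n",
     "```python\n",
     "import torch\n",
     "\n",
     "ckpt_path = \"abmil_fold5.pt\"  # hypothetical filename\n",
     "torch.save(model.state_dict(), ckpt_path)\n",
     "\n",
     "# Later, after re-creating the architecture:\n",
     "# model = BinaryClassificationModel().to(device)\n",
     "# model.load_state_dict(torch.load(ckpt_path, map_location=device))\n",
     "```\n"
    ]
   },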
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "#### D- Extract attention heatmaps with the newly trained model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
     "import os\n",
     "import h5py\n",
     "import torch\n",
     "from trident import OpenSlideWSI, visualize_heatmap\n",
     "from trident.segmentation_models import segmentation_model_factory\n",
     "from trident.patch_encoder_models import encoder_factory as patch_encoder_factory\n",
    "\n",
     "# a. Set paths and load the WSI\n",
    "saveto_dir = \"zzylnm_slices_splits/results/linprobe/LNM\"\n",
    "job_dir = os.path.join(saveto_dir, 'heatmap_viz')\n",
    "os.makedirs(job_dir, exist_ok=True)\n",
    "\n",
    "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
    "\n",
     "# --- TODO: configure the WSI path ---\n",
     "# You must point this at the directory containing the original WSI files.\n",
     "# Based on your feature path this is a guess; please verify:\n",
     "wsi_dir = \"/mnt/data1/zzy/LNM/slices\"  # <-- guess, please verify this path\n",
    "\n",
     "# Grab a sample slide ID from the test split of the last fold ('fold_columns' from section C)\n",
     "test_df = df[df[fold_columns[-1]] == \"test\"]\n",
     "if test_df.empty:\n",
     "    print(\"Warning: no slides found in the 'test' split. Falling back to the first slide in the dataframe.\")\n",
     "    sample_slide_id = df.iloc[0]['slide_id']\n",
     "else:\n",
     "    sample_slide_id = test_df.iloc[0]['slide_id']\n",
    "\n",
     "# TODO: verify the file extension of your slides (e.g. .svs, .tif, .ndpi)\n",
     "slide_extension = \".svs\"  # <-- a common extension; change as needed\n",
    "slide_path = os.path.join(wsi_dir, f\"{sample_slide_id}{slide_extension}\")\n",
    "# ----------------------------------\n",
    "\n",
     "print(f\"Generating a heatmap for slide: {slide_path}\")\n",
     "if not os.path.exists(slide_path):\n",
     "    print(f\"Error: slide file not found at {slide_path}. Check 'wsi_dir' and 'slide_extension'.\")\n",
    "else:\n",
    "    slide = OpenSlideWSI(slide_path=slide_path, lazy_init=False)\n",
    "\n",
     "    # b. Run tissue segmentation\n",
    "    segmentation_model = segmentation_model_factory(\"hest\")\n",
    "    geojson_contours = slide.segment_tissue(segmentation_model=segmentation_model, job_dir=job_dir, device=device)\n",
    "\n",
     "    # c. Extract patch coordinates\n",
     "    # Match your feature-extraction parameters: 20x, 512px, 0px overlap\n",
    "    coords_path = slide.extract_tissue_coords(\n",
     "        target_mag=20,          # matches 20x\n",
     "        patch_size=512,         # matches 512px\n",
     "        save_coords=job_dir,\n",
     "        overlap=0,              # matches 0px_overlap\n",
    "    )\n",
    "\n",
     "    # d. Extract patch features\n",
     "    # Use 'conch_v1' to match your feature directory name\n",
    "    patch_encoder = patch_encoder_factory(\"conch_v1\").eval().to(device)\n",
    "    patch_features_path = slide.extract_patch_features(\n",
    "        patch_encoder=patch_encoder,\n",
    "        coords_path=coords_path,\n",
     "        save_features=os.path.join(job_dir, \"features_conch_v1\"),  # match the encoder\n",
    "        device=device\n",
    "    )\n",
    "\n",
     "    # e. Run inference\n",
    "    with h5py.File(patch_features_path, 'r') as f:\n",
    "        coords = f['coords'][:]\n",
    "        patch_features = f['features'][:]\n",
    "        coords_attrs = dict(f['coords'].attrs)\n",
    "\n",
     "    batch = {'features': torch.from_numpy(patch_features).float().to(device).unsqueeze(0)}\n",
     "    model.eval()\n",
     "    with torch.no_grad():\n",
     "        logits, attention = model(batch, return_raw_attention=True)\n",
    "\n",
     "    # f. Generate the heatmap\n",
    "    heatmap_save_path = visualize_heatmap(\n",
    "        wsi=slide,\n",
     "        scores=attention.cpu().numpy().squeeze(),\n",
    "        coords=coords,\n",
    "        vis_level=1,\n",
    "        patch_size_level0=coords_attrs['patch_size_level0'],\n",
    "        normalize=True,\n",
    "        num_top_patches_to_save=10,\n",
    "        output_dir=job_dir\n",
    "    )\n",
     "    print(f\"Heatmap saved to: {heatmap_save_path}\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "trident",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.16"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
