{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 保险数据集的回归（[Regression with an Insurance Dataset | Kaggle](https://www.kaggle.com/competitions/playground-series-s4e12)）\n",
    "\n",
    "## 1. 环境配置\n",
    "\n",
    "### 必要依赖\n",
    "\n",
    "```bash\n",
    "pip install pandas torch scikit-learn tqdm\n",
    "```\n",
    "\n",
    "### 硬件要求\n",
    "\n",
    "- 支持 CPU 或 CUDA GPU 运行\n",
    "- 建议使用 CUDA 显卡以加速训练过程\n",
    "\n",
    "## 2. 项目结构\n",
    "\n",
    "```plaintext\n",
    "project/\n",
    "│\n",
    "├── data/\n",
    "│   ├── train.csv\n",
    "│   ├── test.csv\n",
    "│   └── sample_submission.csv\n",
    "│\n",
    "├── model/\n",
    "│   └── best_model.pth\n",
    "│\n",
    "└── results/\n",
    "    └── submission.csv\n",
    "```\n",
    "\n",
    "## 3. 核心设计思路\n",
    "\n",
    "### 3.1 数据处理策略\n",
    "\n",
    "1. **缺失值处理**\n",
    "   - 数值型特征：使用均值填充\n",
    "   - 分类型特征：使用众数填充\n",
    "\n",
    "2. **特征工程**\n",
    "   - 分类特征编码：使用标签编码并归一化到 0-1\n",
    "   - 数值特征标准化：使用 StandardScaler 进行标准化处理\n",
    "\n",
    "### 3.2 模型架构\n",
    "\n",
    "采用多层前馈神经网络（MLP）设计：\n",
    "\n",
    "```plaintext\n",
    "输入层 (input_size) \n",
    "    ↓\n",
    "隐藏层 1 (128 neurons + ReLU + Dropout 0.2)\n",
    "    ↓\n",
    "隐藏层 2 (64 neurons + ReLU + Dropout 0.2)\n",
    "    ↓\n",
    "隐藏层 3 (32 neurons + ReLU)\n",
    "    ↓\n",
    "输出层 (1 neuron)\n",
    "```\n",
    "\n",
    "### 3.3 训练策略\n",
    "\n",
    "1. **优化器选择**\n",
    "   - 使用 Adam 优化器\n",
    "   - 学习率：0.001\n",
    "\n",
    "2. **损失函数**\n",
    "   - 均方误差损失 (MSE Loss)\n",
    "\n",
    "3. **训练技巧**\n",
    "   - 批量大小：32\n",
    "   - 早停机制：patience = 5\n",
    "   - 模型检查点：保存验证损失最低的模型\n",
    "\n",
    "## 4. 核心算法详解\n",
    "\n",
    "### 4.1 数据预处理\n",
    "\n",
    "数据预处理主要包含以下步骤：\n",
    "\n",
    "1. **数值型特征处理**\n",
    "   - 识别所有数值型列（int64 和 float64 类型）\n",
    "   - 使用每列的均值填充缺失值\n",
    "   - 使用 StandardScaler 进行标准化处理\n",
    "\n",
    "2. **分类特征处理**\n",
    "   - 识别所有对象类型列（object 类型）\n",
    "   - 使用众数填充缺失值\n",
    "   - 将分类变量转换为数值编码\n",
    "   - 对编码后的值进行 0-1 归一化处理（仅针对具有多个唯一值的列）\n",
    "\n",
    "3. **处理流程**\n",
    "   - 首先区分数值型和分类型列\n",
    "   - 分别对两种类型的特征进行处理\n",
    "   - 保持原始数据框结构不变，直接在原数据上进行转换\n",
    "\n",
    "### 4.2 模型训练流程\n",
    "\n",
    "1. **每个 epoch 的训练步骤**\n",
    "   - 前向传播计算预测值\n",
    "   - 计算 MSE 损失\n",
    "   - 反向传播更新参数\n",
    "   - 记录训练损失\n",
    "\n",
    "2. **验证步骤**\n",
    "   - 计算验证集损失\n",
    "   - 更新最佳模型\n",
    "   - 检查早停条件\n",
    "\n",
    "### 4.3 预测流程\n",
    "\n",
    "1. 加载测试数据\n",
    "2. 应用相同的预处理步骤\n",
    "3. 加载最佳模型权重\n",
    "4. 批量预测并生成提交文件\n",
    "\n",
    "## 5. 使用说明\n",
    "\n",
    "1. 数据文件：\n",
    "   - 将训练数据放入 `data/train.csv`\n",
    "   - 将测试数据放入 `data/test.csv`\n",
    "   - 将样本提交文件放入 `data/sample_submission.csv`\n",
    "\n",
    "2. 输出文件：\n",
    "   - 模型会自动保存在 `model/best_model.pth`\n",
    "   - 预测结果将保存在 `results/submission.csv`\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Using device: cuda\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 1/50 [Train]: 100%|██████████| 30000/30000 [01:34<00:00, 318.12it/s, loss=1218314.2500]\n",
      "Epoch 1/50 [Val]: 100%|██████████| 7500/7500 [00:12<00:00, 578.67it/s, loss=730219.3750] \n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 1/50:\n",
      "Average Train Loss: 759551.6394\n",
      "Average Val Loss: 742400.7666\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 2/50 [Train]: 100%|██████████| 30000/30000 [01:22<00:00, 361.49it/s, loss=898675.0000] \n",
      "Epoch 2/50 [Val]: 100%|██████████| 7500/7500 [00:12<00:00, 621.52it/s, loss=702472.7500] \n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 2/50:\n",
      "Average Train Loss: 752933.1257\n",
      "Average Val Loss: 745812.2019\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 3/50 [Train]: 100%|██████████| 30000/30000 [01:23<00:00, 359.04it/s, loss=826401.8750] \n",
      "Epoch 3/50 [Val]: 100%|██████████| 7500/7500 [00:12<00:00, 588.86it/s, loss=715124.3750] \n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 3/50:\n",
      "Average Train Loss: 752149.8664\n",
      "Average Val Loss: 745544.8684\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 4/50 [Train]: 100%|██████████| 30000/30000 [01:37<00:00, 306.89it/s, loss=766985.6250] \n",
      "Epoch 4/50 [Val]: 100%|██████████| 7500/7500 [00:12<00:00, 600.04it/s, loss=712116.6250] \n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 4/50:\n",
      "Average Train Loss: 750905.8054\n",
      "Average Val Loss: 742820.6678\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 5/50 [Train]: 100%|██████████| 30000/30000 [01:24<00:00, 353.95it/s, loss=485303.9375] \n",
      "Epoch 5/50 [Val]: 100%|██████████| 7500/7500 [00:13<00:00, 564.96it/s, loss=686663.2500] \n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 5/50:\n",
      "Average Train Loss: 748635.4376\n",
      "Average Val Loss: 765226.0651\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 6/50 [Train]: 100%|██████████| 30000/30000 [01:31<00:00, 327.58it/s, loss=655594.3125] \n",
      "Epoch 6/50 [Val]: 100%|██████████| 7500/7500 [00:12<00:00, 578.19it/s, loss=707683.7500] "
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch 6/50:\n",
      "Average Train Loss: 746957.6908\n",
      "Average Val Loss: 763867.3913\n",
      "\n",
      "Early stopping triggered after 6 epochs\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    }
   ],
   "source": [
    "import os\n",
    "\n",
    "import pandas as pd\n",
    "import torch\n",
    "import torch.nn as nn\n",
    "from sklearn.model_selection import train_test_split\n",
    "from sklearn.preprocessing import StandardScaler\n",
    "from torch.optim.adam import Adam\n",
    "from torch.utils.data import DataLoader, Dataset\n",
    "from tqdm import tqdm\n",
    "\n",
    "# 检查是否可以使用CUDA\n",
    "device = torch.device(\"cuda\" if torch.cuda.is_available() else \"cpu\")\n",
    "print(f\"Using device: {device}\")\n",
    "\n",
    "\n",
    "# 自定义数据集类\n",
    "class InsuranceDataset(Dataset):\n",
    "    \"\"\"\n",
    "    保险费数据的自定义数据集类。\n",
    "\n",
    "    属性:\n",
    "        X (torch.FloatTensor): 特征张量\n",
    "        y (torch.FloatTensor): 目标张量(测试集可选)\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self, X, y=None):\n",
    "        self.X = torch.FloatTensor(X)\n",
    "        self.y = torch.FloatTensor(y) if y is not None else None\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.X)\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        if self.y is not None:\n",
    "            return self.X[idx], self.y[idx]\n",
    "        return self.X[idx]\n",
    "\n",
    "\n",
    "# 神经网络模型\n",
    "class InsuranceNet(nn.Module):\n",
    "    \"\"\"\n",
    "    保险费预测的神经网络架构。\n",
    "\n",
    "    架构:\n",
    "        - 输入层: input_size 个神经元\n",
    "        - 隐藏层1: 128个神经元，使用ReLU激活和0.2的dropout\n",
    "        - 隐藏层2: 64个神经元，使用ReLU激活和0.2的dropout\n",
    "        - 隐藏层3: 32个神经元，使用ReLU激活\n",
    "        - 输出层: 1个神经元(保费预测)\n",
    "\n",
    "    参数:\n",
    "        input_size (int): 输入特征数量\n",
    "    \"\"\"\n",
    "\n",
    "    def __init__(self, input_size):\n",
    "        super(InsuranceNet, self).__init__()\n",
    "        self.model = nn.Sequential(\n",
    "            nn.Linear(input_size, 128),\n",
    "            nn.ReLU(),\n",
    "            nn.Dropout(0.2),\n",
    "            nn.Linear(128, 64),\n",
    "            nn.ReLU(),\n",
    "            nn.Dropout(0.2),\n",
    "            nn.Linear(64, 32),\n",
    "            nn.ReLU(),\n",
    "            nn.Linear(32, 1),\n",
    "        )\n",
    "\n",
    "    def forward(self, x):\n",
    "        return self.model(x)\n",
    "\n",
    "\n",
    "# 数据预处理函数\n",
    "def preprocess_data(df):\n",
    "    \"\"\"\n",
    "    预处理用于模型训练/推理的输入数据框。\n",
    "\n",
    "    步骤:\n",
    "        1. 处理数值/分类列中的缺失值\n",
    "        2. 编码分类变量\n",
    "        3. 使用StandardScaler缩放数值特征\n",
    "\n",
    "    参数:\n",
    "        df (pd.DataFrame): 输入数据框\n",
    "\n",
    "    返回:\n",
    "        tuple: (预处理后的数据框, StandardScaler实例)\n",
    "    \"\"\"\n",
    "    # 创建副本以避免修改原始数据框\n",
    "    df = df.copy()\n",
    "\n",
    "    # 处理缺失值\n",
    "    numeric_cols = df.select_dtypes(include=[\"int64\", \"float64\"]).columns\n",
    "    categorical_cols = df.select_dtypes(include=[\"object\"]).columns\n",
    "\n",
    "    # 用均值填充数值型缺失值\n",
    "    df[numeric_cols] = df[numeric_cols].fillna(df[numeric_cols].mean())\n",
    "\n",
    "    # 用众数填充分类型缺失值\n",
    "    for col in categorical_cols:\n",
    "        df[col] = df[col].fillna(df[col].mode().iloc[0])\n",
    "\n",
    "    # 使用标签编码将分类列转换为数值\n",
    "    for col in categorical_cols:\n",
    "        df[col] = df[col].astype(\"category\").cat.codes\n",
    "\n",
    "        # 缩放到0-1之间\n",
    "        if len(df[col].unique()) > 1:\n",
    "            df[col] = (df[col] - df[col].min()) / (df[col].max() - df[col].min())\n",
    "\n",
    "    # 缩放数值列\n",
    "    scaler = StandardScaler()\n",
    "    df[numeric_cols] = scaler.fit_transform(df[numeric_cols])\n",
    "\n",
    "    return df, scaler\n",
    "\n",
    "\n",
    "# 训练函数\n",
    "def train_model(\n",
    "    model, train_loader, val_loader, criterion, optimizer, epochs, patience=5\n",
    "):\n",
    "    \"\"\"\n",
    "    训练神经网络模型，包含早停机制。\n",
    "\n",
    "    特点:\n",
    "        - 训练和验证的进度条显示\n",
    "        - 可配置耐心值的早停机制\n",
    "        - 模型检查点(保存最佳模型)\n",
    "        - 训练和验证损失追踪\n",
    "\n",
    "    参数:\n",
    "        model: 神经网络模型\n",
    "        train_loader: 训练数据加载器\n",
    "        val_loader: 验证数据加载器\n",
    "        criterion: 损失函数\n",
    "        optimizer: 优化算法\n",
    "        epochs (int): 最大训练轮数\n",
    "        patience (int): 早停耐心值\n",
    "    \"\"\"\n",
    "    best_val_loss = float(\"inf\")\n",
    "    early_stopping_counter = 0\n",
    "\n",
    "    for epoch in range(epochs):\n",
    "        model.train()\n",
    "        train_loss = 0\n",
    "\n",
    "        # 训练循环，带进度条\n",
    "        train_pbar = tqdm(train_loader, desc=f\"Epoch {epoch+1}/{epochs} [Train]\")\n",
    "        for X, y in train_pbar:\n",
    "            X, y = X.to(device), y.to(device)\n",
    "\n",
    "            optimizer.zero_grad()\n",
    "            outputs = model(X)\n",
    "            loss = criterion(outputs, y.unsqueeze(1))\n",
    "            loss.backward()\n",
    "            optimizer.step()\n",
    "\n",
    "            train_loss += loss.item()\n",
    "            train_pbar.set_postfix({\"loss\": f\"{loss.item():.4f}\"})\n",
    "\n",
    "        # 验证循环\n",
    "        model.eval()\n",
    "        val_loss = 0\n",
    "        with torch.no_grad():\n",
    "            val_pbar = tqdm(val_loader, desc=f\"Epoch {epoch+1}/{epochs} [Val]\")\n",
    "            for X, y in val_pbar:\n",
    "                X, y = X.to(device), y.to(device)\n",
    "                outputs = model(X)\n",
    "                loss = criterion(outputs, y.unsqueeze(1))\n",
    "                val_loss += loss.item()\n",
    "                val_pbar.set_postfix({\"loss\": f\"{loss.item():.4f}\"})\n",
    "\n",
    "        train_loss /= len(train_loader)\n",
    "        val_loss /= len(val_loader)\n",
    "\n",
    "        print(f\"Epoch {epoch+1}/{epochs}:\")\n",
    "        print(f\"Average Train Loss: {train_loss:.4f}\")\n",
    "        print(f\"Average Val Loss: {val_loss:.4f}\")\n",
    "\n",
    "        # 保存最佳模型并检查是否需要早停\n",
    "        if val_loss < best_val_loss:\n",
    "            best_val_loss = val_loss\n",
    "            os.makedirs(\"model\", exist_ok=True)\n",
    "            torch.save(model.state_dict(), \"model/best_model.pth\")\n",
    "            early_stopping_counter = 0\n",
    "        else:\n",
    "            early_stopping_counter += 1\n",
    "            if early_stopping_counter >= patience:\n",
    "                print(f\"\\nEarly stopping triggered after {epoch + 1} epochs\")\n",
    "                break\n",
    "\n",
    "\n",
    "# 加载数据\n",
    "train_data = pd.read_csv(\"data/train.csv\")\n",
    "\n",
    "# 分离特征和目标\n",
    "X = train_data.drop([\"Premium Amount\", \"id\"], axis=1)\n",
    "y = train_data[\"Premium Amount\"]\n",
    "\n",
    "# 预处理数据\n",
    "X, _ = preprocess_data(X)\n",
    "\n",
    "# 分割数据\n",
    "X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=42)\n",
    "\n",
    "# 创建数据集和数据加载器\n",
    "train_dataset = InsuranceDataset(X_train.values, y_train.values)\n",
    "val_dataset = InsuranceDataset(X_val.values, y_val.values)\n",
    "\n",
    "train_loader = DataLoader(train_dataset, batch_size=32, shuffle=True)\n",
    "val_loader = DataLoader(val_dataset, batch_size=32)\n",
    "\n",
    "# 初始化模型、损失函数和优化器\n",
    "model = InsuranceNet(input_size=X.shape[1]).to(device)\n",
    "criterion = nn.MSELoss()\n",
    "optimizer = Adam(model.parameters(), lr=0.001)\n",
    "\n",
    "# 训练模型\n",
    "train_model(model, train_loader, val_loader, criterion, optimizer, epochs=50)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Predictions saved to results/submission.csv\n"
     ]
    }
   ],
   "source": [
    "# 加载测试数据和样本提交文件作为参考\n",
    "test_data = pd.read_csv(\"data/test.csv\")\n",
    "sample_submission = pd.read_csv(\"data/sample_submission.csv\")\n",
    "\n",
    "test_features = test_data.drop(\"id\", axis=1)\n",
    "\n",
    "# 预处理测试数据\n",
    "test_features = preprocess_data(test_features)[0]\n",
    "\n",
    "# 创建测试数据集和数据加载器\n",
    "test_dataset = InsuranceDataset(test_features.values)\n",
    "test_loader = DataLoader(test_dataset, batch_size=32)\n",
    "\n",
    "# 初始化并加载训练好的模型\n",
    "model = InsuranceNet(input_size=test_features.shape[1]).to(device)\n",
    "model.load_state_dict(torch.load(\"model/best_model.pth\"))\n",
    "model.eval()\n",
    "\n",
    "# 进行预测\n",
    "predictions = []\n",
    "with torch.no_grad():\n",
    "    for X in test_loader:\n",
    "        X = X.to(device)\n",
    "        outputs = model(X)\n",
    "        predictions.extend(outputs.cpu().numpy())\n",
    "\n",
    "# 创建提交文件，使用与样本提交相同的格式\n",
    "submission = pd.DataFrame()\n",
    "submission[sample_submission.columns[0]] = test_data[\"id\"]  # 使用样本中的精确列名\n",
    "submission[sample_submission.columns[1]] = predictions  # 使用样本中的精确列名\n",
    "\n",
    "# 确保与样本提交使用相同的数据类型\n",
    "for col in submission.columns:\n",
    "    submission[col] = submission[col].astype(sample_submission[col].dtype)\n",
    "\n",
    "os.makedirs(\"results\", exist_ok=True)\n",
    "submission.to_csv(\"results/submission.csv\", index=False)\n",
    "print(\"Predictions saved to results/submission.csv\")"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "ml",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.11.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
