{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Dependency Installation and Imports"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:12:27.775455Z",
     "iopub.status.busy": "2025-01-01T03:12:27.775173Z",
     "iopub.status.idle": "2025-01-01T03:12:32.130912Z",
     "shell.execute_reply": "2025-01-01T03:12:32.130109Z",
     "shell.execute_reply.started": "2025-01-01T03:12:27.775405Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "!pip install imblearn --user"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* sklearn (scikit-learn): machine-learning tools and models; preinstalled on Kaggle.\n",
    "* imblearn: tools for imbalanced datasets, including SMOTE (an oversampling method).\n",
    "    * SMOTE: Synthetic Minority Oversampling Technique.\n",
    "    * Oversampling balances the class distribution by increasing the number of minority-class samples. Common methods include:\n",
    "        1. **Random oversampling**: randomly duplicate minority-class samples.\n",
    "        2. **SMOTE**: interpolate between minority-class samples to synthesize new ones.\n",
    "        3. **ADASYN**: a refinement of SMOTE that generates more samples in the regions where the imbalance is hardest.\n",
    "        4. **KMeans SMOTE**: use KMeans clustering to decide which minority samples to synthesize around.\n",
    "        5. **Borderline-SMOTE**: synthesize samples only near the class boundary to reduce overfitting."
   ]
  },
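  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a minimal sketch (toy data, not the competition dataset; assuming `imbalanced-learn` is installed), the oversamplers listed above all share imblearn's `fit_resample` interface:\n",
    "\n",
    "```python\n",
    "# Minimal sketch: imblearn oversamplers share one fit_resample interface (toy data).\n",
    "from collections import Counter\n",
    "\n",
    "from sklearn.datasets import make_classification\n",
    "from imblearn.over_sampling import SMOTE, ADASYN, BorderlineSMOTE\n",
    "\n",
    "# Toy imbalanced dataset: roughly 90% class 0, 10% class 1.\n",
    "X, y = make_classification(n_samples=1000, weights=[0.9, 0.1], random_state=0)\n",
    "print('before:', Counter(y))\n",
    "\n",
    "for sampler in (SMOTE(random_state=0), ADASYN(random_state=0), BorderlineSMOTE(random_state=0)):\n",
    "    X_res, y_res = sampler.fit_resample(X, y)\n",
    "    print(sampler.__class__.__name__, Counter(y_res))\n",
    "```"
   ]
  },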
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:12:32.132159Z",
     "iopub.status.busy": "2025-01-01T03:12:32.131923Z",
     "iopub.status.idle": "2025-01-01T03:12:33.076173Z",
     "shell.execute_reply": "2025-01-01T03:12:33.075292Z",
     "shell.execute_reply.started": "2025-01-01T03:12:32.132140Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "# Standard library imports\n",
    "import os\n",
    "import math\n",
    "import time\n",
    "import random\n",
    "import datetime\n",
    "# Scientific computing and data analysis\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "# Visualization\n",
    "import seaborn as sns\n",
    "import matplotlib.pyplot as plt"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* os: operating-system utilities such as file-path handling.\n",
    "* math: mathematical functions such as square roots and logarithms.\n",
    "* time: time-related operations, e.g. getting the current time or timing a run.\n",
    "* random: random-number generation, used in data processing and augmentation.\n",
    "* datetime: date and time handling, often used for timestamps.\n",
    "* numpy: efficient numerical operations on multidimensional arrays and matrices.\n",
    "* pandas: loading, cleaning, manipulating, and analyzing tabular data, mainly via DataFrame.\n",
    "* seaborn: a high-level visualization library built on matplotlib for polished statistical plots.\n",
    "* matplotlib.pyplot: the underlying plotting library providing basic charting functionality."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:12:33.077603Z",
     "iopub.status.busy": "2025-01-01T03:12:33.077197Z",
     "iopub.status.idle": "2025-01-01T03:12:40.247045Z",
     "shell.execute_reply": "2025-01-01T03:12:40.246109Z",
     "shell.execute_reply.started": "2025-01-01T03:12:33.077573Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "# Import TensorFlow and related modules\n",
    "import tensorflow as tf\n",
    "import tensorflow.keras as K\n",
    "from tensorflow.keras import Sequential, utils, regularizers, Model, Input\n",
    "from tensorflow.keras.layers import Flatten, Dense, Conv1D, MaxPool1D, Dropout, AvgPool1D"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* tensorflow: a deep-learning framework for building and training neural networks.\n",
    "* keras: TensorFlow's high-level API for quickly defining and training models.\n",
    "* Specific modules and classes imported:\n",
    "  * Sequential: a simple way to build models by stacking layers in order.\n",
    "  * utils: utility functions, e.g. for saving and loading models.\n",
    "  * regularizers: regularization tools to mitigate overfitting.\n",
    "  * Model and Input: for building custom models (more flexible than Sequential).\n",
    "  * Flatten: flattens a multidimensional tensor into one dimension.\n",
    "  * Dense: a fully connected layer.\n",
    "  * Conv1D: a 1-D convolution layer for sequence data (e.g. time series).\n",
    "  * MaxPool1D: 1-D max pooling for downsampling.\n",
    "  * Dropout: regularization that randomly drops units to reduce overfitting.\n",
    "  * AvgPool1D: a 1-D average-pooling layer."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:12:40.248591Z",
     "iopub.status.busy": "2025-01-01T03:12:40.247972Z",
     "iopub.status.idle": "2025-01-01T03:12:40.823459Z",
     "shell.execute_reply": "2025-01-01T03:12:40.822823Z",
     "shell.execute_reply.started": "2025-01-01T03:12:40.248560Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "# Tools for imbalanced data, cross-validation, and label encoding\n",
    "from imblearn.over_sampling import SMOTE\n",
    "from sklearn.model_selection import KFold\n",
    "from sklearn.preprocessing import OneHotEncoder"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* SMOTE (Synthetic Minority Oversampling Technique): synthesizes new minority-class samples.\n",
    "* KFold: cross-validation utility that splits the data into K folds for rotating train/validation.\n",
    "* OneHotEncoder: converts categorical labels (e.g. 0, 1, 2) into one-hot vectors (e.g. [1,0,0])."
   ]
  },
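  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A minimal sketch of these two utilities on toy labels (the notebook applies them to the competition data later):\n",
    "\n",
    "```python\n",
    "# Minimal sketch: one-hot labels and K-fold splits with scikit-learn (toy labels).\n",
    "import numpy as np\n",
    "from sklearn.model_selection import KFold\n",
    "from sklearn.preprocessing import OneHotEncoder\n",
    "\n",
    "labels = np.array([0]*6 + [1]*4 + [2]*2).reshape(-1, 1)   # encoder expects a column vector\n",
    "onehot = OneHotEncoder().fit_transform(labels).toarray()  # dense (12, 3) array, one column per class\n",
    "print(onehot.shape)\n",
    "\n",
    "kf = KFold(n_splits=3, shuffle=True, random_state=0)\n",
    "for fold, (train_idx, val_idx) in enumerate(kf.split(onehot)):\n",
    "    print(f'fold {fold}: train={len(train_idx)} val={len(val_idx)}')\n",
    "```"
   ]
  },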
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Data Loading and Preprocessing"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Load the data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:12:40.824558Z",
     "iopub.status.busy": "2025-01-01T03:12:40.824145Z",
     "iopub.status.idle": "2025-01-01T03:12:46.222783Z",
     "shell.execute_reply": "2025-01-01T03:12:46.221852Z",
     "shell.execute_reply.started": "2025-01-01T03:12:40.824537Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "# Load the training and test sets (Kaggle input paths)\n",
    "train = pd.read_csv('/kaggle/input/herat-competition/train.csv')\n",
    "test = pd.read_csv('/kaggle/input/herat-competition/testA.csv')"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Define the dtype-downcasting (memory reduction) function"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:12:46.225702Z",
     "iopub.status.busy": "2025-01-01T03:12:46.225447Z",
     "iopub.status.idle": "2025-01-01T03:12:46.234412Z",
     "shell.execute_reply": "2025-01-01T03:12:46.233600Z",
     "shell.execute_reply.started": "2025-01-01T03:12:46.225679Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "# Downcast numeric columns to reduce memory usage\n",
    "def reduce_mem_usage(df):\n",
    "    # Memory footprint before optimization\n",
    "    start_mem = df.memory_usage().sum() / 1024**2 \n",
    "    print('Memory usage before optimization: {:.2f} MB'.format(start_mem))\n",
    "    \n",
    "    # Iterate over the feature columns\n",
    "    for col in df.columns:\n",
    "        # Current column dtype\n",
    "        col_type = df[col].dtype\n",
    "        # Numeric columns\n",
    "        if col_type != object:\n",
    "            c_min = df[col].min()  # column minimum\n",
    "            c_max = df[col].max()  # column maximum\n",
    "            # Integer columns: downcast to the smallest type that fits\n",
    "            if str(col_type)[:3] == 'int':\n",
    "                if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:\n",
    "                    df[col] = df[col].astype(np.int8)\n",
    "                elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:\n",
    "                    df[col] = df[col].astype(np.int16)\n",
    "                elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:\n",
    "                    df[col] = df[col].astype(np.int32)\n",
    "                elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:\n",
    "                    df[col] = df[col].astype(np.int64)  \n",
    "            # Float columns: downcast to the smallest type that fits\n",
    "            # (note: float16 keeps only about 3 significant digits)\n",
    "            else:\n",
    "                if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:\n",
    "                    df[col] = df[col].astype(np.float16)\n",
    "                elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:\n",
    "                    df[col] = df[col].astype(np.float32)\n",
    "                else:\n",
    "                    df[col] = df[col].astype(np.float64)\n",
    "        # Non-numeric (object) columns\n",
    "        else:\n",
    "            df[col] = df[col].astype('category')  # object -> category\n",
    "    \n",
    "    # Memory footprint after optimization\n",
    "    end_mem = df.memory_usage().sum() / 1024**2 \n",
    "    print('Memory usage after optimization: {:.2f} MB'.format(end_mem))\n",
    "    print('Reduced by {:.1f}%'.format(100 * (start_mem - end_mem) / start_mem))\n",
    "    \n",
    "    return df"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```\n",
    "function reduce_mem_usage(DataFrame):\n",
    "    1. Compute the initial memory usage start_mem\n",
    "    2. For each column of the DataFrame:\n",
    "        a. Get the column dtype col_type\n",
    "        b. If the column is numeric:\n",
    "            i. Compute the column minimum c_min and maximum c_max\n",
    "            ii. If it is an integer column:\n",
    "                - Downcast to the smallest of int8/int16/int32/int64 that fits the value range\n",
    "            iii. If it is a float column:\n",
    "                - Downcast to the smallest of float16/float32/float64 that fits the value range\n",
    "        c. If the column holds strings:\n",
    "            - Convert it to the category dtype\n",
    "    3. Compute the optimized memory usage end_mem\n",
    "    4. Print the before/after memory usage and the percentage saved\n",
    "    5. Return the optimized DataFrame\n",
    "```"
   ]
  },
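  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "A self-contained toy illustration of the downcasting idea (using pandas' built-in `pd.to_numeric` for the integer case rather than the full function above):\n",
    "\n",
    "```python\n",
    "# Toy demonstration of dtype downcasting and its memory effect.\n",
    "import numpy as np\n",
    "import pandas as pd\n",
    "\n",
    "df = pd.DataFrame({\n",
    "    'small_int': np.arange(100, dtype=np.int64),       # values fit in int8\n",
    "    'signal': np.random.rand(100).astype(np.float64),  # values fit in float16 range\n",
    "})\n",
    "before = df.memory_usage().sum()\n",
    "\n",
    "df['small_int'] = pd.to_numeric(df['small_int'], downcast='integer')  # int64 -> int8\n",
    "df['signal'] = df['signal'].astype(np.float16)  # beware: float16 keeps only ~3 significant digits\n",
    "\n",
    "after = df.memory_usage().sum()\n",
    "print(dict(df.dtypes), before, '->', after, 'bytes')\n",
    "```"
   ]
  },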
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Process and downcast the training set"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:12:46.236359Z",
     "iopub.status.busy": "2025-01-01T03:12:46.236109Z",
     "iopub.status.idle": "2025-01-01T03:12:54.721521Z",
     "shell.execute_reply": "2025-01-01T03:12:54.720578Z",
     "shell.execute_reply.started": "2025-01-01T03:12:46.236333Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "# Training-set feature processing and downcasting\n",
    "train_list = []  # accumulator for parsed rows\n",
    "for items in train.values:\n",
    "    train_list.append([items[0]] + [float(i) for i in items[1].split(',')] + [items[2]])\n",
    "train = pd.DataFrame(np.array(train_list))\n",
    "train.columns = ['id'] + ['s_' + str(i) for i in range(len(train_list[0])-2)] + ['label']  # split the signal into feature columns\n",
    "train = reduce_mem_usage(train)  # downcast dtypes with reduce_mem_usage"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```\n",
    "Procedure: training-set feature processing and downcasting (train)\n",
    "1. Initialize an empty list train_list\n",
    "\n",
    "2. For each row items of train:\n",
    "   a. Keep the first column (id)\n",
    "   b. Split the second column (heartbeat_signals) on commas and convert each value to float\n",
    "   c. Keep the third column (label)\n",
    "   d. Append id + heartbeat_signals + label as one new list to train_list\n",
    "\n",
    "3. Convert train_list to a NumPy array, then to a pandas DataFrame\n",
    "\n",
    "4. Set the DataFrame column names:\n",
    "   a. Name the first column 'id'\n",
    "   b. Name the middle columns 's_0', 's_1', ..., 's_204' for the heartbeat-signal features\n",
    "   c. Name the last column 'label'\n",
    "\n",
    "5. Call reduce_mem_usage to downcast the DataFrame and reduce memory usage\n",
    "\n",
    "6. The result is the processed training DataFrame\n",
    "```\n",
    "\n",
    "------\n",
    "\n",
    "**Before and after**\n",
    "\n",
    "A few rows of the raw training set `train.csv`:\n",
    "| id     | heartbeat_signals             | label |\n",
    "|--------|--------------------------------|-------|\n",
    "| 10001  | 0.1,0.2,0.3,...,0.4           | 0     |\n",
    "| 10002  | 0.5,0.6,0.7,...,0.8           | 1     |\n",
    "\n",
    "The resulting DataFrame `train`:\n",
    "| id     | s_0  | s_1  | s_2  | ... | s_204 | label |\n",
    "|--------|------|------|------|-----|-------|-------|\n",
    "| 10001  | 0.1  | 0.2  | 0.3  | ... | 0.4   | 0     |\n",
    "| 10002  | 0.5  | 0.6  | 0.7  | ... | 0.8   | 1     |\n",
    "\n"
   ]
  },
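  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The row loop above works, but pandas can do the same split in one vectorized call; a sketch on made-up rows that follow the same schema:\n",
    "\n",
    "```python\n",
    "# Vectorized alternative to the row loop: split the signal strings in one call.\n",
    "import pandas as pd\n",
    "\n",
    "raw = pd.DataFrame({\n",
    "    'id': [10001, 10002],\n",
    "    'heartbeat_signals': ['0.1,0.2,0.3', '0.5,0.6,0.7'],  # toy 3-step signals\n",
    "    'label': [0, 1],\n",
    "})\n",
    "\n",
    "signals = raw['heartbeat_signals'].str.split(',', expand=True).astype('float32')\n",
    "signals.columns = ['s_' + str(i) for i in range(signals.shape[1])]\n",
    "train_df = pd.concat([raw[['id']], signals, raw[['label']]], axis=1)\n",
    "print(train_df)\n",
    "```"
   ]
  },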
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Test-set feature processing and memory optimization"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:12:54.722616Z",
     "iopub.status.busy": "2025-01-01T03:12:54.722365Z",
     "iopub.status.idle": "2025-01-01T03:12:56.407888Z",
     "shell.execute_reply": "2025-01-01T03:12:56.406882Z",
     "shell.execute_reply.started": "2025-01-01T03:12:54.722594Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "# Test-set feature processing and downcasting\n",
    "test_list = []\n",
    "for items in test.values:\n",
    "    test_list.append([items[0]] + [float(i) for i in items[1].split(',')])\n",
    "test = pd.DataFrame(np.array(test_list))\n",
    "test.columns = ['id'] + ['s_'+str(i) for i in range(len(test_list[0])-1)]  # split the signal into feature columns\n",
    "test = reduce_mem_usage(test)  # downcast dtypes"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "```\n",
    "1. Initialize an empty list test_list\n",
    "\n",
    "2. For each row items of the test set:\n",
    "   a. Extract the first column (id)\n",
    "   b. Split the second column (heartbeat_signals) on commas and convert each value to float\n",
    "   c. Append id + heartbeat_signals as one new list to test_list\n",
    "\n",
    "3. Convert test_list to a NumPy array, then to a pandas DataFrame\n",
    "\n",
    "4. Set the DataFrame column names:\n",
    "   a. Name the first column 'id'\n",
    "   b. Name the remaining columns 's_0', 's_1', ..., 's_N' for the heartbeat-signal features (N+1 is the signal length)\n",
    "   \n",
    "5. Call reduce_mem_usage to:\n",
    "   a. Downcast column dtypes and reduce memory usage\n",
    "\n",
    "6. The result is the processed test DataFrame\n",
    "```\n",
    "\n",
    "**Before and after**\n",
    "\n",
    "Suppose a few rows of the raw test set `test.csv` look like:\n",
    "| id     | heartbeat_signals             |\n",
    "|--------|--------------------------------|\n",
    "| 20001  | 0.1,0.2,0.3,...,0.4           |\n",
    "| 20002  | 0.5,0.6,0.7,...,0.8           |\n",
    "\n",
    "The resulting DataFrame `test`:\n",
    "| id     | s_0  | s_1  | s_2  | ... | s_204 |\n",
    "|--------|------|------|------|-----|-------|\n",
    "| 20001  | 0.1  | 0.2  | 0.3  | ... | 0.4   |\n",
    "| 20002  | 0.5  | 0.6  | 0.7  | ... | 0.8   |\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Prepare the training and test sets"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:12:56.409123Z",
     "iopub.status.busy": "2025-01-01T03:12:56.408830Z",
     "iopub.status.idle": "2025-01-01T03:12:56.500460Z",
     "shell.execute_reply": "2025-01-01T03:12:56.499555Z",
     "shell.execute_reply.started": "2025-01-01T03:12:56.409098Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "# Training set: separate labels from features, drop id\n",
    "y_train = train['label']\n",
    "x_train = train.drop(['id', 'label'], axis=1)\n",
    "print(x_train.shape, y_train.shape)\n",
    "\n",
    "# Test set: drop id\n",
    "X_test = test.drop(['id'], axis=1)\n",
    "print(X_test.shape)\n",
    "\n",
    "# Reshape the test set for CNN input\n",
    "X_test = np.array(X_test).reshape(X_test.shape[0], X_test.shape[1], 1)\n",
    "print(X_test.shape, X_test.dtype)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "* A convolutional neural network (CNN) expects three-dimensional input of shape (num_samples, num_features, num_channels).\n",
    "* Deep-learning models such as CNNs accept NumPy arrays or tensor-like inputs.\n",
    "* Shape describes the dimensional structure of an array, i.e. its size along each axis.\n",
    "    * 1-D data: shape (n,), n elements.\n",
    "    * 2-D data: shape (m, n), m rows and n columns.\n",
    "    * 3-D data: shape (batch_size, height, width), common for images or sequences.\n",
    "    * 4-D data: shape (batch_size, height, width, channels), used for color-image input.\n",
    "* Grayscale images\n",
    "    * A grayscale image carries no color information; each pixel is a single intensity value.\n",
    "        * Gray values typically range from 0 (black) to 255 (white).\n",
    "        * For comparison: in a color (RGB) image each pixel has 3 values (red, green, blue); a grayscale image has only 1.\n",
    "    * The analogy:\n",
    "        * A grayscale image is 2-D single-channel data.\n",
    "        * A heartbeat signal is 1-D single-channel data.\n",
    "        * Both carry one feature value per position, so the channel count is 1.\n",
    "\n",
    "-----\n",
    "\n",
    "```\n",
    "1. Training set:\n",
    "   a. Extract the target column 'label' from train into y_train\n",
    "   b. Drop 'id' and 'label' from train, keep the feature columns in x_train\n",
    "   c. Print the shapes of the feature matrix and labels (x_train.shape, y_train.shape)\n",
    "\n",
    "2. Test set:\n",
    "   a. Drop 'id' from test, keep the feature columns in X_test\n",
    "   b. Print the shape of the test feature matrix (X_test.shape)\n",
    "\n",
    "3. Reshape the test set for CNN input:\n",
    "   a. Convert X_test to a NumPy array\n",
    "   b. Reshape X_test to (num_samples, num_features, num_channels) with 1 channel\n",
    "   c. Print the resulting shape and dtype (X_test.shape, X_test.dtype)\n",
    "\n",
    "Result: x_train, y_train, X_test\n",
    "```"
   ]
  },
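  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The reshape step in isolation, on toy data:\n",
    "\n",
    "```python\n",
    "# Reshaping 2-D tabular features into CNN input (samples, time steps, channels).\n",
    "import numpy as np\n",
    "\n",
    "X = np.random.rand(8, 205)  # 8 toy samples, 205 time steps\n",
    "X_cnn = X.reshape(X.shape[0], X.shape[1], 1)  # append a single channel axis\n",
    "print(X.shape, '->', X_cnn.shape)  # (8, 205) -> (8, 205, 1)\n",
    "```"
   ]
  },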
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Exploratory Data Analysis"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Basic analysis"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Preview the first 5 rows"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:12:56.501781Z",
     "iopub.status.busy": "2025-01-01T03:12:56.501458Z",
     "iopub.status.idle": "2025-01-01T03:12:56.532606Z",
     "shell.execute_reply": "2025-01-01T03:12:56.531962Z",
     "shell.execute_reply.started": "2025-01-01T03:12:56.501752Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "train.head()  # first 5 rows"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:12:56.533426Z",
     "iopub.status.busy": "2025-01-01T03:12:56.533229Z",
     "iopub.status.idle": "2025-01-01T03:12:56.555504Z",
     "shell.execute_reply": "2025-01-01T03:12:56.554870Z",
     "shell.execute_reply.started": "2025-01-01T03:12:56.533409Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "test.head()  # first 5 rows"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Summary statistics"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:12:56.556426Z",
     "iopub.status.busy": "2025-01-01T03:12:56.556237Z",
     "iopub.status.idle": "2025-01-01T03:12:58.854706Z",
     "shell.execute_reply": "2025-01-01T03:12:58.853782Z",
     "shell.execute_reply.started": "2025-01-01T03:12:56.556410Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "import warnings\n",
    "warnings.filterwarnings('ignore', category=RuntimeWarning)\n",
    "\n",
    "train.describe()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The Python warnings module is used to temporarily silence RuntimeWarning, which describe() can raise when:\n",
    "* the data contain extreme values (very large or very small),\n",
    "* the data contain NaN or Infinity,\n",
    "* the dtype lacks precision and overflows (e.g. float16)."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:12:58.856085Z",
     "iopub.status.busy": "2025-01-01T03:12:58.855758Z",
     "iopub.status.idle": "2025-01-01T03:12:59.529523Z",
     "shell.execute_reply": "2025-01-01T03:12:59.528665Z",
     "shell.execute_reply.started": "2025-01-01T03:12:58.856054Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "test.describe()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Dataset overview"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:12:59.530680Z",
     "iopub.status.busy": "2025-01-01T03:12:59.530368Z",
     "iopub.status.idle": "2025-01-01T03:12:59.545327Z",
     "shell.execute_reply": "2025-01-01T03:12:59.544544Z",
     "shell.execute_reply.started": "2025-01-01T03:12:59.530647Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "train.info()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The info() method reports an overview of the DataFrame, including:\n",
    "* each column's name,\n",
    "* its dtype (e.g. int64, float64, object),\n",
    "* the number of non-null values,\n",
    "* the DataFrame's memory footprint."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:12:59.546346Z",
     "iopub.status.busy": "2025-01-01T03:12:59.546154Z",
     "iopub.status.idle": "2025-01-01T03:12:59.569359Z",
     "shell.execute_reply": "2025-01-01T03:12:59.568597Z",
     "shell.execute_reply.started": "2025-01-01T03:12:59.546322Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "test.info()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Conclusions**\n",
    "\n",
    "From the above:\n",
    "* The core features are 1-D signal amplitudes (already normalized to 0-1), each of length 205 (205 time steps / heartbeat samples).\n",
    "* Apart from the waveform itself there is no auxiliary or prior information to exploit.\n",
    "* The waveforms have been downcast to float16 numeric features, and there are no categorical features to handle.\n",
    "* There are no missing values to impute. In fact, an unsampled signal simply has amplitude 0, so missingness does not arise.\n",
    "* This kind of non-tabular data is clearly better suited to neural networks than to traditional machine-learning models."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Class distribution"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:12:59.570446Z",
     "iopub.status.busy": "2025-01-01T03:12:59.570202Z",
     "iopub.status.idle": "2025-01-01T03:12:59.845722Z",
     "shell.execute_reply": "2025-01-01T03:12:59.844897Z",
     "shell.execute_reply.started": "2025-01-01T03:12:59.570426Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "plt.hist(train['label'], orientation='vertical', histtype='bar', color='red')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Plot a histogram of the label column (target variable) of the training set to visualize the class distribution."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Conclusions**\n",
    "\n",
    "* The histogram shows that class 0 has far more samples than the other classes.\n",
    "* Class imbalance can hurt training, so it needs to be addressed (e.g. oversampling, undersampling, or class weights)."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**SMOTE oversampling of the minority classes worked best:**"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:12:59.846838Z",
     "iopub.status.busy": "2025-01-01T03:12:59.846545Z",
     "iopub.status.idle": "2025-01-01T03:13:11.928781Z",
     "shell.execute_reply": "2025-01-01T03:13:11.928079Z",
     "shell.execute_reply.started": "2025-01-01T03:12:59.846807Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "import warnings\n",
    "warnings.filterwarnings(\"ignore\", category=FutureWarning)\n",
    "\n",
    "# Oversample with SMOTE to address class imbalance\n",
    "smote = SMOTE(random_state=2021, n_jobs=-1)\n",
    "k_x_train, k_y_train = smote.fit_resample(x_train, y_train)  \n",
    "print(f\"after smote, k_x_train.shape: {k_x_train.shape}, k_y_train.shape: {k_y_train.shape}\")\n",
    "# Reshape the training set for CNN input\n",
    "k_x_train = np.array(k_x_train).reshape(k_x_train.shape[0], k_x_train.shape[1], 1)\n",
    "\n",
    "plt.hist(k_y_train, orientation='vertical', histtype='bar', color='green')\n",
    "plt.show()"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Helper Functions"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:13:11.929765Z",
     "iopub.status.busy": "2025-01-01T03:13:11.929566Z",
     "iopub.status.idle": "2025-01-01T03:13:11.933751Z",
     "shell.execute_reply": "2025-01-01T03:13:11.932815Z",
     "shell.execute_reply.started": "2025-01-01T03:13:11.929748Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "# Evaluation metric: sum of absolute errors over all samples and classes\n",
    "def abs_sum(y_pred, y_true):\n",
    "    y_pred = np.array(y_pred)\n",
    "    y_true = np.array(y_true)\n",
    "    loss = np.sum(np.abs(y_pred - y_true))\n",
    "    return loss"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Computes the sum of absolute differences (absolute sum) between the model predictions and the true labels."
   ]
  },
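  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "For example, with one-hot targets and predicted probabilities (the metric is restated so the snippet is self-contained):\n",
    "\n",
    "```python\n",
    "# Self-contained example of the abs_sum metric on one-hot targets.\n",
    "import numpy as np\n",
    "\n",
    "def abs_sum(y_pred, y_true):\n",
    "    y_pred = np.array(y_pred)\n",
    "    y_true = np.array(y_true)\n",
    "    return np.sum(np.abs(y_pred - y_true))\n",
    "\n",
    "y_true = np.array([[1, 0, 0, 0],\n",
    "                   [0, 1, 0, 0]])\n",
    "y_pred = np.array([[0.9, 0.1, 0.0, 0.0],   # nearly certain, small error\n",
    "                   [0.2, 0.6, 0.1, 0.1]])  # less confident, larger error\n",
    "print(abs_sum(y_pred, y_true))  # 0.2 + 0.8 = about 1.0\n",
    "```\n",
    "\n",
    "Lower is better: a perfect one-hot prediction contributes 0, and a confidently wrong one contributes up to 2 per sample."
   ]
  },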
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Model Training and Inference"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Why a simple network**\n",
    "* The features are homogeneous, so a complex network would overfit easily.\n",
    "* Dropout and pooling layers keep the model complexity in check."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Net 1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:13:11.934805Z",
     "iopub.status.busy": "2025-01-01T03:13:11.934566Z",
     "iopub.status.idle": "2025-01-01T03:13:11.952224Z",
     "shell.execute_reply": "2025-01-01T03:13:11.951465Z",
     "shell.execute_reply.started": "2025-01-01T03:13:11.934779Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "class Net1(K.Model):\n",
    "    def __init__(self):\n",
    "        super(Net1, self).__init__()\n",
    "        self.conv1 = Conv1D(filters=16, kernel_size=3, padding='same', activation='relu', input_shape = (205, 1))\n",
    "        self.conv2 = Conv1D(filters=32, kernel_size=3, dilation_rate=2, padding='same', activation='relu')\n",
    "        self.conv3 = Conv1D(filters=64, kernel_size=3, dilation_rate=2, padding='same', activation='relu')\n",
    "        self.conv4 = Conv1D(filters=64, kernel_size=5, dilation_rate=2, padding='same', activation='relu')\n",
    "        self.max_pool1 = MaxPool1D(pool_size=3, strides=2, padding='same')\n",
    "        \n",
    "        self.conv5 = Conv1D(filters=128, kernel_size=5, dilation_rate=2, padding='same', activation='relu')\n",
    "        self.conv6 = Conv1D(filters=128, kernel_size=5, dilation_rate=2, padding='same', activation='relu')\n",
    "        self.max_pool2 = MaxPool1D(pool_size=3, strides=2, padding='same')\n",
    "        \n",
    "        self.dropout = Dropout(0.5)\n",
    "        self.flatten = Flatten()\n",
    "        \n",
    "        self.fc1 = Dense(units=256, activation='relu')\n",
    "        self.fc21 = Dense(units=16, activation='relu')\n",
    "        self.fc22 = Dense(units=256, activation='sigmoid')\n",
    "        self.fc3 = Dense(units=4, activation='softmax')\n",
    "            \n",
    "    def call(self, x):\n",
    "        x = self.conv1(x)\n",
    "        x = self.conv2(x)\n",
    "        x = self.conv3(x)\n",
    "        x = self.conv4(x)\n",
    "        x = self.max_pool1(x)\n",
    "        \n",
    "        x = self.conv5(x)\n",
    "        x = self.conv6(x) \n",
    "        x = self.max_pool2(x)\n",
    "        \n",
    "        x = self.dropout(x)\n",
    "        x = self.flatten(x)\n",
    "        \n",
    "        x1 = self.fc1(x)\n",
    "        x2 = self.fc22(self.fc21(x))\n",
    "        x = self.fc3(x1+x2)\n",
    "        \n",
    "        return x "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "1. Input:\n",
    "   shape (batch_size, 205, 1), a single-channel signal of 205 time steps.\n",
    "2. Convolution layers:\n",
    "   dilated convolutions enlarge the receptive field to capture both local and global temporal features.\n",
    "   Kernel sizes are 3 and 5, with the number of filters increasing layer by layer.\n",
    "3. Pooling layers:\n",
    "   downsample to reduce the feature dimension.\n",
    "4. Dropout:\n",
    "   randomly drops units to prevent overfitting.\n",
    "5. Fully connected layers:\n",
    "    a side branch extracts complementary features; the main branch and the side branch are summed.\n",
    "6. Output layer:\n",
    "    Softmax produces class probabilities for the 4 heartbeat classes.\n",
    "\n",
    "```python\n",
    "# 1. Input of shape (batch_size, 205, 1)\n",
    "\n",
    "# 2. First convolution block\n",
    "x = Conv1D(16, kernel_size=3, activation='relu', padding='same')(input)\n",
    "x = Conv1D(32, kernel_size=3, activation='relu', dilation_rate=2, padding='same')(x)\n",
    "x = Conv1D(64, kernel_size=3, activation='relu', dilation_rate=2, padding='same')(x)\n",
    "x = Conv1D(64, kernel_size=5, activation='relu', dilation_rate=2, padding='same')(x)\n",
    "x = MaxPooling1D(pool_size=3, strides=2, padding='same')(x)  # downsample\n",
    "\n",
    "# 3. Second convolution block\n",
    "x = Conv1D(128, kernel_size=5, activation='relu', dilation_rate=2, padding='same')(x)\n",
    "x = Conv1D(128, kernel_size=5, activation='relu', dilation_rate=2, padding='same')(x)\n",
    "x = MaxPooling1D(pool_size=3, strides=2, padding='same')(x)  # downsample\n",
    "\n",
    "# 4. Dropout against overfitting\n",
    "x = Dropout(rate=0.5)(x)\n",
    "\n",
    "# 5. Fully connected layers\n",
    "x = Flatten()(x)  # flatten the feature maps\n",
    "x1 = Dense(256, activation='relu')(x)  # main branch\n",
    "x2 = Dense(16, activation='relu')(x)  # side branch, stage 1\n",
    "x2 = Dense(256, activation='sigmoid')(x2)  # side branch, stage 2\n",
    "\n",
    "# 6. Merge the branches and classify\n",
    "x = x1 + x2\n",
    "output = Dense(4, activation='softmax')(x)  # 4-class output\n",
    "\n",
    "# 7. Return the output\n",
    "return output\n",
    "\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Net 2"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:13:11.955411Z",
     "iopub.status.busy": "2025-01-01T03:13:11.955192Z",
     "iopub.status.idle": "2025-01-01T03:13:11.973924Z",
     "shell.execute_reply": "2025-01-01T03:13:11.973177Z",
     "shell.execute_reply.started": "2025-01-01T03:13:11.955392Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "class GeMPooling(tf.keras.layers.Layer):\n",
    "    def __init__(self, p=1.0, train_p=False):\n",
    "        super().__init__()\n",
    "        self.eps = 1e-6\n",
    "        self.p = tf.Variable(p, dtype=tf.float32) if train_p else p\n",
    "\n",
    "    def call(self, inputs: tf.Tensor, **kwargs):\n",
    "        inputs = tf.clip_by_value(inputs, clip_value_min=1e-6, clip_value_max=tf.reduce_max(inputs))\n",
    "        inputs = tf.pow(inputs, self.p)\n",
    "        inputs = tf.reduce_mean(inputs, axis=[1], keepdims=False)\n",
    "        inputs = tf.pow(inputs, 1./self.p)\n",
    "        return inputs\n",
    "\n",
    "\n",
    "class Net2(K.Model):\n",
    "    def __init__(self):\n",
    "        super(Net2, self).__init__()\n",
    "        self.conv1 = Conv1D(filters=16, kernel_size=3, padding='same', activation='relu', input_shape = (205, 1))\n",
    "        self.conv2 = Conv1D(filters=32, kernel_size=3, dilation_rate=2, padding='same', activation='relu')\n",
    "        self.conv3 = Conv1D(filters=64, kernel_size=3, dilation_rate=2, padding='same', activation='relu')\n",
    "        self.max_pool1 = MaxPool1D(pool_size=3, strides=2, padding='same')\n",
    "        \n",
    "        self.conv4 = Conv1D(filters=64, kernel_size=5, dilation_rate=2, padding='same', activation='relu')\n",
    "        self.conv5 = Conv1D(filters=128, kernel_size=5, dilation_rate=2, padding='same', activation='relu')\n",
    "        self.max_pool2 = MaxPool1D(pool_size=3, strides=2, padding='same')\n",
    "        \n",
    "        self.conv6 = Conv1D(filters=256, kernel_size=5, dilation_rate=2, padding='same', activation='relu')\n",
    "        self.conv7 = Conv1D(filters=128, kernel_size=7, dilation_rate=2, padding='same', activation='relu')\n",
    "        self.gempool = GeMPooling()\n",
    "        \n",
    "        self.dropout1 = Dropout(0.5)\n",
    "        self.flatten = Flatten()\n",
    "\n",
    "        self.fc1 = Dense(units=256, activation='relu')\n",
    "        self.fc21 = Dense(units=16, activation='relu')\n",
    "        self.fc22 = Dense(units=256, activation='sigmoid')\n",
    "        self.fc3 = Dense(units=4, activation='softmax')\n",
    "\n",
    "    def call(self, x):\n",
    "        x = self.conv1(x)\n",
    "        x = self.conv2(x)\n",
    "        x = self.conv3(x)\n",
    "        x = self.max_pool1(x)\n",
    "        \n",
    "        x = self.conv4(x)\n",
    "        x = self.conv5(x)\n",
    "        x = self.max_pool2(x)\n",
    "        \n",
    "        x = self.conv6(x)\n",
    "        x = self.conv7(x)\n",
    "\n",
    "        x = self.gempool(x)\n",
    "        x = self.dropout1(x)\n",
    "        \n",
    "        x = self.flatten(x)  \n",
    "        x1 = self.fc1(x)\n",
    "        x2 = self.fc22(self.fc21(x))\n",
    "        x = self.fc3(x1 + x2)  \n",
    "        \n",
    "        return x"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**Pipeline summary**\n",
    "1. Input: shape `(batch_size, 205, 1)`.\n",
    "2. Convolutions:\n",
    "   - extract local and global features; dilated convolutions enlarge the receptive field.\n",
    "3. Pooling:\n",
    "   - max pooling reduces the feature dimension and the computational cost.\n",
    "4. GeMPooling:\n",
    "   - a flexible, generalized pooling that aggregates global features.\n",
    "5. Dropout:\n",
    "   - randomly drops units to prevent overfitting.\n",
    "6. Fully connected layers:\n",
    "   - extract high-level features; the branch outputs are merged before classification.\n",
    "\n",
    "---\n",
    "\n",
    "```python\n",
    "# Model initialization\n",
    "class Net2:\n",
    "    def __init__():\n",
    "        # Convolution block 1\n",
    "        Conv1D(filters=16, kernel_size=3, activation='relu')\n",
    "        Conv1D(filters=32, kernel_size=3, dilation_rate=2, activation='relu')\n",
    "        Conv1D(filters=64, kernel_size=3, dilation_rate=2, activation='relu')\n",
    "        MaxPooling1D(pool_size=3, strides=2)\n",
    "\n",
    "        # 定义卷积层组2\n",
    "        Conv1D(filters=64, kernel_size=5, dilation_rate=2, activation='relu')\n",
    "        Conv1D(filters=128, kernel_size=5, dilation_rate=2, activation='relu')\n",
    "        MaxPooling1D(pool_size=3, strides=2)\n",
    "\n",
    "        # 定义卷积层组3\n",
    "        Conv1D(filters=256, kernel_size=5, dilation_rate=2, activation='relu')\n",
    "        Conv1D(filters=128, kernel_size=7, dilation_rate=2, activation='relu')\n",
    "\n",
    "        # 定义 GeMPooling 层\n",
    "        GeMPooling()\n",
    "\n",
    "        # Dropout 层\n",
    "        Dropout(rate=0.5)\n",
    "\n",
    "        # 全连接层\n",
    "        Dense(units=256, activation='relu')\n",
    "        Dense(units=16, activation='relu')  # 分支1\n",
    "        Dense(units=256, activation='sigmoid')  # 分支2\n",
    "        Dense(units=4, activation='softmax')  # 输出层\n",
    "\n",
    "    # 前向传播\n",
    "    def call(input):\n",
    "        # 卷积层组1\n",
    "        x = Conv1D(filters=16, kernel_size=3)(input)\n",
    "        x = Conv1D(filters=32, kernel_size=3, dilation_rate=2)(x)\n",
    "        x = Conv1D(filters=64, kernel_size=3, dilation_rate=2)(x)\n",
    "        x = MaxPooling1D(pool_size=3, strides=2)(x)\n",
    "\n",
    "        # 卷积层组2\n",
    "        x = Conv1D(filters=64, kernel_size=5, dilation_rate=2)(x)\n",
    "        x = Conv1D(filters=128, kernel_size=5, dilation_rate=2)(x)\n",
    "        x = MaxPooling1D(pool_size=3, strides=2)(x)\n",
    "\n",
    "        # 卷积层组3\n",
    "        x = Conv1D(filters=256, kernel_size=5, dilation_rate=2)(x)\n",
    "        x = Conv1D(filters=128, kernel_size=7, dilation_rate=2)(x)\n",
    "\n",
    "        # 使用 GeMPooling\n",
    "        x = GeMPooling()(x)\n",
    "\n",
    "        # Dropout\n",
    "        x = Dropout(rate=0.5)(x)\n",
    "\n",
    "        # 扁平化数据\n",
    "        x = Flatten()(x)\n",
    "\n",
    "        # 全连接层\n",
    "        x1 = Dense(units=256, activation='relu')(x)  # 主分支\n",
    "        x2 = Dense(units=16, activation='relu')(x)  # 分支1\n",
    "        x2 = Dense(units=256, activation='sigmoid')(x2)  # 分支2\n",
    "\n",
    "        # 合并分支并输出\n",
    "        output = Dense(units=4, activation='softmax')(x1 + x2)\n",
    "        return output\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## Net3"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Net3 是一个通过 膨胀卷积 和 多池化策略 增强特征提取能力的 CNN 模型"
   ]
  },
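  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "As a quick aside on why dilated convolutions are stacked here (a standalone sketch, not notebook code): a 1-D convolution with kernel size `k` and dilation rate `d` has an effective kernel size of `k + (k - 1)(d - 1)`, so dilation widens the receptive field without adding parameters.\n",
    "\n",
    "```python\n",
    "# Effective kernel size of a dilated 1-D convolution:\n",
    "# k_eff = k + (k - 1) * (d - 1)\n",
    "def effective_kernel(kernel_size, dilation_rate):\n",
    "    return kernel_size + (kernel_size - 1) * (dilation_rate - 1)\n",
    "\n",
    "# A kernel of 3 with dilation 2 spans as much input as a dense kernel of 5;\n",
    "# a kernel of 5 with dilation 2 spans as much as a dense kernel of 9.\n",
    "print(effective_kernel(3, 2), effective_kernel(5, 2))  # 5 9\n",
    "```"
   ]
  },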
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:13:11.975265Z",
     "iopub.status.busy": "2025-01-01T03:13:11.974983Z",
     "iopub.status.idle": "2025-01-01T03:13:11.992095Z",
     "shell.execute_reply": "2025-01-01T03:13:11.991441Z",
     "shell.execute_reply.started": "2025-01-01T03:13:11.975242Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "class Net3(K.Model): \n",
    "    def __init__(self):\n",
    "        super(Net3, self).__init__()\n",
    "        self.conv1 = Conv1D(filters=16, kernel_size=3, padding='same', activation='relu',input_shape = (205, 1))\n",
    "        self.conv2 = Conv1D(filters=32, kernel_size=3, padding='same', dilation_rate=2, activation='relu')\n",
    "        self.conv3 = Conv1D(filters=64, kernel_size=3, padding='same', dilation_rate=2, activation='relu')\n",
    "        self.conv4 = Conv1D(filters=128, kernel_size=3, padding='same', dilation_rate=2, activation='relu')\n",
    "        self.conv5 = Conv1D(filters=128, kernel_size=5, padding='same', dilation_rate=2, activation='relu')\n",
    "        self.max_pool1 = MaxPool1D(pool_size=3, strides=2, padding='same')\n",
    "        self.avg_pool1 = AvgPool1D(pool_size=3, strides=2, padding='same')\n",
    "        \n",
    "        self.conv6 = Conv1D(filters=128, kernel_size=5, padding='same', dilation_rate=2, activation='relu')\n",
    "        self.conv7 = Conv1D(filters=128, kernel_size=5, padding='same', dilation_rate=2,  activation='relu')\n",
    "        self.max_pool2 = MaxPool1D(pool_size=3, strides=2, padding='same')\n",
    "        self.avg_pool2 = AvgPool1D(pool_size=3, strides=2, padding='same')\n",
    "        \n",
    "        self.dropout = Dropout(0.5)\n",
    "    \n",
    "        self.flatten = Flatten()\n",
    "        \n",
    "        self.fc1 = Dense(units=256, activation='relu')\n",
    "        self.fc21 = Dense(units=16, activation='relu')\n",
    "        self.fc22 = Dense(units=256, activation='sigmoid')\n",
    "        self.fc3 = Dense(units=4, activation='softmax')\n",
    "            \n",
    "    def call(self, x):\n",
    "        x = self.conv1(x)\n",
    "        x = self.conv2(x)\n",
    "        x = self.conv3(x)\n",
    "        x = self.conv4(x)\n",
    "        x = self.conv5(x)\n",
    "        xm1 = self.max_pool1(x)\n",
    "        xa1 = self.avg_pool1(x)\n",
    "        x = tf.concat([xm1, xa1], 2)\n",
    "        \n",
    "        x = self.conv6(x)\n",
    "        x = self.conv7(x) \n",
    "        xm2 = self.max_pool2(x)\n",
    "        xa2 = self.avg_pool2(x)\n",
    "        x = tf.concat([xm2, xa2], 2)\n",
    "        \n",
    "        x = self.dropout(x)\n",
    "        x = self.flatten(x)\n",
    "        \n",
    "        x1 = self.fc1(x)\n",
    "        x2 = self.fc22(self.fc21(x))\n",
    "        x = self.fc3(x1+x2)\n",
    "        \n",
    "        return x "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "**流程总结**\n",
    "1. **输入**：形状为 `(batch_size, 205, 1)` 的时间序列信号。\n",
    "2. **卷积+膨胀卷积**：逐步提取局部和全局特征，扩大感受野。\n",
    "3. **最大池化 + 平均池化融合**：结合不同的池化策略，拼接多种特征。\n",
    "4. **Dropout**：随机丢弃神经元，防止过拟合。\n",
    "5. **全连接层**：通过主分支和分支特征相加，提升模型表现力。\n",
    "6. **输出**：使用 Softmax 输出 4 类分类概率。\n",
    "\n",
    "---\n",
    "\n",
    "```python\n",
    "# 定义 Net3 模型\n",
    "class Net3:\n",
    "    def __init__():\n",
    "        # 第一组卷积层\n",
    "        Conv1D(filters=16, kernel_size=3, activation='relu')  # 提取基础特征\n",
    "        Conv1D(filters=32, kernel_size=3, dilation_rate=2, activation='relu')  # 扩大感受野\n",
    "        Conv1D(filters=64, kernel_size=3, dilation_rate=2, activation='relu')\n",
    "        Conv1D(filters=128, kernel_size=3, dilation_rate=2, activation='relu')\n",
    "        Conv1D(filters=128, kernel_size=5, dilation_rate=2, activation='relu')\n",
    "        MaxPooling1D(pool_size=3, strides=2)  # 最大池化\n",
    "        AvgPooling1D(pool_size=3, strides=2)  # 平均池化\n",
    "\n",
    "        # 第二组卷积层\n",
    "        Conv1D(filters=128, kernel_size=5, dilation_rate=2, activation='relu')  # 提取深层特征\n",
    "        Conv1D(filters=128, kernel_size=5, dilation_rate=2, activation='relu')\n",
    "        MaxPooling1D(pool_size=3, strides=2)  # 最大池化\n",
    "        AvgPooling1D(pool_size=3, strides=2)  # 平均池化\n",
    "\n",
    "        # Dropout 防止过拟合\n",
    "        Dropout(rate=0.5)\n",
    "\n",
    "        # 全连接层\n",
    "        Dense(units=256, activation='relu')  # 主分支\n",
    "        Dense(units=16, activation='relu')  # 分支1\n",
    "        Dense(units=256, activation='sigmoid')  # 分支2\n",
    "        Dense(units=4, activation='softmax')  # 输出层\n",
    "\n",
    "    def call(input):\n",
    "        # 第一组卷积层处理\n",
    "        x = Conv1D(filters=16, kernel_size=3)(input)\n",
    "        x = Conv1D(filters=32, kernel_size=3, dilation_rate=2)(x)\n",
    "        x = Conv1D(filters=64, kernel_size=3, dilation_rate=2)(x)\n",
    "        x = Conv1D(filters=128, kernel_size=3, dilation_rate=2)(x)\n",
    "        x = Conv1D(filters=128, kernel_size=5, dilation_rate=2)(x)\n",
    "\n",
    "        # 第一组池化融合（最大池化 + 平均池化）\n",
    "        xm1 = MaxPooling1D(pool_size=3, strides=2)(x)\n",
    "        xa1 = AvgPooling1D(pool_size=3, strides=2)(x)\n",
    "        x = Concatenate(axis=2)([xm1, xa1])  # 拼接池化结果\n",
    "\n",
    "        # 第二组卷积层处理\n",
    "        x = Conv1D(filters=128, kernel_size=5, dilation_rate=2)(x)\n",
    "        x = Conv1D(filters=128, kernel_size=5, dilation_rate=2)(x)\n",
    "\n",
    "        # 第二组池化融合（最大池化 + 平均池化）\n",
    "        xm2 = MaxPooling1D(pool_size=3, strides=2)(x)\n",
    "        xa2 = AvgPooling1D(pool_size=3, strides=2)(x)\n",
    "        x = Concatenate(axis=2)([xm2, xa2])\n",
    "\n",
    "        # Dropout 防止过拟合\n",
    "        x = Dropout(rate=0.5)(x)\n",
    "\n",
    "        # 全连接层处理\n",
    "        x = Flatten()(x)  # 拉平数据\n",
    "        x1 = Dense(units=256, activation='relu')(x)  # 主分支特征\n",
    "        x2 = Dense(units=16, activation='relu')(x)  # 分支1\n",
    "        x2 = Dense(units=256, activation='sigmoid')(x2)  # 分支2\n",
    "        output = Dense(units=4, activation='softmax')(x1 + x2)  # 分支特征相加后输出分类\n",
    "\n",
    "        return output\n",
    "```"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 训练模型"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:13:11.993082Z",
     "iopub.status.busy": "2025-01-01T03:13:11.992778Z",
     "iopub.status.idle": "2025-01-01T03:33:53.628056Z",
     "shell.execute_reply": "2025-01-01T03:33:53.627242Z",
     "shell.execute_reply.started": "2025-01-01T03:13:11.993060Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "import warnings\n",
    "warnings.filterwarnings('ignore', category=RuntimeWarning)\n",
    "from tensorflow.keras.callbacks import EarlyStopping, LearningRateScheduler\n",
    "\n",
    "# 定义学习率阶梯衰减策略\n",
    "def step_decay(epoch):\n",
    "    \"\"\"\n",
    "    学习率阶梯衰减函数\n",
    "    :param epoch: 当前 epoch\n",
    "    :return: 学习率\n",
    "    \"\"\"\n",
    "    initial_lr = 0.01  # 初始学习率\n",
    "    drop = 0.5        # 每次下降的比例\n",
    "    epochs_drop = 10  # 每隔多少个 epoch 下降一次\n",
    "    lr = initial_lr * (drop ** (epoch // epochs_drop))\n",
    "    return lr\n",
    "\n",
    "# 创建学习率调度器\n",
    "lr_scheduler = LearningRateScheduler(step_decay)\n",
    "\n",
    "# 定义模型训练函数（带学习率阶梯衰减）\n",
    "def train_model(model, x_train, y_train, batch_size=128, epochs=30, validation_split=0.5):\n",
    "    \"\"\"\n",
    "    通用模型训练函数，学习率衰减策略\n",
    "    \"\"\"\n",
    "    # 编译模型\n",
    "    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n",
    "    \n",
    "    # 训练模型，添加回调函数\n",
    "    history = model.fit(\n",
    "        x_train, y_train,\n",
    "        batch_size=batch_size,\n",
    "        epochs=epochs,\n",
    "        validation_split=validation_split,\n",
    "        callbacks=[lr_scheduler]  # 加入学习率衰减\n",
    "    )\n",
    "    return history\n",
    "\n",
    "# 定义模型训练函数2\n",
    "def train_model2(model, x_train, y_train, batch_size=128, epochs=30, validation_split=0.5):\n",
    "    \"\"\"\n",
    "    通用模型训练函数，学习率衰减策略\n",
    "    \"\"\"\n",
    "    # 编译模型\n",
    "    model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])\n",
    "    \n",
    "    # 训练模型，添加回调函数\n",
    "    history = model.fit(\n",
    "        x_train, y_train,\n",
    "        batch_size=batch_size,\n",
    "        epochs=epochs,\n",
    "        validation_split=validation_split,\n",
    "    )\n",
    "    return history\n",
    "\n",
    "# 使用封装函数训练多个模型\n",
    "model1 = Net1()\n",
    "model2 = Net2()\n",
    "model3 = Net3()\n",
    "history1 = train_model(model1, k_x_train, k_y_train)\n",
    "history2 = train_model2(model2, k_x_train, k_y_train)\n",
    "history3 = train_model2(model3, k_x_train, k_y_train)\n"
   ]
  },
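  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The decay schedule above can be sanity-checked without running any training; a standalone sketch using the same constants as the training cell:\n",
    "\n",
    "```python\n",
    "# Step decay: lr = 0.01 * 0.5 ** (epoch // 10),\n",
    "# i.e. the learning rate halves every 10 epochs.\n",
    "def step_decay(epoch):\n",
    "    initial_lr, drop, epochs_drop = 0.01, 0.5, 10\n",
    "    return initial_lr * (drop ** (epoch // epochs_drop))\n",
    "\n",
    "print([step_decay(e) for e in (0, 9, 10, 20)])  # [0.01, 0.01, 0.005, 0.0025]\n",
    "```"
   ]
  },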
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 可视化训练过程"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:33:53.629791Z",
     "iopub.status.busy": "2025-01-01T03:33:53.629470Z",
     "iopub.status.idle": "2025-01-01T03:33:54.846439Z",
     "shell.execute_reply": "2025-01-01T03:33:54.845576Z",
     "shell.execute_reply.started": "2025-01-01T03:33:53.629759Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "import matplotlib.pyplot as plt\n",
    "\n",
    "def plot_training_history(history, title=\"Model Training History\"):\n",
    "    \"\"\"\n",
    "    绘制训练与验证的损失和准确率曲线\n",
    "    \"\"\"\n",
    "    plt.figure(figsize=(12, 5))\n",
    "\n",
    "    # 损失曲线\n",
    "    plt.subplot(1, 2, 1)\n",
    "    plt.plot(history.history['loss'], label='Train Loss')\n",
    "    plt.plot(history.history['val_loss'], label='Validation Loss')\n",
    "    plt.title(f'{title} - Loss')\n",
    "    plt.xlabel('Epochs')\n",
    "    plt.ylabel('Loss')\n",
    "    plt.legend()\n",
    "\n",
    "    # 准确率曲线\n",
    "    plt.subplot(1, 2, 2)\n",
    "    plt.plot(history.history['accuracy'], label='Train Accuracy')\n",
    "    plt.plot(history.history['val_accuracy'], label='Validation Accuracy')\n",
    "    plt.title(f'{title} - Accuracy')\n",
    "    plt.xlabel('Epochs')\n",
    "    plt.ylabel('Accuracy')\n",
    "    plt.legend()\n",
    "\n",
    "    plt.show()\n",
    "\n",
    "# 绘制训练曲线\n",
    "plot_training_history(history1, title=\"Net1 Training History\")\n",
    "plot_training_history(history2, title=\"Net2 Training History\")\n",
    "plot_training_history(history3, title=\"Net3 Training History\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 软投票融合 + 阈值法"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:33:54.847669Z",
     "iopub.status.busy": "2025-01-01T03:33:54.847348Z",
     "iopub.status.idle": "2025-01-01T03:34:00.453855Z",
     "shell.execute_reply": "2025-01-01T03:34:00.453198Z",
     "shell.execute_reply.started": "2025-01-01T03:33:54.847638Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "predictions1 = model1.predict(X_test)\n",
    "predictions2 = model2.predict(X_test)\n",
    "predictions3 = model3.predict(X_test)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:34:00.455160Z",
     "iopub.status.busy": "2025-01-01T03:34:00.454862Z",
     "iopub.status.idle": "2025-01-01T03:34:00.460853Z",
     "shell.execute_reply": "2025-01-01T03:34:00.460087Z",
     "shell.execute_reply.started": "2025-01-01T03:34:00.455137Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "# 平均融合预测结果\n",
    "predictions_weighted = 0.32 * predictions1 + 0.33 * predictions2 + 0.35* predictions3\n",
    "predictions_weighted[:5]"
   ]
  },
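  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Because the weights 0.32 + 0.33 + 0.35 sum to 1, each fused row remains a valid probability distribution. A minimal standalone sketch with dummy predictions (`fake_probs` is a hypothetical stand-in for `model.predict` output):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "\n",
    "def fake_probs(n_samples, n_classes):\n",
    "    \"\"\"Random rows that sum to 1, standing in for model.predict output.\"\"\"\n",
    "    p = rng.random((n_samples, n_classes))\n",
    "    return p / p.sum(axis=1, keepdims=True)\n",
    "\n",
    "p1, p2, p3 = fake_probs(5, 4), fake_probs(5, 4), fake_probs(5, 4)\n",
    "fused = 0.32 * p1 + 0.33 * p2 + 0.35 * p3\n",
    "\n",
    "# Each fused row still sums to 1 because the weights sum to 1\n",
    "print(np.allclose(fused.sum(axis=1), 1.0))  # True\n",
    "```"
   ]
  },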
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:34:00.462063Z",
     "iopub.status.busy": "2025-01-01T03:34:00.461743Z",
     "iopub.status.idle": "2025-01-01T03:34:00.484914Z",
     "shell.execute_reply": "2025-01-01T03:34:00.484195Z",
     "shell.execute_reply.started": "2025-01-01T03:34:00.462034Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "# 准备提交结果\n",
    "submit = pd.DataFrame()\n",
    "submit['id'] = range(100000, 120000)\n",
    "submit['label_0'] = predictions_weighted[:, 0]\n",
    "submit['label_1'] = predictions_weighted[:, 1]\n",
    "submit['label_2'] = predictions_weighted[:, 2]\n",
    "submit['label_3'] = predictions_weighted[:, 3]\n",
    "submit.head()"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:34:00.485783Z",
     "iopub.status.busy": "2025-01-01T03:34:00.485536Z",
     "iopub.status.idle": "2025-01-01T03:34:11.159442Z",
     "shell.execute_reply": "2025-01-01T03:34:11.158585Z",
     "shell.execute_reply.started": "2025-01-01T03:34:00.485764Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "# 第一次后处理未涉及的难样本 index\n",
    "others = []\n",
    "\n",
    "# 第一次后处理 - 将预测概率值大于 0.5 的样本的概率置 1，其余置 0\n",
    "threshold = 0.5  \n",
    "for index, row in submit.iterrows():\n",
    "    row_max = max(list(row[1:]))  # 当前行中的最大类别概率预测值\n",
    "    if row_max > threshold:\n",
    "        for i in range(1, 5):\n",
    "            if row[i] > threshold:\n",
    "                submit.iloc[index, i] = 1  # 大于 0.5 的类别概率预测值置 1\n",
    "            else:\n",
    "                submit.iloc[index, i] = 0  # 其余类别概率预测值置 0\n",
    "    else:\n",
    "        others.append(index)  # 否则，没有类别概率预测值不小于 0.5，加入第一次后处理未涉及的难样本列表，等待第二次后处理\n",
    "        print(index, row)\n",
    "                \n",
    "submit.head(5)"
   ]
  },
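  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The row-by-row loop above can also be written with vectorized NumPy operations; a sketch of the same first pass on a tiny example:\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "probs = np.array([[0.70, 0.10, 0.10, 0.10],   # confident row\n",
    "                  [0.40, 0.30, 0.20, 0.10]])  # hard row, deferred to pass 2\n",
    "threshold = 0.5\n",
    "\n",
    "confident = probs.max(axis=1) > threshold  # rows the first pass resolves\n",
    "probs[confident] = (probs[confident] > threshold).astype(float)\n",
    "others = np.where(~confident)[0]           # indices left for the second pass\n",
    "\n",
    "print(probs[0], others)  # [1. 0. 0. 0.] [1]\n",
    "```"
   ]
  },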
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:34:11.160406Z",
     "iopub.status.busy": "2025-01-01T03:34:11.160202Z",
     "iopub.status.idle": "2025-01-01T03:34:18.198048Z",
     "shell.execute_reply": "2025-01-01T03:34:18.197393Z",
     "shell.execute_reply.started": "2025-01-01T03:34:11.160389Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "# 第二次后处理 - 在预测概率值均不大于 0.5 的样本中，若最大预测值与次大预测值相差大于 0.04，则将最大预测值置 1，其余预测值置 0；\n",
    "#                否则，对最大预测值和次大预测值不处理 (难分类)，仅对其余样本预测值置 0\n",
    "for idx in others:\n",
    "    value = submit.iloc[idx].values[1:]\n",
    "    ordered_value = sorted([(v, j) for j, v in enumerate(value)], reverse=True)  # 根据类别概率预测值大小排序\n",
    "    #print(ordered_value)\n",
    "    if ordered_value[0][0] - ordered_value[1][0] >= 0.04:  # 最大与次大值相差至少 0.04\n",
    "        submit.iloc[idx, ordered_value[0][1]+1] = 1  # 则足够置信最大概率预测值并置为 1\n",
    "        for k in range(1, 4):\n",
    "            submit.iloc[idx, ordered_value[k][1]+1] = 0  # 对非最大的其余三个类别概率预测值置 0\n",
    "    else:\n",
    "        for s in range(2, 4):\n",
    "            submit.iloc[idx, ordered_value[s][1]+1] = 0  # 难分样本，仅对最小的两个类别概率预测值置 0        \n",
    "        \n",
    "    print(submit.iloc[idx])   "
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 保存结果"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2025-01-01T03:34:18.199110Z",
     "iopub.status.busy": "2025-01-01T03:34:18.198824Z",
     "iopub.status.idle": "2025-01-01T03:34:18.246939Z",
     "shell.execute_reply": "2025-01-01T03:34:18.246316Z",
     "shell.execute_reply.started": "2025-01-01T03:34:18.199078Z"
    },
    "trusted": true
   },
   "outputs": [],
   "source": [
    "# 检视最后的预测结果\n",
    "submit.head()\n",
    "# 保存预测结果\n",
    "submit.to_csv((\"submit_\"+datetime.datetime.now().strftime('%Y%m%d_%H%M%S') + \".csv\"), index=False) "
   ]
  }
 ],
 "metadata": {
  "kaggle": {
   "accelerator": "gpu",
   "dataSources": [
    {
     "datasetId": 6382021,
     "sourceId": 10309702,
     "sourceType": "datasetVersion"
    }
   ],
   "dockerImageVersionId": 30823,
   "isGpuEnabled": true,
   "isInternetEnabled": true,
   "language": "python",
   "sourceType": "notebook"
  },
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.12"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
