{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "8a82a769ac88b441",
   "metadata": {},
   "source": [
    "## Downloading the dataset\n",
    "We could download the dataset with either ModelScope or Hugging Face, but downloads from Hugging Face (hosted overseas) are slow from within China, so here we use ModelScope (hosted in China by Alibaba).\n",
    "\n",
    "Dataset page:\n",
    "https://www.modelscope.cn/datasets/DAMO_NLP/jd/quickstart\n",
    "\n",
    "First, install ModelScope:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "1b697d47446e075d",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-24T11:07:24.179933Z",
     "start_time": "2025-06-24T11:06:52.184217Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# !pip install \"modelscope[framework]\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "initial_id",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T00:14:29.407348Z",
     "start_time": "2025-06-25T00:14:24.047866Z"
    }
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-06-25 18:45:31,899 - modelscope - WARNING - Use trust_remote_code=True. Will invoke codes from jd. Please make sure that you can trust the external codes.\n",
      "2025-06-25 18:45:33,201 - modelscope - WARNING - Reusing dataset dataset_builder (/root/.cache/modelscope/hub/datasets/DAMO_NLP/jd/master/data_files)\n",
      "2025-06-25 18:45:33,203 - modelscope - INFO - Generating dataset dataset_builder (/root/.cache/modelscope/hub/datasets/DAMO_NLP/jd/master/data_files)\n",
      "2025-06-25 18:45:33,204 - modelscope - INFO - Reusing cached meta-data file: /root/.cache/modelscope/hub/datasets/DAMO_NLP/jd/master/data_files/3a0b7ca43b11a413d66fb247f31fb16f\n",
      "2025-06-25 18:45:33,571 - modelscope - WARNING - Use trust_remote_code=True. Will invoke codes from jd. Please make sure that you can trust the external codes.\n",
      "2025-06-25 18:45:34,689 - modelscope - WARNING - Reusing dataset dataset_builder (/root/.cache/modelscope/hub/datasets/DAMO_NLP/jd/master/data_files)\n",
      "2025-06-25 18:45:34,690 - modelscope - INFO - Generating dataset dataset_builder (/root/.cache/modelscope/hub/datasets/DAMO_NLP/jd/master/data_files)\n",
      "2025-06-25 18:45:34,691 - modelscope - INFO - Reusing cached meta-data file: /root/.cache/modelscope/hub/datasets/DAMO_NLP/jd/master/data_files/a6da68b5310a529b1be5166a6d78da55\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "(45366, 5032)"
      ]
     },
     "execution_count": 2,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# Download the data with ModelScope\n",
    "from modelscope.msdatasets import MsDataset\n",
    "\n",
    "# Training split\n",
    "ds_train = MsDataset.load('DAMO_NLP/jd', subset_name='default', split='train')\n",
    "\n",
    "# Test split (exposed by the dataset as 'validation')\n",
    "ds_test = MsDataset.load('DAMO_NLP/jd', subset_name='default', split='validation')\n",
    "\n",
    "len(ds_train), len(ds_test)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f351b7fb59fe70c0",
   "metadata": {},
   "source": [
    "Inspect the format of the dataset"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "a972f34a11097ea9",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T00:14:41.485108Z",
     "start_time": "2025-06-25T00:14:41.480870Z"
    }
   },
   "outputs": [],
   "source": [
    "# ds_train[:2]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "10ba7f8d5217f3fe",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T00:14:42.947162Z",
     "start_time": "2025-06-25T00:14:42.915942Z"
    }
   },
   "outputs": [],
   "source": [
    "# ds_train['sentence'][:5]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "86d79d258249c0ec",
   "metadata": {},
   "source": [
    "Check the possible label values; there are only 0 and 1"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "2ac5114eb0fd2828",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T00:14:45.095766Z",
     "start_time": "2025-06-25T00:14:45.079920Z"
    }
   },
   "outputs": [],
   "source": [
    "# set(ds_train['label'])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "de0a9a242f459e2a",
   "metadata": {},
   "source": [
    "## Text vectorization\n",
    "The dataset is Chinese text, but model training needs numbers, so each sentence must be converted into a vector. Here we use a sentence-embedding model available on ModelScope to do the vectorization."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "15f716075dbc8809",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T00:17:33.723052Z",
     "start_time": "2025-06-25T00:17:28.948087Z"
    }
   },
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-06-25 18:45:37,471 - modelscope - WARNING - Model revision not specified, use revision: v1.0.0\n",
      "2025-06-25 18:45:38,447 - modelscope - WARNING - Model revision not specified, use revision: v1.0.0\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Downloading Model from https://www.modelscope.cn to directory: /root/.cache/modelscope/hub/models/iic/nlp_corom_sentence-embedding_chinese-base-ecom\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-06-25 18:45:38,639 - modelscope - INFO - initiate model from /root/.cache/modelscope/hub/models/iic/nlp_corom_sentence-embedding_chinese-base-ecom\n",
      "2025-06-25 18:45:38,640 - modelscope - INFO - initiate model from location /root/.cache/modelscope/hub/models/iic/nlp_corom_sentence-embedding_chinese-base-ecom.\n",
      "2025-06-25 18:45:38,642 - modelscope - INFO - initialize model from /root/.cache/modelscope/hub/models/iic/nlp_corom_sentence-embedding_chinese-base-ecom\n",
      "2025-06-25 18:45:38,997 - modelscope - WARNING - No preprocessor field found in cfg.\n",
      "2025-06-25 18:45:38,998 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.\n",
      "2025-06-25 18:45:39,000 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': '/root/.cache/modelscope/hub/models/iic/nlp_corom_sentence-embedding_chinese-base-ecom'}. trying to build by task and model information.\n",
      "2025-06-25 18:45:39,301 - modelscope - WARNING - No preprocessor field found in cfg.\n",
      "2025-06-25 18:45:39,302 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.\n",
      "2025-06-25 18:45:39,304 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': '/root/.cache/modelscope/hub/models/iic/nlp_corom_sentence-embedding_chinese-base-ecom', 'sequence_length': 128}. trying to build by task and model information.\n"
     ]
    }
   ],
   "source": [
    "# Use ModelScope to turn each sentence into an embedding vector\n",
    "# https://www.modelscope.cn/models/iic/nlp_corom_sentence-embedding_chinese-base-ecom\n",
    "# The model was trained on MultiCPR (e-commerce domain); quality may degrade on text from other verticals, so evaluate it yourself before using it elsewhere\n",
    "\n",
    "from modelscope.pipelines import pipeline\n",
    "from modelscope.utils.constant import Tasks\n",
    "\n",
    "model_id = \"iic/nlp_corom_sentence-embedding_chinese-base-ecom\"\n",
    "pipeline_se = pipeline(Tasks.sentence_embedding, model=model_id)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "ad2b2f404de56d90",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T00:18:02.306841Z",
     "start_time": "2025-06-25T00:18:02.240178Z"
    }
   },
   "outputs": [],
   "source": [
    "# inputs = {\n",
    "#     'source_sentence': [\n",
    "#         \"一百多和三十的也看不出什么区别，包装精美，质量应该不错\",\n",
    "#         \"一百多和三十的也看不出什么区别，包装精美，质量应该不错\"\n",
    "#     ]\n",
    "# }\n",
    "\n",
    "# result = pipeline_se(input=inputs)\n",
    "# print(result)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b63723326769f846",
   "metadata": {},
   "source": [
    "Vectorizing all the training sentences in a single call raises an error"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "d5495896384a41cc",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T00:19:47.747049Z",
     "start_time": "2025-06-25T00:19:47.679424Z"
    }
   },
   "outputs": [],
   "source": [
    "# ds_train['sentence'][:4]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "596be62c7956e967",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T00:20:01.241472Z",
     "start_time": "2025-06-25T00:20:00.794660Z"
    }
   },
   "outputs": [],
   "source": [
    "# inputs = {\n",
    "#     'source_sentence': ds_train['sentence']\n",
    "# }\n",
    "\n",
    "# ds_train_x = pipeline_se(input=inputs)\n",
    "# ds_train_x[:1]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "31b49654afa6939a",
   "metadata": {},
   "source": [
    "Some `sentence` values in the training set are not strings; filter them out"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "5d14ca44cdc6c33f",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T00:23:02.610212Z",
     "start_time": "2025-06-25T00:23:02.210381Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "45366\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "df4d260d1c854e06bb9d54de63de85e4",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Filter:   0%|          | 0/45366 [00:00<?, ? examples/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "45010\n"
     ]
    }
   ],
   "source": [
    "import math\n",
    "\n",
    "ds_train_torch = ds_train.to_torch_dataset()\n",
    "\n",
    "print(len(ds_train_torch))\n",
    "ds_train_torch = ds_train_torch.filter(lambda x: isinstance(x['sentence'], str) and x['label'] is not None and not math.isnan(x['label']))\n",
    "print(len(ds_train_torch))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "33daf012026fa8f9",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T00:23:14.250158Z",
     "start_time": "2025-06-25T00:23:14.241131Z"
    }
   },
   "outputs": [],
   "source": [
    "# ds_train_torch[:5]"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e5f97589538fef6e",
   "metadata": {},
   "source": [
    "Define a custom `Dataset` (`ZhouyuDataset` below) on top of `ds_train_torch`"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "164150db8c2beb81",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T00:24:44.676700Z",
     "start_time": "2025-06-25T00:23:44.063899Z"
    }
   },
   "outputs": [],
   "source": [
    "# inputs = {\n",
    "#     'source_sentence': ds_train_torch['sentence']\n",
    "# }\n",
    "#\n",
    "# ds_train_x = pipeline_se(input=inputs)\n",
    "# ds_train_x[:1]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "1a6a7a7c7d44f670",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T09:02:32.258982Z",
     "start_time": "2025-06-25T09:02:25.383428Z"
    }
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(['一百多和三十的也看不出什么区别，包装精美，质量应该不错。', '质量很好 料子很不错 做工细致 样式好看 穿着很漂亮'],\n",
       " tensor([1., 1.]))"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "from torch.utils.data import Dataset\n",
    "import torch\n",
    "\n",
    "# A custom Dataset wrapping ds_train_torch\n",
    "class ZhouyuDataset(Dataset):\n",
    "    def __init__(self, ds_train_torch):\n",
    "        self.ds_train_torch = ds_train_torch\n",
    "        self.sentences = ds_train_torch['sentence']\n",
    "        self.labels = ds_train_torch['label']\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.sentences)\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "        return self.sentences[idx], self.labels[idx]\n",
    "\n",
    "# Embedding the full dataset is slow on a local machine; subset if needed (e.g. ds_train_torch[:500]). Here we use the full set.\n",
    "train_dataset = ZhouyuDataset(ds_train_torch)\n",
    "train_dataset[:2]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "532b79abedbb0526",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T00:29:08.792992Z",
     "start_time": "2025-06-25T00:29:08.745646Z"
    }
   },
   "outputs": [],
   "source": [
    "# len(train_dataset[0][0][0])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ab0a855656602b32",
   "metadata": {},
   "source": [
    "## Defining the model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "9c8f98d9a8248b3e",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T00:37:45.063300Z",
     "start_time": "2025-06-25T00:37:45.059969Z"
    }
   },
   "outputs": [],
   "source": [
    "# Define the model: a small MLP classifier over 768-d sentence embeddings\n",
    "from torch import nn\n",
    "\n",
    "class Net(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.classifier = nn.Sequential(\n",
    "            nn.Linear(768, 64),\n",
    "            nn.ReLU(),\n",
    "            nn.Linear(64, 2)  # two output classes: labels 0 and 1\n",
    "        )\n",
    "\n",
    "    def forward(self, x):\n",
    "        return self.classifier(x)"
   ]
  },
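  {
   "cell_type": "markdown",
   "id": "b4a5968700e1d2c3",
   "metadata": {},
   "source": [
    "A quick sanity check of the classifier head (a minimal sketch: the input size 768 matches the dimension of the CoROM sentence embeddings used above, and a random tensor stands in for a batch of real embeddings). Logits should come out with shape `(batch, 2)`:\n",
    "```python\n",
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "head = nn.Sequential(nn.Linear(768, 64), nn.ReLU(), nn.Linear(64, 2))\n",
    "x = torch.randn(4, 768)  # a batch of 4 fake sentence embeddings\n",
    "logits = head(x)\n",
    "print(logits.shape)      # torch.Size([4, 2])\n",
    "```"
   ]
  },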
  {
   "cell_type": "markdown",
   "id": "fb0b5c0604aa5322",
   "metadata": {},
   "source": [
    "## Training"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "60c375b2-9c70-4364-a4e1-0252c90d275d",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Embed each batch of raw sentences on the fly with the ModelScope pipeline\n",
    "def collate_fn(batch):\n",
    "    sentences, labels = zip(*batch)\n",
    "\n",
    "    outputs = pipeline_se(input={'source_sentence': list(sentences)})\n",
    "    embeddings = outputs['text_embedding']\n",
    "\n",
    "    embeddings = torch.tensor(embeddings)\n",
    "    labels = torch.tensor(labels)\n",
    "\n",
    "    return embeddings, labels"
   ]
  },
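  {
   "cell_type": "markdown",
   "id": "c7f2a41d83e90b56",
   "metadata": {},
   "source": [
    "The `zip(*batch)` idiom in `collate_fn` transposes a list of `(sentence, label)` pairs into one tuple of sentences and one tuple of labels (a minimal pure-Python sketch with made-up data):\n",
    "```python\n",
    "batch = [(\"good phone\", 1.0), (\"bad phone\", 0.0), (\"ok phone\", 1.0)]\n",
    "sentences, labels = zip(*batch)\n",
    "print(sentences)  # ('good phone', 'bad phone', 'ok phone')\n",
    "print(labels)     # (1.0, 0.0, 1.0)\n",
    "```"
   ]
  },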
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "83ec0fd2b84be38b",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T00:36:26.614029Z",
     "start_time": "2025-06-25T00:36:26.514921Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([64, 768])\n",
      "tensor([[ 0.2236, -0.1391, -0.0967,  ...,  0.1084, -0.0419, -0.1079],\n",
      "        [ 0.3496, -0.2194, -0.0037,  ...,  0.2250,  0.0349,  0.2032],\n",
      "        [-0.0803, -0.4173, -0.0703,  ..., -0.4452, -0.2553,  0.4894],\n",
      "        ...,\n",
      "        [ 0.3169, -0.0444, -0.2258,  ...,  0.0375, -0.1380, -0.0786],\n",
      "        [ 0.0522,  0.4503, -0.2601,  ...,  0.4992, -0.2727, -0.1712],\n",
      "        [-0.0454, -0.0861,  0.0902,  ...,  0.2616, -0.4522, -0.1231]])\n",
      "tensor([1., 1., 1., 0., 1., 1., 1., 1., 0., 0., 0., 0., 1., 1., 0., 1., 1., 1.,\n",
      "        1., 1., 1., 0., 0., 0., 1., 1., 1., 1., 0., 1., 0., 0., 0., 1., 1., 1.,\n",
      "        1., 1., 1., 0., 0., 1., 0., 1., 0., 0., 0., 0., 0., 0., 0., 1., 1., 0.,\n",
      "        0., 0., 1., 1., 0., 0., 0., 1., 0., 0.])\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/root/miniconda3/lib/python3.10/site-packages/transformers/modeling_utils.py:1614: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.\n",
      "  warnings.warn(\n"
     ]
    }
   ],
   "source": [
    "from torch.utils.data import DataLoader\n",
    "import torch\n",
    "\n",
    "train_loader = DataLoader(train_dataset, batch_size=64, shuffle=True, collate_fn=collate_fn)\n",
    "for inputs, labels in train_loader:\n",
    "    print(inputs.shape)\n",
    "    print(inputs)\n",
    "    print(labels)\n",
    "    break"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "333cae9cc7746bd8",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T10:15:10.939633Z",
     "start_time": "2025-06-25T10:15:10.933687Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "cuda\n"
     ]
    }
   ],
   "source": [
    "device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')\n",
    "print(device)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "id": "e61245c07dc2e336",
   "metadata": {
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch [1/10], Step [10/704], Loss: 0.6909\n",
      "Epoch [1/10], Step [20/704], Loss: 0.6899\n",
      "Epoch [1/10], Step [30/704], Loss: 0.6723\n",
      "Epoch [1/10], Step [40/704], Loss: 0.6666\n",
      "Epoch [1/10], Step [50/704], Loss: 0.6606\n",
      "Epoch [1/10], Step [60/704], Loss: 0.6454\n",
      "Epoch [1/10], Step [70/704], Loss: 0.6374\n",
      "Epoch [1/10], Step [80/704], Loss: 0.6291\n",
      "Epoch [1/10], Step [90/704], Loss: 0.6176\n",
      "Epoch [1/10], Step [100/704], Loss: 0.6339\n",
      "Epoch [1/10], Step [110/704], Loss: 0.6056\n",
      "Epoch [1/10], Step [120/704], Loss: 0.6096\n",
      "Epoch [1/10], Step [130/704], Loss: 0.5979\n",
      "Epoch [1/10], Step [140/704], Loss: 0.5748\n",
      "Epoch [1/10], Step [150/704], Loss: 0.5654\n",
      "Epoch [1/10], Step [160/704], Loss: 0.5312\n",
      "Epoch [1/10], Step [170/704], Loss: 0.5441\n",
      "Epoch [1/10], Step [180/704], Loss: 0.5854\n",
      "Epoch [1/10], Step [190/704], Loss: 0.5361\n",
      "Epoch [1/10], Step [200/704], Loss: 0.5286\n",
      "Epoch [1/10], Step [210/704], Loss: 0.4859\n",
      "Epoch [1/10], Step [220/704], Loss: 0.5115\n",
      "Epoch [1/10], Step [230/704], Loss: 0.4710\n",
      "Epoch [1/10], Step [240/704], Loss: 0.4443\n",
      "Epoch [1/10], Step [250/704], Loss: 0.4671\n",
      "Epoch [1/10], Step [260/704], Loss: 0.4879\n",
      "Epoch [1/10], Step [270/704], Loss: 0.4266\n",
      "Epoch [1/10], Step [280/704], Loss: 0.4794\n",
      "Epoch [1/10], Step [290/704], Loss: 0.5328\n",
      "Epoch [1/10], Step [300/704], Loss: 0.4001\n",
      "Epoch [1/10], Step [310/704], Loss: 0.4479\n",
      "Epoch [1/10], Step [320/704], Loss: 0.4361\n",
      "Epoch [1/10], Step [330/704], Loss: 0.4259\n",
      "Epoch [1/10], Step [340/704], Loss: 0.4362\n",
      "Epoch [1/10], Step [350/704], Loss: 0.4223\n",
      "Epoch [1/10], Step [360/704], Loss: 0.3701\n",
      "Epoch [1/10], Step [370/704], Loss: 0.4023\n",
      "Epoch [1/10], Step [380/704], Loss: 0.3663\n",
      "Epoch [1/10], Step [390/704], Loss: 0.4774\n",
      "Epoch [1/10], Step [400/704], Loss: 0.3737\n",
      "Epoch [1/10], Step [410/704], Loss: 0.3118\n",
      "Epoch [1/10], Step [420/704], Loss: 0.4258\n",
      "Epoch [1/10], Step [430/704], Loss: 0.3947\n",
      "Epoch [1/10], Step [440/704], Loss: 0.4328\n",
      "Epoch [1/10], Step [450/704], Loss: 0.4378\n",
      "Epoch [1/10], Step [460/704], Loss: 0.3253\n",
      "Epoch [1/10], Step [470/704], Loss: 0.3842\n",
      "Epoch [1/10], Step [480/704], Loss: 0.3565\n",
      "Epoch [1/10], Step [490/704], Loss: 0.4218\n",
      "Epoch [1/10], Step [500/704], Loss: 0.3646\n",
      "Epoch [1/10], Step [510/704], Loss: 0.3422\n",
      "Epoch [1/10], Step [520/704], Loss: 0.3753\n",
      "Epoch [1/10], Step [530/704], Loss: 0.2942\n",
      "Epoch [1/10], Step [540/704], Loss: 0.4633\n",
      "Epoch [1/10], Step [550/704], Loss: 0.3348\n",
      "Epoch [1/10], Step [560/704], Loss: 0.3394\n",
      "Epoch [1/10], Step [570/704], Loss: 0.3923\n",
      "Epoch [1/10], Step [580/704], Loss: 0.4505\n",
      "Epoch [1/10], Step [590/704], Loss: 0.3230\n",
      "Epoch [1/10], Step [600/704], Loss: 0.2680\n",
      "Epoch [1/10], Step [610/704], Loss: 0.3822\n",
      "Epoch [1/10], Step [620/704], Loss: 0.2687\n",
      "Epoch [1/10], Step [630/704], Loss: 0.3717\n",
      "Epoch [1/10], Step [640/704], Loss: 0.3537\n",
      "Epoch [1/10], Step [650/704], Loss: 0.3818\n",
      "Epoch [1/10], Step [660/704], Loss: 0.3270\n",
      "Epoch [1/10], Step [670/704], Loss: 0.3489\n",
      "Epoch [1/10], Step [680/704], Loss: 0.4636\n",
      "Epoch [1/10], Step [690/704], Loss: 0.2468\n",
      "Epoch [1/10], Step [700/704], Loss: 0.3903\n"
     ]
    }
   ],
   "source": [
    "model = Net()\n",
    "model.to(device)  # move parameters to the GPU if one is available\n",
    "\n",
    "# CPU computes against main memory (RAM); GPU computes against video memory (VRAM)\n",
    "\n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=0.01)\n",
    "criterion = nn.CrossEntropyLoss()\n",
    "\n",
    "num_epochs = 1\n",
    "for epoch in range(num_epochs):\n",
    "    for i, (inputs, labels) in enumerate(train_loader):\n",
    "        inputs = inputs.to(device)\n",
    "        labels = labels.to(device)\n",
    "\n",
    "        inputs = inputs.view(inputs.shape[0], -1)\n",
    "        outputs = model(inputs)\n",
    "        loss = criterion(outputs, labels.long())\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "        if (i + 1) % 10 == 0:\n",
    "            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'\n",
    "                  .format(epoch + 1, num_epochs, i + 1, len(train_loader), loss.item()))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "53704779d9f7b39e",
   "metadata": {},
   "source": [
    "## Evaluating on the test set"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "96f0131bc16e5367",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T00:43:24.213166Z",
     "start_time": "2025-06-25T00:43:15.232278Z"
    }
   },
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "c25061237c904efeacd2aeae22dddeb4",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Filter:   0%|          | 0/5032 [00:00<?, ? examples/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Test accuracy: 88.8 %\n"
     ]
    }
   ],
   "source": [
    "# Evaluate on the test set\n",
    "\n",
    "# Filter the data in the same way as the training set\n",
    "ds_test_torch = ds_test.to_torch_dataset()\n",
    "ds_test_torch = ds_test_torch.filter(lambda x: isinstance(x['sentence'], str) and x['label'] is not None and not math.isnan(x['label']))\n",
    "\n",
    "# Take the first 1,000 examples for testing\n",
    "test_dataset = ZhouyuDataset(ds_test_torch[:1000])\n",
    "test_loader = DataLoader(test_dataset, batch_size=8, shuffle=False, collate_fn=collate_fn)\n",
    "\n",
    "with torch.no_grad():\n",
    "    correct = 0\n",
    "    total = 0\n",
    "    for inputs, labels in test_loader:\n",
    "        inputs = inputs.to(device)\n",
    "        labels = labels.to(device)\n",
    "        inputs = inputs.view(inputs.shape[0], -1)\n",
    "        outputs = model(inputs)\n",
    "        _, predicted = torch.max(outputs.data, 1)\n",
    "        total += labels.size(0)\n",
    "        matches = (predicted == labels)\n",
    "        correct += matches.sum().item()\n",
    "\n",
    "    print('Test accuracy: {} %'.format(100 * correct / total))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "afd07f2d6a3b3d6f",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T00:43:04.091626Z",
     "start_time": "2025-06-25T00:43:04.039432Z"
    }
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[ 1.0258, -1.1980]], device='cuda:0', grad_fn=<AddmmBackward0>)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "tensor([0], device='cuda:0')"
      ]
     },
     "execution_count": 27,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# test = \"这个手机很好\"\n",
    "# test = \"这个手机很烂\"\n",
    "# test = \"这个手机很不好\"\n",
    "# test = \"这个手机非常非常不好\"\n",
    "test = \"这个手机不好\"\n",
    "inputs = torch.from_numpy(pipeline_se(input={'source_sentence': [test]})['text_embedding'])\n",
    "inputs = inputs.to(device)\n",
    "outputs = model(inputs)\n",
    "print(outputs)\n",
    "values, predicted = torch.max(outputs.data, 1)\n",
    "predicted"
   ]
  },
  {
   "metadata": {},
   "cell_type": "markdown",
   "source": [
    "## The Adam optimizer\n",
    "\n",
    "Adam (Adaptive Moment Estimation) is an optimization algorithm, essentially an upgraded version of SGD: it combines the strengths of AdaGrad and RMSProp, adapts the learning rate automatically, and incorporates gradient history (momentum) when updating parameters, which helps keep updates stable."
   ],
   "id": "e51c0cdbd398e81f"
  }
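,
  {
   "cell_type": "markdown",
   "id": "d2e3f4a5b6c70819",
   "metadata": {},
   "source": [
    "Swapping the SGD optimizer used above for Adam is a one-line change (a minimal sketch: `nn.Linear(768, 2)` stands in for the `Net()` defined earlier, and `lr=1e-3` is Adam's common default, not a tuned value):\n",
    "```python\n",
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "model = nn.Linear(768, 2)  # stand-in for the Net() defined earlier\n",
    "optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)\n",
    "\n",
    "x = torch.randn(8, 768)\n",
    "y = torch.randint(0, 2, (8,))\n",
    "loss = nn.CrossEntropyLoss()(model(x), y)\n",
    "optimizer.zero_grad()\n",
    "loss.backward()\n",
    "optimizer.step()  # parameters updated with per-parameter adaptive step sizes\n",
    "```"
   ]
  }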
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.8"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
