{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "8a82a769ac88b441",
   "metadata": {},
   "source": [
     "## Downloading the dataset\n",
     "We can download the dataset from either ModelScope or Hugging Face, but Hugging Face (hosted overseas) is slow to download from within China, so here we use ModelScope (hosted in China by Alibaba).\n",
     "\n",
     "Dataset page:\n",
     "https://www.modelscope.cn/datasets/DAMO_NLP/jd/quickstart\n",
     "\n",
     "First, install ModelScope."
   ]
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T10:53:47.580154Z",
     "start_time": "2025-06-25T10:53:47.577036Z"
    }
   },
   "cell_type": "code",
   "source": "# !pip install \"modelscope[framework]\"",
   "id": "1b697d47446e075d",
   "outputs": [],
   "execution_count": 1
  },
  {
   "cell_type": "code",
   "id": "initial_id",
   "metadata": {
    "collapsed": true,
    "jupyter": {
     "outputs_hidden": true
    },
    "ExecuteTime": {
     "end_time": "2025-07-03T02:23:33.994213Z",
     "start_time": "2025-07-03T02:23:29.658002Z"
    }
   },
   "source": [
     "# Download the data with ModelScope\n",
     "from modelscope.msdatasets import MsDataset\n",
     "\n",
     "# Training split\n",
     "# In teacher Zhou Yu's lesson, cache_dir was not specified:\n",
     "# ds_train = MsDataset.load('DAMO_NLP/jd', subset_name='default', split='train')\n",
     "# On Windows it is recommended to set cache_dir manually to avoid overly long paths\n",
     "ds_train = MsDataset.load('DAMO_NLP/jd', subset_name='default', split='train', cache_dir='./DAMO_NLP-jd-train')\n",
     "\n",
     "# Test split (here we use the validation split as a held-out test set)\n",
     "# ds_test = MsDataset.load('DAMO_NLP/jd', subset_name='default', split='validation')\n",
     "ds_test = MsDataset.load('DAMO_NLP/jd', subset_name='default', split='validation', cache_dir='./DAMO_NLP-jd-validation')\n",
    "\n",
    "len(ds_train), len(ds_test)"
   ],
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-07-03 10:23:29,663 - modelscope - WARNING - Use trust_remote_code=True. Will invoke codes from jd. Please make sure that you can trust the external codes.\n",
      "2025-07-03 10:23:31,322 - modelscope - WARNING - Reusing dataset dataset_builder (./DAMO_NLP-jd-train/DAMO_NLP/jd/master/data_files)\n",
      "2025-07-03 10:23:31,322 - modelscope - INFO - Generating dataset dataset_builder (./DAMO_NLP-jd-train/DAMO_NLP/jd/master/data_files)\n",
      "2025-07-03 10:23:31,323 - modelscope - INFO - Reusing cached meta-data file: ./DAMO_NLP-jd-train/DAMO_NLP/jd/master/data_files/3a0b7ca43b11a413d66fb247f31fb16f\n",
      "2025-07-03 10:23:31,689 - modelscope - WARNING - Use trust_remote_code=True. Will invoke codes from jd. Please make sure that you can trust the external codes.\n",
      "2025-07-03 10:23:33,558 - modelscope - WARNING - Reusing dataset dataset_builder (./DAMO_NLP-jd-validation/DAMO_NLP/jd/master/data_files)\n",
      "2025-07-03 10:23:33,559 - modelscope - INFO - Generating dataset dataset_builder (./DAMO_NLP-jd-validation/DAMO_NLP/jd/master/data_files)\n",
      "2025-07-03 10:23:33,559 - modelscope - INFO - Reusing cached meta-data file: ./DAMO_NLP-jd-validation/DAMO_NLP/jd/master/data_files/a6da68b5310a529b1be5166a6d78da55\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "(45366, 5032)"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 20
  },
  {
   "cell_type": "markdown",
   "id": "f351b7fb59fe70c0",
   "metadata": {},
   "source": [
     "Inspect the format of the dataset"
   ]
  },
  {
   "cell_type": "code",
   "id": "a972f34a11097ea9",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T10:53:57.837762Z",
     "start_time": "2025-06-25T10:53:57.833507Z"
    }
   },
   "source": "ds_train[:2]",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'sentence': ['一百多和三十的也看不出什么区别，包装精美，质量应该不错。', '质量很好 料子很不错 做工细致 样式好看 穿着很漂亮'],\n",
       " 'label': [1.0, 1.0],\n",
       " 'dataset': ['jd', 'jd']}"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 3
  },
  {
   "cell_type": "code",
   "id": "10ba7f8d5217f3fe",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T10:53:57.876368Z",
     "start_time": "2025-06-25T10:53:57.848776Z"
    }
   },
   "source": "ds_train['sentence'][:5]",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['一百多和三十的也看不出什么区别，包装精美，质量应该不错。',\n",
       " '质量很好 料子很不错 做工细致 样式好看 穿着很漂亮',\n",
       " ' 会卷的    建议买大的小的会卷   胖就别买了       没用',\n",
       " '大差了  布料很差  我也不想多说',\n",
       " '一点也不好，我买的东西拿都拿到快递员自己签收了还不给我，恶心恶心恶心，不要脸不要脸']"
      ]
     },
     "execution_count": 4,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 4
  },
  {
   "cell_type": "markdown",
   "id": "86d79d258249c0ec",
   "metadata": {},
   "source": [
     "Check the possible label values. Besides 0.0 and 1.0, the set also contains None, which we will filter out below."
   ]
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T10:53:57.898958Z",
     "start_time": "2025-06-25T10:53:57.886676Z"
    }
   },
   "cell_type": "code",
   "source": "set(ds_train['label'])",
   "id": "2ac5114eb0fd2828",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{0.0, 1.0, None}"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 5
  },
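  {
   "cell_type": "markdown",
   "id": "count-bad-labels-md",
   "metadata": {},
   "source": [
    "A small helper (not in the original lesson) makes it easy to count how many labels are None or NaN; these are among the rows the filter below removes (it also drops non-string sentences)."
   ]
  },
  {
   "cell_type": "code",
   "id": "count-bad-labels",
   "metadata": {},
   "source": [
    "import math\n",
    "\n",
    "def count_bad_labels(labels):\n",
    "    # Labels that are None or NaN cannot be used for training\n",
    "    return sum(1 for l in labels if l is None or (isinstance(l, float) and math.isnan(l)))\n",
    "\n",
    "# count_bad_labels(ds_train['label'])"
   ],
   "outputs": [],
   "execution_count": null
  },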
  {
   "cell_type": "markdown",
   "id": "de0a9a242f459e2a",
   "metadata": {},
   "source": [
     "## Text vectorization\n",
     "The dataset is in Chinese, but model training needs numbers, so we must turn each Chinese sentence into a vector. Here we use a sentence-embedding model hosted on ModelScope to vectorize the text."
   ]
  },
  {
   "cell_type": "code",
   "id": "15f716075dbc8809",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T10:54:03.397367Z",
     "start_time": "2025-06-25T10:53:57.909206Z"
    }
   },
   "source": [
     "# Use ModelScope to turn each sentence into an embedding vector\n",
     "# https://www.modelscope.cn/models/iic/nlp_corom_sentence-embedding_chinese-base-ecom\n",
     "# This model was trained on MultiCPR (e-commerce domain); quality may degrade on text from other domains, so evaluate it yourself before using it elsewhere\n",
    "\n",
    "from modelscope.pipelines import pipeline\n",
    "from modelscope.utils.constant import Tasks\n",
    "\n",
    "model_id = \"iic/nlp_corom_sentence-embedding_chinese-base-ecom\"\n",
    "pipeline_se = pipeline(Tasks.sentence_embedding, model=model_id)"
   ],
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-06-25 18:54:01,392 - modelscope - WARNING - Model revision not specified, use revision: v1.0.0\n",
      "2025-06-25 18:54:02,847 - modelscope - WARNING - Model revision not specified, use revision: v1.0.0\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Downloading Model from https://www.modelscope.cn to directory: /Users/dadudu/.cache/modelscope/hub/models/iic/nlp_corom_sentence-embedding_chinese-base-ecom\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "2025-06-25 18:54:03,092 - modelscope - INFO - initiate model from /Users/dadudu/.cache/modelscope/hub/models/iic/nlp_corom_sentence-embedding_chinese-base-ecom\n",
      "2025-06-25 18:54:03,093 - modelscope - INFO - initiate model from location /Users/dadudu/.cache/modelscope/hub/models/iic/nlp_corom_sentence-embedding_chinese-base-ecom.\n",
      "2025-06-25 18:54:03,096 - modelscope - INFO - initialize model from /Users/dadudu/.cache/modelscope/hub/models/iic/nlp_corom_sentence-embedding_chinese-base-ecom\n",
      "2025-06-25 18:54:03,337 - modelscope - WARNING - No preprocessor field found in cfg.\n",
      "2025-06-25 18:54:03,337 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.\n",
      "2025-06-25 18:54:03,338 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': '/Users/dadudu/.cache/modelscope/hub/models/iic/nlp_corom_sentence-embedding_chinese-base-ecom'}. trying to build by task and model information.\n",
      "2025-06-25 18:54:03,379 - modelscope - INFO - cuda is not available, using cpu instead.\n",
      "2025-06-25 18:54:03,380 - modelscope - WARNING - No preprocessor field found in cfg.\n",
      "2025-06-25 18:54:03,380 - modelscope - WARNING - No val key and type key found in preprocessor domain of configuration.json file.\n",
      "2025-06-25 18:54:03,381 - modelscope - WARNING - Cannot find available config to build preprocessor at mode inference, current config: {'model_dir': '/Users/dadudu/.cache/modelscope/hub/models/iic/nlp_corom_sentence-embedding_chinese-base-ecom', 'sequence_length': 128}. trying to build by task and model information.\n"
     ]
    }
   ],
   "execution_count": 6
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T10:54:03.839645Z",
     "start_time": "2025-06-25T10:54:03.765698Z"
    }
   },
   "cell_type": "code",
   "source": [
    "inputs = {\n",
    "    'source_sentence': [\n",
    "        \"一百多和三十的也看不出什么区别，包装精美，质量应该不错\",\n",
    "        \"一百多和三十的也看不出什么区别，包装精美，质量应该不错\"\n",
    "    ]\n",
    "}\n",
    "\n",
    "result = pipeline_se(input=inputs)\n",
    "print(result)"
   ],
   "id": "ad2b2f404de56d90",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{'text_embedding': array([[ 0.4156133 , -0.23202324, -0.11819676, ...,  0.04327501,\n",
      "        -0.42135528, -0.16097867],\n",
      "       [ 0.4156133 , -0.23202324, -0.11819676, ...,  0.04327501,\n",
      "        -0.42135528, -0.16097867]], shape=(2, 768), dtype=float32), 'scores': []}\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "/Users/dadudu/miniconda3/envs/mini-gpt/lib/python3.10/site-packages/transformers/modeling_utils.py:1614: FutureWarning: The `device` argument is deprecated and will be removed in v5 of Transformers.\n",
      "  warnings.warn(\n"
     ]
    }
   ],
   "execution_count": 7
  },
  {
   "cell_type": "markdown",
   "id": "b63723326769f846",
   "metadata": {},
   "source": [
     "Vectorizing all of the training sentences in a single pipeline call raises an error"
   ]
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T10:54:06.053527Z",
     "start_time": "2025-06-25T10:54:06.004136Z"
    }
   },
   "cell_type": "code",
   "source": "ds_train['sentence'][:4]",
   "id": "d5495896384a41cc",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "['一百多和三十的也看不出什么区别，包装精美，质量应该不错。',\n",
       " '质量很好 料子很不错 做工细致 样式好看 穿着很漂亮',\n",
       " ' 会卷的    建议买大的小的会卷   胖就别买了       没用',\n",
       " '大差了  布料很差  我也不想多说']"
      ]
     },
     "execution_count": 8,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 8
  },
  {
   "cell_type": "code",
   "id": "596be62c7956e967",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T10:54:06.065472Z",
     "start_time": "2025-06-25T10:54:06.063853Z"
    }
   },
   "source": [
    "# inputs = {\n",
    "#     'source_sentence': ds_train['sentence']\n",
    "# }\n",
    "#\n",
    "# ds_train_x = pipeline_se(input=inputs)\n",
    "# ds_train_x[:1]"
   ],
   "outputs": [],
   "execution_count": 9
  },
  {
   "cell_type": "markdown",
   "id": "31b49654afa6939a",
   "metadata": {},
    "source": "Some sentence entries in the training set are not strings, and some labels are missing; filter them out"
  },
  {
   "cell_type": "code",
   "id": "5d14ca44cdc6c33f",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T10:54:06.501469Z",
     "start_time": "2025-06-25T10:54:06.079556Z"
    }
   },
   "source": [
    "import math\n",
    "\n",
    "ds_train_torch = ds_train.to_torch_dataset()\n",
    "\n",
    "print(len(ds_train_torch))\n",
    "ds_train_torch = ds_train_torch.filter(lambda x: isinstance(x['sentence'], str) and x['label'] is not None and not math.isnan(x['label']))\n",
    "print(len(ds_train_torch))"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "45366\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Filter: 100%|██████████| 45366/45366 [00:00<00:00, 109017.49 examples/s]"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "45010\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n"
     ]
    }
   ],
   "execution_count": 10
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T10:54:06.522057Z",
     "start_time": "2025-06-25T10:54:06.516190Z"
    }
   },
   "cell_type": "code",
   "source": "ds_train_torch[:5]",
   "id": "33daf012026fa8f9",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "{'sentence': ['一百多和三十的也看不出什么区别，包装精美，质量应该不错。',\n",
       "  '质量很好 料子很不错 做工细致 样式好看 穿着很漂亮',\n",
       "  ' 会卷的    建议买大的小的会卷   胖就别买了       没用',\n",
       "  '大差了  布料很差  我也不想多说',\n",
       "  '一点也不好，我买的东西拿都拿到快递员自己签收了还不给我，恶心恶心恶心，不要脸不要脸'],\n",
       " 'label': tensor([1., 1., 0., 0., 0.]),\n",
       " 'dataset': ['jd', 'jd', 'jd', 'jd', 'jd']}"
      ]
     },
     "execution_count": 11,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 11
  },
  {
   "cell_type": "markdown",
   "id": "e5f97589538fef6e",
   "metadata": {},
   "source": [
     "Define a custom MyDataset on top of ds_train_torch"
   ]
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T10:54:06.537774Z",
     "start_time": "2025-06-25T10:54:06.534127Z"
    }
   },
   "cell_type": "code",
   "source": [
    "# inputs = {\n",
    "#     'source_sentence': ds_train_torch['sentence']\n",
    "# }\n",
    "#\n",
    "# ds_train_x = pipeline_se(input=inputs)\n",
    "# ds_train_x[:1]"
   ],
   "id": "164150db8c2beb81",
   "outputs": [],
   "execution_count": 12
  },
  {
   "cell_type": "code",
   "id": "1a6a7a7c7d44f670",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T10:54:06.608363Z",
     "start_time": "2025-06-25T10:54:06.552491Z"
    }
   },
   "source": [
    "from torch.utils.data import Dataset\n",
    "import torch\n",
    "\n",
     "# A custom Dataset built on top of ds_train_torch\n",
    "class MyDataset(Dataset):\n",
    "    def __init__(self, ds_train_torch):\n",
    "        self.ds_train_torch = ds_train_torch\n",
    "        self.pipeline_se = pipeline_se\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.ds_train_torch['sentence'])\n",
    "\n",
    "    def __getitem__(self, idx):\n",
    "\n",
    "        sentences = self.ds_train_torch['sentence'][idx]\n",
    "\n",
    "        if not isinstance(sentences, list):\n",
    "            sentences = [sentences]\n",
    "\n",
    "        labels = self.ds_train_torch['label'][idx]\n",
    "\n",
    "        outputs = self.pipeline_se(input={'source_sentence': sentences})\n",
    "        embeddings = outputs['text_embedding']\n",
    "        return embeddings, labels\n",
    "\n",
     "# Embedding the full dataset on a local machine is far too slow, so train on only the first 200 examples\n",
    "train_dataset = MyDataset(ds_train_torch[:200])\n",
    "train_dataset[:2]"
   ],
   "outputs": [
    {
     "data": {
      "text/plain": [
       "(array([[ 0.41714746, -0.20678371, -0.15620907, ...,  0.09613244,\n",
       "         -0.41316763, -0.12344889],\n",
       "        [ 0.18679819, -0.18488356, -0.33604681, ...,  0.09035993,\n",
       "         -0.12850879,  0.09054327]], shape=(2, 768), dtype=float32),\n",
       " tensor([1., 1.]))"
      ]
     },
     "execution_count": 13,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 13
  },
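  {
   "cell_type": "markdown",
   "id": "precompute-embeddings-md",
   "metadata": {},
   "source": [
    "Note that `MyDataset.__getitem__` calls the embedding pipeline on every access, so each epoch re-embeds every sentence. A faster alternative, sketched below and reusing the `pipeline_se` defined above, is to precompute all embeddings once, in batches, and index into the cached array during training."
   ]
  },
  {
   "cell_type": "code",
   "id": "precompute-embeddings",
   "metadata": {},
   "source": [
    "import numpy as np\n",
    "\n",
    "def precompute_embeddings(sentences, batch_size=64):\n",
    "    # Embed sentences in batches with the ModelScope pipeline and cache the result\n",
    "    chunks = []\n",
    "    for i in range(0, len(sentences), batch_size):\n",
    "        out = pipeline_se(input={'source_sentence': sentences[i:i + batch_size]})\n",
    "        chunks.append(out['text_embedding'])\n",
    "    return np.concatenate(chunks, axis=0)\n",
    "\n",
    "# Example (uncomment to run):\n",
    "# train_x = precompute_embeddings(ds_train_torch['sentence'][:200])"
   ],
   "outputs": [],
   "execution_count": null
  },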
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T10:54:06.670165Z",
     "start_time": "2025-06-25T10:54:06.621862Z"
    }
   },
   "cell_type": "code",
   "source": "len(train_dataset[0][0][0])",
   "id": "532b79abedbb0526",
   "outputs": [
    {
     "data": {
      "text/plain": [
       "768"
      ]
     },
     "execution_count": 14,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 14
  },
  {
   "cell_type": "markdown",
   "id": "ab0a855656602b32",
   "metadata": {},
   "source": [
     "## Define the model"
   ]
  },
  {
   "cell_type": "code",
   "id": "9c8f98d9a8248b3e",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T10:54:06.684917Z",
     "start_time": "2025-06-25T10:54:06.682261Z"
    }
   },
   "source": [
     "# Define the model\n",
    "from torch import nn\n",
    "\n",
    "class Net(nn.Module):\n",
    "    def __init__(self):\n",
    "        super().__init__()\n",
    "        self.classifier = nn.Sequential(\n",
    "            nn.Linear(768, 64),\n",
    "            nn.ReLU(),\n",
     "            nn.Linear(64, 2)  # two output classes: the labels are only 0 and 1\n",
    "        )\n",
    "\n",
    "    def forward(self, x):\n",
    "        return self.classifier(x)"
   ],
   "outputs": [],
   "execution_count": 15
  },
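  {
   "cell_type": "markdown",
   "id": "shape-sanity-check-md",
   "metadata": {},
   "source": [
    "Before training, a standalone sanity check (not from the original lesson) confirms that the classifier maps a `(batch, 768)` input to `(batch, 2)` logits:"
   ]
  },
  {
   "cell_type": "code",
   "id": "shape-sanity-check",
   "metadata": {},
   "source": [
    "# Feed a random batch through the same classifier stack used inside Net\n",
    "import torch\n",
    "from torch import nn\n",
    "\n",
    "clf = nn.Sequential(nn.Linear(768, 64), nn.ReLU(), nn.Linear(64, 2))\n",
    "clf(torch.randn(4, 768)).shape  # torch.Size([4, 2])"
   ],
   "outputs": [],
   "execution_count": null
  },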
  {
   "cell_type": "markdown",
   "id": "fb0b5c0604aa5322",
   "metadata": {},
   "source": [
     "## Start training"
   ]
  },
  {
   "cell_type": "code",
   "id": "83ec0fd2b84be38b",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T10:54:06.781365Z",
     "start_time": "2025-06-25T10:54:06.691664Z"
    }
   },
   "source": [
    "from torch.utils.data import DataLoader\n",
    "import torch\n",
    "\n",
    "train_loader = DataLoader(train_dataset, batch_size=2, shuffle=True)\n",
    "for inputs, labels in train_loader:\n",
    "    print(inputs.shape)\n",
    "    print(inputs.view(inputs.shape[0], -1).shape)\n",
    "    print(inputs.view(inputs.shape[0], -1))\n",
    "    print(labels)\n",
    "    break"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "torch.Size([2, 1, 768])\n",
      "torch.Size([2, 768])\n",
      "tensor([[ 0.4551, -0.1386, -0.0359,  ...,  0.0854, -0.0099,  0.0109],\n",
      "        [ 0.2672,  0.1957, -0.2584,  ...,  0.6375,  0.1219,  0.3779]])\n",
      "tensor([1., 0.])\n"
     ]
    }
   ],
   "execution_count": 16
  },
  {
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T10:55:34.028528Z",
     "start_time": "2025-06-25T10:54:06.796446Z"
    }
   },
   "cell_type": "code",
   "source": [
     "# Grand Commander Zhou Yu (my WeChat: dadudu6789)\n",
    "\n",
    "model = Net()\n",
    "\n",
    "optimizer = torch.optim.SGD(model.parameters(), lr=0.01)\n",
    "criterion = nn.CrossEntropyLoss()\n",
    "\n",
    "for epoch in range(10):\n",
    "    for i, (inputs, labels) in enumerate(train_loader):\n",
    "        inputs = inputs.view(inputs.shape[0], -1)\n",
    "        outputs = model(inputs)\n",
    "        loss = criterion(outputs, labels.long())\n",
    "        optimizer.zero_grad()\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "        if (i + 1) % 10 == 0:\n",
    "            print('Epoch [{}/{}], Step [{}/{}], Loss: {:.4f}'\n",
    "                  .format(epoch + 1, 10, i + 1, len(train_loader), loss.item()))"
   ],
   "id": "e61245c07dc2e336",
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Epoch [1/10], Step [10/100], Loss: 0.7332\n",
      "Epoch [1/10], Step [20/100], Loss: 0.6976\n",
      "Epoch [1/10], Step [30/100], Loss: 0.6789\n",
      "Epoch [1/10], Step [40/100], Loss: 0.6671\n",
      "Epoch [1/10], Step [50/100], Loss: 0.7487\n",
      "Epoch [1/10], Step [60/100], Loss: 0.7234\n",
      "Epoch [1/10], Step [70/100], Loss: 0.4450\n",
      "Epoch [1/10], Step [80/100], Loss: 0.6723\n",
      "Epoch [1/10], Step [90/100], Loss: 0.6346\n",
      "Epoch [1/10], Step [100/100], Loss: 0.6782\n",
      "Epoch [2/10], Step [10/100], Loss: 0.6411\n",
      "Epoch [2/10], Step [20/100], Loss: 0.5388\n",
      "Epoch [2/10], Step [30/100], Loss: 0.5315\n",
      "Epoch [2/10], Step [40/100], Loss: 0.7911\n",
      "Epoch [2/10], Step [50/100], Loss: 0.4179\n",
      "Epoch [2/10], Step [60/100], Loss: 0.4690\n",
      "Epoch [2/10], Step [70/100], Loss: 0.6313\n",
      "Epoch [2/10], Step [80/100], Loss: 0.5300\n",
      "Epoch [2/10], Step [90/100], Loss: 0.5073\n",
      "Epoch [2/10], Step [100/100], Loss: 0.5989\n",
      "Epoch [3/10], Step [10/100], Loss: 0.4382\n",
      "Epoch [3/10], Step [20/100], Loss: 0.4796\n",
      "Epoch [3/10], Step [30/100], Loss: 0.2615\n",
      "Epoch [3/10], Step [40/100], Loss: 0.5196\n",
      "Epoch [3/10], Step [50/100], Loss: 0.3703\n",
      "Epoch [3/10], Step [60/100], Loss: 0.3748\n",
      "Epoch [3/10], Step [70/100], Loss: 0.2980\n",
      "Epoch [3/10], Step [80/100], Loss: 0.6088\n",
      "Epoch [3/10], Step [90/100], Loss: 0.6477\n",
      "Epoch [3/10], Step [100/100], Loss: 0.4354\n",
      "Epoch [4/10], Step [10/100], Loss: 0.2946\n",
      "Epoch [4/10], Step [20/100], Loss: 0.5923\n",
      "Epoch [4/10], Step [30/100], Loss: 0.2315\n",
      "Epoch [4/10], Step [40/100], Loss: 0.3165\n",
      "Epoch [4/10], Step [50/100], Loss: 0.6789\n",
      "Epoch [4/10], Step [60/100], Loss: 0.3488\n",
      "Epoch [4/10], Step [70/100], Loss: 0.1694\n",
      "Epoch [4/10], Step [80/100], Loss: 0.3803\n",
      "Epoch [4/10], Step [90/100], Loss: 0.2931\n",
      "Epoch [4/10], Step [100/100], Loss: 0.4275\n",
      "Epoch [5/10], Step [10/100], Loss: 0.0873\n",
      "Epoch [5/10], Step [20/100], Loss: 0.3050\n",
      "Epoch [5/10], Step [30/100], Loss: 0.2092\n",
      "Epoch [5/10], Step [40/100], Loss: 0.1031\n",
      "Epoch [5/10], Step [50/100], Loss: 0.0312\n",
      "Epoch [5/10], Step [60/100], Loss: 0.7638\n",
      "Epoch [5/10], Step [70/100], Loss: 0.3195\n",
      "Epoch [5/10], Step [80/100], Loss: 0.6263\n",
      "Epoch [5/10], Step [90/100], Loss: 0.4093\n",
      "Epoch [5/10], Step [100/100], Loss: 0.1061\n",
      "Epoch [6/10], Step [10/100], Loss: 0.1489\n",
      "Epoch [6/10], Step [20/100], Loss: 0.0976\n",
      "Epoch [6/10], Step [30/100], Loss: 0.1341\n",
      "Epoch [6/10], Step [40/100], Loss: 0.3680\n",
      "Epoch [6/10], Step [50/100], Loss: 0.2091\n",
      "Epoch [6/10], Step [60/100], Loss: 0.4089\n",
      "Epoch [6/10], Step [70/100], Loss: 0.3526\n",
      "Epoch [6/10], Step [80/100], Loss: 0.1644\n",
      "Epoch [6/10], Step [90/100], Loss: 0.3117\n",
      "Epoch [6/10], Step [100/100], Loss: 0.2953\n",
      "Epoch [7/10], Step [10/100], Loss: 0.0838\n",
      "Epoch [7/10], Step [20/100], Loss: 0.0129\n",
      "Epoch [7/10], Step [30/100], Loss: 0.4038\n",
      "Epoch [7/10], Step [40/100], Loss: 0.3349\n",
      "Epoch [7/10], Step [50/100], Loss: 0.2036\n",
      "Epoch [7/10], Step [60/100], Loss: 0.0071\n",
      "Epoch [7/10], Step [70/100], Loss: 0.1307\n",
      "Epoch [7/10], Step [80/100], Loss: 0.0183\n",
      "Epoch [7/10], Step [90/100], Loss: 0.1872\n",
      "Epoch [7/10], Step [100/100], Loss: 0.0928\n",
      "Epoch [8/10], Step [10/100], Loss: 0.2881\n",
      "Epoch [8/10], Step [20/100], Loss: 0.1485\n",
      "Epoch [8/10], Step [30/100], Loss: 0.0175\n",
      "Epoch [8/10], Step [40/100], Loss: 0.2242\n",
      "Epoch [8/10], Step [50/100], Loss: 0.0798\n",
      "Epoch [8/10], Step [60/100], Loss: 0.0107\n",
      "Epoch [8/10], Step [70/100], Loss: 0.1312\n",
      "Epoch [8/10], Step [80/100], Loss: 0.1405\n",
      "Epoch [8/10], Step [90/100], Loss: 0.1032\n",
      "Epoch [8/10], Step [100/100], Loss: 0.0277\n",
      "Epoch [9/10], Step [10/100], Loss: 0.1038\n",
      "Epoch [9/10], Step [20/100], Loss: 0.1388\n",
      "Epoch [9/10], Step [30/100], Loss: 0.0417\n",
      "Epoch [9/10], Step [40/100], Loss: 0.0981\n",
      "Epoch [9/10], Step [50/100], Loss: 0.0369\n",
      "Epoch [9/10], Step [60/100], Loss: 0.0245\n",
      "Epoch [9/10], Step [70/100], Loss: 0.1085\n",
      "Epoch [9/10], Step [80/100], Loss: 0.0932\n",
      "Epoch [9/10], Step [90/100], Loss: 0.1284\n",
      "Epoch [9/10], Step [100/100], Loss: 0.2005\n",
      "Epoch [10/10], Step [10/100], Loss: 0.0030\n",
      "Epoch [10/10], Step [20/100], Loss: 0.0978\n",
      "Epoch [10/10], Step [30/100], Loss: 0.0375\n",
      "Epoch [10/10], Step [40/100], Loss: 0.0169\n",
      "Epoch [10/10], Step [50/100], Loss: 0.3015\n",
      "Epoch [10/10], Step [60/100], Loss: 0.0561\n",
      "Epoch [10/10], Step [70/100], Loss: 0.0630\n",
      "Epoch [10/10], Step [80/100], Loss: 0.0052\n",
      "Epoch [10/10], Step [90/100], Loss: 0.2656\n",
      "Epoch [10/10], Step [100/100], Loss: 0.0167\n"
     ]
    }
   ],
   "execution_count": 17
  },
  {
   "cell_type": "markdown",
   "id": "53704779d9f7b39e",
   "metadata": {},
   "source": [
     "## Evaluate on the test set"
   ]
  },
  {
   "cell_type": "code",
   "id": "96f0131bc16e5367",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T10:56:00.745703Z",
     "start_time": "2025-06-25T10:55:52.110533Z"
    }
   },
   "source": [
     "# Evaluate on the test set\n",
     "\n",
     "# Filter the data in the same way as the training set\n",
     "ds_test_torch = ds_test.to_torch_dataset()\n",
     "ds_test_torch = ds_test_torch.filter(lambda x: isinstance(x['sentence'], str) and x['label'] is not None and not math.isnan(x['label']))\n",
     "\n",
     "# Take the first 200 examples for evaluation\n",
     "test_dataset = MyDataset(ds_test_torch[:200])\n",
    "test_loader = DataLoader(test_dataset, batch_size=8, shuffle=False)\n",
    "\n",
    "with torch.no_grad():\n",
    "    correct = 0\n",
    "    total = 0\n",
    "    for inputs, labels in test_loader:\n",
    "        inputs = inputs.view(inputs.shape[0], -1)\n",
    "        outputs = model(inputs)\n",
    "        _, predicted = torch.max(outputs.data, 1)\n",
    "        total += labels.size(0)\n",
    "        matches = (predicted == labels)\n",
    "        correct += matches.sum().item()\n",
    "\n",
     "    print('Test set accuracy: {} %'.format(100 * correct / total))"
   ],
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Filter: 100%|██████████| 5032/5032 [00:00<00:00, 72964.34 examples/s]\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Test set accuracy: 85.5 %\n"
     ]
    }
   ],
   "execution_count": 19
  },
  {
   "cell_type": "code",
   "id": "afd07f2d6a3b3d6f",
   "metadata": {
    "ExecuteTime": {
     "end_time": "2025-06-25T10:56:01.569518Z",
     "start_time": "2025-06-25T10:56:01.514721Z"
    }
   },
   "source": [
     "# test = \"这个手机很好\"  # \"This phone is great\"\n",
     "# test = \"这个手机很烂\"  # \"This phone is terrible\"\n",
     "# test = \"这个手机很不好\"  # \"This phone is really not good\"\n",
     "# test = \"这个手机非常非常不好\"  # \"This phone is very, very bad\"\n",
     "test = \"这个手机不好\"  # \"This phone is not good\"\n",
    "inputs = torch.from_numpy(pipeline_se(input={'source_sentence': [test]})['text_embedding'])\n",
    "outputs = model(inputs)\n",
    "print(outputs)\n",
    "values, predicted = torch.max(outputs.data, 1)\n",
    "predicted"
   ],
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "tensor([[ 0.9562, -0.8834]], grad_fn=<AddmmBackward0>)\n"
     ]
    },
    {
     "data": {
      "text/plain": [
       "tensor([0])"
      ]
     },
     "execution_count": 20,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "execution_count": 20
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.8.20"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
