{
 "cells": [
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# Training Your Own LLM on a Single GPU\n",
    "### One Hour and the Price of a Latte to Build Your Own AIGC Model -- Going Head-to-Head with ChatGPT\n",
    "---\n",
    "## Contents\n",
    "1. How the Flan models are trained\n",
    "2. Head-to-head with ChatGPT before fine-tuning\n",
    "3. Few-shot transfer learning, and model performance after training\n",
    "4. Summary of the face-off"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 1. How the Flan Models Are Trained\n",
    "\n",
    "> Flan is the Google team's open-source, scalable instruction-tuning recipe for pretrained models. It enables transfer learning and fine-tuning of LLMs and significantly improves models such as PaLM, T5, and U-PaLM. **PaLM reaches 540 billion parameters -- roughly three times the 175 billion usually cited for the model behind ChatGPT.** The figure below shows evaluation results across 1,800 task types; note the **performance gains from Flan (in blue)**.\n",
    "\n",
    "<img src=\"./Flan.png\" alt=\"Evaluation results\" width=\"600\">\n",
    "\n",
    "Image source: https://arxiv.org/pdf/2210.11416.pdf"
   ]
  },
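  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "> A minimal illustration of the idea (the prompt wording below is invented for this notebook, not taken from the Flan paper): instruction tuning casts every task as text-to-text, so even a classification task is expressed as an instruction plus a few labeled examples in plain text."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Sketch: build a few-shot, instruction-style prompt for a sentiment task\n",
    "def build_prompt(examples, query):\n",
    "    lines = [\"Classify the sentiment of the review as positive or negative.\"]\n",
    "    for text, label in examples:\n",
    "        lines.append(f\"Review: {text}\\nSentiment: {label}\")\n",
    "    lines.append(f\"Review: {query}\\nSentiment:\")\n",
    "    return \"\\n\\n\".join(lines)\n",
    "\n",
    "few_shot = [(\"Great product, works perfectly.\", \"positive\"),\n",
    "            (\"Broke after two days.\", \"negative\")]\n",
    "print(build_prompt(few_shot, \"Fast shipping and well packaged.\"))"
   ]
  },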
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 2. Head-to-Head with ChatGPT Before Fine-Tuning\n",
    "### Compute environment used for training\n",
    "A single GPU: **NVIDIA Tesla T4** (about the price of a latte per hour)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Wed May 24 14:59:51 2023       \n",
      "+-----------------------------------------------------------------------------+\n",
      "| NVIDIA-SMI 510.47.03    Driver Version: 510.47.03    CUDA Version: 11.6     |\n",
      "|-------------------------------+----------------------+----------------------+\n",
      "| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |\n",
      "| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |\n",
      "|                               |                      |               MIG M. |\n",
      "|===============================+======================+======================|\n",
      "|   0  Tesla T4            Off  | 00000000:00:07.0 Off |                    0 |\n",
      "| N/A   57C    P0    28W /  70W |      0MiB / 15360MiB |     10%      Default |\n",
      "|                               |                      |                  N/A |\n",
      "+-------------------------------+----------------------+----------------------+\n",
      "                                                                               \n",
      "+-----------------------------------------------------------------------------+\n",
      "| Processes:                                                                  |\n",
      "|  GPU   GI   CI        PID   Type   Process name                  GPU Memory |\n",
      "|        ID   ID                                                   Usage      |\n",
      "|=============================================================================|\n",
      "|  No running processes found                                                 |\n",
      "+-----------------------------------------------------------------------------+\n"
     ]
    }
   ],
   "source": [
    "!nvidia-smi"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import pandas as pd\n",
    "import time\n",
    "from IPython.display import clear_output\n",
    "\n",
    "import warnings\n",
    "warnings.filterwarnings(\"ignore\")\n",
    "\n",
    "import os\n",
    "os.environ[\"TRANSFORMERS_CACHE\"] = \"cache_t5_xl\"\n",
    "\n",
     "# Path to the base model -- here the 3-billion-parameter flan-t5-xl (Hugging Face: https://huggingface.co/google/flan-t5-xl/tree/main).\n",
     "# No manual download needed: the first training run caches it under the path below; the trailing snapshot hash may differ on your machine.\n",
     "model_path = './cache_t5_xl/models--google--flan-t5-xl/snapshots/53fd1e22aa944eee1fd336f9aee8a437e01676ce'\n",
     "\n",
     "# Location of the training and test corpora\n",
     "data_path = './data/'\n",
     "\n",
     "# Label display mapping (1 → '正向' = positive, 0 → '负向' = negative)\n",
     "label_mapping = {1: '正向', 0: '负向'}"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Run this cell only once\n",
     "\n",
     "# Install the required package (unzip is a system tool, not a pip package)\n",
     "!pip install autogluon\n",
     "\n",
     "# Unpack the English and Chinese corpora\n",
     "!unzip -q -o data.zip -d .\n",
     "\n",
     "clear_output()"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### The pretrained model's performance on different tasks, before transfer learning"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "912b8dd118754cdfbe1b3b13f53e7ac8",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Model load time: 108.41 s\n"
      ]
     }
    ],
    "source": [
     "# Load the pretrained model\n",
     "\n",
     "from transformers import T5Tokenizer, T5ForConditionalGeneration\n",
     "\n",
     "start = time.time()\n",
     "\n",
     "tokenizer = T5Tokenizer.from_pretrained(model_path)\n",
     "model = T5ForConditionalGeneration.from_pretrained(model_path, device_map=\"auto\")\n",
     "\n",
     "end = time.time()\n",
     "print(\"Model load time: %.2f s\" % (end - start))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Question 0: Write a small piece of code to output helloworld\n",
       "Model answer: if __name__ == \"__main__\": print(\"Hello World\")\n",
       "\n",
       "\n",
       "Question 1: Please answer the following question. What is the boiling point of water?\n",
       "Model answer: 212 degrees celsius\n",
       "\n",
       "\n",
       "Task time: 1.57 s\n"
      ]
     }
    ],
    "source": [
     "# Try tasks of different types\n",
     "\n",
     "def inference(input_text):\n",
     "    input_ids = tokenizer(input_text, return_tensors=\"pt\").input_ids.to(\"cuda\")\n",
     "    outputs = model.generate(input_ids, max_length=200, bos_token_id=0)\n",
     "    return tokenizer.decode(outputs[0], skip_special_tokens=True)\n",
     "\n",
     "start = time.time()\n",
     "\n",
     "# Prompts covering different task types\n",
     "sentences = [\n",
     "    \"Write a small piece of code to output helloworld\",\n",
     "    \"Please answer the following question. What is the boiling point of water?\",\n",
     "]\n",
     "\n",
     "for index, item in enumerate(sentences):\n",
     "    print(f\"Question {index}: {item}\\nModel answer: {inference(item)}\\n\\n\")\n",
     "\n",
     "end = time.time()\n",
     "print(\"Task time: %.2f s\" % (end - start))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### **Flan-T5-xl** (3 billion parameters) vs **ChatGPT** (175 billion parameters), head to head\n",
     "\n",
     "> **Code writing:**\n",
     "\n",
     "    Write a small piece of code to output helloworld\n",
     "\n",
     "> ChatGPT's answer:\n",
     "\n",
     "    Sure! Here is a simple Python snippet that outputs \"Hello, World!\":\n",
     "\n",
     "`print(\"Hello, World!\")`\n",
     "\n",
     "    When you run this code, it prints \"Hello, World!\" to the console or output window.\n",
     "\n",
     "> Flan-T5-xl's answer:\n",
     "\n",
     "`if __name__ == \"__main__\": print(\"Hello World\")`\n",
     "\n",
     "    (😂 The answer has redundant boilerplate and ignores PEP style, but it is far less verbose than ChatGPT)\n",
     "\n",
     "> **Translation:**\n",
     "\n",
     "    \"Translate to German: My name is Arthur\"\n",
     "\n",
     "> ChatGPT's answer:\n",
     "\n",
     "    \"Mein Name ist Arthur.\"\n",
     "\n",
     "> Flan-T5-xl's answer:\n",
     "\n",
     "    Ich bin Arthur.\n",
     "\n",
     "---\n",
     "\n",
     "> **Open-domain QA:**\n",
     "\n",
     "    \"Please answer to the following question. Who is going to be the next Ballon d'or?\"\n",
     "\n",
     "> ChatGPT's answer:\n",
     "\n",
     "    \"As an AI language model, I cannot access real-time information or predict future events such as who will win the next Ballon d'Or. The Ballon d'Or is an annual football award from the French magazine France Football; the winner is decided by votes from journalists, national-team coaches, and captains, sometimes with an online vote, based on a player's performance and achievements over the year. To find out who wins next, follow reliable news sources or wait for the official announcement.\"\n",
     "\n",
     "> Flan-T5-xl's answer:\n",
     "\n",
     "    ronaldo (😂 no telling how old the headlines it memorized are)\n",
     "\n",
     "---\n",
     "> **Logical reasoning:**\n",
     "\n",
     "    \"Q: Can Geoffrey Hinton have a conversation with George Washington? Give the rationale before answering.\"\n",
     "\n",
     "> ChatGPT's answer:\n",
     "\n",
     "    \"No, Geoffrey Hinton cannot have a conversation with George Washington. The rationale: Geoffrey Hinton is a renowned computer scientist and one of the pioneers of deep learning and artificial neural networks. He is a contemporary figure who is still alive today. George Washington, on the other hand, was the first president of the United States and lived from 1732 to 1799. They belong to different eras, so they cannot literally hold a conversation.\"\n",
     "\n",
     "> Flan-T5-xl's answer:\n",
     "\n",
     "    George Washington died in 1799. Geoffrey Hinton was born in 1924. So the final answer is no. (The reasoning pattern is sound, though Hinton was actually born in 1947)\n",
     "\n",
     "---\n",
     "\n",
     "> **Science knowledge:**\n",
     "\n",
     "    \"Please answer the following question. What is the boiling point of water?\"\n",
     "\n",
     "> ChatGPT's answer:\n",
     "\n",
     "    \"The boiling point of water at standard atmospheric pressure (sea level) is 100 degrees Celsius, or 212 degrees Fahrenheit.\"\n",
     "\n",
     "> Flan-T5-xl's answer:\n",
     "\n",
     "    212 degrees celsius (right number, wrong unit: 212 is the Fahrenheit value)\n",
     "\n",
     "---\n",
     "\n",
     "> **Math reasoning:**\n",
     "\n",
     "    \"The square root of x is the cube root of y. What is y to the power of 2, if x = 4?\"\n",
     "\n",
     "> ChatGPT's answer:\n",
     "\n",
     "    \"If the square root of x equals the cube root of y, and x is 4, we can substitute the value of x into the equation to solve for y.\n",
     "\n",
     "    Given x = 4: the square root of 4 is 2, so the cube root of y is also 2.\n",
     "\n",
     "    To find y we cube 2, since the cube root of y is 2: 2 cubed is 2 * 2 * 2 = 8.\n",
     "\n",
     "    So if x = 4, y squared (y^2) equals 8.\"\n",
     "\n",
     "> Flan-T5-xl's answer:\n",
     "\n",
     "    0 (😂 the model simply gave up... Note that neither model is right: y = 8, so y^2 = 64; ChatGPT finds y = 8 but then mislabels it as y^2)\n",
     "\n",
     "---\n",
     "\n",
     "> **Premise and hypothesis (entailment):**\n",
     "\n",
     "    \"Premise: At my age you will probably have learnt one lesson. Hypothesis: It's not certain how many lessons you'll learn by your thirties. Does the premise entail the hypothesis?\"\n",
     "\n",
     "> ChatGPT's answer:\n",
     "\n",
     "    \"No, the premise \"At my age you will probably have learnt one lesson\" does not entail the hypothesis \"It's not certain how many lessons you'll learn by your thirties\".\n",
     "\n",
     "    The premise states that by a certain age one has probably learnt a lesson. It provides no information about certainty or uncertainty regarding the number of lessons learnt by one's thirties. The hypothesis introduces a different point about that uncertainty, so the premise alone cannot logically entail the hypothesis.\"\n",
     "\n",
     "> Flan-T5-xl's answer:\n",
     "\n",
     "    it is not possible to tell (😂 a direct answer, with none of the long-winded, confident-sounding filler)\n",
     "    "
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## 3. Few-Shot Transfer Learning, and Model Performance After Training"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### English corpus (user reviews from a shopping site)\n",
     "| Field | Description |\n",
     "| ---- | ---- |\n",
     "| label | 1 = positive review, 0 = negative review |\n",
     "| text | review text |"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Train samples: 200, positives: 101\n",
       "Train samples: 200, negatives: 99\n",
       "Test samples: 200, positives: 105\n",
       "Test samples: 200, negatives: 95\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>label</th>\n",
       "      <th>text</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>138</th>\n",
       "      <td>正向</td>\n",
       "      <td>This CD is just okay. The early JB ballads are...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>8</th>\n",
       "      <td>正向</td>\n",
       "      <td>You can argue all you want about qualification...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>182</th>\n",
       "      <td>正向</td>\n",
       "      <td>I have never been able to stick with a DVD but...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>31</th>\n",
       "      <td>负向</td>\n",
       "      <td>I was a Sophomore in HS and went to this carto...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>139</th>\n",
       "      <td>负向</td>\n",
       "      <td>I read the only review for this item and order...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>43</th>\n",
       "      <td>正向</td>\n",
       "      <td>This is a spectacular read, well balanced in a...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>193</th>\n",
       "      <td>正向</td>\n",
       "      <td>This is a great book, and well worth purchasin...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>36</th>\n",
       "      <td>负向</td>\n",
       "      <td>The movie is not good but a lot can be learned...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>146</th>\n",
       "      <td>负向</td>\n",
       "      <td>This book is pretty near worthless. I started ...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>60</th>\n",
       "      <td>正向</td>\n",
       "      <td>A revealing examination of the Iraq war, with ...</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "    label                                               text\n",
       "138    正向  This CD is just okay. The early JB ballads are...\n",
       "8      正向  You can argue all you want about qualification...\n",
       "182    正向  I have never been able to stick with a DVD but...\n",
       "31     负向  I was a Sophomore in HS and went to this carto...\n",
       "139    负向  I read the only review for this item and order...\n",
       "43     正向  This is a spectacular read, well balanced in a...\n",
       "193    正向  This is a great book, and well worth purchasin...\n",
       "36     负向  The movie is not good but a lot can be learned...\n",
       "146    负向  This book is pretty near worthless. I started ...\n",
       "60     正向  A revealing examination of the Iraq war, with ..."
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# English corpus\n",
     "\n",
     "# English training set\n",
     "train_en_df = pd.read_csv(data_path + \"en_train.tsv\",\n",
     "                          sep=\"\\t\",\n",
     "                          header=None,\n",
     "                          names=[\"label\", \"text\"]) \\\n",
     "                .sample(200, random_state=42).reset_index(drop=True)\n",
     "\n",
     "# English test set\n",
     "test_en_df = pd.read_csv(data_path + \"en_test.tsv\",\n",
     "                          sep=\"\\t\",\n",
     "                          header=None,\n",
     "                          names=[\"label\", \"text\"]) \\\n",
     "               .sample(200, random_state=42).reset_index(drop=True)\n",
     "\n",
     "print('Train samples: %d, positives: %d' % (train_en_df.shape[0], train_en_df[train_en_df.label==1].shape[0]))\n",
     "print('Train samples: %d, negatives: %d' % (train_en_df.shape[0], train_en_df[train_en_df.label==0].shape[0]))\n",
     "\n",
     "print('Test samples: %d, positives: %d' % (test_en_df.shape[0], test_en_df[test_en_df.label==1].shape[0]))\n",
     "print('Test samples: %d, negatives: %d' % (test_en_df.shape[0], test_en_df[test_en_df.label==0].shape[0]))\n",
     "\n",
     "# Peek at a sample of the English training data\n",
     "train_en_sample = train_en_df.sample(10)\n",
     "train_en_sample.label = train_en_sample.label.map(label_mapping)\n",
     "train_en_sample"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Chinese corpus (food-delivery user reviews)\n",
     "| Field | Description |\n",
     "| ---- | ---- |\n",
     "| label | 1 = positive review, 0 = negative review |\n",
     "| review | review text |"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
       "Train samples: 4000, positives: 2058\n",
       "Train samples: 4000, negatives: 1942\n",
       "Test samples: 1000, positives: 506\n",
       "Test samples: 1000, negatives: 494\n"
     ]
    },
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>label</th>\n",
       "      <th>text</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>3308</th>\n",
       "      <td>正向</td>\n",
       "      <td>豆浆撒的有点儿严重……</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>55</th>\n",
       "      <td>负向</td>\n",
       "      <td>11:30,的餐，下午三点才送到.,还是打百度投诉的结婚！</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3213</th>\n",
       "      <td>负向</td>\n",
       "      <td>送餐时间提前太多，中午吃饭时已经凉了，以后请按指定时间送</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1953</th>\n",
       "      <td>负向</td>\n",
       "      <td>百度外卖大骗子...骗人下单...</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3773</th>\n",
       "      <td>正向</td>\n",
       "      <td>送得特别特别快！嗖一下出现在我面前，哈哈</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2110</th>\n",
       "      <td>负向</td>\n",
       "      <td>订的九点送到，十二点才到，呵呵</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1020</th>\n",
       "      <td>负向</td>\n",
       "      <td>不错，就是送的太慢了</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3322</th>\n",
       "      <td>正向</td>\n",
       "      <td>送来的非常快，饮料都非常的不错呦，超级喜欢</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2951</th>\n",
       "      <td>负向</td>\n",
       "      <td>再也不会点了，送了2个小时，有这时间点别家的都吃完半天了。不要点啊不要点！！除非中午点了当晚饭！！</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>585</th>\n",
       "      <td>正向</td>\n",
       "      <td>很好吃哦！</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "     label                                               text\n",
       "3308    正向                                        豆浆撒的有点儿严重……\n",
       "55      负向                      11:30,的餐，下午三点才送到.,还是打百度投诉的结婚！\n",
       "3213    负向                       送餐时间提前太多，中午吃饭时已经凉了，以后请按指定时间送\n",
       "1953    负向                                  百度外卖大骗子...骗人下单...\n",
       "3773    正向                               送得特别特别快！嗖一下出现在我面前，哈哈\n",
       "2110    负向                                    订的九点送到，十二点才到，呵呵\n",
       "1020    负向                                         不错，就是送的太慢了\n",
       "3322    正向                              送来的非常快，饮料都非常的不错呦，超级喜欢\n",
       "2951    负向  再也不会点了，送了2个小时，有这时间点别家的都吃完半天了。不要点啊不要点！！除非中午点了当晚饭！！\n",
       "585     正向                                              很好吃哦！"
      ]
     },
     "execution_count": 3,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
     "# Chinese corpus: waimai_10k food-delivery reviews\n",
     "waimai_all = pd.read_csv(data_path + 'waimai_10k.csv')\n",
     "\n",
     "waimai_all.rename(columns={'review': 'text'}, inplace=True)\n",
     "\n",
     "pd_positive = waimai_all[waimai_all.label==1]\n",
     "pd_negative = waimai_all[waimai_all.label==0]\n",
     "\n",
     "# Draw an equal number of positive and negative reviews, sampling with\n",
     "# replacement only when a class has fewer rows than requested\n",
     "def get_balance_corpus(corpus_size, corpus_pos, corpus_neg):\n",
     "    sample_size = corpus_size // 2\n",
     "    pd_corpus_balance = pd.concat([corpus_pos.sample(sample_size, replace=corpus_pos.shape[0]<sample_size, random_state=42),\n",
     "                                   corpus_neg.sample(sample_size, replace=corpus_neg.shape[0]<sample_size, random_state=42)])\n",
     "    return pd_corpus_balance\n",
     "\n",
     "# Total sample size\n",
     "sample_num = 10000\n",
     "\n",
     "# Balance the classes, then shuffle\n",
     "waimai_sample = get_balance_corpus(sample_num, pd_positive, pd_negative).sample(sample_num, random_state=42).reset_index(drop=True)\n",
     "\n",
     "# Slice out the training and test sets\n",
     "train_zh_df = waimai_sample[:4000]\n",
     "test_zh_df = waimai_sample[4000:5000]\n",
     "\n",
     "print('Train samples: %d, positives: %d' % (train_zh_df.shape[0], train_zh_df[train_zh_df.label==1].shape[0]))\n",
     "print('Train samples: %d, negatives: %d' % (train_zh_df.shape[0], train_zh_df[train_zh_df.label==0].shape[0]))\n",
     "\n",
     "print('Test samples: %d, positives: %d' % (test_zh_df.shape[0], test_zh_df[test_zh_df.label==1].shape[0]))\n",
     "print('Test samples: %d, negatives: %d' % (test_zh_df.shape[0], test_zh_df[test_zh_df.label==0].shape[0]))\n",
     "\n",
     "# Peek at a sample of the Chinese training data\n",
     "train_zh_sample = train_zh_df.sample(10)\n",
     "train_zh_sample.label = train_zh_sample.label.map(label_mapping)\n",
     "train_zh_sample"
   ]
  },
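  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "> A quick toy check (values invented) of what `get_balance_corpus` does: when a class has fewer rows than requested, its `replace` flag flips to `True` and that class is sampled with replacement, so the result is always balanced."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy demo: 2 positives vs 4 negatives, ask for 3 of each --\n",
    "# the positive side is drawn with replacement (replace=True)\n",
    "toy = pd.DataFrame({'label': [1, 1, 0, 0, 0, 0],\n",
    "                    'text': ['good', 'great', 'bad', 'slow', 'cold', 'late']})\n",
    "toy_balanced = get_balance_corpus(6, toy[toy.label==1], toy[toy.label==0])\n",
    "print(toy_balanced.label.value_counts())  # 3 of each label\n",
    "\n",
    "del toy, toy_balanced"
   ]
  },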
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "### Training on the English corpus"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Global seed set to 123\n",
      "AutoMM starts to create your model. ✨\n",
      "\n",
      "- Model will be saved to \"/mnt/Ry/llm/tmp/after_transfer\".\n",
      "\n",
      "- Validation metric is \"roc_auc\".\n",
      "\n",
      "- To track the learning progress, you can open a terminal and launch Tensorboard:\n",
      "    ```shell\n",
      "    # Assume you have installed tensorboard\n",
      "    tensorboard --logdir /mnt/Ry/llm/tmp/after_transfer\n",
      "    ```\n",
      "\n",
      "Enjoy your coffee, and let AutoMM do the job ☕☕☕ Learn more at https://auto.gluon.ai\n",
      "\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "e08f81ca062c493faf39e2b3e01a4021",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "GPU available: True (cuda), used: True\n",
      "TPU available: False, using: 0 TPU cores\n",
      "IPU available: False, using: 0 IPUs\n",
      "HPU available: False, using: 0 HPUs\n",
      "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]\n",
      "\n",
      "  | Name              | Type                         | Params\n",
      "-------------------------------------------------------------------\n",
      "0 | model             | HFAutoModelForTextPrediction | 1.2 B \n",
      "1 | validation_metric | AUROC                        | 0     \n",
      "2 | loss_func         | CrossEntropyLoss             | 0     \n",
      "-------------------------------------------------------------------\n",
      "203 K     Trainable params\n",
      "1.2 B     Non-trainable params\n",
      "1.2 B     Total params\n",
      "4,894.913 Total estimated model params size (MB)\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "8b12ae583431473dbbc13b4ddc79bd33",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Sanity Checking: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "5ecab54be66e439999cda2d05c90f137",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Training: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "f3004781aa524429bd5b56f40e6a2f92",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 0, global step 20: 'val_roc_auc' reached 0.88750 (best 0.88750), saving model to '/mnt/Ry/llm/tmp/after_transfer/epoch=0-step=20.ckpt' as top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "132a73b443d84f09976747d73983a164",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 0, global step 40: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "7bdf092c3b9340868862b929a5983c93",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 1, global step 60: 'val_roc_auc' reached 0.91250 (best 0.91250), saving model to '/mnt/Ry/llm/tmp/after_transfer/epoch=1-step=60.ckpt' as top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "8c908ee3dce14261a4ccd2415e8e98aa",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 1, global step 80: 'val_roc_auc' reached 0.93250 (best 0.93250), saving model to '/mnt/Ry/llm/tmp/after_transfer/epoch=1-step=80.ckpt' as top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "aa700c7f1671411da3955ada47ea89d7",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 2, global step 100: 'val_roc_auc' reached 0.95000 (best 0.95000), saving model to '/mnt/Ry/llm/tmp/after_transfer/epoch=2-step=100.ckpt' as top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "c2214e0d02b744bf9602e679f693cf26",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 2, global step 120: 'val_roc_auc' reached 0.97000 (best 0.97000), saving model to '/mnt/Ry/llm/tmp/after_transfer/epoch=2-step=120.ckpt' as top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "5eb236f645564d21963f62fe47fde42e",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 3, global step 140: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "7ccd62302c7748cbbbe913fdf2c86c44",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 3, global step 160: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "01c337ceabaa41a29eb68e144e32e4db",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 4, global step 180: 'val_roc_auc' reached 0.97250 (best 0.97250), saving model to '/mnt/Ry/llm/tmp/after_transfer/epoch=4-step=180.ckpt' as top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "ff5fc283b8a249979a402125dfe03b08",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 4, global step 200: 'val_roc_auc' reached 0.97750 (best 0.97750), saving model to '/mnt/Ry/llm/tmp/after_transfer/epoch=4-step=200.ckpt' as top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "24f92c13df9e49c581ae0f8f40166cb2",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 5, global step 220: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "6e736d5315714b669cedfc0726d6fad7",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 5, global step 240: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "7fd1c99e263a4d73addb10dc0462d345",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 6, global step 260: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "daf93aec638543dda00e2efc6167afe3",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 6, global step 280: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "fda2a94a14024d4888c4b4fd1c829ca0",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 7, global step 300: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "6a3448cdaf474fb19f8bd38668414a60",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 7, global step 320: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "03ec1d428b1e4a7db1d2b45d72915aa6",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 8, global step 340: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "6c142c81a92a4952b59054eef07488ec",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 8, global step 360: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "36acf3f74ea643e0a14ee1ef9a78805c",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 9, global step 380: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "d1472f974ed44e50b5c2f6886ebb10ee",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 9, global step 400: 'val_roc_auc' was not in top 1\n",
      "`Trainer.fit` stopped: `max_epochs=10` reached.\n",
      "AutoMM has created your model 🎉🎉🎉\n",
      "\n",
      "- To load the model, use the code below:\n",
      "    ```python\n",
      "    from autogluon.multimodal import MultiModalPredictor\n",
      "    predictor = MultiModalPredictor.load(\"/mnt/Ry/llm/tmp/after_transfer\")\n",
      "    ```\n",
      "\n",
      "- You can open a terminal and launch Tensorboard to visualize the training log:\n",
      "    ```shell\n",
      "    # Assume you have installed tensorboard\n",
      "    tensorboard --logdir /mnt/Ry/llm/tmp/after_transfer\n",
      "    ```\n",
      "\n",
      "- If you are not satisfied with the model, try to increase the training time, \n",
      "adjust the hyperparameters (https://auto.gluon.ai/stable/tutorials/multimodal/advanced_topics/customization.html),\n",
      "or post issues on GitHub: https://github.com/autogluon/autogluon\n",
      "\n",
      "\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Training time (s): 1506.71\n"
     ]
    }
   ],
   "source": [
    "# Train on the English corpus\n",
    "start = time.time()\n",
    "\n",
    "from autogluon.multimodal import MultiModalPredictor\n",
    "\n",
    "new_model_path = \"./tmp/after_transfer\"  # plain string: the f-prefix had no placeholders\n",
    "\n",
    "predictor = MultiModalPredictor(label=\"label\",\n",
    "                                path=new_model_path)\n",
    "\n",
    "# \"ia3_bias\" is parameter-efficient (IA)^3 fine-tuning: https://arxiv.org/pdf/2205.05638\n",
    "predictor.fit(\n",
    "    train_en_df,\n",
    "    presets=\"multilingual\",\n",
    "    # time_limit=1800,  # optionally cap training time in seconds\n",
    "    hyperparameters={\n",
    "        \"model.hf_text.checkpoint_name\": \"google/flan-t5-xl\",  # Flan-T5-XL backbone\n",
    "        \"model.hf_text.gradient_checkpointing\": True,  # trade compute for GPU memory\n",
    "        \"model.hf_text.low_cpu_mem_usage\": True,\n",
    "        \"optimization.efficient_finetune\": \"ia3_bias\",  # train only (IA)^3 scales and biases\n",
    "        \"optimization.lr_decay\": 0.9,\n",
    "        \"optimization.learning_rate\": 3e-03,\n",
    "        \"optimization.end_lr\": 3e-03,\n",
    "        \"optimization.max_epochs\": 10,\n",
    "        \"optimization.warmup_steps\": 0,\n",
    "        \"env.batch_size\": 1,  # small batch so the model fits in the T4's 16 GB\n",
    "        \"env.eval_batch_size_ratio\": 1\n",
    "    })\n",
    "\n",
    "end = time.time()\n",
    "print(\"Training time (s): %.2f\" % (end - start))"
   ]
  },
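  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The `ia3_bias` setting above is what keeps this training cheap: the backbone's billions of weights stay frozen, and only tiny per-feature rescaling vectors (plus biases) are trained, which matches the 203 K trainable parameters reported in the log. Below is a minimal NumPy sketch of the (IA)^3 idea; it is illustrative only, not AutoGluon's actual implementation."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Minimal sketch of the (IA)^3 idea behind efficient_finetune='ia3_bias'.\n",
    "# Illustrative only; not AutoGluon's actual implementation.\n",
    "import numpy as np\n",
    "\n",
    "rng = np.random.default_rng(0)\n",
    "W = rng.standard_normal((2048, 2048))  # frozen pretrained weight, never updated\n",
    "scale = np.ones(2048)                  # the only trainable parameters in this layer\n",
    "\n",
    "def ia3_forward(x):\n",
    "    # element-wise rescaling of the frozen layer's activations\n",
    "    return (x @ W) * scale\n",
    "\n",
    "y = ia3_forward(rng.standard_normal((1, 2048)))\n",
    "print(f\"trainable: {scale.size}  frozen: {W.size}\")  # trainable: 2048  frozen: 4194304"
   ]
  },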
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Evaluate the model on English reviews"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "05557de012dd407fa14cfaee1c778c32",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Predicting: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "English test samples: 200, ROC AUC: 97.69%\n",
      "English evaluation time (s): 62.35\n"
     ]
    }
   ],
   "source": [
    "start = time.time()\n",
    "score_in_en = predictor.evaluate(test_en_df)\n",
    "# the score AutoMM reports here is ROC AUC, not raw accuracy\n",
    "print(f'English test samples: {test_en_df.shape[0]}, ROC AUC:', \"{:.2f}%\".format(score_in_en['roc_auc'] * 100))\n",
    "end = time.time()\n",
    "print(\"English evaluation time (s): %.2f\" % (end - start))"
   ]
  },
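  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Note that the score printed above is ROC AUC, not plain accuracy: the training log reports `roc_auc` as the validation metric. ROC AUC is the probability that a randomly chosen positive example is scored above a randomly chosen negative one. A tiny pure-Python illustration with made-up labels and scores:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Toy illustration of ROC AUC (the validation metric AutoMM uses above).\n",
    "# AUC = probability that a random positive is scored above a random negative.\n",
    "def roc_auc(labels, scores):\n",
    "    pos = [s for l, s in zip(labels, scores) if l == 1]\n",
    "    neg = [s for l, s in zip(labels, scores) if l == 0]\n",
    "    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)\n",
    "    return wins / (len(pos) * len(neg))\n",
    "\n",
    "# made-up example: 3 positive, 2 negative; 5 of 6 pos/neg pairs ranked correctly\n",
    "print(roc_auc([1, 1, 0, 0, 1], [0.9, 0.5, 0.4, 0.6, 0.8]))"
   ]
  },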
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Before training on Chinese, test the current model on Chinese reviews"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "c3abc4ce9dc54f82861fb36495e64360",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Predicting: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Chinese test samples: 1000, ROC AUC: 49.44%\n",
      "Chinese evaluation time (s): 17.53\n"
     ]
    }
   ],
   "source": [
    "start = time.time()\n",
    "score_in_zh = predictor.evaluate(test_zh_df)\n",
    "print(f'Chinese test samples: {test_zh_df.shape[0]}, ROC AUC:', \"{:.2f}%\".format(score_in_zh['roc_auc'] * 100))\n",
    "end = time.time()\n",
    "print(\"Chinese evaluation time (s): %.2f\" % (end - start))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Train on the Chinese corpus"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Global seed set to 123\n",
      "AutoMM starts to create your model. ✨\n",
      "\n",
      "- Model will be saved to \"/mnt/Ry/llm/tmp/after_transfer_zh\".\n",
      "\n",
      "- Validation metric is \"roc_auc\".\n",
      "\n",
      "- To track the learning progress, you can open a terminal and launch Tensorboard:\n",
      "    ```shell\n",
      "    # Assume you have installed tensorboard\n",
      "    tensorboard --logdir /mnt/Ry/llm/tmp/after_transfer_zh\n",
      "    ```\n",
      "\n",
      "Enjoy your coffee, and let AutoMM do the job ☕☕☕ Learn more at https://auto.gluon.ai\n",
      "\n",
      "GPU available: True (cuda), used: True\n",
      "TPU available: False, using: 0 TPU cores\n",
      "IPU available: False, using: 0 IPUs\n",
      "HPU available: False, using: 0 HPUs\n",
      "LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]\n",
      "\n",
      "  | Name              | Type                         | Params\n",
      "-------------------------------------------------------------------\n",
      "0 | model             | HFAutoModelForTextPrediction | 1.2 B \n",
      "1 | validation_metric | AUROC                        | 0     \n",
      "2 | loss_func         | CrossEntropyLoss             | 0     \n",
      "-------------------------------------------------------------------\n",
      "203 K     Trainable params\n",
      "1.2 B     Non-trainable params\n",
      "1.2 B     Total params\n",
      "4,894.913 Total estimated model params size (MB)\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "16e53d01cd1b467c88f0c6e4474ce467",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Sanity Checking: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "7e4a62f40dd34672b75436e4ebeae34e",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Training: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "59d092badbfe4e2b9a5620cae9c1540f",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 0, global step 437: 'val_roc_auc' reached 0.56779 (best 0.56779), saving model to '/mnt/Ry/llm/tmp/after_transfer_zh/epoch=0-step=437.ckpt' as top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "5612911a758141c08be6c7b0fe4035be",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 0, global step 874: 'val_roc_auc' reached 0.57457 (best 0.57457), saving model to '/mnt/Ry/llm/tmp/after_transfer_zh/epoch=0-step=874.ckpt' as top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "4f91e7633f804dfa859d34a3d0324c78",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 1, global step 1312: 'val_roc_auc' reached 0.60913 (best 0.60913), saving model to '/mnt/Ry/llm/tmp/after_transfer_zh/epoch=1-step=1312.ckpt' as top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "62ba8c5cfeb1428f811b112e95016003",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 1, global step 1749: 'val_roc_auc' reached 0.64011 (best 0.64011), saving model to '/mnt/Ry/llm/tmp/after_transfer_zh/epoch=1-step=1749.ckpt' as top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "98ac73c3d3404103a574c3f81de0e365",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 2, global step 2187: 'val_roc_auc' reached 0.68286 (best 0.68286), saving model to '/mnt/Ry/llm/tmp/after_transfer_zh/epoch=2-step=2187.ckpt' as top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "68011ca28f334556bbbbd7318f36378a",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 2, global step 2624: 'val_roc_auc' reached 0.69733 (best 0.69733), saving model to '/mnt/Ry/llm/tmp/after_transfer_zh/epoch=2-step=2624.ckpt' as top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "9a4d486c0aed462faa707eaea76ab6f7",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 3, global step 3062: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "753090c759d2451b84220886601aa99b",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 3, global step 3499: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "f8e25953277d4fd997f520bf24953dc6",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 4, global step 3937: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "68abba583e354a1692a97cef6238c753",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 4, global step 4374: 'val_roc_auc' reached 0.69749 (best 0.69749), saving model to '/mnt/Ry/llm/tmp/after_transfer_zh/epoch=4-step=4374.ckpt' as top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "574d51f6dc2c47279b400a6c6feba1c9",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 5, global step 4812: 'val_roc_auc' reached 0.69972 (best 0.69972), saving model to '/mnt/Ry/llm/tmp/after_transfer_zh/epoch=5-step=4812.ckpt' as top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "c7146df5659e4ef9945b1ea2dd90e7b3",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 5, global step 5249: 'val_roc_auc' reached 0.69999 (best 0.69999), saving model to '/mnt/Ry/llm/tmp/after_transfer_zh/epoch=5-step=5249.ckpt' as top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "e8bec5435261487ba6ccfc35385748b0",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 6, global step 5687: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "fdd908a1f8a742018fa8f14318bca529",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 6, global step 6124: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "e81f2c2ffa7a4ea1a7b049dfb8ca6869",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 7, global step 6562: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "6ba219879f9847e5b462b763c191900a",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 7, global step 6999: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "cf45252c27cf428c97e055881d161a9e",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 8, global step 7437: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "e15d837efa414acf9c972e627c368574",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 8, global step 7874: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "7afb7261e1ed4c8f83a2934ca6ab261b",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 9, global step 8312: 'val_roc_auc' was not in top 1\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "c8c46e971bd641648589482063ac6a9b",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Validation: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Epoch 9, global step 8749: 'val_roc_auc' was not in top 1\n",
      "`Trainer.fit` stopped: `max_epochs=10` reached.\n",
      "AutoMM has created your model 🎉🎉🎉\n",
      "\n",
      "- To load the model, use the code below:\n",
      "    ```python\n",
      "    from autogluon.multimodal import MultiModalPredictor\n",
      "    predictor = MultiModalPredictor.load(\"/mnt/Ry/llm/tmp/after_transfer_zh\")\n",
      "    ```\n",
      "\n",
      "- You can open a terminal and launch Tensorboard to visualize the training log:\n",
      "    ```shell\n",
      "    # Assume you have installed tensorboard\n",
      "    tensorboard --logdir /mnt/Ry/llm/tmp/after_transfer_zh\n",
      "    ```\n",
      "\n",
      "- If you are not satisfied with the model, try to increase the training time, \n",
      "adjust the hyperparameters (https://auto.gluon.ai/stable/tutorials/multimodal/advanced_topics/customization.html),\n",
      "or post issues on GitHub: https://github.com/autogluon/autogluon\n",
      "\n",
      "\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Training time (s): 1891.35\n"
     ]
     }
   ],
   "source": [
    "# Train on the Chinese corpus\n",
    "start = time.time()\n",
    "\n",
    "from autogluon.multimodal import MultiModalPredictor\n",
    "\n",
    "# Continue training the previous predictor, saving the new model separately\n",
    "save_model_path = \"./tmp/after_transfer_zh\"  # plain string: the f-prefix had no placeholders\n",
    "\n",
    "# same parameter-efficient (IA)^3 setup as the English run: https://arxiv.org/pdf/2205.05638\n",
    "predictor.fit(\n",
    "    train_zh_df,  # Chinese training corpus\n",
    "    save_path=save_model_path,\n",
    "    presets=\"multilingual\",\n",
    "    time_limit=1800,  # cap training at 30 minutes\n",
    "    hyperparameters={\n",
    "        \"model.hf_text.checkpoint_name\": \"google/flan-t5-xl\",\n",
    "        \"model.hf_text.gradient_checkpointing\": True,\n",
    "        \"model.hf_text.low_cpu_mem_usage\": True,\n",
    "        \"optimization.efficient_finetune\": \"ia3_bias\",\n",
    "        \"optimization.lr_decay\": 0.9,\n",
    "        \"optimization.learning_rate\": 3e-03,\n",
    "        \"optimization.end_lr\": 3e-03,\n",
    "        \"optimization.max_epochs\": 10,\n",
    "        \"optimization.warmup_steps\": 0,\n",
    "        \"env.batch_size\": 1,\n",
    "        \"env.eval_batch_size_ratio\": 1\n",
    "    })\n",
    "\n",
    "end = time.time()\n",
    "print(\"Training time (s): %.2f\" % (end - start))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### After training, evaluate on Chinese reviews"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "7b80d573d90a4cf3be15bac7d422bec8",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Predicting: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Chinese test samples: 1000, ROC AUC: 65.10%\n",
      "Chinese evaluation time (s): 22.51\n"
     ]
    }
   ],
   "source": [
    "start = time.time()\n",
    "score_in_zh = predictor.evaluate(test_zh_df)\n",
    "print(f'Chinese test samples: {test_zh_df.shape[0]}, ROC AUC:', \"{:.2f}%\".format(score_in_zh['roc_auc'] * 100))\n",
    "end = time.time()\n",
    "print(\"Chinese evaluation time (s): %.2f\" % (end - start))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Load the transfer-trained model"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "d13105dc93524db19645e3433cd3ca52",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Loading checkpoint shards:   0%|          | 0/2 [00:00<?, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Load pretrained checkpoint: /mnt/Ry/llm/tmp/after_transfer_zh/model.ckpt\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Model load time (s): 50.94\n"
     ]
    }
   ],
   "source": [
    "# Load the model that has already been transfer-trained\n",
    "\n",
    "from autogluon.multimodal import MultiModalPredictor\n",
    "\n",
    "start = time.time()\n",
    "\n",
    "save_model_path = \"./tmp/after_transfer_zh\"  # plain string: the f-prefix had no placeholders\n",
    "predictor = MultiModalPredictor.load(save_model_path)\n",
    "\n",
    "end = time.time()\n",
    "print(\"Model load time (s): %.2f\" % (end - start))"
   ]
  },
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "### Spot-check the trained model on a few reviews"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "dd6900cf2de942c3ab5dfcd64c40befa",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Predicting: 0it [00:00, ?it/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Text: 还是很好吃~包装特别精致,哈哈哈,都舍不得拆开\n",
      "Label: 正向  Prediction: 正向\n",
      "\n",
      "\n",
      "Text: 评价晚了点，服务很好很赞\n",
      "Label: 正向  Prediction: 负向\n",
      "\n",
      "\n",
      "Text: 送单快，就是路不熟。应该多跑跑\n",
      "Label: 正向  Prediction: 负向\n",
      "\n",
      "\n",
      "Text: 还可以，比预计时间晚了一小时到，不过还好\n",
      "Label: 负向  Prediction: 正向\n",
      "\n",
      "\n",
      "Text: 建议厨师烤几串肉筋自己尝尝看，看你能嚼的动么。这种边角料肉也能拿出来烤着给人吃？我真怀疑如果有人在店里点了肉筋，发现是边角料肉，不会砸场子？\n",
      "Label: 负向  Prediction: 负向\n",
      "\n",
      "\n",
      "Prediction time (s): 4.50\n"
     ]
    }
   ],
   "source": [
    "# Sampling from the English test set (kept for reference)\n",
    "# list_of_dicts = test_en_df.sample(5, random_state=12345).to_dict(orient='records')\n",
    "\n",
    "# sentences = [\n",
    "#     \"What an amazing account of a surgeon's life!  I loved this book.  Starkly honest, the writer does not spare herself.  Gripping, heartrending accounts of former patients and the training of a surgeon.  I couldn't put it down.  Why aren't all surgeons so caring?  Some of the reasons are explained in this book\",\n",
    "#     'I have read several of Nicholas Sparks\\' books, but this one was different for me.  It was a very very slow start.  I was more than 150 pages into it before I got interested in the plot.  I had even considered quitting reading it at times.  It ended up being an okay story, but nothing that I couldn\\'t have lived without reading.  What I did enjoy is the way Mr. Sparks writes about male characters in the romantic way women wish men really were.  I would suggest another read from Mr. Sparks like \"The Notebook\" or \"The Wedding\" if you want a good romantic read.'\n",
    "# ]\n",
    "\n",
    "# Sample a few reviews from the Chinese test set\n",
    "list_of_dicts = test_zh_df.sample(5, random_state=321).to_dict(orient='records')\n",
    "\n",
    "sentences = [d['text'] for d in list_of_dicts]\n",
    "\n",
    "# print( list_of_dicts, sentences )\n",
    "\n",
    "# sentences = [\n",
    "#     \"怎么说……送来的人态度很好，但是东西太难吃……春卷和南瓜饼还可以\",\n",
    "#     '外卖赞。,菜不错就是请注意包装'\n",
    "# ]\n",
    "\n",
    "start = time.time()\n",
    "predictions = predictor.predict({'text': sentences})\n",
    "end = time.time()\n",
    "\n",
    "for index, record in enumerate(list_of_dicts):\n",
    "    print(f\"Text: {record['text']}\\nLabel: {label_mapping.get(record['label'])}  Prediction: {label_mapping.get(predictions[index])}\\n\\n\")\n",
    "\n",
    "print(\"Prediction time (s): %.2f\" % (end - start))\n"
   ]
  },
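  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`predictor.predict` returns hard labels. For the auto-reply scenario it can help to work with per-class probabilities instead (AutoGluon's `MultiModalPredictor` also exposes `predict_proba`) and route uncertain reviews to a human. The sketch below uses made-up probabilities standing in for `predict_proba` output:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Made-up probabilities standing in for predictor.predict_proba(...) output.\n",
    "probs = [0.97, 0.55, 0.48, 0.12]  # P(review is positive)\n",
    "THRESHOLD = 0.8                   # anything less certain goes to a human\n",
    "\n",
    "def route(p, thresh=THRESHOLD):\n",
    "    if p >= thresh:\n",
    "        return 'positive'\n",
    "    if p <= 1 - thresh:\n",
    "        return 'negative'\n",
    "    return 'needs human review'\n",
    "\n",
    "print([route(p) for p in probs])"
   ]
  },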
  {
   "attachments": {},
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 4. Verdict on the face-off\n",
    "> On broad multi-task language understanding (MMLU) there is a clear gap between the 3-billion-parameter Flan-T5 and the 175-billion-parameter ChatGPT. **But if what you need is a large language model for a specific industry or knowledge domain, transfer learning on top of a pretrained model is the best choice.** Starting from an open-source LLM, we trained our own model that judges the sentiment of reviews more accurately. Applied to automatic review replies, it can appease customers on negative reviews and encourage further purchases on positive ones; the same recipe can be trained for other application scenarios.\n",
    "\n",
    "> ChatGPT cannot be fully customized to your needs, nor can you rely on ChatGPT itself to train your own dedicated model (role definitions and in-context prompts are only shallow aids, not real customization). And if you are willing to spend a few more lattes' worth of compute, you can of course start from a pretrained model with 10 billion or more parameters for even better results."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.6"
  },
  "widgets": {
   "state": {},
   "version": "1.1.2"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2
}
