{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "9fc4289b",
   "metadata": {
    "tags": []
   },
   "source": [
    "# 文本摘要"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4dfff4dc",
   "metadata": {},
   "source": [
    "文本摘要任务指的是用精炼的文本来概括整篇文章的大意，使得用户能够通过阅读摘要来大致了解文章的主要内容。"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4c011573",
   "metadata": {},
   "source": [
    "从实现手法来说，文本摘要任务主要分为以下三种：\n",
    "\n",
    "- 抽取式摘要：从原文档中提取现成的句子作为摘要句。\n",
    "- 压缩式摘要：对原文档的冗余信息进行过滤，压缩文本作为摘要。\n",
    "- 生成式摘要：基于NLG技术，根据源文档内容，由算法模型自己生成自然语言描述。"
   ]
  },
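  {
   "cell_type": "markdown",
   "id": "a7c21f30",
   "metadata": {},
   "source": [
    "To make the extractive approach concrete, the sketch below scores sentences by average word frequency and keeps the top ones. It is a minimal, dependency-free illustration (the toy scoring scheme and the English example are our own, not a production method):\n",
    "\n",
    "```python\n",
    "import re\n",
    "from collections import Counter\n",
    "\n",
    "def extractive_summary(text, n=1):\n",
    "    # Split into sentences and score each by average word frequency\n",
    "    sentences = [s for s in re.split(r'(?<=[.!?])\\s+', text.strip()) if s]\n",
    "    freq = Counter(re.findall(r'\\w+', text.lower()))\n",
    "    def score(s):\n",
    "        ws = re.findall(r'\\w+', s.lower())\n",
    "        return sum(freq[w] for w in ws) / max(len(ws), 1)\n",
    "    top = sorted(sentences, key=score, reverse=True)[:n]\n",
    "    # Emit the selected sentences in their original order\n",
    "    return ' '.join(s for s in sentences if s in top)\n",
    "\n",
    "doc = 'The cat sat on the mat. The cat chased a mouse. Dogs are loyal animals.'\n",
    "print(extractive_summary(doc, n=1))  # The cat sat on the mat.\n",
    "```"
   ]
  },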
  {
   "cell_type": "markdown",
   "id": "f787db8a",
   "metadata": {},
   "source": [
    "## 一种基于T5模型的文本生成"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "fae3b217",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "E:\\env\\Anaconda\\lib\\site-packages\\transformers\\convert_slow_tokenizer.py:454: UserWarning: The sentencepiece tokenizer that you are converting to a fast tokenizer uses the byte fallback option which is not implemented in the fast tokenizers. In practice this means that the fast version of the tokenizer can produce unknown tokens whereas the sentencepiece version would have converted these unknown tokens into a sequence of byte tokens matching the original piece of text.\n",
      "  warnings.warn(\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "原始文本:  京信数据科技有限公司（下称京信数科）是世界500强中央管理的国有骨干企业“中国电子信息产业集团（CEC）”旗下投资公司，中国电子（CEC）连续八年入选世界500强企业，拥有36家二级企业、控股上市公司15家，员工总人数11万人。京信数科成立于2007年，注册资本5000万元，依托中国电子强大的产业、技术、资金能力，为政府和企业提供大数据服务，是全国城市大数据先驱的服务商之一。\n",
      "摘要文本:  中国国家统计局周三(1月8日)发布消息称,京信数据科技有限公司(简称“数科”)是世界500强中央管理的国有骨干企业。\n"
     ]
    }
   ],
   "source": [
    "import re\n",
    "import torch\n",
    "from transformers import AutoTokenizer, AutoModelForSeq2SeqLM\n",
    " \n",
    "# 载入模型 \n",
    "tokenizer = AutoTokenizer.from_pretrained(\"D:/bert/csebuetnlp/mT5_multilingual_XLSum\")\n",
    "model = AutoModelForSeq2SeqLM.from_pretrained(\"D:/bert/csebuetnlp/mT5_multilingual_XLSum\")\n",
    "\n",
    "WHITESPACE_HANDLER = lambda k: re.sub('\\s+', ' ', re.sub('\\n+', ' ', k.strip()))\n",
    "\n",
    "text = '京信数据科技有限公司（下称京信数科）是世界500强中央管理的国有骨干企业“中国电子信息产业集团（CEC）”旗下投资公司，中国电子（CEC）连续八年入选世界500强企业，拥有36家二级企业、控股上市公司15家，员工总人数11万人。京信数科成立于2007年，注册资本5000万元，依托中国电子强大的产业、技术、资金能力，为政府和企业提供大数据服务，是全国城市大数据先驱的服务商之一。'\n",
    "text = WHITESPACE_HANDLER(text)\n",
    "input_ids = tokenizer([text], return_tensors=\"pt\", padding=\"max_length\", truncation=True, max_length=512)[\"input_ids\"]\n",
    "\n",
    "# 生成结果文本\n",
    "output_ids = model.generate(input_ids=input_ids, max_length=84, no_repeat_ngram_size=2, num_beams=4)[0]\n",
    "# 模型结果转文本\n",
    "output_text = tokenizer.decode(output_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)\n",
    "\n",
    "print(\"原始文本: \", text)\n",
    "print(\"摘要文本: \", output_text)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c9cc5e2b",
   "metadata": {},
   "source": [
    "## 基于OpenAI接口的文本摘要实验"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "d7088fdb",
   "metadata": {},
   "outputs": [],
   "source": [
    "import os\n",
    "import openai\n",
    "# 导入自己的API key\n",
    "openai.api_key = os.environ.get(\"OPENAI_API_KEY\")\n",
    "MODEL = \"gpt-3.5-turbo\""
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e2657a0e",
   "metadata": {},
   "source": [
    "### GPT3.5"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "d93270c3",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "原始文本:  京信数据科技有限公司（下称京信数科）是世界500强中央管理的国有骨干企业“中国电子信息产业集团（CEC）”旗下投资公司，中国电子（CEC）连续八年入选世界500强企业，拥有36家二级企业、控股上市公司15家，员工总人数11万人。京信数科成立于2007年，注册资本5000万元，依托中国电子强大的产业、技术、资金能力，为政府和企业提供大数据服务，是全国城市大数据先驱的服务商之一。\n",
      "摘要文本:  京信数科是中国电子信息产业集团（CEC）旗下投资公司，拥有36家二级企业、控股上市公司15家，提供大数据服务。\n",
      "摘要文本长度:  55\n"
     ]
    }
   ],
   "source": [
    "\n",
    "def summarize_text(text):\n",
    "    response = openai.Completion.create(\n",
    "        engine=\"text-davinci-003\",\n",
    "        prompt=f\"请对以下文本进行总结，注意总结的凝炼性，将总结字数控制在20个字以内:\\n{text}\",\n",
    "        temperature=0.7,\n",
    "        max_tokens=500,\n",
    "    )\n",
    "\n",
    "    summarized_text = response.choices[0].text.strip()\n",
    "    return summarized_text\n",
    "\n",
    "text = '京信数据科技有限公司（下称京信数科）是世界500强中央管理的国有骨干企业“中国电子信息产业集团（CEC）”旗下投资公司，中国电子（CEC）连续八年入选世界500强企业，拥有36家二级企业、控股上市公司15家，员工总人数11万人。京信数科成立于2007年，注册资本5000万元，依托中国电子强大的产业、技术、资金能力，为政府和企业提供大数据服务，是全国城市大数据先驱的服务商之一。'\n",
    "output_text = summarize_text(text)\n",
    "print(\"原始文本: \", text)\n",
    "print(\"摘要文本: \", output_text)\n",
    "print(\"摘要文本长度: \", len(output_text))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6f0a3c5d",
   "metadata": {},
   "source": [
    "### ChatGPT"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "5179d84d",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "原始文本:  京信数据科技有限公司（下称京信数科）是世界500强中央管理的国有骨干企业“中国电子信息产业集团（CEC）”旗下投资公司，中国电子（CEC）连续八年入选世界500强企业，拥有36家二级企业、控股上市公司15家，员工总人数11万人。京信数科成立于2007年，注册资本5000万元，依托中国电子强大的产业、技术、资金能力，为政府和企业提供大数据服务，是全国城市大数据先驱的服务商之一。\n",
      "摘要文本:  京信数科：中国电子旗下投资公司，提供大数据服务，城市大数据先驱之一。\n",
      "摘要文本长度:  34\n"
     ]
    }
   ],
   "source": [
    "\n",
    "def summarize_text(text):\n",
    "    content = f\"请对以下文本进行总结，注意总结的凝炼性，将总结字数控制在20个字以内:\\n{text}\"\n",
    "    response = openai.ChatCompletion.create(\n",
    "        model=\"gpt-3.5-turbo\", \n",
    "        messages=[{\"role\": \"user\", \"content\": content}]\n",
    "    )\n",
    "    summarized_text = response.get(\"choices\")[0].get(\"message\").get(\"content\")\n",
    "    return summarized_text\n",
    "\n",
    "text = '京信数据科技有限公司（下称京信数科）是世界500强中央管理的国有骨干企业“中国电子信息产业集团（CEC）”旗下投资公司，中国电子（CEC）连续八年入选世界500强企业，拥有36家二级企业、控股上市公司15家，员工总人数11万人。京信数科成立于2007年，注册资本5000万元，依托中国电子强大的产业、技术、资金能力，为政府和企业提供大数据服务，是全国城市大数据先驱的服务商之一。'\n",
    "output_text = summarize_text(text)\n",
    "\n",
    "print(\"原始文本: \", text)\n",
    "print(\"摘要文本: \", output_text)\n",
    "print(\"摘要文本长度: \", len(output_text))\n",
    "# 注意，chatgpt并不能完美限制摘要输出的字数"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0b27e5ec",
   "metadata": {},
   "source": [
    "### 基于自定义语料fine tune(训练)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "bbc23450",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "text/html": [
       "<div>\n",
       "<style scoped>\n",
       "    .dataframe tbody tr th:only-of-type {\n",
       "        vertical-align: middle;\n",
       "    }\n",
       "\n",
       "    .dataframe tbody tr th {\n",
       "        vertical-align: top;\n",
       "    }\n",
       "\n",
       "    .dataframe thead th {\n",
       "        text-align: right;\n",
       "    }\n",
       "</style>\n",
       "<table border=\"1\" class=\"dataframe\">\n",
       "  <thead>\n",
       "    <tr style=\"text-align: right;\">\n",
       "      <th></th>\n",
       "      <th>prompt</th>\n",
       "      <th>completion</th>\n",
       "    </tr>\n",
       "  </thead>\n",
       "  <tbody>\n",
       "    <tr>\n",
       "      <th>0</th>\n",
       "      <td>提出了一种新的保细节的变形算法,可以使网格模型进行尽量刚性的变形,以减少变形中几何细节的扭曲...</td>\n",
       "      <td>保细节的网格刚性变形算法</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>1</th>\n",
       "      <td>实时服装动画生成技术能够为三维虚拟角色实时地生成逼真的服装动态效果,在游戏娱乐、虚拟服装设计...</td>\n",
       "      <td>一种基于混合模型的实时虚拟人服装动画方法</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>2</th>\n",
       "      <td>提出一种基于模糊主分量分析技术(FPCA)的人脸遮挡检测与去除方法.首先,有遮挡人脸被投影到...</td>\n",
       "      <td>人脸遮挡区域检测与重建</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>3</th>\n",
       "      <td>图像匹配技术在计算机视觉、遥感和医学图像分析等领域有着广泛的应用背景.针对传统的相关匹配算法...</td>\n",
       "      <td>一种基于奇异值分解的图像匹配算法</td>\n",
       "    </tr>\n",
       "    <tr>\n",
       "      <th>4</th>\n",
       "      <td>提出了一种基于片相似性的各项异性扩散图像去噪方法.传统的各项异性图像去噪方法都是基于单个像素...</td>\n",
       "      <td>片相似性各项异性扩散图像去噪</td>\n",
       "    </tr>\n",
       "  </tbody>\n",
       "</table>\n",
       "</div>"
      ],
      "text/plain": [
       "                                              prompt            completion\n",
       "0  提出了一种新的保细节的变形算法,可以使网格模型进行尽量刚性的变形,以减少变形中几何细节的扭曲...          保细节的网格刚性变形算法\n",
       "1  实时服装动画生成技术能够为三维虚拟角色实时地生成逼真的服装动态效果,在游戏娱乐、虚拟服装设计...  一种基于混合模型的实时虚拟人服装动画方法\n",
       "2  提出一种基于模糊主分量分析技术(FPCA)的人脸遮挡检测与去除方法.首先,有遮挡人脸被投影到...           人脸遮挡区域检测与重建\n",
       "3  图像匹配技术在计算机视觉、遥感和医学图像分析等领域有着广泛的应用背景.针对传统的相关匹配算法...      一种基于奇异值分解的图像匹配算法\n",
       "4  提出了一种基于片相似性的各项异性扩散图像去噪方法.传统的各项异性图像去噪方法都是基于单个像素...        片相似性各项异性扩散图像去噪"
      ]
     },
     "execution_count": 5,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "# 加载数据\n",
    "import json\n",
    "import pandas as pd\n",
    "\n",
    "with open('data/csl_data.json', 'r', encoding='utf-8') as f:\n",
    "    data = json.load(f)\n",
    "df = pd.DataFrame(data)\n",
    "df = df[['content', 'title']]\n",
    "df.columns = [\"prompt\", \"completion\"]\n",
    "df_train = df.iloc[:500]\n",
    "df_train.head(5)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "id": "075e65a8",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 转换训练数据\n",
    "df_train.to_json(\"data/csl_summarize_finetune.jsonl\", orient='records', lines=True, force_ascii=False)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "f458ee33",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Analyzing...\n",
      "\n",
      "- Your file contains 500 prompt-completion pairs\n",
      "- More than a third of your `prompt` column/key is uppercase. Uppercase prompts tends to perform worse than a mixture of case encountered in normal language. We recommend to lower case the data if that makes sense in your domain. See https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset for more details\n",
      "- More than a third of your `completion` column/key is uppercase. Uppercase completions tends to perform worse than a mixture of case encountered in normal language. We recommend to lower case the data if that makes sense in your domain. See https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset for more details\n",
      "- Your data does not contain a common separator at the end of your prompts. Having a separator string appended to the end of the prompt makes it clearer to the fine-tuned model where the completion should begin. See https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset for more detail and examples. If you intend to do open-ended generation, then you should leave the prompts empty\n",
      "- Your data does not contain a common ending at the end of your completions. Having a common ending string appended to the end of the completion makes it clearer to the fine-tuned model where the completion should end. See https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset for more detail and examples.\n",
      "- The completion should start with a whitespace character (` `). This tends to produce better results due to the tokenization we use. See https://platform.openai.com/docs/guides/fine-tuning/preparing-your-dataset for more details\n",
      "\n",
      "Based on the analysis we will perform the following actions:\n",
      "- [Recommended] Lowercase all your data in column/key `prompt` [Y/n]: Y\n",
      "- [Recommended] Lowercase all your data in column/key `completion` [Y/n]: Y\n",
      "- [Recommended] Add a suffix separator ` ->` to all prompts [Y/n]: Y\n",
      "- [Recommended] Add a suffix ending `\\n` to all completions [Y/n]: Y\n",
      "- [Recommended] Add a whitespace character to the beginning of the completion [Y/n]: Y\n",
      "\n",
      "\n",
      "Your data will be written to a new JSONL file. Proceed [Y/n]: Y\n",
      "\n",
      "Wrote modified file to `data/csl_summarize_finetune_prepared.jsonl`\n",
      "Feel free to take a look!\n",
      "\n",
      "Now use that file when fine-tuning:\n",
      "> openai api fine_tunes.create -t \"data/csl_summarize_finetune_prepared.jsonl\"\n",
      "\n",
      "After you’ve fine-tuned a model, remember that your prompt has to end with the indicator string ` ->` for the model to start generating completions, rather than continuing with the prompt. Make sure to include `stop=[\"\\n\"]` so that the generated texts ends at the expected place.\n",
      "Once your model starts training, it'll approximately take 9.31 minutes to train a `curie` model, and less for `ada` and `babbage`. Queue will approximately take half an hour per job ahead of you.\n"
     ]
    }
   ],
   "source": [
    "# 调用openai的命令，检查训练数据\n",
    "!openai tools fine_tunes.prepare_data -f data/csl_summarize_finetune.jsonl -q"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "e07f9300",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "done.\n"
     ]
    }
   ],
   "source": [
    "# 设置环境变量\n",
    "import os\n",
    "os.environ.setdefault(\"OPENAI_API_KEY\", os.environ.get(\"OPENAI_API_KEY\")) \n",
    "print('done.')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "e56fe98e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "Uploaded file from ./data/csl_summarize_finetune_prepared.jsonl: file-q7VMO3yqQopfqUiTmIyj8W7B\n",
      "Created fine-tune: ft-vsAK7mm27aSigEwjBkFE8NDD\n",
      "Streaming events until fine-tuning is complete...\n",
      "\n",
      "(Ctrl-C will interrupt the stream, but not cancel the fine-tune)\n",
      "[2023-05-28 11:38:24] Created fine-tune: ft-vsAK7mm27aSigEwjBkFE8NDD\n",
      "\n",
      "Stream interrupted (client disconnected).\n",
      "To resume the stream, run:\n",
      "\n",
      "  openai api fine_tunes.follow -i ft-vsAK7mm27aSigEwjBkFE8NDD\n",
      "\n"
     ]
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\n",
      "Upload progress:   0%|          | 0.00/380k [00:00<?, ?it/s]\n",
      "Upload progress: 100%|██████████| 380k/380k [00:00<00:00, 380Mit/s]\n"
     ]
    }
   ],
   "source": [
    "# 创建模型，训练模型\n",
    "!openai api fine_tunes.create \\\n",
    "    -t \"./data/csl_summarize_finetune_prepared.jsonl\" \\\n",
    "    -m ada\\\n",
    "    --no_check_if_files_exist"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "7450eaa2",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "{\n",
      "  \"created_at\": 1685245104,\n",
      "  \"events\": [\n",
      "    {\n",
      "      \"created_at\": 1685245104,\n",
      "      \"level\": \"info\",\n",
      "      \"message\": \"Created fine-tune: ft-vsAK7mm27aSigEwjBkFE8NDD\",\n",
      "      \"object\": \"fine-tune-event\"\n",
      "    }\n",
      "  ],\n",
      "  \"fine_tuned_model\": null,\n",
      "  \"hyperparams\": {\n",
      "    \"batch_size\": null,\n",
      "    \"learning_rate_multiplier\": null,\n",
      "    \"n_epochs\": 4,\n",
      "    \"prompt_loss_weight\": 0.01\n",
      "  },\n",
      "  \"id\": \"ft-vsAK7mm27aSigEwjBkFE8NDD\",\n",
      "  \"model\": \"ada\",\n",
      "  \"object\": \"fine-tune\",\n",
      "  \"organization_id\": \"org-iZamIfsEWe1quIVhTBkQgMRQ\",\n",
      "  \"result_files\": [],\n",
      "  \"status\": \"pending\",\n",
      "  \"training_files\": [\n",
      "    {\n",
      "      \"bytes\": 380384,\n",
      "      \"created_at\": 1685245104,\n",
      "      \"filename\": \"./data/csl_summarize_finetune_prepared.jsonl\",\n",
      "      \"id\": \"file-q7VMO3yqQopfqUiTmIyj8W7B\",\n",
      "      \"object\": \"file\",\n",
      "      \"purpose\": \"fine-tune\",\n",
      "      \"status\": \"processed\",\n",
      "      \"status_details\": null\n",
      "    }\n",
      "  ],\n",
      "  \"updated_at\": 1685245104,\n",
      "  \"validation_files\": []\n",
      "}\n"
     ]
    }
   ],
   "source": [
    "# 根据上一步的输出，得到fine tune运行的key ft-vsAK7mm27aSigEwjBkFE8NDD，\n",
    "# 我们可以通过get来获取当前执行进度，\n",
    "# 如发现与openai的连接断开，可通过follow重新排队连接\n",
    "# !openai api fine_tunes.follow -i ft-LoKi6mOxlkOtfZcZTrmivKDa\n",
    "!openai api fine_tunes.get -i ft-vsAK7mm27aSigEwjBkFE8NDD"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "9468e000",
   "metadata": {},
   "outputs": [
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "\u001b[91mError:\u001b[0m No fine-tune job: ft-vsAK7mm27aSigEwjBkFE8NDD (HTTP status code: 404)\n"
     ]
    }
   ],
   "source": [
    "# 保存openai fine tune过程的记录\n",
    "!openai api fine_tunes.results -i ft-vsAK7mm27aSigEwjBkFE8NDD > data/metric.csv"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8ee277e7",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 查看结果\n",
    "def summarize_text(text, model_name):\n",
    "    response = openai.Completion.create(\n",
    "        engine=model_name,\n",
    "        prompt=f\"请对以下文本进行总结，注意总结的凝炼性，将总结字数控制在20个字以内:\\n{text}\",\n",
    "        temperature=0.7,\n",
    "        max_tokens=100,\n",
    "    )\n",
    "\n",
    "    summarized_text = response.choices[0].text.strip()\n",
    "    return summarized_text\n",
    "\n",
    "text = '京信数据科技有限公司（下称京信数科）是世界500强中央管理的国有骨干企业“中国电子信息产业集团（CEC）”旗下投资公司，中国电子（CEC）连续八年入选世界500强企业，拥有36家二级企业、控股上市公司15家，员工总人数11万人。京信数科成立于2007年，注册资本5000万元，依托中国电子强大的产业、技术、资金能力，为政府和企业提供大数据服务，是全国城市大数据先驱的服务商之一。'\n",
    "print(\"原始文本: \", text)\n",
    "print(\"ada摘要文本: \", summarize_text(text, model_name='ada'))\n",
    "print(\"ada fine-tune摘要文本: \", summarize_text(text, model_name='ada:ft-personal-2023-04-15-13-29-50'))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "75ea10ac",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true,
    "tags": []
   },
   "source": [
    "# 文本纠错任务"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "225b953e",
   "metadata": {},
   "source": [
    "常见的错误类型包括：\n",
    "\n",
    "- 拼写错误：中文课程->中文磕碜；明天会议->明天会易\n",
    "- 语法错误：他昨天去参加会议了->他昨天将要去参加会议\n",
    "- 标点符号错误：您好，请多指教！->您好,请多指教???\n",
    "- 知识性错误：上海黄浦区->上海黄埔区\n",
    "- 重复性错误：您好，请问您今天有空吗？->您好，请问您今天有空吗吗吗吗吗吗\n",
    "- 遗漏性错误：他昨天去参加会议了->他昨天去参加了\n",
    "- 语序性错误：他昨天去参加会议了->他昨天去会议参加了\n",
    "- 多语言错误：他昨天去参加会议了->他昨天去参加huiyi了\n",
    "- ……"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "b9b46356",
   "metadata": {},
   "source": [
    "常见的文本纠错方式包括\n",
    "1. 基于规则的文本纠错技术\n",
    "2. 基于语言模型的文本纠错技术\n",
    "3. 基于MLM的文本纠错技术\n",
    "4. 基于NLG的文本纠错技术"
   ]
  },
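  {
   "cell_type": "markdown",
   "id": "b4d87e12",
   "metadata": {},
   "source": [
    "Of these four approaches, the cells below demonstrate the MLM route (MacBERT4CSC) and the NLG route (ChatGPT). For contrast, here is a minimal sketch of the rule-based idea: scan the text against a confusion table of known wrong/right pairs. The pairs here are invented for illustration; real systems use large curated confusion sets and dictionaries:\n",
    "\n",
    "```python\n",
    "# Toy rule-based corrector driven by a hypothetical confusion table\n",
    "CONFUSION = {'磕碜': '课程', '会易': '会议'}\n",
    "\n",
    "def rule_based_correct(text):\n",
    "    details = []\n",
    "    for wrong, right in CONFUSION.items():\n",
    "        start = text.find(wrong)\n",
    "        if start != -1:\n",
    "            text = text.replace(wrong, right)\n",
    "            details.append((wrong, right, start, start + len(wrong)))\n",
    "    return text, details\n",
    "\n",
    "corrected, details = rule_based_correct('我想报名中文磕碜')\n",
    "print(corrected)  # 我想报名中文课程\n",
    "print(details)    # [('磕碜', '课程', 6, 8)]\n",
    "```"
   ]
  },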
  {
   "cell_type": "markdown",
   "id": "7daf6474",
   "metadata": {},
   "source": [
    "## 基于pycorrector的文本纠错"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "e59543b4",
   "metadata": {},
   "source": [
    "pycorrector是一个文本纠错工具集，内置了KenLM、MacBERT、Transformer等多种文本纠错模型。\n",
    "\n",
    "- pycorrector的项目地址：https://github.com/shibing624/pycorrector\n",
    "- 一个基于MacBERT的线上Demo：https://huggingface.co/spaces/shibing624/pycorrector\n",
    "\n",
    "pycorrector不仅可以通过“import pycorrector”调用，也提供了Huggingface的预训练模型调用方式，以下是一个基于Huggingface的MacBERT4CSC调用样例。"
   ]
  },
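  {
   "cell_type": "markdown",
   "id": "c9f31a55",
   "metadata": {},
   "source": [
    "For reference, the direct `import pycorrector` route mentioned above looks roughly like the sketch below. Note this is a sketch: the default KenLM model is downloaded on first use, and the return format differs across pycorrector versions (the tuple form shown here follows the pre-1.0 API):\n",
    "\n",
    "```python\n",
    "import pycorrector\n",
    "\n",
    "# Pre-1.0 API: returns the corrected sentence and a list of\n",
    "# (wrong, right, begin_idx, end_idx) edits\n",
    "corrected_sent, detail = pycorrector.correct('大家好,一起来完一下基于pycorrector的文本纠错！')\n",
    "print(corrected_sent)\n",
    "print(detail)\n",
    "```"
   ]
  },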
  {
   "cell_type": "code",
   "execution_count": 7,
   "id": "c7adb99a",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "原始文本:  大家好,一起来完一下基于pycorrector的文本纠错！\n",
      "纠错文本:  大家好,一起来看一下基于pycorrector的文本纠错！\n"
     ]
    }
   ],
   "source": [
    "import torch\n",
    "from transformers import BertTokenizer, BertForMaskedLM\n",
    "\n",
    "# 载入模型\n",
    "tokenizer = BertTokenizer.from_pretrained(\"D:/bert/shibing624/macbert4csc-base-chinese\")\n",
    "model = BertForMaskedLM.from_pretrained(\"D:/bert/shibing624/macbert4csc-base-chinese\")\n",
    "\n",
    "text = \"大家好,一起来完一下基于pycorrector的文本纠错！\"\n",
    "input_ids = tokenizer([text], padding=True, return_tensors='pt')\n",
    "\n",
    "# 生成结果文本\n",
    "with torch.no_grad():\n",
    "    outputs = model(**input_ids)\n",
    "output_ids = torch.argmax(outputs.logits, dim=-1)\n",
    "output_text = tokenizer.decode(output_ids[0], skip_special_tokens=True).replace(' ', '')\n",
    "\n",
    "print(\"原始文本: \", text)\n",
    "print(\"纠错文本: \", output_text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "3b5a3b94",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[('完', '看', 7, 8)]\n"
     ]
    }
   ],
   "source": [
    "# 查看修改点\n",
    "import operator\n",
    "def get_errors(corrected_text, origin_text):\n",
    "    sub_details = []\n",
    "    for i, ori_char in enumerate(origin_text):\n",
    "        if ori_char in [' ', '“', '”', '‘', '’', '琊', '\\n', '…', '—', '擤']:\n",
    "            # add unk word\n",
    "            corrected_text = corrected_text[:i] + ori_char + corrected_text[i:]\n",
    "            continue\n",
    "        if i >= len(corrected_text):\n",
    "            continue\n",
    "        if ori_char != corrected_text[i]:\n",
    "            if ori_char.lower() == corrected_text[i]:\n",
    "                # pass english upper char\n",
    "                corrected_text = corrected_text[:i] + ori_char + corrected_text[i + 1:]\n",
    "                continue\n",
    "            sub_details.append((ori_char, corrected_text[i], i, i + 1))\n",
    "    sub_details = sorted(sub_details, key=operator.itemgetter(2))\n",
    "    return corrected_text, sub_details\n",
    "\n",
    "correct_text, details = get_errors(output_text[:len(text)], text)\n",
    "print(details)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "944dd181",
   "metadata": {},
   "source": [
    "## 基于OpenAI接口的文本纠错实验"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "cad2ecf7",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "原始文本:  大家好,一起来完一下基于pycorrector的文本纠错！\n",
      "纠错文本:  大家好，一起来完成一下基于pycorrector的文本纠错！\n"
     ]
    }
   ],
   "source": [
    "def correct_text(text):\n",
    "    content = f\"请对以下文本进行文本纠错:\\n{text}\"\n",
    "    response = openai.ChatCompletion.create(\n",
    "        model=\"gpt-3.5-turbo\", \n",
    "        messages=[{\"role\": \"user\", \"content\": content}]\n",
    "    )\n",
    "    corrected_text = response.get(\"choices\")[0].get(\"message\").get(\"content\")\n",
    "    return corrected_text\n",
    "\n",
    "text = \"大家好,一起来完一下基于pycorrector的文本纠错！\"\n",
    "output_text = correct_text(text)\n",
    "print(\"原始文本: \", text)\n",
    "print(\"纠错文本: \", output_text)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c80a624a",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true,
    "tags": []
   },
   "source": [
    "# 机器翻译"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0571002a",
   "metadata": {},
   "source": [
    "从机器翻译的发展历程来看，主要经历了如下几个阶段：\n",
    "\n",
    "- 基于规则的方法\n",
    "- 基于统计的方法\n",
    "- 基于神经网络的方法"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "84e80b48",
   "metadata": {},
   "source": [
    "基于规则的方法需要建立各类知识库，描述源语言和目标语言的词法、句法以及语义知识，有时知识无关的世界知识。\n",
    "\n",
    "基于统计的方法认为对于一条源语言 $R$，任何一条目标语言 $T$ 都可能是它的译文，只是可能性有高有低。对于源语言中的每个词 $r_i$ 及目标语言中的每个词 $t_j$，判断词对齐的概率，再通过期望最大算法（如EM算法）得到最大词对齐概率的对齐方式。这便是基于词的翻译模型。显然，将翻译的最小单位设计成词是不符合语法的，因此后来又延申出了基于短语的翻译方法，将最小翻译单位设计成连续的词串。\n",
    "\n",
    "2013年，一种用于机器翻译的新型端到端编码器-解码器架构问世，将CNN用于隐含表征挖掘，将RNN用于将隐含向量转化为目标语言，标志了神经机器翻译开端。后来，Attention、Transformer、BERT等技术被相继提出，大大提升了翻译的质量。"
   ]
  },
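  {
   "cell_type": "markdown",
   "id": "d5e402b9",
   "metadata": {},
   "source": [
    "The word-alignment estimation described above can be sketched with a toy EM procedure in the spirit of IBM Model 1. The three-sentence bilingual corpus is invented for illustration; real systems train on millions of sentence pairs:\n",
    "\n",
    "```python\n",
    "from collections import defaultdict\n",
    "\n",
    "def ibm_model1(corpus, iterations=10):\n",
    "    # corpus: list of (source_words, target_words) sentence pairs\n",
    "    src_vocab = {w for src, _ in corpus for w in src}\n",
    "    # Initialize t(target | source) uniformly\n",
    "    t = defaultdict(lambda: 1.0 / len(src_vocab))\n",
    "    for _ in range(iterations):\n",
    "        count = defaultdict(float)\n",
    "        total = defaultdict(float)\n",
    "        for src, tgt in corpus:\n",
    "            for tw in tgt:\n",
    "                # E-step: distribute each target word over possible source words\n",
    "                z = sum(t[(sw, tw)] for sw in src)\n",
    "                for sw in src:\n",
    "                    c = t[(sw, tw)] / z\n",
    "                    count[(sw, tw)] += c\n",
    "                    total[sw] += c\n",
    "        # M-step: re-estimate the translation probabilities\n",
    "        for (sw, tw), c in count.items():\n",
    "            t[(sw, tw)] = c / total[sw]\n",
    "    return t\n",
    "\n",
    "corpus = [(['das', 'haus'], ['the', 'house']),\n",
    "          (['das', 'buch'], ['the', 'book']),\n",
    "          (['ein', 'buch'], ['a', 'book'])]\n",
    "t = ibm_model1(corpus)\n",
    "print(t[('das', 'the')] > t[('das', 'house')])  # True: EM aligns 'das' with 'the'\n",
    "```"
   ]
  },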
  {
   "cell_type": "markdown",
   "id": "14781b84",
   "metadata": {},
   "source": [
    "## 基于BERT的文本翻译"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "5c3fb802",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "原始文本:  大家好,一起来玩一下基于pycorrector的文本纠错！\n",
      "翻译文本:  Hey, guys, let's play pycorrector-based text-correction!\n"
     ]
    }
   ],
   "source": [
    "\n",
    "from transformers import AutoTokenizer, AutoModelForSeq2SeqLM\n",
    "\n",
    "tokenizer = AutoTokenizer.from_pretrained(\"D:/bert/Helsinki-NLP/opus-mt-zh-en\")\n",
    "model = AutoModelForSeq2SeqLM.from_pretrained(\"D:/bert/Helsinki-NLP/opus-mt-zh-en\")\n",
    "\n",
    "text = \"大家好,一起来玩一下基于pycorrector的文本纠错！\"\n",
    "\n",
    "inputs = tokenizer(text, return_tensors=\"pt\", )\n",
    "outputs = model.generate(inputs[\"input_ids\"], max_length=40, num_beams=4, early_stopping=True)\n",
    "translated_sentence = tokenizer.decode(outputs[0], skip_special_tokens=True)\n",
    "print('原始文本: ', text)\n",
    "print('翻译文本: ', translated_sentence)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "88329ab1",
   "metadata": {},
   "source": [
    "## 基于OpenAI接口的机器翻译实验"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "3b298598",
   "metadata": {},
   "source": [
    "### 短文本"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "5a6c78d1",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "原始文本:  大家好,一起来玩一下基于pycorrector的文本纠错！\n",
      "输出文本:  Hello everyone, let's play a text correction game based on pycorrector!\n"
     ]
    }
   ],
   "source": [
    "def translate_text(text):\n",
    "    content = f\"请将以下中文文本翻译成英文:\\n{text}\"\n",
    "    response = openai.ChatCompletion.create(\n",
    "        model=\"gpt-3.5-turbo\", \n",
    "        messages=[{\"role\": \"user\", \"content\": content}]\n",
    "    )\n",
    "    translated_text = response.get(\"choices\")[0].get(\"message\").get(\"content\")\n",
    "    return translated_text\n",
    "\n",
    "text_to_translate = \"大家好,一起来玩一下基于pycorrector的文本纠错！\"\n",
    "translated_text = translate_text(text_to_translate)\n",
    "print(\"原始文本: \", text_to_translate)\n",
    "print(\"输出文本: \", translated_text)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "254488b7",
   "metadata": {},
   "source": [
    "### 长文本"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "id": "8540d0bf",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "全书字符数:  6350735\n"
     ]
    }
   ],
   "source": [
    "# 长书籍英翻中\n",
    "with open(\"data/哈利波特1-7英文原版.txt\", \"r\") as f:\n",
    "    text = f.read()\n",
    "print('全书字符数: ', len(text))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "03f8ba7f",
   "metadata": {},
   "outputs": [
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "10f530b8d42f4b0791f01e855af2fb2c",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Downloading (…)olve/main/vocab.json:   0%|          | 0.00/1.04M [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "E:\\env\\Anaconda\\lib\\site-packages\\huggingface_hub\\file_download.py:133: UserWarning: `huggingface_hub` cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in D:\\bert. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.\n",
      "To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to see activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development\n",
      "  warnings.warn(message)\n"
     ]
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "d2b699706a0e484da684040acb25dc72",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Downloading (…)olve/main/merges.txt:   0%|          | 0.00/456k [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "data": {
      "application/vnd.jupyter.widget-view+json": {
       "model_id": "6871bac3a03145adbd6396d937866af5",
       "version_major": 2,
       "version_minor": 0
      },
      "text/plain": [
       "Downloading (…)lve/main/config.json:   0%|          | 0.00/665 [00:00<?, ?B/s]"
      ]
     },
     "metadata": {},
     "output_type": "display_data"
    },
    {
     "name": "stderr",
     "output_type": "stream",
     "text": [
      "Token indices sequence length is longer than the specified maximum sequence length for this model (1673251 > 1024). Running this sequence through the model will result in indexing errors\n"
     ]
    },
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "全书token数:  1673251\n",
      "翻译全书约需16.73251美元\n"
     ]
    }
   ],
   "source": [
    "# 基于GPT2的翻译\n",
    "from transformers import GPT2Tokenizer\n",
    "\n",
    "tokenizer = GPT2Tokenizer.from_pretrained(\"gpt2\")  # GPT-2的tokenizer和GPT-3是一样的\n",
    "token_counts = len(tokenizer.encode(text))\n",
    "print('全书token数: ', token_counts)\n",
    "\n",
    "# chatgpt的api调用价格是 1000 token 0.01美元，因此可以大致计算翻译一本书的价格\n",
    "translate_cost = 0.01 / 1000 * token_counts\n",
    "print(f'翻译全书约需{translate_cost}美元')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "135f6c53",
   "metadata": {},
   "source": [
    "GPT-3的token限制大约在4096左右（据说GPT-4最多输入3.2万token），因此无法直接将12万token的文本输进去。\n",
    "\n",
    "我们可以将使用一个简单的方法，将文本分成若干份，每一份使用chatgpt翻译，最终再拼接起来。\n",
    "\n",
    "首先，我们最好能保证每份文本本身的语义连贯性，如果从一个句子中间将上下文拆成两块，则翻译时容易存在歧义。\n",
    "\n",
    "一个比较直观的想法是，将每个段落当成一个文本块，每次翻译一段。\n",
    "\n",
    "但是本书的段落非常多，一段一段翻译显然会降低翻译的效率。同时，由于每段的上下文较少，导致翻译错误的可能性上升。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "5f32d1c8",
   "metadata": {},
   "outputs": [],
   "source": [
    "def group_paragraphs(paragraphs, ntokens, max_len=1000):\n",
    "    \"\"\"\n",
    "    合并短段落为文本块，用于丰富上下文语境，提升文本连贯性，并提升运算效率。\n",
    "    :param paragraphs: 段落集合\n",
    "    :param ntokens: token数集合\n",
    "    :param max_len: 最大文本块token数\n",
    "    :return: 组合好的文本块\n",
    "    \"\"\"\n",
    "    batches = []\n",
    "    cur_batch = \"\"\n",
    "    cur_tokens = 0\n",
    "\n",
    "    # 对于每个文本段落做处理\n",
    "    for paragraph, ntoken in zip(paragraphs, ntokens):\n",
    "        if ntoken + cur_tokens + 1 > max_len:  # '1' 指的是'\\n'\n",
    "            # 如果加入这段文本，总token数超过阈值，则开启新的文本块\n",
    "            batches.append(cur_batch)\n",
    "            cur_batch = paragraph\n",
    "            cur_tokens = ntoken\n",
    "        else:\n",
    "            # 否则将段落插入文本块中\n",
    "            cur_batch += \"\\n\" + paragraph\n",
    "            cur_tokens += (1 + ntoken)\n",
    "    batches.append(cur_batch)  # 记录最后一个文本块\n",
    "    return batches\n",
    "\n",
    "batchs = group_paragraphs(paragraphs, ntokens, max_len=500)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "192ec32a",
   "metadata": {},
   "outputs": [],
   "source": [
    "# 基于ChatGPT的翻译\n",
    "def translate_text(text):\n",
    "    response = openai.Completion.create(\n",
    "        engine=\"text-davinci-003\",\n",
    "        prompt=f\"请将以下英文翻译成中文:\\n{text}\",\n",
    "        max_tokens=2048\n",
    "    )\n",
    "\n",
    "    translate_text = response.choices[0].text.strip()\n",
    "    return translate_text\n",
    "print(translate_text(batchs[0])) # 只翻译一小段"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.10.9"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
