{
 "cells": [
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# Entity Relation Extraction with Pretrained Models"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
     "# Install the latest version of PaddleNLP\n",
    "!pip install --upgrade paddlenlp -i https://pypi.org/simple\n",
    "\n",
    "%cd relation_extraction/"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# 1. Project Background"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Information extraction (IE) pulls specific events and facts out of natural-language text, helping us automatically classify, extract, and restructure large volumes of content. The extracted information typically covers entities (entity), relations (relation), and events (event), and IE comprises three main subtasks: relation extraction, named entity recognition, and event extraction.\n",
     "\n",
     "In this task, given a natural-language sentence and a predefined schema set, we extract every SPO (subject, predicate, object) triple that satisfies the schema constraints.\n",
     "\n",
     "For example, the schema of the 妻子 (wife) relation is defined as:\n",
     "```\n",
     "{\n",
     "  S_TYPE: 人物,\n",
     "  P: 妻子,\n",
     "  O_TYPE: {\n",
     "    @value: 人物\n",
     "  }\n",
     "}\n",
     "```\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](https://ai-studio-static-online.cdn.bcebos.com/0802798ce4a44c07b2e9fb781f7ef42dd3f395d0da6a4ed3891edc68612100f4)\n",
    "\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "Example input:\n",
     "\n",
     "\"text\": \"王雪纯是87版《红楼梦》中晴雯的配音者，她是《正大综艺》的主持人\"\n",
     "\n",
     "![](https://ai-studio-static-online.cdn.bcebos.com/026d36d3f0b44b22b061076d2c5f0ad39eef811211cf4250a2c0a4b038fc5777)\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {
    "jp-MarkdownHeadingCollapsed": true
   },
   "source": [
    "\n",
     "## 1.1 Evaluation Method\n",
     "The SPO triples a system outputs on the test set are matched exactly against the human-annotated SPO triples, and the F1 score serves as the evaluation metric. Note that for an SPO with a complex object type, every slot must match exactly for the triple to count as correct. To handle entity aliases appearing in some texts, an alias dictionary from the Baidu knowledge graph assists the evaluation. F1 is computed as follows:\n",
     "\n",
     "F1 = (2 * P * R) / (P + R), where\n",
     "\n",
     "P = correctly predicted SPOs in all test sentences / predicted SPOs in all test sentences\n",
     "\n",
     "R = correctly predicted SPOs in all test sentences / human-annotated SPOs in all test sentences"
   ]
  },
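The metric above can be computed directly. A minimal sketch, assuming predictions and gold annotations are given as per-sentence sets of (subject, predicate, object) tuples; the official evaluation additionally normalizes entity aliases, which this sketch omits:

```python
def micro_f1(predicted, gold):
    """Micro-averaged precision/recall/F1 over per-sentence sets of SPO triples."""
    correct = sum(len(p & g) for p, g in zip(predicted, gold))
    n_pred = sum(len(p) for p in predicted)
    n_gold = sum(len(g) for g in gold)
    precision = correct / n_pred if n_pred else 0.0
    recall = correct / n_gold if n_gold else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Toy example: 1 of 2 predicted triples is correct; 3 triples are annotated.
pred = [{("孙安佐", "父亲", "孙鹏")}, {("德云社", "成立日期", "1996年")}]
gold = [{("孙安佐", "父亲", "孙鹏"), ("孙安佐", "母亲", "狄莺")},
        {("德云社", "成立日期", "1995年")}]
p, r, f1 = micro_f1(pred, gold)  # precision 0.5, recall 1/3, F1 0.4
```

Because every slot of a triple participates in the set membership test, a complex-O triple with one wrong slot simply fails the exact match, mirroring the rule above.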
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# 2. Model\n",
     "PaddleNLP ships with many common pretrained models that can be loaded by name; the matching tokenizers handle text tokenization, token-to-ID conversion, and truncation to a maximum length.\n",
     "\n",
     "This project uses the Chinese RoBERTa-large model (roberta-wwm-ext-large).\n",
     "\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
     "# Import packages and load the model\n",
     "import os\n",
     "import sys\n",
     "import json\n",
     "from paddlenlp.transformers import RobertaForTokenClassification, RobertaTokenizer\n",
     "\n",
     "label_map_path = os.path.join('data', \"predicate2id.json\")\n",
     "\n",
     "if not (os.path.exists(label_map_path) and os.path.isfile(label_map_path)):\n",
     "    sys.exit(\"{} does not exist or is not a file.\".format(label_map_path))\n",
     "with open(label_map_path, 'r', encoding='utf8') as fp:\n",
     "    label_map = json.load(fp)\n",
     "\n",
     "# Each regular predicate yields two token-label classes (subject side and\n",
     "# object side); the remaining 2 classes are the special labels.\n",
     "model = RobertaForTokenClassification.from_pretrained(\n",
     "    \"roberta-wwm-ext-large\",\n",
     "    num_classes=(len(label_map) - 2) * 2 + 2)\n",
    "tokenizer = RobertaTokenizer.from_pretrained(\"roberta-wwm-ext-large\")    \n",
    "\n",
    "\n",
    "inputs = tokenizer(text=\"请输入测试样例\", max_seq_len=20)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "# 3. Data\n",
     "The dataset contains over 430,000 triples, 210,000 Chinese sentences, and 48 predefined relation types. The sentences come from Baidu Baike, Baidu Tieba, and Baidu news-feed texts."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## 3.1 Unpack the Data"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
     "%cd .."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
     "# Unpack the data\n",
    "!unzip duie_dev.json.zip\n",
    "!unzip duie_test2.json.zip\n",
    "!unzip duie_train.json.zip"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
     "%cd relation_extraction/"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## 3.2 Preview the Data\n",
     "\n",
     "Read the training file and display some of the examples."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "![](https://ai-studio-static-online.cdn.bcebos.com/efb298fde83c48be93265baf9c4452615d99a0bebc5d4d978dfb77907a7b0fb7)\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2022-02-26T03:35:34.867969Z",
     "iopub.status.busy": "2022-02-26T03:35:34.866963Z",
     "iopub.status.idle": "2022-02-26T03:35:35.007892Z",
     "shell.execute_reply": "2022-02-26T03:35:35.006896Z",
     "shell.execute_reply.started": "2022-02-26T03:35:34.867927Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
     "# Preview the data\n",
    "import os\n",
    "import json\n",
    "data_path = 'data'\n",
    "example=[]\n",
    "train_file_path = os.path.join(data_path, 'train_data.json')\n",
    "with open(train_file_path, \"r\", encoding=\"utf-8\") as fp:\n",
    "    for line in fp:\n",
    "        example.append(json.loads(line))\n"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2022-02-26T04:07:37.005629Z",
     "iopub.status.busy": "2022-02-26T04:07:37.004844Z",
     "iopub.status.idle": "2022-02-26T04:07:37.031727Z",
     "shell.execute_reply": "2022-02-26T04:07:37.031100Z",
     "shell.execute_reply.started": "2022-02-26T04:07:37.005582Z"
    },
    "scrolled": true
   },
   "outputs": [
    {
     "data": {
      "text/plain": [
       "[{'text': '吴宗宪遭服务生种族歧视, 他气呛: 我买下美国都行!艺人狄莺与孙鹏18岁的独子孙安佐赴美国读高中，没想到短短不到半年竟闹出校园安全事件被捕，因为美国正处于校园枪击案频传的敏感时机，加上国外种族歧视严重，外界对于孙安佐的情况感到不乐观 吴宗宪今（30）日录影前谈到美国民情，直言国外种族歧视严重，他甚至还被一名墨西哥裔的服务生看不起，让吴宗宪气到喊：「我是吃不起是不是',\n",
       "  'spo_list': [{'predicate': '父亲',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '人物',\n",
       "    'object': {'@value': '孙鹏'},\n",
       "    'subject': '孙安佐'},\n",
       "   {'predicate': '母亲',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '人物',\n",
       "    'object': {'@value': '狄莺'},\n",
       "    'subject': '孙安佐'},\n",
       "   {'predicate': '丈夫',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '人物',\n",
       "    'object': {'@value': '孙鹏'},\n",
       "    'subject': '狄莺'},\n",
       "   {'predicate': '妻子',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '人物',\n",
       "    'object': {'@value': '狄莺'},\n",
       "    'subject': '孙鹏'}]},\n",
       " {'text': '苏州亚都环保科技有限公司于2011年11月04日在苏州市吴中区市场监督管理局登记成立',\n",
       "  'spo_list': [{'predicate': '成立日期',\n",
       "    'object_type': {'@value': 'Date'},\n",
       "    'subject_type': '机构',\n",
       "    'object': {'@value': '2011年11月04日'},\n",
       "    'subject': '苏州亚都环保科技有限公司'}]},\n",
       " {'text': '贵州顺和建筑工程有限责任公司于2008年1月23日在贵阳市工商行政管理局登记成立',\n",
       "  'spo_list': [{'predicate': '成立日期',\n",
       "    'object_type': {'@value': 'Date'},\n",
       "    'subject_type': '机构',\n",
       "    'object': {'@value': '2008年1月23日'},\n",
       "    'subject': '贵州顺和建筑工程有限责任公司'}]},\n",
       " {'text': '北京尚学百纳教育科技有限公司（以下简称：尚学教育），是经北京工商局正式批准注册的教育经营类企业，',\n",
       "  'spo_list': [{'predicate': '简称',\n",
       "    'object_type': {'@value': 'Text'},\n",
       "    'subject_type': '机构',\n",
       "    'object': {'@value': '尚学教育'},\n",
       "    'subject': '北京尚学百纳教育科技有限公司'}]},\n",
       " {'text': '郑楠楠在搜剿赵明浩藏匿的各种私房钱的时候发现赵明浩居然出轨自己的闺蜜汤元，最终郑楠楠向赵明浩提出离婚但是郑母不同意',\n",
       "  'spo_list': [{'predicate': '丈夫',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '人物',\n",
       "    'object': {'@value': '赵明浩'},\n",
       "    'subject': '郑楠楠'},\n",
       "   {'predicate': '妻子',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '人物',\n",
       "    'object': {'@value': '郑楠楠'},\n",
       "    'subject': '赵明浩'}]},\n",
       " {'text': '长久以来巴基斯坦都将英语作为自己的官方语言，这让在民间普及率非常高的本土语言乌尔都语陷入尴尬',\n",
       "  'spo_list': [{'predicate': '官方语言',\n",
       "    'object_type': {'@value': '语言'},\n",
       "    'subject_type': '国家',\n",
       "    'object': {'@value': '英语'},\n",
       "    'subject': '巴基斯坦'}]},\n",
       " {'text': '电视剧《离婚前规则》讲述三对年轻人不同的婚姻经历，三对80后夫妇几经分分合合之后，终于领悟到了婚姻的相处之道，都在各自今后的人生道路上收获了幸福 谈起拍摄初衷，制片人叶进军透露，“这部戏是一个成长的故事， 80后一代人从不成熟到成熟，他们慢慢成为社会的焦点，他们的生活正是我们这些做家长的最关心的问题”',\n",
       "  'spo_list': [{'predicate': '制片人',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '影视作品',\n",
       "    'object': {'@value': '叶进军'},\n",
       "    'subject': '离婚前规则'}]},\n",
       " {'text': '2012年，吴贻弓获第15届上海国际电影节华语电影终身成就奖',\n",
       "  'spo_list': [{'predicate': '获奖',\n",
       "    'object_type': {'onDate': 'Date', '@value': '奖项'},\n",
       "    'subject_type': '娱乐人物',\n",
       "    'object': {'onDate': '2012年',\n",
       "     '@value': '上海国际电影节华语电影终身成就奖',\n",
       "     'period': '15'},\n",
       "    'subject': '吴贻弓'}]},\n",
       " {'text': '郭德纲自1995年开始就创办了相声品牌“德云社”，至今也有二十余年的时间了',\n",
       "  'spo_list': [{'predicate': '成立日期',\n",
       "    'object_type': {'@value': 'Date'},\n",
       "    'subject_type': '机构',\n",
       "    'object': {'@value': '1995年'},\n",
       "    'subject': '德云社'}]},\n",
       " {'text': '《李荣浩》是李荣浩于2014年11月28日发行的个人第二张音乐专辑 ，专辑中共收录了包括主打歌《喜剧之王》在内的10首音乐作品',\n",
       "  'spo_list': [{'predicate': '所属专辑',\n",
       "    'object_type': {'@value': '音乐专辑'},\n",
       "    'subject_type': '歌曲',\n",
       "    'object': {'@value': '李荣浩'},\n",
       "    'subject': '喜剧之王'},\n",
       "   {'predicate': '歌手',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '歌曲',\n",
       "    'object': {'@value': '李荣浩'},\n",
       "    'subject': '喜剧之王'}]},\n",
       " {'text': '景区内主要山峰80多座，其中500米以上的山峰65座，主峰梨木台山海拔997米，山势险峻，峭壁林立，自然形成一线天、五指山、万卷天书、峰林岩画岭等地质奇观',\n",
       "  'spo_list': [{'predicate': '海拔',\n",
       "    'object_type': {'@value': 'Number'},\n",
       "    'subject_type': '地点',\n",
       "    'object': {'@value': '997米'},\n",
       "    'subject': '梨木台'}]},\n",
       " {'text': '《冰敏》是连载于17k小说网的小说，作者是为你写歌',\n",
       "  'spo_list': [{'predicate': '作者',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '图书作品',\n",
       "    'object': {'@value': '为你写歌'},\n",
       "    'subject': '冰敏'}]},\n",
       " {'text': '土豪，你们学校真大方 我——》 深圳市翠园中学（高中部）航模社社长求助啊 我们那韩冬青校长，不给社费，而且，他又出国旅游去啦 这是我社的全部家产，全部是社员自费啊 T T图片来自：pioneerfirst的百度相册',\n",
       "  'spo_list': [{'predicate': '校长',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '学校',\n",
       "    'object': {'@value': '韩冬青'},\n",
       "    'subject': '深圳市翠园中学'}]},\n",
       " {'text': '《语文课程与语文教材》是2001年9月社会科学文献出版社出版的图书，作者是顾黄初、顾振彪',\n",
       "  'spo_list': [{'predicate': '作者',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '图书作品',\n",
       "    'object': {'@value': '顾黄初'},\n",
       "    'subject': '语文课程与语文教材'},\n",
       "   {'predicate': '作者',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '图书作品',\n",
       "    'object': {'@value': '顾振彪'},\n",
       "    'subject': '语文课程与语文教材'}]},\n",
       " {'text': '2018年6月13日，云南省人民政府决定：田永 免去云南冶金集团股份有限公司董事长职务',\n",
       "  'spo_list': [{'predicate': '董事长',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '企业',\n",
       "    'object': {'@value': '田永'},\n",
       "    'subject': '云南冶金集团股份有限公司'}]},\n",
       " {'text': '莱姆斯·卢平\\xa0尼法朵拉·唐克斯 一对短命的鸳鸯…… 死于霍格沃茨保卫战…… 留有一子——泰迪·卢平（哈利的教子）',\n",
       "  'spo_list': [{'predicate': '母亲',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '人物',\n",
       "    'object': {'@value': '尼法朵拉·唐克斯'},\n",
       "    'subject': '泰迪·卢平'}]},\n",
       " {'text': '织田市（1547年－1583年6月14日），日本战国时代女性，父亲为织田信秀，母不详，长兄为“战国第一风云儿”织田信长，但也有一种说法认为她是信长的堂妹',\n",
       "  'spo_list': [{'predicate': '朝代',\n",
       "    'object_type': {'@value': 'Text'},\n",
       "    'subject_type': '历史人物',\n",
       "    'object': {'@value': '日本战国时代'},\n",
       "    'subject': '织田市'},\n",
       "   {'predicate': '父亲',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '人物',\n",
       "    'object': {'@value': '织田信秀'},\n",
       "    'subject': '织田信长'},\n",
       "   {'predicate': '国籍',\n",
       "    'object_type': {'@value': '国家'},\n",
       "    'subject_type': '人物',\n",
       "    'object': {'@value': '日本'},\n",
       "    'subject': '织田信秀'},\n",
       "   {'predicate': '朝代',\n",
       "    'object_type': {'@value': 'Text'},\n",
       "    'subject_type': '历史人物',\n",
       "    'object': {'@value': '日本战国时代'},\n",
       "    'subject': '织田信长'}]},\n",
       " {'text': '中山大学图书馆中国高等教育文献保障体系(CALIS)华南地区中心，CALIS数字图书馆基地中英文图书数字化国际合作计划（CADAL）项目成员馆，中国高校人文社会科学文献中心（CASHL）华南区域中心，教育部16个文科文献信息中心之一，教育部文科专款受益院校，教育部部级科技查新工作站',\n",
       "  'spo_list': [{'predicate': '简称',\n",
       "    'object_type': {'@value': 'Text'},\n",
       "    'subject_type': '机构',\n",
       "    'object': {'@value': 'CASHL'},\n",
       "    'subject': '中国高校人文社会科学文献中心'}]},\n",
       " {'text': '1990年2月，苏有朋和陈志朋、吴奇隆联袂出演了电影《好小子之游侠儿》，在影片中扮演了一个富有科技头脑，又带有稚气的高中生“小乖”，并合唱影片主题曲《游侠儿》，本片获得1990年度台湾“十大最高票房电影”之一',\n",
       "  'spo_list': [{'predicate': '获奖',\n",
       "    'object_type': {'onDate': 'Date', '@value': '奖项'},\n",
       "    'subject_type': '娱乐人物',\n",
       "    'object': {'onDate': '1990年', '@value': '十大最高票房电影'},\n",
       "    'subject': '苏有朋'}]},\n",
       " {'text': '74 唐 张若虚 《春江花月夜》诗： “滟滟随波千万里， 何处春江无月明',\n",
       "  'spo_list': [{'predicate': '作者',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '图书作品',\n",
       "    'object': {'@value': '张若虚'},\n",
       "    'subject': '春江花月夜'}]},\n",
       " {'text': '新加坡 2013 真情无障爱 2013 SPD Charity Show大会司仪：林有懿，黄子佼外场主持人：Pornsak参加艺人：郑惠玉，郑斌辉，黄俊雄，周崇庆，赖怡玲，郑可评，杨志龙，曾诗梅，蔡小虎-，李司棋，梁文音，柳艺恩，李玉刚，金亨俊嘉宾: 吴作栋夫妇转自YouTube的几位',\n",
       "  'spo_list': [{'predicate': '主持人',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '电视综艺',\n",
       "    'object': {'@value': '黄子佼'},\n",
       "    'subject': '真情无障爱'}]},\n",
       " {'text': '歌曲歌词爱环保有诗意 - 罗秋红 1',\n",
       "  'spo_list': [{'predicate': '歌手',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '歌曲',\n",
       "    'object': {'@value': '罗秋红'},\n",
       "    'subject': '爱环保有诗意'}]},\n",
       " {'text': '王勇峰，男，1970年出生，吉林大学计算机应用人工智能专业硕士',\n",
       "  'spo_list': [{'predicate': '毕业院校',\n",
       "    'object_type': {'@value': '学校'},\n",
       "    'subject_type': '人物',\n",
       "    'object': {'@value': '吉林大学'},\n",
       "    'subject': '王勇峰'}]},\n",
       " {'text': '郁可唯的好身材藏不住了，黑白拼接西装干练帅气，筷子腿太吸睛现在的歌坛有很多实力唱将都是从《超级女声》这个选秀舞台出来的，比如李宇春、张靓颖、谭维维、郁可唯等等，郁可唯出道这么多年一直都是兢兢业业的唱歌，没有绯闻也不炒作，偶尔上上热搜还是因为她的好身材，就是这么一个实力唱将最近却因为在活动中忘词被网友吐槽唱了这么多年还忘词，不够专业也不尊重舞台',\n",
       "  'spo_list': [{'predicate': '嘉宾',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '电视综艺',\n",
       "    'object': {'@value': '郁可唯'},\n",
       "    'subject': '超级女声'},\n",
       "   {'predicate': '嘉宾',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '电视综艺',\n",
       "    'object': {'@value': '谭维维'},\n",
       "    'subject': '超级女声'},\n",
       "   {'predicate': '嘉宾',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '电视综艺',\n",
       "    'object': {'@value': '李宇春'},\n",
       "    'subject': '超级女声'},\n",
       "   {'predicate': '嘉宾',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '电视综艺',\n",
       "    'object': {'@value': '张靓颖'},\n",
       "    'subject': '超级女声'}]},\n",
       " {'text': '原来赤脚小子本来叫红尘小子啊.....还记得插曲，超感人的《留下句号的面容》',\n",
       "  'spo_list': [{'predicate': '主题曲',\n",
       "    'object_type': {'@value': '歌曲'},\n",
       "    'subject_type': '影视作品',\n",
       "    'object': {'@value': '留下句号的面容'},\n",
       "    'subject': '赤脚小子'}]},\n",
       " {'text': '《爱情不打烊》是电视剧《爱情不打烊》的同名片头曲，由杨艳丽、黄云祺作词，杨昊东作曲，1张枫演唱',\n",
       "  'spo_list': [{'predicate': '作词',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '歌曲',\n",
       "    'object': {'@value': '杨艳丽'},\n",
       "    'subject': '爱情不打烊'},\n",
       "   {'predicate': '作曲',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '歌曲',\n",
       "    'object': {'@value': '杨昊东'},\n",
       "    'subject': '爱情不打烊'},\n",
       "   {'predicate': '歌手',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '歌曲',\n",
       "    'object': {'@value': '张枫'},\n",
       "    'subject': '爱情不打烊'},\n",
       "   {'predicate': '作词',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '歌曲',\n",
       "    'object': {'@value': '黄云祺'},\n",
       "    'subject': '爱情不打烊'}]},\n",
       " {'text': '7.第二次初恋——作词:张家玮 ；作曲:Tank 编曲:吕绍淳 演唱：Tank',\n",
       "  'spo_list': [{'predicate': '作曲',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '歌曲',\n",
       "    'object': {'@value': 'tank'},\n",
       "    'subject': '第二次初恋'},\n",
       "   {'predicate': '作词',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '歌曲',\n",
       "    'object': {'@value': '张家玮'},\n",
       "    'subject': '第二次初恋'},\n",
       "   {'predicate': '歌手',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '歌曲',\n",
       "    'object': {'@value': 'Tank'},\n",
       "    'subject': '第二次初恋'},\n",
       "   {'predicate': '作曲',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '歌曲',\n",
       "    'object': {'@value': 'Tank'},\n",
       "    'subject': '第二次初恋'}]},\n",
       " {'text': '简介1、介绍  马跃峰，男，生于1979年5月24日，祖籍沈阳，毕业于国家体育总局湛江潜水学校，专业体育教育，曾任中国潜水协会三亚南海国际潜水技术培训中心副总教练，持有CMAS二星潜水教练资格证',\n",
       "  'spo_list': [{'predicate': '毕业院校',\n",
       "    'object_type': {'@value': '学校'},\n",
       "    'subject_type': '人物',\n",
       "    'object': {'@value': '国家体育总局湛江潜水学校'},\n",
       "    'subject': '马跃峰'}]},\n",
       " {'text': '上海信息化培训中心是中国领先的信息化促进机构，自1998年成立，致力于推动国际IT管理最佳实践，服务全国信息化建设，构建世界级IT经理智慧空间 自2002年率先引入ITIL国际管理国际认证培训，10多年来培养了众多的IT服务管理高级人才，活跃在全国各地各行各业',\n",
       "  'spo_list': [{'predicate': '成立日期',\n",
       "    'object_type': {'@value': 'Date'},\n",
       "    'subject_type': '机构',\n",
       "    'object': {'@value': '1998年'},\n",
       "    'subject': '上海信息化培训中心'}]},\n",
       " {'text': '要说娱乐圈中最长情的男星，非吴尊莫属了吧，与林丽莹相恋，从初恋情人变成今天的夫妻，并且拥有一对可爱的子女，多么幸福的一家啊',\n",
       "  'spo_list': [{'predicate': '丈夫',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '人物',\n",
       "    'object': {'@value': '吴尊'},\n",
       "    'subject': '林丽莹'},\n",
       "   {'predicate': '妻子',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '人物',\n",
       "    'object': {'@value': '林丽莹'},\n",
       "    'subject': '吴尊'}]},\n",
       " {'text': '《寒衣调》原曲：一青窈(Yo Hitoto,中日混血日本歌手)作曲：武部聪志(Satoshi Takebe)词：Finale演唱：河图',\n",
       "  'spo_list': [{'predicate': '作曲',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '歌曲',\n",
       "    'object': {'@value': '武部聪志'},\n",
       "    'subject': '寒衣调'},\n",
       "   {'predicate': '作词',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '歌曲',\n",
       "    'object': {'@value': 'Finale'},\n",
       "    'subject': '寒衣调'},\n",
       "   {'predicate': '歌手',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '歌曲',\n",
       "    'object': {'@value': '河图'},\n",
       "    'subject': '寒衣调'}]},\n",
       " {'text': '人民网北京6月12日电 （史雅乔）2014年6月10日，中国妇女发展基金会在北京举行分享微笑传递爱——“丁桂微笑圆梦行动”分享会暨捐赠活动，全国妇联副主席、书记处书记、中国妇女发展基金会副理事长喻红秋，中国妇女发展基金会副理事长、秘书长秦国英，中国儿童中心党委书记丛中笑，亚宝药业集团股份有限公司董事长任武贤等出席活动',\n",
       "  'spo_list': [{'predicate': '董事长',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '企业',\n",
       "    'object': {'@value': '任武贤'},\n",
       "    'subject': '亚宝药业集团股份有限公司'}]},\n",
       " {'text': '都非常好听，忒别喜欢： 古天乐～我愿爱（寻秦记\\xa0插曲）\\xa0 \\xa0 ├古天乐～寻秦记\\xa0主题曲\\xa0 罗嘉良～阳光灿烂的日子（流金岁月\\xa0片尾曲）\\xa0 罗嘉良～创造晴天（创世纪\\xa0主题曲） 陈小春～叱咤红人（鹿鼎记\\xa0主题曲）\\xa0 马浚伟～头顶一片天（鹿鼎记\\xa0片尾曲）\\xa0 ├马浚委～美丽缘份（金牌冰人\\xa0片尾曲）',\n",
       "  'spo_list': [{'predicate': '主题曲',\n",
       "    'object_type': {'@value': '歌曲'},\n",
       "    'subject_type': '影视作品',\n",
       "    'object': {'@value': '阳光灿烂的日子'},\n",
       "    'subject': '流金岁月'},\n",
       "   {'predicate': '主题曲',\n",
       "    'object_type': {'@value': '歌曲'},\n",
       "    'subject_type': '影视作品',\n",
       "    'object': {'@value': '美丽缘份'},\n",
       "    'subject': '金牌冰人'},\n",
       "   {'predicate': '歌手',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '歌曲',\n",
       "    'object': {'@value': '古天乐'},\n",
       "    'subject': '我愿爱'},\n",
       "   {'predicate': '主题曲',\n",
       "    'object_type': {'@value': '歌曲'},\n",
       "    'subject_type': '影视作品',\n",
       "    'object': {'@value': '头顶一片天'},\n",
       "    'subject': '鹿鼎记'}]},\n",
       " {'text': '1、杨幂毫无争议，杨幂这几年的事业一直都是顺风顺水，哪怕是和刘叔叔生了女儿小糯米之后也丝毫不减她的人气',\n",
       "  'spo_list': [{'predicate': '母亲',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '人物',\n",
       "    'object': {'@value': '杨幂'},\n",
       "    'subject': '小糯米'}]},\n",
       " {'text': '2016年1月19日，《家族之苦》在东京都内举行了完成报告会见与点映会，导演山田洋次和主要演员桥爪功、妻夫木聪、苍井优等悉数出席10',\n",
       "  'spo_list': [{'predicate': '主演',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '影视作品',\n",
       "    'object': {'@value': '桥爪功'},\n",
       "    'subject': '家族之苦'},\n",
       "   {'predicate': '导演',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '影视作品',\n",
       "    'object': {'@value': '山田洋次'},\n",
       "    'subject': '家族之苦'}]},\n",
       " {'text': '校长张琼与我校杰出校友、北京奥瑞金种业股份有限公司董事长、北京校友会会长韩庚辰代表双方单位签字 奥瑞金种业股份有限公司将投入1000万元支持学校建设“吴绍骙•奥瑞金农业产业化研究院”,同时以“吴绍骙•奥瑞金农业产业化研究院”为主体投入100万元设立“吴绍骙奖学金”，以鼓励品学兼优、希望在农业方面做出贡献的学生，为河南农业大学百年校庆献上一份厚礼 这个算不算',\n",
       "  'spo_list': [{'predicate': '董事长',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '企业',\n",
       "    'object': {'@value': '韩庚辰'},\n",
       "    'subject': '北京奥瑞金种业股份有限公司'}]},\n",
       " {'text': '《国务卿女士第二季》是由摩根·弗里曼执导的外交片，蒂娅·里欧妮、蒂姆·达利参加演出',\n",
       "  'spo_list': [{'predicate': '主演',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '影视作品',\n",
       "    'object': {'@value': '蒂娅·里欧妮'},\n",
       "    'subject': '国务卿女士第二季'},\n",
       "   {'predicate': '导演',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '影视作品',\n",
       "    'object': {'@value': '摩根·弗里曼'},\n",
       "    'subject': '国务卿女士第二季'},\n",
       "   {'predicate': '主演',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '影视作品',\n",
       "    'object': {'@value': '蒂姆·达利'},\n",
       "    'subject': '国务卿女士第二季'}]},\n",
       " {'text': '出演了著名导演袁和平执导的古装武侠剧《小李飞刀》，剧中饰演“惊鸿仙子”杨艳一角，让我们每个人都记住了她的美',\n",
       "  'spo_list': [{'predicate': '导演',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '影视作品',\n",
       "    'object': {'@value': '袁和平'},\n",
       "    'subject': '小李飞刀'}]},\n",
       " {'text': '张崇超1988年至1995年赴大连从事进出口贸易；1996年至1998年任浙江台州富仕广场董事兼副总经理；1998年至2007年任湖北省宜都商城有限责任公司董事长；现任九州方园集团董事长（集团下设九州方园新能源股份有限公司1、宜昌宏基置业有限公司、湖北九州方园投资有限公司、湖北金湖建设工程有限公司、九州方园博州新能源有限公司、九州方园博乐新能源有限公司）',\n",
       "  'spo_list': [{'predicate': '董事长',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '企业',\n",
       "    'object': {'@value': '张崇超'},\n",
       "    'subject': '九州方园新能源股份有限公司'},\n",
       "   {'predicate': '董事长',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '企业',\n",
       "    'object': {'@value': '张崇超'},\n",
       "    'subject': '九州方园集团'}]},\n",
       " {'text': '《梦回花香》是西西殿创作的网络小说，发表于17K小说网',\n",
       "  'spo_list': [{'predicate': '作者',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '图书作品',\n",
       "    'object': {'@value': '西西殿'},\n",
       "    'subject': '梦回花香'}]},\n",
       " {'text': '秦英林（1965-），牧原食品股份有限公司（股票代码：002714）董事长兼总经理，1989年毕业于河南农业大学，获学士学位',\n",
       "  'spo_list': [{'predicate': '董事长',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '企业',\n",
       "    'object': {'@value': '秦英林'},\n",
       "    'subject': '牧原食品股份有限公司'}]},\n",
       " {'text': '白寿彝主编：《中国通史》，第一卷，导论，上海人民出版社1989年版，第349页',\n",
       "  'spo_list': [{'predicate': '作者',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '图书作品',\n",
       "    'object': {'@value': '白寿彝主编'},\n",
       "    'subject': '中国通史'}]},\n",
       " {'text': '事故调查报告一、事故企业及相关情况（一）齐鲁天和惠世制药有限公司基本情况齐鲁天和惠世制药有限公司（以下简称天和公司）成立于2006年，公司类型：有限责任公司（台港澳与境内合资）',\n",
       "  'spo_list': [{'predicate': '成立日期',\n",
       "    'object_type': {'@value': 'Date'},\n",
       "    'subject_type': '机构',\n",
       "    'object': {'@value': '2006年'},\n",
       "    'subject': '齐鲁天和惠世制药有限公司'},\n",
       "   {'predicate': '简称',\n",
       "    'object_type': {'@value': 'Text'},\n",
       "    'subject_type': '机构',\n",
       "    'object': {'@value': '天和公司'},\n",
       "    'subject': '齐鲁天和惠世制药有限公司'}]},\n",
       " {'text': '白录东，1984年毕业于山东医学院，主任医师',\n",
       "  'spo_list': [{'predicate': '毕业院校',\n",
       "    'object_type': {'@value': '学校'},\n",
       "    'subject_type': '人物',\n",
       "    'object': {'@value': '山东医学院'},\n",
       "    'subject': '白录东'}]},\n",
       " {'text': '悉尼FC足球俱乐部，2004年成立，位于澳大利亚的悉尼市',\n",
       "  'spo_list': [{'predicate': '成立日期',\n",
       "    'object_type': {'@value': 'Date'},\n",
       "    'subject_type': '机构',\n",
       "    'object': {'@value': '2004年'},\n",
       "    'subject': '悉尼FC'}]},\n",
       " {'text': '除了 知名度较高的产品和中高端的商品之外，其他各类代言 例如 58同城、网游天下三、胶原蛋白及彩妆护肤类产品，也是层出不穷其中，58的线下覆盖量可以说得上是2012年的首位，无论是公交车 地铁或者其他的交通工具，都可以看到代言人杨幂 的身影而在线上，天下三、LUMI等代言 也会经常出现在百度、迅雷等页面',\n",
       "  'spo_list': [{'predicate': '代言人',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '企业/品牌',\n",
       "    'object': {'@value': '杨幂'},\n",
       "    'subject': '58同城'}]},\n",
       " {'text': '在《纽约时报》2013年的计票评选中，威廉·沃顿超越《北回归线》的作者亨利·米勒、《断背山》的作者安妮·普鲁克斯和“侦探小说大师”雷蒙德·钱德勒等多位著名作家，登上20世纪美国TOP15大器晚成作家排行榜第1名',\n",
       "  'spo_list': [{'predicate': '作者',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '图书作品',\n",
       "    'object': {'@value': '亨利·米勒'},\n",
       "    'subject': '北回归线'},\n",
       "   {'predicate': '作者',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '图书作品',\n",
       "    'object': {'@value': '安妮·普鲁克斯'},\n",
       "    'subject': '断背山'}]},\n",
       " {'text': '《一抹浅笑倾世温柔 》是在17k小说网连载的一部作品，作者是淡生',\n",
       "  'spo_list': [{'predicate': '作者',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '图书作品',\n",
       "    'object': {'@value': '淡生'},\n",
       "    'subject': '一抹浅笑倾世温柔'}]},\n",
       " {'text': '叶迷の°<卿心早属>文案:\\xa0 \"魔镜，魔镜，这世界上谁最美丽',\n",
       "  'spo_list': [{'predicate': '作者',\n",
       "    'object_type': {'@value': '人物'},\n",
       "    'subject_type': '图书作品',\n",
       "    'object': {'@value': '叶迷'},\n",
       "    'subject': '卿心早属'}]},\n",
       " {'text': '年仅10岁的美国男孩摩西·凯·卡夫林在加利福尼亚州洛杉矶东部学院的大学二年级课程即将结束',\n",
       "  'spo_list': [{'predicate': '毕业院校',\n",
       "    'object_type': {'@value': '学校'},\n",
       "    'subject_type': '人物',\n",
       "    'object': {'@value': '加利福尼亚州洛杉矶东部学院'},\n",
       "    'subject': '摩西·凯·卡夫林'}]}]"
      ]
     },
     "execution_count": 22,
     "metadata": {},
     "output_type": "execute_result"
    }
   ],
   "source": [
    "example[:50]"
   ]
  },
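Each training line pairs a text with its spo_list; complex object types carry extra slots (such as onDate) alongside the primary @value slot. A small helper, sketched here on a made-up sample in the same format (the name `spo_tuples` is hypothetical), flattens one example into plain (subject, predicate, object) tuples:

```python
def spo_tuples(example):
    """Flatten one example's spo_list into (subject, predicate, object) tuples.

    '@value' always holds the primary object value, even for complex O types.
    """
    return [(spo["subject"], spo["predicate"], spo["object"]["@value"])
            for spo in example["spo_list"]]

# A made-up sample following the dataset format shown above.
sample = {
    "text": "要说娱乐圈中最长情的男星，非吴尊莫属了，与林丽莹相恋，从初恋情人变成今天的夫妻",
    "spo_list": [
        {"predicate": "妻子", "object_type": {"@value": "人物"},
         "subject_type": "人物", "object": {"@value": "林丽莹"}, "subject": "吴尊"},
    ],
}
triples = spo_tuples(sample)
print(triples)  # [('吴尊', '妻子', '林丽莹')]
```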
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
     "## 3.3 Load and Process the Data\n",
     "We can load a custom dataset by subclassing paddle.io.Dataset and implementing the __getitem__ and __len__ methods.\n",
     "Download the dataset from the competition site, unpack it into the data/ directory, and rename the files to train_data.json, dev_data.json, and test_data.json."
   ]
  },
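The __getitem__/__len__ contract can be shown in miniature before the full implementation. This is a pure-Python stand-in (the class name `JsonLinesDataset` is made up for illustration) so it runs without paddle installed; a real paddle.io.Dataset subclass provides exactly these two methods:

```python
import json

class JsonLinesDataset:
    """Minimal map-style dataset: one JSON example per line, parsed on access."""

    def __init__(self, lines):
        self.lines = lines  # raw JSON lines, one training example each

    def __len__(self):
        return len(self.lines)  # number of examples

    def __getitem__(self, idx):
        return json.loads(self.lines[idx])  # lazy per-item parsing

ds = JsonLinesDataset(['{"text": "a"}', '{"text": "b"}'])
print(len(ds), ds[1]["text"])
```

A sampler or DataLoader only needs these two methods to iterate, shuffle, and batch the examples.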
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2022-02-26T03:35:41.398589Z",
     "iopub.status.busy": "2022-02-26T03:35:41.397611Z",
     "iopub.status.idle": "2022-02-26T03:35:41.411726Z",
     "shell.execute_reply": "2022-02-26T03:35:41.410967Z",
     "shell.execute_reply.started": "2022-02-26T03:35:41.398549Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
     "# Custom dataset implementing the two required methods\n",
     "import os\n",
     "import json\n",
     "\n",
     "import numpy as np\n",
     "import paddle\n",
     "\n",
     "from data_loader import DataCollator, convert_example_to_feature\n",
     "from extract_chinese_and_punct import ChineseAndPunctuationExtractor\n",
    "\n",
    "\n",
    "class DuIEDataset(paddle.io.Dataset):\n",
    "    def __init__(self, data, label_map, tokenizer, max_length=512, pad_to_max_length=False):\n",
    "        super(DuIEDataset, self).__init__()\n",
    "\n",
    "        self.data = data\n",
    "        self.chn_punc_extractor = ChineseAndPunctuationExtractor()\n",
    "        self.tokenizer = tokenizer\n",
    "        self.max_seq_length = max_length\n",
    "        self.pad_to_max_length = pad_to_max_length\n",
    "        self.label_map = label_map\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.data)\n",
    "\n",
    "    def __getitem__(self, item):\n",
    "\n",
    "        example = json.loads(self.data[item])\n",
    "        input_feature = convert_example_to_feature(\n",
    "            example, self.tokenizer, self.chn_punc_extractor,\n",
    "            self.label_map, self.max_seq_length, self.pad_to_max_length)\n",
    "        return {\n",
    "            \"input_ids\": np.array(input_feature.input_ids, dtype=\"int64\"),\n",
    "            \"seq_lens\": np.array(input_feature.seq_len, dtype=\"int64\"),\n",
    "            \"tok_to_orig_start_index\":\n",
    "            np.array(input_feature.tok_to_orig_start_index, dtype=\"int64\"),\n",
    "            \"tok_to_orig_end_index\": \n",
    "            np.array(input_feature.tok_to_orig_end_index, dtype=\"int64\"),\n",
    "            # If model inputs is generated in `collate_fn`, delete the data type casting.\n",
    "            \"labels\": np.array(input_feature.labels, dtype=\"float32\"),\n",
    "        }\n",
    "\n",
    "\n",
    "    @classmethod\n",
    "    def from_file(cls,\n",
    "                  file_path,\n",
    "                  tokenizer,\n",
    "                  max_length=512,\n",
    "                  pad_to_max_length=None):\n",
    "        assert os.path.exists(file_path) and os.path.isfile(\n",
     "            file_path), f\"{file_path} does not exist or is not a file.\"\n",
    "        label_map_path = os.path.join(\n",
    "            os.path.dirname(file_path), \"predicate2id.json\")\n",
    "        assert os.path.exists(label_map_path) and os.path.isfile(\n",
    "            label_map_path\n",
     "        ), f\"{label_map_path} does not exist or is not a file.\"\n",
    "        with open(label_map_path, 'r', encoding='utf8') as fp:\n",
    "            label_map = json.load(fp)\n",
    "\n",
    "        with open(file_path, \"r\", encoding=\"utf-8\") as fp:\n",
    "            data = fp.readlines()\n",
    "            return cls(data, label_map, tokenizer, max_length, pad_to_max_length)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2022-02-26T03:35:42.061572Z",
     "iopub.status.busy": "2022-02-26T03:35:42.060864Z",
     "iopub.status.idle": "2022-02-26T03:35:42.102550Z",
     "shell.execute_reply": "2022-02-26T03:35:42.101849Z",
     "shell.execute_reply.started": "2022-02-26T03:35:42.061522Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "# Load the datasets and set the batch size\n",
    "data_path = 'data'\n",
    "batch_size = 8\n",
    "max_seq_length = 128\n",
    "\n",
    "train_file_path = os.path.join(data_path, 'train_data.json')\n",
    "train_dataset = DuIEDataset.from_file(\n",
    "    train_file_path, tokenizer, max_seq_length, True)\n",
    "train_batch_sampler = paddle.io.BatchSampler(\n",
    "    train_dataset, batch_size=batch_size, shuffle=True, drop_last=True)\n",
    "collator = DataCollator()\n",
    "train_data_loader = paddle.io.DataLoader(\n",
    "    dataset=train_dataset,\n",
    "    batch_sampler=train_batch_sampler,\n",
    "    collate_fn=collator)\n",
    "\n",
    "eval_file_path = os.path.join(data_path, 'dev_data.json')\n",
    "test_dataset = DuIEDataset.from_file(\n",
    "    eval_file_path, tokenizer, max_seq_length, True)\n",
    "test_batch_sampler = paddle.io.BatchSampler(\n",
    "    test_dataset, batch_size=batch_size, shuffle=False, drop_last=True)\n",
    "test_data_loader = paddle.io.DataLoader(\n",
    "    dataset=test_dataset,\n",
    "    batch_sampler=test_batch_sampler,\n",
    "    collate_fn=collator)"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "## 3.4 Defining the Loss Function\n",
    "\n",
    "Next we define the loss function and optimizer. Since every token can carry multiple labels, we use binary cross-entropy with logits as the loss (masked so that padding and special tokens do not contribute), and paddle.optimizer.AdamW as the optimizer.\n",
    "\n",
    "During training, model checkpoints are saved under the checkpoints folder in the current directory."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2022-02-26T03:35:43.536022Z",
     "iopub.status.busy": "2022-02-26T03:35:43.535057Z",
     "iopub.status.idle": "2022-02-26T03:35:43.542214Z",
     "shell.execute_reply": "2022-02-26T03:35:43.541469Z",
     "shell.execute_reply.started": "2022-02-26T03:35:43.535976Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "import paddle.nn as nn\n",
    "\n",
    "class BCELossForDuIE(nn.Layer):\n",
    "    def __init__(self, ):\n",
    "        super(BCELossForDuIE, self).__init__()\n",
    "        self.criterion = nn.BCEWithLogitsLoss(reduction='none')\n",
    "\n",
    "    def forward(self, logits, labels, mask):\n",
    "        loss = self.criterion(logits, labels)\n",
    "        mask = paddle.cast(mask, 'float32')\n",
    "        loss = loss * mask.unsqueeze(-1)\n",
    "        loss = paddle.sum(loss.mean(axis=2), axis=1) / paddle.sum(mask, axis=1)\n",
    "        loss = loss.mean()\n",
    "        return loss"
   ]
  },
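  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The masked loss above computes an element-wise BCE-with-logits, zeroes out padding and special tokens via the mask, averages over the label dimension, then over real tokens, then over the batch. A minimal NumPy sketch of the same computation (toy shapes; `masked_bce` is a hypothetical helper, not part of the project code):\n",
    "\n",
    "```python\n",
    "import numpy as np\n",
    "\n",
    "def masked_bce(logits, labels, mask):\n",
    "    # numerically stable element-wise BCE-with-logits\n",
    "    per_elem = np.maximum(logits, 0) - logits * labels + np.log1p(np.exp(-np.abs(logits)))\n",
    "    per_elem = per_elem * mask[..., None]               # zero out padding / special tokens\n",
    "    per_token = per_elem.mean(axis=2)                   # average over the label dimension\n",
    "    per_seq = per_token.sum(axis=1) / mask.sum(axis=1)  # average over real tokens only\n",
    "    return per_seq.mean()                               # average over the batch\n",
    "\n",
    "logits = np.zeros((2, 4, 3))                            # batch=2, seq_len=4, num_labels=3\n",
    "labels = np.zeros((2, 4, 3))\n",
    "mask = np.array([[1, 1, 1, 0], [1, 1, 0, 0]], dtype=\"float64\")\n",
    "print(masked_bce(logits, labels, mask))                 # log(2) = 0.6931471805599453\n",
    "```"
   ]
  },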
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2022-02-26T03:35:46.315074Z",
     "iopub.status.busy": "2022-02-26T03:35:46.314712Z",
     "iopub.status.idle": "2022-02-26T03:35:46.329955Z",
     "shell.execute_reply": "2022-02-26T03:35:46.329191Z",
     "shell.execute_reply.started": "2022-02-26T03:35:46.315039Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "from utils import write_prediction_results, get_precision_recall_f1, decoding\n",
    "\n",
    "@paddle.no_grad()\n",
    "def evaluate(model, criterion, data_loader, file_path, mode):\n",
    "    \"\"\"\n",
    "    mode eval:\n",
    "    evaluate on the development set and compute P/R/F1 during training.\n",
    "    mode predict:\n",
    "    run on the development / test set, then write the predictions and their\n",
    "    zip archive under /home/aistudio/relation_extraction/data for later\n",
    "    submission or evaluation.\n",
    "    \"\"\"\n",
    "    example_all = []\n",
    "    with open(file_path, \"r\", encoding=\"utf-8\") as fp:\n",
    "        for line in fp:\n",
    "            example_all.append(json.loads(line))\n",
    "    id2spo_path = os.path.join(os.path.dirname(file_path), \"id2spo.json\")\n",
    "    with open(id2spo_path, 'r', encoding='utf8') as fp:\n",
    "        id2spo = json.load(fp)\n",
    "\n",
    "    model.eval()\n",
    "    loss_all = 0\n",
    "    eval_steps = 0\n",
    "    formatted_outputs = []\n",
    "    current_idx = 0\n",
    "    for batch in tqdm(data_loader, total=len(data_loader)):\n",
    "        eval_steps += 1\n",
    "        input_ids, seq_len, tok_to_orig_start_index, tok_to_orig_end_index, labels = batch\n",
    "        logits = model(input_ids=input_ids)\n",
    "        mask = (input_ids != 0).logical_and((input_ids != 1)).logical_and((input_ids != 2))\n",
    "        loss = criterion(logits, labels, mask)\n",
    "        loss_all += loss.numpy().item()\n",
    "        probs = F.sigmoid(logits)\n",
    "        logits_batch = probs.numpy()\n",
    "        seq_len_batch = seq_len.numpy()\n",
    "        tok_to_orig_start_index_batch = tok_to_orig_start_index.numpy()\n",
    "        tok_to_orig_end_index_batch = tok_to_orig_end_index.numpy()\n",
    "        formatted_outputs.extend(decoding(example_all[current_idx: current_idx+len(logits)],\n",
    "                                          id2spo,\n",
    "                                          logits_batch,\n",
    "                                          seq_len_batch,\n",
    "                                          tok_to_orig_start_index_batch,\n",
    "                                          tok_to_orig_end_index_batch))\n",
    "        current_idx = current_idx+len(logits)\n",
    "    loss_avg = loss_all / eval_steps\n",
    "    print(\"eval loss: %f\" % (loss_avg))\n",
    "\n",
    "    if mode == \"predict\":\n",
    "        predict_file_path = os.path.join(\"/home/aistudio/relation_extraction/data\", 'predictions.json')\n",
    "    else:\n",
    "        predict_file_path = os.path.join(\"/home/aistudio/relation_extraction/data\", 'predict_eval.json')\n",
    "\n",
    "    predict_zipfile_path = write_prediction_results(formatted_outputs,\n",
    "                                                    predict_file_path)\n",
    "\n",
    "    if mode == \"eval\":\n",
    "        precision, recall, f1 = get_precision_recall_f1(file_path,\n",
    "                                                        predict_zipfile_path)\n",
    "        os.system('rm {} {}'.format(predict_file_path, predict_zipfile_path))\n",
    "        return precision, recall, f1\n",
    "    elif mode != \"predict\":\n",
    "        raise Exception(\"wrong mode for eval func\")"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2022-02-26T03:35:47.632584Z",
     "iopub.status.busy": "2022-02-26T03:35:47.631620Z",
     "iopub.status.idle": "2022-02-26T03:35:47.641350Z",
     "shell.execute_reply": "2022-02-26T03:35:47.640509Z",
     "shell.execute_reply.started": "2022-02-26T03:35:47.632539Z"
    },
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "from paddlenlp.transformers import LinearDecayWithWarmup\n",
    "\n",
    "learning_rate = 2e-5\n",
    "num_train_epochs = 5\n",
    "warmup_ratio = 0.06\n",
    "\n",
    "criterion = BCELossForDuIE()\n",
    "# Defines learning rate strategy.\n",
    "steps_by_epoch = len(train_data_loader)\n",
    "num_training_steps = steps_by_epoch * num_train_epochs\n",
    "lr_scheduler = LinearDecayWithWarmup(learning_rate, num_training_steps, warmup_ratio)\n",
    "optimizer = paddle.optimizer.AdamW(\n",
    "    learning_rate=lr_scheduler,\n",
    "    parameters=model.parameters(),\n",
    "    apply_decay_param_fun=lambda x: x in [\n",
    "        p.name for n, p in model.named_parameters()\n",
    "        if not any(nd in n for nd in [\"bias\", \"norm\"])])"
   ]
  },
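  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "`LinearDecayWithWarmup` raises the learning rate linearly from 0 over the warmup steps, then decays it linearly back to 0 over the remaining steps. A pure-Python sketch of that schedule (`lr_at` and the toy step counts are illustrative assumptions, not the library implementation):\n",
    "\n",
    "```python\n",
    "def lr_at(step, base_lr=2e-5, total_steps=1000, warmup_ratio=0.06):\n",
    "    warmup_steps = int(total_steps * warmup_ratio)  # 60 warmup steps here\n",
    "    if step < warmup_steps:\n",
    "        return base_lr * step / warmup_steps        # linear warmup from 0\n",
    "    # linear decay from base_lr down to 0\n",
    "    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))\n",
    "\n",
    "print(lr_at(30))    # mid-warmup: half of base_lr\n",
    "print(lr_at(60))    # warmup finished: full base_lr\n",
    "print(lr_at(1000))  # fully decayed: 0.0\n",
    "```"
   ]
  },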
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "!mkdir checkpoints"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 四、Model Training\n",
    "Training uses the binary cross-entropy loss defined above with the paddle.optimizer.AdamW optimizer, running for 40 epochs, logging every 50 steps, and evaluating and saving a checkpoint every 10,000 steps. Checkpoints are written to the checkpoints directory."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {
    "scrolled": true
   },
   "outputs": [],
   "source": [
    "import time\n",
    "import paddle.nn.functional as F\n",
    "\n",
    "# Start training.\n",
    "global_step = 0\n",
    "logging_steps = 50\n",
    "save_steps = 10000\n",
    "num_train_epochs = 40\n",
    "output_dir = 'checkpoints'\n",
    "tic_train = time.time()\n",
    "model.train()\n",
    "for epoch in range(num_train_epochs):\n",
    "    print(\"\\n=====start training of %d epochs=====\" % epoch)\n",
    "    tic_epoch = time.time()\n",
    "    for step, batch in enumerate(train_data_loader):\n",
    "        input_ids, seq_lens, tok_to_orig_start_index, tok_to_orig_end_index, labels = batch\n",
    "        logits = model(input_ids=input_ids)\n",
    "        mask = (input_ids != 0).logical_and((input_ids != 1)).logical_and(\n",
    "            (input_ids != 2))\n",
    "        loss = criterion(logits, labels, mask)\n",
    "        loss.backward()\n",
    "        optimizer.step()\n",
    "        lr_scheduler.step()\n",
    "        optimizer.clear_grad()\n",
    "        loss_item = loss.numpy().item()\n",
    "\n",
    "        if global_step % logging_steps == 0:\n",
    "            print(\n",
    "                \"epoch: %d / %d, steps: %d / %d, loss: %f, speed: %.2f step/s\"\n",
    "                % (epoch, num_train_epochs, step, steps_by_epoch,\n",
    "                    loss_item, logging_steps / (time.time() - tic_train)))\n",
    "            tic_train = time.time()\n",
    "\n",
    "        if global_step % save_steps == 0 and global_step != 0:\n",
    "            print(\"\\n=====start evaluating ckpt of %d steps=====\" %\n",
    "                    global_step)\n",
    "            precision, recall, f1 = evaluate(\n",
    "                model, criterion, test_data_loader, eval_file_path, \"eval\")\n",
    "            print(\"precision: %.2f\\t recall: %.2f\\t f1: %.2f\\t\" %\n",
    "                    (100 * precision, 100 * recall, 100 * f1))\n",
    "            print(\"saving checkpoint model_%d.pdparams to %s \" %\n",
    "                    (global_step, output_dir))\n",
    "            paddle.save(model.state_dict(),\n",
    "                        os.path.join(output_dir, \n",
    "                                        \"model_%d.pdparams\" % global_step))\n",
    "            model.train()\n",
    "\n",
    "        global_step += 1\n",
    "    tic_epoch = time.time() - tic_epoch\n",
    "    print(\"epoch time footprint: %d hour %d min %d sec\" %\n",
    "            (tic_epoch // 3600, (tic_epoch % 3600) // 60, tic_epoch % 60))\n",
    "\n",
    "# Does final evaluation.\n",
    "print(\"\\n=====start evaluating last ckpt of %d steps=====\" %\n",
    "        global_step)\n",
    "precision, recall, f1 = evaluate(model, criterion, test_data_loader,\n",
    "                                    eval_file_path, \"eval\")\n",
    "print(\"precision: %.2f\\t recall: %.2f\\t f1: %.2f\\t\" %\n",
    "        (100 * precision, 100 * recall, 100 * f1))\n",
    "paddle.save(model.state_dict(),\n",
    "            os.path.join(output_dir,\n",
    "                            \"model_%d.pdparams\" % global_step))\n",
    "print(\"\\n=====training complete=====\")"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 五、Model Evaluation\n",
    "Load the checkpoint saved during training and run prediction on the test set, then score the results with the official evaluation script."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2022-02-26T01:38:56.877511Z",
     "iopub.status.busy": "2022-02-26T01:38:56.876806Z",
     "iopub.status.idle": "2022-02-26T01:39:51.224267Z",
     "shell.execute_reply": "2022-02-26T01:39:51.223428Z",
     "shell.execute_reply.started": "2022-02-26T01:38:56.877469Z"
    },
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "+ export CUDA_VISIBLE_DEVICES=0\n",
      "+ CUDA_VISIBLE_DEVICES=0\n",
      "+ export BATCH_SIZE=32\n",
      "+ BATCH_SIZE=32\n",
      "+ export CKPT=./checkpoints/model_50040.pdparams\n",
      "+ CKPT=./checkpoints/model_50040.pdparams\n",
      "+ export DATASET_FILE=./data/test_data.json\n",
      "+ DATASET_FILE=./data/test_data.json\n",
      "+ python run_duie.py --do_predict --init_checkpoint ./checkpoints/model_50040.pdparams --predict_data_file ./data/test_data.json --max_seq_length 512 --batch_size 32\n",
      "/opt/conda/envs/python35-paddle120-env/lib/python3.7/site-packages/paddlenlp/transformers/funnel/modeling.py:30: DeprecationWarning: Using or importing the ABCs from 'collections' instead of from 'collections.abc' is deprecated, and in 3.8 it will stop working\n",
      "  from collections import Iterable\n",
      "[2022-02-26 09:38:58,783] [    INFO] - Already cached /home/aistudio/.paddlenlp/models/roberta-wwm-ext-large/roberta_chn_large.pdparams\n",
      "W0226 09:38:58.784677 20729 device_context.cc:447] Please NOTE: device: 0, GPU Compute Capability: 7.0, Driver API Version: 10.1, Runtime API Version: 10.1\n",
      "W0226 09:38:58.789849 20729 device_context.cc:465] device: 0, cuDNN Version: 7.6.\n",
      "[2022-02-26 09:39:08,519] [    INFO] - Already cached /home/aistudio/.paddlenlp/models/roberta-wwm-ext-large/vocab.txt\n",
      "[2022-02-26 09:39:08,536] [    INFO] - Preprocessing data, loaded from ./data/test_data.json\n",
      "100%|██████████████████████████████████████| 1000/1000 [00:04<00:00, 207.16it/s]\n",
      "\n",
      "=====start predicting=====\n",
      "100%|███████████████████████████████████████████| 31/31 [00:33<00:00,  1.08s/it]\n",
      "eval loss: 0.048061\n",
      "=====predicting complete=====\n"
     ]
    }
   ],
   "source": [
    "!bash predict.sh"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {
    "execution": {
     "iopub.execute_input": "2022-02-26T01:48:47.524972Z",
     "iopub.status.busy": "2022-02-26T01:48:47.523933Z",
     "iopub.status.idle": "2022-02-26T01:48:48.039757Z",
     "shell.execute_reply": "2022-02-26T01:48:48.038941Z",
     "shell.execute_reply.started": "2022-02-26T01:48:47.524926Z"
    },
    "scrolled": true
   },
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "correct spo num = 2281.0\n",
      "submitted spo num = 2307.0\n",
      "golden set spo num = 2320.0\n",
      "submitted recall spo num = 2281.0\n",
      "{\"errorCode\": 0, \"errorMsg\": \"success\", \"data\": [{\"name\": \"precision\", \"value\": 0.9887}, {\"name\": \"recall\", \"value\": 0.9832}, {\"name\": \"f1-score\", \"value\": 0.986}]}\n"
     ]
    }
   ],
   "source": [
    "!python re_official_evaluation.py --golden_file=dev_data.json  --predict_file=duie.json.zip"
   ]
  },
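  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "The official metrics printed above can be reproduced from the counts using the F1 formula in section 1.1: P = correct / submitted, R = correct / golden, F1 = 2PR / (P + R). A quick sanity check in plain Python:\n",
    "\n",
    "```python\n",
    "correct, submitted, golden = 2281, 2307, 2320  # counts from the evaluation output above\n",
    "p = correct / submitted                        # precision\n",
    "r = correct / golden                           # recall\n",
    "f1 = 2 * p * r / (p + r)\n",
    "print(round(p, 4), round(r, 4), round(f1, 4))  # 0.9887 0.9832 0.986\n",
    "```"
   ]
  },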
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 六、Summary\n",
    "\n",
    "Information extraction is highly relevant to everyday applications such as chatbots, where turning what users say into structured information helps systems understand them better. This project still has many shortcomings and borrows heavily from existing code; I will keep following up and improving it."
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "# 七、Personal Notes\n",
    "I am still a sophomore and don't know all that much yet; I hope to learn more about deep learning going forward.\n"
   ]
  },
  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "PaddlePaddle AI Studio homepage: https://aistudio.baidu.com/aistudio/usercenter\n",
    "\n",
    "GitHub homepage: https://github.com/TTxxtt\n",
    "\n",
    "Gitee homepage: https://gitee.com/green-baby-milk"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3",
   "language": "python",
   "name": "py35-paddle1.2.0"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.4"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 4
}
