{
 "metadata": {
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.7.6-final"
  },
  "orig_nbformat": 2,
  "kernelspec": {
   "name": "Python 3.7.6 64-bit ('pytorch': conda)",
   "display_name": "Python 3.7.6 64-bit ('pytorch': conda)",
   "metadata": {
    "interpreter": {
     "hash": "d4c1240a08c27a9e7d24f40f6f295dcde57dffc995969ffd0a60a858ccefdc00"
    }
   }
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2,
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [],
   "source": [
    "import docx\n",
    "from utils import *"
   ]
  },
  {
   "source": [
     "### Heading conventions\n",
     "\n",
     "#### 4 Line operation and maintenance (level-1 heading)\n",
     "#### 4.1 Line patrol (level-2 heading)\n",
     "#### 4.1.1 Level-3 heading: unstructured supplementary information under its level-2 heading\n",
     "\n",
     "- A level-1 heading and its level-2 headings are annotated as a hypernym-hyponym relation\n",
     "- A level-2 heading and the first entity of each knowledge triple extracted from the text (*usually the agent of the action in the first half of the sentence*) are annotated as a hypernym-hyponym relation; the knowledge triples are obtained by dependency parsing\n",
     "\n",
     "The level-3 headings here correspond to the supplementary content under level-2 headings in the paper.\n",
     "\n",
     "For the content of a level-3 heading (supplementary content under a level-2 heading in the original paper):\n",
     "- Plain text ends with the full stop “。”.\n",
     "- Text that introduces supplementary content ends with the colon “：”.\n",
     "  - If the supplementary content is a *noun phrase*, it can be *filled into a triple as knowledge* directly.\n",
     "  - If the supplementary content is a sentence, it must go through *relation extraction* before being added to the triples.\n"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
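   {
    "source": [
     "The terminator convention above can be checked mechanically. A minimal sketch (the helper name `has_supplement` is my own illustration, not part of the project):\n",
     "\n",
     "```python\n",
     "def has_supplement(text):\n",
     "    # per the convention above, a clause ending with the fullwidth colon '：'\n",
     "    # introduces supplementary content; one ending with '。' is self-contained\n",
     "    return text.rstrip().endswith('：')\n",
     "```"
    ],
    "cell_type": "markdown",
    "metadata": {}
   },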
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [],
   "source": [
     "data_path = r\"D:\\Program Files\\腾讯文件\\电力安全工作规定\\第四章.docx\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [],
   "source": [
    "file = docx.Document(data_path)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
    "lines = []\n",
    "for index,line in enumerate(file.paragraphs):\n",
    "    lines.append(line.text.strip())"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
     "dataset = [] # each item has two parts: a paragraph tag and the paragraph text\n",
     "tag = \"\" # tracks the most recent paragraph tag\n",
     "for line in lines:\n",
     "    if line == \"\": continue\n",
     "    line_split = line.split(' ')\n",
     "\n",
     "    if len(line_split) > 1 and is_digital_in_str(line_split[0][:5]):\n",
     "        # the line starts with a section number such as '4.1.1'\n",
     "        tag = line_split[0]\n",
     "        text = line_split[1]\n",
     "    elif len(line_split) == 1:\n",
     "        text = line_split[0]\n",
     "    else:\n",
     "        text = line # no leading section number: keep the whole line, avoiding a stale text value\n",
     "    dataset.append([tag,text])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
     "dataset_dict = {} # convert the list data into a dict keyed by section number\n",
    "\n",
    "for i,data in enumerate(dataset):\n",
    "    temp = data[0].split(\".\")\n",
    "    temp_tag = get_num_by_list(temp)\n",
    "    # print(i,'==',temp,'==',temp_tag,'==',data[0])\n",
    "    if temp_tag in dataset_dict.keys():\n",
    "        dataset_dict[temp_tag] += data[1]\n",
    "    else:\n",
    "        dataset_dict[temp_tag] = data[1]"
   ]
  },
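   {
    "source": [
     "`is_digital_in_str` and `get_num_by_list` come from the project's `utils` module, which is not shown here. Judging only from how they are used above, plausible sketches (assumptions, not the actual implementations) are:\n",
     "\n",
     "```python\n",
     "def is_digital_in_str(s):\n",
     "    # assumed behavior: does the string contain any digit,\n",
     "    # as in section numbers like '4', '4.1' or '4.1.1'?\n",
     "    return any(ch.isdigit() for ch in s)\n",
     "\n",
     "def get_num_by_list(parts):\n",
     "    # assumed behavior: collapse ['4', '1', '1'] into the integer key 411,\n",
     "    # so that keys > 99 correspond to level-3 headings\n",
     "    return int(''.join(parts))\n",
     "```"
    ],
    "cell_type": "markdown",
    "metadata": {}
   },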
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [],
   "source": [
    "#dataset_dict"
   ]
  },
  {
   "source": [
     "### Data preprocessing pipeline\n",
     "Step 1: 1. word segmentation; 2. POS tagging; 3. dependency parsing\n",
     "\n",
     "Step 2: perform **subject filling** for sentences with a missing subject, then assemble all sentences to be extracted into one text\n",
     "\n",
     "Step 3: following the dependency parse, take the **entity names** related to the **core verb** as **entities** and the **core verb** as their relation, completing the extraction of the **entity triples** described above."
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
    "from ltp import LTP \n",
    "ltp = LTP()\n",
    "ltp.init_dict(path=\"dianli_dict.txt\",max_window=12)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "metadata": {
    "tags": []
   },
   "outputs": [],
   "source": [
     "all_segments, all_hiddens = [],[]\n",
     "deps_dict = {} # dependency parsing results\n",
     "segments_dict = {} # word segmentation results\n",
     "poses_dict = {} # POS tagging results\n",
     "for key in dataset_dict.keys():\n",
     "    if key > 99: # level-3 headings, i.e. supplementary content of a level-2 heading\n",
     "        sentces = ltp.sent_split([dataset_dict[key]])       # split into sentences first\n",
     "        segments, hiddens = ltp.seg(sentces)                # then segment words; each sentence's result is one list\n",
     "        \n",
     "        poses = ltp.pos(hiddens)                            # POS tagging\n",
     "        depes = ltp.dep(hiddens)                            # dependency parsing\n",
     "\n",
     "        poses_dict[key] = poses\n",
     "        deps_dict[key] = depes\n",
     "        segments_dict[key] = segments\n",
    "        # print(len(sentces),sentces)\n",
    "        # print()\n",
    "        # print(len(segments),segments)\n",
    "        # print()\n",
    "        # print(len(poses),poses)\n",
    "        # print()\n",
    "        # print(len(depes),depes[0])\n",
    "        # break "
   ]
  },
  {
   "source": [
     "#### Step 2: manual subject filling (**skipped for now**)"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": "13 ['巡线', '工作', '应', '由', '有', '电力', '线路', '工作', '经验', '的', '人员', '担任', '。']\n13 ['v', 'v', 'v', 'p', 'v', 'n', 'n', 'v', 'n', 'u', 'n', 'v', 'wp']\n5 [(1, 2, 'ATT'), (2, 12, 'FOB'), (3, 12, 'ADV'), (4, 12, 'ADV'), (5, 11, 'ATT'), (6, 7, 'ATT'), (7, 8, 'ATT'), (8, 9, 'ATT'), (9, 5, 'VOB'), (10, 5, 'RAD'), (11, 4, 'POB'), (12, 0, 'HED'), (13, 12, 'WP')]\n担任\n"
    }
   ],
   "source": [
     "# inspect the POS tags and dependency relations of one sentence\n",
    "for key in dataset_dict.keys():\n",
    "    if key > 99:\n",
    "        temp_poes = poses_dict[key]\n",
    "        temp_segments = segments_dict[key]\n",
    "        temp_deps = deps_dict[key]\n",
    "\n",
    "        core_verb_index = get_core_verb_index_by_dep(temp_deps[0]) - 1\n",
    "        print(len(temp_segments[0]),temp_segments[0])\n",
    "        print(len(temp_poes[0]),temp_poes[0])\n",
    "        print(len(temp_deps),temp_deps[0])\n",
    "        print(temp_segments[0][core_verb_index])\n",
    "        \n",
    "        break"
   ]
  },
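   {
    "source": [
     "`get_core_verb_index_by_dep` also comes from `utils`. Given LTP's `(child, head, relation)` triples with 1-based word indices, as in the printed parse above, a plausible sketch (an assumption, not the project's code) is:\n",
     "\n",
     "```python\n",
     "def get_core_verb_index_by_dep(dep):\n",
     "    # dep: list of (child, head, relation) triples with 1-based word indices;\n",
     "    # the root of the parse carries the relation 'HED' (its head is 0)\n",
     "    for child, head, relation in dep:\n",
     "        if relation == 'HED':\n",
     "            return child  # the caller subtracts 1 to index the segment list\n",
     "    return 0\n",
     "```\n",
     "\n",
     "On the parse printed above, `(12, 0, 'HED')` makes this return 12, which indexes the core verb `担任` after subtracting 1."
    ],
    "cell_type": "markdown",
    "metadata": {}
   },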
  {
   "source": [
     "#### Automatic extraction based on dependency parsing\n",
     "\n",
     "(1) Attributive completion for entities\n",
     "\n",
     "Example: `变电站内主变、开关等重要设备发生严重故障。`\n",
     "\n",
     "`设备` holds a `subject-verb relation` with the core verb `发生`, and `发生` holds a verb-object relation with `故障`.\n",
     "\n",
     "Both the `subject` and the `object` carry attributives; for example, `严重` modifies `故障` through the `attributive relation (ATT)`.\n",
     "\n",
     "The word `设备` in the first half of the sentence has multiple attributives. Adding the attributive modifiers makes the statement more complete:\n",
     "`<变电站内主变重要设备，发生，严重故障>`.\n",
     "\n",
     "---\n",
     "\n",
     "Attributive-completion algorithm:\n",
     "\n",
     "1. Check whether some word C holds an attributive (ATT) relation with the entity A to be extracted. If so, add every such word C to a list L in ascending order of word index.\n",
     "2. Traverse L in reverse; if some word D still holds an ATT relation with a word C, apply step 1 recursively. (This step seems slightly off.)\n",
     "3. Following the reverse traversal of L, concatenate each attributive C with the entity A to finish the completion.\n",
     "\n",
     "---\n",
     "\n",
     "(2) Verb-object completion\n",
     "\n",
     "Example: `调控机构负责编制电力统计`.\n",
     "\n",
     "Here `编制` is a verb and holds a verb-object relation with `统计`; the verb phrase `编制电力统计` acts as a single entity.\n",
     "\n",
     "The extracted triple should therefore be `<调控机构，负责，编制电力统计>`.\n",
     "\n",
     "Verb-object completion algorithm:\n",
     "\n",
     "1. Check whether some word C holds a `subject-verb` or `verb-object` relation with entity A.\n",
     "2. If a `subject-verb` relation exists, concatenate C and A into an entity; if a `verb-object` relation exists, concatenate A and C. If some word D also holds a `subject-verb` relation with C, or C and D hold a `verb-object` relation, return to step 2.\n",
     "\n",
     "In practice the two completions may occur alone or together, so they are implemented together and invoked recursively.\n"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
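   {
    "source": [
     "The attributive-completion steps above can be sketched as a recursive DFS over the ATT edges of LTP-style `(child, head, relation)` triples. This is my own illustration, not the project's code:\n",
     "\n",
     "```python\n",
     "def complete_attributes(idx, deps, words):\n",
     "    # recursively prepend every ATT modifier of the word at 1-based index idx,\n",
     "    # in ascending word order (DFS, concatenated on the way back up)\n",
     "    mods = sorted(c for c, h, rel in deps if h == idx and rel == 'ATT')\n",
     "    prefix = ''.join(complete_attributes(c, deps, words) for c in mods)\n",
     "    return prefix + words[idx - 1]\n",
     "```\n",
     "\n",
     "On the parse printed earlier, following the ATT chain from word 9 expands `经验` to `电力线路工作经验`."
    ],
    "cell_type": "markdown",
    "metadata": {}
   },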
  {
   "source": [
     "#### Summary of the first completion stage:\n",
     "Subject-verb and verb-object relations are completed separately.\n",
     "\n",
     "On the subject side, only attributive (ATT) relations currently need completion, so a DFS traverses them recursively and concatenates on the way back.\n",
     "\n",
     "On the object side, verb-object relations are completed first: a BFS finds the verb-object chain, after which ATT relations are followed to complete each object's attributives."
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
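   {
    "source": [
     "The BFS over verb-object relations described above could look like this (again an illustrative sketch under the same `(child, head, relation)` assumption, not the project's code):\n",
     "\n",
     "```python\n",
     "from collections import deque\n",
     "\n",
     "def complete_object_phrase(verb_idx, deps, words):\n",
     "    # breadth-first walk from the verb along VOB edges, appending each object;\n",
     "    # ATT completion of the objects would then be layered on top of this\n",
     "    phrase = [words[verb_idx - 1]]\n",
     "    queue = deque([verb_idx])\n",
     "    while queue:\n",
     "        cur = queue.popleft()\n",
     "        for child, head, rel in deps:\n",
     "            if head == cur and rel == 'VOB':\n",
     "                phrase.append(words[child - 1])\n",
     "                queue.append(child)\n",
     "    return ''.join(phrase)\n",
     "```"
    ],
    "cell_type": "markdown",
    "metadata": {}
   },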
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": "['巡线', '工作', '应', '由', '有', '电力', '线路', '工作', '经验', '的', '人员', '担任', '。']\n['单独', '巡线', '人员', '应', '考试', '合格', '并', '经', '工区', '（', '公司', '、', '所', '）', '主管', '生产', '领导', '批准', '。']\n['电缆隧道', '、', '偏僻', '山区', '和', '夜间', '巡线', '应', '由', '两', '人', '进行', '。']\n['暑天', '、', '大雪天', '等', '恶劣', '天气', '，', '必要', '时', '由', '两', '人', '进行', '。']\n['单人', '巡线', '时', '，', '禁止', '攀登', '杆塔', '和', '铁塔', '。']\n"
    }
   ],
   "source": [
     "# test attributive completion first; attributive relation: ATT\n",
     "# first find the two entities related to the core verb\n",
     "for key in dataset_dict.keys():\n",
     "    if key < 100: continue\n",
     "    lines = len(segments_dict[key]) # number of sentences\n",
    "    for line_no in range(lines):\n",
    "        print(segments_dict[key][line_no])\n",
    "    # print(segments_dict[key])\n",
    "    break\n",
    "    # print(dataset_dict[key])"
   ]
  },
  {
   "source": [
     "### Handling coordination (COO) in dependency parsing\n",
     "\n",
     "Sentences may contain a `coordinated predicate structure` as well as `coordinated subjects or objects`.\n",
     "\n",
     "In predicate coordination there are verbs `coordinated with the core verb`, and they share the `subject` or the `object`.\n",
     "\n",
     "The coordination analysis targets the `subject-verb-object structure`, i.e. two words already hold a `subject-verb relation (SBV)` and a `verb-object relation (VOB)` with the `core verb`.\n",
     "\n",
     "Example: `相关人员应熟悉厂站设备， 熟悉启动调试方案。`\n",
     "\n",
     "The first `熟悉` is the core of the whole sentence, and the second `熟悉` is in a coordinate relation (COO) with it. Because of this verb the sentence splits into two clauses that share the subject `人员` while the objects differ.\n",
     "\n",
     "The resulting triples are `<相关人员，熟悉，厂站设备>` and `<相关人员，熟悉，启动调试方案>`.\n",
     "\n",
     "Algorithm outline:\n",
     "\n",
     "1. Check whether some word B is coordinated with the core verb `HED`, i.e. HED and B form a COO structure.\n",
     "2. Check whether B has some word C in an SBV or VOB structure with it.\n",
     "3. If C and B form an SBV structure, B shares its object with the core verb HED; if B and C form a VOB structure, B shares its subject with HED.\n",
     "\n",
     "---\n",
     "\n",
     "Example: `发电厂应按照调度指令进行调频、调峰、调压。`\n",
     "\n",
     "The core verb is `进行` and its object is `调频`; `调峰` and `调压` are coordinated (COO) with `调频`. Three knowledge triples can therefore be extracted:\n",
     "`<发电厂，进行，调频>，<发电厂，进行，调压>，<发电厂，进行，调峰>`.\n"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
     "Algorithm outline:\n",
     "\n",
     "1. Identify the word A holding a subject-verb relation with the core verb HED and the word B holding a verb-object relation with it.\n",
     "2. Check whether A or B has a coordinated word C. If C is coordinated with A, then C also holds a subject-verb relation with the core verb HED, and the extracted triples share the same object. If C is coordinated with B, then HED holds a verb-object relation with C, and the extracted triples share the same subject."
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
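   {
    "source": [
     "Putting the coordination rules together, triple extraction with COO expansion can be sketched as follows. The dependency triples in the test below are hand-written to mirror a simplified parse of the `发电厂` example, not actual LTP output:\n",
     "\n",
     "```python\n",
     "def extract_triples(deps, words):\n",
     "    # locate the core verb (HED), its subjects (SBV) and objects (VOB),\n",
     "    # then expand coordinated (COO) subjects and objects into extra triples\n",
     "    hed = next(c for c, h, rel in deps if rel == 'HED')\n",
     "    subs = [c for c, h, rel in deps if h == hed and rel == 'SBV']\n",
     "    objs = [c for c, h, rel in deps if h == hed and rel == 'VOB']\n",
     "    subs += [c for c, h, rel in deps if h in subs and rel == 'COO']\n",
     "    objs += [c for c, h, rel in deps if h in objs and rel == 'COO']\n",
     "    w = lambda i: words[i - 1]\n",
     "    return [(w(s), w(hed), w(o)) for s in subs for o in objs]\n",
     "```"
    ],
    "cell_type": "markdown",
    "metadata": {}
   },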
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": []
  }
 ]
}