{
 "metadata": {
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.6.7-final"
  },
  "orig_nbformat": 2,
  "kernelspec": {
   "name": "python3",
   "display_name": "DataAnalysis",
   "language": "python"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 2,
 "cells": [
  {
   "cell_type": "code",
   "execution_count": 1,
   "metadata": {},
   "outputs": [],
   "source": [
    "import nltk"
   ]
  },
  {
   "source": [
     "# Ch11 Managing Linguistic Data\n",
     "1.  Designing a new language resource: ensuring adequate coverage, balance, and documentation\n",
     "2.  Converting existing data into a format suitable for analysis\n",
     "3.  Publishing the resources so that others can easily find and use them"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
     "## 11.1 Corpus Structure: A Case Study\n",
     "The TIMIT corpus was the first annotated speech database to be widely distributed. It provides data for acquiring acoustic-phonetic knowledge and supports the development and evaluation of automatic speech recognition systems."
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
     "### 11.1.1 The Structure of TIMIT"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "0 ) dr1-fvmh0/sa1.phn\n1 ) dr1-fvmh0/sa1.txt\n2 ) dr1-fvmh0/sa1.wav\n3 ) dr1-fvmh0/sa1.wrd\n4 ) dr1-fvmh0/sa2.phn\n5 ) dr1-fvmh0/sa2.txt\n6 ) dr1-fvmh0/sa2.wav\n7 ) dr1-fvmh0/sa2.wrd\n8 ) dr1-fvmh0/si1466.phn\n9 ) dr1-fvmh0/si1466.txt\n"
     ]
    }
   ],
   "source": [
    "for i,fileid in enumerate(nltk.corpus.timit.fileids()):\n",
    "    if i<10:\n",
    "        print(i,')',fileid)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "['h#', 'sh', 'iy', 'hv', 'ae', 'dcl', 'y', 'ix', 'dcl', 'd', 'aa', 'kcl', 's', 'ux', 'tcl', 'en', 'gcl', 'g', 'r', 'iy', 's', 'iy', 'w', 'aa', 'sh', 'epi', 'w', 'aa', 'dx', 'ax', 'q', 'ao', 'l', 'y', 'ih', 'ax', 'h#']\n"
     ]
    },
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "[('she', 7812, 10610),\n",
       " ('had', 10610, 14496),\n",
       " ('your', 14496, 15791),\n",
       " ('dark', 15791, 20720),\n",
       " ('suit', 20720, 25647),\n",
       " ('in', 25647, 26906),\n",
       " ('greasy', 26906, 32668),\n",
       " ('wash', 32668, 37890),\n",
       " ('water', 38531, 42417),\n",
       " ('all', 43091, 46052),\n",
       " ('year', 46052, 50522)]"
      ]
     },
     "metadata": {},
     "execution_count": 3
    }
   ],
   "source": [
     "# TIMIT records both phones and words; phones() returns the phonetic transcription\n",
    "phonetic = nltk.corpus.timit.phones('dr1-fvmh0/sa1')\n",
    "print(phonetic)\n",
    "nltk.corpus.timit.word_times('dr1-fvmh0/sa1')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "['g', 'r', 'iy1', 's', 'iy', 'w', 'ao1', 'sh', 'w', 'ao1', 't', 'axr']\n['g', 'r', 'iy', 's', 'iy', 'w', 'aa', 'sh', 'epi', 'w', 'aa', 'dx', 'ax']\n"
     ]
    }
   ],
   "source": [
     "# TIMIT provides canonical pronunciations via transcription_dict()\n",
    "timitdict = nltk.corpus.timit.transcription_dict()\n",
    "print(timitdict['greasy'] + timitdict['wash'] + timitdict['water'])\n",
    "print(phonetic[17:30])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "SpeakerInfo(id='VMH0', sex='F', dr='1', use='TRN', recdate='03/11/86', birthdate='01/08/60', ht='5\\'05\"', race='WHT', edu='BS', comments='BEST NEW ENGLAND ACCENT SO FAR')"
      ]
     },
     "metadata": {},
     "execution_count": 5
    }
   ],
   "source": [
     "# Metadata about the speaker\n",
    "nltk.corpus.timit.spkrinfo('dr1-fvmh0')"
   ]
  },
  {
   "source": [
     "### 11.1.2 The Main Design Features of TIMIT\n",
     "The main design features of the TIMIT corpus:\n",
     "\n",
     "1.  It contains two layers of annotation: phonetic and orthographic\n",
     "2.  It balances multiple dimensions of variation, covering both dialect regions and diphones\n",
     "3.  It distinguishes the original linguistic event, captured as an audio recording, from the annotations of that event\n",
     "4.  It has a hierarchical structure\n",
     "5.  Besides speech data, it includes lexical and textual data"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
     "### 11.1.3 The Basic Data Types of TIMIT\n",
     "TIMIT contains two basic data types: lexicons and texts.\n",
     "\n",
     "-   A lexical resource is represented using a record structure: a key plus one or more fields.\n",
     "-   A lexical resource can be a conventional dictionary or a comparative wordlist, or a phrasal lexicon in which the key is a phrase rather than a single word.\n",
     "-   A lexicon also includes records of structured data, whose entries can be looked up through non-key fields corresponding to a topic.\n",
     "-   Tables of features (called paradigms) can be constructed to compare and illustrate systematic variation.\n",
     "-   A speaker table is also a kind of lexical resource.\n",
     "\n",
     "At the most abstract level, a text is a representation of a real or fictional speech event, and the time course of that event exists in the text itself.\n",
     "\n",
     "A text can be a small unit, such as a word or a sentence, or a complete narrative or dialogue."
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
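   {
    "source": [
     "The two basic data types above can be sketched directly in Python; a minimal illustration with invented entries (not drawn from TIMIT):"
    ],
    "cell_type": "markdown",
    "metadata": {}
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# A lexical resource as records: a key plus one or more fields\n",
     "lexicon = {\n",
     "    'sleep': {'pos': 'v.i.', 'gloss': 'a condition of body and mind'},\n",
     "    'walk': {'pos': 'v.i.', 'gloss': 'move along on foot'},\n",
     "    'dog': {'pos': 'n', 'gloss': 'a domesticated canine'},\n",
     "}\n",
     "# Lookup through a non-key field, e.g. all intransitive verbs\n",
     "print(sorted(w for w, f in lexicon.items() if f['pos'] == 'v.i.'))\n",
     "# A text as a time-ordered sequence of units\n",
     "text = ['she', 'had', 'your', 'dark', 'suit']\n",
     "print(' '.join(text))"
    ]
   },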
  {
   "source": [
     "## 11.2 The Life Cycle of a Corpus\n",
     "### 11.2.1 Three Schemes for Creating a Corpus\n",
     "1.  The \"field linguistics\" model, in which material from conversations is analyzed as it is collected.\n",
     "2.  The experimental research model, in which data is collected from human subjects and then analyzed to evaluate a hypothesis or develop a technology. Such databases are the foundation of the \"common task\" method of research management.\n",
     "3.  Collecting a \"reference corpus\" for a particular language."
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
     "### 11.2.2 Quality Control\n",
     "Annotation guidelines define the task and document the markup conventions."
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 6,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "0.0\n0.19047619047619047\n0.5714285714285714\n"
     ]
    }
   ],
   "source": [
     "# The kappa coefficient measures agreement between two annotators' category judgments, corrected for expected chance agreement\n",
     "# windowdiff() scores the agreement of two segmentations; 3 is the window size\n",
    "s1 = '00000010000000001000000'\n",
    "s2 = '00000001000000010000000'\n",
    "s3 = '00010000000000000001000'\n",
    "print(nltk.windowdiff(s1, s1, 3))\n",
    "print(nltk.windowdiff(s1, s2, 3))\n",
    "print(nltk.windowdiff(s2, s3, 3))"
   ]
  },
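   {
    "source": [
     "The kappa coefficient described above can be computed in a few lines. `cohen_kappa` below is an illustrative helper written for this note, not an NLTK function (NLTK exposes the same measure via `nltk.metrics.agreement.AnnotationTask`):"
    ],
    "cell_type": "markdown",
    "metadata": {}
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "from collections import Counter\n",
     "\n",
     "\n",
     "def cohen_kappa(a, b):\n",
     "    # observed agreement, corrected for the agreement expected by chance\n",
     "    n = len(a)\n",
     "    p_o = sum(x == y for x, y in zip(a, b)) / n\n",
     "    c_a, c_b = Counter(a), Counter(b)\n",
     "    p_e = sum(c_a[k] * c_b[k] for k in c_a) / n ** 2\n",
     "    return (p_o - p_e) / (1 - p_e)\n",
     "\n",
     "\n",
     "# Two annotators label the same four items\n",
     "print(cohen_kappa(['N', 'N', 'V', 'N'], ['N', 'V', 'V', 'N']))"
    ]
   },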
  {
   "cell_type": "code",
   "execution_count": 7,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "0.2222222222222222\n0.6666666666666666\n"
     ]
    }
   ],
   "source": [
    "print(nltk.windowdiff(s1, s2, 6))\n",
    "print(nltk.windowdiff(s2, s3, 6))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 8,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "0.17391304347826086\n0.17391304347826086\n"
     ]
    }
   ],
   "source": [
    "print(nltk.windowdiff(s1, s2, 1))\n",
    "print(nltk.windowdiff(s2, s3, 1))"
   ]
  },
  {
   "source": [
     "### 11.2.3 Maintenance and Evolution\n",
     "Publishing a source corpus requires a scheme that can identify any part of it:\n",
     "\n",
     "-   Every sentence, tree, or lexical entry has a globally unique identifier\n",
     "-   Every token, node, or field (respectively) has a relative address\n",
     "\n",
     "Annotations, including segmentations, can then reference the source material via these canonical identifiers.\n",
     "\n",
     "New annotations can be distributed independently of the source, and multiple independent annotations of the same source can be compared and updated without touching the source itself."
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
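   {
    "source": [
     "A minimal sketch of such an identifier scheme (the identifier `s0001`, the offsets, and the tags are all invented): the published source is never modified, and a separately distributed annotation layer points into it."
    ],
    "cell_type": "markdown",
    "metadata": {}
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "# The published source: globally unique sentence identifiers\n",
     "source = {'s0001': 'she had your dark suit'}\n",
     "# A separately distributed annotation layer references the source\n",
     "# by identifier plus character offsets, never modifying it\n",
     "pos_layer = [('s0001', 0, 3, 'PRP'), ('s0001', 4, 7, 'VBD')]\n",
     "for sent_id, start, end, tag in pos_layer:\n",
     "    print(source[sent_id][start:end], tag)"
    ]
   },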
  {
   "source": [
     "## 11.3 Acquiring Data\n",
     "### 11.3.1 Obtaining Data from the Web\n",
     "Sources include RSS feeds, search engine results, and published web pages."
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
     "### 11.3.2 Obtaining Data from Word Processor Files\n",
     "\n",
     "To extract data from a Word document, first save it as HTML, then pull the data out of the HTML.\n",
     "\n",
     "1.  The HTML saved from Word differs in format from the version shown in the book\n",
     "2.  The extracted results therefore also differ from those in the book"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "<html>\n  <body>\n    <p class=MsoNormal>sleep\n      <span style='mso-spacerun:yes'> </span>\n      [<span class=SpellE>sli:p</span>]\n      <span style='mso-spacerun:yes'> </span>\n      <b><span style='font-size:11.0pt'>v.i.</span></b>\n      <span style='mso-spacerun:yes'> </span>\n      <i>a condition of body and mind ...<o:p></o:p></i>\n    </p>\n  </body>\n</html>\n\n['v.i.']\n[]\n"
     ]
    }
   ],
   "source": [
    "import re\n",
    "\n",
    "legal_pos = {'n', 'v.t.', 'v.i.', 'adj', 'det'}\n",
    "# pattern = re.compile(r\"'font-size:11.0pt'\")\n",
    "# pattern = re.compile(r\">([a-z.]+)<\")\n",
    "pattern = re.compile(r\"'font-size:11.0pt'>([a-z.]+)<\")\n",
    "document = open('dict.htm').read()\n",
    "print(document)\n",
    "used_pos = set(re.findall(pattern, document))\n",
    "print(list(used_pos))\n",
     "# The set of illegal POS tags found in the document\n",
    "illegal_pos = used_pos.difference(legal_pos)\n",
    "print(list(illegal_pos))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Convert the HTML document to a CSV file\n",
     "# The results do not display correctly\n",
    "from bs4 import BeautifulSoup\n",
    "\n",
    "\n",
    "def lexical_data(html_file, encoding='utf-8'):\n",
    "    SEP = '_ENTRY'\n",
    "    html = open(html_file, encoding=encoding).read()\n",
    "    # print('html1:', html)\n",
    "    html = re.sub(r'<p', SEP + '<p', html)\n",
    "    # print('html2:', html)\n",
    "    text = BeautifulSoup(html, 'html.parser').get_text()\n",
    "    # print('text1:', text)\n",
    "    text = ' '.join(text.split())\n",
    "    # print('text2:', text)\n",
    "    for entry in text.split(SEP):\n",
    "        # print('entry:', entry)\n",
    "        if entry.count(' ') > 2:\n",
    "            yield entry.split(' ', 3)\n",
    "\n",
    "\n",
    "import csv\n",
    "\n",
    "dict_csv = open('dict.csv', 'w')\n",
    "writer = csv.writer(dict_csv)\n",
    "writer.writerows(lexical_data('dict.htm'))\n",
    "dict_csv.close()"
   ]
  },
  {
   "source": [
     "### 11.3.3 Obtaining Data from Spreadsheets and Databases"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "['...', 'a', 'and', 'body', 'condition', 'mind', 'of']"
      ]
     },
     "metadata": {},
     "execution_count": 12
    }
   ],
   "source": [
    "import csv\n",
     "# Note: remove any extra blank lines from dict.csv first\n",
    "dict_csv = open('dict.csv')\n",
    "lexicon = csv.reader(dict_csv)\n",
     "pairs = [(lexeme, defn) for (lexeme, _, _, defn) in lexicon]\n",
    "lexemes, defns = zip(*pairs)\n",
    "defn_words = set(w for defn in defns for w in defn.split())\n",
    "dict_csv.close()\n",
    "sorted(defn_words.difference(lexemes))"
   ]
  },
  {
   "source": [
     "### 11.3.4 Converting Data Formats"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "body: sleep\ncondition: sleep\nmind: sleep\n"
     ]
    }
   ],
   "source": [
    "idx = nltk.Index((defn_word, lexeme)\n",
    "                 for (lexeme, defn) in pairs\n",
    "                 for defn_word in nltk.word_tokenize(defn)\n",
    "                 if len(defn_word) > 3)\n",
    "with open('dict.idx', 'w') as idx_file:\n",
    "    for word in sorted(idx):\n",
    "        idx_words = ','.join(idx[word])\n",
     "        idx_line = '{}: {}'.format(word, idx_words)\n",
     "        print(idx_line)\n",
     "        idx_file.write(idx_line + '\\n')"
   ]
  },
  {
   "source": [
     "### 11.3.5 Deciding Which Layers of Annotation to Include\n",
     "Commonly provided annotation layers:\n",
     "-   Word tokenization: the written form of a text does not unambiguously identify its tokens, so a tokenized and normalized version can supplement the conventional orthographic one\n",
     "-   Sentence segmentation: because sentence segmentation is difficult, corpora provide explicit sentence-boundary annotation\n",
     "-   Paragraph segmentation: paragraphs and other structural elements (headings, chapters, etc.) are explicitly marked\n",
     "-   Part of speech: the word class of each word in the document\n",
     "-   Syntactic structure: a tree showing the constituent structure of a sentence\n",
     "-   Shallow semantics: named entity and coreference annotation, semantic role labels\n",
     "-   Dialogue and discourse: dialogue act tags, rhetorical structure\n",
     "\n",
     "Inline annotation modifies the original document by inserting special symbols or control sequences that carry the annotation.\n",
     "\n",
     "Standoff annotation leaves the original document untouched and instead creates a new document that adds the annotation, using pointers to reference the original."
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
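   {
    "source": [
     "The contrast between inline and standoff annotation can be shown in a few lines (the tags and offsets below are invented for illustration):"
    ],
    "cell_type": "markdown",
    "metadata": {}
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "text = 'the cat sat'\n",
     "# Inline annotation: the original text is modified to carry the tags\n",
     "inline = 'the/DT cat/NN sat/VBD'\n",
     "# Standoff annotation: the original is untouched; (start, end, tag)\n",
     "# tuples point back into it\n",
     "standoff = [(0, 3, 'DT'), (4, 7, 'NN'), (8, 11, 'VBD')]\n",
     "print([(text[s:e], tag) for s, e, tag in standoff])"
    ]
   },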
  {
   "source": [
     "### 11.3.6 Standards and Tools\n",
     "Common interfaces: abstract data types, object-oriented design, a three-layer architecture"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "source": [
     "### 11.3.7 Special Considerations When Working with Endangered Languages\n",
     "SIL's free software Toolbox and FieldWorks provide good support for the integrated creation of texts and lexicons"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Tag lexical items with semantic-domain labels, allowing lookup by semantic category or by gloss\n",
     "# signature() strips vowels and collapses digraphs and repeated letters\n",
    "mappings = [('ph', 'f'), ('ght', 't'), ('^kn', 'n'), ('qu', 'kw'), ('[aeiou]+', 'a'), (r'(.)\\1', r'\\1')]\n",
    "\n",
    "\n",
    "def signature(word):\n",
    "    for patt, repl in mappings:\n",
    "        word = re.sub(patt, repl, word)\n",
    "        # print(word)\n",
    "    pieces = re.findall('[^aeiou]+', word)\n",
    "    # print(pieces)\n",
    "    return ''.join(char for piece in pieces for char in sorted(piece))[:8]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "'lfnt'"
      ]
     },
     "metadata": {},
     "execution_count": 15
    }
   ],
   "source": [
    "signature('illefent')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "'bskws'"
      ]
     },
     "metadata": {},
     "execution_count": 16
    }
   ],
   "source": [
    "signature('ebsekwieous')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "'nclr'"
      ]
     },
     "metadata": {},
     "execution_count": 17
    }
   ],
   "source": [
    "signature('nuculerr')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "['anicular',\n",
       " 'inocular',\n",
       " 'nucellar',\n",
       " 'nuclear',\n",
       " 'unicolor',\n",
       " 'uniocular',\n",
       " 'unocular']"
      ]
     },
     "metadata": {},
     "execution_count": 18
    }
   ],
   "source": [
     "# Find words that share the same signature\n",
    "signatures = nltk.Index((signature(w), w) for w in nltk.corpus.words.words())\n",
    "signatures[signature('nuculerr')]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 19,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Rank candidates by edit distance and suggest spellings for a misspelled word\n",
    "def rank(word, wordlist):\n",
    "    ranked = sorted((nltk.edit_distance(word, w), w) for w in wordlist)\n",
    "    return [word for (_, word) in ranked]\n",
    "\n",
    "\n",
    "def fuzzy_spell(word):\n",
    "    sig = signature(word)\n",
    "    if sig in signatures:\n",
    "        return rank(word, signatures[sig])\n",
    "    else:\n",
    "        return []"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "['olefiant', 'elephant', 'oliphant', 'elephanta']"
      ]
     },
     "metadata": {},
     "execution_count": 20
    }
   ],
   "source": [
    "fuzzy_spell('illefent')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "['obsequious']"
      ]
     },
     "metadata": {},
     "execution_count": 21
    }
   ],
   "source": [
    "fuzzy_spell('ebsekwieous')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "['anicular',\n",
       " 'inocular',\n",
       " 'nucellar',\n",
       " 'nuclear',\n",
       " 'unocular',\n",
       " 'uniocular',\n",
       " 'unicolor']"
      ]
     },
     "metadata": {},
     "execution_count": 22
    }
   ],
   "source": [
    "fuzzy_spell('nucular')"
   ]
  },
  {
   "source": [
     "## 11.4 Working with XML\n",
     "XML (the Extensible Markup Language) provides a framework for designing domain-specific markup languages.\n",
     "\n",
     "-   It can be used to represent annotated text and lexical resources\n",
     "\n",
     "-   XML lets you define your own tags, create data without specifying its structure in advance, and include optional and repeatable elements.\n",
     "\n",
     "### 11.4.1 Using XML for Linguistic Structures\n",
     "\n",
     "In well-structured XML, every start tag must have a matching end tag at the same level of nesting (i.e., an XML document must be a well-formed tree).\n",
     "\n",
     "-   XML allows repeated elements\n",
     "-   An XML \"schema\" constrains the format of an XML file; it is a declaration similar to a context-free grammar.\n",
     "\n",
     "### 11.4.2 The Role of XML\n",
     "XML provides a convenient format with a wide range of uses"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
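   {
    "source": [
     "The points about well-formedness and repeatable elements can be illustrated with a tiny hand-built entry (the tag names echo the Toolbox fields used later, but the entry itself is invented):"
    ],
    "cell_type": "markdown",
    "metadata": {}
   },
   {
    "cell_type": "code",
    "execution_count": null,
    "metadata": {},
    "outputs": [],
    "source": [
     "from xml.etree.ElementTree import fromstring\n",
     "\n",
     "# Every start tag has a matching end tag, so the document is a\n",
     "# well-formed tree; <gloss> is a repeatable element\n",
     "entry = fromstring(\n",
     "    '<entry>'\n",
     "    '<lx>kaa</lx>'\n",
     "    '<gloss>cooking banana</gloss>'\n",
     "    '<gloss>banana bilong kukim</gloss>'\n",
     "    '</entry>')\n",
     "print([g.text for g in entry.findall('gloss')])"
    ]
   },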
  {
   "source": [
     "### 11.4.3 The ElementTree Interface"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "<?xml version=\"1.0\"?>\n<?xml-stylesheet type=\"text/css\" href=\"shakes.css\"?>\n<!-- <!DOCTYPE PLAY SYSTEM \"play.dtd\"> -->\n\n<PLAY>\n<TITLE>The Merchant of Venice</TITLE>\n"
     ]
    }
   ],
   "source": [
     "# Python's ElementTree module provides a convenient way to access data stored in XML files\n",
    "merchant_file = nltk.data.find('corpora/shakespeare/merchant.xml')\n",
    "raw = open(merchant_file).read()\n",
    "print(raw[:163])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "<TITLE>ACT I</TITLE>\n\n<SCENE><TITLE>SCENE I.  Venice. A street.</TITLE>\n<STAGEDIR>Enter ANTONIO, SALARINO, and SALANIO</STAGEDIR>\n\n<SPEECH>\n<SPEAKER>ANTONIO</SPEAKER>\n<LINE>In sooth, I know not why I am so sad:</LINE>\n"
     ]
    }
   ],
   "source": [
    "print(raw[1789:2006])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "<Element 'PLAY' at 0x000000000BFDA548>"
      ]
     },
     "metadata": {},
     "execution_count": 25
    }
   ],
   "source": [
    "from xml.etree.ElementTree import ElementTree\n",
    "\n",
    "merchant = ElementTree().parse(merchant_file)\n",
    "merchant"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "<Element 'TITLE' at 0x000000000BFDA5E8>"
      ]
     },
     "metadata": {},
     "execution_count": 26
    }
   ],
   "source": [
    "merchant[0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "'The Merchant of Venice'"
      ]
     },
     "metadata": {},
     "execution_count": 27
    }
   ],
   "source": [
    "merchant[0].text"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "[<Element 'TITLE' at 0x000000000BFDA5E8>,\n",
       " <Element 'PERSONAE' at 0x000000000BFDA4F8>,\n",
       " <Element 'SCNDESCR' at 0x000000000EE55138>,\n",
       " <Element 'PLAYSUBT' at 0x000000000EE55188>,\n",
       " <Element 'ACT' at 0x000000000EE551D8>,\n",
       " <Element 'ACT' at 0x000000000C4EA4A8>,\n",
       " <Element 'ACT' at 0x000000000C51ACC8>,\n",
       " <Element 'ACT' at 0x000000000C5472C8>,\n",
       " <Element 'ACT' at 0x000000000C56CCC8>]"
      ]
     },
     "metadata": {},
     "execution_count": 28
    }
   ],
   "source": [
     "# getchildren() was deprecated and removed in Python 3.9; use list() instead\n",
     "list(merchant)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "'ACT IV'"
      ]
     },
     "metadata": {},
     "execution_count": 29
    }
   ],
   "source": [
    "merchant[-2][0].text"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "<Element 'SCENE' at 0x000000000C547368>"
      ]
     },
     "metadata": {},
     "execution_count": 30
    }
   ],
   "source": [
    "merchant[-2][1]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "'SCENE I.  Venice. A court of justice.'"
      ]
     },
     "metadata": {},
     "execution_count": 31
    }
   ],
   "source": [
    "merchant[-2][1][0].text"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "<Element 'SPEECH' at 0x000000000C557458>"
      ]
     },
     "metadata": {},
     "execution_count": 32
    }
   ],
   "source": [
    "merchant[-2][1][54]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "<Element 'SPEAKER' at 0x000000000C5574A8>"
      ]
     },
     "metadata": {},
     "execution_count": 33
    }
   ],
   "source": [
    "merchant[-2][1][54][0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 34,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "'PORTIA'"
      ]
     },
     "metadata": {},
     "execution_count": 34
    }
   ],
   "source": [
    "merchant[-2][1][54][0].text"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 35,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "<Element 'LINE' at 0x000000000C5574F8>"
      ]
     },
     "metadata": {},
     "execution_count": 35
    }
   ],
   "source": [
    "merchant[-2][1][54][1]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 36,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "\"The quality of mercy is not strain'd,\""
      ]
     },
     "metadata": {},
     "execution_count": 36
    }
   ],
   "source": [
    "merchant[-2][1][54][1].text"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 37,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "Act 3 Scene 2 Speech 9: Let music sound while he doth make his choice;\nAct 3 Scene 2 Speech 9: Fading in music: that the comparison\nAct 3 Scene 2 Speech 9: And what is music then? Then music is\nAct 5 Scene 1 Speech 23: And bring your music forth into the air.\nAct 5 Scene 1 Speech 23: Here will we sit and let the sounds of music\nAct 5 Scene 1 Speech 23: And draw her home with music.\nAct 5 Scene 1 Speech 24: I am never merry when I hear sweet music.\nAct 5 Scene 1 Speech 25: Or any air of music touch their ears,\nAct 5 Scene 1 Speech 25: By the sweet power of music: therefore the poet\nAct 5 Scene 1 Speech 25: But music for the time doth change his nature.\nAct 5 Scene 1 Speech 25: The man that hath no music in himself,\nAct 5 Scene 1 Speech 25: Let no such man be trusted. Mark the music.\nAct 5 Scene 1 Speech 29: It is your music, madam, of the house.\nAct 5 Scene 1 Speech 32: No better a musician than the wren.\n"
     ]
    }
   ],
   "source": [
    "for i, act in enumerate(merchant.findall('ACT')):\n",
    "    for j, scene in enumerate(act.findall('SCENE')):\n",
    "        for k, speech in enumerate(scene.findall('SPEECH')):\n",
    "            for line in speech.findall('LINE'):\n",
    "                if 'music' in str(line.text):\n",
    "                    print('Act %d Scene %d Speech %d: %s' % (i + 1, j + 1, k + 1, line.text))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 38,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "[('PORTIA', 117), ('SHYLOCK', 79), ('BASSANIO', 73), ('GRATIANO', 48), ('ANTONIO', 47)]\n"
     ]
    }
   ],
   "source": [
    "from collections import Counter\n",
    "\n",
    "speaker_seq = [s.text for s in merchant.findall('ACT/SCENE/SPEECH/SPEAKER')]\n",
    "speaker_freq = Counter(speaker_seq)\n",
    "top5 = speaker_freq.most_common(5)\n",
    "print(top5)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 39,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "     ANTO BASS GRAT  OTH PORT SHYL \nANTO    0   11    4   11    9   12 \nBASS   10    0   11   10   26   16 \nGRAT    6    8    0   19    9    5 \n OTH    8   16   18  153   52   25 \nPORT    7   23   13   53    0   21 \nSHYL   15   15    2   26   21    0 \n"
     ]
    }
   ],
   "source": [
     "# With 23 speakers in total, collapse all but the top five into OTH and tabulate who speaks after whom\n",
    "from collections import defaultdict\n",
    "\n",
    "abbreviate = defaultdict(lambda: 'OTH')\n",
    "for speaker, _ in top5:\n",
    "    abbreviate[speaker] = speaker[:4]\n",
    "speaker_seq2 = [abbreviate[speaker] for speaker in speaker_seq]\n",
    "cfd = nltk.ConditionalFreqDist(nltk.bigrams(speaker_seq2))\n",
    "cfd.tabulate()"
   ]
  },
  {
   "source": [
     "### 11.4.4 Using ElementTree to Access Toolbox Data"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 40,
   "metadata": {},
   "outputs": [],
   "source": [
    "from nltk.corpus import toolbox"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 41,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "<Element 'lx' at 0x000000000EE5AD68>"
      ]
     },
     "metadata": {},
     "execution_count": 41
    }
   ],
   "source": [
     "# Two ways to access the contents of the lexicon object\n",
     "# 1) By index\n",
     "# lexicon[3] returns entry number 3 (the 4th entry, counting from 0); lexicon[3][0] returns its first field\n",
    "lexicon = toolbox.xml('rotokas.dic')\n",
    "lexicon[3][0]"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 42,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "'lx'"
      ]
     },
     "metadata": {},
     "execution_count": 42
    }
   ],
   "source": [
    "lexicon[3][0].tag"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 43,
   "metadata": {},
   "outputs": [
    {
     "output_type": "execute_result",
     "data": {
      "text/plain": [
       "'kaa'"
      ]
     },
     "metadata": {},
     "execution_count": 43
    }
   ],
   "source": [
    "lexicon[3][0].text"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 44,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "0 ) kaa\n1 ) kaa\n2 ) kaa\n3 ) kaakaaro\n4 ) kaakaaviko\n5 ) kaakaavo\n6 ) kaakaoko\n7 ) kaakasi\n8 ) kaakau\n9 ) kaakauko\n"
     ]
    }
   ],
   "source": [
     "# 2) By path\n",
     "# Find all matches of the path 'record/lx', access each element's text content, and normalize it to lowercase\n",
    "for i, text in enumerate([lexeme.text.lower() for lexeme in lexicon.findall('record/lx')]):\n",
    "    if i<10:\n",
    "        print(i,')',text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 45,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "<record>\n    <lx>kaa</lx>\n    <ps>N</ps>\n    <pt>MASC</pt>\n    <cl>isi</cl>\n    <ge>cooking banana</ge>\n    <tkp>banana bilong kukim</tkp>\n    <pt>itoo</pt>\n    <sf>FLORA</sf>\n    <dt>12/Aug/2005</dt>\n    <ex>Taeavi iria kaa isi kovopaueva kaparapasia.</ex>\n    <xp>Taeavi i bin planim gaden banana bilong kukim tasol long paia.</xp>\n    <xe>Taeavi planted banana in order to cook it.</xe>\n  </record>"
     ]
    }
   ],
   "source": [
     "# Toolbox data rendered as XML\n",
    "import sys\n",
    "from nltk.util import elementtree_indent\n",
    "from xml.etree.ElementTree import ElementTree\n",
    "\n",
    "elementtree_indent(lexicon)\n",
    "tree = ElementTree(lexicon[3])\n",
    "tree.write(sys.stdout, encoding='unicode')"
   ]
  },
  {
   "source": [
     "### 11.4.5 Formatting Entries"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 46,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "<table>\n <tr><td>kakae</td><td>???</td><td>small</td></tr>\n <tr><td>kakae</td><td>CLASS</td><td>child</td></tr>\n <tr><td>kakaevira</td><td>ADV</td><td>small-like</td></tr>\n <tr><td>kakapikoa</td><td>???</td><td>small</td></tr>\n <tr><td>kakapikoto</td><td>N</td><td>newborn baby</td></tr>\n <tr><td>kakapu</td><td>V</td><td>place in sling for purpose of carrying</td></tr>\n <tr><td>kakapua</td><td>N</td><td>sling for lifting</td></tr>\n <tr><td>kakara</td><td>N</td><td>arm band</td></tr>\n <tr><td>Kakarapaia</td><td>N</td><td>village name</td></tr>\n <tr><td>kakarau</td><td>N</td><td>frog</td></tr>\n</table>\n"
     ]
    }
   ],
   "source": [
     "# Render the entries as an HTML table\n",
    "html = \"<table>\\n\"\n",
    "for entry in lexicon[70:80]:\n",
    "    lx = entry.findtext('lx')\n",
    "    ps = entry.findtext('ps')\n",
    "    ge = entry.findtext('ge')\n",
    "    html += ' <tr><td>%s</td><td>%s</td><td>%s</td></tr>\\n' % (lx, ps, ge)\n",
    "html += '</table>'\n",
    "print(html)"
   ]
  },
  {
   "source": [
     "## 11.5 Working with Toolbox Data"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 47,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "13.635955056179775\n"
     ]
    }
   ],
   "source": [
    "from nltk.corpus import toolbox\n",
    "\n",
    "lexicon = toolbox.xml('rotokas.dic')\n",
    "print(sum(len(entry) for entry in lexicon) / len(lexicon))"
   ]
  },
  {
   "source": [
     "### 11.5.1 Adding a Field to Each Entry"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 48,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Ex11-2 Adding a new cv (consonant-vowel) field to a lexical entry\n",
    "from xml.etree.ElementTree import SubElement\n",
    "\n",
    "\n",
    "def cv(s):\n",
    "    s = s.lower()\n",
    "    s = re.sub(r'[^a-z]', r'_', s)\n",
    "    s = re.sub(r'[aeiou]', r'V', s)\n",
    "    s = re.sub(r'[^V_]', r'C', s)\n",
     "    return s\n",
    "\n",
    "\n",
    "def add_cv_field(entry):\n",
    "    for field in entry:\n",
    "        if field.tag == 'lx':\n",
    "            cv_field = SubElement(entry, 'cv')\n",
    "            cv_field.text = cv(field.text)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 49,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "\\lx kaeviro\n\\ps V\n\\pt A\n\\ge lift off\n\\ge take off\n\\tkp go antap\n\\sc MOTION\n\\vx 1\n\\nt used to describe action of plane\n\\dt 03/Jun/2005\n\\ex Pita kaeviroroe kepa kekesia oa vuripierevo kiuvu.\n\\xp Pita i go antap na lukim haus win i bagarapim.\n\\xe Peter went to look at the house that the wind destroyed.\n\\cv CVVCVCV\n\n"
     ]
    }
   ],
   "source": [
    "lexicon = toolbox.xml('rotokas.dic')\n",
    "add_cv_field(lexicon[53])\n",
    "print(nltk.toolbox.to_sfm_string(lexicon[53]))"
   ]
  },
  {
   "source": [
     "### 11.5.2 Validating a Toolbox Lexicon"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": 50,
   "metadata": {
    "tags": []
   },
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "[('lx:ps:pt:ge:tkp:dt:ex:xp:xe', 41), ('lx:rt:ps:pt:ge:tkp:dt:ex:xp:xe', 37), ('lx:rt:ps:pt:ge:tkp:dt:ex:xp:xe:ex:xp:xe', 27), ('lx:ps:pt:ge:tkp:nt:dt:ex:xp:xe', 20), ('lx:ps:pt:ge:tkp:nt:dt:ex:xp:xe:ex:xp:xe', 17)]\n"
     ]
    }
   ],
   "source": [
     "# Use Counter() to quickly find field sequences with unusual frequencies\n",
    "from collections import Counter\n",
    "\n",
    "field_sequences = Counter(':'.join(field.tag for field in entry) for entry in lexicon)\n",
    "print(field_sequences.most_common()[:5])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 51,
   "metadata": {},
   "outputs": [],
   "source": [
     "# Ex11-3 Validating Toolbox entries using a context-free grammar\n",
     "# Uses the CFG format introduced in Ch8.\n",
     "# The grammar encodes the implicit nested structure of Toolbox entries as a tree whose leaves are individual field names\n",
     "# Iterate over the entries and report their conformance with the grammar: '+' marks entries the grammar accepts, '-' entries it rejects\n",
    "grammar = nltk.CFG.fromstring('''\n",
    "S -> Head PS Glosses Comment Date Sem_Field Examples\n",
    "Head -> Lexeme Root\n",
    "Lexeme -> \"lx\"\n",
    "Root -> \"rt\" |\n",
    "PS -> \"ps\"\n",
    "Glosses -> Gloss Glosses |\n",
    "Gloss -> \"ge\" | \"tkp\" | \"eng\"\n",
    "Date -> \"dt\"\n",
    "Sem_Field -> \"sf\"\n",
    "Examples -> Example Ex_Pidgin Ex_English Examples |\n",
    "Example -> \"ex\"\n",
    "Ex_Pidgin -> \"xp\"\n",
    "Ex_English -> \"xe\"\n",
    "Comment -> \"cmt\" | \"nt\" |\n",
    "''')"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 52,
   "metadata": {},
   "outputs": [],
   "source": [
    "def validate_lexicon(grammar, lexicon, ignored_tags):\n",
    "    rd_parser = nltk.RecursiveDescentParser(grammar)\n",
    "    for entry in lexicon:\n",
    "        marker_list = [field.tag for field in entry if field.tag not in ignored_tags]\n",
    "        if list(rd_parser.parse(marker_list)):\n",
    "            print('+', ':'.join(marker_list))\n",
    "        else:\n",
    "            print('-', ':'.join(marker_list))"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 53,
   "metadata": {},
   "outputs": [
    {
     "output_type": "stream",
     "name": "stdout",
     "text": [
      "- lx:ps:ge:tkp:sf:nt:dt:ex:xp:xe:ex:xp:xe:ex:xp:xe\n- lx:rt:ps:ge:tkp:nt:dt:ex:xp:xe:ex:xp:xe\n- lx:ps:ge:tkp:nt:dt:ex:xp:xe:ex:xp:xe\n- lx:ps:ge:tkp:nt:sf:dt\n- lx:ps:ge:tkp:dt:cmt:ex:xp:xe:ex:xp:xe\n- lx:ps:ge:ge:ge:tkp:cmt:dt:ex:xp:xe\n- lx:rt:ps:ge:ge:tkp:dt\n- lx:rt:ps:ge:eng:eng:eng:ge:tkp:tkp:dt:cmt:ex:xp:xe:ex:xp:xe:ex:xp:xe:ex:xp:xe:ex:xp:xe\n- lx:rt:ps:ge:tkp:dt:ex:xp:xe\n- lx:ps:ge:ge:tkp:dt:ex:xp:xe:ex:xp:xe\n"
     ]
    }
   ],
   "source": [
    "lexicon = toolbox.xml('rotokas.dic')[10:20]\n",
    "ignored_tags = ['arg', 'dcsv', 'pt', 'vx']\n",
    "validate_lexicon(grammar, lexicon, ignored_tags)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 54,
   "metadata": {},
   "outputs": [],
   "source": [
    "# Ex11-4 Chunking a Toolbox lexicon: this chunk grammar describes the structure of lexical entries for a language of China.\n",
    "# Using a chunk parser, we can recognize partial structures and report the partial structures that have been identified.\n",
    "grammar = r\"\"\"\n",
    "lexfunc: {<lf>(<lv><ln|le>*)*}\n",
    "example: {<rf|xv><xn|xe>*}\n",
    "sense:   {<sn><ps><pn|gv|dv|gn|gp|dn|rn|ge|de|re>*<example>*<lexfunc>*}\n",
    "record:  {<lx><hm><sense>+<dt>}\n",
    "\"\"\""
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 55,
   "metadata": {},
   "outputs": [
    {
     "output_type": "error",
     "ename": "TypeError",
     "evalue": "cannot use a string pattern on a bytes-like object",
     "traceback": [
      "\u001b[1;31m---------------------------------------------------------------------------\u001b[0m",
      "\u001b[1;31mTypeError\u001b[0m                                 Traceback (most recent call last)",
      "\u001b[1;32m<ipython-input-55-c33329638847>\u001b[0m in \u001b[0;36m<module>\u001b[1;34m\u001b[0m\n\u001b[0;32m      5\u001b[0m \u001b[0mdb\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mopen\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mnltk\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mdata\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mfind\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m'corpora/toolbox/iu_mien_samp.db'\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m      6\u001b[0m \u001b[1;31m# db.parse()解析不了\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m----> 7\u001b[1;33m \u001b[0mlexicon\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mdb\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mparse\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mgrammar\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mencoding\u001b[0m\u001b[1;33m=\u001b[0m\u001b[1;34m'utf-8'\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m      8\u001b[0m \u001b[0mtree\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mElementTree\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mlexicon\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m      9\u001b[0m \u001b[1;32mwith\u001b[0m \u001b[0mopen\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m'iu_mien_samp.xml'\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;34m'wb'\u001b[0m\u001b[1;33m)\u001b[0m \u001b[1;32mas\u001b[0m \u001b[0moutput\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\envs\\DataAnalysis\\lib\\site-packages\\nltk\\toolbox.py\u001b[0m in \u001b[0;36mparse\u001b[1;34m(self, grammar, **kwargs)\u001b[0m\n\u001b[0;32m    144\u001b[0m     \u001b[1;32mdef\u001b[0m \u001b[0mparse\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mgrammar\u001b[0m\u001b[1;33m=\u001b[0m\u001b[1;32mNone\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    145\u001b[0m         \u001b[1;32mif\u001b[0m \u001b[0mgrammar\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 146\u001b[1;33m             \u001b[1;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_chunk_parse\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mgrammar\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mgrammar\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    147\u001b[0m         \u001b[1;32melse\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    148\u001b[0m             \u001b[1;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_record_parse\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\envs\\DataAnalysis\\lib\\site-packages\\nltk\\toolbox.py\u001b[0m in \u001b[0;36m_chunk_parse\u001b[1;34m(self, grammar, root_label, trace, **kwargs)\u001b[0m\n\u001b[0;32m    261\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    262\u001b[0m         \u001b[0mcp\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mchunk\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mRegexpParser\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mgrammar\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mroot_label\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mroot_label\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mtrace\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mtrace\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 263\u001b[1;33m         \u001b[0mdb\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mparse\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    264\u001b[0m         \u001b[0mtb_etree\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mElement\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"toolbox_data\"\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    265\u001b[0m         \u001b[0mheader\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mdb\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mfind\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"header\"\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\envs\\DataAnalysis\\lib\\site-packages\\nltk\\toolbox.py\u001b[0m in \u001b[0;36mparse\u001b[1;34m(self, grammar, **kwargs)\u001b[0m\n\u001b[0;32m    146\u001b[0m             \u001b[1;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_chunk_parse\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mgrammar\u001b[0m\u001b[1;33m=\u001b[0m\u001b[0mgrammar\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    147\u001b[0m         \u001b[1;32melse\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 148\u001b[1;33m             \u001b[1;32mreturn\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0m_record_parse\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    149\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    150\u001b[0m     \u001b[1;32mdef\u001b[0m \u001b[0m_record_parse\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mself\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mkey\u001b[0m\u001b[1;33m=\u001b[0m\u001b[1;32mNone\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\envs\\DataAnalysis\\lib\\site-packages\\nltk\\toolbox.py\u001b[0m in \u001b[0;36m_record_parse\u001b[1;34m(self, key, **kwargs)\u001b[0m\n\u001b[0;32m    204\u001b[0m         \u001b[0mbuilder\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mstart\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"header\"\u001b[0m\u001b[1;33m,\u001b[0m \u001b[1;33m{\u001b[0m\u001b[1;33m}\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    205\u001b[0m         \u001b[0min_records\u001b[0m \u001b[1;33m=\u001b[0m \u001b[1;32mFalse\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 206\u001b[1;33m         \u001b[1;32mfor\u001b[0m \u001b[0mmkr\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mvalue\u001b[0m \u001b[1;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mfields\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m**\u001b[0m\u001b[0mkwargs\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    207\u001b[0m             \u001b[1;32mif\u001b[0m \u001b[0mkey\u001b[0m \u001b[1;32mis\u001b[0m \u001b[1;32mNone\u001b[0m \u001b[1;32mand\u001b[0m \u001b[1;32mnot\u001b[0m \u001b[0min_records\u001b[0m \u001b[1;32mand\u001b[0m \u001b[0mmkr\u001b[0m\u001b[1;33m[\u001b[0m\u001b[1;36m0\u001b[0m\u001b[1;33m]\u001b[0m \u001b[1;33m!=\u001b[0m \u001b[1;34m\"_\"\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    208\u001b[0m                 \u001b[0mkey\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mmkr\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\envs\\DataAnalysis\\lib\\site-packages\\nltk\\toolbox.py\u001b[0m in \u001b[0;36mfields\u001b[1;34m(self, strip, unwrap, encoding, errors, unicode_fields)\u001b[0m\n\u001b[0;32m    125\u001b[0m             \u001b[1;32mraise\u001b[0m \u001b[0mValueError\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\"unicode_fields is set but not encoding.\"\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    126\u001b[0m         \u001b[0munwrap_pat\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mre\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mcompile\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34mr\"\\n+\"\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m--> 127\u001b[1;33m         \u001b[1;32mfor\u001b[0m \u001b[0mmkr\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mval\u001b[0m \u001b[1;32min\u001b[0m \u001b[0mself\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mraw_fields\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    128\u001b[0m             \u001b[1;32mif\u001b[0m \u001b[0munwrap\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    129\u001b[0m                 \u001b[0mval\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0munwrap_pat\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0msub\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;34m\" \"\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mval\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\envs\\DataAnalysis\\lib\\site-packages\\nltk\\toolbox.py\u001b[0m in \u001b[0;36mraw_fields\u001b[1;34m(self)\u001b[0m\n\u001b[0;32m     74\u001b[0m             \u001b[1;31m# no more data is available, terminate the generator\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     75\u001b[0m             \u001b[1;32mreturn\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[1;32m---> 76\u001b[1;33m         \u001b[0mmobj\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mre\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mmatch\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mfirst_line_pat\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mline\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m     77\u001b[0m         \u001b[0mmkr\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mline_value\u001b[0m \u001b[1;33m=\u001b[0m \u001b[0mmobj\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mgroups\u001b[0m\u001b[1;33m(\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m     78\u001b[0m         \u001b[0mvalue_lines\u001b[0m \u001b[1;33m=\u001b[0m \u001b[1;33m[\u001b[0m\u001b[0mline_value\u001b[0m\u001b[1;33m]\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;32mC:\\ProgramData\\Anaconda3\\envs\\DataAnalysis\\lib\\re.py\u001b[0m in \u001b[0;36mmatch\u001b[1;34m(pattern, string, flags)\u001b[0m\n\u001b[0;32m    170\u001b[0m     \"\"\"Try to apply the pattern at the start of the string, returning\n\u001b[0;32m    171\u001b[0m     a match object, or None if no match was found.\"\"\"\n\u001b[1;32m--> 172\u001b[1;33m     \u001b[1;32mreturn\u001b[0m \u001b[0m_compile\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mpattern\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mflags\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m.\u001b[0m\u001b[0mmatch\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mstring\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0m\u001b[0;32m    173\u001b[0m \u001b[1;33m\u001b[0m\u001b[0m\n\u001b[0;32m    174\u001b[0m \u001b[1;32mdef\u001b[0m \u001b[0mfullmatch\u001b[0m\u001b[1;33m(\u001b[0m\u001b[0mpattern\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mstring\u001b[0m\u001b[1;33m,\u001b[0m \u001b[0mflags\u001b[0m\u001b[1;33m=\u001b[0m\u001b[1;36m0\u001b[0m\u001b[1;33m)\u001b[0m\u001b[1;33m:\u001b[0m\u001b[1;33m\u001b[0m\u001b[1;33m\u001b[0m\u001b[0m\n",
      "\u001b[1;31mTypeError\u001b[0m: cannot use a string pattern on a bytes-like object"
     ]
    }
   ],
   "source": [
    "from xml.etree.ElementTree import ElementTree\n",
    "from nltk.toolbox import ToolboxData\n",
    "\n",
    "db = ToolboxData()\n",
    "db.open(nltk.data.find('corpora/toolbox/iu_mien_samp.db'))\n",
    "# db.parse() fails here: in this NLTK version it raises TypeError ('cannot use a string pattern on a bytes-like object'), as shown in the traceback below\n",
    "lexicon = db.parse(grammar, encoding='utf-8')\n",
    "tree = ElementTree(lexicon)\n",
    "with open('iu_mien_samp.xml', 'wb') as output:\n",
    "    tree.write(output)"
   ]
  },
  {
   "source": [
    "## 11.6 Describing Language Resources with OLAC Metadata\n",
    "To help members of the NLP community discover language resources with high precision and recall, the method that has been adopted is metadata aggregation.\n",
    "\n",
    "### 11.6.1 What Is Metadata?\n",
    "\"Metadata\" is structured data about data: descriptive information about an object or resource.\n",
    "\n",
    "The Dublin Core Metadata set consists of 15 metadata elements, each of which is optional and repeatable:\n",
    "\n",
    "Title, Creator, Subject, Description, Publisher, Contributor, Date, Type, Format, Identifier, Source, Language, Relation, Coverage, and Rights.\n",
    "\n",
    "The Open Archives Initiative (OAI) provides a common framework across digital scholarly repositories, regardless of resource type.\n",
    "\n",
    "### 11.6.2 The Open Language Archives Community (OLAC)\n",
    "The Open Language Archives Community is an international partnership of institutions and individuals who are creating a worldwide virtual library of language resources by:\n",
    "\n",
    "1.  developing consensus on current best practice for the digital archiving of language resources\n",
    "2.  developing a network of interoperating repositories and services for housing and accessing such resources\n",
    "\n",
    "OLAC metadata is a standard for describing language resources. It ensures uniformity of description across repositories, covers data and tools in both physical and digital formats, and adds properties specific to language resources.\n",
    "\n",
    "### 11.6.3 Disseminating Language Resources\n",
    "Community members can upload corpora and models for publication.\n",
    "\n",
    "## 11.7 Summary\n",
    "-   The fundamental data types in a corpus are annotated texts and lexicons.\n",
    "    -   Texts have a temporal structure\n",
    "    -   Lexicons have a record structure\n",
    "-   The lifecycle of a corpus includes data collection, annotation, quality control, and publication.\n",
    "-   Corpus development involves balancing the capture of a representative sample of language use against having sufficient material from any single source or genre; multiplying the dimensions of variability is usually infeasible because of resource limitations.\n",
    "-   XML provides a useful format for the storage and interchange of linguistic data.\n",
    "-   The Toolbox format is widely used in language documentation projects; we can write programs to support the curation of Toolbox files and to convert them to XML.\n",
    "-   The Open Language Archives Community (OLAC) provides an infrastructure for documenting and discovering language resources."
   ],
   "cell_type": "markdown",
   "metadata": {}
  }
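  ,
  {
   "source": [
    "As a minimal sketch of what a Dublin Core description might look like in practice (the element values here are invented for illustration), we can build a small metadata record with ElementTree, the same library used above for the Toolbox data:"
   ],
   "cell_type": "markdown",
   "metadata": {}
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# A minimal sketch of a Dublin Core record built with ElementTree.\n",
    "# The element names come from the 15-element Dublin Core set; the\n",
    "# values (title, creator, etc.) are hypothetical examples.\n",
    "from xml.etree.ElementTree import Element, SubElement, tostring\n",
    "\n",
    "record = Element('dc')\n",
    "for name, value in [('title', 'Rotokas Dictionary'),\n",
    "                    ('creator', 'A. Linguist'),\n",
    "                    ('language', 'roo'),\n",
    "                    ('type', 'lexicon')]:\n",
    "    field = SubElement(record, name)\n",
    "    field.text = value\n",
    "print(tostring(record).decode())"
   ]
  }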
 ]
}