{
 "cells": [
  {
   "cell_type": "markdown",
   "id": "543f36bd",
   "metadata": {},
   "source": [
    "# 9. Building Feature-Based Grammars\n",
    "\n",
    "Natural languages have an extensive range of grammatical constructions which are hard to handle with the simple methods described in [8.](https://usyiyi.github.io/nlp-py-2e-zh/8.html#chap-parse). In order to gain more flexibility, we change our treatment of grammatical categories like `S`, `NP` and `V`. In place of atomic labels, we decompose them into dictionary-like structures in which features can take on a range of values.\n",
    "\n",
    "The goal of this chapter is to answer the following questions:\n",
    "\n",
    "1. How can we extend the framework of context-free grammars with features, so as to gain more fine-grained control over grammatical categories and productions?\n",
    "2. What are the main formal properties of feature structures, and how do we use them computationally?\n",
    "3. What kinds of linguistic patterns and grammatical constructions can we now capture with feature-based grammars?\n",
    "\n",
    "Along the way, we will cover more topics in English syntax, including phenomena such as agreement, subcategorization, and unbounded dependency constructions.\n",
    "\n",
    "<a href=\"#1-Grammatical-Features\">1 Grammatical Features</a>\n",
    "\n",
    "<a href=\"#2-Processing-Feature-Structures\">2 Processing Feature Structures</a>\n",
    "\n",
    "<a href=\"#3-Extending-a-Feature-Based-Grammar\">3 Extending a Feature-Based Grammar</a>\n",
    "\n",
    "<a href=\"#4-Summary\">4 Summary</a>\n",
    "\n",
    "<a href=\"#5-Further-Reading\">5 Further Reading</a>\n",
    "\n",
    "<a href=\"#6-Exercises\">6 Exercises</a>\n",
    "\n",
    "## 1 Grammatical Features\n",
    "\n",
    "In [chap-data-intensive](https://usyiyi.github.io/nlp-py-2e-zh/6.html#chap-data-intensive), we described how to build classifiers that rely on detecting features of text. Those features may be quite simple, such as extracting the last letter of a word, or more complex, such as a part-of-speech tag that has itself been predicted by the classifier. In this chapter, we will investigate the role of features in building rule-based grammars. In contrast to feature extractors, which record features that have been automatically detected, we are now going to *declare* the features of words and phrases. We start off with a very simple example, using dictionaries to store features and their values."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 2,
   "id": "bf727544",
   "metadata": {},
   "outputs": [],
   "source": [
    "kim = {'CAT': 'NP', 'ORTH': 'Kim', 'REF': 'k'}\n",
    "chase = {'CAT': 'V', 'ORTH': 'chased', 'REL': 'chase'}"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "a3254990",
   "metadata": {},
   "source": [
    "The objects `kim` and `chase` have a couple of features in common: `CAT` (grammatical category) and `ORTH` (orthography, i.e., spelling). In addition, each has a more semantically-oriented feature: `kim['REF']` is intended to give the referent of `kim`, while `chase['REL']` gives the relation expressed by `chase`. In the context of rule-based grammars, such pairings of features and values are known as feature structures, and we will shortly see alternative notations for them.\n",
    "\n",
    "Feature structures contain various kinds of information about grammatical entities. The information need not be exhaustive, and we might want to add further properties. In the case of a verb, for example, it is often useful to know what \"semantic role\" is played by each of the verb's arguments. For chase, the subject plays the role of \"agent\", while the object has the role of \"patient\". Let's add this information, using `'sbj'` and `'obj'` as placeholders which will get filled once the verb combines with its grammatical arguments:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "ff81a3b9",
   "metadata": {},
   "outputs": [],
   "source": [
    "chase['AGT'] = 'sbj'\n",
    "chase['PAT'] = 'obj'"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c3badea5",
   "metadata": {},
   "source": [
    "If we now process the sentence Kim chased Lee, we want to \"bind\" the verb's agent role to the subject and its patient role to the object. We do this by linking to the `REF` feature of the relevant `NP`. In the following example, we make the simple-minded assumption that the `NP`s immediately to the left and right of the verb are the subject and object respectively. We also add a feature structure for Lee at the end of the example."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 4,
   "id": "60430891",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "ORTH  => chased\n",
      "REL   => chase\n",
      "AGT   => k\n",
      "PAT   => l\n"
     ]
    }
   ],
   "source": [
    "sent = \"Kim chased Lee\"\n",
    "tokens = sent.split()\n",
    "lee = {'CAT': 'NP', 'ORTH': 'Lee', 'REF': 'l'}\n",
    "def lex2fs(word):\n",
    "    for fs in [kim, lee, chase]:\n",
    "        if fs['ORTH'] == word:\n",
    "            return fs\n",
    "subj, verb, obj = lex2fs(tokens[0]), lex2fs(tokens[1]), lex2fs(tokens[2])\n",
    "verb['AGT'] = subj['REF']\n",
    "verb['PAT'] = obj['REF']\n",
    "for k in ['ORTH', 'REL', 'AGT', 'PAT']:\n",
    "    print(\"%-5s => %s\" % (k, verb[k]))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "ab771927",
   "metadata": {},
   "source": [
    "The same approach could be adopted for a different verb, say surprise, though in this case the subject would play the role of \"source\" (`SRC`) and the object the role of \"experiencer\" (`EXP`):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 5,
   "id": "cea089b4",
   "metadata": {},
   "outputs": [],
   "source": [
    "surprise = {'CAT': 'V', 'ORTH': 'surprised', 'REL': 'surprise','SRC': 'sbj', 'EXP': 'obj'}"
   ]
  },
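  {
   "cell_type": "markdown",
   "id": "1a2b3c4e",
   "metadata": {},
   "source": [
    "The binding step used above for chase carries over unchanged, since it only relies on the `'sbj'` and `'obj'` placeholder values. As a minimal sketch (the helper name `bind_roles` is our own, not part of NLTK):\n",
    "\n",
    "```python\n",
    "def bind_roles(subj, verb, obj):\n",
    "    # Replace each 'sbj'/'obj' placeholder in the verb's feature\n",
    "    # structure with the REF of the corresponding NP.\n",
    "    for feat, val in verb.items():\n",
    "        if val == 'sbj':\n",
    "            verb[feat] = subj['REF']\n",
    "        elif val == 'obj':\n",
    "            verb[feat] = obj['REF']\n",
    "    return verb\n",
    "\n",
    "kim = {'CAT': 'NP', 'ORTH': 'Kim', 'REF': 'k'}\n",
    "lee = {'CAT': 'NP', 'ORTH': 'Lee', 'REF': 'l'}\n",
    "surprise = {'CAT': 'V', 'ORTH': 'surprised', 'REL': 'surprise',\n",
    "            'SRC': 'sbj', 'EXP': 'obj'}\n",
    "\n",
    "bind_roles(kim, surprise, lee)\n",
    "print(surprise['SRC'], surprise['EXP'])  # k l\n",
    "```\n",
    "\n",
    "Because the roles are looked up by their placeholder values rather than by fixed feature names, the same helper works for chase (`AGT`/`PAT`) and surprise (`SRC`/`EXP`) alike."
   ]
  },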
  {
   "cell_type": "markdown",
   "id": "b637d57f",
   "metadata": {},
   "source": [
    "Feature structures are pretty powerful, but the way in which we have manipulated them so far is extremely *ad hoc*. Our next task in this chapter is to show how the framework of context-free grammar and parsing can be expanded to accommodate feature structures, so that we can build analyses like this in a more generic and principled way. We will start off by looking at the phenomenon of syntactic agreement; we will show how agreement constraints can be expressed elegantly using features, and illustrate their use in a simple grammar.\n",
    "\n",
    "Since feature structures are a general data structure for representing information of any kind, we will briefly look at them from a more formal point of view, and illustrate the support for feature structures offered by NLTK. In the final part of the chapter, we demonstrate that the additional expressiveness of features opens up a wide spectrum of possibilities for describing sophisticated aspects of linguistic structure.\n",
    "\n",
    "<a href=\"#1.1-Syntactic-Agreement\">1.1 Syntactic Agreement</a>\n",
    "\n",
    "<a href=\"#1.2-Using-Attributes-and-Constraints\">1.2 Using Attributes and Constraints</a>\n",
    "\n",
    "<a href=\"#1.3-Terminology\">1.3 Terminology</a>\n",
    "\n",
    "## 1.1 Syntactic Agreement\n",
    "\n",
    "The following examples show pairs of word sequences in which the first is grammatical and the second is not; we use an asterisk at the start of a word sequence to indicate that it is ungrammatical (e.g., this dog runs vs. \\*this dogs run). Consider the following simple CFG; once it is extended with plural forms such as these, dogs and run, it will also generate ungrammatical sequences like \\*this dogs run:\n",
    "\n",
    "> S   ->   NP VP\n",
    "> \n",
    "> NP  ->   Det N\n",
    "> \n",
    "> VP  ->   V\n",
    "> \n",
    "> Det  ->  'this'\n",
    "> \n",
    "> N    ->  'dog'\n",
    "> \n",
    "> V    ->  'runs'\n",
    "\n",
    "## 1.2 Using Attributes and Constraints\n",
    "\n",
    "We spoke informally of linguistic categories having *properties*; for example, that a noun has the property of being plural. Let's make this explicit:\n",
    "\n",
    "> N[NUM=pl]\n",
    "\n",
    "Note that a syntactic category can have more than one feature, e.g. `V[TENSE=pres, NUM=pl]`. In general, we can add as many features as we like.\n",
    "\n",
    "A final detail about [1.1](https://usyiyi.github.io/nlp-py-2e-zh/9.html#code-feat0cfg) is the statement `%start S`. This \"directive\" tells the parser to take `S` as the start symbol for the grammar.\n",
    "\n",
    "In general, even when we are trying to develop grammars of modest size, it is convenient to keep the productions in a file where they can be edited, tested and revised. We have saved [1.1](https://usyiyi.github.io/nlp-py-2e-zh/9.html#code-feat0cfg) in NLTK's data format as the file `'feat0.fcfg'`. You can make your own copy of it for further experimentation using `nltk.data.load()`.\n",
    "\n",
    "[1.2](https://usyiyi.github.io/nlp-py-2e-zh/9.html#code-featurecharttrace) illustrates the operation of a chart parser with a feature-based grammar. After tokenizing the input, we import the `load_parser` function [# 1](https://usyiyi.github.io/nlp-py-2e-zh/9.html#load_parser1), which takes a grammar filename as input and returns a chart parser `cp` [# 2](https://usyiyi.github.io/nlp-py-2e-zh/9.html#load_parser2). Calling the parser's `parse()` method will iterate over the resulting parse trees; the result will be empty if the grammar fails to parse the input, and otherwise will contain one or more parse trees, depending on whether or not the input is syntactically ambiguous."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 1,
   "id": "45cd2370",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "|.Kim .like.chil.|\n",
      "Leaf Init Rule:\n",
      "|[----]    .    .| [0:1] 'Kim'\n",
      "|.    [----]    .| [1:2] 'likes'\n",
      "|.    .    [----]| [2:3] 'children'\n",
      "Feature Bottom Up Predict Combine Rule:\n",
      "|[----]    .    .| [0:1] PropN[NUM='sg'] -> 'Kim' *\n",
      "Feature Bottom Up Predict Combine Rule:\n",
      "|[----]    .    .| [0:1] NP[NUM='sg'] -> PropN[NUM='sg'] *\n",
      "Feature Bottom Up Predict Combine Rule:\n",
      "|[---->    .    .| [0:1] S[] -> NP[NUM=?n] * VP[NUM=?n] {?n: 'sg'}\n",
      "Feature Bottom Up Predict Combine Rule:\n",
      "|.    [----]    .| [1:2] TV[NUM='sg', TENSE='pres'] -> 'likes' *\n",
      "Feature Bottom Up Predict Combine Rule:\n",
      "|.    [---->    .| [1:2] VP[NUM=?n, TENSE=?t] -> TV[NUM=?n, TENSE=?t] * NP[] {?n: 'sg', ?t: 'pres'}\n",
      "Feature Bottom Up Predict Combine Rule:\n",
      "|.    .    [----]| [2:3] N[NUM='pl'] -> 'children' *\n",
      "Feature Bottom Up Predict Combine Rule:\n",
      "|.    .    [----]| [2:3] NP[NUM='pl'] -> N[NUM='pl'] *\n",
      "Feature Bottom Up Predict Combine Rule:\n",
      "|.    .    [---->| [2:3] S[] -> NP[NUM=?n] * VP[NUM=?n] {?n: 'pl'}\n",
      "Feature Single Edge Fundamental Rule:\n",
      "|.    [---------]| [1:3] VP[NUM='sg', TENSE='pres'] -> TV[NUM='sg', TENSE='pres'] NP[] *\n",
      "Feature Single Edge Fundamental Rule:\n",
      "|[==============]| [0:3] S[] -> NP[NUM='sg'] VP[NUM='sg'] *\n",
      "(S[]\n",
      "  (NP[NUM='sg'] (PropN[NUM='sg'] Kim))\n",
      "  (VP[NUM='sg', TENSE='pres']\n",
      "    (TV[NUM='sg', TENSE='pres'] likes)\n",
      "    (NP[NUM='pl'] (N[NUM='pl'] children))))\n"
     ]
    }
   ],
   "source": [
    "tokens = 'Kim likes children'.split()\n",
    "from nltk import load_parser # 1\n",
    "cp = load_parser('grammars/book_grammars/feat0.fcfg', trace=2)  # 2\n",
    "for tree in cp.parse(tokens):\n",
    "    print(tree)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "31eb70ae",
   "metadata": {},
   "source": [
    "The details of the parsing procedure are not that important for present purposes. However, there is an implementation issue which bears on our earlier discussion of grammar size. One possible approach to parsing productions containing feature constraints is to compile out all admissible values of the features in question, so that we end up with a large, fully specified CFG along the lines of [(6)](https://usyiyi.github.io/nlp-py-2e-zh/9.html#ex-agcfg1). By contrast, the parser illustrated above works directly with the underspecified productions given by the grammar. Feature values \"flow upwards\" from lexical entries, and variable values are associated with those values via bindings (i.e., dictionaries) such as `{?n: 'sg', ?t: 'pres'}`. As the parser assembles information about the nodes of the tree it is building, these variable bindings are used to instantiate values in those nodes; thus the underspecified `VP[NUM=?n, TENSE=?t] -> TV[NUM=?n, TENSE=?t] NP[]` is instantiated as `VP[NUM='sg', TENSE='pres'] -> TV[NUM='sg', TENSE='pres'] NP[]` by looking up the values of `?n` and `?t` in the bindings.\n",
    "\n",
    "Finally, we can inspect the resulting parse trees (in this case, a single one)."
   ]
  },
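  {
   "cell_type": "markdown",
   "id": "2b3c4d5e",
   "metadata": {},
   "source": [
    "The \"compiling out\" alternative mentioned above can be made concrete in plain Python: expanding an underspecified production template over every admissible combination of feature values shows how a fully specified grammar grows multiplicatively. (The `expand` helper and the template strings are our own illustration, not NLTK functionality.)\n",
    "\n",
    "```python\n",
    "from itertools import product\n",
    "\n",
    "def expand(template, **feature_values):\n",
    "    # Compile out an underspecified production: substitute every\n",
    "    # admissible combination of feature values into the template.\n",
    "    names = sorted(feature_values)\n",
    "    rules = []\n",
    "    for combo in product(*(feature_values[n] for n in names)):\n",
    "        rules.append(template.format(**dict(zip(names, combo))))\n",
    "    return rules\n",
    "\n",
    "# Underspecified: S -> NP[NUM=?n] VP[NUM=?n]\n",
    "rules = expand(\"S -> NP[NUM={n}] VP[NUM={n}]\", n=['sg', 'pl'])\n",
    "for r in rules:\n",
    "    print(r)\n",
    "\n",
    "# With TENSE as well, the fully specified grammar grows multiplicatively:\n",
    "more = expand(\"VP[NUM={n}, TENSE={t}] -> TV[NUM={n}, TENSE={t}] NP\",\n",
    "              n=['sg', 'pl'], t=['pres', 'past'])\n",
    "print(len(more))  # 4\n",
    "```\n",
    "\n",
    "Every additional feature multiplies the number of compiled-out productions, which is exactly why the chart parser's strategy of working with underspecified productions and variable bindings is attractive."
   ]
  },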
  {
   "cell_type": "code",
   "execution_count": 3,
   "id": "7cc7e07c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(S[]\n",
      "  (NP[NUM='sg'] (PropN[NUM='sg'] Kim))\n",
      "  (VP[NUM='sg', TENSE='pres']\n",
      "    (TV[NUM='sg', TENSE='pres'] likes)\n",
      "    (NP[NUM='pl'] (N[NUM='pl'] children))))\n"
     ]
    }
   ],
   "source": [
    "# for tree in trees: print(tree)  # translator's note: judging from the output shown, this original line is in error\n",
    "print(tree)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5dad28a0",
   "metadata": {},
   "source": [
    "## 1.3 Terminology\n",
    "\n",
    "So far, we have only seen feature values like `sg` and `pl`. These simple values are usually called atomic, that is, they cannot be decomposed into subparts. A special case of atomic values are boolean values, i.e., values that just specify whether a property is true or false. For example, we might want to distinguish auxiliary verbs such as can, may, will and do with the boolean feature `AUX`. Then the production `V[TENSE=pres, AUX=+] -> 'can'` means that can is a present tense verb with the value `+` for the feature `AUX`. There is a widely adopted convention which abbreviates the representation of a boolean feature `f`: instead of `AUX=+` or `AUX=-`, we write `+AUX` and `-AUX` respectively. These are just abbreviations, however, and the parser interprets `+` and `-` like any other atomic value. [(15)](https://usyiyi.github.io/nlp-py-2e-zh/9.html#ex-lex) shows some representative productions:\n",
    "\n",
    "> V[TENSE=pres, +AUX] -> 'can'\n",
    "> \n",
    "> V[TENSE=pres, +AUX] -> 'may'\n",
    "> \n",
    "> V[TENSE=pres, -AUX] -> 'walks'\n",
    "> \n",
    "> V[TENSE=pres, -AUX] -> 'likes'\n",
    "\n",
    "In passing, we should point out that there are alternative approaches for displaying AVMs; [1.3](https://usyiyi.github.io/nlp-py-2e-zh/9.html#fig-avm1) shows an example. Although feature structures rendered in the style of [(16)](https://usyiyi.github.io/nlp-py-2e-zh/9.html#ex-agr0) are less visually pleasing, we will stick with this format, since it corresponds to the output we will be getting from NLTK.\n",
    "\n",
    "On the topic of representation, we also note that feature structures, like dictionaries, assign no particular significance to the *order* of features. So [(16)](https://usyiyi.github.io/nlp-py-2e-zh/9.html#ex-agr0) is equivalent to:\n",
    "\n",
    "> [AGR = [NUM = pl  ]]  \n",
    "> [      [PER = 3   ]]  \n",
    "> [      [GND = fem ]]  \n",
    "> [                  ]  \n",
    "> [POS = N           ]\n",
    "\n",
    "## 2 Processing Feature Structures\n",
    "\n",
    "In this section, we will show how feature structures can be constructed and manipulated in NLTK. We will also discuss the fundamental operation of unification, which allows us to combine the information contained in two different feature structures.\n",
    "\n",
    "Feature structures in NLTK are declared with the `FeatStruct()` constructor. Atomic feature values can be strings or integers."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 9,
   "id": "f4651f29",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[ NUM   = 'sg'   ]\n",
      "[ TENSE = 'past' ]\n"
     ]
    }
   ],
   "source": [
    "import nltk\n",
    "fs1 = nltk.FeatStruct(TENSE='past', NUM='sg')\n",
    "print(fs1)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "0ea2f444",
   "metadata": {},
   "source": [
    "A feature structure is actually just a kind of dictionary, so we access its values by indexing in the usual way. We can also *assign* values to features with the familiar syntax:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 10,
   "id": "d91ea10c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "fem\n"
     ]
    }
   ],
   "source": [
    "fs1 = nltk.FeatStruct(PER=3, NUM='pl', GND='fem')\n",
    "print(fs1['GND'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 11,
   "id": "3b771287",
   "metadata": {},
   "outputs": [],
   "source": [
    "fs1['CASE'] = 'acc'"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "055e2fd3",
   "metadata": {},
   "source": [
    "We can also define feature structures that have complex values, as discussed earlier."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 12,
   "id": "e25cbd0e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[       [ CASE = 'acc' ] ]\n",
      "[ AGR = [ GND  = 'fem' ] ]\n",
      "[       [ NUM  = 'pl'  ] ]\n",
      "[       [ PER  = 3     ] ]\n",
      "[                        ]\n",
      "[ POS = 'N'              ]\n"
     ]
    }
   ],
   "source": [
    "fs2 = nltk.FeatStruct(POS='N', AGR=fs1)\n",
    "print(fs2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 13,
   "id": "facec581",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[ CASE = 'acc' ]\n",
      "[ GND  = 'fem' ]\n",
      "[ NUM  = 'pl'  ]\n",
      "[ PER  = 3     ]\n"
     ]
    }
   ],
   "source": [
    "print(fs2['AGR'])"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 14,
   "id": "e3bb7263",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "3\n"
     ]
    }
   ],
   "source": [
    "print(fs2['AGR']['PER'])"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "df078c3b",
   "metadata": {},
   "source": [
    "An alternative method of specifying feature structures is to use a bracketed string consisting of feature-value pairs in the format `feature=value`, where values may themselves be feature structures:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 15,
   "id": "ebd68d9e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[       [ GND = 'fem' ] ]\n",
      "[ AGR = [ NUM = 'pl'  ] ]\n",
      "[       [ PER = 3     ] ]\n",
      "[                       ]\n",
      "[ POS = 'N'             ]\n"
     ]
    }
   ],
   "source": [
    "print(nltk.FeatStruct(\"[POS='N', AGR=[PER=3, NUM='pl', GND='fem']]\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "609dd6cf",
   "metadata": {},
   "source": [
    "Feature structures are not inherently tied to linguistic objects; they are general purpose structures for representing knowledge. For example, we could encode information about a person in a feature structure:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 16,
   "id": "b2fd04ec",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[ AGE   = 33               ]\n",
      "[ NAME  = 'Lee'            ]\n",
      "[ TELNO = '01 27 86 42 96' ]\n"
     ]
    }
   ],
   "source": [
    "print(nltk.FeatStruct(NAME='Lee', TELNO='01 27 86 42 96', AGE=33))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c6bb13d2",
   "metadata": {},
   "source": [
    "In the next couple of pages, we are going to use examples like this to explore standard operations over feature structures. This will briefly divert us from natural language processing, since before we can turn back to grammars, we need to lay a firm foundation. Hang on tight!\n",
    "\n",
    "It is often helpful to view feature structures as graphs; more specifically, as directed acyclic graphs (DAGs). [(19)](https://usyiyi.github.io/nlp-py-2e-zh/9.html#ex-dag01) is equivalent to the AVM above."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 17,
   "id": "0ad1984f",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[ ADDRESS = (1) [ NUMBER = 74           ] ]\n",
      "[               [ STREET = 'rue Pascal' ] ]\n",
      "[                                         ]\n",
      "[ NAME    = 'Lee'                         ]\n",
      "[                                         ]\n",
      "[ SPOUSE  = [ ADDRESS -> (1)  ]           ]\n",
      "[           [ NAME    = 'Kim' ]           ]\n"
     ]
    }
   ],
   "source": [
    "print(nltk.FeatStruct(\"\"\"[NAME='Lee', ADDRESS=(1)[NUMBER=74, STREET='rue Pascal'],SPOUSE=[NAME='Kim', ADDRESS->(1)]]\"\"\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "01c5c6c5",
   "metadata": {},
   "source": [
    "The bracketed integer is sometimes called a tag or a coindex. The choice of integer is not significant. There can be any number of tags within a single feature structure."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 18,
   "id": "f766b02c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[ A = 'a'             ]\n",
      "[                     ]\n",
      "[ B = (1) [ C = 'c' ] ]\n",
      "[                     ]\n",
      "[ D -> (1)            ]\n",
      "[ E -> (1)            ]\n"
     ]
    }
   ],
   "source": [
    "print(nltk.FeatStruct(\"[A='a', B=(1)[C='c'], D->(1), E->(1)]\"))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d98401e3",
   "metadata": {},
   "source": [
    "## 2.1 Subsumption and Unification\n",
    "\n",
    "It is standard to think of feature structures as providing partial information about some object, in the sense that we can order feature structures according to how much information they contain. For example, [(23a)](https://usyiyi.github.io/nlp-py-2e-zh/9.html#ex-fs01) has fewer features than [(23b)](https://usyiyi.github.io/nlp-py-2e-zh/9.html#ex-fs02), which in turn has fewer features than [(23c)](https://usyiyi.github.io/nlp-py-2e-zh/9.html#ex-fs03). This ordering is called subsumption: a more general feature structure subsumes a less general one."
   ]
  },
  {
   "cell_type": "markdown",
   "id": "143eb692",
   "metadata": {},
   "source": [
    "> [NUMBER = 74]\n",
    "> \n",
    "> [NUMBER = 74          ]  \n",
    "> [STREET = 'rue Pascal']\n",
    "> \n",
    "> [NUMBER = 74          ]  \n",
    "> [STREET = 'rue Pascal']  \n",
    "> [CITY = 'Paris'       ]"
   ]
  },
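  {
   "cell_type": "markdown",
   "id": "3c4d5e6f",
   "metadata": {},
   "source": [
    "This subsumption ordering is easy to sketch for the dictionary-style feature structures we started the chapter with. (The recursive `subsumes` helper below is our own illustrative code; NLTK's `FeatStruct` objects offer an analogous `subsumes()` method.)\n",
    "\n",
    "```python\n",
    "def subsumes(general, specific):\n",
    "    # True if every feature in `general` also appears in `specific`\n",
    "    # with a compatible (equal, or recursively subsumed) value.\n",
    "    for feat, val in general.items():\n",
    "        if feat not in specific:\n",
    "            return False\n",
    "        other = specific[feat]\n",
    "        if isinstance(val, dict) and isinstance(other, dict):\n",
    "            if not subsumes(val, other):\n",
    "                return False\n",
    "        elif val != other:\n",
    "            return False\n",
    "    return True\n",
    "\n",
    "fs_a = {'NUMBER': 74}\n",
    "fs_b = {'NUMBER': 74, 'STREET': 'rue Pascal'}\n",
    "fs_c = {'NUMBER': 74, 'STREET': 'rue Pascal', 'CITY': 'Paris'}\n",
    "\n",
    "print(subsumes(fs_a, fs_b))  # True\n",
    "print(subsumes(fs_b, fs_c))  # True\n",
    "print(subsumes(fs_c, fs_a))  # False\n",
    "```\n",
    "\n",
    "Each structure in the chain contains all the information of the one before it, so subsumption holds downwards but not upwards."
   ]
  },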
  {
   "cell_type": "markdown",
   "id": "6ca01227",
   "metadata": {},
   "source": [
    "Unification is formally defined as a (partial) binary operation: FS0 ⊔ FS1. Unification is symmetric, so FS0 ⊔ FS1 = FS1 ⊔ FS0. The same is true in Python:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 20,
   "id": "ba6b79b0",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "    [ AGR  -> (1)  ]\n",
      "    [ CASE = 'acc' ]\n",
      "(1) [ GND  = 'fem' ]\n",
      "    [ NUM  = 'pl'  ]\n",
      "    [ PER  = 3     ]\n",
      "    [ POS  = 'N'   ]\n"
     ]
    }
   ],
   "source": [
    "print(fs2.unify(fs1))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "5b87005a",
   "metadata": {},
   "source": [
    "If two feature structures stand in the subsumption ordering, then unifying them returns the more specific of the two. However, if the two structures share a path whose values are distinct atoms, unification fails and the result is `None`:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 21,
   "id": "50752375",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "None\n"
     ]
    }
   ],
   "source": [
    "fs0 = nltk.FeatStruct(A='a')\n",
    "fs1 = nltk.FeatStruct(A='b')\n",
    "fs2 = fs0.unify(fs1)\n",
    "print(fs2)"
   ]
  },
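  {
   "cell_type": "markdown",
   "id": "4d5e6f7a",
   "metadata": {},
   "source": [
    "For the dictionary-style feature structures from the start of the chapter, the core of unification can be sketched in a few lines of plain Python. This simplified version merges compatible information and returns `None` on a clash; NLTK's real implementation additionally handles reentrancy and variables.\n",
    "\n",
    "```python\n",
    "def unify(fs1, fs2):\n",
    "    # Merge two dict-based feature structures; return None on conflict.\n",
    "    result = dict(fs1)\n",
    "    for feat, val in fs2.items():\n",
    "        if feat not in result:\n",
    "            result[feat] = val\n",
    "        elif isinstance(result[feat], dict) and isinstance(val, dict):\n",
    "            merged = unify(result[feat], val)\n",
    "            if merged is None:\n",
    "                return None       # conflict inside a complex value\n",
    "            result[feat] = merged\n",
    "        elif result[feat] != val:\n",
    "            return None           # clash between distinct atomic values\n",
    "    return result\n",
    "\n",
    "print(unify({'A': 'a'}, {'B': 'b'}))  # {'A': 'a', 'B': 'b'}\n",
    "print(unify({'A': 'a'}, {'A': 'b'}))  # None\n",
    "print(unify({'AGR': {'NUM': 'sg'}}, {'AGR': {'PER': 3}}))\n",
    "```\n",
    "\n",
    "Note that the result of a successful unification contains all the information from both inputs, so both inputs subsume it."
   ]
  },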
  {
   "cell_type": "markdown",
   "id": "30097adb",
   "metadata": {},
   "source": [
    "Now things get interesting if we look at how unification interacts with structure sharing. First, let's define [(21)](https://usyiyi.github.io/nlp-py-2e-zh/9.html#ex-dag04) in Python:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 22,
   "id": "5b1125ae",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[ ADDRESS = [ NUMBER = 74           ]               ]\n",
      "[           [ STREET = 'rue Pascal' ]               ]\n",
      "[                                                   ]\n",
      "[ NAME    = 'Lee'                                   ]\n",
      "[                                                   ]\n",
      "[           [ ADDRESS = [ NUMBER = 74           ] ] ]\n",
      "[ SPOUSE  = [           [ STREET = 'rue Pascal' ] ] ]\n",
      "[           [                                     ] ]\n",
      "[           [ NAME    = 'Kim'                     ] ]\n"
     ]
    }
   ],
   "source": [
    "fs0 = nltk.FeatStruct(\"\"\"[NAME=Lee,\n",
    "                          ADDRESS=[NUMBER=74,\n",
    "                          STREET='rue Pascal'],\n",
    "                          SPOUSE= [NAME=Kim,\n",
    "                          ADDRESS=[NUMBER=74,\n",
    "                          STREET='rue Pascal']]]\"\"\")\n",
    "print(fs0)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "35a91cbd",
   "metadata": {},
   "source": [
    "What happens when we augment Kim's address with a specification for `CITY`? Notice that `fs1` needs to include the whole path from the root of the feature structure down to `CITY`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 23,
   "id": "3d25f48c",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[ ADDRESS = [ NUMBER = 74           ]               ]\n",
      "[           [ STREET = 'rue Pascal' ]               ]\n",
      "[                                                   ]\n",
      "[ NAME    = 'Lee'                                   ]\n",
      "[                                                   ]\n",
      "[           [           [ CITY   = 'Paris'      ] ] ]\n",
      "[           [ ADDRESS = [ NUMBER = 74           ] ] ]\n",
      "[ SPOUSE  = [           [ STREET = 'rue Pascal' ] ] ]\n",
      "[           [                                     ] ]\n",
      "[           [ NAME    = 'Kim'                     ] ]\n"
     ]
    }
   ],
   "source": [
    "fs1 = nltk.FeatStruct(\"[SPOUSE = [ADDRESS = [CITY = Paris]]]\")\n",
    "print(fs1.unify(fs0))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "4eced24d",
   "metadata": {},
   "source": [
    "By contrast, the result is very different if `fs1` is unified with the structure-sharing version `fs2` (also shown as the graph [(22)](https://usyiyi.github.io/nlp-py-2e-zh/9.html#ex-dag03)):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 24,
   "id": "84f86785",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[               [ CITY   = 'Paris'      ] ]\n",
      "[ ADDRESS = (1) [ NUMBER = 74           ] ]\n",
      "[               [ STREET = 'rue Pascal' ] ]\n",
      "[                                         ]\n",
      "[ NAME    = 'Lee'                         ]\n",
      "[                                         ]\n",
      "[ SPOUSE  = [ ADDRESS -> (1)  ]           ]\n",
      "[           [ NAME    = 'Kim' ]           ]\n"
     ]
    }
   ],
   "source": [
    "fs2 = nltk.FeatStruct(\"\"\"[NAME=Lee, ADDRESS=(1)[NUMBER=74, STREET='rue Pascal'],\n",
    "                          SPOUSE=[NAME=Kim, ADDRESS->(1)]]\"\"\")\n",
    "print(fs1.unify(fs2))"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "7bcf0936",
   "metadata": {},
   "source": [
    "Rather than just updating what was in effect Kim's \"copy\" of Lee's address, we have now updated both their addresses at the same time. More generally, if a unification adds information to the value of some path π, then it simultaneously updates the value of any path that is equivalent to π.\n",
    "\n",
    "As we have already seen, structure sharing can also be stated using variables such as `?x`."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 25,
   "id": "741262bb",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[ ADDRESS1 = ?x ]\n",
      "[ ADDRESS2 = ?x ]\n"
     ]
    }
   ],
   "source": [
    "fs1 = nltk.FeatStruct(\"[ADDRESS1=[NUMBER=74, STREET='rue Pascal']]\")\n",
    "fs2 = nltk.FeatStruct(\"[ADDRESS1=?x, ADDRESS2=?x]\")\n",
    "print(fs2)"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 26,
   "id": "d7707f8e",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "[ ADDRESS1 = (1) [ NUMBER = 74           ] ]\n",
      "[                [ STREET = 'rue Pascal' ] ]\n",
      "[                                          ]\n",
      "[ ADDRESS2 -> (1)                          ]\n"
     ]
    }
   ],
   "source": [
    "print(fs2.unify(fs1))"
   ]
  },
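  {
   "cell_type": "markdown",
   "id": "5e6f7a8b",
   "metadata": {},
   "source": [
    "Before returning to grammar, note that the effect of reentrancy can be imitated in plain Python by letting two features refer to the same dict object: an update made via one path is then visible via every equivalent path. (This is only a rough analogy; unlike NLTK's tags, the sharing is easily lost, for example by copying.)\n",
    "\n",
    "```python\n",
    "address = {'NUMBER': 74, 'STREET': 'rue Pascal'}\n",
    "\n",
    "# ADDRESS1 and ADDRESS2 point at the *same* object,\n",
    "# like the (1) ... -> (1) notation above.\n",
    "fs = {'ADDRESS1': address, 'ADDRESS2': address}\n",
    "\n",
    "# Adding a CITY via one path updates the other path too.\n",
    "fs['ADDRESS1']['CITY'] = 'Paris'\n",
    "print(fs['ADDRESS2']['CITY'])  # Paris\n",
    "\n",
    "# Sanity check: both paths resolve to one and the same object.\n",
    "print(fs['ADDRESS1'] is fs['ADDRESS2'])  # True\n",
    "```\n",
    "\n",
    "This is the dict-level counterpart of the path-equivalence behaviour we just saw with `fs1.unify(fs2)`."
   ]
  },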
  {
   "cell_type": "markdown",
   "id": "5a19a0fe",
   "metadata": {},
   "source": [
    "## 3 Extending a Feature-Based Grammar\n",
    "\n",
    "In this section, we return to feature-based grammar and explore a variety of linguistic issues, demonstrating the benefits of incorporating features into the grammar.\n",
    "\n",
    "<a href=\"#3.1-Subcategorization\">3.1 Subcategorization</a>\n",
    "\n",
    "<a href=\"#3.2-Heads-Revisited\">3.2 Heads Revisited</a>\n",
    "\n",
    "<a href=\"#3.3-Auxiliary-Verbs-and-Inversion\">3.3 Auxiliary Verbs and Inversion</a>\n",
    "\n",
    "<a href=\"#3.4-Unbounded-Dependency-Constructions\">3.4 Unbounded Dependency Constructions</a>\n",
    "\n",
    "<a href=\"#3.5-Case-and-Gender-in-German\">3.5 Case and Gender in German</a>\n",
    "\n",
    "## 3.1 Subcategorization\n",
    "\n",
    "In [8.](https://usyiyi.github.io/nlp-py-2e-zh/8.html#chap-parse), we augmented our category labels to represent different kinds of verbs, using the labels `IV` and `TV` for intransitive and transitive verbs respectively. This allowed us to write productions like the following:\n",
    "\n",
    "> VP -> IV\n",
    "> \n",
    "> VP -> TV NP\n",
    "\n",
    "## 3.2 Heads Revisited\n",
    "\n",
    "We noted in the previous section that by factoring subcategorization information out of the main category label, we could express more generalizations about the properties of verbs. Another property of this kind is the following: expressions of category `V` are heads of phrases of category `VP`. Similarly, `N`s are heads of `NP`s, `A`s (i.e., adjectives) are heads of `AP`s, and `P`s (i.e., prepositions) are heads of `PP`s. Not all phrases have heads; for example, it is standard to say that coordinate phrases (such as the book and the bell) lack heads. Nevertheless, we would like our grammar formalism to express the parent/head-child relation where it holds. At present, `V` and `VP` are just atomic symbols, and we need to find a way to relate them using features (as we did earlier to relate `IV` and `TV`).\n",
    "\n",
    "X-bar syntax addresses this issue by abstracting out the notion of phrasal level. It is usual to recognize three such levels. If `N` represents the lexical level, then `N`' represents the next level up, corresponding to the more traditional category Nom, while `N`'' represents the phrasal level, corresponding to the category `NP`. [(34a)](https://usyiyi.github.io/nlp-py-2e-zh/9.html#ex-xbar0) illustrates a representative structure, while [(34b)](https://usyiyi.github.io/nlp-py-2e-zh/9.html#ex-xbar01) is the more conventional counterpart.\n",
    "\n",
    "\n",
    "> S -> N[BAR=2] V[BAR=2]\n",
    "> \n",
    "> N[BAR=2] -> Det N[BAR=1]\n",
    "> \n",
    "> N[BAR=1] -> N[BAR=1] P[BAR=2]\n",
    "> \n",
    "> N[BAR=1] -> N[BAR=0] P[BAR=2]\n",
    "> \n",
    "> N[BAR=1] -> N[BAR=0]\n",
    "\n",
    "## 3.3 Auxiliary Verbs and Inversion\n",
    "\n",
    "Inverted clauses, in which the order of subject and verb is switched, occur in English interrogatives and also after \"negative\" adverbs. They can be captured with a production along these lines:\n",
    "\n",
    "\n",
    "> S[+INV] -> V[+AUX] NP VP\n",
    "\n",
    "\n",
    "## 3.4 Unbounded Dependency Constructions\n",
    "\n",
    "Consider the contrast between sentences in which a wh-word such as who appears at the front of the clause while a corresponding \"gap\" appears elsewhere, and their gap-free counterparts. The grammar below handles such unbounded dependency constructions using slash categories:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 27,
   "id": "918730d4",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "% start S\n",
      "# ###################\n",
      "# Grammar Productions\n",
      "# ###################\n",
      "S[-INV] -> NP VP\n",
      "S[-INV]/?x -> NP VP/?x\n",
      "S[-INV] -> NP S/NP\n",
      "S[-INV] -> Adv[+NEG] S[+INV]\n",
      "S[+INV] -> V[+AUX] NP VP\n",
      "S[+INV]/?x -> V[+AUX] NP VP/?x\n",
      "SBar -> Comp S[-INV]\n",
      "SBar/?x -> Comp S[-INV]/?x\n",
      "VP -> V[SUBCAT=intrans, -AUX]\n",
      "VP -> V[SUBCAT=trans, -AUX] NP\n",
      "VP/?x -> V[SUBCAT=trans, -AUX] NP/?x\n",
      "VP -> V[SUBCAT=clause, -AUX] SBar\n",
      "VP/?x -> V[SUBCAT=clause, -AUX] SBar/?x\n",
      "VP -> V[+AUX] VP\n",
      "VP/?x -> V[+AUX] VP/?x\n",
      "# ###################\n",
      "# Lexical Productions\n",
      "# ###################\n",
      "V[SUBCAT=intrans, -AUX] -> 'walk' | 'sing'\n",
      "V[SUBCAT=trans, -AUX] -> 'see' | 'like'\n",
      "V[SUBCAT=clause, -AUX] -> 'say' | 'claim'\n",
      "V[+AUX] -> 'do' | 'can'\n",
      "NP[-WH] -> 'you' | 'cats'\n",
      "NP[+WH] -> 'who'\n",
      "Adv[+NEG] -> 'rarely' | 'never'\n",
      "NP/NP ->\n",
      "Comp -> 'that'\n"
     ]
    }
   ],
   "source": [
    "nltk.data.show_cfg('grammars/book_grammars/feat1.fcfg')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d446c019",
   "metadata": {},
   "source": [
    "The grammar in [3.1](https://usyiyi.github.io/nlp-py-2e-zh/9.html#code-slashcfg) contains one \"gap-introduction\" production, namely `S[-INV] -> NP S/NP`. In order to percolate the slash feature correctly, we need to add slashes with variable values to both sides of the arrow in productions that expand `S`, `VP` and `NP`. For example, `VP/?x -> V SBar/?x` is the slashed version of `VP -> V SBar`, and says that a slash value can be specified on the `VP` parent of a constituent if the same value is also specified on the `SBar` child. Finally, `NP/NP ->` allows the slash information on `NP` to be discharged as the empty string. Using the grammar in [3.1](https://usyiyi.github.io/nlp-py-2e-zh/9.html#code-slashcfg), we can parse the sequence who do you claim that you like:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 28,
   "id": "d7359676",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(S[-INV]\n",
      "  (NP[+WH] who)\n",
      "  (S[+INV]/NP[]\n",
      "    (V[+AUX] do)\n",
      "    (NP[-WH] you)\n",
      "    (VP[]/NP[]\n",
      "      (V[-AUX, SUBCAT='clause'] claim)\n",
      "      (SBar[]/NP[]\n",
      "        (Comp[] that)\n",
      "        (S[-INV]/NP[]\n",
      "          (NP[-WH] you)\n",
      "          (VP[]/NP[] (V[-AUX, SUBCAT='trans'] like) (NP[]/NP[] )))))))\n"
     ]
    }
   ],
   "source": [
    "tokens = 'who do you claim that you like'.split()\n",
    "from nltk import load_parser\n",
    "cp = load_parser('grammars/book_grammars/feat1.fcfg')\n",
    "for tree in cp.parse(tokens):\n",
    "    print(tree)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "c7bae555",
   "metadata": {},
   "source": [
    "A more readable version of this tree is shown in [(52)](https://usyiyi.github.io/nlp-py-2e-zh/9.html#ex-gapparse). The grammar will also parse sentences that have no gaps:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 29,
   "id": "8fae01a1",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(S[-INV]\n",
      "  (NP[-WH] you)\n",
      "  (VP[]\n",
      "    (V[-AUX, SUBCAT='clause'] claim)\n",
      "    (SBar[]\n",
      "      (Comp[] that)\n",
      "      (S[-INV]\n",
      "        (NP[-WH] you)\n",
      "        (VP[] (V[-AUX, SUBCAT='trans'] like) (NP[-WH] cats))))))\n"
     ]
    }
   ],
   "source": [
    "tokens = 'you claim that you like cats'.split()\n",
    "for tree in cp.parse(tokens):\n",
    "    print(tree)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "63b766f0",
   "metadata": {},
   "source": [
    "In addition, it admits inverted sentences which do not involve wh constructions:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 30,
   "id": "a3689668",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(S[-INV]\n",
      "  (Adv[+NEG] rarely)\n",
      "  (S[+INV]\n",
      "    (V[+AUX] do)\n",
      "    (NP[-WH] you)\n",
      "    (VP[] (V[-AUX, SUBCAT='intrans'] sing))))\n"
     ]
    }
   ],
   "source": [
    "tokens = 'rarely do you sing'.split()\n",
    "for tree in cp.parse(tokens):\n",
    "    print(tree)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "d6276580",
   "metadata": {},
   "source": [
    "## 3.5 Case and Gender in German\n",
    "\n",
    "Compared with English, German has a relatively rich morphology for agreement. For example, the definite article in German varies with case, gender and number, as shown in [3.1](https://usyiyi.github.io/nlp-py-2e-zh/9.html#tab-german-def-art).\n",
    "\n",
    "Table 3.1:\n",
    "\n",
    "Morphological paradigm for the German definite article"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 31,
   "id": "2a30c6ff",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "% start S\n",
      "# Grammar Productions\n",
      "S -> NP[CASE=nom, AGR=?a] VP[AGR=?a]\n",
      "NP[CASE=?c, AGR=?a] -> PRO[CASE=?c, AGR=?a]\n",
      "NP[CASE=?c, AGR=?a] -> Det[CASE=?c, AGR=?a] N[CASE=?c, AGR=?a]\n",
      "VP[AGR=?a] -> IV[AGR=?a]\n",
      "VP[AGR=?a] -> TV[OBJCASE=?c, AGR=?a] NP[CASE=?c]\n",
      "# Lexical Productions\n",
      "# Singular determiners\n",
      "# masc\n",
      "Det[CASE=nom, AGR=[GND=masc,PER=3,NUM=sg]] -> 'der' \n",
      "Det[CASE=dat, AGR=[GND=masc,PER=3,NUM=sg]] -> 'dem'\n",
      "Det[CASE=acc, AGR=[GND=masc,PER=3,NUM=sg]] -> 'den'\n",
      "# fem\n",
      "Det[CASE=nom, AGR=[GND=fem,PER=3,NUM=sg]] -> 'die' \n",
      "Det[CASE=dat, AGR=[GND=fem,PER=3,NUM=sg]] -> 'der'\n",
      "Det[CASE=acc, AGR=[GND=fem,PER=3,NUM=sg]] -> 'die' \n",
      "# Plural determiners\n",
      "Det[CASE=nom, AGR=[PER=3,NUM=pl]] -> 'die' \n",
      "Det[CASE=dat, AGR=[PER=3,NUM=pl]] -> 'den' \n",
      "Det[CASE=acc, AGR=[PER=3,NUM=pl]] -> 'die' \n",
      "# Nouns\n",
      "N[AGR=[GND=masc,PER=3,NUM=sg]] -> 'Hund'\n",
      "N[CASE=nom, AGR=[GND=masc,PER=3,NUM=pl]] -> 'Hunde'\n",
      "N[CASE=dat, AGR=[GND=masc,PER=3,NUM=pl]] -> 'Hunden'\n",
      "N[CASE=acc, AGR=[GND=masc,PER=3,NUM=pl]] -> 'Hunde'\n",
      "N[AGR=[GND=fem,PER=3,NUM=sg]] -> 'Katze'\n",
      "N[AGR=[GND=fem,PER=3,NUM=pl]] -> 'Katzen'\n",
      "# Pronouns\n",
      "PRO[CASE=nom, AGR=[PER=1,NUM=sg]] -> 'ich'\n",
      "PRO[CASE=acc, AGR=[PER=1,NUM=sg]] -> 'mich'\n",
      "PRO[CASE=dat, AGR=[PER=1,NUM=sg]] -> 'mir'\n",
      "PRO[CASE=nom, AGR=[PER=2,NUM=sg]] -> 'du'\n",
      "PRO[CASE=nom, AGR=[PER=3,NUM=sg]] -> 'er' | 'sie' | 'es'\n",
      "PRO[CASE=nom, AGR=[PER=1,NUM=pl]] -> 'wir'\n",
      "PRO[CASE=acc, AGR=[PER=1,NUM=pl]] -> 'uns'\n",
      "PRO[CASE=dat, AGR=[PER=1,NUM=pl]] -> 'uns'\n",
      "PRO[CASE=nom, AGR=[PER=2,NUM=pl]] -> 'ihr'\n",
      "PRO[CASE=nom, AGR=[PER=3,NUM=pl]] -> 'sie'\n",
      "# Verbs\n",
      "IV[AGR=[NUM=sg,PER=1]] -> 'komme'\n",
      "IV[AGR=[NUM=sg,PER=2]] -> 'kommst'\n",
      "IV[AGR=[NUM=sg,PER=3]] -> 'kommt'\n",
      "IV[AGR=[NUM=pl, PER=1]] -> 'kommen'\n",
      "IV[AGR=[NUM=pl, PER=2]] -> 'kommt'\n",
      "IV[AGR=[NUM=pl, PER=3]] -> 'kommen'\n",
      "TV[OBJCASE=acc, AGR=[NUM=sg,PER=1]] -> 'sehe' | 'mag'\n",
      "TV[OBJCASE=acc, AGR=[NUM=sg,PER=2]] -> 'siehst' | 'magst'\n",
      "TV[OBJCASE=acc, AGR=[NUM=sg,PER=3]] -> 'sieht' | 'mag'\n",
      "TV[OBJCASE=dat, AGR=[NUM=sg,PER=1]] -> 'folge' | 'helfe'\n",
      "TV[OBJCASE=dat, AGR=[NUM=sg,PER=2]] -> 'folgst' | 'hilfst'\n",
      "TV[OBJCASE=dat, AGR=[NUM=sg,PER=3]] -> 'folgt' | 'hilft'\n",
      "TV[OBJCASE=acc, AGR=[NUM=pl,PER=1]] -> 'sehen' | 'moegen'\n",
      "TV[OBJCASE=acc, AGR=[NUM=pl,PER=2]] -> 'sieht' | 'moegt'\n",
      "TV[OBJCASE=acc, AGR=[NUM=pl,PER=3]] -> 'sehen' | 'moegen'\n",
      "TV[OBJCASE=dat, AGR=[NUM=pl,PER=1]] -> 'folgen' | 'helfen'\n",
      "TV[OBJCASE=dat, AGR=[NUM=pl,PER=2]] -> 'folgt' | 'helft'\n",
      "TV[OBJCASE=dat, AGR=[NUM=pl,PER=3]] -> 'folgen' | 'helfen'\n"
     ]
    }
   ],
   "source": [
    "nltk.data.show_cfg('grammars/book_grammars/german.fcfg')"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6002d4bc",
   "metadata": {},
   "source": [
    "As you can see, the feature `OBJCASE` is used to specify the case that a verb governs on its object. The next example illustrates the parse tree for a sentence containing a verb which governs the dative case."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 32,
   "id": "fce9a286",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "(S[]\n",
      "  (NP[AGR=[NUM='sg', PER=1], CASE='nom']\n",
      "    (PRO[AGR=[NUM='sg', PER=1], CASE='nom'] ich))\n",
      "  (VP[AGR=[NUM='sg', PER=1]]\n",
      "    (TV[AGR=[NUM='sg', PER=1], OBJCASE='dat'] folge)\n",
      "    (NP[AGR=[GND='fem', NUM='pl', PER=3], CASE='dat']\n",
      "      (Det[AGR=[NUM='pl', PER=3], CASE='dat'] den)\n",
      "      (N[AGR=[GND='fem', NUM='pl', PER=3]] Katzen))))\n"
     ]
    }
   ],
   "source": [
    "tokens = 'ich folge den Katzen'.split()\n",
    "cp = load_parser('grammars/book_grammars/german.fcfg')\n",
    "for tree in cp.parse(tokens):\n",
    "    print(tree)"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "f334273a",
   "metadata": {},
   "source": [
    "When you develop a grammar, excluding ungrammatical word sequences is often as challenging as parsing grammatical ones. In order to see where and why a sequence fails to parse, setting the `trace` parameter of the `load_parser()` function can be crucial. Consider the following parse failure:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": 33,
   "id": "d1e89acf",
   "metadata": {},
   "outputs": [
    {
     "name": "stdout",
     "output_type": "stream",
     "text": [
      "|.ich.fol.den.Kat.|\n",
      "Leaf Init Rule:\n",
      "|[---]   .   .   .| [0:1] 'ich'\n",
      "|.   [---]   .   .| [1:2] 'folge'\n",
      "|.   .   [---]   .| [2:3] 'den'\n",
      "|.   .   .   [---]| [3:4] 'Katze'\n",
      "Feature Bottom Up Predict Combine Rule:\n",
      "|[---]   .   .   .| [0:1] PRO[AGR=[NUM='sg', PER=1], CASE='nom'] -> 'ich' *\n",
      "Feature Bottom Up Predict Combine Rule:\n",
      "|[---]   .   .   .| [0:1] NP[AGR=[NUM='sg', PER=1], CASE='nom'] -> PRO[AGR=[NUM='sg', PER=1], CASE='nom'] *\n",
      "Feature Bottom Up Predict Combine Rule:\n",
      "|[--->   .   .   .| [0:1] S[] -> NP[AGR=?a, CASE='nom'] * VP[AGR=?a] {?a: [NUM='sg', PER=1]}\n",
      "Feature Bottom Up Predict Combine Rule:\n",
      "|.   [---]   .   .| [1:2] TV[AGR=[NUM='sg', PER=1], OBJCASE='dat'] -> 'folge' *\n",
      "Feature Bottom Up Predict Combine Rule:\n",
      "|.   [--->   .   .| [1:2] VP[AGR=?a] -> TV[AGR=?a, OBJCASE=?c] * NP[CASE=?c] {?a: [NUM='sg', PER=1], ?c: 'dat'}\n",
      "Feature Bottom Up Predict Combine Rule:\n",
      "|.   .   [---]   .| [2:3] Det[AGR=[GND='masc', NUM='sg', PER=3], CASE='acc'] -> 'den' *\n",
      "|.   .   [---]   .| [2:3] Det[AGR=[NUM='pl', PER=3], CASE='dat'] -> 'den' *\n",
      "Feature Bottom Up Predict Combine Rule:\n",
      "|.   .   [--->   .| [2:3] NP[AGR=?a, CASE=?c] -> Det[AGR=?a, CASE=?c] * N[AGR=?a, CASE=?c] {?a: [NUM='pl', PER=3], ?c: 'dat'}\n",
      "Feature Bottom Up Predict Combine Rule:\n",
      "|.   .   [--->   .| [2:3] NP[AGR=?a, CASE=?c] -> Det[AGR=?a, CASE=?c] * N[AGR=?a, CASE=?c] {?a: [GND='masc', NUM='sg', PER=3], ?c: 'acc'}\n",
      "Feature Bottom Up Predict Combine Rule:\n",
      "|.   .   .   [---]| [3:4] N[AGR=[GND='fem', NUM='sg', PER=3]] -> 'Katze' *\n"
     ]
    }
   ],
   "source": [
    "tokens = 'ich folge den Katze'.split()\n",
    "cp = load_parser('grammars/book_grammars/german.fcfg', trace=2)\n",
    "for tree in cp.parse(tokens):\n",
    "    print(tree)"
   ]
  },
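  {
   "cell_type": "markdown",
   "id": "7f2b9a10",
   "metadata": {},
   "source": [
    "To make the clash concrete, here is a minimal sketch of unification over plain Python dictionaries, in the spirit of the dictionary-based feature structures at the start of this chapter. This toy `unify` function (a hypothetical helper, not NLTK's `FeatStruct` implementation) shows why neither `AGR` value of *den* unifies with the `AGR` value of *Katze*:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "7f2b9a11",
   "metadata": {},
   "outputs": [],
   "source": [
    "def unify(fs1, fs2):\n",
    "    # Atomic values unify only if they are equal.\n",
    "    if not isinstance(fs1, dict) or not isinstance(fs2, dict):\n",
    "        return fs1 if fs1 == fs2 else None\n",
    "    result = dict(fs1)\n",
    "    for feat, val in fs2.items():\n",
    "        if feat in result:\n",
    "            sub = unify(result[feat], val)\n",
    "            if sub is None:\n",
    "                return None  # clash on this feature\n",
    "            result[feat] = sub\n",
    "        else:\n",
    "            result[feat] = val\n",
    "    return result\n",
    "\n",
    "den_acc = {'GND': 'masc', 'NUM': 'sg', 'PER': 3}  # Det 'den', accusative\n",
    "den_dat = {'NUM': 'pl', 'PER': 3}                 # Det 'den', dative\n",
    "katze = {'GND': 'fem', 'NUM': 'sg', 'PER': 3}     # N 'Katze'\n",
    "\n",
    "print(unify(den_acc, katze))  # None: GND masc vs. fem\n",
    "print(unify(den_dat, katze))  # None: NUM pl vs. sg\n",
    "print(unify(den_dat, {'GND': 'fem', 'NUM': 'pl', 'PER': 3}))  # succeeds for 'Katzen'"
   ]
  },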
  {
   "cell_type": "markdown",
   "id": "94e099cc",
   "metadata": {},
   "source": [
    "The last two `Scanner` lines in the trace show that *den* is recognized as admitting two possible categories: `Det[AGR=[GND='masc', NUM='sg', PER=3], CASE='acc']` and `Det[AGR=[NUM='pl', PER=3], CASE='dat']`. We know from the grammar in [3.2](https://usyiyi.github.io/nlp-py-2e-zh/9.html#code-germancfg) that *Katze* has category `N[AGR=[GND=fem, NUM=sg, PER=3]]`. Thus there is no binding for the variable `?a` in production `NP[CASE=?c, AGR=?a] -> Det[CASE=?c, AGR=?a] N[CASE=?c, AGR=?a]` that will satisfy these constraints, since the `AGR` value of *Katze* will not unify with either of *den*'s `AGR` values, that is, with either `[GND='masc', NUM='sg', PER=3]` or `[NUM='pl', PER=3]`.\n",
    "\n",
    "## 4 Summary\n",
    "\n",
    "- The traditional categories of context-free grammar are atomic symbols. One important motivation for feature structures is to capture fine-grained distinctions that would otherwise require a massive multiplication of atomic categories.\n",
    "- By using variables over feature values, we can express constraints in grammar productions that allow the realization of different feature specifications to be inter-dependent.\n",
    "- Typically we specify fixed values of features at the lexical level and constrain the values of features in phrases to unify with the corresponding values in their children.\n",
    "- Feature values are either atomic or complex. A particular sub-case of atomic value is the Boolean value, represented by convention as [+/- `f`].\n",
    "- Two features can share a value (either atomic or complex). Structures with shared values are said to be re-entrant. Shared values are represented by numerical indexes (or tags) in AVMs.\n",
    "- A path in a feature structure is a tuple of features corresponding to the labels on a sequence of arcs from the root of the graph representation.\n",
    "- Two paths are equivalent if they share a value.\n",
    "- Feature structures are partially ordered by subsumption. FS0 subsumes FS1 when all the information contained in FS0 is also present in FS1.\n",
    "- The unification of two structures FS0 and FS1, if successful, is the feature structure FS2 that contains the combined information of both FS0 and FS1.\n",
    "- If unification adds information to a path π in FS, then it also adds information to every path π' equivalent to π.\n",
    "- We can use feature structures to build succinct analyses of a wide variety of linguistic phenomena, including verb subcategorization, inversion constructions, unbounded dependency constructions, and case government.\n",
    "\n",
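    "The subsumption and unification relations summarized above can be sketched over plain dictionaries (a toy that ignores re-entrancy and variables, not NLTK's `FeatStruct` implementation):\n",
    "\n",
    "```python\n",
    "def subsumes(fs1, fs2):\n",
    "    # fs1 subsumes fs2 if all information in fs1 is also present in fs2.\n",
    "    if not isinstance(fs1, dict) or not isinstance(fs2, dict):\n",
    "        return fs1 == fs2\n",
    "    return all(f in fs2 and subsumes(v, fs2[f]) for f, v in fs1.items())\n",
    "\n",
    "fs0 = {'NUM': 'pl'}\n",
    "fs1 = {'NUM': 'pl', 'PER': 3}\n",
    "print(subsumes(fs0, fs1))  # True\n",
    "print(subsumes(fs1, fs0))  # False\n",
    "```\n",
    "\n",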
    "## 5 Further Reading\n",
    "\n",
    "Please consult `http://nltk.org/` for further materials on this chapter, including feature structures, feature grammars, and grammar test suites.\n",
    "\n",
    "X-bar syntax: [(Jacobs & Rosenbaum, 1970)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#chomsky1970rn), [(Jackendoff, 1977)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#jackendoff1977xs) (the primes we use replace Chomsky's typographically more demanding horizontal bars).\n",
    "\n",
    "For a good introduction to the phenomenon of agreement, see [(Corbett, 2006)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#corbett2006a).\n",
    "\n",
    "The earliest use of features in theoretical linguistics was designed to capture the phonological properties of phonemes. For example, a sound like /**b**/ might be decomposed into the structure `[+labial, +voice]`. An important motivation was to capture generalizations across classes of segments, for example that /**n**/ gets realized as /**m**/ preceding any `+labial` consonant. Within Chomskyan grammar, it was standard to use atomic features for phenomena such as agreement, and also to capture generalizations across syntactic categories, by analogy with phonology. A radical expansion of the use of features in theoretical syntax was initiated by Generalized Phrase Structure Grammar (GPSG; [(Gazdar, Klein, & and, 1985)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#gazdar1985gps)), particularly in the use of features with complex values.\n",
    "\n",
    "Coming more from the perspective of computational linguistics, [(Dahl & Saint-Dizier, 1985)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#kay1984ug) proposed that functional aspects of language could be captured by the unification of attribute-value structures, and a similar approach was elaborated by [(Grosz & Stickel, 1983)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#shieber1983fip) within the PATR-II formalism. Early work in Lexical-Functional Grammar (LFG; [(Bresnan, 1982)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#kaplan1982lfg)) introduced the notion of an f-structure, whose primary role is to represent the grammatical relations and predicate-argument structure associated with a constituent-structure parse. [(Shieber, 1986)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#shieber1986iub) provides an excellent introduction to this phase of research into feature-based grammars.\n",
    "\n",
    "One conceptual difficulty with algebraic approaches to feature structures arose when researchers attempted to model negation. An alternative perspective, pioneered by [(Kasper & Rounds, 1986)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#kasper1986lsf) and [(Johnson, 1988)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#johnson1988avl), argues that grammars involve descriptions of feature structures rather than the structures themselves. These descriptions are combined using logical operations such as conjunction, and negation is just the usual logical operation over feature descriptions. This description-oriented perspective was integral to LFG from the outset (see [(Huang & Chen, 1989)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#kaplan1989fal)), and was also adopted by later versions of Head-Driven Phrase Structure Grammar (HPSG; [(Sag & Wasow, 1999)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#sag1999st)). A comprehensive bibliography of the HPSG literature is available at `http://www.cl.uni-bremen.de/HPSG-Bib/`.\n",
    "\n",
    "Feature structures, as presented in this chapter, are unable to express important constraints on linguistic information. For example, there is no way of saying that the only permissible values of `NUM` are `sg` and `pl`, while a specification such as `[NUM=masc]` is anomalous. Similarly, we cannot say that the complex value of `AGR` must contain specifications for the features `PER`, `NUM` and `GND`, but cannot contain a specification such as `[SUBCAT=trans]`. Typed feature structures were developed to remedy this deficiency. To begin with, we stipulate that feature values are always typed. In the case of atomic values, the values just are types. For example, we would say that the value of `NUM` is the type `num`. Moreover, `num` is the most general type of value for `NUM`. Since types are organized hierarchically, we can be more informative by specifying the value of `NUM` as a subtype of `num`, namely either `sg` or `pl`.\n",
    "\n",
    "In the case of complex values, we say that feature structures are themselves typed. So for example the value of `AGR` will be a feature structure of type `AGR`. We also stipulate that all and only `PER`, `NUM` and `GND` are appropriate features for a structure of type `AGR`. A good early review of work on typed feature structures is [(Emele & Zajac, 1990)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#emele1990tug). A more comprehensive examination of the formal foundations can be found in [(Carpenter, 1992)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#carpenter1992ltf), while [(Copestake, 2002)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#copestake2002itf) focuses on implementing an HPSG-oriented approach to typed feature structures.\n",
    "\n",
    "There is a large literature on the analysis of German within feature-based grammar frameworks. [(Müller, 2002)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#mueller2002cp) gives a very extensive and detailed analysis of German syntax in HPSG, while [(Nerbonne, Netter, & Pollard, 1994)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#nerbonne1994ghd) is a good starting point for the HPSG literature on this topic.\n",
    "\n",
    "Chapter 15 of [(Jurafsky & Martin, 2008)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#jurafskymartin2008) discusses feature structures, the unification algorithm, and the integration of unification into parsing algorithms.\n",
    "\n",
    "## 6 Exercises\n",
    "\n",
    "1. ☼ What constraints are required to correctly parse word sequences like I am happy and she is happy but not \\*you is happy or \\*they am happy? Implement two solutions for the present tense paradigm of the verb be in English, first taking Grammar [(6)](https://usyiyi.github.io/nlp-py-2e-zh/9.html#ex-agcfg1) as your starting point, and then taking Grammar [(18)](https://usyiyi.github.io/nlp-py-2e-zh/9.html#ex-agr2) as the starting point.\n",
    "\n",
    "2. ☼ Develop a variant of the grammar in [1.1](https://usyiyi.github.io/nlp-py-2e-zh/9.html#code-feat0cfg) that uses a feature *count* to make the distinctions shown below:\n",
    "\n",
    "   ```python\n",
    "   fs1 = nltk.FeatStruct(\"[A = ?x, B= [C = ?x]]\")\n",
    "   fs2 = nltk.FeatStruct(\"[B = [D = d]]\")\n",
    "   fs3 = nltk.FeatStruct(\"[B = [C = d]]\")\n",
    "   fs4 = nltk.FeatStruct(\"[A = (1)[B = b], C->(1)]\")\n",
    "   fs5 = nltk.FeatStruct(\"[A = (1)[D = ?x], C = [E -> (1), F = ?x] ]\")\n",
    "   fs6 = nltk.FeatStruct(\"[A = [D = d]]\")\n",
    "   fs7 = nltk.FeatStruct(\"[A = [D = d], C = [F = [D = d]]]\")\n",
    "   fs8 = nltk.FeatStruct(\"[A = (1)[D = ?x, G = ?x], C = [B = ?x, E -> (1)] ]\")\n",
    "   fs9 = nltk.FeatStruct(\"[A = [B = b], C = [E = [G = e]]]\")\n",
    "   fs10 = nltk.FeatStruct(\"[A = (1)[B = b], C -> (1)]\")\n",
    "   ```\n",
    "   \n",
    "\n",
    "   Work out on paper what the results of the following unifications are. (Hint: you might find it useful to draw the graph structures.)\n",
    "\n",
    "   1. `fs1` and `fs2`\n",
    "   2. `fs1` and `fs3`\n",
    "   3. `fs4` and `fs5`\n",
    "   4. `fs5` and `fs6`\n",
    "   5. `fs5` and `fs7`\n",
    "   6. `fs8` and `fs9`\n",
    "   7. `fs8` and `fs10`\n",
    "\n",
    "   Check your answers using Python.\n",
    "\n",
    "3. ◑ List two feature structures that subsume `[A=?x, B=?x]`.\n",
    "\n",
    "4. ◑ Ignoring structure sharing, give an informal algorithm for unifying two feature structures.\n",
    "\n",
    "5. ◑ Extend the German grammar in [3.2](https://usyiyi.github.io/nlp-py-2e-zh/9.html#code-germancfg) so that it can handle so-called verb-second structures like the following:"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "1b3a8792",
   "metadata": {},
   "source": [
    "(58) Heute sieht der Hund die Katze. "
   ]
  },
  {
   "cell_type": "markdown",
   "id": "6252650d",
   "metadata": {},
   "source": [
    "6. ◑ Seemingly synonymous verbs have slightly different syntactic properties [(Levin, 1993)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#levin1993). Consider the patterns of grammaticality shown below for the verbs *loaded*, *filled*, and *dumped*. Can you write grammar productions to handle such data?"
   ]
  },
  {
   "cell_type": "markdown",
   "id": "2de77574",
   "metadata": {},
   "source": [
    "(59) a. The farmer *loaded* the cart with sand.  \n",
    "b. The farmer *loaded* sand into the cart.  \n",
    "c. The farmer *filled* the cart with sand.  \n",
    "d. \\*The farmer *filled* sand into the cart.  \n",
    "e. \\*The farmer *dumped* the cart with sand.  \n",
    "f. The farmer *dumped* sand into the cart."
   ]
  },
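  {
   "cell_type": "markdown",
   "id": "8c3d5e20",
   "metadata": {},
   "source": [
    "One way to start organizing these data (an illustrative sketch with hypothetical frame labels, not a full set of grammar productions) is to record, for each verb, which locative frames it licenses; in a feature-based grammar these sets would become the value of a `SUBCAT`-style feature on the verb's lexical entry:"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "id": "8c3d5e21",
   "metadata": {},
   "outputs": [],
   "source": [
    "# Hypothetical frame labels: 'NP-with' = \"V NP with NP\", 'NP-into' = \"V NP into NP\"\n",
    "frames = {\n",
    "    'load': {'NP-with', 'NP-into'},  # (59a), (59b)\n",
    "    'fill': {'NP-with'},             # (59c); * (59d)\n",
    "    'dump': {'NP-into'},             # * (59e); (59f)\n",
    "}\n",
    "\n",
    "def licensed(verb, frame):\n",
    "    return frame in frames.get(verb, set())\n",
    "\n",
    "print(licensed('fill', 'NP-into'))  # False, mirroring the ill-formed (59d)"
   ]
  },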
  {
   "cell_type": "markdown",
   "id": "5a9a8dd2",
   "metadata": {},
   "source": [
    "7. ★ Morphological paradigms are rarely completely regular, in the sense that every cell of the matrix has a different realization. For example, the present tense conjugation of the lexeme walk has only two distinct forms: walks for the 3rd person singular, and walk for all other combinations of person and number. A successful analysis should not require redundantly specifying that 5 out of the 6 possible morphological combinations have the same realization. Propose and implement a method for dealing with this.\n",
    "\n",
    "8. ★ So-called head features are shared between a parent node and its head child. For example, `TENSE` is a head feature that is shared between a `VP` and its head `V` child. See [(Gazdar, Klein, & and, 1985)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#gazdar1985gps) for more details. Most of the features we have looked at are head features; exceptions are `SUBCAT` and `SLASH`. Since the sharing of head features is predictable, it should not need to be stated explicitly in the grammar productions. Develop an approach that automatically accounts for this regular behavior of head features.\n",
    "\n",
    "9. ★ Extend NLTK's treatment of feature structures to allow unification into list-valued features, and use this to implement an HPSG-style analysis of subcategorization, whereby the `SUBCAT` of a head category is the concatenation of its complements' categories with the `SUBCAT` value of its immediate parent.\n",
    "\n",
    "10. ★ Extend NLTK's treatment of feature structures to allow productions with underspecified categories, such as `S[-INV] --> ?x S/?x`.\n",
    "\n",
    "11. ★ Extend NLTK's treatment of feature structures to allow typed feature structures.\n",
    "\n",
    "12. ★ Pick some grammatical constructions described in [(Huddleston & Pullum, 2002)](https://usyiyi.github.io/nlp-py-2e-zh/bibliography.html#huddleston2002cge), and develop a feature-based grammar to account for them.\n",
    "\n",
    "About this document...\n",
    "\n",
    "UPDATED FOR NLTK 3.0. This is a chapter from *Natural Language Processing with Python* by [Steven Bird](http://estive.net/), [Ewan Klein](http://homepages.inf.ed.ac.uk/ewan/) and [Edward Loper](http://ed.loper.org/), Copyright © 2014 the authors. It is distributed with the *Natural Language Toolkit* [`http://nltk.org/`], Version 3.0, under the terms of the *Creative Commons Attribution-Noncommercial-No Derivative Works 3.0 United States License* [http://creativecommons.org/licenses/by-nc-nd/3.0/us/].\n",
    "\n",
    "This document was built on Wed 1 Jul 2015 12:30:05 AEST"
   ]
  }
 ],
 "metadata": {
  "kernelspec": {
   "display_name": "Python 3 (ipykernel)",
   "language": "python",
   "name": "python3"
  },
  "language_info": {
   "codemirror_mode": {
    "name": "ipython",
    "version": 3
   },
   "file_extension": ".py",
   "mimetype": "text/x-python",
   "name": "python",
   "nbconvert_exporter": "python",
   "pygments_lexer": "ipython3",
   "version": "3.9.13"
  }
 },
 "nbformat": 4,
 "nbformat_minor": 5
}
