8 Sequence Labeling for Parts of Speech and Named Entities

8.5 Conditional Random Fields (CRFs)

We'll call these K functions F_k(X,Y) global features, since each one is a property of the entire input sequence X and output sequence Y. We compute them by decomposing each into a sum of local features for each position i in Y:
$$F_k(X,Y) = \sum_{i=1}^{n} f_k(y_{i-1}, y_i, X, i) \qquad (8.26)$$
Each of these local features f_k in a linear-chain CRF is allowed to make use of the current output token y_i, the previous output token y_{i-1}, the entire input string X (or any subpart of it), and the current position i. This constraint to depend only on the current and previous output tokens y_i and y_{i-1} is what characterizes a linear-chain CRF. As we will see, this limitation makes it possible to use versions of the efficient Viterbi and forward-backward algorithms from the HMM with the linear-chain CRF. A general CRF, by contrast, allows a feature to make use of any output token, and is thus necessary for tasks in which the decision depends on distant output tokens, like y_{i-4}. General CRFs require more complex inference, and are less commonly used for language processing.
8.5.1 Features in a CRF POS Tagger

Let's look at some of these features in detail, since the reason to use a discriminative sequence model is that it's easier to incorporate a lot of features. Again, in a linear-chain CRF, each local feature f_k at position i can depend on any information from (y_{i-1}, y_i, X, i). So some legal features representing common situations might be the following:
1{x_i = the, y_i = DET}
1{y_i = PROPN, x_{i+1} = Street, y_{i-1} = NUM}
1{y_i = VERB, y_{i-1} = AUX}
For simplicity, we'll assume all CRF features take on the value 1 or 0. Above, we explicitly use the notation 1{x} to mean "1 if x is true, and 0 otherwise". From now on, we'll leave off the 1 when we define features, but you can assume each feature has it there implicitly.
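To make the indicator notation concrete, here is a minimal sketch of how such binary features might be written as plain Python functions. The function names and the (y_prev, y, X, i) calling convention are our own illustrative assumptions, not the interface of any particular CRF toolkit.

```python
# Each local feature f_k(y_{i-1}, y_i, X, i) returns 1 or 0.
# X is the list of words, i the current position, y/y_prev the current/previous tags.

def f_the_det(y_prev, y, X, i):
    """1{x_i = the, y_i = DET}"""
    return 1 if X[i].lower() == "the" and y == "DET" else 0

def f_propn_before_street(y_prev, y, X, i):
    """1{y_i = PROPN, x_{i+1} = Street, y_{i-1} = NUM}"""
    return 1 if (y == "PROPN" and y_prev == "NUM"
                 and i + 1 < len(X) and X[i + 1] == "Street") else 0

def f_verb_after_aux(y_prev, y, X, i):
    """1{y_i = VERB, y_{i-1} = AUX}"""
    return 1 if y == "VERB" and y_prev == "AUX" else 0
```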
Although the decision about which features to use is made by hand by the system designer, the specific features are automatically populated by using feature templates, as we briefly mentioned in Chapter 5. Here are some templates that only use information from (y_{i-1}, y_i, X, i):
⟨y_i, x_i⟩, ⟨y_i, y_{i-1}⟩, ⟨y_i, x_{i-1}, x_{i+2}⟩
These templates automatically populate the set of features from every instance in the training and test set. Thus for our example Janet/NNP will/MD back/VB the/DT bill/NN, when x_i is the word back, the following features would be generated and have the value 1 (we've assigned them arbitrary feature numbers):
f_3743: y_i = VB and x_i = back
f_156: y_i = VB and y_{i-1} = MD
f_99732: y_i = VB and x_{i-1} = will and x_{i+2} = bill

It's also important to have features that help with unknown words. One of the most important is word shape features, which represent the abstract letter pattern of the word by mapping lower-case letters to 'x', upper-case to 'X', numbers to 'd', and retaining punctuation. Thus for example I.M.F would map to X.X.X. and DC10-30 would map to XXdd-dd. A second class of shorter word shape features is also used. In these features consecutive character types are removed, so words in all caps map to X, words with initial-caps map to Xx, DC10-30 would be mapped to Xd-d but I.M.F would still map to X.X.X. Prefix and suffix features are also useful. In summary, here are some sample feature templates that help with unknown words:
x_i contains a particular prefix (perhaps from all prefixes of length ≤ 2)
x_i contains a particular suffix (perhaps from all suffixes of length ≤ 2)
x_i's word shape
x_i's short word shape

For example the word well-dressed might generate the following non-zero valued feature values:
prefix(x_i) = w
prefix(x_i) = we
suffix(x_i) = ed
suffix(x_i) = d
word-shape(x_i) = xxxx-xxxxxxx
short-word-shape(x_i) = x-x
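As a concrete illustration of how the long and short word-shape mappings might be computed, here is a small sketch; the function names are our own and this is not code from any particular tagger.

```python
import re

def word_shape(word):
    """Long shape: lower-case -> 'x', upper-case -> 'X', digits -> 'd';
    punctuation is retained.  E.g. 'DC10-30' -> 'XXdd-dd', 'I.M.F' -> 'X.X.X'."""
    shape = re.sub(r"[a-z]", "x", word)
    shape = re.sub(r"[A-Z]", "X", shape)
    shape = re.sub(r"[0-9]", "d", shape)
    return shape

def short_word_shape(word):
    """Short shape: same mapping, but runs of the same symbol collapse to one.
    E.g. 'DC10-30' -> 'Xd-d', 'well-dressed' -> 'x-x'."""
    return re.sub(r"(.)\1*", r"\1", word_shape(word))

print(word_shape("well-dressed"), short_word_shape("well-dressed"))
# xxxx-xxxxxxx x-x
```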
The known-word templates are computed for every word seen in the training set; the unknown word features can also be computed for all words in training, or only on training words whose frequency is below some threshold. The result of the known-word templates and word-signature features is a very large set of features. Generally a feature cutoff is used in which features are thrown out if they have count < 5 in the training set.
Remember that in a CRF we don't learn weights for each of these local features f_k. Instead, we first sum the values of each local feature (for example feature f_3743) over the entire sentence, to create each global feature (for example F_3743). It is those global features that will then be multiplied by weight w_3743. Thus for training and inference there is always a fixed set of K features with K weights, even though the length of each sentence is different.
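The following sketch shows this bookkeeping: summing one local feature over all positions of a sentence to get its global value, and then weighting it. The feature, the start-tag convention, and the weight value are illustrative assumptions.

```python
def f_vb_after_md(y_prev, y, X, i):
    """1{y_i = VB, y_{i-1} = MD} -- in the spirit of f_156 above."""
    return 1 if y == "VB" and y_prev == "MD" else 0

def global_feature(f_k, X, Y):
    """F_k(X, Y) = sum over i of f_k(y_{i-1}, y_i, X, i); '<s>' stands in for y_0."""
    tags = ["<s>"] + list(Y)
    return sum(f_k(tags[i], tags[i + 1], X, i) for i in range(len(X)))

X = ["Janet", "will", "back", "the", "bill"]
Y = ["NNP", "MD", "VB", "DT", "NN"]

F_156 = global_feature(f_vb_after_md, X, Y)   # 1 for this sentence
contribution = 0.7 * F_156                    # w_156 = 0.7 is a made-up weight
```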
8.5.2 Features for CRF Named Entity Recognizers

A CRF for NER makes use of very similar features to a POS tagger, as shown in Figure 8.15.

identity of w_i, identity of neighboring words
embeddings for w_i, embeddings for neighboring words
part of speech of w_i, part of speech of neighboring words
presence of w_i in a gazetteer
w_i contains a particular prefix (from all prefixes of length ≤ 4)
w_i contains a particular suffix (from all suffixes of length ≤ 4)
word shape of w_i, word shape of neighboring words
short word shape of w_i, short word shape of neighboring words
gazetteer features
Figure 8.15 Typical features for a feature-based NER system.

One feature that is especially useful for locations is a gazetteer, a list of place names, often providing millions of entries for locations with detailed geographical and political information. This can be implemented as a binary feature indicating a phrase appears in the list. Other related resources like name-lists, for example from the United States Census Bureau, can be used, as can other entity dictionaries like lists of corporations or products, although they may not be as helpful as a gazetteer (Mikheev et al., 1999).
The sample named entity token L'Occitane would generate the following nonzero valued feature values (assuming that L'Occitane is neither in the gazetteer nor the census).
prefix(x_i) = L
prefix(x_i) = L'
prefix(x_i) = L'O
prefix(x_i) = L'Oc
suffix(x_i) = tane
suffix(x_i) = ane
suffix(x_i) = ne
suffix(x_i) = e
word-shape(x_i) = X'Xxxxxxxx
short-word-shape(x_i) = X'Xx

Figure 8.16 illustrates the result of adding part-of-speech tags and some shape information to our earlier example.
8.5.3 Inference and Training for CRFs

How do we find the best tag sequence Ŷ for a given input X? We start with Eq. 8.22:
$$\begin{aligned}
\hat{Y} &= \operatorname*{argmax}_{Y \in \mathcal{Y}} P(Y|X)\\
 &= \operatorname*{argmax}_{Y \in \mathcal{Y}} \frac{1}{Z(X)} \exp\left(\sum_{k=1}^{K} w_k F_k(X,Y)\right) &&(8.27)\\
 &= \operatorname*{argmax}_{Y \in \mathcal{Y}} \exp\left(\sum_{k=1}^{K} w_k \sum_{i=1}^{n} f_k(y_{i-1}, y_i, X, i)\right) &&(8.28)\\
 &= \operatorname*{argmax}_{Y \in \mathcal{Y}} \sum_{k=1}^{K} w_k \sum_{i=1}^{n} f_k(y_{i-1}, y_i, X, i) &&(8.29)\\
 &= \operatorname*{argmax}_{Y \in \mathcal{Y}} \sum_{i=1}^{n} \sum_{k=1}^{K} w_k f_k(y_{i-1}, y_i, X, i) &&(8.30)
\end{aligned}$$
We can ignore the exp function and the denominator Z(X), as we do above, because exp doesn't change the argmax, and the denominator Z(X) is constant for a given observation sequence X. How should we decode to find this optimal tag sequence Ŷ? Just as with HMMs, we'll turn to the Viterbi algorithm, which works because, like the HMM, the linear-chain CRF depends at each timestep on only one previous output token y_{i-1}.
Concretely, this involves filling an N ×T array with the appropriate values, maintaining backpointers as we proceed. As with HMM Viterbi, when the table is filled, we simply follow pointers back from the maximum value in the final column to retrieve the desired set of labels.
The requisite changes from HMM Viterbi have to do only with how we fill each cell. Recall from Eq. 8.19 that the recursive step of the Viterbi equation computes the Viterbi value of time t for state j as
$$v_t(j) = \max_{i=1}^{N} v_{t-1}(i)\, a_{ij}\, b_j(o_t); \quad 1 \le j \le N,\ 1 < t \le T \qquad (8.31)$$
which is the HMM implementation of
$$v_t(j) = \max_{i=1}^{N} v_{t-1}(i)\, P(s_j|s_i)\, P(o_t|s_j) \quad 1 \le j \le N,\ 1 < t \le T \qquad (8.32)$$
The CRF requires only a slight change to this latter formula, replacing the a and b prior and likelihood probabilities with the CRF features:
$$v_t(j) = \max_{i=1}^{N}\left[ v_{t-1}(i) + \sum_{k=1}^{K} w_k f_k(y_{t-1}, y_t, X, t)\right] \quad 1 \le j \le N,\ 1 < t \le T \qquad (8.33)$$
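In code, the CRF version of Viterbi might look like the sketch below; scores are added rather than multiplied because we are working with the weighted feature sums of Eq. 8.30 (log-linear scores). The score callback, the start-tag handling, and the tag indexing are illustrative assumptions rather than a fixed API.

```python
import numpy as np

def crf_viterbi(score, n_tags, n_steps):
    """Viterbi decoding for a linear-chain CRF.
    score(prev_tag, tag, t) should return sum_k w_k * f_k(y_{t-1}, y_t, X, t);
    prev_tag is None at t=0 (the start of the sentence).
    Returns the highest-scoring tag sequence as a list of tag indices."""
    v = np.full((n_steps, n_tags), -np.inf)     # Viterbi lattice
    backptr = np.zeros((n_steps, n_tags), dtype=int)

    for j in range(n_tags):                     # initialization column
        v[0, j] = score(None, j, 0)

    for t in range(1, n_steps):                 # recursion, Eq. 8.33
        for j in range(n_tags):
            cands = [v[t - 1, i] + score(i, j, t) for i in range(n_tags)]
            backptr[t, j] = int(np.argmax(cands))
            v[t, j] = max(cands)

    best = [int(np.argmax(v[-1]))]              # backtrace from the best final cell
    for t in range(n_steps - 1, 0, -1):
        best.append(int(backptr[t, best[-1]]))
    return best[::-1]
```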
Learning in CRFs relies on the same supervised learning algorithms we presented for logistic regression. Given a sequence of observations, feature functions, and corresponding outputs, we use stochastic gradient descent to train the weights to maximize the log-likelihood of the training corpus. The local nature of linear-chain CRFs means that a CRF version of the forward-backward algorithm (see Appendix A) can be used to efficiently compute the necessary derivatives. As with logistic regression, L1 or L2 regularization is important.
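For reference, the gradient that SGD needs has the standard log-linear form (we state it without derivation): for a training pair (X, Y), the partial derivative of the log-likelihood with respect to weight w_k is the observed global feature value minus its expectation under the model,

$$\frac{\partial \log P(Y|X)}{\partial w_k} = F_k(X,Y) - \sum_{Y'} P(Y'|X)\, F_k(X,Y'),$$

and it is exactly this expectation term that the CRF version of forward-backward lets us compute efficiently.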
8.6 Evaluation of Named Entity Recognition

Part-of-speech taggers are evaluated by the standard metric of accuracy. Named entity recognizers are evaluated by recall, precision, and F_1 measure. Recall that recall is the ratio of the number of correctly labeled responses to the total that should have been labeled; precision is the ratio of the number of correctly labeled responses to the total labeled; and F-measure is the harmonic mean of the two.
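Writing these out in the standard way, with true positives (TP), false positives (FP), and false negatives (FN) counted over entities as discussed below:

$$\text{Precision} = \frac{TP}{TP+FP}, \qquad \text{Recall} = \frac{TP}{TP+FN}, \qquad F_1 = \frac{2 \cdot \text{Precision} \cdot \text{Recall}}{\text{Precision}+\text{Recall}}$$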
To know if the difference between the F_1 scores of two NER systems is a significant difference, we use the paired bootstrap test, or the similar randomization test (Section 4.9).
For named entity tagging, the entity rather than the word is the unit of response. Thus in the example in Fig. 8.16 , the two entities Jane Villanueva and United Airlines Holding and the non-entity discussed would each count as a single response.
The fact that named entity tagging has a segmentation component which is not present in tasks like text categorization or part-of-speech tagging causes some problems with evaluation. For example, a system that labeled Jane but not Jane Villanueva as a person would cause two errors, a false positive for O and a false negative for I-PER. In addition, using entities as the unit of response but words as the unit of training means that there is a mismatch between the training and test conditions.
8.7 Further Details

In this section we summarize a few remaining details of the data and models, beginning with data. Since the algorithms we have presented are supervised, having labeled data is essential for training and testing.
A wide variety of datasets exist for part-of-speech tagging and/or NER. The Universal Dependencies (UD) dataset (Nivre et al., 2016b) has POS tagged corpora in 92 languages at the time of this writing, as do the Penn Treebanks in English, Chinese, and Arabic. OntoNotes has corpora labeled for named entities in English, Chinese, and Arabic (Hovy et al., 2006). Named entity tagged corpora are also available in particular domains, such as for biomedical (Bada et al., 2012) and literary text (Bamman et al., 2019).
8.7.1 Bidirectionality

One problem with the CRF and HMM architectures as presented is that the models are exclusively run left-to-right. While the Viterbi algorithm still allows present decisions to be influenced indirectly by future decisions, it would help even more if a decision about word w_i could directly use information about future tags t_{i+1} and t_{i+2}.
Alternatively, any sequence model can be turned into a bidirectional model by using multiple passes. For example, the first pass would use only part-of-speech features from already-disambiguated words on the left. In the second pass, tags for all words, including those on the right, can be used. Alternately, the tagger can be run twice, once left-to-right and once right-to-left. In Viterbi decoding, the labeler would choose the higher scoring of the two sequences (left-to-right or right-to-left). Bidirectional models are quite standard for neural models, as we will see with the biLSTM models to be introduced in Chapter 9.
8.7.2 Rule-based Methods
While machine learned (neural or CRF) sequence models are the norm in academic research, commercial approaches to NER are often based on pragmatic combinations of lists and rules, with some smaller amount of supervised machine learning (Chiticariu et al., 2013) . For example in the IBM System T architecture, a user specifies declarative constraints for tagging tasks in a formal query language that includes regular expressions, dictionaries, semantic constraints, and other operators, which the system compiles into an efficient extractor (Chiticariu et al., 2018) .
One common approach is to make repeated rule-based passes over a text, starting with rules with very high precision but low recall, and, in subsequent stages, using machine learning methods that take the output of the first pass into account (an approach first worked out for coreference (Lee et al., 2017a)):
1. First, use high-precision rules to tag unambiguous entity mentions (a toy sketch of such a first pass follows this list).
2. Then, search for substring matches of the previously detected names.
3. Use application-specific name lists to find likely domain-specific mentions.
4. Finally, apply supervised sequence labeling techniques that use tags from previous stages as additional features.
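Here is what such a high-precision first pass might look like with regular expressions; the patterns, label names, and (start, end, label) output convention are invented for this example and are far simpler than a production rule set.

```python
import re

# A toy first pass: very high precision, very low recall.
HIGH_PRECISION_RULES = [
    (re.compile(r"\b[A-Z][a-z]+ (?:Inc|Corp|Ltd)\b\.?"), "ORG"),
    (re.compile(r"\b(?:Mr|Ms|Dr)\. [A-Z][a-z]+\b"), "PER"),
]

def first_pass(text):
    """Return (start, end, label) spans for unambiguous entity mentions."""
    mentions = []
    for pattern, label in HIGH_PRECISION_RULES:
        for m in pattern.finditer(text):
            mentions.append((m.start(), m.end(), label))
    return sorted(mentions)

print(first_pass("Dr. Smith joined Acme Corp. last year."))
# [(0, 9, 'PER'), (17, 27, 'ORG')]
```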
Rule-based methods were also the earliest methods for part-of-speech tagging. Rule-based taggers like the English Constraint Grammar system (Karlsson et al. 1995 , Voutilainen 1999 ) use a two-stage formalism invented in the 1950s and 1960s:
(1) a morphological analyzer with tens of thousands of word stem entries returns all parts of speech for a word, then (2) a large set of thousands of constraints are applied to the input sentence to rule out parts of speech inconsistent with the context.
8.7.3 POS Tagging for Morphologically Rich Languages
Augmentations to tagging algorithms become necessary when dealing with languages with rich morphology like Czech, Hungarian and Turkish.
These productive word-formation processes result in a large vocabulary for these languages: a 250,000 word token corpus of Hungarian has more than twice as many word types as a similarly sized corpus of English (Oravecz and Dienes, 2002) , while a 10 million word token corpus of Turkish contains four times as many word types as a similarly sized English corpus (Hakkani-Tür et al., 2002) . Large vocabularies mean many unknown words, and these unknown words cause significant performance degradations in a wide variety of languages (including Czech, Slovene, Estonian, and Romanian) (Hajič, 2000) .
Highly inflectional languages also have much more information than English coded in word morphology, like case (nominative, accusative, genitive) or gender (masculine, feminine). Because this information is important for tasks like parsing and coreference resolution, part-of-speech taggers for morphologically rich languages need to label words with case and gender information. Tagsets for morphologically rich languages are therefore sequences of morphological tags rather than a single primitive tag. Here's a Turkish example, in which the word izin has three possible morphological/part-of-speech tags and meanings (Hakkani-Tür et al., 2002) :
1. Yerdeki izin temizlenmesi gerek.
iz + Noun+A3sg+Pnon+Gen
The trace on the floor should be cleaned.
2. iz + Noun+A3sg+P2sg+Nom
Your finger print is left on (it).
3. izin + Noun+A3sg+Pnon+Nom
You need permission to enter.
Using a morphological parse sequence like Noun+A3sg+Pnon+Gen as the part-of-speech tag greatly increases the number of parts of speech, and so tagsets can be 4 to 10 times larger than the 50-100 tags we have seen for English. With such large tagsets, each word needs to be morphologically analyzed to generate the list of possible morphological tag sequences (part-of-speech tags) for the word. The role of the tagger is then to disambiguate among these tags. This method also helps with unknown words since morphological parsers can accept unknown stems and still segment the affixes properly.
8.8 Summary

This chapter introduced parts of speech and named entities, and the tasks of part-of-speech tagging and named entity recognition:
• Languages generally have a small set of closed class words that are highly frequent, ambiguous, and act as function words, and open-class words like nouns, verbs, adjectives. Various part-of-speech tagsets exist, of between 40 and 200 tags.
• Part-of-speech tagging is the process of assigning a part-of-speech label to each of a sequence of words.
• Named entities are words for proper nouns referring mainly to people, places, and organizations, but extended to many other types that aren't strictly entities or even proper nouns.
• Two common approaches to sequence modeling are a generative approach, HMM tagging, and a discriminative approach, CRF tagging. We will see a neural approach in following chapters.
• The probabilities in HMM taggers are estimated by maximum likelihood estimation on tag-labeled training corpora. The Viterbi algorithm is used for decoding, finding the most likely tag sequence.
• Conditional Random Fields or CRF taggers train a log-linear model that can choose the best tag sequence given an observation sequence, based on features that condition on the output tag, the prior output tag, the entire input sequence, and the current timestep. They use the Viterbi algorithm for inference, to choose the best sequence of tags, and a version of the Forward-Backward algorithm (see Appendix A) for training.
8.9 Bibliographical and Historical Notes
What is probably the earliest part-of-speech tagger was part of the parser in Zellig Harris's Transformations and Discourse Analysis Project (TDAP), implemented between June 1958 and July 1959 at the University of Pennsylvania (Harris, 1962), although earlier systems had used part-of-speech dictionaries. TDAP used 14 handwritten rules for part-of-speech disambiguation; the use of part-of-speech tag sequences and the relative frequency of tags for a word prefigures modern algorithms. The parser was implemented essentially as a cascade of finite-state transducers; see Joshi and Hopely (1999) and Karttunen (1999) for a reimplementation. The Computational Grammar Coder (CGC) of Klein and Simmons (1963) had three components: a lexicon, a morphological analyzer, and a context disambiguator. The small 1500-word lexicon listed only function words and other irregular words. The morphological analyzer used inflectional and derivational suffixes to assign part-of-speech classes. These were run over words to produce candidate parts of speech which were then disambiguated by a set of 500 context rules by relying on surrounding islands of unambiguous words. For example, one rule said that between an ARTICLE and a VERB, the only allowable sequences were ADJ-NOUN, NOUN-ADVERB, or NOUN-NOUN. The TAGGIT tagger (Greene and Rubin, 1971) used the same architecture as Klein and Simmons (1963) , with a bigger dictionary and more tags (87). TAGGIT was applied to the Brown corpus and, according to Francis and Kučera (1982, p. 9) , accurately tagged 77% of the corpus; the remainder of the Brown corpus was then tagged by hand. All these early algorithms were based on a two-stage architecture in which a dictionary was first used to assign each word a set of potential parts of speech, and then lists of handwritten disambiguation rules winnowed the set down to a single part of speech per word.
Probabilities were used in tagging by Stolz et al. (1965) and a complete probabilistic tagger with Viterbi decoding was sketched by Bahl and Mercer (1976). The Lancaster-Oslo/Bergen (LOB) corpus, a British English equivalent of the Brown corpus, was tagged in the early 1980's with the CLAWS tagger (Marshall 1983; Marshall 1987; Garside 1987), a probabilistic algorithm that approximated a simplified HMM tagger. The algorithm used tag bigram probabilities, but instead of storing the word likelihood of each tag, the algorithm marked tags either as rare (P(tag|word) < .01), infrequent (P(tag|word) < .10), or normally frequent (P(tag|word) > .10).
DeRose (1988) developed a quasi-HMM algorithm, including the use of dynamic programming, although computing P(t|w)P(w) instead of P(w|t)P(w). The same year, the probabilistic PARTS tagger of Church (1988, 1989) was probably the first implemented HMM tagger, described correctly in Church (1989), although Church (1988) also described the computation incorrectly as P(t|w)P(w) instead of P(w|t)P(w). Church (p.c.) explained that he had simplified for pedagogical purposes because using the probability P(t|w) made the idea seem more understandable as "storing a lexicon in an almost standard form".
Later taggers explicitly introduced the use of the hidden Markov model (Kupiec 1992; Weischedel et al. 1993; Schütze and Singer 1994) . Merialdo (1994) showed that fully unsupervised EM didn't work well for the tagging task and that reliance on hand-labeled data was important. Charniak et al. (1993) showed the importance of the most frequent tag baseline; the 92.3% number we give above was from Abney et al. (1999) . See Brants (2000) for HMM tagger implementation details, including the extension to trigram contexts, and the use of sophisticated unknown word features; its performance is still close to state of the art taggers.
Log-linear models for POS tagging were introduced by Ratnaparkhi (1996), who introduced a system called MXPOST which implemented a maximum entropy Markov model (MEMM), a slightly simpler version of a CRF. Around the same time, sequence labelers were applied to the task of named entity tagging, first with HMMs (Bikel et al., 1997) and MEMMs (McCallum et al., 2000), and then once CRFs were developed (Lafferty et al. 2001), they were also applied to NER (McCallum and Li, 2003). A wide exploration of features followed (Zhou et al., 2005). Neural approaches to NER mainly follow from the pioneering results of Collobert et al. (2011), who applied a CRF on top of a convolutional net. BiLSTMs with word and character-based embeddings as input followed shortly and became a standard neural algorithm for NER (Huang et al. 2015, Ma and Hovy 2016, Lample et al. 2016), followed by the more recent use of Transformers and BERT.
The idea of using letter suffixes for unknown words is quite old; the early Klein and Simmons (1963) system checked all final letter suffixes of lengths 1-5. The unknown word features described on page 169 come mainly from Ratnaparkhi (1996) , with augmentations from Toutanova et al. (2003) and Manning (2011).
State of the art POS taggers use neural algorithms, either bidirectional RNNs or Transformers like BERT; see Chapter 9 and Chapter 11. HMM (Brants 2000; Thede and Harper 1999) and CRF tagger accuracies are likely just a tad lower.
Manning (2011) investigates the remaining 2.7% of errors in a high-performing tagger (Toutanova et al., 2003) . He suggests that a third or half of these remaining errors are due to errors or inconsistencies in the training data, a third might be solvable with richer linguistic models, and for the remainder the task is underspecified or unclear.
Supervised tagging relies heavily on in-domain training data hand-labeled by experts. Ways to relax this assumption include unsupervised algorithms for clustering words into part-of-speech-like classes, summarized in Christodoulopoulos et al. (2010), and ways to combine labeled and unlabeled data, for example by co-training (Clark et al. 2003; Søgaard 2010) .
See Householder (1995) for historical notes on parts of speech, and Sampson (1987) and Garside et al. (1997)
9 Deep Learning Architectures for Sequence Processing

Language is an inherently temporal phenomenon. Spoken language is a sequence of acoustic events over time, and we comprehend and produce both spoken and written language as a continuous input stream. The temporal nature of language is reflected in the metaphors we use; we talk of the flow of conversations, news feeds, and twitter streams, all of which emphasize that language is a sequence that unfolds in time. This temporal nature is reflected in some of the algorithms we use to process language. For example, the Viterbi algorithm applied to HMM part-of-speech tagging proceeds through the input a word at a time, carrying forward information gleaned along the way. Yet other machine learning approaches, like those we've studied for sentiment analysis or other text classification tasks, don't have this temporal nature: they assume simultaneous access to all aspects of their input.
The feedforward networks of Chapter 7 also assumed simultaneous access, although they also had a simple model for time. Recall that we applied feedforward networks to language modeling by having them look only at a fixed-size window of words, and then sliding this window over the input, making independent predictions along the way. Fig. 9.1, reproduced from Chapter 7, shows a neural language model with window size 3 predicting what word follows the input for all the. Subsequent words are predicted by sliding the window forward a word at a time.
The simple feedforward sliding-window is promising, but isn't a completely satisfactory solution to temporality. By using embeddings as inputs, it does solve the main problem of the simple n-gram models of Chapter 3 (recall that n-grams were based on words rather than embeddings, making them too literal, unable to generalize across contexts of similar words). But feedforward networks still share another weakness of n-gram approaches: limited context. Anything outside the context window has no impact on the decision being made. Yet many language tasks require access to information that can be arbitrarily distant from the current word. Second, the use of windows makes it difficult for networks to learn systematic patterns arising from phenomena like constituency and compositionality: the way the meaning of words in phrases combine together. For example, in Fig. 9 .1 the phrase all the appears in one window in the second and third positions, and in the next window in the first and second positions, forcing the network to learn two separate patterns for what should be the same item.
This chapter introduces two important deep learning architectures designed to address these challenges: recurrent neural networks and transformer networks. Both approaches have mechanisms to deal directly with the sequential nature of language that allow them to capture and exploit the temporal nature of language. The recurrent network offers a new way to represent the prior context, allowing the model's decision to depend on information from hundreds of words in the past. The transformer offers new mechanisms (self-attention and positional encodings) that help represent time and help focus on how words relate to each other over long distances. We'll see how to apply both models to the task of language modeling, to sequence modeling tasks like part-of-speech tagging, and to text classification tasks like sentiment analysis.

Figure 9.1 Simplified sketch of a feedforward neural language model moving through a text. At each time step t the network converts N context words, each to a d-dimensional embedding, and concatenates the N embeddings together to get the Nd × 1 unit input vector x for the network. The output of the network is a probability distribution over the vocabulary representing the model's belief with respect to each word being the next possible word.
9.1 Language Models Revisited

In this chapter, we'll begin exploring the RNN and transformer architectures through the lens of probabilistic language models, so let's briefly remind ourselves of the framework for language modeling. Recall from Chapter 3 that probabilistic language models predict the next word in a sequence given some preceding context. For example, if the preceding context is "Thanks for all the" and we want to know how likely the next word is "fish", we would compute P(fish | Thanks for all the).
Language models give us the ability to assign such a conditional probability to every possible next word, giving us a distribution over the entire vocabulary. We can also assign probabilities to entire sequences by using these conditional probabilities in combination with the chain rule:
$$P(w_{1:n}) = \prod_{i=1}^{n} P(w_i \mid w_{<i})$$
Recall that we evaluate language models by examining how well they predict unseen text. Intuitively, good models are those that assign higher probabilities to unseen data (are less surprised when encountering the new words).
We instantiate this intuition by using perplexity to measure the quality of a language model. Recall from page 36 that the perplexity (PP) of a model θ on an unseen test set is the inverse probability that θ assigns to the test set, normalized by the test set length. For a test set w_{1:n}, the perplexity is
$$\text{PP}_\theta(w_{1:n}) = P_\theta(w_{1:n})^{-\frac{1}{n}} = \sqrt[n]{\frac{1}{P_\theta(w_{1:n})}} \qquad (9.1)$$
To visualize how perplexity can be computed as a function of the probabilities our LM will compute for each new word, we can use the chain rule to expand the computation of probability of the test set:
$$\text{PP}_\theta(w_{1:n}) = \sqrt[n]{\prod_{i=1}^{n} \frac{1}{P_\theta(w_i \mid w_{1:i-1})}} \qquad (9.2)$$
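To make Eq. 9.2 concrete, here is a small sketch of the computation given the conditional probabilities some model assigned to each test-set word; the probability values are invented for the example.

```python
import math

def perplexity(word_probs):
    """Perplexity from per-word conditional probabilities: the exponential of the
    average negative log probability, equivalent to the n-th root in Eq. 9.2."""
    n = len(word_probs)
    total_log_prob = sum(math.log(p) for p in word_probs)
    return math.exp(-total_log_prob / n)

probs = [0.1, 0.25, 0.02, 0.3]   # P(w_1), P(w_2|w_1), P(w_3|w_1:2), P(w_4|w_1:3)
print(perplexity(probs))          # ≈ 9.0
```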
9.2 Recurrent Neural Networks

A recurrent neural network (RNN) is any network that contains a cycle within its network connections, meaning that the value of some unit is directly, or indirectly, dependent on its own earlier outputs as an input. While powerful, such networks are difficult to reason about and to train. However, within the general class of recurrent networks there are constrained architectures that have proven to be extremely effective when applied to language. In this section, we consider a class of recurrent networks referred to as Elman Networks (Elman, 1990) or simple recurrent networks. These networks are useful in their own right and serve as the basis for more complex approaches like the Long Short-Term Memory (LSTM) networks discussed later in this chapter. In this chapter when we use the term RNN we'll be referring to these simpler more constrained networks (although you will often see the term RNN to mean any net with recurrent properties including LSTMs).
Figure 9.2 Simple recurrent neural network after Elman (1990). The hidden layer includes a recurrent connection as part of its input. That is, the activation value of the hidden layer depends on the current input as well as the activation value of the hidden layer from the previous time step.

Fig. 9.2 illustrates the structure of an RNN. As with ordinary feedforward networks, an input vector representing the current input, x_t, is multiplied by a weight matrix and then passed through a non-linear activation function to compute the values for a layer of hidden units. This hidden layer is then used to calculate a corresponding output, y_t. In a departure from our earlier window-based approach, sequences are processed by presenting one item at a time to the network. We'll use subscripts to represent time, thus x_t will mean the input vector x at time t. The key difference from a feedforward network lies in the recurrent link shown in the figure with the dashed line. This link augments the input to the computation at the hidden layer with the value of the hidden layer from the preceding point in time.
The hidden layer from the previous time step provides a form of memory, or context, that encodes earlier processing and informs the decisions to be made at later points in time. Critically, this approach does not impose a fixed-length limit on this prior context; the context embodied in the previous hidden layer can include information extending back to the beginning of the sequence.
Adding this temporal dimension makes RNNs appear to be more complex than non-recurrent architectures. But in reality, they're not all that different. Given an input vector and the values for the hidden layer from the previous time step, we're still performing the standard feedforward calculation introduced in Chapter 7. To see this, consider Fig. 9.3 which clarifies the nature of the recurrence and how it factors into the computation at the hidden layer. The most significant change lies in the new set of weights, U, that connect the hidden layer from the previous time step to the current hidden layer. These weights determine how the network makes use of past context in calculating the output for the current input. As with the other weights in the network, these connections are trained via backpropagation.

Figure 9.3 Simple recurrent neural network illustrated as a feedforward network.
9.2.1 Inference in RNNs

Forward inference (mapping a sequence of inputs to a sequence of outputs) in an RNN is nearly identical to what we've already seen with feedforward networks. To compute an output y_t for an input x_t, we need the activation value for the hidden layer h_t. To calculate this, we multiply the input x_t with the weight matrix W, and the hidden layer from the previous time step h_{t-1} with the weight matrix U. We add these values together and pass them through a suitable activation function, g, to arrive at the activation value for the current hidden layer, h_t. Once we have the values for the hidden layer, we proceed with the usual computation to generate the output vector.
$$h_t = g(Uh_{t-1} + Wx_t) \qquad (9.3)$$
$$y_t = f(Vh_t) \qquad (9.4)$$
It's worthwhile here to be careful about specifying the dimensions of the input, hidden and output layers, as well as the weight matrices to make sure these calculations are correct. Let's refer to the input, hidden and output layer dimensions as d_in, d_h, and d_out respectively. Given this, our three parameter matrices are:
$$W \in \mathbb{R}^{d_h \times d_{in}}, \quad U \in \mathbb{R}^{d_h \times d_h}, \quad V \in \mathbb{R}^{d_{out} \times d_h}$$
In the commonly encountered case of soft classification, computing y t consists of a softmax computation that provides a probability distribution over the possible output classes.
$$y_t = \text{softmax}(Vh_t) \qquad (9.5)$$
The fact that the computation at time t requires the value of the hidden layer from time t − 1 mandates an incremental inference algorithm that proceeds from the start of the sequence to the end as illustrated in Fig. 9.4. The sequential nature of simple recurrent networks can also be seen by unrolling the network in time as is shown in Fig. 9.5. In this figure, the various layers of units are copied for each time step to illustrate that they will have differing values over time. However, the various weight matrices are shared across time.
function FORWARDRNN(x, network) returns output sequence y
  h_0 ← 0
  for i ← 1 to LENGTH(x) do
    h_i ← g(U h_{i−1} + W x_i)
    y_i ← f(V h_i)
  return y
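The same procedure in runnable form might look like the NumPy sketch below; tanh and softmax are one reasonable choice for g and f, and the weight shapes follow the dimensions given above. This is an illustrative sketch, not code from the text.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # subtract max for numerical stability
    return e / e.sum()

def forward_rnn(X, W, U, V):
    """X is a list of input vectors x_t (length d_in each).
    Returns the list of output distributions y_t (Eqs. 9.3-9.5)."""
    h = np.zeros(U.shape[0])         # h_0 = 0
    outputs = []
    for x_t in X:
        h = np.tanh(U @ h + W @ x_t)     # Eq. 9.3
        outputs.append(softmax(V @ h))   # Eqs. 9.4/9.5
    return outputs

# toy dimensions: d_in = 4, d_h = 3, d_out = 5
rng = np.random.default_rng(0)
W, U, V = rng.normal(size=(3, 4)), rng.normal(size=(3, 3)), rng.normal(size=(5, 3))
ys = forward_rnn([rng.normal(size=4) for _ in range(6)], W, U, V)
```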
9.2.2 Training

As with feedforward networks, we'll use a training set, a loss function, and backpropagation to obtain the gradients needed to adjust the weights in these recurrent networks. As shown in Fig. 9.3, we now have 3 sets of weights to update: W, the weights from the input layer to the hidden layer, U, the weights from the previous hidden layer to the current hidden layer, and finally V, the weights from the hidden layer to the output layer. Fig. 9.5 highlights two considerations that we didn't have to worry about with backpropagation in feedforward networks. First, to compute the loss function for the output at time t we need the hidden layer from time t − 1. Second, the hidden layer at time t influences both the output at time t and the hidden layer at time t + 1 (and hence the output and loss at t + 1). It follows from this that to assess the error accruing to h_t, we'll need to know its influence on both the current output as well as the ones that follow.
Tailoring the backpropagation algorithm to this situation leads to a two-pass algorithm for training the weights in RNNs. In the first pass, we perform forward inference, computing h t , y t , accumulating the loss at each step in time, saving the value of the hidden layer at each step for use at the next time step. In the second phase, we process the sequence in reverse, computing the required gradients as we go, computing and saving the error term for use in the hidden layer for each step backward in time. This general approach is commonly referred to as Backpropagation Through Time (Werbos 1974 , Rumelhart et al. 1986 , Werbos 1990 ).
Fortunately, with modern computational frameworks and adequate computing resources, there is no need for a specialized approach to training RNNs. As illustrated in Fig. 9.5, explicitly unrolling a recurrent network into a feedforward computational graph eliminates any explicit recurrences, allowing the network weights to be trained directly. In such an approach, we provide a template that specifies the basic structure of the network, including all the necessary parameters for the input, output, and hidden layers, the weight matrices, as well as the activation and output functions to be used. Then, when presented with a specific input sequence, we can generate an unrolled feedforward network specific to that input, and use that graph to perform forward inference or training via ordinary backpropagation.
For applications that involve much longer input sequences, such as speech recognition, character-level processing, or streaming of continuous inputs, unrolling an entire input sequence may not be feasible. In these cases, we can unroll the input into manageable fixed-length segments and treat each segment as a distinct training item.
9.3 RNNs as Language Models
RNN language models (Mikolov et al., 2010) process the input sequence one word at a time, attempting to predict the next word from the current word and the previous hidden state. RNNs don't have the limited context problem that n-gram models have, since the hidden state can in principle represent information about all of the preceding words all the way back to the beginning of the sequence.
Forward inference in a recurrent language model proceeds exactly as described in Section 9.2.1.
The input sequence X = [x_1; ...; x_t; ...; x_N] consists of a series of words, each represented as a one-hot vector of size |V| × 1, and the output prediction, y, is a vector representing a probability distribution over the vocabulary. At each step, the model uses the word embedding matrix E to retrieve the embedding for the current word, and then combines it with the hidden layer from the previous step to compute a new hidden layer. This hidden layer is then used to generate an output layer which is passed through a softmax layer to generate a probability distribution over the entire vocabulary. That is, at time t:
$$e_t = Ex_t \qquad (9.6)$$
$$h_t = g(Uh_{t-1} + We_t) \qquad (9.7)$$
$$y_t = \text{softmax}(Vh_t) \qquad (9.8)$$
The vector resulting from Vh can be thought of as a set of scores over the vocabulary given the evidence provided in h. Passing these scores through the softmax normalizes the scores into a probability distribution. The probability that a particular word i in the vocabulary is the next word is represented by y_t[i], the i-th component of y_t:
$$P(w_{t+1} = i \mid w_1, \ldots, w_t) = y_t[i] \qquad (9.9)$$
The probability of an entire sequence is just the product of the probabilities of each item in the sequence, where we'll use y_i[w_i] to mean the probability of the true word w_i at time step i.
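Putting Eqs. 9.6-9.9 together, computing the (log) probability of a sequence under an RNN LM is a simple loop. The sketch below reuses the tanh/softmax choices from the earlier sketch; the assumption that the sequence begins with a beginning-of-sentence token (so that every real word gets scored) is our own convention for the example.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sequence_log_prob(word_ids, E, W, U, V):
    """Sum of log P(w_{t+1} | w_1..w_t) under the RNN LM of Eqs. 9.6-9.8.
    word_ids are vocabulary indices, ideally starting with a <s> token."""
    h = np.zeros(U.shape[0])
    log_p = 0.0
    for cur, nxt in zip(word_ids[:-1], word_ids[1:]):
        e_t = E[:, cur]                  # Eq. 9.6: embedding of the current word
        h = np.tanh(U @ h + W @ e_t)     # Eq. 9.7
        y_t = softmax(V @ h)             # Eq. 9.8
        log_p += np.log(y_t[nxt])        # log y_t[w_{t+1}], Eq. 9.9
    return log_p
```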