2.4.2 Word Tokenization
The simple UNIX tools above were fine for getting rough word statistics but more sophisticated algorithms are generally necessary for tokenization, the task of segmenting running text into words.
While the Unix command sequence just removed all the numbers and punctuation, for most NLP applications we’ll need to keep these in our tokenization. We often want to break off punctuation as a separate token; commas are a useful piece of information for parsers, periods help indicate sentence boundaries. But we’ll often want to keep the punctuation that occurs word internally, in examples like m.p.h., Ph.D., AT&T, and cap’n. Special characters and numbers will need to be kept in prices ($45.55) and dates (01/02/06); we don’t want to segment that price into separate tokens of “45” and “55”. And there are URLs (http://www.stanford.edu), Twitter hashtags (#nlproc), or email addresses (someone@cs.colorado.edu).
Number expressions introduce other complications as well; while commas normally appear at word boundaries, commas are used inside numbers in English, every three digits: 555,500.50. Languages, and hence tokenization requirements, differ on this; many continental European languages like Spanish, French, and German, by contrast, use a comma to mark the decimal point, and spaces (or sometimes periods) where English puts commas, for example, 555 500,50.
A tokenizer can also be used to expand clitic contractions that are marked by apostrophes, for example, converting what’re to the two tokens what are, and we’re to we are. A clitic is a part of a word that can’t stand on its own, and can only occur when it is attached to another word. Some such contractions occur in other alphabetic languages, including articles and pronouns in French (j’ai, l’homme).
Depending on the application, tokenization algorithms may also tokenize multiword expressions like New York or rock ’n’ roll as a single token, which requires a multiword expression dictionary of some sort. Tokenization is thus intimately tied up with named entity recognition, the task of detecting names, dates, and organizations (Chapter 8).
One commonly used tokenization standard is known as the Penn Treebank tokenization standard, used for the parsed corpora (treebanks) released by the Linguistic Data Consortium (LDC), the source of many useful datasets. This standard separates out clitics (doesn't becomes does plus n't), keeps hyphenated words together, and separates out all punctuation (to save space we're showing visible spaces ' ' between tokens, although newlines are a more common output):
Input:  "The San Francisco-based restaurant," they said, "doesn't charge $10".
Output: " The San Francisco-based restaurant , " they said , " does n't charge $ 10 " .
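NLTK provides a tokenizer in this general style; the sketch below uses nltk.tokenize.TreebankWordTokenizer, though its output conventions differ in small ways from the example above, for instance in how quotation marks are rendered.

    # A minimal sketch using NLTK's Treebank-style tokenizer (assumes NLTK is
    # installed; exact quote handling may differ from the example shown above).
    from nltk.tokenize import TreebankWordTokenizer

    tokenizer = TreebankWordTokenizer()
    text = '"The San Francisco-based restaurant," they said, "doesn\'t charge $10".'
    print(tokenizer.tokenize(text))
    # Clitics like "doesn't" come out as "does" + "n't", punctuation is split off,
    # and hyphenated words such as "San Francisco-based" are kept together.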
In practice, since tokenization needs to be run before any other language processing, it needs to be very fast. The standard method for tokenization is therefore to use deterministic algorithms based on regular expressions compiled into very efficient finite state automata. For example, Fig. 2.12 shows an example of a basic regular expression that can be used to tokenize with the nltk.regexp_tokenize function of the Python-based Natural Language Toolkit (NLTK) (Bird et al. 2009; http://www.nltk.org).
Figure 2.12 A Python trace of regular expression tokenization in the NLTK Python-based natural language processing toolkit (Bird et al., 2009), commented for readability; the (?x) verbose flag tells Python to strip comments and whitespace. In the trace, the input text 'That U.S.A. poster-print costs $12.40...' is tokenized into ['That', 'U.S.A.', 'costs', '$12.40', '...'].
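A comparable verbose pattern (a sketch in the spirit of the figure, not its exact pattern) can be passed to nltk.regexp_tokenize; each alternative handles one of the token types discussed above (abbreviations, prices, hyphenated words, ellipses, punctuation).

    import nltk

    text = 'That U.S.A. poster-print costs $12.40...'
    # Sketch of a verbose tokenization pattern; not the exact pattern of Fig. 2.12.
    pattern = r'''(?x)            # set flag to allow verbose regexps
          (?:[A-Z]\.)+            # abbreviations, e.g. U.S.A.
        | \$?\d+(?:\.\d+)?%?      # currency and percentages, e.g. $12.40, 82%
        | \w+(?:-\w+)*            # words with optional internal hyphens
        | \.\.\.                  # ellipsis
        | [][.,;"'?():_-]         # punctuation kept as separate tokens
    '''
    print(nltk.regexp_tokenize(text, pattern))
    # expected: ['That', 'U.S.A.', 'poster-print', 'costs', '$12.40', '...']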
Carefully designed deterministic algorithms can deal with the ambiguities that arise, such as the fact that the apostrophe needs to be tokenized differently when used as a genitive marker (as in the book's cover), a quotative as in 'The other class', she said, or in clitics like they're.
Word tokenization is more complex in languages like written Chinese, Japanese, and Thai, which do not use spaces to mark potential word-boundaries. In Chinese, for example, words are composed of characters (called hanzi in Chinese). Each hanzi character generally represents a single unit of meaning (called a morpheme) and is pronounceable as a single syllable. Words are about 2.4 characters long on average. But deciding what counts as a word in Chinese is complex. For example, consider the following sentence:
(2.4) 姚明进入总决赛 "Yao Ming reaches the finals"
As Chen et al. (2017b) point out, this could be treated as 3 words ('Chinese Treebank' segmentation):
(2.5) 姚明 YaoMing 进入 reaches 总决赛 finals

or as 5 words ('Peking University' segmentation):
(2.6) 姚 Yao 明 Ming 进入 reaches 总 overall 决赛 finals

Finally, it is possible in Chinese simply to ignore words altogether and use characters as the basic elements, treating the sentence as a series of 7 characters:
(2.7) 姚 Yao 明 Ming 进 enter 入 enter 总 overall 决 decision 赛 game
In fact, for most Chinese NLP tasks it turns out to work better to take characters rather than words as input, since characters are at a reasonable semantic level for most applications, and since most word standards, by contrast, result in a huge vocabulary with large numbers of very rare words (Li et al., 2019b) .
However, for Japanese and Thai the character is too small a unit, and so algorithms for word segmentation are required. These can also be useful for Chinese word segmentation in the rare situations where word rather than character boundaries are required. The standard segmentation algorithms for these languages use neural sequence models trained via supervised machine learning on hand-segmented training sets; we'll introduce sequence models in Chapter 8 and Chapter 9.
2.4.3 Byte-Pair Encoding for Tokenization
There is a third option to tokenizing text. Instead of defining tokens as words (whether delimited by spaces or more complex algorithms), or as characters (as in Chinese), we can use our data to automatically tell us what the tokens should be. This is especially useful in dealing with unknown words, an important problem in language processing. As we will see in the next chapter, NLP algorithms often learn some facts about language from one corpus (a training corpus) and then use these facts to make decisions about a separate test corpus and its language. Thus if our training corpus contains, say, the words low, new, and newer, but not lower, then if the word lower appears in our test corpus, our system will not know what to do with it.
To deal with this unknown word problem, modern tokenizers often automatically induce sets of tokens that include tokens smaller than words, called subwords. Subwords can be arbitrary substrings, or they can be meaning-bearing units like the morphemes -est or -er. (A morpheme is the smallest meaning-bearing unit of a language; for example the word unlikeliest has the morphemes un-, likely, and -est.) In modern tokenization schemes, most tokens are words, but some tokens are frequently occurring morphemes or other subwords like -er. Every unseen word like lower can thus be represented by some sequence of known subword units, such as low and er, or even as a sequence of individual letters if necessary.
Most tokenization schemes have two parts: a token learner, and a token segmenter. The token learner takes a raw training corpus (sometimes roughly preseparated into words, for example by whitespace) and induces a vocabulary, a set of tokens. The token segmenter takes a raw test sentence and segments it into the tokens in the vocabulary. Three algorithms are widely used: byte-pair encoding (Sennrich et al., 2016) , unigram language modeling (Kudo, 2018) , and WordPiece (Schuster and Nakajima, 2012) ; there is also a SentencePiece library that includes implementations of the first two of the three (Kudo and Richardson, 2018) .
In this section we introduce the simplest of the three, the byte-pair encoding or BPE algorithm (Sennrich et al., 2016); see Fig. 2.13. The BPE token learner begins with a vocabulary that is just the set of all individual characters. It then examines the training corpus, chooses the two symbols that are most frequently adjacent (say 'A', 'B'), adds a new merged symbol 'AB' to the vocabulary, and replaces every adjacent 'A' 'B' in the corpus with the new 'AB'. It continues to count and merge, creating new longer and longer character strings, until k merges have been done creating k novel tokens; k is thus a parameter of the algorithm. The resulting vocabulary consists of the original set of characters plus k new symbols.
The algorithm is usually run inside words (not merging across word boundaries), so the input corpus is first white-space-separated to give a set of strings, each corresponding to the characters of a word, plus a special end-of-word symbol _, and its counts. Let's see its operation on the following tiny input corpus of 18 word tokens with counts for each word (the word low appears 5 times, the word newer 6 times, and so on), which would have a starting vocabulary of 11 letters:

corpus
5   l o w _
2   l o w e s t _
6   n e w e r _
3   w i d e r _
2   n e w _

vocabulary
_, d, e, i, l, n, o, r, s, t, w
The BPE algorithm first counts all pairs of adjacent symbols: the most frequent is the pair e r because it occurs in newer (frequency of 6) and wider (frequency of 3) for a total of 9 occurrences. We then merge these symbols, treating er as one symbol, and count again:

corpus
5   l o w _
2   l o w e s t _
6   n e w er _
3   w i d er _
2   n e w _

vocabulary
_, d, e, i, l, n, o, r, s, t, w, er
Now the most frequent pair is er _, which we merge; our system has learned that there should be a token for word-final er, represented as er_:

corpus
5   l o w _
2   l o w e s t _
6   n e w er_
3   w i d er_
2   n e w _

vocabulary
_, d, e, i, l, n, o, r, s, t, w, er, er_
Next, n e (with a total count of 8) gets merged to ne:

corpus
5   l o w _
2   l o w e s t _
6   ne w er_
3   w i d er_
2   ne w _

vocabulary
_, d, e, i, l, n, o, r, s, t, w, er, er_, ne
If we continue, the next merges are:

(ne, w), (l, o), (lo, w), (new, er_), (low, _)
Once we've learned our vocabulary, the token parser is used to tokenize a test sentence. The token parser just runs on the test data the merges we have learned from the training data, greedily, in the order we learned them. (Thus the frequencies in the test data don't play a role, just the frequencies in the training data.) So first we segment each test sentence word into characters. Then we apply the first rule: replace every instance of e r in the test corpus with er, and then the second rule: replace every instance of er _ in the test corpus with er_, and so on. By the end, if the test corpus contained the word n e w e r _, it would be tokenized as a full word. But a new (unknown) word like l o w e r _ would be merged into the two tokens low er_.
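As a concrete illustration, here is a minimal Python sketch of such a segmenter, assuming the merges learned in the walkthrough above are supplied as an ordered list (the function name and the use of '_' as the end-of-word symbol are just choices made for this sketch):

    # Minimal sketch of a BPE token segmenter: apply learned merges to a word,
    # greedily, in the order they were learned. Assumes '_' marks end of word.
    def bpe_segment(word, merges):
        symbols = list(word) + ['_']          # start from individual characters
        for left, right in merges:            # apply each merge rule in order
            i = 0
            while i < len(symbols) - 1:
                if symbols[i] == left and symbols[i + 1] == right:
                    symbols[i:i + 2] = [left + right]   # merge the adjacent pair
                else:
                    i += 1
        return symbols

    # The merges learned from the tiny corpus above, in the order learned.
    merges = [('e', 'r'), ('er', '_'), ('n', 'e'), ('ne', 'w'),
              ('l', 'o'), ('lo', 'w'), ('new', 'er_'), ('low', '_')]
    print(bpe_segment('newer', merges))   # ['newer_']  (a known full word)
    print(bpe_segment('lower', merges))   # ['low', 'er_']  (unknown word -> subwords)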
Figure 2.13 The token learner part of the BPE algorithm for taking a corpus broken up into individual characters or bytes, and learning a vocabulary by iteratively merging tokens.
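A small Python sketch of this token learner, under the assumptions used in the walkthrough above (a whitespace-preseparated corpus of word counts, '_' as the end-of-word symbol, k merges), might look like the following; it is an illustration rather than a reference implementation.

    from collections import Counter

    # Sketch of the BPE token learner: corpus maps each word to its count,
    # '_' is used as the end-of-word symbol, and k is the number of merges.
    # Ties between equally frequent pairs are broken here by first occurrence.
    def bpe_learn(corpus, k):
        words = {tuple(w) + ('_',): c for w, c in corpus.items()}
        vocab = {sym for word in words for sym in word}
        merges = []
        for _ in range(k):
            pairs = Counter()
            for word, count in words.items():
                for pair in zip(word, word[1:]):          # adjacent symbol pairs
                    pairs[pair] += count
            if not pairs:
                break
            best = max(pairs, key=pairs.get)              # most frequent adjacent pair
            merges.append(best)
            vocab.add(best[0] + best[1])
            new_words = {}                                # replace the pair everywhere
            for word, count in words.items():
                merged, i = [], 0
                while i < len(word):
                    if i < len(word) - 1 and (word[i], word[i + 1]) == best:
                        merged.append(word[i] + word[i + 1])
                        i += 2
                    else:
                        merged.append(word[i])
                        i += 1
                new_words[tuple(merged)] = count
            words = new_words
        return vocab, merges

    corpus = {'low': 5, 'lowest': 2, 'newer': 6, 'wider': 3, 'new': 2}
    vocab, merges = bpe_learn(corpus, k=8)
    print(merges)
    # [('e', 'r'), ('er', '_'), ('n', 'e'), ('ne', 'w'), ('l', 'o'), ('lo', 'w'),
    #  ('new', 'er_'), ('low', '_')]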
Of course in real algorithms BPE is run with many thousands of merges on a very large input corpus. The result is that most words will be represented as full symbols, and only the very rare words (and unknown words) will have to be represented by their parts.
2.4.4 Word Normalization, Lemmatization and Stemming
Word normalization is the task of putting words/tokens in a standard format, choosing a single normal form for words with multiple forms like USA and US or uh-huh and uhhuh. This standardization may be valuable, despite the spelling information that is lost in the normalization process. For information retrieval or information extraction about the US, we might want to see information from documents whether they mention the US or the USA.
Case folding is another kind of normalization. Mapping everything to lowercase means that Woodchuck and woodchuck are represented identically, which is very helpful for generalization in many tasks, such as information retrieval or speech recognition. For sentiment analysis and other text classification tasks, information extraction, and machine translation, by contrast, case can be quite helpful and case folding is generally not done. This is because maintaining the difference between, for example, US the country and us the pronoun can outweigh the advantage in generalization that case folding would have provided for other words. For many natural language processing situations we also want two morphologically different forms of a word to behave similarly. For example in web search, someone may type the string woodchucks but a useful system might want to also return pages that mention woodchuck with no s. This is especially common in morphologically complex languages like Russian, where for example the word Moscow has different endings in the phrases Moscow, of Moscow, to Moscow, and so on.
Lemmatization is the task of determining that two words have the same root, despite their surface differences. The words am, are, and is have the shared lemma be; the words dinner and dinners both have the lemma dinner. Lemmatizing each of these forms to the same lemma will let us find all mentions of words in Russian like Moscow. The lemmatized form of a sentence like He is reading detective stories would thus be He be read detective story.
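One widely available lemmatizer is the WordNet-based one in NLTK; the sketch below assumes NLTK and its WordNet data are installed, and passes a part-of-speech tag because WordNet handles verbs and nouns differently.

    # A minimal sketch using NLTK's WordNet lemmatizer (assumes the 'wordnet'
    # data package has been downloaded, e.g. via nltk.download('wordnet')).
    from nltk.stem import WordNetLemmatizer

    lemmatizer = WordNetLemmatizer()
    print(lemmatizer.lemmatize('dinners'))            # dinner
    print(lemmatizer.lemmatize('stories'))            # story
    print(lemmatizer.lemmatize('is', pos='v'))        # be
    print(lemmatizer.lemmatize('reading', pos='v'))   # read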
How is lemmatization done? The most sophisticated methods for lemmatization involve complete morphological parsing of the word. Morphology is the study of the way words are built up from smaller meaning-bearing units called morphemes. Two broad classes of morphemes can be distinguished: stems (the central morpheme of the word, supplying the main meaning) and affixes (adding "additional" meanings of various kinds). So, for example, the word fox consists of one morpheme (the morpheme fox) and the word cats consists of two: the morpheme cat and the morpheme -s. A morphological parser takes a word like cats and parses it into the two morphemes cat and s, or parses a Spanish word like amaren ('if in the future they would love') into the morpheme amar 'to love' and the morphological features 3PL and future subjunctive.
The Porter Stemmer
Lemmatization algorithms can be complex. For this reason we sometimes make use of a simpler but cruder method, which mainly consists of chopping off word-final affixes. This naive version of morphological analysis is called stemming. One of the most widely used stemming algorithms is the Porter stemmer (Porter, 1980). The Porter stemmer applied to the following paragraph:
This was not the map we found in Billy Bones's chest, but an accurate copy, complete in all things-names and heights and soundings-with the single exception of the red crosses and the written notes.
produces the following stemmed output:
Thi wa not the map we found in Billi Bone s chest but an accur copi complet in all thing name and height and sound with the singl except of the red cross and the written note
The algorithm is based on a series of rewrite rules run in series, as a cascade, in which the output of each pass is fed as input to the next pass; here is a sampling of the rules:

ATIONAL → ATE   (e.g., relational → relate)
ING → ε   if stem contains vowel (e.g., motoring → motor)
SSES → SS   (e.g., grasses → grass)
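To make the cascade idea concrete, here is a toy sketch that applies just these three sample rules with regular-expression substitutions; the real Porter stemmer has many more rules and conditions (and is available, for example, as nltk.stem.porter.PorterStemmer), so this is only an illustration.

    import re

    # Toy cascade: each rule's output feeds the next rule. Only the three sample
    # rules above are implemented; the full Porter stemmer has many more.
    def toy_stem(word):
        word = re.sub(r'ational$', 'ate', word)       # ATIONAL -> ATE
        if re.search(r'[aeiou].*ing$', word):         # ING -> (empty), if stem has a vowel
            word = re.sub(r'ing$', '', word)
        word = re.sub(r'sses$', 'ss', word)           # SSES -> SS
        return word

    for w in ['relational', 'motoring', 'grasses']:
        print(w, '->', toy_stem(w))   # relational -> relate, motoring -> motor, grasses -> grass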
Detailed rule lists for the Porter stemmer, as well as code (in Java, Python, etc.) can be found on Martin Porter's homepage; see also the original paper (Porter, 1980). Simple stemmers can be useful in cases where we need to collapse across different variants of the same lemma. Nonetheless, they do tend to commit errors of both over- and under-generalizing (Krovetz, 1993).
2.4.5 Sentence Segmentation
Sentence segmentation is another important step in text processing. The most useful cues for segmenting a text into sentences are punctuation, like periods, question marks, and exclamation points. Question marks and exclamation points are relatively unambiguous markers of sentence boundaries. Periods, on the other hand, are more ambiguous. The period character "." is ambiguous between a sentence boundary marker and a marker of abbreviations like Mr. or Inc. The previous sentence that you just read showed an even more complex case of this ambiguity, in which the final period of Inc. marked both an abbreviation and the sentence boundary marker. For this reason, sentence tokenization and word tokenization may be addressed jointly.
In general, sentence tokenization methods work by first deciding (based on rules or machine learning) whether a period is part of the word or is a sentence-boundary marker. An abbreviation dictionary can help determine whether the period is part of a commonly used abbreviation; the dictionaries can be hand-built or machine-learned (Kiss and Strunk, 2006), as can the final sentence splitter. In the Stanford CoreNLP toolkit (Manning et al., 2014), for example, sentence splitting is rule-based, a deterministic consequence of tokenization; a sentence ends when sentence-ending punctuation (., !, or ?) is not already grouped with other characters into a token (such as for an abbreviation or number), optionally followed by additional final quotes or brackets.
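A toy version of this rule-plus-abbreviation-dictionary idea is sketched below; the abbreviation set is hypothetical and tiny, and a real system (for example the Punkt splitter behind nltk.sent_tokenize) learns such lists and further rules from data.

    import re

    # Toy rule-based sentence splitter with a tiny hand-built abbreviation dictionary.
    ABBREVIATIONS = {'mr.', 'mrs.', 'dr.', 'inc.', 'u.s.'}   # hypothetical, far from complete

    def split_sentences(text):
        tokens = text.split()
        sentences, current = [], []
        for tok in tokens:
            current.append(tok)
            # End a sentence on ., !, or ? (optionally followed by closing quotes or
            # brackets) unless the token is a known abbreviation.
            if re.search(r'[.!?]["\')\]]*$', tok) and tok.lower() not in ABBREVIATIONS:
                sentences.append(' '.join(current))
                current = []
        if current:
            sentences.append(' '.join(current))
        return sentences

    print(split_sentences('Mr. Smith went to Washington. He arrived at 3 p.m.! Really?'))
    # ['Mr. Smith went to Washington.', 'He arrived at 3 p.m.!', 'Really?']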
2.5 Minimum Edit Distance
Much of natural language processing is concerned with measuring how similar two strings are. For example in spelling correction, the user typed some erroneous string, let's say graffe, and we want to know what the user meant. The user probably intended a word that is similar to graffe. Among candidate similar words, the word giraffe, which differs by only one letter from graffe, seems intuitively to be more similar than, say, grail or graf, which differ in more letters. Another example comes from coreference, the task of deciding whether two strings such as the following refer to the same entity:
Stanford President Marc Tessier-Lavigne
Stanford University President Marc Tessier-Lavigne
Again, the fact that these two strings are very similar (differing by only one word) seems like useful evidence for deciding that they might be coreferent. Edit distance gives us a way to quantify both of these intuitions about string similarity. More formally, the minimum edit distance between two strings is defined as the minimum number of editing operations (operations like insertion, deletion, substitution) needed to transform one string into another.
The gap between intention and execution, for example, is 5 (delete an i, substitute e for n, substitute x for t, insert c, substitute u for n). It's much easier to see this by looking at the most important visualization for string distances, an alignment between the two strings, shown in Fig. 2.14. Given two sequences, an alignment is a correspondence between substrings of the two sequences. Thus, we say I aligns with the empty string, N with E, and so on. Beneath the aligned strings is another representation; a series of symbols expressing an operation list for converting the top string into the bottom string: d for deletion, s for substitution, i for insertion.

Figure 2.14 Representing the minimum edit distance between two strings as an alignment.
We can also assign a particular cost or weight to each of these operations. The Levenshtein distance between two sequences is the simplest weighting factor in which each of the three operations has a cost of 1 (Levenshtein, 1966); we assume that the substitution of a letter for itself, for example, t for t, has zero cost. The Levenshtein distance between intention and execution is 5. Levenshtein also proposed an alternative version of his metric in which each insertion or deletion has a cost of 1 and substitutions are not allowed. (This is equivalent to allowing substitution, but giving each substitution a cost of 2 since any substitution can be represented by one insertion and one deletion). Using this version, the Levenshtein distance between intention and execution is 8.
2.5.1 The Minimum Edit Distance Algorithm
How do we find the minimum edit distance? We can think of this as a search task, in which we are searching for the shortest path-a sequence of edits-from one string to another.
The space of all possible edits is enormous, so we can't search naively. However, lots of distinct edit paths will end up in the same state (string), so rather than recomputing all those paths, we could just remember the shortest path to a state each time we saw it. We can do this by using dynamic programming. Dynamic programming is the name for a class of algorithms, first introduced by Bellman (1957), that apply a table-driven method to solve problems by combining solutions to sub-problems. Some of the most commonly used algorithms in natural language processing make use of dynamic programming, such as the Viterbi algorithm (Chapter 8) and the CKY algorithm for parsing (Chapter 13).
The intuition of a dynamic programming problem is that a large problem can be solved by properly combining the solutions to various sub-problems. Consider the shortest path of transformed words that represents the minimum edit distance between the strings intention and execution shown in Fig. 2.16.
Imagine some string (perhaps it is exention) that is in this optimal path (whatever it is). The intuition of dynamic programming is that if exention is in the optimal operation list, then the optimal sequence must also include the optimal path from intention to exention. Why? If there were a shorter path from intention to exention, then we could use it instead, resulting in a shorter overall path, and the optimal sequence wouldn't be optimal, thus leading to a contradiction.
The minimum edit distance algorithm was named by Wagner and Fischer (1974) but independently discovered by many people (see the Historical Notes section of Chapter 8).
Let's first define the minimum edit distance between two strings. Given two strings, the source string X of length n, and target string Y of length m, we'll define D[i, j] as the edit distance between X[1..i] and Y[1..j], i.e., the first i characters of X and the first j characters of Y. The edit distance between X and Y is thus D[n, m].
We'll use dynamic programming to compute D[n, m] bottom up, combining solutions to subproblems. In the base case, with a source substring of length i but an empty target string, going from i characters to 0 requires i deletes. With a target substring of length j but an empty source, going from 0 characters to j characters requires j inserts. Having computed D[i, j] for small i, j we then compute larger D[i, j] based on previously computed smaller values. The value of D[i, j] is computed by taking the minimum of the three possible paths through the matrix which arrive there:
$$
D[i, j] = \min \begin{cases}
D[i-1, j] + \text{del-cost}(\mathit{source}[i]) \\
D[i, j-1] + \text{ins-cost}(\mathit{target}[j]) \\
D[i-1, j-1] + \text{sub-cost}(\mathit{source}[i], \mathit{target}[j])
\end{cases}
$$
If we assume the version of Levenshtein distance in which the insertions and deletions each have a cost of 1 (ins-cost(·) = del-cost(·) = 1), and substitutions have a cost of 2 (except that substitution of identical letters has zero cost), the computation for D[i, j] becomes:
$$
D[i, j] = \min \begin{cases}
D[i-1, j] + 1 \\
D[i, j-1] + 1 \\
D[i-1, j-1] + \begin{cases} 2 & \text{if } \mathit{source}[i] \neq \mathit{target}[j] \\ 0 & \text{if } \mathit{source}[i] = \mathit{target}[j] \end{cases}
\end{cases}
\tag{2.8}
$$
The algorithm is summarized in Fig. 2.17; Fig. 2.18 shows the results of applying the algorithm to the distance between intention and execution with the version of Levenshtein in Eq. 2.8.
Alignment
Knowing the minimum edit distance is useful for algorithms like finding potential spelling error corrections. But the edit distance algorithm is important in another way; with a small change, it can also provide the minimum cost alignment between two strings. Aligning two strings is useful throughout speech and language processing. In speech recognition, minimum edit distance alignment is used to compute the word error rate (Chapter 26). Alignment plays a role in machine translation, in which sentences in a parallel corpus (a corpus with a text in two languages) need to be matched to each other.
Figure 2.17 The minimum edit distance algorithm, an example of the class of dynamic programming algorithms. The various costs can either be fixed (e.g., ∀x, ins-cost(x) = 1) or can be specific to the letter (to model the fact that some letters are more likely to be inserted than others). We assume that there is no cost for substituting a letter for itself (i.e., sub-cost(x, x) = 0).
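A Python sketch of this algorithm with the Levenshtein costs of Eq. 2.8 (insert and delete cost 1, substitution cost 2, zero for identical letters) might look like the following; the function and variable names are this sketch's own.

    # Minimal sketch of the minimum edit distance algorithm (dynamic programming),
    # using the cost version of Eq. 2.8: ins = del = 1, sub = 2 (0 if letters match).
    def min_edit_distance(source, target):
        n, m = len(source), len(target)
        D = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):                 # base case: deletions only
            D[i][0] = D[i - 1][0] + 1
        for j in range(1, m + 1):                 # base case: insertions only
            D[0][j] = D[0][j - 1] + 1
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                sub_cost = 0 if source[i - 1] == target[j - 1] else 2
                D[i][j] = min(D[i - 1][j] + 1,            # deletion
                              D[i][j - 1] + 1,            # insertion
                              D[i - 1][j - 1] + sub_cost) # substitution (or copy)
        return D[n][m]

    print(min_edit_distance('intention', 'execution'))   # 8, as in Fig. 2.18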
Figure 2.18 Computation of minimum edit distance between intention and execution with the algorithm of Fig. 2.17, using Levenshtein distance with cost of 1 for insertions or deletions, 2 for substitutions.
To extend the edit distance algorithm to produce an alignment, we can start by visualizing an alignment as a path through the edit distance matrix. Figure 2.19 shows this path with the boldfaced cells. Each boldfaced cell represents an alignment of a pair of letters in the two strings. If two boldfaced cells occur in the same row, there will be an insertion in going from the source to the target; two boldfaced cells in the same column indicate a deletion.
Figure 2.19 also shows the intuition of how to compute this alignment path. The computation proceeds in two steps. In the first step, we augment the minimum edit distance algorithm to store backpointers in each cell. The backpointer from a cell points to the previous cell (or cells) that we came from in entering the current cell. We've shown a schematic of these backpointers in Fig. 2.19. Some cells have multiple backpointers because the minimum extension could have come from multiple previous cells. In the second step, we perform a backtrace. In a backtrace, we start from the last cell (at the final row and column), and follow the pointers back through the dynamic programming matrix. Each complete path between the final cell and the initial cell is a minimum distance alignment. Exercise 2.7 asks you to modify the minimum edit distance algorithm to store the pointers and compute the backtrace to output an alignment.
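One possible way to add backpointers and a backtrace to the sketch above (not the only solution to Exercise 2.7) records a single predecessor direction per cell and then walks back from the final cell to recover an operation list.

    # Sketch: minimum edit distance with one backpointer per cell, returning the
    # distance and one operation list (not all optimal alignments). Costs as in Eq. 2.8.
    def min_edit_alignment(source, target):
        n, m = len(source), len(target)
        D = [[0] * (m + 1) for _ in range(n + 1)]
        ptr = [[None] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            D[i][0], ptr[i][0] = i, 'up'                  # deletions
        for j in range(1, m + 1):
            D[0][j], ptr[0][j] = j, 'left'                # insertions
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                sub = D[i - 1][j - 1] + (0 if source[i - 1] == target[j - 1] else 2)
                best = min((sub, 'diag'), (D[i - 1][j] + 1, 'up'), (D[i][j - 1] + 1, 'left'))
                D[i][j], ptr[i][j] = best
        ops, i, j = [], n, m                              # backtrace from the final cell
        while i > 0 or j > 0:
            direction = ptr[i][j]
            if direction == 'diag':
                ops.append('s' if source[i - 1] != target[j - 1] else ' ')  # blank = copy
                i, j = i - 1, j - 1
            elif direction == 'up':
                ops.append('d')                           # deletion
                i -= 1
            else:
                ops.append('i')                           # insertion
                j -= 1
        return D[n][m], list(reversed(ops))

    print(min_edit_alignment('intention', 'execution'))   # distance 8 plus one op list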
Figure 2.19 When entering a value in each cell, we mark which of the three neighboring cells we came from with up to three arrows. After the table is full we compute an alignment (minimum edit path) by using a backtrace, starting at the 8 in the lower-right corner and following the arrows back. The sequence of bold cells represents one possible minimum cost alignment between the two strings. Diagram design after Gusfield (1997).
While we worked our example with simple Levenshtein distance, the algorithm in Fig. 2.17 allows arbitrary weights on the operations. For spelling correction, for example, substitutions are more likely to happen between letters that are next to each other on the keyboard. The Viterbi algorithm is a probabilistic extension of minimum edit distance. Instead of computing the "minimum edit distance" between two strings, Viterbi computes the "maximum probability alignment" of one string with another. We'll discuss this more in Chapter 8.
2.6 Summary
This chapter introduced a fundamental tool in language processing, the regular expression, and showed how to perform basic text normalization tasks including word segmentation and normalization, sentence segmentation, and stemming. We also introduced the important minimum edit distance algorithm for comparing strings. Here's a summary of the main points we covered about these ideas:
• The regular expression language is a powerful tool for pattern-matching.
• Basic operations in regular expressions include concatenation of symbols, disjunction of symbols ([], |, and .), counters (*, +, and {n,m}), anchors (^, $) and precedence operators ( ( and ) ).
• Word tokenization and normalization are generally done by cascades of simple regular expression substitutions or finite automata.
• The Porter algorithm is a simple and efficient way to do stemming, stripping off affixes. It does not have high accuracy but may be useful for some tasks.
• The minimum edit distance between two strings is the minimum number of operations it takes to edit one into the other. Minimum edit distance can be computed by dynamic programming, which also results in an alignment of the two strings.
2.7 Bibliographical and Historical Notes
Kleene (1951, 1956) first defined regular expressions and the finite automaton, based on the McCulloch-Pitts neuron. Ken Thompson was one of the first to build regular expression compilers into editors for text searching (Thompson, 1968). His editor ed included a command "g/regular expression/p", or Global Regular Expression Print, which later became the Unix grep utility. Text normalization algorithms have been applied since the beginning of the field. One of the earliest widely used stemmers was Lovins (1968). Stemming was also applied early to the digital humanities, by Packard (1973), who built an affix-stripping morphological parser for Ancient Greek. Currently a wide variety of code for tokenization and normalization is available, such as the Stanford Tokenizer (http://nlp.stanford.edu/software/tokenizer.shtml) or specialized tokenizers for Twitter (O'Connor et al., 2010), or for sentiment (http://sentiment.christopherpotts.net/tokenizing.html). See Palmer (2012) for a survey of text preprocessing. NLTK is an essential tool that offers both useful Python libraries (http://www.nltk.org) and textbook descriptions (Bird et al., 2009) of many algorithms including text normalization and corpus interfaces.
For more on Herdan's law and Heaps' Law, see Herdan (1960, p. 28), Heaps (1978), Egghe (2007), and Baayen (2001); Yasseri et al. (2012) discuss the relationship with other measures of linguistic complexity. For more on edit distance, see the excellent Gusfield (1997). Our example measuring the edit distance from 'intention' to 'execution' was adapted from Kruskal (1983). There are various publicly available packages to compute edit distance, including Unix diff and the NIST sclite program (NIST, 2005).
In his autobiography Bellman (1984) explains how he originally came up with the term dynamic programming:
"...The 1950s were not good years for mathematical research. [the] Secretary of Defense ...had a pathological fear and hatred of the word, research... I decided therefore to use the word, "programming". I wanted to get across the idea that this was dynamic, this was multistage... I thought, let's ... take a word that has an absolutely precise meaning, namely dynamic... it's impossible to use the word, dynamic, in a pejorative sense. Try thinking of some combination that will possibly give it a pejorative meaning. It's impossible. Thus, I thought dynamic programming was a good name. It was something not even a Congressman could object to."
3 N-gram Language Models
"You are uniformly charming!" cried he, with a smile of associating and now and then I bowed and they perceived a chaise and four to wish for. Random sentence generated from a Jane Austen trigram model
Predicting is difficult, especially about the future, as the old quip goes. But how about predicting something that seems much easier, like the next few words someone is going to say? What word, for example, is likely to follow
Please turn your homework ...
Hopefully, most of you concluded that a very likely word is in, or possibly over, but probably not refrigerator or the. In the following sections we will formalize this intuition by introducing models that assign a probability to each possible next word. The same models will also serve to assign a probability to an entire sentence. Such a model, for example, could predict that the following sequence has a much higher probability of appearing in a text:
all of a sudden I notice three guys standing on the sidewalk
than does this same set of words in a different order:
on guys all I of notice sidewalk three a sudden standing the
Why would you want to predict upcoming words, or assign probabilities to sentences? Probabilities are essential in any task in which we have to identify words in noisy, ambiguous input, like speech recognition. For a speech recognizer to realize that you said I will be back soonish and not I will be bassoon dish, it helps to know that back soonish is a much more probable sequence than bassoon dish. For writing tools like spelling correction or grammatical error correction, we need to find and correct errors in writing like Their are two midterms, in which There was mistyped as Their, or Everything has improve, in which improve should have been improved. The phrase There are will be much more probable than Their are, and has improved than has improve, allowing us to help users by detecting and correcting these errors.
Assigning probabilities to sequences of words is also essential in machine translation. Suppose we are translating a Chinese source sentence:
他 向 记者 介绍了 主要 内容 He to reporters introduced main content
As part of the process we might have built the following set of potential rough English translations:
he introduced reporters to the main contents of the statement
he briefed to reporters the main contents of the statement
he briefed reporters on the main contents of the statement
A probabilistic model of word sequences could suggest that briefed reporters on is a more probable English phrase than briefed to reporters (which has an awkward to after briefed) or introduced reporters to (which uses a verb that is less fluent English in this context), allowing us to correctly select the boldfaced sentence above.
Probabilities are also important for augmentative and alternative communication systems (Trnka et al. 2007, Kane et al. 2017). People often use such AAC devices if they are physically unable to speak or sign but can instead use eye gaze or other specific movements to select words from a menu to be spoken by the system. Word prediction can be used to suggest likely words for the menu.
Models that assign probabilities to sequences of words are called language models or LMs. In this chapter we introduce the simplest model that assigns probabilities to sentences and sequences of words, the n-gram. An n-gram is a sequence of n words: a 2-gram (which we'll call a bigram) is a two-word sequence of words like "please turn", "turn your", or "your homework", and a 3-gram (a trigram) is a three-word sequence of words like "please turn your", or "turn your homework". We'll see how to use n-gram models to estimate the probability of the last word of an n-gram given the previous words, and also to assign probabilities to entire sequences. In a bit of terminological ambiguity, we usually drop the word "model", and use the term n-gram (and bigram, etc.) to mean either the word sequence itself or the predictive model that assigns it a probability. While n-gram models are much simpler than state-of-the-art neural language models based on the RNNs and transformers we will introduce in Chapter 9, they are an important foundational tool for understanding the fundamental concepts of language modeling.
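As a quick illustration of the terminology, the following sketch extracts the bigrams and trigrams from a whitespace-tokenized sentence (the helper name is this sketch's own):

    # Sketch: extract the n-grams (as tuples of words) from a list of tokens.
    def ngrams(tokens, n):
        return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

    tokens = 'please turn your homework'.split()
    print(ngrams(tokens, 2))   # [('please', 'turn'), ('turn', 'your'), ('your', 'homework')]
    print(ngrams(tokens, 3))   # [('please', 'turn', 'your'), ('turn', 'your', 'homework')]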
3.1 N-Grams
Let's begin with the task of computing P(w|h), the probability of a word w given some history h. Suppose the history h is "its water is so transparent that" and we want to know the probability that the next word is the:
P(the|its water is so transparent that). (3.1)
One way to estimate this probability is from relative frequency counts: take a very large corpus, count the number of times we see its water is so transparent that, and count the number of times this is followed by the. This would be answering the question "Out of the times we saw the history h, how many times was it followed by the word w", as follows:
$$
P(\text{the} \mid \text{its water is so transparent that}) = \frac{C(\text{its water is so transparent that the})}{C(\text{its water is so transparent that})}
\tag{3.2}
$$
With a large enough corpus, such as the web, we can compute these counts and estimate the probability from Eq. 3.2. You should pause now, go to the web, and compute this estimate for yourself.
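The counting itself is straightforward; here is a toy sketch of the relative frequency estimate of Eq. 3.2 over a tokenized corpus (the two-sentence corpus and the function name are made up for illustration, and a realistic estimate would of course need a web-scale corpus):

    # Toy sketch of the relative frequency estimate in Eq. 3.2:
    # P(w | h) ~= C(h followed by w) / C(h), counted over a tokenized corpus.
    def relative_frequency(corpus_tokens, history, word):
        h = history.split()
        n = len(h)
        count_h = count_hw = 0
        for i in range(len(corpus_tokens) - n):
            if corpus_tokens[i:i + n] == h:
                count_h += 1
                if corpus_tokens[i + n] == word:
                    count_hw += 1
        return count_hw / count_h if count_h else 0.0

    corpus = ('its water is so transparent that the fish are visible . '
              'its water is so transparent that you can see the bottom .').split()
    print(relative_frequency(corpus, 'its water is so transparent that', 'the'))   # 0.5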