3 N-gram Language Models

3.1 N-Grams
While this method of estimating probabilities directly from counts works fine in many cases, it turns out that even the web isn't big enough to give us good estimates in most cases. This is because language is creative; new sentences are created all the time, and we won't always be able to count entire sentences. Even simple extensions of the example sentence may have counts of zero on the web (such as "Walden Pond's water is so transparent that the"; well, used to have counts of zero).
Similarly, if we wanted to know the joint probability of an entire sequence of words like its water is so transparent, we could do it by asking "out of all possible sequences of five words, how many of them are its water is so transparent?" We would have to get the count of its water is so transparent and divide by the sum of the counts of all possible five-word sequences. That seems rather a lot to estimate!

For this reason, we'll need to introduce more clever ways of estimating the probability of a word w given a history h, or the probability of an entire word sequence W. Let's start with a little formalizing of notation. To represent the probability of a particular random variable X_i taking on the value "the", or P(X_i = "the"), we will use the simplification P(the). We'll represent a sequence of n words either as w_1 ... w_n or w_{1:n} (so the expression w_{1:n-1} means the string w_1, w_2, ..., w_{n-1}). For the joint probability of each word in a sequence having a particular value P(X = w_1, Y = w_2, Z = w_3, ..., W = w_n) we'll use P(w_1, w_2, ..., w_n).
Now how can we compute probabilities of entire sequences like P(w 1 , w 2 , ..., w n )? One thing we can do is decompose this probability using the chain rule of probability:
P(X_1 \ldots X_n) = P(X_1)\,P(X_2 \mid X_1)\,P(X_3 \mid X_{1:2}) \ldots P(X_n \mid X_{1:n-1}) = \prod_{k=1}^{n} P(X_k \mid X_{1:k-1})    (3.3)
Applying the chain rule to words, we get

P(w_{1:n}) = P(w_1)\,P(w_2 \mid w_1)\,P(w_3 \mid w_{1:2}) \ldots P(w_n \mid w_{1:n-1}) = \prod_{k=1}^{n} P(w_k \mid w_{1:k-1})    (3.4)
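For example, applying Eq. 3.4 to the five-word phrase from earlier expands the joint probability into a product of conditional probabilities:

P(its water is so transparent) = P(its) P(water|its) P(is|its water) P(so|its water is) P(transparent|its water is so)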
The chain rule shows the link between computing the joint probability of a sequence and computing the conditional probability of a word given previous words. Equation 3.4 suggests that we could estimate the joint probability of an entire sequence of words by multiplying together a number of conditional probabilities. But using the chain rule doesn't really seem to help us! We don't know any way to compute the exact probability of a word given a long sequence of preceding words, P(w_n|w_{1:n-1}). As we said above, we can't just estimate by counting the number of times every word occurs following every long string, because language is creative and any particular context might have never occurred before!
The intuition of the n-gram model is that instead of computing the probability of a word given its entire history, we can approximate the history by just the last few words.
The bigram model, for example, approximates the probability of a word given all the previous words P(w_n|w_{1:n-1}) by using only the conditional probability of the preceding word P(w_n|w_{n-1}). In other words, instead of computing the probability
P(the|Walden Pond's water is so transparent that) (3.5)
we approximate it with the probability
P(the|that) (3.6)
When we use a bigram model to predict the conditional probability of the next word, we are thus making the following approximation:
P(w_n \mid w_{1:n-1}) \approx P(w_n \mid w_{n-1})    (3.7)
The assumption that the probability of a word depends only on the previous word is called a Markov assumption. Markov models are the class of probabilistic models that assume we can predict the probability of some future unit without looking too far into the past. We can generalize the bigram (which looks one word into the past) to the trigram (which looks two words into the past) and thus to the n-gram (which looks n − 1 words into the past). Thus, the general equation for this n-gram approximation to the conditional probability of the next word in a sequence is
P(w_n \mid w_{1:n-1}) \approx P(w_n \mid w_{n-N+1:n-1})    (3.8)
Given the bigram assumption for the probability of an individual word, we can compute the probability of a complete word sequence by substituting Eq. 3.7 into Eq. 3.4:
P(w_{1:n}) \approx \prod_{k=1}^{n} P(w_k \mid w_{k-1})    (3.9)
How do we estimate these bigram or n-gram probabilities? An intuitive way to estimate probabilities is called maximum likelihood estimation or MLE. We get the MLE estimate for the parameters of an n-gram model by getting counts from a corpus, and normalizing the counts so that they lie between 0 and 1. For example, to compute a particular bigram probability of a word y given a previous word x, we'll compute the count of the bigram C(xy) and normalize by the sum of all the bigrams that share the same first word x:
P(w_n \mid w_{n-1}) = \frac{C(w_{n-1} w_n)}{\sum_{w} C(w_{n-1} w)}    (3.10)
We can simplify this equation, since the sum of all bigram counts that start with a given word w_{n-1} must be equal to the unigram count for that word w_{n-1} (the reader should take a moment to be convinced of this):
P(w_n \mid w_{n-1}) = \frac{C(w_{n-1} w_n)}{C(w_{n-1})}    (3.11)
Let's work through an example using a mini-corpus of three sentences. We'll first need to augment each sentence with a special symbol <s> at the beginning of the sentence, to give us the bigram context of the first word. We'll also need a special end-symbol </s>.

<s> I am Sam </s>
<s> Sam I am </s>
<s> I do not like green eggs and ham </s>

Here are the calculations for some of the bigram probabilities from this corpus:

P(I|<s>) = 2/3 = .67    P(Sam|<s>) = 1/3 = .33    P(am|I) = 2/3 = .67
P(</s>|Sam) = 1/2 = .5   P(Sam|am) = 1/2 = .5      P(do|I) = 1/3 = .33

For the general case of MLE n-gram parameter estimation:
P(w_n \mid w_{n-N+1:n-1}) = \frac{C(w_{n-N+1:n-1}\, w_n)}{C(w_{n-N+1:n-1})}    (3.12)
Equation 3.12 (like Eq. 3.11) estimates the n-gram probability by dividing the observed frequency of a particular sequence by the observed frequency of a prefix. This ratio is called a relative frequency. We said above that this use of relative frequencies as a way to estimate probabilities is an example of maximum likelihood estimation or MLE. In MLE, the resulting parameter set maximizes the likelihood of the training set T given the model M (i.e., P(T|M)). For example, suppose the word Chinese occurs 400 times in a corpus of a million words like the Brown corpus. What is the probability that a random word selected from some other text of, say, a million words will be the word Chinese? The MLE of its probability is 400/1,000,000 or .0004. Now .0004 is not the best possible estimate of the probability of Chinese occurring in all situations; it might turn out that in some other corpus or context Chinese is a very unlikely word. But it is the probability that makes it most likely that Chinese will occur 400 times in a million-word corpus. We present ways to modify the MLE estimates slightly to get better probability estimates in Section 3.5.
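As a quick sanity check, here is a minimal Python sketch of MLE bigram estimation (Eq. 3.11) applied to the three-sentence mini-corpus above. The names (corpus, bigram_prob, and so on) are ours, not from the text, and the sketch ignores practical concerns like log probabilities and smoothing, which come later in the chapter.

```python
from collections import Counter

# Minimal sketch of MLE bigram estimation (Eq. 3.11) on the mini-corpus above.
corpus = [
    "<s> I am Sam </s>",
    "<s> Sam I am </s>",
    "<s> I do not like green eggs and ham </s>",
]

unigram_counts = Counter()
bigram_counts = Counter()
for sentence in corpus:
    tokens = sentence.split()
    unigram_counts.update(tokens)                  # C(w)
    bigram_counts.update(zip(tokens, tokens[1:]))  # C(w_{n-1} w_n)

def bigram_prob(prev, word):
    """MLE estimate P(word | prev) = C(prev word) / C(prev)."""
    return bigram_counts[(prev, word)] / unigram_counts[prev]

print(bigram_prob("<s>", "I"))     # 2/3 ≈ 0.67
print(bigram_prob("I", "am"))      # 2/3 ≈ 0.67
print(bigram_prob("Sam", "</s>"))  # 1/2 = 0.5
```

The printed values match the hand calculations above (.67, .67, and .5).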
Let's move on to some examples from a slightly larger corpus than our 14-word example above. We'll use data from the now-defunct Berkeley Restaurant Project, a dialogue system from the last century that answered questions about a database of restaurants in Berkeley, California (Jurafsky et al., 1994). Here are some text-normalized sample user queries (a sample of 9332 sentences is on the website):
can you tell me about any good cantonese restaurants close by
mid priced thai food is what i'm looking for
tell me about chez panisse
can you give me a listing of the kinds of food that are available
i'm looking for a good place to eat breakfast
when is caffe venezia open during the day
Figure 3.1 shows the bigram counts from a piece of a bigram grammar from the Berkeley Restaurant Project. Note that the majority of the values are zero. In fact, we have chosen the sample words to cohere with each other; a matrix selected from a random set of seven words would be even more sparse.
We leave it as Exercise 3.2 to compute the probability of i want chinese food. What kinds of linguistic phenomena are captured in these bigram statistics? Some of the bigram probabilities above encode some facts that we think of as strictly syntactic in nature, like the fact that what comes after eat is usually a noun or an adjective, or that what comes after to is usually a verb. Others might be a fact about the personal assistant task, like the high probability of sentences beginning with the word I. And some might even be cultural rather than linguistic, like the higher probability that people are looking for Chinese versus English food.
Some practical issues: Although for pedagogical purposes we have only described bigram models, in practice it's more common to use trigram models, which condition on the previous two words rather than the previous word, or 4-gram or even 5-gram models, when there is sufficient training data. Note that for these larger n-grams, we'll need to assume extra contexts to the left and right of the sentence end. For example, to compute trigram probabilities at the very beginning of the sentence, we use two pseudo-words for the first trigram (i.e., P(I|<s><s>)).
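A small illustrative sketch of this padding step, under the assumption that we prepend n − 1 start pseudo-words (the helper name pad is hypothetical, not from the text):

```python
# Hypothetical helper: pad a sentence with n-1 <s> pseudo-words (and a final
# </s>) so that the first higher-order n-gram contexts are defined.
def pad(tokens, n):
    return ["<s>"] * (n - 1) + tokens + ["</s>"]

print(pad("I want chinese food".split(), 3))
# ['<s>', '<s>', 'I', 'want', 'chinese', 'food', '</s>']
# The first trigram context is (<s>, <s>), giving P(I|<s><s>).
```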
We always represent and compute language model probabilities in log format as log probabilities. Since probabilities are (by definition) less than or equal to 1, the more probabilities we multiply together, the smaller the product becomes. Multiplying enough n-grams together would result in numerical underflow. By using log probabilities instead of raw probabilities, we get numbers that are not as small. Adding in log space is equivalent to multiplying in linear space, so we combine log probabilities by adding them. The result of doing all computation and storage in log space is that we only need to convert back into probabilities if we need to report them at the end; then we can just take the exp of the logprob:
p_1 \times p_2 \times p_3 \times p_4 = \exp(\log p_1 + \log p_2 + \log p_3 + \log p_4)    (3.13)
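A minimal Python sketch of Eq. 3.13; the probability values below are made up purely for illustration:

```python
import math

probs = [0.1, 0.05, 0.2, 0.01]  # made-up probabilities for illustration

product = math.prod(probs)                            # multiply in linear space
via_logs = math.exp(sum(math.log(p) for p in probs))  # add in log space, exp at the end

print(product, via_logs)  # both 1e-05, up to floating-point rounding
```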
3.2 Evaluating Language Models
The best way to evaluate the performance of a language model is to embed it in an application and measure how much the application improves. Such end-to-end evaluation is called extrinsic evaluation. Extrinsic evaluation is the only way to know if a particular improvement in a component is really going to help the task at hand. Thus, for speech recognition, we can compare the performance of two language models by running the speech recognizer twice, once with each language model, and seeing which gives the more accurate transcription.
Unfortunately, running big NLP systems end-to-end is often very expensive. Instead, it would be nice to have a metric that can be used to quickly evaluate potential improvements in a language model. An intrinsic evaluation metric is one that measures the quality of a model independent of any application.
For an intrinsic evaluation of a language model we need a test set. As with many of the statistical models in our field, the probabilities of an n-gram model come from the corpus it is trained on, the training set or training corpus. We can then measure the quality of an n-gram model by its performance on some unseen data called the test set or test corpus. We will also sometimes call test sets and other datasets that are not in our training sets held out corpora, because we hold them out from the training data.
So if we are given a corpus of text and want to compare two different n-gram models, we divide the data into training and test sets, train the parameters of both models on the training set, and then compare how well the two trained models fit the test set.
But what does it mean to "fit the test set"? The answer is simple: whichever model assigns a higher probability to the test set-meaning it more accurately predicts the test set-is a better model. Given two probabilistic models, the better model is the one that has a tighter fit to the test data or that better predicts the details of the test data, and hence will assign a higher probability to the test data.
Since our evaluation metric is based on test set probability, it's important not to let the test sentences into the training set. Suppose we are trying to compute the probability of a particular "test" sentence. If our test sentence is part of the training corpus, we will mistakenly assign it an artificially high probability when it occurs in the test set. We call this situation training on the test set. Training on the test set introduces a bias that makes the probabilities all look too high, and causes huge inaccuracies in perplexity, the probability-based metric we introduce below.
Sometimes we use a particular test set so often that we implicitly tune to its characteristics. We then need a fresh test set that is truly unseen. In such cases, we call the initial test set the development test set, or devset. How do we divide our data into training, development, and test sets? We want our test set to be as large as possible, since a small test set may be accidentally unrepresentative, but we also want as much training data as possible. At the minimum, we would want to pick the smallest test set that gives us enough statistical power to measure a statistically significant difference between two potential models. In practice, we often just divide our data into 80% training, 10% development, and 10% test. Given a large corpus that we want to divide into training and test, test data can either be taken from some continuous sequence of text inside the corpus, or we can remove smaller "stripes" of text from randomly selected parts of our corpus and combine them into a test set.
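A rough sketch of the 80/10/10 split just described, using randomly selected sentences rather than one continuous block; the stand-in corpus and the exact proportions here are assumptions for illustration:

```python
import random

# Stand-in corpus of sentences; in practice these would be real text.
sentences = [f"sentence {i}" for i in range(10000)]
random.shuffle(sentences)  # random "stripes" rather than one continuous block

n = len(sentences)
train = sentences[: int(0.8 * n)]            # 80% training
dev = sentences[int(0.8 * n): int(0.9 * n)]  # 10% development
test = sentences[int(0.9 * n):]              # 10% test

print(len(train), len(dev), len(test))  # 8000 1000 1000
```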
3.2.1 Perplexity
In practice we don't use raw probability as our metric for evaluating language models, but a variant called perplexity. The perplexity (sometimes called PP for short) of a language model on a test set is the inverse probability of the test set, normalized by the number of words. For a test set W = w_1 w_2 ... w_N:
PP(W) = P(w_1 w_2 \ldots w_N)^{-\frac{1}{N}} = \sqrt[N]{\frac{1}{P(w_1 w_2 \ldots w_N)}}    (3.14)
We can use the chain rule to expand the probability of W :
PP(W) = \sqrt[N]{\prod_{i=1}^{N} \frac{1}{P(w_i \mid w_1 \ldots w_{i-1})}}    (3.15)
Thus, if we are computing the perplexity of W with a bigram language model, we get:
PP(W) = \sqrt[N]{\prod_{i=1}^{N} \frac{1}{P(w_i \mid w_{i-1})}}    (3.16)
Note that because of the inverse in Eq. 3.15, the higher the conditional probability of the word sequence, the lower the perplexity. Thus, minimizing perplexity is equivalent to maximizing the test set probability according to the language model. What we generally use for the word sequence in Eq. 3.15 or Eq. 3.16 is the entire sequence of words in some test set. Since this sequence will cross many sentence boundaries, we need to include the begin- and end-sentence markers <s> and </s> in the probability computation. We also need to include the end-of-sentence marker </s> (but not the beginning-of-sentence marker <s>) in the total count of word tokens N.
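Here is a minimal sketch of Eq. 3.16 for a bigram model, following the counting convention just described (</s> counts toward N, <s> does not). It reuses the bigram_prob estimator sketched earlier and assumes that estimator assigns a nonzero probability to every bigram in the test set; handling zeros is the subject of Section 3.4 and Section 3.5.

```python
import math

def perplexity(test_sentences, bigram_prob):
    """Perplexity (Eq. 3.16) of a bigram model on a list of test sentences."""
    log_prob, N = 0.0, 0
    for sentence in test_sentences:
        tokens = ["<s>"] + sentence.split() + ["</s>"]
        for prev, word in zip(tokens, tokens[1:]):
            # Assumes bigram_prob(prev, word) > 0 for every test bigram.
            log_prob += math.log(bigram_prob(prev, word))
            N += 1  # counts each predicted token, including </s> but not <s>
    return math.exp(-log_prob / N)

print(perplexity(["I am Sam"], bigram_prob))  # ≈ 1.73 (= 9 ** 0.25) with the mini-corpus model
```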
There is another way to think about perplexity: as the weighted average branching factor of a language. The branching factor of a language is the number of possible next words that can follow any word. Consider the task of recognizing the digits in English (zero, one, two, ..., nine), given that (both in some training set and in some test set) each of the 10 digits occurs with equal probability P = 1/10. The perplexity of this mini-language is in fact 10. To see that, imagine a test string of digits of length
N, and assume that in the training set all the digits occurred with equal probability. By Eq. 3.15, the perplexity will be
PP(W) = P(w_1 w_2 \ldots w_N)^{-\frac{1}{N}} = \left(\left(\frac{1}{10}\right)^{N}\right)^{-\frac{1}{N}} = \left(\frac{1}{10}\right)^{-1} = 10    (3.17)
But suppose that the number zero is really frequent and occurs far more often than other numbers. Let's say that 0 occurs 91 times in the training set, and each of the other digits occurs 1 time. Now we see the following test set: 0 0 0 0 0 3 0 0 0 0. We should expect the perplexity of this test set to be lower since most of the time the next number will be zero, which is very predictable, i.e. has a high probability. Thus, although the branching factor is still 10, the perplexity or weighted branching factor is smaller. We leave this exact calculation as exercise 12.
We see in Section 3.8 that perplexity is also closely related to the information-theoretic notion of entropy.
Finally, let's look at an example of how perplexity can be used to compare different n-gram models. We trained unigram, bigram, and trigram grammars on 38 million words (including start-of-sentence tokens) from the Wall Street Journal, using a 19,979 word vocabulary. We then computed the perplexity of each of these models on a test set of 1.5 million words with Eq. 3.16. The table below shows the perplexity of a 1.5 million word WSJ test set according to each of these grammars.
As we see above, the more information the n-gram gives us about the word sequence, the lower the perplexity (since as Eq. 3.15 showed, perplexity is related inversely to the likelihood of the test sequence according to the model).
Note that in computing perplexities, the n-gram model P must be constructed without any knowledge of the test set or any prior knowledge of the vocabulary of the test set. Any kind of knowledge of the test set can cause the perplexity to be artificially low. The perplexity of two language models is only comparable if they use identical vocabularies.
An (intrinsic) improvement in perplexity does not guarantee an (extrinsic) improvement in the performance of a language processing task like speech recognition or machine translation. Nonetheless, because perplexity often correlates with such improvements, it is commonly used as a quick check on an algorithm. But a model's improvement in perplexity should always be confirmed by an end-to-end evaluation of a real task before concluding the evaluation of the model.
3.3 Sampling sentences from a language model
One important way to visualize what kind of knowledge a language model embodies is to sample from it. Sampling from a distribution means to choose random points according to their likelihood. Thus sampling from a language model, which represents a distribution over sentences, means to generate some sentences, choosing each sentence according to its likelihood as defined by the model. Thus we are more likely to generate sentences that the model thinks have a high probability and less likely to generate sentences that the model thinks have a low probability.
This technique of visualizing a language model by sampling was first suggested very early on by Shannon (1951) and Miller and Selfridge (1950). It's simplest to visualize how this works for the unigram case. Imagine all the words of the English language covering the probability space between 0 and 1, each word covering an interval proportional to its frequency. Fig. 3.3 shows a visualization, using a unigram LM computed from the text of this book. We choose a random value between 0 and 1, find that point on the probability line, and print the word whose interval includes this chosen value. We continue choosing random numbers and generating words until we randomly generate the sentence-final token </s>.
Figure 3.3 A visualization of the sampling distribution for sampling sentences by repeatedly sampling unigrams. The blue bar represents the frequency of each word. The number line shows the cumulative probabilities. If we choose a random number between 0 and 1, it will fall in an interval corresponding to some word. The expectation for the random number to fall in the larger intervals of one of the frequent words (the, of, a) is much higher than in the smaller interval of one of the rare words (polyphonic).
We can use the same technique to generate bigrams by first generating a random bigram that starts with <s> (according to its bigram probability). Let's say the second word of that bigram is w. We next choose a random bigram starting with w (again, drawn according to its bigram probability), and so on.
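A rough sketch of this bigram sampling procedure, reusing the bigram_counts dictionary from the earlier MLE sketch. Drawing the next word in proportion to C(prev word) is equivalent to drawing it from P(word|prev); the function name and the max_len cutoff are our own additions, and the sketch assumes every generated word was seen as a bigram context in training.

```python
import random

def sample_sentence(bigram_counts, max_len=20):
    """Generate one sentence by repeatedly sampling from P(word | prev)."""
    sentence, prev = [], "<s>"
    while len(sentence) < max_len:
        # Every bigram seen in training that starts with `prev`, with its count.
        # (Assumes `prev` occurred as a bigram context at least once in training.)
        candidates = [(w, c) for (p, w), c in bigram_counts.items() if p == prev]
        words, weights = zip(*candidates)
        # Drawing in proportion to C(prev w) is the same as drawing from P(w | prev).
        word = random.choices(words, weights=weights)[0]
        if word == "</s>":
            break
        sentence.append(word)
        prev = word
    return " ".join(sentence)

print(sample_sentence(bigram_counts))  # e.g. "I am Sam" with the mini-corpus counts
```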
3.4 Generalization and Zeros
The n-gram model, like many statistical models, is dependent on the training corpus. One implication of this is that the probabilities often encode specific facts about a given training corpus. Another implication is that n-grams do a better and better job of modeling the training corpus as we increase the value of N.
We can use the sampling method from the prior section to visualize both of these facts! To give an intuition for the increasing power of higher-order n-grams, Fig. 3.4 shows random sentences generated from unigram, bigram, trigram, and 4-gram models trained on Shakespeare's works.
Figure 3.4 Eight sentences randomly generated from four n-gram models computed from Shakespeare's works. All characters were mapped to lower-case and punctuation marks were treated as words. Output is hand-corrected for capitalization to improve readability.

The longer the context on which we train the model, the more coherent the sentences. In the unigram sentences, there is no coherent relation between words or any sentence-final punctuation. The bigram sentences have some local word-to-word coherence (especially if we consider that punctuation counts as a word).
The trigram and 4-gram sentences are beginning to look a lot like Shakespeare. Indeed, a careful investigation of the 4-gram sentences shows that they look a little too much like Shakespeare. The words It cannot be but so are directly from King John. This is because, not to put the knock on Shakespeare, his oeuvre is not very large as corpora go (N = 884,647, V = 29,066), and our n-gram probability matrices are ridiculously sparse. There are V^2 = 844,000,000 possible bigrams alone, and the number of possible 4-grams is V^4 = 7 × 10^17. Thus, once the generator has chosen the first 4-gram (It cannot be but), there are only five possible continuations (that, I, he, thou, and so); indeed, for many 4-grams, there is only one continuation.
To get an idea of the dependence of a grammar on its training set, let's look at an n-gram grammar trained on a completely different corpus: the Wall Street Journal (WSJ) newspaper. Shakespeare and the Wall Street Journal are both English, so we might expect some overlap between our n-grams for the two genres. Fig. 3.5 shows sentences generated by unigram, bigram, and trigram grammars trained on 40 million words from WSJ.

Figure 3.5 Three sentences randomly generated from three n-gram models computed from 40 million words of the Wall Street Journal, lower-casing all characters and treating punctuation as words. Output was then hand-corrected for capitalization to improve readability.
Compare these examples to the pseudo-Shakespeare in Fig. 3.4. While they both model "English-like sentences", there is clearly no overlap in generated sentences, and little overlap even in small phrases. Statistical models are likely to be pretty useless as predictors if the training sets and the test sets are as different as Shakespeare and WSJ.
How should we deal with this problem when we build n-gram models? One step is to be sure to use a training corpus that has a similar genre to whatever task we are trying to accomplish. To build a language model for translating legal documents, we need a training corpus of legal documents. To build a language model for a question-answering system, we need a training corpus of questions.
It is equally important to get training data in the appropriate dialect or variety, especially when processing social media posts or spoken transcripts. For example, some tweets will use features of African American Language (AAL), the name for the many variations of language used in African American communities (King, 2020). Such features include words like finna (an auxiliary verb that marks immediate future tense) that don't occur in other varieties, or spellings like den for then, in tweets like this one (Blodgett and O'Connor, 2017):
(3.18) Bored af den my phone finna die!!!

while tweets from varieties like Nigerian English have markedly different vocabulary and n-gram patterns from American English (Jurgens et al., 2017):
(3.19) @username R u a wizard or wat gan sef: in d mornin -u tweet, afternoon -u tweet, nyt gan u dey tweet. beta get ur IT placement wiv twitter
Matching genres and dialects is still not sufficient. Our models may still be subject to the problem of sparsity. For any n-gram that occurred a sufficient number of times, we might have a good estimate of its probability. But because any corpus is limited, some perfectly acceptable English word sequences are bound to be missing from it. That is, we'll have many cases of putative "zero probability n-grams" that should really have some non-zero probability. Consider the words that follow the bigram denied the in the WSJ Treebank3 corpus, together with their counts:

denied the allegations: 5
denied the speculation: 2
denied the rumors: 1
denied the report: 1
But suppose our test set has phrases like:

denied the offer
denied the loan

Our model will incorrectly estimate that P(offer|denied the) is 0! These zeros (n-grams that don't occur in the training set but do occur in the test set) are a problem for two reasons. First, their presence means we are underestimating the probability of all sorts of words that might occur, which will hurt the performance of any application we want to run on this data. Second, if the probability of any word in the test set is 0, the entire probability of the test set is 0. By definition, perplexity is based on the inverse probability of the test set. Thus if some words have zero probability, we can't compute perplexity at all, since we can't divide by 0!
3.4.1 Unknown Words
The previous section discussed the problem of words whose bigram probability is zero. But what about words we simply have never seen before?
Sometimes we have a language task in which this can't happen because we know all the words that can occur. In such a closed vocabulary system the test set can only contain words from this lexicon, and there will be no unknown words. This is a reasonable assumption in some domains, such as speech recognition or machine translation, where we have a pronunciation dictionary or a phrase table that are fixed in advance, and so the language model can only use the words in that dictionary or phrase table.
In other cases we have to deal with words we haven't seen before, which we'll call unknown words, or out of vocabulary (OOV) words. An open vocabulary system is one in which we model these potential unknown words in the test set by adding a pseudo-word called <UNK>.
There are two common ways to train the probabilities of the unknown word model <UNK>. The first one is to turn the problem back into a closed vocabulary one by choosing a fixed vocabulary in advance:
1. Choose a vocabulary (word list) that is fixed in advance.
2. Convert in the training set any word that is not in this set (any out-of-vocabulary word) to the unknown word token <UNK> in a text normalization step.
3. Estimate the probabilities for <UNK> from its counts just like any other regular word in the training set.
The second alternative, in situations where we don't have a prior vocabulary in advance, is to create such a vocabulary implicitly, replacing words in the training data by <UNK> based on their frequency. For example, we can replace by <UNK> all words that occur fewer than n times in the training set, where n is some small number, or equivalently select a vocabulary size V in advance (say 50,000) and choose the top V words by frequency and replace the rest by <UNK>. In either case we then proceed to train the language model as before, treating <UNK> like a regular word. The exact choice of <UNK> model does have an effect on metrics like perplexity. A language model can achieve low perplexity by choosing a small vocabulary and assigning the unknown word a high probability. For this reason, perplexities should only be compared across language models with the same vocabularies (Buck et al., 2014).
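A hedged sketch of the first variant of this second strategy, replacing words that occur fewer than min_count times with <UNK>; the function name and the particular threshold are assumptions, since the text only says "some small number":

```python
from collections import Counter

def replace_rare_words(sentences, min_count=2):
    """Build an implicit vocabulary by frequency and map rare words to <UNK>."""
    counts = Counter(w for s in sentences for w in s.split())
    vocab = {w for w, c in counts.items() if c >= min_count}
    return [
        " ".join(w if w in vocab else "<UNK>" for w in s.split())
        for s in sentences
    ]
```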
3.5 Smoothing
What do we do with words that are in our vocabulary (they are not unknown words) but appear in a test set in an unseen context (for example they appear after a word they never appeared after in training)? To keep a language model from assigning zero probability to these unseen events, we'll have to shave off a bit of probability mass from some more frequent events and give it to the events we've never seen. This modification is called smoothing or discounting. In this section and the following ones we'll introduce a variety of ways to do smoothing: Laplace (add-one) smoothing, add-k smoothing, stupid backoff, and Kneser-Ney smoothing.
3.5.1 Laplace Smoothing
The simplest way to do smoothing is to add one to all the n-gram counts, before we normalize them into probabilities. All the counts that used to be zero will now have a count of 1, the counts of 1 will be 2, and so on. This algorithm is called Laplace smoothing. Laplace smoothing does not perform well enough to be used in modern n-gram models, but it usefully introduces many of the concepts that we see in other smoothing algorithms, gives a useful baseline, and is also a practical smoothing algorithm for other tasks like text classification (Chapter 4).
Let's start with the application of Laplace smoothing to unigram probabilities. Recall that the unsmoothed maximum likelihood estimate of the unigram probability of the word w_i is its count c_i normalized by the total number of word tokens N:
P(w_i) = \frac{c_i}{N}
Laplace smoothing merely adds one to each count (hence its alternate name add-one smoothing). Since there are V words in the vocabulary and each one was incremented, we also need to adjust the denominator to take into account the extra V observations. (What happens to our P values if we don't increase the denominator?)
P_{\text{Laplace}}(w_i) = \frac{c_i + 1}{N + V}    (3.20)
Instead of changing both the numerator and denominator, it is convenient to describe how a smoothing algorithm affects the numerator, by defining an adjusted count c^*. This adjusted count is easier to compare directly with the MLE counts and can be turned into a probability like an MLE count by normalizing by N. To define this count, since we are only changing the numerator in addition to adding 1, we'll also need to multiply by a normalization factor N/(N+V):
c_i^* = (c_i + 1)\,\frac{N}{N + V}    (3.21)
We can now turn c_i^* into a probability P_i^* by normalizing by N. A related way to view smoothing is as discounting (lowering) some non-zero counts in order to get the probability mass that will be assigned to the zero counts. Thus, instead of referring to the discounted counts c^*, we might describe a smoothing algorithm in terms of a relative discount d_c, the ratio of the discounted counts to the original counts:
d_c = \frac{c^*}{c}
Now that we have the intuition for the unigram case, let's smooth our Berkeley Restaurant Project bigrams. Figure 3.6 shows the add-one smoothed counts for the bigrams in Fig. 3.1. Figure 3.7 shows the add-one smoothed probabilities for the bigrams in Fig. 3.2. Recall that normal bigram probabilities are computed by normalizing each row of counts by the unigram count:
P(w_n \mid w_{n-1}) = \frac{C(w_{n-1} w_n)}{C(w_{n-1})}    (3.22)
Thus, each of the unigram counts given in the previous section will need to be augmented by V = 1446. The result is the smoothed bigram probabilities in Fig. 3.7.
It is often convenient to reconstruct the count matrix so we can see how much a smoothing algorithm has changed the original counts. These adjusted counts can be computed by Eq. 3.24. Figure 3.8 shows the reconstructed counts.
Note that add-one smoothing has made a very big change to the counts. C(want to) changed from 609 to 238! We can see this in probability space as well: P(to|want) decreases from .66 in the unsmoothed case to .26 in the smoothed case. Looking at the discount d (the ratio between new and old counts) shows us how strikingly the counts for each prefix word have been reduced; the discount for the bigram want to is .39, while the discount for Chinese food is .10, a factor of 10!
P^*_{\text{Laplace}}(w_n \mid w_{n-1}) = \frac{C(w_{n-1} w_n) + 1}{\sum_{w} (C(w_{n-1} w) + 1)} = \frac{C(w_{n-1} w_n) + 1}{C(w_{n-1}) + V}    (3.23)
c^*(w_{n-1} w_n) = \frac{[C(w_{n-1} w_n) + 1] \times C(w_{n-1})}{C(w_{n-1}) + V}    (3.24)
The sharp change in counts and probabilities occurs because too much probability mass is moved to all the zeros.
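A minimal sketch of Eq. 3.23 and Eq. 3.24 in Python, reusing the bigram_counts and unigram_counts dictionaries from the earlier MLE sketch (both are Counters, so unseen bigrams count as 0); V is the vocabulary size:

```python
def laplace_bigram_prob(prev, word, bigram_counts, unigram_counts, V):
    """Add-one smoothed estimate P*_Laplace(word | prev), Eq. 3.23."""
    return (bigram_counts[(prev, word)] + 1) / (unigram_counts[prev] + V)

def reconstructed_count(prev, word, bigram_counts, unigram_counts, V):
    """Adjusted count c*(prev word), Eq. 3.24."""
    c = bigram_counts[(prev, word)]
    return (c + 1) * unigram_counts[prev] / (unigram_counts[prev] + V)
```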
3.5.2 Add-k smoothing
One alternative to add-one smoothing is to move a bit less of the probability mass from the seen to the unseen events. Instead of adding 1 to each count, we add a fractional count k (.5? .05? .01?). This algorithm is therefore called add-k smoothing.
P^*_{\text{Add-k}}(w_n \mid w_{n-1}) = \frac{C(w_{n-1} w_n) + k}{C(w_{n-1}) + kV}    (3.25)
Add-k smoothing requires that we have a method for choosing k; this can be done, for example, by optimizing on a devset. Although add-k is useful for some tasks (including text classification), it turns out that it still doesn't work well for language modeling, generating counts with poor variances and often inappropriate discounts (Gale and Church, 1994) .
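In code, add-k is the same sketch as Laplace smoothing with the 1s replaced by a fractional k; the default value here is purely illustrative and would in practice be tuned on a devset:

```python
def add_k_bigram_prob(prev, word, bigram_counts, unigram_counts, V, k=0.05):
    """Add-k smoothed estimate P*_Add-k(word | prev), Eq. 3.25; k is illustrative."""
    return (bigram_counts[(prev, word)] + k) / (unigram_counts[prev] + k * V)
```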
3.5.3 Backoff and Interpolation
The discounting we have been discussing so far can help solve the problem of zero frequency n-grams. But there is an additional source of knowledge we can draw on. If we are trying to compute P(w_n|w_{n-2}w_{n-1}) but we have no examples of a particular trigram w_{n-2}w_{n-1}w_n, we can instead estimate its probability by using the bigram probability P(w_n|w_{n-1}). Similarly, if we don't have counts to compute P(w_n|w_{n-1}), we can look to the unigram P(w_n).
In other words, sometimes using less context is a good thing, helping to generalize more for contexts that the model hasn't learned much about. There are two ways to use this n-gram "hierarchy". In backoff, we use the trigram if the evidence is sufficient, otherwise we use the bigram, otherwise the unigram. In other words, we only "back off" to a lower-order n-gram if we have zero evidence for a higher-order n-gram. By contrast, in interpolation, we always mix the probability estimates from all the n-gram estimators, weighing and combining the trigram, bigram, and unigram counts.
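As a preview of the linear interpolation described next, here is a rough sketch of a trigram estimate that mixes unigram, bigram, and trigram probabilities. The weights in lambdas are assumptions for illustration (they must be nonnegative and sum to 1, and would normally be set on held-out data), and p_uni, p_bi, and p_tri are hypothetical stand-ins for already-trained estimators:

```python
def interpolated_trigram_prob(w1, w2, w3, p_uni, p_bi, p_tri,
                              lambdas=(0.1, 0.3, 0.6)):
    """Mix unigram, bigram, and trigram estimates of P(w3 | w1 w2).
    The weights are illustrative; they must be nonnegative and sum to 1."""
    l1, l2, l3 = lambdas
    return l1 * p_uni(w3) + l2 * p_bi(w2, w3) + l3 * p_tri(w1, w2, w3)
```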
In simple linear interpolation, we combine different order n-grams by linearly interpolating them. Thus, we estimate the trigram probability P(w_n|w_{n-2}w_{n-1}) by mixing together the unigram, bigram, and trigram probabilities, each weighted by a