{ "paper_id": "O06-1023", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:07:06.178961Z" }, "title": "An Evaluation of Adopting Language Model as the Checker of Preposition Usage", "authors": [ { "first": "Shih-Hung", "middle": [], "last": "Wu", "suffix": "", "affiliation": {}, "email": "shwu@cyut.edu.tw" }, { "first": "Chen-Yu", "middle": [], "last": "Su", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Tian-Jian", "middle": [], "last": "Jiang", "suffix": "", "affiliation": {}, "email": "tmjiang@iis.sinica.edu.tw" }, { "first": "Wen-Lian", "middle": [], "last": "Hsu", "suffix": "", "affiliation": {}, "email": "hsu@iis.sinica.edu.tw" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Many grammar checkers in rule-based approach do not handle errors that come from various usages, for example, the usages of prepositions. To study the behavior of prepositions, we introduce the language model into a grammarchecking task. A language model is trained from a large training corpus, which contains many short phrases. It can be used for detecting and correcting certain types of grammar errors, where local information is sufficient to make decision. We conduct several experiments on finding the correct English prepositions. The experiment results show that the accuracy of open test is 71% and the accuracy of closed test is 89%. The accuracy is 70% on TOEFL-level tests.", "pdf_parse": { "paper_id": "O06-1023", "_pdf_hash": "", "abstract": [ { "text": "Many grammar checkers in rule-based approach do not handle errors that come from various usages, for example, the usages of prepositions. To study the behavior of prepositions, we introduce the language model into a grammarchecking task. A language model is trained from a large training corpus, which contains many short phrases. It can be used for detecting and correcting certain types of grammar errors, where local information is sufficient to make decision. We conduct several experiments on finding the correct English prepositions. The experiment results show that the accuracy of open test is 71% and the accuracy of closed test is 89%. The accuracy is 70% on TOEFL-level tests.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Computer-Aided Language Learning is a fascinating area; however, the computer still lacks many abilities of a human teacher, for example, the ability of grammar checking. Technically, it is hard to build a grammar checker that can deal with all types of errors. There are errors caused beyond the knowledge of syntax. For example, to overcome the misusing of prepositions, a system requires more semantic knowledge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "There are three major approaches to implement a grammar checker. The first strategy is the syntax-based checking [Jensen et al., 1993] . In this approach, a sentence is parsed into a tree structure. A sentence is correct if it can be parsed completely. Another choice is the statisticsbased checking [Attwell, 1987] . In this approach, the system built a list of POS tag sequences based on a POS-annotated corpus. A sentence with known POS tag sequence is considered as a correct one. The last one is the rule-based checking [Naber 2003 ], where a set of rules is built manually and used to match against a text. Park et al. 
proposed an online English grammar checker for students who take English as the second language. This system focuses on a limited category of frequently occurring grammatical mistakes in essays written by students. The grammar knowledge is represented in Prolog language. [Park 1997] We find that most grammar checkers do not deal with the errors of preposition usage. We suppose that it should be hard to write rules for all of the prepositions. To evaluate this difficulty, we introduce the language model into the grammar-checking task. Since a language model is usually trained from a large training corpus, it may contain many short phrases with prepositions.", "cite_spans": [ { "start": 113, "end": 134, "text": "[Jensen et al., 1993]", "ref_id": "BIBREF5" }, { "start": 300, "end": 315, "text": "[Attwell, 1987]", "ref_id": null }, { "start": 525, "end": 536, "text": "[Naber 2003", "ref_id": "BIBREF10" }, { "start": 613, "end": 624, "text": "Park et al.", "ref_id": null }, { "start": 897, "end": 908, "text": "[Park 1997]", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "The Language Model (LM) is one of the popular natural language processing technology for various applications, like information retrieval, handwriting recognition, speech recognition, and machine translation. [Jurafsky and Martin, 2000] [Manning and Schutze, 1999] An LM uses short history to predict the next word. Word prediction is an essential subtask of speech recognition, handwriting character recognition, augmentative communication for the disabled, and spelling error detection. An LM can estimate the probability of a sentence. Therefore, it can be a way to distinguish good usages from bad ones of English prepositions. ", "cite_spans": [ { "start": 209, "end": 236, "text": "[Jurafsky and Martin, 2000]", "ref_id": "BIBREF6" }, { "start": 237, "end": 264, "text": "[Manning and Schutze, 1999]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1." }, { "text": "We briefly restate the notation of N-gram language model. In this model, a sentence is viewed as a sequence of n words. The probability of a sentence in a language, say English, is defined as the probability of the sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical language model", "sec_num": "2." }, { "text": "That can be further decomposed by the chain rule of conditional probability under the Markov assumption.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical language model", "sec_num": "2." }, { "text": "Since it is not possible to collect all the history, a prefix of size N, as an approximation, is used to replace each component in the product.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical language model", "sec_num": "2." }, { "text": "Usually, the N is 1, 2, or 3, are named as unigrams: P(w n ), bi-grams: P(w n |w n-1 ), and tri-grams:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical language model", "sec_num": "2." }, { "text": "P(w n | w n-1 w n-2 ) model, respectively.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical language model", "sec_num": "2." }, { "text": "Next step is to estimate the n-gram approximation from corpus. The basic way is called Maximum Likelihood Estimation (MLE), which calculates the relative frequency and is used as the estimation of probability. 
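As a concrete illustration of the relative-frequency estimator just described (a minimal Python sketch, not part of the original paper; the toy corpus and function name are invented for illustration), the bi-gram probability of a word given its predecessor can be computed directly from counts, matching the bi-gram formula given next:

```python
from collections import Counter

def mle_bigram_probability(tokens, prev_word, word):
    # Relative-frequency estimate: C(w_{n-1} w_n) / C(w_{n-1}).
    unigram_counts = Counter(tokens)
    bigram_counts = Counter(zip(tokens, tokens[1:]))
    if unigram_counts[prev_word] == 0:
        return 0.0  # unseen history; the smoothing methods of Section 2.1 deal with such zeroes
    return bigram_counts[(prev_word, word)] / unigram_counts[prev_word]

# Toy corpus and query, both invented for illustration.
corpus = 'my sister whispered in my ear'.split()
print(mle_bigram_probability(corpus, 'whispered', 'in'))  # 1.0: the only bi-gram starting with whispered
```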
For bi-gram:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical language model", "sec_num": "2." }, { "text": "And, for n-gram ) ,..., , ( ) ( 2 1 1 n n w w w P w P \u2261 ) | ( ) ( ) | ( )... | ( ) | ( ) ( ) ( 1 1 2 1 1 1 2 1 3 1 2 1 1 \u2212 = \u2212 \u220f = = k k n k n n n w w P w P w w P w w P w w P w P w P ) | ( ) | ( 1 1 1 1 \u2212 + \u2212 \u2212 \u2248 n N n n n n w w P w w P ) ( ) ( ) | ( 1 1 1 \u2212 \u2212 \u2212 = n n n n n w C w w C w w P", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical language model", "sec_num": "2." }, { "text": "where C represents the count of each specified n-grams w in the corpus. MLE works well for high-frequency n-gram; however, no matter how large the corpus is, there are always some lowfrequency n-grams. The frequency might be very low even zero. Some zeroes are really zeroes, which means that they represent meaningless word combinations. However, some zeroes are not really zeroes. They represent low frequency events that simply did not occur in the corpus and might exist in real world. When using n-gram model, we cannot assign a probability to a sequence where one of the component n-gram has a value of zero. An alternative solution is to smooth the probability estimations so that no component in the sequences is given a probability of zero.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Statistical language model", "sec_num": "2." }, { "text": "To cope with the problem of unseen data, several smoothing methods are developed [Goodman, 2002] ; they can be classified as discounting methods and model combination methods.", "cite_spans": [ { "start": 81, "end": 96, "text": "[Goodman, 2002]", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Smoothing methods", "sec_num": "2.1" }, { "text": "Discounting methods adjust the probability estimators, so that zero relative frequency in the training data does not imply zero relative counts. Model combination methods combine available models (unigram, bi-gram, tri-gram, etc.) by interpolation and back-off. To our knowledge, Good-Turing discounting, absolute discounting and Chen-Goodman modified Kneser-Ney discounting are three of best smoothing methods; therefore, we use them in our experiments. [Chen and Goodman, 1998] ", "cite_spans": [ { "start": 455, "end": 479, "text": "[Chen and Goodman, 1998]", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Smoothing methods", "sec_num": "2.1" }, { "text": "Good-Turing discounting adjusts the count of n-gram from r to r * , which is base on the assumption that their distribution is binomial [Good, 1953] ", "cite_spans": [ { "start": 136, "end": 148, "text": "[Good, 1953]", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Good-Turing Discounting (GT)", "sec_num": "2.1.1" }, { "text": ". ) ( ) ( ) | ( 1 1 1 1 1 1 \u2212 + \u2212 \u2212 + \u2212 \u2212 + \u2212 = n N n n n N n n N n n w C w w C w w P", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Good-Turing Discounting (GT)", "sec_num": "2.1.1" }, { "text": "where N r is types of n-gram occurring r times, and M is a threshold usually smaller than 5. Note that for r=0, where N 0 is the number of n-grams that never occurred. 
The discounted probabilities are thus:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Good-Turing Discounting (GT)", "sec_num": "2.1.1" }, { "text": "The Good-Turing formula only applies to the situation when r < 5, and need to renormalize to ensure that everything sums to one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Good-Turing Discounting (GT)", "sec_num": "2.1.1" }, { "text": "In the absolute discounting model, all non-zero frequencies are discounted by a small constant discount rate b. And all the unseen events gain the frequency uniformly. [Ney et al., 1994] Where R is the highest frequency and K is the number of bins that training instances are divided into:", "cite_spans": [ { "start": 168, "end": 186, "text": "[Ney et al., 1994]", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Absolute Discounting (AD)", "sec_num": "2.1.2" }, { "text": "So the probability is M r N N r r r r < + = +1 * ) 1 ( 0 1 * N N r = N r w w P n GT * 1 ) ... ( = , _ 1 0 1 0 0 N N K b rate discount N N P N R r r \u2212 \u22c5 = \u22c5 = \u22c5 \u2211 = 1 0 , 0 \u2264 < = \u2211 = b N K R r r \u23aa \u23aa \u23a9 \u23aa \u23aa \u23a8 \u23a7 = \u22c5 \u2212 \u22c5 \u2264 < \u2212 = 0 , 0 , ) ... ( 0 0 1 r N N N K b R r N b r w w p n abs", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Absolute Discounting (AD)", "sec_num": "2.1.2" }, { "text": "The Kneser-Ney discounting model is a back-off model based on an extension of absolute discounting which provides a more accurate estimation of the distribution. Chen and Goodman proposed a modified Kneser-Ney(mKN) discounting model. Instead of using a single discount for all nonzero counts as in KN smoothing, the mKN has three different parameters, D 1 , D 2 , and D 3 that are applied to n-grams with one, two, and three or more counts, respectively. The formula of mKN discounting is: ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modified Kneser-Ney discounting (mKN)", "sec_num": "2.1.3" }, { "text": ") | ( ) ( ) ( )) ( ( ) ( ) | ( 1 2 1 1 1 1 1 1 1 \u2212 + \u2212 \u2212 + \u2212 + \u2212 + \u2212 + \u2212 \u2212 + \u2212 + \u2212 = \u2211 i n i i mKN i n i w i n i i n i i n i i n i i mKN w w P w w c w c D w c w w P i \u03b3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modified Kneser-Ney discounting (mKN)", "sec_num": "2.1.3" }, { "text": "2 4 1 2 3 1 2 2 1 N N N N N D N N N N N D N N N N N D \u22c5 + \u2212 = \u22c5 + \u2212 = \u22c5 + \u2212 =", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modified Kneser-Ney discounting (mKN)", "sec_num": "2.1.3" }, { "text": "and the gamma is a normalization constant such that the probabilities sum to one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modified Kneser-Ney discounting (mKN)", "sec_num": "2.1.3" }, { "text": "Entropy is widely used to measure information. The entropy of a random variable X ranges over what are predictable set T (words, letters, or parts-of-speech) can be defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entropy and Perplexity", "sec_num": "2.2" }, { "text": "\u2211 \u2208 \u2212 = T x x p x p X H ) ( log ) ( ) (", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entropy and Perplexity", "sec_num": "2.2" }, { "text": "Perplexity is a variant of entropy. 
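To make the entropy-perplexity relation concrete before the formal definition that follows, here is a minimal Python sketch that computes the entropy of a small hand-made distribution and the corresponding perplexity (two raised to the entropy); the distribution and its values are invented for illustration only:

```python
import math

def entropy(distribution):
    # H(X) = -sum_x p(x) * log2 p(x), skipping zero-probability outcomes.
    return -sum(p * math.log2(p) for p in distribution.values() if p > 0)

# A toy distribution over four prepositions (probabilities are invented).
p = {'in': 0.5, 'on': 0.25, 'at': 0.125, 'to': 0.125}
h = entropy(p)
perplexity = 2 ** h
print(h, perplexity)  # 1.75 bits, perplexity about 3.36
```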
Generally, the perplexity can be defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entropy and Perplexity", "sec_num": "2.2" }, { "text": "Entropy of sequence of words can be defined as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entropy and Perplexity", "sec_num": "2.2" }, { "text": "Where p(W 1 n ) can be replaced by n-gram models.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Entropy and Perplexity", "sec_num": "2.2" }, { "text": "To assess the ability of how LM finds the right preposition, we use various sizes of training sets, and three test sets from three different sources.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "3." }, { "text": "For each original test sentence, we make up some wrong ones, and then calculate the perplexity of the test sentences. The perplexity is the measurement of how well the LM can predict the sentence. The sentence with the lowest perplexity is the most possible sentence with respect to the given LM; we assume that sentence is the correct one.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment design", "sec_num": "3.1" }, { "text": "We conduct the experiments with the SRI Language Modeling Toolkit. [Stolcke, 2002] [ ", "cite_spans": [ { "start": 67, "end": 82, "text": "[Stolcke, 2002]", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Experiment design", "sec_num": "3.1" }, { "text": "2 \u2211 \u2208 \u2212 = L W n n n n W p W p w w w H 1 ) ( log ) ( ) ,..., , ( 1 2 1 2 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment design", "sec_num": "3.1" }, { "text": "set consists of 100 sentences of TOFEL-level questions. We collect these sentences from TOFEL reference books; they contain most of the English prepositions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment design", "sec_num": "3.1" }, { "text": "The training corpus is selected from LDC Gigaword corpora [LDC 2003 ]. The Gigaword corpora are very large English newswire text collections. There are four distinct international sources: Agence France Press English Service (AFE), Associated Press Worldstream English Service(APW), The New York Times Newswire Service (NYT) and The Xinhua News Agency English Service (XIE). The total size of the corpora is more than one gigabyte in word counts.", "cite_spans": [ { "start": 58, "end": 67, "text": "[LDC 2003", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiment design", "sec_num": "3.1" }, { "text": "We use the NYT corpus as the training set. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiment design", "sec_num": "3.1" }, { "text": "In the first experiment, we select 100 sentences from our training corpus as the test set. We fabricate the wrong sentences by replacing the correct preposition with other prepositions. We calculate the perplexity of the sentences with LMs and check if the sentence with the lowest perplexity is the original one. We do not list the values of perplexity, since it is meaningless for the closed test. In computing perplexities, the model must be constructed without any knowledge of the test set. The knowledge of the test set will make the perplexity artificially low. Again, we fabricate the wrong sentences by replacing the correct preposition with other prepositions. 
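The selection step used throughout these experiments can be sketched as follows. This is not the authors' SRILM setup; it is a simplified Python stand-in in which the bi-gram table is a toy example and a flat probability floor takes the place of the smoothing methods of Section 2.1, purely to show how candidate sentences are generated and how the one with the lowest perplexity is chosen:

```python
import math

PREPOSITIONS = ['in', 'on', 'at', 'to', 'with', 'for']

def sentence_perplexity(bigram_prob, tokens, floor=1e-6):
    # Per-word cross-entropy under a bi-gram model; unseen pairs get a
    # small floor probability as a crude stand-in for real smoothing.
    log_prob = 0.0
    for prev, word in zip(tokens, tokens[1:]):
        log_prob += math.log2(bigram_prob.get((prev, word), floor))
    return 2 ** (-log_prob / (len(tokens) - 1))

def best_candidate(bigram_prob, template):
    # The template contains one '__' slot; try every preposition and keep
    # the candidate sentence with the lowest perplexity.
    candidates = [template.replace('__', prep) for prep in PREPOSITIONS]
    return min(candidates,
               key=lambda s: sentence_perplexity(bigram_prob, s.split()))

# Toy bi-gram table (probabilities are invented for illustration only).
toy_model = {('my', 'sister'): 0.3, ('sister', 'whispered'): 0.1,
             ('whispered', 'in'): 0.2, ('in', 'my'): 0.3, ('my', 'ear'): 0.4}
print(best_candidate(toy_model, 'my sister whispered __ my ear'))  # picks the sentence with 'in'
```

In the actual experiments the perplexities come from SRILM-trained bi-gram and tri-gram models with GT, AD, or mKN smoothing rather than from this toy scorer.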
We calculate perplexities of the sentences with LMs of different sizes and check if the sentence with the lowest perplexity is the original one. 1868% 71%", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Closed tests", "sec_num": "3.2.1" }, { "text": "There is a problem in the setting of the previous two experiments. We do not check if the fabricated wrong sentences are also legal in the real world. Therefore, we collect 100 TOEFLlevel single-choice questions from pseudo TOEFL tests. Each sentence has a blank for a preposition. Four candidates are available, but only one is correct. For example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TOEFL-level tests", "sec_num": "3.2.3" }, { "text": "My sister whispered __ my ear.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "TOEFL-level tests", "sec_num": "3.2.3" }, { "text": "Then our task is to distinguish which of the following four sentences is correct.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(a) in (b) to (c) with (d) on", "sec_num": null }, { "text": "My sister whispered to my ear.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "My sister whispered in my ear. (correct)", "sec_num": null }, { "text": "My sister whispered with my ear.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "My sister whispered in my ear. (correct)", "sec_num": null }, { "text": "My sister whispered on my ear.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "My sister whispered in my ear. (correct)", "sec_num": null }, { "text": "We also train our LMs with different sizes of training set. We then use the LMs to calculate the perplexities of the four sentences. The system regards the sentence with the lowest perplexity as the correct one. The results in Table 7 show that tri-gram model with mKN smoothing gives the best result even though the training size is much smaller than the one for the bi-gram model. -200206(18) 69% 70% Table 8 shows a part of the test results that the LM gives wrong answers. The system chooses the candidate with the lowest perplexity as the answer; however, in these cases, the candidates with the lowest perplexities are wrong. We manually check these sentences and identify the necessary keyword. We find that, to give the right answer, the system must refer to some words that are not close to the blank. Such long-distance features cannot be learned in a short windows size of two or three; therefore, the tri-gram model cannot give the right answer. advantages, the first one is that it requires only untagged corpus. The second one is that it requires no domain knowledge. Thus, the approach can cooperate with other approaches in the future easily.", "cite_spans": [ { "start": 383, "end": 394, "text": "-200206(18)", "ref_id": null } ], "ref_spans": [ { "start": 227, "end": 234, "text": "Table 7", "ref_id": "TABREF7" }, { "start": 403, "end": 410, "text": "Table 8", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "(wrong)", "sec_num": null }, { "text": "To improve the accuracy, the system requires more linguistic knowledge. Other feature-based machine learning approaches, for instance, Maximum Entropy (ME) [Berger et al., 1996] , Conditional Random Fields (CRF) [Lafferty et al., 2001 ] are also promising. They can incorporate more long-distance linguistic features that LM cannot. 
[Rosenfeld, 1997] .", "cite_spans": [ { "start": 156, "end": 177, "text": "[Berger et al., 1996]", "ref_id": "BIBREF1" }, { "start": 212, "end": 234, "text": "[Lafferty et al., 2001", "ref_id": "BIBREF7" }, { "start": 333, "end": 350, "text": "[Rosenfeld, 1997]", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Error Analysis", "sec_num": "3.3" }, { "text": "The collection of linguistic features requires more knowledge engineering. In an English grammar textbook of college-level [Eastwood, 1999] , the usages of the prepositions are addressed by rules and examples, as listed in Table 9 and 10. To cooperate with the rules, a system requires linguistic resources to recognize the names of different entities such as countries, regions, towns, and time expressions. Moreover, the system still requires templates of specific usages. Table 10 gives many common phrases examples of the three prepositions: in-on-at (used for place only). These \"common\" phrases might appear in the corpus many times. Since they are short, they will be in the tri-gram model. Table 9 . Rules of preposition usage [Eastwood, 1999] Positive and Negative Rules At 1. Use in (not at) before the names of countries, regions, cities, and large towns.", "cite_spans": [ { "start": 123, "end": 139, "text": "[Eastwood, 1999]", "ref_id": "BIBREF3" }, { "start": 735, "end": 751, "text": "[Eastwood, 1999]", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 223, "end": 230, "text": "Table 9", "ref_id": null }, { "start": 475, "end": 483, "text": "Table 10", "ref_id": null }, { "start": 698, "end": 705, "text": "Table 9", "ref_id": null } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "3.3" }, { "text": "2. Use in (not at) with seasons, months, and years. 3. Use on (not at) before dates. 4. Without at before 'an hour before', 'a week later', 'two years afterwards' 5. Do not use at to introduce a time expression with ago. In 1. on a day or date, not in 2. in the morning/afternoon/evening' but 'the following morning', 'the next afternoon', 'the previous evening', etc. 3. When talking about how long something lasts or continues, use for, not in. 4. on/upon doing something, not in 5. made of wool/wood etc., not in 6. in is not used in expressions such as 'the shop is open six days a week.' 'He visits his father three times a year.' 'Bananas cost fifty pence a pound.' 'I drove to the hospital at ninety miles an hour.' On 1. Do not use a preposition to begin a time expression with next when the point of time is being considered in relation to the present: 'the next morning', 'the next afternoon'. 2. a good/bad thing about someone/something, not on 3. When talking about a particular afternoon, use on. When speaking generally, use in. Table 10 . 
Common phrase for in, on, and at [Eastwood, 1999] ", "cite_spans": [ { "start": 1087, "end": 1103, "text": "[Eastwood, 1999]", "ref_id": "BIBREF3" } ], "ref_spans": [ { "start": 1043, "end": 1051, "text": "Table 10", "ref_id": null } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "3.3" } ], "back_matter": [ { "text": "This research was partly supported by the National Science Council under GRANT NSC 94-2218-E-324 -003.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "The computational analysis of English", "authors": [ { "first": "Eric", "middle": [], "last": "Atwell", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Elliott", "suffix": "" } ], "year": 1987, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Atwell, Stephen Elliott, Dealing with ill-formed English text, in: The computational analysis of English : a corpus-based approach / edited by Roger Garside, Geoffrey Leech, Geoffrey Sampson, The computational analysis of English, London ; New York, Longman, 1987.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "A maximum entropy approach to natural language processing", "authors": [], "year": 1996, "venue": "Computational Linguistics", "volume": "22", "issue": "", "pages": "39--71", "other_ids": {}, "num": null, "urls": [], "raw_text": "et al., 1996] A. Berger, S. A. Della Pietra, and V. J. Della Pietra, A maximum entropy approach to natural language processing, Computational Linguistics, vol. 22, pp. 39-71, 1996.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "An empirical study of smoothing techniques for language modeling", "authors": [ { "first": ";", "middle": [ "S F" ], "last": "Goodman", "suffix": "" }, { "first": "J", "middle": [], "last": "Chen", "suffix": "" }, { "first": "", "middle": [], "last": "Goodman", "suffix": "" } ], "year": 1998, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "and Goodman, 1998] S. F. Chen and J. Goodman, An empirical study of smoothing techniques for language modeling, Technical Report TR-10-98, Computer Science Group, Harvard University, Aug. 1998.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The population frequencies of species and the estimation of population parameters", "authors": [ { "first": "John", "middle": [], "last": "Eastwood", "suffix": "" }, { "first": ";", "middle": [ "I J" ], "last": "Good", "suffix": "" } ], "year": 1953, "venue": "Biometrika", "volume": "40", "issue": "", "pages": "237--264", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Eastwood, 1999] John Eastwood, Oxford Practice Grammar, Oxford University Press, 1999. [Good, 1953] I.J. Good, The population frequencies of species and the estimation of population parameters, Biometrika 40: pp 237-264, 1953.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "A bit of Progress in Language Modeling", "authors": [ { "first": "Joshua", "middle": [ "T" ], "last": "Goodman", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Goodman, 2002] Joshua T. 
Goodman, A bit of Progress in Language Modeling, Technical Report, MSR-TR-2001-72, Microsoft Research, Redmond, 2002.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Natural language processing: the PLNLP approach", "authors": [ { "first": "[", "middle": [], "last": "Jensen", "suffix": "" } ], "year": 1993, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Jensen et al.,1993] Karen Jensen, George E. Heidorn, Stpehen D. Richardson (Eds.): Natural language processing: the PLNLP approach, Kluwer Academic Publishers, 1993.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition", "authors": [ { "first": "Martin", "middle": [ ";" ], "last": "Jurafsky", "suffix": "" }, { "first": "D", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "J", "middle": [ "H" ], "last": "Martin", "suffix": "" } ], "year": 2000, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Jurafsky and Martin, 2000] Jurafsky, D. and Martin, J.H., Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition, Prentice-Hall, Upper Saddle River, New Jersey, 2000.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data", "authors": [], "year": 2001, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "et al, 2001] Lafferty, J., McCallum, A., and Pereira, F., Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. Paper presented at the ICML-01.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Chris Manning and Hinrich Sch\u00fctze, Foundations of Statistical Natural Language Processing", "authors": [], "year": 1999, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "and Schutze, 1999] Chris Manning and Hinrich Sch\u00fctze, Foundations of Statistical Natural Language Processing, MIT Press, Cambridge, MA: May 1999.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "On structuring probabilistic dependencies in stochastic language modeling", "authors": [ { "first": "[", "middle": [], "last": "Naber", "suffix": "" } ], "year": 1994, "venue": "Computer Speech and Language", "volume": "8", "issue": "", "pages": "1--28", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Naber et al., 2003] Daniel Naber, A Rule-Based Style and Grammar Checker, diploma thesis, University Bielefeld, 2003. [Ney et al., 1994] Hermann Ney, Ute Essen, and Reinhard Kneser, On structuring probabilistic dependencies in stochastic language modeling, Computer Speech and Language 8: pp1-28, 1994.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "An English Grammar Checker as a Writing Aid for Students o f English as a Second Language", "authors": [ { "first": "[", "middle": [], "last": "Park", "suffix": "" } ], "year": 1997, "venue": "Proceedings of the Fifth Conference on Applied Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "[Park et al., 1997] Jong C. 
Park, Martha Palmer, and Clay Washburn, An English Grammar Checker as a Writing Aid for Students o f English as a Second Language, in Proceedings of the Fifth Conference on Applied Natural Language Processing, 1997. http://acl.ldc.upenn.edu/A/A97/A97-2014.pdf [Rosenfeld, 1997] Ronald Rosenfeld, A Whole Sentence Maximum Entropy Language Model, In Proc. IEEE workshop on Automatic Speech Recognition and Understanding, Santa Barbara, California, December 1997.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "SRILM -An Extensible Language Modeling Toolkit", "authors": [ { "first": "Andreas", "middle": [], "last": "Stolcke", "suffix": "" }, { "first": "", "middle": [], "last": "Stolcke", "suffix": "" } ], "year": 2002, "venue": "Proc. Intl. Conf. Spoken Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stolcke, 2002] Andreas Stolcke, SRILM -An Extensible Language Modeling Toolkit, in Proc. Intl. Conf. Spoken Language Processing, Denver, Colorado, September 2002", "links": null } }, "ref_entries": { "FIGREF0": { "text": "shows a general architecture of an English grammar checker. An ideal system should consist of both rule-based and language model approaches. Linguistic knowledge of the rulebased system is acquired from domain experts. Statistical knowledge of the language model is gathered from training corpus by programs. In this paper, we design several experiments to assess the ability of the LM on the preposition usage problem.", "uris": null, "type_str": "figure", "num": null }, "TABREF0": { "text": "The second test set is another 100 sentences that we collect from various English literatures outside the training set. This is an open test. In the first two experiments, we focus on only three prepositions: in, on, and at. We fabricate the wrong sentences by replacing the correct preposition with other ones. The third test", "num": null, "html": null, "content": "
", "type_str": "table" }, "TABREF1": { "text": "The training set sizes in different experiments are different. For bi-gram model, we select the news of the NYT from January 1999 to June 2002 as our training corpus. It consists of 351,427,489 words and is about 1.89 GB. We do not perform any preprocessing and do not remove stop words. For tri-gram model, we select the news of NYT from January 2001 to June 2002. This corpus consists of 156,896,511 words and the size is about 856 MB.", "num": null, "html": null, "content": "
Training Set | # of words | MB
nyt200111-200206(8) | 69,865,209 | 384
nyt200101-200206(18) | 156,896,511 | 865
nyt199901-200206(42) | 351,427,489 | 1,890
", "type_str": "table" }, "TABREF2": { "text": "", "num": null, "html": null, "content": "
Training Set | # of words | MB
nyt200203(1) | 9,310,195 | 52
nyt200203-200204(2) | 18,734,690 | 102
nyt200201-200206(6) | 52,574,963 | 289
nyt200108-200206(11) | 97,578,257 | 537
nyt200101-200206(18) | 156,896,511 | 865
", "type_str": "table" }, "TABREF3": { "text": "", "num": null, "html": null, "content": "", "type_str": "table" }, "TABREF4": { "text": "", "num": null, "html": null, "content": "
Smoothing method
Training Set | GT | mKN | AD
nyt200111-200206(8) | 65% | 65% | 73%
nyt200101-200206(18) | 65% | 65% |
nyt199901-200206(42) | 66% | |
", "type_str": "table" }, "TABREF5": { "text": "and 6 show the accuracy of the second test set. The mKN smoothing method gives the best accuracy 71%.", "num": null, "html": null, "content": "", "type_str": "table" }, "TABREF6": { "text": "", "num": null, "html": null, "content": "
Smoothing method
Training Set | GT | mKN | AD
nyt200111-200206(8) | 47% | 49% | 50%
nyt200101-200206(18) | 48% | 51% |
nyt199901-200206(42) | 47% | |

Smoothing method
Training Set | GT | mKN | AD
nyt200203(1) | 61% | 57% | 61%
nyt200203-200204(2) | 61% | 62% |
nyt200201-200206(6) | 67% | 69% |
nyt200108-200206(11) | 68% | 69% |
nyt200101-200206(18) | 68% | 71% |
", "type_str": "table" }, "TABREF7": { "text": "", "num": null, "html": null, "content": "
Smoothing method
Training Set | GT | mKN
Bigram model
nyt199901-200206(42) | 53% | 54%
Trigram model
nyt200101-200206(18) | 69% | 70%
", "type_str": "table" }, "TABREF8": { "text": "In this paper, we report the evaluation of adopting the language model on checking the English prepositions. In our experiments, we assume that a correct sentence has less perplexity than the wrong ones. The experiment results show that tri-gram language model can find most of the correct prepositions. The modified Kneser-Ney smoothing method gives the best accuracy in three test sets. Experiment results show that the accuracy of open test is 71%, the accuracy of closed test is 89%, and the accuracy on TOEFL-level test is 70%. This approach has two", "num": null, "html": null, "content": "
No. | Question | Choice | Correct answer | logprob | Perplexity | LM answer
1 | It is sometimes difficult to make pleasant conversation ___ people | among | | -35.5006 | 343.367 |
 | | to | | -33.5936 | 250.923 | v
 | | for | | -36.7712 | 423.168 |
 | | with | v | -33.6707 | 254.127 |
", "type_str": "table" } } } }