{
"paper_id": "W01-0513",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T06:00:36.830752Z"
},
"title": "Is Knowledge-Free Induction of Multiword Unit Dictionary Headwords a Solved Problem?",
"authors": [
{
"first": "Patrick",
"middle": [],
"last": "Schone",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Colorado",
"location": {
"postCode": "80309",
"settlement": "Boulder",
"region": "CO"
}
},
"email": "schone@cs.colorado.edu"
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Colorado",
"location": {
"postCode": "80309",
"settlement": "Boulder",
"region": "CO"
}
},
"email": "jurafsky@cs.colorado.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We seek a knowledge-free method for inducing multiword units from text corpora for use as machine-readable dictionary headwords. We provide two major evaluations of nine existing collocation-finders and illustrate the continuing need for improvement. We use Latent Semantic Analysis to make modest gains in performance, but we show the significant challenges encountered in trying this approach.",
"pdf_parse": {
"paper_id": "W01-0513",
"_pdf_hash": "",
"abstract": [
{
"text": "We seek a knowledge-free method for inducing multiword units from text corpora for use as machine-readable dictionary headwords. We provide two major evaluations of nine existing collocation-finders and illustrate the continuing need for improvement. We use Latent Semantic Analysis to make modest gains in performance, but we show the significant challenges encountered in trying this approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A multiword unit (MWU) is a connected collocation: a sequence of neighboring words \"whose exact and unambiguous meaning or connotation cannot be derived from the meaning or connotation of its components\" (Choueka, 1988) . In other words, MWUs are typically non-compositional at some linguistic level. For example, phonological non-compositionality has been observed (Finke & Weibel, 1997; Gregory, et al, 1999) where words like \"got\" [g<t] and \"to\" [tu] change phonetically to \"gotta\" [g<rF] when combined. We have interest in inducing headwords for machine-readable dictionaries (MRDs), so our interest is in semantic rather than phonological non-compositionality. As an example of semantic non-compositionality, consider \"compact disk\": one could not deduce that it was a music medium by only considering the semantics of \"compact\" and \"disk.\"",
"cite_spans": [
{
"start": 204,
"end": 219,
"text": "(Choueka, 1988)",
"ref_id": "BIBREF2"
},
{
"start": 366,
"end": 388,
"text": "(Finke & Weibel, 1997;",
"ref_id": "BIBREF11"
},
{
"start": 389,
"end": 410,
"text": "Gregory, et al, 1999)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "MWUs may also be non-substitutable and/or non-modifiable (Manning and Sch\u00fctze, 1999) . Nonsubstitutability implies that substituting a word of the MWU with its synonym should no longer convey the same original content: \"compact disk\" does not readily imply \"densely-packed disk.\" Nonmodifiability, on the other hand, suggests one cannot modify the MWU's structure and still convey the same content: \"compact disk\" does not signify \"disk that is compact.\" MWU dictionary headwords generally satisfy at least one of these constraints. For example, a compositional phrase would typically be excluded from a hard-copy dictionary since its constituent words would already be listed. These strategies allow hard-copy dictionaries to remain compact.",
"cite_spans": [
{
"start": 57,
"end": 84,
"text": "(Manning and Sch\u00fctze, 1999)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As mentioned, we wish to find MWU headwords for machine-readable dictionaries (MRDs). Although space is not an issue in MRDs, we desire to follow the lexicographic practice of reducing redundancy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As Sproat indicated, \"simply expanding the dictionary to encompass every word one is ever likely to encounter is wrong: it fails to take advantage of regularities\" (1992, p. xiii). Our goal is to identify an automatic, knowledge-free algorithm that finds all and only those collocations where it is necessary to supply a definition. \"Knowledge-free\" means that the process should proceed without human input (other than, perhaps, indicating whitespace and punctuation).",
"cite_spans": [
{
"start": 164,
"end": 173,
"text": "(1992, p.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This seems like a solved problem. Many collocation-finders exist, so one might suspect that most could suffice for finding MWU dictionary headwords. To verify this, we evaluate nine existing collocation-finders to see which best identifies valid headwords. We evaluate using two completely separate gold standards: (1) WordNet and (2) a compendium of Internet dictionaries. Although web-based resources are dynamic and have better coverage than WordNet (especially for acronyms and names), we show that WordNet-based scores are comparable to those using Internet MRDs. Yet the evaluations indicate that significant improvement is still needed in MWU-induction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As an attempt to improve MWU headword induction, we introduce several algorithms using Latent Semantic Analysis (LSA). LSA is a technique which automatically induces semantic relationships between words. We use LSA to try to eliminate proposed MWUs which are semantically compositional. Unfortunately, this does not help.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Yet when we use LSA to identify substitutable delimiters. This suggests that in a language with MWUs, we do show modest performance gains.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For decades, researchers have explored various techniques for identifying interesting collocations. There have essentially been three separate kinds of approaches for accomplishing this task. These approaches could be broadly classified into (1) segmentation-based, (2) word-based and knowledgedriven, or (3) word-based and probabilistic. We will illustrate strategies that have been attempted in each of the approaches. Since we assume knowledge of whitespace, and since many of the first and all of the second categories rely upon human input, we will be most interested in the third category.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Previous Approaches",
"sec_num": "2"
},
{
"text": "Some researchers view MWU-finding as a natural by-product of segmentation. One can regard text as a stream of symbols and segmentation as a means of placing delimiters in that stream so as to separate logical groupings of symbols from one another. A segmentation process may find that a symbol stream should not be delimited even though subcomponents of the stream have been seen elsewhere. In such cases, these larger units may be MWUs. The principal work on segmentation has focused either on identifying words in phonetic streams (Saffran, et. al, 1996; Brent, 1996; de Marcken, 1996) or on tokenizing Asian and Indian languages that do not normally include word delimiters in their orthography (Sproat, et al, 1996; Ponte and Croft 1996; Shimohata, 1997; Teahan, et al., 2000 ; and many others). Such efforts have employed various strategies for segmentation, including the use of hidden Markov models, minimum description length, dictionary-based approaches, probabilistic automata, transformation-based learning, and text compression. Some of these approaches require significant sources of human knowledge, though others, especially those that follow data compression or HMM schemes, do not.",
"cite_spans": [
{
"start": 533,
"end": 556,
"text": "(Saffran, et. al, 1996;",
"ref_id": "BIBREF26"
},
{
"start": 557,
"end": 569,
"text": "Brent, 1996;",
"ref_id": "BIBREF1"
},
{
"start": 570,
"end": 587,
"text": "de Marcken, 1996)",
"ref_id": null
},
{
"start": 698,
"end": 719,
"text": "(Sproat, et al, 1996;",
"ref_id": "BIBREF36"
},
{
"start": 720,
"end": 741,
"text": "Ponte and Croft 1996;",
"ref_id": "BIBREF24"
},
{
"start": 742,
"end": 758,
"text": "Shimohata, 1997;",
"ref_id": "BIBREF29"
},
{
"start": 759,
"end": 779,
"text": "Teahan, et al., 2000",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation-driven Strategies",
"sec_num": "2.1"
},
{
"text": "These approaches could be applied to languages where word delimiters exist (such as in European languages delimited by the space character). However, in such languages, it seems more prudent to simply take advantage of delimiters rather than introducing potential errors by trying to find word boundaries while ignoring knowledge of the level and identify appropriate word combinations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Segmentation-driven Strategies",
"sec_num": "2.1"
},
{
"text": "Some researchers start with words and propose MWU induction methods that make use of parts of speech, lexicons, syntax or other linguistic structure (Justeson and Katz, 1995; Jacquemin, et al., 1997; Daille, 1996) . For example, Justeson and Katz indicated that the patterns NOUN NOUN and ADJ NOUN are very typical of MWUs. Daille also suggests that in French, technical MWUs follow patterns such as \"NOUN de NOUN\" (1996, p. 50) . To find word combinations that satisfy such patterns in both of these situations necessitates the use of a lexicon equipped with part of speech tags. Since we are interested in knowledge-free induction of MWUs, these approaches are less directly related to our work. Furthermore, we are not really interested in identifying constructs such as general noun phrases as the above rules might generate, but rather, in finding only those collocations that one would typically need to define.",
"cite_spans": [
{
"start": 149,
"end": 174,
"text": "(Justeson and Katz, 1995;",
"ref_id": null
},
{
"start": 175,
"end": 199,
"text": "Jacquemin, et al., 1997;",
"ref_id": "BIBREF18"
},
{
"start": 200,
"end": 213,
"text": "Daille, 1996)",
"ref_id": "BIBREF3"
},
{
"start": 409,
"end": 428,
"text": "NOUN\" (1996, p. 50)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Word-based, knowledge-driven Strategies",
"sec_num": "2.2"
},
{
"text": "The third category assumes at most whitespace and punctuation knowledge and attempts to infer MWUs using word combination probabilities. Table 1 (see next page) shows nine commonly-used probabilistic MWU-induction approaches. In the table, f and P signify frequency and probability X X of a word X. A variable XY indicates a word bigram and indicates its expected frequency at random.",
"cite_spans": [],
"ref_spans": [
{
"start": 137,
"end": 144,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Word-based, Probabilistic Approaches",
"sec_num": "2.3"
},
{
"text": "An overbar signifies a variable's complement. For more details, one can consult the original sources as well as Ferreira and Pereira (1999) and Manning and Sch\u00fctze (1999) .",
"cite_spans": [
{
"start": 144,
"end": 170,
"text": "Manning and Sch\u00fctze (1999)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "XY",
"sec_num": null
},
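{
"text": "To make the bigram measures in Table 1 concrete, the following Python sketch (our illustration, not code from the paper) computes several of them from raw corpus counts; n is the corpus size and mu = f_X f_Y / n is the expected random frequency of the bigram XY:\n\nimport math\n\ndef bigram_scores(f_xy, f_x, f_y, n):\n    # probabilities estimated from counts\n    p_xy, p_x, p_y = f_xy / n, f_x / n, f_y / n\n    mu = f_x * f_y / n  # expected frequency of XY at random\n    return {\n        'freq': f_xy,\n        'mi': math.log2(p_xy / (p_x * p_y)),  # pointwise mutual information\n        'scp': p_xy ** 2 / (p_x * p_y),  # symmetric conditional probability\n        'dice': 2.0 * f_xy / (f_x + f_y),\n        't_score': (f_xy - mu) / math.sqrt(f_xy * (1.0 - f_xy / n)),\n        'z_score': (f_xy - mu) / math.sqrt(mu * (1.0 - mu / n)),\n    }",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Word-based, Probabilistic Approaches",
"sec_num": "2.3"
},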
{
"text": "Prior to applying the algorithms, we lemmatize using a weakly-informed tokenizer that knows only that whitespace and punctuation separate words. Punctuation can either be discarded or treated as words. Since we are equally interested in finding units like \"Dr.\" and \"U. S.,\" we opt to treat punctuation as words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Access",
"sec_num": "3"
},
{
"text": "Once we tokenize, we use Church's (1995) suffix array approach to identify word n-grams that occur at least T times (for T=10). We then rank-order the (Fano, 1961; Church and Hanks, 1990) log (P / P P ) (Resnik, 1996) Symmetric Conditional Probability (Ferreira and Pereira, 1999)",
"cite_spans": [
{
"start": 151,
"end": 163,
"text": "(Fano, 1961;",
"ref_id": "BIBREF10"
},
{
"start": 164,
"end": 187,
"text": "Church and Hanks, 1990)",
"ref_id": null
},
{
"start": 203,
"end": 217,
"text": "(Resnik, 1996)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Access",
"sec_num": "3"
},
{
"text": "Z-Score (Smadja, 1993; Fontenelle, et al., 1994) Student's t-Score (Church and Hanks, 1990) n-gram list in accordance to each probabilistic algorithm. This task is non-trivial since most algorithms were originally suited for finding twoword collocations. We must therefore decide how to expand the algorithms to identify general n-grams (say, C=w w ...w ). We can either generalize or 1 2 n approximate. Since generalizing requires exponential compute time and memory for several of the algorithms, approximation is an attractive alternative.",
"cite_spans": [
{
"start": 8,
"end": 22,
"text": "(Smadja, 1993;",
"ref_id": "BIBREF32"
},
{
"start": 23,
"end": 48,
"text": "Fontenelle, et al., 1994)",
"ref_id": "BIBREF13"
},
{
"start": 67,
"end": 91,
"text": "(Church and Hanks, 1990)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Access",
"sec_num": "3"
},
{
"text": "One approximation redefines X and Y to be, respectively, the word sequences w w ...w and 1 2 i w w ...w where i is chosen to maximize P P .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Access",
"sec_num": "3"
},
{
"text": "This has a natural interpretation of being the expected probability of concatenating the two most probable substrings in order to form the larger unit.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Access",
"sec_num": "3"
},
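{
"text": "A minimal sketch of this approximation (ours, assuming a dict prob that maps word tuples, including single words, to probabilities): the n-gram is split at the point i that maximizes P_X P_Y, and a bigram measure is then applied to the two halves.\n\nimport math\n\ndef best_split(ngram, prob):\n    # choose the split maximizing the product of the two substring probabilities\n    i = max(range(1, len(ngram)), key=lambda j: prob[ngram[:j]] * prob[ngram[j:]])\n    return ngram[:i], ngram[i:]\n\ndef approx_mi(ngram, prob):\n    # pointwise mutual information of the n-gram treated as a pseudo-bigram\n    x, y = best_split(ngram, prob)\n    return math.log2(prob[ngram] / (prob[x] * prob[y]))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Access",
"sec_num": "3"
},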
{
"text": "Since it can be computed rapidly with low memory costs, we use this approximation. Two additional issues need addressing before evaluation. The first regards document sourcing. If an n-gram appears in multiple sources (eg., Congressional Record versus Associated Press), its likelihood of accuracy should increase. This is particularly true if we are looking for MWU headwords for a general versus specialized dictionary. Phrases that appear in one source may in fact be general MWUs, but frequently, they are text-specific units. Hence, precision gained by excluding single-source n-grams may be worth losses in recall. We will measure this trade-off.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Access",
"sec_num": "3"
},
{
"text": "Second, evaluating with punctuation as words and applying no filtering mechanism may unfairly bias against some algorithms. Pre-or post-processing of n-grams with a linguistic filter has shown to improve some induction algorithms' performance rules as in Section 2.2. Yet we can filter by pruning n-grams whose beginning or ending word is among the top N most frequent words. This unfortunately eliminates acronyms like \"U. S.\" and phrasal verbs like \"throw up.\" However, discarding some words may be worthwhile if the final list of n-grams is richer in terms of MRD headwords. We therefore evaluate with such an automatic filter, arbitrarily (and without optimization) choosing N=75.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Access",
"sec_num": "3"
},
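{
"text": "The filter itself is simple; a sketch (ours), with n-grams represented as word tuples and the corpus as a token list:\n\nfrom collections import Counter\n\ndef high_frequency_filter(ngrams, corpus_tokens, n_top=75):\n    # prune n-grams whose first or last word is among the n_top most frequent words\n    stop = {w for w, _ in Counter(corpus_tokens).most_common(n_top)}\n    return [g for g in ngrams if g[0] not in stop and g[-1] not in stop]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Access",
"sec_num": "3"
},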
{
"text": "A natural scoring standard is to select a language and evaluate against headwords from existing dictionaries in that language. Others have used similar standards (Daille, 1996) , but to our knowledge, none to the extent described here. We evaluate thousands of hypothesized units from an unconstrained corpus. Furthermore, we use two separate evaluation gold standards: (1) WordNet (Miller, et al, 1990) and (2) a collection of Internet MRDs. Using two gold standards helps valid MWUs. It also provides evaluation using both static and dynamic resources. We choose to evaluate in English due to the wealth of linguistic resources. The \"* *\" and \"* * *\" are actual units.",
"cite_spans": [
{
"start": 162,
"end": 176,
"text": "(Daille, 1996)",
"ref_id": "BIBREF3"
},
{
"start": 382,
"end": 403,
"text": "(Miller, et al, 1990)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Performance",
"sec_num": "4"
},
{
"text": "In particular, we use a randomly-selected corpus the first five columns as \"information-like.\" consisting of a 6.7 million word subset of the TREC Similarly, since the last four columns share databases (DARPA, 1993 (DARPA, -1997 .",
"cite_spans": [
{
"start": 202,
"end": 214,
"text": "(DARPA, 1993",
"ref_id": null
},
{
"start": 215,
"end": 228,
"text": "(DARPA, -1997",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Performance",
"sec_num": "4"
},
{
"text": "properties of the frequency approach, we will refer Table 2 illustrates a sample of rank-ordered output to them as \"frequency-like.\" from each of the different algorithms (following the One's application may dictate which set of cross-source, filtered paradigm described in section algorithms to use. Our gold standard selection 3). Note that algorithms in the first four columns reflects our interest in general word dictionaries, so produce results that are similar to each other as do results we obtain may differ from results we might those in the last four columns. Although the mutual have obtained using terminology lexicons. information results seem to be almost in a class of If our gold standard contains K MWUs with their own, they actually are similar overall to the corpus frequencies satisfying threshold (T=10), our first four sets of results; therefore, we will refer to figure of merit (FOM) is given by ",
"cite_spans": [],
"ref_spans": [
{
"start": 52,
"end": 59,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Evaluating Performance",
"sec_num": "4"
},
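{
"text": "The FOM can be computed directly from a ranked hypothesis list; a sketch (ours), where gold is the set of the K valid MWUs satisfying the frequency threshold:\n\ndef figure_of_merit(ranked_ngrams, gold):\n    # FOM = (1/K) * sum of precisions P_i = i / H_i at each correct hit\n    hits, total = 0, 0.0\n    for h, ngram in enumerate(ranked_ngrams, start=1):\n        if ngram in gold:\n            hits += 1\n            total += hits / h  # precision at the i-th correct MWU\n    return total / len(gold)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluating Performance",
"sec_num": "4"
},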
{
"text": "WordNet has definite advantages as an evaluation resource. It has in excess of 50,000 MWUs, is freely accessible, widely used, and is in electronic form. Yet, it obviously cannot contain every MWU. For instance, our corpus contains 177,331 n-grams (for 2n10) satisfying T10, but WordNet contains only 2610 of these. It is unclear, therefore, if algorithms are wrong when they propose MWUs that are not in WordNet. We will assume they are wrong but with a special caveat for proper nouns. WordNet includes few proper noun MWUs. Yet several algorithms produce large numbers of proper nouns. This biases against them. One could contend that all proper nouns MWUs are valid, but we disagree. Although such may be MWUs, they are not necessarily MRD headwords; one would not include every proper noun in a dictionary, but rather, those needing definitions. To overcome this, we will have two scoring modes. The first, \"S\" mode (standing for some) discards any proposed capitalized n-gram whose uncapitalized version is not in WordNet. The second mode \"N\" (for none) disregards all capitalized n-grams. Table 3 illustrates algorithmic performance as compared to the 2610 MWUs from WordNet. The first double column illustrates \"out-of-the-box\" performance on all 177,331 possible n-grams. The second double column shows cross-sourcing: only hypothesizing MWUs that appear in at least two separate datasets (124,952 in all), but being evaluated against all of the 2610 valid units. Double columns 3 and 4 show effects from high-frequency filtering the n-grams of the first and second columns (reporting only 29,716 and 17,720 n-grams) respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 1096,
"end": 1103,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "WordNet-based Evaluation",
"sec_num": "4.1"
},
{
"text": "As Table 3 suggests, for every condition, the information-like algorithms seem to perform best at identifying valid, general MWU headwords. Moreover, they are enhanced when cross-sourcing is considered; but since much of their strength comes from identifying proper nouns, filtering has the frequency-like approaches are independent of data source. They also improve significantly with filtering. Overall, though, after the algorithms are judged, even the best score of 0.265 is far short of the maximum possible, namely 1.0. ",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "WordNet-based Evaluation",
"sec_num": "4.1"
},
{
"text": "Since WordNet is static and cannot report on all of a corpus' n-grams, one may expect different performance by using a more all-encompassing, dynamic resource. The Internet houses dynamic resources which can judge practically every induced n-gram. With permission and sufficient time, one can repeatedly query websites that host large collections of MRDs and evaluate each n-gram.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Web-based Evaluation",
"sec_num": "4.2"
},
{
"text": "Having approval, we queried: (1) onelook.com, (2) acronymfinder.com, and (3) infoplease.com. The first website interfaces with over 600 electronic dictionaries. The second is devoted to identifying proper acronyms. The third focuses on world facts such as historical figures and organization names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Web-based Evaluation",
"sec_num": "4.2"
},
{
"text": "To minimize disruption to websites by reducing the total number of queries needed for evaluation, we use an evaluation approach from the information retrieval community (Sparck-Jones and van Rijsbergen, 1975) . Each algorithm reports its top 5000 MWU choices and the union of these choices (45192 possible n-grams) is looked up on the Internet. Valid MWUs identified at any website are assumed to be the only valid units in the data.",
"cite_spans": [
{
"start": 169,
"end": 208,
"text": "(Sparck-Jones and van Rijsbergen, 1975)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Web-based Evaluation",
"sec_num": "4.2"
},
{
"text": "Algorithms are then evaluated based on this showed how one could compute latent semantic collection. Although this strategy for evaluation is vectors for any word in a corpus (Schone and not flawless, it is reasonable and makes dynamic Jurafsky, 2000) . Using the same approach, we evaluation tractable. Table 4 shows the algorithms' compute semantic vectors for every proposed word performance (including proper nouns).",
"cite_spans": [
{
"start": 236,
"end": 251,
"text": "Jurafsky, 2000)",
"ref_id": "BIBREF27"
}
],
"ref_spans": [
{
"start": 304,
"end": 311,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Web-based Evaluation",
"sec_num": "4.2"
},
{
"text": "n-gram C=X X ...X Since LSA involves word Though Internet dictionaries and WordNet are counts, we can also compute semantic vectors completely separate \"gold standards,\" results are surprisingly consistent. One can conclude that WordNet may safely be used as a gold standard in future MWU headword evaluations. Also, ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Web-based Evaluation",
"sec_num": "4.2"
},
{
"text": "Can performance be improved? Numerous strategies could be explored. An idea we discuss here tries using induced semantics to rescore the output of the best algorithm (filtered, cross-sourced Zscore) and eliminate semantically compositional or modifiable MWU hypotheses. Deerwester, et al (1990) introduced Latent Semantic Analysis (LSA) as a computational technique for inducing semantic relationships between words and documents. It forms highdimensional vectors using word counts and uses singular value decomposition to project those vectors into an optimal k-dimensional, \"semantic\" subspace (see Landauer, et al, 1998) .",
"cite_spans": [
{
"start": 270,
"end": 294,
"text": "Deerwester, et al (1990)",
"ref_id": "BIBREF5"
},
{
"start": 601,
"end": 623,
"text": "Landauer, et al, 1998)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Improvement strategies",
"sec_num": "5"
},
{
"text": "Following an approach from Sch\u00fctze (1993), we 1 2 n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improvement strategies",
"sec_num": "5"
},
{
"text": "(denoted by ) for C's subcomponents. These can either include ( ) or exclude ( ) C's counts. We seek to see if induced semantics can help eliminate incorrectly-chosen MWUs. As will be shown, the effort using semantics in this nature has a very small payoff for the expended cost.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improvement strategies",
"sec_num": "5"
},
{
"text": "Non-compositionality is a key component of valid MWUs, so we may desire to emphasize n-grams that are semantically non-compositional. Suppose we wanted to determine if C (defined above) were noncompositional. Then given some meaning function, , C should satisfy an equation like: scores. These formulations suggest that several of the probabilistic algorithms we have seen include non-compositionality measures already. However, since the probabilistic algorithms rely only on distributional information obtained by considering juxtaposed words, they tend to incorporate a significant amount of non-semantic information such as syntax. Can semantic-only rescoring help?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-compositionality",
"sec_num": "5.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g( (C) , h( (X ),...,(X ) ) )0,",
"eq_num": "(1)"
}
],
"section": "Non-compositionality",
"sec_num": "5.1"
},
{
"text": "To find out, we must select g, h, and . Since we want to eliminate MWUs that are compositional, we want h's output to correlate well with C when there is compositionality and correlate poorly otherwise. Frequently, LSA vectors are correlated using the cosine between them:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-compositionality",
"sec_num": "5.1"
},
{
"text": "A large cosine indicates strong correlation, so large values for g(a,b)=1-|cos(a,b)| should signal weak correlation or non-compositionality. h could",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-compositionality",
"sec_num": "5.1"
},
{
"text": "represent a weighted vector sum of the components' required for this task. This seems to be a significant semantic vectors with weights (w ) set to either 1.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-compositionality",
"sec_num": "5.1"
},
{
"text": "component. Yet there is still another: maybe i or the reciprocal of the words' frequencies. semantic compositionality is not always bad. Table 5 indicates several results using these Interestingly, this is often the case. Consider settings. As the first four rows indicate and as vice_president, organized crime, and desired, non-compositionality is more apparent for Marine_Corps. Although these are MWUs, one * (i.e., the vectors derived from excluding C's X counts) than for . Yet, performance overall is X horrible, particularly considering we are rescoring Z-score output whose score was 0.269. Rescoring caused five-fold degradation! to compute the * for each possible n-gram X combination. Since the probabilistic algorithms already identify n-grams that share strong distributional properties with their components, it seems imprudent to exhaust resources on this LSAbased strategy for non-compositionality. These findings warrant some discussion. Why did non-compositionality fail? Certainly there is the possibility that better choices for g, h, and could yield improvements. We actually spent months trying to find an optimal combination as well as a strategy for coupling LSA-based scores with the Zscores, but without avail. Another possibility: although LSA can find semantic relationships, it may not make semantic decisions at the level would still expect that the first is related to president, the second relates to crime, and the last relates to Marine. Similarly, tokens such as Johns_Hopkins and Elvis are anaphors for Johns_Hopkins_University and Elvis_Presley, so they should have similar meanings.",
"cite_spans": [],
"ref_spans": [
{
"start": 137,
"end": 144,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Non-compositionality",
"sec_num": "5.1"
},
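{
"text": "For reference, the rescoring function corresponding to rows 1-4 of Table 5 can be sketched as follows (our illustration; vec_c is the LSA vector of the candidate MWU and component_vecs are those of its words):\n\nimport numpy as np\n\ndef noncompositionality(vec_c, component_vecs, weights=None):\n    # g(a,b) = 1 - |cos(a,b)|, with h the weighted sum of the component vectors\n    if weights is None:\n        weights = [1.0] * len(component_vecs)\n    h = sum(w * v for w, v in zip(weights, component_vecs))\n    cosine = float(np.dot(vec_c, h) / (np.linalg.norm(vec_c) * np.linalg.norm(h)))\n    return 1.0 - abs(cosine)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-compositionality",
"sec_num": "5.1"
},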
{
"text": "This begs the question: can induced semantics help at all? The answer is \"yes.\" The key is using LSA where it does best: finding things that are similar -or substitutable.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-compositionality",
"sec_num": "5.1"
},
{
"text": "For every collocation C=X X ..X X X ..X , we 1 2 i-1 i+1 n i attempt to find other similar patterns in the data, X X ..X YX ..X . If X and Y are semantically However, guilty and innocent are semantically related, but pleaded_guilty and pleaded_innocent are not MWUs. We would like to emphasize only ngrams whose substitutes are valid MWUs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-substitutivity",
"sec_num": "5.2"
},
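{
"text": "A sketch of the normalized cosine described above (ours; random_vecs holds the LSA vectors of 200 randomly chosen words):\n\nimport numpy as np\n\ndef cosine(x, y):\n    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))\n\ndef norm_stats(word_vec, random_vecs):\n    # correlation mean and standard deviation of the word against random words\n    cosines = [cosine(word_vec, r) for r in random_vecs]\n    return float(np.mean(cosines)), float(np.std(cosines))\n\ndef ncos(vec_x, vec_y, stats_x, stats_y):\n    # normalize the cosine by each word's statistics and keep the minimum\n    c = cosine(vec_x, vec_y)\n    return min((c - mu) / sigma for mu, sigma in (stats_x, stats_y))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-substitutivity",
"sec_num": "5.2"
},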
{
"text": "To show how we do this using LSA, suppose we want to rescore a list L whose entries are potential MWUs. For every entry X in L, we seek out all other entries whose sorted order is less than some maximum value (such as 5000) that have all but one word in common. For example, suppose X is \"bachelor_'_s_degree.\" The only other entry that matches in all but one word is \"master_'_s_degree.\" If the semantic vectors for \"bachelor\" and \"master\" have a normalized cosine score greater than a threshold of 2.0, we then say that the two MWUs are in each others substitution set. To rescore, we assign a new score to each entry in substitution set. Each element in the substitution set gets the same score. The score is derived using a combination of the previous Z-scores for each element in the substitution set. The combining function may be an averaging, or a computation of the median, the maximum, or something else. The maximum outperforms the average and the median on our data. By applying in to our data, we observe a small but visible improvement of 1.3% absolute to .282 (see Fig. 1 ). It is also possible that other improvements could be gained using other combining strategies.",
"cite_spans": [],
"ref_spans": [
{
"start": 1080,
"end": 1086,
"text": "Fig. 1",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Non-substitutivity",
"sec_num": "5.2"
},
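{
"text": "The rescoring step might look like the following sketch (ours; entries holds (n-gram, Z-score) pairs for the top-ranked list, and ncos_words compares two single words as above):\n\ndef rescore_by_substitution(entries, ncos_words, threshold=2.0):\n    # propagate the maximum Z-score across each substitution set\n    scores = dict(entries)\n    grams = [g for g, _ in entries]\n    for a in range(len(grams)):\n        for b in range(a + 1, len(grams)):\n            g1, g2 = grams[a], grams[b]\n            if len(g1) != len(g2):\n                continue\n            diffs = [(x, y) for x, y in zip(g1, g2) if x != y]\n            if len(diffs) == 1 and ncos_words(*diffs[0]) > threshold:\n                top = max(scores[g1], scores[g2])\n                scores[g1] = scores[g2] = top\n    return scores",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Non-substitutivity",
"sec_num": "5.2"
},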
{
"text": "This paper identifies several new results in the area of MWU-finding. We saw that MWU headword evaluations using WordNet provide similar results to those obtained from far more extensive webbased resources. Thus, one could safely use WordNet as a gold standard for future evaluations. We also noted that information-like algorithms, particularly Z-scores, SCP, and $2, seem to perform best at finding MRD headwords regardless of filtering mechanism, but that improvements are still needed. We proposed two new LSA-based approaches which attempted to address issues of non-compositionality and non-substitutivity. Apparently, either current algorithms already capture much non-compositionality or LSA-based models of non-compositionality are of little help. LSA does help somewhat as a model of substitutivity. However, LSA-based gains are small compared to the effort required to obtain them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
}
],
"back_matter": [
{
"text": "The authors would like to thank the anonymous reviewers for their comments and insights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF1": {
"ref_id": "b1",
"title": "Distributional regularity and phonotactic constraints are useful for segmentation",
"authors": [
{
"first": "M",
"middle": [
"R"
],
"last": "Brent",
"suffix": ""
},
{
"first": "T",
"middle": [
"A"
],
"last": "Cartwright",
"suffix": ""
}
],
"year": 1996,
"venue": "Cognition",
"volume": "61",
"issue": "",
"pages": "93--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Brent, M.R. and Cartwright, T.A. (1996). Distributional regularity and phonotactic constraints are useful for segmentation. Cognition, 61, 93-125.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Looking for needles in a haystack or locating interesting collocation expressions in large textual databases",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Choueka",
"suffix": ""
}
],
"year": 1988,
"venue": "Proc. of the 7 Annual Conference of the th UW Center for ITE New OED & Text Research",
"volume": "16",
"issue": "",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Choueka, Y. (1988). Looking for needles in a haystack or locating interesting collocation expressions in large textual databases. Proceedings of the RIAO, pp. 38-43. Church, K.W. (1995). N-grams. Tutorial at ACL, '95. MIT, Cambridge, MA. Church, K.W., & Gale, W.A. (1991). Concordances for parallel text. Proc. of the 7 Annual Conference of the th UW Center for ITE New OED & Text Research, pp. 40-62, Oxford. Church, K.W., & Hanks, P. (1990). Word association norms, mutual information and lexicography. Computational Linguistics, Vol. 16, No. 1, pp. 22-29.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Study and Implementation of Combined Techniques from Automatic Extraction of Terminology",
"authors": [
{
"first": "B",
"middle": [],
"last": "Daille",
"suffix": ""
}
],
"year": 1996,
"venue": "The Balancing Act\": Combining Symbolic and Statistical Approaches to Language",
"volume": "",
"issue": "",
"pages": "49--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daille, B. (1996). \"Study and Implementation of Combined Techniques from Automatic Extraction of Terminology\" Chap. 3 of \"The Balancing Act\": Combining Symbolic and Statistical Approaches to Language (Klavans, J., Resnik, P. (eds.)), pp. 49-66",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Congressional Record of the 103 Congress, rd and Los Angeles Times",
"authors": [],
"year": 1988,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "DARPA (1993-1997). DARPA text collections: A.P. Material, 1988-1990, Ziff Communications Corpus, 1989, Congressional Record of the 103 Congress, rd and Los Angeles Times.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Indexing by Latent Semantic Analysis",
"authors": [
{
"first": "S",
"middle": [],
"last": "Deerwester",
"suffix": ""
},
{
"first": "S",
"middle": [
"T"
],
"last": "Dumais",
"suffix": ""
},
{
"first": "G",
"middle": [
"W"
],
"last": "Furnas",
"suffix": ""
},
{
"first": "T",
"middle": [
"K"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Harshman",
"suffix": ""
}
],
"year": 1990,
"venue": "Journal of the American Society of Information Science",
"volume": "41",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deerwester, S., S.T. Dumais, G.W. Furnas, T.K. Landauer, and R. Harshman. (1990) Indexing by Latent Semantic Analysis. Journal of the American Society of Information Science, Vol. 41 de Marcken, C. (1996) Unsupervised Language Acquisition, Ph.D., MIT",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Foundations of",
"authors": [
{
"first": "C",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Manning, C.D., Sch\u00fctze, H. (1999) Foundations of",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Language independent automatic acquisition of rigid Cambridge, MA, 1999. multiword units from unrestricted text corpora",
"authors": [
{
"first": "G",
"middle": [],
"last": "Dias",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Guillor\u00e9",
"suffix": ""
},
{
"first": "J",
"middle": [
"G"
],
"last": "Pereira Lopes",
"suffix": ""
}
],
"year": 1997,
"venue": "Statistical Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dias, G., S. Guillor\u00e9, J.G. Pereira Lopes (1999). Statistical Natural Language Processing, MIT Press, Language independent automatic acquisition of rigid Cambridge, MA, 1999. multiword units from unrestricted text corpora. TALN, Mikheev, A., Finch, S. (1997). Collocation lattices and Carg\u00e8se. maximum entropy models. WVLC, Hong Kong.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Measures of the amount of ecologic associations between species",
"authors": [
{
"first": "L",
"middle": [
"R"
],
"last": "Dice",
"suffix": ""
}
],
"year": 1945,
"venue": "Journal of Ecology",
"volume": "26",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dice, L.R. (1945). Measures of the amount of ecologic associations between species. Journal of Ecology, 26, 1945.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Accurate methods for the statistics of surprise and coincidence",
"authors": [
{
"first": "T",
"middle": [],
"last": "Dunning",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dunning, T (1993). Accurate methods for the statistics of surprise and coincidence. Computational Linguistics. Vol. 19, No. 1.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Transmission of Information",
"authors": [
{
"first": "R",
"middle": [],
"last": "Fano",
"suffix": ""
}
],
"year": 1961,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fano, R. (1961). Transmission of Information. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Speaking mode dependent pronunciation modeling in large vocabulary conversational speech recognition",
"authors": [
{
"first": "M",
"middle": [],
"last": "Finke",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Weibel",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Finke, M. and Weibel, A. (1997) Speaking mode dependent pronunciation modeling in large vocabulary conversational speech recognition. Eurospeech-97.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A local maxima method and a fair dispersion normalization for extracting multi-word units from corpora",
"authors": [
{
"first": "J",
"middle": [],
"last": "Ferreira Da Silva",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Pereira Lopes",
"suffix": ""
}
],
"year": 1999,
"venue": "Sixth Meeting on Mathematics of Language",
"volume": "",
"issue": "",
"pages": "369--381",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ferreira da Silva, J., Pereira Lopes, G. (1999). A local maxima method and a fair dispersion normalization for extracting multi-word units from corpora. Sixth Meeting on Mathematics of Language, pp. 369-381.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "DECIDE, MLAP-Project 93-19, deliverable D-1a: Survey of collocation extraction tools",
"authors": [
{
"first": "T",
"middle": [],
"last": "Fontenelle",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Br\u00fcls",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Vanallemeersch",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Jansen",
"suffix": ""
}
],
"year": 1964,
"venue": "",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fontenelle, T., Br\u00fcls, W., Thomas, L., Vanallemeersch, T., Jansen, J. (1994). DECIDE, MLAP-Project 93-19, deliverable D-1a: Survey of collocation extraction tools. Tech. Report, Univ. of Liege, Liege, Belgium. Giuliano, V. E. (1964) \"The interpretation of word associations.\" In M.E. Stevens et al. (Eds.) Statistical association methods for mechanized documentation, pp. 25-32. National Bureau of Standards Miscellaneous Publication 269, Dec. 15, 1965.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The effects of collocational strength and contextual predictability in lexical production",
"authors": [
{
"first": "M",
"middle": [
"L"
],
"last": "Gregory",
"suffix": ""
},
{
"first": "W",
"middle": [
"D"
],
"last": "Raymond",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Bell",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Fosler-Lussier",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "99",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gregory, M. L., Raymond, W.D., Bell, A., Fosler- Lussier, E., Jurafsky, D. (1999). The effects of collocational strength and contextual predictability in lexical production. CLS99, University of Chicago.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "On ways words work together",
"authors": [
{
"first": "U",
"middle": [],
"last": "Heid",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Heid, U. (1994). On ways words work together. Euralex- 99.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Noun classification from predicateargument structures",
"authors": [
{
"first": "D",
"middle": [],
"last": "Hindle",
"suffix": ""
}
],
"year": 1990,
"venue": "Proceedings of the Annual Meeting of the ACL",
"volume": "",
"issue": "",
"pages": "268--275",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hindle, D. (1990). Noun classification from predicate- argument structures. Proceedings of the Annual Meeting of the ACL, pp. 268-275.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Expansion of multi-word terms for indexing and retrieval using morphology and syntax",
"authors": [
{
"first": "C",
"middle": [],
"last": "Jacquemin",
"suffix": ""
},
{
"first": "J",
"middle": [
"L"
],
"last": "Klavans",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Tzoukermann",
"suffix": ""
}
],
"year": 1997,
"venue": "Proc. of ACL",
"volume": "",
"issue": "",
"pages": "24--31",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacquemin, C., Klavans, J.L., & Tzoukermann, E. (1997). Expansion of multi-word terms for indexing and retrieval using morphology and syntax. Proc. of ACL 1997, Madrid, pp. 24-31.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Technical terminology: some linguistic properties and an algorithm for identification in text",
"authors": [
{
"first": "J",
"middle": [
"S"
],
"last": "Justeson",
"suffix": ""
},
{
"first": "S",
"middle": [
"M"
],
"last": "Katz",
"suffix": ""
}
],
"year": 1995,
"venue": "Natural Language Engineering",
"volume": "1",
"issue": "",
"pages": "9--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Justeson, J.S. and S.M.Katz (1995). Technical terminology: some linguistic properties and an algorithm for identification in text. Natural Language Engineering 1:9-27.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Metrics for corpus similarity & homogeneity. Manuscript",
"authors": [
{
"first": "A",
"middle": [],
"last": "Kilgariff",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Rose",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kilgariff, A., & Rose, T. (1998). Metrics for corpus similarity & homogeneity. Manuscript, ITRI, University of Brighton.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Introduction to Latent Semantic Analysis",
"authors": [
{
"first": "T",
"middle": [
"K"
],
"last": "Landauer",
"suffix": ""
},
{
"first": "P",
"middle": [
"W"
],
"last": "Foltz",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Laham",
"suffix": ""
}
],
"year": 1998,
"venue": "Discourse Processes",
"volume": "25",
"issue": "",
"pages": "259--284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Landauer, T.K., P.W. Foltz, and D. Laham. (1998) Introduction to Latent Semantic Analysis. Discourse Processes. Vol. 25, pp. 259-284.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "WordNet: An on-line lexical database",
"authors": [
{
"first": "G",
"middle": [],
"last": "Miller",
"suffix": ""
}
],
"year": 1990,
"venue": "International Journal of Lexicography",
"volume": "3",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Miller, G. (1990).\"WordNet: An on-line lexical database,\" International Journal of Lexicography, 3(4).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Useg: A Retargetable word segmentation procedure for information retrieval",
"authors": [
{
"first": "J",
"middle": [
"M"
],
"last": "Ponte",
"suffix": ""
},
{
"first": "B",
"middle": [
"W"
],
"last": "Croft",
"suffix": ""
}
],
"year": 1996,
"venue": "Symposium on Document Analysis and Information Retrieval '96",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ponte, J.M., Croft, B.W. (1996). Useg: A Retargetable word segmentation procedure for information retrieval. Symposium on Document Analysis and Information Retrieval '96. Technical Report TR96-2, University of Massachusetts.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Selectional constraints: an information-theoretic model and its computational realization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1996,
"venue": "Cognition",
"volume": "61",
"issue": "",
"pages": "127--159",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Resnik, P. (1996). Selectional constraints: an information-theoretic model and its computational realization. Cognition. Vol. 61, pp. 127-159.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Word segmentation: the role of distributional cues",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Saffran",
"suffix": ""
},
{
"first": "E",
"middle": [
"L"
],
"last": "Newport",
"suffix": ""
},
{
"first": "R",
"middle": [
"N"
],
"last": "Aslin",
"suffix": ""
}
],
"year": 1996,
"venue": "Journal of Memory and Language",
"volume": "25",
"issue": "",
"pages": "606--621",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saffran, J.R., Newport, E.L., and Aslin, R.N. (1996). Word segmentation: the role of distributional cues. Journal of Memory and Language, Vol. 25, pp. 606- 621.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Knowledge-free induction of morphology using latent semantic analysis",
"authors": [
{
"first": "P",
"middle": [],
"last": "Schone",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of the Computational Natural Language Learning Conference",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Schone, P. and D. Jurafsky. (2000) Knowledge-free induction of morphology using latent semantic analysis. Proc. of the Computational Natural Language Learning Conference, Lisbon, pp. 67-72.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Distributed syntactic representations with an application to part-of-speech tagging",
"authors": [
{
"first": "H",
"middle": [],
"last": "Sch\u00fctze",
"suffix": ""
}
],
"year": 1993,
"venue": "Proceedings of the IEEE International Conference on Neural Networks",
"volume": "",
"issue": "",
"pages": "1504--1509",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sch\u00fctze, H. (1993) Distributed syntactic representations with an application to part-of-speech tagging. Proceedings of the IEEE International Conference on Neural Networks, pp. 1504-1509.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Retrieving collocations by co-occurrences and word order constraints",
"authors": [
{
"first": "S",
"middle": [],
"last": "Shimohata",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Sugio",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35 Annual Mtg",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shimohata, S., Sugio, T., Nagata, J. (1997). Retrieving collocations by co-occurrences and word order constraints. Proceedings of the 35 Annual Mtg. of the th",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Retrieving collocations from text",
"authors": [
{
"first": "F",
"middle": [],
"last": "Smadja",
"suffix": ""
}
],
"year": 1993,
"venue": "Xtract. Computational Linguistics",
"volume": "19",
"issue": "",
"pages": "143--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Smadja, F. (1993). Retrieving collocations from text: Xtract. Computational Linguistics, 19:143-177.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Report on the need for and provision of an \"ideal\" information retrieval text collection",
"authors": [
{
"first": "K",
"middle": [],
"last": "Sparck-Jones",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Van Rijsbergen",
"suffix": ""
}
],
"year": 1975,
"venue": "British Library Research and Development Report",
"volume": "5266",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sparck-Jones, K., C. van Rijsbergen (1975) Report on the need for and provision of an \"ideal\" information retrieval text collection, British Library Research and Development Report, 5266, Computer Laboratory, University of Cambridge.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "A statistical method for finding word boundaries in Chinese text",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sproat",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Shih",
"suffix": ""
}
],
"year": 1990,
"venue": "Computer Processing of Chinese & Oriental Languages",
"volume": "4",
"issue": "4",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sproat R, Shih, C. (1990) A statistical method for finding word boundaries in Chinese text. Computer Processing of Chinese & Oriental Languages, Vol. 4, No. 4.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Morphology and Computation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Sproat",
"suffix": ""
}
],
"year": 1992,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sproat, R. (1992) Morphology and Computation. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A stochastic finite-state word segmentation algorithm for Chinese",
"authors": [
{
"first": "R",
"middle": [
"W"
],
"last": "Sproat",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Shih",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Gale",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sproat, R.W., Shih, C., Gale, W., Chang, N. (1996) A stochastic finite-state word segmentation algorithm for Chinese. Computational Linguistics, Vol. 22, #3.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A Compression-based algorithm for Chinese word segmentation",
"authors": [
{
"first": "W",
"middle": [
"J"
],
"last": "Teahan",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Yingyin",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Mcnab",
"suffix": ""
},
{
"first": "I",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "26",
"issue": "",
"pages": "375--394",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Teahan, W.J., Yingyin, W. McNab, R, Witten, I.H. (2000). A Compression-based algorithm for Chinese word segmentation. ACL Vol. 26, No. 3, pp. 375-394.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "the semantics of C's subcomponents and g measures semantic differences. If C were a bigram, then if g(a,b) is defined to be |a-b|, if h(c,d) is the sum of c and d, and if (e) is set to -log P , then equation (1) would e become the pointwise mutual information of the bigram. If g(a,b) were defined to be (a-b)/b , and if \u00bd h(a,b)=ab/N and (X)=f , we essentially get Z-X"
},
"FIGREF1": {
"type_str": "figure",
"uris": null,
"num": null,
"text": "Precision-recall curve for rescoring"
},
"TABREF0": {
"content": "<table><tr><td>METHOD</td><td>FORMULA</td></tr><tr><td>Frequency (Guiliano, 1964)</td><td>f XY</td></tr><tr><td>Pointwise Mutual</td><td/></tr><tr><td>Information (MI)</td><td/></tr></table>",
"html": null,
"type_str": "table",
"text": "Probabilistic Approaches",
"num": null
},
"TABREF2": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": "Outputs from each algorithm at different sorted ranks",
"num": null
},
"TABREF3": {
"content": "<table><tr><td>1 K</td><td>M</td><td>K i1 P i ,</td></tr><tr><td>i</td><td/><td>i</td><td>i</td></tr><tr><td colspan=\"4\">number of hypothesized MWUs required to find the i correct MWU. This FOM corresponds to area th</td></tr><tr><td colspan=\"3\">under a precision-recall curve.</td></tr></table>",
"html": null,
"type_str": "table",
"text": "or even negative impact. On the other hand, where P (precision at i) equals i/H , and H is the",
"num": null
},
"TABREF4": {
"content": "<table><tr><td>Prob</td><td>(1)</td><td>(2)</td><td/><td>(3)</td><td>(4)</td></tr><tr><td colspan=\"6\">algo-WordNet WordNet WordNet WordNet</td></tr><tr><td>rithm</td><td/><td colspan=\"2\">cross-</td><td>+Filter</td><td>cross-</td></tr><tr><td/><td/><td colspan=\"2\">source</td><td/><td>source</td></tr><tr><td/><td/><td/><td/><td/><td>+Filter</td></tr><tr><td>S</td><td>N</td><td>S</td><td>N</td><td colspan=\"2\">S N S N</td></tr></table>",
"html": null,
"type_str": "table",
"text": "WordNet-based scores",
"num": null
},
"TABREF5": {
"content": "<table/>",
"html": null,
"type_str": "table",
"text": ".151 SA .057 .051 .058 .053 .182 .125 .202 .143 Loglike .049 .050 .068 .064 .118 .095 .177 .129 T-score .050 .051 .050 .052 .150 .109 .160 .118 Freq .035 .037 .034 .037 .144 .105 .152 .112",
"num": null
},
"TABREF6": {
"content": "<table><tr><td>Prob</td><td>(1)</td><td>(2)</td><td>(3)</td><td>(4)</td></tr><tr><td colspan=\"5\">algorithm Internet Internet Internet Internet</td></tr><tr><td/><td colspan=\"2\">cross-</td><td>+Filter</td><td>cross-</td></tr><tr><td/><td colspan=\"2\">source</td><td/><td>source</td></tr><tr><td/><td/><td/><td/><td>+Filter</td></tr><tr><td>Z-Score</td><td>.165</td><td>.260</td><td>.169</td><td>.269</td></tr><tr><td>SCP</td><td>.166</td><td>.259</td><td>.170</td><td>.270</td></tr><tr><td>Chi-sqr</td><td>.166</td><td>.260</td><td>.170</td><td>.270</td></tr><tr><td>Dice</td><td>.183</td><td>.258</td><td>.187</td><td>.267</td></tr><tr><td>MI</td><td>.139</td><td>.234</td><td>.140</td><td>.234</td></tr><tr><td>SA</td><td>.027</td><td>.033</td><td>.107</td><td>.194</td></tr><tr><td colspan=\"2\">Log Like .023</td><td>.043</td><td>.087</td><td>.162</td></tr><tr><td>T-score</td><td>.025</td><td>.027</td><td>.110</td><td>.142</td></tr><tr><td>Freq</td><td>.016</td><td>.017</td><td>.104</td><td>.134</td></tr><tr><td colspan=\"5\">one can see that Z-scores, $ , and 2</td></tr><tr><td colspan=\"5\">SCP have virtually identical results and seem to best</td></tr><tr><td colspan=\"5\">identify MWU headwords (particularly if proper</td></tr><tr><td colspan=\"5\">nouns are desired). Yet there is still significant</td></tr><tr><td colspan=\"2\">room for improvement.</td><td/><td/><td/></tr></table>",
"html": null,
"type_str": "table",
"text": "Performance on Internet data",
"num": null
},
"TABREF7": {
"content": "<table><tr><td colspan=\"3\">: Equation 1 settings</td><td/><td/></tr><tr><td>g(a,b)</td><td>h(a)</td><td colspan=\"4\">(X) w Score on i</td></tr><tr><td/><td/><td/><td/><td/><td>Internet</td></tr><tr><td/><td/><td colspan=\"4\">X 1 0.0517</td></tr><tr><td>1-|cos(a,b)|</td><td/><td>X</td><td>*</td><td>1/fi 1</td><td>0.0473 0.0598</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">1/fi* 0.0523</td></tr><tr><td/><td/><td colspan=\"4\">X 1 0.174</td></tr><tr><td>|cos(a,b)|</td><td/><td/><td/><td>1/fi</td><td>0.169</td></tr><tr><td/><td/><td>X</td><td>*</td><td>1</td><td>0.131</td></tr><tr><td/><td/><td/><td/><td>1/fi*</td><td>0.128</td></tr><tr><td colspan=\"6\">What happens if we instead emphasize</td></tr><tr><td colspan=\"6\">compositionality? Rows 5-8 illustrate the effect:</td></tr><tr><td colspan=\"6\">there is a significant recovery in performance. The</td></tr><tr><td colspan=\"6\">most reasonable explanation for this is that if</td></tr><tr><td colspan=\"6\">MWUs and their components are strongly</td></tr><tr><td colspan=\"6\">correlated, the components may rarely occur except</td></tr><tr><td colspan=\"6\">in context with the MWU. It takes about 20 hours</td></tr></table>",
"html": null,
"type_str": "table",
"text": "",
"num": null
}
}
}
}