|
{ |
|
"paper_id": "I05-1024", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:26:37.069696Z" |
|
}, |
|
"title": "Automatic Term Extraction Based on Perplexity of Compound Words", |
|
"authors": [ |
|
{ |
|
"first": "Minoru", |
|
"middle": [], |
|
"last": "Yoshida", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Tokyo", |
|
"location": { |
|
"addrLine": "7-3-1 Hongo, Bunkyo-ku", |
|
"postCode": "113-0033", |
|
"settlement": "Tokyo" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Hiroshi", |
|
"middle": [], |
|
"last": "Nakagawa", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "University of Tokyo", |
|
"location": { |
|
"addrLine": "7-3-1 Hongo, Bunkyo-ku", |
|
"postCode": "113-0033", |
|
"settlement": "Tokyo" |
|
} |
|
}, |
|
"email": "nakagawa@dl.itc.u-tokyo.ac.jp" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "Many methods of term extraction have been discussed in terms of their accuracy on huge corpora. However, when we try to apply various methods that derive from frequency to a small corpus, we may not be able to achieve sufficient accuracy because of the shortage of statistical information on frequency. This paper reports a new way of extracting terms that is tuned for a very small corpus. It focuses on the structure of compound terms and calculates perplexity on the term unit's left-side and right-side. The results of our experiments revealed that the accuracy with the proposed method was not that advantageous. However, experimentation with the method combining perplexity and frequency information obtained the highest average-precision in comparison with other methods.", |
|
"pdf_parse": { |
|
"paper_id": "I05-1024", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "Many methods of term extraction have been discussed in terms of their accuracy on huge corpora. However, when we try to apply various methods that derive from frequency to a small corpus, we may not be able to achieve sufficient accuracy because of the shortage of statistical information on frequency. This paper reports a new way of extracting terms that is tuned for a very small corpus. It focuses on the structure of compound terms and calculates perplexity on the term unit's left-side and right-side. The results of our experiments revealed that the accuracy with the proposed method was not that advantageous. However, experimentation with the method combining perplexity and frequency information obtained the highest average-precision in comparison with other methods.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Term extraction, which is the task of extracting terminology (or technical terms) from a set of documents, is one of major topics in natural language processing. It has a wide variety of applications including book indexing, dictionary generation, and keyword extraction for information retrieval systems.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Most automatic term extraction systems make a sorted list of candidate terms extracted from a given corpus according to the \"importance\" scores of the terms, so they require scores of \"importance\" for the terms. Existing scores include TF-IDF, C-Value [1] , and FLR [9] . In this paper, we propose a new method that involves revising the definition of the FLR method in a more sophisticated way. One of the advantages of the FLR method is its size-robustness, i.e, it can be applied to small corpus with less significant drop in performance than other standard methods like TF and IDF, because it is defined using more fine-grained features called term units. Our new method, called FPP, inherit this property while exhibiting better performance than FLR.", |
|
"cite_spans": [ |
|
{ |
|
"start": 252, |
|
"end": 255, |
|
"text": "[1]", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 266, |
|
"end": 269, |
|
"text": "[9]", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "At the same time, we also propose a new scheme for evaluating term extraction systems. Our idea is to use summaries 1 of articles as a gold standard. This strategy is based on the assumption that summaries of documents can serve as collections of important terms because, in writing summaries, people may make an original document shorter by dropping unnecessary parts of original documents, while retaining essential fragments. Thus, we regard a term in an original document to be important if it also appears in the summary.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Term extraction is the task of extracting important terms from a given corpus. Typically, term extraction systems first extract term candidates, which are usually the noun phrases detected by handcrafted POS sequence patterns, from the corpus. After that, term candidates are sorted according to some importance score. Important terms, (i.e., terms that appear in the summary, in our problem setting,) are desired to be ranked higher than others. In this paper we focus on the second step, i.e., term candidate sorting by importance scores. We propose a new score of term importance by modifying an existing one in a more sophisticated manner.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Term Extraction", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "In the remainder of this paper, a term candidate is represented by W = w 1 w 2 \u2022 \u2022 \u2022 w n where w i represents a term unit contained in W , and n is the number of term units contained in W . Here, a term unit is the basic element comprising term candidates that is not further decomporsable without destruction of meaning. Term units are used to calculate of the LR score that is explained in the next section.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Term Extraction", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Many methods of term scoring have been proposed in the literature [7] [3] [4] . Methods that use corpus statistics have especially emerged over the past decade due to the increasing number of machine-readable documents such as news articles and WWW documents. These methods can be mainly categorized into the following three types according to what types of features are used to calculate the scores.", |
|
"cite_spans": [ |
|
{ |
|
"start": 66, |
|
"end": 69, |
|
"text": "[7]", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 74, |
|
"end": 77, |
|
"text": "[4]", |
|
"ref_id": "BIBREF3" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "-Measurement by frequencies -Measurement by internal structures of term candidates -Combination of the above", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Frequency is one of the most basic features of term extraction. Usually, a term that appears frequently is assumed to be important. We introduce a score of this type: tf (W ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Score by Frequency: TF", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "tf (W ) represents the TF(Term Frequency) of W . It is defined as the number of occurrences of W in all documents. Note that tf (W ) is the result of the brute force counting of W occurrences. This method, for example, counts the term natural even if it is merely part of another phrase such as natural language processing. 2 ", |
|
"cite_spans": [ |
|
{ |
|
"start": 324, |
|
"end": 325, |
|
"text": "2", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Score by Frequency: TF", |
|
"sec_num": "3.1" |
|
}, |
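
{

"text": "A minimal sketch of this brute-force counting (our illustrative Python, not the authors' implementation; the corpus is assumed to be pre-tokenized into term units):\n\ndef tf(candidate, corpus_units):\n    '''Count every occurrence of `candidate` (a tuple of term units)\n    in `corpus_units`, including occurrences inside larger phrases.'''\n    n = len(candidate)\n    return sum(1 for i in range(len(corpus_units) - n + 1)\n               if tuple(corpus_units[i:i + n]) == candidate)\n\n# tf(('natural',), units) also counts 'natural' inside\n# 'natural language processing', exactly as described above.",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Score by Frequency: TF",

"sec_num": "3.1"

},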
|
{ |
|
"text": "An LR method [9] is based on the intuition that some words are used as term units more frequently than others, and a phrase that contains such \"good\" term units is likely to be important. The left score l(w i ) of each term unit w i of a target term is defined as the number (or the number of types) of term units connected to the left of w i (i.e., appearing just in the left of w i in term candidates), and the right score r(w i ) is defined in the same manner. 3 An LR score lr(w i ) is defined as the geometric mean of left and right scores:", |
|
"cite_spans": [ |
|
{ |
|
"start": 13, |
|
"end": 16, |
|
"text": "[9]", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 464, |
|
"end": 465, |
|
"text": "3", |
|
"ref_id": "BIBREF2" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Score by Internal Structures in Term Candidates: LR", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "lr(w i ) = l(w i )r(w i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Score by Internal Structures in Term Candidates: LR", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "The total LR score of W is defined as a geometric mean of the scores of term units as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Score by Internal Structures in Term Candidates: LR", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "LR(W ) = (lr(w 1 )lr(w 2 ) \u2022 \u2022 \u2022 lr(w n )) 1", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Score by Internal Structures in Term Candidates: LR", |
|
"sec_num": "3.2" |
|
}, |
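
{

"text": "A rough illustration of the LR calculation (a sketch with our own helper names, not the authors' code; Type-LR counts distinct neighbors, Token-LR counts neighbor occurrences, and the add-one smoothing of footnote 3 is applied):\n\nfrom collections import defaultdict\nfrom math import prod\n\ndef unit_neighbors(candidates):\n    '''Collect the left and right neighbors of every term unit over\n    all term candidates (each candidate is a sequence of term units).'''\n    left, right = defaultdict(list), defaultdict(list)\n    for cand in candidates:\n        for i, u in enumerate(cand):\n            if i > 0:\n                left[u].append(cand[i - 1])\n            if i < len(cand) - 1:\n                right[u].append(cand[i + 1])\n    return left, right\n\ndef lr(u, left, right, use_types=True):\n    '''lr(u) = sqrt(l(u) * r(u)), with add-one smoothing (footnote 3).'''\n    l = len(set(left[u]) if use_types else left[u]) + 1\n    r = len(set(right[u]) if use_types else right[u]) + 1\n    return (l * r) ** 0.5\n\ndef LR(W, left, right, use_types=True):\n    '''Geometric mean of lr over the term units of candidate W.'''\n    return prod(lr(u, left, right, use_types) for u in W) ** (1 / len(W))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Score by Internal Structures in Term Candidates: LR",

"sec_num": "3.2"

},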
|
{ |
|
"text": "n . An example of LR score calculation is given in the next section. C-Value is defined by using these two expressions in the following way.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Score by Internal Structures in Term Candidates: LR", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "c-val(W ) = (n \u2212 1) \u00d7 tf (W ) \u2212 t(W ) c(W )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mixed Measures", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": "Note that the value is zero where n = 1. MC-Value [9] is a modified version of C-Value adapted for use in term collections that include the term of length 1 (i.e., n = 1).", |
|
"cite_spans": [ |
|
{ |
|
"start": 50, |
|
"end": 53, |
|
"text": "[9]", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Mixed Measures", |
|
"sec_num": "3.3" |
|
}, |
|
{ |
|
"text": ") = n \u00d7 tf (W ) \u2212 t(W ) c(W )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MC-val(W", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We used MC-Value in the experiments because our task was to extract terms regardless of whether each term is one-word term or not.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "MC-val(W", |
|
"sec_num": null |
|
}, |
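
{

"text": "A sketch of MC-Value under the usual C-Value reading of t(W) (total frequency of W nested inside longer candidates, not counting W itself) and c(W) (the number of distinct longer candidates containing W); the helper names are ours:\n\ndef contains(longer, W):\n    '''True if `longer` strictly contains W as a run of adjacent units.'''\n    n, m = len(W), len(longer)\n    return m > n and any(tuple(longer[i:i + n]) == tuple(W)\n                         for i in range(m - n + 1))\n\ndef mc_value(W, tf_table):\n    '''MC-val(W) = n * (tf(W) - t(W)/c(W)), where tf_table maps each\n    candidate (a tuple of term units) to its term frequency; when W is\n    nested nowhere, the nested-frequency correction is taken as zero.'''\n    supers = [c for c in tf_table if contains(c, W)]\n    t = sum(tf_table[c] for c in supers)\n    c = len(supers)\n    return len(W) * (tf_table[W] - (t / c if c else 0))",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Mixed Measures",

"sec_num": null

},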
|
{ |
|
"text": "The LR method reflects the number of appearances of term units, but does not reflect that of a whole term itself. For example, even if \"natural language\" is more frequent than \"language natural\" and the former should be given a higher score than the latter, LR cannot be used to do this.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "FLR.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "An FLR method [9] was proposed to overcome this shortcoming of LR. It reflects both the frequencies and inner structures of terms. F LR(W ) is defined as the product of LR(W ) and tf (W ) as: Type-LR cannot reflect frequencies which suggest whether there are specially important connecting terms or not. However, Token-LR cannot reflect the number of types that suggest the variety of connections. To solve these shortcomings with LR measures, we propose a new kind that combines these two through perplexity.", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 17, |
|
"text": "[9]", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "FLR.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "F LR(W ) = tf (W )LR(W ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "FLR.", |
|
"sec_num": null |
|
}, |
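
{

"text": "Given the sketches above, FLR is then just the product of the two scores (again illustrative, reusing our hypothetical tf_table and LR helper):\n\ndef flr(W, tf_table, left, right):\n    '''FLR(W) = tf(W) * LR(W): frequency combined with inner structure.'''\n    return tf_table[W] * LR(W, left, right)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "FLR.",

"sec_num": null

},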
|
{ |
|
"text": "Our method is based on the idea of perplexity [8] . The score of a term is defined by the left perplexity and right perplexity of its term units. In this subsection we first give a standard definition of the perplexity of language, from which our left and right perplexity measures are derived. After that, we describe how to score terms by using these perplexities.", |
|
"cite_spans": [ |
|
{ |
|
"start": 46, |
|
"end": 49, |
|
"text": "[8]", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Term Extraction by Perplexity", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Assume that language L is information source that produces word lists of length n and each word list is produced independently with probability P (w n 1 ). Then, the entropy of language L is calculated as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "H 0 (L) = \u2212 w n 1 P (w n 1 ) log P (w n 1 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The entropy per word is then calculated as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "H(L) = \u2212 1 n w n 1 P (w n 1 ) log P (w n 1 ).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This value indicates the number of bits needed to express each word generated from L. Perplexity of language L is defined using H(L) as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "P erplexity = 2 H(L) .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Perplexity can be seen as the average number of types of words that follow each preceding word. The larger the perplexity of L, the less predictable the word connection in L.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
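
{

"text": "For concreteness, the entropy and perplexity of a probability distribution can be computed as follows (a generic sketch, not code from the paper):\n\nfrom math import log2\n\ndef entropy(probs):\n    '''H = -sum p * log2(p), with the 0 log 0 = 0 convention.'''\n    return -sum(p * log2(p) for p in probs if p > 0)\n\ndef perplexity(probs):\n    return 2 ** entropy(probs)\n\n# A uniform choice among four words is maximally unpredictable:\n# perplexity([0.25, 0.25, 0.25, 0.25]) == 4.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Perplexity of language.",

"sec_num": null

},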
|
{ |
|
"text": "Left and right perplexity. Assume that k types of unit words can connect to the right of w i (see Figure 2 ). Also assume that R i is a random variable assigned to the i-th term unit which represents its right connections and takes its value from the set", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 98, |
|
"end": 106, |
|
"text": "Figure 2", |
|
"ref_id": "FIGREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "{r 1 , r 2 , \u2022 \u2022 \u2022 , r k }.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Then, entropy H(R i ) is calculated as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "H(R i ) = \u2212 k j=1 P (r j ) log 2 P (r j )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that we define 0 log 0 = 0, according to the fact that x log x \u2192 0 where x \u2192 0. This entropy value can be thought of as a variety of terms that connect to the right of w i , or, more precisely, the number of bits needed to describe words that connect to the right of w i .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Then right perplexity pp r (w i ) of term unit w i is defined as", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "pp r (w i ) = 2 H(R i ) .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This value can be seen as the number of branches, in the sense of information theory, of right-connection from w i . It naturally reflects both the frequency and number of types of each connection between term units. Random variable L i for the left connections is defined in the same manner. The perplexity for left connections is thus defined as:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "pp l (w i ) = 2 H(L i ) .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
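
{

"text": "A sketch of the per-unit perplexity, reusing the neighbor lists gathered for the LR sketch above (the function name is ours):\n\nfrom collections import Counter\nfrom math import log2\n\ndef side_perplexity(neighbors):\n    '''2**H over the empirical distribution of the neighbors on one\n    side; with no connections the entropy is 0 and the perplexity 1.'''\n    total = len(neighbors)\n    if total == 0:\n        return 1.0\n    h = -sum((c / total) * log2(c / total)\n             for c in Counter(neighbors).values())\n    return 2 ** h\n\n# pp_r(w) = side_perplexity(right[w]); pp_l(w) = side_perplexity(left[w])",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Perplexity of language.",

"sec_num": null

},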
|
{ |
|
"text": "Term Score by Perplexity. We define our measure by substituting l and r in the definition of LR with pp l and pp r . First, a combination of left and right perplexities is defined as the geometric mean of both:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "pp(w i ) = (pp l (w i ) \u2022 pp r (w i )) 1 2 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "After that, perplexity score P P (W ) for W is defined as the geometric mean of all pp(w i )s:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "P P (W ) = n i=1 pp(w i ) 1 n .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We used log P P (W ) instead of P P (W ) to make implementation easier. Notice that log x is a monotonic (increasing) function of x.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "P P (W ) = n i=1 {pp l (w i ) \u2022 pp r (w i )} 1 2 1 n \u21d2 log 2 P P (W ) = 1 n log 2 n i=1 {pp l (w i ) \u2022 pp r (w i )} 1 2 \u21d2 log 2 P P (W ) = 1 2n n i=1 (log 2 pp l (w i ) + log 2 pp r (w i ))", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Using pp r (w i ) = 2 H(R i ) and pp l (w i ) = 2 H(l i ) , we obtain", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "log 2 P P (W ) = 1 2n n i=1 H(R i ) + H(L i ) .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "The right side means the sum of the left and right entropies of all term units.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Perplexity of language.", |
|
"sec_num": null |
|
}, |
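
{

"text": "In the log domain, the whole-term score is therefore simply an average of per-unit entropies (a sketch; h_left and h_right are assumed accessors returning H(L_i) and H(R_i)):\n\ndef log2_pp(W, h_left, h_right):\n    '''log2 PP(W) = (1/2n) * sum_i (H(R_i) + H(L_i)).'''\n    n = len(W)\n    return sum(h_right(u) + h_left(u) for u in W) / (2 * n)",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Perplexity of language.",

"sec_num": null

},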
|
{ |
|
"text": "Perplexity itself serves as a good score for terms, but combining it with TF, which is a measure from another point of view, can provide a still better score that reflects both the inner structures of term candidates and their frequencies which are regarded as global information about the whole corpus. Our new score, F P P (W ), which is a combination of PP and TF, is defined as their product: F P P (W ) = tf (W )P P (W ) \u21d2 log 2 F P P (W ) = log 2 tf (W ) + log 2 P P (W )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Term Extraction by Perplexity and TF", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "\u21d2 log 2 F P P (W ) = log 2 tf (W ) + 1 2n n i=1 H(R i ) + H(L i )", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Term Extraction by Perplexity and TF", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We avoided the problem of log 2 tf (W ) being undefined with tf (W ) = 0 5 by applying the adding-one smoothing to tf (W ). Therefore, the above definition of log F P P (W ) changed as follows:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Term Extraction by Perplexity and TF", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "log 2 F P P (W ) = log 2 (tf (W ) + 1) + 1 2n n i=1 H(R i ) + H(L i ) .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Term Extraction by Perplexity and TF", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "We used this log 2 F P P (W ) measure for evaluation.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Term Extraction by Perplexity and TF", |
|
"sec_num": "4.3" |
|
}, |
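
{

"text": "The evaluated score then adds the add-one-smoothed log frequency to the entropy average (a sketch following the formula above; tf_table is our assumed frequency map):\n\nfrom math import log2\n\ndef log2_fpp(W, tf_table, h_left, h_right):\n    '''log2 FPP'(W) = log2(tf(W) + 1) + (1/2n) * sum_i (H(R_i) + H(L_i)).'''\n    n = len(W)\n    ent = sum(h_right(u) + h_left(u) for u in W) / (2 * n)\n    return log2(tf_table.get(W, 0) + 1) + ent",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Term Extraction by Perplexity and TF",

"sec_num": "4.3"

},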
|
{ |
|
"text": "We collected news articles and their summaries from the Mainichi Web News from April, 2001 to March, 2002. The articles were categorized into four genres: Economy, Society, World, and Politics. A shorter version of each article was provided for browsing on mobile phones. Articles for mobile phones were written manually from the original ones, which were shorter versions of the original articles adapted to small displays. We regard them as summaries of the original articles and used them to evaluate whether the extracted terms were correct or not. If a term in the original article was also in the summary, the term was correct, and incorrect if otherwise. Each article had a size of about 300 letters and each summary had a size of about 50. Table 1 lists the number of articles in each category. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 748, |
|
"end": 755, |
|
"text": "Table 1", |
|
"ref_id": "TABREF1" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Test Collection", |
|
"sec_num": "5.1" |
|
}, |
|
{ |
|
"text": "We used test data on the various numbers of articles to investigate how the performance of each measure changed according to corpus size. A corpus of each size was generated by singly adding an article randomly selected from the corpus of each genre. We generated test data consisting of 50 different sizes (from 1 to 50) for each genre. The average number of letters in the size 50 corpus was about 19,000, and the average number of term candidates was about 1,300. We used five different seed numbers to randomly select articles. The performance of each method was evaluated in terms of recall and precision, which were averaged over the five trials.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setup", |
|
"sec_num": "5.2" |
|
}, |
|
{ |
|
"text": "Each article was preprocessed with a morphological analyzer, the Chasen 2.3.3. [2] The output of Chasen was further modified according to heuristic rules as follows.", |
|
"cite_spans": [ |
|
{ |
|
"start": 79, |
|
"end": 82, |
|
"text": "[2]", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing: Term Candidate Extraction", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "-Nouns and undefined words were extracted for further processes and other words were discarded. -Suffixes and prefixes were concatenated to their following and preceding words, respectively.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing: Term Candidate Extraction", |
|
"sec_num": "5.3" |
|
}, |
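
{

"text": "A rough sketch of the candidate extraction, assuming generic (token, POS) pairs rather than ChaSen's actual tag set, and omitting the suffix/prefix concatenation for brevity:\n\ndef extract_candidates(tagged):\n    '''Keep maximal runs of nouns and undefined words as term\n    candidates; any other word breaks the current run.'''\n    KEEP = {'noun', 'undefined'}\n    candidates, run = [], []\n    for token, pos in tagged:\n        if pos in KEEP:\n            run.append(token)\n        elif run:\n            candidates.append(tuple(run))\n            run = []\n    if run:\n        candidates.append(tuple(run))\n    return candidates",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Preprocessing: Term Candidate Extraction",

"sec_num": "5.3"

},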
|
{ |
|
"text": "The result was a set of term candidates to be evaluated with the term importance scores described in the previous sections. We applied the following methods to the term candidates: F, TF, DF (Document Frequency) [8] , LR, MC-Value, FLR, TF-IDF [8] , PP, and FPP'.", |
|
"cite_spans": [ |
|
{ |
|
"start": 212, |
|
"end": 215, |
|
"text": "[8]", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 244, |
|
"end": 247, |
|
"text": "[8]", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Preprocessing: Term Candidate Extraction", |
|
"sec_num": "5.3" |
|
}, |
|
{ |
|
"text": "We used average precision [8] for the evaluation. Let D be a set of all the term candidates and D q \u2286 D be a set of the correct ones among them. The extracted term was correct if it appeared in the summary. Then, the average precision can be calculated in the following manner.", |
|
"cite_spans": [ |
|
{ |
|
"start": 26, |
|
"end": 29, |
|
"text": "[8]", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evaluation Method", |
|
"sec_num": "5.4" |
|
}, |
|
{ |
|
"text": "= 1 |D q | 1\u2264k\u2264|D| \u23a7 \u23a8 \u23a9 r k \u00d7 \u239b \u239d 1 k 1\u2264i\u2264k r i \u239e \u23a0 \u23ab \u23ac \u23ad", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Average-Precision", |
|
"sec_num": null |
|
}, |
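
{

"text": "This average precision might be computed as follows (a sketch; ranked is the candidate list sorted by descending score, and correct is the set of candidates that appear in the summary, i.e. D_q):\n\ndef average_precision(ranked, correct):\n    '''(1/|D_q|) * sum over ranks k of r_k * (precision at rank k).'''\n    hits, ap = 0, 0.0\n    for k, term in enumerate(ranked, start=1):\n        if term in correct:\n            hits += 1\n            ap += hits / k\n    return ap / len(correct) if correct else 0.0",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Average-Precision",

"sec_num": null

},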
|
{ |
|
"text": "where r i = 1 if the i-th term is correct, and r i = 0 if otherwise. Note that the total number of correct answers was |D q |. The next section presents the experimental results obtained by average precision. Table 2 shows the results on the corpus of 1, 10, and 50 articles in all the genres. Figure 3 plots the average precision for each corpus size (from 1 to 50) in the economy category. 6 In some cases, results on one article were better than those on 10 and 50 articles. This was mainly caused by the fact that the average precision is tend to be high on articles of short length, and the average length for one article was much shorter than that of ten articles in some genres. PP outperformed LR in most cases. We think the reason was that PP could provide more precious information about connections among term units. We observed that PP depended less on the size of the corpus than frequency-based methods like TF and MC-Val. FPP' had the best performance of all methods in all genres. Figure 4 plots the results in the economy genre when the corpus size was increased to 1,000 in increments of 50 articles. We observed that the performance of PP and LR got close with the increase in corpus size, especially with 200 articles and more. FPP' once again outperformed all the other methods in this experiment. The FPP' method exhibited the best performance regardless of corpus size.", |
|
"cite_spans": [ |
|
{ |
|
"start": 392, |
|
"end": 393, |
|
"text": "6", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 209, |
|
"end": 216, |
|
"text": "Table 2", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 294, |
|
"end": 302, |
|
"text": "Figure 3", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 997, |
|
"end": 1005, |
|
"text": "Figure 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Average-Precision", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We can also use another frequency score F(Frequency), or f (W ), that is defined as the number of independent occurrences of W in all documents. (Independent means that W is not included in any larger term candidate.) However, we observed that f (W ) (or the combination of f (W ) and another score) had no advantage over tf (W ) (or the combination of tf (W ) and another score) in the experiments,so in this paper we omit scores that are the combination of f (W ) and other scores.3 In addition, we apply the adding-one smoothing to both of them to avoid the score being zero when wi has no connected terms.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Note that the adding-one smoothing is applied.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "This situation occurs when we want to score a new term candidate from outside of corpus.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "We only show a graph in the economy genre, but the results in other genres were similar to this.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "We proposed a new method for extracting terms. It involved the combination of two LR methods: Token-LR and Type-LR. We showed that these two could be combined by using the idea of perplexity, and gave a definition for the combined method. This new method was then combined with TF and experimental results on the test corpus consisting of news articles and their summaries revealed that the new method (FPP') outperformed existing methods including TF, TF-IDF, MC-Value, and FLR.In future work, we would like to improve the performance of the method by, for example, adding preprocessing rules, such as the appropriate treatment of numerical characters, and developing more sophisticated methods for combining TF and PP. We also plan to extend our experiments to include other test collections like TMREC [6] .", |
|
"cite_spans": [ |
|
{ |
|
"start": 805, |
|
"end": 808, |
|
"text": "[6]", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion and Future Work", |
|
"sec_num": "7" |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "A methodology for automatic term recognition", |
|
"authors": [ |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Ananiadou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1994, |
|
"venue": "Proceedings of the 15th InternationalConference on Computational Linguistcs (COLING)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1034--1038", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Ananiadou, S.: A methodology for automatic term recognition. In Proceedings of the 15th InternationalConference on Computational Linguistcs (COLING) (1994), pp. 1034-1038.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Extended Models and Tools for High-performance Part-of-Speech Tagger", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Asahara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Matsumoto", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of COLING 2000", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Asahara, M., Matsumoto, Y.: Extended Models and Tools for High-performance Part-of-Speech Tagger. Proceedings of COLING 2000. (2000).", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "COMPUTERM'98 First Workshop on Computational Terminology", |
|
"authors": [], |
|
"year": 1998, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "COMPUTERM'98 First Workshop on Computational Terminology. (1998).", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "COMPUTERM'02 Second Workshop on Computational Terminology", |
|
"authors": [], |
|
"year": 2002, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "COMPUTERM'02 Second Workshop on Computational Terminology. (2002).", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "The C-value/NC-value method for ATR", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Frantzi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "S", |
|
"middle": [], |
|
"last": "Ananiadou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "Journal of NLP", |
|
"volume": "6", |
|
"issue": "3", |
|
"pages": "145--179", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Frantzi, K. and Ananiadou, S.: The C-value/NC-value method for ATR. Journal of NLP, Vol. 6, No. 3, (1999). pp.145-179.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "TMREC Task: Overview and Evaluation. Proc. of the First NTCIR Workshop on Research in Japanese Text Retrieval and Term Recognition", |
|
"authors": [ |
|
{ |
|
"first": "K", |
|
"middle": [], |
|
"last": "Kageura", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "411--440", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kageura, K.: TMREC Task: Overview and Evaluation. Proc. of the First NTCIR Workshop on Research in Japanese Text Retrieval and Term Recognition, (1999). pp. 411-440.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Methods of automatic term recognition: A review. Terminology", |
|
"authors": [ |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Kageura", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "B", |
|
"middle": [], |
|
"last": "Umino", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "", |
|
"volume": "3", |
|
"issue": "", |
|
"pages": "259--289", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Kageura, K and Umino, B.: Methods of automatic term recognition: A review. Terminology, Vol. 3, No. 2, (1996). pp. 259-289.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Foundations of Statistical Natural Language Processing", |
|
"authors": [ |
|
{ |
|
"first": "C", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Schutze", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1999, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Manning, C.D., and Schutze, H..: Foundations of Statistical Natural Language Pro- cessing. (1999). The MIT Press.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Automatic Term Recognition based on Statistics of Compound Nouns and their", |
|
"authors": [ |
|
{ |
|
"first": "H", |
|
"middle": [], |
|
"last": "Nakagawa", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "T", |
|
"middle": [], |
|
"last": "Mori", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Components. Terminology", |
|
"volume": "9", |
|
"issue": "2", |
|
"pages": "201--219", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nakagawa, H. and Mori, T.: Automatic Term Recognition based on Statistics of Compound Nouns and their Components. Terminology, Vol. 9, No. 2, (2003). pp. 201-219.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Preliminaries: Token-LR and Type-LR Figure 1 outlines example statistics for term unit connections. For example, the term disaster information appeared three times in the corpus. An example of statistics for term unit connections LR scores have two versions: Token-LR and Type-LR. Token-LR (and Type-LR) are calculated by simply counting the frequency (and the types) of terms connected to each term unit, respectively. In this case, a Type-LR score for the term unit \"information\" is l(inf ormation) = 1 + 1 4 , r(inf ormation) = 3 + 1, LR(inf ormation) = \u221a 8, and a Token-LR score is l(inf ormation) = 3 + 1, r(inf ormation) = 6 + 1, LR(inf ormation) = \u221a 28.", |
|
"num": null |
|
}, |
|
"FIGREF2": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Example of term unit and term units connected to its right", |
|
"num": null |
|
}, |
|
"FIGREF3": { |
|
"uris": null, |
|
"type_str": "figure", |
|
"text": "Results in economy genre Results on 50 -1000 articles", |
|
"num": null |
|
}, |
|
"TABREF0": { |
|
"text": ") does not count W itself. Intuitively, t(W ) is the degree of being part of another term, and c(W ) is the degree of being part of various types of terms.", |
|
"num": null, |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF1": { |
|
"text": "Number of articles in test collection", |
|
"num": null, |
|
"content": "<table><tr><td colspan=\"2\">Economy Society World Politics</td></tr><tr><td># of articles 4,177</td><td>5,952 6,153 4,428</td></tr></table>", |
|
"type_str": "table", |
|
"html": null |
|
}, |
|
"TABREF2": { |
|
"text": "Average precision on corpus of 1, 10, and 50 articles. Each cell contains results for the Economy/World/Society/Politics genres.", |
|
"num": null, |
|
"content": "<table><tr><td>Measure</td><td>SIZE=1</td><td>SIZE=10</td><td>SIZE=50</td></tr><tr><td>F</td><td colspan=\"3\">0.275/0.274/0.246/0.406 0.337/0.350/0.325/0.378 0.401/0.415/0.393/0.425</td></tr><tr><td>TF</td><td colspan=\"3\">0.305/0.388/0.281/0.430 0.386/0.406/0.376/0.435 0.454/0.462/0.436/0.477</td></tr><tr><td>DF</td><td colspan=\"3\">0.150/0.173/0.076/0.256 0.237/0.253/0.234/0.294 0.337/0.357/0.332/0.378</td></tr><tr><td>LR</td><td colspan=\"3\">0.192/0.370/0.194/0.378 0.255/0.280/0.254/0.317 0.303/0.302/0.273/0.320</td></tr><tr><td colspan=\"4\">MC-Val 0.218/0.296/0.240/0.388 0.317/0.334/0.307/0.365 0.399/0.400/0.369/0.420</td></tr><tr><td>FLR</td><td colspan=\"3\">0.305/0.410/0.298/0.469 0.361/0.397/0.364/0.429 0.423/0.435/0.404/0.455</td></tr><tr><td colspan=\"4\">TF-IDF 0.150/0.173/0.076/0.256 0.388/0.407/0.376/0.437 0.457/0.465/0.438/0.479</td></tr><tr><td>PP</td><td colspan=\"3\">0.223/0.327/0.285/0.514 0.285/0.299/0.282/0.331 0.329/0.317/0.279/0.331</td></tr><tr><td>FPP'</td><td colspan=\"3\">0.320/0.457/0.380/0.561 0.407/0.444/0.409/0.471 0.487/0.480/0.448/0.493</td></tr></table>", |
|
"type_str": "table", |
|
"html": null |
|
} |
|
} |
|
} |
|
} |