{
"paper_id": "I08-1014",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:41:48.474901Z"
},
"title": "Determining the Unithood of Word Sequences using a Probabilistic Approach",
"authors": [
{
"first": "Wilson",
"middle": [],
"last": "Wong",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Western",
"location": {
"postCode": "6009",
"settlement": "Crawley",
"region": "WA",
"country": "Australia"
}
},
"email": "wilson@csse.uwa.edu.au"
},
{
"first": "Wei",
"middle": [],
"last": "Liu",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Western",
"location": {
"postCode": "6009",
"settlement": "Crawley",
"region": "WA",
"country": "Australia"
}
},
"email": ""
},
{
"first": "Mohammed",
"middle": [],
"last": "Bennamoun",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Western",
"location": {
"postCode": "6009",
"settlement": "Crawley",
"region": "WA",
"country": "Australia"
}
},
"email": "bennamou@csse.uwa.edu.au"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Most research related to unithood were conducted as part of a larger effort for the determination of termhood. Consequently, novelties are rare in this small sub-field of term extraction. In addition, existing work were mostly empirically motivated and derived. We propose a new probabilistically-derived measure, independent of any influences of termhood, that provides dedicated measures to gather linguistic evidence from parsed text and statistical evidence from Google search engine for the measurement of unithood. Our comparative study using 1, 825 test cases against an existing empiricallyderived function revealed an improvement in terms of precision, recall and accuracy.",
"pdf_parse": {
"paper_id": "I08-1014",
"_pdf_hash": "",
"abstract": [
{
"text": "Most research related to unithood were conducted as part of a larger effort for the determination of termhood. Consequently, novelties are rare in this small sub-field of term extraction. In addition, existing work were mostly empirically motivated and derived. We propose a new probabilistically-derived measure, independent of any influences of termhood, that provides dedicated measures to gather linguistic evidence from parsed text and statistical evidence from Google search engine for the measurement of unithood. Our comparative study using 1, 825 test cases against an existing empiricallyderived function revealed an improvement in terms of precision, recall and accuracy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Automatic term recognition, also referred to as term extraction or terminology mining, is the process of extracting lexical units from text and filtering them for the purpose of identifying terms which characterise certain domains of interest. This process involves the determination of two factors: unithood and termhood. Unithood concerns with whether or not a sequence of words should be combined to form a more stable lexical unit. On the other hand, termhood measures the degree to which these stable lexical units are related to domain-specific concepts. Unithood is only relevant to complex terms (i.e. multi-word terms) while termhood (Wong et al., 2007a ) deals with both simple terms (i.e. singleword terms) and complex terms. Recent reviews by (Wong et al., 2007b) show that existing research on unithood are mostly carried out as a prerequisite to the determination of termhood. As a result, there is only a small number of existing measures dedicated to determining unithood. Besides the lack of dedicated attention in this sub-field of term extraction, the existing measures are usually derived from term or document frequency, and are modified as per need. As such, the significance of the different weights that compose the measures usually assume an empirical viewpoint. Obviously, such methods are at most inspired by, but not derived from formal models (Kageura and Umino, 1996) .",
"cite_spans": [
{
"start": 643,
"end": 662,
"text": "(Wong et al., 2007a",
"ref_id": "BIBREF13"
},
{
"start": 755,
"end": 775,
"text": "(Wong et al., 2007b)",
"ref_id": "BIBREF15"
},
{
"start": 1372,
"end": 1397,
"text": "(Kageura and Umino, 1996)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The three objectives of this paper are (1) to separate the measurement of unithood from the determination of termhood, (2) to devise a probabilisticallyderived measure which requires only one threshold for determining the unithood of word sequences using non-static textual resources, and 3to demonstrate the superior performance of the new probabilistically-derived measure against existing empirical measures. In regards to the first objective, we will derive our probabilistic measure free from any influence of termhood determination. Following this, our unithood measure will be an independent tool that is applicable not only to term extraction, but many other tasks in information extraction and text mining. Concerning the second objective, we will devise our new measure, known as the Odds of Unithood (OU ), which are derived using Bayes Theorem and founded on a few elementary probabilities. The probabilities are estimated using Google page counts in an attempt to eliminate problems related to the use of static corpora. Moreover, only one threshold, namely, OU T is required to control the functioning of OU . Regarding the third objective, we will compare our new OU against an existing empirically-derived measure called Unithood (U H) (Wong et al., 2007b) in terms of their precision, recall and accuracy.",
"cite_spans": [
{
"start": 1252,
"end": 1272,
"text": "(Wong et al., 2007b)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In Section 2, we provide a brief review on some of existing techniques for measuring unithood. In Section 3, we present our new probabilistic approach, the measures involved, and the theoretical and intuitive justification behind every aspect of our measures. In Section 4, we summarize some findings from our evaluations. Finally, we conclude this paper with an outlook to future work in Section 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Some of the most common measures of unithood include pointwise mutual information (MI) (Church and Hanks, 1990) and log-likelihood ratio (Dunning, 1994) . In mutual information, the co-occurrence frequencies of the constituents of complex terms are utilised to measure their dependency. The mutual information for two words a and b is defined as:",
"cite_spans": [
{
"start": 87,
"end": 111,
"text": "(Church and Hanks, 1990)",
"ref_id": "BIBREF0"
},
{
"start": 137,
"end": 152,
"text": "(Dunning, 1994)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "M I(a, b) = log 2 p(a, b) p(a)p(b)",
"eq_num": "(1)"
}
],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "where p(a) and p(b) are the probabilities of occurrence of a and b. Many measures that apply statistical techniques assuming strict normal distribution, and independence between the word occurrences (Franz, 1997) do not fare well. For handling extremely uncommon words or small sized corpus, log-likelihood ratio delivers the best precision (Kurz and Xu, 2002) . Log-likelihood ratio attempts to quantify how much more likely one pair of words is to occur compared to the others. Despite its potential, \"How to apply this statistic measure to quantify structural dependency of a word sequence remains an interesting issue to explore.\" (Kit, 2002) . (Seretan et al., 2004) tested mutual information, loglikelihood ratio and t-tests to examine the use of results from web search engines for determining the collocational strength of word pairs. However, no performance results were presented. (Wong et al., 2007b) presented a hybrid approach inspired by mutual information in Equation 1, and C-value in Equation 3. The authors employ Google page counts for the computation of statistical evidences to replace the use of frequencies obtained from static corpora. Using the page counts, the authors proposed a function known as Unithood (UH) for determining the mergeability of two lexical units a x and a y to produce a stable sequence of words s. The word sequences are organised as a set W = {s, a x , a y } where s = a x ba y is a term candidate, b can be any preposition, the coordinating conjunction \"and\" or an empty string, and a x and a y can either be noun phrases in the form Adj * N + or another s (i.e. defining a new s in terms of other s). The authors define U H as:",
"cite_spans": [
{
"start": 199,
"end": 212,
"text": "(Franz, 1997)",
"ref_id": "BIBREF3"
},
{
"start": 341,
"end": 360,
"text": "(Kurz and Xu, 2002)",
"ref_id": "BIBREF8"
},
{
"start": 635,
"end": 646,
"text": "(Kit, 2002)",
"ref_id": "BIBREF6"
},
{
"start": 649,
"end": 671,
"text": "(Seretan et al., 2004)",
"ref_id": "BIBREF11"
},
{
"start": 891,
"end": 911,
"text": "(Wong et al., 2007b)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "U H(a x , a y ) = \uf8f1 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f2 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f4 \uf8f3 1 if (M I(a x , a y ) > M I + ) \u2228 (M I + \u2265 M I(a x , a y ) \u2265 M I \u2212 \u2227 ID(a x , s) \u2265 ID T \u2227 ID(a y , s) \u2265 ID T \u2227 IDR + \u2265 IDR(a x , a y ) \u2265 IDR \u2212 ) 0 otherwise (2) where M I + , M I \u2212 , ID T , IDR + and IDR \u2212",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "are thresholds for determining mergeability decisions, and M I(a x , a y ) is the mutual information between a x and a y , while ID(a x , s), ID(a y , s) and IDR(a x , a y ) are measures of lexical independence of a x and a y from s. For brevity, let z be either a x or a y , and the independence measure ID(z, s) is then defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "ID(z, s) = log 10 (n z \u2212 n s ) if(n z > n s ) 0 otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "where n z and n s is the Google page count for z and s respectively. On the other hand, IDR(a x , a y ) = ID(ax,s) ID(ay,s) . Intuitively, U H(a x , a y ) states that the two lexical units a x and a y can only be merged in two cases, namely, 1) if a x and a y has extremely high mutual information (i.e. higher than a certain threshold M I + ), or 2) if a x and a y achieve average mutual information (i.e. within the acceptable range of two thresholds M I + and M I \u2212 ) due to both of their extremely high independence (i.e. higher than the threshold ID T ) from s. (Frantzi, 1997) proposed a measure known as Cvalue for extracting complex terms. The measure is based upon the claim that a substring of a term candidate is a candidate itself given that it demonstrates adequate independence from the longer version it appears in. For example, \"E. coli food poisoning\", \"E. coli\" and \"food poisoning\" are acceptable as valid complex term candidates. However, \"E. coli food\" is not. Therefore, some measures are required to gauge the strength of word combinations to decide whether two word sequences should be merged or not. Given a word sequence a to be examined for unithood, the Cvalue is defined as:",
"cite_spans": [
{
"start": 567,
"end": 582,
"text": "(Frantzi, 1997)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "Cvalue(a) = log 2 |a|f a if |a| = g log 2 |a|(f a \u2212 l\u2208La f l |La| ) otherwise (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
{
"text": "where |a| is the number of words in a, L a is the set of longer term candidates that contain a, g is the longest n-gram considered, f a is the frequency of occurrence of a, and a / \u2208 L a . While certain researchers (Kit, 2002) consider Cvalue as a termhood measure, others (Nakagawa and Mori, 2002) accept it as a measure for unithood. One can observe that longer candidates tend to gain higher weights due to the inclusion of log 2 |a| in Equation 3. In addition, the weights computed using Equation 3 are purely dependent on the frequency of a.",
"cite_spans": [
{
"start": 215,
"end": 226,
"text": "(Kit, 2002)",
"ref_id": "BIBREF6"
},
{
"start": 273,
"end": 298,
"text": "(Nakagawa and Mori, 2002)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Works",
"sec_num": "2"
},
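To make Equation 3 concrete, the following is a minimal Python sketch of the Cvalue computation. The freq and longer_candidates inputs are hypothetical data structures assumed here for illustration only; they are not part of any implementation described in the cited work.

```python
import math

def c_value(a, freq, longer_candidates, g):
    """Sketch of Cvalue(a) from Equation 3.

    a                 -- candidate string, e.g. "food poisoning"
    freq              -- dict mapping each candidate string to its frequency f_a
    longer_candidates -- dict mapping a candidate to the set L_a of longer
                         candidates that contain it (a itself is not in L_a)
    g                 -- length, in words, of the longest n-gram considered
    """
    n_words = len(a.split())
    f_a = freq[a]
    l_a = longer_candidates.get(a, set())
    if n_words == g or not l_a:
        # a is of maximal length (or is not nested in any longer candidate),
        # so no nested-frequency correction applies
        return math.log2(n_words) * f_a
    nested_mean = sum(freq[l] for l in l_a) / len(l_a)
    return math.log2(n_words) * (f_a - nested_mean)
```

Note that single-word candidates receive a weight of zero under this formula because log_2 1 = 0, which is consistent with Cvalue being aimed at complex terms.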
{
"text": "We propose a probabilistically-derived measure for determining the unithood of word pairs (i.e. potential term candidates) extracted using the headdriven left-right filter (Wong, 2005; Wong et al., 2007b) and Stanford Parser (Klein and Manning, 2003) . These word pairs will appear in the form of (a x , a y ) \u2208 A with a x and a y located immediately next to each other (i.e. x + 1 = y), or separated by a preposition or coordinating conjunction \"and\" (i.e. x + 2 = y). Obviously, a x has to appear before a y in the sentence or in other words, x < y for all pairs where x and y are the word offsets produced by the Stanford Parser. The pairs in A will remain as potential term candidates until their unithood have been examined. Once the unithood of the pairs in A have been determined, they will be referred to as term candidates. Formally, the unithood of any two lexical units a x and a y can be defined as",
"cite_spans": [
{
"start": 172,
"end": 184,
"text": "(Wong, 2005;",
"ref_id": "BIBREF16"
},
{
"start": 185,
"end": 204,
"text": "Wong et al., 2007b)",
"ref_id": "BIBREF15"
},
{
"start": 225,
"end": 250,
"text": "(Klein and Manning, 2003)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "Definition 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "The unithood of two lexical units is the \"degree of strength or stability of syntagmatic combinations and collocations\" (Kageura and Umino, 1996) between them.",
"cite_spans": [
{
"start": 120,
"end": 145,
"text": "(Kageura and Umino, 1996)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "It is obvious that the problem of measuring the unithood of any pair of words is the determination of their \"degree\" of collocational strength as mentioned in Definition 1. In practical terms, the \"degree\" mentioned above will provide us with a way to determine if the units a x and a y should be combined to form s, or left alone as separate units. The collocational strength of a x and a y that exceeds a certain threshold will demonstrate to us that s is able to form a stable unit and hence, a better term candidate than a x and a y separated. It is worth pointing that the size (i.e. number of words) of a x and a y is not limited to 1. For example, we can have a x =\"National Institute\", b=\"of\" and a y =\"Allergy and Infectious Diseases\". In addition, the size of a x and a y has no effect on the determination of their unithood using our approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
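For concreteness, the example pair above can be written out as follows. This is only an illustration of the notation W = {s, a_x, a_y}, not code from the authors' system.

```python
# Example units from the text: a_x = "National Institute", b = "of",
# a_y = "Allergy and Infectious Diseases".
a_x = "National Institute"
b = "of"
a_y = "Allergy and Infectious Diseases"

s = f"{a_x} {b} {a_y}"                 # the merged candidate whose unithood is measured
W = {"s": s, "a_x": a_x, "a_y": a_y}
print(W["s"])                          # National Institute of Allergy and Infectious Diseases
```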
{
"text": "As we have discussed in Section 2, most of the conventional practices employ frequency of occurrence from local corpora, and some statistical tests or information-theoretic measures to determine the coupling strength between elements in W = {s, a x , a y }. Two of the main problems associated with such approaches are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "\u2022 Data sparseness is a problem that is welldocumented by many researchers (Keller et al., 2002) . It is inherent to the use of local corpora that can lead to poor estimation of parameters or weights; and",
"cite_spans": [
{
"start": 74,
"end": 95,
"text": "(Keller et al., 2002)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "\u2022 Assumption of independence and normality of word distribution are two of the many problems in language modelling (Franz, 1997) . While the independence assumption reduces text to simply a bag of words, the assumption of normal distribution of words will often lead to incorrect conclusions during statistical tests.",
"cite_spans": [
{
"start": 115,
"end": 128,
"text": "(Franz, 1997)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "As a general solution, we innovatively employ results from web search engines for use in a probabilistic framework for measuring unithood.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "As an attempt to address the first problem, we utilise page counts by Google for estimating the probability of occurrences of the lexical units in W .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
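A minimal sketch of this page-count machinery is given below. The page_count function is a placeholder for whatever web-search API is available (the paper relies on Google result counts obtained with the phrase and "+" operators), and the function-word estimate of the index size is only one plausible reading of the procedure described in the following paragraph.

```python
def page_count(query: str) -> int:
    """Placeholder: return the number of documents a web search engine reports
    for `query`. The paper uses Google result counts with the phrase operator
    and the required '+' operator, e.g. +"heart disease"."""
    raise NotImplementedError("wire this up to an actual search API")

def estimate_index_size(function_words=("a", "is", "with", "of", "the")) -> int:
    """Rough estimate of |N|, the number of indexed documents, using function
    words as predictors: their document frequencies are relatively stable
    across genres, so the largest of their page counts serves as a crude
    lower bound on the index size (an assumption of this sketch)."""
    return max(page_count(f'+"{w}"') for w in function_words)
```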
{
"text": "We consider the World Wide Web as a large general corpus and the Google search engine as a gateway for accessing the documents in the general corpus. Our choice of using Google to obtain the page count was merely motivated by its extensive coverage. In fact, it is possible to employ any search engines on the World Wide Web for this research. As for the second issue, we attempt to address the problem of determining the degree of collocational strength in terms of probabilities estimated using Google page count. We begin by defining the sample space, N as the set of all documents indexed by Google search engine. We can estimate the index size of Google, |N | using function words as predictors. Function words such as \"a\", \"is\" and \"with\", as opposed to content words, appear with frequencies that are relatively stable over many different genres. Next, we perform random draws (i.e. trial) of documents from N . For each lexical unit w \u2208 W , there will be a corresponding set of outcomes (i.e. events) from the draw. There will be three basic sets which are of interest to us:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "Definition 2 Basic events corresponding to each w \u2208 W :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "\u2022 X is the event that a x occurs in the document",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "\u2022 Y is the event that a y occurs in the document",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "\u2022 S is the event that s occurs in the document It should be obvious to the readers that since the documents in S have to contain all two units a x and a y , S is a subset of X \u2229 Y or S \u2286 X \u2229 Y . It is worth noting that even though S \u2286 X \u2229 Y , it is highly unlikely that S = X \u2229 Y since the two portions a x and a y may exist in the same document without being conjoined by b. Next, subscribing to the frequency interpretation of probability, we can obtain the probability of the events in Definition 2 in terms of Google page count:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "P (X) = n x |N | (4) P (Y ) = n y |N | P (S) = n s |N |",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "where n x , n y and n s is the page count returned as the result of Google search using the term [+\"a x \"],",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "[+\"a y \"] and [+\"s\"], respectively. The pair of quotes that encapsulates the search terms is the phrase operator, while the character \"+\" is the required operator supported by the Google search engine. As discussed earlier, the independence assumption required by certain information-theoretic measures and other Bayesian approaches may not always be valid, especially when we are dealing with linguistics. As such, P (X \u2229 Y ) = P (X)P (Y ) since the occurrences of a x and a y in documents are inevitably governed by some hidden variables and hence, not independent. Following this, we define the probabilities for two new sets which result from applying some set operations on the basic events in Definition 2:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "P (X \u2229 Y ) = n xy |N | (5) P (X \u2229 Y \\ S) = P (X \u2229 Y ) \u2212 P (S)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "where n xy is the page count returned by Google for the search using [+\"a x \" +\"a y \"]. Defining P (X \u2229Y ) in terms of observable page counts, rather than a combination of two independent events will allow us to avoid any unnecessary assumption of independence. Next, referring back to our main problem discussed in Definition 1, we are required to estimate the strength of collocation of the two units a x and a y . Since there is no standard metric for such measurement, we propose to address the problem from a probabilistic perspective. We introduce the probability that s is a stable lexical unit given the evidence s possesses:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "Definition 3 Probability of unithood:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "P (U |E) = P (E|U )P (U ) P (E)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "where U is the event that s is a stable lexical unit and E is the evidences belonging to s. P (U |E) is the posterior probability that s is a stable unit given the evidence E. P (U ) is the prior probability that s is a unit without any evidence, and P (E) is the prior probability of evidences held by s. As we shall see later, these two prior probabilities will be immaterial in the final computation of unithood. Since s can either be a stable unit or not, we can state that,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (\u016a |E) = 1 \u2212 P (U |E)",
"eq_num": "(6)"
}
],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "where\u016a is the event that s is not a stable lexical unit. Since Odds = P/(1 \u2212 P ), we multiply both sides of Definition 3 by (1 \u2212 P (U |E)) \u22121 to obtain,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (U |E) 1 \u2212 P (U |E) = P (E|U )P (U ) P (E)(1 \u2212 P (U |E))",
"eq_num": "(7)"
}
],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "By substituting Equation 6 in Equation 7 and later, applying the multiplication rule P (\u016a |E)P (E) = P (E|\u016a )P (\u016a ) to it, we will obtain:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (U |E) P (\u016a |E) = P (E|U )P (U ) P (E|\u016a )P (\u016a )",
"eq_num": "(8)"
}
],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "We proceed to take the log of the odds in Equation 8 (i.e. logit) to get:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "log P (E|U ) P (E|\u016a ) = log P (U |E) P (\u016a |E) \u2212 log P (U ) P (\u016a )",
"eq_num": "(9)"
}
],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "While it is obvious that certain words tend to cooccur more frequently than others (i.e. idioms and collocations), such phenomena are largely arbitrary (Smadja, 1993 ). This makes the task of deciding on what constitutes an acceptable collocation difficult. The only way to objectively identify stable lexical units is through observations in samples of the language (e.g. text corpus) (McKeown and Radev, 2000) . In other words, assigning the apriori probability of collocational strength without empirical evidence is both subjective and difficult. As such, we are left with the option to assume that the probability of s being a stable unit and not being a stable unit without evidence is the same (i.e. P (U ) = P (\u016a ) = 0.5). As a result, the second term in Equation 9 evaluates to 0:",
"cite_spans": [
{
"start": 152,
"end": 165,
"text": "(Smadja, 1993",
"ref_id": "BIBREF12"
},
{
"start": 386,
"end": 411,
"text": "(McKeown and Radev, 2000)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "log P (U |E) P (\u016a |E) = log P (E|U ) P (E|\u016a )",
"eq_num": "(10)"
}
],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "We introduce a new measure for determining the odds of s being a stable unit known as Odds of Unithood (OU) :",
"cite_spans": [
{
"start": 94,
"end": 107,
"text": "Unithood (OU)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "Definition 4 Odds of unithood OU (s) = log P (E|U ) P (E|\u016a )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "Assuming that the evidences in E are independent of one another, we can evaluate OU (s) in terms of:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "OU (s) = log i P (e i |U ) i P (e i |\u016a )",
"eq_num": "(11)"
}
],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "= i log P (e i |U ) P (e i |\u016a )",
"eq_num": "(a)"
}
],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "The area with darker shade is the set X \u2229 Y \\ S. Computing the ratio of P (S) and the probability of this area will give us the first evidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "(b) The area with darker shade is the set S \u2032 . Computing the ratio of P (S) and the probability of this area (i.e. P (S \u2032 ) = 1 \u2212 P (S)) will give us the second evidence. Figure 1 : The probability of the areas with darker shade are the denominators required by the evidences e 1 and e 2 for the estimation of OU (s).",
"cite_spans": [],
"ref_spans": [
{
"start": 172,
"end": 180,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "where e i are individual evidences possessed by s.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "With the introduction of Definition 4, we can examine the degree of collocational strength of a x and a y in forming s, mentioned in Definition 1 in terms of OU (s). With the base of the log in Definition 4 more than 1, the upper and lower bound of OU (s) would be +\u221e and \u2212\u221e, respectively. OU (s) = +\u221e and OU (s) = \u2212\u221e corresponds to the highest and the lowest degree of stability of the two units a x and a y appearing as s, respectively. A high 1 OU (s) would indicate the suitability for the two units a x and a y to be merged to form s. Ultimately, we have reduced the vague problem of the determination of unithood introduced in Definition 1 into a practical and computable solution in Definition 4. The evidences that we propose to employ for determining unithood are based on the occurrences of s, or the event S if the readers recall from Definition 2. We are interested in two types of occurrences of s, namely, the occurrence of s given that a x and a y have already occurred or X \u2229 Y , and the occurrence of s as it is in our sample space, N . We refer to the first evidence e 1 as local occurrence, while the second one e 2 as global occurrence. We will discuss the intuitive justification behind each type of occurrences. Each evidence e i captures the occurrences of s within a different confinement. We will estimate these evidences in terms of the elementary probabilities already defined in Equations 4 and 5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "The first evidence e 1 captures the probability of occurrences of s within the confinement of a x and a y or X\u2229Y . As such, P (e |U ) can be interpreted as the probability of s occurring within X \u2229 Y as a stable unit or P (S|X \u2229 Y ). On the other hand, P (e 1 |\u016a ) captures the probability of s occurring in X \u2229 Y not as a unit. In other words, P (e 1 |\u016a ) is the probability of s not occurring in X \u2229 Y , or equivalently, equal to P ((X \u2229 Y \\ S)|(X \u2229 Y )). The set X \u2229 Y \\ S is shown as the area with darker shade in Figure 1(a) . Let us define the odds based on the first evidence as:",
"cite_spans": [],
"ref_spans": [
{
"start": 518,
"end": 529,
"text": "Figure 1(a)",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "O L = P (e 1 |U ) P (e 1 |\u016a )",
"eq_num": "(12)"
}
],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "Substituting P (e 1 |U ) = P (S|X \u2229 Y ) and P (e 1 |\u016a ) = P ((X \u2229 Y \\ S)|(X \u2229 Y )) into Equation 12 will give us:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "O L = P (S|X \u2229 Y ) P ((X \u2229 Y \\ S)|(X \u2229 Y )) = P (S \u2229 (X \u2229 Y )) P (X \u2229 Y ) P (X \u2229 Y ) P ((X \u2229 Y \\ S) \u2229 (X \u2229 Y )) = P (S \u2229 (X \u2229 Y )) P ((X \u2229 Y \\ S) \u2229 (X \u2229 Y )) and since S \u2286 (X \u2229Y ) and (X \u2229Y \\S) \u2286 (X \u2229Y ), O L = P (S) P (X \u2229 Y \\ S) if (P (X \u2229 Y \\ S) = 0)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "and O L = 1 if P (X \u2229 Y \\ S) = 0. The second evidence e 2 captures the probability of occurrences of s without confinement. If s is a stable unit, then its probability of occurrence in the sample space would simply be P (S). On the other hand, if s occurs not as a unit, then its probability of non-occurrence is 1 \u2212 P (S). The complement of S, which is the set S \u2032 is shown as the area with darker shade in Figure 1(b) . Let us define the odds based on the second evidence as:",
"cite_spans": [],
"ref_spans": [
{
"start": 408,
"end": 419,
"text": "Figure 1(b)",
"ref_id": null
}
],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "O G = P (e 2 |U ) P (e 2 |\u016a )",
"eq_num": "(13)"
}
],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "Substituting P (e 2 |U ) = P (S) and P (e 2 |\u016a ) = 1 \u2212 P (S) into Equation 13 will give us:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "O G = P (S) 1 \u2212 P (S)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "Intuitively, the first evidence attempts to capture the extent to which the existence of the two lexical units a x and a y is attributable to s. Referring back to O L , whenever the denominator P (X \u2229 Y \\ S) becomes less than P (S), we can deduce that a x and a y actually exist together as s more than in other forms. At one extreme when P (X \u2229 Y \\ S) = 0, we can conclude that the co-occurrence of a x and a y is exclusively for s. As such, we can also refer to O L as a measure of exclusivity for the use of a x and a y with respect to s. This first evidence is a good indication for the unithood of s since the more the existence of a x and a y is attributed to s, the stronger the collocational strength of s becomes. Concerning the second evidence, O G attempts to capture the extent to which s occurs in general usage (i.e. World Wide Web). We can consider O G as a measure of pervasiveness for the use of s. As s becomes more widely used in text, the numerator in O G will increase. This provides a good indication on the unithood of s since the more s appears in usage, the likelier it becomes that s is a stable unit instead of an occurrence by chance when a x and a y are located next to each other. As a result, the derivation of OU (s) using O L and O G will ensure a comprehensive way of determining unithood.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "Finally, expanding OU (s) in Equation 11 using Equations 12 and 13 will give us:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "OU (s) = log O L + log O G (14) = log P (S) P (X \u2229 Y \\ S) + log P (S) 1 \u2212 P (S)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
{
"text": "As such, the decision on whether a x and a y should be merged to form s can be made based solely on the Odds of Unithood (OU) defined in Equation 14. We will merge a x and a y if their odds of unithood exceeds a certain threshold, OU T .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A Probabilistically-derived Measure for Unithood Determination",
"sec_num": "3"
},
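Putting Equations 4, 5 and 14 together, a minimal sketch of the resulting merge decision is shown below. The page counts n_s and n_xy and the index size N are assumed to come from search-engine queries as described above; the helper names are ours, not the authors', and the default threshold is the value reported in Section 4.

```python
import math

def odds_of_unithood(n_s: int, n_xy: int, N: int) -> float:
    """OU(s) = log O_L + log O_G (Equation 14), estimated from page counts.

    n_s  -- page count for the merged candidate s (query +"s")
    n_xy -- page count for the conjunctive query +"a_x" +"a_y"
    N    -- estimated index size |N|
    """
    p_s = n_s / N            # P(S), Equation 4
    p_xy = n_xy / N          # P(X n Y), Equation 5
    p_rest = p_xy - p_s      # P(X n Y \ S)

    if p_s == 0:
        return float("-inf")  # s never observed: lowest possible unithood

    # Exclusivity evidence; O_L = 1 (log O_L = 0) when P(X n Y \ S) is zero
    # (or, with noisy counts, negative).
    o_l = p_s / p_rest if p_rest > 0 else 1.0
    # Pervasiveness evidence.
    o_g = p_s / (1.0 - p_s)

    # Natural log; Definition 4 only requires a base greater than 1.
    return math.log(o_l) + math.log(o_g)

def should_merge(n_s: int, n_xy: int, N: int, threshold: float = -8.39) -> bool:
    """Merge a_x and a_y into s when OU(s) exceeds the threshold OU_T."""
    return odds_of_unithood(n_s, n_xy, N) > threshold
```

With the threshold OU_T = -8.39 used in the evaluation, a pair is merged only when the combined exclusivity and pervasiveness evidence for s is sufficiently strong.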
{
"text": "For this evaluation, we employed 500 news articles from Reuters in the health domain gathered between December 2006 to May 2007. These 500 articles are fed into the Stanford Parser whose output is then used by our head-driven left-right filter (Wong, 2005; Wong et al., 2007b) to extract word sequences in the form of nouns and noun phrases. Pairs of word sequences (i.e. a x and a y ) located immediately next to each other, or separated by a preposition or the conjunction \"and\" in the same sentence are mea-sured for their unithood. Using the 500 news articles, we managed to obtain 1, 825 pairs of words to be tested for unithood.",
"cite_spans": [
{
"start": 244,
"end": 256,
"text": "(Wong, 2005;",
"ref_id": "BIBREF16"
},
{
"start": 257,
"end": 276,
"text": "Wong et al., 2007b)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluations and Discussions",
"sec_num": "4"
},
{
"text": "We performed a comparative study of our new probabilistic approach against the empiricallyderived unithood function described in Equation 2. Two experiments were conducted. In the first one, we assessed our probabilistically-derived measure OU (s) as described in Equation 14 where the decisions on whether or not to merge the 1, 825 pairs are done automatically. These decisions are known as the actual results. At the same time, we inspected the same list manually to decide on the merging of all the pairs. These decisions are known as the ideal results. The threshold OU T employed for our evaluation is determined empirically through experiments and is set to \u22128.39. However, since only one threshold is involved in deciding mergeability, training algorithms and data sets may be employed to automatically decide on an optimal number. This option is beyond the scope of this paper. The actual and ideal results for this first experiment are organised into a contingency table (not shown here) for identifying the true and the false positives, and the true and the false negatives. In the second experiment, we conducted the same assessment as carried out in the first one but the decisions to merge the 1, 825 pairs are based on the U H(a x , a y ) function described in Equation 2. The thresholds required for this function are based on the values suggested by (Wong et al., 2007b) , namely, M I + = 0.9, M I \u2212 = 0.02, ID T = 6, IDR + = 1.35, and IDR \u2212 = 0.93. Table 1 : The performance of OU (s) (from Experiment 1) and U H(a x , a y ) (from Experiment 2) in terms of precision, recall and accuracy. The last column shows the difference in the performance of Experiment 1 and 2.",
"cite_spans": [
{
"start": 1367,
"end": 1387,
"text": "(Wong et al., 2007b)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 1467,
"end": 1474,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluations and Discussions",
"sec_num": "4"
},
{
"text": "Using the results from the contingency tables, we computed the precision, recall and accuracy for the two measures under evaluation. Table 1 sum-marises the performance of OU (s) and U H(a x , a y ) in determining the unithood of 1, 825 pairs of lexical units. One will notice that our new measure OU (s) outperformed the empirically-derived function U H(a x , a y ) in all aspects, with an improvement of 2.63%, 3.33% and 2.74% for precision, recall and accuracy, respectively. Our new measure achieved a 100% precision with a lower recall at 95.83%. As with any measures that employ thresholds as a cutoff point in accepting or rejecting certain decisions, we can improve the recall of OU (s) by decreasing the threshold OU T . In this way, there will be less false negatives (i.e. pairs which are supposed to be merged but are not) and hence, increases the recall rate. Unfortunately, recall will improve at the expense of precision since the number of false positives will definitely increase from the existing 0. Since our application (i.e. ontology learning) requires perfect precision in determining the unithood of word sequences, OU (s) is the ideal candidate. Moreover, with only one threshold (i.e. OU T ) required in controlling the function of OU (s), we are able to reduce the amount of time and effort spent on optimising our results.",
"cite_spans": [],
"ref_spans": [
{
"start": 133,
"end": 140,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluations and Discussions",
"sec_num": "4"
},
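For reference, the precision, recall and accuracy figures discussed above follow the standard contingency-table definitions; a small sketch with hypothetical variable names is given below.

```python
def precision_recall_accuracy(tp: int, fp: int, tn: int, fn: int):
    """Standard metrics over merge decisions:
    tp -- pairs correctly merged          fp -- pairs merged but should not have been
    tn -- pairs correctly left unmerged   fn -- pairs left unmerged that should have been merged
    """
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return precision, recall, accuracy
```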
{
"text": "In this paper, we highlighted the significance of unithood and that its measurement should be given equal attention by researchers in term extraction. We focused on the development of a new approach that is independent of influences of termhood measurement. We proposed a new probabilistically-derived measure which provide a dedicated way to determine the unithood of word sequences. We refer to this measure as the Odds of Unithood (OU). OU is derived using Bayes Theorem and is founded upon two evidences, namely, local occurrence and global occurrence. Elementary probabilities estimated using page counts from the Google search engine are utilised to quantify the two evidences. The new probabilistically-derived measure OU is then evaluated against an existing empirical function known as Unithood (UH). Our new measure OU achieved a precision and a recall of 100% and 95.83% respectively, with an accuracy at 97.26% in measuring the unithood of 1, 825 test cases. OU outperformed U H by 2.63%, 3.33% and 2.74% in terms of precision, recall and accuracy, respectively. Moreover, our new measure requires only one threshold, as compared to five in U H to control the mergeability decision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "More work is required to establish the coverage and the depth of the World Wide Web with regards to the determination of unithood. While the Web has demonstrated reasonable strength in handling general news articles, we have yet to study its appropriateness in dealing with unithood determination for technical text (i.e. the depth of the Web). Similarly, it remains a question the extent to which the Web is able to satisfy the requirement of unithood determination for a wider range of genres (i.e. the coverage of the Web). Studies on the effect of noises (e.g. keyword spamming) and multiple word senses on unithood determination using the Web is another future research direction.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "5"
},
{
"text": "A subjective issue that may be determined using a threshold",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported by the Australian Endeavour International Postgraduate Research Scholarship, and the Research Grant 2006 by the University of Western Australia.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Word association norms, mutual information, and lexicography",
"authors": [
{
"first": "K",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "1",
"pages": "22--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Church and P. Hanks. 1990. Word association norms, mutual information, and lexicography. Computational Linguistics, 16(1):22-29.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Accurate methods for the statistics of surprise and coincidence",
"authors": [
{
"first": "T",
"middle": [],
"last": "Dunning",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "1",
"pages": "61--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Dunning. 1994. Accurate methods for the statistics of surprise and coincidence. Computational Linguistics, 19(1):61-74.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Incorporating context information for the extraction of terms",
"authors": [
{
"first": "K",
"middle": [],
"last": "Frantzi",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 35th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Frantzi. 1997. Incorporating context information for the extraction of terms. In Proceedings of the 35th An- nual Meeting on Association for Computational Lin- guistics, Spain.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Independence assumptions considered harmful",
"authors": [
{
"first": "A",
"middle": [],
"last": "Franz",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 8th Conference on European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Franz. 1997. Independence assumptions considered harmful. In Proceedings of the 8th Conference on Eu- ropean Chapter of the Association for Computational Linguistics, Madrid, Spain.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Methods of automatic term recognition: A review",
"authors": [
{
"first": "K",
"middle": [],
"last": "Kageura",
"suffix": ""
},
{
"first": "B",
"middle": [],
"last": "Umino",
"suffix": ""
}
],
"year": 1996,
"venue": "Terminology",
"volume": "3",
"issue": "2",
"pages": "259--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. Kageura and B. Umino. 1996. Methods of automatic term recognition: A review. Terminology, 3(2):259- 289.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Using the web to overcome data sparseness",
"authors": [
{
"first": "F",
"middle": [],
"last": "Keller",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "O",
"middle": [],
"last": "Ourioupina",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Keller, M. Lapata, and O. Ourioupina. 2002. Using the web to overcome data sparseness. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Philadelphia.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Corpus tools for retrieving and deriving termhood evidence",
"authors": [
{
"first": "C",
"middle": [],
"last": "Kit",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 5th East Asia Forum of Terminology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "C. Kit. 2002. Corpus tools for retrieving and deriving termhood evidence. In Proceedings of the 5th East Asia Forum of Terminology, Haikou, China.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Accurate unlexicalized parsing",
"authors": [
{
"first": "D",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "C",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Klein and C. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Meeting of the As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Text mining for the extraction of domain relevant terms and term collocations",
"authors": [
{
"first": "D",
"middle": [],
"last": "Kurz",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the International Workshop on Computational Approaches to Collocations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Kurz and F. Xu. 2002. Text mining for the extrac- tion of domain relevant terms and term collocations. In Proceedings of the International Workshop on Com- putational Approaches to Collocations, Vienna.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Collocations",
"authors": [
{
"first": "K",
"middle": [],
"last": "Mckeown",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Radev",
"suffix": ""
}
],
"year": 2000,
"venue": "Handbook of Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K. McKeown and D. Radev. 2000. Collocations. In R. Dale, H. Moisl, and H. Somers, editors, Handbook of Natural Language Processing. Marcel Dekker.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A simple but powerful automatic term extraction method",
"authors": [
{
"first": "H",
"middle": [],
"last": "Nakagawa",
"suffix": ""
},
{
"first": "T",
"middle": [],
"last": "Mori",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the International Conference On Computational Linguistics (COLING)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Nakagawa and T. Mori. 2002. A simple but powerful automatic term extraction method. In Proceedings of the International Conference On Computational Lin- guistics (COLING).",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Using the web as a corpus for the syntactic-based collocation identification",
"authors": [
{
"first": "V",
"middle": [],
"last": "Seretan",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Nerima",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Wehrli",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the International Conference on on Language Resources and Evaluation (LREC)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Seretan, L. Nerima, and E. Wehrli. 2004. Using the web as a corpus for the syntactic-based colloca- tion identification. In Proceedings of the International Conference on on Language Resources and Evaluation (LREC), Lisbon, Portugal.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Retrieving collocations from text: Xtract",
"authors": [
{
"first": "F",
"middle": [],
"last": "Smadja",
"suffix": ""
}
],
"year": 1993,
"venue": "Computational Linguistics",
"volume": "19",
"issue": "1",
"pages": "143--177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "F. Smadja. 1993. Retrieving collocations from text: Xtract. Computational Linguistics, 19(1):143-177.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Determining termhood for learning domain ontologies in a probabilistic framework",
"authors": [
{
"first": "W",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Bennamoun",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 6th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Wong, W. Liu, and M. Bennamoun. 2007a. Deter- mining termhood for learning domain ontologies in a probabilistic framework. In Proceedings of the 6th",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Australasian Conference on Data Mining (AusDM)",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Australasian Conference on Data Mining (AusDM), Gold Coast.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Determining the unithood of word sequences using mutual information and independence measure",
"authors": [
{
"first": "W",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Bennamoun",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 10th Conference of the Pacific Association for Computational Linguistics (PACLING)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Wong, W. Liu, and M. Bennamoun. 2007b. Deter- mining the unithood of word sequences using mutual information and independence measure. In Proceed- ings of the 10th Conference of the Pacific Associa- tion for Computational Linguistics (PACLING), Mel- bourne, Australia.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Practical approach to knowledgebased question answering with natural language understanding and advanced reasoning",
"authors": [
{
"first": "W",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "W. Wong. 2005. Practical approach to knowledge- based question answering with natural language un- derstanding and advanced reasoning. Master's thesis, National Technical University College of Malaysia, arXiv:cs.CL/0707.3559.",
"links": null
}
},
"ref_entries": {}
}
}