{
"paper_id": "I08-1002",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:41:34.025408Z"
},
"title": "An Empirical Comparison of Goodness Measures for Unsupervised Chinese Word Segmentation with a Unified Framework",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": "",
"affiliation": {},
"email": "haizhao@cityu.edu.hk"
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": "",
"affiliation": {},
"email": "ctckit@cityu.edu.hk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper reports our empirical evaluation and comparison of several popular goodness measures for unsupervised segmentation of Chinese texts using Bakeoff-3 data sets with a unified framework. Assuming no prior knowledge about Chinese, this framework relies on a goodness measure to identify word candidates from unlabeled texts and then applies a generalized decoding algorithm to find the optimal segmentation of a sentence into such candidates with the greatest sum of goodness scores. Experiments show that description length gain outperforms other measures because of its strength for identifying short words. Further performance improvement is also reported, achieved by proper candidate pruning and by assemble segmentation to integrate the strengths of individual measures.",
"pdf_parse": {
"paper_id": "I08-1002",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper reports our empirical evaluation and comparison of several popular goodness measures for unsupervised segmentation of Chinese texts using Bakeoff-3 data sets with a unified framework. Assuming no prior knowledge about Chinese, this framework relies on a goodness measure to identify word candidates from unlabeled texts and then applies a generalized decoding algorithm to find the optimal segmentation of a sentence into such candidates with the greatest sum of goodness scores. Experiments show that description length gain outperforms other measures because of its strength for identifying short words. Further performance improvement is also reported, achieved by proper candidate pruning and by assemble segmentation to integrate the strengths of individual measures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Unsupervised Chinese word segmentation was explored in a number of previous works for various purposes and by various methods (Ge et al., 1999; Fu and Wang, 1999; Peng and Schuurmans, 2001; SUN et al., 2004; Jin and Tanaka-Ishii, 2006) . However, various heuristic rules are often involved in most existing works, and there has not been a comprehensive comparison of their performance in a unified way with available large-scale \"gold standard\" data sets, especially, multi-standard ones since Bakeoff-1 1 .",
"cite_spans": [
{
"start": 126,
"end": 143,
"text": "(Ge et al., 1999;",
"ref_id": "BIBREF5"
},
{
"start": 144,
"end": 162,
"text": "Fu and Wang, 1999;",
"ref_id": "BIBREF4"
},
{
"start": 163,
"end": 189,
"text": "Peng and Schuurmans, 2001;",
"ref_id": "BIBREF14"
},
{
"start": 190,
"end": 207,
"text": "SUN et al., 2004;",
"ref_id": "BIBREF19"
},
{
"start": 208,
"end": 235,
"text": "Jin and Tanaka-Ishii, 2006)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper we will propose a unified framework for unsupervised segmentation of Chinese text. Four existing approaches to unsupervised segmentations or word extraction are considered as its special cases, each with its own goodness measurement to quantify word likelihood. The output by each approach will be evaluated using benchmark data sets of Bakeoff-3 2 (Levow, 2006) . Note that unsupervised segmentation is different from, if not more complex than, word extraction, in that the former must carry out the segmentation task for a text, for which a segmentation (decoding) algorithm is indispensable, whereas the latter only acquires a word candidate list as output (Chang and Su, 1997; Zhang et al., 2000) .",
"cite_spans": [
{
"start": 363,
"end": 376,
"text": "(Levow, 2006)",
"ref_id": "BIBREF11"
},
{
"start": 674,
"end": 694,
"text": "(Chang and Su, 1997;",
"ref_id": "BIBREF1"
},
{
"start": 695,
"end": 714,
"text": "Zhang et al., 2000)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We propose a generalized framework to unify the existing methods for unsupervised segmentation, assuming the availability of a list of word candidates each associated with a goodness for how likely it is to be a true word. Let W = {{w i , g(w i )} i=1,...,n } be such a list, where w i is a word candidate and g(w i ) its goodness function.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Framework",
"sec_num": "2"
},
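For concreteness, such a candidate list W can be represented as a mapping from candidate strings to goodness scores. The following is a minimal sketch under that assumption; the variable names and example entries are illustrative, not taken from the paper.

```python
# Illustrative candidate list W = {(w_i, g(w_i))}: each key is a word
# candidate, each value its goodness score under some measure g.
W = {
    "中国": 3.2,  # a two-character candidate (score is made up)
    "中": 0.7,    # single-character candidates
    "国": 0.9,
}

def g(w, default=0.0):
    """Goodness lookup; unseen single characters fall back to a default score."""
    return W.get(w, default if len(w) == 1 else float("-inf"))
```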
{
"text": "Two generalized decoding algorithms, (1) and (2), are formulated for optimal segmentation of a given plain text. The first one, decoding algorithm (1), is a Viterbi-style one to search for the best segmentation S * for a text T , as follows,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Framework",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S * = argmax w 1 \u2022\u2022\u2022w i \u2022\u2022\u2022wn = T n i=1 g(w i ),",
"eq_num": "(1)"
}
],
"section": "Generalized Framework",
"sec_num": "2"
},
{
"text": "with all {w i , g(w i )} \u2208 W . Another algorithm, decoding algorithm (2), is a maximal-matching one with respect to a goodness score. It works on T to output the best current word w * repeatedly with T =t * for the next round as follows, {w * , t * } = argmax",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Framework",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "wt = T g(w)",
"eq_num": "(2)"
}
],
"section": "Generalized Framework",
"sec_num": "2"
},
{
"text": "with each {w, g(w)} \u2208 W . This algorithm will back off to forward maximal matching algorithm if the goodness function is set to word length. Thus the former may be regarded as a generalization of the latter. Symmetrically, it has an inverse version that works the other way around.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Generalized Framework",
"sec_num": "2"
},
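A minimal sketch of the two decoding algorithms just described, assuming the candidate list is stored as a Python dict from candidate string to goodness score. The MAX_WORD_LEN cap, the function names, and the default-score handling for unseen single characters are our own illustrative assumptions, not part of the paper.

```python
MAX_WORD_LEN = 7  # maximal candidate length considered, as in the experiments

def viterbi_segment(text, goodness, default=0.0):
    """Decoding algorithm (1): segmentation maximizing the sum of goodness scores."""
    n = len(text)
    best = [float("-inf")] * (n + 1)  # best[i] = best total score of text[:i]
    best[0] = 0.0
    back = [0] * (n + 1)              # back[i] = start index of the last word in text[:i]
    for i in range(1, n + 1):
        for j in range(max(0, i - MAX_WORD_LEN), i):
            w = text[j:i]
            score = best[j] + goodness.get(w, default if len(w) == 1 else float("-inf"))
            if score > best[i]:
                best[i], back[i] = score, j
    words, i = [], n                  # recover the segmentation from the back pointers
    while i > 0:
        words.append(text[back[i]:i])
        i = back[i]
    return list(reversed(words))

def greedy_segment(text, goodness, default=0.0):
    """Decoding algorithm (2): repeatedly cut off the best-scoring prefix."""
    words = []
    while text:
        prefixes = [text[:k] for k in range(1, min(MAX_WORD_LEN, len(text)) + 1)]
        w = max(prefixes,
                key=lambda p: goodness.get(p, default if len(p) == 1 else float("-inf")))
        words.append(w)
        text = text[len(w):]
    return words
```

If every in-vocabulary candidate is scored by its length, greedy_segment reduces to forward maximal matching, which is the back-off behaviour noted above; the inverse version runs the same procedure from the end of the text.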
{
"text": "An unsupervised segmentation strategy has to rest on some predefined criterion, e.g., mutual information (MI), in order to recognize a substring in the text as a word. Sproat and Shih (1990) is an early investigation in this direction. In this study, we examine four types of goodness measurement for a candidate substring 3 . In principle, the higher goodness score for a candidate, the more possible it is to be a true word. Frequency of Substring with Reduction A linear algorithm was proposed in (L\u00fc et al., 2004) to produce a list of such reduced substrings for a given corpus. The basic idea is that if two partially overlapped n-grams have the same frequency in the input corpus, then the shorter one is discarded as a redundant word candidate. We take the logarithm of FSR 3 Although there have been many existing works in this direction (Lua and Gan, 1994; Chien, 1997; Sun et al., 1998; Zhang et al., 2000; SUN et al., 2004) , we have to skip the details of comparing MI due to the length limitation of this paper. However, our experiments with MI provide no evidence against the conclusions in this paper.",
"cite_spans": [
{
"start": 168,
"end": 190,
"text": "Sproat and Shih (1990)",
"ref_id": "BIBREF17"
},
{
"start": 500,
"end": 517,
"text": "(L\u00fc et al., 2004)",
"ref_id": "BIBREF12"
},
{
"start": 781,
"end": 782,
"text": "3",
"ref_id": null
},
{
"start": 846,
"end": 865,
"text": "(Lua and Gan, 1994;",
"ref_id": "BIBREF13"
},
{
"start": 866,
"end": 878,
"text": "Chien, 1997;",
"ref_id": "BIBREF2"
},
{
"start": 879,
"end": 896,
"text": "Sun et al., 1998;",
"ref_id": "BIBREF18"
},
{
"start": 897,
"end": 916,
"text": "Zhang et al., 2000;",
"ref_id": "BIBREF22"
},
{
"start": 917,
"end": 934,
"text": "SUN et al., 2004)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Goodness Measurement",
"sec_num": "3"
},
{
"text": "as the goodness for a word candidate, i.e.,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Goodness Measurement",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "g F SR (w) = log(p(w))",
"eq_num": "(3)"
}
],
"section": "Goodness Measurement",
"sec_num": "3"
},
{
"text": "wherep(w) is w's frequency in the corpus. This allows the arithmetic addition in (1). According to Zipf's Law (Zipf, 1949) , it approximates the use of the rank of w as its goodness, which would give it some statistical significance. For the sake of efficiency, only those substrings that occur more than once are considered qualified word candidates. Description Length Gain (DLG) The goodness measure is proposed in (Kit and Wilks, 1999) for compression-based unsupervised segmentation. The DLG from extracting all occurrences of",
"cite_spans": [
{
"start": 99,
"end": 122,
"text": "Zipf's Law (Zipf, 1949)",
"ref_id": null
},
{
"start": 418,
"end": 439,
"text": "(Kit and Wilks, 1999)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Goodness Measurement",
"sec_num": "3"
},
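Before DLG is formalized below, here is a minimal sketch of the FSR goodness just described: substring counting, a simple (quadratic, not linear-time) rendering of the reduction rule of (L\u00fc et al., 2004), and log relative frequency as the score. All names are illustrative, and only substrings occurring more than once are kept, as stated above.

```python
import math
from collections import Counter

def fsr_goodness(corpus, max_len=7):
    """g_FSR(w) = log p(w) over reduced substrings occurring more than once."""
    counts = Counter(corpus[i:i + k]
                     for k in range(1, max_len + 1)
                     for i in range(len(corpus) - k + 1))
    counts = {w: c for w, c in counts.items() if c > 1}
    # reduction: a substring is redundant if some one-character extension of it
    # occurs with exactly the same frequency
    reduced = {w: c for w, c in counts.items()
               if not any(w in v and counts[v] == c
                          for v in counts if len(v) == len(w) + 1)}
    total = len(corpus)
    return {w: math.log(c / total) for w, c in reduced.items()}
```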
{
"text": "x i x i+1 ...x j (also denoted as x i..j ) from a corpus X= x 1 x 2 ...x n as a word is defined as DLG(x i..j ) = L(X) \u2212 L(X[r \u2192 x i..j ] \u2295 x i..j ) (4) where X[r \u2192 x i..j ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Goodness Measurement",
"sec_num": "3"
},
{
"text": "represents the resultant corpus from replacing all instances of x i..j with a new symbol r throughout X and \u2295 denotes the concatenation of two substrings. L(\u2022) is the empirical description length of a corpus in bits that can be estimated by the Shannon-Fano code or Huffman code as below, following classic information theory (Shannon, 1948) .",
"cite_spans": [
{
"start": 326,
"end": 341,
"text": "(Shannon, 1948)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Goodness Measurement",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(X) . = \u2212|X| x\u2208Vp (x)log 2p (x)",
"eq_num": "(5)"
}
],
"section": "Goodness Measurement",
"sec_num": "3"
},
{
"text": "where | \u2022 | denotes string length, V is the character vocabulary of X andp(x) x's frequency in X. For a given word candidate w, we define g DLG (w) = DLG(w). In principle, a substring with a negative DLG do not bring any positive compression effect by itself. Thus only substrings with a positive DLG value are added into our word candidate list. Accessor Variety (AV) Feng et al. (2004) propose AV as a statistical criterion to measure how likely a substring is a word. It is reported to handle lowfrequent words particularly well. The AV of a substring",
"cite_spans": [
{
"start": 369,
"end": 387,
"text": "Feng et al. (2004)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Goodness Measurement",
"sec_num": "3"
},
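Before AV is defined formally, here is a minimal sketch of the DLG score of equations (4) and (5), treating the corpus as one character string and recomputing the empirical description length after extracting a candidate. It is illustrative only: the bookkeeping for the concatenated lexicon entry is simplified, and no attempt is made at the efficient computation needed in practice.

```python
import math
from collections import Counter

def description_length(tokens):
    """Empirical description length L(X) in bits, as in equation (5)."""
    counts = Counter(tokens)
    total = len(tokens)
    return -sum(c * math.log2(c / total) for c in counts.values())

def dlg(corpus, w):
    """Description length gain of extracting all occurrences of w, equation (4)."""
    base = description_length(list(corpus))
    new_symbol = "\u0000"                    # stands for the index symbol r
    reduced = corpus.replace(w, new_symbol)  # X[r -> w]
    modified = list(reduced) + list(w)       # simplified form of X[r -> w] (+) w
    return base - description_length(modified)
```

Only candidates with dlg(corpus, w) > 0 would enter the word candidate list, matching the filtering described above.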
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x i..j is defined as AV (x i..j ) = min{L av (x i..j ), R av (x i..j )}",
"eq_num": "(6)"
}
],
"section": "Goodness Measurement",
"sec_num": "3"
},
{
"text": "where the left and right accessor variety L av (x i..j ) and R av (x i..j ) are, respectively, the number of distinct predecessor and successor characters. For a similar reason as to FSR, the logarithm of AV is used as goodness measure, and only substrings with AV > 1 are considered word candidates. That is, we have g AV (w) = logAV (w) for a word candidate w. Boundary Entropy (Branching Entropy, BE) It is proposed as a criterion for unsupervised segmentation in some existing works (Tung and Lee, 1994; Chang and Su, 1997; Huang and Powers, 2003; Jin and Tanaka-Ishii, 2006) . The local entropy for a given",
"cite_spans": [
{
"start": 487,
"end": 507,
"text": "(Tung and Lee, 1994;",
"ref_id": "BIBREF21"
},
{
"start": 508,
"end": 527,
"text": "Chang and Su, 1997;",
"ref_id": "BIBREF1"
},
{
"start": 528,
"end": 551,
"text": "Huang and Powers, 2003;",
"ref_id": "BIBREF7"
},
{
"start": 552,
"end": 579,
"text": "Jin and Tanaka-Ishii, 2006)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Goodness Measurement",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "x i..j , defined as h(x i..j ) = \u2212 x\u2208V p(x|x i..j )log p(x|x i..j ),",
"eq_num": "(7)"
}
],
"section": "Goodness Measurement",
"sec_num": "3"
},
{
"text": "indicates the average uncertainty after (or before) x_{i..j} in the text, where p(x|x_{i..j}) is the co-occurrence probability of x and x_{i..j}. Two types of h(x_{i..j}), namely h_L(x_{i..j}) and h_R(x_{i..j})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Goodness Measurement",
"sec_num": "3"
},
{
"text": ", can be defined for the two directions to extend x i..j (Tung and Lee, 1994) . Also, we can define h min = min{h R , h L } in a similar way as in (6). In this study, only substrings with BE > 0 are considered word candidates. For a candidate w, we have g BE (w) = h min (w) 4 .",
"cite_spans": [
{
"start": 57,
"end": 77,
"text": "(Tung and Lee, 1994)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Goodness Measurement",
"sec_num": "3"
},
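A minimal sketch of the AV and BE goodness scores, assuming that for every substring we have collected the counts of its immediate left and right neighbouring characters in the corpus. The names and the profile-collection routine are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import defaultdict, Counter

def context_profiles(corpus, max_len=7):
    """Count the left and right neighbouring characters of every substring."""
    left, right = defaultdict(Counter), defaultdict(Counter)
    n = len(corpus)
    for k in range(1, max_len + 1):
        for i in range(n - k + 1):
            w = corpus[i:i + k]
            if i > 0:
                left[w][corpus[i - 1]] += 1
            if i + k < n:
                right[w][corpus[i + k]] += 1
    return left, right

def entropy(counter):
    total = sum(counter.values())
    return -sum(c / total * math.log(c / total) for c in counter.values()) if total else 0.0

def av_goodness(w, left, right):
    """g_AV(w) = log min(L_av, R_av), cf. equation (6); None if AV <= 1."""
    av = min(len(left[w]), len(right[w]))   # numbers of distinct neighbours
    return math.log(av) if av > 1 else None

def be_goodness(w, left, right):
    """g_BE(w) = min(h_L, h_R), cf. equation (7); None if BE = 0."""
    h = min(entropy(left[w]), entropy(right[w]))
    return h if h > 0 else None
```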
{
"text": "The evaluation is conducted with all four corpora from Bakeoff-3 (Levow, 2006) , as summarized in Table 1 with corpus size in number of characters. For unsupervised segmentation, the annotation in the training corpora is not used. Instead, they are used for our evaluation, for they are large and thus provide more reliable statistics than small ones. Segmentation performance is evaluated by word Fmeasure F = 2RP/(R + P ). The recall R and precision P are, respectively, the proportions of the correctly segmented words to all words in the goldstandard and a segmenter's output 5 . Note that a decoding algorithm always requires the goodness score of a single-character candidate 4 Both AV and BE share a similar idea from Harris (1970): If the uncertainty of successive token increases, then it is likely to be at a boundary. In this sense, one may consider them the discrete and continuous formulation of the same idea.",
"cite_spans": [
{
"start": 65,
"end": 78,
"text": "(Levow, 2006)",
"ref_id": "BIBREF11"
},
{
"start": 682,
"end": 683,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 98,
"end": 105,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "5 All evaluations will be represented in terms of word F-measure if not otherwise specified. A standard scoring tool with this metric can be found in SIGHAN website, http://www.sighan.org/bakeoff2003/score. However, to compare with related work, we will also adopt boundary F-measure",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": "F b = 2R b P b /(R b + P b )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
{
"text": ", where the boundary recall R b and boundary precision P b are, respectively, the proportions of the correctly recognized boundaries to all boundaries in the goldstandard and a segmenter's output (Ando and Lee, 2000). for computation. There are two ways to get this score: (1) computed by the goodness measure, which is applicable only if the measure allows; (2) set to zero as default value, which is always applicable even to single-character candidates not in the word candidate list in use. For example, all singlecharacter candidates given up by DLG because of their negative DLG scores will have a default value during decoding. We will use a '/d' to indicate experiments using such a default value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "4"
},
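A minimal sketch of the word F-measure used here: words are compared as character spans, so a word counts as correct only when both of its boundaries match the gold standard. This is illustrative code, not the SIGHAN scoring tool cited above.

```python
def to_spans(words):
    """Turn a word sequence into the set of (start, end) character spans it induces."""
    spans, pos = set(), 0
    for w in words:
        spans.add((pos, pos + len(w)))
        pos += len(w)
    return spans

def word_f_measure(gold_words, pred_words):
    gold, pred = to_spans(gold_words), to_spans(pred_words)
    correct = len(gold & pred)
    recall, precision = correct / len(gold), correct / len(pred)
    return 2 * recall * precision / (recall + precision) if correct else 0.0
```

For instance, word_f_measure(["ABC", "DE"], ["AB", "C", "DE"]) gives 0.4: only "DE" is recovered with both boundaries correct, so R = 1/2 and P = 1/3.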
{
"text": "We apply the decoding algorithm (1) to segment all Bakeoff-3 corpora with the above goodness measures. Both word candidates and goodness values are derived from the raw text of each training corpus. The performance of these measures is presented in Table 2 . From the table we can see that DLG and FSR have the strongest and the weakest performance, respectively, whereas AV and BE are highly comparable to each other.",
"cite_spans": [],
"ref_spans": [
{
"start": 249,
"end": 256,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Comparison",
"sec_num": "4.1"
},
{
"text": "Decoding algorithm (2) runs the forward and backward segmentation with the respective AV and BE criteria, i.e., L AV /h L for backward and R AV /h R forward, and the output is the union of two segmentations 6 . A performance comparison of AV and BE with both algorithms (1) and (2) is presented in Table 3 . We can see that the former has a rela- tively better performance on shorter words and the latter outperforms on longer ones. How segmentation performance varies along with word length is exemplified with DLG and BE as examples in Figure 1 , with (1) and (2) indicating a respective decoding algorithm in use. It shows that DLG outperforms on two-character words and BE on longer ones.",
"cite_spans": [],
"ref_spans": [
{
"start": 298,
"end": 305,
"text": "Table 3",
"ref_id": "TABREF2"
},
{
"start": 538,
"end": 546,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Comparison",
"sec_num": "4.1"
},
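The union of the forward and backward outputs can be sketched as follows: every cut point proposed by either run is kept, as explained in footnote 6. This assumes the backward pass has already been mapped back to the original character order; greedy_segment refers to the illustrative sketch given earlier, and the names here are likewise our own.

```python
def union_of_boundaries(text, forward_words, backward_words):
    """Merge two segmentations of the same text by taking all of their cut points."""
    def cut_points(words):
        cuts, pos = set(), 0
        for w in words:
            pos += len(w)
            cuts.add(pos)
        return cuts
    out, prev = [], 0
    for cut in sorted(cut_points(forward_words) | cut_points(backward_words)):
        out.append(text[prev:cut])
        prev = cut
    return out
```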
{
"text": "Up to now, word candidates are determined by the default goodness threshold 0. The number of them for each of the four goodness measures is presented in Table 4 . We can see that FSR generates the largest set of word candidates and DLG the smallest. More interestingly or even surprising, AV and BE generate exactly the same candidate list for all corpora.",
"cite_spans": [],
"ref_spans": [
{
"start": 153,
"end": 160,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Word Candidate Pruning",
"sec_num": "4.2"
},
{
"text": "In addition to word length, another crucial factor to affect segmentation performance is the quality of the word candidates as a whole. Since each candidate is associated with a goodness score to indicate how good it is, a straightforward way to ensure, and further enhance, the overall quality of a candidate set is to prune off those with low goodness scores. To examine how segmentation performance changes along with word candidate pruning and decide the optimal pruning rate, we conduct a series of experiments with each goodness measurements. Figures 2 and 3 present, as an illustration, the outcomes of two series of our experiments with DLG by decoding algorithm (1) and BE by decoding algorithm (1) and(2) on CityU training corpus. We find that appropriate pruning does lead to significant performance improvement and that both DLG and BE keep their superior performance respectively on two-character words and others. We also observe that each goodness measure has a stable and similar performance in a range of pruning rates around the optimal one, e.g., 79-62% around 70% in Figure 2 . The optimal pruning rates found through our experiments for the four goodness measures are given in Table 5 , and their correspondent segmentation performance in Table 6 . These results show a remarkable performance improvement beyond the de- F\u2212measure 100% size/(1) 38% size/(1) 32% size/(1) 19% size/(1) 10% size/(1) 100% size/(2) 27% size/(2) 19% size/(2) 16% size/(2) 13.5% size/(2) 11% size/(2) 4.5% size/(2) Figure 3 : Performance by candidate pruning: BE fault threshold setting. What remains unchanged is the advantage of DLG for two-character words and that of AV/BE for longer words. However, DLG achieves the best overall performance among the four, although it uses only single-and two-character word candidates. The overwhelming number of twocharacter words in Chinese allows it to triumph.",
"cite_spans": [],
"ref_spans": [
{
"start": 549,
"end": 565,
"text": "Figures 2 and 3",
"ref_id": "FIGREF1"
},
{
"start": 1088,
"end": 1096,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 1199,
"end": 1206,
"text": "Table 5",
"ref_id": "TABREF4"
},
{
"start": 1261,
"end": 1268,
"text": "Table 6",
"ref_id": "TABREF5"
},
{
"start": 1513,
"end": 1521,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Word Candidate Pruning",
"sec_num": "4.2"
},
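Candidate pruning as used above can be sketched as a simple rank-and-cut over goodness scores. Whether a reported rate refers to the kept or the removed fraction follows the figures; here keep_ratio is simply the fraction of candidates retained, an illustrative simplification.

```python
def prune_candidates(goodness, keep_ratio):
    """Keep only the highest-scoring fraction of the word candidates."""
    ranked = sorted(goodness.items(), key=lambda kv: kv[1], reverse=True)
    return dict(ranked[:max(1, int(len(ranked) * keep_ratio))])
```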
{
"text": "Although proper pruning of word candidates brings amazing performance improvement, it is unlikely for one to determine an optimal pruning rate in practice for an unlabeled corpus. Here we put forth a parameter-free method to tackle this problem with the aids of all available goodness measures. The first step of this method to do is to derive an optimal set of word candidates from the input. We have shown above that quality candidates play a critical role in achieving quality segmentation. Without any better goodness criterion available, the best we can opt for is the intersection of all word candidate lists generated by available goodness measures with the default threshold. A good reason for this is that the agreement of them can give a more reliable decision than any individual one of them. In fact, we only need DLG and AV/BE to get this intersection, because AV and BE give the same word candidates and DLG generates only a subset of what FSR does. The next step is to use this intersection set of word candidates to perform optimal segmentation with each goodness measures, to see if any further improvement can be achieved. The best results are given in Table 7 , showing that decoding algorithm (1) achieves marvelous improvement using short word candidates with all other goodness measures than DLG. Interestingly, DLG still remains at the top by performance despite of some slip-back.",
"cite_spans": [],
"ref_spans": [
{
"start": 1171,
"end": 1178,
"text": "Table 7",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Ensemble Segmentation",
"sec_num": "4.3"
},
{
"text": "To explore further improvement, we also try to combine the strengths of DLG and AV/BE respectively for recognizing two-and multi-character word. Our strategy to combine them together is to enforce the multi-character words in AV/BE segmentation upon the correspondent parts of DLG segmentation. This ensemble method gives a better overall performance than all others that we have tried so far, as presented at the bottom of Table 7 . Jin and Tanaka-Ishii (2006) give an unsupervised segmentation criterion, henceforth referred to as decoding algorithm (3), to work with BE. It works as follows: if g(x i..j+1 ) > g(x i..j ) for any two overlapped substrings x i..j and x i..j+1 , then a segmenting point should be located right after x i..j+1 . This algorithm has a forward and a backward version. The union of the segmentation outputs by both versions is taken as the final output of the algorithm, in exactly the same way as how decoding algorithm (2) works 7 . This algorithm is evaluated in (Jin and Tanaka-Ishii, 2006) using Peking University (PKU) (Jin and Tanaka-Ishii, 2006) report their best result of boundary precision 0.88 and boundary recall 0.79, equal to boundary F-measure 0.833.",
"cite_spans": [
{
"start": 434,
"end": 461,
"text": "Jin and Tanaka-Ishii (2006)",
"ref_id": "BIBREF9"
},
{
"start": 995,
"end": 1023,
"text": "(Jin and Tanaka-Ishii, 2006)",
"ref_id": "BIBREF9"
},
{
"start": 1054,
"end": 1082,
"text": "(Jin and Tanaka-Ishii, 2006)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 424,
"end": 431,
"text": "Table 7",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Ensemble Segmentation",
"sec_num": "4.3"
},
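A minimal sketch of the ensemble strategy described at the start of this subsection: multi-character words from the AV/BE segmentation are enforced onto the DLG segmentation, DLG words that do not overlap them are kept, and any characters of DLG words broken by an enforced word fall back to single characters. The names and the single-character fallback are our own illustrative choices, not the authors' exact procedure.

```python
def ensemble(text, dlg_words, avbe_words):
    """Overlay AV/BE multi-character words onto the DLG segmentation of the same text."""
    forced, pos = [], 0                       # spans of multi-character AV/BE words
    for w in avbe_words:
        if len(w) > 1:
            forced.append((pos, pos + len(w)))
        pos += len(w)
    covered = {i for s, e in forced for i in range(s, e)}
    kept, pos = [], 0                         # DLG words not overlapping a forced span
    for w in dlg_words:
        span = (pos, pos + len(w))
        if not any(i in covered for i in range(*span)):
            kept.append(span)
        pos += len(w)
    out, prev = [], 0                         # merge both kinds of spans in text order
    for s, e in sorted(kept + forced):
        out.extend(text[prev:s])              # leftover characters, one by one
        out.append(text[s:e])
        prev = e
    out.extend(text[prev:])
    return out
```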
{
"text": "Corpus of 1.1M words 8 as gold standard with a word candidate list extracted from the 200M Contemporary Chinese Corpus that mostly consists of several years of Peoples' Daily 9 . Here, we carry out evaluation with similar data: we extract word candidates from the unlabeled texts of People's Daily (1993 -1997) , of 213M and about 100M characters, in terms of the AV and BE criteria, yielding a list of 4.42 million candidates up to 6-character long 10 for each criterion. Then, the evaluation of the three decoding algorithms is performed on PKU corpus.",
"cite_spans": [
{
"start": 298,
"end": 310,
"text": "(1993 -1997)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Yet Another Decoding Algorithm",
"sec_num": "4.4"
},
{
"text": "The evaluation results with both word and boundary F-measure are presented for the same segmentation outputs in Table 8 , with \"*\" to indicate candidate pruning by DLG > 0 as reported before. Note that boundary F-measure gives much more higher score than word F-measure for the same segmentation output. However, in either of metric, we can find no evidence in favor of decoding algorithm (3). Undesirably, this algorithm does not guarantee a stable performance improvement with the BE measure through candidate pruning.",
"cite_spans": [],
"ref_spans": [
{
"start": 112,
"end": 119,
"text": "Table 8",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Yet Another Decoding Algorithm",
"sec_num": "4.4"
},
{
"text": "Huang and provide empirical evidence to estimate the degree to which the four segmentation standards involved in the Bakeoff-3 differ from each other. As quoted in Table 9 , a consistency rate Table 9 : Consistency rate among Bakeoff-3 segmentation standards (Huang and Zhao, 2007) beyond 84.8% is found among the four standards. If we do not over-expect unsupervised segmentation to achieve beyond what these standards agree with each other, it is reasonable to take this figure as the topline for evaluation. On the other hand, Zhao et al. (2006) show that the words of 1 to 2 characters long account for 95% of all words in Chinese texts, and single-character words alone for about 50%. Thus, we can take the result of the brute-force guess of every single character as a word as a baseline.",
"cite_spans": [
{
"start": 259,
"end": 281,
"text": "(Huang and Zhao, 2007)",
"ref_id": "BIBREF8"
},
{
"start": 530,
"end": 548,
"text": "Zhao et al. (2006)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 164,
"end": 171,
"text": "Table 9",
"ref_id": null
},
{
"start": 193,
"end": 200,
"text": "Table 9",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison against Supervised Segmentation",
"sec_num": "4.5"
},
{
"text": "To compare to supervised segmentation, which usually involves training using an annotated training corpus and, then, evaluation using test corpus, we carry out unsupervised segmentation in a comparable manner. For each data track, we first extract word candidates from both the training and test corpora, all unannotated, and then evaluate the unsupervised segmentation with reference to the goldstandard segmentation of the test corpus. The results are presented in Table 10 , together with best and worst official results of the Bakeoff closed test. This comparison shows that unsupervised segmentation cannot compete against supervised segmentation in terms of performance. However, the experiments generate positive results that the best combination of the four goodness measures can achieve an F-measure in the range of 0.65-0.7 on all test corpora in use without using any prior knowledge, but extracting word candidates from the unlabeled training and test corpora in terms of their goodness scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 467,
"end": 475,
"text": "Table 10",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Comparison against Supervised Segmentation",
"sec_num": "4.5"
},
{
"text": "Note that DLG criterion is to perform segmentation with the intension to maximize the compression effect, which is a global effect through the text. Thus it works well incorporated with a probability maximization framework, where high frequent but independent substrings are effectively extracted and re- combined. We know that most unsupervised segmentation criteria will bring up long word bias problem, so does DLG measure. This explains why it gives the worse results as long candidates are added. As for AV and BE measures, both of them give the metric of the uncertainty before or after the current substring. This means that they are more concerned with local uncertainty information near the current substring, instead of global information among the whole text as DLG. Thus local greedy search in maximal matching style is more suitable for these two measures than Viterbi search.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion: How Things Happen",
"sec_num": "5"
},
{
"text": "Our empirical results about word candidate list with default threshold 0, where the same list is from AV and BE, give another proof that both AV and BE reflect the same uncertainty. The only difference is behind the fact that the former and the latter is in the discrete and continuous formulation, respectively.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion: How Things Happen",
"sec_num": "5"
},
{
"text": "This paper reported our empirical comparison of a number of goodness measures for unsupervised segmentation of Chinese texts with the aid two generalized decoding algorithms. We learn no previous work by others for a similar attempt. The comparison is carried out with Bakeoff-3 data sets, showing that all goodness measures exhibit their strengths for recognizing words of different lengths and achieve a performance far beyond the baseline. Among them, DLG with decoding algorithm (1) can achieve the best segmentation performance for single-and twocharacter words identification and the best overall performance as well. Our experiments also show that the quality of word candidates plays a critical role in ensuring segmentation performance 11 . Proper pruning of candidates with low goodness scores to enhance this quality enhances the segmentation performance significantly. Also, the success of unsupervised segmentation depends strongly on an appropriate decoding algorithm. Generally, Viterbi-style decoding produces better results than best-first maximal-matching. But the latter is not shy from exhibiting its particular strength for identifying multi-character words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "Finally, the ensemble segmentation we put forth to combine the strengths of different goodness measures proves to be a remarkable success. It achieves an impressive performance improvement on top of individual goodness measures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "As for future work, it would be natural for researchers to enhance supervised learning for Chinese word segmentation with goodness measures introduced here. There does be two successful examples in our existing work (Zhao and Kit, 2007) . This is still an ongoing work.",
"cite_spans": [
{
"start": 216,
"end": 236,
"text": "(Zhao and Kit, 2007)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "6"
},
{
"text": "First International Chinese Word Segmentation Bakeoff, at http://www.sighan.org/bakeoff2003 2 The Third International Chinese Language Processing Bakeoff, at http://www.sighan.org/bakeoff2006.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "That is, all segmented points by either segmentation will be accounted into the final segmentation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Three segmentation criteria are given in(Jin and Tanaka- Ishii, 2006), among which the entropy increase criterion, namely, decoding algorithm (3), proves to be the best. Here we would like to thank JIN Zhihui and Prof. Kumiko Tanaka-Ishii for presenting the details of their algorithms.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://icl.pku.edu.cn/icl groups/corpus/dwldform1.asp 9 http://ccl.pku.edu.cn:8080/ccl corpus/jsearch/index.jsp 10 This is to keep consistence with(Jin and Tanaka-Ishii, 2006), where 6 is set as the maximum n-gram length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "This observation is shared by other researchers, e.g.,(Peng et al., 2002).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Mostlyunsupervised statistical segmentation of Japanese: Applications to kanji",
"authors": [
{
"first": "Rie",
"middle": [],
"last": "Kubota Ando",
"suffix": ""
},
{
"first": "Lillian",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the first Conference on North American Chapter of the Association for Computational Linguistics and the 6th Conference on Applied Natural Language Processing",
"volume": "",
"issue": "",
"pages": "241--248",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rie Kubota Ando and Lillian Lee. 2000. Mostly- unsupervised statistical segmentation of Japanese: Ap- plications to kanji. In Proceedings of the first Confer- ence on North American Chapter of the Association for Computational Linguistics and the 6th Conference on Applied Natural Language Processing, pages 241- 248, Seattle, Washington, April 30.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "An unsupervised iterative method for Chinese new lexicon extraction",
"authors": [
{
"first": "Jing-Shin",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Keh-Yih",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics and Chinese Language Processing",
"volume": "2",
"issue": "",
"pages": "97--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing-Shin Chang and Keh-Yih Su. 1997. An unsuper- vised iterative method for Chinese new lexicon ex- traction. Computational Linguistics and Chinese Lan- guage Processing, 2(2):97-148.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "PAT-tree-based keyword extraction for Chinese information retrieval",
"authors": [
{
"first": "Lee-Feng",
"middle": [],
"last": "Chien",
"suffix": ""
}
],
"year": 1997,
"venue": "Proceedings of the 20th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "50--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lee-Feng Chien. 1997. PAT-tree-based keyword extrac- tion for Chinese information retrieval. In Proceedings of the 20th Annual International ACM SIGIR Confer- ence on Research and Development in Information Re- trieval, pages 50-58, Philadelphia.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Accessor variety criteria for Chinese word extraction",
"authors": [
{
"first": "Haodi",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaotie",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Weimin",
"middle": [],
"last": "Zheng",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "30",
"issue": "1",
"pages": "75--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haodi Feng, Kang Chen, Xiaotie Deng, and Weimin Zheng. 2004. Accessor variety criteria for Chi- nese word extraction. Computational Linguistics, 30(1):75-93.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Unsupervised Chinese word segmentation and unknown word identification",
"authors": [
{
"first": "Hong",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Xiao-Long",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 1999,
"venue": "5th Natural Language Processing Pacific Rim Symposium 1999 (NLPRS'99), \"Closing the Millennium",
"volume": "",
"issue": "",
"pages": "32--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guo-Hong Fu and Xiao-Long Wang. 1999. Unsu- pervised Chinese word segmentation and unknown word identification. In 5th Natural Language Process- ing Pacific Rim Symposium 1999 (NLPRS'99), \"Clos- ing the Millennium\", pages 32-37, Beijing, China, November 5-7.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Discovering Chinese words from unsegmented text",
"authors": [
{
"first": "Xianping",
"middle": [],
"last": "Ge",
"suffix": ""
},
{
"first": "Wanda",
"middle": [],
"last": "Pratt",
"suffix": ""
},
{
"first": "Padhraic",
"middle": [],
"last": "Smyth",
"suffix": ""
}
],
"year": 1999,
"venue": "SIGIR '99: Proceedings of the 22nd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "271--272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xianping Ge, Wanda Pratt, and Padhraic Smyth. 1999. Discovering Chinese words from unsegmented text. In SIGIR '99: Proceedings of the 22nd Annual Interna- tional ACM SIGIR Conference on Research and De- velopment in Information Retrieval, pages 271-272, Berkeley, CA, USA, August 15-19. ACM.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Morpheme boundaries within words",
"authors": [
{
"first": "Harris",
"middle": [],
"last": "Zellig Sabbetai",
"suffix": ""
}
],
"year": 1970,
"venue": "Papers in Structural and Transformational Linguistics",
"volume": "",
"issue": "",
"pages": "68--77",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig Sabbetai Harris. 1970. Morpheme boundaries within words. In Papers in Structural and Transfor- mational Linguistics, page 68 77.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Chinese word segmentation based on contextual entropy",
"authors": [
{
"first": "Jin",
"middle": [
"Hu"
],
"last": "Huang",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Powers",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 17th Asian Pacific Conference on Language, Information and Computation",
"volume": "",
"issue": "",
"pages": "1--3",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jin Hu Huang and David Powers. 2003. Chinese word segmentation based on contextual entropy. In Dong Hong Ji and Kim-Ten Lua, editors, Proceedings of the 17th Asian Pacific Conference on Language, In- formation and Computation, pages 152-158, Sentosa, Singapore, October, 1-3. COLIPS Publication.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Chinese word segmentation: A decade review",
"authors": [
{
"first": "Chang-Ning",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2007,
"venue": "Journal of Chinese Information Processing",
"volume": "21",
"issue": "3",
"pages": "8--20",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chang-Ning Huang and Hai Zhao. 2007. Chinese word segmentation: A decade review. Journal of Chinese Information Processing, 21(3):8-20.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Unsupervised segmentation of Chinese text by use of branching entropy",
"authors": [
{
"first": "Zhihui",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Kumiko",
"middle": [],
"last": "Tanaka-Ishii",
"suffix": ""
}
],
"year": 2006,
"venue": "COLING/ACL 2006",
"volume": "",
"issue": "",
"pages": "428--435",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhihui Jin and Kumiko Tanaka-Ishii. 2006. Unsuper- vised segmentation of Chinese text by use of branch- ing entropy. In COLING/ACL 2006, pages 428-435, Sidney, Australia.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised learning of word boundary with description length gain",
"authors": [
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
},
{
"first": "Yorick",
"middle": [],
"last": "Wilks",
"suffix": ""
}
],
"year": 1999,
"venue": "CoNLL-99",
"volume": "",
"issue": "",
"pages": "1--6",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chunyu Kit and Yorick Wilks. 1999. Unsupervised learning of word boundary with description length gain. In M. Osborne and E. T. K. Sang, editors, CoNLL-99, pages 1-6, Bergen, Norway.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The third international Chinese language processing bakeoff: Word segmentation and named entity recognition",
"authors": [
{
"first": "Gina-Anne",
"middle": [],
"last": "Levow",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Fifth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "108--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gina-Anne Levow. 2006. The third international Chi- nese language processing bakeoff: Word segmentation and named entity recognition. In Proceedings of the Fifth SIGHAN Workshop on Chinese Language Pro- cessing, pages 108-117, Sydney, Australia, July.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Statistical substring reduction in linear time",
"authors": [
{
"first": "Xueqiang",
"middle": [],
"last": "L\u00fc",
"suffix": ""
},
{
"first": "Le",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Junfeng",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceeding of the 1st International Joint Conference on Natural Language Processing",
"volume": "3248",
"issue": "",
"pages": "320--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xueqiang L\u00fc, Le Zhang, and Junfeng Hu. 2004. Sta- tistical substring reduction in linear time. In Keh- Yih Su et al., editor, Proceeding of the 1st Interna- tional Joint Conference on Natural Language Process- ing (IJCNLP-2004), volume 3248 of Lecture Notes in Computer Science, pages 320-327, Sanya City, Hainan Island, China, March 22-24. Springer.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "An application of information theory in Chinese word segmentation",
"authors": [
{
"first": "Kim-Teng",
"middle": [],
"last": "Lua",
"suffix": ""
},
{
"first": "Kok-Wee",
"middle": [],
"last": "Gan",
"suffix": ""
}
],
"year": 1994,
"venue": "Computer Processing of Chinese and Oriental Languages",
"volume": "8",
"issue": "1",
"pages": "115--123",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kim-Teng Lua and Kok-Wee Gan. 1994. An applica- tion of information theory in Chinese word segmenta- tion. Computer Processing of Chinese and Oriental Languages, 8(1):115-123.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Selfsupervised Chinese word segmentation",
"authors": [
{
"first": "Fuchun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Dale",
"middle": [],
"last": "Schuurmans",
"suffix": ""
}
],
"year": 2001,
"venue": "The Fourth International Symposium on Intelligent Data Analysis",
"volume": "",
"issue": "",
"pages": "13--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fuchun Peng and Dale Schuurmans. 2001. Self- supervised Chinese word segmentation. In The Fourth International Symposium on Intelligent Data Analysis, pages 238-247, Lisbon, Portugal, September, 13-15.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Using selfsupervised word segmentation in Chinese information retrieval",
"authors": [
{
"first": "Fuchun",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Xiangji",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Dale",
"middle": [],
"last": "Schuurmans",
"suffix": ""
},
{
"first": "Nick",
"middle": [],
"last": "Cercone",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Robertson",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 25th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval",
"volume": "",
"issue": "",
"pages": "11--15",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fuchun Peng, Xiangji Huang, Dale Schuurmans, Nick Cercone, and Stephen Robertson. 2002. Using self- supervised word segmentation in Chinese information retrieval. In Proceedings of the 25th Annual Interna- tional ACM SIGIR Conference on Research and De- velopment in Information Retrieval, pages 349-350, Tampere, Finland, August, 11-15.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A mathematical theory of communication",
"authors": [
{
"first": "Claude",
"middle": [
"E"
],
"last": "Shannon",
"suffix": ""
}
],
"year": 1948,
"venue": "The Bell System Technical Journal",
"volume": "27",
"issue": "",
"pages": "623--656",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claude E. Shannon. 1948. A mathematical theory of communication. The Bell System Technical Journal, 27:379-423, 623-656, July, October.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A statistical method for finding word boundaries in Chinese text",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Sproat",
"suffix": ""
},
{
"first": "Chilin",
"middle": [],
"last": "Shih",
"suffix": ""
}
],
"year": 1990,
"venue": "Computer Processing of Chinese and Oriental Languages",
"volume": "4",
"issue": "4",
"pages": "336--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Sproat and Chilin Shih. 1990. A statistical method for finding word boundaries in Chinese text. Computer Processing of Chinese and Oriental Lan- guages, 4(4):336-351.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Chinese word segmentation without using lexicon and hand-crafted training data",
"authors": [
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Dayang",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"K"
],
"last": "Tsou",
"suffix": ""
}
],
"year": 1998,
"venue": "COLING-ACL '98, 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "1265--1271",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maosong Sun, Dayang Shen, and Benjamin K. Tsou. 1998. Chinese word segmentation without using lexi- con and hand-crafted training data. In COLING-ACL '98, 36th Annual Meeting of the Association for Com- putational Linguistics and 17th International Confer- ence on Computational Linguistics, volume 2, pages 1265-1271, Montreal, Quebec, Canada.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Chinese word segmentation without using dictionary based on unsupervised learning strategy",
"authors": [
{
"first": "Mao",
"middle": [],
"last": "Song",
"suffix": ""
},
{
"first": "Sun",
"middle": [],
"last": "Ming",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"K"
],
"last": "Tsou",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mao Song SUN, Ming XIAO, and Benjamin K. Tsou. 2004. Chinese word segmentation without using dic- tionary based on unsupervised learning strategy (in Chinese) (",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Identification of unknown words from corpus",
"authors": [
{
"first": "His-Jian",
"middle": [],
"last": "Cheng-Huang Tung",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lee",
"suffix": ""
}
],
"year": 1994,
"venue": "Computational Proceedings of Chinese and Oriental Languages",
"volume": "8",
"issue": "",
"pages": "131--145",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cheng-Huang Tung and His-Jian Lee. 1994. Iden- tification of unknown words from corpus. Compu- tational Proceedings of Chinese and Oriental Lan- guages, 8:131-145.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Extraction of Chinese compound words -an experimental study on a very large corpus",
"authors": [
{
"first": "Jian",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the Second Chinese Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "132--139",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jian Zhang, Jianfeng Gao, and Ming Zhou. 2000. Ex- traction of Chinese compound words -an experimen- tal study on a very large corpus. In Proceedings of the Second Chinese Language Processing Workshop, pages 132-139, Hong Kong, China.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Incorporating global information into supervised learning for Chinese word segmentation",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 10th Conference of the Pacific Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "66--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao and Chunyu Kit. 2007. Incorporating global information into supervised learning for Chinese word segmentation. In Proceedings of the 10th Conference of the Pacific Association for Computational Linguis- tics, pages 66-74, Melbourne, Australia, September 19-21.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Effective tag set selection in Chinese word segmentation via conditional random field modeling",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chang-Ning",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Bao-Liang",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 20th Asian Pacific Conference on Language, Information and Computation",
"volume": "",
"issue": "",
"pages": "87--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao, Chang-Ning Huang, Mu Li, and Bao-Liang Lu. 2006. Effective tag set selection in Chinese word segmentation via conditional random field modeling. In Proceedings of the 20th Asian Pacific Conference on Language, Information and Computation, pages 87- 94, Wuhan, China, November 1-3.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Human Behavior and the Principle of Least Effort",
"authors": [
{
"first": "George",
"middle": [],
"last": "Kingsley",
"suffix": ""
},
{
"first": "Zipf",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 1949,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "George Kingsley Zipf. 1949. Human Behavior and the Principle of Least Effort. Addison-Wesley, Cam- bridge, MA.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Performance vs. word length"
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Performance by candidate pruning: DLG"
},
"TABREF0": {
"num": null,
"content": "<table><tr><td>Corpus</td><td>AS</td><td colspan=\"3\">CityU CTB MSRA</td></tr><tr><td colspan=\"2\">Training(M) 8.42</td><td>2.71</td><td>0.83</td><td>2.17</td></tr><tr><td>Test(K)</td><td>146</td><td>364</td><td>256</td><td>173</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Bakeoff-3 Corpora"
},
"TABREF1": {
"num": null,
"content": "<table><tr><td/><td colspan=\"5\">: Performance with decoding algorithm (1)</td></tr><tr><td>M. L. a</td><td>Good-ness</td><td>AS</td><td colspan=\"3\">Training corpus CityU CTB MSRA</td></tr><tr><td/><td>FSR</td><td>.400</td><td>.454</td><td>.462</td><td>.432</td></tr><tr><td>2</td><td colspan=\"2\">DLG/d .592 AV .568</td><td>.610 .595</td><td>.604 .596</td><td>.603 .577</td></tr><tr><td/><td>BE</td><td>.559</td><td>.587</td><td>.592</td><td>.572</td></tr><tr><td/><td>FSR</td><td>.193</td><td>.251</td><td>.268</td><td>.235</td></tr><tr><td>7</td><td colspan=\"2\">DLG/d .331 AV .399</td><td>.397 .423</td><td>.409 .430</td><td>.379 .407</td></tr><tr><td/><td>BE</td><td>.390</td><td>.419</td><td>.428</td><td>.403</td></tr><tr><td colspan=\"6\">a M.L.: Maximal length allowable for word candidates.</td></tr></table>",
"type_str": "table",
"html": null,
"text": ""
},
"TABREF2": {
"num": null,
"content": "<table><tr><td>M.</td><td>Good-</td><td/><td/><td colspan=\"2\">Training corpus</td></tr><tr><td>L.</td><td>ness</td><td>AS</td><td/><td colspan=\"3\">CityU CTB MSRA</td></tr><tr><td>2</td><td colspan=\"2\">.568 AV (2) /d .485 AV (1) AV (2) .445 .559 BE (1) BE (2) /d .485 BE (2) .504</td><td/><td>.595 .489 .366 .587 .489 .428</td><td>.596 .508 .367 .592 .508 .446</td><td>.577 .471 .387 .572 .471 .446</td></tr><tr><td>7</td><td colspan=\"2\">.399 AV (2) /d .570 AV (1) AV (2) .445 .390 BE (1) BE (2) /d .597 BE (2) .508</td><td/><td>.423 .581 .366 .419 .604 .431</td><td>.430 .588 .368 .428 .605 .449</td><td>.407 .572 .387 .403 .593 .446</td></tr><tr><td/><td/><td/><td/><td/><td>BE/(2): AS</td></tr><tr><td/><td>0.6</td><td/><td/><td/><td>BE/(2): CityU</td></tr><tr><td/><td/><td/><td/><td/><td>BE/(2): CTB</td></tr><tr><td/><td/><td/><td/><td/><td>BE/(2): MSRA</td></tr><tr><td/><td>0.55</td><td/><td/><td/><td>DLG/(1): AS DLG/(1): CityU</td></tr><tr><td/><td/><td/><td/><td/><td>DLG/(1): CTB</td></tr><tr><td>F\u2212measure</td><td>0.45 0.5</td><td/><td/><td/><td colspan=\"2\">DLG/(1): MSRA</td></tr><tr><td/><td>0.4</td><td/><td/><td/><td/></tr><tr><td/><td>0.35</td><td/><td/><td/><td/></tr><tr><td/><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td></tr><tr><td/><td/><td colspan=\"4\">The Range of Word Length</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Performance comparison: AV vs. BE"
},
"TABREF3": {
"num": null,
"content": "<table><tr><td colspan=\"2\">Good-</td><td/><td colspan=\"3\">Training Corpus</td></tr><tr><td colspan=\"2\">ness</td><td>AS</td><td colspan=\"4\">CityU CTB MSRA</td></tr><tr><td colspan=\"2\">FSR</td><td>2,009K</td><td colspan=\"2\">832K 294K</td><td colspan=\"2\">661K</td></tr><tr><td colspan=\"2\">DLG</td><td>543K</td><td>265K</td><td>96K</td><td colspan=\"2\">232K</td></tr><tr><td colspan=\"2\">AV</td><td>1,153K</td><td colspan=\"2\">443K 160K</td><td colspan=\"2\">337K</td></tr><tr><td colspan=\"2\">BE</td><td>1,153K</td><td colspan=\"2\">443K 160K</td><td colspan=\"2\">337K</td></tr><tr><td/><td>0.65</td><td/><td/><td/><td>100% size 89% size</td></tr><tr><td/><td/><td/><td/><td/><td>79% size</td></tr><tr><td/><td/><td/><td/><td/><td>74% size</td></tr><tr><td/><td>0.6</td><td/><td/><td/><td>70% size</td></tr><tr><td/><td/><td/><td/><td/><td>65% size</td></tr><tr><td/><td/><td/><td/><td/><td>62% size</td></tr><tr><td>F\u2212measure</td><td>0.5 0.55</td><td/><td/><td/><td>48% size 38% size</td></tr><tr><td/><td>0.45</td><td/><td/><td/><td/></tr><tr><td/><td>0.4</td><td/><td/><td/><td/></tr><tr><td/><td>2</td><td>3</td><td>4</td><td>5</td><td>6</td><td>7</td></tr><tr><td/><td/><td/><td colspan=\"2\">The Range of Word Length</td><td/></tr></table>",
"type_str": "table",
"html": null,
"text": "Word candidate number by threshold 0"
},
"TABREF4": {
"num": null,
"content": "<table><tr><td>Decoding</td><td/><td colspan=\"2\">Goodness measure</td><td/></tr><tr><td colspan=\"3\">algorithm FSR DLG</td><td>AV</td><td>BE</td></tr><tr><td>(1)</td><td>1.8</td><td>70</td><td>12.5</td><td>20</td></tr><tr><td>(2)</td><td>-</td><td>-</td><td>8</td><td>12.5</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Optimal rates for candidate pruning (%)"
},
"TABREF5": {
"num": null,
"content": "<table><tr><td/><td colspan=\"5\">: Performance via optimal candidate pruning</td></tr><tr><td>M.</td><td>Good-</td><td/><td colspan=\"2\">Training corpus</td></tr><tr><td>L.</td><td>ness</td><td>AS</td><td colspan=\"3\">CityU CTB MSRA</td></tr><tr><td>2</td><td colspan=\"2\">.501 DLG (1) /d .710 FSR (1) .616 AV (1) .613 BE (1) .585 AV (2) /d BE (2) /d .591</td><td>.525 .650 .625 .614 .602 .599</td><td>.513 .664 .609 .605 .589 .596</td><td>.522 .638 .618 .611 .599 .593</td></tr><tr><td>7</td><td colspan=\"2\">.444 DLG (1) /d .420 FSR (1) .517 AV (1) .501 BE (1) .623 AV (2) /d BE (2) /d .630</td><td>.491 .447 .568 .539 .624 .631</td><td>.486 .460 .549 .510 .604 .620</td><td>.486 .423 .544 .519 .615 .622</td></tr></table>",
"type_str": "table",
"html": null,
"text": ""
},
"TABREF6": {
"num": null,
"content": "<table><tr><td>M.</td><td>Good-</td><td/><td colspan=\"2\">Training corpus</td><td/></tr><tr><td>L.</td><td>ness</td><td>AS</td><td colspan=\"3\">CityU CTB MSRA</td></tr><tr><td>2</td><td>FSR (1) DLG (1) /d AV (1) BE (1)</td><td>.629 .664 .641 .640</td><td>.635 .653 .644 .643</td><td>.624 .643 .631 .632</td><td>.623 .650 .634 .634</td></tr><tr><td>7</td><td>AV (2) /d BE (2) /d</td><td>.595 .593</td><td>.637 .635</td><td>.624 .620</td><td>.610 .609</td></tr><tr><td colspan=\"3\">DLG (1) /d+AV (2) /d .672 DLG (1) /d+BE (2) /d .660</td><td>.684 .681</td><td>.663 .656</td><td>.665 .653</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Performances of ensemble segmentation"
},
"TABREF7": {
"num": null,
"content": "<table><tr><td/><td>Good-</td><td/><td/><td colspan=\"3\">Decoding algorithm</td></tr><tr><td/><td>ness</td><td>(1)/d</td><td>(1)</td><td>(2)/d</td><td>(2)</td><td>(3)/d</td><td>(3)</td></tr><tr><td>F</td><td>AV AV *</td><td colspan=\"5\">.313 .325 .588 .373 .376 .372 .372 .663 .663 .445</td><td>.453 .445</td></tr><tr><td/><td>BE BE *</td><td colspan=\"5\">.309 .319 .624 .501 .376 .370 .370 .676 .676 .447</td><td>.624 .447</td></tr><tr><td>F b</td><td>AV AV * BE BE *</td><td colspan=\"6\">.695 .700 .830 .762 .762 .728 .728 .865 .865 .783 .696 .699 .849 .810 .762 .837 a .728 .783 .728 .728 .872 .872 .784 .784</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Performance comparison by word and boundary F-measure on PKU corpus (M. L. = 6)"
},
"TABREF9": {
"num": null,
"content": "<table><tr><td>Type</td><td>AS</td><td colspan=\"3\">Test corpus CityU CTB MSRA</td></tr><tr><td>Baseline</td><td>.389</td><td>.345</td><td>.337</td><td>.353</td></tr><tr><td>DLG (1) /d DLG * (1) /d 2 AV (1) AV * (1) BE (1) BE * (1)</td><td>.597 .655 .577 .630 .570 .629</td><td>.616 .659 .603 .650 .598 .649</td><td>.601 .632 .597 .618 .594 .618</td><td>.602 .655 .583 .638 .580 .638</td></tr><tr><td colspan=\"2\">AV (2) /d AV * (2) /d 7 BE (2) /d BE * (2) /d DLG * (1) /d +AV * (2) /d .663 .512 .591 .518 .587 DLG * (1) /d +BE * (2) /d .650</td><td>.551 .644 .554 .641 .692 .689</td><td>.543 .618 .546 .614 .658 .650</td><td>.526 .604 .533 .605 .667 .656</td></tr><tr><td>Worst closed</td><td>.710</td><td>.589</td><td>0.818</td><td>.819</td></tr><tr><td>Best closed</td><td>.958</td><td>.972</td><td>0.933</td><td>.963</td></tr></table>",
"type_str": "table",
"html": null,
"text": "Comparison of performances against supervised segmentation"
}
}
}
}