{
"paper_id": "I13-1020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:14:10.594060Z"
},
"title": "Chinese Word Segmentation by Mining Maximized Substrings",
"authors": [
{
"first": "Mo",
"middle": [],
"last": "Shen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Kyoto University Yoshida-honmachi",
"location": {
"addrLine": "Sakyo-ku",
"postCode": "606-8501",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": "shen@nlp.ist.i.kyoto-u.ac.jp"
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Kyoto University Yoshida-honmachi",
"location": {
"addrLine": "Sakyo-ku",
"postCode": "606-8501",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Kyoto University Yoshida-honmachi",
"location": {
"addrLine": "Sakyo-ku",
"postCode": "606-8501",
"settlement": "Kyoto",
"country": "Japan"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "A major problem in the field of Chinese word segmentation is the identification of out-ofvocabulary words. We propose a simple yet effective approach for extracting maximized substrings, which provide good estimations of unknown word boundaries. We also develop a new semi-supervised segmentation technique that incorporates retrieved substrings using discriminative learning. The effectiveness of this novel approach is demonstrated through experiments using both in-domain and out-ofdomain data.",
"pdf_parse": {
"paper_id": "I13-1020",
"_pdf_hash": "",
"abstract": [
{
"text": "A major problem in the field of Chinese word segmentation is the identification of out-ofvocabulary words. We propose a simple yet effective approach for extracting maximized substrings, which provide good estimations of unknown word boundaries. We also develop a new semi-supervised segmentation technique that incorporates retrieved substrings using discriminative learning. The effectiveness of this novel approach is demonstrated through experiments using both in-domain and out-ofdomain data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Chinese sentences are written without explicit word boundaries, which makes Chinese word segmentation (CWS) an initial and important step in Chinese language processing. Recent advances in machine learning techniques have boosted the performance of CWS systems. On the other hand, a major difficulty in CWS is the problem of identifying out-of-vocabulary (OOV) words, as the Chinese language is continually and rapidly evolving, particularly with the rapid growth of the internet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "A recent line of research to overcome this difficulty is through exploiting characteristics of frequent substrings in unlabeled data. Statistical criteria for measuring the likelihood of a substring being a word have been proposed in previous studies of unsupervised segmentation, such as accessor variety (Feng et al., 2004) and branching entropy (Jin and Tanaka-Ishii, 2006) . This kind of criteria has been applied to enhance the performance of supervised segmentation systems (Zhao and Kit, 2007; Zhao and Kit, 2008 Sun and Xu, 2011) by identifying unknown word boundaries.",
"cite_spans": [
{
"start": 306,
"end": 325,
"text": "(Feng et al., 2004)",
"ref_id": "BIBREF6"
},
{
"start": 348,
"end": 376,
"text": "(Jin and Tanaka-Ishii, 2006)",
"ref_id": "BIBREF8"
},
{
"start": 480,
"end": 500,
"text": "(Zhao and Kit, 2007;",
"ref_id": "BIBREF20"
},
{
"start": 501,
"end": 519,
"text": "Zhao and Kit, 2008",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "In this paper, instead of investigating statistical characteristics of batched substrings, we propose a novel method that extracts substrings as reliable word boundary estimations. The technique uses large-scale unlabeled data, and processes it on the fly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "To illustrate the idea, we first consider the following example taken from a scientific text: \"\u4f7f\u4e00\u81f4\u8ba4\u5b9a\u754c\u9650\u6570\u7684\u671f\u671b\u503c\u8fd1\u4f3c\u4e8e\u4e00\u81f4\u6b63\u786e\u754c\u9650 \u6570\u7684\u671f\u671b\u503c\uff0c\u6c42\u5f97\u4e00\u81f4\u8ba4\u5b9a\u754c\u9650\u7684\u671f\u671b\u503c/\u8ba4\u5b9a\u754c \u9650\u6570\u7684\u503c\u3002\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Without any knowledge of the Chinese language one may still notice that some substrings like \"\u4e00\u81f4\" and \"\u7684\u671f\u671b\u503c\", occur multiple times in the sentence and are likely to be valid words or chains of words. Consider a particular type of frequent substring that cannot be simultaneously extended by its surrounding characters while still being equal (Table 1) . We can observe that the boundaries of such substrings can be used as perfect word delimiters. We can segment the sentence by simply treating the boundaries of each occurrence of a substring in Table 1 ",
"cite_spans": [],
"ref_spans": [
{
"start": 342,
"end": 351,
"text": "(Table 1)",
"ref_id": null
},
{
"start": 547,
"end": 554,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "[Figure 1 contents: a lattice of character-level nodes with POC tags (e.g., \u5fb7[S-verb], \u94ed[B-noun], \u8bb0[M-noun], \u8005[E-noun], \u95ee[E-verb], \u7b54[B-noun]) and word-level nodes above them.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1."
},
{
"text": "Sentence: \u9648\u5fb7\u94ed\u7b54\u8bb0\u8005\u95ee (Chen Deming answers to journalists' questions) Figure 1 . A Word-character hybrid lattice of a Chinese sentence. Correct path is represented by bold lines.",
"cite_spans": [],
"ref_spans": [
{
"start": 66,
"end": 74,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Character-level Nodes",
"sec_num": null
},
{
"text": "Word Length 1 2 3 4 5 6 7 or more Tags Table 2 . Word representation with a 6-tag tagset:",
"cite_spans": [],
"ref_spans": [
{
"start": 39,
"end": 46,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Character-level Nodes",
"sec_num": null
},
{
"text": "S BE BB 2 E BB 2 B 3 E BB 2 B 3 ME BB 2 B 3 MME BB 2 B 3 M...ME",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Character-level Nodes",
"sec_num": null
},
{
"text": "S, B, B 2 , B 3 , M, E delimiters: \"\u4f7f|\u4e00\u81f4|\u8ba4\u5b9a|\u754c\u9650|\u6570|\u7684|\u671f\u671b|\u503c|\u8fd1\u4f3c\u4e8e|\u4e00\u81f4|\u6b63\u786e| \u754c\u9650\u6570|\u7684\u671f\u671b|\u503c|\uff0c\u6c42\u5f97|\u4e00\u81f4|\u8ba4\u5b9a\u754c\u9650|\u7684\u671f\u671b|\u503c |/|\u8ba4\u5b9a\u754c\u9650\u6570\u7684|\u503c|\u3002\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Character-level Nodes",
"sec_num": null
},
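To make the tagging scheme of Table 2 concrete, here is a minimal Python sketch (the function name `word_to_tags` is ours, not from the paper) that maps a word to its 6-tag sequence:

```python
def word_to_tags(word):
    """Map a word to its 6-tag sequence (S, B, B2, B3, M, E) per Table 2."""
    n = len(word)
    if n == 1:
        return ["S"]
    # Up to the first three characters are tagged B, B2, B3; the last
    # character is E; any remaining interior characters are M.
    prefix = ["B", "B2", "B3"][: min(n - 1, 3)]
    middle = ["M"] * max(n - 4, 0)
    return prefix + middle + ["E"]

assert word_to_tags("期望") == ["B", "E"]  # length 2 -> BE
assert word_to_tags("ABCDEFG") == ["B", "B2", "B3", "M", "M", "M", "E"]
```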
{
"text": "Compared with the gold-standard segmentation, this partial segmentation has a precision of 100% and a recall of 73.3% with regard to boundary estimation. This is high when we consider that the method does not use a trained segmenter or annotated data. While we have obtained this re-sult on a selected instance, it still suggests that unlabeled data has the potential to enhance the performance of supervised segmentation systems by tracking consistency among substrings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Character-level Nodes",
"sec_num": null
},
{
"text": "Substrings, such as those listed in Table1, are retrievable from unlabeled data and can be incor-porated with a supervised CWS system to com-pensate for out-of-vocabulary (OOV) words. In this case the unlabeled data can be either test data only (leading to a purely supervised system), or a large-scale external corpus (leading to a semi-supervised system). We will formally define this particular type of substring, referred to as a \"maximized substring\", in a later section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Character-level Nodes",
"sec_num": null
},
{
"text": "The remainder of this paper is organized as follows. Section 2 describes our baseline seg-mentation system, defines maximized substrings, and proposes an efficient algorithm for retrieving these substrings from unlabeled data. Section 3 introduces the maximized substring features. Section 4 presents the experimental results. Sec-tion 5 discusses related work. The final section summarizes our conclusions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Character-level Nodes",
"sec_num": null
},
{
"text": "We have used a word-character hybrid model as our baseline Chinese word segmentation system (Nakagawa and Uchimoto, 2007; . As shown in Figure 1 , this hybrid model constructs a lattice that consists of word-level and character-level nodes from a given in-put sentence. Word-level nodes correspond to words found in the system's lexicon, which has been compiled from training data. Character-level nodes have special tags called position-of-character (POC) that indicate the word-internal position (Asahara, 2003; Nakagawa, 2004) . We have adopted the 6-tag tagset, which (Zhao et al., 2006) reported to be optimal. This tagset is illus-trated in Table 2 . Previous studies have shown that jointly pro-cessing word segmentation and part-of-speech tagging is preferable to separate processing, which can propagate errors (Nakagawa and Uchimoto, 2007; . If the training data was annotated by part-of-speech tags, we have combined them with both word-level and character-level nodes.",
"cite_spans": [
{
"start": 92,
"end": 121,
"text": "(Nakagawa and Uchimoto, 2007;",
"ref_id": "BIBREF15"
},
{
"start": 498,
"end": 513,
"text": "(Asahara, 2003;",
"ref_id": "BIBREF1"
},
{
"start": 514,
"end": 529,
"text": "Nakagawa, 2004)",
"ref_id": "BIBREF14"
},
{
"start": 572,
"end": 591,
"text": "(Zhao et al., 2006)",
"ref_id": "BIBREF19"
},
{
"start": 820,
"end": 849,
"text": "(Nakagawa and Uchimoto, 2007;",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 136,
"end": 144,
"text": "Figure 1",
"ref_id": null
},
{
"start": 647,
"end": 654,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Baseline Segmentation System",
"sec_num": "2.1"
},
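As an illustration only (not the authors' implementation), the following Python sketch enumerates the two kinds of lattice nodes for an input sentence, with a toy lexicon and POC tags but without POS tags:

```python
def build_lattice(sentence, lexicon, max_word_len=4):
    """Enumerate lattice nodes as (start, end, label) spans: word-level
    nodes from the lexicon plus character-level nodes carrying
    position-of-character (POC) tags from the 6-tag tagset."""
    nodes = []
    n = len(sentence)
    for i in range(n):
        # Word-level nodes: every lexicon word starting at position i.
        for j in range(i + 1, min(i + max_word_len, n) + 1):
            if sentence[i:j] in lexicon:
                nodes.append((i, j, "WORD:" + sentence[i:j]))
        # Character-level nodes: one per POC tag, covering unknown words.
        for tag in ("S", "B", "B2", "B3", "M", "E"):
            nodes.append((i, i + 1, "POC:" + tag))
    return nodes

# Toy usage on the sentence of Figure 1, with a two-word toy lexicon.
print(build_lattice("陈德铭答记者问", {"记者", "问"})[:3])
```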
{
"text": "Occur (ABCC) Hash1 Hash2 Occur (ABCCFA) Occur (ABCC) (a) (b) Figure 2 . Data structure for maximized substring mining. Hash1 is the first-level hash with fixedlength prefix keys. Hash2 is a hash associating to a corresponding key in Hash1 that stores the list of maximized substrings sharing the same fixed-length prefix.",
"cite_spans": [
{
"start": 6,
"end": 12,
"text": "(ABCC)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 61,
"end": 69,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Hash1 Hash2",
"sec_num": null
},
{
"text": "is the occurrence list associating to a particular maximized substrings with references to all its occurrences in the original postitions in the document. (a) shows a certain state of the data structure, and (b) the state after a maximized substring \"ABCCFA\" is inserted with the context being \"ABCCFAT\u2026\" in the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hash1 Hash2",
"sec_num": null
},
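A minimal sketch of the two-level structure of Figure 2 using ordinary Python dictionaries; all class and method names here are our own, and occurrence lists simply store start offsets into the document:

```python
from collections import defaultdict

PREFIX_LEN = 3  # fixed prefix length; 3 is reported optimal on CTB (Sec. 2.3)

class SubstringStore:
    """Two-level hash: fixed-length prefix -> {substring -> occurrences}."""

    def __init__(self):
        self.index = defaultdict(dict)  # Hash1 of Hash2

    def insert(self, substring, position):
        if len(substring) < PREFIX_LEN:
            return  # shorter strings are filtered out, never candidates
        bucket = self.index[substring[:PREFIX_LEN]]  # Hash2 for this prefix
        bucket.setdefault(substring, []).append(position)

    def lookup(self, substring):
        return self.index[substring[:PREFIX_LEN]].get(substring, [])
```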
{
"text": "Frequent substrings in unlabeled data can be used as clues for identifying word boundaries, as we have illustrated in Section 1. Nevertheless, some substrings, although frequent, are not useful to the system. In the example in Section 1, the substring \"\u81f4\u8ba4\u5b9a\u754c\" occurs the same amount of times as the substring \"\u4e00\u81f4\u8ba4\u5b9a\u754c\u9650\". However, only the latter is a valid identifier for word delimiters: they are non-overlapping, meaning that it is impossible to simultaneously extend all occurrences by surrounding characters. We use the term maximized substring to describe these substrings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximized Substring: the Definition",
"sec_num": "2.2"
},
{
"text": "Formally, we define maximized substring as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximized Substring: the Definition",
"sec_num": "2.2"
},
{
"text": "Definition 1 (Maximised substring). Given a document D that is a collection of sentences, denote a length substring which starts with character by [ ]. is called a maximized substring if:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximized Substring: the Definition",
"sec_num": "2.2"
},
{
"text": "1. It has a set of distinct occurrences, , with at least two elements 1 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximized Substring: the Definition",
"sec_num": "2.2"
},
{
"text": "{ } , , s.t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximized Substring: the Definition",
"sec_num": "2.2"
},
{
"text": "; and 2. and .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximized Substring: the Definition",
"sec_num": "2.2"
},
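A direct, brute-force rendering of Definition 1 for checking a single candidate against a document; this is a reference check only, not the efficient mining algorithm of the next section, and it treats document boundaries as a sentinel:

```python
def is_maximized(doc, sub):
    """Definition 1: sub has >= 2 distinct occurrences in doc, and the
    occurrences cannot all be extended one character left or right."""
    starts, i = [], doc.find(sub)
    while i != -1:
        starts.append(i)
        i = doc.find(sub, i + 1)  # overlapping occurrences allowed
    if len(starts) < 2:
        return False
    left = {doc[j - 1] if j > 0 else None for j in starts}
    right = {doc[j + len(sub)] if j + len(sub) < len(doc) else None
             for j in starts}
    # Maximized iff neither side extends uniformly across all occurrences.
    return len(left) > 1 and len(right) > 1
```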
{
"text": "The substrings listed in Table 1 are therefore maximized substrings, given that D is the example sentence. Note that these are not all maximized substrings extractable from the example sentence, but are the result of the retrieval algorithm that we will describe in the next section.",
"cite_spans": [],
"ref_spans": [
{
"start": 25,
"end": 32,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Maximized Substring: the Definition",
"sec_num": "2.2"
},
{
"text": "The problem of mining frequent substrings in a document has been extensively researched. Existing algorithms generally either use a suffix tree structure (Nelson, 1996) or suffix arrays (Fischer et al., 2005) , and make use of the apriori property (Agrawal and Srikant, 1994) . The apriori property states that a string of length k+1 is frequent only if its substring of length k is frequent. The apriori property can significantly reduce the size of enumerable substring candidates. However, as we are only interested in maximized substrings, suffix tree-based algorithms are inefficient in both time and space. We therefore propose a novel algorithm and a compact data structure for fast maximized substring mining.",
"cite_spans": [
{
"start": 154,
"end": 168,
"text": "(Nelson, 1996)",
"ref_id": "BIBREF13"
},
{
"start": 186,
"end": 208,
"text": "(Fischer et al., 2005)",
"ref_id": "BIBREF7"
},
{
"start": 248,
"end": 275,
"text": "(Agrawal and Srikant, 1994)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Maximized Substring Retrieval: Algorithm and Data Structure",
"sec_num": "2.3"
},
{
"text": "The data structure is illustrated in Figure 2 . It supports fast prefix searching for storing and retrieving maximized substrings, with each entry associated to a list of occurrences that refer to the original positions in the document. Fast prefix matching is a particular advantage of a trie, which is a type of prefix tree. Our structure is different as we use a two-level hash structure for space efficiency and ease of manipulation. This is important, especially during experiments on large-scale unlabeled data.",
"cite_spans": [],
"ref_spans": [
{
"start": 37,
"end": 45,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Maximized Substring Retrieval: Algorithm and Data Structure",
"sec_num": "2.3"
},
{
"text": "The first-level hash stores prefixes of a fixedlength, , of retrieved substrings. This part of the data structure functions as a filter to screen out substrings that are shorter than characters, as they should not be considered as candidates. This is motivated by our observation that single characters, and sometimes even double-character substrings, are not reliable enough to predict word delimiters. Note that is data dependent, for example, the optimal value of is 3 characters on the dataset Chinese Treebank (CTB).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximized Substring Retrieval: Algorithm and Data Structure",
"sec_num": "2.3"
},
{
"text": "Each key of the first-level hash is associated with a second-level hash that stores the retrieved maximized substrings that share a common prefix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximized Substring Retrieval: Algorithm and Data Structure",
"sec_num": "2.3"
},
{
"text": "The third-level structure is a linked list of occurrences of a particular maximized substring. This list stores references to the original position of each occurrence of the substring, with the surrounding context being visible so that new (longer) maximized substrings can be found by extension.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximized Substring Retrieval: Algorithm and Data Structure",
"sec_num": "2.3"
},
{
"text": "We sketch the process of maximized substring retrieval in Pseudocode 1. From the beginning of the document D, we scan each position and register maximized substrings into the data structure H. If an incoming substring already exists in H, we look up its occurrence list to check if its succeeding characters can extend the substring. As the current occurrence list is a set of maximized substrings, there will be only two possible outcomes. Either exactly one element in the occurrence list is found to have a longer common prefix with the incoming substring, in which case we create a new occurrence list consisting of the two lengthened substrings. Alternatively, the prefix remains the same and we add the incoming substring to the occurrence list.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximized Substring Retrieval: Algorithm and Data Structure",
"sec_num": "2.3"
},
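Pseudocode 1 itself did not survive this parse, so the following Python sketch gives our reading of one registration step, built on the `SubstringStore` sketched above; the extension logic is deliberately simplified and should not be taken as the authors' exact procedure:

```python
def register(store, doc, pos, key_len=3):
    """One step of the single-pass scan at position pos: extend an existing
    entry when exactly one stored occurrence shares a longer common prefix
    with the incoming one; otherwise just record the occurrence."""
    key = doc[pos:pos + key_len]
    occurrences = store.lookup(key)
    if not occurrences:
        store.insert(key, pos)
        return

    def common_len(a, b):
        n = 0
        while a + n < len(doc) and b + n < len(doc) and doc[a + n] == doc[b + n]:
            n += 1
        return n

    longer = [p for p in occurrences if common_len(p, pos) > key_len]
    if len(longer) == 1:
        # Outcome 1: a new occurrence list of the two lengthened substrings.
        ext = common_len(longer[0], pos)
        store.insert(doc[pos:pos + ext], longer[0])
        store.insert(doc[pos:pos + ext], pos)
    else:
        # Outcome 2: the common prefix stays the same; append the occurrence.
        store.insert(key, pos)
```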
{
"text": "We can easily demonstrate that all substrings retrieved by this algorithm are maximized substrings. However, the algorithm does not generally guarantee to retrieve all maximized substrings from unlabeled data. This is a necessary compromise if we wish to keep the efficiency of onetime scanning. In addition, we have observed in preliminary experiments that retrieving all maximized substrings is not only unnecessary, but can introduce harmful noise. In the next section, we will discuss our solution to this problem.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximized Substring Retrieval: Algorithm and Data Structure",
"sec_num": "2.3"
},
{
"text": "Maximized substrings can provide good estimations of word boundaries, but random noise can be introduced during the retrieval process in Pseudocode 1.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Short-Term Store",
"sec_num": "2.4"
},
{
"text": "To address this problem, we take advantage of a linguistic phenomenon. It has been observed that a word occurring in the recent past has a return ( ) much higher probability to occur again soon, when compared with its overall frequency (Kuhn and Mori, 1990) . It follows that, for speech recognition, we can then use a window of recent history to adjust the static overall language mode. This observation is applicable to the task of maximized substring retrieval in the following way. Suppose a substring is registered into the data structure. If the substring is in fact a word, it is much more likely to reoccur in the next 50 to 100 sentences than in the remainder of the corpus (especially when it is a technical term or a named entity). Otherwise the substring should have a more unified probability of reoccurrence across the entire corpus.",
"cite_spans": [
{
"start": 236,
"end": 257,
"text": "(Kuhn and Mori, 1990)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Short-Term Store",
"sec_num": "2.4"
},
{
"text": "This motivated us to introduce a functionality into the process of maximized substring retrieval, called \"short-term store\" (STS). The STS is an analogy to the cache component in speech recognition as well as the human phonological working memory in language acquisition. It restricts the length of the visible context when retrieving the next candidate of a registered substring, making it proportional to the current number of occurrences of the substring. For a registered substring, the retrieval algorithm scans a certain number of sentences after the latest occurrence of the substring, where the number of sentences D(s) is determined as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Short-Term Store",
"sec_num": "2.4"
},
{
"text": "{ where is the current number of occurrences of in the data structure. The parameter contributes a fixed-length distance to the visible context. The parameter works as a threshold of reliability. If we have observed at least times in a short period, we can regard as a word, or a sequence of words, with a high level of confidence. Thus, implies that is no longer subject to periodical decaying and will stay in the data structure statically.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Short-Term Store",
"sec_num": "2.4"
},
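With the reconstructed symbols above (alpha, beta and tau are our names, since the original symbols were lost in extraction), the scan length can be computed as:

```python
import math

ALPHA, BETA, TAU = 2, 50, 5  # illustrative values only, not the tuned ones

def visible_context(n_s):
    """Number of sentences to scan after the latest occurrence of s, where
    n_s is the current number of occurrences of s in the data structure."""
    if n_s >= TAU:
        return math.inf  # s is now trusted; it stays statically, no decay
    return ALPHA * n_s + BETA
```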
{
"text": "During the scanning of the sentences, if a new occurrence of is found, it is added into the data structure and is recalculated immediately, starting a new scanning period. If no new occurrences are found, we remove the earliest occurrence of from the data structure and then re-calculate . Note that we have described the short-term store functionality as if each substring in the data structure is scanned separately. In practice, however, only a small change to Pseudocode 1 is required so that STS is used, making one-time scanning of the unlabeled data sufficient.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Short-Term Store",
"sec_num": "2.4"
},
{
"text": "Introducing STS into the retrieval process results in a substantial improvement to the quality of retrieved substrings. It is also important that STS greatly improves the processing efficiency for large scale unlabeled data by keeping the size of the data structure relatively small. This is because a substring entry will decay from the data structure if it has not been refreshed in a short period.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Short-Term Store",
"sec_num": "2.4"
},
{
"text": "For baseline features, we apply the feature tem-plates described in . For further details, please see the original paper. Note that if the part-of-speech tags are not available, we omit those templates involving POS tags.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline Features",
"sec_num": "3.1"
},
{
"text": "We have incorporated the list of retrieved maximized substrings into the baseline system by using a technique which discriminatively learns their features. For every word-level and character-level node in the lattice, the method checks the maximized substring list for entries that satisfy the following two conditions: 1. The node matches the maximized substring at the beginning, the end, or both boundaries. 2. The length of the node is shorter than or equal to that of the entry. For example, consider the lattice in Figure 1 with a maximized substring \"\u9648\u5fb7\u94ed\". All of the character-level nodes of \"\u9648\" and \"\u94ed\" are encoded with maximized substring features. A segmenter will only obtain information on those possible word boundaries that are identified by maximized substrings. The maximized substrings are not directly treated as single words, because a maximized substring can sometimes be a compound word or phrase.",
"cite_spans": [],
"ref_spans": [
{
"start": 521,
"end": 529,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Maximized Substring Features",
"sec_num": "3.2"
},
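The two conditions above can be stated compactly over (start, end) character offsets; the representation below is ours:

```python
def node_matches_substring(node_span, sub_span):
    """Condition 1: the node shares the substring's beginning, end, or both.
    Condition 2: the node is no longer than the substring."""
    (ns, ne), (ss, se) = node_span, sub_span
    shares_boundary = (ns == ss) or (ne == se)
    return shares_boundary and (ne - ns) <= (se - ss)

# E.g., the character node for the first character of "陈德铭" shares its
# beginning with the maximized substring spanning the same three characters.
assert node_matches_substring((0, 1), (0, 3))
```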
{
"text": "For each match with a maximized substring entry, the technique encodes the following features.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximized Substring Features",
"sec_num": "3.2"
},
{
"text": "Basic: A binary feature that indicates whether the match is at the beginning or end of the maximized substring. It is encoded both individually and as a combination with each other feature types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximized Substring Features",
"sec_num": "3.2"
},
{
"text": "Lexicon: There is a particular kind of noise in the retrieved list of maximized substrings, namely, those like the substring \"\u4e2d\u7f8e\u7ecf\", which has resulted from the two phrases \"\u4e2d\u7f8e\u7ecf\u6d4e\" (China and U.S. economy) and \"\u4e2d\u7f8e\u7ecf\u8d38\" (China and U.S. economic and trade). This happens when the boundary of a maximized substring is a shared boundary character of multiple other words. In this example, the last character \"\u7ecf\" of the maximized substring is the character at the beginning of \"\u7ecf\u6d4e\" (economy) and \"\u7ecf\u8d38\" (economic and trade). This kind of noise can be identified by checking the context of maximized substrings in system's lexicon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximized Substring Features",
"sec_num": "3.2"
},
{
"text": "Our technique checks the context of the maximized substring in the input sentence and compares it with the system's lexicon. If any item in the lexicon is found that forms a positional relation with the maximized substring entry (as listed in Table 3 ) then the corresponding features are encoded.",
"cite_spans": [],
"ref_spans": [
{
"start": 243,
"end": 250,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Maximized Substring Features",
"sec_num": "3.2"
},
{
"text": "Lexicon Composition: When a maximized substring is a match to more than one item in the lexicon, a combination of multiple lexicon features is more informative than individual features. We encode the combinations of lexicon features listed as in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 246,
"end": 253,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Maximized Substring Features",
"sec_num": "3.2"
},
{
"text": "Frequency: We sort the list of maximized substrings by their frequencies. If a maximized substring is among the 10% most frequent it is classed as \"highly frequent\", if it is among the top 30% it is \"normal\", and all other cases are \"infrequent\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Maximized Substring Features",
"sec_num": "3.2"
},
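A small sketch of the binning, assuming the list is sorted by descending frequency and indexed by rank (our formulation):

```python
def frequency_class(rank, total):
    """Bin a maximized substring by its rank in the frequency-sorted list
    (rank 0 = most frequent)."""
    if rank < 0.10 * total:
        return "highly frequent"  # top 10%
    if rank < 0.30 * total:
        return "normal"           # top 30%
    return "infrequent"
```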
{
"text": "To evaluate our approach, we have conducted word segmentation experiments on two datasets. The first is Chinese Treebank 7 (CTB7), which is a widely used version of the Penn Chinese Treebank dataset for the evaluations of word segmentation techniques. We have adopted the same setting of data division as (Wang et al., 2011) : the training set, dev set and test set. For CTB7, these sets have 31,131, 10,136 and 10,180 sentences respectively. The second dataset is the second international Chinese word segmentation bakeoff (SIGHAN Bakeoff-2005) (Emerson, 2005) , which has four independent subsets: the Academia Sinica Corpus (AS), the Microsoft Research Corpus (MSR), the Hong Kong City University Corpus (CityU) and the Peking University Corpus (PKU). Since POS tags are not available in this dataset, we have omitted all templates that include them. The models and parameters applied on all test sets are those that result in the best performance on the CTB7 dev set.",
"cite_spans": [
{
"start": 305,
"end": 324,
"text": "(Wang et al., 2011)",
"ref_id": "BIBREF16"
},
{
"start": 524,
"end": 545,
"text": "(SIGHAN Bakeoff-2005)",
"ref_id": null
},
{
"start": 546,
"end": 561,
"text": "(Emerson, 2005)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setting",
"sec_num": "4.1"
},
{
"text": "We have used two different types of unlabeled data. One is the test set itself, which means the system is purely supervised. Another is a largescale dataset, which is the Chinese Gigaword Second Edition (LDC2007T03). This dataset is a collection of news articles from 1991 to 2004 published by Central News Agency (Taiwan), Xinhua News Agency and Lianhe Zaobao Newspaper. It includes a total amount of over 1.2 billion characters in both simplified Chinese and traditional Chinese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting",
"sec_num": "4.1"
},
{
"text": "We have trained all models using the averaged perceptron algorithm (Collins, 2002) , which we selected because of its efficiency and stability. To learn the characteristics of unknown words, we built the system's lexicon using only the words in the training data with a frequency higher than a threshold, . This threshold was tuned using the development data. In order to use the maximized substring features, we have used training data as unlabeled data for supervised models, and used both the training data and Chinese Gigaword for semi-supervised models.",
"cite_spans": [
{
"start": 67,
"end": 82,
"text": "(Collins, 2002)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setting",
"sec_num": "4.1"
},
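For reference, a generic averaged perceptron in the style of (Collins, 2002); this is a minimal sketch with our own function names, not the authors' training code: `decode` returns the best-scoring analysis under the current weights and `features` extracts a sparse feature counter:

```python
from collections import Counter

def averaged_perceptron(train, decode, features, epochs=10):
    """Structured averaged perceptron: additive updates toward the gold
    analysis, with weights averaged over all updates for stability."""
    weights, totals, steps = Counter(), Counter(), 0
    for _ in range(epochs):
        for x, y_gold in train:
            y_pred = decode(x, weights)
            if y_pred != y_gold:
                for f, v in features(x, y_gold).items():
                    weights[f] += v
                for f, v in features(x, y_pred).items():
                    weights[f] -= v
            steps += 1
            for f, v in weights.items():
                totals[f] += v
    return {f: v / steps for f, v in totals.items()}
```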
{
"text": "We have applied the same parameters for all models, which are tuned on the CTB7 dev set: , , , and . We have used precision, recall and the F-score to measure the performance of segmentation systems. Precision, p, is defined as the percentage of Table 5 . Evaluation on CTB7 for the baseline approach and our approach with small and largescale in-domain unlabeled data respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 246,
"end": 253,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Setting",
"sec_num": "4.1"
},
{
"text": "words that are segmented correctly, and recall, r, is the percentage of words in the gold standard data that are recognized in the output. The balanced F-score is defined as F = 2pr/(p + r).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting",
"sec_num": "4.1"
},
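These definitions translate directly into code; the span-based scoring below is a standard formulation, not taken from the paper:

```python
def prf(gold_words, pred_words):
    """Precision, recall and balanced F-score over word spans."""
    def spans(words):
        out, i = set(), 0
        for w in words:
            out.add((i, i + len(w)))
            i += len(w)
        return out
    g, s = spans(gold_words), spans(pred_words)
    correct = len(g & s)
    p, r = correct / len(s), correct / len(g)
    return p, r, 2 * p * r / (p + r)

print(prf(["一致", "认定", "界限"], ["一致", "认定界限"]))  # (0.5, 0.333..., 0.4)
```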
{
"text": "We have compared the performance between the baseline system and our approach. The results are shown in Table 5 . Each row in this table shows the performance of the corresponding system. \"Baseline\" refers to our baseline hybrid word segmentation and POS-tagging system. \"MaxSub-Test\" refers to the method that just uses the test set as unlabeled data. \"MaxSub-U\" refers to the method that uses the large-scale unlabeled data. We have focused on the segmentation performance of our systems. The results show that, using the test data as an additional source of information, \"MaxSub-Test\" outperforms the baseline method by 0.14 points in F-score. This indicates that our method of using maximized substrings can enhance the segmentation performance even with a purely supervised approach. The improvement increases to 0.47 points in F-score for \"MaxSub-U\", which demonstrates the effectiveness of using largescale unlabeled data.",
"cite_spans": [],
"ref_spans": [
{
"start": 104,
"end": 111,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results on In-domain Data",
"sec_num": "4.2"
},
{
"text": "We have compared our approach with previous work in Table 6 . Two methods from (Kruengkrai et al., 2009a; 2009b) are referred to as \"Kruengkrai 09a\" and \"Kruengkrai 09b\", and are taken directly from the report of (Wang et al., 2011) . \"Wang 11\" refers to the semi-supervised system in (Wang et al., 2011) . We have observed that our system \"MaxSub-U\" achieves the best segmentation among these systems. Also, although the performance of our baseline is lower than the systems \"Kruengkrai 09a\" and \"Kruengkrai 09b\" because of differences in implementation, the system \"MaxSub-Test\" (which has used no external resource) has achieved a comparable result.",
"cite_spans": [
{
"start": 79,
"end": 105,
"text": "(Kruengkrai et al., 2009a;",
"ref_id": null
},
{
"start": 106,
"end": 112,
"text": "2009b)",
"ref_id": null
},
{
"start": 213,
"end": 232,
"text": "(Wang et al., 2011)",
"ref_id": "BIBREF16"
},
{
"start": 285,
"end": 304,
"text": "(Wang et al., 2011)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 52,
"end": 59,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results on In-domain Data",
"sec_num": "4.2"
},
{
"text": "The results for the SIGHAN Bakeoff-2005 dataset are shown in Table 7 . The first three rows (\"Tseng 05\", \"Asahara 05\" and \"Chen 05\") show the results of systems that have reached the highest score on at least one corpus (Tseng et 2005; Asahara et al., 2005; Chen et al., 2005) . \"Best closed\" summarizes the best official results on all four corpora. \"Zhao 07\" and \"Zhang 06\" represent the supervised segmentation systems in (Zhao and Kit, 2007; Zhang et al., 2006) . \"Baseline\", \"Maxsub-Test\" and \"MaxSub-U\" refer to the same systems as in Table 5 . For the unlabeled data, we have used the test sets of corresponding corpora for \"MaxSub-Test\", and the Chinese Gigaword for \"MaxSub-U\". Other parameters were left unchanged. The results do not indicate that our approach performs better than other systems. However, this is largely because of our baseline not being optimized for these corpora. Nevertheless, when compared with the baseline, our approach has yielded consistent improvements across the four corpora, and on the PKU corpus we have performed better than previous work.",
"cite_spans": [
{
"start": 220,
"end": 229,
"text": "(Tseng et",
"ref_id": null
},
{
"start": 236,
"end": 257,
"text": "Asahara et al., 2005;",
"ref_id": "BIBREF2"
},
{
"start": 258,
"end": 276,
"text": "Chen et al., 2005)",
"ref_id": "BIBREF5"
},
{
"start": 425,
"end": 445,
"text": "(Zhao and Kit, 2007;",
"ref_id": "BIBREF20"
},
{
"start": 446,
"end": 465,
"text": "Zhang et al., 2006)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 61,
"end": 68,
"text": "Table 7",
"ref_id": null
},
{
"start": 541,
"end": 548,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results on In-domain Data",
"sec_num": "4.2"
},
{
"text": "In Table 8 , we have shown the effects of the different maximized substring feature types proposed in this paper. We activated different combinations of feature types in turn and trained separate models. We also investigated the impact of the short-term store by training models without this feature. The rows of this and tested on CTB7 with different configurations. The row \"Baseline\" is baseline system as in Table 5. \"+Basic&Freq\" represents the system \"MaxSub-U\" with only basic and frequency features activated, and STS turned off. The row \"+All\" represents a system activating all maximized substring features but still without STS. The last row \"+All+STS\" is identical to the system \"Maxsub-U\". It is clear that lexicon-based features are effective in discriminating unreliable maximized substring from reliable ones, and the short-term store improves the segmentation performance by filtering out noises during the retrieval of maximized substrings. The combination of these two techniques yields an improvement of 0.23 point in F-measure, and thus are essential when using maximized substrings.",
"cite_spans": [],
"ref_spans": [
{
"start": 3,
"end": 10,
"text": "Table 8",
"ref_id": null
}
],
"eq_spans": [],
"section": "Impacts of Semi-supervised Features and Short-term Store",
"sec_num": "4.3"
},
{
"text": "To demonstrate the effectiveness of our method on out-of-domain text, we have conducted an experiment on a test set that was drawn from a corpus of scientific articles. This test set contains 510 sentences that have been manually segmented by a native Chinese speaker. We used the test set as the unlabeled data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results on Out-of-domain Data",
"sec_num": "4.4"
},
{
"text": "As the results show (Table 9) , the system \"MaxSub-Test\" exceeded the baseline method by 0.53 in F-score, which is a significant improvement. Considering that the amount of unlabeled data is relatively small, it is likely that acquiring large-scale unlabeled data in the same domain will further benefit the accuracy.",
"cite_spans": [],
"ref_spans": [
{
"start": 20,
"end": 29,
"text": "(Table 9)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Results on Out-of-domain Data",
"sec_num": "4.4"
},
{
"text": "The authors of (Feng et al., 2004) proposed accessor variety (AV), a criterion measuring the likelihood of a substring being a word by count-ing distinct surrounding characters. In (Jin and Tanaka-Ishii, 2006 ) the researchers proposed branching entropy, a similar criterion based on the assumption that the uncertainty of surrounding characters of a substring peaks at the word boundaries. The authors of (Zhao and Kit, 2007) incorporated accessor variety and another type of criteria, called co-occurrence sub-sequence, with a supervised segmentation system and conducted comprehensive experiments to investigate their impacts. Although the idea behind co-occurrence sub-sequence is similar with maximized substrings, there are several restrictions: it requires post-processing to remove overlapping instances; sub-sequences are retrievable only from different sentences; and the retrieval is performed only on training and testing data. In (Sun and Xu, 2011) , the authors proposed a semi-supervised segmentation system enhanced with multiple statistical criteria. Large-scale unlabeled data were used in their experiments.",
"cite_spans": [
{
"start": 15,
"end": 34,
"text": "(Feng et al., 2004)",
"ref_id": "BIBREF6"
},
{
"start": 181,
"end": 208,
"text": "(Jin and Tanaka-Ishii, 2006",
"ref_id": "BIBREF8"
},
{
"start": 406,
"end": 426,
"text": "(Zhao and Kit, 2007)",
"ref_id": "BIBREF20"
},
{
"start": 943,
"end": 961,
"text": "(Sun and Xu, 2011)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5."
},
{
"text": "Li and Sun presented a model to learn features of word delimiters from punctuation marks in (Li and Sun, 2009) . Wang et al. proposed a semisupervised word segmentation method that took advantages from auto-analyzed data (Wang et al., 2011) .",
"cite_spans": [
{
"start": 92,
"end": 110,
"text": "(Li and Sun, 2009)",
"ref_id": "BIBREF12"
},
{
"start": 221,
"end": 240,
"text": "(Wang et al., 2011)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5."
},
{
"text": "Nakagawa showed the advantage of the hybrid model combining both character-level information and word-level information in Chinese and Japanese word segmentation (Nakagawa, 2004) . In (Nakagawa and Uchimoto, 2007) and (Kruengkrai et al., 2009a; 2009b) the researchers presented word-character hybrid models for joint word segmentation and POS tagging, and achieved the state-of-the-art accuracy on Chinese and Japanese datasets.",
"cite_spans": [
{
"start": 162,
"end": 178,
"text": "(Nakagawa, 2004)",
"ref_id": "BIBREF14"
},
{
"start": 184,
"end": 213,
"text": "(Nakagawa and Uchimoto, 2007)",
"ref_id": "BIBREF15"
},
{
"start": 218,
"end": 244,
"text": "(Kruengkrai et al., 2009a;",
"ref_id": null
},
{
"start": 245,
"end": 251,
"text": "2009b)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "5."
},
{
"text": "We propose a simple yet effective approach for extracting maximized substrings from unlabeled data. These are a particular type of substrings that provide good estimations of unknown word boundaries. The retrieved maximized substrings are incorporated with a supervised segmentation system through discriminative learning. We have demonstrated the effectiveness of our approach through experiments in both in-domain and outof-domain data and have achieved significant improvements over the baseline systems across all datasets 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6."
},
{
"text": "It should be noted that, in order to retrieve a substring, the size of M is not necessarily identical to its total count in the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "in McNemar's test.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Fast Algorithms for Mining Association Rules",
"authors": [
{
"first": "Rakesh",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Ramakrishnan",
"middle": [],
"last": "Srikant",
"suffix": ""
}
],
"year": 1994,
"venue": "Proceedings of 1994 Int. Conf. Very Large Data Bases",
"volume": "",
"issue": "",
"pages": "487--499",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rakesh Agrawal and Ramakrishnan Srikant. 1994. Fast Algorithms for Mining Association Rules. In Proceedings of 1994 Int. Conf. Very Large Data Bases, pages 487-499.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Corpus-based Japanese Morphological Analysis. Nara Institute of Science and Technology, Doctor's Thesis",
"authors": [
{
"first": "Masayuki",
"middle": [],
"last": "Asahara",
"suffix": ""
}
],
"year": 2003,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masayuki Asahara. 2003. Corpus-based Japanese Morphological Analysis. Nara Institute of Science and Technology, Doctor's Thesis.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Combination of Machine Learning Methods for Optimum Chinese Word Segmentation",
"authors": [
{
"first": "Masayuki",
"middle": [],
"last": "Asahara",
"suffix": ""
},
{
"first": "Kenta",
"middle": [],
"last": "Fukuoka",
"suffix": ""
},
{
"first": "Ai",
"middle": [],
"last": "Azuma",
"suffix": ""
},
{
"first": "Chooi-Ling",
"middle": [],
"last": "Goh",
"suffix": ""
},
{
"first": "Yotaro",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
},
{
"first": "Takashi",
"middle": [],
"last": "Tsuzuki",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "134--137",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masayuki Asahara, Kenta Fukuoka, Ai Azuma, Chooi-Ling Goh, Yotaro Watanabe, Yuji Matsu- moto, and Takashi Tsuzuki. 2005. Combination of Machine Learning Methods for Optimum Chinese Word Segmentation. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Pro- cessing, pages 134-137.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of EMNLP 2002",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002. Discriminative Training Methods for Hidden Markov Models: Theory and Experiments with Perceptron Algorithms. In Pro- ceedings of EMNLP 2002, pages 1-8.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "The Second International Chinese Word Segmentation Bakeoff",
"authors": [
{
"first": "Thomas",
"middle": [
"Emerson"
],
"last": "",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "123--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Emerson. 2005. The Second International Chinese Word Segmentation Bakeoff. In Proceed- ings of the Fourth SIGHAN Workshop on Chinese Language Processing, pages 123-133.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Unigram language model for Chinese word segmentation",
"authors": [
{
"first": "Aitao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yiping",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Anne",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Gordon",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "138--141",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aitao Chen, Yiping Zhou, Anne Zhang, and Gordon Sun. 2005. Unigram language model for Chinese word segmentation. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Pro- cessing, pages 138-141.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Accessor Variety Criteria for Chinese Word Extraction",
"authors": [
{
"first": "Haodi",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Kang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaotie",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Weimin",
"middle": [],
"last": "Zheng",
"suffix": ""
}
],
"year": 2004,
"venue": "Computational Linguistics",
"volume": "30",
"issue": "1",
"pages": "75--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haodi Feng, Kang Chen, Xiaotie Deng, and Weimin Zheng. 2004. Accessor Variety Criteria for Chinese Word Extraction. Computational Linguistics, 30(1), pages 75-93.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Fast Frequent String Mining Using Suffix Arrays",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Fischer",
"suffix": ""
},
{
"first": "Volker",
"middle": [],
"last": "Heun",
"suffix": ""
},
{
"first": "Stefan",
"middle": [],
"last": "Kramer",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ICDM 2005",
"volume": "",
"issue": "",
"pages": "609--612",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johannes Fischer, Volker Heun, and Stefan Kramer. 2005. Fast Frequent String Mining Using Suffix Arrays. In Proceedings of ICDM 2005, IEEE Computer Society, pages 609-612.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Unsupervised Segmentation of Chinese Text by Use of Branching Entropy",
"authors": [
{
"first": "Zhihui",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Kumiko",
"middle": [],
"last": "Tanaka-Ishii",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COL-ING/ACL 2006 Main Conference Poster Sessions",
"volume": "",
"issue": "",
"pages": "428--435",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhihui Jin and Kumiko Tanaka-Ishii. 2006. Unsuper- vised Segmentation of Chinese Text by Use of Branching Entropy. In Proceedings of the COL- ING/ACL 2006 Main Conference Poster Sessions, pages 428-435.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "An Error-Driven Word-Character Hybird Model for Joint Chinese Word Segmentation and POS Tagging",
"authors": [
{
"first": "Canasai",
"middle": [],
"last": "Kruengkrai",
"suffix": ""
},
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "Yiouwang",
"middle": [],
"last": "Kazama",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL/IJCNLP 2009",
"volume": "",
"issue": "",
"pages": "513--521",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Canasai Kruengkrai, Kiyotaka Uchimoto, Jun'ichi Kazama, YiouWang, Kentaro Torisawa, and Hi- toshi Isahara. 2009. An Error-Driven Word- Character Hybird Model for Joint Chinese Word Segmentation and POS Tagging. In Proceedings of ACL/IJCNLP 2009, pages 513-521.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Joint Chinese Word Segmentation and POS Tagging Using an Error-Driven Word-Character Hybrid Model",
"authors": [
{
"first": "Canasai",
"middle": [],
"last": "Kruengkrai",
"suffix": ""
},
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Kazama",
"suffix": ""
},
{
"first": "Yiou",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2009,
"venue": "IEICE transactions on information and systems",
"volume": "92",
"issue": "12",
"pages": "2298--2305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Canasai Kruengkrai Kiyotaka Uchimoto, Jun'ichi Kazama, Yiou Wang, Kentaro Torisawa, and Hi- toshi Isahara. 2009. Joint Chinese Word Segmenta- tion and POS Tagging Using an Error-Driven Word-Character Hybrid Model. IEICE transactions on information and systems, 92(12), pages 2298- 2305.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A Cachebased Natural Language Model for Speech Recognition",
"authors": [
{
"first": "Roland",
"middle": [],
"last": "Kuhn",
"suffix": ""
},
{
"first": "Renato",
"middle": [
"De"
],
"last": "Mori",
"suffix": ""
}
],
"year": 1990,
"venue": "IEEE Transaction on Pattern Analysis and Machine Intelligence",
"volume": "12",
"issue": "6",
"pages": "570--583",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roland Kuhn and Renato De Mori. 1990. A Cache- based Natural Language Model for Speech Recog- nition. IEEE Transaction on Pattern Analysis and Machine Intelligence, 12(6), pages 570-583.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Punctuation as Implicit Annotations for Chinese Word Segmentation",
"authors": [
{
"first": "Zhongguo",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2009,
"venue": "Computational Linguistics",
"volume": "35",
"issue": "4",
"pages": "505--512",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhongguo Li and Maosong Sun. 2009. Punctuation as Implicit Annotations for Chinese Word Segmenta- tion. Computational Linguistics, 35(4), pages 505- 512.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Fast String Searching with Suffix Trees",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Nelson",
"suffix": ""
}
],
"year": 1996,
"venue": "Dr.Dobb's Journal",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Nelson. 1996. Fast String Searching with Suffix Trees. Dr.Dobb's Journal.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Chinese and Japanese Word Segmentation Using Word-level and Characterlevel Information",
"authors": [
{
"first": "Tetsuji",
"middle": [],
"last": "Nakagawa",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of COLING 2004",
"volume": "",
"issue": "",
"pages": "466--472",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tetsuji Nakagawa. 2004. Chinese and Japanese Word Segmentation Using Word-level and Character- level Information. In Proceedings of COLING 2004, pages 466-472.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Hybrid Approach to Word Segmentation and Pos Tagging",
"authors": [
{
"first": "Tetsuji",
"middle": [],
"last": "Nakagawa",
"suffix": ""
},
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL 2007 Demo and Poster Sessions",
"volume": "",
"issue": "",
"pages": "217--220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tetsuji Nakagawa and Kiyotaka Uchimoto. 2007. Hybrid Approach to Word Segmentation and Pos Tagging. In Proceedings of ACL 2007 Demo and Poster Sessions, pages 217-220.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Improving Chinese Word Segmentation and POS Tagging with Semi-supervised Methods Using Large Auto-Analyzed Data",
"authors": [
{
"first": "Yiou",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Kazama",
"suffix": ""
},
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Wenliang",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yujie",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yiou Wang, Jun'ichi Kazama, Yoshimasa Tsuruoka, Wenliang Chen, Yujie Zhang, and Kentaro Torisawa. 2011. Improving Chinese Word Seg- mentation and POS Tagging with Semi-supervised Methods Using Large Auto-Analyzed Data. In Pro- ceedings of IJCNLP 2011.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Enhancing Chinese Word Segmentation Using Unlabeled Data",
"authors": [
{
"first": "Weiwei",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Jia",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of EMNLP 2011",
"volume": "",
"issue": "",
"pages": "970--979",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Weiwei Sun and Jia Xu. 2011. Enhancing Chinese Word Segmentation Using Unlabeled Data. In Pro- ceedings of EMNLP 2011, pages 970-979.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A Conditional Random Field Word Segmenter for SIGHAN Bakeoff",
"authors": [
{
"first": "Huihsin",
"middle": [],
"last": "Tseng",
"suffix": ""
},
{
"first": "Pichuan",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Galen",
"middle": [],
"last": "Andrew",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing",
"volume": "",
"issue": "",
"pages": "168--171",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Huihsin Tseng, Pichuan Chang, Galen Andrew, Dan- iel Jurafsky, and Christopher Manning. 2005. A Conditional Random Field Word Segmenter for SIGHAN Bakeoff 2005. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, pages 168-171.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Effective Tag Set Selection in Chinese Word Segmentation via Conditional Random Field Modeling",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chang-Ning",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Bao-Liang",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of PACLIC 20",
"volume": "",
"issue": "",
"pages": "87--94",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao, Chang-Ning Huang, Mu Li, and Bao-Liang Lu. 2006. Effective Tag Set Selection in Chinese Word Segmentation via Conditional Random Field Modeling. In Proceedings of PACLIC 20, pages 87-94.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Incorporating Global Information into Supervised Learning for Chinese Word Segmentation",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of PACLING 2007",
"volume": "",
"issue": "",
"pages": "66--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao and Chunyu Kit. 2007. Incorporating Global Information into Supervised Learning for Chinese Word Segmentation. In Proceedings of PACLING 2007, pages 66-74.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Exploiting Unlabeled Text with Different Unsupervised Segmentation Criteria for Chinese Word Segmentation. Research in Computing Science",
"authors": [
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Chunyu",
"middle": [],
"last": "Kit",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "33",
"issue": "",
"pages": "93--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hai Zhao and Chunyu Kit. 2008. Exploiting Unla- beled Text with Different Unsupervised Segmenta- tion Criteria for Chinese Word Segmentation. Re- search in Computing Science, Vol. 33, pages 93- 104.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Subword-based Tagging for Confidence Dependent Chinese Word Segmentation",
"authors": [
{
"first": "Ruiqiang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Genichiro",
"middle": [],
"last": "Kikui",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2006,
"venue": "COLING/ACL 2006",
"volume": "",
"issue": "",
"pages": "961--968",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruiqiang Zhang, Genichiro Kikui, and Eiichiro Su- mita. 2006. Subword-based Tagging for Confi- dence Dependent Chinese Word Segmentation. In COLING/ACL 2006, pages 961-968.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"text": "Table 1. A particular type of substrings with multiple occurrences in the Chinese sentence: \"\u4f7f\u4e00\u81f4 \u8ba4\u5b9a\u754c\u9650\u6570\u7684\u671f\u671b\u503c\u8fd1\u4f3c\u4e8e\u4e00\u81f4\u6b63\u786e\u754c\u9650\u6570\u7684\u671f\u671b \u503c\uff0c\u6c42\u5f97\u4e00 \u81f4\u8ba4\u5b9a\u754c \u9650\u7684\u671f \u671b\u503c /\u8ba4\u5b9a\u754c\u9650 \u6570\u7684 \u503c\u3002\"",
"type_str": "table",
"html": null,
"content": "<table><tr><td>Substring</td><td>Freq</td></tr><tr><td>\u4e00\u81f4</td><td>3</td></tr><tr><td>\u754c\u9650\u6570\u7684\u671f\u671b\u503c</td><td>2</td></tr><tr><td>\u4e00\u81f4\u8ba4\u5b9a\u754c\u9650</td><td>2</td></tr><tr><td>\u7684\u671f\u671b\u503c</td><td>3</td></tr><tr><td>\u8ba4\u5b9a\u754c\u9650\u6570\u7684</td><td>2</td></tr><tr><td>\u503c</td><td>4</td></tr><tr><td>;</td><td/></tr></table>",
"num": null
},
"TABREF3": {
"text": "",
"type_str": "table",
"html": null,
"content": "<table><tr><td/><td colspan=\"3\">Lexicon features. Each one represents a</td></tr><tr><td colspan=\"4\">positional relation between a maximized substring</td></tr><tr><td colspan=\"4\">and a contextual substring which exists in sys-</td></tr><tr><td colspan=\"2\">tem's lexicon.</td><td/><td/></tr><tr><td>ID</td><td>At Beginning</td><td>ID</td><td>At Ending</td></tr><tr><td>B1</td><td>&lt;L1,L6&gt;</td><td>E1</td><td>&lt;L2,L5&gt;</td></tr><tr><td>B2</td><td>&lt;L6,L8&gt;</td><td>E2</td><td>&lt;L5,L7&gt;</td></tr><tr><td>B3</td><td>&lt;L1,L8&gt;</td><td>E3</td><td>&lt;L2,L7&gt;</td></tr></table>",
"num": null
},
"TABREF4": {
"text": "Lexicon Composition features. Each one represents a combination of two Lexicon features that fire simultaneously.",
"type_str": "table",
"html": null,
"content": "<table/>",
"num": null
}
}
}
}