{
"paper_id": "I13-1019",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:14:26.014120Z"
},
"title": "A Simple Approach to Unknown Word Processing in Japanese Morphological Analysis",
"authors": [
{
"first": "Ryohei",
"middle": [],
"last": "Sasano",
"suffix": "",
"affiliation": {
"laboratory": "Precision and Intelligence Laboratory",
"institution": "Tokyo Institute of Technology",
"location": {}
},
"email": "sasano@pi.titech.ac.jp"
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Kyoto University",
"location": {}
},
"email": ""
},
{
"first": "Manabu",
"middle": [],
"last": "Okumura",
"suffix": "",
"affiliation": {
"laboratory": "Precision and Intelligence Laboratory",
"institution": "Tokyo Institute of Technology",
"location": {}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a simple but effective approach to unknown word processing in Japanese morphological analysis, which handles 1) unknown words that are derived from words in a pre-defined lexicon and 2) unknown onomatopoeias. Our approach leverages derivation rules and onomatopoeia patterns, and correctly recognizes certain types of unknown words. Experiments revealed that our approach recognized about 4,500 unknown words in 100,000 Web sentences with only 80 harmful side effects and a 6% loss in speed.",
"pdf_parse": {
"paper_id": "I13-1019",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a simple but effective approach to unknown word processing in Japanese morphological analysis, which handles 1) unknown words that are derived from words in a pre-defined lexicon and 2) unknown onomatopoeias. Our approach leverages derivation rules and onomatopoeia patterns, and correctly recognizes certain types of unknown words. Experiments revealed that our approach recognized about 4,500 unknown words in 100,000 Web sentences with only 80 harmful side effects and a 6% loss in speed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Morphological analysis is the first step in many natural language applications. Since words are not segmented by explicit delimiters in Japanese, Japanese morphological analysis consists of two subtasks: word segmentation and part-of-speech (POS) tagging. Japanese morphological analysis has successfully adopted lexicon-based approaches for newspaper articles (Kurohashi et al., 1994; Asahara and Matsumoto, 2000; Kudo et al., 2004) , in which an input sentence is transformed into a lattice of candidate words using a pre-defined lexicon, and an optimal path in the lattice is then selected. Figure 1 shows an example of a word lattice for morphological analysis and an optimal path. Since the transformation from a sentence into a word lattice basically depends on the pre-defined lexicon, the existence of unknown words, i.e., words that are not included in the predefined lexicon, is a major problem in Japanese morphological analysis.",
"cite_spans": [
{
"start": 361,
"end": 385,
"text": "(Kurohashi et al., 1994;",
"ref_id": "BIBREF13"
},
{
"start": 386,
"end": 414,
"text": "Asahara and Matsumoto, 2000;",
"ref_id": "BIBREF0"
},
{
"start": 415,
"end": 433,
"text": "Kudo et al., 2004)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 594,
"end": 602,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There are two major approaches to this problem: one is to augment the lexicon by acquiring unknown words from a corpus in advance (Mori and Nagao, 1996; Murawaki and Kurohashi, 2008) and the other is to introduce better unknown word processing to the morphological ana-Input : \"\u1ff3\u1b22\u18e3\u19c4\u0a71\" (My father is a Japanese.) Lattice : lyzer (Nagata, 1999; Uchimoto et al., 2001 ; Asahara and Azuma et al., 2006; Nakagawa and Uchimoto, 2007) . Although both approaches have their own advantages and should be exploited cooperatively, this paper focuses only on the latter approach.",
"cite_spans": [
{
"start": 130,
"end": 152,
"text": "(Mori and Nagao, 1996;",
"ref_id": "BIBREF17"
},
{
"start": 153,
"end": 182,
"text": "Murawaki and Kurohashi, 2008)",
"ref_id": "BIBREF18"
},
{
"start": 328,
"end": 342,
"text": "(Nagata, 1999;",
"ref_id": "BIBREF19"
},
{
"start": 343,
"end": 364,
"text": "Uchimoto et al., 2001",
"ref_id": "BIBREF22"
},
{
"start": 379,
"end": 398,
"text": "Azuma et al., 2006;",
"ref_id": "BIBREF3"
},
{
"start": 399,
"end": 427,
"text": "Nakagawa and Uchimoto, 2007)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most previous work on this approach has aimed at developing a single general-purpose unknown word model. However, there are several types of unknown words, some of which can be easily dealt with by introducing simple derivation rules and unknown word patterns. In addition, as we will discuss in Section 2.3, the importance of unknown word processing varies across unknown word types. In this paper, we aim to deal with unknown words that are considered important and can be dealt with using simple rules and patterns. Table 1 lists several types of Japanese unknown words, some of which often appear in Web text. First, we broadly divide the unknown words into two classes: words derived from the words in the lexicon and the others. There are a lot of informal spelling variations in Web text that are derived from the words in the lexicon, such as \" \" (y0u) instead of \" \" (you) and \" \" (coooool) instead of \" \" (cool). The types of derivation are limited, and thus most of them can be resolved by introducing derivation rules. Unknown words other than those derived from known words are generally difficult to resolve using only simple rules, and the lexicon augmentation approach would be better for them. However, this is not true for onomatopoeias. Although Japanese is rich in onomatopoeias and some of them do not appear in the lexicon, most of them follow several patterns such as 'ABAB,' 'A B ,' and 'AB ,' 1 and they thus can be resolved by considering typical patterns.",
"cite_spans": [],
"ref_spans": [
{
"start": 519,
"end": 526,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Therefore, in this paper, we introduce derivation rules and onomatopoeia patterns to the unknown word processing in Japanese morphological analysis, and aim to resolve 1) unknown words derived from words in a pre-defined lexicon and 2) unknown onomatopoeias.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "As mentioned earlier, lexicon-based approaches have been widely adopted for Japanese morphological analysis. In these approaches, we assume that a lexicon, which lists a pair consisting of a word and its corresponding part-of-speech, is available. The process of traditional Japanese morphological analysis is as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Japanese morphological analysis",
"sec_num": "2.1"
},
{
"text": "1. Build a lattice of words that represents all the candidate sequences of words from an input sentence. 2. Find an optimal path through the lattice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Japanese morphological analysis",
"sec_num": "2.1"
},
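{
"text": "To make step 1 concrete, the following is a minimal Python sketch of a lattice builder (our illustration, not code from JUMAN, ChaSen, or MeCab; production analyzers look words up with a trie-based common-prefix search instead of trying every substring):

from collections import defaultdict

def build_lattice(sentence, lexicon):
    # lexicon maps a surface form to a list of (POS, word cost) entries.
    # A node is an edge from position i to position j labeled with a word.
    lattice = defaultdict(list)  # start position -> [(end, word, POS, cost)]
    for i in range(len(sentence)):
        for j in range(i + 1, len(sentence) + 1):
            word = sentence[i:j]
            for pos, cost in lexicon.get(word, []):
                lattice[i].append((j, word, pos, cost))
    return lattice

Every path from position 0 to the end of the sentence through this structure is one candidate segmentation; step 2 selects the best of them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Japanese morphological analysis",
"sec_num": "2.1"
},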
{
"text": "Figure 1 in Section 1 shows an example of a word lattice for the input sentence \" \" (My father is Japanese), where a total of six candidate paths are encoded and the optimal path is marked with bold lines. The lattice is mainly built with the words in the lexicon. Some heuristics are also used for dealing with unknown words, but in most cases, only a few simple heuristics are used. In fact, the three major Japanese morphological analyzers, JUMAN (Kurohashi and Kawahara, 2005 ), ChaSen (Matsumoto et al., 2007) , 1 'A' and 'B' denote Japanese characters, respectively. and MeCab (Kudo, 2006) , use only a few simple heuristics based on the character types, such as hiragana, katakana, and alphabets 2 , that regard a character sequence consisting of the same character type as a word candidate.",
"cite_spans": [
{
"start": 450,
"end": 479,
"text": "(Kurohashi and Kawahara, 2005",
"ref_id": null
},
{
"start": 490,
"end": 514,
"text": "(Matsumoto et al., 2007)",
"ref_id": "BIBREF16"
},
{
"start": 583,
"end": 595,
"text": "(Kudo, 2006)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Japanese morphological analysis",
"sec_num": "2.1"
},
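{
"text": "As an illustration of such a character-type heuristic, the sketch below (ours, with simplified code point ranges; not the analyzers' actual code) turns each maximal run of characters of the same type into an unknown-word candidate:

def char_type(ch):
    # Simplified classification by Unicode code point.
    cp = ord(ch)
    if 0x3041 <= cp <= 0x309F:
        return 'hiragana'
    if 0x30A0 <= cp <= 0x30FF:
        return 'katakana'
    if 0x4E00 <= cp <= 0x9FFF:
        return 'kanji'
    if ch.isascii() and ch.isalpha():
        return 'latin'
    return 'other'

def same_type_runs(sentence):
    # Yield maximal runs of a single character type; each run becomes a
    # candidate node for an out-of-vocabulary word in the lattice.
    start = 0
    for i in range(1, len(sentence) + 1):
        if i == len(sentence) or char_type(sentence[i]) != char_type(sentence[start]):
            yield start, i, sentence[start:i]
            start = i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Japanese morphological analysis",
"sec_num": "2.1"
},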
{
"text": "The optimal path is searched for based on the sum of the costs for the path. There are two types of costs: the cost for a candidate word and the cost for a pair of adjacent parts-of-speech. The cost for a word reflects the probability of the occurrence of the word, and the connectivity cost of a pair of parts-of-speech reflects the probability of an adjacent occurrence of the pair. A greater cost means less probability. The costs are manually assigned in JUMAN, and assigned by adopting supervised machine learning techniques in ChaSen and MeCab, while the algorithm to find the optimal path is the same, which is based on the Viterbi algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Japanese morphological analysis",
"sec_num": "2.1"
},
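{
"text": "The following sketch shows Viterbi decoding over the lattice from the build_lattice sketch above, under this cost model (a minimal illustration assuming additive word and connection costs; the actual analyzers' data structures and cost tables differ):

def viterbi(lattice, length, conn_cost, bos='BOS'):
    # best[i][pos] holds (cheapest cumulative cost of any analysis of the
    # first i characters whose last word has part-of-speech pos, backpointer).
    INF = float('inf')
    best = [dict() for _ in range(length + 1)]
    best[0][bos] = (0.0, None)
    for i in range(length):
        for left_pos, (cost, _) in list(best[i].items()):
            for end, word, pos, wcost in lattice.get(i, []):
                total = cost + conn_cost.get((left_pos, pos), 0.0) + wcost
                if total < best[end].get(pos, (INF, None))[0]:
                    best[end][pos] = (total, (i, left_pos, word))
    # Backtrack from the cheapest analysis covering the whole sentence.
    pos = min(best[length], key=lambda p: best[length][p][0])
    words, i = [], length
    while i > 0:
        i, pos, word = best[i][pos][1]
        words.append(word)
    return words[::-1]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Japanese morphological analysis",
"sec_num": "2.1"
},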
{
"text": "In this section, we detail the target unknown word types of this research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of unknown words",
"sec_num": "2.2"
},
{
"text": "Rendaku (sequential voicing) is a phenomenon in Japanese morpho-phonology that voices the initial consonant of the non-initial portion of a compound word. In the following example, the initial consonant of the Japanese noun \" \" (sake, alcoholic drink) is voiced into \" \" (zake):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of unknown words",
"sec_num": "2.2"
},
{
"text": "(1) (eggnog) ta ma go -za ke. Since the expression \" \" (zake) is not included in a standard lexicon, it is regarded as an unknown word even if the original word \" \" (sake) is included in the lexicon. There are a lot of studies on rendaku in the field of phonetics and linguistics, and several conditions that prevent rendaku are known, such as Lyman's Law (Lyman, 1894), which stated that rendaku does not occur when the second element of the compound contains a voiced obstruent. However, few studies dealt with rendaku in morphological analysis. Since we have to check the adjacent word to recognize rendaku, it is difficult to deal with rendaku using only the lexicon augmentation approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of unknown words",
"sec_num": "2.2"
},
{
"text": "Some characters are substituted by peculiar characters or symbols such as long sound symbols, lowercase kana characters 3 , in informal text. First, if there is little difference in pronunciation, Japanese vowel characters ' '(a), ' '(i), ' '(u), ' '(e), and ' '(o) are sometimes substituted by long sound symbols ' ' or ' .' For example, a vowel character ' ' in the Japanese adjective \" \" (hontou, true) is sometimes substituted by ' ' and this adjective is written as \" \" (hont\u00f4, troo). We call this phenomenon substitution with long sound symbols. As well as long sound symbol substitution, some hiragana characters such as ' '(a), ' '(i), ' '(u), ' '(e), ' '(o), ' '(wa), and ' '(ka) are substituted by their lowercases: ' ,' ' , ' ' ,' ' ,' ' ,' ' ,' and ' .' We call this phenomenon substitution with lowercases.",
"cite_spans": [
{
"start": 735,
"end": 765,
"text": "' ' ,' ' ,' ' ,' ' ,' and ' .'",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Types of unknown words",
"sec_num": "2.2"
},
{
"text": "There are also other types of derivation, that is, some characters are inserted into a word that is included in the lexicon. In the following examples, long sound symbols and lowercase are inserted into the Japanese adjective \" \" (cool).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of unknown words",
"sec_num": "2.2"
},
{
"text": "(2) (Insertion of (coooool) long sound symbols)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of unknown words",
"sec_num": "2.2"
},
{
"text": "(3) (Insertion of lowercases) (coooool)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of unknown words",
"sec_num": "2.2"
},
{
"text": "In addition to the unknown words derived from words in the lexicon, there are several types of unknown words that contain rare words such as \" \" (decontamination), new words such as \" \" (Twitter), and onomatopoeias such as \" \" (caw-caw). We can easily generate Japanese onomatopoeias that are not included in the lexicon. Most of them follow several patterns, such as 'ABAB,' 'A B ,' and 'AB ,' and we classified them into two types, onomatopoeias with repetition such as 'ABAB,' and onomatopoeias without repetition such as 'A B .'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Types of unknown words",
"sec_num": "2.2"
},
{
"text": "The importance of unknown word processing varies across unknown word types. We give three example sentences (4), (5), and (6), which include the unknown words \" \" (fluffy), \" \" (decontamination), and \" \" (Twitter), respectively. In these examples, (a) denotes the desirable morphological analysis and (b) is the output of our baseline morphological analyzer, JUMAN version 5.1 (Kurohashi and Kawahara, 2005) . 4 In the case of (4), the unknown word \" \" (fluffy) is divided into three parts by JU-MAN, and influences the analyses of the adjacent function words, that is, \" \" (and) is changed to \" \" (but) and \" \" (of) is changed to \" \" (this), which will strongly affect the other NLP applications. The wide scope of influence is due to the fact that \" \" consists of hiragana characters like most Japanese function words. On the other hand, in the case of (5), although the unknown word \" \" (decontamination) is divided into two parts by JUMAN, there is no influence on the adjacent analyses. Moreover, in case of (6), although there is no lexical entry of \" \" (Twitter), the segmentation is correct thanks to simple character-based heuristics for out-of-vocabulary (OOV) words.",
"cite_spans": [
{
"start": 377,
"end": 407,
"text": "(Kurohashi and Kawahara, 2005)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Importance of unknown word processing of each type",
"sec_num": "2.3"
},
{
"text": "These two unknown words do not contain hiragana characters, and thus, we think it is important to resolve unknown words that contain hiragana. Since unknown words derived from words in the lexicon and onomatopoeias often contain hi-ragana characters, we came to the conclusion that it is more important to resolve them than to resolve rare words and new words that often consist of katakana and Chinese characters.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Importance of unknown word processing of each type",
"sec_num": "2.3"
},
{
"text": "Much work has been done on Japanese unknown word processing. Several approaches aimed to acquire unknown words from a corpus in advance (Mori and Nagao, 1996; Murawaki and Kurohashi, 2008) and others aimed to introduce better unknown word model to morphological analyzer (Nagata, 1999; Uchimoto et al., 2001 ; Asahara and Nakagawa and Uchimoto, 2007) . However, there are few works that focus on certain types of unknown words. Kazama et al. (1999) 's work is one of them. Kazama et al. improved the morphological analyzer JUMAN to deal with the informal expressions in online chat conversations. They focused on substitution and insertion, which are also the target of this paper. However, while our approach aims to develop heuristics to flexibly search the lexicon, they expanded the lexicon, and thus their approach cannot deal with an infinite number of derivations, such as \" ,\" and \" \" for the original word \"",
"cite_spans": [
{
"start": 136,
"end": 158,
"text": "(Mori and Nagao, 1996;",
"ref_id": "BIBREF17"
},
{
"start": 159,
"end": 188,
"text": "Murawaki and Kurohashi, 2008)",
"ref_id": "BIBREF18"
},
{
"start": 271,
"end": 285,
"text": "(Nagata, 1999;",
"ref_id": "BIBREF19"
},
{
"start": 286,
"end": 307,
"text": "Uchimoto et al., 2001",
"ref_id": "BIBREF22"
},
{
"start": 322,
"end": 350,
"text": "Nakagawa and Uchimoto, 2007)",
"ref_id": "BIBREF20"
},
{
"start": 428,
"end": 448,
"text": "Kazama et al. (1999)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2.4"
},
{
"text": ".\" In addition, Ikeda et al. (2009) conducted experiments using Kazama et al.'s approach on 2,000,000 blogs, and reported that their approach made 37.2% of the sentences affected by their method worse. Therefore, we conjecture that their approach only benefits a text that is very similar to the text in online chat conversations. Kacmarcik et al. (2000) exploited the normalization rules in advance of morphological analysis, and Ikeda et al. (2009) replaced peculiar expressions with formal expressions after morphological analysis. In this research, we exploit the derivation rules and onomatopoeia patterns in morphological analysis. Owing to such a design, our system can successfully deal with rendaku, which has not been dealt with in the previous works.",
"cite_spans": [
{
"start": 16,
"end": 35,
"text": "Ikeda et al. (2009)",
"ref_id": "BIBREF7"
},
{
"start": 331,
"end": 354,
"text": "Kacmarcik et al. (2000)",
"ref_id": "BIBREF8"
},
{
"start": 431,
"end": 450,
"text": "Ikeda et al. (2009)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2.4"
},
{
"text": "UniDic dictionary (Den et al., 2008) handles orthographic and phonological variations including rendaku and informal ones. However, the number of possible variations is not restricted to a fixed number because we can insert any number of long sound symbols or lowercases into a word, and thus, all the variations cannot be covered by a dictionary. In addition, as mentioned above, since we Figure 2: Example of a word lattice with new nodes \" ,\" \" ,\" and \" .\" The broken lines indicate the added nodes and paths, and the bold lines indicate the optimal path. have to take into account the adjacent word to accurately recognize rendaku, the lexical knowledge alone is not sufficient for rendaku recognition.",
"cite_spans": [
{
"start": 18,
"end": 36,
"text": "(Den et al., 2008)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2.4"
},
{
"text": "For languages other than Japanese, there is much work on text normalization that aims to handle informal expressions in social media (Beaufort et al., 2010; Liu et al., 2012; Han et al., 2012) . However, their target languages are segmented languages such as English and French, and thus they can focus only on normalization. On the other hand, since Japanese is an unsegmented language, we have to also consider the word segmentation task.",
"cite_spans": [
{
"start": 133,
"end": 156,
"text": "(Beaufort et al., 2010;",
"ref_id": "BIBREF4"
},
{
"start": 157,
"end": 174,
"text": "Liu et al., 2012;",
"ref_id": "BIBREF14"
},
{
"start": 175,
"end": 192,
"text": "Han et al., 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2.4"
},
{
"text": "3 Proposed Method",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2.4"
},
{
"text": "We use the rule-based Japanese morphological analyzer JUMAN version 5.1 as our baseline system. Basically we only improve the method for building a word lattice and do not change the process for finding an optimal path from the lattice. That is, our proposed system only adds new nodes to the word lattice built by the baseline system by exploiting the derivation rules and onomatopoeia patterns. If the new nodes and their costs are plausible, the conventional process for finding the optimal path will select the path with added nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
{
"text": "For example, if a sentence \" .\" is input into the baseline system, it builds the word lattice that is described with solid lines in Figure 2 . However, this lattice does not include such expressions as \" \" and \" \" since they are not included in the lexicon. Our proposed system transforms the informal expressions into their standard expressions such as \" \" (delicious) and \" \" (was) by exploiting the derivation rules, adds their nodes into the word lattice, and selects the path with these added nodes.",
"cite_spans": [],
"ref_spans": [
{
"start": 132,
"end": 140,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Overview",
"sec_num": "3.1"
},
{
"text": "We deal with five types of unknown words that are derived from words in the lexicon: rendaku, substitution with long sound symbols, substitution with lowercases, insertion of long sound symbols, and insertion of lowercases. Here, we describe how to add new nodes into the word lattice.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution of unknown words derived from words in the lexicon",
"sec_num": "3.2"
},
{
"text": "Rendaku The procedure to add unvoiced nodes to deal with rendaku differs from the others. Since only the initial consonant of a word is voiced by rendaku, there is at most one possible voiced entry for each word in the lexicon. Hence, we add the voiced entries into the trie-based lexicon in advance if the original word does not satisfy any conditions that prevent rendaku such as Lyman's Law. For example, our system creates the entry \" \" (zake) from the original word \" \" (sake), and adds it into the lexicon. When the system retrieves words that start from the fourth character in the example (1) in Section 2.2, \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution of unknown words derived from words in the lexicon",
"sec_num": "3.2"
},
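{
"text": "The voiced-entry generation with the Lyman's Law check can be sketched as follows (our reconstruction of the rule described above; the kana voicing table is standard Japanese orthography, and conditions that prevent rendaku other than Lyman's Law are omitted):

VOICING = {'か': 'が', 'き': 'ぎ', 'く': 'ぐ', 'け': 'げ', 'こ': 'ご',
           'さ': 'ざ', 'し': 'じ', 'す': 'ず', 'せ': 'ぜ', 'そ': 'ぞ',
           'た': 'だ', 'ち': 'ぢ', 'つ': 'づ', 'て': 'で', 'と': 'ど',
           'は': 'ば', 'ひ': 'び', 'ふ': 'ぶ', 'へ': 'べ', 'ほ': 'ぼ'}
VOICED_OBSTRUENTS = set(VOICING.values())

def voiced_variant(reading):
    # Return the rendaku (voiced) variant of a word's kana reading,
    # or None when rendaku is blocked.
    if not reading or reading[0] not in VOICING:
        return None
    # Lyman's Law: no rendaku if the element already contains a voiced obstruent.
    if any(ch in VOICED_OBSTRUENTS for ch in reading[1:]):
        return None
    return VOICING[reading[0]] + reading[1:]

# voiced_variant('さけ') == 'ざけ'; the variant is added to the trie-based
# lexicon and accepted only as the non-initial part of a compound.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution of unknown words derived from words in the lexicon",
"sec_num": "3.2"
},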
{
"text": ",\" the added entry \" \" (zake) is retrieved. Since rendaku occurs for the initial consonant of the noninitial portion of a compound word, our system adds the retrieved word only when it is the noninitial portion of a compound word. Substitution with long sound symbols and lowercases In order to cope with substitution with long sound symbols and lowercases, our system transforms the input text into normalized strings by using simple rules. These rules substitute a long sound symbol with one of the vowel characters: ' ,' ' ,' ' ,' ' ,' and ' ,' that minimizes the difference in pronunciation. These rules also substitute lowercase characters with the corresponding uppercase characters. For example, if the sentence \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution of unknown words derived from words in the lexicon",
"sec_num": "3.2"
},
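{
"text": "The substitution rules can be sketched as candidate enumeration (our illustration: here every possible vowel is enumerated and dictionary lookup filters the candidates, whereas the paper's rules directly pick the vowel that minimizes the difference in pronunciation):

from itertools import product

SMALL_TO_LARGE = {'ぁ': 'あ', 'ぃ': 'い', 'ぅ': 'う', 'ぇ': 'え',
                  'ぉ': 'お', 'ゎ': 'わ', 'ゕ': 'か'}
LONG_SOUND = {'ー', '〜'}
VOWELS = 'あいうえお'

def substitution_normalizations(s):
    # Each long sound symbol may stand for any of the five vowels; each
    # lowercase (small) kana is restored to its regular counterpart.
    options = []
    for ch in s:
        if ch in LONG_SOUND:
            options.append(list(VOWELS))
        elif ch in SMALL_TO_LARGE:
            options.append([SMALL_TO_LARGE[ch]])
        else:
            options.append([ch])
    return {''.join(combo) for combo in product(*options)}

# 'ほんとー' yields a candidate set containing 'ほんとう', whose lexicon
# nodes are added to the lattice alongside the original string's nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution of unknown words derived from words in the lexicon",
"sec_num": "3.2"
},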
{
"text": "In order to cope with the insertion of long sound symbols and lowercases, our system transforms the input text into a normalized string using simple rules. These rules delete long sound symbols and lowercase characters that are considered to be inserted to prolong the original word pronunciation. For example, if the sentence \"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion of long sound symbols and lowercases",
"sec_num": null
},
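{
"text": "A sketch of the deletion rule (ours; note that the normalized string is only an additional source of candidate nodes, so legitimate long sound symbols, e.g., in katakana loanwords, are not lost, because the nodes from the original string remain in the lattice):

LONG_SOUND = {'ー', '〜'}
SMALL_KANA = set('ぁぃぅぇぉゎゕ')

def delete_insertions(s):
    # Produce one normalized string by dropping every long sound symbol and
    # small kana presumed to be inserted to prolong the pronunciation.
    # Partly deleted variants are deliberately not enumerated.
    return ''.join(ch for ch in s if ch not in LONG_SOUND | SMALL_KANA)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion of long sound symbols and lowercases",
"sec_num": null
},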
{
"text": "Costs for additional nodes Our system imposes small additional costs to the node generated from the normalized string to give priority to the nodes generated from the original string. We set these costs by using a small development data set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion of long sound symbols and lowercases",
"sec_num": null
},
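{
"text": "Putting the pieces together, nodes from a normalized string could be merged back into the lattice with the penalty described above, as in this sketch (ours, with a hypothetical penalty value; for simplicity it assumes a length-preserving normalization such as substitution, while insertion handling must additionally map positions back to the original string):

NORMALIZATION_PENALTY = 500  # hypothetical value; tuned on development data

def add_normalized_nodes(lattice, normalized, lexicon):
    # build_lattice is the sketch from Section 2.1.
    for i, entries in build_lattice(normalized, lexicon).items():
        for end, word, pos, cost in entries:
            lattice[i].append((end, word, pos, cost + NORMALIZATION_PENALTY))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Insertion of long sound symbols and lowercases",
"sec_num": null
},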
{
"text": "There are many onomatopoeias in Japanese. In particular, there are a lot of unfamiliar onomatopoeias in Web text. Most onomatopoeias follow limited patterns, and we thus can easily produce new onomatopoeias that follow these patterns. Hence, it seems more reasonable to recognize unknown onomatopoeias by exploiting the onomatopoeia patterns than by manually adding lexical entries for them. Therefore, our system lists onomatopoeia candidates by using onomatopoeia patterns, as shown in Tables 2 and 3 , and adds them into the word lattice. Figure 3 shows examples. The number of potential entries of onomatopoeias with repetition is large, but the candidates of onomatopoeias with repetition can be quickly searched for by using a simple string matching strategy. On the other hand, to search the candidates of onomatopoeias without repetition is a bit time consuming com-",
"cite_spans": [],
"ref_spans": [
{
"start": 488,
"end": 502,
"text": "Tables 2 and 3",
"ref_id": "TABREF5"
},
{
"start": 542,
"end": 550,
"text": "Figure 3",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Resolution of unknown onomatopoeias",
"sec_num": "3.3"
},
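{
"text": "The repetition patterns lend themselves to simple regular-expression matching, as in the following sketch (ours; the two-to-four-character unit length follows Table 2's description, and the kana character class is simplified):

import re

# 'ABAB': a unit of two to four kana repeated twice, e.g., わくわく.
ABAB = re.compile(r'([ぁ-ヺー]{2,4})\1')

def repetition_onomatopoeia_spans(sentence):
    # Candidate nodes for repetition-type onomatopoeias; each match is
    # added to the word lattice and competes with ordinary lexicon nodes.
    return [(m.start(), m.end(), m.group(0)) for m in ABAB.finditer(sentence)]

# repetition_onomatopoeia_spans('わくわくする') -> [(0, 4, 'わくわく')]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Resolution of unknown onomatopoeias",
"sec_num": "3.3"
},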
{
"text": "We used 100,000 Japanese sentences to evaluate our approach. These sentences were obtained from an open search engine infrastructure TSUB-AKI (Shinzato et al., 2008) , which included at least one hiragana character and consisted of more than twenty characters",
"cite_spans": [
{
"start": 142,
"end": 165,
"text": "(Shinzato et al., 2008)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Setting",
"sec_num": "4.1"
},
{
"text": "We first estimated the recall. Since it is too costly to create a set of data with all unknown words annotated, we made a set of data with only our target unknown words annotated. We could apply a set of regular expressions to reduce the unknown word candidates by limiting the type of unknown words. We manually annotated 100 expressions for each type, and estimated the recall.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting",
"sec_num": "4.1"
},
{
"text": "A high recall, however, does not always imply that the proposed system performs well. It might be possible that our proposed method gives bad effects on non-target words. Therefore, we also compared the whole analysis with and without the rules/patterns from the following seven aspects: 4 4 There are two major reasons why we did not use the precision, recall and F-measure metrics to evaluate the overall performance. The first reason is that to create a large set of annotated data is too costly. The second reason, which is more essential, is that there is no clear definition of Japanese 1. The number of positive changes for 100 different outputs: P 100D . 2. The number of negative changes for 100 different outputs: N 100D . 3. The number of different outputs for 100,000 sentences: D 100kS . 4. The estimated number of positive changes for 100,000 sentences: P * 100kS . 5. The estimated number of negative changes for 100,000 sentences: N * 100kS . 6. The relative increase of the nodes: Node inc. . 7. The relative loss in speed: SP loss . Different outputs indicate cases in which the systems with and without rules/patterns output a different result. First, for each type of rule/pattern, we extracted 100 different outputs and manually classified them into three categories: the system with the rules/patterns was better (positive), the system without the rules/patterns was better (negative), and both outputs were undesirable (others). When these outputs differed in word segmentation, we only compared the segmentation but did not take into account the POS tags. On the other side, when these outputs did not differ in word segmentation, we compared the POS tags. Tables 6-10 list several examples. For example, \" \" (can feel amused) in Table 6 should be analyzed as one word, but both systems with and without rules for rendaku divided it into several parts, and such a case is labeled as others.",
"cite_spans": [],
"ref_spans": [
{
"start": 1754,
"end": 1761,
"text": "Table 6",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Setting",
"sec_num": "4.1"
},
{
"text": "We counted the number of different outputs for 100,000 sentences. We then calculated the estimated numbers of positive/negative changes for the sentences by using the equations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting",
"sec_num": "4.1"
},
{
"text": "X * 100kS = D 100kS \u00d7 X 100D /100",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting",
"sec_num": "4.1"
},
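{
"text": "For concreteness, the estimation is simple arithmetic (a trivial sketch of the equation above, with illustrative numbers):

def estimate_changes(d_100ks, x_100d, sample_size=100):
    # X*_100kS = D_100kS × X_100D / 100, for X in {P, N}.
    return d_100ks * x_100d / sample_size

# e.g., 1,000 different outputs among 100,000 sentences and 90 positive
# changes among the 100 inspected differences give an estimate of 900.
print(estimate_changes(1000, 90))  # 900.0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Setting",
"sec_num": "4.1"
},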
{
"text": ". We also counted the number of created nodes in lattice and calculated the relative increase, which would affect the time for finding the optimal path from the word lattice, and measured the analysis time and calculated the relative loss in speed. Table 4 lists the recall of our system for each unknown word type with the number of words that are covered by the UniDic dictionary. Note that while our system's recall denotes the ratio of actually recognized words, the coverage of UniDic word segmentation, especially for unknown words. That is, we can accept various word boundaries. We thought it is more straight-forward and efficient to compare the differences between a baseline system and the proposed system. only denotes the number of words included in the dictionary, which can be interpreted as the upper bound of the system based on UniDic. We can confirm our system achieved high recall for each type of unknown word. Since UniDic covered 95% of unknown words of rendaku type, we would be able to improve the rendaku recognition by incorporating UniDic and our approach that takes into account the adjacent word. Except for rendaku, our system's recall was higher than the coverage of UniDic, which confirms the effectiveness of our method. Table 5 summarizes the comparison between the analyses with and without the rules/patterns. In short, our method successfully recognized all types of unknown words with few bad effects. By introducing all the derivation rules and onomatopoeia patterns, there are 4,560 improvements for 100,000 sentences with only 80 deteriorations and a 6.2% loss in speed. In particular, the derivation rules of insertion and substitution of long sound symbols and lowercases produced 3,327 improvements for 100,000 sentences at high recall values (see Table 4 ) with only 27 deteriorations and a 3.8% loss in speed. We confirmed from these results that our approaches are very effective for unknown words in informal text. Since the number of newly added nodes was small, the speed loss is considered to be derived not from the optimal path searching phase but from the lattice building phase. Table 6 lists some examples of the changed outputs by introducing the derivation rules for rendaku. As listed in Table 4 and 5, the rendaku processing produced more negative changes and the lower recall value compared with the other types. This indicates that rendaku processing is more difficult than resolving informal expressions with long sound symbols or lowercases. Since long sound symbols and lowercases rarely appear in the lexicon, there are few likely candidates other than the correct analysis. On the other hand, voiced characters often appear in the lexicon and formal text, and thus, there are many likely candidates. Table 7 lists some examples of the changed output by introducing the derivation rules for informal spelling with long sound symbols. We labeled the change of the analysis \"OK \" (It's OK) as negative because the baseline system correctly tagged the POS of \" \" unlike our proposed system, but the baseline system could not also correctly resolve the entire phrase. There was no different output that our proposed system could not resolve but the baseline system could fully resolve. Table 8 lists some examples of the changed outputs by introducing the derivation rules for informal spelling with lowercase. 
We labeled the change of the analysis \" \" (Yumi's bedclothes) as negative because the baseline system correctly segmented the postpositional particle \" \" unlike our proposed system. Again for this example, the baseline system could not correctly resolve the entire phrase. Along with the informal spelling with long sound symbols, there was no different output that our proposed system could not resolve but the baseline system could fully resolve. Table 10 : Examples of different outputs by introducing onomatopoeia patterns without repetition. Table 9 lists some examples of the changed outputs by introducing onomatopoeia patterns with repetition. Our system recognized unknown onomatopoeias with repetition at a recall of 89%, which is not very high. However, since there were several repetition expressions other than onomatopoeias, such as \" / \" (wow wow) as shown in Table 9 , we cannot lessen the cost for onomatopoeias with repetition. Table 10 lists some examples of the changed outputs by introducing onomatopoeia patterns without repetition. Our system recognized the unknown onomatopoeias without repetition at a recall of 94% and did not output anything worse than Table 11 : Classification results of unknown words that occur more than two times in KNB corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 249,
"end": 256,
"text": "Table 4",
"ref_id": "TABREF8"
},
{
"start": 1255,
"end": 1262,
"text": "Table 5",
"ref_id": null
},
{
"start": 1793,
"end": 1800,
"text": "Table 4",
"ref_id": "TABREF8"
},
{
"start": 2135,
"end": 2142,
"text": "Table 6",
"ref_id": "TABREF10"
},
{
"start": 2248,
"end": 2255,
"text": "Table 4",
"ref_id": "TABREF8"
},
{
"start": 2768,
"end": 2775,
"text": "Table 7",
"ref_id": "TABREF11"
},
{
"start": 3249,
"end": 3256,
"text": "Table 8",
"ref_id": null
},
{
"start": 3823,
"end": 3831,
"text": "Table 10",
"ref_id": "TABREF1"
},
{
"start": 3921,
"end": 3928,
"text": "Table 9",
"ref_id": null
},
{
"start": 4249,
"end": 4256,
"text": "Table 9",
"ref_id": null
},
{
"start": 4320,
"end": 4328,
"text": "Table 10",
"ref_id": "TABREF1"
},
{
"start": 4554,
"end": 4562,
"text": "Table 11",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Setting",
"sec_num": "4.1"
},
{
"text": "In order to approximate the practical coverage of our method, we classified unknown words that occur more than two times in the Kyoto University and NTT Blog (KNB) corpus 5 into four types: words that are covered by the lexicon created by Murawaki and Kurohashi (2008) (Murawaki's Lexicon) , words that are not covered by Murawaki's Lexicon but have entries in Wikipedia, words that are covered only by our method, and the others. Table 11 shows the results. There are total 645 tokens of unknown words that occur more that two times in KNB corpus, 105 of which are newly covered by our method. Since the number of tokens that are covered by neither Murawaki's Lexicon nor Wikipedia is only 187, we can say that the coverage of our method is not trivial.",
"cite_spans": [
{
"start": 239,
"end": 268,
"text": "Murawaki and Kurohashi (2008)",
"ref_id": "BIBREF18"
},
{
"start": 269,
"end": 289,
"text": "(Murawaki's Lexicon)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 431,
"end": 439,
"text": "Table 11",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "4.2"
},
{
"text": "We presented a simple approach to unknown word processing in Japanese morphological analysis. Our approach introduced derivation rules and onomatopoeia patterns, and correctly recognized certain types of unknown words. Our experimental results on Web text revealed that our approach could recognize about 4,500 unknown words for 100,000 Web sentences with only 80 harmful side effects and a 6% loss in speed. We plan to apply our approach to machine learning-based morphological analyzers, such as MeCab, with Uni-Dic dictionary, which handles orthographic and phonological variations, in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "Four different character types are used in Japanese: hiragana, katakana, Chinese characters, and Roman alphabet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In this paper, we call the following characters lowercase: ' ,' ' ,' ' ,' ' ,' ' ,' ' ,' and ' .'",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The KNB corpus consists 4,186 sentences from Japanese blogs, and is available at http://nlp.kuee.kyoto-u.ac.jp/kuntt/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Extended models and tools for high-performance partof-speech tagger",
"authors": [
{
"first": "Masayuki",
"middle": [],
"last": "Asahara",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of COLING'00",
"volume": "",
"issue": "",
"pages": "21--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masayuki Asahara and Yuji Matsumoto. 2000. Ex- tended models and tools for high-performance part- of-speech tagger. In Proc. of COLING'00, pages 21-27.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Japanese unknown word identification by characterbased chunking",
"authors": [],
"year": null,
"venue": "Proc. of COLING'04",
"volume": "",
"issue": "",
"pages": "459--465",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Japanese unknown word identification by character- based chunking. In Proc. of COLING'04, pages 459-465.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Japanese unknown word processing using conditional random fields",
"authors": [
{
"first": "Ai",
"middle": [],
"last": "Azuma",
"suffix": ""
},
{
"first": "Masayuki",
"middle": [],
"last": "Asahara",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2006,
"venue": "Proc. of IPSJ SIG Notes NL-173-11",
"volume": "",
"issue": "",
"pages": "67--74",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ai Azuma, Masayuki Asahara, and Yuji Matsumoto. 2006. Japanese unknown word processing using conditional random fields (in Japanese). In Proc. of IPSJ SIG Notes NL-173-11, pages 67-74.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A hybrid rule/model-based finite-state framework for normalizing sms messages",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Beaufort",
"suffix": ""
},
{
"first": "Sophie",
"middle": [],
"last": "Roekhaut",
"suffix": ""
},
{
"first": "Louise-Am\u00e9lie",
"middle": [],
"last": "Cougnon",
"suffix": ""
},
{
"first": "C\u00e9drick",
"middle": [],
"last": "Fairon",
"suffix": ""
}
],
"year": 2010,
"venue": "Proc. of ACL'10",
"volume": "",
"issue": "",
"pages": "770--779",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Beaufort, Sophie Roekhaut, Louise-Am\u00e9lie Cougnon, and C\u00e9drick Fairon. 2010. A hybrid rule/model-based finite-state framework for normal- izing sms messages. In Proc. of ACL'10, pages 770- 779.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A proper approach to Japanese morphological analysis: Dictionary, model, and evaluation",
"authors": [
{
"first": "Yasuharu",
"middle": [],
"last": "Den",
"suffix": ""
},
{
"first": "Junpei",
"middle": [],
"last": "Nakamura",
"suffix": ""
},
{
"first": "Toshinobu",
"middle": [],
"last": "Ogiso",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Ogura",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of LREC'08",
"volume": "",
"issue": "",
"pages": "1019--1024",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yasuharu Den, Junpei Nakamura, Toshinobu Ogiso, and Hideki Ogura. 2008. A proper approach to Japanese morphological analysis: Dictionary, model, and evaluation. In Proc. of LREC'08, pages 1019-1024.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Automatically constructing a normalisation dictionary for microblogs",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc.of EMNLP-CoNLL'12",
"volume": "",
"issue": "",
"pages": "421--432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Han, Paul Cook, and Timothy Baldwin. 2012. Automatically constructing a normalisation dictio- nary for microblogs. In Proc.of EMNLP-CoNLL'12, pages 421-432.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Unsupervised text normalization approach for morphological analysis of blog documents",
"authors": [
{
"first": "Kazushi",
"middle": [],
"last": "Ikeda",
"suffix": ""
},
{
"first": "Tadashi",
"middle": [],
"last": "Yanagihara",
"suffix": ""
}
],
"year": 2009,
"venue": "Proc. of Australasian Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "401--411",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kazushi Ikeda, Tadashi Yanagihara, Kazunori Mat- sumoto, and Yasuhiro Takishima. 2009. Unsuper- vised text normalization approach for morphological analysis of blog documents. In Proc. of Australasian Conference on Artificial Intelligence, pages 401- 411.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Robust segmentation of japanese text into a lattice for parsing",
"authors": [
{
"first": "Gary",
"middle": [],
"last": "Kacmarcik",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Brockett",
"suffix": ""
},
{
"first": "Hisami",
"middle": [],
"last": "Suzuki",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. of COLING'00",
"volume": "",
"issue": "",
"pages": "390--396",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gary Kacmarcik, Chris Brockett, and Hisami Suzuki. 2000. Robust segmentation of japanese text into a lattice for parsing. In Proc. of COLING'00, pages 390-396.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Morphological analysis for japanese web chat",
"authors": [
{
"first": "Jun'ichi",
"middle": [],
"last": "Kazama",
"suffix": ""
},
{
"first": "Yutaka",
"middle": [],
"last": "Mitsuishi",
"suffix": ""
},
{
"first": "Takaki",
"middle": [],
"last": "Makino",
"suffix": ""
},
{
"first": "Kentaro",
"middle": [],
"last": "Torisawa",
"suffix": ""
},
{
"first": "Koich",
"middle": [],
"last": "Matsuda",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of 5th Annual Meetings of the Japanese Association for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "509--512",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jun'ichi Kazama, Yutaka Mitsuishi, Makino Takaki, Kentaro Torisawa, Koich Matsuda, and Jun'ichi Tsujii. 1999. Morphological analysis for japanese web chat (in Japanese). In Proc. of 5th Annual Meet- ings of the Japanese Association for Natural Lan- guage Processing, pages 509-512.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Applying conditional random fields to japanese morphological analysis",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Kaoru",
"middle": [],
"last": "Yamamoto",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2004,
"venue": "Proc. of EMNLP'04",
"volume": "",
"issue": "",
"pages": "230--237",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo, Kaoru Yamamoto, and Yuji Matsumoto. 2004. Applying conditional random fields to japanese morphological analysis. In Proc. of EMNLP'04, pages 230-237.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "MeCab: Yet Another Partof-Speech and Morphological Analyzer",
"authors": [
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taku Kudo, 2006. MeCab: Yet Another Part- of-Speech and Morphological Analyzer.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Improvements of Japanese morphological analyzer JUMAN",
"authors": [
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
},
{
"first": "Toshihisa",
"middle": [],
"last": "Nakamura",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1994,
"venue": "Proc. of The International Workshop on Sharable Natural Language Resources",
"volume": "",
"issue": "",
"pages": "22--38",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadao Kurohashi, Toshihisa Nakamura, Yuji Mat- sumoto, , and Makoto Nagao. 1994. Improvements of Japanese morphological analyzer JUMAN. In Proc. of The International Workshop on Sharable Natural Language Resources, pages 22-38.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "A broad-coverage normalization system for social media language",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Fuliang",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proc. of ACL'12",
"volume": "",
"issue": "",
"pages": "1035--1044",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Liu, Fuliang Weng, and Xiao Jiang. 2012. A broad-coverage normalization system for social me- dia language. In Proc. of ACL'12, pages 1035-1044.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "The change from surd to sonant in Japanese compounds. Philadelphia : Oriental Club of Philadelphia",
"authors": [
{
"first": "Benjamin",
"middle": [],
"last": "Smith",
"suffix": ""
},
{
"first": "Lyman",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Benjamin Smith Lyman. 1894. The change from surd to sonant in Japanese compounds. Philadelphia : Oriental Club of Philadelphia.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Chasen: Morphological analyzer version 2.4.0 user's manual",
"authors": [
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
},
{
"first": "Kazuma",
"middle": [],
"last": "Takaoka",
"suffix": ""
},
{
"first": "Masayuki",
"middle": [],
"last": "Asahara",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuji Matsumoto, Kazuma Takaoka, and Masayuki Asa- hara. 2007. Chasen: Morphological analyzer ver- sion 2.4.0 user's manual.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Word extraction from corpora and its part-of-speech estimation using distributional analysis",
"authors": [
{
"first": "Shinsuke",
"middle": [],
"last": "Mori",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Nagao",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. of COL-ING'96",
"volume": "",
"issue": "",
"pages": "1119--1122",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shinsuke Mori and Makoto Nagao. 1996. Word ex- traction from corpora and its part-of-speech estima- tion using distributional analysis. In Proc. of COL- ING'96, pages 1119-1122.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Online acquisition of Japanese unknown morphemes using morphological constraints",
"authors": [
{
"first": "Yugo",
"middle": [],
"last": "Murawaki",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of EMNLP'08",
"volume": "",
"issue": "",
"pages": "429--437",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yugo Murawaki and Sadao Kurohashi. 2008. Online acquisition of Japanese unknown morphemes using morphological constraints. In Proc. of EMNLP'08, pages 429-437.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A part of speech estimation method for japanese unknown words using a statistical model of morphology and context",
"authors": [
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. of ACL'99",
"volume": "",
"issue": "",
"pages": "277--284",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masaaki Nagata. 1999. A part of speech estimation method for japanese unknown words using a statis- tical model of morphology and context. In Proc. of ACL'99, pages 277-284.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A hybrid approach to word segmentation and pos tagging",
"authors": [
{
"first": "Tetsuji",
"middle": [],
"last": "Nakagawa",
"suffix": ""
},
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of ACL'07",
"volume": "",
"issue": "",
"pages": "217--220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tetsuji Nakagawa and Kiyotaka Uchimoto. 2007. A hybrid approach to word segmentation and pos tag- ging. In Proc. of ACL'07, pages 217-220.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Tsubaki: An open search engine infrastructure for developing new information access methodology",
"authors": [
{
"first": "Keiji",
"middle": [],
"last": "Shinzato",
"suffix": ""
},
{
"first": "Tomohide",
"middle": [],
"last": "Shibata",
"suffix": ""
},
{
"first": "Daisuke",
"middle": [],
"last": "Kawahara",
"suffix": ""
},
{
"first": "Chikara",
"middle": [],
"last": "Hashimoto",
"suffix": ""
},
{
"first": "Sadao",
"middle": [],
"last": "Kurohashi",
"suffix": ""
}
],
"year": 2008,
"venue": "Proc. of IJCNLP'08",
"volume": "",
"issue": "",
"pages": "189--196",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Keiji Shinzato, Tomohide Shibata, Daisuke Kawahara, Chikara Hashimoto, and Sadao Kurohashi. 2008. Tsubaki: An open search engine infrastructure for developing new information access methodology. In Proc. of IJCNLP'08, pages 189-196.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "The unknown word problem: a morphological analysis of japanese using maximum entropy aided by a dictionary",
"authors": [
{
"first": "Kiyotaka",
"middle": [],
"last": "Uchimoto",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "Hitoshi",
"middle": [],
"last": "Isahara",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. of EMNLP'01",
"volume": "",
"issue": "",
"pages": "91--99",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kiyotaka Uchimoto, Satoshi Sekine, and Hitoshi Isa- hara. 2001. The unknown word problem: a mor- phological analysis of japanese using maximum en- tropy aided by a dictionary. In Proc. of EMNLP'01, pages 91-99.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"num": null,
"text": "Example of word lattice. The bold lines indicate the optimal path.",
"type_str": "figure"
},
"FIGREF2": {
"uris": null,
"num": null,
"text": "Examples of a word lattice with new nodes of onomatopoeia. The broken lines indicate the added nodes and paths, and the bold lines indicate the optimal path. While the optimal path includes the added node in the upper example, it does not in the lower example.",
"type_str": "figure"
},
"TABREF1": {
"num": null,
"html": null,
"content": "<table/>",
"text": "Various types of Japanese unknown words. The '*' denotes that this type is the target of this research. See Section 2.2 for more details.",
"type_str": "table"
},
"TABREF5": {
"num": null,
"html": null,
"content": "<table><tr><td>Pattern</td><td>Example</td><td>Transliteration</td></tr><tr><td>H1 H2</td><td/><td>pokkori</td></tr><tr><td>K1 K2</td><td/><td>mattari</td></tr><tr><td>H 1 H2Y</td><td/><td>pecchari</td></tr><tr><td>K1 K2Y</td><td/><td>pocchari</td></tr><tr><td>K1K2</td><td/><td>chiratto</td></tr><tr><td>K1K2</td><td/><td>pakitto</td></tr></table>",
"text": "Onomatopoeia patterns with repetition and their examples. 'A,' 'B,' 'C,' and 'D' denote either hiragana or katakana. We consider only repetitions of two to four characters.",
"type_str": "table"
},
"TABREF6": {
"num": null,
"html": null,
"content": "<table><tr><td>: Onomatopoeia patterns without repetition</td></tr><tr><td>and their examples. 'H,' denotes the hiragana, 'K'</td></tr><tr><td>denotes the katakana, and 'Y' denotes the palatal-</td></tr><tr><td>ized consonants such as ' .'</td></tr></table>",
"text": "",
"type_str": "table"
},
"TABREF8": {
"num": null,
"html": null,
"content": "<table/>",
"text": "Recall of our system and the coverage of UniDic.",
"type_str": "table"
},
"TABREF10": {
"num": null,
"html": null,
"content": "<table><tr><td colspan=\"2\">Our approach</td><td colspan=\"2\">Baseline</td><td>Gold standard</td></tr><tr><td colspan=\"2\">Positive (insertion)</td><td/><td/></tr><tr><td>Input:</td><td/><td colspan=\"3\">(a bitter experiment)</td></tr><tr><td>/</td><td/><td colspan=\"2\">/ / /</td><td>/</td></tr><tr><td colspan=\"3\">Positive (substitution)</td><td/></tr><tr><td>Input:</td><td/><td colspan=\"2\">(congratulations)</td></tr><tr><td/><td/><td>/</td><td>/ /</td></tr><tr><td colspan=\"3\">Negative (substitution)</td><td/></tr><tr><td colspan=\"2\">Input: OK</td><td colspan=\"2\">(It's OK)</td></tr><tr><td>OK/ /</td><td>/</td><td colspan=\"2\">OK/ / / /</td><td>OK/ /</td></tr><tr><td colspan=\"2\">Others (insertion)</td><td/><td/></tr><tr><td>Input:</td><td/><td colspan=\"2\">(very luxury)</td></tr><tr><td>/</td><td>/</td><td>/</td><td/><td>/</td></tr></table>",
"text": "Examples of different outputs by introducing the derivation rule for rendaku. The '/' denotes the boundary between words in the corresponding analysis, and the bold font indicates the correct output, that is, the output is the same as the gold standard.",
"type_str": "table"
},
"TABREF11": {
"num": null,
"html": null,
"content": "<table/>",
"text": "Examples of different outputs by introducing derivation rules for long sound symbol substitution and insertion.",
"type_str": "table"
}
}
}
}