{
"paper_id": "I13-1015",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:15:13.294331Z"
},
"title": "Chinese Informal Word Normalization: an Experimental Study",
"authors": [
{
"first": "Aobo",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {},
"email": "wangaobo@comp.nus.edu.sg"
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Andrade",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NEC Corporation",
"location": {
"settlement": "Nara",
"country": "Japan"
}
},
"email": "s-andrade@cj"
},
{
"first": "Takashi",
"middle": [],
"last": "Onishi",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NEC Corporation",
"location": {
"settlement": "Nara",
"country": "Japan"
}
},
"email": "t-onishi@bq"
},
{
"first": "Kai",
"middle": [],
"last": "Ishikawa",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NEC Corporation",
"location": {
"settlement": "Nara",
"country": "Japan"
}
},
"email": "k-ishikawa@dq.jp.nec.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "We study the linguistic phenomenon of informal words in the domain of Chinese microtext and present a novel method for normalizing Chinese informal words to their formal equivalents. We formalize the task as a classification problem and propose rule-based and statistical features to model three plausible channels that explain the connection between formal and informal pairs. Our two-stage selection-classification model is evaluated on a crowdsourced corpus and achieves a normalization precision of 89.5% across the different channels, significantly improving the state-of-the-art. * This research is done in part during Aobo Wang's internship in NEC Corporation.",
"pdf_parse": {
"paper_id": "I13-1015",
"_pdf_hash": "",
"abstract": [
{
"text": "We study the linguistic phenomenon of informal words in the domain of Chinese microtext and present a novel method for normalizing Chinese informal words to their formal equivalents. We formalize the task as a classification problem and propose rule-based and statistical features to model three plausible channels that explain the connection between formal and informal pairs. Our two-stage selection-classification model is evaluated on a crowdsourced corpus and achieves a normalization precision of 89.5% across the different channels, significantly improving the state-of-the-art. * This research is done in part during Aobo Wang's internship in NEC Corporation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Microtext -including microblogs, comments, SMS, chat and instant messaging (collectively referred to as microtext by Gouwset et al. (2011) or network informal language by Xia et al. (2005) )is receiving a larger research focus from the computational linguistic community. A key challenge is the presence of informal words -terms that manifest as ad hoc abbreviations, neologisms, unconventional spellings and phonetic substitutions. This phenomenon is so prevalent a challenge in Chinese microtext that the dual problems of informal word recognition and normalization deserve research. Given the close connection between an informal word and its formal equivalent, the restoration (normalization) of an informal word to its formal one is an important pre-processing step for NLP tasks that rely on string matching or word frequency statistics (Han et al., 2012) .",
"cite_spans": [
{
"start": 117,
"end": 138,
"text": "Gouwset et al. (2011)",
"ref_id": null
},
{
"start": 171,
"end": 188,
"text": "Xia et al. (2005)",
"ref_id": "BIBREF19"
},
{
"start": 843,
"end": 861,
"text": "(Han et al., 2012)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It is important to note that simply re-training models trained on formal text or annotated microtext is insufficient: user-generated microtexts exhibit markedly different orthographic and syntactic constraints compared to their formal equivalents. For example, consider the informal microtext \"\u00b3 \u00f9 > \" (formally, \"OE > \";\"harmonious society\"). A machine translation system may mistranslate it literally as \"crab community\" based on the meaning of its component words, if it lacks knowledge of the informal word \"\u00b3\u00f9\" (\"OE \" ; \"harmonious\"). It is thus desirable to normalize informal words to their standard formal equivalents before proceeding with standard text processing workflows.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this work, we present a novel method for normalizing informal word to their formal equivalents. Specifically, given an informal word with its context as input, we generate hypotheses for its formal equivalents by searching the Google Web 1T corpus (Brants and Franz, 2006) . Prospective informal-formal pairs are further classified by a supervised binary classifier to identify correct pairs. In the classification model, we incorporate both rule-based and statistical feature functions that are learned from both gold-standard annotation and formal domain synonym dictionaries. Also importantly, our method does not directly use words or lexica as features, keeping the learned model small yet robust to inevitable vocabulary change.",
"cite_spans": [
{
"start": 251,
"end": 275,
"text": "(Brants and Franz, 2006)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate our system on a crowdsourced corpus, achieving good performance with a normalization precision of 89.5%. We also show that the method can be effectively adapted to tackle the synonym acquisition task in the formal domain. To our best knowledge, this is the first work to systematically explore the informal word phenomenon in Chinese microtext. By using a formal domain corpus, we introduce a method that effectively normalizes Chinese informal words through different, independent channels.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Previous works that address a similar task includes the study on abbreviations with their definitions (e.g., (Park and Byrd, 2001; Chang and Teng, 2006; Li and Yarowsky, 2008b) ), abbreviations and acronyms in medical domain (Pakhomov, 2002) , and transliteration (e.g., (Wu and Chang, 2007; Zhang et al., 2010; Bhargava and Kondrak, 2011) ). These works dealt with such relations in formal text, but as we earlier argued, similar processing in the informal domain is quite different.",
"cite_spans": [
{
"start": 109,
"end": 130,
"text": "(Park and Byrd, 2001;",
"ref_id": "BIBREF12"
},
{
"start": 131,
"end": 152,
"text": "Chang and Teng, 2006;",
"ref_id": "BIBREF3"
},
{
"start": 153,
"end": 176,
"text": "Li and Yarowsky, 2008b)",
"ref_id": "BIBREF10"
},
{
"start": 225,
"end": 241,
"text": "(Pakhomov, 2002)",
"ref_id": "BIBREF11"
},
{
"start": 271,
"end": 291,
"text": "(Wu and Chang, 2007;",
"ref_id": "BIBREF18"
},
{
"start": 292,
"end": 311,
"text": "Zhang et al., 2010;",
"ref_id": "BIBREF21"
},
{
"start": 312,
"end": 339,
"text": "Bhargava and Kondrak, 2011)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Probably the most related work to our method is Li and Yarowsky (2008a) 's work. They tackle the problem of identifying informal-formal Chinese word pairs in the Web domain. They employ the Baidu 1 search engine to obtain definition sentences -sentences that define or explain Chinese informal words with formal ones -from which the pairs are extracted and further ranked using a conditional log-linear model. Their method only works for definition sentences, where the assumption that the formal and informal equivalents cooccur nearby holds. However, this assumption does not hold in general social network microtext, as people often directly use informal words without any explanations or definitions.",
"cite_spans": [
{
"start": 48,
"end": 71,
"text": "Li and Yarowsky (2008a)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "While seminal, Li and Yarowsky's method has other shortcomings. Relying on a search engine, the system recovers only highly frequent and conventional informal words that have been defined on the web, relying heavily on the quality of Baidu's index. In addition, the features they proposed are limited to rule-based features and ngram frequency, which does not permit their system to explain how the informal-formal word pair is related (i.e., derived by which channel).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Normalizing informal words is another focus area in related work. An important channel for informal-formal mapping (as we review in detail later) is phonetic substitution. In work on Chinese, this is often done by measuring the Pinyin similarity 2 between an informal-formal pair. Li and Yarowsky (2008a) computed the Levenshtein distance (LD) on the Pinyin of the two words in the pair to reflect the phonetic similarity. However, as a general string metric, LD does not capture the (dis-)similarity between two Pinyin pronunciations well as it is too coarse-grained. To overcome this shortcoming, Xia et al. (2008) propose a source channel model that is extended with phonetic mapping rules. They evaluated the model on manually-annotated phonetically similar informal-formal pairs. The disadvantage is that these rules need to be manually created and tuned. For example, Sim(chi, qi) is calculated as Sim(ch, q) * Sim(i, i) (here, \"ch\" and \"q\" are Pinyin initials and \"i\" is a Pinyin final, as per convention), in which Sim(ch, q) = 0.8 and Sim(i, i) = 1.0 are defined manually by the annotators. As informal words and their usage in microtext continually evolve, they noted that it is difficult for annotators to accurately weigh the similarities for all pronunciation pairs. We concur that the labor of manually tuning weights is unnecessary, given annotated informal-formal pairs. Finally, we make the key observation that the similarity of initial and final pairs are not independent, but may vary contextually. As such, a decomposition of Sim(chi, qi) as Sim(ch, q) * Sim(i, i) may not be wholly accurate.",
"cite_spans": [
{
"start": 281,
"end": 304,
"text": "Li and Yarowsky (2008a)",
"ref_id": "BIBREF9"
},
{
"start": 599,
"end": 616,
"text": "Xia et al. (2008)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To tackle these problems as a whole, we propose a two-step solution to the normalization task, which involves formal candidate generation followed by candidate classification. Our pipeline relaxes the strong assumptions described by prior work and achieves significant improvement over the previous state-of-the-art.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To bootstrap our work, we analyzed sample Chinese microtext, hoping to gain insight on how informal words relate to their formal counterparts. To do this, we first needed to compile a corpus of microtext and annotate them.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Analysis",
"sec_num": "3"
},
{
"text": "We utilized the Chinese social media archive, PrEV (Cui et al., 2012) , to obtain Chinese microblog posts from the public timeline of Sina Weibo 3 , the most popular Chinese microtext site with over half a billion users. To assemble a corpus for annotation, we first followed the convention from (Wang et al., 2012) to preprocess and label URLs, emoticons, \"@usernames\" and Hashtags as pre-defined words. We then employed Zhubajie 4 , one of China's largest crowdsourcing platforms to obtain third-party (i.e., not by the original author of the microtext) annotations for any informal words, as well as their normalization, sentiment and motivation for its use (Wang et al., 2010) . Our coarse-grained sentiment annotations use the three categories of \"positive\", \"neutral\" and \"negative\". Motivation is likewise annotated with the seven categories listed in Table 1: to avoid (politically) sensitive words 17.8% to be humorous 29.2% to hedge criticism using euphemisms 12.1% to be terse 25.4% to exaggerate the post's mood or emotion 10.5% others 5.0% Table 1 : Categories used for motivation annotation, shown with their observed distribution.",
"cite_spans": [
{
"start": 51,
"end": 69,
"text": "(Cui et al., 2012)",
"ref_id": "BIBREF4"
},
{
"start": 296,
"end": 315,
"text": "(Wang et al., 2012)",
"ref_id": "BIBREF16"
},
{
"start": 661,
"end": 680,
"text": "(Wang et al., 2010)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 859,
"end": 867,
"text": "Table 1:",
"ref_id": null
},
{
"start": 1053,
"end": 1060,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Analysis",
"sec_num": "3"
},
{
"text": "In total, we spent US$110 to annotate a subset of 5, 500 posts (12, 446 sentences), in which 1, 658 unique informal words were annotated. Each post was annotated by three annotators where conflicts were resolved by simple majority. Annotations were completed after a five-week span and are publicly available 5 for comparative study.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Analysis",
"sec_num": "3"
},
{
"text": "From our observation of the annotated informalformal word pairs, we identified three key channels through which the majority of informal words originate, summarized in Table 2 . Here, the first column describes these channels, giving each channel's observed frequency distribution as a percentage. Together, they account for about 94% of the channels by which informal words originate. The final \"Motivation (%)\" column also gives the distributional breakdown of motivations behind each of the channels as annotated by our crowdsourced annotators. We now discuss each channel.",
"cite_spans": [],
"ref_spans": [
{
"start": 168,
"end": 175,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Feature Analysis",
"sec_num": "3.1"
},
{
"text": "Phonetic Substitutions form the most wellknown channel where the resultant informal words are pronounced similar to their formal counterparts. It is also the channel responsible for most informal word derivation. It has been reported to account for 49.1% (Li and Yarowsky, 2008a) in the Web domain and for 99% in Chinese chats (Xia et al., 2006) . In our study of the microtext domain, we found it to be responsible for 63% (Table 2). As highlighted in bold in the table, normalization in this channel is realized by a charactercharacter Pinyin mapping. An interesting special case occurs when the Chinese characters are substituted for Latin alphabets, where the alphabets form a Pinyin acronym. In these cases, each letter maps to a Pinyin initial (e.g., \"bs\" \u2192 'b\"+ \"s\" \u2192 \"bi\" + \"shi\" ( AE(bi shi); \"to despise\")), each of which maps to a single Chinese character. As such, we view this special case as also following the character-character mapping.",
"cite_spans": [
{
"start": 255,
"end": 279,
"text": "(Li and Yarowsky, 2008a)",
"ref_id": "BIBREF9"
},
{
"start": 327,
"end": 345,
"text": "(Xia et al., 2006)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Feature Analysis",
"sec_num": "3.1"
},
{
"text": "We found that phonetic subsitutions are motivated by different intents. Slightly over half of the words are used to be humorous. This resonates well with the informal context of many microtexts, such that authors take advantage of expressing their humor through lexical choice. Another large group (28.9%) of informal words are variations of politically sensitive words (e.g., the names of politicians, religious movements and events), whose formal counterparts are often forbidden and censored by search engines or Chinese government officials. Netizens often create such phonetically equivalent or close variations to express themselves and communicate with others on such issues. An additional 18.7% of such word pairs are used euphemistically to avoid the usage of their harsher, formal equivalents. The remaining substitutions are explainable as typographical errors, transliterations, among other sources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Feature Analysis",
"sec_num": "3.1"
},
{
"text": "The Abbreviation channel contains informal words that are shortenings of formal words. Normalizing these informal words is equivalent to expanding short forms to corresponding full forms. As suggested by Chang and Teng (2006) , we also agree that Chinese abbreviation expansion can be modeled as character-word mapping. The statistics in Table 2 suggest 19% of informal words come from this channel, and are used to save space and to make communication efficient, especially given the format and length limitations in microtext.",
"cite_spans": [
{
"start": 204,
"end": 225,
"text": "Chang and Teng (2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [
{
"start": 338,
"end": 345,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Feature Analysis",
"sec_num": "3.1"
},
{
"text": "Paraphrases mark informal words that are created by a mixture of paraphrasing, abbreviating and combining existing formal words. We observe that the informal manifestation usually do not retain any of the original characters in their formal equivalents, but still retain the same meaning as a single formal word, or two meanings combined from two formal words. These words are created to enhance emotional response in an exaggerated (66.3%) and/or terse (27.3%) manner. For example in Table 2 , \"\u00d9\u203a\" as a whole comes from the paraphrase of the single formal word \"\u02c6\u00d2\", sharing the meaning of \"awesome\". As another example, \"\u00b4W\" (\"very embarrassed\") originates from two sources: \"\u00b4\" meaning \"A \" (\"very\") and \"W\" meaning \"4,\" (\"embarrassed\"). From this observation, we feel that both character-word and word-word mappings may adequately model the normalization process for this channel.",
"cite_spans": [],
"ref_spans": [
{
"start": 485,
"end": 492,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data Feature Analysis",
"sec_num": "3.1"
},
{
"text": "Drawing on our observations, we propose a two step generation-classification model for informal word normalization. We first generate potential formal candidates for an input informal word by combing through the Google 1T corpus. This step is fast and generates a large, prospective set of candidates which are input to a second, subsequent classification. The subsequent classification is a binary yes/no classifier that takes both rule-based and statistical features derived from our identified three major channels to identify valid formal candidates.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},
{
"text": "Note that an informal word O (here, O for observation), even when used in a specific, windowed context C(O), may have several different equivalent normalizations T (here, T for target). This occurs in the abbreviation (L 8 as (L b or L ) 8 ) and paraphrase (\u00d9\u203a\u02c6\u00d2 or} or \u2030\u00b3) channels, where synonymous formal words are equivalent. In the case where an informal word is explanable as a phonetic substitution, only one formal form is viable. Our classification model caters for these multiple explanations. Figure 1 illustrates the framework of the pro-posed approach. Given an input Chinese microblog post, we first segment the sentences into words and recognize informal words leveraging the approach proposed in (Wang and Kan, 2013) . For each recognized informal word O, we search the Chinese portion of the Google Web1T corpus using lexical patterns, obtaining n potential formal (normalized) candidates. Taking the informal word O, its occurrence context C(O), and the formal candidate T together, we generate feature vectors for each three-tuple, i.e., < O, C(O), T > 6 , consisting of both rule-based and statistical features. These features are used in a supervised binary classifier to render the final yes (informalinformal pair) or no (not an appropriate formal word explanation for the given informal word) decision.",
"cite_spans": [
{
"start": 712,
"end": 732,
"text": "(Wang and Kan, 2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 504,
"end": 512,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Methodology",
"sec_num": "4"
},
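As a concrete illustration of the data flow just described, the following is a minimal Python sketch (not from the paper) of the < O, C(O), T > three-tuples and the final binary decision; the names Instance, featurize and classifier are hypothetical placeholders.

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical container for one <O, C(O), T> three-tuple: an informal word,
# its windowed context, and one formal candidate drawn from the Web 1T search.
@dataclass
class Instance:
    O: str                        # the recognized informal word
    context: List[str]            # C(O): up to 3 segmented words on each side of O
    T: str                        # one formal (normalized) candidate
    label: Optional[bool] = None  # gold Y/N annotation, when available

def normalize(instances, featurize, classifier):
    """Keep only the (O, T) pairs that the binary classifier accepts."""
    return [(inst.O, inst.T) for inst in instances
            if classifier.predict([featurize(inst)])[0]]
```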
{
"text": "As an initial step, we can recognize informal words and segment the Chinese words in the sentence by applying joint inference based on a Factorial Conditional Random Field (FCRF) methodology (Wang and Kan, 2013) . However, as our focus in this work is on the normalization task, we use the manually-annotated gold standard informal words (O) and their formal equivalents (T ) provided in our annotated dataset. To derive the informal words' context C(O), we use the automatically-acquired output of the preprocessing FCRF, although noisy and a source of error.",
"cite_spans": [
{
"start": 191,
"end": 211,
"text": "(Wang and Kan, 2013)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pre-Processing",
"sec_num": "4.1"
},
{
"text": "Given the two-tuple < O, C(O) > generated from pre-processing, we produce a set of hypotheses |T | which are formal candidates corresponding to O. We use two assumptions to guide us in the selection of prospective formal equivalents of O. We first discuss Assumption 1 (as [A1]):",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Candidate Generation",
"sec_num": "4.2"
},
{
"text": "[A1] The informal word and its formal equivalents share similar contextual collocations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Candidate Generation",
"sec_num": "4.2"
},
{
"text": "To implement [A1], we define several regular expression patterns to search the Chinese Web 1T corpus, as listed in Table 3 . All entries that match at least one of the five rules are collected as formal candidates. Specifically, W * refers to the word in context C(O). T denotes any Chinese candidate word, andT a word sharing at least one character in common with the informal word O. Our assumption is similar to the notion used for paraphrasing: that the informal version can be substituted for its formal equivalent(s), such that the original sentence's semantics is preserved in the new sentence. For example, in the phrase \"\u00fa \u00be \u00b3\u00f9 > \", the informal word \"\u00b3\u00f9\" is exactly equivalent to its formal equivalent \"OE \", as the resulting phrase \"\u00fa\u00be OE > \" (\"build the harmonious society\") carries exactly the same semantics. This is inferrable when both the informal word O and the candidate share the same contextual collocations of \"\u00fa\u00be\" and \"> \".",
"cite_spans": [],
"ref_spans": [
{
"start": 115,
"end": 122,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Formal Candidate Generation",
"sec_num": "4.2"
},
{
"text": "W \u22121 T W 1 W \u22122 W \u22121 T T W 1 W 2 W \u22121TT W 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Candidate Generation",
"sec_num": "4.2"
},
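To make the pattern matching concrete, here is a minimal sketch of [A1]-style candidate collection, assuming the Chinese Web 1T n-grams are available as an iterable of (tokens, count) tuples; web1t_ngrams and the two patterns shown are illustrative, not the authors' complete rule set.

```python
def shares_character(candidate, informal):
    # ~T: a candidate sharing at least one character with the informal word O.
    return any(ch in candidate for ch in informal)

def generate_candidates(w_left, w_right, informal, web1t_ngrams):
    """Collect formal candidates T whose n-grams match W-1 T W1-style patterns."""
    candidates = set()
    for tokens, _count in web1t_ngrams:
        # Pattern W-1 T W1: a candidate flanked by the informal word's neighbors.
        if len(tokens) == 3 and tokens[0] == w_left and tokens[2] == w_right:
            candidates.add(tokens[1])
        # Pattern W-1 ~T ~T W1: a two-word candidate sharing characters with O.
        if (len(tokens) == 4 and tokens[0] == w_left and tokens[3] == w_right
                and shares_character(tokens[1] + tokens[2], informal)):
            candidates.add(tokens[1] + tokens[2])
    return candidates
```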
{
"text": "As the Web1T corpus consists of n-grams taken from approximately one trillion words indexed from Chinese web pages, queries for each informal word O can return long result lists of up to 20,000 candidates. To filter noise from the resulting candidates, we adopt Assumption 2 [A2]:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Candidate Generation",
"sec_num": "4.2"
},
{
"text": "[A2] Both the original informal word in its context -as well as the substitued formal word within the same context -are frequent in the general domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Candidate Generation",
"sec_num": "4.2"
},
{
"text": "We operationalize this by constraining the prospective normalization candidates to be within the top 1,000 candidates ranked by the trigram probability (P (W \u22121 T W 1 )). This probability is calculated by the BerkeleyLM (Pauls and Klein, 2011) trained over Google Web 1T corpus. Note that this constraint makes our method more efficient over a brute-force approach, in exchange for loss in recall. However, we feel that this trade-off is fair: by retaining the top 1000 candidates, we observed the loss rate of gold standard answers in each of the channels is 14%, 15%, and 17% for phonetic substitution, abbreviation and paraphrase, respectively. This is in comparison with the final loss rate of over 70% reported by Li and Yarowsky (2008a) .",
"cite_spans": [
{
"start": 220,
"end": 243,
"text": "(Pauls and Klein, 2011)",
"ref_id": "BIBREF13"
},
{
"start": 719,
"end": 742,
"text": "Li and Yarowsky (2008a)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Candidate Generation",
"sec_num": "4.2"
},
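A sketch of the [A2] filter follows: candidates are ranked by the trigram probability P(W-1 T W1) and only the top 1,000 kept. The lm.trigram_logprob call is a stand-in for a language model trained on the Web 1T corpus (the paper uses BerkeleyLM, a Java library).

```python
def filter_candidates(candidates, w_left, w_right, lm, top_k=1000):
    # Score each candidate by the probability of the trigram W-1 T W1 and keep
    # the top_k most probable ones, trading recall for efficiency.
    scored = sorted(((lm.trigram_logprob(w_left, t, w_right), t) for t in candidates),
                    reverse=True)
    return [t for _, t in scored[:top_k]]
```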
{
"text": "Given the annotations, the three-tuples (< O, C(O), T >) generated from the resulting list of candidates are labeled as Y (N) as positive (negative) instances. As there are a much larger number of negative than positive instances for each O, this results in data skew.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Candidate Generation",
"sec_num": "4.2"
},
{
"text": "For the classification step, we calculate both rulebased and statistical features for supervised machine learning. We leverage our previous observations to engineer features specific to a particular channel. We describe both classes of features, listing its type (binary or continuous) and which channel it models (phonetic substitution, abbreviation,paraphrase, or all), as a two tuple. We accompany each rule with an example, showing Pinyin and tones, when appropriate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Feature Extraction for Classification",
"sec_num": "4.3"
},
{
"text": "\u2022 O contains valid Pinyin script < b, ph > e.g., \" \u00bbshi \u2020\" (\" \u00bb{si3 \u2020\";\"too cold\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule-based Features (5 features).",
"sec_num": "4.3.1"
},
{
"text": "\u2022 O contains digits < b, ph > e.g., \" v5\" (\" wei1fwu3\";\"mighty\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule-based Features (5 features).",
"sec_num": "4.3.1"
},
{
"text": "\u2022 O is a potential Pinyin acronym < b, ph > e.g., \"bs\" (\" bi3AEshi4\";\"despise\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule-based Features (5 features).",
"sec_num": "4.3.1"
},
{
"text": "\u2022 T contains characters in O? < b, ph > e.g., \" L8\" (\" Lb8 \";\"board games\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule-based Features (5 features).",
"sec_num": "4.3.1"
},
{
"text": "\u2022 The percentage of characters common between O and T < c, all >",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Rule-based Features (5 features).",
"sec_num": "4.3.1"
},
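The five rule-based features could be computed as in the sketch below; PINYIN_SYLLABLES and PINYIN_INITIALS are placeholder resources (a real system needs the full Pinyin syllabary, including two-letter initials such as "zh").

```python
PINYIN_SYLLABLES = {"shi", "si", "bi", "wei", "wu"}    # placeholder subset
PINYIN_INITIALS = set("bpmfdtnlgkhjqxrzcsyw")          # simplified: one-letter initials only

def rule_features(O, T):
    latin = "".join(ch for ch in O if ch.isascii() and ch.isalpha()).lower()
    common = [ch for ch in O if ch in T]
    return {
        "has_pinyin": latin in PINYIN_SYLLABLES,                      # O contains valid Pinyin
        "has_digit": any(ch.isdigit() for ch in O),                   # O contains digits
        "pinyin_acronym": bool(latin) and all(c in PINYIN_INITIALS    # O may be a Pinyin acronym
                                              for c in latin),
        "t_contains_o_chars": bool(common),                           # T contains characters in O
        "char_overlap": len(common) / max(len(O), 1),                 # % characters in common
    }
```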
{
"text": "We describe these features in more detail, as they form a key contribution in this work. Note that the statistical features that leverage information from both informal and formal domains are derived via maximum likelihood estimation on the appropriate training data. Pinyin Similarity < c, ph >. Although Levenshtein distance (LD;employed in (Li and Yarowsky, 2008a) ) is a low cost metric to measure string similarity, it has its drawbacks when applied to Pinyin similarity. As an example, the informal word \" \u00ebyin2 Mcai2 \" is normalized to \"\u00baren2 Mcai2\", meaning \"talent\". This suggests that P Y Sim(yin, ren) should be high, as they compose an informal-formal pair. However this is in contrast to evidence given by LD as LD(yin, ren) is large (especially compared with the LD(yin, yi), in which \"yi\" is a representative Pinyin string that has an edit distance with \"yin\" of just 1). For the manual annotation method, it is difficult for annotators to accurately weigh the similarities for all pronunciation pairs, since it is weighted arbitrarily. And the labor of manually tuning weights may be unnecessary, given annotated informal-formal pairs.",
"cite_spans": [
{
"start": 343,
"end": 367,
"text": "(Li and Yarowsky, 2008a)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Features (7 features).",
"sec_num": "4.3.2"
},
{
"text": "To tackle these drawbacks, we propose to fully utilize the gold standard annotation (i.e., informal-formal pairs applicable to the Phonetic Substitution channel) and to empirically estimate the Pinyin similarity from the corpus in a supervised manner. In our method, Pinyin similarity is formulated as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Features (7 features).",
"sec_num": "4.3.2"
},
{
"text": "P Y Sim(T |O) = P Y Sim(t i |o i ) (1) P Y Sim(t i |o i ) = P Y Sim(py(t i )|py(o i ))) = \u00b5P (py(t i )|py(o i )) + \u03bbP (ini(t i )|py(o i )) + \u03b7P (f in(t i )|py(o i ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Features (7 features).",
"sec_num": "4.3.2"
},
{
"text": "(2) Here, the ti (o i ) stands for the ith character in word T (O). Let the function py(x) return the Pinyin string of a character and functions ini(x) (f in(x)) return initial (final) of a Pinyin string x. We use linear interpolation algorithm for smoothing, with \u00b5, \u03bb and \u03b7 as weights summing to unity. Then, P (py(t i )|py(o i )), P (ini(t i )|py(o i )) and P (f in(t i )|py(o i )) are estimated using maximum likelihood estimation over the training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Features (7 features).",
"sec_num": "4.3.2"
},
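A minimal sketch of this supervised estimate: the three conditional tables are built by maximum likelihood from gold phonetic-substitution pairs and linearly interpolated. `pairs`, the toy ini()/fin() split, and the weight values are illustrative assumptions.

```python
from collections import Counter

def ini(py):  # toy initial/final split; real Pinyin segmentation is richer
    return py[:2] if py[:2] in ("zh", "ch", "sh") else py[:1]

def fin(py):
    return py[len(ini(py)):]

def estimate_tables(pairs):
    """pairs: aligned (informal_char_pinyin, formal_char_pinyin) gold tuples."""
    full, init, final, src = Counter(), Counter(), Counter(), Counter()
    for o_py, t_py in pairs:
        src[o_py] += 1
        full[(o_py, t_py)] += 1
        init[(o_py, ini(t_py))] += 1
        final[(o_py, fin(t_py))] += 1
    return full, init, final, src

def py_sim(o_py, t_py, tables, mu=0.6, lam=0.25, eta=0.15):  # weights sum to 1
    full, init, final, src = tables
    n = src[o_py]
    if n == 0:
        return 0.0
    # Linear interpolation of the MLE estimates in Equation (2).
    return (mu * full[(o_py, t_py)] / n
            + lam * init[(o_py, ini(t_py))] / n
            + eta * final[(o_py, fin(t_py))] / n)
```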
{
"text": "Lexicon and Semantic Similarity < c, ab + pa >. For the remaining two channels, we extend the source channel model (SCM) (Brown et al., 1990) to estimate the character mapping probability. In our case, SCM aims to find the formal string T that the given input O is most likely normalized to.",
"cite_spans": [
{
"start": 121,
"end": 141,
"text": "(Brown et al., 1990)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Features (7 features).",
"sec_num": "4.3.2"
},
{
"text": "T = arg max T P (T |O) = arg max T P (O|T )P (T ) (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Features (7 features).",
"sec_num": "4.3.2"
},
{
"text": "As discussed in Section 3, for both the two channels we use interpolation to model character-word mappings. Assuming the character-word mapping events are independent, we obtain:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Features (7 features).",
"sec_num": "4.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (O|T ) = P (o i |t i )",
"eq_num": "(4)"
}
],
"section": "Statistical Features (7 features).",
"sec_num": "4.3.2"
},
{
"text": "where o i (t i ) refers to ith character of O (T ). However, this SCM model suffers serious data sparsity problems, when the annotated microtext corpus is small (as in our case). To further address the sparsity, we extend the source channel model by inserting part-of-speech mapping models into Equation 4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Features (7 features).",
"sec_num": "4.3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "P (O|T ) = P (o i |t i ) (5) P (o i |t i ) = \u03b1P (o i |t i ) + \u03b2P (o i |pos(t i ), pos(o i ))",
"eq_num": "(6)"
}
],
"section": "Statistical Features (7 features).",
"sec_num": "4.3.2"
},
{
"text": "Here, let the function pos(x) return the partof-speech (POS) tag of x 7 . Both P (o i |t i ) and P (o i |pos(t i ), pos(o i )) are then estimated using maximum likelihood estimation over the annotated corpus. In parallel with the Pinyin similarity estimation, \u03b1 and \u03b2 are weights for the interpolation, summing to unity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Features (7 features).",
"sec_num": "4.3.2"
},
{
"text": "We give the intuition for our formulation. P (o i |t i ) measures the probability of using character o i to substitute for the given word t i . P (o i |pos(t i ), pos(o i )) measures the probability of using character o i as the substitution of any word t i , given the POS tag is mapped from pos(t i ) to pos(o i ). Finally, given the limited availability of gold standard annotations, we can optionally use formal domain synonym dictionaries to improve our model's estimation lexical and semantic similarity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Features (7 features).",
"sec_num": "4.3.2"
},
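The interpolated estimate of Equations (5)-(6) might look as follows; the count tables are hypothetical MLE tables built from the annotated corpus, and the weight values are placeholders (alpha + beta = 1).

```python
def p_char_given_word(o_char, t_word, pos_o, pos_t,
                      char_word_counts, t_counts, pos_map_counts, pos_counts,
                      alpha=0.7, beta=0.3):
    # Lexical term P(o_i | t_i), interpolated with the POS-mapping term
    # P(o_i | pos(t_i), pos(o_i)) to fight data sparsity.
    p_lex = char_word_counts.get((o_char, t_word), 0) / max(t_counts.get(t_word, 0), 1)
    p_pos = (pos_map_counts.get((o_char, pos_t, pos_o), 0)
             / max(pos_counts.get((pos_t, pos_o), 0), 1))
    return alpha * p_lex + beta * p_pos

def p_O_given_T(o_chars, t_words, pos_o_tags, pos_t_tags, *tables):
    # Equation (5): character-word mappings assumed independent, so take the product.
    prob = 1.0
    for o_char, t_word, po, pt in zip(o_chars, t_words, pos_o_tags, pos_t_tags):
        prob *= p_char_given_word(o_char, t_word, po, pt, *tables)
    return prob
```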
{
"text": "N-gram Probabilities 5\u00d7 < c, all >. We generate new sentences by substituting informal words with candidate formal words. The probabilities of the generated trigrams and bigrams (within a window size of 3) are computed with Berke-leyLM, trained on the Web1T corpus. The features capture how likely the candidate word is used in the informal domain. The five features are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Features (7 features).",
"sec_num": "4.3.2"
},
{
"text": "\u2022 Trigram probabilities: P (W \u22122 W \u22121 T ); P (W \u22121 T W 1 );P (T W 1 W 2 ) \u2022 Bigram probabilities: P (W \u22121 T ); P (T W 1 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Statistical Features (7 features).",
"sec_num": "4.3.2"
},
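A sketch of these five contextual features, again assuming a hypothetical lm.logprob(tokens) scorer over the Web 1T model:

```python
def ngram_features(sentence, i, T, lm):
    """sentence: segmented words with the informal word at index i, which is
    replaced by candidate T before scoring; out-of-range slots are padded."""
    s = list(sentence)
    s[i] = T
    w = lambda j: s[j] if 0 <= j < len(s) else "<pad>"
    return {
        "p_tri_left":   lm.logprob([w(i - 2), w(i - 1), T]),  # P(W-2 W-1 T)
        "p_tri_center": lm.logprob([w(i - 1), T, w(i + 1)]),  # P(W-1 T W1)
        "p_tri_right":  lm.logprob([T, w(i + 1), w(i + 2)]),  # P(T W1 W2)
        "p_bi_left":    lm.logprob([w(i - 1), T]),            # P(W-1 T)
        "p_bi_right":   lm.logprob([T, w(i + 1)]),            # P(T W1)
    }
```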
{
"text": "In our architecture, the candidate generation procedure is unsupervised. The part that does need tuning is the final, supervised classifier that renders the binary decision on each 3-tuple, as to whether the O-T pair is a match, so for this task we select the best classifier among three learners.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "The statistics reported by Li and Yarowsky (2008a) is then used as a baseline * performance. We mark this with an asterisk to indicate that the comparison is just for reference, where the performance figures are taken directly from their published work, as we did not reimplement their method nor execute it on our comtemporary data.",
"cite_spans": [
{
"start": 27,
"end": 50,
"text": "Li and Yarowsky (2008a)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "As a second analysis point, we compare our system -with and without features derived from synonym dictionaries -to assess how well our method adapts from formal corpora. Finally we show that our method is also effective to acquire synonyms for the formal domain (formal-formal pairs, in contrast to our task's informal-formal pairs).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "We collected 1036 unique informal-formal word pairs with their informal contexts were collected from our annotated corpus for cross-fold validation. As any supervised classifier would do, we testing logistic regression (LR), support vector machine (SVM) and decision tree (DT) learning models, provided by WEKA3 (Hall et al., 2009) . To acquire formal domain synonyms, we optionally employed the Cilin 8 and TYCDict 9 dictionaries.",
"cite_spans": [
{
"start": 312,
"end": 331,
"text": "(Hall et al., 2009)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "5.1"
},
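The paper runs this comparison in WEKA; the sketch below reproduces the same three-learner, 5-fold cross-validation setup with scikit-learn equivalents, where X is the feature matrix and y the Y/N labels.

```python
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def compare_learners(X, y):
    learners = {
        "LR": LogisticRegression(max_iter=1000),
        "SVM": SVC(),
        "DT": DecisionTreeClassifier(),
    }
    for name, clf in learners.items():
        # 5-fold cross-validation on F1 of the positive (Y) class.
        scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
        print(f"{name}: mean F1 = {scores.mean():.3f}")
```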
{
"text": "We adopt the standard metrics of precision, recall and F 1 for the evaluation, focusing on the the positive (correctly matched as informal-formal pair) Y class. Table 4 presents the evaluation results over different classifiers. In this first experiment, data from all the channels are merged together and the result reported is the outcome of 5-fold cross validation. Lexicon similarity features are derived only from the training corpus. As the DT classifier performs best, we only report DT results for subsequent experiments. Table 4 : Performance comparison using different classifiers.",
"cite_spans": [],
"ref_spans": [
{
"start": 161,
"end": 168,
"text": "Table 4",
"ref_id": null
},
{
"start": 530,
"end": 537,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5.2"
},
{
"text": "To make a direct comparison with the baseline * , we perform cross-fold validation using data each of three channels separately.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with Baseline *",
"sec_num": "5.2.2"
},
{
"text": "Since Li and Yarowsky (2008a) formalized the task as a ranking problem, we show the reported Top1 and Top10 precision in Table 5 10 .",
"cite_spans": [
{
"start": 6,
"end": 29,
"text": "Li and Yarowsky (2008a)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 121,
"end": 128,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with Baseline *",
"sec_num": "5.2.2"
},
{
"text": "Our model achieves high precision for each channel, compared with the baseline * performance. From Table 5 we observe that normalizing words due to Phonetic Substitution is relatively easy as compared to the other two channels. That is because given the fixed vocabulary of standard Chinese Pinyin, the Pinyin similarity measured from the corpus is much more stable than 8 http://ir.hit.edu.cn/phpwebsite/ index.php?module=pagemaster&PAGE_user_ op=view_page&PAGE_id=162",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 106,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with Baseline *",
"sec_num": "5.2.2"
},
{
"text": "9 http://www.datatang.com/data/29207/ 10 Due to the difference in classification scheme, we recomputed the reported value, given our classification. the estimated lexicon or semantic similarity. The low recall for the Paraphrase channel suggests the difficulty of inferring the semantic similarity between word pairs. --- Table 5 : Performance, analyzed per channel. \"-\" indicate no comparable prior reported results.",
"cite_spans": [],
"ref_spans": [
{
"start": 322,
"end": 329,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison with Baseline *",
"sec_num": "5.2.2"
},
{
"text": "We note that there is a tradeoff between the data scale and performance. By keeping the Top 1000 candidates, we observed an 18.8% overall loss of correct formal candidates (breaking down as 14.9% for Phonetic Substitutions, 22.8% for Abbreviations and 31.8% for Paraphrases). Based on this statistics, the final loss rate is 64.1%. By comparison, Li and Yarowsky (2008a) 's seed bootstrapped method's self-stated loss rate is around 70%.",
"cite_spans": [
{
"start": 347,
"end": 370,
"text": "Li and Yarowsky (2008a)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Final Loss Rate",
"sec_num": "5.2.3"
},
{
"text": "In the real-world, we have to infer the channel an informal word originates from. To assess how well our system does without channel knowledge, we merged the separate channel datasets together and train a single classifier.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Channel Knowledge and Use of Formal Synonym Dictionaries",
"sec_num": "5.2.4"
},
{
"text": "To investigate the impact of the formal synonym dictionaries, two configurations -with and without features derived from synonym dictionarieswere also tested. To upper bound achievable performance, we trained an oracular model with the correct channel as an input feature. In the results presented in Table 6 , we see that the introduction of the features from the formal synonym dictionaries enhances performance (especially recall) of the basic feature set. As upper-bound performance is still significantly higher, future work may aim to improve performance by first predicting the originating channel. Table 6 : Performance over different feature sets. \"w\" (\"w/o\") refers to the model trained with (without) features from formal synonym dictionaries. \"channel\" refers to the model trained with the correct channel given as an input feature.",
"cite_spans": [],
"ref_spans": [
{
"start": 301,
"end": 308,
"text": "Table 6",
"ref_id": null
},
{
"start": 606,
"end": 613,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Channel Knowledge and Use of Formal Synonym Dictionaries",
"sec_num": "5.2.4"
},
{
"text": "To evaluate our method in the formal text domain, we take the synonym pairs from TYCDict as the test corpus and use the microtext data together with Cilin dictionaries as training. The experiment follows the same workflow as is done for the earlier microtext experiments, except that the context is extracted from the Chinese Wikipedia 11 . As we obtained solid performance, (P re = .949, Rec = .554 and F 1 = .699), we feel that our method can be applied to synonym acquisition task in the formal domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Formal Domain Synonym Acquisition",
"sec_num": "5.2.5"
},
{
"text": "Based on our observations from a crowdsourced annotated corpus of informal Chinese words, we perform a systematic analysis about how informal words originate. We show that there are three main channels -phonetic substitution, abbreviation and paraphrase -that are responsible for informal creation, and that the motivation for their creation varies by channel.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "To operationalize informal word normalization we suggest a two-stage candidate generationclassification method. The results obtained are promising, bettering the current state of the art with respect to both F 1 and loss rate. In our detailed analysis, we find that channel knowledge can still improve performance and is a possible field for future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "www.baidu.com 2 Pinyin is the official phonetic system for transcribing the sound of Chinese characters into Latin script. P Y Sim(x, y) is used to denote the similarity between two Pinyin string \"x\" and \"y\" hereafter.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://open.weibo.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.zhubajie.com 5 http://wing.comp.nus.edu.sg/portal/ downloads.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For notational convenience, the informal word context C(O) is defined as W\u2212i...O ...Wi; here, i refers to the index of the word with respect to O, which we set in this work to 3.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Implemented in our system by the FudanNLP toolkithttps://code.google.com/p/fudannlp/.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "How do you pronounce your name?: improving g2p with transliterations",
"authors": [
{
"first": "Aditya",
"middle": [],
"last": "Bhargava",
"suffix": ""
},
{
"first": "Grzegorz",
"middle": [],
"last": "Kondrak",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "399--408",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aditya Bhargava and Grzegorz Kondrak. 2011. How do you pronounce your name?: improving g2p with transliterations. In Proceedings of the 49th Annual Meeting of the Association for Computational Lin- guistics: Human Language Technologies -Volume 1, pages 399-408.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "The google web 1t 5-gram corpus version 1.1. LDC2006T13",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Franz",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants and Alex Franz. 2006. The google web 1t 5-gram corpus version 1.1. LDC2006T13.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A statistical approach to machine translation",
"authors": [
{
"first": "John",
"middle": [],
"last": "Peter F Brown",
"suffix": ""
},
{
"first": "Stephen A Della",
"middle": [],
"last": "Cocke",
"suffix": ""
},
{
"first": "Vincent J Della",
"middle": [],
"last": "Pietra",
"suffix": ""
},
{
"first": "Fredrick",
"middle": [],
"last": "Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"S"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Roossin",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational linguistics",
"volume": "",
"issue": "",
"pages": "79--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F Brown, John Cocke, Stephen A Della Pietra, Vincent J Della Pietra, Fredrick Jelinek, John D Laf- ferty, Robert L Mercer, and Paul S Roossin. 1990. A statistical approach to machine translation. Com- putational linguistics, pages 79-85.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Mining atomic chinese abbreviations with a probabilistic single character recovery model. Language Resources and Evaluation",
"authors": [
{
"first": "Jing-Shin",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Wei-Lun",
"middle": [],
"last": "Teng",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "367--374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing-Shin Chang and Wei-Lun Teng. 2006. Min- ing atomic chinese abbreviations with a probabilis- tic single character recovery model. Language Re- sources and Evaluation, pages 367-374.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "PrEV: Preservation Explorer and Vault for Web 2.0 User-Generated Content",
"authors": [
{
"first": "A",
"middle": [],
"last": "Cui",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Hou",
"suffix": ""
},
{
"first": "M",
"middle": [
"Y"
],
"last": "Kan",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2012,
"venue": "Theory and Practice of Digital Libraries",
"volume": "",
"issue": "",
"pages": "101--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Cui, L. Yang, D. Hou, M.Y. Kan, Y. Liu, M. Zhang, and S. Ma. 2012. PrEV: Preservation Explorer and Vault for Web 2.0 User-Generated Content. Theory and Practice of Digital Libraries, pages 101-112.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Contextual Bearing on Linguistic Variation in Social Media",
"authors": [
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Donald",
"middle": [],
"last": "Metzler",
"suffix": ""
},
{
"first": "Congxing",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Workshop on Language in Social Media",
"volume": "",
"issue": "",
"pages": "20--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephan Gouws, Donald Metzler, Congxing Cai, and Eduard Hovy. 2011. Contextual Bearing on Lin- guistic Variation in Social Media. In Proceedings of the Workshop on Language in Social Media, pages 20-29.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "The weka data mining software: an update",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Pfahringer",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Reutemann",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2009,
"venue": "SIGKDD Explor. Newsl",
"volume": "",
"issue": "",
"pages": "10--18",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The weka data mining software: an update. SIGKDD Explor. Newsl., pages 10-18.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatically Constructing a Normalisation Dictionary for Microblogs",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Han, Paul Cook, and Timothy Baldwin. 2012. Automatically Constructing a Normalisation Dictio- nary for Microblogs. In Proceedings of the 2012",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Natural Language Processing and Computational Natural Language Learning",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "421--432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 421-432.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Mining and modeling relations between formal and informal Chinese phrases from web corpora",
"authors": [
{
"first": "Z",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1031--1040",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Z. Li and D. Yarowsky. 2008a. Mining and model- ing relations between formal and informal Chinese phrases from web corpora. In Proceedings of the Conference on Empirical Methods in Natural Lan- guage Processing, pages 1031-1040.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Unsupervised translation induction for chinese abbreviations using monolingual corpora",
"authors": [
{
"first": "Zhifei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "425--433",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhifei Li and David Yarowsky. 2008b. Unsupervised translation induction for chinese abbreviations using monolingual corpora. In Proceedings of ACL, pages 425-433.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Semi-supervised maximum entropy based approach to acronym and abbreviation normalization in medical texts",
"authors": [
{
"first": "Serguei",
"middle": [],
"last": "Pakhomov",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Serguei Pakhomov. 2002. Semi-supervised maximum entropy based approach to acronym and abbrevia- tion normalization in medical texts. In Proceedings of the 40th annual meeting on association for com- putational linguistics, pages 160-167.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Hybrid text mining for finding abbreviations and their definitions",
"authors": [
{
"first": "Youngja",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Byrd",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 2001 conference on empirical methods in natural language processing",
"volume": "",
"issue": "",
"pages": "126--133",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Youngja Park and Roy J Byrd. 2001. Hybrid text min- ing for finding abbreviations and their definitions. In Proceedings of the 2001 conference on empiri- cal methods in natural language processing, pages 126-133.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Faster and smaller n-gram language models",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Pauls",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th annual meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "258--267",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Pauls and Dan Klein. 2011. Faster and smaller n-gram language models. In Proceedings of the 49th annual meeting of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 258-267.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Mining informal language from chinese microtext: Joint word recognition and segmentation",
"authors": [
{
"first": "Aobo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "731--741",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aobo Wang and Min-Yen Kan. 2013. Mining infor- mal language from chinese microtext: Joint word recognition and segmentation. In Proceedings of the 51st Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 731-741.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Perspectives on crowdsourcing annotations for natural language processing, journal = Language Resources and Evaluation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "C",
"middle": [
"D V"
],
"last": "Hoang",
"suffix": ""
},
{
"first": "M",
"middle": [
"Y"
],
"last": "Kan",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Wang, C.D.V. Hoang, and M.Y. Kan. 2010. Per- spectives on crowdsourcing annotations for natural language processing, journal = Language Resources and Evaluation. pages 1-23.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Retweeting From A Linguistic Perspective",
"authors": [
{
"first": "Aobo",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Second Workshop on Language in Social Media",
"volume": "",
"issue": "",
"pages": "46--55",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aobo Wang, Tao Chen, and Min-Yen Kan. 2012. Re- tweeting From A Linguistic Perspective. In Pro- ceedings of the Second Workshop on Language in Social Media, pages 46-55.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Normalization of Chinese Chat Language. Language Resources and Evaluation",
"authors": [
{
"first": "K",
"middle": [
"F"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Xia",
"suffix": ""
}
],
"year": 2008,
"venue": "",
"volume": "",
"issue": "",
"pages": "219--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "K.F. Wong and Y. Xia. 2008. Normalization of Chinese Chat Language. Language Resources and Evaluation, pages 219-242.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Learning to find english to chinese transliterations on the web",
"authors": [
{
"first": "Jian-Cheng",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"S"
],
"last": "Chang",
"suffix": ""
}
],
"year": 2007,
"venue": "Proc. of EMNLP-CoNLL",
"volume": "",
"issue": "",
"pages": "996--1004",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jian-Cheng Wu and Jason S Chang. 2007. Learning to find english to chinese transliterations on the web. In Proc. of EMNLP-CoNLL, pages 996-1004.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "NIL Is Not Nothing: Recognition of Chinese Network Informal Language Expressions",
"authors": [
{
"first": "Y",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "K",
"middle": [
"F"
],
"last": "Wong",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Gao",
"suffix": ""
}
],
"year": 2005,
"venue": "4th SIGHAN Workshop on Chinese Language Processing",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Y. Xia, K.F. Wong, and W. Gao. 2005. NIL Is Not Nothing: Recognition of Chinese Network Informal Language Expressions. In 4th SIGHAN Workshop on Chinese Language Processing, volume 5.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "A phonetic-based approach to chinese chat text normalization",
"authors": [
{
"first": "Yunqing",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Kam-Fai",
"middle": [],
"last": "Wong",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "993--1000",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yunqing Xia, Kam-Fai Wong, and Wenjie Li. 2006. A phonetic-based approach to chinese chat text nor- malization. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Compu- tational Linguistics, pages 993-1000.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Machine transliteration: leveraging on third languages",
"authors": [
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Vladimir",
"middle": [],
"last": "Pervouchine",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics: Posters",
"volume": "",
"issue": "",
"pages": "1444--1452",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Min Zhang, Xiangyu Duan, Vladimir Pervouchine, and Haizhou Li. 2010. Machine transliteration: lever- aging on third languages. In Proceedings of the 23rd International Conference on Computational Linguistics: Posters, pages 1444-1452.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"type_str": "figure",
"text": "Our framework consists of the two steps of informal word recognition and normalization. Normalization breaks down to its component steps of candidate generation and classification.",
"num": null
},
"TABREF1": {
"type_str": "table",
"num": null,
"text": "Classification of Chinese informal words as originating from three primary channels. Pronunciation is indicated with Pinyin for phonetic substitutions, while characters in bold are linked to the motivation for the informal form.",
"content": "<table/>",
"html": null
},
"TABREF2": {
"type_str": "table",
"num": null,
"text": "Lexical patterns for candidate generation.",
"content": "<table/>",
"html": null
},
"TABREF5": {
"type_str": "table",
"num": null,
"text": ".886 .443 .590 w .895 .583 .706 w + channel .915 .638 .752",
"content": "<table><tr><td>Feature set</td><td>Pre Rec</td><td>F 1</td></tr><tr><td>w/o</td><td/><td/></tr></table>",
"html": null
}
}
}
}