{
"paper_id": "I13-1037",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:14:29.529525Z"
},
"title": "Chinese Named Entity Abbreviation Generation Using First-Order Logic",
"authors": [
{
"first": "Huan",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fudan University Shanghai",
"location": {
"addrLine": "{12210240054, qz",
"postCode": "12110240030",
"country": "P.R. China"
}
},
"email": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fudan University Shanghai",
"location": {
"addrLine": "{12210240054, qz",
"postCode": "12110240030",
"country": "P.R. China"
}
},
"email": ""
},
{
"first": "Jin",
"middle": [],
"last": "Qian",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fudan University Shanghai",
"location": {
"addrLine": "{12210240054, qz",
"postCode": "12110240030",
"country": "P.R. China"
}
},
"email": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Fudan University Shanghai",
"location": {
"addrLine": "{12210240054, qz",
"postCode": "12110240030",
"country": "P.R. China"
}
},
"email": "xjhuang@fudan.edu.cn"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Normalizing named entity abbreviations to their standard forms is an important preprocessing task for question answering, entity retrieval, event detection, microblog processing, and many other applications. Along with the quick expansion of microblogs, this task has received more and more attentions in recent years. In this paper, we propose a novel entity abbreviation generation method using first-order logic to model long distance constraints. In order to reduce the human effort of manual annotating corpus, we also introduce an automatically training data construction method with simple strategies. Experimental results demonstrate that the proposed method achieves better performance than state-of-the-art approaches.",
"pdf_parse": {
"paper_id": "I13-1037",
"_pdf_hash": "",
"abstract": [
{
"text": "Normalizing named entity abbreviations to their standard forms is an important preprocessing task for question answering, entity retrieval, event detection, microblog processing, and many other applications. Along with the quick expansion of microblogs, this task has received more and more attentions in recent years. In this paper, we propose a novel entity abbreviation generation method using first-order logic to model long distance constraints. In order to reduce the human effort of manual annotating corpus, we also introduce an automatically training data construction method with simple strategies. Experimental results demonstrate that the proposed method achieves better performance than state-of-the-art approaches.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Twitter and other social media services have received considerable attentions in recent years. Users provide hundreds of millions microblogs through them everyday. The informative data has been relied on by many applications, such as sentiment analysis (Jiang et al., 2011; Meng et al., 2012) , event detection (Sakaki et al., 2010; Lin et al., 2010) , stock market predication (Bollen et al., 2011) , and so on. However, due to the constraint on the length of characters, abbreviations frequently occur in microblogs. According to a statistic, approximately 20% of sentences in news articles have abbreviated words (Chang and Lai, 2004) . The frequency of abbreviation has become even more popular along with the rapid increment of user generated contents. Without pre-normalizing these abbreviations, most of the natural language processing systems may heavily suffer from them.",
"cite_spans": [
{
"start": 253,
"end": 273,
"text": "(Jiang et al., 2011;",
"ref_id": "BIBREF10"
},
{
"start": 274,
"end": 292,
"text": "Meng et al., 2012)",
"ref_id": "BIBREF16"
},
{
"start": 311,
"end": 332,
"text": "(Sakaki et al., 2010;",
"ref_id": "BIBREF21"
},
{
"start": 333,
"end": 350,
"text": "Lin et al., 2010)",
"ref_id": "BIBREF14"
},
{
"start": 378,
"end": 399,
"text": "(Bollen et al., 2011)",
"ref_id": "BIBREF3"
},
{
"start": 616,
"end": 637,
"text": "(Chang and Lai, 2004)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The goal of entity abbreviation generation is to produce abbreviated equivalents of the original entities. Table 1 shows several examples of entities and their corresponding abbreviations. A few of approaches have been done on this task. Li and Yarowsky (Li and Yarowsky, 2008b) introduced an unsupervised method used to extract phrases and their abbreviation pair using parallel dataset and monolingual corpora. Xie et al. (2011) proposed to use weighted bipartite graph to extract definition and corresponding abbreviation pairs from anchor texts. Since these methods rely heavily on lexical/phonetic similarity, substitution of characters and portion may not be correctly identified through them. Yang et al. (2009) studied the Chinese entity name abbreviation problem. They formulated the abbreviation task as a sequence labeling problem and used the conditional random fields (CRFs) to model it. However the long distance and global constraint can not be easily modeled thorough CRFs.",
"cite_spans": [
{
"start": 238,
"end": 278,
"text": "Li and Yarowsky (Li and Yarowsky, 2008b)",
"ref_id": "BIBREF13"
},
{
"start": 413,
"end": 430,
"text": "Xie et al. (2011)",
"ref_id": "BIBREF23"
},
{
"start": 700,
"end": 718,
"text": "Yang et al. (2009)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 107,
"end": 114,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Abbr. To overcome these limitations, in this paper, we propose a novel entity abbreviation generation method, which combines first-order logic and rich linguistic features. To the best of our knowledge, our approach is the first work of using first-order logic for this entity abbreviation. Abbreviation generation is converted to character deletion and keep operations which are modeled by logic formula. Linguistic features and relations between different operations are represented by local and global logic formulas respectively. Markov Logic Networks (MLN) (Richardson and Domingos, 2006 ) is adopted for learning and predication. To reduce the human effort in constructing the training data, we collect standard forms of entities from online encyclopedia and introduce a few of simple patterns to extract abbreviations from documents and search engine snippets with high precision as training data. Experimental results show that the proposed methods achieve better performance than state-of-the-art methods and can efficiently process large volumes of data.",
"cite_spans": [
{
"start": 562,
"end": 592,
"text": "(Richardson and Domingos, 2006",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Entity",
"sec_num": null
},
{
"text": "The remainder of the paper is organized as follows: In section 2, we review a number of related works and the state-of-the-art approaches in related areas. Section 3 presents the proposed method. Experimental results in test collections and analyses are shown in section 4. Section 5 concludes this paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "\u5317\u4eac\u5927\u5b66 \u5317\u5927",
"sec_num": null
},
{
"text": "The proposed approach builds on contributions from two research communities: text normalization, and Markov Logic Networks. In the following of this section, we give brief description of previous works on these areas.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Named entity normalization, abbreviation generation, and lexical normalization are related to this task. These problems have been recognized as important problems for various languages. Since different languages have their own peculiarities, many approaches have been proposed to handle variants of words (Aw et al., 2006; Liu et al., 2012; Han et al., 2012) and named entities (Yang et al., 2009; Xie et al., 2011; Li and Yarowsky, 2008b) . Chang and Teng (2006) introduced an HMMbased single character recovery model to extract character level abbreviation pairs for textual corpus. Okazaki et al. (2008) also used discriminative approach for this task. They formalized the abbreviation recognition task as a binary classification problem and used Support Vector Machines to model it. Yang et al. (2012) also treated the abbreviation generation problem as a labeling task and used Conditional Random Fields (CRFs) to do it. They also proposed to re-rank candidates by a length model and web information. Li and Yarowsky (2008b) proposed an unsupervised method extracting the relation between a full-form phrase and its abbreviation from monolingual corpora. They used data co-occurrence intuition to identify relations between abbreviation and full names. They also improved a statistical machine translation by incorporating the extracted relations into the baseline translation system. Based on the data co-occurrence phenomena, they introduced a bootstrapping procedure to identify formal-informal relations informal phrases in web corpora (Li and Yarowsky, 2008a) . They used search engine to extract contextual instances of the given an informal phrase, and ranked the candidate relation pairs using conditional log-linear model. Xie et al. (2011) proposed to extract Chinese abbreviations and their corresponding definitions based on anchor texts. They constructed a weighted URL-AnchorText bipartite graph from anchor texts and applied co-frequency based measures to quantify the relatedness between two anchor texts.",
"cite_spans": [
{
"start": 305,
"end": 322,
"text": "(Aw et al., 2006;",
"ref_id": "BIBREF2"
},
{
"start": 323,
"end": 340,
"text": "Liu et al., 2012;",
"ref_id": "BIBREF16"
},
{
"start": 341,
"end": 358,
"text": "Han et al., 2012)",
"ref_id": "BIBREF7"
},
{
"start": 378,
"end": 397,
"text": "(Yang et al., 2009;",
"ref_id": "BIBREF24"
},
{
"start": 398,
"end": 415,
"text": "Xie et al., 2011;",
"ref_id": "BIBREF23"
},
{
"start": 416,
"end": 439,
"text": "Li and Yarowsky, 2008b)",
"ref_id": "BIBREF13"
},
{
"start": 442,
"end": 463,
"text": "Chang and Teng (2006)",
"ref_id": "BIBREF5"
},
{
"start": 585,
"end": 606,
"text": "Okazaki et al. (2008)",
"ref_id": "BIBREF17"
},
{
"start": 787,
"end": 805,
"text": "Yang et al. (2012)",
"ref_id": "BIBREF25"
},
{
"start": 1006,
"end": 1029,
"text": "Li and Yarowsky (2008b)",
"ref_id": "BIBREF13"
},
{
"start": 1545,
"end": 1569,
"text": "(Li and Yarowsky, 2008a)",
"ref_id": "BIBREF12"
},
{
"start": 1737,
"end": 1754,
"text": "Xie et al. (2011)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Normalization",
"sec_num": "2.1"
},
{
"text": "For lexical normalisation, Aw et al. (2006) treated the lexical normalisation problem as a translation problem from the informal language to the formal English language and adapted a phrase-based method to do it. Han and Baldwin (2011) proposed a supervised method to detect ill-formed words and used morphophonemic similarity to generate correction candidates. Liu et al. (2012) proposed to use a broad coverage lexical normalization method consisting three key components enhanced letter transformation, visual priming, and string/phonetic similarity. Han et al. (2012) introduced a dictionary based method and an automatic normalisation-dictionary construction method. They assumed that lexical variants and their standard forms occur in similar contexts.",
"cite_spans": [
{
"start": 27,
"end": 43,
"text": "Aw et al. (2006)",
"ref_id": "BIBREF2"
},
{
"start": 362,
"end": 379,
"text": "Liu et al. (2012)",
"ref_id": "BIBREF16"
},
{
"start": 554,
"end": 571,
"text": "Han et al. (2012)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text Normalization",
"sec_num": "2.1"
},
{
"text": "In this paper, we focused on named entity abbreviation generation problem and treated the problem as a labeling task. Due to the flexibilities of Markov Logic Networks on capturing local and global linguistic feature, we adopted it to model the supervised classification procedure. To reduce the human effort in constructing training data, we also introduced a sample rule based method to find relations between standard forms and abbreviations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Normalization",
"sec_num": "2.1"
},
{
"text": "Predicates about characters in the entity character(i,c) The ith character is c.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Text Normalization",
"sec_num": "2.1"
},
{
"text": "The ith character is a number. Predicates about words in the entity word (j,w) The jth word is w.",
"cite_spans": [
],
"ref_spans": [],
"eq_spans": [],
"section": "isNumber(i)",
"sec_num": null
},
{
"text": "The jth word is a city name.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "isCity(j)",
"sec_num": null
},
{
"text": "The jth word is the last word.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "lastWord(j)",
"sec_num": null
},
{
"text": "The jth word belongs the set of common suffixes of corporation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "sufCorp(j)",
"sec_num": null
},
{
"text": "The jth word belongs the set of common suffixes of school.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "sufSchool(j)",
"sec_num": null
},
{
"text": "The jth word belongs the set of common suffixes of organizations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "sufOrg(j)",
"sec_num": null
},
{
"text": "The jth word belongs the set of common suffixes of government agencies. idf (j,v) The inverse document frequency of jth word is v. Predicates about entire entity entityType(t)",
"cite_spans": [
],
"ref_spans": [],
"eq_spans": [],
"section": "sufGov(j)",
"sec_num": null
},
{
"text": "The type of the entity is t.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "sufGov(j)",
"sec_num": null
},
{
"text": "The total number of characters is n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "lenChar(n)",
"sec_num": null
},
{
"text": "The total number of words is n.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "lenWord(n)",
"sec_num": null
},
{
"text": "The ith character belongs to jth word. cwPosition (i,j) The ith character of the entity is the jth character in the corresponding word. Richardson and Domingos (2006) proposed Markov Logic Networks (MLN), which combines first-order logic and probabilistic graphical models. MLN framework has been adopted for several natural language processing tasks and achieved a certain level of success (Singla and Domingos, 2006; Riedel and Meza-Ruiz, 2008; Yoshikawa et al., 2009; Andrzejewski et al., 2011; Jiang et al., 2012; Huang et al., 2012) . Singla and Domingos (2006) modeled the entity resolution problem with MLN. They demonstrated the capability of MLN to seamlessly combine a number of previous approaches. Poon and Domingos (2008) proposed to use MLN for joint unsupervised coreference resolution. Yoshikawa et al. (2009) proposed to use Markov logic to incorporate both local features and global constraints that hold between temporal relations. Andrzejewski et al. (2011) introduced a framework for incorporating general domain knowledge, which is represented by First-Order Logic (FOL) rules, into LDA inference to produce topics shaped by both the data and the rules.",
"cite_spans": [
{
"start": 136,
"end": 166,
"text": "Richardson and Domingos (2006)",
"ref_id": "BIBREF19"
},
{
"start": 391,
"end": 418,
"text": "(Singla and Domingos, 2006;",
"ref_id": "BIBREF22"
},
{
"start": 419,
"end": 446,
"text": "Riedel and Meza-Ruiz, 2008;",
"ref_id": "BIBREF20"
},
{
"start": 447,
"end": 470,
"text": "Yoshikawa et al., 2009;",
"ref_id": "BIBREF26"
},
{
"start": 471,
"end": 497,
"text": "Andrzejewski et al., 2011;",
"ref_id": null
},
{
"start": 498,
"end": 517,
"text": "Jiang et al., 2012;",
"ref_id": "BIBREF11"
},
{
"start": 518,
"end": 537,
"text": "Huang et al., 2012)",
"ref_id": "BIBREF9"
},
{
"start": 540,
"end": 566,
"text": "Singla and Domingos (2006)",
"ref_id": "BIBREF22"
},
{
"start": 710,
"end": 734,
"text": "Poon and Domingos (2008)",
"ref_id": "BIBREF18"
},
{
"start": 802,
"end": 825,
"text": "Yoshikawa et al. (2009)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Predicates about relations between characters and words cwMap(i,j)",
"sec_num": null
},
{
"text": "In this section, firstly, we briefly describe the Markov Logic Networks framework. Then, we present the first-order logic formulas including local formulas and global formulas we used in this work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Proposed Approach",
"sec_num": "3"
},
{
"text": "A MLN consists of a set of logic formulas that describe first-order knowledge base. Each formula consists of a set of first-order predicates, logical connectors and variables. Different with firstorder logic, these hard logic formulas are softened and can be violated with some penalty (the weight of formula) in MLN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markov Logic Networks",
"sec_num": "3.1"
},
{
"text": "We use M to represent a MLN and {(\u03d5 i , w i )} to represent formula \u03d5 i and its weight w i . These weighted formulas define a probability distribution over sets of possible worlds. Let y denote a possible world, the p(y) is defined as follows (Richardson and Domingos, 2006) :",
"cite_spans": [
{
"start": 243,
"end": 274,
"text": "(Richardson and Domingos, 2006)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Markov Logic Networks",
"sec_num": "3.1"
},
{
"text": "p(y) = 1 Z exp \uf8eb \uf8ed \u2211 (\u03d5 i ,w i )\u2208M w i \u2211 c\u2208C n \u03d5 i f \u03d5 i c (y) \uf8f6 \uf8f8 ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Markov Logic Networks",
"sec_num": "3.1"
},
{
"text": "where each c is a binding of free variable in \u03d5 i to constraints; f \u03d5 i c (y) is a binary feature function that returns 1 if the true value is obtained in the ground formula we get by replacing the free variables in \u03d5 i with the constants in c under the given possible world y, and 0 otherwise; C n \u03d5 i is all possible bindings of variables to constants, and Z is a normalization constant. Many methods have been proposed to learn the weights of MLNs using both generative and discriminative approaches (Richardson and Domingos, 2006; Singla and Domingos, 2006) . There are also several MLNs learning packages available online such as thebeast 1 , Tuffy 2 , PyMLNs 3 , Alchemy 4 , and so on.",
"cite_spans": [
{
"start": 503,
"end": 534,
"text": "(Richardson and Domingos, 2006;",
"ref_id": "BIBREF19"
},
{
"start": 535,
"end": 561,
"text": "Singla and Domingos, 2006)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Markov Logic Networks",
"sec_num": "3.1"
},
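To make the distribution above concrete, the following is a minimal sketch (our illustration, not the authors' or thebeast's implementation) that computes the unnormalized MLN score of a possible world; the toy world, formula, weight, and arity are assumed values for illustration only.

```python
import math
from itertools import product

def mln_unnormalized_score(world, weighted_formulas, constants):
    """Sum w_i over all true groundings f_c^{phi_i}(y) = 1, then exponentiate."""
    score = 0.0
    for phi, weight, arity in weighted_formulas:
        for binding in product(constants, repeat=arity):
            if phi(world, *binding):  # ground formula true under world y
                score += weight
    return math.exp(score)  # p(y) would further divide by the constant Z

# Toy world over 4 character positions: hidden predicate drop, observed isNumber.
world = {"drop": {1, 3}, "isNumber": {3}}
formulas = [
    # isNumber(i) => drop(i), i.e. not isNumber(i) or drop(i); weight is illustrative.
    (lambda y, i: (i not in y["isNumber"]) or (i in y["drop"]), 0.8, 1),
]
print(mln_unnormalized_score(world, formulas, constants=range(4)))
```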
{
"text": "In this work, we convert the abbreviation generation problem as a labeling task for every characters in entities. Predicate drop(i) indicates that the character at position i is omitted in the abbreviation. Previous works (Chang and Lai, 2004; Yang et al., 2009) show that Chinese named entities can be further segmented into words. Words also provide important information for abbreviation generation. Hence, in this work, we also segment named entities into words and propose an observed predict to connect words and characters.",
"cite_spans": [
{
"start": 222,
"end": 243,
"text": "(Chang and Lai, 2004;",
"ref_id": "BIBREF4"
},
{
"start": 244,
"end": 262,
"text": "Yang et al., 2009)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MLN for Abbreviation Generation",
"sec_num": "3.2"
},
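As a concrete illustration of this representation, the sketch below (our own, with an assumed pre-segmented input) emits the observed ground atoms of Table 2 for one segmented entity; the hidden drop(i) atoms are what the model predicts.

```python
# Minimal sketch (assumed representation, not the authors' code): emit the
# observed ground atoms for one segmented entity. Word segmentation is
# taken as given; the hidden atoms drop(i) are left to the MLN.
def ground_atoms(words):
    atoms, i = [], 0
    for j, w in enumerate(words):
        atoms.append(f"word({j},{w})")
        for k, ch in enumerate(w):
            atoms.append(f"character({i},{ch})")
            atoms.append(f"cwMap({i},{j})")       # character i belongs to word j
            atoms.append(f"cwPosition({i},{k})")  # i is the k-th character of its word
            i += 1
    atoms.append(f"lenChar({i})")
    atoms.append(f"lenWord({len(words)})")
    return atoms

print(ground_atoms(["北京", "大学"]))  # 北京大学 -> abbreviation 北大
```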
{
"text": "The local formulas relate one or more observed predicates to exactly one hidden predicate. In this work, we define a list of observed predicates to describe the properties of individual characters. Table 2 shows the list. For this task, there is only one hidden predicate drop. Table 3 lists the local formulas used in this work. The \"+\" notation in the formulas indicates that the each constant of the logic variable should be weighted separately. For example, formula character(2,\u4e8c) \u2227 isN umber(2) \u21d2 drop(2) and character(2,\u5341)\u2227isN umber(2) \u21d2 drop(2) may have different weights as inferred by formula",
"cite_spans": [],
"ref_spans": [
{
"start": 198,
"end": 205,
"text": "Table 2",
"ref_id": "TABREF2"
},
{
"start": 278,
"end": 285,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Local Formulas",
"sec_num": "3.2.1"
},
{
"text": "character(i, c+) \u2227 isN umber(i) \u21d2 drop(i).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Formulas",
"sec_num": "3.2.1"
},
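To make the "+" expansion concrete, here is a small sketch (our illustration; thebeast handles this internally) of per-constant weights for such a template. The template name and weight values are placeholders.

```python
# Sketch of the "+" notation: a template over "c+" keeps one learned weight
# per observed constant c instead of a single shared weight. Weights here
# are placeholders; in practice they are learned (e.g., by MIRA updates).
from collections import defaultdict

weights = defaultdict(float)  # (template, constant) -> weight
weights[("num_drop", "二")] = 0.7
weights[("num_drop", "十")] = 0.2

def local_score(char, is_number, dropped):
    # character(i,c+) ∧ isNumber(i) ⇒ drop(i): the weight depends on c
    if is_number and dropped:
        return weights[("num_drop", char)]
    return 0.0
```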
{
"text": "Three kinds of local formulas are introduced in this work. Lexical features are used to capture the context information based on both character and word level information. Distance and position features are helpful in determining which parts of a entity may be removed. Hence, we also incorporate position information of word into local formulas. For example, \"\u5927\u5b66(University)\" is usually omitted when it is at the end of the entity. In practice, abbreviations of some kinds of entities can be generated through several strategies. So we introduce several local formulas to handle a group of related entities with similar suffix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Local Formulas",
"sec_num": "3.2.1"
},
{
"text": "Global formulas are designed to handle deletion of multiple characters. Since in this work, we only have one hidden predicate, drop, the global formulas incorporate correlations among different ground atoms of the drop predicate.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Formulas",
"sec_num": "3.2.2"
},
{
"text": "We propose to use global formulas to force the abbreviations to contain at least 2 characters and to make sure that at least one character is deleted. The following formulas are implemented:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Formulas",
"sec_num": "3.2.2"
},
{
"text": "|character(i, c) \u2227 drop(i)| all i| 1 |character(i, c) \u2227 \u00acdrop(i)| all i| 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Formulas",
"sec_num": "3.2.2"
},
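Read as constraints on a candidate labeling (True = drop), these two cardinality formulas can be checked as in the sketch below (our illustration; in the MLN they are soft, weighted constraints):

```python
# Check the two global cardinality constraints on a predicted labeling:
# at least one character dropped, at least two characters kept.
def satisfies_global(drops):
    dropped = sum(1 for d in drops if d)
    kept = len(drops) - dropped
    return dropped >= 1 and kept >= 2

# 北京大学 -> 北大 keeps positions 0 and 2, drops 1 and 3.
print(satisfies_global([False, True, False, True]))  # True
```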
{
"text": "Another constraint is that for the characters in some particular words should by dropped or kept simultaneously. So we add two formulas to model this:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Formulas",
"sec_num": "3.2.2"
},
{
"text": "character(i, c1) \u2227 cwM ap(i, j) \u2227 drop(i) \u2227 character(i + 1, c2) \u2227 cwM ap(i + 1, j) \u21d2 drop(i + 1) character(i, c1) \u2227 cwM ap(i, j) \u2227 \u00acdrop(i) \u2227 character(i + 1, c2) \u2227 cwM ap(i + 1, j) \u21d2 \u00acdrop(i + 1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Global Formulas",
"sec_num": "3.2.2"
},
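The following sketch (again our own illustration of the soft constraint, not the authors' code) checks whether a labeling is word-consistent in the sense of these two formulas:

```python
# Word-consistency: characters mapped to the same word (via cwMap) should be
# dropped or kept together. The MLN treats this as a soft, weighted formula,
# so violations are penalized rather than forbidden.
def consistent_within_words(drops, cw_map):
    labels_by_word = {}
    for i, j in enumerate(cw_map):
        labels_by_word.setdefault(j, set()).add(drops[i])
    return all(len(labels) == 1 for labels in labels_by_word.values())

# 北京大学 segmented as [北京, 大学]: cw_map = [0, 0, 1, 1].
# The gold abbreviation 北大 violates the hard version of this constraint.
print(consistent_within_words([False, True, False, True], [0, 0, 1, 1]))  # False
```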
{
"text": "In this section, we first describe the dataset construction method, evaluation metrics, and experimental configurations. We then describe the evaluation results and analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "For training and evaluating the performance the proposed method, we need a large number of abbreviation and corresponding standard form pairs. However, manually labeling is a laborious and time consuming work. To reduce human effort, we propose to construct annotated dataset with two steps. Firstly, we collect entities from Baidu Table 4 : The lexical level regular expressions used to match entity and abbreviation pairs.",
"cite_spans": [],
"ref_spans": [
{
"start": 332,
"end": 339,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Set",
"sec_num": "4.1"
},
{
"text": "Baike 5 , which is one of the most popular wikibased Chinese encyclopedia and contains more than 6 millions items. Secondly, we use several simple regular expressions to extract abbreviation of entities from the crawled encyclopedia and snippets of search engine.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set",
"sec_num": "4.1"
},
{
"text": "We crawled 3.2 millions articles from Baidu Baike. After that, we cleaned the HTML tags and extracted title, category and textual content from each articles. the structure of Baidu Baike is similar to that of Wikipedia, where titles are the name of the subject of the article, or may be a description of the topic. Hence, titles can be considered as the standard forms of entities. We select titles whose categories belong to location, organization, and facility to construct the standard forms list. It contains 302,633 items in total.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data Set",
"sec_num": "4.1"
},
{
"text": "The next step is to use titles and corresponding articles to extract abbreviations. Through analyzing the dataset, we observe that most of abbreviations with the explicit description can be matched through a few of lexical level regular expressions. Table 4 shows the regular expressions we used in this work. Through this step, 30,701 abbreviation and entity pairs are extracted. We randomly select 500 pairs from them and manually check their correctness. The accuracy of the extracted pairs is around 98.2%.",
"cite_spans": [],
"ref_spans": [
{
"start": 250,
"end": 257,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Set",
"sec_num": "4.1"
},
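Table 4's exact patterns are not reproduced in this parse, so the sketch below only assumes one common construction, "<entity>（简称<abbreviation>）", which encyclopedia articles frequently use to introduce an abbreviation:

```python
# Hypothetical extraction pattern (Table 4's actual regular expressions are
# not shown here): match "... 简称 <2-6 CJK chars> ..." in an article whose
# title is taken as the entity's standard form.
import re

ABBR_PATTERN = re.compile(r"简称为?[\"“]?([\u4e00-\u9fff]{2,6})")

def extract_pair(title, article_text):
    match = ABBR_PATTERN.search(article_text)
    return (title, match.group(1)) if match else None

print(extract_pair("北京大学", "北京大学（简称“北大”）是一所大学。"))  # ('北京大学', '北大')
```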
{
"text": "To further increase the number of extractions, we propose to use Web as corpus and extract abbreviation and entity pairs from snippets of search engine results. For each entity whose abbreviation cannot be identified thorough the regular expressions described above, we combine entity and \"\u7b80 \u79f0 (abbreviation)\" as queries for retrieving. The first three regular expressions in Table 4 are used to match abbreviation and entity pairs. Through this step, we get another 19,531 abbreviations. We also randomly select 500 pairs from them and manually check their correctness. The accuracy is around 95.2%. Finally, we merge the pairs extracted from Baike and search engine snippets and construct a list containing 50,232 abbreviation entity pairs. The accuracy of the list is 97.03%.",
"cite_spans": [],
"ref_spans": [
{
"start": 376,
"end": 383,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data Set",
"sec_num": "4.1"
},
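As a quick consistency check (our arithmetic, not from the paper), the merged accuracy matches the size-weighted average of the two sampled accuracies:

```python
# Size-weighted average of the two sampled accuracies reproduces the
# reported merged accuracy of 97.03%.
merged = (30701 * 0.982 + 19531 * 0.952) / (30701 + 19531)
print(f"{merged:.4f}")  # 0.9703
```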
{
"text": "For evaluating the performance of the proposed method, we conducted experiments on the automatical constructed data. Total instances are randomly split with 75% for training, 5% for development and the other 20% for testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.2"
},
{
"text": "We compare the proposed method against startof-the-art systems. Yang et al. (2009) proposed to use CRFs to model this. In this work, firstly, we re-implement the features they proposed. To fairly compare the two models, we also extend their work by including all local formulas we used in this work as features.",
"cite_spans": [
{
"start": 64,
"end": 82,
"text": "Yang et al. (2009)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.2"
},
{
"text": "In our setting, we use FudanNLP 6 toolkit and thebeast 7 Markov Logic engine. FudanNLP is developed for Chinese natural language processing. We use the Chinese word segmentation of it under the default settings. The detailed setting of thebeast engine is as follows: The inference algorithm is the MAP inference with a cutting plane approach. For parameter learning, the weights for formulas is updated by an online learning algorithm with MIRA update rule. All the initial weights are set to zeros. The number of iterations is set to 10 epochs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.2"
},
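For reference, below is a minimal sketch of a MIRA-style online update over sparse feature vectors (our illustration of the training regime described above, not thebeast's implementation; the clip constant C is an assumption):

```python
# MIRA-style update: move the weights just enough that the gold structure
# outscores the prediction by the loss, clipped by C. Sparse dicts stand in
# for feature vectors over ground formulas.
def mira_update(weights, feat_gold, feat_pred, loss, C=1.0):
    delta = {k: feat_gold.get(k, 0.0) - feat_pred.get(k, 0.0)
             for k in set(feat_gold) | set(feat_pred)}
    norm_sq = sum(v * v for v in delta.values())
    if norm_sq == 0.0:
        return weights  # prediction already matches gold features
    margin = sum(weights.get(k, 0.0) * v for k, v in delta.items())
    tau = min(C, max(0.0, (loss - margin) / norm_sq))
    for k, v in delta.items():
        weights[k] = weights.get(k, 0.0) + tau * v
    return weights
```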
{
"text": "For evaluation metrics, we use precision, recall, and F-score to evaluate the performance of character deletion operation. To evaluate the performance of the entire generated abbreviations, we also propose to use accuracy to do it. It means that the generated abbreviation is considered as correct if all characters of its standard form are correctly classified. Table 5 : The lexical level regular expressions used to match entity and abbreviation pairss.",
"cite_spans": [],
"ref_spans": [
{
"start": 363,
"end": 370,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "4.2"
},
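The sketch below (our own, with label sequences encoded as booleans where True means drop) implements the metrics described above:

```python
# P/R/F over character deletion decisions, plus whole-abbreviation accuracy
# (an abbreviation counts as correct only if every character is classified
# correctly). Sequences are per-entity lists of booleans (True = drop).
def evaluate(gold_seqs, pred_seqs):
    tp = fp = fn = exact = 0
    for gold, pred in zip(gold_seqs, pred_seqs):
        tp += sum(g and p for g, p in zip(gold, pred))
        fp += sum((not g) and p for g, p in zip(gold, pred))
        fn += sum(g and (not p) for g, p in zip(gold, pred))
        exact += gold == pred
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_score = (2 * precision * recall / (precision + recall)
               if precision + recall else 0.0)
    return precision, recall, f_score, exact / len(gold_seqs)
```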
{
"text": "To evaluate the performance of our method, we set up several variants of the proposed method to compare with performances of CRFs. The MLN-LF method uses only the lexical features described in the Table 3 . The MLN-LF+DPF method uses both lexical features and distance and position features. The MLN-Local method uses all local formulas described in the Table 3 . The MLN-Local+Global methods combine both local formulas and global formulas together. For Yang's system, we use CRFs-Yang to represent the reimplemented method with feature set proposed by them and CRFs-LF, CRFs-LF+DPF, and CRFs-Local to represent feature sets similar as used by MLN. Table 5 shows the performances of different methods. We can see that MLN-Local+Global achieve the best accuracy of entire abbreviation among all the methods. Although, the F-score of MLN-Local+Global is slightly worse than MLN-Local. We think that the global formulas contribute a lot for the entire accuracy. However, since the constraint of simultaneously dropping or keeping characters does not consider context, it may also bring some false matches. We can also see that, the methods modeled by MLN significantly outperform the performances of CRFs no matter which feature sets are used(base on a paired 2-tailed t-test with p < 0.05). We think that overfitting may be one of the main reasons.",
"cite_spans": [],
"ref_spans": [
{
"start": 197,
"end": 204,
"text": "Table 3",
"ref_id": null
},
{
"start": 354,
"end": 361,
"text": "Table 3",
"ref_id": null
},
{
"start": 650,
"end": 657,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "From the perspective of entire accuracy, comparing the performances of MLN-LF+DPF and MLN-Local, we can see that features for entities with special suffixes contribute a lot. The relative improvement of MLN-Local is around 19.7%. It shows that the explicit rules are useful for improv-ing the performance. However, these explicit rules only bring a small improvement to the accuracy of CRFs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "Comparing the performances of CRFs and MLNs, we can observe that CRFs achieve slightly better performance in classifying single characters. However MLNs achieve significantly better results of the entire accuracies. We think that these kinds of long distance features can be well handled by MLNs. These features are useful to capture the global constraints. Hence, MLNs can achieve better accuracy of the entire abbreviations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "In this paper, we also investigate the performance of different methods as the training data size are varied. Figure 1 shows the results. All full lines show the results of MLNs with different feature sets. The dot dash lines show the results of CRFs. From the results, we can observe that MLNs perform better than CRFs in most of cases. Except that MLNs with only lexical features work slightly worse than CRFs with small number of training data. From the figure, we also observe that the performance improvement of CRFs are not significant when the number of training data is larger than 35,000. However, methods using MLNs benefit a lot from the increasing data size. If more training instances are given, the performance of MLNs can still be improved. From the training procedures, we also empirically find that the training iterations of MLNs are small. It means that the convergence rate of MLNs is fast. To evaluate the convergence rate, we also evaluate the dependence of the performances of MLNs on the number of training epochs. Figure 2 shows the results of MLN-Local and MLN-Local+Global. From the results, we can observe that the best performances can be achieved when the number of training epochs is more than nine. Hence, in this work, we set the number of iterations to be 10. ",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 118,
"text": "Figure 1",
"ref_id": "FIGREF1"
},
{
"start": 1039,
"end": 1047,
"text": "Figure 2",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "In this paper, we focus on named entity abbreviation generation problem. We propose to use firstorder logic to model rich linguistic features and global constraints. We convert the abbreviation generation to character deletion and keep operations. Linguistic features and relations between different operations are represented by local and global logic formulas respectively. Markov Logic Network frameworks is adopted for learning and predication. To reduce the human effort in constructing the training data, we also introduce an automatical training data construction methods with sample strategies. We collect standard forms of entities from online encyclopedia, use a few simple patterns to extract abbreviations from documents and search engine snippets with high precision as training data. Experimental results show that the proposed methods achieve better performance than state-of-the-art methods and can efficiently process large volumes of data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "6 Acknowledgement",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "5"
},
{
"text": "http://code.google.com/p/thebeast 2 http://hazy.cs.wisc.edu/hazy/tuffy/ 3 http://www9-old.in.tum.de/people/jain/mlns/ 4 http://alchemy.cs.washington.edu/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://baike.baidu.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://code.google.com/p/fudannlp 7 http://code.google.com/p/thebeast",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The authors wish to thank the anonymous reviewers for their helpful comments. This work was partially funded by National Natural Science Foundation of China (61003092, 61073069), National Major Science and Technology Special Project of China (2014ZX03006005), Shanghai Municipal Science and Technology Commission (No.12511504502) and \"Chen Guang\" project supported by Shanghai Municipal Education Commission and Shanghai Education Development Foundation(11CG05).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "acknowledgement",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "(e+)\u21d2drop(i) character(i,c+) \u2227 word(j,w+) \u2227 cwMap(i,j) \u2227 isCity(j) \u2227 entityType(e+) \u21d2drop(i) character(i,c+) \u2227 word(j,w+) \u2227 cwMap(i,j) \u2227 isCity(j)\u2227 word(j,w1+) \u2227word(j+1,w2+) \u21d2drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j-1,w+) \u21d2 drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+1,w+) \u21d2 drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+2,w+) \u21d2 drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+3,w+) \u21d2 drop(i) Distance and Position Features character(i,c) \u2227 lenWord(wn+) \u2227 cwPosition(i,wp+) \u21d2drop(i) character(i,c) \u2227 lenChar(cn+) \u2227 cwPosition(i,wp+) \u21d2drop(i) character(i,c+) \u2227 cwMap(i,j) \u2227 word(j,w+) \u2227 cwPosition(i,wp+)\u21d2drop(i) character(i,c+) \u2227 cwMap(i,j) \u2227 word(j,w+) \u2227 lenWord(wn+)\u21d2drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+1,w+) \u2227 cwPosition(i,wp+)\u21d2drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+1,w+) \u2227 cwPosition(i,wp+) \u2227 entityType(t+)\u21d2drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+2,w+) \u2227 cwPosition(i,wp+)\u21d2drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+2,w+) \u2227 cwPosition(i,wp+) \u2227 entityType(t+)\u21d2drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+3,w+) \u2227 cwPosition(i,wp+)\u21d2drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+3,w+) \u2227 cwPosition(i,wp+) \u2227 entityType(t+)\u21d2drop(i) Features for Entity with Special Suffixes character(i,c+) \u2227 cwMap(i,j) \u2227 word(j,w+) \u2227 lenWord(l+) \u2227 entityType(t+) \u21d2 drop(i) character(i,c+) \u2227 isCity(j) \u2227 cwMap(i,j) \u2227 word(j+1,w+) \u2227 entityType(t+) \u21d2 drop(i) character(i,c) \u2227 isCity(j) \u2227 cwMap(i,j) \u2227 word(j,w1+) \u2227 word(j+1,w2+) \u2227 entityType(t+) \u21d2 drop(i) character(i,c) \u2227 cwPosition(i,p+) \u2227 \u00ac isCity(j) \u2227 cwMap(i,j) \u2227 word(j+1,w+) \u2227 (sufSchool(j+1) \u2228 sufOrg(j+1) \u2228 sufGov(j+1)) \u21d2 drop(i) character(i,c+) \u2227 cwMap(i,j) \u2227 word(j+1,w+) \u2227 (sufSchool(j+1) \u2228 sufOrg(j+1) \u2228 sufGov(j+1)) \u21d2 drop(i) character(i,c+) \u2227 cwMap(i,j) \u2227 word(j-1,w+) \u2227 (sufSchool(j) \u2228 sufOrg(j) \u2228 sufGov(j)) \u21d2 drop(i) character(i,c+) \u2227 cwMap(i,j) \u2227 word(j-2,w+) \u2227 (sufSchool(j) \u2228 sufOrg(j) \u2228 sufGov(j)) \u21d2 drop(i) character(i,c+) \u2227 cwMap(i,j) \u2227 word(j,w1+) \u2227 cwMap(ip,j-1) \u2227 city(ip,p) \u2227 (sufSchool(j+1) \u2228 sufOrg(j+1) \u2228 sufGov(j+1)) \u21d2 drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j,w+) \u2227 entityType(t+) \u2227 \u00ac isCity(j) \u21d2 drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+1,w+)\u2227 entityType(t+) \u2227 \u00ac isCity(j) \u21d2 drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+2,w+)\u2227 entityType(t+) \u2227 \u00ac isCity(j) \u21d2 drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+3,w+)\u2227 entityType(t+) \u2227 \u00ac isCity(j)) \u21d2 drop(i) character(i,c) \u2227 cwMap",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": ",c+) \u2227 word(j,w+) \u2227 cwMap(i,j) \u21d2drop(i) character(i,c+) \u2227 word(j,w+) \u2227 cwMap(i,j) \u2227 lastWord(j) \u21d2drop(i) character(i,c+) \u2227 word(j,w+) \u2227 cwMap(i,j) \u2227 idf(j,v+) \u21d2drop(i) character(i,c+) \u2227 word(j,w+) \u2227 cwMap(i,j) \u2227 entity(e+)\u21d2drop(i) character(i,c+) \u2227 word(j,w+) \u2227 cwMap(i,j) \u2227 isCity(j) \u2227 entityType(e+) \u21d2drop(i) character(i,c+) \u2227 word(j,w+) \u2227 cwMap(i,j) \u2227 isCity(j)\u2227 word(j,w1+) \u2227word(j+1,w2+) \u21d2drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j-1,w+) \u21d2 drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+1,w+) \u21d2 drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+2,w+) \u21d2 drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+3,w+) \u21d2 drop(i) Distance and Position Features character(i,c) \u2227 lenWord(wn+) \u2227 cwPosition(i,wp+) \u21d2drop(i) character(i,c) \u2227 lenChar(cn+) \u2227 cwPosition(i,wp+) \u21d2drop(i) character(i,c+) \u2227 cwMap(i,j) \u2227 word(j,w+) \u2227 cwPosition(i,wp+)\u21d2drop(i) character(i,c+) \u2227 cwMap(i,j) \u2227 word(j,w+) \u2227 lenWord(wn+)\u21d2drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+1,w+) \u2227 cwPosition(i,wp+)\u21d2drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+1,w+) \u2227 cwPosition(i,wp+) \u2227 entityType(t+)\u21d2drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+2,w+) \u2227 cwPosition(i,wp+)\u21d2drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+2,w+) \u2227 cwPosition(i,wp+) \u2227 entityType(t+)\u21d2drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+3,w+) \u2227 cwPosition(i,wp+)\u21d2drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+3,w+) \u2227 cwPosition(i,wp+) \u2227 entityType(t+)\u21d2drop(i) Features for Entity with Special Suffixes character(i,c+) \u2227 cwMap(i,j) \u2227 word(j,w+) \u2227 lenWord(l+) \u2227 entityType(t+) \u21d2 drop(i) character(i,c+) \u2227 isCity(j) \u2227 cwMap(i,j) \u2227 word(j+1,w+) \u2227 entityType(t+) \u21d2 drop(i) character(i,c) \u2227 isCity(j) \u2227 cwMap(i,j) \u2227 word(j,w1+) \u2227 word(j+1,w2+) \u2227 entityType(t+) \u21d2 drop(i) character(i,c) \u2227 cwPosition(i,p+) \u2227 \u00ac isCity(j) \u2227 cwMap(i,j) \u2227 word(j+1,w+) \u2227 (sufSchool(j+1) \u2228 sufOrg(j+1) \u2228 sufGov(j+1)) \u21d2 drop(i) character(i,c+) \u2227 cwMap(i,j) \u2227 word(j+1,w+) \u2227 (sufSchool(j+1) \u2228 sufOrg(j+1) \u2228 sufGov(j+1)) \u21d2 drop(i) character(i,c+) \u2227 cwMap(i,j) \u2227 word(j-1,w+) \u2227 (sufSchool(j) \u2228 sufOrg(j) \u2228 sufGov(j)) \u21d2 drop(i) character(i,c+) \u2227 cwMap(i,j) \u2227 word(j-2,w+) \u2227 (sufSchool(j) \u2228 sufOrg(j) \u2228 sufGov(j)) \u21d2 drop(i) character(i,c+) \u2227 cwMap(i,j) \u2227 word(j,w1+) \u2227 cwMap(ip,j-1) \u2227 city(ip,p) \u2227 (sufSchool(j+1) \u2228 sufOrg(j+1) \u2228 sufGov(j+1)) \u21d2 drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j,w+) \u2227 entityType(t+) \u2227 \u00ac isCity(j) \u21d2 drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+1,w+)\u2227 entityType(t+) \u2227 \u00ac isCity(j) \u21d2 drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+2,w+)\u2227 entityType(t+) \u2227 \u00ac isCity(j) \u21d2 drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+3,w+)\u2227 entityType(t+) \u2227 \u00ac isCity(j)) \u21d2 drop(i) character(i,c) \u2227 cwMap(i,j) \u2227 word(j+4,w+)\u2227 entityType(t+) \u2227 \u00ac isCity(j) \u21d2 drop(i) cwMap(i,j) \u2227 word(j-1,w+) \u2227 isCity(j-1) \u2227 entityType(t+) \u21d2 drop(i) 
cwMap(i,j) \u2227 (j=0) \u2227 word(j,w) \u2227 entityType(t+) \u21d2 drop(i) Table 3: Descriptions of local formulas.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "of the Twenty-Second international joint conference on Artificial Intelligence -Volume Volume Two, IJCAI'11",
"authors": [
{
"first": "References",
"middle": [],
"last": "David Andrzejewski",
"suffix": ""
},
{
"first": "Xiaojin",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Craven",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Recht",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "1171--1177",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "References David Andrzejewski, Xiaojin Zhu, Mark Craven, and Benjamin Recht. of the Twenty-Second international joint conference on Artificial Intelligence -Volume Volume Two, IJCAI'11, pages 1171-1177. AAAI Press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A phrase-based statistical model for sms text normalization",
"authors": [
{
"first": "Aiti",
"middle": [],
"last": "Aw",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Juan",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions",
"volume": "",
"issue": "",
"pages": "33--40",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "AiTi Aw, Min Zhang, Juan Xiao, and Jian Su. 2006. A phrase-based statistical model for sms text normalization. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 33-40, Sydney, Australia, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Twitter mood predicts the stock market",
"authors": [
{
"first": "Johan",
"middle": [],
"last": "Bollen",
"suffix": ""
},
{
"first": "Huina",
"middle": [],
"last": "Mao",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Zeng",
"suffix": ""
}
],
"year": 2011,
"venue": "Journal of Computational Science",
"volume": "2",
"issue": "1",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johan Bollen, Huina Mao, and Xiaojun Zeng. 2011. Twitter mood predicts the stock market. Journal of Computational Science, 2(1):1 -8.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "A preliminary study on probabilistic models for chinese abbreviations",
"authors": [
{
"first": "Jing-Shin",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Yu-Tso",
"middle": [],
"last": "Lai",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Third SIGHAN Workshop on Chinese Language Learning",
"volume": "",
"issue": "",
"pages": "9--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing-shin Chang and Yu-Tso Lai. 2004. A pre- liminary study on probabilistic models for chinese abbreviations. In Proceedings of the Third SIGHAN Workshop on Chinese Language Learning, pages 9- 16.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Mining atomic chinese abbreviations with a probabilistic single character recovery model. Language Resources and Evaluation",
"authors": [
{
"first": "Jing-Shin",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Wei-Lun",
"middle": [],
"last": "Teng",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "40",
"issue": "",
"pages": "367--374",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jing-Shin Chang and Wei-Lun Teng. 2006. Mining atomic chinese abbreviations with a probabilistic single character recovery model. Language Re- sources and Evaluation, 40(3-4):367-374.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Lexical normalisation of short text messages: Makn sens a #twitter",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "368--378",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Han and Timothy Baldwin. 2011. Lexical normalisation of short text messages: Makn sens a #twitter. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 368-378, Portland, Oregon, USA, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Automatically constructing a normalisation dictionary for microblogs",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Cook",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bo Han, Paul Cook, and Timothy Baldwin. 2012. Automatically constructing a normalisation dictio- nary for microblogs. In Proceedings of the 2012",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Natural Language Processing and Computational Natural Language Learning",
"authors": [],
"year": null,
"venue": "",
"volume": "12",
"issue": "",
"pages": "421--432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, EMNLP-CoNLL '12, pages 421-432, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Using first-order logic to compress sentences",
"authors": [
{
"first": "Minlie",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xing",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Feng",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Xiaoyan",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minlie Huang, Xing Shi, Feng Jin, and Xiaoyan Zhu. 2012. Using first-order logic to compress sentences. In Proceedings of the Twenty-Sixth AAAI Conference on Artificial Intelligence.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Target-dependent twitter sentiment classification",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "Mo",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Tiejun",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "151--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Long Jiang, Mo Yu, Ming Zhou, Xiaohua Liu, and Tiejun Zhao. 2011. Target-dependent twitter sentiment classification. In Proceedings of the 49th Annual Meeting of the Association for Computa- tional Linguistics: Human Language Technologies, pages 151-160, Portland, Oregon, USA, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning to refine an automatically extracted knowledge base using markov logic",
"authors": [
{
"first": "Shangpu",
"middle": [],
"last": "Jiang",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Lowd",
"suffix": ""
},
{
"first": "Dejing",
"middle": [],
"last": "Dou",
"suffix": ""
}
],
"year": 2012,
"venue": "IEEE 12th International Conference on",
"volume": "",
"issue": "",
"pages": "912--917",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shangpu Jiang, D. Lowd, and Dejing Dou. 2012. Learning to refine an automatically extracted knowl- edge base using markov logic. In Data Mining (ICDM), 2012 IEEE 12th International Conference on, pages 912-917.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Mining and modeling relations between formal and informal Chinese phrases from web corpora",
"authors": [
{
"first": "Zhifei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1031--1040",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhifei Li and David Yarowsky. 2008a. Mining and modeling relations between formal and informal Chinese phrases from web corpora. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 1031- 1040, Honolulu, Hawaii, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Unsupervised translation induction for chinese abbreviations using monolingual corpora",
"authors": [
{
"first": "Zhifei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL-08: HLT",
"volume": "",
"issue": "",
"pages": "425--433",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhifei Li and David Yarowsky. 2008b. Unsupervised translation induction for chinese abbreviations using monolingual corpora. In Proceedings of ACL- 08: HLT, pages 425-433, Columbus, Ohio, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Pet: a statistical model for popular events tracking in social communities",
"authors": [
{
"first": "Bo",
"middle": [],
"last": "Cindy Xide Lin",
"suffix": ""
},
{
"first": "Qiaozhu",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Jiawei",
"middle": [],
"last": "Mei",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Han",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 16th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '10",
"volume": "",
"issue": "",
"pages": "929--938",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cindy Xide Lin, Bo Zhao, Qiaozhu Mei, and Jiawei Han. 2010. Pet: a statistical model for popular events tracking in social communities. In Proceedings of the 16th ACM SIGKDD internation- al conference on Knowledge discovery and data mining, KDD '10, pages 929-938, New York, NY, USA. ACM.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "for Computational Linguistics: Long Papers",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Fuliang",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "1",
"issue": "",
"pages": "1035--1044",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Liu, Fuliang Weng, and Xiao Jiang. for Computational Lin- guistics: Long Papers -Volume 1, ACL '12, pages 1035-1044, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Entitycentric topic-oriented opinion summarization in twitter",
"authors": [
{
"first": "Xinfan",
"middle": [],
"last": "Meng",
"suffix": ""
},
{
"first": "Furu",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Houfeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '12",
"volume": "",
"issue": "",
"pages": "379--387",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xinfan Meng, Furu Wei, Xiaohua Liu, Ming Zhou, Sujian Li, and Houfeng Wang. 2012. Entity- centric topic-oriented opinion summarization in twitter. In Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, KDD '12, pages 379-387, New York, NY, USA. ACM.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A discriminative approach to japanese abbreviation extraction",
"authors": [
{
"first": "Naoaki",
"middle": [],
"last": "Okazaki",
"suffix": ""
},
{
"first": "Mitsuru",
"middle": [],
"last": "Ishizuka",
"suffix": ""
},
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Third International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naoaki Okazaki, Mitsuru Ishizuka, and Jun'ichi Tsujii. 2008. A discriminative approach to japanese abbreviation extraction. In Proceedings of the Third International Joint Conference on Natural Language Processing (IJCNLP 2008), pages 889-",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Joint unsupervised coreference resolution with markov logic",
"authors": [
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Domingos",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08",
"volume": "",
"issue": "",
"pages": "650--659",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoifung Poon and Pedro Domingos. 2008. Joint unsupervised coreference resolution with markov logic. In Proceedings of the Conference on Em- pirical Methods in Natural Language Processing, EMNLP '08, pages 650-659, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Markov logic networks",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Richardson",
"suffix": ""
},
{
"first": "Pedro",
"middle": [],
"last": "Domingos",
"suffix": ""
}
],
"year": 2006,
"venue": "Machine Learning",
"volume": "62",
"issue": "",
"pages": "107--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Richardson and Pedro Domingos. 2006. Markov logic networks. Machine Learning, 62(1- 2):107-136.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Collective semantic role labelling with markov logic",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Meza-Ruiz",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Twelfth Conference on Computational Natural Language Learning, CoNLL '08",
"volume": "",
"issue": "",
"pages": "193--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel and Ivan Meza-Ruiz. 2008. Collective semantic role labelling with markov logic. In Proceedings of the Twelfth Conference on Computational Natural Language Learning, CoNLL '08, pages 193-197, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Earthquake shakes twitter users: real-time event detection by social sensors",
"authors": [
{
"first": "Takeshi",
"middle": [],
"last": "Sakaki",
"suffix": ""
},
{
"first": "Makoto",
"middle": [],
"last": "Okazaki",
"suffix": ""
},
{
"first": "Yutaka",
"middle": [],
"last": "Matsuo",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 19th international conference on World wide web, WWW '10",
"volume": "",
"issue": "",
"pages": "851--860",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Takeshi Sakaki, Makoto Okazaki, and Yutaka Matsuo. 2010. Earthquake shakes twitter users: real-time event detection by social sensors. In Proceedings of the 19th international conference on World wide web, WWW '10, pages 851-860, New York, NY, USA. ACM.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Entity resolution with markov logic",
"authors": [
{
"first": "P",
"middle": [],
"last": "Singla",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Domingos",
"suffix": ""
}
],
"year": 2006,
"venue": "Data Mining, 2006. ICDM '06. Sixth International Conference on",
"volume": "",
"issue": "",
"pages": "572--582",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "P. Singla and P. Domingos. 2006. Entity resolution with markov logic. In Data Mining, 2006. ICDM '06. Sixth International Conference on, pages 572- 582.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Extracting chinese abbreviation-definition pairs from anchor texts",
"authors": [
{
"first": "Li-Xing",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Ya-Bin",
"middle": [],
"last": "Zheng",
"suffix": ""
},
{
"first": "Zhi-Yuan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mao-Song",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Can-Hui",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2011,
"venue": "Machine Learning and Cybernetics (ICMLC)",
"volume": "4",
"issue": "",
"pages": "1485--1491",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li-Xing Xie, Ya-Bin Zheng, Zhi-Yuan Liu, Mao- Song Sun, and Can-Hui Wang. 2011. Extracting chinese abbreviation-definition pairs from anchor texts. In Machine Learning and Cybernetics (ICMLC), volume 4, pages 1485-1491.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Automatic chinese abbreviation generation using conditional random field",
"authors": [
{
"first": "Dong",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Sadaoki",
"middle": [],
"last": "Yi-Cheng Pan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Furui",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "273--276",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong Yang, Yi-cheng Pan, and Sadaoki Furui. 2009. Automatic chinese abbreviation generation using conditional random field. In Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Com- panion Volume: Short Papers, NAACL-Short '09, pages 273-276, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Vocabulary expansion through automatic abbreviation generation for chinese voice search",
"authors": [
{
"first": "Dong",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Yi-Cheng",
"middle": [],
"last": "Pan",
"suffix": ""
},
{
"first": "Sadaoki",
"middle": [],
"last": "Furui",
"suffix": ""
}
],
"year": 2012,
"venue": "Computer Speech & Language",
"volume": "26",
"issue": "5",
"pages": "321--335",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dong Yang, Yi-Cheng Pan, and Sadaoki Furui. 2012. Vocabulary expansion through automatic abbreviation generation for chinese voice search. Computer Speech & Language, 26(5):321 -335.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Jointly identifying temporal relations with markov logic",
"authors": [
{
"first": "Katsumasa",
"middle": [],
"last": "Yoshikawa",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Masayuki",
"middle": [],
"last": "Asahara",
"suffix": ""
},
{
"first": "Yuji",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "1",
"issue": "",
"pages": "405--413",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katsumasa Yoshikawa, Sebastian Riedel, Masayuki Asahara, and Yuji Matsumoto. 2009. Jointly identifying temporal relations with markov logic. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 1 -Volume 1, ACL '09, pages 405-413, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "The impacts of training data size."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "The performance curves on the number of training epochs."
},
"TABREF1": {
"num": null,
"content": "<table/>",
"text": "",
"html": null,
"type_str": "table"
},
"TABREF2": {
"num": null,
"content": "<table/>",
"text": "Descriptions of observed predicates.",
"html": null,
"type_str": "table"
}
}
}
}