|
{ |
|
"paper_id": "I13-1021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T07:13:37.869737Z" |
|
}, |
|
"title": "Capturing Long-distance Dependencies in Sequence Models: A Case Study of Chinese Part-of-speech Tagging", |
|
"authors": [ |
|
{ |
|
"first": "Weiwei", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "The MOE Key Laboratory of Computational Linguistics", |
|
"institution": "Peking University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Xiaochang", |
|
"middle": [], |
|
"last": "Peng", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "The MOE Key Laboratory of Computational Linguistics", |
|
"institution": "Peking University", |
|
"location": {} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Xiaojun", |
|
"middle": [], |
|
"last": "Wan", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "The MOE Key Laboratory of Computational Linguistics", |
|
"institution": "Peking University", |
|
"location": {} |
|
}, |
|
"email": "wanxiaojun@pku.edu.cn" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "This paper is concerned with capturing long-distance dependencies in sequence models. We propose a two-step strategy. First, the stacked learning technique is applied to integrate sequence models that are good at exploring local information and other high complexity models that are good at capturing long-distance dependencies. Second, the structure compilation technique is employed to transfer the predictive power of hybrid models to sequence models via large-scale unlabeled data. To investigate the feasibility of our idea, we study Chinese POS tagging. Experiments on the Chinese Treebank data demonstrate the effectiveness of our methods. The re-compiled models not only achieve high accuracy with respect to per token classification, but also serve as a front-end to a parser well.", |
|
"pdf_parse": { |
|
"paper_id": "I13-1021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "This paper is concerned with capturing long-distance dependencies in sequence models. We propose a two-step strategy. First, the stacked learning technique is applied to integrate sequence models that are good at exploring local information and other high complexity models that are good at capturing long-distance dependencies. Second, the structure compilation technique is employed to transfer the predictive power of hybrid models to sequence models via large-scale unlabeled data. To investigate the feasibility of our idea, we study Chinese POS tagging. Experiments on the Chinese Treebank data demonstrate the effectiveness of our methods. The re-compiled models not only achieve high accuracy with respect to per token classification, but also serve as a front-end to a parser well.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "Sequential classification models provide very important solutions to pattern recognition tasks that involve the automatic assignment of a categorical label to each token of a sequence of observed values. A common example is part-of-speech (POS) tagging, which seeks to assign a grammatical category to each word in an input sentence. Standard machine learning algorithms to sequential tagging, e.g. linear-chain conditional random fields and max-margin Markov network, directly exploit local dependencies and perform quite well for a large number of sequence labeling tasks. In these models, usually, the relationships between two (or three) successive labels are parameterized and encoded as a single feature, and Viterbi style dynamic programming algorithms are applied to inference over a lattice. Although sequence models perform well for many applications, they are inadequate for tasks where many long-distance dependencies are involved.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
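To make the lattice inference concrete, the following is a minimal sketch of first-order Viterbi decoding over per-token emission scores and a tag-transition matrix; the toy score arrays and the two-tag setup are illustrative assumptions, not data from the paper.

```python
import numpy as np

def viterbi(emissions, transitions):
    """First-order Viterbi decoding over a lattice.

    emissions:   (n_tokens, n_tags) local score of each tag at each token.
    transitions: (n_tags, n_tags) score of moving from tag i to tag j.
    Returns the highest-scoring tag sequence as a list of tag indices.
    """
    n, k = emissions.shape
    score = emissions[0].copy()            # best score ending in each tag
    back = np.zeros((n, k), dtype=int)     # backpointers
    for t in range(1, n):
        # cand[i, j] = best score ending at t-1 in tag i, then moving to tag j
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    tags = [int(score.argmax())]
    for t in range(n - 1, 0, -1):          # follow backpointers
        tags.append(int(back[t, tags[-1]]))
    return tags[::-1]

# Toy lattice: 3 tokens, 2 tags.
em = np.array([[2.0, 0.5], [0.1, 1.5], [1.0, 1.2]])
tr = np.array([[0.5, -0.2], [-0.3, 0.8]])
print(viterbi(em, tr))
```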
|
{ |
|
"text": "Sequential classification models play an important role in natural language processing (NLP). Several fundamental NLP tasks, including named entity recognition, POS tagging, text chunking, supertagging, etc., employ sequential classifiers for lexical and syntactic disambiguation. In addition to learning linear chain structures, sequence models can even be applied to acquire hierarchical syntactic structures (Tsuruoka et al., 2009) . However, long-distance dependencies widely exist in linguistic structures, and many NLP systems suffer from the incapability of capturing these dependencies. For example, previous work has shown that sequence models alone cannot deal with syntactic ambiguities well (Clark and Curran, 2004; Tsuruoka et al., 2009) . On the contrary, stateof-the-art systems usually utilize high complexity models, such as lexicalized PCFG models for syntactic parsing, to achieve high accuracy. Unfortunately, they are not suitable for many real world applications due to the sacrifice of efficiency.", |
|
"cite_spans": [ |
|
{ |
|
"start": 411, |
|
"end": 434, |
|
"text": "(Tsuruoka et al., 2009)", |
|
"ref_id": "BIBREF22" |
|
}, |
|
{ |
|
"start": 703, |
|
"end": 727, |
|
"text": "(Clark and Curran, 2004;", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 728, |
|
"end": 750, |
|
"text": "Tsuruoka et al., 2009)", |
|
"ref_id": "BIBREF22" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we are concerned with capturing long-distance dependencies in sequence models. Our goal is to develop efficient models with linear time complexity that are also capable to capture non-local dependencies. Two techniques are studied to achieve this goal. First, stacked learning (Breiman, 1996) is employed to integrate sequence models that are good at exploring local information and other high complexity models that are good at capturing non-local dependencies. By combining complementary strengths of heterogeneous models, hybrid systems can obtain more accurate results. Second, structure compilation (Liang et al., 2008) is employed to transfer the predictive power of hybrid models to sequence models via large-scale unlabeled data. In particular, hybrid systems are utilized to create large-scale pseudo training data for cheap sequence models. A discriminative model can be improved by incorporating more features, while a generative latent variable model can be improved by increasing the number of latent variables. By using stacking and structure compilation techniques, a sequence model can be enhanced to better capture longdistance dependencies and to achieve more accurate results.", |
|
"cite_spans": [ |
|
{ |
|
"start": 292, |
|
"end": 307, |
|
"text": "(Breiman, 1996)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 619, |
|
"end": 639, |
|
"text": "(Liang et al., 2008)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "To demonstrate the feasibility to capture longdistance dependencies in a sequence model, we present our work on Chinese POS tagging. The Chinese language has a number of characteristics that make Chinese POS tagging particularly challenging. While simple sequential classifiers can easily achieve tagging accuracies of above 97% on English, Chinese POS tagging has proven to be more challenging and has obtained accuracies of about 93-94% (Huang et al., 2009; Sun and Uszkoreit, 2012) when applying sequence models. Recent work shows that higher accuracy (c.a. 95%) can be achieved by applying advanced learning techniques to capture deep lexical relations (Sun and Uszkoreit, 2012) . Especially, syntagmatic lexical relations have been shown playing an essential role in Chinese POS tagging. To capture such relations, an accurate POS tagging model should know more information about long range dependencies. Previous work has used syntactic parsers in either constituency or dependency formalisms to exploit such useful information (Sun and Uszkoreit, 2012; Hatori et al., 2011) . However, it is inapproporiate to employ computationally expensive parsers to improve POS tagging for many realistic NLP applications, mainly due to efficiency considerations.", |
|
"cite_spans": [ |
|
{ |
|
"start": 439, |
|
"end": 459, |
|
"text": "(Huang et al., 2009;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 460, |
|
"end": 484, |
|
"text": "Sun and Uszkoreit, 2012)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 657, |
|
"end": 682, |
|
"text": "(Sun and Uszkoreit, 2012)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1034, |
|
"end": 1059, |
|
"text": "(Sun and Uszkoreit, 2012;", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 1060, |
|
"end": 1080, |
|
"text": "Hatori et al., 2011)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In this paper, we study several hybrid systems that are built upon various complementary tagging systems. We investigate stacked learning to build more accurate solutions by integrating heterogeneous models. Experiments on the Chinese Treebank (CTB) data show that stacking is very effective to build high-accuracy tagging systems. Although predictive powers of hybrid systems are significantly better than individual systems, they are not suitable for large-scale real word applications that have stringent time requirements. To improve POS tagging efficiency without loss of accuracy, we explore unlabeled data to transfer the predictive power of complex, inefficient models to simple, efficient models. Experiments show that unlabeled data is effective to re-compile simple models, including latent variable hidden Markov models, local and global linear classifiers. On one hand, the precison in terms of word classification is improved to 95.33%, which reachs the state-ofthe-art. On the other hand, re-compiled models are adapted based on parsing results, and as a result the ability to capture syntagmatic lexical relations is improved as well. Different from the purely supervised sequence models, re-compiled models also serve as a front-end to a parser well.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "The Chinese language has a number of characteristics that make Chinese POS tagging particularly challenging. For example, Chinese is characterized by the lack of formal devices such as morphological tense and number that often provide important clues for syntactic processing. Chinese POS tagging has proven to be very difficult and has obtained accuracies of about 93-94% (Huang et al., 2009; Li et al., 2011; Hatori et al., 2011; Sun and Uszkoreit, 2012) . On the other hand, Chinese POS information is very important for advanced NLP tasks, e.g. supertagging, full parsing and semantic role labeling. Previous work has repeatedly demonstrated the significant performance gap of NLP systems while using gold standard and automatically predicted POS tags (Zhang and Clark, 2009; Li et al., 2011; Tse and Curran, 2012) . In this section, we give a brief introduction and a comparative analysis to several models that are recently designed to resolve the Chinese POS tagging problem.", |
|
"cite_spans": [ |
|
{ |
|
"start": 373, |
|
"end": 393, |
|
"text": "(Huang et al., 2009;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 394, |
|
"end": 410, |
|
"text": "Li et al., 2011;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 411, |
|
"end": 431, |
|
"text": "Hatori et al., 2011;", |
|
"ref_id": "BIBREF9" |
|
}, |
|
{ |
|
"start": 432, |
|
"end": 456, |
|
"text": "Sun and Uszkoreit, 2012)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 756, |
|
"end": 779, |
|
"text": "(Zhang and Clark, 2009;", |
|
"ref_id": "BIBREF25" |
|
}, |
|
{ |
|
"start": 780, |
|
"end": 796, |
|
"text": "Li et al., 2011;", |
|
"ref_id": "BIBREF12" |
|
}, |
|
{ |
|
"start": 797, |
|
"end": 818, |
|
"text": "Tse and Curran, 2012)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Background", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Local linear model (LLM) A very simple approach to POS tagging is to formulate it as a local word classification problem. Various features can be drawn upon information sources such as word forms and characters that constitute words. Previous studies on many languages have shown that local classification is inadequate to capture structural information of output labels, and thus does not perform as well as structured models.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Various Chinese POS Tagging Models", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Linear-chain global linear model (LGLM) Sequence labeling models can capture output structures by exploiting local dependencies among words. A global linear model is flexible to in-clude linguistic knowledge from multiple information sources, and thus suitable to recognize more new words. A majority of state-of-the-art English POS taggers are based on LGLMs, e.g. structured perceptron (Collins, 2002) and conditional random fields (Lafferty et al., 2001) . Such models are also very popular for building Chinese POS taggers (Sun and Uszkoreit, 2012 ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 388, |
|
"end": 403, |
|
"text": "(Collins, 2002)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 434, |
|
"end": 457, |
|
"text": "(Lafferty et al., 2001)", |
|
"ref_id": "BIBREF11" |
|
}, |
|
{ |
|
"start": 527, |
|
"end": 551, |
|
"text": "(Sun and Uszkoreit, 2012", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Various Chinese POS Tagging Models", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Hidden Markov model with latent variables (HMMLA) Generative models with latent annotations (LA) obtain state-of-the-art performance for a number of NLP tasks. For example, both PCFG and TSG with refined latent variables achieve excellent results for syntactic parsing (Matsuzaki et al., 2005; Shindo et al., 2012) . For Chinese POS tagging, Huang, Eidelman and Harper (2009) described and evaluated a bi-gram HMM tagger that utilizes latent annotations. The use of latent annotations substantially improves the performance of a simple generative bigram tagger, outperforming a trigram HMM tagger with sophisticated smoothing.", |
|
"cite_spans": [ |
|
{ |
|
"start": 269, |
|
"end": 293, |
|
"text": "(Matsuzaki et al., 2005;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 294, |
|
"end": 314, |
|
"text": "Shindo et al., 2012)", |
|
"ref_id": "BIBREF18" |
|
}, |
|
{ |
|
"start": 342, |
|
"end": 375, |
|
"text": "Huang, Eidelman and Harper (2009)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Various Chinese POS Tagging Models", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "PCFG Parsing with latent variables (PCFGLA) POS tags can be taken as preterminals of a constituency parse tree, so a constituency parser can also provide POS information. The majority of the state-of-the-art constituent parsers are based on generative PCFG learning, with lexicalized (Collins, 2003; Charniak, 2000) or latent annotation (Matsuzaki et al., 2005; Petrov et al., 2006) refinements. Compared to complex lexicalized parsers, the PCFGLA parsers leverage on an automatic procedure to learn refined grammars and are more robust to parse many non-English languages that are not well studied. For Chinese, a PCFGLA parser achieves the state-of-the-art performance and outperforms many other types of parsers (Zhang and Clark, 2009) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 284, |
|
"end": 299, |
|
"text": "(Collins, 2003;", |
|
"ref_id": "BIBREF5" |
|
}, |
|
{ |
|
"start": 300, |
|
"end": 315, |
|
"text": "Charniak, 2000)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 337, |
|
"end": 361, |
|
"text": "(Matsuzaki et al., 2005;", |
|
"ref_id": "BIBREF14" |
|
}, |
|
{ |
|
"start": 362, |
|
"end": 382, |
|
"text": "Petrov et al., 2006)", |
|
"ref_id": "BIBREF16" |
|
}, |
|
{ |
|
"start": 715, |
|
"end": 738, |
|
"text": "(Zhang and Clark, 2009)", |
|
"ref_id": "BIBREF25" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Various Chinese POS Tagging Models", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "Parsing (DEP) (Hatori et al., 2011) proposes an incremental processing model for the task of joint POS tagging and dependency parsing, which is built upon a shift-reduce parsing framework with dynamic programming. Given a segmented sentence, a joint model simultaneously considers possible POS tags and dependency relations. In this way, the learner can better predict POS tags by using bi-lexical dependency information. Their experiments show that the joint approach achieved substantial im-provements over the pipeline systems in both POS tagging and dependency parsing tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 14, |
|
"end": 35, |
|
"text": "(Hatori et al., 2011)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Joint POS Tagging and Dependency", |
|
"sec_num": "2.1.1" |
|
}, |
|
{ |
|
"text": "We can distinguish the five representative tagging models from two views (see Table 2 ). From a linguistic view, we can distinguish syntax-free and syntax-based models. In a syntex-based model, POS tagging is integrated into parsing, and thus (to some extent) is capable of capturing long range syntactic information. From a machine learning view, we can distinguish generative and discriminative models. Compared to generative models, discriminative models define expressive features to classify words. Note that the two generative models employ latent variables to refine the output spaces, which significantly boost the accuracy and increase the robustness of simple generative models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 85, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Generative Discriminative Syntax-free HMMLA LLM, LGLM Syntax-based PCFGLA DEP Table 2 : Two views of different tagging models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 78, |
|
"end": 85, |
|
"text": "Table 2", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Comparison", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "Penn Chinese Treebank (CTB) (Xue et al., 2005 ) is a popular data set to evaluate a number of Chinese NLP tasks, including word segmentation, POS tagging, syntactic parsing in both constituency and dependency formalisms. In this paper, we use CTB 6.0 as the labeled training data for the study. In order to obtain a representative split of data sets, we conduct experiments following the setting of the CoNLL 2009 shared task (Haji\u010d et al., 2009) , which is also used by (Sun and Uszkoreit, 2012) . The setting is provided by the principal organizer of the CTB project, and has considered many annotation details. This setting is very robust for evaluating Chinese language processing algorithms. We present an empirical study of the five typical approaches introduced above. In our experiments, to build local and global word classifiers (i.e. LLMs and LGLMs), we implement the feature set used in (Sun and Uszkoreit, 2012) . Denote a word w in focus with a fixed window w \u22122 w \u22121 ww +1 w +2 . The features include:", |
|
"cite_spans": [ |
|
{ |
|
"start": 28, |
|
"end": 45, |
|
"text": "(Xue et al., 2005", |
|
"ref_id": "BIBREF24" |
|
}, |
|
{ |
|
"start": 426, |
|
"end": 446, |
|
"text": "(Haji\u010d et al., 2009)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 471, |
|
"end": 496, |
|
"text": "(Sun and Uszkoreit, 2012)", |
|
"ref_id": "BIBREF19" |
|
}, |
|
{ |
|
"start": 899, |
|
"end": 924, |
|
"text": "(Sun and Uszkoreit, 2012)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "2.3.1" |
|
}, |
|
{ |
|
"text": "\u2022 Word unigrams: w \u22122 , w \u22121 , w, w +1 , w +2 ;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Experimental Setting", |
|
"sec_num": "2.3.1" |
|
}, |
|
|
{ |
|
"text": "LGLM (SP) LGLM ( \u2022 Word bigrams:", |
|
"cite_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 9, |
|
"text": "(SP)", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LLM", |
|
"sec_num": null |
|
}, |
|
|
{ |
|
"text": "\u2022 Character n-gram prefixes and suffixes for n up to 3.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LLM", |
|
"sec_num": null |
|
}, |
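As a concrete illustration of the templates above, here is a minimal Python sketch of the per-word feature extraction (word uni/bi-grams in a +/-2 window plus character n-gram prefixes and suffixes for n up to 3); the padding symbols and template names are our own illustrative choices.

```python
def word_features(words, i):
    """Feature templates for the word at position i, following the list
    above. Padding marks and template names are illustrative."""
    def at(k):
        j = i + k
        return words[j] if 0 <= j < len(words) else ("<s>" if j < 0 else "</s>")
    feats = [f"U{k}={at(k)}" for k in (-2, -1, 0, 1, 2)]            # unigrams
    feats += [f"B{k}={at(k)}|{at(k + 1)}" for k in (-2, -1, 0, 1)]  # bigrams
    w = words[i]
    for n in (1, 2, 3):                                             # affixes
        if len(w) >= n:
            feats.append(f"PRE{n}={w[:n]}")
            feats.append(f"SUF{n}={w[-n:]}")
    return feats

print(word_features(["我", "爱", "北京"], 2))
```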
|
{ |
|
"text": "To train LLMs, we use the open source linear classifier -LIBLINEAR 1 . To train LGLMs, we choose structured perceptron (SP) (Collins, 2002) and passive aggressive (PA) (Crammer et al., 2006) learning algorithms. For the LAHMM and DEP models, we use the systems discribed in (Huang et al., 2009; Hatori et al., 2011) ; for the PCFGLA models, we use the Berkeley parser 2 .", |
|
"cite_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 139, |
|
"text": "(Collins, 2002)", |
|
"ref_id": "BIBREF4" |
|
}, |
|
{ |
|
"start": 168, |
|
"end": 190, |
|
"text": "(Crammer et al., 2006)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 274, |
|
"end": 294, |
|
"text": "(Huang et al., 2009;", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 295, |
|
"end": 315, |
|
"text": "Hatori et al., 2011)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "LLM", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "Table 1 summarizes the performance in terms of per word classification of different supervised models on the development data. We present the results of both first order (on the left) and second order (on the right)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "LGLMs. We can see that the perceptron algorithm performs a little better than the PA algorithm for Chinese POS tagging. There is only a slight gap between the local classification model and various structured models. This is very different from English POS tagging. Although the local classifier achieves comparable results when respectively applied to English and Chinese, there is much more significant gap between the corresponding structured models. Similarly, the gap between the first and second order LGLMs is very modest too. From the linguistic view, we mainly consider the disambiguiation ability of local and non-local dependencies. Table 1 presents accuracy results of several POS types, including nouns and functional words. The POS types NR, NT and NN respectively represent proper nouns, temporal nouns and other common nouns. We can clearly see that models which only explore local dependencies are good enough to deal with nouns. Surprisingly, the local classifier that does not directly define features of possible POS tags of other surrounding words performs even better than structured models for proper nouns and other common nouns.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 644, |
|
"end": 651, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "The tag DEC denotes a complementizer or a nominalizer, while the tag DEG denotes a genitive marker and an associative marker. These two types only include two words: \"\u7684\" and \"\u4e4b.\" The latter one is mainly used in ancient Chinese. 5.19% of words appearing in the training data set is DEC/DEG. The pattern of the DEC recognition is clause/verb phrase+DEC+noun phrase, and The pattern of the DEG recognition is nominal modi-fier+DEC+noun phrase. To distinguish the sentential/verbal and nominal modification phrases, the DEC and DEG words usually need long range syntactic information for accurate disambiguation. We claim that the prediction performance of the two specific types is a good clue of how well a tagging model resolves long distance dependencies. We can see that the two syntactic parsers significantly outperform local models on the prediction of these types of words.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "The weak ability for non-local disambiguation also imposes restrictions on using a sequence POS tagging model as front module for parsing. To evaluate the impact, we employ the PCFGLA parser to parse a sentence based on the POS tags provided by sequence models. Table 4 shows the parsing performance. Note that the overall tagging performance of the Berkeley parser is significantly worse than sequence models. However, better POS tagging does not lead to better parsing. The experiments suggest that sequence models propagate too many errors to the parser. Our linguistic analysis can also well explain the poor performance of Chinese CCG parsing when applying the C&C parser (Tse and Curran, 2012) . We think the failure is mainly due to overplaying sequence models in both POS tagging and supertag- Table 4 : Parsing accuracies on the development data. 1or and 2or respectively denote first order and second order.", |
|
"cite_spans": [ |
|
{ |
|
"start": 677, |
|
"end": 699, |
|
"text": "(Tse and Curran, 2012)", |
|
"ref_id": "BIBREF21" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 262, |
|
"end": 269, |
|
"text": "Table 4", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 802, |
|
"end": 809, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "LGLM(X) denotes a stacking model with X as the level-0 processing. All stacking models incorporate word clusters to improve the tagging accuracy.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "To distinguish the predictive abilities of generative and discriminative models, we report the precison of the prediction of unknown words (UNK). Discriminative learning can define arbitrary (even overlapping) features which play a central role in tagging English unknown words. The difference between generative and discriminative learning in Chinese POS tagging is not that much, mainly because most Chinese words are compactly composed by a very few Chinese characters that are usually morphemes. This language-specific property makes it relatively easy to smooth parameters of a generative model.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Results", |
|
"sec_num": "2.3.2" |
|
}, |
|
{ |
|
"text": "In this section, we study a simple way of integrating multiple heterogeneous models in order to exploit their complementary strength and thereby improve tagging accuracy beyond what is possible by either model in isolation. The method integrates the heterogeneous models by allowing the outputs of the HMMLA, PCFGLA and DEP to de-fine features for the LLM/LGLM.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Improving Tagging Accuracy via Stacking", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Stacked generalization is a meta-learning algorithm that has been first proposed in (Wolpert, 1992) and (Breiman, 1996) . Stacked learning has been applied as a system ensemble method in several NLP tasks, such as joint word segmentation and POS tagging (Sun, 2011) , and dependency parsing (Nivre and McDonald, 2008) . The idea is to include two \"levels\" of predictors. The first level includes one or more predictors g 1 , ..., g K :", |
|
"cite_spans": [ |
|
{ |
|
"start": 84, |
|
"end": 99, |
|
"text": "(Wolpert, 1992)", |
|
"ref_id": "BIBREF23" |
|
}, |
|
{ |
|
"start": 104, |
|
"end": 119, |
|
"text": "(Breiman, 1996)", |
|
"ref_id": "BIBREF0" |
|
}, |
|
{ |
|
"start": 254, |
|
"end": 265, |
|
"text": "(Sun, 2011)", |
|
"ref_id": "BIBREF20" |
|
}, |
|
{ |
|
"start": 291, |
|
"end": 317, |
|
"text": "(Nivre and McDonald, 2008)", |
|
"ref_id": "BIBREF15" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stacked Learning", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "R d \u2192 R; each receives input x \u2208 R d and out- puts a prediction g k (x).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stacked Learning", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "The second level consists of a single function h :", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stacked Learning", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "R d+K \u2192 R that takes as input x, g 1 (x), ..., g K (x)", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stacked Learning", |
|
"sec_num": "3.1" |
|
}, |
|
{ |
|
"text": "and outputs a final prediction\u0177 = h(x, g 1 (x), ..., g K (x)). The predictor, then, combines an ensemble (the g k 's) with a meta-predictor (h).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Stacked Learning", |
|
"sec_num": "3.1" |
|
}, |
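A minimal sketch of this two-level scheme, with scikit-learn classifiers standing in for the g_k's and h; the paper's actual level-0 systems (HMMLA, PCFGLA, DEP) and level-1 LLM/LGLM are replaced by generic learners over a numeric feature matrix, and the cross-validated level-0 predictions are a standard precaution we assume, not a detail stated in the text.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def fit_stack(X, y, level0, level1):
    # Level-0 predictions are produced with cross-validation so that h
    # never sees a g_k's predictions on that g_k's own training points.
    meta = [cross_val_predict(g, X, y, cv=5) for g in level0]
    X1 = np.column_stack([X] + meta)       # input x plus g_1(x)..g_K(x)
    for g in level0:
        g.fit(X, y)                        # refit on all data for test time
    level1.fit(X1, y)

def predict_stack(X, level0, level1):
    meta = [g.predict(X) for g in level0]
    return level1.predict(np.column_stack([X] + meta))

# Usage sketch (numeric labels assumed):
# level0 = [LogisticRegression(max_iter=1000) for _ in range(2)]
# level1 = LogisticRegression(max_iter=1000)
# fit_stack(X_train, y_train, level0, level1)
# y_pred = predict_stack(X_test, level0, level1)
```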
|
{ |
|
"text": "We use the LLMs or LGLMs (as h) for the level-1 processing, and other models (as g k ) for the level-0 processing. The characteristic of discriminative learning makes LLMs/LGLMs very easy to integrate the outputs of other models as new features. We are relying on the ability of discriminative learning to explore informative features, which play a central role in boosting the tagging accuracy. For output labels produced by each auxiliary model, five new label uni/bi-gram features are added:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Applying Stacking to POS Tagging", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "w \u22121 , w, w +1 , w \u22121 w, w w +1 .", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Applying Stacking to POS Tagging", |
|
"sec_num": "3.2" |
|
}, |
|
{ |
|
"text": "This choice is tuned on the development data. Word clusters that are automatically acquired from large-scale unlabeled data have been shown to be very effective to bridge the gap between high and low frequency words, and therefore significantly improve tagging, as well as other syntactic processing tasks. Our stacking models are all built on word clustering enhanced discriminative linear models. Five word cluster uni/bi-gram features are added: w \u22121 , w, w +1 , w \u22121 w, w w +1 . The clusters are acquired based on the Chinese giga-word data with the MKCLS tool. The number of total clusters is set to 500, which is tuned by (Sun and Uszkoreit, 2012) . Table 3 summarizes the tagging accuracy of different stacking models. From this table, we can clearly see that the new features derived from the outputs of other models lead to substantial improvements over the baseline LLM/LGLM. The output structures provided by the PCFGLA model are most effective in improving the LLM/LGLM baseline systems. Among different stacking models, the syntax-free hybrid one (i.e., stacking LLM/LGLM with HMMLA) does not need any treebank to train their systems. For the situations that parsers are not available, this is a good solution. Moreover, the decoding algorithms for linear-chain Markov models are very fast. Therefore the syntax-free hybrid system is more appealing for many NLP applications. Table 5 is the F1 scores of the DEC/DEG prediction which are obtained by different stacking models. Compared to Table 1 , we can see that the hybrid sequence model is still not good at handling long-distance ambiguities. As a result, it harms the parsing performance (see Table 4 ), though it achieves higher overall precison.", |
|
"cite_spans": [ |
|
{ |
|
"start": 628, |
|
"end": 653, |
|
"text": "(Sun and Uszkoreit, 2012)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 656, |
|
"end": 663, |
|
"text": "Table 3", |
|
"ref_id": "TABREF2" |
|
}, |
|
{ |
|
"start": 1389, |
|
"end": 1396, |
|
"text": "Table 5", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 1501, |
|
"end": 1508, |
|
"text": "Table 1", |
|
"ref_id": "TABREF0" |
|
}, |
|
{ |
|
"start": 1661, |
|
"end": 1668, |
|
"text": "Table 4", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Applying Stacking to POS Tagging", |
|
"sec_num": "3.2" |
|
}, |
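To make the extra templates concrete, here is a minimal sketch of how one auxiliary model's predicted labels and the word-cluster ids become uni/bi-gram features around position i; the template names and padding symbol are illustrative assumptions, and one such group would be added per auxiliary level-0 model.

```python
def stacking_features(aux_tags, clusters, i):
    """Label and cluster uni/bi-grams over the window -1..+1, as listed
    above. aux_tags: labels from one level-0 model; clusters: cluster
    ids, both aligned with the word sequence."""
    def at(seq, k):
        j = i + k
        return seq[j] if 0 <= j < len(seq) else "#"
    feats = []
    for name, seq in (("AUX", aux_tags), ("CL", clusters)):
        feats += [f"{name}{k}={at(seq, k)}" for k in (-1, 0, 1)]
        feats += [f"{name}B{k}={at(seq, k)}|{at(seq, k + 1)}" for k in (-1, 0)]
    return feats
```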
|
{ |
|
"text": "DEC DEG 1or LGLM(HMMLA) 82.93 86.64 1or LGLM(PCFGLA) 88.11 91.12 1or LGLM(DEP) 87.46 89.86 Table 5 : F1 score of the DEC/DEG prediction of different stacking models on the development data.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 98, |
|
"text": "Table 5", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Devel.", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "(Sun and Uszkoreit, 2012) introduced a Bagging model to effectively combine the outputs of individual systems. In the training phase, given a training set D of size n, the Bagging model generates m new training sets D i 's by sampling examples from D. Each D i is separately used to train k individual models. In the tagging phase, the km models outputs km tagging results, each word is assigned one POS label. The final tagging is the voting result of these km labels. Although this model is effective, it is too expensive in the sense that it uses parser multiple times. We also implement their method and compare the results with our stacking model. We find the accuracy performance produced by the two different methods are comparable. (Rush et al., 2010) introduced dual decomposition as a framework for deriving inference algorithms for serious combinatorial problems in NLP. They successfully applied dual decomposition to the combination of a lexicalized parsing model and a trigram POS tagger. Despite the effectiveness, their method iteratively parses a sentence many times to achieve convergence, and thus is not as efficient as stacking.", |
|
"cite_spans": [ |
|
{ |
|
"start": 740, |
|
"end": 759, |
|
"text": "(Rush et al., 2010)", |
|
"ref_id": "BIBREF17" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Related Work", |
|
"sec_num": "3.4" |
|
}, |
|
{ |
|
"text": "Unlabeled Data", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Improving Tagging Efficiency through", |
|
"sec_num": "4" |
|
}, |
|
{ |
|
"text": "Hybrid structured models often achieve excellent performance but can be slow at test time. In our problem, it is obviously too inefficient to improve POS tagging by parsing a sentence first. In this section, we explore unlabeled data to transfer the predictive power of hybrid models to sequence models. The main idea behind this is to use a fast model to approximate the function learned by a slower, larger, but better performing ensemble model. Unlike the true function that is unknown, the function learned by a high performing model is available and can be used to label large amounts of pseudo data. A fast and expressive model trained on large scale pseudo data will not overfit and will approximate the function learned by the high performing model well. This allows a slow, complex model such as massive ensemble to be compressed into a fast sequence model such as a first order", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "LGLM with very little loss in performance. This idea to use unlabeled data to transfer the predictive power of one model to another has been investigated in many areas, for example, from high accuracy neural networks to more interpretable decision trees (Craven, 1996) , from high accuracy ensembles to faster and more compact neural networks (Bucila et al., 2006) , or from structured prediction models to local classification models (Liang et al., 2008) Table 6 : Tagging accuracies of different re-compiled models on the development data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 254, |
|
"end": 268, |
|
"text": "(Craven, 1996)", |
|
"ref_id": "BIBREF7" |
|
}, |
|
{ |
|
"start": 343, |
|
"end": 364, |
|
"text": "(Bucila et al., 2006)", |
|
"ref_id": "BIBREF1" |
|
}, |
|
{ |
|
"start": 435, |
|
"end": 455, |
|
"text": "(Liang et al., 2008)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 456, |
|
"end": 463, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "The Idea", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "transfer the power of a chain conditional random field to a logistic regression model. Similarly, we do some experiments to explore the feasibility of reducing hybrid tagging models to a HMMLA, LLM or LGLM, for Chinese POS tagging. The large-scale unlabeled data we use in our experiments comes from the Chinese Gigaword (LDC2005T14), which is a comprehensive archive of newswire text data that has been acquired over several years by the Linguistic Data Consortium (LDC). We choose the Mandarin news text, i.e. Xinhua newswire. We tag giga-word sentences by applying the stacked first order LGLMs with all other models. In other words, the HMMLA, PCFGLA and DEP systems are applied to tag unlabeled data features and their outputs are utilized to define features for first-order and second-order", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Idea", |
|
"sec_num": "4.1" |
|
}, |
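The pipeline can be sketched as below; `tag_with_hybrid` and `train_sequence_model` are placeholder callables standing in for the stacked hybrid system and the cheap model being re-compiled, not the paper's actual interfaces.

```python
def compile_structure(tag_with_hybrid, train_sequence_model,
                      gold_sents, unlabeled_sents):
    """Structure compilation sketch: the slow hybrid system labels raw
    text, creating pseudo training data; the fast sequence model is then
    retrained on gold plus pseudo data."""
    pseudo = [(sent, tag_with_hybrid(sent)) for sent in unlabeled_sents]
    return train_sequence_model(gold_sents + pseudo)
```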
|
{ |
|
"text": "LGLMs which produce pseudo training data. Both original gold standard training data and pseudo training data are used to re-train a HMMLA, a LLM/LGLM with extended features. The key for the success of hybrid tagging models is the existence of a large diversity among learners. Zhou (2009) argued that when there are lots of labeled training examples, unlabeled instances are still helpful for hybrid models since they can help to increase the diversity among the base learners. The author also briefly introduced a preliminary theoretical study. In this paper, we also combine the re-trained models to see if we can benefit more. We utilize voting as the strategy for final combination. In the tagging phase, the retrained LLM, LGLM and HMMLA systems outputs 3 tagging results, each word is assigned one POS label. The final tagging is the voting result of these 3 labels.", |
|
"cite_spans": [ |
|
{ |
|
"start": 277, |
|
"end": 288, |
|
"text": "Zhou (2009)", |
|
"ref_id": "BIBREF26" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "The Idea", |
|
"sec_num": "4.1" |
|
}, |
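A minimal sketch of the per-token voting step; any odd number of equal-length tag sequences works, and ties are broken by first occurrence (a detail of Counter, not something specified in the text).

```python
from collections import Counter

def vote(tag_seqs):
    """Majority vote at each position over the outputs of the retrained
    models (three sequences in the text)."""
    return [Counter(tags).most_common(1)[0][0] for tags in zip(*tag_seqs)]

print(vote([["NN", "VV"], ["NN", "NN"], ["NR", "VV"]]))  # ['NN', 'VV']
```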
|
{ |
|
"text": "With the increase of (pseudo) training data, a HMMLA may learn better latent variables to subcategorize POS tags, which could significantly im-prove a purely supervised HMMLA. In our experiments, all HMMLA models are trained with 8 iterations of split, merge, smooth. The second column of Table 6 shows the performance of the re-trained HMMLAs. The first column is the number of sentences of pseudo sentences. The pseudo sentences are selected from the begining of the Chinese gigaword. We can clearly see that the idea to leverage unlabeled data to transfer the predictive ability of the hybrid model works. Self-training can also slightly improve a HMMLA (Huang et al., 2009) . Our auxiliary experiments show that self-training is not as effective as our methods.", |
|
"cite_spans": [ |
|
{ |
|
"start": 657, |
|
"end": 677, |
|
"text": "(Huang et al., 2009)", |
|
"ref_id": "BIBREF10" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 289, |
|
"end": 296, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Reducing Hybrid Models to HMMLA", |
|
"sec_num": "4.3.1" |
|
}, |
|
{ |
|
"text": "To increase the expressive power of a discriminative classification model, we extend the feature templates. This strategy is proposed by (Liang et al., 2008) . In our experiments, we increase the window size of word uni/bi-gram features to approximate long distance dependencies. For window size 3, we will add w \u22123 , w 3 , w \u22123 w \u22122 and w 2 w 3 as new features; for size 4, we will add w \u22124 , w \u22123 , w 3 , w 4 , w \u22124 w \u22123 , w \u22123 w \u22122 , w 2 w 3 and w 3 w 4 ; Column 3 to 6 of Table 6 show the performance of the re-compiled LLMs/LGLMs. Similar to the generative model, the discriminative LLM/LGLM can be improved too.", |
|
"cite_spans": [ |
|
{ |
|
"start": 137, |
|
"end": 157, |
|
"text": "(Liang et al., 2008)", |
|
"ref_id": "BIBREF13" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 476, |
|
"end": 483, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Reducing Hybrid Models to LLM/LGLM", |
|
"sec_num": "4.3.2" |
|
}, |
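A minimal sketch of the widened uni/bi-gram templates just described, accumulating the additions for window sizes 3 up to `size`; the padding symbol and template names are illustrative.

```python
def widened_ngrams(words, i, size):
    """For s = 3..size, add unigrams w_-s, w_+s and the bigrams
    w_-s w_-(s-1) and w_+(s-1) w_+s, matching the size-3 and size-4
    lists in the text."""
    def at(k):
        j = i + k
        return words[j] if 0 <= j < len(words) else "#"
    feats = []
    for s in range(3, size + 1):
        feats += [f"U{-s}={at(-s)}", f"U{s}={at(s)}"]
        feats += [f"B{-s}={at(-s)}|{at(-s + 1)}",
                  f"B{s - 1}={at(s - 1)}|{at(s)}"]
    return feats
```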
|
{ |
|
"text": "The last two columns of Table 6 are the final voting results of the HMMLA, LLM and LGLM. The window size of word uni/bi-gram features for the LLM and LGLM is set to 4. Obviously, the retrained models are still diverse and complementary, so the voting can further improve the sequence models. The result of the best hybrid sequence model is very close to the best stacking models. Furthermore, the F1 scores of the DEC/DEG prediction are 85.75 and 89.01, which are very close to parsers too.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 24, |
|
"end": 31, |
|
"text": "Table 6", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Voting", |
|
"sec_num": "4.3.3" |
|
}, |
|
{ |
|
"text": "Purely supervised sequence models are not good at predicting function words, and accordingly are not good enough to be used as front modules to parsers. The re-compiled models can mimic some behaviors of parsers, and therefore are suitable for parsing. Our evaluation shows that the significant improvement of the POS tagging stop harming syntactic parsing. Results in Table 7 indicate that the parsing accuracy of the Berkeley parser can be simply improved by inputting the Berkeley parser with the re-trained sequential tagging results. Additionally, the success to separate tagging and parsing can improve the whole syntactic processing efficiency. Table 7 : Accuracies of parsing based on recompiled tagging. Table 8 shows the performance of different systems evaluated on the test data. Our final sequence model achieve the state-of-the-art performance, which is once obtained by combining multiple parsers as well as sequence models.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 369, |
|
"end": 376, |
|
"text": "Table 7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 652, |
|
"end": 659, |
|
"text": "Table 7", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 713, |
|
"end": 720, |
|
"text": "Table 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Improving Parsing", |
|
"sec_num": "4.3.4" |
|
}, |
|
{ |
|
"text": "Acc. (Sun and Uszkoreit, 2012) 95.34% Our system 95.33% Table 8 : Tagging accuracies on the test data.", |
|
"cite_spans": [ |
|
{ |
|
"start": 5, |
|
"end": 30, |
|
"text": "(Sun and Uszkoreit, 2012)", |
|
"ref_id": "BIBREF19" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 56, |
|
"end": 63, |
|
"text": "Table 8", |
|
"ref_id": null |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Systems", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "In this paper, we study two techniques to build accurate and fast sequence models for Chinese POS tagging. In particular, our goal is to capture long-distance dependencies in sequence models. To improve tagging accuracy, we study stacking to integrate multiple models with heterogeneous views. To improve tagging efficiency at test time, we explore unlabeled data to transfer the predictive power of hybrid models to simple sequence or even local classification models. Hybrid systems are utilized to create large-scale pseudo training data for cheap models. By applying complex machine learning techniques, we are able to build good sequential POS taggers. Another advantage of our system is that it serves as a front-end to a parser very well. Our study suggests that complicated structured models can be well simulated by simple sequence models through unlabeled data.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
}, |
|
{ |
|
"text": "www.csie.ntu.edu.tw/\u02dccjlin/liblinear/ 2 code.google.com/p/berkeleyparser/", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "", |
|
"sec_num": null |
|
} |
|
], |
|
"back_matter": [ |
|
{ |
|
"text": "The work was supported by NSFC (61170166), Beijing Nova Program (2008B03) and National High-Tech R&D Program (2012AA011101).", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Acknowledgement", |
|
"sec_num": null |
|
} |
|
], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "Stacked regressions. Machine Learning", |
|
"authors": [ |
|
{ |
|
"first": "Leo", |
|
"middle": [], |
|
"last": "Breiman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "", |
|
"volume": "24", |
|
"issue": "", |
|
"pages": "49--64", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Leo Breiman. 1996. Stacked regressions. Machine Learning, 24:49-64, July.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Model compression", |
|
"authors": [ |
|
{ |
|
"first": "Cristian", |
|
"middle": [], |
|
"last": "Bucila", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Rich", |
|
"middle": [], |
|
"last": "Caruana", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Alexandru", |
|
"middle": [], |
|
"last": "Niculescu-Mizil", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "KDD", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "535--541", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Cristian Bucila, Rich Caruana, and Alexandru Niculescu-Mizil. 2006. Model compression. In KDD, pages 535-541.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "A maximum-entropyinspired parser", |
|
"authors": [ |
|
{ |
|
"first": "Eugene", |
|
"middle": [], |
|
"last": "Charniak", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2000, |
|
"venue": "Proceedings of NAACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Eugene Charniak. 2000. A maximum-entropy- inspired parser. In Proceedings of NAACL.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "The importance of supertagging for wide-coverage ccg parsing", |
|
"authors": [ |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "James", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Curran", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "Proceedings of Coling", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "282--288", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Stephen Clark and James R. Curran. 2004. The impor- tance of supertagging for wide-coverage ccg pars- ing. In Proceedings of Coling 2004, pages 282-288, Geneva, Switzerland, Aug 23-Aug 27. COLING.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2002, |
|
"venue": "Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--8", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden markov models: Theory and experi- ments with perceptron algorithms. In Proceedings of EMNLP, pages 1-8. Association for Computa- tional Linguistics, July.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Head-driven statistical models for natural language parsing", |
|
"authors": [ |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Collins", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2003, |
|
"venue": "Computational Linguistics", |
|
"volume": "29", |
|
"issue": "4", |
|
"pages": "589--637", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Michael Collins. 2003. Head-driven statistical models for natural language parsing. Computational Lin- guistics, 29(4):589-637.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Shai Shalev-Shwartz, and Yoram Singer", |
|
"authors": [ |
|
{ |
|
"first": "Koby", |
|
"middle": [], |
|
"last": "Crammer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ofer", |
|
"middle": [], |
|
"last": "Dekel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joseph", |
|
"middle": [], |
|
"last": "Keshet", |
|
"suffix": "" |
|
},

{

"first": "Shai",

"middle": [],

"last": "Shalev-Shwartz",

"suffix": ""

},

{

"first": "Yoram",

"middle": [],

"last": "Singer",

"suffix": ""

}
|
], |
|
"year": 2006, |
|
"venue": "JOURNAL OF MA-CHINE LEARNING RESEARCH", |
|
"volume": "7", |
|
"issue": "", |
|
"pages": "551--585", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online passive-aggressive algorithms. JOURNAL OF MA- CHINE LEARNING RESEARCH, 7:551-585.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Extracting Comprehensible Models from Trained Neural Networks", |
|
"authors": [ |
|
{ |
|
"first": "Mark", |
|
"middle": [], |
|
"last": "Craven", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 1996, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Mark Craven. 1996. Extracting Comprehensible Mod- els from Trained Neural Networks. Ph.D. thesis, University of Wisconsin-Madison, Department of Computer Sciences. Also appears as UW Technical Report CS-TR-96-1326.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "The CoNLL-2009 shared task: Syntactic and semantic dependencies in multiple languages", |
|
"authors": [ |
|
{ |
|
"first": "Jan", |
|
"middle": [], |
|
"last": "Haji\u010d", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Massimiliano", |
|
"middle": [], |
|
"last": "Ciaramita", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Richard", |
|
"middle": [], |
|
"last": "Johansson", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Daisuke", |
|
"middle": [], |
|
"last": "Kawahara", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Maria", |
|
"middle": [ |
|
"Ant\u00f2nia" |
|
], |
|
"last": "Mart\u00ed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Llu\u00eds", |
|
"middle": [], |
|
"last": "M\u00e0rquez", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Meyers", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sebastian", |
|
"middle": [], |
|
"last": "Pad\u00f3", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Pavel", |
|
"middle": [], |
|
"last": "Jan\u0161t\u011bp\u00e1nek", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mihai", |
|
"middle": [], |
|
"last": "Stra\u0148\u00e1k", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Surdeanu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of the 13th Conference on Computational Natural Language Learning (CoNLL-2009)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jan Haji\u010d, Massimiliano Ciaramita, Richard Johans- son, Daisuke Kawahara, Maria Ant\u00f2nia Mart\u00ed, Llu\u00eds M\u00e0rquez, Adam Meyers, Joakim Nivre, Sebastian Pad\u00f3, Jan\u0160t\u011bp\u00e1nek, Pavel Stra\u0148\u00e1k, Mihai Surdeanu, Nianwen Xue, and Yi Zhang. 2009. The CoNLL- 2009 shared task: Syntactic and semantic depen- dencies in multiple languages. In Proceedings of the 13th Conference on Computational Natural Lan- guage Learning (CoNLL-2009), June 4-5, Boulder, Colorado, USA.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Incremental joint POS tagging and dependency parsing in chinese", |
|
"authors": [ |
|
{ |
|
"first": "Jun", |
|
"middle": [], |
|
"last": "Hatori", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Takuya", |
|
"middle": [], |
|
"last": "Matsuzaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yusuke", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun'ichi", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of 5th International Joint Conference on Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1216--1224", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jun Hatori, Takuya Matsuzaki, Yusuke Miyao, and Jun'ichi Tsujii. 2011. Incremental joint POS tag- ging and dependency parsing in chinese. In Pro- ceedings of 5th International Joint Conference on Natural Language Processing, pages 1216-1224, Chiang Mai, Thailand, November. Asian Federation of Natural Language Processing.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Improving a simple bigram hmm part-of-speech tagger by latent annotation and selftraining", |
|
"authors": [ |
|
{ |
|
"first": "Zhongqiang", |
|
"middle": [], |
|
"last": "Huang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Vladimir", |
|
"middle": [], |
|
"last": "Eidelman", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mary", |
|
"middle": [], |
|
"last": "Harper", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics, Companion Volume: Short Papers", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "213--216", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhongqiang Huang, Vladimir Eidelman, and Mary Harper. 2009. Improving a simple bigram hmm part-of-speech tagger by latent annotation and self- training. In Proceedings of Human Language Tech- nologies: The 2009 Annual Conference of the North American Chapter of the Association for Computa- tional Linguistics, Companion Volume: Short Pa- pers, pages 213-216, Boulder, Colorado, June. As- sociation for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
|
"authors": [ |
|
{ |
|
"first": "John", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Lafferty", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Andrew", |
|
"middle": [], |
|
"last": "Mccallum", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fernando", |
|
"middle": [ |
|
"C N" |
|
], |
|
"last": "Pereira", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2001, |
|
"venue": "Proceedings of the Eighteenth International Conference on Machine Learning, ICML '01", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "282--289", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "John D. Lafferty, Andrew McCallum, and Fernando C. N. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling se- quence data. In Proceedings of the Eighteenth Inter- national Conference on Machine Learning, ICML '01, pages 282-289, San Francisco, CA, USA. Mor- gan Kaufmann Publishers Inc.", |
|
"links": null |
|
}, |
|
"BIBREF12": { |
|
"ref_id": "b12", |
|
"title": "Joint models for Chinese POS tagging and dependency parsing", |
|
"authors": [ |
|
{ |
|
"first": "Zhenghua", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Min", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wanxiang", |
|
"middle": [], |
|
"last": "Che", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ting", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Wenliang", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Haizhou", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1180--1191", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhenghua Li, Min Zhang, Wanxiang Che, Ting Liu, Wenliang Chen, and Haizhou Li. 2011. Joint mod- els for Chinese POS tagging and dependency pars- ing. In Proceedings of the 2011 Conference on Em- pirical Methods in Natural Language Processing, pages 1180-1191, Edinburgh, Scotland, UK., July. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF13": { |
|
"ref_id": "b13", |
|
"title": "Structure compilation: trading structure for features", |
|
"authors": [ |
|
{ |
|
"first": "Percy", |
|
"middle": [], |
|
"last": "Liang", |
|
"suffix": "" |
|
}, |
|
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": "III"
},
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of the 25th international conference on Machine learning, ICML '08", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "592--599", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Percy Liang, Hal Daum\u00e9, III, and Dan Klein. 2008. Structure compilation: trading structure for features. In Proceedings of the 25th international conference on Machine learning, ICML '08, pages 592-599, New York, NY, USA. ACM.", |
|
"links": null |
|
}, |
|
"BIBREF14": { |
|
"ref_id": "b14", |
|
"title": "Probabilistic cfg with latent annotations", |
|
"authors": [ |
|
{ |
|
"first": "Takuya", |
|
"middle": [], |
|
"last": "Matsuzaki", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yusuke", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jun'ichi", |
|
"middle": [], |
|
"last": "Tsujii", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Proceedings of ACL, ACL '05", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "75--82", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Takuya Matsuzaki, Yusuke Miyao, and Jun'ichi Tsu- jii. 2005. Probabilistic cfg with latent annota- tions. In Proceedings of ACL, ACL '05, pages 75- 82, Stroudsburg, PA, USA. Association for Compu- tational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF15": { |
|
"ref_id": "b15", |
|
"title": "Integrating graph-based and transition-based dependency parsers", |
|
"authors": [ |
|
{ |
|
"first": "Joakim", |
|
"middle": [], |
|
"last": "Nivre", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ryan", |
|
"middle": [], |
|
"last": "Mcdonald", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2008, |
|
"venue": "Proceedings of ACL-08: HLT", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "950--958", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Joakim Nivre and Ryan McDonald. 2008. Integrat- ing graph-based and transition-based dependency parsers. In Proceedings of ACL-08: HLT, pages 950-958, Columbus, Ohio, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF16": { |
|
"ref_id": "b16", |
|
"title": "Learning accurate, compact, and interpretable tree annotation", |
|
"authors": [ |
|
{ |
|
"first": "Slav", |
|
"middle": [], |
|
"last": "Petrov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Leon", |
|
"middle": [], |
|
"last": "Barrett", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Romain", |
|
"middle": [], |
|
"last": "Thibaux", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dan", |
|
"middle": [], |
|
"last": "Klein", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2006, |
|
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "433--440", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Slav Petrov, Leon Barrett, Romain Thibaux, and Dan Klein. 2006. Learning accurate, compact, and interpretable tree annotation. In Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Associa- tion for Computational Linguistics, pages 433-440, Sydney, Australia, July. Association for Computa- tional Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF17": { |
|
"ref_id": "b17", |
|
"title": "On dual decomposition and linear programming relaxations for natural language processing", |
|
"authors": [ |
|
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Sontag",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Tommi",
"middle": [],
"last": "Jaakkola",
"suffix": ""
}
|
], |
|
"year": 2010, |
|
"venue": "Proceedings of EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1--11", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alexander M Rush, David Sontag, Michael Collins, and Tommi Jaakkola. 2010. On dual decomposi- tion and linear programming relaxations for natu- ral language processing. In Proceedings of EMNLP, pages 1-11, Cambridge, MA, October. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF18": { |
|
"ref_id": "b18", |
|
"title": "Bayesian symbol-refined tree substitution grammars for syntactic parsing", |
|
"authors": [ |
|
{ |
|
"first": "Hiroyuki", |
|
"middle": [], |
|
"last": "Shindo", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yusuke", |
|
"middle": [], |
|
"last": "Miyao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Akinori", |
|
"middle": [], |
|
"last": "Fujino", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Masaaki", |
|
"middle": [], |
|
"last": "Nagata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "440--448", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Hiroyuki Shindo, Yusuke Miyao, Akinori Fujino, and Masaaki Nagata. 2012. Bayesian symbol-refined tree substitution grammars for syntactic parsing. In Proceedings of ACL, pages 440-448, Jeju Island, Korea, July. Association for Computational Linguis- tics.", |
|
"links": null |
|
}, |
|
"BIBREF19": { |
|
"ref_id": "b19", |
|
"title": "Capturing paradigmatic and syntagmatic lexical relations: Towards accurate Chinese part-of-speech tagging", |
|
"authors": [ |
|
{ |
|
"first": "Weiwei", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Hans", |
|
"middle": [], |
|
"last": "Uszkoreit", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Weiwei Sun and Hans Uszkoreit. 2012. Capturing paradigmatic and syntagmatic lexical relations: To- wards accurate Chinese part-of-speech tagging. In Proceedings of the 50th Annual Meeting of the Asso- ciation for Computational Linguistics. Association for Computational Linguistics, July.", |
|
"links": null |
|
}, |
|
"BIBREF20": { |
|
"ref_id": "b20", |
|
"title": "A stacked sub-word model for joint Chinese word segmentation and part-of-speech tagging", |
|
"authors": [ |
|
{ |
|
"first": "Weiwei", |
|
"middle": [], |
|
"last": "Sun", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2011, |
|
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "1385--1394", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Weiwei Sun. 2011. A stacked sub-word model for joint Chinese word segmentation and part-of-speech tagging. In Proceedings of the 49th Annual Meet- ing of the Association for Computational Linguis- tics: Human Language Technologies, pages 1385- 1394, Portland, Oregon, USA, June. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF21": { |
|
"ref_id": "b21", |
|
"title": "The challenges of parsing chinese with combinatory categorial grammar", |
|
"authors": [ |
|
{ |
|
"first": "Daniel", |
|
"middle": [], |
|
"last": "Tse", |
|
"suffix": "" |
|
}, |
|
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
}
|
], |
|
"year": 2012, |
|
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "295--304", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Daniel Tse and James R. Curran. 2012. The chal- lenges of parsing chinese with combinatory cate- gorial grammar. In Proceedings of the 2012 Con- ference of the North American Chapter of the As- sociation for Computational Linguistics: Human Language Technologies, pages 295-304, Montr\u00e9al, Canada, June. Association for Computational Lin- guistics.", |
|
"links": null |
|
}, |
|
"BIBREF22": { |
|
"ref_id": "b22", |
|
"title": "Fast full parsing by linear-chain conditional random fields", |
|
"authors": [ |
|
{ |
|
"first": "Yoshimasa", |
|
"middle": [], |
|
"last": "Tsuruoka", |
|
"suffix": "" |
|
}, |
|
{
"first": "Jun'ichi",
"middle": [],
"last": "Tsujii",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "790--798", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yoshimasa Tsuruoka, Jun'ichi Tsujii, and Sophia Ana- niadou. 2009. Fast full parsing by linear-chain conditional random fields. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL 2009), pages 790-798, Athens, Greece, March. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF23": { |
|
"ref_id": "b23", |
|
"title": "Original contribution: Stacked generalization", |
|
"authors": [ |
|
{
"first": "David",
"middle": [
"H"
],
"last": "Wolpert",
"suffix": ""
}
|
], |
|
"year": 1992, |
|
"venue": "Neural Netw", |
|
"volume": "5", |
|
"issue": "", |
|
"pages": "241--259", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "David H. Wolpert. 1992. Original contribution: Stacked generalization. Neural Netw., 5:241-259, February.", |
|
"links": null |
|
}, |
|
"BIBREF24": { |
|
"ref_id": "b24", |
|
"title": "The penn Chinese treebank: Phrase structure annotation of a large corpus", |
|
"authors": [ |
|
{ |
|
"first": "Nianwen", |
|
"middle": [], |
|
"last": "Xue", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fei", |
|
"middle": [], |
|
"last": "Xia", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Fu-Dong", |
|
"middle": [], |
|
"last": "Chiou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Martha", |
|
"middle": [], |
|
"last": "Palmer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2005, |
|
"venue": "Natural Language Engineering", |
|
"volume": "11", |
|
"issue": "2", |
|
"pages": "207--238", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Nianwen Xue, Fei Xia, Fu-Dong Chiou, and Martha Palmer. 2005. The penn Chinese treebank: Phrase structure annotation of a large corpus. Natural Lan- guage Engineering, 11(2):207-238.", |
|
"links": null |
|
}, |
|
"BIBREF25": { |
|
"ref_id": "b25", |
|
"title": "Transitionbased parsing of the Chinese treebank using a global discriminative model", |
|
"authors": [ |
|
{ |
|
"first": "Yue", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Stephen", |
|
"middle": [], |
|
"last": "Clark", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 11th International Conference on Parsing Technologies (IWPT'09)", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "162--171", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yue Zhang and Stephen Clark. 2009. Transition- based parsing of the Chinese treebank using a global discriminative model. In Proceedings of the 11th International Conference on Parsing Technologies (IWPT'09), pages 162-171, Paris, France, October. Association for Computational Linguistics.", |
|
"links": null |
|
}, |
|
"BIBREF26": { |
|
"ref_id": "b26", |
|
"title": "When semi-supervised learning meets ensemble learning", |
|
"authors": [ |
|
{ |
|
"first": "Zhi-Hua", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2009, |
|
"venue": "Proceedings of the 8th International Workshop on Multiple Classifier Systems, MCS '09", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "529--538", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Zhi-Hua Zhou. 2009. When semi-supervised learning meets ensemble learning. In Proceedings of the 8th International Workshop on Multiple Classifier Sys- tems, MCS '09, pages 529-538, Berlin, Heidelberg. Springer-Verlag.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"TABREF0": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>PA)</td><td>HMMLA PCFGLA</td><td>DEP</td></tr></table>", |
|
"text": "Tagging accuracies of different supervised models on the development data.", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF2": { |
|
"type_str": "table", |
|
"content": "<table><tr><td>ging.</td><td/><td/><td/></tr><tr><td>Devel.</td><td>LP</td><td>LR</td><td>F1</td></tr><tr><td>Berkeley</td><td colspan=\"3\">80.44 80.31 81.36</td></tr><tr><td>1or LGLM</td><td colspan=\"3\">80.38 79.48 79.93\u2193</td></tr><tr><td>2or LGLM</td><td colspan=\"3\">80.98 79.93 80.45\u2193</td></tr><tr><td>HMMLA</td><td colspan=\"3\">80.65 79.62 80.13\u2193</td></tr><tr><td colspan=\"4\">1or LGLM(HMMLA) 81.55 80.80 81.17\u2193</td></tr><tr><td colspan=\"4\">1or LGLM(PCFGLA) 82.84 81.75 82.29\u2191</td></tr><tr><td>1or LGLM(DEP)</td><td colspan=\"3\">82.69 81.68 82.18\u2191</td></tr></table>", |
|
"text": "Tagging accuracies of different stacking models on the development data.", |
|
"html": null, |
|
"num": null |
|
}, |
|
"TABREF3": { |
|
"type_str": "table", |
|
"content": "<table><tr><td colspan=\"2\">Size of data HMMLA</td><td>LLM</td><td>LGLM</td><td>LLM</td><td>LGLM</td><td>Voting</td></tr><tr><td/><td/><td colspan=\"2\">win size=3</td><td colspan=\"2\">win size=4</td><td>DEC/DEG</td></tr><tr><td>+100k</td><td colspan=\"5\">94.72% 95.05% 95.07% 95.04% 95.10% 95.36%</td><td>--</td></tr><tr><td>+200k</td><td colspan=\"5\">94.77% 95.06% 95.18% 95.20% 95.23% 95.43%</td><td>--</td></tr><tr><td>+500k</td><td colspan=\"5\">94.97% 95.11% 95.21% 95.15% 95.23% 95.43%</td><td>--</td></tr><tr><td>+1000k</td><td>95.</td><td/><td/><td/><td/></tr><tr><td/><td/><td/><td/><td/><td>,</td></tr><tr><td/><td/><td/><td/><td colspan=\"3\">4.2 Reducing Hybrid Models to Sequence</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">Models</td></tr><tr><td/><td/><td/><td/><td colspan=\"3\">For English POS tagging, Liang, Daum\u00e9 and</td></tr><tr><td/><td/><td/><td/><td colspan=\"3\">Klein (2008) have done some experiments to</td></tr></table>", |
|
"text": "09% 95.19% 95.23% 95.22% 95.31% 95.49% 85.75/89.01", |
|
"html": null, |
|
"num": null |
|
} |
|
} |
|
} |
|
} |