asahi417 committed on
Commit 63c2642 · 1 Parent(s): 64ff342
dataset/chemprot/dev.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
dataset/chemprot/label2id.json ADDED
@@ -0,0 +1 @@
 
 
1
+ {"ACTIVATOR": 0, "AGONIST": 1, "AGONIST-ACTIVATOR": 2, "AGONIST-INHIBITOR": 3, "ANTAGONIST": 4, "DOWNREGULATOR": 5, "INDIRECT-DOWNREGULATOR": 6, "INDIRECT-UPREGULATOR": 7, "INHIBITOR": 8, "PRODUCT-OF": 9, "SUBSTRATE": 10, "SUBSTRATE_PRODUCT-OF": 11, "UPREGULATOR": 12}
dataset/chemprot/test.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
dataset/chemprot/train.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
dataset/citation_intent/dev.jsonl CHANGED
@@ -1,114 +1,114 @@
1
- {"text": "Typical examples are Bulgarian ( Simov et al. , 2005 ; Simov and Osenova , 2003 ) , Chinese ( Chen et al. , 2003 ) , Danish ( Kromann , 2003 ) , and Swedish ( Nilsson et al. , 2005 ) .", "label": "Background"}
2
- {"text": "This appeared to solve the problem , and the results presented later for the average degree of generalisation do not show an over-generalisation compared with those given in Li and Abe ( 1998 ) .", "label": "CompareOrContrast"}
3
- {"text": "These observations and this line of reasoning has not escaped the attention of theoretical linguists : Hale and Keyser ( 1993 ) propose that argument structure is , in fact , encoded syntactically .", "label": "Background"}
4
- {"text": "inter-document references in the form of hyperlinks ( Agrawal et al. , 2003 ) .", "label": "Background"}
5
- {"text": "While IA is generally thought to be consistent with findings on human language production ( Hermann and Deutsch 1976 ; Levelt 1989 ; Pechmann 1989 ; Sonnenschein 1982 ) , the hypothesis that incrementality is a good model of human GRE seems unfalsifiable until a preference order is specified for the properties on which it operates .", "label": "Background"}
6
- {"text": "Arabic has two kinds of plurals : broken plurals and sound plurals ( Wightwick and Gaafar , 1998 ; Chen and Gey , 2002 ) .", "label": "Background"}
7
- {"text": "Secondly , as ( Blunsom et al. , 2008 ) show , marginalizing out the different segmentations during decoding leads to improved performance .", "label": "Future"}
8
- {"text": "Nivre ( 2008 ) reports experiments on Arabic parsing using his MaltParser ( Nivre et al. 2007 ) , trained on the PADT .", "label": "Background"}
9
- {"text": "More recently , ( Sebastiani , 2002 ) has performed a good survey of document categorization ; recent works can also be found in ( Joachims , 2002 ) , ( Crammer and Singer , 2003 ) , and ( Lewis et al. , 2004 ) .", "label": "Background"}
10
- {"text": "The grammar conversion from LTAG to HPSG ( Yoshinaga and Miyao , 2001 ) is the core portion of the RenTAL system .", "label": "Background"}
11
- {"text": "This includes work on question answering ( Wang et al. , 2007 ) , sentiment analysis ( Nakagawa et al. , 2010 ) , MT reordering ( Xu et al. , 2009 ) , and many other tasks .", "label": "Background"}
12
- {"text": "Our knowledge extractors rely extensively on MetaMap ( Aronson 2001 ) , a system for identifying segments of text that correspond to concepts in the UMLS Metathesaurus .", "label": "Uses"}
13
- {"text": "We study the cases where a 9Recall that even the Keller and Lapata ( 2003 ) system , built on the world 's largest corpus , achieves only 34 % recall ( Table 1 ) ( with only 48 % of positives and 27 % of all pairs previously observed , but see Footnote 5 ) .", "label": "CompareOrContrast"}
14
- {"text": "Inspired by ( Blunsom et al. , 2009 ) and ( Cohn and Blunsom , 2009 ) , we define P ( str | frag ) as follows : where csw is the number of words in the source string .", "label": "Motivation"}
15
- {"text": "Inspired by ( Blunsom et al. , 2009 ) and ( Cohn and Blunsom , 2009 ) , we define P ( str | frag ) as follows : where csw is the number of words in the source string .", "label": "Motivation"}
16
- {"text": "The TNT POS tagger ( Brants , 2000 ) has also been designed to train and run very quickly , tagging between 30,000 and 60,000 words per second .", "label": "Background"}
17
- {"text": "Some previous works ( Bannard and Callison-Burch , 2005 ; Zhao et al. , 2009 ; Kouylekov et al. , 2009 ) indicate , as main limitations of the mentioned resources , their limited coverage , their low precision , and the fact that they are mostly suitable to capture relations mainly between single words .", "label": "Background"}
18
- {"text": "Similarly , Cowan and Collins ( 2005 ) report that the use of a subset of Spanish morphological features ( number for adjectives , determiners , nouns , pronouns , and verbs ; and mode for verbs ) outperforms other combinations .", "label": "Background"}
19
- {"text": "To solve these scaling issues , we implement Online Variational Bayesian Inference ( Hoffman et al. , 2010 ; Hoffman et al. , 2012 ) for our models .", "label": "Uses"}
20
- {"text": "Other studies which view lR as a query generation process include Maron and Kuhns , 1960 ; Hiemstra and Kraaij , 1999 ; Ponte and Croft , 1998 ; Miller et al , 1999 .", "label": "CompareOrContrast"}
21
- {"text": "As a result , researchers have re-adopted the once-popular knowledge-rich approach , investigating a variety of semantic knowledge sources for common noun resolution , such as the semantic relations between two NPs ( e.g. , Ji et al. ( 2005 ) ) , their semantic similarity as computed using WordNet ( e.g. , Poesio et al. ( 2004 ) ) or Wikipedia ( Ponzetto and Strube , 2006 ) , and the contextual role played by an NP ( see Bean and Riloff ( 2004 ) ) .", "label": "Background"}
22
- {"text": "There have been many studies on parsing techniques ( Poller and Becker , 1998 ; Flickinger et al. , 2000 ) , ones on disambiguation models ( Chiang , 2000 ; Kanayama et al. , 2000 ) , and ones on programming/grammar-development environ -", "label": "Background"}
23
- {"text": "An example of this is the estimation of maximum entropy models , from simple iterative estimation algorithms used by Ratnaparkhi ( 1998 ) that converge very slowly , to complex techniques from the optimisation literature that converge much more rapidly ( Malouf , 2002 ) .", "label": "Background"}
24
- {"text": "For example , such schema can serve as a mean to represent translation examples , or find structural correspondences for the purpose of transfer grammar learning ( Menezes & Richardson , 2001 ) , ( Aramaki et al. , 2001 ) , ( Watanabe et al. , 2000 ) , ( Meyers et al. , 2000 ) , ( Matsumoto et al. , 1993 ) , ( kaji et al. , 1992 ) , and example-base machine translation EBMT3 ( Sato & Nagao , 1990 ) , ( Sato , 1991 ) , ( Richardson et al. , 2001 ) , ( Al-Adhaileh & Tang , 1999 ) .", "label": "Background"}
25
- {"text": "Later works , such as Atallah et al. ( 2001a ) , Bolshakov ( 2004 ) , Taskiran et al. ( 2006 ) and Topkara et al. ( 2006b ) , further made use of part-ofspeech taggers and electronic dictionaries , such as WordNet and VerbNet , to increase the robustness of the method .", "label": "Background"}
26
- {"text": "results are based on a corpus of movie subtitles ( Tiedemann 2007 ) , and are consequently shorter sentences , whereas the En \u00e2\u0086\u0092 Es results are based on a corpus of parliamentary proceedings ( Koehn 2005 ) .", "label": "Uses"}
27
- {"text": "We work with a semi-technical text on meteorological phenomena ( Larrick , 1961 ) , meant for primary school students .", "label": "Uses"}
28
- {"text": "This Principle of Finitism is also assumed by Johnson-Laird ( 1983 ) , Jackendoff ( 1983 ) , Kamp ( 1981 ) , and implicitly or explicitly by almost all researchers in computational linguistics .", "label": "CompareOrContrast"}
29
- {"text": "The candidate feature templates include : Voice from Sun and Jurafsky ( 2004 ) .", "label": "Uses"}
30
- {"text": "Over the past decade , researchers at IBM have developed a series of increasingly sophisticated statistical models for machine translation ( Brown et al. , 1988 ; Brown et al. , 1990 ; Brown et al. , 1993a ) .", "label": "Background"}
31
- {"text": "The PICO framework ( Richardson et al. 1995 ) for capturing well-formulated clinical queries ( described in Section 2 ) can serve as the basis of a knowledge representation that bridges the needs of clinicians and analytical capabilities of a system .", "label": "Background"}
32
- {"text": "The Chinese PropBank has labeled the predicateargument structures of sentences from the Chinese TreeBank ( Xue et al. 2005 ) .", "label": "Uses"}
33
- {"text": "Furthermore , manually selected word pairs are often biased towards highly related pairs ( Gurevych , 2006 ) , because human annotators tend to select only highly related pairs connected by relations they are aware of .", "label": "Background"}
34
- {"text": "Disjunctive feature descriptions are also possible ; WIT incorporates an efficient method for handling disjunctions ( Nakano , 1991 ) .", "label": "Uses"}
35
- {"text": "Our strategy is based on the approach presented by Johnson et al. ( 2007 ) .", "label": "Uses"}
36
- {"text": "Indeed , contrary to the more classical statistical methods ( Mutual Information , Loglike ... , see below ) used for collocation acquisition ( see ( Pearce , 2002 ) for a review ) , these patterns allow :", "label": "Background"}
37
- {"text": "A further complication is that different speakers can regard very different values as prototypical , making it difficult to assess which of two objects is greener even on one dimension ( Berlin and Kay 1969 , pages 10 -- 12 ) .", "label": "Background"}
38
- {"text": "The M step then treats c as fixed , observed data and adjusts 0 until the predicted vector of total feature counts equals c , using Improved Iterative Scaling ( Della Pietra et al. , 1997 ; Chen and", "label": "Uses"}
39
- {"text": "Both kinds of annotation were carried out using ANVIL ( Kipp , 2004 ) .", "label": "Uses"}
40
- {"text": "The article classifier is a discriminative model that draws on the state-of-the-art approach described in Rozovskaya et al. ( 2012 ) .", "label": "Uses"}
41
- {"text": "The language grounding problem has received significant attention in recent years , owed in part to the wide availability of data sets ( e.g. Flickr , Von Ahn ( 2006 ) ) , computing power , improved computer vision models ( Oliva and Torralba , 2001 ; Lowe , 2004 ; Farhadi et al. , 2009 ; Parikh and Grauman , 2011 ) and neurological evidence of ties between the language , perceptual and motor systems in the brain ( Pulverm \u00c2\u00a8 uller et al. , 2005 ; Tettamanti et al. , 2005 ; Aziz-Zadeh et al. , 2006 ) .", "label": "Background"}
42
- {"text": "Church ( 1995 , p. 294 ) studied , among other simple text normalization techniques , the effect of case normalization for different words and showed that `` sometimes case variants refer to the same thing ( hurricane and Hurricane ) , sometimes they refer to different things ( continental and Continental ) and sometimes they do n't refer to much of anything ( e.g. , anytime and Anytime ) . ''", "label": "Background"}
43
- {"text": "We follow the notation convention of Lari and Young ( 1990 ) .", "label": "Uses"}
44
- {"text": "Later works , such as Atallah et al. ( 2001a ) , Bolshakov ( 2004 ) , Taskiran et al. ( 2006 ) and Topkara et al. ( 2006b ) , further made use of part-ofspeech taggers and electronic dictionaries , such as WordNet and VerbNet , to increase the robustness of the method .", "label": "Background"}
45
- {"text": "Finkelstein et al. ( 2002 ) did not report inter-subject correlation for their larger dataset .", "label": "CompareOrContrast"}
46
- {"text": "This is a similar conclusion to our previous work in Salloum and Habash ( 2011 ) .", "label": "CompareOrContrast"}
47
- {"text": "We then use the program Snob ( Wallace and Boulton 1968 ; Wallace 2005 ) to cluster these experiences .", "label": "Uses"}
48
- {"text": "The priorities are used for disambiguating interpretation in the incremental understanding method ( Nakano et al. , 1999b ) .", "label": "Uses"}
49
- {"text": "A number of alignment techniques have been proposed , varying from statistical methods ( Brown et al. , 1991 ; Gale and Church , 1991 ) to lexical methods ( Kay and Roscheisen , 1993 ; Chen , 1993 ) .", "label": "Background"}
50
- {"text": "Alshawi and Crouch ( 1992 ) present an illustrative first-order fragment along these lines and are able to supply a coherent formal semantics for the CLF-QLFs themselves , using a technique essentially equivalent to supervaluations : a QLF is true iff all its possible RQLFs are , false iff they are all false , and undefined otherwise .", "label": "Background"}
51
- {"text": "We use the open-source Moses toolkit ( Koehn et al. , 2007 ) to build a phrase-based SMT system trained on mostly MSA data ( 64M words on the Arabic side ) obtained from several LDC corpora including some limited DA data .", "label": "Uses"}
52
- {"text": "This approach is taken , for example , in LKB ( Copestake 1992 ) where lexical rules are introduced on a par with phrase structure rules and the parser makes no distinction between lexical and nonlexical rules ( Copestake 1993 , 31 ) .", "label": "CompareOrContrast"}
53
- {"text": "This confirms that although Kozima 's approach ( Kozima , 1993 ) is computationally expensive , it does produce more precise segmentation .", "label": "CompareOrContrast"}
54
- {"text": "Park and Byrd ( 2001 ) recently described a hybrid method for finding abbreviations and their definitions .", "label": "Background"}
55
- {"text": "More specifically , the notion of the phrasal lexicon ( used first by Becker 1975 ) has been used successfully in a number of areas :", "label": "Background"}
56
- {"text": "Ushioda et al. ( 1993 ) run a finite-state NP parser on a POS-tagged corpus to calculate the relative frequency of the same six subcategorization verb classes .", "label": "Background"}
57
- {"text": "This section , which elaborates on preliminary results reported in Demner-Fushman and Lin ( 2005 ) , describes extraction algorithms for population , problems , interventions , outcomes , and the strength of evidence .", "label": "Extends"}
58
- {"text": "It would seem therefore that the iteration of the PT operation to form a closure is needed ( cfXXX Zadrozny 1987b ) .", "label": "CompareOrContrast"}
59
- {"text": "Sedivy et al. ( 1999 ) asked subjects to identify the target of a vague description in a visual scene .", "label": "Background"}
60
- {"text": "Our most accurate single grammar achieves an F score of 91.6 on the WSJ test set , rivaling discriminative reranking approaches ( Charniak and Johnson , 2005 ) and products of latent variable grammars ( Petrov , 2010 ) , despite being a single generative PCFG .", "label": "CompareOrContrast"}
61
- {"text": "Our recovery policy is modeled on the TargetedHelp ( Hockey et al. , 2003 ) policy used in task-oriented dialogue .", "label": "Extends"}
62
- {"text": "It has been argued that , in an incremental approach , gradable properties should be given a low preference ranking because they are difficult to process ( Krahmer and Theune 2002 ) .", "label": "CompareOrContrast"}
63
- {"text": "` See ( King , 1994 ) for a discussion of the appropriateness of TIG for HPSG and a comparison with other feature logic approaches designed for HPSG .", "label": "Background"}
64
- {"text": "Specifically , we used Decision Graphs ( Oliver 1993 ) for Doc-Pred , and SVMs ( Vapnik 1998 ) for Sent-Pred .11 Additionally , we used unigrams for clustering documents and sentences , and unigrams and bigrams for predicting document clusters and sentence clusters ( Sections 3.1.2 and 3.2.2 ) .", "label": "Uses"}
65
- {"text": "There are many plausible representations , such as pairs of trees from synchronous tree adjoining grammars ( Abeille et al. 1990 ; Shieber 1994 ; Candito 1998 ) , lexical conceptual structures ( Dorr 1992 ) and WordNet synsets ( Fellbaum 1998 ; Vossen 1998 ) .", "label": "Background"}
66
- {"text": "ones , DIRT ( Lin and Pantel , 2001 ) , VerbOcean ( Chklovski and Pantel , 2004 ) , FrameNet ( Baker et al. , 1998 ) , and Wikipedia ( Mehdad et al. , 2010 ; Kouylekov et al. , 2009 ) .", "label": "Background"}
67
- {"text": "In the latter case , we can also take care of transferring the value of z. However , as discussed by Meurers ( 1994 ) , creating several instances of lexical rules can be avoided .", "label": "Motivation"}
68
- {"text": "It maximizes the probability of getting the entire DA sequence correct , but it does not necessarily find the DA sequence that has the most DA labels correct ( Dermatas and Kokkinakis 1995 ) .", "label": "Background"}
69
- {"text": "A substring in the sentence that corresponds to a node in the representation tree is denoted by assigning the interval of the substring to SNODE of 2 These definitions are based on the discussion in ( Tang , 1994 ) and Boitet & Zaharin ( 1988 ) .", "label": "Uses"}
70
- {"text": "We found that the oldest system ( Brown et al. , 1992 ) yielded the best prototypes , and that using these prototypes gave state-of-the-art performance on WSJ , as well as improvements on nearly all of the non-English corpora .", "label": "Background"}
71
- {"text": "Other definitions of predicates may be found in ( Gomez , 1998 ) .", "label": "Background"}
72
- {"text": "For the sake of completeness , we report in this section also the results obtained adopting the `` basic solution '' proposed by ( Mehdad et al. , 2010 ) .", "label": "CompareOrContrast"}
73
- {"text": "The representations used by Danlos ( 2000 ) , Gardent and Webber ( 1998 ) , or Stone and Doran ( 1997 ) are similar , but do not ( always ) explicitly represent the clause combining operations as labeled nodes .", "label": "Background"}
74
- {"text": "Since earlier versions of the SNoW based CSCL were used only to identify single phrases ( Punyakanok and Roth , 2001 ; Munoz et al. , 1999 ) and never to identify a collection of several phrases at the same time , as we do here , we also trained and tested it under the exact conditions of CoNLL-2000 ( Tjong Kim Sang and Buchholz , 2000 ) to compare it to other shallow parsers .", "label": "Extends"}
75
- {"text": "If differences in meaning between senses are very fine-grained , distinguishing between them is hard even for humans ( Mihalcea and Moldovan , 2001 ) .6 Pairs containing such words are not suitable for evaluation .", "label": "Background"}
76
- {"text": "The application of domain models and deep semantic knowledge to question answering has been explored by a variety of researchers ( e.g. , Jacquemart and Zweigenbaum 2003 , Rinaldi et al. 2004 ) , and was also the focus of recent workshops on question answering in restricted domains at ACL 2004 and AAAI 2005 .", "label": "Background"}
77
- {"text": "Griffiths et al. ( 2007 ) helped pave the path for cognitive-linguistic multimodal research , showing that Latent Dirichlet Allocation outperformed Latent Semantic Analysis ( Deerwester et al. , 1990 ) in the prediction of association norms .", "label": "Background"}
78
- {"text": "Although not the first to employ a generative approach to directly model content , the seminal work of Barzilay and Lee ( 2004 ) is a noteworthy point of reference and comparison .", "label": "CompareOrContrast"}
79
- {"text": "Others include selectional preferences , transitivity ( Schoenmackers et al. , 2008 ) , mutual exclusion , symmetry , etc. .", "label": "Background"}
80
- {"text": "\u00e2\u0080\u00a2 cross-language information retrieval ( e.g. , McCarley 1999 ) , \u00e2\u0080\u00a2 multilingual document filtering ( e.g. , Oard 1997 ) , \u00e2\u0080\u00a2 computer-assisted language learning ( e.g. , Nerbonne et al. 1997 ) , \u00e2\u0080\u00a2 certain machine-assisted translation tools ( e.g. , Macklovitch 1994 ; Melamed 1996a ) , \u00e2\u0080\u00a2 concordancing for bilingual lexicography ( e.g. , Catizone , Russell , and Warwick 1989 ; Gale and Church 1991 ) ,", "label": "Background"}
81
- {"text": "For example , ( Fang et al. , 2001 ) discusses the evaluation of two different text categorization strategies with several variations of their feature spaces .", "label": "Background"}
82
- {"text": "As stated before , the experiments are run in the ACE '04 framework ( NIST , 2004 ) where the system will identify mentions and will label them ( cfXXX Section 4 ) with a type ( person , organization , etc ) , a sub-type ( OrgCommercial , OrgGovernmental , etc ) , a mention level ( named , nominal , etc ) , and a class ( specific , generic , etc ) .", "label": "Uses"}
83
- {"text": "Thus , the second class of SBD systems employs machine learning techniques such as decision tree classifiers ( Riley 1989 ) , neural networks ( Palmer and Hearst 1994 ) , and maximum-entropy modeling ( Reynar and Ratnaparkhi 1997 ) .", "label": "Background"}
84
- {"text": "or quotation of messages in emails or postings ( see Mullen and Malouf ( 2006 ) but cfXXX Agrawal et al. ( 2003 ) ) .", "label": "Background"}
85
- {"text": "The first work to do this with topic models is Feng and Lapata ( 2010b ) .", "label": "Background"}
86
- {"text": "The language chosen for semantic representation is a flat semantics along the line of ( Bos , 1995 ; Copestake et al. , 1999 ; Copestake et al. , 2001 ) .", "label": "CompareOrContrast"}
87
- {"text": "Tetreault 's contribution features comparative evaluation involving the author 's own centering-based pronoun resolution algorithm called the Left-Right Centering algorithm ( LRC ) as well as three other pronoun resolution methods : Hobbs 's naive algorithm ( Hobbs 1978 ) , BFP ( Brennan , Friedman , and Pollard 1987 ) , and Strube 's 5list approach ( Strube 1998 ) .", "label": "Background"}
88
- {"text": "For future work , we might investigate how machine learning algorithms , which are specifically designed for the problem of domain adaptation ( Blitzer et al. , 2007 ; Jiang and Zhai , 2007 ) , perform in comparison to our approach .", "label": "Future"}
89
- {"text": "The X2 statistic is performing at least as well as G2 , throwing doubt on the claim by Dunning ( 1993 ) that the G2 statistic is better suited for use in corpus-based NLP .", "label": "CompareOrContrast"}
90
- {"text": "Provided with the candidate fragment elements , we previously ( Wang and Callison-Burch , 2011 ) used a chunker3 to finalize the output fragments , in order to follow the linguistic definition of a ( para - ) phrase .", "label": "Extends"}
91
- {"text": "There is a rich literature on organization and lexical access of morphologically complex words where experiments have been conducted mainly for derivational suffixed words of English , Hebrew , Italian , French , Dutch , and few other languages ( Marslen-Wilson et al. , 2008 ; Frost et al. , 1997 ; Grainger , et al. , 1991 ; Drews and Zwitserlood , 1995 ) .", "label": "Background"}
92
- {"text": "This method of incorporating dictionary information seems simpler than the method proposed by Brown et al. for their models ( Brown et al. , 1993b ) .", "label": "CompareOrContrast"}
93
- {"text": "One important example is the constituentcontext model ( CCM ) of Klein and Manning ( 2002 ) , which was specifically designed to capture the linguistic observation made by Radford ( 1988 ) that there are regularities to the contexts in which constituents appear .", "label": "Background"}
94
- {"text": "Japanese ( Kawata and Bartels , 2000 ) , despite a very high accuracy , is different in that attachment score drops from 98 % to 85 % , as we go from length 1 to 2 , which may have something to do with the data consisting of transcribed speech with very short utterances .", "label": "CompareOrContrast"}
95
- {"text": "7 We ignore the rare `` false idafa '' construction ( Habash 2010 , p. 102 ) .", "label": "Background"}
96
- {"text": "Various approaches for computing semantic relatedness of words or concepts have been proposed , e.g. dictionary-based ( Lesk , 1986 ) , ontology-based ( Wu and Palmer , 1994 ; Leacock and Chodorow , 1998 ) , information-based ( Resnik , 1995 ; Jiang and Conrath , 1997 ) or distributional ( Weeds and Weir , 2005 ) .", "label": "Background"}
97
- {"text": "Some methods are based on likelihood ( Och and Ney , 2002 ; Blunsom et al. , 2008 ) , error rate ( Och , 2003 ; Zhao and Chen , 2009 ; Pauls et al. , 2009 ; Galley and Quirk , 2011 ) , margin ( Watanabe et al. , 2007 ; Chiang et al. , 2008 ) and ranking ( Hopkins and May , 2011 ) , and among which minimum error rate training ( MERT ) ( Och , 2003 ) is the most popular one .", "label": "Background"}
98
- {"text": "We follow Lewis and Steedman ( 2014 ) in allowing a small set of generic , linguistically-plausible unary and binary grammar rules .", "label": "Uses"}
99
- {"text": "( 7 ) NEIGHBOR : Research in lexical semantics suggests that the SC of an NP can be inferred from its distributionally similar NPs ( see Lin ( 1998a ) ) .", "label": "Motivation"}
100
- {"text": "Discrepancies in length throw constituents off balance , and so prosodic phrasing will cross constituent boundaries in order to give the phrases similar lengths ; this is the case in Chickens were eating II the remaining green vegetables , where the subject-predicate boundary finds no prosodic correspondent .4 The most explicit version of this approach is the analysis presented in Gee and Grosjean ( 1983 ) ( henceforth G&G ) .", "label": "CompareOrContrast"}
101
- {"text": "Our approach to the problem is more compatible with the empirical evidence we presented in our prior work ( Li et al. , 2014 ) where we analyzed the output of Chinese to English machine translation and found that there is no correlation between sentence length and MT quality .", "label": "CompareOrContrast"}
102
- {"text": "For all experiments reported in this section we used the syntactic dependency parser MaltParser v1 .3 ( Nivre 2003 , 2008 ; K\u00c3\u00bcbler , McDonald , and Nivre 2009 ) , a transition-based parser with an input buffer and a stack , which uses SVM classifiers", "label": "Uses"}
103
- {"text": "Against the background of a growing interest in multilingual NLP , multilingual anaphora / coreference resolution has gained considerable momentum in recent years ( Aone and McKee 1993 ; Azzam , Humphreys , and Gaizauskas 1998 ; Harabagiu and Maiorano 2000 ; Mitkov and Barbu 2000 ; Mitkov 1999 ; Mitkov and Stys 1997 ; Mitkov , Belguith , and Stys 1998 ) .", "label": "Background"}
104
- {"text": "We use the same data setting with Xue ( 2008 ) , however a bit different from Xue and Palmer ( 2005 ) .", "label": "CompareOrContrast"}
105
- {"text": "They proved to be useful in a number of NLP applications such as natural language generation ( Iordanskaja et al. , 1991 ) , multidocument summarization ( McKeown et al. , 2002 ) , automatic evaluation of MT ( Denkowski and Lavie , 2010 ) , and TE ( Dinu and Wang , 2009 ) .", "label": "Motivation"}
106
- {"text": "Moreover , a sandbox is a temporary view of a document itself i.e. a sandbox can not cause a change in the history ( Cunningham and Leuf , 2001 ) .", "label": "Background"}
107
- {"text": "Regarding future work , there are many research line that may be followed : i ) Capturing more features by employing external knowledge such as ontological , lexical resource or WordNet-based features ( Basili et al. , 2005a ; Basili et al. , 2005b ; Bloehdorn et al. , 2006 ; Bloehdorn and Moschitti , 2007 ) or shallow semantic trees , ( Giuglea and Moschitti , 2004 ; Giuglea and Moschitti , 2006 ; Moschitti and Bejan , 2004 ; Moschitti et al. , 2007 ; Moschitti , 2008 ; Moschitti et al. , 2008 ) .", "label": "Future"}
108
- {"text": "Another line of research approaches grounded language knowledge by augmenting distributional approaches of word meaning with perceptual information ( Andrews et al. , 2009 ; Steyvers , 2010 ; Feng and Lapata , 2010b ; Bruni et al. , 2011 ; Silberer and Lapata , 2012 ; Johns and Jones , 2012 ; Bruni et al. , 2012a ; Bruni et al. , 2012b ; Silberer et al. , 2013 ) .", "label": "Background"}
109
- {"text": "Future research should apply the work of Blunsom et al. ( 2008 ) and Blunsom and Osborne ( 2008 ) , who marginalize over derivations to find the most probable translation rather than the most probable derivation , to these multi-nonterminal grammars .", "label": "Future"}
110
- {"text": "We have since improved the interface by incorporating a capability in the recognizer to propose additional solutions in turn once the first one fails to parse ( Zue et al. 1991 ) To produce these `` N-best '' alternatives , we make use of a standard A * search algorithm ( Hart 1968 , Jelinek 1976 ) .", "label": "Uses"}
111
- {"text": "Curran ( 2003 )", "label": "Background"}
112
- {"text": "OT therefore holds out the promise of simplifying grammars , by factoring all complex phenomena into simple surface-level constraints that partially mask one another .1 Whether this is always possible under an appropriate definition of `` simple constraints '' ( e.g. , Eisner 1997b ) is of course an empirical question .", "label": "Background"}
113
- {"text": "Consider , for example , the lexical rule in Figure 2 , which encodes a passive lexical rule like the one presented by Pollard and Sag ( 1987 , 215 ) in terms of the setup of Pollard and Sag ( 1994 , ch .", "label": "Background"}
114
- {"text": "This result is consistent with other works using this model with these features ( Andrews et al. , 2009 ; Silberer and Lapata , 2012 ) .", "label": "CompareOrContrast"}
 
1
+ {"text": "Typical examples are Bulgarian ( Simov et al. , 2005 ; Simov and Osenova , 2003 ) , Chinese ( Chen et al. , 2003 ) , Danish ( Kromann , 2003 ) , and Swedish ( Nilsson et al. , 2005 ) .", "label": 0}
2
+ {"text": "This appeared to solve the problem , and the results presented later for the average degree of generalisation do not show an over-generalisation compared with those given in Li and Abe ( 1998 ) .", "label": 1}
3
+ {"text": "These observations and this line of reasoning has not escaped the attention of theoretical linguists : Hale and Keyser ( 1993 ) propose that argument structure is , in fact , encoded syntactically .", "label": 0}
4
+ {"text": "inter-document references in the form of hyperlinks ( Agrawal et al. , 2003 ) .", "label": 0}
5
+ {"text": "While IA is generally thought to be consistent with findings on human language production ( Hermann and Deutsch 1976 ; Levelt 1989 ; Pechmann 1989 ; Sonnenschein 1982 ) , the hypothesis that incrementality is a good model of human GRE seems unfalsifiable until a preference order is specified for the properties on which it operates .", "label": 0}
6
+ {"text": "Arabic has two kinds of plurals : broken plurals and sound plurals ( Wightwick and Gaafar , 1998 ; Chen and Gey , 2002 ) .", "label": 0}
7
+ {"text": "Secondly , as ( Blunsom et al. , 2008 ) show , marginalizing out the different segmentations during decoding leads to improved performance .", "label": 3}
8
+ {"text": "Nivre ( 2008 ) reports experiments on Arabic parsing using his MaltParser ( Nivre et al. 2007 ) , trained on the PADT .", "label": 0}
9
+ {"text": "More recently , ( Sebastiani , 2002 ) has performed a good survey of document categorization ; recent works can also be found in ( Joachims , 2002 ) , ( Crammer and Singer , 2003 ) , and ( Lewis et al. , 2004 ) .", "label": 0}
10
+ {"text": "The grammar conversion from LTAG to HPSG ( Yoshinaga and Miyao , 2001 ) is the core portion of the RenTAL system .", "label": 0}
11
+ {"text": "This includes work on question answering ( Wang et al. , 2007 ) , sentiment analysis ( Nakagawa et al. , 2010 ) , MT reordering ( Xu et al. , 2009 ) , and many other tasks .", "label": 0}
12
+ {"text": "Our knowledge extractors rely extensively on MetaMap ( Aronson 2001 ) , a system for identifying segments of text that correspond to concepts in the UMLS Metathesaurus .", "label": 5}
13
+ {"text": "We study the cases where a 9Recall that even the Keller and Lapata ( 2003 ) system , built on the world 's largest corpus , achieves only 34 % recall ( Table 1 ) ( with only 48 % of positives and 27 % of all pairs previously observed , but see Footnote 5 ) .", "label": 1}
14
+ {"text": "Inspired by ( Blunsom et al. , 2009 ) and ( Cohn and Blunsom , 2009 ) , we define P ( str | frag ) as follows : where csw is the number of words in the source string .", "label": 4}
15
+ {"text": "Inspired by ( Blunsom et al. , 2009 ) and ( Cohn and Blunsom , 2009 ) , we define P ( str | frag ) as follows : where csw is the number of words in the source string .", "label": 4}
16
+ {"text": "The TNT POS tagger ( Brants , 2000 ) has also been designed to train and run very quickly , tagging between 30,000 and 60,000 words per second .", "label": 0}
17
+ {"text": "Some previous works ( Bannard and Callison-Burch , 2005 ; Zhao et al. , 2009 ; Kouylekov et al. , 2009 ) indicate , as main limitations of the mentioned resources , their limited coverage , their low precision , and the fact that they are mostly suitable to capture relations mainly between single words .", "label": 0}
18
+ {"text": "Similarly , Cowan and Collins ( 2005 ) report that the use of a subset of Spanish morphological features ( number for adjectives , determiners , nouns , pronouns , and verbs ; and mode for verbs ) outperforms other combinations .", "label": 0}
19
+ {"text": "To solve these scaling issues , we implement Online Variational Bayesian Inference ( Hoffman et al. , 2010 ; Hoffman et al. , 2012 ) for our models .", "label": 5}
20
+ {"text": "Other studies which view lR as a query generation process include Maron and Kuhns , 1960 ; Hiemstra and Kraaij , 1999 ; Ponte and Croft , 1998 ; Miller et al , 1999 .", "label": 1}
21
+ {"text": "As a result , researchers have re-adopted the once-popular knowledge-rich approach , investigating a variety of semantic knowledge sources for common noun resolution , such as the semantic relations between two NPs ( e.g. , Ji et al. ( 2005 ) ) , their semantic similarity as computed using WordNet ( e.g. , Poesio et al. ( 2004 ) ) or Wikipedia ( Ponzetto and Strube , 2006 ) , and the contextual role played by an NP ( see Bean and Riloff ( 2004 ) ) .", "label": 0}
22
+ {"text": "There have been many studies on parsing techniques ( Poller and Becker , 1998 ; Flickinger et al. , 2000 ) , ones on disambiguation models ( Chiang , 2000 ; Kanayama et al. , 2000 ) , and ones on programming/grammar-development environ -", "label": 0}
23
+ {"text": "An example of this is the estimation of maximum entropy models , from simple iterative estimation algorithms used by Ratnaparkhi ( 1998 ) that converge very slowly , to complex techniques from the optimisation literature that converge much more rapidly ( Malouf , 2002 ) .", "label": 0}
24
+ {"text": "For example , such schema can serve as a mean to represent translation examples , or find structural correspondences for the purpose of transfer grammar learning ( Menezes & Richardson , 2001 ) , ( Aramaki et al. , 2001 ) , ( Watanabe et al. , 2000 ) , ( Meyers et al. , 2000 ) , ( Matsumoto et al. , 1993 ) , ( kaji et al. , 1992 ) , and example-base machine translation EBMT3 ( Sato & Nagao , 1990 ) , ( Sato , 1991 ) , ( Richardson et al. , 2001 ) , ( Al-Adhaileh & Tang , 1999 ) .", "label": 0}
25
+ {"text": "Later works , such as Atallah et al. ( 2001a ) , Bolshakov ( 2004 ) , Taskiran et al. ( 2006 ) and Topkara et al. ( 2006b ) , further made use of part-ofspeech taggers and electronic dictionaries , such as WordNet and VerbNet , to increase the robustness of the method .", "label": 0}
26
+ {"text": "results are based on a corpus of movie subtitles ( Tiedemann 2007 ) , and are consequently shorter sentences , whereas the En \u00e2\u0086\u0092 Es results are based on a corpus of parliamentary proceedings ( Koehn 2005 ) .", "label": 5}
27
+ {"text": "We work with a semi-technical text on meteorological phenomena ( Larrick , 1961 ) , meant for primary school students .", "label": 5}
28
+ {"text": "This Principle of Finitism is also assumed by Johnson-Laird ( 1983 ) , Jackendoff ( 1983 ) , Kamp ( 1981 ) , and implicitly or explicitly by almost all researchers in computational linguistics .", "label": 1}
29
+ {"text": "The candidate feature templates include : Voice from Sun and Jurafsky ( 2004 ) .", "label": 5}
30
+ {"text": "Over the past decade , researchers at IBM have developed a series of increasingly sophisticated statistical models for machine translation ( Brown et al. , 1988 ; Brown et al. , 1990 ; Brown et al. , 1993a ) .", "label": 0}
31
+ {"text": "The PICO framework ( Richardson et al. 1995 ) for capturing well-formulated clinical queries ( described in Section 2 ) can serve as the basis of a knowledge representation that bridges the needs of clinicians and analytical capabilities of a system .", "label": 0}
32
+ {"text": "The Chinese PropBank has labeled the predicateargument structures of sentences from the Chinese TreeBank ( Xue et al. 2005 ) .", "label": 5}
33
+ {"text": "Furthermore , manually selected word pairs are often biased towards highly related pairs ( Gurevych , 2006 ) , because human annotators tend to select only highly related pairs connected by relations they are aware of .", "label": 0}
34
+ {"text": "Disjunctive feature descriptions are also possible ; WIT incorporates an efficient method for handling disjunctions ( Nakano , 1991 ) .", "label": 5}
35
+ {"text": "Our strategy is based on the approach presented by Johnson et al. ( 2007 ) .", "label": 5}
36
+ {"text": "Indeed , contrary to the more classical statistical methods ( Mutual Information , Loglike ... , see below ) used for collocation acquisition ( see ( Pearce , 2002 ) for a review ) , these patterns allow :", "label": 0}
37
+ {"text": "A further complication is that different speakers can regard very different values as prototypical , making it difficult to assess which of two objects is greener even on one dimension ( Berlin and Kay 1969 , pages 10 -- 12 ) .", "label": 0}
38
+ {"text": "The M step then treats c as fixed , observed data and adjusts 0 until the predicted vector of total feature counts equals c , using Improved Iterative Scaling ( Della Pietra et al. , 1997 ; Chen and", "label": 5}
39
+ {"text": "Both kinds of annotation were carried out using ANVIL ( Kipp , 2004 ) .", "label": 5}
40
+ {"text": "The article classifier is a discriminative model that draws on the state-of-the-art approach described in Rozovskaya et al. ( 2012 ) .", "label": 5}
41
+ {"text": "The language grounding problem has received significant attention in recent years , owed in part to the wide availability of data sets ( e.g. Flickr , Von Ahn ( 2006 ) ) , computing power , improved computer vision models ( Oliva and Torralba , 2001 ; Lowe , 2004 ; Farhadi et al. , 2009 ; Parikh and Grauman , 2011 ) and neurological evidence of ties between the language , perceptual and motor systems in the brain ( Pulverm \u00c2\u00a8 uller et al. , 2005 ; Tettamanti et al. , 2005 ; Aziz-Zadeh et al. , 2006 ) .", "label": 0}
42
+ {"text": "Church ( 1995 , p. 294 ) studied , among other simple text normalization techniques , the effect of case normalization for different words and showed that `` sometimes case variants refer to the same thing ( hurricane and Hurricane ) , sometimes they refer to different things ( continental and Continental ) and sometimes they do n't refer to much of anything ( e.g. , anytime and Anytime ) . ''", "label": 0}
43
+ {"text": "We follow the notation convention of Lari and Young ( 1990 ) .", "label": 5}
44
+ {"text": "Later works , such as Atallah et al. ( 2001a ) , Bolshakov ( 2004 ) , Taskiran et al. ( 2006 ) and Topkara et al. ( 2006b ) , further made use of part-ofspeech taggers and electronic dictionaries , such as WordNet and VerbNet , to increase the robustness of the method .", "label": 0}
45
+ {"text": "Finkelstein et al. ( 2002 ) did not report inter-subject correlation for their larger dataset .", "label": 1}
46
+ {"text": "This is a similar conclusion to our previous work in Salloum and Habash ( 2011 ) .", "label": 1}
47
+ {"text": "We then use the program Snob ( Wallace and Boulton 1968 ; Wallace 2005 ) to cluster these experiences .", "label": 5}
48
+ {"text": "The priorities are used for disambiguating interpretation in the incremental understanding method ( Nakano et al. , 1999b ) .", "label": 5}
49
+ {"text": "A number of alignment techniques have been proposed , varying from statistical methods ( Brown et al. , 1991 ; Gale and Church , 1991 ) to lexical methods ( Kay and Roscheisen , 1993 ; Chen , 1993 ) .", "label": 0}
50
+ {"text": "Alshawi and Crouch ( 1992 ) present an illustrative first-order fragment along these lines and are able to supply a coherent formal semantics for the CLF-QLFs themselves , using a technique essentially equivalent to supervaluations : a QLF is true iff all its possible RQLFs are , false iff they are all false , and undefined otherwise .", "label": 0}
51
+ {"text": "We use the open-source Moses toolkit ( Koehn et al. , 2007 ) to build a phrase-based SMT system trained on mostly MSA data ( 64M words on the Arabic side ) obtained from several LDC corpora including some limited DA data .", "label": 5}
52
+ {"text": "This approach is taken , for example , in LKB ( Copestake 1992 ) where lexical rules are introduced on a par with phrase structure rules and the parser makes no distinction between lexical and nonlexical rules ( Copestake 1993 , 31 ) .", "label": 1}
53
+ {"text": "This confirms that although Kozima 's approach ( Kozima , 1993 ) is computationally expensive , it does produce more precise segmentation .", "label": 1}
54
+ {"text": "Park and Byrd ( 2001 ) recently described a hybrid method for finding abbreviations and their definitions .", "label": 0}
55
+ {"text": "More specifically , the notion of the phrasal lexicon ( used first by Becker 1975 ) has been used successfully in a number of areas :", "label": 0}
56
+ {"text": "Ushioda et al. ( 1993 ) run a finite-state NP parser on a POS-tagged corpus to calculate the relative frequency of the same six subcategorization verb classes .", "label": 0}
57
+ {"text": "This section , which elaborates on preliminary results reported in Demner-Fushman and Lin ( 2005 ) , describes extraction algorithms for population , problems , interventions , outcomes , and the strength of evidence .", "label": 2}
58
+ {"text": "It would seem therefore that the iteration of the PT operation to form a closure is needed ( cfXXX Zadrozny 1987b ) .", "label": 1}
59
+ {"text": "Sedivy et al. ( 1999 ) asked subjects to identify the target of a vague description in a visual scene .", "label": 0}
60
+ {"text": "Our most accurate single grammar achieves an F score of 91.6 on the WSJ test set , rivaling discriminative reranking approaches ( Charniak and Johnson , 2005 ) and products of latent variable grammars ( Petrov , 2010 ) , despite being a single generative PCFG .", "label": 1}
61
+ {"text": "Our recovery policy is modeled on the TargetedHelp ( Hockey et al. , 2003 ) policy used in task-oriented dialogue .", "label": 2}
62
+ {"text": "It has been argued that , in an incremental approach , gradable properties should be given a low preference ranking because they are difficult to process ( Krahmer and Theune 2002 ) .", "label": 1}
63
+ {"text": "` See ( King , 1994 ) for a discussion of the appropriateness of TIG for HPSG and a comparison with other feature logic approaches designed for HPSG .", "label": 0}
64
+ {"text": "Specifically , we used Decision Graphs ( Oliver 1993 ) for Doc-Pred , and SVMs ( Vapnik 1998 ) for Sent-Pred .11 Additionally , we used unigrams for clustering documents and sentences , and unigrams and bigrams for predicting document clusters and sentence clusters ( Sections 3.1.2 and 3.2.2 ) .", "label": 5}
65
+ {"text": "There are many plausible representations , such as pairs of trees from synchronous tree adjoining grammars ( Abeille et al. 1990 ; Shieber 1994 ; Candito 1998 ) , lexical conceptual structures ( Dorr 1992 ) and WordNet synsets ( Fellbaum 1998 ; Vossen 1998 ) .", "label": 0}
66
+ {"text": "ones , DIRT ( Lin and Pantel , 2001 ) , VerbOcean ( Chklovski and Pantel , 2004 ) , FrameNet ( Baker et al. , 1998 ) , and Wikipedia ( Mehdad et al. , 2010 ; Kouylekov et al. , 2009 ) .", "label": 0}
67
+ {"text": "In the latter case , we can also take care of transferring the value of z. However , as discussed by Meurers ( 1994 ) , creating several instances of lexical rules can be avoided .", "label": 4}
68
+ {"text": "It maximizes the probability of getting the entire DA sequence correct , but it does not necessarily find the DA sequence that has the most DA labels correct ( Dermatas and Kokkinakis 1995 ) .", "label": 0}
69
+ {"text": "A substring in the sentence that corresponds to a node in the representation tree is denoted by assigning the interval of the substring to SNODE of 2 These definitions are based on the discussion in ( Tang , 1994 ) and Boitet & Zaharin ( 1988 ) .", "label": 5}
70
+ {"text": "We found that the oldest system ( Brown et al. , 1992 ) yielded the best prototypes , and that using these prototypes gave state-of-the-art performance on WSJ , as well as improvements on nearly all of the non-English corpora .", "label": 0}
71
+ {"text": "Other definitions of predicates may be found in ( Gomez , 1998 ) .", "label": 0}
72
+ {"text": "For the sake of completeness , we report in this section also the results obtained adopting the `` basic solution '' proposed by ( Mehdad et al. , 2010 ) .", "label": 1}
73
+ {"text": "The representations used by Danlos ( 2000 ) , Gardent and Webber ( 1998 ) , or Stone and Doran ( 1997 ) are similar , but do not ( always ) explicitly represent the clause combining operations as labeled nodes .", "label": 0}
74
+ {"text": "Since earlier versions of the SNoW based CSCL were used only to identify single phrases ( Punyakanok and Roth , 2001 ; Munoz et al. , 1999 ) and never to identify a collection of several phrases at the same time , as we do here , we also trained and tested it under the exact conditions of CoNLL-2000 ( Tjong Kim Sang and Buchholz , 2000 ) to compare it to other shallow parsers .", "label": 2}
75
+ {"text": "If differences in meaning between senses are very fine-grained , distinguishing between them is hard even for humans ( Mihalcea and Moldovan , 2001 ) .6 Pairs containing such words are not suitable for evaluation .", "label": 0}
76
+ {"text": "The application of domain models and deep semantic knowledge to question answering has been explored by a variety of researchers ( e.g. , Jacquemart and Zweigenbaum 2003 , Rinaldi et al. 2004 ) , and was also the focus of recent workshops on question answering in restricted domains at ACL 2004 and AAAI 2005 .", "label": 0}
77
+ {"text": "Griffiths et al. ( 2007 ) helped pave the path for cognitive-linguistic multimodal research , showing that Latent Dirichlet Allocation outperformed Latent Semantic Analysis ( Deerwester et al. , 1990 ) in the prediction of association norms .", "label": 0}
78
+ {"text": "Although not the first to employ a generative approach to directly model content , the seminal work of Barzilay and Lee ( 2004 ) is a noteworthy point of reference and comparison .", "label": 1}
79
+ {"text": "Others include selectional preferences , transitivity ( Schoenmackers et al. , 2008 ) , mutual exclusion , symmetry , etc. .", "label": 0}
80
+ {"text": "\u00e2\u0080\u00a2 cross-language information retrieval ( e.g. , McCarley 1999 ) , \u00e2\u0080\u00a2 multilingual document filtering ( e.g. , Oard 1997 ) , \u00e2\u0080\u00a2 computer-assisted language learning ( e.g. , Nerbonne et al. 1997 ) , \u00e2\u0080\u00a2 certain machine-assisted translation tools ( e.g. , Macklovitch 1994 ; Melamed 1996a ) , \u00e2\u0080\u00a2 concordancing for bilingual lexicography ( e.g. , Catizone , Russell , and Warwick 1989 ; Gale and Church 1991 ) ,", "label": 0}
81
+ {"text": "For example , ( Fang et al. , 2001 ) discusses the evaluation of two different text categorization strategies with several variations of their feature spaces .", "label": 0}
82
+ {"text": "As stated before , the experiments are run in the ACE '04 framework ( NIST , 2004 ) where the system will identify mentions and will label them ( cfXXX Section 4 ) with a type ( person , organization , etc ) , a sub-type ( OrgCommercial , OrgGovernmental , etc ) , a mention level ( named , nominal , etc ) , and a class ( specific , generic , etc ) .", "label": 5}
83
+ {"text": "Thus , the second class of SBD systems employs machine learning techniques such as decision tree classifiers ( Riley 1989 ) , neural networks ( Palmer and Hearst 1994 ) , and maximum-entropy modeling ( Reynar and Ratnaparkhi 1997 ) .", "label": 0}
84
+ {"text": "or quotation of messages in emails or postings ( see Mullen and Malouf ( 2006 ) but cfXXX Agrawal et al. ( 2003 ) ) .", "label": 0}
85
+ {"text": "The first work to do this with topic models is Feng and Lapata ( 2010b ) .", "label": 0}
86
+ {"text": "The language chosen for semantic representation is a flat semantics along the line of ( Bos , 1995 ; Copestake et al. , 1999 ; Copestake et al. , 2001 ) .", "label": 1}
87
+ {"text": "Tetreault 's contribution features comparative evaluation involving the author 's own centering-based pronoun resolution algorithm called the Left-Right Centering algorithm ( LRC ) as well as three other pronoun resolution methods : Hobbs 's naive algorithm ( Hobbs 1978 ) , BFP ( Brennan , Friedman , and Pollard 1987 ) , and Strube 's 5list approach ( Strube 1998 ) .", "label": 0}
88
+ {"text": "For future work , we might investigate how machine learning algorithms , which are specifically designed for the problem of domain adaptation ( Blitzer et al. , 2007 ; Jiang and Zhai , 2007 ) , perform in comparison to our approach .", "label": 3}
89
+ {"text": "The X2 statistic is performing at least as well as G2 , throwing doubt on the claim by Dunning ( 1993 ) that the G2 statistic is better suited for use in corpus-based NLP .", "label": 1}
90
+ {"text": "Provided with the candidate fragment elements , we previously ( Wang and Callison-Burch , 2011 ) used a chunker3 to finalize the output fragments , in order to follow the linguistic definition of a ( para - ) phrase .", "label": 2}
91
+ {"text": "There is a rich literature on organization and lexical access of morphologically complex words where experiments have been conducted mainly for derivational suffixed words of English , Hebrew , Italian , French , Dutch , and few other languages ( Marslen-Wilson et al. , 2008 ; Frost et al. , 1997 ; Grainger , et al. , 1991 ; Drews and Zwitserlood , 1995 ) .", "label": 0}
92
+ {"text": "This method of incorporating dictionary information seems simpler than the method proposed by Brown et al. for their models ( Brown et al. , 1993b ) .", "label": 1}
93
+ {"text": "One important example is the constituentcontext model ( CCM ) of Klein and Manning ( 2002 ) , which was specifically designed to capture the linguistic observation made by Radford ( 1988 ) that there are regularities to the contexts in which constituents appear .", "label": 0}
94
+ {"text": "Japanese ( Kawata and Bartels , 2000 ) , despite a very high accuracy , is different in that attachment score drops from 98 % to 85 % , as we go from length 1 to 2 , which may have something to do with the data consisting of transcribed speech with very short utterances .", "label": 1}
95
+ {"text": "7 We ignore the rare `` false idafa '' construction ( Habash 2010 , p. 102 ) .", "label": 0}
96
+ {"text": "Various approaches for computing semantic relatedness of words or concepts have been proposed , e.g. dictionary-based ( Lesk , 1986 ) , ontology-based ( Wu and Palmer , 1994 ; Leacock and Chodorow , 1998 ) , information-based ( Resnik , 1995 ; Jiang and Conrath , 1997 ) or distributional ( Weeds and Weir , 2005 ) .", "label": 0}
97
+ {"text": "Some methods are based on likelihood ( Och and Ney , 2002 ; Blunsom et al. , 2008 ) , error rate ( Och , 2003 ; Zhao and Chen , 2009 ; Pauls et al. , 2009 ; Galley and Quirk , 2011 ) , margin ( Watanabe et al. , 2007 ; Chiang et al. , 2008 ) and ranking ( Hopkins and May , 2011 ) , and among which minimum error rate training ( MERT ) ( Och , 2003 ) is the most popular one .", "label": 0}
98
+ {"text": "We follow Lewis and Steedman ( 2014 ) in allowing a small set of generic , linguistically-plausible unary and binary grammar rules .", "label": 5}
99
+ {"text": "( 7 ) NEIGHBOR : Research in lexical semantics suggests that the SC of an NP can be inferred from its distributionally similar NPs ( see Lin ( 1998a ) ) .", "label": 4}
100
+ {"text": "Discrepancies in length throw constituents off balance , and so prosodic phrasing will cross constituent boundaries in order to give the phrases similar lengths ; this is the case in Chickens were eating II the remaining green vegetables , where the subject-predicate boundary finds no prosodic correspondent .4 The most explicit version of this approach is the analysis presented in Gee and Grosjean ( 1983 ) ( henceforth G&G ) .", "label": 1}
101
+ {"text": "Our approach to the problem is more compatible with the empirical evidence we presented in our prior work ( Li et al. , 2014 ) where we analyzed the output of Chinese to English machine translation and found that there is no correlation between sentence length and MT quality .", "label": 1}
102
+ {"text": "For all experiments reported in this section we used the syntactic dependency parser MaltParser v1 .3 ( Nivre 2003 , 2008 ; K\u00c3\u00bcbler , McDonald , and Nivre 2009 ) , a transition-based parser with an input buffer and a stack , which uses SVM classifiers", "label": 5}
103
+ {"text": "Against the background of a growing interest in multilingual NLP , multilingual anaphora / coreference resolution has gained considerable momentum in recent years ( Aone and McKee 1993 ; Azzam , Humphreys , and Gaizauskas 1998 ; Harabagiu and Maiorano 2000 ; Mitkov and Barbu 2000 ; Mitkov 1999 ; Mitkov and Stys 1997 ; Mitkov , Belguith , and Stys 1998 ) .", "label": 0}
104
+ {"text": "We use the same data setting with Xue ( 2008 ) , however a bit different from Xue and Palmer ( 2005 ) .", "label": 1}
105
+ {"text": "They proved to be useful in a number of NLP applications such as natural language generation ( Iordanskaja et al. , 1991 ) , multidocument summarization ( McKeown et al. , 2002 ) , automatic evaluation of MT ( Denkowski and Lavie , 2010 ) , and TE ( Dinu and Wang , 2009 ) .", "label": 4}
106
+ {"text": "Moreover , a sandbox is a temporary view of a document itself i.e. a sandbox can not cause a change in the history ( Cunningham and Leuf , 2001 ) .", "label": 0}
107
+ {"text": "Regarding future work , there are many research line that may be followed : i ) Capturing more features by employing external knowledge such as ontological , lexical resource or WordNet-based features ( Basili et al. , 2005a ; Basili et al. , 2005b ; Bloehdorn et al. , 2006 ; Bloehdorn and Moschitti , 2007 ) or shallow semantic trees , ( Giuglea and Moschitti , 2004 ; Giuglea and Moschitti , 2006 ; Moschitti and Bejan , 2004 ; Moschitti et al. , 2007 ; Moschitti , 2008 ; Moschitti et al. , 2008 ) .", "label": 3}
108
+ {"text": "Another line of research approaches grounded language knowledge by augmenting distributional approaches of word meaning with perceptual information ( Andrews et al. , 2009 ; Steyvers , 2010 ; Feng and Lapata , 2010b ; Bruni et al. , 2011 ; Silberer and Lapata , 2012 ; Johns and Jones , 2012 ; Bruni et al. , 2012a ; Bruni et al. , 2012b ; Silberer et al. , 2013 ) .", "label": 0}
109
+ {"text": "Future research should apply the work of Blunsom et al. ( 2008 ) and Blunsom and Osborne ( 2008 ) , who marginalize over derivations to find the most probable translation rather than the most probable derivation , to these multi-nonterminal grammars .", "label": 3}
110
+ {"text": "We have since improved the interface by incorporating a capability in the recognizer to propose additional solutions in turn once the first one fails to parse ( Zue et al. 1991 ) To produce these `` N-best '' alternatives , we make use of a standard A * search algorithm ( Hart 1968 , Jelinek 1976 ) .", "label": 5}
111
+ {"text": "Curran ( 2003 )", "label": 0}
112
+ {"text": "OT therefore holds out the promise of simplifying grammars , by factoring all complex phenomena into simple surface-level constraints that partially mask one another .1 Whether this is always possible under an appropriate definition of `` simple constraints '' ( e.g. , Eisner 1997b ) is of course an empirical question .", "label": 0}
113
+ {"text": "Consider , for example , the lexical rule in Figure 2 , which encodes a passive lexical rule like the one presented by Pollard and Sag ( 1987 , 215 ) in terms of the setup of Pollard and Sag ( 1994 , ch .", "label": 0}
114
+ {"text": "This result is consistent with other works using this model with these features ( Andrews et al. , 2009 ; Silberer and Lapata , 2012 ) .", "label": 1}
dataset/citation_intent/label2id.json ADDED
@@ -0,0 +1 @@
 
 
1
+ {"Background": 0, "CompareOrContrast": 1, "Extends": 2, "Future": 3, "Motivation": 4, "Uses": 5}
dataset/citation_intent/test.jsonl CHANGED
@@ -1,139 +1,139 @@
1
- {"text": "Resnik ( 1995 ) reported a correlation of r = .9026.10 The results are not directly comparable , because he only used noun-noun pairs , words instead of concepts , a much smaller dataset , and measured semantic similarity instead of semantic relatedness .", "label": "CompareOrContrast"}
2
- {"text": "Similar observation for surface word frequency was also observed by ( Bertram et al. , 2000 ; Bradley , 1980 ; Burani et al. , 1987 ; Burani et al. , 1984 ; Schreuder et al. , 1997 ; Taft 1975 ; Taft , 2004 ) where it has been claimed that words having low surface frequency tends to decompose .", "label": "Background"}
3
- {"text": "But their importance has grown far beyond machine translation : for instance , transferring annotations between languages ( Yarowsky and Ngai 2001 ; Hwa et al. 2005 ; Ganchev , Gillenwater , and Taskar 2009 ) ; discovery of paraphrases ( Bannard and Callison-Burch 2005 ) ; and joint unsupervised POS and parser induction across languages ( Snyder and Barzilay 2008 ) .", "label": "Motivation"}
4
- {"text": "Previous sentiment-analysis work in different domains has considered inter-document similarity ( Agarwal and Bhattacharyya , 2005 ; Pang and Lee , 2005 ; Goldberg and Zhu , 2006 ) or explicit", "label": "Background"}
5
- {"text": "However , the method we are currently using in the ATIS domain ( Seneff et al. 1991 ) represents our most promising approach to this problem .", "label": "Uses"}
6
- {"text": "Henceforth the collaborative traits of blogs and wikis ( McNeill , 2005 ) emphasize annotation , comment , and strong editing .", "label": "Background"}
7
- {"text": "The ICA system ( Hepple , 2000 ) aims to reduce the training time by introducing independence assumptions on the training samples that dramatically reduce the training time with the possible downside of sacrificing performance .", "label": "Background"}
8
- {"text": "To this end , several toolkits for building spoken dialogue systems have been developed ( Barnett and Singh , 1997 ; Sasajima et al. , 1999 ) .", "label": "Background"}
9
- {"text": "Thus , over the past few years , along with advances in the use of learning and statistical methods for acquisition of full parsers ( Collins , 1997 ; Charniak , 1997a ; Charniak , 1997b ; Ratnaparkhi , 1997 ) , significant progress has been made on the use of statistical learning methods to recognize shallow parsing patterns syntactic phrases or words that participate in a syntactic relationship ( Church , 1988 ; Ramshaw and Marcus , 1995 ; Argamon et al. , 1998 ; Cardie and Pierce , 1998 ; Munoz et al. , 1999 ; Punyakanok and Roth , 2001 ; Buchholz et al. , 1999 ; Tjong Kim Sang and Buchholz , 2000 ) .", "label": "Background"}
10
- {"text": "Task properties Determining whether or not a speaker supports a proposal falls within the realm of sentiment analysis , an extremely active research area devoted to the computational treatment of subjective or opinion-oriented language ( early work includes Wiebe and Rapaport ( 1988 ) , Hearst ( 1992 ) , Sack ( 1994 ) , and Wiebe ( 1994 ) ; see Esuli ( 2006 ) for an active bibliography ) .", "label": "Background"}
11
- {"text": "Various approaches for computing semantic relatedness of words or concepts have been proposed , e.g. dictionary-based ( Lesk , 1986 ) , ontology-based ( Wu and Palmer , 1994 ; Leacock and Chodorow , 1998 ) , information-based ( Resnik , 1995 ; Jiang and Conrath , 1997 ) or distributional ( Weeds and Weir , 2005 ) .", "label": "Background"}
12
- {"text": "Both tasks are performed with a statistical framework : the mention detection system is similar to the one presented in ( Florian et al. , 2004 ) and the coreference resolution system is similar to the one described in ( Luo et al. , 2004 ) .", "label": "CompareOrContrast"}
13
- {"text": "The advantage of tuning similarity to the application of interest has been shown previously by Weeds and Weir ( 2005 ) .", "label": "CompareOrContrast"}
14
- {"text": "Although there are other discussions of the paragraph as a central element of discourse ( e.g. Chafe 1979 , Halliday and Hasan 1976 , Longacre 1979 , Haberlandt et al. 1980 ) , all of them share a certain limitation in their formal techniques for analyzing paragraph structure .", "label": "CompareOrContrast"}
15
- {"text": "Thus , over the past few years , along with advances in the use of learning and statistical methods for acquisition of full parsers ( Collins , 1997 ; Charniak , 1997a ; Charniak , 1997b ; Ratnaparkhi , 1997 ) , significant progress has been made on the use of statistical learning methods to recognize shallow parsing patterns syntactic phrases or words that participate in a syntactic relationship ( Church , 1988 ; Ramshaw and Marcus , 1995 ; Argamon et al. , 1998 ; Cardie and Pierce , 1998 ; Munoz et al. , 1999 ; Punyakanok and Roth , 2001 ; Buchholz et al. , 1999 ; Tjong Kim Sang and Buchholz , 2000 ) .", "label": "Background"}
16
- {"text": "We experiment with four learners commonly employed in language learning : Decision List ( DL ) : We use the DL learner as described in Collins and Singer ( 1999 ) , motivated by its success in the related tasks of word sense disambiguation ( Yarowsky , 1995 ) and NE classification ( Collins and Singer , 1999 ) .", "label": "Motivation"}
17
- {"text": "A central technique is to define a joint relation as a noisy-channel model , by composing a joint relation with a cascade of one or more conditional relations as in Fig. 1 ( Pereira and Riley , 1997 ; Knight and Graehl , 1998 ) .", "label": "Background"}
18
- {"text": "We use the same set of binary features as in previous work on this dataset ( Pang et al. , 2002 ; Pang and Lee , 2004 ; Zaidan et al. , 2007 ) .", "label": "Uses"}
19
- {"text": "Our classification framework , directly inspired by Blum and Chawla ( 2001 ) , integrates both perspectives , optimizing its labeling of speech segments based on both individual speech-segment classification scores and preferences for groups of speech segments to receive the same label .", "label": "Uses"}
20
- {"text": "As for work on Arabic ( MSA ) , results have been reported on the PATB ( Kulick , Gabbard , and Marcus 2006 ; Diab 2007 ; Green and Manning 2010 ) , the Prague Dependency Treebank ( PADT ) ( Buchholz and Marsi 2006 ; Nivre 2008 ) and the CATiB ( Habash and Roth 2009 ) .", "label": "Background"}
21
- {"text": "For instance , Palmer and Hearst ( 1997 ) report that the SATZ system ( decision tree variant ) was trained on a set of about 800 labeled periods , which corresponds to a corpus of about 16,000 words .", "label": "CompareOrContrast"}
22
- {"text": "One possible direction is to consider linguistically motivated approaches , such as the extraction of syntactic phrase tables as proposed by ( Yamada and Knight , 2001 ) .", "label": "Future"}
23
- {"text": "Later works , such as Atallah et al. ( 2001a ) , Bolshakov ( 2004 ) , Taskiran et al. ( 2006 ) and Topkara et al. ( 2006b ) , further made use of part-ofspeech taggers and electronic dictionaries , such as WordNet and VerbNet , to increase the robustness of the method .", "label": "Background"}
24
- {"text": "A number of speech understanding systems have been developed during the past fifteen years ( Barnett et al. 1980 , Dixon and Martin 1979 , Erman et al. 1980 , Haton and Pierrel 1976 , Lea 1980 , Lowerre and Reddy 1980 , Medress 1980 , Reddy 1976 , Walker 1978 , and Wolf and Woods 1980 ) .", "label": "CompareOrContrast"}
25
- {"text": "The bottom panel of table 1 lists the results for the chosen lexicalized model ( SSN-Freq > 200 ) and five recent statistical parsers ( Ratnaparkhi , 1999 ; Collins , 1999 ; Charniak , 2000 ; Collins , 2000 ; Bod , 2001 ) .", "label": "CompareOrContrast"}
26
- {"text": "The basic Python reflection has already been implemented and used for large scale experiments with POS tagging , using pyMPI ( a message passing interface library for Python ) to coordinate experiments across a cluster of over 100 machines ( Curran and Clark , 2003 ; Clark et al. , 2003 ) .", "label": "Background"}
27
- {"text": "This imbalance foils thresholding strategies , clever as they might be ( Gale & Church , 1991 ; Wu & Xia , 1994 ; Chen , 1996 ) .", "label": "Background"}
28
- {"text": "Training was done on the Penn Treebank ( Marcus et al. , 1993 ) Wall Street Journal data , sections 02-21 .", "label": "Uses"}
29
- {"text": "We performed Latent Semantic Analysis ( LSA ) over Wikipedia using the jLSI tool ( Giuliano , 2007 ) to measure the relatedness between words in the dataset .", "label": "Uses"}
30
- {"text": "For example , our previous work ( Nakov and Ng , 2009 ; Nakov and Ng , 2012 ) experimented with various techniques for combining a small bi-text for a resource-poor language ( Indonesian or Spanish , pretending that Spanish is resource-poor ) with a much larger bi-text for a related resource-rich language ( Malay or Portuguese ) ; the target language of all bi-texts was English .", "label": "CompareOrContrast"}
31
- {"text": "Various approaches for computing semantic relatedness of words or concepts have been proposed , e.g. dictionary-based ( Lesk , 1986 ) , ontology-based ( Wu and Palmer , 1994 ; Leacock and Chodorow , 1998 ) , information-based ( Resnik , 1995 ; Jiang and Conrath , 1997 ) or distributional ( Weeds and Weir , 2005 ) .", "label": "Background"}
32
- {"text": "Another line of research approaches grounded language knowledge by augmenting distributional approaches of word meaning with perceptual information ( Andrews et al. , 2009 ; Steyvers , 2010 ; Feng and Lapata , 2010b ; Bruni et al. , 2011 ; Silberer and Lapata , 2012 ; Johns and Jones , 2012 ; Bruni et al. , 2012a ; Bruni et al. , 2012b ; Silberer et al. , 2013 ) .", "label": "Background"}
33
- {"text": "Gurevych ( 2005 ) replicated the experiment of Rubenstein and Goodenough with the original 65 word pairs translated into German .", "label": "Background"}
34
- {"text": "One approach to this more general problem , taken by the ` Nitrogen ' generator ( Langkilde and Knight , 1998a ; Langkilde and Knight , 1998b ) , takes advantage of standard statistical techniques by generating a lattice of all possible strings given a semantic representation as input and selecting the most likely output using a bigram language model .", "label": "Uses"}
35
- {"text": "where mk is one mention in entity e , and the basic model building block PL ( L = 1 | e , mk , m ) is an exponential or maximum entropy model ( Berger et al. , 1996 ) .", "label": "Uses"}
36
- {"text": "13 We also employed sequence-based measures using the ROUGE tool set ( Lin and Hovy 2003 ) , with similar results to those obtained with the word-by-word measures .", "label": "Uses"}
37
- {"text": "Second , using continuous distributions allows us to leverage a variety of tools ( e.g. , LDA ) that have been shown to be successful in other fields , such as speech recognition ( Evermann et al. , 2004 ) .", "label": "Background"}
38
- {"text": "In this section , we validate the contribution of key tag sets and morphological features -- and combinations thereof -- using a different parser : the Easy-First Parser ( Goldberg and Elhadad 2010 ) .", "label": "Uses"}
39
- {"text": "The typical solution to the redundancy problem is to group verbs according to their argument realization patterns ( Levin , 1993 ) , possibly arranged in an inheritance hierarchy .", "label": "CompareOrContrast"}
40
- {"text": "Later , Hobbs ( 1979 , 1982 ) proposed a knowledge base in which information about language and the world would be encoded , and he emphasized the need for using `` salience '' in choosing facts from this knowledge base .", "label": "Background"}
41
- {"text": "Another technique is automatic discovery of translations from parallel or non-parallel corpora ( Fung and Mckeown , 1997 ) .", "label": "Background"}
42
- {"text": "ASARES is presented in detail in ( Claveau et al. , 2003 ) .", "label": "Uses"}
43
- {"text": "Opposition ( called `` adversative '' or `` contrary-to-expectation '' by Halliday and Hasan 1976 ; cfXXX also Quirk et al. 1972 , p. 672 ) .", "label": "Background"}
44
- {"text": "A number of applications have relied on distributional analysis ( Harris , 1971 ) in order to build classes of semantically related terms .", "label": "Background"}
45
- {"text": "Previous work with MaltParser in Russian , Turkish , and Hindi showed gains with CASE but not with agreement features ( Eryigit , Nivre , and Oflazer 2008 ; Nivre , Boguslavsky , and Iomdin 2008 ; Nivre 2009 ) .", "label": "CompareOrContrast"}
46
- {"text": "Consider , for example , the lexical rule in Figure 2 , which encodes a passive lexical rule like the one presented by Pollard and Sag ( 1987 , 215 ) in terms of the setup of Pollard and Sag ( 1994 , ch .", "label": "CompareOrContrast"}
47
- {"text": "Two applications that , like help-desk , deal with question -- answer pairs are : summarization of e-mail threads ( Dalli , Xia , and Wilks 2004 ; Shrestha and McKeown 2004 ) , and answer extraction in FAQs ( Frequently Asked Questions ) ( Berger and Mittal 2000 ;", "label": "CompareOrContrast"}
48
- {"text": "The language grounding problem has received significant attention in recent years , owed in part to the wide availability of data sets ( e.g. Flickr , Von Ahn ( 2006 ) ) , computing power , improved computer vision models ( Oliva and Torralba , 2001 ; Lowe , 2004 ; Farhadi et al. , 2009 ; Parikh and Grauman , 2011 ) and neurological evidence of ties between the language , perceptual and motor systems in the brain ( Pulverm \u00c2\u00a8 uller et al. , 2005 ; Tettamanti et al. , 2005 ; Aziz-Zadeh et al. , 2006 ) .", "label": "Background"}
49
- {"text": "In addition , we find that the Bayesian SCFG grammar can not even significantly outperform the heuristic SCFG grammar ( Blunsom et al. 2009 ) 5 .", "label": "CompareOrContrast"}
50
- {"text": "There are several grammars developed in the FB-LTAG formalism , including the XTAG English grammar , a large-scale grammar for English ( The XTAG Research Group , 2001 ) .", "label": "Background"}
51
- {"text": "Although the approach may have potential , the shifting of complex accounting into the unification algorithm is at variance with the findings of Kiefer et al. ( 1999 ) , who report large speed-ups from the elimination of disjunction processing during unification .", "label": "CompareOrContrast"}
52
- {"text": "For the task of unsupervised dependency parsing , Smith and Eisner ( 2006 ) add a constraint of the form `` the average length of dependencies should be X '' to capture the locality of syntax ( at least half of the dependencies are between adjacent words ) , using a scheme they call structural annealing .", "label": "Background"}
53
- {"text": "The speech and language processing architecture is based on that of the SRI CommandTalk system ( Moore et al. , 1997 ; Stent et a. , 1999 ) .", "label": "Uses"}
54
- {"text": "Second , in line with the findings of ( Mehdad et al. , 2010 ) , the results obtained over the MT-derived corpus are equal to those we achieve over the original RTE3 dataset ( i.e. 63.50 % ) .", "label": "CompareOrContrast"}
55
- {"text": "Therefore , inter-subject correlation is lower than the results obtained by Gurevych ( 2006 ) .", "label": "CompareOrContrast"}
56
- {"text": "There is a general consensus among theoretical linguists that the proper representation of verbal argument structure is event structure -- representations grounded in a theory of events that decompose semantic roles in terms of primitive predicates representing concepts such as causality and inchoativity ( Dowty , 1979 ; Jackendoff , 1983 ; Pustejovsky , 1991b ; Rappaport Hovav and Levin , 1998 ) .", "label": "Background"}
57
- {"text": "For example , some similar measures have been used in stylistic experiments in information retrieval on the basis of a robust parser built for information retrieval purposes ( Strzalkowski 1994 ) .", "label": "Background"}
58
- {"text": "The resulting training procedure is analogous to the one presented in ( Brown et al. , 1993 ) and ( Tillmann and Ney , 1997 ) .", "label": "CompareOrContrast"}
59
- {"text": "successfully parses , or until a quitting criterion is reached , such as an upper bound on N. Whereas in the loosely coupled system the parser acts as a filter only on completed candidate solutions ( Zue et al. 1991 ) , the tightly coupled system allows the parser to discard partial theories that have no way of continuing .", "label": "Uses"}
60
- {"text": "Zollmann and Venugopal ( 2006 ) substituted the non-terminal X in hierarchical phrase-based model by extended syntactic categories .", "label": "CompareOrContrast"}
61
- {"text": "Much of the earlier work in anaphora resolution heavily exploited domain and linguistic knowledge ( Sidner 1979 ; Carter 1987 ; Rich and LuperFoy 1988 ; Carbonell and Brown 1988 ) , which was difficult both to represent and to process , and which required considerable human input .", "label": "Background"}
62
- {"text": "The paradigm is `` write many , read many '' ( Cunningham and Leuf , 2001 ) .", "label": "Background"}
63
- {"text": "The Praat tool was used ( Boersma and Weenink , 2009 ) .", "label": "Uses"}
64
- {"text": "2 The reader is asked to focus on any reasonable size measurement , for example , the maximal horizontal or vertical distance , or some combination of dimensions ( Kamp 1975 ; also Section 8.1 of the present article ) .", "label": "Background"}
65
- {"text": "The implementation has been inspired by experience in extracting information from very large corpora ( Curran and Moens , 2002 ) and performing experiments on maximum entropy sequence tagging ( Curran and Clark , 2003 ; Clark et al. , 2003 ) .", "label": "Motivation"}
66
- {"text": "Default parameters were used , although experimentation with different parameter settings is an important direction for future work ( Daelemans and Hoste , 2002 ; Munson et al. , 2005 ) .", "label": "Future"}
67
- {"text": "Our work is inspired by the latent left-linking model in Chang et al. ( 2013 ) and the ILP formulation from Chang et al. ( 2011 ) .", "label": "Uses"}
68
- {"text": "Furthermore , the availability of rich ontological resources , in the form of the Unified Medical Language System ( UMLS ) ( Lindberg et al. , 1993 ) , and the availability of software that leverages this knowledge -- MetaMap ( Aronson , 2001 ) for concept identification and SemRep ( Rindflesch and Fiszman , 2003 ) for relation extraction -- provide a foundation for studying the role of semantics in various tasks .", "label": "Background"}
69
- {"text": "The names given to the components vary ; they have been called `` strategic '' and `` tactical '' components ( e.g. , McKeown 1985 ; Thompson 1977 ; Danlos 1987 ) 1 , `` planning '' and `` realization '' ( e.g. , McDonald 1983 ; Hovy 1988a ) , or simply `` what to say '' versus `` how to say it '' ( e.g. , Danlos 1987 ; Reithinger 1990 ) .", "label": "Background"}
70
- {"text": "Over the last decade there has been a lot of interest in developing tutorial dialogue systems that understand student explanations ( Jordan et al. , 2006 ; Graesser et al. , 1999 ; Aleven et al. , 2001 ; Buckley and Wolska , 2007 ; Nielsen et al. , 2008 ; VanLehn et al. , 2007 ) , because high percentages of selfexplanation and student contentful talk are known to be correlated with better learning in humanhuman tutoring ( Chi et al. , 1994 ; Litman et al. , 2009 ; Purandare and Litman , 2008 ; Steinhauser et al. , 2007 ) .", "label": "Background"}
71
- {"text": "We use the TRIPS dialogue parser ( Allen et al. , 2007 ) to parse the utterances .", "label": "Uses"}
72
- {"text": "In order to address these limitations in a practical way , we conducted a small user study where we asked four judges ( graduate students from the Faculty of Information Technology at Monash University ) to assess the responses generated by our system ( Marom and Zukerman 2007a ) .", "label": "Uses"}
73
- {"text": "The understanding module utilizes ISSS ( Incremental Significant-utterance Sequence Search ) ( Nakano et al. , 1999b ) , which is an integrated parsing and discourse processing method .", "label": "Uses"}
74
- {"text": "We applied our system to the XTAG English grammar ( The XTAG Research Group , 2001 ) 3 , which is a large-scale FB-LTAG grammar for English .", "label": "Uses"}
75
- {"text": "After the extraction , pruning techniques ( Snover et al. , 2009 ) can be applied to increase the precision of the extracted paraphrases .", "label": "Background"}
76
- {"text": "In this paper , we extend two classes of model adaptation methods ( i.e. , model interpolation and error-driven learning ) , which have been well studied in statistical language modeling for speech and natural language applications ( e.g. , Bacchiani et al. , 2004 ; Bellegarda , 2004 ; Gao et al. , 2006 ) , to ranking models for Web search applications .", "label": "Background"}
77
- {"text": "GATE goes beyond earlier systems by using a component-based infrastructure ( Cunningham , 2000 ) which the GUI is built on top of .", "label": "Background"}
78
- {"text": "Since sentences can refer to events described by other sentences , we may need also a quotation operator ; Perlis ( 1985 ) describes how first order logic can be augmented with such an operator .", "label": "Background"}
79
- {"text": "The system uses a knowledge base implemented in the KM representation language ( Clark and Porter , 1999 ; Dzikovska et al. , 2006 ) to represent the state of the world .", "label": "Uses"}
80
- {"text": "A possible future direction would be to compare the query string to retrieved results using a method similar to that of Tsuruoka and Tsujii ( 2003 ) .", "label": "Future"}
81
- {"text": "description-level lexical rules ( DLRs ; Meurers 1995 ) .5 2.2.1 Meta-Level Lexical Rules .", "label": "Background"}
82
- {"text": "All EBMT systems , from the initial proposal by Nagao ( 1984 ) to the recent collection of Carl and Way ( 2003 ) , are premised on the availability of subsentential alignments derived from the input bitext .", "label": "Background"}
83
- {"text": "The necessity of this kind of merging of arguments has been recognized before : Charniak and McDermott ( 1985 ) call it abductive unification/matching , Hobbs ( 1978 , 1979 ) refers to such operations using the terms knitting or petty conversational implicature .", "label": "Background"}
84
- {"text": "In a number of proposals , lexical generalizations are captured using lexical underspecification ( Kathol 1994 ; Krieger and Nerbonne 1992 ;", "label": "CompareOrContrast"}
85
- {"text": "These keywords are potentially useful features because some of them are subclasses of the ACE SCs shown in the left column of Table 1 , while others appear to be correlated with these ACE SCs .2 ( 6 ) INDUCED CLASS : Since the first-sense heuristic used in the previous feature may not be accurate in capturing the SC of an NP , we employ a corpusbased method for inducing SCs that is motivated by research in lexical semantics ( e.g. , Hearst ( 1992 ) ) .", "label": "Motivation"}
86
- {"text": "Other psycholing-uistic studies that confirm the validity of paragraph units can be found in Black and Bower ( 1979 ) and Haberlandt et al. ( 1980 ) .", "label": "Background"}
87
- {"text": "The bottom panel of table 1 lists the results for the chosen lexicalized model ( SSN-Freq > 200 ) and five recent statistical parsers ( Ratnaparkhi , 1999 ; Collins , 1999 ; Charniak , 2000 ; Collins , 2000 ; Bod , 2001 ) .", "label": "CompareOrContrast"}
88
- {"text": "Nevertheless , the full document text is present in most systems , sometimes as the only feature ( Sugiyama and Okumura , 2007 ) and sometimes in combination with others see for instance ( Chen and Martin , 2007 ; Popescu and Magnini , 2007 ) - .", "label": "Background"}
89
- {"text": "In a similar vain to Skut and Brants ( 1998 ) and Buchholz et al. ( 1999 ) , the method extends an existing flat shallow-parsing method to handle composite structures .", "label": "Future"}
90
- {"text": "As a result , researchers have re-adopted the once-popular knowledge-rich approach , investigating a variety of semantic knowledge sources for common noun resolution , such as the semantic relations between two NPs ( e.g. , Ji et al. ( 2005 ) ) , their semantic similarity as computed using WordNet ( e.g. , Poesio et al. ( 2004 ) ) or Wikipedia ( Ponzetto and Strube , 2006 ) , and the contextual role played by an NP ( see Bean and Riloff ( 2004 ) ) .", "label": "Background"}
91
- {"text": "We built a two-stage baseline system , using the perceptron segmentation model from our previous work ( Zhang and Clark , 2007 ) and the perceptron POS tagging model from Collins ( 2002 ) .", "label": "Extends"}
92
- {"text": "Note that although our current system uses MeSH headings assigned by human indexers , manually assigned terms can be replaced with automatic processing if needed ( Aronson et al. 2004 ) .", "label": "Future"}
93
- {"text": "Furthermore , medical terminology is characterized by a typical mix of Latin and Greek roots with the corresponding host language ( e.g. , German ) , often referred to as neo-classical compounding ( McCray et al. , 1988 ) .", "label": "Background"}
94
- {"text": "Previously ( Gerber and Chai 2010 ) , we assessed the importance of various implicit argument feature groups by conducting feature ablation tests .", "label": "Extends"}
95
- {"text": "To model d ( FWi \u00e2\u0088\u0092 1 , S \u00e2\u0086\u0092 T ) , d ( FWi +1 , S \u00e2\u0086\u0092 T ) , i.e. whether Li , S \u00e2\u0086\u0092 T and Ri , S \u00e2\u0086\u0092 T extend beyond the neighboring function word phrase pairs , we utilize the pairwise dominance model of Setiawan et al. ( 2009 ) .", "label": "Uses"}
96
- {"text": "For instance , Sells ( 1985 , p. 8 ) says that the sentence `` Reagan thinks bananas , '' which is otherwise strange , is in fact acceptable if it occurs as an answer to the question `` What is Kissinger 's favorite fruit ? ''", "label": "Motivation"}
97
- {"text": "Semantic Role labeling ( SRL ) was first defined in Gildea and Jurafsky ( 2002 ) .", "label": "Background"}
98
- {"text": "AJAX function lets the communication works asyncronously between a client and a server through a set of messages based on HTTP protocol and XML ( Garrett , 2005 ) .", "label": "Background"}
99
- {"text": "The inclusion of the coreference task in the Sixth and Seventh Message Understanding Conferences ( MUC-6 and MUC-7 ) gave a considerable impetus to the development of coreference resolution algorithms and systems , such as those described in Baldwin et al. ( 1995 ) , Gaizauskas and Humphreys ( 1996 ) , and Kameyama ( 1997 ) .", "label": "Background"}
100
- {"text": "The most detailed evaluation of link tokens to date was performed by ( Macklovitch & Hannan , 1996 ) , who trained Brown et al. 's Model 2 on 74 million words of the Canadian Hansards .", "label": "CompareOrContrast"}
101
- {"text": "Log-linear models have proved successful in a wide variety of applications , and are the inspiration behind one of the best current statistical parsers ( Charniak , 2000 ) .", "label": "CompareOrContrast"}
102
- {"text": "While we have observed reasonable results with both G2 and Fisher 's exact test , we have not yet discussed how these results compare to the results that can be obtained with a technique commonly used in corpus linguistics based on the mutual information ( MI ) measure ( Church and Hanks 1990 ) :", "label": "Background"}
103
- {"text": "Morphological alterations of a search term have a negative impact on the recall performance of an information retrieval ( IR ) system ( Choueka , 1990 ; J \u00c2\u00a8 appinen and Niemist \u00c2\u00a8 o , 1988 ; Kraaij and Pohlmann , 1996 ) , since they preclude a direct match between the search term proper and its morphological variants in the documents to be retrieved .", "label": "Background"}
104
- {"text": "For shuffling paraphrases , french alternations are partially described in ( Saint-Dizier , 1999 ) and a resource is available which describes alternation and the mapping verbs/alternations for roughly 1 700 verbs .", "label": "Background"}
105
- {"text": "A more recent approach , advocated by Rappaport Hovav and Levin ( 1998 ) , describes a basic set of event templates corresponding to Vendler 's event classes ( Vendler , 1957 ) : ( 3 ) a. [ x ACT <MANNER> ] ( activity ) b. [ x <STATE> ] ( state ) c. [ BECOME [ x <STATE> ] ] ( achievement ) d. [ x CAUSE [ BECOME [ x <STATE> ] ] ] ( accomplishment )", "label": "Background"}
106
- {"text": "Watanabe ( 1993 ) combines lexical and dependency mappings to form his generalizations .", "label": "Background"}
107
- {"text": "Thus for instance , ( Copestake and Flickinger , 2000 ; Copestake et al. , 2001 ) describes a Head Driven Phrase Structure Grammar ( HPSG ) which supports the parallel construction of a phrase structure ( or derived ) tree and of a semantic representation and ( Dalrymple , 1999 ) show how to equip Lexical Functional grammar ( LFG ) with a glue semantics .", "label": "Background"}
108
- {"text": "The reordering models we describe follow our previous work using function word models for translation ( Setiawan et al. , 2007 ; Setiawan et al. , 2009 ) .", "label": "Extends"}
109
- {"text": "And Collins ( 2000 ) argues for `` keeping track of counts of arbitrary fragments within parse trees '' , which has indeed been carried out in Collins and Duffy ( 2002 ) who use exactly the same set of ( all ) tree fragments as proposed in Bod ( 1992 ) .", "label": "Motivation"}
110
- {"text": "In our work , we gather sets of sentences , and assume ( but do not employ ) existing approaches for their organization ( Goldstein et al. 2000 ; Barzilay , Elhadad , and McKeown 2001 ; Barzilay and McKeown 2005 ) .", "label": "Background"}
111
- {"text": "criteria and data used in our experiments are based on the work of Talbot et al. ( 2011 ) .", "label": "Uses"}
112
- {"text": "We present experiments on the two standard coreference resolution datasets , ACE-2004 ( NIST , 2004 ) and OntoNotes-5 .0 ( Hovy et al. , 2006 ) .", "label": "Uses"}
113
- {"text": "\u00e2\u0080\u00a2 Only qualitative observations of the responses were reported ( no formal evaluation was performed ) ( Lapalme and Kosseim 2003 ; Roy and Subramaniam 2006 ) .", "label": "CompareOrContrast"}
114
- {"text": "And subderivations headed by A1 with external nonterminals only at the leaves , internal nonterminals elsewhere , have probability 1/a1 ( Goodman 1996 ) .", "label": "Background"}
115
- {"text": "\u00e2\u0080\u00a2 Support vector machines for mapping histories to parser actions ( Kudo and Matsumoto , 2002 ) .", "label": "Uses"}
116
- {"text": "Goodman ( 1996 , 1998 ) developed a polynomial time PCFG-reduction of DOP1 whose size is linear in the size of the training set , thus converting the exponential number of subtrees to a compact grammar .", "label": "Background"}
117
- {"text": "Pustejovsky ( 1995 ) avoids enumerating the various senses for adjectives like fast by exploiting the semantics of the nouns they modify .", "label": "Background"}
118
- {"text": "Hohensee and Bender ( 2012 ) have conducted a study on dependency parsing for 21 languages using features that encode whether the values for certain attributes are equal or not for a node and its governor .", "label": "Background"}
119
- {"text": "Such approaches have been tried recently in restricted cases ( McCallum et al. , 2000 ; Eisner , 2001b ; Lafferty et al. , 2001 ) .", "label": "Background"}
120
- {"text": "The relation between discourse and prosodic phrasing has been examined in some detail by Bing ( 1985 ) , who argues that each noun phrase in an utterance constitutes a separate prosodic phrase unless it is destressed because of reference to previous discourse .", "label": "Background"}
121
- {"text": "By contrast , Turkish ( Oflazer et al. , 2003 ; Atalay et al. , 2003 ) exhibits high root accuracy but consistently low attachment scores ( about 88 % for length 1 and 68 % for length 2 ) .", "label": "CompareOrContrast"}
122
- {"text": "The candidate examples that lead to the most disagreements among the different learners are considered to have the highest TUV ( Cohn , Atlas , and Ladner 1994 ; Freund et al. 1997 ) .", "label": "Background"}
123
- {"text": "Subsequently , we extracted the bilingual phrase table from the aligned corpora using the Moses toolkit ( Koehn et al. , 2007 ) .", "label": "Uses"}
124
- {"text": "Representative systems are described in Boisen et al. ( 1989 ) , De Mattia and Giachin ( 1989 ) , Niedermair ( 1989 ) , Niemann ( 1990 ) , and Young ( 1989 ) .", "label": "Background"}
125
- {"text": "Our rules for phonological word formation are adopted , for the most part , from G & G , Grosjean and Gee ( 1987 ) , and the account of monosyllabic destressing in Selkirk ( 1984 ) .", "label": "Uses"}
126
- {"text": "As a generalization , Briscoe ( 2001 ) notes that lexicons such as COMLEX tend to demonstrate high precision but low recall .", "label": "Background"}
127
- {"text": "Such systems extract information from some types of syntactic units ( clauses in ( Fillmore and Atkins , 1998 ; Gildea and Jurafsky , 2002 ; Hull and Gomez , 1996 ) ; noun phrases in ( Hull and Gomez , 1996 ; Rosario et al. , 2002 ) ) .", "label": "Background"}
128
- {"text": "Various approaches for computing semantic relatedness of words or concepts have been proposed , e.g. dictionary-based ( Lesk , 1986 ) , ontology-based ( Wu and Palmer , 1994 ; Leacock and Chodorow , 1998 ) , information-based ( Resnik , 1995 ; Jiang and Conrath , 1997 ) or distributional ( Weeds and Weir , 2005 ) .", "label": "Background"}
129
- {"text": "Besides WordNet , the RTE literature documents the use of a variety of lexical information sources ( Bentivogli et al. , 2010 ; Dagan et al. , 2009 ) .", "label": "Background"}
130
- {"text": "The question answering system developed by Chu-Carroll et al. ( 2003 ) belongs to the merging category of approaches , where the output of an individual method can be used as input to a different method ( this corresponds to Burke 's cascade sub-category ) .", "label": "CompareOrContrast"}
131
- {"text": "More recently , ( Sebastiani , 2002 ) has performed a good survey of document categorization ; recent works can also be found in ( Joachims , 2002 ) , ( Crammer and Singer , 2003 ) , and ( Lewis et al. , 2004 ) .", "label": "Background"}
132
- {"text": "Discriminant analysis has been employed by researchers in automatic text genre detection ( Biber 1993b ; Karlgren and Cutting 1994 ) since it offers a simple and robust solution despite the fact that it presupposes normal distributions of the discriminating variables .", "label": "Background"}
133
- {"text": "This model has previously been shown to provide excellent performance on multiple tasks , including prediction of association norms , word substitution errors , semantic inferences , and word similarity ( Andrews et al. , 2009 ; Silberer and Lapata , 2012 ) .", "label": "Extends"}
134
- {"text": "In other words , existing treatments of gradables in GRE fail to take the `` efficiency of language '' into account ( Barwise and Perry 1983 ; see our Section 2 ) .", "label": "Background"}
135
- {"text": "Word alignments are used primarily for extracting minimal translation units for machine translation ( MT ) ( e.g. , phrases [ Koehn , Och , and Marcu 2003 ] and rules [ Galley et al. 2004 ; Chiang et al. 2005 ] ) as well as for", "label": "Background"}
136
- {"text": "Following Miller et al. , 1999 , the IR system ranks documents according to the probability that a document D is relevant given the query Q , P ( D is R IQ ) .", "label": "Uses"}
137
- {"text": "In modern syntactic theories ( e.g. , lexical-functional grammar [ LFG ] [ Kaplan and Bresnan 1982 ; Bresnan 2001 ; Dalrymple 2001 ] , head-driven phrase structure grammar [ HPSG ] [ Pollard and Sag 1994 ] , tree-adjoining grammar [ TAG ] [ Joshi 1988 ] , and combinatory categorial grammar [ CCG ] [ Ades and Steedman 1982 ] ) , the lexicon is the central repository for much morphological , syntactic , and semantic information .", "label": "Background"}
138
- {"text": "We have shown elsewhere ( Jensen and Binot 1988 ; Zadrozny 1987a , 1987b ) that natural language programs , such as on-line grammars and dictionaries , can be used as referential levels for commonsense reasoning -- for example , to disambiguate PP attachment .", "label": "Extends"}
139
- {"text": "Thus rather than a single training procedure , we can actually partition the examples by predicate , and train a 1For a fixed verb , MI is proportional to Keller and Lapata ( 2003 ) 's conditional probability scores for pseudodisambiguation of ( v , n , n \u00e2\u0080\u00b2 ) triples : Pr ( v | n ) = Pr ( v , n ) / Pr ( n ) , which was shown to be a better measure of association than co-occurrence frequency f ( v , n ) .", "label": "Motivation"}
 
1
+ {"text": "Resnik ( 1995 ) reported a correlation of r = .9026.10 The results are not directly comparable , because he only used noun-noun pairs , words instead of concepts , a much smaller dataset , and measured semantic similarity instead of semantic relatedness .", "label": 1}
2
+ {"text": "Similar observation for surface word frequency was also observed by ( Bertram et al. , 2000 ; Bradley , 1980 ; Burani et al. , 1987 ; Burani et al. , 1984 ; Schreuder et al. , 1997 ; Taft 1975 ; Taft , 2004 ) where it has been claimed that words having low surface frequency tends to decompose .", "label": 0}
3
+ {"text": "But their importance has grown far beyond machine translation : for instance , transferring annotations between languages ( Yarowsky and Ngai 2001 ; Hwa et al. 2005 ; Ganchev , Gillenwater , and Taskar 2009 ) ; discovery of paraphrases ( Bannard and Callison-Burch 2005 ) ; and joint unsupervised POS and parser induction across languages ( Snyder and Barzilay 2008 ) .", "label": 4}
4
+ {"text": "Previous sentiment-analysis work in different domains has considered inter-document similarity ( Agarwal and Bhattacharyya , 2005 ; Pang and Lee , 2005 ; Goldberg and Zhu , 2006 ) or explicit", "label": 0}
5
+ {"text": "However , the method we are currently using in the ATIS domain ( Seneff et al. 1991 ) represents our most promising approach to this problem .", "label": 5}
6
+ {"text": "Henceforth the collaborative traits of blogs and wikis ( McNeill , 2005 ) emphasize annotation , comment , and strong editing .", "label": 0}
7
+ {"text": "The ICA system ( Hepple , 2000 ) aims to reduce the training time by introducing independence assumptions on the training samples that dramatically reduce the training time with the possible downside of sacrificing performance .", "label": 0}
8
+ {"text": "To this end , several toolkits for building spoken dialogue systems have been developed ( Barnett and Singh , 1997 ; Sasajima et al. , 1999 ) .", "label": 0}
9
+ {"text": "Thus , over the past few years , along with advances in the use of learning and statistical methods for acquisition of full parsers ( Collins , 1997 ; Charniak , 1997a ; Charniak , 1997b ; Ratnaparkhi , 1997 ) , significant progress has been made on the use of statistical learning methods to recognize shallow parsing patterns syntactic phrases or words that participate in a syntactic relationship ( Church , 1988 ; Ramshaw and Marcus , 1995 ; Argamon et al. , 1998 ; Cardie and Pierce , 1998 ; Munoz et al. , 1999 ; Punyakanok and Roth , 2001 ; Buchholz et al. , 1999 ; Tjong Kim Sang and Buchholz , 2000 ) .", "label": 0}
10
+ {"text": "Task properties Determining whether or not a speaker supports a proposal falls within the realm of sentiment analysis , an extremely active research area devoted to the computational treatment of subjective or opinion-oriented language ( early work includes Wiebe and Rapaport ( 1988 ) , Hearst ( 1992 ) , Sack ( 1994 ) , and Wiebe ( 1994 ) ; see Esuli ( 2006 ) for an active bibliography ) .", "label": 0}
11
+ {"text": "Various approaches for computing semantic relatedness of words or concepts have been proposed , e.g. dictionary-based ( Lesk , 1986 ) , ontology-based ( Wu and Palmer , 1994 ; Leacock and Chodorow , 1998 ) , information-based ( Resnik , 1995 ; Jiang and Conrath , 1997 ) or distributional ( Weeds and Weir , 2005 ) .", "label": 0}
12
+ {"text": "Both tasks are performed with a statistical framework : the mention detection system is similar to the one presented in ( Florian et al. , 2004 ) and the coreference resolution system is similar to the one described in ( Luo et al. , 2004 ) .", "label": 1}
13
+ {"text": "The advantage of tuning similarity to the application of interest has been shown previously by Weeds and Weir ( 2005 ) .", "label": 1}
14
+ {"text": "Although there are other discussions of the paragraph as a central element of discourse ( e.g. Chafe 1979 , Halliday and Hasan 1976 , Longacre 1979 , Haberlandt et al. 1980 ) , all of them share a certain limitation in their formal techniques for analyzing paragraph structure .", "label": 1}
15
+ {"text": "Thus , over the past few years , along with advances in the use of learning and statistical methods for acquisition of full parsers ( Collins , 1997 ; Charniak , 1997a ; Charniak , 1997b ; Ratnaparkhi , 1997 ) , significant progress has been made on the use of statistical learning methods to recognize shallow parsing patterns syntactic phrases or words that participate in a syntactic relationship ( Church , 1988 ; Ramshaw and Marcus , 1995 ; Argamon et al. , 1998 ; Cardie and Pierce , 1998 ; Munoz et al. , 1999 ; Punyakanok and Roth , 2001 ; Buchholz et al. , 1999 ; Tjong Kim Sang and Buchholz , 2000 ) .", "label": 0}
16
+ {"text": "We experiment with four learners commonly employed in language learning : Decision List ( DL ) : We use the DL learner as described in Collins and Singer ( 1999 ) , motivated by its success in the related tasks of word sense disambiguation ( Yarowsky , 1995 ) and NE classification ( Collins and Singer , 1999 ) .", "label": 4}
17
+ {"text": "A central technique is to define a joint relation as a noisy-channel model , by composing a joint relation with a cascade of one or more conditional relations as in Fig. 1 ( Pereira and Riley , 1997 ; Knight and Graehl , 1998 ) .", "label": 0}
18
+ {"text": "We use the same set of binary features as in previous work on this dataset ( Pang et al. , 2002 ; Pang and Lee , 2004 ; Zaidan et al. , 2007 ) .", "label": 5}
19
+ {"text": "Our classification framework , directly inspired by Blum and Chawla ( 2001 ) , integrates both perspectives , optimizing its labeling of speech segments based on both individual speech-segment classification scores and preferences for groups of speech segments to receive the same label .", "label": 5}
20
+ {"text": "As for work on Arabic ( MSA ) , results have been reported on the PATB ( Kulick , Gabbard , and Marcus 2006 ; Diab 2007 ; Green and Manning 2010 ) , the Prague Dependency Treebank ( PADT ) ( Buchholz and Marsi 2006 ; Nivre 2008 ) and the CATiB ( Habash and Roth 2009 ) .", "label": 0}
21
+ {"text": "For instance , Palmer and Hearst ( 1997 ) report that the SATZ system ( decision tree variant ) was trained on a set of about 800 labeled periods , which corresponds to a corpus of about 16,000 words .", "label": 1}
22
+ {"text": "One possible direction is to consider linguistically motivated approaches , such as the extraction of syntactic phrase tables as proposed by ( Yamada and Knight , 2001 ) .", "label": 3}
23
+ {"text": "Later works , such as Atallah et al. ( 2001a ) , Bolshakov ( 2004 ) , Taskiran et al. ( 2006 ) and Topkara et al. ( 2006b ) , further made use of part-ofspeech taggers and electronic dictionaries , such as WordNet and VerbNet , to increase the robustness of the method .", "label": 0}
24
+ {"text": "A number of speech understanding systems have been developed during the past fifteen years ( Barnett et al. 1980 , Dixon and Martin 1979 , Erman et al. 1980 , Haton and Pierrel 1976 , Lea 1980 , Lowerre and Reddy 1980 , Medress 1980 , Reddy 1976 , Walker 1978 , and Wolf and Woods 1980 ) .", "label": 1}
25
+ {"text": "The bottom panel of table 1 lists the results for the chosen lexicalized model ( SSN-Freq > 200 ) and five recent statistical parsers ( Ratnaparkhi , 1999 ; Collins , 1999 ; Charniak , 2000 ; Collins , 2000 ; Bod , 2001 ) .", "label": 1}
26
+ {"text": "The basic Python reflection has already been implemented and used for large scale experiments with POS tagging , using pyMPI ( a message passing interface library for Python ) to coordinate experiments across a cluster of over 100 machines ( Curran and Clark , 2003 ; Clark et al. , 2003 ) .", "label": 0}
27
+ {"text": "This imbalance foils thresholding strategies , clever as they might be ( Gale & Church , 1991 ; Wu & Xia , 1994 ; Chen , 1996 ) .", "label": 0}
28
+ {"text": "Training was done on the Penn Treebank ( Marcus et al. , 1993 ) Wall Street Journal data , sections 02-21 .", "label": 5}
29
+ {"text": "We performed Latent Semantic Analysis ( LSA ) over Wikipedia using the jLSI tool ( Giuliano , 2007 ) to measure the relatedness between words in the dataset .", "label": 5}
30
+ {"text": "For example , our previous work ( Nakov and Ng , 2009 ; Nakov and Ng , 2012 ) experimented with various techniques for combining a small bi-text for a resource-poor language ( Indonesian or Spanish , pretending that Spanish is resource-poor ) with a much larger bi-text for a related resource-rich language ( Malay or Portuguese ) ; the target language of all bi-texts was English .", "label": 1}
31
+ {"text": "Various approaches for computing semantic relatedness of words or concepts have been proposed , e.g. dictionary-based ( Lesk , 1986 ) , ontology-based ( Wu and Palmer , 1994 ; Leacock and Chodorow , 1998 ) , information-based ( Resnik , 1995 ; Jiang and Conrath , 1997 ) or distributional ( Weeds and Weir , 2005 ) .", "label": 0}
32
+ {"text": "Another line of research approaches grounded language knowledge by augmenting distributional approaches of word meaning with perceptual information ( Andrews et al. , 2009 ; Steyvers , 2010 ; Feng and Lapata , 2010b ; Bruni et al. , 2011 ; Silberer and Lapata , 2012 ; Johns and Jones , 2012 ; Bruni et al. , 2012a ; Bruni et al. , 2012b ; Silberer et al. , 2013 ) .", "label": 0}
33
+ {"text": "Gurevych ( 2005 ) replicated the experiment of Rubenstein and Goodenough with the original 65 word pairs translated into German .", "label": 0}
34
+ {"text": "One approach to this more general problem , taken by the ` Nitrogen ' generator ( Langkilde and Knight , 1998a ; Langkilde and Knight , 1998b ) , takes advantage of standard statistical techniques by generating a lattice of all possible strings given a semantic representation as input and selecting the most likely output using a bigram language model .", "label": 5}
35
+ {"text": "where mk is one mention in entity e , and the basic model building block PL ( L = 1 | e , mk , m ) is an exponential or maximum entropy model ( Berger et al. , 1996 ) .", "label": 5}
36
+ {"text": "13 We also employed sequence-based measures using the ROUGE tool set ( Lin and Hovy 2003 ) , with similar results to those obtained with the word-by-word measures .", "label": 5}
37
+ {"text": "Second , using continuous distributions allows us to leverage a variety of tools ( e.g. , LDA ) that have been shown to be successful in other fields , such as speech recognition ( Evermann et al. , 2004 ) .", "label": 0}
38
+ {"text": "In this section , we validate the contribution of key tag sets and morphological features -- and combinations thereof -- using a different parser : the Easy-First Parser ( Goldberg and Elhadad 2010 ) .", "label": 5}
39
+ {"text": "The typical solution to the redundancy problem is to group verbs according to their argument realization patterns ( Levin , 1993 ) , possibly arranged in an inheritance hierarchy .", "label": 1}
40
+ {"text": "Later , Hobbs ( 1979 , 1982 ) proposed a knowledge base in which information about language and the world would be encoded , and he emphasized the need for using `` salience '' in choosing facts from this knowledge base .", "label": 0}
41
+ {"text": "Another technique is automatic discovery of translations from parallel or non-parallel corpora ( Fung and Mckeown , 1997 ) .", "label": 0}
42
+ {"text": "ASARES is presented in detail in ( Claveau et al. , 2003 ) .", "label": 5}
43
+ {"text": "Opposition ( called `` adversative '' or `` contrary-to-expectation '' by Halliday and Hasan 1976 ; cfXXX also Quirk et al. 1972 , p. 672 ) .", "label": 0}
44
+ {"text": "A number of applications have relied on distributional analysis ( Harris , 1971 ) in order to build classes of semantically related terms .", "label": 0}
45
+ {"text": "Previous work with MaltParser in Russian , Turkish , and Hindi showed gains with CASE but not with agreement features ( Eryigit , Nivre , and Oflazer 2008 ; Nivre , Boguslavsky , and Iomdin 2008 ; Nivre 2009 ) .", "label": 1}
46
+ {"text": "Consider , for example , the lexical rule in Figure 2 , which encodes a passive lexical rule like the one presented by Pollard and Sag ( 1987 , 215 ) in terms of the setup of Pollard and Sag ( 1994 , ch .", "label": 1}
47
+ {"text": "Two applications that , like help-desk , deal with question -- answer pairs are : summarization of e-mail threads ( Dalli , Xia , and Wilks 2004 ; Shrestha and McKeown 2004 ) , and answer extraction in FAQs ( Frequently Asked Questions ) ( Berger and Mittal 2000 ;", "label": 1}
48
+ {"text": "The language grounding problem has received significant attention in recent years , owed in part to the wide availability of data sets ( e.g. Flickr , Von Ahn ( 2006 ) ) , computing power , improved computer vision models ( Oliva and Torralba , 2001 ; Lowe , 2004 ; Farhadi et al. , 2009 ; Parikh and Grauman , 2011 ) and neurological evidence of ties between the language , perceptual and motor systems in the brain ( Pulverm \u00c2\u00a8 uller et al. , 2005 ; Tettamanti et al. , 2005 ; Aziz-Zadeh et al. , 2006 ) .", "label": 0}
49
+ {"text": "In addition , we find that the Bayesian SCFG grammar can not even significantly outperform the heuristic SCFG grammar ( Blunsom et al. 2009 ) 5 .", "label": 1}
50
+ {"text": "There are several grammars developed in the FB-LTAG formalism , including the XTAG English grammar , a large-scale grammar for English ( The XTAG Research Group , 2001 ) .", "label": 0}
51
+ {"text": "Although the approach may have potential , the shifting of complex accounting into the unification algorithm is at variance with the findings of Kiefer et al. ( 1999 ) , who report large speed-ups from the elimination of disjunction processing during unification .", "label": 1}
52
+ {"text": "For the task of unsupervised dependency parsing , Smith and Eisner ( 2006 ) add a constraint of the form `` the average length of dependencies should be X '' to capture the locality of syntax ( at least half of the dependencies are between adjacent words ) , using a scheme they call structural annealing .", "label": 0}
53
+ {"text": "The speech and language processing architecture is based on that of the SRI CommandTalk system ( Moore et al. , 1997 ; Stent et a. , 1999 ) .", "label": 5}
54
+ {"text": "Second , in line with the findings of ( Mehdad et al. , 2010 ) , the results obtained over the MT-derived corpus are equal to those we achieve over the original RTE3 dataset ( i.e. 63.50 % ) .", "label": 1}
55
+ {"text": "Therefore , inter-subject correlation is lower than the results obtained by Gurevych ( 2006 ) .", "label": 1}
56
+ {"text": "There is a general consensus among theoretical linguists that the proper representation of verbal argument structure is event structure -- representations grounded in a theory of events that decompose semantic roles in terms of primitive predicates representing concepts such as causality and inchoativity ( Dowty , 1979 ; Jackendoff , 1983 ; Pustejovsky , 1991b ; Rappaport Hovav and Levin , 1998 ) .", "label": 0}
57
+ {"text": "For example , some similar measures have been used in stylistic experiments in information retrieval on the basis of a robust parser built for information retrieval purposes ( Strzalkowski 1994 ) .", "label": 0}
58
+ {"text": "The resulting training procedure is analogous to the one presented in ( Brown et al. , 1993 ) and ( Tillmann and Ney , 1997 ) .", "label": 1}
59
+ {"text": "successfully parses , or until a quitting criterion is reached , such as an upper bound on N. Whereas in the loosely coupled system the parser acts as a filter only on completed candidate solutions ( Zue et al. 1991 ) , the tightly coupled system allows the parser to discard partial theories that have no way of continuing .", "label": 5}
60
+ {"text": "Zollmann and Venugopal ( 2006 ) substituted the non-terminal X in hierarchical phrase-based model by extended syntactic categories .", "label": 1}
61
+ {"text": "Much of the earlier work in anaphora resolution heavily exploited domain and linguistic knowledge ( Sidner 1979 ; Carter 1987 ; Rich and LuperFoy 1988 ; Carbonell and Brown 1988 ) , which was difficult both to represent and to process , and which required considerable human input .", "label": 0}
62
+ {"text": "The paradigm is `` write many , read many '' ( Cunningham and Leuf , 2001 ) .", "label": 0}
63
+ {"text": "The Praat tool was used ( Boersma and Weenink , 2009 ) .", "label": 5}
64
+ {"text": "2 The reader is asked to focus on any reasonable size measurement , for example , the maximal horizontal or vertical distance , or some combination of dimensions ( Kamp 1975 ; also Section 8.1 of the present article ) .", "label": 0}
65
+ {"text": "The implementation has been inspired by experience in extracting information from very large corpora ( Curran and Moens , 2002 ) and performing experiments on maximum entropy sequence tagging ( Curran and Clark , 2003 ; Clark et al. , 2003 ) .", "label": 4}
66
+ {"text": "Default parameters were used , although experimentation with different parameter settings is an important direction for future work ( Daelemans and Hoste , 2002 ; Munson et al. , 2005 ) .", "label": 3}
67
+ {"text": "Our work is inspired by the latent left-linking model in Chang et al. ( 2013 ) and the ILP formulation from Chang et al. ( 2011 ) .", "label": 5}
68
+ {"text": "Furthermore , the availability of rich ontological resources , in the form of the Unified Medical Language System ( UMLS ) ( Lindberg et al. , 1993 ) , and the availability of software that leverages this knowledge -- MetaMap ( Aronson , 2001 ) for concept identification and SemRep ( Rindflesch and Fiszman , 2003 ) for relation extraction -- provide a foundation for studying the role of semantics in various tasks .", "label": 0}
69
+ {"text": "The names given to the components vary ; they have been called `` strategic '' and `` tactical '' components ( e.g. , McKeown 1985 ; Thompson 1977 ; Danlos 1987 ) 1 , `` planning '' and `` realization '' ( e.g. , McDonald 1983 ; Hovy 1988a ) , or simply `` what to say '' versus `` how to say it '' ( e.g. , Danlos 1987 ; Reithinger 1990 ) .", "label": 0}
70
+ {"text": "Over the last decade there has been a lot of interest in developing tutorial dialogue systems that understand student explanations ( Jordan et al. , 2006 ; Graesser et al. , 1999 ; Aleven et al. , 2001 ; Buckley and Wolska , 2007 ; Nielsen et al. , 2008 ; VanLehn et al. , 2007 ) , because high percentages of selfexplanation and student contentful talk are known to be correlated with better learning in humanhuman tutoring ( Chi et al. , 1994 ; Litman et al. , 2009 ; Purandare and Litman , 2008 ; Steinhauser et al. , 2007 ) .", "label": 0}
71
+ {"text": "We use the TRIPS dialogue parser ( Allen et al. , 2007 ) to parse the utterances .", "label": 5}
72
+ {"text": "In order to address these limitations in a practical way , we conducted a small user study where we asked four judges ( graduate students from the Faculty of Information Technology at Monash University ) to assess the responses generated by our system ( Marom and Zukerman 2007a ) .", "label": 5}
73
+ {"text": "The understanding module utilizes ISSS ( Incremental Significant-utterance Sequence Search ) ( Nakano et al. , 1999b ) , which is an integrated parsing and discourse processing method .", "label": 5}
74
+ {"text": "We applied our system to the XTAG English grammar ( The XTAG Research Group , 2001 ) 3 , which is a large-scale FB-LTAG grammar for English .", "label": 5}
75
+ {"text": "After the extraction , pruning techniques ( Snover et al. , 2009 ) can be applied to increase the precision of the extracted paraphrases .", "label": 0}
76
+ {"text": "In this paper , we extend two classes of model adaptation methods ( i.e. , model interpolation and error-driven learning ) , which have been well studied in statistical language modeling for speech and natural language applications ( e.g. , Bacchiani et al. , 2004 ; Bellegarda , 2004 ; Gao et al. , 2006 ) , to ranking models for Web search applications .", "label": 0}
77
+ {"text": "GATE goes beyond earlier systems by using a component-based infrastructure ( Cunningham , 2000 ) which the GUI is built on top of .", "label": 0}
78
+ {"text": "Since sentences can refer to events described by other sentences , we may need also a quotation operator ; Perlis ( 1985 ) describes how first order logic can be augmented with such an operator .", "label": 0}
79
+ {"text": "The system uses a knowledge base implemented in the KM representation language ( Clark and Porter , 1999 ; Dzikovska et al. , 2006 ) to represent the state of the world .", "label": 5}
80
+ {"text": "A possible future direction would be to compare the query string to retrieved results using a method similar to that of Tsuruoka and Tsujii ( 2003 ) .", "label": 3}
81
+ {"text": "description-level lexical rules ( DLRs ; Meurers 1995 ) .5 2.2.1 Meta-Level Lexical Rules .", "label": 0}
82
+ {"text": "All EBMT systems , from the initial proposal by Nagao ( 1984 ) to the recent collection of Carl and Way ( 2003 ) , are premised on the availability of subsentential alignments derived from the input bitext .", "label": 0}
83
+ {"text": "The necessity of this kind of merging of arguments has been recognized before : Charniak and McDermott ( 1985 ) call it abductive unification/matching , Hobbs ( 1978 , 1979 ) refers to such operations using the terms knitting or petty conversational implicature .", "label": 0}
84
+ {"text": "In a number of proposals , lexical generalizations are captured using lexical underspecification ( Kathol 1994 ; Krieger and Nerbonne 1992 ;", "label": 1}
85
+ {"text": "These keywords are potentially useful features because some of them are subclasses of the ACE SCs shown in the left column of Table 1 , while others appear to be correlated with these ACE SCs .2 ( 6 ) INDUCED CLASS : Since the first-sense heuristic used in the previous feature may not be accurate in capturing the SC of an NP , we employ a corpusbased method for inducing SCs that is motivated by research in lexical semantics ( e.g. , Hearst ( 1992 ) ) .", "label": 4}
86
+ {"text": "Other psycholing-uistic studies that confirm the validity of paragraph units can be found in Black and Bower ( 1979 ) and Haberlandt et al. ( 1980 ) .", "label": 0}
87
+ {"text": "The bottom panel of table 1 lists the results for the chosen lexicalized model ( SSN-Freq > 200 ) and five recent statistical parsers ( Ratnaparkhi , 1999 ; Collins , 1999 ; Charniak , 2000 ; Collins , 2000 ; Bod , 2001 ) .", "label": 1}
88
+ {"text": "Nevertheless , the full document text is present in most systems , sometimes as the only feature ( Sugiyama and Okumura , 2007 ) and sometimes in combination with others see for instance ( Chen and Martin , 2007 ; Popescu and Magnini , 2007 ) - .", "label": 0}
89
+ {"text": "In a similar vain to Skut and Brants ( 1998 ) and Buchholz et al. ( 1999 ) , the method extends an existing flat shallow-parsing method to handle composite structures .", "label": 3}
90
+ {"text": "As a result , researchers have re-adopted the once-popular knowledge-rich approach , investigating a variety of semantic knowledge sources for common noun resolution , such as the semantic relations between two NPs ( e.g. , Ji et al. ( 2005 ) ) , their semantic similarity as computed using WordNet ( e.g. , Poesio et al. ( 2004 ) ) or Wikipedia ( Ponzetto and Strube , 2006 ) , and the contextual role played by an NP ( see Bean and Riloff ( 2004 ) ) .", "label": 0}
91
+ {"text": "We built a two-stage baseline system , using the perceptron segmentation model from our previous work ( Zhang and Clark , 2007 ) and the perceptron POS tagging model from Collins ( 2002 ) .", "label": 2}
92
+ {"text": "Note that although our current system uses MeSH headings assigned by human indexers , manually assigned terms can be replaced with automatic processing if needed ( Aronson et al. 2004 ) .", "label": 3}
93
+ {"text": "Furthermore , medical terminology is characterized by a typical mix of Latin and Greek roots with the corresponding host language ( e.g. , German ) , often referred to as neo-classical compounding ( McCray et al. , 1988 ) .", "label": 0}
94
+ {"text": "Previously ( Gerber and Chai 2010 ) , we assessed the importance of various implicit argument feature groups by conducting feature ablation tests .", "label": 2}
95
+ {"text": "To model d ( FWi \u00e2\u0088\u0092 1 , S \u00e2\u0086\u0092 T ) , d ( FWi +1 , S \u00e2\u0086\u0092 T ) , i.e. whether Li , S \u00e2\u0086\u0092 T and Ri , S \u00e2\u0086\u0092 T extend beyond the neighboring function word phrase pairs , we utilize the pairwise dominance model of Setiawan et al. ( 2009 ) .", "label": 5}
96
+ {"text": "For instance , Sells ( 1985 , p. 8 ) says that the sentence `` Reagan thinks bananas , '' which is otherwise strange , is in fact acceptable if it occurs as an answer to the question `` What is Kissinger 's favorite fruit ? ''", "label": 4}
97
+ {"text": "Semantic Role labeling ( SRL ) was first defined in Gildea and Jurafsky ( 2002 ) .", "label": 0}
98
+ {"text": "AJAX function lets the communication works asyncronously between a client and a server through a set of messages based on HTTP protocol and XML ( Garrett , 2005 ) .", "label": 0}
99
+ {"text": "The inclusion of the coreference task in the Sixth and Seventh Message Understanding Conferences ( MUC-6 and MUC-7 ) gave a considerable impetus to the development of coreference resolution algorithms and systems , such as those described in Baldwin et al. ( 1995 ) , Gaizauskas and Humphreys ( 1996 ) , and Kameyama ( 1997 ) .", "label": 0}
100
+ {"text": "The most detailed evaluation of link tokens to date was performed by ( Macklovitch & Hannan , 1996 ) , who trained Brown et al. 's Model 2 on 74 million words of the Canadian Hansards .", "label": 1}
101
+ {"text": "Log-linear models have proved successful in a wide variety of applications , and are the inspiration behind one of the best current statistical parsers ( Charniak , 2000 ) .", "label": 1}
102
+ {"text": "While we have observed reasonable results with both G2 and Fisher 's exact test , we have not yet discussed how these results compare to the results that can be obtained with a technique commonly used in corpus linguistics based on the mutual information ( MI ) measure ( Church and Hanks 1990 ) :", "label": 0}
103
+ {"text": "Morphological alterations of a search term have a negative impact on the recall performance of an information retrieval ( IR ) system ( Choueka , 1990 ; J \u00c2\u00a8 appinen and Niemist \u00c2\u00a8 o , 1988 ; Kraaij and Pohlmann , 1996 ) , since they preclude a direct match between the search term proper and its morphological variants in the documents to be retrieved .", "label": 0}
104
+ {"text": "For shuffling paraphrases , french alternations are partially described in ( Saint-Dizier , 1999 ) and a resource is available which describes alternation and the mapping verbs/alternations for roughly 1 700 verbs .", "label": 0}
105
+ {"text": "A more recent approach , advocated by Rappaport Hovav and Levin ( 1998 ) , describes a basic set of event templates corresponding to Vendler 's event classes ( Vendler , 1957 ) : ( 3 ) a. [ x ACT <MANNER> ] ( activity ) b. [ x <STATE> ] ( state ) c. [ BECOME [ x <STATE> ] ] ( achievement ) d. [ x CAUSE [ BECOME [ x <STATE> ] ] ] ( accomplishment )", "label": 0}
106
+ {"text": "Watanabe ( 1993 ) combines lexical and dependency mappings to form his generalizations .", "label": 0}
107
+ {"text": "Thus for instance , ( Copestake and Flickinger , 2000 ; Copestake et al. , 2001 ) describes a Head Driven Phrase Structure Grammar ( HPSG ) which supports the parallel construction of a phrase structure ( or derived ) tree and of a semantic representation and ( Dalrymple , 1999 ) show how to equip Lexical Functional grammar ( LFG ) with a glue semantics .", "label": 0}
108
+ {"text": "The reordering models we describe follow our previous work using function word models for translation ( Setiawan et al. , 2007 ; Setiawan et al. , 2009 ) .", "label": 2}
109
+ {"text": "And Collins ( 2000 ) argues for `` keeping track of counts of arbitrary fragments within parse trees '' , which has indeed been carried out in Collins and Duffy ( 2002 ) who use exactly the same set of ( all ) tree fragments as proposed in Bod ( 1992 ) .", "label": 4}
110
+ {"text": "In our work , we gather sets of sentences , and assume ( but do not employ ) existing approaches for their organization ( Goldstein et al. 2000 ; Barzilay , Elhadad , and McKeown 2001 ; Barzilay and McKeown 2005 ) .", "label": 0}
111
+ {"text": "criteria and data used in our experiments are based on the work of Talbot et al. ( 2011 ) .", "label": 5}
112
+ {"text": "We present experiments on the two standard coreference resolution datasets , ACE-2004 ( NIST , 2004 ) and OntoNotes-5 .0 ( Hovy et al. , 2006 ) .", "label": 5}
113
+ {"text": "\u00e2\u0080\u00a2 Only qualitative observations of the responses were reported ( no formal evaluation was performed ) ( Lapalme and Kosseim 2003 ; Roy and Subramaniam 2006 ) .", "label": 1}
114
+ {"text": "And subderivations headed by A1 with external nonterminals only at the leaves , internal nonterminals elsewhere , have probability 1/a1 ( Goodman 1996 ) .", "label": 0}
115
+ {"text": "\u00e2\u0080\u00a2 Support vector machines for mapping histories to parser actions ( Kudo and Matsumoto , 2002 ) .", "label": 5}
116
+ {"text": "Goodman ( 1996 , 1998 ) developed a polynomial time PCFG-reduction of DOP1 whose size is linear in the size of the training set , thus converting the exponential number of subtrees to a compact grammar .", "label": 0}
117
+ {"text": "Pustejovsky ( 1995 ) avoids enumerating the various senses for adjectives like fast by exploiting the semantics of the nouns they modify .", "label": 0}
118
+ {"text": "Hohensee and Bender ( 2012 ) have conducted a study on dependency parsing for 21 languages using features that encode whether the values for certain attributes are equal or not for a node and its governor .", "label": 0}
119
+ {"text": "Such approaches have been tried recently in restricted cases ( McCallum et al. , 2000 ; Eisner , 2001b ; Lafferty et al. , 2001 ) .", "label": 0}
120
+ {"text": "The relation between discourse and prosodic phrasing has been examined in some detail by Bing ( 1985 ) , who argues that each noun phrase in an utterance constitutes a separate prosodic phrase unless it is destressed because of reference to previous discourse .", "label": 0}
121
+ {"text": "By contrast , Turkish ( Oflazer et al. , 2003 ; Atalay et al. , 2003 ) exhibits high root accuracy but consistently low attachment scores ( about 88 % for length 1 and 68 % for length 2 ) .", "label": 1}
122
+ {"text": "The candidate examples that lead to the most disagreements among the different learners are considered to have the highest TUV ( Cohn , Atlas , and Ladner 1994 ; Freund et al. 1997 ) .", "label": 0}
123
+ {"text": "Subsequently , we extracted the bilingual phrase table from the aligned corpora using the Moses toolkit ( Koehn et al. , 2007 ) .", "label": 5}
124
+ {"text": "Representative systems are described in Boisen et al. ( 1989 ) , De Mattia and Giachin ( 1989 ) , Niedermair ( 1989 ) , Niemann ( 1990 ) , and Young ( 1989 ) .", "label": 0}
125
+ {"text": "Our rules for phonological word formation are adopted , for the most part , from G & G , Grosjean and Gee ( 1987 ) , and the account of monosyllabic destressing in Selkirk ( 1984 ) .", "label": 5}
126
+ {"text": "As a generalization , Briscoe ( 2001 ) notes that lexicons such as COMLEX tend to demonstrate high precision but low recall .", "label": 0}
127
+ {"text": "Such systems extract information from some types of syntactic units ( clauses in ( Fillmore and Atkins , 1998 ; Gildea and Jurafsky , 2002 ; Hull and Gomez , 1996 ) ; noun phrases in ( Hull and Gomez , 1996 ; Rosario et al. , 2002 ) ) .", "label": 0}
128
+ {"text": "Various approaches for computing semantic relatedness of words or concepts have been proposed , e.g. dictionary-based ( Lesk , 1986 ) , ontology-based ( Wu and Palmer , 1994 ; Leacock and Chodorow , 1998 ) , information-based ( Resnik , 1995 ; Jiang and Conrath , 1997 ) or distributional ( Weeds and Weir , 2005 ) .", "label": 0}
129
+ {"text": "Besides WordNet , the RTE literature documents the use of a variety of lexical information sources ( Bentivogli et al. , 2010 ; Dagan et al. , 2009 ) .", "label": 0}
130
+ {"text": "The question answering system developed by Chu-Carroll et al. ( 2003 ) belongs to the merging category of approaches , where the output of an individual method can be used as input to a different method ( this corresponds to Burke 's cascade sub-category ) .", "label": 1}
131
+ {"text": "More recently , ( Sebastiani , 2002 ) has performed a good survey of document categorization ; recent works can also be found in ( Joachims , 2002 ) , ( Crammer and Singer , 2003 ) , and ( Lewis et al. , 2004 ) .", "label": 0}
132
+ {"text": "Discriminant analysis has been employed by researchers in automatic text genre detection ( Biber 1993b ; Karlgren and Cutting 1994 ) since it offers a simple and robust solution despite the fact that it presupposes normal distributions of the discriminating variables .", "label": 0}
133
+ {"text": "This model has previously been shown to provide excellent performance on multiple tasks , including prediction of association norms , word substitution errors , semantic inferences , and word similarity ( Andrews et al. , 2009 ; Silberer and Lapata , 2012 ) .", "label": 2}
134
+ {"text": "In other words , existing treatments of gradables in GRE fail to take the `` efficiency of language '' into account ( Barwise and Perry 1983 ; see our Section 2 ) .", "label": 0}
135
+ {"text": "Word alignments are used primarily for extracting minimal translation units for machine translation ( MT ) ( e.g. , phrases [ Koehn , Och , and Marcu 2003 ] and rules [ Galley et al. 2004 ; Chiang et al. 2005 ] ) as well as for", "label": 0}
136
+ {"text": "Following Miller et al. , 1999 , the IR system ranks documents according to the probability that a document D is relevant given the query Q , P ( D is R IQ ) .", "label": 5}
137
+ {"text": "In modern syntactic theories ( e.g. , lexical-functional grammar [ LFG ] [ Kaplan and Bresnan 1982 ; Bresnan 2001 ; Dalrymple 2001 ] , head-driven phrase structure grammar [ HPSG ] [ Pollard and Sag 1994 ] , tree-adjoining grammar [ TAG ] [ Joshi 1988 ] , and combinatory categorial grammar [ CCG ] [ Ades and Steedman 1982 ] ) , the lexicon is the central repository for much morphological , syntactic , and semantic information .", "label": 0}
138
+ {"text": "We have shown elsewhere ( Jensen and Binot 1988 ; Zadrozny 1987a , 1987b ) that natural language programs , such as on-line grammars and dictionaries , can be used as referential levels for commonsense reasoning -- for example , to disambiguate PP attachment .", "label": 2}
139
+ {"text": "Thus rather than a single training procedure , we can actually partition the examples by predicate , and train a 1For a fixed verb , MI is proportional to Keller and Lapata ( 2003 ) 's conditional probability scores for pseudodisambiguation of ( v , n , n \u00e2\u0080\u00b2 ) triples : Pr ( v | n ) = Pr ( v , n ) / Pr ( n ) , which was shown to be a better measure of association than co-occurrence frequency f ( v , n ) .", "label": 4}
dataset/citation_intent/train.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
dataset/hyperpartisan_news/dev.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
dataset/hyperpartisan_news/label2id.json ADDED
@@ -0,0 +1 @@
 
 
1
+ {"false": 0, "true": 1}
dataset/hyperpartisan_news/test.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
dataset/hyperpartisan_news/train.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
dataset/rct-sample/dev.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
dataset/rct-sample/label2id.json ADDED
@@ -0,0 +1 @@
 
 
1
+ {"BACKGROUND": 0, "CONCLUSIONS": 1, "METHODS": 2, "OBJECTIVE": 3, "RESULTS": 4}
dataset/rct-sample/test.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
dataset/rct-sample/train.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
dataset/sciie/dev.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
dataset/sciie/label2id.json ADDED
@@ -0,0 +1 @@
 
 
1
+ {"COMPARE": 0, "CONJUNCTION": 1, "EVALUATE-FOR": 2, "FEATURE-OF": 3, "HYPONYM-OF": 4, "PART-OF": 5, "USED-FOR": 6}
dataset/sciie/test.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
dataset/sciie/train.jsonl CHANGED
The diff for this file is too large to render. See raw diff
 
multi_domain_document_classification.py CHANGED
@@ -6,7 +6,7 @@ import datasets
  logger = datasets.logging.get_logger(__name__)
  _DESCRIPTION = """Multi domain document classification dataset used in [https://arxiv.org/pdf/2004.10964.pdf](https://arxiv.org/pdf/2004.10964.pdf)"""
  _NAME = "multi_domain_document_classification"
- _VERSION = "0.0.1"
+ _VERSION = "0.1.0"
  _CITATION = """
  @inproceedings{dontstoppretraining2020,
   author = {Suchin Gururangan and Ana Marasović and Swabha Swayamdipta and Kyle Lo and Iz Beltagy and Doug Downey and Noah A. Smith},
@@ -28,6 +28,13 @@ _URLS = {
  }
  for k in _DATA_TYPE
  }
+ _LABELS = {
+     "chemprot": {"ACTIVATOR": 0, "AGONIST": 1, "AGONIST-ACTIVATOR": 2, "AGONIST-INHIBITOR": 3, "ANTAGONIST": 4, "DOWNREGULATOR": 5, "INDIRECT-DOWNREGULATOR": 6, "INDIRECT-UPREGULATOR": 7, "INHIBITOR": 8, "PRODUCT-OF": 9, "SUBSTRATE": 10, "SUBSTRATE_PRODUCT-OF": 11, "UPREGULATOR": 12},
+     "citation_intent": {"Background": 0, "CompareOrContrast": 1, "Extends": 2, "Future": 3, "Motivation": 4, "Uses": 5},
+     "hyperpartisan_news": {"false": 0, "true": 1},
+     "rct-sample": {"BACKGROUND": 0, "CONCLUSIONS": 1, "METHODS": 2, "OBJECTIVE": 3, "RESULTS": 4},
+     "sciie": {"COMPARE": 0, "CONJUNCTION": 1, "EVALUATE-FOR": 2, "FEATURE-OF": 3, "HYPONYM-OF": 4, "PART-OF": 5, "USED-FOR": 6}
+ }


  class MultiDomainDocumentClassificationConfig(datasets.BuilderConfig):
@@ -70,12 +77,14 @@ class MultiDomainDocumentClassification(datasets.GeneratorBasedBuilder):
          _key += 1

      def _info(self):
+         label2id = sorted(_LABELS[self.config.name].items(), key=lambda x: x[1])
+         label = [i[0] for i in label2id]
          return datasets.DatasetInfo(
              description=_DESCRIPTION,
              features=datasets.Features(
                  {
                      "text": datasets.Value("string"),
-                     "label": datasets.Value("string")
+                     "label": datasets.features.ClassLabel(names=label),
                  }
              ),
              supervised_keys=None,
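
With the _info() change above, the "label" column is exposed as a ClassLabel feature whose names are ordered by the ids in _LABELS, so each example carries an integer label together with the machinery to map it back to a string. A rough usage sketch; the repository id passed to load_dataset is illustrative and should be replaced with the actual path of this dataset:

from datasets import load_dataset

# Illustrative repository id; substitute the real one for this repo.
data = load_dataset("asahi417/multi_domain_document_classification", "citation_intent")

label_feature = data["train"].features["label"]   # ClassLabel(names=["Background", "CompareOrContrast", ...])
example = data["train"][0]
print(example["label"])                           # integer id, e.g. 0
print(label_feature.int2str(example["label"]))    # corresponding string label, e.g. "Background"
print(label_feature.str2int("Uses"))              # and back again
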