Dataset: scidtb

Fine-Grained Tasks: parsing
Languages: English
Multilinguality: monolingual
Size Categories: unknown
Language Creators: found
Annotations Creators: expert-generated
Source Datasets: original
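A minimal loading sketch with the Hugging Face `datasets` library. The hub ID `scidtb` and the `train` split are assumptions based on this card's title; substitute the actual repository path and split names if they differ.

```python
# Minimal loading sketch. The hub ID "scidtb" and the "train" split are
# assumptions inferred from this card; adjust to the actual repository.
from datasets import load_dataset

ds = load_dataset("scidtb", split="train")
record = ds[0]
print(record["file_name"])  # e.g. "D14-1101.edu.txt.dep"
tree = record["root"]       # dict with "id", "parent", "text", "relation" lists
print(len(tree["id"]), "EDUs (including the artificial ROOT)")
```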
Dataset Preview
Columns: root (sequence), file_name (string). Each record below shows the root tree followed by its file_name.
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ], "parent": [ -1, 0, 1, 1, 5, 3, 5, 1, 7, 10, 1 ], "text": [ "ROOT", "We propose a neural network approach ", "to benefit from the non-linearity of corpus-wide statistics for part-of-speech ( POS ) tagging . <S>", "We investigated several types of corpus-wide information for the words , such as word embeddings and POS tag distributions . <S>", "Since these statistics are encoded as dense continuous features , ", "it is not trivial to combine these features ", "comparing with sparse discrete features . <S>", "Our tagger is designed as a combination of a linear model for discrete features and a feed-forward neural network ", "that captures the non-linear interactions among the continuous features . <S>", "By using several recent advances in the activation functions for neural networks , ", "the proposed method marks new state-of-the-art accuracies for English POS tagging tasks . <S>" ], "relation": [ "null", "ROOT", "enablement", "elab-aspect", "cause", "elab-addition", "comparison", "elab-aspect", "elab-addition", "manner-means", "evaluation" ] }
"D14-1101.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8 ], "parent": [ -1, 4, 1, 1, 0, 4, 4, 4, 7 ], "text": [ "ROOT", "Different approaches to high-quality grammatical error correction have been proposed recently , ", "many of which have their own strengths and weaknesses . <S>", "Most of these approaches are based on classification or statistical machine translation ( SMT ) . <S>", "In this paper , we propose to combine the output from a classification-based system and an SMT-based system ", "to improve the correction quality . <S>", "We adopt the system combination technique of Heafield and Lavie ( 2010 ) . <S>", "We achieve an F0.5 score of 39.39 % on the test set of the CoNLL-2014 shared task , ", "outperforming the best system in the shared task . <S>" ], "relation": [ "null", "bg-compare", "elab-example", "elab-addition", "ROOT", "evaluation", "elab-aspect", "evaluation", "comparison" ] }
"D14-1102.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7 ], "parent": [ -1, 0, 1, 2, 1, 1, 5, 4 ], "text": [ "ROOT", "In this paper we propose a method ", "to increase dependency parser performance ", "without using additional labeled or unlabeled data ", "by refining the layer of predicted part-of-speech ( POS ) tags . <S>", "We perform experiments on English and German ", "and show significant improvements for both languages . <S>", "The refinement is based on generative split-merge training for Hidden Markov models ( HMMs ) . <S>" ], "relation": [ "null", "ROOT", "enablement", "elab-addition", "manner-means", "evaluation", "joint", "elab-addition" ] }
"D14-1103.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 ], "parent": [ -1, 8, 3, 1, 3, 2, 5, 5, 0, 8, 9, 9, 11, 8, 13 ], "text": [ "ROOT", "Importance weighting is a generalization of various statistical bias correction techniques . <S>", "While our labeled data in NLP is heavily biased , ", "importance weighting has seen only few applications in NLP , ", "most of them relying on a small amount of labeled target data . <S>", "The publication bias ", "toward reporting positive results ", "makes it hard to say whether researchers have tried . <S>", "This paper presents a negative result on unsupervised domain adaptation for POS tagging . <S>", "In this setup , we only have unlabeled data ", "and thus only indirect access to the bias in emission and transition probabilities . <S>", "Moreover , most errors in POS tagging are due to unseen words , ", "and there , importance weighting cannot help . <S>", "We present experiments with a wide variety of weight functions , quantilizations , as well as with randomly generated weights , ", "to support these claims . <S> " ], "relation": [ "null", "bg-goal", "contrast", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "same-unit", "ROOT", "elab-aspect", "joint", "progression", "joint", "evaluation", "enablement" ] }
"D14-1104.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ], "parent": [ -1, 3, 1, 0, 3, 4, 4, 3, 7, 8 ], "text": [ "ROOT", "Code-mixing is frequently observed in user generated content on social media , especially from multilingual users . <S>", "The linguistic complexity of such content is compounded by presence of spelling variations , transliteration and non-adherance to formal grammar . <S>", "We describe our initial efforts ", "to create a multi-level annotated corpus of Hindi-English code-mixed text ", "collated from Facebook forums , ", "and explore language identification , back-transliteration , normalization and POS tagging of this data . <S>", "Our results show ", "that language identification and transliteration for Hindi are two major challenges ", "that impact POS tagging accuracy . <S>" ], "relation": [ "null", "bg-goal", "elab-addition", "ROOT", "enablement", "elab-addition", "joint", "evaluation", "attribution", "elab-addition" ] }
"D14-1105.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 ], "parent": [ -1, 0, 1, 2, 3, 2, 5, 6, 6, 1, 9, 10, 1, 12, 12, 14 ], "text": [ "ROOT", "We investigate grammatical error detection in spoken language , ", "and present a data-driven method ", "to train a dependency parser ", "to automatically identify and label grammatical errors . <S>", "This method is agnostic to the label set used , ", "and the only manual annotations ", "needed for training ", "are grammatical error labels . <S>", "We find ", "that the proposed system is robust to disfluencies , ", "so that a separate stage to elide disfluencies is not required . <S>", "The proposed system outperforms two baseline systems on two different corpora ", "that use different sets of error tags . <S>", "It is able to identify utterances with grammatical errors with an F1-score as high as 0.623 , ", "as compared to a baseline F1 of 0.350 on the same data . <S>" ], "relation": [ "null", "ROOT", "joint", "enablement", "enablement", "elab-addition", "joint", "elab-addition", "same-unit", "elab-aspect", "attribution", "result", "evaluation", "elab-addition", "elab-addition", "comparison" ] }
"D14-1106.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 ], "parent": [ -1, 0, 1, 1, 3, 1, 1, 6, 9, 1, 9, 10, 11, 1, 13 ], "text": [ "ROOT", "We introduce a new CCG parsing model ", "which is factored on lexical category assignments . <S>", "Parsing is then simply a deterministic search for the most probable category sequence ", "that supports a CCG derivation . <S>", "The parser is extremely simple , with a tiny feature set , no POS tagger , and no statistical model of the derivation or dependencies . <S>", "Formulating the model in this way allows a highly effective heuristic for A∗parsing , ", "which makes parsing extremely fast . <S>", "Compared to the standard C&C CCG parser , ", "our model is more accurate out-of-domain , ", "is four times faster , ", "has higher coverage , ", "and is greatly simplified . <S>", "We also show ", "that using our parser improves the performance of a state-of-the-art question answering system . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "comparison", "evaluation", "joint", "joint", "joint", "evaluation", "attribution" ] }
"D14-1107.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ], "parent": [ -1, 0, 1, 2, 3, 2, 5, 1, 7, 7, 1 ], "text": [ "ROOT", "We describe a new dependency parser for English tweets , TWEEBOPARSER . <S>", "The parser builds on several contributions : ", "new syntactic annotations for a corpus of tweets ( TWEEBANK ) , with conventions ", "informed by the domain ; ", "adaptations to a statistical parsing algorithm ; ", "and a new approach to exploiting out-of-domain Penn Treebank data . <S>", "Our experiments show ", "that the parser achieves over 80 % unlabeled attachment accuracy on our new , high-quality test set ", "and measure the benefit of our contributions . <S>", "Our dataset and parser can be found at http : //www.ark.cs.cmu.edu/TweetNLP . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "elab-enum_member", "elab-addition", "elab-enum_member", "joint", "evaluation", "attribution", "joint", "elab-addition" ] }
"D14-1108.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 ], "parent": [ -1, 4, 1, 2, 0, 4, 4, 6, 4, 8, 4, 10, 11 ], "text": [ "ROOT", "Dependency parsing with high-order features results in a provably hard decoding problem . <S> \r", "A lot of work has gone into developing powerful optimization methods \r", "for solving these combinatorial problems . <S>\r", "In contrast , we explore , analyze , and demonstrate \r", "that a substantially simpler randomized greedy inference algorithm already suffices for near optimal parsing : \r", "a) we analytically quantify the number of local optima \r", "that the greedy method has to overcome in the context of first-order parsing ; \r", "b) we show \r", "that , as a decoding algorithm , the greedy method surpasses dual decomposition in second-order parsing ; \r", "c) we empirically demonstrate \r", "that our approach with up to third-order and global features outperforms the state-of-the-art dual decomposition and MCMC sampling methods \r", "when evaluated on 14 languages of non-projective CoNLL datasets . <S>" ], "relation": [ "null", "bg-compare", "elab-addition", "elab-addition", "ROOT", "attribution", "elab-aspect", "elab-addition", "elab-aspect", "attribution", "evaluation", "attribution", "temporal" ] }
"D14-1109.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18 ], "parent": [ -1, 7, 1, 1, 3, 4, 5, 0, 7, 7, 9, 10, 10, 12, 7, 14, 15, 15, 17 ], "text": [ "ROOT", "Most word representation methods assume ", "that each word owns a single semantic vector . <S>", "This is usually problematic ", "because lexical ambiguity is ubiquitous , ", "which is also the problem ", "to be resolved by word sense disambiguation . <S>", "In this paper , we present a unified model for joint word sense representation and disambiguation , ", "which will assign distinct representations for each word sense . <S>", "The basic idea is that both word sense representation ( WSR ) and word sense disambiguation ( WSD ) will benefit from each other : ", "( 1 ) high-quality WSR will capture rich information about words and senses , ", "which should be helpful for WSD , ", "and ( 2 ) high-quality WSD will provide reliable disambiguated corpora ", "for learning better sense representations . <S>", "Experimental results show ", "that , our model improves the performance of contextual word similarity ", "compared to existing WSR methods , ", "outperforms state-of-the-art supervised methods on domain-specific WSD , ", "and achieves competitive performance on coarse-grained all-words WSD . <S>" ], "relation": [ "null", "bg-compare", "attribution", "elab-addition", "cause", "elab-addition", "elab-addition", "ROOT", "elab-addition", "elab-addition", "elab-enum_member", "elab-addition", "joint", "elab-addition", "evaluation", "attribution", "comparison", "elab-addition", "joint" ] }
"D14-1110.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 ], "parent": [ -1, 4, 1, 2, 0, 4, 5, 6, 7, 4, 4, 4, 11, 12, 13 ], "text": [ "ROOT", "Compositional distributional semantics is a subfield of Computational Linguistics ", "which investigates methods ", "for representing the meanings of phrases and sentences . <S>", "In this paper , we explore implementations of a framework ", "based on Combinatory Categorial Grammar ( CCG ) , ", "in which words with certain grammatical types have meanings ", "represented by multi-linear maps ", "( i.e. multi-dimensional arrays , or tensors ) . <S>", "An obstacle to full implemen-tation of the framework is the size of these tensors . <S>", "We examine the performance of lower dimensional approximations of transitive verb tensors on a sentence plausibility/selectional preference task . <S>", "We find ", "that the matrices perform as well as , and sometimes even better than , full tensors , ", "allowing a reduction in the number of parameters ", "needed to model the framework . <S>" ], "relation": [ "null", "bg-general", "elab-addition", "elab-addition", "ROOT", "bg-general", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "elab-aspect", "evaluation", "attribution", "elab-addition", "elab-addition" ] }
"D14-1111.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7 ], "parent": [ -1, 0, 1, 1, 3, 4, 1, 6 ], "text": [ "ROOT", "In this paper we propose a computational method ", "for determining the orthographic similarity between Romanian and related languages . <S>", "We account for etymons and cognates ", "and we investigate not only the number of related words , but also their forms , ", "quantifying orthographic similarities . <S>", "The method we propose is adaptable to any language , ", "as far as resources are available . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "elab-aspect", "joint", "elab-addition", "evaluation", "condition" ] }
"D14-1112.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 ], "parent": [ -1, 5, 1, 1, 3, 0, 5, 5, 7, 8, 5, 10, 11 ], "text": [ "ROOT", "There is rising interest in vector-space word embeddings and their use in NLP , ", "especially given recent methods for their fast estimation at very large scale . <S>", "Nearly all this work , however , assumes a single vector per word type—ignoring polysemy ", "and thus jeopardizing their useful-ness for downstream tasks . <S>", "We present an extension to the Skip-gram model ", "that efficiently learns multiple embeddings per word type . <S>", "It differs from recent related work ", "by jointly performing word sense discrimination and embedding learning , ", "by non-parametrically estimating the number of senses per word type , and by its efficiency and scalability . <S>", "We present new state-of-the-art results in the word similarity in context task ", "and demonstrate its scalability ", "by training with one machine on a corpus of nearly 1 billion tokens in less than 6 hours . <S>" ], "relation": [ "null", "bg-compare", "condition", "contrast", "joint", "ROOT", "elab-addition", "elab-addition", "manner-means", "joint", "evaluation", "joint", "manner-means" ] }
"D14-1113.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 ], "parent": [ -1, 7, 1, 2, 1, 4, 5, 0, 7, 8, 8, 7, 11, 7, 13, 13, 7 ], "text": [ "ROOT", "Knowledge graphs are recently used ", "for enriching query representations in an entity-aware way for the rich facts ", "organized around entities in it . <S>", "However , few of the methods pay attention to non-entity words ", "and clicked websites in queries , ", "which also help conveying user intent . <S>", "In this paper , we tackle the problem of intent understanding with ", "innovatively representing entity words , refiners and clicked urls as intent topics in a unified knowledge graph based framework , in a way ", "to exploit and expand knowledge graph ", "which we call \"tailor\" . <S>", "We collaboratively exploit global knowledge in knowledge graphs and local contexts in query log ", "to initialize intent representation , ", "then propagate the enriched features in a graph ", "consisting of intent topics ", "using an unsupervised algorithm . <S>", "The experiments prove intent topics with knowledge graph enriched features significantly enhance intent understanding . <S>" ], "relation": [ "null", "bg-goal", "elab-addition", "elab-addition", "contrast", "joint", "elab-addition", "ROOT", "manner-means", "enablement", "elab-addition", "elab-process_step", "enablement", "elab-process_step", "elab-addition", "manner-means", "evaluation" ] }
"D14-1114.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ], "parent": [ -1, 2, 0, 4, 2, 4, 2, 6, 7, 8, 9 ], "text": [ "ROOT", "The role of Web search queries has been demonstrated in the extraction of attributes of instances and classes , or of sets of related instances and their class labels . <S>", "This paper explores the acquisition of open-domain commonsense knowledge , usually available as factual knowledge , from Web search queries . <S>", "Similarly to previous work in open-domain information extraction , ", "knowledge extracted from text - in this case , from queries - takes the form of lexicalized assertions ", "associated with open-domain classes . <S>", "Experimental results indicate ", "that facts ", "extracted from queries complement , ", "and have competitive accuracy levels relative to , facts ", "extracted from Web documents by previous methods . <S>" ], "relation": [ "null", "bg-general", "ROOT", "comparison", "elab-addition", "elab-addition", "evaluation", "attribution", "elab-addition", "joint", "elab-addition" ] }
"D14-1115.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21 ], "parent": [ -1, 7, 1, 2, 2, 1, 5, 0, 7, 7, 9, 10, 9, 9, 13, 13, 15, 16, 17, 7, 19, 19 ], "text": [ "ROOT", "Question Answering over Linked Data ( QALD ) aims to evaluate a question answering system over structured data , ", "the key objective of which is to translate questions ", "posed using natural language ", "into structured queries . <S>", "This technique can help common users to directly access open-structured knowledge on the Web ", "and , accordingly , has attracted much attention . <S>", "To this end , we propose a novel method ", "using first-order logic . <S>", "We formulate the knowledge ", "for resolving the ambiguities in the main three steps of QALD ", "( phrase detection , phrase-to-semantic-item mapping and semantic item grouping ) ", "as first-order logic clauses in a Markov Logic Network . <S>", "All clauses can then produce interacted effects in a unified framework ", "and can jointly resolve all ambiguities . <S>", "Moreover , our method adopts a pattern-learning strategy for semantic item grouping . <S>", "In this way , our method can cover more text expressions ", "and answer more questions than previous methods ", "using manually designed patterns . <S>", "The experimental results ", "using open benchmarks ", "demonstrate the effectiveness of the proposed method . <S>" ], "relation": [ "null", "bg-compare", "elab-addition", "elab-addition", "same-unit", "elab-addition", "joint", "ROOT", "manner-means", "elab-aspect", "elab-addition", "elab-enum_member", "same-unit", "elab-addition", "joint", "progression", "elab-addition", "joint", "manner-means", "evaluation", "manner-means", "same-unit" ] }
"D14-1116.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25 ], "parent": [ -1, 3, 1, 0, 3, 3, 5, 3, 7, 3, 9, 10, 11, 12, 13, 13, 9, 16, 3, 18, 3, 20, 21, 20, 20, 24 ], "text": [ "ROOT", "Much recent work focuses on formal interpretation of natural question utterances , ", "with the goal of executing the resulting structured queries on knowledge graphs ( KGs ) such as Freebase . <S>", "Here we address two limitations of this approach ", "when applied to open-domain , entity-oriented Web queries . <S>", "First , Web queries are rarely well-formed questions . <S>", "They are \"telegraphic\" , with missing verbs , prepositions , clauses , case and phrase clues . <S>", "Second , the KG is always incomplete , ", "unable to directly answer many queries . <S>", "We propose a novel technique ", "to segment a telegraphic query ", "and assign a coarse-grained purpose to each segment : ", "a base entity e1 , a relation type r , a target entity type t2 , and contextual words s . <S>", "The query seeks entity e2 ∈ t2 ", "where r ( e1 , e2 ) holds , ", "further evidenced by schema-agnostic words s . <S>", "Query segmentation is integrated with the KG and an unstructured corpus ", "where mentions of entities have been linked to the KG . <S>", "We do not trust the best or any specific query segmentation . <S>", "Instead , evidence in favor of candidate e2s are aggregated across several segmentations . <S>", "Extensive experiments on the ClueWeb corpus and parts of Freebase as our KG , ", "using over a thousand telegraphic queries ", "adapted from TREC , INEX , and WebQuestions , ", "show the efficacy of our approach . <S>", "For one benchmark , MAP improves from 0.2-0.29 ( competitive baselines ) to 0.42 ( our system ) . <S>", "NDCG @ 10 improves from 0.29-0.36 to 0.54 . <S> " ], "relation": [ "null", "bg-compare", "elab-addition", "ROOT", "temporal", "elab-enum_member", "elab-addition", "elab-enum_member", "elab-addition", "elab-aspect", "enablement", "joint", "elab-enum_member", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "elab-aspect", "contrast", "evaluation", "manner-means", "elab-addition", "same-unit", "exp-evidence", "contrast" ] }
"D14-1117.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21 ], "parent": [ -1, 8, 1, 2, 2, 4, 4, 6, 0, 8, 9, 12, 9, 8, 13, 14, 8, 16, 17, 16, 19, 20 ], "text": [ "ROOT", "Estimating questions' difficulty levels is an important task in community question answering ( CQA ) services . <S>", "Previous studies propose to solve this problem based on the question-user comparisons ", "extracted from the question answering threads . <S>", "However , they suffer from data sparseness problem ", "as each question only gets a limited number of comparisons . <S>", "Moreover , they cannot handle newly posted questions ", "which get no comparisons . <S>", "In this paper , we propose a novel question difficulty estimation approach ", "called Regularized Competition Model ( RCM ) , ", "which naturally combines question-user comparisons and questions' textual descriptions into a unified framework . <S>", "By incorporating textual information , ", "RCM can effectively deal with data sparseness problem . <S>", "We further employ a K-Nearest Neighbor approach ", "to estimate difficulty levels of newly posted questions , ", "again by leveraging textual similarities . <S>", "Experiments on two publicly available data sets show ", "that for both well-resolved and newly-posted questions , RCM performs the estimation task significantly better than existing methods , ", "demonstrating the advantage of incorporating textual information . <S>", "More interestingly , we observe ", "that RCM might provide an automatic way ", "to quantitatively measure the knowledge levels of words . <S>" ], "relation": [ "null", "bg-compare", "elab-addition", "elab-addition", "contrast", "cause", "progression", "elab-addition", "ROOT", "elab-addition", "elab-addition", "manner-means", "elab-addition", "elab-aspect", "enablement", "manner-means", "evaluation", "attribution", "elab-addition", "progression", "attribution", "enablement" ] }
"D14-1118.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 ], "parent": [ -1, 3, 1, 0, 3, 4, 5, 6, 9, 3, 9, 9, 15, 12, 13, 3, 15, 16, 16, 18 ], "text": [ "ROOT", "A poll consists of a question and a set of predefined answers ", "from which voters can select . <S>", "We present the new problem of vote prediction on comments , ", "which involves determining which of these answers a voter selected ", "given a comment ", "she wrote ", "after voting . <S>", "To address this task , ", "we exploit not only the information ", "extracted from the comments ", "but also extra-textual information such as user demographic information and inter-comment constraints . <S>", "In an evaluation ", "involving nearly one million comments ", "collected from the popular SodaHead social polling website , ", "we show ", "that a vote prediction system ", "that exploits only textual information ", "can be improved significantly ", "when extended with extra-textual information . <S>" ], "relation": [ "null", "bg-general", "elab-addition", "ROOT", "elab-addition", "condition", "elab-addition", "temporal", "enablement", "elab-aspect", "elab-addition", "progression", "condition", "elab-addition", "elab-addition", "evaluation", "attribution", "elab-addition", "same-unit", "temporal" ] }
"D14-1119.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 ], "parent": [ -1, 0, 1, 1, 1, 4, 5, 6, 1, 8, 9, 8, 1, 12, 12, 1, 15 ], "text": [ "ROOT", "In this paper we first exploit cash-tags ( \" $ \" ", "followed by stocks' ticker symbols ) ", "in Twitter ", "to build a stock network , ", "where nodes are stocks ", "connected by edges ", "when two stocks co-occur frequently in tweets . <S>", "We then employ a labeled topic model ", "to jointly model both the tweets and the network structure ", "to assign each node and each edge a topic respectively . <S>", "This Semantic Stock Network ( SSN ) summarizes discussion topics about stocks and stock relations . <S>", "We further show ", "that social sentiment about stock ( node ) topics and stock relationship ( edge ) topics are predictive of each stock's market . <S>", "For prediction , we propose to regress the topic-sentiment time-series and the stock's price time series . <S>", "Experimental results demonstrate ", "that topic sentiments from close neighbors are able to help improve the prediction of a stock markedly . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "elab-addition", "enablement", "elab-addition", "elab-addition", "temporal", "elab-process_step", "enablement", "enablement", "elab-addition", "evaluation", "attribution", "elab-addition", "evaluation", "attribution" ] }
"D14-1120.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ], "parent": [ -1, 2, 0, 2, 2, 2, 2, 6, 6, 8 ], "text": [ "ROOT", "Demographic lexica have potential for widespread use in social science , economic , and business applications . <S>", "We derive predictive lexica ", "( words and weights ) ", "for age and gender ", "using regression and classification models from word usage in Facebook , blog , and Twitter data with associated demographic labels . <S>", "The lexica , ", "made publicly available, ", "achieved state-of-the-art accuracy in language based age and gender prediction over Facebook and Twitter , ", "and were evaluated for generalization across social media genres as well as in limited message situations . <S>" ], "relation": [ "null", "bg-goal", "ROOT", "elab-addition", "elab-addition", "manner-means", "elab-addition", "elab-addition", "same-unit", "joint" ] }
"D14-1121.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 ], "parent": [ -1, 7, 1, 1, 3, 3, 5, 0, 7, 7, 9, 10, 7, 12, 13 ], "text": [ "ROOT", "Dependency parsing is a core task in NLP , ", "and it is widely used by many applications such as information extraction , ques-tion answering , and machine translation . <S>", "In the era of social media , a big challenge is that parsers ", "trained on traditional newswire corpora ", "typically suffer from the domain mismatch issue , ", "and thus perform poorly on social media data . <S>", "We present a new GFL/FUDG-annotated Chinese treebank with more than 18K tokens from Sina Weibo ", "( the Chinese equivalent of Twitter ) . <S>", "We formulate the dependency parsing problem as many small and parallelizable arc prediction tasks : ", "for each task , we use a programmable probabilistic first-order logic ", "to infer the dependency arc of a token in the sentence . <S>", "In experiments , we show ", "that the proposed model outperforms an off-the-shelf Stanford Chinese parser , as well as a strong MaltParser baseline ", "that is trained on the same in-domain data . <S>" ], "relation": [ "null", "bg-compare", "joint", "elab-addition", "elab-addition", "same-unit", "result", "ROOT", "elab-addition", "elab-aspect", "elab-addition", "enablement", "evaluation", "attribution", "elab-addition" ] }
"D14-1122.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 ], "parent": [ -1, 7, 1, 1, 1, 4, 5, 0, 9, 7, 9, 10, 7, 12 ], "text": [ "ROOT", "Microblog has become a major platform for information about real-world events . <S>", "Automatically discovering real-world events from microblog has attracted the attention of many researchers . <S>", "However , most of existing work ignore the importance of emotion information for event detection . <S>", "We argue ", "that people's emotional reactions immediately reflect the occurring of real-world events ", "and should be important for event detection . <S>", "In this study , we focus on the problem of community-related event detection by community emotions . <S>", "To address the problem , ", "we propose a novel framework ", "which include the following three key components : ", "microblog emotion classification , community emotion aggregation and community emotion burst detection . <S>", "We evaluate our approach on real microblog data sets . <S>", "Experimental results demonstrate the effectiveness of the proposed framework . <S>" ], "relation": [ "null", "bg-compare", "elab-addition", "contrast", "elab-addition", "attribution", "joint", "ROOT", "enablement", "elab-aspect", "elab-addition", "elab-enum_member", "evaluation", "exp-evidence" ] }
"D14-1123.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 ], "parent": [ -1, 5, 1, 1, 3, 0, 5, 6, 5, 8, 9, 9, 11 ], "text": [ "ROOT", "Casual online forums such as Reddit , Slashdot and Digg , are continuing to increase in popularity as a means of communication . <S>", "Detecting disagreement in this domain is a considerable challenge . <S>", "Many topics are unique to the conversation on the forum , ", "and the appearance of disagreement may be much more subtle than on political blogs or social media sites such as twitter . <S>", "In this analysis we present a crowd-sourced annotated corpus for topic level disagreement detection in Slashdot , ", "showing ", "that disagreement detection in this domain is difficult even for humans . <S>", "We then proceed to show ", "that a new set of features ", "determined from the rhetorical structure of the conversation ", "significantly improves the performance on disagreement detection over a baseline ", "consisting of unigram/bigram features , discourse markers , structural features and meta-post features . <S>" ], "relation": [ "null", "bg-goal", "elab-addition", "elab-addition", "joint", "ROOT", "elab-addition", "attribution", "elab-process_step", "attribution", "elab-addition", "same-unit", "elab-addition" ] }
"D14-1124.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21 ], "parent": [ -1, 6, 1, 2, 3, 4, 0, 6, 6, 6, 9, 12, 6, 12, 13, 14, 6, 16, 17, 18, 19, 20 ], "text": [ "ROOT", "Recently , work in NLP was initiated on a type of opinion inference ", "that arises ", "when opinions are expressed toward events ", "which have positive or negative effects on entities ", "( +/-effect events ) . <S>", "This paper addresses methods ", "for creating a lexicon of such events , ", "to support such work on opinion inference . <S>", "Due to significant sense ambiguity , ", "our goal is to develop a sense-level rather than word-level lexicon . <S>", "To maximize the effectiveness of different types of information , ", "we combine a graph-based method ", "using WordNet1 relations ", "and a standard classifier ", "using gloss information . <S>", "A hybrid between the two gives the best results . <S>", "Further , we provide evidence ", "that the model is an effective way ", "to guide manual annotation ", "to find +/-effect senses ", "that are not in the seed set . <S>" ], "relation": [ "null", "bg-goal", "elab-addition", "temporal", "elab-addition", "elab-addition", "ROOT", "elab-addition", "enablement", "elab-aspect", "result", "enablement", "elab-aspect", "manner-means", "joint", "manner-means", "evaluation", "progression", "elab-addition", "enablement", "enablement", "elab-addition" ] }
"D14-1125.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 ], "parent": [ -1, 2, 13, 2, 3, 2, 5, 5, 7, 8, 9, 10, 13, 0, 13, 14, 13, 16, 17, 18 ], "text": [ "ROOT", "Aspect-based opinion mining has attracted lots of attention today . <S>", "In this paper , we address the problem of product aspect rating prediction , ", "where we would like to extract the product aspects , ", "and predict aspect ratings simultaneously . <S>", "Topic models have been widely adapted ", "to jointly model aspects and sentiments , ", "but existing models may not do the prediction task well ", "due to their weakness in sentiment extraction . <S>", "The sentiment topics usually do not have clear correspondence to commonly used ratings , ", "and the model may fail to extract certain kinds of sentiments ", "due to skewed data . <S>", "To tackle this problem , ", "we propose a sentiment-aligned topic model ( SATM ) , ", "where we incorporate two types of external knowledge : ", "product-level overall rating distribution and word-level sentiment lexicon . <S>", "Experiments on real dataset demonstrate ", "that SATM is effective on product aspect rating prediction , ", "and it achieves better performance ", "compared to the existing approaches . <S>" ], "relation": [ "null", "elab-addition", "bg-goal", "elab-addition", "joint", "elab-addition", "enablement", "comparison", "cause", "elab-addition", "joint", "cause", "enablement", "ROOT", "elab-addition", "elab-enum_member", "evaluation", "attribution", "joint", "comparison" ] }
"D14-1126.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 ], "parent": [ -1, 0, 1, 2, 3, 7, 5, 1, 7, 8, 1, 7, 11, 1, 13, 14 ], "text": [ "ROOT", "We present a weakly supervised approach ", "for learning hashtags , hashtag patterns , and phrases ", "associated with five emotions : ", "AFFECTION , ANGER/RAGE , FEAR/ANXIETY , JOY , and SADNESS/DISAPPOINTMENT . <S>", "Starting with seed hashtags ", "to label an initial set of tweets , ", "we train emotion classifiers ", "and use them ", "to learn new emotion hashtags and hashtag patterns . <S>", "This process then repeats in a bootstrapping framework . <S>", "Emotion phrases are also extracted from the learned hashtags ", "and used to create phrase-based emotion classifiers . <S>", "We show ", "that the learned set of emotion indicators yields a substantial improve-ment in F-scores , ", "ranging from + % 5 to + % 18 over baseline classifiers . <S> " ], "relation": [ "null", "ROOT", "elab-addition", "elab-addition", "elab-enum_member", "elab-addition", "enablement", "elab-process_step", "joint", "enablement", "elab-process_step", "elab-addition", "joint", "evaluation", "attribution", "elab-example" ] }
"D14-1127.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ], "parent": [ -1, 0, 1, 2, 1, 4, 5, 1, 7, 8 ], "text": [ "ROOT", "We put forward the hypothesis ", "that high-accuracy sentiment analysis is only possible ", "if word senses with different polarity are accurately recognized . <S>", "We provide evidence for this hypothesis in a case study for the adjective \"hard\" ", "and propose contextually enhanced sentiment lexicons ", "that contain the information necessary for sentiment-relevant sense disambiguation . <S>", "An experimental evaluation demonstrates ", "that senses with different polarity can be distinguished well ", "using a combination of standard and novel features . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "condition", "elab-aspect", "joint", "elab-addition", "evaluation", "attribution", "manner-means" ] }
"D14-1128.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20 ], "parent": [ -1, 2, 0, 2, 7, 4, 5, 2, 7, 8, 11, 7, 11, 11, 2, 14, 14, 2, 17, 18, 19 ], "text": [ "ROOT", "Identifying parallel web pages from bilingual web sites is a crucial step of bilingual resource construction for cross-lingual information processing . <S>", "In this paper , we propose a link-based approach ", "to distinguish parallel web pages from bilingual web sites . <S>", "Compared with the existing methods , ", "which only employ the internal translation similarity ", "( such as content-based similarity and page structural similarity ) , ", "we hypothesize ", "that the external translation similarity is an effective feature ", "to identify parallel web pages . <S>", "Within a bilingual web site , web pages are interconnected by hyperlinks . <S>", "The basic idea of our method is that the translation similarity of two pages can be inferred from their neighbor pages , ", "which can be adopted as an important source of external similarity . <S>", "Thus , the translation similarity of page pairs will influence each other . <S>", "An iterative algorithm is developed ", "to estimate the external translation similarity and the final translation similarity . <S>", "Both internal and external similarity measures are combined in the iterative algorithm . <S>", "Experiments on six bilingual websites demonstrate ", "that our method is effective ", "and obtains significant improvement ( 6.2 % F-Score ) over the baseline ", "which only utilizes internal translation similarity . <S>" ], "relation": [ "null", "bg-general", "ROOT", "enablement", "comparison", "elab-addition", "elab-example", "elab-aspect", "attribution", "enablement", "elab-addition", "elab-addition", "elab-addition", "result", "elab-aspect", "enablement", "elab-addition", "evaluation", "attribution", "joint", "elab-addition" ] }
"D14-1129.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 ], "parent": [ -1, 4, 1, 2, 0, 4, 5, 4, 7, 4, 9, 9, 11, 4, 13 ], "text": [ "ROOT", "Analyses of computer aided translation typically focus on either frontend interfaces and human effort , or backend translation and machine learnability of corrections . <S>", "However , this distinction is artificial in practice ", "since the frontend and backend must work in concert . <S>", "We present the first holistic , quantitative evaluation of these issues ", "by contrasting two assistive modes : ", "post-editing and interactive machine translation ( MT ) . <S>", "We describe a new translator interface , extensive modifications to a phrase-based MT system , and a novel objective function ", "for re-tuning to human corrections . <S>", "Evaluation with professional bilingual translators shows ", "that post-edit is faster than interactive at the cost of translation quality for French-English and English-German . <S>", "However , re-tuning the MT system to interactive output leads to larger , statistically significant reductions in HTER ", "versus re-tuning to post-edit . <S>", "Analysis shows ", "that tuning directly to HTER results in fine-grained corrections to subsequent machine output . <S>" ], "relation": [ "null", "bg-goal", "contrast", "cause", "ROOT", "manner-means", "elab-enum_member", "elab-aspect", "elab-addition", "evaluation", "attribution", "contrast", "contrast", "evaluation", "attribution" ] }
"D14-1130.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8 ], "parent": [ -1, 2, 0, 2, 5, 2, 5, 2, 7 ], "text": [ "ROOT", "The combinatorial space of translation derivations in phrase-based statistical machine translation is given by the intersection between a translation lattice and a target language model . <S>", "We replace this in-tractable intersection by a tractable relaxation ", "which incorporates a low-order upperbound on the language model . <S>", "Exact optimisation is achieved through a coarse-to-fine strategy with connections to adaptive rejection sampling . <S>", "We perform exact optimisation with unpruned language models of order 3 to 5 ", "and show search-error curves for beam search and cube pruning on standard test sets . <S>", "This is the first work ", "to tractably tackle exact optimisation with language models of orders higher than 3 . <S>" ], "relation": [ "null", "bg-general", "ROOT", "elab-addition", "elab-addition", "elab-aspect", "joint", "evaluation", "elab-addition" ] }
"D14-1131.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ], "parent": [ -1, 4, 1, 1, 0, 4, 5, 4, 7, 7 ], "text": [ "ROOT", "Recent work by Cherry ( 2013 ) has shown ", "that directly optimizing phrase-based reordering models towards BLEU can lead to significant gains . <S>", "Their approach is limited to small training sets of a few thousand sentences and a similar number of sparse features . <S>", "We show ", "how the expected BLEU objective allows us to train a simple linear discriminative reordering model with millions of sparse features on hundreds of thousands of sentences ", "resulting in significant improvements . <S>", "A comparison to likelihood training demonstrates ", "that expected BLEU is vastly more effective . <S>", "Our best results improve a hierarchical lexicalized reordering baseline by up to 2.0 BLEU in a single-reference setting on a French-English WMT 2012 setup . <S>" ], "relation": [ "null", "bg-compare", "attribution", "elab-addition", "ROOT", "attribution", "result", "evaluation", "attribution", "exp-evidence" ] }
"D14-1132.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18 ], "parent": [ -1, 4, 1, 2, 0, 4, 5, 4, 7, 8, 4, 10, 11, 4, 13, 14, 4, 16, 16 ], "text": [ "ROOT", "Numerous works in Statistical Machine Translation ( SMT ) have attempted to identify better translation hypotheses ", "obtained by an initial decoding ", "using an improved , but more costly scoring function . <S>", "In this work , we introduce an approach ", "that takes the hypotheses ", "produced by a state-of-the-art , reranked phrase-based SMT system , ", "and explores new parts of the search space ", "by applying rewriting rules ", "selected on the basis of posterior phrase-level confidence . <S>", "In the medical domain , we obtain a 1.9 BLEU improvement over a reranked baseline ", "exploiting the same scoring function , ", "corresponding to a 5.4 BLEU improvement over the original Moses baseline . <S>", "We show ", "that if an indication of which phrases require rewriting is provided , ", "our automatic rewriting procedure yields an additional improvement of 1.5 BLEU . <S>", "Various analyses , ", "including a manual error analysis , ", "further illustrate the good performance and potential for improvement of our approach in spite of its simplicity . <S>" ], "relation": [ "null", "bg-compare", "elab-addition", "manner-means", "ROOT", "elab-addition", "elab-addition", "joint", "manner-means", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "evaluation", "attribution", "result", "evaluation", "elab-aspect", "same-unit" ] }
"D14-1133.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17 ], "parent": [ -1, 7, 1, 2, 1, 4, 5, 0, 7, 8, 8, 10, 11, 7, 13, 14, 15, 16 ], "text": [ "ROOT", "We present methods ", "to control the lexicon size ", "when learning a Combinatory Categorial Grammar semantic parser . <S>", "Existing methods incrementally expand the lexicon ", "by greedily adding entries , ", "considering a single training datapoint at a time . <S>", "We propose using corpus-level statistics for lexicon learning decisions . <S>", "We introduce voting ", "to globally consider adding entries to the lexicon , ", "and pruning ", "to remove entries ", "no longer required to explain the training data . <S>", "Our methods result in state-of-the-art performance on the task of executing sequences of natural language instructions , ", "achieving up to 25 % error reduction , ", "with lexicons ", "that are up to 70 % smaller ", "and are qualitatively less noisy . <S>" ], "relation": [ "null", "bg-compare", "enablement", "temporal", "contrast", "manner-means", "elab-addition", "ROOT", "elab-aspect", "enablement", "joint", "enablement", "elab-addition", "evaluation", "exp-evidence", "elab-addition", "elab-addition", "joint" ] }
"D14-1134.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 ], "parent": [ -1, 0, 1, 2, 1, 4, 4, 6, 7, 6, 1, 10, 11 ], "text": [ "ROOT", "In this paper , we demonstrate ", "that significant performance gains can be achieved in CCG semantic parsing ", "by introducing a linguistically motivated grammar induction scheme . <S>", "We present a new morpho-syntactic factored lexicon ", "that models systematic variations in morphology , syntax , and semantics across word classes . <S>", "The grammar uses domain-independent facts about the English language ", "to restrict the number of incorrect parses ", "that must be considered , ", "thereby enabling effective learning from less data . <S>", "Experiments in benchmark domains match previous models with one quarter of the data ", "and provide new state-of-the-art results with all available data , ", "including up to 45 % relative test-error reduction . <S>" ], "relation": [ "null", "ROOT", "attribution", "manner-means", "elab-aspect", "elab-addition", "elab-addition", "enablement", "elab-addition", "result", "evaluation", "joint", "exp-evidence" ] }
"D14-1135.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8 ], "parent": [ -1, 0, 1, 2, 1, 1, 5, 6, 7 ], "text": [ "ROOT", "We present a model for the automatic semantic analysis of requirements elicitation documents . <S>", "Our target semantic representation employs live sequence charts , a multi-modal visual language for scenario-based programming , ", "which can be directly translated into executable code . <S>", "The architecture we propose integrates sentence-level and discourse-level processing in a generative probabilistic framework for the analysis and disambiguation of individual sentences in context . <S>", "We show empirically ", "that the discourse-based model consistently outperforms the sentence-based model ", "when constructing a system ", "that reflects all the static ( entities , properties ) and dynamic ( behavioral scenarios ) requirements in the document . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "elab-addition", "elab-aspect", "evaluation", "attribution", "temporal", "elab-addition" ] }
"D14-1136.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 ], "parent": [ -1, 0, 1, 1, 3, 4, 1, 6, 6, 10, 1, 10, 10, 14, 12, 14, 12 ], "text": [ "ROOT", "We propose a novel model ", "for parsing natural language sentences into their formal semantic representations . <S>", "The model is able to perform integrated lexicon acquisition and semantic parsing , ", "mapping each atomic element in a complete semantic representation to a contiguous word sequence in the input sentence in a recursive manner , ", "where certain overlappings amongst such word sequences are allowed . <S>", "It defines distributions over the novel relaxed hybrid tree structures ", "which jointly represent both sentences and semantics . <S>", "Such structures allow tractable dynamic programming algorithms to be developed for efficient learning and decoding . <S>", "Trained under a discriminative setting , ", "our model is able to incorporate a rich set of features ", "where certain unbounded long-distance dependencies can be captured in a principled manner . <S>", "We demonstrate through experiments ", "that by exploiting a large collection of simple features , ", "our model is shown to be competitive to previous works ", "and achieves state-of-the-art performance on standard benchmark data across four different languages . <S>", "The system and code can be downloaded from http ://statnlp.org/research/sp/ . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "evaluation", "elab-addition", "exp-evidence", "manner-means", "attribution", "joint", "elab-addition" ] }
"D14-1137.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8 ], "parent": [ -1, 6, 1, 1, 3, 6, 0, 6, 7 ], "text": [ "ROOT", "The anchor words algorithm performs provably efficient topic model inference ", "by finding an approximate convex hull in a high-dimensional word co-occurrence space . <S>", "However , the existing greedy algorithm often selects poor anchor words , ", "reducing topic quality and interpretability . <S>", "Rather than finding an approximate convex hull in a high-dimensional space , ", "we propose to find an exact convex hull in a visualizable 2- or 3-dimensional space . <S>", "Such low-dimensional embeddings both improve topics ", "and clearly show users why the algorithm selects certain words . <S>" ], "relation": [ "null", "bg-compare", "manner-means", "contrast", "result", "contrast", "ROOT", "evaluation", "joint" ] }
"D14-1138.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 ], "parent": [ -1, 0, 1, 1, 1, 4, 1, 6, 6, 1, 9, 9 ], "text": [ "ROOT", "We generalize contrastive estimation in two ways ", "that permit adding more knowledge to unsupervised learning . <S>", "The first allows the modeler to specify not only the set of corrupted inputs for each observation , but also how bad each one is . <S>", "The second allows specifying structural preferences on the latent variable ", "used to explain the observations . <S>", "They require setting additional hyperparameters , ", "which can be problematic in unsupervised learning , ", "so we investigate new methods for unsupervised model selection and system combination . <S>", "We instantiate these ideas for part-of-speech induction ", "without tag dictionaries , ", "improving over contrastive estimation as well as strong benchmarks from the PASCAL 2012 shared task . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "elab-enum_member", "elab-enum_member", "elab-addition", "elab-addition", "elab-addition", "result", "evaluation", "elab-addition", "elab-addition" ] }
"D14-1139.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 ], "parent": [ -1, 0, 1, 2, 1, 4, 4, 6, 1, 8, 1, 1, 11 ], "text": [ "ROOT", "We introduce a reinforcement learning-based approach to simultaneous machine translation—producing a translation ", "while receiving input words— between languages with drastically different word orders : ", "from verb-final languages ( e.g. , German ) to verb-medial languages ( English ) . <S>", "In traditional machine translation , a translator must \"wait\" for source material to appear ", "before translation begins . <S>", "We remove this bottleneck ", "by predicting the final verb in advance . <S>", "We use reinforcement learning ", "to learn when to trust predictions about unseen , future portions of the sentence . <S>", "We also introduce an evaluation metric to measure expeditiousness and quality . <S>", "We show ", "that our new translation model outperforms batch and monotone translation strategies . <S>" ], "relation": [ "null", "ROOT", "temporal", "elab-enum_member", "bg-compare", "temporal", "elab-addition", "manner-means", "elab-aspect", "enablement", "elab-aspect", "evaluation", "attribution" ] }
"D14-1140.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 ], "parent": [ -1, 5, 3, 1, 3, 0, 5, 6, 5, 8, 5, 10, 11, 11, 10, 14 ], "text": [ "ROOT", "The task of unsupervised induction of probabilistic context-free grammars ( PCFGs ) has attracted a lot of attention in the field of computational linguistics . <S>", "Although it is a difficult task , ", "work in this area is still very much in demand ", "since it can contribute to the advancement of language parsing and modelling . <S>", "In this work , we describe a new algorithm for PCFG induction ", "based on a principled approach ", "and capable of inducing accurate yet compact artificial natural language grammars and typical context-free grammars . <S>", "Moreover , this algorithm can work on large grammars and datasets ", "and infers correctly even from small samples . <S>", "Our analysis shows ", "that the type of grammars ", "induced by our algorithm ", "are , in theory , capable of modelling natural language . <S>", "One of our experiments shows ", "that our algorithm can potentially outperform the state-of-the-art in unsupervised parsing on the WSJ10 corpus . <S>" ], "relation": [ "null", "bg-goal", "contrast", "elab-addition", "cause", "ROOT", "bg-general", "joint", "progression", "joint", "evaluation", "attribution", "elab-addition", "same-unit", "exp-evidence", "attribution" ] }
"D14-1141.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20 ], "parent": [ -1, 2, 0, 2, 2, 6, 2, 6, 7, 2, 9, 10, 9, 12, 13, 14, 9, 16, 16, 18, 19 ], "text": [ "ROOT", "A common approach in text mining tasks such as text categorization , authorship identification or plagiarism detection is to rely on features like words , part-of-speech tags , stems , or some other high-level linguistic features . <S>", "In this work , an approach ", "that uses character n-grams as features ", "is proposed for the task of native language identification . <S>", "Instead of doing standard feature selection , ", "the proposed approach combines several string kernels ", "using multiple kernel learning . <S>", "Kernel Ridge Regression and Kernel Discriminant Analysis are independently used in the learning stage . <S>", "The empirical results ", "obtained in all the experiments ", "conducted in this work ", "indicate ", "that the proposed approach achieves state of the art performance in native language identification , ", "reaching an accuracy ", "that is 1.7 % above the top scoring system of the 2013 NLI Shared Task . <S>", "Furthermore , the proposed approach has an important advantage ", "in that it is language independent and linguistic theory neutral. <S>", "In the cross-corpus experiment , the proposed approach shows ", "that it can also be topic independent , ", "improving the state of the art system by 32.3 % . <S>" ], "relation": [ "null", "bg-compare", "ROOT", "elab-addition", "same-unit", "contrast", "elab-addition", "manner-means", "elab-addition", "evaluation", "elab-addition", "elab-addition", "same-unit", "attribution", "elab-addition", "elab-addition", "progression", "elab-addition", "elab-addition", "attribution", "elab-addition" ] }
"D14-1142.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18 ], "parent": [ -1, 6, 1, 2, 1, 4, 0, 6, 7, 7, 9, 6, 13, 11, 13, 14, 6, 16, 17 ], "text": [ "ROOT", "Predicting vocabulary of second language learners is essential to support their language learning ; ", "however , because of the large size of language vocabularies , ", "we cannot collect information on the entire vocabulary . <S>", "For practical measurements , we need to sample a small portion of words from the entire vocabulary ", "and predict the rest of the words . <S>", "In this study , we propose a novel framework for this sampling method . <S>", "Current methods rely on simple heuristic techniques ", "involving inflexible manual tuning by educational experts . <S>", "We formalize these heuristic techniques as a graph-based non-interactive active learning method ", "as applied to a special graph . <S>", "We show ", "that by extending the graph , ", "we can support additional functionality ", "such as incorporating domain specificity ", "and sampling from multiple corpora . <S>", "In our experiments , we show ", "that our extended methods outperform other methods in terms of vocabulary prediction accuracy ", "when the number of samples is small . <S>" ], "relation": [ "null", "bg-goal", "contrast", "result", "elab-addition", "joint", "ROOT", "contrast", "elab-addition", "elab-addition", "elab-addition", "evaluation", "manner-means", "attribution", "elab-example", "joint", "evaluation", "attribution", "temporal" ] }
"D14-1143.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 ], "parent": [ -1, 5, 1, 1, 3, 0, 5, 8, 5, 8, 11, 5, 11, 5, 13, 14 ], "text": [ "ROOT", "Language transfer , the characteristic second language usage patterns ", "caused by native language interference , ", "is investigated by Second Language Acquisition ( SLA ) researchers ", "seeking to find overused and underused linguistic features . <S>", "In this paper we develop and present a methodology ", "for deriving ranked lists of such features . <S>", "Using very large learner data , ", "we show our method's ability to find relevant candidates ", "using sophisticated linguistic features . <S>", "To illustrate its applicability to SLA research , ", "we formulate plausible language transfer hypotheses ", "supported by current evidence . <S>", "This is the first work ", "to extend Native Language Identification to a broader linguistic interpretation of learner data ", "and address the automatic extraction of underused features on a pernative language basis . <S>" ], "relation": [ "null", "bg-goal", "cause", "same-unit", "elab-addition", "ROOT", "elab-addition", "manner-means", "elab-aspect", "manner-means", "enablement", "elab-aspect", "elab-addition", "evaluation", "enablement", "joint" ] }
"D14-1144.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 ], "parent": [ -1, 8, 1, 1, 3, 6, 1, 6, 0, 8, 8, 10 ], "text": [ "ROOT", "Languages spoken by immigrants change ", "due to contact with the local languages . <S>", "Capturing these changes is problematic for current language technologies , ", "which are typically developed for speakers of the standard dialect only . <S>", "Even when dialectal variants are available for such technologies , ", "we still need to predict ", "which dialect is being used . <S>", "In this study , we distinguish between the immigrant and the standard dialect of Turkish ", "by focusing on Light Verb Constructions . <S>", "We experiment with a number of grammatical and contextual features , ", "achieving over 84 % accuracy ( 56 % baseline ) . <S>" ], "relation": [ "null", "bg-goal", "cause", "elab-addition", "elab-addition", "condition", "elab-addition", "attribution", "ROOT", "manner-means", "evaluation", "elab-example" ] }
"D14-1145.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 ], "parent": [ -1, 3, 1, 0, 2, 4, 4, 3, 7, 8, 9, 12, 7 ], "text": [ "ROOT", "Readability is used to provide users with high-quality service in text recommendation or text visualization . <S>", "With the increasing use of hand-held devices , reading device is regarded as an important factor for readability . <S>", "Therefore , this paper investigates the relationship between readability and reading devices such as a smart phone , a tablet , and paper . <S>", "We suggest readability factors ", "that are strongly related with the readability of a specific device ", "by showing the correlations between various factors in each device and human-rated readability . <S>", "Our experimental results show ", "that each device has its own readability characteristics , ", "and thus different weights should be imposed on readability factors ", "according to the device type . <S>", "In order to prove the usefulness of the results , ", "we apply the device-dependent readability to news article recommendation . <S>" ], "relation": [ "null", "bg-goal", "elab-addition", "ROOT", "elab-addition", "elab-addition", "manner-means", "evaluation", "attribution", "joint", "cause", "enablement", "elab-addition" ] }
"D14-1146.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 ], "parent": [ -1, 0, 1, 2, 5, 1, 5, 5, 9, 5, 1, 10, 1, 12, 13 ], "text": [ "ROOT", "We propose a new Chinese abbreviation prediction method ", "which can incorporate rich local information ", "while generating the abbreviation globally . <S>", "Different to previous character tagging methods , ", "we introduce the minimum semantic unit , ", "which is more fine-grained than character but more coarse-grained than word , ", "to capture word level information in the sequence labeling framework . <S>", "To solve the \"character duplication\" problem in Chinese abbreviation prediction , ", "we also use a substring tagging strategy to generate local substring tagging candidates . <S>", "We use an integer linear programming ( ILP ) formulation with various constraints ", "to globally decode the final abbreviation from the generated candidates . <S>", "Experiments show ", "that our method outperforms the state-of-the-art systems , ", "without using any extra resource . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "temporal", "contrast", "elab-aspect", "elab-addition", "enablement", "enablement", "joint", "elab-aspect", "enablement", "evaluation", "attribution", "elab-addition" ] }
"D14-1147.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 ], "parent": [ -1, 8, 1, 1, 3, 3, 3, 1, 0, 8, 9, 8, 8, 12, 13, 8, 15 ], "text": [ "ROOT", "It has been shown ", "that news events influence the trends of stock price movements . <S>", "However , previous work on news-driven stock market prediction rely on shallow features ", "( such as bags-of-words , named entities and noun phrases ) , ", "which do not capture structured entity-relation information , ", "and hence cannot represent complete and exact events . <S>", "Recent advances in Open Information Extraction ( Open IE ) techniques enable the extraction of structured events from web-scale data . <S>", "We propose to adapt Open IE technology for event-based stock price movement prediction , ", "extracting structured events from large-scale public news ", "without manual efforts . <S>", "Both linear and nonlinear models are employed to empirically investigate the hidden and complex relationships between events and the stock market . <S>", "Largescale experiments show ", "that the accuracy of S&P 500 index prediction is 60 % , ", "and that of individual stock prediction can be over 70 % . <S>", "Our event-based system outperforms bags-of-words-based baselines , and previously reported systems ", "trained on S&P 500 stock historical data . <S>" ], "relation": [ "null", "bg-compare", "attribution", "contrast", "elab-example", "elab-addition", "joint", "elab-addition", "ROOT", "elab-addition", "elab-addition", "elab-addition", "evaluation", "attribution", "joint", "evaluation", "elab-addition" ] }
"D14-1148.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 ], "parent": [ -1, 3, 1, 0, 3, 3, 5, 6, 7, 5, 9, 3, 11, 11, 13, 14 ], "text": [ "ROOT", "Automatically identifying related specialist terms is a difficult and important task ", "required to understand the lexical structure of language . <S>", "This paper develops a corpus-based method of extracting coherent clusters of satellite terminology— terms on the edge of the lexicon — ", "using co-occurrence networks of unstructured text . <S>", "Term clusters are identified ", "by extracting communities in the co-occurrence graph , ", "after which the largest is discarded ", "and the remaining words are ranked by centrality within a community . <S>", "The method is tractable on large corpora , ", "requires no document structure and minimal normalization . <S>", "The results suggest ", "that the model is able to extract coherent groups of satellite terms in corpora with varying size , content and structure . <S>", "The findings also confirm ", "that language consists of a densely connected core ", "( observed in dictionaries ) and systematic , semantically coherent groups of terms at the edges of the lexicon . <S>" ], "relation": [ "null", "bg-general", "elab-addition", "ROOT", "manner-means", "elab-addition", "manner-means", "temporal", "joint", "elab-addition", "elab-addition", "evaluation", "attribution", "joint", "attribution", "joint" ] }
"D14-1149.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ], "parent": [ -1, 2, 5, 2, 2, 0, 5, 5, 7, 7, 9 ], "text": [ "ROOT", "Given the large amounts of online textual documents available these days , e.g. , news articles , weblogs , and scientific papers , ", "effective methods for extracting keyphrases , ", "which provide a high-level topic description of a document , ", "are greatly needed . <S>", "In this paper , we propose a supervised model for keyphrase extraction from research papers , ", "which are embedded in citation networks . <S>", "To this end , we design novel features ", "based on citation network information ", "and use them in conjunction with traditional features for keyphrase extraction ", "to obtain remarkable improvements in performance over strong baselines . <S>" ], "relation": [ "null", "condition", "bg-goal", "elab-addition", "same-unit", "ROOT", "elab-addition", "elab-addition", "bg-general", "joint", "enablement" ] }
"D14-1150.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7 ], "parent": [ -1, 0, 1, 1, 1, 4, 7, 4 ], "text": [ "ROOT", "We propose to use coreference chains ", "extracted from a large corpus as a resource for semantic tasks . <S>", "We extract three million coreference chains and train word embeddings on them . <S>", "Then , we compare these embeddings to word vectors ", "derived from raw text data ", "and show ", "that coreference-based word embeddings improve F1 on the task of antonym classification by up to .09 . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "elab-addition", "evaluation", "elab-addition", "attribution", "progression" ] }
"D14-1151.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7 ], "parent": [ -1, 0, 1, 4, 1, 1, 7, 5 ], "text": [ "ROOT", "This paper proposes to apply the continuous vector representations of words ", "for discovering keywords from a financial sentiment lexicon . <S>", "In order to capture more keywords , ", "we also incorporate syntactic information into the Continuous Bag-of-Words ( CBOW ) model . <S>", "Experimental results on a task of financial risk prediction ", "using the discovered keywords demonstrate ", "that the proposed approach is good at predicting financial risk . <S>" ], "relation": [ "null", "ROOT", "enablement", "enablement", "elab-addition", "evaluation", "attribution", "same-unit" ] }
"D14-1152.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9 ], "parent": [ -1, 3, 1, 4, 0, 4, 5, 4, 7, 8 ], "text": [ "ROOT", "When it is not possible to compare the suspicious document to the source document ( s ) ", "plagiarism has been committed from , ", "the evidence of plagiarism has to be looked for intrinsically in the document itself . <S>", "In this paper , we introduce a novel languageindependent intrinsic plagiarism detection method ", "which is based on a new text representation ", "that we called n-gram classes . <S>", "The proposed method was evaluated on three publicly available standard corpora . <S>", "The obtained results are comparable to the ones ", "obtained by the best state-of-the-art methods . <S>" ], "relation": [ "null", "condition", "elab-addition", "bg-goal", "ROOT", "elab-addition", "elab-addition", "evaluation", "elab-addition", "elab-addition" ] }
"D14-1153.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 ], "parent": [ -1, 2, 3, 5, 5, 0, 5, 6, 9, 10, 5, 12, 5 ], "text": [ "ROOT", "Several recent papers on Arabic dialect identification have hinted ", "that using a word unigram model is sufficient and effective for the task . <S>", "However , most previous work was done on a standard fairly homogeneous dataset of dialectal user comments . <S>", "In this paper , we show ", "that training on the standard dataset does not generalize , ", "because a unigram model may be tuned to topics in the comments ", "and does not capture the distinguishing features of dialects . <S>", "We show ", "that effective dialect identification requires ", "that we account for the distinguishing lexical , morphological , and phonological phenomena of dialects . <S>", "We show ", "that accounting for such can improve dialect detection accuracy by nearly 10 % absolute . <S>" ], "relation": [ "null", "attribution", "contrast", "bg-compare", "attribution", "ROOT", "cause", "joint", "attribution", "attribution", "elab-addition", "attribution", "evaluation" ] }
"D14-1154.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7 ], "parent": [ -1, 0, 1, 4, 1, 6, 1, 6 ], "text": [ "ROOT", "In this paper , we explore the use of keyboard strokes as a means ", "to access the real-time writing process of online authors , analogously to prosody in speech analysis , in the context of deception detection . <S>", "We show ", "that differences in keystroke patterns like editing maneuvers and duration of pauses can help distinguish between truthful and deceptive writing . <S>", "Empirical results show ", "that incorporating keystroke-based features lead to improved performance in deception detection in two different domains : ", "online reviews and essays . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "attribution", "elab-addition", "attribution", "evaluation", "elab-enum_member" ] }
"D14-1155.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 ], "parent": [ -1, 8, 1, 1, 1, 4, 5, 6, 0, 8, 9, 10, 9, 12, 12, 9, 8 ], "text": [ "ROOT", "Statistical language modeling ( LM ) ", "that purports to quantify the acceptability of a given piece of text ", "has long been an interesting yet challenging research area . <S>", "In particular , language modeling for information retrieval ( IR ) has enjoyed remarkable empirical success ; ", "one emerging stream of the LM approach for IR is to employ the pseudo-relevance feedback process ", "to enhance the representation of an input query ", "so as to improve retrieval effectiveness . <S>", "This paper presents a continuation of such a general line of research ", "and the main contribution is threefold . <S>", "First , we propose a principled framework ", "which can unify the relationships among several widely-used query modeling formulations . <S>", "Second , on top of the successfully developed framework , we propose an extended query modeling formulation ", "by incorporating critical query-specific information cues ", "to guide the model estimation . <S>", "Third , we further adopt and formalize such a framework to the speech recognition and summarization tasks . <S>", "A series of empirical experiments reveal the feasibility of such an LM framework and the performance merits of the deduced models on these two tasks . <S>" ], "relation": [ "null", "bg-goal", "elab-addition", "same-unit", "elab-addition", "elab-addition", "enablement", "enablement", "ROOT", "joint", "elab-process_step", "elab-addition", "elab-process_step", "manner-means", "enablement", "elab-process_step", "evaluation" ] }
"D14-1156.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7 ], "parent": [ -1, 0, 1, 4, 1, 4, 7, 1 ], "text": [ "ROOT", "We study the topic dynamics of interactions in political debates ", "using the 2012 Republican presidential primary debates as data . <S>", "We show ", "that the tendency of candidates to shift topics changes over the course of the election campaign , ", "and that it is correlated with their relative power . <S>", "We also show ", "that our topic shift features help predict candidates' relative rankings . <S>" ], "relation": [ "null", "ROOT", "manner-means", "attribution", "elab-addition", "joint", "attribution", "evaluation" ] }
"D14-1157.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7 ], "parent": [ -1, 0, 1, 2, 1, 4, 1, 6 ], "text": [ "ROOT", "We present power low rank ensembles ( PLRE ) , a flexible framework for n-gram language modeling ", "where ensembles of low rank matrices and tensors are used ", "to obtain smoothed probability estimates of words in context . <S>", "Our method can be understood as a generalization of n-gram modeling to non-integer n , ", "and includes standard techniques such as absolute discounting and Kneser-Ney smoothing as special cases . <S>", "PLRE training is efficient ", "and our approach outperforms state-of-the-art modified Kneser Ney baselines in terms of perplexity on large corpora as well as on BLEU score in a downstream machine translation task . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "enablement", "elab-addition", "joint", "evaluation", "joint" ] }
"D14-1158.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17 ], "parent": [ -1, 3, 1, 4, 0, 4, 4, 6, 6, 8, 11, 8, 11, 8, 13, 16, 4, 16 ], "text": [ "ROOT", "Machine reading calls for programs ", "that read and understand text , ", "but most current work only attempts to extract facts from redundant web-scale corpora . <S>", "In this paper , we focus on a new reading comprehension task ", "that requires complex reasoning over a single document . <S>", "The input is a paragraph ", "describing a biological process , ", "and the goal is to answer questions ", "that require an understanding of the relations between entities and events in the process . <S>", "To answer the questions , ", "we first predict a rich structure ", "representing the process in the paragraph . <S>", "Then , we map the question to a formal query , ", "which is executed against the predicted structure . <S>", "We demonstrate ", "that answering questions via predicted structures substantially improves accuracy over baselines ", "that use shallower representations . <S>" ], "relation": [ "null", "contrast", "elab-addition", "bg-compare", "ROOT", "elab-addition", "elab-addition", "elab-addition", "progression", "elab-addition", "enablement", "elab-process_step", "elab-addition", "elab-process_step", "elab-addition", "attribution", "evaluation", "elab-addition" ] }
"D14-1159.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22 ], "parent": [ -1, 9, 1, 1, 1, 1, 5, 5, 7, 11, 9, 0, 11, 14, 11, 14, 11, 16, 19, 16, 19, 19, 18 ], "text": [ "ROOT", "Connecting words with senses , namely , sight , hearing , taste , smell and touch , ", "to comprehend the sensorial information in language ", "is a straightforward task for humans ", "by using commonsense knowledge . <S>", "With this in mind , a lexicon ", "associating words with senses ", "would be crucial for the computational tasks ", "aiming at interpretation of language . <S>", "However , to the best of our knowledge , there is no systematic attempt in the literature ", "to build such a resource . <S>", "In this paper , we present a sensorial lexicon ", "that associates English words with senses . <S>", "To obtain this resource , ", "we apply a computational method ", "based on bootstrapping and corpus statistics . <S>", "The quality of the resulting lexicon is evaluated with a gold standard ", "created via crowdsourcing . <S>", "The results show ", "that a simple classifier ", "relying on the lexicon ", "outperforms two baselines on a sensory classification task , both at word and sentence level , ", "and confirm the soundness of the proposed approach for the construction of the lexicon and the usefulness of the resource for computational applications . <S>" ], "relation": [ "null", "contrast", "enablement", "same-unit", "manner-means", "elab-addition", "elab-addition", "same-unit", "elab-addition", "bg-goal", "elab-addition", "ROOT", "elab-addition", "enablement", "elab-addition", "bg-general", "evaluation", "elab-addition", "attribution", "elab-addition", "elab-addition", "same-unit", "joint" ] }
"D14-1160.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 ], "parent": [ -1, 3, 1, 4, 8, 4, 4, 6, 0, 8, 9, 10 ], "text": [ "ROOT", "Statistical machine translation is quite robust ", "when it comes to the choice of input representation . <S>", "It only requires consistency between training and testing . <S>", "As a result , there is a wide range of possible preprocessing choices for data ", "used in statistical machine translation . <S>", "This is even more so for morphologically rich languages ", "such as Arabic . <S>", "In this paper , we study the effect of different word-level preprocessing schemes for Arabic on the quality of phrase-based statistical machine translation . <S>", "We also present and evaluate different methods ", "for combining preprocessing schemes ", "resulting in improved translation quality . <S> " ], "relation": [ "null", "contrast", "temporal", "result", "bg-goal", "elab-addition", "elab-addition", "elab-example", "ROOT", "elab-addition", "elab-addition", "cause" ] }
"P06-1001.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 ], "parent": [ -1, 0, 1, 1, 3, 4, 4, 8, 1, 8, 9, 8 ], "text": [ "ROOT", "This paper presents an extensive evaluation of five different alignments ", "and investigates their impact on the corresponding MT system output . <S>", "We introduce new measures for intrinsic evaluations ", "and examine the distribution of phrases and untranslated words ", "during decoding ", "to identify which characteristics of different alignments affect translation . <S>", "We show ", "that precision-oriented alignments yield better MT output ", "( translating more words ", "and using longer phrases ) ", "than recalloriented alignments . <S>" ], "relation": [ "null", "ROOT", "joint", "elab-addition", "progression", "temporal", "enablement", "attribution", "evaluation", "elab-definition", "joint", "comparison" ] }
"P06-1002.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 ], "parent": [ -1, 0, 1, 2, 2, 6, 1, 6, 7, 8, 8, 10, 11, 14, 1 ], "text": [ "ROOT", "We present a method for unsupervised topic modelling ", "which adapts methods ", "used in document classification ( Blei et al. , 2003 ; Griffiths and Steyvers , 2004 ) ", "to unsegmented multi-party discourse transcripts . <S>", "We show ", "how Bayesian inference in this generative model can be used ", "to simultaneously address the problems of topic segmentation and topic identification : ", "automatically segmenting multi-party meetings into topically coherent segments with performance ", "which compares well with previous unsupervised segmentation-only methods ( Galley et al. , 2003 ) ", "while simultaneously extracting topics ", "which rate highly ", "when assessed for coherence by human judges . <S>", "We also show ", "that this method appears robust in the face of off-topic dialogue and speech recognition errors . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "elab-addition", "enablement", "attribution", "elab-addition", "enablement", "elab-definition", "comparison", "joint", "elab-addition", "condition", "attribution", "evaluation" ] }
"P06-1003.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8 ], "parent": [ -1, 2, 0, 2, 2, 4, 7, 2, 7 ], "text": [ "ROOT", "We consider the task of unsupervised lecture segmentation . <S>", "We formalize segmentation as a graph-partitioning task ", "that optimizes the normalized cut criterion . <S>", "Our approach moves beyond localized comparisons ", "and takes into account longrange cohesion dependencies . <S>", "Our results demonstrate ", "that global analysis improves the segmentation accuracy ", "and is robust in the presence of speech recognition errors . <S>" ], "relation": [ "null", "bg-goal", "ROOT", "elab-addition", "elab-addition", "joint", "attribution", "evaluation", "joint" ] }
"P06-1004.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ], "parent": [ -1, 0, 1, 4, 1, 4, 5, 6, 6, 10, 1 ], "text": [ "ROOT", "We present an approach to pronoun resolution ", "based on syntactic paths . <S>", "Through a simple bootstrapping procedure , ", "we learn the likelihood of coreference between a pronoun and a candidate noun ", "based on the path in the parse tree between the two entities . <S>", "This path information enables us to handle previously challenging resolution instances , ", "and also robustly addresses traditional syntactic coreference constraints . <S>", "Highly coreferent paths also allow mining of precise probabilistic gender/number information . <S>", "We combine statistical knowledge with well known features in a Support Vector Machine pronoun resolution classifier . <S>", "Significant gains in performance are observed on several datasets . <S> " ], "relation": [ "null", "ROOT", "bg-general", "manner-means", "elab-addition", "bg-general", "elab-addition", "joint", "elab-addition", "result", "evaluation" ] }
"P06-1005.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 ], "parent": [ -1, 4, 4, 2, 0, 4, 7, 4, 7, 7, 9, 12, 4, 12 ], "text": [ "ROOT", "Syntactic knowledge is important for pronoun resolution . <S>", "Traditionally , the syntactic information for pronoun resolution is represented in terms of features ", "that have to be selected and defined heuristically . <S>", "In the paper , we propose a kernel-based method ", "that can automatically mine the syntactic information from the parse trees for pronoun resolution . <S>", "Specifically , we utilize the parse trees directly as a structured feature ", "and apply kernel functions to this feature , as well as other normal features , ", "to learn the resolution classifier . <S>", "In this way , our approach avoids the efforts ", "of decoding the parse trees into the set of flat syntactic features . <S>", "The experimental results show ", "that our approach can bring significant performance improvement ", "and is reliably effective for the pronoun resolution task . <S> " ], "relation": [ "null", "bg-goal", "bg-compare", "elab-addition", "ROOT", "elab-addition", "progression", "elab-addition", "enablement", "elab-addition", "elab-addition", "attribution", "evaluation", "joint" ] }
"P06-1006.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24 ], "parent": [ -1, 5, 1, 2, 5, 0, 5, 6, 5, 8, 5, 10, 11, 12, 10, 14, 10, 5, 17, 17, 19, 24, 23, 24, 5 ], "text": [ "ROOT", "It has previously been assumed in the psycholinguistic literature ", "that finite-state models of language are crucially limited in their explanatory power by the locality of the probability distribution and the narrow scope of information ", "used by the model . <S>", "We show ", "that a simple computational model ( a bigram part-of-speech tagger ", "based on the design ", "used by Corley and Crocker ( 2000 ) ) ", "makes correct predictions on processing difficulty ", "observed in a wide range of empirical sentence processing data . <S>", "We use two modes of evaluation : ", "one ", "that relies on comparison with a control sentence , ", "paralleling practice in human studies ; ", "another ", "that measures probability drop in the disambiguating region of the sentence . <S>", "Both are surprisingly good indicators of the processing difficulty of garden-path sentences . <S>", "The sentences tested are drawn from published sources ", "and systematically explore five different types of ambiguity : ", "previous studies have been narrower in scope ", "and smaller in scale . <S>", "We do not deny the limitations of finite-state models , ", "but argue ", "that our results show ", "that their usefulness has been underestimated . <S>" ], "relation": [ "null", "bg-goal", "elab-addition", "elab-addition", "attribution", "ROOT", "bg-general", "elab-addition", "same-unit", "elab-addition", "elab-addition", "elab-enum_member", "elab-addition", "comparison", "elab-enum_member", "elab-addition", "summary", "elab-addition", "joint", "comparison", "joint", "contrast", "attribution", "attribution", "evaluation" ] }
"P06-1007.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 ], "parent": [ -1, 0, 1, 1, 3, 3, 5, 1, 7, 7, 9, 12, 1 ], "text": [ "ROOT", "We propose in this paper a method ", "for quantifying sentence grammaticality . <S>", "The approach ", "based on Property Grammars , a constraint-based syntactic formalism , ", "makes it possible to evaluate a grammaticality index for any kind of sentence , ", "including ill-formed ones . <S>", "We compare on a sample of sentences the grammaticality indices ", "obtained from PG formalism ", "and the acceptability judgements ", "measured by means of a psycholinguistic analysis . <S>", "The results show ", "that the derived grammaticality index is a fairly good tracer of acceptability scores . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "elab-addition", "bg-general", "same-unit", "elab-example", "elab-addition", "elab-addition", "joint", "elab-addition", "attribution", "evaluation" ] }
"P06-1008.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 ], "parent": [ -1, 0, 1, 1, 3, 6, 3, 3, 7, 11, 11, 3, 13, 11 ], "text": [ "ROOT", "In this paper we present a novel approach ", "for inducing word alignments from sentence aligned data . <S>", "We use a Conditional Random Field ( CRF ) , a discriminative model , ", "which is estimated on a small supervised training set . <S>", "The CRF is conditioned on both the source and target texts , ", "and thus allows for the use of arbitrary and overlapping features over these data . <S>", "Moreover , the CRF has efficient training and decoding processes ", "which both find globally optimal solutions . <S>", "We apply this alignment model to both French-English and Romanian-English language pairs . <S>", "We show ", "how a large number of highly predictive features can be easily incorporated into the CRF , ", "and demonstrate ", "that even with only a few hundred word-aligned training sentences , our model improves over the current state-ofthe-art with alignment error rates of 5.29 and 25.8 for the two tasks respectively . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "elab-addition", "elab-addition", "progression", "elab-addition", "elab-addition", "elab-addition", "result", "attribution", "evaluation", "attribution", "joint" ] }
"P06-1009.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 ], "parent": [ -1, 0, 1, 2, 3, 4, 1, 6, 7, 6, 9, 13, 13, 6, 16, 14, 1 ], "text": [ "ROOT", "In this paper we investigate ChineseEnglish name transliteration ", "using comparable corpora , corpora ", "where texts in the two languages deal in some of the same topics — ", "and therefore share references to named entities — ", "but are not translations of each other . <S>", "We present two distinct methods for transliteration , ", "one approach ", "using phonetic transliteration , ", "and the second ", "using the temporal distribution of candidate pairs . <S>", "Each of these approaches works quite well , ", "but by combining the approaches ", "one can achieve even better results . <S>", "We then propose a novel score propagation method ", "that utilizes the co-occurrence of transliteration pairs within document pairs . <S>", "This propagation method achieves further improvement over the best results from the previous step . <S>" ], "relation": [ "null", "ROOT", "manner-means", "elab-addition", "progression", "contrast", "elab-addition", "elab-enum_member", "manner-means", "elab-enum_member", "manner-means", "contrast", "manner-means", "summary", "result", "elab-addition", "evaluation" ] }
"P06-1010.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 ], "parent": [ -1, 0, 1, 6, 3, 6, 1, 6, 1, 8, 1, 12, 10 ], "text": [ "ROOT", "We present a novel method ", "for extracting parallel sub-sentential fragments from comparable , non-parallel bilingual corpora . <S>", "By analyzing potentially similar sentence pairs ", "using a signal processinginspired approach , ", "we detect ", "which segments of the source sentence are translated into segments in the target sentence , ", "and which are not . <S>", "This method enables us to extract useful machine translation training data even from very non-parallel corpora , ", "which contain no parallel sentence pairs . <S>", "We evaluate the quality of the extracted data ", "by showing ", "that it improves the performance of a state-of-the-art statistical machine translation system . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "manner-means", "manner-means", "attribution", "elab-addition", "joint", "elab-addition", "elab-addition", "evaluation", "attribution", "manner-means" ] }
"P06-1011.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 ], "parent": [ -1, 6, 1, 1, 1, 4, 0, 6, 7, 7, 9, 9, 13, 6, 13 ], "text": [ "ROOT", "Instances of a word ", "drawn from different domains ", "may have different sense priors ( the proportions of the different senses of a word ) . <S>", "This in turn affects the accuracy of word sense disambiguation ( WSD ) systems ", "trained and applied on different domains . <S>", "This paper presents a method ", "to estimate the sense priors of words ", "drawn from a new domain , ", "and highlights the importance ", "of using well calibrated probabilities ", "when performing these estimations . <S>", "By using well calibrated probabilities , ", "we are able to estimate the sense priors effectively ", "to achieve significant improvements in WSD accuracy . <S>" ], "relation": [ "null", "bg-goal", "elab-addition", "same-unit", "elab-addition", "elab-addition", "ROOT", "elab-addition", "elab-addition", "joint", "elab-addition", "condition", "manner-means", "evaluation", "enablement" ] }
"P06-1012.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 ], "parent": [ -1, 3, 1, 0, 3, 3, 5, 3, 7, 10, 7, 10 ], "text": [ "ROOT", "Combination methods are an effective way ", "of improving system performance . <S>", "This paper examines the benefits of system combination for unsupervised WSD . <S>", "We investigate several voting- and arbiterbased combination strategies over a diverse pool of unsupervised WSD systems . <S>", "Our combination methods rely on predominant senses ", "which are derived automatically from raw text . <S>", "Experiments ", "using the SemCor and Senseval-3 data sets ", "demonstrate ", "that our ensembles yield signifi-cantly better results ", "when compared with state-of-the-art . <S>" ], "relation": [ "null", "bg-goal", "elab-addition", "ROOT", "elab-addition", "elab-addition", "elab-addition", "evaluation", "elab-addition", "attribution", "same-unit", "comparison" ] }
"P06-1013.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8 ], "parent": [ -1, 2, 0, 2, 2, 4, 4, 2, 7 ], "text": [ "ROOT", "Fine-grained sense distinctions are one of the major obstacles to successful Word Sense Disambiguation . <S>", "In this paper , we present a method ", "for reducing the granularity of the WordNet sense inventory ", "based on the mapping to a manually crafted dictionary ", "encoding sense hierarchies , ", "namely the Oxford Dictionary of English . <S>", "We assess the quality of the mapping and the induced clustering , ", "and evaluate the performance of coarse WSD systems in the Senseval-3 English all-words task . <S>" ], "relation": [ "null", "bg-goal", "ROOT", "elab-addition", "bg-general", "elab-addition", "elab-addition", "evaluation", "joint" ] }
"P06-1014.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 ], "parent": [ -1, 0, 1, 1, 3, 4, 5, 3, 7, 1, 9, 12, 9 ], "text": [ "ROOT", "In this paper , we present Espresso , a weakly-supervised , general-purpose , and accurate algorithm ", "for harvesting semantic relations . <S>", "The main contributions are : ", "i ) a method for exploiting generic patterns", "by filtering incorrect instances ", "using the Web ; ", "and ii ) a principled measure of pattern and instance reliability ", "enabling the filtering algorithm . <S>", "We present an empirical comparison of Espresso with various state of the art systems , on different size and genre corpora , ", "on extracting various general and specific relations . <S>", "Experimental results show ", "that our exploitation of generic patterns substantially increases system recall with small effect on overall precision . <S> " ], "relation": [ "null", "ROOT", "enablement", "elab-addition", "elab-enum_member", "manner-means", "manner-means", "elab-enum_member", "elab-addition", "elab-addition", "elab-addition", "attribution", "evaluation" ] }
"P06-1015.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18 ], "parent": [ -1, 0, 1, 2, 7, 4, 5, 1, 7, 8, 11, 7, 7, 12, 15, 1, 17, 1, 17 ], "text": [ "ROOT", "This paper proposes a novel hierarchical learning strategy ", "to deal with the data sparseness problem in relation extraction ", "by modeling the commonality among related classes . <S>", "For each class in the hierarchy ", "either manually predefined ", "or automatically clustered , ", "a linear discriminative function is determined in a topdown way ", "using a perceptron algorithm with the lower-level weight vector ", "derived from the upper-level weight vector . <S>", "As the upper-level class normally has much more positive training examples than the lower-level class , ", "the corresponding linear discriminative function can be determined more reliably . <S>", "The upperlevel discriminative function then can effectively guide the discriminative function learning in the lower-level , ", "which otherwise might suffer from limited training data . <S>", "Evaluation on the ACE RDC 2003 corpus shows ", "that the hierarchical strategy much improves the performance by 5.6 and 5.1 in F-measure on least- and medium- frequent relations respectively . <S>", "It also shows ", "that our system outperforms the previous best-reported system by 2.7 in F-measure on the 24 subtypes ", "using the same feature set . <S>" ], "relation": [ "null", "ROOT", "enablement", "manner-means", "bg-general", "elab-addition", "joint", "elab-addition", "manner-means", "elab-addition", "exp-reason", "elab-addition", "elab-addition", "elab-addition", "attribution", "evaluation", "attribution", "evaluation", "manner-means" ] }
"P06-1016.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 ], "parent": [ -1, 2, 0, 2, 3, 4, 5, 5, 9, 2, 9, 9 ], "text": [ "ROOT", "Shortage of manually labeled data is an obstacle to supervised relation extraction methods . <S>", "In this paper we investigate a graph based semi-supervised learning algorithm , a label propagation ( LP ) algorithm , for relation extraction . <S>", "It represents labeled and unlabeled examples and their distances as the nodes and the weights of edges of a graph , ", "and tries to obtain a labeling function ", "to satisfy two constraints : ", "1 ) it should be fixed on the labeled nodes , ", "2 ) it should be smooth on the whole graph . <S>", "Experiment results on the ACE corpus showed ", "that this LP algorithm achieves better performance than SVM ", "when only very few labeled examples are available , ", "and it also performs better than bootstrapping for the relation extraction task . <S>" ], "relation": [ "null", "bg-goal", "ROOT", "elab-addition", "joint", "enablement", "elab-enum_member", "elab-enum_member", "attribution", "evaluation", "condition", "joint" ] }
"P06-1017.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6 ], "parent": [ -1, 0, 1, 1, 1, 4, 5 ], "text": [ "ROOT", "This paper proposes a generic mathematical formalism for the combination of various structures : ", "strings , trees , dags , graphs and products of them . <S>", "The polarization of the objects of the elementary structures controls the saturation of the final structure . <S>", "This formalism is both elementary and powerful enough ", "to strongly simulate many grammar formalisms , ", "such as rewriting systems , dependency grammars , TAG , HPSG and LFG . <S>" ], "relation": [ "null", "ROOT", "elab-enum_member", "elab-addition", "evaluation", "enablement", "elab-example" ] }
"P06-1018.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 ], "parent": [ -1, 0, 3, 1, 6, 6, 1, 1, 7, 10, 1, 10 ], "text": [ "ROOT", "This work provides the essential foundations for modular construction of ( typed ) unification grammars for natural languages . <S>", "Much of the information in such grammars is encoded in the signature , ", "and hence the key is facilitating a modularized development of type signatures . <S>", "We introduce a definition of signature modules ", "and show ", "how two modules combine . <S>", "Our definitions are motivated by the actual needs of grammar developers ", "obtained through a careful examination of large scale grammars . <S>", "We show ", "that our definitions meet these needs ", "by conforming to a detailed set of desiderata . <S>" ], "relation": [ "null", "ROOT", "progression", "elab-addition", "progression", "attribution", "elab-addition", "elab-addition", "elab-addition", "attribution", "evaluation", "manner-means" ] }
"P06-1019.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 ], "parent": [ -1, 0, 1, 2, 1, 4, 4, 6, 9, 1, 9, 10, 10, 1, 13, 14 ], "text": [ "ROOT", "This paper investigates the use of sublexical units as a solution ", "to handling the complex morphology with productive derivational processes , ", "in the development of a lexical functional grammar for Turkish . <S>", "Such sublexical units make it possible to expose the internal structure of words with multiple derivations to the grammar rules in a uniform manner . <S>", "This in turn leads to more succinct and manageable rules . <S>", "Further , the semantics of the derivations can also be systematically reflected in a compositional way ", "by constructing PRED values on the fly . <S>", "We illustrate ", "how we use sublexical units ", "for handling simple productive derivational morphology and more interesting cases ", "such as causativization , etc. , ", "which change verb valency . <S>", "Our priority is to handle several linguistic phenomena ", "in order to observe the effects of our approach on both the c-structure and the f-structure representation , and grammar writing , ", "leaving the coverage and evaluation issues aside for the moment . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "manner-means", "attribution", "elab-addition", "enablement", "elab-example", "elab-addition", "elab-addition", "enablement", "elab-addition" ] }
"P06-1020.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 ], "parent": [ -1, 0, 1, 1, 1, 4, 7, 5, 1, 8, 9, 1, 11, 12, 15, 1 ], "text": [ "ROOT", "A grammatical method ", "of combining two kinds of speech repair cues ", "is presented . <S>", "One cue , prosodic disjuncture , is detected by a decision tree-based ensemble classifier ", "that uses acoustic cues ", "to identify ", "where normal prosody seems to be interrupted ( Lickley , 1996 ) . <S>", "The other cue , syntactic parallelism , codifies the expectation ", "that repairs continue a syntactic category ", "that was left unfinished in the reparandum ( Levelt , 1983 ) . <S>", "The two cues are combined in a Treebank PCFG ", "whose states are split ", "using a few simple tree transformations . <S>", "Parsing performance on the Switchboard and Fisher corpora suggests ", "that these two cues help to locate speech repairs in a synergistic way . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "same-unit", "elab-aspect", "elab-addition", "attribution", "enablement", "elab-aspect", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "manner-means", "attribution", "evaluation" ] }
"P06-1021.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 ], "parent": [ -1, 6, 1, 4, 1, 4, 0, 6, 6, 8, 9, 10, 8, 12, 16, 14, 6 ], "text": [ "ROOT", "Spoken monologues feature greater sentence length and structural complexity ", "than do spoken dialogues . <S>", "To achieve high parsing performance for spoken monologues , ", "it could prove effective to simplify the structure ", "by dividing a sentence into suitable language units . <S>", "This paper proposes a method for dependency parsing of Japanese monologues ", "based on sentence segmentation . <S>", "In this method , the dependency parsing is executed in two stages : at the clause level and the sentence level . <S>", "First , the dependencies within a clause are identified ", "by dividing a sentence into clauses ", "and executing stochastic dependency parsing for each clause . <S>", "Next , the dependencies over clause boundaries are identified stochastically , ", "and the dependency structure of the entire sentence is thus completed . <S>", "An experiment ", "using a spoken monologue corpus ", "shows this method to be effective for efficient dependency parsing of Japanese monologue sentences . <S>" ], "relation": [ "null", "bg-goal", "comparison", "enablement", "elab-addition", "manner-means", "ROOT", "bg-general", "elab-addition", "elab-process_step", "manner-means", "joint", "elab-process_step", "joint", "attribution", "manner-means", "evaluation" ] }
"P06-1022.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 ], "parent": [ -1, 0, 1, 2, 1, 4, 1, 6, 1, 8, 9, 12, 1 ], "text": [ "ROOT", "This paper describes a parser ", "which generates parse trees with empty elements ", "in which traces and fillers are co-indexed . <S>", "The parser is an unlexicalized PCFG parser ", "which is guaranteed to return the most probable parse . <S>", "The grammar is extracted from a version of the PENN treebank ", "which was automatically annotated with features in the style of Klein and Manning ( 2003 ) . <S>", "The annotation includes GPSG-style slash features ", "which link traces and fillers , and other features ", "which improve the general parsing accuracy . <S>", "In an evaluation on the PENN treebank ( Marcus et al. , 1993 ) , the parser outperformed other unlexicalized PCFG parsers in terms of labeled bracketing fscore . <S>", "Its results for the empty category prediction task and the trace-filler coindexation task exceed all previously reported results with 84.1 % and 77.4 % fscore , respectively <S>" ], "relation": [ "null", "ROOT", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "result", "evaluation" ] }
"P06-1023.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18 ], "parent": [ -1, 0, 1, 2, 2, 4, 5, 6, 6, 10, 1, 1, 11, 11, 15, 1, 15, 18, 1 ], "text": [ "ROOT", "We explore the use of restricted dialogue contexts in reinforcement learning ( RL ) of effective dialogue strategies for information seeking spoken dialogue systems ( e.g. COMMUNICATOR ( Walker et al. , 2001 ) ) . <S>", "The contexts ", "we use ", "are richer than previous research in this area , e.g. ( Levin and Pieraccini , 1997 ; Scheffler and Young , 2001 ; Singh et al. , 2002 ; Pietquin , 2004 ) , ", "which use only slot-based information , ", "but are much less complex than the full dialogue \"Information States\" ", "explored in ( Henderson et al. , 2005 ) , ", "for which tractabe learning is an issue . <S>", "We explore ", "how incrementally adding richer features allows learning of more effective dialogue strategies . <S>", "We use 2 user simulations ", "learned from COMMUNICATOR data ( Walker et al. , 2001 ; Georgila et al. , 2005b ) ", "to explore the effects of different features on learned dialogue strategies . <S>", "Our results show ", "that adding the dialogue moves of the last system and user turns increases the average reward of the automatically learned strategies by 65.9 % over the original ( hand-coded ) COMMUNICATOR systems , and by 7.8 % over a baseline RL policy ", "that uses only slot-status features . <S>", "We show ", "that the learned strategies exhibit an emergent \"focus switching\" strategy and effective use of the \"give help\" action . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "elab-addition", "same-unit", "elab-addition", "contrast", "elab-addition", "elab-addition", "attribution", "elab-addition", "elab-addition", "elab-addition", "enablement", "attribution", "evaluation", "elab-addition", "attribution", "evaluation" ] }
"P06-1024.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 ], "parent": [ -1, 3, 3, 0, 3, 4, 3, 6, 3, 8, 9, 10, 13, 3, 13 ], "text": [ "ROOT", "Speech recognition problems are a reality in current spoken dialogue systems . <S>", "In order to better understand these phenomena , ", "we study dependencies between speech recognition problems and several higher level dialogue factors ", "that define our notion of student state : ", "frustration/anger , certainty and correctness . <S>", "We apply Chi Square ( χ2 ) analysis to a corpus of speech-based computer tutoring dialogues ", "to discover these dependencies both within and across turns . <S>", "Significant dependencies are combined ", "to produce interesting insights regarding speech recognition problems ", "and to propose new strategies ", "for handling these problems . <S>", "We also find ", "that tutoring , as a new domain for speech applications , exhibits interesting tradeoffs and new factors ", "to consider for spoken dialogue design . <S>" ], "relation": [ "null", "bg-goal", "elab-addition", "ROOT", "elab-addition", "elab-enum_member", "elab-addition", "enablement", "elab-addition", "enablement", "joint", "enablement", "attribution", "evaluation", "enablement" ] }
"P06-1025.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 ], "parent": [ -1, 7, 1, 2, 2, 2, 5, 8, 0, 8, 9, 8, 11 ], "text": [ "ROOT", "Data-driven techniques have been used for many computational linguistics tasks . <S>", "Models ", "derived from data ", "are generally more robust than hand-crafted systems ", "since they better reflect the distribution of the phenomena ", "being modeled . <S>", "With the availability of large corpora of spoken dialog , dialog management is now reaping the benefits of data-driven techniques . <S>", "In this paper , we compare two approaches ", "to modeling subtask structure in dialog : ", "a chunk-based model of subdialog sequences , and a parse-based , or hierarchical , model . <S>", "We evaluate these models ", "using customer agent dialogs from a catalog service domain . <S>" ], "relation": [ "null", "contrast", "elab-addition", "elab-addition", "same-unit", "exp-reason", "elab-addition", "bg-goal", "ROOT", "enablement", "elab-definition", "evaluation", "manner-means" ] }
"P06-1026.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 ], "parent": [ -1, 0, 1, 2, 1, 4, 5, 8, 4, 8, 13, 10, 13, 1 ], "text": [ "ROOT", "We present a new semi-supervised training procedure for conditional random fields ( CRFs ) ", "that can be used ", "to train sequence segmentors and labelers from a combination of labeled and unlabeled training data . <S>", "Our approach is based on extending the minimum entropy regularization framework to the structured prediction case , ", "yielding a training objective ", "that combines unlabeled conditional entropy with labeled conditional likelihood . <S>", "Although the training objective is no longer concave , ", "it can still be used to improve an initial model ", "( e.g. obtained from supervised training ) by iterative ascent . <S>", "We apply our new training algorithm to the problem ", "of identifying gene and protein mentions in biological texts , ", "and show ", "that incorporating unlabeled data improves the performance of the supervised CRF in this case . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "enablement", "elab-addition", "elab-addition", "elab-addition", "contrast", "elab-addition", "manner-means", "joint", "elab-addition", "attribution", "evaluation" ] }
"P06-1027.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 ], "parent": [ -1, 0, 1, 2, 3, 4, 1, 6, 7, 10, 1, 10, 13, 1 ], "text": [ "ROOT", "This paper proposes a framework ", "for training Conditional Random Fields ( CRFs ) ", "to optimize multivariate evaluation measures , ", "including non-linear measures ", "such as F-score . <S>", "Our proposed framework is derived from an error minimization approach ", "that provides a simple solution ", "for directly optimizing any evaluation measure . <S>", "Specifically focusing on sequential segmentation tasks , i.e. text chunking and named entity recognition , ", "we introduce a loss function ", "that closely reflects the target evaluation measure for these tasks , namely , segmentation F-score . <S>", "Our experiments show ", "that our method performs better than standard CRF training . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "enablement", "elab-addition", "elab-example", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "elab-addition", "attribution", "evaluation" ] }
"P06-1028.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 ], "parent": [ -1, 3, 1, 0, 5, 6, 3, 6, 6, 8, 8, 10, 13, 3, 13 ], "text": [ "ROOT", "Lasso is a regularization method for parameter estimation in linear models . <S>", "It optimizes the model parameters with respect to a loss function subject to model complexities . <S>", "This paper explores the use of lasso for statistical language modeling for text input . <S>", "Owing to the very large number of parameters , ", "directly optimizing the penalized lasso loss function is impossible . <S>", "Therefore , we investigate two approximation methods , ", "the boosted lasso ( BLasso ) and the forward stagewise linear regression ( FSLR ) . <S>", "Both methods , ", "when used with the exponential loss function , ", "bear strong resemblance to the boosting algorithm ", "which has been used as a discriminative training method for language modeling . <S>", "Evaluations on the task of Japanese text input show ", "that BLasso is able to produce the best approximation to the lasso solution , ", "and leads to a significant improvement , in terms of character error rate , over boosting and the traditional maximum likelihood estimation . <S>" ], "relation": [ "null", "bg-goal", "elab-addition", "ROOT", "exp-reason", "result", "elab-addition", "elab-enum_member", "elab-addition", "condition", "same-unit", "elab-addition", "attribution", "evaluation", "joint" ] }
"P06-1029.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20 ], "parent": [ -1, 0, 1, 1, 3, 7, 5, 1, 1, 8, 8, 10, 10, 8, 13, 13, 1, 16, 17, 17, 1 ], "text": [ "ROOT", "We have developed an automated Japanese essay scoring system ", "called Jess . <S>", "The system needs expert writings rather than expert raters ", "to build the evaluation model . <S>", "By detecting statistical outliers of predetermined aimed essay features ", "compared with many professional writings for each prompt , ", "our system can evaluate essays . <S>", "The following three features are examined : ", "( 1 ) rhetoric — syntactic variety , or the use of various structures in the arrangement of phases , clauses , and sentences , ", "( 2 ) organization — characteristics ", "associated with the orderly presentation of ideas , ", "such as rhetorical features and linguistic cues , ", "and ( 3 ) content — vocabulary ", "related to the topic , ", "such as relevant information and precise or specialized vocabulary . <S>", "The final evaluation score is calculated ", "by deducting from a perfect score ", "assigned by a learning process ", "using editorials and columns from the Mainichi Daily News newspaper . <S>", "A diagnosis for the essay is also given . <S> " ], "relation": [ "null", "ROOT", "elab-addition", "elab-addition", "enablement", "manner-means", "comparison", "elab-addition", "evaluation", "elab-enum_member", "elab-enum_member", "elab-addition", "elab-example", "elab-enum_member", "elab-addition", "elab-example", "evaluation", "manner-means", "elab-addition", "manner-means", "elab-addition" ] }
"P06-1030.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15 ], "parent": [ -1, 0, 1, 1, 1, 4, 7, 1, 7, 1, 9, 12, 1, 12, 13, 13 ], "text": [ "ROOT", "This paper proposes a method ", "for detecting errors in article usage and singular plural usage ", "based on the mass count distinction . <S>", "First , it learns decision lists from training data ", "generated automatically to distinguish mass and count nouns . <S>", "Then , in order to improve its performance , ", "it is augmented by feedback ", "that is obtained from the writing of learners . <S>", "Finally , it detects errors ", "by applying rules to the mass count distinction . <S>", "Experiments show ", "that it achieves a recall of 0.71 and a precision of 0.72 ", "and outperforms other methods ", "used for comparison ", "when augmented by feedback . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "bg-general", "elab-process_step", "elab-addition", "enablement", "elab-process_step", "elab-addition", "elab-process_step", "manner-means", "attribution", "evaluation", "joint", "elab-addition", "condition" ] }
"P06-1031.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18 ], "parent": [ -1, 0, 1, 2, 8, 4, 4, 8, 1, 8, 9, 1, 11, 14, 11, 14, 14, 16, 17 ], "text": [ "ROOT", "This paper presents a pilot study of the use of phrasal Statistical Machine Translation ( SMT ) techniques ", "to identify and correct writing errors ", "made by learners of English as a Second Language ( ESL ) . <S>", "Using examples of mass noun errors ", "found in the Chinese Learner Error Corpus ( CLEC ) ", "to guide creation of an engineered training set , ", "we show ", "that application of the SMT paradigm can capture errors ", "not well addressed by widely-used proofing tools ", "designed for native speakers . <S>", "Our system was able to correct 61.81 % of mistakes in a set of naturally occurring examples of mass noun errors ", "found on the World Wide Web , ", "suggesting ", "that efforts ", "to collect alignable corpora of pre- and post-editing ESL writing samples offer ", "can enable the development of SMT-based writing assistance tools ", "capable of repairing many of the complex syntactic and lexical problems ", "found in the writing of ESL learners . <S> " ], "relation": [ "null", "ROOT", "enablement", "elab-addition", "manner-means", "elab-addition", "enablement", "attribution", "elab-addition", "elab-addition", "elab-addition", "evaluation", "elab-addition", "attribution", "elab-addition", "enablement", "same-unit", "elab-addition", "elab-addition" ] }
"P06-1032.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 ], "parent": [ -1, 6, 1, 1, 1, 6, 0, 8, 6, 6, 9, 9 ], "text": [ "ROOT", "Transforming syntactic representations ", "in order to improve parsing accuracy ", "has been exploited successfully in statistical parsing systems ", "using constituency-based representations . <S>", "In this paper , we show ", "that similar transformations can give substantial improvements also in data-driven dependency parsing . <S>", "Experiments on the Prague Dependency Treebank show ", "that systematic transformations of coordinate structures and verb groups result in a 10 % error reduction for a deterministic data-driven dependency parser . <S>", "Combining these transformations with previously proposed techniques ", "for recovering nonprojective dependencies ", "leads to state-of-the-art accuracy for the given data set . <S> " ], "relation": [ "null", "bg-compare", "enablement", "same-unit", "manner-means", "attribution", "ROOT", "attribution", "evaluation", "evaluation", "elab-addition", "same-unit" ] }
"P06-1033.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19 ], "parent": [ -1, 7, 1, 2, 2, 1, 5, 0, 7, 7, 7, 10, 11, 11, 15, 7, 15, 18, 7, 18 ], "text": [ "ROOT", "Spoken language generation for dialogue systems requires a dictionary ", "of mappings between semantic representations of concepts ", "the system wants to express ", "and realizations of those concepts . <S>", "Dictionary creation is a costly process ; ", "it is currently done by hand for each dialogue domain . <S>", "We propose a novel unsupervised method ", "for learning such mappings from user reviews in the target domain , ", "and test it on restaurant reviews . <S>", "We test the hypothesis ", "that user reviews ", "that provide individual ratings for distinguished attributes of the domain entity ", "make it possible to map review sentences to their semantic representation with high precision . <S>", "Experimental analyses show ", "that the mappings learned cover most of the domain ontology , ", "and provide good linguistic variation . <S>", "A subjective user evaluation shows ", "that the consistency between the semantic representations and the learned realizations is high ", "and that the naturalness of the realizations is higher than a hand-crafted baseline . <S>" ], "relation": [ "null", "bg-goal", "elab-addition", "elab-addition", "joint", "elab-addition", "elab-addition", "ROOT", "elab-addition", "progression", "elab-addition", "elab-addition", "elab-addition", "same-unit", "attribution", "evaluation", "joint", "attribution", "evaluation", "joint" ] }
"P06-1034.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 ], "parent": [ -1, 0, 1, 1, 5, 1, 5, 6, 9, 1, 1, 10, 10 ], "text": [ "ROOT", "This paper presents a method ", "for building genetic language taxonomies ", "based on a new approach to comparing lexical forms . <S>", "Instead of comparing forms cross-linguistically , ", "a matrix of languageinternal similarities between forms is calculated . <S>", "These matrices are then compared ", "to give distances between languages . <S>", "We argue ", "that this coheres better with current thinking in linguistics and psycholinguistics . <S>", "An implementation of this approach , ", "called PHILOLOGICON , ", "is described , along with its application to Dyen et al.'s ( 1992 ) ninety-five wordlists from Indo-European languages . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "bg-general", "contrast", "elab-addition", "elab-addition", "enablement", "attribution", "evaluation", "evaluation", "elab-addition", "same-unit" ] }
"P06-1035.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13 ], "parent": [ -1, 3, 1, 6, 3, 3, 0, 6, 7, 6, 9, 6, 11, 12 ], "text": [ "ROOT", "A good dictionary contains not only many entries and a lot of information ", "concerning each one of them , ", "but also adequate means ", "to reveal the stored information . <S>", "Information access depends crucially on the quality of the index . <S>", "We will present here some ideas ", "of how a dictionary could be enhanced ", "to support a speaker/writer to find the word s/he is looking for . <S>", "To this end we suggest to add to an existing electronic resource an index ", "based on the notion of association . <S>", "We will also present preliminary work ", "of how a subset of such associations , for example , topical associations , can be acquired by filtering a network of lexical co-occurrences ", "extracted from a corpus . <S> " ], "relation": [ "null", "progression", "elab-addition", "bg-compare", "enablement", "elab-addition", "ROOT", "elab-addition", "enablement", "elab-addition", "bg-general", "elab-addition", "elab-addition", "elab-addition" ] }
"P06-1036.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12 ], "parent": [ -1, 0, 1, 4, 1, 4, 5, 1, 7, 8, 1, 10, 10 ], "text": [ "ROOT", "We investigate the utility of supertag information ", "for guiding an existing dependency parser of German . <S>", "Using weighted constraints to integrate the additionally available information , ", "the decision process of the parser is influenced ", "by changing its preferences , ", "without excluding alternative structural interpretations from being considered . <S>", "The paper reports on a series of experiments ", "using varying models of supertags ", "that significantly increase the parsing accuracy . <S>", "In addition , an upper bound on the accuracy ", "that can be achieved with perfect supertags ", "is estimated . <S>" ], "relation": [ "null", "ROOT", "elab-addition", "manner-means", "elab-addition", "manner-means", "condition", "evaluation", "manner-means", "elab-addition", "evaluation", "elab-addition", "same-unit" ] }
"P06-1037.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16 ], "parent": [ -1, 0, 1, 2, 1, 4, 1, 6, 6, 8, 1, 10, 14, 12, 1, 14, 15 ], "text": [ "ROOT", "We present a novel approach ", "for discovering word categories , sets of words ", "sharing a significant aspect of their meaning . <S>", "We utilize meta-patterns of high frequency words and content words ", "in order to discover pattern candidates . <S>", "Symmetric patterns are then identified ", "using graph-based measures , ", "and word categories are created ", "based on graph clique sets . <S>", "Our method is the first pattern-based method ", "that requires no corpus annotation or manually provided seed patterns or words . <S>", "We evaluate our algorithm on very large corpora in two languages , ", "using both human judgments and WordNet based evaluation . <S>", "Our fully unsupervised results are superior to previous work ", "that used a POS tagged corpus , and computation time for huge corpora are orders of magnitude faster ", "than previously reported . <S> " ], "relation": [ "null", "ROOT", "elab-addition", "elab-addition", "elab-addition", "enablement", "elab-addition", "manner-means", "joint", "bg-general", "evaluation", "elab-addition", "result", "manner-means", "evaluation", "elab-addition", "comparison" ] }
"P06-1038.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ], "parent": [ -1, 0, 1, 2, 5, 1, 7, 1, 7, 10, 1 ], "text": [ "ROOT", "We present BAYESUM ( for \"Bayesian summarization\" ) , a model for sentence extraction in query-focused summarization . <S>", "BAYESUM leverages the common case ", "in which multiple documents are relevant to a single query . <S>", "Using these documents as reinforcement for query terms , ", "BAYESUM is not afflicted by the paucity of information in short queries . <S>", "We show ", "that approximate inference in BAYESUM is possible on large data sets ", "and results in a state-of-the-art summarization system . <S>", "Furthermore , we show ", "how BAYESUM can be understood as a justified query expansion technique in the language modeling for IR framework . <S> " ], "relation": [ "null", "ROOT", "elab-addition", "elab-addition", "manner-means", "elab-addition", "attribution", "elab-addition", "progression", "attribution", "evaluation" ] }
"P06-1039.edu.txt.dep"
{ "id": [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17 ], "parent": [ -1, 0, 1, 2, 1, 4, 7, 4, 1, 8, 9, 8, 11, 15, 13, 1, 15, 16 ], "text": [ "ROOT", "We present an unsupervised learning algorithm ", "that mines large text corpora for patterns ", "that express implicit semantic relations . <S>", "For a given input word pair X : Y with some unspecified semantic relations , the corresponding output list of patterns <P1 , ... , Pm> is ranked ", "according to how well each pattern Pi expresses the relations between X and Y . <S>", "For example , given X = ostrich and Y = bird , ", "the two highest ranking output patterns are \"X is the largest Y\" and \"Y such as the X\" . <S>", "The output patterns are intended to be useful ", "for finding further pairs with the same relations , ", "to support the construction of lexicons , ontologies , and semantic networks . <S>", "The patterns are sorted by pertinence , ", "where the pertinence of a pattern Pi for a word pair X : Y is the expected relational similarity between the given pair and typical pairs for Pi . <S>", "The algorithm is empirically evaluated on two tasks , ", "solving multiple-choice SAT word analogy questions and classifying semantic relations in noun-modifier pairs . <S>", "On both tasks , the algorithm achieves state-of-the-art results , ", "performing significantly better than several alternative pattern ranking algorithms , ", "based on tf-idf . <S> " ], "relation": [ "null", "ROOT", "elab-addition", "elab-addition", "elab-addition", "bg-general", "condition", "elab-example", "elab-addition", "elab-addition", "enablement", "elab-addition", "elab-addition", "result", "elab-addition", "evaluation", "comparison", "bg-general" ] }
"P06-1040.edu.txt.dep"

Dataset Card for SciDTB

Dataset Summary

SciDTB is a domain-specific discourse treebank annotated on scientific articles written in English. Unlike the widely used RST-DT and PDTB, SciDTB uses dependency trees to represent discourse structure, which is flexible and somewhat simplified but does not sacrifice structural integrity. The treebank is also intended as a benchmark for evaluating discourse dependency parsers, and it can benefit downstream NLP tasks such as machine translation and automatic summarization.

Supported Tasks and Leaderboards

[Needs More Information]

Languages

English.

Dataset Structure

Data Instances

A typical data point consists of a root field, a list of nodes forming the discourse dependency tree, together with a file_name string identifying the source abstract. Each node has four fields: id, an integer identifier for the node; parent, the id of its parent node; text, the text span covered by the node; and relation, the discourse relation between the node and its parent.

An example from SciDTB train set is given below:

{
    "root": [
        {
            "id": 0,
            "parent": -1,
            "text": "ROOT",
            "relation": "null"
        },
        {
            "id": 1,
            "parent": 0,
            "text": "We propose a neural network approach ",
            "relation": "ROOT"
        },
        {
            "id": 2,
            "parent": 1,
            "text": "to benefit from the non-linearity of corpus-wide statistics for part-of-speech ( POS ) tagging . <S>",
            "relation": "enablement"
        },
        {
            "id": 3,
            "parent": 1,
            "text": "We investigated several types of corpus-wide information for the words , such as word embeddings and POS tag distributions . <S>",
            "relation": "elab-aspect"
        },
        {
            "id": 4,
            "parent": 5,
            "text": "Since these statistics are encoded as dense continuous features , ",
            "relation": "cause"
        },
        {
            "id": 5,
            "parent": 3,
            "text": "it is not trivial to combine these features ",
            "relation": "elab-addition"
        },
        {
            "id": 6,
            "parent": 5,
            "text": "comparing with sparse discrete features . <S>",
            "relation": "comparison"
        },
        {
            "id": 7,
            "parent": 1,
            "text": "Our tagger is designed as a combination of a linear model for discrete features and a feed-forward neural network ",
            "relation": "elab-aspect"
        },
        {
            "id": 8,
            "parent": 7,
            "text": "that captures the non-linear interactions among the continuous features . <S>",
            "relation": "elab-addition"
        },
        {
            "id": 9,
            "parent": 10,
            "text": "By using several recent advances in the activation functions for neural networks , ",
            "relation": "manner-means"
        },
        {
            "id": 10,
            "parent": 1,
            "text": "the proposed method marks new state-of-the-art accuracies for English POS tagging tasks . <S>",
            "relation": "evaluation"
        }
    ]
}

More raw data instances can be found here
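
For quick inspection, an instance can be loaded with the datasets library. This is a minimal sketch: the Hub identifier "scidtb" is assumed from this card rather than verified.

    # Minimal sketch: load SciDTB and inspect one instance.
    from datasets import load_dataset

    dataset = load_dataset("scidtb", split="train")  # Hub id assumed from this card
    example = dataset[0]

    print(example["file_name"])  # e.g. "D14-1101.edu.txt.dep"
    print(example["root"])       # id/parent/text/relation fields for every node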

Data Fields

  • id: an integer identifier for the node
  • parent: the integer id of the parent node, -1 for the artificial ROOT node (see the sketch below for how these pointers encode the tree)
  • text: a string containing the text span covered by the current node
  • relation: a string giving the discourse relation between the current node and its parent
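
Together, the id and parent fields encode the dependency tree. The following helper is purely illustrative (not part of the dataset tooling): it reconstructs and prints a tree from a list of node dicts in the format of the example instance above. Note that the dataset viewer shows the same nodes as parallel per-field lists, so a loaded instance may need converting to this layout first.

    # Illustrative helper: print the discourse tree encoded by the
    # id/parent/text/relation fields of a list of node dicts.
    from collections import defaultdict

    def print_tree(nodes):
        # Index nodes by id and group child ids under each parent id;
        # the artificial ROOT node has id 0 and parent -1.
        by_id = {node["id"]: node for node in nodes}
        children = defaultdict(list)
        for node in nodes:
            if node["parent"] != -1:
                children[node["parent"]].append(node["id"])

        def walk(node_id, depth):
            node = by_id[node_id]
            print("  " * depth + "[" + node["relation"] + "] " + node["text"].strip())
            for child_id in children[node_id]:
                walk(child_id, depth + 1)

        walk(0, 0)  # start at the artificial ROOT node

Applied to the example instance above (print_tree(example["root"])), this prints each discourse unit indented by its depth in the tree, with its relation label in front.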

Data Splits

The dataset consists of three splits: train, validation (dev) and test.

Train  Valid  Test
 743    154    152
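
The split sizes can be checked programmatically; a brief sketch, again assuming the "scidtb" Hub identifier (the validation split may be exposed under the name dev):

    from datasets import load_dataset

    ds = load_dataset("scidtb")      # loads all splits as a DatasetDict
    for name, split in ds.items():
        print(name, len(split))      # expected sizes: 743, 154, 152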

Dataset Creation

Curation Rationale

[Needs More Information]

Source Data

Initial Data Collection and Normalization

[Needs More Information]

Who are the source language producers?

[Needs More Information]

Annotations

Annotation process

More information can be found here

Who are the annotators?

[Needs More Information]

Personal and Sensitive Information

[Needs More Information]

Considerations for Using the Data

Social Impact of Dataset

[Needs More Information]

Discussion of Biases

[Needs More Information]

Other Known Limitations

[Needs More Information]

Additional Information

Dataset Curators

[Needs More Information]

Licensing Information

[Needs More Information]

Citation Information

@inproceedings{yang-li-2018-scidtb,
    title = "{S}ci{DTB}: Discourse Dependency {T}ree{B}ank for Scientific Abstracts",
    author = "Yang, An  and
      Li, Sujian",
    booktitle = "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)",
    month = jul,
    year = "2018",
    address = "Melbourne, Australia",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/P18-2071",
    doi = "10.18653/v1/P18-2071",
    pages = "444--449",
    abstract = "Annotation corpus for discourse relations benefits NLP tasks such as machine translation and question answering. In this paper, we present SciDTB, a domain-specific discourse treebank annotated on scientific articles. Different from widely-used RST-DT and PDTB, SciDTB uses dependency trees to represent discourse structure, which is flexible and simplified to some extent but do not sacrifice structural integrity. We discuss the labeling framework, annotation workflow and some statistics about SciDTB. Furthermore, our treebank is made as a benchmark for evaluating discourse dependency parsers, on which we provide several baselines as fundamental work.",
}