{
"paper_id": "I17-1049",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:39:37.345750Z"
},
"title": "Using Explicit Discourse Connectives in Translation for Implicit Discourse Relation Classification",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Shi",
"suffix": "",
"affiliation": {},
"email": "w.shi@coli.uni-saarland.de"
},
{
"first": "Frances",
"middle": [],
"last": "Yung",
"suffix": "",
"affiliation": {},
"email": "frances@coli.uni-saarland.de"
},
{
"first": "Raphael",
"middle": [],
"last": "Rubino",
"suffix": "",
"affiliation": {},
"email": "raphael.rubino@dfki.de"
},
{
"first": "Vera",
"middle": [],
"last": "Demberg",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Implicit discourse relation recognition is an extremely challenging task due to the lack of indicative connectives. Various neural network architectures have been proposed for this task recently, but most of them suffer from the shortage of labeled data. In this paper, we address this problem by procuring additional training data from parallel corpora: When humans translate a text, they sometimes add connectives (a process known as explicitation). We automatically back-translate such an added connective into an English connective, and use it to infer a relation label with high confidence. We show that a training set several times larger than the original training set can be generated this way. With the extra labeled instances, we show that even a simple bidirectional Long Short-Term Memory Network can outperform the current state-of-the-art.",
"pdf_parse": {
"paper_id": "I17-1049",
"_pdf_hash": "",
"abstract": [
{
"text": "Implicit discourse relation recognition is an extremely challenging task due to the lack of indicative connectives. Various neural network architectures have been proposed for this task recently, but most of them suffer from the shortage of labeled data. In this paper, we address this problem by procuring additional training data from parallel corpora: When humans translate a text, they sometimes add connectives (a process known as explicitation). We automatically back-translate such an added connective into an English connective, and use it to infer a relation label with high confidence. We show that a training set several times larger than the original training set can be generated this way. With the extra labeled instances, we show that even a simple bidirectional Long Short-Term Memory Network can outperform the current state-of-the-art.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "When humans comprehend language, their interpretation consists of more than just the sum of the content of the sentences. Additional semantic relations (known as coherence relations or discourse relations) are inferred between sentences in the text. Identification of discourse relations is useful for various NLP applications such as question answering (Jansen et al., 2014; Liakata et al., 2013) , summarization (Maskey and Hirschberg, 2005; Yoshida et al., 2014; Gerani et al., 2014) , machine translation (Guzm\u00e1n et al., 2014; Meyer et al., 2015) and information extraction (Cimiano et al., 2005 ). Recently, the task has drawn increasing attention, including two CoNLL shared tasks (Xue et al., , 2016 .",
"cite_spans": [
{
"start": 354,
"end": 375,
"text": "(Jansen et al., 2014;",
"ref_id": "BIBREF10"
},
{
"start": 376,
"end": 397,
"text": "Liakata et al., 2013)",
"ref_id": "BIBREF17"
},
{
"start": 414,
"end": 443,
"text": "(Maskey and Hirschberg, 2005;",
"ref_id": "BIBREF21"
},
{
"start": 444,
"end": 465,
"text": "Yoshida et al., 2014;",
"ref_id": "BIBREF45"
},
{
"start": 466,
"end": 486,
"text": "Gerani et al., 2014)",
"ref_id": "BIBREF5"
},
{
"start": 509,
"end": 530,
"text": "(Guzm\u00e1n et al., 2014;",
"ref_id": "BIBREF6"
},
{
"start": 531,
"end": 550,
"text": "Meyer et al., 2015)",
"ref_id": "BIBREF22"
},
{
"start": 578,
"end": 599,
"text": "(Cimiano et al., 2005",
"ref_id": "BIBREF4"
},
{
"start": 687,
"end": 706,
"text": "(Xue et al., , 2016",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Discourse relations are sometimes expressed with an explicit discourse connective (DC), such as \"because\", \"but\", \"if\". Example 1 shows an explicit discourse relation marked by \"because\"; the text spans between which the relation holds are marked as Arg1 and Arg2. DCs serve as strong cues and allow us to classify discourse relations with high accuracy (Pitler et al., 2008 (Pitler et al., , 2009 Lin et al., 2014) . However, more than half of the discourse relations in a text are not signalled by a connective. See for example 2: a contrastive relation can be inferred between the text spans marked as Arg1 and Arg2. Implicit relation classification is very challenging and represents a bottleneck of the entire discourse parsing system. In order to classify an implicit discourse relation, it is necessary to represent the semantic content of the relational arguments, which may give a cue to the coherence relation, e.g. \"care\" -\"dragdown blow\" in 2. Early methods have focused on designing various features to overcome data sparsity and more effectively identify relevant concepts in the two discourse relational arguments. (Lin et al., 2009; Zhou et al., 2010; Biran and McKeown, 2013; Park and Cardie, 2012; Rutherford and Xue, 2014) , while recent efforts use distributed representations with neural network architectures (Zhang et al., 2015; Ji and Eisenstein, 2015; Ji et al., 2016; Qin et al., 2016 Qin et al., , 2017 . Both streams of methods suffer from insufficient annotated data (Wang et al., 2015) , since the Penn Discourse Treebank (PDTB) (Prasad et al., 2008) , which is the discourse annotated resource mostly used by the community, consists of just 12763 implicit instances in the usual training set and 761 relations in the test set. Some second-level relations only have about a dozen instances. It is therefore crucial to obtain extra data for machine learning.",
"cite_spans": [
{
"start": 354,
"end": 374,
"text": "(Pitler et al., 2008",
"ref_id": "BIBREF27"
},
{
"start": 375,
"end": 397,
"text": "(Pitler et al., , 2009",
"ref_id": "BIBREF26"
},
{
"start": 398,
"end": 415,
"text": "Lin et al., 2014)",
"ref_id": "BIBREF19"
},
{
"start": 1130,
"end": 1148,
"text": "(Lin et al., 2009;",
"ref_id": "BIBREF18"
},
{
"start": 1149,
"end": 1167,
"text": "Zhou et al., 2010;",
"ref_id": "BIBREF49"
},
{
"start": 1168,
"end": 1192,
"text": "Biran and McKeown, 2013;",
"ref_id": "BIBREF1"
},
{
"start": 1193,
"end": 1215,
"text": "Park and Cardie, 2012;",
"ref_id": "BIBREF25"
},
{
"start": 1216,
"end": 1241,
"text": "Rutherford and Xue, 2014)",
"ref_id": "BIBREF34"
},
{
"start": 1331,
"end": 1351,
"text": "(Zhang et al., 2015;",
"ref_id": "BIBREF46"
},
{
"start": 1352,
"end": 1376,
"text": "Ji and Eisenstein, 2015;",
"ref_id": "BIBREF11"
},
{
"start": 1377,
"end": 1393,
"text": "Ji et al., 2016;",
"ref_id": "BIBREF12"
},
{
"start": 1394,
"end": 1410,
"text": "Qin et al., 2016",
"ref_id": "BIBREF29"
},
{
"start": 1411,
"end": 1429,
"text": "Qin et al., , 2017",
"ref_id": "BIBREF30"
},
{
"start": 1496,
"end": 1515,
"text": "(Wang et al., 2015)",
"ref_id": "BIBREF39"
},
{
"start": 1559,
"end": 1580,
"text": "(Prasad et al., 2008)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we propose a simple approach to automatically extract samples of implicit discourse relations from a parallel corpus via back-translation: Our approach is motivated by the fact that humans sometimes omit connectives during translation (implicitation), or insert connectives not originally present in the source text (explicitation) (Laali and Kosseim, 2014; Koppel and Ordan, 2011; Cartoni et al., 2011; Hoek and Zufferey, 2015; Zufferey, 2016) . When explicitating an implicit relation, the human translator is, in other words, disambiguating the source implicit relation with an explicit DC in the target language.",
"cite_spans": [
{
"start": 344,
"end": 369,
"text": "(Laali and Kosseim, 2014;",
"ref_id": "BIBREF15"
},
{
"start": 370,
"end": 393,
"text": "Koppel and Ordan, 2011;",
"ref_id": "BIBREF14"
},
{
"start": 394,
"end": 415,
"text": "Cartoni et al., 2011;",
"ref_id": null
},
{
"start": 416,
"end": 440,
"text": "Hoek and Zufferey, 2015;",
"ref_id": "BIBREF9"
},
{
"start": 441,
"end": 456,
"text": "Zufferey, 2016)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our contribution is twofold: Firstly, we propose a pipeline to automatically label English implicit discourse relation samples based on explicitation of DCs in human translation, which is the target side of a parallel corpus. Secondly, we show that the extra instances mined by the proposed method improve the performance of a standard neural classifier by a large margin, when evaluated on the PDTB 2.0 benchmark test set as well as by crossvalidation (Shi and Demberg, 2017) .",
"cite_spans": [
{
"start": 453,
"end": 476,
"text": "(Shi and Demberg, 2017)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Early work on discourse relation parsing tried to classify unmarked discourse relations by training on explicit discourse relations with the marker removed (Marcu and Echihabi, 2002) . While this method promised to provide almost unlimited training data, it was shown that explicit relations differ in systematic ways from implicit relations (Asr and Demberg, 2012), so that performance on implicits is very poor when learning on explicits only (Sporleder and Lascarides, 2008) .",
"cite_spans": [
{
"start": 176,
"end": 202,
"text": "(Marcu and Echihabi, 2002)",
"ref_id": "BIBREF20"
},
{
"start": 465,
"end": 497,
"text": "(Sporleder and Lascarides, 2008)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The release of PDTB (Prasad et al., 2008) , the largest available corpus which annotates implicit examples, led to substantial improvements in classification of implicit relations, and spurred a variety of approaches to the task, including feature-based methods (Pitler et al., 2009; Lin et al., 2009; Park and Cardie, 2012; Biran and McKeown, 2013; Rutherford and Xue, 2014) and neural network models (Zhang et al., 2015; Ji and Eisenstein, 2015; Ji et al., 2016; Qin et al., 2016 Qin et al., , 2017 . However, the limited size of the annotated corpus, in combination with the difficulty of the task of inferring the type of relation between given text spans, presents a problem both in training (Rutherford et al. (2017) find that a simple feed-forward architecture can outperform more complex architectures, and argue that the larger number of parameters cannot be estimated adequately on the small amount of training data) and testing (Shi and Demberg (2017) report experiments showing that results on the standard test set are not reliable due to the small set of just 761 relations).",
"cite_spans": [
{
"start": 20,
"end": 41,
"text": "(Prasad et al., 2008)",
"ref_id": "BIBREF28"
},
{
"start": 263,
"end": 284,
"text": "(Pitler et al., 2009;",
"ref_id": "BIBREF26"
},
{
"start": 285,
"end": 302,
"text": "Lin et al., 2009;",
"ref_id": "BIBREF18"
},
{
"start": 303,
"end": 325,
"text": "Park and Cardie, 2012;",
"ref_id": "BIBREF25"
},
{
"start": 326,
"end": 350,
"text": "Biran and McKeown, 2013;",
"ref_id": "BIBREF1"
},
{
"start": 351,
"end": 376,
"text": "Rutherford and Xue, 2014)",
"ref_id": "BIBREF34"
},
{
"start": 403,
"end": 423,
"text": "(Zhang et al., 2015;",
"ref_id": "BIBREF46"
},
{
"start": 424,
"end": 448,
"text": "Ji and Eisenstein, 2015;",
"ref_id": "BIBREF11"
},
{
"start": 449,
"end": 465,
"text": "Ji et al., 2016;",
"ref_id": "BIBREF12"
},
{
"start": 466,
"end": 482,
"text": "Qin et al., 2016",
"ref_id": "BIBREF29"
},
{
"start": 483,
"end": 501,
"text": "Qin et al., , 2017",
"ref_id": "BIBREF30"
},
{
"start": 698,
"end": 723,
"text": "(Rutherford et al. (2017)",
"ref_id": "BIBREF33"
},
{
"start": 942,
"end": 965,
"text": "(Shi and Demberg (2017)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Data extension has therefore been a longstanding goal in discourse relation classification. The main idea has been to select explicit discourse instances that are similar to implicit ones to add to the training set. Wang et al. (2012) proposed to differentiate typical and atypical examples for each discourse relation, and to augment training data for implicits only with typical explicits. In a similar vein, others proposed criteria for selecting, among explicitly marked relations, ones that contain discourse connectives which can be omitted without changing the interpretation of the discourse. These relations are then added to the implicit instances in training.",
"cite_spans": [
{
"start": 216,
"end": 234,
"text": "Wang et al. (2012)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "On the other hand, Lan et al. (2013) presented multi-task learning based systems which, in addition to the main implicit relation classification task, contain the task of predicting previously removed connectives for explicit relations, and profit from shared representations between the tasks. Similarly, Hernault et al. (2010) observe features that occur in both implicit and explicit discourse relations, and exploit such feature co-occurrence to extend the features for classifying implicits using explicitly marked relations. Mih\u0103il\u0103 and Ananiadou (2014) and Hidey and McKeown (2016) proposed semi-supervised learning and self-learning methods to improve recognition of patterns that typically signal causal discourse relations.",
"cite_spans": [
{
"start": 19,
"end": 36,
"text": "Lan et al. (2013)",
"ref_id": "BIBREF16"
},
{
"start": 306,
"end": 328,
"text": "Hernault et al. (2010)",
"ref_id": "BIBREF7"
},
{
"start": 532,
"end": 560,
"text": "Mih\u0103il\u0103 and Ananiadou (2014)",
"ref_id": "BIBREF23"
},
{
"start": 565,
"end": 589,
"text": "Hidey and McKeown (2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The approach proposed here differs from previous approaches because we extend our training data only with originally implicit relations, and obtain the label through the disambiguation that sometimes happens in human translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Parallel corpora have been exploited as a resource of discourse relation data in previous work but have mostly been used with goals different from ours: Cartoni et al. (2013) and Meyer et al. (2015) use parallel corpora to label and disambiguate discourse connectives in the target language based on explicitly marked English relations, in order to help machine translation. A second application has been to project discourse annotation from English onto other languages through parallel corpora, in order to construct discourse annotated resources for the target language (Versley, 2010; Zhou et al., 2012; Laali and Kosseim, 2014) .",
"cite_spans": [
{
"start": 153,
"end": 174,
"text": "Cartoni et al. (2013)",
"ref_id": "BIBREF2"
},
{
"start": 179,
"end": 198,
"text": "Meyer et al. (2015)",
"ref_id": "BIBREF22"
},
{
"start": 573,
"end": 588,
"text": "(Versley, 2010;",
"ref_id": "BIBREF38"
},
{
"start": 589,
"end": 607,
"text": "Zhou et al., 2012;",
"ref_id": "BIBREF47"
},
{
"start": 608,
"end": 632,
"text": "Laali and Kosseim, 2014)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The approach that is in spirit most similar to ours is by Wu et al. (2016) , who extracted bilingual-constrained synthetic implicit data from a sentence-aligned English-Chinese corpus and got improvements by incorporating these data via a multi-task neural network on the 4-way classification.",
"cite_spans": [
{
"start": 58,
"end": 74,
"text": "Wu et al. (2016)",
"ref_id": "BIBREF41"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our proposed method targets sentence pairs in the parallel corpora where an implicit discourse relation on the source (English) side has been translated by human translators into an explicitly marked relation on the target side. The inserted connective hence disambiguates the originally implicit relation, and the discourse relation can be classified with confidence (under the assumption that the same discourse relation holds in the original source text).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "The pipeline of our approach is detailed in the following steps.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "1. The target side of a sentence-aligned parallel corpus, with English as the source text, is back-translated to English using a pre-trained machine translation system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "2. An end-to-end discourse relation parser for English is run on both the source side and the back-translated target side. The parser will output a list of explicit and implicit relations, including the relation sense and argument spans of each relation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "3. Implicit-to-explicit discourse relation alignments are identified according to the output of the end-to-end parser. Implicit relations in the PDTB are only ever annotated between consecutive sentences. Therefore, we specifically extract pairs of consecutive sentences on the source English side:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "\u2022 that are identified as the Arg1 and Arg2 of an implicit discourse relation 1 ; \u2022 whose corresponding back-translated target sentences are identified as the Arg1 and Arg2 of an explicit relation; \u2022 that are not part of the Arg1 or Arg2 of any other discourse relations 2 .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "4. Label the source English implicit relation with the relation class of the explicit relation in back-translated target text. The two consecutive sentences are marked as Arg1 and Arg2 respectively. Figure 1 illustrates the pipeline of our approach, which takes an English-to-French parallel corpus as input and outputs a list of implicit discourse relations, each containing two arguments from the source English text and a relation class according to the back-translated French DC.",
"cite_spans": [],
"ref_spans": [
{
"start": 199,
"end": 207,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "We then compare the performance of a neural implicit discourse relation classifier trained with the annotated implicit relation samples in PDTB alone and also with the extra training samples mined from the parallel corpus. The classifier performance is evaluated on the standard PDTB implicit relation test set and by cross-validation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Method",
"sec_num": "3"
},
{
"text": "In the proposed method, we disambiguate implicit relations according to the explicitated translation. Instead of directly classifying the explicit relation in the target language, we back-translate the target text to the source language by machine translation (MT) because:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advantages of using back-translation",
"sec_num": "3.1"
},
{
"text": "\u2022 Discourse parsers on low-resource languages do not perform well, or are even not available.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advantages of using back-translation",
"sec_num": "3.1"
},
{
"text": "\u2022 Different languages have different sets of discourse relation classes defined. By means of back-translation, we can use an English discourse parser on the target text, and thus label the implicit relations with the same set of relation labels defined for English. \u2022 The quality of the MT system has limited impact on our approach. Since the DC tokens are powerful features for disambiguating an explicit relation, only limited contextual features are required. We just need a correct translation of the explicit DC tokens, irrespective of word order and the rest of the translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Advantages of using back-translation",
"sec_num": "3.1"
},
{
"text": "Only inter-sentential implicit relations are annotated in the PDTB, due to time and resource constraints (Prasad et al., 2008) . However, this does not mean that implicit relations only hold between consecutive sentences. We decided to extract intra-sentential relation samples from the parallel corpus based on two motivations: Firstly, we hypothesize that intra-sentential implicit relations share similar features with inter-sentential ones. Including both types may hence increase dataset size. In fact, we will see in the experimental results that intra-sentential training samples largely improve classification of implicit relations, even though the test data from the PDTB contains inter-sentential samples only. An analysis of what we learn from the intra-sentential samples is presented in Section 6.1. Secondly, intra-sentential relations can potentially be identified with higher reliability: Parallel corpora are typically sentence-aligned. This makes it a lot easier to extract sentences that are detected by the end-to-end discourse relation parser as explicit on the (back-)translated target side but not on the original source side, without needing to worry about whether any sentences in the dataset were removed or their order changed during preprocessing (which would be detrimental for detecting intra-sentential relations).",
"cite_spans": [
{
"start": 105,
"end": 126,
"text": "(Prasad et al., 2008)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Inter-sentential and intra-sentential relations",
"sec_num": "3.2"
},
{
"text": "It is possible but not entirely trivial to determine the argument spans of the discourse relations labeled with the back-translation method. In this paper, we chose a neural network model that concatenates the Arg1 and Arg2 representations (see Section 4.4), so that determining the exact text spans of Arg1 and Arg2 was not necessary. We are not the first to do so: R\u00f6nnqvist et al. (2017) modeled the Arg1-Arg2 pair as a joint sequence and did not compute intermediate representations of the arguments separately, which makes the model more flexible in modeling discourse units and easy to extend to additional contexts.",
"cite_spans": [
{
"start": 389,
"end": 412,
"text": "R\u00f6nnqvist et al. (2017)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Argument spans",
"sec_num": "3.3"
},
{
"text": "Parallel Corpora The corpora used for the extraction of implicit discourse relation samples are publicly available bilingual English-French parallel datasets compiled by Rabinovich et al. (2015) . 3 They consist of European parliamentary proceedings, literary works and the Hansard corpus - genres that are different from the PDTB, because we want to expand the diversity of discourse relation samples available in the PDTB. These corpora contain a total of \u223c 1.9M sentence pairs with an average of 22.7 words per English sentence. Each corpus contains an originally written part in English (used as target for the MT system) and its corresponding human translation in French (used as source). We use the same corpora to train the French-English MT system (Section 4.2), to back-translate the French side into English and to extract additional discourse training data. The Penn Discourse Treebank (PDTB) We use the Penn Discourse Treebank 2.0 (Prasad et al., 2008) for the training and testing of the implicit discourse relation classifier. PDTB is the largest available manually annotated corpus of explicit and implicit discourse relations based on one million word tokens from the Wall Street Journal. Each discourse relation is annotated with at most two senses from a three-level hierarchy of discourse relations. The first level roughly categorizes the relations into four major classes, each of which is further categorized into more distinct relation types. Conventionally, discourse relation classifiers are either evaluated by the accuracy of the first-level 4-way classification (Pitler et al., 2009; Rutherford and Xue, 2014; , or the second-level 11-way classification (Lin et al., 2009; Ji and Eisenstein, 2015; Qin et al., 2016 Qin et al., , 2017 .",
"cite_spans": [
{
"start": 170,
"end": 194,
"text": "Rabinovich et al. (2015)",
"ref_id": "BIBREF31"
},
{
"start": 197,
"end": 198,
"text": "3",
"ref_id": null
},
{
"start": 941,
"end": 962,
"text": "(Prasad et al., 2008)",
"ref_id": "BIBREF28"
},
{
"start": 1589,
"end": 1610,
"text": "(Pitler et al., 2009;",
"ref_id": "BIBREF26"
},
{
"start": 1611,
"end": 1636,
"text": "Rutherford and Xue, 2014;",
"ref_id": "BIBREF34"
},
{
"start": 1681,
"end": 1699,
"text": "(Lin et al., 2009;",
"ref_id": "BIBREF18"
},
{
"start": 1700,
"end": 1724,
"text": "Ji and Eisenstein, 2015;",
"ref_id": "BIBREF11"
},
{
"start": 1725,
"end": 1741,
"text": "Qin et al., 2016",
"ref_id": "BIBREF29"
},
{
"start": 1742,
"end": 1760,
"text": "Qin et al., , 2017",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "We train an MT system to back-translate the target side of the parallel corpus to English. To produce the highest-quality back-translation, we use a neural MT system trained on the same parallel corpus. The system is implemented by Open-source Neural Machine Translation (OpenNMT) (Klein et al., 2017) . Source words are first mapped to word vectors and then fed into a recurrent neural network.",
"cite_spans": [
{
"start": 281,
"end": 301,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation System",
"sec_num": "4.2"
},
{
"text": "At each target time step, attention is applied over the source RNN and combined with the current hidden state to produce a prediction of the next word; this prediction is then fed back into the target RNN.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation System",
"sec_num": "4.2"
},
{
"text": "We evaluate the MT system on newstest2014 and newsdiscusstest2015, reaching 24.63 and 22.58 BLEU respectively. The French side of the training data, back-translated into English, is evaluated against the originally written English source, yielding a BLEU score of 34.17. 4 The evaluation of the back-translated corpus indicates that the source text is not exactly reproduced. Critically, we assume that the MT system preserves the explicitness of the target DCs, rather than explicitating or implicitating DCs as human translators do.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Machine Translation System",
"sec_num": "4.2"
},
{
"text": "We employ the PDTB-style End-to-End Discourse Parser (Lin et al., 2014) to identify and classify the explicit instances in the back-translated English sentences. It achieves an F1 score of about 87% for explicit relations on level-2 types, even higher than the human agreement of 84%. Its accuracy on explicit DC identification is 96%.",
"cite_spans": [
{
"start": 53,
"end": 71,
"text": "(Lin et al., 2014)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-end discourse parser",
"sec_num": "4.3"
},
{
"text": "On the source side, the end-to-end parser is applied to pick implicit relations from other types of relations, i.e. explicit relations or no relation, in order to extract implicit-to-explicit DC translation from the parallel corpus 5 . On the back-translation, the end-to-end parser is applied to identify only explicitly marked discourse relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "End-to-end discourse parser",
"sec_num": "4.3"
},
{
"text": "We use a Bidirectional Long Short-Term Memory (LSTM) network as the implicit relation classification model to evaluate the samples extracted by the proposed method. This architecture inspects both left and right contextual information and has been proven effective in relation classification (Zhou et al., 2016; R\u00f6nnqvist et al., 2017) .",
"cite_spans": [
{
"start": 292,
"end": 311,
"text": "(Zhou et al., 2016;",
"ref_id": "BIBREF48"
},
{
"start": 312,
"end": 335,
"text": "R\u00f6nnqvist et al., 2017)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit relation classification model",
"sec_num": "4.4"
},
{
"text": "The model is illustrated in Figure 2 , where each word from the two discourse relational arguments is represented as a vector, obtained through a word-embedding look-up. Given the word representations [w_1, w_2, ..., w_n] as the input sequence, an Figure 2 : The bidirectional LSTM network for the task of implicit discourse relation classification.",
"cite_spans": [],
"ref_spans": [
{
"start": 28,
"end": 36,
"text": "Figure 2",
"ref_id": null
},
{
"start": 254,
"end": 262,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Implicit relation classification model",
"sec_num": "4.4"
},
{
"text": "LSTM computes the state sequence [h_1, h_2, ..., h_n] with the following equations:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit relation classification model",
"sec_num": "4.4"
},
{
"text": "i_t = \u03c3(W_iw w_t + W_ih h_{t\u22121} + W_ic c_{t\u22121} + b_i); f_t = \u03c3(W_fw w_t + W_fh h_{t\u22121} + W_fc c_{t\u22121} + b_f); g_t = tanh(W_cw w_t + W_ch h_{t\u22121} + b_c); c_t = f_t \u2299 c_{t\u22121} + i_t \u2299 g_t; o_t = \u03c3(W_ow w_t + W_oh h_{t\u22121} + b_o); h_t = o_t \u2299 tanh(c_t)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit relation classification model",
"sec_num": "4.4"
},
{
"text": "The forward and backward LSTM layers traverse the input sequence, producing sequences of vectors h_i^f and h_i^b respectively, which are summed together in the subsequent sum layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit relation classification model",
"sec_num": "4.4"
},
{
"text": "Following the preprocessing method in (Lin et al., 2009) , relations with too few instances (Contingency.Condition, Pragmatic Condition; Comparison.Pragmatic Contrast, Pragmatic Concession; Expansion.Exception) are removed during training and evaluation, resulting in 11 types of relations. Among instances annotated with two relation senses, we only use the first sense.",
"cite_spans": [
{
"start": 38,
"end": 56,
"text": "(Lin et al., 2009)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit relation classification model",
"sec_num": "4.4"
},
{
"text": "The model is implemented in Keras 6 , which is capable of running on top of Theano. We use word embeddings of 300 dimensions, which are trained on the original English side of the parallel corpora as well as PDTB with the Skip-gram architecture in Word2Vec (Mikolov et al., 2013) . We initial- ize the weights with uniform random; use standard cross-entropy as our loss function; employ Adagrad as the optimization algorithm of choice and set dropout layers after the embedding layer and output layer with a drop rate of 0.2 and 0.5 respectively. Each LSTM has a vector dimension of 300, matching the embedding size. We split the PDTB data and evaluate the classifier in two settings. Firstly, we adopt the standard PDTB splitting convention, where section 2-21, 22, and 23 are used as train, validation and test sets respectively (Lin et al., 2009) . Secondly, we conduct 10-fold cross validation on the whole corpus including sections 0-24, as advocated in (Shi and Demberg, 2017) . And extra samples are only added into training folds in the CV setting, which means that testing fold consists of instances from PDTB only. Models trained with and without extra samples we extracted, on top of the PDTB data, are compared.",
"cite_spans": [
{
"start": 257,
"end": 279,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF24"
},
{
"start": 831,
"end": 849,
"text": "(Lin et al., 2009)",
"ref_id": "BIBREF18"
},
{
"start": 959,
"end": 982,
"text": "(Shi and Demberg, 2017)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implicit relation classification model",
"sec_num": "4.4"
},
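The two evaluation settings described above can be sketched as follows. This is a schematic sketch with hypothetical toy instance records (one per PDTB section), not the authors' code; the key point is that extra samples enter the cross-validation training folds only:

```python
import random

def standard_split(instances):
    """Standard PDTB convention: sections 2-21 train, 22 dev, 23 test
    (the remaining sections are unused in this setting)."""
    train = [x for x in instances if 2 <= x["sec"] <= 21]
    dev   = [x for x in instances if x["sec"] == 22]
    test  = [x for x in instances if x["sec"] == 23]
    return train, dev, test

def cv_folds(pdtb, extra, k=10, seed=1):
    """10-fold CV over sections 0-24. The extra (parallel-corpus)
    samples are added to the training side only, so every test fold
    contains PDTB instances exclusively."""
    data = pdtb[:]
    random.Random(seed).shuffle(data)
    for i in range(k):
        test = data[i::k]
        train = [x for x in data if x not in test] + extra
        yield train, test

# hypothetical toy instances: one per PDTB section, plus 5 extra samples
pdtb = [{"sec": s, "id": f"pdtb-{s}"} for s in range(25)]
extra = [{"sec": None, "id": f"extra-{i}"} for i in range(5)]

train, dev, test = standard_split(pdtb)
print(len(train), len(dev), len(test))  # 20 1 1

for tr, te in cv_folds(pdtb, extra):
    assert all(x["id"].startswith("pdtb") for x in te)
```

Keeping mined samples out of the test folds ensures the cross-validation accuracy is still measured on gold PDTB annotations only.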
{
"text": "In total, 102, 314 implicit discourse relation samples are extracted, of which 25, 086 are inter-sentential relations and 77, 228 are intrasentential 7 . Inter-sentential relations are much less abundant because stricter screening strategy is applied (the end of point 3 in Section 3). From Table 1 we can also see that majority of DCs in the Lin et al. (2009) 40.20 - Qin et al. (2016) 43.81 - Qin et al. (2017) 44.65 - Rutherford et al. (2017) 39.56 - Shi and Demberg (2017) 1 \"-\" means no result currently. source side have been translated into the target side explicitly. Figure 3 compares the distribution of relation senses among the annotated implicit relations in the PDTB and our extracted samples. The relation distribution generally corresponds to the distribution in PDTB, but some relations, such as Temporal and Contingency.Condition, are particularly numerous in the intra-sentential samples.",
"cite_spans": [
{
"start": 343,
"end": 360,
"text": "Lin et al. (2009)",
"ref_id": "BIBREF18"
},
{
"start": 369,
"end": 386,
"text": "Qin et al. (2016)",
"ref_id": "BIBREF29"
},
{
"start": 395,
"end": 412,
"text": "Qin et al. (2017)",
"ref_id": "BIBREF30"
},
{
"start": 421,
"end": 445,
"text": "Rutherford et al. (2017)",
"ref_id": "BIBREF33"
},
{
"start": 454,
"end": 476,
"text": "Shi and Demberg (2017)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 576,
"end": 584,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Distribution of additional instances",
"sec_num": "5"
},
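The distribution comparison in Figure 3 boils down to computing relative sense frequencies in each sample set. A minimal sketch with made-up labels (the real counts come from the PDTB annotations and the mined samples):

```python
from collections import Counter

def sense_distribution(labels):
    """Relative frequency of each relation sense in a sample set."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {sense: n / total for sense, n in counts.items()}

# made-up label lists for illustration only
pdtb  = ["Expansion"] * 5 + ["Contingency"] * 3 + ["Temporal"] * 2
mined = ["Expansion"] * 4 + ["Contingency"] * 3 + ["Temporal"] * 3

print(sense_distribution(pdtb)["Temporal"])   # 0.2
print(sense_distribution(mined)["Temporal"])  # 0.3
```

Comparing the two dictionaries sense by sense reveals exactly the kind of over-representation (e.g. of Temporal) reported for the intra-sentential samples.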
{
"text": "We compare our model with current state-of-theart models that were evaluated under the same setting (11-way classification, PDTB section 23 as test set) (Qin et al., 2016 (Qin et al., , 2017 Rutherford et al., 2017) , as well as a model based on linguistic features (Lin et al., 2009) that uses this setting for evaluation. Qin et al. (2017) developed an adversarial model, which consists of two CNNs in which arguments are represented separately, a four-layer Perceptron and a dense layer for classification, to enable an adaptive imitation scheme through competition between the implicit network and a rival feature discriminator. Our model substantially differs from that setup, as it uses a much simpler network architecture and represents the two discourse relation arguments jointly, i.e. without knowledge of the arguments' spans. We can see that our baseline model performs substantially less well than the state of the art, and also less well than (Shi and Demberg, 2017) , who also use an LSTM but represent discourse relational arguments separately. As adding training data can be expected to be largely orthogonal to the choice of classification model, we are here most interested in seeing whether adding the new instances improves over the baseline model with identical architecture. Table 2 shows that including the extra interand intra-sentential instances leads to very substantial improvements in classification accuracy. Using the additional data, our method not only improves performance by 11%-points on the PDTB test set compared to training on the PDTB implicit relations only, but also outperforms much more complex neural network models (Qin et al., 2016 (Qin et al., , 2017 ) on this task.",
"cite_spans": [
{
"start": 153,
"end": 170,
"text": "(Qin et al., 2016",
"ref_id": "BIBREF29"
},
{
"start": 171,
"end": 190,
"text": "(Qin et al., , 2017",
"ref_id": "BIBREF30"
},
{
"start": 191,
"end": 215,
"text": "Rutherford et al., 2017)",
"ref_id": "BIBREF33"
},
{
"start": 266,
"end": 284,
"text": "(Lin et al., 2009)",
"ref_id": "BIBREF18"
},
{
"start": 324,
"end": 341,
"text": "Qin et al. (2017)",
"ref_id": "BIBREF30"
},
{
"start": 957,
"end": 980,
"text": "(Shi and Demberg, 2017)",
"ref_id": "BIBREF36"
},
{
"start": 1662,
"end": 1679,
"text": "(Qin et al., 2016",
"ref_id": "BIBREF29"
},
{
"start": 1680,
"end": 1699,
"text": "(Qin et al., , 2017",
"ref_id": "BIBREF30"
}
],
"ref_spans": [
{
"start": 1298,
"end": 1305,
"text": "Table 2",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The evaluation using cross-validation (around 8% point improvement over the baseline) furthermore shows that the obtained improvements do not only hold for the PDTB standard test set but also are stable across the whole PDTB data. These results strongly support the effectiveness of the implicit relation samples mined from parallel texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "The accuracies reported for our models are based on 10 repeat-runs with different initializations of the network. This allows us to show the amount of variance in results we obtained in Figure 4 . We found that results sometimes varied a lot between different runs, and would therefore like to encourage others in the field to also report variability due to initialization or other random factors. For instance, our best run achieved 49.84% accuracy on the PDTB test set trained with all additional instances, while mean performance for that setting is 45.50% accuracy. Variances were substantially smaller for the cross-validation setting, as the number of overall instances going into the evaluation is a lot larger in this setting, and hence yields more stable performance estimates.",
"cite_spans": [],
"ref_spans": [
{
"start": 186,
"end": 194,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
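Reporting variability over repeated runs, as advocated above, amounts to simple summary statistics over per-run accuracies. A sketch with made-up accuracy values (the paper's actual figures are a best run of 49.84% against a mean of 45.50% in one setting):

```python
import statistics

def summarize_runs(accuracies):
    """Mean, sample standard deviation and best accuracy over repeated
    training runs with different random initializations."""
    return {
        "mean": statistics.mean(accuracies),
        "stdev": statistics.stdev(accuracies),
        "best": max(accuracies),
    }

# made-up accuracies from 10 hypothetical runs
runs = [44.1, 45.9, 43.8, 46.2, 45.0, 44.7, 46.8, 45.3, 44.2, 45.5]
s = summarize_runs(runs)
print(round(s["mean"], 2), s["best"])  # 45.15 46.8
```

Reporting the mean and spread rather than a single best run makes results comparable across papers even when initialization noise is large.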
{
"text": "In order to illustrate what kinds of instances our method extracts, we show an instances below. The underlined DC is the explicit DC identified in the back-translated target text; the discourse relation is automatically classified based on the backtranslation. One strength of the proposed method is that it can mine and label discourse relations that are not commonly regarded as discourse relations and hence not annotated in PDTB. Below are some examples where the bold DC was identified in the (back-)translation: 4. A conservative member was kicked out of his caucus for defending Nova Scotians.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "6.1"
},
{
"text": "-because, Contingency.Cause 5. A failure to do so would affect our attitude to their eventual accession.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "6.1"
},
{
"text": "-if, Contingency.Condition",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "6.1"
},
{
"text": "These extra samples are in fact an invaluable resource of discourse-informative patterns, which are not available to discourse relation parsers that are trained only on the PDTB dataset. These cases provide evidence that our proposed method can not only provide instances that are similar to implicit labelled instances, but detect additional patterns, as attempted in (Mih\u0203il\u0203 and Ananiadou, 2014; Hidey and McKeown, 2016) for causal relations, and generalize from the semantic content observed in such relations to actual implicit discourse relations.",
"cite_spans": [
{
"start": 369,
"end": 398,
"text": "(Mih\u0203il\u0203 and Ananiadou, 2014;",
"ref_id": "BIBREF23"
},
{
"start": 399,
"end": 423,
"text": "Hidey and McKeown, 2016)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "6.1"
},
{
"text": "For example, as reported in Section 5, numerous Temporal relations are mined from the parallel corpus. These include cases where the original text contained a verbal construction which expresses the temporal relation, which through backtranslation gets expressed as a discourse relation, or where explicit relations include gerunds in the Arg2, e.g.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "6.1"
},
{
"text": "\"any plan takes time to have the effect required\"\u2192 \"before getting the effect required\" \"how much longer do women have to wait for fairness?\" \u2192 \"before women have fairness.\" \"having gone over the estimates\" \u2192 \"after going over the estimates.\"",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "6.1"
},
{
"text": "(source text followed by (back-)translation, where the explicitated DC is underlined).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "6.1"
},
{
"text": "In this work, we only extracted inter-and intrasentential discourse relations, but the method can be in principle extended to other discourse relations that are not annotated in the PDTB, such as implicit relation between non-consecutive sentences. Discourse parsers that identify a larger range of relations are more useful in end applications. More importantly, identification of discourse-informative linguistic patterns by the proposed method opens the opportunity to mine extra samples under a monolingual setting and further improve classification performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Qualitative Analysis",
"sec_num": "6.1"
},
{
"text": "In order to get detailed insights on how much extra data is most beneficial to the task, we also trained our classifier with different numbers of additional extracted samples. Figure 4 compares the classification accuracy when training on incremental number of extra instances. We find that the performance increases with samples size, but plateaus after 40, 000 intra-sentential samples.",
"cite_spans": [],
"ref_spans": [
{
"start": 176,
"end": 184,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Quantitative Analysis",
"sec_num": "6.2"
},
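The learning-curve experiment can be sketched as a sweep over increasing numbers of extra samples. This is schematic: `train_and_eval` is a hypothetical stand-in for the actual training pipeline, and the toy accuracy function merely mimics the rise-then-plateau shape reported for Figure 4:

```python
def learning_curve(pdtb_train, extra, step=10_000, train_and_eval=None):
    """Train with the PDTB data plus an increasing slice of extra samples
    and record accuracy at each size (a plateau was observed around 40,000
    intra-sentential samples)."""
    results = []
    for n in range(0, len(extra) + 1, step):
        acc = train_and_eval(pdtb_train + extra[:n])
        results.append((n, acc))
    return results

# toy stand-in: accuracy rises with sample count, then saturates
fake_eval = lambda data: min(45.9, 34.0 + 0.0003 * len(data))
curve = learning_curve([0] * 12_000, [0] * 50_000, train_and_eval=fake_eval)
print(curve[0], curve[-1])
```

Plotting such a curve is what reveals the point of diminishing returns, i.e. where distributional mismatch with the PDTB starts to outweigh the benefit of more data.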
{
"text": "In fact, this sample size produces the highest averaged classification accuracy of 45.87%, which is even higher than our model which includes all extracted samples. A possible reason for not seeing further improvement in adding more intra-sentential examples is the difference in distribution and properties of these extra samples compared to the PDTB data. We also experimented with training on the parallel-text samples only (i.e., without any PDTB training samples), but the result was worse than using PDTB only. Adding more inter-sentential samples might further improve the performance, as these instances are closer to the PDTB data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Quantitative Analysis",
"sec_num": "6.2"
},
{
"text": "Our proposed method uses back-translated target DCs to label implicit relations. The quality of the relation label is intrinsically subject to the translation policy of the parallel corpora and also extrinsically subject to the accuracy of explicit DC classification by the end-to-end parser and the quality of the MT system. For example, a particularly high proportion of Contingency.Condition relations is found in the intra-sentential samples. Analyzing these samples, we found numerous instances where the word 'if' is wrongly identified as a DC (e.g. He asked if it was correct.). It is not surprising to have noisy samples extracted because limited screening strategy is applied in the current method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodological Discussion",
"sec_num": "6.3"
},
{
"text": "As a reference for the quality of the relation label produced, we analysed the intra-sentential relations in the parallel corpus that are explicit on the source side and also in the (back-)translation. We found that 68% of the originally explicit DCs are (back-)translated to the same explicit DCs and 75% to DCs of the same level-2 sense, according to automatic explicit DC classification of the endto-end parser.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Methodological Discussion",
"sec_num": "6.3"
},
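The agreement analysis above can be computed as a simple match rate over aligned connective pairs. This is a sketch with made-up example pairs; the real counts come from the end-to-end parser's output on the parallel corpus:

```python
def agreement(pairs):
    """pairs: list of (source_dc, source_sense, backtrans_dc, backtrans_sense).
    Returns the fraction of identical connectives and the fraction of
    matching level-2 senses between source and back-translation."""
    same_dc = sum(s_dc == b_dc for s_dc, _, b_dc, _ in pairs)
    same_sense = sum(s_sn == b_sn for _, s_sn, _, b_sn in pairs)
    n = len(pairs)
    return same_dc / n, same_sense / n

# made-up aligned pairs for illustration only
pairs = [
    ("because", "Contingency.Cause",     "because", "Contingency.Cause"),
    ("but",     "Comparison.Contrast",   "however", "Comparison.Contrast"),
    ("if",      "Contingency.Condition", "if",      "Contingency.Condition"),
    ("so",      "Contingency.Cause",     "then",    "Temporal.Synchrony"),
]
dc_rate, sense_rate = agreement(pairs)
print(dc_rate, sense_rate)  # 0.5 0.75
```

Note that the sense-match rate can exceed the connective-match rate (as with the reported 75% vs. 68%), because different connectives such as "but" and "however" can signal the same level-2 sense.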
{
"text": "We showed that explicitation during human translation can provide a valuable signal for expanding datasets for implicit discourse relations. As the expansion of training instances is orthogonal to the mechanism of DR classification, this method can be applied to improve any methods of implicit DR classification.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future work",
"sec_num": "7"
},
{
"text": "We see plenty of room for further improvement by controlling the sample quality, such as selection based on explicit discourse connective identification confidence, restraining the discourse relation structure, identifying Arg1 and Arg2 such that approaches which use two separate representations for arguments instead of a single concatenated vector become possible, reducing languagespecific bias by mining from parallel corpora of other language pairs, and fine-tuning the MT system for discourse connective translation. We leave the exploration of these areas to future work. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future work",
"sec_num": "7"
},
{
"text": "Relations signaled by Alternative Lexicalization are counted as implicit relations and extracted as samples. However, NoRel and EntRel are excluded.2 This restriction avoids mis-alignment of relations between source and target texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "All corpora are available at http://cl.haifa.ac. il/projects/translationese/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Case sensitive BLEU implemented in mteval-v13a.pl. Test sets available at http://www.statmt.org/ wmt15/translation-task.html 5 The non-explicit sense classification module of this parser is thus not used in the proposed method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A dataset containing these additional instances will be made available to researchers upon publication of the paper.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their careful reading, valuable and insightful comments. This work was funded by the German Research Foundation (DFG) as part of SFB 1102 \"Information Density and Linguistic Encoding\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Implicitness of discourse relations",
"authors": [
{
"first": "Torabi",
"middle": [],
"last": "Fatemeh",
"suffix": ""
},
{
"first": "Vera",
"middle": [],
"last": "Asr",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Demberg",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COLING 2012. The COLING 2012 Organizing Committee",
"volume": "",
"issue": "",
"pages": "2669--2684",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fatemeh Torabi Asr and Vera Demberg. 2012. Im- plicitness of discourse relations. In Proceedings of COLING 2012. The COLING 2012 Organizing Committee, Mumbai, India, pages 2669-2684.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Aggregated word pair features for implicit discourse relation disambiguation",
"authors": [
{
"first": "Or",
"middle": [],
"last": "Biran",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "69--73",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Or Biran and Kathleen McKeown. 2013. Aggregated word pair features for implicit discourse relation dis- ambiguation. In Proceedings of the 51st Annual Meeting of the Association for Computational Lin- guistics. Association for Computational Linguistics, Sofia, Bulgaria, pages 69-73.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Annotating the meaning of discourse connectives by looking at their translation: The translationspotting technique",
"authors": [
{
"first": "Bruno",
"middle": [],
"last": "Cartoni",
"suffix": ""
},
{
"first": "Sandrine",
"middle": [],
"last": "Zufferey",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Meyer",
"suffix": ""
}
],
"year": 2013,
"venue": "D&D",
"volume": "4",
"issue": "2",
"pages": "65--86",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bruno Cartoni, Sandrine Zufferey, and Thomas Meyer. 2013. Annotating the meaning of discourse connec- tives by looking at their translation: The translation- spotting technique. D&D 4(2):65-86.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Implicit discourse relation detection via a deep architecture with gated relevance network",
"authors": [
{
"first": "Jifan",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Pengfei",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Xipeng",
"middle": [],
"last": "Qiu",
"suffix": ""
},
{
"first": "Xuanjing",
"middle": [],
"last": "Huang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1726--1735",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jifan Chen, Qi Zhang, Pengfei Liu, Xipeng Qiu, and Xuanjing Huang. 2016. Implicit discourse relation detection via a deep architecture with gated rele- vance network. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics. Association for Computational Linguistics, pages 1726-1735.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Ontology-driven discourse analysis for information extraction",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Cimiano",
"suffix": ""
},
{
"first": "Uwe",
"middle": [],
"last": "Reyle",
"suffix": ""
},
{
"first": "Jasmin\u0161ari\u0107",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2005,
"venue": "Data & Knowledge Engineering",
"volume": "55",
"issue": "1",
"pages": "59--83",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Cimiano, Uwe Reyle, and Jasmin\u0160ari\u0107. 2005. Ontology-driven discourse analysis for informa- tion extraction. Data & Knowledge Engineering 55(1):59-83.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Abstractive summarization of product reviews using discourse structure",
"authors": [
{
"first": "Shima",
"middle": [],
"last": "Gerani",
"suffix": ""
},
{
"first": "Yashar",
"middle": [],
"last": "Mehdad",
"suffix": ""
},
{
"first": "Giuseppe",
"middle": [],
"last": "Carenini",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"T"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Bita",
"middle": [],
"last": "Nejat",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1602--1613",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shima Gerani, Yashar Mehdad, Giuseppe Carenini, Raymond T. Ng, and Bita Nejat. 2014. Abstractive summarization of product reviews using discourse structure. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Pro- cessing. Association for Computational Linguistics, pages 1602-1613.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Using discourse structure improves machine translation evaluation",
"authors": [
{
"first": "Francisco",
"middle": [],
"last": "Guzm\u00e1n",
"suffix": ""
},
{
"first": "Shafiq",
"middle": [
"R."
],
"last": "Joty",
"suffix": ""
},
{
"first": "Llu\u00eds",
"middle": [],
"last": "M\u00e0rquez",
"suffix": ""
},
{
"first": "Preslav",
"middle": [],
"last": "Nakov",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "687--698",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Francisco Guzm\u00e1n, Shafiq R. Joty, Llu\u00eds M\u00e0rquez, and Preslav Nakov. 2014. Using discourse structure im- proves machine translation evaluation. In Proceed- ings of the 52nd Annual Meeting of the Associa- tion for Computational Linguistics. Association for Computational Linguistics, pages 687-698.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A semi-supervised approach to improve classification of infrequent discourse relations using feature vector extension",
"authors": [
{
"first": "Hugo",
"middle": [],
"last": "Hernault",
"suffix": ""
},
{
"first": "Danushka",
"middle": [],
"last": "Bollegala",
"suffix": ""
},
{
"first": "Mitsuru",
"middle": [],
"last": "Ishizuka",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "399--409",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hugo Hernault, Danushka Bollegala, and Mitsuru Ishizuka. 2010. A semi-supervised approach to im- prove classification of infrequent discourse relations using feature vector extension. In Proceedings of the 2010 Conference on Empirical Methods in Nat- ural Language Processing. Association for Compu- tational Linguistics, pages 399-409.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Identifying causal relation using parallel wikipedia articles",
"authors": [
{
"first": "Christopher",
"middle": [],
"last": "Hidey",
"suffix": ""
},
{
"first": "Kathleen",
"middle": [],
"last": "Mckeown",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1424--1433",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christopher Hidey and Kathleen McKeown. 2016. Identifying causal relation using parallel wikipedia articles. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics, pages 1424-1433.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Factors influencing the implicitation of discourse relations across languages",
"authors": [
{
"first": "Jet",
"middle": [],
"last": "Hoek",
"suffix": ""
},
{
"first": "Sandrine",
"middle": [],
"last": "Zufferey",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings 11th Joint ACL-ISO Workshop on Interoperable Semantic Annotation (isa-11). TiCC, Tilburg center for Cognition and Communication",
"volume": "",
"issue": "",
"pages": "39--45",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jet Hoek and Sandrine Zufferey. 2015. Factors in- fluencing the implicitation of discourse relations across languages. In Proceedings 11th Joint ACL- ISO Workshop on Interoperable Semantic Annota- tion (isa-11). TiCC, Tilburg center for Cognition and Communication, pages 39-45.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Discourse complements lexical semantics for nonfactoid answer reranking",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Jansen",
"suffix": ""
},
{
"first": "Mihai",
"middle": [],
"last": "Surdeanu",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Clark",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "977--986",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Jansen, Mihai Surdeanu, and Peter Clark. 2014. Discourse complements lexical semantics for non- factoid answer reranking. In Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics. Association for Computational Linguistics, pages 977-986.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "One vector is not enough: Entity-augmented distributed semantics for discourse relations",
"authors": [
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "3",
"issue": "",
"pages": "329--344",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yangfeng Ji and Jacob Eisenstein. 2015. One vector is not enough: Entity-augmented distributed semantics for discourse relations. Transactions of the Associa- tion for Computational Linguistics 3:329-344.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A latent variable recurrent neural network for discourse relation language models",
"authors": [
{
"first": "Yangfeng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Eisenstein",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "",
"issue": "",
"pages": "332--342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yangfeng Ji, Gholamreza Haffari, and Jacob Eisen- stein. 2016. A latent variable recurrent neural net- work for discourse relation language models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics on Human Language Technology. Association for Computational Linguistics, pages 332-342.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Opennmt: Open-source toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1701.02810"
]
},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. Opennmt: Open-source toolkit for neural machine translation. arXiv preprint arXiv:1701.02810 .",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Translationese and its dialects",
"authors": [
{
"first": "Moshe",
"middle": [],
"last": "Koppel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Ordan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1318--1326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moshe Koppel and Noam Ordan. 2011. Translationese and its dialects. In Proceedings of the 49th Annual Meeting of the Association for Computational Lin- guistics. Association for Computational Linguistics, pages 1318-1326.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Inducing discourse connectives from parallel texts",
"authors": [
{
"first": "Majid",
"middle": [],
"last": "Laali",
"suffix": ""
},
{
"first": "Leila",
"middle": [],
"last": "Kosseim",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 25th International Conference on Computational Linguistics (COLING-2014)",
"volume": "",
"issue": "",
"pages": "610--619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Majid Laali and Leila Kosseim. 2014. Inducing dis- course connectives from parallel texts. In Proceed- ings of the 25th International Conference on Com- putational Linguistics (COLING-2014). pages 610- 619.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Leveraging synthetic discourse data via multi-task learning for implicit discourse relation recognition",
"authors": [
{
"first": "Man",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zheng-Yu",
"middle": [],
"last": "Niu",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "476--485",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Man Lan, Yu Xu, Zheng-Yu Niu, et al. 2013. Leverag- ing synthetic discourse data via multi-task learning for implicit discourse relation recognition. In Pro- ceedings of the 51st Annual Meeting of the Associa- tion for Computational Linguistics. Association for Computational Linguistics, pages 476-485.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "A discourse-driven content model for summarising scientific articles evaluated in a complex question answering task",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Liakata",
"suffix": ""
},
{
"first": "Simon",
"middle": [],
"last": "Dobnik",
"suffix": ""
},
{
"first": "Shyamasree",
"middle": [],
"last": "Saha",
"suffix": ""
},
{
"first": "Colin",
"middle": [
"R"
],
"last": "Batchelor",
"suffix": ""
},
{
"first": "Dietrich",
"middle": [],
"last": "Rebholz-Schuhmann",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "747--757",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Liakata, Simon Dobnik, Shyamasree Saha, Colin R. Batchelor, and Dietrich Rebholz- Schuhmann. 2013. A discourse-driven content model for summarising scientific articles evaluated in a complex question answering task. In Proceed- ings of the 2013 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, pages 747-757.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Recognizing implicit discourse relations in the penn discourse treebank",
"authors": [
{
"first": "Ziheng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "343--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziheng Lin, Min-Yen Kan, and Hwee Tou Ng. 2009. Recognizing implicit discourse relations in the penn discourse treebank. In Proceedings of the 2009 Con- ference on Empirical Methods in Natural Language Processing. Association for Computational Linguis- tics, Singapore, pages 343-351.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A pdtb-styled end-to-end discourse parser",
"authors": [
{
"first": "Ziheng",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Hwee",
"middle": [
"Tou"
],
"last": "Ng",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
}
],
"year": 2014,
"venue": "Natural Language Engineering",
"volume": "20",
"issue": "02",
"pages": "151--184",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2014. A pdtb-styled end-to-end discourse parser. Natural Language Engineering 20(02):151-184.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "An unsupervised approach to recognizing discourse relations",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
},
{
"first": "Abdessamad",
"middle": [],
"last": "Echihabi",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "368--375",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Marcu and Abdessamad Echihabi. 2002. An unsupervised approach to recognizing discourse re- lations. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics. As- sociation for Computational Linguistics, pages 368- 375.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Comparing lexical, acoustic/prosodic, structural and discourse features for speech summarization",
"authors": [
{
"first": "Sameer",
"middle": [],
"last": "Maskey",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hirschberg",
"suffix": ""
}
],
"year": 2005,
"venue": "Ninth European Conference on Speech Communication and Technology",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sameer Maskey and Julia Hirschberg. 2005. Com- paring lexical, acoustic/prosodic, structural and dis- course features for speech summarization. In Ninth European Conference on Speech Communication and Technology.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Disambiguating discourse connectives for statistical machine translation",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Meyer",
"suffix": ""
},
{
"first": "Najeh",
"middle": [],
"last": "Hajlaoui",
"suffix": ""
},
{
"first": "Andrei",
"middle": [],
"last": "Popescu-Belis",
"suffix": ""
}
],
"year": 2015,
"venue": "Transactions on Audio, Speech, and Language Processing",
"volume": "23",
"issue": "",
"pages": "1184--1197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Meyer, Najeh Hajlaoui, and Andrei Popescu- Belis. 2015. Disambiguating discourse connec- tives for statistical machine translation. Transac- tions on Audio, Speech, and Language Processing 23(7):1184-1197.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Semisupervised learning of causal relations in biomedical scientific discourse",
"authors": [
{
"first": "Claudiu",
"middle": [],
"last": "Mih\u0203il\u0203",
"suffix": ""
},
{
"first": "Sophia",
"middle": [],
"last": "Ananiadou",
"suffix": ""
}
],
"year": 2014,
"venue": "Biomedical engineering online",
"volume": "13",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Claudiu Mih\u0203il\u0203 and Sophia Ananiadou. 2014. Semi- supervised learning of causal relations in biomedical scientific discourse. Biomedical engineering online 13(2):1.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Greg",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Cor- rado, and Jeff Dean. 2013. Distributed representa- tions of words and phrases and their compositional- ity. In Advances in neural information processing systems. pages 3111-3119.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Improving implicit discourse relation recognition through feature set optimization",
"authors": [
{
"first": "Joonsuk",
"middle": [],
"last": "Park",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "108--112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joonsuk Park and Claire Cardie. 2012. Improving im- plicit discourse relation recognition through feature set optimization. In Proceedings of the 13th Annual Meeting of the Special Interest Group on Discourse and Dialogue. Association for Computational Lin- guistics, pages 108-112.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Automatic sense prediction for implicit discourse relations in text",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Annie",
"middle": [],
"last": "Louis",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "683--691",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic sense prediction for implicit discourse re- lations in text. In Proceedings of the 47th Annual Meeting of the Association for Computational Lin- guistics. Association for Computational Linguistics, Suntec, Singapore, pages 683-691.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Easily identifiable discourse relations",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Mridhula",
"middle": [],
"last": "Raghupathy",
"suffix": ""
},
{
"first": "Hena",
"middle": [],
"last": "Mehta",
"suffix": ""
},
{
"first": "Ani",
"middle": [],
"last": "Nenkova",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics (COLING-2008). Manchester",
"volume": "",
"issue": "",
"pages": "85--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Emily Pitler, Mridhula Raghupathy, Hena Mehta, Ani Nenkova, Alan Lee, and Aravind K. Joshi. 2008. Easily identifiable discourse relations. In Proceed- ings of the 22nd International Conference on Com- putational Linguistics (COLING-2008). Manch- ester, UK, pages 85-88.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The penn discourse treebank 2.0",
"authors": [
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Dinesh",
"suffix": ""
},
{
"first": "Alan",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Eleni",
"middle": [],
"last": "Miltsakaki",
"suffix": ""
},
{
"first": "Livio",
"middle": [],
"last": "Robaldo",
"suffix": ""
},
{
"first": "Aravind",
"middle": [
"K"
],
"last": "Joshi",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [
"L"
],
"last": "Webber",
"suffix": ""
}
],
"year": 2008,
"venue": "LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Milt- sakaki, Livio Robaldo, Aravind K. Joshi, and Bon- nie L. Webber. 2008. The penn discourse treebank 2.0. In LREC. European Language Resources Asso- ciation, Marrakech, Morocco.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Implicit discourse relation recognition with contextaware character-enhanced embeddings",
"authors": [
{
"first": "Lianhui",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Zhisong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lianhui Qin, Zhisong Zhang, and Hai Zhao. 2016. Im- plicit discourse relation recognition with context- aware character-enhanced embeddings. In Proceed- ings of COLING 2016, the 26th International Con- ference on Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Adversarial connectiveexploiting networks for implicit discourse relation classification",
"authors": [
{
"first": "Lianhui",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Zhisong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Zhiting",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Eric",
"middle": [
"P"
],
"last": "Xing",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1006--1017",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lianhui Qin, Zhisong Zhang, Hai Zhao, Zhiting Hu, and Eric P. Xing. 2017. Adversarial connective- exploiting networks for implicit discourse relation classification. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Association for Computational Linguistics, Vancouver, Canada, pages 1006-1017.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The haifa corpus of translationese",
"authors": [
{
"first": "Ella",
"middle": [],
"last": "Rabinovich",
"suffix": ""
},
{
"first": "Shuly",
"middle": [],
"last": "Wintner",
"suffix": ""
},
{
"first": "Ofek",
"middle": [],
"last": "Luis Lewinsohn",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1509.03611"
]
},
"num": null,
"urls": [],
"raw_text": "Ella Rabinovich, Shuly Wintner, and Ofek Luis Lewin- sohn. 2015. The haifa corpus of translationese. arXiv preprint arXiv:1509.03611 .",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "A recurrent neural model with attention for the recognition of chinese implicit discourse relations",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "R\u00f6nnqvist",
"suffix": ""
},
{
"first": "Niko",
"middle": [],
"last": "Schenk",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Chiarcos",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "256--262",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R\u00f6nnqvist, Niko Schenk, and Christian Chiar- cos. 2017. A recurrent neural model with attention for the recognition of chinese implicit discourse re- lations. In Proceedings of the 55th Annual Meet- ing of the Association for Computational Linguistics (Volume 2: Short Papers). Association for Compu- tational Linguistics, Vancouver, Canada, pages 256- 262.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A systematic study of neural discourse models for implicit discourse relation",
"authors": [
{
"first": "Attapol",
"middle": [],
"last": "Rutherford",
"suffix": ""
},
{
"first": "Vera",
"middle": [],
"last": "Demberg",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "281--291",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Attapol Rutherford, Vera Demberg, and Nianwen Xue. 2017. A systematic study of neural discourse mod- els for implicit discourse relation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics. Associa- tion for Computational Linguistics, pages 281-291.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Discovering implicit discourse relations through brown cluster pair representation and coreference patterns",
"authors": [
{
"first": "Attapol",
"middle": [],
"last": "Rutherford",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "645--654",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Attapol Rutherford and Nianwen Xue. 2014. Discover- ing implicit discourse relations through brown clus- ter pair representation and coreference patterns. In Proceedings of the 14th Conference of the European Chapter of the Association for Computational Lin- guistics. Association for Computational Linguistics, pages 645-654.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Improving the inference of implicit discourse relations via classifying explicit discourse connectives",
"authors": [
{
"first": "Attapol",
"middle": [],
"last": "Rutherford",
"suffix": ""
},
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "799--808",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Attapol Rutherford and Nianwen Xue. 2015. Improv- ing the inference of implicit discourse relations via classifying explicit discourse connectives. In Pro- ceedings of the 2015 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics on Human Language Technology. Asso- ciation for Computational Linguistics, pages 799- 808.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "On the need of cross validation for discourse relation classification",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Vera",
"middle": [],
"last": "Demberg",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "150--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei Shi and Vera Demberg. 2017. On the need of cross validation for discourse relation classification. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics. Association for Computational Linguistics, pages 150-156.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Using automatically labelled examples to classify rhetorical relations: An assessment",
"authors": [
{
"first": "Caroline",
"middle": [],
"last": "Sporleder",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Lascarides",
"suffix": ""
}
],
"year": 2008,
"venue": "Natural Language Engineering",
"volume": "14",
"issue": "3",
"pages": "369--416",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caroline Sporleder and Alex Lascarides. 2008. Using automatically labelled examples to classify rhetori- cal relations: An assessment. Natural Language En- gineering 14(3):369-416.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Discovery of ambiguous and unambiguous discourse connectives via annotation projection",
"authors": [
{
"first": "Yannick",
"middle": [],
"last": "Versley",
"suffix": ""
}
],
"year": 2010,
"venue": "AEPC",
"volume": "",
"issue": "",
"pages": "83--82",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yannick Versley. 2010. Discovery of ambiguous and unambiguous discourse connectives via annotation projection. In AEPC. pages 83-82.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Word embedding for recurrent neural network based tts synthesis",
"authors": [
{
"first": "Peilu",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Frank",
"middle": [
"K"
],
"last": "Soong",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hai",
"middle": [],
"last": "Zhao",
"suffix": ""
}
],
"year": 2015,
"venue": "Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE International Conference on",
"volume": "",
"issue": "",
"pages": "4879--4883",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peilu Wang, Yao Qian, Frank K Soong, Lei He, and Hai Zhao. 2015. Word embedding for recurrent neural network based tts synthesis. In Acoustics, Speech and Signal Processing (ICASSP), 2015 IEEE Inter- national Conference on. IEEE, pages 4879-4883.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Implicit discourse relation recognition by selecting typical training examples",
"authors": [
{
"first": "Xun",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sujian",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jiwei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 24th International Conference on Computational Linguistics (COLING-2012)",
"volume": "",
"issue": "",
"pages": "2757--2772",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xun Wang, Sujian Li, Jiwei Li, and Wenjie Li. 2012. Implicit discourse relation recognition by select- ing typical training examples. In Proceedings of the 24th International Conference on Computational Linguistics (COLING-2012). pages 2757-2772.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Bilinguallyconstrained synthetic data for implicit discourse relation recognition",
"authors": [
{
"first": "Changxing",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Yidong",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yanzhou",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Jinsong",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2306--2312",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Changxing Wu, Xiaodong Shi, Yidong Chen, Yanzhou Huang, and Jinsong Su. 2016. Bilingually- constrained synthetic data for implicit discourse re- lation recognition. In Proceedings of the 2016 Con- ference on Empirical Methods in Natural Language Processing. Association for Computational Linguis- tics, pages 2306-2312.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "The conll-2015 shared task on shallow discourse parsing",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Pradhan",
"suffix": ""
},
{
"first": "Rashmi",
"middle": [],
"last": "Prasad",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Bryant",
"suffix": ""
},
{
"first": "Attapol",
"middle": [],
"last": "Rutherford",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the CoNLL-15",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, Rashmi Prasad, Christopher Bryant, and Attapol Rutherford. 2015. The conll-2015 shared task on shallow dis- course parsing. In Proceedings of the CoNLL-15",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "1--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shared Task. Association for Computational Lin- guistics, pages 1-16.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Conll 2016 shared task on multilingual shallow discourse parsing",
"authors": [
{
"first": "Nianwen",
"middle": [],
"last": "Xue",
"suffix": ""
},
{
"first": "Hwee Tou",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Attapol",
"middle": [],
"last": "Rutherford",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Webber",
"suffix": ""
},
{
"first": "Chuan",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Hongmin",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the CoNLL-16 shared task. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1--19",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nianwen Xue, Hwee Tou Ng, Attapol Rutherford, Bon- nie Webber, Chuan Wang, and Hongmin Wang. 2016. Conll 2016 shared task on multilingual shallow discourse parsing. In Proceedings of the CoNLL-16 shared task. Association for Computa- tional Linguistics, pages 1-19.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Dependency-based discourse parser for single-document summarization",
"authors": [
{
"first": "Yasuhisa",
"middle": [],
"last": "Yoshida",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1834--1839",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yasuhisa Yoshida, Jun Suzuki, Tsutomu Hirao, and Masaaki Nagata. 2014. Dependency-based dis- course parser for single-document summarization. In Proceedings of the 2014 Conference on Empiri- cal Methods in Natural Language Processing. Asso- ciation for Computational Linguistics, pages 1834- 1839.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Shallow convolutional neural network for implicit discourse relation recognition",
"authors": [
{
"first": "Biao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jinsong",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Yaojie",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Hong",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Junfeng",
"middle": [],
"last": "Yao",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2230--2235",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Biao Zhang, Jinsong Su, Deyi Xiong, Yaojie Lu, Hong Duan, and Junfeng Yao. 2015. Shallow convolu- tional neural network for implicit discourse relation recognition. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Pro- cessing. Association for Computational Linguistics, pages 2230-2235.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Cross-lingual identification of ambiguous discourse connectives for resourcepoor language",
"authors": [
{
"first": "Lanjun",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Bin",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Zhong",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Kam-Fai",
"middle": [],
"last": "Wong",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of COLING 2012: Posters. The COLING 2012 Organizing Committee",
"volume": "",
"issue": "",
"pages": "1409--1418",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lanjun Zhou, Wei Gao, Bin Li, Zhong Wei, and Kam-Fai Wong. 2012. Cross-lingual identification of ambiguous discourse connectives for resource- poor language. In Proceedings of COLING 2012: Posters. The COLING 2012 Organizing Committee, pages 1409-1418.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Attentionbased bidirectional long short-term memory networks for relation classification",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Shi",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Tian",
"suffix": ""
},
{
"first": "Zhenyu",
"middle": [],
"last": "Qi",
"suffix": ""
},
{
"first": "Bingchen",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hongwei",
"middle": [],
"last": "Hao",
"suffix": ""
},
{
"first": "Bo",
"middle": [],
"last": "Xu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "207--212",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Zhou, Wei Shi, Jun Tian, Zhenyu Qi, Bingchen Li, Hongwei Hao, and Bo Xu. 2016. Attention- based bidirectional long short-term memory net- works for relation classification. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. Association for Compu- tational Linguistics, pages 207-212.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Predicting discourse connectives for implicit discourse relation recognition",
"authors": [
{
"first": "Zhi-Min",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Yu",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zheng-Yu",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Man",
"middle": [],
"last": "Lan",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Chew Lim",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010)",
"volume": "",
"issue": "",
"pages": "1507--1514",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zhi-Min Zhou, Yu Xu, Zheng-Yu Niu, Man Lan, Jian Su, and Chew Lim Tan. 2010. Predicting discourse connectives for implicit discourse relation recog- nition. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING 2010). Association for Computational Linguistics, Beijing, China, pages 1507-1514.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Discourse connectives across languages",
"authors": [
{
"first": "Sandrine",
"middle": [],
"last": "Zufferey",
"suffix": ""
}
],
"year": 2016,
"venue": "Languages in Contrast",
"volume": "16",
"issue": "2",
"pages": "264--279",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sandrine Zufferey. 2016. Discourse connectives across languages. Languages in Contrast 16(2):264-279.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"text": "Pipeline showing how an implicit discourse relation sample, sentence pair 3-4, is extracted and labeled using a parallel corpus.",
"type_str": "figure",
"uris": null
},
"FIGREF1": {
"num": null,
"text": "Relation sense distribution of implicit relations in PDTB and the extra intra-and intersentence samples",
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"num": null,
"text": "Average and variance of classification accuracy evaluated on the PDTB test set with different sample size.",
"type_str": "figure",
"uris": null
},
"TABREF0": {
"text": "1. [The city's Campaign Finance Board has refused to pay Mr Dinkins $95,142 in matching funds] Arg1 because [his campaign records are incomplete.] Arg2 -Explicit, Contingency.Cause 2. [They desperately needed somebody who showed they cared for them, who loved them.] Arg1 [The last thing they needed was another drag-down blow.]",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF2": {
"text": "",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF5": {
"text": "Accuracy of 11-way classification of implicit discourse relations on PDTB test set and by cross validation.",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
},
"TABREF7": {
"text": "Bruno Cartoni, Sandrine Zufferey, Thomas Meyer, and Andrei Popescu-Belis. 2011. How comparable are parallel corpora? measuring the distribution of general vocabulary and connectives. In Proceedings of the 4th Workshop on Building and Using Comparable Corpora: Comparable Corpora and the Web. Association for Computational Linguistics, pages 78-86.",
"type_str": "table",
"html": null,
"num": null,
"content": "<table/>"
}
}
}
}