{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:28:33.450855Z" }, "title": "Assessing Discourse Relations in Language Generation from GPT-2", "authors": [ { "first": "Wei-Jen", "middle": [], "last": "Ko", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Texas at Austin", "location": {} }, "email": "wjko@utexas.edu" }, { "first": "Junyi", "middle": [ "Jessy" ], "last": "Li", "suffix": "", "affiliation": { "laboratory": "", "institution": "The University of Texas at Austin", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Recent advances in NLP have been attributed to the emergence of large-scale pre-trained language models. GPT-2 (Radford et al., 2019), in particular, is suited for generation tasks given its left-to-right language modeling objective, yet the linguistic quality of its generated text has largely remain unexplored. Our work takes a step in understanding GPT-2's outputs in terms of discourse coherence. We perform a comprehensive study on the validity of explicit discourse relations in GPT-2's outputs under both organic generation and fine-tuned scenarios. Results show GPT-2 does not always generate text containing valid discourse relations; nevertheless, its text is more aligned with human expectation in the fine-tuned scenario. We propose a decoupled strategy to mitigate these problems and highlight the importance of explicitly modeling discourse information.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Recent advances in NLP have been attributed to the emergence of large-scale pre-trained language models. GPT-2 (Radford et al., 2019), in particular, is suited for generation tasks given its left-to-right language modeling objective, yet the linguistic quality of its generated text has largely remain unexplored. Our work takes a step in understanding GPT-2's outputs in terms of discourse coherence. We perform a comprehensive study on the validity of explicit discourse relations in GPT-2's outputs under both organic generation and fine-tuned scenarios. Results show GPT-2 does not always generate text containing valid discourse relations; nevertheless, its text is more aligned with human expectation in the fine-tuned scenario. We propose a decoupled strategy to mitigate these problems and highlight the importance of explicitly modeling discourse information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Recent progress in NLP has been marked with the emergence of large-scale pre-trained models, e.g., ELMo (Peters et al., 2018) , BERT (Devlin et al., 2019) , and GPT-2 (Radford et al., 2019) . Among these, GPT-2 is particularly suitable in natural language generation due to its underlying left-to-right language modeling objective. Indeed, GPT-based language models have shown impressive results for open-domain dialogue generation (Golovanov et al., 2019; Zhang et al., 2020) . This has motivated investigations into GPT-2's generated text (See et al., 2019; Wallace et al., 2019) . In particular, using automatic metrics (e.g., cosine similarity, lexical diversity, sentence length), See et al. (2019) illustrated that GPT-2 has the ability to generate interesting and coherent text. 
However, analysis of GPT-2's outputs from deeper linguistic dimensions (e.g., discourse) has largely remained unexplored.", "cite_spans": [ { "start": 104, "end": 125, "text": "(Peters et al., 2018)", "ref_id": "BIBREF18" }, { "start": 133, "end": 154, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" }, { "start": 167, "end": 189, "text": "(Radford et al., 2019)", "ref_id": "BIBREF20" }, { "start": 432, "end": 456, "text": "(Golovanov et al., 2019;", "ref_id": "BIBREF7" }, { "start": 457, "end": 476, "text": "Zhang et al., 2020)", "ref_id": "BIBREF29" }, { "start": 541, "end": 559, "text": "(See et al., 2019;", "ref_id": "BIBREF23" }, { "start": 560, "end": 581, "text": "Wallace et al., 2019)", "ref_id": "BIBREF24" }, { "start": 686, "end": 703, "text": "See et al. (2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we perform the first discourse analysis of GPT-2's outputs, under both organic and fine-tuned scenarios, with the goals of understanding model behavior and pointing towards ways of improvement. We chiefly focus on discourse relations, one of the most important linguistic devices for textual coherence. Discourse relations specify the relationships between text spans, for example:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Jazz is good, but my favorite is country music.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The two clauses (also called arguments) are connected by a CONTRAST relation, as signaled by the connective but. Discourse relations are central in establishing textual coherence. For example, they create rhetorical connections between spans in the absence of anaphoric entity mentions (Lascarides and Asher, 2008). Cognitive experiments have repeatedly shown discourse relations to be highly influential in the mental processing of text (Meyer and Freedle, 1984; Horowitz, 1987; Millis et al., 1993; Sanders and Noordman, 2000). Spans joined with incorrect discourse connectives can seem logically incoherent even though they are independently grammatical:", "cite_spans": [ { "start": 286, "end": 314, "text": "(Lascarides and Asher, 2008)", "ref_id": "BIBREF13" }, { "start": 439, "end": 464, "text": "(Meyer and Freedle, 1984;", "ref_id": "BIBREF15" }, { "start": 465, "end": 480, "text": "Horowitz, 1987;", "ref_id": "BIBREF9" }, { "start": 481, "end": 501, "text": "Millis et al., 1993;", "ref_id": "BIBREF16" }, { "start": 502, "end": 529, "text": "Sanders and Noordman, 2000)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Jazz is good, because my favorite is country music.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The importance of generating good discourse connectives is recognized in prior work in NLG (Biran and McKeown, 2015; Callaway, 2003).", "cite_spans": [ { "start": 92, "end": 117, "text": "(Biran and McKeown, 2015;", "ref_id": "BIBREF1" }, { "start": 118, "end": 133, "text": "Callaway, 2003)", "ref_id": "BIBREF3" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We examine to what extent GPT-2 generates text that upholds plausible discourse relations once a discourse connective (usually 1-2 tokens) has been generated. 
We present a comprehensive analysis of discourse connectives in both fine-tuned generation (specifically, open-domain dialogue generation) and organic generation directly from GPT-2. We find that GPT-2 generates valid discourse connectives when the relation can be inferred by humans with high agreement, yet struggles to recover less obvious relations. Our manual analysis reveals that the most common connective error is that the relation signaled by the connective does not hold between the spans it connects. To this end, we propose a simple remedy: train a connective prediction model and replace incorrect connectives in a post-processing step. This method improves agreement between human and machine-generated connectives in both the fine-tuned and the organic scenarios. Collectively, our results highlight the importance of inferring discourse relations (Xue et al., 2015), and of explicitly incorporating discourse information in language models (Ji et al., 2016), to increase their downstream efficacy.", "cite_spans": [ { "start": 1016, "end": 1034, "text": "(Xue et al., 2015)", "ref_id": "BIBREF26" }, { "start": 1107, "end": 1124, "text": "(Ji et al., 2016)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Fine-tuned. We choose open-domain dialog generation as our fine-tuned scenario. The model conditions on a prompt (dialog turn) and generates a response (next turn). We use the PERSONACHAT (Zhang et al., 2018) data for the ConvAI2 challenge. We use 122,499 prompt-response pairs for training and 4,801 pairs for validation.", "cite_spans": [ { "start": 188, "end": 208, "text": "(Zhang et al., 2018)", "ref_id": "BIBREF28" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "2" }, { "text": "We fine-tune GPT-2 medium (345M parameters). For compatibility with GPT-2's pre-training, we concatenate the prompt and response (separated by a delimiter) during training. GPT-2 is fine-tuned for 3 epochs using Adam (Kingma and Ba, 2015) with a learning rate of 5e-5. The cross-entropy (language modeling) loss is only calculated for the response. At test time, the model is conditioned on the prompt (and delimiter) and generates the response. Our approach is similar to Zhang et al. (2020), and we follow Ko et al. (2019) to encourage generation of informative responses. 1 For decoding, we experimented with both top-k sampling (Fan et al., 2018) and nucleus sampling (Holtzman et al., 2019), and picked the better-performing one upon manual inspection of the validation data. We use top-k (k=10) in this scenario.", "cite_spans": [ { "start": 507, "end": 523, "text": "Ko et al. (2019)", "ref_id": "BIBREF12" }, { "start": 574, "end": 575, "text": "1", "ref_id": null }, { "start": 630, "end": 648, "text": "(Fan et al., 2018)", "ref_id": "BIBREF5" }, { "start": 670, "end": 693, "text": "(Holtzman et al., 2019)", "ref_id": "BIBREF8" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "2" }, { "text": "For quality assurance, we manually evaluate GPT-2's generated responses against SpaceFusion (Gao et al., 2019), a state-of-the-art RNN-based model, re-trained on PERSONACHAT. The evaluation is conducted on Amazon Mechanical Turk, where 5 annotators (per HIT) chose between GPT-2 and SpaceFusion responses. GPT-2 (45.5% chosen) largely outperforms SpaceFusion (16.9% chosen). For the other 37.7%, the two are tied.
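To make the fine-tuned setup of Section 2 concrete, the following is a minimal sketch of the input construction and response-only loss described above. It is an illustration assuming the HuggingFace transformers API; the delimiter string and the example dialogue are our own placeholders, not necessarily those used in our experiments.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-medium")  # 345M-parameter model
model = GPT2LMHeadModel.from_pretrained("gpt2-medium")

DELIM = " <|sep|> "  # hypothetical delimiter; the exact delimiter is a design choice

def make_example(prompt, response):
    # Concatenate prompt and response, separated by the delimiter; the
    # cross-entropy loss is computed only over the response tokens.
    prompt_ids = tokenizer.encode(prompt + DELIM)
    response_ids = tokenizer.encode(response + tokenizer.eos_token)
    input_ids = torch.tensor([prompt_ids + response_ids])
    # -100 is the loss's ignore index, masking out the prompt positions.
    labels = torch.tensor([[-100] * len(prompt_ids) + response_ids])
    return input_ids, labels

input_ids, labels = make_example("do you like music?", "yes, i love country music.")
loss = model(input_ids, labels=labels).loss  # optimized with Adam, lr 5e-5, 3 epochs
loss.backward()

# At test time, condition on the prompt (and delimiter) and decode with top-k (k=10).
context = torch.tensor([tokenizer.encode("do you like music?" + DELIM)])
output = model.generate(context, do_sample=True, top_k=10, max_length=60,
                        pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output[0]))
```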
", "cite_spans": [ { "start": 92, "end": 110, "text": "(Gao et al., 2019)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "2" }, { "text": "\"Organic\" generation. To determine to what extent GPT-2 understands the discourse functions of connectives without the effects of fine-tuning, we consider an organic scenario. In this case, we pick out utterances with explicit discourse relations in the dataset, and feed the partial utterance that approximates the first argument of an explicit discourse relation (the part before the discourse connective), along with the connective, into the GPT-2 model; we then let it continue to generate the rest of the utterance. We use PERSONACHAT to make the results more comparable to the fine-tuned scenario. 2 We again experimented with both nucleus sampling and top-k, and used nucleus sampling (p = 0.9), which performed better upon manual inspection.", "cite_spans": [ { "start": 602, "end": 603, "text": "2", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "2" }, { "text": "At a high level, our assessment strategy compares discourse connectives from GPT-2 outputs with human judgment, following existing strategies of discourse relation annotation, which ask annotators to insert connectives between text spans (Prasad et al., 2008; Scholman and Demberg, 2017; Yung et al., 2019). A discourse connective can be considered valid if humans would also insert a connective signaling the same discourse relation when the connective is masked.", "cite_spans": [ { "start": 238, "end": 259, "text": "(Prasad et al., 2008;", "ref_id": "BIBREF19" }, { "start": 260, "end": 287, "text": "Scholman and Demberg, 2017;", "ref_id": "BIBREF22" }, { "start": 288, "end": 306, "text": "Yung et al., 2019)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Assessing explicit discourse relations", "sec_num": "3" }, { "text": "Extracting sentences with discourse connectives. We follow prior work (Braud and Denis, 2016; Ma et al., 2019) in the use of heuristics to extract sentences with discourse connectives, using a list of the 11 connectives most frequently observed in PERSONACHAT: after, and, because, before, but, if, since, so, though, when, while. Specifically, a clause (using verbs as approximations) needs to appear before and after the connective; the connective cannot be immediately followed by punctuation; and only \"and\" and \"but\" can follow a period. We remove instances of \"so\" immediately followed by an adjective or adverb. Upon manual inspection of a random sample of 133 extracted sentences, 100% contained an explicit discourse relation. A sketch of these heuristics follows below.", "cite_spans": [ { "start": 70, "end": 93, "text": "(Braud and Denis, 2016;", "ref_id": "BIBREF2" }, { "start": 94, "end": 110, "text": "Ma et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Assessing explicit discourse relations", "sec_num": "3" },
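The following is a hypothetical re-implementation of the extraction heuristics just described (using NLTK's POS tagger for the verb approximation; the exact rules and tokenization used for the paper may differ):

```python
import re
import nltk  # assumes nltk.download("punkt") and nltk.download("averaged_perceptron_tagger")

CONNECTIVES = ["after", "and", "because", "before", "but", "if",
               "since", "so", "though", "when", "while"]

def has_verb(tokens):
    # Approximate "contains a clause" by the presence of a verb.
    return any(tag.startswith("VB") for _, tag in nltk.pos_tag(tokens))

def extract_connective(sentence):
    tokens = nltk.word_tokenize(sentence.lower())
    for i, tok in enumerate(tokens):
        if tok not in CONNECTIVES:
            continue
        before, after = tokens[:i], tokens[i + 1:]
        # A clause must appear both before and after the connective.
        if not (has_verb(before) and has_verb(after)):
            continue
        # The connective cannot be immediately followed by punctuation.
        if after and re.fullmatch(r"\W+", after[0]):
            continue
        # Only "and" and "but" may follow a period.
        if before and before[-1] == "." and tok not in ("and", "but"):
            continue
        # Drop "so" immediately followed by an adjective or adverb ("so happy").
        if tok == "so" and after and nltk.pos_tag(after[:1])[0][1] in ("JJ", "RB"):
            continue
        return tok, i  # if multiple connectives exist, only the first is considered
    return None

print(extract_connective("i love jazz but my favorite is country music."))
```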
{ "text": "In the PERSONACHAT training set, \u223c11% of the responses contain one of the connectives. In contrast, the fine-tuned model generates a connective in 26% of all responses, and the organic model in 15%.

Table 1: % of sentences with a particular discourse connective, of all sentences that contain a connective.
              after   and   because  before  but    if   since   so   though  when  while
PERSONACHAT    1.4   40.7     4.2     1.1   28.5   4.4    2.8   4.8    1.1     8.8   2.1
Fine-tuned     0.5   45.7     1.7     0.4   35.9   1.6    2.6   3.7    0.2     5.3   2.4
Organic        0.5   51.4     4.4     1.0   22.1   5.7    1.5   5.8    0.7     5.1   1.8", "cite_spans": [], "ref_spans": [ { "start": 372, "end": 379, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Assessing explicit discourse relations", "sec_num": "3" }, { "text": "The increase in percentage is likely because connectives are frequent words in the corpus. Table 1 shows the relative frequencies of these connectives. Notably, the distribution of connectives is skewed, with \"and\" and \"but\" appearing much more often than other connectives, a characteristic similar to other collected examples of discourse relations in the conversation domain (Ma et al., 2019).", "cite_spans": [ { "start": 424, "end": 441, "text": "(Ma et al., 2019)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 141, "end": 148, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Assessing explicit discourse relations", "sec_num": "3" }, { "text": "Annotating discourse relations. To assess whether GPT-2 generates valid discourse connectives, we compare the relations signaled by these connectives with the relations that humans judge to hold given the rest of the sentence, as in a masked language modeling task. Specifically, for each output sentence that contains a discourse connective, we mask the connective 3 and show the rest of the sentence to annotators (in the case of dialogue generation, we also show the prompt). They are asked to fill in the blank with a connective that most naturally expresses the relation between the arguments, or NONE if they think the two segments are not related. This type of insertion has previously been used to crowdsource discourse relations (Yung et al., 2019; Scholman and Demberg, 2017). To reduce label sparsity, we group the connectives into the four top-level discourse relations in the Penn Discourse Treebank (Prasad et al., 2008) (contingency, contrast, expansion, temporal), and the annotators are asked to choose a group if it contains the connective they think most appropriately fills the blank. To further help annotators, we included unambiguous synonyms of connectives to anchor the relations. For ambiguous connectives, we include them under all relations they can signal. The specific groupings are listed below:", "cite_spans": [ { "start": 718, "end": 737, "text": "(Yung et al., 2019;", "ref_id": "BIBREF27" }, { "start": 738, "end": 765, "text": "Scholman and Demberg, 2017)", "ref_id": "BIBREF22" }, { "start": 894, "end": 915, "text": "(Prasad et al., 2008)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Assessing explicit discourse relations", "sec_num": "3" }, { "text": "\u2022 because, therefore, if, so, since (CONTINGENCY)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Assessing explicit discourse relations", "sec_num": "3" }, { "text": "\u2022 but, although, though, however, whereas, while (CONTRAST) [Footnote 3: The workers saw an underlined blank space for the mask. If multiple connectives exist, we only consider the first one in this work.]
\u2022 before, after, when, since, while (TEMPORAL)", "cite_spans": [ { "start": 60, "end": 61, "text": "3", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Assessing explicit discourse relations", "sec_num": "3" }, { "text": "\u2022 and, in addition (EXPANSION)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Assessing explicit discourse relations", "sec_num": "3" }, { "text": "We also provide a NONE option for cases where the annotator cannot find a suitable connective or judges the two text spans to be unrelated. We use Amazon Mechanical Turk to crowdsource annotations for 1.2K sentences with discourse connectives for each of the organic and fine-tuned scenarios. Each sentence is annotated by five workers. As quality control, we only allow workers in the US who have completed more than 500 HITs with an acceptance rate of >98%. Table 2 shows the percentage of sentences whose discourse relation is agreed upon by 5, 4, and 3 workers; Table 3 shows the frequency distribution of majority relations (those agreed upon by \u2265 3 workers). For the fine-tuned case, 89.7% of the sentences have a majority relation; inter-annotator agreement measured by Krippendorff's alpha is 0.508, indicating moderate agreement (Artstein and Poesio, 2008). This shows that in most cases, readers are able to infer a discourse relation between the given spans of text, and they do so consistently. Similarly, in the organic case, 83.5% of the sentences have a majority relation. However, far fewer relations are agreed upon by \u2265 4 workers, and Krippendorff's alpha is lower, at 0.382. After adjudicating 70 examples with no majority, we find that the lower inter-annotator agreement likely stems from the fact that more than one relation can often hold; in other cases, the quality of the generated text is low.

(Table 4: % of connectives in generated texts that are consistent with human annotation, stratified by the # of annotators agreeing on the relation.)", "cite_spans": [ { "start": 822, "end": 849, "text": "(Artstein and Poesio, 2008)", "ref_id": "BIBREF0" } ], "ref_spans": [ { "start": 443, "end": 450, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 549, "end": 556, "text": "Table 3", "ref_id": "TABREF2" }, { "start": 1242, "end": 1249, "text": "Table 4", "ref_id": null } ], "eq_spans": [], "section": "Assessing explicit discourse relations", "sec_num": "3" }, { "text": "Assessment results. Table 4 shows the percentage of sentences where the connective in the generated text agrees with the majority relation annotated by humans; we also show the results stratified by how many annotators agree on the relation. For the connectives since and while, which can signal two relations, we count the model as correct if humans annotate either relation (a sketch of this scoring follows below). The results reveal that a wrong connective can be a prominent source of error in GPT-2 generation, though the fine-tuned model agrees better with humans. Notably, for relations on which humans agree more consistently, the models also generate correct relations more often. This hints that GPT-2 captures obvious, unambiguous relations better. 
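To make this consistency check concrete, here is a minimal sketch (our own illustrative re-implementation): each connective is mapped to the relation group(s) it can signal, per the bullets above, and a generated connective counts as correct if any of its relations matches the human majority label.

```python
# Map each of the 11 extracted connectives to its PDTB top-level relation(s);
# "since" and "while" are credited for either relation they can signal.
RELATIONS = {
    "because": {"contingency"}, "if": {"contingency"}, "so": {"contingency"},
    "but": {"contrast"}, "though": {"contrast"},
    "before": {"temporal"}, "after": {"temporal"}, "when": {"temporal"},
    "and": {"expansion"},
    "since": {"contingency", "temporal"},
    "while": {"contrast", "temporal"},
}

def is_consistent(generated_connective, majority_relation):
    return majority_relation in RELATIONS.get(generated_connective, set())

# e.g., a sentence where GPT-2 generated "since" and the annotators' majority
# label is "temporal" counts as consistent:
print(is_consistent("since", "temporal"))  # True
```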
Figure 1 shows a confusion matrix comparing human-labeled relations (where at least 3 annotators agree) with GPT-2-generated ones.", "cite_spans": [], "ref_spans": [ { "start": 20, "end": 27, "text": "Table 4", "ref_id": null }, { "start": 720, "end": 728, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Assessing explicit discourse relations", "sec_num": "3" }, { "text": "As a first step toward fixing erroneous connectives, we propose a post-processing technique that requires neither retraining a model nor modifying the model structure: replacing generated discourse connectives with ones from a connective prediction model. This task is related to discourse relation classification (e.g., Xue et al. (2015), Nie et al. (2019)), yet there are no annotated corpora in the dialog domain. While Ma et al. (2019) mined discourse relations from conversations, using their data yielded inferior performance in preliminary experiments.", "cite_spans": [ { "start": 311, "end": 328, "text": "Xue et al. (2015)", "ref_id": "BIBREF26" }, { "start": 331, "end": 348, "text": "Nie et al. (2019)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Fixing discourse connectives", "sec_num": "4" }, { "text": "Connective prediction model. We train a model to predict the masked discourse connective given the rest of the sentence, or NONE if no relation holds. For training, we extract 1 million sentences from Reddit that contain discourse connectives, using the heuristics in Section 3. We restrict the length of sentences to 7-25 tokens, similar to that in PERSONACHAT. The model is fine-tuned from the pretrained BERT-base-uncased model (Devlin et al., 2019), where the text before the connective is used as sentence A, and the text after the connective is used as sentence B. We add an additional classification layer taking the learned [CLS] representation as input. To obtain training data for the NONE class, we add 300K synthesized examples with sentence A and sentence B sampled from different posts, approximating the absence of discourse relations. The model is fine-tuned for 3 epochs on Reddit using a learning rate of 5e-6. The classification accuracy on the validation set of PERSONACHAT is 0.743 and macro-F1 is 0.649. In the organic setting, we directly apply this model to predict the masked connective. In the fine-tuned setting, to obtain a better model in the domain of PERSONACHAT, we fine-tune the model for 1 epoch on the training set of PERSONACHAT. The classification accuracy improved by 3% and macro-F1 by 5%. 4", "cite_spans": [ { "start": 426, "end": 447, "text": "(Devlin et al., 2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Fixing discourse connectives", "sec_num": "4" }, { "text": "Post-processing results. With this connective prediction model, we replace connectives in generated outputs with the predicted ones. We evaluate whether the predicted connectives align better with human judgments, after collapsing to discourse relation types. We treat a NONE prediction (4.4% for fine-tuned and 17.5% for organic) as an indicator that the sentence is not coherent, and resample a new sentence from the model. These cases are not included in the results. Appendix A shows several examples illustrating connectives in the generated text and those predicted by the classifier. 
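For concreteness, the connective prediction model described above can be sketched as follows. This assumes the HuggingFace transformers API; the label list mirrors Section 3's connectives plus NONE, the example argument spans are our own, and the fine-tuning loop is omitted.

```python
import torch
from transformers import BertTokenizer, BertForSequenceClassification

LABELS = ["after", "and", "because", "before", "but", "if",
          "since", "so", "though", "when", "while", "NONE"]

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# BertForSequenceClassification adds a classification layer over the [CLS] token.
model = BertForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(LABELS))

def predict_connective(text_before, text_after):
    # Text before the masked connective is sentence A; text after it is sentence B.
    inputs = tokenizer(text_before, text_after, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return LABELS[logits.argmax(dim=-1).item()]

# After fine-tuning (3 epochs on Reddit, lr 5e-6; plus 1 epoch on PERSONACHAT
# in the fine-tuned setting), the predicted connective either replaces the
# generated one or, if NONE, triggers resampling of the whole sentence.
print(predict_connective("my husband is a detective", "he loves his job"))
```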
Table 5 and Table 6 show the consistency between a connective in the sentence and its corresponding human-labeled discourse relation after post-processing, measured by macro-F1 and accuracy, respectively. We stratify results according to the agreement among human annotators. We also show the accuracy for cases where \u2265 2 annotators agree, to account for the possibility of multiple valid relations. For both the fine-tuned and organic scenarios, the predicted connectives align closer to human labels than those generated by GPT-2. Figure 2 compares the predictions of GPT-2 and the connective predictor used for post-processing (Fig. 2(a)). It illustrates the types of relations that the connective model replaced correctly (Fig. 2(b)) and incorrectly (Fig. 2(c)). This shows that the better performance of the model is not due simply to preferring the most frequent class.", "cite_spans": [], "ref_spans": [ { "start": 595, "end": 602, "text": "Table 5", "ref_id": "TABREF5" }, { "start": 607, "end": 614, "text": "Table 6", "ref_id": "TABREF7" }, { "start": 1121, "end": 1129, "text": "Figure 2", "ref_id": "FIGREF1" }, { "start": 1219, "end": 1228, "text": "Fig. 2(a)", "ref_id": "FIGREF1" }, { "start": 1315, "end": 1325, "text": "(Fig. 2(b)", "ref_id": "FIGREF1" }, { "start": 1344, "end": 1354, "text": "(Fig. 2(c)", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Fixing discourse connectives", "sec_num": "4" }, { "text": "The improvement is notably more substantial in the organic case, an indication that fine-tuning already nudges GPT-2 close to what the connective prediction model learns, leaving less room for correction. The overall improvement is likely due to the connective prediction model having access to the text both before and after the connective, which the left-to-right generation model lacks. This finding points to future work on considering stronger discourse-related signals (Ji et al., 2016) and stronger models for inferring relations.", "cite_spans": [ { "start": 443, "end": 460, "text": "(Ji et al., 2016)", "ref_id": "BIBREF10" } ], "ref_spans": [], "eq_spans": [], "section": "Fixing discourse connectives", "sec_num": "4" }, { "text": "This work presents an assessment of discourse relations in organic and fine-tuned language generation from GPT-2. We find that an understanding of discourse connectives is present in these models but is limited, especially when the relation requires more inference. We present a post-processing strategy to replace generated connectives such that they align better with human expectation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5" }, { "text": "Ko et al. (2019) used a linguistic metric, which performed better than the mutual information objective also used in Zhang et al. (2020).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We do not explicitly perform quality assurance for this scenario as we do not fine-tune GPT-2. Details of language modeling performance are discussed in Radford et al. (2019).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Note that this improvement does not translate to a better model for the organic scenario, since GPT-2's output without fine-tuning does not fall in the PERSONACHAT domain.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work was partially supported by NSF Grant IIS-1850153 and an Amazon Alexa Graduate Fellowship. 
We thank Shrey Desai, Greg Durrett, and the anonymous reviewers for their helpful feedback.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "We show several examples below for both the fine-tuned and organic scenarios. We list the text that GPT-2 generated (with the connective bolded), and the connective that our classifier predicted (on the subsequent line).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Example sentences", "sec_num": null }, { "text": "\u2022 GPT-2: I do work out at the gym but not as often.
Connective classifier: but
(In this case, GPT-2 produced a plausible connective, and the classifier also predicted the same connective.)
\u2022 GPT-2: My husband is a detective so he loves my family .
Connective classifier: and
(In this case, GPT-2 did not produce a plausible connective, and the connective classifier was able to correct it.)
\u2022 GPT-2: I 'm a housewife , but i also take care of my children
Connective classifier: but
(In this case, GPT-2 did not produce a plausible connective, and neither did the classifier.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Fine-tuned", "sec_num": null }, { "text": "\u2022 GPT-2: It was hard for me to get into college and I 'm still in a wheelchair.
Connective classifier: because
(In this case, GPT-2 did not produce a plausible connective, and the connective classifier was able to predict a more plausible one.)
\u2022 GPT-2: I agree . they insist that while they will not pursue civil or criminal action , that they have agreed to withdraw their complaints.
Connective classifier: while
(In this case, GPT-2 produced a plausible connective, and the classifier also predicted the same connective.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.2 Organic", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Inter-coder agreement for computational linguistics", "authors": [ { "first": "Ron", "middle": [], "last": "Artstein", "suffix": "" }, { "first": "Massimo", "middle": [], "last": "Poesio", "suffix": "" } ], "year": 2008, "venue": "Computational Linguistics", "volume": "34", "issue": "4", "pages": "555--596", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ron Artstein and Massimo Poesio. 2008. Inter-coder agreement for computational linguistics. Computational Linguistics, 34(4):555-596.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Discourse planning with an n-gram model of relations", "authors": [ { "first": "Or", "middle": [], "last": "Biran", "suffix": "" }, { "first": "Kathleen", "middle": [], "last": "McKeown", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1973--1977", "other_ids": {}, "num": null, "urls": [], "raw_text": "Or Biran and Kathleen McKeown. 2015. Discourse planning with an n-gram model of relations. 
In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1973-1977.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Learning connective-based word representations for implicit discourse relation identification", "authors": [ { "first": "Chlo\u00e9", "middle": [], "last": "Braud", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Denis", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "203--213", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chlo\u00e9 Braud and Pascal Denis. 2016. Learning connective-based word representations for implicit discourse relation identification. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 203-213.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Integrating discourse markers into a pipelined natural language generation architecture", "authors": [ { "first": "Charles", "middle": [ "B" ], "last": "Callaway", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "264--271", "other_ids": {}, "num": null, "urls": [], "raw_text": "Charles B. Callaway. 2003. Integrating discourse markers into a pipelined natural language generation architecture. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics, pages 264-271.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "4171--4186", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Hierarchical neural story generation", "authors": [ { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yann", "middle": [], "last": "Dauphin", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "889--898", "other_ids": {}, "num": null, "urls": [], "raw_text": "Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889-898.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Jointly optimizing diversity and relevance in neural response generation", "authors": [ { "first": "Xiang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Sungjin", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "1229--1238", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiang Gao, Sungjin Lee, Yizhe Zhang, Chris Brockett, Michel Galley, Jianfeng Gao, and Bill Dolan. 2019. Jointly optimizing diversity and relevance in neural response generation. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1229-1238.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Large-scale transfer learning for natural language generation", "authors": [ { "first": "Sergey", "middle": [], "last": "Golovanov", "suffix": "" }, { "first": "Rauf", "middle": [], "last": "Kurbanov", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Nikolenko", "suffix": "" }, { "first": "Kyryl", "middle": [], "last": "Truskovskyi", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Tselousov", "suffix": "" }, { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "6053--6058", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sergey Golovanov, Rauf Kurbanov, Sergey Nikolenko, Kyryl Truskovskyi, Alexander Tselousov, and Thomas Wolf. 2019. Large-scale transfer learning for natural language generation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6053-6058.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "The curious case of neural text degeneration", "authors": [ { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Jan", "middle": [], "last": "Buys", "suffix": "" }, { "first": "Maxwell", "middle": [], "last": "Forbes", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the Eighth International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ari Holtzman, Jan Buys, Maxwell Forbes, and Yejin Choi. 2019. The curious case of neural text degeneration. 
In Proceedings of the Eighth International Conference on Learning Representations.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Rhetorical structure in discourse processing", "authors": [ { "first": "Rosalind", "middle": [], "last": "Horowitz", "suffix": "" } ], "year": 1987, "venue": "Comprehending oral and written language", "volume": "", "issue": "", "pages": "117--160", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rosalind Horowitz. 1987. Rhetorical structure in discourse processing. In Comprehending oral and written language, pages 117-160. Academic Press.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "A latent variable recurrent neural network for discourse-driven language models", "authors": [ { "first": "Yangfeng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Gholamreza", "middle": [], "last": "Haffari", "suffix": "" }, { "first": "Jacob", "middle": [], "last": "Eisenstein", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "332--342", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yangfeng Ji, Gholamreza Haffari, and Jacob Eisenstein. 2016. A latent variable recurrent neural network for discourse-driven language models. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 332-342.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "Diederik", "middle": [ "P" ], "last": "Kingma", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Third International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the Third International Conference on Learning Representations.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Linguistically-informed specificity and semantic plausibility for dialogue generation", "authors": [ { "first": "Wei-Jen", "middle": [], "last": "Ko", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" }, { "first": "Junyi Jessy", "middle": [], "last": "Li", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "3456--3466", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei-Jen Ko, Greg Durrett, and Junyi Jessy Li. 2019. Linguistically-informed specificity and semantic plausibility for dialogue generation. 
In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3456-3466.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Segmented discourse representation theory: Dynamic semantics with discourse structure", "authors": [ { "first": "Alex", "middle": [], "last": "Lascarides", "suffix": "" }, { "first": "Nicholas", "middle": [], "last": "Asher", "suffix": "" } ], "year": 2008, "venue": "Computing meaning", "volume": "", "issue": "", "pages": "87--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Lascarides and Nicholas Asher. 2008. Segmented discourse representation theory: Dynamic semantics with discourse structure. In Computing meaning, pages 87-124. Springer.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Implicit discourse relation identification for open-domain dialogues", "authors": [ { "first": "Mingyu", "middle": [ "Derek" ], "last": "Ma", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Bowden", "suffix": "" }, { "first": "Jiaqi", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Wen", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "666--672", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mingyu Derek Ma, Kevin Bowden, Jiaqi Wu, Wen Cui, and Marilyn Walker. 2019. Implicit discourse relation identification for open-domain dialogues. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 666-672.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Effects of discourse type on recall", "authors": [ { "first": "Bonnie", "middle": [ "J", "F" ], "last": "Meyer", "suffix": "" }, { "first": "Roy", "middle": [ "O" ], "last": "Freedle", "suffix": "" } ], "year": 1984, "venue": "American Educational Research Journal", "volume": "21", "issue": "1", "pages": "121--143", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bonnie JF Meyer and Roy O Freedle. 1984. Effects of discourse type on recall. American Educational Research Journal, 21(1):121-143.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "The impact of connectives on the memory for expository texts", "authors": [ { "first": "Keith", "middle": [ "K" ], "last": "Millis", "suffix": "" }, { "first": "Arthur", "middle": [ "C" ], "last": "Graesser", "suffix": "" }, { "first": "Karl", "middle": [], "last": "Haberlandt", "suffix": "" } ], "year": 1993, "venue": "Applied Cognitive Psychology", "volume": "7", "issue": "4", "pages": "317--339", "other_ids": {}, "num": null, "urls": [], "raw_text": "Keith K Millis, Arthur C Graesser, and Karl Haberlandt. 1993. The impact of connectives on the memory for expository texts. 
Applied Cognitive Psychology, 7(4):317-339.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "DisSent: Learning sentence representations from explicit discourse relations", "authors": [ { "first": "Allen", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Erin", "middle": [], "last": "Bennett", "suffix": "" }, { "first": "Noah", "middle": [], "last": "Goodman", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "4497--4510", "other_ids": {}, "num": null, "urls": [], "raw_text": "Allen Nie, Erin Bennett, and Noah Goodman. 2019. DisSent: Learning sentence representations from explicit discourse relations. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4497-4510.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Deep contextualized word representations", "authors": [ { "first": "Matthew", "middle": [], "last": "Peters", "suffix": "" }, { "first": "Mark", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "1", "issue": "", "pages": "2227--2237", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "The Penn Discourse TreeBank 2.0", "authors": [ { "first": "Rashmi", "middle": [], "last": "Prasad", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Dinesh", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Eleni", "middle": [], "last": "Miltsakaki", "suffix": "" }, { "first": "Livio", "middle": [], "last": "Robaldo", "suffix": "" }, { "first": "Aravind", "middle": [ "K" ], "last": "Joshi", "suffix": "" }, { "first": "Bonnie", "middle": [ "L" ], "last": "Webber", "suffix": "" } ], "year": 2008, "venue": "Language Resources and Evaluation Conference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind K Joshi, and Bonnie L Webber. 2008. The Penn Discourse TreeBank 2.0. 
In Language Resources and Evaluation Conference.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Technical Report.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "The role of coherence relations and their linguistic markers in text processing", "authors": [ { "first": "Ted", "middle": [ "J", "M" ], "last": "Sanders", "suffix": "" }, { "first": "Leo", "middle": [ "G", "M" ], "last": "Noordman", "suffix": "" } ], "year": 2000, "venue": "Discourse processes", "volume": "29", "issue": "1", "pages": "37--60", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ted JM Sanders and Leo GM Noordman. 2000. The role of coherence relations and their linguistic markers in text processing. Discourse processes, 29(1):37-60.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Crowdsourcing discourse interpretations: On the influence of context and the reliability of a connective insertion task", "authors": [ { "first": "Merel", "middle": [], "last": "Scholman", "suffix": "" }, { "first": "Vera", "middle": [], "last": "Demberg", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 11th Linguistic Annotation Workshop", "volume": "", "issue": "", "pages": "24--33", "other_ids": {}, "num": null, "urls": [], "raw_text": "Merel Scholman and Vera Demberg. 2017. Crowdsourcing discourse interpretations: On the influence of context and the reliability of a connective insertion task. In Proceedings of the 11th Linguistic Annotation Workshop, pages 24-33.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Do massively pretrained language models make better storytellers?", "authors": [ { "first": "Abigail", "middle": [], "last": "See", "suffix": "" }, { "first": "Aneesh", "middle": [], "last": "Pappu", "suffix": "" }, { "first": "Rohun", "middle": [], "last": "Saxena", "suffix": "" }, { "first": "Akhila", "middle": [], "last": "Yerukola", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 23rd Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "843--861", "other_ids": {}, "num": null, "urls": [], "raw_text": "Abigail See, Aneesh Pappu, Rohun Saxena, Akhila Yerukola, and Christopher D Manning. 2019. Do massively pretrained language models make better storytellers? 
In Proceedings of the 23rd Conference on Computational Natural Language Learning, pages 843-861.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Universal adversarial triggers for attacking and analyzing NLP", "authors": [ { "first": "Eric", "middle": [], "last": "Wallace", "suffix": "" }, { "first": "Shi", "middle": [], "last": "Feng", "suffix": "" }, { "first": "Nikhil", "middle": [], "last": "Kandpal", "suffix": "" }, { "first": "Matt", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "2153--2162", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Wallace, Shi Feng, Nikhil Kandpal, Matt Gardner, and Sameer Singh. 2019. Universal adversarial triggers for attacking and analyzing NLP. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, pages 2153-2162.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "TransferTransfo: A transfer learning approach for neural network based conversational agents", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" } ], "year": 2019, "venue": "NeurIPS 2018 CAI Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. TransferTransfo: A transfer learning approach for neural network based conversational agents. In NeurIPS 2018 CAI Workshop.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "The CoNLL-2015 shared task on shallow discourse parsing", "authors": [ { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" }, { "first": "Hwee", "middle": [ "Tou" ], "last": "Ng", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "Rashmi", "middle": [], "last": "Prasad", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Bryant", "suffix": "" }, { "first": "Attapol", "middle": [], "last": "Rutherford", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the Nineteenth Conference on Computational Natural Language Learning-Shared Task", "volume": "", "issue": "", "pages": "1--16", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nianwen Xue, Hwee Tou Ng, Sameer Pradhan, Rashmi Prasad, Christopher Bryant, and Attapol Rutherford. 2015. The CoNLL-2015 shared task on shallow discourse parsing. 
In Proceedings of the Nineteenth Conference on Computational Natural Language Learning-Shared Task, pages 1-16.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Crowdsourcing discourse relation annotations by a two-step connective insertion task", "authors": [ { "first": "Frances", "middle": [], "last": "Yung", "suffix": "" }, { "first": "Vera", "middle": [], "last": "Demberg", "suffix": "" }, { "first": "Merel", "middle": [], "last": "Scholman", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 13th Linguistic Annotation Workshop", "volume": "", "issue": "", "pages": "16--25", "other_ids": {}, "num": null, "urls": [], "raw_text": "Frances Yung, Vera Demberg, and Merel Scholman. 2019. Crowdsourcing discourse relation annotations by a two-step connective insertion task. In Proceedings of the 13th Linguistic Annotation Workshop, pages 16-25.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Personalizing dialogue agents: I have a dog, do you have pets too?", "authors": [ { "first": "Saizheng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Urbanek", "suffix": "" }, { "first": "Arthur", "middle": [], "last": "Szlam", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "2204--2213", "other_ids": {}, "num": null, "urls": [], "raw_text": "Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204-2213.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "DialoGPT: Large-scale generative pre-training for conversational response generation", "authors": [ { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Siqi", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Yen-Chun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jingjing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and Bill Dolan. 2020. DialoGPT: Large-scale generative pre-training for conversational response generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "Confusion matrix for human-labeled relations vs. generated connectives (after grouping into relations). Darker color indicates more instances. Vertical axis: human-annotated relation. 
Horizontal axis: GPT-2.", "type_str": "figure", "num": null, "uris": null }, "FIGREF1": { "text": "Confusion matrix for GPT-2 (vertical axis) vs. connective prediction model (horizontal axis). Darker color indicates more instances. (a): all changed connectives; (b): sentences where the GPT-2 connectives are inconsistent with human labels, but the connective prediction model gave correct predictions; (c): sentences where the GPT-2 connectives are consistent with human labels, but the connective prediction model gave incorrect predictions. Changed connectives in the same relation class are also included.", "type_str": "figure", "num": null, "uris": null }, "TABREF1": { "type_str": "table", "html": null, "content": "
: % of sentences where the discourse relation is agreed by n \u2208 {3, 4, 5} annotators.
              Fine-tuned   Organic
contingency       6.4        12.5
temporal          5.1         6.2
contrast         35.1        27.1
conjunction      52.5        53.0
no relation       0.9         1.1
", "num": null, "text": "" }, "TABREF2": { "type_str": "table", "html": null, "content": "", "num": null, "text": "" }, "TABREF5": { "type_str": "table", "html": null, "content": "
: Consistency between human annotated and predicted discourse relations, measured in macro-F1 of the four relation types. (\u2265 n): \u2265 n annotators agree on a relation. (*): p < 0.05 on a bootstrapping test.
", "num": null, "text": "" }, "TABREF7": { "type_str": "table", "html": null, "content": "", "num": null, "text": "Consistency between human annotated and predicted discourse relations, measured in accuracy. (\u2265 n): calculated on all sentences that \u2265 n annotators agree on a relation. (*): p < 0.05 on a binomial test." } } } }