{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T01:57:29.746975Z" }, "title": "DIALOGPT : Large-Scale Generative Pre-training for Conversational Response Generation", "authors": [ { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Corporation", "location": { "settlement": "Redmond", "region": "WA", "country": "USA" } }, "email": "yizzhang@microsoft.com" }, { "first": "Siqi", "middle": [], "last": "Sun", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Corporation", "location": { "settlement": "Redmond", "region": "WA", "country": "USA" } }, "email": "siqi.sun@microsoft.com" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Corporation", "location": { "settlement": "Redmond", "region": "WA", "country": "USA" } }, "email": "mgalley@microsoft.com" }, { "first": "Yen-Chun", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Corporation", "location": { "settlement": "Redmond", "region": "WA", "country": "USA" } }, "email": "yenchen@microsoft.com" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Corporation", "location": { "settlement": "Redmond", "region": "WA", "country": "USA" } }, "email": "chrisbkt@microsoft.com" }, { "first": "Xiang", "middle": [], "last": "Gao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Corporation", "location": { "settlement": "Redmond", "region": "WA", "country": "USA" } }, "email": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Corporation", "location": { "settlement": "Redmond", "region": "WA", "country": "USA" } }, "email": "jfgao@microsoft.com" }, { "first": 
"Jingjing", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Corporation", "location": { "settlement": "Redmond", "region": "WA", "country": "USA" } }, "email": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "", "affiliation": { "laboratory": "", "institution": "Microsoft Corporation", "location": { "settlement": "Redmond", "region": "WA", "country": "USA" } }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "We present a large, tunable neural conversational response generation model, DIALOGPT (dialogue generative pre-trained transformer). Trained on 147M conversation-like exchanges extracted from Reddit comment chains over a period spanning from 2005 through 2017, DialoGPT extends the Hugging Face PyTorch transformer to attain performance close to human, in terms of both automatic and human evaluation, in single-turn dialogue settings. We show that conversational systems that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response generation and the development of more intelligent open-domain dialogue systems. * A collaboration between Microsoft Research and Microsoft Dynamics 365 AI Research.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We present a large, tunable neural conversational response generation model, DIALOGPT (dialogue generative pre-trained transformer). Trained on 147M conversation-like exchanges extracted from Reddit comment chains over a period spanning from 2005 through 2017, DialoGPT extends the Hugging Face PyTorch transformer to attain performance close to human, in terms of both automatic and human evaluation, in single-turn dialogue settings. 
We show that conversational systems that leverage DialoGPT generate more relevant, contentful and context-consistent responses than strong baseline systems. The pre-trained model and training pipeline are publicly released to facilitate research into neural response generation and the development of more intelligent open-domain dialogue systems. * A collaboration between Microsoft Research and Microsoft Dynamics 365 AI Research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "We introduce DIALOGPT, a tunable gigaword-scale neural network model for generation of conversational responses, trained on Reddit data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Recent advances in large-scale pre-training using transformer-based architectures (Radford et al., 2018; Devlin et al., 2019; Raffel et al., 2019) have achieved great empirical success. OpenAI's GPT-2 (Radford et al., 2018) , for example, has demonstrated that transformer models trained on very large datasets can capture long-term dependencies in textual data and generate text that is fluent, lexically diverse, and rich in content. Such models have the capacity to capture textual data with fine granularity and produce high-resolution output that closely emulates real-world text written by humans.", "cite_spans": [ { "start": 82, "end": 104, "text": "(Radford et al., 2018;", "ref_id": "BIBREF30" }, { "start": 105, "end": 125, "text": "Devlin et al., 2019;", "ref_id": "BIBREF11" }, { "start": 126, "end": 146, "text": "Raffel et al., 2019)", "ref_id": "BIBREF31" }, { "start": 201, "end": 223, "text": "(Radford et al., 2018)", "ref_id": "BIBREF30" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "DIALOGPT extends GPT-2 to address the challenges of conversational neural response generation. 
Neural response generation is a subcategory of text generation that shares the objective of generating natural-looking text (distinct from any training instance) that is relevant to the prompt. Modelling conversations, however, presents distinct challenges in that human dialogue, which encapsulates the possibly competing goals of two participants, is intrinsically more diverse in the range of potential responses (Li et al., 2016a; Zhang et al., 2018; Gao et al., 2019a,b) . It thus poses a greater one-to-many problem than is typical in other text generation tasks such as neural machine translation, text summarization and paraphrasing. Human conversations are also generally more informal, noisy, and, when in the form of textual chat, often contain informal abbreviations or syntactic/lexical errors.", "cite_spans": [ { "start": 512, "end": 530, "text": "(Li et al., 2016a;", "ref_id": "BIBREF23" }, { "start": 531, "end": 550, "text": "Zhang et al., 2018;", "ref_id": "BIBREF38" }, { "start": 551, "end": 571, "text": "Gao et al., 2019a,b)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Most open-domain neural response generation systems suffer from content or style inconsistency (Li et al., 2016b; Gao et al., 2019c) , lack of long-term contextual information (Serban et al., 2017) , and blandness (Li et al., 2016a; Zhang et al., 2018; Qin et al., 2019) . While these issues can be alleviated by modelling strategies specifically designed to boost information content, a transformer-based architecture like GPT-2 (Radford et al., 2018) , which uses a multi-layer self-attentive mechanism to allow fully-connected cross-attention to the full context in a computationally efficient manner, seems like a natural choice for exploring a more general solution. 
Transformer models, for example, allow long-term dependency information to be better preserved across time (Radford et al., 2018) , thereby improving content consistency. They also have higher model capacity due to their deep structure (up to 48 layers in GPT-2) and are more effective in leveraging large-scale datasets (more than 100 million training instances) than RNN-based approaches (Vaswani et al., 2017) .", "cite_spans": [ { "start": 95, "end": 113, "text": "(Li et al., 2016b;", "ref_id": "BIBREF24" }, { "start": 114, "end": 132, "text": "Gao et al., 2019c)", "ref_id": "BIBREF17" }, { "start": 176, "end": 197, "text": "(Serban et al., 2017)", "ref_id": "BIBREF33" }, { "start": 214, "end": 232, "text": "(Li et al., 2016a;", "ref_id": "BIBREF23" }, { "start": 233, "end": 252, "text": "Zhang et al., 2018;", "ref_id": "BIBREF38" }, { "start": 253, "end": 270, "text": "Qin et al., 2019)", "ref_id": "BIBREF29" }, { "start": 430, "end": 452, "text": "(Radford et al., 2018)", "ref_id": "BIBREF30" }, { "start": 782, "end": 804, "text": "(Radford et al., 2018)", "ref_id": "BIBREF30" }, { "start": 1065, "end": 1087, "text": "(Vaswani et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Like GPT-2, DIALOGPT is formulated as an autoregressive (AR) language model, and uses the multi-layer transformer as its model architecture. Unlike GPT-2, however, DIALOGPT is trained on large-scale dialogue pairs/sessions extracted from Reddit discussion chains. Our assumption is that this should enable DIALOGPT to capture the joint distribution of P (Target, Source) in conversational flow with finer granularity. In practice, this is what we observe: sentences generated by DIALOGPT are diverse and contain information specific to the source prompt, analogous to what GPT-2 generates for continuous text. 
We have evaluated the pre-trained model on a public benchmark dataset (DSTC-7), and a new 6k multi-reference test dataset extracted from Reddit postings. DIALOGPT achieves state-of-the-art results in both automatic and human evaluation, lifting performance to near-human response quality.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We have released the source code and a pre-trained model to facilitate future research. 1 Our model can be easily leveraged and adapted to new dialogue datasets, especially datasets with few training examples. The DIALOGPT package also contains an open-source training pipeline (data extraction/preparation and model training/evaluation) built upon the Huggingface PyTorch transformer (HuggingFace, 2019). 2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The dataset is extracted from comment chains scraped from Reddit spanning from 2005 through 2017. Reddit discussions can be naturally expanded as tree-structured reply chains, since a thread replying to one thread forms the root node of subsequent threads. 
We extract each path from the root node to the leaf node as a training instance containing multiple turns of dialogue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "We filter the data by removing the instances where (1) there is a URL in source or target, (2) where the target contains word repetitions of at least three words, (3) where the response does not contain at least one of the top-50 most frequent English words (e.g., \"the\", \"of\", \"a\"), since this likely indicates it is not an English sentence, (4) where the response contains special markers such as \"[\" or \"]\", as this could be markup 1 GitHub:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "https://github.com/microsoft/DialoGPT; Blog: https://aka.ms/dialogpt 2 Our model is also available via Hugging Face Transformers. https://huggingface.co/microsoft/DialoGPT-medium language, (5) where source and target sequences together are longer than 200 words, (6) where the target contains offensive language, identified by phrase matching against a large blocklist. We also excluded a large number of subreddits that had been identified as likely to contain offensive content. In addition, we aggressively filtered out blandness, e.g., removing instances where the responses contained 90% of tri-grams that have been seen more than 1000 times. Often uninformative, such responses account for about 1% of the data. 
After filtering, the dataset comprises 147,116,725 dialogue instances, totaling 1.8 billion words.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "2" }, { "text": "We trained our DIALOGPT model on the basis of the GPT-2 (Radford et al., 2018) architecture. The GPT-2 transformer model adopts the generic transformer language model (Vaswani et al., 2017) and leverages a stack of masked multi-head self-attention layers to train on massive web-text data. The text generated either from scratch or based on a user-specific prompt is realistic-looking. The success of GPT-2 demonstrates that a transformer language model is able to characterize human language data distributions at a fine-grained level, presumably due to its large model capacity and superior efficiency.", "cite_spans": [ { "start": 56, "end": 78, "text": "(Radford et al., 2018)", "ref_id": "BIBREF30" }, { "start": 166, "end": 188, "text": "(Vaswani et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Method 3.1 Model Architecture", "sec_num": "3" }, { "text": "Our model inherits from GPT-2 (Radford et al., 2018), a 12-to-48 layer transformer with layer normalization, a modified initialization scheme that accounts for model depth, and byte pair encodings (Sennrich et al., 2016) for the tokenizer. We follow OpenAI GPT-2 in modeling a multi-turn dialogue session as a long text and framing the generation task as language modeling. We first concatenate all dialog turns within a dialogue session into a long text", "cite_spans": [ { "start": 205, "end": 228, "text": "(Sennrich et al., 2016)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Method 3.1 Model Architecture", "sec_num": "3" }, { "text": "x_1, ..., x_N (N", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 3.1 Model Architecture", "sec_num": "3" }, { "text": "is the sequence length), ended by the end-of-text token. 
We denote the source sentence (dialogue history)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 3.1 Model Architecture", "sec_num": "3" }, { "text": "as S = x_1, ..., x_m and the target sentence (ground-truth response) as T = x_{m+1}, ...,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 3.1 Model Architecture", "sec_num": "3" }, { "text": "x_N . The conditional probability P (T |S) can be written as the product of a series of conditional probabilities:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 3.1 Model Architecture", "sec_num": "3" }, { "text": "p(T |S) = \u220f_{n=m+1}^{N} p(x_n | x_1, ..., x_{n-1}) (1) For a multi-turn dialogue session T_1, ..., T_K , (1) can be written as p(T_K , ..., T_2 | T_1), which is essentially the product of the conditional probabilities p(T_i | T_1, ..., T_{i-1}). Consequently, optimizing a single objective p(T_K , ..., T_2 | T_1) can be perceived as optimizing all p(T_i | T_1, ..., T_{i-1}) source-target pairs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 3.1 Model Architecture", "sec_num": "3" }, { "text": "Our implementation is based on the open-source PyTorch-transformer repository. 3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Method 3.1 Model Architecture", "sec_num": "3" }, { "text": "Open-domain text generation models are notorious for generating bland, uninformative samples. To address this problem, we implement a maximum mutual information (MMI) scoring function (Li et al., 2016a; Zhang et al., 2018) . MMI employs a pre-trained backward model to predict source sentences from given responses, i.e., P (Source|Target). We first generate a set of hypotheses using top-K sampling. Then we use the probability of P (Source|Hypothesis) to rerank all hypotheses. 
Intuitively, maximizing backward model likelihood penalizes bland hypotheses, as frequent and repetitive hypotheses can be associated with many possible queries, thus yielding a lower probability for any specific query.", "cite_spans": [ { "start": 184, "end": 202, "text": "(Li et al., 2016a;", "ref_id": "BIBREF23" }, { "start": 203, "end": 222, "text": "Zhang et al., 2018)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Mutual Information Maximization", "sec_num": "3.2" }, { "text": "We also attempted to optimize the reward R = P (Source|Hypothesis) using a policy gradient (Williams, 1992) with a sample-averaged baseline, following Zhang et al. (2018) . The validation reward could be stably improved, but unlike training under an RNN architecture, we observed that reinforcement learning (RL) training easily converges to a degenerate locally-optimal solution, where the hypothesis simply repeats the source sentence (i.e., a parroting model) and mutual information is maximized. We hypothesize that transformers can become trapped in local optima due to their strong model representation power. We leave the investigation of regularized RL training to future work.", "cite_spans": [ { "start": 149, "end": 168, "text": "Zhang et al. (2018)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "Mutual Information Maximization", "sec_num": "3.2" }, { "text": "We trained three different sizes of the model with total parameters of 117M, 345M and 762M, respectively. The model specification follows Radford et al. (2018) (Table 1) .", "cite_spans": [ { "start": 133, "end": 154, "text": "Radford et al. (2018)", "ref_id": "BIBREF30" } ], "ref_spans": [ { "start": 155, "end": 164, "text": "(Table 1)", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Experimental Details", "sec_num": "4.1" }, { "text": "Our model uses a vocabulary of 50,257 entries, and was trained on 16 Nvidia V100 machines with NVLink. 
We used the Noam learning rate scheduler with 16,000 warm-up steps. The learning rate is selected based on validation loss. Each model is trained until there is no progress in validation loss. For the small and medium models, we trained for up to 5 epochs; for the large model, for at most 3 epochs.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Details", "sec_num": "4.1" }, { "text": "Speeding up training. To accelerate the training process and accommodate GPU memory limitations, we first compress all training data into a lazy-loading database file, so that data is loaded only when needed (pre-fetching large chunks to reduce access frequency). We also leverage separate asynchronous data processes to scale the training. As a result, training time declines approximately linearly w.r.t. the number of GPUs. We further employed a dynamic batching strategy to group conversations of similar lengths into the same batch, thus increasing training throughput.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Details", "sec_num": "4.1" }, { "text": "The DSTC (Dialog System Technology Challenges) 7 track is an end-to-end conversational modeling task, 4 in which the goal is to generate conversation responses that go beyond chitchat by injecting information that is grounded in external knowledge. This task is distinct from what is commonly thought of as goal-oriented, task-oriented, or task-completion dialogs in that there is no specific or predefined goal (e.g., booking a flight, or reserving a table at a restaurant). Instead, it targets human-like interactions where the underlying goal is often ill-defined or unknown in advance, of the kind seen in work and other productive environments (e.g., brainstorming meetings) where people share information. The DSTC-7 test data contains conversation threads from Reddit data. 
In order to create a multi-reference test set, we utilized conversation sessions that contain 6 or more responses. Given other filtering criteria such as turn length, this yields a 5-reference test set of size 2208. (For each instance, one of the 6 human responses is set aside to assess human performance on this task.) Note that our training data is collected from a different time span than the test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "DSTC-7 Dialogue Generation Challenge", "sec_num": "4.2" }, { "text": "We performed automatic evaluation using standard machine translation metrics, including BLEU (Papineni et al., 2002) , METEOR (Lavie and Agarwal, 2007) , and NIST (Doddington, 2002) . NIST is a variant of BLEU that weights n-gram matches by their information gain, i.e., it indirectly penalizes uninformative n-grams. We also use Entropy (Zhang et al., 2018) and Dist-n (Li et al., 2016a) to evaluate lexical diversity. More details are provided in .", "cite_spans": [ { "start": 93, "end": 116, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF27" }, { "start": 126, "end": 151, "text": "(Lavie and Agarwal, 2007)", "ref_id": "BIBREF22" }, { "start": 163, "end": 181, "text": "(Doddington, 2002)", "ref_id": "BIBREF13" }, { "start": 338, "end": 358, "text": "(Zhang et al., 2018)", "ref_id": "BIBREF38" }, { "start": 370, "end": 388, "text": "(Li et al., 2016a)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "DSTC-7 Dialogue Generation Challenge", "sec_num": "4.2" }, { "text": "We compared DIALOGPT with our in-house competitive sequence-to-sequence model PERSONALITYCHAT based on (Li et al., 2016a) and trained on Twitter data, which has been used in production as a Cognitive Service for Microsoft Azure. 5 Table 2 summarizes the automatic evaluation results. DIALOGPT with 345M parameters and beam search achieved the highest automatic scores across most metrics. 
Scores for DIALOGPT with 345M parameters are better across the board than with 117M parameters. Beam search (with beam width 10) dramatically improves BLEU and DIST scores, and marginally improves NIST and METEOR. Note that our model is fine-tuned on source-target pairs, and does not leverage grounding information from the DSTC training set. Presumably, the model learns background information during pre-training and is unhindered by the lack of a grounding document.", "cite_spans": [ { "start": 104, "end": 122, "text": "(Li et al., 2016a)", "ref_id": "BIBREF23" } ], "ref_spans": [ { "start": 232, "end": 239, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "DSTC-7 Dialogue Generation Challenge", "sec_num": "4.2" }, { "text": "The automatic scores of DIALOGPT are higher than those for humans. This does not mean that the generation is more \"realistic\" than human, but is probably attributable to the one-to-many nature of conversation. As illustrated in Figure 1 , multiple human responses (R1-R4) can correspond well to a source utterance. Without loss of generality, suppose R1-R3 are the \"ground truth\" references that will be tested on, while R4 is the \"held-out\" human response that serves to compute a \"human\" score. In semantic space, a generated response R g from a well-trained model will presumably tend to lie in the vicinity of the geometric center 5 Project PERSONALITYCHAT: https://docs.microsoft.com/en-us/azure/cognitive-services/project-personality-chat/overview Source: I would like to report a break-in.", "cite_spans": [ { "start": 631, "end": 632, "text": "5", "ref_id": null } ], "ref_spans": [ { "start": 228, "end": 236, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "DSTC-7 Dialogue Generation Challenge", "sec_num": "4.2" }, { "text": "R1: Was anything stolen? R2: Is anyone hurt or injured? R4: Is the perpetrator still inside? 
R3: I will send someone right away. Rg: When was this break-in? Figure 1 : A generated response can surpass a human response in automatic metrics. Example responses are from Gupta et al. (2019) of all possible responses, because the training objective seeks to generate the most likely response. This may be close to the geometric mean of all training instances, thus \"averaging out\" these instances. Consequently, a generated response R g might have a lower \"semantic distance\" (manifested in higher automatic scores like BLEU) from R1-R3 than the targeted human response R4.", "cite_spans": [ { "start": 337, "end": 356, "text": "Gupta et al. (2019)", "ref_id": "BIBREF19" } ], "ref_spans": [ { "start": 227, "end": 235, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "DSTC-7 Dialogue Generation Challenge", "sec_num": "4.2" }, { "text": "We further evaluate DIALOGPT on a multi-reference test set with 6K examples. The results are shown in Table 3 . We test our method in two settings: training from scratch and fine-tuning using GPT-2 as the pre-trained model. In both settings, a larger model consistently outperforms a smaller one. Comparing training from scratch to fine-tuning from the pre-trained GPT-2 model, fine-tuning from GPT-2 gives larger performance gains for the smaller models. Again, the best system DIALOGPT (345M, w/ beam search) scores higher on BLEU than humans. Larger models trained from scratch (345M and 762M) perform comparably to those fine-tuned on GPT-2.", "cite_spans": [], "ref_spans": [ { "start": 101, "end": 108, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "A New Reddit Multi-reference Dataset", "sec_num": "4.3" }, { "text": "We perform mutual information maximization as described in Section 3.2. Specifically, we generate 16 samples for each input source sentence using top-K sampling (K = 10) with the 345M model fine-tuned from the GPT-2 medium model. 
This is followed by a re-ranking step using a backward model, which is also a 345M model fine-tuned from the GPT-2 medium model. The response that yields the lowest backward model loss is Table 3 : 6K Reddit multi-reference evaluation. \"Beam\" denotes beam search. \"Human\" represents the held-out ground truth reference.", "cite_spans": [], "ref_spans": [ { "start": 417, "end": 424, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Re-ranking The Response Using MMI", "sec_num": "4.4" }, { "text": "selected for evaluation. The results are summarized in the second-to-last row of Table 3 . It can be seen that, compared with greedy generation, MMI re-ranking produces more diverse responses with higher NIST, METEOR, Entropy and Dist scores, but with a slight drop in BLEU.", "cite_spans": [], "ref_spans": [ { "start": 81, "end": 88, "text": "Table 3", "ref_id": null } ], "eq_spans": [], "section": "Re-ranking The Response Using MMI", "sec_num": "4.4" }, { "text": "We provide sample generated dialogues in Table 4 (interactive chat) and Table 5 (a self-playing bot with user prompt). The output is based on top-K sampling. 6 Interestingly, our model exhibits the ability to address commonsense questions to some extent, presumably owing to the rich amount of information that can be learned from Reddit data. In some cases, instead of giving the \"desired\" answer, the system generates an alternative, reasonable answer. Our observations suggest that the system is able to deal with multi-turn generation better than an RNN counterpart and tends to be 6 Owing to occasional toxic outputs, we are unable to release the generation code at this time. 
more consistent with respect to context (Table 5) which one is bigger, sun or moon?", "cite_spans": [ { "start": 586, "end": 587, "text": "6", "ref_id": null } ], "ref_spans": [ { "start": 41, "end": 48, "text": "Table 4", "ref_id": "TABREF5" }, { "start": 72, "end": 79, "text": "Table 5", "ref_id": "TABREF6" }, { "start": 722, "end": 731, "text": "(Table 5)", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Generation Examples", "sec_num": "4.5" }, { "text": "The sun .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generation Examples", "sec_num": "4.5" }, { "text": "which animal has black and white stripes?", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generation Examples", "sec_num": "4.5" }, { "text": "A black and white striped cat . Outputs were judged for relevance, informativeness and how human-like the generation is, using a 3-point Likert-like scale. Judges were required to pass a qualification test, and a regime of spam detection was imposed. 8 Overall judge preferences for relevance, informativeness and human-likeness, presented as raw numbers and a percentage of the total, are shown in Table 7 . A strong preference can be observed for DialoGPT over PersonalityChat. Table 7 also suggests that the \"vanilla\" DialoGPT medium model may already be close to human response quality. Unexpectedly, we found that judges may prefer the MMI variant over human responses, probably because many of the true human responses are erratic or idiosyncratic, or are tied to internet memes that happened to be unfamiliar to the judges. 9 (See Section 4.2 for the conditions underlying this effect.) 
Further details, including a test of significance and the human evaluation template used, are provided in the Appendix.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generation Examples", "sec_num": "4.5" }, { "text": "There are several open-sourced toolkits for large-scale pre-trained transformer models. The Huggingface Conv-AI transfer learning repository (Wolf et al., 2019) contains code for training conversational AI systems with transfer learning based on the GPT-2 transformer language model, and achieves state-of-the-art performance in the ConvAI-2 dialogue competition. DLGnet (Olabiyi and Mueller, 2019) is a large transformer model trained on dialogue datasets that achieves good performance in multi-turn dialogue generation. AllenNLP is a toolkit for many natural language processing tasks, including the large-scale pre-trained bi-LSTM sentence representation learning framework ELMo. Texar (Hu et al., 2018) focuses on text generation, including style transfer and controllable generation; it includes reinforcement learning capabilities along with its sequence modelling tools. DeepPavlov (Burtsev et al., 2018) is a popular framework focusing on task-oriented dialogue; this public repository contains several demos and pre-trained models for question answering and sentiment classification. Icecaps (Shiv et al., 2019) is a response generation toolkit with techniques such as grounding on personalities or external knowledge and multi-task training. The ConvAI2 challenge (Dinan et al., 2019) has a focus on personalized conversations. ParlAI (Miller et al., 2017) is another library for developing task-oriented dialogue systems; it contains pre-trained models for a knowledge-grounded chatbot trained with crowdsourced data. 
The Text-to-Text Transformer (Raffel et al., 2019) unifies multiple text modeling tasks and achieves state-of-the-art results on various natural language generation and understanding benchmarks.", "cite_spans": [ { "start": 136, "end": 155, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF37" }, { "start": 372, "end": 399, "text": "(Olabiyi and Mueller, 2019)", "ref_id": "BIBREF26" }, { "start": 703, "end": 720, "text": "(Hu et al., 2018)", "ref_id": "BIBREF20" }, { "start": 1291, "end": 1311, "text": "(Dinan et al., 2019)", "ref_id": null }, { "start": 1362, "end": 1383, "text": "(Miller et al., 2017)", "ref_id": "BIBREF25" }, { "start": 1573, "end": 1594, "text": "(Raffel et al., 2019)", "ref_id": "BIBREF31" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "5" }, { "text": "DIALOGPT is released as a model only; the onus of decoder implementation resides with the user. Despite our efforts to minimize the amount of overtly offensive data prior to training, DIALOGPT retains the potential to generate output that may trigger offense. Output may reflect gender and other historical biases implicit in the data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations and risks", "sec_num": "6" }, { "text": "Responses generated using this model may exhibit a propensity to express agreement with propositions that are unethical, biased or offensive (or the reverse, disagreeing with otherwise ethical statements). These are known issues in current state-of-the-art end-to-end conversation models trained on large naturally-occurring datasets. A major motive for releasing DIALOGPT is to enable researchers to investigate these issues and develop mitigation strategies. 
In no case should inappropriate content generated as a result of using DIALOGPT be construed to reflect the views or values of either the authors or Microsoft Corporation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Limitations and risks", "sec_num": "6" }, { "text": "We have released an open-domain pre-trained model, DIALOGPT, trained on a massive real-world Reddit dataset. The package consists of a distributed training pipeline and several pre-trained models that can be fine-tuned to obtain a conversation model on a moderately sized customized dataset in a few hours.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "DIALOGPT is fully open-sourced and easy to deploy, allowing users to extend the pre-trained conversational system to bootstrap training using various datasets. It serves as a building block for novel applications and methodologies. Detection and control of toxic output will be a major focus of future investigation. We will investigate leveraging reinforcement learning to further improve the relevance of the generated responses and prevent the model from generating egregious responses.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "https://github.com/mgalley/DSTC7-End-to-End-Conversation-Modeling/tree/master/evaluation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We provide a live invitation-only demonstration site for a conversational agent with the toxicity controls and mutual information maximization features discussed in this paper. Check our GitHub repository for more information.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We used held-out hand-vetted data from the human and PersonalityChat datasets to provide clear-cut cases for spam prevention and judge training examples. 
We suspect that this may have helped bias the results towards the extremes. For example, one judge protested that the internet meme \"I was today years old when I realized this.\" did not seem human-like.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We would like to thank Yu Wang, Vighnesh Leonardo Shiv, Chris Quirk, and the anonymous reviewers for their helpful discussions and comments.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": null }, { "text": "Significance testing for the difference in means was performed using 10K bootstrap iterations. P-values were computed at \u03b1 = 0.05. The results are provided in Table 8. The differences between the 345M (2) and 762M (6) models are not significant. Notably also, the differences between the 345M model (2) and the human response (1) are not statistically significant. The template for human evaluation is provided in Figure 2. Table 8: Human evaluation significance test. Bold results represent differences that are NOT statistically significant. Notation: 1 -Human response; 2 -DIALOGPT 345M; 3 -PersonalityChat; 4 -DIALOGPT 345M w/ MMI; 5 -DIALOGPT 345M Beam search; 6 -DIALOGPT 762M Figure 2: Human evaluation template", "cite_spans": [], "ref_spans": [ { "start": 158, "end": 165, "text": "Table 8", "ref_id": null }, { "start": 402, "end": 410, "text": "Figure 2", "ref_id": null }, { "start": 413, "end": 420, "text": "Table 8", "ref_id": null }, { "start": 673, "end": 681, "text": "Figure 2", "ref_id": null } ], "eq_spans": [], "section": "A Additional Details of Human Evaluation", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Relevance: A and B, which is more relevant and appropriate to the immediately preceding turn? 
System A Neutral System B", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Relevance: A and B, which is more relevant and appropriate to the immediately preceding turn? System A Neutral System B", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "DialoGPT (345M) 3281 (72%) 394 (9% ) 882 (19%) PersonalityChat **** DialoGPT (345M) 2379 (40%) 527 (9% ) 3094 (52%) DialoGPT (345M, w/ MMI) ****", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "DialoGPT (345M) 3281 (72%) 394 (9% ) 882 (19%) PersonalityChat **** DialoGPT (345M) 2379 (40%) 527 (9% ) 3094 (52%) DialoGPT (345M, w/ MMI) ****", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "DialoGPT (345M) 3019 (50%) 581 (10%) 2400 (40%) DialoGPT (345M, Beam) **** DialoGPT (345M) 2726 (45%) 576 (10%) 2698 (45%) DialoGPT (762M)", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "DialoGPT (345M) 3019 (50%) 581 (10%) 2400 (40%) DialoGPT (345M, Beam) **** DialoGPT (345M) 2726 (45%) 576 (10%) 2698 (45%) DialoGPT (762M)", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "DialoGPT (345M) 2671 (45%) 513 (9% ) 2816 (47%) Human response DialoGPT (345M, w/ MMI) 2871 (48%) 522 (9%) 2607 (43%) Human response *** Informative: A and B, which is more contentful, interesting and informative? System A Neutral System B", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "DialoGPT (345M) 2671 (45%) 513 (9% ) 2816 (47%) Human response DialoGPT (345M, w/ MMI) 2871 (48%) 522 (9%) 2607 (43%) Human response *** Informative: A and B, which is more contentful, interesting and informative? 
System A Neutral System B", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "DialoGPT (345M) 3490 (77%) 206 (5%) 861 (19% ) PersonalityChat ****", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "DialoGPT (345M) 3490 (77%) 206 (5%) 861 (19% ) PersonalityChat ****", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "54%) DialoGPT (345M, w/ MMI) **** DialoGPT (345M) 3230 (54%) 362 (6%) 2408( 40%) DialoGPT (345M, Beam) ***** DialoGPT (345M) 2856 (48%) 303 (5%) 2841( 47%) DialoGPT (762M)", "authors": [ { "first": "", "middle": [], "last": "Dialogpt", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "DialoGPT (345M) 2474 (41%) 257 (4%) 3269( 54%) DialoGPT (345M, w/ MMI) **** DialoGPT (345M) 3230 (54%) 362 (6%) 2408( 40%) DialoGPT (345M, Beam) ***** DialoGPT (345M) 2856 (48%) 303 (5%) 2841( 47%) DialoGPT (762M)", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Human response **** DialoGPT (345M, w/ MMI) 3011 (50%) 234 (4%) 2755( 46%) Human response ** Human-like: A and B, which is more likely to be generated by human rather than a chatbot? System A Neutral System B", "authors": [ { "first": "", "middle": [], "last": "Dialogpt", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "DialoGPT (345M) 2722 (45%) 234 (4%) 3044( 51%) Human response **** DialoGPT (345M, w/ MMI) 3011 (50%) 234 (4%) 2755( 46%) Human response ** Human-like: A and B, which is more likely to be generated by human rather than a chatbot? 
System A Neutral System B", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "DialoGPT (345M) 3462 (76)% 196 (4%) 899 (20%) PersonalityChat ****", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "DialoGPT (345M) 3462 (76)% 196 (4%) 899 (20%) PersonalityChat ****", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "54%) DialoGPT (345M, w/ MMI) **** DialoGPT (345M) 3233 (54)% 340 (6%) 2427 (40%) DialoGPT (345M, Beam) **** DialoGPT (345M) 2847 (47)% 321 (5%) 2832 (47%) DialoGPT (762M", "authors": [ { "first": "", "middle": [], "last": "Dialogpt", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "DialoGPT (345M) 2478 (41)% 289 (5%) 3233 (54%) DialoGPT (345M, w/ MMI) **** DialoGPT (345M) 3233 (54)% 340 (6%) 2427 (40%) DialoGPT (345M, Beam) **** DialoGPT (345M) 2847 (47)% 321 (5%) 2832 (47%) DialoGPT (762M)", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "(50%) Human response *** DialoGPT (345M, w/ MMI) 2978 (50)% 241 (4%) 2781 (46%) Human response * References M. 
Burtsev", "authors": [ { "first": ";", "middle": [ "A" ], "last": "Dialogpt", "suffix": "" }, { "first": "R", "middle": [], "last": "Seliverstov", "suffix": "" }, { "first": "M", "middle": [], "last": "Airapetyan", "suffix": "" }, { "first": "D", "middle": [], "last": "Arkhipov", "suffix": "" }, { "first": "N", "middle": [], "last": "Baymurzina", "suffix": "" }, { "first": "O", "middle": [], "last": "Bushkov", "suffix": "" }, { "first": "T", "middle": [], "last": "Gureenkova", "suffix": "" }, { "first": "Y", "middle": [], "last": "Khakhulin", "suffix": "" }, { "first": "", "middle": [], "last": "Kuratov", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "DialoGPT (345M) 2716 (45)% 263 (4%) 3021 (50%) Human response *** DialoGPT (345M, w/ MMI) 2978 (50)% 241 (4%) 2781 (46%) Human response * References M. Burtsev, A. Seliverstov, R. Airapetyan, M. Arkhipov, D. Baymurzina, N. Bushkov, O. Gureenkova, T. Khakhulin, Y. 
Kuratov,", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "DeepPavlov: Open-source library for dialogue systems", "authors": [ { "first": "D", "middle": [], "last": "Kuznetsov", "suffix": "" }, { "first": "A", "middle": [], "last": "Litinsky", "suffix": "" }, { "first": "V", "middle": [], "last": "Logacheva", "suffix": "" }, { "first": "A", "middle": [], "last": "Lymar", "suffix": "" }, { "first": "V", "middle": [], "last": "Malykh", "suffix": "" }, { "first": "M", "middle": [], "last": "Petrov", "suffix": "" }, { "first": "V", "middle": [], "last": "Polulyakh", "suffix": "" }, { "first": "L", "middle": [], "last": "Pugachev", "suffix": "" }, { "first": "A", "middle": [], "last": "Sorokin", "suffix": "" }, { "first": "M", "middle": [], "last": "Vikhreva", "suffix": "" }, { "first": "M", "middle": [], "last": "Zaynutdinov", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics-System Demonstrations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "D. Kuznetsov, A. Litinsky, V. Logacheva, A. Lymar, V. Malykh, M. Petrov, V. Polulyakh, L. Pugachev, A. Sorokin, M. Vikhreva, and M. Zaynutdinov. 2018. DeepPavlov: Open-source library for dia- logue systems. 
In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics-System Demonstrations.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "BERT: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. NAACL 2019.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Automatic evaluation of machine translation quality using n-gram cooccurrence statistics", "authors": [ { "first": "George", "middle": [], "last": "Doddington", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the second international conference on Human Language Technology Research", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "George Doddington. 2002. Automatic evaluation of machine translation quality using n-gram co- occurrence statistics. In Proceedings of the second international conference on Human Language Tech- nology Research. 
Morgan Kaufmann Publishers Inc.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Grounded response generation task at DSTC7", "authors": [ { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2019, "venue": "AAAI Dialog System Technology Challenges Workshop", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Michel Galley, Chris Brockett, Xiang Gao, Jianfeng Gao, and Bill Dolan. 2019. Grounded response gen- eration task at DSTC7. In AAAI Dialog System Technology Challenges Workshop.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Neural approaches to conversational AI. Foundations and Trends in Information Retrieval", "authors": [ { "first": "J", "middle": [], "last": "Gao", "suffix": "" }, { "first": "M", "middle": [], "last": "Galley", "suffix": "" }, { "first": "L", "middle": [], "last": "Li", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Gao, M. Galley, and L. Li. 2019a. Neural approaches to conversational AI. Foundations and Trends in In- formation Retrieval.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Michel Galley, Jianfeng Gao, and Bill Dolan. 2019b. Jointly optimizing diversity and relevance in neural response generation. 
NAACL-HLT", "authors": [ { "first": "Xiang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Sungjin", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiang Gao, Sungjin Lee, Yizhe Zhang, Chris Brockett, Michel Galley, Jianfeng Gao, and Bill Dolan. 2019b. Jointly optimizing diversity and relevance in neural response generation. NAACL-HLT 2019.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Structuring latent spaces for stylized response generation", "authors": [ { "first": "Xiang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Sungjin", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2019, "venue": "EMNLP-IJCNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xiang Gao, Yizhe Zhang, Sungjin Lee, Michel Gal- ley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2019c. Structuring latent spaces for stylized re- sponse generation. 
EMNLP-IJCNLP.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "AllenNLP: A deep semantic natural language processing platform", "authors": [ { "first": "M", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "J", "middle": [], "last": "Grus", "suffix": "" }, { "first": "M", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "O", "middle": [], "last": "Tafjord", "suffix": "" }, { "first": "P", "middle": [], "last": "Dasigi", "suffix": "" }, { "first": "N", "middle": [ "F" ], "last": "Liu", "suffix": "" }, { "first": "M", "middle": [], "last": "Peters", "suffix": "" }, { "first": "M", "middle": [], "last": "Schmitz", "suffix": "" }, { "first": "L", "middle": [ "S" ], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "Proceedings of Workshop for NLP Open Source Software", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Gardner, J. Grus, M. Neumann, O. Tafjord, P. Dasigi, N. F. Liu, M. Peters, M. Schmitz, and L. S. Zettlemoyer. 2018. AllenNLP: A deep semantic nat- ural language processing platform. 
In Proceedings of Workshop for NLP Open Source Software.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Investigating evaluation of open-domain dialogue systems with human generated multiple references", "authors": [ { "first": "Prakhar", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Shikib", "middle": [], "last": "Mehri", "suffix": "" }, { "first": "Tiancheng", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Amy", "middle": [], "last": "Pavel", "suffix": "" }, { "first": "Maxine", "middle": [], "last": "Eskenazi", "suffix": "" }, { "first": "Jeffrey", "middle": [ "P" ], "last": "Bigham", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.10568" ] }, "num": null, "urls": [], "raw_text": "Prakhar Gupta, Shikib Mehri, Tiancheng Zhao, Amy Pavel, Maxine Eskenazi, and Jeffrey P Bigham. 2019. Investigating evaluation of open-domain di- alogue systems with human generated multiple ref- erences. arXiv preprint arXiv:1907.10568.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Texar: A modularized, versatile, and extensible toolkit for text generation", "authors": [ { "first": "Z", "middle": [], "last": "Hu", "suffix": "" }, { "first": "H", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Z", "middle": [], "last": "Yang", "suffix": "" }, { "first": "B", "middle": [], "last": "Tan", "suffix": "" }, { "first": "T", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "J", "middle": [], "last": "He", "suffix": "" }, { "first": "W", "middle": [], "last": "Wang", "suffix": "" }, { "first": "L", "middle": [], "last": "Qin", "suffix": "" }, { "first": "D", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Z. Hu, H. Shi, Z. Yang, B. Tan, T. Zhao, J. He, W. Wang, L. Qin, D. Wang, et al. 2018. 
Texar: A modularized, versatile, and extensible toolkit for text generation. ACL.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "PyTorch transformer repository", "authors": [ { "first": "", "middle": [], "last": "Huggingface", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "HuggingFace. 2019. PyTorch transformer reposi- tory. https://github.com/huggingface/ pytorch-transformers.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments", "authors": [ { "first": "Alon", "middle": [], "last": "Lavie", "suffix": "" }, { "first": "Abhaya", "middle": [], "last": "Agarwal", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the Second Workshop on Statistical Machine Translation", "volume": "", "issue": "", "pages": "228--231", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alon Lavie and Abhaya Agarwal. 2007. Meteor: An automatic metric for mt evaluation with high levels of correlation with human judgments. In Proceed- ings of the Second Workshop on Statistical Machine Translation, pages 228-231. Association for Com- putational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "A diversity-promoting objective function for neural conversation models", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Jianfeng Gao, and Bill Dolan. 2016a. 
A diversity-promoting objective function for neural conversation models. NAACL.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "A persona-based neural conversation model", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "P", "middle": [], "last": "Georgios", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Spithourakis", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Gao", "suffix": "" }, { "first": "", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Georgios P Spithourakis, Jianfeng Gao, and Bill Dolan. 2016b. A persona-based neural conversation model. ACL.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "ParlAI: A dialog research software platform", "authors": [ { "first": "A", "middle": [ "H" ], "last": "Miller", "suffix": "" }, { "first": "W", "middle": [], "last": "Feng", "suffix": "" }, { "first": "A", "middle": [], "last": "Fisch", "suffix": "" }, { "first": "J", "middle": [], "last": "Lu", "suffix": "" }, { "first": "D", "middle": [], "last": "Batra", "suffix": "" }, { "first": "A", "middle": [], "last": "Bordes", "suffix": "" }, { "first": "D", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "J", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 2017 EMNLP System Demonstration", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. H. Miller, W. Feng, A. Fisch, J. Lu, D. Batra, A. Bor- des, D. Parikh, and J. Weston. 2017. ParlAI: A di- alog research software platform. 
In Proceedings of the 2017 EMNLP System Demonstration.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Multi-turn dialogue response generation with autoregressive transformer models", "authors": [ { "first": "Oluwatobi", "middle": [], "last": "Olabiyi", "suffix": "" }, { "first": "Erik", "middle": [ "T" ], "last": "Mueller", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oluwatobi Olabiyi and Erik T Mueller. 2019. Multi-turn dialogue response generation with autoregressive transformer models. arXiv preprint:1908.01841.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. ACL.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Deep contextualized word representations. 
NAACL", "authors": [ { "first": "M", "middle": [ "E" ], "last": "Peters", "suffix": "" }, { "first": "M", "middle": [], "last": "Neumann", "suffix": "" }, { "first": "M", "middle": [], "last": "Iyyer", "suffix": "" }, { "first": "M", "middle": [], "last": "Gardner", "suffix": "" }, { "first": "C", "middle": [], "last": "Clark", "suffix": "" }, { "first": "K", "middle": [], "last": "Lee", "suffix": "" }, { "first": "L", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer. 2018. Deep contextualized word representations. NAACL.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Conversing by reading: Contentful neural conversation with on-demand machine reading. ACL", "authors": [ { "first": "Lianhui", "middle": [], "last": "Qin", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lianhui Qin, Michel Galley, Chris Brockett, Xiaodong Liu, Xiang Gao, Bill Dolan, Yejin Choi, and Jian- feng Gao. 2019. Conversing by reading: Contentful neural conversation with on-demand machine read- ing. 
ACL.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "A", "middle": [], "last": "Radford", "suffix": "" }, { "first": "J", "middle": [], "last": "Wu", "suffix": "" }, { "first": "R", "middle": [], "last": "Child", "suffix": "" }, { "first": "D", "middle": [], "last": "Luan", "suffix": "" }, { "first": "D", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "I", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. 2018. Language models are unsuper- vised multitask learners. Technical report, OpenAI.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Exploring the limits of transfer learning with a unified text-to-text", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text trans- former. 
arXiv preprint:1910.10683.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Neural machine translation of rare words with subword units", "authors": [ { "first": "R", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "B", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "A", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. Sennrich, B. Haddow, and A. Birch. 2016. Neu- ral machine translation of rare words with subword units. ACL.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "A hierarchical latent variable encoder-decoder model for generating dialogues", "authors": [ { "first": "Iulian", "middle": [], "last": "Vlad Serban", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Lowe", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Charlin", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating di- alogues. 
AAAI.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Microsoft icecaps: An opensource toolkit for conversation modeling", "authors": [ { "first": "Leonardo", "middle": [], "last": "Vighnesh", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Shiv", "suffix": "" }, { "first": "Anshuman", "middle": [], "last": "Quirk", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Suri", "suffix": "" }, { "first": "Khuram", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Nithya", "middle": [], "last": "Shahid", "suffix": "" }, { "first": "Yizhe", "middle": [], "last": "Govindarajan", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Galley", "suffix": "" }, { "first": "", "middle": [], "last": "Brockett", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vighnesh Leonardo Shiv, Chris Quirk, Anshuman Suri, Xiang Gao, Khuram Shahid, Nithya Govindarajan, Yizhe Zhang, Jianfeng Gao, Michel Galley, Chris Brockett, et al. 2019. Microsoft icecaps: An open- source toolkit for conversation modeling. ACL.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Simple statistical gradientfollowing algorithms for connectionist reinforcement learning", "authors": [ { "first": "J", "middle": [], "last": "Ronald", "suffix": "" }, { "first": "", "middle": [], "last": "Williams", "suffix": "" } ], "year": 1992, "venue": "Machine learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ronald J Williams. 1992. Simple statistical gradient- following algorithms for connectionist reinforce- ment learning. 
Machine learning.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "TransferTransfo: A transfer learning approach for neural network based conversational agents", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. TransferTransfo: A trans- fer learning approach for neural network based con- versational agents. CoRR, abs/1901.08149.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Generating informative and diverse conversational responses via adversarial information maximization", "authors": [ { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Zhe", "middle": [], "last": "Gan", "suffix": "" }, { "first": "Xiujun", "middle": [], "last": "Li", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yizhe Zhang, Michel Galley, Jianfeng Gao, Zhe Gan, Xiujun Li, Chris Brockett, and Bill Dolan. 2018. Generating informative and diverse conversational responses via adversarial information maximization. 
NeurIPS.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Consistent dialogue generation with self-supervised feature learning", "authors": [ { "first": "Yizhe", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Sungjin", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yizhe Zhang, Xiang Gao, Sungjin Lee, Chris Brockett, Michel Galley, Jianfeng Gao, and Bill Dolan. 2019. Consistent dialogue generation with self-supervised feature learning. arXiv preprint arXiv:1903.05759.", "links": null } }, "ref_entries": { "TABREF1": { "html": null, "type_str": "table", "num": null, "content": "", "text": "Model configurations. \"B\" denotes batch size per GPU." }, "TABREF3": { "html": null, "type_str": "table", "num": null, "content": "
Method NIST N-2 NIST N-4 BLEU B-2 BLEU B-4 METEOR Entropy E-4 Dist D-1 Dist D-2 Avg Len
PERSONALITYCHAT 0.78 0.79 11.22% 1.95% 6.93% 8.37 5.8% 18.8% 8.12
Training from scratch:
DIALOGPT (117M) 1.23 1.37 9.74% 1.77% 6.17% 7.11 5.3% 15.9% 9.41
DIALOGPT (345M) 2.51 3.08 16.92% 4.59% 9.34% 9.03 6.7% 25.6% 11.16
DIALOGPT (762M) 2.52 3.10 17.87% 5.19% 9.53% 9.32 7.5% 29.3% 10.72
Training from OpenAI GPT-2:
DIALOGPT (117M) 2.39 2.41 10.54% 1.55% 7.53% 10.77 8.6% 39.9% 12.82
DIALOGPT (345M) 3.00 3.06 16.96% 4.56% 9.81% 9.12 6.8% 26.3% 12.19
DIALOGPT (345M, Beam) 3.4 3.5 21.76% 7.92% 10.74% 10.48 12.38% 48.74% 11.34
DIALOGPT (762M) 2.84 2.90 18.66% 5.25% 9.66% 9.72 7.76% 29.93% 11.19
DIALOGPT (762M, Beam) 2.90 2.98 21.08% 7.57% 10.11% 10.06 11.62% 44.07% 10.68
DIALOGPT (345M, MMI) 3.28 3.33 15.68% 3.94% 11.23% 11.25 9.39% 45.55% 17.21
Human 3.41 4.25 17.90% 7.48% 10.64% 10.99 14.5% 63.0% 13.10
", "text": "DSTC evaluation. \"Team B\" is the winning system of the DSTC-7 challenge. \"Beam\" denotes beam search. \"Human\" represents the held-out ground truth reference." }, "TABREF5": { "html": null, "type_str": "table", "num": null, "content": "
: Addressing commonsense questions
4.6 Human Evaluation
Human evaluations We evaluated 2000 randomly sampled test sources from the Reddit 6K test dataset using crowd-sourcing. Systems were paired and each pair of system outputs was randomly presented to 3 judges, who ranked them
", "text": "" }, "TABREF6": { "html": null, "type_str": "table", "num": null, "content": "
Role Response
User what is the meaning of life ?
Bot The meaning is to be with your family and friends .
Bot I'm going to guess : It means that your parents and friends have loved you . That's the meaning , I believe .
Bot I think the meaning is to make sure you're alive when you die . So you'll be with your family , friends , and yourself .
Bot It's not , but thanks for trying to help .
", "text": "An interactive example of multi-turn dialogue" }, "TABREF7": { "html": null, "type_str": "table", "num": null, "content": "
: An example of multi-turn self-playing dialogue with user prompt
", "text": "" }, "TABREF8": { "html": null, "type_str": "table", "num": null, "content": "", "text": "Results of Human Evaluation for relevance, informativeness and human-response possibility, showing preferences (%) for our model (DialoGPT) vis-a-vis its variants and real human responses. Distributions skew towards DialoGPT with MMI, even when compared with human outputs. Numbers in bold indicate the preferred systems. Statistically significant results are indicated: * p \u2264 0.01, ** p \u2264 0.001, *** p \u2264 0.0001, **** p \u2264 0.00001." } } } }