{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:28:41.687441Z" }, "title": "Policy-Driven Neural Response Generation for Knowledge-Grounded Dialog Systems", "authors": [ { "first": "Behnam", "middle": [], "last": "Hedayatnia", "suffix": "", "affiliation": {}, "email": "behnam@amazon.com" }, { "first": "Karthik", "middle": [], "last": "Gopalakrishnan", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Seokhwan", "middle": [], "last": "Kim", "suffix": "", "affiliation": {}, "email": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "", "affiliation": {}, "email": "yangliud@amazon.com" }, { "first": "Mihail", "middle": [], "last": "Eric", "suffix": "", "affiliation": {}, "email": "mihaeric@amazon.com" }, { "first": "Dilek", "middle": [], "last": "Hakkani-T\u00fcr", "suffix": "", "affiliation": {}, "email": "hakkanit@amazon.com" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Open-domain dialog systems aim to generate relevant, informative and engaging responses. In this paper, we propose using a dialog policy to plan the content and style of target, opendomain responses in the form of an action plan, which includes knowledge sentences related to the dialog context, targeted dialog acts, topic information, etc. For training, the attributes within the action plan are obtained by automatically annotating the publicly released Topical-Chat dataset. We condition neural response generators on the action plan which is then realized as target utterances at the turn and sentence levels. We also investigate different dialog policy models to predict an action plan given the dialog context. Through automated and human evaluation, we measure the appropriateness of the generated responses and check if the generation models indeed learn to realize the given action plans. We demonstrate that a basic dialog policy that operates at the sentence level generates better responses in comparison to turn level generation as well as baseline models with no action plan. Additionally the basic dialog policy has the added benefit of controllability.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Open-domain dialog systems aim to generate relevant, informative and engaging responses. In this paper, we propose using a dialog policy to plan the content and style of target, opendomain responses in the form of an action plan, which includes knowledge sentences related to the dialog context, targeted dialog acts, topic information, etc. For training, the attributes within the action plan are obtained by automatically annotating the publicly released Topical-Chat dataset. We condition neural response generators on the action plan which is then realized as target utterances at the turn and sentence levels. We also investigate different dialog policy models to predict an action plan given the dialog context. Through automated and human evaluation, we measure the appropriateness of the generated responses and check if the generation models indeed learn to realize the given action plans. We demonstrate that a basic dialog policy that operates at the sentence level generates better responses in comparison to turn level generation as well as baseline models with no action plan. 
Additionally the basic dialog policy has the added benefit of controllability.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Open-domain dialog systems have typically been modeled using end-to-end approaches, more specifically encoder-decoder architectures (Sordoni et al., 2015; Serban et al., 2017 Serban et al., , 2016 Vinyals and Le, 2015) . These seq2seq models are commonly trained on a maximum likelihood objective, which leads to repetitive and uninformative responses (Wei et al., 2017) . As seen in Figure 1 , candidate A is a typical generic response given the dialog context. In order to deal with this problem, previous work proposed grounding generated responses on knowledge sentences related to the ... Speaker 1: Right. Teams do all kinds of things to bother the competition. I've heard of teams having heated benches in the winter for themselves but not for the visitors. Speaker 2: I would hate a cold bench. Then again, I wouldn't want to be some place that cold or watching football.", "cite_spans": [ { "start": 132, "end": 154, "text": "(Sordoni et al., 2015;", "ref_id": "BIBREF29" }, { "start": 155, "end": 174, "text": "Serban et al., 2017", "ref_id": "BIBREF26" }, { "start": 175, "end": 196, "text": "Serban et al., , 2016", "ref_id": "BIBREF25" }, { "start": 197, "end": 218, "text": "Vinyals and Le, 2015)", "ref_id": "BIBREF31" }, { "start": 352, "end": 370, "text": "(Wei et al., 2017)", "ref_id": "BIBREF33" } ], "ref_spans": [ { "start": 384, "end": 392, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "candidate A: yeah", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Speaker 1:", "sec_num": null }, { "text": "The NFL has no official rule against female players.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "knowledge:", "sec_num": null }, { "text": "candidate B: I heard NFL has no official rule against female players.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "knowledge:", "sec_num": null }, { "text": "candidate C: Yeah. I would hate that too. Do you follow NFL? I heard they have no official rule against female players. Figure 1 : candidate A is an uninformative response. By grounding on knowledge we get more informative responses i.e., candidates B and C. candidate B contains only a statement, leading to an abrupt topic transition. candidate C smoothly transitions topics with dialog acts: feedback, statement, question, and statement. dialog context (Ghazvininejad et al., 2018; Yavuz et al., 2019; Dinan et al., 2018; Gopalakrishnan et al., 2019) . To improve the diversity of generated responses, others proposed conditioning response generation on latent (Serban et al., 2016 (Serban et al., , 2017 Shen et al., 2017; Xing et al., 2016) or discrete attributes (Sankar and Ravi, 2019; Li et al., 2016a; See et al., 2019; Serban et al., 2017) . 
These discrete attributes are typically presented to the decoder at the turn level, and are not associated with a specific segment of the output.", "cite_spans": [ { "start": 456, "end": 484, "text": "(Ghazvininejad et al., 2018;", "ref_id": "BIBREF9" }, { "start": 485, "end": 504, "text": "Yavuz et al., 2019;", "ref_id": "BIBREF36" }, { "start": 505, "end": 524, "text": "Dinan et al., 2018;", "ref_id": "BIBREF6" }, { "start": 525, "end": 553, "text": "Gopalakrishnan et al., 2019)", "ref_id": "BIBREF11" }, { "start": 664, "end": 684, "text": "(Serban et al., 2016", "ref_id": "BIBREF25" }, { "start": 685, "end": 707, "text": "(Serban et al., , 2017", "ref_id": "BIBREF26" }, { "start": 708, "end": 726, "text": "Shen et al., 2017;", "ref_id": "BIBREF27" }, { "start": 727, "end": 745, "text": "Xing et al., 2016)", "ref_id": "BIBREF35" }, { "start": 769, "end": 792, "text": "(Sankar and Ravi, 2019;", "ref_id": "BIBREF22" }, { "start": 793, "end": 810, "text": "Li et al., 2016a;", "ref_id": "BIBREF13" }, { "start": 811, "end": 828, "text": "See et al., 2019;", "ref_id": "BIBREF24" }, { "start": 829, "end": 849, "text": "Serban et al., 2017)", "ref_id": "BIBREF26" } ], "ref_spans": [ { "start": 120, "end": 128, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "knowledge:", "sec_num": null }, { "text": "Another issue with seq2seq approaches is that, due to the lack of explicit control mechanisms, the style of these responses does not always match with what would be suggested by user experience experts. For example, the generated response may not acknowledge what the user just said, or may jump to a new topic without first introducing it. Figure 1 shows examples of two response candidates with similar content: candidate C acknowledges Speaker 2's previous statement and follows up with a question introducing a new topic and statement, in contrast with candidate B which abruptly transitions into the new topic.", "cite_spans": [ { "start": 341, "end": 347, "text": "Figure", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "knowledge:", "sec_num": null }, { "text": "According to Schegloff (2007) human conversations are sequentially organized units. Turns and actions realized within them are related to what came before and affect what comes next. Inspired by the previous studies, we propose a policy-driven neural response generation (PD-NRG) approach for open-domain, knowledge-grounded dialog systems. Our motivation for this work is to have a mechanism for open domain conversational systems, i.e., a dialog policy, that can enable such higher-level control of generated responses. The dialog policy provides a sequential organization plan or action plan. The action plan specifies the order and relationship of sentences within a turn targeting engaging responses to users throughout the interaction.", "cite_spans": [ { "start": 13, "end": 29, "text": "Schegloff (2007)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "knowledge:", "sec_num": null }, { "text": "We design a set of dialog policy models that adapt to the dialog context to appropriately control the responses at both the turn and sentence-levels. We extend the end-to-end-approach of (Dinan et al., 2018; Gopalakrishnan et al., 2019) : we take in as input both the dialog context and an action plan to predict the next response. We train our PD-NRG model by fine-tuning on the Generative Pretrained Transformer (GPT) (Radford et al., 2018) model in a TransferTransfo fashion (Wolf et al., 2019) . 
Our approach differs from previous works that condition on discrete attributes independently by conditioning on these attributes jointly.", "cite_spans": [ { "start": 187, "end": 207, "text": "(Dinan et al., 2018;", "ref_id": "BIBREF6" }, { "start": 208, "end": 236, "text": "Gopalakrishnan et al., 2019)", "ref_id": "BIBREF11" }, { "start": 420, "end": 442, "text": "(Radford et al., 2018)", "ref_id": "BIBREF19" }, { "start": 478, "end": 497, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "knowledge:", "sec_num": null }, { "text": "Our contributions include:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "knowledge:", "sec_num": null }, { "text": "i. an enriched version of the Topical-Chat dataset with annotations on multiple attributes (knowledge, topic, dialog act). These annotations were tagged automatically which reduces the cost and time of manual annotation while still obtaining strong results. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "knowledge:", "sec_num": null }, { "text": "ii. the design of a basic dialog policy to predict an action plan for controllable generation for neural response generators", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "knowledge:", "sec_num": null }, { "text": "iii. a sentence-based generation approach that outperforms turn-level generation, and 1 https://github.com/alexa/Topical-Chat/tree/master/TopicalChatEnriched iv. investigation of simple hand-crafted policies as well as automatically learned policies that could be adapted to new applications.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "knowledge:", "sec_num": null }, { "text": "Controllability of generated output has been studied for multiple language generation tasks (such as poetry generation and summarization). Previous work on controlling the style and content of generated outputs focused on two main approaches, conditional generation and weighted decoding. Conditional generation modifies the input to the model to condition on control parameters. Previous works proposed conditioning response generators on latent (Serban et al., 2016 (Serban et al., , 2017 Shen et al., 2017; or discrete attributes, including dialog acts (Sankar and Ravi, 2019) , sentiment (Sankar and Ravi, 2019) , speaker identifiers (Li et al., 2016a) , lexical features (See et al., 2019) or topics (Serban et al., 2017) . Weighted decoding (See et al., 2019) instead uses token-level features that are controllable (Ghazvininejad et al., 2017; Baheti et al., 2018) and supplements the scores from the decoder model output with these features. 
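To make the weighted-decoding idea concrete, a minimal sketch follows (Python; the feature and weights are invented for illustration and are not the features used by See et al. (2019)):

import numpy as np

# One greedy decoding step under weighted decoding: token-level feature
# scores, scaled by tuned weights, are added to the decoder's output logits.
def weighted_decode_step(logits, feature_fns, weights, context):
    scores = logits.copy()
    for fn, w in zip(feature_fns, weights):
        scores = scores + w * np.array([fn(t, context) for t in range(len(logits))])
    return int(np.argmax(scores))  # beam search or sampling would also work

# Hypothetical feature: reward tokens that are rare under some unigram counts.
def rarity_feature(token_id, context, counts=(1000, 10, 10)):
    return 1.0 / (1.0 + (counts[token_id] if token_id < len(counts) else 1))
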
Our work focuses on conditional generation methods with sentence-level control, as described in more detail in Section 4.", "cite_spans": [ { "start": 447, "end": 467, "text": "(Serban et al., 2016", "ref_id": "BIBREF25" }, { "start": 468, "end": 490, "text": "(Serban et al., , 2017", "ref_id": "BIBREF26" }, { "start": 491, "end": 509, "text": "Shen et al., 2017;", "ref_id": "BIBREF27" }, { "start": 556, "end": 579, "text": "(Sankar and Ravi, 2019)", "ref_id": "BIBREF22" }, { "start": 592, "end": 615, "text": "(Sankar and Ravi, 2019)", "ref_id": "BIBREF22" }, { "start": 638, "end": 656, "text": "(Li et al., 2016a)", "ref_id": "BIBREF13" }, { "start": 676, "end": 694, "text": "(See et al., 2019)", "ref_id": "BIBREF24" }, { "start": 705, "end": 726, "text": "(Serban et al., 2017)", "ref_id": "BIBREF26" }, { "start": 747, "end": 765, "text": "(See et al., 2019)", "ref_id": "BIBREF24" }, { "start": 822, "end": 850, "text": "(Ghazvininejad et al., 2017;", "ref_id": "BIBREF10" }, { "start": 851, "end": 871, "text": "Baheti et al., 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "There is also previous work on controlling attributes such as question asking at the dialog level. See et al. (2019) initialized the generation of turns of a dialog with a fixed distribution that specified what percentage of generated turns should include questions during the dialog. However this does not allow for flexible control where the number of questions may need to vary depending on the course of the dialog. Therefore, we focus on learning a dialog policy model that automatically learns the style of the response based on the dialog context.", "cite_spans": [ { "start": 99, "end": 116, "text": "See et al. (2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Similar to previous work for response generation we ground our generated responses on knowledge. Ghazvininejad et al. (2018) , Yavuz et al. (2019) , and used end-to-end memory networks, copy mechanisms and static graph attention mechanisms respectively to incorporate knowledge. Dinan et al. (2018) , Gopalakrishnan et al. (2019) , and (Roller et al., 2020; used memory networks based on transformer architectures (Vaswani et al., 2017) to encode knowledge sentences and dialog history to decode a response. There has been previous work on task-oriented systems that proposed explicit content and sentence planning (Walker et al., 2007) to further control the content and order of sentences within the generated response. Previous work for open-domain dialog systems also followed a similar method for content and sentence planning. The closest work to ours in terms of learning a dialog policy for open-domain dialog is (Xu et al., 2018) who designed a policy network to predict dialog acts and fed those acts into a response generation model to control responses. However, a key part of open-domain dialog is to introduce knowledge into a conversation. We design a policy that integrates knowledge with dialog acts at a sentencelevel. In contrast to (Xu et al., 2018) that used a machine learning based approach, we show that a basic rule-based dialog policy can result in strong performance.", "cite_spans": [ { "start": 97, "end": 124, "text": "Ghazvininejad et al. (2018)", "ref_id": "BIBREF9" }, { "start": 127, "end": 146, "text": "Yavuz et al. (2019)", "ref_id": "BIBREF36" }, { "start": 279, "end": 298, "text": "Dinan et al. 
(2018)", "ref_id": "BIBREF6" }, { "start": 301, "end": 329, "text": "Gopalakrishnan et al. (2019)", "ref_id": "BIBREF11" }, { "start": 336, "end": 357, "text": "(Roller et al., 2020;", "ref_id": "BIBREF21" }, { "start": 414, "end": 436, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF30" }, { "start": 615, "end": 636, "text": "(Walker et al., 2007)", "ref_id": "BIBREF32" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our proposed PD-NRG approach has two parts: a dialog policy that determines the action plan based on the dialog context, and a response generation model that takes the action plan and the dialog context as input to generate a response. The dialog policy has components that predict the individual elements of the action plan: knowledge selection and dialog act planning. Knowledge selection determines the knowledge to be integrated in the response by finding sentences from a knowledge document corpus that are relevant to the dialog context. Dialog act (DA) planning determines the style of the response in the form of DAs to be realized. We have two forms of DA planning methods: Knowledge-dependent DA planning and Knowledge-independent DA planning. (Mezza et al., 2018) .", "cite_spans": [ { "start": 754, "end": 774, "text": "(Mezza et al., 2018)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Dialog Policy", "sec_num": "3" }, { "text": "For the rest of this work, let", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Action Plan (AP)", "sec_num": "3.1" }, { "text": "D j = [x 1 , . . . , x j ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Action Plan (AP)", "sec_num": "3.1" }, { "text": "denote a partial dialog containing a sequence of j turns. And let x i represent a turn in a dialog where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Action Plan (AP)", "sec_num": "3.1" }, { "text": "1 \u2264 i \u2264 j. Each x i contains a sequence of n i sentences, x i = [s 1 i , . . . , s n i i ]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Action Plan (AP)", "sec_num": "3.1" }, { "text": ". Each x i is generated according to an action plan that consists of one frame for each sentence", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Action Plan (AP)", "sec_num": "3.1" }, { "text": "[f 1 i , . . . , f n i i ].", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Action Plan (AP)", "sec_num": "3.1" }, { "text": "The frames, formed of attributes and values, may include: 1. Dialog acts (d) at a sentence-level to help control the style of the generated response. Table 1 lists all the dialog acts used in this work. 2. Topics (t) at a turn-level to generate topically coherent responses. The complete list of topics are: fashion, politics, books, sports, general-entertainment, music, science & technology and movies. 3. Knowledge (k) at a turn or sentence-level to generate interesting and informative responses. The knowledge is represented as a sentence drawn from an unstructured knowledge corpus. 4. Use-knowledge flag (h) that signals whether or not to use the knowledge attribute (k) at the turn or sentence-level.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Action Plan (AP)", "sec_num": "3.1" }, { "text": "Each frame in the action plan corresponds to a sentence s m j and is denoted as a tuple containing a set of the 4 attributes, (d m j , t m j , k m j , h m j ) where 1 \u2264 m \u2264 n j . 
In this work, we focus on these attributes for action plans, as they are the most basic and critical ways to control knowledge-grounded response generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Action Plan (AP)", "sec_num": "3.1" }, { "text": "For the knowledge selection component of our dialog policy, referenced in Figure 2 , we compute the following for each turn x i at run time. Let c i be defined as the dialog history x 1 , ..., x i\u22121 :", "cite_spans": [], "ref_spans": [ { "start": 74, "end": 82, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Knowledge Selection", "sec_num": "3.2" }, { "text": "k = argmax_{k_m \u2208 K} ( (c_i \u00b7 k_m) / (||c_i|| ||k_m||) ) (1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge Selection", "sec_num": "3.2" }, { "text": "k m is a knowledge sentence from an unstructured knowledge corpus, K, in the Topical-Chat dataset (Gopalakrishnan et al., 2019) . We use the BM25 model (Robertson et al., 2009) to rank knowledge sentences, representing the dialog context c i and each knowledge sentence k m as vectors. We compute the cosine similarity between the vectors and take the argmax over all k m in our knowledge corpus. For c i , we use only the most recent previous turn x i\u22121 for selection. We decide to use the knowledge sentence as input if the similarity score between the sentences is above a manually set threshold value of 0.2.", "cite_spans": [ { "start": 98, "end": 127, "text": "(Gopalakrishnan et al., 2019)", "ref_id": "BIBREF11" }, { "start": 152, "end": 176, "text": "(Robertson et al., 2009)", "ref_id": "BIBREF20" } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge Selection", "sec_num": "3.2" }, { "text": "For dialog act planning, we define a set of dialog act transitions from common examples in the Topical-Chat corpus. The set of dialog acts for the next response is determined by both the previous dialog acts and the knowledge sentence selected, based on the dialog context. Figure 2 shows the output of the knowledge selection being fed as input into the dialog act planning component. We represent the transitions as a decision tree 2 . In Figure 5 , Speaker 2's response is a PropQ act, and from our decision tree we predict the dialog acts for the next response, i.e., Statement and PropQ. Based on which set of dialog acts is output, we decide whether or not to include the knowledge sentence. 
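A minimal sketch of both policy components follows (Python): knowledge selection per Equation 1, shown here with TF-IDF cosine similarity as a stand-in for our BM25-weighted vectors, and an illustrative fragment of the hand-crafted transitions (the rules below are invented for the sketch, not the full decision tree from the appendix):

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def select_knowledge(last_turn, knowledge_corpus, threshold=0.2):
    # Equation 1: argmax over knowledge sentences by cosine similarity,
    # returning None when the best score falls below the 0.2 threshold.
    vec = TfidfVectorizer().fit(knowledge_corpus + [last_turn])
    sims = cosine_similarity(vec.transform([last_turn]),
                             vec.transform(knowledge_corpus))[0]
    best = int(sims.argmax())
    return knowledge_corpus[best] if sims[best] >= threshold else None

def plan_dialog_acts(prev_act, knowledge_changed):
    # Illustrative knowledge-dependent transitions for the next response.
    if prev_act == "PropQ":
        return ["Statement", "PropQ"]
    if prev_act == "Statement" and knowledge_changed:
        return ["Feedback", "Statement"]
    return ["Statement"]
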
Some dialog acts, such as Feedback, do not need to include knowledge by definition.", "cite_spans": [], "ref_spans": [ { "start": 262, "end": 270, "text": "Figure 2", "ref_id": "FIGREF0" }, { "start": 429, "end": 437, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Dialog Act Planning", "sec_num": "3.3" }, { "text": "We propose a Knowledge-dependent DA planning (KD-DA-P) where there are two inputs to predict the dialog acts for the next turn x j+1 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge-dependent DA Planning", "sec_num": "3.3.1" }, { "text": "2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge-dependent DA Planning", "sec_num": "3.3.1" }, { "text": "The full set of decision trees are presented here in the appendix https://arxiv.org/pdf/2005.12529v4.pdf", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge-dependent DA Planning", "sec_num": "3.3.1" }, { "text": "\u2022 the last dialog act associated with the previous sentence s n j j", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge-dependent DA Planning", "sec_num": "3.3.1" }, { "text": "\u2022 the output of knowledge selection The dialog act planner looks at the output of the knowledge selection model to see if the knowledge selected is the same or different as compared to the knowledge sentence selected for the previous turn x j . Based on this information a certain subset of the transitions defined for dialog act planning are used to predict the dialog acts for the next response.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge-dependent DA Planning", "sec_num": "3.3.1" }, { "text": "The prediction of the dialog acts is done independently of the selected knowledge in four ways:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Knowledge-independent DA Planning", "sec_num": "3.3.2" }, { "text": "1. Simple DA planning: We define a set of transitions that determine the set of DAs for the next response based solely on the previous dialog act. 2. Seq2Seq Model for DA planning: Using the OpenNMT library (Klein et al., 2017) , we train a sequence-to-sequence model based on bi-directional LSTMs with Luong attention (Luong et al., 2015) to estimate the DAs of the current turn given the dialog context D j . During training, each dialog act label is a separate token in the vocabulary and has its own embedding vector. Both the dialog act and word embeddings are initialized randomly and learned during training. 3. PropQ DA planning: For comparison to previous work we use the method in (See et al., 2019) which initializes the distribution of questions to be asked at the beginning of the conversation. The work finds that the best model generates questions 65.7% of the time. At each time-step the PropQ dialog act is picked 65.7% of the time thereby replicating this baseline. As shown in ", "cite_spans": [ { "start": 207, "end": 227, "text": "(Klein et al., 2017)", "ref_id": "BIBREF12" }, { "start": 319, "end": 339, "text": "(Luong et al., 2015)", "ref_id": "BIBREF16" }, { "start": 691, "end": 709, "text": "(See et al., 2019)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Knowledge-independent DA Planning", "sec_num": "3.3.2" }, { "text": "As shown in Figure 2 , at a given turn in the dialog context, the goal of the response generator is to realize the action plan output by the dialog policy. 
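Concretely, conditioning the generator on an action plan amounts to serializing the dialog context and the frame's attributes into a single input sequence for the decoder (a sketch reusing the Frame class from the Section 3.1 example; the special tokens are invented here, and the paper's exact input layout follows Figure 3c):

def serialize_input(dialog_context, frame):
    # Hypothetical flattening of context plus action-plan attributes.
    parts = ["<context>"] + list(dialog_context)
    parts += ["<acts>"] + frame.dialog_acts + ["<topic>", frame.topic]
    if frame.use_knowledge and frame.knowledge:  # the use-knowledge flag h
        parts += ["<knowledge>", frame.knowledge]
    return " ".join(parts)
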
Our proposed models generate the next turn based on the action plan at the sentence level, in a sequential manner, as opposed to at the turn level. As shown in Figure 3a , when decoding each sentence of the next turn, the dialog context D j as well as the sentences already generated for the next turn up to that iteration are used as input. Algorithm 1 shows the process for sentence-level generation. As seen in the algorithm, all the attributes within the AP are jointly taken in as input. To jointly condition on the action plan, each attribute is concatenated to the dialog history as shown in Figure 3c . In the training process, each dialog act label is a separate token in the vocabulary and has its own embedding vector, which is initialized randomly and learned during training. To train our model, we represent the knowledge sentence and topic label with the pretrained embeddings from the GPT model, whose vocabulary is BPE-tokenized. Finally, the use-knowledge flag decides whether or not to include the knowledge embeddings as part of the input. In some of our experiments, we also include the dialog acts for the past turns by concatenating each turn in the dialog history with its respective acts.", "cite_spans": [], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 2", "ref_id": "FIGREF0" }, { "start": 311, "end": 320, "text": "Figure 3a", "ref_id": null }, { "start": 747, "end": 756, "text": "Figure 3c", "ref_id": null } ], "eq_spans": [], "section": "Policy-driven Response Generation", "sec_num": "4" }, { "text": "Algorithm 1: Sentence-level generation", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy-driven Response Generation", "sec_num": "4" }, { "text": "Result: x j+1 . Given D j : x j+1 = [] ; ActionPlan = [f 1 j+1 , . . . , f n j+1 j+1 ] ; for f in ActionPlan: { y = Model(D j , f) ; x j+1 = x j+1 \u2295 y ; D j = D j \u2295 y } ; return x j+1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Policy-driven Response Generation", "sec_num": "4" }, { "text": "For all our models, we fine-tune the GPT (Radford et al., 2018) model in a TransferTransfo (Wolf et al., 2019) fashion. The TransferTransfo model is a state-of-the-art neural open-domain dialog system that won 1st place in automated evaluation and 2nd place in human evaluation at the NeurIPS ConvAI2 Conversational Intelligence Challenge (Dinan et al., 2020) . We have two methods to generate responses from our models:", "cite_spans": [ { "start": 98, "end": 117, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF34" } ], "ref_spans": [], "eq_spans": [], "section": "Policy-driven Response Generation", "sec_num": "4" }, { "text": "\u2022 Model for turn-level generation: As depicted in Figure 3b , our baseline (Wolf et al., 2019) is given the dialog context and knowledge sentence as input and predicts the response at the turn level.", "cite_spans": [ { "start": 75, "end": 94, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF34" } ], "ref_spans": [ { "start": 50, "end": 59, "text": "Figure 3b", "ref_id": null } ], "eq_spans": [], "section": "Policy-driven Response Generation", "sec_num": "4" }, { "text": "\u2022 Models for sentence-level generation: As depicted in Figure 3c , the PD-NRG models are given the AP and the dialog context as input to perform sentence-level prediction. Table 2 lists the versions of PD-NRG models we experimented with along with their corresponding APs. Baseline-Sent is similar to the Baseline-Turn model, except it generates responses sentence-by-sentence. 
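A runnable rendering of Algorithm 1 follows (assuming a model callable that maps a dialog context and one frame to a single decoded sentence; that interface is ours, not the paper's):

from typing import Callable, List

def generate_turn(dialog_context: List[str], action_plan: List,
                  model: Callable) -> List[str]:
    # Sentence-level generation: decode one sentence per frame, feeding each
    # decoded sentence back into the context before decoding the next one.
    next_turn: List[str] = []
    context = list(dialog_context)   # working copy of D_j
    for frame in action_plan:
        y = model(context, frame)    # y = Model(D_j, f)
        next_turn.append(y)          # x_{j+1} = x_{j+1} concat y
        context.append(y)            # D_j = D_j concat y
    return next_turn
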
The model generates as many sentences as the human response contains. 5 Experiments and Evaluation", "cite_spans": [], "ref_spans": [ { "start": 55, "end": 64, "text": "Figure 3c", "ref_id": null } ], "eq_spans": [], "section": "Policy-driven Response Generation", "sec_num": "4" }, { "text": "We use the publicly released Topical-Chat 3 dataset, a large and diverse knowledge-grounded open-domain dialog dataset where the underlying knowledge spans 8 broad topics including fashion, books, and so on (Gopalakrishnan et al., 2019) . Each dialog contains 20+ turns alternating between two crowd workers. For each dialog, there is a reading set for each crowd worker. Each reading set has three entities and a set of corresponding knowledge sentences. When presenting the results, we use both test sets provided with the corpus, test frequent and test rare. Frequent and rare refer to the frequency of the topics and entities being discussed in the training set.", "cite_spans": [ { "start": 206, "end": 235, "text": "(Gopalakrishnan et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Dataset", "sec_num": "5.1" }, { "text": "The dataset does not have annotations for some attributes such as dialog acts or fine-grained associations between knowledge sentences and dialog turns. Hence, we used out-of-the-box or simple models to automatically annotate our dataset with each attribute, as defined in Section 3.1. We assume these annotations are the ground-truth attributes for the ground-truth AP and use them for testing controllability without degrading response appropriateness. By automatically annotating, we reduce the cost and time it takes to manually annotate our dataset while still obtaining strong results. [Figure 3 : Figure 3a shows the generation process where the input is fed into the GPT model. The output is then concatenated back to the input. This process repeats until generation is complete. Figures 3b and 3c show the input for the Baseline-Turn model and the PD-NRG model, respectively.]", "cite_spans": [], "ref_spans": [ { "start": 507, "end": 516, "text": "Figure 3a", "ref_id": null }, { "start": 691, "end": 708, "text": "Figures 3b and 3c", "ref_id": null } ], "eq_spans": [], "section": "Annotating Attributes in Topical-Chat", "sec_num": "5.2" }, { "text": "Each conversation in Topical-Chat has a pair of reading sets that were presented to crowd workers before the conversation, so that they could have a knowledgeable interaction. During their conversation, crowd workers are asked to annotate which topics/entities were attributed to their turns in the conversation. However, there is no fine-grained annotation of which knowledge sentence or sentences were used for a turn, hence we create ground-truth knowledge annotations as a corpus post-processing step. To obtain the knowledge annotation for each turn, we use Equation 1 to compute the similarity between x j+1 and k m . To obtain the knowledge annotation for each sentence within a turn, we tokenize the turn into individual sentences. For each sentence, we use the same equation to compute the similarity between s m j+1 and k m . For sentence tokenization, we use the NLTK library (Loper and Bird, 2002) . We decide whether or not the turn or the sentences within a turn should be linked to a knowledge sentence by manually setting a threshold value on the similarity score between the knowledge and the turn or sentences within it. 
We use the same threshold, 0.2, as described in Section 3.2.", "cite_spans": [ { "start": 859, "end": 881, "text": "(Loper and Bird, 2002)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Annotating Knowledge Sentences", "sec_num": "5.2.1" }, { "text": "We obtain the dialog acts for each sentence by running an off-the-shelf SVM dialog act tagger 4 (Mezza et al., 2018) , which takes in as input the current sentence to predict one of the 11 dialog acts listed in Table 1 . [Figure 4 : We calculate automated metrics with both a ground-truth and an estimated AP.] We also experimented with using past dialog acts predicted from the tagger as additional input; however, this did not change the result. If the confidence score from the SVM tagger is not above a threshold of 0.5, the tagger outputs no dialog act, which we denote with a special dialog act token NoDialogAct. 2.1% of sentences within the Topical-Chat dataset were labeled as NoDialogAct. Of the 11 dialog acts, the most represented ones were Statement, PropQ and Feedback, with 80%, 6% and 5% of sentences tagged, respectively. We assume these are the ground-truth dialog acts in our dataset. To assess the performance of the tagger, we asked two crowd workers to segment a small set of 100 turns into individual sentences and annotate each with its dialog act. The dialog act tagger obtained an F1 of 0.54, a precision of 0.77 and a recall of 0.59 on the consolidated test set.", "cite_spans": [ { "start": 96, "end": 116, "text": "(Mezza et al., 2018)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 205, "end": 212, "text": "Table 1", "ref_id": "TABREF1" }, { "start": 247, "end": 255, "text": "Figure 4", "ref_id": null } ], "eq_spans": [], "section": "Annotating Dialog Acts", "sec_num": "5.2.2" }, { "text": "For the topic label, we use the topic annotations by the Turkers from the original Topical-Chat data collection. For each turn there are multiple topic annotations; however, unlike the dialog acts and knowledge sentences, topic annotations are at the turn level and are not linked to individual sentences. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Annotating Topic Labels", "sec_num": "5.2.3" }, { "text": "For automatic evaluation, we compute a set of metrics between our generated and ground-truth responses: perplexity, BLEU-1, ROUGE-L and unigram F1-score. We also compute n-gram diversity as defined in (Ghazvininejad et al., 2018) . For human evaluation, we followed a setup similar to (Li et al., 2016b) and generated 200 snippets, each containing a dialog context of 5 turns. For each snippet, we generated responses from the 2 models being compared. We asked a set of 3 crowd workers, "Which final response is more appropriate for the given conversation?".", "cite_spans": [ { "start": 196, "end": 224, "text": "(Ghazvininejad et al., 2018)", "ref_id": "BIBREF9" }, { "start": 280, "end": 298, "text": "(Li et al., 2016b)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation Measures", "sec_num": "5.3" }, { "text": "We first check whether the PD-NRG approach results in better responses when we use the ground-truth AP. As seen in Figure 4 , instead of using a dialog policy, we form ground-truth APs from the annotations described in Section 5.2. We then use them to generate a response for that turn. Table 3 presents automated evaluation results for Baseline-Turn, Baseline-Sent and variations of the PD-NRG models. 
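For reference, the unigram F1 and a distinct-n style diversity metric can be computed as below (a sketch; our exact tokenization, and the precise diversity definition of Ghazvininejad et al. (2018), may differ):

from collections import Counter

def unigram_f1(hypothesis: str, reference: str) -> float:
    # Harmonic mean of unigram precision and recall against the reference.
    hyp, ref = Counter(hypothesis.split()), Counter(reference.split())
    overlap = sum((hyp & ref).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / sum(hyp.values()), overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def distinct_n(responses, n=2):
    # Fraction of n-grams across all generated responses that are unique.
    ngrams = [tuple(toks[i:i + n])
              for r in responses
              for toks in [r.split()]
              for i in range(len(toks) - n + 1)]
    return len(set(ngrams)) / max(len(ngrams), 1)
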
As seen in the results table, adding dialog acts increases diversity for all the proposed models. This aligns with previous work showing that using dialog acts leads to more diverse responses (Sankar and Ravi, 2019) . The F1, BLEU, and ROUGE scores of the PD-NRG w/ DA model are lower than those of the Baseline-Turn model because the PD-NRG model decodes shorter sentences, resulting in lower recall. The PD-NRG w/ DA model with the addition of previous dialog acts as input results in the lowest perplexity on both the frequent and rare test sets.", "cite_spans": [ { "start": 585, "end": 608, "text": "(Sankar and Ravi, 2019)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 115, "end": 123, "text": "Figure 4", "ref_id": null }, { "start": 285, "end": 292, "text": "Table 3", "ref_id": "TABREF6" } ], "eq_spans": [], "section": "Results Using the Ground-Truth Action Plan", "sec_num": "5.4" }, { "text": "By jointly conditioning on the attributes in the AP, we aim to control multiple aspects of the response, such as content and style. The dialog acts determine whether the response should be a question or a statement, or should give feedback.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Do the Models Follow the Action Plan?", "sec_num": "5.4.1" }, { "text": "The knowledge determines what content should be present in the response. To see if the model responses follow the AP, we manually annotated whether the model's responses realize the dialog acts and the respective knowledge sentence in their input (focusing on the cases where the AP included a knowledge sentence). Turns with no dialog acts, i.e., marked as NoDialogAct, were ignored. The results from the manual evaluation are presented in Table 4 . The PD-NRG w/ DA + knowledge flag model has the highest accuracy in realizing the input AP, achieving 80.6% accuracy on the dialog acts of the generated responses, and 52.1% accuracy in correctly integrating the provided knowledge sentences. Figure 5 presents an example from this model.", "cite_spans": [], "ref_spans": [ { "start": 439, "end": 446, "text": "Table 4", "ref_id": "TABREF8" }, { "start": 691, "end": 699, "text": "Figure 5", "ref_id": null } ], "eq_spans": [], "section": "Do the Models Follow the Action Plan?", "sec_num": "5.4.1" }, { "text": "Using our dialog policy models, we estimate an AP for each turn. Given the dialog context and the AP, we then generate responses using the PD-NRG w/ DA + knowledge flag + Past DA model. We evaluate the responses using both automated and human evaluation. We present our automatic metrics in Table 5 . The KD-DA-P and KI-DA-P (Simple) policies produced more Feedback and PropQ dialog acts than the actual distribution of dialog acts in the dataset, where over 80% of the dialog acts were Statements. [Table 4 residue: Models | Past DA | %DA | %K; Baseline-Turn (Wolf et al., 2019): 26] [Figure 5 : Baseline-Turn model versus PD-NRG model] For example, KD-DA-P produced 41% Feedback dialog acts, whereas the actual distribution contains only 5% Feedback dialog acts. We believe this change in the distribution resulted in our models generating responses with fewer words, and as a result these models have lower F1-scores. The limitation of n-gram overlap measures is that they do not capture the diverse set of responses that can be generated in an open-domain setting. For a more realistic comparison of our dialog policy models to our baselines, we ran human evaluation. 
We provided a set of crowd workers with outputs from two models along with the dialog context, and asked them "Which final response is more appropriate for the given conversation?". Crowd workers were given 3 options: first response, second response and not sure (limited to those cases where the two responses are equally good/bad). Table 6 presents results from the manual evaluations. As seen, the KD-DA-P responses were chosen over the B-Turn model's responses by a large margin. This result is also seen for KD-DA-P responses versus the KI-DA-P (PropQ/AllQ) responses, showing that it is better to have a dialog policy that adapts to the course of the dialog than to use a fixed distribution (See et al., 2019) to predict the dialog acts. However, the KI-DA-P (Seq2Seq) policy results in worse responses than the baseline. We believe this is because the Statement dialog act is a large portion of the dataset, making learning other acts harder for the model. For future work, we will investigate machine learning approaches to learn better models for the dialog policy. The proposed KD-DA-P results in responses that are better than or similar to human responses in 52% of the cases.
[Table 5 : Automated metrics with estimated Action Plan (test frequent / test rare).
Policy | F1 | Avg # words | Avg # sentences
Ground truth | 0.22 / 0.22 | 15.2 / 15.3 | 1.68 / 1.76
Baseline-Turn (Wolf et al., 2019) | 0.18 / 0.17 | 19.8 / 19.7 | 1.86 / 1.87
KI-DA-P (Simple) | 0.14 / 0.14 | 12.9 / 12.2 | 1.89 / 1.89
KD-DA-P | 0.14 / 0.14 | 12.3 / 11.5 | 1.91 / 1.91
KI-DA-P (Seq2Seq) | 0.14 / 0.17 | 13.1 / 13.4 | 1.46 / 1.56]
[Table 6 : % of Wins (W), Ties (T) and Losses (L) for the baseline models vs. the PD-NRG model on appropriateness.
Policy | %W | %T | %L | IAA
KD-DA-P vs. Baseline* | 40.8 | 30.3 | 28.9 | 0.43
(the remaining rows, e.g., the KI-DA-P Seq2Seq comparison, were not recoverable from the extraction)
The KD-DA-P policy is statistically significant compared to the B-Turn (Baseline-Turn) (Wolf et al., 2019) as well as the KI-DA-P (PropQ) and KI-DA-P (AllQ) baselines (See et al., 2019) . We compute Krippendorff's alpha for inter-annotator agreement (IAA). We computed the p-value using a two-tailed binomial test. * refers to a p-value < 0.05 and ** refers to a p-value < 0.01.]", "cite_spans": [ { "start": 411, "end": 430, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF34" }, { "start": 1821, "end": 1839, "text": "(See et al., 2019)", "ref_id": "BIBREF24" }, { "start": 2286, "end": 2305, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF34" }, { "start": 2577, "end": 2596, "text": "(Wolf et al., 2019)", "ref_id": "BIBREF34" }, { "start": 2656, "end": 2674, "text": "(See et al., 2019)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 296, "end": 303, "text": "Table 5", "ref_id": null }, { "start": 434, "end": 442, "text": "Figure 5", "ref_id": null }, { "start": 1472, "end": 1479, "text": "Table 6", "ref_id": null }, { "start": 2216, "end": 2223, "text": "Table 5", "ref_id": null }, { "start": 2384, "end": 2391, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Results Using an Estimated Action Plan", "sec_num": "5.5" }, { "text": "In this work, we propose a policy-driven neural response generation approach for knowledge-grounded open-domain dialog systems. We estimate an action plan that consists of a set of attributes that control the content and style of the generated responses at the turn and sentence levels. We investigate both manual and machine learning based policies. 
Through human evaluation, we empirically demonstrate that a basic dialog policy that performs sentence-level generation outperforms turn-level generation, as well as knowledge-grounded response generation baselines. Furthermore, the generated responses realize their respective action plans. This gives builders of dialog systems control over the model's responses, allowing for more consistent user experiences. Our future work includes investigating better approaches for learning such dialog policy models, along with adding other attributes such as sentiment.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "6" }, { "text": "https://github.com/alexa/Topical-Chat", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://github.com/ColingPaper2018/dialogAct-Tagger", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Emory irisbot: An open-domain conversational bot for personalized information access", "authors": [ { "first": "Ali", "middle": [], "last": "Ahmadvand", "suffix": "" }, { "first": "Ingyu", "middle": [ "Jason" ], "last": "Choi", "suffix": "" }, { "first": "Harshita", "middle": [], "last": "Sahijwani", "suffix": "" }, { "first": "Justus", "middle": [], "last": "Schmidt", "suffix": "" }, { "first": "Mingyang", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Sergey", "middle": [], "last": "Volokhin", "suffix": "" }, { "first": "Zihao", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Agichtein", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ali Ahmadvand, Ingyu Jason Choi, Harshita Sahijwani, Justus Schmidt, Mingyang Sun, Sergey Volokhin, Zihao Wang, and Eugene Agichtein. 2018. Emory irisbot: An open-domain conversational bot for personalized information access. Alexa Prize Proceedings.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Generating more interesting responses in neural conversation models with distributional constraints", "authors": [ { "first": "Ashutosh", "middle": [], "last": "Baheti", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashutosh Baheti, Alan Ritter, Jiwei Li, and Bill Dolan. 2018. Generating more interesting responses in neural conversation models with distributional constraints. 
In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Slugbot: Developing a computational model andframework of a novel dialogue genre", "authors": [ { "first": "K", "middle": [], "last": "Kevin", "suffix": "" }, { "first": "Jiaqi", "middle": [], "last": "Bowden", "suffix": "" }, { "first": "Wen", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Juraj", "middle": [], "last": "Cui", "suffix": "" }, { "first": "Vrindavan", "middle": [], "last": "Juraska", "suffix": "" }, { "first": "Brian", "middle": [], "last": "Harrison", "suffix": "" }, { "first": "Nick", "middle": [], "last": "Schwarzmann", "suffix": "" }, { "first": "Marilyn", "middle": [], "last": "Santer", "suffix": "" }, { "first": "", "middle": [], "last": "Walker", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.10658" ] }, "num": null, "urls": [], "raw_text": "Kevin K Bowden, Jiaqi Wu, Wen Cui, Juraj Juraska, Vrindavan Harrison, Brian Schwarzmann, Nick San- ter, and Marilyn Walker. 2019. Slugbot: Developing a computational model andframework of a novel di- alogue genre. arXiv preprint arXiv:1907.10658.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Roving mind: a balancing act between opendomain and engaging dialogue systems", "authors": [ { "first": "Alessandra", "middle": [], "last": "Cervone", "suffix": "" }, { "first": "Giuliano", "middle": [], "last": "Tortoreto", "suffix": "" }, { "first": "Stefano", "middle": [], "last": "Mezza", "suffix": "" }, { "first": "Enrico", "middle": [], "last": "Gambi", "suffix": "" }, { "first": "Giuseppe", "middle": [], "last": "Riccardi", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alessandra Cervone, Giuliano Tortoreto, Stefano Mezza, Enrico Gambi, Giuseppe Riccardi, et al. 2017. Roving mind: a balancing act between open- domain and engaging dialogue systems.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "The second conversational intelligence challenge (convai2)", "authors": [ { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Varvara", "middle": [], "last": "Logacheva", "suffix": "" }, { "first": "Valentin", "middle": [], "last": "Malykh", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Miller", "suffix": "" }, { "first": "Kurt", "middle": [], "last": "Shuster", "suffix": "" }, { "first": "Jack", "middle": [], "last": "Urbanek", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Arthur", "middle": [], "last": "Szlam", "suffix": "" }, { "first": "Iulian", "middle": [], "last": "Serban", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Lowe", "suffix": "" } ], "year": 2020, "venue": "The NeurIPS'18", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emily Dinan, Varvara Logacheva, Valentin Malykh, Alexander Miller, Kurt Shuster, Jack Urbanek, Douwe Kiela, Arthur Szlam, Iulian Serban, Ryan Lowe, et al. 2020. The second conversational in- telligence challenge (convai2). 
In The NeurIPS'18", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Wizard of wikipedia: Knowledge-powered conversational agents", "authors": [ { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Roller", "suffix": "" }, { "first": "Kurt", "middle": [], "last": "Shuster", "suffix": "" }, { "first": "Angela", "middle": [], "last": "Fan", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1811.01241" ] }, "num": null, "urls": [], "raw_text": "Emily Dinan, Stephen Roller, Kurt Shuster, Angela Fan, Michael Auli, and Jason Weston. 2018. Wizard of wikipedia: Knowledge-powered conversational agents. arXiv preprint arXiv:1811.01241.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Sounding board: A user-centric and content-driven social chatbot", "authors": [ { "first": "Hao", "middle": [], "last": "Fang", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Cheng", "suffix": "" }, { "first": "Maarten", "middle": [], "last": "Sap", "suffix": "" }, { "first": "Elizabeth", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Ari", "middle": [], "last": "Holtzman", "suffix": "" }, { "first": "Yejin", "middle": [], "last": "Choi", "suffix": "" }, { "first": "A", "middle": [], "last": "Noah", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Smith", "suffix": "" }, { "first": "", "middle": [], "last": "Ostendorf", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations", "volume": "", "issue": "", "pages": "96--100", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Fang, Hao Cheng, Maarten Sap, Elizabeth Clark, Ari Holtzman, Yejin Choi, Noah A Smith, and Mari Ostendorf. 2018. Sounding board: A user-centric and content-driven social chatbot. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Demonstrations, pages 96-100.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Byueve: Mixed initiative dialog via structured knowledge graph traversal and conversational scaffolding", "authors": [ { "first": "Nancy", "middle": [], "last": "Fulda", "suffix": "" }, { "first": "Tyler", "middle": [], "last": "Etchart", "suffix": "" }, { "first": "William", "middle": [], "last": "Myers", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Ricks", "suffix": "" }, { "first": "Zachary", "middle": [], "last": "Brown", "suffix": "" }, { "first": "Joseph", "middle": [], "last": "Szendre", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Murdoch", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Carr", "suffix": "" }, { "first": "David", "middle": [], "last": "Wingate", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nancy Fulda, Tyler Etchart, William Myers, Daniel Ricks, Zachary Brown, Joseph Szendre, Ben Mur- doch, Andrew Carr, and David Wingate. 2018. 
Byu- eve: Mixed initiative dialog via structured knowl- edge graph traversal and conversational scaffolding.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "A knowledge-grounded neural conversation model", "authors": [ { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Yih", "middle": [], "last": "Wen-Tau", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" } ], "year": 2018, "venue": "Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marjan Ghazvininejad, Chris Brockett, Ming-Wei Chang, Bill Dolan, Jianfeng Gao, Wen-tau Yih, and Michel Galley. 2018. A knowledge-grounded neural conversation model. In Thirty-Second AAAI Confer- ence on Artificial Intelligence.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Hafez: an interactive poetry generation system", "authors": [ { "first": "Marjan", "middle": [], "last": "Ghazvininejad", "suffix": "" }, { "first": "Xing", "middle": [], "last": "Shi", "suffix": "" }, { "first": "Jay", "middle": [], "last": "Priyadarshi", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Knight", "suffix": "" } ], "year": 2017, "venue": "Proceedings of ACL 2017, System Demonstrations", "volume": "", "issue": "", "pages": "43--48", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marjan Ghazvininejad, Xing Shi, Jay Priyadarshi, and Kevin Knight. 2017. Hafez: an interactive poetry generation system. In Proceedings of ACL 2017, System Demonstrations, pages 43-48.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Topical-chat: Towards knowledgegrounded open-domain conversations", "authors": [ { "first": "Karthik", "middle": [], "last": "Gopalakrishnan", "suffix": "" }, { "first": "Behnam", "middle": [], "last": "Hedayatnia", "suffix": "" }, { "first": "Qinlang", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Gottardi", "suffix": "" }, { "first": "Sanjeev", "middle": [], "last": "Kwatra", "suffix": "" }, { "first": "Anu", "middle": [], "last": "Venkatesh", "suffix": "" }, { "first": "Raefer", "middle": [], "last": "Gabriel", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-T\u00fcr", "suffix": "" } ], "year": 2019, "venue": "Proc. Interspeech 2019", "volume": "", "issue": "", "pages": "1891--1895", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karthik Gopalakrishnan, Behnam Hedayatnia, Qin- lang Chen, Anna Gottardi, Sanjeev Kwatra, Anu Venkatesh, Raefer Gabriel, and Dilek Hakkani- T\u00fcr. 2019. Topical-chat: Towards knowledge- grounded open-domain conversations. Proc. Inter- speech 2019, pages 1891-1895.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "OpenNMT: Open-source toolkit for neural machine translation", "authors": [ { "first": "Guillaume", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Yoon", "middle": [], "last": "Kim", "suffix": "" }, { "first": "Yuntian", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Jean", "middle": [], "last": "Senellart", "suffix": "" }, { "first": "Alexander", "middle": [ "M" ], "last": "Rush", "suffix": "" } ], "year": 2017, "venue": "Proc. 
ACL", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/P17-4012" ] }, "num": null, "urls": [], "raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander M. Rush. 2017. OpenNMT: Open-source toolkit for neural machine translation. In Proc. ACL.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A persona-based neural conversation model", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "P", "middle": [], "last": "Georgios", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Spithourakis", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Gao", "suffix": "" }, { "first": "", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1603.06155" ] }, "num": null, "urls": [], "raw_text": "Jiwei Li, Michel Galley, Chris Brockett, Georgios P Spithourakis, Jianfeng Gao, and Bill Dolan. 2016a. A persona-based neural conversation model. arXiv preprint arXiv:1603.06155.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Deep reinforcement learning for dialogue generation", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Will", "middle": [], "last": "Monroe", "suffix": "" }, { "first": "Alan", "middle": [], "last": "Ritter", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1606.01541" ] }, "num": null, "urls": [], "raw_text": "Jiwei Li, Will Monroe, Alan Ritter, Michel Galley, Jian- feng Gao, and Dan Jurafsky. 2016b. Deep rein- forcement learning for dialogue generation. arXiv preprint arXiv:1606.01541.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Nltk: the natural language toolkit", "authors": [ { "first": "Edward", "middle": [], "last": "Loper", "suffix": "" }, { "first": "Steven", "middle": [], "last": "Bird", "suffix": "" } ], "year": 2002, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Edward Loper and Steven Bird. 2002. Nltk: the natural language toolkit. arXiv preprint cs/0205028.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Effective approaches to attentionbased neural machine translation", "authors": [ { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Hieu", "middle": [], "last": "Pham", "suffix": "" }, { "first": "Christopher D", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1508.04025" ] }, "num": null, "urls": [], "raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. 
arXiv preprint arXiv:1508.04025.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Iso-standard domain-independent dialogue act tagging for conversational agents", "authors": [ { "first": "Stefano", "middle": [], "last": "Mezza", "suffix": "" }, { "first": "Alessandra", "middle": [], "last": "Cervone", "suffix": "" }, { "first": "Giuliano", "middle": [], "last": "Tortoreto", "suffix": "" }, { "first": "Evgeny", "middle": [ "A" ], "last": "Stepanov", "suffix": "" }, { "first": "Giuseppe", "middle": [], "last": "Riccardi", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1806.04327" ] }, "num": null, "urls": [], "raw_text": "Stefano Mezza, Alessandra Cervone, Giuliano Tortoreto, Evgeny A Stepanov, and Giuseppe Riccardi. 2018. Iso-standard domain-independent dialogue act tagging for conversational agents. arXiv preprint arXiv:1806.04327.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Alquist 2.0: Alexa prize socialbot based on sub-dialogue models", "authors": [], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jan Pichl. 2018. Alquist 2.0: Alexa prize socialbot based on sub-dialogue models.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Improving language understanding by generative pre-training", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Narasimhan", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Salimans", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training. URL https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The probabilistic relevance framework: Bm25 and beyond", "authors": [ { "first": "Stephen", "middle": [], "last": "Robertson", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Zaragoza", "suffix": "" } ], "year": 2009, "venue": "Foundations and Trends\u00ae in Information Retrieval", "volume": "3", "issue": "4", "pages": "333--389", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Robertson, Hugo Zaragoza, et al. 2009. The probabilistic relevance framework: Bm25 and beyond. Foundations and Trends\u00ae in Information Retrieval, 3(4):333-389.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Recipes for building an open-domain chatbot", "authors": [ { "first": "Stephen", "middle": [], "last": "Roller", "suffix": "" }, { "first": "Emily", "middle": [], "last": "Dinan", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal", "suffix": "" }, { "first": "Da", "middle": [], "last": "Ju", "suffix": "" }, { "first": "Mary", "middle": [], "last": "Williamson", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Jing", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Myle", "middle": [], "last": "Ott", "suffix": "" }, { "first": "Kurt", "middle": [], "last": "Shuster", "suffix": "" }, { "first": "Eric", "middle": [ "M" ], "last": "Smith", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.13637" ] }, "num": null, "urls": [], "raw_text": "Stephen Roller, Emily Dinan, Naman Goyal, Da Ju, Mary Williamson, Yinhan Liu, Jing Xu, Myle Ott, Kurt Shuster, Eric M Smith, et al. 2020. Recipes for building an open-domain chatbot. arXiv preprint arXiv:2004.13637.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Deep reinforcement learning for modeling chit-chat dialog with discrete attributes", "authors": [ { "first": "Chinnadhurai", "middle": [], "last": "Sankar", "suffix": "" }, { "first": "Sujith", "middle": [], "last": "Ravi", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1907.02848" ] }, "num": null, "urls": [], "raw_text": "Chinnadhurai Sankar and Sujith Ravi. 2019. Deep reinforcement learning for modeling chit-chat dialog with discrete attributes. arXiv preprint arXiv:1907.02848.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Sequence organization in interaction: A primer in conversation analysis I", "authors": [ { "first": "Emanuel", "middle": [ "A" ], "last": "Schegloff", "suffix": "" } ], "year": 2007, "venue": "", "volume": "1", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Emanuel A Schegloff. 2007. Sequence organization in interaction: A primer in conversation analysis I, volume 1. Cambridge University Press.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "What makes a good conversation? how controllable attributes affect human judgments", "authors": [ { "first": "Abigail", "middle": [], "last": "See", "suffix": "" }, { "first": "Stephen", "middle": [], "last": "Roller", "suffix": "" }, { "first": "Douwe", "middle": [], "last": "Kiela", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1902.08654" ] }, "num": null, "urls": [], "raw_text": "Abigail See, Stephen Roller, Douwe Kiela, and Jason Weston. 2019. What makes a good conversation? how controllable attributes affect human judgments.
arXiv preprint arXiv:1902.08654.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Building end-to-end dialogue systems using generative hierarchical neural network models", "authors": [ { "first": "Iulian", "middle": [ "V" ], "last": "Serban", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" } ], "year": 2016, "venue": "Thirtieth AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iulian V Serban, Alessandro Sordoni, Yoshua Bengio, Aaron Courville, and Joelle Pineau. 2016. Building end-to-end dialogue systems using generative hierarchical neural network models. In Thirtieth AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "A hierarchical latent variable encoder-decoder model for generating dialogues", "authors": [ { "first": "Iulian", "middle": [], "last": "Vlad Serban", "suffix": "" }, { "first": "Alessandro", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Lowe", "suffix": "" }, { "first": "Laurent", "middle": [], "last": "Charlin", "suffix": "" }, { "first": "Joelle", "middle": [], "last": "Pineau", "suffix": "" }, { "first": "Aaron", "middle": [], "last": "Courville", "suffix": "" }, { "first": "Yoshua", "middle": [], "last": "Bengio", "suffix": "" } ], "year": 2017, "venue": "Thirty-First AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Iulian Vlad Serban, Alessandro Sordoni, Ryan Lowe, Laurent Charlin, Joelle Pineau, Aaron Courville, and Yoshua Bengio. 2017. A hierarchical latent variable encoder-decoder model for generating dialogues. In Thirty-First AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "A conditional variational framework for dialog generation", "authors": [ { "first": "Xiaoyu", "middle": [], "last": "Shen", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Su", "suffix": "" }, { "first": "Yanran", "middle": [], "last": "Li", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "" }, { "first": "Shuzi", "middle": [], "last": "Niu", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Akiko", "middle": [], "last": "Aizawa", "suffix": "" }, { "first": "Guoping", "middle": [], "last": "Long", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1705.00316" ] }, "num": null, "urls": [], "raw_text": "Xiaoyu Shen, Hui Su, Yanran Li, Wenjie Li, Shuzi Niu, Yang Zhao, Akiko Aizawa, and Guoping Long. 2017. A conditional variational framework for dialog generation. arXiv preprint arXiv:1705.00316.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "Can you put it all together: Evaluating conversational agents' ability to blend skills", "authors": [ { "first": "Eric", "middle": [ "Michael" ], "last": "Smith", "suffix": "" }, { "first": "Mary", "middle": [], "last": "Williamson", "suffix": "" }, { "first": "Kurt", "middle": [], "last": "Shuster", "suffix": "" }, { "first": "Jason", "middle": [], "last": "Weston", "suffix": "" }, { "first": "Y-Lan", "middle": [], "last": "Boureau", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2004.08449" ] }, "num": null, "urls": [], "raw_text": "Eric Michael Smith, Mary Williamson, Kurt Shuster, Jason Weston, and Y-Lan Boureau. 2020. Can you put it all together: Evaluating conversational agents' ability to blend skills. arXiv preprint arXiv:2004.08449.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "A neural network approach to context-sensitive generation of conversational responses", "authors": [ { "first": "Alessandro", "middle": [], "last": "Sordoni", "suffix": "" }, { "first": "Michel", "middle": [], "last": "Galley", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Auli", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Brockett", "suffix": "" }, { "first": "Yangfeng", "middle": [], "last": "Ji", "suffix": "" }, { "first": "Margaret", "middle": [], "last": "Mitchell", "suffix": "" }, { "first": "Jian-Yun", "middle": [], "last": "Nie", "suffix": "" }, { "first": "Jianfeng", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Bill", "middle": [], "last": "Dolan", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1506.06714" ] }, "num": null, "urls": [], "raw_text": "Alessandro Sordoni, Michel Galley, Michael Auli, Chris Brockett, Yangfeng Ji, Margaret Mitchell, Jian-Yun Nie, Jianfeng Gao, and Bill Dolan. 2015. A neural network approach to context-sensitive generation of conversational responses. arXiv preprint arXiv:1506.06714.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in neural information processing systems", "volume": "", "issue": "", "pages": "5998--6008", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need.
In Advances in neural information processing systems, pages 5998-6008.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "A neural conversational model", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1506.05869" ] }, "num": null, "urls": [], "raw_text": "Oriol Vinyals and Quoc Le. 2015. A neural conversational model. arXiv preprint arXiv:1506.05869.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Individual and domain adaptation in sentence planning for dialogue", "authors": [ { "first": "Marilyn", "middle": [ "A" ], "last": "Walker", "suffix": "" }, { "first": "Amanda", "middle": [], "last": "Stent", "suffix": "" }, { "first": "Fran\u00e7ois", "middle": [], "last": "Mairesse", "suffix": "" }, { "first": "Rashmi", "middle": [], "last": "Prasad", "suffix": "" } ], "year": 2007, "venue": "Artificial Intelligence Research", "volume": "30", "issue": "", "pages": "413--456", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marilyn A Walker, Amanda Stent, Fran\u00e7ois Mairesse, and Rashmi Prasad. 2007. Individual and domain adaptation in sentence planning for dialogue. Journal of Artificial Intelligence Research, 30:413-456.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Why do neural dialog systems generate short and meaningless replies? a comparison between dialog and translation", "authors": [ { "first": "Bolin", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Shuai", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Lili", "middle": [], "last": "Mou", "suffix": "" }, { "first": "Hao", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Pascal", "middle": [], "last": "Poupart", "suffix": "" }, { "first": "Ge", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zhi", "middle": [], "last": "Jin", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1712.02250" ] }, "num": null, "urls": [], "raw_text": "Bolin Wei, Shuai Lu, Lili Mou, Hao Zhou, Pascal Poupart, Ge Li, and Zhi Jin. 2017. Why do neural dialog systems generate short and meaningless replies? a comparison between dialog and translation. arXiv preprint arXiv:1712.02250.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Transfertransfo: A transfer learning approach for neural network based conversational agents", "authors": [ { "first": "Thomas", "middle": [], "last": "Wolf", "suffix": "" }, { "first": "Victor", "middle": [], "last": "Sanh", "suffix": "" }, { "first": "Julien", "middle": [], "last": "Chaumond", "suffix": "" }, { "first": "Clement", "middle": [], "last": "Delangue", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1901.08149" ] }, "num": null, "urls": [], "raw_text": "Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. Transfertransfo: A transfer learning approach for neural network based conversational agents. arXiv preprint arXiv:1901.08149.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Towards explainable and controllable open domain dialogue generation with dialogue acts", "authors": [ { "first": "Chen", "middle": [], "last": "Xing", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Jie", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yalou", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei-Ying", "middle": [], "last": "Ma", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1606.08340" ] }, "num": null, "urls": [], "raw_text": "Chen Xing, Wei Wu, Yu Wu, Jie Liu, Yalou Huang, Ming Zhou, and Wei-Ying Ma. 2016. Topic aware neural response generation. arXiv preprint arXiv:1606.08340. Can Xu, Wei Wu, and Yu Wu. 2018. Towards explainable and controllable open domain dialogue generation with dialogue acts. arXiv preprint arXiv:1807.07255.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "Grounded response generation with hierarchical pointer networks", "authors": [ { "first": "Semih", "middle": [], "last": "Yavuz", "suffix": "" }, { "first": "Abhinav", "middle": [], "last": "Rastogi", "suffix": "" }, { "first": "Guan-Lin", "middle": [], "last": "Chao", "suffix": "" }, { "first": "Dilek", "middle": [], "last": "Hakkani-Tur", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1908.10731" ] }, "num": null, "urls": [], "raw_text": "Semih Yavuz, Abhinav Rastogi, Guan-Lin Chao, and Dilek Hakkani-Tur. 2019. Deepcopy: Grounded response generation with hierarchical pointer networks. arXiv preprint arXiv:1908.10731.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Gunrock: A social bot for complex and engaging long conversations", "authors": [ { "first": "Dian", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Michelle", "middle": [], "last": "Cohn", "suffix": "" }, { "first": "Yi", "middle": [ "Mang" ], "last": "Yang", "suffix": "" }, { "first": "Chun", "middle": [ "Yen" ], "last": "Chen", "suffix": "" }, { "first": "Weiming", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Jiaping", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Mingyang", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Kevin", "middle": [], "last": "Jesse", "suffix": "" }, { "first": "Austin", "middle": [], "last": "Chau", "suffix": "" }, { "first": "Antara", "middle": [], "last": "Bhowmick", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations", "volume": "", "issue": "", "pages": "79--84", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dian Yu, Michelle Cohn, Yi Mang Yang, Chun Yen Chen, Weiming Wen, Jiaping Zhang, Mingyang Zhou, Kevin Jesse, Austin Chau, Antara Bhowmick, et al. 2019. Gunrock: A social bot for complex and engaging long conversations. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP): System Demonstrations, pages 79-84.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Learning discourse-level diversity for neural dialog models using conditional variational autoencoders", "authors": [ { "first": "Tiancheng", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Ran", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Maxine", "middle": [], "last": "Eskenazi", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1703.10960" ] }, "num": null, "urls": [], "raw_text": "Tiancheng Zhao, Ran Zhao, and Maxine Eskenazi. 2017. Learning discourse-level diversity for neural dialog models using conditional variational autoencoders. arXiv preprint arXiv:1703.10960.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Commonsense knowledge aware conversation generation with graph attention", "authors": [ { "first": "Hao", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Tom", "middle": [], "last": "Young", "suffix": "" }, { "first": "Minlie", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Haizhou", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Jingfang", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Xiaoyan", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2018, "venue": "IJCAI", "volume": "", "issue": "", "pages": "4623--4629", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hao Zhou, Tom Young, Minlie Huang, Haizhou Zhao, Jingfang Xu, and Xiaoyan Zhu. 2018. Commonsense knowledge aware conversation generation with graph attention. In IJCAI, pages 4623-4629.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "type_str": "figure", "text": "Policy-driven neural response generation." }, "FIGREF1": { "uris": null, "num": null, "type_str": "figure", "text": "Fang et al. (2018), Ahmadvand et al. (2018), Fulda et al. (2018), Pichl (2018), Cervone et al. (2017), Yu et al. (2019) and Bowden et al. (2019) extracted multiple features such as topic, intent, entities, and sentiment to send to a dialog policy model to plan the structure and content of the response. However, these previous works generated responses from a set of templates that are usually repetitive for open-domain conversations. Our work focuses on neural generative models for response generation in open-domain dialog systems." }, "FIGREF2": { "uris": null, "num": null, "type_str": "figure", "text": "Figure 2 depicts the architecture of PD-NRG." }, "FIGREF5": { "uris": null, "num": null, "type_str": "figure", "text": "KD-DA-P vs. B-Turn* 25.1 35.7 39.2 0.47 KD-DA-P vs. See et al. (PropQ)** 54.2 5.5 40.2 0.46 KD-DA-P vs. See et al. (2019)** 54.1 7.4 38.3 0.48 KD-DA-P vs. Human response** 16.7 35.3 48.0 0.53" }, "TABREF1": { "text": "", "num": null, "type_str": "table", "html": null, "content": "" }, "TABREF2": { "text": "PropQ corresponds to a Yes-No question, which is the most represented question dialog act in our dataset. 4. AllQ DA planning: We extend the PropQ DA Prediction baseline above by selecting the PropQ, ChoiceQ or SetQ questions each 21.9% of the time summing up to 65.7%. See et al. (2019) does not make a distinction as to what type of questions were asked.", "num": null, "type_str": "table", "html": null, "content": "
" }, "TABREF4": { "text": "", "num": null, "type_str": "table", "html": null, "content": "
" }, "TABREF6": { "text": "Automated metrics with ground-truth Action Plan on test freq / rare", "num": null, "type_str": "table", "html": null, "content": "
" }, "TABREF8": { "text": "", "num": null, "type_str": "table", "html": null, "content": "
% of Dialog Acts (DA) and Knowledge (K) Realized for PD-NRG Models to showcase controllability.
...
Speaker 1: Free with you, they should have had
Snoop Dogg make a theme song for the game like
he did for his son's high school football team LOL
Speaker 2: Interesting, do you play golf?
Speaker 1:
Baseline-Turn:
no, i don't play golf, but i hear
it has been a lot of years since the last time.
PD-NRG model:
Statement \u2192 not really, i'm not a huge fan of golf.
PropQ \u2192 have you ever played?
" } } } }