{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:13:26.268898Z" }, "title": "On Task-Level Dialogue Composition of Generative Transformer Model", "authors": [ { "first": "Prasanna", "middle": [], "last": "Parthasarathi", "suffix": "", "affiliation": { "laboratory": "", "institution": "McGill University / Mila", "location": {} }, "email": "" }, { "first": "Arvind", "middle": [], "last": "Neelakantan", "suffix": "", "affiliation": { "laboratory": "", "institution": "OpenAI", "location": {} }, "email": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Google Brain", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Task-oriented dialogue systems help users accomplish tasks such as booking a movie ticket and ordering food via conversation. Generative models parameterized by a deep neural network are widely used for next turn response generation in such systems. It is natural for users of the system to want to accomplish multiple tasks within the same conversation, but the ability of generative models to compose multiple tasks is not well studied. In this work, we begin by studying the effect of training on human-human task-oriented dialogues on the ability of Transformer generative models to compose multiple tasks. 
To that end, we propose and explore two solutions: (1) creating synthetic multiple task dialogue data for training from human-human single task dialogues and (2) forcing the encoder representation to be invariant to single and multiple task dialogues using an auxiliary loss. The results of our experiments highlight the difficulty that even a sophisticated variant of the Transformer model faces in learning to compose multiple tasks from single task dialogues.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "Task-oriented dialogue systems help users accomplish tasks such as booking a movie ticket and ordering food via conversation. Generative models parameterized by a deep neural network are widely used for next turn response generation in such systems. It is natural for users of the system to want to accomplish multiple tasks within the same conversation, but the ability of generative models to compose multiple tasks is not well studied. In this work, we begin by studying the effect of training on human-human task-oriented dialogues on the ability of Transformer generative models to compose multiple tasks. To that end, we propose and explore two solutions: (1) creating synthetic multiple task dialogue data for training from human-human single task dialogues and (2) forcing the encoder representation to be invariant to single and multiple task dialogues using an auxiliary loss. 
The results of our experiments highlight the difficulty that even a sophisticated variant of the Transformer model faces in learning to compose multiple tasks from single task dialogues.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Recent years have seen a tremendous surge in the application of deep learning methods for dialogue in general (Vinyals and Le, 2015; Rojas-Barahona et al., 2017; Budzianowski et al., 2018; Lewis et al., 2017) and task-oriented dialogue (Wen et al., 2015; Einolghozati et al., 2019) specifically. Task-oriented dialogue systems help users accomplish tasks such as booking a movie ticket and ordering food via conversation. Generative models are a popular choice for next turn response generation in such systems (Rojas-Barahona et al., 2017; Eric and Manning, 2017) . These models are typically learned using large amounts of dialogue data for every task (Budzianowski et al., 2018; Byrne et al., 2019) . It is natural for users of the task-oriented dialogue system to want to accomplish multiple tasks within the same conversation, e.g. booking a movie ticket and ordering a taxi to the movie theater within the same conversation. 
The brute-force solution would require collecting dialogue data for every task combination, which might be practically infeasible given the combinatorially many possibilities.", "cite_spans": [ { "start": 110, "end": 132, "text": "(Vinyals and Le, 2015;", "ref_id": "BIBREF22" }, { "start": 133, "end": 161, "text": "Rojas-Barahona et al., 2017;", "ref_id": "BIBREF13" }, { "start": 162, "end": 188, "text": "Budzianowski et al., 2018;", "ref_id": "BIBREF1" }, { "start": 189, "end": 208, "text": "Lewis et al., 2017)", "ref_id": "BIBREF10" }, { "start": 236, "end": 254, "text": "(Wen et al., 2015;", "ref_id": "BIBREF23" }, { "start": 255, "end": 281, "text": "Einolghozati et al., 2019;", "ref_id": "BIBREF4" }, { "start": 511, "end": 540, "text": "(Rojas-Barahona et al., 2017;", "ref_id": "BIBREF13" }, { "start": 541, "end": 564, "text": "Eric and Manning, 2017)", "ref_id": "BIBREF5" }, { "start": 654, "end": 681, "text": "(Budzianowski et al., 2018;", "ref_id": "BIBREF1" }, { "start": 682, "end": 701, "text": "Byrne et al., 2019)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "While the ability of generative dialogue models to compose multiple tasks has not yet been studied in the literature, there has been some investigation into the compositionality skills of deep neural networks. Lake and Baroni (2017) propose a suite of tasks to evaluate a method's compositionality skills and find that deep neural networks generalize to unseen compositions only in a limited way. Kottur et al. (2017) analyze whether the language that emerges when multiple generative models interact with each other is compositional and conclude that compositionality arises only with strong regularization.", "cite_spans": [ { "start": 395, "end": 415, "text": "Kottur et al. 
(2017)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Motivated by the practical infeasibility of collecting data for combinatorially many task compositions, we focus on task-level compositionality of text response generation models. We begin by studying the effect of training data size of human-human multiple task dialogues on the performance of Transformer (Vaswani et al., 2017) generative models. Next, we explore two solutions to improve task-level compositionality. First, we propose a data augmentation approach (Simard et al., 2003; Schmidhuber, 2012; Krizhevsky et al., 2012; Baird, 1992; Sennrich et al., 2016) where we create synthetic multiple task dialogues for training from human-human single task dialogues; we add a portion of one dialogue as a prefix to another to simulate multiple task dialogues during training. As a second solution, we draw inspiration from the domain adaptation literature (Ganin and Lempitsky, 2015; Tzeng et al., 2015; Xu and Yang, 2017; Chen et al., 2016; Xu et al., 2017; Sun et al., 2018) and encourage the model, via an auxiliary loss, to learn representations that are invariant to single and multiple task dialogues.", "cite_spans": [ { "start": 306, "end": 328, "text": "(Vaswani et al., 2017)", "ref_id": null }, { "start": 466, "end": 487, "text": "(Simard et al., 2003;", "ref_id": "BIBREF16" }, { "start": 488, "end": 506, "text": "Schmidhuber, 2012;", "ref_id": "BIBREF14" }, { "start": 507, "end": 531, "text": "Krizhevsky et al., 2012;", "ref_id": "BIBREF8" }, { "start": 532, "end": 544, "text": "Baird, 1992;", "ref_id": "BIBREF0" }, { "start": 545, "end": 567, "text": "Sennrich et al., 2016)", "ref_id": "BIBREF15" }, { "start": 859, "end": 886, "text": "(Ganin and Lempitsky, 2015;", "ref_id": "BIBREF6" }, { "start": 887, "end": 906, "text": "Tzeng et al., 2015;", "ref_id": "BIBREF19" }, { "start": 907, "end": 925, "text": 
"Xu and Yang, 2017;", "ref_id": null }, { "start": 926, "end": 944, "text": "Chen et al., 2016;", "ref_id": "BIBREF3" }, { "start": 945, "end": 961, "text": "Xu et al., 2017;", "ref_id": "BIBREF25" }, { "start": 962, "end": 979, "text": "Sun et al., 2018)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We conduct our experiments on the Multiwoz dataset (Budzianowski et al., 2018) . The dataset contains both single and multiple task dialogues for training and evaluation. In Multiwoz, the tasks in multiple task dialogues are only combinations of the tasks in single task dialogues, which makes the dataset an appropriate benchmark for our experiments.", "cite_spans": [ { "start": 51, "end": 78, "text": "(Budzianowski et al., 2018)", "ref_id": "BIBREF1" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To summarize, our key findings are:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We study task-level compositionality of text response generation models and find that they are heavily reliant on multiple task conversations at train time to do well on such conversations at test time.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We explore two novel unsupervised solutions to improve task-level compositionality: (1) creating synthetic multiple task dialogue data from human-human single task dialogues and (2) forcing the encoder representation to be invariant to single and multiple task dialogues using an auxiliary loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We highlight the difficulty of composing tasks in generative dialogues with experiments on the Multiwoz dataset, where even the two methods combined yield only an 8.5% BLEU (Papineni et al., 
2002) score improvement when zero-shot evaluated on multiple task dialogues.", "cite_spans": [ { "start": 174, "end": 197, "text": "(Papineni et al., 2002)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "2 Background", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Let d_1, d_2, . . . , d_M be the dialogues in the training set, where every dialogue d_m = ((u_m^1, a_m^1), (u_m^2, a_m^2), . . . , (u_m^{n_m}, a_m^{n_m})) (\u2200m \u2208 {1, 2, . . . , M}) consists of n_m turns each of user and assistant. Further, each user and assistant turn consists of a sequence of word tokens. An individual dialogue can be either single task or multiple task depending on the number of tasks being accomplished in the dialogue.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The response generation model is trained to generate each turn of the assistant response given the conversation history. The generative model learns a probability distribution P(a_i | (u_1, a_1), . . . , (u_{i-1}, a_{i-1}), u_i). We drop the symbol m that denotes a particular training example for simplicity. 
The assistant turn a_i consists of a sequence of word tokens, a_i = (w_1^i, w_2^i, . . . , w_{l_i}^i). The response generation model factorizes the joint distribution left-to-right as P(a_i | x_i) = \u220f_{j=1}^{l_i} P(w_j | x_i, w_1^i, . . . , w_{j-1}^i), where x_i = ((u_1, a_1), . . . , (u_{i-1}, a_{i-1}), u_i) refers to the conversation history till the i-th turn.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We use a Transformer (Vaswani et al., 2017) sequence-to-sequence model to parameterize the above distribution. 
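As an illustrative aside (not part of the original paper), the left-to-right factorization above can be sketched in plain Python. Here `cond_prob` is a hypothetical stand-in for the Transformer's per-token output distribution; the sketch only demonstrates how the response probability accumulates token by token.

```python
import math

# Toy stand-in for the model's conditional distribution
# P(w_j | x_i, w_1, ..., w_{j-1}); uniform over a tiny vocabulary,
# purely for illustration (hypothetical, not the paper's model).
def cond_prob(context, prefix, token):
    vocab = ["yes", "no", "<eos>"]
    return 1.0 / len(vocab) if token in vocab else 0.0

def response_log_prob(context, response):
    """log P(a_i | x_i), accumulated left-to-right over tokens."""
    logp = 0.0
    prefix = []
    for token in response:
        logp += math.log(cond_prob(context, tuple(prefix), token))
        prefix.append(token)
    return logp

# Example: a two-token response under the toy uniform model.
lp = response_log_prob(("u1", "a1", "u2"), ["yes", "<eos>"])
```

With a real model, `cond_prob` would be the softmax output of the decoder; the factorization itself is unchanged.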
Given a training set of dialogues, the parameters of the Transformer model are learned to optimize the conditional language modelling objective given by,", "cite_spans": [ { "start": 69, "end": 90, "text": "(Vaswani et al., 2017", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_{LM} = \u2211_{m=1}^{M} \u2211_{i=1}^{n_m} log P(a_i | x_i, \u0398)", "eq_num": "(1)" } ], "section": "Introduction", "sec_num": "1" }, { "text": "where \u0398 refers to the parameters of the Transformer model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The first solution we explore for task compositionality generates synthetic multiple task dialogues for training from human-human single task dialogues 1 . Here, we sample two dialogues from the training set and add a portion of one dialogue as a prefix to the other. While this procedure might not create dialogues of quality equivalent to human-human multiple task dialogues, it is an unsupervised way to create approximate multiple task dialogues that the model could theoretically benefit from. Concretely, we randomly sample two single task dialogues d_i and d_j from the training set and create a noisy multiple task dialogue by adding a fraction of dialogue d_j as a prefix to dialogue d_i. The fraction of dialogue taken from dialogue d_j is given by the hyperparameter augment_fraction. The number of times dialogue d_i is augmented by a randomly sampled dialogue is given by the hyperparameter augment_fold.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Augmentation", "sec_num": "3" }, { "text": "We consider two strategies for sampling the dialogue d_j. In Random Augment, the dialogue is uniformly randomly sampled from the remainder of the training set. 
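The prefix-based augmentation procedure can be sketched as follows. This is a minimal illustration, not the paper's released code; the data layout (a dialogue as a list of (user, assistant) turn pairs) and the function name `augment` are assumptions, while `augment_fraction` and `augment_fold` mirror the hyperparameters described above.

```python
import random

def augment(dialogues, augment_fraction=0.5, augment_fold=1, rng=None):
    """Create noisy multiple task dialogues by prefixing a fraction of a
    randomly sampled dialogue d_j onto each dialogue d_i."""
    rng = rng or random.Random(0)
    synthetic = []
    for i, d_i in enumerate(dialogues):
        for _ in range(augment_fold):
            # Sample a different single task dialogue d_j (j != i).
            j = rng.randrange(len(dialogues) - 1)
            if j >= i:
                j += 1
            d_j = dialogues[j]
            # Take the leading fraction of d_j (at least one turn) ...
            k = max(1, int(len(d_j) * augment_fraction))
            # ... and prepend it to d_i to simulate a task switch.
            synthetic.append(d_j[:k] + d_i)
    return synthetic

# Hypothetical toy data: three single task dialogues.
dialogues = [[("u1", "a1"), ("u2", "a2")], [("u3", "a3")], [("u4", "a4")]]
aug = augment(dialogues, augment_fraction=0.5, augment_fold=2)
```

Targeted Augment would differ only in how `d_j` is sampled: from the subset of dialogues whose task forms a combination seen in the development set, rather than uniformly.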
A potential issue with the random strategy is that it might create spurious task combinations, and the model might fit to this noise. Motivated by this, we consider another sampling strategy, Targeted Augment, where we create synthetic multiple task dialogues only for task combinations that exist in the development set. Here, d_j is sampled from a set of dialogues whose task is compatible with the task of dialogue d_i. The Transformer model is then trained on the augmented training set using the objective function given in Equation 1. The effect of the sampling strategy and the hyperparameters on model performance is discussed in the experiments section (Section 5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data Augmentation", "sec_num": "3" }, { "text": "We propose the Domain Invariant Transformer model ( Figure 1 ) to maintain a domain invariant encoder representation by training the encoder on an auxiliary task. Here, the auxiliary task for the network is to predict a label denoting the type of task (single or multiple task) in the encoded conversation history. The model takes as input the sequence of byte pair encoded tokens, which are represented at the encoder hidden state through the multi-head, multi-layer attention mechanism of the Transformer. 
The conditional language model (Equation 1) is learned by a Transformer decoder on top that attends over the encoder states.", "cite_spans": [], "ref_spans": [ { "start": 48, "end": 56, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Domain Invariant Transformer", "sec_num": "4" }, { "text": "The discriminator task network is trained with average pooling of the encoder summary over the attention heads (h_j) as shown in Equation 2.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Invariant Transformer", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "{}^{i}e_s = (1/k) \u2211_{j=1}^{k} h_j", "eq_num": "(2)" } ], "section": "Domain Invariant Transformer", "sec_num": "4" }, { "text": "The average pooled encoder summary is passed as input to a two-layer feed forward discriminator. The discriminator network has a dropout (Srivastava et al., 2014) layer in-between the two fully connected layers (f_1 and f_2) (Equation 3).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Invariant Transformer", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u0177_i = f_2(f_1({}^{i}e_s))", "eq_num": "(3)" } ], "section": "Domain Invariant Transformer", "sec_num": "4" }, { "text": "The binary cross-entropy loss, L_disc, for the predicted label \u0177_i and an input context i is computed as in Equation 4. 
The Domain Invariant Transformer model optimizes a convex combination of the two losses as shown in Equation 5.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Invariant Transformer", "sec_num": "4" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "L_{disc} = \u2212(y_i log(\u0177_i) + (1 \u2212 y_i) log(1 \u2212 \u0177_i))", "eq_num": "(4)" } ], "section": "Domain Invariant Transformer", "sec_num": "4" }, { "text": "L_{train} = \u03b1 * L_{disc} + (1 \u2212 \u03b1) * L_{LM} (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Invariant Transformer", "sec_num": "4" }, { "text": "The language model loss ensures that the model learns to generate the next utterance, while the discriminator loss ensures that the model is aware of the nature of the task. To understand the effect of the auxiliary loss, we experiment with different values of \u03b1 (see Appendix).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Domain Invariant Transformer", "sec_num": "4" }, { "text": "We measure the importance of multiple task dialogues for the overall performance of the Transformer by training the model with varying amounts of multiple task dialogues while keeping the task distribution between multiple and single domain dialogues roughly similar across the experiments. We increase the number of multiple task dialogues while reducing the number of single task dialogues to keep the total number of dialogues constant at 2,150. The model should be able to generalize to multiple tasks, as the set of tasks is the same between the train and test sets; only the manner in which the tasks are posed by the user differs. We use the Tensor2Tensor (Vaswani et al., 2018) framework with its (tiny) hyper-parameter setting to run our experiments. 
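The discriminator head and the convex loss combination (Equations 2 to 5) can be sketched as follows. This is a dependency-free sketch under stated assumptions: the weights and shapes are hypothetical, ReLU is assumed between the two fully connected layers, and the dropout between f_1 and f_2 is omitted for brevity.

```python
import math

def avg_pool(head_summaries):
    """Equation 2: average the per-head encoder summaries h_j over k heads."""
    k = len(head_summaries)
    dim = len(head_summaries[0])
    return [sum(h[d] for h in head_summaries) / k for d in range(dim)]

def discriminator(e, w1, w2):
    """Equation 3 (sketch): two fully connected layers f_1, f_2 over the
    pooled summary e, with a sigmoid output for the task-type label."""
    hidden = [max(0.0, sum(wi * ei for wi, ei in zip(row, e))) for row in w1]
    logit = sum(wi * hi for wi, hi in zip(w2, hidden))
    return 1.0 / (1.0 + math.exp(-logit))

def combined_loss(lm_loss, y, y_hat, alpha=0.001):
    """Equations 4 and 5: binary cross-entropy on the task-type label,
    convexly combined with the language-model loss via alpha."""
    disc = -(y * math.log(y_hat) + (1 - y) * math.log(1 - y_hat))
    return alpha * disc + (1 - alpha) * lm_loss
```

Small values of alpha (such as the 0.001 reported in the appendix) keep the language-model term dominant while still nudging the encoder toward task-type invariance.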
As shown in Table 1 , the quality of the model improves significantly as the number of multiple task dialogues increases. Interestingly, even though the total number of dialogues is kept fixed, the overall validation BLEU score also improves as the number of multiple task dialogues in the training set increases. The results show that the models may be better at decomposing than composing in the domain of goal oriented dialogues, or that the model at best only mimics the surface level token distribution (Appendix B). Though training with more multi-task dialogues can potentially improve performance, it is not a scalable solution. In the following section, we test two off-the-shelf techniques for improving task-level compositionality.", "cite_spans": [ { "start": 664, "end": 686, "text": "(Vaswani et al., 2018)", "ref_id": "BIBREF20" } ], "ref_spans": [ { "start": 786, "end": 793, "text": "Table 1", "ref_id": "TABREF1" } ], "eq_spans": [], "section": "Importance of multiple task dialogues", "sec_num": "5.1" }, { "text": "We evaluate the Transformer's ability to handle zero-shot compositional tasks by training the baseline model only on single task dialogues, with and without the proposed data augmentation techniques. The results, in Table 2 , show that the Targeted Augment technique improves BLEU on multiple-task dialogues by 8.5%, while the model's scores on the full set of dialogues drop slightly. The reason for only a minor BLEU improvement could be the noise in the generation process. Although the task distributions are matched, the token level distributions appear to be significantly different between the single and multiple-task dialogues. 
The results suggest that the method may inject more noise into the token level distribution, thereby not improving model performance significantly.", "cite_spans": [], "ref_spans": [ { "start": 229, "end": 236, "text": "Table 2", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Zero-shot Compositionality Experiments", "sec_num": "5.2" }, { "text": "We compared the proposed architecture and the baseline Transformer model to understand the effect of a domain invariant encoder representation on language generation in multi-task dialogues. We observed from our experiments in Table 3 that neither the Domain Invariant Transformer nor the baseline Transformer model generalizes with few-shot multi-task dialogues. The data augmentation techniques also do not appear to improve performance. However, the Domain Invariant Transformer model did improve performance when trained on all of the training data, though this was not the intended objective. Even so, the model is still heavily reliant on human-human multiple domain dialogues, and zero-shot or few-shot generalization in compositional dialogues seems quite difficult to achieve. The poor performance of the data augmentation techniques can be attributed to the overwhelming noise in the token distribution of the input contexts, which skews the language model that is learned.", "cite_spans": [], "ref_spans": [ { "start": 231, "end": 238, "text": "Table 3", "ref_id": "TABREF5" } ], "eq_spans": [], "section": "Domain Invariant Transformer", "sec_num": "5.3" }, { "text": "We studied the problem of composing multiple dialogue tasks to predict the next utterance in a multiple-task dialogue. We found that even powerful Transformer models do not naturally compose multiple tasks and that performance relies heavily on multiple task dialogues. 
In this paper, we explored two solutions, which only further showed the difficulty of composing multiple dialogue tasks.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The challenge in generalizing to zero-shot composition, as observed in the experiments, hints at the possibility that the Transformer model merely mimics surface level tokens without understanding the underlying task. The token overlap distribution in Appendix B supports this possibility.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "6" }, { "text": "The MultiWoZ 2.0 dataset includes JSON metadata that maintains a dictionary of the slot-value pairs provided by the user to the agent in every utterance. We use this metadata to construct local and global knowledge of the slot-values shared by the user and to relabel the dataset into single domain and multidomain dialogues. This preprocessing step removed the noise in the labeling of dialogues. We used this approach to keep a test set of multidomain dialogues to evaluate the model performance on compositional tasks. On the clean split of single domain dialogues we generate synthetic multidomain dialogues using two different approaches:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Preprocessing", "sec_num": null }, { "text": "In this approach, we pick a single task dialogue ^iD_SNG and randomly select a set of K single task dialogues, {^iD_SNG_noise}_{k=1}^{K}, to inject noise into D_SNG.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Random Synthetic (RS)", "sec_num": null }, { "text": "With a hyperparameter, percentCopy, we select the number of utterances to be copied from every dialogue in the set noiseDialogues and add them as a prefix to D_SNG. 
This results in K negative samples of synthetic multidomain dialogues, {^iD_MUL_RS}_{k=1}^{K}, for every single domain dialogue in the dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.1 Random Synthetic (RS)", "sec_num": null }, { "text": "We bucket the single domain dialogues based on the conversation domain (taxi, hotel, attraction, etc.). Similarly, we bucket the multi-task dialogues in the training set to measure the topic distributions in multi-task dialogues. Using the computed distribution of composite tasks in true multidomain dialogues and the domain label of every ^iD_SNG, we constrain the selection of random dialogues to conform to the training distribution of true composite tasks in the training set. The hyperparameters and the remainder of the procedure are similar to RS, except that when combining single domain dialogues from two different domains, ^iDom and ^jDom, we inject topic change exchanges randomly sampled from TC(^jDom1, ^iDom2).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.2 Targeted Synthetic (TS)", "sec_num": null }, { "text": "For training the proposed Domain Invariant Transformer model, we create the labels for the auxiliary task using the preprocessing steps used to split the dataset into single and multi-domain dialogues. We experimented with different values of \u03b1 to understand the influence of the discriminator loss.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A.2 Targeted Synthetic (TS)", "sec_num": null }, { "text": "The results in Table 4 show that Domain Invariant Transformer performed best when \u03b1 is 0.001. 
The experiment also shows consistent performance improvements across the different values of \u03b1, highlighting the usefulness of training an auxiliary network to learn domain invariant encoder representations.", "cite_spans": [], "ref_spans": [ { "start": 15, "end": 22, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "A.2 Targeted Synthetic (TS)", "sec_num": null }, { "text": "We analyze the token distribution in the dataset to understand the negative result further. We observed that although the task distributions are matched, the underlying token distributions in the different setups are not (Table 5) . We looked at the overlap of the distribution of 4-grams in conversations on the different splits we used for training. We observed that the Multi-task dialogues (MUL) training set has as much 4-gram overlap with the MUL Valid and SNG (Single task dialogues) Valid sets as the combined (SNG + MUL) training data. This analysis casts doubt on the performance of the Transformer model with an increased number of MUL train dialogues: the improvement cannot be attributed only to the model's ability to decompose multiple tasks, and may instead be because MUL train has higher 4-gram overlap with SNG Valid and MUL Valid. This shows that, despite task oriented dialogues carrying rich information, the model at best only mimics the surface level token distribution. Hence, it is not clear whether the Transformer model can generalize to multi-task dialogues with an understanding of the underlying task structure.", "cite_spans": [], "ref_spans": [ { "start": 213, "end": 222, "text": "(Table 5)", "ref_id": null } ], "eq_spans": [], "section": "B Token distribution", "sec_num": null }, { "text": "USER:I'm looking for the Lensfield Hotel. Can you help me find it ? ASSISTANT:Yes of course, it is located in 53-57 lensfield road. Would you like me to book you a reservation ? USER: No thank you. Could you give me the postcode and tell me if they have free parking ? 
ASSISTANT: Yes, they have free parking. The postcode is cb21en. Is there anything else you'd like to know about this hotel ? USER:No, that's all I needed. Thanks! Bye! ASSISTANT: You are welcome. Goodbye! USER:I'd like to find a train that leaves after 17:00 on Saturday. ASSISTANT: Absolutely, I need departure and destination information and a time you need to arrive by. USER: I will be departing from Broxbourne and I'd like to go to Cambridge. ASSISTANT: I have 7 available. The first leaves at 17:32, and arrives at 18:32. Would you like me to book that for you ? USER:I am not looking to book at the moment, can I just get the train ID? Thanks. ASSISTANT: Of course! TR0811 leaves at 17:32, arrives at 18:32, with a duration of 60 minutes. The price is 14.32 pounds. May I assist you with anything else ? USER: That is all I need for today. Thank you. Good bye. ASSISTANT: Okay glad I could help you. Enjoy your trip.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "B Token distribution", "sec_num": null }, { "text": "USER:I'm looking for the Lensfield Hotel. Can you help me find it ? ASSISTANT:Yes of course, it is located in 53-57 lensfield road. Would you like me to book you a reservation ? USER: No thank you. Could you give me the postcode and tell me if they have free parking ? ASSISTANT: Yes, they have free parking. The postcode is cb21en. Is there anything else you'd like to know about this hotel ? USER:I'd like to find a train that leaves after 17:00 on Saturday. ASSISTANT: Absolutely, I need departure and destination information and a time you need to arrive by. USER: I will be departing from Broxbourne and I'd like to go to Cambridge. ASSISTANT: I have 7 available. The first leaves at 17:32, and arrives at 18:32. Would you like me to book that for you ? USER:I am not looking to book at the moment, can I just get the train ID? Thanks. ASSISTANT: Of course! TR0811 leaves at 17:32, arrives at 18:32, with a duration of 60 minutes. 
The price is 14.32 pounds. May I assist you with anything else ? USER: That is all I need for today. Thank you. Good bye. ASSISTANT: Okay glad I could help you. Enjoy your trip. Figure 2 : An example of combining two single-task dialogues (shown in two colors in the original figure) to form a single multi-task dialogue. Table 5 : Analysis of 4-gram overlap across the different combinations of train and validation splits used in the experiments. The analysis shows that the %Unseen in the validation set is higher when training with SNG (Single domain dialogues) but considerably lower when training with MUL. The composition task requires models to understand the underlying task structure, but the data distribution and the performance of the Transformer correlate strongly, suggesting that the Transformer model at best mimics the surface level token distribution rather than understanding the nature of the task.", "cite_spans": [], "ref_spans": [ { "start": 1113, "end": 1121, "text": "Figure 2", "ref_id": null }, { "start": 1242, "end": 1249, "text": "Table 5", "ref_id": null } ], "eq_spans": [], "section": "B Token distribution", "sec_num": null }, { "text": "Code repository", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Document image defect models", "authors": [ { "first": "Henry", "middle": [], "last": "Baird", "suffix": "" } ], "year": 1992, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Henry Baird. 1992. Document image defect models. 
Springer.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Multiwoz -a largescale multi-domain wizard-of-oz dataset for taskoriented dialogue modelling", "authors": [ { "first": "Pawe\u0142", "middle": [], "last": "Budzianowski", "suffix": "" }, { "first": "Tsung-Hsien", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Bo-Hsiang", "middle": [], "last": "Tseng", "suffix": "" }, { "first": "I\u00f1igo", "middle": [], "last": "Casanueva", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Ultes", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Osman Ramadan", "suffix": "" }, { "first": "", "middle": [], "last": "Gasic", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Pawe\u0142 Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, I\u00f1igo Casanueva, Stefan Ultes, Osman Ra- madan, and Milica Gasic. 2018. Multiwoz -a large- scale multi-domain wizard-of-oz dataset for task- oriented dialogue modelling. EMNLP.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Taskmaster-1: Toward a realistic and diverse dialog dataset. 
EMNLP", "authors": [ { "first": "Bill", "middle": [], "last": "Byrne", "suffix": "" }, { "first": "Karthik", "middle": [], "last": "Krishnamoorthi", "suffix": "" }, { "first": "Chinnadhurai", "middle": [], "last": "Sankar", "suffix": "" }, { "first": "Arvind", "middle": [], "last": "Neelakantan", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Duckworth", "suffix": "" }, { "first": "Semih", "middle": [], "last": "Yavuz", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Goodrich", "suffix": "" }, { "first": "Amit", "middle": [], "last": "Dubey", "suffix": "" }, { "first": "Andy", "middle": [], "last": "Cedilnik", "suffix": "" }, { "first": "Kyu-Young", "middle": [], "last": "Kim", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Bill Byrne, Karthik Krishnamoorthi, Chinnadhurai Sankar, Arvind Neelakantan, Daniel Duckworth, Semih Yavuz, Ben Goodrich, Amit Dubey, Andy Cedilnik, and Kyu-Young Kim. 2019. Taskmaster- 1: Toward a realistic and diverse dialog dataset. EMNLP.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Adversarial deep averaging networks for cross-lingual sentiment classification", "authors": [ { "first": "Xilun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Athiwaratkun", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Q", "middle": [], "last": "Kilian", "suffix": "" }, { "first": "Claire", "middle": [], "last": "Weinberger", "suffix": "" }, { "first": "", "middle": [], "last": "Cardie", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xilun Chen, Ben Athiwaratkun, Yu Sun, Kilian Q. Weinberger, and Claire Cardie. 2016. Adversarial deep averaging networks for cross-lingual sentiment classification. 
TACL.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Improving semantic parsing for task oriented dialog. arXiv", "authors": [ { "first": "Arash", "middle": [], "last": "Einolghozati", "suffix": "" }, { "first": "Panupong", "middle": [], "last": "Pasupat", "suffix": "" }, { "first": "Sonal", "middle": [], "last": "Gupta", "suffix": "" }, { "first": "Rushin", "middle": [], "last": "Shah", "suffix": "" }, { "first": "Mrinal", "middle": [], "last": "Mohit", "suffix": "" }, { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arash Einolghozati, Panupong Pasupat, Sonal Gupta, Rushin Shah, Mrinal Mohit, Mike Lewis, and Luke Zettlemoyer. 2019. Improving semantic parsing for task oriented dialog. arXiv.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "A copyaugmented sequence-to-sequence architecture gives good performance on task-oriented dialogue", "authors": [ { "first": "Mihail", "middle": [], "last": "Eric", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mihail Eric and Christopher Manning. 2017. A copy- augmented sequence-to-sequence architecture gives good performance on task-oriented dialogue. EACL.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Unsupervised domain adaptation by backpropagation", "authors": [ { "first": "Yaroslav", "middle": [], "last": "Ganin", "suffix": "" }, { "first": "Victor", "middle": [ "S" ], "last": "Lempitsky", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yaroslav Ganin and Victor S. Lempitsky. 2015. 
Unsupervised domain adaptation by backpropagation. ICML.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Natural language does not emerge 'naturally' in multi-agent dialog", "authors": [ { "first": "Satwik", "middle": [], "last": "Kottur", "suffix": "" }, { "first": "M", "middle": [ "F" ], "last": "Jos\u00e9", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Moura", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Lee", "suffix": "" }, { "first": "", "middle": [], "last": "Batra", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Satwik Kottur, Jos\u00e9 M. F. Moura, Stefan Lee, and Dhruv Batra. 2017. Natural language does not emerge 'naturally' in multi-agent dialog. EMNLP.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Imagenet classification with deep convolutional neural networks", "authors": [ { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Geoffrey", "middle": [ "E" ], "last": "Hinton", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Krizhevsky, Ilya Sutskever, and Geoffrey E Hinton. 2012. Imagenet classification with deep convolutional neural networks. NeurIPS.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Still not systematic after all these years: On the compositional skills of sequence-to-sequence recurrent networks", "authors": [ { "first": "M", "middle": [], "last": "Brenden", "suffix": "" }, { "first": "Marco", "middle": [], "last": "Lake", "suffix": "" }, { "first": "", "middle": [], "last": "Baroni", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Brenden M. Lake and Marco Baroni. 2017.
Still not systematic after all these years: On the compositional skills of sequence-to-sequence recurrent networks. arXiv.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Deal or no deal? end-to-end learning for negotiation dialogues", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Denis", "middle": [], "last": "Yarats", "suffix": "" }, { "first": "Yann", "middle": [ "N" ], "last": "Dauphin", "suffix": "" }, { "first": "Devi", "middle": [], "last": "Parikh", "suffix": "" }, { "first": "Dhruv", "middle": [], "last": "Batra", "suffix": "" } ], "year": 2017, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mike Lewis, Denis Yarats, Yann N. Dauphin, Devi Parikh, and Dhruv Batra. 2017. Deal or no deal? end-to-end learning for negotiation dialogues. EMNLP.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Neural assistant: Joint action prediction, response generation, and latent knowledge reasoning", "authors": [ { "first": "Arvind", "middle": [], "last": "Neelakantan", "suffix": "" }, { "first": "Semih", "middle": [], "last": "Yavuz", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Vishaal", "middle": [], "last": "Prasad", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Goodrich", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Duckworth", "suffix": "" }, { "first": "Chinnadhurai", "middle": [], "last": "Sankar", "suffix": "" }, { "first": "Xifeng", "middle": [], "last": "Yan", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Arvind Neelakantan, Semih Yavuz, Sharan Narang, Vishaal Prasad, Ben Goodrich, Daniel Duckworth, Chinnadhurai Sankar, and Xifeng Yan. 2019. Neural assistant: Joint action prediction, response generation, and latent knowledge reasoning.
Arxiv.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Bleu: a method for automatic evaluation of machine translation", "authors": [ { "first": "Kishore", "middle": [], "last": "Papineni", "suffix": "" }, { "first": "Salim", "middle": [], "last": "Roukos", "suffix": "" }, { "first": "Todd", "middle": [], "last": "Ward", "suffix": "" }, { "first": "Wei-Jing", "middle": [], "last": "Zhu", "suffix": "" } ], "year": 2002, "venue": "Proceedings of the 40th annual meeting on association for computational linguistics. ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics. ACL.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "A network-based end-to-end trainable task-oriented dialogue system", "authors": [ { "first": "Lina", "middle": [ "Maria" ], "last": "Rojas-Barahona", "suffix": "" }, { "first": "Milica", "middle": [], "last": "Gasic", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Mrksic", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Su", "suffix": "" }, { "first": "Stefan", "middle": [], "last": "Ultes", "suffix": "" }, { "first": "Tsung-Hsien", "middle": [], "last": "Wen", "suffix": "" }, { "first": "Steve", "middle": [ "J" ], "last": "Young", "suffix": "" }, { "first": "David", "middle": [], "last": "Vandyke", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lina Maria Rojas-Barahona, Milica Gasic, Nikola Mrk- sic, Pei-Hao Su, Stefan Ultes, Tsung-Hsien Wen, Steve J. Young, and David Vandyke. 2017. A network-based end-to-end trainable task-oriented di- alogue system. 
EACL.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Multi-column deep neural networks for image classification", "authors": [ { "first": "Jurgen", "middle": [], "last": "Schmidhuber", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jurgen Schmidhuber. 2012. Multi-column deep neural networks for image classification. CVPR.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Improving neural machine translation models with monolingual data", "authors": [ { "first": "Rico", "middle": [], "last": "Sennrich", "suffix": "" }, { "first": "Barry", "middle": [], "last": "Haddow", "suffix": "" }, { "first": "Alexandra", "middle": [], "last": "Birch", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Improving neural machine translation models with monolingual data. ACL.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Best practices for convolutional neural networks applied to visual document analysis", "authors": [ { "first": "Patrice", "middle": [ "Y" ], "last": "Simard", "suffix": "" }, { "first": "Dave", "middle": [], "last": "Steinkraus", "suffix": "" }, { "first": "John", "middle": [ "C" ], "last": "Platt", "suffix": "" } ], "year": 2003, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Patrice Y. Simard, Dave Steinkraus, and John C. Platt. 2003. Best practices for convolutional neural net- works applied to visual document analysis. 
ICDAR.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Dropout: A simple way to prevent neural networks from overfitting", "authors": [ { "first": "Nitish", "middle": [], "last": "Srivastava", "suffix": "" }, { "first": "Geoffrey", "middle": [], "last": "Hinton", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Krizhevsky", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2014, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. JMLR.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Domain adversarial training for accented speech recognition", "authors": [ { "first": "Sining", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Ching-Feng", "middle": [], "last": "Yeh", "suffix": "" }, { "first": "Mei-Yuh", "middle": [], "last": "Hwang", "suffix": "" }, { "first": "Mari", "middle": [], "last": "Ostendorf", "suffix": "" }, { "first": "Lei", "middle": [], "last": "Xie", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sining Sun, Ching-Feng Yeh, Mei-Yuh Hwang, Mari Ostendorf, and Lei Xie. 2018. Domain adversarial training for accented speech recognition. Arxiv.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Simultaneous deep transfer across domains and tasks. 
ICCV", "authors": [ { "first": "Eric", "middle": [], "last": "Tzeng", "suffix": "" }, { "first": "Judy", "middle": [], "last": "Hoffman", "suffix": "" }, { "first": "Trevor", "middle": [], "last": "Darrell", "suffix": "" }, { "first": "Kate", "middle": [], "last": "Saenko", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Tzeng, Judy Hoffman, Trevor Darrell, and Kate Saenko. 2015. Simultaneous deep transfer across domains and tasks. ICCV.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Tensor2tensor for neural machine translation", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Samy", "middle": [], "last": "Bengio", "suffix": "" }, { "first": "Eugene", "middle": [], "last": "Brevdo", "suffix": "" }, { "first": "Francois", "middle": [], "last": "Chollet", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Stephan", "middle": [], "last": "Gouws", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "\u0141ukasz", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "Nal", "middle": [], "last": "Kalchbrenner", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Sepassi", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Samy Bengio, Eugene Brevdo, Fran- cois Chollet, Aidan N. Gomez, Stephan Gouws, Llion Jones, \u0141ukasz Kaiser, Nal Kalchbrenner, Niki Parmar, Ryan Sepassi, Noam Shazeer, and Jakob Uszkoreit. 2018. Tensor2tensor for neural machine translation. 
Arxiv.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A neural conversational model", "authors": [ { "first": "Oriol", "middle": [], "last": "Vinyals", "suffix": "" }, { "first": "V", "middle": [], "last": "Quoc", "suffix": "" }, { "first": "", "middle": [], "last": "Le", "suffix": "" } ], "year": 2015, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Oriol Vinyals and Quoc V. Le. 2015. A neural conver- sational model. CoRR.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Semantically conditioned lstm-based natural language generation for spoken dialogue systems", "authors": [ { "first": "Milica", "middle": [], "last": "Tsung-Hsien Wen", "suffix": "" }, { "first": "Nikola", "middle": [], "last": "Gasic", "suffix": "" }, { "first": "Pei-Hao", "middle": [], "last": "Mrk\u0161i\u0107", "suffix": "" }, { "first": "David", "middle": [], "last": "Su", "suffix": "" }, { "first": "Steve", "middle": [], "last": "Vandyke", "suffix": "" }, { "first": "", "middle": [], "last": "Young", "suffix": "" } ], "year": 2015, "venue": "EMNLP", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, Milica Gasic, Nikola Mrk\u0161i\u0107, Pei- Hao Su, David Vandyke, and Steve Young. 2015. Se- mantically conditioned lstm-based natural language generation for spoken dialogue systems. EMNLP.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Latent intention dialogue models", "authors": [ { "first": "Yishu", "middle": [], "last": "Tsung-Hsien Wen", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Miao", "suffix": "" }, { "first": "Steve", "middle": [ "J" ], "last": "Blunsom", "suffix": "" }, { "first": "", "middle": [], "last": "Young", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tsung-Hsien Wen, Yishu Miao, Phil Blunsom, and Steve J. Young. 
2017. Latent intention dialogue models. ICML.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Domain adaptation from synthesis to reality in single-model detector for video smoke detection", "authors": [ { "first": "Gao", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Yongming", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Qixing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Gaohua", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Jinjun", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2017, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Gao Xu, Yongming Zhang, Qixing Zhang, Gaohua Lin, and Jinjun Wang. 2017. Domain adaptation from synthesis to reality in single-model detector for video smoke detection. ArXiv.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "type_str": "figure", "text": "Domain Invariant Transformer Architecture.", "uris": null }, "TABREF1": { "num": null, "content": "", "type_str": "table", "text": "Ablation study to understand the usefulness of Multiple task dialogues.", "html": null }, "TABREF3": { "num": null, "content": "
: SNG: Single task dialogues, RS: Random Augment Synthetic, and TS: Targeted Augment Synthetic.
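The RS (random) and TS (targeted) augmentation schemes named above can be illustrated with a short sketch in the spirit of Figure 2. This is a minimal illustration, not the authors' exact pipeline: the turn format, the `CLOSING_MARKERS` heuristic, and both function names are assumptions.

```python
# Sketch of composing a synthetic multi-task dialogue from two
# single-task dialogues. A dialogue is a list of turn strings.

CLOSING_MARKERS = ("bye", "that's all", "that is all", "thank you")

def is_closing(turn: str) -> bool:
    """Heuristically flag turns that wrap up a task (thanks, goodbyes)."""
    t = turn.lower()
    return any(marker in t for marker in CLOSING_MARKERS)

def random_augment(dialogue_a, dialogue_b):
    """RS-style naive concatenation: keep everything, closings included."""
    return list(dialogue_a) + list(dialogue_b)

def targeted_augment(dialogue_a, dialogue_b):
    """TS-style splice: drop dialogue_a's closing exchange, then append
    dialogue_b, so the result reads as one continuous conversation."""
    head = []
    for turn in dialogue_a:
        if is_closing(turn):
            break
        head.append(turn)
    return head + list(dialogue_b)

hotel = [
    "USER: I'm looking for the Lensfield Hotel. Can you help me find it?",
    "ASSISTANT: Yes of course, it is located at 53-57 Lensfield Road.",
    "USER: No, that's all I needed. Thanks! Bye!",
    "ASSISTANT: You are welcome. Goodbye!",
]
train = [
    "USER: I'd like to find a train that leaves after 17:00 on Saturday.",
    "ASSISTANT: I need departure and destination information.",
]

merged = targeted_augment(hotel, train)  # 2 hotel turns + 2 train turns
```

The keyword heuristic is deliberately crude; it would misfire on a turn like "No thank you, could you give me the postcode?", which is one reason targeted splicing of real dialogues needs care.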
", "type_str": "table", "text": "", "html": null }, "TABREF5": { "num": null, "content": "
: 0.5 and 1.0 correspond to half and all of multitask samples, respectively, during training. Synthetic refers to Targeted Augment dialogues.
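The %Unseen statistic from the 4-gram overlap analysis (Table 5) can be sketched in a few lines. Whitespace tokenization and set-based (type-level) counting are assumptions about the exact procedure; the function names are illustrative.

```python
# Sketch of the %Unseen 4-gram computation between train and validation text.
from typing import Iterable, Set, Tuple

def four_grams(text: str) -> Set[Tuple[str, ...]]:
    """Whitespace-tokenize and collect the set of 4-grams."""
    toks = text.split()
    return {tuple(toks[i:i + 4]) for i in range(len(toks) - 3)}

def percent_unseen(train_texts: Iterable[str], valid_texts: Iterable[str]) -> float:
    """Percentage of validation 4-grams never observed in training."""
    train_grams: Set[Tuple[str, ...]] = set()
    for t in train_texts:
        train_grams |= four_grams(t)
    valid_grams: Set[Tuple[str, ...]] = set()
    for t in valid_texts:
        valid_grams |= four_grams(t)
    if not valid_grams:
        return 0.0
    return 100.0 * len(valid_grams - train_grams) / len(valid_grams)

train = ["i want to book a hotel in cambridge"]
valid = ["i want to book a train to cambridge"]
print(percent_unseen(train, valid))  # 3 of 5 validation 4-grams are unseen: 60.0
```

A high %Unseen under SNG training and a low one under MUL training is exactly the surface-level distribution gap the caption above correlates with model performance.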
", "type_str": "table", "text": "", "html": null }, "TABREF7": { "num": null, "content": "", "type_str": "table", "text": "Varying the \u03b1 to understand the effect of the discriminator on decoder performance", "html": null } } } }