{
"paper_id": "D19-1001",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:10:11.112535Z"
},
"title": "Attending to Future Tokens for Bidirectional Sequence Generation",
"authors": [
{
"first": "Carolin",
"middle": [],
"last": "Lawrence",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NEC Laboratories Europe",
"location": {}
},
"email": "carolin.lawrence@neclab.eu"
},
{
"first": "Bhushan",
"middle": [],
"last": "Kotnis",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NEC Laboratories Europe",
"location": {}
},
"email": "bhushan.kotnis@neclab.eu"
},
{
"first": "Mathias",
"middle": [],
"last": "Niepert",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "NEC Laboratories Europe",
"location": {}
},
"email": "mathias.niepert@neclab.eu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Neural sequence generation is typically performed token-by-token and left-to-right. Whenever a token is generated only previously produced tokens are taken into consideration. In contrast, for problems such as sequence classification, bidirectional attention, which takes both past and future tokens into consideration, has been shown to perform much better. We propose to make the sequence generation process bidirectional by employing special placeholder tokens. Treated as a node in a fully connected graph, a placeholder token can take past and future tokens into consideration when generating the actual output token. We verify the effectiveness of our approach experimentally on two conversational tasks where the proposed bidirectional model outperforms competitive baselines by a large margin.",
"pdf_parse": {
"paper_id": "D19-1001",
"_pdf_hash": "",
"abstract": [
{
"text": "Neural sequence generation is typically performed token-by-token and left-to-right. Whenever a token is generated only previously produced tokens are taken into consideration. In contrast, for problems such as sequence classification, bidirectional attention, which takes both past and future tokens into consideration, has been shown to perform much better. We propose to make the sequence generation process bidirectional by employing special placeholder tokens. Treated as a node in a fully connected graph, a placeholder token can take past and future tokens into consideration when generating the actual output token. We verify the effectiveness of our approach experimentally on two conversational tasks where the proposed bidirectional model outperforms competitive baselines by a large margin.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "When generating an output sequence, neural network models typically produce one token at a time. At each generation step, only the already produced sequence is taken into account. However, future and not-yet-produced tokens can also be highly relevant when choosing the current token. The importance of attending to both past and future tokens is apparent in self-attention architectures such as the Transformer (Vaswani et al., 2017) . The self-attention module of a Transformer network treats a sequence bidrectionally as a fully connected graph of tokens -when a token is produced all other tokens are taken into consideration. However, this requires the entire sequence to be known a priori and when a Transformer is used for sequence generation, the self-attention process only includes previously produced tokens (Vaswani et al. (2017) ; Radford et al. (2019) ; inter alia). But the bidirectional self-attention is a crucial property of the highly successful language model BERT (Devlin et al., 2018) . During the pretraining procedure of BERT, a fraction of input tokens is randomly masked out and the training objective is to predict these masked tokens correctly. BERT can then be fine-tuned for various classification tasks. Unfortunately, BERT cannot be directly used for sequence generation because the bidirectional nature of the approach requires the entire sequence to be known beforehand.",
"cite_spans": [
{
"start": 412,
"end": 434,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 819,
"end": 841,
"text": "(Vaswani et al. (2017)",
"ref_id": "BIBREF11"
},
{
"start": 844,
"end": 865,
"text": "Radford et al. (2019)",
"ref_id": "BIBREF8"
},
{
"start": 985,
"end": 1006,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Inspired by BERT's masking-based objective, we propose to start out with a sequence of placeholder tokens which are iteratively replaced by tokens from the output vocabulary to eventually generate the full output sequence. For an example see Figure 1 . With this novel model component, the self-attention of a Transformer can take both past and future tokens into consideration, leading to Bidirectional Sequence generation (BISON). Furthermore, it allows us to directly incorporate the pre-trained language model BERT and, to the best of our knowledge, for the first time directly finetune it for sequence generation. BISON makes two major contributions which we investigate in turn. First, we explore different stochastic placeholder replacement strategies to determine, at training time, where to position the placeholder tokens. This is crucial as we need the BISON models to be exposed to a large number of heterogeneous placeholder configurations. Second, we explore several strategies for iteratively generating, at inference time, a complete output sequence from an initial sequence of placeholders.",
"cite_spans": [],
"ref_spans": [
{
"start": 242,
"end": 250,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We evaluate our bidirectional sequence generation approach on two conversational tasks. BI-SON outperforms both competitive baselines and state of the art neural network approaches on both datasets by a significant margin. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "For sequence-to-sequence tasks, an input sequence",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Generation with Transformers",
"sec_num": "2"
},
{
"text": "x = x 1 , x 2 , . . . ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Generation with Transformers",
"sec_num": "2"
},
{
"text": "x |x| is to be mapped to an output sequence y = y 1 , y 2 , . . . , y |y| by some model \u03c0 \u03b8 with learnable parameters \u03b8. For neural models, this is typically done by first encoding the input sequence x and then calling a decoder t times to produce a sequence y token-by-token, from left-to-right.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Generation with Transformers",
"sec_num": "2"
},
{
"text": "A popular choice for both encoder and decoder is the transformer (Vaswani et al., 2017) . It takes a sequence of embedded tokens s = s 1 , s 2 , . . . , s |s| and treats it as a fully connected graph over which a self-attention module is applied: for each token s t in the sequence it assigns a probabilistic attention score a t to every other token in the sentence. For the full mathematical details we refer the reader to (Vaswani et al., 2017) .",
"cite_spans": [
{
"start": 65,
"end": 87,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 424,
"end": 446,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Generation with Transformers",
"sec_num": "2"
},
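As a concrete illustration of the fully connected self-attention described above, the following minimal PyTorch sketch computes probabilistic attention scores for every pair of tokens in a sequence. The projection-free scoring and the dimensions are simplifying assumptions for illustration, not the paper's implementation.

```python
# Minimal sketch of bidirectional self-attention over a fully connected token graph.
# Simplification (assumption): embeddings are reused as queries, keys and values;
# a real Transformer applies learned linear projections and multiple heads.
import torch
import torch.nn.functional as F

def self_attention(s: torch.Tensor) -> torch.Tensor:
    """s: embedded token sequence of shape (seq_len, d_model)."""
    d_model = s.size(-1)
    scores = s @ s.transpose(0, 1) / d_model ** 0.5  # (seq_len, seq_len)
    attn = F.softmax(scores, dim=-1)                 # each row: scores a_t over all tokens
    return attn @ s                                  # contextualised token representations

# Example: 5 tokens with 16-dimensional embeddings.
print(self_attention(torch.randn(5, 16)).shape)      # torch.Size([5, 16])
```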
{
"text": "Typically a transformer encoder is employed to encode x, whereas a transformer decoder is used to produce y. In contrast to the encoder, the decoder at time step t only has access to previously produced tokens s <t = s 1 , s 2 , . . . , s t\u22121 . Consequently, the attention module cannot take possible future tokens into account when making its decision at time t. Additionally, in this encoderdecoder framework, there is a disconnect between input x and output y because the self-attention modules are applied to x and y in isolation before they are combined.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Generation with Transformers",
"sec_num": "2"
},
{
"text": "The latter weakness has been overcome in recent work (Radford et al., 2018; Wolf et al., 2019; Radford et al., 2019) by feeding the concatenation s = x \u2295 y to a transformer decoder. At training time, given the current token s t , the transformer is trained to predict the next word s t+1 via maximum likelihood estimation. At test time, the transformer is conditioned on x and then produces the output y token-by-token. But because the model is a transformer decoder, it is unable to take possible future tokens into account.",
"cite_spans": [
{
"start": 53,
"end": 75,
"text": "(Radford et al., 2018;",
"ref_id": "BIBREF7"
},
{
"start": 76,
"end": 94,
"text": "Wolf et al., 2019;",
"ref_id": "BIBREF13"
},
{
"start": 95,
"end": 116,
"text": "Radford et al., 2019)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Generation with Transformers",
"sec_num": "2"
},
{
"text": "During sequence generation, we want to take both past and future tokens into account. More formally, at time t, we want to attend to both s 1 , . . . , s t\u22121 as well as s t+1 , . . . , s |s| . To do this, we give the sequence s = x \u2295 y, the concatenation of the sequences x and y, to a Transformer encoder, rather than a decoder. Of course, at inference time y is unknown. Thus, we propose to replace each token y j with a placeholder tokenp.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional Sequence Generation",
"sec_num": "3"
},
{
"text": "Since the model needs to be exposed to heterogeneous placeholder token configurations during training time, we introduce a placeholder strategy that replaces some tokens y j with placeholder tokensp at training time. Hence, during training, a sequence y is replaced by a sequence p = p 1 , p 2 , . . . , p |y| , where a token p j is either the original token y j or the placeholder tokenp. We introduce two placeholder strategies in the following section. At inference time, p contains only placeholder tokens up to some pre-determined maximum sequence length.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional Sequence Generation",
"sec_num": "3"
},
{
"text": "With the placeholder strategy in place, a Transformer encoder is given the sequence s = x \u2295 p as input. The self-attention module then computes hidden representations r t of each token s t by attending to every other token in the sequence s. Because the output sequence is already present in the form of placeholder tokens both past tokens as well as future, not-yet-produced, tokens can be taken into consideration for every token s t . Following the self-attention step, placeholder tokens are converted into a token from the output vocabulary with a language model (LM) classification layer, where for each placeholder p t , its hidden representation r t is mapped to a distribution d t over the output vocabulary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional Sequence Generation",
"sec_num": "3"
},
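A sketch of this bidirectional forward pass is given below, assuming the HuggingFace transformers library and reusing BERT's [MASK] token as the placeholder; the context string and the output length are made up for illustration, and this mirrors, but is not, the paper's released code.

```python
# Sketch: encode the concatenation s = x (+) p with a BERT encoder plus LM head and
# read off a distribution d_t over the output vocabulary at every placeholder position.
# Assumptions: HuggingFace `transformers`, [MASK] as the placeholder token.
import torch
from transformers import BertForMaskedLM, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertForMaskedLM.from_pretrained("bert-base-uncased")

x = "do i qualify for the grant ?"                   # illustrative input context
num_placeholders = 8                                 # illustrative maximum output length
s = "[CLS] " + x + " [SEP] " + " ".join(["[MASK]"] * num_placeholders) + " [SEP]"

inputs = tokenizer(s, return_tensors="pt", add_special_tokens=False)
with torch.no_grad():
    logits = model(**inputs).logits                  # (1, seq_len, vocab_size)

# Distributions d_t for the placeholder positions only.
mask_positions = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
d = logits[0, mask_positions].softmax(dim=-1)
print(d.shape)                                       # (num_placeholders, vocab_size)
```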
{
"text": "At training time, each output sequence token is fed the gold label and updates to the model \u03c0 \u03b8 are performed using stochastic gradient descent with a cross-entropy loss, i.e.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional Sequence Generation",
"sec_num": "3"
},
{
"text": "L \u03c0 \u03b8 = \u2212 1 M M m=1 |y| j=1 log \u03c0 \u03b8 (p j = y j |s),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional Sequence Generation",
"sec_num": "3"
},
{
"text": "where M is the size of a minibatch. At inference time, the placeholder tokens can be replaced iteratively based on the probability distribution d t over the output vocabulary for each placeholder p t . Different sequence generation strategies are outlined in Section 3.2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Bidirectional Sequence Generation",
"sec_num": "3"
},
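A minimal sketch of this objective is shown below: the cross-entropy is summed over the output positions and averaged over the minibatch, matching the loss above. The tensor shapes and the mask argument are illustrative assumptions.

```python
# Sketch of the BISON training loss: negative log-likelihood of the gold token y_j at
# every output position, summed over positions and averaged over the minibatch size M.
import torch
import torch.nn.functional as F

def bison_loss(logits, gold_ids, output_mask):
    """logits:      (M, seq_len, vocab_size) scores over the full sequence s = x (+) p
    gold_ids:    (M, seq_len) gold token ids (only output positions matter)
    output_mask: (M, seq_len) 1.0 where the position belongs to the output y, else 0.0
    """
    log_probs = F.log_softmax(logits, dim=-1)
    gold_log_probs = log_probs.gather(-1, gold_ids.unsqueeze(-1)).squeeze(-1)
    # Input positions x are masked out; only output positions contribute to the loss.
    return -(gold_log_probs * output_mask).sum() / logits.size(0)

# Toy example: minibatch of 2, sequence length 6, vocabulary of 11 tokens.
logits = torch.randn(2, 6, 11, requires_grad=True)
loss = bison_loss(logits, torch.randint(0, 11, (2, 6)), torch.ones(2, 6))
loss.backward()                                      # gradients for a training step
```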
{
"text": "At inference time, the output sequence starts out with a sequence of placeholder tokens. To introduce this notion at training time, we require a strategy that replaces some output sequence tokens y j with the placeholder tokenp. The simplest approach would be to replace all output sequence tokens y j with the placeholder tokenp. However, with this approach the model is never confronted with a sequence containing a mix of output sequence tokens and placeholder tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Placeholder Replacement Strategy",
"sec_num": "3.1"
},
{
"text": "Due to the exponential number of possible replacement configurations per given token sequence, we introduce probabilistic generative models that we can use to draw diverse sequence replacements. To this end, we model the decision whether to use placeholder or input tokens with probabilistic models, two of which we propose in the following.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Placeholder Replacement Strategy",
"sec_num": "3.1"
},
{
"text": "Bernoulli Random Variables (RV). We model each position of the input sequence with a binary random variable with a fixed mean. For every input sequence and every position i in the sequence, we draw from a Bernoulli variable with mean \u00b5 to decide whether to use y i orp. The expected number of placeholder tokens in a sequence of length |y| is |y|\u00b5 and the variance is \u03c3 2 = |y|\u00b5(1 \u2212 \u00b5). The variance of this strategy is determined by the mean and, therefore, the probabilistic model has one tunable parameter \u00b5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Placeholder Replacement Strategy",
"sec_num": "3.1"
},
{
"text": "Gaussian Random Variables. The number of placeholder tokens of the input sequence p can be seen as drawn from an unknown optimal Binomial distribution. We can approximate this Binomial with a normal distribution N (\u00b5, \u03c3 2 ), where \u00b5 is the mean and \u03c3 the standard deviation and they are considered hyperparameters. More formally, for every input sequence, we draw a value P \u223c N (\u00b5, \u03c3 2 ). Multiplied with the sequence length |y|, the nearest integer value (|y| \u2022 P ) is used as the number of placeholder tokens for the given sequence. The positions of the placeholder tokens in the sequence are then determined at random. The resulting probabilistic model's two parameters (mean \u00b5 and standard deviation \u03c3) are treated as hyperparameters and are not updated during training. Being able to tune the variance parameter independently from the mean parameter might provide an advantage over the parameterization with Bernoulli RVs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Placeholder Replacement Strategy",
"sec_num": "3.1"
},
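The two replacement strategies can be summarised by the short sketch below; the token string "[P]" stands in for the placeholder token, and the helper names are ours, not the paper's.

```python
# Sketch of the two placeholder replacement strategies used at training time.
# Assumption: "[P]" denotes the placeholder token.
import random

PLACEHOLDER = "[P]"

def bernoulli_replace(y, mu):
    """Replace each position independently with probability mu."""
    return [PLACEHOLDER if random.random() < mu else tok for tok in y]

def gaussian_replace(y, mu, sigma):
    """Draw P ~ N(mu, sigma^2) and replace round(|y| * P) positions chosen at random."""
    n = min(len(y), max(0, round(len(y) * random.gauss(mu, sigma))))
    positions = set(random.sample(range(len(y)), n))
    return [PLACEHOLDER if i in positions else tok for i, tok in enumerate(y)]

y = "is the animal an endangered animal ?".split()
print(bernoulli_replace(y, mu=0.7))
print(gaussian_replace(y, mu=0.5, sigma=0.6))
```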
{
"text": "Starting with a sequence of placeholder tokens at inference time, it is possible to generate output token sequences in arbitrary order. We experiment with the following strategies. The distribution in all of these strategies are the distributions d t (t = 1, ..., n) for the placeholders over the output vocabulary. We use the term uncover to mean that an output token is generated for a placeholder token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Generation Strategies",
"sec_num": "3.2"
},
{
"text": "One-step greedy. In a single time step, all placeholder tokens are uncovered simultaneously by picking the most probable token from the output vocabulary for each placeholder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Generation Strategies",
"sec_num": "3.2"
},
{
"text": "Highest probability. Placeholders are replaced iteratively and the placeholder to be uncovered is the placeholder that assigns the highest probability to a token from the output vocabulary, indicating the model is the most sure about this token.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Generation Strategies",
"sec_num": "3.2"
},
{
"text": "Lowest entropy. Placeholders are replaced iteratively and the placeholder to be uncovered is the placeholder that exhibits the lowest entropy over its output vocabulary distribution and the most likely token at this position is chosen. Intuitively, the lowest entropy indicates the position where the uncertainty of the model to decide between tokens of the output vocabulary is the lowest.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Generation Strategies",
"sec_num": "3.2"
},
{
"text": "Left-to-right. Placeholders are replaced iteratively, moving from left-to-right and thus mimicking the typical writing style for English. Note that this approach still differs from the Transformer decoders because future tokens are considered via the placeholder representations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Generation Strategies",
"sec_num": "3.2"
},
{
"text": "No look ahead. To test whether future placeholders hold useful information, we consider an adversarial sequence generation strategy: Again we iteratively uncover placeholders from left-toright, but we suppress all attention flows from future placeholders. This imitates the behaviour of a transformer decoder but follows the idea of predicting a token on a placeholder, rather than predicting the next word as is typically done in transformer decoders. If this performs worse than leftto-right, there is indeed valuable information in future placeholder tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Sequence Generation Strategies",
"sec_num": "3.2"
},
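The following sketch contrasts how the uncovering order differs between strategies, given the distributions d_t that the encoder produces for the output positions; the toy `encode` stand-in and the greedy token choice are illustrative assumptions (one-step greedy and the no-look-ahead variant are omitted for brevity).

```python
# Sketch of iterative placeholder uncovering; only the choice of which placeholder to
# uncover next differs between strategies. `encode` stands in for a BISON forward pass.
import torch

def next_position(d, covered, strategy):
    """d: (length, vocab_size) distributions; covered: bool mask of placeholder positions."""
    if strategy == "left_to_right":
        return int(covered.nonzero()[0])
    if strategy == "highest_probability":
        best = d.max(dim=-1).values.masked_fill(~covered, float("-inf"))
        return int(best.argmax())
    if strategy == "lowest_entropy":
        entropy = -(d * d.clamp_min(1e-9).log()).sum(dim=-1)
        return int(entropy.masked_fill(~covered, float("inf")).argmin())
    raise ValueError(strategy)

def generate(encode, length, strategy="left_to_right"):
    tokens, covered = [None] * length, torch.ones(length, dtype=torch.bool)
    for _ in range(length):
        d = encode(tokens)                  # distributions d_t for all output positions
        t = next_position(d, covered, strategy)
        tokens[t] = int(d[t].argmax())      # greedy choice of the output token at position t
        covered[t] = False
    return tokens

# Toy "encoder": fixed random distributions over a vocabulary of 10 token ids.
fixed = torch.rand(5, 10).softmax(dim=-1)
print(generate(lambda toks: fixed, length=5, strategy="lowest_entropy"))
```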
{
"text": "We conduct a series of experiments to explore BI-SON's behavior. First, we want to compare two token replacement strategies for training as well as the four generation strategies for inference. Second, we want to compare BISON to state of the art methods and investigate the impact of its ability to attend to future tokens.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4"
},
{
"text": "We run experiments on the two following conversational datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "Goal-oriented SHARC (Saeidi et al., 2018) . SHARC is a dialogue, text-based questionanswering dataset. Unlike many popular QA datasets, answers cannot simply be extracted from the text. Given a regulatory text, such as a text from the UK government's website, and a user scenario with corresponding question, it is necessary to interpret the text in the context of the specific user's needs. Before generating its final answer, a system may generate clarification questions. Finally, the system decides if the answer to the user's original question is \"Yes\", \"No\" or \"Irrelevant\" where the latter means the question cannot be answered with the given text.",
"cite_spans": [
{
"start": 20,
"end": 41,
"text": "(Saeidi et al., 2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "We perform the evaluation with the official SHARC script. For a set of generated clarification questions, it computes BLEU n-gram scores for n = 1, 2, 3, 4 using a set of clarification question in the set of gold responses. In each step of the conversation, the model under evaluation generates an output token sequence. This output is automatically assigned to the category \"More\" if it is a clarification question, and to \"Yes\", \"No\", and \"Irrelevant\" otherwise. Since this is a classification task we can compute micro and macro accuracy for it. The final model is chosen using the highest BLEU-4 score on the development set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "The SHARC dataset has a hidden test set and, therefore, it is not feasible to evaluate our various model variants. Hence, we take 30 unique rule texts and their corresponding training examples from the training set. This leads to a new de-velopment set of 2,465 instances and leaves the official development set to be used as a test set here. Finally we submitted our best model to be evaluated on the hidden test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "Free-form DAILY DIALOG (Li et al., 2017) . DAILY DIALOG is a dataset of written conversations occurring in daily life. Following the authors of the corpus, we report BLEU n-gram scores for n = 1, 2, 3, 4 for the generated output sequences with respect to the given gold responses. We tokenize these responses equally to ensure a fair comparison.",
"cite_spans": [
{
"start": 23,
"end": 40,
"text": "(Li et al., 2017)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "4.1"
},
{
"text": "We implement BISON based on the BERT Pytorch code 1 and initialize with the pre-trained BERT model BERT-BASE-UNCASED (Devlin et al., 2018) . Consequently we employ the same model architecture and tokenisation as (Devlin et al., 2018 ) resulting in a model with about 110M parameters. To remain compatible with the BERT model, we prepend each sequence with a [CLS] token and place a [SEP] token after the input context. Similarly, producing a second [SEP] token indicates the end of sequence generation. For input context of SHARC, we follow Saeidi et al. (2018) and use the concatenation of question, rule text, scenario and history. The input context for DAILY DIALOG is the concatenation of all previous utterances.",
"cite_spans": [
{
"start": 117,
"end": 138,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 212,
"end": 232,
"text": "(Devlin et al., 2018",
"ref_id": "BIBREF0"
},
{
"start": 541,
"end": 561,
"text": "Saeidi et al. (2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BISON Settings",
"sec_num": "4.2"
},
{
"text": "On the SHARC and DAILY DIALOG training sets we train for 20 and 40 epochs, respectively, which equates in each case to about 200k seen examples. As optimizer we used ADAM (Kingma and Ba, 2015) with \u03b2 1 = 0.9, \u03b2 2 = 0.999, a L2 weight decay of 0.01 and a learning rate warm-up over the first 10% of training steps. As learning rates we consider both the pre-training learning rate of BERT 1e-4 and the fine-tuning learning rate 3e-5. On preliminary experiments 3e-5 proved to be best for SHARC, whereas it is 1e-4 for DAILY DIALOG. We set the batch size to 15. Finally, the maximum sequence generation length, is set to 50 for SHARC and to 100 for DAILY DIALOG, which was chosen based on values observed in the training data. As the maximum sequence length of the BERT model is 512, longer input sequences are truncated accordingly. For the main results, we employ the sequence generation strategy left-to-right, which we found to work best. Later on we also report results for the other strategies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BISON Settings",
"sec_num": "4.2"
},
{
"text": "For the Bernoulli RV approach, we test \u00b5 \u2208 [0.2, 0.8] with increments of 0.1. For the Gaussian RV approach, we test all possible combinations for the following hyperparmeters \u00b5 = {0.4, 0.5, 0.6} and \u03c3 = {0.3, 0.6, 0.9}. The best combination on the SHARC dev set is \u00b5 = 0.5, \u03c3 = 0.6. It outperforms the best Bernoulli approach (\u00b5 = 0.7) by 3.4 point in BLEU-4 score. Some Bernoulli experiments in fact only produced a very small number of clarification question, e.g. \u00b5 = 0.5 only generated 9 clarification questions on the development set, whereas in the ground truth responses 846 clarification questions occur. This suggests that a high variance is important, as the Bernoulli setups all have a variance of 0.25 or lower and our best Gaussian approach has a variance of 0.6. We directly employ the Gaussian distribution with \u00b5 = 0.5, \u03c3 = 0.6 on the DAILY DIALOG task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BISON Settings",
"sec_num": "4.2"
},
{
"text": "To measure the success of our proposed approach, we consider the following three baselines.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "Encoder-Decoder Transformer (E&D). First, we compare our bidirectional encoder to a standard encoder-decoder Transformer where the decoder only has access to tokens produced so far to compute its self-attention. We use the implementation of OpenNMT (Klein et al., 2017) and employ the parameters suggested by them, but adjust the learning rate to 0.1, which we found to work better for both datasets. Additionally, we increased the word and hidden dimension size to 768 and the number of attention heads to 12 to match the capacity of our model. Training ran for 50 epochs. Needing both an encoder and a decoder, this leads to a total of about 270M parameters.",
"cite_spans": [
{
"start": 249,
"end": 269,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "Encoder-Decoder Transformer with BERT (E&D+B). The power of our bidirectional decoder stems from two advantages. First, we can initialize our model with the pre-trained BERT-BASE-UNCASED model. Second, the decoding process is bidirectional. It would be possible to transfer the first advantage to an encoder-decoder framework by using BERT embeddings. This is however only possible for the input sequence, because the bidirectionality of BERT requires the entire sequence to be available beforehand. Thus, we modify implementation of OpenNMT to use the BERT model as the encoder. The weights are frozen when training the decoder, which produced better results than allowing the gradients to also flow through the BERT model. Again, with both an encoder and decoder, this leads to a total of about 270M parameters. GPT2. Radford et al. (2019) present a transformer decoder, GPT2, trained as a language model on large amounts of monolingual text. Radford et al. (2019) showed that it is possible to perform various tasks in a zero-shot setting by priming the language model with an input and letting it generate further words greedily. This setup can be transferred to a supervised setting, where the model is fine-tuned to a dataset by using maximum likelihood estimation to increase the probability of the gold output sequence (Wolf et al., 2019) . As the starting point for the supervised learning, we initialize the model with the pre-trained model GPT-2-117M released by Radford et al. (2019) 2 and then fine-tune. With 117M parameters, this model is comparable to our model. Unlike baseline 2, this setup can directly employ a pre-trained model as our approach can, but it is not bidirectional.",
"cite_spans": [
{
"start": 1327,
"end": 1346,
"text": "(Wolf et al., 2019)",
"ref_id": "BIBREF13"
},
{
"start": 1474,
"end": 1497,
"text": "Radford et al. (2019) 2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baselines",
"sec_num": "4.3"
},
{
"text": "We report the results of our approach, the various baselines, as well as the previous state-of-the-art (SOTA) scores where applicable in Table 1 and 2 for SHARC and in Table 3 for DAILY DIALOG. On the SHARC dataset, we observe very poor BLEU-4 performance for the encoder-decoder Transformer (E&D), which is consistent with results from Saeidi et al. (2018) , who could not get a LSTM-based network to work without an additional classification head. Adding BERT (E&D+B) slightly improves performance. By directly leveraging a pre-trained model, GPT2 outperforms the previous models by a large margin, reaching 33.9% on BLEU-4 and a micro accuracy of 60.4%. BISON is able to take future tokens into consideration and outperforms GPT2 by 12.3 percentage points in BLEU-4 and by 4.5 points in micro accuracy.",
"cite_spans": [
{
"start": 338,
"end": 358,
"text": "Saeidi et al. (2018)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [
{
"start": 137,
"end": 194,
"text": "Table 1 and 2 for SHARC and in Table 3 for DAILY DIALOG.",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "We submitted the best BISON model out of the random three of Table 1 to be evaluated on the hidden test set and report results in comparison to the best model on the leaderboard, 3 E3 (Zhong and Zettlemoyer, 2019) in Table 2 . BISON outperforms E3 by 5.6 BLEU-4 points, while it is only slightly worse than E3 in terms of accuracy.",
"cite_spans": [
{
"start": 184,
"end": 213,
"text": "(Zhong and Zettlemoyer, 2019)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 61,
"end": 68,
"text": "Table 1",
"ref_id": null
},
{
"start": 217,
"end": 224,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "On the DAILY DIALOG dataset the information retrieval-based method (IR in Table 3) introduced by Li et al. (2017) is very strong and outperforms the best end-to-end model (E2E) (Luo et al., 2018) by over 16 percentage points in BLEU-4. The best end-to-end model is based on LSTMs and Luo et al. (2018) report performance increases when adding an attention module to their setup. The encoder-decoder transformer (E&D) outperforms this setup by over 2 percentage points in BLEU-4 and we conjecture that this is due to the transformer making more effective use of the attention principle. Adding BERT (E&D+B) does not help much for this dataset. But again we observe a large increase of performance when directly employing pre-trained models. GPT2 performs on par with the IR SOTA, achieving a BLEU-4 score of 19.4%. Again, BISON can outperform GPT2, here with a difference of 6.2 points in BLEU-4 and even larger increases in the other scores.",
"cite_spans": [
{
"start": 177,
"end": 195,
"text": "(Luo et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 74,
"end": 82,
"text": "Table 3)",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "Effect of bidirectionality. To investigate that our model benefits from bidirectionality, we consider a setup where BISON isn't allowed to attend (Li et al., 2017) and E2E (SOTA) (Luo et al., 2018) are, to the best of our knowledge, the best previously published scores for information retrieval and end-toend approaches. to future tokens during prediction (see Table 4 ). It causes a drop in BLEU-4 performance of about 25 points on the SHARC dataset and a drop of 10 points on the DAILY DIALOG dataset. This showcases that BISON during training has learnt to rely on the ability to attend to future tokens. Effect of pre-trained model. We are curious how big the effect of the pre-trained model is. Thus, instead of starting with the BERT-BASE-UNCASED weights, we initialize BISON with random weights drawn from a normal distribution with mean 0.0 and standard deviation of 0.02. Results are presented in Table 5 for SHARC and DAILY DIALOG. Even without a pre-trained language model, our approach can outperform the standard encoder-decoder transformer framework (E&D) on both datasets, although we had to increase the number of epochs for the SHARC dataset to 40. On the DAILY DIALOG task, we are even able to outperform GPT2. This demonstrates the effectiveness of our approach in itself, free of any pre-trained language model. Effect of sequence generation strategies. We present the different sequence generation strategies in Table 6 . The best overall sequence generation strategy is to predict from left to right which achieves good results on both datasets. On the SHARC dataset the highest probability approach performs better than left-to-right. However, on DAILY DIALOG this approach is not as successful. This suggests that it might be worth selecting the best sequence generation strategy for each dataset individually. However, we hypothesize that leftto-right works consistently well due to the left-toright nature of the English language. A brief experiment with a right-to-left strategy gave poor results.",
"cite_spans": [
{
"start": 146,
"end": 163,
"text": "(Li et al., 2017)",
"ref_id": "BIBREF5"
},
{
"start": 179,
"end": 197,
"text": "(Luo et al., 2018)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 362,
"end": 369,
"text": "Table 4",
"ref_id": "TABREF5"
},
{
"start": 907,
"end": 914,
"text": "Table 5",
"ref_id": "TABREF7"
},
{
"start": 1434,
"end": 1441,
"text": "Table 6",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.4"
},
{
"text": "We believe that the placeholders capture sequential information present in the language model learned during pre-training. After running a transformer encoder where each position can attend to every other position, a placeholder token will have a probability distribution over the output vocabulary and this distribution is informed by all other tokens in input and output. Thus, a place- Table 7 : Average attention weights and standard deviation when predicting from left-to-right on both SHARC and DAILY DIALOG (DD) for different parts of the sequence, where \u03b1 1 is for the input sequence x, \u03b1 2 /\u1fb1 2 is for the already produced sequence y and \u03b1 3 /\u1fb1 3 is for the sequence of remaining placeholder tokens p. \u03b1 k are the normalized attention weights across all three parts, whereas\u1fb1 k normalizes over the second and third part.",
"cite_spans": [],
"ref_spans": [
{
"start": 389,
"end": 396,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "holder could be seen as a mixture of tokens with varying probabilities. As placeholders are subsequently uncovered, the other placeholders can update their distribution by taking the newly revealed token into consideration. For example, in Figure 2 , for the sentence\"is the animal an endangered animal ?\", while generating \"endangered\", the self-attention head pays attention to the next placeholder token, which in the next step is revealed to be \"animal\". While producing \"endangered\", the distribution for the next position already placed a high probability on \"animal\", thus the current token can take this into consideration and produces \"endangered\". Further heat maps demonstrating this can be found in the appendix.",
"cite_spans": [],
"ref_spans": [
{
"start": 240,
"end": 248,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "The quantify this intuition, we measure the average attention score on various parts of the sequence. For this, we use the left-to-right prediction strategy. Thus, at time t, we can decompose our sequence into three parts: s = x\u2295y\u2295p, where x is the input, y the already produced sequence and p the remaining sequence of placeholder tokens. For each attention head, we can decompose the attention probabilities into the three parts, 1. attention on the input text, 2. attention on the current word and already generated words (left of the current word),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "3. attention on words that are yet to be generated (right of the current word). Each row shows the attention over the output sequence for this row's placeholder token at that point in time. Word in previous rows have been produced already, whereas words of later rows still hold placeholder tokens. Thus the upper triangle of the matrix shows the attention that is paid to future tokens. The red square shows that while generating the token \"endangered\", the attention head already takes the next placeholder into account, which is revealed to be \"animal\" in the next step. Best viewed in color.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "This is mathematically expressed as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "a t = a 0:|x| \u2295 a |x|+1:|x|+t \u2295 a |x|+t+1,|s| ,",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "where |s| is the maximum possible sequence length. For each part we can calculate an average leading to three values, a 1 t , a 2 t and a 3 t . Averaged over all T generation time steps and all D data points, we can derive a score for each part k, k = 1, 2, 3 and each attention head h:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "\u03b1 k h = 1 D 1 T D d=1 T t=1 a k d,t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "Note that we use the attention heads in the final BERT self-attention layer. Averaging over all H attention heads, \u03b1 k = 1 H H h=1 \u03b1 k h , leads to the results reported in Table 7 for both datasets. Unsurprisingly, we find that with scores of over 90% for both datasets the majority of the attention is focused on the first part, i.e. the conditioning input x (see \u03b1 1 in Table 7 ). The remaining attention is split between the already produced sequence (\u03b1 2 ) and the future tokens (\u03b1 3 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 172,
"end": 179,
"text": "Table 7",
"ref_id": null
},
{
"start": 372,
"end": 379,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "To directly compare the relationship within the sequence generation, we re-normalize over \u03b1 2 and \u03b1 3 , leading to new values\u1fb1 2 and\u1fb1 3 (see Table 7 ). Here we can see that the past, already produced tokens are about twice as important as the future, not-yet-produced tokens. But with scores of just under 30% on both datasets, we see that a substantial amount of attention is also focused on the future, not-yet-produced tokens.",
"cite_spans": [],
"ref_spans": [
{
"start": 141,
"end": 148,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "Interestingly, with a standard deviation of about 14%, the values of\u1fb1 2 and\u1fb1 3 vary strongly across the different attention heads. For example on the SHARC dataset, we find one attention head where only about 9% is focused on the future and another where it is about 64% and thus this attention head pays more attention to the future than the past. A graphical overview can be found in the appendix for both datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "Transformers (Vaswani et al., 2017) model sequences as fully connected graphs and apply a bidirectional self-attention module where every token can attend to every other token. Because of this a Transformer is not restricted to sequential orderings. However, Vaswani et al. (2017) ; inter alia still restrict themselves to producing tokens from left-to-right and only allow a Transformer decoder to attend to previously produced tokens. Recently, several attempts have been made to lift the left-to-right restriction in Transformer or LSTM-based models (Gu et al., 2019; Stern et al., 2019; Welleck et al., 2019; Zhou et al., 2019) , but in those approaches it is not possible to attend to future, not-yet-produced tokens.",
"cite_spans": [
{
"start": 13,
"end": 35,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF11"
},
{
"start": 259,
"end": 280,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF11"
},
{
"start": 553,
"end": 570,
"text": "(Gu et al., 2019;",
"ref_id": "BIBREF2"
},
{
"start": 571,
"end": 590,
"text": "Stern et al., 2019;",
"ref_id": "BIBREF10"
},
{
"start": 591,
"end": 612,
"text": "Welleck et al., 2019;",
"ref_id": "BIBREF12"
},
{
"start": 613,
"end": 631,
"text": "Zhou et al., 2019)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Concurrently to our work, (Ghazvininejad et al., 2019) proposed a similar placeholder strategy approach for generating in the context of machine translation. However, they employ an encoderdecoder framework, whereas we only require an encoder, which more closely links input and output via a single shared attention module. Furthermore, they only consider uniform sampling of placeholders whereas we found that the higher variance, which we can control with the Gaussian random variable approach, leads to better results.",
"cite_spans": [
{
"start": 26,
"end": 54,
"text": "(Ghazvininejad et al., 2019)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "Bidirectionality is one of the crucial ingredients in the success of the recently proposed unsupervised language model BERT (Devlin et al., 2018) . For this, Devlin et al. (2018) propose a Transformer encoder to take full advantage of the bidi-rectional nature of the Transformer. Their resulting model, BERT, can directly be applied to various classification tasks but not to sequence generation tasks. Our approach shows how a Transformer encoder can be used for sequence generation and this allows us to directly incorporate BERT into our experiments.",
"cite_spans": [
{
"start": 124,
"end": 145,
"text": "(Devlin et al., 2018)",
"ref_id": "BIBREF0"
},
{
"start": 158,
"end": 178,
"text": "Devlin et al. (2018)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "GPT (Radford et al., 2018) and GPT2 (Radford et al., 2019) are both pre-trained language models that use a Transformer decoder instead, which can only attend to already produced tokens. For dialogue, the GPT model has been fine-tuned for the chit-chat dataset PersonaChat (Zhang et al., 2018) by Wolf et al. (2019) . While GPT and GPT2 can immediately be used as a sequence generators, these models do not offer bidirectionality and they cannot attend to not-yet-produced tokens. Our bidirectional encoder for sequence generation can combine the best of both worlds.",
"cite_spans": [
{
"start": 4,
"end": 26,
"text": "(Radford et al., 2018)",
"ref_id": "BIBREF7"
},
{
"start": 36,
"end": 58,
"text": "(Radford et al., 2019)",
"ref_id": "BIBREF8"
},
{
"start": 272,
"end": 292,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF14"
},
{
"start": 296,
"end": 314,
"text": "Wolf et al. (2019)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "6"
},
{
"text": "We introduced bidirectional sequence generation by employing placeholders in the output sequence. These placeholder tokens are subsequently replaced by tokens of the output vocabulary. Crucially, this allows a transformer encoder to attend to both past and future, not-yet-produced token. Simply masking all placeholder tokens is not feasible. Instead we investigated two placeholder strategies, based on Bernoulli and Gaussian random variables. At prediction time, our approach is not restricted to produce the output sequence from left to right. However, this strategy proved to produce most consistent results in our experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "Our approach outperforms previous end-to-end approaches that do not make use of any pretrained language models. In conjunction with the pre-trained language model BERT, our bidirectional sequence generation approach allows us to achieve new state-of-art results on both conversational tasks. In the future, we would like to apply our approach to other sequence generation tasks. Additionally, we wonder if a further performance increase could be achieved if the pre-training of BERT would employ our placeholder strategy.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://github.com/huggingface/ pytorch-pretrained-BERT",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://github.com/openai/gpt-2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://sharc-data.github.io/ leaderboard.html, 19 August 2019",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "BERT: pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: pre-training of deep bidirectional transformers for language under- standing. ArXiv e-prints, 1810.04805.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Constant-Time Machine Translation with Conditional Masked Language Models",
"authors": [
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1904.09324[cs,stat].ArXiv:1904.09324"
]
},
"num": null,
"urls": [],
"raw_text": "Marjan Ghazvininejad, Omer Levy, Yinhan Liu, and Luke Zettlemoyer. 2019. Constant-Time Machine Translation with Conditional Masked Language Models. arXiv:1904.09324 [cs, stat]. ArXiv: 1904.09324.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Insertion-based decoding with automatically inferred generation order",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Qi Liu, and Kyunghyun Cho. 2019. Insertion-based decoding with automatically in- ferred generation order. CoRR, abs/1902.01370.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "P",
"middle": [],
"last": "Diederick",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Kingma",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations (ICLR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederick P Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Inter- national Conference on Learning Representations (ICLR), San Diego, CA, USA.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "OpenNMT: Open-Source Toolkit for Neural Machine Translation",
"authors": [
{
"first": "G",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "A",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G. Klein, Y. Kim, Y. Deng, J. Senellart, and A. M. Rush. 2017. OpenNMT: Open-Source Toolkit for Neural Machine Translation. ArXiv e-prints, 1701.02810.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Dailydialog: A manually labelled multi-turn dialogue dataset",
"authors": [
{
"first": "Yanran",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Xiaoyu",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Wenjie",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ziqiang",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Shuzi",
"middle": [],
"last": "Niu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing (IJCNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. Dailydialog: A manually labelled multi-turn dialogue dataset. In Proceedings of the Eighth International Joint Conference on Nat- ural Language Processing (IJCNLP), Taipei, Tai- wan.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "An auto-encoder matching model for learning utterance-level semantic dependency in dialogue generation",
"authors": [
{
"first": "Liangchen",
"middle": [],
"last": "Luo",
"suffix": ""
},
{
"first": "Jingjing",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Junyang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Qi",
"middle": [],
"last": "Zeng",
"suffix": ""
},
{
"first": "Xu",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liangchen Luo, Jingjing Xu, Junyang Lin, Qi Zeng, and Xu Sun. 2018. An auto-encoder matching model for learning utterance-level semantic depen- dency in dialogue generation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Improving Language Understanding by Generative Pre-Training",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Karthik",
"middle": [],
"last": "Narasimhan",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Salimans",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving Language Under- standing by Generative Pre-Training. Technical Re- port Technical report, OpenAI.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Language Models are Unsupervised Multitask Learners",
"authors": [
{
"first": "Alec",
"middle": [],
"last": "Radford",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Rewon",
"middle": [],
"last": "Child",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Luan",
"suffix": ""
},
{
"first": "Dario",
"middle": [],
"last": "Amodei",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, and Dario Amodei. 2019. Language Models are Unsupervised Multitask Learners. Technical report, OpenAI.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Interpretation of natural language rules in conversational machine reading",
"authors": [
{
"first": "Marzieh",
"middle": [],
"last": "Saeidi",
"suffix": ""
},
{
"first": "Max",
"middle": [],
"last": "Bartolo",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Sameer",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Rockt\u00e4schel",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Sheldon",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Bouchard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marzieh Saeidi, Max Bartolo, Patrick Lewis, Sameer Singh, Tim Rockt\u00e4schel, Mike Sheldon, Guillaume Bouchard, and Sebastian Riedel. 2018. Interpreta- tion of natural language rules in conversational ma- chine reading. In Proceedings of the 2018 Con- ference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Insertion transformer: Flexible sequence generation via insertion operations",
"authors": [
{
"first": "Mitchell",
"middle": [],
"last": "Stern",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Chan",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Kiros",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mitchell Stern, William Chan, Jamie Kiros, and Jakob Uszkoreit. 2019. Insertion transformer: Flexible se- quence generation via insertion operations. CoRR, abs/1902.03249.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Attention is All you Need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems 30 (NIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Pro- cessing Systems 30 (NIPS).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Non-monotonic sequential text generation",
"authors": [
{
"first": "Sean",
"middle": [],
"last": "Welleck",
"suffix": ""
},
{
"first": "Kiant\u00e9",
"middle": [],
"last": "Brantley",
"suffix": ""
},
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sean Welleck, Kiant\u00e9 Brantley, Hal Daum\u00e9 III, and Kyunghyun Cho. 2019. Non-monotonic sequential text generation. CoRR, abs/1902.02192.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "TransferTransfo: A Transfer Learning Approach for Neural Network Based Conversational Agents",
"authors": [
{
"first": "Thomas",
"middle": [],
"last": "Wolf",
"suffix": ""
},
{
"first": "Victor",
"middle": [],
"last": "Sanh",
"suffix": ""
},
{
"first": "Julien",
"middle": [],
"last": "Chaumond",
"suffix": ""
},
{
"first": "Clement",
"middle": [],
"last": "Delangue",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. TransferTransfo: A Transfer Learning Approach for Neural Network Based Conversational Agents. ArXiv e-prints,",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Personalizing dialogue agents: I have a dog, do you have pets too?",
"authors": [
{
"first": "Saizheng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Dinan",
"suffix": ""
},
{
"first": "Jack",
"middle": [],
"last": "Urbanek",
"suffix": ""
},
{
"first": "Arthur",
"middle": [],
"last": "Szlam",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Per- sonalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Lin- guistics (ACL), Melbourne, Australia.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "E3: Entailment-driven extracting and editing for conversational machine reading",
"authors": [
{
"first": "Victor",
"middle": [],
"last": "Zhong",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Victor Zhong and Luke Zettlemoyer. 2019. E3: Entailment-driven extracting and editing for conver- sational machine reading. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, Florence, Italy.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Synchronous bidirectional neural machine translation",
"authors": [
{
"first": "Long",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2019,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "7",
"issue": "",
"pages": "91--105",
"other_ids": {
"DOI": [
"10.1162/tacl_a_00256"
]
},
"num": null,
"urls": [],
"raw_text": "Long Zhou, Jiajun Zhang, and Chengqing Zong. 2019. Synchronous bidirectional neural machine transla- tion. Transactions of the Association for Compu- tational Linguistics, 7:91-105.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"uris": null,
"text": "accepted or enrolled in an accredited degree or program ? Example generation, going from a sequence of the placeholder tokenp (1), to an intermediate representation (2) and to the final output (3).",
"type_str": "figure"
},
"FIGREF1": {
"num": null,
"uris": null,
"text": "Heat map (darker hues indicate higher attention) that shows an example of where an attention head looks into the future while generating from left to right.",
"type_str": "figure"
},
"TABREF1": {
"html": null,
"num": null,
"text": "",
"content": "<table/>",
"type_str": "table"
},
"TABREF3": {
"html": null,
"num": null,
"text": "",
"content": "<table><tr><td>: BLEU n-gram scores for n = 1, 2, 3, 4 on the</td></tr><tr><td>DailyDialog test set, averaged over 3 independent runs</td></tr><tr><td>for GPT2 and BISON. Models before the line do not</td></tr><tr><td>make use of a pre-trained language model. IR (SOTA)</td></tr></table>",
"type_str": "table"
},
"TABREF5": {
"html": null,
"num": null,
"text": "Comparison of BISON to a setup where BI-SON isn't allowed to attend to future tokens, i.e. past only, for SHARC and DAILY DIALOG (DD).",
"content": "<table/>",
"type_str": "table"
},
"TABREF7": {
"html": null,
"num": null,
"text": "",
"content": "<table><tr><td colspan=\"3\">: Best end-to-end models that do not use a</td></tr><tr><td colspan=\"3\">pre-trained language model in comparison with BISON</td></tr><tr><td colspan=\"3\">that uses randomly initialized weights for SHARC and</td></tr><tr><td colspan=\"3\">DAILY DIALOG (DD), averaged over 3 runs.</td></tr><tr><td>Strategy</td><td colspan=\"2\">SHARC DAILY DIALOG</td></tr><tr><td>one step greedy</td><td>22.9</td><td>9.3</td></tr><tr><td>lowest entropy</td><td>40.3</td><td>16.8</td></tr><tr><td>highest probability</td><td>50.9</td><td>16.4</td></tr><tr><td>left-to-right</td><td>46.2</td><td>23.8</td></tr></table>",
"type_str": "table"
},
"TABREF8": {
"html": null,
"num": null,
"text": "BLEU-4 using various sequence generation strategies for BISON on SHARC and DAILY DIALOG.",
"content": "<table/>",
"type_str": "table"
}
}
}
}