{ "paper_id": "D19-1001", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:10:11.112535Z" }, "title": "Attending to Future Tokens for Bidirectional Sequence Generation", "authors": [ { "first": "Carolin", "middle": [], "last": "Lawrence", "suffix": "", "affiliation": { "laboratory": "", "institution": "NEC Laboratories Europe", "location": {} }, "email": "carolin.lawrence@neclab.eu" }, { "first": "Bhushan", "middle": [], "last": "Kotnis", "suffix": "", "affiliation": { "laboratory": "", "institution": "NEC Laboratories Europe", "location": {} }, "email": "bhushan.kotnis@neclab.eu" }, { "first": "Mathias", "middle": [], "last": "Niepert", "suffix": "", "affiliation": { "laboratory": "", "institution": "NEC Laboratories Europe", "location": {} }, "email": "mathias.niepert@neclab.eu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Neural sequence generation is typically performed token-by-token and left-to-right. Whenever a token is generated only previously produced tokens are taken into consideration. In contrast, for problems such as sequence classification, bidirectional attention, which takes both past and future tokens into consideration, has been shown to perform much better. We propose to make the sequence generation process bidirectional by employing special placeholder tokens. Treated as a node in a fully connected graph, a placeholder token can take past and future tokens into consideration when generating the actual output token. We verify the effectiveness of our approach experimentally on two conversational tasks where the proposed bidirectional model outperforms competitive baselines by a large margin.", "pdf_parse": { "paper_id": "D19-1001", "_pdf_hash": "", "abstract": [ { "text": "Neural sequence generation is typically performed token-by-token and left-to-right. Whenever a token is generated only previously produced tokens are taken into consideration. In contrast, for problems such as sequence classification, bidirectional attention, which takes both past and future tokens into consideration, has been shown to perform much better. We propose to make the sequence generation process bidirectional by employing special placeholder tokens. Treated as a node in a fully connected graph, a placeholder token can take past and future tokens into consideration when generating the actual output token. We verify the effectiveness of our approach experimentally on two conversational tasks where the proposed bidirectional model outperforms competitive baselines by a large margin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "When generating an output sequence, neural network models typically produce one token at a time. At each generation step, only the already produced sequence is taken into account. However, future and not-yet-produced tokens can also be highly relevant when choosing the current token. The importance of attending to both past and future tokens is apparent in self-attention architectures such as the Transformer (Vaswani et al., 2017) . The self-attention module of a Transformer network treats a sequence bidrectionally as a fully connected graph of tokens -when a token is produced all other tokens are taken into consideration. However, this requires the entire sequence to be known a priori and when a Transformer is used for sequence generation, the self-attention process only includes previously produced tokens (Vaswani et al. (2017) ; Radford et al. 
(2019) ; inter alia). But the bidirectional self-attention is a crucial property of the highly successful language model BERT (Devlin et al., 2018) . During the pretraining procedure of BERT, a fraction of input tokens is randomly masked out and the training objective is to predict these masked tokens correctly. BERT can then be fine-tuned for various classification tasks. Unfortunately, BERT cannot be directly used for sequence generation because the bidirectional nature of the approach requires the entire sequence to be known beforehand.", "cite_spans": [ { "start": 412, "end": 434, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF11" }, { "start": 819, "end": 841, "text": "(Vaswani et al. (2017)", "ref_id": "BIBREF11" }, { "start": 844, "end": 865, "text": "Radford et al. (2019)", "ref_id": "BIBREF8" }, { "start": 985, "end": 1006, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Inspired by BERT's masking-based objective, we propose to start out with a sequence of placeholder tokens which are iteratively replaced by tokens from the output vocabulary to eventually generate the full output sequence. For an example, see Figure 1 . With this novel model component, the self-attention of a Transformer can take both past and future tokens into consideration, leading to Bidirectional Sequence generation (BISON). Furthermore, it allows us to directly incorporate the pre-trained language model BERT and, to the best of our knowledge, for the first time directly fine-tune it for sequence generation. BISON makes two major contributions which we investigate in turn. First, we explore different stochastic placeholder replacement strategies to determine, at training time, where to position the placeholder tokens. This is crucial as we need the BISON models to be exposed to a large number of heterogeneous placeholder configurations. Second, we explore several strategies for iteratively generating, at inference time, a complete output sequence from an initial sequence of placeholders.", "cite_spans": [], "ref_spans": [ { "start": 242, "end": 250, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We evaluate our bidirectional sequence generation approach on two conversational tasks. BISON outperforms both competitive baselines and state-of-the-art neural network approaches on both datasets by a significant margin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "For sequence-to-sequence tasks, an input sequence x = x_1, x_2, ..., x_{|x|} is to be mapped to an output sequence y = y_1, y_2, ..., y_{|y|} by some model \u03c0_\u03b8 with learnable parameters \u03b8. For neural models, this is typically done by first encoding the input sequence x and then calling a decoder t times to produce a sequence y token-by-token, from left-to-right.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Sequence Generation with Transformers", "sec_num": "2" }, { "text": "A popular choice for both encoder and decoder is the transformer (Vaswani et al., 2017) . It takes a sequence of embedded tokens s = s_1, s_2, ..., s_{|s|} and treats it as a fully connected graph over which a self-attention module is applied: for each token s_t in the sequence, it assigns a probabilistic attention score a_t to every other token in the sentence. For the full mathematical details we refer the reader to (Vaswani et al., 2017) .", "cite_spans": [ { "start": 65, "end": 87, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF11" }, { "start": 424, "end": 446, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Sequence Generation with Transformers", "sec_num": "2" }, { "text": "Typically, a transformer encoder is employed to encode x, whereas a transformer decoder is used to produce y. In contrast to the encoder, the decoder at time step t only has access to previously produced tokens s