{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T02:11:04.682896Z" }, "title": "Few-Shot Learning of an Interleaved Text Summarization Model by Pretraining with Synthetic Data", "authors": [ { "first": "Sanjeev", "middle": [ "Kumar" ], "last": "Karn", "suffix": "", "affiliation": { "laboratory": "", "institution": "LMU Munich", "location": {} }, "email": "skarn@cis.lmu.de" }, { "first": "Francine", "middle": [], "last": "Chen", "suffix": "", "affiliation": {}, "email": "francine@acm.orgyan-ying.chen@tri.global" }, { "first": "Yan-Ying", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Toyota Research Institute", "location": { "addrLine": "California 4 Machine Intelligence", "settlement": "Los Altos" } }, "email": "" }, { "first": "Ulli", "middle": [], "last": "Waltinger", "suffix": "", "affiliation": {}, "email": "ulli.waltinger@siemens.com" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "", "affiliation": { "laboratory": "", "institution": "LMU Munich", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Interleaved texts, where posts belonging to different threads occur in a sequence, commonly occur in online chat posts, so that it can be time-consuming to quickly obtain an overview of the discussions. Existing systems first disentangle the posts by threads and then extract summaries from those threads. A major issue with such systems is error propagation from the disentanglement component. While endto-end trainable summarization system could obviate explicit disentanglement, such systems require a large amount of labeled data. To address this, we propose to pretrain an endto-end trainable hierarchical encoder-decoder system using synthetic interleaved texts. We show that by fine-tuning on a real-world meeting dataset (AMI), such a system out-performs a traditional two-step system by 22%. We also compare against transformer models and observed that pretraining with synthetic data both the encoder and decoder outperforms the BertSumExtAbs transformer model which pretrains only the encoder on a large dataset.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "Interleaved texts, where posts belonging to different threads occur in a sequence, commonly occur in online chat posts, so that it can be time-consuming to quickly obtain an overview of the discussions. Existing systems first disentangle the posts by threads and then extract summaries from those threads. A major issue with such systems is error propagation from the disentanglement component. While endto-end trainable summarization system could obviate explicit disentanglement, such systems require a large amount of labeled data. To address this, we propose to pretrain an endto-end trainable hierarchical encoder-decoder system using synthetic interleaved texts. We show that by fine-tuning on a real-world meeting dataset (AMI), such a system out-performs a traditional two-step system by 22%. 
We also compare against transformer models and observe that pretraining both the encoder and the decoder with synthetic data outperforms the BertSumExtAbs transformer model, which pretrains only the encoder on a large dataset.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Interleaved texts are increasingly common, occurring in social media conversations such as Slack and Stack Exchange, where posts belonging to different threads may be intermixed in the post sequence; see a meeting transcript from the AMI corpus (McCowan et al., 2005) in Table 1. Due to the mixing, getting a quick sense of the different conversational threads is often difficult.", "cite_spans": [ { "start": 245, "end": 267, "text": "(McCowan et al., 2005)", "ref_id": "BIBREF17" } ], "ref_spans": [ { "start": 271, "end": 278, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In conversation disentanglement, interleaved posts are grouped by thread. However, a reader still has to read all posts in each thread cluster to get the gist. Shang et al. (2018) proposed a two-step system that takes an interleaved text as input, first disentangles the posts thread-wise by clustering, and then compresses the thread-wise posts into single-sentence summaries. However, disentanglement, e.g., Wang and Oard (2009), propagates errors to the downstream summarization task. An end-to-end supervised summarization system that implicitly identifies the conversations would eliminate this error propagation. However, labeling interleaved texts is a difficult and expensive task (Verberne et al., 2018). AMI Utterances . . . Who is gonna do a PowerPoint presentation ? Think we all Huh. You will . . . . . . \u03be and uh the sender will send to the telly itself an infrared signal to tell it to switch on or switch. . . . . . \u03b6 so y so it's so it's so you got so that's something we should have a look into then i when desi when designing the ergonomics of see have a look . . . . . . \u03c8 , the little tiny weeny batteries, all like special long-lasting batteries. . .", "cite_spans": [ { "start": 168, "end": 187, "text": "Shang et al. (2018)", "ref_id": "BIBREF22" }, { "start": 418, "end": 438, "text": "Wang and Oard (2009)", "ref_id": "BIBREF26" }, { "start": 695, "end": 717, "text": "Verberne et al., 2018)", "ref_id": "BIBREF25" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": ". . . Summary 1) the project manager had the team members re-introduce . . . 2) the industrial designer discussed the interior workings of a remote and the team discussed options for batteries and infra-red signals.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": ". . . 5) the marketing expert presented research on consumer preferences on remotes in general and on voice recognition and the team discussed the option to have an ergonomically designed remote.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": ". . . Table 1: The top section shows AMI ASR transcripts and the bottom section shows human-written summaries. \u03be = the 150th, \u03b6 = the 522nd, and \u03c8 = the 570th utterance.
a) refers to the a-th sentence in the multi-sentence summary.", "cite_spans": [], "ref_spans": [ { "start": 6, "end": 13, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We propose a pretraining approach to tackle these issues. We synthesize a corpus of interleaved text-summary pairs out of a corpus of regular document-summary pairs and train an end-to-end trainable encoder-decoder system on it. To generate the summary, the model learns to infer (disentangle) the major topics in the interleaved threads. We show on synthetic and real-world data that the encoder-decoder system not only obviates a disentanglement component but also improves performance. Thus, the summarization task acts as an auxiliary task for disentanglement. Additionally, we show that fine-tuning the encoder-decoder system, with its learned disentanglement representations, on the real-world AMI dataset achieves a substantial improvement in evaluation metrics despite the small number of labels.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" },
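The synthetic data creation algorithm is detailed in Section 4; as a rough, hypothetical illustration of the idea above, the sketch below interleaves the posts of several regular document-summary pairs into one post sequence (preserving the within-thread order) and keeps the per-thread summaries as the multi-sentence target. The function name and the random merging policy are illustrative assumptions, not the authors' implementation.

```python
import random
from typing import List, Tuple

def interleave_pairs(
    pairs: List[Tuple[List[str], str]], seed: int = 0
) -> Tuple[List[str], List[str]]:
    """Merge the posts of several document-summary pairs into one interleaved
    post sequence, keeping the original order within each document (thread),
    and collect the per-thread summaries as the multi-sentence target."""
    rng = random.Random(seed)
    # One queue of posts per thread, consumed front-to-back.
    queues = [list(doc) for doc, _ in pairs]
    summaries = [summary for _, summary in pairs]
    interleaved = []
    while any(queues):
        # Pick a random thread that still has posts left.
        candidates = [i for i, q in enumerate(queues) if q]
        i = rng.choice(candidates)
        interleaved.append(queues[i].pop(0))
    return interleaved, summaries

# Example: two single-document "threads" become one interleaved channel.
posts, target = interleave_pairs([
    (["post a1", "post a2"], "summary of thread a"),
    (["post b1", "post b2"], "summary of thread b"),
])
```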
{ "text": "We also propose using hierarchical attention in the encoder-decoder system with three levels of information from the interleaved text (posts, phrases, and words) rather than the traditional two levels of posts and words (Nallapati et al., 2017, 2016; Tan et al., 2017; Cheng and Lapata, 2016).", "cite_spans": [ { "start": 214, "end": 237, "text": "(Nallapati et al., 2017", "ref_id": "BIBREF18" }, { "start": 238, "end": 263, "text": "(Nallapati et al., , 2016", "ref_id": "BIBREF19" }, { "start": 264, "end": 281, "text": "Tan et al., 2017;", "ref_id": "BIBREF24" }, { "start": 282, "end": 305, "text": "Cheng and Lapata, 2016)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The remainder of the paper is structured as follows. In Section 2, we discuss related work. In Section 3, we describe our hierarchical seq2seq model in detail. In Section 4, we describe the synthetic data creation algorithm. In Section 5, we describe and discuss the experiments. In Section 6, we present our conclusions. (2018) each designed a system that summarizes posts in multi-party conversations in order to provide readers with an overview of the discussed matters. They broadly follow the same two-step approach: cluster the posts and then extract a summary from each cluster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "There are two kinds of summarization: abstractive and extractive. In abstractive summarization, the model utilizes a corpus-level vocabulary and generates novel sentences as the summary, while extractive models extract or rearrange source words to form the summary. Abstractive models based on neural sequence-to-sequence (seq2seq) learning (Rush et al., 2015) were shown to generate summaries with higher ROUGE scores than feature-based abstractive models. Li et al. (2015) proposed an encoder-decoder (auto-encoder) model that utilizes a hierarchy of networks, word-to-word followed by sentence-to-sentence. Their model is better at capturing the underlying structure than a vanilla sequential encoder-decoder (seq2seq) model. Krause et al. (2017) and Jing et al. (2018) showed that multi-sentence captioning of an image through a hierarchical Recurrent Neural Network (RNN), topic-to-topic followed by word-to-word, is better than seq2seq. These works suggest that a hierarchical decoder, thread-to-thread followed by word-to-word, may intrinsically disentangle the posts and, therefore, generate more appropriate summaries.", "cite_spans": [ { "start": 331, "end": 350, "text": "(Rush et al., 2015)", "ref_id": "BIBREF20" }, { "start": 448, "end": 464, "text": "Li et al. (2015)", "ref_id": "BIBREF13" }, { "start": 717, "end": 737, "text": "Krause et al. (2017)", "ref_id": "BIBREF11" }, { "start": 742, "end": 760, "text": "Jing et al. (2018)", "ref_id": "BIBREF9" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Integration of attention into a seq2seq model (Bahdanau et al., 2014) led to further advances in abstractive summarization (Nallapati et al., 2016; Chopra et al., 2016). Nallapati et al. (2016) devised a hierarchical attention mechanism for a seq2seq model, where two levels of attention distributions over the source, i.e., sentence and word, are computed at every step of word decoding. Based on the sentence attentions, the word attentions are rescaled. Our hierarchical attention is more intuitive: it computes post(sentence)-level and phrase-level attentions only once for every new summary sentence, and it is trained end-to-end.", "cite_spans": [ { "start": 46, "end": 69, "text": "(Bahdanau et al., 2014)", "ref_id": "BIBREF2" }, { "start": 126, "end": 150, "text": "(Nallapati et al., 2016;", "ref_id": "BIBREF19" }, { "start": 151, "end": 171, "text": "Chopra et al., 2016)", "ref_id": "BIBREF6" }, { "start": 174, "end": 197, "text": "Nallapati et al. (2016)", "ref_id": "BIBREF19" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Semi-supervised learning has recently gained popularity as it helps train the parameters of large models without any labeled training data. Researchers have pre-trained masked language models, in which only an encoder is used to reconstruct the text, e.g., BERT (Devlin et al., 2018). Liu and Lapata (2019) used BERT as the seq2seq encoder and showed improved performance on several abstractive summarization tasks. Similarly, researchers have published pre-trained seq2seq models that use a different semi-supervised learning technique, where a seq2seq model learns to reconstruct the original text, e.g., BART (Lewis et al., 2019) and MASS (Song et al., 2019). In this work, we rely on transfer learning and demonstrate that by pretraining with appropriate interleaved text data, a seq2seq model readily transfers to a new domain with just a few examples.", "cite_spans": [ { "start": 253, "end": 274, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF8" }, { "start": 277, "end": 298, "text": "Liu and Lapata (2019)", "ref_id": "BIBREF15" }, { "start": 601, "end": 621, "text": "(Lewis et al., 2019)", "ref_id": "BIBREF12" }, { "start": 631, "end": 650, "text": "(Song et al., 2019)", "ref_id": "BIBREF23" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "Our hierarchical encoder (see the left-hand side of Figure 1) is based on Nallapati et al. (2017), where the word-to-word and post-to-post encoders are bidirectional LSTMs. The word-to-word BiLSTM encoder ($E^{w2w}$) runs over the word embeddings of post $P_i$ and generates a set of hidden representations, $h^{E_{w2w}}_{i,0}, \ldots, h^{E_{w2w}}_{i,p}$, of $d$ dimensions.", "cite_spans": [ { "start": 70, "end": 93, "text": "Nallapati et al. (2017)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 30, "end": 38, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "The average pooled value of the word-to-word representations of post $P_i$, i.e., $\frac{1}{p} \sum_{j=0}^{p} h^{E_{w2w}}_{i,j}$, is input to the post-to-post BiLSTM encoder ($E^{p2p}$), which then generates a set of representations, $h^{E_{p2p}}_{0}, \ldots, h^{E_{p2p}}_{n}$, corresponding to the posts. (Figure 1: the interleaved posts are encoded hierarchically, word-to-word ($E^{w2w}$) followed by post-to-post ($E^{p2p}$); on the right, summaries are generated hierarchically, thread-to-thread ($D^{t2t}$) followed by word-to-word ($D^{w2w}$).)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" },
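A minimal PyTorch-style sketch of the two-level encoder just described (word-to-word BiLSTM over each post, average pooling per post, post-to-post BiLSTM over the channel); the class name, embedding size, and hidden size d are illustrative assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class HierarchicalEncoder(nn.Module):
    """Word-to-word BiLSTM over each post, average-pooled into one vector
    per post, followed by a post-to-post BiLSTM over the channel."""
    def __init__(self, vocab_size: int, emb_dim: int = 128, d: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.word_enc = nn.LSTM(emb_dim, d, batch_first=True, bidirectional=True)
        self.post_enc = nn.LSTM(2 * d, d, batch_first=True, bidirectional=True)

    def forward(self, channel: torch.Tensor):
        # channel: (n posts, p words) of token ids for one interleaved channel.
        emb = self.embed(channel)            # (n, p, emb_dim)
        W, _ = self.word_enc(emb)            # (n, p, 2d) word-level states
        pooled = W.mean(dim=1).unsqueeze(0)  # (1, n, 2d) one vector per post
        P, _ = self.post_enc(pooled)         # (1, n, 2d) post-level states
        return W, P.squeeze(0)               # shapes n x p x 2d and n x 2d

enc = HierarchicalEncoder(vocab_size=1000)
W, P = enc(torch.randint(0, 1000, (5, 12)))  # a toy channel: 5 posts, 12 tokens each
```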
{ "text": "Overall, for a given channel $C$, the output representations of the word-to-word encoder, $\mathbf{W}$, and of the post-to-post encoder, $\mathbf{P}$, have dimensions $n \times p \times 2d$ and $n \times 2d$, respectively. The hierarchical decoder has two uni-directional LSTM decoders, thread-to-thread and word-to-word (see the right-hand side of Figure 1).", "cite_spans": [], "ref_spans": [ { "start": 264, "end": 272, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "At step $k$ of the thread-to-thread decoder ($D^{t2t}$), we compute the elements of the post-level attention as $\gamma^{k}_{i} = \sigma(\mathrm{attn}_{\gamma}(h^{D_{t2t}}_{k-1}, P_{i}))$, $i \in \{1, \ldots, n\}$, where $\mathrm{attn}_{\gamma}$ aligns the current thread decoder state vector $h^{D_{t2t}}_{k-1}$ to the post vectors of $\mathbf{P}$. A phrase is a short sequence of words in a sentence/post. Phrases in interleaved texts are analogous to visual patterns in images, and therefore attending to phrases is more relevant for thread recognition than attending to whole posts. Thus, we add phrase-level attentions that focus on words in a channel and are responsible for disentangling the threads. At step $k$ of the thread decoder, we also compute a sequence of attention weights, $\beta^{k} = \beta^{k}_{0,0}, \ldots, \beta^{k}_{n,p}$, corresponding to the set of encoded word representations, $h^{E_{w2w}}_{0,0}, \ldots, h^{E_{w2w}}_{n,p}$, as $\beta^{k}_{i,j} = \sigma(\mathrm{attn}_{\beta}(h^{D_{t2t}}_{k-1}, a_{i,j}))$, where $a_{i,j} = \mathrm{add}(W_{i,j}, P_{i})$, $i \in \{1, \ldots, n\}$, $j \in \{1, \ldots, p\}$. Here, $\mathrm{add}$ aligns a post representation to its word representations and performs element-wise addition, and $\mathrm{attn}_{\beta}$ maps the current thread decoder state $h^{D_{t2t}}_{k-1}$ and the vector $a_{i,j}$ to a scalar value. Then, we use the post-level attention $\gamma^{k}$ to rescale the sequence of attention weights $\beta^{k}$ and obtain the phrase-level attentions $\tilde{\beta}^{k}$ as $\tilde{\beta}^{k}_{i,j} = \beta^{k}_{i,j} \cdot \gamma^{k}_{i}$.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" },
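A minimal sketch of the post-level and phrase-level attentions and their rescaling, assuming simple dot-product scoring in place of the learned attn_gamma and attn_beta functions (which the text does not fully specify); shapes follow the notation above.

```python
import torch

def thread_step_attention(h_dec, W, P):
    """Compute post-level (gamma) and rescaled phrase-level (beta_tilde)
    attentions for one thread-decoder step.
    h_dec: (2d,) previous thread-decoder state (assumed to match the encoder size);
    W: (n, p, 2d) word representations; P: (n, 2d) post representations."""
    gamma = torch.sigmoid(P @ h_dec)              # (n,) post-level attention
    a = W + P.unsqueeze(1)                        # add(): broadcast each post vector onto its words
    beta = torch.sigmoid(a @ h_dec)               # (n, p) word/phrase scores
    beta_tilde = beta * gamma.unsqueeze(1)        # rescale by the post-level attention
    context = (beta_tilde.unsqueeze(-1) * W).sum(dim=(0, 1))  # weighted word representation
    return gamma, beta_tilde, context
```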
{ "text": "A weighted representation of the words (the crossed blue circle in Figure 1), $\sum_{i=1}^{n} \sum_{j=1}^{p} \tilde{\beta}^{k}_{i,j} W_{i,j}$, is used as an input to compute the next state of the thread-to-thread decoder, $D^{t2t}$. Additionally, we use the last hidden state $h^{D_{w2w}}_{k-1,q}$ of the word-to-word decoder LSTM ($D^{w2w}$) of the previously generated summary sentence as the second input to $D^{t2t}$. The motivation is to provide information about the previous sentence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "The current state $h^{D_{t2t}}_{k}$ is passed through a single-layer feedforward network $g$, and a distribution over STOP=1 and CONTINUE=0 is computed: $p^{STOP}_{k} = \sigma(g(h^{D_{t2t}}_{k}))$. In Figure 1, this process is depicted by a yellow circle. The thread-to-thread decoder keeps decoding until $p^{STOP}_{k}$ exceeds 0.5.", "cite_spans": [], "ref_spans": [ { "start": 39, "end": 47, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "Additionally, the new state $h^{D_{t2t}}_{k}$ and the inputs to $D^{t2t}$ at that step are passed through a two-layer feedforward network, $r$, followed by a dropout layer to compute the thread representation $s_{k}$. Given a thread representation $s_{k}$, the word-to-word decoder, a unidirectional attentional LSTM ($D^{w2w}$), generates a summary for the thread; see the right-hand side of Figure 1. Our word-to-word decoder is based on Bahdanau et al. (2014).", "cite_spans": [ { "start": 413, "end": 435, "text": "Bahdanau et al. (2014)", "ref_id": "BIBREF2" } ], "ref_spans": [ { "start": 365, "end": 373, "text": "Figure 1", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "At step $l$ of the word-to-word decoding of the summary of thread $k$, we compute the elements of the word-level attention, i.e., $\alpha^{k,l}_{i,\cdot}$; we refer to Bahdanau et al. (2014) for further details. However, we use the phrase-level attentions to rescale the word-level attention as $\tilde{\alpha}^{k,l}_{i,j} = \mathrm{norm}(\tilde{\beta}^{k}_{i,j} \times \alpha^{k,l}_{i,j})$, where $\mathrm{norm}$ (softmax) renormalizes the values. Thus, in contrast to the popular two-level hierarchical attention (Nallapati et al., 2016; Cheng and Lapata, 2016; Tan et al., 2017), we have three levels of hierarchical attention, each with its own responsibility, coordinated through the rescaling operations.", "cite_spans": [ { "start": 414, "end": 438, "text": "(Nallapati et al., 2016;", "ref_id": "BIBREF19" }, { "start": 439, "end": 462, "text": "Cheng and Lapata, 2016;", "ref_id": "BIBREF5" }, { "start": 463, "end": 480, "text": "Tan et al., 2017)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Model", "sec_num": "3" }, { "text": "$\sum_{k=1}^{m} \sum_{l=1}^{q} \log p_{\theta}(y_{k,l} \mid w_{k,\cdot}
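This copy of the section breaks off in the training-loss equation above. As a loose, hypothetical sketch of the stop-controlled thread-to-thread decoding loop described earlier (the LSTM cell, stop gate g, and thread-representation network r are stand-ins, and the previous-sentence input to the thread decoder is omitted):

```python
import torch
import torch.nn as nn

class ThreadController(nn.Module):
    """Toy thread-to-thread loop: update a thread state from the attention
    context, emit a thread representation s_k, and stop once the predicted
    stop probability exceeds 0.5, as in the description above."""
    def __init__(self, d: int = 512, max_threads: int = 10):
        super().__init__()
        self.cell = nn.LSTMCell(2 * d, d)   # input: weighted word representation (2d)
        self.stop_gate = nn.Linear(d, 1)    # g: single-layer feedforward stop predictor
        self.to_thread = nn.Sequential(     # r: two-layer network + dropout -> s_k
            nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d), nn.Dropout(0.1))
        self.max_threads = max_threads

    def forward(self, context_fn):
        h = c = torch.zeros(1, self.cell.hidden_size)
        thread_reps = []
        for _ in range(self.max_threads):
            ctx = context_fn(h.squeeze(0)).unsqueeze(0)  # weighted word representation for this step
            h, c = self.cell(ctx, (h, c))
            thread_reps.append(self.to_thread(h))        # s_k, fed to the word-to-word decoder
            if torch.sigmoid(self.stop_gate(h)).item() > 0.5:
                break                                    # p_STOP > 0.5 -> stop decoding threads
        return thread_reps

# Usage with a dummy context function returning a 2d-dimensional vector:
reps = ThreadController(d=512)(lambda h: torch.randn(1024))
```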