{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:33:47.930675Z" }, "title": "Neural Abstractive Multi-Document Summarization: Hierarchical or Flat Structure?", "authors": [ { "first": "Ye", "middle": [], "last": "Ma", "suffix": "", "affiliation": { "laboratory": "", "institution": "Xi'an Jiaotong-Liverpool University", "location": { "postCode": "215028", "settlement": "Suzhou", "country": "China" } }, "email": "ye.ma@xjtlu.edu.cn" }, { "first": "Lu", "middle": [], "last": "Zong", "suffix": "", "affiliation": { "laboratory": "", "institution": "Xi'an Jiaotong-Liverpool University", "location": { "postCode": "215028", "settlement": "Suzhou", "country": "China" } }, "email": "lu.zong@xjtlu.edu.cn" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "With regards to WikiSum (Liu et al., 2018b) that empowers applicative explorations of Neural Multi-Document Summarization (MDS) to learn from large scale dataset, this study develops two hierarchical Transformers (HT) that describe both the cross-token and crossdocument dependencies, at the same time allow extended length of input documents. By incorporating word-and paragraph-level multihead attentions in the decoder based on the parallel and vertical architectures, the proposed parallel and vertical hierarchical Transformers (PHT &VHT) generate summaries utilizing context-aware word embeddings together with static and dynamics paragraph embeddings, respectively. A comprehensive evaluation is conducted on WikiSum to compare PHT &VHT with established models and to answer the question whether hierarchical structures offer more promising performances than flat structures in the MDS task. The results suggest that our hierarchical models generate summaries of higher quality by better capturing crossdocument relationships, and save more memory spaces in comparison to flat-structure models. Moreover, we recommend PHT given its practical value of higher inference speed and greater memory-saving capacity. 1", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "With regards to WikiSum (Liu et al., 2018b) that empowers applicative explorations of Neural Multi-Document Summarization (MDS) to learn from large scale dataset, this study develops two hierarchical Transformers (HT) that describe both the cross-token and crossdocument dependencies, at the same time allow extended length of input documents. By incorporating word-and paragraph-level multihead attentions in the decoder based on the parallel and vertical architectures, the proposed parallel and vertical hierarchical Transformers (PHT &VHT) generate summaries utilizing context-aware word embeddings together with static and dynamics paragraph embeddings, respectively. A comprehensive evaluation is conducted on WikiSum to compare PHT &VHT with established models and to answer the question whether hierarchical structures offer more promising performances than flat structures in the MDS task. The results suggest that our hierarchical models generate summaries of higher quality by better capturing crossdocument relationships, and save more memory spaces in comparison to flat-structure models. Moreover, we recommend PHT given its practical value of higher inference speed and greater memory-saving capacity. 
1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "With the promising results achieved by neural abstractive summarization on single documents (See et al., 2017; Cao et al., 2018; Liu et al., 2018a; Gehrmann et al., 2018) , an increasing number of attempts are made to study abstractive multidocument summarization (MDS) using seq2seq models (Liu et al., 2018b; Lebanoff et al., 2018; Fabbri et al., 2019; Liu and Lapata, 2019) . Compared with the single-document summarization, multi-document summarization places challenges 1 https://github.com/yema2018/wiki_sum in two primary aspects, that is representing large source documents and capturing cross-document relationships. To address the former issue, Liu et al. (2018b) adopts a two-stage approach by first selecting a list of important paragraphs from all documents in an extractive framework. Then a modified language model based on the Transformer-decoder with memory compressed attention (T-DMCA) is developed to conduct abstractive summarization after concatenating the extracted paragraphs to a flat sequence. Although the proposed flat structure of T-DMCA demonstrates both theoretical and practical soundness to learn long-term dependencies, it fails to implant the cross-document relationship in its summaries. On the other hand, the encoderdecoder structure that allows hierarchical inputs of multiple documents offers not only another solution to the long-text summarization problem (Li et al., 2018; Zhang et al., 2019; Liu and Lapata, 2019) but also allows cross-document information exchange in the produced summaries. In particular, Liu and Lapata (2019) proposes a Hierarchical Transformer with local and global encoder layers to represent cross-token and cross-paragraph information, which are both utilized later to enrich token embeddings. Summaries are then generated based on a vanilla Transformer (Vaswani et al., 2017) by concatenating enriched token embeddings from different documents to a flat sequence. Such Hierarchical Transformer though captures crossdocument relationships, the essentially-flat Transformer it adopts fails to learn dependencies of sequences longer than 2000 tokens according to Liu et al. (2018b) .", "cite_spans": [ { "start": 92, "end": 110, "text": "(See et al., 2017;", "ref_id": "BIBREF21" }, { "start": 111, "end": 128, "text": "Cao et al., 2018;", "ref_id": "BIBREF2" }, { "start": 129, "end": 147, "text": "Liu et al., 2018a;", "ref_id": "BIBREF16" }, { "start": 148, "end": 170, "text": "Gehrmann et al., 2018)", "ref_id": "BIBREF8" }, { "start": 291, "end": 310, "text": "(Liu et al., 2018b;", "ref_id": "BIBREF17" }, { "start": 311, "end": 333, "text": "Lebanoff et al., 2018;", "ref_id": "BIBREF10" }, { "start": 334, "end": 354, "text": "Fabbri et al., 2019;", "ref_id": "BIBREF7" }, { "start": 355, "end": 376, "text": "Liu and Lapata, 2019)", "ref_id": "BIBREF18" }, { "start": 655, "end": 673, "text": "Liu et al. (2018b)", "ref_id": "BIBREF17" }, { "start": 1398, "end": 1415, "text": "(Li et al., 2018;", "ref_id": "BIBREF16" }, { "start": 1416, "end": 1435, "text": "Zhang et al., 2019;", "ref_id": "BIBREF25" }, { "start": 1436, "end": 1457, "text": "Liu and Lapata, 2019)", "ref_id": "BIBREF18" }, { "start": 1552, "end": 1573, "text": "Liu and Lapata (2019)", "ref_id": "BIBREF18" }, { "start": 1823, "end": 1845, "text": "(Vaswani et al., 2017)", "ref_id": null }, { "start": 2130, "end": 2148, "text": "Liu et al. 
(2018b)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we develop two novel hierarchical Transformers to address both the text-length and cross-document linkage problems in MDS. By introducing the word-level and paragraph-level multi-head attention mechanisms, our models are designed to learn both cross-token and cross- document relationships. The word-and paragraphlevel context vectors are then jointly used to generate target sequences in order to abandon the flat structure, thus to mitigate the long-dependency problem. In detail, both of the proposed hierarchical architectures are based on the Transformer encoder-decoder model (Vaswani et al., 2017) , with context-aware word embeddings obtained from a shared encoder and cross-token linkages described by the word-level multi-head attention mechanism in the decoder. The difference lies in the way that the document-level information is handled. Based on the static 2 paragraph embeddings computed from the context-aware word embeddings, the parallel hierarchical Transformer (PHT) models crossdocument relationships with paragraph-level multihead attention parallel to the word-level multi-head attention. The paragraph attentions are then used to normalize the word attentions. On the other hand, the vertical hierarchical Transformer (VHT) stacks the paragraph-level attention layer on top of the word-level attention layer in order to learn the latent relationship between paragraphs with dynamic 3 paragraph embeddings from the previous layer.", "cite_spans": [ { "start": 597, "end": 619, "text": "(Vaswani et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "To evaluate the performance of the proposed models as well as to compare flat and hierarchical structures in the MDS task, we select several strong baselines covering abstractive models of flat strucuture (T-DMCA (Liu et al., 2018b) and Transformer-XL (Dai et al., 2019) ) and of hierarchical structure (Liu's hierachical Transformer (Liu and Lapata, 2019) ). A systematic analysis is conducted on the WikiSum dataset according to four criteria including the models' abilities of capturing cross-document relationships, ROUGE evaluation, human evaluation and computational efficiency. The results show that PHT&VHT outperform other baselines significantly with memory space.", "cite_spans": [ { "start": 213, "end": 232, "text": "(Liu et al., 2018b)", "ref_id": "BIBREF17" }, { "start": 252, "end": 270, "text": "(Dai et al., 2019)", "ref_id": "BIBREF5" }, { "start": 303, "end": 356, "text": "(Liu's hierachical Transformer (Liu and Lapata, 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Neural multi-document summarization Regarding to extractive models, neural networks are the most widely-used approach to model in-and cross-document knowledge with the objective to minimize the distance between the selected sentence set and the gold summary (Cao et al., 2017; Ma et al., 2016; Nallapati et al., 2016; Yasunaga et al., 2017) . One representative study (Yasunaga et al., 2017) is to construct a graph of the document cluster based on the similarities between sentences. Graph Neural Network (GNN) (Kipf and Welling, 2016) is then employed to select salient sentences. Argued by Liu and Lapata (2019) , self-attention is a better mechanism to learn the latent dependency among documents than GNNs. 
As for abstractive models, studies tend to extract important paragraphs from different documents followed by a abstractive seq2seq model to generate summaries (Liu et al., 2018b; Liu and Lapata, 2019; Fabbri et al., 2019) . Additionally, Chu and Liu (2019) adopts an auto-encoder model to conduct MDS in an unsupervised way.", "cite_spans": [ { "start": 258, "end": 276, "text": "(Cao et al., 2017;", "ref_id": "BIBREF1" }, { "start": 277, "end": 293, "text": "Ma et al., 2016;", "ref_id": "BIBREF19" }, { "start": 294, "end": 317, "text": "Nallapati et al., 2016;", "ref_id": "BIBREF20" }, { "start": 318, "end": 340, "text": "Yasunaga et al., 2017)", "ref_id": "BIBREF24" }, { "start": 368, "end": 391, "text": "(Yasunaga et al., 2017)", "ref_id": "BIBREF24" }, { "start": 593, "end": 614, "text": "Liu and Lapata (2019)", "ref_id": "BIBREF18" }, { "start": 871, "end": 890, "text": "(Liu et al., 2018b;", "ref_id": "BIBREF17" }, { "start": 891, "end": 912, "text": "Liu and Lapata, 2019;", "ref_id": "BIBREF18" }, { "start": 913, "end": 933, "text": "Fabbri et al., 2019)", "ref_id": "BIBREF7" }, { "start": 950, "end": 968, "text": "Chu and Liu (2019)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "Hierarchical neural network Hierarchical neural document models are applied in various fields of NLP such as document auto-encoder or text classification . In the area of abstractive summarization, Li et al. (2018) extends a hierarchical RNN encoderdecoder (Lin et al., 2015) with the hybrid sentenceword attention. Instead of trainable attention machanisms, Fabbri et al. (2019) hires a hierarchical RNN with Maximal Marginal Relevance (MMR) (Carbonell and Goldstein, 1998) to represent the relationship between sentences. Liu and Lapata (2019) proposes a hierarchical Transformer by incorporating a global self-attention to represent cross-document relationships. Moreover, Zhang et al. (2019) constructs a hierarchical BERT (Devlin et al., 2018) to learn the context relationships among sentences by using other sentences to generate the masked sentence.", "cite_spans": [ { "start": 257, "end": 275, "text": "(Lin et al., 2015)", "ref_id": "BIBREF15" }, { "start": 359, "end": 379, "text": "Fabbri et al. (2019)", "ref_id": "BIBREF7" }, { "start": 443, "end": 474, "text": "(Carbonell and Goldstein, 1998)", "ref_id": "BIBREF3" }, { "start": 524, "end": 545, "text": "Liu and Lapata (2019)", "ref_id": "BIBREF18" }, { "start": 727, "end": 748, "text": "(Devlin et al., 2018)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Related work", "sec_num": "2" }, { "text": "This paper proposes two hierarchical Transformers with parallel & vertical architectures, respectively. Section 3.1 explicitly explains the construction of the parallel hierarchical Transformer (PHT) and its application in MDS, whereas Section 3.2 places emphasis on explaining the structural differences between the vertical hierarchical Transformer (VHT) and PHT.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Hierarchical Transformer", "sec_num": "3" }, { "text": "As shown in Figure 2 , the PHT encoder is shared by all paragraphs and consist of two major units, i.e. the transformer encoder and the Multi-head Attention Pooling layer, to obtain the token-and paragraph-embeddings. 
To be specific, contextaware word embeddings are first produced as the output of the transformer encoder based on the summation of word embeddings W and fixed positional encodings (Vaswani et al., 2017) .", "cite_spans": [ { "start": 398, "end": 420, "text": "(Vaswani et al., 2017)", "ref_id": null } ], "ref_spans": [ { "start": 12, "end": 20, "text": "Figure 2", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Encoder", "sec_num": "3.1.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "C p = T ransE(W p + E p )", "eq_num": "(1)" } ], "section": "Encoder", "sec_num": "3.1.1" }, { "text": "where C p \u2208 R n\u00d7d denotes context-aware word embeddings in the paragraph p and n is the paragraph length. We select the fixed encoding method rather than other learning models given that the former has the capacity to deal with sequences of arbitrary length. The context-aware word embedding is then used to generate paragraph embeddings as well as being a part of inputs to the PHT decoder. As the second step, the parallel architecture generates additional static paragraph embeddings to model cross-document relationships from the multihead attention pooling:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder", "sec_num": "3.1.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "head i p = HeadSplit(C p W 1 ) (2) \u03c6 i p = (Sof tmax(head i p W 2 )) T head i p (3) \u03c6 p = W 3 [\u03c6 0 p ; \u03c6 1 p ; \u2022 \u2022 \u2022]", "eq_num": "(4)" } ], "section": "Encoder", "sec_num": "3.1.1" }, { "text": "\u03c6 p := layerN orm(\u03c6 p + F F N (\u03c6 p )) (5)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder", "sec_num": "3.1.1" }, { "text": "where", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder", "sec_num": "3.1.1" }, { "text": "W 1 \u2208 R d\u00d7d , W 2 \u2208 R d head \u00d71", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder", "sec_num": "3.1.1" }, { "text": "and W 3 \u2208 R d\u00d7d are linear transformation parameters, head i p \u2208 R n\u00d7d head and \u03c6 i p \u2208 R d head denote the i th attention head and paragraph embedding. These head embeddings are concatenated and fed to a two-layer feed forward network (FFN) with Relu activation function after linear transformation. The paragraph embedding is another input to the decoder, together with the context-aware word embedding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Encoder", "sec_num": "3.1.1" }, { "text": "The PHT decoder accepts three classes of inputs, namely the target summary, context-aware word embeddings in the p th paragraph C p \u2208 R n\u00d7d where n is the length of the paragraph, and static paragraph embeddings \u03a6 \u2208 R m\u00d7d where m is the number of paragraphs. Let X 1 \u2208 R k\u00d7d denote the output of part I where k is the length of target sequence or the number of time steps. 
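Before going further into the decoder, the encoder-side multi-head attention pooling of Eqs. (2)-(5) can be made concrete with a short sketch. The following is an illustrative PyTorch re-implementation, not the authors' released code: class and variable names such as `MultiHeadAttentionPooling` are ours, and the hyper-parameters (d = 256, 4 heads, 1024 FFN units) are taken from the paper's training configuration.

```python
import torch
import torch.nn as nn

class MultiHeadAttentionPooling(nn.Module):
    """Sketch of Eqs. (2)-(5): pool the context-aware word embeddings C_p
    (n x d) of one paragraph into a static paragraph embedding of size d."""

    def __init__(self, d_model: int = 256, n_heads: int = 4, d_ff: int = 1024):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.w1 = nn.Linear(d_model, d_model, bias=False)        # W1 in Eq. (2)
        self.w2 = nn.Linear(self.d_head, 1, bias=False)          # W2 in Eq. (3)
        self.w3 = nn.Linear(d_model, d_model, bias=False)        # W3 in Eq. (4)
        self.ffn = nn.Sequential(                                 # FFN in Eq. (5)
            nn.Linear(d_model, d_ff), nn.ReLU(), nn.Linear(d_ff, d_model))
        self.norm = nn.LayerNorm(d_model)

    def forward(self, c_p: torch.Tensor) -> torch.Tensor:
        # c_p: (n, d) context-aware word embeddings of one paragraph
        n, _ = c_p.shape
        heads = self.w1(c_p).view(n, self.n_heads, self.d_head)  # Eq. (2): HeadSplit(C_p W1)
        scores = torch.softmax(self.w2(heads), dim=0)             # attention over the n tokens
        phi = (scores * heads).sum(dim=0)                         # Eq. (3): per-head pooled embedding
        phi = self.w3(phi.reshape(-1))                            # Eq. (4): concatenate heads, project
        return self.norm(phi + self.ffn(phi))                     # Eq. (5): residual FFN + layer norm


if __name__ == "__main__":
    pool = MultiHeadAttentionPooling()
    c_p = torch.randn(100, 256)      # one ranked paragraph of 100 tokens
    print(pool(c_p).shape)           # torch.Size([256])
```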
Note that both the word embedding and vocabulary in the decoder part I are shared with the encoder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.1.2" }, { "text": "Paragraph embeddings are added with the ranking encoding R generated by the positional encoding function (Vaswani et al., 2017 ): 4", "cite_spans": [ { "start": 105, "end": 126, "text": "(Vaswani et al., 2017", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u03a6 := \u03a6 + R", "eq_num": "(6)" } ], "section": "Decoder", "sec_num": "3.1.2" }, { "text": "Different from the token-level ranking encoding (Liu and Lapata, 2019) , we intend to incorporate the positional information of paragraphs to their embeddings.", "cite_spans": [ { "start": 48, "end": 70, "text": "(Liu and Lapata, 2019)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.1.2" }, { "text": "The PHT decoder consists of three parts. Similar to a vanilla Transformer (Vaswani et al., 2017) , the first and last parts of the PHT decoder are the masked multi-head attention and the feed forward network, whereas the second part includes two parallel multi-head attentions to capture the inter-word and inter-paragraph relations.", "cite_spans": [ { "start": 74, "end": 96, "text": "(Vaswani et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.1.2" }, { "text": "Paragraph-level multi-head attention: This self-attention mechanism is to create paragraphlevel context vectors that represent the latent crossparagraph relationships. The query is the output of part I: X 1 , whilst the key and value are static paragraph embeddings \u03a6:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.1.2" }, { "text": "X para , A para = M ultiHead(X 1 , \u03a6, \u03a6), (7)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.1.2" }, { "text": "where X para \u2208 R k\u00d7d is the paragraph-level context vector and A para \u2208 R k\u00d7m denotes the attention weights of paragraphs 5 . Both X para and A para are comprised of representations of all time steps.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.1.2" }, { "text": "Word-level multi-head attention: This shared self-attention mechanism is to output word-level context vectors which represent the cross-token dependency for each paragraph. Since there are m paragraphs, so the mechanism is implemented m times at each time step. 
The query of self attention is X 1 , whilst the key and value are context-aware word embeddings C p .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.1.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X word p = M ultiHead(X 1 , C p , C p ),", "eq_num": "(8)" } ], "section": "Decoder", "sec_num": "3.1.2" }, { "text": "where X word p \u2208 R k\u00d7d denotes the word-level context vectors of all time steps in the p th paragraph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.1.2" }, { "text": "The outputs X word \u2208 R k\u00d7d\u00d7m are integrated by first being normalized by paragraph attentions A para , then propagated to subsequent layers after summation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.1.2" }, { "text": "X int = X word A para (9)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.1.2" }, { "text": "where the dimension of A para is expanded to R k\u00d7m\u00d71 and matrices are multiplied in the last two dimensions so X int \u2208 R k\u00d7d . The output of part II: X 2 is written as:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.1.2" }, { "text": "X 2 = LayerN orm(X 1 + X para + X int ). (10)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.1.2" }, { "text": "With the outputs of part II, we are able to proceed to part III and compute the final probability distributions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Decoder", "sec_num": "3.1.2" }, { "text": "The key difference between the parallel and the vertical architectures is the latter only passes contextaware word embeddings from the encoder to decoder part II without additional paragraph embeddings. Instead, the cross-document relationships in this architecture are modeled based on wordlevel context vectors by stacking the paragraphlevel multi-head attention vertically on top of the word-level multi-head attention.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vertical hierarchical Transformer", "sec_num": "3.2" }, { "text": "Vertical paragraph-level multi-head attention: Since the word-level context vectors X word t \u2208 R m\u00d7d are the weighted summation of token embeddings in the paragraph at the t th time step, the VHT decoder regards them as dynamic paragraph embeddings, opposite to the static paragraph embeddings in PHT. 
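Before detailing the vertical variant, a minimal sketch of the PHT decoder's part II (Eqs. 7-10) illustrates how the parallel paragraph-level and word-level attentions are combined. This is an illustrative re-implementation under assumed names (`ParallelAttentionBlock`) and the paper's hyper-parameters, built on `nn.MultiheadAttention`; batching and the masked self-attention of part I are omitted for brevity.

```python
import torch
import torch.nn as nn

class ParallelAttentionBlock(nn.Module):
    """Sketch of PHT decoder part II (Eqs. 7-10): paragraph-level and word-level
    multi-head attention run in parallel over the part-I output X1, and the
    word-level context vectors are weighted by the paragraph attention before
    the residual connection. (The vertical variant of Section 3.2 instead
    attends over per-step word-level context vectors treated as dynamic
    paragraph embeddings.)"""

    def __init__(self, d_model: int = 256, n_heads: int = 4):
        super().__init__()
        self.para_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.word_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, x1, phi, c):
        # x1:  (k, d)    output of decoder part I (k target positions)
        # phi: (m, d)    static paragraph embeddings (with ranking encodings added)
        # c:   (m, n, d) context-aware word embeddings of the m paragraphs
        m = phi.shape[0]

        # Eq. (7): paragraph-level attention; a_para (k, m) are the paragraph
        # weights, averaged over heads by nn.MultiheadAttention by default.
        x_para, a_para = self.para_attn(x1.unsqueeze(0), phi.unsqueeze(0), phi.unsqueeze(0))
        x_para, a_para = x_para.squeeze(0), a_para.squeeze(0)

        # Eq. (8): shared word-level attention, applied once per paragraph.
        x_word = torch.stack(
            [self.word_attn(x1.unsqueeze(0), c[p].unsqueeze(0), c[p].unsqueeze(0))[0].squeeze(0)
             for p in range(m)],
            dim=-1)                                                # (k, d, m)

        # Eq. (9): integrate word-level context vectors with paragraph weights.
        x_int = torch.bmm(x_word, a_para.unsqueeze(-1)).squeeze(-1)  # (k, d)

        # Eq. (10): residual connection and layer normalization.
        return self.norm(x1 + x_para + x_int)


if __name__ == "__main__":
    block = ParallelAttentionBlock()
    x2 = block(torch.randn(140, 256), torch.randn(30, 256), torch.randn(30, 100, 256))
    print(x2.shape)   # torch.Size([140, 256])
```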
According to Figure 3 , the dynamic paragraph embedding serves as the key and value of the vertical paragraph-level multihead attention after adding the ranking embeddings, and the query remains as the output of part I after separating in the time dimension, i.e., X 1 t \u2208 R 1\u00d7d .", "cite_spans": [], "ref_spans": [ { "start": 315, "end": 323, "text": "Figure 3", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Vertical hierarchical Transformer", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X word t := X word t + R,", "eq_num": "(11)" } ], "section": "Vertical hierarchical Transformer", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X para t = M ultiHead(X 1 t , X word t , X word t ),", "eq_num": "(12)" } ], "section": "Vertical hierarchical Transformer", "sec_num": "3.2" }, { "text": "where X para t \u2208 R 1\u00d7d are concatenated to X para \u2208 R k\u00d7d along time steps before passed to decoder part III with X 1 :", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vertical hierarchical Transformer", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "X 2 = LayerN orm(X 1 + X para ).", "eq_num": "(13)" } ], "section": "Vertical hierarchical Transformer", "sec_num": "3.2" }, { "text": "4 Experimental setup", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Vertical hierarchical Transformer", "sec_num": "3.2" }, { "text": "Data sparsity has been the bottleneck of Neural MDS models til the WikiSum dataset (Liu et al., 2018b) came along. In this study, we use the ranked version of WikiSum provided in Liu and Lapata (2019) , in which each sample contains a short title, 40 ranked paragraphs with a maximum length of 100 tokens as source inputs, and a target summary with an average length of 140 tokens. Consistent with Liu and Lapata (2019) , the dataset is split with 1,579,360 samples for training, 38,144 for validation and 38,205 for test. Subword tokenization (Bojanowski et al., 2017 ) is adopted to tokenize our vocabulary to 32,000 subwords to better solve unseen words.", "cite_spans": [ { "start": 83, "end": 102, "text": "(Liu et al., 2018b)", "ref_id": "BIBREF17" }, { "start": 179, "end": 200, "text": "Liu and Lapata (2019)", "ref_id": "BIBREF18" }, { "start": 398, "end": 419, "text": "Liu and Lapata (2019)", "ref_id": "BIBREF18" }, { "start": 544, "end": 568, "text": "(Bojanowski et al., 2017", "ref_id": "BIBREF0" } ], "ref_spans": [], "eq_spans": [], "section": "WikiSum dataset", "sec_num": "4.1" }, { "text": "We apply a dropout rate of 0.3 to the output of each sub-layer and a warm-up Adam optimizer (Vaswani et al., 2017) with 16,000 warm-up steps. Given the limited computing resources (one 2080Ti), we stack 3-layers of encoder-decoder in both of our hierarchical Transformers with 256 hidden units, 1024 units in the feed-forward network and 4 headers. To demonstrate that our model has the potential to stack, 1-layer models are trained for comparison. All parameters are randomly initialized including token embeddings. All multi-layer models are trained for approximately 600,000 steps, while single-layer models for approximately 300,000 steps. 
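The warm-up Adam optimizer mentioned above follows Vaswani et al. (2017): the learning rate rises linearly over the warm-up steps and then decays with the inverse square root of the step count. A minimal sketch with the paper's 16,000 warm-up steps and d_model = 256 is given below; the base scaling and the Adam constants are the standard Transformer choices and are assumptions here, not values taken from the released configuration.

```python
import torch

def noam_lr(step: int, d_model: int = 256, warmup: int = 16000) -> float:
    """Noam schedule from Vaswani et al. (2017): linear warm-up for `warmup`
    steps, then inverse-square-root decay."""
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)

# Hypothetical usage with a stand-in model; betas/eps are the common
# Transformer defaults and are an assumption.
model = torch.nn.Linear(256, 256)
optimizer = torch.optim.Adam(model.parameters(), lr=1.0, betas=(0.9, 0.998), eps=1e-9)
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda=noam_lr)

for step in range(1, 5):
    optimizer.step()     # after the usual loss.backward()
    scheduler.step()     # sets lr to noam_lr(step)
```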
Checkpoints are saved per 20,000 steps and the best-performing checkpoint on the validation set is used to generate the final summary.", "cite_spans": [ { "start": 92, "end": 114, "text": "(Vaswani et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Training configuration", "sec_num": "4.2" }, { "text": "During the inference, the beam size is set as 5 and the average length normalization is used. The beam search is terminated til the length exceeds 200. In addition, we disallow repetition of trigrams and block two tokens (except the comma) before the current step to prevent degeneration situations such as Mike is good at cooking and cooking.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Training configuration", "sec_num": "4.2" }, { "text": "We compare the proposed hierarchical Transformers with the following baselines of different modeling natures.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Baselines", "sec_num": "4.3" }, { "text": "Lead is an extractive model that extracts the top K tokens from the concatenated sequence, given that K is the length of the corresponding gold summary. We combine paragraphs in order and place the title at the beginning of the concatenated sequence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extractive model", "sec_num": null }, { "text": "Abstractive model with flat structure Flat Transformer (FT) is the vanilla Transformer encoder-decoder model (Vaswani et al., 2017) . In this study, We adopt a 3-layers Transformer and truncate the flat sequence to 1600 tokens.", "cite_spans": [ { "start": 109, "end": 131, "text": "(Vaswani et al., 2017)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Extractive model", "sec_num": null }, { "text": "T-DMCA (Liu et al., 2018b ) is a Transformerdecoder model that splits a concatenated sequence into segments, and uses a Memory Compressed Attention to exchange information among them. We construct this model with 3 layers and 256 hidden states. The top 3000 tokens are truncated as inputs.", "cite_spans": [ { "start": 7, "end": 25, "text": "(Liu et al., 2018b", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Extractive model", "sec_num": null }, { "text": "Transformer-XL (Dai et al., 2019 ) is a language model that excels in handling excessively long sequences. This model improves the vanilla Transformer-decoder with the recurrent mechanism and relative positional encoding. We use 512 memory length and disable the adaptive softmax, with other hyper-parameters and token length remained the same as T-DMCA.", "cite_spans": [ { "start": 15, "end": 32, "text": "(Dai et al., 2019", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Extractive model", "sec_num": null }, { "text": "Abstractive model with hierarchical structure Liu's Hierarchical Transformer (Liu's HT) (Liu and Lapata, 2019) uses a hierarchical structure to enrich tokens with information from other paragraphs before inputting to the flat Transformer. We use 3 local-attention layers and 3 global-attention layers introduced in Liu and Lapata (2019) . 
Since this model is essentially based on the flat Transformer where token length should not exceed 2000, concatenated sequences are truncated to 1600 tokens.", "cite_spans": [ { "start": 315, "end": 336, "text": "Liu and Lapata (2019)", "ref_id": "BIBREF18" } ], "ref_spans": [], "eq_spans": [], "section": "Extractive model", "sec_num": null }, { "text": "Parallel & Vertical Hierarchical Transformer (PHT/VHT) are models proposed in this paper. To verify that the models could be improved with deeper architectures, we train two 1-layer models to compare with the 3-layer models. We extract the top 30 paragraphs with 100 tokens per paragraph as inputs, and concatenate the title before the first paragraph.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extractive model", "sec_num": null }, { "text": "Cross-document relationships could be reflected by paragraph attentions. That is to say, if a model assigns higher attention weights to more important paragraphs and vice versa, the model is believed to have greater capacity of capturing cross-document relationships. To analytically assess the models' performance in this aspect, we use paragraph attentions of written summaries as the gold attention distribution, and its cosine similarity to the attention distribution of generated summaries as the evaluation metric. To model the paragraph attention of gold summaries, the normalized tf-idf similarities between the gold summary and each input paragraph are computed as the gold attention distribution. For non-hierarchical models, the summation of token weights in each paragraph are computed to indicate each paragraph's attention, whilst the hierarchical model returns the paragraph attention distribution directly from its paragraphlevel multi-head attention. Table 1 that hierarchical structures place significant improvements on the flat models in learning cross-document dependencies by assigning paragraph attentions in a way that is closer to the gold summaries. Moreover, VHT generates summaries of the greatest similarity 91.42% with the gold summaries, most likely due to its dynamic paragraph embedding architecture which allows more accurate representation of information that is continuously updated according to the changes of input targets.", "cite_spans": [], "ref_spans": [ { "start": 968, "end": 975, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "The ability of capturing cross-document relationships", "sec_num": "5.1" }, { "text": "In this section, we adopt a widely-used evaluation metrics ROUGE (Lin, 2004) to evaluate the MDS models. ROUGE-1 & -2 and ROUGE-L F 1 scores are reported in Table 2 assessing the informativeness and fluency of the summaries, respectively. As shown in Table 2 , the extractive model Lead exhibits overall inferior performance in comparison to the abstractive models, except that it produces a 0.11-higher ROUGE-L than the Flat Transformer. Although Liu's HT improves FT with a hierarchical structure, it fails to outperform the two extended flat models, i.e. T-DMCA and Transformer-XL, that are developed to learn with longer input of tokens. 
Moreover, T-DMCA and Transformer-XL, the two flat models based on the Transformer decoder, report comparable results in terms of the informativeness (ROUGE-1 & -2) , whilst the latter outperforms the former by 0.41 in terms of the fluency (ROUGE-L).", "cite_spans": [ { "start": 65, "end": 76, "text": "(Lin, 2004)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 157, "end": 164, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 251, "end": 258, "text": "Table 2", "ref_id": "TABREF1" }, { "start": 791, "end": 805, "text": "(ROUGE-1 & -2)", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "ROUGE evaluation", "sec_num": "5.2" }, { "text": "Further, the proposed hierarchical Transformers show promising ROUGE results. Profited from the pure hierarchical structure that enlarges the input length of tokens, PHT & VHT outperform Liu's HT in all domains of the ROUGE test. Moreover, the models' potential to be deepened is suggested by enhanced results of the 3-layer architecture over the 1-layer architecture. The ultimate 3-layer PHT & VHT surpass T-DMCA and Transformer-XL, the two flat models that also handle long input sequences of 3,000 tokens. Between the parallel and vertical architectures, PHT appears to be more informative in its summaries as it produces the highest ROUGE-1 & -2 among all models, whilst VHT is more fluent with the highest ROUGE-L.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "ROUGE evaluation", "sec_num": "5.2" }, { "text": "To provide a better comparison between the hierarchical and the flat structures, we select 4 representative models with the best ROUGE performances, namely T-DMCA & Transformer-XL (flat structure), and PHT & VHT (hierarchical structure). The human evaluation is divided into two parts.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Human evaluation", "sec_num": "5.3" }, { "text": "The first part is to score multi-document summaries from four perspectives, including (A) Informativeness (Does the summary include all important facts in the gold summary), (B) Fluency (Is the summary fluent and grammatically-correct), (C) Conciseness (Does the summary avoid repetition and redundancy), (D) Factual consistency (Does the summary avoid common sense mistakes such as wrong date, wrong location, or anything else against facts). We specify five levels ranging from Very poor (1) to Very good (5) to assess criteria (A)-(C), and three levels of Much better (2), Better (1), and Hard to score (0) to assess criteria (D). Twenty examples are randomly selected from generated summaries. As shown in Table 3 , both Parallel and Vertical Hierarchical Transformer bring significant improvements over T-DMCA and Transformer-XL in terms of informativeness, fluency and factual consistency, with the former being more fluent and the latter being more informative and fact-consistent 6 . In terms of conciseness, T-DMCA outperforms with a minor advantage in comparison to the other three models. In comparison to changing model architectures, it is believed that enlarging training data and using regularization rules in the inference are more effective in preventing repetitive generations.", "cite_spans": [], "ref_spans": [ { "start": 710, "end": 717, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Human evaluation", "sec_num": "5.3" }, { "text": "The second part of human evaluation is a side-byside preference test, which is comprised of thirty control groups of two sides. 
In each control group, Side A randomly places a summary generated by a flat model and side B places the corresponding summary generated by a hierarchical model. Assessors select their preferred side and briefly explain their reasons. Preference results show that the hierarchical class is approximately three times more likely to be chosen than the flat class, due to their overall accuracy and informativeness according to the assessors' comments. 6 It is interesting to note that the human evaluation suggests opposite results to the ROUGE test in terms of PHT&VHT's informativeness and fluency. The authors choose to place more trust on the quantitative measure, i.e. ROUGE, as it represents the quality of the entire sample rather than a limited segment of it.", "cite_spans": [ { "start": 577, "end": 578, "text": "6", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Human evaluation", "sec_num": "5.3" }, { "text": "We assess the computational efficiency of the abstractive models in three aspects, namely the memory usage, parameter size and validation speed. We uniformly hire the 3-layers architecture and 1600 input tokens. In the experimental process, we increase the batch size until out of memory in a 2080ti GPU, and the model with the maximum batch size occupies the lowest memory space. To measure the parameter size, we count the number of parameters in the neural network. Finally, we run each trained model in the validation set (38,144 samples), and the average time consumed in each checkpoint is used to evaluate the efficiency of forward-propagating in the model. According to Table 4 , the hierarchical structure (the second panel) appears to be overall more memory-saving than the flat structure (the first panel), with higher requirements on the parameters. On the other hand, models based on the Transformer-decoder, i.e. Transformer-decoder, T-DMCA and Transformer-XL, demonstrate absolute superiority in reducing the parameter size. For the speed of forward-propagating, Transformer-XL dominates due to its recurrent mechanism, whereas VHT performs the worst in this aspect indicating the model's slow inference speed. Between the two proposed models, PHT is proven to outperform VHT in both the memory usage and inference speed, due to its parallel, rather than sequential, computation of the word & paragraph-level attention mechanisms.", "cite_spans": [], "ref_spans": [ { "start": 678, "end": 685, "text": "Table 4", "ref_id": "TABREF3" } ], "eq_spans": [], "section": "Computational efficiency", "sec_num": "5.4" }, { "text": "This paper proposes two pure hierarchical Transformers for MDS, namely the Parallel & Vertical Hierarchical Transformers (PHT & VHT). We experimentally confirm that hierarchical structure improves the quality of generated summaries over flat structure by better capturing cross-document relationships, at the same time saves more memory space. 
Given the similar performance of the two proposed models, we recommend PHT over VHT due to its practical value of higher inference speed and memory-saving capacity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "5.5" }, { "text": "static means the embedding remains the same for different time steps in the decoder.3 dynamic means the embeddings are dynamic for different time steps in the decoder.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We directly use the ranked paragraphs provided byLiu and Lapata (2019) 5 In this paper, average pooling is adopted to compute the final attention from multi-head attentions", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "This work is supported by the XJTLU Key Programme Special Fund -Applied Technology (No. KSF-A-14).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Enriching word vectors with subword information", "authors": [ { "first": "Piotr", "middle": [], "last": "Bojanowski", "suffix": "" }, { "first": "Edouard", "middle": [], "last": "Grave", "suffix": "" }, { "first": "Armand", "middle": [], "last": "Joulin", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Mikolov", "suffix": "" } ], "year": 2017, "venue": "Transactions of the Association for Computational Linguistics", "volume": "5", "issue": "", "pages": "135--146", "other_ids": { "DOI": [ "10.1162/tacl_a_00051" ] }, "num": null, "urls": [], "raw_text": "Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. Transactions of the Associa- tion for Computational Linguistics, 5:135-146.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Improving multi-document summarization via text classification", "authors": [ { "first": "Ziqiang", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "" }, { "first": "Sujian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" } ], "year": 2017, "venue": "Thirty-First AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2017. Improving multi-document summarization via text classification. In Thirty-First AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Retrieve, rerank and rewrite: Soft template based neural summarization", "authors": [ { "first": "Ziqiang", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Wenjie", "middle": [], "last": "Li", "suffix": "" }, { "first": "Sujian", "middle": [], "last": "Li", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "152--161", "other_ids": { "DOI": [ "10.18653/v1/P18-1015" ] }, "num": null, "urls": [], "raw_text": "Ziqiang Cao, Wenjie Li, Sujian Li, and Furu Wei. 2018. Retrieve, rerank and rewrite: Soft template based neural summarization. In Proceedings of the 56th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 152-161, Melbourne, Australia. 
Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "The use of mmr, diversity-based reranking for reordering documents and producing summaries", "authors": [ { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Jade", "middle": [], "last": "Goldstein", "suffix": "" } ], "year": 1998, "venue": "Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '98", "volume": "", "issue": "", "pages": "335--336", "other_ids": { "DOI": [ "10.1145/290941.291025" ] }, "num": null, "urls": [], "raw_text": "Jaime Carbonell and Jade Goldstein. 1998. The use of mmr, diversity-based reranking for reordering docu- ments and producing summaries. In Proceedings of the 21st Annual International ACM SIGIR Confer- ence on Research and Development in Information Retrieval, SIGIR '98, pages 335-336, New York, NY, USA. ACM.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "MeanSum: A neural model for unsupervised multi-document abstractive summarization", "authors": [ { "first": "Eric", "middle": [], "last": "Chu", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 36th International Conference on Machine Learning", "volume": "", "issue": "", "pages": "1223--1232", "other_ids": {}, "num": null, "urls": [], "raw_text": "Eric Chu and Peter Liu. 2019. MeanSum: A neu- ral model for unsupervised multi-document abstrac- tive summarization. In Proceedings of the 36th In- ternational Conference on Machine Learning, vol- ume 97 of Proceedings of Machine Learning Re- search, pages 1223-1232, Long Beach, California, USA. PMLR.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Transformer-xl: Attentive language models beyond a fixed-length context", "authors": [ { "first": "Zihang", "middle": [], "last": "Dai", "suffix": "" }, { "first": "Zhilin", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Yiming", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Jaime", "middle": [], "last": "Carbonell", "suffix": "" }, { "first": "Quoc", "middle": [], "last": "Le", "suffix": "" }, { "first": "Ruslan", "middle": [], "last": "Salakhutdinov", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/p19-1285" ] }, "num": null, "urls": [], "raw_text": "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime Car- bonell, Quoc Le, and Ruslan Salakhutdinov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. Proceedings of the 57th An- nual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Bert: Pre-training of deep bidirectional transformers for language understanding", "authors": [ { "first": "Jacob", "middle": [], "last": "Devlin", "suffix": "" }, { "first": "Ming-Wei", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Kenton", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. 
Bert: Pre-training of deep bidirectional transformers for language understand- ing.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Multi-news: A large-scale multi-document summarization dataset and abstractive hierarchical model", "authors": [ { "first": "Alexander", "middle": [], "last": "Fabbri", "suffix": "" }, { "first": "Irene", "middle": [], "last": "Li", "suffix": "" }, { "first": "Tianwei", "middle": [], "last": "She", "suffix": "" }, { "first": "Suyi", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1074--1084", "other_ids": { "DOI": [ "10.18653/v1/P19-1102" ] }, "num": null, "urls": [], "raw_text": "Alexander Fabbri, Irene Li, Tianwei She, Suyi Li, and Dragomir Radev. 2019. Multi-news: A large-scale multi-document summarization dataset and abstrac- tive hierarchical model. In Proceedings of the 57th Annual Meeting of the Association for Computa- tional Linguistics, pages 1074-1084, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Bottom-up abstractive summarization", "authors": [ { "first": "Sebastian", "middle": [], "last": "Gehrmann", "suffix": "" }, { "first": "Yuntian", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Alexander", "middle": [], "last": "Rush", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4098--4109", "other_ids": { "DOI": [ "10.18653/v1/D18-1443" ] }, "num": null, "urls": [], "raw_text": "Sebastian Gehrmann, Yuntian Deng, and Alexander Rush. 2018. Bottom-up abstractive summarization. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 4098-4109, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Semisupervised classification with graph convolutional networks", "authors": [ { "first": "N", "middle": [], "last": "Thomas", "suffix": "" }, { "first": "Max", "middle": [], "last": "Kipf", "suffix": "" }, { "first": "", "middle": [], "last": "Welling", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Thomas N. Kipf and Max Welling. 2016. Semi- supervised classification with graph convolutional networks. CoRR, abs/1609.02907.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Adapting the neural encoder-decoder framework from single to multi-document summarization", "authors": [ { "first": "Logan", "middle": [], "last": "Lebanoff", "suffix": "" }, { "first": "Kaiqiang", "middle": [], "last": "Song", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4131--4141", "other_ids": { "DOI": [ "10.18653/v1/D18-1446" ] }, "num": null, "urls": [], "raw_text": "Logan Lebanoff, Kaiqiang Song, and Fei Liu. 2018. Adapting the neural encoder-decoder framework from single to multi-document summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4131-4141, Brussels, Belgium. 
Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "A hierarchical neural autoencoder for paragraphs and documents", "authors": [ { "first": "Jiwei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Minh-Thang", "middle": [], "last": "Luong", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2015, "venue": "ACL", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiwei Li, Minh-Thang Luong, and Dan Jurafsky. 2015. A hierarchical neural autoencoder for paragraphs and documents. In ACL.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Salience estimation via variational auto-encoders for multi-document summarization", "authors": [ { "first": "Piji", "middle": [], "last": "Li", "suffix": "" }, { "first": "Zihao", "middle": [], "last": "Wang", "suffix": "" }, { "first": "Wai", "middle": [], "last": "Lam", "suffix": "" }, { "first": "Zhaochun", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Lidong", "middle": [], "last": "Bing", "suffix": "" } ], "year": 2017, "venue": "Thirty-First AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Piji Li, Zihao Wang, Wai Lam, Zhaochun Ren, and Lidong Bing. 2017. Salience estimation via varia- tional auto-encoders for multi-document summariza- tion. In Thirty-First AAAI Conference on Artificial Intelligence.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Improving neural abstractive document summarization with structural regularization", "authors": [ { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Xinyan", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Yajuan", "middle": [], "last": "Lyu", "suffix": "" }, { "first": "Yuanzhuo", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4078--4087", "other_ids": { "DOI": [ "10.18653/v1/D18-1441" ] }, "num": null, "urls": [], "raw_text": "Wei Li, Xinyan Xiao, Yajuan Lyu, and Yuanzhuo Wang. 2018. Improving neural abstractive document sum- marization with structural regularization. In Pro- ceedings of the 2018 Conference on Empirical Meth- ods in Natural Language Processing, pages 4078- 4087, Brussels, Belgium. Association for Computa- tional Linguistics.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. 
Association for Computational Linguistics.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Hierarchical recurrent neural network for document modeling", "authors": [ { "first": "Rui", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Shujie", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Muyun", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Mu", "middle": [], "last": "Li", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Sheng", "middle": [], "last": "Li", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "899--907", "other_ids": { "DOI": [ "10.18653/v1/D15-1106" ] }, "num": null, "urls": [], "raw_text": "Rui Lin, Shujie Liu, Muyun Yang, Mu Li, Ming Zhou, and Sheng Li. 2015. Hierarchical recurrent neural network for document modeling. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 899-907, Lis- bon, Portugal. Association for Computational Lin- guistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Generative adversarial network for abstractive text summarization", "authors": [ { "first": "Linqing", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yao", "middle": [], "last": "Lu", "suffix": "" }, { "first": "Min", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Qiang", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Jia", "middle": [], "last": "Zhu", "suffix": "" }, { "first": "Hongyan", "middle": [], "last": "Li", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "8109--8110", "other_ids": {}, "num": null, "urls": [], "raw_text": "Linqing Liu, Yao Lu, Min Yang, Qiang Qu, Jia Zhu, and Hongyan Li. 2018a. Generative adversarial net- work for abstractive text summarization. In Proceed- ings of the Thirty-Second AAAI Conference on Ar- tificial Intelligence, New Orleans, Louisiana, USA, February 2-7, 2018, pages 8109-8110. AAAI Press.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Generating wikipedia by summarizing long sequences", "authors": [ { "first": "J", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Etienne", "middle": [], "last": "Saleh", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Pot", "suffix": "" }, { "first": "Ryan", "middle": [], "last": "Goodrich", "suffix": "" }, { "first": "Lukasz", "middle": [], "last": "Sepassi", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Shazeer", "suffix": "" } ], "year": 2018, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, and Noam Shazeer. 2018b. Generating wikipedia by summariz- ing long sequences. 
CoRR, abs/1801.10198.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Hierarchical transformers for multi-document summarization", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/p19-1500" ] }, "num": null, "urls": [], "raw_text": "Yang Liu and Mirella Lapata. 2019. Hierarchical trans- formers for multi-document summarization. Pro- ceedings of the 57th Annual Meeting of the Associa- tion for Computational Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "An unsupervised multi-document summarization framework based on neural document model", "authors": [ { "first": "Shulei", "middle": [], "last": "Ma", "suffix": "" }, { "first": "Zhi-Hong", "middle": [], "last": "Deng", "suffix": "" }, { "first": "Yunlun", "middle": [], "last": "Yang", "suffix": "" } ], "year": 2016, "venue": "Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers", "volume": "", "issue": "", "pages": "1514--1523", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shulei Ma, Zhi-Hong Deng, and Yunlun Yang. 2016. An unsupervised multi-document summarization framework based on neural document model. In Pro- ceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Techni- cal Papers, pages 1514-1523, Osaka, Japan. The COLING 2016 Organizing Committee.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Summarunner: A recurrent neural network based sequence model for extractive summarization of documents", "authors": [ { "first": "Ramesh", "middle": [], "last": "Nallapati", "suffix": "" }, { "first": "Feifei", "middle": [], "last": "Zhai", "suffix": "" }, { "first": "Bowen", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2016. Summarunner: A recurrent neural network based se- quence model for extractive summarization of docu- ments.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Get to the point: Summarization with pointergenerator networks", "authors": [ { "first": "Abigail", "middle": [], "last": "See", "suffix": "" }, { "first": "J", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Liu", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.18653/v1/p17-1099" ] }, "num": null, "urls": [], "raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. 
Proceedings of the 55th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers).", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Hierarchical attention networks for document classification", "authors": [ { "first": "Zichao", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Diyi", "middle": [], "last": "Yang", "suffix": "" }, { "first": "Chris", "middle": [], "last": "Dyer", "suffix": "" }, { "first": "Xiaodong", "middle": [], "last": "He", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Smola", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2016, "venue": "Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1480--1489", "other_ids": { "DOI": [ "10.18653/v1/N16-1174" ] }, "num": null, "urls": [], "raw_text": "Zichao Yang, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. 2016. Hierarchical attention networks for document classification. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 1480-1489, San Diego, California. Associa- tion for Computational Linguistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Graph-based neural multi-document summarization", "authors": [ { "first": "Michihiro", "middle": [], "last": "Yasunaga", "suffix": "" }, { "first": "Rui", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Kshitijh", "middle": [], "last": "Meelu", "suffix": "" }, { "first": "Ayush", "middle": [], "last": "Pareek", "suffix": "" }, { "first": "Krishnan", "middle": [], "last": "Srinivasan", "suffix": "" }, { "first": "Dragomir", "middle": [], "last": "Radev", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 21st Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "452--462", "other_ids": { "DOI": [ "10.18653/v1/K17-1045" ] }, "num": null, "urls": [], "raw_text": "Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, and Dragomir Radev. 2017. Graph-based neural multi-document summarization. In Proceedings of the 21st Confer- ence on Computational Natural Language Learning (CoNLL 2017), pages 452-462, Vancouver, Canada. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "HI-BERT: Document level pre-training of hierarchical bidirectional transformers for document summarization", "authors": [ { "first": "Xingxing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5059--5069", "other_ids": { "DOI": [ "10.18653/v1/P19-1499" ] }, "num": null, "urls": [], "raw_text": "Xingxing Zhang, Furu Wei, and Ming Zhou. 2019. HI- BERT: Document level pre-training of hierarchical bidirectional transformers for document summariza- tion. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5059-5069, Florence, Italy. 
Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "text": "Flat structure (top) -concatenating documents to a flat sequence. Hierarchical structure (bottom)-hierarchical input and representation of documents + modeling cross-document relationships.", "num": null, "type_str": "figure" }, "FIGREF1": { "uris": null, "text": "PHT -shared encoders on the left and the decoder on the right.", "num": null, "type_str": "figure" }, "FIGREF2": { "uris": null, "text": "VHT -removing attention pooling in the encoder and using the vertical architecture in the decoder.", "num": null, "type_str": "figure" }, "TABREF0": { "text": "Average cosine similarities between attention distributions of generated summaries and the gold attention distribution", "num": null, "type_str": "table", "content": "
Model              Cosine similarity
Flat Transformer   0.8143
T-DMCA             0.8654
Transformer-XL     0.8447
Liu's HT           0.8769
Vertical HT        0.9142
Parallel HT        0.8936
It is proved by
", "html": null }, "TABREF1": { "text": "Average ROUGE F 1 scores. The second and the third panels are models of the flat and hierarchical structures, respectively.", "num": null, "type_str": "table", "content": "
Model            R-1    R-2    R-L
Lead             36.40  16.66  32.95
FT               40.30  18.67  32.84
T-DMCA           41.09  19.78  33.31
Transformer-XL   41.11  19.81  33.72
Liu's HT         40.83  19.41  33.26
1-layer PHT      41.02  19.82  33.28
1-layer VHT      41.04  19.50  33.64
PHT              41.99  20.44  34.50
VHT              41.85  20.21  34.61
", "html": null }, "TABREF2": { "text": "Human evaluation results", "num": null, "type_str": "table", "content": "
Model                      Informativeness  Fluency      Conciseness  Factual consistency  Preference
T-DMCA / Transformer-XL    3.69 / 3.57      3.66 / 3.71  3.82 / 3.77  3.04 / 2.88          1
PHT / VHT                  4.11 / 4.24      3.97 / 3.87  3.81 / 3.81  3.28 / 3.36          2.92
", "html": null }, "TABREF3": { "text": "Computational efficiency (Transformer-decoder is used to show that abandoning the encoder removes approximately one quarter of parameters from the encoder-decoder model.).", "num": null, "type_str": "table", "content": "
Model                Max Batch Size  Parameters (MB)  Validation Speed (s)
Flat Transformer     11              165.0            634
Transformer-decoder  -               127.1            -
T-DMCA               10              131.1            656
Transformer-XL       8               130.4            489
Liu's HT             11              190.8            639
Vertical HT          13              174.5            930
Parallel HT          17              182.4            648
", "html": null } } } }