{ "paper_id": "2021", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T14:47:17.179807Z" }, "title": "Modeling Endorsement for Multi-Document Abstractive Summarization", "authors": [ { "first": "Logan", "middle": [], "last": "Lebanoff", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Central Florida", "location": { "settlement": "Orlando", "region": "FL" } }, "email": "loganlebanoff@knights.ucf.edu" }, { "first": "Bingqing", "middle": [], "last": "Wang", "suffix": "", "affiliation": { "laboratory": "", "institution": "Robert Bosch LLC", "location": { "settlement": "Sunnyvale", "region": "CA" } }, "email": "bingqing.wang@us.bosch.com" }, { "first": "Zhe", "middle": [], "last": "Feng", "suffix": "", "affiliation": { "laboratory": "", "institution": "Robert Bosch LLC", "location": { "settlement": "Sunnyvale", "region": "CA" } }, "email": "zhe.feng2@us.bosch.com" }, { "first": "Fei", "middle": [], "last": "Liu", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Central Florida", "location": { "settlement": "Orlando", "region": "FL" } }, "email": "feiliu@cs.ucf.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A crucial difference between single-and multidocument summarization is how salient content manifests itself in the document(s). While such content may appear at the beginning of a single document, essential information is frequently reiterated in a set of documents related to a particular topic, resulting in an endorsement effect that increases information salience. In this paper, we model the cross-document endorsement effect and its utilization in multiple document summarization. Our method generates a synopsis from each document, which serves as an endorser to identify salient content from other documents. Strongly endorsed text segments are used to enrich a neural encoderdecoder model to consolidate them into an abstractive summary. The method has a great potential to learn from fewer examples to identify salient content, which alleviates the need for costly retraining when the set of documents is dynamically adjusted. Through extensive experiments on benchmark multi-document summarization datasets, we demonstrate the effectiveness of our proposed method over strong published baselines. Finally, we shed light on future research directions and discuss broader challenges of this task using a case study.", "pdf_parse": { "paper_id": "2021", "_pdf_hash": "", "abstract": [ { "text": "A crucial difference between single-and multidocument summarization is how salient content manifests itself in the document(s). While such content may appear at the beginning of a single document, essential information is frequently reiterated in a set of documents related to a particular topic, resulting in an endorsement effect that increases information salience. In this paper, we model the cross-document endorsement effect and its utilization in multiple document summarization. Our method generates a synopsis from each document, which serves as an endorser to identify salient content from other documents. Strongly endorsed text segments are used to enrich a neural encoderdecoder model to consolidate them into an abstractive summary. The method has a great potential to learn from fewer examples to identify salient content, which alleviates the need for costly retraining when the set of documents is dynamically adjusted. 
Through extensive experiments on benchmark multi-document summarization datasets, we demonstrate the effectiveness of our proposed method over strong published baselines. Finally, we shed light on future research directions and discuss broader challenges of this task using a case study.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "\"Repeat a lie often enough and it becomes the truth.\" This proverb stresses the importance of repetition and frequency in human comprehension. It causes an endorsement effect that increases the salience of repeated information. In this paper, we leverage the endorsement effect to summarize multiple documents that discuss a particular event or topic (MDS). In the commercial arena, MDS could be used to aggregate search results (Miller, 2020) and distill insights from customer reviews (Bra\u017einskas et al., 2020) . Further, MDS is an integral part of the daily work of intelligence analysts who identify important information from raw documents and consolidate it into a summary report to be disseminated to the leadership (Hamilton, 2014) . Synopsis-document endorsements are leveraged to identify important text segments from a source document (e.g., Doc A). Strongly endorsed segments of all documents are consolidated into an abstractive summary.", "cite_spans": [ { "start": 429, "end": 443, "text": "(Miller, 2020)", "ref_id": "BIBREF29" }, { "start": 487, "end": 512, "text": "(Bra\u017einskas et al., 2020)", "ref_id": "BIBREF3" }, { "start": 723, "end": 739, "text": "(Hamilton, 2014)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Multi-document Abstractive Summarization, i.e. MuDAS, remains a challenging problem compared to its single-document counterpart (See et al., 2017; Chen and Bansal, 2018; Narayan et al., 2018; Raffel et al., 2020; Lewis et al., 2020) . The task poses a substantial challenge to modern neural models: when the set of source documents is concatenated into a flat sequence, it may exceed the maximum length allowed by the GPU memory. There are also fewer datasets available to train MuDAS models in an end-to-end fashion. Recent work tackles this problem by selecting representative sentences from the source documents to reduce the task to singledocument summarization (Lebanoff et al., 2018; Coavoux et al., 2019; Fabbri et al., 2019) .", "cite_spans": [ { "start": 128, "end": 146, "text": "(See et al., 2017;", "ref_id": "BIBREF37" }, { "start": 147, "end": 169, "text": "Chen and Bansal, 2018;", "ref_id": "BIBREF5" }, { "start": 170, "end": 191, "text": "Narayan et al., 2018;", "ref_id": "BIBREF30" }, { "start": 192, "end": 212, "text": "Raffel et al., 2020;", "ref_id": "BIBREF35" }, { "start": 213, "end": 232, "text": "Lewis et al., 2020)", "ref_id": "BIBREF22" }, { "start": 666, "end": 689, "text": "(Lebanoff et al., 2018;", "ref_id": "BIBREF21" }, { "start": 690, "end": 711, "text": "Coavoux et al., 2019;", "ref_id": "BIBREF9" }, { "start": 712, "end": 732, "text": "Fabbri et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Nevertheless, there could be substantial information loss if only representative sentences are used for MuDAS. It becomes unclear what information is reiterated and salient, resulting in unimportant sentence parts being included in the summary. 
E.g., when the sentence \"World leaders join to pledge $8 billion for vaccine, but the U.S. sits out\" is selected from the document set, it is unclear which of its segments, \"$8 billion\" or \"U.S. sits out,\" is more salient given the topic of discussion. The neural representations also treat different quantities, e.g., \"$8 billion\" and \"$5 million,\" indiscriminately (Rogers et al., 2020) . Consequently, there is an urgent need for summarization systems to acquire fine-grained, segment-level textual salience. Without that, a neural abstractive system can miss out on salient details and favor fluency over information accuracy.", "cite_spans": [ { "start": 612, "end": 633, "text": "(Rogers et al., 2020)", "ref_id": "BIBREF36" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In this paper, we present a conceptual framework that leverages the endorsement effect to model finegrained segment salience for multi-document summarization. When an analyst reads a document, he retains a synopsis of the key ideas of the document in his mind. The synopsis later serves as an endorser to identify segments in other documents that reiterate the same ideas (Hintzman, 1976) . We call the synopsis an \"Endorser\" and the document a \"Candidate.\" Segments of the candidate documents that are frequently endorsed by synopses suggest high salience and are to be consolidated into an abstractive summary. Our synopses are generated from a state-of-the-art summarizer (Lewis et al., 2020) and a variety of methods are investigated to quantify the level of endorsement from a text synopsis to a document. Figure 1 provides an overview of synopsis-document endorsement.", "cite_spans": [ { "start": 372, "end": 388, "text": "(Hintzman, 1976)", "ref_id": "BIBREF13" }, { "start": 675, "end": 695, "text": "(Lewis et al., 2020)", "ref_id": "BIBREF22" } ], "ref_spans": [ { "start": 811, "end": 819, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Our contributions in this paper include:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 presenting a new conceptual framework to model asynchronous endorsement from text synopses to documents for multi-document summarization;", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 devising a novel method to enrich neural encoderdecoder models with fine-grained segment-level endorsement to consolidate strongly endorsed content into an abstractive summary; and", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 through extensive experiments on multiple benchmark summarization datasets, we demonstrate the effectiveness of the endorsement method over state-of-the-art baselines. We make our code and models publicly available. 1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Redundancy is essential in multi-document summarization. Without repetition and redundancy, even humans cannot agree on what information is salient and should be included in the summary (Daume III and Marcu, 2004) . Optimizing summaries for frequency-based saliency has attained success prior to the era of deep learning (Berg-Kirkpatrick et al., 2011; Kulesza and Taskar, 2012; Boudin et al., 2015) . 
These extractive systems strive to include the most frequently occurring concepts in the summary. However, when it comes to abstractive summarization systems, the frequency of concepts is not fully utilized by modern neural models.", "cite_spans": [ { "start": 186, "end": 213, "text": "(Daume III and Marcu, 2004)", "ref_id": "BIBREF11" }, { "start": 321, "end": 352, "text": "(Berg-Kirkpatrick et al., 2011;", "ref_id": "BIBREF0" }, { "start": 353, "end": 378, "text": "Kulesza and Taskar, 2012;", "ref_id": "BIBREF19" }, { "start": 379, "end": 399, "text": "Boudin et al., 2015)", "ref_id": "BIBREF2" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "1 https://github.com/ucfnlp/endorser-summ Recent studies on MuDAS implicitly estimate frequency using hierarchical encoders / decoders. encode the documents using hierarchical Transformers where cross-document relationships are characterized by attention weights. Perez-Beltrachini et al. (2019) explore structured convolutional decoders. leverage similarity and discourse graphs to alter the attention mechanism of encoder-decoder models. Researchers have also attempted optimization algorithms such as maximal margin relevance and determinantal point processes combined with contextualized representations and reinforcement learning (Cho et al., 2019a,b; Mao et al., 2020) . Despite promising progress, modeling frequency for multidocument summarization remains an open problem, in part because neural summarization models are often pretrained on single documents that contain little or no redundant content (Kryscinski et al., 2019; Zhang et al., 2019; Jin and Wan, 2020; Laban et al., 2020; Zhang et al., 2020a) . Named entities and quantities that represent salient information details are not properly accounted for (Xu and Durrett, 2021 ). If we do not explicitly model frequency, abstractive summarizers may fail to adequately recognize such salient details.", "cite_spans": [ { "start": 635, "end": 656, "text": "(Cho et al., 2019a,b;", "ref_id": null }, { "start": 657, "end": 674, "text": "Mao et al., 2020)", "ref_id": "BIBREF27" }, { "start": 910, "end": 935, "text": "(Kryscinski et al., 2019;", "ref_id": "BIBREF18" }, { "start": 936, "end": 955, "text": "Zhang et al., 2019;", "ref_id": "BIBREF43" }, { "start": 956, "end": 974, "text": "Jin and Wan, 2020;", "ref_id": "BIBREF16" }, { "start": 975, "end": 994, "text": "Laban et al., 2020;", "ref_id": "BIBREF20" }, { "start": 995, "end": 1015, "text": "Zhang et al., 2020a)", "ref_id": "BIBREF41" }, { "start": 1122, "end": 1143, "text": "(Xu and Durrett, 2021", "ref_id": "BIBREF40" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We are particularly interested in reducing multiple input documents to a single document, then consolidate the content into a succinct abstract (Nayeem et al., 2018; Coavoux et al., 2019) . Our method enhances the single document with fine-grained segment salience to offset the lead bias (Grenander et al., 2019; Xing et al., 2021) , which hinders the development of multiple-document summarization. Our salience estimates are obtained from a frequency-driven endorsement model. 
Below we present details of the proposed method.", "cite_spans": [ { "start": 144, "end": 165, "text": "(Nayeem et al., 2018;", "ref_id": "BIBREF31" }, { "start": 166, "end": 187, "text": "Coavoux et al., 2019)", "ref_id": "BIBREF9" }, { "start": 289, "end": 313, "text": "(Grenander et al., 2019;", "ref_id": null }, { "start": 314, "end": 332, "text": "Xing et al., 2021)", "ref_id": "BIBREF39" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "2" }, { "text": "We approach the MuDAS problem in two stages. First, we obtain fine-grained segment-level endorsement for any candidate document. By excluding unendorsed sentences from consideration, we reduce the set of documents to a single input document. We next present an enhanced abstractive summarization model to consolidate the document into a succinct abstract, analogously to how an editor would consolidate text with emphasis on endorsed segments. This process involves non-trivial design decisions. In this section, we start by presenting the second stage in our approach - the summarization model with endorsement.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Summarization with Endorsement", "sec_num": "3" }, { "text": "We choose the encoder-decoder architecture over decoder-only architectures (Radford et al., 2019; Dong et al., 2019; Brown et al., 2020) . It allows us to balance the contribution from the source text and its endorsed segments in summary generation. The encoder and decoder each comprise a stack of L Transformer blocks (Vaswani et al., 2017) . Let x_0, ..., x_m be the source sequence corresponding to the input document and y_0, ..., y_n the summary sequence; x_0 and y_0 are beginning-of-sequence symbols. Let E be a matrix of token embeddings and P be position embeddings. The encoder produces a set of hidden vectors H^(l) = [h_0^(l), ..., h_m^(l)] in its l-th layer (Eq. (1)), where h_i^(l) is the hidden vector of the i-th source token. The decoder utilizes the top-layer encoder hidden vectors H^(L) to decode the summary sequence, where G^(l) denotes the sequence of hidden vectors of the l-th decoder layer (Eq. (2)). An upper-triangular mask is used by the decoder, so that g_j depends only on summary tokens whose positions are less than j.", "cite_spans": [ { "start": 75, "end": 97, "text": "(Radford et al., 2019;", "ref_id": "BIBREF34" }, { "start": 98, "end": 116, "text": "Dong et al., 2019;", "ref_id": null }, { "start": 117, "end": 136, "text": "Brown et al., 2020)", "ref_id": null }, { "start": 323, "end": 345, "text": "(Vaswani et al., 2017)", "ref_id": "BIBREF38" } ], "ref_spans": [], "eq_spans": [], "section": "The Original Transformer", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H^{(l)} = [h^{(l)}_0, \\ldots, h^{(l)}_m] = \\begin{cases} [E_{x_0} + P_0, \\ldots, E_{x_m} + P_m] & l = 0 \\\\ \\mathrm{EncBlock}_l(H^{(l-1)}) & l > 0 \\end{cases}", "eq_num": "(1)" }, { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "G^{(l)} = [g^{(l)}_0, \\ldots, g^{(l)}_n] = \\begin{cases} [E_{y_0} + P_0, \\ldots, E_{y_n} + P_n] & l = 0 \\\\ \\mathrm{DecBlock}_l(G^{(l-1)}, H^{(L)}) & l > 0 \\end{cases}", "eq_num": "(2)" } ], "section": "The Original Transformer", "sec_num": "3.1" }, { "text": "With this architecture, we argue that it is preferable to modify the decoder and its cross-attention to steer generation towards endorsed content, rather than modifying the encoder representations H^(L), as they are often pretrained without supervision. It is best if such representations remain unaffected by whether a segment of the source text is endorsed, which provides model flexibility. A decoder layer consists of three main blocks that transform G^(l-1) into G^(l) (Eqs. (3-5)). 2 In particular, self-attention allows a summary token to attend to other summary tokens. Cross-attention allows a summary token to attend to all source tokens using H^(L). Finally, a feed-forward network with ReLU activation is applied to generate G^(l). The focus of this work is to improve the cross-attention so that it emphasizes endorsed content during decoding.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Original Transformer", "sec_num": "3.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\tilde{G}^{(l-1)} = \\mathrm{SelfAttn}(G^{(l-1)})", "eq_num": "(3)" }, { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\hat{G}^{(l)} = \\mathrm{CrossAttn}(\\tilde{G}^{(l-1)}, H^{(L)})", "eq_num": "(4)" }, { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "G^{(l)} = \\mathrm{FeedForward}(\\hat{G}^{(l)})", "eq_num": "(5)" } ], "section": "The Original Transformer", "sec_num": "3.1" }, { "text": "The original cross-attention head z transforms the j-th decoder state g_j^(l-1) and the i-th encoder state h_i^(L) into query, key and value vectors (Eqs. (6-8)). It computes attention weights as a normalized dot product between query and key vectors. The output of the head is a weighted sum of value vectors.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "The Original Transformer", "sec_num": "3.1" }, { "text": "We introduce a set of companion heads for each original head. All companion heads of z share the parameters {W_z^Q, W_z^K, W_z^V}, but a companion head, denoted head_j^{z,\u03c4}, with an endorsement level of \u03c4 attends only to source tokens that are endorsed \u03c4 times or more. This is achieved with a special binary mask M_i^\u03c4 (Eqs. (9-10)). The original heads are believed to copy over source tokens that are deemed relevant to summary tokens according to the dependency syntax (Clark et al., 2019) . The companion heads serve a similar purpose but have a narrower focus on endorsed source tokens-frequently endorsed tokens are more likely to be copied over by companion heads. 
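To make this masking concrete, the following is a minimal sketch of one cross-attention head and its endorsement-level companions, written as PyTorch-style Python of our own; the tensor layouts, function and variable names, and the per-level loop are assumptions for illustration rather than the released fairseq implementation. It follows the computation formalized in Eqs. (6)-(11).

import torch
import torch.nn.functional as F

def companion_cross_attention(g_prev, h_enc, endorse_counts,
                              W_Q, W_K, W_V, W_out_per_tau, taus=(0, 1, 2)):
    # g_prev: [n, d_model] decoder states; h_enc: [m, d_model] encoder states.
    # endorse_counts: [m] integer tensor holding Endorse(x_i) for each source token.
    # W_Q, W_K, W_V: [d_head, d_model] projections shared by the head and its companions.
    # W_out_per_tau: one [d_head, d_model] output projection per endorsement level tau.
    q = g_prev @ W_Q.T                          # queries, Eq. (6)
    k = h_enc @ W_K.T                           # keys, Eq. (7)
    v = h_enc @ W_V.T                           # values, Eq. (8)
    attn = F.softmax(q @ k.T, dim=-1)           # [n, m] normalized attention weights
    out = torch.zeros(g_prev.shape[0], W_out_per_tau[0].shape[1])
    for tau, W_out in zip(taus, W_out_per_tau):
        mask = (endorse_counts >= tau).float()  # binary mask M_i^tau, Eq. (10)
        head = (attn * mask) @ v                # companion head output, Eq. (9)
        out = out + head @ W_out                # pool heads via W_z^tau, Eq. (11)
    return out

In this sketch, each endorsement level reuses the same normalized attention weights and simply zeroes out the weights of source tokens endorsed fewer than \u03c4 times (Eqs. (9)-(10)); the per-level outputs are then summed through their own output projections as in Eq. (11), which the paper initializes as \u03bb_\u03c4 W_z (see below).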
The method thus improves head diversity similar to that of sparse Transformers (Correia et al., 2019; Huang et al., 2021 ). The hyperparameter \u03c4 controls the level of endorsement. Finally, all heads are pooled into a hidden vector g_j^(l) (Eq. (11)) to be passed to the feedforward layer.", "cite_spans": [ { "start": 300, "end": 320, "text": "(Clark et al., 2019)", "ref_id": "BIBREF8" }, { "start": 566, "end": 601, "text": "Transformers (Correia et al., 2019;", "ref_id": null }, { "start": 602, "end": 620, "text": "Huang et al., 2021", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Companion Heads", "sec_num": "3.2" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "q_j^z = W_z^Q g_j^{(l-1)}, \\quad j \\in [n]", "eq_num": "(6)" }, { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "k_i^z = W_z^K h_i^{(L)}, \\quad i \\in [m]", "eq_num": "(7)" }, { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "v_i^z = W_z^V h_i^{(L)}, \\quad i \\in [m]", "eq_num": "(8)" }, { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\\mathrm{head}_j^{z,\\tau} = \\sum_{i=0}^{m} \\frac{\\exp(q_j^z \\cdot k_i^z)}{\\sum_{r=0}^{m} \\exp(q_j^z \\cdot k_r^z)} M_i^{\\tau} v_i^z", "eq_num": "(9)" }, { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "M_i^{\\tau} = \\begin{cases} 1 & \\text{if } \\mathrm{Endorse}(x_i) \\geq \\tau \\\\ 0 & \\text{otherwise} \\end{cases}", "eq_num": "(10)" }, { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "g_j^{(l)} = \\sum_{z=1}^{n_{head}} \\sum_{\\tau=0}^{\\tau_{max}} \\mathrm{head}_j^{z,\\tau} W_z^{\\tau}", "eq_num": "(11)" } ], "section": "Companion Heads", "sec_num": "3.2" }, { "text": "When \u03c4_max is set to 0, the model reduces to its initial form using the original heads, i.e., head_j^{z,0}. Further, we initialize W_z^\u03c4 = \u03bb_\u03c4 W_z, where W_z \u2208 R^{h_head \u00d7 h_model} are the pretrained model parameters associated with head z, \u03bb_\u03c4 \u2208 [0, 1] is a coefficient, and W_z = \u03a3_{\u03c4=0}^{\u03c4_max} W_z^\u03c4. It indicates that head z and all of its companion heads are linearly interpolated to produce the decoder hidden state g_j^(l). If a source token is not endorsed, it will have a reduced impact on the decoder hidden state when companion heads are used. The method has the advantage that, when new documents are dynamically added or removed from the set, it only changes the level of endorsement received by the tokens (\u03c4), thus avoiding costly retraining of the neural encoder-decoder model. We proceed by describing how fine-grained segment-level endorsement is obtained from modeling synopsis-document relationships.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Companion Heads", "sec_num": "3.2" }, { "text": "In this section, we present the first stage in our approach -modelling endorsement -whose outputs are passed to the abstractive summarization model in the second stage. Modelling endorsement serves two main purposes. It allows us to identify salient segments of text using a frequency-driven endorsement model, and the level of endorsement guides the summarizer to consolidate salient content. Further, it helps us reduce the source input from multiple documents to a single pseudo-document, whereby any unendorsed sentences are removed from consideration.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Modelling Endorsement", "sec_num": "4" }, { "text": "A fragment of text is considered to be endorsed if its information is observed in the endorser. We obtain a set of synopses from the source documents; they are used as endorsers to identify salient segments from a candidate source document. A segment that is endorsed only once indicates its information is considered important by only one source document. 
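For instance (an illustrative example of ours, not drawn from the datasets), if the phrase "$8 billion" is matched by the synopses of three other documents in the cluster, each of its tokens receives Endorse(x_i) = 3 and remains visible to the companion heads at both \u03c4 = 1 and \u03c4 = 2, whereas a segment matched by a single synopsis is visible only to the original heads (\u03c4 = 0) and the \u03c4 = 1 companions.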
Frequent endorsement by multiple endorsers suggests the information is reiterated in multiple source documents, and reiteration implies increased salience. Any information that is present among multiple sources is likely to be important. Thus, our method identifies salient segments considering both within-and cross-document saliency. Our approach is in spirit similar to those of building semantic concept graphs for multi-document summarization (Bing et al., 2015; Handler and O'Connor, 2018; Falke and Gurevych, 2019) in that frequently reiterated concepts are likely to be captured. However, we do not explicitly construct semantic concept graphs, but focus on modeling synopsis-document endorsement and incorporating it into summary generation, which distinguishes our work from these studies. We investigate two variants to compute segment-level endorsement.", "cite_spans": [ { "start": 805, "end": 824, "text": "(Bing et al., 2015;", "ref_id": "BIBREF1" }, { "start": 825, "end": 852, "text": "Handler and O'Connor, 2018;", "ref_id": null }, { "start": 853, "end": 878, "text": "Falke and Gurevych, 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Modelling Endorsement", "sec_num": "4" }, { "text": "Let S be a synopsis serving as the endorser and D a source document, our goal is to estimate whether a token x i of the document is endorsed by the synopsis. A soft alignment between the synopsis and document is attainable by utilizing text evaluation metrics such as BERTScore (Zhang et al., 2020b) , where we build contextualized embeddings for tokens of the document and synopsis, compute the cosine similarity of embeddings, and find a most similar synopsis token for each token of the document to obtain the endorsement score S(x i ) (Eq. (12)). Albeit a greedy alignment, the method can produce competitive results comparing to methods such as the earth mover's distance (Zhao et al., 2019) .", "cite_spans": [ { "start": 278, "end": 299, "text": "(Zhang et al., 2020b)", "ref_id": "BIBREF42" }, { "start": 677, "end": 696, "text": "(Zhao et al., 2019)", "ref_id": "BIBREF44" } ], "ref_spans": [], "eq_spans": [], "section": "Synopsis-Document Alignment", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "S(x i ) = max y j \u2208S Sim(x i , y j )", "eq_num": "(12)" } ], "section": "Synopsis-Document Alignment", "sec_num": "4.1" }, { "text": "Contiguous Segments It is important to endorse segments of text rather than isolated tokens, as segments such as \"$8 million\" is either included in the abstract in its entirety, or not at all. We transform token-level endorsement scores into binary decisions using the maximum sum subarray algorithm (Eq. (13)), which finds a contiguous subsequence that yields the highest sum of scores. The solution is trivial when all scores are positive. We thus offset the scores by \u03b4 before applying the algorithm. Let {0.2, 0.3, \u22120.1, 0.4, \u22120.5} be an example of a set of adjusted endorsement scores, the algorithm endorses the first four tokens as the sum of their scores is the highest, yielding {1, 1, 1, 1, 0}, where 1 indicates the token is endorsed and 0 otherwise. We apply the algorithm to each sentence of the document and discard the segment if it has less than 5 tokens. 
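As an illustration, the two steps just described, greedy alignment against a synopsis (Eq. (12)) and maximum-sum-subarray segmentation (Eq. (13) below), can be sketched in Python as follows. This is our own simplified sketch: the function and variable names and the plain dot-product similarity over pre-normalized contextual embeddings are assumptions rather than the released code, while the \u03b4 offset and the 5-token minimum follow the description above.

def endorsement_scores(doc_token_vecs, synopsis_token_vecs):
    # Eq. (12): each document token is scored by its most similar synopsis token.
    # Both inputs are lists of unit-normalized contextual embeddings (lists of floats),
    # so the dot product below equals cosine similarity.
    return [max(sum(d * s for d, s in zip(doc_vec, syn_vec))
                for syn_vec in synopsis_token_vecs)
            for doc_vec in doc_token_vecs]

def endorsed_segment(scores, delta, min_len=5):
    # Maximum-sum subarray (Kadane's algorithm) over the offset scores (score - delta),
    # returning the (start, end) token span of the endorsed segment within a sentence,
    # or None if the best span is shorter than min_len tokens.
    best, best_span = float('-inf'), None
    cur, cur_start = 0.0, 0
    for i, s in enumerate(x - delta for x in scores):
        if cur <= 0.0:
            cur, cur_start = s, i
        else:
            cur += s
        if cur > best:
            best, best_span = cur, (cur_start, i + 1)
    if best_span is not None and best_span[1] - best_span[0] >= min_len:
        return best_span
    return None

For the adjusted scores {0.2, 0.3, -0.1, 0.4, -0.5} from the example above (i.e., with \u03b4 already subtracted and the length minimum relaxed to 1), the routine returns the span (0, 4), which corresponds to the endorsement mask {1, 1, 1, 1, 0}.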
The method endorses salient segments of text, yet is lenient to include gap tokens.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synopsis-Document Alignment", "sec_num": "4.1" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "{s, e} = arg max {i,j}\u2208m j k=i (S(x k ) \u2212 \u03b4)", "eq_num": "(13)" } ], "section": "Synopsis-Document Alignment", "sec_num": "4.1" }, { "text": "Soft vs. Hard Alignment A hard alignment between the synopsis and document can be obtained from string matching. A document token receives a score of 1 if it finds a match in the synopsis. Similar to above, we offset the scores by \u03b4 to obtain segments of endorsed text. Hard alignment is sensitive to entities and quantities; yet it can miss out on paraphrases. We compare the effectiveness of these alignment methods in the results section.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Synopsis-Document Alignment", "sec_num": "4.1" }, { "text": "A synopsis contains the main points of the source document. We employ BART (Lewis et al., 2020) , fine-tuned on the CNN/DailyMail dataset, as a single-document abstractive summarizer to produce a synopsis from each document of the input cluster. Synopses as endorsers are superior to whole documents or sentence extracts. Not only are synopses more concise, but they can exclude superfluous information such as quoted material from consideration. We score all sentences of the source documents according to the sum of their token endorsement scores. Highest endorsed sentences are selected and arranged in chronological order to form a pseudo-document, with a limit of |D| tokens, which serves as the input to our summarization module. When a token is deemed salient by \u03c4 endorsers, we set Endorse(x i )=\u03c4 , analogous to a majority vote by the pool of endorsers. We introduce two endorsement patterns. Reciprocal endorsement is where a synopsis can endorse every document of the cluster, akin to how every token attends to every other token in Transformer self-attention. Sequential endorsement is where source documents are arranged in chronological order and only synopses of the later documents can endorse the earlier documents, akin to how each token can attend only to previous tokens in decoder-only self-attention. Sequential endorsement assumes the first few articles of an event or topic are more important than others. It avoids endorsing redundant content, which is particularly useful when the documents contain redundancy or noise that is typical in the output of clustering algorithms for content aggregation. Importantly, our endorsement framework offers a potential to customize endorsement patterns based on the trustworthiness of news sources, political leanings, content quality, and more.", "cite_spans": [ { "start": 75, "end": 95, "text": "(Lewis et al., 2020)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Synopses as Endorsers", "sec_num": "4.2" }, { "text": "We experiment with a large-scale multi-document summarization dataset (Gholipour Ghalandari et al., 2020) whose data are gathered from the Wikipedia Current Events Portal (WCEP). 3 The dataset contains an archive of important news events happening around 2016-2019. Each event is associated with a succinct summary of 30-40 words written by the editor and an average of 1.2 source articles linked from the event page. 
Additional source articles are retrieved from the CommonCrawl-News dataset using an event classifier. These articles are published within a window of \u00b11 day of the 3 https://en.wikipedia.org/wiki/Portal: Current _ events event date. We sample from these additional articles to ensure each event has 10 source articles. All summaries and source articles are in English. The dataset contains 8,158, 1,020 and 1,022 clusters respectively in the train, validation and test splits.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "5" }, { "text": "Our method aims to produce an abstractive summary from a cluster of news articles discussing a given event or topic. To assess the generality of our method, we apply the model trained on WCEP to three different test sets, i.e., the test split of WCEP and two benchmark multi-document summarization datasets, DUC-04 and TAC-11. The DUC/TAC datasets contain 50 and 44 clusters, respectively. They each comprise a set of news events collected over a period of time, and thus are suitable for evaluation of the model's generality in out-of-domain scenarios. DUC and TAC datasets contain four reference summaries per cluster created by NIST evaluators. WCEP has a single reference summary per cluster written by editors. The target summary length is 100 words for DUC/TAC and 40 words for WCEP, following the convention of previously published results. Endorsement-related statistics for these datasets are presented in Table 1 .", "cite_spans": [], "ref_spans": [ { "start": 915, "end": 922, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Data", "sec_num": "5" }, { "text": "Baseline Systems. We compare our endorsement method to strong multi-document summarization baselines. The extractive summarization systems include (i) TextRank (Mihalcea and Tarau, 2004) and LexRank (Erkan and Radev, 2004) , which are graph-based approaches that perform strongly on this task. (ii) Centroid (Hong et al., 2014) computes the importance of a source sentence based on its cosine similarity with the document centroid. (iii) Submodular (Lin and Bilmes, 2011) treats multidocument summarization as a submodular maximization problem. (iv) KL-Sum (Haghighi and Vanderwende, 2009) is a greedy approach that adds sentences to the summary to minimize KL divergence. (v) TSR and BertReg (Gholipour Ghalandari et al., 2020) are regression-based sentence ranking methods using averaged word embeddings (TSR) and BERT sentence embeddings (BertReg).", "cite_spans": [ { "start": 160, "end": 186, "text": "(Mihalcea and Tarau, 2004)", "ref_id": "BIBREF28" }, { "start": 191, "end": 222, "text": "LexRank (Erkan and Radev, 2004)", "ref_id": null }, { "start": 308, "end": 327, "text": "(Hong et al., 2014)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "The abstractive summarization systems include: (vi) PointerGen (See et al., 2017) , which generates a summary by copying source words and predicting new words. The set of documents are concatenated to form the input. (vii) PG-MMR (Lebanoff et al., 2018) exploits the maximal marginal relevance method to select sentences and an encoderdecoder model to fuse them into an abstract. 
(viii)", "cite_spans": [ { "start": 63, "end": 81, "text": "(See et al., 2017)", "ref_id": "BIBREF37" }, { "start": 230, "end": 253, "text": "(Lebanoff et al., 2018)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "Synop Num Seg % Endorse Scores \u2265 \u03c4 Dataset Len Segs Len \u03c4 = 0 \u03c4 = 1 \u03c4 = 2 WCEP 61 4.9 14.2 100.0 12.6 5.6 DUC-04 58 6.1 11.7 100.0 9.7 2.3 TAC-11 60 6.7 11.8 100.0 14.5 4.1 Table 1 : (LEFT) The average length of synopses (SynopLen), average number of segments in a source document endorsed by a synopsis and average length of endorsed segments (SegLen). (RIGHT) Percentage of tokens with endorsement scores above the threshold value used in each set of companion heads. All tokens with scores below the threshold are masked out.", "cite_spans": [], "ref_spans": [ { "start": 173, "end": 180, "text": "Table 1", "ref_id": "TABREF0" } ], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "Hi- MAP (Fabbri et al., 2019) introduces an endto-end hierarchical attention model to generate abstracts from multi-document inputs. We compare our system to these baselines and report results on WCEP, DUC-04, and TAC-11 datasets 4 .", "cite_spans": [ { "start": 4, "end": 29, "text": "MAP (Fabbri et al., 2019)", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "Sequential vs. Reciprocal Endorsement. We investigate two endorsement patterns: (a) reciprocal endorsement allows any two documents of the same cluster to endorse each other, and (b) sequential endorsement arranges source documents in chronological order and only later documents are allowed to endorse earlier ones. The endorsement mechanism provides the flexibility needed for many domains to exploit cross-document relationships to generate abstractive summaries. For our variants, the highest-scoring sentences are consolidated to form an input document which, along with the endorsement scores, are passed to our endorsementaware abstractor to be condensed into a summary.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "Endorsement-Aware Abstractor. We employ BART, a state-of-the-art encoder-decoder model as our base abstractor (Lewis et al., 2020) . The model has 24 layers in the encoder and decoder, a hidden size of 1024, 16 heads, with a total of 406M parameters. It was fine-tuned on the train split of WCEP for an average of two epochs with a batch size of 4. We use the Adam optimizer (Kingma and Ba, 2015) and a learning rate of 3 \u22125 with warm-up. At inference time, we use a beam size of K=4, with a minimum decoding length of 10 and a maximum of 50 tokens. Our implementation is based on fairseq 5 and it takes about two hours to train the model on a NVIDIA V100 32GB GPU card.", "cite_spans": [ { "start": 110, "end": 130, "text": "(Lewis et al., 2020)", "ref_id": "BIBREF22" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "For the endorsement-aware abstractor, we add two sets of companion heads to the decoder, for a total of 48 attention heads. The \u03c4 values for each set of heads are 0/1/2. 12% of the tokens receive level-1 attention (\u03c4 = 1), 4% receive level-2 attention (\u03c4 = 2). 
The \u03bb \u03c4 values are set to be 0.8, 0.1, and 0.1-this gives more influence to the original attention heads, so the model is not confused by the addition of the new heads that attend to endorsed segments. We use a maximum of 1024 tokens for the input document.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "Synopsis-Document Endorsement. To enable soft alignment between a synopsis and a candidate document, we use BERTScore (Zhang et al., 2020b) with the following hash code: roberta-large_L17_no-idf_version=0.3.2(hug_trans=2.8.0)rescaled. It suggests that the token representations are drawn from the 17th layer of RoBERTa-large. Our maximum sum subarray algorithm requires the scores to contain a mix of positive/negative values. Thus, we subtract all scores by \u03b4. The \u03b4 values are 0.85 and 0.8 for the soft and hard alignment, respectively. These values are tuned on validation data, where a larger \u03b4 indicates fewer tokens will be endorsed.", "cite_spans": [ { "start": 118, "end": 139, "text": "(Zhang et al., 2020b)", "ref_id": "BIBREF42" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "We proceed by presenting summarization results on our datasets, including an ablation study to examine the contribution of each part of our method. We also present a case study showcasing the potential of our endorsement method.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "Our methods achieve state-of-the-art results when compared to previous work on WCEP's test set (Table 2). Sequential endorsement outperforms reciprocal endorsement due to the ability of sequential endorsement to remove redundancies introduced in later documents. In news domain, later articles generally review information from previous articles and introduce small developments in the story. By ordering the documents chronologically and having later articles give endorsement to earlier articles, it encourages the summarizer to pick content from earlier articles and reduce redundancy introduced in later articles. The largest performance increase can be seen in R-2, with Endorser-Sequential achieving a 9.7 increase over a BERTbased method. It demonstrates the effectiveness of endorsement for detecting salient segments and stitching them together to form a summary. We report experimental results on DUC-04 and TAC-11 datasets in Tables 3 and 5 . Here, our methods can outperform or perform comparably to previous summarization methods. On the WCEP test set, it corresponds to an in-domain scenario. On DUC-04 and TAC-11 test sets, it is an out-of-domain scenario. Due to data scarcity, the model can only be trained on the train split of WCEP and then tested on DUC/TAC datasets. The fact that our system, when used out-of-the-box, can attain better or comparable results to the previous state-of-the-art has demonstrated its strong generalization capability. It suggests that obtaining segment-level endorsement on an outside domain then using it to inform summary generation is meaningful.", "cite_spans": [], "ref_spans": [ { "start": 937, "end": 951, "text": "Tables 3 and 5", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "6.1" }, { "text": "We observe that the reciprocal endorsement strategy outperforms sequential endorsement for DUC-04 and TAC-11 test sets. 
A closer look at the data suggests that this is due to the lower amount of redundancy present in DUC/TAC data. While WCEP documents are automatically clustered and contain a substantial amount of repeated content, the DUC/TAC clusters exhibit much less cross-document repetition. Intuitively, we want to steer the model attention towards endorsed segments if they are of high quality, and away from those segments otherwise. We conduct a set of oracle experiments that set the \u03bb_\u03c4 values to be proportional to the R-2 recall scores of endorsed segments (Endorser-Oracle). If the segments obtained for \u03c4 = 2 yield a high R-2 recall score, they contain summary content and the model should attend to these endorsed segments by using a high \u03bb_\u03c4 value. Results are reported in Tables 3 and 5 . We find that such a strategy is effective for making the most of companion heads. Future work may associate attention (\u03bb_\u03c4 values) with the quality of segments obtained at different levels of endorsement (\u03c4 = {0, 1, 2}).", "cite_spans": [], "ref_spans": [ { "start": 779, "end": 794, "text": "Tables 3 and 5", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Results", "sec_num": "6.1" }, { "text": "We perform an ablation study on WCEP to examine the effect of each component in our model (Table 4) . First, we compare the endorsement methods, denoted by HardAlign and SoftAlign. SoftAlign achieves consistently better results, showing that it is important to allow flexibility when aligning synopses to documents for endorsement. Next, we remove several components from the best-performing model (SoftAlign) to understand the effect of each. Removing \"companion heads\" from the abstractive model results in a very small boost in performance. Removing \"endorsement selection\"-meaning the model uses no information gained from performing endorsement, and is simply a BART model trained to summarize documents-leads to a significant performance drop, especially in R-1. It suggests that using endorsement to identify summary-worthy content from multiple documents is beneficial for an abstractive model. Table 6 : An analysis of endorsed segments for a document. (a) A synopsis is generated from a candidate document. (b) The document also receives endorsement from the other 9 synopses in the cluster. (c) We compare to segments chosen by a human using the Pyramid method. Stronger highlighting indicates the segment received endorsement from many synopses.", "cite_spans": [], "ref_spans": [ { "start": 89, "end": 99, "text": "(Table 4)", "ref_id": "TABREF3" }, { "start": 825, "end": 832, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Ablation", "sec_num": "6.2" }, { "text": "Moreover, removing the \"abstractive model\"-meaning summaries are created extractively by selecting the highest-endorsed sentences-results in a large decrease in scores. It indicates that content selection by endorsement cannot be done alone without an abstractor to create a more concise summary. This is especially the case for WCEP, where human reference summaries are relatively short.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Ablation", "sec_num": "6.2" }, { "text": "We additionally report BERTScore (Zhang et al., 2020b) to evaluate summaries, complementing the ROUGE metric (Lin, 2004) . BERTScore uses cosine similarity between BERT contextual embeddings of words to detect word overlap between two texts, thus overcoming the problem of lexical variation in summarization. On DUC-04, the F1 scores are 29.89 and 30.14 for our sequential and reciprocal models, respectively. 
The score for human reference summary is 35.08. They show very similar trends to those in Table 3 , suggesting that our method when tested in out-of-domain scenarios can achieve competitive results.", "cite_spans": [ { "start": 33, "end": 54, "text": "(Zhang et al., 2020b)", "ref_id": "BIBREF42" }, { "start": 110, "end": 121, "text": "(Lin, 2004)", "ref_id": "BIBREF24" } ], "ref_spans": [ { "start": 501, "end": 508, "text": "Table 3", "ref_id": "TABREF2" } ], "eq_spans": [], "section": "Ablation", "sec_num": "6.2" }, { "text": "We present an in-depth analysis of our fine-grained endorsement in Table 6 . Soft alignment is used to endorse a candidate document from synopses of the cluster. We compare the resulting endorsements to the text segments chosen by a human using the Pyramid method (Nenkova and Passonneau, 2004) , where semantic content units (SCUs) are identified from the reference summaries and are matched to phrases in the candidate document. The segments selected by our endorsement method and those chosen by manual annotation show a great amount of overlap, exemplifying the strength of our method in locating salient content from multi-document inputs. In fact, our endorsement method draws strong parallels with the Pyramid method-in our case, sentences from the automatically-generated synopses act as SCUs, which are then matched to phrases in the candidate document using a soft or hard alignment.", "cite_spans": [ { "start": 264, "end": 294, "text": "(Nenkova and Passonneau, 2004)", "ref_id": "BIBREF32" } ], "ref_spans": [ { "start": 67, "end": 74, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "A Case Study", "sec_num": "6.3" }, { "text": "We observe that the endorsement given by a single synopsis is already quite similar to the human segments. However, taking the average endorsement from all ten synopses results in a higher quality set of segments. This shows the inherent value that exists from repetition in multi-document clusters, and it shows the importance of leveraging all of the documents rather than just a single one for salience estimation. Importantly, we observe that named entities, e.g., \"Sam Rainsy,\" \"King Norodom Sihanouk,\" are more readily endorsed than other phrases. These entities are frequently repeated verbatim in all of the documents, thereby increasing their likelihood of being endorsed.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Case Study", "sec_num": "6.3" }, { "text": "We envision future neural document summarization systems to produce better synopses than BART. They can lead to more accurate estimates for endorsed segments, hence improving the overall per-formance of our multi-document summarizer. The endorsement mechanism at its core is simple and robust-looking for shared content between a document and a synopsis. It provides great flexibility allowing the summarizer to potentially operate on document clusters containing a varying number of documents, which is a desirable characteristic.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "A Case Study", "sec_num": "6.3" }, { "text": "We presented a novel framework to model asynchronous endorsement between synopses and documents for multi-document abstractive summarization. We introduced an endorsement method to enrich the encoder-decoder model with fine-grained endorsement. 
Our method was evaluated on benchmark multi-document datasets and we discussed challenges and shed light on future research.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "7" }, { "text": "We omit the residual connection and layer normalization associated with each block for brevity.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We were unable to compare our method with hierarchical Transformers because the authors did not make their ranker available for ranking paragraphs.5 https://github.com/pytorch/fairseq", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We note that baseline summarizers use a maximum of 100 articles per cluster; these results are obtained from Gholipour Ghalandari et al. (2020). In contrast, our endorsement methods outperform the baselines with only 10 input articles per cluster.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "We are grateful to the anonymous reviewers for their helpful comments and suggestions. This research was supported in part by the National Science Foundation grant IIS-1909603. ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgments", "sec_num": null }, { "text": "(a) Single Synopsis Generated by BART Opposition leader Sam Rainsy seeks clarification of security guarantees promised by Hun Sen. Hun Sen announced a government guarantee of all politicians' safety Wednesday. The opposition leader was forced to take refuge in a U.N. office in September to avoid arrest. The two parties have formed three working groups to hammer out details of the agreement.(b) Endorsement from All Synopses Sam Rainsy, who earlier called Hun Sen's statement \"full of loopholes,\" asked Sihanouk for his help in obtaining a promise from Hun Sen that all members of the Sam Rainsy Party were free from prosecution for their political activities during and after last July's election. Sam Rainsy, a staunch critic of Hun Sen, was forced to take refuge in a U.N. office in September to avoid arrest after Hun Sen accused him of being behind a plot against his life. The alleged assassination attempt came during massive street demonstrations organized by the opposition after Hun Sen's Cambodian People's Party narrowly won the election. The opposition, alleging widespread fraud and intimidation, refused to accept the results of the polls. Fearing for their safety, Sam Rainsy and his then-ally Prince Norodom Ranariddh led an exodus of opposition lawmakers out of Cambodia after parliament was ceremonially opened in late September. Ranariddh, whose FUNCINPEC party finished a close second in the election, returned last week and struck a deal with Hun Sen to form a coalition government. The agreement will make Hun Sen prime minister and Ranariddh president of the National Assembly. The two parties have formed three working groups to hammer out details of the agreement, including the establishment of a Senate to be the upper house of parliament. 
Sok An, representing Hun Sen's party , said...", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "annex", "sec_num": null }, { "text": "Sam Rainsy, who earlier called Hun Sen's statement \"full of loopholes,\" asked Sihanouk for his help in obtaining a promise from Hun Sen that all members of the Sam Rainsy Party were free from prosecution for their political activities during and after last July's election. Sam Rainsy, a staunch critic of Hun Sen, was forced to take refuge in a U.N. office in September to avoid arrest after Hun Sen accused him of being behind a plot against his life. The alleged assassination attempt came during massive street demonstrations organized by the opposition after Hun Sen's Cambodian People's Party narrowly won the election. The opposition, alleging widespread fraud and intimidation, refused to accept the results of the polls. Fearing for their safety, Sam Rainsy and his then-ally Prince Norodom Ranariddh led an exodus of opposition lawmakers out of Cambodia after parliament was ceremonially opened in late September. Ranariddh, whose FUNCINPEC party finished a close second in the election, returned last week and struck a deal with Hun Sen to form a coalition government. The agreement will make Hun Sen prime minister and Ranariddh president of the National Assembly. The two parties have formed three working groups to hammer out details of the agreement, including the establishment of a Senate to be the upper house of parliament. Sok An, representing Hun Sen's party, said... ", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "(c) Human-Chosen Segments", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Jointly learning to extract and compress", "authors": [ { "first": "Taylor", "middle": [], "last": "Berg-Kirkpatrick", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Gillick", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "481--490", "other_ids": {}, "num": null, "urls": [], "raw_text": "Taylor Berg-Kirkpatrick, Dan Gillick, and Dan Klein. 2011. Jointly learning to extract and compress. In Proceedings of the 49th Annual Meeting of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, pages 481-490, Portland, Ore- gon, USA. Association for Computational Linguis- tics.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Abstractive multidocument summarization via phrase selection and merging", "authors": [ { "first": "Lidong", "middle": [], "last": "Bing", "suffix": "" }, { "first": "Piji", "middle": [], "last": "Li", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Liao", "suffix": "" }, { "first": "Wai", "middle": [], "last": "Lam", "suffix": "" }, { "first": "Weiwei", "middle": [], "last": "Guo", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Passonneau", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", "volume": "1", "issue": "", "pages": "1587--1597", "other_ids": { "DOI": [ "10.3115/v1/P15-1153" ] }, "num": null, "urls": [], "raw_text": "Lidong Bing, Piji Li, Yi Liao, Wai Lam, Weiwei Guo, and Rebecca Passonneau. 2015. 
Abstractive multi- document summarization via phrase selection and merging. In Proceedings of the 53rd Annual Meet- ing of the Association for Computational Linguistics and the 7th International Joint Conference on Natu- ral Language Processing (Volume 1: Long Papers), pages 1587-1597, Beijing, China. Association for Computational Linguistics.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Concept-based summarization using integer linear programming: From concept pruning to multiple optimal solutions", "authors": [ { "first": "Florian", "middle": [], "last": "Boudin", "suffix": "" }, { "first": "Hugo", "middle": [], "last": "Mougard", "suffix": "" }, { "first": "Benoit", "middle": [], "last": "Favre", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1914--1918", "other_ids": { "DOI": [ "10.18653/v1/D15-1220" ] }, "num": null, "urls": [], "raw_text": "Florian Boudin, Hugo Mougard, and Benoit Favre. 2015. Concept-based summarization using integer linear programming: From concept pruning to mul- tiple optimal solutions. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1914-1918, Lisbon, Portu- gal. Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Few-shot learning for opinion summarization", "authors": [ { "first": "Arthur", "middle": [], "last": "Bra\u017einskas", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" }, { "first": "Ivan", "middle": [], "last": "Titov", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "4119--4135", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.337" ] }, "num": null, "urls": [], "raw_text": "Arthur Bra\u017einskas, Mirella Lapata, and Ivan Titov. 2020. Few-shot learning for opinion summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4119-4135, Online. Association for Computa- tional Linguistics.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Fast abstractive summarization with reinforce-selected sentence rewriting", "authors": [ { "first": "Yen-Chun", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "675--686", "other_ids": { "DOI": [ "10.18653/v1/P18-1063" ] }, "num": null, "urls": [], "raw_text": "Yen-Chun Chen and Mohit Bansal. 2018. Fast abstrac- tive summarization with reinforce-selected sentence rewriting. In Proceedings of the 56th Annual Meet- ing of the Association for Computational Linguis- tics (Volume 1: Long Papers), pages 675-686, Mel- bourne, Australia. 
Association for Computational Linguistics.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Improving the similarity measure of determinantal point processes for extractive multidocument summarization", "authors": [ { "first": "Sangwoo", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Logan", "middle": [], "last": "Lebanoff", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Foroosh", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "1027--1038", "other_ids": { "DOI": [ "10.18653/v1/P19-1098" ] }, "num": null, "urls": [], "raw_text": "Sangwoo Cho, Logan Lebanoff, Hassan Foroosh, and Fei Liu. 2019a. Improving the similarity measure of determinantal point processes for extractive multi- document summarization. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 1027-1038, Florence, Italy. Association for Computational Linguistics.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Multi-document summarization with determinantal point processes and contextualized representations", "authors": [ { "first": "Sangwoo", "middle": [], "last": "Cho", "suffix": "" }, { "first": "Chen", "middle": [], "last": "Li", "suffix": "" }, { "first": "Dong", "middle": [], "last": "Yu", "suffix": "" }, { "first": "Hassan", "middle": [], "last": "Foroosh", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on New Frontiers in Summarization", "volume": "", "issue": "", "pages": "98--103", "other_ids": { "DOI": [ "10.18653/v1/D19-5412" ] }, "num": null, "urls": [], "raw_text": "Sangwoo Cho, Chen Li, Dong Yu, Hassan Foroosh, and Fei Liu. 2019b. Multi-document summariza- tion with determinantal point processes and con- textualized representations. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 98-103, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "What does BERT look at? an analysis of BERT's attention", "authors": [ { "first": "Kevin", "middle": [], "last": "Clark", "suffix": "" }, { "first": "Urvashi", "middle": [], "last": "Khandelwal", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP", "volume": "", "issue": "", "pages": "276--286", "other_ids": { "DOI": [ "10.18653/v1/W19-4828" ] }, "num": null, "urls": [], "raw_text": "Kevin Clark, Urvashi Khandelwal, Omer Levy, and Christopher D. Manning. 2019. What does BERT look at? an analysis of BERT's attention. In Pro- ceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 276-286, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Unsupervised aspect-based multi-document abstractive summarization", "authors": [ { "first": "Maximin", "middle": [], "last": "Coavoux", "suffix": "" }, { "first": "Hady", "middle": [], "last": "Elsahar", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Gall\u00e9", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2nd Workshop on New Frontiers in Summarization", "volume": "", "issue": "", "pages": "42--47", "other_ids": { "DOI": [ "10.18653/v1/D19-5405" ] }, "num": null, "urls": [], "raw_text": "Maximin Coavoux, Hady Elsahar, and Matthias Gall\u00e9. 2019. Unsupervised aspect-based multi-document abstractive summarization. In Proceedings of the 2nd Workshop on New Frontiers in Summarization, pages 42-47, Hong Kong, China. Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Adaptively sparse transformers", "authors": [ { "first": "M", "middle": [], "last": "Gon\u00e7alo", "suffix": "" }, { "first": "Vlad", "middle": [], "last": "Correia", "suffix": "" }, { "first": "Andr\u00e9", "middle": [ "F T" ], "last": "Niculae", "suffix": "" }, { "first": "", "middle": [], "last": "Martins", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "2174--2184", "other_ids": { "DOI": [ "10.18653/v1/D19-1223" ] }, "num": null, "urls": [], "raw_text": "Gon\u00e7alo M. Correia, Vlad Niculae, and Andr\u00e9 F. T. Martins. 2019. Adaptively sparse transformers. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 2174- 2184, Hong Kong, China. Association for Computa- tional Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Generic sentence fusion is an ill-defined summarization task", "authors": [ { "first": "Hal", "middle": [], "last": "Daume", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out", "volume": "", "issue": "", "pages": "96--103", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hal Daume III and Daniel Marcu. 2004. Generic sen- tence fusion is an ill-defined summarization task. In Text Summarization Branches Out, pages 96-103, Barcelona, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Long Papers)", "authors": [], "year": null, "venue": "", "volume": "1", "issue": "", "pages": "1760--1769", "other_ids": {}, "num": null, "urls": [], "raw_text": "Volume 1 (Long Papers), pages 1760-1769, New Orleans, Louisiana. Association for Computational Linguistics.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Repetition and memory", "authors": [ { "first": "Douglas", "middle": [ "L" ], "last": "Hintzman", "suffix": "" } ], "year": 1976, "venue": "Psychology of Learning and Motivation", "volume": "10", "issue": "", "pages": "47--91", "other_ids": {}, "num": null, "urls": [], "raw_text": "Douglas L. Hintzman. 1976. Repetition and memory. 
Psychology of Learning and Motivation, 10:47-91.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "A repository of state of the art and competitive baseline summaries for generic news summarization", "authors": [ { "first": "Kai", "middle": [], "last": "Hong", "suffix": "" }, { "first": "M", "middle": [], "last": "John", "suffix": "" }, { "first": "Benoit", "middle": [], "last": "Conroy", "suffix": "" }, { "first": "Alex", "middle": [], "last": "Favre", "suffix": "" }, { "first": "Hui", "middle": [], "last": "Kulesza", "suffix": "" }, { "first": "Ani", "middle": [], "last": "Lin", "suffix": "" }, { "first": "", "middle": [], "last": "Nenkova", "suffix": "" } ], "year": 2014, "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kai Hong, John M Conroy, Benoit Favre, Alex Kulesza, Hui Lin, and Ani Nenkova. 2014. A repository of state of the art and competitive baseline summaries for generic news summarization. In Proceedings of the Ninth International Conference on Language Re- sources and Evaluation (LREC).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Efficient attentions for long document summarization", "authors": [ { "first": "Luyang", "middle": [], "last": "Huang", "suffix": "" }, { "first": "Shuyang", "middle": [], "last": "Cao", "suffix": "" }, { "first": "Nikolaus", "middle": [], "last": "Parulian", "suffix": "" }, { "first": "Ji", "middle": [], "last": "Heng", "suffix": "" }, { "first": "Lu", "middle": [], "last": "Wang", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "1419--1436", "other_ids": { "DOI": [ "10.18653/v1/2021.naacl-main.112" ] }, "num": null, "urls": [], "raw_text": "Luyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, and Lu Wang. 2021. Efficient attentions for long document summarization. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, pages 1419-1436, On- line. Association for Computational Linguistics.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Abstractive multidocument summarization via joint learning with single-document summarization", "authors": [ { "first": "Hanqi", "middle": [], "last": "Jin", "suffix": "" }, { "first": "Xiaojun", "middle": [], "last": "Wan", "suffix": "" } ], "year": 2020, "venue": "Findings of the Association for Computational Linguistics: EMNLP 2020", "volume": "", "issue": "", "pages": "2545--2554", "other_ids": { "DOI": [ "10.18653/v1/2020.findings-emnlp.231" ] }, "num": null, "urls": [], "raw_text": "Hanqi Jin and Xiaojun Wan. 2020. Abstractive multi- document summarization via joint learning with single-document summarization. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2545-2554, Online. 
Association for Computational Linguistics.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Adam: A method for stochastic optimization", "authors": [ { "first": "P", "middle": [], "last": "Diederik", "suffix": "" }, { "first": "Jimmy", "middle": [], "last": "Kingma", "suffix": "" }, { "first": "", "middle": [], "last": "Ba", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In Proceedings of the International Conference on Learning Repre- sentations (ICLR).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Neural text summarization: A critical evaluation", "authors": [ { "first": "Wojciech", "middle": [], "last": "Kryscinski", "suffix": "" }, { "first": "Nitish", "middle": [], "last": "Shirish Keskar", "suffix": "" }, { "first": "Bryan", "middle": [], "last": "Mc-Cann", "suffix": "" }, { "first": "Caiming", "middle": [], "last": "Xiong", "suffix": "" }, { "first": "Richard", "middle": [], "last": "Socher", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "540--551", "other_ids": { "DOI": [ "10.18653/v1/D19-1051" ] }, "num": null, "urls": [], "raw_text": "Wojciech Kryscinski, Nitish Shirish Keskar, Bryan Mc- Cann, Caiming Xiong, and Richard Socher. 2019. Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 540- 551, Hong Kong, China. Association for Computa- tional Linguistics.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Determinantal Point Processes for Machine Learning", "authors": [ { "first": "Alex", "middle": [], "last": "Kulesza", "suffix": "" }, { "first": "Ben", "middle": [], "last": "Taskar", "suffix": "" } ], "year": 2012, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alex Kulesza and Ben Taskar. 2012. Determinantal Point Processes for Machine Learning. Now Pub- lishers Inc.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "The summary loop: Learning to write abstractive summaries without examples", "authors": [ { "first": "Philippe", "middle": [], "last": "Laban", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Hsi", "suffix": "" }, { "first": "John", "middle": [], "last": "Canny", "suffix": "" }, { "first": "Marti", "middle": [ "A" ], "last": "Hearst", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5135--5150", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.460" ] }, "num": null, "urls": [], "raw_text": "Philippe Laban, Andrew Hsi, John Canny, and Marti A. Hearst. 2020. The summary loop: Learning to write abstractive summaries without examples. In Pro- ceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5135- 5150, Online. 
Association for Computational Lin- guistics.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Adapting the neural encoder-decoder framework from single to multi-document summarization", "authors": [ { "first": "Logan", "middle": [], "last": "Lebanoff", "suffix": "" }, { "first": "Kaiqiang", "middle": [], "last": "Song", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "4131--4141", "other_ids": { "DOI": [ "10.18653/v1/D18-1446" ] }, "num": null, "urls": [], "raw_text": "Logan Lebanoff, Kaiqiang Song, and Fei Liu. 2018. Adapting the neural encoder-decoder framework from single to multi-document summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4131-4141, Brussels, Belgium. Association for Computational Linguistics.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "BART: Denoising sequence-to-sequence pretraining for natural language generation, translation, and comprehension", "authors": [ { "first": "Mike", "middle": [], "last": "Lewis", "suffix": "" }, { "first": "Yinhan", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Naman", "middle": [], "last": "Goyal ; Abdelrahman Mohamed", "suffix": "" }, { "first": "Omer", "middle": [], "last": "Levy", "suffix": "" }, { "first": "Veselin", "middle": [], "last": "Stoyanov", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "7871--7880", "other_ids": { "DOI": [ "10.18653/v1/2020.acl-main.703" ] }, "num": null, "urls": [], "raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Veselin Stoyanov, and Luke Zettlemoyer. 2020. BART: Denoising sequence-to-sequence pre- training for natural language generation, translation, and comprehension. In Proceedings of the 58th An- nual Meeting of the Association for Computational Linguistics, pages 7871-7880, Online. Association for Computational Linguistics.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Leveraging large pretrained models for WebNLG 2020", "authors": [ { "first": "Xintong", "middle": [], "last": "Li", "suffix": "" }, { "first": "Aleksandre", "middle": [], "last": "Maskharashvili", "suffix": "" }, { "first": "Symon Jory", "middle": [], "last": "Stevens-Guille", "suffix": "" }, { "first": "Michael", "middle": [], "last": "White", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 3rd International Workshop on Natural Language Generation from the Semantic Web (WebNLG+)", "volume": "", "issue": "", "pages": "117--124", "other_ids": {}, "num": null, "urls": [], "raw_text": "Xintong Li, Aleksandre Maskharashvili, Symon Jory Stevens-Guille, and Michael White. 2020. Lever- aging large pretrained models for WebNLG 2020. In Proceedings of the 3rd International Workshop on Natural Language Generation from the Seman- tic Web (WebNLG+), pages 117-124, Dublin, Ire- land (Virtual). 
Association for Computational Lin- guistics.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "ROUGE: A package for automatic evaluation of summaries", "authors": [ { "first": "Chin-Yew", "middle": [], "last": "Lin", "suffix": "" } ], "year": 2004, "venue": "Text Summarization Branches Out", "volume": "", "issue": "", "pages": "74--81", "other_ids": {}, "num": null, "urls": [], "raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "A class of submodular functions for document summarization", "authors": [ { "first": "Hui", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Bilmes", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "510--520", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hui Lin and Jeff Bilmes. 2011. A class of submodu- lar functions for document summarization. In Pro- ceedings of the 49th Annual Meeting of the Associ- ation for Computational Linguistics: Human Lan- guage Technologies, pages 510-520, Portland, Ore- gon, USA. Association for Computational Linguis- tics.", "links": null }, "BIBREF26": { "ref_id": "b26", "title": "Hierarchical transformers for multi-document summarization", "authors": [ { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5070--5081", "other_ids": { "DOI": [ "10.18653/v1/P19-1500" ] }, "num": null, "urls": [], "raw_text": "Yang Liu and Mirella Lapata. 2019. Hierarchical trans- formers for multi-document summarization. In Pro- ceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 5070- 5081, Florence, Italy. Association for Computa- tional Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Multi-document summarization with maximal marginal relevance-guided reinforcement learning", "authors": [ { "first": "Yuning", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Yanru", "middle": [], "last": "Qu", "suffix": "" }, { "first": "Yiqing", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1737--1751", "other_ids": { "DOI": [ "10.18653/v1/2020.emnlp-main.136" ] }, "num": null, "urls": [], "raw_text": "Yuning Mao, Yanru Qu, Yiqing Xie, Xiang Ren, and Jiawei Han. 2020. Multi-document summarization with maximal marginal relevance-guided reinforce- ment learning. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1737-1751, Online. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "TextRank: Bringing order into text", "authors": [ { "first": "Rada", "middle": [], "last": "Mihalcea", "suffix": "" }, { "first": "Paul", "middle": [], "last": "Tarau", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "404--411", "other_ids": {}, "num": null, "urls": [], "raw_text": "Rada Mihalcea and Paul Tarau. 2004. TextRank: Bringing order into text. In Proceedings of the 2004 Conference on Empirical Methods in Natural Lan- guage Processing, pages 404-411, Barcelona, Spain. Association for Computational Linguistics.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Former salesforce chief scientist announces new search engine to take on google", "authors": [ { "first": "Ron", "middle": [], "last": "Miller", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ron Miller. 2020. Former salesforce chief sci- entist announces new search engine to take on google. https://techcrunch.com/2020/12/08/former- salesforce-chief-scientist-announces-new-search- engine-to-take-on-google/.", "links": null }, "BIBREF30": { "ref_id": "b30", "title": "Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization", "authors": [ { "first": "Shashi", "middle": [], "last": "Narayan", "suffix": "" }, { "first": "Shay", "middle": [ "B" ], "last": "Cohen", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "1797--1807", "other_ids": { "DOI": [ "10.18653/v1/D18-1206" ] }, "num": null, "urls": [], "raw_text": "Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for ex- treme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1797-1807, Brussels, Bel- gium. Association for Computational Linguistics.", "links": null }, "BIBREF31": { "ref_id": "b31", "title": "Abstractive unsupervised multidocument summarization using paraphrastic sentence fusion", "authors": [ { "first": "Tanvir", "middle": [], "last": "Mir Tafseer Nayeem", "suffix": "" }, { "first": "Yllias", "middle": [], "last": "Ahmed Fuad", "suffix": "" }, { "first": "", "middle": [], "last": "Chali", "suffix": "" } ], "year": 2018, "venue": "Proceedings of the 27th International Conference on Computational Linguistics", "volume": "", "issue": "", "pages": "1191--1204", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mir Tafseer Nayeem, Tanvir Ahmed Fuad, and Yl- lias Chali. 2018. Abstractive unsupervised multi- document summarization using paraphrastic sen- tence fusion. In Proceedings of the 27th Inter- national Conference on Computational Linguistics, pages 1191-1204, Santa Fe, New Mexico, USA. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF32": { "ref_id": "b32", "title": "Evaluating content selection in summarization: The pyramid method", "authors": [ { "first": "Ani", "middle": [], "last": "Nenkova", "suffix": "" }, { "first": "Rebecca", "middle": [], "last": "Passonneau", "suffix": "" } ], "year": 2004, "venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL 2004", "volume": "", "issue": "", "pages": "145--152", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ani Nenkova and Rebecca Passonneau. 2004. Evaluat- ing content selection in summarization: The pyra- mid method. In Proceedings of the Human Lan- guage Technology Conference of the North Ameri- can Chapter of the Association for Computational Linguistics: HLT-NAACL 2004, pages 145-152, Boston, Massachusetts, USA. Association for Com- putational Linguistics.", "links": null }, "BIBREF33": { "ref_id": "b33", "title": "Generating summaries with topic templates and structured convolutional decoders", "authors": [ { "first": "Laura", "middle": [], "last": "Perez-Beltrachini", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Mirella", "middle": [], "last": "Lapata", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5107--5116", "other_ids": { "DOI": [ "10.18653/v1/P19-1504" ] }, "num": null, "urls": [], "raw_text": "Laura Perez-Beltrachini, Yang Liu, and Mirella Lapata. 2019. Generating summaries with topic templates and structured convolutional decoders. In Proceed- ings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5107-5116, Florence, Italy. Association for Computational Lin- guistics.", "links": null }, "BIBREF34": { "ref_id": "b34", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeff", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeff Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. 
Language models are unsupervised multitask learners.", "links": null }, "BIBREF35": { "ref_id": "b35", "title": "Exploring the limits of transfer learning with a unified text-totext transformer", "authors": [ { "first": "Colin", "middle": [], "last": "Raffel", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Adam", "middle": [], "last": "Roberts", "suffix": "" }, { "first": "Katherine", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Sharan", "middle": [], "last": "Narang", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Matena", "suffix": "" }, { "first": "Yanqi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Wei", "middle": [], "last": "Li", "suffix": "" }, { "first": "Peter", "middle": [ "J" ], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Journal of Machine Learning Research", "volume": "21", "issue": "140", "pages": "1--67", "other_ids": {}, "num": null, "urls": [], "raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Kather- ine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to- text transformer. Journal of Machine Learning Re- search, 21(140):1-67.", "links": null }, "BIBREF36": { "ref_id": "b36", "title": "A primer in bertology: What we know about how bert works", "authors": [ { "first": "Anna", "middle": [], "last": "Rogers", "suffix": "" }, { "first": "Olga", "middle": [], "last": "Kovaleva", "suffix": "" }, { "first": "Anna", "middle": [], "last": "Rumshisky", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in bertology: What we know about how bert works.", "links": null }, "BIBREF37": { "ref_id": "b37", "title": "Get to the point: Summarization with pointergenerator networks", "authors": [ { "first": "Abigail", "middle": [], "last": "See", "suffix": "" }, { "first": "J", "middle": [], "last": "Peter", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Liu", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2017, "venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics", "volume": "1", "issue": "", "pages": "1073--1083", "other_ids": { "DOI": [ "10.18653/v1/P17-1099" ] }, "num": null, "urls": [], "raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073- 1083, Vancouver, Canada. 
Association for Computa- tional Linguistics.", "links": null }, "BIBREF38": { "ref_id": "b38", "title": "Attention is all you need", "authors": [ { "first": "Ashish", "middle": [], "last": "Vaswani", "suffix": "" }, { "first": "Noam", "middle": [], "last": "Shazeer", "suffix": "" }, { "first": "Niki", "middle": [], "last": "Parmar", "suffix": "" }, { "first": "Jakob", "middle": [], "last": "Uszkoreit", "suffix": "" }, { "first": "Llion", "middle": [], "last": "Jones", "suffix": "" }, { "first": "Aidan", "middle": [ "N" ], "last": "Gomez", "suffix": "" }, { "first": "Illia", "middle": [], "last": "Kaiser", "suffix": "" }, { "first": "", "middle": [], "last": "Polosukhin", "suffix": "" } ], "year": 2017, "venue": "Advances in Neural Information Processing Systems", "volume": "30", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141 ukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In I. Guyon, U. V. Luxburg, S. Bengio, H. Wallach, R. Fergus, S. Vishwanathan, and R. Gar- nett, editors, Advances in Neural Information Pro- cessing Systems 30.", "links": null }, "BIBREF39": { "ref_id": "b39", "title": "Demoting the lead bias in news summarization via alternating adversarial learning", "authors": [ { "first": "Linzi", "middle": [], "last": "Xing", "suffix": "" }, { "first": "Wen", "middle": [], "last": "Xiao", "suffix": "" }, { "first": "Giuseppe", "middle": [], "last": "Carenini", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "2", "issue": "", "pages": "948--954", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-short.119" ] }, "num": null, "urls": [], "raw_text": "Linzi Xing, Wen Xiao, and Giuseppe Carenini. 2021. Demoting the lead bias in news summarization via alternating adversarial learning. In Proceedings of the 59th Annual Meeting of the Association for Com- putational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 2: Short Papers), pages 948-954, Online. Association for Computational Linguistics.", "links": null }, "BIBREF40": { "ref_id": "b40", "title": "Dissecting generation modes for abstractive summarization models via ablation and attribution", "authors": [ { "first": "Jiacheng", "middle": [], "last": "Xu", "suffix": "" }, { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" } ], "year": 2021, "venue": "Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "6925--6940", "other_ids": { "DOI": [ "10.18653/v1/2021.acl-long.539" ] }, "num": null, "urls": [], "raw_text": "Jiacheng Xu and Greg Durrett. 2021. Dissecting gen- eration modes for abstractive summarization models via ablation and attribution. In Proceedings of the 59th Annual Meeting of the Association for Compu- tational Linguistics and the 11th International Joint Conference on Natural Language Processing (Vol- ume 1: Long Papers), pages 6925-6940, Online. 
As- sociation for Computational Linguistics.", "links": null }, "BIBREF41": { "ref_id": "b41", "title": "PEGASUS: Pre-training with extracted gap-sentences for abstractive summarization", "authors": [ { "first": "Jingqing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Yao", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Mohammad", "middle": [], "last": "Saleh", "suffix": "" }, { "first": "Peter", "middle": [], "last": "Liu", "suffix": "" } ], "year": 2020, "venue": "Proceedings of the 37th International Conference on Machine Learning", "volume": "119", "issue": "", "pages": "11328--11339", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Pe- ter Liu. 2020a. PEGASUS: Pre-training with ex- tracted gap-sentences for abstractive summarization. In Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 11328-11339. PMLR.", "links": null }, "BIBREF42": { "ref_id": "b42", "title": "Bertscore: Evaluating text generation with bert", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Varsha", "middle": [], "last": "Kishore", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Kilian", "middle": [ "Q" ], "last": "Weinberger", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2020, "venue": "International Conference on Learning Representations", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020b. Bertscore: Evaluating text generation with bert. In Interna- tional Conference on Learning Representations.", "links": null }, "BIBREF43": { "ref_id": "b43", "title": "HI-BERT: Document level pre-training of hierarchical bidirectional transformers for document summarization", "authors": [ { "first": "Xingxing", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Furu", "middle": [], "last": "Wei", "suffix": "" }, { "first": "Ming", "middle": [], "last": "Zhou", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "5059--5069", "other_ids": { "DOI": [ "10.18653/v1/P19-1499" ] }, "num": null, "urls": [], "raw_text": "Xingxing Zhang, Furu Wei, and Ming Zhou. 2019. HI- BERT: Document level pre-training of hierarchical bidirectional transformers for document summariza- tion. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5059-5069, Florence, Italy. 
Association for Computational Linguistics.", "links": null }, "BIBREF44": { "ref_id": "b44", "title": "MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance", "authors": [ { "first": "Wei", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Maxime", "middle": [], "last": "Peyrard", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Christian", "middle": [ "M" ], "last": "Meyer", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)", "volume": "", "issue": "", "pages": "563--578", "other_ids": { "DOI": [ "10.18653/v1/D19-1053" ] }, "num": null, "urls": [], "raw_text": "Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Chris- tian M. Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized em- beddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th Interna- tional Joint Conference on Natural Language Pro- cessing (EMNLP-IJCNLP), pages 563-578, Hong Kong, China. Association for Computational Lin- guistics.", "links": null } }, "ref_entries": { "FIGREF0": { "text": "An example of synopsis-document relationships.", "uris": null, "num": null, "type_str": "figure" }, "TABREF0": { "html": null, "content": "
System | R-1 | R-2 | R-SU4
Extractive
Random Lead | 27.6 | 9.1 | -
Random | 18.1 | 3.0 | -
TextRank | 34.1 | 13.1 | -
Centroid | 34.1 | 13.3 | -
Submodular | 34.4 | 13.1 | -
TSR | 35.3 | 13.7 | -
BertReg | 35.0 | 13.5 | -
Our Method (In-Domain)
Endorser-Reciprocal | 43.3 | 21.9 | 22.1
Endorser-Sequential | 45.4 | 23.2 | 23.5
Table 2: A comparison of multi-document summarizers on WCEP's test set.
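For readers unfamiliar with the column headers, R-1, R-2, and R-SU4 denote ROUGE-1, ROUGE-2, and ROUGE-SU4 scores (Lin, 2004). The sketch below is only a minimal illustration of n-gram-overlap ROUGE scoring; it is not the official ROUGE-1.5.5 toolkit used for reporting, it omits stemming, stopword handling, and ROUGE-SU4 skip-bigrams, and the example strings are invented, so its outputs will not reproduce the numbers in the table.

```python
# Minimal illustrative sketch of ROUGE-N scoring as n-gram overlap F1.
# NOT the official ROUGE-1.5.5 toolkit; intended only to show what the
# R-1 / R-2 columns measure at a high level.
from collections import Counter

def ngrams(tokens, n):
    """Return a multiset (Counter) of n-grams from a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=1):
    """Compute ROUGE-N precision, recall, and F1 for one candidate/reference pair."""
    cand = ngrams(candidate.lower().split(), n)
    ref = ngrams(reference.lower().split(), n)
    if not cand or not ref:
        return 0.0, 0.0, 0.0
    overlap = sum((cand & ref).values())  # clipped n-gram matches
    p = overlap / sum(cand.values())
    r = overlap / sum(ref.values())
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Toy usage (invented strings, not from the WCEP data):
p, r, f = rouge_n("hun sen and ranariddh agreed to form a coalition government",
                  "hun sen and ranariddh formed a coalition government", n=2)
print(f"ROUGE-2  P={p:.3f}  R={r:.3f}  F={f:.3f}")
```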
", "type_str": "table", "text": "shows the percentage of tokens that receive different levels of attention:", "num": null }, "TABREF2": { "html": null, "content": "
: A comparison of multi-document summarizers on the DUC-04 dataset. Endorser-* are our methods.
System | R-1 | R-2 | R-SU4
Endorser-HardAlign | 44.7 | 22.4 | 22.6
Endorser-SoftAlign | 45.4 | 23.2 | 23.5
- companion heads | 45.8 | 23.5 | 23.8
- endorse selection | 43.6 | 23.0 | 22.9
- abstractive module | 28.3 | 9.3 | 10.9
", "type_str": "table", "text": "", "num": null }, "TABREF3": { "html": null, "content": "", "type_str": "table", "text": "Ablation study on WCEP dataset.", "num": null }, "TABREF5": { "html": null, "content": "
: A comparison of multi-document summarizers on the
TAC-11 test set. Endorser-* are our methods.
much redundancy, source documents of DUC/TAC are manually selected by NIST assessors; each successive document in a topic cluster presents new developments about the topic. Thus, reciprocal endorsement may lead to better results for domains with less redundancy.
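To make this contrast concrete, the sketch below shows one way reciprocal endorsement could be tallied: every document's synopsis scores the segments of every other document, so information repeated across the collection accumulates higher endorsement counts. The lexical-overlap scoring, the stopword list, the threshold, and the function names are illustrative assumptions for this sketch only; they are not the paper's actual synopsis generation or hard/soft alignment models.

```python
# Minimal sketch of reciprocal endorsement counting, assuming (hypothetically)
# that a synopsis "endorses" a segment of another document when they share
# enough content words.  Intended only to illustrate why redundant collections
# yield stronger endorsement signals than curated, low-redundancy clusters.
from typing import List

def content_words(text: str) -> set:
    # Tiny illustrative stopword list; a real system would use a proper one.
    stop = {"the", "a", "an", "of", "to", "and", "in", "on", "for", "is", "was"}
    return {w for w in text.lower().split() if w not in stop}

def endorsement_scores(documents: List[List[str]],
                       synopses: List[str],
                       threshold: float = 0.3) -> List[List[int]]:
    """For each document (a list of text segments), count how many *other*
    documents' synopses endorse each segment (reciprocal endorsement)."""
    scores = []
    for i, segments in enumerate(documents):
        doc_scores = []
        for seg in segments:
            seg_words = content_words(seg)
            count = 0
            for j, syn in enumerate(synopses):
                if i == j or not seg_words:
                    continue  # skip the document's own synopsis
                overlap = len(seg_words & content_words(syn)) / len(seg_words)
                count += overlap >= threshold
            doc_scores.append(count)
        scores.append(doc_scores)
    return scores
```

Under such a scheme, collections with heavy cross-document repetition (e.g., automatically aggregated news) would produce strong endorsement counts, whereas manually curated DUC/TAC clusters with little repetition would not, which is consistent with the observation above.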
", "type_str": "table", "text": "", "num": null } } } }