{ "paper_id": "2020", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T07:28:25.871616Z" }, "title": "Task Proposal: Abstractive Snippet Generation for Web Pages", "authors": [ { "first": "Shahbaz", "middle": [], "last": "Syed", "suffix": "", "affiliation": { "laboratory": "", "institution": "Leipzig University \u2020 Paderborn University * * Martin-Luther-Universit\u00e4t Halle-Wittenberg \u2021 Bauhaus-Universit\u00e4t Weimar", "location": {} }, "email": "" }, { "first": "Wei-Fan", "middle": [], "last": "Chen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Leipzig University \u2020 Paderborn University * * Martin-Luther-Universit\u00e4t Halle-Wittenberg \u2021 Bauhaus-Universit\u00e4t Weimar", "location": {} }, "email": "" }, { "first": "Matthias", "middle": [], "last": "Hagen", "suffix": "", "affiliation": { "laboratory": "", "institution": "Leipzig University \u2020 Paderborn University * * Martin-Luther-Universit\u00e4t Halle-Wittenberg \u2021 Bauhaus-Universit\u00e4t Weimar", "location": {} }, "email": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "", "affiliation": { "laboratory": "", "institution": "Leipzig University \u2020 Paderborn University * * Martin-Luther-Universit\u00e4t Halle-Wittenberg \u2021 Bauhaus-Universit\u00e4t Weimar", "location": {} }, "email": "" }, { "first": "Henning", "middle": [], "last": "Wachsmuth", "suffix": "", "affiliation": { "laboratory": "", "institution": "Leipzig University \u2020 Paderborn University * * Martin-Luther-Universit\u00e4t Halle-Wittenberg \u2021 Bauhaus-Universit\u00e4t Weimar", "location": {} }, "email": "" }, { "first": "Martin", "middle": [], "last": "Potthast", "suffix": "", "affiliation": { "laboratory": "", "institution": "Leipzig University \u2020 Paderborn University * * Martin-Luther-Universit\u00e4t Halle-Wittenberg \u2021 Bauhaus-Universit\u00e4t Weimar", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, 
"abstract": "We propose a shared task on abstractive snippet generation for web pages, a novel task of generating query-biased abstractive summaries for documents that are to be shown on a search results page. Conventional snippets are extractive in nature, which recently gave rise to copyright claims from news publishers as well as a new copyright legislation being passed in the European Union, limiting the fair use of web page contents for snippets. At the same time, abstractive summarization has matured considerably in recent years, potentially allowing for more personalization of snippets in the future. Taken together, these facts render further research into generating abstractive snippets both timely and promising.", "pdf_parse": { "paper_id": "2020", "_pdf_hash": "", "abstract": [ { "text": "We propose a shared task on abstractive snippet generation for web pages, a novel task of generating query-biased abstractive summaries for documents that are to be shown on a search results page. Conventional snippets are extractive in nature, which recently gave rise to copyright claims from news publishers as well as a new copyright legislation being passed in the European Union, limiting the fair use of web page contents for snippets. At the same time, abstractive summarization has matured considerably in recent years, potentially allowing for more personalization of snippets in the future. Taken together, these facts render further research into generating abstractive snippets both timely and promising.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "The task of abstractive snippet generation can be defined as follows: Given a user query and a web page, generate an abstractive summary of the web page's content that conveys how it relates to the user's information need. 
A key aspect of this task is that the summary should be abstractive in nature, i.e., text reuse should be avoided as much as possible, whereas named entities and other facts whose phrasing cannot be changed should be retained. We perform both automatic and manual evaluation (via crowdsourcing) of the submitted models. To ensure reproducibility and blind evaluation, we employ the cloud-based evaluation platform TIRA, 1 which facilitates software submission and blind evaluation on a hidden test set that is otherwise inaccessible. This ensures that participants cannot unwittingly optimize their approach against the test set. All data and code developed as part of the shared task will be publicly shared after its termination. 1 TIRA, https://www.tira.io", "cite_spans": [ { "start": 978, "end": 979, "text": "1", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Task Overview", "sec_num": "1" }, { "text": "A snippet on a search results page is a short summary accompanying each retrieved web page for a given user query. Conventionally, snippets are extractive summaries: a few sentences containing the query's terms are extracted from the web page and arranged for display. With the query's terms highlighted in bold, one may decide at a glance whether a given web page merits further investigation with respect to one's information need. However, despite the relative ease with which snippets can be generated, and their near-universal use in modern web search engines, extractive snippets have limitations in terms of their expressiveness. 
More urgently, they have also increasingly become the subject of copyright disputes and claims, 2 in particular from news publishers who lobbied for an ancillary copyright, rendering extractive snippets from news almost or even entirely infeasible in some countries.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2" }, { "text": "As a way forward, we envision abstractive snippets as an alternative (Potthast et al., 2018). An abstractive snippet is a query-biased abstractive summary of a web page that has minimal text reuse. Abstractive snippets have been shown to be on a par with extractive ones in terms of enabling search engine users to identify relevant results (Chen et al., 2018). Further, they are ideally suited to advance the explainability and personalization of search results. Explanations may include reasons for ranking a page high or low, and personalization may hint either at information that the user has not seen elsewhere, or at adaptations to the user's experience level on a given subject. Owing to the recent advances in abstractive summarization (Lin and Ng, 2019) and text synthesis technology (Radford et al., 2019), we believe this is the right time to delve into abstractive snippet generation. To the best of our knowledge, abstractive snippets have not yet been adopted for commercial use. Table 1: Query: Treasury of Humor. Snippet (anchor context): Asimov, on the other hand, proposes (in his first jokebook, Treasury of Humor) that the essence of humour is anticlimax: an abrupt change in point of view, in which trivial matters are suddenly elevated in importance above those that would normally be far more important. Document: [ . . . ] Treasury of Humor is unique in that in addition to being a working joke book, it is a treatise on the theory of humor, propounding Asimov's theory that the essence of humor is an abrupt, jarring change in emphasis and/or point of view, moving from the crucial to the trivial, and/or from the sublime to the ridiculous [ . . . ] 
Through this shared task, we aim to investigate the capability of state-of-the-art summarization models to generate snippets that can be reliably presented to users of a search engine (commercial or otherwise).", "cite_spans": [ { "start": 700, "end": 718, "text": "(Lin and Ng, 2019)", "ref_id": "BIBREF5" }, { "start": 749, "end": 771, "text": "(Radford et al., 2019)", "ref_id": "BIBREF9" }, { "start": 1492, "end": 1501, "text": "[ . . . ]", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2" }, { "text": "Given the many relevant text generation tasks hosted at INLG, our task makes for a strong addition to this catalog, inviting participants not only from the NLP community but also from information retrieval. Building on a long and successful series of shared tasks in various domains, 3 including the recent TL;DR summarization challenge at INLG (Syed et al., 2019), we strive to provide a supportive infrastructure and rigorous evaluations, and to share useful insights with the research community.", "cite_spans": [ { "start": 345, "end": 364, "text": "(Syed et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Motivation", "sec_num": "2" }, { "text": "Participants may employ any abstractive summarization technology, with the aim of generating summaries (in the sense of snippets) that are query-biased. Among the first to study this task were Hasselqvist et al. (2017). 3 https://webis.de/events.html?q=shared+task Query: Customer Respect Index. Snippet (DMOZ description): The Customer Respect Group: An international research and consulting firm, publishes the Online Customer Respect Index (CRI) and provides industry and company-specific research and analysis to help companies increase sales and customer retention by improving how they treat their customers online. Document: [ . . . ] The Customer Respect Group has been a trusted source of online benchmark data and strategic insight since 2003. 
While much of our work is in financial services, we have worked across a variety of industries including telecommunications, education, government, and retail. [ . . . ] Table 2: Example of a DMOZ description as training snippet. The original anchor text is highlighted.", "cite_spans": [], "ref_spans": [ { "start": 915, "end": 922, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Task Description", "sec_num": "3" }, { "text": "Similar to other text generation tasks, snippet generation requires a large dataset for training neural models. Accordingly, we have prepared the Webis Abstractive Snippet Corpus, 4 a novel and large-scale corpus for abstractive snippet generation. This corpus has been mined from the ClueWeb09, the ClueWeb12, and the DMOZ Open Directory Project, extracting more than 3.5 million examples of the form (query, snippet, document).
The DMOZ descriptions, in turn, lend themselves to the task directly: the directory contains human-written descriptions of web pages, which serve as concise, abstractive snippets.", "cite_spans": [], "ref_spans": [ { "start": 164, "end": 171, "text": "Table 1", "ref_id": "TABREF0" }, { "start": 273, "end": 280, "text": "Table 2", "ref_id": null } ], "eq_spans": [], "section": "Data", "sec_num": "3.1" }, { "text": "Finally, for each (snippet, document) tuple, we generated a matching query to which the document is relevant and to which the abstractive snippet surrogate is semantically related, at least marginally. To this end, we extracted noun phrases with the Stanford POS tagger (Toutanova et al., 2003), using only those phrases that occur in both the snippet and the document in our examples. This ensured that the queries are relevant and distinct with regard to their context in the corresponding web page. At most three such queries per tuple were generated, with an average length of 2.4 words each. To allow for synergy with existing evaluation resources, we additionally consider web pages that have been judged relevant to topics at the TREC Web, Session, and Task tracks. 5 Crowdsourcing was employed to evaluate the quality of (1) the anchor contexts, (2) the generated queries, and (3) the anchor contexts when used directly as query-biased snippets. The quality of the DMOZ descriptions was not evaluated, given their high a-priori quality. We selected 200 (query, anchor context, document) triples to be assessed. In each of the three crowdsourcing studies, every task was completed by five workers. 
The mean score for an anchor context was calculated on a four-point scale: very bad (-2), poor (-1), okay (1), and very good (2).", "cite_spans": [ { "start": 259, "end": 283, "text": "(Toutanova et al., 2003)", "ref_id": "BIBREF13" }, { "start": 728, "end": 729, "text": "5", "ref_id": null } ], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1" }, { "text": "In the study of anchor context quality, workers were shown individual anchor contexts to validate that the anchor contexts remaining after our preprocessing steps are of high linguistic quality. On average, the quality score was 1.06, showing that the quality of the anchor contexts can be expected to be okay. Next, the annotators judged whether the generated queries are important with respect to their respective anchor contexts, to validate our query generation approach. The mean query quality score was 0.28, showing that the overall query quality is just above neutral. Lastly, we studied whether the anchor contexts can be used directly as query-biased snippets by showing the entire triple to the workers. Here, the average score was -0.08, underlining that the anchor contexts may allow for distantly supervised training, but not close supervision.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1" }, { "text": "Altogether, the three crowdsourcing studies have given us confidence that the anchor contexts we mined are reasonably well-suited to serve as summaries of their linked web documents, and that the queries generated for them serve as a reasonable point of connection between them. By extension, this also applies to the DMOZ descriptions, since high writing quality can be presumed here. 
For additional details about the corpus, we refer readers to Chen et al. (2020).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1" }, { "text": "Participants are free to split this dataset into training and validation sets as they see fit. The test dataset comprises 500 examples held out from the training data, each of which is manually inspected by multiple annotators to ensure quality. These are further divided into two subsets: one for the automatic evaluation shared on the leaderboard, and one (the truly hidden test set) for the final manual evaluation. 5 TREC, https://trec.nist.gov", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Data", "sec_num": "3.1" }, { "text": "We follow a protocol similar to that of the TL;DR challenge (Syed et al., 2018). The shared task is split into three phases: (1) Participants train suitable models using our dataset on their own hardware.", "cite_spans": [ { "start": 52, "end": 71, "text": "(Syed et al., 2018)", "ref_id": "BIBREF12" } ], "ref_spans": [], "eq_spans": [], "section": "Protocol", "sec_num": "3.2" }, { "text": "(2) With the submission system open, participants deploy their models on TIRA and generate snippets for the provided test set. (3) After the submission deadline, the generated snippets from participants are manually evaluated via crowdsourcing. Ideally, Phase 1 begins three months before Phase 2 to ensure sufficient time for training. During Phase 2, the deployed models are automatically evaluated using multiple metrics to provide fast approximations of performance. All scores will be visible on a public leaderboard, with participants still being able to submit additional models at their discretion.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Protocol", "sec_num": "3.2" }, { "text": "To ensure blind evaluation and reproducibility, the trained models are submitted as working software that generates snippets for a given set of (query, document) tuples. 
Participants deploy their software and all required dependencies on a virtual machine provided by the organizers. The test dataset is not accessible to participants while the competition is running; test set snippets are generated offline on the aforementioned virtual machine, without direct input from the participants. All evaluation runs are started from a clone of the participant's virtual machine, without network access, such that no test set data can be leaked. We operate the cloud infrastructure as well as the TIRA evaluation platform ourselves, so that no third party needs to be involved. We plan the following schedule for the abstractive snippet generation task:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Protocol", "sec_num": "3.2" }, { "text": "\u2022 December 15th, 2020. The shared task is announced along with the training data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Protocol", "sec_num": "3.2" }, { "text": "\u2022 February 15th, 2021. The submission system and public leaderboard are open. Participants can deploy and test models on the automatic evaluation test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Protocol", "sec_num": "3.2" }, { "text": "\u2022 May 15th, 2021. This is the deadline for software submission; manual evaluation begins.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Protocol", "sec_num": "3.2" }, { "text": "We estimate up to three weeks for completing the manual evaluation and presenting its results on the public leaderboard. The shared task's findings are then presented at the following INLG, as was done for the TL;DR challenge (Syed et al., 2019).", "cite_spans": [ { "start": 222, "end": 241, "text": "(Syed et al., 2019)", "ref_id": "BIBREF11" } ], "ref_spans": [], "eq_spans": [], "section": "Protocol", "sec_num": "3.2" }, { "text": "Our evaluation is based on that of Chen et al. (2020): a two-step process involving intrinsic and extrinsic evaluation. 
The intrinsic evaluation assesses multiple properties of a snippet: text reuse, faithfulness (no hallucinations), and fluency. The extrinsic evaluation assesses a snippet's adequacy in the context of its use within a search engine. A combination of relevant automatic metrics and manual evaluation is used in both scenarios. While the results of the automatic metrics are shared on the leaderboard throughout the duration of the task, those of the manual evaluation will be shared later. This is primarily due to cost constraints: only the top-performing models under the automatic metrics are selected for the subsequent manual evaluation via crowdsourcing.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.3" }, { "text": "Intrinsic Evaluation For an overall comparison of the generated snippets to the ground truth, we employ the n-gram-based ROUGE as well as the contextual embedding-based BERTScore (Zhang et al., 2020) and MoverScore (Zhao et al., 2019). BERTScore computes similarity by aligning the generated and the reference snippet at the token level, with the objective of maximizing the cosine similarity between their contextual embeddings. MoverScore measures the semantic distance between the two using the Word Mover's Distance (Kusner et al., 2015) over n-gram embeddings pooled from their BERT representations. This combination of metrics provides a decent approximation of a model's performance on both the lexical and the semantic level. For assessing text reuse, we use the ROUGE-L precision score between the generated snippet and its source document: a lower precision implies less text reuse. Faithfulness is evaluated by calculating the ratio of noun phrases preserved by the generated snippet for a given document: |S \u2229 \u015c| / |\u015c|, where S is the set of noun phrases in the document, and \u015c is the set of noun phrases in its generated snippet. 
Here, a noun phrase is restricted to a head noun with an optional adjective, the same restriction that was applied for query generation. This ratio approximates the amount of content units from the document that are preserved by the generated snippet.", "cite_spans": [ { "start": 179, "end": 199, "text": "(Zhang et al., 2020)", "ref_id": "BIBREF15" }, { "start": 215, "end": 234, "text": "(Zhao et al., 2019)", "ref_id": "BIBREF16" }, { "start": 518, "end": 539, "text": "(Kusner et al., 2015)", "ref_id": "BIBREF4" } ], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.3" }, { "text": "Finally, fluency is judged manually via crowdsourcing (for the top-performing models), where (up to five) workers score a snippet's fluency on a 4-point Likert scale from very bad via bad and good to very good. Initially, however, fluency is indicated on the leaderboard as the perplexity of the generated snippet under a state-of-the-art language model.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Evaluation", "sec_num": "3.3" }, { "text": "We assume that an adequate snippet of a web page summarizes its content in a query-biased manner and helps users identify documents relevant to the query from a given list of search results, where only the snippets are presented for each document. To this end, we set up a crowdsourcing experiment which simulates a typical search scenario. Our hidden test set contains topics from TREC tracks that have at least three relevant and three irrelevant documents judged in their corresponding datasets. Participating models generate snippets for a given topic (query) and its six documents with relevance judgments. Human annotators then judge each snippet with respect to its relevance to the given search query. 
We envision using up to 50 topics for the extrinsic evaluation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Extrinsic Evaluation", "sec_num": null }, { "text": "We believe that our shared task will open new avenues to study abstractive snippet generation and query-biased summarization in general. By analyzing the performance of existing abstractive summarization technology from various perspectives, carrying out a comprehensive qualitative evaluation, and openly publishing all our data, code, and findings, we intend to make a meaningful contribution to the community in constrained text generation.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "Abstractive summarization may hold the key to future web search technology, where a search engine not only explains to its users how a given web page is relevant to their current information need, but also why a given web page might be particularly relevant to them, personally. Although, even with current technology, we are still far from achieving this goal, enabling constrained abstractive summarization is a first step in this direction. Moreover, since web search engines currently operate perhaps the largest deployments of summarization technology, it is vitally important for our information society's ecosystem to maintain the ability to generate snippets in a copyright-compliant way.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Conclusion", "sec_num": "4" }, { "text": "https://juliareda.eu/eu-copyright-reform/extra-copyright-for-news-sites/", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "https://webis.de/data.html#webis-snippet-20", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "A User Study on Snippet Generation: Text Reuse vs. 
Paraphrases", "authors": [ { "first": "Wei-Fan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Hagen", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Potthast", "suffix": "" } ], "year": 2018, "venue": "41st International ACM Conference on Research and Development in Information Retrieval (SIGIR)", "volume": "", "issue": "", "pages": "1033--1036", "other_ids": { "DOI": [ "10.1145/3209978.3210149" ] }, "num": null, "urls": [], "raw_text": "Wei-Fan Chen, Matthias Hagen, Benno Stein, and Mar- tin Potthast. 2018. A User Study on Snippet Gener- ation: Text Reuse vs. Paraphrases. In 41st Interna- tional ACM Conference on Research and Develop- ment in Information Retrieval (SIGIR), pages 1033- 1036. ACM.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Abstractive Snippet Generation", "authors": [ { "first": "Wei-Fan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Shahbaz", "middle": [], "last": "Syed", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Hagen", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Potthast", "suffix": "" } ], "year": 2020, "venue": "The Web Conference (WWW)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Wei-Fan Chen, Shahbaz Syed, Benno Stein, Matthias Hagen, and Martin Potthast. 2020. Abstractive Snip- pet Generation. In The Web Conference (WWW). ACM.", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Niklas Helmertz, and Mikael K\u00e5geb\u00e4ck. 2017. 
Query-based abstractive summarization using neural networks", "authors": [ { "first": "Johan", "middle": [], "last": "Hasselqvist", "suffix": "" } ], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1712.06100" ] }, "num": null, "urls": [], "raw_text": "Johan Hasselqvist, Niklas Helmertz, and Mikael K\u00e5ge- b\u00e4ck. 2017. Query-based abstractive summa- rization using neural networks. arXiv preprint arXiv:1712.06100.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Teaching machines to read and comprehend", "authors": [ { "first": "Karl", "middle": [], "last": "Moritz Hermann", "suffix": "" }, { "first": "Tomas", "middle": [], "last": "Kocisky", "suffix": "" }, { "first": "Edward", "middle": [], "last": "Grefenstette", "suffix": "" }, { "first": "Lasse", "middle": [], "last": "Espeholt", "suffix": "" }, { "first": "Will", "middle": [], "last": "Kay", "suffix": "" }, { "first": "Mustafa", "middle": [], "last": "Suleyman", "suffix": "" }, { "first": "Phil", "middle": [], "last": "Blunsom", "suffix": "" } ], "year": 2015, "venue": "Advances in neural information processing systems (NeurIPS)", "volume": "", "issue": "", "pages": "1693--1701", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karl Moritz Hermann, Tomas Kocisky, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. 
In Advances in neural information processing systems (NeurIPS), pages 1693-1701.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "From word embeddings to document distances", "authors": [ { "first": "Matt", "middle": [ "J" ], "last": "Kusner", "suffix": "" }, { "first": "Yu", "middle": [], "last": "Sun", "suffix": "" }, { "first": "Nicholas", "middle": [ "I" ], "last": "Kolkin", "suffix": "" }, { "first": "Kilian", "middle": [ "Q" ], "last": "Weinberger", "suffix": "" } ], "year": 2015, "venue": "Proceedings of the 32nd International Conference on Machine Learning, ICML", "volume": "37", "issue": "", "pages": "957--966", "other_ids": {}, "num": null, "urls": [], "raw_text": "Matt J. Kusner, Yu Sun, Nicholas I. Kolkin, and Kilian Q. Weinberger. 2015. From word embeddings to document distances. In Proceedings of the 32nd International Conference on Machine Learning, ICML, volume 37 of JMLR Workshop and Conference Proceedings, pages 957-966. JMLR.org.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Abstractive summarization: A survey of the state of the art", "authors": [ { "first": "Hui", "middle": [], "last": "Lin", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2019, "venue": "The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI", "volume": "", "issue": "", "pages": "9815--9822", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hui Lin and Vincent Ng. 2019. Abstractive summarization: A survey of the state of the art. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI, pages 9815-9822. 
AAAI Press.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Constrained abstractive summarization: Preserving factual consistency with constrained generation", "authors": [ { "first": "Yuning", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Xiang", "middle": [], "last": "Ren", "suffix": "" }, { "first": "Ji", "middle": [], "last": "Heng", "suffix": "" }, { "first": "Jiawei", "middle": [], "last": "Han", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Yuning Mao, Xiang Ren, Heng Ji, and Jiawei Han. 2020. Constrained abstractive summarization: Pre- serving factual consistency with constrained genera- tion.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "A Plan for Ancillary Copyright: Original Snippets", "authors": [ { "first": "Martin", "middle": [], "last": "Potthast", "suffix": "" }, { "first": "Wei-Fan", "middle": [], "last": "Chen", "suffix": "" }, { "first": "Matthias", "middle": [], "last": "Hagen", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2018, "venue": "2nd International Workshop on Recent Trends in News Information Retrieval", "volume": "", "issue": "", "pages": "3--5", "other_ids": {}, "num": null, "urls": [], "raw_text": "Martin Potthast, Wei-Fan Chen, Matthias Hagen, and Benno Stein. 2018. A Plan for Ancillary Copyright: Original Snippets. 
In 2nd International Workshop on Recent Trends in News Information Retrieval (NewsIR 2018) at ECIR, volume 2079 of CEUR Workshop Proceedings, pages 3-5.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "TIRA Integrated Research Architecture", "authors": [ { "first": "Martin", "middle": [], "last": "Potthast", "suffix": "" }, { "first": "Tim", "middle": [], "last": "Gollub", "suffix": "" }, { "first": "Matti", "middle": [], "last": "Wiegmann", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" } ], "year": 2019, "venue": "In Information Retrieval Evaluation in a Changing World", "volume": "", "issue": "", "pages": "", "other_ids": { "DOI": [ "10.1007/978-3-030-22948-1_5" ] }, "num": null, "urls": [], "raw_text": "Martin Potthast, Tim Gollub, Matti Wiegmann, and Benno Stein. 2019. TIRA Integrated Research Architecture. In Information Retrieval Evaluation in a Changing World, The Information Retrieval Series. Springer, Berlin Heidelberg New York.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Language models are unsupervised multitask learners", "authors": [ { "first": "Alec", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Jeffrey", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Rewon", "middle": [], "last": "Child", "suffix": "" }, { "first": "David", "middle": [], "last": "Luan", "suffix": "" }, { "first": "Dario", "middle": [], "last": "Amodei", "suffix": "" }, { "first": "Ilya", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2019, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. 
Language models are unsupervised multitask learners.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Unsupervised dual-cascade learning with pseudo-feedback distillation for query-focused extractive summarization", "authors": [ { "first": "Haggai", "middle": [], "last": "Roitman", "suffix": "" }, { "first": "Guy", "middle": [], "last": "Feigenblat", "suffix": "" }, { "first": "Doron", "middle": [], "last": "Cohen", "suffix": "" }, { "first": "Odellia", "middle": [], "last": "Boni", "suffix": "" }, { "first": "David", "middle": [], "last": "Konopnicki", "suffix": "" } ], "year": 2020, "venue": "The Web Conference (WWW)", "volume": "", "issue": "", "pages": "2577--2584", "other_ids": {}, "num": null, "urls": [], "raw_text": "Haggai Roitman, Guy Feigenblat, Doron Cohen, Odellia Boni, and David Konopnicki. 2020. Unsupervised dual-cascade learning with pseudo-feedback distillation for query-focused extractive summarization. In The Web Conference (WWW), pages 2577-2584.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Towards Summarization for Social Media - Results of the TL;DR Challenge", "authors": [ { "first": "Shahbaz", "middle": [], "last": "Syed", "suffix": "" }, { "first": "Michael", "middle": [], "last": "V\u00f6lske", "suffix": "" }, { "first": "Nedim", "middle": [], "last": "Lipka", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Potthast", "suffix": "" } ], "year": 2019, "venue": "12th International Natural Language Generation Conference (INLG)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shahbaz Syed, Michael V\u00f6lske, Nedim Lipka, Benno Stein, Hinrich Sch\u00fctze, and Martin Potthast. 2019. Towards Summarization for Social Media - Results of the TL;DR Challenge.
In 12th International Natural Language Generation Conference (INLG).", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Task Proposal: The TL;DR Challenge", "authors": [ { "first": "Shahbaz", "middle": [], "last": "Syed", "suffix": "" }, { "first": "Michael", "middle": [], "last": "V\u00f6lske", "suffix": "" }, { "first": "Martin", "middle": [], "last": "Potthast", "suffix": "" }, { "first": "Nedim", "middle": [], "last": "Lipka", "suffix": "" }, { "first": "Benno", "middle": [], "last": "Stein", "suffix": "" }, { "first": "Hinrich", "middle": [], "last": "Sch\u00fctze", "suffix": "" } ], "year": 2018, "venue": "11th International Conference on Natural Language Generation (INLG)", "volume": "", "issue": "", "pages": "318--321", "other_ids": {}, "num": null, "urls": [], "raw_text": "Shahbaz Syed, Michael V\u00f6lske, Martin Potthast, Nedim Lipka, Benno Stein, and Hinrich Sch\u00fctze. 2018. Task Proposal: The TL;DR Challenge. In 11th International Conference on Natural Language Generation (INLG), pages 318-321.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Feature-rich part-of-speech tagging with a cyclic dependency network", "authors": [ { "first": "Kristina", "middle": [], "last": "Toutanova", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" }, { "first": "Yoram", "middle": [], "last": "Singer", "suffix": "" } ], "year": 2003, "venue": "Proceedings of NAACL/HLT 2003", "volume": "", "issue": "", "pages": "173--180", "other_ids": {}, "num": null, "urls": [], "raw_text": "Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network.
In Proceedings of NAACL/HLT 2003, pages 173-180.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Conditional self-attention for query-based summarization", "authors": [ { "first": "Yujia", "middle": [], "last": "Xie", "suffix": "" }, { "first": "Tianyi", "middle": [], "last": "Zhou", "suffix": "" }, { "first": "Yi", "middle": [], "last": "Mao", "suffix": "" }, { "first": "Weizhu", "middle": [], "last": "Chen", "suffix": "" } ], "year": 2020, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:2002.07338" ] }, "num": null, "urls": [], "raw_text": "Yujia Xie, Tianyi Zhou, Yi Mao, and Weizhu Chen. 2020. Conditional self-attention for query-based summarization. arXiv preprint arXiv:2002.07338.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "BERTScore: Evaluating text generation with BERT", "authors": [ { "first": "Tianyi", "middle": [], "last": "Zhang", "suffix": "" }, { "first": "Varsha", "middle": [], "last": "Kishore", "suffix": "" }, { "first": "Felix", "middle": [], "last": "Wu", "suffix": "" }, { "first": "Kilian", "middle": [ "Q" ], "last": "Weinberger", "suffix": "" }, { "first": "Yoav", "middle": [], "last": "Artzi", "suffix": "" } ], "year": 2020, "venue": "8th International Conference on Learning Representations (ICLR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR.
OpenReview.net.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance", "authors": [ { "first": "Wei", "middle": [], "last": "Zhao", "suffix": "" }, { "first": "Maxime", "middle": [], "last": "Peyrard", "suffix": "" }, { "first": "Fei", "middle": [], "last": "Liu", "suffix": "" }, { "first": "Yang", "middle": [], "last": "Gao", "suffix": "" }, { "first": "Christian", "middle": [ "M" ], "last": "Meyer", "suffix": "" }, { "first": "Steffen", "middle": [], "last": "Eger", "suffix": "" } ], "year": 2019, "venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing", "volume": "", "issue": "", "pages": "563--578", "other_ids": { "DOI": [ "10.18653/v1/D19-1053" ] }, "num": null, "urls": [], "raw_text": "Wei Zhao, Maxime Peyrard, Fei Liu, Yang Gao, Christian M. Meyer, and Steffen Eger. 2019. MoverScore: Text generation evaluating with contextualized embeddings and earth mover distance. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP, pages 563-578. Association for Computational Linguistics.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "num": null, "text": ", who summarized news documents by using named entities as queries from the CNN/DailyMail dataset (Hermann et al., 2015). Recently, Chen et al. (2020) proposed a model employing bidirectional generation to induce the query bias, Xie et al. (2020) use conditional self-attention, Roitman et al. (2020) unsupervised learning, and Mao et al. (2020) lexically constrained decoding.", "type_str": "figure" }, "TABREF0": { "text": "Example of an anchor context as training snippet.
The original anchor text is highlighted.", "html": null, "type_str": "table", "content": "", "num": null } } } }