{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:28:42.318509Z"
},
"title": "Towards Generating Query to Perform Query Focused Abstractive Summarization using Pre-trained Model",
"authors": [
{
"first": "Deen",
"middle": [],
"last": "Mohammad",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Lethbridge Lethbridge",
"location": {
"region": "AB",
"country": "Canada"
}
},
"email": ""
},
{
"first": "Yllias",
"middle": [],
"last": "Chali",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Lethbridge Lethbridge",
"location": {
"region": "AB",
"country": "Canada"
}
},
"email": "yllias.chali@uleth.ca"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Query Focused Abstractive Summarization (QFAS) represents an abstractive summary from the source document based on a given query. To measure the performance of abstractive summarization tasks, different datasets have been broadly used. However, for QFAS tasks, only a limited number of datasets have been used, which are comparatively small and provide single sentence summaries. This paper presents a query generation approach, where we considered most similar words between documents and summaries for generating queries. By implementing our query generation approach, we prepared two relatively large datasets, namely CNN/DailyMail and Newsroom which contain multiple sentence summaries and can be used for future QFAS tasks. We also implemented a pre-processing approach to perform QFAS tasks using a pretrained language model, BERTSUM. In our pre-processing approach, we sorted the sentences of the documents from the most queryrelated sentences to the less query-related sentences. Then, we fine-tuned the BERT-SUM model for generating the abstractive summaries. We also experimented on one of the largely used datasets, Debatepedia, to compare our QFAS approach with other models. The experimental results show that our approach outperforms the state-of-the-art models on three ROUGE scores.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Query Focused Abstractive Summarization (QFAS) represents an abstractive summary from the source document based on a given query. To measure the performance of abstractive summarization tasks, different datasets have been broadly used. However, for QFAS tasks, only a limited number of datasets have been used, which are comparatively small and provide single sentence summaries. This paper presents a query generation approach, where we considered most similar words between documents and summaries for generating queries. By implementing our query generation approach, we prepared two relatively large datasets, namely CNN/DailyMail and Newsroom which contain multiple sentence summaries and can be used for future QFAS tasks. We also implemented a pre-processing approach to perform QFAS tasks using a pretrained language model, BERTSUM. In our pre-processing approach, we sorted the sentences of the documents from the most queryrelated sentences to the less query-related sentences. Then, we fine-tuned the BERT-SUM model for generating the abstractive summaries. We also experimented on one of the largely used datasets, Debatepedia, to compare our QFAS approach with other models. The experimental results show that our approach outperforms the state-of-the-art models on three ROUGE scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Text summarization has two major types: extractive summarization and abstractive summarization. The extractive summarization approach only selects important sentences for generating extractive summaries and may lose the main context of the documents. In contrast, the abstractive summarization approach considers all the sentences of the document to hold the actual context of the document and paraphrase sentences to generate ab-stractive summaries. Query focused abstractive summarization (QFAS) emphasizes those sentences relevant to the given query and generates abstractive summaries based on the query. For example, a user may need to know the summary of the tourist places located in Vancouver rather than all the tourist places of the entire Canada. Then the QFAS approach will focus on the query keywords 'tourist places''Vancouver'and generate an abstractive summary.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "With the advancement of the neural network, modern approaches of text summarization have focused on abstractive summarization, which paraphrases the words in the sentences by using encoder-decoder architecture (Rush et al., 2015; Nallapati et al., 2016; See et al., 2017; Tan et al., 2017; Narayan et al., 2018) . With the RNN encoder-decoder model, Hu et al. (2015) introduced a dataset for Chinese text summarization. To solve the problem of recurring words in encoderdecoder models, have given an attention model to minimize the repetition of the same words and phrases.",
"cite_spans": [
{
"start": 210,
"end": 229,
"text": "(Rush et al., 2015;",
"ref_id": "BIBREF16"
},
{
"start": 230,
"end": 253,
"text": "Nallapati et al., 2016;",
"ref_id": "BIBREF12"
},
{
"start": 254,
"end": 271,
"text": "See et al., 2017;",
"ref_id": "BIBREF17"
},
{
"start": 272,
"end": 289,
"text": "Tan et al., 2017;",
"ref_id": "BIBREF19"
},
{
"start": 290,
"end": 311,
"text": "Narayan et al., 2018)",
"ref_id": "BIBREF13"
},
{
"start": 350,
"end": 366,
"text": "Hu et al. (2015)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In other works, the transformer model has been used to get better summaries (Egonmwan and Chali, 2019) . A pretrained language model, Bidirectional Encoder Representations from Transformers (BERT) (Devlin et al., 2019) model can combine the word and sentence representations in a single substantial transformer (Vaswani et al., 2017) , which can be fine-tuned for next sentence prediction tasks. Recently, the BERT has been used on the BERTSUM model for summarization tasks and showed state-of-the-art results (Liu and Lapata, 2019) . However, all these research works were focused only on generating better abstractive summaries and did not consider the relevance of query for abstractive summarization.",
"cite_spans": [
{
"start": 76,
"end": 102,
"text": "(Egonmwan and Chali, 2019)",
"ref_id": "BIBREF4"
},
{
"start": 197,
"end": 218,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 311,
"end": 333,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF20"
},
{
"start": 510,
"end": 532,
"text": "(Liu and Lapata, 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Query focused summarization highlights those sentences which are relevant to the context of a given query. Still, only few works have been done on QFAS (Nema et al., 2017; Hasselqvist et al., 2017; Aryal and Chali, 2020) . Here, Nema et al. (2017) , Aryal and Chali (2020) independently used Debatepedia 1 (Nema et al., 2017) dataset and Hasselqvist et al. (2017) used CNN/DailyMail 2 (Hermann et al., 2015) dataset for QFAS tasks. Debatepedia dataset is a small dataset which consists of single sentence summaries. Hence, we intended to investigate whether relatively large datasets with multiple sentence summaries perform better on QFAS tasks. We prepared and used two large datasets; CNN/DailyMail and Newsroom 3 (Grusky et al., 2018) for our QFAS task, which have multiple sentence summaries. Using CNN/DailyMail dataset, Hasselqvist et al. (2017) generated queries for QFAS task and conducted their research experiments. In their query generation approach, the authors considered only summaries and did not focus on the relevant documents which may have an impact on the performance of their proposed model. Therefore, we developed our new query generation approach, considering the relevant documents and summaries for our QFAS task.",
"cite_spans": [
{
"start": 152,
"end": 171,
"text": "(Nema et al., 2017;",
"ref_id": "BIBREF14"
},
{
"start": 172,
"end": 197,
"text": "Hasselqvist et al., 2017;",
"ref_id": "BIBREF6"
},
{
"start": 198,
"end": 220,
"text": "Aryal and Chali, 2020)",
"ref_id": "BIBREF0"
},
{
"start": 229,
"end": 247,
"text": "Nema et al. (2017)",
"ref_id": "BIBREF14"
},
{
"start": 250,
"end": 272,
"text": "Aryal and Chali (2020)",
"ref_id": "BIBREF0"
},
{
"start": 306,
"end": 325,
"text": "(Nema et al., 2017)",
"ref_id": "BIBREF14"
},
{
"start": 717,
"end": 738,
"text": "(Grusky et al., 2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In our QFAS approach, we emphasized on the input representation and implemented our idea of sorting the sentences of the documents according to the corresponding queries. Then we used our pre-processed input to fine-tune the BERT-SUM model for generating abstractive summaries. For CNN/DailyMail our approach achieved better ROUGE scores than the work of Hasselqvist et al. (2017) . As there is no previous work which performs QFAS tasks on Newsroom dataset, we present our results for future research comparison. We also implemented our query generation and QFAS approaches on Debatepedia dataset and found that our approaches work well on Debatepedia dataset in comparison with the existing QFAS based research works.",
"cite_spans": [
{
"start": 355,
"end": 380,
"text": "Hasselqvist et al. (2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The research work of Nema et al. (2017) implemented the attention model for both queries and documents on Debatepedia dataset. Their model succeeded in solving the problem of repeating phrases in summaries. They proposed a model with two key addition to the encode attend decode model. In other work, Aryal and Chali (2020) focused on solving the problem of noisy encoder. The authors focused on representing the input sequence in a selective approach and used sequenceto-sequence model on Debatepedia dataset to generate query focused abstractive summaries. Hasselqvist et al. 2017proposed a pointer-generator model for query focused abstractive summarization on CNN/DailyMail dataset. They incorporated attention and pointer generation mechanism on a sequence-to-sequence model.",
"cite_spans": [
{
"start": 21,
"end": 39,
"text": "Nema et al. (2017)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "To perform many natural language tasks pretrained language models have been used (Devlin et al., 2019) . Introducing a novel document level encoder based on BERT, Liu and Lapata (2019) proposed a fine-tuning schedule and named the model as BERTSUM to generate summaries. For the decoding phase, they followed the same approach as Vaswani et al. (2017) . But in their work, they did not consider the query relevance for the summarization. Therefore, we used the BERTSUM model as a pretrained language model for QFAS task. We pre-processed the input according to the query and then fine-tuned the BERTSUM model to generate query focused abstractive summaries.",
"cite_spans": [
{
"start": 81,
"end": 102,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 330,
"end": 351,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "In this work, we used three datasets; CNN/DailyMail, Newsroom and Debatepedia for our experiments. For our QFAS task, we prepared these three datasets with our new query generation approach. The CNN/DailyMail dataset comprises of 287K news articles with 3-4 lines related highlights. In our work, we collected the stories as the text documents and the highlights as the corresponding summaries. In this way, the summaries contain more than one sentence which made the dataset more useful for the pretrained language models. The Newsroom dataset has been developed from 38 major news publications. The authors collected words and phrases from articles to generate summaries by combining the abstractive and extractive approaches. In Newsroom, there are three types of datasets: Abstractive, Extractive, and Mixed. For our QFAS task, we used Abstractive and Mixed datasets of Newsroom where we eliminated those data which had single sentence summaries. The Debatepedia dataset corpus has 663 debates under 53 categories. Though the dataset contains single sentence summaries, some QFAS models used this dataset for their experiments. Therefore, we experimented our query generation and QFAS approaches using the Debatepedia dataset to compare whether our new approach outperforms the state-of-the art result or not. Figure 1 : Generated query from given document and summary in CNN/DailyMail",
"cite_spans": [],
"ref_spans": [
{
"start": 1314,
"end": 1322,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Dataset Preparation",
"sec_num": "3"
},
{
"text": "When we search in a text with our given query, we expect the presence of those query keywords in our search results. In query focused summarization, both the generated summary as well as the source document should contain the context of the query keywords. For example, we have a document that contains a patient's medicine information corresponding to his/her different diseases. If the patient wants to know about his/her diabetes related medicine information as a summary and provide a query ('diabetes' 'medicine '), then the main document should contain the information on 'diabetes' and 'medicine'. Otherwise, we can assume that the source document has no information regarding that person's diabetes related medicine, and both the document and the query will be considered as invalid. Similarly, in the summary, the presence of these two keywords will confirm that the generated summary is query relevant. The query holds the context of the summary, where the context of the query keywords should be present in the source document. For this reason, we considered those words from the summary that are most similar to the document.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Query Generation Approach",
"sec_num": "3.1"
},
{
"text": "In our query generation approach, we preprocessed each document and the document's corresponding summary. We performed tokenization, the removal of the stop words, and lemmatization as pre-processing steps. Then we used the Python library, spaCy 4 and trained a pretrained model 'en core web md' with the source document. Then, we considered each word of the summary and calculated the cosine similarity with the trained model. Finally, we selected five most similar words as our query. In Figure 1 , we have shown our generated query from a document and the corresponding summary for CNN/DailyMail. Here, we can observe that the word 'action' is not present in the source document but convey contextual relation with the document and hence selected as one of the query keywords.",
"cite_spans": [],
"ref_spans": [
{
"start": 490,
"end": 498,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Our Query Generation Approach",
"sec_num": "3.1"
},
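The query generation procedure described above lends itself to a short sketch. The following is a minimal illustration under stated assumptions, not the authors' released code: it assumes spaCy with the pretrained 'en_core_web_md' vectors, and the function name generate_query and the de-duplication of repeated lemmas are our own illustrative additions.

```python
# Minimal sketch of the query generation approach (Section 3.1), assuming spaCy
# with the 'en_core_web_md' vectors; generate_query is an illustrative name.
import spacy

nlp = spacy.load("en_core_web_md")

def generate_query(document: str, summary: str, num_keywords: int = 5):
    doc = nlp(document)  # the document acts as the reference for similarity
    scored = []
    for token in nlp(summary):
        # pre-processing: drop stop words and punctuation, keep lemmas
        if token.is_stop or token.is_punct or not token.has_vector:
            continue
        scored.append((token.similarity(doc), token.lemma_.lower()))
    # keep the summary words most similar to the document as the query keywords
    query, seen = [], set()
    for score, lemma in sorted(scored, reverse=True):
        if lemma not in seen:
            query.append(lemma)
            seen.add(lemma)
        if len(query) == num_keywords:
            break
    return query
```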
{
"text": "We used the source document and query as the system input and generated summary as system output. Our summarization framework has two parts, at first we pre-processed the source document according to the query by which we incorporated the query relevance to our QFAS task. Then, we used the BERTSUM model to generate abstractive summaries, where we fine-tuned the model with our pre-processed source documents. Our summarization approach is shown in Figure 2 .",
"cite_spans": [],
"ref_spans": [
{
"start": 450,
"end": 458,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Our Summarization Framework",
"sec_num": "4"
},
{
"text": "We sorted the sentences of a document according to the relevance of the generated query. Given a document, D = {S 1 , S 2 , ..., S n } and generated query, Q = {q 1 , q 2 , ..., q m }, we ordered the sentences to get the sorted document, D SORT = {..., S i , S j , ...}, where, 1 \u2264 i, j \u2264 n; i = j; and similarity(Q, S i ) \u2265 similarity(Q, S j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Pre-Processing Approach",
"sec_num": "4.1"
},
{
"text": "Here, we used the Python library, spaCy and trained 'en core web md' model with the query. Then, for each sentence of the document, we calculated the cosine similarity with the trained model. Finally, we sorted the document from the most similar to the less similar values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Our Pre-Processing Approach",
"sec_num": "4.1"
},
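A corresponding sketch of the sentence-sorting pre-processing step is given below. It is illustrative only, again assuming spaCy's 'en_core_web_md' model; sort_document is not a name taken from the paper.

```python
# Minimal sketch of the pre-processing step (Section 4.1): order the sentences of
# a document from most to least similar to the generated query. Illustrative only.
import spacy

nlp = spacy.load("en_core_web_md")

def sort_document(document: str, query_keywords: list) -> list:
    query = nlp(" ".join(query_keywords))
    sentences = list(nlp(document).sents)
    # D_SORT: similarity(Q, S_i) >= similarity(Q, S_j) whenever S_i precedes S_j
    ranked = sorted(sentences, key=lambda sent: query.similarity(sent), reverse=True)
    return [sent.text for sent in ranked]
```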
{
"text": "In this paper, we followed the same fine-tuning approach of Liu and Lapata (2019) . We selected sorted sentences one by one from the document, D SORT and tokenized each sentence by following the work of Durrett et al. (2016) . Then, we incorporated the [CLS] token at the beginning of each sentence and assigned three embeddings; token embedding, segmentation embedding, and position embedding for each token. Finally, the summation of three embeddings of the input document was passed to the transformer. Token embedding has been used to represent the meaning of each token, whereas segmentation embedding is used to identify each sentence separately. The position embedding has been used to determine the position of each token. Following the same encoder-decoder framework of See et al. (2017), we used pretrained encoder and 6-layered transformer for decoder as Liu and Lapata (2019) used for their BERTSUM model. We used Adam optimizers, \u03b2 1 = 0.9 for the encoder, and \u03b2 2 = 0.999 for the decoder to make our fine-tuning stable and used the learning rates for encoder and decoder as in following equations:",
"cite_spans": [
{
"start": 60,
"end": 81,
"text": "Liu and Lapata (2019)",
"ref_id": "BIBREF11"
},
{
"start": 203,
"end": 224,
"text": "Durrett et al. (2016)",
"ref_id": "BIBREF3"
},
{
"start": 866,
"end": 887,
"text": "Liu and Lapata (2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-Tuning the BERTSUM Model",
"sec_num": "4.2"
},
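The input representation can be sketched roughly as follows. This is a hedged illustration of the BERTSUM-style encoding described above, not the authors' implementation; it assumes a BERT-style WordPiece tokenizer exposing tokenize and convert_tokens_to_ids, and build_input is an illustrative name.

```python
# Sketch of the BERTSUM-style input: each query-sorted sentence is prefixed with
# [CLS] and closed with [SEP], and sentences receive alternating segment ids. The
# token, segment, and position embeddings are summed inside the transformer.
def build_input(sorted_sentences, tokenizer, max_len=512):
    token_ids, segment_ids, cls_positions = [], [], []
    for i, sentence in enumerate(sorted_sentences):
        pieces = ["[CLS]"] + tokenizer.tokenize(sentence) + ["[SEP]"]
        ids = tokenizer.convert_tokens_to_ids(pieces)
        if len(token_ids) + len(ids) > max_len:
            break  # respect BERT's maximum sequence length
        cls_positions.append(len(token_ids))        # position of each [CLS] token
        token_ids.extend(ids)
        segment_ids.extend([i % 2] * len(ids))      # alternating segment embedding
    return token_ids, segment_ids, cls_positions
```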
{
"text": "\u03b1 =\u03b1.min(N \u22120.5 , N.warmup \u22121.5 )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-Tuning the BERTSUM Model",
"sec_num": "4.2"
},
{
"text": "where, N stands for the iteration number, warmup is 20, 000 for the encoder and 10, 000 for the decoder, and\u03b1 is 2e \u22123 for the encoder and 0.1 for the decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-Tuning the BERTSUM Model",
"sec_num": "4.2"
},
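As a worked example of this warmup schedule (a sketch; the function and variable names below are ours, the values come from the text above):

```python
# Noam-style warmup learning rate used for fine-tuning, following the equation
# above: alpha = alpha_tilde * min(N^-0.5, N * warmup^-1.5).
def learning_rate(step, base_lr, warmup):
    return base_lr * min(step ** -0.5, step * warmup ** -1.5)

# Encoder: base_lr = 2e-3, warmup = 20,000; decoder: base_lr = 0.1, warmup = 10,000.
encoder_lr = learning_rate(step=20_000, base_lr=2e-3, warmup=20_000)  # schedule peak
decoder_lr = learning_rate(step=10_000, base_lr=0.1, warmup=10_000)   # schedule peak
```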
{
"text": "We implemented our query generation and QFAS approaches on CNN/DailyMail, Newsroom and Debatepedia datasets using the same experimental setup.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setups",
"sec_num": "5"
},
{
"text": "We trained the model for 200, 000 steps on TITAN X GPU (GTX Machine) and used PyTorch (Paszke et al., 2017) , OpenNMT (Klein et al., 2017) . We imported 'bert-base-uncased' of the BERT (Devlin et al., 2019) model for utilizing the BERTSUM model. We set the dropout probability 0.1 and the label-smoothing factor 0.1 (Szegedy et al., 2016) . For the encoder, we took 768 hidden units with the hidden size for feed-forward layers 2, 048. In the decoding phase, we used beam size 5, and tuned the length penalty between 0.6 and 1.0 (Wu et al., 2016) .",
"cite_spans": [
{
"start": 86,
"end": 107,
"text": "(Paszke et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 118,
"end": 138,
"text": "(Klein et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 185,
"end": 206,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF2"
},
{
"start": 316,
"end": 338,
"text": "(Szegedy et al., 2016)",
"ref_id": "BIBREF18"
},
{
"start": 529,
"end": 546,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation Details",
"sec_num": "5.1"
},
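For convenience, the reported settings can be collected into a single configuration; the key names below are ours, only the values are taken from the paper.

```python
# Training and decoding hyperparameters as reported in Section 5.1 (key names are
# illustrative; values are from the paper).
config = {
    "bert_model": "bert-base-uncased",
    "train_steps": 200_000,
    "dropout": 0.1,
    "label_smoothing": 0.1,
    "encoder_hidden_units": 768,
    "feed_forward_size": 2_048,
    "beam_size": 5,
    "length_penalty_range": (0.6, 1.0),  # tuned within this range (Wu et al., 2016)
}
```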
{
"text": "We evaluated our approach for all datasets using ROUGE-1 (R1), ROUGE-2 (R2), and ROUGE-L (RL) (Lin, 2004) , which calculate the word-overlap between the reference and the system summaries. Table 1 presents the comparison of R1, R2 and RL scores for Debatepedia dataset. We compared our Recall (R) values of R1, R2 and RL with the works of Nema et al. (2017) and Aryal and Chali (2020) . After comparing the results we observed that, our approach successfully achieved new stateof-the-art results for QFAS task. We also provided our Precision (P) and F1-measure (F1) values in Table 1 . (Hasselqvist et al., 2017) Our Approach 44.91 21.81 41.70 For Newsroom dataset, no previous QFAS work has been performed. Therefore, in Table 3 ",
"cite_spans": [
{
"start": 94,
"end": 105,
"text": "(Lin, 2004)",
"ref_id": "BIBREF10"
},
{
"start": 339,
"end": 357,
"text": "Nema et al. (2017)",
"ref_id": "BIBREF14"
},
{
"start": 362,
"end": 384,
"text": "Aryal and Chali (2020)",
"ref_id": "BIBREF0"
},
{
"start": 586,
"end": 612,
"text": "(Hasselqvist et al., 2017)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [
{
"start": 189,
"end": 196,
"text": "Table 1",
"ref_id": null
},
{
"start": 576,
"end": 583,
"text": "Table 1",
"ref_id": null
},
{
"start": 722,
"end": 729,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "5.2"
},
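The reported scores are standard ROUGE overlap scores. A minimal sketch of computing them is shown below; the paper follows Lin (2004), whereas this sketch uses the open-source rouge-score package as a stand-in, so the exact numbers may differ slightly from the official ROUGE toolkit.

```python
# Sketch of computing ROUGE-1/2/L precision, recall, and F1 between a reference
# and a system summary, using the rouge-score package as a stand-in for ROUGE.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(
    target="reference (gold) summary text goes here",
    prediction="system-generated summary text goes here",
)
for name, s in scores.items():
    print(f"{name}: P={s.precision:.4f} R={s.recall:.4f} F1={s.fmeasure:.4f}")
```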
{
"text": "In this research, one of our aim was to incorporate query and prepare two datasets which contain multiple sentence summaries for QFAS task. Our another goal was to pre-process the source documents with a new document sorting approach and then fed to the pretrained model. We targeted to fine-tune the BERTSUM model for our QFAS task. We compared our results and investigated that our QFAS approach successfully achieved new state-ofthe-art results for Debatepedia and CNN/DailyMail datasets. As no previous research used Newsroom dataset for QFAS task, we provided our results of Newsroom dataset for future comparison of the related research work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "https://github.com/PrekshaNema25/ DiverstiyBasedAttentionMechanism 2 https://cs.nyu.edu/\u02dckcho/DMQA/ 3 http://lil.nlp.cornell.edu/newsroom/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://spacy.io/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their useful comments. The research reported in this paper was conducted at the University of Lethbridge and supported by the Natural Sciences and Engineering Research Council (NSERC) of Canada discovery grant and the University of Lethbridge.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Selection driven query focused abstractive document summarization",
"authors": [
{
"first": "Chudamani",
"middle": [],
"last": "Aryal",
"suffix": ""
},
{
"first": "Yllias",
"middle": [],
"last": "Chali",
"suffix": ""
}
],
"year": 2020,
"venue": "Advances in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "118--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chudamani Aryal and Yllias Chali. 2020. Selec- tion driven query focused abstractive document sum- marization. In Advances in Artificial Intelligence, pages 118-124, Cham. Springer International Pub- lishing.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Distraction-based neural networks for modeling documents",
"authors": [
{
"first": "Qian",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Xiaodan",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Zhenhua",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Si",
"middle": [],
"last": "Wei",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Jiang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence, IJCAI'16",
"volume": "",
"issue": "",
"pages": "2754--2760",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei, and Hui Jiang. 2016. Distraction-based neural networks for modeling documents. In Proceedings of the Twenty-Fifth International Joint Conference on Arti- ficial Intelligence, IJCAI'16, page 2754-2760, New York, New York, USA. AAAI Press.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "BERT: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {
"DOI": [
"10.18653/v1/N19-1423"
]
},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Learning-based single-document summarization with compression and anaphoricity constraints",
"authors": [
{
"first": "Greg",
"middle": [],
"last": "Durrett",
"suffix": ""
},
{
"first": "Taylor",
"middle": [],
"last": "Berg-Kirkpatrick",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.18653/v1/P16-1188"
]
},
"num": null,
"urls": [],
"raw_text": "Greg Durrett, Taylor Berg-Kirkpatrick, and Dan Klein. 2016. Learning-based single-document summariza- tion with compression and anaphoricity constraints. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 1998-2008, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Transformer-based model for single documents neural summarization",
"authors": [
{
"first": "Elozino",
"middle": [],
"last": "Egonmwan",
"suffix": ""
},
{
"first": "Yllias",
"middle": [],
"last": "Chali",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 3rd Workshop on Neural Generation and Translation",
"volume": "",
"issue": "",
"pages": "70--79",
"other_ids": {
"DOI": [
"10.18653/v1/D19-5607"
]
},
"num": null,
"urls": [],
"raw_text": "Elozino Egonmwan and Yllias Chali. 2019. Transformer-based model for single documents neural summarization. In Proceedings of the 3rd Workshop on Neural Generation and Transla- tion, pages 70-79, Hong Kong. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies",
"authors": [
{
"first": "Max",
"middle": [],
"last": "Grusky",
"suffix": ""
},
{
"first": "Mor",
"middle": [],
"last": "Naaman",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "708--719",
"other_ids": {
"DOI": [
"10.18653/v1/N18-1065"
]
},
"num": null,
"urls": [],
"raw_text": "Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Hu- man Language Technologies, Volume 1 (Long Pa- pers), pages 708-719, New Orleans, Louisiana. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Query-based abstractive summarization using neural networks",
"authors": [
{
"first": "Johan",
"middle": [],
"last": "Hasselqvist",
"suffix": ""
},
{
"first": "Niklas",
"middle": [],
"last": "Helmertz",
"suffix": ""
},
{
"first": "Mikael",
"middle": [],
"last": "K\u00e5geb\u00e4ck",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Johan Hasselqvist, Niklas Helmertz, and Mikael K\u00e5geb\u00e4ck. 2017. Query-based abstractive summarization using neural networks. CoRR, abs/1712.06100.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Teaching machines to read and comprehend",
"authors": [
{
"first": "Karl",
"middle": [],
"last": "Moritz Hermann",
"suffix": ""
},
{
"first": "Tom\u00e1\u0161",
"middle": [],
"last": "Ko\u010disk\u00fd",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Grefenstette",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Will",
"middle": [],
"last": "Kay",
"suffix": ""
},
{
"first": "Mustafa",
"middle": [],
"last": "Suleyman",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 28th International Conference on Neural Information Processing Systems",
"volume": "1",
"issue": "",
"pages": "1693--1701",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karl Moritz Hermann, Tom\u00e1\u0161 Ko\u010disk\u00fd, Edward Grefen- stette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. 2015. Teaching machines to read and comprehend. In Proceedings of the 28th Inter- national Conference on Neural Information Process- ing Systems -Volume 1, NIPS'15, page 1693-1701, Cambridge, MA, USA. MIT Press.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Lcsts: A large scale chinese short text summarization dataset",
"authors": [
{
"first": "Baotian",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Qingcai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Fangze",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1967--1972",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1229"
]
},
"num": null,
"urls": [],
"raw_text": "Baotian Hu, Qingcai Chen, and Fangze Zhu. 2015. Lc- sts: A large scale chinese short text summarization dataset. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1967-1972, Lisbon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "OpenNMT: Opensource toolkit for neural machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Yuntian",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Jean",
"middle": [],
"last": "Senellart",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of ACL 2017, System Demonstrations",
"volume": "",
"issue": "",
"pages": "67--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senel- lart, and Alexander Rush. 2017. OpenNMT: Open- source toolkit for neural machine translation. In Proceedings of ACL 2017, System Demonstrations, pages 67-72, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Text summarization with pretrained encoders",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)",
"volume": "",
"issue": "",
"pages": "3730--3740",
"other_ids": {
"DOI": [
"10.18653/v1/D19-1387"
]
},
"num": null,
"urls": [],
"raw_text": "Yang Liu and Mirella Lapata. 2019. Text summariza- tion with pretrained encoders. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3730-3740, Hong Kong, China. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Abstractive text summarization using sequence-to-sequence rnns and beyond",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Cicero Dos Santos",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Xiang",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "280--290",
"other_ids": {
"DOI": [
"10.18653/v1/K16-1028"
]
},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Caglar Gulcehre, and Bing Xiang. 2016. Abstrac- tive text summarization using sequence-to-sequence rnns and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Lan- guage Learning, pages 280-290, Berlin, Germany. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Don't give me the details, just the summary! topic-aware convolutional neural networks for extreme summarization",
"authors": [
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1797--1807",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shashi Narayan, Shay B. Cohen, and Mirella Lapata. 2018. Don't give me the details, just the summary! topic-aware convolutional neural networks for ex- treme summarization. In Proceedings of the 2018 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1797-1807, Brussels, Bel- gium. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Diversity driven attention model for query-based abstractive summarization",
"authors": [
{
"first": "Preksha",
"middle": [],
"last": "Nema",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Mitesh",
"suffix": ""
},
{
"first": "Anirban",
"middle": [],
"last": "Khapra",
"suffix": ""
},
{
"first": "Balaraman",
"middle": [],
"last": "Laha",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ravindran",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1063--1072",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1098"
]
},
"num": null,
"urls": [],
"raw_text": "Preksha Nema, Mitesh M. Khapra, Anirban Laha, and Balaraman Ravindran. 2017. Diversity driven atten- tion model for query-based abstractive summariza- tion. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1063-1072, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Automatic differentiation in pytorch",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Paszke",
"suffix": ""
},
{
"first": "Sam",
"middle": [],
"last": "Gross",
"suffix": ""
},
{
"first": "Soumith",
"middle": [],
"last": "Chintala",
"suffix": ""
},
{
"first": "Gregory",
"middle": [],
"last": "Chanan",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Devito",
"suffix": ""
},
{
"first": "Zeming",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Alban",
"middle": [],
"last": "Desmaison",
"suffix": ""
},
{
"first": "Luca",
"middle": [],
"last": "Antiga",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Lerer",
"suffix": ""
}
],
"year": 2017,
"venue": "NIPS-W",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Paszke, Sam Gross, Soumith Chintala, Gregory Chanan, Edward Yang, Zachary DeVito, Zeming Lin, Alban Desmaison, Luca Antiga, and Adam Lerer. 2017. Automatic differentiation in pytorch. In NIPS-W.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A neural attention model for abstractive sentence summarization",
"authors": [
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "379--389",
"other_ids": {
"DOI": [
"10.18653/v1/D15-1044"
]
},
"num": null,
"urls": [],
"raw_text": "Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sen- tence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Lan- guage Processing, pages 379-389, Lisbon, Portugal. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Get to the point: Summarization with pointergenerator networks",
"authors": [
{
"first": "Abigail",
"middle": [],
"last": "See",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Liu",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1073--1083",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1099"
]
},
"num": null,
"urls": [],
"raw_text": "Abigail See, Peter J. Liu, and Christopher D. Manning. 2017. Get to the point: Summarization with pointer- generator networks. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1073- 1083, Vancouver, Canada. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Rethinking the inception architecture for computer vision",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Vanhoucke",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Ioffe",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Zbigniew",
"middle": [],
"last": "Wojna",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "2818--2826",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jon Shlens, and Zbigniew Wojna. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vi- sion and pattern recognition, pages 2818-2826, Las Vegas. IEEE.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Abstractive document summarization with a graphbased attentional neural model",
"authors": [
{
"first": "Jiwei",
"middle": [],
"last": "Tan",
"suffix": ""
},
{
"first": "Xiaojun",
"middle": [],
"last": "Wan",
"suffix": ""
},
{
"first": "Jianguo",
"middle": [],
"last": "Xiao",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1171--1181",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1108"
]
},
"num": null,
"urls": [],
"raw_text": "Jiwei Tan, Xiaojun Wan, and Jianguo Xiao. 2017. Abstractive document summarization with a graph- based attentional neural model. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1171-1181, Vancouver, Canada. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Kaiser",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17",
"volume": "",
"issue": "",
"pages": "6000--6010",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, undefine- dukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st Interna- tional Conference on Neural Information Processing Systems, NIPS'17, page 6000-6010, Red Hook, NY, USA. Curran Associates Inc.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Klingner",
"suffix": ""
},
{
"first": "Apurva",
"middle": [],
"last": "Shah",
"suffix": ""
},
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Xiaobing",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Gouws",
"suffix": ""
},
{
"first": "Yoshikiyo",
"middle": [],
"last": "Kato",
"suffix": ""
},
{
"first": "Taku",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
},
{
"first": "Keith",
"middle": [],
"last": "Stevens",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Kurian",
"suffix": ""
},
{
"first": "Nishant",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Oriol Vinyals",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V. Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, Jeff Klingner, Apurva Shah, Melvin John- son, Xiaobing Liu, Lukasz Kaiser, Stephan Gouws, Yoshikiyo Kato, Taku Kudo, Hideto Kazawa, Keith Stevens, George Kurian, Nishant Patil, Wei Wang, Cliff Young, Jason Smith, Jason Riesa, Alex Rud- nick, Oriol Vinyals, Greg Corrado, Macduff Hughes, and Jeffrey Dean. 2016. Google's neural machine translation system: Bridging the gap between human and machine translation. CoRR, abs/1609.08144.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Our summarization approach (Pre-processing and Fine-tuning)",
"type_str": "figure",
"num": null,
"uris": null
},
"TABREF1": {
"text": "",
"type_str": "table",
"num": null,
"content": "<table><tr><td colspan=\"4\">illustrates F1 values of R1, R2 and RL</td></tr><tr><td colspan=\"4\">scores for CNN/DailyMail dataset. After compar-</td></tr><tr><td colspan=\"4\">ing our results with the work of Hasselqvist et al.</td></tr><tr><td colspan=\"4\">(2017), we observed that our approach efficiently</td></tr><tr><td colspan=\"4\">performed better for CNN/DailyMail dataset.</td></tr><tr><td>Model</td><td>R1</td><td>R2</td><td>RL</td></tr><tr><td>PG Model</td><td colspan=\"3\">18.25 5.04 16.17</td></tr></table>",
"html": null
},
"TABREF2": {
"text": "ROUGE-F1 (%) scores of abstractive models on the CNN/DailyMail test set.",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
},
"TABREF3": {
"text": "we present our F1 values of R1, R2 and RL scores for Abstractive and Mixed datasets of Newsroom for future QFAS comparison.",
"type_str": "table",
"num": null,
"content": "<table><tr><td>Dataset</td><td>R1</td><td>R2</td><td>RL</td></tr><tr><td colspan=\"4\">Abstractive 15.05 2.26 13.50</td></tr><tr><td>Mixed</td><td colspan=\"3\">40.67 22.66 36.92</td></tr></table>",
"html": null
},
"TABREF4": {
"text": "ROUGE-F1 (%) scores of our approach on the Newsroom test set.",
"type_str": "table",
"num": null,
"content": "<table/>",
"html": null
}
}
}
}