|
{ |
|
"paper_id": "2021", |
|
"header": { |
|
"generated_with": "S2ORC 1.0.0", |
|
"date_generated": "2023-01-19T01:07:40.752001Z" |
|
}, |
|
"title": "ChicHealth @ MEDIQA 2021: Exploring the limits of pre-trained seq2seq models for medical summarization", |
|
"authors": [ |
|
{ |
|
"first": "Liwen", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Chic Health", |
|
"location": { |
|
"settlement": "Shanghai", |
|
"country": "China" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Yan", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Chic Health", |
|
"location": { |
|
"settlement": "Shanghai", |
|
"country": "China" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Yi", |
|
"middle": [], |
|
"last": "Hong", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Chic Health", |
|
"location": { |
|
"settlement": "Shanghai", |
|
"country": "China" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "", |
|
"middle": [], |
|
"last": "Cai", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Chic Health", |
|
"location": { |
|
"settlement": "Shanghai", |
|
"country": "China" |
|
} |
|
}, |
|
"email": "" |
|
}, |
|
{ |
|
"first": "Szui", |
|
"middle": [], |
|
"last": "Sung", |
|
"suffix": "", |
|
"affiliation": { |
|
"laboratory": "", |
|
"institution": "Chic Health", |
|
"location": { |
|
"settlement": "Shanghai", |
|
"country": "China" |
|
} |
|
}, |
|
"email": "" |
|
} |
|
], |
|
"year": "", |
|
"venue": null, |
|
"identifiers": {}, |
|
"abstract": "In this article, we will describe our system for MEDIQA2021 shared tasks. First, we will describe the method of the second task, multiple answer summary (MAS). For extracting abstracts, we follow the rules of Xu and Lapata (2020). First, the candidate sentences are roughly estimated by using the Roberta model. Then the Markov chain model is used to evaluate the sentences in a fine-grained manner. Our team won the first place in overall performance, with the fourth place in MAS task, the seventh place in RRS task and the eleventh place in QS task. For the QS and RRS tasks, we investigate the performanceS of the end-to-end pre-trained seq2seq model. Experiments show that the methods of adversarial training and reverse translation are beneficial to improve the fine tuning performance.", |
|
"pdf_parse": { |
|
"paper_id": "2021", |
|
"_pdf_hash": "", |
|
"abstract": [ |
|
{ |
|
"text": "In this article, we will describe our system for MEDIQA2021 shared tasks. First, we will describe the method of the second task, multiple answer summary (MAS). For extracting abstracts, we follow the rules of Xu and Lapata (2020). First, the candidate sentences are roughly estimated by using the Roberta model. Then the Markov chain model is used to evaluate the sentences in a fine-grained manner. Our team won the first place in overall performance, with the fourth place in MAS task, the seventh place in RRS task and the eleventh place in QS task. For the QS and RRS tasks, we investigate the performanceS of the end-to-end pre-trained seq2seq model. Experiments show that the methods of adversarial training and reverse translation are beneficial to improve the fine tuning performance.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstract", |
|
"sec_num": null |
|
} |
|
], |
|
"body_text": [ |
|
{ |
|
"text": "The mediqa 2021 shared tasks aim to investigate the most advanced summary models, especially their performance in the medical field. There are three tasks. The first is question summary (QS), which classifies long and complex consumer health problems into simple ones, which has been proved to be helpful to answer questions automatically (Abacha and Demner-Fushman, 2019). The second task is multiple answer summary (MAS) (Savery et al., 2020) . Different answers can bring complementary views, which may benefit the users of QA system. The goal of this task is to develop a system that can aggregate and summarize answers scattered across multiple documents. The third task is radiology report summary (RRs) (Zhang et al., 2018 (Zhang et al., , 2020b , which generates radiology impression statements by summarizing the text results written by radiologists.", |
|
"cite_spans": [ |
|
{ |
|
"start": 423, |
|
"end": 444, |
|
"text": "(Savery et al., 2020)", |
|
"ref_id": "BIBREF6" |
|
}, |
|
{ |
|
"start": 710, |
|
"end": 729, |
|
"text": "(Zhang et al., 2018", |
|
"ref_id": "BIBREF10" |
|
}, |
|
{ |
|
"start": 730, |
|
"end": 752, |
|
"text": "(Zhang et al., , 2020b", |
|
"ref_id": "BIBREF11" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Automatic summarization is an important task in the field of medicine. When users use Google, MEDLINE and other search engines, they need to read a large number of medical documents about a certain topic and get a list of possible answers, which is very time-consuming. First, the content may be too specialized for laymen to understand. Second, one document may not be able to fully answer queries, and users may need to summarize conclusions across multiple documents, which may lead to a waste of time or misunderstanding. In order to improve the user experience when using medical applications, automatic summarization technology is needed.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "In the MAS task, we improve upon (Xu and Lapata, 2020) via three methods. First, during the coarse ranking of a sentence in one of the given documents, we also add the surrounding sentences as input and use two special tokens marking the positions of the sentence. This modification improves the coarse ranking with a large margin. Second, due to the low resource settings of this task, we find that applying a RoBERTa (Liu et al., 2019) model which is already fine-tuned on the GLUE benchmark (Wang et al., 2018) can be beneficial.", |
|
"cite_spans": [ |
|
{ |
|
"start": 419, |
|
"end": 437, |
|
"text": "(Liu et al., 2019)", |
|
"ref_id": "BIBREF3" |
|
}, |
|
{ |
|
"start": 494, |
|
"end": 513, |
|
"text": "(Wang et al., 2018)", |
|
"ref_id": "BIBREF7" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
|
{ |
|
"text": "For the other two tasks, we mainly discuss how the pre trained seq2seq model, such as Bart (Lewis et al., 2020) , Pegasus (Zhang et al., 2020a) , can be implemented in these tasks. You can make two takeout. First, for tasks with smaller datasets, freezing part of the parameters is beneficial. Second, backtranslation is beneficial for generalization.", |
|
"cite_spans": [ |
|
{ |
|
"start": 91, |
|
"end": 111, |
|
"text": "(Lewis et al., 2020)", |
|
"ref_id": "BIBREF2" |
|
}, |
|
{ |
|
"start": 122, |
|
"end": 143, |
|
"text": "(Zhang et al., 2020a)", |
|
"ref_id": "BIBREF9" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Our team ChicHealth participated in all three tasks and won the first place for the overall per-formances. Experiments show that our methods are beneficial for pre-trained models' downstream performances.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Introduction", |
|
"sec_num": "1" |
|
}, |
|
{ |
|
"text": "Let Q denote a query, and", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extractive MDS", |
|
"sec_num": "2" |
|
}, |
|
|
{ |
|
"text": "We have implemented multi granularity MDS following the implementation of Xu and Lapata (2020) . We first break down the document into paragraphs, which are sentences. Then, a trained Roberta model quantifies the semantic similarity between the selected sentence and the query, and estimates the importance of the sentence (evidence estimator) according to the sentence itself or the local context of the sentence. Thirdly, in order to give the global estimation of the importance of each part in the summary, we use the centrality estimator based on the Markov chain.", |
|
"cite_spans": [ |
|
{ |
|
"start": 74, |
|
"end": 94, |
|
"text": "Xu and Lapata (2020)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Extractive MDS", |
|
"sec_num": "2" |
|
}, |
|
{ |
|
"text": "Let {S 1 , S 2 , ..., S N }", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evidence Estimator", |
|
"sec_num": "2.1" |
|
}, |
|
|
{ |
|
"text": "We concatenate query Q after candidate sentence S into a sequence < /s >, S, < /s > < s >, Q, < /s >, as the input to the RoBERTa encoder. The starting < s > token's vector representations t serves as input to a single layer feed forward layer to obtain the distribution over positive and negative classes, where the positive class denotes that a sentence contains the answer and 0 otherwise.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evidence Estimator", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "We connect the query Q to the sequence < s >, S, < /s >, Q, < /s > after the candidate statement sas the input of the Roberta encoder. The vector of the starting < s > is used as the input of the single feed-forward layer to obtain the distribution on the positive and negative classes, where the positive class indicates that the sentence contains the answer, otherwise it is 0. We can improve the performance of the evidence estimator by adding the surrounding sentences of S into the model during training.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evidence Estimator", |
|
"sec_num": "2.1" |
|
}, |
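{

"text": "Below is a minimal sketch of how such a coarse-ranking evidence estimator can be implemented with the HuggingFace transformers library. The checkpoint name, the <sent>/</sent> marker tokens, the context window, and the maximum length are illustrative assumptions, not the authors' released code:\n\nimport torch\nfrom transformers import RobertaTokenizer, RobertaForSequenceClassification\n\n# Hypothetical marker tokens around the candidate sentence S; in practice they\n# would be registered via tokenizer.add_tokens(...) followed by\n# model.resize_token_embeddings(len(tokenizer)).\ntokenizer = RobertaTokenizer.from_pretrained('roberta-large')\nmodel = RobertaForSequenceClassification.from_pretrained('roberta-large', num_labels=2)\nmodel.eval()\n\ndef evidence_score(sentence, query, left_ctx='', right_ctx=''):\n    # Wrap S in marker tokens, add its local context, and append the query Q\n    # as the second segment, mirroring the <s>, S, </s>, Q, </s> layout.\n    first_segment = (left_ctx + ' <sent> ' + sentence + ' </sent> ' + right_ctx).strip()\n    inputs = tokenizer(first_segment, query, truncation=True, max_length=256, return_tensors='pt')\n    with torch.no_grad():\n        logits = model(**inputs).logits  # shape (1, 2)\n    # The probability of the positive class is the local evidence score of S.\n    return torch.softmax(logits, dim=-1)[0, 1].item()",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Evidence Estimator",

"sec_num": "2.1"

},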
|
{ |
|
"text": "After fine-tuning, we take the probability of positive class as the score of local evidence, and we will use it to sort all sentences of each query.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Evidence Estimator", |
|
"sec_num": "2.1" |
|
}, |
|
{ |
|
"text": "In order to obtain a global estimate of the score of each candidate sentence, we apply a global estimator following Xu and Lapata (2020) . The centrality estimator is essentially an extension of the famous LexRank algorithm (Erkan and Radev, 2004) .", |
|
"cite_spans": [ |
|
{ |
|
"start": 116, |
|
"end": 136, |
|
"text": "Xu and Lapata (2020)", |
|
"ref_id": "BIBREF8" |
|
}, |
|
{ |
|
"start": 224, |
|
"end": 247, |
|
"text": "(Erkan and Radev, 2004)", |
|
"ref_id": "BIBREF1" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Centrality Estimator", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "For each document cluster, i.e., the collections of documents for each query in our tasks, LexRank builds a graph G = (V ; E) with nodes V corresponding to sentences and undirected edges E whose weights are computed based on a certian similarity metric. The original LEXRANK algorithm uses TF-IDF (Term Frequency Inverse Document Frequency). (Xu and Lapata, 2020) proposes to use TF-ISF (Term Frequency Inverse Sentence Frequency), which is similar to TF-IDF but operates at the sentence level.", |
|
"cite_spans": [ |
|
{ |
|
"start": 342, |
|
"end": 363, |
|
"text": "(Xu and Lapata, 2020)", |
|
"ref_id": "BIBREF8" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Centrality Estimator", |
|
"sec_num": "2.2" |
|
}, |
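{

"text": "As an illustration, here is a small self-contained sketch of TF-ISF sentence similarity; the whitespace tokenization and the smoothed ISF formula are our own simplifying assumptions:\n\nimport math\nfrom collections import Counter\n\ndef tf_isf_matrix(sentences):\n    # TF-ISF is TF-IDF computed at the sentence level: the inverse frequency\n    # counts how many sentences contain a term rather than how many documents.\n    tokenized = [s.lower().split() for s in sentences]\n    n = len(tokenized)\n    sent_freq = Counter(t for toks in tokenized for t in set(toks))\n    def vec(toks):\n        tf = Counter(toks)\n        # log(1 + n/sf) is a smoothed ISF that stays positive (an assumption).\n        return {t: tf[t] * math.log(1 + n / sent_freq[t]) for t in tf}\n    vecs = [vec(toks) for toks in tokenized]\n    def cos(u, v):\n        dot = sum(w * v.get(t, 0.0) for t, w in u.items())\n        nu = math.sqrt(sum(w * w for w in u.values()))\n        nv = math.sqrt(sum(w * w for w in v.values()))\n        return dot / (nu * nv) if nu and nv else 0.0\n    # Symmetric similarity matrix used as the edge weights E of the graph G.\n    return [[cos(vecs[i], vecs[j]) for j in range(n)] for i in range(n)]",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Centrality Estimator",

"sec_num": "2.2"

},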
|
{ |
|
"text": "Following ((Xu and Lapata, 2020)), the similarity matrix E is combined with the evidence estimator's , that is,", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Centrality Estimator", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "E = w * [q; ...;q] + (1 \u2212 w) * E,", |
|
"eq_num": "(1)" |
|
} |
|
], |
|
"section": "Centrality Estimator", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "where w \u2208 (0, 1) controls the extent to which the evidence estimator can influence the final summarization, andq is obtained by normalizing the the evidence scores,q", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Centrality Estimator", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "EQUATION", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [ |
|
{ |
|
"start": 0, |
|
"end": 8, |
|
"text": "EQUATION", |
|
"ref_id": "EQREF", |
|
"raw_str": "= q |V | v q v .", |
|
"eq_num": "(2)" |
|
} |
|
], |
|
"section": "Centrality Estimator", |
|
"sec_num": "2.2" |
|
}, |
|
{ |
|
"text": "We run a Markov Chain on the graph and the final stationary distributionq * of this Markov chain serves as the final scores of each sentence.", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Centrality Estimator", |
|
"sec_num": "2.2" |
|
}, |
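{

"text": "The following NumPy sketch combines Equations (1) and (2) and runs the Markov chain by power iteration; the interpolation weight w and the convergence tolerance are illustrative choices:\n\nimport numpy as np\n\ndef centrality_scores(sim, evidence, w=0.5, tol=1e-6):\n    # sim: (N, N) sentence-similarity matrix E; evidence: (N,) scores q.\n    q_bar = evidence / evidence.sum()  # Eq. (2): normalized evidence scores\n    n = len(q_bar)\n    # Eq. (1): interpolate the evidence distribution with the similarity graph.\n    e_hat = w * np.tile(q_bar, (n, 1)) + (1 - w) * sim\n    e_hat = e_hat / e_hat.sum(axis=1, keepdims=True)  # row-stochastic transitions\n    p = np.full(n, 1.0 / n)\n    while True:  # power iteration until the stationary distribution\n        p_next = p @ e_hat\n        if np.abs(p_next - p).sum() < tol:\n            return p_next  # final per-sentence scores\n        p = p_next",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Centrality Estimator",

"sec_num": "2.2"

},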
|
{ |
|
"text": "Pre-trained models. In this section, we investigate the pretrained Seq2Seq models to obtain abstractive summarizations, after finetuning their on our datasets. We mainly investigate two types of models, BART ((Lewis et al., 2020)) and PEGASUS ( (Zhang et al., ", |
|
"cite_spans": [ |
|
{ |
|
"start": 245, |
|
"end": 259, |
|
"text": "(Zhang et al.,", |
|
"ref_id": null |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Abstractive summarization", |
|
"sec_num": "3" |
|
}, |
|
{ |
|
"text": "Finetuning techniques. In order to fine tune the pre-trained seq2seq model, we test some methods/techniques that can improve the performance of downstream tasks:", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2020a)). And experiments show the PEGASUS model is better", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Freezing a proportion of the parameters of the model;", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2020a)). And experiments show the PEGASUS model is better", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Advarsarial training method, i.e., Projected Gradient Descent (PGD, (Madry et al., 2018) ).", |
|
"cite_spans": [ |
|
{ |
|
"start": 70, |
|
"end": 90, |
|
"text": "(Madry et al., 2018)", |
|
"ref_id": "BIBREF4" |
|
} |
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2020a)). And experiments show the PEGASUS model is better", |
|
"sec_num": null |
|
}, |
|
{ |
|
"text": "\u2022 Backtraslation (from English to Thai, and then Thai to English) is applied for data augmentation. ", |
|
"cite_spans": [], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "2020a)). And experiments show the PEGASUS model is better", |
|
"sec_num": null |
|
}, |
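{

"text": "The sketch below illustrates the first two techniques on a PEGASUS model: freezing the lowest encoder/decoder layers and a simplified PGD-style perturbation of the input embeddings. The checkpoint name, the number of frozen layers, and the PGD hyperparameters are illustrative assumptions; back-translation is a round trip through an off-the-shelf translation system and is omitted here:\n\nimport torch\nfrom transformers import PegasusForConditionalGeneration\n\nmodel = PegasusForConditionalGeneration.from_pretrained('google/pegasus-large')\n\n# 1) Freeze the lowest layers of encoder and decoder (three is an assumption).\nfor layer in list(model.model.encoder.layers[:3]) + list(model.model.decoder.layers[:3]):\n    for p in layer.parameters():\n        p.requires_grad = False\n\ndef pgd_training_step(model, batch, epsilon=1.0, alpha=0.3, k=3):\n    # Simplified PGD (Madry et al., 2018) on the shared word embeddings: nudge\n    # the embedding table along the loss gradient, project onto an L-infinity\n    # ball of radius epsilon, and accumulate adversarial gradients.\n    # batch holds input_ids, attention_mask, and labels.\n    emb = model.get_input_embeddings()\n    loss = model(**batch).loss\n    loss.backward()  # gradients on the clean batch\n    backup = emb.weight.data.clone()\n    delta = torch.zeros_like(emb.weight)\n    for _ in range(k):\n        grad = emb.weight.grad\n        delta = (delta + alpha * grad / (grad.norm() + 1e-12)).clamp(-epsilon, epsilon)\n        emb.weight.data = backup + delta  # perturbed embeddings\n        model(**batch).loss.backward()  # accumulate adversarial gradients\n    emb.weight.data = backup  # restore clean weights before optimizer.step()\n    return loss",

"cite_spans": [],

"ref_spans": [],

"eq_spans": [],

"section": "Abstractive summarization",

"sec_num": null

},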
|
{ |
|
"text": "For QS tasks (Figure 1 and 2) , the source length distribution is consistent on the train/Val/test set, and the target length distribution is also consistent. For RRS tasks (7 and 8), we can observe that the sequence length distribution of train/ val/test set is different, which may lead to skewed model. For task 2, the length of the document varies, which is too long for pre-trained models like Pegasus. Therefore, for task 2, abstractive summaries are generated from extractive summaries. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 13, |
|
"end": 29, |
|
"text": "(Figure 1 and 2)", |
|
"ref_id": "FIGREF0" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "dataset statistics", |
|
"sec_num": "4.1" |
|
}, |
|
{ |
|
"text": "We first report the results on the QS task. First, we compare BART and PEGASUS (Table 1) , and find that PEGASUS performs significantly better than BART. Second, we compare PEGASUS with different number of layers freezed (Table 2) , and find that freezing three 3 layers obtains the best dev performance. Third, we compare the model with or without adversarial training (Table 3) , and show that adversarial training is important for this task.", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 79, |
|
"end": 88, |
|
"text": "(Table 1)", |
|
"ref_id": null |
|
}, |
|
{ |
|
"start": 221, |
|
"end": 230, |
|
"text": "(Table 2)", |
|
"ref_id": "TABREF1" |
|
}, |
|
{ |
|
"start": 370, |
|
"end": 379, |
|
"text": "(Table 3)", |
|
"ref_id": "TABREF2" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results on QS", |
|
"sec_num": "4.2" |
|
}, |
|
{ |
|
"text": "Now we report results on the MAS task (Table 4) . RoBERTa large performs better on coarse ranking than RoBERTa base. And using a model finetuned on GLUE also helps to improve the fine-tuning task. After centrality ranking with LexRank, the score improve by more than one percent. And our best score is obtained by using ensemble on the evidence estimators. ", |
|
"cite_spans": [], |
|
"ref_spans": [ |
|
{ |
|
"start": 38, |
|
"end": 47, |
|
"text": "(Table 4)", |
|
"ref_id": "TABREF3" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results on MAS", |
|
"sec_num": "4.3" |
|
}, |
|
{ |
|
"text": "Now we report results on the RRS task. We compare 4 groups of models, BERT-abs, T5 (Raffel et al., 2020) , BART and PEGASUS (Table 5) . PE-GASUS also performs best, like in the QS task. However, we find that the PEGASUS trained on PubMed performs significant worse, which is contradictory to our hypothesis that fine-tuning on related domain corpus is beneficial for downstream tasks.", |
|
"cite_spans": [ |
|
{ |
|
"start": 83, |
|
"end": 104, |
|
"text": "(Raffel et al., 2020)", |
|
"ref_id": "BIBREF5" |
|
} |
|
], |
|
"ref_spans": [ |
|
{ |
|
"start": 124, |
|
"end": 133, |
|
"text": "(Table 5)", |
|
"ref_id": "TABREF4" |
|
} |
|
], |
|
"eq_spans": [], |
|
"section": "Results on RRS", |
|
"sec_num": "4.4" |
|
}, |
|
{ |
|
"text": "In this work, we elaborate on the methods we employed for the three tasks in the MEDIQA 2021 shared tasks. For the extractive summarization of MAS task, we build upon Xu and Lapata (2020) , and achieve improvements by adding contexts and sentence position markers. For generating abstractive summaries, we leverage the pre-trained seq2seq models. To improve the fine-tuning performances on the downstream tasks, we implement a few techniques, like freezing part of the models, adversarial training and back-translation. Our team achieves the 1st place for the overall performances. In this work, we elaborate the methods used in the three shared tasks of mediqa 2021. For MAS task, we employ the methods that are similar to Xu and Lapata (2020) . In order to generate abstract abstracts, we take advantages of the pre-trained seq2seq model. In order to improve the fine-tuning performance of downstream tasks, we use freezing part of the model, adversarial training. Our team ranks first in the overall performances of the three task.", |
|
"cite_spans": [ |
|
{ |
|
"start": 167, |
|
"end": 187, |
|
"text": "Xu and Lapata (2020)", |
|
"ref_id": "BIBREF8" |
|
}
|
], |
|
"ref_spans": [], |
|
"eq_spans": [], |
|
"section": "Conclusion", |
|
"sec_num": "5" |
|
} |
|
], |
|
"back_matter": [], |
|
"bib_entries": { |
|
"BIBREF0": { |
|
"ref_id": "b0", |
|
"title": "On the role of question summarization and information source restriction in consumer health question answering", |
|
"authors": [ |
|
{ |
|
"first": "Asma", |
|
"middle": [], |
|
"last": "Ben Abacha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dina", |
|
"middle": [], |
|
"last": "Demner-Fushman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Translational Science", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "117--126", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Asma Ben Abacha and Dina Demner-Fushman. 2019. On the role of question summarization and informa- tion source restriction in consumer health question answering. AMIA Joint Summits on Translational Science proceedings. AMIA Joint Summits on Trans- lational Science, 2019:117-126.", |
|
"links": null |
|
}, |
|
"BIBREF1": { |
|
"ref_id": "b1", |
|
"title": "Lexrank: Graph-based lexical centrality as salience in text summarization", |
|
"authors": [ |
|
{ |
|
"first": "G\u00fcnes", |
|
"middle": [], |
|
"last": "Erkan", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dragomir", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Radev", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2004, |
|
"venue": "J. Artif. Intell. Res", |
|
"volume": "22", |
|
"issue": "", |
|
"pages": "457--479", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "G\u00fcnes Erkan and Dragomir R. Radev. 2004. Lexrank: Graph-based lexical centrality as salience in text sum- marization. J. Artif. Intell. Res., 22:457-479.", |
|
"links": null |
|
}, |
|
"BIBREF2": { |
|
"ref_id": "b2", |
|
"title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation", |
|
"authors": [ |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yinhan", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Marjan", |
|
"middle": [], |
|
"last": "Ghazvininejad", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Mohamed", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Ves", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "M. Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, A. Mohamed, Omer Levy, Ves Stoy- anov, and Luke Zettlemoyer. 2020. Bart: Denoising sequence-to-sequence pre-training for natural lan- guage generation, translation, and comprehension. ArXiv, abs/1910.13461.", |
|
"links": null |
|
}, |
|
"BIBREF3": { |
|
"ref_id": "b3", |
|
"title": "Roberta: A robustly optimized bert pretraining approach", |
|
"authors": [ |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Liu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Myle", |
|
"middle": [], |
|
"last": "Ott", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Naman", |
|
"middle": [], |
|
"last": "Goyal", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Jingfei", |
|
"middle": [], |
|
"last": "Du", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mandar", |
|
"middle": [], |
|
"last": "Joshi", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Danqi", |
|
"middle": [], |
|
"last": "Chen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "M", |
|
"middle": [], |
|
"last": "Lewis", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Luke", |
|
"middle": [], |
|
"last": "Zettlemoyer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Veselin", |
|
"middle": [], |
|
"last": "Stoyanov", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2019, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Y. Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, M. Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. Roberta: A robustly optimized bert pretraining approach. ArXiv, abs/1907.11692.", |
|
"links": null |
|
}, |
|
"BIBREF4": { |
|
"ref_id": "b4", |
|
"title": "Towards deep learning models resistant to adversarial attacks", |
|
"authors": [ |
|
{ |
|
"first": "A", |
|
"middle": [], |
|
"last": "Madry", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Aleksandar", |
|
"middle": [], |
|
"last": "Makelov", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "L", |
|
"middle": [], |
|
"last": "Schmidt", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Tsipras", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adrian", |
|
"middle": [], |
|
"last": "Vladu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "ArXiv", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "A. Madry, Aleksandar Makelov, L. Schmidt, D. Tsipras, and Adrian Vladu. 2018. Towards deep learn- ing models resistant to adversarial attacks. ArXiv, abs/1706.06083.", |
|
"links": null |
|
}, |
|
"BIBREF5": { |
|
"ref_id": "b5", |
|
"title": "Exploring the limits of transfer learning with a unified text-to-text transformer", |
|
"authors": [ |
|
{ |
|
"first": "Colin", |
|
"middle": [], |
|
"last": "Raffel", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Noam", |
|
"middle": [], |
|
"last": "Shazeer", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Adam", |
|
"middle": [], |
|
"last": "Roberts", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Katherine", |
|
"middle": [], |
|
"last": "Lee", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Sharan", |
|
"middle": [], |
|
"last": "Narang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Michael", |
|
"middle": [], |
|
"last": "Matena", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Yanqi", |
|
"middle": [], |
|
"last": "Zhou", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "W", |
|
"middle": [], |
|
"last": "Li", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "J. Mach. Learn. Res", |
|
"volume": "21", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, W. Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text trans- former. J. Mach. Learn. Res., 21:140:1-140:67.", |
|
"links": null |
|
}, |
|
"BIBREF6": { |
|
"ref_id": "b6", |
|
"title": "Question-driven summarization of answers to consumer health questions", |
|
"authors": [ |
|
{ |
|
"first": "Max", |
|
"middle": [ |
|
"E" |
|
], |
|
"last": "Savery", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Asma", |
|
"middle": [], |
|
"last": "Ben Abacha", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Soumya", |
|
"middle": [], |
|
"last": "Gayen", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Dina", |
|
"middle": [], |
|
"last": "Demner-Fushman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Max E. Savery, Asma Ben Abacha, Soumya Gayen, and Dina Demner-Fushman. 2020. Question-driven sum- marization of answers to consumer health questions. Scientific Data, 7.", |
|
"links": null |
|
}, |
|
"BIBREF7": { |
|
"ref_id": "b7", |
|
"title": "Glue: A multi-task benchmark and analysis platform for natural language understanding", |
|
"authors": [ |
|
{ |
|
"first": "Alex", |
|
"middle": [], |
|
"last": "Wang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Amanpreet", |
|
"middle": [], |
|
"last": "Singh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Julian", |
|
"middle": [], |
|
"last": "Michael", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Felix", |
|
"middle": [], |
|
"last": "Hill", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Omer", |
|
"middle": [], |
|
"last": "Levy", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Samuel", |
|
"middle": [ |
|
"R" |
|
], |
|
"last": "Bowman", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2018. Glue: A multi-task benchmark and analysis plat- form for natural language understanding. In Black- boxNLP@EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF8": { |
|
"ref_id": "b8", |
|
"title": "Coarse-to-fine query focused multi-document summarization", |
|
"authors": [ |
|
{ |
|
"first": "Yumo", |
|
"middle": [], |
|
"last": "Xu", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mirella", |
|
"middle": [], |
|
"last": "Lapata", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "EMNLP", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yumo Xu and Mirella Lapata. 2020. Coarse-to-fine query focused multi-document summarization. In EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF9": { |
|
"ref_id": "b9", |
|
"title": "Pegasus: Pre-training with extracted gapsentences for abstractive summarization", |
|
"authors": [ |
|
{ |
|
"first": "Jingqing", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Y", |
|
"middle": [], |
|
"last": "Zhao", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Mohammad", |
|
"middle": [], |
|
"last": "Saleh", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Peter", |
|
"middle": [ |
|
"J" |
|
], |
|
"last": "Liu", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "ICML", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Jingqing Zhang, Y. Zhao, Mohammad Saleh, and Peter J. Liu. 2020a. Pegasus: Pre-training with extracted gap- sentences for abstractive summarization. In ICML.", |
|
"links": null |
|
}, |
|
"BIBREF10": { |
|
"ref_id": "b10", |
|
"title": "Learning to summarize radiology findings", |
|
"authors": [ |
|
{ |
|
"first": "Yuhao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "D", |
|
"middle": [], |
|
"last": "Ding", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Tianpei", |
|
"middle": [], |
|
"last": "Qian", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Langlotz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2018, |
|
"venue": "", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuhao Zhang, D. Ding, Tianpei Qian, Christopher D. Manning, and C. Langlotz. 2018. Learning to sum- marize radiology findings. In Louhi@EMNLP.", |
|
"links": null |
|
}, |
|
"BIBREF11": { |
|
"ref_id": "b11", |
|
"title": "Optimizing the factual correctness of a summary: A study of summarizing radiology reports", |
|
"authors": [ |
|
{ |
|
"first": "Yuhao", |
|
"middle": [], |
|
"last": "Zhang", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Derek", |
|
"middle": [], |
|
"last": "Merck", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "E", |
|
"middle": [], |
|
"last": "Tsai", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "Christopher", |
|
"middle": [ |
|
"D" |
|
], |
|
"last": "Manning", |
|
"suffix": "" |
|
}, |
|
{ |
|
"first": "C", |
|
"middle": [], |
|
"last": "Langlotz", |
|
"suffix": "" |
|
} |
|
], |
|
"year": 2020, |
|
"venue": "ACL", |
|
"volume": "", |
|
"issue": "", |
|
"pages": "", |
|
"other_ids": {}, |
|
"num": null, |
|
"urls": [], |
|
"raw_text": "Yuhao Zhang, Derek Merck, E. Tsai, Christopher D. Manning, and C. Langlotz. 2020b. Optimizing the factual correctness of a summary: A study of summa- rizing radiology reports. In ACL.", |
|
"links": null |
|
} |
|
}, |
|
"ref_entries": { |
|
"FIGREF0": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Source sequence length of QS." |
|
}, |
|
"FIGREF1": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Target sequence length of QS." |
|
}, |
|
"FIGREF2": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Query length of MAS." |
|
}, |
|
"FIGREF3": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Document length of MAS." |
|
}, |
|
"FIGREF4": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Extractive summary length of MAS." |
|
}, |
|
"FIGREF5": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "Abstractive summary length of MAS." |
|
}, |
|
"FIGREF6": { |
|
"type_str": "figure", |
|
"uris": null, |
|
"num": null, |
|
"text": "source length of task3 using PEGA-SUS tokenizerFigure 8: target length of task2 using PEGA-SUS tokenizer" |
|
}, |
|
"TABREF1": { |
|
"content": "<table><tr><td>with Adv training?</td><td>ROUGE-2</td></tr><tr><td>Yes</td><td>16.37</td></tr><tr><td>No</td><td>15.46</td></tr></table>", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"text": "Results of PEGASUS-large model, when we freeze different numbers of lower layers of the encoder and decoder." |
|
}, |
|
"TABREF2": { |
|
"content": "<table><tr><td>evidence estimator</td><td>centrality estimator</td><td>ROUGE-2</td></tr><tr><td/><td>dev set</td><td/></tr><tr><td>roberta-base</td><td>No</td><td>44.32</td></tr><tr><td>roberta-large</td><td>No</td><td>46.48</td></tr><tr><td>roberta-large + GLUE finetuning</td><td>No</td><td>47.13</td></tr><tr><td>roberta-large + GLUE finetuning</td><td>LexRank</td><td>48.24</td></tr><tr><td>ensemble models</td><td>LexRank</td><td>49.18</td></tr></table>", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"text": "Results of PEGASUS-large model, with or without adversarial training." |
|
}, |
|
"TABREF3": { |
|
"content": "<table><tr><td>model</td><td>ROUGE-2</td></tr><tr><td>BERT-abs</td><td>34.95</td></tr><tr><td>T5-small</td><td>45.46</td></tr><tr><td>T5-base</td><td>49.41</td></tr><tr><td>T5-large</td><td>50.68</td></tr><tr><td>BART-base</td><td>49.65</td></tr><tr><td>BART-large</td><td>49.81</td></tr><tr><td>PEGASUS-pubmed</td><td>45.93</td></tr><tr><td>PEGASUS-large</td><td>51.95</td></tr></table>", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"text": "Comparison of different models on dev set of the MAS task." |
|
}, |
|
"TABREF4": { |
|
"content": "<table/>", |
|
"type_str": "table", |
|
"num": null, |
|
"html": null, |
|
"text": "" |
|
} |
|
} |
|
} |
|
} |