{
"paper_id": "2021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:06:54.806596Z"
},
"title": "Optum at MEDIQA 2021: Abstractive Summarization of Radiology Reports using simple BART Finetuning",
"authors": [
{
"first": "Ravi",
"middle": [],
"last": "Kondadadi",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Sahil",
"middle": [],
"last": "Manchanda",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Jason",
"middle": [],
"last": "Ngo",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Ronan",
"middle": [],
"last": "Mccormack",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper describes experiments undertaken and their results as part of the BioNLP MEDIQA 2021 challenge. We participated in Task 3: Radiology Report Summarization. Multiple runs were submitted for evaluation, from solutions leveraging transfer learning from pre-trained transformer models, which were then fine tuned on a subset of MIMIC-CXR, for abstractive report summarization. The task was evaluated using ROUGE and our best performing system obtained a ROUGE-2 score of 0.392.",
"pdf_parse": {
"paper_id": "2021",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper describes experiments undertaken and their results as part of the BioNLP MEDIQA 2021 challenge. We participated in Task 3: Radiology Report Summarization. Multiple runs were submitted for evaluation, from solutions leveraging transfer learning from pre-trained transformer models, which were then fine tuned on a subset of MIMIC-CXR, for abstractive report summarization. The task was evaluated using ROUGE and our best performing system obtained a ROUGE-2 score of 0.392.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "A BioNLP 2021 shared task, the MEDIQA challenge aims to attract research efforts in NLU across three summarization tasks in the medical domain: multi-answer summarization, and radiology report summarization. We participated in the radiology report summarization and offer experiments and results. A radiology report describes an exam and patient information resulting from trained clinicians(radiologists) interpreting imaging studies during routine clinical care (Zhang et al., 2018) . The primary purpose of the report is for radiologists to communicate imaging results to ordering physicians (Gershanik et al., 2011) . A standard report will consist of a Background section which will contain details of the patient and describe the examination undertaken, A findings section, in which the radiologist has dictated the initial results into the report, and an Impression section. The Impression section consists of a concise summarization of the most relevant details from the exam based on the dictated findings. Although guidelines for the practice of generating radiology reports are outlined by the American College of Radiology (ACR), there is flexibility in the document in the usage of terms for describing findings and where they are documented. This can lead to referring physicians focusing on just the impressions section of the document (Hall, 2012) . Additionally, the process of writing the impressions from the dictation of the findings is time-consuming and repetitive. In this work we propose experiments to automate the generation of the impressions section from the findings of the radiology report, accelerating the radiology workflow and improving the efficiency of clinical communications. Experiments were performed implementing sequence to sequence models with encoder-decoder architecture like BART (Lewis et al., 2019) , Pegasus (Zhang et al., 2020a) , and T5 (Raffel et al., 2020) . These models were then further fine-tuned on a subset of MIMIC-CXR Dataset (Johnson et al., 2019) , to generate abstractive summaries from the findings section of the report. MIMIC-CXR is de-identified and Protected health information (PHI) removed, large publicly available dataset of chest radiographs in DICOM format with free-text radiology reports. A subset of MIMIC-CXR and Indiana datasets 1 used for validation carried out using standard ROUGE (Lin, 2004) metrics.",
"cite_spans": [
{
"start": 464,
"end": 484,
"text": "(Zhang et al., 2018)",
"ref_id": "BIBREF24"
},
{
"start": 595,
"end": 619,
"text": "(Gershanik et al., 2011)",
"ref_id": "BIBREF6"
},
{
"start": 1351,
"end": 1363,
"text": "(Hall, 2012)",
"ref_id": "BIBREF7"
},
{
"start": 1826,
"end": 1846,
"text": "(Lewis et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 1857,
"end": 1878,
"text": "(Zhang et al., 2020a)",
"ref_id": "BIBREF23"
},
{
"start": 1888,
"end": 1909,
"text": "(Raffel et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 1987,
"end": 2009,
"text": "(Johnson et al., 2019)",
"ref_id": null
},
{
"start": 2364,
"end": 2375,
"text": "(Lin, 2004)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
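To make the evaluation protocol concrete, here is a minimal scoring sketch using the open-source rouge-score package; it is our own illustration, not the official MEDIQA scoring script, and the toy strings are placeholders.

```python
# Illustrative ROUGE scoring sketch (not the official MEDIQA evaluation script).
from rouge_score import rouge_scorer

def average_rouge(predictions, references):
    """Average ROUGE-1/2/L F1 over (predicted impression, reference impression) pairs."""
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
    totals = {"rouge1": 0.0, "rouge2": 0.0, "rougeL": 0.0}
    for pred, ref in zip(predictions, references):
        scores = scorer.score(ref, pred)  # reference first, prediction second
        for key in totals:
            totals[key] += scores[key].fmeasure
    return {key: value / len(predictions) for key, value in totals.items()}

# Toy example with placeholder strings:
print(average_rouge(["no acute cardiopulmonary process"],
                    ["no acute cardiopulmonary abnormality"]))
```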
{
"text": "Initial efforts on summarization were mainly focused on Extractive summarization. Extractive summarization is the process involving extraction of noteworthy words from the text to form a summary. (Luhn, 1958; Kupiec et al., 1995) The advent of Neural network models enabled Abstractive summarization, which involves producing new words to convey the meaning of the text. This involves rephrasing the text in a shorter and more succinct form using similar but not the exact words used in the main text. Nallapati et al. (2016) proposed an RNN based approach to not only achieve state-of-the-art results in extractive summarization but also enable this model to be trained on abstractive summaries. Rush et al. (2015) described an attention-based summarization approach where an encoder and a generator model are jointly trained on article pairs. Their work builds on attention-based encoders that are used in neural machine translation (Bahdanau et al. (2016) ). Fan et al. (2018) build on the previous work on abstractive summarization to create length constrained summaries and summaries concentrated on particular entities and subjects in the text. Paulus et al. 2017used intra-temporal attention to produce state-of-the-art results on CNN/Daily Mail dataset. The work on summarizing radiology reports started with the extraction of information from the text (Friedman et al., 1995; Hassanpour and Langlotz, 2016) . For instance, Cornegruta et al. (2016) proposed using clinical language understanding of a radiology report to extract Named entities. A Bidirectional LSTM architecture was used to achieve this. Zhang et al. (2018) describes one of the first attempts at automatic summarization of radiology reports. This work describes an encoder-decoder architecture. Both the encoder and decoder sides are made of Bidirectional LSTMs using the attention framework (Bahdanau et al., 2016) . With the advent of transformers, Pretraining based language generation has been the norm in summarization. Zhang et al. (2019) and Liu (2019) used BERT (Devlin et al., 2019) , a pre-trained transformer model on extractive summarization, and achieved state of the art results. Sotudeh et al. (2020) proposed an approach to content selection for abstractive text summarization in clinical notes. Zhang et al. (2020b) presented a general framework and a training strategy to improve the factual correctness of neural abstractive summarization models for radiology reports. In this work, we fine-tune a pre-trained BART architecture (Lewis et al., 2019) for the radiology report summarization task.",
"cite_spans": [
{
"start": 196,
"end": 208,
"text": "(Luhn, 1958;",
"ref_id": "BIBREF14"
},
{
"start": 209,
"end": 229,
"text": "Kupiec et al., 1995)",
"ref_id": "BIBREF10"
},
{
"start": 502,
"end": 525,
"text": "Nallapati et al. (2016)",
"ref_id": "BIBREF15"
},
{
"start": 697,
"end": 715,
"text": "Rush et al. (2015)",
"ref_id": "BIBREF18"
},
{
"start": 935,
"end": 958,
"text": "(Bahdanau et al. (2016)",
"ref_id": "BIBREF0"
},
{
"start": 962,
"end": 979,
"text": "Fan et al. (2018)",
"ref_id": "BIBREF4"
},
{
"start": 1361,
"end": 1384,
"text": "(Friedman et al., 1995;",
"ref_id": "BIBREF5"
},
{
"start": 1385,
"end": 1415,
"text": "Hassanpour and Langlotz, 2016)",
"ref_id": "BIBREF8"
},
{
"start": 1432,
"end": 1456,
"text": "Cornegruta et al. (2016)",
"ref_id": "BIBREF2"
},
{
"start": 1613,
"end": 1632,
"text": "Zhang et al. (2018)",
"ref_id": "BIBREF24"
},
{
"start": 1868,
"end": 1891,
"text": "(Bahdanau et al., 2016)",
"ref_id": "BIBREF0"
},
{
"start": 2001,
"end": 2020,
"text": "Zhang et al. (2019)",
"ref_id": "BIBREF22"
},
{
"start": 2025,
"end": 2035,
"text": "Liu (2019)",
"ref_id": "BIBREF13"
},
{
"start": 2046,
"end": 2067,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 2170,
"end": 2191,
"text": "Sotudeh et al. (2020)",
"ref_id": "BIBREF19"
},
{
"start": 2288,
"end": 2308,
"text": "Zhang et al. (2020b)",
"ref_id": "BIBREF25"
},
{
"start": 2523,
"end": 2543,
"text": "(Lewis et al., 2019)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The objective of this task is to generate summary of a given radiology report. The training data for the MEDIQA 2021 Radiology report summarization shared task is extracted from a subset from the MIMIC-CXR Dataset (Johnson et al., 2019) . The training set contains around 91,544 examples of radiology reports and the corresponding summaries.",
"cite_spans": [
{
"start": 214,
"end": 236,
"text": "(Johnson et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description & Dataset",
"sec_num": "3"
},
{
"text": "Each example contains three fields; Findings field contains the original human-written radiology findings text, impression contains the human-written radiology impression text and background contains background information of the study in text format. One can use both the findings and the background fields to generate the summary. There are two development sets that come from two different institutes. The first development set from MIMIC-CXR contains around 2000 examples. There is another development set that also contains 2000 examples from the Indiana University radiology report dataset (Johnson et al., 2019) . In all our experiments, we first trained our model on the training set and tested on the validation set. For the actual task submissions, we trained our models by combining training set and both the development sets.",
"cite_spans": [
{
"start": 596,
"end": 618,
"text": "(Johnson et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Task Description & Dataset",
"sec_num": "3"
},
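As a concrete illustration of how training pairs can be assembled from these fields, the sketch below builds (source, target) examples; the dictionary keys mirror the field names described above, but the exact file format distributed by the organizers may differ, and build_example is a hypothetical helper.

```python
# Hypothetical helper for building summarization pairs from the task's three fields.
def build_example(record, use_background=False):
    """Return a (source, target) pair: findings (optionally with background) -> impression."""
    source = record["findings"]
    if use_background and record.get("background"):
        # Some of the runs described in Section 4.1 prepend the background section.
        source = record["background"] + " " + source
    return source, record["impression"]

record = {
    "background": "Chest radiograph performed for shortness of breath.",
    "findings": "Lungs are clear. No pleural effusion or pneumothorax.",
    "impression": "No acute cardiopulmonary process.",
}
print(build_example(record, use_background=True))
```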
{
"text": "Our proposed method leverages pretrained summarization models. We finetuned three types of pretrained models for the radiology report summarization; BART (Lewis et al., 2019) , T5 (Raffel et al., 2020) and Pegasus (Zhang et al., 2020a) . We used Huggingface Transformers (Wolf et al., 2020) library for finetuning.",
"cite_spans": [
{
"start": 154,
"end": 174,
"text": "(Lewis et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 180,
"end": 201,
"text": "(Raffel et al., 2020)",
"ref_id": "BIBREF17"
},
{
"start": 214,
"end": 235,
"text": "(Zhang et al., 2020a)",
"ref_id": "BIBREF23"
},
{
"start": 271,
"end": 290,
"text": "(Wolf et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method & Results",
"sec_num": "4"
},
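The sketch below shows one way such fine-tuning can be set up with the Huggingface Transformers Seq2SeqTrainer, using BART-base and the hyperparameters reported in Section 4.1; the toy DataFrames and output directory are placeholders, and this is only an approximation of the training setup, not the exact script behind the submissions.

```python
# Approximate fine-tuning sketch with Huggingface Transformers; placeholder toy data.
import pandas as pd
from datasets import Dataset
from transformers import (AutoTokenizer, BartForConditionalGeneration,
                          DataCollatorForSeq2Seq, Seq2SeqTrainer,
                          Seq2SeqTrainingArguments)

model_name = "facebook/bart-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)

# Placeholder data; in practice these would be the MEDIQA training and development sets.
train_df = pd.DataFrame([{"findings": "Lungs are clear.", "impression": "No acute process."}])
dev_df = train_df.copy()

def preprocess(batch):
    # Findings text is the encoder input; the impression is the decoder target.
    inputs = tokenizer(batch["findings"], max_length=1024, truncation=True)
    labels = tokenizer(text_target=batch["impression"], max_length=128, truncation=True)
    inputs["labels"] = labels["input_ids"]
    return inputs

train_ds = Dataset.from_pandas(train_df).map(preprocess, batched=True)
dev_ds = Dataset.from_pandas(dev_df).map(preprocess, batched=True)

args = Seq2SeqTrainingArguments(
    output_dir="bart-radiology-summarization",   # placeholder path
    learning_rate=5e-05,                         # hyperparameters reported in Section 4.1
    num_train_epochs=15,
    gradient_accumulation_steps=5,
    per_device_train_batch_size=20,
    per_device_eval_batch_size=20,
    predict_with_generate=True,
)
trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    eval_dataset=dev_ds,
    data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    tokenizer=tokenizer,
)
trainer.train()
```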
{
"text": "BART: Developed by Facebook, BART is a denoising autoencoder. Since it uses the standard transformer-based neural machine translation architecture, it is a generalization of both BERT and GPT3 (Brown et al., 2020) . For pretraining, it was trained by shuffling the order of sentence (an extension of next sentence prediction) and text infilling (an extension of the language masking). During text infilling, random spans of text are replaced by masked tokens. The job of the model during training is to recreate this span. Due to its flexible transformer architecture, the inputs to the encoder do not need to be aligned with the outputs of the decoder. This enables the BART model to be trained on a variety of tasks such as token masking, token deletion, sentence permutation, document rotation, etc. Since BART has an autoregressive decoder, it is better suited for sequence generation tasks such as summarization.",
"cite_spans": [
{
"start": 193,
"end": 213,
"text": "(Brown et al., 2020)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Method & Results",
"sec_num": "4"
},
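To illustrate the text-infilling noising described above, here is a toy, self-contained sketch (our own illustration, not code from the BART implementation); it replaces one randomly chosen span with a single mask token, with the span length drawn from a Poisson distribution as in the BART paper.

```python
# Toy illustration of BART-style text infilling; not the authors' pretraining code.
import numpy as np

def text_infill(tokens, mask_token="<mask>", poisson_lambda=3, seed=0):
    """Replace one random span (length ~ Poisson(lambda)) with a single mask token."""
    rng = np.random.default_rng(seed)
    span_length = min(int(rng.poisson(poisson_lambda)), len(tokens))
    start = int(rng.integers(0, len(tokens) - span_length + 1))
    corrupted = tokens[:start] + [mask_token] + tokens[start + span_length:]
    return corrupted, tokens  # noisy encoder input, original sequence as the target

tokens = "the lungs are clear without focal consolidation".split()
print(text_infill(tokens))
```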
{
"text": "T5 stands for Text-To-Text Transfer Transformer. It is a sequence-to-sequence model that takes in text and outputs text. This text-to-text framework enables one to use the same model, loss function, and hyperparameters on any NLP task, which can range from document summarization to classification. As a result, the way that data is fed into the model is quite different from models like BERT. The task description is used as a prefix to the input. For example, to translate a sentence from English to French, the input would be prefixed with \"translate English to French:\" Similarly, to summarize a passage, you would add the prefix \"summarize:\" followed by the text to be summarized. This text-to-text framework uses the same model across a range of tasks. T5 model made improvements on a wide range of categories such as model architecture, and pretraining objectives. T5 uses the standard transformer architecture (Vaswani et al., 2017) . For pretraining, T5 was trained on denoising, where spans of text are replaced with the drop token. The model objective is to reproduce the span of text given the drop token.",
"cite_spans": [
{
"start": 918,
"end": 940,
"text": "(Vaswani et al., 2017)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "T5:",
"sec_num": null
},
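A minimal usage sketch of this prefix convention is shown below with the stock t5-small checkpoint; a checkpoint fine-tuned on the radiology data would be loaded the same way, and the example findings text is a placeholder.

```python
# Minimal T5 usage sketch: the task is specified by a text prefix on the input.
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

findings = "Heart size is normal. Lungs are clear. No pleural effusion or pneumothorax."
inputs = tokenizer("summarize: " + findings, return_tensors="pt", truncation=True)
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```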
{
"text": "Pegasus: PEGASUS (Pre-training with Extracted Gap-sentences for Abstractive SUmmarization Sequence-to-sequence) Pegasus starts with the concept that if the pretraining task and fine-tuning task are closely related, then the model will perform better. As a result, they designed a pretraining task specifically for abstractive summarization. This pretraining task, gap-sentence generation, removes entire sentences from documents. The model's learning objective is to recover these sentences in the concatenated model output. Instead of randomly removing sentences, only the important sentences are removed, so that the model can reproduce these sentences that summarize the text. As a result of this pretraining task, Pegasus can achieve results like T5 with 5% of the parameters. Table 1 shows the model performance of each participant in the leaderboard for the top 10 teams. Only Rouge-2 F1 is shown because that was the metric used to rank the teams in this task. Our method ranked third on the leaderboard.",
"cite_spans": [],
"ref_spans": [
{
"start": 781,
"end": 788,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "T5:",
"sec_num": null
},
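The simplified sketch below is our own illustration of gap-sentence selection, not the PEGASUS implementation: each sentence is scored by ROUGE-1 F1 against the rest of the document, the top-scoring sentences are masked in the input, and their concatenation becomes the generation target.

```python
# Simplified illustration of PEGASUS gap-sentence generation (not the original implementation).
from rouge_score import rouge_scorer

def gap_sentence_example(sentences, n_gaps=1, mask_token="[MASK1]"):
    """Mask the sentences most 'important' to the document and use them as the target."""
    scorer = rouge_scorer.RougeScorer(["rouge1"])
    scored = []
    for i, sentence in enumerate(sentences):
        rest = " ".join(s for j, s in enumerate(sentences) if j != i)
        scored.append((scorer.score(rest, sentence)["rouge1"].fmeasure, i))
    selected = {i for _, i in sorted(scored, reverse=True)[:n_gaps]}
    source = " ".join(mask_token if i in selected else s for i, s in enumerate(sentences))
    target = " ".join(sentences[i] for i in sorted(selected))
    return source, target

sentences = [
    "The heart is mildly enlarged.",
    "There is mild pulmonary vascular congestion.",
    "The heart is enlarged with pulmonary vascular congestion.",
]
print(gap_sentence_example(sentences))
```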
{
"text": "We propose eight different runs for this task. Table 2 shows the evaluation of different models we experimented with on the development set. We experimented with different versions of BART, T5 and Pegasus on Huggingface Transformers. We ended up using BART-base, T5-small, T5-base and Pegasus-Pubmed due to memory limitations of our GPUs. The following set of hyperparameters are applied for the following runs. Learning rate=5e-05 , number of epochs=15, gradient accumulation steps=5. The evaluations results of various runs for the radiology report summarization task are summarized in Table 2. 1. Our first proposed method is based on BARTbase. We finetuned BART-base on the training set and tested on the development set. We used a batch size of 20 for both training and validation sets.",
"cite_spans": [],
"ref_spans": [
{
"start": 588,
"end": 596,
"text": "Table 2.",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.1"
},
{
"text": "addition to findings. In this case, we were able to use a batch size of 10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.1"
},
{
"text": "5. This run is same as the fourth one, but we used T5-small as our base model. A batch size of 10 was used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.1"
},
{
"text": "6. In this run, but we used Pegasus-pubmed as our base model. A batch size of 1 was used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.1"
},
{
"text": "7. This run is same as the first run, but we used T5-base as the base model. A batch size of 10 was used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.1"
},
{
"text": "8. In this run also, we used T5-base as the base model except that we also used background section. A batch size of 2 was used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.1"
},
{
"text": "Overall, the best results on the test set are achieved using the BART-base as the pre-trained model. The model is trained using just the findings section on the test set. But on the development set, using the background section in addition to the findings helped.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "4.1"
},
{
"text": "In this paper, we present all our experiments of fine-tuning pre-trained models for radiology report summarization. Our experiments demonstrate how an encoder-decoder architecture like BART, which achieved state-of-the-art results in text generation tasks outperforms other architectures in this particular task. Our methods proved effective on the summarization task and were ranked third on the leaderboard.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "5"
},
{
"text": "https://openi.nlm.nih.gov/faq/collection",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2016. Neural machine translation by jointly learning to align and translate.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Modelling radiological language with bidirectional long short-term memory networks",
"authors": [
{
"first": "Savelie",
"middle": [],
"last": "Cornegruta",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Bakewell",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "Withey",
"suffix": ""
},
{
"first": "Giovanni",
"middle": [],
"last": "Montana",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the Seventh International Workshop on Health Text Mining and Information Analysis",
"volume": "",
"issue": "",
"pages": "17--27",
"other_ids": {
"DOI": [
"10.18653/v1/W16-6103"
]
},
"num": null,
"urls": [],
"raw_text": "Savelie Cornegruta, Robert Bakewell, Samuel Withey, and Giovanni Montana. 2016. Modelling radio- logical language with bidirectional long short-term memory networks. In Proceedings of the Seventh International Workshop on Health Text Mining and Information Analysis, pages 17-27, Auxtin, TX. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language understand- ing.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Controllable abstractive summarization",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angela Fan, David Grangier, and Michael Auli. 2018. Controllable abstractive summarization.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Natural language processing in an operational clinical information system",
"authors": [
{
"first": "C",
"middle": [],
"last": "Friedman",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Hripcsak",
"suffix": ""
},
{
"first": "W",
"middle": [],
"last": "Dumouchel",
"suffix": ""
},
{
"first": "S",
"middle": [
"B"
],
"last": "Johnson",
"suffix": ""
},
{
"first": "P",
"middle": [
"D"
],
"last": "Clayton",
"suffix": ""
}
],
"year": 1995,
"venue": "Natural Language Engineering",
"volume": "1",
"issue": "1",
"pages": "83--108",
"other_ids": {
"DOI": [
"10.1017/S1351324900000061"
]
},
"num": null,
"urls": [],
"raw_text": "C. Friedman, G. Hripcsak, W. DuMouchel, S. B. John- son, and P. D. Clayton. 1995. Natural language pro- cessing in an operational clinical information sys- tem. Natural Language Engineering, 1(1):83-108.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Critical Finding Capture in the Impression Section of Radiology Reports",
"authors": [
{
"first": "Esteban",
"middle": [
"F"
],
"last": "Gershanik",
"suffix": ""
},
{
"first": "Ronilda",
"middle": [],
"last": "Lacson",
"suffix": ""
},
{
"first": "Ramin",
"middle": [],
"last": "Khorasani",
"suffix": ""
}
],
"year": 2011,
"venue": "AMIA Annu. Symp. Proc",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Esteban F. Gershanik, Ronilda Lacson, and Ramin Khorasani. 2011. Critical Finding Capture in the Im- pression Section of Radiology Reports. AMIA Annu. Symp. Proc., 2011:465.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Language of the Radiology Report",
"authors": [
{
"first": "Ferris",
"middle": [
"M"
],
"last": "Hall",
"suffix": ""
}
],
"year": 2012,
"venue": "Am. J. Roentgenol",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.2214/ajr.175.5.1751239"
]
},
"num": null,
"urls": [],
"raw_text": "Ferris M. Hall. 2012. Language of the Radiology Re- port. Am. J. Roentgenol.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Information extraction from multi-institutional radiology reports",
"authors": [
{
"first": "Saeed",
"middle": [],
"last": "Hassanpour",
"suffix": ""
},
{
"first": "Curtis",
"middle": [
"P"
],
"last": "Langlotz",
"suffix": ""
}
],
"year": 2016,
"venue": "Artificial intelligence in medicine",
"volume": "66",
"issue": "",
"pages": "29--39",
"other_ids": {
"DOI": [
"10.1016/j.artmed.2015.09.007"
]
},
"num": null,
"urls": [],
"raw_text": "Saeed Hassanpour and Curtis P Langlotz. 2016. In- formation extraction from multi-institutional radi- ology reports. Artificial intelligence in medicine, 66:29-39.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Seth Berkowitz, and Steven Horng. 2019. The MIMIC-CXR Database. Type: dataset",
"authors": [
{
"first": "E",
"middle": [
"W"
],
"last": "Alistair",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Roger",
"middle": [],
"last": "Pollard",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mark",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"DOI": [
"10.13026/C2JT1Q"
]
},
"num": null,
"urls": [],
"raw_text": "Alistair E. W. Johnson, Tom Pollard, Roger Mark, Seth Berkowitz, and Steven Horng. 2019. The MIMIC- CXR Database. Type: dataset.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "A trainable document summarizer",
"authors": [
{
"first": "Julian",
"middle": [],
"last": "Kupiec",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "Francine",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 18th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '95",
"volume": "",
"issue": "",
"pages": "68--73",
"other_ids": {
"DOI": [
"10.1145/215206.215333"
]
},
"num": null,
"urls": [],
"raw_text": "Julian Kupiec, Jan Pedersen, and Francine Chen. 1995. A trainable document summarizer. In Proceedings of the 18th Annual International ACM SIGIR Con- ference on Research and Development in Informa- tion Retrieval, SIGIR '95, page 68-73, New York, NY, USA. Association for Computing Machinery.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Yinhan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Naman",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Marjan",
"middle": [],
"last": "Ghazvininejad",
"suffix": ""
},
{
"first": "Abdelrahman",
"middle": [],
"last": "Mohamed",
"suffix": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Ves",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis, Yinhan Liu, Naman Goyal, Mar- jan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "ROUGE: A package for automatic evaluation of summaries",
"authors": [
{
"first": "Chin-Yew",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2004,
"venue": "Text Summarization Branches Out",
"volume": "",
"issue": "",
"pages": "74--81",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chin-Yew Lin. 2004. ROUGE: A package for auto- matic evaluation of summaries. In Text Summariza- tion Branches Out, pages 74-81, Barcelona, Spain. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Fine-tune bert for extractive summarization",
"authors": [
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yang Liu. 2019. Fine-tune bert for extractive summa- rization.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "The automatic creation of literature abstracts",
"authors": [
{
"first": "H",
"middle": [
"P"
],
"last": "Luhn",
"suffix": ""
}
],
"year": 1958,
"venue": "IBM Journal of Research and Development",
"volume": "2",
"issue": "2",
"pages": "159--165",
"other_ids": {
"DOI": [
"10.1147/rd.22.0159"
]
},
"num": null,
"urls": [],
"raw_text": "H. P. Luhn. 1958. The automatic creation of literature abstracts. IBM Journal of Research and Develop- ment, 2(2):159-165.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Summarunner: A recurrent neural network based sequence model for extractive summarization of documents",
"authors": [
{
"first": "Ramesh",
"middle": [],
"last": "Nallapati",
"suffix": ""
},
{
"first": "Feifei",
"middle": [],
"last": "Zhai",
"suffix": ""
},
{
"first": "Bowen",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ramesh Nallapati, Feifei Zhai, and Bowen Zhou. 2016. Summarunner: A recurrent neural network based se- quence model for extractive summarization of docu- ments.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "A deep reinforced model for abstractive summarization",
"authors": [
{
"first": "Romain",
"middle": [],
"last": "Paulus",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Romain Paulus, Caiming Xiong, and Richard Socher. 2017. A deep reinforced model for abstractive sum- marization.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Exploring the limits of transfer learning with a unified text-to",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Raffel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Adam",
"middle": [],
"last": "Roberts",
"suffix": ""
},
{
"first": "Katherine",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Sharan",
"middle": [],
"last": "Narang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Matena",
"suffix": ""
},
{
"first": "Yanqi",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text trans- former.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A neural attention model for abstractive sentence summarization",
"authors": [
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sen- tence summarization.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Attend to medical ontologies: Content selection for clinical abstractive summarization",
"authors": [
{
"first": "Sajad",
"middle": [],
"last": "Sotudeh",
"suffix": ""
},
{
"first": "Nazli",
"middle": [],
"last": "Goharian",
"suffix": ""
},
{
"first": "Ross",
"middle": [
"W"
],
"last": "Filice",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sajad Sotudeh, Nazli Goharian, and Ross W. Filice. 2020. Attend to medical ontologies: Content selec- tion for clinical abstractive summarization.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Pretraining-based natural language generation for text summarization",
"authors": [
{
"first": "Haoyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jianjun",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Ji",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Haoyu Zhang, Jianjun Xu, and Ji Wang. 2019. Pretraining-based natural language generation for text summarization.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Pegasus: Pre-training with extracted gap-sentences for abstractive summarization",
"authors": [
{
"first": "Jingqing",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yao",
"middle": [],
"last": "Zhao",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Saleh",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"J"
],
"last": "Liu",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jingqing Zhang, Yao Zhao, Mohammad Saleh, and Pe- ter J. Liu. 2020a. Pegasus: Pre-training with ex- tracted gap-sentences for abstractive summarization.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Learning to summarize radiology findings",
"authors": [
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Daisy",
"middle": [
"Yi"
],
"last": "Ding",
"suffix": ""
},
{
"first": "Tianpei",
"middle": [],
"last": "Qian",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Curtis",
"middle": [
"P"
],
"last": "Langlotz",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuhao Zhang, Daisy Yi Ding, Tianpei Qian, Christo- pher D. Manning, and Curtis P. Langlotz. 2018. Learning to summarize radiology findings.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Optimizing the factual correctness of a summary: A study of summarizing radiology reports",
"authors": [
{
"first": "Yuhao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Derek",
"middle": [],
"last": "Merck",
"suffix": ""
},
{
"first": "Emily",
"middle": [
"Bao"
],
"last": "Tsai",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Curtis",
"middle": [
"P"
],
"last": "Langlotz",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuhao Zhang, Derek Merck, Emily Bao Tsai, Christo- pher D. Manning, and Curtis P. Langlotz. 2020b. Optimizing the factual correctness of a summary: A study of summarizing radiology reports.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"html": null,
"num": null,
"content": "<table><tr><td/><td colspan=\"3\">: Top 10 teams on the leaderboard</td></tr><tr><td colspan=\"4\">Run Rouge 1 Rouge 2 Rouge L</td></tr><tr><td>1</td><td>60.51</td><td>48.14</td><td>57.65</td></tr><tr><td>2</td><td>52.35</td><td>40.98</td><td>50.41</td></tr><tr><td>3</td><td>35.72</td><td>22.69</td><td>31.53</td></tr><tr><td>4</td><td>63.47</td><td>51.35</td><td>60.54</td></tr><tr><td>5</td><td>56.14</td><td>44.65</td><td>53.98</td></tr><tr><td>6</td><td>37.8</td><td>24.73</td><td>33.80</td></tr><tr><td>7</td><td>58.59</td><td>46.5</td><td>56.01</td></tr><tr><td>8</td><td>62.85</td><td>51.22</td><td>60.25</td></tr></table>",
"type_str": "table",
"text": ""
},
"TABREF2": {
"html": null,
"num": null,
"content": "<table/>",
"type_str": "table",
"text": "Evaluation of Radiology Report Summarization on the development set"
}
}
}
}