ACL-OCL / Base_JSON /prefixB /json /bionlp /2020.bionlp-1.6.json
Benjamin Aw
Add updated pkl file v3
6fa4bc9
{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T01:06:59.348414Z"
},
"title": "Towards Visual Dialog for Radiology",
"authors": [
{
"first": "Olga",
"middle": [],
"last": "Kovaleva",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Massachussets",
"location": {
"settlement": "Lowell"
}
},
"email": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Shivade",
"suffix": "",
"affiliation": {
"laboratory": "Amazon IBM Almaden Research Center",
"institution": "",
"location": {}
},
"email": ""
},
{
"first": "Satyananda",
"middle": [],
"last": "Kashyap",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Karina",
"middle": [],
"last": "Kanjaria",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Adam",
"middle": [],
"last": "Coy",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Deddeh",
"middle": [],
"last": "Ballah",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Joy",
"middle": [],
"last": "Wu",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Yufan",
"middle": [],
"last": "Guo",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Alexandros",
"middle": [],
"last": "Karargyris",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "David",
"middle": [],
"last": "Beymer",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Anna",
"middle": [],
"last": "Rumshisky",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Massachussets",
"location": {
"settlement": "Lowell"
}
},
"email": ""
},
{
"first": "Vandana",
"middle": [],
"last": "Mukherjee",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Current research in machine learning for radiology is focused mostly on images. There exists limited work in investigating intelligent interactive systems for radiology. To address this limitation, we introduce a realistic and information-rich task of Visual Dialog in radiology, specific to chest X-ray images. Using MIMIC-CXR, an openly available database of chest X-ray images, we construct both a synthetic and a real-world dataset and provide baseline scores achieved by state-of-theart models. We show that incorporating medical history of the patient leads to better performance in answering questions as opposed to conventional visual question answering model which looks only at the image. While our experiments show promising results, they indicate that the task is extremely challenging with significant scope for improvement. We make both the datasets (synthetic and gold standard) and the associated code publicly available to the research community.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Current research in machine learning for radiology is focused mostly on images. There exists limited work in investigating intelligent interactive systems for radiology. To address this limitation, we introduce a realistic and information-rich task of Visual Dialog in radiology, specific to chest X-ray images. Using MIMIC-CXR, an openly available database of chest X-ray images, we construct both a synthetic and a real-world dataset and provide baseline scores achieved by state-of-theart models. We show that incorporating medical history of the patient leads to better performance in answering questions as opposed to conventional visual question answering model which looks only at the image. While our experiments show promising results, they indicate that the task is extremely challenging with significant scope for improvement. We make both the datasets (synthetic and gold standard) and the associated code publicly available to the research community.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Answering questions about an image is a complex multi-modal task demonstrating an important capability of artificial intelligence. A well-defined task evaluating such capabilities is Visual Question Answering (VQA) (Antol et al., 2015) where a system answers free-form questions reasoning about an image. VQA demands careful understanding of elements in an image along with intricacies of the language used in framing a question about it. Visual Dialog (VisDial) (Das et al., 2017; de Vries et al., 2016) is an extension to the VQA problem, where a system is required to engage in a dialog about the image. This adds significant complexity to VQA where a system should now be able to associate the question in the image, and reason * Equal contribution, Work done at IBM Research over additional information gathered from previous question answers in the dialog.",
"cite_spans": [
{
"start": 215,
"end": 235,
"text": "(Antol et al., 2015)",
"ref_id": "BIBREF6"
},
{
"start": 463,
"end": 481,
"text": "(Das et al., 2017;",
"ref_id": "BIBREF10"
},
{
"start": 482,
"end": 504,
"text": "de Vries et al., 2016)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Although limited work exploring VQA in radiology exists, VisDial in radiology remains an unexplored problem. With the healthcare setting increasingly requiring efficiency, evaluation of physicians is now based on both the quality and the timeliness of patient care. Clinicians often depend on official reports of imaging exam findings from radiologists to determine the appropriate next step. However, radiologists generally have a long queue of imaging studies to interpret and report, causing subsequent delay in patient care (Bhargavan et al., 2009; Siewert et al., 2016) . Furthermore, it is common practice for clinicians to call radiologists asking follow-up questions on the official reporting, leading to further inefficiencies and disruptions in the workflow (Mangano et al., 2014) .",
"cite_spans": [
{
"start": 528,
"end": 552,
"text": "(Bhargavan et al., 2009;",
"ref_id": "BIBREF7"
},
{
"start": 553,
"end": 574,
"text": "Siewert et al., 2016)",
"ref_id": "BIBREF32"
},
{
"start": 768,
"end": 790,
"text": "(Mangano et al., 2014)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Visual dialog is a useful imaging adjunct that can help expedite patient care. It can potentially answer a physician's questions regarding official interpretations without interrupting the radiologist's workflow, allowing the radiologist to concentrate their efforts on interpreting more studies in a timely manner. Additionally, visual dialog could provide clinicians with a preliminary radiology exam interpretation prior to receiving the formal dictation from the radiologist. Clinicians could use the information to start planning patient care and decrease the time from the completion of the radiology exam to subsequent medical management (Halsted and Froehle, 2008) .",
"cite_spans": [
{
"start": 645,
"end": 672,
"text": "(Halsted and Froehle, 2008)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we address these gaps and make the following contributions: 1) we introduce construction of RadVisDial -the first publicly available dataset for visual dialog in radiology, derived from the MIMIC-CXR (Johnson et al., 2019) dataset, 2) we compare several state-of-the-art models for VQA and VisDial applied to these images, and 3) we conduct a comprehensive set of experiments highlighting different challenges of the problem and propose solutions to overcome them.",
"cite_spans": [
{
"start": 205,
"end": 237,
"text": "MIMIC-CXR (Johnson et al., 2019)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most of the large publicly available datasets (Kaggle, 2017; Rajpurkar et al., 2017) for radiology consist of images associated with a limited amount of structured information. For example, Irvin et al. (2019) ; Johnson et al. (2019) make images available along with the output of a text extraction module that produces labels for 13 abnormalities in a chest X-ray. Of note recently, the task of generating reports from radiology images has become popular in the research community (Jing et al., 2018; . Two recent shared tasks at Image-CLEF explored the VQA problem with radiology images Abacha et al., 2019) . also released a small dataset VQA-RAD for the specific task.",
"cite_spans": [
{
"start": 46,
"end": 60,
"text": "(Kaggle, 2017;",
"ref_id": "BIBREF21"
},
{
"start": 61,
"end": 84,
"text": "Rajpurkar et al., 2017)",
"ref_id": "BIBREF28"
},
{
"start": 190,
"end": 209,
"text": "Irvin et al. (2019)",
"ref_id": "BIBREF17"
},
{
"start": 482,
"end": 501,
"text": "(Jing et al., 2018;",
"ref_id": "BIBREF18"
},
{
"start": 589,
"end": 609,
"text": "Abacha et al., 2019)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The first VQA shared task at ImageCLEF (Hasan et al., 2018) used images from articles at PubMed Central. While Abacha et al. 2019and use clinical images, the sizes of these datasets are limited. They are a mix of several modalities including 2D modalities such as X-rays, and 3D modalities such as ultrasound, MRI, and CT scans. They also cover several anatomic locations from the brain to the limbs. This makes a multimodal task with such images overly challenging, with shared task participants developing separate models (Al-Sadi et al., 2019; Kornuta et al., 2019) to first address these subtasks (such as modality detection) before actually solving the problem of VQA.",
"cite_spans": [
{
"start": 524,
"end": 546,
"text": "(Al-Sadi et al., 2019;",
"ref_id": "BIBREF3"
},
{
"start": 547,
"end": 568,
"text": "Kornuta et al., 2019)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We address these limitations and build up on MIMIC-CXR (Johnson et al., 2019) the largest publicly available dataset of chest X-rays and corresponding reports. We focus on the problem of visual dialog for a single modality and anatomy in the form of 2D chest X-rays. We restrict the number of questions and generate answers for them automatically which allows us to report results on a large set of images.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The MIMIC-CXR dataset 1 consists of 371,920 chest X-ray images in the Digital Imaging and Communications (DICOM) format along with 1 https://physionet.org/content/ mimic-cxr/1.0.0/ 206,576 reports. Each report is well structured and typically consists of sections such as Medical Condition, Comparison, Findings, and Impression. Each report can map to one or more images and each patient can have one or more reports. The images consist of both frontal and lateral views. The frontal views are either anterior-posterior (AP) or posterior-anterior (PA). The initial release of data also consists of annotations for 14 labels (13 abnormalities and one No Findings label) for each image. These annotations are obtained by running the CheXpert labeler (Irvin et al., 2019) ; a rule-based NLP pipeline against the associated report. The labeler output assigns one of four possibilities for each of the 13 abnormalities: {yes, no, maybe, not mentioned in the report}.",
"cite_spans": [
{
"start": 748,
"end": 768,
"text": "(Irvin et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "MIMIC-CXR",
"sec_num": "3.1"
},
{
"text": "Every training record of the original VisDial dataset (Das et al., 2017) consists of three elements: an image I, a caption for the image C, and a dialog history H consisting of a sequence of ten questionanswer pairs. Given the image I, the caption C, a possibly empty dialog history H, and a followup question q, the task is to generate an answer a where {q, a} \u2208 H. Following the original formulation, we synthetically create our dataset using the plain text reports associated with each image (this synthetic dataset will be considered to be silverstandard data for the experiments described in section 5). The Medical Condition section of the radiology report is a single sentence describing the medical history of the patient. We treat this sentence from the Medical Condition section as the caption of the image. We use NegBio for extracting sections within a report.",
"cite_spans": [
{
"start": 54,
"end": 72,
"text": "(Das et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Dialog dataset construction",
"sec_num": "3.2"
},
{
"text": "We discard all images that do not have a medical condition in their report. Further, each CheXpert label is formulated as a question probing the presence of a disorder, and the output from the labeler is treated as the corresponding answer. Thus, ignoring the No Findings label, there are 52 possible question-answer pairs as a result of 13 questions and 4 possible answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Visual Dialog dataset construction",
"sec_num": "3.2"
},
{
"text": "We decided to focus on PA images for most of our experiments as this is the most informative view for chest X-rays, according to our team radiologists. The original VisDial dataset (Das et al., 2017) consists of ten questions per dialog and one dialog per image. Since we only have a set of 13 possible questions, we limit the length of the dialog to 5 randomly sampled questions. The resulting dataset has 91060 images in the PA view (with train/validation/test splits containing 77205, 7340 and 6515 images, respectively). This synthetic data will be made available through the MIMIC Derived Data Repository. 2 Thus any individual with access to MIMIC-CXR will have access to our data. Figure 1 shows an example from our dataset and how it compares with one from VisDial 1.0.",
"cite_spans": [
{
"start": 181,
"end": 199,
"text": "(Das et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [
{
"start": 688,
"end": 694,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Visual Dialog dataset construction",
"sec_num": "3.2"
},
{
"text": "The questions in our dataset are limited to probing the presence of an abnormality in a chest X-ray. Similarly, the answers are limited to one of the four choices. Owing to the restricted nature of the problem, we deviate from the evaluation protocol outlined in (Das et al., 2017) and instead calculate the F1-score for each of the four answers. We also report a macro-averaged F1 score across the four answers to make model comparisons easier.",
"cite_spans": [
{
"start": 263,
"end": 281,
"text": "(Das et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation",
"sec_num": "3.3"
},
{
"text": "For our experiments, we selected a set of models designed for image-based question answering tasks. Namely, we experimented with three architectures: Stacked Attention Network (SAN) (Yang et al., 2016) , Late Fusion Network (LF) (Das et al., 2017) , and Recursive Visual Attention Network (RVA) (Niu et al., 2019) . Following the original VisDial study (Das et al., 2017) , we use an encoder-decoder structure with a discriminative decoder for each of the models. Below we give an overview of all the three algorithms.",
"cite_spans": [
{
"start": 182,
"end": 201,
"text": "(Yang et al., 2016)",
"ref_id": "BIBREF37"
},
{
"start": 229,
"end": 247,
"text": "(Das et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 295,
"end": 313,
"text": "(Niu et al., 2019)",
"ref_id": "BIBREF25"
},
{
"start": 353,
"end": 371,
"text": "(Das et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "4"
},
{
"text": "The original configuration of SAN was introduced for the general-domain VQA task. The model performs multi-step reasoning by refining question-2 https://physionet.org/physiotools/ mimic-code/HEADER.shtml guided attention over image features in an iterative manner. The attended image features are then combined with the question features for answer prediction. SAN has been successfully adapted for medical VQA tasks such as VQA-RAD (Lau et al., 2018) and VQA-Med task of the ImageCLEF 2018 challenge (Ionescu et al., 2018) . In our setup, we use a stack of two image attention layers and an LSTM-based question representation.",
"cite_spans": [
{
"start": 501,
"end": 523,
"text": "(Ionescu et al., 2018)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Stacked Attention Network",
"sec_num": "4.1"
},
{
"text": "To take the dialog history into account and therefore adjust the SAN model for the needs of the Visual Dialog task, we modify the first image attention layer of the network by adding a term for LSTM representation of the history. This modification forces the image attention weights to become both question-and history-guided (see Figure 2 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 331,
"end": 339,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Stacked Attention Network",
"sec_num": "4.1"
},
{
"text": "Proposed by (Das et al., 2017 ) as a baseline model for the Visual Dialog task, Late Fusion Network encodes the question and the dialog history through two separate RNNs, and the image through a CNN. The resulting representations are simply concate-nated in a single vector, which is then used by a decoder for predicting the answer. We use this model unchanged, as released in the original Visual Dialog challenge.",
"cite_spans": [
{
"start": 12,
"end": 29,
"text": "(Das et al., 2017",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Late Fusion Network",
"sec_num": "4.2"
},
{
"text": "This model is the winner of the 2019 Visual Dialog challenge 3 . It recursively browses the past history of dialog turns until the current question is paired with the turn containing the most relevant information. This strategy is particularly useful for resolving co-references, naturally occurring in general-domain dialog questions. As previously, we do not modify the architecture of the model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Recursive Visual Attention",
"sec_num": "4.3"
},
{
"text": "This section presents our down-sampling strategy, gives details about conducted ablation studies, and describes experiments with various representations of images and texts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "A closer analysis of our data showed that the majority of the reports processed by the CheXpert labeler resulted in no mention of most of the 13 pathologies. This presented a heavily skewed dataset that would lead to a biased model instead of true visual understanding. This issue is not unique to radiology; it is observed even in the current benchmarks for VQA, and attempts have been made to mitigate the resulting problems (Hudson and Manning, 2019; Agrawal et al., 2018) .",
"cite_spans": [
{
"start": 454,
"end": 475,
"text": "Agrawal et al., 2018)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Downsampling",
"sec_num": "5.1"
},
{
"text": "In order to dissuade the answer biases, we performed data balancing, specifically by downsampling major labels in our dataset. As mentioned above, the CheXpert labeler outputs four possible answers for 13 labels. To investigate the skew in the data, we plotted a distribution of the 52 questionanswer pairs (Figure 3 ). Further, we downsampled the question-answer pairs to fit a smoother answer distribution with the method presented in GQA based on the Earth Mover's Distance method (Hudson and Manning, 2019; Rubner et al., 2000) . We iterated over the 52 pairs in decreasing frequency order and downsampled the categories belonging to the skewed head of the distribution. The relative label ranks by frequency remained the same for the balanced sets as with the unbalanced sets. For example, the pairs {'Other pleural findings' \u2192 'Not in report' } and {'Fracture' \u2192 'Not in report' } remained the first and second largest counts in both the unbalanced and downsampled versions of the datasets. To reduce the disparity between dominant and underrepresented categories, we tuned the parameters outlined in (Hudson and Manning, 2019). We experimented with two different sets of parameter values and obtained two datasets with more balanced question-answer distributions. We further refer to them as \"minor\" and \"major\" downsampling, reflecting the total amount of data reduced (shown in blue and gray in Figure 3 ).",
"cite_spans": [
{
"start": 484,
"end": 510,
"text": "(Hudson and Manning, 2019;",
"ref_id": "BIBREF15"
},
{
"start": 511,
"end": 531,
"text": "Rubner et al., 2000)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [
{
"start": 307,
"end": 316,
"text": "(Figure 3",
"ref_id": "FIGREF3"
},
{
"start": 1404,
"end": 1412,
"text": "Figure 3",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Downsampling",
"sec_num": "5.1"
},
{
"text": "To assess the importance of the dialog context for question answering, we compare the performance of different variations of the Stacked Attention Network, selected as the best-performing model in the previous experiment (see subsection 6.1). In particular, we examine three scenarios: (a) the model makes a prediction based solely on a given image (essentially solving the VQA task rather than the Visual Dialog task), (b) the model makes its prediction given an image and its caption, and (c) the model makes its prediction given an image, a caption, and a history of question-answer pairs. Similar to the model modifications described in subsection 4.1 and Figure 2 , we achieve the goal through experimenting with the SAN model by changing its first image attention layer to accordingly take in (a) question and image features, (b) question, image, and caption features, and (c) question, image, and full dialog history features.",
"cite_spans": [],
"ref_spans": [
{
"start": 660,
"end": 668,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Evaluating importance of context",
"sec_num": "5.2"
},
{
"text": "We test three approaches for pre-trained image representations. The first approach uses a ResNet-101 architecture for multiclass classification of input X-ray images into 14 finding labels extracted from the associated reports (as described in section 3.2). Our second method aims to replicate the original CheXpert study (Irvin et al., 2019) . Here we use a DenseNet-121 image classifier trained for prediction of five pre-selected and clinically important labels, namely, atelectasis, cardiomegaly, consolidation, edema, and pleural effusion. In both ResNet and DenseNet-based approaches we take the features obtained from the last pooling layer.",
"cite_spans": [
{
"start": 322,
"end": 342,
"text": "(Irvin et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Image representations",
"sec_num": "5.3"
},
{
"text": "Finally, we adopted a bottom-up mechanism for image region proposal introduced by Anderson et al. (2018) . More specifically, we first trained a neural network predicting bounding boxes for the image regions, corresponding to a set of 11 handcrafted clinical annotations adopted from an existing chest X-ray dataset 4 . We then represented every region as a latent feature vector of a trained patch-wise convolution autoencoder, and (3) concatenated all the obtained vectors to represent the entire image.",
"cite_spans": [
{
"start": 82,
"end": 104,
"text": "Anderson et al. (2018)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Image representations",
"sec_num": "5.3"
},
{
"text": "Based on the results of the experiment (subsection 6.3), we found that ResNet-101 image vectors yielded the best performance, so we used them in other experiments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Image representations",
"sec_num": "5.3"
},
{
"text": "One of the crucial aspects of X-ray radiography exams is to capture the subject from multiple views. Typically, in case of chest X-rays, radiologists order an additional lateral view to confirm and locate findings that are not clearly visible from a frontal (PA or AP) view. We test whether the VisDial models are able to leverage the additional visual information offered by a lateral (LAT) view. We filter the data down to the patients whose chest Xray exams had both a frontal and lateral views and re-sample the resulting data-set into train (52952 PA and 8086 AP images), validation (6614 PA and 964 AP images), and test (6508 PA and 1035 AP images). We train a separate ResNet-101 model for each of the three views on this re-sampled data using the method described in the previous section. The vector representations of a frontal view and the corresponding lateral view are concatenated as an aggregate image representation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Effect of incorporating a lateral view",
"sec_num": "5.4"
},
{
"text": "Finally, we investigate the best way for representing the textual data by incorporating different pretrained word vectors. More specifically, we measure the performance of our best-performing SAN model reached with (a) randomly initialized word embeddings trained jointly with the rest of the models, (b) domain-independent GloVe Common Crawl embeddings (Pennington et al., 2014) , and (c) domain-specific fastText embeddings trained by (Romanov and Shivade, 2018) . The latter are initialized with GloVe embeddings trained on Common Crawl, followed by training on 12M PubMed abstracts, and finally on 2M clinical notes from MIMIC-III database (Johnson et al., 2016) . In all the experiments, we use 300-dimensional word vectors. We also experimented with transformerbased contextual vectors using BERT (Devlin et al., 2019) . More specifically, instead of using LSTM representations of the textual data, we extracted the last layer vectors from ClinicalBERT (Alsentzer et al., 2019) pre-trained on MIMIC notes, and averaged them over input sequence tokens.",
"cite_spans": [
{
"start": 354,
"end": 379,
"text": "(Pennington et al., 2014)",
"ref_id": "BIBREF27"
},
{
"start": 437,
"end": 464,
"text": "(Romanov and Shivade, 2018)",
"ref_id": "BIBREF29"
},
{
"start": 644,
"end": 666,
"text": "(Johnson et al., 2016)",
"ref_id": "BIBREF20"
},
{
"start": 803,
"end": 824,
"text": "(Devlin et al., 2019)",
"ref_id": "BIBREF11"
},
{
"start": 959,
"end": 983,
"text": "(Alsentzer et al., 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Text representations",
"sec_num": "5.5"
},
{
"text": "In a visual dialog setting, a model is conditioned on the image vector, the image caption, and the dialog history to predict the answer to a new question.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question order",
"sec_num": "5.6"
},
{
"text": "We hypothesized that a model should be able to answer later questions in a dialog better since it has more information from the previous questions and their answers. As described in Section 3.2, we randomly sample 5 questions out of 13 possible choices to construct a dialog. We re-ordered the question-answer pairs in the dialog to reflect the order in which the corresponding abnormality label mentions occurred in the report. However, results for questions ordered based on their occurrence in the narrative did not vary from the setup with a random order of questions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Question order",
"sec_num": "5.6"
},
{
"text": "We report macro-averaged F1-scores achieved on the same unbalanced validation set for each of the experiments. When experimenting with different configurations of the same model, we also break down the aggregate score to the F1 scores for individual answer options. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "One of the main findings of our study revealed the importance of contextual information for answering questions about a given image. As shown in Table 1 , adding the image caption and the history of turns results in incremental increases of macro F1-scores. Notably, the VQA setup in which the model relies on the image only, it fails to detect the 'No' answer, whereas the history-aware configuration leads to a significant performance gain for this particular label. As expected and due to the skewed nature of the data-set, the highest and the lowest per-label scores were achieved for the most and the least frequent labels ('Not in report' and 'Maybe'), respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 145,
"end": 152,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluating importance of context",
"sec_num": "6.2"
},
{
"text": "Out of the tested image representations, ResNetderived vectors perform consistently better than the other approaches (see Table 3 ). Although in our DenseNet-121 image classification pre-training we were able to replicate the performance of (Irvin et al., 2019) , the Visual Dialog scores for the corresponding vectors turned out to be lower. We believe this might be due to the fact that, by design, the network uses a limited set of pre-training classes not sufficient to generalize well to a full set of diseases used in the Visual Dialog task. ",
"cite_spans": [
{
"start": 241,
"end": 261,
"text": "(Irvin et al., 2019)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 122,
"end": 129,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Image representation",
"sec_num": "6.3"
},
{
"text": "As expected, for both variations of the frontal view (i.e. AP and PA) appending lateral image vectors enhanced the performance of the tested SAN model (see Table 4 ). This suggests that lateral and frontal image vectors complement each other, and the models can benefit from using both. However, in our data-set only a subset of reports has both views available, which significantly reduces the amount of training data.",
"cite_spans": [],
"ref_spans": [
{
"start": 156,
"end": 163,
"text": "Table 4",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "Effect of incorporating a lateral view",
"sec_num": "6.4"
},
{
"text": "Another observation from our experiments is that domain-specific pre-trained word embeddings contribute to better scores (see Table 5 ). This is due to the fact that domain-specific embeddings contain medical knowledge that helps the model make more justified predictions. When using BERT, we did not notice gains in performance, which most likely means that the lastlayer averaging strategy is not optimal and more sophisticated approaches such as (Xiao, 2018) are required . Alternatively, the final representation of the CLS can be used to represent input text. ",
"cite_spans": [
{
"start": 449,
"end": 461,
"text": "(Xiao, 2018)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [
{
"start": 126,
"end": 133,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Word embeddings",
"sec_num": "6.5"
},
{
"text": "To complement our experiments with the silver data and investigate the applicability of the trained models to real-world scenarios, we also collected a set of gold standard data which consisted of two expert radiologists having a dialog about a particular chest X-ray. These X-ray images were randomly sampled PA views from the test our data. In this section, we present the data collection workflow, outline the associated challenges, compare the resulting data-set with the silver-standard, and report the performance of trained models.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison with the gold-standard data",
"sec_num": "7"
},
{
"text": "We laid the foundations for our data collection in a manner similar to that of the general visual dialog challenge (Das et al., 2017) . Two radiologists, designated as a \"questioner\" and an \"answerer\", conversed with each other following a detailed annotation guideline created to ensure consistency. The \"answerer\" in each scenario was provided with an image and a caption (medical condition). The \"questioner\" was provided with only the caption, and tasked with asking follow-up questions about the image, visible only to the \"answerer\". In order to make the gold data-set comparable to the silver-standard one, we restricted the beginning of each answer to contain a direct response of 'Yes', 'No', 'Maybe', or 'Not mentioned'. In our annotation guidelines 'Not mentioned' referred to the lack of evidence of the given medical condition that was asked by the \"questioner\" radiologist. The answer was elaborated with additional information if the radiologists found it necessary. The whole data collection procedure resulted in 100 annotated dialogs.",
"cite_spans": [
{
"start": 115,
"end": 133,
"text": "(Das et al., 2017)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Gold Standard Data Collection",
"sec_num": "7.1"
},
{
"text": "Following the gold standard data collection, we performed some preliminary analyses with the best silver standard SAN model. Our gold standard data was split into train (70), validation (20), and test (10) sets. We experimented with three setups: (a) evaluating the silver-data trained networks on the gold standard data, (b) training and evaluating the models on the gold data, and (c) fine-tuning the silver-data trained networks on the gold standard data. Table 6 shows the results of these experiments. We found the best macro-F1 score of 0.47 was achieved by the silver data-trained SAN network fine-tuned on the gold standard data. We observed that the model could not directly predict any of the classes if directly evaluated on the gold data-set, suggesting that it was trained to fit the data patterns significantly different from those present in the collected data-set. However, pre-training on the silver data serves as a good starting point for further model fine-tuning. The obtained scores in general imply that there are many differences between the gold and silver data, including their vocabularies, answer distributions, and level of question detail.",
"cite_spans": [],
"ref_spans": [
{
"start": 459,
"end": 466,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Gold standard results",
"sec_num": "7.2"
},
{
"text": "To provide a meaningful analysis of the sources of difference between the gold and silver datasets, we grouped the gold questions semantically by using the CheXpert vocabulary for the 13 labels used for the construction of the silver dataset. The gold questions that are unable to be grouped via CheXpert were mapped manually using expert clinical knowledge. We systematically compared the gold and silver dialogs on the same 100 chest X-rays and noted the following differences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of gold and silver data",
"sec_num": "7.3"
},
{
"text": "\u2022 Frequency of semantically equivalent questions. Just under half of the gold question types were semantically covered by the questions in the silver dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of gold and silver data",
"sec_num": "7.3"
},
{
"text": "\u2022 Granularity of questions. We observed that the silver dataset tends to ask highly granular questions about specific findings (e.g. \"consolidation\") as expected. The radiology experts, however, asked a range of low (e.g. \"Are there any bone abnormalities?), medium (e.g. \"Are the lungs clear?\") and high (e.g. \"Is there evidence of pneumonia?\") granularity questions. The gold dialogs tend to start with broader (low granularity) questions and narrow the differential diagnosis down as the dialogs progress.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of gold and silver data",
"sec_num": "7.3"
},
{
"text": "\u2022 Question sense. The radiologists also asked questions in the form of whether some structure is \"normal\" (e.g. \"Is the soft tissue normal?\"). Whereas, the silver questions only asked whether an abnormality is present. Since chest X-rays are screening exams where a good proportion of the images may be \"normal\", having more questions asking whether different anatomies are normal would, therefore, yield more 'Yes' answers.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of gold and silver data",
"sec_num": "7.3"
},
{
"text": "\u2022 Answer distributions The answer distributions of the gold and silver data differ greatly. Specifically, while the gold data was com-posed heavily of 'Yes' or 'No' answers, the silver comprised mostly of 'Not in report'.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison of gold and silver data",
"sec_num": "7.3"
},
{
"text": "Our main finding is that the introduced task of visual dialog in radiology presents a lot of challenges from the machine learning perspective, including a skewed distribution of classes and a required ability to reason over both visual and textual input data. The best of our baseline models achieved 0.34 macro-averaged F1-score, indicating on a significant scope for potential improvements. Our comparison of gold and silver standard data shows some trends are in line with medical doctors' strategies in medical history taking, starting with broader, general questions and then narrowing the scope of their questions to more specific findings (Talbot et al.; Campillos-Llanos et al., 2020) . Despite the difficulty and the practical usefulness of the task, it is important to list the limitations of our study. The questions were limited to presence of 13 abnormalities extracted by CheXpert and the answers were limited to 4 options. The studies used in this work (from MIMIC-CXR) originate from a single tertiary hospital in the United States. Moreover, they correspond to a specific group of patients, namely those admitted to the Emergency Department (ED) from 2012 to 2014. Therefore, the data and hence the model reflect multiple realworld biases. It should also be noted that chest X-rays are mostly used for screening than diagnostic purposes. A radiology image is only one of the many data points (e.g. labs, demographics, medications) used while making a diagnosis. Therefore, although predicting presence of abnormalities (e.g. pneumonia) based on brief knowledge of the patient's medical history and the chest X-ray might be a good exercise and a promising first step in evaluating machine learning models, it is clinically limited.",
"cite_spans": [
{
"start": 646,
"end": 661,
"text": "(Talbot et al.;",
"ref_id": "BIBREF33"
},
{
"start": 662,
"end": 692,
"text": "Campillos-Llanos et al., 2020)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "There are plenty of directions for future work that we intend to pursue. To make the synthetic data more realistic and expressive, both questions and answers should be diversified with the help of clinicians' expertise and external knowledge bases such as UMLS (Bodenreider, 2004) . We plan to enrich the data with more question types, addressing, for example, the location or the size of a given lung abnormality. We plan to collect more real life dialog between radiologists and augment the two datasets to get a richer set of more expressive dia-log. We anticipate that bridging the gap between the silver-and the gold-standard data in terms of natural language formulations would significantly reduce the difference in model performance for the two setups.",
"cite_spans": [
{
"start": 261,
"end": 280,
"text": "(Bodenreider, 2004)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "Another direction is to develop a strategy to manage the uncertain labels such as 'Maybe' and 'Not in report' to make the dataset more balanced.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "8"
},
{
"text": "We explored the task of Visual Dialog for radiology using chest X-rays and released the first publicly available silver-and gold-standard datasets for this task. Having conducted a set of rigorous experiments with state-of-the-art machine learning models used for the combination of visual and language reasoning, we demonstrated the complexity of the task and outlined the promising directions for further research.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "9"
},
{
"text": "https://www.kaggle.com/c/ rsna-pneumonia-detection-challenge",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Mousumi Roy for her help in this project. We are also thankful to Mehdi Moradi for helpful discussions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Vqa-med: Overview of the medical visual question answering task at image-clef 2019",
"authors": [
{
"first": "",
"middle": [],
"last": "Ab Abacha",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Vv Datla",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Demner-Fushman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 2019,
"venue": "CLEF2019 Working Notes. CEUR Workshop Proceedings (CEURWS. org), ISSN",
"volume": "",
"issue": "",
"pages": "1613--0073",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "AB Abacha, SA Hasan, VV Datla, J Liu, D Demner- Fushman, and H M\u00fcller. 2019. Vqa-med: Overview of the medical visual question answering task at image-clef 2019. In CLEF2019 Working Notes. CEUR Workshop Proceedings (CEURWS. org), ISSN, pages 1613-0073.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Nlm at imageclef 2018 visual question answering in the medical domain",
"authors": [
{
"first": "Asma",
"middle": [],
"last": "Ben Abacha",
"suffix": ""
},
{
"first": "Soumya",
"middle": [],
"last": "Gayen",
"suffix": ""
},
{
"first": "Jason",
"middle": [
"J"
],
"last": "Lau",
"suffix": ""
},
{
"first": "Sivaramakrishnan",
"middle": [],
"last": "Rajaraman",
"suffix": ""
},
{
"first": "Dina",
"middle": [],
"last": "Demner-Fushman",
"suffix": ""
}
],
"year": 2018,
"venue": "CLEF (Working Notes)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Asma Ben Abacha, Soumya Gayen, Jason J Lau, Sivaramakrishnan Rajaraman, and Dina Demner- Fushman. 2018. Nlm at imageclef 2018 visual ques- tion answering in the medical domain. In CLEF (Working Notes).",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Don't just assume; look and answer: Overcoming priors for visual question answering",
"authors": [
{
"first": "Aishwarya",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "Aniruddha",
"middle": [],
"last": "Kembhavi",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "4971--4980",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aishwarya Agrawal, Dhruv Batra, Devi Parikh, and Aniruddha Kembhavi. 2018. Don't just assume; look and answer: Overcoming priors for vi- sual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 4971-4980.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Just at imageclef 2019 visual question answering in the medical domain",
"authors": [
{
"first": "Aisha",
"middle": [],
"last": "Al-Sadi",
"suffix": ""
},
{
"first": "Bashar",
"middle": [],
"last": "Talafha",
"suffix": ""
},
{
"first": "Mahmoud",
"middle": [],
"last": "Al-Ayyoub",
"suffix": ""
},
{
"first": "Yaser",
"middle": [],
"last": "Jararweh",
"suffix": ""
},
{
"first": "Fumie",
"middle": [],
"last": "Costen",
"suffix": ""
}
],
"year": 2019,
"venue": "CLEF (Working Notes)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aisha Al-Sadi, Bashar Talafha, Mahmoud Al-Ayyoub, Yaser Jararweh, and Fumie Costen. 2019. Just at im- ageclef 2019 visual question answering in the medi- cal domain. In CLEF (Working Notes).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Publicly available clinical BERT embeddings",
"authors": [
{
"first": "Emily",
"middle": [],
"last": "Alsentzer",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Murphy",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Boag",
"suffix": ""
},
{
"first": "Wei-Hung",
"middle": [],
"last": "Weng",
"suffix": ""
},
{
"first": "Di",
"middle": [],
"last": "Jin",
"suffix": ""
},
{
"first": "Tristan",
"middle": [],
"last": "Naumann",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "Mcdermott",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2nd Clinical Natural Language Processing Workshop",
"volume": "",
"issue": "",
"pages": "72--78",
"other_ids": {
"DOI": [
"10.18653/v1/W19-1909"
]
},
"num": null,
"urls": [],
"raw_text": "Emily Alsentzer, John Murphy, William Boag, Wei- Hung Weng, Di Jin, Tristan Naumann, and Matthew McDermott. 2019. Publicly available clinical BERT embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop, pages 72- 78, Minneapolis, Minnesota, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Bottom-up and top-down attention for image captioning and visual question answering",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Anderson",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Buehler",
"suffix": ""
},
{
"first": "Damien",
"middle": [],
"last": "Teney",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Gould",
"suffix": ""
},
{
"first": "Lei",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "6077--6086",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6077-6086.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Vqaa: Visual question answering",
"authors": [
{
"first": "Stanislaw",
"middle": [],
"last": "Antol",
"suffix": ""
},
{
"first": "Aishwarya",
"middle": [],
"last": "Agrawal",
"suffix": ""
},
{
"first": "Jiasen",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Margaret",
"middle": [],
"last": "Mitchell",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Lawrence",
"middle": [],
"last": "Zitnick",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the IEEE International Conference on Computer Vision",
"volume": "",
"issue": "",
"pages": "2425--2433",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqaa: Visual question an- swering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425-2433.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Workload of radiologists in united states in 2006-2007 and trends since 1991-1992",
"authors": [
{
"first": "Mythreyi",
"middle": [],
"last": "Bhargavan",
"suffix": ""
},
{
"first": "H",
"middle": [],
"last": "Adam",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Kaye",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Howard",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"H"
],
"last": "Forman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Sunshine",
"suffix": ""
}
],
"year": 2009,
"venue": "Radiology",
"volume": "252",
"issue": "2",
"pages": "458--467",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mythreyi Bhargavan, Adam H Kaye, Howard P For- man, and Jonathan H Sunshine. 2009. Workload of radiologists in united states in 2006-2007 and trends since 1991-1992. Radiology, 252(2):458-467.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "The unified medical language system (umls): integrating biomedical terminology",
"authors": [
{
"first": "Olivier",
"middle": [],
"last": "Bodenreider",
"suffix": ""
}
],
"year": 2004,
"venue": "Nucleic acids research",
"volume": "32",
"issue": "1",
"pages": "267--270",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Olivier Bodenreider. 2004. The unified medical lan- guage system (umls): integrating biomedical termi- nology. Nucleic acids research, 32(suppl 1):D267- D270.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Designing a virtual patient dialogue system based on terminology-rich resources: Challenges and evaluation",
"authors": [
{
"first": "Leonardo",
"middle": [],
"last": "Campillos-Llanos",
"suffix": ""
},
{
"first": "Catherine",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "\u00c9ric",
"middle": [],
"last": "Bilinski",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Zweigenbaum",
"suffix": ""
},
{
"first": "Sophie",
"middle": [],
"last": "Rosset",
"suffix": ""
}
],
"year": 2020,
"venue": "Natural Language Engineering",
"volume": "26",
"issue": "2",
"pages": "183--220",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Leonardo Campillos-Llanos, Catherine Thomas,\u00c9ric Bilinski, Pierre Zweigenbaum, and Sophie Rosset. 2020. Designing a virtual patient dialogue system based on terminology-rich resources: Challenges and evaluation. Natural Language Engineering, 26(2):183-220.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Visual dialog",
"authors": [
{
"first": "Abhishek",
"middle": [],
"last": "Das",
"suffix": ""
},
{
"first": "Satwik",
"middle": [],
"last": "Kottur",
"suffix": ""
},
{
"first": "Khushi",
"middle": [],
"last": "Gupta",
"suffix": ""
},
{
"first": "Avi",
"middle": [],
"last": "Singh",
"suffix": ""
},
{
"first": "Deshraj",
"middle": [],
"last": "Yadav",
"suffix": ""
},
{
"first": "M",
"middle": [
"F"
],
"last": "Jos\u00e9",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Moura",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Parikh",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Batra",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "326--335",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, Jos\u00e9 MF Moura, Devi Parikh, and Dhruv Batra. 2017. Visual dialog. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 326-335.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Bert: Pre-training of deep bidirectional transformers for language understanding",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Kenton",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "4171--4186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. Bert: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Design, implementation, and assessment of a radiology workflow management system",
"authors": [
{
"first": "J",
"middle": [],
"last": "Mark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Halsted",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Craig M Froehle",
"suffix": ""
}
],
"year": 2008,
"venue": "American Journal of Roentgenology",
"volume": "191",
"issue": "2",
"pages": "321--327",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark J Halsted and Craig M Froehle. 2008. De- sign, implementation, and assessment of a radiology workflow management system. American Journal of Roentgenology, 191(2):321-327.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Overview of imageclef 2018 medical domain visual question answering task",
"authors": [
{
"first": "A",
"middle": [],
"last": "Sadid",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Hasan",
"suffix": ""
},
{
"first": "Oladimeji",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Joey",
"middle": [],
"last": "Farri",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Matthew",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lungren",
"suffix": ""
}
],
"year": 2018,
"venue": "CLEF (Working Notes)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sadid A Hasan, Yuan Ling, Oladimeji Farri, Joey Liu, Henning M\u00fcller, and Matthew Lungren. 2018. Overview of imageclef 2018 medical domain visual question answering task. In CLEF (Working Notes).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "770--778",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770- 778.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Gqa: A new dataset for real-world visual reasoning and compositional question answering",
"authors": [
{
"first": "A",
"middle": [],
"last": "Drew",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Hudson",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "6700--6709",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Drew A Hudson and Christopher D Manning. 2019. Gqa: A new dataset for real-world visual reason- ing and compositional question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6700-6709.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Overview of imageclef 2018: Challenges, datasets and evaluation",
"authors": [
{
"first": "Bogdan",
"middle": [],
"last": "Ionescu",
"suffix": ""
},
{
"first": "Henning",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
},
{
"first": "Mauricio",
"middle": [],
"last": "Villegas",
"suffix": ""
},
{
"first": "Alba",
"middle": [],
"last": "Garc\u00eda Seco De Herrera",
"suffix": ""
},
{
"first": "Carsten",
"middle": [],
"last": "Eickhoff",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Andrearczyk",
"suffix": ""
},
{
"first": "Yashin",
"middle": [],
"last": "Dicente Cid",
"suffix": ""
},
{
"first": "Vitali",
"middle": [],
"last": "Liauchuk",
"suffix": ""
},
{
"first": "Vassili",
"middle": [],
"last": "Kovalev",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Sadid",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hasan",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference of the Cross-Language Evaluation Forum for European Languages",
"volume": "",
"issue": "",
"pages": "309--334",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bogdan Ionescu, Henning M\u00fcller, Mauricio Villegas, Alba Garc\u00eda Seco de Herrera, Carsten Eickhoff, Vin- cent Andrearczyk, Yashin Dicente Cid, Vitali Li- auchuk, Vassili Kovalev, Sadid A Hasan, et al. 2018. Overview of imageclef 2018: Challenges, datasets and evaluation. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 309-334. Springer.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Chexpert: A large chest radiograph dataset with uncertainty labels and expert comparison",
"authors": [
{
"first": "Jeremy",
"middle": [],
"last": "Irvin",
"suffix": ""
},
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Ko",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Silviana",
"middle": [],
"last": "Ciurea-Ilcus",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Chute",
"suffix": ""
},
{
"first": "Henrik",
"middle": [],
"last": "Marklund",
"suffix": ""
},
{
"first": "Behzad",
"middle": [],
"last": "Haghgoo",
"suffix": ""
},
{
"first": "Robyn",
"middle": [],
"last": "Ball",
"suffix": ""
},
{
"first": "Katie",
"middle": [],
"last": "Shpanskaya",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeremy Irvin, Pranav Rajpurkar, Michael Ko, Yifan Yu, Silviana Ciurea-Ilcus, Chris Chute, Henrik Mark- lund, Behzad Haghgoo, Robyn Ball, Katie Shpan- skaya, et al. 2019. Chexpert: A large chest radio- graph dataset with uncertainty labels and expert com- parison. In Proceedings of AAAI.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "On the automatic generation of medical imaging reports",
"authors": [
{
"first": "Baoyu",
"middle": [],
"last": "Jing",
"suffix": ""
},
{
"first": "Pengtao",
"middle": [],
"last": "Xie",
"suffix": ""
},
{
"first": "Eric",
"middle": [],
"last": "Xing",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "2577--2586",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Baoyu Jing, Pengtao Xie, and Eric Xing. 2018. On the automatic generation of medical imaging reports. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2577-2586.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Mimic-cxr: A large publicly available database of labeled chest radiographs",
"authors": [
{
"first": "E",
"middle": [
"W"
],
"last": "Alistair",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tom",
"suffix": ""
},
{
"first": "Seth",
"middle": [],
"last": "Pollard",
"suffix": ""
},
{
"first": "Nathaniel",
"middle": [
"R"
],
"last": "Berkowitz",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Greenbaum",
"suffix": ""
},
{
"first": "Chihying",
"middle": [],
"last": "Matthew P Lungren",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "G",
"middle": [],
"last": "Roger",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Mark",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Horng",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1901.07042"
]
},
"num": null,
"urls": [],
"raw_text": "Alistair EW Johnson, Tom J Pollard, Seth Berkowitz, Nathaniel R Greenbaum, Matthew P Lungren, Chih- ying Deng, Roger G Mark, and Steven Horng. 2019. Mimic-cxr: A large publicly available database of labeled chest radiographs. arXiv preprint arXiv:1901.07042.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Mimic-iii, a freely accessible critical care database",
"authors": [
{
"first": "E",
"middle": [
"W"
],
"last": "Alistair",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Tom",
"suffix": ""
},
{
"first": "Lu",
"middle": [],
"last": "Pollard",
"suffix": ""
},
{
"first": "H Lehman",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Mengling",
"middle": [],
"last": "Li-Wei",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Feng",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Ghassemi",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Moody",
"suffix": ""
},
{
"first": "Leo",
"middle": [
"Anthony"
],
"last": "Szolovits",
"suffix": ""
},
{
"first": "Roger G",
"middle": [],
"last": "Celi",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mark",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "3",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alistair EW Johnson, Tom J Pollard, Lu Shen, H Lehman Li-wei, Mengling Feng, Moham- mad Ghassemi, Benjamin Moody, Peter Szolovits, Leo Anthony Celi, and Roger G Mark. 2016. Mimic-iii, a freely accessible critical care database. Scientific data, 3:160035.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Data science bowl",
"authors": [
{
"first": "",
"middle": [],
"last": "Kaggle",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaggle. 2017. Data science bowl. https://www. kaggle.com/c/data-science-bowl-2017.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Leveraging medical visual question answering with supporting facts",
"authors": [
{
"first": "Tomasz",
"middle": [],
"last": "Kornuta",
"suffix": ""
},
{
"first": "Deepta",
"middle": [],
"last": "Rajan",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Shivade",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Asseman",
"suffix": ""
},
{
"first": "Ahmet",
"middle": [
"S"
],
"last": "Ozcan",
"suffix": ""
}
],
"year": 2019,
"venue": "CLEF (Working Notes)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomasz Kornuta, Deepta Rajan, Chaitanya Shivade, Alexis Asseman, and Ahmet S Ozcan. 2019. Lever- aging medical visual question answering with sup- porting facts. In CLEF (Working Notes).",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A dataset of clinically generated visual questions and answers about radiology images",
"authors": [
{
"first": "J",
"middle": [],
"last": "Jason",
"suffix": ""
},
{
"first": "Soumya",
"middle": [],
"last": "Lau",
"suffix": ""
},
{
"first": "Asma",
"middle": [],
"last": "Gayen",
"suffix": ""
},
{
"first": "Dina",
"middle": [],
"last": "Ben Abacha",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Demner-Fushman",
"suffix": ""
}
],
"year": 2018,
"venue": "Scientific Data",
"volume": "5",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jason J Lau, Soumya Gayen, Asma Ben Abacha, and Dina Demner-Fushman. 2018. A dataset of clini- cally generated visual questions and answers about radiology images. Scientific Data, 5:180251.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Radiologists' role in the communication of imaging examination results to patients: perceptions and preferences of patients",
"authors": [
{
"first": "Arifeen",
"middle": [],
"last": "Mark D Mangano",
"suffix": ""
},
{
"first": "Garry",
"middle": [],
"last": "Rahman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Choy",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Dushyant",
"suffix": ""
},
{
"first": "Giles",
"middle": [
"W"
],
"last": "Sahani",
"suffix": ""
},
{
"first": "Andrew J",
"middle": [],
"last": "Boland",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Gunn",
"suffix": ""
}
],
"year": 2014,
"venue": "American Journal of Roentgenology",
"volume": "203",
"issue": "5",
"pages": "1034--1039",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark D Mangano, Arifeen Rahman, Garry Choy, Dushyant V Sahani, Giles W Boland, and Andrew J Gunn. 2014. Radiologists' role in the communi- cation of imaging examination results to patients: perceptions and preferences of patients. American Journal of Roentgenology, 203(5):1034-1039.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Recursive visual attention in visual dialog",
"authors": [
{
"first": "Yulei",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Hanwang",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Manli",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Jianhong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Zhiwu",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Ji-Rong",
"middle": [],
"last": "Wen",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "6679--6688",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yulei Niu, Hanwang Zhang, Manli Zhang, Jianhong Zhang, Zhiwu Lu, and Ji-Rong Wen. 2019. Recur- sive visual attention in visual dialog. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 6679-6688.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Negbio: a high-performance tool for negation and uncertainty detection in radiology reports",
"authors": [
{
"first": "Yifan",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Xiaosong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Le",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Mohammadhadi",
"middle": [],
"last": "Bagheri",
"suffix": ""
},
{
"first": "Ronald",
"middle": [],
"last": "Summers",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
}
],
"year": 2018,
"venue": "AMIA Summits on Translational Science Proceedings",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yifan Peng, Xiaosong Wang, Le Lu, Mohammad- hadi Bagheri, Ronald Summers, and Zhiyong Lu. 2018. Negbio: a high-performance tool for nega- tion and uncertainty detection in radiology re- ports. AMIA Summits on Translational Science Proceedings, 2018:188.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Glove: Global vectors for word representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. Glove: Global vectors for word rep- resentation. In Proceedings of the 2014 conference on empirical methods in natural language processing (EMNLP), pages 1532-1543.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Mura: Large dataset for abnormality detection in musculoskeletal radiographs",
"authors": [
{
"first": "Pranav",
"middle": [],
"last": "Rajpurkar",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Irvin",
"suffix": ""
},
{
"first": "Aarti",
"middle": [],
"last": "Bagul",
"suffix": ""
},
{
"first": "Daisy",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Tony",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Hershel",
"middle": [],
"last": "Mehta",
"suffix": ""
},
{
"first": "Brandon",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Kaylie",
"middle": [],
"last": "Zhu",
"suffix": ""
},
{
"first": "Dillon",
"middle": [],
"last": "Laird",
"suffix": ""
},
{
"first": "Robyn",
"middle": [
"L"
],
"last": "Ball",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1712.06957"
]
},
"num": null,
"urls": [],
"raw_text": "Pranav Rajpurkar, Jeremy Irvin, Aarti Bagul, Daisy Ding, Tony Duan, Hershel Mehta, Brandon Yang, Kaylie Zhu, Dillon Laird, Robyn L Ball, et al. 2017. Mura: Large dataset for abnormality detec- tion in musculoskeletal radiographs. arXiv preprint arXiv:1712.06957.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Lessons from natural language inference in the clinical domain",
"authors": [
{
"first": "Alexey",
"middle": [],
"last": "Romanov",
"suffix": ""
},
{
"first": "Chaitanya",
"middle": [],
"last": "Shivade",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexey Romanov and Chaitanya Shivade. 2018. Lessons from natural language inference in the clinical domain. In Proceedings of the 2018",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Conference on Empirical Methods in Natural Language Processing",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "1586--1596",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Conference on Empirical Methods in Natural Language Processing, pages 1586-1596.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "The earth mover's distance as a metric for image retrieval",
"authors": [
{
"first": "Yossi",
"middle": [],
"last": "Rubner",
"suffix": ""
},
{
"first": "Carlo",
"middle": [],
"last": "Tomasi",
"suffix": ""
},
{
"first": "Leonidas",
"middle": [
"J"
],
"last": "Guibas",
"suffix": ""
}
],
"year": 2000,
"venue": "International journal of computer vision",
"volume": "40",
"issue": "2",
"pages": "99--121",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yossi Rubner, Carlo Tomasi, and Leonidas J Guibas. 2000. The earth mover's distance as a metric for image retrieval. International journal of computer vision, 40(2):99-121.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Impact of communication errors in radiology on patient care, customer satisfaction, and work-flow efficiency",
"authors": [
{
"first": "Bettina",
"middle": [],
"last": "Siewert",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Olga",
"suffix": ""
},
{
"first": "Mary",
"middle": [],
"last": "Brook",
"suffix": ""
},
{
"first": "Ronald",
"middle": [
"L"
],
"last": "Hochman",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Eisenberg",
"suffix": ""
}
],
"year": 2016,
"venue": "American Journal of Roentgenology",
"volume": "206",
"issue": "3",
"pages": "573--579",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bettina Siewert, Olga R Brook, Mary Hochman, and Ronald L Eisenberg. 2016. Impact of communica- tion errors in radiology on patient care, customer satisfaction, and work-flow efficiency. American Journal of Roentgenology, 206(3):573-579.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Designing useful virtual standardized patient encounters",
"authors": [
{
"first": "B",
"middle": [],
"last": "Thomas",
"suffix": ""
},
{
"first": "Kenji",
"middle": [],
"last": "Talbot",
"suffix": ""
},
{
"first": "Bruce",
"middle": [],
"last": "Sagae",
"suffix": ""
},
{
"first": "Albert A",
"middle": [],
"last": "John",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Rizzo",
"suffix": ""
}
],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thomas B Talbot, Kenji Sagae, Bruce John, and Al- bert A Rizzo. Designing useful virtual standardized patient encounters.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Guesswhat?! visual object discovery through multi-modal dialogue",
"authors": [
{
"first": "Florian",
"middle": [],
"last": "Harm De Vries",
"suffix": ""
},
{
"first": "A",
"middle": [
"P"
],
"last": "Strub",
"suffix": ""
},
{
"first": "Olivier",
"middle": [],
"last": "Sarath Chandar",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Pietquin",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Larochelle",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Courville",
"suffix": ""
}
],
"year": 2016,
"venue": "2017 IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "4466--4475",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Harm de Vries, Florian Strub, A. P. Sarath Chandar, Olivier Pietquin, Hugo Larochelle, and Aaron C. Courville. 2016. Guesswhat?! visual object dis- covery through multi-modal dialogue. In 2017 IEEE Conference on Computer Vision and Pattern Recognition, pages 4466-4475.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Tienet: Text-image embedding network for common thorax disease classification and reporting in chest x-rays",
"authors": [
{
"first": "Xiaosong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yifan",
"middle": [],
"last": "Peng",
"suffix": ""
},
{
"first": "Le",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Zhiyong",
"middle": [],
"last": "Lu",
"suffix": ""
},
{
"first": "Ronald M",
"middle": [],
"last": "Summers",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "9049--9058",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaosong Wang, Yifan Peng, Le Lu, Zhiyong Lu, and Ronald M Summers. 2018. Tienet: Text-image em- bedding network for common thorax disease classifi- cation and reporting in chest x-rays. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 9049-9058.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "bert-as-service",
"authors": [
{
"first": "Han",
"middle": [],
"last": "Xiao",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Han Xiao. 2018. bert-as-service. https://github. com/hanxiao/bert-as-service.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Stacked attention networks for image question answering",
"authors": [
{
"first": "Zichao",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Smola",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "21--29",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zichao Yang, Xiaodong He, Jianfeng Gao, Li Deng, and Alex Smola. 2016. Stacked attention networks for image question answering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 21-29.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Yin and yang: Balancing and answering binary visual questions",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Yash",
"middle": [],
"last": "Goyal",
"suffix": ""
},
{
"first": "Douglas",
"middle": [],
"last": "Summers-Stay",
"suffix": ""
},
{
"first": "Dhruv",
"middle": [],
"last": "Batra",
"suffix": ""
},
{
"first": "Devi",
"middle": [],
"last": "Parikh",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition",
"volume": "",
"issue": "",
"pages": "5014--5022",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Zhang, Yash Goyal, Douglas Summers-Stay, Dhruv Batra, and Devi Parikh. 2016. Yin and yang: Balancing and answering binary visual questions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 5014-5022.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Comparison of VisDial 1.0 (left) with our synthetically constructed dataset (right)."
},
"FIGREF1": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "The modified architecture of the SAN model (image taken from(Yang et al., 2016)). The proposed modification shown in orange incorporates the history of dialog turns in the same way as the question through an LSTM. In our ablation experiments the changed part either reduces to encoding an image caption only or gets cut completely."
},
"FIGREF2": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "https://visualdialog.org/challenge/ 2019"
},
"FIGREF3": {
"type_str": "figure",
"num": null,
"uris": null,
"text": "Downsampling strategies. Every bar along the X axis represents a single question-answer pair, where questions (13 in total) and answers (4 in total) are obtained through CheXpert."
},
"TABREF2": {
"html": null,
"text": "Data balancing experiments. Macro F1 scores are reported for every tested model.",
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF4": {
"html": null,
"text": "",
"num": null,
"content": "<table><tr><td>: Comparative performance (macro-F1) of Vi-</td></tr><tr><td>sual Dialog models on the test set with different image</td></tr><tr><td>representations.</td></tr></table>",
"type_str": "table"
},
"TABREF6": {
"html": null,
"text": "Effect of adding the lateral view to a frontal view (AP and PA).",
"num": null,
"content": "<table><tr><td>Embedding</td><td colspan=\"5\">'Yes' 'No' 'Maybe' 'Not in report' Macro F1</td></tr><tr><td>Random</td><td colspan=\"3\">0.26 0.22 0.04</td><td>0.73</td><td>0.31</td></tr><tr><td colspan=\"2\">GloVe (common crawl) 0.27</td><td>0</td><td>0.09</td><td>0.80</td><td>0.29</td></tr><tr><td>fastText (MedNLI)</td><td colspan=\"3\">0.24 0.22 0.07</td><td>0.84</td><td>0.33</td></tr></table>",
"type_str": "table"
},
"TABREF7": {
"html": null,
"text": "Comparative performance of the SAN model with different word embeddings.",
"num": null,
"content": "<table/>",
"type_str": "table"
},
"TABREF9": {
"html": null,
"text": "Comparative performance of the SAN model trained on different combinations of silver and gold data, and evaluated on the test subset of gold data. Note that the gold annotations did not contain 'Not in report' and 'Maybe' options.",
"num": null,
"content": "<table/>",
"type_str": "table"
}
}
}
}