---
license: apache-2.0
task_categories:
- image-to-text
- text-generation
language:
- en
tags:
- chest
- x-ray
- report-generation
- radiology-report-generation
---

# MCL

[MCL: Multi-view Enhanced Contrastive Learning for Chest X-ray Report Generation](https://arxiv.org/abs/2411.10224)

Radiology reports are crucial for planning treatment strategies and enhancing doctor-patient communication, yet manually writing these reports is burdensome for radiologists. While automatic report generation offers a solution, existing methods often rely on single-view radiographs, limiting diagnostic accuracy. To address this problem, we propose MCL, a **M**ulti-view enhanced **C**ontrastive **L**earning method for chest X-ray report generation. Specifically, we first introduce multi-view enhanced contrastive learning for visual representation by maximizing the agreement between multi-view radiographs and their corresponding report. Subsequently, to fully exploit patient-specific indications (e.g., a patient's symptoms) for report generation, we add a transitional "bridge" for missing indications to reduce embedding space discrepancies caused by their presence or absence. Additionally, we construct the Multi-view CXR and Two-view CXR datasets from public sources to support research on multi-view report generation. Our proposed MCL surpasses recent state-of-the-art methods across multiple datasets, achieving a 5.0% F1 RadGraph improvement on MIMIC-CXR, a 7.3% BLEU-1 improvement on MIMIC-ABN, a 3.1% BLEU-4 improvement on Multi-view CXR, and an 8.2% F1 CheXbert improvement on Two-view CXR.

## Github

- For more details, please refer to [our GitHub repository](https://github.com/mk-runner/MCL/tree/main).

## Multi-view CXR

Multi-view CXR aggregates studies with multiple views from both MIMIC-CXR [1] and IU X-ray [2].

- The radiographs can be obtained from [PhysioNet](https://physionet.org/content/mimic-cxr-jpg/2.1.0/) and [NIH](https://openi.nlm.nih.gov/faq#collection).
The file structure for storing these images is:

```
files/
├── p10
├── p11
├── p12
├── p13
├── p14
├── p15
├── p16
├── p17
├── p18
├── p19
└── NLMCXR_png
```

- The radiology reports can be downloaded from [Hugging Face 🤗](https://huggingface.co/datasets/MK-runner/Multi-view-CXR).

## Two-view CXR

Two-view CXR is a variant of Multi-view CXR that includes only two views per study. The dataset can be downloaded from [Hugging Face 🤗](https://huggingface.co/datasets/MK-runner/Multi-view-CXR).

## Usage

```python
import json

# Obtain all studies of Multi-view CXR
path = 'multiview_cxr_annotation.json'
multi_view_cxr_data = json.load(open(path))

# Obtain all studies of Two-view CXR (studies with exactly two views)
ann_data = json.load(open(path))
two_view_cxr_data = {}
for key, value in ann_data.items():
    two_view_cxr_data[key] = []
    for item in value:
        # count anchor-scan and auxiliary-reference images in this study
        image_num = len(item['anchor_scan']['image_path']) + len(item['auxiliary_references']['image_path'])
        if image_num == 2:
            two_view_cxr_data[key].append(item)
```

## Statistics

Statistics for the training, validation, and test sets across MIMIC-CXR, MIMIC-ABN, Multi-view CXR, and Two-view CXR.
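Assuming the annotation JSON maps each split to a list of studies with `anchor_scan` and `auxiliary_references` fields (as in the Usage snippet above), per-split counts can be computed with a minimal sketch like the following. The toy `ann_data` dictionary here is purely illustrative, not real dataset content:

```python
# Minimal sketch: count studies and images per split.
# The toy ann_data below mimics the assumed annotation structure;
# field names (anchor_scan, auxiliary_references) follow the Usage snippet.
ann_data = {
    "train": [
        {"anchor_scan": {"image_path": ["p10/s1/a.jpg"]},
         "auxiliary_references": {"image_path": ["p10/s1/b.jpg"]}},
        {"anchor_scan": {"image_path": ["p11/s2/a.jpg"]},
         "auxiliary_references": {"image_path": []}},
    ],
    "test": [
        {"anchor_scan": {"image_path": ["p12/s3/a.jpg"]},
         "auxiliary_references": {"image_path": ["p12/s3/b.jpg", "p12/s3/c.jpg"]}},
    ],
}

stats = {}
for split, studies in ann_data.items():
    # total images = anchor-scan images + auxiliary-reference images
    n_images = sum(
        len(s["anchor_scan"]["image_path"]) + len(s["auxiliary_references"]["image_path"])
        for s in studies
    )
    stats[split] = {"studies": len(studies), "images": n_images}

print(stats)
# {'train': {'studies': 2, 'images': 3}, 'test': {'studies': 1, 'images': 3}}
```

Replacing the toy dictionary with `json.load(open('multiview_cxr_annotation.json'))` yields the actual per-split figures.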