---
license: mit
widget:
  - src: >-
      https://prod-images-static.radiopaedia.org/images/566180/d527ff6fc1482161c9225345c4ab42_big_gallery.jpg
    candidate_labels: enlarged heart, pleural effusion
    example_title: X-ray of cardiomegaly
library_name: open_clip
pipeline_tag: zero-shot-image-classification
---

# Model Card for WhyXrayCLIP 🩻

## Table of Contents

1. Model Details
2. Get Started
3. Uses
4. Training Details
5. Evaluation
6. Citation

## Model Details

WhyXrayCLIP aligns X-ray images with text descriptions. It is fine-tuned from OpenCLIP (ViT-L/14) on MIMIC-CXR, using clinical reports processed by GPT-4. WhyXrayCLIP significantly outperforms PubMedCLIP, BioMedCLIP, and similar models in both zero-shot and linear probing evaluations on a variety of chest X-ray datasets (see results under Evaluation). While our CLIP models excel with careful data curation, training converges quickly, suggesting that the current contrastive objective may not fully exploit the information in the data and may take shortcuts, such as distinguishing images from different patients instead of focusing on disease findings. Future research should explore more suitable objectives and larger-scale data collection to develop more robust medical foundation models.

## How to Get Started with the Model

Use the code below to get started with the model.

```bash
pip install open_clip_torch
```

```python
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:yyupenn/whyxrayclip")
model.eval()
tokenizer = open_clip.get_tokenizer("ViT-L-14")

image = preprocess(Image.open("test_xray.jpg")).unsqueeze(0)
text = tokenizer(["enlarged heart", "pleural effusion"])

with torch.no_grad(), torch.cuda.amp.autocast():
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)
```

## Uses

As per the original OpenAI CLIP model card, this model is intended as a research output for research communities. We hope that this model will enable researchers to better understand and explore zero-shot medical image (X-ray) classification. We also hope it can be used for interdisciplinary studies of the potential impact of such models.

### Direct Use

WhyXrayCLIP can be used for zero-shot X-ray classification. You can use it to compute the similarity between an X-ray image and a text description.

### Downstream Use

WhyXrayCLIP can be used as a feature extractor for downstream tasks: you can extract embeddings from X-ray images and text descriptions and feed them to lightweight downstream models, as sketched below.
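As a minimal sketch of this use case (not the paper's exact pipeline), the snippet below extracts frozen WhyXrayCLIP image features and fits a scikit-learn logistic regression; the file paths and labels are placeholders for your own data.

```python
import numpy as np
import torch
from PIL import Image
import open_clip
from sklearn.linear_model import LogisticRegression

model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:yyupenn/whyxrayclip")
model.eval()

def extract_features(image_paths):
    # Encode each X-ray with the frozen WhyXrayCLIP image tower and L2-normalize.
    feats = []
    with torch.no_grad():
        for path in image_paths:
            image = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feat = model.encode_image(image)
            feat = feat / feat.norm(dim=-1, keepdim=True)
            feats.append(feat.squeeze(0).cpu().numpy())
    return np.stack(feats)

# Placeholder paths/labels -- replace with your own labeled X-ray dataset.
train_paths, train_labels = ["xray_0001.jpg", "xray_0002.jpg"], [0, 1]
test_paths = ["xray_0100.jpg"]

clf = LogisticRegression(max_iter=1000)
clf.fit(extract_features(train_paths), train_labels)
print(clf.predict(extract_features(test_paths)))
```

Because the encoder stays frozen, only the logistic-regression head is trained, which mirrors the linear probing setup described under Evaluation.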

### Out-of-Scope Use

WhyXrayCLIP should not be used for clinical diagnosis or treatment. It is not intended to be used for any clinical decision-making. Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model.

## Training Details

### Training Data

We use the MIMIC-CXR dataset, selecting only the PA and AP X-rays, which yields 243,334 images, each accompanied by a clinical report written by doctors. We preprocess these reports by extracting the medically relevant findings, each rewritten as a short, concise phrase. In total, we assemble 953K image-text pairs for training WhyXrayCLIP.
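A purely illustrative sketch of that curation step follows; the report text, extracted findings, and file name are invented examples, not actual MIMIC-CXR records or the exact GPT-4 prompting used in the paper.

```python
# Invented example report, for illustration only.
report = (
    "The cardiac silhouette is enlarged. There is a small left pleural effusion. "
    "No pneumothorax is identified."
)

# Short, medically relevant findings of the kind extracted from each report
# (in the paper this extraction is done with GPT-4).
findings = ["enlarged heart", "small left pleural effusion", "no pneumothorax"]

image_path = "example_pa_view.jpg"  # placeholder for the paired X-ray image
pairs = [(image_path, finding) for finding in findings]
print(pairs)
# Each X-ray contributes one pair per finding, which is how 243,334 images
# expand to roughly 953K image-text pairs.
```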

### Training Procedure

We use the training script from OpenCLIP and select ViT-L/14 as the backbone. Training is performed on 4 RTX A6000 GPUs for 10 epochs with a batch size of 128 and a learning rate of 1e-5. We select checkpoints based on the lowest contrastive loss on the validation set.

## Evaluation

### Testing Data

We evaluate on five X-ray classification datasets: Pneumonia, COVID-QU, NIH-CXR, Open-i, and VinDr-CXR, reporting both zero-shot and linear probing accuracy on each.

### Baselines

We compare against various CLIP models, including OpenAI-CLIP, OpenCLIP, PubMedCLIP, BioMedCLIP, PMC-CLIP, and MedCLIP, in both zero-shot and linear probing settings. In the zero-shot setting, GPT-4 generates prompts for each class, and the ensemble of cosine similarities between the image and the class's prompts serves as the class score (sketched below). In linear probing, we use the CLIP models as image encoders to extract features for logistic regression. Additionally, we include DenseNet-121 (fine-tuned on the pretraining datasets with a cross-entropy loss) as a baseline for linear probing.
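A hedged sketch of that zero-shot scoring scheme is shown below, assuming the ensemble is an average of per-prompt cosine similarities; the class prompts here are placeholders, not the GPT-4-generated prompts used in the paper.

```python
import torch
from PIL import Image
import open_clip

model, _, preprocess = open_clip.create_model_and_transforms("hf-hub:yyupenn/whyxrayclip")
model.eval()
tokenizer = open_clip.get_tokenizer("ViT-L-14")

# Placeholder prompts standing in for GPT-4-generated descriptions of each class.
class_prompts = {
    "cardiomegaly": ["an enlarged cardiac silhouette", "increased heart size on a chest x-ray"],
    "pleural effusion": ["fluid collecting in the pleural space", "blunting of the costophrenic angle"],
}

image = preprocess(Image.open("test_xray.jpg")).unsqueeze(0)
with torch.no_grad():
    image_feat = model.encode_image(image)
    image_feat = image_feat / image_feat.norm(dim=-1, keepdim=True)

    scores = {}
    for label, prompts in class_prompts.items():
        text_feat = model.encode_text(tokenizer(prompts))
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
        # Score for the class = mean cosine similarity over its prompts.
        scores[label] = (image_feat @ text_feat.T).mean().item()

print("Predicted:", max(scores, key=scores.get))
```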

### Results

The figure below shows the average zero-shot and linear probing performance of the different models on the five chest X-ray datasets.

*(Figure: averaged zero-shot and linear probe results.)*

## Citation

Please cite our paper if you use this model in your work:

```bibtex
@article{yang2024textbook,
  title={A Textbook Remedy for Domain Shifts: Knowledge Priors for Medical Image Analysis},
  author={Yue Yang and Mona Gandhi and Yufei Wang and Yifan Wu and Michael S. Yao and Chris Callison-Burch and James C. Gee and Mark Yatskar},
  journal={arXiv preprint arXiv:2405.14839},
  year={2024}
}
```