Update README.md
README.md

The previous revision of README.md contained only the default model card template, with every section left as an unfilled "[More Information Needed]" placeholder; it is replaced by the model card below.

---
library_name: transformers
tags: []
---

# Model card for RAD-DINO

<!-- Provide a quick summary of what the model is/does. -->

## Model description

<!-- Provide a longer summary of what this model is. -->

RAD-DINO is a vision transformer model trained to encode chest X-rays using the self-supervised learning method [DINOv2](https://openreview.net/forum?id=a68SUt6zFt).

RAD-DINO is described in detail in [RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision (Pérez-García, Sharma, Bond-Taylor et al., 2024)](https://arxiv.org/abs/2401.10815).

- **Developed by:** Microsoft Health Futures
- **Model type:** Vision transformer
- **License:** MIT
- **Finetuned from model:** [`dinov2-base`](https://huggingface.co/facebook/dinov2-base)

## Uses

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

### Downstream use

<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->

RAD-DINO is a vision backbone that can be plugged into other models for downstream tasks.
Some potential uses are:

- Image classification, with a classifier trained on top of the `CLS` token
- Image segmentation, with a decoder trained using the patch tokens
- Image retrieval, using nearest neighbors of the `CLS` token
- Report generation, with a language model to decode text

Fine-tuning RAD-DINO is typically not necessary to obtain good performance in downstream tasks.
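
To make the first of these uses concrete, here is a minimal linear-probe sketch: frozen `CLS` embeddings are fed to a scikit-learn `LogisticRegression`. This is not part of the RAD-DINO release; the random images, labels, and probe settings are placeholders standing in for a real labelled chest X-ray dataset.

```python
# Hypothetical linear-probe sketch (illustration only, not the authors' recipe):
# classify images from their frozen CLS embeddings.
import numpy as np
import torch
from PIL import Image
from sklearn.linear_model import LogisticRegression
from transformers import AutoImageProcessor, AutoModel

repo = "microsoft/rad-dino"
processor = AutoImageProcessor.from_pretrained(repo)
model = AutoModel.from_pretrained(repo).eval()

def encode(images: list[Image.Image]) -> np.ndarray:
    """Return one frozen CLS embedding per image."""
    inputs = processor(images=images, return_tensors="pt")
    with torch.inference_mode():
        outputs = model(**inputs)
    return outputs.pooler_output.numpy()  # (num_images, 768)

# Placeholder data: replace with real images and diagnostic labels
rng = np.random.default_rng(0)
images = [
    Image.fromarray(rng.integers(0, 256, (224, 224, 3), dtype=np.uint8))
    for _ in range(8)
]
labels = np.array([0, 1] * 4)

features = encode(images)
probe = LogisticRegression(max_iter=1000).fit(features, labels)
print("Training accuracy of the probe:", probe.score(features, labels))
```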

### Out-of-scope use

<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->

This model is shared for research purposes only.
It is not meant to be used for clinical practice.

## Bias, risks, and limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

RAD-DINO was trained with data from three countries, so it might be biased towards the populations represented in the training data.
Underlying biases of the training datasets may not be well characterised.

## Getting started

```python
>>> import torch
>>> from PIL import Image
>>> from transformers import AutoModel
>>> from transformers import AutoImageProcessor
>>>
>>> # Define a small function to get a sample image
>>> def download_sample_image() -> Image.Image:
...     """Download chest X-ray with CC license."""
...     import requests
...     from PIL import Image
...     base_url = "https://upload.wikimedia.org/wikipedia/commons"
...     image_url = f"{base_url}/2/20/Chest_X-ray_in_influenza_and_Haemophilus_influenzae.jpg"
...     headers = {"User-Agent": "fperezgarcia@microsoft.com"}
...     response = requests.get(image_url, headers=headers, stream=True)
...     return Image.open(response.raw)
...
>>> # Download the model
>>> repo = "microsoft/rad-dino"
>>> model = AutoModel.from_pretrained(repo).cuda()
>>>
>>> # The processor takes a PIL image, performs resizing, center-cropping, and
>>> # intensity normalization using stats from MIMIC-CXR, and returns a
>>> # dictionary with a PyTorch tensor ready for the encoder
>>> processor = AutoImageProcessor.from_pretrained(repo)
>>>
>>> # Download and preprocess a chest X-ray, moving the tensor to the same
>>> # device as the model
>>> image = download_sample_image()
>>> inputs = processor(images=image, return_tensors="pt").to(model.device)
>>>
>>> # Encode the image!
>>> with torch.inference_mode():
...     outputs = model(**inputs)
...
>>> # Look at the CLS embeddings
>>> cls_embeddings = outputs.pooler_output
>>> cls_embeddings.shape  # (batch_size, num_channels)
torch.Size([1, 768])
>>>
>>> # Look at the patch embeddings (needs `pip install einops`)
>>> def reshape_patch_embeddings(flat_tokens: torch.Tensor) -> torch.Tensor:
...     """Reshape flat list of patch tokens into a nice grid."""
...     from einops import rearrange
...     image_size = processor.crop_size["height"]
...     patch_size = model.config.patch_size
...     embeddings_size = image_size // patch_size
...     patches_grid = rearrange(flat_tokens, "b (h w) c -> b c h w", h=embeddings_size)
...     return patches_grid
...
>>> flat_patch_embeddings = outputs.last_hidden_state[:, 1:]  # first token is CLS
>>> reshaped_patch_embeddings = reshape_patch_embeddings(flat_patch_embeddings)
>>> reshaped_patch_embeddings.shape  # (batch_size, num_channels, height, width)
torch.Size([1, 768, 16, 16])
```

## Training details

### Training data

<!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->

We used images from five public, deidentified chest X-ray datasets to train this checkpoint of RAD-DINO.
Images in the validation and test sets from [MAIRA-1](https://arxiv.org/abs/2311.13668) were excluded from the training set of RAD-DINO.

| Dataset   | Num. images |
| --------- | ----------: |
| [MIMIC-CXR](https://www.nature.com/articles/s41597-019-0322-0) | 368 960 |
| [CheXpert](https://ojs.aaai.org/index.php/AAAI/article/view/3834) | 223 648 |
| [NIH-CXR](https://openaccess.thecvf.com/content_cvpr_2017/html/Wang_ChestX-ray8_Hospital-Scale_Chest_CVPR_2017_paper.html) | 112 120 |
| [PadChest](https://www.sciencedirect.com/science/article/abs/pii/S1361841520301614) | 136 787 |
| [BRAX](https://www.nature.com/articles/s41597-022-01608-8) | 41 260 |

Note that this checkpoint is different from the one in the paper, where some private data was used.

### Training procedure

<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->

We refer to the [manuscript](https://arxiv.org/abs/2401.10815) for a detailed description of the training procedure.

#### Preprocessing

All DICOM files were resized using B-spline interpolation so that their shorter side was 518 pixels, min-max scaled to [0, 255], and stored as PNG files.
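
The preprocessing script itself is not part of this card. Below is a minimal sketch of the step described above using Pydicom, SciPy, and Pillow (the original pipeline used SimpleITK and Pydicom); the function name is hypothetical, and DICOM subtleties such as MONOCHROME1 inversion or rescale slope/intercept handling are deliberately omitted.

```python
# Illustrative sketch only, not the authors' implementation:
# resize with a cubic spline so the shorter side is 518 pixels,
# min-max scale to [0, 255], and save as PNG.
import numpy as np
import pydicom
from PIL import Image
from scipy import ndimage

def dicom_to_png(dicom_path: str, png_path: str, shorter_side: int = 518) -> None:
    """Convert a single-frame chest X-ray DICOM file to a preprocessed PNG."""
    pixels = pydicom.dcmread(dicom_path).pixel_array.astype(np.float32)
    scale = shorter_side / min(pixels.shape)
    resized = ndimage.zoom(pixels, scale, order=3)  # cubic-spline interpolation
    resized -= resized.min()
    resized *= 255 / max(resized.max(), 1)  # min-max scale to [0, 255]
    Image.fromarray(resized.astype(np.uint8)).save(png_path)
```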

#### Training hyperparameters

- **Training regime:** fp16 using PyTorch-FSDP mixed-precision.

<!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
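
The training code is not included here; the following is a generic sketch of how an fp16 mixed-precision policy can be configured with PyTorch FSDP (launched with `torchrun`), shown only to illustrate the training regime named above, not the authors' actual setup.

```python
# Generic fp16 mixed-precision FSDP setup (illustrative, not the RAD-DINO recipe).
# Launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
import os

import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP
from torch.distributed.fsdp import MixedPrecision
from transformers import AutoModel

local_rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(local_rank)
dist.init_process_group(backend="nccl")

# Keep parameters, gradient reduction, and buffers in fp16
fp16_policy = MixedPrecision(
    param_dtype=torch.float16,
    reduce_dtype=torch.float16,
    buffer_dtype=torch.float16,
)

model = AutoModel.from_pretrained("microsoft/rad-dino")
fsdp_model = FSDP(
    model,
    mixed_precision=fp16_policy,
    device_id=torch.cuda.current_device(),
)
# fsdp_model can now be used in a standard training loop, typically together
# with torch.distributed.fsdp.sharded_grad_scaler.ShardedGradScaler for fp16.
```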

## Evaluation

<!-- This section describes the evaluation protocols and provides the results. -->

Our evaluation is best described in the [manuscript](https://arxiv.org/abs/2401.10815).

<!-- ### Testing data, factors & metrics

#### Testing Data

[More Information Needed]

#### Factors

[More Information Needed]

#### Metrics

[More Information Needed]

### Results

[More Information Needed]

#### Summary -->

## Environmental impact

<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->

<!-- Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). -->

- **Hardware Type:** NVIDIA A100 GPUs
- **Hours used:** 47 hours/GPU × 4 nodes × 4 GPUs/node = 752 hours
- **Cloud Provider:** Azure
- **Compute Region:** West US 2
- **Carbon Emitted:** 65.2 kg CO₂ eq.

### Compute infrastructure

RAD-DINO was trained on [Azure Machine Learning](https://azure.microsoft.com/en-us/products/machine-learning).

#### Hardware

We used four `Standard_NC96ads_A100_v4` nodes with four NVIDIA A100 (80 GB) GPUs each.

#### Software

We leveraged the code in [DINOv2](https://openreview.net/forum?id=a68SUt6zFt) for training.
We used [SimpleITK](https://simpleitk.org/) and [Pydicom](https://pydicom.github.io/) for processing DICOM files.

## Citation

<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

```bibtex
@article{PerezGarcia2024RADDINOES,
  title={RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision},
  author={Fernando Pérez-García and Harshita Sharma and Sam Bond-Taylor and Kenza Bouzid and Valentina Salvatelli and Maximilian Ilse and Shruthi Bannur and Daniel C. Castro and Anton Schwaighofer and Matthew P. Lungren and Maria Teodora Wetscherek and Noel Codella and Stephanie L. Hyland and Javier Alvarez-Valle and Ozan Oktay},
  journal={ArXiv},
  year={2024},
  volume={abs/2401.10815},
  url={https://api.semanticscholar.org/CorpusID:267060839}
}
```

**APA:**

> Pérez-García, F., Sharma, H., Bond-Taylor, S., Bouzid, K., Salvatelli, V., Ilse, M., Bannur, S., Castro, D.C., Schwaighofer, A., Lungren, M.P., Wetscherek, M.T., Codella, N., Hyland, S.L., Alvarez-Valle, J., & Oktay, O. (2024). *RAD-DINO: Exploring Scalable Medical Image Encoders Beyond Text Supervision*. ArXiv, abs/2401.10815.

## Model card contact

Fernando Pérez-García ([`fperezgarcia@microsoft.com`](mailto:fperezgarcia@microsoft.com)).