ZiyueWang committed on
Commit f11d94c
1 Parent(s): ff3260e

Upload 11 files

LICENSE.md ADDED
@@ -0,0 +1,9 @@
MIT License

Copyright (c) Microsoft Corporation

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
README.md ADDED
@@ -0,0 +1,258 @@
---
language: en
tags:
- clip
- biology
- medical
license: mit
library_name: open_clip
widget:
- src: https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/resolve/main/example_data/biomed_image_classification_example_data/squamous_cell_carcinoma_histopathology.jpeg
  candidate_labels: adenocarcinoma histopathology, squamous cell carcinoma histopathology
  example_title: squamous cell carcinoma histopathology
- src: >-
    https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/resolve/main/example_data/biomed_image_classification_example_data/adenocarcinoma_histopathology.jpg
  candidate_labels: adenocarcinoma histopathology, squamous cell carcinoma histopathology
  example_title: adenocarcinoma histopathology
- src: >-
    https://upload.wikimedia.org/wikipedia/commons/5/57/Left-sided_Pleural_Effusion.jpg
  candidate_labels: left-sided pleural effusion chest x-ray, right-sided pleural effusion chest x-ray, normal chest x-ray
  example_title: left-sided pleural effusion chest x-ray
pipeline_tag: zero-shot-image-classification
---

# BiomedCLIP-PubMedBERT_256-vit_base_patch16_224

[BiomedCLIP](https://aka.ms/biomedclip-paper) is a biomedical vision-language foundation model pretrained with contrastive learning on [PMC-15M](https://aka.ms/biomedclip-paper), a dataset of 15 million figure-caption pairs extracted from biomedical research articles in PubMed Central.
It uses PubMedBERT as the text encoder and a Vision Transformer as the image encoder, with domain-specific adaptations.
It can perform various vision-language processing (VLP) tasks such as cross-modal retrieval, image classification, and visual question answering.
BiomedCLIP establishes a new state of the art on a wide range of standard datasets and substantially outperforms prior VLP approaches:

![](biomed-vlp-eval.svg)

## Citation

```bibtex
@misc{https://doi.org/10.48550/arXiv.2303.00915,
  doi = {10.48550/ARXIV.2303.00915},
  url = {https://arxiv.org/abs/2303.00915},
  author = {Zhang, Sheng and Xu, Yanbo and Usuyama, Naoto and Bagga, Jaspreet and Tinn, Robert and Preston, Sam and Rao, Rajesh and Wei, Mu and Valluri, Naveen and Wong, Cliff and Lungren, Matthew and Naumann, Tristan and Poon, Hoifung},
  title = {Large-Scale Domain-Specific Pretraining for Biomedical Vision-Language Processing},
  publisher = {arXiv},
  year = {2023},
}
```

## Model Use

### 1. Environment

```bash
conda create -n biomedclip python=3.10 -y
conda activate biomedclip
pip install open_clip_torch==2.23.0 transformers==4.35.2 matplotlib
```

### 2.1 Load from HF hub

```python
import torch
from urllib.request import urlopen
from PIL import Image
from open_clip import create_model_from_pretrained, get_tokenizer

# Load the model and config files from the Hugging Face Hub
model, preprocess = create_model_from_pretrained('hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224')
tokenizer = get_tokenizer('hf-hub:microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224')


# Zero-shot image classification
template = 'this is a photo of '
labels = [
    'adenocarcinoma histopathology',
    'brain MRI',
    'covid line chart',
    'squamous cell carcinoma histopathology',
    'immunohistochemistry histopathology',
    'bone X-ray',
    'chest X-ray',
    'pie chart',
    'hematoxylin and eosin histopathology'
]

dataset_url = 'https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/resolve/main/example_data/biomed_image_classification_example_data/'
test_imgs = [
    'squamous_cell_carcinoma_histopathology.jpeg',
    'H_and_E_histopathology.jpg',
    'bone_X-ray.jpg',
    'adenocarcinoma_histopathology.jpg',
    'covid_line_chart.png',
    'IHC_histopathology.jpg',
    'chest_X-ray.jpg',
    'brain_MRI.jpg',
    'pie_chart.png'
]
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model.to(device)
model.eval()

context_length = 256

images = torch.stack([preprocess(Image.open(urlopen(dataset_url + img))) for img in test_imgs]).to(device)
texts = tokenizer([template + l for l in labels], context_length=context_length).to(device)
with torch.no_grad():
    image_features, text_features, logit_scale = model(images, texts)

    # Scaled cosine similarities, turned into per-image probabilities over the candidate labels
    logits = (logit_scale * image_features @ text_features.t()).detach().softmax(dim=-1)
    sorted_indices = torch.argsort(logits, dim=-1, descending=True)

    logits = logits.cpu().numpy()
    sorted_indices = sorted_indices.cpu().numpy()

top_k = -1  # -1 prints the score for every label

for i, img in enumerate(test_imgs):
    pred = labels[sorted_indices[i][0]]

    top_k = len(labels) if top_k == -1 else top_k
    print(img.split('/')[-1] + ':')
    for j in range(top_k):
        jth_index = sorted_indices[i][j]
        print(f'{labels[jth_index]}: {logits[i][jth_index]}')
    print('\n')
```
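
Section 2.1 demonstrates zero-shot classification; cross-modal retrieval, mentioned in the introduction, uses the same embeddings. The sketch below is an illustrative addition rather than part of the original example: it reuses `model`, `tokenizer`, `images`, `test_imgs`, `template`, `context_length`, and `device` from the snippet above, and assumes (as open_clip's forward pass normally guarantees) that the returned features are L2-normalized.

```python
# Hypothetical continuation of the snippet above: text-to-image retrieval.
# Assumes model, tokenizer, images, test_imgs, template, context_length and device
# are already defined as in section 2.1.
query = 'chest X-ray'
query_tokens = tokenizer([template + query], context_length=context_length).to(device)

with torch.no_grad():
    image_features, text_features, _ = model(images, query_tokens)
    # Features are L2-normalized, so the dot product acts as a cosine similarity.
    sims = (image_features @ text_features.t()).squeeze(1)

# Print the example images from most to least similar to the query.
for rank, idx in enumerate(sims.argsort(descending=True).cpu().tolist()):
    print(f'{rank + 1}. {test_imgs[idx]} (similarity: {sims[idx].item():.3f})')
```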

### 2.2 Load from local files

```python
import json

from urllib.request import urlopen
from PIL import Image
import torch
from huggingface_hub import hf_hub_download
from open_clip import create_model_and_transforms, get_tokenizer
from open_clip.factory import HF_HUB_PREFIX, _MODEL_CONFIGS


# Download the model and config files
hf_hub_download(
    repo_id="microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224",
    filename="open_clip_pytorch_model.bin",
    local_dir="checkpoints"
)
hf_hub_download(
    repo_id="microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224",
    filename="open_clip_config.json",
    local_dir="checkpoints"
)


# Load the model and config files
model_name = "biomedclip_local"

with open("checkpoints/open_clip_config.json", "r") as f:
    config = json.load(f)
    model_cfg = config["model_cfg"]
    preprocess_cfg = config["preprocess_cfg"]


if (not model_name.startswith(HF_HUB_PREFIX)
    and model_name not in _MODEL_CONFIGS
    and config is not None):
    _MODEL_CONFIGS[model_name] = model_cfg

tokenizer = get_tokenizer(model_name)

model, _, preprocess = create_model_and_transforms(
    model_name=model_name,
    pretrained="checkpoints/open_clip_pytorch_model.bin",
    **{f"image_{k}": v for k, v in preprocess_cfg.items()},
)


# Zero-shot image classification
template = 'this is a photo of '
labels = [
    'adenocarcinoma histopathology',
    'brain MRI',
    'covid line chart',
    'squamous cell carcinoma histopathology',
    'immunohistochemistry histopathology',
    'bone X-ray',
    'chest X-ray',
    'pie chart',
    'hematoxylin and eosin histopathology'
]

dataset_url = 'https://huggingface.co/microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224/resolve/main/example_data/biomed_image_classification_example_data/'
test_imgs = [
    'squamous_cell_carcinoma_histopathology.jpeg',
    'H_and_E_histopathology.jpg',
    'bone_X-ray.jpg',
    'adenocarcinoma_histopathology.jpg',
    'covid_line_chart.png',
    'IHC_histopathology.jpg',
    'chest_X-ray.jpg',
    'brain_MRI.jpg',
    'pie_chart.png'
]
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model.to(device)
model.eval()

context_length = 256

images = torch.stack([preprocess(Image.open(urlopen(dataset_url + img))) for img in test_imgs]).to(device)
texts = tokenizer([template + l for l in labels], context_length=context_length).to(device)
with torch.no_grad():
    image_features, text_features, logit_scale = model(images, texts)

    logits = (logit_scale * image_features @ text_features.t()).detach().softmax(dim=-1)
    sorted_indices = torch.argsort(logits, dim=-1, descending=True)

    logits = logits.cpu().numpy()
    sorted_indices = sorted_indices.cpu().numpy()

top_k = -1

for i, img in enumerate(test_imgs):
    pred = labels[sorted_indices[i][0]]

    top_k = len(labels) if top_k == -1 else top_k
    print(img.split('/')[-1] + ':')
    for j in range(top_k):
        jth_index = sorted_indices[i][j]
        print(f'{labels[jth_index]}: {logits[i][jth_index]}')
    print('\n')

```

### Use in Jupyter Notebook

Please refer to this [example notebook](https://aka.ms/biomedclip-example-notebook).

### Intended Use

This model is intended to be used solely for (I) future research on vision-language processing and (II) reproducibility of the experimental results reported in the reference paper.

#### Primary Intended Use

The primary intended use is to support AI researchers building on top of this work. BiomedCLIP and its associated models should be helpful for exploring various biomedical VLP research questions, especially in the radiology domain.

#### Out-of-Scope Use

**Any** deployed use case of the model, commercial or otherwise, is currently out of scope. Although we evaluated the models using a broad set of publicly available research benchmarks, the models and evaluations are not intended for deployed use cases. Please refer to [the associated paper](https://aka.ms/biomedclip-paper) for more details.

## Data

This model builds upon the [PMC-15M dataset](https://aka.ms/biomedclip-paper), a large-scale parallel image-text dataset for biomedical vision-language processing. It contains 15 million figure-caption pairs extracted from biomedical research articles in PubMed Central and covers a diverse range of biomedical image types, such as microscopy, radiography, and histology.

## Limitations

This model was developed using English corpora, and thus can be considered English-only.

## Further information

Please refer to the corresponding paper, ["Large-Scale Domain-Specific Pretraining for Biomedical Vision-Language Processing"](https://aka.ms/biomedclip-paper), for additional details on model training and evaluation.
biomed-vlp-eval.svg ADDED
biomed_clip_example.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
gitattributes ADDED
@@ -0,0 +1,34 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
open_clip_config.json ADDED
@@ -0,0 +1,31 @@
{
    "model_cfg": {
        "embed_dim": 512,
        "vision_cfg": {
            "timm_model_name": "vit_base_patch16_224",
            "timm_model_pretrained": false,
            "timm_pool": "",
            "timm_proj": "linear",
            "image_size": 224
        },
        "text_cfg": {
            "hf_model_name": "microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract",
            "hf_tokenizer_name": "microsoft/BiomedNLP-BiomedBERT-base-uncased-abstract",
            "hf_proj_type": "mlp",
            "hf_pooler_type": "cls_last_hidden_state_pooler",
            "context_length": 256
        }
    },
    "preprocess_cfg": {
        "mean": [
            0.48145466,
            0.4578275,
            0.40821073
        ],
        "std": [
            0.26862954,
            0.26130258,
            0.27577711
        ]
    }
}
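
The `preprocess_cfg` block above is what open_clip uses to build the image transform: 224×224 inputs normalized with the listed mean and std. Below is a rough, hypothetical torchvision equivalent for illustration only; the resize and crop details are assumptions, so in practice use the `preprocess` returned by `create_model_from_pretrained`.

```python
# Hypothetical approximation of the transform open_clip derives from preprocess_cfg.
# Interpolation and crop behavior are assumptions; prefer the library-provided preprocess.
from torchvision import transforms

approx_preprocess = transforms.Compose([
    transforms.Resize(224, interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.CenterCrop(224),
    transforms.Lambda(lambda img: img.convert('RGB')),
    transforms.ToTensor(),
    transforms.Normalize(
        mean=(0.48145466, 0.4578275, 0.40821073),
        std=(0.26862954, 0.26130258, 0.27577711),
    ),
])
```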
open_clip_pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:52cc993c5c5ff962bd0c60931874bc001e7e9b41666a385530f4a036294576be
size 783705670
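
This file is a Git LFS pointer rather than the checkpoint itself; the `oid` and `size` fields describe the actual ~784 MB weights. A small, hypothetical sketch of verifying a downloaded copy against the pointer (assuming it was fetched to `checkpoints/` as in the README example):

```python
# Hypothetical integrity check against the LFS pointer's sha256 oid and size.
import hashlib
import os

path = "checkpoints/open_clip_pytorch_model.bin"  # as downloaded in the README example
expected_sha256 = "52cc993c5c5ff962bd0c60931874bc001e7e9b41666a385530f4a036294576be"
expected_size = 783705670

assert os.path.getsize(path) == expected_size, "unexpected file size"

sha256 = hashlib.sha256()
with open(path, "rb") as f:
    for chunk in iter(lambda: f.read(1 << 20), b""):
        sha256.update(chunk)

assert sha256.hexdigest() == expected_sha256, "checksum mismatch"
print("checkpoint matches the LFS pointer")
```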
special_tokens_map.json ADDED
@@ -0,0 +1,7 @@
{
    "cls_token": "[CLS]",
    "mask_token": "[MASK]",
    "pad_token": "[PAD]",
    "sep_token": "[SEP]",
    "unk_token": "[UNK]"
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer_config.json ADDED
@@ -0,0 +1,15 @@
{
    "clean_up_tokenization_spaces": true,
    "cls_token": "[CLS]",
    "do_basic_tokenize": true,
    "do_lower_case": true,
    "mask_token": "[MASK]",
    "model_max_length": 1000000000000000019884624838656,
    "never_split": null,
    "pad_token": "[PAD]",
    "sep_token": "[SEP]",
    "strip_accents": null,
    "tokenize_chinese_chars": true,
    "tokenizer_class": "BertTokenizer",
    "unk_token": "[UNK]"
}
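
These are standard BERT WordPiece tokenizer settings with lowercasing, matching the PubMedBERT tokenizer named in `open_clip_config.json`. A minimal, hypothetical sketch of loading the bundled tokenizer files directly with `transformers` follows; it assumes `AutoTokenizer` can resolve this repo's files via the `tokenizer_class` entry. The README's `get_tokenizer` handles this for you and applies the model's 256-token context length.

```python
# Hypothetical direct use of the bundled tokenizer files via transformers.
# The open_clip text tower truncates/pads captions to context_length=256 (see open_clip_config.json).
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/BiomedCLIP-PubMedBERT_256-vit_base_patch16_224")
enc = tok(
    "this is a photo of adenocarcinoma histopathology",
    padding="max_length",
    truncation=True,
    max_length=256,
    return_tensors="pt",
)
print(enc["input_ids"].shape)  # expected: torch.Size([1, 256])
```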
vocab.txt ADDED
The diff for this file is too large to render. See raw diff