---
model_type: clip
tags:
- medical
language:
- en
inference: false
pipeline_tag: zero-shot-image-classification
---


# Model Card: ClipMD

## Model Details
ClipMD is a medical image-text matching model based on OpenAI's CLIP, extended with a sliding-window text encoder.

### Model Description

The model uses a ViT-B/32 Transformer architecture as an image encoder and a masked sliding-window self-attention Transformer as a text encoder. These encoders are trained to maximize the similarity of (image, text) pairs via a contrastive loss.
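CLIP's original text encoder operates on a fixed-length token context, which is restrictive for longer medical captions; a sliding-window encoder instead processes the text in overlapping chunks. The sketch below illustrates the general windowing idea only; the function name, window size, stride, and the pooling step described after it are illustrative assumptions, not ClipMD's verified internals.

```python
from typing import List

def sliding_windows(token_ids: List[int],
                    window: int = 77,
                    stride: int = 38) -> List[List[int]]:
    """Split a long token sequence into overlapping fixed-size windows.

    Illustrative sketch only: window/stride values are assumptions,
    not ClipMD's actual configuration.
    """
    if len(token_ids) <= window:
        return [token_ids]
    starts = range(0, len(token_ids) - window + 1, stride)
    chunks = [token_ids[s:s + window] for s in starts]
    # Add a final window so the tail of the sequence is always covered.
    if starts[-1] + window < len(token_ids):
        chunks.append(token_ids[-window:])
    return chunks
```

Each window would then be encoded separately and the per-window embeddings pooled into a single text representation.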

The model was fine-tuned on the [ROCO dataset](https://github.com/razorx89/roco-dataset).
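For intuition, the contrastive objective mentioned above can be sketched as a symmetric InfoNCE loss over a batch of matched (image, text) embeddings. This is a generic CLIP-style sketch, not ClipMD's training code; the function name and temperature value are illustrative:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(image_embeds: torch.Tensor,
                     text_embeds: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    """Symmetric CLIP-style contrastive loss (illustrative sketch)."""
    # Normalize so dot products are cosine similarities.
    image_embeds = F.normalize(image_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)

    # Entry (i, j) scores image i against text j; matched pairs lie on the diagonal.
    logits = image_embeds @ text_embeds.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Cross-entropy in both directions, then averaged.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```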

## Use with Transformers
```python
from PIL import Image
from transformers import AutoModel, AutoProcessor

# ClipMD ships custom model code on the Hub, so trust_remote_code is required.
model = AutoModel.from_pretrained("Idan0405/ClipMD", trust_remote_code=True)
processor = AutoProcessor.from_pretrained("Idan0405/ClipMD")

image = Image.open("your image path")

# Encode the image together with the candidate text labels.
inputs = processor(text=["chest x-ray", "head MRI"], images=image,
                   return_tensors="pt", padding=True)

outputs = model(**inputs)
logits_per_image = outputs[0]  # image-text similarity scores
probs = logits_per_image.softmax(dim=1)  # softmax over labels gives probabilities
```
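
To read off the zero-shot prediction, pair the probabilities back with the candidate labels (a small continuation of the snippet above):

```python
labels = ["chest x-ray", "head MRI"]
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")

# The highest-probability label is the model's zero-shot prediction.
print("prediction:", labels[probs.argmax(dim=1).item()])
```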
## See also
* [ClipMD repository on GitHub](https://github.cs.huji.ac.il/tomhope-lab/ClipMD)
* [ClipMD paper on arXiv](https://arxiv.org/abs/2303.13340)