---
license: other
license_name: apple-ascl
license_link: LICENSE
library_name: mobileclip
---

# MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training

MobileCLIP was introduced in [MobileCLIP: Fast Image-Text Models through Multi-Modal Reinforced Training](https://arxiv.org/pdf/2311.17049.pdf) (CVPR 2024) by Pavan Kumar Anasosalu Vasu, Hadi Pouransari, Fartash Faghri, Raviteja Vemulapalli, and Oncel Tuzel.

This repository contains the **MobileCLIP-B** checkpoint.

![MobileCLIP Performance Figure](fig_accuracy_latency.png)

### Highlights

* Our smallest variant, `MobileCLIP-S0`, obtains zero-shot performance similar to [OpenAI](https://arxiv.org/abs/2103.00020)'s ViT-B/16 model while being 4.8x faster and 2.8x smaller.
* `MobileCLIP-S2` obtains better average zero-shot performance than [SigLIP](https://arxiv.org/abs/2303.15343)'s ViT-B/16 model while being 2.3x faster and 2.1x smaller, and training on 3x fewer seen samples.
* `MobileCLIP-B` (LT) attains a zero-shot ImageNet accuracy of **77.2%**, significantly better than recent works with similar architectures such as [DFN](https://arxiv.org/abs/2309.17425) and [SigLIP](https://arxiv.org/abs/2303.15343), and even better than [OpenAI's ViT-L/14@336](https://arxiv.org/abs/2103.00020).

## Checkpoints

| Model | # Seen <BR>Samples (B) | # Params (M) <BR> (img + txt) | Latency (ms) <BR> (img + txt) | IN-1k Zero-Shot <BR> Top-1 Acc. (%) | Avg. Perf. (%) <BR> on 38 datasets |
|:----------------------------------------------------------|:----------------------:|:-----------------------------:|:-----------------------------:|:-----------------------------------:|:----------------------------------:|
| [MobileCLIP-S0](https://hf.co/pcuenq/MobileCLIP-S0)       | 13                     | 11.4 + 42.4                   | 1.5 + 1.6                     | 67.8                                | 58.1                               |
| [MobileCLIP-S1](https://hf.co/pcuenq/MobileCLIP-S1)       | 13                     | 21.5 + 63.4                   | 2.5 + 3.3                     | 72.6                                | 61.3                               |
| [MobileCLIP-S2](https://hf.co/pcuenq/MobileCLIP-S2)       | 13                     | 35.7 + 63.4                   | 3.6 + 3.3                     | 74.4                                | 63.7                               |
| [MobileCLIP-B](https://hf.co/pcuenq/MobileCLIP-B)         | 13                     | 86.3 + 63.4                   | 10.4 + 3.3                    | 76.8                                | 65.2                               |
| [MobileCLIP-B (LT)](https://hf.co/pcuenq/MobileCLIP-B-LT) | 36                     | 86.3 + 63.4                   | 10.4 + 3.3                    | 77.2                                | 65.8                               |

## How to Use

First, download the desired checkpoint by visiting one of the links in the table above: click the `Files and versions` tab and download the PyTorch checkpoint.
For programmatic downloading, if you have `huggingface_hub` installed, you can also run:

```sh
huggingface-cli download pcuenq/MobileCLIP-B
```
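
If you prefer to stay in Python, `huggingface_hub` can also fetch a single file from the repo. The snippet below is a minimal sketch; the `mobileclip_b.pt` filename is an assumption, so check the repo's `Files and versions` tab for the actual checkpoint name.

```py
from huggingface_hub import hf_hub_download

# Download one file from the Hub and return its local path in the cache.
# NOTE: the filename is an assumption; verify it in the repo's file listing.
checkpoint_path = hf_hub_download(repo_id="pcuenq/MobileCLIP-B", filename="mobileclip_b.pt")
print(checkpoint_path)
```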

Then, install [`ml-mobileclip`](https://github.com/apple/ml-mobileclip) by following the instructions in the repo. It uses an API similar to [`open_clip`'s](https://github.com/mlfoundations/open_clip).
You can run inference with a code snippet like the following:

```py
import torch
from PIL import Image
import mobileclip

# Load the model, the image preprocessing transforms, and the tokenizer.
# Point `pretrained` at the checkpoint you downloaded above.
model, _, preprocess = mobileclip.create_model_and_transforms('mobileclip_b', pretrained='/path/to/mobileclip_b.pt')
tokenizer = mobileclip.get_tokenizer('mobileclip_b')

# Prepare one image and a few candidate captions.
image = preprocess(Image.open("docs/fig_accuracy_latency.png").convert('RGB')).unsqueeze(0)
text = tokenizer(["a diagram", "a dog", "a cat"])

with torch.no_grad(), torch.cuda.amp.autocast():
    # Encode both modalities and L2-normalize, so the dot products below are cosine similarities.
    image_features = model.encode_image(image)
    text_features = model.encode_text(text)
    image_features /= image_features.norm(dim=-1, keepdim=True)
    text_features /= text_features.norm(dim=-1, keepdim=True)

    # Scaled cosine similarities, turned into a probability over the captions.
    text_probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)

print("Label probs:", text_probs)
```
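
When classifying many images against the same label set, you can encode the labels once and reuse the text features. The sketch below wraps the snippet above along those lines; `zero_shot_probs` is a hypothetical helper, not part of the `mobileclip` API.

```py
import torch
from PIL import Image
import mobileclip

def zero_shot_probs(model, preprocess, image_path, text_features, labels):
    # Hypothetical helper: classify one image against pre-computed,
    # L2-normalized text features, returning a {label: probability} dict.
    image = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        image_features = model.encode_image(image)
        image_features /= image_features.norm(dim=-1, keepdim=True)
        probs = (100.0 * image_features @ text_features.T).softmax(dim=-1)
    return dict(zip(labels, probs.squeeze(0).tolist()))

model, _, preprocess = mobileclip.create_model_and_transforms('mobileclip_b', pretrained='/path/to/mobileclip_b.pt')
tokenizer = mobileclip.get_tokenizer('mobileclip_b')

# Encode the label set once, then reuse it for every image.
labels = ["a diagram", "a dog", "a cat"]
with torch.no_grad():
    text_features = model.encode_text(tokenizer(labels))
    text_features /= text_features.norm(dim=-1, keepdim=True)

print(zero_shot_probs(model, preprocess, "docs/fig_accuracy_latency.png", text_features, labels))
```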