bbexx committed on
Commit
9d0e31a
1 Parent(s): cfeaffa

UPDATE README.md

Files changed (1)
  1. README.md +82 -3
README.md CHANGED
@@ -1,3 +1,82 @@
- ---
- license: mit
- ---
+ # [ViTamin: Design Scalable Vision Models in the Vision-language Era](https://arxiv.org)
+ Official PyTorch implementation of **ViTamin**, from the following paper:
+
+ [ViTamin: Design Scalable Vision Models in the Vision-language Era](https://arxiv.org/).\
+ ✨  [Jieneng Chen](https://beckschen.github.io), [Qihang Yu](https://yucornetto.github.io/), [Xiaohui Shen](https://xiaohuishen.github.io/), [Alan Yuille](https://www.cs.jhu.edu/~ayuille/) and [Liang-Chieh Chen](http://liangchiehchen.com/)\
+ 🏠  Johns Hopkins University, ByteDance
+
+
+ Load the model from Hugging Face:
+ ```python
+ import torch
+ from PIL import Image
+ from transformers import AutoModel, CLIPImageProcessor
+
+ # Load the pretrained ViTamin-XL checkpoint in bfloat16 on the GPU.
+ model = AutoModel.from_pretrained(
+     'jienengchen/ViTamin-XL-384px',
+     torch_dtype=torch.bfloat16,
+     low_cpu_mem_usage=True,
+     trust_remote_code=True).cuda().eval()
+
+ # Read the input image and make sure it is in RGB.
+ image = Image.open('./image.png').convert('RGB')
+
+ # The matching image processor handles resizing and normalization.
+ image_processor = CLIPImageProcessor.from_pretrained('jienengchen/ViTamin-XL-384px')
+
+ pixel_values = image_processor(images=image, return_tensors='pt').pixel_values
+ pixel_values = pixel_values.to(torch.bfloat16).cuda()
+
+ # Forward pass: compute the image features.
+ outputs = model(pixel_values)
+ ```
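+
+ The snippet above stops at `outputs`. A minimal follow-up sketch is shown below, assuming the remote-code model returns either a plain feature tensor or a `ModelOutput`-style object whose first element is the image features (check the model card if your output differs); it L2-normalizes the features for cosine-similarity use:
+ ```python
+ # Hedged sketch (not part of the official snippet): unwrap the output and
+ # L2-normalize the image features for cosine-similarity comparisons.
+ feats = outputs if isinstance(outputs, torch.Tensor) else outputs[0]
+ feats = torch.nn.functional.normalize(feats.float(), dim=-1)
+ print(feats.shape)
+ ```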
+
+ ## Main Results with CLIP Pre-training on DataComp-1B
+
+
+ | image encoder | image size | num patches | text encoder depth/width | seen samples (B) | trainable params Image+Text (M) | MACs Image+Text (G) | ImageNet Acc. | avg. 38 datasets | ImageNet dist. shift. | VTAB | retrieval |
+ |---------------|------------|-------------|--------------------------|-------------------|---------------------------------|----------------------|---------------|------------------|-----------------------|------|-----------|
+ | ViTamin-L | 224 | 196 | 12/768 | 12.8 | 333.3+123.7 | 72.6+6.6 | 80.8 | 66.7 | 69.8 | 65.3 | 60.3 |
+ | ViTamin-L | 256 | 256 | 12/768 | 12.8+0.2 | 333.4+123.7 | 94.8+6.6 | 81.2 | 67.0 | 71.1 | 65.3 | 61.2 |
+ | ViTamin-L | 336 | 441 | 12/768 | 12.8+0.2 | 333.6+123.7 | 163.4+6.6 | 81.6 | 67.0 | 72.1 | 64.4 | 61.6 |
+ | ViTamin-L | 384 | 576 | 12/768 | 12.8+0.2 | 333.7+123.7 | 213.4+6.6 | 81.8 | 67.2 | 72.4 | 64.7 | 61.8 |
+ | ViTamin-L2 | 224 | 196 | 24/1024 | 12.8 | 333.6+354.0 | 72.6+23.3 | 80.9 | 66.4 | 70.6 | 63.4 | 61.5 |
+ | ViTamin-L2 | 256 | 256 | 24/1024 | 12.8+0.5 | 333.6+354.0 | 94.8+23.3 | 81.5 | 67.4 | 71.9 | 64.1 | 63.1 |
+ | ViTamin-L2 | 336 | 441 | 24/1024 | 12.8+0.5 | 333.8+354.0 | 163.4+23.3 | 81.8 | 67.8 | 73.0 | 64.5 | 63.6 |
+ | ViTamin-L2 | 384 | 576 | 24/1024 | 12.8+0.5 | 334.0+354.0 | 213.4+23.3 | 82.1 | 68.1 | 73.4 | 64.8 | 63.7 |
+ | ViTamin-XL | 256 | 256 | 27/1152 | 12.8+0.5 | 436.1+488.7 | 125.3+33.1 | 82.1 | 67.6 | 72.3 | 65.4 | 62.7 |
+ | ViTamin-XL | 384 | 576 | 27/1152 | 12.8+0.5 | 436.1+488.7 | 281.9+33.1 | 82.6 | 68.1 | 73.6 | 65.6 | 63.8 |
+ | ViTamin-XL | 256 | 256 | 27/1152 | 40 | 436.1+488.7 | 125.3+33.1 | 82.3 | 67.5 | 72.8 | 64.0 | 62.1 |
+ | ViTamin-XL | 336 | 441 | 27/1152 | 40+1 | 436.1+488.7 | 215.9+33.1 | 82.7 | 68.0 | 73.9 | 64.1 | 62.6 |
+ | ViTamin-XL | 384 | 576 | 27/1152 | 40+1 | 436.1+488.7 | 281.9+33.1 | 82.9 | 68.1 | 74.1 | 64.0 | 62.5 |
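+
+ To relate the table to the loading snippet above: a quick, hedged way to sanity-check the "trainable params" column against whatever checkpoint `AutoModel` actually loaded is sketched below (the count covers everything in `model`, so the image/text split may not match the table exactly):
+ ```python
+ # Hedged sketch: count trainable parameters of the loaded model and report
+ # them in millions, for comparison with "trainable params Image+Text (M)".
+ num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
+ print(f"trainable params: {num_params / 1e6:.1f}M")
+ ```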
+
+ ## Main Results on Downstream Tasks
+ **Open-Vocab Detection**
+ | image encoder | detector | OV-COCO (AP<sub>50</sub><sup>novel</sup>) | OV-LVIS (AP<sub>r</sub>) |
+ |---------------|----------|---------------------------------------|-----------------------|
+ | ViT-L/14 | Sliding F-ViT | 36.1 | 32.5 |
+ | ViTamin-L | Sliding F-ViT | 37.5 | 35.6 |
+
+ **Open-Vocab Segmentation**
+
+ | image encoder | segmentor | ADE | Cityscapes | MV | A-150 | A-847 | PC-459 | PC-59 | PAS-21 |
+ |---------------|-------------|----------------|--------------|------|-------|-------|--------|-------|--------------------|
+ | ViT-L/14 | Sliding FC-CLIP | 24.6 | 40.7 | 16.5 | 31.8 | 14.3 | 18.3 | 55.1 | 81.5 |
+ | ViTamin-L | Sliding FC-CLIP | 27.3 | 44.0 | 18.2 | 35.6 | 16.1 | 20.4 | 58.4 | 83.4 |
+
+ Note: results on the panoptic datasets (ADE, Cityscapes, MV) are reported in PQ, and results on the semantic datasets (A-150, A-847, PC-459, PC-59, PAS-21) are reported in mIoU.
+
+ **Large Multi-modal Models**
+
+ | image encoder | image size | VQAv2 | GQA | VizWiz | SQA | T-VQA | POPE | MME | MM-Bench | MM-B-CN | SEED | LLaVA-Wild | MM-Vet |
+ |---------------|----------|-------|------|--------|------|-------|------|------|----------|---------|------|------------|--------|
+ | ViTamin-L | 224 | 78.4 | 61.6 | 51.1 | 66.9 | 58.7 | 84.6 | 1421 | 65.4 | 58.4 | 57.7 | 64.5 | 33.6 |
+ | ViTamin-L | 384 | 78.9 | 61.6 | 55.4 | 67.6 | 59.8 | 85.5 | 1447 | 64.5 | 58.3 | 57.9 | 66.1 | 33.6 |
+
+
+ ## Citing ViTamin
+
+ ```
+ @article{chen2024vitamin,
+   title={ViTamin: Design Scalable Vision Models in the Vision-language Era},
+   author={Chen, Jieneng and Yu, Qihang and Shen, Xiaohui and Yuille, Alan and Chen, Liang-Chieh},
+   journal={arXiv preprint arXiv:xxx.xxxxx},
+   year={2024}
+ }
+ ```