rwightman (HF staff) committed
Commit
e21819a
1 Parent(s): ef4740a

Update model config and README

Files changed (3)
  1. README.md +118 -1
  2. config.json +1 -1
  3. model.safetensors +3 -0
README.md CHANGED
@@ -3,5 +3,122 @@ tags:
  - image-classification
  - timm
  library_tag: timm
+ license: apache-2.0
  ---
- # Model card for eva_giant_patch14_224.clip_ft_in1k
+ # Model card for eva_giant_patch14_224.clip_ft_in1k
+
+ An EVA-CLIP image classification model. Pretrained on LAION-400M with CLIP and fine-tuned on ImageNet-1k by the paper authors. EVA-CLIP uses MIM-pretrained image towers and pretrained text towers, FLIP patch dropout, and different optimizers and hyperparameters to accelerate training.
+
+ NOTE: `timm` checkpoints are float32 for consistency with other models. Original checkpoints are float16 or bfloat16 in some cases; see the originals if that is preferred.
+
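+ If reduced memory is the goal, a minimal sketch (an assumption about your runtime, not a property of this checkpoint) casting the float32 weights to half precision after loading:
+
+ ```python
+ import timm
+
+ # load the float32 timm checkpoint, then cast to float16 for inference;
+ # inputs passed to the model must be cast to the same dtype
+ model = timm.create_model('eva_giant_patch14_224.clip_ft_in1k', pretrained=True)
+ model = model.eval().half()
+ ```
+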
+ ## Model Details
+ - **Model Type:** Image classification / feature backbone
+ - **Model Stats:**
+   - Params (M): 1012.6
+   - GMACs: 267.2
+   - Activations (M): 192.6
+   - Image size: 224 x 224
+ - **Papers:**
+   - EVA-CLIP: Improved Training Techniques for CLIP at Scale: https://arxiv.org/abs/2303.15389
+ - **Original:**
+   - https://github.com/baaivision/EVA
+   - https://huggingface.co/QuanSun/EVA-CLIP
+
+ ## Model Usage
+ ### Image Classification
+ ```python
+ from urllib.request import urlopen
+ from PIL import Image
+ import timm
+ import torch
+
+ img = Image.open(urlopen(
+     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
+ ))
+
+ model = timm.create_model('eva_giant_patch14_224.clip_ft_in1k', pretrained=True)
+ model = model.eval()
+
+ # get model specific transforms (normalization, resize)
+ data_config = timm.data.resolve_model_data_config(model)
+ transforms = timm.data.create_transform(**data_config, is_training=False)
+
+ output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1
+
+ top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
+ ```
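+
+ As a short, hedged continuation of the example above, the top-5 pairs can be read out directly from the tensors (decoding indices to ImageNet class names needs an external label map, which is omitted here):
+
+ ```python
+ # print (probability %, class index) for each of the top-5 predictions
+ for prob, idx in zip(top5_probabilities[0], top5_class_indices[0]):
+     print(f'class {idx.item()}: {prob.item():.2f}%')
+ ```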
+
+ ### Image Embeddings
+ ```python
+ from urllib.request import urlopen
+ from PIL import Image
+ import timm
+
+ img = Image.open(urlopen(
+     'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
+ ))
+
+ model = timm.create_model(
+     'eva_giant_patch14_224.clip_ft_in1k',
+     pretrained=True,
+     num_classes=0,  # remove classifier nn.Linear
+ )
+ model = model.eval()
+
+ # get model specific transforms (normalization, resize)
+ data_config = timm.data.resolve_model_data_config(model)
+ transforms = timm.data.create_transform(**data_config, is_training=False)
+
+ output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor
+
+ # or equivalently (without needing to set num_classes=0)
+
+ output = model.forward_features(transforms(img).unsqueeze(0))
+ # output is unpooled, a (1, 257, 1408) shaped tensor
+
+ output = model.forward_head(output, pre_logits=True)
+ # output is a (1, num_features) shaped tensor
+ ```
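+
+ The pooled output is one feature vector per image, so a natural use is similarity search. A minimal sketch, where `emb_b` is a stand-in for a second image's embedding computed the same way:
+
+ ```python
+ import torch.nn.functional as F
+
+ # compare two (1, num_features) embeddings from the pre_logits path above
+ emb_a = model.forward_head(model.forward_features(transforms(img).unsqueeze(0)), pre_logits=True)
+ emb_b = emb_a  # substitute a second image's embedding here
+ similarity = F.cosine_similarity(emb_a, emb_b)  # tensor([1.]) for identical inputs
+ ```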
+
+ ## Model Comparison
+ Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).
+
+ |model |top1 |top5 |param_count|img_size|
+ |-----------------------------------------------|------|------|-----------|--------|
+ |eva02_large_patch14_448.mim_m38m_ft_in22k_in1k |90.054|99.042|305.08 |448 |
+ |eva02_large_patch14_448.mim_in22k_ft_in22k_in1k|89.946|99.01 |305.08 |448 |
+ |eva_giant_patch14_560.m30m_ft_in22k_in1k |89.792|98.992|1014.45 |560 |
+ |eva02_large_patch14_448.mim_in22k_ft_in1k |89.626|98.954|305.08 |448 |
+ |eva02_large_patch14_448.mim_m38m_ft_in1k |89.57 |98.918|305.08 |448 |
+ |eva_giant_patch14_336.m30m_ft_in22k_in1k |89.56 |98.956|1013.01 |336 |
+ |eva_giant_patch14_336.clip_ft_in1k |89.466|98.82 |1013.01 |336 |
+ |eva_large_patch14_336.in22k_ft_in22k_in1k |89.214|98.854|304.53 |336 |
+ |eva_giant_patch14_224.clip_ft_in1k |88.882|98.678|1012.56 |224 |
+ |eva02_base_patch14_448.mim_in22k_ft_in22k_in1k |88.692|98.722|87.12 |448 |
+ |eva_large_patch14_336.in22k_ft_in1k |88.652|98.722|304.53 |336 |
+ |eva_large_patch14_196.in22k_ft_in22k_in1k |88.592|98.656|304.14 |196 |
+ |eva02_base_patch14_448.mim_in22k_ft_in1k |88.23 |98.564|87.12 |448 |
+ |eva_large_patch14_196.in22k_ft_in1k |87.934|98.504|304.14 |196 |
+ |eva02_small_patch14_336.mim_in22k_ft_in1k |85.74 |97.614|22.13 |336 |
+ |eva02_tiny_patch14_336.mim_in22k_ft_in1k |80.658|95.524|5.76 |336 |
+
+ ## Citation
+ ```bibtex
+ @article{EVA-CLIP,
+   title={EVA-CLIP: Improved Training Techniques for CLIP at Scale},
+   author={Sun, Quan and Fang, Yuxin and Wu, Ledell and Wang, Xinlong and Cao, Yue},
+   journal={arXiv preprint arXiv:2303.15389},
+   year={2023}
+ }
+ ```
+ ```bibtex
+ @misc{rw2019timm,
+   author = {Ross Wightman},
+   title = {PyTorch Image Models},
+   year = {2019},
+   publisher = {GitHub},
+   journal = {GitHub repository},
+   doi = {10.5281/zenodo.4414861},
+   howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
+ }
+ ```
config.json CHANGED
@@ -13,7 +13,7 @@
   ],
   "fixed_input_size": true,
   "interpolation": "bicubic",
- "crop_pct": 1.0,
+ "crop_pct": 0.9,
  "crop_mode": "center",
  "mean": [
    0.48145466,
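
The crop_pct change above alters evaluation preprocessing: with crop_pct 0.9 and a 224 px model, timm resizes the shorter image side to roughly 224 / 0.9 ≈ 249 px before center-cropping to 224 x 224, rather than resizing straight to 224 with no border at crop_pct 1.0. A minimal sketch of the transform this config implies, assuming a recent timm release whose `create_transform` accepts `crop_mode`:

```python
import timm

# eval transform per the updated config: bicubic resize of the shorter side
# to ~249 px, then a 224 x 224 center crop
transforms = timm.data.create_transform(
    input_size=(3, 224, 224),
    is_training=False,
    interpolation='bicubic',
    crop_pct=0.9,
    crop_mode='center',
)
print(transforms)
```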
model.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0786c919b43f33fa52671abd9b9a00d707fa9435d309bd1274b8b91fe25befcd
+ size 4050271238