rwightman committed
Commit f11c3e2
1 Parent(s): 3d06066
Files changed (4)
  1. README.md +264 -0
  2. config.json +35 -0
  3. model.safetensors +3 -0
  4. pytorch_model.bin +3 -0
README.md ADDED
@@ -0,0 +1,264 @@
---
tags:
- image-classification
- timm
library_tag: timm
license: apache-2.0
datasets:
- imagenet-1k
- laion-2b
---
# Model card for convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320

A ConvNeXt image classification model. CLIP image tower weights pretrained in [OpenCLIP](https://github.com/mlfoundations/open_clip) on LAION and fine-tuned on ImageNet-12k followed by ImageNet-1k in `timm` by Ross Wightman.

Please see the related OpenCLIP model cards for more details on the pretraining:
* https://huggingface.co/laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-soup
* https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg
* https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg
* https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 200.1
  - GMACs: 70.2
  - Activations (M): 88.0
  - Image size: 320 x 320
- **Papers:**
  - LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402
  - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
  - Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020
- **Original:** https://github.com/mlfoundations/open_clip
- **Pretrain Dataset:** LAION-2B
- **Dataset:** ImageNet-1k

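The parameter count and input resolution above can be double-checked straight from the checkpoint. A minimal sketch, assuming a recent `timm` release where the resolved pretrained config is exposed as `model.pretrained_cfg` (weights are downloaded on first use):

```python
import timm

model = timm.create_model('convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320', pretrained=True)

# ~200.1M parameters, matching the stats listed above
print(f'params (M): {sum(p.numel() for p in model.parameters()) / 1e6:.1f}')
# default input size resolved from the pretrained config, (3, 320, 320)
print(model.pretrained_cfg['input_size'])
```
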
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import torch
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
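The top-5 indices returned above are ImageNet-1k class indices. They can be mapped to human-readable names with any standard ImageNet-1k label list; the sketch below assumes the 1000-line `imagenet_classes.txt` published in the `pytorch/hub` repository and reuses `top5_probabilities` / `top5_class_indices` from the snippet above:

```python
from urllib.request import urlopen

# assumed label source: one class name per line, ordered by ImageNet-1k index
labels = urlopen(
    'https://raw.githubusercontent.com/pytorch/hub/master/imagenet_classes.txt'
).read().decode('utf-8').splitlines()

for prob, idx in zip(top5_probabilities[0], top5_class_indices[0]):
    print(f'{labels[idx]}: {prob.item():.2f}%')
```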

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 192, 80, 80])
    #  torch.Size([1, 384, 40, 40])
    #  torch.Size([1, 768, 20, 20])
    #  torch.Size([1, 1536, 10, 10])
    print(o.shape)
```
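If only some of the feature maps are needed, `features_only=True` can be combined with the `out_indices` argument of `timm.create_model` to select specific stages; a minimal sketch:

```python
import timm

model = timm.create_model(
    'convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320',
    pretrained=True,
    features_only=True,
    out_indices=(2, 3),  # keep only the last two stages listed above
)
print(model.feature_info.channels())   # [768, 1536]
print(model.feature_info.reduction())  # [16, 32]
```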

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)

output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 1536, 10, 10) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
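The pooled embedding is a convenient representation for retrieval or near-duplicate detection. A minimal sketch comparing two images with cosine similarity, reusing the `model` (created with `num_classes=0`) and `transforms` from the snippet above; `img_a` / `img_b` are placeholders for any two PIL images:

```python
import torch
import torch.nn.functional as F

img_a, img_b = img, img  # placeholder: compare the example image with itself

with torch.no_grad():
    emb_a = model(transforms(img_a).unsqueeze(0))  # (1, 1536)
    emb_b = model(transforms(img_b).unsqueeze(0))

print(F.cosine_similarity(emb_a, emb_b).item())  # 1.0 for identical inputs
```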

## Model Comparison
Explore the dataset and runtime metrics of this model in the timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

All timing numbers are from eager-mode PyTorch 1.13 on an RTX 3090 w/ AMP.

| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
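
The table above is a snapshot; the full, regularly updated numbers live in the linked results folder. A minimal sketch of pulling them with pandas, assuming the `results-imagenet.csv` file currently published there (column names may change between releases):

```python
import pandas as pd

url = ('https://raw.githubusercontent.com/huggingface/pytorch-image-models/'
       'main/results/results-imagenet.csv')
df = pd.read_csv(url)
# filter to the convnext_large_mlp variants discussed in this card
print(df[df['model'].str.startswith('convnext_large_mlp')])
```
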
## Citation
```bibtex
@software{ilharco_gabriel_2021_5143773,
  author       = {Ilharco, Gabriel and
                  Wortsman, Mitchell and
                  Wightman, Ross and
                  Gordon, Cade and
                  Carlini, Nicholas and
                  Taori, Rohan and
                  Dave, Achal and
                  Shankar, Vaishaal and
                  Namkoong, Hongseok and
                  Miller, John and
                  Hajishirzi, Hannaneh and
                  Farhadi, Ali and
                  Schmidt, Ludwig},
  title        = {OpenCLIP},
  month        = jul,
  year         = 2021,
  note         = {If you use this software, please cite it as below.},
  publisher    = {Zenodo},
  version      = {0.1},
  doi          = {10.5281/zenodo.5143773},
  url          = {https://doi.org/10.5281/zenodo.5143773}
}
```
```bibtex
@inproceedings{schuhmann2022laionb,
  title={{LAION}-5B: An open large-scale dataset for training next generation image-text models},
  author={Christoph Schuhmann and
          Romain Beaumont and
          Richard Vencu and
          Cade W Gordon and
          Ross Wightman and
          Mehdi Cherti and
          Theo Coombes and
          Aarush Katta and
          Clayton Mullis and
          Mitchell Wortsman and
          Patrick Schramowski and
          Srivatsa R Kundurthy and
          Katherine Crowson and
          Ludwig Schmidt and
          Robert Kaczmarczyk and
          Jenia Jitsev},
  booktitle={Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year={2022},
  url={https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{Radford2021LearningTV,
  title={Learning Transferable Visual Models From Natural Language Supervision},
  author={Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
  booktitle={ICML},
  year={2021}
}
```
```bibtex
@article{liu2022convnet,
  author  = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
  title   = {A ConvNet for the 2020s},
  journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year    = {2022},
}
```
config.json ADDED
@@ -0,0 +1,35 @@
{
    "architecture": "convnext_large_mlp",
    "num_classes": 1000,
    "num_features": 1536,
    "pretrained_cfg": {
        "tag": "clip_laion2b_soup_ft_in12k_in1k_320",
        "custom_load": false,
        "input_size": [
            3,
            320,
            320
        ],
        "fixed_input_size": false,
        "interpolation": "bicubic",
        "crop_pct": 1.0,
        "crop_mode": "center",
        "mean": [
            0.48145466,
            0.4578275,
            0.40821073
        ],
        "std": [
            0.26862954,
            0.26130258,
            0.27577711
        ],
        "num_classes": 1000,
        "pool_size": [
            10,
            10
        ],
        "first_conv": "stem.0",
        "classifier": "head.fc"
    }
}
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:de8e8a815cd1edc8e812d93687d43f14eefb2ee42afa5736ea980dcb91584431
size 800547534
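
If the weights are needed outside of `timm.create_model`, the safetensors checkpoint can be loaded directly; a minimal sketch, assuming `huggingface_hub` and `safetensors` are installed (the `head.fc.weight` key name is taken from the `classifier` entry in `config.json` above):

```python
from huggingface_hub import hf_hub_download
from safetensors.torch import load_file

path = hf_hub_download(
    'timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320',
    'model.safetensors',
)
state_dict = load_file(path)
print(len(state_dict), 'tensors')
print(state_dict['head.fc.weight'].shape)  # classifier weights, (1000, 1536)
```
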
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:1df70194ec4c8358deac1c2484fd94b21d29c9bdb7f9e707613d75d4088a4e28
size 800642869