timm / convnext_xxlarge.clip_laion2b_soup_ft_in12k

Image Classification · timm · PyTorch · Safetensors

rwightman committed
Commit f2a8ebd · Parent(s): 95e5083

Files changed (4)
  1. README.md +258 -0
  2. config.json +35 -0
  3. model.safetensors +3 -0
  4. pytorch_model.bin +3 -0
README.md ADDED
@@ -0,0 +1,258 @@
---
tags:
- image-classification
- timm
library_name: timm
license: apache-2.0
datasets:
- imagenet-12k
- laion-2b
---
# Model card for convnext_xxlarge.clip_laion2b_soup_ft_in12k

A ConvNeXt image classification model. CLIP image tower weights pretrained in [OpenCLIP](https://github.com/mlfoundations/open_clip) on LAION-2B and fine-tuned on ImageNet-12k by Ross Wightman.

Please see the related OpenCLIP model cards for more details on the pretraining:
* https://huggingface.co/laion/CLIP-convnext_xxlarge-laion2B-s34B-b82K-augreg-soup
* https://huggingface.co/laion/CLIP-convnext_large_d.laion2B-s26B-b102K-augreg
* https://huggingface.co/laion/CLIP-convnext_base_w-laion2B-s13B-b82K-augreg
* https://huggingface.co/laion/CLIP-convnext_base_w_320-laion_aesthetic-s13B-b82K

## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 879.7
  - GMACs: 198.1
  - Activations (M): 124.5
  - Image size: 256 x 256
- **Papers:**
  - OpenCLIP: https://github.com/mlfoundations/open_clip
  - LAION-5B: An open large-scale dataset for training next generation image-text models: https://arxiv.org/abs/2210.08402
  - Learning Transferable Visual Models From Natural Language Supervision: https://arxiv.org/abs/2103.00020
  - A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/mlfoundations/open_clip
- **Pretrain Dataset:** LAION-2B
- **Dataset:** ImageNet-12k
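
The headline stats can be checked locally without downloading the ~3.5 GB checkpoint; a minimal sketch (passing the ImageNet-12k head size explicitly is an assumption made so the parameter count matches the fine-tuned configuration):

```python
import timm

# build the architecture only; pass the 11821-class ImageNet-12k head so the
# parameter count matches the checkpoint configuration
model = timm.create_model(
    'convnext_xxlarge.clip_laion2b_soup_ft_in12k',
    pretrained=False,
    num_classes=11821,
)
print(f'params (M): {sum(p.numel() for p in model.parameters()) / 1e6:.1f}')  # ~879.7

data_config = timm.data.resolve_model_data_config(model)
print(data_config['input_size'])  # (3, 256, 256)
```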

## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model('convnext_xxlarge.clip_laion2b_soup_ft_in12k', pretrained=True)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
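
For batched, throughput-oriented inference the same forward pass can be wrapped in `torch.inference_mode()` and, on a CUDA device, autocast. A minimal sketch continuing from the snippet above (`images` is assumed to be a list of PIL images, and a CUDA GPU is assumed to be available):

```python
import torch

# preprocess and stack a list of PIL images into one batch
batch = torch.stack([transforms(im) for im in images])

model = model.cuda()
with torch.inference_mode(), torch.autocast('cuda', dtype=torch.float16):
    logits = model(batch.cuda())

probs = logits.float().softmax(dim=1)  # (len(images), 11821) class probabilities
```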

### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'convnext_xxlarge.clip_laion2b_soup_ft_in12k',
    pretrained=True,
    features_only=True,
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # unsqueeze single image into batch of 1

for o in output:
    # print shape of each feature map in output
    # e.g.:
    #  torch.Size([1, 384, 64, 64])
    #  torch.Size([1, 768, 32, 32])
    #  torch.Size([1, 1536, 16, 16])
    #  torch.Size([1, 3072, 8, 8])
    print(o.shape)
```
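
`features_only` also accepts `out_indices` to return only selected stages, and the channel counts / reduction factors can be read from `model.feature_info` rather than hard-coded. A small sketch:

```python
import timm

# only the deepest two stages (1/16 and 1/32 resolution feature maps)
model = timm.create_model(
    'convnext_xxlarge.clip_laion2b_soup_ft_in12k',
    pretrained=True,
    features_only=True,
    out_indices=(2, 3),
)
print(model.feature_info.channels())   # e.g. [1536, 3072]
print(model.feature_info.reduction())  # e.g. [16, 32]
```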

### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(urlopen(
    'https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'
))

model = timm.create_model(
    'convnext_xxlarge.clip_laion2b_soup_ft_in12k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()

# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # output is (batch_size, num_features) shaped tensor

# or equivalently (without needing to set num_classes=0)

output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, a (1, 3072, 8, 8) shaped tensor

output = model.forward_head(output, pre_logits=True)
# output is a (1, num_features) shaped tensor
```
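
The pooled embeddings can be compared directly, e.g. with cosine similarity for image-to-image retrieval. A minimal sketch continuing from the snippet above (`img_a` and `img_b` are assumed PIL images; `model` was created with `num_classes=0`):

```python
import torch
import torch.nn.functional as F

with torch.inference_mode():
    emb_a = model(transforms(img_a).unsqueeze(0))  # (1, 3072) pooled embedding
    emb_b = model(transforms(img_b).unsqueeze(0))  # (1, 3072) pooled embedding

print(F.cosine_similarity(emb_a, emb_b).item())
```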

## Model Comparison
Explore the dataset and runtime metrics of this model in timm [model results](https://github.com/huggingface/pytorch-image-models/tree/main/results).

All timing numbers from eager mode PyTorch 1.13 on RTX 3090 w/ AMP.

| model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|------------------------------------------------------------------------------------------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
| [convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512) |88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
| [convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384) |88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
| [convnext_xxlarge.clip_laion2b_soup_ft_in1k](https://huggingface.co/timm/convnext_xxlarge.clip_laion2b_soup_ft_in1k) |88.612|98.704|256 |846.47 |198.09|124.45|122.45 |256 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_384) |88.312|98.578|384 |200.13 |101.11|126.74|196.84 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384) |88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
| [convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_soup_ft_in12k_in1k_320) |87.968|98.47 |320 |200.13 |70.21 |88.02 |283.42 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384) |87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
| [convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384) |87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
| [convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384) |87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
| [convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k) |87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
| [convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k) |87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k_384) |87.138|98.212|384 |88.59 |45.21 |84.49 |365.47 |256 |
| [convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k) |87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
| [convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384) |86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
| [convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k) |86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
| [convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k) |86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
| [convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384) |86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in12k_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in12k_in1k) |86.344|97.97 |256 |88.59 |20.09 |37.55 |816.14 |256 |
| [convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k) |86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
| [convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384) |86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
| [convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k) |86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
| [convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k) |85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
| [convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384) |85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
| [convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k) |85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
| [convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k) |85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
| [convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384) |85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384) |85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
| [convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k) |84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
| [convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k) |84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
| [convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k) |84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
| [convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k) |84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
| [convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384) |84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
| [convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k) |83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
| [convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k) |83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384) |83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
| [convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k) |83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
| [convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k) |82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
| [convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k) |82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
| [convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k) |82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
| [convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k) |82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
| [convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k) |82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
| [convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k) |82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
| [convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k) |81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
| [convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k) |80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
| [convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k) |80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
| [convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k) |80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
| [convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k) |79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
| [convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k) |79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
| [convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k) |78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
| [convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k) |77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
| [convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k) |77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
| [convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k) |76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
| [convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k) |75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
| [convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k) |75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |

## Citation
```bibtex
@software{ilharco_gabriel_2021_5143773,
  author       = {Ilharco, Gabriel and
                  Wortsman, Mitchell and
                  Wightman, Ross and
                  Gordon, Cade and
                  Carlini, Nicholas and
                  Taori, Rohan and
                  Dave, Achal and
                  Shankar, Vaishaal and
                  Namkoong, Hongseok and
                  Miller, John and
                  Hajishirzi, Hannaneh and
                  Farhadi, Ali and
                  Schmidt, Ludwig},
  title        = {OpenCLIP},
  month        = jul,
  year         = 2021,
  note         = {If you use this software, please cite it as below.},
  publisher    = {Zenodo},
  version      = {0.1},
  doi          = {10.5281/zenodo.5143773},
  url          = {https://doi.org/10.5281/zenodo.5143773}
}
```
```bibtex
@inproceedings{schuhmann2022laionb,
  title     = {{LAION}-5B: An open large-scale dataset for training next generation image-text models},
  author    = {Christoph Schuhmann and
               Romain Beaumont and
               Richard Vencu and
               Cade W Gordon and
               Ross Wightman and
               Mehdi Cherti and
               Theo Coombes and
               Aarush Katta and
               Clayton Mullis and
               Mitchell Wortsman and
               Patrick Schramowski and
               Srivatsa R Kundurthy and
               Katherine Crowson and
               Ludwig Schmidt and
               Robert Kaczmarczyk and
               Jenia Jitsev},
  booktitle = {Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track},
  year      = {2022},
  url       = {https://openreview.net/forum?id=M3Y74vmsMcY}
}
```
```bibtex
@misc{rw2019timm,
  author = {Ross Wightman},
  title = {PyTorch Image Models},
  year = {2019},
  publisher = {GitHub},
  journal = {GitHub repository},
  doi = {10.5281/zenodo.4414861},
  howpublished = {\url{https://github.com/huggingface/pytorch-image-models}}
}
```
```bibtex
@inproceedings{Radford2021LearningTV,
  title     = {Learning Transferable Visual Models From Natural Language Supervision},
  author    = {Alec Radford and Jong Wook Kim and Chris Hallacy and A. Ramesh and Gabriel Goh and Sandhini Agarwal and Girish Sastry and Amanda Askell and Pamela Mishkin and Jack Clark and Gretchen Krueger and Ilya Sutskever},
  booktitle = {ICML},
  year      = {2021}
}
```
```bibtex
@article{liu2022convnet,
  author  = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
  title   = {A ConvNet for the 2020s},
  journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year    = {2022},
}
```
config.json ADDED
@@ -0,0 +1,35 @@
{
  "architecture": "convnext_xxlarge",
  "num_classes": 11821,
  "num_features": 3072,
  "pretrained_cfg": {
    "tag": "clip_laion2b_soup_ft_in12k",
    "custom_load": false,
    "input_size": [
      3,
      256,
      256
    ],
    "fixed_input_size": false,
    "interpolation": "bicubic",
    "crop_pct": 1.0,
    "crop_mode": "center",
    "mean": [
      0.48145466,
      0.4578275,
      0.40821073
    ],
    "std": [
      0.26862954,
      0.26130258,
      0.27577711
    ],
    "num_classes": 11821,
    "pool_size": [
      8,
      8
    ],
    "first_conv": "stem.0",
    "classifier": "head.fc"
  }
}
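
The `pretrained_cfg` above fully specifies the eval preprocessing: bicubic interpolation, `crop_pct` 1.0 with a center crop at 256 x 256, and CLIP mean/std. A rough torchvision equivalent of what `timm.data.create_transform` builds from this config (a sketch, not the exact timm pipeline):

```python
from torchvision import transforms

# crop_pct = 1.0 -> resize the short side straight to 256, then center crop to 256
preprocess = transforms.Compose([
    transforms.Resize(256, interpolation=transforms.InterpolationMode.BICUBIC),
    transforms.CenterCrop(256),
    transforms.ToTensor(),
    transforms.Normalize(
        mean=[0.48145466, 0.4578275, 0.40821073],
        std=[0.26862954, 0.26130258, 0.27577711],
    ),
])
```
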
model.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:61715aeb39dede55467dbee956ca89de0d6e2cddde30888ecf110c1aed6bf6f1
size 3518935034
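
The entry above is a Git LFS pointer, not the weights themselves; the `oid sha256` can be used to verify a downloaded copy. A hedged sketch (the repo id `timm/convnext_xxlarge.clip_laion2b_soup_ft_in12k` is inferred from the model name, not stated in this commit):

```python
import hashlib
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id='timm/convnext_xxlarge.clip_laion2b_soup_ft_in12k',  # assumed repo id
    filename='model.safetensors',
)

h = hashlib.sha256()
with open(path, 'rb') as f:
    for chunk in iter(lambda: f.read(1 << 20), b''):
        h.update(chunk)

print(h.hexdigest())  # should match the oid sha256 in the pointer above
```
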
pytorch_model.bin ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:0d70198b8d3d01c44f644e9b9a8126a12cf00cfc204716768853ec3d37574dfc
size 3519039037