cs-giung committed
Commit dc516b3
1 Parent(s): 92dc8f3

Update README.md

Files changed (1)
  1. README.md +1 -1
README.md CHANGED
@@ -5,4 +5,4 @@ license: apache-2.0
 # Vision Transformer
 
 Vision Transformer (ViT) model pre-trained on ImageNet-21k (14 million images, 21,843 classes) at resolution 224x224. It was introduced in the paper [An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929) and further enhanced in the follow-up paper [How to train your ViT? Data, Augmentation, and Regularization in Vision Transformers](https://arxiv.org/abs/2106.10270).
-The weights were converted from the `Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0.npz.npz` file in [GCS buckets](https://console.cloud.google.com/storage/browser/vit_models/augreg/) presented in the [original repository](https://github.com/google-research/vision_transformer).
+The weights were converted from the `Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0.npz` file in [GCS buckets](https://console.cloud.google.com/storage/browser/vit_models/augreg/) presented in the [original repository](https://github.com/google-research/vision_transformer).
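
For context, a minimal sketch of inspecting the corrected checkpoint with NumPy, assuming the `.npz` file has been downloaded locally from the GCS bucket linked in the diff (the local path below is hypothetical):

```python
import numpy as np

# Hypothetical local copy of the AugReg ViT-Ti/16 checkpoint, downloaded from
# the GCS bucket referenced above (vit_models/augreg/).
CKPT_PATH = "Ti_16-i21k-300ep-lr_0.001-aug_none-wd_0.03-do_0.0-sd_0.0.npz"

# An .npz archive loads as a lazy NpzFile mapping parameter names to arrays.
params = np.load(CKPT_PATH)

# Print each parameter's name and shape, e.g. to verify the checkpoint
# before converting it to another framework's format.
for name in sorted(params.files):
    print(name, params[name].shape)
```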