Update README.md
The ViT model was pretrained on a dataset consisting of 14 million images and 21k classes ([ImageNet-21k](http://www.image-net.org/)).

More information on the base model can be found here: [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k)

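As a quick sketch (an illustration added here, not part of the base model card): the checkpoint name `vit-base-patch16-224` encodes the input resolution (224) and patch size (16), which together determine the token sequence length the transformer processes.

```python
# Illustration: how the checkpoint name (patch16, 224) translates into
# the token sequence the ViT encoder sees.
image_size = 224   # input resolution, from "vit-base-patch16-224"
patch_size = 16    # each image is split into 16x16 pixel patches

patches_per_side = image_size // patch_size  # 14 patches per side
num_patches = patches_per_side ** 2          # 196 patch tokens
seq_len = num_patches + 1                    # +1 for the [CLS] token

print(num_patches, seq_len)  # → 196 197
```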
## How to use this Model

(Quick snippet that works on Google Colab; comment out the pip install for local use if you already have transformers installed.)

```python
!pip install transformers --quiet