STEM-AI-mtl committed on
Commit
3c94648
1 Parent(s): 65c778f

Update README.md

Files changed (1)
  1. README.md +2 -3
README.md CHANGED
@@ -22,11 +22,11 @@ metrics:
 - accuracy
 ---
 
-# The fine-tuned ViT model that beats [Google's base model](https://huggingface.co/google/vit-base-patch16-224) and OpenAI's GPT4
+# The fine-tuned ViT model that beats [Google's state-of-the-art model](https://huggingface.co/google/vit-base-patch16-224) and OpenAI's famous GPT4
 
 Image-classification fine-tuned model that identifies which city map is illustrated from an image input.
 
-The Vision Transformer base model(ViT) is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
+The Vision Transformer (ViT) base model is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at resolution 224x224.
 
 
 
@@ -34,7 +34,6 @@ The Vision Transformer base model(ViT) is a transformer encoder model (BERT-like
 
 [Inference script](https://github.com/STEM-ai/Vision/raw/7d92c8daa388eb74e8c336f2d0d3942722fec3c6/ViT_inference.py)
 
-
 For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/vit.html#).
 
 ## Training data
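
The README links to an external ViT_inference.py script that is not reproduced in this diff. For reference, the sketch below shows what inference with a fine-tuned ViT classifier typically looks like using the standard transformers image-classification API; the model id "STEM-AI-mtl/ViT-city-maps" is a placeholder assumption, and the actual script may load the model differently.

```python
# Minimal inference sketch with the transformers ViT API.
# NOTE: "STEM-AI-mtl/ViT-city-maps" is a placeholder model id, not taken from
# the diff above -- substitute this repository's actual id.
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification

model_id = "STEM-AI-mtl/ViT-city-maps"  # placeholder; use the real repo id
processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

# Any city-map image; the processor resizes/normalizes it to 224x224.
image = Image.open("map.png").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring logit back to its class label.
predicted_class = model.config.id2label[logits.argmax(-1).item()]
print(predicted_class)
```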