---
license: other
license_name: stem.ai.mtl
license_link: LICENSE
tags:
- vision
- image-classification
- STEM-AI-mtl/City_map
- Google
- ViT
- STEM-AI-mtl
datasets:
- STEM-AI-mtl/City_map
widget:
- image: https://cdn.britannica.com/50/69550-050-B9DA3DCA/Central-New-York-City-borough-Manhattan-Park.jpg
  output:
    text: NYC
metrics:
- accuracy
---
|
|
|
# The fine-tuned ViT model that beats [Google's state-of-the-art model](https://huggingface.co/google/vit-base-patch16-224) and OpenAI's famous GPT-4
|
|
|
A fine-tuned image-classification model that identifies which city's map is depicted in an input image.
|
|
|
The Vision Transformer (ViT) base model is a transformer encoder model (BERT-like) pretrained on a large collection of images in a supervised fashion, namely ImageNet-21k, at a resolution of 224x224 pixels. Next, the model was fine-tuned on ImageNet (also referred to as ILSVRC2012), a dataset comprising 1 million images and 1,000 classes, also at a resolution of 224x224 pixels.
|
|
|
|
|
|
|
### How to use
|
|
|
[Inference script](https://github.com/STEM-ai/Vision/raw/7d92c8daa388eb74e8c336f2d0d3942722fec3c6/ViT_inference.py)
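
The linked script is the authoritative reference. Below is a minimal sketch of the same kind of inference with the Transformers library; the `model_id` is a placeholder for this repository's Hub id.

```python
from transformers import ViTImageProcessor, ViTForImageClassification
from PIL import Image
import torch

model_id = "STEM-AI-mtl/..."  # placeholder: use this repository's Hub id
processor = ViTImageProcessor.from_pretrained(model_id)
model = ViTForImageClassification.from_pretrained(model_id)

image = Image.open("city_map.png").convert("RGB")  # any city-map image
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index back to its city label
predicted_label = model.config.id2label[logits.argmax(-1).item()]
print(predicted_label)  # e.g. "NYC"
```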
|
|
|
For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/vit.html#).
|
|
|
## Training data
|
|
|
This model was obtained by fine-tuning [Google's ViT-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the [STEM-AI-mtl/City_map dataset](https://huggingface.co/datasets/STEM-AI-mtl/City_map), which contains over 600 images of maps of 45 different cities around the world.
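
The dataset can be inspected with the `datasets` library. A short sketch is shown below; the split and column names are assumptions based on a standard image-classification schema.

```python
from datasets import load_dataset

ds = load_dataset("STEM-AI-mtl/City_map")
print(ds)                       # available splits and number of examples
print(ds["train"].features)     # assumed 'image' / 'label' columns
print(ds["train"][0]["label"])  # integer class id for one of the 45 cities
```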
|
|
|
## Training procedure
|
|
|
Fine-tuning was performed on [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) using a 4 GB NVIDIA GTX 1650 GPU.
|
|
|
[Training notebook](https://github.com/STEM-ai/Vision/raw/7d92c8daa388eb74e8c336f2d0d3942722fec3c6/Trainer_ViT.ipynb)
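
The linked notebook is the authoritative reference; the following is a minimal sketch of a comparable `Trainer` setup. Only the learning rate of 1e-3 is reported below; the batch size, epoch count, and dataset column names are assumptions.

```python
import torch
from datasets import load_dataset
from transformers import (Trainer, TrainingArguments,
                          ViTForImageClassification, ViTImageProcessor)

dataset = load_dataset("STEM-AI-mtl/City_map")      # assumes a 'train' split
labels = dataset["train"].features["label"].names   # assumed 'label' column

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224")
model = ViTForImageClassification.from_pretrained(
    "google/vit-base-patch16-224",
    num_labels=len(labels),
    id2label={i: l for i, l in enumerate(labels)},
    label2id={l: i for i, l in enumerate(labels)},
    ignore_mismatched_sizes=True,  # swap the 1,000-class ImageNet head for a 45-class one
)

def transform(batch):
    # Resize/normalize PIL images into the 224x224 pixel values ViT expects
    inputs = processor([img.convert("RGB") for img in batch["image"]], return_tensors="pt")
    inputs["labels"] = batch["label"]
    return inputs

prepared = dataset.with_transform(transform)

def collate_fn(examples):
    return {
        "pixel_values": torch.stack([ex["pixel_values"] for ex in examples]),
        "labels": torch.tensor([ex["labels"] for ex in examples]),
    }

args = TrainingArguments(
    output_dir="vit-city-map",
    learning_rate=1e-3,             # reported below as the best-performing value
    per_device_train_batch_size=8,  # assumption, constrained by the 4 GB GPU
    num_train_epochs=3,             # assumption
    remove_unused_columns=False,    # keep the 'image' column for the on-the-fly transform
)

trainer = Trainer(model=model, args=args,
                  train_dataset=prepared["train"], data_collator=collate_fn)
trainer.train()
```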
|
|
|
## Training evaluation results
|
|
|
The most accurate model was obtained with a learning rate of 1e-3. Training quality was evaluated on the training dataset and yielded the following metrics:
|
|
|
- eval_loss: 1.3691096305847168
- eval_accuracy: 0.6666666666666666
- eval_runtime: 13.0277 s
- eval_samples_per_second: 4.606
- eval_steps_per_second: 0.154
- epoch: 2.82
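
A sketch of how an accuracy metric of this kind can be attached to the `Trainer` setup sketched above; the use of the `evaluate` library here is an assumption, and the linked notebook remains the reference.

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes at evaluation time
    logits, labels = eval_pred
    return accuracy.compute(predictions=np.argmax(logits, axis=-1), references=labels)

# Pass compute_metrics=compute_metrics when constructing the Trainer, then:
# metrics = trainer.evaluate(eval_dataset=prepared["train"])  # evaluated on the training split, as above
```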
|
|
|
|
|
## Model Card Authors
|
|
|
STEM.AI: stem.ai.mtl@gmail.com\
|
[William Harbec](https://www.linkedin.com/in/william-harbec-56a262248/) |