---
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: vit-base-patch16-224-finetuned-teeth_dataset
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9347826086956522
---
# vit-base-patch16-224-finetuned-teeth_dataset
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on a local image-classification dataset loaded with the `imagefolder` builder (referred to as `teeth_dataset` in the model name).
It achieves the following results on the evaluation set:
- Loss: 1.1736
- Accuracy: 0.9348
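The checkpoint can be used like any `transformers` image-classification model. A minimal inference sketch follows; the repo id is assumed from the model name in this card, and the input image path is hypothetical — substitute a local checkpoint path or your own image as needed.

```python
from transformers import pipeline
from PIL import Image

# Repo id assumed from the model name in this card; use a local
# checkpoint path instead if the weights are not on the Hub.
classifier = pipeline(
    "image-classification",
    model="Dhyey8/vit-base-patch16-224-finetuned-teeth_dataset",
)

image = Image.open("example_tooth.jpg")  # hypothetical input image
print(classifier(image))  # list of {"label": ..., "score": ...} dicts
```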
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
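The card only records that the data was loaded with the `imagefolder` builder (one sub-directory per class). A minimal sketch of how such a dataset is typically prepared — the directory path and the train/validation split below are assumptions, not details taken from this card:

```python
from datasets import load_dataset

# Hypothetical directory layout: teeth_dataset/<class_name>/<image files>
dataset = load_dataset("imagefolder", data_dir="teeth_dataset")

# The metrics in this card are reported against a held-out evaluation set;
# if the folder only yields a "train" split, one way to carve one out is:
splits = dataset["train"].train_test_split(test_size=0.1, seed=42)
train_ds, eval_ds = splits["train"], splits["test"]
```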
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (a sketch of the equivalent `TrainingArguments` follows the list):
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
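A hedged sketch of how these settings map onto `transformers.TrainingArguments`; the output directory and the evaluation/save strategies are assumptions, since the training script is not part of this card:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-patch16-224-finetuned-teeth_dataset",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,   # 32 * 4 = effective batch size of 128
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    # The Adam betas/epsilon listed above are the TrainingArguments defaults
    # (adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8).
    evaluation_strategy="epoch",     # assumed; matches the per-epoch results table
    save_strategy="epoch",           # assumed
)
```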
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.8 | 3 | 4.6533 | 0.0087 |
| No log | 1.87 | 7 | 4.5848 | 0.0065 |
| 4.6048 | 2.93 | 11 | 4.4608 | 0.0304 |
| 4.6048 | 4.0 | 15 | 4.2857 | 0.0848 |
| 4.6048 | 4.8 | 18 | 4.1470 | 0.1152 |
| 4.2716 | 5.87 | 22 | 3.9641 | 0.2043 |
| 4.2716 | 6.93 | 26 | 3.7705 | 0.3152 |
| 3.7404 | 8.0 | 30 | 3.5809 | 0.4196 |
| 3.7404 | 8.8 | 33 | 3.4766 | 0.4522 |
| 3.7404 | 9.87 | 37 | 3.2981 | 0.5087 |
| 3.1589 | 10.93 | 41 | 3.1132 | 0.6087 |
| 3.1589 | 12.0 | 45 | 2.9494 | 0.6696 |
| 3.1589 | 12.8 | 48 | 2.8361 | 0.6783 |
| 2.6384 | 13.87 | 52 | 2.6521 | 0.7348 |
| 2.6384 | 14.93 | 56 | 2.4943 | 0.7587 |
| 2.1342 | 16.0 | 60 | 2.3422 | 0.7848 |
| 2.1342 | 16.8 | 63 | 2.2327 | 0.8109 |
| 2.1342 | 17.87 | 67 | 2.0834 | 0.8261 |
| 1.714 | 18.93 | 71 | 1.9834 | 0.8565 |
| 1.714 | 20.0 | 75 | 1.8932 | 0.8674 |
| 1.714 | 20.8 | 78 | 1.8618 | 0.8587 |
| 1.4427 | 21.87 | 82 | 1.6974 | 0.8891 |
| 1.4427 | 22.93 | 86 | 1.6663 | 0.8891 |
| 1.1858 | 24.0 | 90 | 1.6014 | 0.8848 |
| 1.1858 | 24.8 | 93 | 1.5112 | 0.9043 |
| 1.1858 | 25.87 | 97 | 1.4732 | 0.9109 |
| 1.0222 | 26.93 | 101 | 1.4304 | 0.9065 |
| 1.0222 | 28.0 | 105 | 1.3915 | 0.9130 |
| 1.0222 | 28.8 | 108 | 1.3509 | 0.9217 |
| 0.8306 | 29.87 | 112 | 1.3054 | 0.9283 |
| 0.8306 | 30.93 | 116 | 1.2870 | 0.9261 |
| 0.7391 | 32.0 | 120 | 1.2645 | 0.9283 |
| 0.7391 | 32.8 | 123 | 1.2454 | 0.9261 |
| 0.7391 | 33.87 | 127 | 1.2395 | 0.9283 |
| 0.6971 | 34.93 | 131 | 1.2076 | 0.9304 |
| 0.6971 | 36.0 | 135 | 1.1821 | 0.9326 |
| 0.6971 | 36.8 | 138 | 1.1736 | 0.9348 |
| 0.6758 | 37.87 | 142 | 1.1671 | 0.9326 |
| 0.6758 | 38.93 | 146 | 1.1656 | 0.9348 |
| 0.6445 | 40.0 | 150 | 1.1649 | 0.9348 |
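The accuracy column above is presumably produced by a `compute_metrics` callback passed to the `Trainer`. A minimal sketch of such a callback using the `evaluate` library — an assumption, since the actual training script is not included in this card:

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    """Convert logits to predicted class ids and score them against the labels."""
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```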
### Framework versions
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2