---
license: apache-2.0
tags:
- generated_from_trainer
base_model: google/vit-base-patch16-224-in21k
metrics:
- accuracy
model-index:
- name: vit-xray-pneumonia-classification
  results: []
---
# vit-xray-pneumonia-classification

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unspecified chest X-ray dataset for pneumonia classification.
It achieves the following results on the evaluation set:
- Loss: 0.0740
- Accuracy: 0.9734
## Model description

This is a Vision Transformer (ViT) image classifier fine-tuned to detect pneumonia in chest X-rays. The base checkpoint, [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k), is a ViT-Base model pretrained on ImageNet-21k at 224x224 resolution with 16x16 image patches.
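A minimal inference sketch using the `transformers` pipeline API; the repository id below is a placeholder for wherever this checkpoint is published:

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual location of this checkpoint.
classifier = pipeline(
    "image-classification",
    model="your-username/vit-xray-pneumonia-classification",
)

# Accepts a local file path, a PIL.Image, or a URL.
predictions = classifier("chest_xray.jpg")
print(predictions)  # e.g. [{"label": "...", "score": ...}, ...]
```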
## Intended uses & limitations

The model is intended for research and experimentation on pneumonia classification from chest X-ray images. It has not been clinically validated and should not be used as the sole basis for a medical diagnosis. Because the fine-tuning dataset is not documented, generalization across imaging equipment, acquisition protocols, and patient populations is unknown.
## Training and evaluation data

The fine-tuning and evaluation datasets are not documented. The metrics reported above were computed on the evaluation split used during training.
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 15
- mixed_precision_training: Native AMP
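As a rough guide, these settings map onto `transformers.TrainingArguments` as in the sketch below; the `output_dir` and surrounding script are assumptions, only the hyperparameter values come from the list above:

```python
from transformers import TrainingArguments

# Mirrors the hyperparameters listed above; output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="vit-xray-pneumonia-classification",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,  # 16 * 4 = 64 total train batch size
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=15,
    fp16=True,  # Native AMP mixed-precision training
)
```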
### Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.4843        | 0.9882  | 63   | 0.1954          | 0.9408   |
| 0.1986        | 1.9922  | 127  | 0.1483          | 0.9494   |
| 0.1694        | 2.9961  | 191  | 0.1316          | 0.9459   |
| 0.1368        | 4.0     | 255  | 0.1207          | 0.9554   |
| 0.1399        | 4.9882  | 318  | 0.1738          | 0.9296   |
| 0.1203        | 5.9922  | 382  | 0.0966          | 0.9631   |
| 0.1085        | 6.9961  | 446  | 0.0956          | 0.9631   |
| 0.1046        | 8.0     | 510  | 0.0952          | 0.9665   |
| 0.0883        | 8.9882  | 573  | 0.0990          | 0.9665   |
| 0.0773        | 9.9922  | 637  | 0.0896          | 0.9717   |
| 0.0815        | 10.9961 | 701  | 0.1084          | 0.9605   |
| 0.0793        | 12.0    | 765  | 0.0767          | 0.9742   |
| 0.0778        | 12.9882 | 828  | 0.0885          | 0.9691   |
| 0.0609        | 13.9922 | 892  | 0.0778          | 0.9708   |
| 0.0685        | 14.8235 | 945  | 0.0740          | 0.9734   |
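The per-epoch accuracy values above are the kind produced by a `compute_metrics` hook passed to the `Trainer`; a minimal sketch, assuming the standard `evaluate` library pattern (the actual training script is not included with this card):

```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # eval_pred is (logits, labels); take the argmax class per image.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```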
### Framework versions

- Transformers 4.40.1
- PyTorch 2.3.0
- Datasets 2.19.0
- Tokenizers 0.19.1