vit-base-chest-xray

This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the trpakov/chest-xray-classification dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0856
  • Accuracy: 0.9742
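
The card does not include a usage example; a minimal inference sketch using the transformers image-classification pipeline might look like the following. The repo id and image path are placeholders, not taken from the original card.

```python
# Hedged sketch: the repo id and image file below are hypothetical placeholders.
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="your-username/vit-base-chest-xray",  # placeholder repo id
)

# Accepts a local path or URL to a chest X-ray image.
predictions = classifier("chest_xray.jpg")
print(predictions)  # e.g. [{"label": "...", "score": ...}, ...]
```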

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0002
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 4
  • mixed_precision_training: Native AMP
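
The original training script is not part of the card; the sketch below shows how the hyperparameters listed above might map onto transformers TrainingArguments. The output directory is an assumption, and the Adam betas/epsilon above match the optimizer defaults.

```python
# Hedged sketch, not the original training configuration.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="vit-base-chest-xray",   # assumed output directory
    learning_rate=2e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=4,
    fp16=True,                          # Native AMP mixed-precision training
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults.
)
```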

Training results

| Training Loss | Epoch  | Step | Validation Loss | Accuracy |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.1891        | 0.1307 | 100  | 0.1028          | 0.9665   |
| 0.2123        | 0.2614 | 200  | 0.1254          | 0.9562   |
| 0.0536        | 0.3922 | 300  | 0.1142          | 0.9691   |
| 0.0799        | 0.5229 | 400  | 0.1173          | 0.9648   |
| 0.0537        | 0.6536 | 500  | 0.0856          | 0.9742   |
| 0.0911        | 0.7843 | 600  | 0.2005          | 0.9425   |
| 0.1027        | 0.9150 | 700  | 0.0869          | 0.9708   |
| 0.1011        | 1.0458 | 800  | 0.1063          | 0.9631   |
| 0.0717        | 1.1765 | 900  | 0.1424          | 0.9588   |
| 0.0605        | 1.3072 | 1000 | 0.1525          | 0.9648   |
| 0.0573        | 1.4379 | 1100 | 0.0970          | 0.9700   |
| 0.024         | 1.5686 | 1200 | 0.0867          | 0.9751   |
| 0.0056        | 1.6993 | 1300 | 0.0888          | 0.9760   |
| 0.0051        | 1.8301 | 1400 | 0.1054          | 0.9768   |
| 0.063         | 1.9608 | 1500 | 0.1896          | 0.9571   |
| 0.002         | 2.0915 | 1600 | 0.1886          | 0.9588   |
| 0.005         | 2.2222 | 1700 | 0.1184          | 0.9734   |
| 0.0083        | 2.3529 | 1800 | 0.1084          | 0.9760   |
| 0.0013        | 2.4837 | 1900 | 0.0903          | 0.9777   |
| 0.0298        | 2.6144 | 2000 | 0.1023          | 0.9734   |
| 0.0008        | 2.7451 | 2100 | 0.1104          | 0.9768   |
| 0.0011        | 2.8758 | 2200 | 0.1128          | 0.9785   |
| 0.0006        | 3.0065 | 2300 | 0.1395          | 0.9734   |
| 0.0059        | 3.1373 | 2400 | 0.1419          | 0.9725   |
| 0.0005        | 3.2680 | 2500 | 0.1335          | 0.9777   |
| 0.0005        | 3.3987 | 2600 | 0.1249          | 0.9768   |
| 0.0007        | 3.5294 | 2700 | 0.1157          | 0.9777   |
| 0.0005        | 3.6601 | 2800 | 0.1202          | 0.9785   |
| 0.001         | 3.7908 | 2900 | 0.1239          | 0.9777   |
| 0.0004        | 3.9216 | 3000 | 0.1231          | 0.9768   |

Framework versions

  • Transformers 4.40.0
  • PyTorch 2.2.1+cu121
  • Datasets 2.19.0
  • Tokenizers 0.19.1