
5-classifier-finetuned-padchest

This model is a fine-tuned version of nickmuchi/vit-finetuned-chest-xray-pneumonia on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7307
  • Accuracy: 0.7644
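
The snippet below is a minimal usage sketch and is not part of the original card: it assumes the checkpoint has been pushed to the Hugging Face Hub and loads it with the Transformers image-classification pipeline. The repository id and the image path are placeholders.

```python
# Minimal usage sketch (assumption: the checkpoint is available on the Hugging Face Hub;
# replace the repository id and the image path with your own).
from transformers import pipeline

classifier = pipeline(
    "image-classification",
    model="your-username/5-classifier-finetuned-padchest",  # placeholder repo id
)

# Classify a single chest X-ray image (placeholder path).
predictions = classifier("example_chest_xray.png")
for p in predictions:
    print(f"{p['label']}: {p['score']:.4f}")
```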

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an illustrative TrainingArguments sketch follows the list):

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 50
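
As a rough reference, the hyperparameters above map onto the following TrainingArguments configuration. This is a hedged sketch assuming the standard Hugging Face Trainer was used, not the original training script; the Adam betas and epsilon listed above match the library defaults, so they are not set explicitly.

```python
# Hedged sketch of a TrainingArguments configuration matching the listed hyperparameters
# (illustrative only; the original training script is not included in this card).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="5-classifier-finetuned-padchest",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # effective train batch size: 32 * 4 = 128
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=50,
    evaluation_strategy="epoch",  # assumption: one evaluation per epoch, as in the results table below
    logging_strategy="epoch",
)
```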

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0492 | 1.0 | 16 | 1.9604 | 0.3142 |
| 1.8545 | 2.0 | 32 | 1.7361 | 0.4079 |
| 1.724 | 3.0 | 48 | 1.5064 | 0.5166 |
| 1.4761 | 4.0 | 64 | 1.3116 | 0.5710 |
| 1.3215 | 5.0 | 80 | 1.2030 | 0.6344 |
| 1.2325 | 6.0 | 96 | 1.0904 | 0.6254 |
| 1.124 | 7.0 | 112 | 1.0145 | 0.6677 |
| 1.0516 | 8.0 | 128 | 0.9864 | 0.6707 |
| 0.9858 | 9.0 | 144 | 0.9372 | 0.6767 |
| 0.9518 | 10.0 | 160 | 0.9161 | 0.6949 |
| 0.9612 | 11.0 | 176 | 0.8916 | 0.6949 |
| 0.8994 | 12.0 | 192 | 0.8579 | 0.7069 |
| 0.8194 | 13.0 | 208 | 0.8281 | 0.7100 |
| 0.8141 | 14.0 | 224 | 0.8064 | 0.7341 |
| 0.8056 | 15.0 | 240 | 0.8272 | 0.7221 |
| 0.7953 | 16.0 | 256 | 0.7751 | 0.7251 |
| 0.7679 | 17.0 | 272 | 0.7638 | 0.7523 |
| 0.7262 | 18.0 | 288 | 0.7867 | 0.7432 |
| 0.7302 | 19.0 | 304 | 0.7835 | 0.7311 |
| 0.7237 | 20.0 | 320 | 0.7698 | 0.7492 |
| 0.6496 | 21.0 | 336 | 0.7618 | 0.7523 |
| 0.6708 | 22.0 | 352 | 0.7595 | 0.7492 |
| 0.6719 | 23.0 | 368 | 0.7455 | 0.7553 |
| 0.6361 | 24.0 | 384 | 0.7993 | 0.7221 |
| 0.6125 | 25.0 | 400 | 0.7372 | 0.7432 |
| 0.6392 | 26.0 | 416 | 0.7321 | 0.7613 |
| 0.6175 | 27.0 | 432 | 0.7310 | 0.7704 |
| 0.5613 | 28.0 | 448 | 0.7244 | 0.7462 |
| 0.5831 | 29.0 | 464 | 0.7535 | 0.7523 |
| 0.5892 | 30.0 | 480 | 0.7299 | 0.7583 |
| 0.5259 | 31.0 | 496 | 0.7211 | 0.7674 |
| 0.5553 | 32.0 | 512 | 0.7564 | 0.7341 |
| 0.5497 | 33.0 | 528 | 0.7233 | 0.7704 |
| 0.5699 | 34.0 | 544 | 0.7314 | 0.7523 |
| 0.5263 | 35.0 | 560 | 0.7334 | 0.7583 |
| 0.4953 | 36.0 | 576 | 0.6991 | 0.7674 |
| 0.5029 | 37.0 | 592 | 0.7191 | 0.7674 |
| 0.5253 | 38.0 | 608 | 0.7233 | 0.7704 |
| 0.4657 | 39.0 | 624 | 0.7204 | 0.7644 |
| 0.498 | 40.0 | 640 | 0.7236 | 0.7674 |
| 0.4768 | 41.0 | 656 | 0.7242 | 0.7734 |
| 0.5016 | 42.0 | 672 | 0.7405 | 0.7553 |
| 0.4774 | 43.0 | 688 | 0.7363 | 0.7674 |
| 0.4859 | 44.0 | 704 | 0.7208 | 0.7734 |
| 0.4628 | 45.0 | 720 | 0.7393 | 0.7674 |
| 0.4515 | 46.0 | 736 | 0.7078 | 0.7734 |
| 0.4297 | 47.0 | 752 | 0.7287 | 0.7674 |
| 0.4023 | 48.0 | 768 | 0.7138 | 0.7734 |
| 0.4404 | 49.0 | 784 | 0.7272 | 0.7674 |
| 0.4236 | 50.0 | 800 | 0.7307 | 0.7644 |

Framework versions

  • Transformers 4.28.0.dev0
  • Pytorch 2.0.0+cu117
  • Datasets 2.18.0
  • Tokenizers 0.13.3