12-classifier-finetuned-padchest

This model is a fine-tuned version of nickmuchi/vit-finetuned-chest-xray-pneumonia, trained on an imagefolder-format dataset. It achieves the following results on the evaluation set:

  • Loss: 0.9215
  • F1: 0.7424
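The card reports a single F1 score without saying which averaging was used; for a multi-class chest-X-ray classifier, macro averaging is one common choice. As a hedged illustration (not necessarily the metric configuration used here), a minimal pure-Python sketch of macro-averaged F1:

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 scores, averaged with equal class weight."""
    classes = sorted(set(y_true) | set(y_pred))
    scores = []
    for c in classes:
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        scores.append(f1)
    return sum(scores) / len(scores)

# Toy labels, purely for illustration:
print(round(macro_f1([0, 0, 1, 1, 2], [0, 1, 1, 1, 2]), 4))  # → 0.8222
```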

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 128
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 100
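As a sanity check, the hyperparameters above are mutually consistent with the step counts in the results table: 18 optimizer steps per epoch over 100 epochs gives 1800 total steps, and a warmup ratio of 0.1 corresponds to 180 warmup steps. A minimal sketch of the arithmetic (assuming training on a single device, so the effective batch is just batch size × accumulation steps):

```python
# How the listed hyperparameters combine (single-device assumption).
per_device_batch = 32      # train_batch_size
grad_accum = 4             # gradient_accumulation_steps
effective_batch = per_device_batch * grad_accum  # total_train_batch_size: 128

steps_per_epoch = 18       # from the first row of the training-results table
num_epochs = 100
total_steps = steps_per_epoch * num_epochs       # matches the final Step column: 1800
warmup_steps = int(0.1 * total_steps)            # lr_scheduler_warmup_ratio 0.1 → 180
```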

Training results

Training Loss Epoch Step Validation Loss F1
2.0498 1.0 18 1.9843 0.2451
1.9376 2.0 36 1.8429 0.2757
1.7541 3.0 54 1.7097 0.2984
1.6052 4.0 72 1.5666 0.4007
1.4372 5.0 90 1.4392 0.4857
1.3696 6.0 108 1.3127 0.4894
1.2546 7.0 126 1.2461 0.5015
1.1526 8.0 144 1.1999 0.5683
1.092 9.0 162 1.1166 0.5704
1.0166 10.0 180 1.0568 0.6253
0.9753 11.0 198 1.0377 0.6055
0.939 12.0 216 0.9584 0.6535
0.916 13.0 234 0.9181 0.7092
0.8834 14.0 252 0.9164 0.7056
0.8126 15.0 270 0.9044 0.6914
0.7936 16.0 288 0.8730 0.7387
0.805 17.0 306 0.8627 0.7222
0.7146 18.0 324 0.8602 0.7136
0.7224 19.0 342 0.9320 0.6709
0.7335 20.0 360 0.9246 0.7081
0.6566 21.0 378 0.8585 0.7321
0.6451 22.0 396 0.8339 0.7341
0.6864 23.0 414 0.8402 0.7305
0.6683 24.0 432 0.8399 0.7450
0.6256 25.0 450 0.8209 0.7503
0.6041 26.0 468 0.8354 0.7461
0.6229 27.0 486 0.7940 0.7659
0.5954 28.0 504 0.8654 0.7383
0.5866 29.0 522 0.8525 0.7321
0.5895 30.0 540 0.8314 0.7510
0.5723 31.0 558 0.8777 0.7238
0.5319 32.0 576 0.8369 0.7498
0.5307 33.0 594 0.8801 0.7181
0.5285 34.0 612 0.8198 0.7420
0.4851 35.0 630 0.8202 0.7379
0.4827 36.0 648 0.8372 0.7481
0.4985 37.0 666 0.8032 0.7505
0.4714 38.0 684 0.8410 0.7390
0.4907 39.0 702 0.8401 0.7394
0.4752 40.0 720 0.8979 0.7253
0.4604 41.0 738 0.8654 0.7276
0.4287 42.0 756 0.9682 0.7113
0.4419 43.0 774 0.8762 0.7242
0.422 44.0 792 0.8998 0.7301
0.4432 45.0 810 0.9363 0.7024
0.4178 46.0 828 0.8751 0.7404
0.3901 47.0 846 0.8387 0.7432
0.4066 48.0 864 0.9137 0.7184
0.3919 49.0 882 0.8873 0.7234
0.4027 50.0 900 0.8805 0.7358
0.3593 51.0 918 0.8617 0.7332
0.3774 52.0 936 0.8781 0.7354
0.364 53.0 954 0.8993 0.7225
0.3585 54.0 972 0.9047 0.7293
0.3539 55.0 990 0.8719 0.7462
0.3224 56.0 1008 0.8578 0.7632
0.3486 57.0 1026 0.8934 0.7384
0.3359 58.0 1044 0.8853 0.7428
0.288 59.0 1062 0.8655 0.7466
0.297 60.0 1080 0.8850 0.7394
0.2875 61.0 1098 0.9405 0.7247
0.3267 62.0 1116 0.9057 0.7222
0.2825 63.0 1134 0.9186 0.7413
0.3129 64.0 1152 0.9200 0.7409
0.3264 65.0 1170 0.9506 0.7404
0.3079 66.0 1188 0.9671 0.7176
0.2915 67.0 1206 0.9504 0.7417
0.2797 68.0 1224 0.9254 0.7424
0.2496 69.0 1242 0.8910 0.7433
0.3063 70.0 1260 0.9178 0.7292
0.2626 71.0 1278 0.9140 0.7415
0.2552 72.0 1296 0.9249 0.7333
0.2655 73.0 1314 0.9000 0.7508
0.2797 74.0 1332 0.8777 0.7400
0.2678 75.0 1350 0.9043 0.7357
0.2464 76.0 1368 0.9432 0.7258
0.2789 77.0 1386 0.9355 0.7356
0.2617 78.0 1404 0.9354 0.7333
0.2381 79.0 1422 0.8852 0.7545
0.2573 80.0 1440 0.9500 0.7384
0.2429 81.0 1458 0.9095 0.7470
0.2513 82.0 1476 0.9898 0.7272
0.2422 83.0 1494 0.9237 0.7487
0.2476 84.0 1512 0.9146 0.7505
0.2399 85.0 1530 0.9386 0.7345
0.2343 86.0 1548 0.9082 0.7414
0.2336 87.0 1566 0.9074 0.7491
0.2176 88.0 1584 0.9291 0.7359
0.2253 89.0 1602 0.9334 0.7331
0.2244 90.0 1620 0.9364 0.7412
0.2215 91.0 1638 0.9617 0.7269
0.2049 92.0 1656 0.9155 0.7562
0.2238 93.0 1674 0.9206 0.7517
0.1761 94.0 1692 0.9312 0.7402
0.2025 95.0 1710 0.9287 0.7444
0.214 96.0 1728 0.9215 0.7444
0.2493 97.0 1746 0.9268 0.7489
0.2414 98.0 1764 0.9190 0.7477
0.1971 99.0 1782 0.9221 0.7451
0.2015 100.0 1800 0.9215 0.7424
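Note that the final checkpoint (epoch 100, F1 0.7424) is not the strongest on the validation set: epoch 27 shows both the lowest validation loss (0.7940) and the highest F1 (0.7659), and validation loss drifts upward afterwards, suggesting overfitting in the later epochs. The card does not state whether best-checkpoint selection was used; a minimal sketch of selecting a checkpoint from a few rows copied from the table above:

```python
# A few (epoch, val_loss, f1) rows taken verbatim from the table above.
rows = [
    (25, 0.8209, 0.7503),
    (26, 0.8354, 0.7461),
    (27, 0.7940, 0.7659),
    (28, 0.8654, 0.7383),
]

best_by_f1 = max(rows, key=lambda r: r[2])    # highest validation F1
best_by_loss = min(rows, key=lambda r: r[1])  # lowest validation loss
print(best_by_f1[0], best_by_loss[0])         # → 27 27
```

With the Hugging Face Trainer, this is what `load_best_model_at_end=True` automates; whether it was enabled here is not recorded in the card.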

Framework versions

  • Transformers 4.28.0.dev0
  • Pytorch 2.0.0+cu117
  • Datasets 2.19.0
  • Tokenizers 0.12.1