---
license: apache-2.0
base_model: facebook/vit-msn-small
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: vit-msn-small-finetuned-alzheimers
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: train
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.996875
---

# vit-msn-small-finetuned-alzheimers

This model is a fine-tuned version of [facebook/vit-msn-small](https://huggingface.co/facebook/vit-msn-small) on the imagefolder dataset. It achieves the following results on the evaluation set:

- Loss: 0.0160
- Accuracy: 0.9969
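The front-matter accuracy of 0.996875 is an exact multiple of 1/320, which is consistent with a 320-image evaluation split containing a single misclassification. The split size is an assumption inferred from the fraction, not something the card states:

```python
# Hypothetical sanity check: 0.996875 == 319/320, i.e. one error on an
# assumed 320-image eval split (the card does not state the split size).
eval_size = 320                              # assumed
correct = round(0.996875 * eval_size)        # 319
errors = eval_size - correct                 # 1
print(correct, errors)
```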

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 50
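The hyperparameters above imply a linear warmup-then-decay schedule and an effective batch size of 64 × 4 = 256. A minimal sketch, mirroring the behavior of `transformers`' `get_linear_schedule_with_warmup`; the total step count of 1100 is taken from the training-results table below, not stated in the card text:

```python
BASE_LR = 5e-5                           # learning_rate
TOTAL_STEPS = 1100                       # final step in the results table (assumed total)
WARMUP_STEPS = int(0.1 * TOTAL_STEPS)    # lr_scheduler_warmup_ratio: 0.1 -> 110 steps

def linear_lr(step: int) -> float:
    """Linear warmup from 0 to BASE_LR, then linear decay back to 0."""
    if step < WARMUP_STEPS:
        return BASE_LR * step / WARMUP_STEPS
    return BASE_LR * max(0.0, (TOTAL_STEPS - step) / (TOTAL_STEPS - WARMUP_STEPS))

# Effective batch size: per-device batch x gradient_accumulation_steps.
effective_batch = 64 * 4                 # = total_train_batch_size of 256
```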

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| 0.2996        | 0.9778  | 22   | 0.3897          | 0.8438   |
| 0.3703        | 2.0     | 45   | 0.3595          | 0.8594   |
| 0.3087        | 2.9778  | 67   | 0.3777          | 0.8625   |
| 0.486         | 4.0     | 90   | 0.4530          | 0.8187   |
| 0.3307        | 4.9778  | 112  | 0.4560          | 0.8234   |
| 0.306         | 6.0     | 135  | 0.3471          | 0.8672   |
| 0.3005        | 6.9778  | 157  | 0.3025          | 0.8859   |
| 0.319         | 8.0     | 180  | 0.2451          | 0.8984   |
| 0.3489        | 8.9778  | 202  | 0.1814          | 0.9281   |
| 0.3251        | 10.0    | 225  | 0.2451          | 0.9156   |
| 0.3034        | 10.9778 | 247  | 0.1566          | 0.9406   |
| 0.2746        | 12.0    | 270  | 0.2493          | 0.8922   |
| 0.2369        | 12.9778 | 292  | 0.1622          | 0.9375   |
| 0.2231        | 14.0    | 315  | 0.1781          | 0.9359   |
| 0.2281        | 14.9778 | 337  | 0.1268          | 0.9531   |
| 0.2001        | 16.0    | 360  | 0.2431          | 0.9141   |
| 0.183         | 16.9778 | 382  | 0.1017          | 0.9625   |
| 0.1891        | 18.0    | 405  | 0.1802          | 0.9391   |
| 0.1862        | 18.9778 | 427  | 0.0869          | 0.9766   |
| 0.1935        | 20.0    | 450  | 0.1079          | 0.9688   |
| 0.1797        | 20.9778 | 472  | 0.1250          | 0.9563   |
| 0.1605        | 22.0    | 495  | 0.0655          | 0.9719   |
| 0.1848        | 22.9778 | 517  | 0.0806          | 0.9766   |
| 0.1498        | 24.0    | 540  | 0.1116          | 0.9578   |
| 0.1394        | 24.9778 | 562  | 0.0807          | 0.9672   |
| 0.1584        | 26.0    | 585  | 0.0525          | 0.9797   |
| 0.1302        | 26.9778 | 607  | 0.0513          | 0.9828   |
| 0.1356        | 28.0    | 630  | 0.0420          | 0.9875   |
| 0.1101        | 28.9778 | 652  | 0.0354          | 0.9875   |
| 0.1227        | 30.0    | 675  | 0.0583          | 0.9766   |
| 0.1158        | 30.9778 | 697  | 0.0253          | 0.9906   |
| 0.117         | 32.0    | 720  | 0.0231          | 0.9906   |
| 0.1022        | 32.9778 | 742  | 0.0726          | 0.9797   |
| 0.1221        | 34.0    | 765  | 0.0160          | 0.9969   |
| 0.0956        | 34.9778 | 787  | 0.0482          | 0.9844   |
| 0.0856        | 36.0    | 810  | 0.0256          | 0.9875   |
| 0.0996        | 36.9778 | 832  | 0.0211          | 0.9906   |
| 0.0848        | 38.0    | 855  | 0.0446          | 0.9797   |
| 0.1001        | 38.9778 | 877  | 0.0274          | 0.9875   |
| 0.0976        | 40.0    | 900  | 0.0225          | 0.9922   |
| 0.0864        | 40.9778 | 922  | 0.0207          | 0.9922   |
| 0.0865        | 42.0    | 945  | 0.0193          | 0.9969   |
| 0.0773        | 42.9778 | 967  | 0.0203          | 0.9922   |
| 0.075         | 44.0    | 990  | 0.0131          | 0.9969   |
| 0.0761        | 44.9778 | 1012 | 0.0129          | 0.9938   |
| 0.0624        | 46.0    | 1035 | 0.0114          | 0.9969   |
| 0.0557        | 46.9778 | 1057 | 0.0102          | 0.9953   |
| 0.0708        | 48.0    | 1080 | 0.0116          | 0.9953   |
| 0.0667        | 48.8889 | 1100 | 0.0131          | 0.9953   |
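The headline numbers (loss 0.0160, accuracy 0.9969) match the first epoch that reaches the table's top accuracy (epoch 34, step 765), not the lowest validation loss (0.0102 at step 1057). This is consistent with checkpoint selection on accuracy, e.g. `load_best_model_at_end` with `metric_for_best_model="accuracy"`; that setting is an assumption, not stated in the card. A minimal sketch of that selection over a few rows from the table:

```python
# Pick the best checkpoint by accuracy; max() keeps the first row that
# attains the top value, which is how ties at 0.9969 resolve to epoch 34.
# Rows are (epoch, step, val_loss, accuracy), copied from the table above.
rows = [
    (32.0, 720, 0.0231, 0.9906),
    (34.0, 765, 0.0160, 0.9969),
    (44.0, 990, 0.0131, 0.9969),
    (46.9778, 1057, 0.0102, 0.9953),
]
best = max(rows, key=lambda r: r[3])   # first row reaching the top accuracy
print(best)
```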

### Framework versions

- Transformers 4.40.0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1