---
library_name: transformers
license: apache-2.0
base_model: google/vit-base-patch16-224
tags:
  - generated_from_trainer
datasets:
  - medmnist-v2
metrics:
  - accuracy
  - f1
model-index:
  - name: ViT_bloodmnist_std_45
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: medmnist-v2
          type: medmnist-v2
          config: bloodmnist
          split: validation
          args: bloodmnist
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.9064600993861444
          - name: F1
            type: f1
            value: 0.8909233140229111
---

ViT_bloodmnist_std_45

This model is a fine-tuned version of google/vit-base-patch16-224 on the medmnist-v2 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2659
  • Accuracy: 0.9065
  • F1: 0.8909
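
BloodMNIST images are 28×28 RGB, while google/vit-base-patch16-224 expects 224×224 input, so images must be resized before they reach the model. In practice the model's ViTImageProcessor handles this; the sketch below just illustrates the size relationship (28 × 8 = 224) with a nearest-neighbour upsample. The function name is illustrative, and nearest-neighbour is only one interpolation choice; the card does not state which one the training pipeline used.

```python
import numpy as np

def upsample_to_vit(img_28: np.ndarray) -> np.ndarray:
    """Nearest-neighbour upsample of a 28x28x3 image to the 224x224x3
    input size expected by google/vit-base-patch16-224 (28 * 8 = 224)."""
    assert img_28.shape == (28, 28, 3)
    return np.repeat(np.repeat(img_28, 8, axis=0), 8, axis=1)

# Example: a dummy BloodMNIST-sized image.
dummy = np.zeros((28, 28, 3), dtype=np.uint8)
print(upsample_to_vit(dummy).shape)  # (224, 224, 3)
```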

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 32
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 3

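With `lr_scheduler_type: linear` and no warmup steps listed, the learning rate presumably decays linearly from 5e-05 to 0 over training; the zero-warmup assumption is mine, as is the total step count (the results table shows 200 steps ≈ 0.0595 epochs, i.e. roughly 3,360 steps per epoch, so 3 epochs is on the order of 10,000 steps). A minimal sketch of that schedule:

```python
def linear_lr(step: int, total_steps: int, base_lr: float = 5e-05) -> float:
    """Linear decay from base_lr at step 0 to 0 at total_steps,
    matching lr_scheduler_type: linear with no warmup (assumed)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

total = 10_000  # illustrative; see the epoch/step ratio in the results table
print(linear_lr(0, total))      # 5e-05
print(linear_lr(5_000, total))  # 2.5e-05
print(linear_lr(total, total))  # 0.0
```
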
Training results

| Training Loss | Epoch  | Step  | Validation Loss | Accuracy | F1     |
|---------------|--------|-------|-----------------|----------|--------|
| 0.6113        | 0.0595 | 200   | 0.8908          | 0.6846   | 0.5917 |
| 0.3578        | 0.1189 | 400   | 0.5958          | 0.7956   | 0.7548 |
| 0.3118        | 0.1784 | 600   | 0.5688          | 0.7810   | 0.7132 |
| 0.2815        | 0.2378 | 800   | 0.5227          | 0.7961   | 0.7645 |
| 0.266         | 0.2973 | 1000  | 0.6554          | 0.7687   | 0.7229 |
| 0.2353        | 0.3567 | 1200  | 0.3328          | 0.8838   | 0.8615 |
| 0.2297        | 0.4162 | 1400  | 0.4696          | 0.8592   | 0.7990 |
| 0.2267        | 0.4756 | 1600  | 0.4362          | 0.8493   | 0.8117 |
| 0.2266        | 0.5351 | 1800  | 0.3286          | 0.8838   | 0.8407 |
| 0.2047        | 0.5945 | 2000  | 0.3614          | 0.8697   | 0.8382 |
| 0.1948        | 0.6540 | 2200  | 0.3144          | 0.8843   | 0.8546 |
| 0.1953        | 0.7134 | 2400  | 0.3805          | 0.8657   | 0.8180 |
| 0.1728        | 0.7729 | 2600  | 0.3364          | 0.8820   | 0.8339 |
| 0.1658        | 0.8323 | 2800  | 0.2873          | 0.8978   | 0.8743 |
| 0.1594        | 0.8918 | 3000  | 0.3062          | 0.8914   | 0.8580 |
| 0.1649        | 0.9512 | 3200  | 0.3313          | 0.8867   | 0.8577 |
| 0.1508        | 1.0107 | 3400  | 0.2117          | 0.9217   | 0.9133 |
| 0.1062        | 1.0702 | 3600  | 0.2978          | 0.8919   | 0.8756 |
| 0.1091        | 1.1296 | 3800  | 0.2832          | 0.9019   | 0.8831 |
| 0.0993        | 1.1891 | 4000  | 0.3275          | 0.8943   | 0.8718 |
| 0.1001        | 1.2485 | 4200  | 0.3420          | 0.8896   | 0.8568 |
| 0.1092        | 1.3080 | 4400  | 0.2594          | 0.9130   | 0.8909 |
| 0.092         | 1.3674 | 4600  | 0.3181          | 0.8966   | 0.8753 |
| 0.1036        | 1.4269 | 4800  | 0.2721          | 0.9048   | 0.8852 |
| 0.0896        | 1.4863 | 5000  | 0.3795          | 0.8820   | 0.8617 |
| 0.0904        | 1.5458 | 5200  | 0.2382          | 0.9171   | 0.8980 |
| 0.0864        | 1.6052 | 5400  | 0.3845          | 0.8814   | 0.8499 |
| 0.0809        | 1.6647 | 5600  | 0.3189          | 0.8984   | 0.8758 |
| 0.0764        | 1.7241 | 5800  | 0.3952          | 0.8843   | 0.8522 |
| 0.0796        | 1.7836 | 6000  | 0.3656          | 0.8867   | 0.8460 |
| 0.0695        | 1.8430 | 6200  | 0.3266          | 0.8925   | 0.8597 |
| 0.0682        | 1.9025 | 6400  | 0.3247          | 0.8960   | 0.8647 |
| 0.06          | 1.9620 | 6600  | 0.2349          | 0.9223   | 0.9055 |
| 0.0498        | 2.0214 | 6800  | 0.2578          | 0.9176   | 0.8952 |
| 0.0296        | 2.0809 | 7000  | 0.2592          | 0.9211   | 0.9070 |
| 0.0251        | 2.1403 | 7200  | 0.3249          | 0.9048   | 0.8797 |
| 0.02          | 2.1998 | 7400  | 0.2977          | 0.9165   | 0.8973 |
| 0.0274        | 2.2592 | 7600  | 0.3411          | 0.9013   | 0.8730 |
| 0.0241        | 2.3187 | 7800  | 0.3916          | 0.9013   | 0.8752 |
| 0.0253        | 2.3781 | 8000  | 0.2919          | 0.9136   | 0.8939 |
| 0.0197        | 2.4376 | 8200  | 0.3294          | 0.9077   | 0.8835 |
| 0.0209        | 2.4970 | 8400  | 0.3709          | 0.8966   | 0.8652 |
| 0.0175        | 2.5565 | 8600  | 0.3639          | 0.9001   | 0.8733 |
| 0.0191        | 2.6159 | 8800  | 0.3706          | 0.9048   | 0.8790 |
| 0.0167        | 2.6754 | 9000  | 0.3120          | 0.9171   | 0.8993 |
| 0.0224        | 2.7348 | 9200  | 0.3493          | 0.9048   | 0.8799 |
| 0.015         | 2.7943 | 9400  | 0.3398          | 0.9130   | 0.8889 |
| 0.0155        | 2.8537 | 9600  | 0.3707          | 0.9036   | 0.8758 |
| 0.0129        | 2.9132 | 9800  | 0.3467          | 0.9118   | 0.8909 |
| 0.0126        | 2.9727 | 10000 | 0.3470          | 0.9095   | 0.8874 |
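
The reported F1 (0.8909) consistently trails accuracy (0.9065) in the table above. Assuming the F1 metric is macro-averaged over BloodMNIST's 8 classes (the card does not say which averaging was used), this gap is expected: macro F1 weights every class equally, so it drops below accuracy when rarer classes are classified less well. A minimal pure-Python sketch of macro F1 on an imbalanced toy example:

```python
from collections import defaultdict

def macro_f1(y_true, y_pred):
    """Macro-averaged F1: per-class F1 scores averaged with equal weight,
    regardless of how frequent each class is."""
    classes = sorted(set(y_true) | set(y_pred))
    tp = defaultdict(int)
    fp = defaultdict(int)
    fn = defaultdict(int)
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1
        else:
            fp[p] += 1
            fn[t] += 1
    f1s = []
    for c in classes:
        denom = 2 * tp[c] + fp[c] + fn[c]
        f1s.append(2 * tp[c] / denom if denom else 0.0)
    return sum(f1s) / len(f1s)

# The majority class is predicted perfectly, the rare class is not,
# so macro F1 (~0.795) falls below accuracy (7/8 = 0.875).
y_true = [0, 0, 0, 0, 0, 0, 1, 1]
y_pred = [0, 0, 0, 0, 0, 0, 1, 0]
print(round(macro_f1(y_true, y_pred), 3))  # 0.795
```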

Framework versions

  • Transformers 4.45.1
  • PyTorch 2.4.0
  • Datasets 3.0.1
  • Tokenizers 0.20.0