---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: emotion_face_image_classification
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: train
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.55
---

# emotion_face_image_classification

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the imagefolder dataset. It achieves the following results on the evaluation set (a usage sketch follows the results):

- Loss: 1.2110
- Accuracy: 0.55
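
Since the base checkpoint and task type are stated above, a minimal inference sketch with the 🤗 Transformers `image-classification` pipeline might look like the following. The repository id `RickyIG/emotion_face_image_classification` and the example image path are assumptions, not confirmed by this card:

```python
from transformers import pipeline

# Assumed repository id; replace with the actual model id if it differs.
classifier = pipeline(
    "image-classification",
    model="RickyIG/emotion_face_image_classification",
)

# "face.jpg" is a placeholder path to a face image.
predictions = classifier("face.jpg")
for pred in predictions:
    print(f"{pred['label']}: {pred['score']:.3f}")
```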

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
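
As a rough guide, the settings above map onto `TrainingArguments` in 🤗 Transformers 4.33 as sketched below. This is not the original training script: the output directory, dataset objects, metric function, and the per-epoch evaluation strategy are assumptions (the latter inferred from the per-epoch rows in the results table).

```python
from transformers import TrainingArguments, Trainer

# Sketch only: maps the listed hyperparameters onto TrainingArguments.
training_args = TrainingArguments(
    output_dir="emotion_face_image_classification",  # assumed name
    learning_rate=5e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=64,
    seed=42,
    num_train_epochs=50,
    lr_scheduler_type="linear",   # linear decay, as listed above
    adam_beta1=0.9,               # Adam settings, as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",  # assumption: evaluation appears to run once per epoch
)

# trainer = Trainer(
#     model=model,                      # ViT model fine-tuned from the base checkpoint
#     args=training_args,
#     train_dataset=train_ds,           # placeholder datasets
#     eval_dataset=eval_ds,
#     compute_metrics=compute_metrics,  # placeholder accuracy metric
# )
# trainer.train()
```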

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0717        | 1.0   | 10   | 2.0593          | 0.2062   |
| 2.005         | 2.0   | 20   | 1.9999          | 0.2625   |
| 1.9169        | 3.0   | 30   | 1.8931          | 0.35     |
| 1.7635        | 4.0   | 40   | 1.7616          | 0.4062   |
| 1.6614        | 5.0   | 50   | 1.6452          | 0.4562   |
| 1.6182        | 6.0   | 60   | 1.5661          | 0.4125   |
| 1.5434        | 7.0   | 70   | 1.5183          | 0.4125   |
| 1.46          | 8.0   | 80   | 1.4781          | 0.4875   |
| 1.4564        | 9.0   | 90   | 1.3939          | 0.5125   |
| 1.2966        | 10.0  | 100  | 1.3800          | 0.4562   |
| 1.3732        | 11.0  | 110  | 1.3557          | 0.475    |
| 1.2907        | 12.0  | 120  | 1.3473          | 0.5      |
| 1.2875        | 13.0  | 130  | 1.3416          | 0.5312   |
| 1.2743        | 14.0  | 140  | 1.2964          | 0.4875   |
| 1.1249        | 15.0  | 150  | 1.2385          | 0.525    |
| 1.0963        | 16.0  | 160  | 1.2775          | 0.5062   |
| 1.0261        | 17.0  | 170  | 1.2751          | 0.5125   |
| 0.9298        | 18.0  | 180  | 1.2318          | 0.525    |
| 1.0668        | 19.0  | 190  | 1.2520          | 0.5437   |
| 0.9933        | 20.0  | 200  | 1.2512          | 0.525    |
| 1.1069        | 21.0  | 210  | 1.3016          | 0.5      |
| 1.0279        | 22.0  | 220  | 1.3279          | 0.475    |
| 0.967         | 23.0  | 230  | 1.2481          | 0.5      |
| 0.8115        | 24.0  | 240  | 1.1791          | 0.5563   |
| 0.7912        | 25.0  | 250  | 1.2336          | 0.55     |
| 0.9294        | 26.0  | 260  | 1.1759          | 0.5813   |
| 0.8936        | 27.0  | 270  | 1.1685          | 0.6      |
| 0.7706        | 28.0  | 280  | 1.2403          | 0.5312   |
| 0.7694        | 29.0  | 290  | 1.2479          | 0.5687   |
| 0.7265        | 30.0  | 300  | 1.2000          | 0.5625   |
| 0.6781        | 31.0  | 310  | 1.1856          | 0.55     |
| 0.6676        | 32.0  | 320  | 1.2661          | 0.5437   |
| 0.7254        | 33.0  | 330  | 1.1986          | 0.5437   |
| 0.7396        | 34.0  | 340  | 1.1497          | 0.575    |
| 0.5532        | 35.0  | 350  | 1.2796          | 0.5062   |
| 0.622         | 36.0  | 360  | 1.2749          | 0.5125   |
| 0.6958        | 37.0  | 370  | 1.2034          | 0.5687   |
| 0.6102        | 38.0  | 380  | 1.2576          | 0.5188   |
| 0.6161        | 39.0  | 390  | 1.2635          | 0.5062   |
| 0.6927        | 40.0  | 400  | 1.1535          | 0.5437   |
| 0.549         | 41.0  | 410  | 1.1405          | 0.6      |
| 0.6668        | 42.0  | 420  | 1.2683          | 0.5312   |
| 0.5144        | 43.0  | 430  | 1.2249          | 0.6      |
| 0.6703        | 44.0  | 440  | 1.2297          | 0.5687   |
| 0.6383        | 45.0  | 450  | 1.1507          | 0.6062   |
| 0.5211        | 46.0  | 460  | 1.2914          | 0.4813   |
| 0.4743        | 47.0  | 470  | 1.2782          | 0.5125   |
| 0.553         | 48.0  | 480  | 1.2256          | 0.5375   |
| 0.6407        | 49.0  | 490  | 1.2149          | 0.5687   |
| 0.4195        | 50.0  | 500  | 1.2024          | 0.5625   |

### Framework versions

- Transformers 4.33.2
- Pytorch 2.0.1+cu118
- Datasets 2.14.5
- Tokenizers 0.13.3