---
license: apache-2.0
base_model: google/vit-base-patch16-224-in21k
tags:
  - generated_from_trainer
datasets:
  - imagefolder
metrics:
  - accuracy
model-index:
  - name: visual_emotion_classification_vit_base_finetunned
    results:
      - task:
          name: Image Classification
          type: image-classification
        dataset:
          name: imagefolder
          type: imagefolder
          config: default
          split: train
          args: default
        metrics:
          - name: Accuracy
            type: accuracy
            value: 0.51875
---

visual_emotion_classification_vit_base_finetunned

This model is a fine-tuned version of google/vit-base-patch16-224-in21k on the imagefolder dataset. It achieves the following results on the evaluation set:

  • Loss: 1.2429
  • Accuracy: 0.5188
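
The snippet below is a minimal inference sketch. The Hub repo id and the image path are assumptions; substitute the actual checkpoint location (a local output directory also works).

```python
from PIL import Image
import torch
from transformers import AutoImageProcessor, AutoModelForImageClassification

# Assumed checkpoint location -- replace with the actual Hub repo id or a local path.
model_id = "SoulPerforms/visual_emotion_classification_vit_base_finetunned"

processor = AutoImageProcessor.from_pretrained(model_id)
model = AutoModelForImageClassification.from_pretrained(model_id)

# Any RGB image; "example.jpg" is a placeholder path.
image = Image.open("example.jpg").convert("RGB")
inputs = processor(images=image, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(-1).item()
print(model.config.id2label[predicted_id])
```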

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 8
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 50
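
For reference, here is a hedged sketch of the TrainingArguments that would match the values above. The output directory and the steps-based evaluation cadence are assumptions (the results table suggests evaluation every 100 steps), not a copy of the original training script.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments reproducing the listed hyperparameters.
# output_dir, eval_steps, and logging_steps are assumptions.
training_args = TrainingArguments(
    output_dir="visual_emotion_classification_vit_base_finetunned",
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=50,
    evaluation_strategy="steps",
    eval_steps=100,
    logging_steps=100,
)
```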

Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.026         | 1.25  | 100  | 2.0071          | 0.275    |
| 1.8882        | 2.5   | 200  | 1.8921          | 0.3625   |
| 1.7186        | 3.75  | 300  | 1.7326          | 0.4188   |
| 1.5892        | 5.0   | 400  | 1.6242          | 0.475    |
| 1.4942        | 6.25  | 500  | 1.5443          | 0.5125   |
| 1.3825        | 7.5   | 600  | 1.4763          | 0.5062   |
| 1.3084        | 8.75  | 700  | 1.4554          | 0.4938   |
| 1.2388        | 10.0  | 800  | 1.4057          | 0.525    |
| 1.1519        | 11.25 | 900  | 1.3756          | 0.4938   |
| 1.1054        | 12.5  | 1000 | 1.3604          | 0.4875   |
| 1.0605        | 13.75 | 1100 | 1.3597          | 0.4938   |
| 1.016         | 15.0  | 1200 | 1.3370          | 0.4938   |
| 0.9601        | 16.25 | 1300 | 1.2981          | 0.4938   |
| 0.8445        | 17.5  | 1400 | 1.2420          | 0.5563   |
| 0.8514        | 18.75 | 1500 | 1.2485          | 0.5625   |
| 0.7899        | 20.0  | 1600 | 1.2861          | 0.4875   |
| 0.7459        | 21.25 | 1700 | 1.2860          | 0.4875   |
| 0.6917        | 22.5  | 1800 | 1.2335          | 0.5813   |
| 0.6864        | 23.75 | 1900 | 1.2726          | 0.5437   |
| 0.6414        | 25.0  | 2000 | 1.2215          | 0.5375   |
| 0.5583        | 26.25 | 2100 | 1.2756          | 0.5312   |
| 0.597         | 27.5  | 2200 | 1.2314          | 0.5375   |
| 0.5654        | 28.75 | 2300 | 1.3791          | 0.5125   |
| 0.5798        | 30.0  | 2400 | 1.1890          | 0.5687   |
| 0.5247        | 31.25 | 2500 | 1.2440          | 0.5687   |
| 0.5099        | 32.5  | 2600 | 1.2787          | 0.5625   |
| 0.496         | 33.75 | 2700 | 1.2628          | 0.55     |
| 0.479         | 35.0  | 2800 | 1.3420          | 0.4875   |
| 0.4685        | 36.25 | 2900 | 1.2817          | 0.5563   |
| 0.4375        | 37.5  | 3000 | 1.3122          | 0.525    |
| 0.4314        | 38.75 | 3100 | 1.1791          | 0.5563   |
| 0.4174        | 40.0  | 3200 | 1.2322          | 0.55     |
| 0.4019        | 41.25 | 3300 | 1.3871          | 0.5125   |
| 0.3738        | 42.5  | 3400 | 1.2854          | 0.5312   |
| 0.3938        | 43.75 | 3500 | 1.3057          | 0.5375   |
| 0.369         | 45.0  | 3600 | 1.2792          | 0.5437   |
| 0.3768        | 46.25 | 3700 | 1.2761          | 0.5625   |
| 0.3202        | 47.5  | 3800 | 1.2704          | 0.5375   |
| 0.3859        | 48.75 | 3900 | 1.2746          | 0.5312   |
| 0.3689        | 50.0  | 4000 | 1.3306          | 0.5563   |

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu121
  • Datasets 2.17.0
  • Tokenizers 0.15.2