emotion_classification_2

This model is a fine-tuned version of google/vit-base-patch16-224-in21k on an image dataset loaded with the imagefolder builder. It achieves the following results on the evaluation set:

  • Loss: 1.3274
  • Accuracy: 0.5188

Model description

More information needed

Intended uses & limitations

More information needed
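
As a rough usage illustration (the intended uses, label set, and preprocessing are not documented here), the checkpoint can be loaded for image classification with the standard Transformers pipeline. The repository id raffel-22/emotion_classification_2 comes from the model page; the image path below is a hypothetical placeholder.

```python
from PIL import Image
from transformers import pipeline

# Minimal inference sketch: load the fine-tuned ViT checkpoint and
# classify a single image. "face.jpg" is a placeholder path.
classifier = pipeline("image-classification", model="raffel-22/emotion_classification_2")

image = Image.open("face.jpg")
for prediction in classifier(image):  # list of {"label": ..., "score": ...}
    print(f"{prediction['label']}: {prediction['score']:.3f}")
```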

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 4e-05
  • train_batch_size: 32
  • eval_batch_size: 32
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • num_epochs: 30
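
For reference, a minimal sketch of how the values above might map onto transformers.TrainingArguments. Only the listed hyperparameters are taken from this run; the output directory and evaluation strategy are assumptions, and the Adam betas/epsilon quoted above are the Trainer defaults.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="emotion_classification_2",  # assumed name, not documented
    learning_rate=4e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=30,
    evaluation_strategy="epoch",            # consistent with the per-epoch results below
)
```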

Training results

Training Loss   Epoch   Step   Validation Loss   Accuracy
No log          1.0     20     1.9337            0.3563
No log          2.0     40     1.7116            0.3375
No log          3.0     60     1.5755            0.4562
No log          4.0     80     1.4939            0.45
No log          5.0     100    1.4377            0.5062
No log          6.0     120    1.4363            0.4562
No log          7.0     140    1.3615            0.5125
No log          8.0     160    1.3021            0.5375
No log          9.0     180    1.3307            0.525
No log          10.0    200    1.3085            0.4938
No log          11.0    220    1.2798            0.5813
No log          12.0    240    1.2707            0.525
No log          13.0    260    1.2339            0.55
No log          14.0    280    1.3053            0.5437
No log          15.0    300    1.3038            0.4938
No log          16.0    320    1.3088            0.5375
No log          17.0    340    1.3336            0.5312
No log          18.0    360    1.3053            0.5
No log          19.0    380    1.2206            0.5687
No log          20.0    400    1.2598            0.5312
No log          21.0    420    1.3332            0.5125
No log          22.0    440    1.3388            0.5312
No log          23.0    460    1.3129            0.5563
No log          24.0    480    1.3632            0.5062
0.9153          25.0    500    1.4166            0.4688
0.9153          26.0    520    1.4094            0.5
0.9153          27.0    540    1.4294            0.475
0.9153          28.0    560    1.4937            0.475
0.9153          29.0    580    1.3897            0.4938
0.9153          30.0    600    1.4565            0.475

Framework versions

  • Transformers 4.33.2
  • Pytorch 2.0.1+cu118
  • Datasets 2.14.5
  • Tokenizers 0.13.3