
Wav2vec2-xlsr-Shemo-Ravdess-4EMO

This model is a fine-tuned version of makhataei/Wav2vec2-xlsr-Shemo-Ravdess-4EMO on the minoosh/shEMO dataset. It achieves the following results on the evaluation set:

  • Loss: 0.7652
  • Accuracy: 0.7256

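The card does not yet document usage, so the following is only a minimal inference sketch, not an official example: it assumes the checkpoint loads with the generic AutoModelForAudioClassification / AutoFeatureExtractor classes, expects 16 kHz mono speech, and stores its four emotion labels in config.id2label. The file name example.wav is a placeholder.

```python
# Minimal inference sketch (assumptions: audio-classification head, 16 kHz mono input,
# labels taken from the checkpoint config; "example.wav" is a placeholder path).
import torch
import librosa
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

model_id = "makhataei/Wav2vec2-xlsr-Shemo-Ravdess-4EMO"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)
model.eval()

# Load a local audio file and resample to the feature extractor's sampling rate (16 kHz).
speech, _ = librosa.load("example.wav", sr=feature_extractor.sampling_rate, mono=True)

inputs = feature_extractor(
    speech, sampling_rate=feature_extractor.sampling_rate, return_tensors="pt"
)
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label[predicted_id])
```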
Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto TrainingArguments follows the list):

  • learning_rate: 0.0005
  • train_batch_size: 4
  • eval_batch_size: 4
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 16
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20

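For reference only, here is a sketch of how the hyperparameters above map onto Hugging Face TrainingArguments (using the standard argument names from Transformers 4.29); the output directory and the per-epoch evaluation setting are assumptions, and the original training script is not reproduced on this card.

```python
# Hedged sketch of the listed hyperparameters as TrainingArguments
# (the actual training script is not part of this card).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="Wav2vec2-xlsr-Shemo-Ravdess-4EMO",  # hypothetical output path
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    gradient_accumulation_steps=4,   # effective train batch size: 4 * 4 = 16
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="epoch",     # assumption: per-epoch evaluation, matching the results table
)
```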
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6733        | 1.0   | 250  | 0.7858          | 0.7256   |
| 0.6866        | 2.0   | 500  | 0.7594          | 0.7143   |
| 0.6558        | 3.0   | 750  | 0.8044          | 0.7279   |
| 0.6496        | 4.0   | 1000 | 0.7633          | 0.7188   |
| 0.6393        | 5.0   | 1250 | 0.7901          | 0.7098   |
| 0.6319        | 6.0   | 1500 | 0.7739          | 0.7075   |
| 0.626         | 7.0   | 1750 | 0.7509          | 0.7302   |
| 0.6335        | 8.0   | 2000 | 0.7741          | 0.7166   |
| 0.6376        | 9.0   | 2250 | 0.7856          | 0.7143   |
| 0.6621        | 10.0  | 2500 | 0.7515          | 0.7188   |
| 0.6379        | 11.0  | 2750 | 0.7932          | 0.7166   |
| 0.6186        | 12.0  | 3000 | 0.7728          | 0.7166   |
| 0.6189        | 13.0  | 3250 | 0.7640          | 0.7143   |
| 0.6215        | 14.0  | 3500 | 0.7579          | 0.7211   |
| 0.6187        | 15.0  | 3750 | 0.7822          | 0.7166   |
| 0.6267        | 16.0  | 4000 | 0.7863          | 0.7143   |
| 0.6144        | 17.0  | 4250 | 0.7654          | 0.7188   |
| 0.6138        | 18.0  | 4500 | 0.7719          | 0.7166   |
| 0.5821        | 19.0  | 4750 | 0.7707          | 0.7234   |
| 0.5835        | 20.0  | 5000 | 0.7652          | 0.7256   |

Framework versions

  • Transformers 4.29.2
  • Pytorch 2.0.1+cu117
  • Datasets 2.12.0
  • Tokenizers 0.13.3
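As a quick reproducibility aid, a small sketch (not part of the original card) that compares locally installed packages against the versions listed above, assuming the standard PyPI package names:

```python
# Check installed versions against the ones listed above (package names assumed).
from importlib.metadata import version

expected = {
    "transformers": "4.29.2",
    "torch": "2.0.1+cu117",
    "datasets": "2.12.0",
    "tokenizers": "0.13.3",
}
for package, wanted in expected.items():
    installed = version(package)
    status = "OK" if installed == wanted else f"differs (expected {wanted})"
    print(f"{package}: {installed} {status}")
```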