wav2vec2-large-xls-r-300m-dm32

This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on an unknown dataset. It achieves the following results on the evaluation set (an example loading sketch follows these results):

  • Loss: 0.7826
  • Accuracy: 0.7292
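
The card does not state the task, but the Accuracy metric suggests a single-label audio classification head. Assuming that, the snippet below is a minimal inference sketch using the transformers auto classes; the placeholder waveform and the classification head are assumptions, not confirmed by the card.

```python
import numpy as np
import torch
from transformers import AutoFeatureExtractor, AutoModelForAudioClassification

# Assumed task: the Accuracy metric points to classification, but the card
# does not confirm the pipeline type, so treat this as a sketch only.
model_id = "anirudh512/wav2vec2-large-xls-r-300m-dm32"
feature_extractor = AutoFeatureExtractor.from_pretrained(model_id)
model = AutoModelForAudioClassification.from_pretrained(model_id)
model.eval()

# XLS-R expects mono 16 kHz audio; this placeholder is one second of silence.
waveform = np.zeros(16000, dtype=np.float32)
inputs = feature_extractor(waveform, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits
predicted_id = int(logits.argmax(dim=-1))
print(model.config.id2label.get(predicted_id, predicted_id))
```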

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto TrainingArguments follows this list):

  • learning_rate: 5e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: cosine
  • num_epochs: 22
  • mixed_precision_training: Native AMP
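
The sketch below shows how the listed settings map onto transformers.TrainingArguments. The output directory and the surrounding Trainer wiring (model, datasets, metric function) are assumptions and not taken from the card.

```python
from transformers import TrainingArguments

# Mapping of the hyperparameters listed above onto TrainingArguments.
# output_dir is an assumed name; everything else mirrors the card.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-dm32",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 4 * 2 = 8
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    num_train_epochs=22,
    fp16=True,                       # Native AMP mixed-precision training
)
```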

Training results

| Training Loss | Epoch   | Step | Validation Loss | Accuracy |
|:-------------:|:-------:|:----:|:---------------:|:--------:|
| No log        | 1.1930  | 34   | 0.6558          | 0.625    |
| No log        | 2.3860  | 68   | 0.6781          | 0.6042   |
| No log        | 3.5789  | 102  | 0.7024          | 0.4167   |
| 0.7018        | 4.7719  | 136  | 0.6712          | 0.5833   |
| 0.7018        | 5.9649  | 170  | 0.6506          | 0.625    |
| 0.7018        | 7.1579  | 204  | 0.6987          | 0.4375   |
| 0.6854        | 8.3509  | 238  | 0.6089          | 0.6875   |
| 0.6854        | 9.5439  | 272  | 0.5723          | 0.75     |
| 0.6854        | 10.7368 | 306  | 0.5788          | 0.7083   |
| 0.609         | 11.9298 | 340  | 0.6704          | 0.7083   |
| 0.609         | 13.1228 | 374  | 0.6395          | 0.7292   |
| 0.609         | 14.3158 | 408  | 0.6201          | 0.7292   |
| 0.4355        | 15.5088 | 442  | 0.7633          | 0.75     |
| 0.4355        | 16.7018 | 476  | 0.7693          | 0.7083   |
| 0.4355        | 17.8947 | 510  | 0.7872          | 0.6875   |
| 0.4355        | 19.0877 | 544  | 0.7747          | 0.7292   |
| 0.3369        | 20.2807 | 578  | 0.7819          | 0.7292   |
| 0.3369        | 21.4737 | 612  | 0.7826          | 0.7292   |

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.4.1+cu121
  • Datasets 3.0.1
  • Tokenizers 0.19.1
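
When reproducing results, it can help to confirm the local environment matches the versions above. The check below is a small sketch of that workflow; the workflow itself is an assumption, not part of the card.

```python
# Compare locally installed framework versions against those listed in the card.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.44.2",
    "torch": "2.4.1+cu121",
    "datasets": "3.0.1",
    "tokenizers": "0.19.1",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    print(f"{name}: installed {installed[name]}, card lists {want}")
```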