
kkkh_w2v2_large_finetune_teacher_babble_noise_mozilla_50_epochs_batch_16

This model is a fine-tuned version of facebook/wav2vec2-large-960h-lv60. The training dataset is not specified in this card (the model name suggests babble-noise-augmented Mozilla data). It achieves the following results on the evaluation set:

  • Loss: 24.1934
  • Wer: 0.2500
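The Wer figure above is the word error rate: the word-level edit distance between hypothesis and reference, divided by the number of reference words. A minimal pure-Python sketch of that metric (an illustration, not the evaluation code used to produce the number above):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between the first i-1 reference words
    # and the first j hypothesis words.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        curr = [i]
        for j, h in enumerate(hyp, 1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1] / len(ref)

# Two words dropped out of six reference words -> WER of 2/6.
print(wer("the cat sat on the mat", "the cat sat mat"))
```

In practice this number is usually computed with a library such as jiwer or evaluate, but the definition is the same.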

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 5e-05
  • train_batch_size: 16
  • eval_batch_size: 1
  • seed: 42
  • gradient_accumulation_steps: 256
  • total_train_batch_size: 4096
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.2
  • num_epochs: 50
  • mixed_precision_training: Native AMP
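The effective batch size and the linear warmup/decay these settings imply can be sketched as follows (a pure-Python illustration of the standard linear scheduler with warmup, assuming 750 total optimizer steps as logged in the results table below; this is not the actual training code):

```python
# Effective batch size: per-device batch size times gradient accumulation steps.
train_batch_size = 16
gradient_accumulation_steps = 256
total_train_batch_size = train_batch_size * gradient_accumulation_steps  # 4096

def linear_lr(step: int, total_steps: int,
              base_lr: float = 5e-5, warmup_ratio: float = 0.2) -> float:
    """Linear warmup to base_lr, then linear decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

total_steps = 750  # final step count from the training-results table
# Warmup ends at step 150 (0.2 * 750), where the LR peaks at 5e-05.
print(linear_lr(150, total_steps))
```

With warmup_ratio 0.2, the learning rate climbs for the first fifth of training and then decays linearly to zero.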

Training results

Training Loss | Epoch | Step | Validation Loss | Wer
791.6482      | 8.78  | 150  | 30.5908         | 0.2961
509.9743      | 17.55 | 300  | 26.0843         | 0.2612
457.1243      | 26.33 | 450  | 25.0523         | 0.2562
432.307       | 35.11 | 600  | 24.4050         | 0.2510
420.762       | 43.89 | 750  | 24.1934         | 0.2500
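As a rough consistency check, the epoch/step columns imply the approximate training-set size, since each optimizer step consumes one effective batch of 4096 samples (a back-of-the-envelope estimate, not a figure stated in the card):

```python
# Estimate the training-set size from a logged (step, epoch) pair.
total_train_batch_size = 4096     # from the hyperparameters above
steps, epochs = 150, 8.78         # first row of the results table
steps_per_epoch = steps / epochs
approx_dataset_size = steps_per_epoch * total_train_batch_size
print(round(approx_dataset_size))  # roughly 70,000 samples
```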

Framework versions

  • Transformers 4.29.2
  • PyTorch 1.12.1
  • Datasets 2.6.1
  • Tokenizers 0.13.1