
wav2vec2-large-teacher-base-student-en-asr-timit

This model is a fine-tuned version of facebook/wav2vec2-base. The training dataset is not specified in the card metadata, though the model name suggests the TIMIT corpus. It achieves the following results on the evaluation set:

  • Loss: 73.5882
  • WER (word error rate): 0.3422
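The WER above is the word error rate: the word-level edit distance between the hypothesis and reference transcripts, divided by the number of reference words. A minimal pure-Python sketch of the metric (not the exact implementation used during training, which is typically `jiwer` or the `datasets` WER metric):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit-distance table over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # all deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # all insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

A WER of 0.3422 means roughly one word error for every three reference words.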

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 64
  • eval_batch_size: 64
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 1000
  • num_epochs: 30
  • mixed_precision_training: Native AMP
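The linear scheduler with 1,000 warmup steps can be sketched as a simple function of the step count, mirroring the behavior of `transformers.get_linear_schedule_with_warmup`. The total of 1,890 optimizer steps is an assumption inferred from the results table below (about 63 steps per epoch × 30 epochs):

```python
def linear_warmup_lr(step: int,
                     base_lr: float = 1e-4,
                     warmup_steps: int = 1000,
                     total_steps: int = 1890) -> float:
    """Linear warmup from 0 to base_lr, then linear decay back to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))
```

Note that with this many warmup steps the learning rate is still ramping up for more than half of training, which is consistent with the slow early loss decrease in the results table.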

Training results

| Training Loss | Epoch | Step | Validation Loss | WER    |
|---------------|-------|------|-----------------|--------|
| 920.6083      | 3.17  | 200  | 1256.0675       | 1.0    |
| 660.5993      | 6.35  | 400  | 717.6098        | 0.9238 |
| 336.5288      | 9.52  | 600  | 202.0025        | 0.5306 |
| 131.3178      | 12.7  | 800  | 108.0701        | 0.4335 |
| 73.4232       | 15.87 | 1000 | 90.2797         | 0.3728 |
| 54.9439       | 19.05 | 1200 | 76.9043         | 0.3636 |
| 44.6595       | 22.22 | 1400 | 79.2443         | 0.3550 |
| 38.6381       | 25.4  | 1600 | 73.6277         | 0.3493 |
| 35.074        | 28.57 | 1800 | 73.5882         | 0.3422 |

Framework versions

  • Transformers 4.25.1
  • Pytorch 1.13.0+cu116
  • Datasets 1.18.3
  • Tokenizers 0.13.2
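With the Transformers version listed above, the hyperparameters from this card could be approximated with `TrainingArguments` as sketched below. This is an assumption-laden reconstruction, not the author's actual script: `output_dir` is a placeholder, the card's batch sizes are mapped to per-device values (assuming a single GPU), and "Native AMP" maps to `fp16=True`.

```python
from transformers import TrainingArguments

# Sketch reconstructing the hyperparameters listed in this card.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-teacher-base-student-en-asr-timit",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=64,  # assumes single-GPU training
    per_device_eval_batch_size=64,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed-precision training
)
```

The Adam settings listed above (betas=(0.9, 0.999), epsilon=1e-08) are the Transformers defaults, so they need not be set explicitly.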