
wav2vecvanilla_load_best

This model is a fine-tuned version of facebook/wav2vec2-base-960h (about 94.4M parameters, F32 weights) on an unknown dataset. It achieves the following results on the evaluation set:

  • Best checkpoint at step 1400 (epoch 5.98): training loss 0.8716, validation loss 0.8350, evaluation WER 0.3249

Model description

More information needed

Intended uses & limitations

More information needed
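
The card does not document intended uses, but the checkpoint is a CTC-based speech recognizer, so the standard transformers inference path applies. Below is a minimal transcription sketch, assuming 16 kHz mono audio; the model path and audio file name are placeholders, not values from this card.

```python
# Minimal inference sketch (not from the original card): load the fine-tuned
# checkpoint and transcribe one audio file with greedy CTC decoding.
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_path = "path/to/wav2vecvanilla_load_best"  # placeholder checkpoint location
processor = Wav2Vec2Processor.from_pretrained(model_path)
model = Wav2Vec2ForCTC.from_pretrained(model_path)
model.eval()

# Resample to the 16 kHz rate expected by wav2vec2-base-960h.
speech, sr = librosa.load("sample.wav", sr=16_000)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy argmax decoding over the CTC logits.
pred_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(pred_ids)[0]
print(transcription)
```

Greedy decoding matches how WER is typically reported for wav2vec2 CTC models; adding a language-model-fused decoder would be a separate design choice.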

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an equivalent TrainingArguments sketch is shown after the list):

  • learning_rate: 0.0001
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 20
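
For reference, the listed values map onto a Hugging Face TrainingArguments configuration roughly as follows. This is a sketch under the assumption that the Trainer API was used; the output directory, the 100-step evaluation/save cadence, and the load_best_model_at_end settings are inferred from the model name and the results table below, not stated explicitly in the card.

```python
# Sketch of a TrainingArguments configuration matching the listed hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vecvanilla_load_best",  # placeholder output directory
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=20,
    evaluation_strategy="steps",    # evaluation every 100 steps, as in the results table
    eval_steps=100,
    save_steps=100,
    logging_steps=100,
    load_best_model_at_end=True,    # assumption suggested by the "load_best" model name
    metric_for_best_model="wer",
    greater_is_better=False,
)
```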

Training results

Note: the run diverged after the step-1400 checkpoint; from step 1500 onward the validation loss is NaN and the WER collapses to 1.0, which is why the step-1400 checkpoint is reported as the best model above.

| Training Loss | Epoch | Step | Validation Loss | WER    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.4700        | 0.43  | 100  | 1.0582          | 0.4027 |
| 1.2584        | 0.85  | 200  | 0.9775          | 0.3719 |
| 1.1377        | 1.28  | 300  | 0.9759          | 0.3628 |
| 1.0955        | 1.71  | 400  | 1.1271          | 0.3626 |
| 1.0945        | 2.14  | 500  | 0.9063          | 0.3589 |
| 1.0567        | 2.56  | 600  | 0.9288          | 0.3564 |
| 1.0330        | 2.99  | 700  | 1.1634          | 0.3522 |
| 1.0043        | 3.42  | 800  | 0.8396          | 0.3506 |
| 0.9776        | 3.85  | 900  | 0.8654          | 0.3437 |
| 0.9147        | 4.27  | 1000 | 0.8816          | 0.3362 |
| 0.9041        | 4.70  | 1100 | 0.8994          | 0.3303 |
| 0.8648        | 5.13  | 1200 | 0.8379          | 0.3361 |
| 0.8241        | 5.56  | 1300 | 0.8263          | 0.3292 |
| 0.8716        | 5.98  | 1400 | 0.8350          | 0.3249 |
| 0.8218        | 6.41  | 1500 | nan             | 1.0    |
| 0.0           | 6.84  | 1600 | nan             | 1.0    |
| 0.0           | 7.26  | 1700 | nan             | 1.0    |
| 0.0           | 7.69  | 1800 | nan             | 1.0    |
| 0.0           | 8.12  | 1900 | nan             | 1.0    |
| 0.0           | 8.55  | 2000 | nan             | 1.0    |
| 0.0           | 8.97  | 2100 | nan             | 1.0    |
| 0.0           | 9.40  | 2200 | nan             | 1.0    |
| 0.0           | 9.83  | 2300 | nan             | 1.0    |
| 0.0           | 10.26 | 2400 | nan             | 1.0    |
| 0.0           | 10.68 | 2500 | nan             | 1.0    |
| 0.0           | 11.11 | 2600 | nan             | 1.0    |
| 0.0           | 11.54 | 2700 | nan             | 1.0    |
| 0.0           | 11.97 | 2800 | nan             | 1.0    |
| 0.0           | 12.39 | 2900 | nan             | 1.0    |
| 0.0           | 12.82 | 3000 | nan             | 1.0    |
| 0.0           | 13.25 | 3100 | nan             | 1.0    |
| 0.0           | 13.68 | 3200 | nan             | 1.0    |
| 0.0           | 14.10 | 3300 | nan             | 1.0    |
| 0.0           | 14.53 | 3400 | nan             | 1.0    |
| 0.0           | 14.96 | 3500 | nan             | 1.0    |
| 0.0           | 15.38 | 3600 | nan             | 1.0    |
| 0.0           | 15.81 | 3700 | nan             | 1.0    |
| 0.0           | 16.24 | 3800 | nan             | 1.0    |
| 0.0           | 16.67 | 3900 | nan             | 1.0    |
| 0.0           | 17.09 | 4000 | nan             | 1.0    |
| 0.0           | 17.52 | 4100 | nan             | 1.0    |
| 0.0           | 17.95 | 4200 | nan             | 1.0    |
| 0.0           | 18.38 | 4300 | nan             | 1.0    |
| 0.0           | 18.80 | 4400 | nan             | 1.0    |
| 0.0           | 19.23 | 4500 | nan             | 1.0    |
| 0.0           | 19.66 | 4600 | nan             | 1.0    |
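
The WER column is the word error rate (lower is better). A minimal sketch of how it can be computed with the evaluate library is shown below; the prediction and reference strings are toy examples, not data from this run.

```python
# Sketch of computing word error rate with the `evaluate` library (backed by jiwer).
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["the cat sat on the mat"]   # model transcriptions (toy example)
references = ["the cat sat on a mat"]      # ground-truth transcriptions (toy example)

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # 0.1667 here: 1 substitution over 6 reference words
```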

Framework versions

  • Transformers 4.38.2
  • PyTorch 2.2.1+cu121
  • Datasets 2.18.0
  • Tokenizers 0.15.2