# wav2vec2-large-960h-lv60-self-with-wikipedia-lm-timit
This model is a fine-tuned version of [gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm](https://huggingface.co/gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm) on an unspecified dataset (the card's metadata lists it as "None"; the model name suggests TIMIT). It achieves the following results on the evaluation set:
- Loss: 0.0889
- WER: 0.4976
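As a minimal usage sketch (not taken from the original card), the model can be loaded with the `transformers` ASR pipeline. The repo id below assumes the fine-tuned model is published under the same `gxbag` namespace as its base; adjust it to the actual location. Since the base checkpoint ships a Wikipedia n-gram language model, LM-boosted decoding additionally requires `pyctcdecode` and `kenlm` to be installed.

```python
from transformers import pipeline

# Repo id is an assumption based on the base model's namespace.
asr = pipeline(
    "automatic-speech-recognition",
    model="gxbag/wav2vec2-large-960h-lv60-self-with-wikipedia-lm-timit",
)

# Transcribe an audio file (decoding a local file path requires ffmpeg).
result = asr("speech.wav")
print(result["text"])
```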
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
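For convenience, here is a minimal sketch of how the values above map onto `transformers.TrainingArguments`. The `output_dir` is a placeholder, and `fp16=True` is assumed to correspond to "Native AMP"; neither is stated in the original card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-960h-lv60-self-with-wikipedia-lm-timit",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    fp16=True,  # assumed switch for native automatic mixed precision
)
```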
### Training results
| Training Loss | Epoch | Step | Validation Loss | WER    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.7911        | 2.02  | 250  | 3.0896          | 1.0    |
| 1.3854        | 4.03  | 500  | 0.0704          | 0.5052 |
| 0.1926        | 6.05  | 750  | 0.0678          | 0.5010 |
| 0.1472        | 8.06  | 1000 | 0.0794          | 0.5157 |
| 0.1326        | 10.08 | 1250 | 0.0937          | 0.5031 |
| 0.104         | 12.1  | 1500 | 0.0859          | 0.5055 |
| 0.0754        | 14.11 | 1750 | 0.0903          | 0.5031 |
| 0.0624        | 16.13 | 2000 | 0.0927          | 0.5034 |
| 0.0594        | 18.14 | 2250 | 0.0929          | 0.5016 |
| 0.057         | 20.16 | 2500 | 0.0873          | 0.5039 |
| 0.0476        | 22.18 | 2750 | 0.0974          | 0.5055 |
| 0.0382        | 24.19 | 3000 | 0.0886          | 0.5003 |
| 0.0329        | 26.21 | 3250 | 0.0832          | 0.4987 |
| 0.032         | 28.22 | 3500 | 0.0889          | 0.4976 |
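The WER column is the word error rate on the validation set. As a rough illustration (not the exact evaluation script behind these numbers), it can be computed with the `evaluate` library, which uses `jiwer` as its backend:

```python
import evaluate

wer_metric = evaluate.load("wer")

# One substitution out of six reference words -> WER of 1/6.
wer = wer_metric.compute(
    predictions=["the cat sat on the mat"],
    references=["the cat sat on a mat"],
)
print(f"WER: {wer:.4f}")  # 0.1667
```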
### Framework versions
- Transformers 4.23.0.dev0
- Pytorch 1.13.0.dev20220624+cu113
- Datasets 2.5.2.dev0
- Tokenizers 0.12.1