# wav2vec2-base-checkpoint-13

This model is a fine-tuned version of [jiobiala24/wav2vec2-base-checkpoint-12](https://huggingface.co/jiobiala24/wav2vec2-base-checkpoint-12) on the common_voice dataset. It achieves the following results on the evaluation set:
- Loss: 1.1804
- Wer: 0.3809
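
As a quick start, here is a minimal transcription sketch. It assumes the checkpoint is available on the Hugging Face Hub under the repo id above and that the input audio is resampled to the 16 kHz rate wav2vec2-base expects; the audio file path is a placeholder.

```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "jiobiala24/wav2vec2-base-checkpoint-13"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Placeholder audio file; downmix to mono and resample to 16 kHz.
speech, sample_rate = torchaudio.load("sample.wav")
speech = speech.mean(dim=0)
speech = torchaudio.functional.resample(speech, sample_rate, 16_000)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the argmax over the vocabulary at each frame.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```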
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
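
A sketch of how these settings map onto the `transformers` `TrainingArguments` class; the original training script is not part of this card, so `output_dir` is a placeholder.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-base-checkpoint-13",  # placeholder, not from the card
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed-precision training
)
```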
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2688 | 1.92 | 1000 | 0.6518 | 0.3692 |
| 0.1944 | 3.85 | 2000 | 0.7188 | 0.3808 |
| 0.1503 | 5.77 | 3000 | 0.7552 | 0.3853 |
| 0.1218 | 7.69 | 4000 | 0.8155 | 0.3834 |
| 0.1024 | 9.62 | 5000 | 0.8867 | 0.3779 |
| 0.0874 | 11.54 | 6000 | 0.8917 | 0.3866 |
| 0.0775 | 13.46 | 7000 | 1.0320 | 0.4019 |
| 0.0712 | 15.38 | 8000 | 1.0110 | 0.3922 |
| 0.0656 | 17.31 | 9000 | 1.0494 | 0.3885 |
| 0.0578 | 19.23 | 10000 | 1.1054 | 0.3883 |
| 0.053 | 21.15 | 11000 | 1.1285 | 0.3938 |
| 0.0496 | 23.08 | 12000 | 1.1358 | 0.3884 |
| 0.0459 | 25.0 | 13000 | 1.2062 | 0.3904 |
| 0.0445 | 26.92 | 14000 | 1.1811 | 0.3830 |
| 0.0414 | 28.85 | 15000 | 1.1804 | 0.3809 |
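
The Wer column is word error rate on the validation set. A minimal sketch of computing it with the `wer` metric from the datasets library (requires `jiwer` to be installed; the predictions and references below are illustrative placeholders):

```python
from datasets import load_metric

wer_metric = load_metric("wer")
predictions = ["the cat sat on the mat"]  # model transcriptions (placeholder)
references = ["the cat sat on a mat"]     # ground-truth transcripts (placeholder)
print(wer_metric.compute(predictions=predictions, references=references))
```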
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3