# wav2vec2-large-xlsr-korean-demo-colab_epoch15
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.4133
- WER (word error rate): 0.3801
## Model description
More information needed
## Intended uses & limitations
More information needed
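The card does not include a usage snippet, so here is a minimal inference sketch, assuming the checkpoint is published together with its processor. The repo id and audio filename are placeholders, not values from this card:

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Placeholder repo id; substitute the namespace this checkpoint is actually published under.
model_id = "<user>/wav2vec2-large-xlsr-korean-demo-colab_epoch15"

processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# XLSR-53 checkpoints expect 16 kHz mono audio.
speech, _ = librosa.load("sample.wav", sr=16_000, mono=True)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the most likely token at each frame.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```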
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
- mixed_precision_training: Native AMP
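As a sketch of how the hyperparameters above map onto `transformers.TrainingArguments` (this is not the author's actual training script; `output_dir` is a placeholder, and the Adam betas/epsilon listed above are the library defaults):

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./wav2vec2-large-xlsr-korean-demo",  # hypothetical path
    learning_rate=3e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 4 * 2 = 8
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=15,
    fp16=True,  # native AMP mixed-precision training
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the TrainingArguments default optimizer.
```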
### Training results
| Training Loss | Epoch | Step | Validation Loss | WER    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 16.9017       | 0.8   | 400  | 4.6273          | 1.0    |
| 4.4633        | 1.6   | 800  | 4.4419          | 1.0    |
| 4.2262        | 2.4   | 1200 | 3.8477          | 0.9994 |
| 2.4402        | 3.21  | 1600 | 1.3564          | 0.8111 |
| 1.3499        | 4.01  | 2000 | 0.9070          | 0.6664 |
| 0.9922        | 4.81  | 2400 | 0.7496          | 0.6131 |
| 0.8271        | 5.61  | 2800 | 0.6240          | 0.5408 |
| 0.6918        | 6.41  | 3200 | 0.5506          | 0.5026 |
| 0.6015        | 7.21  | 3600 | 0.5303          | 0.4935 |
| 0.5435        | 8.02  | 4000 | 0.4951          | 0.4696 |
| 0.4584        | 8.82  | 4400 | 0.4677          | 0.4432 |
| 0.4258        | 9.62  | 4800 | 0.4602          | 0.4307 |
| 0.3906        | 10.42 | 5200 | 0.4456          | 0.4195 |
| 0.3481        | 11.22 | 5600 | 0.4265          | 0.4062 |
| 0.3216        | 12.02 | 6000 | 0.4241          | 0.4046 |
| 0.2908        | 12.83 | 6400 | 0.4106          | 0.3941 |
| 0.2747        | 13.63 | 6800 | 0.4146          | 0.3855 |
| 0.2633        | 14.43 | 7200 | 0.4133          | 0.3801 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.12.0+cu113
- Datasets 1.18.3
- Tokenizers 0.12.1
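To reproduce this environment, the pinned versions above can be installed with pip; the CUDA 11.3 wheel index below is an assumption based on the `+cu113` tag:

```bash
pip install transformers==4.17.0 datasets==1.18.3 tokenizers==0.12.1
pip install torch==1.12.0+cu113 --extra-index-url https://download.pytorch.org/whl/cu113
```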