# whisper-small-ko-E2.1-SA
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the AIHub "elder over 70" dataset. It achieves the following results on the evaluation set:
- Loss: 0.1587
- CER (character error rate, %): 4.5054
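
As a quick-start, here is a minimal inference sketch (not part of the original card) using the `transformers` ASR pipeline. The short model id and the audio file name are assumptions; substitute the full Hub repo id and your own audio file.

```python
# Minimal sketch: transcribe Korean speech with this fine-tuned Whisper checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="whisper-small-ko-E2.1-SA",  # assumption: replace with the full Hub repo id
)

# Whisper models operate on 16 kHz audio; the pipeline resamples common formats.
# Forcing the language/task is optional but avoids misdetection on short clips.
result = asr(
    "sample.wav",  # assumption: path to your own audio file
    generate_kwargs={"language": "korean", "task": "transcribe"},
)
print(result["text"])
```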
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a hedged sketch of the corresponding training arguments follows the list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
- mixed_precision_training: Native AMP
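
The hyperparameters above map onto `Seq2SeqTrainingArguments` roughly as sketched below. This is an assumed reconstruction, not the author's actual training script; `output_dir` and the eval cadence are placeholders (the 100-step cadence matches the results table).

```python
# Sketch of Seq2SeqTrainingArguments matching the listed hyperparameters.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ko-E2.1-SA",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective total train batch size: 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=2,
    fp16=True,                       # "Native AMP" mixed-precision training
    evaluation_strategy="steps",
    eval_steps=100,                  # assumption, inferred from the results table
)
```

The Adam betas (0.9, 0.999) and epsilon 1e-08 listed above are the Trainer's defaults, so they need no explicit arguments.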
### Training results
| Training Loss | Epoch | Step | Validation Loss | CER (%) |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.4621        | 0.13  | 100  | 0.2838          | 5.9563  |
| 0.3338        | 0.26  | 200  | 0.2081          | 5.6215  |
| 0.3232        | 0.39  | 300  | 0.1974          | 5.3513  |
| 0.2781        | 0.52  | 400  | 0.1949          | 5.4159  |
| 0.2583        | 0.64  | 500  | 0.1817          | 5.2103  |
| 0.2485        | 0.77  | 600  | 0.1745          | 4.7874  |
| 0.237         | 0.9   | 700  | 0.1699          | 4.8285  |
| 0.1745        | 1.03  | 800  | 0.1659          | 4.4995  |
| 0.147         | 1.16  | 900  | 0.1662          | 4.6758  |
| 0.1737        | 1.29  | 1000 | 0.1644          | 5.0282  |
| 0.1639        | 1.42  | 1100 | 0.1637          | 4.8285  |
| 0.1497        | 1.55  | 1200 | 0.1603          | 4.6640  |
| 0.1756        | 1.68  | 1300 | 0.1599          | 4.5818  |
| 0.1586        | 1.81  | 1400 | 0.1593          | 4.4525  |
| 0.141         | 1.93  | 1500 | 0.1587          | 4.5054  |
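
The CER column is the character error rate scaled to a percentage. A sketch of how it is typically computed with the `evaluate` library (an assumption, not necessarily the author's exact evaluation code; the strings are illustrative):

```python
# Requires: pip install evaluate jiwer
import evaluate

cer_metric = evaluate.load("cer")
cer = cer_metric.compute(
    predictions=["안녕하세요"],   # model transcriptions
    references=["안녕하십니까"],  # ground-truth transcripts
)
print(100 * cer)  # the table reports CER as a percentage
```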
### Framework versions
- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.0
- Tokenizers 0.15.0