# Whisper Turbo ko
This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) on a custom dataset. It achieves the following results on the evaluation set:
- Loss: 0.2173
## Model description
More information needed
## Intended uses & limitations
More information needed
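Since this repository ships PEFT adapter weights (see the framework versions below) fine-tuned from `openai/whisper-large-v3-turbo`, a minimal inference sketch follows. The adapter repository id and audio file name are placeholders, and `language="ko"` is an assumption based on the "ko" in the model name.

```python
# Minimal inference sketch: load the base Whisper model, attach the PEFT adapter,
# and transcribe one 16 kHz audio file. Repo id and file name are placeholders.
import torch
import librosa
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from peft import PeftModel

BASE_ID = "openai/whisper-large-v3-turbo"
ADAPTER_ID = "your-username/whisper-turbo-ko"  # placeholder: actual adapter repo id

processor = WhisperProcessor.from_pretrained(BASE_ID)
base_model = WhisperForConditionalGeneration.from_pretrained(
    BASE_ID, torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, ADAPTER_ID)
model.eval()

# Whisper expects 16 kHz mono audio.
audio, _ = librosa.load("sample.wav", sr=16000)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")
input_features = inputs.input_features.to(model.device, dtype=torch.float16)

with torch.no_grad():
    generated_ids = model.generate(input_features, language="ko", task="transcribe")
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```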
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training; a sketch of an equivalent `Seq2SeqTrainingArguments` setup follows the list:
- learning_rate: 0.001
- train_batch_size: 64
- eval_batch_size: 256
- seed: 42
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 300
- mixed_precision_training: Native AMP
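For reference, here is a hedged sketch of a `Seq2SeqTrainingArguments` configuration matching the list above; `output_dir` and anything not stated in the card are illustrative assumptions, not the author's actual training script.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-turbo-ko",   # assumed; not stated in the card
    learning_rate=1e-3,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=256,
    seed=42,
    optim="adamw_torch",               # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=300,
    fp16=True,                         # "Native AMP" mixed-precision training
)
```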
### Training results
| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.806         | 0.0474 | 10   | 1.5761          |
| 0.555         | 0.0948 | 20   | 1.1182          |
| 0.2932        | 0.1422 | 30   | 0.9172          |
| 0.2319        | 0.1896 | 40   | 0.7350          |
| 0.1991        | 0.2370 | 50   | 0.6569          |
| 0.1782        | 0.2844 | 60   | 0.5887          |
| 0.1676        | 0.3318 | 70   | 0.5292          |
| 0.1526        | 0.3791 | 80   | 0.4856          |
| 0.1464        | 0.4265 | 90   | 0.4509          |
| 0.1429        | 0.4739 | 100  | 0.4312          |
| 0.1417        | 0.5213 | 110  | 0.3998          |
| 0.1359        | 0.5687 | 120  | 0.3802          |
| 0.1279        | 0.6161 | 130  | 0.3608          |
| 0.125         | 0.6635 | 140  | 0.3369          |
| 0.111         | 0.7109 | 150  | 0.3189          |
| 0.1203        | 0.7583 | 160  | 0.3088          |
| 0.1416        | 0.8057 | 170  | 0.3010          |
| 0.1135        | 0.8531 | 180  | 0.2795          |
| 0.108         | 0.9005 | 190  | 0.2734          |
| 0.0881        | 0.9479 | 200  | 0.2702          |
| 0.1225        | 0.9953 | 210  | 0.2580          |
| 0.0874        | 1.0427 | 220  | 0.2523          |
| 0.0997        | 1.0900 | 230  | 0.2610          |
| 0.0964        | 1.1374 | 240  | 0.2388          |
| 0.082         | 1.1848 | 250  | 0.2300          |
| 0.0793        | 1.2322 | 260  | 0.2279          |
| 0.0816        | 1.2796 | 270  | 0.2243          |
| 0.0885        | 1.3270 | 280  | 0.2198          |
| 0.0745        | 1.3744 | 290  | 0.2180          |
| 0.077         | 1.4218 | 300  | 0.2173          |
### Framework versions
- PEFT 0.14.0
- Transformers 4.47.1
- PyTorch 2.3.1+cu121
- Datasets 3.0.0
- Tokenizers 0.21.0