# Whisper large-v2 Korean - ML_project_custom_data_10epoch_with500
This model is a fine-tuned version of openai/whisper-large-v2 on the customd_ataset dataset. It achieves the following results on the evaluation set:
- Loss: 0.7993
- Cer: 20.9642
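Cer here is the character error rate, reported as a percentage. As a minimal sketch of how such a score can be computed with the `evaluate` library (the prediction/reference pair below is hypothetical, not taken from this model's evaluation set):

```python
import evaluate  # pip install evaluate jiwer

# Sketch of a CER computation with the `evaluate` library; the strings
# below are hypothetical examples, not from this model's eval data.
cer_metric = evaluate.load("cer")

predictions = ["안녕하세요 세계"]   # hypothetical model transcription
references = ["안녕하세요, 세계"]  # hypothetical ground-truth transcript

# `compute` returns a fraction; the card reports CER scaled to a percentage.
cer = 100 * cer_metric.compute(predictions=predictions, references=references)
print(f"CER: {cer:.4f}")
```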
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 10
- mixed_precision_training: Native AMP
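As a rough illustration, these hyperparameters map onto `Seq2SeqTrainingArguments` along the following lines, assuming the Hugging Face `Seq2SeqTrainer` was used (the card does not state the training script, and the output path is hypothetical):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: assumes the HF Seq2SeqTrainer was used for this run.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v2-ko-adapter",  # hypothetical output path
    learning_rate=1e-3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    seed=42,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=10,
    fp16=True,  # "Native AMP" mixed-precision training
)
```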
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1305        | 1.0   | 113  | 0.8786          | 53.5427 |
| 0.1204        | 2.0   | 226  | 0.8298          | 93.2067 |
| 0.0958        | 3.0   | 339  | 0.8469          | 24.7626 |
| 0.0543        | 4.0   | 452  | 0.8597          | 46.3112 |
| 0.0408        | 5.0   | 565  | 0.8339          | 63.3309 |
| 0.0375        | 6.0   | 678  | 0.8222          | 60.4091 |
| 0.0267        | 7.0   | 791  | 0.7989          | 20.7451 |
| 0.0066        | 8.0   | 904  | 0.8033          | 24.9087 |
| 0.0061        | 9.0   | 1017 | 0.7966          | 20.2337 |
| 0.0021        | 10.0  | 1130 | 0.7993          | 20.9642 |
### Framework versions
- PEFT 0.11.2.dev0
- Transformers 4.41.2
- PyTorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1
## Model tree for ymlee/ML_project_custom_data_10epoch_with500

Base model: openai/whisper-large-v2
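Given the PEFT framework version above, this repo presumably hosts a LoRA-style adapter rather than full model weights. A minimal inference sketch for loading the adapter onto the base model (the silent placeholder waveform is hypothetical; substitute real 16 kHz mono speech):

```python
import numpy as np
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Sketch only: assumes this repo hosts a PEFT adapter for whisper-large-v2.
base = WhisperForConditionalGeneration.from_pretrained(
    "openai/whisper-large-v2", torch_dtype=torch.float16, device_map="auto"
)
model = PeftModel.from_pretrained(base, "ymlee/ML_project_custom_data_10epoch_with500")
processor = WhisperProcessor.from_pretrained("openai/whisper-large-v2")

# Placeholder: one second of silence at 16 kHz; replace with real speech.
audio = np.zeros(16000, dtype=np.float32)

features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
features = features.to(model.device, dtype=torch.float16)

with torch.no_grad():
    ids = model.generate(features, language="ko", task="transcribe")
print(processor.batch_decode(ids, skip_special_tokens=True)[0])
```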