Whisper Medium
This model is a fine-tuned version of openai/whisper-medium on the b-brave-clean dataset. It achieves the following results on the evaluation set:
- Loss: 0.6333
- WER: 56.7335
- CER: 39.2960
- Final learning rate: 0.0000 (displayed to four decimal places)
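
As a quick usage sketch (not part of the original card), the adapter can be loaded on top of the base checkpoint with peft and transformers roughly as below. The adapter repo id miosipof/asr2_medium_v0.2 is taken from the model tree at the end of this card; the audio file name and 16 kHz mono input are assumptions.

```python
import librosa
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

# Load the base model and attach the fine-tuned adapter weights.
base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-medium")
model = PeftModel.from_pretrained(base, "miosipof/asr2_medium_v0.2")
model.eval()

processor = WhisperProcessor.from_pretrained("openai/whisper-medium")

# "sample.wav" is a placeholder; Whisper expects 16 kHz mono audio.
audio, _ = librosa.load("sample.wav", sr=16000)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    predicted_ids = model.generate(input_features=inputs.input_features)

print(processor.batch_decode(predicted_ids, skip_special_tokens=True)[0])
```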
Model description
More information needed
Intended uses & limitations
More information needed
Training and evaluation data
More information needed
Training procedure
Training hyperparameters
The following hyperparameters were used during training (see the Seq2SeqTrainingArguments sketch after this list):
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.3
- num_epochs: 8
- mixed_precision_training: Native AMP
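
The hyperparameters above map onto transformers' Seq2SeqTrainingArguments roughly as in the sketch below; the output_dir value and the fp16 flag (standing in for "Native AMP") are assumptions, the rest mirrors the listed values.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-b-brave-clean",  # assumed, not from the card
    learning_rate=3e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    gradient_accumulation_steps=2,   # effective train batch size of 8
    num_train_epochs=8,
    lr_scheduler_type="linear",
    warmup_ratio=0.3,
    optim="adamw_torch",             # AdamW, betas=(0.9, 0.999), eps=1e-08
    seed=42,
    fp16=True,                       # mixed precision via native AMP
)
```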
Training results
| Training Loss | Epoch | Step | Validation Loss | WER | CER | Lr |
|:-------------:|:-----:|:----:|:---------------:|:---:|:---:|:--:|
| 4.1253 | 1.0 | 502 | 3.7472 | 83.6676 | 51.0638 | 0.0000 |
| 1.2287 | 2.0 | 1004 | 1.0543 | 73.3524 | 47.8329 | 0.0000 |
| 0.9543 | 3.0 | 1506 | 0.8523 | 80.3725 | 49.8555 | 0.0000 |
| 0.7365 | 4.0 | 2008 | 0.7315 | 79.6562 | 58.7602 | 0.0000 |
| 0.4663 | 5.0 | 2510 | 0.6675 | 57.3066 | 39.2960 | 0.0000 |
| 0.5042 | 6.0 | 3012 | 0.6423 | 54.8711 | 39.2960 | 0.0000 |
| 0.386 | 7.0 | 3514 | 0.6314 | 55.1576 | 38.2453 | 0.0000 |
| 0.2509 | 7.9850 | 4008 | 0.6333 | 56.7335 | 39.2960 | 0.0000 |
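
WER and CER in this table are word and character error rates (lower is better), reported as percentages, and the Lr column shows the scheduler's learning rate rounded to four decimal places. As a minimal sketch, such scores can be computed with the evaluate library on a hypothetical reference/prediction pair:

```python
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Hypothetical transcripts, just to show the computation; the table's
# scores are the same metrics over the full evaluation set.
references = ["the quick brown fox jumps over the lazy dog"]
predictions = ["the quick brown fox jumped over a lazy dog"]

wer = 100 * wer_metric.compute(references=references, predictions=predictions)
cer = 100 * cer_metric.compute(references=references, predictions=predictions)
print(f"WER: {wer:.2f}%  CER: {cer:.2f}%")
```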
Framework versions
- PEFT 0.14.0
- Transformers 4.48.3
- PyTorch 2.2.0
- Datasets 3.2.0
- Tokenizers 0.21.0
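
To check that a local environment matches these pins, a small verification sketch (mapping the PyTorch entry to the torch distribution name is an assumption):

```python
from importlib.metadata import version

# Expected versions taken from the list above.
expected = {
    "peft": "0.14.0",
    "transformers": "4.48.3",
    "torch": "2.2.0",
    "datasets": "3.2.0",
    "tokenizers": "0.21.0",
}

for package, want in expected.items():
    have = version(package)
    marker = "OK" if have == want else f"differs (expected {want})"
    print(f"{package} {have}: {marker}")
```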
Model tree for miosipof/asr2_medium_v0.2
- Base model: openai/whisper-medium