# whisper-base-aug-11-may-lightning-v1
This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base). It achieves the following results on the evaluation set:
- Loss: 0.1011
- Wer: 83.8329
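Since the card does not yet include usage instructions, the snippet below is a minimal inference sketch, assuming the standard Transformers `automatic-speech-recognition` pipeline; the model ID matches this repository, and `audio.wav` is a placeholder path.

```python
# Minimal inference sketch (not part of the original card).
# Assumes the standard Transformers ASR pipeline; "audio.wav" is a placeholder audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="PhanithLIM/whisper-base-aug-11-may-lightning-v1",
)

# The pipeline returns a dict with the transcription under "text".
print(asr("audio.wav")["text"])
```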
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
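For reference, the values above can be expressed as a Transformers `Seq2SeqTrainingArguments` object. This is only a hedged reconstruction: the actual training script is not included in the card (the model name suggests a Lightning-based setup), and `output_dir` is a placeholder.

```python
# Hedged reconstruction of the listed hyperparameters (not the original training script).
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-base-aug-11-may-lightning-v1",  # placeholder output path
    learning_rate=3e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # effective total train batch size: 32
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=10,
)
```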
### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Wer     |
|:-------------:|:------:|:-----:|:---------------:|:-------:|
| 0.7002        | 1.0    | 1414  | 0.1698          | 90.1418 |
| 0.1406        | 2.0    | 2828  | 0.1163          | 82.7225 |
| 0.0973        | 3.0    | 4242  | 0.1005          | 81.6232 |
| 0.0759        | 4.0    | 5656  | 0.0944          | 79.2710 |
| 0.0613        | 5.0    | 7070  | 0.0910          | 82.6157 |
| 0.0504        | 6.0    | 8484  | 0.0916          | 81.6877 |
| 0.0416        | 7.0    | 9898  | 0.0932          | 83.1141 |
| 0.0344        | 8.0    | 11312 | 0.0960          | 83.1408 |
| 0.0285        | 9.0    | 12726 | 0.0992          | 83.6482 |
| 0.0243        | 9.9933 | 14130 | 0.1011          | 83.8329 |
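The Wer column is a word error rate on a 0–100 scale. A common way to compute it for Whisper fine-tunes is the `evaluate` library's WER metric, as in the sketch below; the strings are placeholders, not data from this model's evaluation set.

```python
# Minimal WER computation sketch with placeholder strings (not this model's eval data).
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["hello world"]       # placeholder hypothesis
references = ["hello there world"]  # placeholder reference

# evaluate returns a fraction; multiply by 100 to match the table's scale.
print(100 * wer_metric.compute(predictions=predictions, references=references))
```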
### Framework versions
- Transformers 4.52.0.dev0
- Pytorch 2.7.0+cu128
- Datasets 3.5.1
- Tokenizers 0.21.1