# Whisper Md Ca - 1k
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.2554
- Wer: 10.9688
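
As a minimal usage sketch, the checkpoint can be loaded with the `transformers` `pipeline` API for automatic speech recognition. The repository id and audio file name below are placeholders, not confirmed by this card:

```python
# Minimal inference sketch. "your-username/whisper-md-ca-1k" is a hypothetical
# repository id; replace it with the actual model id for this card.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-md-ca-1k",  # hypothetical id
)

# Transcribe a local audio file (any format that soundfile/ffmpeg can decode).
result = asr("sample.wav")
print(result["text"])
```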
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (see the configuration sketch after the list):
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
- mixed_precision_training: Native AMP
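
The sketch below shows how these hyperparameters would map onto `transformers.Seq2SeqTrainingArguments`; the output directory and anything not listed above are assumptions, not taken from this card:

```python
# Configuration sketch mirroring the hyperparameters listed above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-md-ca-1k",   # hypothetical output path
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,     # effective train batch size: 32 * 2 = 64
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=1000,
    fp16=True,                         # "Native AMP" mixed-precision training
)
```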
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.2554        | 1.0   | 1000 | 0.2554          | 10.9688 |
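
For reference, WER of this kind is conventionally computed with the `evaluate` library. This is a sketch only; the example strings are illustrative and not taken from the Common Voice 11.0 data:

```python
# Word error rate (WER) computation sketch using the evaluate library.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["hola com estas"]   # illustrative model output
references = ["hola com estàs"]    # illustrative ground-truth transcript

# evaluate returns WER as a fraction; multiply by 100 to match the
# percentage-style number reported in the table above.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```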
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2