---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: dgx2_whisper_small_mozilla_noisy_distil_epochs_50_batch_8
  results: []
---
# dgx2_whisper_small_mozilla_noisy_distil_epochs_50_batch_8
This model is a fine-tuned version of [rohitp1/kkkh_whisper_small_distillation_att_loss_mozilla_epochs_100_batch_4_concat_dataset](https://huggingface.co/rohitp1/kkkh_whisper_small_distillation_att_loss_mozilla_epochs_100_batch_4_concat_dataset) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1254
- Wer: 20.5209
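
For reference, below is a minimal transcription sketch using the `transformers` pipeline. The repo id and the audio path are assumptions: the card does not state where this checkpoint is published, so the id below only mirrors the naming of the base model.

```python
# Minimal usage sketch. The repo id assumes the checkpoint is published
# under the same owner as the base model; this is not confirmed by the card.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="rohitp1/dgx2_whisper_small_mozilla_noisy_distil_epochs_50_batch_8",
)

# Transcribe a local audio file (placeholder path).
print(asr("sample.wav")["text"])
```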
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 256
- total_train_batch_size: 2048
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine_with_restarts
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 50
- mixed_precision_training: Native AMP
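
As a sketch, these settings map onto `transformers`' `Seq2SeqTrainingArguments` roughly as shown below. The `output_dir` and the exact trainer wiring are assumptions; the original training script is not included in this card.

```python
# Sketch of the hyperparameters above expressed as Seq2SeqTrainingArguments.
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the transformers defaults.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="dgx2_whisper_small_mozilla_noisy_distil_epochs_50_batch_8",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=256,  # effective batch size: 8 * 256 = 2048
    seed=42,
    num_train_epochs=50,
    lr_scheduler_type="cosine_with_restarts",
    warmup_ratio=0.2,
    fp16=True,  # "Native AMP" mixed precision
)
```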
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0006        | 4.39  | 150  | 0.5538          | 19.3893 |
| 0.1453        | 8.78  | 300  | 0.7751          | 20.7263 |
| 0.3233        | 13.17 | 450  | 0.8857          | 20.7994 |
| 0.486         | 17.55 | 600  | 1.0980          | 20.6462 |
| 0.6433        | 21.94 | 750  | 1.1264          | 20.5835 |
| 0.6452        | 26.33 | 900  | 1.1254          | 20.5209 |
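
The Wer column reports word error rate in percent. A small sketch of how such a score can be computed with the `evaluate` library follows; the toy strings are illustrative only, and `evaluate` itself is an assumption, as it is not listed among the framework versions below.

```python
# Hedged sketch: compute WER with the `evaluate` library.
# The predictions/references here are toy examples, not model outputs.
import evaluate

wer_metric = evaluate.load("wer")
score = wer_metric.compute(
    predictions=["the quick brown fox"],
    references=["the quick brown fox jumps"],  # 1 error over 5 words = 0.2
)
print(f"WER: {100 * score:.4f}%")  # the table reports WER in percent
```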
### Framework versions
- Transformers 4.28.1
- Pytorch 1.12.1
- Datasets 2.8.0
- Tokenizers 0.13.2