---
library_name: transformers
license: apache-2.0
base_model: google/umt5-small
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: t5-asr-CV16
    results: []
---

# t5-asr-CV16

This model is a fine-tuned version of [google/umt5-small](https://huggingface.co/google/umt5-small) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.6678
- Wer: 0.7639
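
The card does not include usage code. A minimal loading sketch with the standard transformers seq2seq API is shown below; the repo id `urarik/t5-asr-CV16` and the exact input format are assumptions, since the training data and task setup are not documented here.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Hypothetical repo id; adjust to wherever this checkpoint is actually hosted.
model_id = "urarik/t5-asr-CV16"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# umT5 is a text-to-text model, so the input here is text; how that text was
# prepared for training (e.g. what the source sequences look like) is not
# documented in this card.
inputs = tokenizer("example input text", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```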

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a reconstructed configuration sketch follows the list):

- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 128
- total_train_batch_size: 4096
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 20
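
For reference, these settings roughly correspond to the `Seq2SeqTrainingArguments` below. This is a reconstruction from the list above, not the original training script, so treat it as an approximation.

```python
from transformers import Seq2SeqTrainingArguments

# Reconstructed from the hyperparameter list above; the actual training script
# is not included in this card.
training_args = Seq2SeqTrainingArguments(
    output_dir="t5-asr-CV16",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=128,  # effective batch size: 32 * 128 = 4096
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=20,
    predict_with_generate=True,  # generate text at eval time so WER can be computed
)
```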

### Training results

| Training Loss | Epoch   | Step | Validation Loss | Wer    |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 1.8105        | 1.9694  | 48   | 0.7812          | 0.8528 |
| 1.6752        | 3.9694  | 96   | 0.7174          | 0.8285 |
| 1.6146        | 5.9694  | 144  | 0.7357          | 0.8215 |
| 1.3847        | 7.9694  | 192  | 0.6796          | 0.8172 |
| 1.2792        | 9.9694  | 240  | 0.6601          | 0.7841 |
| 1.2129        | 11.9694 | 288  | 0.6540          | 0.7764 |
| 1.279         | 13.9694 | 336  | 0.6792          | 0.7837 |
| 1.1706        | 15.9694 | 384  | 0.6695          | 0.7888 |
| 1.0348        | 17.9694 | 432  | 0.6931          | 0.7948 |
| 0.9335        | 19.9694 | 480  | 0.6678          | 0.7639 |
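
The Wer column is the standard word error rate. As an illustration, it can be computed from generated predictions and reference transcripts with the `evaluate` library; whether the original run used this library is not documented here, and the values below are toy data only.

```python
import evaluate

# Standard word error rate, as reported in the table above.
wer_metric = evaluate.load("wer")

predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

# WER = (substitutions + insertions + deletions) / number of reference words
wer = wer_metric.compute(predictions=predictions, references=references)
print(wer)  # ~0.167 for this toy pair (1 substitution over 6 reference words)
```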

### Framework versions

- Transformers 4.48.3
- Pytorch 2.5.1+cu121
- Datasets 2.17.1
- Tokenizers 0.21.0