speech_ocean_hubert_mdd

This model is a fine-tuned version of facebook/hubert-large-ll60k on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.2027
  • WER: 0.0517
  • CER: 0.0499
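
A minimal inference sketch follows. It rests on two assumptions not stated in the card: that the checkpoint exposes a CTC head (the reported WER/CER suggest CTC-style transcription, so `HubertForCTC` is used here), and that a matching processor is bundled with the weights. The repo id and audio path are placeholders.

```python
# Minimal inference sketch (assumptions: CTC head, bundled processor,
# 16 kHz mono input as used to pretrain hubert-large-ll60k).
import torch
import torchaudio
from transformers import AutoProcessor, HubertForCTC

model_id = "speech_ocean_hubert_mdd"  # placeholder: replace with the actual repo id
processor = AutoProcessor.from_pretrained(model_id)
model = HubertForCTC.from_pretrained(model_id).eval()

# Load audio and resample to 16 kHz if needed.
waveform, sr = torchaudio.load("sample.wav")
if sr != 16_000:
    waveform = torchaudio.functional.resample(waveform, sr, 16_000)

inputs = processor(waveform.squeeze().numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding of the most likely token at each frame.
print(processor.batch_decode(logits.argmax(dim=-1)))
```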

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0003
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • num_epochs: 20
  • mixed_precision_training: Native AMP
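
For reference, the hyperparameters above map onto a `transformers` `TrainingArguments` configuration roughly as sketched below. The Adam betas/epsilon listed above match the library defaults, so they are not set explicitly; `output_dir` and `evaluation_strategy="epoch"` are assumptions (the latter inferred from the per-epoch validation rows in the results table).

```python
from transformers import TrainingArguments

# Sketch of the training configuration implied by the card; output_dir and
# evaluation_strategy are assumptions, everything else is listed above.
training_args = TrainingArguments(
    output_dir="speech_ocean_hubert_mdd",
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # total train batch size: 16 * 2 = 32
    num_train_epochs=20,
    lr_scheduler_type="linear",
    warmup_steps=500,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
    evaluation_strategy="epoch",     # assumption: table shows per-epoch eval
)
```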

Training results

| Training Loss | Epoch   | Step | Validation Loss | WER    | CER    |
|:-------------:|:-------:|:----:|:---------------:|:------:|:------:|
| 42.7069       | 0.9873  | 39   | 36.7247         | 0.9992 | 0.9977 |
| 16.2787       | 2.0     | 79   | 7.8315          | 1.0    | 1.0    |
| 6.7896        | 2.9873  | 118  | 4.5645          | 1.0    | 1.0    |
| 4.0104        | 4.0     | 158  | 3.8654          | 1.0    | 1.0    |
| 3.8037        | 4.9873  | 197  | 3.8060          | 1.0    | 1.0    |
| 3.7898        | 6.0     | 237  | 3.7695          | 1.0    | 1.0    |
| 3.7777        | 6.9873  | 276  | 3.7717          | 1.0    | 1.0    |
| 3.7442        | 8.0     | 316  | 3.7320          | 1.0    | 1.0    |
| 3.7286        | 8.9873  | 355  | 3.6978          | 1.0    | 1.0    |
| 3.6272        | 10.0    | 395  | 3.5089          | 1.0    | 1.0    |
| 3.0921        | 10.9873 | 434  | 2.6068          | 0.9992 | 0.9997 |
| 2.2556        | 12.0    | 474  | 1.6832          | 0.5880 | 0.6815 |
| 1.7791        | 12.9873 | 513  | 1.2117          | 0.3861 | 0.4433 |
| 1.2731        | 14.0    | 553  | 0.7338          | 0.1793 | 0.1505 |
| 0.9596        | 14.9873 | 592  | 0.4892          | 0.1220 | 0.1005 |
| 0.7152        | 16.0    | 632  | 0.3525          | 0.0892 | 0.0752 |
| 0.5210        | 16.9873 | 671  | 0.2843          | 0.0704 | 0.0623 |
| 0.4791        | 18.0    | 711  | 0.2351          | 0.0607 | 0.0568 |
| 0.3992        | 18.9873 | 750  | 0.2120          | 0.0547 | 0.0523 |
| 0.4245        | 19.7468 | 780  | 0.2027          | 0.0517 | 0.0499 |
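
A common way to compute the WER and CER columns is the `evaluate` library (which wraps `jiwer`); whether the original run used it is an assumption, and the predictions/references below are placeholders.

```python
# Hypothetical metric computation matching the WER/CER columns above.
# Requires: pip install evaluate jiwer
import evaluate

wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

predictions = ["decoded model output"]     # placeholder decoded transcripts
references = ["ground truth transcript"]   # placeholder reference transcripts

print("WER:", wer_metric.compute(predictions=predictions, references=references))
print("CER:", cer_metric.compute(predictions=predictions, references=references))
```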

Framework versions

  • Transformers 4.40.0
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.0
  • Tokenizers 0.19.1