
hubert-base-libri-demo-feature_extractor_not_frozen_v1_30epochs_weight_decay

This model is a fine-tuned version of facebook/hubert-base-ls960 (the fine-tuning dataset is not recorded in this card). It achieves the following results on the evaluation set (a usage sketch follows this list):

  • Loss: 3.8346
  • Wer: 1.0000
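
Since the card reports a Wer metric, the checkpoint was presumably trained for CTC-based speech recognition. Below is a minimal inference sketch under that assumption; the repo id is taken from the card title and may need adjusting, and the silent placeholder audio stands in for real 16 kHz speech.

```python
import numpy as np
import torch
from transformers import HubertForCTC, Wav2Vec2Processor

# Assumed repo id, copied from the card title.
repo_id = "hubert-base-libri-demo-feature_extractor_not_frozen_v1_30epochs_weight_decay"

processor = Wav2Vec2Processor.from_pretrained(repo_id)
model = HubertForCTC.from_pretrained(repo_id)

# Placeholder input: 1 second of silence at 16 kHz (HuBERT's expected sample rate).
speech = np.zeros(16_000, dtype=np.float32)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: pick the most likely token per frame, then collapse.
pred_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(pred_ids)
print(transcription)
```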

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows this list):

  • learning_rate: 0.00015
  • train_batch_size: 64
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 3000
  • num_epochs: 30
  • mixed_precision_training: Native AMP
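
A hedged sketch of the `TrainingArguments` these values imply. Only the listed numbers come from this card; the output directory is hypothetical, and `train_batch_size` is assumed to be the per-device batch size with no gradient accumulation.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hubert-base-libri-demo",  # hypothetical path
    learning_rate=1.5e-4,
    per_device_train_batch_size=64,       # assumes no gradient accumulation
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=3000,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed-precision training
)
```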

Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.3231        | 1.12  | 500   | 3.4739          | 1.0000 |
| 2.879         | 2.24  | 1000  | 3.5811          | 1.0000 |
| 2.8709        | 3.36  | 1500  | 3.9899          | 1.0000 |
| 2.871         | 4.48  | 2000  | 4.0805          | 1.0000 |
| 2.8706        | 5.61  | 2500  | 3.8464          | 1.0000 |
| 2.8672        | 6.73  | 3000  | 3.7938          | 1.0000 |
| 2.8709        | 7.85  | 3500  | 3.8578          | 1.0000 |
| 2.8708        | 8.97  | 4000  | 3.7691          | 1.0000 |
| 2.8678        | 10.09 | 4500  | 3.8619          | 1.0000 |
| 2.8662        | 11.21 | 5000  | 3.8804          | 1.0000 |
| 2.8664        | 12.33 | 5500  | 3.8169          | 1.0000 |
| 2.8662        | 13.45 | 6000  | 3.6758          | 1.0000 |
| 2.8654        | 14.57 | 6500  | 3.7314          | 1.0000 |
| 2.8658        | 15.7  | 7000  | 3.8113          | 1.0000 |
| 2.8647        | 16.82 | 7500  | 3.8938          | 1.0000 |
| 2.8653        | 17.94 | 8000  | 3.9268          | 1.0000 |
| 2.8652        | 19.06 | 8500  | 3.9288          | 1.0000 |
| 2.8649        | 20.18 | 9000  | 3.9164          | 1.0000 |
| 2.8654        | 21.3  | 9500  | 3.8781          | 1.0000 |
| 2.8652        | 22.42 | 10000 | 3.8628          | 1.0000 |
| 2.8642        | 23.54 | 10500 | 3.8646          | 1.0000 |
| 2.8655        | 24.66 | 11000 | 3.8467          | 1.0000 |
| 2.8658        | 25.78 | 11500 | 3.8157          | 1.0000 |
| 2.8647        | 26.91 | 12000 | 3.8274          | 1.0000 |
| 2.8648        | 28.03 | 12500 | 3.8022          | 1.0000 |
| 2.8644        | 29.15 | 13000 | 3.8346          | 1.0000 |
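
For reference, the Wer column is typically computed with the `evaluate` library (which wraps `jiwer` for this metric). A small sketch with illustrative strings; a WER of 1.0000, as in the table above, means no words in the references were recognized correctly.

```python
import evaluate  # requires: pip install evaluate jiwer

wer_metric = evaluate.load("wer")
wer = wer_metric.compute(
    predictions=["a placeholder hypothesis"],  # illustrative only
    references=["a placeholder reference"],
)
print(f"WER: {wer:.4f}")
```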

Framework versions

  • Transformers 4.30.0.dev0
  • Pytorch 2.0.1
  • Datasets 2.12.1.dev0
  • Tokenizers 0.13.3