
hubert-base-libri-demo-feature_extractor_frozen

This model is a fine-tuned version of facebook/hubert-base-ls960 (the fine-tuning dataset is not recorded in this card). It achieves the following results on the evaluation set:

  • Loss: 0.1217
  • WER: 0.3031
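
For reference, a word error rate like the one above can be computed with the Hugging Face evaluate library. This is a generic sketch of the metric, not the author's evaluation script; the example strings are hypothetical.

```python
# Minimal sketch of how a WER figure is computed (generic metric code,
# not this card's evaluation pipeline).
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["the cat sat on the mat"]  # hypothetical model transcripts
references = ["the cat sat on a mat"]     # hypothetical ground truth

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # one substitution over six reference words ≈ 0.1667
```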

Model description

More information needed

Intended uses & limitations

More information needed
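
Although the card does not document intended uses, the WER metric and the HuBERT base model suggest a CTC speech-recognition checkpoint. A minimal, hypothetical inference sketch follows; the repository id and the availability of a saved processor are assumptions.

```python
# Hypothetical inference sketch: assumes this checkpoint is a CTC ASR model
# with a processor saved alongside it.
import numpy as np
import torch
from transformers import AutoProcessor, HubertForCTC

model_id = "hubert-base-libri-demo-feature_extractor_frozen"  # assumed repo id
processor = AutoProcessor.from_pretrained(model_id)
model = HubertForCTC.from_pretrained(model_id)

# Stand-in input: one second of 16 kHz silence; replace with a real waveform.
speech = np.zeros(16_000, dtype=np.float32)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)[0]
print(transcription)
```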

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

  • learning_rate: 0.00015
  • train_batch_size: 64
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 3000
  • num_epochs: 25
  • mixed_precision_training: Native AMP
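
Expressed as Trainer configuration, these values correspond roughly to the sketch below. Only the values listed above are taken from this card; the output directory and the surrounding model, dataset, and Trainer wiring are assumptions and are omitted.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters above.
training_args = TrainingArguments(
    output_dir="hubert-base-libri-demo-feature_extractor_frozen",  # assumed
    learning_rate=1.5e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,            # Adam betas/epsilon as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=3000,
    num_train_epochs=25,
    fp16=True,                 # "Native AMP" mixed-precision training
)
```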

Training results

| Training Loss | Epoch | Step  | Validation Loss | WER    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.446         | 1.12  | 500   | 3.5285          | 0.9999 |
| 2.8792        | 2.24  | 1000  | 3.2540          | 0.9999 |
| 2.8504        | 3.36  | 1500  | 3.0608          | 0.9999 |
| 1.2084        | 4.48  | 2000  | 0.3020          | 0.6763 |
| 0.3281        | 5.61  | 2500  | 0.1665          | 0.4887 |
| 0.2158        | 6.73  | 3000  | 0.1389          | 0.4071 |
| 0.179         | 7.85  | 3500  | 0.1294          | 0.3710 |
| 0.163         | 8.97  | 4000  | 0.1199          | 0.3592 |
| 0.1461        | 10.09 | 4500  | 0.1177          | 0.3485 |
| 0.1138        | 11.21 | 5000  | 0.1120          | 0.3374 |
| 0.1154        | 12.33 | 5500  | 0.1215          | 0.3307 |
| 0.0919        | 13.45 | 6000  | 0.1182          | 0.3279 |
| 0.0911        | 14.57 | 6500  | 0.1194          | 0.3286 |
| 0.0856        | 15.7  | 7000  | 0.1163          | 0.3185 |
| 0.0786        | 16.82 | 7500  | 0.1154          | 0.3193 |
| 0.0738        | 17.94 | 8000  | 0.1124          | 0.3122 |
| 0.0738        | 19.06 | 8500  | 0.1185          | 0.3105 |
| 0.0767        | 20.18 | 9000  | 0.1208          | 0.3061 |
| 0.0664        | 21.3  | 9500  | 0.1211          | 0.3050 |
| 0.0654        | 22.42 | 10000 | 0.1189          | 0.3039 |
| 0.0606        | 23.54 | 10500 | 0.1235          | 0.3041 |
| 0.0584        | 24.66 | 11000 | 0.1217          | 0.3031 |

Framework versions

  • Transformers 4.30.0.dev0
  • Pytorch 2.0.1
  • Datasets 2.12.1.dev0
  • Tokenizers 0.13.3