
hubert-base-libri-pruning-TEST6

This model was trained from scratch on an unspecified dataset. It achieves the following results on the evaluation set (a sketch of how such a WER can be reproduced follows the list):

  • Loss: -0.1778
  • WER: 0.1113
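
For reference, here is a minimal sketch of how a WER of this kind can be computed with the Hugging Face evaluate library. The checkpoint id and the evaluation data below are placeholders, and the assumption that this checkpoint is a HuBERT CTC model for 16 kHz mono speech is inferred from the model name and the WER metric, not stated in the card:

```python
import torch
import evaluate
from transformers import AutoProcessor, HubertForCTC

# Placeholder checkpoint id; substitute the actual Hub repo or local path.
ckpt = "hubert-base-libri-pruning-TEST6"
processor = AutoProcessor.from_pretrained(ckpt)
model = HubertForCTC.from_pretrained(ckpt).eval()

wer_metric = evaluate.load("wer")

def transcribe(waveform):
    # `waveform` is a 1-D float array of 16 kHz mono audio (assumption).
    inputs = processor(waveform, sampling_rate=16_000, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    pred_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(pred_ids)[0]

# Placeholder: fill with (waveform, reference_text) pairs from the eval set.
eval_set = []

predictions, references = [], []
for waveform, reference in eval_set:
    predictions.append(transcribe(waveform))
    references.append(reference)

print(wer_metric.compute(predictions=predictions, references=references))
```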

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent TrainingArguments follows the list):

  • learning_rate: 0.00015
  • train_batch_size: 64
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 3000
  • num_epochs: 30
  • mixed_precision_training: Native AMP
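
These settings map onto transformers.TrainingArguments roughly as follows. This is a hedged sketch assuming a standard Trainer setup; the output directory is a placeholder, and the Adam betas/epsilon shown are the values listed above:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="hubert-base-libri-pruning-TEST6",  # placeholder
    learning_rate=1.5e-4,
    # If training used several devices or gradient accumulation,
    # 64 may instead be the *total* train batch size.
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=3000,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed-precision training
)
```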

Training results

| Training Loss | Epoch | Step  | Validation Loss | WER    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.0811        | 1.12  | 500   | 0.1186          | 0.1113 |
| 0.0736        | 2.24  | 1000  | 0.1194          | 0.1114 |
| 0.0721        | 3.36  | 1500  | 0.1197          | 0.1115 |
| 0.0714        | 4.48  | 2000  | 0.1127          | 0.1114 |
| 0.045         | 5.61  | 2500  | 0.0819          | 0.1114 |
| 0.011         | 6.73  | 3000  | 0.0554          | 0.1113 |
| -0.0114       | 7.85  | 3500  | 0.0316          | 0.1112 |
| -0.0312       | 8.97  | 4000  | 0.0121          | 0.1114 |
| -0.0488       | 10.09 | 4500  | -0.0078         | 0.1115 |
| -0.0767       | 11.21 | 5000  | -0.0271         | 0.1113 |
| -0.0882       | 12.33 | 5500  | -0.0439         | 0.1112 |
| -0.1142       | 13.45 | 6000  | -0.0604         | 0.1114 |
| -0.1255       | 14.57 | 6500  | -0.0751         | 0.1113 |
| -0.1383       | 15.7  | 7000  | -0.0885         | 0.1115 |
| -0.1518       | 16.82 | 7500  | -0.1019         | 0.1111 |
| -0.1646       | 17.94 | 8000  | -0.1137         | 0.1114 |
| -0.1723       | 19.06 | 8500  | -0.1247         | 0.1114 |
| -0.178        | 20.18 | 9000  | -0.1343         | 0.1113 |
| -0.1926       | 21.3  | 9500  | -0.1432         | 0.1114 |
| -0.2006       | 22.42 | 10000 | -0.1507         | 0.1114 |
| -0.2029       | 23.54 | 10500 | -0.1581         | 0.1113 |
| -0.2081       | 24.66 | 11000 | -0.1645         | 0.1112 |
| -0.2054       | 25.78 | 11500 | -0.1698         | 0.1111 |
| -0.2153       | 26.91 | 12000 | -0.1738         | 0.1112 |
| -0.2111       | 28.03 | 12500 | -0.1764         | 0.1112 |
| -0.2175       | 29.15 | 13000 | -0.1778         | 0.1113 |
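
Note that the training loss turns negative around epoch 8 and the validation loss around epoch 10, while the WER stays essentially flat near 0.111. The card does not explain this, but it would be consistent with a pruning objective that subtracts a sparsity regularizer or reward from the task loss, so the absolute loss values should be read as objective-specific rather than as a plain CTC loss. This reading is an assumption based on "pruning" in the model name.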

Framework versions

  • Transformers 4.30.0.dev0
  • Pytorch 2.0.1
  • Datasets 2.12.1.dev0
  • Tokenizers 0.13.3
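
To confirm a matching environment, the installed versions can be checked directly. A minimal sketch; note that the two .dev0 versions are development builds and may require installing from source:

```python
import datasets
import tokenizers
import torch
import transformers

# Expected versions from the list above.
print(transformers.__version__)  # 4.30.0.dev0
print(torch.__version__)         # 2.0.1
print(datasets.__version__)      # 2.12.1.dev0
print(tokenizers.__version__)    # 0.13.3
```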