# hubert-base-libri-demo-feature_extractor_not_frozen_v1_55epochs_weight_decay
This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.1581
- Wer: 0.1050
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00015
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 6000
- num_epochs: 55
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.7881        | 1.12  | 500   | 3.5427          | 0.9837 |
| 2.9014        | 2.24  | 1000  | 3.1265          | 0.9837 |
| 2.5896        | 3.36  | 1500  | 1.2285          | 0.8156 |
| 0.8572        | 4.48  | 2000  | 0.4080          | 0.4125 |
| 0.448         | 5.61  | 2500  | 0.2272          | 0.2605 |
| 0.2903        | 6.73  | 3000  | 0.1640          | 0.1932 |
| 0.2293        | 7.85  | 3500  | 0.1383          | 0.1656 |
| 0.2072        | 8.97  | 4000  | 0.1230          | 0.1503 |
| 0.1843        | 10.09 | 4500  | 0.1184          | 0.1409 |
| 0.145         | 11.21 | 5000  | 0.1159          | 0.1350 |
| 0.1477        | 12.33 | 5500  | 0.1296          | 0.1314 |
| 0.1186        | 13.45 | 6000  | 0.1282          | 0.1310 |
| 0.1181        | 14.57 | 6500  | 0.1172          | 0.1255 |
| 0.1102        | 15.7  | 7000  | 0.1181          | 0.1250 |
| 0.0976        | 16.82 | 7500  | 0.1200          | 0.1218 |
| 0.0916        | 17.94 | 8000  | 0.1204          | 0.1208 |
| 0.0908        | 19.06 | 8500  | 0.1247          | 0.1206 |
| 0.0928        | 20.18 | 9000  | 0.1202          | 0.1173 |
| 0.0808        | 21.3  | 9500  | 0.1234          | 0.1158 |
| 0.0785        | 22.42 | 10000 | 0.1256          | 0.1145 |
| 0.0732        | 23.54 | 10500 | 0.1265          | 0.1137 |
| 0.0684        | 24.66 | 11000 | 0.1230          | 0.1138 |
| 0.0748        | 25.78 | 11500 | 0.1279          | 0.1167 |
| 0.0612        | 26.91 | 12000 | 0.1354          | 0.1136 |
| 0.0679        | 28.03 | 12500 | 0.1420          | 0.1131 |
| 0.0611        | 29.15 | 13000 | 0.1347          | 0.1123 |
| 0.0589        | 30.27 | 13500 | 0.1323          | 0.1130 |
| 0.0569        | 31.39 | 14000 | 0.1367          | 0.1122 |
| 0.0549        | 32.51 | 14500 | 0.1427          | 0.1110 |
| 0.0525        | 33.63 | 15000 | 0.1397          | 0.1104 |
| 0.0489        | 34.75 | 15500 | 0.1409          | 0.1097 |
| 0.0502        | 35.87 | 16000 | 0.1391          | 0.1095 |
| 0.0626        | 37.0  | 16500 | 0.1405          | 0.1083 |
| 0.0453        | 38.12 | 17000 | 0.1507          | 0.1094 |
| 0.0527        | 39.24 | 17500 | 0.1468          | 0.1089 |
| 0.0552        | 40.36 | 18000 | 0.1408          | 0.1078 |
| 0.0427        | 41.48 | 18500 | 0.1504          | 0.1073 |
| 0.0468        | 42.6  | 19000 | 0.1536          | 0.1071 |
| 0.0444        | 43.72 | 19500 | 0.1502          | 0.1071 |
| 0.0396        | 44.84 | 20000 | 0.1513          | 0.1073 |
| 0.0444        | 45.96 | 20500 | 0.1552          | 0.1062 |
| 0.0397        | 47.09 | 21000 | 0.1591          | 0.1061 |
| 0.0415        | 48.21 | 21500 | 0.1568          | 0.1055 |
| 0.0389        | 49.33 | 22000 | 0.1569          | 0.1055 |
| 0.0361        | 50.45 | 22500 | 0.1599          | 0.1053 |
| 0.0345        | 51.57 | 23000 | 0.1562          | 0.1051 |
| 0.0346        | 52.69 | 23500 | 0.1566          | 0.1048 |
| 0.0312        | 53.81 | 24000 | 0.1600          | 0.1050 |
| 0.0336        | 54.93 | 24500 | 0.1581          | 0.1050 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1
- Datasets 2.12.1.dev0
- Tokenizers 0.13.3