# Hubert-kakeiken-W-clean
This model is a fine-tuned version of [rinna/japanese-hubert-base](https://huggingface.co/rinna/japanese-hubert-base) on the ORIGINAL_KAKEIKEN_W_CLEAN - JA dataset. It achieves the following results on the evaluation set:
- Loss: 0.0018
- Wer: 0.9990
- Cer: 1.0124
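
Both WER and CER are edit-distance ratios against the reference length, so values above 1.0 are possible when insertion errors outnumber reference units, as with the CER here. For reference, scores of this kind are typically computed with the `evaluate` library; below is a minimal sketch, where the prediction/reference strings are placeholders rather than data from this model:

```python
import evaluate

# Load the standard WER/CER metrics from the evaluate library.
wer_metric = evaluate.load("wer")
cer_metric = evaluate.load("cer")

# Placeholder strings for illustration only.
predictions = ["こんにちは 世界"]
references = ["こんにちは 世界 です"]

# Both metrics are (substitutions + deletions + insertions) / reference length,
# computed over words for WER and over characters for CER.
print(wer_metric.compute(predictions=predictions, references=references))
print(cer_metric.compute(predictions=predictions, references=references))
```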
## Model description
More information needed
## Intended uses & limitations
More information needed
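
Since the base model is HuBERT and the evaluation reports WER/CER, the checkpoint is presumably a CTC speech-recognition model. The following is a minimal transcription sketch under that assumption; the audio file name is a placeholder, and the processor class is assumed to follow the usual Wav2Vec2-style setup:

```python
import torch
import librosa
from transformers import HubertForCTC, Wav2Vec2Processor

model_id = "utakumi/Hubert-kakeiken-W-clean"

# Assumes the checkpoint ships a CTC head and a Wav2Vec2-style processor.
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = HubertForCTC.from_pretrained(model_id)
model.eval()

# "sample.wav" is a placeholder; HuBERT expects 16 kHz mono audio.
speech, _ = librosa.load("sample.wav", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Greedy CTC decoding: argmax per frame, then collapse repeats and blanks.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```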
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 12500
- num_epochs: 40.0
- mixed_precision_training: Native AMP
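
For readers who want to reproduce this configuration, the list above maps onto `transformers.TrainingArguments` roughly as follows. This is a sketch of the arguments only; the output path and the surrounding data/model wiring are assumptions, not taken from this card:

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the listed hyperparameters.
# output_dir is a placeholder.
training_args = TrainingArguments(
    output_dir="Hubert-kakeiken-W-clean",
    learning_rate=3e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 32 * 2 = 64
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="cosine",
    warmup_steps=12500,
    num_train_epochs=40.0,
    fp16=True,  # native AMP mixed-precision training
)
```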
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer |
|:---|:---|:---|:---|:---|:---|
20.8804 | 1.0 | 820 | 8.9051 | 1.0 | 1.1284 |
7.4869 | 2.0 | 1640 | 6.3826 | 1.0 | 1.1284 |
5.9007 | 3.0 | 2460 | 3.9252 | 1.0 | 1.1284 |
3.4438 | 4.0 | 3280 | 2.8383 | 1.0 | 1.1284 |
2.5168 | 5.0 | 4100 | 2.2606 | 1.0 | 1.1284 |
2.0175 | 6.0 | 4920 | 0.8359 | 1.0 | 1.0241 |
0.6013 | 7.0 | 5740 | 0.3640 | 0.9996 | 1.0556 |
0.2807 | 8.0 | 6560 | 0.1356 | 0.9991 | 1.0265 |
0.2171 | 9.0 | 7380 | 0.0684 | 0.9991 | 1.0165 |
0.1741 | 10.0 | 8200 | 0.0652 | 0.9991 | 1.0215 |
0.1395 | 11.0 | 9020 | 0.0215 | 0.9990 | 1.0140 |
0.135 | 12.0 | 9840 | 0.0659 | 0.9990 | 1.0144 |
0.1265 | 13.0 | 10660 | 0.0098 | 0.9991 | 1.0137 |
0.1244 | 14.0 | 11480 | 0.0161 | 0.9990 | 1.0142 |
0.1224 | 15.0 | 12300 | 0.0118 | 0.9990 | 1.0142 |
0.1143 | 16.0 | 13120 | 0.0063 | 0.9990 | 1.0132 |
0.1147 | 17.0 | 13940 | 0.0357 | 0.9988 | 1.0202 |
0.1168 | 18.0 | 14760 | 0.0061 | 0.9990 | 1.0132 |
0.0993 | 19.0 | 15580 | 0.0030 | 0.9990 | 1.0129 |
0.0982 | 20.0 | 16400 | 0.0092 | 0.9991 | 1.0131 |
0.0932 | 21.0 | 17220 | 0.0033 | 0.9990 | 1.0130 |
0.0878 | 22.0 | 18040 | 0.0063 | 0.9988 | 1.0134 |
0.0921 | 23.0 | 18860 | 0.0036 | 0.9990 | 1.0128 |
0.0802 | 24.0 | 19680 | 0.0022 | 0.9990 | 1.0127 |
0.0807 | 25.0 | 20500 | 0.0058 | 0.9993 | 1.0142 |
0.0757 | 26.0 | 21320 | 0.0103 | 0.9990 | 1.0142 |
0.071 | 27.0 | 22140 | 0.0040 | 0.9991 | 1.0128 |
0.0726 | 28.0 | 22960 | 0.0029 | 0.9990 | 1.0127 |
0.0625 | 29.0 | 23780 | 0.0053 | 0.9988 | 1.0135 |
0.0584 | 30.0 | 24600 | 0.0038 | 0.9988 | 1.0127 |
0.0612 | 31.0 | 25420 | 0.0029 | 0.9988 | 1.0130 |
0.057 | 32.0 | 26240 | 0.0047 | 0.9990 | 1.0128 |
0.0527 | 33.0 | 27060 | 0.0017 | 0.9990 | 1.0125 |
0.0491 | 34.0 | 27880 | 0.0017 | 0.9990 | 1.0124 |
0.0501 | 35.0 | 28700 | 0.0020 | 0.9990 | 1.0126 |
0.0458 | 36.0 | 29520 | 0.0019 | 0.9990 | 1.0124 |
0.0432 | 37.0 | 30340 | 0.0018 | 0.9990 | 1.0124 |
0.0451 | 38.0 | 31160 | 0.0018 | 0.9990 | 1.0124 |
0.0432 | 39.0 | 31980 | 0.0019 | 0.9991 | 1.0125 |
0.0453 | 39.9518 | 32760 | 0.0018 | 0.9991 | 1.0125 |
### Framework versions
- Transformers 4.48.0
- Pytorch 2.5.1+cu124
- Datasets 3.1.0
- Tokenizers 0.21.0