---
license: apache-2.0
tags:
  - generated_from_trainer
model-index:
  - name: kids_phoneme_sm_model
    results: []
---

# kids_phoneme_sm_model

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unspecified dataset (recorded as `None` by the trainer). It achieves the following results on the evaluation set:

- Loss: 1.4558
- Cer: 0.4079
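
A minimal inference sketch is shown below. It assumes the checkpoint is published as `mirfan899/kids_phoneme_sm_model` and carries a CTC head over phoneme tokens; both the repo id and the token set are assumptions, not stated on this card.

```python
import torch
import librosa
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Hypothetical repo id, inferred from the card name; adjust to the actual checkpoint path.
MODEL_ID = "mirfan899/kids_phoneme_sm_model"

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.eval()

# wav2vec2-large-xlsr-53 checkpoints expect 16 kHz mono audio.
speech, _ = librosa.load("child_speech.wav", sr=16_000, mono=True)
inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding; whether the output is phonemes or characters depends on
# the tokenizer shipped with the checkpoint.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids)[0])
```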

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
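
A hedged sketch of how this configuration maps onto `transformers.TrainingArguments` follows. The output directory and the 500-step evaluation cadence (visible in the results table below) are filled in as assumptions rather than values recorded on the card.

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="kids_phoneme_sm_model",  # placeholder path, not recorded on the card
    learning_rate=1e-4,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=8,
    seed=42,
    num_train_epochs=50,
    lr_scheduler_type="linear",
    adam_beta1=0.9,                      # Adam settings as listed above
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",         # assumed from the 500-step cadence in the results table
    eval_steps=500,
)
```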

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Cer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.0642 | 0.74 | 500 | 4.4995 | 1.0 |
| 2.8486 | 1.48 | 1000 | 3.8639 | 1.0 |
| 2.7909 | 2.22 | 1500 | 3.4712 | 1.0 |
| 1.5475 | 2.96 | 2000 | 1.0263 | 0.6825 |
| 0.7353 | 3.7 | 2500 | 0.8291 | 0.5760 |
| 0.6036 | 4.44 | 3000 | 0.7387 | 0.5327 |
| 0.5553 | 5.19 | 3500 | 0.7382 | 0.5023 |
| 0.4271 | 5.93 | 4000 | 0.7244 | 0.4991 |
| 0.43 | 6.67 | 4500 | 0.7152 | 0.4805 |
| 0.3925 | 7.41 | 5000 | 0.7210 | 0.4587 |
| 0.3719 | 8.15 | 5500 | 0.7888 | 0.4491 |
| 0.3451 | 8.89 | 6000 | 0.7599 | 0.4433 |
| 0.319 | 9.63 | 6500 | 0.7642 | 0.4508 |
| 0.2638 | 10.37 | 7000 | 0.8490 | 0.4426 |
| 0.3084 | 11.11 | 7500 | 0.9387 | 0.4315 |
| 0.2553 | 11.85 | 8000 | 0.8477 | 0.4287 |
| 0.2537 | 12.59 | 8500 | 0.8261 | 0.4301 |
| 0.2058 | 13.33 | 9000 | 1.1093 | 0.4247 |
| 0.2283 | 14.07 | 9500 | 0.7638 | 0.4230 |
| 0.2043 | 14.81 | 10000 | 1.0104 | 0.4219 |
| 0.1918 | 15.56 | 10500 | 0.9618 | 0.4194 |
| 0.1764 | 16.3 | 11000 | 0.9460 | 0.4226 |
| 0.1677 | 17.04 | 11500 | 0.9750 | 0.4233 |
| 0.1751 | 17.78 | 12000 | 0.9600 | 0.4240 |
| 0.1465 | 18.52 | 12500 | 1.1328 | 0.4172 |
| 0.1239 | 19.26 | 13000 | 1.0746 | 0.4176 |
| 0.1495 | 20.0 | 13500 | 1.2143 | 0.4194 |
| 0.1444 | 20.74 | 14000 | 1.1595 | 0.4219 |
| 0.134 | 21.48 | 14500 | 1.1601 | 0.4201 |
| 0.1343 | 22.22 | 15000 | 1.1730 | 0.4233 |
| 0.1051 | 22.96 | 15500 | 1.1257 | 0.4172 |
| 0.1067 | 23.7 | 16000 | 1.1206 | 0.4190 |
| 0.0959 | 24.44 | 16500 | 1.1539 | 0.4133 |
| 0.1028 | 25.19 | 17000 | 1.2425 | 0.4126 |
| 0.1028 | 25.93 | 17500 | 1.2008 | 0.4144 |
| 0.1052 | 26.67 | 18000 | 1.1974 | 0.4094 |
| 0.0813 | 27.41 | 18500 | 1.0960 | 0.4133 |
| 0.0973 | 28.15 | 19000 | 1.1153 | 0.4101 |
| 0.0783 | 28.89 | 19500 | 1.1596 | 0.4126 |
| 0.0704 | 29.63 | 20000 | 1.1881 | 0.4087 |
| 0.068 | 30.37 | 20500 | 1.2289 | 0.4040 |
| 0.0664 | 31.11 | 21000 | 1.2289 | 0.4079 |
| 0.0747 | 31.85 | 21500 | 1.2642 | 0.4122 |
| 0.0663 | 32.59 | 22000 | 1.3062 | 0.4101 |
| 0.0668 | 33.33 | 22500 | 1.3486 | 0.4101 |
| 0.0592 | 34.07 | 23000 | 1.3346 | 0.4040 |
| 0.0513 | 34.81 | 23500 | 1.2958 | 0.4097 |
| 0.0511 | 35.56 | 24000 | 1.3798 | 0.4108 |
| 0.0557 | 36.3 | 24500 | 1.3521 | 0.4065 |
| 0.049 | 37.04 | 25000 | 1.4192 | 0.4094 |
| 0.0465 | 37.78 | 25500 | 1.4308 | 0.4108 |
| 0.0474 | 38.52 | 26000 | 1.4004 | 0.4058 |
| 0.0428 | 39.26 | 26500 | 1.3988 | 0.4054 |
| 0.0509 | 40.0 | 27000 | 1.4218 | 0.4069 |
| 0.0386 | 40.74 | 27500 | 1.3819 | 0.4104 |
| 0.0426 | 41.48 | 28000 | 1.4681 | 0.4090 |
| 0.0408 | 42.22 | 28500 | 1.4543 | 0.4104 |
| 0.0405 | 42.96 | 29000 | 1.4999 | 0.4108 |
| 0.036 | 43.7 | 29500 | 1.4922 | 0.4072 |
| 0.036 | 44.44 | 30000 | 1.4709 | 0.4087 |
| 0.04 | 45.19 | 30500 | 1.4858 | 0.4094 |
| 0.0343 | 45.93 | 31000 | 1.4606 | 0.4087 |
| 0.0288 | 46.67 | 31500 | 1.4599 | 0.4044 |
| 0.0454 | 47.41 | 32000 | 1.4288 | 0.4087 |
| 0.0322 | 48.15 | 32500 | 1.4589 | 0.4083 |
| 0.0327 | 48.89 | 33000 | 1.4502 | 0.4094 |
| 0.0272 | 49.63 | 33500 | 1.4558 | 0.4079 |
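
The Cer column above is the character error rate on the validation set. A small sketch of how it can be computed with the Hugging Face `evaluate` library is shown below; the phoneme strings are illustrative placeholders, not data from this model.

```python
import evaluate

cer_metric = evaluate.load("cer")

# Illustrative strings only; during evaluation the predictions come from greedy
# CTC decoding of the model logits and the references from the labelled transcripts.
predictions = ["p ə t eɪ t oʊ"]
references = ["p ə t eɪ t ə"]

print(cer_metric.compute(predictions=predictions, references=references))
```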

### Framework versions

- Transformers 4.30.1
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3