
hubert-large-ls960-ft-V2

This model is a fine-tuned version of facebook/hubert-large-ls960-ft on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3760
  • WER: 0.0445
  • PER: 0.0354
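Both reported metrics are edit-distance-based error rates: WER (word error rate) over words and PER (phoneme error rate) over phonemes. The card does not state which evaluation tooling was used; as an illustration only, a minimal pure-Python sketch of the standard WER computation:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two token sequences (classic DP, one row)."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev = dp[0]       # dp value for (i-1, j-1)
        dp[0] = i
        for j, h in enumerate(hyp, 1):
            cur = dp[j]
            # deletion, insertion, or substitution/match
            dp[j] = min(dp[j] + 1, dp[j - 1] + 1, prev + (r != h))
            prev = cur
    return dp[-1]

def wer(reference, hypothesis):
    """Word error rate: edit distance over words, normalized by reference length."""
    ref = reference.split()
    return edit_distance(ref, hypothesis.split()) / len(ref)

print(wer("the cat sat", "the bat sat"))  # one substitution out of three words
```

PER is computed the same way, with phoneme sequences in place of word sequences.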

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 0.0001
  • train_batch_size: 2
  • eval_batch_size: 2
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 20
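With `lr_scheduler_type: linear` and `lr_scheduler_warmup_ratio: 0.1`, the learning rate ramps linearly from 0 to 1e-4 over the first 10% of training steps, then decays linearly back to 0. A minimal sketch of that schedule (pure Python, mirroring the behavior of the Transformers linear scheduler; the step counts are taken from the results table below, where 20 epochs end at step 32740):

```python
def linear_lr(step, total_steps, base_lr=1e-4, warmup_ratio=0.1):
    """Linear warmup for the first warmup_ratio of steps, then linear decay to 0."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

total_steps = 32740                      # 1637 steps/epoch x 20 epochs
print(linear_lr(0, total_steps))         # start of warmup: 0.0
print(linear_lr(3274, total_steps))      # peak at end of warmup: 1e-4
print(linear_lr(32740, total_steps))     # end of training: 0.0
```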

Training results

| Training Loss | Epoch | Step  | Validation Loss | WER    | PER    |
|---------------|-------|-------|-----------------|--------|--------|
| 5.7632        | 1.0   | 1637  | 0.8273          | 0.3876 | 0.3794 |
| 0.374         | 2.0   | 3274  | 0.4468          | 0.0635 | 0.0493 |
| 0.1657        | 3.0   | 4911  | 0.3662          | 0.0596 | 0.0460 |
| 0.1179        | 4.0   | 6548  | 0.3809          | 0.0518 | 0.0401 |
| 0.0922        | 5.0   | 8185  | 0.4233          | 0.0519 | 0.0386 |
| 0.0763        | 6.0   | 9822  | 0.3852          | 0.0517 | 0.0390 |
| 0.0649        | 7.0   | 11459 | 0.3056          | 0.0487 | 0.0377 |
| 0.056         | 8.0   | 13096 | 0.3266          | 0.0460 | 0.0353 |
| 0.0467        | 9.0   | 14733 | 0.3608          | 0.0458 | 0.0346 |
| 0.04          | 10.0  | 16370 | 0.3399          | 0.0461 | 0.0355 |
| 0.0356        | 11.0  | 18007 | 0.3549          | 0.0441 | 0.0341 |
| 0.0336        | 12.0  | 19644 | 0.3391          | 0.0430 | 0.0332 |
| 0.0338        | 13.0  | 21281 | 0.3968          | 0.0482 | 0.0383 |
| 0.0278        | 14.0  | 22918 | 0.3699          | 0.0468 | 0.0376 |
| 0.0224        | 15.0  | 24555 | 0.3680          | 0.0458 | 0.0365 |
| 0.0225        | 16.0  | 26192 | 0.3713          | 0.0477 | 0.0392 |
| 0.0189        | 17.0  | 27829 | 0.3681          | 0.0442 | 0.0350 |
| 0.0185        | 18.0  | 29466 | 0.3829          | 0.0444 | 0.0352 |
| 0.018         | 19.0  | 31103 | 0.3866          | 0.0450 | 0.0354 |
| 0.0169        | 20.0  | 32740 | 0.3760          | 0.0445 | 0.0354 |
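Note that validation WER bottoms out at epoch 12 (0.0430) rather than at the final epoch (0.0445), so the last checkpoint is not the best one by this metric. A quick sketch of picking the best epoch from the table (values transcribed from above; only the later epochs are shown):

```python
# Validation WER by epoch, transcribed from the results table (subset).
wer_by_epoch = {
    11: 0.0441, 12: 0.0430, 13: 0.0482, 14: 0.0468,
    17: 0.0442, 18: 0.0444, 19: 0.0450, 20: 0.0445,
}
best_epoch = min(wer_by_epoch, key=wer_by_epoch.get)
print(best_epoch, wer_by_epoch[best_epoch])  # epoch 12, WER 0.0430

# Sanity check: step counts in the table imply 1637 optimizer steps per epoch.
assert 1637 * 20 == 32740
```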

Framework versions

  • Transformers 4.35.2
  • Pytorch 2.1.0+cu121
  • Datasets 2.16.1
  • Tokenizers 0.15.0
Model size: 315M parameters (Safetensors, F32 tensors)