---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: Waynehills-STT-doogie-server
---

# Waynehills-STT-doogie-server

This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 3.9322
- Wer: 1.0368
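
As a usage illustration, the checkpoint can be loaded for CTC inference with 🤗 Transformers roughly as sketched below. This is a minimal sketch: the repository id `Doogie/Waynehills-STT-doogie-server`, the presence of saved processor files, and the 16 kHz input rate (inherited from the XLS-R base model) are assumptions, not details stated in this card.

```python
# Hypothetical inference sketch; the repo id is assumed from the model name
# and may differ from the actual Hub path.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

repo_id = "Doogie/Waynehills-STT-doogie-server"  # assumed repository id
processor = Wav2Vec2Processor.from_pretrained(repo_id)
model = Wav2Vec2ForCTC.from_pretrained(repo_id)

# Load audio and resample to 16 kHz, the rate expected by wav2vec2-xls-r-300m.
speech, sr = librosa.load("sample.wav", sr=16_000)

inputs = processor(speech, sampling_rate=sr, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(inputs.input_values).logits

pred_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(pred_ids)[0]
print(transcription)
```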

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (an illustrative `TrainingArguments` reconstruction follows the list):

- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
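
For orientation, these values map onto 🤗 `TrainingArguments` roughly as sketched below. This is a reconstruction for illustration only; settings not listed in the card (output path, evaluation and logging cadence) are assumptions, not the original training script.

```python
# Illustrative reconstruction of the hyperparameters listed above; values not in
# the card (output_dir, eval/logging cadence) are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./Waynehills-STT-doogie-server",  # assumed output path
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30,
    evaluation_strategy="steps",  # assumed: the results table logs eval every 100 steps
    eval_steps=100,
    logging_steps=100,
)
```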

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 1.9017 | 0.51 | 100 | 3.9322 | 1.0368 |
| 1.9117 | 1.01 | 200 | 3.9322 | 1.0368 |
| 1.9099 | 1.52 | 300 | 3.9322 | 1.0368 |
| 1.8933 | 2.02 | 400 | 3.9322 | 1.0368 |
| 1.8659 | 2.53 | 500 | 3.9322 | 1.0368 |
| 1.936 | 3.03 | 600 | 3.9322 | 1.0368 |
| 1.8939 | 3.54 | 700 | 3.9322 | 1.0368 |
| 1.9037 | 4.04 | 800 | 3.9322 | 1.0368 |
| 1.9076 | 4.55 | 900 | 3.9322 | 1.0368 |
| 1.9136 | 5.05 | 1000 | 3.9322 | 1.0368 |
| 1.8875 | 5.56 | 1100 | 3.9322 | 1.0368 |
| 1.9003 | 6.06 | 1200 | 3.9322 | 1.0368 |
| 1.9138 | 6.57 | 1300 | 3.9322 | 1.0368 |
| 1.8942 | 7.07 | 1400 | 3.9322 | 1.0368 |
| 1.9035 | 7.58 | 1500 | 3.9322 | 1.0368 |
| 1.9076 | 8.08 | 1600 | 3.9322 | 1.0368 |
| 1.8997 | 8.59 | 1700 | 3.9322 | 1.0368 |
| 1.8958 | 9.09 | 1800 | 3.9322 | 1.0368 |
| 1.891 | 9.6 | 1900 | 3.9322 | 1.0368 |
| 1.9245 | 10.1 | 2000 | 3.9322 | 1.0368 |
| 1.9042 | 10.61 | 2100 | 3.9322 | 1.0368 |
| 1.9153 | 11.11 | 2200 | 3.9322 | 1.0368 |
| 1.892 | 11.62 | 2300 | 3.9322 | 1.0368 |
| 1.8937 | 12.12 | 2400 | 3.9322 | 1.0368 |
| 1.9036 | 12.63 | 2500 | 3.9322 | 1.0368 |
| 1.9162 | 13.13 | 2600 | 3.9322 | 1.0368 |
| 1.9014 | 13.64 | 2700 | 3.9322 | 1.0368 |
| 1.9083 | 14.14 | 2800 | 3.9322 | 1.0368 |
| 1.9003 | 14.65 | 2900 | 3.9322 | 1.0368 |
| 1.9015 | 15.15 | 3000 | 3.9322 | 1.0368 |
| 1.8851 | 15.66 | 3100 | 3.9322 | 1.0368 |
| 1.9062 | 16.16 | 3200 | 3.9322 | 1.0368 |
| 1.9279 | 16.67 | 3300 | 3.9322 | 1.0368 |
| 1.8795 | 17.17 | 3400 | 3.9322 | 1.0368 |
| 1.9126 | 17.68 | 3500 | 3.9322 | 1.0368 |
| 1.8688 | 18.18 | 3600 | 3.9322 | 1.0368 |
| 1.9234 | 18.69 | 3700 | 3.9322 | 1.0368 |
| 1.8872 | 19.19 | 3800 | 3.9322 | 1.0368 |
| 1.9096 | 19.7 | 3900 | 3.9322 | 1.0368 |
| 1.8854 | 20.2 | 4000 | 3.9322 | 1.0368 |
| 1.9168 | 20.71 | 4100 | 3.9322 | 1.0368 |
| 1.9145 | 21.21 | 4200 | 3.9322 | 1.0368 |
| 1.904 | 21.72 | 4300 | 3.9322 | 1.0368 |
| 1.8982 | 22.22 | 4400 | 3.9322 | 1.0368 |
| 1.8978 | 22.73 | 4500 | 3.9322 | 1.0368 |
| 1.9023 | 23.23 | 4600 | 3.9322 | 1.0368 |
| 1.8901 | 23.74 | 4700 | 3.9322 | 1.0368 |
| 1.9079 | 24.24 | 4800 | 3.9322 | 1.0368 |
| 1.8923 | 24.75 | 4900 | 3.9322 | 1.0368 |
| 1.9095 | 25.25 | 5000 | 3.9322 | 1.0368 |
| 1.909 | 25.76 | 5100 | 3.9322 | 1.0368 |
| 1.8871 | 26.26 | 5200 | 3.9322 | 1.0368 |
| 1.9046 | 26.77 | 5300 | 3.9322 | 1.0368 |
| 1.8877 | 27.27 | 5400 | 3.9322 | 1.0368 |
| 1.901 | 27.78 | 5500 | 3.9322 | 1.0368 |
| 1.9045 | 28.28 | 5600 | 3.9322 | 1.0368 |
| 1.907 | 28.79 | 5700 | 3.9322 | 1.0368 |
| 1.9075 | 29.29 | 5800 | 3.9322 | 1.0368 |
| 1.895 | 29.8 | 5900 | 3.9322 | 1.0368 |
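
The word error rate reported above can exceed 1.0 because insertions count as errors alongside substitutions and deletions. Below is a minimal sketch of how the metric is typically computed with `datasets.load_metric` (available in the Datasets version listed under Framework versions); the example strings are made up.

```python
# Hedged sketch of the WER computation: datasets.load_metric("wer") (backed by
# jiwer) returns (substitutions + deletions + insertions) divided by the number
# of reference words, so values above 1.0 are possible.
from datasets import load_metric

wer_metric = load_metric("wer")

references = ["the cat sat on the mat"]          # ground-truth transcripts
predictions = ["cat sat on on the the mat mat"]  # hypothetical model output

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```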

### Framework versions

- Transformers 4.12.5
- Pytorch 1.10.0+cu113
- Datasets 1.17.0
- Tokenizers 0.10.3
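
To check that a local environment matches these pins, a quick convenience sketch (not part of the original card):

```python
# Print installed versions to compare against the ones listed above.
import transformers, torch, datasets, tokenizers

print("Transformers:", transformers.__version__)  # card: 4.12.5
print("PyTorch:", torch.__version__)              # card: 1.10.0+cu113
print("Datasets:", datasets.__version__)          # card: 1.17.0
print("Tokenizers:", tokenizers.__version__)      # card: 0.10.3
```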