---
license: apache-2.0
tags:
  - generated_from_trainer
datasets:
  - common_voice_8_0
model-index:
  - name: xlsr_ur_training
    results: []
---

# xlsr_ur_training

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice_8_0 dataset. It achieves the following results on the evaluation set:

- Loss: 1.2610
- Wer: 0.7325
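
Below is a minimal transcription sketch with 🤗 Transformers. The repository id `hadiqa123/xlsr_ur_training` and the file name `sample_ur.wav` are assumed placeholders, not taken from this card; the audio is expected to be 16 kHz mono Urdu speech.

```python
# Hedged usage sketch: greedy CTC decoding with this fine-tuned wav2vec2 checkpoint.
# "hadiqa123/xlsr_ur_training" and "sample_ur.wav" are assumed placeholders.
import torch
import librosa
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "hadiqa123/xlsr_ur_training"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# wav2vec2-large-xlsr-53 expects 16 kHz mono input.
speech, _ = librosa.load("sample_ur.wav", sr=16_000, mono=True)

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Take the most likely token per frame; batch_decode collapses repeats and blanks.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```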

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `TrainingArguments` sketch follows the list):

- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 30
- mixed_precision_training: Native AMP
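
As a hedged illustration only, these values map roughly onto a `transformers.TrainingArguments` configuration like the one below. The model/dataset setup, data collator, and `Trainer` call are omitted, and the 100-step evaluation and logging cadence is inferred from the results table rather than stated in the card.

```python
# Hedged sketch: how the listed hyperparameters could be expressed as TrainingArguments.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="xlsr_ur_training",
    learning_rate=1e-4,             # learning_rate: 0.0001
    per_device_train_batch_size=8,  # train_batch_size: 8
    per_device_eval_batch_size=8,   # eval_batch_size: 8
    seed=42,
    warmup_steps=100,               # lr_scheduler_warmup_steps: 100
    num_train_epochs=30,
    lr_scheduler_type="linear",
    fp16=True,                      # mixed_precision_training: Native AMP
    evaluation_strategy="steps",
    eval_steps=100,                 # inferred from the 100-step cadence in the results table
    logging_steps=100,
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the TrainingArguments defaults.
)
```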

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 14.5044       | 1.69  | 100  | 3.9173          | 1.0    |
| 3.3645        | 3.39  | 200  | 3.2475          | 1.0    |
| 3.2318        | 5.08  | 300  | 3.2143          | 1.0    |
| 3.1887        | 6.78  | 400  | 3.1672          | 1.0    |
| 3.1233        | 8.47  | 500  | 3.0927          | 1.0    |
| 3.0938        | 10.17 | 600  | 3.0836          | 0.9970 |
| 3.0706        | 11.86 | 700  | 3.0319          | 0.9996 |
| 2.9622        | 13.56 | 800  | 2.7973          | 0.9985 |
| 2.6267        | 15.25 | 900  | 2.2553          | 0.9974 |
| 1.9748        | 16.95 | 1000 | 1.6858          | 0.9170 |
| 1.4739        | 18.64 | 1100 | 1.4620          | 0.8125 |
| 1.2102        | 20.34 | 1200 | 1.3890          | 0.7779 |
| 1.036         | 22.03 | 1300 | 1.3347          | 0.7672 |
| 0.9462        | 23.73 | 1400 | 1.2970          | 0.7476 |
| 0.8725        | 25.42 | 1500 | 1.2792          | 0.7461 |
| 0.8374        | 27.12 | 1600 | 1.2574          | 0.7384 |
| 0.7976        | 28.81 | 1700 | 1.2610          | 0.7325 |
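
The Wer column is word error rate. A minimal sketch of computing it with `datasets.load_metric` (the Datasets version listed below; requires the `jiwer` package) on placeholder strings, not actual model outputs:

```python
# Hedged sketch: computing word error rate (the "Wer" column) with datasets.load_metric.
# The strings below are illustrative placeholders, not transcriptions from this model.
from datasets import load_metric  # needs jiwer installed

wer_metric = load_metric("wer")
predictions = ["hypothesis transcription"]  # hypothetical model output
references = ["reference transcription"]    # hypothetical ground truth
print(wer_metric.compute(predictions=predictions, references=references))  # 0.5: one of two words substituted
```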

### Framework versions

- Transformers 4.21.0
- Pytorch 1.11.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1