---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-common_voice-tr-demo-dist
  results: []
---

# wav2vec2-common_voice-tr-demo-dist

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset. It achieves the following results on the evaluation set:

- Loss: 0.3893
- Wer: 0.3238
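
For quick inference, the model can be loaded with the 🤗 Transformers `pipeline` API. The sketch below assumes the hub repository id `cromz22/wav2vec2-common_voice-tr-demo-dist` and a local 16 kHz audio file `sample.wav`; adjust both to your setup.

```python
from transformers import pipeline

# Assumed repository id; replace with the actual model path if it differs.
asr = pipeline(
    "automatic-speech-recognition",
    model="cromz22/wav2vec2-common_voice-tr-demo-dist",
)

# wav2vec2 XLSR models expect 16 kHz mono audio.
result = asr("sample.wav")
print(result["text"])
```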

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):

- learning_rate: 0.0003
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 16
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15.0
- mixed_precision_training: Native AMP
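
As a rough sketch (not the exact launch command), the settings above map onto 🤗 Transformers `TrainingArguments` along these lines; the output directory is a placeholder, and the run was distributed over 4 GPUs, which is how per-device batch sizes of 4/8 yield total batch sizes of 16/32:

```python
from transformers import TrainingArguments

# Sketch only: Adam betas/epsilon are the Trainer defaults (0.9, 0.999, 1e-08),
# and fp16=True corresponds to native AMP mixed-precision training.
training_args = TrainingArguments(
    output_dir="wav2vec2-common_voice-tr-demo-dist",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=4,   # x4 GPUs -> total train batch size 16
    per_device_eval_batch_size=8,    # x4 GPUs -> total eval batch size 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=15.0,
    fp16=True,
)
```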

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5279        | 0.46  | 100  | 3.6260          | 1.0    |
| 3.1065        | 0.92  | 200  | 3.0854          | 0.9999 |
| 1.4111        | 1.38  | 300  | 1.3343          | 0.8839 |
| 0.8468        | 1.83  | 400  | 0.6920          | 0.6826 |
| 0.6242        | 2.29  | 500  | 0.6001          | 0.5996 |
| 0.4181        | 2.75  | 600  | 0.5655          | 0.5680 |
| 0.4311        | 3.21  | 700  | 0.4478          | 0.5003 |
| 0.3601        | 3.67  | 800  | 0.4548          | 0.5011 |
| 0.2756        | 4.13  | 900  | 0.4444          | 0.4682 |
| 0.2373        | 4.59  | 1000 | 0.4111          | 0.4432 |
| 0.1831        | 5.05  | 1100 | 0.4178          | 0.4447 |
| 0.2423        | 5.5   | 1200 | 0.3881          | 0.4277 |
| 0.2128        | 5.96  | 1300 | 0.3865          | 0.4018 |
| 0.1256        | 6.42  | 1400 | 0.3818          | 0.4137 |
| 0.1038        | 6.88  | 1500 | 0.3739          | 0.3942 |
| 0.1662        | 7.34  | 1600 | 0.3938          | 0.3929 |
| 0.198         | 7.8   | 1700 | 0.3831          | 0.3837 |
| 0.0728        | 8.26  | 1800 | 0.3910          | 0.3867 |
| 0.123         | 8.72  | 1900 | 0.3722          | 0.3735 |
| 0.0776        | 9.17  | 2000 | 0.3938          | 0.3725 |
| 0.1597        | 9.63  | 2100 | 0.3786          | 0.3697 |
| 0.1124        | 10.09 | 2200 | 0.3947          | 0.3590 |
| 0.0965        | 10.55 | 2300 | 0.3952          | 0.3562 |
| 0.0612        | 11.01 | 2400 | 0.3810          | 0.3476 |
| 0.0764        | 11.47 | 2500 | 0.3734          | 0.3507 |
| 0.0973        | 11.93 | 2600 | 0.3935          | 0.3472 |
| 0.0649        | 12.39 | 2700 | 0.3672          | 0.3413 |
| 0.0542        | 12.84 | 2800 | 0.3732          | 0.3369 |
| 0.087         | 13.3  | 2900 | 0.3833          | 0.3458 |
| 0.0196        | 13.76 | 3000 | 0.3761          | 0.3303 |
| 0.0548        | 14.22 | 3100 | 0.3855          | 0.3274 |
| 0.0577        | 14.68 | 3200 | 0.3893          | 0.3238 |

### Framework versions

- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1