---
license: apache-2.0
base_model: facebook/wav2vec2-large-xlsr-53
tags:
  - automatic-speech-recognition
  - DewiBrynJones/banc-trawsgrifiadau-bangor-clean-with-ccv
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: wav2vec2-xlsr-53-ft-btb-ccv-cy
    results: []
---

wav2vec2-xlsr-53-ft-btb-ccv-cy

This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the DewiBrynJones/banc-trawsgrifiadau-bangor-clean-with-ccv dataset (default configuration). It achieves the following results on the evaluation set (a minimal usage sketch follows the results):

  • Loss: 1.6956
  • Wer: 0.7702
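
As a rough guide, the snippet below shows one way to load this checkpoint for Welsh speech recognition with the transformers pipeline. The repository id DewiBrynJones/wav2vec2-xlsr-53-ft-btb-ccv-cy and the audio file name are assumptions for illustration, not taken from the training script.

```python
# Minimal usage sketch (assumed repo id and file name, not from the training script).
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="DewiBrynJones/wav2vec2-xlsr-53-ft-btb-ccv-cy",  # assumed repository id
)

# wav2vec2 XLSR models expect 16 kHz audio; the pipeline resamples decoded input
# to the feature extractor's sampling rate.
result = asr("sample.wav")  # hypothetical local audio file
print(result["text"])
```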

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (an illustrative TrainingArguments sketch follows the list):

  • learning_rate: 0.0003
  • train_batch_size: 64
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP
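
For readers who want to set up a similar run, this is a minimal sketch of transformers TrainingArguments mirroring the values above. It is not the authors' exact script; the output directory and the evaluation/logging intervals are assumptions inferred from the results table, and the data/model wiring is omitted.

```python
# Illustrative TrainingArguments matching the listed hyperparameters (not the original script).
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-xlsr-53-ft-btb-ccv-cy",  # assumed output path
    learning_rate=3e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,                    # "Native AMP" mixed-precision training
    evaluation_strategy="steps",  # assumed: the table below reports metrics every 100 steps
    eval_steps=100,
    logging_steps=500,            # assumed: training loss appears every 500 steps
)
# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the transformers defaults,
# so they are not set explicitly here.
```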

Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| No log        | 0.1548 | 100  | 3.5587          | 1.0    |
| No log        | 0.3096 | 200  | 3.2506          | 1.0    |
| No log        | 0.4644 | 300  | 2.7740          | 1.0000 |
| No log        | 0.6192 | 400  | 1.1196          | 0.7807 |
| 3.6484        | 0.7740 | 500  | 0.9134          | 0.6539 |
| 3.6484        | 0.9288 | 600  | 0.7675          | 0.5923 |
| 3.6484        | 1.0836 | 700  | 0.7208          | 0.5290 |
| 3.6484        | 1.2384 | 800  | 0.6209          | 0.4745 |
| 3.6484        | 1.3932 | 900  | 0.6220          | 0.4788 |
| 0.6286        | 1.5480 | 1000 | 0.5739          | 0.4588 |
| 0.6286        | 1.7028 | 1100 | 0.5642          | 0.4262 |
| 0.6286        | 1.8576 | 1200 | 0.5512          | 0.4208 |
| 0.6286        | 2.0124 | 1300 | 0.5275          | 0.3865 |
| 0.6286        | 2.1672 | 1400 | 0.4955          | 0.3755 |
| 0.4816        | 2.3220 | 1500 | 0.4909          | 0.3733 |
| 0.4816        | 2.4768 | 1600 | 0.4983          | 0.3728 |
| 0.4816        | 2.6316 | 1700 | 0.4891          | 0.3655 |
| 0.4816        | 2.7864 | 1800 | 0.4796          | 0.3571 |
| 0.4816        | 2.9412 | 1900 | 0.4643          | 0.3592 |
| 0.4017        | 3.0960 | 2000 | 0.5085          | 0.3698 |
| 0.4017        | 3.2508 | 2100 | 0.6755          | 0.4530 |
| 0.4017        | 3.4056 | 2200 | 0.7100          | 0.5108 |
| 0.4017        | 3.5604 | 2300 | 0.8311          | 0.5643 |
| 0.4017        | 3.7152 | 2400 | 0.7032          | 0.5029 |
| 0.6839        | 3.8700 | 2500 | 0.7071          | 0.5007 |
| 0.6839        | 4.0248 | 2600 | 0.8224          | 0.5069 |
| 0.6839        | 4.1796 | 2700 | 0.8344          | 0.5162 |
| 0.6839        | 4.3344 | 2800 | 0.9089          | 0.5620 |
| 0.6839        | 4.4892 | 2900 | 0.9665          | 0.5640 |
| 0.8292        | 4.6440 | 3000 | 0.9128          | 0.5415 |
| 0.8292        | 4.7988 | 3100 | 1.1925          | 0.5939 |
| 0.8292        | 4.9536 | 3200 | 1.4327          | 0.6999 |
| 0.8292        | 5.1084 | 3300 | 1.2741          | 0.7827 |
| 0.8292        | 5.2632 | 3400 | 1.9348          | 0.8742 |
| 1.4131        | 5.4180 | 3500 | 1.9216          | 0.9870 |
| 1.4131        | 5.5728 | 3600 | 1.8565          | 0.9367 |
| 1.4131        | 5.7276 | 3700 | 1.7828          | 0.8240 |
| 1.4131        | 5.8824 | 3800 | 1.6847          | 0.8059 |
| 1.4131        | 6.0372 | 3900 | 1.6440          | 0.7984 |
| 1.7728        | 6.1920 | 4000 | 1.6765          | 0.8053 |
| 1.7728        | 6.3467 | 4100 | 1.6733          | 0.8024 |
| 1.7728        | 6.5015 | 4200 | 1.6601          | 0.7900 |
| 1.7728        | 6.6563 | 4300 | 1.6605          | 0.7973 |
| 1.7728        | 6.8111 | 4400 | 1.6599          | 0.7805 |
| 1.6777        | 6.9659 | 4500 | 1.6359          | 0.7693 |
| 1.6777        | 7.1207 | 4600 | 1.6400          | 0.7651 |
| 1.6777        | 7.2755 | 4700 | 1.6759          | 0.7672 |
| 1.6777        | 7.4303 | 4800 | 1.6849          | 0.7686 |
| 1.6777        | 7.5851 | 4900 | 1.6858          | 0.7690 |
| 1.683         | 7.7399 | 5000 | 1.6956          | 0.7702 |
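
The Wer column is the word error rate on the evaluation set (lower is better). As a point of reference only, the sketch below shows how WER is commonly computed with the evaluate library; the authors' actual evaluation code may differ, and the example transcripts are invented.

```python
# WER computation sketch using the `evaluate` library (example strings are invented).
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["mae hi'n braf heddiw"]       # hypothetical model output
references = ["mae hi'n braf iawn heddiw"]   # hypothetical reference transcript

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # fraction of word-level errors, cf. 0.7702 in the final row
```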

Framework versions

  • Transformers 4.40.2
  • Pytorch 2.3.0+cu121
  • Datasets 2.19.1
  • Tokenizers 0.19.1