---
license: apache-2.0
tags:
- automatic-speech-recognition
- techiaith/banc-trawsgrifiadau-bangor
- generated_from_trainer
datasets:
- banc-trawsgrifiadau-bangor
metrics:
- wer
model-index:
- name: wav2vec2-xlsr-ft-btb
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: TECHIAITH/BANC-TRAWSGRIFIADAU-BANGOR - NA
      type: banc-trawsgrifiadau-bangor
      config: default
      split: test
      args: 'Config: na, Training split: train, Eval split: test'
    metrics:
    - name: Wer
      type: wer
      value: 0.3262315072590479
language:
- cy
pipeline_tag: automatic-speech-recognition
---

# wav2vec2-xlsr-ft-cy-verbatim

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the [techiaith/banc-trawsgrifiadau-bangor](https://huggingface.co/datasets/techiaith/banc-trawsgrifiadau-bangor) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4357
- Wer: 0.3262

## Model description

An automatic speech recognition model for Welsh (`cy`), fine-tuned from the multilingual [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) checkpoint on verbatim transcriptions from the Bangor transcription bank ([techiaith/banc-trawsgrifiadau-bangor](https://huggingface.co/datasets/techiaith/banc-trawsgrifiadau-bangor)).

## Intended uses & limitations

Intended for transcribing spoken Welsh (a usage sketch appears at the end of this card). With a word error rate of roughly 33% on the evaluation set, transcriptions will contain errors and should be reviewed before downstream use.

## Training and evaluation data

The model was trained on the `train` split and evaluated on the `test` split of the [techiaith/banc-trawsgrifiadau-bangor](https://huggingface.co/datasets/techiaith/banc-trawsgrifiadau-bangor) dataset.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5.0
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log        | 0.21  | 100  | 3.4135          | 1.0    |
| No log        | 0.41  | 200  | 2.9521          | 1.0    |
| No log        | 0.62  | 300  | 2.3339          | 0.9365 |
| No log        | 0.83  | 400  | 1.2433          | 0.8259 |
| 3.1912        | 1.03  | 500  | 0.8614          | 0.6385 |
| 3.1912        | 1.24  | 600  | 0.7557          | 0.5612 |
| 3.1912        | 1.44  | 700  | 0.6781          | 0.5195 |
| 3.1912        | 1.65  | 800  | 0.6363          | 0.4879 |
| 3.1912        | 1.86  | 900  | 0.5959          | 0.4559 |
| 0.8237        | 2.06  | 1000 | 0.5430          | 0.4260 |
| 0.8237        | 2.27  | 1100 | 0.5293          | 0.4098 |
| 0.8237        | 2.48  | 1200 | 0.5141          | 0.4056 |
| 0.8237        | 2.68  | 1300 | 0.4879          | 0.3947 |
| 0.8237        | 2.89  | 1400 | 0.4697          | 0.3788 |
| 0.5625        | 3.1   | 1500 | 0.4748          | 0.3780 |
| 0.5625        | 3.3   | 1600 | 0.4836          | 0.3684 |
| 0.5625        | 3.51  | 1700 | 0.4796          | 0.3625 |
| 0.5625        | 3.72  | 1800 | 0.4582          | 0.3515 |
| 0.5625        | 3.92  | 1900 | 0.4395          | 0.3437 |
| 0.4267        | 4.13  | 2000 | 0.4410          | 0.3420 |
| 0.4267        | 4.33  | 2100 | 0.4467          | 0.3382 |
| 0.4267        | 4.54  | 2200 | 0.4398          | 0.3329 |
| 0.4267        | 4.75  | 2300 | 0.4383          | 0.3287 |
| 0.4267        | 4.95  | 2400 | 0.4358          | 0.3264 |

### Framework versions

- Transformers 4.28.1
- PyTorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
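
## Usage

A minimal inference sketch using the standard `transformers` CTC interface. The repository id and audio path below are placeholders, since this card does not state where the checkpoint is published; audio must be resampled to the 16 kHz rate the model expects.

```python
import librosa
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

MODEL_ID = "techiaith/wav2vec2-xlsr-ft-cy-verbatim"  # placeholder: substitute the published repo id

processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
model = Wav2Vec2ForCTC.from_pretrained(MODEL_ID)
model.eval()

# Load audio and resample to 16 kHz, the rate XLSR-53 was pretrained on.
speech, _ = librosa.load("recording.wav", sr=16_000)  # placeholder path

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: take the most likely token per frame, then
# collapse repeats and strip blank tokens in batch_decode.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```

The checkpoint can also be used through `pipeline("automatic-speech-recognition", model=MODEL_ID)` when per-frame logits are not needed.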
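
For reference, the hyperparameters listed under "Training procedure" map onto `transformers.TrainingArguments` roughly as follows. This is a sketch rather than the exact training script: `output_dir` is a placeholder, and the Adam betas and epsilon listed in this card are the `Trainer` defaults, so they are not set explicitly.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed in this card.
training_args = TrainingArguments(
    output_dir="wav2vec2-xlsr-ft-cy-verbatim",  # placeholder
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 16 * 2 = 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=5.0,
    fp16=True,  # "Native AMP" mixed-precision training
)
```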