---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: wav2vec2-large-xls-r-300m-telugu-asr
  results: []
---

# wav2vec2-large-xls-r-300m-telugu-asr

This model is a fine-tuned version of [henilp105/wav2vec2-large-xls-r-300m-telugu-asr](https://huggingface.co/henilp105/wav2vec2-large-xls-r-300m-telugu-asr) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.8094

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3952        | 2.3   | 200  | inf             | 0.9316 |
| 0.9044        | 4.59  | 400  | inf             | 0.8832 |
| 0.5208        | 6.89  | 600  | inf             | 0.8467 |
| 0.3552        | 9.19  | 800  | inf             | 0.8444 |
| 0.259         | 11.49 | 1000 | inf             | 0.84   |
| 0.2113        | 13.79 | 1200 | inf             | 0.8321 |
| 0.1681        | 16.09 | 1400 | inf             | 0.8264 |
| 0.1468        | 18.39 | 1600 | inf             | 0.8281 |
| 0.1321        | 20.69 | 1800 | inf             | 0.8198 |
| 0.1167        | 22.98 | 2000 | inf             | 0.7958 |
| 0.1028        | 25.29 | 2200 | inf             | 0.8007 |
| 0.0947        | 27.58 | 2400 | inf             | 0.8035 |
| 0.0829        | 29.88 | 2600 | inf             | 0.8094 |

### Framework versions

- Transformers 4.24.0
- Pytorch 1.10.0+cu113
- Datasets 1.18.3
- Tokenizers 0.13.2
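
As a reference point, the hyperparameters listed above map roughly onto `transformers.TrainingArguments` as sketched below. This is a reconstruction, not the original training script: `output_dir` and the evaluation/logging cadence are assumptions (the 200-step eval interval is inferred from the results table). The `inf` validation loss is a pattern commonly reported when fine-tuning wav2vec2 CTC models with fp16 mixed precision, so WER is the more informative column here.

```python
from transformers import TrainingArguments

# Sketch reconstructing the listed hyperparameters; output_dir and the
# eval/logging steps are illustrative assumptions, not from the source.
training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-telugu-asr",  # hypothetical path
    learning_rate=3e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # total train batch size: 8 * 2 = 16
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,                       # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=200,                  # inferred from the 200-step eval cadence above
    logging_steps=200,
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the
# TrainingArguments defaults, so no optimizer overrides are needed.
```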
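
To transcribe Telugu audio with this checkpoint, a minimal inference sketch follows. It assumes 16 kHz mono input (the rate XLS-R models were pretrained on); the audio file path is a placeholder.

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "henilp105/wav2vec2-large-xls-r-300m-telugu-asr"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)
model.eval()

# Load an audio file and resample to the 16 kHz rate the model expects.
waveform, sample_rate = torchaudio.load("sample.wav")  # placeholder path
if sample_rate != 16_000:
    waveform = torchaudio.functional.resample(waveform, sample_rate, 16_000)

inputs = processor(waveform.squeeze().numpy(),
                   sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding: argmax per frame, then collapse repeats and blanks.
pred_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(pred_ids))
```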
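
The WER reported above is word error rate, (substitutions + deletions + insertions) divided by the number of reference words; it can exceed 1.0 on badly mistranscribed audio. A quick check with the `jiwer` package (the transcript strings are placeholders):

```python
import jiwer

references = ["reference transcript goes here"]  # ground-truth text (placeholder)
hypotheses = ["model transcription goes here"]   # model output (placeholder)

# WER = (S + D + I) / N over the whole corpus.
print(jiwer.wer(references, hypotheses))
```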