---
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: libri-alpha-0.75-Temp-1-attention-3-layers-distil-with-6-layers-mse-take-3
  results: []
---

# libri-alpha-0.75-Temp-1-attention-3-layers-distil-with-6-layers-mse-take-3

This model is a fine-tuned version of [rohitp1/libri-alpha-0.75-Temp-1-attention-3-layers-distil-with-6-layers-mse](https://huggingface.co/rohitp1/libri-alpha-0.75-Temp-1-attention-3-layers-distil-with-6-layers-mse) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 28.9263
- Wer: 0.3301

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 4
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.2
- num_epochs: 40
- mixed_precision_training: Native AMP

A sketch of these settings expressed as `transformers.TrainingArguments` is given at the end of this card.

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 291.1088      | 0.22  | 400  | 28.4207         | 0.3362 |
| 284.1968      | 0.45  | 800  | 28.1458         | 0.3314 |
| 288.1414      | 0.67  | 1200 | 28.1397         | 0.3326 |
| 290.0272      | 0.9   | 1600 | 28.4186         | 0.3323 |
| 287.3224      | 1.12  | 2000 | 28.3548         | 0.3283 |
| 279.1482      | 1.35  | 2400 | 28.5373         | 0.3309 |
| 285.8217      | 1.57  | 2800 | 28.4447         | 0.3301 |
| 282.9265      | 1.79  | 3200 | 28.5379         | 0.3365 |
| 292.6254      | 2.02  | 3600 | 28.2632         | 0.3299 |
| 279.215       | 2.24  | 4000 | 28.9263         | 0.3301 |

### Framework versions

- Transformers 4.24.0
- Pytorch 1.12.1
- Datasets 2.7.1
- Tokenizers 0.11.0
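### Training configuration sketch

For reference, here is a minimal sketch of how the hyperparameters above map onto `transformers.TrainingArguments` (as of Transformers 4.24). This is not the original training script: `output_dir` is a placeholder, and only the values listed above are assumed.

```python
# Minimal sketch of the listed hyperparameters as TrainingArguments.
# output_dir is a placeholder; all other values come from the card.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./results",          # placeholder, not from the original run
    learning_rate=5e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    seed=42,
    gradient_accumulation_steps=2,   # effective train batch size: 4 * 2 = 8
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_ratio=0.2,
    num_train_epochs=40,
    fp16=True,                       # "Native AMP" mixed precision
)
```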
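### Usage sketch

The WER metric and the LibriSpeech-style name suggest this is a CTC-based speech recognition checkpoint, but the architecture is not stated in this card. Assuming a CTC model, inference might look like the following; the repo id is inferred from the model name above and should be verified before use.

```python
# Hypothetical usage sketch, assuming a CTC-based speech model.
# The repo id below is an assumption inferred from the card's model name.
import torch
from transformers import AutoModelForCTC, AutoProcessor

model_id = "rohitp1/libri-alpha-0.75-Temp-1-attention-3-layers-distil-with-6-layers-mse-take-3"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCTC.from_pretrained(model_id)
model.eval()

def transcribe(speech, sampling_rate=16_000):
    """Transcribe a 1-D float waveform (16 kHz is typical for LibriSpeech models)."""
    inputs = processor(speech, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(pred_ids)[0]
```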