---
tags:
- generated_from_trainer
model-index:
- name: super_large_finetune_CM01
  results: []
---

# super_large_finetune_CM01

This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 7.2285
- Wer: 0.7714

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged sketch of how they map onto `TrainingArguments` appears at the end of this card):
- learning_rate: 0.0001
- train_batch_size: 15
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 857
- num_epochs: 50
- mixed_precision_training: Native AMP

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer    |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.0031        | 5.0   | 1715  | 1.9766          | 0.7857 |
| 0.2107        | 10.0  | 3430  | 3.8748          | 0.8238 |
| 0.1393        | 15.0  | 5145  | 4.7403          | 0.7952 |
| 0.0931        | 20.0  | 6860  | 3.5077          | 0.6667 |
| 0.0649        | 25.0  | 8575  | 7.7419          | 0.9333 |
| 0.0592        | 30.0  | 10290 | 5.6440          | 0.7762 |
| 0.0396        | 35.0  | 12005 | 6.9629          | 0.6810 |
| 0.03          | 40.0  | 13720 | 7.8282          | 0.7524 |
| 0.0191        | 45.0  | 15435 | 6.4626          | 0.7429 |
| 0.0121        | 50.0  | 17150 | 7.2285          | 0.7714 |

Training loss falls steadily while validation loss trends upward after epoch 5, a sign of overfitting; the best validation WER (0.6667) is reached at epoch 20, not at the final checkpoint reported above.

### Framework versions

- Transformers 4.18.0
- PyTorch 1.10.2+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
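
### Reproducing the training setup

The `generated_from_trainer` tag indicates the Hugging Face `Trainer` API was used, so the hyperparameters above map directly onto `TrainingArguments`. A minimal sketch, assuming the batch sizes are per device and using a hypothetical `output_dir`; the model class, dataset, and `compute_metrics` function are not recorded on this card and are omitted:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="super_large_finetune_CM01",  # hypothetical path
    learning_rate=1e-4,
    per_device_train_batch_size=15,   # assumption: train_batch_size is per device
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,                   # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,                # and epsilon=1e-08
    lr_scheduler_type="linear",
    warmup_steps=857,
    num_train_epochs=50,
    fp16=True,                        # mixed_precision_training: Native AMP
    evaluation_strategy="steps",      # assumption: the results table logs eval
    eval_steps=1715,                  # every 1715 steps (i.e. every 5 epochs)
)
```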
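
### Word error rate (Wer)

The Wer column above is word error rate: the word-level edit distance between hypothesis and reference transcripts divided by the number of reference words, so 0.7714 means roughly 77 errors per 100 reference words. The metric implementation used during training is not recorded on this card; a minimal self-contained sketch:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat down"))  # 0.333... (one insertion / 3 words)
```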