---
base_model: lnxdx/B4_1000_1e-5_hp-myself-2
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: C2_1000_1e-5_hp-myself-2
  results: []
---

# C2_1000_1e-5_hp-myself-2

This model is a fine-tuned version of [lnxdx/B4_1000_1e-5_hp-myself-2](https://huggingface.co/lnxdx/B4_1000_1e-5_hp-myself-2) on the ShEMO dataset.
It achieves the following results:
- Loss on ShEMO train set: 0.7065
- Loss on ShEMO dev set: 0.6634
- WER on ShEMO train set: 26.52%
- WER on ShEMO dev set: 30.87%
- WER on Common Voice 13 test set: 19.43%

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 1000
- mixed_precision_training: Native AMP

A sketch mapping these values onto `TrainingArguments` is given in the "Training configuration sketch" section at the end of this card.

### Training results

| Training Loss | Epoch | Step | Validation Loss | WER    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7564        | 0.62  | 100  | 0.6705          | 0.3131 |
| 0.7761        | 1.25  | 200  | 0.6664          | 0.3140 |
| 0.7722        | 1.88  | 300  | 0.6573          | 0.3137 |
| 0.7035        | 2.5   | 400  | 0.6627          | 0.3157 |
| 0.7026        | 3.12  | 500  | 0.6834          | 0.3107 |
| 0.7213        | 3.75  | 600  | 0.6561          | 0.3169 |
| 0.6996        | 4.38  | 700  | 0.6664          | 0.3096 |
| 0.7146        | 5.0   | 800  | 0.6593          | 0.3148 |
| 0.7071        | 5.62  | 900  | 0.6646          | 0.3125 |
| 0.7065        | 6.25  | 1000 | 0.6634          | 0.3107 |

### Framework versions

- Transformers 4.35.2
- Pytorch 2.1.0+cu118
- Datasets 2.15.0
- Tokenizers 0.15.0
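
## How to use

Since this checkpoint is evaluated with WER, it is presumably a speech-to-text model. Below is a minimal inference sketch using the `automatic-speech-recognition` pipeline; it assumes the checkpoint lives at `lnxdx/C2_1000_1e-5_hp-myself-2` (inferred from the base model's namespace) and uses `sample.wav` as a placeholder audio file.

```python
from transformers import pipeline

# Minimal ASR inference sketch. The repo id is inferred from the base model's
# namespace, and "sample.wav" is a placeholder (16 kHz mono audio works best).
asr = pipeline(
    "automatic-speech-recognition",
    model="lnxdx/C2_1000_1e-5_hp-myself-2",
)

result = asr("sample.wav")
print(result["text"])
```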
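
## Training configuration sketch

The hyperparameters listed under "Training procedure" correspond to a standard 🤗 `Trainer` setup. The sketch below maps them onto `TrainingArguments`; the output directory is illustrative, and the evaluation/logging cadence of 100 steps is an assumption based on the training-results table.

```python
from transformers import TrainingArguments

# Sketch of TrainingArguments matching the hyperparameters on this card.
training_args = TrainingArguments(
    output_dir="C2_1000_1e-5_hp-myself-2",  # illustrative path
    learning_rate=1e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,  # effective train batch size: 8 * 2 = 16
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=1000,
    fp16=True,                       # "Native AMP" mixed-precision training
    evaluation_strategy="steps",     # assumption: eval every 100 steps, per the table
    eval_steps=100,
    logging_steps=100,               # assumption: matches the logged loss cadence
)
```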