# TTS_Amharic
This model is a fine-tuned version of microsoft/speecht5_tts on the walelign_data dataset. It achieves the following results on the evaluation set:
- Loss: 0.3741
## Model description
More information needed
## Intended uses & limitations
More information needed
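Usage details are not documented yet. As a starting point, below is a minimal inference sketch assuming the model follows the standard SpeechT5 text-to-speech pipeline in Transformers; the repository ID and the speaker embedding are placeholders, not values confirmed by this card.

```python
# Minimal inference sketch (assumptions: standard SpeechT5 TTS setup from Transformers;
# the repo ID below is a placeholder and the speaker embedding is a generic stand-in).
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

repo_id = "<your-username>/TTS_Amharic"  # placeholder; replace with the actual Hub ID

processor = SpeechT5Processor.from_pretrained(repo_id)
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

text = "ሰላም፣ እንዴት ነህ?"  # example Amharic input
inputs = processor(text=text, return_tensors="pt")

# SpeechT5 conditions generation on a 512-dim speaker embedding (x-vector).
# A zero vector is used here purely as a placeholder; a real x-vector
# (e.g. extracted with a pretrained speaker-verification model) sounds far more natural.
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("amharic_tts.wav", speech.numpy(), samplerate=16000)
```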
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 16000
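For reproducibility, the hyperparameters above can be expressed as `Seq2SeqTrainingArguments`. This is a hedged reconstruction, not the original training script: the output directory and anything not listed above are assumptions. The total train batch size of 128 follows from 16 per device × 8 gradient-accumulation steps.

```python
# Hedged reconstruction of the training configuration from the hyperparameters above.
# The output directory is an assumption; unspecified options keep Trainer defaults.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="TTS_Amharic",          # assumed, not stated in the card
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=8,      # 16 * 8 = 128 effective train batch size
    warmup_steps=500,
    max_steps=16000,
    lr_scheduler_type="linear",
    seed=42,
    # betas=(0.9, 0.999) and epsilon=1e-08 match the Trainer's default Adam settings.
)
```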
### Training results
| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.4531        | 16.39  | 1000  | 0.4116          |
| 0.429         | 32.79  | 2000  | 0.3916          |
| 0.4135        | 49.18  | 3000  | 0.3825          |
| 0.4102        | 65.57  | 4000  | 0.3783          |
| 0.3982        | 81.97  | 5000  | 0.3758          |
| 0.3948        | 98.36  | 6000  | 0.3731          |
| 0.3935        | 114.75 | 7000  | 0.3741          |
| 0.3877        | 131.15 | 8000  | 0.3726          |
| 0.3866        | 147.54 | 9000  | 0.3719          |
| 0.3868        | 163.93 | 10000 | 0.3734          |
| 0.3855        | 180.33 | 11000 | 0.3718          |
| 0.3806        | 196.72 | 12000 | 0.3728          |
| 0.3841        | 213.11 | 13000 | 0.3729          |
| 0.3823        | 229.51 | 14000 | 0.3735          |
| 0.3796        | 245.9  | 15000 | 0.3724          |
| 0.3814        | 262.3  | 16000 | 0.3741          |
### Framework versions
- Transformers 4.38.1
- Pytorch 2.1.0+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
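Matching these versions helps when reproducing the results. A small, library-agnostic sanity check:

```python
# Print the installed versions of the libraries listed above so they can be
# compared against the versions this model was trained with.
import datasets
import tokenizers
import torch
import transformers

for name, module in [("Transformers", transformers), ("Pytorch", torch),
                     ("Datasets", datasets), ("Tokenizers", tokenizers)]:
    print(f"{name}: {module.__version__}")
```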