# SpeechT5 TTS Twi_v6
This model is a fine-tuned version of microsoft/speecht5_tts on the lagyamfi/Akan dataset. It achieves the following results on the evaluation set:
- Loss: 0.3921
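
A minimal inference sketch is shown below. The repository id and the zero-valued speaker embedding are placeholders, not taken from this card: substitute the actual Hub checkpoint and a 512-dimensional x-vector for a Twi speaker.

```python
import soundfile as sf
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

checkpoint = "your-username/speecht5_tts_twi_v6"  # hypothetical repo id

processor = SpeechT5Processor.from_pretrained(checkpoint)
model = SpeechT5ForTextToSpeech.from_pretrained(checkpoint)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Maakye", return_tensors="pt")  # Twi for "good morning"

# SpeechT5 conditions generation on a 512-dim x-vector speaker embedding.
# A zero vector keeps the snippet self-contained, but usable output
# requires an embedding that matches the fine-tuning speaker(s).
speaker_embeddings = torch.zeros(1, 512)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)  # SpeechT5 outputs 16 kHz audio
```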
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training; a sketch of the corresponding training arguments follows the list:
- learning_rate: 0.0001
- train_batch_size: 40
- eval_batch_size: 40
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
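
As a rough reconstruction, these values map onto `transformers.Seq2SeqTrainingArguments` as sketched below. The `output_dir` and the per-epoch evaluation strategy are assumptions; the Adam settings listed above are the Trainer defaults, so they need no explicit arguments.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_tts_twi_v6",  # hypothetical
    learning_rate=1e-4,
    per_device_train_batch_size=40,
    per_device_eval_batch_size=40,
    gradient_accumulation_steps=2,  # effective train batch size: 40 * 2 = 80
    warmup_steps=1000,
    num_train_epochs=30,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                      # "Native AMP" mixed precision
    evaluation_strategy="epoch",    # assumption: the table reports per-epoch eval
)
```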
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 269 | 0.4005 |
| 0.433 | 2.0 | 538 | 0.3988 |
| 0.433 | 3.0 | 807 | 0.4027 |
| 0.4353 | 4.0 | 1076 | 0.4052 |
| 0.4353 | 5.0 | 1345 | 0.4040 |
| 0.4356 | 6.0 | 1614 | 0.3986 |
| 0.4356 | 7.0 | 1883 | 0.3979 |
| 0.4314 | 8.0 | 2152 | 0.3988 |
| 0.4314 | 9.0 | 2421 | 0.3967 |
| 0.4283 | 10.0 | 2690 | 0.3960 |
| 0.4283 | 11.0 | 2959 | 0.3956 |
| 0.4221 | 12.0 | 3228 | 0.3945 |
| 0.4221 | 13.0 | 3497 | 0.3942 |
| 0.4185 | 14.0 | 3766 | 0.3943 |
| 0.4161 | 15.0 | 4035 | 0.3933 |
| 0.4161 | 16.0 | 4304 | 0.3950 |
| 0.4193 | 17.0 | 4573 | 0.3971 |
| 0.4193 | 18.0 | 4842 | 0.3952 |
| 0.4171 | 19.0 | 5111 | 0.3942 |
| 0.4171 | 20.0 | 5380 | 0.3937 |
| 0.4146 | 21.0 | 5649 | 0.3949 |
| 0.4146 | 22.0 | 5918 | 0.3948 |
| 0.4126 | 23.0 | 6187 | 0.3920 |
| 0.4126 | 24.0 | 6456 | 0.3920 |
| 0.41 | 25.0 | 6725 | 0.3953 |
| 0.41 | 26.0 | 6994 | 0.3927 |
| 0.4091 | 27.0 | 7263 | 0.3922 |
| 0.4065 | 28.0 | 7532 | 0.3910 |
| 0.4065 | 29.0 | 7801 | 0.3930 |
| 0.4057 | 30.0 | 8070 | 0.3921 |
### Framework versions
- Transformers 4.40.1
- PyTorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.19.1
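
To confirm a local environment matches these pins, the installed versions can be printed (package import names as published on PyPI):

```python
import datasets
import tokenizers
import torch
import transformers

print("Transformers:", transformers.__version__)  # expected 4.40.1
print("PyTorch:", torch.__version__)              # expected 2.2.1+cu121
print("Datasets:", datasets.__version__)          # expected 2.18.0
print("Tokenizers:", tokenizers.__version__)      # expected 0.19.1
```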