---
license: mit
base_model: MBZUAI/speecht5_tts_clartts_ar
tags:
  - generated_from_trainer
model-index:
  - name: speecht5_clartts_finetuned_TTS-PLS
    results: []
---

# speecht5_clartts_finetuned_TTS-PLS

This model is a fine-tuned version of [MBZUAI/speecht5_tts_clartts_ar](https://huggingface.co/MBZUAI/speecht5_tts_clartts_ar) on an unknown dataset.
It achieves the following results on the evaluation set:

- Loss: 0.5869
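
The card does not yet document usage, so here is a minimal inference sketch using the standard Transformers SpeechT5 API. The repo id, the Arabic sample sentence, and the speaker-embedding source are assumptions, not taken from the card:

```python
import torch
import soundfile as sf
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

# Assumed repo id for this checkpoint; adjust if it is hosted elsewhere.
repo_id = "TheMabrouk/speecht5_clartts_finetuned_TTS-PLS"
processor = SpeechT5Processor.from_pretrained(repo_id)
model = SpeechT5ForTextToSpeech.from_pretrained(repo_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# The base model was trained on ClArTTS, so Arabic input text is assumed.
inputs = processor(text="السلام عليكم", return_tensors="pt")

# SpeechT5 conditions on a 512-dim x-vector speaker embedding. The zero
# vector is only a placeholder; extract a real embedding from reference
# audio (e.g. with speechbrain/spkrec-xvect-voxceleb) for natural speech.
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)  # SpeechT5 generates 16 kHz audio
```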

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
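
For reference, these values map onto `Seq2SeqTrainingArguments` roughly as sketched below. The output directory and the evaluation/save cadence are assumptions (a 250-step cadence matches the results table), and the Adam betas/epsilon listed above are the library defaults:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_clartts_finetuned_TTS-PLS",  # assumed
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,  # effective train batch size: 16 * 4 = 64
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    seed=42,
    eval_strategy="steps",  # assumed: evaluation every 250 steps, per the results table
    eval_steps=250,
    save_steps=250,
)
```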

### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.642         | 2.8169  | 250  | 0.5850          |
| 0.588         | 5.6338  | 500  | 0.5583          |
| 0.5758        | 8.4507  | 750  | 0.5506          |
| 0.56          | 11.2676 | 1000 | 0.5511          |
| 0.5522        | 14.0845 | 1250 | 0.5469          |
| 0.5398        | 16.9014 | 1500 | 0.5490          |
| 0.5341        | 19.7183 | 1750 | 0.5551          |
| 0.5294        | 22.5352 | 2000 | 0.5555          |
| 0.5235        | 25.3521 | 2250 | 0.5591          |
| 0.5197        | 28.1690 | 2500 | 0.5637          |
| 0.5154        | 30.9859 | 2750 | 0.5738          |
| 0.5109        | 33.8028 | 3000 | 0.5694          |
| 0.5094        | 36.6197 | 3250 | 0.5742          |
| 0.507         | 39.4366 | 3500 | 0.5740          |
| 0.5063        | 42.2535 | 3750 | 0.5791          |
| 0.5018        | 45.0704 | 4000 | 0.5811          |
| 0.4988        | 47.8873 | 4250 | 0.5844          |
| 0.4989        | 50.7042 | 4500 | 0.5835          |
| 0.4989        | 53.5211 | 4750 | 0.5850          |
| 0.5001        | 56.3380 | 5000 | 0.5869          |

### Framework versions

- Transformers 4.43.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1