---
library_name: transformers
license: mit
base_model: MBZUAI/speecht5_tts_clartts_ar
tags:
  - generated_from_trainer
model-index:
  - name: ArabicTTS
    results: []
---

# ArabicTTS

This model is a fine-tuned version of [MBZUAI/speecht5_tts_clartts_ar](https://huggingface.co/MBZUAI/speecht5_tts_clartts_ar) on an unknown dataset.
It achieves the following results on the evaluation set:

- Loss: 0.5655
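
Since the base checkpoint is a SpeechT5 model, inference should follow the standard `transformers` SpeechT5 text-to-speech API. Below is a minimal sketch, assuming this model is published as `CarmelaFinianos/ArabicTTS` and pairing it with the stock `microsoft/speecht5_hifigan` vocoder; the random speaker embedding is a placeholder, not a real x-vector.

```python
# Minimal inference sketch. Assumptions: repo id "CarmelaFinianos/ArabicTTS",
# the stock microsoft/speecht5_hifigan vocoder, and a placeholder speaker
# embedding -- substitute a real 512-dim x-vector for a usable voice.
import torch
import soundfile as sf
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("CarmelaFinianos/ArabicTTS")
model = SpeechT5ForTextToSpeech.from_pretrained("CarmelaFinianos/ArabicTTS")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="السلام عليكم", return_tensors="pt")
speaker_embeddings = torch.randn(1, 512)  # placeholder x-vector (assumption)

# generate_speech returns a 1-D waveform tensor at SpeechT5's 16 kHz rate
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```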

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a configuration sketch follows the list):

- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 700
- mixed_precision_training: Native AMP
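
For reference, here is a non-authoritative sketch of how these values map onto `transformers` `Seq2SeqTrainingArguments`; the `output_dir` name is illustrative, and the model, dataset, and `Trainer` wiring are omitted.

```python
# Configuration sketch only: reproduces the hyperparameters listed above.
# output_dir is a hypothetical path; data collator and Trainer setup omitted.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="ArabicTTS",          # hypothetical output path
    learning_rate=1e-4,
    per_device_train_batch_size=4,   # 4 x 8 accumulation steps = total batch of 32
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,
    seed=42,
    optim="adamw_torch",             # AdamW; betas=(0.9, 0.999), eps=1e-08 are the defaults
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=700,
    fp16=True,                       # "Native AMP" mixed precision
)
```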

### Training results

| Training Loss | Epoch    | Step | Validation Loss |
|:-------------:|:--------:|:----:|:---------------:|
| 0.6259        | 8.1509   | 50   | 0.6030          |
| 0.5631        | 16.3019  | 100  | 0.5849          |
| 0.5468        | 24.4528  | 150  | 0.5856          |
| 0.5217        | 32.6038  | 200  | 0.5570          |
| 0.5102        | 40.7547  | 250  | 0.5555          |
| 0.4944        | 48.9057  | 300  | 0.5534          |
| 0.4829        | 57.1321  | 350  | 0.5509          |
| 0.477         | 65.2830  | 400  | 0.5567          |
| 0.4692        | 73.4340  | 450  | 0.5552          |
| 0.4635        | 81.5849  | 500  | 0.5572          |
| 0.4592        | 89.7358  | 550  | 0.5573          |
| 0.4546        | 97.8868  | 600  | 0.5610          |
| 0.4515        | 106.1132 | 650  | 0.5653          |
| 0.45          | 114.2642 | 700  | 0.5655          |

### Framework versions

- Transformers 4.46.2
- PyTorch 2.5.1+cu121
- Datasets 3.1.0
- Tokenizers 0.20.3