---
library_name: transformers
license: mit
base_model: microsoft/speecht5_tts
tags:
- generated_from_trainer
model-index:
- name: speecht5_mehdi_as_1
  results: []
---

# speecht5_mehdi_as_1

This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.5176
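
The card does not document usage yet; the sketch below shows one way to run inference with the standard SpeechT5 classes in `transformers`. The repository id, the example text, and the random speaker embedding are placeholders, not part of the original card.

```python
# Minimal inference sketch (assumption: the fine-tuned checkpoint is published
# as "kingmhd1519/speecht5_mehdi_as_1"; replace with your own path if needed).
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

processor = SpeechT5Processor.from_pretrained("microsoft/speecht5_tts")
model = SpeechT5ForTextToSpeech.from_pretrained("kingmhd1519/speecht5_mehdi_as_1")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello, this is a test.", return_tensors="pt")

# SpeechT5 conditions generation on a 512-dimensional x-vector speaker embedding.
# A random vector is used here as a placeholder; for real output, extract an
# x-vector from reference audio of the target speaker.
speaker_embeddings = torch.randn(1, 512)

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)

# `speech` is a 1-D waveform tensor at 16 kHz and can be written out with, e.g.:
# import soundfile as sf
# sf.write("output.wav", speech.numpy(), samplerate=16000)
```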

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):

- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32 (train_batch_size × gradient_accumulation_steps)
- optimizer: adamw_torch with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
- mixed_precision_training: Native AMP
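
For reference, the hyperparameters above map onto `Seq2SeqTrainingArguments` roughly as shown below. This is a reconstruction, not the training script used for this checkpoint: the dataset, processor, data collator, and the `Seq2SeqTrainer` call are omitted, and the 100-step evaluation cadence is inferred from the results table.

```python
# Reconstruction sketch only; the output directory and eval cadence are assumptions.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_mehdi_as_1",   # assumed output directory
    learning_rate=1e-4,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,      # effective train batch size: 4 * 8 = 32
    seed=42,
    optim="adamw_torch",                # AdamW with betas=(0.9, 0.999), eps=1e-08
    lr_scheduler_type="linear",
    warmup_steps=100,
    max_steps=1500,
    fp16=True,                          # "Native AMP" mixed precision
    eval_strategy="steps",
    eval_steps=100,                     # inferred from the results table below
)
```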

### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.611         | 3.5556  | 100  | 0.5611          |
| 0.55          | 7.1111  | 200  | 0.5361          |
| 0.5435        | 10.6667 | 300  | 0.5158          |
| 0.5081        | 14.2222 | 400  | 0.4987          |
| 0.4918        | 17.7778 | 500  | 0.5124          |
| 0.4851        | 21.3333 | 600  | 0.4984          |
| 0.4783        | 24.8889 | 700  | 0.5027          |
| 0.4721        | 28.4444 | 800  | 0.4964          |
| 0.4595        | 32.0    | 900  | 0.5092          |
| 0.4524        | 35.5556 | 1000 | 0.5169          |
| 0.4528        | 39.1111 | 1100 | 0.5130          |
| 0.4423        | 42.6667 | 1200 | 0.5114          |
| 0.4401        | 46.2222 | 1300 | 0.5175          |
| 0.439         | 49.7778 | 1400 | 0.5202          |
| 0.4357        | 53.3333 | 1500 | 0.5176          |

### Framework versions

- Transformers 4.46.3
- Pytorch 2.5.1+cu121
- Tokenizers 0.20.3