---
license: mit
base_model: microsoft/speecht5_tts
tags:
  - generated_from_trainer
model-index:
  - name: fil_b32_le4_s8000
    results: []
---

# fil_b32_le4_s8000

This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset.
It achieves the following results on the evaluation set:

- Loss: 0.4087
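Since this is a SpeechT5 text-to-speech checkpoint, inference follows the standard 🤗 Transformers SpeechT5 pattern. A minimal sketch is below; note that the repo id is inferred from this card's name and may differ, and that SpeechT5 requires a 512-dimensional x-vector speaker embedding that the caller must supply (e.g. from `speechbrain/spkrec-xvect-voxceleb`):

```python
# Sketch of text-to-speech inference with this checkpoint.
# Assumptions (not stated in the card): the repo id below, and that the
# caller provides a 512-dim x-vector speaker embedding.
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

CHECKPOINT = "mikhail-panzo/fil_b32_le4_s8000"  # assumed repo id


def synthesize(text: str, speaker_embedding: torch.Tensor) -> torch.Tensor:
    """Return a 16 kHz waveform tensor for `text`.

    `speaker_embedding` must have shape (1, 512).
    """
    processor = SpeechT5Processor.from_pretrained(CHECKPOINT)
    model = SpeechT5ForTextToSpeech.from_pretrained(CHECKPOINT)
    vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

    inputs = processor(text=text, return_tensors="pt")
    with torch.no_grad():
        return model.generate_speech(
            inputs["input_ids"], speaker_embedding, vocoder=vocoder
        )
```

For a quick smoke test, `synthesize("...", torch.zeros(1, 512))` will run but produce an uncharacteristic voice; the resulting tensor can be saved with `soundfile.write("out.wav", waveform.numpy(), 16000)`.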

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 8000
- mixed_precision_training: Native AMP
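The effective batch size of 32 follows from the per-device batch size and gradient accumulation, and combining it with the step/epoch ratio in the results table gives a rough estimate of the training-set size. This is an illustrative calculation, not information from the original card:

```python
# Illustrative arithmetic from the hyperparameters above
# (not part of the original card).
train_batch_size = 16
gradient_accumulation_steps = 2

# Examples consumed per optimizer step:
total_train_batch_size = train_batch_size * gradient_accumulation_steps
assert total_train_batch_size == 32

# The results table pairs step 8000 with epoch ~175.8242, so:
steps_per_epoch = 8000 / 175.8242            # ~45.5 optimizer steps per epoch
approx_dataset_size = steps_per_epoch * total_train_batch_size
print(round(approx_dataset_size))            # roughly 1456 training examples
```

With only ~1.5k examples and ~176 epochs, the slow divergence between falling training loss and flat validation loss in the table below is consistent with mild overfitting late in the run.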

### Training results

| Training Loss | Epoch    | Step | Validation Loss |
|:-------------:|:--------:|:----:|:---------------:|
| 0.498         | 10.9890  | 500  | 0.4393          |
| 0.448         | 21.9780  | 1000 | 0.4195          |
| 0.4411        | 32.9670  | 1500 | 0.4205          |
| 0.4347        | 43.9560  | 2000 | 0.4253          |
| 0.4173        | 54.9451  | 2500 | 0.4151          |
| 0.4012        | 65.9341  | 3000 | 0.4118          |
| 0.4023        | 76.9231  | 3500 | 0.4092          |
| 0.3873        | 87.9121  | 4000 | 0.4116          |
| 0.381         | 98.9011  | 4500 | 0.4089          |
| 0.3804        | 109.8901 | 5000 | 0.4093          |
| 0.3724        | 120.8791 | 5500 | 0.4066          |
| 0.3665        | 131.8681 | 6000 | 0.4092          |
| 0.3635        | 142.8571 | 6500 | 0.4099          |
| 0.3562        | 153.8462 | 7000 | 0.4075          |
| 0.3581        | 164.8352 | 7500 | 0.4097          |
| 0.3461        | 175.8242 | 8000 | 0.4087          |

### Framework versions

- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1