ceb_b64_le5_s12000

This model is a fine-tuned version of microsoft/speecht5_tts on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3942
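
Since the card gives no usage snippet, below is a minimal inference sketch using the standard transformers SpeechT5 API. The repo id, the sample input text (the `ceb` prefix suggests a Cebuano fine-tune, but the card leaves the dataset unspecified), and the zero speaker embedding are all illustrative assumptions; substitute a real x-vector for your target voice.

```python
import torch
import soundfile as sf
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

# Hypothetical repo id -- point this at wherever the checkpoint actually lives.
checkpoint = "your-username/ceb_b64_le5_s12000"

processor = SpeechT5Processor.from_pretrained(checkpoint)
model = SpeechT5ForTextToSpeech.from_pretrained(checkpoint)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# "ceb" in the model name suggests Cebuano, but the training data is unspecified.
inputs = processor(text="Maayong buntag.", return_tensors="pt")

# SpeechT5 conditions on a 512-dim x-vector speaker embedding.
# A zero vector is only a placeholder; use an embedding from a real reference speaker.
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("output.wav", speech.numpy(), samplerate=16000)  # SpeechT5 outputs 16 kHz audio
```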

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 4
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 2000
  • training_steps: 12000
  • mixed_precision_training: Native AMP
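
For reference, here is a sketch of how these settings map onto `Seq2SeqTrainingArguments` from transformers, which SpeechT5 fine-tuning recipes typically use. The `output_dir` and the 500-step evaluation/save cadence (inferred from the results table below) are assumptions; the card does not include the actual training script.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="ceb_b64_le5_s12000",   # assumed; matches the model name
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,     # effective train batch: 16 * 4 = 64
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    warmup_steps=2000,
    max_steps=12000,
    seed=42,
    fp16=True,                         # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=500,                    # assumed from the eval cadence in the results table
    save_steps=500,
)
```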

Training results

| Training Loss | Epoch    | Step  | Validation Loss |
|:-------------:|:--------:|:-----:|:---------------:|
| 0.5368        | 19.8020  | 500   | 0.4637          |
| 0.4674        | 39.6040  | 1000  | 0.4246          |
| 0.448         | 59.4059  | 1500  | 0.4101          |
| 0.4358        | 79.2079  | 2000  | 0.4011          |
| 0.4273        | 99.0099  | 2500  | 0.3966          |
| 0.4161        | 118.8119 | 3000  | 0.3953          |
| 0.4119        | 138.6139 | 3500  | 0.3939          |
| 0.4008        | 158.4158 | 4000  | 0.3932          |
| 0.4029        | 178.2178 | 4500  | 0.3932          |
| 0.4008        | 198.0198 | 5000  | 0.3947          |
| 0.3989        | 217.8218 | 5500  | 0.3933          |
| 0.3934        | 237.6238 | 6000  | 0.3943          |
| 0.3927        | 257.4257 | 6500  | 0.3937          |
| 0.3875        | 277.2277 | 7000  | 0.3918          |
| 0.3914        | 297.0297 | 7500  | 0.3931          |
| 0.3911        | 316.8317 | 8000  | 0.3937          |
| 0.3821        | 336.6337 | 8500  | 0.3925          |
| 0.3932        | 356.4356 | 9000  | 0.3938          |
| 0.3861        | 376.2376 | 9500  | 0.3940          |
| 0.3829        | 396.0396 | 10000 | 0.3934          |
| 0.3819        | 415.8416 | 10500 | 0.3935          |
| 0.383         | 435.6436 | 11000 | 0.3939          |
| 0.3828        | 455.4455 | 11500 | 0.3937          |
| 0.3818        | 475.2475 | 12000 | 0.3942          |

Framework versions

  • Transformers 4.41.0.dev0
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.0
  • Tokenizers 0.19.1