---
license: mit
base_model: microsoft/speecht5_tts
tags:
  - generated_from_trainer
model-index:
  - name: zlm_b64_le5_s8000
    results: []
---

# zlm_b64_le5_s8000

This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on an unknown dataset. It achieves the following results on the evaluation set:

- Loss: 0.3630
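
Since the usage sections below are still placeholders, here is a minimal, hedged inference sketch using the standard SpeechT5 text-to-speech API in 🤗 Transformers. The repository id `mikhail-panzo/zlm_b64_le5_s8000` and the zero speaker embedding are illustrative assumptions: SpeechT5 conditions on a 512-dimensional x-vector, so substitute a real speaker embedding for usable audio.

```python
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

# Repo id assumed from the card title; point this at the actual checkpoint.
checkpoint = "mikhail-panzo/zlm_b64_le5_s8000"

processor = SpeechT5Processor.from_pretrained(checkpoint)
model = SpeechT5ForTextToSpeech.from_pretrained(checkpoint)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hello from SpeechT5.", return_tensors="pt")

# Placeholder speaker embedding (512-dim x-vector); a zero vector will not
# sound natural -- use a real embedding extracted from reference audio.
speaker_embeddings = torch.zeros((1, 512))

# Returns a 1-D waveform tensor at 16 kHz.
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```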

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the configuration sketch after this list):

- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- training_steps: 8500
- mixed_precision_training: Native AMP
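
For orientation, this is how the values above would map onto `Seq2SeqTrainingArguments` in 🤗 Transformers. It is a sketch, not the original training script: `output_dir`, the 500-step evaluation cadence (inferred from the results table below), and `fp16=True` (inferred from "Native AMP") are assumptions. Note also that `Trainer`'s default optimizer is AdamW with exactly the betas and epsilon listed above.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch of the hyperparameters listed above; values marked "assumed" are
# not stated on the card.
training_args = Seq2SeqTrainingArguments(
    output_dir="zlm_b64_le5_s8000",  # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,   # 32 x 2 = total train batch size 64
    lr_scheduler_type="linear",
    warmup_steps=2000,
    max_steps=8500,
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
    eval_strategy="steps",           # assumed from the 500-step eval table
    eval_steps=500,
)
```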

### Training results

| Training Loss | Epoch  | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.4525        | 0.4188 | 500  | 0.4043          |
| 0.4541        | 0.8375 | 1000 | 0.3992          |
| 0.4355        | 1.2563 | 1500 | 0.3946          |
| 0.4315        | 1.6750 | 2000 | 0.3966          |
| 0.4329        | 2.0938 | 2500 | 0.3881          |
| 0.4235        | 2.5126 | 3000 | 0.3829          |
| 0.4179        | 2.9313 | 3500 | 0.3775          |
| 0.4116        | 3.3501 | 4000 | 0.3739          |
| 0.4107        | 3.7688 | 4500 | 0.3721          |
| 0.4029        | 4.1876 | 5000 | 0.3693          |
| 0.409         | 4.6064 | 5500 | 0.3680          |
| 0.4061        | 5.0251 | 6000 | 0.3662          |
| 0.403         | 5.4439 | 6500 | 0.3654          |
| 0.3958        | 5.8626 | 7000 | 0.3630          |
| 0.3952        | 6.2814 | 7500 | 0.3635          |
| 0.3971        | 6.7002 | 8000 | 0.3627          |
| 0.4004        | 7.1189 | 8500 | 0.3630          |

### Framework versions

- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.0
- Tokenizers 0.19.1