
zlm_b64_le4_s12000

This model is a fine-tuned version of microsoft/speecht5_tts on an unknown dataset. It achieves the following results on the evaluation set:

  • Loss: 0.3114

Model description

More information needed

Intended uses & limitations

More information needed
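Pending fuller documentation, the snippet below is a minimal inference sketch, assuming the standard SpeechT5 text-to-speech API from Transformers. The zero speaker embedding is only a placeholder (SpeechT5 conditions on a 512-dimensional x-vector, normally extracted from a reference utterance), and the microsoft/speecht5_hifigan vocoder is the usual pairing rather than anything stated in this card.

```python
import torch
import soundfile as sf
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

repo = "mikhail-panzo/zlm_b64_le4_s12000"
processor = SpeechT5Processor.from_pretrained(repo)
model = SpeechT5ForTextToSpeech.from_pretrained(repo)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")  # assumed vocoder pairing

inputs = processor(text="Hello from a fine-tuned SpeechT5 model.", return_tensors="pt")

# SpeechT5 conditions on a 512-dim speaker x-vector; zeros are only a placeholder.
speaker_embeddings = torch.zeros((1, 512))

speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)  # SpeechT5 outputs 16 kHz audio
```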

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch of how they map onto Seq2SeqTrainingArguments follows the list):

  • learning_rate: 0.0001
  • train_batch_size: 32
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 64
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 2000
  • training_steps: 12000
  • mixed_precision_training: Native AMP
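
As referenced above, here is a minimal sketch of how these hyperparameters map onto the Transformers Seq2SeqTrainingArguments API, assuming a standard Seq2SeqTrainer fine-tuning setup; output_dir and the evaluation cadence are illustrative assumptions, not values stated in this card.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="zlm_b64_le4_s12000",  # assumed checkpoint directory name
    learning_rate=1e-4,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,    # effective train batch size: 32 * 2 = 64
    lr_scheduler_type="linear",
    warmup_steps=2000,
    max_steps=12000,
    fp16=True,                        # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=500,                   # assumed; matches the 500-step cadence in the results table
)
```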

Training results

Training Loss   Epoch     Step    Validation Loss
0.5487          0.4188     500    0.4746
0.4830          0.8375    1000    0.4227
0.4320          1.2563    1500    0.3983
0.4290          1.6750    2000    0.3953
0.4168          2.0938    2500    0.3701
0.4021          2.5126    3000    0.3613
0.3925          2.9313    3500    0.3509
0.3839          3.3501    4000    0.3506
0.3798          3.7688    4500    0.3423
0.3693          4.1876    5000    0.3375
0.3712          4.6064    5500    0.3367
0.3668          5.0251    6000    0.3316
0.3635          5.4439    6500    0.3291
0.3543          5.8626    7000    0.3250
0.3526          6.2814    7500    0.3221
0.3525          6.7002    8000    0.3218
0.3513          7.1189    8500    0.3182
0.3460          7.5377    9000    0.3163
0.3448          7.9564    9500    0.3162
0.3563          8.3752   10000    0.3145
0.3449          8.7940   10500    0.3126
0.3436          9.2127   11000    0.3128
0.3413          9.6315   11500    0.3121
0.3397         10.0503   12000    0.3114
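
As a back-of-the-envelope check on the unnamed dataset's size: 500 optimizer steps correspond to 0.4188 epochs at an effective batch size of 64, which suggests a training set of roughly 500 × 64 / 0.4188 ≈ 76,000 examples.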

Framework versions

  • Transformers 4.41.0.dev0
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.0
  • Tokenizers 0.19.1