---
language:
  - en
license: mit
base_model: microsoft/speecht5_tts
tags:
  - scottish
  - tts
  - glaswegian
  - generated_from_trainer
datasets:
  - divakaivan/glaswegian_audio
model-index:
  - name: GlaswegianTTS v0.1.0
    results: []
---

# GlaswegianTTS v0.1.0

This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the glaswegian_tts_v0.1.0 dataset. It achieves the following results on the evaluation set:

- Loss: 0.5090
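
For quick usage, here is a minimal inference sketch built on the standard `transformers` SpeechT5 API. The repo id `divakaivan/glaswegian_tts`, the `microsoft/speecht5_hifigan` vocoder, and the CMU ARCTIC x-vector used as the speaker embedding are assumptions for illustration, not values stated in this card.

```python
# Minimal inference sketch; repo id, vocoder, and speaker embedding are assumed.
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("divakaivan/glaswegian_tts")    # assumed repo id
model = SpeechT5ForTextToSpeech.from_pretrained("divakaivan/glaswegian_tts")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

# SpeechT5 needs a 512-dim x-vector speaker embedding; CMU ARCTIC is a common source.
xvectors = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(xvectors[7306]["xvector"]).unsqueeze(0)

inputs = processor(text="Hello from Glasgow!", return_tensors="pt")
speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)

sf.write("glaswegian_sample.wav", speech.numpy(), samplerate=16000)           # 16 kHz output
```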

## Model description

GlaswegianTTS v0.1.0 is a SpeechT5 text-to-speech model fine-tuned to produce English speech with a Glaswegian (Scottish) accent, using audio from the [divakaivan/glaswegian_audio](https://huggingface.co/datasets/divakaivan/glaswegian_audio) dataset. No further details are documented.

## Intended uses & limitations

The model is intended for generating Glaswegian-accented English speech from text via the SpeechT5 pipeline (see the inference sketch above). Its limitations have not been documented.

## Training and evaluation data

The model was trained and evaluated on the [divakaivan/glaswegian_audio](https://huggingface.co/datasets/divakaivan/glaswegian_audio) dataset (referred to above as glaswegian_tts_v0.1.0); split sizes and preprocessing are not documented. A loading sketch follows.
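
The sketch below assumes the dataset can be pulled directly from the Hub with `datasets`; the split names and column layout it prints depend on how the dataset was prepared and are not documented here.

```python
# Load the Glaswegian audio dataset referenced in the card metadata.
from datasets import load_dataset

ds = load_dataset("divakaivan/glaswegian_audio")
print(ds)  # shows the available splits and their row counts

first_split = next(iter(ds.values()))
print(first_split.column_names)  # inspect the column names of the first split
```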

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a `Seq2SeqTrainingArguments` sketch follows the list):

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 8000
- mixed_precision_training: Native AMP
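
As a rough illustration, the values above map onto a `transformers` `Seq2SeqTrainingArguments` configuration as sketched below. The output directory and the evaluation/save cadence are placeholders (though the results table does suggest evaluation every 1000 steps), not settings taken verbatim from this card.

```python
# Sketch of the listed hyperparameters as Seq2SeqTrainingArguments.
# output_dir and the eval/save cadence are placeholders, not from the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="glaswegian_tts",        # placeholder
    learning_rate=1e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=2,      # effective train batch size: 32
    warmup_steps=1000,
    max_steps=8000,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                          # native AMP mixed precision
    eval_strategy="steps",              # evaluation every 1000 steps, per the results table
    eval_steps=1000,
    save_steps=1000,
)
```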

### Training results

| Training Loss | Epoch    | Step | Validation Loss |
|:-------------:|:--------:|:----:|:---------------:|
| 0.4421        | 52.6316  | 1000 | 0.4186          |
| 0.3878        | 105.2632 | 2000 | 0.4447          |
| 0.3775        | 157.8947 | 3000 | 0.4671          |
| 0.3639        | 210.5263 | 4000 | 0.4907          |
| 0.354         | 263.1579 | 5000 | 0.4884          |
| 0.356         | 315.7895 | 6000 | 0.4997          |
| 0.3451        | 368.4211 | 7000 | 0.5021          |
| 0.3514        | 421.0526 | 8000 | 0.5090          |
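
As a side note, the epoch column implies a small training set: with an effective batch size of 32, the step/epoch ratio works out to roughly 600 training examples. This is a derived estimate, not a figure stated in the card.

```python
# Rough training-set size implied by the step/epoch ratio (derived, not stated in the card).
effective_batch_size = 32        # total_train_batch_size from the hyperparameters
steps, epochs = 8000, 421.0526   # final row of the results table
approx_train_examples = steps * effective_batch_size / epochs
print(round(approx_train_examples))  # ~608
```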

### Framework versions

- Transformers 4.42.0.dev0
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1