
speecht5_finetuned_voxpopuli_de_16

This model is a fine-tuned version of bregsi/speecht5_finetuned_voxpopuli_de on the German split of the VoxPopuli dataset. It achieves the following results on the evaluation set:

  • Loss: 0.4477
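
Since this is a SpeechT5 text-to-speech checkpoint, a minimal inference sketch following the usual transformers SpeechT5 pattern could look as follows. The repository id (taken from the card's title) and the CMU ARCTIC x-vector used as the speaker embedding are assumptions, not documented by this card:

```python
import torch
import soundfile as sf
from datasets import load_dataset
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

# Repository id assumed from the card's title; adjust if it differs.
model_id = "bregsi/speecht5_finetuned_voxpopuli_de_16"
processor = SpeechT5Processor.from_pretrained(model_id)
model = SpeechT5ForTextToSpeech.from_pretrained(model_id)
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="Hallo, wie geht es Ihnen heute?", return_tensors="pt")

# SpeechT5 requires a 512-dim x-vector speaker embedding; the CMU ARCTIC
# x-vectors are a common stand-in (this particular speaker is an assumption).
embeddings = load_dataset("Matthijs/cmu-arctic-xvectors", split="validation")
speaker_embedding = torch.tensor(embeddings[7306]["xvector"]).unsqueeze(0)

speech = model.generate_speech(inputs["input_ids"], speaker_embedding, vocoder=vocoder)
sf.write("speech.wav", speech.numpy(), samplerate=16000)
```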

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 2
  • seed: 42
  • gradient_accumulation_steps: 8
  • total_train_batch_size: 32
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 12000
  • mixed_precision_training: Native AMP
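
For reference, these hyperparameters map onto transformers' Seq2SeqTrainingArguments roughly as sketched below; the output_dir and anything not in the list above are assumptions, not taken from the card:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="speecht5_finetuned_voxpopuli_de_16",  # assumed name
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=8,  # effective train batch size: 4 * 8 = 32
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=12000,
    fp16=True,  # "Native AMP" mixed-precision training
    # Adam betas=(0.9, 0.999) and epsilon=1e-08 are the Trainer defaults,
    # matching the optimizer settings listed above.
)
```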

Training results

| Training Loss | Epoch   | Step  | Validation Loss |
|:-------------:|:-------:|:-----:|:---------------:|
| 0.4928        | 2.2812  | 1000  | 0.4584          |
| 0.4885        | 4.5623  | 2000  | 0.4555          |
| 0.4831        | 6.8435  | 3000  | 0.4523          |
| 0.4815        | 9.1246  | 4000  | 0.4515          |
| 0.4786        | 11.4058 | 5000  | 0.4508          |
| 0.4735        | 13.6869 | 6000  | 0.4491          |
| 0.4734        | 15.9681 | 7000  | 0.4494          |
| 0.4729        | 18.2492 | 8000  | 0.4482          |
| 0.4678        | 20.5304 | 9000  | 0.4483          |
| 0.4722        | 22.8115 | 10000 | 0.4479          |
| 0.47          | 25.0927 | 11000 | 0.4481          |
| 0.4682        | 27.3738 | 12000 | 0.4477          |

Framework versions

  • Transformers 4.41.2
  • Pytorch 2.3.1+cu121
  • Datasets 2.19.2
  • Tokenizers 0.19.1