---
license: mit
base_model: Edmon02/speecht5_finetuned_hy
tags:
  - generated_from_trainer
model-index:
  - name: speecht5_finetuned_voxpopuli_hy
    results: []
---

# speecht5_finetuned_voxpopuli_hy

This model is a fine-tuned version of [Edmon02/speecht5_finetuned_hy](https://huggingface.co/Edmon02/speecht5_finetuned_hy) on an unspecified dataset.
It achieves the following results on the evaluation set:

- Loss: 0.6214
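The card does not include a usage example. A minimal inference sketch using the standard `transformers` SpeechT5 API might look like the following; the zero speaker embedding is only a placeholder (in practice you would pass a real 512-dimensional x-vector), and `microsoft/speecht5_hifigan` is the usual companion vocoder rather than one confirmed by this card:

```python
import torch
from transformers import SpeechT5ForTextToSpeech, SpeechT5HifiGan, SpeechT5Processor

MODEL_ID = "Edmon02/speecht5_finetuned_voxpopuli_hy"


def synthesize(text, speaker_embeddings=None):
    """Generate a speech waveform tensor from input text."""
    processor = SpeechT5Processor.from_pretrained(MODEL_ID)
    model = SpeechT5ForTextToSpeech.from_pretrained(MODEL_ID)
    # HiFi-GAN vocoder converts the predicted spectrogram into a waveform
    vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")
    if speaker_embeddings is None:
        # SpeechT5 expects a (1, 512) speaker x-vector; zeros are a placeholder only
        speaker_embeddings = torch.zeros((1, 512))
    inputs = processor(text=text, return_tensors="pt")
    return model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
```

The returned tensor can be written to disk with `soundfile.write("out.wav", speech.numpy(), samplerate=16000)`, since SpeechT5 operates at 16 kHz.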

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 4000
- mixed_precision_training: Native AMP
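The listed `total_train_batch_size` of 32 is not an independent setting: it is the per-device batch size multiplied by the gradient accumulation steps (assuming a single device, which this card does not state explicitly). A quick sketch of that arithmetic:

```python
# Values taken from the hyperparameter list above
per_device_train_batch_size = 4
gradient_accumulation_steps = 8
num_devices = 1  # assumption: single-GPU training

# Gradients are accumulated over 8 micro-batches of 4 before each optimizer step
effective_batch_size = (
    per_device_train_batch_size * gradient_accumulation_steps * num_devices
)
print(effective_batch_size)  # 32, matching total_train_batch_size
```

This is why each entry in the results table below advances roughly 4.42 epochs per 500 steps: one optimizer step consumes 32 training examples.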

### Training results

| Training Loss | Epoch   | Step | Validation Loss |
|:-------------:|:-------:|:----:|:---------------:|
| 0.7717        | 4.4199  | 500  | 0.7195          |
| 0.7361        | 8.8398  | 1000 | 0.6810          |
| 0.7088        | 13.2597 | 1500 | 0.6597          |
| 0.6942        | 17.6796 | 2000 | 0.6465          |
| 0.6835        | 22.0994 | 2500 | 0.6367          |
| 0.6753        | 26.5193 | 3000 | 0.6309          |
| 0.6734        | 30.9392 | 3500 | 0.6245          |
| 0.6747        | 35.3591 | 4000 | 0.6214          |

### Framework versions

- Transformers 4.43.3
- Pytorch 2.4.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1