
basic_train_basic_test 1000, similar params: per_device_train_batch_size=32 (was 16, and the value on the line below was 1), gradient_accumulation_steps=2, warmup_steps=300, max_steps=3000

This model is a fine-tuned version of openai/whisper-small on the xbilek25/train_set_1sd_1000_en_de_en_v2.0 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.6814
  • Wer: 22.2374
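
For convenience, a minimal inference sketch, assuming the checkpoint is published as xbilek25/whisper-small-train-v3.1 and loaded through the standard Transformers ASR pipeline (the audio file name is a placeholder):

```python
# Minimal inference sketch; adjust the checkpoint id and audio path to your setup.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="xbilek25/whisper-small-train-v3.1",  # fine-tuned whisper-small checkpoint
)

# Transcribe a local audio file; long inputs are processed in 30 s chunks.
result = asr("sample.wav", chunk_length_s=30)
print(result["text"])
```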

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 3000
  • mixed_precision_training: Native AMP
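
For orientation, a sketch of how the values above map onto Seq2SeqTrainingArguments in Transformers. This is an illustrative reconstruction, not the original training script; output_dir, the evaluation cadence, and the generation setting are assumptions:

```python
# Illustrative reconstruction of the listed hyperparameters; not the original script.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-finetune",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=3000,
    fp16=True,  # "Native AMP" mixed precision
    evaluation_strategy="steps",
    eval_steps=600,  # assumption, matching the 600-step interval in the results table
    predict_with_generate=True,  # so WER can be computed on generated text
)
```

The Adam betas and epsilon listed above are the optimizer defaults, so they need no explicit arguments.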

Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0044        | 9.01  | 600  | 0.5872          | 21.2453 |
| 0.0007        | 19.0  | 1200 | 0.6347          | 21.4848 |
| 0.0004        | 28.01 | 1800 | 0.6607          | 21.9295 |
| 0.0003        | 38.0  | 2400 | 0.6757          | 22.1348 |
| 0.0003        | 47.01 | 3000 | 0.6814          | 22.2374 |
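
The Wer column is a word error rate, reported as a percentage. A minimal sketch of how it is typically computed during Whisper fine-tuning, assuming the Hugging Face evaluate library and a `tokenizer` variable bound to this model's WhisperTokenizer (this compute_metrics hook follows the common recipe and is not confirmed from the source):

```python
# Sketch of the usual WER metric computation for Whisper fine-tuning.
# Assumes `tokenizer` is the WhisperTokenizer for this checkpoint.
import evaluate

wer_metric = evaluate.load("wer")

def compute_metrics(pred):
    pred_ids = pred.predictions
    label_ids = pred.label_ids
    # Restore padding tokens that were masked out with -100 in the labels.
    label_ids[label_ids == -100] = tokenizer.pad_token_id

    pred_str = tokenizer.batch_decode(pred_ids, skip_special_tokens=True)
    label_str = tokenizer.batch_decode(label_ids, skip_special_tokens=True)

    # evaluate returns a fraction; scale to percent to match the table above.
    wer = 100 * wer_metric.compute(predictions=pred_str, references=label_str)
    return {"wer": wer}
```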

Framework versions

  • Transformers 4.37.2
  • Pytorch 2.2.1+cu121
  • Datasets 2.19.0
  • Tokenizers 0.15.2
