---
language:
  - he
license: apache-2.0
library_name: peft
tags:
  - hf-asr-leaderboard
  - generated_from_trainer
datasets:
  - imvladikon/hebrew_speech_kan
base_model: openai/whisper-tiny
model-index:
  - name: TK_Whisper_ASR
    results: []
---

# TK_Whisper_ASR

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the KAN Hebrew Speech dataset ([imvladikon/hebrew_speech_kan](https://huggingface.co/datasets/imvladikon/hebrew_speech_kan)). It achieves the following results on the evaluation set:

- Loss: 5.4256
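
Because this repository ships a PEFT adapter rather than full model weights, inference loads the `openai/whisper-tiny` base model first and applies the adapter on top. Below is a minimal, untested sketch using the standard transformers/peft APIs; the adapter id `OverloadedOperator/tokomni-whisper-i3` is inferred from the repository path, and `sample.wav` is a placeholder file name.

```python
# Hedged inference sketch (not the author's script). Assumes the
# transformers 4.38 / peft 0.9 APIs listed under "Framework versions";
# the adapter repo id is inferred from the repository path.
import librosa
import torch
from peft import PeftModel
from transformers import WhisperForConditionalGeneration, WhisperProcessor

base = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")
model = PeftModel.from_pretrained(base, "OverloadedOperator/tokomni-whisper-i3")
processor = WhisperProcessor.from_pretrained("openai/whisper-tiny")

# Force Hebrew transcription tokens at the start of decoding.
forced_ids = processor.get_decoder_prompt_ids(language="he", task="transcribe")

# Placeholder input: any speech file, resampled to Whisper's 16 kHz.
audio, _ = librosa.load("sample.wav", sr=16000)
inputs = processor(audio, sampling_rate=16000, return_tensors="pt")

with torch.no_grad():
    generated = model.generate(
        input_features=inputs.input_features,
        forced_decoder_ids=forced_ids,
    )
print(processor.batch_decode(generated, skip_special_tokens=True)[0])
```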

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
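
The metadata does list [imvladikon/hebrew_speech_kan](https://huggingface.co/datasets/imvladikon/hebrew_speech_kan) as the training dataset. Its splits and columns are not documented here, so the sketch below only loads the dataset and prints its structure for inspection:

```python
# Hedged sketch: inspect the dataset named in the card metadata.
from datasets import load_dataset

ds = load_dataset("imvladikon/hebrew_speech_kan")
print(ds)  # shows available splits, column names, and row counts
```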

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a code sketch follows the list):

- learning_rate: 1e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 30
- mixed_precision_training: Native AMP
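
These settings map directly onto `transformers.Seq2SeqTrainingArguments`. The block below is a hedged reconstruction, not the author's original script; `output_dir` is a guess based on the model name, and the Adam betas/epsilon above are the library defaults, so they are left unset:

```python
# Hedged reconstruction of the reported hyperparameters.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="TK_Whisper_ASR",    # assumed; taken from the model name
    learning_rate=1e-5,
    per_device_train_batch_size=2,
    per_device_eval_batch_size=2,
    gradient_accumulation_steps=4,  # effective train batch size: 2 * 4 = 8
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=50,
    num_train_epochs=30,
    fp16=True,                      # "Native AMP" mixed precision
)
```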

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 1.0   | 1    | 6.4006          |
| No log        | 2.0   | 2    | 6.4006          |
| No log        | 3.0   | 3    | 6.4006          |
| No log        | 4.0   | 4    | 6.4006          |
| No log        | 5.0   | 5    | 6.4006          |
| No log        | 6.0   | 6    | 6.3966          |
| No log        | 7.0   | 7    | 6.3910          |
| No log        | 8.0   | 8    | 6.3844          |
| No log        | 9.0   | 9    | 6.3857          |
| No log        | 10.0  | 10   | 6.3694          |
| No log        | 11.0  | 11   | 6.3381          |
| No log        | 12.0  | 12   | 6.3115          |
| No log        | 13.0  | 13   | 6.2788          |
| No log        | 14.0  | 14   | 6.2411          |
| No log        | 15.0  | 15   | 6.1932          |
| No log        | 16.0  | 16   | 6.1552          |
| No log        | 17.0  | 17   | 6.1146          |
| No log        | 18.0  | 18   | 6.0711          |
| No log        | 19.0  | 19   | 6.0122          |
| No log        | 20.0  | 20   | 5.9643          |
| No log        | 21.0  | 21   | 5.9193          |
| No log        | 22.0  | 22   | 5.8783          |
| No log        | 23.0  | 23   | 5.8258          |
| No log        | 24.0  | 24   | 5.7742          |
| 6.7306        | 25.0  | 25   | 5.7174          |
| 6.7306        | 26.0  | 26   | 5.6629          |
| 6.7306        | 27.0  | 27   | 5.6041          |
| 6.7306        | 28.0  | 28   | 5.5453          |
| 6.7306        | 29.0  | 29   | 5.4896          |
| 6.7306        | 30.0  | 30   | 5.4256          |

### Framework versions

- PEFT 0.9.0
- Transformers 4.38.2
- Pytorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.0