---
language:
  - te
license: apache-2.0
tags:
  - whisper-event
  - generated_from_trainer
datasets:
  - google/fleurs
metrics:
  - wer
base_model: openai/whisper-small
model-index:
  - name: whisper-small-telugu
    results:
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: google/fleurs
          type: google/fleurs
          config: te_in
          split: test
        metrics:
          - type: wer
            value: 39.67740444608772
            name: Wer
---

whisper-small-telugu

This model is a fine-tuned version of openai/whisper-small on the Telugu (te_in) configuration of the google/fleurs dataset. It achieves the following results on the evaluation set (the google/fleurs Telugu test split):

  • Loss: 0.3622
  • Wer: 39.6774

For comparison, openai/whisper-small has the following zero-shot performance on the same google/fleurs Telugu test set:

  • Wer: 117.91
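
As a quick usage reference, here is a minimal inference sketch with the transformers pipeline API; the model id and the audio file name below are placeholders, not values confirmed by this card.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint; replace with the actual Hub id or a local path.
asr = pipeline(
    "automatic-speech-recognition",
    model="whisper-small-telugu",  # hypothetical id
)

# chunk_length_s lets the pipeline handle audio longer than Whisper's 30 s window.
result = asr("telugu_sample.wav", chunk_length_s=30)
print(result["text"])
```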

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed
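
The metadata above does identify the evaluation split, however; a minimal sketch of loading it with the datasets library (column names follow the FLEURS schema):

```python
from datasets import load_dataset

# Telugu (te_in) test split of FLEURS, as referenced in the model metadata.
fleurs_te_test = load_dataset("google/fleurs", "te_in", split="test")

example = fleurs_te_test[0]
print(example["transcription"])           # reference text
print(example["audio"]["sampling_rate"])  # FLEURS audio is 16 kHz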

Training procedure

Training hyperparameters

The following hyperparameters were used during training (see the Seq2SeqTrainingArguments sketch after this list):

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • gradient_accumulation_steps: 2
  • total_train_batch_size: 8
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 10000
  • mixed_precision_training: Native AMP
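
A sketch (not the original training script) showing how these hyperparameters map onto transformers' Seq2SeqTrainingArguments; output_dir is an assumption.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-small-telugu",  # hypothetical
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,      # effective train batch size: 8
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=10000,
    fp16=True,                          # Native AMP mixed precision
)
```

Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer's default optimizer, so it needs no explicit argument here.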

Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer     |
|--------------:|------:|------:|----------------:|--------:|
| 0.2623        | 1.55  | 500   | 0.2733          | 65.9750 |
| 0.0859        | 3.1   | 1000  | 0.2045          | 39.7652 |
| 0.0538        | 4.64  | 1500  | 0.2220          | 42.3811 |
| 0.0265        | 6.19  | 2000  | 0.2526          | 42.3626 |
| 0.0179        | 7.74  | 2500  | 0.2754          | 42.1685 |
| 0.008         | 9.29  | 3000  | 0.2966          | 41.2257 |
| 0.0061        | 10.83 | 3500  | 0.2950          | 40.6202 |
| 0.0034        | 12.38 | 4000  | 0.3049          | 40.3198 |
| 0.004         | 13.93 | 4500  | 0.3106          | 40.5879 |
| 0.0018        | 15.48 | 5000  | 0.3199          | 40.1812 |
| 0.0016        | 17.03 | 5500  | 0.3346          | 39.8345 |
| 0.0006        | 18.57 | 6000  | 0.3337          | 40.2274 |
| 0.0003        | 20.12 | 6500  | 0.3396          | 40.2597 |
| 0.0005        | 21.67 | 7000  | 0.3465          | 40.1072 |
| 0.0002        | 23.22 | 7500  | 0.3485          | 39.7282 |
| 0.0002        | 24.77 | 8000  | 0.3519          | 39.7837 |
| 0.0001        | 26.32 | 8500  | 0.3567          | 39.7560 |
| 0.0001        | 27.86 | 9000  | 0.3614          | 39.8068 |
| 0.0           | 29.41 | 9500  | 0.3609          | 39.4925 |
| 0.0           | 30.96 | 10000 | 0.3622          | 39.6774 |
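
The Wer column is the word error rate in percent; a sketch of computing it with the evaluate library (the example strings are placeholders):

```python
import evaluate

wer_metric = evaluate.load("wer")

# evaluate returns a fraction; the table reports it scaled to percent.
wer = 100 * wer_metric.compute(
    predictions=["hypothetical model transcript"],
    references=["hypothetical reference transcript"],
)
print(f"WER: {wer:.4f}")
```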

Framework versions

  • Transformers 4.24.0
  • Pytorch 1.13.0+cu117
  • Datasets 2.7.1
  • Tokenizers 0.13.2