---
license: apache-2.0
base_model: openai/whisper-small
tags:
  - whisper-event
  - generated_from_trainer
datasets:
  - google/fleurs
metrics:
  - wer
model-index:
  - name: Whisper Small Vietnamese
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: google/fleurs vi_vn
          type: google/fleurs
          config: vi_vn
          split: test
          args: vi_vn
        metrics:
          - name: Wer
            type: wer
            value: 18.305149884704075
---

# Whisper Small Vietnamese

This model is a fine-tuned version of openai/whisper-small on the google/fleurs vi_vn dataset. It achieves the following results on the evaluation set:

- Loss: 0.4476
- Wer: 18.3051

## Model description

This model is openai/whisper-small fine-tuned for Vietnamese automatic speech recognition on the Vietnamese (`vi_vn`) portion of Google FLEURS. It transcribes Vietnamese speech to text.

## Intended uses & limitations

The model is intended for transcribing Vietnamese speech. Because it was fine-tuned only on FLEURS read speech, accuracy on spontaneous, accented, domain-specific, or noisy audio may be lower than the WER reported here.
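
A minimal inference sketch, assuming the checkpoint is published under the repository id `arun100/whisper-small-vi-1` (taken from this repository's name) and that a local audio file is available:

```python
from transformers import pipeline

# Load this fine-tuned Whisper checkpoint for Vietnamese speech recognition.
# The repo id below is an assumption based on this repository's name.
asr = pipeline(
    "automatic-speech-recognition",
    model="arun100/whisper-small-vi-1",
)

# Transcribe a local audio file; "sample_vi.wav" is a placeholder path.
result = asr("sample_vi.wav")
print(result["text"])
```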

## Training and evaluation data

The model was fine-tuned and evaluated on the Vietnamese (`vi_vn`) configuration of the google/fleurs dataset; the loss and WER above are reported on its test split.
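
A short sketch of loading the evaluation split with the `datasets` library (feature names follow the public FLEURS schema):

```python
from datasets import load_dataset

# Vietnamese FLEURS test split, as used for evaluation in this card.
fleurs_vi_test = load_dataset("google/fleurs", "vi_vn", split="test")

print(fleurs_vi_test)                      # includes 'audio' and 'transcription' columns
print(fleurs_vi_test[0]["transcription"])  # reference text for the first utterance
```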

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a matching `Seq2SeqTrainingArguments` sketch follows the list):

- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 10000
- mixed_precision_training: Native AMP
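
Expressed as Hugging Face `Seq2SeqTrainingArguments`, the settings above look roughly like the sketch below; `output_dir`, the evaluation cadence, and `predict_with_generate` are assumptions, not values taken from this card.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-vi-1",  # assumed output directory
    learning_rate=1e-6,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=2,      # effective train batch size of 64
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=10000,
    seed=42,
    fp16=True,                          # "Native AMP" mixed precision
    evaluation_strategy="steps",        # assumed; the card reports metrics every 1000 steps
    eval_steps=1000,
    predict_with_generate=True,         # assumed; needed to compute WER during evaluation
)
```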

### Training results

| Training Loss | Epoch | Step  | Validation Loss | Wer     |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|
| 0.0083        | 86.0  | 1000  | 0.4476          | 18.3051 |
| 0.0022        | 173.0 | 2000  | 0.4754          | 18.8086 |
| 0.001         | 260.0 | 3000  | 0.4970          | 18.8278 |
| 0.0006        | 347.0 | 4000  | 0.5153          | 19.5042 |
| 0.0004        | 434.0 | 5000  | 0.5331          | 19.4081 |
| 0.0003        | 521.0 | 6000  | 0.5482          | 19.5042 |
| 0.0002        | 608.0 | 7000  | 0.5638          | 19.3659 |
| 0.0001        | 695.0 | 8000  | 0.5755          | 19.6195 |
| 0.0001        | 782.0 | 9000  | 0.5862          | 19.6503 |
| 0.0001        | 869.0 | 10000 | 0.5902          | 19.6349 |
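
The Wer column above can be reproduced with the `evaluate` library; the strings below are placeholders, not data from the training run.

```python
import evaluate

wer_metric = evaluate.load("wer")

# Placeholder predictions/references; in the real run these come from
# model generations and FLEURS reference transcriptions.
predictions = ["xin chào thế giới"]
references = ["xin chào các bạn"]

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```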

### Framework versions

- Transformers 4.37.0.dev0
- Pytorch 2.1.2+cu121
- Datasets 2.16.2.dev0
- Tokenizers 0.15.0