---
language:
  - pt
license: apache-2.0
base_model: openai/whisper-base
tags:
  - whisper-event
  - generated_from_trainer
datasets:
  - mozilla-foundation/common_voice_13_0
metrics:
  - wer
model-index:
  - name: Whisper Base Portuguese
    results:
      - task:
          name: Automatic Speech Recognition
          type: automatic-speech-recognition
        dataset:
          name: mozilla-foundation/common_voice_13_0 pt
          type: mozilla-foundation/common_voice_13_0
          config: pt
          split: test
          args: pt
        metrics:
          - name: Wer
            type: wer
            value: 18.80853021391253
---

# Whisper Base Portuguese

This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the mozilla-foundation/common_voice_13_0 pt dataset. It achieves the following results on the evaluation set:

- Loss: 0.6511
- Wer: 18.8085
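The Wer figure above is the word error rate in percent: the word-level edit distance between the reference and hypothesis transcripts, divided by the number of reference words. A minimal sketch of the computation (for illustration only — the score on this card comes from the evaluation pipeline, not this function):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution in three reference words -> WER of 1/3 (33.3%).
print(wer("o gato preto", "o gato branco"))
```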

## Model description

This model is openai/whisper-base fine-tuned for Portuguese automatic speech recognition on Common Voice 13.0.

## Intended uses & limitations

Intended for transcribing Portuguese speech. As with any fine-tuned ASR model, accuracy may degrade on audio that differs from the training data in domain, accent, or noise conditions.

## Training and evaluation data

The model was fine-tuned on the Portuguese (pt) configuration of mozilla-foundation/common_voice_13_0; the reported WER is measured on its test split.

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2.5e-05
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 20000
- mixed_precision_training: Native AMP
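With the linear scheduler above, the learning rate ramps from 0 to the 2.5e-05 peak over the 500 warmup steps, then decays linearly to 0 at step 20000. A small sketch of that schedule's shape (illustrative, not the training code itself):

```python
PEAK_LR = 2.5e-05       # learning_rate
WARMUP_STEPS = 500      # lr_scheduler_warmup_steps
TRAINING_STEPS = 20_000  # training_steps

def learning_rate(step: int) -> float:
    """Linear warmup to PEAK_LR, then linear decay to 0 at TRAINING_STEPS."""
    if step < WARMUP_STEPS:
        return PEAK_LR * step / WARMUP_STEPS
    remaining = TRAINING_STEPS - step
    return PEAK_LR * max(0.0, remaining / (TRAINING_STEPS - WARMUP_STEPS))

print(learning_rate(250))    # halfway through warmup -> 1.25e-05
print(learning_rate(500))    # peak -> 2.5e-05
print(learning_rate(20000))  # end of training -> 0.0
```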

### Training results

| Training Loss | Epoch  | Step  | Validation Loss | Wer     |
|:-------------:|:------:|:-----:|:---------------:|:-------:|
| 0.0324        | 7.04   | 1000  | 0.4417          | 19.1174 |
| 0.0064        | 14.08  | 2000  | 0.5218          | 19.4279 |
| 0.0037        | 21.13  | 3000  | 0.5563          | 19.4871 |
| 0.0029        | 28.17  | 4000  | 0.5665          | 19.4279 |
| 0.0018        | 35.21  | 5000  | 0.6106          | 19.6333 |
| 0.0021        | 42.25  | 6000  | 0.6238          | 20.0588 |
| 0.0003        | 49.3   | 7000  | 0.6348          | 19.3277 |
| 0.0002        | 56.34  | 8000  | 0.6511          | 18.8085 |
| 0.0001        | 63.38  | 9000  | 0.6660          | 18.8102 |
| 0.0001        | 70.42  | 10000 | 0.6805          | 18.8595 |
| 0.0001        | 77.46  | 11000 | 0.6956          | 18.9597 |
| 0.0           | 84.51  | 12000 | 0.7114          | 19.0336 |
| 0.0           | 91.55  | 13000 | 0.7280          | 19.0583 |
| 0.0           | 98.59  | 14000 | 0.7444          | 19.1519 |
| 0.0           | 105.63 | 15000 | 0.7608          | 19.2094 |
| 0.0           | 112.68 | 16000 | 0.7768          | 19.2028 |
| 0.0           | 119.72 | 17000 | 0.7913          | 19.2735 |
| 0.0           | 126.76 | 18000 | 0.8042          | 19.2899 |
| 0.0           | 133.8  | 19000 | 0.8136          | 19.3261 |
| 0.0           | 140.85 | 20000 | 0.8176          | 19.3474 |
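The evaluation results at the top of the card (Loss 0.6511, Wer 18.8085) match the step-8000 row: that checkpoint has the lowest validation WER, and both loss and WER drift upward on later checkpoints, a sign of overfitting as training continues to step 20000. Selecting the best checkpoint from the table data (values copied from above, first ten evaluations shown):

```python
# (step, validation_loss, wer) from the training-results table.
results = [
    (1000, 0.4417, 19.1174), (2000, 0.5218, 19.4279),
    (3000, 0.5563, 19.4871), (4000, 0.5665, 19.4279),
    (5000, 0.6106, 19.6333), (6000, 0.6238, 20.0588),
    (7000, 0.6348, 19.3277), (8000, 0.6511, 18.8085),
    (9000, 0.6660, 18.8102), (10000, 0.6805, 18.8595),
]

# Pick the evaluation with the lowest WER.
best = min(results, key=lambda row: row[2])
print(best)  # (8000, 0.6511, 18.8085)
```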

### Framework versions

- Transformers 4.37.2
- Pytorch 2.0.1+cu117
- Datasets 2.14.4
- Tokenizers 0.15.1