---
license: apache-2.0
tags:
  - generated_from_trainer
metrics:
  - wer
model-index:
  - name: openai/whisper-base.en
    results:
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: rishabhjain16/infer_pfs
          type: rishabhjain16/infer_pfs
          config: en
          split: test
        metrics:
          - type: wer
            value: 33.53
            name: WER
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: rishabhjain16/infer_myst
          type: rishabhjain16/infer_myst
          config: en
          split: test
        metrics:
          - type: wer
            value: 15.17
            name: WER
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: rishabhjain16/infer_cmu
          type: rishabhjain16/infer_cmu
          config: en
          split: test
        metrics:
          - type: wer
            value: 13.32
            name: WER
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: rishabhjain16/libritts_dev_clean
          type: rishabhjain16/libritts_dev_clean
          config: en
          split: test
        metrics:
          - type: wer
            value: 7.43
            name: WER
---

# openai/whisper-base.en

This model is a fine-tuned version of [openai/whisper-base.en](https://huggingface.co/openai/whisper-base.en); the training dataset is not specified in this card. It achieves the following results on the evaluation set (a usage sketch follows the list):

- Loss: 0.6446
- WER: 16.4580
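
Below is a minimal inference sketch using the `transformers` pipeline API. The repo id is a placeholder, since the card does not name the uploaded checkpoint; substitute the actual Hub id.

```python
# Minimal inference sketch with the transformers ASR pipeline.
# NOTE: "rishabhjain16/whisper-base.en" is a placeholder repo id;
# replace it with the actual Hub id of this fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="rishabhjain16/whisper-base.en",
)

# Transcribe a local audio file; the pipeline decodes and resamples
# the audio to the model's expected 16 kHz sampling rate.
result = asr("sample.wav")
print(result["text"])
```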

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto `Seq2SeqTrainingArguments` follows the list):

- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
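
A sketch of how these values map onto `transformers`' `Seq2SeqTrainingArguments`; `output_dir` is an assumption, and `fp16=True` stands in for the "Native AMP" mixed-precision setting. Adam's betas and epsilon match the library defaults, so they need no explicit arguments.

```python
from transformers import Seq2SeqTrainingArguments

# Hyperparameters taken from the list above; output_dir is assumed.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-en-finetuned",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```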

### Training results

| Training Loss | Epoch | Step | Validation Loss | WER     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3205        | 4.02  | 1000 | 0.4080          | 14.5116 |
| 0.1568        | 8.04  | 2000 | 0.4672          | 15.3758 |
| 0.035         | 13.01 | 3000 | 0.5696          | 15.9737 |
| 0.0087        | 17.02 | 4000 | 0.6242          | 15.7283 |
| 0.0065        | 21.04 | 5000 | 0.6446          | 16.4580 |
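
The WER columns above are word error rates in percent. A minimal sketch of how such scores can be computed with the `evaluate` library; the transcript strings here are illustrative only.

```python
import evaluate

wer_metric = evaluate.load("wer")

# Illustrative transcripts; a real evaluation compares model output
# against the reference transcriptions of the test split.
predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

# evaluate returns WER as a fraction; multiply by 100 to match the table.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}")
```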

### Framework versions

- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2