---
language:
  - ar
license: apache-2.0
tags:
  - generated_from_trainer
base_model: openai/whisper-small
datasets:
  - zolfa
metrics:
  - wer
model-index:
  - name: Zolfa-raghadomar
    results:
      - task:
          type: automatic-speech-recognition
          name: Automatic Speech Recognition
        dataset:
          name: Zolfa Dataset
          type: zolfa
          args: 'config: ar, split: test'
        metrics:
          - type: wer
            value: 8.571428571428571
            name: Wer
---

Zolfa-raghadomar

This model is a fine-tuned version of openai/whisper-small on the Zolfa dataset. It achieves the following results on the evaluation set (a usage sketch follows the list below):

  • Loss: 0.2396
  • Wer: 8.5714
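Since the usage sections below are still placeholders, here is a minimal inference sketch using the Transformers ASR pipeline. The repository id, audio file name, and generation settings are assumptions to adapt, not values confirmed by this card.

```python
# Minimal sketch: load the fine-tuned checkpoint with the Transformers ASR pipeline.
# "raghadomar/Zolfa-raghadomar" and "sample.wav" are placeholders, not confirmed paths.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="raghadomar/Zolfa-raghadomar",  # replace with the actual repository id
    generate_kwargs={"language": "arabic", "task": "transcribe"},
)

# Whisper expects 16 kHz mono audio; the pipeline resamples common formats automatically.
print(asr("sample.wav")["text"])
```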

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; a sketch of the corresponding training arguments follows the list:

  • learning_rate: 1e-05
  • train_batch_size: 4
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 5
  • training_steps: 1000
  • mixed_precision_training: Native AMP
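For reference, these values map onto Transformers' Seq2SeqTrainingArguments roughly as sketched below. This is an illustrative reconstruction, not the actual training script, and output_dir is a placeholder; the listed Adam betas and epsilon are the Trainer defaults, so they are not set explicitly.

```python
# Sketch of Seq2SeqTrainingArguments matching the listed hyperparameters.
# The real training script is not part of this card; output_dir is hypothetical.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./zolfa-raghadomar",   # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=8,
    seed=42,
    warmup_steps=5,
    max_steps=1000,
    lr_scheduler_type="linear",
    fp16=True,                         # corresponds to "Native AMP" mixed precision
)
```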

Training results

| Training Loss | Epoch  | Step | Validation Loss | WER (%) |
|---------------|--------|------|-----------------|---------|
| 0.0673        | 0.6993 | 100  | 0.2047          | 12.8571 |
| 0.0239        | 1.3986 | 200  | 0.2376          | 10.4082 |
| 0.0087        | 2.0979 | 300  | 0.2165          | 9.7959  |
| 0.0090        | 2.7972 | 400  | 0.2169          | 7.9592  |
| 0.0034        | 3.4965 | 500  | 0.2277          | 8.5714  |
| 0.0041        | 4.1958 | 600  | 0.2401          | 8.5714  |
| 0.0032        | 4.8951 | 700  | 0.2395          | 7.9592  |
| 0.0007        | 5.5944 | 800  | 0.2430          | 8.5714  |
| 0.0002        | 6.2937 | 900  | 0.2380          | 8.5714  |
| 0.0015        | 6.9930 | 1000 | 0.2396          | 8.5714  |
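The WER column is a percentage (e.g. 8.5714 means roughly 8.6% of words differ from the reference). A minimal sketch of computing it with the evaluate library, which is the usual choice in Whisper fine-tuning setups; the exact evaluation code for this run is not shown in the card.

```python
import evaluate

wer_metric = evaluate.load("wer")

# Toy example; in practice, predictions are the model's decoded transcriptions
# and references are the ground-truth transcriptions from the test split.
predictions = ["the cat sat on the mat"]
references = ["the cat sat on a mat"]

# evaluate's wer returns a fraction; the table above reports percentages.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")
```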

Framework versions

  • Transformers 4.41.2
  • PyTorch 2.3.0+cu121
  • Datasets 2.19.2
  • Tokenizers 0.19.1