---
language:
  - zh
base_model: openai/whisper-large-v2
tags:
  - generated_from_trainer
datasets:
  - LeoKuo49/Amitabha
model-index:
  - name: Whisper largev2 amitabha
    results: []
---

# Whisper largev2 amitabha

This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the LeoKuo49/Amitabha dataset. It achieves the following results on the evaluation set:

- Loss: 0.0000
- CER: 3.0142
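
A minimal inference sketch using the transformers ASR pipeline. The repo id `LeoKuo49/whisper-large-v2` and the file `audio.wav` are illustrative assumptions, not confirmed by this card:

```python
from transformers import pipeline

# "LeoKuo49/whisper-large-v2" is an assumed repo id for this fine-tuned checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="LeoKuo49/whisper-large-v2",
    chunk_length_s=30,  # Whisper processes audio in 30-second windows
)

# Transcribe a local Mandarin recording, forcing the zh/transcribe task.
result = asr("audio.wav", generate_kwargs={"language": "zh", "task": "transcribe"})
print(result["text"])
```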

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (see the sketch after this list for how they map onto `Seq2SeqTrainingArguments`):

- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9, 0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
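
A minimal sketch of how these values could be expressed with transformers' `Seq2SeqTrainingArguments`; the output directory is an illustrative assumption, not taken from the original run:

```python
from transformers import Seq2SeqTrainingArguments

# Hyperparameters copied from the list above; output_dir is a hypothetical path.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-v2-amitabha",  # assumed, not from the original run
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    # Adam betas/epsilon match the optimizer settings reported above.
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=4000,
    fp16=True,  # "Native AMP" mixed-precision training
)
```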

### Training results

| Training Loss | Epoch   | Step | Validation Loss | CER    |
|:-------------:|:-------:|:----:|:---------------:|:------:|
| 0.0121        | 9.1743  | 1000 | 0.0062          | 4.9920 |
| 0.0002        | 18.3486 | 2000 | 0.0002          | 3.0260 |
| 0.0           | 27.5229 | 3000 | 0.0001          | 3.0142 |
| 0.0001        | 36.6972 | 4000 | 0.0000          | 3.0142 |
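
The CER column is character error rate, which such auto-generated cards typically report as a percentage. A minimal sketch of computing it with the `evaluate` library; the example strings are illustrative, not drawn from the evaluation set:

```python
import evaluate

cer_metric = evaluate.load("cer")

# Toy predictions/references; the real evaluation used the held-out set.
predictions = ["南無阿彌陀佛"]
references = ["南無阿彌陀佛"]

cer = cer_metric.compute(predictions=predictions, references=references)
print(f"CER: {100 * cer:.4f}")  # evaluate returns a fraction; the table reports percent
```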

### Framework versions

- Transformers 4.41.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.2
- Tokenizers 0.19.1