
Whisper Medium TW

This model is a fine-tuned version of openai/whisper-medium on the mozilla-foundation/common_voice_11_0 dataset.
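A minimal transcription sketch using this checkpoint; the Hub id Jasper881108/whisper-medium-zh and the 16 kHz mono input are assumptions, not details stated in the card:

```python
# Sketch only: the model id and the input audio file are assumptions.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="Jasper881108/whisper-medium-zh",  # assumed Hub id for this card
)

# Transcribe a local audio file (any format ffmpeg can decode).
result = asr("sample.wav")
print(result["text"])
```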

Training and evaluation data

Training: mozilla-foundation/common_voice_11_0

Evaluation: mozilla-foundation/common_voice_11_0

Training procedure

  • The training audio was augmented with audiomentations using the PitchShift, TimeStretch, Gain, and AddGaussianNoise transforms, each applied with probability p=0.3 (see the sketch below).
  • A space is inserted between every Chinese character, as in the original Whisper paper; as a result, WER is equivalent to CER for this model.
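
A minimal sketch of the augmentation pipeline and label preprocessing described above, assuming the library defaults for each transform's value ranges (only the per-transform probability p=0.3 is stated in the card):

```python
# Sketch of the augmentation described above; transform ranges use the
# audiomentations defaults, only the probability p=0.3 comes from the card.
from audiomentations import AddGaussianNoise, Compose, Gain, PitchShift, TimeStretch

augment = Compose([
    PitchShift(p=0.3),
    TimeStretch(p=0.3),
    Gain(p=0.3),
    AddGaussianNoise(p=0.3),
])

def augment_audio(samples, sample_rate=16000):
    """Apply the stochastic augmentations to a 1-D float32 waveform."""
    return augment(samples=samples, sample_rate=sample_rate)

def space_characters(text):
    """Insert a space between Chinese characters so that WER equals CER."""
    return " ".join(text.replace(" ", ""))
```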

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 1e-05
  • train_batch_size: 1
  • eval_batch_size: 1
  • gradient_accumulation_steps: 32
  • optimizer: Adam
  • generation_max_length: 225
  • warmup_steps: 200
  • max_steps: 2000
  • fp16: True
  • evaluation_strategy: "steps"
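
A hedged sketch of how these values map onto Seq2SeqTrainingArguments in Transformers 4.27; output_dir, eval_steps, and predict_with_generate are assumptions not listed above:

```python
# Sketch only: output_dir, eval_steps and predict_with_generate are assumptions;
# the remaining values are taken from the hyperparameter list above.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-zh",   # assumed
    learning_rate=1e-5,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=32,     # effective batch size of 32
    warmup_steps=200,
    max_steps=2000,
    fp16=True,
    evaluation_strategy="steps",
    eval_steps=200,                     # assumed
    generation_max_length=225,
    predict_with_generate=True,         # assumed, needed to compute WER/CER during eval
)
```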

Framework versions

  • Transformers 4.27.1
  • Pytorch 2.0.1+cu120
  • Datasets 2.13.1