
whisper-large-clinical

This model is a fine-tuned version of openai/whisper-large-v3, trained on a private audiofolder dataset of 18.96 hours of clinical-note text and corresponding synthetic audio generated by a TTS API. It achieves the following results on the evaluation set (a short transcription example follows these figures):

  • Loss: 0.2757
  • WER: 5.2122

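A minimal way to try the checkpoint is the transformers automatic-speech-recognition pipeline, sketched below; the audio filename and the chunk length are placeholders, not settings taken from this card.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint into an ASR pipeline.
asr = pipeline(
    "automatic-speech-recognition",
    model="bayesianinversion/whisper-large-clinical",
)

# Transcribe a (placeholder) clinical dictation file; longer recordings
# can be processed in 30-second chunks.
result = asr("clinical_note.wav", chunk_length_s=30)
print(result["text"])
```
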
Training procedure

Training hyperparameters

The following hyperparameters were used during training (a sketch mapping them onto Seq2SeqTrainingArguments follows the list):

  • learning_rate: 5e-06
  • train_batch_size: 16
  • eval_batch_size: 8
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 5000
  • mixed_precision_training: Native AMP

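As referenced above, a hedged sketch of how these values might map onto transformers Seq2SeqTrainingArguments; the output_dir is an assumed placeholder, and the Adam betas and epsilon listed above are the optimizer defaults.

```python
from transformers import Seq2SeqTrainingArguments

# Assumed mapping of the hyperparameters listed above; output_dir is a
# placeholder, and the Adam betas/epsilon are the default optimizer settings.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-large-clinical",  # placeholder path
    learning_rate=5e-6,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    fp16=True,  # native AMP mixed-precision training
)
```
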
Training results

Training Loss   Epoch     Step   Validation Loss   WER
0.0143          9.0090    1000   0.2275            5.2605
0.0009          18.0180   2000   0.2468            5.1724
0.0003          27.0270   3000   0.2641            5.2548
0.0002          36.0360   4000   0.2728            5.2264
0.0002          45.0450   5000   0.2757            5.2122

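The WER values above are on the percentage scale. Whether this run used exactly this code is an assumption, but WER for Whisper fine-tunes is commonly computed with the evaluate library, as in the sketch below; the example strings are invented, not drawn from the private dataset.

```python
import evaluate

wer_metric = evaluate.load("wer")

# Invented example strings; the real references come from the private
# clinical-notes evaluation split.
predictions = ["patient reports mild chest pain"]
references = ["patient reports mild chest pain at rest"]

# compute() returns a fraction; multiplying by 100 gives the percentage
# scale used in the tables above (e.g. 5.2122).
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.2f}")
```
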
Framework versions

  • Transformers 4.41.2
  • PyTorch 2.3.0+cu121
  • Datasets 2.20.0
  • Tokenizers 0.19.1

Model size

  • 1.54B parameters (F32, safetensors)
