---
pretty_name: WhisperKit ASR Evaluation Results
viewer: false
library_name: whisperkit
tags:
  - whisper
  - whisperkit
  - coreml
  - asr
  - quantized
---

# WhisperKit-0.7.0 VAD Chunking Strategy Evaluation Results

This is an evaluation study to verify that the Voice Activity Detection (VAD) based chunk-and-batch strategy introduced in WhisperKit-0.7.0 does not decrease transcription quality. To measure the impact of chunking, we picked a random 10% subset of the earnings22 dataset, which comprises corporate earnings call recordings in English with various accents. The long-form nature (>1 hr/clip) and the density of speech in these clips are intended to stress-test VAD accuracy: if VAD is inaccurate, WhisperKit will present speech segments to the Whisper model that start mid-speech, causing Whisper to hallucinate at an increased rate.
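As a usage illustration (not the exact evaluation harness behind the tables below), the sketch assumes WhisperKit 0.7.0+, where `DecodingOptions` exposes a `chunkingStrategy` parameter with a `.vad` case, and assumes `transcribe` returns an array of `TranscriptionResult`; the model name and audio path are placeholders.

```swift
import WhisperKit

// Sketch: transcribe one long-form recording with and without VAD-based
// chunk-and-batch. Assumes WhisperKit >= 0.7.0, where `DecodingOptions`
// exposes a `chunkingStrategy` parameter with a `.vad` case.
func compareChunkingStrategies(audioPath: String) async throws {
    // Model name is a placeholder; any of the variants in the tables below applies.
    let pipe = try await WhisperKit(model: "large-v3")

    // VAD chunking: audio is split at detected speech boundaries, so each
    // chunk handed to Whisper starts at the beginning of a speech segment,
    // and chunks can be decoded as a batch.
    let vadOptions = DecodingOptions(chunkingStrategy: .vad)
    let withVAD = try await pipe.transcribe(audioPath: audioPath, decodeOptions: vadOptions)

    // Baseline: default sequential 30-second windows, no VAD chunking.
    let withoutVAD = try await pipe.transcribe(audioPath: audioPath, decodeOptions: DecodingOptions())

    print("with VAD:", withVAD.map(\.text).joined(separator: " "))
    print("without VAD:", withoutVAD.map(\.text).joined(separator: " "))
}
```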

## Dataset: earnings22-12hours

Long-Form Audio (>1hr/clip) - ~12 hours of earnings call recordings in English with various accents

### With VAD

| Model | WER (↓) | QoI (↑) | File Size (MB) | Code Commit |
|:----------------|--------:|--------:|---------------:|:-----------:|
| large-v3_turbo  | 11.97   | 100     | 3100           | Link        |
| large-v2        | 12.4    | 38.5    | 3100           | Link        |
| distil-large-v3 | 12.32   | 23.1    | 1510           | Link        |
| small.en        | 13.08   | 15.4    | 483            | Link        |
| small           | 13.27   | 15.4    | 483            | Link        |
| base.en         | 15.34   | 7.7     | 145            | Link        |
| base            | 16.62   | 7.7     | 145            | Link        |
| tiny.en         | 19.02   | 0       | 66             | Link        |
| tiny            | 21.21   | 0       | 66             | Link        |

### Without VAD

| Model | WER (↓) | QoI (↑) | File Size (MB) | Code Commit |
|:----------------|--------:|--------:|---------------:|:-----------:|
| large-v3_turbo  | 11.95   | 100     | 3100           | Link        |
| large-v2        | 13.76   | 15.4    | 3100           | Link        |
| distil-large-v3 | 13.03   | 15.4    | 1510           | Link        |
| small.en        | 15.39   | 7.7     | 483            | Link        |
| small           | 16.27   | 7.7     | 483            | Link        |
| base.en         | 19.62   | 0       | 145            | Link        |
| base            | 25.26   | 0       | 145            | Link        |
| tiny.en         | 23.79   | 0       | 66             | Link        |
| tiny            | 31.48   | 0       | 66             | Link        |
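
For reference, WER in both tables is the standard word error rate: the word-level edit distance between the model transcript and the reference transcript, divided by the number of reference words (lower is better). The sketch below is illustrative only and is not the evaluation pipeline linked in the tables, which typically also applies text normalization before scoring.

```swift
// Illustrative WER computation: word-level edit distance / reference word count.
// Edge cases are simplified (e.g. empty reference or hypothesis).
func wordErrorRate(reference: String, hypothesis: String) -> Double {
    let ref = reference.lowercased().split(separator: " ").map(String.init)
    let hyp = hypothesis.lowercased().split(separator: " ").map(String.init)
    if ref.isEmpty { return hyp.isEmpty ? 0 : 1 }
    if hyp.isEmpty { return 1 }

    // Dynamic-programming edit distance over words; substitutions,
    // insertions, and deletions all cost 1.
    var previous = Array(0...hyp.count)
    for i in 1...ref.count {
        var current = [i] + Array(repeating: 0, count: hyp.count)
        for j in 1...hyp.count {
            let substitution = previous[j - 1] + (ref[i - 1] == hyp[j - 1] ? 0 : 1)
            current[j] = min(substitution, previous[j] + 1, current[j - 1] + 1)
        }
        previous = current
    }
    return Double(previous[hyp.count]) / Double(ref.count)
}

// Example: one substitution over four reference words -> WER of 0.25.
// wordErrorRate(reference: "revenue grew in q3", hypothesis: "revenue grew in q2")
```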