---
pretty_name: "WhisperKit ASR Evaluation Results"
viewer: false
library_name: whisperkit
tags:
- whisper
- whisperkit
- coreml
- asr
- quantized
---
# WhisperKit Transcription Quality

## Dataset: `earnings22-12hours`

| Model | WER (↓) | QoI (↑) | File Size (MB) | Code Commit |
|:------|:--------|--------:|---------------:|:------------|
| [large-v3_turbo](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-large-v3_turbo) | [11.97](https://hf.co/datasets/argmaxinc/whisperkit-evals-staging/tree/main/WhisperKit/openai_whisper-large-v3_turbo/earnings22-12hours) | 100 | 3100 | [Link](https://github.com/argmaxinc/WhisperKit/commit/65cb888) |
| [large-v2](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-large-v2) | [12.4](https://hf.co/datasets/argmaxinc/whisperkit-evals-staging/tree/main/WhisperKit/openai_whisper-large-v2/earnings22-12hours) | 38.5 | 3100 | [Link](https://github.com/argmaxinc/WhisperKit/commit/65cb888) |
| [large-v2_turbo](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-large-v2_turbo) | [12.47](https://hf.co/datasets/argmaxinc/whisperkit-evals-staging/tree/main/WhisperKit/openai_whisper-large-v2_turbo/earnings22-12hours) | 38.5 | 3100 | [Link](https://github.com/argmaxinc/WhisperKit/commit/65cb888) |
| [distil-large-v3](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/distil-whisper_distil-large-v3) | [12.32](https://hf.co/datasets/argmaxinc/whisperkit-evals-staging/tree/main/WhisperKit/distil-whisper_distil-large-v3/earnings22-12hours) | 23.1 | 1510 | [Link](https://github.com/argmaxinc/WhisperKit/commit/65cb888) |
| [small.en](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-small.en) | [13.08](https://hf.co/datasets/argmaxinc/whisperkit-evals-staging/tree/main/WhisperKit/openai_whisper-small.en/earnings22-12hours) | 15.4 | 483 | [Link](https://github.com/argmaxinc/WhisperKit/commit/65cb888) |
| [small](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-small) | [13.27](https://hf.co/datasets/argmaxinc/whisperkit-evals-staging/tree/main/WhisperKit/openai_whisper-small/earnings22-12hours) | 15.4 | 483 | [Link](https://github.com/argmaxinc/WhisperKit/commit/65cb888) |
| [base.en](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-base.en) | [15.34](https://hf.co/datasets/argmaxinc/whisperkit-evals-staging/tree/main/WhisperKit/openai_whisper-base.en/earnings22-12hours) | 7.7 | 145 | [Link](https://github.com/argmaxinc/WhisperKit/commit/65cb888) |
| [base](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-base) | [16.62](https://hf.co/datasets/argmaxinc/whisperkit-evals-staging/tree/main/WhisperKit/openai_whisper-base/earnings22-12hours) | 7.7 | 145 | [Link](https://github.com/argmaxinc/WhisperKit/commit/65cb888) |
| [tiny.en](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-tiny.en) | [19.02](https://hf.co/datasets/argmaxinc/whisperkit-evals-staging/tree/main/WhisperKit/openai_whisper-tiny.en/earnings22-12hours) | 0 | 66 | [Link](https://github.com/argmaxinc/WhisperKit/commit/65cb888) |
| [tiny](https://hf.co/argmaxinc/whisperkit-coreml/tree/main/openai_whisper-tiny) | [21.21](https://hf.co/datasets/argmaxinc/whisperkit-evals-staging/tree/main/WhisperKit/openai_whisper-tiny/earnings22-12hours) | 0 | 66 | [Link](https://github.com/argmaxinc/WhisperKit/commit/65cb888) |

### Explanation

We believe that rigorously measuring the quality of inference is necessary for developers and
enterprises to make informed decisions when opting to use optimized or compressed variants of
any machine learning model in production. To contextualize `WhisperKit`, we take the following Whisper
implementations and benchmark them using a consistent evaluation harness:

Server-side:
- `WhisperOpenAIAPI`: [OpenAI's Whisper API](https://platform.openai.com/docs/guides/speech-to-text)

($0.36 per hour of audio as of 02/29/24, 25MB file size limit per request)

On-device:
- `WhisperKit`: Argmax's implementation [[Eval Harness]](https://github.com/argmaxinc/whisperkittools/blob/main/whisperkit/pipelines.py#L100) [[Repo]](https://github.com/argmaxinc/WhisperKit)
- `whisper.cpp`: A C++ implementation from ggerganov [[Eval Harness]](https://github.com/argmaxinc/whisperkittools/blob/main/whisperkit/pipelines.py#L212) [[Repo]](https://github.com/ggerganov/whisper.cpp)
- `WhisperMLX`: A Python implementation from Apple MLX [[Eval Harness]](https://github.com/argmaxinc/whisperkittools/blob/main/whisperkit/pipelines.py#L338) [[Repo]](https://github.com/ml-explore/mlx-examples/blob/main/whisper/whisper/transcribe.py)

(All on-device implementations are available for free under the MIT license as of 03/19/2024)

`WhisperOpenAIAPI` sets the reference and we assume that it is using the equivalent of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2)
in float16 precision along with additional undisclosed optimizations from OpenAI. In all measurements, we care primarily about per-example no-regressions (quantified as `qoi` below),
which is a stricter metric than the dataset-average [Word Error Rate (WER)](https://en.wikipedia.org/wiki/Word_error_rate). A 100% `qoi` preserves perfect backwards compatibility on the test distribution and avoids "perceived regressions", the phenomenon
where per-example known behavior changes after a code/model update and causes divergence in downstream code or breaks the user experience itself (even if dataset averages stay flat
across updates). Pseudocode for `qoi`:

```python
qoi = []
for example in dataset:
    # Per-example "no-regression": the optimized model is at least as
    # accurate as the reference model on this example
    no_regression = wer(optimized_model(example)) <= wer(reference_model(example))
    qoi.append(no_regression)
# Percentage of examples with no regression
qoi = (sum(qoi) / len(qoi)) * 100.
```
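
For a runnable version of the same idea, here is a minimal sketch that computes `qoi` from plain transcript strings using the third-party [jiwer](https://github.com/jitsi/jiwer) package. The transcripts and the `qoi_percent` helper are illustrative assumptions for this sketch, not part of whisperkittools:

```python
# Minimal runnable QoI sketch (assumes `pip install jiwer`). The
# transcripts below are placeholders, not outputs of any WhisperKit model.
import jiwer

references    = ["the quick brown fox", "hello world"]  # ground-truth text
baseline_out  = ["the quick brown fox", "hello word"]   # reference implementation
optimized_out = ["the quick brown fox", "hello world"]  # optimized/compressed model

def qoi_percent(refs, baseline_hyps, optimized_hyps):
    # An example counts as a "no-regression" when the optimized model's
    # WER is less than or equal to the baseline's WER on that example
    no_regressions = [
        jiwer.wer(ref, opt) <= jiwer.wer(ref, base)
        for ref, base, opt in zip(refs, baseline_hyps, optimized_hyps)
    ]
    return 100.0 * sum(no_regressions) / len(no_regressions)

print(qoi_percent(references, baseline_out, optimized_out))  # 100.0
```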

Note that the ordering of models with respect to `WER` does not necessarily match the ordering with respect to `QoI`. This is because the reference model is assigned
a QoI of 100% by definition: any per-example regression by other implementations gets penalized while per-example improvements are not rewarded. `QoI` (higher is better) matters
where the production behavior is established by the reference results and the goal is to not regress when switching to an optimized or compressed model. On the other hand,
`WER` (lower is better) matters when there is no established production behavior and one is picking the best quality-versus-model-size trade-off point.
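As a toy illustration of how the two metrics can diverge (all numbers below are made up for this sketch):

```python
# Per-example WERs for a reference model and a candidate model on a
# 4-example dataset (made-up numbers for illustration only).
reference = [0.10, 0.10, 0.10, 0.10]
candidate = [0.00, 0.00, 0.00, 0.30]  # large wins on 3 examples, one regression

avg_wer = sum(candidate) / len(candidate)  # 0.075, better than the reference's 0.10
qoi = 100.0 * sum(c <= r for c, r in zip(candidate, reference)) / len(reference)

# The candidate beats the reference on average WER (0.075 vs 0.10) yet
# scores only 75% QoI because one example regressed.
print(avg_wer, qoi)  # 0.075 75.0
```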

We expect developers that use Whisper (or similar models) in production to have their own Quality Assurance test sets, and [whisperkittools](https://github.com/argmaxinc/whisperkittools) offers
the tooling necessary to run the same measurements on such custom test sets; please see [Model Evaluation on Custom Dataset](https://github.com/argmaxinc/whisperkittools) for details.

### Why are there so many Whisper versions?
WhisperKit is an SDK for building speech-to-text features in apps across a wide range of Apple devices. We are working towards abstracting away the model versioning from the developer so WhisperKit
"just works" by deploying the highest-quality model version that a particular device can execute. In the interim, we leave the choice to the developer by providing quality and size trade-offs.

### Datasets
- [librispeech](https://huggingface.co/datasets/argmaxinc/librispeech): ~5 hours of short English audio clips, tests short-form transcription quality
- [earnings22](https://huggingface.co/datasets/argmaxinc/earnings22): ~120 hours of English audio clips from earnings calls with various accents, tests long-form transcription quality

### Reproducing Results
Benchmark results on this page were automatically generated by [whisperkittools](https://github.com/argmaxinc/whisperkittools) using our cluster of Apple Silicon Macs as self-hosted runners on
GitHub Actions. We periodically recompute these benchmarks as part of our CI pipeline. Due to [security concerns](https://docs.github.com/en/actions/security-guides/security-hardening-for-github-actions#hardening-for-self-hosted-runners),
we are unable to open up the cluster to the public. However, any Apple Silicon Mac (even with 8GB RAM) can be used to
run identical [evaluation jobs](#evaluation) locally. For reference, our M2 Ultra devices complete a `librispeech` + `openai/whisper-large-v3`
evaluation in under 1 hour regardless of the Whisper implementation. The oldest Apple Silicon Macs should take less than 1 day to complete the same evaluation.

### Glossary

- `_turbo`: Indicates the presence of additional optimizations (not compression) to unlock streaming transcription
as described in our [Blog Post](https://www.takeargmax.com/blog/whisperkit).

- `_*MB`: Indicates the presence of model compression. Instead of cluttering the filename with details like
`_AudioEncoder-5.8bits_TextDecoder-6.1bits_QLoRA-rank=16`, we choose to summarize the compression spec as the
resulting total file size since this is what matters to developers in production (see the back-of-envelope sketch below).
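
As a rough sketch of what a file size implies, dividing size by parameter count recovers the approximate average bit-width per weight that a `_*MB` suffix summarizes. The ~1.55B parameter count for the `large` family is an assumption taken from OpenAI's Whisper release, not a number published in this table:

```python
# Approximate average bit-width implied by a model file size.
# num_params (~1.55e9 for whisper-large-v3) is an assumption from
# OpenAI's Whisper release, not from this repository.
def avg_bits_per_parameter(file_size_mb: float, num_params: float) -> float:
    return file_size_mb * 1e6 * 8 / num_params

# e.g. the 3100 MB large-v3_turbo entry in the table above:
print(avg_bits_per_parameter(3100, 1.55e9))  # ~16.0, consistent with float16
```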