sanchit-gandhi committed
Commit 024ce6a
1 Parent(s): fce6dfa

Update README.md

Files changed (1)
  1. README.md +10 -10
README.md CHANGED
@@ -23,13 +23,13 @@ It is a distilled version of the Whisper model that is **6 times faster**, 49% s
  **within 1% WER** on out-of-distribution evaluation sets. This is the repository for distil-medium.en,
  a distilled variant of [Whisper medium.en](https://huggingface.co/openai/whisper-medium.en).
 
- | Model | Params / M | Rel. Latency | Short-Form WER | Long-Form WER |
- |----------------------------------------------------------------------------|------------|--------------|----------------|---------------|
- | [large-v2](https://huggingface.co/openai/whisper-large-v2) | 1550 | 1.0 | **9.1** | 11.7 |
- | | | | | |
- | [distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2) | 756 | 5.8 | 10.1 | **11.6** |
- | [distil-medium.en](https://huggingface.co/distil-whisper/distil-medium.en) | 394 | **6.8** | 11.1 | 12.4 |
- | [distil-small.en](https://huggingface.co/distil-whisper/distil-small.en) | **166** | 5.6 | 12.1 | 12.8 |
+ | Model | Params / M | Rel. Latency | Short-Form WER | Long-Form WER |
+ |----------------------------------------------------------------------------|------------|----------------|------------------|-----------------|
+ | [large-v2](https://huggingface.co/openai/whisper-large-v2) | 1550 | 1.0 | **9.1** | 11.7 |
+ | | | | | |
+ | [distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2) | 756 | 5.8 | 10.1 | **11.6** |
+ | [distil-medium.en](https://huggingface.co/distil-whisper/distil-medium.en) | 394 | **6.8** | 11.1 | 12.4 |
+ | [distil-small.en](https://huggingface.co/distil-whisper/distil-small.en) | **166** | 5.6 | 12.1 | 12.8 |
 
  **Note:** Distil-Whisper is currently only available for English speech recognition. We are working with the community
  to distill Whisper on other languages. If you are interested in distilling Whisper in your language, check out the
@@ -148,9 +148,9 @@ result = pipe("https://huggingface.co/datasets/sanchit-gandhi/librispeech_long/r
 
  ### Speculative Decoding
 
- Distil-Whisper can be used as an assistant model to Whisper for speculative decoding. Speculative decoding mathematically
- ensures the exact same outputs as Whisper are obtained while being 2 times faster. This makes it the perfect drop-in
- replacement for existing Whisper pipelines, since the same outputs are guaranteed.
+ Distil-Whisper can be used as an assistant model to Whisper for [speculative decoding](https://huggingface.co/blog/whisper-speculative-decoding).
+ Speculative decoding mathematically ensures the exact same outputs as Whisper are obtained while being 2 times faster.
+ This makes it the perfect drop-in replacement for existing Whisper pipelines, since the same outputs are guaranteed.
 
  In the following code-snippet, we load the assistant Distil-Whisper model standalone to the main Whisper pipeline. We then
  specify it as the "assistant model" for generation:
 
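The second hunk header shows the README's `result = pipe(...)` call from its long-form transcription example. For context on how the distil-medium.en model from the updated table is typically run, here is a minimal sketch of that usage pattern, assuming the standard transformers ASR pipeline API; the chunking and batching values are illustrative, and `"audio.wav"` stands in for the truncated sample URL in the hunk header.

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

model_id = "distil-whisper/distil-medium.en"

# Load the distilled checkpoint and its processor (tokenizer + feature extractor).
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
).to(device)
processor = AutoProcessor.from_pretrained(model_id)

# Chunked long-form transcription: audio is cut into 15 s windows that are
# transcribed in batches and stitched back together.
pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    chunk_length_s=15,
    batch_size=16,
    torch_dtype=torch_dtype,
    device=device,
)

# "audio.wav" is a placeholder for any local file or URL.
result = pipe("audio.wav")
print(result["text"])
```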
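The paragraph edited in the second hunk links the speculative decoding blog post; below is a sketch of that setup, assuming the transformers assisted-generation API described there. The main openai/whisper-medium.en model verifies draft tokens proposed by the Distil-Whisper decoder, so outputs are token-for-token identical to running Whisper alone. Argument names follow current transformers conventions rather than this commit's own snippet, which is not shown in the diff.

```python
import torch
from transformers import (
    AutoModelForCausalLM,
    AutoModelForSpeechSeq2Seq,
    AutoProcessor,
    pipeline,
)

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Draft model: Distil-Whisper keeps the teacher's encoder, so only its
# 2-layer decoder is loaded (as a WhisperForCausalLM) and it reuses the
# main model's encoder outputs when proposing candidate tokens.
assistant_model = AutoModelForCausalLM.from_pretrained(
    "distil-whisper/distil-medium.en",
    torch_dtype=torch_dtype,
    low_cpu_mem_usage=True,
    use_safetensors=True,
).to(device)

# Main model: the original Whisper checkpoint, which verifies the drafts.
model_id = "openai/whisper-medium.en"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id, torch_dtype=torch_dtype, low_cpu_mem_usage=True, use_safetensors=True
).to(device)
processor = AutoProcessor.from_pretrained(model_id)

pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    generate_kwargs={"assistant_model": assistant_model},
    torch_dtype=torch_dtype,
    device=device,
)

result = pipe("audio.wav")  # placeholder path; outputs match plain Whisper
print(result["text"])
```

Because the draft model is only the cheap two-layer decoder, drafting adds little overhead while the main model verifies several tokens per forward pass, which is where the roughly 2x speed-up claimed in the edited paragraph comes from.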