sanchit-gandhi (HF staff) committed
Commit 7a5f483
1 Parent(s): 255ed69

Update README.md

Files changed (1):
  1. README.md (+10 -10)
README.md CHANGED
@@ -34,13 +34,13 @@ For most other applications, the [distil-medium.en](https://huggingface.co/disti
 or [distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2) checkpoints are recommended, since they are
 both faster and achieve better WER results:
 
-| Model                                                                      | Params / M | Rel. Latency | Short-Form WER | Long-Form WER |
-|----------------------------------------------------------------------------|------------|--------------|----------------|---------------|
-| [large-v2](https://huggingface.co/openai/whisper-large-v2)                 | 1550       | 1.0          | **9.1**        | 11.7          |
-|                                                                            |            |              |                |               |
-| [distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2)   | 756        | 5.8          | 10.1           | **11.6**      |
-| [distil-medium.en](https://huggingface.co/distil-whisper/distil-medium.en) | 394        | **6.8**      | 11.1           | 12.4          |
-| [distil-small.en](https://huggingface.co/distil-whisper/distil-small.en)   | **166**    | 5.6          | 12.1           | 12.8          |
+| Model                                                                      | Params / M | Rel. Latency   | Short-Form WER   | Long-Form WER   |
+|----------------------------------------------------------------------------|------------|----------------|------------------|-----------------|
+| [large-v2](https://huggingface.co/openai/whisper-large-v2)                 | 1550       | 1.0            | **9.1**          | 11.7            |
+|                                                                            |            |                |                  |                 |
+| [distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2)   | 756        | 5.8            | 10.1             | **11.6**        |
+| [distil-medium.en](https://huggingface.co/distil-whisper/distil-medium.en) | 394        | **6.8**        | 11.1             | 12.4            |
+| [distil-small.en](https://huggingface.co/distil-whisper/distil-small.en)   | **166**    | 5.6            | 12.1             | 12.8            |
 
 **Note:** Distil-Whisper is currently only available for English speech recognition. We are working with the community
 to distill Whisper on other languages. If you are interested in distilling Whisper in your language, check out the
@@ -168,9 +168,9 @@ result = pipe("https://huggingface.co/datasets/sanchit-gandhi/librispeech_long/r
 
 ### Speculative Decoding
 
-Distil-Whisper can be used as an assistant model to Whisper for speculative decoding. Speculative decoding mathematically
-ensures the exact same outputs as Whisper are obtained while being 2 times faster. This makes it the perfect drop-in
-replacement for existing Whisper pipelines, since the same outputs are guaranteed.
+Distil-Whisper can be used as an assistant model to Whisper for [speculative decoding](https://huggingface.co/blog/whisper-speculative-decoding).
+Speculative decoding mathematically ensures the exact same outputs as Whisper are obtained while being 2 times faster.
+This makes it the perfect drop-in replacement for existing Whisper pipelines, since the same outputs are guaranteed.
 
 In the following code-snippet, we load the assistant Distil-Whisper model standalone to the main Whisper pipeline. We then
 specify it as the "assistant model" for generation:
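
The diff cuts off before the code snippet that this paragraph introduces. For context, here is a minimal sketch of what such a speculative-decoding pipeline can look like with the 🤗 Transformers assisted-generation API; the model IDs and the `generate_kwargs={"assistant_model": ...}` wiring follow the Distil-Whisper documentation, but the exact snippet in this README is not shown in the diff, and the input filename below is a placeholder:

```python
import torch
from transformers import AutoModelForSpeechSeq2Seq, AutoProcessor, pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
torch_dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Assistant (draft) model: Distil-Whisper proposes candidate tokens cheaply
assistant_model = AutoModelForSpeechSeq2Seq.from_pretrained(
    "distil-whisper/distil-large-v2",
    torch_dtype=torch_dtype,
    low_cpu_mem_usage=True,
    use_safetensors=True,
).to(device)

# Main (verifier) model: Whisper accepts or rejects the drafted tokens
model_id = "openai/whisper-large-v2"
model = AutoModelForSpeechSeq2Seq.from_pretrained(
    model_id,
    torch_dtype=torch_dtype,
    low_cpu_mem_usage=True,
    use_safetensors=True,
).to(device)

processor = AutoProcessor.from_pretrained(model_id)

# Passing the assistant through generate_kwargs enables assisted generation
pipe = pipeline(
    "automatic-speech-recognition",
    model=model,
    tokenizer=processor.tokenizer,
    feature_extractor=processor.feature_extractor,
    max_new_tokens=128,
    generate_kwargs={"assistant_model": assistant_model},
    torch_dtype=torch_dtype,
    device=device,
)

result = pipe("audio.mp3")  # placeholder: any local audio file or URL
print(result["text"])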