sanchit-gandhi committed 66bb165 (1 parent: 16a9ff6)

Update README.md

Files changed (1): README.md (+8 -2)
README.md CHANGED
@@ -25,12 +25,18 @@ a distilled variant of [Whisper large-v2](https://huggingface.co/openai/whisper-
 
  | Model | Params / M | Rel. Latency ↑ | Short-Form WER ↓ | Long-Form WER ↓ |
  |----------------------------------------------------------------------------|------------|----------------|------------------|-----------------|
- | [large-v2](https://huggingface.co/openai/whisper-large-v2) | 1550 | 1.0 | **9.1** | 11.7 |
+ | [large-v3](https://huggingface.co/openai/whisper-large-v3) | 1550 | 1.0 | **8.4** | 11.0 |
+ | [large-v2](https://huggingface.co/openai/whisper-large-v2) | 1550 | 1.0 | 9.1 | 11.7 |
  | | | | | |
- | [distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2) | 756 | 5.8 | 10.1 | **11.6** |
+ | [distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3) | 756 | 6.3 | 9.7 | **10.8** |
+ | [distil-large-v2](https://huggingface.co/distil-whisper/distil-large-v2) | 756 | 5.8 | 10.1 | 11.6 |
  | [distil-medium.en](https://huggingface.co/distil-whisper/distil-medium.en) | 394 | **6.8** | 11.1 | 12.4 |
  | [distil-small.en](https://huggingface.co/distil-whisper/distil-small.en) | **166** | 5.6 | 12.1 | 12.8 |
 
+ <div class="course-tip course-tip-orange bg-gradient-to-br dark:bg-gradient-to-r before:border-orange-500 dark:before:border-orange-800 from-orange-50 dark:from-gray-900 to-white dark:to-gray-950 border border-orange-50 text-orange-700 dark:text-gray-400">
+ <p><b>Update:</b> following the release of OpenAI's Whisper large-v3, an updated <a href="https://huggingface.co/distil-whisper/distil-large-v3">distil-large-v3</a> model was published. It surpasses distil-large-v2 in performance, with no architecture changes and better support for sequential long-form generation. It is therefore recommended that <a href="https://huggingface.co/distil-whisper/distil-large-v3">distil-large-v3</a> be used in place of distil-large-v2.</p>
+ </div>
+
  **Note:** Distil-Whisper is currently only available for English speech recognition. We are working with the community
  to distill Whisper on other languages. If you are interested in distilling Whisper in your language, check out the
  provided [training code](https://github.com/huggingface/distil-whisper/tree/main/training). We will update the
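
For readers acting on the update note above: switching checkpoints amounts to changing the model id. Below is a minimal sketch using the 🤗 Transformers `pipeline` API; the audio path `audio.mp3` and the CUDA device are illustrative assumptions, not part of this commit.

```python
import torch
from transformers import pipeline

# Load the recommended checkpoint. Reverting to distil-large-v2 would
# only require changing this model id.
asr = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-large-v3",
    torch_dtype=torch.float16,  # assumes a GPU; omit on CPU
    device="cuda:0",            # assumes a CUDA device; omit on CPU
)

# Transcribe a local file ("audio.mp3" is a placeholder path).
result = asr("audio.mp3")
print(result["text"])
```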