sanchit-gandhi (HF staff) committed
Commit 39c4a38
1 Parent(s): 96a67a8

Update README.md

Files changed (1)
  1. README.md +5 -4
README.md CHANGED
```diff
@@ -24,10 +24,11 @@ OpenAI's [Whisper large-v3](https://huggingface.co/openai/whisper-large-v3), the
 to date.
 
 Compared to previous Distil-Whisper models, the distillation procedure for distil-large-v3 has been adapted to give
-**superior long-form transcription accuracy** with OpenAI's **sequential long-form algorithm**. The result is a distilled
-model that performs to within 1% WER of large-v3 on long-form audio using both the sequential and chunked algorithms, and
-outperforms distil-large-v2 by 4.8% using the sequential algorithm. The model is also faster than previous Distil-Whisper
-models: **6.3x faster than large-v3**, and 1.1x faster than distil-large-v2.
+**superior long-form transcription accuracy** with OpenAI's **sequential long-form algorithm**.
+
+The result is a distilled model that performs to within 1% WER of large-v3 on long-form audio using both the sequential
+and chunked algorithms, and outperforms distil-large-v2 by 4.8% using the sequential algorithm. The model is also faster
+than previous Distil-Whisper models: **6.3x faster than large-v3**, and 1.1x faster than distil-large-v2.
 
 | Model | Params / M | Rel. Latency | Short-Form | Sequential Long-Form | Chunked Long-Form |
 |------------------------------------------------------------------------------|------------|--------------|------------|----------------------|-------------------|
```
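For context on the two long-form modes the edited paragraph contrasts, below is a minimal sketch (not part of this commit) using the `transformers` pipeline API: omitting `chunk_length_s` uses sequential long-form decoding, while setting it enables the chunked algorithm. The file `audio.wav` is a placeholder; substitute your own recording.

```python
# Minimal sketch: sequential vs. chunked long-form transcription
# with distil-whisper/distil-large-v3 via the transformers pipeline.
import torch
from transformers import pipeline

device = "cuda:0" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if torch.cuda.is_available() else torch.float32

# Sequential long-form: the audio is decoded in order with OpenAI's
# buffered sequential algorithm (no chunk_length_s set).
sequential_pipe = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-large-v3",
    torch_dtype=dtype,
    device=device,
)
print(sequential_pipe("audio.wav")["text"])  # "audio.wav" is a placeholder

# Chunked long-form: the audio is split into fixed-length chunks that can
# be transcribed in parallel and merged, trading a little accuracy for speed.
chunked_pipe = pipeline(
    "automatic-speech-recognition",
    model="distil-whisper/distil-large-v3",
    torch_dtype=dtype,
    device=device,
    chunk_length_s=25,  # distil-large-v3 is trained on 25-second segments
)
print(chunked_pipe("audio.wav", batch_size=8)["text"])
```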