sanchit-gandhi committed on
Commit 9094e51
1 Parent(s): d83d188

Add weights and readme

README.md ADDED
@@ -0,0 +1,73 @@
+ ---
+ language:
+ - en
+ tags:
+ - audio
+ - automatic-speech-recognition
+ license: mit
+ ---
+
+ # Distil-Whisper: distil-large-v3 for Whisper.cpp
+
+ This repository contains the model weights for [distil-large-v3](https://huggingface.co/distil-whisper/distil-large-v3)
+ converted to [GGML](https://github.com/ggerganov/ggml) format. GGML is the weight format expected by C/C++ packages
+ such as [Whisper.cpp](https://github.com/ggerganov/whisper.cpp), for which we provide an example below.
+
+ Compared to previous Distil-Whisper releases, distil-large-v3 is specifically designed to give one-to-one equivalence
+ with the Whisper.cpp long-form transcription algorithm. In our benchmark over 4 out-of-distribution datasets, distil-large-v3
+ outperformed distil-large-v2 by an average of 5% WER. You can therefore expect significant performance gains by switching
+ to this latest checkpoint.
+
+ ## Usage
+
+ Distil-Whisper can be run with the [Whisper.cpp](https://github.com/ggerganov/whisper.cpp) package using the original
+ sequential long-form transcription algorithm. In a provisional benchmark on a Mac M1, distil-large-v3 is over 5x faster
+ than Whisper large-v3, while performing to within 0.8% WER over long-form audio.
+
+ Steps for getting started:
+
+ 1. Clone the Whisper.cpp repository:
+
+ ```bash
+ git clone https://github.com/ggerganov/whisper.cpp.git
+ cd whisper.cpp
+ ```
+ 2. Download the GGML weights for distil-large-v3 in fp16 from the Hugging Face Hub:
+
+ ```bash
+ python -c "from huggingface_hub import hf_hub_download; hf_hub_download(repo_id='distil-whisper/distil-large-v3-ggml', filename='ggml-distil-large-v3.bin', local_dir='./models')"
+ ```
+
+ Note that if you do not have the `huggingface_hub` package installed, you can also download the weights with `wget`
+ (an optional integrity check for the download is shown after this list):
+
+ ```bash
+ wget https://huggingface.co/distil-whisper/distil-large-v3-ggml/resolve/main/ggml-distil-large-v3.bin -P ./models
+ ```
+
+ 3. Run inference using the provided sample audio (to transcribe your own files, see the conversion note after this list):
+
+ ```bash
+ make -j && ./main -m models/ggml-distil-large-v3.bin -f samples/jfk.wav
+ ```
+
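+ To transcribe your own files, note that the Whisper.cpp example binary reads 16-bit WAV audio sampled at 16 kHz, so
+ other formats need converting first. A minimal sketch using `ffmpeg` (the `audio.mp3` / `audio.wav` filenames are
+ placeholders for your own files):
+
+ ```bash
+ # Resample to the 16 kHz mono 16-bit PCM WAV that Whisper.cpp's example binary reads
+ ffmpeg -i audio.mp3 -ar 16000 -ac 1 -c:a pcm_s16le audio.wav
+ ./main -m models/ggml-distil-large-v3.bin -f audio.wav
+ ```
+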
+ ## Model Details
+
+ For more information about the distil-large-v3 model, refer to the original [model card](https://huggingface.co/distil-whisper/distil-large-v3).
+
+ ## License
+
+ Distil-Whisper inherits the [MIT license](https://github.com/huggingface/distil-whisper/blob/main/LICENSE) from OpenAI's Whisper model.
+
+ ## Citation
+
+ If you use this model, please consider citing the [Distil-Whisper paper](https://arxiv.org/abs/2311.00430):
+
+ ```
+ @misc{gandhi2023distilwhisper,
+       title={Distil-Whisper: Robust Knowledge Distillation via Large-Scale Pseudo Labelling},
+       author={Sanchit Gandhi and Patrick von Platen and Alexander M. Rush},
+       year={2023},
+       eprint={2311.00430},
+       archivePrefix={arXiv},
+       primaryClass={cs.CL}
+ }
+ ```
+
ggml-distil-large-v3.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2883a11b90fb10ed592d826edeaee7d2929bf1ab985109fe9e1e7b4d2b69a298
+ size 1519521155
ggml-distil-large-v3.fp32.bin ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:1a3d507e5e2d82ce0add00cb4e4df4fe3defb82c525c836e5a706188a1a798e0
+ size 3026260355