sanchit-gandhi committed on
Commit 60a88eb
1 Parent(s): 9094e51

Update README.md

Files changed (1)
  1. README.md +12 -6
README.md CHANGED
@@ -13,8 +13,8 @@ This repository contains the model weights for [distil-large-v3](https://hugging
 converted to [GGML](https://github.com/ggerganov/ggml) format. GGML is the weight format expected by C/C++ packages
 such as [Whisper.cpp](https://github.com/ggerganov/whisper.cpp), for which we provide an example below.
 
-Compared to previous Distil-Whisper releases, the distil-large-v3 is specifically designed to give one-to-one equivalence
-with the Whisper cpp long-form transcription algorithm. In our benchmark over 4 out-of-distribution datasets, distil-large-v3
+Compared to previous Distil-Whisper releases, distil-large-v3 is specifically designed to give one-to-one equivalence
+with the OpenAI Whisper long-form transcription algorithm. In our benchmark over 4 out-of-distribution datasets, distil-large-v3
 outperformed distil-large-v2 by 5% WER average. Thus, you can expect significant performance gains by switching to this
 latest checkpoint.
@@ -31,13 +31,19 @@ Steps for getting started:
 git clone https://github.com/ggerganov/whisper.cpp.git
 cd whisper.cpp
 ```
-2. Download the GGML weights for distil-large-v3 in fp16 from the Hugging Face Hub:
-
-```bash
-python -c "from huggingface_hub import hf_hub_download; hf_hub_download(repo_id='distil-whisper/distil-large-v3-ggml', filename='ggml-distil-large-v3.bin', local_dir='./models')"
-```
+2. Install the Hugging Face Hub Python package:
+```bash
+pip install --upgrade huggingface_hub
+```
+And download the GGML weights for distil-large-v3 using the following Python snippet:
+
+```python
+from huggingface_hub import hf_hub_download
+
+hf_hub_download(repo_id='distil-whisper/distil-large-v3-ggml', filename='ggml-distil-large-v3.bin', local_dir='./models')
+```
 
-Note that if you do not have the `huggingface_hub` package installed, you can also download the weights with `wget`:
+Note that if you do not have a Python environment set-up, you can also download the weights directly with `wget`:
 
 ```bash
 wget https://huggingface.co/distil-whisper/distil-large-v3-ggml/resolve/main/ggml-distil-large-v3.bin -P ./models
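The two download paths in the updated instructions fetch the same artifact: files on the Hub are served under a `resolve/<revision>/<filename>` path, which is what the `wget` command spells out by hand. A minimal sketch of that correspondence (the URL pattern here is taken from the `wget` line above, not queried from the Hub):

```python
# Build the direct-download URL for the GGML weights from the same repo_id
# and filename passed to hf_hub_download in the README. The
# "https://huggingface.co/<repo_id>/resolve/<revision>/<filename>" layout
# matches the wget URL shown in the diff.
repo_id = "distil-whisper/distil-large-v3-ggml"
filename = "ggml-distil-large-v3.bin"
revision = "main"

url = f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"
print(url)
# https://huggingface.co/distil-whisper/distil-large-v3-ggml/resolve/main/ggml-distil-large-v3.bin
```

If `huggingface_hub` is installed, its `hf_hub_url` helper constructs this same URL, so the package route and the `wget` route stay in sync without hard-coding the path.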