aadnk committed
Commit: b5af58b
Parent: 9939d13

Update README with info on Faster Whisper

Files changed (1)
  1. README.md +29 -6
README.md CHANGED
@@ -46,6 +46,35 @@ python cli.py --model large --vad silero-vad --language Japanese "https://www.yo
 Rather than supplying arguments to `app.py` or `cli.py`, you can also use the configuration file [config.json5](config.json5). See that file for more information.
 If you want to use a different configuration file, you can use the `WHISPER_WEBUI_CONFIG` environment variable to specify the path to another file.
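For example, to point the UI at a different configuration file for a single run (the path below is only a placeholder):
```
# Use an alternative configuration file for this run (placeholder path)
WHISPER_WEBUI_CONFIG=/path/to/my-config.json5 python app.py
```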
 
+ ### Multiple Files
+
+ You can upload multiple files either through the "Upload files" option, or as a playlist on YouTube.
+ Each audio file will then be processed in turn, and the resulting SRT/VTT/transcript will be made available in the "Download" section.
+ When more than one file is processed, the UI will also generate an "All_Output" zip file containing all the text output files.
+
+ ## Faster Whisper
+
+ You can also use [Faster Whisper](https://github.com/guillaumekln/faster-whisper) as a drop-in replacement for the default Whisper. It achieves up to a 4x speedup and a 2x reduction in memory usage.
+
+ To use Faster Whisper, install the requirements in `requirements-fastWhisper.txt`:
+ ```
+ pip install -r requirements-fastWhisper.txt
+ ```
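As a quick sanity check that the package is importable, you can load its main `WhisperModel` class (this check is purely illustrative and not required):
```
python -c "from faster_whisper import WhisperModel; print('faster-whisper is installed')"
```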
+ Then run the app or the CLI with the `--whisper_implementation fast-whisper` flag:
+ ```
+ python app.py --whisper_implementation fast-whisper --input_audio_max_duration -1 --server_name 127.0.0.1 --auto_parallel True
+ ```
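The same flag works for the CLI. For example, the transcription command shown at the top of this README could be switched to Faster Whisper like this (the input file name is only a placeholder):
```
python cli.py --whisper_implementation fast-whisper --model large --vad silero-vad --language Japanese "audio.mp3"
```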
+ You can also select the whisper implementation in `config.json5`:
+ ```json5
+ {
+     "whisper_implementation": "fast-whisper"
+ }
+ ```
+ ### GPU Acceleration
+
+ In order to use GPU acceleration with Faster Whisper, both CUDA 11.2 and cuDNN 8 must be installed. You may want to install them in a virtual environment such as Anaconda.
+
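A minimal sketch of such an environment, assuming the conda-forge packages are used (the environment name and exact versions are illustrative; match them to your GPU driver and the Faster Whisper documentation):
```
# Illustrative only: isolated environment with CUDA 11.2 / cuDNN 8
conda create -n whisper-webui python=3.10
conda activate whisper-webui
conda install -c conda-forge cudatoolkit=11.2 cudnn=8.1.0
pip install -r requirements-fastWhisper.txt
```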
 ## Google Colab
 
 You can also run this Web UI directly on [Google Colab](https://colab.research.google.com/drive/1qeTSvi7Bt_5RMm88ipW4fkcsMOKlDDss?usp=sharing), if you haven't got a GPU powerful enough to run the larger models.
@@ -87,12 +116,6 @@ cores (up to 8):
 python app.py --input_audio_max_duration -1 --auto_parallel True
 ```
 
- ### Multiple Files
-
- You can upload multiple files either through the "Upload files" option, or as a playlist on YouTube.
- Each audio file will then be processed in turn, and the resulting SRT/VTT/Transcript will be made available in the "Download" section.
- When more than one file is processed, the UI will also generate a "All_Output" zip file containing all the text output files.
-
 # Docker
 
 To run it in Docker, first install Docker and optionally the NVIDIA Container Toolkit in order to use the GPU.
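For example, one common pattern is to build an image from the repository's Dockerfile and publish Gradio's default port (the image tag and port mapping below are illustrative, not the project's official instructions):
```
# Illustrative only: build locally, then run with GPU access and the web UI port exposed
docker build -t whisper-webui .
docker run --gpus all -p 7860:7860 whisper-webui
```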