---
title: Whisper Webui
emoji: 
colorFrom: pink
colorTo: purple
sdk: gradio
sdk_version: 3.3.1
app_file: app.py
pinned: false
license: apache-2.0
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference

# Running Locally

To run this program locally, first install Python 3.9+ and Git. Then install PyTorch 1.10.1+ and all the other dependencies:

```
pip install -r requirements.txt
```
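
Note that the default PyTorch wheel from PyPI may be CPU-only on some platforms; for GPU support you may need a CUDA-enabled build from pytorch.org. A rough sketch, assuming CUDA 11.3 (adjust the index URL to your CUDA version):

```
# Sketch only: install a CUDA 11.3 build of PyTorch; pick the URL matching your CUDA version
pip install torch --extra-index-url https://download.pytorch.org/whl/cu113
```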

Finally, run the full version of the app (no audio length restrictions):

```
python app-full.py
```

You can also run the CLI interface, which is similar to Whisper's own CLI but also supports the following additional arguments:

```
python cli.py \
    [--vad {none,silero-vad,silero-vad-skip-gaps,silero-vad-expand-into-gaps,periodic-vad}] \
    [--vad_merge_window VAD_MERGE_WINDOW] \
    [--vad_max_merge_size VAD_MAX_MERGE_SIZE] \
    [--vad_padding VAD_PADDING] \
    [--vad_prompt_window VAD_PROMPT_WINDOW]
```

You may also use URLs in addition to file paths as input.

```
python cli.py --model large --vad silero-vad --language Japanese "https://www.youtube.com/watch?v=4cICErqqRSM"
```
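
The VAD parameters listed above can also be tuned on the command line. The flags come from the list above, but the file name and values in this sketch are purely illustrative:

```
# Illustrative values only; "audio.mp3" is a placeholder input file
python cli.py --model medium --language English \
    --vad silero-vad --vad_merge_window 5 --vad_max_merge_size 30 --vad_padding 1 \
    "audio.mp3"
```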

# Docker

To run it in Docker, first install Docker and optionally the NVIDIA Container Toolkit in order to use the GPU. Then check out this repository and build an image:

```
sudo docker build -t whisper-webui:1 .
```

You can then start the WebUI with GPU support like so:

```
sudo docker run -d --gpus=all -p 7860:7860 whisper-webui:1
```

Leave out "--gpus=all" if you don't have access to a GPU with enough memory, and are fine with running it on the CPU only:

```
sudo docker run -d -p 7860:7860 whisper-webui:1
```
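
In both cases the container publishes port 7860, so the WebUI should be reachable at http://localhost:7860 once it is up. For example, to check the container and follow its logs:

```
# List running containers, then follow the logs of the WebUI container
sudo docker ps
sudo docker logs -f <container-id>
```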

# Caching

Note that the models themselves are currently not included in the Docker images, and will be downloaded on demand. To avoid this, bind the directory /root/.cache/whisper to some directory on the host (for instance /home/administrator/.cache/whisper), where you can (optionally) prepopulate the directory with the different Whisper models.

```
sudo docker run -d --gpus=all -p 7860:7860 --mount type=bind,source=/home/administrator/.cache/whisper,target=/root/.cache/whisper whisper-webui:1
```
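
One way to prepopulate the host directory is to download a model into it ahead of time using the openai-whisper package on the host. This is only a sketch, and assumes whisper.load_model accepts a download_root argument in your installed version:

```
# Sketch: download the "large" model into the host cache before starting the container
python -c "import whisper; whisper.load_model('large', download_root='/home/administrator/.cache/whisper')"
```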