---
license: mit
title: RVC UI
sdk: gradio
emoji: 🏃
colorFrom: blue
colorTo: blue
pinned: true
short_description: An easy-to-use voice conversion framework based on VITS.
sdk_version: 4.41.0
---

# Retrieval-based-Voice-Conversion-WebUI

An easy-to-use voice conversion framework based on VITS.


FAQ (Frequently Asked Questions)

English | 中文简体 | 日本語 | 한국어 (韓國語) | Français | Türkçe | Português

The base model is trained on nearly 50 hours of the high-quality, open-source VCTK training set, so there are no copyright concerns; please feel free to use it.

Please look forward to the RVCv3 base model, with more parameters, a larger dataset, better results, roughly unchanged inference speed, and less training data required.

There is a one-click downloader for models/integration packages/tools. You are welcome to try it.

[Screenshot: Training and inference WebUI]
[Screenshot: Real-time voice changing GUI]

Features:

- Reduce tone leakage by replacing the source features with training-set features using top-1 retrieval;
- Easy and fast training, even on relatively weak graphics cards;
- Training with a small amount of data (at least 10 minutes of low-noise speech is recommended);
- Model fusion to change timbre (using the ckpt processing tab -> ckpt merge);
- Easy-to-use WebUI;
- UVR5 model to quickly separate vocals and instruments;
- The high-pitch voice extraction algorithm InterSpeech2023-RMVPE, which prevents the muted-sound problem, provides significantly better results, and is faster with lower resource consumption than Crepe_full;
- AMD/Intel graphics card acceleration supported;
- Intel ARC graphics card acceleration with IPEX supported.

Check out our Demo Video here!

## Environment Configuration

### Python Version Limitation

It is recommended to use conda to manage the Python environment.

For the reason behind the version limitation, please refer to this bug.

```sh
python --version # 3.8 <= Python < 3.11
```
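
For example, a conda environment that satisfies this constraint might be created like this (the environment name `rvc` is just an example):

```sh
# create and activate an isolated environment with a supported Python version
conda create -n rvc python=3.10
conda activate rvc
```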

### Linux/macOS One-click Dependency Installation & Startup Script

Executing `run.sh` in the project root directory configures a venv virtual environment, automatically installs the required dependencies, and starts the main program in one step.

```sh
sh ./run.sh
```

### Manual Installation of Dependencies

1. Install PyTorch and its core dependencies (skip if already installed). Refer to https://pytorch.org/get-started/locally/

   ```sh
   pip install torch torchvision torchaudio
   ```

2. If you are using an Nvidia Ampere architecture GPU (RTX 30xx) on Windows, then according to the experience of #21 you need to specify the CUDA version that matches PyTorch.

   ```sh
   pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu117
   ```

3. Install the remaining dependencies according to your graphics card.

   - Nvidia GPU

     ```sh
     pip install -r requirements/main.txt
     ```

   - AMD/Intel GPU

     ```sh
     pip install -r requirements/dml.txt
     ```

   - AMD ROCm (Linux)

     ```sh
     pip install -r requirements/amd.txt
     ```

   - Intel IPEX (Linux)

     ```sh
     pip install -r requirements/ipex.txt
     ```
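
After installation, you can optionally confirm that PyTorch sees your GPU. This minimal check applies to the CUDA and ROCm builds; DirectML and IPEX set-ups expose the device differently and are not covered here.

```sh
# prints the PyTorch version and whether a CUDA/ROCm device is visible
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```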
    

## Preparation of Other Files

### 1. Assets

RVC requires some models, located in the `assets` folder, for inference and training.

#### Check/Download Automatically (Default)

By default, RVC automatically checks the integrity of the required resources when the main program starts.

Even if the resources are incomplete, the program will still start.

- If you want to download all resources, add the `--update` parameter.
- If you want to skip the resource integrity check at startup, add the `--nocheck` parameter.
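
For example, assuming you launch the WebUI with `web.py` as shown in the Getting Started section below:

```sh
# download/refresh all required resources before starting
python web.py --update

# start without checking resource integrity
python web.py --nocheck
```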

#### Download Manually

All resource files can be found in the Hugging Face space.

You can find scripts to download them in the `tools` folder.

You can also use the one-click downloader for models/integration packages/tools.

Below is a list of all the pre-trained models and other files required by RVC.

- `./assets/hubert/hubert_base.pt`

  ```sh
  rvcmd assets/hubert # RVC-Models-Downloader command
  ```

- `./assets/pretrained`

  ```sh
  rvcmd assets/v1 # RVC-Models-Downloader command
  ```

- `./assets/uvr5_weights`

  ```sh
  rvcmd assets/uvr5 # RVC-Models-Downloader command
  ```

If you want to use the v2 version of the model, you need to download additional resources in

- `./assets/pretrained_v2`

  ```sh
  rvcmd assets/v2 # RVC-Models-Downloader command
  ```
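
For reference, a rough sketch of the expected layout after downloading (only the paths listed above are shown; other files under `assets` are omitted):

```
assets/
├── hubert/
│   └── hubert_base.pt
├── pretrained/
├── pretrained_v2/
└── uvr5_weights/
```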
    

### 2. Download the required files for the RMVPE vocal pitch extraction algorithm

If you want to use the latest RMVPE vocal pitch extraction algorithm, you need to download the pitch extraction model parameters and place them in `assets/rmvpe`.

- `rmvpe.pt`

  ```sh
  rvcmd assets/rmvpe # RVC-Models-Downloader command
  ```

#### Download the DML environment of RMVPE (optional, for AMD/Intel GPUs)

- `rmvpe.onnx`

  ```sh
  rvcmd assets/rmvpe # RVC-Models-Downloader command
  ```
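
After downloading, both files presumably sit side by side under `assets/rmvpe` (a sketch; `rmvpe.onnx` is only needed for the DML path):

```
assets/rmvpe/
├── rmvpe.pt
└── rmvpe.onnx
```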
    

### 3. AMD ROCm (optional, Linux only)

If you want to run RVC on a Linux system based on AMD's ROCm technology, please first install the required drivers here.

If you are using Arch Linux, you can use pacman to install the required drivers.

```sh
pacman -S rocm-hip-sdk rocm-opencl-sdk
```

For some graphics card models, you may need to configure the following environment variables (for example, for the RX 6700 XT):

```sh
export ROCM_PATH=/opt/rocm
export HSA_OVERRIDE_GFX_VERSION=10.3.0
```

Also, make sure your current user is in the `render` and `video` user groups.

```sh
sudo usermod -aG render $USERNAME
sudo usermod -aG video $USERNAME
```
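
To confirm that the driver sees your GPU, and to look up the gfx version used above for `HSA_OVERRIDE_GFX_VERSION`, you can use the `rocminfo` tool that ships with ROCm (an optional check, not part of the original instructions):

```sh
# list ROCm agents and show the gfx architecture string of your GPU
rocminfo | grep gfx
```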

## Getting Started

### Direct Launch

Use the following command to start the WebUI.

```sh
python web.py
```

### Linux/macOS

```sh
./run.sh
```

### For Intel GPU (I-card) users who need to use IPEX technology (Linux only)

```sh
source /opt/intel/oneapi/setvars.sh
./run.sh
```

### Using the Integration Package (Windows Users)

Download and unzip `RVC-beta.7z`. After unzipping, double-click `go-web.bat` to start the program with one click.

```sh
rvcmd packs/general/latest # RVC-Models-Downloader command
```

## Credits

Thanks to all contributors for their efforts.
