---
license: mit
language:
- en
---
# Amphion Multi-Speaker TTS Pre-trained Model

## Quick Start
We provide the pre-trained checkpoint of VITS trained on Hi-Fi TTS, which consists of a total of 291.6 hours of audio contributed by 10 speakers, with at least 17 hours per speaker. To use the pre-trained model, run the following commands:
### Step1: Download the checkpoint
```bash
git lfs install
git clone https://huggingface.co/amphion/vits_hifitts
```
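If you prefer not to use Git LFS, the same files can also be fetched with the `huggingface-cli` tool from the `huggingface_hub` package. This is an optional alternative, not part of the official recipe:

```bash
# Optional alternative: download the checkpoint with the huggingface_hub CLI
pip install -U huggingface_hub
huggingface-cli download amphion/vits_hifitts --local-dir vits_hifitts
```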
### Step2: Clone the Amphion source code from GitHub
```bash
git clone https://github.com/open-mmlab/Amphion.git
```
### Step3: Specify the checkpoint's path
Use a symbolic link to point to the checkpoint downloaded in Step1:
```bash
cd Amphion
mkdir -p ckpts/tts
ln -s ../../../vits_hifitts ckpts/tts/
```
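As a quick sanity check (not part of the original recipe), you can confirm that the link resolves to the downloaded files. The relative path above assumes the checkpoint repository was cloned as a sibling of the `Amphion` directory, as in Step1:

```bash
# The trailing slash follows the symlink; args.json (used in Step4) should be listed
ls ckpts/tts/vits_hifitts/
```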
### Step4: Inference
You can follow the inference part of this recipe to generate speech from text. For example, to synthesize a clip of speech with the text "This is a clip of generated speech with the given text from a TTS model.", run:
```bash
sh egs/tts/VITS/run.sh --stage 3 --gpu "0" \
    --config ckpts/tts/vits_hifitts/args.json \
    --infer_expt_dir ckpts/tts/vits_hifitts/ \
    --infer_output_dir ckpts/tts/vits_hifitts/result \
    --infer_mode "single" \
    --infer_text "This is a clip of generated speech with the given text from a TTS model." \
    --infer_speaker_name "hifitts_92"
```
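Assuming the run completes successfully, the synthesized audio should be written under the directory passed to `--infer_output_dir`. A quick way to locate it (assuming the output is saved as WAV files):

```bash
# List the generated audio files under the inference output directory
find ckpts/tts/vits_hifitts/result -name "*.wav"
```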
Note: The supported `infer_speaker_name` values can be seen here.