---
license: mit
language:
- en
---

# Pretrained Model of Amphion VITS

We provide the pretrained checkpoint of [VITS](https://github.com/open-mmlab/Amphion/tree/main/egs/tts/VITS) trained on LJSpeech, a single-speaker dataset of 13,100 short audio clips with a total length of approximately 24 hours.

## Quick Start

To use the pretrained model, run the following commands:

### Step 1: Download the checkpoint

```bash
git lfs install
git clone https://huggingface.co/amphion/vits_ljspeech
```

### Step 2: Clone the Amphion source code from GitHub

```bash
git clone https://github.com/open-mmlab/Amphion.git
```

### Step 3: Specify the checkpoint's path

Create a symbolic link to the checkpoint downloaded in Step 1:

```bash
cd Amphion
mkdir -p ckpts/tts
ln -s ../../../vits_ljspeech ckpts/tts/
```

### Step 4: Inference

You can follow the inference part of [this recipe](https://github.com/open-mmlab/Amphion/tree/main/egs/tts/VITS#4-inference) to generate speech from text. For example, to synthesize a clip of speech with the text "This is a clip of generated speech with the given text from a TTS model.", run:

```bash
sh egs/tts/VITS/run.sh --stage 3 --gpu "0" \
    --config ckpts/tts/vits_ljspeech/args.json \
    --infer_expt_dir ckpts/tts/vits_ljspeech/ \
    --infer_output_dir ckpts/tts/vits_ljspeech/result \
    --infer_mode "single" \
    --infer_text "This is a clip of generated speech with the given text from a TTS model."
```
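After inference, the generated audio is written under the directory passed via `--infer_output_dir`. As a quick sanity check, here is a minimal, stdlib-only Python sketch for reading a generated clip's duration; the file name in the commented example is hypothetical — the recipe determines the actual naming, so check the output directory first:

```python
import wave

def wav_duration(path: str) -> float:
    """Return the duration of a WAV file in seconds."""
    with wave.open(path, "rb") as wf:
        # Duration = number of audio frames divided by the sample rate.
        return wf.getnframes() / wf.getframerate()

# Hypothetical usage (actual file names depend on the recipe's output):
# print(wav_duration("ckpts/tts/vits_ljspeech/result/output.wav"))
```

A clip noticeably shorter than expected for the given text is a common sign that the checkpoint path or config was not picked up correctly.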