---
license: mit
language:
  - en
---

# Pretrained Model of Amphion VALL-E

We provide a pre-trained checkpoint of VALL-E trained on Libri-Light, a dataset derived from open-source audiobooks in the LibriVox project that contains over 60K hours of audio. Here we processed about 6,000 hours of data to train VALL-E.

## Quick Start

To use the pre-trained model, run the following commands:

### Step 1: Download the checkpoint

```bash
git lfs install
git clone https://huggingface.co/amphion/valle_librilight_6k
```
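
As a quick sanity check, you can list the cloned checkpoint directory; it should contain at least the `args.json` configuration file referenced in the inference command below (the exact file layout may vary between checkpoint versions):

```bash
# Confirm the checkpoint was fully downloaded (Git LFS may otherwise leave small pointer files).
ls -lh valle_librilight_6k
```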

### Step 2: Clone the Amphion source code from GitHub

```bash
git clone https://github.com/open-mmlab/Amphion.git
```

### Step 3: Specify the checkpoint's path

Create a soft link that points to the checkpoint downloaded in Step 1:

```bash
cd Amphion
mkdir -p ckpts/tts
ln -s ../../../valle_librilight_6k ckpts/tts/
```
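
Note that the relative `ln -s` target is resolved from inside `ckpts/tts/`, so the command above assumes the `valle_librilight_6k` directory sits next to the `Amphion` directory. A quick way to confirm the link resolves correctly (shown only as a sanity check):

```bash
# The symlink should point at the checkpoint directory, not at a dangling path.
ls -l ckpts/tts/
ls ckpts/tts/valle_librilight_6k/args.json
```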

### Step 4: Inference

You can follow the inference part of this recipe to generate speech from text. For example, to synthesize a clip of speech with the text "This is a clip of generated speech with the given text from Amphion Vall-E model.", run:

sh egs/tts/VALLE/run.sh --stage 3 --gpu "0" \
    --config "ckpts/tts/valle_librilight_6k/args.json" \
    --infer_expt_dir ckpts/tts/valle_librilight_6k \
    --infer_output_dir ckpts/tts/valle_librilight_6k/result \
    --infer_mode "single" \
    --infer_text "This is a clip of generated speech with the given text from Amphion Vall-E model." \
    --infer_text_prompt "But even the unsuccessful dramatist has his moments." \
    --infer_audio_prompt egs/tts/VALLE/prompt_examples/7176_92135_000004_000000.wav
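
When the run finishes, the synthesized audio is written under the `--infer_output_dir` given above. A minimal check (the exact output file names are produced by the recipe and may vary):

```bash
# List the generated wav file(s) in the output directory passed via --infer_output_dir.
ls -lh ckpts/tts/valle_librilight_6k/result
```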