---
license: mit
language:
- en
---

# Pretrained Model of Amphion VITS

We provide the pre-trained checkpoint of [VITS](https://github.com/open-mmlab/Amphion/tree/main/egs/tts/VITS) trained on LJSpeech, a single-speaker dataset of 13,100 short audio clips totaling approximately 24 hours.



## Quick Start

To use the pretrained model, run the following commands:

### Step 1: Download the checkpoint
```bash
git lfs install
git clone https://huggingface.co/amphion/vits_ljspeech
```
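If the clone succeeds but the checkpoint files are only a few hundred bytes, Git LFS may have left pointer stubs instead of the real weights; running `git lfs pull` inside the repository fetches the actual data. As a minimal sketch (the `is_lfs_pointer` helper name is ours, not part of Amphion or Git LFS), you can spot a stub by its first line:

```shell
# A file managed by Git LFS that was never downloaded is a small text
# "pointer" stub whose first line starts with this version string.
is_lfs_pointer() {
    head -n 1 "$1" | grep -q '^version https://git-lfs' && echo "pointer" || echo "real"
}

# Usage (inside the cloned vits_ljspeech directory):
#   is_lfs_pointer <checkpoint file>   # "pointer" means: run git lfs pull
```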

### Step 2: Clone Amphion's Source Code from GitHub
```bash
git clone https://github.com/open-mmlab/Amphion.git
```

### Step 3: Specify the checkpoint's path
Create a soft link pointing to the checkpoint downloaded in Step 1:

```bash
cd Amphion
mkdir -p ckpts/tts
ln -s ../../../vits_ljspeech ckpts/tts/
```
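Note that the relative target `../../../vits_ljspeech` is resolved from inside `ckpts/tts/`, so it assumes the checkpoint was cloned next to the `Amphion` directory. A small sketch for checking that the link resolves (`check_ckpt_link` is a hypothetical helper of ours, not an Amphion utility):

```shell
# Verify that a symlink resolves to a real directory: -L follows the
# link, so listing fails if the relative target does not exist.
check_ckpt_link() {
    if ls -L "$1" >/dev/null 2>&1; then
        echo "link ok: $(ls -L "$1" | head -n 3 | tr '\n' ' ')"
    else
        echo "broken link: $1"
    fi
}

# Usage (from the Amphion directory):
#   check_ckpt_link ckpts/tts/vits_ljspeech
```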

### Step 4: Inference

You can follow the inference part of [this recipe](https://github.com/open-mmlab/Amphion/tree/main/egs/tts/VITS#4-inference) to generate speech from text. For example, to synthesize a clip of speech for the text "This is a clip of generated speech with the given text from a TTS model.", run:

```bash
sh egs/tts/VITS/run.sh --stage 3 --gpu "0" \
    --config ckpts/tts/vits_ljspeech/args.json \
    --infer_expt_dir ckpts/tts/vits_ljspeech/ \
    --infer_output_dir ckpts/tts/vits_ljspeech/result \
    --infer_mode "single" \
    --infer_text "This is a clip of generated speech with the given text from a TTS model."
```
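The command above synthesizes one sentence per invocation. If you have a file with one sentence per line, a small wrapper can repeat the same command for each line; `synthesize_each_line` is a hypothetical helper of ours, not part of the Amphion recipe:

```shell
# Run a given command once per line of a text file, passing the line
# as the final argument (e.g. as the value following --infer_text).
synthesize_each_line() {
    file=$1; shift
    while IFS= read -r line; do
        "$@" "$line"
    done < "$file"
}

# Hypothetical usage: one generated clip per line of sentences.txt
# synthesize_each_line sentences.txt \
#     sh egs/tts/VITS/run.sh --stage 3 --gpu "0" \
#     --config ckpts/tts/vits_ljspeech/args.json \
#     --infer_expt_dir ckpts/tts/vits_ljspeech/ \
#     --infer_output_dir ckpts/tts/vits_ljspeech/result \
#     --infer_mode "single" --infer_text
```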