Spaces: artificialguybr (Running on Zero)

Commit 45ee559
Parent(s): 36ec8f0

Upload 650 files

This view is limited to 50 files because it contains too many changes. See raw diff.
- TTS/.cardboardlint.yml +5 -0
- TTS/.dockerignore +9 -0
- TTS/.github/ISSUE_TEMPLATE/bug_report.yaml +85 -0
- TTS/.github/ISSUE_TEMPLATE/config.yml +8 -0
- TTS/.github/ISSUE_TEMPLATE/feature_request.md +25 -0
- TTS/.github/PR_TEMPLATE.md +15 -0
- TTS/.github/stale.yml +18 -0
- TTS/.github/workflows/api_tests.yml +53 -0
- TTS/.github/workflows/aux_tests.yml +51 -0
- TTS/.github/workflows/data_tests.yml +51 -0
- TTS/.github/workflows/docker.yaml +65 -0
- TTS/.github/workflows/inference_tests.yml +53 -0
- TTS/.github/workflows/pypi-release.yml +94 -0
- TTS/.github/workflows/style_check.yml +47 -0
- TTS/.github/workflows/text_tests.yml +50 -0
- TTS/.github/workflows/tts_tests.yml +53 -0
- TTS/.github/workflows/tts_tests2.yml +53 -0
- TTS/.github/workflows/vocoder_tests.yml +48 -0
- TTS/.github/workflows/zoo_tests0.yml +54 -0
- TTS/.github/workflows/zoo_tests1.yml +53 -0
- TTS/.github/workflows/zoo_tests2.yml +52 -0
- TTS/.gitignore +171 -0
- TTS/.pre-commit-config.yaml +27 -0
- TTS/.pylintrc +599 -0
- TTS/.readthedocs.yml +23 -0
- TTS/CITATION.cff +20 -0
- TTS/CODE_OF_CONDUCT.md +133 -0
- TTS/CODE_OWNERS.rst +75 -0
- TTS/CONTRIBUTING.md +136 -0
- TTS/Dockerfile +13 -0
- TTS/LICENSE.txt +373 -0
- TTS/MANIFEST.in +15 -0
- TTS/Makefile +78 -0
- TTS/README.md +431 -0
- TTS/TTS/.models.json +920 -0
- TTS/TTS/VERSION +1 -0
- TTS/TTS/__init__.py +6 -0
- TTS/TTS/api.py +476 -0
- TTS/TTS/bin/__init__.py +0 -0
- TTS/TTS/bin/collect_env_info.py +48 -0
- TTS/TTS/bin/compute_attention_masks.py +165 -0
- TTS/TTS/bin/compute_embeddings.py +197 -0
- TTS/TTS/bin/compute_statistics.py +96 -0
- TTS/TTS/bin/eval_encoder.py +88 -0
- TTS/TTS/bin/extract_tts_spectrograms.py +286 -0
- TTS/TTS/bin/find_unique_chars.py +45 -0
- TTS/TTS/bin/find_unique_phonemes.py +74 -0
- TTS/TTS/bin/remove_silence_using_vad.py +124 -0
- TTS/TTS/bin/resample.py +90 -0
- TTS/TTS/bin/synthesize.py +502 -0
TTS/.cardboardlint.yml
ADDED
@@ -0,0 +1,5 @@
+linters:
+  - pylint:
+      # pylintrc: pylintrc
+      filefilter: ['- test_*.py', '+ *.py', '- *.npy']
+      # exclude:
TTS/.dockerignore
ADDED
@@ -0,0 +1,9 @@
+.git/
+Dockerfile
+build/
+dist/
+TTS.egg-info/
+tests/outputs/*
+tests/train_outputs/*
+__pycache__/
+*.pyc
TTS/.github/ISSUE_TEMPLATE/bug_report.yaml
ADDED
@@ -0,0 +1,85 @@
+name: "🐛 Bug report"
+description: Create a bug report to help 🐸 improve
+title: '[Bug] '
+labels: [ "bug" ]
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Welcome to the 🐸TTS! Thanks for taking the time to fill out this bug report!
+
+  - type: textarea
+    id: bug-description
+    attributes:
+      label: Describe the bug
+      description: A clear and concise description of what the bug is. If you intend to submit a PR for this issue, tell us in the description. Thanks!
+      placeholder: Bug description
+    validations:
+      required: true
+
+  - type: textarea
+    id: reproduction
+    attributes:
+      label: To Reproduce
+      description: |
+        Please share your code to reproduce the error.
+
+        Issues are fixed faster if you can provide a working example.
+
+        The best place for sharing code is colab. https://colab.research.google.com/
+        So we can directly run your code and reproduce the issue.
+
+        In the worse case, provide steps to reproduce the behavior.
+
+        1. Run the following command '...'
+        2. ...
+        3. See error
+      placeholder: Reproduction
+    validations:
+      required: true
+
+  - type: textarea
+    id: expected-behavior
+    attributes:
+      label: Expected behavior
+      description: "Write down what the expected behaviour"
+
+  - type: textarea
+    id: logs
+    attributes:
+      label: Logs
+      description: "Please include the relevant logs if you can."
+      render: shell
+
+  - type: textarea
+    id: system-info
+    attributes:
+      label: Environment
+      description: |
+        You can either run `TTS/bin/collect_env_info.py`
+
+        ```bash
+        wget https://raw.githubusercontent.com/coqui-ai/TTS/main/TTS/bin/collect_env_info.py
+        python collect_env_info.py
+        ```
+
+        or fill in the fields below manually.
+      render: shell
+      placeholder: |
+        - 🐸TTS Version (e.g., 1.3.0):
+        - PyTorch Version (e.g., 1.8)
+        - Python version:
+        - OS (e.g., Linux):
+        - CUDA/cuDNN version:
+        - GPU models and configuration:
+        - How you installed PyTorch (`conda`, `pip`, source):
+        - Any other relevant information:
+    validations:
+      required: true
+  - type: textarea
+    id: context
+    attributes:
+      label: Additional context
+      description: Add any other context about the problem here.
+    validations:
+      required: false
TTS/.github/ISSUE_TEMPLATE/config.yml
ADDED
@@ -0,0 +1,8 @@
+blank_issues_enabled: false
+contact_links:
+  - name: CoquiTTS GitHub Discussions
+    url: https://github.com/coqui-ai/TTS/discussions
+    about: Please ask and answer questions here.
+  - name: Coqui Security issue disclosure
+    url: mailto:info@coqui.ai
+    about: Please report security vulnerabilities here.
TTS/.github/ISSUE_TEMPLATE/feature_request.md
ADDED
@@ -0,0 +1,25 @@
+---
+name: 🚀 Feature request
+about: Suggest a feature or an idea for this project
+title: '[Feature request] '
+labels: feature request
+assignees: ''
+
+---
+<!-- Welcome to the 🐸TTS project!
+We are excited to see your interest, and appreciate your support! --->
+**🚀 Feature Description**
+
+<!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...] -->
+
+**Solution**
+
+<!-- A clear and concise description of what you want to happen. -->
+
+**Alternative Solutions**
+
+<!-- A clear and concise description of any alternative solutions or features you've considered. -->
+
+**Additional context**
+
+<!-- Add any other context or screenshots about the feature request here. -->
TTS/.github/PR_TEMPLATE.md
ADDED
@@ -0,0 +1,15 @@
+# Pull request guidelines
+
+Welcome to the 🐸TTS project! We are excited to see your interest, and appreciate your support!
+
+This repository is governed by the Contributor Covenant Code of Conduct. For more details, see the [CODE_OF_CONDUCT.md](CODE_OF_CONDUCT.md) file.
+
+In order to make a good pull request, please see our [CONTRIBUTING.md](CONTRIBUTING.md) file.
+
+Before accepting your pull request, you will be asked to sign a [Contributor License Agreement](https://cla-assistant.io/coqui-ai/TTS).
+
+This [Contributor License Agreement](https://cla-assistant.io/coqui-ai/TTS):
+
+- Protects you, Coqui, and the users of the code.
+- Does not change your rights to use your contributions for any purpose.
+- Does not change the license of the 🐸TTS project. It just makes the terms of your contribution clearer and lets us know you are OK to contribute.
TTS/.github/stale.yml
ADDED
@@ -0,0 +1,18 @@
+# Number of days of inactivity before an issue becomes stale
+daysUntilStale: 30
+# Number of days of inactivity before a stale issue is closed
+daysUntilClose: 7
+# Issues with these labels will never be considered stale
+exemptLabels:
+  - pinned
+  - security
+# Label to use when marking an issue as stale
+staleLabel: wontfix
+# Comment to post when marking an issue as stale. Set to `false` to disable
+markComment: >
+  This issue has been automatically marked as stale because it has not had
+  recent activity. It will be closed if no further activity occurs. Thank you
+  for your contributions. You might also look our discussion channels.
+# Comment to post when closing a stale issue. Set to `false` to disable
+closeComment: false
+
TTS/.github/workflows/api_tests.yml
ADDED
@@ -0,0 +1,53 @@
+name: api_tests
+
+on:
+  push:
+    branches:
+      - main
+jobs:
+  check_skip:
+    runs-on: ubuntu-latest
+    if: "! contains(github.event.head_commit.message, '[ci skip]')"
+    steps:
+      - run: echo "${{ github.event.head_commit.message }}"
+
+  test:
+    runs-on: ubuntu-latest
+    strategy:
+      fail-fast: false
+      matrix:
+        python-version: [3.9, "3.10", "3.11"]
+        experimental: [false]
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v4
+        with:
+          python-version: ${{ matrix.python-version }}
+          architecture: x64
+          cache: 'pip'
+          cache-dependency-path: 'requirements*'
+      - name: check OS
+        run: cat /etc/os-release
+      - name: set ENV
+        run: |
+          export TRAINER_TELEMETRY=0
+      - name: Install dependencies
+        run: |
+          sudo apt-get update
+          sudo apt-get install -y --no-install-recommends git make gcc
+          sudo apt-get install espeak-ng
+          make system-deps
+      - name: Install/upgrade Python setup deps
+        run: python3 -m pip install --upgrade pip setuptools wheel
+      - name: Replace scarf urls
+        run: |
+          sed -i 's/https:\/\/coqui.gateway.scarf.sh\//https:\/\/github.com\/coqui-ai\/TTS\/releases\/download\//g' TTS/.models.json
+      - name: Install TTS
+        run: |
+          python3 -m pip install .[all]
+          python3 setup.py egg_info
+      - name: Unit tests
+        run: make api_tests
+        env:
+          COQUI_STUDIO_TOKEN: ${{ secrets.COQUI_STUDIO_TOKEN }}
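
Note: the "Replace scarf urls" step above (it recurs in most workflows below) rewrites the tracking-gateway URLs in `TTS/.models.json` to direct GitHub release URLs, presumably so CI downloads do not go through the scarf.sh gateway. A minimal sketch of what that sed does, using a made-up model URL as the input:

```bash
# Sketch of the scarf-URL rewrite; the URL below is a hypothetical example.
echo 'https://coqui.gateway.scarf.sh/v0.6.1/model_file.zip' > /tmp/models.json
sed -i 's/https:\/\/coqui.gateway.scarf.sh\//https:\/\/github.com\/coqui-ai\/TTS\/releases\/download\//g' /tmp/models.json
cat /tmp/models.json
# prints: https://github.com/coqui-ai/TTS/releases/download/v0.6.1/model_file.zip
```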
TTS/.github/workflows/aux_tests.yml
ADDED
@@ -0,0 +1,51 @@
+name: aux-tests
+
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    types: [opened, synchronize, reopened]
+jobs:
+  check_skip:
+    runs-on: ubuntu-latest
+    if: "! contains(github.event.head_commit.message, '[ci skip]')"
+    steps:
+      - run: echo "${{ github.event.head_commit.message }}"
+
+  test:
+    runs-on: ubuntu-latest
+    strategy:
+      fail-fast: false
+      matrix:
+        python-version: [3.9, "3.10", "3.11"]
+        experimental: [false]
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v4
+        with:
+          python-version: ${{ matrix.python-version }}
+          architecture: x64
+          cache: 'pip'
+          cache-dependency-path: 'requirements*'
+      - name: check OS
+        run: cat /etc/os-release
+      - name: set ENV
+        run: export TRAINER_TELEMETRY=0
+      - name: Install dependencies
+        run: |
+          sudo apt-get update
+          sudo apt-get install -y git make gcc
+          make system-deps
+      - name: Install/upgrade Python setup deps
+        run: python3 -m pip install --upgrade pip setuptools wheel
+      - name: Replace scarf urls
+        run: |
+          sed -i 's/https:\/\/coqui.gateway.scarf.sh\//https:\/\/github.com\/coqui-ai\/TTS\/releases\/download\//g' TTS/.models.json
+      - name: Install TTS
+        run: |
+          python3 -m pip install .[all]
+          python3 setup.py egg_info
+      - name: Unit tests
+        run: make test_aux
TTS/.github/workflows/data_tests.yml
ADDED
@@ -0,0 +1,51 @@
+name: data-tests
+
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    types: [opened, synchronize, reopened]
+jobs:
+  check_skip:
+    runs-on: ubuntu-latest
+    if: "! contains(github.event.head_commit.message, '[ci skip]')"
+    steps:
+      - run: echo "${{ github.event.head_commit.message }}"
+
+  test:
+    runs-on: ubuntu-latest
+    strategy:
+      fail-fast: false
+      matrix:
+        python-version: [3.9, "3.10", "3.11"]
+        experimental: [false]
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v4
+        with:
+          python-version: ${{ matrix.python-version }}
+          architecture: x64
+          cache: 'pip'
+          cache-dependency-path: 'requirements*'
+      - name: check OS
+        run: cat /etc/os-release
+      - name: set ENV
+        run: export TRAINER_TELEMETRY=0
+      - name: Install dependencies
+        run: |
+          sudo apt-get update
+          sudo apt-get install -y --no-install-recommends git make gcc
+          make system-deps
+      - name: Install/upgrade Python setup deps
+        run: python3 -m pip install --upgrade pip setuptools wheel
+      - name: Replace scarf urls
+        run: |
+          sed -i 's/https:\/\/coqui.gateway.scarf.sh\//https:\/\/github.com\/coqui-ai\/TTS\/releases\/download\//g' TTS/.models.json
+      - name: Install TTS
+        run: |
+          python3 -m pip install .[all]
+          python3 setup.py egg_info
+      - name: Unit tests
+        run: make data_tests
TTS/.github/workflows/docker.yaml
ADDED
@@ -0,0 +1,65 @@
+name: "Docker build and push"
+on:
+  pull_request:
+  push:
+    branches:
+      - main
+      - dev
+    tags:
+      - v*
+jobs:
+  docker-build:
+    name: "Build and push Docker image"
+    runs-on: ubuntu-20.04
+    strategy:
+      matrix:
+        arch: ["amd64"]
+        base:
+          - "nvidia/cuda:11.8.0-base-ubuntu22.04" # GPU enabled
+          - "python:3.10.8-slim" # CPU only
+    steps:
+      - uses: actions/checkout@v2
+      - name: Log in to the Container registry
+        uses: docker/login-action@v1
+        with:
+          registry: ghcr.io
+          username: ${{ github.actor }}
+          password: ${{ secrets.GITHUB_TOKEN }}
+      - name: Compute Docker tags, check VERSION file matches tag
+        id: compute-tag
+        run: |
+          set -ex
+          base="ghcr.io/coqui-ai/tts"
+          tags="" # PR build
+
+          if [[ ${{ matrix.base }} = "python:3.10.8-slim" ]]; then
+            base="ghcr.io/coqui-ai/tts-cpu"
+          fi
+
+          if [[ "${{ startsWith(github.ref, 'refs/heads/') }}" = "true" ]]; then
+            # Push to branch
+            github_ref="${{ github.ref }}"
+            branch=${github_ref#*refs/heads/} # strip prefix to get branch name
+            tags="${base}:${branch},${base}:${{ github.sha }},"
+          elif [[ "${{ startsWith(github.ref, 'refs/tags/') }}" = "true" ]]; then
+            VERSION="v$(cat TTS/VERSION)"
+            if [[ "${{ github.ref }}" != "refs/tags/${VERSION}" ]]; then
+              echo "Pushed tag does not match VERSION file. Aborting push."
+              exit 1
+            fi
+            tags="${base}:${VERSION},${base}:latest,${base}:${{ github.sha }}"
+          fi
+          echo "::set-output name=tags::${tags}"
+      - name: Set up QEMU
+        uses: docker/setup-qemu-action@v1
+      - name: Set up Docker Buildx
+        id: buildx
+        uses: docker/setup-buildx-action@v1
+      - name: Build and push
+        uses: docker/build-push-action@v2
+        with:
+          context: .
+          platforms: linux/${{ matrix.arch }}
+          push: ${{ github.event_name == 'push' }}
+          build-args: "BASE=${{ matrix.base }}"
+          tags: ${{ steps.compute-tag.outputs.tags }}
TTS/.github/workflows/inference_tests.yml
ADDED
@@ -0,0 +1,53 @@
+name: inference_tests
+
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    types: [opened, synchronize, reopened]
+jobs:
+  check_skip:
+    runs-on: ubuntu-latest
+    if: "! contains(github.event.head_commit.message, '[ci skip]')"
+    steps:
+      - run: echo "${{ github.event.head_commit.message }}"
+
+  test:
+    runs-on: ubuntu-latest
+    strategy:
+      fail-fast: false
+      matrix:
+        python-version: [3.9, "3.10", "3.11"]
+        experimental: [false]
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v4
+        with:
+          python-version: ${{ matrix.python-version }}
+          architecture: x64
+          cache: 'pip'
+          cache-dependency-path: 'requirements*'
+      - name: check OS
+        run: cat /etc/os-release
+      - name: set ENV
+        run: |
+          export TRAINER_TELEMETRY=0
+      - name: Install dependencies
+        run: |
+          sudo apt-get update
+          sudo apt-get install -y --no-install-recommends git make gcc
+          sudo apt-get install espeak-ng
+          make system-deps
+      - name: Install/upgrade Python setup deps
+        run: python3 -m pip install --upgrade pip setuptools wheel
+      - name: Replace scarf urls
+        run: |
+          sed -i 's/https:\/\/coqui.gateway.scarf.sh\//https:\/\/github.com\/coqui-ai\/TTS\/releases\/download\//g' TTS/.models.json
+      - name: Install TTS
+        run: |
+          python3 -m pip install .[all]
+          python3 setup.py egg_info
+      - name: Unit tests
+        run: make inference_tests
TTS/.github/workflows/pypi-release.yml
ADDED
@@ -0,0 +1,94 @@
+name: Publish Python 🐍 distributions 📦 to PyPI
+on:
+  release:
+    types: [published]
+defaults:
+  run:
+    shell:
+      bash
+jobs:
+  build-sdist:
+    runs-on: ubuntu-20.04
+    steps:
+      - uses: actions/checkout@v2
+      - name: Verify tag matches version
+        run: |
+          set -ex
+          version=$(cat TTS/VERSION)
+          tag="${GITHUB_REF/refs\/tags\/}"
+          if [[ "v$version" != "$tag" ]]; then
+            exit 1
+          fi
+      - uses: actions/setup-python@v2
+        with:
+          python-version: 3.9
+      - run: |
+          python -m pip install -U pip setuptools wheel build
+      - run: |
+          python -m build
+      - run: |
+          pip install dist/*.tar.gz
+      - uses: actions/upload-artifact@v2
+        with:
+          name: sdist
+          path: dist/*.tar.gz
+  build-wheels:
+    runs-on: ubuntu-20.04
+    strategy:
+      matrix:
+        python-version: ["3.9", "3.10", "3.11"]
+    steps:
+      - uses: actions/checkout@v2
+      - uses: actions/setup-python@v2
+        with:
+          python-version: ${{ matrix.python-version }}
+      - name: Install pip requirements
+        run: |
+          python -m pip install -U pip setuptools wheel build
+          python -m pip install -r requirements.txt
+      - name: Setup and install manylinux1_x86_64 wheel
+        run: |
+          python setup.py bdist_wheel --plat-name=manylinux1_x86_64
+          python -m pip install dist/*-manylinux*.whl
+      - uses: actions/upload-artifact@v2
+        with:
+          name: wheel-${{ matrix.python-version }}
+          path: dist/*-manylinux*.whl
+  publish-artifacts:
+    runs-on: ubuntu-20.04
+    needs: [build-sdist, build-wheels]
+    steps:
+      - run: |
+          mkdir dist
+      - uses: actions/download-artifact@v2
+        with:
+          name: "sdist"
+          path: "dist/"
+      - uses: actions/download-artifact@v2
+        with:
+          name: "wheel-3.9"
+          path: "dist/"
+      - uses: actions/download-artifact@v2
+        with:
+          name: "wheel-3.10"
+          path: "dist/"
+      - uses: actions/download-artifact@v2
+        with:
+          name: "wheel-3.11"
+          path: "dist/"
+      - run: |
+          ls -lh dist/
+      - name: Setup PyPI config
+        run: |
+          cat << EOF > ~/.pypirc
+          [pypi]
+          username=__token__
+          password=${{ secrets.PYPI_TOKEN }}
+          EOF
+      - uses: actions/setup-python@v2
+        with:
+          python-version: 3.9
+      - run: |
+          python -m pip install twine
+      - run: |
+          twine upload --repository pypi dist/*
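
The "Verify tag matches version" step guards against publishing a release whose git tag disagrees with the `TTS/VERSION` file. A sketch of that same check outside CI, with hypothetical values in place of the repo's VERSION file and the runner's GITHUB_REF:

```bash
# Hypothetical values standing in for TTS/VERSION and GITHUB_REF.
version="0.20.2"
GITHUB_REF="refs/tags/v0.20.2"
tag="${GITHUB_REF/refs\/tags\/}"   # bash substitution deletes the prefix -> "v0.20.2"
if [[ "v$version" != "$tag" ]]; then
  echo "tag does not match TTS/VERSION" >&2
  exit 1
fi
```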
TTS/.github/workflows/style_check.yml
ADDED
@@ -0,0 +1,47 @@
+name: style-check
+
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    types: [opened, synchronize, reopened]
+jobs:
+  check_skip:
+    runs-on: ubuntu-latest
+    if: "! contains(github.event.head_commit.message, '[ci skip]')"
+    steps:
+      - run: echo "${{ github.event.head_commit.message }}"
+
+  test:
+    runs-on: ubuntu-latest
+    strategy:
+      fail-fast: false
+      matrix:
+        python-version: [3.9]
+        experimental: [false]
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v4
+        with:
+          python-version: ${{ matrix.python-version }}
+          architecture: x64
+          cache: 'pip'
+          cache-dependency-path: 'requirements*'
+      - name: check OS
+        run: cat /etc/os-release
+      - name: Install dependencies
+        run: |
+          sudo apt-get update
+          sudo apt-get install -y git make gcc
+          make system-deps
+      - name: Install/upgrade Python setup deps
+        run: python3 -m pip install --upgrade pip setuptools wheel
+      - name: Install TTS
+        run: |
+          python3 -m pip install .[all]
+          python3 setup.py egg_info
+      # - name: Lint check
+      #   run: |
+      #     make lint
TTS/.github/workflows/text_tests.yml
ADDED
@@ -0,0 +1,50 @@
+name: text-tests
+
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    types: [opened, synchronize, reopened]
+jobs:
+  check_skip:
+    runs-on: ubuntu-latest
+    if: "! contains(github.event.head_commit.message, '[ci skip]')"
+    steps:
+      - run: echo "${{ github.event.head_commit.message }}"
+
+  test:
+    runs-on: ubuntu-latest
+    strategy:
+      fail-fast: false
+      matrix:
+        python-version: [3.9, "3.10", "3.11"]
+        experimental: [false]
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v4
+        with:
+          python-version: ${{ matrix.python-version }}
+          architecture: x64
+          cache: 'pip'
+          cache-dependency-path: 'requirements*'
+      - name: check OS
+        run: cat /etc/os-release
+      - name: set ENV
+        run: export TRAINER_TELEMETRY=0
+      - name: Install dependencies
+        run: |
+          sudo apt-get update
+          sudo apt-get install -y --no-install-recommends git make gcc
+          sudo apt-get install espeak
+          sudo apt-get install espeak-ng
+          make system-deps
+      - name: Install/upgrade Python setup deps
+        run: python3 -m pip install --upgrade pip setuptools wheel
+      - name: Install TTS
+        run: |
+          python3 -m pip install .[all]
+          python3 setup.py egg_info
+      - name: Unit tests
+        run: make test_text
TTS/.github/workflows/tts_tests.yml
ADDED
@@ -0,0 +1,53 @@
+name: tts-tests
+
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    types: [opened, synchronize, reopened]
+jobs:
+  check_skip:
+    runs-on: ubuntu-latest
+    if: "! contains(github.event.head_commit.message, '[ci skip]')"
+    steps:
+      - run: echo "${{ github.event.head_commit.message }}"
+
+  test:
+    runs-on: ubuntu-latest
+    strategy:
+      fail-fast: false
+      matrix:
+        python-version: [3.9, "3.10", "3.11"]
+        experimental: [false]
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v4
+        with:
+          python-version: ${{ matrix.python-version }}
+          architecture: x64
+          cache: 'pip'
+          cache-dependency-path: 'requirements*'
+      - name: check OS
+        run: cat /etc/os-release
+      - name: set ENV
+        run: export TRAINER_TELEMETRY=0
+      - name: Install dependencies
+        run: |
+          sudo apt-get update
+          sudo apt-get install -y --no-install-recommends git make gcc
+          sudo apt-get install espeak
+          sudo apt-get install espeak-ng
+          make system-deps
+      - name: Install/upgrade Python setup deps
+        run: python3 -m pip install --upgrade pip setuptools wheel
+      - name: Replace scarf urls
+        run: |
+          sed -i 's/https:\/\/coqui.gateway.scarf.sh\//https:\/\/github.com\/coqui-ai\/TTS\/releases\/download\//g' TTS/.models.json
+      - name: Install TTS
+        run: |
+          python3 -m pip install .[all]
+          python3 setup.py egg_info
+      - name: Unit tests
+        run: make test_tts
TTS/.github/workflows/tts_tests2.yml
ADDED
@@ -0,0 +1,53 @@
+name: tts-tests2
+
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    types: [opened, synchronize, reopened]
+jobs:
+  check_skip:
+    runs-on: ubuntu-latest
+    if: "! contains(github.event.head_commit.message, '[ci skip]')"
+    steps:
+      - run: echo "${{ github.event.head_commit.message }}"
+
+  test:
+    runs-on: ubuntu-latest
+    strategy:
+      fail-fast: false
+      matrix:
+        python-version: [3.9, "3.10", "3.11"]
+        experimental: [false]
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v4
+        with:
+          python-version: ${{ matrix.python-version }}
+          architecture: x64
+          cache: 'pip'
+          cache-dependency-path: 'requirements*'
+      - name: check OS
+        run: cat /etc/os-release
+      - name: set ENV
+        run: export TRAINER_TELEMETRY=0
+      - name: Install dependencies
+        run: |
+          sudo apt-get update
+          sudo apt-get install -y --no-install-recommends git make gcc
+          sudo apt-get install espeak
+          sudo apt-get install espeak-ng
+          make system-deps
+      - name: Install/upgrade Python setup deps
+        run: python3 -m pip install --upgrade pip setuptools wheel
+      - name: Replace scarf urls
+        run: |
+          sed -i 's/https:\/\/coqui.gateway.scarf.sh\//https:\/\/github.com\/coqui-ai\/TTS\/releases\/download\//g' TTS/.models.json
+      - name: Install TTS
+        run: |
+          python3 -m pip install .[all]
+          python3 setup.py egg_info
+      - name: Unit tests
+        run: make test_tts2
TTS/.github/workflows/vocoder_tests.yml
ADDED
@@ -0,0 +1,48 @@
+name: vocoder-tests
+
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    types: [opened, synchronize, reopened]
+jobs:
+  check_skip:
+    runs-on: ubuntu-latest
+    if: "! contains(github.event.head_commit.message, '[ci skip]')"
+    steps:
+      - run: echo "${{ github.event.head_commit.message }}"
+
+  test:
+    runs-on: ubuntu-latest
+    strategy:
+      fail-fast: false
+      matrix:
+        python-version: [3.9, "3.10", "3.11"]
+        experimental: [false]
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v4
+        with:
+          python-version: ${{ matrix.python-version }}
+          architecture: x64
+          cache: 'pip'
+          cache-dependency-path: 'requirements*'
+      - name: check OS
+        run: cat /etc/os-release
+      - name: set ENV
+        run: export TRAINER_TELEMETRY=0
+      - name: Install dependencies
+        run: |
+          sudo apt-get update
+          sudo apt-get install -y git make gcc
+          make system-deps
+      - name: Install/upgrade Python setup deps
+        run: python3 -m pip install --upgrade pip setuptools wheel
+      - name: Install TTS
+        run: |
+          python3 -m pip install .[all]
+          python3 setup.py egg_info
+      - name: Unit tests
+        run: make test_vocoder
TTS/.github/workflows/zoo_tests0.yml
ADDED
@@ -0,0 +1,54 @@
+name: zoo-tests-0
+
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    types: [opened, synchronize, reopened]
+jobs:
+  check_skip:
+    runs-on: ubuntu-latest
+    if: "! contains(github.event.head_commit.message, '[ci skip]')"
+    steps:
+      - run: echo "${{ github.event.head_commit.message }}"
+
+  test:
+    runs-on: ubuntu-latest
+    strategy:
+      fail-fast: false
+      matrix:
+        python-version: [3.9, "3.10", "3.11"]
+        experimental: [false]
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v4
+        with:
+          python-version: ${{ matrix.python-version }}
+          architecture: x64
+          cache: 'pip'
+          cache-dependency-path: 'requirements*'
+      - name: check OS
+        run: cat /etc/os-release
+      - name: set ENV
+        run: export TRAINER_TELEMETRY=0
+      - name: Install dependencies
+        run: |
+          sudo apt-get update
+          sudo apt-get install -y git make gcc
+          sudo apt-get install espeak espeak-ng
+          make system-deps
+      - name: Install/upgrade Python setup deps
+        run: python3 -m pip install --upgrade pip setuptools wheel
+      - name: Replace scarf urls
+        run: |
+          sed -i 's/https:\/\/coqui.gateway.scarf.sh\//https:\/\/github.com\/coqui-ai\/TTS\/releases\/download\//g' TTS/.models.json
+      - name: Install TTS
+        run: |
+          python3 -m pip install .[all]
+          python3 setup.py egg_info
+      - name: Unit tests
+        run: |
+          nose2 -F -v -B TTS tests.zoo_tests.test_models.test_models_offset_0_step_3
+          nose2 -F -v -B TTS tests.zoo_tests.test_models.test_voice_conversion
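
The three zoo-test workflows appear to shard the model zoo by offset and step (the test names are `test_models_offset_{0,1,2}_step_3`), so each job presumably covers every third model starting at its own offset. An illustrative sketch of that partitioning, using a made-up model list:

```bash
# Illustrative only: a stand-in model list, sharded the way the test names suggest.
models=(m0 m1 m2 m3 m4 m5 m6 m7 m8 m9)
for offset in 0 1 2; do
  shard=()
  for ((i = offset; i < ${#models[@]}; i += 3)); do
    shard+=("${models[i]}")
  done
  echo "offset $offset: ${shard[*]}"
done
```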
TTS/.github/workflows/zoo_tests1.yml
ADDED
@@ -0,0 +1,53 @@
+name: zoo-tests-1
+
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    types: [opened, synchronize, reopened]
+jobs:
+  check_skip:
+    runs-on: ubuntu-latest
+    if: "! contains(github.event.head_commit.message, '[ci skip]')"
+    steps:
+      - run: echo "${{ github.event.head_commit.message }}"
+
+  test:
+    runs-on: ubuntu-latest
+    strategy:
+      fail-fast: false
+      matrix:
+        python-version: [3.9, "3.10", "3.11"]
+        experimental: [false]
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v4
+        with:
+          python-version: ${{ matrix.python-version }}
+          architecture: x64
+          cache: 'pip'
+          cache-dependency-path: 'requirements*'
+      - name: check OS
+        run: cat /etc/os-release
+      - name: set ENV
+        run: export TRAINER_TELEMETRY=0
+      - name: Install dependencies
+        run: |
+          sudo apt-get update
+          sudo apt-get install -y git make gcc
+          sudo apt-get install espeak espeak-ng
+          make system-deps
+      - name: Install/upgrade Python setup deps
+        run: python3 -m pip install --upgrade pip setuptools wheel
+      - name: Replace scarf urls
+        run: |
+          sed -i 's/https:\/\/coqui.gateway.scarf.sh\/hf\/bark\//https:\/\/huggingface.co\/erogol\/bark\/resolve\/main\//g' TTS/.models.json
+          sed -i 's/https:\/\/coqui.gateway.scarf.sh\//https:\/\/github.com\/coqui-ai\/TTS\/releases\/download\//g' TTS/.models.json
+      - name: Install TTS
+        run: |
+          python3 -m pip install .[all]
+          python3 setup.py egg_info
+      - name: Unit tests
+        run: nose2 -F -v -B --with-coverage --coverage TTS tests.zoo_tests.test_models.test_models_offset_1_step_3
TTS/.github/workflows/zoo_tests2.yml
ADDED
@@ -0,0 +1,52 @@
+name: zoo-tests-2
+
+on:
+  push:
+    branches:
+      - main
+  pull_request:
+    types: [opened, synchronize, reopened]
+jobs:
+  check_skip:
+    runs-on: ubuntu-latest
+    if: "! contains(github.event.head_commit.message, '[ci skip]')"
+    steps:
+      - run: echo "${{ github.event.head_commit.message }}"
+
+  test:
+    runs-on: ubuntu-latest
+    strategy:
+      fail-fast: false
+      matrix:
+        python-version: [3.9, "3.10", "3.11"]
+        experimental: [false]
+    steps:
+      - uses: actions/checkout@v3
+      - name: Set up Python ${{ matrix.python-version }}
+        uses: actions/setup-python@v4
+        with:
+          python-version: ${{ matrix.python-version }}
+          architecture: x64
+          cache: 'pip'
+          cache-dependency-path: 'requirements*'
+      - name: check OS
+        run: cat /etc/os-release
+      - name: set ENV
+        run: export TRAINER_TELEMETRY=0
+      - name: Install dependencies
+        run: |
+          sudo apt-get update
+          sudo apt-get install -y git make gcc
+          sudo apt-get install espeak espeak-ng
+          make system-deps
+      - name: Install/upgrade Python setup deps
+        run: python3 -m pip install --upgrade pip setuptools wheel
+      - name: Replace scarf urls
+        run: |
+          sed -i 's/https:\/\/coqui.gateway.scarf.sh\//https:\/\/github.com\/coqui-ai\/TTS\/releases\/download\//g' TTS/.models.json
+      - name: Install TTS
+        run: |
+          python3 -m pip install .[all]
+          python3 setup.py egg_info
+      - name: Unit tests
+        run: nose2 -F -v -B --with-coverage --coverage TTS tests.zoo_tests.test_models.test_models_offset_2_step_3
TTS/.gitignore
ADDED
@@ -0,0 +1,171 @@
+WadaSNR/
+.idea/
+*.pyc
+.DS_Store
+./__init__.py
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+.hypothesis/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+.static_storage/
+.media/
+local_settings.py
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# pyenv
+.python-version
+
+# celery beat schedule file
+celerybeat-schedule
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+
+# vim
+*.swp
+*.swm
+*.swn
+*.swo
+
+# pytorch models
+*.pth
+*.pth.tar
+!dummy_speakers.pth
+result/
+
+# setup.py
+version.py
+
+# jupyter dummy files
+core
+
+# ignore local datasets
+recipes/WIP/*
+recipes/ljspeech/LJSpeech-1.1/*
+recipes/vctk/VCTK/*
+recipes/**/*.npy
+recipes/**/*.json
+VCTK-Corpus-removed-silence/*
+
+# ignore training logs
+trainer_*_log.txt
+
+# files used internally for dev, test etc.
+tests/outputs/*
+tests/train_outputs/*
+TODO.txt
+.vscode/*
+data/*
+notebooks/data/*
+TTS/tts/utils/monotonic_align/core.c
+.vscode-upload.json
+temp_build/*
+events.out*
+old_configs/*
+model_importers/*
+model_profiling/*
+docs/source/TODO/*
+.noseids
+.dccache
+log.txt
+umap.png
+*.out
+SocialMedia.txt
+output.wav
+tts_output.wav
+deps.json
+speakers.json
+internal/*
+*_pitch.npy
+*_phoneme.npy
+wandb
+depot/*
+coqui_recipes/*
+local_scripts/*
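
One detail worth noting in the ignore rules above: `*.pth` excludes all PyTorch checkpoints, while the later `!dummy_speakers.pth` re-includes that single test fixture. A quick way to confirm the interaction (standard git behavior, shown with throwaway files in a scratch repo):

```bash
# Demonstrates the *.pth / !dummy_speakers.pth interaction.
git init /tmp/gitignore-demo && cd /tmp/gitignore-demo
printf '*.pth\n!dummy_speakers.pth\n' > .gitignore
touch model.pth dummy_speakers.pth
git status --short   # lists dummy_speakers.pth (and .gitignore), not model.pth
```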
TTS/.pre-commit-config.yaml
ADDED
@@ -0,0 +1,27 @@
+repos:
+  - repo: 'https://github.com/pre-commit/pre-commit-hooks'
+    rev: v2.3.0
+    hooks:
+      - id: check-yaml
+      - id: end-of-file-fixer
+      - id: trailing-whitespace
+  - repo: 'https://github.com/psf/black'
+    rev: 22.3.0
+    hooks:
+      - id: black
+        language_version: python3
+  - repo: https://github.com/pycqa/isort
+    rev: 5.8.0
+    hooks:
+      - id: isort
+        name: isort (python)
+      - id: isort
+        name: isort (cython)
+        types: [cython]
+      - id: isort
+        name: isort (pyi)
+        types: [pyi]
+  - repo: https://github.com/pycqa/pylint
+    rev: v2.8.2
+    hooks:
+      - id: pylint
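
Typical local usage of this hook configuration (standard pre-commit commands, not part of this commit):

```bash
pip install pre-commit
pre-commit install           # register the git hook in .git/hooks
pre-commit run --all-files   # run check-yaml, black, isort, and pylint once
```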
TTS/.pylintrc
ADDED
@@ -0,0 +1,599 @@
+[MASTER]
+
+# A comma-separated list of package or module names from where C extensions may
+# be loaded. Extensions are loading into the active Python interpreter and may
+# run arbitrary code.
+extension-pkg-whitelist=
+
+# Add files or directories to the blacklist. They should be base names, not
+# paths.
+ignore=CVS
+
+# Add files or directories matching the regex patterns to the blacklist. The
+# regex matches against base names, not paths.
+ignore-patterns=
+
+# Python code to execute, usually for sys.path manipulation such as
+# pygtk.require().
+#init-hook=
+
+# Use multiple processes to speed up Pylint. Specifying 0 will auto-detect the
+# number of processors available to use.
+jobs=1
+
+# Control the amount of potential inferred values when inferring a single
+# object. This can help the performance when dealing with large functions or
+# complex, nested conditions.
+limit-inference-results=100
+
+# List of plugins (as comma separated values of python modules names) to load,
+# usually to register additional checkers.
+load-plugins=
+
+# Pickle collected data for later comparisons.
+persistent=yes
+
+# Specify a configuration file.
+#rcfile=
+
+# When enabled, pylint would attempt to guess common misconfiguration and emit
+# user-friendly hints instead of false-positive error messages.
+suggestion-mode=yes
+
+# Allow loading of arbitrary C extensions. Extensions are imported into the
+# active Python interpreter and may run arbitrary code.
+unsafe-load-any-extension=no
+
+
+[MESSAGES CONTROL]
+
+# Only show warnings with the listed confidence levels. Leave empty to show
+# all. Valid levels: HIGH, INFERENCE, INFERENCE_FAILURE, UNDEFINED.
+confidence=
+
+# Disable the message, report, category or checker with the given id(s). You
+# can either give multiple identifiers separated by comma (,) or put this
+# option multiple times (only on the command line, not in the configuration
+# file where it should appear only once). You can also use "--disable=all" to
+# disable everything first and then reenable specific checks. For example, if
+# you want to run only the similarities checker, you can use "--disable=all
+# --enable=similarities". If you want to run only the classes checker, but have
+# no Warning level messages displayed, use "--disable=all --enable=classes
+# --disable=W".
+disable=missing-docstring,
+        too-many-public-methods,
+        too-many-lines,
+        bare-except,
+        ## for avoiding weird p3.6 CI linter error
+        ## TODO: see later if we can remove this
+        assigning-non-slot,
+        unsupported-assignment-operation,
+        ## end
+        line-too-long,
+        fixme,
+        wrong-import-order,
+        ungrouped-imports,
+        wrong-import-position,
+        import-error,
+        invalid-name,
+        too-many-instance-attributes,
+        arguments-differ,
+        arguments-renamed,
+        no-name-in-module,
+        no-member,
+        unsubscriptable-object,
+        print-statement,
+        parameter-unpacking,
+        unpacking-in-except,
+        old-raise-syntax,
+        backtick,
+        long-suffix,
+        old-ne-operator,
+        old-octal-literal,
+        import-star-module-level,
+        non-ascii-bytes-literal,
+        raw-checker-failed,
+        bad-inline-option,
+        locally-disabled,
+        file-ignored,
+        suppressed-message,
+        useless-suppression,
+        deprecated-pragma,
+        use-symbolic-message-instead,
+        useless-object-inheritance,
+        too-few-public-methods,
+        too-many-branches,
+        too-many-arguments,
+        too-many-locals,
+        too-many-statements,
+        apply-builtin,
+        basestring-builtin,
+        buffer-builtin,
+        cmp-builtin,
+        coerce-builtin,
+        execfile-builtin,
+        file-builtin,
+        long-builtin,
+        raw_input-builtin,
+        reduce-builtin,
+        standarderror-builtin,
+        unicode-builtin,
+        xrange-builtin,
+        coerce-method,
+        delslice-method,
+        getslice-method,
+        setslice-method,
+        no-absolute-import,
+        old-division,
+        dict-iter-method,
+        dict-view-method,
+        next-method-called,
+        metaclass-assignment,
+        indexing-exception,
+        raising-string,
+        reload-builtin,
+        oct-method,
+        hex-method,
+        nonzero-method,
+        cmp-method,
+        input-builtin,
+        round-builtin,
+        intern-builtin,
+        unichr-builtin,
+        map-builtin-not-iterating,
+        zip-builtin-not-iterating,
+        range-builtin-not-iterating,
+        filter-builtin-not-iterating,
+        using-cmp-argument,
+        eq-without-hash,
+        div-method,
+        idiv-method,
+        rdiv-method,
+        exception-message-attribute,
+        invalid-str-codec,
+        sys-max-int,
+        bad-python3-import,
+        deprecated-string-function,
+        deprecated-str-translate-call,
+        deprecated-itertools-function,
+        deprecated-types-field,
+        next-method-defined,
+        dict-items-not-iterating,
+        dict-keys-not-iterating,
+        dict-values-not-iterating,
+        deprecated-operator-function,
+        deprecated-urllib-function,
+        xreadlines-attribute,
+        deprecated-sys-function,
+        exception-escape,
+        comprehension-escape,
+        duplicate-code,
+        not-callable,
+        import-outside-toplevel,
+        logging-fstring-interpolation,
+        logging-not-lazy
+
+# Enable the message, report, category or checker with the given id(s). You can
+# either give multiple identifier separated by comma (,) or put this option
+# multiple time (only on the command line, not in the configuration file where
+# it should appear only once). See also the "--disable" option for examples.
+enable=c-extension-no-member
+
+
+[REPORTS]
+
+# Python expression which should return a note less than 10 (10 is the highest
+# note). You have access to the variables errors warning, statement which
+# respectively contain the number of errors / warnings messages and the total
+# number of statements analyzed. This is used by the global evaluation report
+# (RP0004).
+evaluation=10.0 - ((float(5 * error + warning + refactor + convention) / statement) * 10)
+
+# Template used to display messages. This is a python new-style format string
+# used to format the message information. See doc for all details.
+#msg-template=
+
+# Set the output format. Available formats are text, parseable, colorized, json
+# and msvs (visual studio). You can also give a reporter class, e.g.
+# mypackage.mymodule.MyReporterClass.
+output-format=text
+
+# Tells whether to display a full report or only the messages.
+reports=no
+
+# Activate the evaluation score.
+score=yes
+
+
+[REFACTORING]
+
+# Maximum number of nested blocks for function / method body
+max-nested-blocks=5
+
+# Complete name of functions that never returns. When checking for
+# inconsistent-return-statements if a never returning function is called then
+# it will be considered as an explicit return statement and no message will be
+# printed.
+never-returning-functions=sys.exit
+
+
+[LOGGING]
+
+# Format style used to check logging format string. `old` means using %
+# formatting, while `new` is for `{}` formatting.
+logging-format-style=old
+
+# Logging modules to check that the string format arguments are in logging
+# function parameter format.
+logging-modules=logging
+
+
+[SPELLING]
+
+# Limits count of emitted suggestions for spelling mistakes.
+max-spelling-suggestions=4
+
+# Spelling dictionary name. Available dictionaries: none. To make it working
+# install python-enchant package..
+spelling-dict=
+
+# List of comma separated words that should not be checked.
+spelling-ignore-words=
+
+# A path to a file that contains private dictionary; one word per line.
+spelling-private-dict-file=
+
+# Tells whether to store unknown words to indicated private dictionary in
+# --spelling-private-dict-file option instead of raising a message.
+spelling-store-unknown-words=no
+
+
+[MISCELLANEOUS]
+
+# List of note tags to take in consideration, separated by a comma.
+notes=FIXME,
+      XXX,
+      TODO
+
+
+[TYPECHECK]
+
+# List of decorators that produce context managers, such as
+# contextlib.contextmanager. Add to this list to register other decorators that
+# produce valid context managers.
+contextmanager-decorators=contextlib.contextmanager
+
+# List of members which are set dynamically and missed by pylint inference
+# system, and so shouldn't trigger E1101 when accessed. Python regular
+# expressions are accepted.
+generated-members=numpy.*,torch.*
+
+# Tells whether missing members accessed in mixin class should be ignored. A
+# mixin class is detected if its name ends with "mixin" (case insensitive).
+ignore-mixin-members=yes
+
+# Tells whether to warn about missing members when the owner of the attribute
+# is inferred to be None.
+ignore-none=yes
+
+# This flag controls whether pylint should warn about no-member and similar
+# checks whenever an opaque object is returned when inferring. The inference
+# can return multiple potential results while evaluating a Python object, but
+# some branches might not be evaluated, which results in partial inference. In
+# that case, it might be useful to still emit no-member and other checks for
+# the rest of the inferred objects.
+ignore-on-opaque-inference=yes
+
+# List of class names for which member attributes should not be checked (useful
+# for classes with dynamically set attributes). This supports the use of
+# qualified names.
+ignored-classes=optparse.Values,thread._local,_thread._local
+
+# List of module names for which member attributes should not be checked
+# (useful for modules/projects where namespaces are manipulated during runtime
+# and thus existing member attributes cannot be deduced by static analysis. It
+# supports qualified module names, as well as Unix pattern matching.
+ignored-modules=
+
+# Show a hint with possible names when a member name was not found. The aspect
+# of finding the hint is based on edit distance.
+missing-member-hint=yes
+
+# The minimum edit distance a name should have in order to be considered a
+# similar match for a missing member name.
+missing-member-hint-distance=1
+
+# The total number of similar names that should be taken in consideration when
+# showing a hint for a missing member.
+missing-member-max-choices=1
+
+
+[VARIABLES]
+
+# List of additional names supposed to be defined in builtins. Remember that
+# you should avoid defining new builtins when possible.
+additional-builtins=
+
+# Tells whether unused global variables should be treated as a violation.
+allow-global-unused-variables=yes
+
+# List of strings which can identify a callback function by name. A callback
+# name must start or end with one of those strings.
+callbacks=cb_,
+          _cb
+
+# A regular expression matching the name of dummy variables (i.e. expected to
+# not be used).
+dummy-variables-rgx=_+$|(_[a-zA-Z0-9_]*[a-zA-Z0-9]+?$)|dummy|^ignored_|^unused_
+
+# Argument names that match this expression will be ignored. Default to name
+# with leading underscore.
+ignored-argument-names=_.*|^ignored_|^unused_
+
+# Tells whether we should check for unused import in __init__ files.
+init-import=no
+
+# List of qualified module names which can have objects that can redefine
+# builtins.
+redefining-builtins-modules=six.moves,past.builtins,future.builtins,builtins,io
+
+
+[FORMAT]
+
+# Expected format of line ending, e.g. empty (any line ending), LF or CRLF.
+expected-line-ending-format=
+
+# Regexp for a line that is allowed to be longer than the limit.
+ignore-long-lines=^\s*(# )?<?https?://\S+>?$
+
+# Number of spaces of indent required inside a hanging or continued line.
+indent-after-paren=4
+
+# String used as indentation unit. This is usually "    " (4 spaces) or "\t" (1
+# tab).
+indent-string='    '
356 |
+
# Maximum number of characters on a single line.
|
357 |
+
max-line-length=120
|
358 |
+
|
359 |
+
# Maximum number of lines in a module.
|
360 |
+
max-module-lines=1000
|
361 |
+
|
362 |
+
# List of optional constructs for which whitespace checking is disabled. `dict-
|
363 |
+
# separator` is used to allow tabulation in dicts, etc.: {1 : 1,\n222: 2}.
|
364 |
+
# `trailing-comma` allows a space between comma and closing bracket: (a, ).
|
365 |
+
# `empty-line` allows space-only lines.
|
366 |
+
no-space-check=trailing-comma,
|
367 |
+
dict-separator
|
368 |
+
|
369 |
+
# Allow the body of a class to be on the same line as the declaration if body
|
370 |
+
# contains single statement.
|
371 |
+
single-line-class-stmt=no
|
372 |
+
|
373 |
+
# Allow the body of an if to be on the same line as the test if there is no
|
374 |
+
# else.
|
375 |
+
single-line-if-stmt=no
|
376 |
+
|
377 |
+
|
378 |
+
[SIMILARITIES]
|
379 |
+
|
380 |
+
# Ignore comments when computing similarities.
|
381 |
+
ignore-comments=yes
|
382 |
+
|
383 |
+
# Ignore docstrings when computing similarities.
|
384 |
+
ignore-docstrings=yes
|
385 |
+
|
386 |
+
# Ignore imports when computing similarities.
|
387 |
+
ignore-imports=no
|
388 |
+
|
389 |
+
# Minimum lines number of a similarity.
|
390 |
+
min-similarity-lines=4
|
391 |
+
|
392 |
+
|
393 |
+
[BASIC]
|
394 |
+
|
395 |
+
# Naming style matching correct argument names.
|
396 |
+
argument-naming-style=snake_case
|
397 |
+
|
398 |
+
# Regular expression matching correct argument names. Overrides argument-
|
399 |
+
# naming-style.
|
400 |
+
argument-rgx=[a-z_][a-z0-9_]{0,30}$
|
401 |
+
|
402 |
+
# Naming style matching correct attribute names.
|
403 |
+
attr-naming-style=snake_case
|
404 |
+
|
405 |
+
# Regular expression matching correct attribute names. Overrides attr-naming-
|
406 |
+
# style.
|
407 |
+
#attr-rgx=
|
408 |
+
|
409 |
+
# Bad variable names which should always be refused, separated by a comma.
|
410 |
+
bad-names=
|
411 |
+
|
412 |
+
# Naming style matching correct class attribute names.
|
413 |
+
class-attribute-naming-style=any
|
414 |
+
|
415 |
+
# Regular expression matching correct class attribute names. Overrides class-
|
416 |
+
# attribute-naming-style.
|
417 |
+
#class-attribute-rgx=
|
418 |
+
|
419 |
+
# Naming style matching correct class names.
|
420 |
+
class-naming-style=PascalCase
|
421 |
+
|
422 |
+
# Regular expression matching correct class names. Overrides class-naming-
|
423 |
+
# style.
|
424 |
+
#class-rgx=
|
425 |
+
|
426 |
+
# Naming style matching correct constant names.
|
427 |
+
const-naming-style=UPPER_CASE
|
428 |
+
|
429 |
+
# Regular expression matching correct constant names. Overrides const-naming-
|
430 |
+
# style.
|
431 |
+
#const-rgx=
|
432 |
+
|
433 |
+
# Minimum line length for functions/classes that require docstrings, shorter
|
434 |
+
# ones are exempt.
|
435 |
+
docstring-min-length=-1
|
436 |
+
|
437 |
+
# Naming style matching correct function names.
|
438 |
+
function-naming-style=snake_case
|
439 |
+
|
440 |
+
# Regular expression matching correct function names. Overrides function-
|
441 |
+
# naming-style.
|
442 |
+
#function-rgx=
|
443 |
+
|
444 |
+
# Good variable names which should always be accepted, separated by a comma.
|
445 |
+
good-names=i,
|
446 |
+
j,
|
447 |
+
k,
|
448 |
+
x,
|
449 |
+
ex,
|
450 |
+
Run,
|
451 |
+
_
|
452 |
+
|
453 |
+
# Include a hint for the correct naming format with invalid-name.
|
454 |
+
include-naming-hint=no
|
455 |
+
|
456 |
+
# Naming style matching correct inline iteration names.
|
457 |
+
inlinevar-naming-style=any
|
458 |
+
|
459 |
+
# Regular expression matching correct inline iteration names. Overrides
|
460 |
+
# inlinevar-naming-style.
|
461 |
+
#inlinevar-rgx=
|
462 |
+
|
463 |
+
# Naming style matching correct method names.
|
464 |
+
method-naming-style=snake_case
|
465 |
+
|
466 |
+
# Regular expression matching correct method names. Overrides method-naming-
|
467 |
+
# style.
|
468 |
+
#method-rgx=
|
469 |
+
|
470 |
+
# Naming style matching correct module names.
|
471 |
+
module-naming-style=snake_case
|
472 |
+
|
473 |
+
# Regular expression matching correct module names. Overrides module-naming-
|
474 |
+
# style.
|
475 |
+
#module-rgx=
|
476 |
+
|
477 |
+
# Colon-delimited sets of names that determine each other's naming style when
|
478 |
+
# the name regexes allow several styles.
|
479 |
+
name-group=
|
480 |
+
|
481 |
+
# Regular expression which should only match function or class names that do
|
482 |
+
# not require a docstring.
|
483 |
+
no-docstring-rgx=^_
|
484 |
+
|
485 |
+
# List of decorators that produce properties, such as abc.abstractproperty. Add
|
486 |
+
# to this list to register other decorators that produce valid properties.
|
487 |
+
# These decorators are taken in consideration only for invalid-name.
|
488 |
+
property-classes=abc.abstractproperty
|
489 |
+
|
490 |
+
# Naming style matching correct variable names.
|
491 |
+
variable-naming-style=snake_case
|
492 |
+
|
493 |
+
# Regular expression matching correct variable names. Overrides variable-
|
494 |
+
# naming-style.
|
495 |
+
variable-rgx=[a-z_][a-z0-9_]{0,30}$
|
496 |
+
|
497 |
+
|
498 |
+
[STRING]
|
499 |
+
|
500 |
+
# This flag controls whether the implicit-str-concat-in-sequence should
|
501 |
+
# generate a warning on implicit string concatenation in sequences defined over
|
502 |
+
# several lines.
|
503 |
+
check-str-concat-over-line-jumps=no
|
504 |
+
|
505 |
+
|
506 |
+
[IMPORTS]
|
507 |
+
|
508 |
+
# Allow wildcard imports from modules that define __all__.
|
509 |
+
allow-wildcard-with-all=no
|
510 |
+
|
511 |
+
# Analyse import fallback blocks. This can be used to support both Python 2 and
|
512 |
+
# 3 compatible code, which means that the block might have code that exists
|
513 |
+
# only in one or another interpreter, leading to false positives when analysed.
|
514 |
+
analyse-fallback-blocks=no
|
515 |
+
|
516 |
+
# Deprecated modules which should not be used, separated by a comma.
|
517 |
+
deprecated-modules=optparse,tkinter.tix
|
518 |
+
|
519 |
+
# Create a graph of external dependencies in the given file (report RP0402 must
|
520 |
+
# not be disabled).
|
521 |
+
ext-import-graph=
|
522 |
+
|
523 |
+
# Create a graph of every (i.e. internal and external) dependencies in the
|
524 |
+
# given file (report RP0402 must not be disabled).
|
525 |
+
import-graph=
|
526 |
+
|
527 |
+
# Create a graph of internal dependencies in the given file (report RP0402 must
|
528 |
+
# not be disabled).
|
529 |
+
int-import-graph=
|
530 |
+
|
531 |
+
# Force import order to recognize a module as part of the standard
|
532 |
+
# compatibility libraries.
|
533 |
+
known-standard-library=
|
534 |
+
|
535 |
+
# Force import order to recognize a module as part of a third party library.
|
536 |
+
known-third-party=enchant
|
537 |
+
|
538 |
+
|
539 |
+
[CLASSES]
|
540 |
+
|
541 |
+
# List of method names used to declare (i.e. assign) instance attributes.
|
542 |
+
defining-attr-methods=__init__,
|
543 |
+
__new__,
|
544 |
+
setUp
|
545 |
+
|
546 |
+
# List of member names, which should be excluded from the protected access
|
547 |
+
# warning.
|
548 |
+
exclude-protected=_asdict,
|
549 |
+
_fields,
|
550 |
+
_replace,
|
551 |
+
_source,
|
552 |
+
_make
|
553 |
+
|
554 |
+
# List of valid names for the first argument in a class method.
|
555 |
+
valid-classmethod-first-arg=cls
|
556 |
+
|
557 |
+
# List of valid names for the first argument in a metaclass class method.
|
558 |
+
valid-metaclass-classmethod-first-arg=cls
|
559 |
+
|
560 |
+
|
561 |
+
[DESIGN]
|
562 |
+
|
563 |
+
# Maximum number of arguments for function / method.
|
564 |
+
max-args=5
|
565 |
+
|
566 |
+
# Maximum number of attributes for a class (see R0902).
|
567 |
+
max-attributes=7
|
568 |
+
|
569 |
+
# Maximum number of boolean expressions in an if statement.
|
570 |
+
max-bool-expr=5
|
571 |
+
|
572 |
+
# Maximum number of branch for function / method body.
|
573 |
+
max-branches=12
|
574 |
+
|
575 |
+
# Maximum number of locals for function / method body.
|
576 |
+
max-locals=15
|
577 |
+
|
578 |
+
# Maximum number of parents for a class (see R0901).
|
579 |
+
max-parents=15
|
580 |
+
|
581 |
+
# Maximum number of public methods for a class (see R0904).
|
582 |
+
max-public-methods=20
|
583 |
+
|
584 |
+
# Maximum number of return / yield for function / method body.
|
585 |
+
max-returns=6
|
586 |
+
|
587 |
+
# Maximum number of statements in function / method body.
|
588 |
+
max-statements=50
|
589 |
+
|
590 |
+
# Minimum number of public methods for a class (see R0903).
|
591 |
+
min-public-methods=2
|
592 |
+
|
593 |
+
|
594 |
+
[EXCEPTIONS]
|
595 |
+
|
596 |
+
# Exceptions that will emit a warning when being caught. Defaults to
|
597 |
+
# "BaseException, Exception".
|
598 |
+
overgeneral-exceptions=BaseException,
|
599 |
+
Exception
|
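The `evaluation` expression that opens this hunk is the formula pylint uses to turn message counts into its score out of ten: errors are weighted five times as heavily as warnings, refactors, and conventions, and the weighted total is normalised by the number of analysed statements. A quick worked check (the counts below are made up for illustration):

```bash
# Hypothetical run: 2 errors, 5 warnings, 3 refactors, 4 conventions
# across 200 statements -> 10.0 - ((5*2 + 5 + 3 + 4) / 200) * 10
$ python3 -c 'print(10.0 - ((5*2 + 5 + 3 + 4) / 200) * 10)'
8.9
```

This is the score pylint prints at the end of a `make lint` run.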
TTS/.readthedocs.yml
ADDED
@@ -0,0 +1,23 @@
+# .readthedocs.yml
+# Read the Docs configuration file
+# See https://docs.readthedocs.io/en/stable/config-file/v2.html for details
+
+# Required
+version: 2
+
+# Set the version of Python and other tools you might need
+build:
+  os: ubuntu-22.04
+  tools:
+    python: "3.11"
+
+# Optionally set the version of Python and requirements required to build your docs
+python:
+  install:
+    - requirements: docs/requirements.txt
+    - requirements: requirements.txt
+
+# Build documentation in the docs/ directory with Sphinx
+sphinx:
+  builder: html
+  configuration: docs/source/conf.py
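To check this configuration before pushing, the same Sphinx build can be approximated locally with the two requirements files it references. A minimal sketch (assumes a Python environment comparable to the 3.11 toolchain above):

```bash
$ pip install -r docs/requirements.txt -r requirements.txt
# docs/source holds the conf.py named above; the output path is arbitrary
$ sphinx-build -b html docs/source docs/_build/html
```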
TTS/CITATION.cff
ADDED
@@ -0,0 +1,20 @@
+cff-version: 1.2.0
+message: "If you want to cite 🐸💬, feel free to use this (but only if you loved it 😊)"
+title: "Coqui TTS"
+abstract: "A deep learning toolkit for Text-to-Speech, battle-tested in research and production"
+date-released: 2021-01-01
+authors:
+  - family-names: "Eren"
+    given-names: "Gölge"
+  - name: "The Coqui TTS Team"
+version: 1.4
+doi: 10.5281/zenodo.6334862
+license: "MPL-2.0"
+url: "https://www.coqui.ai"
+repository-code: "https://github.com/coqui-ai/TTS"
+keywords:
+  - machine learning
+  - deep learning
+  - artificial intelligence
+  - text to speech
+  - TTS
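For reference, a CFF file like this can be converted into other citation formats with the third-party `cffconvert` tool; this is an assumption for illustration, not something shipped with the repo:

```bash
$ pip install cffconvert
# reads ./CITATION.cff by default and prints a BibTeX entry
$ cffconvert --format bibtex
```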
TTS/CODE_OF_CONDUCT.md
ADDED
@@ -0,0 +1,133 @@
+
+# Contributor Covenant Code of Conduct
+
+## Our Pledge
+
+We as members, contributors, and leaders pledge to make participation in our
+community a harassment-free experience for everyone, regardless of age, body
+size, visible or invisible disability, ethnicity, sex characteristics, gender
+identity and expression, level of experience, education, socio-economic status,
+nationality, personal appearance, race, caste, color, religion, or sexual identity
+and orientation.
+
+We pledge to act and interact in ways that contribute to an open, welcoming,
+diverse, inclusive, and healthy community.
+
+## Our Standards
+
+Examples of behavior that contributes to a positive environment for our
+community include:
+
+* Demonstrating empathy and kindness toward other people
+* Being respectful of differing opinions, viewpoints, and experiences
+* Giving and gracefully accepting constructive feedback
+* Accepting responsibility and apologizing to those affected by our mistakes,
+  and learning from the experience
+* Focusing on what is best not just for us as individuals, but for the
+  overall community
+
+Examples of unacceptable behavior include:
+
+* The use of sexualized language or imagery, and sexual attention or
+  advances of any kind
+* Trolling, insulting or derogatory comments, and personal or political attacks
+* Public or private harassment
+* Publishing others' private information, such as a physical or email
+  address, without their explicit permission
+* Other conduct which could reasonably be considered inappropriate in a
+  professional setting
+
+## Enforcement Responsibilities
+
+Community leaders are responsible for clarifying and enforcing our standards of
+acceptable behavior and will take appropriate and fair corrective action in
+response to any behavior that they deem inappropriate, threatening, offensive,
+or harmful.
+
+Community leaders have the right and responsibility to remove, edit, or reject
+comments, commits, code, wiki edits, issues, and other contributions that are
+not aligned to this Code of Conduct, and will communicate reasons for moderation
+decisions when appropriate.
+
+## Scope
+
+This Code of Conduct applies within all community spaces, and also applies when
+an individual is officially representing the community in public spaces.
+Examples of representing our community include using an official e-mail address,
+posting via an official social media account, or acting as an appointed
+representative at an online or offline event.
+
+## Enforcement
+
+Instances of abusive, harassing, or otherwise unacceptable behavior may be
+reported to the community leaders responsible for enforcement at
+coc-report@coqui.ai.
+All complaints will be reviewed and investigated promptly and fairly.
+
+All community leaders are obligated to respect the privacy and security of the
+reporter of any incident.
+
+## Enforcement Guidelines
+
+Community leaders will follow these Community Impact Guidelines in determining
+the consequences for any action they deem in violation of this Code of Conduct:
+
+### 1. Correction
+
+**Community Impact**: Use of inappropriate language or other behavior deemed
+unprofessional or unwelcome in the community.
+
+**Consequence**: A private, written warning from community leaders, providing
+clarity around the nature of the violation and an explanation of why the
+behavior was inappropriate. A public apology may be requested.
+
+### 2. Warning
+
+**Community Impact**: A violation through a single incident or series
+of actions.
+
+**Consequence**: A warning with consequences for continued behavior. No
+interaction with the people involved, including unsolicited interaction with
+those enforcing the Code of Conduct, for a specified period of time. This
+includes avoiding interactions in community spaces as well as external channels
+like social media. Violating these terms may lead to a temporary or
+permanent ban.
+
+### 3. Temporary Ban
+
+**Community Impact**: A serious violation of community standards, including
+sustained inappropriate behavior.
+
+**Consequence**: A temporary ban from any sort of interaction or public
+communication with the community for a specified period of time. No public or
+private interaction with the people involved, including unsolicited interaction
+with those enforcing the Code of Conduct, is allowed during this period.
+Violating these terms may lead to a permanent ban.
+
+### 4. Permanent Ban
+
+**Community Impact**: Demonstrating a pattern of violation of community
+standards, including sustained inappropriate behavior, harassment of an
+individual, or aggression toward or disparagement of classes of individuals.
+
+**Consequence**: A permanent ban from any sort of public interaction within
+the community.
+
+## Attribution
+
+This Code of Conduct is adapted from the [Contributor Covenant][homepage],
+version 2.0, available at
+[https://www.contributor-covenant.org/version/2/0/code_of_conduct.html][v2.0].
+
+Community Impact Guidelines were inspired by
+[Mozilla's code of conduct enforcement ladder][Mozilla CoC].
+
+For answers to common questions about this code of conduct, see the FAQ at
+[https://www.contributor-covenant.org/faq][FAQ]. Translations are available
+at [https://www.contributor-covenant.org/translations][translations].
+
+[homepage]: https://www.contributor-covenant.org
+[v2.0]: https://www.contributor-covenant.org/version/2/0/code_of_conduct.html
+[Mozilla CoC]: https://github.com/mozilla/diversity
+[FAQ]: https://www.contributor-covenant.org/faq
+[translations]: https://www.contributor-covenant.org/translations
TTS/CODE_OWNERS.rst
ADDED
@@ -0,0 +1,75 @@
+TTS code owners / governance system
+==========================================
+
+TTS is run under a governance system inspired by (and partially copied from) the `Mozilla module ownership system <https://www.mozilla.org/about/governance/policies/module-ownership/>`_. The project is roughly divided into modules, and each module has its owners, who are responsible for reviewing pull requests and deciding on the technical direction of their modules. Module ownership authority is given to people who have worked extensively on areas of the project.
+
+Module owners also have the authority to name other module owners or appoint module peers, who are people with authority to review pull requests in that module. They can also sub-divide their module into sub-modules with their own owners.
+
+Module owners are not tyrants. They are chartered to make decisions with input from the community and in the best interest of the community. Module owners are not required to make code changes or additions solely because the community wants them to do so. (Like anyone else, the module owners may write code because they want to, because their employers want them to, because the community wants them to, or for some other reason.) Module owners do need to pay attention to patches submitted to that module. However, "pay attention" does not mean agreeing to every patch. Some patches may not make sense for the TTS project; some may be poorly implemented. Module owners have the authority to decline a patch; this is a necessary part of the role. We ask the module owners to describe in the relevant issue their reasons for wanting changes to a patch, for declining it altogether, or for postponing review for some period. We don't ask or expect them to rewrite patches to make them acceptable. Similarly, module owners may need to delay review of a promising patch due to an upcoming deadline. For example, a patch may be of interest, but not for the next milestone. In such a case it may make sense for the module owner to postpone review of a patch until after matters needed for a milestone have been finalized. Again, we expect this to be described in the relevant issue. And of course, it shouldn't go on very often or for very long, or escalation and review is likely.
+
+The work of the various module owners and peers is overseen by the global owners, who are responsible for making final decisions in case there's conflict between owners, as well as setting the direction for the project as a whole.
+
+This file describes module owners who are active on the project and which parts of the code they have expertise on (and interest in). If you're making changes to the code and are wondering who's an appropriate person to talk to, this list will tell you who to ping.
+
+There's overlap in the areas of expertise of each owner, and in particular when looking at which files are covered by each area, there is a lot of overlap. Don't worry about getting it exactly right when requesting review; any code owner will be happy to redirect the request to a more appropriate person.
+
+Global owners
+----------------
+
+These are people who have worked on the project extensively and are familiar with all or most parts of it. Their expertise and review guidance is trusted by other code owners to cover their own areas of expertise. In case of conflicting opinions from other owners, global owners will make a final decision.
+
+- Eren Gölge (@erogol)
+- Reuben Morais (@reuben)
+
+Training, feeding
+-----------------
+
+- Eren Gölge (@erogol)
+
+Model exporting
+---------------
+
+- Eren Gölge (@erogol)
+
+Multi-Speaker TTS
+-----------------
+
+- Eren Gölge (@erogol)
+- Edresson Casanova (@edresson)
+
+TTS
+---
+
+- Eren Gölge (@erogol)
+
+Vocoders
+--------
+
+- Eren Gölge (@erogol)
+
+Speaker Encoder
+---------------
+
+- Eren Gölge (@erogol)
+
+Testing & CI
+------------
+
+- Eren Gölge (@erogol)
+- Reuben Morais (@reuben)
+
+Python bindings
+---------------
+
+- Eren Gölge (@erogol)
+- Reuben Morais (@reuben)
+
+Documentation
+-------------
+
+- Eren Gölge (@erogol)
+
+Third party bindings
+--------------------
+
+Owned by the author.
TTS/CONTRIBUTING.md
ADDED
@@ -0,0 +1,136 @@
+# Contribution guidelines
+
+Welcome to the 🐸TTS!
+
+This repository is governed by [the Contributor Covenant Code of Conduct](https://github.com/coqui-ai/TTS/blob/main/CODE_OF_CONDUCT.md).
+
+## Where to start
+We welcome everyone who would like to contribute to 🐸TTS.
+
+You can contribute not only with code but with bug reports, comments, questions, answers, or just a simple tweet to spread the word.
+
+If you'd like to contribute code or squash a bug but don't know where to start, here are some pointers.
+
+- [Development Road Map](https://github.com/coqui-ai/TTS/issues/378)
+
+    You can pick something out of our road map. We keep the progress of the project in this simple issue thread. It has new model proposals, developmental updates, etc.
+
+- [Github Issues Tracker](https://github.com/coqui-ai/TTS/issues)
+
+    This is a place to find feature requests and bugs.
+
+    Issues with the ```good first issue``` tag are a good place for beginners to take on.
+
+- ✨**PR**✨ [pages](https://github.com/coqui-ai/TTS/pulls) with the ```🚀new version``` tag.
+
+    We list all the target improvements for the next version. You can pick one of them and start contributing.
+
+- Also feel free to suggest new features, ideas and models. We're always open to new things.
+
+## Call for sharing language models
+If possible, please consider sharing your pre-trained models in any language (if the licenses allow you to do so). We will include them in our model catalogue for public use and give proper attribution, whether it be your name, company, website or any other source specified.
+
+This model can be shared in two ways:
+1. Share the model files with us and we serve them with the next 🐸 TTS release.
+2. Upload your models on GDrive and share the link.
+
+Models are served under the `.models.json` file and any model is available under the TTS CLI and Server end points.
+
+Either way you choose, please make sure you send the models [here](https://github.com/coqui-ai/TTS/discussions/930).
+
+## Sending a ✨**PR**✨
+
+If you have a new feature, a model to implement, or a bug to squash, go ahead and send a ✨**PR**✨.
+Please use the following steps to send a ✨**PR**✨.
+Let us know if you encounter a problem along the way.
+
+The following steps are tested on an Ubuntu system.
+
+1. Fork [🐸TTS](https://github.com/coqui-ai/TTS) by clicking the fork button at the top right corner of the project page.
+
+2. Clone 🐸TTS and add the main repo as a new remote named ```upstream```.
+
+    ```bash
+    $ git clone git@github.com:<your Github name>/TTS.git
+    $ cd TTS
+    $ git remote add upstream https://github.com/coqui-ai/TTS.git
+    ```
+
+3. Install 🐸TTS for development.
+
+    ```bash
+    $ make system-deps  # intended to be used on Ubuntu (Debian). Let us know if you have a different OS.
+    $ make install
+    ```
+
+4. Create a new branch with an informative name for your goal.
+
+    ```bash
+    $ git checkout -b an_informative_name_for_my_branch
+    ```
+
+5. Implement your changes on your new branch.
+
+6. Explain your code using [Google Style](https://google.github.io/styleguide/pyguide.html#381-docstrings) docstrings.
+
+7. Add your tests to our test suite under the ```tests``` folder. It is important to show that your code works and handles edge cases, and to inform others about the intended use.
+
+8. Run the tests to see how your updates work with the rest of the project. You can repeat this step multiple times as you implement your changes to make sure you are going in the right direction.
+
+    ```bash
+    $ make test      # stop at the first error
+    $ make test_all  # run all the tests, report all the errors
+    ```
+
+9. Format your code. We use ```black``` for code and ```isort``` for ```import``` formatting.
+
+    ```bash
+    $ make style
+    ```
+
+10. Run the linter and correct the issues raised. We use ```pylint``` for linting. It helps to enforce a coding standard and offers simple refactoring suggestions.
+
+    ```bash
+    $ make lint
+    ```
+
+11. When things are good, add new files and commit your changes.
+
+    ```bash
+    $ git add my_file1.py my_file2.py ...
+    $ git commit
+    ```
+
+    It's good practice to regularly sync your local copy of the project with the upstream code to keep up with recent updates.
+
+    ```bash
+    $ git fetch upstream
+    $ git rebase upstream/master
+    # or for the development version
+    $ git rebase upstream/dev
+    ```
+
+12. Send a PR to the ```dev``` branch.
+
+    Push your branch to your fork.
+
+    ```bash
+    $ git push -u origin an_informative_name_for_my_branch
+    ```
+
+    Then go to your fork's Github page and click on 'Pull request' to send your ✨**PR**✨.
+
+    Please set the ✨**PR**✨'s target branch to ```dev``` as we use ```dev``` to work on the next version.
+
+13. Let's discuss until it is perfect. 💪
+
+    We might ask you for certain changes that would appear on the ✨**PR**✨'s page under [🐸TTS](https://github.com/coqui-ai/TTS/pulls).
+
+14. Once things look perfect, we merge it to the ```dev``` branch and make it ready for the next version.
+
+Feel free to ping us at any step you need help using our communication channels.
+
+If you are new to Github or open-source contribution, these are good resources.
+
+- [Github Docs](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/proposing-changes-to-your-work-with-pull-requests)
+- [First-Contribution](https://github.com/firstcontributions/first-contributions)
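On the model-sharing point above: once a model is listed in `.models.json`, it becomes addressable by name from the CLI. A minimal sketch (the model name below is illustrative; browse the real catalogue first):

```bash
$ tts --list_models          # browse the model catalogue
$ tts --text "Hello from a shared model." \
      --model_name "tts_models/en/ljspeech/tacotron2-DDC" \
      --out_path out.wav
```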
TTS/Dockerfile
ADDED
@@ -0,0 +1,13 @@
+ARG BASE=nvidia/cuda:11.8.0-base-ubuntu22.04
+FROM ${BASE}
+RUN apt-get update && apt-get upgrade -y
+RUN apt-get install -y --no-install-recommends gcc g++ make python3 python3-dev python3-pip python3-venv python3-wheel espeak-ng libsndfile1-dev && rm -rf /var/lib/apt/lists/*
+RUN pip3 install llvmlite --ignore-installed
+
+WORKDIR /root
+COPY . /root
+RUN pip3 install torch torchaudio --extra-index-url https://download.pytorch.org/whl/cu118
+RUN rm -rf /root/.cache/pip
+RUN make install
+ENTRYPOINT ["tts"]
+CMD ["--help"]
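A sketch of how this image is typically built and used; the image tag is a placeholder, and GPU use assumes the NVIDIA container toolkit on the host:

```bash
$ docker build -t tts .        # BASE can be overridden via --build-arg
$ docker run --rm tts --help   # arguments are passed to the `tts` ENTRYPOINT
$ docker run --rm --gpus all -v "$PWD:/out" tts \
      --text "Hello." --out_path /out/hello.wav --use_cuda true
```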
TTS/LICENSE.txt
ADDED
@@ -0,0 +1,373 @@
+Mozilla Public License Version 2.0
+==================================
+
+1. Definitions
+--------------
+
+1.1. "Contributor"
+    means each individual or legal entity that creates, contributes to
+    the creation of, or owns Covered Software.
+
+1.2. "Contributor Version"
+    means the combination of the Contributions of others (if any) used
+    by a Contributor and that particular Contributor's Contribution.
+
+1.3. "Contribution"
+    means Covered Software of a particular Contributor.
+
+1.4. "Covered Software"
+    means Source Code Form to which the initial Contributor has attached
+    the notice in Exhibit A, the Executable Form of such Source Code
+    Form, and Modifications of such Source Code Form, in each case
+    including portions thereof.
+
+1.5. "Incompatible With Secondary Licenses"
+    means
+
+    (a) that the initial Contributor has attached the notice described
+        in Exhibit B to the Covered Software; or
+
+    (b) that the Covered Software was made available under the terms of
+        version 1.1 or earlier of the License, but not also under the
+        terms of a Secondary License.
+
+1.6. "Executable Form"
+    means any form of the work other than Source Code Form.
+
+1.7. "Larger Work"
+    means a work that combines Covered Software with other material, in
+    a separate file or files, that is not Covered Software.
+
+1.8. "License"
+    means this document.
+
+1.9. "Licensable"
+    means having the right to grant, to the maximum extent possible,
+    whether at the time of the initial grant or subsequently, any and
+    all of the rights conveyed by this License.
+
+1.10. "Modifications"
+    means any of the following:
+
+    (a) any file in Source Code Form that results from an addition to,
+        deletion from, or modification of the contents of Covered
+        Software; or
+
+    (b) any new file in Source Code Form that contains any Covered
+        Software.
+
+1.11. "Patent Claims" of a Contributor
+    means any patent claim(s), including without limitation, method,
+    process, and apparatus claims, in any patent Licensable by such
+    Contributor that would be infringed, but for the grant of the
+    License, by the making, using, selling, offering for sale, having
+    made, import, or transfer of either its Contributions or its
+    Contributor Version.
+
+1.12. "Secondary License"
+    means either the GNU General Public License, Version 2.0, the GNU
+    Lesser General Public License, Version 2.1, the GNU Affero General
+    Public License, Version 3.0, or any later versions of those
+    licenses.
+
+1.13. "Source Code Form"
+    means the form of the work preferred for making modifications.
+
+1.14. "You" (or "Your")
+    means an individual or a legal entity exercising rights under this
+    License. For legal entities, "You" includes any entity that
+    controls, is controlled by, or is under common control with You. For
+    purposes of this definition, "control" means (a) the power, direct
+    or indirect, to cause the direction or management of such entity,
+    whether by contract or otherwise, or (b) ownership of more than
+    fifty percent (50%) of the outstanding shares or beneficial
+    ownership of such entity.
+
+2. License Grants and Conditions
+--------------------------------
+
+2.1. Grants
+
+Each Contributor hereby grants You a world-wide, royalty-free,
+non-exclusive license:
+
+(a) under intellectual property rights (other than patent or trademark)
+    Licensable by such Contributor to use, reproduce, make available,
+    modify, display, perform, distribute, and otherwise exploit its
+    Contributions, either on an unmodified basis, with Modifications, or
+    as part of a Larger Work; and
+
+(b) under Patent Claims of such Contributor to make, use, sell, offer
+    for sale, have made, import, and otherwise transfer either its
+    Contributions or its Contributor Version.
+
+2.2. Effective Date
+
+The licenses granted in Section 2.1 with respect to any Contribution
+become effective for each Contribution on the date the Contributor first
+distributes such Contribution.
+
+2.3. Limitations on Grant Scope
+
+The licenses granted in this Section 2 are the only rights granted under
+this License. No additional rights or licenses will be implied from the
+distribution or licensing of Covered Software under this License.
+Notwithstanding Section 2.1(b) above, no patent license is granted by a
+Contributor:
+
+(a) for any code that a Contributor has removed from Covered Software;
+    or
+
+(b) for infringements caused by: (i) Your and any other third party's
+    modifications of Covered Software, or (ii) the combination of its
+    Contributions with other software (except as part of its Contributor
+    Version); or
+
+(c) under Patent Claims infringed by Covered Software in the absence of
+    its Contributions.
+
+This License does not grant any rights in the trademarks, service marks,
+or logos of any Contributor (except as may be necessary to comply with
+the notice requirements in Section 3.4).
+
+2.4. Subsequent Licenses
+
+No Contributor makes additional grants as a result of Your choice to
+distribute the Covered Software under a subsequent version of this
+License (see Section 10.2) or under the terms of a Secondary License (if
+permitted under the terms of Section 3.3).
+
+2.5. Representation
+
+Each Contributor represents that the Contributor believes its
+Contributions are its original creation(s) or it has sufficient rights
+to grant the rights to its Contributions conveyed by this License.
+
+2.6. Fair Use
+
+This License is not intended to limit any rights You have under
+applicable copyright doctrines of fair use, fair dealing, or other
+equivalents.
+
+2.7. Conditions
+
+Sections 3.1, 3.2, 3.3, and 3.4 are conditions of the licenses granted
+in Section 2.1.
+
+3. Responsibilities
+-------------------
+
+3.1. Distribution of Source Form
+
+All distribution of Covered Software in Source Code Form, including any
+Modifications that You create or to which You contribute, must be under
+the terms of this License. You must inform recipients that the Source
+Code Form of the Covered Software is governed by the terms of this
+License, and how they can obtain a copy of this License. You may not
+attempt to alter or restrict the recipients' rights in the Source Code
+Form.
+
+3.2. Distribution of Executable Form
+
+If You distribute Covered Software in Executable Form then:
+
+(a) such Covered Software must also be made available in Source Code
+    Form, as described in Section 3.1, and You must inform recipients of
+    the Executable Form how they can obtain a copy of such Source Code
+    Form by reasonable means in a timely manner, at a charge no more
+    than the cost of distribution to the recipient; and
+
+(b) You may distribute such Executable Form under the terms of this
+    License, or sublicense it under different terms, provided that the
+    license for the Executable Form does not attempt to limit or alter
+    the recipients' rights in the Source Code Form under this License.
+
+3.3. Distribution of a Larger Work
+
+You may create and distribute a Larger Work under terms of Your choice,
+provided that You also comply with the requirements of this License for
+the Covered Software. If the Larger Work is a combination of Covered
+Software with a work governed by one or more Secondary Licenses, and the
+Covered Software is not Incompatible With Secondary Licenses, this
+License permits You to additionally distribute such Covered Software
+under the terms of such Secondary License(s), so that the recipient of
+the Larger Work may, at their option, further distribute the Covered
+Software under the terms of either this License or such Secondary
+License(s).
+
+3.4. Notices
+
+You may not remove or alter the substance of any license notices
+(including copyright notices, patent notices, disclaimers of warranty,
+or limitations of liability) contained within the Source Code Form of
+the Covered Software, except that You may alter any license notices to
+the extent required to remedy known factual inaccuracies.
+
+3.5. Application of Additional Terms
+
+You may choose to offer, and to charge a fee for, warranty, support,
+indemnity or liability obligations to one or more recipients of Covered
+Software. However, You may do so only on Your own behalf, and not on
+behalf of any Contributor. You must make it absolutely clear that any
+such warranty, support, indemnity, or liability obligation is offered by
+You alone, and You hereby agree to indemnify every Contributor for any
+liability incurred by such Contributor as a result of warranty, support,
+indemnity or liability terms You offer. You may include additional
+disclaimers of warranty and limitations of liability specific to any
+jurisdiction.
+
+4. Inability to Comply Due to Statute or Regulation
+---------------------------------------------------
+
+If it is impossible for You to comply with any of the terms of this
+License with respect to some or all of the Covered Software due to
+statute, judicial order, or regulation then You must: (a) comply with
+the terms of this License to the maximum extent possible; and (b)
+describe the limitations and the code they affect. Such description must
+be placed in a text file included with all distributions of the Covered
+Software under this License. Except to the extent prohibited by statute
+or regulation, such description must be sufficiently detailed for a
+recipient of ordinary skill to be able to understand it.
+
+5. Termination
+--------------
+
+5.1. The rights granted under this License will terminate automatically
+if You fail to comply with any of its terms. However, if You become
+compliant, then the rights granted under this License from a particular
+Contributor are reinstated (a) provisionally, unless and until such
+Contributor explicitly and finally terminates Your grants, and (b) on an
+ongoing basis, if such Contributor fails to notify You of the
+non-compliance by some reasonable means prior to 60 days after You have
+come back into compliance. Moreover, Your grants from a particular
+Contributor are reinstated on an ongoing basis if such Contributor
+notifies You of the non-compliance by some reasonable means, this is the
+first time You have received notice of non-compliance with this License
+from such Contributor, and You become compliant prior to 30 days after
+Your receipt of the notice.
+
+5.2. If You initiate litigation against any entity by asserting a patent
+infringement claim (excluding declaratory judgment actions,
+counter-claims, and cross-claims) alleging that a Contributor Version
+directly or indirectly infringes any patent, then the rights granted to
+You by any and all Contributors for the Covered Software under Section
+2.1 of this License shall terminate.
+
+5.3. In the event of termination under Sections 5.1 or 5.2 above, all
+end user license agreements (excluding distributors and resellers) which
+have been validly granted by You or Your distributors under this License
+prior to termination shall survive termination.
+
+************************************************************************
+*                                                                      *
+*  6. Disclaimer of Warranty                                           *
+*  -------------------------                                           *
+*                                                                      *
+*  Covered Software is provided under this License on an "as is"       *
+*  basis, without warranty of any kind, either expressed, implied, or  *
+*  statutory, including, without limitation, warranties that the       *
+*  Covered Software is free of defects, merchantable, fit for a        *
+*  particular purpose or non-infringing. The entire risk as to the     *
+*  quality and performance of the Covered Software is with You.        *
+*  Should any Covered Software prove defective in any respect, You     *
+*  (not any Contributor) assume the cost of any necessary servicing,   *
+*  repair, or correction. This disclaimer of warranty constitutes an   *
+*  essential part of this License. No use of any Covered Software is   *
+*  authorized under this License except under this disclaimer.         *
+*                                                                      *
+************************************************************************
+
+************************************************************************
+*                                                                      *
+*  7. Limitation of Liability                                          *
+*  --------------------------                                          *
+*                                                                      *
+*  Under no circumstances and under no legal theory, whether tort      *
+*  (including negligence), contract, or otherwise, shall any           *
+*  Contributor, or anyone who distributes Covered Software as          *
+*  permitted above, be liable to You for any direct, indirect,         *
+*  special, incidental, or consequential damages of any character      *
+*  including, without limitation, damages for lost profits, loss of    *
+*  goodwill, work stoppage, computer failure or malfunction, or any    *
+*  and all other commercial damages or losses, even if such party      *
+*  shall have been informed of the possibility of such damages. This   *
+*  limitation of liability shall not apply to liability for death or   *
+*  personal injury resulting from such party's negligence to the       *
+*  extent applicable law prohibits such limitation. Some               *
+*  jurisdictions do not allow the exclusion or limitation of           *
+*  incidental or consequential damages, so this exclusion and          *
+*  limitation may not apply to You.                                    *
+*                                                                      *
+************************************************************************
+
+8. Litigation
+-------------
+
+Any litigation relating to this License may be brought only in the
+courts of a jurisdiction where the defendant maintains its principal
+place of business and such litigation shall be governed by laws of that
+jurisdiction, without reference to its conflict-of-law provisions.
+Nothing in this Section shall prevent a party's ability to bring
+cross-claims or counter-claims.
+
+9. Miscellaneous
+----------------
+
+This License represents the complete agreement concerning the subject
+matter hereof. If any provision of this License is held to be
+unenforceable, such provision shall be reformed only to the extent
+necessary to make it enforceable. Any law or regulation which provides
+that the language of a contract shall be construed against the drafter
+shall not be used to construe this License against a Contributor.
+
+10. Versions of the License
+---------------------------
+
+10.1. New Versions
+
+Mozilla Foundation is the license steward. Except as provided in Section
+10.3, no one other than the license steward has the right to modify or
+publish new versions of this License. Each version will be given a
+distinguishing version number.
+
+10.2. Effect of New Versions
+
+You may distribute the Covered Software under the terms of the version
+of the License under which You originally received the Covered Software,
+or under the terms of any subsequent version published by the license
+steward.
+
+10.3. Modified Versions
+
+If you create software not governed by this License, and you want to
+create a new license for such software, you may create and use a
+modified version of this License if you rename the license and remove
+any references to the name of the license steward (except to note that
+such modified license differs from this License).
+
+10.4. Distributing Source Code Form that is Incompatible With Secondary
+Licenses
+
+If You choose to distribute Source Code Form that is Incompatible With
+Secondary Licenses under the terms of this version of the License, the
+notice described in Exhibit B of this License must be attached.
+
+Exhibit A - Source Code Form License Notice
+-------------------------------------------
+
+This Source Code Form is subject to the terms of the Mozilla Public
+License, v. 2.0. If a copy of the MPL was not distributed with this
+file, You can obtain one at http://mozilla.org/MPL/2.0/.
+
+If it is not possible or desirable to put the notice in a particular
+file, then You may include the notice in a location (such as a LICENSE
+file in a relevant directory) where a recipient would be likely to look
+for such a notice.
+
+You may add additional accurate notices of copyright ownership.
+
+Exhibit B - "Incompatible With Secondary Licenses" Notice
+---------------------------------------------------------
+
+This Source Code Form is "Incompatible With Secondary Licenses", as
+defined by the Mozilla Public License, v. 2.0.
TTS/MANIFEST.in
ADDED
@@ -0,0 +1,15 @@
+include README.md
+include LICENSE.txt
+include requirements.*.txt
+include *.cff
+include requirements.txt
+include TTS/VERSION
+recursive-include TTS *.json
+recursive-include TTS *.html
+recursive-include TTS *.png
+recursive-include TTS *.md
+recursive-include TTS *.py
+recursive-include TTS *.pyx
+recursive-include images *.png
+recursive-exclude tests *
+prune tests*
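These rules decide what lands in the source distribution, so a quick way to validate them is to build an sdist and list its contents (assumes the third-party `build` package):

```bash
$ pip install build
$ python -m build --sdist
# spot-check: README, LICENSE and TTS/ should be in; tests/ should be pruned
$ tar -tzf dist/*.tar.gz | head
```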
TTS/Makefile
ADDED
@@ -0,0 +1,78 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
.DEFAULT_GOAL := help
.PHONY: test system-deps dev-deps deps style lint install help docs

help:
	@grep -E '^[a-zA-Z_-]+:.*?## .*$$' $(MAKEFILE_LIST) | sort | awk 'BEGIN {FS = ":.*?## "}; {printf "\033[36m%-30s\033[0m %s\n", $$1, $$2}'

target_dirs := tests TTS notebooks recipes

test_all:	## run tests and don't stop on an error.
	nose2 --with-coverage --coverage TTS tests
	./run_bash_tests.sh

test:	## run tests.
	nose2 -F -v -B --with-coverage --coverage TTS tests

test_vocoder:	## run vocoder tests.
	nose2 -F -v -B --with-coverage --coverage TTS tests.vocoder_tests

test_tts:	## run tts tests.
	nose2 -F -v -B --with-coverage --coverage TTS tests.tts_tests

test_tts2:	## run tts2 tests.
	nose2 -F -v -B --with-coverage --coverage TTS tests.tts_tests2

test_aux:	## run aux tests.
	nose2 -F -v -B --with-coverage --coverage TTS tests.aux_tests
	./run_bash_tests.sh

test_zoo:	## run zoo tests.
	nose2 -F -v -B --with-coverage --coverage TTS tests.zoo_tests

inference_tests:	## run inference tests.
	nose2 -F -v -B --with-coverage --coverage TTS tests.inference_tests

api_tests:	## run api tests.
	nose2 -F -v -B --with-coverage --coverage TTS tests.api_tests

data_tests:	## run data tests.
	nose2 -F -v -B --with-coverage --coverage TTS tests.data_tests

test_text:	## run text tests.
	nose2 -F -v -B --with-coverage --coverage TTS tests.text_tests

test_failed:	## only run the tests that failed last time.
	nose2 -F -v -B --with-coverage --coverage TTS tests

style:	## update code style.
	black ${target_dirs}
	isort ${target_dirs}

lint:	## run pylint linter.
	pylint ${target_dirs}
	black ${target_dirs} --check
	isort ${target_dirs} --check-only

system-deps:	## install linux system deps
	sudo apt-get install -y libsndfile1-dev

dev-deps:	## install development deps
	pip install -r requirements.dev.txt

doc-deps:	## install docs dependencies
	pip install -r docs/requirements.txt

build-docs:	## build the docs
	cd docs && make clean && make build

hub-deps:	## install deps for torch hub use
	pip install -r requirements.hub.txt

deps:	## install 🐸 requirements.
	pip install -r requirements.txt

install:	## install 🐸 TTS for development.
	pip install -e .[all]

docs:	## build the docs
	$(MAKE) -C docs clean && $(MAKE) -C docs html
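
The `help` target above is the usual self-documenting Makefile pattern: every target carrying a trailing `## description` comment is picked up by grep and pretty-printed by awk. A minimal Python sketch of the same extraction, assuming it is run from the repository root next to the Makefile:

```python
# Sketch: reproduce `make help` by scanning the Makefile for
# `target: ... ## description` lines and printing them in two columns.
import re

pattern = re.compile(r"^([a-zA-Z_-]+):.*?## (.*)$")
with open("Makefile", encoding="utf-8") as makefile:
    for line in makefile:
        match = pattern.match(line)
        if match:
            print(f"{match.group(1):<30} {match.group(2)}")
```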
TTS/README.md
ADDED
@@ -0,0 +1,431 @@
## 🐸Coqui.ai News
- 📣 ⓍTTS, our production TTS model that can speak 13 languages, is released [Blog Post](https://coqui.ai/blog/tts/open_xtts), [Demo](https://huggingface.co/spaces/coqui/xtts), [Docs](https://tts.readthedocs.io/en/dev/models/xtts.html)
- 📣 [🐶Bark](https://github.com/suno-ai/bark) is now available for inference with unconstrained voice cloning. [Docs](https://tts.readthedocs.io/en/dev/models/bark.html)
- 📣 You can use [~1100 Fairseq models](https://github.com/facebookresearch/fairseq/tree/main/examples/mms) with 🐸TTS.
- 📣 🐸TTS now supports 🐢Tortoise with faster inference. [Docs](https://tts.readthedocs.io/en/dev/models/tortoise.html)
- 📣 The **Coqui Studio API** has landed in 🐸TTS. - [Example](https://github.com/coqui-ai/TTS/blob/dev/README.md#-python-api)
- 📣 The [**Coqui Studio API**](https://docs.coqui.ai/docs) is live.
- 📣 Voice generation with prompts - **Prompt to Voice** - is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin)! - [Blog Post](https://coqui.ai/blog/tts/prompt-to-voice)
- 📣 Voice generation with fusion - **Voice fusion** - is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin).
- 📣 Voice cloning is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin).

<div align="center">
<img src="https://static.scarf.sh/a.png?x-pxid=cf317fe7-2188-4721-bc01-124bb5d5dbb2" />

## <img src="https://raw.githubusercontent.com/coqui-ai/TTS/main/images/coqui-log-green-TTS.png" height="56"/>


**🐸TTS is a library for advanced Text-to-Speech generation.**

🚀 Pretrained models in over 1100 languages.

🛠️ Tools for training new models and fine-tuning existing models in any language.

📚 Utilities for dataset analysis and curation.
______________________________________________________________________

[![Discord](https://img.shields.io/discord/1037326658807533628?color=%239B59B6&label=chat%20on%20discord)](https://discord.gg/5eXr5seRrv)
[![License](<https://img.shields.io/badge/License-MPL%202.0-brightgreen.svg>)](https://opensource.org/licenses/MPL-2.0)
[![PyPI version](https://badge.fury.io/py/TTS.svg)](https://badge.fury.io/py/TTS)
[![Covenant](https://camo.githubusercontent.com/7d620efaa3eac1c5b060ece5d6aacfcc8b81a74a04d05cd0398689c01c4463bb/68747470733a2f2f696d672e736869656c64732e696f2f62616467652f436f6e7472696275746f72253230436f76656e616e742d76322e3025323061646f707465642d6666363962342e737667)](https://github.com/coqui-ai/TTS/blob/master/CODE_OF_CONDUCT.md)
[![Downloads](https://pepy.tech/badge/tts)](https://pepy.tech/project/tts)
[![DOI](https://zenodo.org/badge/265612440.svg)](https://zenodo.org/badge/latestdoi/265612440)

![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/aux_tests.yml/badge.svg)
![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/data_tests.yml/badge.svg)
![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/docker.yaml/badge.svg)
![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/inference_tests.yml/badge.svg)
![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/style_check.yml/badge.svg)
![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/text_tests.yml/badge.svg)
![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/tts_tests.yml/badge.svg)
![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/vocoder_tests.yml/badge.svg)
![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/zoo_tests0.yml/badge.svg)
![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/zoo_tests1.yml/badge.svg)
![GithubActions](https://github.com/coqui-ai/TTS/actions/workflows/zoo_tests2.yml/badge.svg)
[![Docs](<https://readthedocs.org/projects/tts/badge/?version=latest&style=plastic>)](https://tts.readthedocs.io/en/latest/)

</div>

______________________________________________________________________

## 💬 Where to ask questions
Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it.

| Type                            | Platforms                         |
| ------------------------------- | --------------------------------- |
| 🚨 **Bug Reports**              | [GitHub Issue Tracker]            |
| 🎁 **Feature Requests & Ideas** | [GitHub Issue Tracker]            |
| 👩‍💻 **Usage Questions**          | [GitHub Discussions]              |
| 🗯 **General Discussion**       | [GitHub Discussions] or [Discord] |

[github issue tracker]: https://github.com/coqui-ai/tts/issues
[github discussions]: https://github.com/coqui-ai/TTS/discussions
[discord]: https://discord.gg/5eXr5seRrv
[Tutorials and Examples]: https://github.com/coqui-ai/TTS/wiki/TTS-Notebooks-and-Tutorials


## 🔗 Links and Resources
| Type                   | Links                                                                        |
| ---------------------- | ---------------------------------------------------------------------------- |
| 💼 **Documentation**   | [ReadTheDocs](https://tts.readthedocs.io/en/latest/)                          |
| 💾 **Installation**    | [TTS/README.md](https://github.com/coqui-ai/TTS/tree/dev#install-tts)         |
| 👩‍💻 **Contributing**    | [CONTRIBUTING.md](https://github.com/coqui-ai/TTS/blob/main/CONTRIBUTING.md)  |
| 📌 **Road Map**        | [Main Development Plans](https://github.com/coqui-ai/TTS/issues/378)          |
| 🚀 **Released Models** | [TTS Releases](https://github.com/coqui-ai/TTS/releases) and [Experimental Models](https://github.com/coqui-ai/TTS/wiki/Experimental-Released-Models) |
| 📰 **Papers**          | [TTS Papers](https://github.com/erogol/TTS-papers)                            |


## 🥇 TTS Performance
<p align="center"><img src="https://raw.githubusercontent.com/coqui-ai/TTS/main/images/TTS-performance.png" width="800" /></p>

Underlined "TTS*" and "Judy*" are **internal** 🐸TTS models that are not released open-source. They are shown here to demonstrate the models' potential. Models prefixed with a dot (.Jofish, .Abe and .Janice) are real human voices.

## Features
- High-performance Deep Learning models for Text2Speech tasks.
- Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech).
- Speaker Encoder to compute speaker embeddings efficiently.
- Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN).
- Fast and efficient model training.
- Detailed training logs on the terminal and Tensorboard.
- Support for multi-speaker TTS.
- Efficient, flexible, lightweight but feature-complete `Trainer API`.
- Released and ready-to-use models.
- Tools to curate Text2Speech datasets under `dataset_analysis`.
- Utilities to use and test your models.
- Modular (but not too much) code base enabling easy implementation of new ideas.

## Model Implementations
### Spectrogram models
- Tacotron: [paper](https://arxiv.org/abs/1703.10135)
- Tacotron2: [paper](https://arxiv.org/abs/1712.05884)
- Glow-TTS: [paper](https://arxiv.org/abs/2005.11129)
- Speedy-Speech: [paper](https://arxiv.org/abs/2008.03802)
- Align-TTS: [paper](https://arxiv.org/abs/2003.01950)
- FastPitch: [paper](https://arxiv.org/pdf/2006.06873.pdf)
- FastSpeech: [paper](https://arxiv.org/abs/1905.09263)
- FastSpeech2: [paper](https://arxiv.org/abs/2006.04558)
- SC-GlowTTS: [paper](https://arxiv.org/abs/2104.05557)
- Capacitron: [paper](https://arxiv.org/abs/1906.03402)
- OverFlow: [paper](https://arxiv.org/abs/2211.06892)
- Neural HMM TTS: [paper](https://arxiv.org/abs/2108.13320)
- Delightful TTS: [paper](https://arxiv.org/abs/2110.12612)

### End-to-End Models
- ⓍTTS: [blog](https://coqui.ai/blog/tts/open_xtts)
- VITS: [paper](https://arxiv.org/pdf/2106.06103)
- 🐸 YourTTS: [paper](https://arxiv.org/abs/2112.02418)
- 🐢 Tortoise: [orig. repo](https://github.com/neonbjb/tortoise-tts)
- 🐶 Bark: [orig. repo](https://github.com/suno-ai/bark)

### Attention Methods
- Guided Attention: [paper](https://arxiv.org/abs/1710.08969)
- Forward Backward Decoding: [paper](https://arxiv.org/abs/1907.09006)
- Graves Attention: [paper](https://arxiv.org/abs/1910.10288)
- Double Decoder Consistency: [blog](https://erogol.com/solving-attention-problems-of-tts-models-with-double-decoder-consistency/)
- Dynamic Convolutional Attention: [paper](https://arxiv.org/pdf/1910.10288.pdf)
- Alignment Network: [paper](https://arxiv.org/abs/2108.10447)

### Speaker Encoder
- GE2E: [paper](https://arxiv.org/abs/1710.10467)
- Angular Loss: [paper](https://arxiv.org/pdf/2003.11982.pdf)

### Vocoders
- MelGAN: [paper](https://arxiv.org/abs/1910.06711)
- MultiBandMelGAN: [paper](https://arxiv.org/abs/2005.05106)
- ParallelWaveGAN: [paper](https://arxiv.org/abs/1910.11480)
- GAN-TTS discriminators: [paper](https://arxiv.org/abs/1909.11646)
- WaveRNN: [origin](https://github.com/fatchord/WaveRNN/)
- WaveGrad: [paper](https://arxiv.org/abs/2009.00713)
- HiFiGAN: [paper](https://arxiv.org/abs/2010.05646)
- UnivNet: [paper](https://arxiv.org/abs/2106.07889)

### Voice Conversion
- FreeVC: [paper](https://arxiv.org/abs/2210.15418)

You can also help us implement more models.

## Installation
🐸TTS is tested on Ubuntu 18.04 with **python >= 3.7, < 3.11**.

If you are only interested in [synthesizing speech](https://tts.readthedocs.io/en/latest/inference.html) with the released 🐸TTS models, installing from PyPI is the easiest option.

```bash
pip install TTS
```

If you plan to code or train models, clone 🐸TTS and install it locally.

```bash
git clone https://github.com/coqui-ai/TTS
pip install -e .[all,dev,notebooks]  # Select the relevant extras
```

If you are on Ubuntu (Debian), you can also run the following commands for installation.

```bash
$ make system-deps  # intended to be used on Ubuntu (Debian). Let us know if you have a different OS.
$ make install
```

If you are on Windows, 👑@GuyPaddock wrote installation instructions [here](https://stackoverflow.com/questions/66726331/how-can-i-run-mozilla-tts-coqui-tts-training-with-cuda-on-a-windows-system).


## Docker Image
You can also try TTS without installing it by using the Docker image.
Simply run the following commands to start a container and use the TTS server inside it.

```bash
docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts-cpu
python3 TTS/server/server.py --list_models  # To get the list of available models
python3 TTS/server/server.py --model_name tts_models/en/vctk/vits  # To start a server
```

You can then enjoy the TTS server [here](http://[::1]:5002/).
More details about the Docker images (like GPU support) can be found [here](https://tts.readthedocs.io/en/latest/docker_images.html).


## Synthesizing speech with 🐸TTS

### 🐍 Python API

#### Running a multi-speaker and multi-lingual model

```python
import torch
from TTS.api import TTS

# Get device
device = "cuda" if torch.cuda.is_available() else "cpu"

# List available 🐸TTS models and choose the first one
model_name = TTS().list_models()[0]
# Init TTS
tts = TTS(model_name).to(device)

# Run TTS
# ❗ Since this model is multi-speaker and multi-lingual, we must set the target speaker and the language
# Text to speech with a numpy output
wav = tts.tts("This is a test! This is also a test!!", speaker=tts.speakers[0], language=tts.languages[0])
# Text to speech to a file
tts.tts_to_file(text="Hello world!", speaker=tts.speakers[0], language=tts.languages[0], file_path="output.wav")
```

#### Running a single speaker model

```python
# Init TTS with the target model name
tts = TTS(model_name="tts_models/de/thorsten/tacotron2-DDC", progress_bar=False).to(device)

# Run TTS
tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path=OUTPUT_PATH)

# Example voice cloning with YourTTS in English, French and Portuguese
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False).to(device)
tts.tts_to_file("This is voice cloning.", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav")
tts.tts_to_file("C'est le clonage de la voix.", speaker_wav="my/cloning/audio.wav", language="fr-fr", file_path="output.wav")
tts.tts_to_file("Isso é clonagem de voz.", speaker_wav="my/cloning/audio.wav", language="pt-br", file_path="output.wav")
```

#### Example voice conversion

Converting the voice in `source_wav` to the voice of `target_wav`:

```python
tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=False).to("cuda")
tts.voice_conversion_to_file(source_wav="my/source.wav", target_wav="my/target.wav", file_path="output.wav")
```

#### Example voice cloning together with the voice conversion model
This way, you can clone voices by using any model in 🐸TTS.

```python
tts = TTS("tts_models/de/thorsten/tacotron2-DDC")
tts.tts_with_vc_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    speaker_wav="target/speaker.wav",
    file_path="output.wav"
)
```

#### Example using [🐸Coqui Studio](https://coqui.ai) voices
You can access all of your cloned voices and built-in speakers in [🐸Coqui Studio](https://coqui.ai).
To do this, you'll need an API token, which you can obtain from the [account page](https://coqui.ai/account).
After obtaining the API token, you'll need to set the COQUI_STUDIO_TOKEN environment variable.

Once you have a valid API token in place, the studio speakers will be displayed as distinct models within the model list.
These models follow the naming convention `coqui_studio/en/<studio_speaker_name>/coqui_studio`.
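
For example, the token can be set from Python before any studio model is used; a minimal sketch (the token string is a placeholder for the one from your account page):

```python
import os

# Placeholder token; replace it with the API token from https://coqui.ai/account.
os.environ["COQUI_STUDIO_TOKEN"] = "<your_api_token>"

from TTS.api import TTS  # with the token set, studio speakers appear in the model list
```

With the token in place, the snippets below list and use the studio models.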
```python
# XTTS model
models = TTS(cs_api_model="XTTS").list_models()
# Init TTS with the target studio speaker
tts = TTS(model_name="coqui_studio/en/Torcull Diarmuid/coqui_studio", progress_bar=False)
# Run TTS
tts.tts_to_file(text="This is a test.", file_path=OUTPUT_PATH)

# V1 model
models = TTS(cs_api_model="V1").list_models()
# Run TTS with emotion and speed control
# Emotion control only works with the V1 model
tts.tts_to_file(text="This is a test.", file_path=OUTPUT_PATH, emotion="Happy", speed=1.5)

# XTTS-multilingual
models = TTS(cs_api_model="XTTS-multilingual").list_models()
# Run TTS with language and speed control
# (emotion control only works with the V1 model)
tts.tts_to_file(text="Das ist ein Test.", file_path=OUTPUT_PATH, language="de", speed=1.0)
```

#### Example text to speech using **Fairseq models in ~1100 languages** 🤯
For Fairseq models, use the following name format: `tts_models/<lang-iso_code>/fairseq/vits`.
You can find the language ISO codes [here](https://dl.fbaipublicfiles.com/mms/tts/all-tts-languages.html)
and learn about the Fairseq models [here](https://github.com/facebookresearch/fairseq/tree/main/examples/mms).

```python
# TTS with on-the-fly voice conversion
api = TTS("tts_models/deu/fairseq/vits")
api.tts_with_vc_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    speaker_wav="target/speaker.wav",
    file_path="output.wav"
)
```

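The same name format also works for plain synthesis without voice conversion; a minimal sketch using the German (`deu`) model from above (the text and output path are placeholders):

```python
from TTS.api import TTS

# German VITS model from the Fairseq/MMS collection.
api = TTS("tts_models/deu/fairseq/vits")
api.tts_to_file("Hallo, dies ist ein Test.", file_path="fairseq_output.wav")
```
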
### Command-line `tts`

<!-- begin-tts-readme -->

Synthesize speech on the command line.

You can either use your trained model or choose a model from the provided list.

If you don't specify any model, the default LJSpeech-based English model is used.

#### Single Speaker Models

- List provided models:

```
$ tts --list_models
```

- Get model info (for both tts_models and vocoder_models):

  - Query by type/name:
    `--model_info_by_name` uses the model name as it appears in the `--list_models` output.
```
$ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
```
For example:
```
$ tts --model_info_by_name tts_models/tr/common-voice/glow-tts
$ tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2
```
  - Query by type/idx:
    `--model_info_by_idx` uses the corresponding index from the `--list_models` output.

```
$ tts --model_info_by_idx "<model_type>/<model_query_idx>"
```

For example:

```
$ tts --model_info_by_idx tts_models/3
```

- Query model info by full name:
```
$ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
```

- Run TTS with the default models:

```
$ tts --text "Text for TTS" --out_path output/path/speech.wav
```

- Run a TTS model with its default vocoder model:

```
$ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav
```

For example:

```
$ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --out_path output/path/speech.wav
```

- Run with specific TTS and vocoder models from the list:

```
$ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --vocoder_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav
```

For example:

```
$ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --vocoder_name "vocoder_models/en/ljspeech/univnet" --out_path output/path/speech.wav
```

- Run your own TTS model (using the Griffin-Lim vocoder):

```
$ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav
```

- Run your own TTS and vocoder models:

```
$ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav
    --vocoder_path path/to/vocoder.pth --vocoder_config_path path/to/vocoder_config.json
```

#### Multi-speaker Models

- List the available speakers and choose a <speaker_id> among them:

```
$ tts --model_name "<language>/<dataset>/<model_name>" --list_speaker_idxs
```

- Run the multi-speaker TTS model with the target speaker ID:

```
$ tts --text "Text for TTS." --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" --speaker_idx <speaker_id>
```

- Run your own multi-speaker TTS model:

```
$ tts --text "Text for TTS" --out_path output/path/speech.wav --model_path path/to/model.pth --config_path path/to/config.json --speakers_file_path path/to/speaker.json --speaker_idx <speaker_id>
```

### Voice Conversion Models

```
$ tts --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" --source_wav <path/to/speaker/wav> --target_wav <path/to/reference/wav>
```

<!-- end-tts-readme -->

## Directory Structure
```
|- notebooks/       (Jupyter Notebooks for model evaluation, parameter selection and data analysis.)
|- utils/           (common utilities.)
|- TTS
    |- bin/             (folder for all the executables.)
      |- train*.py                  (train your target model.)
      |- ...
    |- tts/             (text to speech models)
        |- layers/          (model layer definitions)
        |- models/          (model definitions)
        |- utils/           (model specific utilities.)
    |- speaker_encoder/ (Speaker Encoder models.)
        |- (same)
    |- vocoder/         (Vocoder models.)
        |- (same)
```
TTS/TTS/.models.json
ADDED
@@ -0,0 +1,920 @@
{
    "tts_models": {
        "multilingual": {
            "multi-dataset": {
                "xtts_v1": {
                    "description": "XTTS-v1 by Coqui with 13 languages and cross-language voice cloning.",
                    "hf_url": [
                        "https://coqui.gateway.scarf.sh/hf-coqui/XTTS-v1/model.pth",
                        "https://coqui.gateway.scarf.sh/hf-coqui/XTTS-v1/config.json",
                        "https://coqui.gateway.scarf.sh/hf-coqui/XTTS-v1/vocab.json"
                    ],
                    "default_vocoder": null,
                    "commit": "e9a1953e",
                    "license": "CPML",
                    "contact": "info@coqui.ai",
                    "tos_required": true
                },
                "your_tts": {
                    "description": "Your TTS model accompanying the paper https://arxiv.org/abs/2112.02418",
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.10.1_models/tts_models--multilingual--multi-dataset--your_tts.zip",
                    "default_vocoder": null,
                    "commit": "e9a1953e",
                    "license": "CC BY-NC-ND 4.0",
                    "contact": "egolge@coqui.ai"
                },
                "bark": {
                    "description": "🐶 Bark TTS model released by suno-ai. You can find the original implementation in https://github.com/suno-ai/bark.",
                    "hf_url": [
                        "https://coqui.gateway.scarf.sh/hf/bark/coarse_2.pt",
                        "https://coqui.gateway.scarf.sh/hf/bark/fine_2.pt",
                        "https://app.coqui.ai/tts_model/text_2.pt",
                        "https://coqui.gateway.scarf.sh/hf/bark/config.json",
                        "https://coqui.gateway.scarf.sh/hf/bark/hubert.pt",
                        "https://coqui.gateway.scarf.sh/hf/bark/tokenizer.pth"
                    ],
                    "default_vocoder": null,
                    "commit": "e9a1953e",
                    "license": "MIT",
                    "contact": "https://www.suno.ai/"
                }
            }
        },
        "bg": {
            "cv": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--bg--cv--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "cs": {
            "cv": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--cs--cv--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "da": {
            "cv": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--da--cv--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "et": {
            "cv": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--et--cv--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "ga": {
            "cv": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--ga--cv--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "en": {
            "ek1": {
                "tacotron2": {
                    "description": "EK1 en-rp tacotron2 by NMStoker",
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--en--ek1--tacotron2.zip",
                    "default_vocoder": "vocoder_models/en/ek1/wavegrad",
                    "commit": "c802255",
                    "license": "apache 2.0"
                }
            },
            "ljspeech": {
                "tacotron2-DDC": {
                    "description": "Tacotron2 with Double Decoder Consistency.",
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--en--ljspeech--tacotron2-DDC.zip",
                    "default_vocoder": "vocoder_models/en/ljspeech/hifigan_v2",
                    "commit": "bae2ad0f",
                    "author": "Eren Gölge @erogol",
                    "license": "apache 2.0",
                    "contact": "egolge@coqui.com"
                },
                "tacotron2-DDC_ph": {
                    "description": "Tacotron2 with Double Decoder Consistency with phonemes.",
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--en--ljspeech--tacotron2-DDC_ph.zip",
                    "default_vocoder": "vocoder_models/en/ljspeech/univnet",
                    "commit": "3900448",
                    "author": "Eren Gölge @erogol",
                    "license": "apache 2.0",
                    "contact": "egolge@coqui.com"
                },
                "glow-tts": {
                    "description": "",
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--en--ljspeech--glow-tts.zip",
                    "stats_file": null,
                    "default_vocoder": "vocoder_models/en/ljspeech/multiband-melgan",
                    "commit": "",
                    "author": "Eren Gölge @erogol",
                    "license": "MPL",
                    "contact": "egolge@coqui.com"
                },
                "speedy-speech": {
                    "description": "Speedy Speech model trained on LJSpeech dataset using the Alignment Network for learning the durations.",
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--en--ljspeech--speedy-speech.zip",
                    "stats_file": null,
                    "default_vocoder": "vocoder_models/en/ljspeech/hifigan_v2",
                    "commit": "4581e3d",
                    "author": "Eren Gölge @erogol",
                    "license": "apache 2.0",
                    "contact": "egolge@coqui.com"
                },
                "tacotron2-DCA": {
                    "description": "",
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--en--ljspeech--tacotron2-DCA.zip",
                    "default_vocoder": "vocoder_models/en/ljspeech/multiband-melgan",
                    "commit": "",
                    "author": "Eren Gölge @erogol",
                    "license": "MPL",
                    "contact": "egolge@coqui.com"
                },
                "vits": {
                    "description": "VITS is an End2End TTS model trained on LJSpeech dataset with phonemes.",
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--en--ljspeech--vits.zip",
                    "default_vocoder": null,
                    "commit": "3900448",
                    "author": "Eren Gölge @erogol",
                    "license": "apache 2.0",
                    "contact": "egolge@coqui.com"
                },
                "vits--neon": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--en--ljspeech--vits.zip",
                    "default_vocoder": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause",
                    "contact": null,
                    "commit": null
                },
                "fast_pitch": {
                    "description": "FastPitch model trained on LJSpeech using the Aligner Network",
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--en--ljspeech--fast_pitch.zip",
                    "default_vocoder": "vocoder_models/en/ljspeech/hifigan_v2",
                    "commit": "b27b3ba",
                    "author": "Eren Gölge @erogol",
                    "license": "apache 2.0",
                    "contact": "egolge@coqui.com"
                },
                "overflow": {
                    "description": "Overflow model trained on LJSpeech",
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.10.0_models/tts_models--en--ljspeech--overflow.zip",
                    "default_vocoder": "vocoder_models/en/ljspeech/hifigan_v2",
                    "commit": "3b1a28f",
                    "author": "Eren Gölge @erogol",
                    "license": "apache 2.0",
                    "contact": "egolge@coqui.ai"
                },
                "neural_hmm": {
                    "description": "Neural HMM model trained on LJSpeech",
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.11.0_models/tts_models--en--ljspeech--neural_hmm.zip",
                    "default_vocoder": "vocoder_models/en/ljspeech/hifigan_v2",
                    "commit": "3b1a28f",
                    "author": "Shivam Mehta @shivammehta25",
                    "license": "apache 2.0",
                    "contact": "d83ee8fe45e3c0d776d4a865aca21d7c2ac324c4"
                }
            },
            "vctk": {
                "vits": {
                    "description": "VITS End2End TTS model trained on VCTK dataset with 109 different speakers with EN accent.",
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--en--vctk--vits.zip",
                    "default_vocoder": null,
                    "commit": "3900448",
                    "author": "Eren @erogol",
                    "license": "apache 2.0",
                    "contact": "egolge@coqui.ai"
                },
                "fast_pitch": {
                    "description": "FastPitch model trained on VCTK dataset.",
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--en--vctk--fast_pitch.zip",
                    "default_vocoder": null,
                    "commit": "bdab788d",
                    "author": "Eren @erogol",
                    "license": "CC BY-NC-ND 4.0",
                    "contact": "egolge@coqui.ai"
                }
            },
            "sam": {
                "tacotron-DDC": {
                    "description": "Tacotron2 with Double Decoder Consistency trained with Accenture's Sam dataset.",
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--en--sam--tacotron-DDC.zip",
                    "default_vocoder": "vocoder_models/en/sam/hifigan_v2",
                    "commit": "bae2ad0f",
                    "author": "Eren Gölge @erogol",
                    "license": "apache 2.0",
                    "contact": "egolge@coqui.com"
                }
            },
            "blizzard2013": {
                "capacitron-t2-c50": {
                    "description": "Capacitron additions to Tacotron 2 with Capacity at 50 as in https://arxiv.org/pdf/1906.03402.pdf",
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.7.0_models/tts_models--en--blizzard2013--capacitron-t2-c50.zip",
                    "commit": "d6284e7",
                    "default_vocoder": "vocoder_models/en/blizzard2013/hifigan_v2",
                    "author": "Adam Froghyar @a-froghyar",
                    "license": "apache 2.0",
                    "contact": "adamfroghyar@gmail.com"
                },
                "capacitron-t2-c150_v2": {
                    "description": "Capacitron additions to Tacotron 2 with Capacity at 150 as in https://arxiv.org/pdf/1906.03402.pdf",
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.7.1_models/tts_models--en--blizzard2013--capacitron-t2-c150_v2.zip",
                    "commit": "a67039d",
                    "default_vocoder": "vocoder_models/en/blizzard2013/hifigan_v2",
                    "author": "Adam Froghyar @a-froghyar",
                    "license": "apache 2.0",
                    "contact": "adamfroghyar@gmail.com"
                }
            },
            "multi-dataset": {
                "tortoise-v2": {
                    "description": "Tortoise tts model https://github.com/neonbjb/tortoise-tts",
                    "github_rls_url": [
                        "https://app.coqui.ai/tts_model/autoregressive.pth",
                        "https://coqui.gateway.scarf.sh/v0.14.1_models/clvp2.pth",
                        "https://coqui.gateway.scarf.sh/v0.14.1_models/cvvp.pth",
                        "https://coqui.gateway.scarf.sh/v0.14.1_models/diffusion_decoder.pth",
                        "https://coqui.gateway.scarf.sh/v0.14.1_models/rlg_auto.pth",
                        "https://coqui.gateway.scarf.sh/v0.14.1_models/rlg_diffuser.pth",
                        "https://coqui.gateway.scarf.sh/v0.14.1_models/vocoder.pth",
                        "https://coqui.gateway.scarf.sh/v0.14.1_models/mel_norms.pth",
                        "https://coqui.gateway.scarf.sh/v0.14.1_models/config.json"
                    ],
                    "commit": "c1875f6",
                    "default_vocoder": null,
                    "author": "@neonbjb - James Betker, @manmay-nakhashi Manmay Nakhashi",
                    "license": "apache 2.0"
                }
            },
            "jenny": {
                "jenny": {
                    "description": "VITS model trained with the Jenny (Dioco) dataset. Named Jenny, as required by the license. Original URL for the model: https://www.kaggle.com/datasets/noml4u/tts-models--en--jenny-dioco--vits",
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.14.0_models/tts_models--en--jenny--jenny.zip",
                    "default_vocoder": null,
                    "commit": "ba40a1c",
                    "license": "custom - see https://github.com/dioco-group/jenny-tts-dataset#important",
                    "author": "@noml4u"
                }
            }
        },
        "es": {
            "mai": {
                "tacotron2-DDC": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--es--mai--tacotron2-DDC.zip",
                    "default_vocoder": "vocoder_models/universal/libri-tts/fullband-melgan",
                    "commit": "",
                    "author": "Eren Gölge @erogol",
                    "license": "MPL",
                    "contact": "egolge@coqui.com"
                }
            },
            "css10": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--es--css10--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "fr": {
            "mai": {
                "tacotron2-DDC": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--fr--mai--tacotron2-DDC.zip",
                    "default_vocoder": "vocoder_models/universal/libri-tts/fullband-melgan",
                    "commit": null,
                    "author": "Eren Gölge @erogol",
                    "license": "MPL",
                    "contact": "egolge@coqui.com"
                }
            },
            "css10": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--fr--css10--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "uk": {
            "mai": {
                "glow-tts": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--uk--mai--glow-tts.zip",
                    "author": "@robinhad",
                    "commit": "bdab788d",
                    "license": "MIT",
                    "contact": "",
                    "default_vocoder": "vocoder_models/uk/mai/multiband-melgan"
                },
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--uk--mai--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "zh-CN": {
            "baker": {
                "tacotron2-DDC-GST": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--zh-CN--baker--tacotron2-DDC-GST.zip",
                    "commit": "unknown",
                    "author": "@kirianguiller",
                    "license": "apache 2.0",
                    "default_vocoder": null
                }
            }
        },
        "nl": {
            "mai": {
                "tacotron2-DDC": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--nl--mai--tacotron2-DDC.zip",
                    "author": "@r-dh",
                    "license": "apache 2.0",
                    "default_vocoder": "vocoder_models/nl/mai/parallel-wavegan",
                    "stats_file": null,
                    "commit": "540d811"
                }
            },
            "css10": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--nl--css10--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "de": {
            "thorsten": {
                "tacotron2-DCA": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--de--thorsten--tacotron2-DCA.zip",
                    "default_vocoder": "vocoder_models/de/thorsten/fullband-melgan",
                    "author": "@thorstenMueller",
                    "license": "apache 2.0",
                    "commit": "unknown"
                },
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.7.0_models/tts_models--de--thorsten--vits.zip",
                    "default_vocoder": null,
                    "author": "@thorstenMueller",
                    "license": "apache 2.0",
                    "commit": "unknown"
                },
                "tacotron2-DDC": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--de--thorsten--tacotron2-DDC.zip",
                    "default_vocoder": "vocoder_models/de/thorsten/hifigan_v1",
                    "description": "Thorsten-Dec2021-22k-DDC",
                    "author": "@thorstenMueller",
                    "license": "apache 2.0",
                    "commit": "unknown"
                }
            },
            "css10": {
                "vits-neon": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--de--css10--vits.zip",
                    "default_vocoder": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause",
                    "commit": null
                }
            }
        },
        "ja": {
            "kokoro": {
                "tacotron2-DDC": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--ja--kokoro--tacotron2-DDC.zip",
                    "default_vocoder": "vocoder_models/ja/kokoro/hifigan_v1",
                    "description": "Tacotron2 with Double Decoder Consistency trained with Kokoro Speech Dataset.",
                    "author": "@kaiidams",
                    "license": "apache 2.0",
                    "commit": "401fbd89"
                }
            }
        },
        "tr": {
            "common-voice": {
                "glow-tts": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--tr--common-voice--glow-tts.zip",
                    "default_vocoder": "vocoder_models/tr/common-voice/hifigan",
                    "license": "MIT",
                    "description": "Turkish GlowTTS model using an unknown speaker from the Common-Voice dataset.",
                    "author": "Fatih Akademi",
                    "commit": null
                }
            }
        },
        "it": {
            "mai_female": {
                "glow-tts": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--it--mai_female--glow-tts.zip",
                    "default_vocoder": null,
                    "description": "GlowTTS model as explained on https://github.com/coqui-ai/TTS/issues/1148.",
                    "author": "@nicolalandro",
                    "license": "apache 2.0",
                    "commit": null
                },
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--it--mai_female--vits.zip",
                    "default_vocoder": null,
                    "description": "VITS model as explained on https://github.com/coqui-ai/TTS/issues/1148.",
                    "author": "@nicolalandro",
                    "license": "apache 2.0",
                    "commit": null
                }
            },
            "mai_male": {
                "glow-tts": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--it--mai_male--glow-tts.zip",
                    "default_vocoder": null,
                    "description": "GlowTTS model as explained on https://github.com/coqui-ai/TTS/issues/1148.",
                    "author": "@nicolalandro",
                    "license": "apache 2.0",
                    "commit": null
                },
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/tts_models--it--mai_male--vits.zip",
                    "default_vocoder": null,
                    "description": "VITS model as explained on https://github.com/coqui-ai/TTS/issues/1148.",
                    "author": "@nicolalandro",
                    "license": "apache 2.0",
                    "commit": null
                }
            }
        },
        "ewe": {
            "openbible": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.2_models/tts_models--ewe--openbible--vits.zip",
                    "default_vocoder": null,
                    "license": "CC-BY-SA 4.0",
                    "description": "Original work (audio and text) by Biblica available for free at www.biblica.com and open.bible.",
                    "author": "@coqui_ai",
                    "commit": "1b22f03"
                }
            }
        },
        "hau": {
            "openbible": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.2_models/tts_models--hau--openbible--vits.zip",
                    "default_vocoder": null,
                    "license": "CC-BY-SA 4.0",
                    "description": "Original work (audio and text) by Biblica available for free at www.biblica.com and open.bible.",
                    "author": "@coqui_ai",
                    "commit": "1b22f03"
                }
            }
        },
        "lin": {
            "openbible": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.2_models/tts_models--lin--openbible--vits.zip",
                    "default_vocoder": null,
                    "license": "CC-BY-SA 4.0",
                    "description": "Original work (audio and text) by Biblica available for free at www.biblica.com and open.bible.",
                    "author": "@coqui_ai",
                    "commit": "1b22f03"
                }
            }
        },
        "tw_akuapem": {
            "openbible": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.2_models/tts_models--tw_akuapem--openbible--vits.zip",
                    "default_vocoder": null,
                    "license": "CC-BY-SA 4.0",
                    "description": "Original work (audio and text) by Biblica available for free at www.biblica.com and open.bible.",
                    "author": "@coqui_ai",
                    "commit": "1b22f03"
                }
            }
        },
        "tw_asante": {
            "openbible": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.2_models/tts_models--tw_asante--openbible--vits.zip",
                    "default_vocoder": null,
                    "license": "CC-BY-SA 4.0",
                    "description": "Original work (audio and text) by Biblica available for free at www.biblica.com and open.bible.",
                    "author": "@coqui_ai",
                    "commit": "1b22f03"
                }
            }
        },
        "yor": {
            "openbible": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.2_models/tts_models--yor--openbible--vits.zip",
                    "default_vocoder": null,
                    "license": "CC-BY-SA 4.0",
                    "description": "Original work (audio and text) by Biblica available for free at www.biblica.com and open.bible.",
                    "author": "@coqui_ai",
                    "commit": "1b22f03"
                }
            }
        },
        "hu": {
            "css10": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--hu--css10--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "el": {
            "cv": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--el--cv--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "fi": {
            "css10": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--fi--css10--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "hr": {
            "cv": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--hr--cv--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "lt": {
            "cv": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--lt--cv--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "lv": {
            "cv": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--lv--cv--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "mt": {
            "cv": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--mt--cv--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "pl": {
            "mai_female": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--pl--mai_female--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "pt": {
            "cv": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--pt--cv--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "ro": {
            "cv": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--ro--cv--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "sk": {
            "cv": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--sk--cv--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "sl": {
            "cv": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--sl--cv--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "sv": {
            "cv": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/tts_models--sv--cv--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "author": "@NeonGeckoCom",
                    "license": "bsd-3-clause"
                }
            }
        },
        "ca": {
            "custom": {
                "vits": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.10.1_models/tts_models--ca--custom--vits.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "description": "It is trained from zero with 101460 utterances consisting of 257 speakers, approx 138 hours of speech. We used three datasets;\nFestcat and Google Catalan TTS (both TTS datasets) and also a part of Common Voice 8. It is trained with TTS v0.8.0.\nhttps://github.com/coqui-ai/TTS/discussions/930#discussioncomment-4466345",
                    "author": "@gullabi",
                    "license": "CC-BY-4.0"
                }
            }
        },
        "fa": {
            "custom": {
                "glow-tts": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.10.1_models/tts_models--fa--custom--glow-tts.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "description": "persian-tts-female-glow_tts model for text to speech purposes. Single-speaker female voice trained on persian-tts-dataset-famale. \nThis model has no compatible vocoder, thus the output quality is not very good. \nDataset: https://www.kaggle.com/datasets/magnoliasis/persian-tts-dataset-famale.",
                    "author": "@karim23657",
                    "license": "CC-BY-4.0"
                }
            }
        },
        "bn": {
            "custom": {
                "vits-male": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.13.3_models/tts_models--bn--custom--vits_male.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "description": "Single speaker Bangla male model. For more information -> https://github.com/mobassir94/comprehensive-bangla-tts",
                    "author": "@mobassir94",
                    "license": "Apache 2.0"
                },
                "vits-female": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.13.3_models/tts_models--bn--custom--vits_female.zip",
                    "default_vocoder": null,
                    "commit": null,
                    "description": "Single speaker Bangla female model. For more information -> https://github.com/mobassir94/comprehensive-bangla-tts",
                    "author": "@mobassir94",
                    "license": "Apache 2.0"
                }
            }
        },
        "be": {
            "common-voice": {
                "glow-tts": {
                    "description": "Belarusian GlowTTS model created by @alex73 (Github).",
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.16.6/tts_models--be--common-voice--glow-tts.zip",
                    "default_vocoder": "vocoder_models/be/common-voice/hifigan",
                    "commit": "c0aabb85",
                    "license": "CC-BY-SA 4.0",
                    "contact": "alex73mail@gmail.com"
                }
            }
        }
    },
    "vocoder_models": {
        "universal": {
            "libri-tts": {
                "wavegrad": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/vocoder_models--universal--libri-tts--wavegrad.zip",
                    "commit": "ea976b0",
                    "author": "Eren Gölge @erogol",
                    "license": "MPL",
                    "contact": "egolge@coqui.com"
                },
                "fullband-melgan": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/vocoder_models--universal--libri-tts--fullband-melgan.zip",
                    "commit": "4132240",
                    "author": "Eren Gölge @erogol",
                    "license": "MPL",
                    "contact": "egolge@coqui.com"
                }
            }
        },
        "en": {
            "ek1": {
                "wavegrad": {
                    "description": "EK1 en-rp wavegrad by NMStoker",
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/vocoder_models--en--ek1--wavegrad.zip",
                    "commit": "c802255",
                    "license": "apache 2.0"
                }
            },
            "ljspeech": {
                "multiband-melgan": {
                    "github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/vocoder_models--en--ljspeech--multiband-melgan.zip",
                    "commit": "ea976b0",
                    "author": "Eren Gölge @erogol",
                    "license": "MPL",
                    "contact": "egolge@coqui.com"
|
780 |
+
},
|
781 |
+
"hifigan_v2": {
|
782 |
+
"description": "HiFiGAN_v2 LJSpeech vocoder from https://arxiv.org/abs/2010.05646.",
|
783 |
+
"github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/vocoder_models--en--ljspeech--hifigan_v2.zip",
|
784 |
+
"commit": "bae2ad0f",
|
785 |
+
"author": "@erogol",
|
786 |
+
"license": "apache 2.0",
|
787 |
+
"contact": "egolge@coqui.ai"
|
788 |
+
},
|
789 |
+
"univnet": {
|
790 |
+
"description": "UnivNet model finetuned on TacotronDDC_ph spectrograms for better compatibility.",
|
791 |
+
"github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/vocoder_models--en--ljspeech--univnet_v2.zip",
|
792 |
+
"commit": "4581e3d",
|
793 |
+
"author": "Eren @erogol",
|
794 |
+
"license": "apache 2.0",
|
795 |
+
"contact": "egolge@coqui.ai"
|
796 |
+
}
|
797 |
+
},
|
798 |
+
"blizzard2013": {
|
799 |
+
"hifigan_v2": {
|
800 |
+
"description": "HiFiGAN_v2 LJSpeech vocoder from https://arxiv.org/abs/2010.05646.",
|
801 |
+
"github_rls_url": "https://coqui.gateway.scarf.sh/v0.7.0_models/vocoder_models--en--blizzard2013--hifigan_v2.zip",
|
802 |
+
"commit": "d6284e7",
|
803 |
+
"author": "Adam Froghyar @a-froghyar",
|
804 |
+
"license": "apache 2.0",
|
805 |
+
"contact": "adamfroghyar@gmail.com"
|
806 |
+
}
|
807 |
+
},
|
808 |
+
"vctk": {
|
809 |
+
"hifigan_v2": {
|
810 |
+
"description": "Finetuned and intended to be used with tts_models/en/vctk/sc-glow-tts",
|
811 |
+
"github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/vocoder_models--en--vctk--hifigan_v2.zip",
|
812 |
+
"commit": "2f07160",
|
813 |
+
"author": "Edresson Casanova",
|
814 |
+
"license": "apache 2.0",
|
815 |
+
"contact": ""
|
816 |
+
}
|
817 |
+
},
|
818 |
+
"sam": {
|
819 |
+
"hifigan_v2": {
|
820 |
+
"description": "Finetuned and intended to be used with tts_models/en/sam/tacotron_DDC",
|
821 |
+
"github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/vocoder_models--en--sam--hifigan_v2.zip",
|
822 |
+
"commit": "2f07160",
|
823 |
+
"author": "Eren Gölge @erogol",
|
824 |
+
"license": "apache 2.0",
|
825 |
+
"contact": "egolge@coqui.ai"
|
826 |
+
}
|
827 |
+
}
|
828 |
+
},
|
829 |
+
"nl": {
|
830 |
+
"mai": {
|
831 |
+
"parallel-wavegan": {
|
832 |
+
"github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/vocoder_models--nl--mai--parallel-wavegan.zip",
|
833 |
+
"author": "@r-dh",
|
834 |
+
"license": "apache 2.0",
|
835 |
+
"commit": "unknown"
|
836 |
+
}
|
837 |
+
}
|
838 |
+
},
|
839 |
+
"de": {
|
840 |
+
"thorsten": {
|
841 |
+
"wavegrad": {
|
842 |
+
"github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/vocoder_models--de--thorsten--wavegrad.zip",
|
843 |
+
"author": "@thorstenMueller",
|
844 |
+
"license": "apache 2.0",
|
845 |
+
"commit": "unknown"
|
846 |
+
},
|
847 |
+
"fullband-melgan": {
|
848 |
+
"github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/vocoder_models--de--thorsten--fullband-melgan.zip",
|
849 |
+
"author": "@thorstenMueller",
|
850 |
+
"license": "apache 2.0",
|
851 |
+
"commit": "unknown"
|
852 |
+
},
|
853 |
+
"hifigan_v1": {
|
854 |
+
"github_rls_url": "https://coqui.gateway.scarf.sh/v0.8.0_models/vocoder_models--de--thorsten--hifigan_v1.zip",
|
855 |
+
"description": "HifiGAN vocoder model for Thorsten Neutral Dec2021 22k Samplerate Tacotron2 DDC model",
|
856 |
+
"author": "@thorstenMueller",
|
857 |
+
"license": "apache 2.0",
|
858 |
+
"commit": "unknown"
|
859 |
+
}
|
860 |
+
}
|
861 |
+
},
|
862 |
+
"ja": {
|
863 |
+
"kokoro": {
|
864 |
+
"hifigan_v1": {
|
865 |
+
"github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/vocoder_models--ja--kokoro--hifigan_v1.zip",
|
866 |
+
"description": "HifiGAN model trained for kokoro dataset by @kaiidams",
|
867 |
+
"author": "@kaiidams",
|
868 |
+
"license": "apache 2.0",
|
869 |
+
"commit": "3900448"
|
870 |
+
}
|
871 |
+
}
|
872 |
+
},
|
873 |
+
"uk": {
|
874 |
+
"mai": {
|
875 |
+
"multiband-melgan": {
|
876 |
+
"github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/vocoder_models--uk--mai--multiband-melgan.zip",
|
877 |
+
"author": "@robinhad",
|
878 |
+
"commit": "bdab788d",
|
879 |
+
"license": "MIT",
|
880 |
+
"contact": ""
|
881 |
+
}
|
882 |
+
}
|
883 |
+
},
|
884 |
+
"tr": {
|
885 |
+
"common-voice": {
|
886 |
+
"hifigan": {
|
887 |
+
"github_rls_url": "https://coqui.gateway.scarf.sh/v0.6.1_models/vocoder_models--tr--common-voice--hifigan.zip",
|
888 |
+
"description": "HifiGAN model using an unknown speaker from the Common-Voice dataset.",
|
889 |
+
"author": "Fatih Akademi",
|
890 |
+
"license": "MIT",
|
891 |
+
"commit": null
|
892 |
+
}
|
893 |
+
}
|
894 |
+
},
|
895 |
+
"be": {
|
896 |
+
"common-voice": {
|
897 |
+
"hifigan": {
|
898 |
+
"github_rls_url": "https://coqui.gateway.scarf.sh/v0.16.6/vocoder_models--be--common-voice--hifigan.zip",
|
899 |
+
"description": "Belarusian HiFiGAN model created by @alex73 (Github).",
|
900 |
+
"author": "@alex73",
|
901 |
+
"license": "CC-BY-SA 4.0",
|
902 |
+
"commit": "c0aabb85"
|
903 |
+
}
|
904 |
+
}
|
905 |
+
}
|
906 |
+
},
|
907 |
+
"voice_conversion_models": {
|
908 |
+
"multilingual": {
|
909 |
+
"vctk": {
|
910 |
+
"freevc24": {
|
911 |
+
"github_rls_url": "https://coqui.gateway.scarf.sh/v0.13.0_models/voice_conversion_models--multilingual--vctk--freevc24.zip",
|
912 |
+
"description": "FreeVC model trained on VCTK dataset from https://github.com/OlaWod/FreeVC",
|
913 |
+
"author": "Jing-Yi Li @OlaWod",
|
914 |
+
"license": "MIT",
|
915 |
+
"commit": null
|
916 |
+
}
|
917 |
+
}
|
918 |
+
}
|
919 |
+
}
|
920 |
+
}
|
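Note: the nesting above is what produces the public model IDs used throughout the API, `model_type/language/dataset/model` (e.g. "voice_conversion_models/multilingual/vctk/freevc24"), matching the `--` separators in the release zip names. A minimal sketch, assuming only the four-level nesting shown above, for enumerating the registry:

import json

# Sketch: list model IDs from .models.json, assuming the
# model_type -> language -> dataset -> model nesting shown above.
with open("TTS/TTS/.models.json", encoding="utf-8") as f:
    registry = json.load(f)

for model_type, languages in registry.items():
    for language, datasets in languages.items():
        for dataset, models in datasets.items():
            for model_name in models:
                print(f"{model_type}/{language}/{dataset}/{model_name}")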
TTS/TTS/VERSION
ADDED
@@ -0,0 +1 @@
0.17.5
TTS/TTS/__init__.py
ADDED
@@ -0,0 +1,6 @@
import os

with open(os.path.join(os.path.dirname(__file__), "VERSION"), "r", encoding="utf-8") as f:
    version = f.read().strip()

__version__ = version
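Note: the value read from the VERSION file above becomes the package attribute; a quick interpreter check (illustrative):

>>> import TTS
>>> TTS.__version__
'0.17.5'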
TTS/TTS/api.py
ADDED
@@ -0,0 +1,476 @@
import tempfile
import warnings
from pathlib import Path
from typing import Union

import numpy as np
from torch import nn

from TTS.cs_api import CS_API
from TTS.utils.audio.numpy_transforms import save_wav
from TTS.utils.manage import ModelManager
from TTS.utils.synthesizer import Synthesizer


class TTS(nn.Module):
    """TODO: Add voice conversion and Capacitron support."""

    def __init__(
        self,
        model_name: str = None,
        model_path: str = None,
        config_path: str = None,
        vocoder_path: str = None,
        vocoder_config_path: str = None,
        progress_bar: bool = True,
        cs_api_model: str = "XTTS",
        gpu=False,
    ):
        """🐸TTS python interface that allows to load and use the released models.

        Example with a multi-speaker model:
            >>> from TTS.api import TTS
            >>> tts = TTS(TTS.list_models()[0])
            >>> wav = tts.tts("This is a test! This is also a test!!", speaker=tts.speakers[0], language=tts.languages[0])
            >>> tts.tts_to_file(text="Hello world!", speaker=tts.speakers[0], language=tts.languages[0], file_path="output.wav")

        Example with a single-speaker model:
            >>> tts = TTS(model_name="tts_models/de/thorsten/tacotron2-DDC", progress_bar=False, gpu=False)
            >>> tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path="output.wav")

        Example loading a model from a path:
            >>> tts = TTS(model_path="/path/to/checkpoint_100000.pth", config_path="/path/to/config.json", progress_bar=False, gpu=False)
            >>> tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path="output.wav")

        Example voice cloning with YourTTS in English, French and Portuguese:
            >>> tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False, gpu=True)
            >>> tts.tts_to_file("This is voice cloning.", speaker_wav="my/cloning/audio.wav", language="en", file_path="thisisit.wav")
            >>> tts.tts_to_file("C'est le clonage de la voix.", speaker_wav="my/cloning/audio.wav", language="fr", file_path="thisisit.wav")
            >>> tts.tts_to_file("Isso é clonagem de voz.", speaker_wav="my/cloning/audio.wav", language="pt", file_path="thisisit.wav")

        Example Fairseq TTS models (uses ISO language codes in https://dl.fbaipublicfiles.com/mms/tts/all-tts-languages.html):
            >>> tts = TTS(model_name="tts_models/eng/fairseq/vits", progress_bar=False, gpu=True)
            >>> tts.tts_to_file("This is a test.", file_path="output.wav")

        Args:
            model_name (str, optional): Model name to load. You can list models by ```tts.models```. Defaults to None.
            model_path (str, optional): Path to the model checkpoint. Defaults to None.
            config_path (str, optional): Path to the model config. Defaults to None.
            vocoder_path (str, optional): Path to the vocoder checkpoint. Defaults to None.
            vocoder_config_path (str, optional): Path to the vocoder config. Defaults to None.
            progress_bar (bool, optional): Whether to print a progress bar while downloading a model. Defaults to True.
            cs_api_model (str, optional): Name of the model to use for the Coqui Studio API. Available models are
                "XTTS", "XTTS-multilingual", "V1". You can also use `TTS.cs_api.CS_API` for more control.
                Defaults to "XTTS".
            gpu (bool, optional): Enable/disable GPU. Some models might be too slow on CPU. Defaults to False.
        """
        super().__init__()
        self.manager = ModelManager(models_file=self.get_models_file_path(), progress_bar=progress_bar, verbose=False)

        self.synthesizer = None
        self.voice_converter = None
        self.csapi = None
        self.cs_api_model = cs_api_model
        self.model_name = None

        if gpu:
            warnings.warn("`gpu` will be deprecated. Please use `tts.to(device)` instead.")

        if model_name is not None:
            if "tts_models" in model_name or "coqui_studio" in model_name:
                self.load_tts_model_by_name(model_name, gpu)
            elif "voice_conversion_models" in model_name:
                self.load_vc_model_by_name(model_name, gpu)

        if model_path:
            self.load_tts_model_by_path(
                model_path, config_path, vocoder_path=vocoder_path, vocoder_config=vocoder_config_path, gpu=gpu
            )

    @property
    def models(self):
        return self.manager.list_tts_models()

    @property
    def is_multi_speaker(self):
        if hasattr(self.synthesizer.tts_model, "speaker_manager") and self.synthesizer.tts_model.speaker_manager:
            return self.synthesizer.tts_model.speaker_manager.num_speakers > 1
        return False

    @property
    def is_coqui_studio(self):
        if self.model_name is None:
            return False
        return "coqui_studio" in self.model_name

    @property
    def is_multi_lingual(self):
        # TODO: fix this
        if "xtts" in self.model_name:
            return True
        if hasattr(self.synthesizer.tts_model, "language_manager") and self.synthesizer.tts_model.language_manager:
            return self.synthesizer.tts_model.language_manager.num_languages > 1
        return False

    @property
    def speakers(self):
        if not self.is_multi_speaker:
            return None
        return self.synthesizer.tts_model.speaker_manager.speaker_names

    @property
    def languages(self):
        if not self.is_multi_lingual:
            return None
        return self.synthesizer.tts_model.language_manager.language_names

    @staticmethod
    def get_models_file_path():
        return Path(__file__).parent / ".models.json"

    def list_models(self):
        try:
            csapi = CS_API(model=self.cs_api_model)
            models = csapi.list_speakers_as_tts_models()
        except ValueError as e:
            print(e)
            models = []
        manager = ModelManager(models_file=TTS.get_models_file_path(), progress_bar=False, verbose=False)
        return manager.list_tts_models() + models

    def download_model_by_name(self, model_name: str):
        model_path, config_path, model_item = self.manager.download_model(model_name)
        if "fairseq" in model_name or (model_item is not None and isinstance(model_item["model_url"], list)):
            # return model directory if there are multiple files
            # we assume that the model knows how to load itself
            return None, None, None, None, model_path
        if model_item.get("default_vocoder") is None:
            return model_path, config_path, None, None, None
        vocoder_path, vocoder_config_path, _ = self.manager.download_model(model_item["default_vocoder"])
        return model_path, config_path, vocoder_path, vocoder_config_path, None

    def load_vc_model_by_name(self, model_name: str, gpu: bool = False):
        """Load one of the voice conversion models by name.

        Args:
            model_name (str): Model name to load. You can list models by ```tts.models```.
            gpu (bool, optional): Enable/disable GPU. Some models might be too slow on CPU. Defaults to False.
        """
        self.model_name = model_name
        model_path, config_path, _, _, _ = self.download_model_by_name(model_name)
        self.voice_converter = Synthesizer(vc_checkpoint=model_path, vc_config=config_path, use_cuda=gpu)

    def load_tts_model_by_name(self, model_name: str, gpu: bool = False):
        """Load one of 🐸TTS models by name.

        Args:
            model_name (str): Model name to load. You can list models by ```tts.models```.
            gpu (bool, optional): Enable/disable GPU. Some models might be too slow on CPU. Defaults to False.

        TODO: Add tests
        """
        self.synthesizer = None
        self.csapi = None
        self.model_name = model_name

        if "coqui_studio" in model_name:
            self.csapi = CS_API()
        else:
            model_path, config_path, vocoder_path, vocoder_config_path, model_dir = self.download_model_by_name(
                model_name
            )

            # init synthesizer
            # None values are fetched from the model
            self.synthesizer = Synthesizer(
                tts_checkpoint=model_path,
                tts_config_path=config_path,
                tts_speakers_file=None,
                tts_languages_file=None,
                vocoder_checkpoint=vocoder_path,
                vocoder_config=vocoder_config_path,
                encoder_checkpoint=None,
                encoder_config=None,
                model_dir=model_dir,
                use_cuda=gpu,
            )

    def load_tts_model_by_path(
        self, model_path: str, config_path: str, vocoder_path: str = None, vocoder_config: str = None, gpu: bool = False
    ):
        """Load a model from a path.

        Args:
            model_path (str): Path to the model checkpoint.
            config_path (str): Path to the model config.
            vocoder_path (str, optional): Path to the vocoder checkpoint. Defaults to None.
            vocoder_config (str, optional): Path to the vocoder config. Defaults to None.
            gpu (bool, optional): Enable/disable GPU. Some models might be too slow on CPU. Defaults to False.
        """

        self.synthesizer = Synthesizer(
            tts_checkpoint=model_path,
            tts_config_path=config_path,
            tts_speakers_file=None,
            tts_languages_file=None,
            vocoder_checkpoint=vocoder_path,
            vocoder_config=vocoder_config,
            encoder_checkpoint=None,
            encoder_config=None,
            use_cuda=gpu,
        )

    def _check_arguments(
        self,
        speaker: str = None,
        language: str = None,
        speaker_wav: str = None,
        emotion: str = None,
        speed: float = None,
        **kwargs,
    ) -> None:
        """Check if the arguments are valid for the model."""
        if not self.is_coqui_studio:
            # check for the coqui tts models
            if self.is_multi_speaker and (speaker is None and speaker_wav is None):
                raise ValueError("Model is multi-speaker but no `speaker` is provided.")
            if self.is_multi_lingual and language is None:
                raise ValueError("Model is multi-lingual but no `language` is provided.")
            if not self.is_multi_speaker and speaker is not None and "voice_dir" not in kwargs:
                raise ValueError("Model is not multi-speaker but `speaker` is provided.")
            if not self.is_multi_lingual and language is not None:
                raise ValueError("Model is not multi-lingual but `language` is provided.")
            if emotion is not None and speed is not None:
                raise ValueError("Emotion and speed can only be used with Coqui Studio models.")
        else:
            if emotion is None:
                emotion = "Neutral"
            if speed is None:
                speed = 1.0
            # check for the studio models
            if speaker_wav is not None:
                raise ValueError("Coqui Studio models do not support `speaker_wav` argument.")
            if speaker is not None:
                raise ValueError("Coqui Studio models do not support `speaker` argument.")
            if language is not None and language != "en":
                raise ValueError("Coqui Studio models currently support only `language=en` argument.")
            if emotion not in ["Neutral", "Happy", "Sad", "Angry", "Dull"]:
                raise ValueError(f"Emotion - `{emotion}` - must be one of `Neutral`, `Happy`, `Sad`, `Angry`, `Dull`.")

    def tts_coqui_studio(
        self,
        text: str,
        speaker_name: str = None,
        language: str = None,
        emotion: str = None,
        speed: float = 1.0,
        file_path: str = None,
    ) -> Union[np.ndarray, str]:
        """Convert text to speech using Coqui Studio models. Use `CS_API` class if you are only interested in the API.

        Args:
            text (str):
                Input text to synthesize.
            speaker_name (str, optional):
                Speaker name from Coqui Studio. Defaults to None.
            language (str): Language of the text. If None, the default language of the speaker is used. Language is only
                supported by `XTTS-multilang` model. Currently supports en, de, es, fr, it, pt, pl. Defaults to "en".
            emotion (str, optional):
                Emotion of the speaker. One of "Neutral", "Happy", "Sad", "Angry", "Dull". Emotions are only available
                with "V1" model. Defaults to None.
            speed (float, optional):
                Speed of the speech. Defaults to 1.0.
            file_path (str, optional):
                Path to save the output file. When None it returns the `np.ndarray` of waveform. Defaults to None.

        Returns:
            Union[np.ndarray, str]: Waveform of the synthesized speech or path to the output file.
        """
        speaker_name = self.model_name.split("/")[2]
        if file_path is not None:
            return self.csapi.tts_to_file(
                text=text,
                speaker_name=speaker_name,
                language=language,
                speed=speed,
                emotion=emotion,
                file_path=file_path,
            )[0]
        return self.csapi.tts(text=text, speaker_name=speaker_name, language=language, speed=speed, emotion=emotion)[0]

    def tts(
        self,
        text: str,
        speaker: str = None,
        language: str = None,
        speaker_wav: str = None,
        emotion: str = None,
        speed: float = None,
        **kwargs,
    ):
        """Convert text to speech.

        Args:
            text (str):
                Input text to synthesize.
            speaker (str, optional):
                Speaker name for multi-speaker. You can check whether the loaded model is multi-speaker by
                `tts.is_multi_speaker` and list speakers by `tts.speakers`. Defaults to None.
            language (str): Language of the text. If None, the default language of the speaker is used. Language is only
                supported by `XTTS-multilang` model. Currently supports en, de, es, fr, it, pt, pl. Defaults to "en".
            speaker_wav (str, optional):
                Path to a reference wav file to use for voice cloning with supporting models like YourTTS.
                Defaults to None.
            emotion (str, optional):
                Emotion to use for 🐸Coqui Studio models. If None, Studio models use "Neutral". Defaults to None.
            speed (float, optional):
                Speed factor to use for 🐸Coqui Studio models, between 0 and 2.0. If None, Studio models use 1.0.
                Defaults to None.
        """
        self._check_arguments(
            speaker=speaker, language=language, speaker_wav=speaker_wav, emotion=emotion, speed=speed, **kwargs
        )
        if self.csapi is not None:
            return self.tts_coqui_studio(
                text=text, speaker_name=speaker, language=language, emotion=emotion, speed=speed
            )
        wav = self.synthesizer.tts(
            text=text,
            speaker_name=speaker,
            language_name=language,
            speaker_wav=speaker_wav,
            reference_wav=None,
            style_wav=None,
            style_text=None,
            reference_speaker_name=None,
            **kwargs,
        )
        return wav

    def tts_to_file(
        self,
        text: str,
        speaker: str = None,
        language: str = None,
        speaker_wav: str = None,
        emotion: str = None,
        speed: float = 1.0,
        file_path: str = "output.wav",
        **kwargs,
    ):
        """Convert text to speech and save it to a file.

        Args:
            text (str):
                Input text to synthesize.
            speaker (str, optional):
                Speaker name for multi-speaker. You can check whether the loaded model is multi-speaker by
                `tts.is_multi_speaker` and list speakers by `tts.speakers`. Defaults to None.
            language (str, optional):
                Language code for multi-lingual models. You can check whether the loaded model is multi-lingual by
                `tts.is_multi_lingual` and list available languages by `tts.languages`. Defaults to None.
            speaker_wav (str, optional):
                Path to a reference wav file to use for voice cloning with supporting models like YourTTS.
                Defaults to None.
            emotion (str, optional):
                Emotion to use for 🐸Coqui Studio models. Defaults to None.
            speed (float, optional):
                Speed factor to use for 🐸Coqui Studio models, between 0.0 and 2.0. Defaults to 1.0.
            file_path (str, optional):
                Output file path. Defaults to "output.wav".
            kwargs (dict, optional):
                Additional arguments for the model.
        """
        self._check_arguments(speaker=speaker, language=language, speaker_wav=speaker_wav, **kwargs)

        if self.csapi is not None:
            return self.tts_coqui_studio(
                text=text, speaker_name=speaker, language=language, emotion=emotion, speed=speed, file_path=file_path
            )
        wav = self.tts(text=text, speaker=speaker, language=language, speaker_wav=speaker_wav, **kwargs)
        self.synthesizer.save_wav(wav=wav, path=file_path)
        return file_path

    def voice_conversion(
        self,
        source_wav: str,
        target_wav: str,
    ):
        """Voice conversion with FreeVC. Convert source wav to target speaker.

        Args:
            source_wav (str):
                Path to the source wav file.
            target_wav (str):
                Path to the target wav file.
        """
        wav = self.voice_converter.voice_conversion(source_wav=source_wav, target_wav=target_wav)
        return wav

    def voice_conversion_to_file(
        self,
        source_wav: str,
        target_wav: str,
        file_path: str = "output.wav",
    ):
        """Voice conversion with FreeVC. Convert source wav to target speaker.

        Args:
            source_wav (str):
                Path to the source wav file.
            target_wav (str):
                Path to the target wav file.
            file_path (str, optional):
                Output file path. Defaults to "output.wav".
        """
        wav = self.voice_conversion(source_wav=source_wav, target_wav=target_wav)
        save_wav(wav=wav, path=file_path, sample_rate=self.voice_converter.vc_config.audio.output_sample_rate)
        return file_path

    def tts_with_vc(self, text: str, language: str = None, speaker_wav: str = None):
        """Convert text to speech with voice conversion.

        It combines tts with voice conversion to fake voice cloning.

        - Convert text to speech with tts.
        - Convert the output wav to target speaker with voice conversion.

        Args:
            text (str):
                Input text to synthesize.
            language (str, optional):
                Language code for multi-lingual models. You can check whether the loaded model is multi-lingual by
                `tts.is_multi_lingual` and list available languages by `tts.languages`. Defaults to None.
            speaker_wav (str, optional):
                Path to a reference wav file to use for voice cloning with supporting models like YourTTS.
                Defaults to None.
        """
        with tempfile.NamedTemporaryFile(suffix=".wav", delete=False) as fp:
            # Lazy code... save it to a temp file to resample it while reading it for VC
            self.tts_to_file(text=text, speaker=None, language=language, file_path=fp.name)
        if self.voice_converter is None:
            self.load_vc_model_by_name("voice_conversion_models/multilingual/vctk/freevc24")
        wav = self.voice_converter.voice_conversion(source_wav=fp.name, target_wav=speaker_wav)
        return wav

    def tts_with_vc_to_file(
        self, text: str, language: str = None, speaker_wav: str = None, file_path: str = "output.wav"
    ):
        """Convert text to speech with voice conversion and save to file.

        Check `tts_with_vc` for more details.

        Args:
            text (str):
                Input text to synthesize.
            language (str, optional):
                Language code for multi-lingual models. You can check whether the loaded model is multi-lingual by
                `tts.is_multi_lingual` and list available languages by `tts.languages`. Defaults to None.
            speaker_wav (str, optional):
                Path to a reference wav file to use for voice cloning with supporting models like YourTTS.
                Defaults to None.
            file_path (str, optional):
                Output file path. Defaults to "output.wav".
        """
        wav = self.tts_with_vc(text=text, language=language, speaker_wav=speaker_wav)
        save_wav(wav=wav, path=file_path, sample_rate=self.voice_converter.vc_config.audio.output_sample_rate)
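Note: a short usage sketch for the voice-conversion path of the API above; the model name comes from the registry in .models.json, while the wav paths are placeholders:

from TTS.api import TTS

# Load the FreeVC model registered in .models.json and convert the voice in
# source.wav to the speaker of target_speaker.wav (placeholder paths).
tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=False)
tts.voice_conversion_to_file(
    source_wav="source.wav",
    target_wav="target_speaker.wav",
    file_path="converted.wav",
)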
TTS/TTS/bin/__init__.py
ADDED
File without changes
TTS/TTS/bin/collect_env_info.py
ADDED
@@ -0,0 +1,48 @@
"""Get detailed info about the working environment."""
import os
import platform
import sys

import numpy
import torch

sys.path += [os.path.abspath(".."), os.path.abspath(".")]
import json

import TTS


def system_info():
    return {
        "OS": platform.system(),
        "architecture": platform.architecture(),
        "version": platform.version(),
        "processor": platform.processor(),
        "python": platform.python_version(),
    }


def cuda_info():
    return {
        "GPU": [torch.cuda.get_device_name(i) for i in range(torch.cuda.device_count())],
        "available": torch.cuda.is_available(),
        "version": torch.version.cuda,
    }


def package_info():
    return {
        "numpy": numpy.__version__,
        "PyTorch_version": torch.__version__,
        "PyTorch_debug": torch.version.debug,
        "TTS": TTS.__version__,
    }


def main():
    details = {"System": system_info(), "CUDA": cuda_info(), "Packages": package_info()}
    print(json.dumps(details, indent=4, sort_keys=True))


if __name__ == "__main__":
    main()
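Note: the same environment payload can also be assembled programmatically; the script only pretty-prints it. A sketch (values are host-specific):

import json

from TTS.bin.collect_env_info import cuda_info, package_info, system_info

# Same dict that `python TTS/bin/collect_env_info.py` prints.
details = {"System": system_info(), "CUDA": cuda_info(), "Packages": package_info()}
print(json.dumps(details, indent=4, sort_keys=True))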
TTS/TTS/bin/compute_attention_masks.py
ADDED
@@ -0,0 +1,165 @@
import argparse
import importlib
import os
from argparse import RawTextHelpFormatter

import numpy as np
import torch
from torch.utils.data import DataLoader
from tqdm import tqdm

from TTS.config import load_config
from TTS.tts.datasets.TTSDataset import TTSDataset
from TTS.tts.models import setup_model
from TTS.tts.utils.text.characters import make_symbols, phonemes, symbols
from TTS.utils.audio import AudioProcessor
from TTS.utils.io import load_checkpoint

if __name__ == "__main__":
    # pylint: disable=bad-option-value
    parser = argparse.ArgumentParser(
        description="""Extract attention masks from trained Tacotron/Tacotron2 models.
These masks can be used for different purposes including training a TTS model with a Duration Predictor.\n\n"""
        """Each attention mask is written to the same path as the input wav file with a "_attn.npy" suffix.
(e.g. path/bla.wav (wav file) --> path/bla_attn.npy (attention mask))\n"""
        """
Example run:
    CUDA_VISIBLE_DEVICE="0" python TTS/bin/compute_attention_masks.py
        --model_path /data/rw/home/Models/ljspeech-dcattn-December-14-2020_11+10AM-9d0e8c7/checkpoint_200000.pth
        --config_path /data/rw/home/Models/ljspeech-dcattn-December-14-2020_11+10AM-9d0e8c7/config.json
        --dataset_metafile metadata.csv
        --data_path /root/LJSpeech-1.1/
        --batch_size 32
        --dataset ljspeech
        --use_cuda True
""",
        formatter_class=RawTextHelpFormatter,
    )
    parser.add_argument("--model_path", type=str, required=True, help="Path to Tacotron/Tacotron2 model file.")
    parser.add_argument(
        "--config_path",
        type=str,
        required=True,
        help="Path to Tacotron/Tacotron2 config file.",
    )
    parser.add_argument(
        "--dataset",
        type=str,
        default="",
        required=True,
        help="Target dataset processor name from TTS.tts.dataset.preprocess.",
    )

    parser.add_argument(
        "--dataset_metafile",
        type=str,
        default="",
        required=True,
        help="Dataset metafile including file paths with transcripts.",
    )
    parser.add_argument("--data_path", type=str, default="", help="Defines the data path. It overwrites config.json.")
    parser.add_argument("--use_cuda", type=bool, default=False, help="enable/disable cuda.")

    parser.add_argument(
        "--batch_size", default=16, type=int, help="Batch size for the model. Use batch_size=1 if you have no CUDA."
    )
    args = parser.parse_args()

    C = load_config(args.config_path)
    ap = AudioProcessor(**C.audio)

    # if the vocabulary was passed, replace the default
    if "characters" in C.keys():
        symbols, phonemes = make_symbols(**C.characters)

    # load the model
    num_chars = len(phonemes) if C.use_phonemes else len(symbols)
    # TODO: handle multi-speaker
    model = setup_model(C)
    model, _ = load_checkpoint(model, args.model_path, args.use_cuda, True)

    # data loader
    preprocessor = importlib.import_module("TTS.tts.datasets.formatters")
    preprocessor = getattr(preprocessor, args.dataset)
    meta_data = preprocessor(args.data_path, args.dataset_metafile)
    dataset = TTSDataset(
        model.decoder.r,
        C.text_cleaner,
        compute_linear_spec=False,
        ap=ap,
        meta_data=meta_data,
        characters=C.characters if "characters" in C.keys() else None,
        add_blank=C["add_blank"] if "add_blank" in C.keys() else False,
        use_phonemes=C.use_phonemes,
        phoneme_cache_path=C.phoneme_cache_path,
        phoneme_language=C.phoneme_language,
        enable_eos_bos=C.enable_eos_bos_chars,
    )

    dataset.sort_and_filter_items(C.get("sort_by_audio_len", default=False))
    loader = DataLoader(
        dataset,
        batch_size=args.batch_size,
        num_workers=4,
        collate_fn=dataset.collate_fn,
        shuffle=False,
        drop_last=False,
    )

    # compute attentions
    file_paths = []
    with torch.no_grad():
        for data in tqdm(loader):
            # setup input data
            text_input = data[0]
            text_lengths = data[1]
            linear_input = data[3]
            mel_input = data[4]
            mel_lengths = data[5]
            stop_targets = data[6]
            item_idxs = data[7]

            # dispatch data to GPU
            if args.use_cuda:
                text_input = text_input.cuda()
                text_lengths = text_lengths.cuda()
                mel_input = mel_input.cuda()
                mel_lengths = mel_lengths.cuda()

            model_outputs = model.forward(text_input, text_lengths, mel_input)

            alignments = model_outputs["alignments"].detach()
            for idx, alignment in enumerate(alignments):
                item_idx = item_idxs[idx]
                # interpolate if r > 1
                alignment = (
                    torch.nn.functional.interpolate(
                        alignment.transpose(0, 1).unsqueeze(0),
                        size=None,
                        scale_factor=model.decoder.r,
                        mode="nearest",
                        align_corners=None,
                        recompute_scale_factor=None,
                    )
                    .squeeze(0)
                    .transpose(0, 1)
                )
                # remove paddings
                alignment = alignment[: mel_lengths[idx], : text_lengths[idx]].cpu().numpy()
                # set file paths
                wav_file_name = os.path.basename(item_idx)
                align_file_name = os.path.splitext(wav_file_name)[0] + "_attn.npy"
                file_path = item_idx.replace(wav_file_name, align_file_name)
                # save output
                wav_file_abs_path = os.path.abspath(item_idx)
                file_abs_path = os.path.abspath(file_path)
                file_paths.append([wav_file_abs_path, file_abs_path])
                np.save(file_path, alignment)

    # output metafile
    metafile = os.path.join(args.data_path, "metadata_attn_mask.txt")

    with open(metafile, "w", encoding="utf-8") as f:
        for p in file_paths:
            f.write(f"{p[0]}|{p[1]}\n")
    print(f" >> Metafile created: {metafile}")
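Note: a sketch of consuming the script's outputs. The metafile name is fixed by the code above; the paths inside it depend on the dataset. Each saved mask has shape (mel_frames, text_length) after padding removal:

import numpy as np

# Read the metafile written above and load the first attention mask.
with open("metadata_attn_mask.txt", encoding="utf-8") as f:
    wav_path, attn_path = f.readline().strip().split("|")

alignment = np.load(attn_path)
print(wav_path, alignment.shape)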
TTS/TTS/bin/compute_embeddings.py
ADDED
@@ -0,0 +1,197 @@
import argparse
import os
from argparse import RawTextHelpFormatter

import torch
from tqdm import tqdm

from TTS.config import load_config
from TTS.config.shared_configs import BaseDatasetConfig
from TTS.tts.datasets import load_tts_samples
from TTS.tts.utils.managers import save_file
from TTS.tts.utils.speakers import SpeakerManager


def compute_embeddings(
    model_path,
    config_path,
    output_path,
    old_speakers_file=None,
    old_append=False,
    config_dataset_path=None,
    formatter_name=None,
    dataset_name=None,
    dataset_path=None,
    meta_file_train=None,
    meta_file_val=None,
    disable_cuda=False,
    no_eval=False,
):
    use_cuda = torch.cuda.is_available() and not disable_cuda

    if config_dataset_path is not None:
        c_dataset = load_config(config_dataset_path)
        meta_data_train, meta_data_eval = load_tts_samples(c_dataset.datasets, eval_split=not no_eval)
    else:
        c_dataset = BaseDatasetConfig()
        c_dataset.formatter = formatter_name
        c_dataset.dataset_name = dataset_name
        c_dataset.path = dataset_path
        if meta_file_train is not None:
            c_dataset.meta_file_train = meta_file_train
        if meta_file_val is not None:
            c_dataset.meta_file_val = meta_file_val
        meta_data_train, meta_data_eval = load_tts_samples(c_dataset, eval_split=not no_eval)

    if meta_data_eval is None:
        samples = meta_data_train
    else:
        samples = meta_data_train + meta_data_eval

    encoder_manager = SpeakerManager(
        encoder_model_path=model_path,
        encoder_config_path=config_path,
        d_vectors_file_path=old_speakers_file,
        use_cuda=use_cuda,
    )

    class_name_key = encoder_manager.encoder_config.class_name_key

    # compute speaker embeddings
    if old_speakers_file is not None and old_append:
        speaker_mapping = encoder_manager.embeddings
    else:
        speaker_mapping = {}

    for fields in tqdm(samples):
        class_name = fields[class_name_key]
        audio_file = fields["audio_file"]
        embedding_key = fields["audio_unique_name"]

        # Only update the speaker name when the embedding is already in the old file.
        if embedding_key in speaker_mapping:
            speaker_mapping[embedding_key]["name"] = class_name
            continue

        if old_speakers_file is not None and embedding_key in encoder_manager.clip_ids:
            # get the embedding from the old file
            embedd = encoder_manager.get_embedding_by_clip(embedding_key)
        else:
            # extract the embedding
            embedd = encoder_manager.compute_embedding_from_clip(audio_file)

        # create speaker_mapping if target dataset is defined
        speaker_mapping[embedding_key] = {}
        speaker_mapping[embedding_key]["name"] = class_name
        speaker_mapping[embedding_key]["embedding"] = embedd

    if speaker_mapping:
        # save speaker_mapping if target dataset is defined
        if os.path.isdir(output_path):
            mapping_file_path = os.path.join(output_path, "speakers.pth")
        else:
            mapping_file_path = output_path

        if os.path.dirname(mapping_file_path) != "":
            os.makedirs(os.path.dirname(mapping_file_path), exist_ok=True)

        save_file(speaker_mapping, mapping_file_path)
        print("Speaker embeddings saved at:", mapping_file_path)


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="""Compute embedding vectors for each audio file in a dataset and store them keyed by `{dataset_name}#{file_path}` in a .pth file\n\n"""
        """
Example runs:
python TTS/bin/compute_embeddings.py --model_path speaker_encoder_model.pth --config_path speaker_encoder_config.json --config_dataset_path dataset_config.json

python TTS/bin/compute_embeddings.py --model_path speaker_encoder_model.pth --config_path speaker_encoder_config.json --formatter_name coqui --dataset_path /path/to/vctk/dataset --dataset_name my_vctk --meta_file_train /path/to/vctk/metafile_train.csv --meta_file_val /path/to/vctk/metafile_eval.csv
""",
        formatter_class=RawTextHelpFormatter,
    )
    parser.add_argument(
        "--model_path",
        type=str,
        help="Path to model checkpoint file. It defaults to the released speaker encoder.",
        default="https://github.com/coqui-ai/TTS/releases/download/speaker_encoder_model/model_se.pth.tar",
    )
    parser.add_argument(
        "--config_path",
        type=str,
        help="Path to model config file. It defaults to the released speaker encoder config.",
        default="https://github.com/coqui-ai/TTS/releases/download/speaker_encoder_model/config_se.json",
    )
    parser.add_argument(
        "--config_dataset_path",
        type=str,
        help="Path to dataset config file. You either need to provide this or `formatter_name`, `dataset_name` and `dataset_path` arguments.",
        default=None,
    )
    parser.add_argument(
        "--output_path",
        type=str,
        help="Path for output `pth` or `json` file.",
        default="speakers.pth",
    )
    parser.add_argument(
        "--old_file",
        type=str,
        help="The old existing embedding file, from which the embeddings will be directly loaded for already computed audio clips.",
        default=None,
    )
    parser.add_argument(
        "--old_append",
        help="Append new audio clip embeddings to the old embedding file, generating a new non-duplicated merged embedding file. Defaults to False.",
        default=False,
        action="store_true",
    )
    parser.add_argument("--disable_cuda", type=bool, help="Flag to disable cuda.", default=False)
    parser.add_argument("--no_eval", help="Do not compute the eval split. Defaults to False.", default=False, action="store_true")
    parser.add_argument(
        "--formatter_name",
        type=str,
        help="Name of the formatter to use. You either need to provide this or `config_dataset_path`",
        default=None,
    )
    parser.add_argument(
        "--dataset_name",
        type=str,
        help="Name of the dataset to use. You either need to provide this or `config_dataset_path`",
        default=None,
    )
    parser.add_argument(
        "--dataset_path",
        type=str,
        help="Path to the dataset. You either need to provide this or `config_dataset_path`",
        default=None,
    )
    parser.add_argument(
        "--meta_file_train",
        type=str,
        help="Path to the train meta file. If not set, dataset formatter uses the default metafile if it is defined in the formatter. You either need to provide this or `config_dataset_path`",
        default=None,
    )
    parser.add_argument(
        "--meta_file_val",
        type=str,
        help="Path to the evaluation meta file. If not set, dataset formatter uses the default metafile if it is defined in the formatter. You either need to provide this or `config_dataset_path`",
        default=None,
    )
    args = parser.parse_args()

    compute_embeddings(
        args.model_path,
        args.config_path,
        args.output_path,
        old_speakers_file=args.old_file,
        old_append=args.old_append,
        config_dataset_path=args.config_dataset_path,
        formatter_name=args.formatter_name,
        dataset_name=args.dataset_name,
        dataset_path=args.dataset_path,
        meta_file_train=args.meta_file_train,
        meta_file_val=args.meta_file_val,
        disable_cuda=args.disable_cuda,
        no_eval=args.no_eval,
    )
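Note: a sketch of inspecting the resulting embeddings file, assuming the default `.pth` output is a torch-serialized dict (which is how `save_file` appears to treat that extension) keyed by `audio_unique_name`:

import torch

# Assumption: speakers.pth holds a dict of
# {audio_unique_name: {"name": speaker, "embedding": vector}}.
speaker_mapping = torch.load("speakers.pth")
for key, entry in list(speaker_mapping.items())[:3]:
    print(key, entry["name"], len(entry["embedding"]))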
TTS/TTS/bin/compute_statistics.py
ADDED
@@ -0,0 +1,96 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import argparse
import glob
import os

import numpy as np
from tqdm import tqdm

# from TTS.utils.io import load_config
from TTS.config import load_config
from TTS.tts.datasets import load_tts_samples
from TTS.utils.audio import AudioProcessor


def main():
    """Run preprocessing process."""
    parser = argparse.ArgumentParser(description="Compute mean and variance of spectrogram features.")
    parser.add_argument("config_path", type=str, help="TTS config file path to define audio processing parameters.")
    parser.add_argument("out_path", type=str, help="save path (directory and filename).")
    parser.add_argument(
        "--data_path",
        type=str,
        required=False,
        help="folder including the target set of wavs overriding dataset config.",
    )
    args, overrides = parser.parse_known_args()

    CONFIG = load_config(args.config_path)
    CONFIG.parse_known_args(overrides, relaxed_parser=True)

    # load config
    CONFIG.audio.signal_norm = False  # do not apply earlier normalization
    CONFIG.audio.stats_path = None  # discard pre-defined stats

    # load audio processor
    ap = AudioProcessor(**CONFIG.audio.to_dict())

    # load the meta data of target dataset
    if args.data_path:
        dataset_items = glob.glob(os.path.join(args.data_path, "**", "*.wav"), recursive=True)
    else:
        dataset_items = load_tts_samples(CONFIG.datasets)[0]  # take only train data
    print(f" > There are {len(dataset_items)} files.")

    mel_sum = 0
    mel_square_sum = 0
    linear_sum = 0
    linear_square_sum = 0
    N = 0
    for item in tqdm(dataset_items):
        # compute features
        wav = ap.load_wav(item if isinstance(item, str) else item["audio_file"])
        linear = ap.spectrogram(wav)
        mel = ap.melspectrogram(wav)

        # compute stats
        N += mel.shape[1]
        mel_sum += mel.sum(1)
        linear_sum += linear.sum(1)
        mel_square_sum += (mel**2).sum(axis=1)
        linear_square_sum += (linear**2).sum(axis=1)

    mel_mean = mel_sum / N
    mel_scale = np.sqrt(mel_square_sum / N - mel_mean**2)
    linear_mean = linear_sum / N
    linear_scale = np.sqrt(linear_square_sum / N - linear_mean**2)

    output_file_path = args.out_path
    stats = {}
    stats["mel_mean"] = mel_mean
    stats["mel_std"] = mel_scale
    stats["linear_mean"] = linear_mean
    stats["linear_std"] = linear_scale

    print(f" > Avg mel spec mean: {mel_mean.mean()}")
    print(f" > Avg mel spec scale: {mel_scale.mean()}")
    print(f" > Avg linear spec mean: {linear_mean.mean()}")
    print(f" > Avg linear spec scale: {linear_scale.mean()}")

    # set default config values for mean-var scaling
    CONFIG.audio.stats_path = output_file_path
    CONFIG.audio.signal_norm = True
    # remove redundant values
    del CONFIG.audio.max_norm
    del CONFIG.audio.min_level_db
    del CONFIG.audio.symmetric_norm
    del CONFIG.audio.clip_norm
    stats["audio_config"] = CONFIG.audio.to_dict()
    np.save(output_file_path, stats, allow_pickle=True)
    print(f" > stats saved to {output_file_path}")


if __name__ == "__main__":
    main()
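Note: because the stats dict is stored with `np.save`, loading it back needs `allow_pickle=True` and `.item()` to recover the dict. A sketch with a placeholder path for the `out_path` argument:

import numpy as np

# "scale_stats.npy" stands in for whatever out_path was used above.
stats = np.load("scale_stats.npy", allow_pickle=True).item()
print(sorted(stats.keys()))  # audio_config, linear_mean, linear_std, mel_mean, mel_std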
TTS/TTS/bin/eval_encoder.py
ADDED
@@ -0,0 +1,88 @@
import argparse
from argparse import RawTextHelpFormatter

import torch
from tqdm import tqdm

from TTS.config import load_config
from TTS.tts.datasets import load_tts_samples
from TTS.tts.utils.speakers import SpeakerManager


def compute_encoder_accuracy(dataset_items, encoder_manager):
    class_name_key = encoder_manager.encoder_config.class_name_key
    map_classid_to_classname = getattr(encoder_manager.encoder_config, "map_classid_to_classname", None)

    class_acc_dict = {}

    # compute embeddings for all wav files
    for item in tqdm(dataset_items):
        class_name = item[class_name_key]
        wav_file = item["audio_file"]

        # extract the embedding
        embedd = encoder_manager.compute_embedding_from_clip(wav_file)
        if encoder_manager.encoder_criterion is not None and map_classid_to_classname is not None:
            embedding = torch.FloatTensor(embedd).unsqueeze(0)
            if encoder_manager.use_cuda:
                embedding = embedding.cuda()

            class_id = encoder_manager.encoder_criterion.softmax.inference(embedding).item()
            predicted_label = map_classid_to_classname[str(class_id)]
        else:
            predicted_label = None

        if class_name is not None and predicted_label is not None:
            is_equal = int(class_name == predicted_label)
            if class_name not in class_acc_dict:
                class_acc_dict[class_name] = [is_equal]
            else:
                class_acc_dict[class_name].append(is_equal)
        else:
            raise RuntimeError("Error: class_name and/or predicted_label is None")

    acc_avg = 0
    for key, values in class_acc_dict.items():
        acc = sum(values) / len(values)
        print("Class", key, "Accuracy:", acc)
        acc_avg += acc

    print("Average Accuracy:", acc_avg / len(class_acc_dict))


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="""Compute the accuracy of the encoder.\n\n"""
        """
        Example runs:
        python TTS/bin/eval_encoder.py emotion_encoder_model.pth emotion_encoder_config.json dataset_config.json
        """,
        formatter_class=RawTextHelpFormatter,
    )
    parser.add_argument("model_path", type=str, help="Path to model checkpoint file.")
    parser.add_argument(
        "config_path",
        type=str,
        help="Path to model config file.",
    )

    parser.add_argument(
        "config_dataset_path",
        type=str,
        help="Path to dataset config file.",
    )
    parser.add_argument("--use_cuda", type=bool, help="Flag to enable CUDA.", default=True)
    parser.add_argument("--eval", type=bool, help="Also include the eval split.", default=True)

    args = parser.parse_args()

    c_dataset = load_config(args.config_dataset_path)

    meta_data_train, meta_data_eval = load_tts_samples(c_dataset.datasets, eval_split=args.eval)
    items = meta_data_train + meta_data_eval

    enc_manager = SpeakerManager(
        encoder_model_path=args.model_path, encoder_config_path=args.config_path, use_cuda=args.use_cuda
    )

    compute_encoder_accuracy(items, enc_manager)
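A caveat on the `--use_cuda` and `--eval` flags above: argparse's `type=bool` does not parse strings, because any non-empty string is truthy, so `--use_cuda False` still enables CUDA. A small demonstration of the pitfall (a `str2bool` converter like the one defined in `synthesize.py` below is the usual fix):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("--use_cuda", type=bool, default=True)

# bool("False") is True: any non-empty string is truthy.
print(parser.parse_args(["--use_cuda", "False"]).use_cuda)  # True
print(parser.parse_args(["--use_cuda", ""]).use_cuda)       # False
```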
TTS/TTS/bin/extract_tts_spectrograms.py
ADDED
@@ -0,0 +1,286 @@
#!/usr/bin/env python3
"""Extract Mel spectrograms with teacher forcing."""

import argparse
import os

import numpy as np
import torch
from torch.utils.data import DataLoader
from tqdm import tqdm

from TTS.config import load_config
from TTS.tts.datasets import TTSDataset, load_tts_samples
from TTS.tts.models import setup_model
from TTS.tts.utils.speakers import SpeakerManager
from TTS.tts.utils.text.tokenizer import TTSTokenizer
from TTS.utils.audio import AudioProcessor
from TTS.utils.generic_utils import count_parameters

use_cuda = torch.cuda.is_available()


def setup_loader(ap, r, verbose=False):
    tokenizer, _ = TTSTokenizer.init_from_config(c)
    dataset = TTSDataset(
        outputs_per_step=r,
        compute_linear_spec=False,
        samples=meta_data,
        tokenizer=tokenizer,
        ap=ap,
        batch_group_size=0,
        min_text_len=c.min_text_len,
        max_text_len=c.max_text_len,
        min_audio_len=c.min_audio_len,
        max_audio_len=c.max_audio_len,
        phoneme_cache_path=c.phoneme_cache_path,
        precompute_num_workers=0,
        use_noise_augment=False,
        verbose=verbose,
        speaker_id_mapping=speaker_manager.name_to_id if c.use_speaker_embedding else None,
        d_vector_mapping=speaker_manager.embeddings if c.use_d_vector_file else None,
    )

    if c.use_phonemes and c.compute_input_seq_cache:
        # precompute phonemes to have a better estimate of sequence lengths.
        dataset.compute_input_seq(c.num_loader_workers)
    dataset.preprocess_samples()

    loader = DataLoader(
        dataset,
        batch_size=c.batch_size,
        shuffle=False,
        collate_fn=dataset.collate_fn,
        drop_last=False,
        sampler=None,
        num_workers=c.num_loader_workers,
        pin_memory=False,
    )
    return loader


def set_filename(wav_path, out_path):
    wav_file = os.path.basename(wav_path)
    file_name = wav_file.split(".")[0]
    os.makedirs(os.path.join(out_path, "quant"), exist_ok=True)
    os.makedirs(os.path.join(out_path, "mel"), exist_ok=True)
    os.makedirs(os.path.join(out_path, "wav_gl"), exist_ok=True)
    os.makedirs(os.path.join(out_path, "wav"), exist_ok=True)
    wavq_path = os.path.join(out_path, "quant", file_name)
    mel_path = os.path.join(out_path, "mel", file_name)
    wav_gl_path = os.path.join(out_path, "wav_gl", file_name + ".wav")
    wav_path = os.path.join(out_path, "wav", file_name + ".wav")
    return file_name, wavq_path, mel_path, wav_gl_path, wav_path


def format_data(data):
    # setup input data
    text_input = data["token_id"]
    text_lengths = data["token_id_lengths"]
    mel_input = data["mel"]
    mel_lengths = data["mel_lengths"]
    item_idx = data["item_idxs"]
    d_vectors = data["d_vectors"]
    speaker_ids = data["speaker_ids"]
    attn_mask = data["attns"]
    avg_text_length = torch.mean(text_lengths.float())
    avg_spec_length = torch.mean(mel_lengths.float())

    # dispatch data to GPU
    if use_cuda:
        text_input = text_input.cuda(non_blocking=True)
        text_lengths = text_lengths.cuda(non_blocking=True)
        mel_input = mel_input.cuda(non_blocking=True)
        mel_lengths = mel_lengths.cuda(non_blocking=True)
        if speaker_ids is not None:
            speaker_ids = speaker_ids.cuda(non_blocking=True)
        if d_vectors is not None:
            d_vectors = d_vectors.cuda(non_blocking=True)
        if attn_mask is not None:
            attn_mask = attn_mask.cuda(non_blocking=True)
    return (
        text_input,
        text_lengths,
        mel_input,
        mel_lengths,
        speaker_ids,
        d_vectors,
        avg_text_length,
        avg_spec_length,
        attn_mask,
        item_idx,
    )


@torch.no_grad()
def inference(
    model_name,
    model,
    ap,
    text_input,
    text_lengths,
    mel_input,
    mel_lengths,
    speaker_ids=None,
    d_vectors=None,
):
    if model_name == "glow_tts":
        speaker_c = None
        if speaker_ids is not None:
            speaker_c = speaker_ids
        elif d_vectors is not None:
            speaker_c = d_vectors
        outputs = model.inference_with_MAS(
            text_input,
            text_lengths,
            mel_input,
            mel_lengths,
            aux_input={"d_vectors": speaker_c, "speaker_ids": speaker_ids},
        )
        model_output = outputs["model_outputs"]
        model_output = model_output.detach().cpu().numpy()

    elif "tacotron" in model_name:
        aux_input = {"speaker_ids": speaker_ids, "d_vectors": d_vectors}
        outputs = model(text_input, text_lengths, mel_input, mel_lengths, aux_input)
        postnet_outputs = outputs["model_outputs"]
        # normalize tacotron output
        if model_name == "tacotron":
            mel_specs = []
            postnet_outputs = postnet_outputs.data.cpu().numpy()
            for b in range(postnet_outputs.shape[0]):
                postnet_output = postnet_outputs[b]
                mel_specs.append(torch.FloatTensor(ap.out_linear_to_mel(postnet_output.T).T))
            model_output = torch.stack(mel_specs).cpu().numpy()

        elif model_name == "tacotron2":
            model_output = postnet_outputs.detach().cpu().numpy()
    return model_output


def extract_spectrograms(
    data_loader, model, ap, output_path, quantized_wav=False, save_audio=False, debug=False, metadata_name="metadata.txt"
):
    model.eval()
    export_metadata = []
    for _, data in tqdm(enumerate(data_loader), total=len(data_loader)):
        # format data
        (
            text_input,
            text_lengths,
            mel_input,
            mel_lengths,
            speaker_ids,
            d_vectors,
            _,
            _,
            _,
            item_idx,
        ) = format_data(data)

        model_output = inference(
            c.model.lower(),
            model,
            ap,
            text_input,
            text_lengths,
            mel_input,
            mel_lengths,
            speaker_ids,
            d_vectors,
        )

        for idx in range(text_input.shape[0]):
            wav_file_path = item_idx[idx]
            wav = ap.load_wav(wav_file_path)
            _, wavq_path, mel_path, wav_gl_path, wav_path = set_filename(wav_file_path, output_path)

            # quantize and save wav
            if quantized_wav:
                wavq = ap.quantize(wav)
                np.save(wavq_path, wavq)

            # save TTS mel
            mel = model_output[idx]
            mel_length = mel_lengths[idx]
            mel = mel[:mel_length, :].T
            np.save(mel_path, mel)

            export_metadata.append([wav_file_path, mel_path])
            if save_audio:
                ap.save_wav(wav, wav_path)

            if debug:
                print("Audio for debug saved at:", wav_gl_path)
                wav = ap.inv_melspectrogram(mel)
                ap.save_wav(wav, wav_gl_path)

    with open(os.path.join(output_path, metadata_name), "w", encoding="utf-8") as f:
        for data in export_metadata:
            f.write(f"{data[0]}|{data[1] + '.npy'}\n")


def main(args):  # pylint: disable=redefined-outer-name
    # pylint: disable=global-variable-undefined
    global meta_data, speaker_manager

    # Audio processor
    ap = AudioProcessor(**c.audio)

    # load data instances
    meta_data_train, meta_data_eval = load_tts_samples(
        c.datasets, eval_split=args.eval, eval_split_max_size=c.eval_split_max_size, eval_split_size=c.eval_split_size
    )

    # use eval and training partitions
    meta_data = meta_data_train + meta_data_eval

    # init speaker manager
    if c.use_speaker_embedding:
        speaker_manager = SpeakerManager(data_items=meta_data)
    elif c.use_d_vector_file:
        speaker_manager = SpeakerManager(d_vectors_file_path=c.d_vector_file)
    else:
        speaker_manager = None

    # setup model
    model = setup_model(c)

    # restore model
    model.load_checkpoint(c, args.checkpoint_path, eval=True)

    if use_cuda:
        model.cuda()

    num_params = count_parameters(model)
    print("\n > Model has {} parameters".format(num_params), flush=True)
    # set r
    r = 1 if c.model.lower() == "glow_tts" else model.decoder.r
    own_loader = setup_loader(ap, r, verbose=True)

    extract_spectrograms(
        own_loader,
        model,
        ap,
        args.output_path,
        quantized_wav=args.quantized,
        save_audio=args.save_audio,
        debug=args.debug,
        metadata_name="metadata.txt",
    )


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--config_path", type=str, help="Path to config file for training.", required=True)
    parser.add_argument("--checkpoint_path", type=str, help="Model file to be restored.", required=True)
    parser.add_argument("--output_path", type=str, help="Path to save mel specs", required=True)
    parser.add_argument("--debug", default=False, action="store_true", help="Save audio files for debug")
    parser.add_argument("--save_audio", default=False, action="store_true", help="Save audio files")
    parser.add_argument("--quantized", action="store_true", help="Save quantized audio files")
    parser.add_argument("--eval", type=bool, help="Also include the eval split.", default=True)
    args = parser.parse_args()

    c = load_config(args.config_path)
    c.audio.trim_silence = False
    main(args)
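A hypothetical invocation using the flags defined above (all paths are placeholders):

```
python TTS/bin/extract_tts_spectrograms.py \
    --config_path path/to/config.json \
    --checkpoint_path path/to/checkpoint.pth \
    --output_path path/to/output_dir/ \
    --save_audio --debug
```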
TTS/TTS/bin/find_unique_chars.py
ADDED
@@ -0,0 +1,45 @@
"""Find all the unique characters in a dataset"""
import argparse
from argparse import RawTextHelpFormatter

from TTS.config import load_config
from TTS.tts.datasets import load_tts_samples


def main():
    # pylint: disable=bad-option-value
    parser = argparse.ArgumentParser(
        description="""Find all the unique characters or phonemes in a dataset.\n\n"""
        """
    Example runs:

    python TTS/bin/find_unique_chars.py --config_path config.json
    """,
        formatter_class=RawTextHelpFormatter,
    )
    parser.add_argument("--config_path", type=str, help="Path to dataset config file.", required=True)
    args = parser.parse_args()

    c = load_config(args.config_path)

    # load all datasets
    train_items, eval_items = load_tts_samples(
        c.datasets, eval_split=True, eval_split_max_size=c.eval_split_max_size, eval_split_size=c.eval_split_size
    )

    items = train_items + eval_items

    texts = "".join(item["text"] for item in items)
    chars = set(texts)
    lower_chars = filter(lambda c: c.islower(), chars)
    chars_force_lower = [c.lower() for c in chars]
    chars_force_lower = set(chars_force_lower)

    print(f" > Number of unique characters: {len(chars)}")
    print(f" > Unique characters: {''.join(sorted(chars))}")
    print(f" > Unique lower characters: {''.join(sorted(lower_chars))}")
    print(f" > Unique all forced to lower characters: {''.join(sorted(chars_force_lower))}")


if __name__ == "__main__":
    main()
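The character-set computation itself is independent of the dataset loader; a minimal standalone sketch over a hand-written list of transcripts:

```python
texts = ["Hello world!", "Merhaba dünya."]  # stand-in for item["text"] values
chars = set("".join(texts))
print(f" > Number of unique characters: {len(chars)}")
print(f" > Unique characters: {''.join(sorted(chars))}")
```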
TTS/TTS/bin/find_unique_phonemes.py
ADDED
@@ -0,0 +1,74 @@
"""Find all the unique phonemes in a dataset"""
import argparse
import multiprocessing
from argparse import RawTextHelpFormatter

from tqdm.contrib.concurrent import process_map

from TTS.config import load_config
from TTS.tts.datasets import load_tts_samples
from TTS.tts.utils.text.phonemizers import Gruut


def compute_phonemes(item):
    text = item["text"]
    ph = phonemizer.phonemize(text).replace("|", "")
    return set(ph)


def main():
    # pylint: disable=W0601
    global c, phonemizer
    # pylint: disable=bad-option-value
    parser = argparse.ArgumentParser(
        description="""Find all the unique characters or phonemes in a dataset.\n\n"""
        """
    Example runs:

    python TTS/bin/find_unique_phonemes.py --config_path config.json
    """,
        formatter_class=RawTextHelpFormatter,
    )
    parser.add_argument("--config_path", type=str, help="Path to dataset config file.", required=True)
    args = parser.parse_args()

    c = load_config(args.config_path)

    # load all datasets
    train_items, eval_items = load_tts_samples(
        c.datasets, eval_split=True, eval_split_max_size=c.eval_split_max_size, eval_split_size=c.eval_split_size
    )
    items = train_items + eval_items
    print("Num items:", len(items))

    language_list = [item["language"] for item in items]
    is_lang_def = all(language_list)

    if not c.phoneme_language or not is_lang_def:
        raise ValueError("Phoneme language must be defined in the config, and every sample must have a language set.")

    if not language_list.count(language_list[0]) == len(language_list):
        raise ValueError(
            "Only one phoneme language per config file is currently supported. Please split the dataset config into one config per language and run each individually."
        )

    phonemizer = Gruut(language=language_list[0], keep_puncs=True)

    phonemes = process_map(compute_phonemes, items, max_workers=multiprocessing.cpu_count(), chunksize=15)
    phones = []
    for ph in phonemes:
        phones.extend(ph)

    phones = set(phones)
    lower_phones = filter(lambda c: c.islower(), phones)
    phones_force_lower = [c.lower() for c in phones]
    phones_force_lower = set(phones_force_lower)

    print(f" > Number of unique phonemes: {len(phones)}")
    print(f" > Unique phonemes: {''.join(sorted(phones))}")
    print(f" > Unique lower phonemes: {''.join(sorted(lower_phones))}")
    print(f" > Unique all forced to lower phonemes: {''.join(sorted(phones_force_lower))}")


if __name__ == "__main__":
    main()
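Note that `compute_phonemes` reads the module-level `phonemizer` set in `main()`, which worker processes only inherit under the fork start method (the Linux default); under spawn (Windows, and macOS since Python 3.8) the global is unset in the workers. A sketch of a portable alternative using a pool initializer (helper names are illustrative):

```python
import multiprocessing

from TTS.tts.utils.text.phonemizers import Gruut


def _init_worker(language):
    # Build one phonemizer per worker process, regardless of start method.
    global phonemizer
    phonemizer = Gruut(language=language, keep_puncs=True)


def compute_phonemes(item):
    return set(phonemizer.phonemize(item["text"]).replace("|", ""))


if __name__ == "__main__":
    items = [{"text": "hello world"}, {"text": "good morning"}]
    with multiprocessing.Pool(processes=2, initializer=_init_worker, initargs=("en-us",)) as pool:
        results = pool.map(compute_phonemes, items, chunksize=1)
    print(set().union(*results))
```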
TTS/TTS/bin/remove_silence_using_vad.py
ADDED
@@ -0,0 +1,124 @@
import argparse
import glob
import multiprocessing
import os
import pathlib

import torch
from tqdm import tqdm

from TTS.utils.vad import get_vad_model_and_utils, remove_silence

torch.set_num_threads(1)


def adjust_path_and_remove_silence(audio_path):
    output_path = audio_path.replace(os.path.join(args.input_dir, ""), os.path.join(args.output_dir, ""))
    # ignore if the file exists
    if os.path.exists(output_path) and not args.force:
        return output_path, False

    # create all directory structure
    pathlib.Path(output_path).parent.mkdir(parents=True, exist_ok=True)
    # remove the silence and save the audio
    output_path, is_speech = remove_silence(
        model_and_utils,
        audio_path,
        output_path,
        trim_just_beginning_and_end=args.trim_just_beginning_and_end,
        use_cuda=args.use_cuda,
    )
    return output_path, is_speech


def preprocess_audios():
    files = sorted(glob.glob(os.path.join(args.input_dir, args.glob), recursive=True))
    print("> Number of files: ", len(files))
    if not args.force:
        print("> Ignoring files that already exist in the output directory.")

    if args.trim_just_beginning_and_end:
        print("> Trimming only the nonspeech parts at the beginning and at the end.")
    else:
        print("> Trimming all nonspeech parts.")

    filtered_files = []
    if files:
        # create threads
        # num_threads = multiprocessing.cpu_count()
        # process_map(adjust_path_and_remove_silence, files, max_workers=num_threads, chunksize=15)

        if args.num_processes > 1:
            with multiprocessing.Pool(processes=args.num_processes) as pool:
                results = list(
                    tqdm(
                        pool.imap_unordered(adjust_path_and_remove_silence, files),
                        total=len(files),
                        desc="Processing audio files",
                    )
                )
            for output_path, is_speech in results:
                if not is_speech:
                    filtered_files.append(output_path)
        else:
            for f in tqdm(files):
                output_path, is_speech = adjust_path_and_remove_silence(f)
                if not is_speech:
                    filtered_files.append(output_path)

        # write files that do not have speech
        with open(os.path.join(args.output_dir, "filtered_files.txt"), "w", encoding="utf-8") as f:
            for file in filtered_files:
                f.write(str(file) + "\n")
    else:
        print("> No files found!")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="python TTS/bin/remove_silence_using_vad.py -i=VCTK-Corpus/ -o=VCTK-Corpus-removed-silence/ -g=wav48_silence_trimmed/*/*_mic1.flac --trim_just_beginning_and_end True"
    )
    parser.add_argument("-i", "--input_dir", type=str, help="Dataset root dir", required=True)
    parser.add_argument("-o", "--output_dir", type=str, help="Output Dataset dir", default="")
    parser.add_argument("-f", "--force", default=False, action="store_true", help="Force overwriting existing files")
    parser.add_argument(
        "-g",
        "--glob",
        type=str,
        default="**/*.wav",
        help="Glob pattern, relative to input_dir, to select the wavs, e.g. wav48/*/*.wav",
    )
    parser.add_argument(
        "-t",
        "--trim_just_beginning_and_end",
        type=bool,
        default=True,
        help="If True, trim only the nonspeech parts at the beginning and at the end; if False, trim all nonspeech parts. Default: True.",
    )
    parser.add_argument(
        "-c",
        "--use_cuda",
        type=bool,
        default=False,
        help="If True, use CUDA",
    )
    parser.add_argument(
        "--use_onnx",
        type=bool,
        default=False,
        help="If True, use the ONNX version of the VAD model",
    )
    parser.add_argument(
        "--num_processes",
        type=int,
        default=1,
        help="Number of processes to use",
    )
    args = parser.parse_args()

    if args.output_dir == "":
        args.output_dir = args.input_dir

    # load the model and utils
    model_and_utils = get_vad_model_and_utils(use_cuda=args.use_cuda, use_onnx=args.use_onnx)
    preprocess_audios()
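An invocation mirroring the example in the parser description above, parallelized over four processes (the paths follow the VCTK layout used there; the process count is illustrative):

```
python TTS/bin/remove_silence_using_vad.py \
    -i VCTK-Corpus/ -o VCTK-Corpus-removed-silence/ \
    -g "wav48_silence_trimmed/*/*_mic1.flac" \
    --num_processes 4
```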
TTS/TTS/bin/resample.py
ADDED
@@ -0,0 +1,90 @@
import argparse
import glob
import os
from argparse import RawTextHelpFormatter
from multiprocessing import Pool
from shutil import copytree

import librosa
import soundfile as sf
from tqdm import tqdm


def resample_file(func_args):
    filename, output_sr = func_args
    y, sr = librosa.load(filename, sr=output_sr)
    sf.write(filename, y, sr)


def resample_files(input_dir, output_sr, output_dir=None, file_ext="wav", n_jobs=10):
    if output_dir:
        print("Recursively copying the input folder...")
        copytree(input_dir, output_dir)
        input_dir = output_dir

    print("Resampling the audio files...")
    audio_files = glob.glob(os.path.join(input_dir, f"**/*.{file_ext}"), recursive=True)
    print(f"Found {len(audio_files)} files...")
    audio_files = list(zip(audio_files, len(audio_files) * [output_sr]))
    with Pool(processes=n_jobs) as p:
        with tqdm(total=len(audio_files)) as pbar:
            for _, _ in enumerate(p.imap_unordered(resample_file, audio_files)):
                pbar.update()

    print("Done!")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(
        description="""Resample a folder recursively with librosa.
                       Can be used in place or create a copy of the folder as an output.\n\n
                       Example run:
                            python TTS/bin/resample.py
                                --input_dir /root/LJSpeech-1.1/
                                --output_sr 22050
                                --output_dir /root/resampled_LJSpeech-1.1/
                                --file_ext wav
                                --n_jobs 24
                    """,
        formatter_class=RawTextHelpFormatter,
    )

    parser.add_argument(
        "--input_dir",
        type=str,
        default=None,
        required=True,
        help="Path of the folder containing the audio files to resample",
    )

    parser.add_argument(
        "--output_sr",
        type=int,
        default=22050,
        required=False,
        help="Sample rate to which the audio files should be resampled",
    )

    parser.add_argument(
        "--output_dir",
        type=str,
        default=None,
        required=False,
        help="Path of the destination folder. If not defined, the operation is done in place",
    )

    parser.add_argument(
        "--file_ext",
        type=str,
        default="wav",
        required=False,
        help="Extension of the audio files to resample",
    )

    parser.add_argument(
        "--n_jobs", type=int, default=None, help="Number of threads to use, by default it uses all cores"
    )

    args = parser.parse_args()

    resample_files(args.input_dir, args.output_sr, args.output_dir, args.file_ext, args.n_jobs)
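Keep in mind that when `--output_dir` is omitted, `resample_files` rewrites the audio in place. An in-place run over FLAC files might look like this (the paths are placeholders):

```
python TTS/bin/resample.py --input_dir /data/my_corpus/ --output_sr 16000 --file_ext flac --n_jobs 8
```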
TTS/TTS/bin/synthesize.py
ADDED
@@ -0,0 +1,502 @@
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import argparse
import sys
from argparse import RawTextHelpFormatter

# pylint: disable=redefined-outer-name, unused-argument
from pathlib import Path

description = """
Synthesize speech on command line.

You can either use your trained model or choose a model from the provided list.

If you don't specify any models, then it uses LJSpeech based English model.

#### Single Speaker Models

- List provided models:

    ```
    $ tts --list_models
    ```

- Get model info (for both tts_models and vocoder_models):

    - Query by type/name:
    The model_info_by_name uses the name as it appears in the output of --list_models.
    ```
    $ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
    ```
    For example:
    ```
    $ tts --model_info_by_name tts_models/tr/common-voice/glow-tts
    $ tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2
    ```
    - Query by type/idx:
    The model_query_idx uses the corresponding idx from --list_models.

    ```
    $ tts --model_info_by_idx "<model_type>/<model_query_idx>"
    ```

    For example:

    ```
    $ tts --model_info_by_idx tts_models/3
    ```

- Query info for model info by full name:
    ```
    $ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
    ```

- Run TTS with default models:

    ```
    $ tts --text "Text for TTS" --out_path output/path/speech.wav
    ```

- Run a TTS model with its default vocoder model:

    ```
    $ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav
    ```

    For example:

    ```
    $ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --out_path output/path/speech.wav
    ```

- Run with specific TTS and vocoder models from the list:

    ```
    $ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --vocoder_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav
    ```

    For example:

    ```
    $ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --vocoder_name "vocoder_models/en/ljspeech/univnet" --out_path output/path/speech.wav
    ```

- Run your own TTS model (Using Griffin-Lim Vocoder):

    ```
    $ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav
    ```

- Run your own TTS and Vocoder models:

    ```
    $ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav
        --vocoder_path path/to/vocoder.pth --vocoder_config_path path/to/vocoder_config.json
    ```

#### Multi-speaker Models

- List the available speakers and choose a <speaker_id> among them:

    ```
    $ tts --model_name "<language>/<dataset>/<model_name>" --list_speaker_idxs
    ```

- Run the multi-speaker TTS model with the target speaker ID:

    ```
    $ tts --text "Text for TTS." --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" --speaker_idx <speaker_id>
    ```

- Run your own multi-speaker TTS model:

    ```
    $ tts --text "Text for TTS" --out_path output/path/speech.wav --model_path path/to/model.pth --config_path path/to/config.json --speakers_file_path path/to/speaker.json --speaker_idx <speaker_id>
    ```

### Voice Conversion Models

    ```
    $ tts --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" --source_wav <path/to/speaker/wav> --target_wav <path/to/reference/wav>
    ```
"""


def str2bool(v):
    if isinstance(v, bool):
        return v
    if v.lower() in ("yes", "true", "t", "y", "1"):
        return True
    if v.lower() in ("no", "false", "f", "n", "0"):
        return False
    raise argparse.ArgumentTypeError("Boolean value expected.")


def main():
    parser = argparse.ArgumentParser(
        description=description.replace("    ```\n", ""),
        formatter_class=RawTextHelpFormatter,
    )

    parser.add_argument(
        "--list_models",
        type=str2bool,
        nargs="?",
        const=True,
        default=False,
        help="list available pre-trained TTS and vocoder models.",
    )

    parser.add_argument(
        "--model_info_by_idx",
        type=str,
        default=None,
        help="model info using query format: <model_type>/<model_query_idx>",
    )

    parser.add_argument(
        "--model_info_by_name",
        type=str,
        default=None,
        help="model info using query format: <model_type>/<language>/<dataset>/<model_name>",
    )

    parser.add_argument("--text", type=str, default=None, help="Text to generate speech.")

    # Args for running pre-trained TTS models.
    parser.add_argument(
        "--model_name",
        type=str,
        default="tts_models/en/ljspeech/tacotron2-DDC",
        help="Name of one of the pre-trained TTS models in format <language>/<dataset>/<model_name>",
    )
    parser.add_argument(
        "--vocoder_name",
        type=str,
        default=None,
        help="Name of one of the pre-trained vocoder models in format <language>/<dataset>/<model_name>",
    )

    # Args for running custom models
    parser.add_argument("--config_path", default=None, type=str, help="Path to model config file.")
    parser.add_argument(
        "--model_path",
        type=str,
        default=None,
        help="Path to model file.",
    )
    parser.add_argument(
        "--out_path",
        type=str,
        default="tts_output.wav",
        help="Output wav file path.",
    )
    parser.add_argument("--use_cuda", type=bool, help="Run model on CUDA.", default=False)
    parser.add_argument("--device", type=str, help="Device to run model on.", default="cpu")
    parser.add_argument(
        "--vocoder_path",
        type=str,
        help="Path to vocoder model file. If it is not defined, model uses GL as vocoder. Please make sure that you installed the vocoder library (e.g. WaveRNN) beforehand.",
        default=None,
    )
    parser.add_argument("--vocoder_config_path", type=str, help="Path to vocoder model config file.", default=None)
    parser.add_argument(
        "--encoder_path",
        type=str,
        help="Path to speaker encoder model file.",
        default=None,
    )
    parser.add_argument("--encoder_config_path", type=str, help="Path to speaker encoder config file.", default=None)

    # args for coqui studio
    parser.add_argument(
        "--cs_model",
        type=str,
        help="Name of the 🐸Coqui Studio model. Available models are `XTTS`, `XTTS-multilingual`, `V1`.",
    )
    parser.add_argument(
        "--emotion",
        type=str,
        help="Emotion to condition the model with. Only available for 🐸Coqui Studio `V1` model.",
        default=None,
    )
    parser.add_argument(
        "--language",
        type=str,
        help="Language to condition the model with. Only available for 🐸Coqui Studio `XTTS-multilingual` model.",
        default=None,
    )

    # args for multi-speaker synthesis
    parser.add_argument("--speakers_file_path", type=str, help="JSON file for multi-speaker model.", default=None)
    parser.add_argument("--language_ids_file_path", type=str, help="JSON file for multi-lingual model.", default=None)
    parser.add_argument(
        "--speaker_idx",
        type=str,
        help="Target speaker ID for a multi-speaker TTS model.",
        default=None,
    )
    parser.add_argument(
        "--language_idx",
        type=str,
        help="Target language ID for a multi-lingual TTS model.",
        default=None,
    )
    parser.add_argument(
        "--speaker_wav",
        nargs="+",
        help="wav file(s) to condition a multi-speaker TTS model with a Speaker Encoder. You can give multiple file paths. The d_vector is computed as their average.",
        default=None,
    )
    parser.add_argument("--gst_style", help="Wav path file for GST style reference.", default=None)
    parser.add_argument(
        "--capacitron_style_wav", type=str, help="Wav path file for Capacitron prosody reference.", default=None
    )
    parser.add_argument("--capacitron_style_text", type=str, help="Transcription of the reference.", default=None)
    parser.add_argument(
        "--list_speaker_idxs",
        help="List available speaker ids for the defined multi-speaker model.",
        type=str2bool,
        nargs="?",
        const=True,
        default=False,
    )
    parser.add_argument(
        "--list_language_idxs",
        help="List available language ids for the defined multi-lingual model.",
        type=str2bool,
        nargs="?",
        const=True,
        default=False,
    )
    # aux args
    parser.add_argument(
        "--save_spectogram",
        type=bool,
        help="If true save raw spectrogram for further (vocoder) processing in out_path.",
        default=False,
    )
    parser.add_argument(
        "--reference_wav",
        type=str,
        help="Reference wav file to convert in the voice of the speaker_idx or speaker_wav",
        default=None,
    )
    parser.add_argument(
        "--reference_speaker_idx",
        type=str,
        help="speaker ID of the reference_wav speaker (If not provided the embedding will be computed using the Speaker Encoder).",
        default=None,
    )
    parser.add_argument(
        "--progress_bar",
        type=str2bool,
        help="If true shows a progress bar for the model download. Defaults to True",
        default=True,
    )

    # voice conversion args
    parser.add_argument(
        "--source_wav",
        type=str,
        default=None,
        help="Original audio file to convert in the voice of the target_wav",
    )
    parser.add_argument(
        "--target_wav",
        type=str,
        default=None,
        help="Target audio file to convert in the voice of the source_wav",
    )

    parser.add_argument(
        "--voice_dir",
        type=str,
        default=None,
        help="Voice dir for tortoise model",
    )

    args = parser.parse_args()

    # print the help message if no mode-selecting argument is set
    check_args = [
        args.text,
        args.list_models,
        args.list_speaker_idxs,
        args.list_language_idxs,
        args.reference_wav,
        args.model_info_by_idx,
        args.model_info_by_name,
        args.source_wav,
        args.target_wav,
    ]
    if not any(check_args):
        parser.parse_args(["-h"])

    # Late-import to make things load faster
    from TTS.api import TTS
    from TTS.utils.manage import ModelManager
    from TTS.utils.synthesizer import Synthesizer

    # load model manager
    path = Path(__file__).parent / "../.models.json"
    manager = ModelManager(path, progress_bar=args.progress_bar)
    api = TTS()

    tts_path = None
    tts_config_path = None
    speakers_file_path = None
    language_ids_file_path = None
    vocoder_path = None
    vocoder_config_path = None
    encoder_path = None
    encoder_config_path = None
    vc_path = None
    vc_config_path = None
    model_dir = None

    # CASE1 #list : list pre-trained TTS models
    if args.list_models:
        manager.add_cs_api_models(api.list_models())
        manager.list_models()
        sys.exit()

    # CASE2 #info : model info for pre-trained TTS models
    if args.model_info_by_idx:
        model_query = args.model_info_by_idx
        manager.model_info_by_idx(model_query)
        sys.exit()

    if args.model_info_by_name:
        model_query_full_name = args.model_info_by_name
        manager.model_info_by_full_name(model_query_full_name)
        sys.exit()

    # CASE3: TTS with coqui studio models
    if "coqui_studio" in args.model_name:
        print(" > Using 🐸Coqui Studio model: ", args.model_name)
        api = TTS(model_name=args.model_name, cs_api_model=args.cs_model)
        api.tts_to_file(text=args.text, emotion=args.emotion, file_path=args.out_path, language=args.language)
        print(" > Saving output to ", args.out_path)
        return

    # CASE4: load pre-trained model paths
    if args.model_name is not None and not args.model_path:
        model_path, config_path, model_item = manager.download_model(args.model_name)
        # tts model
        if model_item["model_type"] == "tts_models":
            tts_path = model_path
            tts_config_path = config_path
            if "default_vocoder" in model_item:
                args.vocoder_name = model_item["default_vocoder"] if args.vocoder_name is None else args.vocoder_name

        # voice conversion model
        if model_item["model_type"] == "voice_conversion_models":
            vc_path = model_path
            vc_config_path = config_path

        # tts model with multiple files to be loaded from the directory path
        if model_item.get("author", None) == "fairseq" or isinstance(model_item["model_url"], list):
            model_dir = model_path
            tts_path = None
            tts_config_path = None
            args.vocoder_name = None

    # load vocoder
    if args.vocoder_name is not None and not args.vocoder_path:
        vocoder_path, vocoder_config_path, _ = manager.download_model(args.vocoder_name)

    # CASE5: set custom model paths
    if args.model_path is not None:
        tts_path = args.model_path
        tts_config_path = args.config_path
        speakers_file_path = args.speakers_file_path
        language_ids_file_path = args.language_ids_file_path

    if args.vocoder_path is not None:
        vocoder_path = args.vocoder_path
        vocoder_config_path = args.vocoder_config_path

    if args.encoder_path is not None:
        encoder_path = args.encoder_path
        encoder_config_path = args.encoder_config_path

    device = args.device
    if args.use_cuda:
        device = "cuda"

    # load models
    synthesizer = Synthesizer(
        tts_path,
        tts_config_path,
        speakers_file_path,
        language_ids_file_path,
        vocoder_path,
        vocoder_config_path,
        encoder_path,
        encoder_config_path,
        vc_path,
        vc_config_path,
        model_dir,
        args.voice_dir,
    ).to(device)

    # query speaker ids of a multi-speaker model.
    if args.list_speaker_idxs:
        print(
            " > Available speaker ids: (Set --speaker_idx flag to one of these values to use the multi-speaker model.)"
        )
        print(synthesizer.tts_model.speaker_manager.name_to_id)
        return

    # query language ids of a multi-lingual model.
    if args.list_language_idxs:
        print(
            " > Available language ids: (Set --language_idx flag to one of these values to use the multi-lingual model.)"
        )
        print(synthesizer.tts_model.language_manager.name_to_id)
        return

    # check the arguments against a multi-speaker model.
    if synthesizer.tts_speakers_file and (not args.speaker_idx and not args.speaker_wav):
        print(
            " [!] Looks like you use a multi-speaker model. Define `--speaker_idx` to "
            "select the target speaker. You can list the available speakers for this model by `--list_speaker_idxs`."
        )
        return

    # RUN THE SYNTHESIS
    if args.text:
        print(" > Text: {}".format(args.text))

    # kick it
    if tts_path is not None:
        wav = synthesizer.tts(
            args.text,
            speaker_name=args.speaker_idx,
            language_name=args.language_idx,
            speaker_wav=args.speaker_wav,
            reference_wav=args.reference_wav,
            style_wav=args.capacitron_style_wav,
            style_text=args.capacitron_style_text,
            reference_speaker_name=args.reference_speaker_idx,
        )
    elif vc_path is not None:
        wav = synthesizer.voice_conversion(
            source_wav=args.source_wav,
            target_wav=args.target_wav,
        )
    elif model_dir is not None:
        wav = synthesizer.tts(
            args.text, speaker_name=args.speaker_idx, language_name=args.language_idx, speaker_wav=args.speaker_wav
        )

    # save the results
    print(" > Saving output to {}".format(args.out_path))
    synthesizer.save_wav(wav, args.out_path)


if __name__ == "__main__":
    main()