size_categories:
- n<1K
---
Cloned the GitHub repo for easier viewing and to embed the table above: https://github.com/Vaibhavs10/open-tts-tracker
Legend for the above TTS capability table:
* Processor - CPU (1/♾)/CUDA/ROCm (single/multi hardware used for inference; real-time factor should be below 2.0 to qualify for CPU, though some leeway can be given if the model supports audio streaming)
* Phonetic alphabet - None/[IPA](https://en.wikipedia.org/wiki/International_Phonetic_Alphabet)/[ARPAbet](https://en.wikipedia.org/wiki/ARPABET) (phonetic transcription that allows controlling the pronunciation of certain words during inference)
* Insta-clone - Yes/No (zero-shot model for quick voice cloning)
* Emotional control - Yes🎭/Strict (Strict, as in no ability to go in between states; insta-clone switch 🎭👥)
* Prompting - Yes/No (a side effect of narrator-based datasets and a way to affect the emotional state, [ElevenLabs docs](https://elevenlabs.io/docs/speech-synthesis/prompting#emotion))
* Streaming support - Yes/No (whether audio can be played back while it is still being generated)
* Speech control - speed/pitch/ (Ability to change the pitch, duration, energy and/or emotion of generated speech)
* Speech-To-Speech support - Yes/No (Streaming support implies real-time S2S; S2T=>T2S does not count)
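The real-time-factor criterion in the Processor bullet can be made concrete. A minimal sketch (the helper name is my own, not from the tracker):

```python
# Real-time factor (RTF): wall-clock synthesis time divided by the duration
# of the audio produced. RTF < 1.0 means faster than real time; the legend
# above uses RTF < 2.0 as the bar for a usable CPU backend.
def real_time_factor(synthesis_seconds: float, audio_seconds: float) -> float:
    if audio_seconds <= 0:
        raise ValueError("audio duration must be positive")
    return synthesis_seconds / audio_seconds

# Example: 6 s to synthesize a 4 s clip -> RTF 1.5, within the CPU bar.
print(real_time_factor(6.0, 4.0))  # 1.5
```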
| Name | GitHub | Weights | License | Fine-tune | Languages | Paper | Demo | Issues |
|---|---|---|---|---|---|---|---|---|
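Since the tracker is a plain Markdown table, it can also be read programmatically. A minimal sketch (the parser and the sample row are hypothetical; only the header row comes from this README):

```python
# Parse a pipe-delimited Markdown table into a list of dicts keyed by header.
def parse_md_table(md: str) -> list[dict]:
    lines = [l.strip() for l in md.strip().splitlines() if l.strip()]
    headers = [c.strip() for c in lines[0].strip("|").split("|")]
    rows = []
    for line in lines[2:]:  # skip the |---| separator line
        cells = [c.strip() for c in line.strip("|").split("|")]
        rows.append(dict(zip(headers, cells)))
    return rows

# Header matches the table in this README; the data row is made up.
table = """
| Name | GitHub | Weights | License | Fine-tune | Languages | Paper | Demo | Issues |
|---|---|---|---|---|---|---|---|---|
| ExampleTTS | example/repo | yes | MIT | yes | en | - | - | - |
"""
rows = parse_md_table(table)
print(rows[0]["Name"])  # ExampleTTS
```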