Tortoise TTS Model Installation

#1
by rSaita - opened

Hi and thank you for sharing this!
As a newbie, I was wondering if installing your models into Tortoise TTS is possible. If so, can you explain to me how it can be done?

This is a YourTTS model, so it'll work with Coqui TTS; the other model I've posted is for Tortoise. I haven't used the YourTTS model with anything besides Coqui.
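A minimal sketch of running a YourTTS checkpoint through Coqui TTS's Python API (`pip install TTS`). The file names below are placeholders for whatever model/config you downloaded, and `reference.wav` stands in for a short clip of the target voice; YourTTS is multi-speaker, so it needs a reference clip:

```python
# Hedged sketch: file names are hypothetical placeholders, not the repo's actual names.
import importlib.util

def synthesize(text: str, out_path: str = "out.wav") -> None:
    from TTS.api import TTS  # Coqui's Python API
    tts = TTS(model_path="model_file.pth", config_path="config.json")
    tts.tts_to_file(
        text=text,
        speaker_wav="reference.wav",  # YourTTS is multi-speaker: needs a voice reference
        language="en",
        file_path=out_path,
    )

# Only actually run if Coqui TTS is installed.
coqui_installed = importlib.util.find_spec("TTS") is not None
if coqui_installed:
    synthesize("Hello from YourTTS.")
```

The equivalent `tts` command-line tool takes the same `--model_path`/`--config_path` arguments if you'd rather not write Python.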

The Tortoise model works with the various Tortoise forks, and possibly with the new Coqui implementation of Tortoise inference (Coqui doesn't support Tortoise training; Tortoise training is a mess).

With the Tortoise WebUI: drop the Tortoise model in "finetunes" and put the tokenizer file in "tokenizers" (I think. You'll find another file called ipa.json or something in that directory; put the new tokenizer alongside it). Switch the model and tokenizer in the settings and reload. Generally I like to Ctrl+C to shut down and re-run start.sh; it loads faster from a clean shutdown.
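The file placement above can be sketched like this. `tortoise-webui` is a stand-in for wherever your WebUI fork is checked out, and the file names are hypothetical; "finetunes" and "tokenizers" are the directory names described in the post:

```python
# Hedged sketch: the install root and source file names are placeholders.
from pathlib import Path
import shutil

WEBUI = Path("./tortoise-webui")   # hypothetical install root; adjust to your fork
finetunes = WEBUI / "finetunes"    # where the WebUI looks for finetuned models
tokenizers = WEBUI / "tokenizers"  # same directory that holds the default ipa.json
for d in (finetunes, tokenizers):
    d.mkdir(parents=True, exist_ok=True)

# Copy the downloaded files into place, if you actually have them locally.
for src, dest in [("my_finetune.pth", finetunes), ("my_tokenizer.json", tokenizers)]:
    if Path(src).exists():
        shutil.copy2(src, dest)

print(sorted(p.name for p in WEBUI.iterdir()))
```

After that, select the new model and tokenizer in the WebUI settings and reload (or restart via start.sh, as described above).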

For Coqui, point model_path at the model, and specify the new tokenizer somehow. I haven't used Coqui's Tortoise inference yet, so I don't know the specifics.
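For what it's worth, Coqui's stock Tortoise inference looks roughly like the sketch below, using Coqui's published tortoise-v2 model; how to point it at a custom checkpoint and tokenizer is exactly the open question above, so treat this only as a starting point. The `voices/` directory and `myvoice` speaker name are hypothetical:

```python
# Hedged sketch of Coqui's Tortoise inference; untested, as noted in the post.
import importlib.util

def tortoise_demo(out_path: str = "out.wav") -> None:
    from TTS.api import TTS
    # Coqui's packaged Tortoise model; swapping in a custom finetune would
    # presumably go through model_path/config_path, but that's unverified.
    tts = TTS("tts_models/en/multi-dataset/tortoise-v2")
    tts.tts_to_file(
        text="Testing Tortoise through Coqui.",
        file_path=out_path,
        voice_dir="voices/",   # directory of reference clips, one subdir per voice
        speaker="myvoice",     # subdirectory name under voice_dir
    )

# Define only; run manually once Coqui TTS is installed and voices/ exists.
coqui_present = importlib.util.find_spec("TTS") is not None
```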

These may be the last new public models I post. They're really expensive and time-consuming to make (this YourTTS model was ~60 full days of training), and the YouTube channel makes about thirty cents a day, so it doesn't even come close to covering the electricity. If I kill this GPU with the heat I won't be able to replace it :(

If I can find any of the other open-source dataset models on my Google Drive, I'll upload them to Huggingface. They're not fully trained, but some may be a good base for further training.

This one is reasonably well trained, so you should get some decent output out of it.
