arxiv:2402.08093

BASE TTS: Lessons from building a billion-parameter Text-to-Speech model on 100K hours of data

Published on Feb 12
· Featured in Daily Papers on Feb 14

Abstract

We introduce a text-to-speech (TTS) model called BASE TTS, which stands for Big Adaptive Streamable TTS with Emergent abilities. BASE TTS is the largest TTS model to date, trained on 100K hours of public domain speech data, achieving a new state-of-the-art in speech naturalness. It deploys a 1-billion-parameter autoregressive Transformer that converts raw text into discrete codes ("speechcodes"), followed by a convolution-based decoder that converts these speechcodes into waveforms in an incremental, streamable manner. Further, our speechcodes are built using a novel speech tokenization technique that features speaker ID disentanglement and compression with byte-pair encoding. Echoing the widely reported "emergent abilities" of large language models trained on increasing volumes of data, we show that BASE TTS variants built with 10K+ hours and 500M+ parameters begin to demonstrate natural prosody on textually complex sentences. We design and share a specialized dataset to measure these emergent abilities for text-to-speech. We showcase the state-of-the-art naturalness of BASE TTS by evaluating it against baselines that include publicly available large-scale text-to-speech systems: YourTTS, Bark and TortoiseTTS. Audio samples generated by the model can be heard at https://amazon-ltts-paper.com/.
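The abstract's two-stage design (an autoregressive Transformer that emits discrete speechcodes, then a convolutional decoder that renders audio chunk by chunk) can be pictured with a minimal PyTorch sketch. All class names, layer sizes, and the greedy decoding loop below are illustrative assumptions, not the paper's actual implementation:

```python
# Minimal sketch of the two-stage pipeline described in the abstract.
# All names (SpeechcodeLM, StreamingDecoder) and hyperparameters are
# illustrative assumptions, NOT the paper's actual implementation.
import torch
import torch.nn as nn

class SpeechcodeLM(nn.Module):
    """Stage 1 (sketch): autoregressive Transformer mapping text tokens
    to discrete 'speechcodes'."""
    def __init__(self, text_vocab=256, code_vocab=1024, d_model=512):
        super().__init__()
        self.text_emb = nn.Embedding(text_vocab, d_model)
        self.code_emb = nn.Embedding(code_vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, code_vocab)

    @torch.no_grad()
    def generate(self, text_ids, max_codes=64):
        # Greedy autoregressive decoding over the speechcode vocabulary;
        # code 0 acts as a start token, causal masking omitted for brevity.
        codes = torch.zeros(text_ids.size(0), 1, dtype=torch.long)
        for _ in range(max_codes):
            x = torch.cat([self.text_emb(text_ids), self.code_emb(codes)], dim=1)
            h = self.backbone(x)
            next_code = self.head(h[:, -1]).argmax(-1, keepdim=True)
            codes = torch.cat([codes, next_code], dim=1)
        return codes[:, 1:]

class StreamingDecoder(nn.Module):
    """Stage 2 (sketch): convolutional decoder turning speechcode chunks
    into waveform samples incrementally."""
    def __init__(self, code_vocab=1024, d_model=512, upsample=256):
        super().__init__()
        self.emb = nn.Embedding(code_vocab, d_model)
        self.net = nn.Sequential(
            nn.Conv1d(d_model, d_model, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(d_model, 1, kernel_size=upsample, stride=upsample),
        )

    def stream(self, codes, chunk=16):
        # Decode fixed-size chunks so audio can be emitted before the
        # full speechcode sequence has been generated.
        for i in range(0, codes.size(1), chunk):
            x = self.emb(codes[:, i:i + chunk]).transpose(1, 2)
            yield self.net(x).squeeze(1)

if __name__ == "__main__":
    lm, dec = SpeechcodeLM(), StreamingDecoder()
    text = torch.randint(0, 256, (1, 12))            # toy "text" token ids
    codes = lm.generate(text)                        # text -> speechcodes
    audio = torch.cat(list(dec.stream(codes)), -1)   # streamed waveform
    print(audio.shape)                               # (1, 64 * 256) samples
```

Decoding speechcodes in fixed-size chunks is what makes the second stage streamable in this sketch: audio can start playing before the Transformer has finished generating the full code sequence.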

Community

This is incredible! It's really great with emotions. We need an open-source implementation!

This is a really cool idea for an implementation; it's really awesome, almost like a hash.

"However, due to the potential misuse of this capability, we have decided against open-sourcing this model as a precautionary measure." I'm really tired of seeing this excuse in speech generation models.

"However, due to the potential misuse of this capability, we have decided against open-sourcing this model as a precautionary measure." I'm really tired of seeing this excuse in speech generation models.

It's virtue signalling, and can be translated as "F you, this belongs to me", but of course they are saints, and saints can't say such things.

This is an automated message from the Librarian Bot. I found the following papers similar to this paper.

The following papers were recommended by the Semantic Scholar API.

Please give a thumbs up to this comment if you found it helpful!

If you want recommendations for any paper on Hugging Face, check out this Space.

You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend

Is there any word on when this will make its way to AWS?

Amazon cannot get enough of everybody's money... They never help the community or release anything of value.

Models citing this paper 0

No model linking this paper

Cite arxiv.org/abs/2402.08093 in a model README.md to link it from this page.

Datasets citing this paper 0

No dataset linking this paper

Cite arxiv.org/abs/2402.08093 in a dataset README.md to link it from this page.

Spaces citing this paper 0

No Space linking this paper

Cite arxiv.org/abs/2402.08093 in a Space README.md to link it from this page.

Collections including this paper 12