Llasa: Scaling Train-Time and Inference-Time Compute for Llama-based Speech Synthesis
Abstract
Recent advances in text-based large language models (LLMs), particularly the GPT series and the o1 model, have demonstrated the effectiveness of scaling both train-time and inference-time compute. However, current state-of-the-art TTS systems that leverage LLMs are often multi-stage, requiring separate models (e.g., a diffusion model after the LLM), which complicates the decision of whether to scale a particular model during training or testing. This work makes the following contributions. First, we explore the scaling of train-time and inference-time compute for speech synthesis. Second, we propose Llasa, a simple framework for speech synthesis that employs a single-layer vector quantizer (VQ) codec and a single Transformer architecture, fully aligned with standard LLMs such as Llama. Our experiments reveal that scaling train-time compute for Llasa consistently improves the naturalness of synthesized speech and enables the generation of more complex and accurate prosody patterns. From the perspective of scaling inference-time compute, we employ speech understanding models as verifiers during search and find that scaling inference-time compute shifts the sampling modes toward the preferences of specific verifiers, thereby improving emotional expressiveness, timbre consistency, and content accuracy. In addition, we publicly release the checkpoints and training code for our TTS models (1B, 3B, 8B) and codec model.
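The verifier-guided inference-time scaling described above can be illustrated with a minimal best-of-N sketch. All names here (`synthesize_candidates`, `verifier_score`, the candidate dictionaries) are hypothetical stand-ins, not the paper's actual API: a real system would decode speech codec tokens with the TTS model and score them with a speech understanding model.

```python
import random

def synthesize_candidates(text, n, seed=0):
    # Hypothetical stand-in for the TTS sampler: draws n candidate
    # utterances for the same text. A real system would sample codec
    # token sequences from the LLM and decode them to waveforms.
    rng = random.Random(seed)
    return [{"text": text, "audio_id": i, "quality": rng.random()}
            for i in range(n)]

def verifier_score(candidate):
    # Hypothetical verifier: a real one would be a speech understanding
    # model scoring content accuracy, emotion, or speaker similarity.
    return candidate["quality"]

def best_of_n(text, n=16):
    # Scale inference-time compute by sampling n candidates and keeping
    # the one the verifier prefers (best-of-N / rejection sampling).
    candidates = synthesize_candidates(text, n)
    return max(candidates, key=verifier_score)

best = best_of_n("Hello world", n=8)
```

Increasing `n` spends more inference-time compute and, as the abstract notes, biases the selected outputs toward whatever the chosen verifier rewards.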
Community
The following papers were recommended by the Semantic Scholar API
- CosyVoice 2: Scalable Streaming Speech Synthesis with Large Language Models (2024)
- Autoregressive Speech Synthesis with Next-Distribution Prediction (2024)
- MARS6: A Small and Robust Hierarchical-Codec Text-to-Speech Model (2025)
- Continuous Autoregressive Modeling with Stochastic Monotonic Alignment for Speech Synthesis (2025)
- DrawSpeech: Expressive Speech Synthesis Using Prosodic Sketches as Control Conditions (2025)
- Transducer-Llama: Integrating LLMs into Streamable Transducer-based Speech Recognition (2024)
- Long-Form Speech Generation with Spoken Language Models (2024)