---
library_name: transformers
tags:
- text-to-speech
- annotation
license: apache-2.0
language:
- en
- fr
- es
- pt
- pl
- de
- nl
- it
pipeline_tag: text-to-speech
inference: false
datasets:
- facebook/multilingual_librispeech
- parler-tts/libritts_r_filtered
- parler-tts/libritts-r-filtered-speaker-descriptions
- parler-tts/mls_eng
- parler-tts/mls-eng-speaker-descriptions
- PHBJT/mls-annotated
- PHBJT/cml-tts-filtered-annotated
- PHBJT/cml-tts-filtered
---

# Parler-TTS Mini Multilingual v1.1

**Parler-TTS Mini Multilingual v1.1** is a multilingual extension of [Parler-TTS Mini](https://huggingface.co/parler-tts/parler-tts-mini-v1.1).

It is a fine-tuned version, trained on a [cleaned version](https://huggingface.co/datasets/PHBJT/cml-tts-filtered) of [CML-TTS](https://huggingface.co/datasets/ylacombe/cml-tts) and on the non-English languages of [Multilingual LibriSpeech](https://huggingface.co/datasets/facebook/multilingual_librispeech). In all, this represents some 9,200 hours of non-English data. To retain English capabilities, we also added back the [LibriTTS-R English dataset](https://huggingface.co/datasets/parler-tts/libritts_r_filtered), some 580 hours of high-quality English data.

**Parler-TTS Mini Multilingual** can speak in 8 European languages: English, French, Spanish, Portuguese, Polish, German, Italian and Dutch. Thanks to its **better prompt tokenizer**, it can easily be extended to other languages. This tokenizer has a larger vocabulary and handles byte fallback, which simplifies multilingual training.

The model also comes with 16 consistent named speakers; the mapping below links each source speaker ID to the name that can be referenced in the description:

```json
{
  "2450": "Mark",
  "496": "Jessica",
  "3060": "Daniel",
  "12709": "Christine",
  "1897": "Christopher",
  "10148": "Nicole",
  "4998": "Richard",
  "4649": "Julia",
  "6892": "Alex",
  "7014": "Natalie",
  "4367": "Nicholas",
  "2961": "Sophia",
  "3946": "Steven",
  "10246": "Olivia",
  "11772": "Megan",
  "4174": "Michelle"
}
```

🚨 This work is the result of a collaboration between the **Hugging Face audio team** and the **[Quantum Squadra](https://quantumsquadra.com/) team**. The **[AI4Bharat](https://ai4bharat.iitm.ac.in/) team** also provided advice and assistance in improving tokenization. 🚨

## 📖 Quick Index
* [👨‍💻 Installation](#👨‍💻-installation)
* [🎯 Inference](#inference)
* [Motivation](#motivation)
* [Optimizing inference](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md)

## 🛠️ Usage

🚨 Unlike previous versions of Parler-TTS, here we use two tokenizers: one for the prompt and one for the description. 🚨

### 👨‍💻 Installation

Using Parler-TTS is as simple as "bonjour". Simply install the library once:

```sh
pip install git+https://github.com/huggingface/parler-tts.git
```

### Inference

**Parler-TTS** has been trained to generate speech with features that can be controlled with a simple text prompt, for example:

```py
import torch
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer
import soundfile as sf

device = "cuda:0" if torch.cuda.is_available() else "cpu"

model = ParlerTTSForConditionalGeneration.from_pretrained("parler-tts/parler-tts-mini-multilingual-v1.1").to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-multilingual-v1.1")
description_tokenizer = AutoTokenizer.from_pretrained(model.config.text_encoder._name_or_path)

prompt = "Hey, how are you doing today?"
description = "A female speaker delivers a slightly expressive and animated speech with a moderate speed and pitch. The recording is of very high quality, with the speaker's voice sounding clear and very close up."

input_ids = description_tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
audio_arr = generation.cpu().numpy().squeeze()
sf.write("parler_tts_out.wav", audio_arr, model.config.sampling_rate)
```
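The same recipe extends to the other supported languages: write the prompt in the target language and keep the description in English. The sketch below is illustrative rather than an official example: it reuses `model`, `tokenizer`, `description_tokenizer`, `device` and `sf` from the snippet above, and assumes a named speaker from the mapping above ("Daniel") can be referenced in the description, following the named-speaker convention of other Parler-TTS checkpoints.

```py
# Minimal sketch: French generation with a named speaker (illustrative assumptions,
# reusing the objects defined in the snippet above).
prompt = "Salut, comment vas-tu aujourd'hui ?"  # prompt written in the target language
description = (
    "Daniel's voice is monotone yet slightly fast in delivery, "
    "with a very close recording that almost has no background noise."
)  # the description itself stays in English

input_ids = description_tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

generation = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
sf.write("parler_tts_out_fr.wav", generation.cpu().numpy().squeeze(), model.config.sampling_rate)
```

For faster generation, the [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) is the reference. As one hedged example, the checkpoint can be loaded with PyTorch's scaled-dot-product attention (SDPA) via the standard `attn_implementation` argument of `from_pretrained`, assuming a recent enough torch version:

```py
# Hedged sketch: load with SDPA attention for faster inference
# (assumes torch >= 2.0; use attn_implementation="eager" as a fallback).
model = ParlerTTSForConditionalGeneration.from_pretrained(
    "parler-tts/parler-tts-mini-multilingual-v1.1",
    attn_implementation="sdpa",
).to(device)
```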
**Tips**:
* We've set up an [inference guide](https://github.com/huggingface/parler-tts/blob/main/INFERENCE.md) to make generation faster. Think SDPA, torch.compile, batching and streaming! A minimal SDPA loading sketch appears above.
* Include the term "very clear audio" to generate the highest-quality audio, and "very noisy audio" for high levels of background noise.
* Punctuation can be used to control the prosody of the generations, e.g. use commas to add small breaks in speech.
* The remaining speech features (gender, speaking rate, pitch and reverberation) can be controlled directly through the prompt.

## Motivation

Parler-TTS is a reproduction of the work in the paper [Natural language guidance of high-fidelity text-to-speech with synthetic annotations](https://www.text-description-to-speech.com) by Dan Lyth and Simon King, from Stability AI and Edinburgh University respectively.

In contrast to other TTS models, Parler-TTS is a **fully open-source** release. All of the datasets, pre-processing, training code and weights are released publicly under a permissive license, enabling the community to build on our work and develop their own powerful TTS models.

Parler-TTS was released alongside:
* [The Parler-TTS repository](https://github.com/huggingface/parler-tts) - you can train and fine-tune your own version of the model.
* [The Data-Speech repository](https://github.com/huggingface/dataspeech) - a suite of utility scripts designed to annotate speech datasets.
* [The Parler-TTS organization](https://huggingface.co/parler-tts) - where you can find the annotated datasets as well as the future checkpoints.

## Citation

If you found this repository useful, please consider citing this work and also the original Stability AI paper:

```
@misc{lacombe-etal-2024-parler-tts,
  author = {Yoach Lacombe and Vaibhav Srivastav and Sanchit Gandhi},
  title = {Parler-TTS},
  year = {2024},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/huggingface/parler-tts}}
}
```

```
@misc{lyth2024natural,
  title={Natural language guidance of high-fidelity text-to-speech with synthetic annotations},
  author={Dan Lyth and Simon King},
  year={2024},
  eprint={2402.01912},
  archivePrefix={arXiv},
  primaryClass={cs.SD}
}
```

## License

This model is permissively licensed under the Apache 2.0 license.