parler-tts / parler-tts-mini-v1
Text-to-Speech · Transformers · Safetensors · 4 datasets · English · parler_tts · text2text-generation · annotation · Inference Endpoints · arxiv:2402.01912 · License: apache-2.0
Community (3)
Quantization Options for Faster Inference and Lower VRAM Usage
#2 · opened about 1 month ago by 1sarim

GPU requirements for real time response?
2 comments · #1 · opened 3 months ago by lukiggs
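
The quantization thread above asks about cutting VRAM. As a rough illustration only (not an answer from the maintainers), here is a minimal sketch of loading parler-tts-mini-v1 in half precision via the parler_tts package, which is one common way to reduce memory; the device, dtype choice, and the description/prompt strings are assumptions, and true int8/int4 quantization would require separate tooling.

```python
# Minimal sketch (assumption, not an official recommendation): load Parler-TTS
# Mini v1 in float16 on GPU to roughly halve weight memory versus float32.
import torch
import soundfile as sf
from parler_tts import ParlerTTSForConditionalGeneration
from transformers import AutoTokenizer

device = "cuda:0" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device != "cpu" else torch.float32  # fp16 only makes sense on GPU

model = ParlerTTSForConditionalGeneration.from_pretrained(
    "parler-tts/parler-tts-mini-v1", torch_dtype=dtype
).to(device)
tokenizer = AutoTokenizer.from_pretrained("parler-tts/parler-tts-mini-v1")

# The description conditions the voice; the prompt is the text to be spoken.
description = "A female speaker with a clear voice, recorded in a quiet room."
prompt = "Hey, how are you doing today?"

input_ids = tokenizer(description, return_tensors="pt").input_ids.to(device)
prompt_input_ids = tokenizer(prompt, return_tensors="pt").input_ids.to(device)

audio = model.generate(input_ids=input_ids, prompt_input_ids=prompt_input_ids)
# Cast back to float32 before writing, since fp16 arrays are not audio-friendly.
sf.write("out.wav", audio.to(torch.float32).cpu().numpy().squeeze(), model.config.sampling_rate)
```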