Real-Time Latent Consistency Model
Text to Image
This demo showcases the LCM (Latent Consistency Model) Text to Image pipeline using Diffusers with an MJPEG stream server.
Users sharing the same GPU affect real-time performance, and the maximum queue size is 10. Duplicate the demo and run it on your own GPU for the best experience.
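Below is a minimal sketch of how such a demo can be wired together: a Diffusers LCM pipeline generates images with only a few denoising steps, and a FastAPI endpoint streams the JPEG-encoded frames as an MJPEG response. The model ID, prompt, endpoint name, and generation parameters are illustrative assumptions, not the demo's exact code.

```python
# Sketch only: LCM text-to-image frames served as an MJPEG stream.
# Model ID, endpoint, and parameters are assumptions for illustration.
import io

import torch
from diffusers import DiffusionPipeline
from fastapi import FastAPI
from fastapi.responses import StreamingResponse

app = FastAPI()

# Load a Latent Consistency Model pipeline (few-step text-to-image).
pipe = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7", torch_dtype=torch.float16
).to("cuda")


def mjpeg_frames(prompt: str):
    """Yield JPEG frames in multipart/x-mixed-replace format."""
    while True:
        image = pipe(
            prompt=prompt,
            num_inference_steps=4,  # LCM needs only a few steps
            guidance_scale=8.0,
            height=512,
            width=512,
        ).images[0]
        buf = io.BytesIO()
        image.save(buf, format="JPEG")
        yield (
            b"--frame\r\n"
            b"Content-Type: image/jpeg\r\n\r\n" + buf.getvalue() + b"\r\n"
        )


@app.get("/stream")
def stream(prompt: str = "a photo of a cat"):
    # The response is an open-ended MJPEG stream of generated frames,
    # which a browser can display directly in an <img> tag.
    return StreamingResponse(
        mjpeg_frames(prompt),
        media_type="multipart/x-mixed-replace; boundary=frame",
    )
```

The MJPEG approach keeps the client side trivial (a plain image element pointed at the stream URL), at the cost of re-encoding every frame as JPEG; the real demo may use a different transport or additional queueing logic to handle multiple users on one GPU.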