Real-Time Latent Consistency Model

Text to Image

This demo showcases the Latent Consistency Model (LCM) Text-to-Image pipeline, built with Diffusers and served through an MJPEG stream server.
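For reference, here is a minimal sketch of the text-to-image path a demo like this wraps: a Latent Consistency Model loaded through Diffusers. The checkpoint name, step count, and guidance scale below are illustrative assumptions rather than values taken from this demo's source.

```python
# Minimal LCM text-to-image sketch with Diffusers (assumptions noted inline).
import torch
from diffusers import DiffusionPipeline

# Assumed LCM checkpoint; the demo may use a different one.
pipe = DiffusionPipeline.from_pretrained(
    "SimianLuo/LCM_Dreamshaper_v7",
    torch_dtype=torch.float16,
)
pipe.to("cuda")

# LCMs converge in a handful of denoising steps, which is what makes
# near-real-time generation feasible.
image = pipe(
    prompt="a photo of an astronaut riding a horse",
    num_inference_steps=4,
    guidance_scale=8.0,
).images[0]
image.save("out.png")
```

The MJPEG side can be as simple as an HTTP endpoint that keeps replacing a JPEG frame. The sketch below assumes FastAPI and a hypothetical placeholder frame source; the demo's actual server code may be organized differently.

```python
# Minimal MJPEG stream endpoint sketch (assumes FastAPI; frame source is a placeholder).
import io

from fastapi import FastAPI
from fastapi.responses import StreamingResponse
from PIL import Image

app = FastAPI()

def frame_source():
    """Yield PIL images; in the demo this would be the LCM pipeline output."""
    while True:
        yield Image.new("RGB", (512, 512), (0, 0, 0))  # placeholder black frame

def mjpeg_frames():
    # Each frame is sent as one JPEG part of a multipart/x-mixed-replace
    # response, which browsers render as a continuously updating image.
    for frame in frame_source():
        buf = io.BytesIO()
        frame.save(buf, format="JPEG")
        yield (b"--frame\r\n"
               b"Content-Type: image/jpeg\r\n\r\n" + buf.getvalue() + b"\r\n")

@app.get("/stream")
def stream():
    return StreamingResponse(
        mjpeg_frames(),
        media_type="multipart/x-mixed-replace; boundary=frame",
    )
```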

The hosted demo shares a single GPU across users, which affects real-time performance; the maximum queue size is 10. Duplicate the Space and run it on your own GPU for the best experience.

Prompt

Start your session and type your prompt here; Compel syntax is accepted.
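For illustration, here is a hedged sketch of Compel-weighted prompting with a Diffusers pipeline; `pipe` is assumed to be a text-to-image pipeline like the one loaded above, and the prompt and weights are arbitrary examples.

```python
# Compel prompt-weighting sketch (assumes `pipe` is a Diffusers pipeline
# whose __call__ accepts prompt_embeds).
from compel import Compel

compel = Compel(tokenizer=pipe.tokenizer, text_encoder=pipe.text_encoder)

# In Compel syntax, "++" up-weights the preceding word or parenthesized phrase.
prompt_embeds = compel.build_conditioning_tensor(
    "a watercolor landscape, (vivid sunset)++"
)

image = pipe(
    prompt_embeds=prompt_embeds,
    num_inference_steps=4,
    guidance_scale=8.0,
).images[0]
```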

Advanced Options