Dreamshaper-8-lcm
lykon/dreamshaper-8-lcm is a Stable Diffusion model fine-tuned on runwayml/stable-diffusion-v1-5.
Please consider supporting me:
- on Patreon
- or buy me a coffee
Diffusers
For more general information on how to run text-to-image models with 🧨 Diffusers, see the docs.
- Installation
```bash
pip install diffusers transformers accelerate
```
- Run
```python
from diffusers import AutoPipelineForText2Image, LCMScheduler
import torch

# Load the fp16 weights and swap in the LCM scheduler the model was distilled for
pipe = AutoPipelineForText2Image.from_pretrained('lykon/dreamshaper-8-lcm', torch_dtype=torch.float16, variant="fp16")
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

prompt = "portrait photo of muscular bearded guy in a worn mech suit, light bokeh, intricate, steel metal, elegant, sharp focus, soft lighting, vibrant colors"

# Fix the seed for reproducibility; LCM needs few steps and a low guidance scale
generator = torch.manual_seed(0)
image = pipe(prompt, num_inference_steps=15, guidance_scale=2, generator=generator).images[0]
image.save("./image.png")
```
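Because this is an LCM-distilled model, it converges in very few steps and works best with a low guidance scale (roughly 1-2). A minimal sketch for comparing step counts, reusing `pipe` and `prompt` from above; the loop and output filenames are illustrative, not part of the original card:

```python
# Sweep a few LCM step counts to see how quickly results converge.
for steps in (4, 8, 15):
    generator = torch.manual_seed(0)  # reset the seed so only the step count varies
    image = pipe(prompt, num_inference_steps=steps, guidance_scale=2, generator=generator).images[0]
    image.save(f"./image_{steps}_steps.png")
```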
Notes
- Version 8 focuses on improving what V7 started. It may be harder to get photorealism than with realism-focused models, just as anime may be harder than with anime-focused models, but it can do both quite well with skilled prompting. Check the examples!
- Version 7 improves LoRA support, NSFW, and realism. If you're interested in "absolute" realism, try AbsoluteReality.
- Version 6 adds more LoRA support and more style in general. It should also be better at generating directly at 1024 height (but be careful with it); see the sketch after this list. All 6.x releases are incremental improvements.
- Version 5 is the best at photorealism and has noise offset.
- Version 4 is much better at anime (it can generate it without a LoRA) and booru tags. It may be harder to control if you're used to caption-style prompts, so you might still prefer version 3.31. V4 is also better with eyes at lower resolutions. Overall it's like a "fix" of V3 and shouldn't be too different.
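Since the notes above mention LoRA support and generating directly at 1024 height, here is a hedged sketch combining both with the pipeline from the Run section. The LoRA repo id, weight filename, and prompt are hypothetical placeholders; `load_lora_weights` and the `height`/`width` arguments are standard Diffusers APIs:

```python
# Hypothetical LoRA: replace the repo id and weight file with a real style LoRA.
pipe.load_lora_weights("some-user/some-style-lora", weight_name="style.safetensors")

generator = torch.manual_seed(0)
image = pipe(
    "masterpiece, 1girl, looking at viewer",  # booru-style tags, per the V4 note above
    height=1024,  # tall generation, as discussed in the V6 note
    width=576,
    num_inference_steps=15,
    guidance_scale=2,
    generator=generator,
).images[0]
image.save("./image_tall.png")
```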