Results too different when used locally on PC

#5
by patientxtr - opened

Tried it on sdwebui with 2, 4, and 8 steps. 2 and 4 look downright bad, and with 8 it is good, BUT it doesn't look anywhere close to what this demo Space creates. For one, everything I generate on my PC looks more "realistic", while here it looks more "dreamy", and this is with or without negative prompts. If you could share your negative prompts or any other specific settings you use while generating, that would be great.

LCM currently does not support negative prompts. Could you show your implementation on sdwebui? Let's see what happens.

Using sdwebui (A1111) with an AMD GPU and these launch flags: --opt-sub-quad-attention --no-half-vae --disable-nan-check --medvram

Got the safetensors from here, VAE set to none (it doesn't matter if I use any other one, the output is the same), using DDIM as the sampler (actually, all samplers create the sort of realistic image you can see below, nothing like yours), generator on CPU, same seed, guidance scale 8, 512x512, and with your sample prompt:

This is what this Space creates:
1.png

This is what sdwebui creates:
00059-172864489.png

I know what happens: you are using the wrong sampler. We use the LCMSampler (our method), not the DDIM sampler, and it has not been integrated into sdwebui yet. We will figure out how to add it to sd-webui.
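For anyone who wants to try the few-step LCM setup outside of A1111, here is a rough sketch using the diffusers library. This is an assumption-heavy illustration, not the Space's actual code: it assumes a diffusers version that ships `LCMScheduler` (>= 0.22) and the public `SimianLuo/LCM_Dreamshaper_v7` checkpoint, and the prompt is a placeholder.

```python
# Sketch of few-step LCM inference with diffusers (assumptions: diffusers >= 0.22,
# which provides LCMScheduler, and the SimianLuo/LCM_Dreamshaper_v7 checkpoint).

NUM_INFERENCE_STEPS = 4   # LCM is designed for very few steps (~2-8)
GUIDANCE_SCALE = 8.0      # matches the guidance scale used in the thread

def generate(prompt: str):
    # Imported lazily so the sketch can be read without diffusers installed.
    from diffusers import DiffusionPipeline, LCMScheduler

    pipe = DiffusionPipeline.from_pretrained("SimianLuo/LCM_Dreamshaper_v7")
    # The key point from the thread: use the LCM sampler, not DDIM.
    pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
    result = pipe(
        prompt=prompt,
        num_inference_steps=NUM_INFERENCE_STEPS,
        guidance_scale=GUIDANCE_SCALE,
        width=512,
        height=512,
    )
    return result.images[0]

if __name__ == "__main__":
    image = generate("a photo of an astronaut riding a horse on mars")
    image.save("lcm_out.png")
```

The sampler swap is the part that matters here: with DDIM or other samplers the model still produces an image, which is why the local results look plausible but different from the demo.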

!!! The local Gradio demo is out.
!!! You can run the LCM model locally using Gradio.
Please refer to: https://github.com/luosiallen/latent-consistency-model
For A1111 users, we are still working on it! Currently, A1111 does not incorporate our LCMSampler, so you cannot get the reduced inference time on A1111 yet. In the meantime, you can try the local Gradio demo.

just open yourself to diversity
:D
