Question about Image generation

#30 opened by ExeSmokey

Firstly, very nice work on the LoRAs.

I am quite new to messing about in Stable Diffusion, but I've been making some good CG-style realistic images.
Now I'm trying my hand at anime images with your LoRAs, and they are very impressive.

Anyway, I have what is probably a noob question, but I would appreciate it if you could help.
I'll use your Genshin Impact Eula image below as a reference.

I dropped your image into Stable Diffusion and sent everything over to txt2img.
I checked the model hash and believe you used AbyssOrangeMix2NSFW?

When I try to recreate the image it comes very close, but mine looks "washed out" and the colours are flatter.

Do I need to use a VAE in Settings? And if so, which one is good?

Also, my LoRAs are on that new tab now, so when you click one (it's no longer a drop-down box) it adds an extra tag to the prompt box, e.g. "<lora:eulaHard:1>".
Does the position of this tag in the prompt affect the outcome in any way? I have just left it at the end.

Any help would be appreciated.
Cheers.

[attached image: the Eula reference]

Yes, remember to use the VAE when you are using OrangeMix2.
And you might want to set the Clip skip value to 2 with some models to get better results.
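
If you ever want to reproduce the same settings outside the WebUI, here is a rough sketch using the diffusers library. It is only meant to show where the VAE and Clip skip fit in; the file names and the prompt are placeholders, not the exact ones used for the Eula image.

```python
# Rough diffusers sketch (not the exact workflow used for the Eula image).
# All file names below are placeholders for whatever checkpoint/VAE/LoRA you have locally.
import torch
from diffusers import AutoencoderKL, StableDiffusionPipeline

# Loading a separate anime VAE is what fixes the washed-out, flat colours.
vae = AutoencoderKL.from_single_file("orangemix.vae.pt", torch_dtype=torch.float16)

pipe = StableDiffusionPipeline.from_single_file(
    "AbyssOrangeMix2_nsfw.safetensors",  # placeholder checkpoint file
    vae=vae,
    torch_dtype=torch.float16,
).to("cuda")

# Roughly what <lora:eulaHard:1> does in the WebUI prompt box.
pipe.load_lora_weights(".", weight_name="eulaHard.safetensors")

image = pipe(
    prompt="eula (genshin impact), 1girl, masterpiece, best quality",
    negative_prompt="lowres, bad anatomy, worst quality",
    num_inference_steps=28,
    guidance_scale=7,
    # diffusers counts clip skip differently: clip_skip=1 uses the
    # second-to-last CLIP layer, i.e. the WebUI's "Clip skip 2".
    clip_skip=1,
).images[0]
image.save("eula_test.png")
```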

In Settings, you can set the "Quicksettings list" option to sd_model_checkpoint, sd_vae, CLIP_stop_at_last_layers, then apply and reload the UI. After that you can switch the checkpoint, VAE, and Clip skip at the top of the WebUI, next to where you select the model.
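
If you prefer, you can also set this by editing the WebUI's config.json directly. Depending on your WebUI version the key is either "quicksettings" (a comma-separated string) or "quicksettings_list" (an array), so it ends up looking something like this:

```json
{
  "quicksettings_list": ["sd_model_checkpoint", "sd_vae", "CLIP_stop_at_last_layers"]
}
```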
