Any idea why the results are noisy?

#2
by VMass - opened

Thanks for the model, the generation is really quick!
I'm using it with diffusers 0.26.2 on an NVIDIA Tesla A100 (in Colab). Do you have any idea why the results are noisy/grainy?
This is my pipe script:
import torch
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained(
    "SG161222/RealVisXL_V3.0_Turbo",
    torch_dtype=torch.float16,
    variant="fp16",
)

pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config,
    use_karras_sigmas=True,
    final_sigmas_type="sigma_min",
)

pipe.to("cuda")

These are my parameters:
prompt = "(product shot:1.5), a pink sport sneaker, highly detailed, ultra realistic, ultra sharp, 8k"
negative_prompt = "lowres, bad anatomy, naked, explicit, breast, (bad hands:1.5), missing, finger, sketches, ugly, low-quality, signature, deformed, pattern, downsampling, aliasing, distorted, blurry, glossy, blur, jpeg artifacts, compression artifacts, poorly drawn, poor quality, low-resolution, bad, distortion, twisted, excessive, exaggerated pose, exaggerated limbs, grainy, symmetrical, error, pattern, beginner, pixelated, fake, hyper, glitch, worst quality, low quality, overexposed, high-contrast, bad-contrast, (duplicate:2), (long body:2), (long torso:2), (long neck), (long arm)"
num_samples = 1
guidance_scale = 2.5
num_inference_steps = 7
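For reference, a generation call with those parameters would look something like the sketch below. The helper name `generate` is made up for illustration; `pipe` is assumed to be the pipeline built above.

```python
def generate(pipe, prompt, negative_prompt):
    # Turbo checkpoints are tuned for few steps and low CFG,
    # matching the settings quoted above.
    result = pipe(
        prompt=prompt,
        negative_prompt=negative_prompt,
        guidance_scale=2.5,
        num_inference_steps=7,
    )
    # The pipeline returns a list of PIL images; take the first one.
    return result.images[0]
```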

(Attached results: RVXLTurbo.png, RVXLturbo2.png)

These are the results on A1111:
(Attached: grid-0000 (1).png)

+1 on this :/
It looked great on A1111, but with the diffusers library the results are terrible.

I got nice results using DPMSolverSDEScheduler.
I think the documentation on Hugging Face is a bit outdated and you're supposed to use this one.
Not sure though, but I do get better results.
