Does this model only work with the Euler Discrete scheduler?

#17
by therealdarkknight - opened

Am I the only one struggling to get non-noisy outputs using any schedule other than EulerDiscrete? Happy to hear what others have tried/done/noticed.

Hello @therealdarkknight

According to the docs, DDIMScheduler, LMSDiscreteScheduler, and PNDMScheduler should work:

A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
        [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].

I tested DDIMScheduler and it worked as expected, but just like you, PNDMScheduler and LMSDiscreteScheduler are producing noisy output for me.

Hi all, maybe related, but it seems that the current file in this folder builds a DDIM scheduler by default.

However, looking at the docs:
https://huggingface.co/docs/diffusers/v0.9.0/en/api/schedulers#diffusers.DDIMScheduler

It says the following:
v-prediction is not supported for this class. (Edit: yup. the docs are just outdated. The scheduler has an arg for v-prediction now).

We need some special handling for v-prediction schedulers, specifically for the 768x768 model.

NOTE: The following schedulers support "v_prediction" at the moment, but I had to manually pass the argument:

  • Euler
  • DDIM
  • DPMSolverMultistepScheduler
  • DDPM

The LMS scheduler is still missing the argument for v_prediction.

Edit 2: it seems the AUTOMATIC1111 webui was able to properly integrate a bunch of different samplers from k_diffusion. I noticed there is a HuggingFace community example for integrating k_diffusion with diffusers, but it is broken for Stable Diffusion v2. Specifically the v_prediction part. Looking through the web-ui code, we'll probably need more wrapping and proper handling of v_prediction to use the k_diffusion classes.

DPMSolverMultistepScheduler also works with Stable Diffusion 2, as can be seen in the official examples:
https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion_2#tips
