Improved Pseudo Numerical Methods for Diffusion Models (iPNDM)
The original implementation can be found in @crowsonkb's k-diffusion library.
class diffusers.IPNDMScheduler
( num_train_timesteps: int = 1000, trained_betas: Union[numpy.ndarray, List[float], None] = None )
Improved pseudo numerical methods for diffusion models (iPNDM), ported from @crowsonkb's amazing k-diffusion library.
~ConfigMixin takes care of storing all config attributes that are passed in the scheduler's __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps. SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and SchedulerMixin.from_pretrained() functions.
For more details, see the original paper: https://arxiv.org/abs/2202.09778
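Below is a minimal sketch, assuming only the diffusers package, of constructing the scheduler and round-tripping its config through the SchedulerMixin save/load helpers; the local directory path is illustrative.

```python
from diffusers import IPNDMScheduler

# Construct the scheduler; ConfigMixin stores the init arguments as config.
scheduler = IPNDMScheduler(num_train_timesteps=1000)
print(scheduler.config.num_train_timesteps)  # -> 1000

# SchedulerMixin handles saving and loading (directory path is illustrative).
scheduler.save_pretrained("./ipndm-scheduler")
scheduler = IPNDMScheduler.from_pretrained("./ipndm-scheduler")
```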
scale_model_input
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
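A minimal sketch of the interchangeable call pattern, assuming only diffusers and torch: pipelines call scale_model_input before every model forward pass so the same loop works across schedulers. For iPNDM the sample is expected to pass through unchanged, which is an assumption based on this method's interchangeability role, not a documented guarantee.

```python
import torch
from diffusers import IPNDMScheduler

scheduler = IPNDMScheduler()
sample = torch.randn(1, 3, 32, 32)  # illustrative latent shape

# Call uniformly before each model forward pass; schedulers that need
# timestep-dependent input scaling apply it here, others pass through.
scaled = scheduler.scale_model_input(sample, timestep=0)  # timestep value illustrative
```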
set_timesteps
( num_inference_steps: int, device: Union[str, torch.device] = None )
Sets the discrete timesteps used for the diffusion chain. This supporting function should be run before inference.
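A minimal sketch of the call, assuming only the diffusers package; the step count of 50 is arbitrary.

```python
from diffusers import IPNDMScheduler

scheduler = IPNDMScheduler()
scheduler.set_timesteps(num_inference_steps=50)

# The discrete timesteps the denoising loop will iterate over.
print(scheduler.timesteps)
```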
step
( model_output: torch.FloatTensor, timestep: int, sample: torch.FloatTensor, return_dict: bool = True ) → SchedulerOutput or tuple
Parameters:
model_output (torch.FloatTensor) — direct output from the learned diffusion model.
timestep (int) — current discrete timestep in the diffusion chain.
sample (torch.FloatTensor) — current instance of the sample being created by the diffusion process.
return_dict (bool) — option to return a SchedulerOutput class rather than a plain tuple.
Returns: SchedulerOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
Step function propagating the sample with the linear multi-step method. Each call makes a single forward pass of the model and combines it with previously stored model outputs to approximate the solution.
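Putting the methods together, here is a minimal sketch of a denoising loop; `model` is a hypothetical stand-in for a trained noise-prediction network, and the tensor shape and step count are illustrative.

```python
import torch
from diffusers import IPNDMScheduler

def model(x, t):
    # Hypothetical stand-in for a trained noise-prediction network;
    # a real pipeline would run a UNet here.
    return torch.zeros_like(x)

scheduler = IPNDMScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(num_inference_steps=50)

sample = torch.randn(1, 3, 32, 32)  # start from pure noise
for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)
    noise_pred = model(model_input, t)
    # step() combines this output with stored previous outputs (multi-step).
    sample = scheduler.step(noise_pred, t, sample).prev_sample
```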