IPNDMScheduler
IPNDMScheduler is a fourth-order Improved Pseudo Linear Multistep scheduler. The original implementation can be found at crowsonkb/v-diffusion-pytorch.
IPNDMScheduler
class diffusers.IPNDMScheduler
( num_train_timesteps: int = 1000, trained_betas: Optional[Union[np.ndarray, List[float]]] = None )
A fourth-order Improved Pseudo Linear Multistep scheduler.
This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic methods the library implements for all schedulers such as loading and saving.
scale_model_input
( sample: torch.Tensor, *args, **kwargs ) → torch.Tensor
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
set_begin_index
( begin_index: int = 0 )
Sets the begin index for the scheduler. This function should be run from the pipeline before inference.
set_timesteps
( num_inference_steps: int, device: Union[str, torch.device] = None )
Sets the discrete timesteps used for the diffusion chain (to be run before inference).
step
( model_output: torch.Tensor, timestep: Union[int, torch.Tensor], sample: torch.Tensor, return_dict: bool = True ) → SchedulerOutput or tuple
Parameters
- model_output (torch.Tensor) — The direct output from the learned diffusion model.
- timestep (int) — The current discrete timestep in the diffusion chain.
- sample (torch.Tensor) — A current instance of a sample created by the diffusion process.
- return_dict (bool) — Whether or not to return a SchedulerOutput or a tuple.
Returns
SchedulerOutput or tuple
If return_dict is True, a SchedulerOutput is returned; otherwise a tuple is returned where the first element is the sample tensor.
Predicts the sample at the previous timestep by reversing the SDE. The sample is propagated with the linear multistep method, which combines several previous model outputs to approximate the solution.
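The fourth-order behavior comes from blending the most recent model outputs with Adams-Bashforth-style linear multistep coefficients, falling back to lower orders while fewer than four past outputs are available. A minimal pure-Python sketch of that blending, using scalar stand-ins for the output tensors (`multistep_blend` is an illustrative helper, not part of the diffusers API):

```python
def multistep_blend(ets):
    """Blend recent model outputs with Adams-Bashforth-style coefficients,
    as in a fourth-order pseudo linear multistep method.

    `ets` is a list of past model outputs, oldest first. This is an
    illustrative sketch; see the diffusers source for the exact update.
    """
    if len(ets) == 1:
        # First step: no history yet, use the latest output directly.
        return ets[-1]
    if len(ets) == 2:
        # Second-order blend of the last two outputs.
        return (3 * ets[-1] - ets[-2]) / 2
    if len(ets) == 3:
        # Third-order blend of the last three outputs.
        return (23 * ets[-1] - 16 * ets[-2] + 5 * ets[-3]) / 12
    # Fourth-order blend once four outputs are available.
    return (55 * ets[-1] - 59 * ets[-2] + 37 * ets[-3] - 9 * ets[-4]) / 24
```

Note that the blend is a weighted combination whose coefficients sum to 1 at every order, so a constant model output passes through unchanged.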
SchedulerOutput
class diffusers.schedulers.scheduling_utils.SchedulerOutput
( prev_sample: torch.Tensor )
Base class for the output of a scheduler’s step function.