Linear multistep scheduler for discrete beta schedules
Overview
The original implementation can be found in Katherine Crowson's k-diffusion repository: https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181
LMSDiscreteScheduler
class diffusers.LMSDiscreteScheduler
( num_train_timesteps: int = 1000, beta_start: float = 0.0001, beta_end: float = 0.02, beta_schedule: str = 'linear', trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None, use_karras_sigmas: typing.Optional[bool] = False, prediction_type: str = 'epsilon', timestep_spacing: str = 'linspace', steps_offset: int = 0 )
Parameters

- num_train_timesteps (`int`, defaults to 1000) — the number of diffusion steps used to train the model.
- beta_start (`float`, defaults to 0.0001) — the starting `beta` value of inference.
- beta_end (`float`, defaults to 0.02) — the final `beta` value.
- beta_schedule (`str`, defaults to `"linear"`) — the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from `linear` or `scaled_linear`.
- trained_betas (`np.ndarray`, optional) — an option to pass an array of betas directly to the constructor to bypass `beta_start`, `beta_end`, etc.
- use_karras_sigmas (`bool`, optional, defaults to `False`) — whether to use Karras sigmas (the Karras et al. (2022) scheme) for the step sizes in the noise schedule during sampling. If `True`, the sigmas are determined according to a sequence of noise levels {σi} as defined in Equation (5) of https://arxiv.org/pdf/2206.00364.pdf.
- prediction_type (`str`, optional, defaults to `epsilon`) — the prediction type of the scheduler function: one of `epsilon` (predicting the noise of the diffusion process), `sample` (directly predicting the noisy sample), or `v_prediction` (see section 2.4 of https://imagen.research.google/video/paper.pdf).
- timestep_spacing (`str`, defaults to `"linspace"`) — the way the timesteps should be scaled. Refer to Table 2 of Common Diffusion Noise Schedules and Sample Steps are Flawed for more information.
- steps_offset (`int`, defaults to `0`) — an offset added to the inference steps. You can use a combination of `offset=1` and `set_alpha_to_one=False` to make the last step use step 0 for the previous alpha product, as done in Stable Diffusion.
Linear Multistep Scheduler for discrete beta schedules. Based on the original k-diffusion implementation by Katherine Crowson: https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181
ConfigMixin takes care of storing all config attributes that are passed in the scheduler's `__init__` function, such as `num_train_timesteps`. They can be accessed via `scheduler.config.num_train_timesteps`.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and
from_pretrained() functions.
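For example (a minimal sketch; the local directory path is arbitrary):

```python
from diffusers import LMSDiscreteScheduler

scheduler = LMSDiscreteScheduler(num_train_timesteps=1000)

# Config attributes passed to __init__ are stored by ConfigMixin.
print(scheduler.config.num_train_timesteps)  # 1000

# SchedulerMixin handles serialization to and from a directory.
scheduler.save_pretrained("./lms-scheduler")
reloaded = LMSDiscreteScheduler.from_pretrained("./lms-scheduler")
```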
get_lms_coefficient
( order, t, current_order )
Compute a linear multistep coefficient.
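For intuition, the coefficient for a given `current_order` is the integral of the corresponding Lagrange basis polynomial over the current sigma interval, as in the k-diffusion implementation. A minimal standalone sketch (the `lms_coefficient` helper and the `sigmas` array are illustrative, not part of this class's public API):

```python
from scipy import integrate

def lms_coefficient(sigmas, order, t, current_order):
    # Lagrange basis polynomial for `current_order`, evaluated at tau.
    def lms_derivative(tau):
        prod = 1.0
        for k in range(order):
            if current_order == k:
                continue
            prod *= (tau - sigmas[t - k]) / (sigmas[t - current_order] - sigmas[t - k])
        return prod

    # Integrate the basis polynomial over [sigmas[t], sigmas[t + 1]].
    return integrate.quad(lms_derivative, sigmas[t], sigmas[t + 1], epsrel=1e-4)[0]
```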
scale_model_input
( sample: FloatTensor, timestep: typing.Union[float, torch.FloatTensor] ) → torch.FloatTensor

Scales the denoising model input by `(sigma**2 + 1) ** 0.5` to match the K-LMS algorithm.
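A short usage sketch (the latent shape and step count are arbitrary); internally the sample is rescaled using the sigma that matches the given timestep:

```python
import torch
from diffusers import LMSDiscreteScheduler

scheduler = LMSDiscreteScheduler()
scheduler.set_timesteps(10)

sample = torch.randn(1, 4, 64, 64)  # arbitrary latent shape
t = scheduler.timesteps[0]

# Scale the input before feeding it to the denoising model.
scaled = scheduler.scale_model_input(sample, t)
```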
set_timesteps
( num_inference_steps: int, device: typing.Union[str, torch.device] = None )

Sets the timesteps used for the diffusion chain. A supporting function to be run before inference.
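For example:

```python
from diffusers import LMSDiscreteScheduler

scheduler = LMSDiscreteScheduler()
scheduler.set_timesteps(50)

print(len(scheduler.timesteps))  # 50 inference timesteps
print(len(scheduler.sigmas))     # matching noise levels, plus a final 0.0
```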
step
( model_output: FloatTensor, timestep: typing.Union[float, torch.FloatTensor], sample: FloatTensor, order: int = 4, return_dict: bool = True ) → ~schedulers.scheduling_utils.LMSDiscreteSchedulerOutput or tuple
Parameters

- model_output (`torch.FloatTensor`) — the direct output from the learned diffusion model.
- timestep (`float`) — the current timestep in the diffusion chain.
- sample (`torch.FloatTensor`) — the current instance of the sample being created by the diffusion process.
- order (`int`, defaults to `4`) — the order of the linear multistep method, i.e. how many previous model outputs are used for the update.
- return_dict (`bool`) — option for returning an LMSDiscreteSchedulerOutput class rather than a tuple.
Returns

`~schedulers.scheduling_utils.LMSDiscreteSchedulerOutput` or `tuple`

A `~schedulers.scheduling_utils.LMSDiscreteSchedulerOutput` if `return_dict` is `True`, otherwise a `tuple`. When returning a tuple, the first element is the sample tensor.
Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion process from the learned model outputs (most often the predicted noise).
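Putting the pieces together, a minimal sketch of a denoising loop; the `model` function below is a dummy stand-in for a trained denoiser such as a UNet:

```python
import torch
from diffusers import LMSDiscreteScheduler

# Dummy denoiser standing in for a trained model (e.g. a UNet); predicts zeros.
def model(x, t):
    return torch.zeros_like(x)

scheduler = LMSDiscreteScheduler()
scheduler.set_timesteps(25)

# Start from Gaussian noise scaled to the scheduler's initial sigma.
sample = torch.randn(1, 4, 64, 64) * scheduler.init_noise_sigma

for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)
    noise_pred = model(model_input, t)
    # step() reverses the SDE one timestep using an order-4 multistep update.
    sample = scheduler.step(noise_pred, t, sample, order=4).prev_sample
```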