Algorithm 2 (Heun steps) of Karras et al. (2022), ported from @crowsonkb’s https://github.com/crowsonkb/k-diffusion library.
All credit for making this scheduler work goes to Katherine Crowson.
( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None prediction_type: str = 'epsilon' )
Parameters
num_train_timesteps (int): number of diffusion steps used to train the model.
beta_start (float): the starting beta value of inference.
beta_end (float): the final beta value.
beta_schedule (str): the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear or scaled_linear.
trained_betas (np.ndarray, optional): option to pass an array of betas directly to the constructor to bypass beta_start, beta_end, etc.
prediction_type (str, default epsilon, optional): prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion process), sample (directly predicting the noisy sample) or v_prediction (see section 2.4 of https://imagen.research.google/video/paper.pdf).
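With the arguments above, constructing the scheduler directly looks roughly like the following sketch; the HeunDiscreteScheduler class name is an assumption (this page never names the class), and the argument values are only illustrative.

```python
from diffusers import HeunDiscreteScheduler  # class name assumed, not stated on this page

# Constructor arguments mirror the Parameters list above; all values are illustrative.
scheduler = HeunDiscreteScheduler(
    num_train_timesteps=1000,
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",  # or "linear"
    prediction_type="epsilon",      # or "sample" / "v_prediction"
)
```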
Implements Algorithm 2 (Heun steps) from Karras et al. (2022) for discrete beta schedules. Based on the original k-diffusion implementation by Katherine Crowson: https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L90
ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions.
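As a hedged illustration of that mixin behaviour, the sketch below loads, inspects and re-saves a scheduler config; the HeunDiscreteScheduler class name, the checkpoint id and the subfolder convention are assumptions, not part of this page.

```python
from diffusers import HeunDiscreteScheduler  # class name assumed

# Load the scheduler config from a pipeline repo (repo id and subfolder are illustrative).
scheduler = HeunDiscreteScheduler.from_pretrained(
    "runwayml/stable-diffusion-v1-5", subfolder="scheduler"
)

# ConfigMixin exposes every __init__ argument on .config
print(scheduler.config.num_train_timesteps)  # e.g. 1000

# SchedulerMixin handles saving and reloading
scheduler.save_pretrained("./my_scheduler")
reloaded = HeunDiscreteScheduler.from_pretrained("./my_scheduler")
```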
scale_model_input ( sample: torch.FloatTensor timestep: typing.Union[float, torch.FloatTensor] ) → torch.FloatTensor
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep. Returns the scaled input sample.
set_timesteps ( num_inference_steps: int device: typing.Union[str, torch.device] = None num_train_timesteps: typing.Optional[int] = None )
Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
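A short sketch of calling set_timesteps before inference; the class name and the step count are assumptions for illustration only.

```python
import torch
from diffusers import HeunDiscreteScheduler  # class name assumed

scheduler = HeunDiscreteScheduler()  # documented defaults

# Pick the number of denoising steps for inference and where the timestep tensor should live.
device = "cuda" if torch.cuda.is_available() else "cpu"
scheduler.set_timesteps(num_inference_steps=25, device=device)

# scheduler.timesteps now holds the discrete timesteps the sampling loop iterates over.
print(scheduler.timesteps)
```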
step ( model_output: typing.Union[torch.FloatTensor, numpy.ndarray] timestep: typing.Union[float, torch.FloatTensor] sample: typing.Union[torch.FloatTensor, numpy.ndarray] return_dict: bool = True ) → SchedulerOutput or tuple
Predicts the sample at the previous timestep from the learned model output (most often the predicted noise). Core function to propagate the diffusion process.
Parameters
model_output (torch.FloatTensor or np.ndarray): direct output from the learned diffusion model.
timestep (float or torch.FloatTensor): current discrete timestep in the diffusion chain.
sample (torch.FloatTensor or np.ndarray): current instance of the sample being created by the diffusion process.
return_dict (bool): whether to return a SchedulerOutput class rather than a plain tuple.
Returns
SchedulerOutput or tuple: a SchedulerOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
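To show how step() fits together with set_timesteps() and scale_model_input(), here is a minimal denoising-loop sketch. It assumes the HeunDiscreteScheduler class name, a UNet2DConditionModel-style model, and a Stable Diffusion checkpoint id; the conditioning tensor is a placeholder that would normally come from a text encoder.

```python
import torch
from diffusers import HeunDiscreteScheduler, UNet2DConditionModel  # class names assumed

repo = "runwayml/stable-diffusion-v1-5"  # illustrative checkpoint id
model = UNet2DConditionModel.from_pretrained(repo, subfolder="unet")
scheduler = HeunDiscreteScheduler.from_pretrained(repo, subfolder="scheduler")

scheduler.set_timesteps(num_inference_steps=25)

# Start from pure noise, scaled to the scheduler's initial noise level.
sample = torch.randn(1, model.config.in_channels, 64, 64) * scheduler.init_noise_sigma
# Placeholder conditioning; a real pipeline would use text-encoder embeddings here.
encoder_hidden_states = torch.randn(1, 77, model.config.cross_attention_dim)

for t in scheduler.timesteps:
    # Scale the model input to the current noise level before the forward pass.
    model_input = scheduler.scale_model_input(sample, t)
    with torch.no_grad():
        noise_pred = model(model_input, t, encoder_hidden_states=encoder_hidden_states).sample
    # step() returns a SchedulerOutput by default; .prev_sample is the input for the next iteration.
    sample = scheduler.step(noise_pred, t, sample).prev_sample
```

In a full pipeline the final sample (latents, in the Stable Diffusion case) would then be decoded, e.g. by a VAE, rather than used directly.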