Diffusers contains multiple pre-built schedule functions for the diffusion process.
The schedule functions, denoted Schedulers in the library, take in the output of a trained model, a sample which the diffusion process is iterating on, and a timestep, and return a denoised sample. That is why schedulers may also be called Samplers in other diffusion model implementations.
All schedulers take in a timestep to predict the updated version of the sample being diffused.
The timesteps dictate where in the diffusion process the step is: data is generated by iterating forward in time, and inference is executed by propagating backward through the timesteps.
Different algorithms use timesteps that can be either discrete (accepting int inputs), such as the DDPMScheduler or PNDMScheduler, or continuous (accepting float inputs), such as the score-based schedulers ScoreSdeVeScheduler or ScoreSdeVpScheduler.
The core design principle behind the schedule functions is to be model, system, and framework independent. This allows for rapid experimentation and cleaner abstractions in the code, where the model prediction is separated from the sample update. To this end, the design of schedulers is such that:
The core API for any new scheduler must follow a limited structure:
- a step(...) function that is called to update the generated sample iteratively, and
- a set_timesteps(...) method that configures the parameters of a schedule function for a specific inference task (see the sketch below).
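A minimal sketch of that two-call contract. The scheduler class, tensor shapes, and the random tensor standing in for a trained model's prediction are illustrative:

```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(50)                    # configure the schedule for 50 inference steps

sample = torch.randn(1, 3, 64, 64)             # start from pure noise
for t in scheduler.timesteps:
    model_output = torch.randn_like(sample)    # stand-in for a trained model's prediction
    sample = scheduler.step(model_output, t, sample).prev_sample
```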
The base class SchedulerMixin implements low-level utilities shared by multiple schedulers.
Mixin containing common functions for the schedulers.
Class attributes:
- _compatibles (List[str]) — a list of classes that are compatible with the parent class, so that from_config can be used from a class different than the one used to save the config (should be overridden by the parent class).

from_pretrained ( pretrained_model_name_or_path: typing.Dict[str, typing.Any] = None subfolder: typing.Optional[str] = None return_unused_kwargs = False **kwargs )
Parameters
- pretrained_model_name_or_path (str or os.PathLike, optional) — Can be either:
  - a string, the model id of a pretrained scheduler configuration hosted on huggingface.co, e.g. google/ddpm-celebahq-256, or
  - a path to a directory containing a scheduler configuration saved with save_pretrained(), e.g. ../my_model_directory/.
- subfolder (str, optional) — In case the relevant files are located inside a subfolder of the model repo (either remote on huggingface.co or downloaded locally), you can specify the folder name here.
- return_unused_kwargs (bool, optional, defaults to False) — Whether kwargs that are not consumed by the Python class should be returned or not.
- cache_dir (Union[str, os.PathLike], optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (bool, optional, defaults to False) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- output_loading_info (bool, optional, defaults to False) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (bool, optional, defaults to False) — Whether or not to only look at local files (i.e., do not try to download the model).
- use_auth_token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, will use the token generated when running transformers-cli login (stored in ~/.huggingface).
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so revision can be any identifier allowed by git.
Instantiate a Scheduler class from a pre-defined JSON configuration file inside a directory or Hub repo.
It is required to be logged in (huggingface-cli login) when you want to use private or gated models.
Activate the special “offline-mode” to use this method in a firewalled environment.
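For example, loading a scheduler configuration from the Hub or from disk might look as follows. A sketch; the subfolder="scheduler" layout is an assumption about how the repo is organized:

```python
from diffusers import DDPMScheduler

# From a Hub repo whose scheduler config lives in a "scheduler" subfolder.
scheduler = DDPMScheduler.from_pretrained("google/ddpm-celebahq-256", subfolder="scheduler")

# Or from a local directory produced by save_pretrained().
scheduler = DDPMScheduler.from_pretrained("../my_model_directory/")
```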
save_pretrained ( save_directory: typing.Union[str, os.PathLike] push_to_hub: bool = False **kwargs )
Save a scheduler configuration object to the directory save_directory, so that it can be re-loaded using the from_pretrained() class method.
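A minimal save/reload round trip (the directory name is illustrative):

```python
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)
scheduler.save_pretrained("./my_scheduler")           # writes the JSON configuration file
reloaded = DDPMScheduler.from_pretrained("./my_scheduler")
assert reloaded.config.num_train_timesteps == 1000    # config attributes survive the round trip
```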
SchedulerOutput ( prev_sample: FloatTensor )
Base class for the scheduler’s step function output.
DDIMScheduler
Original paper can be found here: https://arxiv.org/abs/2010.02502.
( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None clip_sample: bool = True set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' **kwargs )
Parameters
- num_train_timesteps (int) — number of diffusion steps used to train the model.
- beta_start (float) — the starting beta value of inference.
- beta_end (float) — the final beta value.
- beta_schedule (str) — the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear, scaled_linear, or squaredcos_cap_v2.
- trained_betas (np.ndarray, optional) — option to pass an array of betas directly to the constructor to bypass beta_start, beta_end, etc.
- clip_sample (bool, default True) — option to clip the predicted sample between -1 and 1 for numerical stability.
- set_alpha_to_one (bool, default True) — each diffusion step uses the value of the alphas product at that step and at the previous one. For the final step there is no previous alpha. When this option is True the previous alpha product is fixed to 1; otherwise it uses the value of alpha at step 0.
- steps_offset (int, default 0) — an offset added to the inference steps. You can use a combination of offset=1 and set_alpha_to_one=False to make the last step use step 0 for the previous alpha product, as done in Stable Diffusion.
- prediction_type (str, default epsilon, optional) — prediction type of the scheduler function: one of epsilon (predicting the noise of the diffusion process), sample (directly predicting the noisy sample), or v_prediction (see section 2.4 of https://imagen.research.google/video/paper.pdf).
Denoising diffusion implicit models is a scheduler that extends the denoising procedure introduced in denoising diffusion probabilistic models (DDPMs) with non-Markovian guidance.
~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions.
For more details, see the original paper: https://arxiv.org/abs/2010.02502
scale_model_input ( sample: FloatTensor timestep: typing.Optional[int] = None ) → torch.FloatTensor
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
set_timesteps ( num_inference_steps: int device: typing.Union[str, torch.device] = None )
Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference.
step ( model_output: FloatTensor timestep: int sample: FloatTensor eta: float = 0.0 use_clipped_model_output: bool = False generator = None variance_noise: typing.Optional[torch.FloatTensor] = None return_dict: bool = True ) → ~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple
Parameters
- model_output (torch.FloatTensor) — direct output from learned diffusion model.
- timestep (int) — current discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — current instance of sample being created by diffusion process.
- eta (float) — weight of noise for added noise in diffusion step.
- use_clipped_model_output (bool) — if True, compute “corrected” model_output from the clipped predicted original sample. Necessary because the predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no clipping has happened, the “corrected” model_output would coincide with the one provided as input and use_clipped_model_output will have no effect.
- generator — random number generator.
- variance_noise (torch.FloatTensor) — instead of generating noise for the variance using generator, we can directly provide the noise for the variance itself. This is useful for methods such as CycleDiffusion (https://arxiv.org/abs/2210.05559).
- return_dict (bool) — option for returning a tuple rather than a DDIMSchedulerOutput class.
Returns
~schedulers.scheduling_utils.DDIMSchedulerOutput or tuple — ~schedulers.scheduling_utils.DDIMSchedulerOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion process from the learned model outputs (most often the predicted noise).
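A rough sketch of how step() is driven, with a random tensor standing in for a trained model's epsilon prediction (shapes are illustrative):

```python
import torch
from diffusers import DDIMScheduler

scheduler = DDIMScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(50)

sample = torch.randn(1, 3, 64, 64)             # start from pure noise
for t in scheduler.timesteps:
    model_output = torch.randn_like(sample)    # stand-in for a UNet's noise prediction
    # eta=0.0 gives the deterministic DDIM update; eta=1.0 adds DDPM-like stochasticity
    sample = scheduler.step(model_output, t, sample, eta=0.0).prev_sample
```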
DDPMScheduler
Original paper can be found here: https://arxiv.org/abs/2006.11239.
( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None variance_type: str = 'fixed_small' clip_sample: bool = True prediction_type: str = 'epsilon' **kwargs )
Parameters
- num_train_timesteps (int) — number of diffusion steps used to train the model.
- beta_start (float) — the starting beta value of inference.
- beta_end (float) — the final beta value.
- beta_schedule (str) — the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear, scaled_linear, or squaredcos_cap_v2.
- trained_betas (np.ndarray, optional) — option to pass an array of betas directly to the constructor to bypass beta_start, beta_end, etc.
- variance_type (str) — options to clip the variance used when adding noise to the denoised sample. Choose from fixed_small, fixed_small_log, fixed_large, fixed_large_log, learned or learned_range.
- clip_sample (bool, default True) — option to clip the predicted sample between -1 and 1 for numerical stability.
- prediction_type (str, default epsilon, optional) — prediction type of the scheduler function: one of epsilon (predicting the noise of the diffusion process), sample (directly predicting the noisy sample), or v_prediction (see section 2.4 of https://imagen.research.google/video/paper.pdf).
Denoising diffusion probabilistic models (DDPMs) explore the connections between denoising score matching and Langevin dynamics sampling.
~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions.
For more details, see the original paper: https://arxiv.org/abs/2006.11239
scale_model_input ( sample: FloatTensor timestep: typing.Optional[int] = None ) → torch.FloatTensor
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
set_timesteps ( num_inference_steps: int device: typing.Union[str, torch.device] = None )
Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference.
step ( model_output: FloatTensor timestep: int sample: FloatTensor generator = None return_dict: bool = True **kwargs ) → ~schedulers.scheduling_utils.DDPMSchedulerOutput or tuple
Parameters
- model_output (torch.FloatTensor) — direct output from learned diffusion model.
- timestep (int) — current discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — current instance of sample being created by diffusion process.
- generator — random number generator.
- return_dict (bool) — option for returning a tuple rather than a DDPMSchedulerOutput class.
Returns
~schedulers.scheduling_utils.DDPMSchedulerOutput or tuple — ~schedulers.scheduling_utils.DDPMSchedulerOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion process from the learned model outputs (most often the predicted noise).
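Because the DDPM update draws fresh noise at every step, passing a seeded generator makes the trajectory reproducible. A sketch (the model output is a random stand-in):

```python
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(variance_type="fixed_small")
scheduler.set_timesteps(50)
generator = torch.Generator().manual_seed(0)   # seeds the per-step variance noise

sample = torch.randn(1, 3, 64, 64, generator=generator)
for t in scheduler.timesteps:
    model_output = torch.randn_like(sample)    # stand-in for a trained UNet
    sample = scheduler.step(model_output, t, sample, generator=generator).prev_sample
```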
DPMSolverSinglestepScheduler
Original paper can be found here (https://arxiv.org/abs/2206.00927) and the improved version here (https://arxiv.org/abs/2211.01095). The original implementation can be found here (https://github.com/LuChengTHU/dpm-solver).
( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: typing.Optional[numpy.ndarray] = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True )
Parameters
- num_train_timesteps (int) — number of diffusion steps used to train the model.
- beta_start (float) — the starting beta value of inference.
- beta_end (float) — the final beta value.
- beta_schedule (str) — the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear, scaled_linear, or squaredcos_cap_v2.
- trained_betas (np.ndarray, optional) — option to pass an array of betas directly to the constructor to bypass beta_start, beta_end, etc.
- solver_order (int, default 2) — the order of DPM-Solver; can be 1, 2, or 3. We recommend using solver_order=2 for guided sampling, and solver_order=3 for unconditional sampling.
- prediction_type (str, default epsilon) — indicates whether the model predicts the noise (epsilon) or the data (x0). One of epsilon, sample, or v-prediction.
- thresholding (bool, default False) — whether to use the “dynamic thresholding” method (introduced by Imagen, https://arxiv.org/abs/2205.11487). For pixel-space diffusion models, you can set both algorithm_type=dpmsolver++ and thresholding=True to use dynamic thresholding. Note that the thresholding method is unsuitable for latent-space diffusion models (such as Stable Diffusion).
- dynamic_thresholding_ratio (float, default 0.995) — the ratio for the dynamic thresholding method. The default is 0.995, the same as Imagen (https://arxiv.org/abs/2205.11487).
- sample_max_value (float, default 1.0) — the threshold value for dynamic thresholding. Valid only when thresholding=True and algorithm_type="dpmsolver++".
- algorithm_type (str, default dpmsolver++) — the algorithm type for the solver. Either dpmsolver or dpmsolver++. The dpmsolver type implements the algorithms in https://arxiv.org/abs/2206.00927, and the dpmsolver++ type implements the algorithms in https://arxiv.org/abs/2211.01095. We recommend using dpmsolver++ with solver_order=2 for guided sampling (e.g. Stable Diffusion).
- solver_type (str, default midpoint) — the solver type for the second-order solver. Either midpoint or heun. The solver type slightly affects the sample quality, especially for a small number of steps. We empirically find that midpoint solvers are slightly better, so we recommend using the midpoint type.
- lower_order_final (bool, default True) — whether to use lower-order solvers in the final steps. For singlestep schedulers, we recommend enabling this to use up all the function evaluations.
DPM-Solver (and the improved version DPM-Solver++) is a fast dedicated high-order solver for diffusion ODEs with the convergence order guarantee. Empirically, sampling by DPM-Solver with only 20 steps can generate high-quality samples, and it can generate quite good samples even in only 10 steps.
For more details, see the original paper: https://arxiv.org/abs/2206.00927 and https://arxiv.org/abs/2211.01095
Currently, we support the singlestep DPM-Solver for both noise prediction models and data prediction models. We recommend using solver_order=2 for guided sampling, and solver_order=3 for unconditional sampling.
We also support the “dynamic thresholding” method in Imagen (https://arxiv.org/abs/2205.11487). For pixel-space diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use dynamic thresholding. Note that the thresholding method is unsuitable for latent-space diffusion models (such as Stable Diffusion).
~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions.
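For example, configuring the singlestep solver for guided sampling of a pixel-space model following the recommendations above (the values shown are the documented defaults or recommendations, not requirements):

```python
from diffusers import DPMSolverSinglestepScheduler

scheduler = DPMSolverSinglestepScheduler(
    solver_order=2,                  # recommended for guided sampling
    algorithm_type="dpmsolver++",
    thresholding=True,               # dynamic thresholding; pixel-space models only
    dynamic_thresholding_ratio=0.995,
)
scheduler.set_timesteps(20)          # DPM-Solver targets few-step sampling
```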
convert_model_output ( model_output: FloatTensor timestep: int sample: FloatTensor ) → torch.FloatTensor
Parameters
- model_output (torch.FloatTensor) — direct output from learned diffusion model.
- timestep (int) — current discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — current instance of sample being created by diffusion process.
Returns
torch.FloatTensor — the converted model output.
Convert the model output to the corresponding type that the algorithm (DPM-Solver / DPM-Solver++) needs.
DPM-Solver is designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an integral of the data prediction model. So we need to first convert the model output to the corresponding type to match the algorithm.
Note that the algorithm type and the model type are decoupled. That is to say, we can use either DPM-Solver or DPM-Solver++ for both noise prediction models and data prediction models.
dpm_solver_first_order_update ( model_output: FloatTensor timestep: int prev_timestep: int sample: FloatTensor ) → torch.FloatTensor
Parameters
- model_output (torch.FloatTensor) — direct output from learned diffusion model.
- timestep (int) — current discrete timestep in the diffusion chain.
- prev_timestep (int) — previous discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — current instance of sample being created by diffusion process.
Returns
torch.FloatTensor — the sample tensor at the previous timestep.
One step for the first-order DPM-Solver (equivalent to DDIM).
See https://arxiv.org/abs/2206.00927 for the detailed derivation.
get_order_list ( num_inference_steps: int )
Computes the solver order at each time step.
scale_model_input ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
set_timesteps ( num_inference_steps: int device: typing.Union[str, torch.device] = None )
Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
singlestep_dpm_solver_second_order_update ( model_output_list: typing.List[torch.FloatTensor] timestep_list: typing.List[int] prev_timestep: int sample: FloatTensor ) → torch.FloatTensor
Parameters
- model_output_list (List[torch.FloatTensor]) — direct outputs from learned diffusion model at current and latter timesteps.
- timestep_list (List[int]) — current and latter discrete timesteps in the diffusion chain.
- prev_timestep (int) — previous discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — current instance of sample being created by diffusion process.
Returns
torch.FloatTensor — the sample tensor at the previous timestep.
One step for the second-order singlestep DPM-Solver.
It computes the solution at time prev_timestep from the time timestep_list[-2].
singlestep_dpm_solver_third_order_update ( model_output_list: typing.List[torch.FloatTensor] timestep_list: typing.List[int] prev_timestep: int sample: FloatTensor ) → torch.FloatTensor
Parameters
- model_output_list (List[torch.FloatTensor]) — direct outputs from learned diffusion model at current and latter timesteps.
- timestep_list (List[int]) — current and latter discrete timesteps in the diffusion chain.
- prev_timestep (int) — previous discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — current instance of sample being created by diffusion process.
Returns
torch.FloatTensor — the sample tensor at the previous timestep.
One step for the third-order singlestep DPM-Solver.
It computes the solution at time prev_timestep from the time timestep_list[-3].
singlestep_dpm_solver_update ( model_output_list: typing.List[torch.FloatTensor] timestep_list: typing.List[int] prev_timestep: int sample: FloatTensor order: int ) → torch.FloatTensor
Parameters
- model_output_list (List[torch.FloatTensor]) — direct outputs from learned diffusion model at current and latter timesteps.
- timestep_list (List[int]) — current and latter discrete timesteps in the diffusion chain.
- prev_timestep (int) — previous discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — current instance of sample being created by diffusion process.
- order (int) — the solver order at this step.
Returns
torch.FloatTensor — the sample tensor at the previous timestep.
One step for the singlestep DPM-Solver.
step ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → ~scheduling_utils.SchedulerOutput or tuple
Parameters
- model_output (torch.FloatTensor) — direct output from learned diffusion model.
- timestep (int) — current discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — current instance of sample being created by diffusion process.
- return_dict (bool) — option for returning a tuple rather than a SchedulerOutput class.
Returns
~scheduling_utils.SchedulerOutput or tuple — ~scheduling_utils.SchedulerOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
Step function propagating the sample with the singlestep DPM-Solver.
DPMSolverMultistepScheduler
Original paper can be found here (https://arxiv.org/abs/2206.00927) and the improved version here (https://arxiv.org/abs/2211.01095). The original implementation can be found here (https://github.com/LuChengTHU/dpm-solver).
( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None solver_order: int = 2 prediction_type: str = 'epsilon' thresholding: bool = False dynamic_thresholding_ratio: float = 0.995 sample_max_value: float = 1.0 algorithm_type: str = 'dpmsolver++' solver_type: str = 'midpoint' lower_order_final: bool = True **kwargs )
Parameters
- num_train_timesteps (int) — number of diffusion steps used to train the model.
- beta_start (float) — the starting beta value of inference.
- beta_end (float) — the final beta value.
- beta_schedule (str) — the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear, scaled_linear, or squaredcos_cap_v2.
- trained_betas (np.ndarray, optional) — option to pass an array of betas directly to the constructor to bypass beta_start, beta_end, etc.
- solver_order (int, default 2) — the order of DPM-Solver; can be 1, 2, or 3. We recommend using solver_order=2 for guided sampling, and solver_order=3 for unconditional sampling.
- prediction_type (str, default epsilon, optional) — prediction type of the scheduler function: one of epsilon (predicting the noise of the diffusion process), sample (directly predicting the noisy sample), or v_prediction (see section 2.4 of https://imagen.research.google/video/paper.pdf).
- thresholding (bool, default False) — whether to use the “dynamic thresholding” method (introduced by Imagen, https://arxiv.org/abs/2205.11487). For pixel-space diffusion models, you can set both algorithm_type=dpmsolver++ and thresholding=True to use dynamic thresholding. Note that the thresholding method is unsuitable for latent-space diffusion models (such as Stable Diffusion).
- dynamic_thresholding_ratio (float, default 0.995) — the ratio for the dynamic thresholding method. The default is 0.995, the same as Imagen (https://arxiv.org/abs/2205.11487).
- sample_max_value (float, default 1.0) — the threshold value for dynamic thresholding. Valid only when thresholding=True and algorithm_type="dpmsolver++".
- algorithm_type (str, default dpmsolver++) — the algorithm type for the solver. Either dpmsolver or dpmsolver++. The dpmsolver type implements the algorithms in https://arxiv.org/abs/2206.00927, and the dpmsolver++ type implements the algorithms in https://arxiv.org/abs/2211.01095. We recommend using dpmsolver++ with solver_order=2 for guided sampling (e.g. Stable Diffusion).
- solver_type (str, default midpoint) — the solver type for the second-order solver. Either midpoint or heun. The solver type slightly affects the sample quality, especially for a small number of steps. We empirically find that midpoint solvers are slightly better, so we recommend using the midpoint type.
- lower_order_final (bool, default True) — whether to use lower-order solvers in the final steps. Only valid for fewer than 15 inference steps. We empirically find this trick can stabilize the sampling of DPM-Solver for steps < 15, especially for steps <= 10.
DPM-Solver (and the improved version DPM-Solver++) is a fast dedicated high-order solver for diffusion ODEs with the convergence order guarantee. Empirically, sampling by DPM-Solver with only 20 steps can generate high-quality samples, and it can generate quite good samples even in only 10 steps.
For more details, see the original paper: https://arxiv.org/abs/2206.00927 and https://arxiv.org/abs/2211.01095
Currently, we support the multistep DPM-Solver for both noise prediction models and data prediction models. We recommend using solver_order=2 for guided sampling, and solver_order=3 for unconditional sampling.
We also support the “dynamic thresholding” method in Imagen (https://arxiv.org/abs/2205.11487). For pixel-space diffusion models, you can set both algorithm_type="dpmsolver++" and thresholding=True to use dynamic thresholding. Note that the thresholding method is unsuitable for latent-space diffusion models (such as Stable Diffusion).
~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions.
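A common use is swapping the multistep solver into an existing pipeline so that ~20-step sampling works well. A sketch; the Stable Diffusion repo id is illustrative, and from_config reuses the pipeline's existing scheduler configuration:

```python
from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

image = pipe("an astronaut riding a horse", num_inference_steps=20).images[0]
```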
convert_model_output ( model_output: FloatTensor timestep: int sample: FloatTensor ) → torch.FloatTensor
Parameters
- model_output (torch.FloatTensor) — direct output from learned diffusion model.
- timestep (int) — current discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — current instance of sample being created by diffusion process.
Returns
torch.FloatTensor — the converted model output.
Convert the model output to the corresponding type that the algorithm (DPM-Solver / DPM-Solver++) needs.
DPM-Solver is designed to discretize an integral of the noise prediction model, and DPM-Solver++ is designed to discretize an integral of the data prediction model. So we need to first convert the model output to the corresponding type to match the algorithm.
Note that the algorithm type and the model type are decoupled. That is to say, we can use either DPM-Solver or DPM-Solver++ for both noise prediction models and data prediction models.
dpm_solver_first_order_update ( model_output: FloatTensor timestep: int prev_timestep: int sample: FloatTensor ) → torch.FloatTensor
Parameters
- model_output (torch.FloatTensor) — direct output from learned diffusion model.
- timestep (int) — current discrete timestep in the diffusion chain.
- prev_timestep (int) — previous discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — current instance of sample being created by diffusion process.
Returns
torch.FloatTensor — the sample tensor at the previous timestep.
One step for the first-order DPM-Solver (equivalent to DDIM).
See https://arxiv.org/abs/2206.00927 for the detailed derivation.
multistep_dpm_solver_second_order_update ( model_output_list: typing.List[torch.FloatTensor] timestep_list: typing.List[int] prev_timestep: int sample: FloatTensor ) → torch.FloatTensor
Parameters
- model_output_list (List[torch.FloatTensor]) — direct outputs from learned diffusion model at current and latter timesteps.
- timestep_list (List[int]) — current and latter discrete timesteps in the diffusion chain.
- prev_timestep (int) — previous discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — current instance of sample being created by diffusion process.
Returns
torch.FloatTensor — the sample tensor at the previous timestep.
One step for the second-order multistep DPM-Solver.
multistep_dpm_solver_third_order_update ( model_output_list: typing.List[torch.FloatTensor] timestep_list: typing.List[int] prev_timestep: int sample: FloatTensor ) → torch.FloatTensor
Parameters
- model_output_list (List[torch.FloatTensor]) — direct outputs from learned diffusion model at current and latter timesteps.
- timestep_list (List[int]) — current and latter discrete timesteps in the diffusion chain.
- prev_timestep (int) — previous discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — current instance of sample being created by diffusion process.
Returns
torch.FloatTensor — the sample tensor at the previous timestep.
One step for the third-order multistep DPM-Solver.
scale_model_input ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
set_timesteps ( num_inference_steps: int device: typing.Union[str, torch.device] = None )
Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
step ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → ~scheduling_utils.SchedulerOutput or tuple
Parameters
- model_output (torch.FloatTensor) — direct output from learned diffusion model.
- timestep (int) — current discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — current instance of sample being created by diffusion process.
- return_dict (bool) — option for returning a tuple rather than a SchedulerOutput class.
Returns
~scheduling_utils.SchedulerOutput or tuple — ~scheduling_utils.SchedulerOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
Step function propagating the sample with the multistep DPM-Solver.
HeunDiscreteScheduler
Algorithm 1 of Karras et al. (2022). Scheduler ported from @crowsonkb’s https://github.com/crowsonkb/k-diffusion library. All credit for making this scheduler work goes to Katherine Crowson.
( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None prediction_type: str = 'epsilon' )
Parameters
- num_train_timesteps (int) — number of diffusion steps used to train the model.
- beta_start (float) — the starting beta value of inference.
- beta_end (float) — the final beta value.
- beta_schedule (str) — the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear or scaled_linear.
- trained_betas (np.ndarray, optional) — option to pass an array of betas directly to the constructor to bypass beta_start, beta_end, etc.
- prediction_type (str, default epsilon, optional) — prediction type of the scheduler function: one of epsilon (predicting the noise of the diffusion process), sample (directly predicting the noisy sample), or v_prediction (see section 2.4 of https://imagen.research.google/video/paper.pdf).
Implements Algorithm 2 (Heun steps) from Karras et al. (2022) for discrete beta schedules. Based on the original k-diffusion implementation by Katherine Crowson: https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L90
~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions.
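These k-diffusion-style schedulers expect the model input to be rescaled each step via scale_model_input(). A sketch of the loop (a random tensor stands in for the trained model, and the init_noise_sigma attribute for the initial noise scale is an assumption about this scheduler's API):

```python
import torch
from diffusers import HeunDiscreteScheduler

scheduler = HeunDiscreteScheduler(beta_start=0.00085, beta_end=0.012)
scheduler.set_timesteps(25)

sample = torch.randn(1, 3, 64, 64) * scheduler.init_noise_sigma
for t in scheduler.timesteps:
    scaled = scheduler.scale_model_input(sample, t)   # rescale the input for this sigma
    model_output = torch.randn_like(scaled)           # stand-in for the trained model
    sample = scheduler.step(model_output, t, sample).prev_sample
```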
scale_model_input ( sample: FloatTensor timestep: typing.Union[float, torch.FloatTensor] ) → torch.FloatTensor
set_timesteps ( num_inference_steps: int device: typing.Union[str, torch.device] = None num_train_timesteps: typing.Optional[int] = None )
Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
step ( model_output: typing.Union[torch.FloatTensor, numpy.ndarray] timestep: typing.Union[float, torch.FloatTensor] sample: typing.Union[torch.FloatTensor, numpy.ndarray] return_dict: bool = True ) → SchedulerOutput or tuple
Parameters
- model_output (torch.FloatTensor or np.ndarray) — direct output from learned diffusion model.
- timestep (int) — current discrete timestep in the diffusion chain.
- sample (torch.FloatTensor or np.ndarray) — current instance of sample being created by diffusion process.
- return_dict (bool) — option for returning a tuple rather than a SchedulerOutput class.
Returns
SchedulerOutput or tuple — SchedulerOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
KDPM2DiscreteScheduler
Inspired by Karras et al. (2022). Scheduler ported from @crowsonkb’s https://github.com/crowsonkb/k-diffusion library. All credit for making this scheduler work goes to Katherine Crowson.
( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None prediction_type: str = 'epsilon' )
Parameters
- num_train_timesteps (int) — number of diffusion steps used to train the model.
- beta_start (float) — the starting beta value of inference.
- beta_end (float) — the final beta value.
- beta_schedule (str) — the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear or scaled_linear.
- trained_betas (np.ndarray, optional) — option to pass an array of betas directly to the constructor to bypass beta_start, beta_end, etc.
- prediction_type (str, default epsilon, optional) — prediction type of the scheduler function: one of epsilon (predicting the noise of the diffusion process), sample (directly predicting the noisy sample), or v_prediction (see section 2.4 of https://imagen.research.google/video/paper.pdf).
Scheduler created by @crowsonkb in k_diffusion, see: https://github.com/crowsonkb/k-diffusion/blob/5b3af030dd83e0297272d861c19477735d0317ec/k_diffusion/sampling.py#L188
Scheduler inspired by DPM-Solver-2 and Algorithm 2 from Karras et al. (2022).
~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions.
scale_model_input ( sample: FloatTensor timestep: typing.Union[float, torch.FloatTensor] ) → torch.FloatTensor
set_timesteps ( num_inference_steps: int device: typing.Union[str, torch.device] = None num_train_timesteps: typing.Optional[int] = None )
Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
step ( model_output: typing.Union[torch.FloatTensor, numpy.ndarray] timestep: typing.Union[float, torch.FloatTensor] sample: typing.Union[torch.FloatTensor, numpy.ndarray] return_dict: bool = True ) → SchedulerOutput or tuple
Parameters
- model_output (torch.FloatTensor or np.ndarray) — direct output from learned diffusion model.
- timestep (int) — current discrete timestep in the diffusion chain.
- sample (torch.FloatTensor or np.ndarray) — current instance of sample being created by diffusion process.
- return_dict (bool) — option for returning a tuple rather than a SchedulerOutput class.
Returns
SchedulerOutput or tuple — SchedulerOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
KDPM2AncestralDiscreteScheduler
Inspired by Karras et al. (2022). Scheduler ported from @crowsonkb’s https://github.com/crowsonkb/k-diffusion library. All credit for making this scheduler work goes to Katherine Crowson.
( num_train_timesteps: int = 1000 beta_start: float = 0.00085 beta_end: float = 0.012 beta_schedule: str = 'linear' trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None prediction_type: str = 'epsilon' )
Parameters
- num_train_timesteps (int) — number of diffusion steps used to train the model.
- beta_start (float) — the starting beta value of inference.
- beta_end (float) — the final beta value.
- beta_schedule (str) — the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear or scaled_linear.
- trained_betas (np.ndarray, optional) — option to pass an array of betas directly to the constructor to bypass beta_start, beta_end, etc.
- prediction_type (str, default epsilon, optional) — prediction type of the scheduler function: one of epsilon (predicting the noise of the diffusion process), sample (directly predicting the noisy sample), or v_prediction (see section 2.4 of https://imagen.research.google/video/paper.pdf).
Scheduler created by @crowsonkb in k_diffusion, see: https://github.com/crowsonkb/k-diffusion/blob/5b3af030dd83e0297272d861c19477735d0317ec/k_diffusion/sampling.py#L188
Scheduler inspired by DPM-Solver-2 and Algorithm 2 from Karras et al. (2022).
~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions.
scale_model_input ( sample: FloatTensor timestep: typing.Union[float, torch.FloatTensor] ) → torch.FloatTensor
set_timesteps ( num_inference_steps: int device: typing.Union[str, torch.device] = None num_train_timesteps: typing.Optional[int] = None )
Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
step ( model_output: typing.Union[torch.FloatTensor, numpy.ndarray] timestep: typing.Union[float, torch.FloatTensor] sample: typing.Union[torch.FloatTensor, numpy.ndarray] generator: typing.Optional[torch._C.Generator] = None return_dict: bool = True ) → SchedulerOutput or tuple
Parameters
- model_output (torch.FloatTensor or np.ndarray) — direct output from learned diffusion model.
- timestep (int) — current discrete timestep in the diffusion chain.
- sample (torch.FloatTensor or np.ndarray) — current instance of sample being created by diffusion process.
- generator (torch.Generator, optional) — random number generator.
- return_dict (bool) — option for returning a tuple rather than a SchedulerOutput class.
Returns
SchedulerOutput or tuple — SchedulerOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
KarrasVeScheduler
Original paper can be found here: https://arxiv.org/abs/2206.00364.
( sigma_min: float = 0.02 sigma_max: float = 100 s_noise: float = 1.007 s_churn: float = 80 s_min: float = 0.05 s_max: float = 50 )
Parameters
- sigma_min (float) — minimum noise magnitude.
- sigma_max (float) — maximum noise magnitude.
- s_noise (float) — the amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, 1.011].
- s_churn (float) — the parameter controlling the overall amount of stochasticity. A reasonable range is [0, 100].
- s_min (float) — the start value of the sigma range where we add noise (enable stochasticity). A reasonable range is [0, 10].
- s_max (float) — the end value of the sigma range where we add noise. A reasonable range is [0.2, 80].
Stochastic sampling from Karras et al. [1] tailored to Variance-Exploding (VE) models [2]. Use Algorithm 2 and the VE column of Table 1 from [1] for reference.
[1] Karras, Tero, et al. “Elucidating the Design Space of Diffusion-Based Generative Models.” https://arxiv.org/abs/2206.00364 [2] Song, Yang, et al. “Score-based generative modeling through stochastic differential equations.” https://arxiv.org/abs/2011.13456
~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions.
For more details on the parameters, see the original paper’s Appendix E.: “Elucidating the Design Space of Diffusion-Based Generative Models.” https://arxiv.org/abs/2206.00364. The grid search values used to find the optimal {s_noise, s_churn, s_min, s_max} for a specific model are described in Table 5 of the paper.
add_noise_to_input ( sample: FloatTensor sigma: float generator: typing.Optional[torch._C.Generator] = None )
Explicit Langevin-like “churn” step of adding noise to the sample according to a factor gamma_i ≥ 0 to reach a higher noise level sigma_hat = sigma_i + gamma_i * sigma_i.
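The churn arithmetic from Algorithm 2 of Karras et al., written out for one step. This is an illustrative sketch, not the library implementation; the default values mirror the constructor defaults documented above:

```python
import torch

def churn(sample, sigma, num_steps, s_churn=80.0, s_min=0.05, s_max=50.0, s_noise=1.007):
    """One 'churn' move: raise the noise level from sigma to sigma_hat before the solver step."""
    gamma = min(s_churn / num_steps, 2 ** 0.5 - 1) if s_min <= sigma <= s_max else 0.0
    sigma_hat = sigma + gamma * sigma                     # sigma_hat = sigma_i + gamma_i * sigma_i
    noise = s_noise * torch.randn_like(sample)            # extra noise, scaled by s_noise
    sample_hat = sample + (sigma_hat**2 - sigma**2) ** 0.5 * noise
    return sample_hat, sigma_hat
```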
scale_model_input ( sample: FloatTensor timestep: typing.Optional[int] = None ) → torch.FloatTensor
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
set_timesteps ( num_inference_steps: int device: typing.Union[str, torch.device] = None )
Sets the continuous timesteps used for the diffusion chain. Supporting function to be run before inference.
step ( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor return_dict: bool = True ) → KarrasVeOutput or tuple
Parameters
- model_output (torch.FloatTensor) — direct output from learned diffusion model.
- sigma_hat (float) — TODO
- sigma_prev (float) — TODO
- sample_hat (torch.FloatTensor) — TODO
- return_dict (bool) — option for returning a tuple rather than a KarrasVeOutput class.
Returns
KarrasVeOutput or tuple — KarrasVeOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor. KarrasVeOutput carries the updated sample in the diffusion chain and the derivative (TODO double check).
Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion process from the learned model outputs (most often the predicted noise).
step_correct ( model_output: FloatTensor sigma_hat: float sigma_prev: float sample_hat: FloatTensor sample_prev: FloatTensor derivative: FloatTensor return_dict: bool = True ) → prev_sample (TODO)
Parameters
- model_output (torch.FloatTensor) — direct output from learned diffusion model.
- sigma_hat (float) — TODO
- sigma_prev (float) — TODO
- sample_hat (torch.FloatTensor) — TODO
- sample_prev (torch.FloatTensor) — TODO
- derivative (torch.FloatTensor) — TODO
- return_dict (bool) — option for returning a tuple rather than a KarrasVeOutput class.
Returns
prev_sample (TODO) — updated sample in the diffusion chain. derivative (TODO): TODO
Correct the predicted sample based on the model_output of the network. TODO complete description
LMSDiscreteScheduler
Original implementation can be found here: https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181
( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None prediction_type: str = 'epsilon' )
Parameters
- num_train_timesteps (int) — number of diffusion steps used to train the model.
- beta_start (float) — the starting beta value of inference.
- beta_end (float) — the final beta value.
- beta_schedule (str) — the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear or scaled_linear.
- trained_betas (np.ndarray, optional) — option to pass an array of betas directly to the constructor to bypass beta_start, beta_end, etc.
- prediction_type (str, default epsilon, optional) — prediction type of the scheduler function: one of epsilon (predicting the noise of the diffusion process), sample (directly predicting the noisy sample), or v_prediction (see section 2.4 of https://imagen.research.google/video/paper.pdf).
Linear Multistep Scheduler for discrete beta schedules. Based on the original k-diffusion implementation by Katherine Crowson: https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L181
~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions.
get_lms_coefficient ( order t current_order )
Compute a linear multistep coefficient.
scale_model_input ( sample: FloatTensor timestep: typing.Union[float, torch.FloatTensor] ) → torch.FloatTensor
Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the K-LMS algorithm.
set_timesteps ( num_inference_steps: int device: typing.Union[str, torch.device] = None )
Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
step ( model_output: FloatTensor timestep: typing.Union[float, torch.FloatTensor] sample: FloatTensor order: int = 4 return_dict: bool = True ) → ~schedulers.scheduling_utils.LMSDiscreteSchedulerOutput or tuple
Parameters
- model_output (torch.FloatTensor) — direct output from learned diffusion model.
- timestep (float) — current timestep in the diffusion chain.
- sample (torch.FloatTensor) — current instance of sample being created by diffusion process.
- order — coefficient for multi-step inference.
- return_dict (bool) — option for returning a tuple rather than an LMSDiscreteSchedulerOutput class.
Returns
~schedulers.scheduling_utils.LMSDiscreteSchedulerOutput or tuple — ~schedulers.scheduling_utils.LMSDiscreteSchedulerOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion process from the learned model outputs (most often the predicted noise).
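A sketch of the K-LMS loop, showing the (sigma**2 + 1) ** 0.5 input scaling mentioned above. The random tensor stands in for a trained model, and init_noise_sigma (the initial noise scale attribute) is an assumption about this scheduler's API:

```python
import torch
from diffusers import LMSDiscreteScheduler

scheduler = LMSDiscreteScheduler(beta_start=0.00085, beta_end=0.012, beta_schedule="scaled_linear")
scheduler.set_timesteps(30)

sample = torch.randn(1, 4, 64, 64) * scheduler.init_noise_sigma
for t in scheduler.timesteps:
    scaled = scheduler.scale_model_input(sample, t)   # divides by (sigma**2 + 1) ** 0.5
    model_output = torch.randn_like(scaled)           # stand-in for the trained model
    sample = scheduler.step(model_output, t, sample, order=4).prev_sample
```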
PNDMScheduler
Original implementation can be found here: https://github.com/luping-liu/PNDM.
( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None skip_prk_steps: bool = False set_alpha_to_one: bool = False prediction_type: str = 'epsilon' steps_offset: int = 0 )
Parameters
- num_train_timesteps (int) — number of diffusion steps used to train the model.
- beta_start (float) — the starting beta value of inference.
- beta_end (float) — the final beta value.
- beta_schedule (str) — the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear, scaled_linear, or squaredcos_cap_v2.
- trained_betas (np.ndarray, optional) — option to pass an array of betas directly to the constructor to bypass beta_start, beta_end, etc.
- skip_prk_steps (bool) — allows the scheduler to skip the Runge-Kutta steps that are defined in the original paper as being required before PLMS steps; defaults to False.
- set_alpha_to_one (bool, default False) — each diffusion step uses the value of the alphas product at that step and at the previous one. For the final step there is no previous alpha. When this option is True the previous alpha product is fixed to 1; otherwise it uses the value of alpha at step 0.
- prediction_type (str, default epsilon, optional) — prediction type of the scheduler function: one of epsilon (predicting the noise of the diffusion process), sample (directly predicting the noisy sample), or v_prediction (see section 2.4 of https://imagen.research.google/video/paper.pdf).
- steps_offset (int, default 0) — an offset added to the inference steps. You can use a combination of offset=1 and set_alpha_to_one=False to make the last step use step 0 for the previous alpha product, as done in Stable Diffusion.
Pseudo numerical methods for diffusion models (PNDM) proposes using more advanced ODE integration techniques, namely Runge-Kutta method and a linear multi-step method.
~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions.
For more details, see the original paper: https://arxiv.org/abs/2202.09778
scale_model_input ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
set_timesteps ( num_inference_steps: int device: typing.Union[str, torch.device] = None )
Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference.
step ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → SchedulerOutput or tuple
Parameters
- model_output (torch.FloatTensor) — direct output from learned diffusion model.
- timestep (int) — current discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — current instance of sample being created by diffusion process.
- return_dict (bool) — option for returning a tuple rather than a SchedulerOutput class.
Returns
SchedulerOutput or tuple — SchedulerOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion process from the learned model outputs (most often the predicted noise).
This function calls step_prk() or step_plms() depending on the internal variable counter.
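A sketch of PNDM in the common Stable-Diffusion-style configuration, where the warm-up Runge-Kutta steps are skipped so every step() call dispatches to step_plms() (the model output is a random stand-in):

```python
import torch
from diffusers import PNDMScheduler

scheduler = PNDMScheduler(skip_prk_steps=True, steps_offset=1, set_alpha_to_one=False)
scheduler.set_timesteps(50)

sample = torch.randn(1, 4, 64, 64)
for t in scheduler.timesteps:
    model_output = torch.randn_like(sample)   # stand-in for a trained UNet
    sample = scheduler.step(model_output, t, sample).prev_sample
```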
step_plms ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → ~scheduling_utils.SchedulerOutput or tuple
Parameters
- model_output (torch.FloatTensor) — direct output from learned diffusion model.
- timestep (int) — current discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — current instance of sample being created by diffusion process.
- return_dict (bool) — option for returning a tuple rather than a SchedulerOutput class.
Returns
~scheduling_utils.SchedulerOutput or tuple — ~scheduling_utils.SchedulerOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
Step function propagating the sample with the linear multi-step method. This uses a single forward pass per step and reuses multiple previous model outputs to approximate the solution.
step_prk ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → ~scheduling_utils.SchedulerOutput or tuple
Parameters
- model_output (torch.FloatTensor) — direct output from learned diffusion model.
- timestep (int) — current discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — current instance of sample being created by diffusion process.
- return_dict (bool) — option for returning a tuple rather than a SchedulerOutput class.
Returns
~scheduling_utils.SchedulerOutput or tuple — ~scheduling_utils.SchedulerOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
Step function propagating the sample with the Runge-Kutta method. RK takes 4 forward passes to approximate the solution to the differential equation.
ScoreSdeVeScheduler
Original paper can be found here: https://arxiv.org/abs/2011.13456.
( num_train_timesteps: int = 2000 snr: float = 0.15 sigma_min: float = 0.01 sigma_max: float = 1348.0 sampling_eps: float = 1e-05 correct_steps: int = 1 )
Parameters
- num_train_timesteps (int) — number of diffusion steps used to train the model.
- snr (float) — coefficient weighting the step from the model_output sample (from the network) to the random noise.
- sigma_min (float) — initial noise scale for the sigma sequence in the sampling procedure. The minimum sigma should mirror the distribution of the data.
- sigma_max (float) — maximum value used for the range of continuous timesteps passed into the model.
- sampling_eps (float) — the end value of sampling, where timesteps decrease progressively from 1 to epsilon.
- correct_steps (int) — number of correction steps performed on a produced sample.
The variance exploding stochastic differential equation (SDE) scheduler.
For more information, see the original paper: https://arxiv.org/abs/2011.13456
~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions.
scale_model_input ( sample: FloatTensor timestep: typing.Optional[int] = None ) → torch.FloatTensor
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
set_sigmas ( num_inference_steps: int sigma_min: float = None sigma_max: float = None sampling_eps: float = None )
Parameters
- num_inference_steps (int) — the number of diffusion steps used when generating samples with a pre-trained model.
- sigma_min (float, optional) — initial noise scale value (overrides value given at Scheduler instantiation).
- sigma_max (float, optional) — final noise scale value (overrides value given at Scheduler instantiation).
- sampling_eps (float, optional) — final timestep value (overrides value given at Scheduler instantiation).
Sets the noise scales used for the diffusion chain. Supporting function to be run before inference.
The sigmas control the weight of the drift and diffusion components of the sample update.
set_timesteps ( num_inference_steps: int sampling_eps: float = None device: typing.Union[str, torch.device] = None )
Sets the continuous timesteps used for the diffusion chain. Supporting function to be run before inference.
step_correct ( model_output: FloatTensor sample: FloatTensor generator: typing.Optional[torch._C.Generator] = None return_dict: bool = True ) → SdeVeOutput or tuple
Parameters
- model_output (torch.FloatTensor) — direct output from learned diffusion model.
- sample (torch.FloatTensor) — current instance of sample being created by diffusion process.
- generator — random number generator.
- return_dict (bool) — option for returning a tuple rather than an SdeVeOutput class.
Returns
SdeVeOutput or tuple — SdeVeOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
Correct the predicted sample based on the model_output of the network. This is often run repeatedly after making the prediction for the previous timestep.
step_pred ( model_output: FloatTensor timestep: int sample: FloatTensor generator: typing.Optional[torch._C.Generator] = None return_dict: bool = True ) → SdeVeOutput or tuple
Parameters
- model_output (torch.FloatTensor) — direct output from learned diffusion model.
- timestep (int) — current discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — current instance of sample being created by diffusion process.
- generator — random number generator.
- return_dict (bool) — option for returning a tuple rather than an SdeVeOutput class.
Returns
SdeVeOutput or tuple — SdeVeOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion process from the learned model outputs (most often the predicted noise).
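The VE scheduler is driven in a predictor-corrector fashion. A sketch of that loop; the method names step_pred and step_correct follow the diffusers implementation, and the score model is a random stand-in:

```python
import torch
from diffusers import ScoreSdeVeScheduler

scheduler = ScoreSdeVeScheduler(num_train_timesteps=2000, correct_steps=1)
scheduler.set_timesteps(100)
scheduler.set_sigmas(100)                           # must run after set_timesteps

sample = torch.randn(1, 3, 64, 64) * scheduler.config.sigma_max
for t in scheduler.timesteps:
    # corrector: Langevin-like correction at the current noise level
    for _ in range(scheduler.config.correct_steps):
        score = torch.randn_like(sample)            # stand-in for a score model
        sample = scheduler.step_correct(score, sample).prev_sample
    # predictor: reverse-SDE step toward the previous timestep
    score = torch.randn_like(sample)
    sample = scheduler.step_pred(score, t, sample).prev_sample
```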
IPNDMScheduler
Original implementation can be found here.
( num_train_timesteps: int = 1000 trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None )
Improved pseudo numerical methods for diffusion models (iPNDM), ported from @crowsonkb’s amazing k-diffusion library.
~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions.
For more details, see the original paper: https://arxiv.org/abs/2202.09778
scale_model_input ( sample: FloatTensor *args **kwargs ) → torch.FloatTensor
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
set_timesteps ( num_inference_steps: int device: typing.Union[str, torch.device] = None )
Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference.
step ( model_output: FloatTensor timestep: int sample: FloatTensor return_dict: bool = True ) → ~scheduling_utils.SchedulerOutput or tuple
Parameters
- model_output (torch.FloatTensor) — direct output from learned diffusion model.
- timestep (int) — current discrete timestep in the diffusion chain.
- sample (torch.FloatTensor) — current instance of sample being created by diffusion process.
- return_dict (bool) — option for returning a tuple rather than a SchedulerOutput class.
Returns
~scheduling_utils.SchedulerOutput or tuple — ~scheduling_utils.SchedulerOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
Step function propagating the sample with the linear multi-step method. This uses a single forward pass per step and reuses multiple previous model outputs to approximate the solution.
ScoreSdeVpScheduler
Original paper can be found here: https://arxiv.org/abs/2011.13456.
Score SDE-VP is under construction.
( num_train_timesteps = 2000 beta_min = 0.1 beta_max = 20 sampling_eps = 0.001 )
The variance preserving stochastic differential equation (SDE) scheduler.
~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and from_pretrained() functions.
For more information, see the original paper: https://arxiv.org/abs/2011.13456
UNDER CONSTRUCTION
EulerDiscreteScheduler
Euler scheduler (Algorithm 2) from the paper Elucidating the Design Space of Diffusion-Based Generative Models by Karras et al. (2022). Based on the original k-diffusion implementation by Katherine Crowson. A fast scheduler which often generates good outputs with 20-30 steps.
( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None prediction_type: str = 'epsilon' )
Parameters
- num_train_timesteps (int) — number of diffusion steps used to train the model.
- beta_start (float) — the starting beta value of inference.
- beta_end (float) — the final beta value.
- beta_schedule (str) — the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear or scaled_linear.
- trained_betas (np.ndarray, optional) — option to pass an array of betas directly to the constructor to bypass beta_start, beta_end, etc.
- prediction_type (str, default epsilon, optional) — prediction type of the scheduler function: one of epsilon (predicting the noise of the diffusion process), sample (directly predicting the noisy sample), or v_prediction (see section 2.4 of https://imagen.research.google/video/paper.pdf).
Euler scheduler (Algorithm 2) from Karras et al. (2022), https://arxiv.org/abs/2206.00364. Based on the original k-diffusion implementation by Katherine Crowson: https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L51
~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and
from_pretrained() functions.
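A hedged sketch of how this scheduler is typically swapped into a pipeline by reusing the stored config (the repo id and prompt are only examples):

```python
from diffusers import DiffusionPipeline, EulerDiscreteScheduler

pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# Reuse the existing scheduler config when swapping schedulers.
pipe.scheduler = EulerDiscreteScheduler.from_config(pipe.scheduler.config)

# 20-30 steps are often sufficient with this scheduler.
image = pipe("an astronaut riding a horse", num_inference_steps=30).images[0]
```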
( sample: FloatTensor timestep: typing.Union[float, torch.FloatTensor] ) → torch.FloatTensor
Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm.
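As a one-line sketch of the scaling described above (assuming sigma is the noise level at the given timestep, and that the scaling is a division so the model input stays at roughly unit variance):

```python
# sample divided by sqrt(sigma**2 + 1) keeps the model input at
# roughly unit variance regardless of the current noise level
scaled_sample = sample / ((sigma**2 + 1) ** 0.5)
```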
( num_inference_steps: int device: typing.Union[str, torch.device] = None )
Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
( model_output: FloatTensor timestep: typing.Union[float, torch.FloatTensor] sample: FloatTensor s_churn: float = 0.0 s_tmin: float = 0.0 s_tmax: float = inf s_noise: float = 1.0 generator: typing.Optional[torch._C.Generator] = None return_dict: bool = True ) → ~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput or tuple
Parameters
model_output (torch.FloatTensor) — direct output from the learned diffusion model.
timestep (float) — current timestep in the diffusion chain.
sample (torch.FloatTensor) — current instance of the sample being created by the diffusion process.
s_churn (float) — amount of stochasticity ("churn") to inject per step; 0.0 gives a deterministic Euler step.
s_tmin (float) — lower bound of the sigma range in which churn is applied.
s_tmax (float) — upper bound of the sigma range in which churn is applied.
s_noise (float) — scaling factor for the noise added when churn is applied.
generator (torch.Generator, optional) — random number generator.
return_dict (bool) — option for returning a tuple rather than an EulerDiscreteSchedulerOutput class.
Returns
~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput or tuple — ~schedulers.scheduling_utils.EulerDiscreteSchedulerOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion process from the learned model outputs (most often the predicted noise).
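An illustrative single step with the stochasticity knobs set (the concrete values are arbitrary; model_output, t and sample come from a loop like the one sketched earlier):

```python
import torch

generator = torch.Generator().manual_seed(0)
out = scheduler.step(
    model_output,
    t,
    sample,
    s_churn=0.4,   # inject some stochasticity per step
    s_tmin=0.05,   # extra noise is only applied within
    s_tmax=50.0,   # this sigma range
    s_noise=1.0,   # scale of the injected noise
    generator=generator,
)
sample = out.prev_sample
```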
Ancestral sampling with Euler method steps. Based on the original k-diffusion implementation by Katherine Crowson: https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72. A fast scheduler that can often generate good outputs in 20-30 steps.
( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None prediction_type: str = 'epsilon' )
Parameters
num_train_timesteps (int) — number of diffusion steps used to train the model.
beta_start (float) — the starting beta value of inference.
beta_end (float) — the final beta value.
beta_schedule (str) — the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear or scaled_linear.
trained_betas (np.ndarray, optional) — option to pass an array of betas directly to the constructor to bypass beta_start, beta_end, etc.
prediction_type (str, default epsilon, optional) — prediction type of the scheduler function, one of epsilon (predicting the noise of the diffusion process), sample (directly predicting the noisy sample) or v_prediction (see section 2.4 of https://imagen.research.google/video/paper.pdf).
Ancestral sampling with Euler method steps. Based on the original k-diffusion implementation by Katherine Crowson: https://github.com/crowsonkb/k-diffusion/blob/481677d114f6ea445aa009cf5bd7a9cdee909e47/k_diffusion/sampling.py#L72
~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and
from_pretrained() functions.
( sample: FloatTensor timestep: typing.Union[float, torch.FloatTensor] ) → torch.FloatTensor
Scales the denoising model input by (sigma**2 + 1) ** 0.5 to match the Euler algorithm.
( num_inference_steps: int device: typing.Union[str, torch.device] = None )
Sets the timesteps used for the diffusion chain. Supporting function to be run before inference.
( model_output: FloatTensor timestep: typing.Union[float, torch.FloatTensor] sample: FloatTensor generator: typing.Optional[torch._C.Generator] = None return_dict: bool = True ) → ~schedulers.scheduling_utils.EulerAncestralDiscreteSchedulerOutput or tuple
Parameters
model_output (torch.FloatTensor) — direct output from the learned diffusion model.
timestep (float) — current timestep in the diffusion chain.
sample (torch.FloatTensor) — current instance of the sample being created by the diffusion process.
generator (torch.Generator, optional) — random number generator.
return_dict (bool) — option for returning a tuple rather than an EulerAncestralDiscreteSchedulerOutput class.
Returns
~schedulers.scheduling_utils.EulerAncestralDiscreteSchedulerOutput or tuple — ~schedulers.scheduling_utils.EulerAncestralDiscreteSchedulerOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion process from the learned model outputs (most often the predicted noise).
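Since ancestral sampling adds fresh noise at every step, a seeded generator makes runs reproducible. A minimal sketch (model is a placeholder; shapes and step count are arbitrary examples):

```python
import torch
from diffusers import EulerAncestralDiscreteScheduler

scheduler = EulerAncestralDiscreteScheduler()
scheduler.set_timesteps(num_inference_steps=25)
generator = torch.Generator().manual_seed(42)

# Start from noise scaled to the scheduler's initial sigma.
sample = torch.randn(1, 3, 64, 64) * scheduler.init_noise_sigma
for t in scheduler.timesteps:
    model_input = scheduler.scale_model_input(sample, t)
    model_output = model(model_input, t).sample
    sample = scheduler.step(model_output, t, sample, generator=generator).prev_sample
```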
Original paper can be found here: https://arxiv.org/abs/2111.14822
( num_vec_classes: int num_train_timesteps: int = 100 alpha_cum_start: float = 0.99999 alpha_cum_end: float = 9e-06 gamma_cum_start: float = 9e-06 gamma_cum_end: float = 0.99999 )
Parameters
num_vec_classes (int) — The number of classes of the vector embeddings of the latent pixels. Includes the class for the masked latent pixel.
num_train_timesteps (int) — Number of diffusion steps used to train the model.
alpha_cum_start (float) — The starting cumulative alpha value.
alpha_cum_end (float) — The ending cumulative alpha value.
gamma_cum_start (float) — The starting cumulative gamma value.
gamma_cum_end (float) — The ending cumulative gamma value.
The VQ-diffusion transformer outputs predicted probabilities of the initial unnoised image.
The VQ-diffusion scheduler converts the transformer’s output into a sample for the unnoised image at the previous diffusion timestep.
~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and
from_pretrained() functions.
For more details, see the original paper: https://arxiv.org/abs/2111.14822
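A small construction sketch (the codebook size is a hypothetical example; note that num_vec_classes counts the masked class as well):

```python
from diffusers import VQDiffusionScheduler

# e.g. a VQ codebook with 4096 entries plus 1 masked class
scheduler = VQDiffusionScheduler(num_vec_classes=4096 + 1)
scheduler.set_timesteps(num_inference_steps=100)
print(scheduler.config.num_train_timesteps)  # 100 by default
```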
( t: torch.int32 x_t: LongTensor log_onehot_x_t: FloatTensor cumulative: bool ) → torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels)
Parameters
t (torch.int32) — The timestep that determines which transition matrix is used.
x_t (torch.LongTensor of shape (batch size, num latent pixels)) — The classes of each latent pixel at time t.
log_onehot_x_t (torch.FloatTensor of shape (batch size, num classes, num latent pixels)) — The log one-hot vectors of x_t.
cumulative (bool) — If cumulative is False, the single step transition matrix t-1 -> t is used. If cumulative is True, the cumulative transition matrix 0 -> t is used.
Returns
torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels) — Each column of the returned matrix is a row of log probabilities of the complete probability transition matrix. When non-cumulative, self.num_classes - 1 rows are returned because the initial latent pixel cannot be masked.
Where q_n is the probability distribution for the forward process of the n-th latent pixel and C_i is a latent pixel class, entry (i, n) of the result is, omitting logarithms, q_n(x_t | x_{t-1} = C_i) in the non-cumulative case and its cumulative analog q_n_cumulative(x_t | x_0 = C_i) in the cumulative case.
Returns the log probabilities of the rows from the (cumulative or non-cumulative) transition matrix for each latent pixel in x_t.
See equation (7) for the complete non-cumulative transition matrix. The complete cumulative transition matrix is the same structure except the parameters (alpha, beta, gamma) are the cumulative analogs.
( log_p_x_0 x_t t ) → torch.FloatTensor of shape (batch size, num classes, num latent pixels)
Calculates the log probabilities for the predicted classes of the image at timestep t-1, i.e. Equation (11).
Instead of directly computing Equation (11), we use Equation (5) to restate Equation (11) in terms of only forward probabilities.
Equation (11) stated in terms of forward probabilities via Equation (5), where the sum runs over the classes of x_0:
p(x_{t-1} | x_t) = sum( q(x_t | x_{t-1}) * q(x_{t-1} | x_0) * p(x_0) / q(x_t | x_0) )
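In log space, the sum over the classes of x_0 becomes a logsumexp. A minimal sketch of that computation (a hypothetical helper, not the library's implementation; each input is assumed to carry the x_0 classes along dim):

```python
import torch

def log_p_prev(log_q_xt_given_xtm1, log_q_xtm1_given_x0,
               log_q_xt_given_x0, log_p_x0, dim=0):
    # log sum_{x_0} exp( log q(x_t|x_{t-1}) + log q(x_{t-1}|x_0)
    #                    + log p(x_0) - log q(x_t|x_0) )
    summand = (log_q_xt_given_xtm1 + log_q_xtm1_given_x0
               + log_p_x0 - log_q_xt_given_x0)
    return torch.logsumexp(summand, dim=dim)
```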
( num_inference_steps: int device: typing.Union[str, torch.device] = None )
Sets the discrete timesteps used for the diffusion chain. Supporting function to be run before inference.
( model_output: FloatTensor timestep: torch.int64 sample: LongTensor generator: typing.Optional[torch._C.Generator] = None return_dict: bool = True ) → ~schedulers.scheduling_utils.VQDiffusionSchedulerOutput or tuple
Parameters
model_output (torch.FloatTensor) — direct output from the learned diffusion model, the predicted log probabilities for the classes of the unnoised image.
timestep (torch.long) — The timestep that determines which transition matrices are used.
sample (torch.LongTensor of shape (batch size, num latent pixels)) — The classes of each latent pixel at time t.
generator (torch.Generator, or None) — RNG for the noise applied to p(x_{t-1} | x_t) before it is sampled from.
return_dict (bool) — option for returning a tuple rather than a VQDiffusionSchedulerOutput class.
Returns
~schedulers.scheduling_utils.VQDiffusionSchedulerOutput or tuple — ~schedulers.scheduling_utils.VQDiffusionSchedulerOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
Predict the sample at the previous timestep via the reverse transition distribution, i.e. Equation (11). See the docstring for self.q_posterior for more in-depth documentation on how Equation (11) is computed.
DDPM-based inpainting scheduler for unsupervised inpainting with extreme masks. Intended for use with RePaintPipeline. Based on the paper RePaint: Inpainting using Denoising Diffusion Probabilistic Models and the original implementation by Andreas Lugmayr et al.: https://github.com/andreas128/RePaint
( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' eta: float = 0.0 trained_betas: typing.Optional[numpy.ndarray] = None clip_sample: bool = True )
Parameters
num_train_timesteps (int) — number of diffusion steps used to train the model.
beta_start (float) — the starting beta value of inference.
beta_end (float) — the final beta value.
beta_schedule (str) — the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear, scaled_linear, or squaredcos_cap_v2.
eta (float) — The weight of the added noise in a diffusion step. Its value is between 0.0 and 1.0; 0.0 corresponds to the DDIM scheduler and 1.0 to the DDPM scheduler.
trained_betas (np.ndarray, optional) — option to pass an array of betas directly to the constructor to bypass beta_start, beta_end, etc.
clip_sample (bool, default True) — option to clip the predicted sample between -1 and 1 for numerical stability.
RePaint is a schedule for DDPM inpainting inside a given mask.
~ConfigMixin takes care of storing all config attributes that are passed in the scheduler’s __init__ function, such as num_train_timesteps. They can be accessed via scheduler.config.num_train_timesteps.
SchedulerMixin provides general loading and saving functionality via the SchedulerMixin.save_pretrained() and
from_pretrained() functions.
For more details, see the original paper: https://arxiv.org/pdf/2201.09865.pdf
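A minimal inpainting-loop sketch, mirroring the structure of RePaintPipeline (illustrative only; unet, image and mask are placeholders, with mask values of 0.0 marking the region to inpaint; the resampling jumps use the scheduler's undo_step):

```python
import torch
from diffusers import RePaintScheduler

scheduler = RePaintScheduler()
scheduler.set_timesteps(num_inference_steps=250)
generator = torch.Generator().manual_seed(0)

sample = torch.randn_like(image)  # start from pure noise
t_last = scheduler.timesteps[0] + 1
for t in scheduler.timesteps:
    if t < t_last:
        # reverse step x_t -> x_{t-1}, keeping known pixels from the image
        model_output = unet(sample, t).sample
        sample = scheduler.step(
            model_output, t, sample, original_image=image, mask=mask,
            generator=generator,
        ).prev_sample
    else:
        # resampling jump: re-noise the sample x_{t-1} -> x_t
        sample = scheduler.undo_step(sample, t_last, generator)
    t_last = t
```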
( sample: FloatTensor timestep: typing.Optional[int] = None ) → torch.FloatTensor
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
( model_output: FloatTensor timestep: int sample: FloatTensor original_image: FloatTensor mask: FloatTensor generator: typing.Optional[torch._C.Generator] = None return_dict: bool = True ) → ~schedulers.scheduling_utils.RePaintSchedulerOutput or tuple
Parameters
model_output (torch.FloatTensor) — direct output from the learned diffusion model.
timestep (int) — current discrete timestep in the diffusion chain.
sample (torch.FloatTensor) — current instance of the sample being created by the diffusion process.
original_image (torch.FloatTensor) — the original image to inpaint on.
mask (torch.FloatTensor) — the mask where 0.0 values define which part of the original image to inpaint (change).
generator (torch.Generator, optional) — random number generator.
return_dict (bool) — option for returning a tuple rather than a RePaintSchedulerOutput class.
Returns
~schedulers.scheduling_utils.RePaintSchedulerOutput or tuple — ~schedulers.scheduling_utils.RePaintSchedulerOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion process from the learned model outputs (most often the predicted noise).