KarrasVeScheduler
KarrasVeScheduler is a stochastic sampler tailored to variance-expanding (VE) models. It is based on the Elucidating the Design Space of Diffusion-Based Generative Models and Score-based generative modeling through stochastic differential equations papers.
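A minimal instantiation sketch, assuming a diffusers version that still exposes KarrasVeScheduler as a top-level export; the default constructor arguments match the values documented below:

```python
from diffusers import KarrasVeScheduler

# Instantiate the scheduler with its documented defaults.
scheduler = KarrasVeScheduler()
print(scheduler.config)  # sigma_min, sigma_max, s_noise, s_churn, s_min, s_max
```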
KarrasVeScheduler
class diffusers.KarrasVeScheduler
< source >( sigma_min: float = 0.02, sigma_max: float = 100, s_noise: float = 1.007, s_churn: float = 80, s_min: float = 0.05, s_max: float = 50 )
Parameters
- sigma_min (float, defaults to 0.02) — The minimum noise magnitude.
- sigma_max (float, defaults to 100) — The maximum noise magnitude.
- s_noise (float, defaults to 1.007) — The amount of additional noise to counteract loss of detail during sampling. A reasonable range is [1.000, 1.011].
- s_churn (float, defaults to 80) — The parameter controlling the overall amount of stochasticity. A reasonable range is [0, 100].
- s_min (float, defaults to 0.05) — The start value of the sigma range to add noise (enable stochasticity). A reasonable range is [0, 10].
- s_max (float, defaults to 50) — The end value of the sigma range to add noise. A reasonable range is [0.2, 80].
A stochastic scheduler tailored to variance-expanding models.
This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic methods the library implements for all schedulers such as loading and saving.
For more details on the parameters, see Appendix E. The grid search values used to find the optimal {s_noise, s_churn, s_min, s_max} for a specific model are described in Table 5 of the paper.
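A sketch of overriding the stochasticity parameters at construction time; the values below are illustrative only, not tuned for any particular model:

```python
from diffusers import KarrasVeScheduler

# Lower the churn and restrict the sigma range in which churn noise is added.
# These values are illustrative; the optimal settings are model-specific
# (see Table 5 of the Karras et al. paper).
scheduler = KarrasVeScheduler(s_churn=50, s_min=0.1, s_max=10, s_noise=1.003)
```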
add_noise_to_input
< source >( sample: FloatTensor, sigma: float, generator: typing.Optional[torch._C.Generator] = None )
Explicit Langevin-like “churn” step of adding noise to the sample according to a gamma_i ≥ 0 to reach a higher noise level sigma_hat = sigma_i + gamma_i * sigma_i.
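A rough sketch of the math behind this churn step, following Algorithm 2 of the Karras et al. paper; the function and variable names are assumptions for illustration, not the library's implementation:

```python
import math
import torch

def churn_step(sample, sigma, s_churn, s_noise, s_min, s_max, num_steps, generator=None):
    # gamma_i > 0 only while sigma lies inside [s_min, s_max]; it is capped at sqrt(2) - 1.
    if s_min <= sigma <= s_max:
        gamma = min(s_churn / num_steps, math.sqrt(2) - 1)
    else:
        gamma = 0.0
    # Temporarily raise the noise level: sigma_hat = sigma + gamma * sigma.
    sigma_hat = sigma + gamma * sigma
    # Inject fresh noise so the sample matches the higher noise level sigma_hat.
    eps = s_noise * torch.randn(sample.shape, generator=generator)
    sample_hat = sample + math.sqrt(sigma_hat**2 - sigma**2) * eps
    return sample_hat, sigma_hat
```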
scale_model_input
< source >( sample: FloatTensor, timestep: typing.Optional[int] = None ) → torch.FloatTensor
Ensures interchangeability with schedulers that need to scale the denoising model input depending on the current timestep.
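A hypothetical loop fragment showing the interchangeability contract: calling scale_model_input before every model forward pass lets the same denoising loop work with schedulers that do and schedulers that do not rescale their inputs (unet, sample, and t are assumed to exist in the surrounding loop):

```python
# Inside a generic denoising loop; `unet`, `sample`, and `t` are assumed to exist.
model_input = scheduler.scale_model_input(sample, t)
noise_pred = unet(model_input, t).sample
```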
set_timesteps
< source >( num_inference_steps: int, device: typing.Union[str, torch.device] = None )
Sets the discrete timesteps used for the diffusion chain (to be run before inference).
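A small sketch of preparing the schedule before sampling; timesteps is part of the common scheduler API, while the name of the per-step sigma attribute (schedule) is an assumption here:

```python
scheduler.set_timesteps(num_inference_steps=50)

# The discrete timesteps to iterate over during sampling.
print(scheduler.timesteps)

# The corresponding noise levels (assumed to be exposed as `scheduler.schedule`).
print(scheduler.schedule)
```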
step
< source >( model_output: FloatTensor, sigma_hat: float, sigma_prev: float, sample_hat: FloatTensor, return_dict: bool = True ) → ~schedulers.scheduling_karras_ve.KarrasVeOutput or tuple
Parameters
- model_output (torch.FloatTensor) — The direct output from the learned diffusion model.
- sigma_hat (float) — The increased noise level sigma_hat produced by add_noise_to_input().
- sigma_prev (float) — The noise level of the previous timestep in the schedule.
- sample_hat (torch.FloatTensor) — The noised sample returned by add_noise_to_input().
- return_dict (bool, optional, defaults to True) — Whether or not to return a ~schedulers.scheduling_karras_ve.KarrasVeOutput or tuple.
Returns
~schedulers.scheduling_karras_ve.KarrasVeOutput or tuple
If return_dict is True, ~schedulers.scheduling_karras_ve.KarrasVeOutput is returned, otherwise a tuple is returned where the first element is the sample tensor.
Predict the sample from the previous timestep by reversing the SDE. This function propagates the diffusion process from the learned model outputs (most often the predicted noise).
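A rough sketch of the Euler step this method performs, following Algorithm 2 of the Karras et al. paper; the variable names and the convention for recovering the denoised sample from model_output are assumptions, not the library's exact code:

```python
# All samples are torch.FloatTensor; sigma_hat and sigma_prev are floats.
# Assumed convention: the denoised sample is recovered from the model output as
# pred_original = sample_hat + sigma_hat * model_output.
pred_original = sample_hat + sigma_hat * model_output

# ODE derivative d_i = (x_hat - denoised(x_hat)) / sigma_hat.
derivative = (sample_hat - pred_original) / sigma_hat

# Euler step from noise level sigma_hat down to sigma_prev.
prev_sample = sample_hat + (sigma_prev - sigma_hat) * derivative
```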
step_correct
< source >( model_output: FloatTensor, sigma_hat: float, sigma_prev: float, sample_hat: FloatTensor, sample_prev: FloatTensor, derivative: FloatTensor, return_dict: bool = True ) → prev_sample (TODO)
Parameters
- model_output (torch.FloatTensor) — The direct output from the learned diffusion model.
- sigma_hat (float) — TODO
- sigma_prev (float) — TODO
- sample_hat (torch.FloatTensor) — TODO
- sample_prev (torch.FloatTensor) — TODO
- derivative (torch.FloatTensor) — TODO
- return_dict (bool, optional, defaults to True) — Whether or not to return a ~schedulers.scheduling_karras_ve.KarrasVeOutput or tuple.
Returns
prev_sample (TODO)
updated sample in the diffusion chain. derivative (TODO): TODO
Corrects the predicted sample based on the model_output of the network.
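A sketch of how add_noise_to_input, step, and step_correct fit together as Heun's method (Algorithm 2 of the Karras et al. paper). The unet call convention, the starting-noise scaling, and the scheduler.schedule attribute are assumptions; treat this as an outline rather than exact pipeline code:

```python
import torch

scheduler.set_timesteps(num_inference_steps=50)
sample = torch.randn(1, 3, 256, 256) * scheduler.config.sigma_max  # start from pure noise

for t in scheduler.timesteps:
    sigma = scheduler.schedule[t]                       # current noise level (assumed attribute)
    sigma_prev = scheduler.schedule[t - 1] if t > 0 else 0

    # 1. Churn: temporarily raise the noise level and add matching noise.
    sample_hat, sigma_hat = scheduler.add_noise_to_input(sample, sigma)

    # 2. Euler step from sigma_hat down to sigma_prev.
    model_output = unet(sample_hat, sigma_hat).sample   # assumed model call convention
    output = scheduler.step(model_output, sigma_hat, sigma_prev, sample_hat)

    # 3. Second-order (Heun) correction, skipped on the final step where sigma_prev == 0.
    if sigma_prev != 0:
        model_output = unet(output.prev_sample, sigma_prev).sample
        output = scheduler.step_correct(
            model_output, sigma_hat, sigma_prev, sample_hat, output.prev_sample, output.derivative
        )

    sample = output.prev_sample
```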
KarrasVeOutput
class diffusers.schedulers.scheduling_karras_ve.KarrasVeOutput
< source >( prev_sample: FloatTensor, derivative: FloatTensor, pred_original_sample: typing.Optional[torch.FloatTensor] = None )
Parameters
- prev_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — Computed sample (x_{t-1}) of the previous timestep. prev_sample should be used as the next model input in the denoising loop.
- derivative (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — Derivative of the predicted original image sample (x_0).
- pred_original_sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) for images) — The predicted denoised sample (x_0) based on the model output from the current timestep. pred_original_sample can be used to preview progress or for guidance.
Output class for the scheduler’s step function.
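A short usage sketch, assuming output is the value returned by step() with return_dict=True:

```python
output = scheduler.step(model_output, sigma_hat, sigma_prev, sample_hat)

sample = output.prev_sample            # next model input in the denoising loop
derivative = output.derivative         # reused by step_correct for the Heun correction
preview = output.pred_original_sample  # predicted x_0; may be None depending on the scheduler
```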