Schedulers
Diffusers contains multiple pre-built schedule functions for the diffusion process.
What is a scheduler?
The schedule functions, denoted Schedulers in the library, take in the output of a trained model, a sample which the diffusion process is iterating on, and a timestep, and return a denoised sample. That is why schedulers may also be called Samplers in other diffusion model implementations.
- Schedulers define the methodology for iteratively adding noise to an image or for updating a sample based on model outputs.
- Adding noise in different manners corresponds to the different algorithmic processes used to train a diffusion model by adding noise to images.
- For inference, the scheduler defines how to update a sample based on the output of a pretrained model.
- Schedulers are often defined by a noise schedule and an update rule to solve the differential equation (see the sketch after this list).
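As a concrete illustration of these two roles, the minimal sketch below uses DDPMScheduler; the random tensors standing in for images and model predictions, as well as the shapes, are placeholders.

```python
# A minimal sketch of the two roles of a scheduler, using DDPMScheduler as one concrete example.
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)

# Training: noise clean images at randomly drawn timesteps (forward process).
clean_images = torch.randn(4, 3, 32, 32)                  # stand-in for a batch of images
noise = torch.randn_like(clean_images)
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (4,))
noisy_images = scheduler.add_noise(clean_images, noise, timesteps)

# Inference: update a sample based on a model output (reverse process).
scheduler.set_timesteps(num_inference_steps=50)
sample = torch.randn(1, 3, 32, 32)                        # current noisy sample
model_output = torch.randn(1, 3, 32, 32)                  # stand-in for the model's noise prediction
sample = scheduler.step(model_output, scheduler.timesteps[0], sample).prev_sample
```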
Discrete versus continuous schedulers
All schedulers take in a timestep to predict the updated version of the sample being diffused.
The timestep dictates where in the diffusion process the step is: data is generated by iterating forward in time, while inference is executed by propagating backwards through the timesteps.
Different algorithms use timesteps that can be discrete (accepting `int` inputs), such as the DDPMScheduler or PNDMScheduler, or continuous (accepting `float` inputs), such as the score-based schedulers ScoreSdeVeScheduler or ScoreSdeVpScheduler.
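For example, a discrete scheduler such as DDPMScheduler exposes integer timesteps once `set_timesteps` has been called (a small sketch; the printed values depend on the chosen number of inference steps):

```python
# A minimal sketch: discrete schedulers work with integer timesteps.
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(num_inference_steps=50)

print(scheduler.timesteps.dtype)  # torch.int64 for a discrete scheduler
print(scheduler.timesteps[:3])    # e.g. tensor([980, 960, 940])
```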
Designing Re-usable schedulers
The core design principle behind the schedule functions is to be model, system, and framework independent. This allows for rapid experimentation and cleaner abstractions in the code, where the model prediction is separated from the sample update. To this end, the design of schedulers is such that:
- Schedulers can be used interchangeably between diffusion models in inference to find the preferred trade-off between speed and generation quality.
- Schedulers are currently by default in PyTorch, but are designed to be framework independent (partial Jax support currently exists).
- Many diffusion pipelines, such as StableDiffusionPipeline and DiTPipeline, can use any of the KarrasDiffusionSchedulers, as shown below.
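For example, a pipeline's default scheduler can be replaced with another compatible one at inference time. The sketch below assumes the "runwayml/stable-diffusion-v1-5" checkpoint and EulerDiscreteScheduler purely for illustration:

```python
# A minimal sketch: swap schedulers in a pipeline to trade off speed and generation quality.
from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

pipeline = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# Reuse the existing scheduler config to instantiate a different, compatible scheduler.
pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)

image = pipeline("an astronaut riding a horse").images[0]
```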
Schedulers Summary
The following table summarizes all officially supported schedulers and their corresponding papers.
API
The core API for any new scheduler must follow a limited structure.
- Schedulers should provide one or more `def step(...)` functions that should be called to update the generated sample iteratively.
- Schedulers should provide a `set_timesteps(...)` method that configures the parameters of a schedule function for a specific inference task.
- Schedulers should be framework-specific.
The base class SchedulerMixin implements low-level utilities used by multiple schedulers.
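The sketch below is a hypothetical toy scheduler (`MyToyScheduler` is not part of the library) showing how these pieces fit together on top of SchedulerMixin and ConfigMixin; the update rule is deliberately meaningless.

```python
# A hypothetical toy scheduler illustrating the expected API surface; not a real algorithm.
import torch
from diffusers import ConfigMixin, SchedulerMixin
from diffusers.configuration_utils import register_to_config
from diffusers.schedulers.scheduling_utils import SchedulerOutput


class MyToyScheduler(SchedulerMixin, ConfigMixin):
    @register_to_config
    def __init__(self, num_train_timesteps: int = 1000):
        # Default to the full training schedule, from high to low noise.
        self.timesteps = torch.arange(num_train_timesteps - 1, -1, -1)

    def set_timesteps(self, num_inference_steps: int, device=None):
        # Configure the discrete timesteps used for a specific inference task.
        step_ratio = self.config.num_train_timesteps // num_inference_steps
        self.timesteps = (torch.arange(num_inference_steps) * step_ratio).flip(0).to(device)

    def step(self, model_output: torch.Tensor, timestep: int, sample: torch.Tensor) -> SchedulerOutput:
        # Toy update rule: nudge the sample using the model output.
        prev_sample = sample - 0.1 * model_output
        return SchedulerOutput(prev_sample=prev_sample)
```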
SchedulerMixin
Mixin containing common functions for the schedulers.
Class attributes:
- _compatibles (`List[str]`) — A list of classes that are compatible with the parent class, so that `from_config` can be used from a class different from the one used to save the config (should be overridden by the parent class).
from_pretrained
( pretrained_model_name_or_path: typing.Dict[str, typing.Any] = None, subfolder: typing.Optional[str] = None, return_unused_kwargs = False, **kwargs )
Parameters
- pretrained_model_name_or_path (`str` or `os.PathLike`, optional) — Can be either:
  - A string, the model id of a model repo on huggingface.co. Valid model ids should have an organization name, like `google/ddpm-celebahq-256`.
  - A path to a directory containing the scheduler configurations saved using save_pretrained(), e.g., `./my_model_directory/`.
- subfolder (`str`, optional) — In case the relevant files are located inside a subfolder of the model repo (either remote on huggingface.co or downloaded locally), you can specify the folder name here.
- return_unused_kwargs (`bool`, optional, defaults to `False`) — Whether kwargs that are not consumed by the Python class should be returned or not.
- cache_dir (`Union[str, os.PathLike]`, optional) — Path to a directory in which a downloaded pretrained model configuration should be cached if the standard cache should not be used.
- force_download (`bool`, optional, defaults to `False`) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- resume_download (`bool`, optional, defaults to `False`) — Whether or not to delete incompletely received files. Will attempt to resume the download if such a file exists.
- proxies (`Dict[str, str]`, optional) — A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
- output_loading_info (`bool`, optional, defaults to `False`) — Whether or not to also return a dictionary containing missing keys, unexpected keys and error messages.
- local_files_only (`bool`, optional, defaults to `False`) — Whether or not to only look at local files (i.e., do not try to download the model).
- use_auth_token (`str` or `bool`, optional) — The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated when running `transformers-cli login` (stored in `~/.huggingface`).
- revision (`str`, optional, defaults to `"main"`) — The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any identifier allowed by git.
Instantiate a Scheduler class from a pre-defined JSON configuration file inside a directory or Hub repo.
It is required to be logged in (`huggingface-cli login`) when you want to use private or gated models.
Activate the special “offline-mode” to use this method in a firewalled environment.
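A short sketch of typical usage; the Hub repo id and local path below are illustrative examples of the formats described above:

```python
# A minimal sketch: instantiate a scheduler from a Hub repo or a local directory.
from diffusers import DDPMScheduler

# From a pipeline repo where the configuration lives in a "scheduler" subfolder.
scheduler = DDPMScheduler.from_pretrained("google/ddpm-celebahq-256", subfolder="scheduler")

# From a local directory previously created with save_pretrained().
scheduler = DDPMScheduler.from_pretrained("./my_model_directory/")
```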
save_pretrained
( save_directory: typing.Union[str, os.PathLike], push_to_hub: bool = False, **kwargs )
Save a scheduler configuration object to the directory `save_directory`, so that it can be re-loaded using the from_pretrained() class method.
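A minimal sketch of a save/reload round trip (the local path is illustrative):

```python
# A minimal sketch: save a scheduler configuration and reload it later.
from diffusers import DDPMScheduler

scheduler = DDPMScheduler(num_train_timesteps=1000)
scheduler.save_pretrained("./my_scheduler")               # writes scheduler_config.json
reloaded = DDPMScheduler.from_pretrained("./my_scheduler")
```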
SchedulerOutput
The class `SchedulerOutput` contains the outputs from any scheduler's `step(...)` call.
class diffusers.schedulers.scheduling_utils.SchedulerOutput
( prev_sample: FloatTensor )
Base class for the scheduler’s step function output.
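For example, the object returned by a scheduler's `step(...)` call exposes the updated sample as `prev_sample` (concrete schedulers may return a subclass with extra fields), and passing `return_dict=False` yields a plain tuple instead. A small sketch assuming DDPMScheduler:

```python
# A minimal sketch: inspect the output of a single scheduler step.
import torch
from diffusers import DDPMScheduler

scheduler = DDPMScheduler()
scheduler.set_timesteps(num_inference_steps=50)

sample = torch.randn(1, 3, 32, 32)        # current noisy sample
model_output = torch.randn(1, 3, 32, 32)  # stand-in for the model's noise prediction

output = scheduler.step(model_output, scheduler.timesteps[0], sample)
print(output.prev_sample.shape)           # torch.Size([1, 3, 32, 32])

# With return_dict=False, the same call returns a plain tuple.
(prev_sample,) = scheduler.step(model_output, scheduler.timesteps[0], sample, return_dict=False)
```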
KarrasDiffusionSchedulers
class diffusers.schedulers.KarrasDiffusionSchedulers
( value, names = None, module = None, qualname = None, type = None, start = 1 )
An enumeration.
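For reference, the members of the enumeration can be listed directly (a small sketch):

```python
# A minimal sketch: list the schedulers grouped under KarrasDiffusionSchedulers.
from diffusers.schedulers import KarrasDiffusionSchedulers

for member in KarrasDiffusionSchedulers:
    print(member.name)  # e.g. DDIMScheduler, DDPMScheduler, PNDMScheduler, ...
```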