The GaudiStableDiffusionPipeline
class enables text-to-image generation on HPUs.
It inherits from the GaudiDiffusionPipeline
class, which is the parent of any kind of diffusion pipeline.
To get the most out of it, it should be associated with a scheduler optimized for HPUs, such as GaudiDDIMScheduler.
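As a sketch of how these pieces fit together (assuming optimum-habana is installed and a Gaudi device is available; the model name and Gaudi configuration used here are illustrative):

```python
from optimum.habana.diffusers import GaudiDDIMScheduler, GaudiStableDiffusionPipeline

model_name = "runwayml/stable-diffusion-v1-5"  # illustrative model choice

# Use the HPU-optimized DDIM scheduler rather than the stock one.
scheduler = GaudiDDIMScheduler.from_pretrained(model_name, subfolder="scheduler")

pipeline = GaudiStableDiffusionPipeline.from_pretrained(
    model_name,
    scheduler=scheduler,
    use_habana=True,        # run on Gaudi instead of CPU
    use_hpu_graphs=True,    # capture HPU graphs to reduce host overhead
    gaudi_config="Habana/stable-diffusion",  # Gaudi configuration from the Hub
)

outputs = pipeline(
    prompt="An image of a squirrel in Picasso style",
    num_images_per_prompt=4,
    batch_size=2,
)
```

This configuration fragment requires a Gaudi device to actually run; on other hardware the `use_habana=True` initialization will fail.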
( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, diffusers.schedulers.scheduling_euler_ancestral_discrete.EulerAncestralDiscreteScheduler, diffusers.schedulers.scheduling_dpmsolver_multistep.DPMSolverMultistepScheduler] safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor requires_safety_checker: bool = True use_habana: bool = False use_hpu_graphs: bool = False gaudi_config: typing.Union[str, optimum.habana.transformers.gaudi_configuration.GaudiConfig] = None )
Parameters
vae (AutoencoderKL) —
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (CLIPTextModel) —
Frozen text-encoder. Stable Diffusion uses the text portion of CLIP, specifically the clip-vit-large-patch14 variant.
tokenizer (CLIPTokenizer) —
Tokenizer of class CLIPTokenizer.
unet (UNet2DConditionModel) —
Conditional U-Net architecture to denoise the encoded image latents.
scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
safety_checker (StableDiffusionSafetyChecker) —
Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for details.
feature_extractor (CLIPFeatureExtractor) —
Model that extracts features from generated images to be used as inputs for the safety_checker.
use_habana (bool, defaults to False) —
Whether to use Gaudi (True) or CPU (False).
use_hpu_graphs (bool, defaults to False) —
Whether to use HPU graphs or not.
gaudi_config (Union[str, GaudiConfig], defaults to None) —
Gaudi configuration to use. Can be a string to download it from the Hub, or a previously initialized config can be passed.
Extends the StableDiffusionPipeline
class:
mark_step() calls were added to support lazy mode.
(
prompt: typing.Union[str, typing.List[str]]
height: typing.Optional[int] = None
width: typing.Optional[int] = None
num_inference_steps: int = 50
guidance_scale: float = 7.5
negative_prompt: typing.Union[typing.List[str], str, NoneType] = None
num_images_per_prompt: typing.Optional[int] = 1
batch_size: int = 1
eta: float = 0.0
generator: typing.Optional[torch._C.Generator] = None
latents: typing.Optional[torch.FloatTensor] = None
output_type: typing.Optional[str] = 'pil'
return_dict: bool = True
callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None
callback_steps: typing.Optional[int] = 1
)
→
GaudiStableDiffusionPipelineOutput
or tuple
Parameters
prompt (str or List[str]) —
The prompt or prompts to guide the image generation.
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The height in pixels of the generated images.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The width in pixels of the generated images.
num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
guidance_scale (float, optional, defaults to 7.5) —
Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w of equation 2. of the Imagen Paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
num_images_per_prompt (int, optional, defaults to 1) —
The number of images to generate per prompt.
batch_size (int, optional, defaults to 1) —
The number of images in a batch.
eta (float, optional, defaults to 0.0) —
Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to schedulers.DDIMScheduler, and is ignored for others.
generator (torch.Generator, optional) —
A torch generator to make generation deterministic.
latents (torch.FloatTensor, optional) —
Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated randomly.
output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL (PIL.Image.Image) or np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return a GaudiStableDiffusionPipelineOutput instead of a plain tuple.
callback (Callable, optional) —
A function that will be called every callback_steps steps during inference. The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1) —
The frequency at which the callback function will be called. If not specified, the callback will be called at every step.
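To make the role of guidance_scale concrete, here is a minimal NumPy sketch of the classifier-free guidance combination (the function name is ours; the pipeline performs the equivalent step internally on the U-Net's noise predictions):

```python
import numpy as np

def apply_guidance(noise_uncond, noise_text, guidance_scale):
    """Classifier-free guidance: eps = eps_uncond + w * (eps_text - eps_uncond)."""
    return noise_uncond + guidance_scale * (noise_text - noise_uncond)

# With guidance_scale = 1 the result is exactly the text-conditioned prediction;
# larger values push the prediction further in the text-conditioned direction.
noise_uncond = np.array([0.0, 0.0])
noise_text = np.array([1.0, 1.0])
print(apply_guidance(noise_uncond, noise_text, 7.5))  # [7.5 7.5]
```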
Returns
GaudiStableDiffusionPipelineOutput
or tuple
GaudiStableDiffusionPipelineOutput if return_dict is True, otherwise a tuple.
When returning a tuple, the first element is a list with the generated images, and the second element is a
list of bools denoting whether the corresponding generated image likely represents “not-safe-for-work”
(nsfw) content, according to the safety_checker
.
Function invoked when calling the pipeline for generation.
( use_habana: bool = False use_hpu_graphs: bool = False gaudi_config: typing.Union[str, optimum.habana.transformers.gaudi_configuration.GaudiConfig] = None )
Parameters
use_habana (bool, defaults to False) —
Whether to use Gaudi (True) or CPU (False).
use_hpu_graphs (bool, defaults to False) —
Whether to use HPU graphs or not.
gaudi_config (Union[str, GaudiConfig], defaults to None) —
Gaudi configuration to use. Can be a string to download it from the Hub, or a previously initialized config can be passed.
Extends the DiffusionPipeline
class:
The pipeline is initialized on Gaudi if use_habana=True.
( pretrained_model_name_or_path: typing.Union[str, os.PathLike, NoneType] **kwargs )
More information here.
( save_directory: typing.Union[str, os.PathLike] safe_serialization: bool = False )
Save the pipeline and Gaudi configurations. More information here.
( num_train_timesteps: int = 1000 beta_start: float = 0.0001 beta_end: float = 0.02 beta_schedule: str = 'linear' trained_betas: typing.Union[numpy.ndarray, typing.List[float], NoneType] = None clip_sample: bool = True set_alpha_to_one: bool = True steps_offset: int = 0 prediction_type: str = 'epsilon' **kwargs )
Parameters
num_train_timesteps (int) — number of diffusion steps used to train the model.
beta_start (float) — the starting beta value of inference.
beta_end (float) — the final beta value.
beta_schedule (str) —
the beta schedule, a mapping from a beta range to a sequence of betas for stepping the model. Choose from linear, scaled_linear, or squaredcos_cap_v2.
trained_betas (np.ndarray, optional) —
option to pass an array of betas directly to the constructor to bypass beta_start, beta_end, etc.
clip_sample (bool, default True) —
option to clip the predicted sample between -1 and 1 for numerical stability.
set_alpha_to_one (bool, default True) —
each diffusion step uses the value of the alphas product at that step and at the previous one. For the final step there is no previous alpha. When this option is True, the previous alpha product is fixed to 1; otherwise it uses the value of alpha at step 0.
steps_offset (int, default 0) —
an offset added to the inference steps. You can use a combination of offset=1 and set_alpha_to_one=False to make the last step use step 0 for the previous alpha product, as done in Stable Diffusion.
prediction_type (str, default epsilon, optional) —
prediction type of the scheduler function: one of epsilon (predicting the noise of the diffusion process), sample (directly predicting the noisy sample), or v_prediction (see section 2.4 of https://imagen.research.google/video/paper.pdf).
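The relationship between beta_start, beta_end, and beta_schedule can be sketched as follows (a simplified reimplementation for illustration only; the scheduler computes these internally):

```python
import numpy as np

def make_betas(num_train_timesteps=1000, beta_start=0.0001, beta_end=0.02,
               beta_schedule="linear"):
    if beta_schedule == "linear":
        # Evenly spaced betas between beta_start and beta_end.
        return np.linspace(beta_start, beta_end, num_train_timesteps)
    if beta_schedule == "scaled_linear":
        # Linear in sqrt(beta) space, the schedule used by Stable Diffusion.
        return np.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps) ** 2
    raise ValueError(f"unsupported schedule: {beta_schedule}")

betas = make_betas(beta_schedule="scaled_linear")
# The cumulative product of alphas = 1 - beta sets the noise level per timestep.
alphas_cumprod = np.cumprod(1.0 - betas)
print(betas[0], betas[-1])  # approximately 0.0001 and 0.02
```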
Extends Diffusers’ DDIMScheduler to run optimally on Gaudi.
(
model_output: FloatTensor
sample: FloatTensor
eta: float = 0.0
use_clipped_model_output: bool = False
generator = None
variance_noise: typing.Optional[torch.FloatTensor] = None
return_dict: bool = True
)
→
diffusers.schedulers.scheduling_utils.DDIMSchedulerOutput
or tuple
Parameters
model_output (torch.FloatTensor) — direct output from learned diffusion model.
sample (torch.FloatTensor) —
current instance of sample being created by the diffusion process.
eta (float) — weight of noise for added noise in diffusion step.
use_clipped_model_output (bool) — if True, compute “corrected” model_output from the clipped predicted original sample. Necessary because the predicted original sample is clipped to [-1, 1] when self.config.clip_sample is True. If no clipping has happened, the “corrected” model_output would coincide with the one provided as input and use_clipped_model_output will have no effect.
generator — random number generator.
variance_noise (torch.FloatTensor) — instead of generating noise for the variance using generator, we can directly provide the noise for the variance itself. This is useful for methods such as CycleDiffusion (https://arxiv.org/abs/2210.05559).
return_dict (bool) — option for returning a tuple rather than a DDIMSchedulerOutput class.
Returns
diffusers.schedulers.scheduling_utils.DDIMSchedulerOutput
or tuple
diffusers.schedulers.scheduling_utils.DDIMSchedulerOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is the sample tensor.
Predict the sample at the previous timestep by reversing the SDE. Core function to propagate the diffusion process from the learned model outputs (most often the predicted noise).
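The update described above can be sketched for the epsilon prediction type as follows (a simplified, scalar version of equation (12) in the DDIM paper; the function and variable names are ours, not the scheduler's API):

```python
def ddim_step(model_output, sample, alpha_prod_t, alpha_prod_t_prev,
              eta=0.0, noise=0.0):
    # 1. Recover the predicted original sample x_0 from the noise prediction.
    pred_x0 = (sample - (1 - alpha_prod_t) ** 0.5 * model_output) / alpha_prod_t ** 0.5
    # 2. Variance weight sigma; eta = 0.0 gives the deterministic DDIM update.
    sigma = eta * ((1 - alpha_prod_t_prev) / (1 - alpha_prod_t)) ** 0.5 \
                * (1 - alpha_prod_t / alpha_prod_t_prev) ** 0.5
    # 3. "Direction pointing to x_t" term plus optional fresh noise.
    direction = (1 - alpha_prod_t_prev - sigma**2) ** 0.5 * model_output
    return alpha_prod_t_prev ** 0.5 * pred_x0 + direction + sigma * noise

# With a zero noise prediction, the step simply rescales the sample
# towards the fully denoised regime (alpha_prod -> 1).
x_prev = ddim_step(model_output=0.0, sample=1.0,
                   alpha_prod_t=0.25, alpha_prod_t_prev=1.0)
print(x_prev)  # 2.0
```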