Image-to-image
The Stable Diffusion model can also be applied to image-to-image generation by passing a text prompt and an initial image to condition the generation of new images.
The StableDiffusionImg2ImgPipeline uses the diffusion-denoising mechanism proposed in SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations by Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, Stefano Ermon.
The abstract from the paper is:
Guided image synthesis enables everyday users to create and edit photo-realistic images with minimum effort. The key challenge is balancing faithfulness to the user input (e.g., hand-drawn colored strokes) and realism of the synthesized image. Existing GAN-based methods attempt to achieve such balance using either conditional GANs or GAN inversions, which are challenging and often require additional training data or loss functions for individual applications. To address these issues, we introduce a new image synthesis and editing method, Stochastic Differential Editing (SDEdit), based on a diffusion model generative prior, which synthesizes realistic images by iteratively denoising through a stochastic differential equation (SDE). Given an input image with user guide of any type, SDEdit first adds noise to the input, then subsequently denoises the resulting image through the SDE prior to increase its realism. SDEdit does not require task-specific training or inversions and can naturally achieve the balance between realism and faithfulness. SDEdit significantly outperforms state-of-the-art GAN-based methods by up to 98.09% on realism and 91.72% on overall satisfaction scores, according to a human perception study, on multiple tasks, including stroke-based image synthesis and editing as well as image compositing.
Make sure to check out the Stable Diffusion Tips section to learn how to explore the tradeoff between scheduler speed and quality, and how to reuse pipeline components efficiently!
StableDiffusionImg2ImgPipeline
class diffusers.StableDiffusionImg2ImgPipeline
< source >( vae: AutoencoderKL text_encoder: CLIPTextModel tokenizer: CLIPTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor image_encoder: CLIPVisionModelWithProjection = None requires_safety_checker: bool = True )
Parameters
- vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- text_encoder (CLIPTextModel) — Frozen text-encoder (clip-vit-large-patch14).
- tokenizer (CLIPTokenizer) — A CLIPTokenizer to tokenize text.
- unet (UNet2DConditionModel) — A UNet2DConditionModel to denoise the encoded image latents.
- scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
- safety_checker (StableDiffusionSafetyChecker) — Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for more details about a model’s potential harms.
- feature_extractor (CLIPImageProcessor) — A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker.
Pipeline for text-guided image-to-image generation using Stable Diffusion.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
The pipeline also inherits the following loading methods:
- load_textual_inversion() for loading textual inversion embeddings
- load_lora_weights() for loading LoRA weights
- save_lora_weights() for saving LoRA weights
- from_single_file() for loading .ckpt files
- load_ip_adapter() for loading IP Adapters
__call__
< source >( prompt: Union = None image: Union = None strength: float = 0.8 num_inference_steps: Optional = 50 timesteps: List = None sigmas: List = None guidance_scale: Optional = 7.5 negative_prompt: Union = None num_images_per_prompt: Optional = 1 eta: Optional = 0.0 generator: Union = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None ip_adapter_image: Union = None ip_adapter_image_embeds: Optional = None output_type: Optional = 'pil' return_dict: bool = True cross_attention_kwargs: Optional = None clip_skip: int = None callback_on_step_end: Union = None callback_on_step_end_tensor_inputs: List = ['latents'] **kwargs ) → StableDiffusionPipelineOutput or tuple
Parameters
- prompt (str or List[str], optional) — The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.
- image (torch.Tensor, PIL.Image.Image, np.ndarray, List[torch.Tensor], List[PIL.Image.Image], or List[np.ndarray]) — Image, numpy array or tensor representing an image batch to be used as the starting point. For both numpy array and pytorch tensor, the expected value range is between [0, 1]. If it’s a tensor or a list of tensors, the expected shape should be (B, C, H, W) or (C, H, W). If it is a numpy array or a list of arrays, the expected shape should be (B, H, W, C) or (H, W, C). It can also accept image latents as image, but latents passed directly are not encoded again.
- strength (float, optional, defaults to 0.8) — Indicates the extent to transform the reference image. Must be between 0 and 1. image is used as a starting point, and more noise is added the higher the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising process runs for the full number of iterations specified in num_inference_steps. A value of 1 essentially ignores image.
- num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. This parameter is modulated by strength.
- timesteps (List[int], optional) — Custom timesteps to use for the denoising process with schedulers which support a timesteps argument in their set_timesteps method. If not defined, the default behavior when num_inference_steps is passed will be used. Must be in descending order.
- sigmas (List[float], optional) — Custom sigmas to use for the denoising process with schedulers which support a sigmas argument in their set_timesteps method. If not defined, the default behavior when num_inference_steps is passed will be used.
- guidance_scale (float, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
- negative_prompt (str or List[str], optional) — The prompt or prompts to guide what not to include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
- num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
- eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers.
- generator (torch.Generator or List[torch.Generator], optional) — A torch.Generator to make generation deterministic.
- prompt_embeds (torch.Tensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
- negative_prompt_embeds (torch.Tensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
- ip_adapter_image (PipelineImageInput, optional) — Optional image input to work with IP Adapters.
- ip_adapter_image_embeds (List[torch.Tensor], optional) — Pre-generated image embeddings for IP-Adapter. It should be a list with the same length as the number of IP-Adapters. Each element should be a tensor of shape (batch_size, num_images, emb_dim). It should contain the negative image embedding if do_classifier_free_guidance is set to True. If not provided, embeddings are computed from the ip_adapter_image input argument.
- output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL.Image or np.array.
- return_dict (bool, optional, defaults to True) — Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
- cross_attention_kwargs (dict, optional) — A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.
- clip_skip (int, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.
- callback_on_step_end (Callable, PipelineCallback, MultiPipelineCallbacks, optional) — A function or a subclass of PipelineCallback or MultiPipelineCallbacks that is called at the end of each denoising step during inference with the following arguments: callback_on_step_end(self: DiffusionPipeline, step: int, timestep: int, callback_kwargs: Dict). callback_kwargs will include a list of all tensors as specified by callback_on_step_end_tensor_inputs.
- callback_on_step_end_tensor_inputs (List, optional) — The list of tensor inputs for the callback_on_step_end function. The tensors specified in the list will be passed as the callback_kwargs argument. You will only be able to include variables listed in the ._callback_tensor_inputs attribute of your pipeline class.
Returns
StableDiffusionPipelineOutput or tuple
If return_dict is True, StableDiffusionPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated images and the second element is a list of bools indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content.
The call function to the pipeline for generation.
Examples:
>>> import requests
>>> import torch
>>> from PIL import Image
>>> from io import BytesIO
>>> from diffusers import StableDiffusionImg2ImgPipeline
>>> device = "cuda"
>>> model_id_or_path = "runwayml/stable-diffusion-v1-5"
>>> pipe = StableDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
>>> pipe = pipe.to(device)
>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
>>> response = requests.get(url)
>>> init_image = Image.open(BytesIO(response.content)).convert("RGB")
>>> init_image = init_image.resize((768, 512))
>>> prompt = "A fantasy landscape, trending on artstation"
>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
>>> images[0].save("fantasy_landscape.png")
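Because strength controls how much noise is added to the initial image, it also controls how many denoising steps actually run: roughly the last int(num_inference_steps * strength) steps are executed, so strength=0.75 with 50 steps runs about 37 steps. A short sketch of the trade-off, reusing pipe, prompt, and init_image from the example above:
>>> # Low strength keeps the sketch mostly intact (~int(50 * 0.3) == 15 steps actually run)
>>> subtle = pipe(prompt=prompt, image=init_image, strength=0.3, num_inference_steps=50).images[0]
>>> # strength=1.0 adds maximum noise, runs all 50 steps, and largely ignores init_image
>>> free = pipe(prompt=prompt, image=init_image, strength=1.0, num_inference_steps=50).images[0]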
enable_attention_slicing
< source >( slice_size: Union = 'auto' )
Parameters
- slice_size (str or int, optional, defaults to "auto") — When "auto", halves the input to the attention heads, so attention will be computed in two steps. If "max", maximum amount of memory will be saved by running only one slice at a time. If a number is provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim must be a multiple of slice_size.
Enable sliced attention computation. When this option is enabled, the attention module splits the input tensor in slices to compute attention in several steps. For more than one attention head, the computation is performed sequentially over each head. This is useful to save some memory in exchange for a small speed decrease.
⚠️ Don’t enable attention slicing if you’re already using scaled_dot_product_attention (SDPA) from PyTorch 2.0 or xFormers. These attention computations are already very memory efficient, so you won’t need to enable this function. If you enable attention slicing with SDPA or xFormers, it can lead to serious slowdowns!
Examples:
>>> import torch
>>> from diffusers import StableDiffusionPipeline
>>> pipe = StableDiffusionPipeline.from_pretrained(
... "runwayml/stable-diffusion-v1-5",
... torch_dtype=torch.float16,
... use_safetensors=True,
... )
>>> pipe = pipe.to("cuda")
>>> prompt = "a photo of an astronaut riding a horse on mars"
>>> pipe.enable_attention_slicing()
>>> image = pipe(prompt).images[0]
disable_attention_slicing
< source >( )
Disable sliced attention computation. If enable_attention_slicing was previously called, attention is computed in one step.
enable_xformers_memory_efficient_attention
< source >( attention_op: Optional = None )
Parameters
- attention_op (Callable, optional) — Override the default None operator for use as the op argument to the memory_efficient_attention() function of xFormers.
Enable memory efficient attention from xFormers. When this option is enabled, you should observe lower GPU memory usage and a potential speed up during inference. Speed up during training is not guaranteed.
⚠️ When memory efficient attention and sliced attention are both enabled, memory efficient attention takes precedent.
Examples:
>>> import torch
>>> from diffusers import DiffusionPipeline
>>> from xformers.ops import MemoryEfficientAttentionFlashAttentionOp
>>> pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")
>>> pipe.enable_xformers_memory_efficient_attention(attention_op=MemoryEfficientAttentionFlashAttentionOp)
>>> # Workaround for not accepting attention shape using VAE for Flash Attention
>>> pipe.vae.enable_xformers_memory_efficient_attention(attention_op=None)
disable_xformers_memory_efficient_attention
< source >( )
Disable memory efficient attention from xFormers.
load_textual_inversion
< source >( pretrained_model_name_or_path: Union token: Union = None tokenizer: Optional = None text_encoder: Optional = None **kwargs )
Parameters
- pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]) — Can be either one of the following or a list of them:
  - A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a pretrained model hosted on the Hub.
  - A path to a directory (for example ./my_text_inversion_directory/) containing the textual inversion weights.
  - A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights.
  - A torch state dict.
- token (str or List[str], optional) — Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a list, then token must also be a list of equal length.
- text_encoder (CLIPTextModel, optional) — Frozen text-encoder (clip-vit-large-patch14). If not specified, the function will use self.text_encoder.
- tokenizer (CLIPTokenizer, optional) — A CLIPTokenizer to tokenize text. If not specified, the function will use self.tokenizer.
- weight_name (str, optional) — Name of a custom weight file. This should be used when:
  - The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight name such as text_inv.bin.
  - The saved textual inversion file is in the Automatic1111 format.
- cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won’t be downloaded from the Hub.
- token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
- subfolder (str, optional, defaults to "") — The subfolder location of a model file within a larger model repository on the Hub or locally.
- mirror (str, optional) — Mirror source to resolve accessibility issues if you’re downloading a model in China. We do not guarantee the timeliness or safety of the source, and you should refer to the mirror site for more information.
Load Textual Inversion embeddings into the text encoder of StableDiffusionPipeline (both 🤗 Diffusers and Automatic1111 formats are supported).
Example:
To load a Textual Inversion embedding vector in 🤗 Diffusers format:
from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/cat-toy")
prompt = "A <cat-toy> backpack"
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("cat-backpack.png")
To load a Textual Inversion embedding vector in Automatic1111 format, make sure to download the vector first (for example from civitAI) and then load the vector
locally:
from diffusers import StableDiffusionPipeline
import torch
model_id = "runwayml/stable-diffusion-v1-5"
pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
pipe.load_textual_inversion("./charturnerv2.pt", token="charturnerv2")
prompt = "charturnerv2, multiple views of the same character in the same outfit, a character turnaround of a woman wearing a black jacket and red shirt, best quality, intricate details."
image = pipe(prompt, num_inference_steps=50).images[0]
image.save("character.png")
from_single_file
< source >( pretrained_model_link_or_path **kwargs )
Parameters
- pretrained_model_link_or_path (str or os.PathLike, optional) — Can be either:
  - A link to the .ckpt file (for example "https://huggingface.co/<repo_id>/blob/main/<path_to_file>.ckpt") on the Hub.
  - A path to a file containing all pipeline weights.
- torch_dtype (str or torch.dtype, optional) — Override the default torch.dtype and load the model with another dtype.
- force_download (bool, optional, defaults to False) — Whether or not to force the (re-)download of the model weights and configuration files, overriding the cached versions if they exist.
- cache_dir (Union[str, os.PathLike], optional) — Path to a directory where a downloaded pretrained model configuration is cached if the standard cache is not used.
- proxies (Dict[str, str], optional) — A dictionary of proxy servers to use by protocol or endpoint, for example, {'http': 'foo.bar:3128', 'http://hostname': 'foo.bar:4012'}. The proxies are used on each request.
- local_files_only (bool, optional, defaults to False) — Whether to only load local model weights and configuration files or not. If set to True, the model won’t be downloaded from the Hub.
- token (str or bool, optional) — The token to use as HTTP bearer authorization for remote files. If True, the token generated from diffusers-cli login (stored in ~/.huggingface) is used.
- revision (str, optional, defaults to "main") — The specific model version to use. It can be a branch name, a tag name, a commit id, or any identifier allowed by Git.
- original_config_file (str, optional) — The path to the original config file that was used to train the model. If not provided, the config file will be inferred from the checkpoint file.
- config (str, optional) — Can be either:
  - A string, the repo id (for example CompVis/ldm-text2im-large-256) of a pretrained pipeline hosted on the Hub.
  - A path to a directory (for example ./my_pipeline_directory/) containing the pipeline component configs in Diffusers format.
- kwargs (remaining dictionary of keyword arguments, optional) — Can be used to overwrite load and saveable variables (the pipeline components of the specific pipeline class). The overwritten components are passed directly to the pipeline’s __init__ method. See example below for more information.
Instantiate a DiffusionPipeline from pretrained pipeline weights saved in the .ckpt or .safetensors format. The pipeline is set in evaluation mode (model.eval()) by default.
Examples:
>>> import torch
>>> from diffusers import StableDiffusionPipeline
>>> # Download pipeline from huggingface.co and cache.
>>> pipeline = StableDiffusionPipeline.from_single_file(
... "https://huggingface.co/WarriorMama777/OrangeMixs/blob/main/Models/AbyssOrangeMix/AbyssOrangeMix.safetensors"
... )
>>> # Download pipeline from local file
>>> # file is downloaded under ./v1-5-pruned-emaonly.ckpt
>>> pipeline = StableDiffusionPipeline.from_single_file("./v1-5-pruned-emaonly.ckpt")
>>> # Enable float16 and move to GPU
>>> pipeline = StableDiffusionPipeline.from_single_file(
... "https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt",
... torch_dtype=torch.float16,
... )
>>> pipeline.to("cuda")
load_lora_weights
< source >( pretrained_model_name_or_path_or_dict: Union adapter_name = None **kwargs )
Parameters
- pretrained_model_name_or_path_or_dict (str or os.PathLike or dict) — See lora_state_dict().
- kwargs (dict, optional) — See lora_state_dict().
- adapter_name (str, optional) — Adapter name to be used for referencing the loaded adapter model. If not specified, it will use default_{i} where i is the total number of adapters being loaded.
Load LoRA weights specified in pretrained_model_name_or_path_or_dict into self.unet and self.text_encoder.
All kwargs are forwarded to self.lora_state_dict. See lora_state_dict() for more details on how the state dict is loaded.
See load_lora_into_unet() for more details on how the state dict is loaded into self.unet.
See load_lora_into_text_encoder() for more details on how the state dict is loaded into self.text_encoder.
save_lora_weights
< source >( save_directory: Union unet_lora_layers: Dict = None text_encoder_lora_layers: Dict = None is_main_process: bool = True weight_name: str = None save_function: Callable = None safe_serialization: bool = True )
Parameters
- save_directory (str or os.PathLike) — Directory to save LoRA parameters to. Will be created if it doesn’t exist.
- unet_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the unet.
- text_encoder_lora_layers (Dict[str, torch.nn.Module] or Dict[str, torch.Tensor]) — State dict of the LoRA layers corresponding to the text_encoder. Must explicitly pass the text encoder LoRA state dict because it comes from 🤗 Transformers.
- is_main_process (bool, optional, defaults to True) — Whether the process calling this is the main process or not. Useful during distributed training when you need to call this function on all processes. In this case, set is_main_process=True only on the main process to avoid race conditions.
- save_function (Callable) — The function to use to save the state dictionary. Useful during distributed training when you need to replace torch.save with another method. Can be configured with the environment variable DIFFUSERS_SAVE_MODE.
- safe_serialization (bool, optional, defaults to True) — Whether to save the model using safetensors or the traditional PyTorch way with pickle.
Save the LoRA parameters corresponding to the UNet and text encoder.
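A minimal saving sketch; unet_lora_layers and text_encoder_lora_layers are assumed to be LoRA state dicts produced by a fine-tuning run (for example, the Diffusers LoRA training scripts), not values defined here:
>>> from diffusers import StableDiffusionImg2ImgPipeline
>>> # `unet_lora_layers` and `text_encoder_lora_layers` come from your training loop
>>> StableDiffusionImg2ImgPipeline.save_lora_weights(
...     save_directory="./my-lora",
...     unet_lora_layers=unet_lora_layers,
...     text_encoder_lora_layers=text_encoder_lora_layers,
...     safe_serialization=True,
... )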
encode_prompt
< source >( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_skip: Optional = None )
Parameters
- prompt (str or List[str], optional) — Prompt to be encoded.
- device (torch.device) — torch device.
- num_images_per_prompt (int) — Number of images that should be generated per prompt.
- do_classifier_free_guidance (bool) — Whether to use classifier-free guidance or not.
- negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
- prompt_embeds (torch.Tensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
- negative_prompt_embeds (torch.Tensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
- lora_scale (float, optional) — A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- clip_skip (int, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.
Encodes the prompt into text encoder hidden states.
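A minimal sketch of pre-computing embeddings and feeding them back to __call__; it assumes pipe and init_image from the earlier example, and that your Diffusers version returns a (prompt_embeds, negative_prompt_embeds) tuple from encode_prompt (true in recent releases):
>>> prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
...     prompt="A fantasy landscape, trending on artstation",
...     device=pipe.device,
...     num_images_per_prompt=1,
...     do_classifier_free_guidance=True,
...     negative_prompt="blurry, low quality",
... )
>>> # pass the pre-computed embeddings instead of raw text
>>> image = pipe(
...     prompt_embeds=prompt_embeds,
...     negative_prompt_embeds=negative_prompt_embeds,
...     image=init_image,
... ).images[0]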
get_guidance_scale_embedding
< source >( w: Tensor embedding_dim: int = 512 dtype: dtype = torch.float32 ) → torch.Tensor
Parameters
- w (torch.Tensor) — Generate embedding vectors with a specified guidance scale to subsequently enrich the timestep embeddings.
- embedding_dim (int, optional, defaults to 512) — Dimension of the embeddings to generate.
- dtype (torch.dtype, optional, defaults to torch.float32) — Data type of the generated embeddings.
Returns
torch.Tensor
Embedding vectors with shape (len(w), embedding_dim).
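This helper is mainly used internally when the UNet is conditioned on the guidance scale (that is, when unet.config.time_cond_proj_dim is set, as in guidance-distilled models). A minimal sketch, assuming pipe is an instantiated pipeline:
>>> import torch
>>> # one embedding row per guidance scale value in `w`
>>> emb = pipe.get_guidance_scale_embedding(torch.tensor([7.5]), embedding_dim=256)
>>> emb.shape
torch.Size([1, 256])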
StableDiffusionPipelineOutput
class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput
< source >( images: Union nsfw_content_detected: Optional )
Parameters
- images (List[PIL.Image.Image] or np.ndarray) — List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels).
- nsfw_content_detected (List[bool]) — List indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content, or None if safety checking could not be performed.
Output class for Stable Diffusion pipelines.
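A short sketch of consuming this output object, reusing pipe, prompt, and init_image from the __call__ example above:
>>> result = pipe(prompt=prompt, image=init_image)
>>> result.images[0].save("fantasy_landscape.png")
>>> result.nsfw_content_detected  # e.g. [False]; None if safety checking could not be performed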
FlaxStableDiffusionImg2ImgPipeline
class diffusers.FlaxStableDiffusionImg2ImgPipeline
< source >( vae: FlaxAutoencoderKL text_encoder: FlaxCLIPTextModel tokenizer: CLIPTokenizer unet: FlaxUNet2DConditionModel scheduler: Union safety_checker: FlaxStableDiffusionSafetyChecker feature_extractor: CLIPImageProcessor dtype: dtype = <class 'jax.numpy.float32'> )
Parameters
- vae (FlaxAutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- text_encoder (FlaxCLIPTextModel) — Frozen text-encoder (clip-vit-large-patch14).
- tokenizer (CLIPTokenizer) — A CLIPTokenizer to tokenize text.
- unet (FlaxUNet2DConditionModel) — A FlaxUNet2DConditionModel to denoise the encoded image latents.
- scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of FlaxDDIMScheduler, FlaxLMSDiscreteScheduler, FlaxPNDMScheduler, or FlaxDPMSolverMultistepScheduler.
- safety_checker (FlaxStableDiffusionSafetyChecker) — Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for more details about a model’s potential harms.
- feature_extractor (CLIPImageProcessor) — A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker.
Flax-based pipeline for text-guided image-to-image generation using Stable Diffusion.
This model inherits from FlaxDiffusionPipeline. Check the superclass documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular device, etc.).
__call__
< source >( prompt_ids: Array image: Array params: Union prng_seed: Array strength: float = 0.8 num_inference_steps: int = 50 height: Optional = None width: Optional = None guidance_scale: Union = 7.5 noise: Array = None neg_prompt_ids: Array = None return_dict: bool = True jit: bool = False ) → FlaxStableDiffusionPipelineOutput or tuple
Parameters
- prompt_ids (jnp.ndarray) — The prompt or prompts to guide image generation.
- image (jnp.ndarray) — Array representing an image batch to be used as the starting point.
- params (Dict or FrozenDict) — Dictionary containing the model parameters/weights.
- prng_seed (jax.Array) — Array containing the random number generator key.
- strength (float, optional, defaults to 0.8) — Indicates the extent to transform the reference image. Must be between 0 and 1. image is used as a starting point, and more noise is added the higher the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise is maximum and the denoising process runs for the full number of iterations specified in num_inference_steps. A value of 1 essentially ignores image.
- num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. This parameter is modulated by strength.
- height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image.
- width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image.
- guidance_scale (float, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
- noise (jnp.ndarray, optional) — Pre-generated noisy latents sampled from a Gaussian distribution to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. The array is generated by sampling using the supplied random generator.
- return_dict (bool, optional, defaults to True) — Whether or not to return a FlaxStableDiffusionPipelineOutput instead of a plain tuple.
- jit (bool, defaults to False) — Whether to run pmap versions of the generation and safety scoring functions. This argument exists because __call__ is not yet end-to-end pmap-able. It will be removed in a future release.
Returns
FlaxStableDiffusionPipelineOutput or tuple
If return_dict is True, FlaxStableDiffusionPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated images and the second element is a list of bools indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content.
The call function to the pipeline for generation.
Examples:
>>> import jax
>>> import numpy as np
>>> import jax.numpy as jnp
>>> from flax.jax_utils import replicate
>>> from flax.training.common_utils import shard
>>> import requests
>>> from io import BytesIO
>>> from PIL import Image
>>> from diffusers import FlaxStableDiffusionImg2ImgPipeline
>>> def create_key(seed=0):
... return jax.random.PRNGKey(seed)
>>> rng = create_key(0)
>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
>>> response = requests.get(url)
>>> init_img = Image.open(BytesIO(response.content)).convert("RGB")
>>> init_img = init_img.resize((768, 512))
>>> prompts = "A fantasy landscape, trending on artstation"
>>> pipeline, params = FlaxStableDiffusionImg2ImgPipeline.from_pretrained(
... "CompVis/stable-diffusion-v1-4",
... revision="flax",
... dtype=jnp.bfloat16,
... )
>>> num_samples = jax.device_count()
>>> rng = jax.random.split(rng, jax.device_count())
>>> prompt_ids, processed_image = pipeline.prepare_inputs(
... prompt=[prompts] * num_samples, image=[init_img] * num_samples
... )
>>> p_params = replicate(params)
>>> prompt_ids = shard(prompt_ids)
>>> processed_image = shard(processed_image)
>>> output = pipeline(
... prompt_ids=prompt_ids,
... image=processed_image,
... params=p_params,
... prng_seed=rng,
... strength=0.75,
... num_inference_steps=50,
... jit=True,
... height=512,
... width=768,
... ).images
>>> output_images = pipeline.numpy_to_pil(np.asarray(output.reshape((num_samples,) + output.shape[-3:])))
FlaxStableDiffusionPipelineOutput
class diffusers.pipelines.stable_diffusion.FlaxStableDiffusionPipelineOutput
< source >( images: ndarray nsfw_content_detected: List )
Output class for Flax-based Stable Diffusion pipelines.