VAE Image Processor
The `VaeImageProcessor` provides a unified API for StableDiffusionPipelines to prepare image inputs for VAE encoding and to post-process outputs once they're decoded. This includes transformations such as resizing, normalization, and conversion between PIL images, PyTorch tensors, and NumPy arrays.
All pipelines with a `VaeImageProcessor` accept PIL images, PyTorch tensors, or NumPy arrays as image inputs and return outputs based on the `output_type` argument specified by the user. You can pass encoded image latents directly to a pipeline, and return latents from a pipeline with the `output_type` argument (for example, `output_type="latent"`). This allows you to take the generated latents from one pipeline and pass them to another pipeline as input without ever leaving the latent space, and it makes it much easier to chain multiple pipelines by passing PyTorch tensors directly between them.
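For example, here is a minimal sketch of chaining two pipelines in latent space; it assumes the SDXL base and refiner checkpoints and a CUDA device are available (any compatible pair of pipelines works the same way):

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

# assumed checkpoints; substitute any compatible base/refiner pair
base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "a photo of an astronaut riding a horse"

# output_type="latent" returns latents instead of decoded PIL images
latents = base(prompt=prompt, output_type="latent").images

# the latents are passed directly as the image input of the second pipeline,
# so the intermediate result never leaves the latent space
image = refiner(prompt=prompt, image=latents).images[0]
```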
VaeImageProcessor
class diffusers.image_processor.VaeImageProcessor
< source >( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = True do_binarize: bool = False do_convert_rgb: bool = False do_convert_grayscale: bool = False )
Parameters
- do_resize (`bool`, *optional*, defaults to `True`) — Whether to downscale the image's (height, width) dimensions to multiples of `vae_scale_factor`. Can accept `height` and `width` arguments from the `image_processor.VaeImageProcessor.preprocess()` method.
- vae_scale_factor (`int`, *optional*, defaults to `8`) — VAE scale factor. If `do_resize` is `True`, the image is automatically resized to multiples of this factor.
- resample (`str`, *optional*, defaults to `lanczos`) — Resampling filter to use when resizing the image.
- do_normalize (`bool`, *optional*, defaults to `True`) — Whether to normalize the image to [-1, 1].
- do_binarize (`bool`, *optional*, defaults to `False`) — Whether to binarize the image to 0/1.
- do_convert_rgb (`bool`, *optional*, defaults to `False`) — Whether to convert the images to RGB format.
- do_convert_grayscale (`bool`, *optional*, defaults to `False`) — Whether to convert the images to grayscale format.
Image processor for VAE.
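As a quick sketch of using the processor on its own (the image size and random pixel data below are only placeholders):

```python
import numpy as np
from PIL import Image
from diffusers.image_processor import VaeImageProcessor

image_processor = VaeImageProcessor(vae_scale_factor=8)

# a PIL image whose dimensions are not multiples of vae_scale_factor
image = Image.fromarray(np.random.randint(0, 256, (511, 513, 3), dtype=np.uint8))

# preprocess() resizes to multiples of vae_scale_factor, converts to a tensor,
# and normalizes pixel values to [-1, 1]
tensor = image_processor.preprocess(image)
print(tensor.shape)  # expected: torch.Size([1, 3, 504, 512])
```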
binarize
< source >( image: Image ) → PIL.Image.Image
Create a mask.
convert_to_grayscale
Converts a PIL image to grayscale format.
convert_to_rgb
Converts a PIL image to RGB format.
denormalize
Denormalize an image array to [0,1].
get_default_height_width
< source >( image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor] height: typing.Optional[int] = None width: typing.Optional[int] = None )
Parameters
- image (`PIL.Image.Image`, `np.ndarray` or `torch.Tensor`) — The image input, which can be a PIL image, NumPy array, or PyTorch tensor. If it is a NumPy array, it should have shape `[batch, height, width]` or `[batch, height, width, channel]`; if it is a PyTorch tensor, it should have shape `[batch, channel, height, width]`.
- height (`int`, *optional*, defaults to `None`) — The height of the preprocessed image. If `None`, the height of the `image` input is used.
- width (`int`, *optional*, defaults to `None`) — The width of the preprocessed image. If `None`, the width of the `image` input is used.

This function returns the height and width, downscaled to the next integer multiple of `vae_scale_factor`.
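A small sketch of the rounding behavior (the image size here is arbitrary):

```python
from PIL import Image
from diffusers.image_processor import VaeImageProcessor

image_processor = VaeImageProcessor(vae_scale_factor=8)

# PIL uses (width, height): width=515, height=600
image = Image.new("RGB", (515, 600))

height, width = image_processor.get_default_height_width(image)
print(height, width)  # expected: 600 512 -- each dimension rounded down to a multiple of 8
```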
normalize
Normalize an image array to [-1,1].
numpy_to_pil
Convert a NumPy image or a batch of images to a PIL image.
numpy_to_pt
Convert a NumPy image to a PyTorch tensor.
pil_to_numpy
Convert a PIL image or a list of PIL images to NumPy arrays.
postprocess
< source >( image: FloatTensor output_type: str = 'pil' do_denormalize: typing.Optional[typing.List[bool]] = None ) → `PIL.Image.Image`, `np.ndarray` or `torch.FloatTensor`
Parameters
- image (`torch.FloatTensor`) — The image input, which should be a PyTorch tensor with shape `B x C x H x W`.
- output_type (`str`, *optional*, defaults to `pil`) — The output type of the image, which can be one of `pil`, `np`, `pt`, or `latent`.
- do_denormalize (`List[bool]`, *optional*, defaults to `None`) — Whether to denormalize the image to [0, 1]. If `None`, the value of `do_normalize` in the `VaeImageProcessor` config is used.

Returns
`PIL.Image.Image`, `np.ndarray` or `torch.FloatTensor`
The postprocessed image.
Postprocess the image output from a tensor to the specified `output_type`.
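A minimal sketch of post-processing a decoded tensor, with random data standing in for an actual VAE decoder output:

```python
import torch
from diffusers.image_processor import VaeImageProcessor

image_processor = VaeImageProcessor()

# a decoded VAE output: float tensor in [-1, 1] with shape [batch, channels, height, width]
decoded = torch.rand(1, 3, 512, 512) * 2 - 1

pil_images = image_processor.postprocess(decoded, output_type="pil")  # list of PIL images
np_images = image_processor.postprocess(decoded, output_type="np")    # NumPy array of shape (1, 512, 512, 3)

print(type(pil_images[0]), np_images.shape)
```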
preprocess
< source >( image: typing.Union[torch.FloatTensor, PIL.Image.Image, numpy.ndarray] height: typing.Optional[int] = None width: typing.Optional[int] = None )
Preprocess the image input. Accepted formats are PIL images, NumPy arrays or PyTorch tensors.
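A brief sketch showing that a NumPy batch in [0, 1] is converted to a normalized PyTorch tensor (the array below is random placeholder data):

```python
import numpy as np
from diffusers.image_processor import VaeImageProcessor

image_processor = VaeImageProcessor()

# NumPy input with shape (batch, height, width, channels) and values in [0, 1]
np_batch = np.random.rand(2, 256, 256, 3).astype(np.float32)

pt_batch = image_processor.preprocess(np_batch)
print(pt_batch.shape)  # expected: torch.Size([2, 3, 256, 256])
print(pt_batch.min().item() >= -1.0 and pt_batch.max().item() <= 1.0)  # expected: True
```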
pt_to_numpy
Convert a PyTorch tensor to a NumPy image.
resize
< source >( image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.Tensor] height: typing.Optional[int] = None width: typing.Optional[int] = None ) → `PIL.Image.Image`, `np.ndarray` or `torch.Tensor`
Parameters
- image (`PIL.Image.Image`, `np.ndarray` or `torch.Tensor`) — The image input, which can be a PIL image, NumPy array, or PyTorch tensor.
- height (`int`, *optional*, defaults to `None`) — The height to resize to.
- width (`int`, *optional*, defaults to `None`) — The width to resize to.
Returns
`PIL.Image.Image`, `np.ndarray` or `torch.Tensor`
The resized image.
Resize image.
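A short sketch of resizing a PIL image with the processor's configured resampling filter (the input size is arbitrary):

```python
from PIL import Image
from diffusers.image_processor import VaeImageProcessor

image_processor = VaeImageProcessor()

image = Image.new("RGB", (600, 400))  # PIL uses (width, height)
resized = image_processor.resize(image, height=512, width=512)
print(resized.size)  # expected: (512, 512)
```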
VaeImageProcessorLDM3D
The `VaeImageProcessorLDM3D` accepts RGB and depth inputs and returns RGB and depth outputs.
class diffusers.image_processor.VaeImageProcessorLDM3D
< source >( do_resize: bool = True vae_scale_factor: int = 8 resample: str = 'lanczos' do_normalize: bool = True )
Parameters
- do_resize (`bool`, *optional*, defaults to `True`) — Whether to downscale the image's (height, width) dimensions to multiples of `vae_scale_factor`.
- vae_scale_factor (`int`, *optional*, defaults to `8`) — VAE scale factor. If `do_resize` is `True`, the image is automatically resized to multiples of this factor.
- resample (`str`, *optional*, defaults to `lanczos`) — Resampling filter to use when resizing the image.
- do_normalize (`bool`, *optional*, defaults to `True`) — Whether to normalize the image to [-1, 1].
Image processor for VAE LDM3D.
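As a rough sketch of where this processor is used, the LDM3D pipeline below returns both RGB and depth outputs; it assumes the `Intel/ldm3d-4c` checkpoint and a CUDA device are available:

```python
import torch
from diffusers import StableDiffusionLDM3DPipeline

pipe = StableDiffusionLDM3DPipeline.from_pretrained(
    "Intel/ldm3d-4c", torch_dtype=torch.float16
).to("cuda")

output = pipe("a photo of a red brick house by a lake")

# the pipeline output carries both modalities, post-processed by VaeImageProcessorLDM3D
rgb_image = output.rgb[0]
depth_image = output.depth[0]
rgb_image.save("house_rgb.png")
depth_image.save("house_depth.png")
```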
depth_pil_to_numpy
< source >( images: typing.Union[typing.List[PIL.Image.Image], PIL.Image.Image] )
Convert a PIL image or a list of PIL images to NumPy arrays.
numpy_to_depth
Convert a NumPy depth image or a batch of images to a PIL image.
numpy_to_pil
Convert a NumPy image or a batch of images to a PIL image.
rgblike_to_depthmap
Returns: depth map.