
# AltDiffusion

AltDiffusion was proposed in [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, and Ledell Wu.

The abstract of the paper is the following:

In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flickr30k-CN, and COCO-CN. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding.

## Overview

| Pipeline | Tasks | Colab | Demo |
|---|---|---|---|
| pipeline_alt_diffusion.py | Text-to-Image Generation | - | - |
| pipeline_alt_diffusion_img2img.py | Image-to-Image Text-Guided Generation | - | - |

## Tips

• AltDiffusion is conceptually exactly the same as Stable Diffusion.

• Run AltDiffusion

AltDiffusion can be tested very easily with the AltDiffusionPipeline, the AltDiffusionImg2ImgPipeline, and the "BAAI/AltDiffusion-m9" checkpoint, in exactly the same way as shown in the Conditional Image Generation Guide and the Image-to-Image Generation Guide; a minimal sketch follows below.
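For instance, a minimal text-to-image sketch (the prompt and filename below are illustrative, not taken from the guides):

>>> from diffusers import AltDiffusionPipeline

>>> pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9")
>>> pipe = pipe.to("cuda")

>>> # the multilingual m9 checkpoint accepts prompts in many languages, e.g. Chinese
>>> image = pipe("一只戴着太阳镜的猫，数字绘画").images[0]  # "a cat wearing sunglasses, digital painting"
>>> image.save("cat.png")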

• How to load and use different schedulers.

The AltDiffusion pipeline uses the DDIMScheduler by default, but Diffusers provides many other schedulers that can be used with it, such as PNDMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler, and EulerAncestralDiscreteScheduler. To use a different scheduler, either change it via the ConfigMixin.from_config() method or pass a scheduler argument to the pipeline's from_pretrained method. For example, to use the EulerDiscreteScheduler, you can do the following:

>>> from diffusers import AltDiffusionPipeline, EulerDiscreteScheduler

>>> pipeline = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9")
>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)

>>> # or
>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("BAAI/AltDiffusion-m9", subfolder="scheduler")
>>> pipeline = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", scheduler=euler_scheduler)
• How to cover all use cases with a single pipeline or multiple pipelines

If you want to cover all possible use cases without loading the same weights twice, we recommend using the components functionality to instantiate all pipelines in the most memory-efficient way:

>>> from diffusers import (
...     AltDiffusionPipeline,
...     AltDiffusionImg2ImgPipeline,
... )

>>> text2img = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9")
>>> img2img = AltDiffusionImg2ImgPipeline(**text2img.components)

>>> # now you can use text2img(...) and img2img(...) just like the call methods of each respective pipeline

## AltDiffusionPipelineOutput

### class diffusers.pipelines.alt_diffusion.AltDiffusionPipelineOutput


( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] nsfw_content_detected: typing.Optional[typing.List[bool]] )

Parameters

• images (List[PIL.Image.Image] or np.ndarray) — List of denoised PIL images of length batch_size, or a NumPy array of shape (batch_size, height, width, num_channels). The PIL images or NumPy array represent the denoised images of the diffusion pipeline.
• nsfw_content_detected (List[bool]) — List of flags denoting whether the corresponding generated image likely represents “not-safe-for-work” (nsfw) content, or None if safety checking could not be performed.

Output class for Alt Diffusion pipelines.
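A short sketch of how this output is typically consumed with the default return_dict=True (the pipeline and prompt here are illustrative):

>>> from diffusers import AltDiffusionPipeline

>>> pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9").to("cuda")
>>> output = pipe("an astronaut riding a horse")  # an AltDiffusionPipelineOutput
>>> output.images[0].save("astronaut.png")  # images is a list of PIL images for output_type="pil"
>>> print(output.nsfw_content_detected)  # e.g. [False]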

#### __call__

( *args **kwargs )

Call self as a function.

## AltDiffusionPipeline

### class diffusers.AltDiffusionPipeline


( vae: AutoencoderKL text_encoder: RobertaSeriesModelWithTransformation tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True )

Parameters

• vae (AutoencoderKL) — Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
• text_encoder (RobertaSeriesModelWithTransformation) — Frozen text-encoder. AltDiffusion uses the text encoder of AltCLIP, an XLM-RoBERTa-based model, in place of Stable Diffusion's CLIP text encoder.
• tokenizer (XLMRobertaTokenizer) — Tokenizer of class XLMRobertaTokenizer.
• unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents.
• scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
• safety_checker (StableDiffusionSafetyChecker) — Classification module that estimates whether generated images could be considered offensive or harmful. Please, refer to the model card for details.
• feature_extractor (CLIPFeatureExtractor) — Model that extracts features from generated images to be used as inputs for the safety_checker.

Pipeline for text-to-image generation using Alt Diffusion.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

#### __call__


( prompt: typing.Union[str, typing.List[str]] = None height: typing.Optional[int] = None width: typing.Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: float = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None latents: typing.Optional[torch.FloatTensor] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: typing.Optional[int] = 1 ) ~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple

Parameters

• prompt (str or List[str], optional) — The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
• height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image.
• width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image.
• num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
• guidance_scale (float, optional, defaults to 7.5) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w in equation 2 of the Imagen paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images closely linked to the text prompt, usually at the expense of lower image quality.
• negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
• num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
• eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to the DDIMScheduler and is ignored for other schedulers.
• generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic.
• latents (torch.FloatTensor, optional) — Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor will be generated by sampling using the supplied random generator.
• prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.
• negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input argument.
• output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL (PIL.Image.Image) or np.array.
• return_dict (bool, optional, defaults to True) — Whether or not to return a ~pipelines.stable_diffusion.AltDiffusionPipelineOutput instead of a plain tuple.
• callback (Callable, optional) — A function that will be called every callback_steps steps during inference. The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
• callback_steps (int, optional, defaults to 1) — The frequency at which the callback function will be called. If not specified, the callback will be called at every step.

Returns

~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple

~pipelines.stable_diffusion.AltDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker.

Function invoked when calling the pipeline for generation.

Examples:

>>> import torch
>>> from diffusers import AltDiffusionPipeline

>>> pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")

>>> # "dark elf princess, highly detailed, d & d, fantasy, highly detailed, digital painting, trending on artstation, concept art, sharp focus, illustration, art by artgerm and greg rutkowski and fuji choko and viktoria gavrilenko and hoang lap"
>>> prompt = "黑暗精灵公主，非常详细，幻想，非常详细，数字绘画，概念艺术，敏锐的焦点，插图"
>>> image = pipe(prompt).images[0]
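The callback and callback_steps arguments can be used to inspect intermediate latents during sampling; a minimal sketch, where log_latents is a hypothetical helper:

>>> import torch
>>> from diffusers import AltDiffusionPipeline

>>> pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16)
>>> pipe = pipe.to("cuda")

>>> # hypothetical helper: print latent statistics every 10 denoising steps
>>> def log_latents(step: int, timestep: int, latents: torch.FloatTensor):
...     print(f"step {step} (timestep {timestep}): latents std {latents.std().item():.4f}")

>>> image = pipe("a cute cat", callback=log_latents, callback_steps=10).images[0]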

#### disable_vae_slicing


( )

Disable sliced VAE decoding. If enable_vae_slicing was previously invoked, this method will go back to computing decoding in one step.

#### enable_sequential_cpu_offload

( gpu_id = 0 )

Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet, text_encoder, vae, and safety_checker have their state dicts saved to CPU, are then moved to torch.device('meta'), and are loaded onto the GPU only when their particular submodule's forward method is called.
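A minimal usage sketch (assuming accelerate is installed; the pipeline is not moved to the GPU manually beforehand):

>>> import torch
>>> from diffusers import AltDiffusionPipeline

>>> pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", torch_dtype=torch.float16)
>>> pipe.enable_sequential_cpu_offload()  # submodules are moved to the GPU only when needed

>>> image = pipe("a fantasy landscape").images[0]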

#### enable_vae_slicing


( )

Enable sliced VAE decoding.

When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
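For example, a sketch of generating a larger batch with slicing enabled (the batch size is illustrative):

>>> from diffusers import AltDiffusionPipeline

>>> pipe = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9").to("cuda")
>>> pipe.enable_vae_slicing()

>>> # the VAE now decodes the batch one image at a time, trading a bit of speed for memory
>>> images = pipe(["a fantasy landscape"] * 4).images
>>> pipe.disable_vae_slicing()  # restore one-step decoding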

## AltDiffusionImg2ImgPipeline

### class diffusers.AltDiffusionImg2ImgPipeline


( vae: AutoencoderKL text_encoder: RobertaSeriesModelWithTransformation tokenizer: XLMRobertaTokenizer unet: UNet2DConditionModel scheduler: KarrasDiffusionSchedulers safety_checker: StableDiffusionSafetyChecker feature_extractor: CLIPFeatureExtractor requires_safety_checker: bool = True )

Parameters

• vae (AutoencoderKL) — Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
• text_encoder (RobertaSeriesModelWithTransformation) — Frozen text-encoder. AltDiffusion uses the text encoder of AltCLIP, an XLM-RoBERTa-based model, in place of Stable Diffusion's CLIP text encoder.
• tokenizer (XLMRobertaTokenizer) — Tokenizer of class XLMRobertaTokenizer.
• unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents.
• scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
• safety_checker (StableDiffusionSafetyChecker) — Classification module that estimates whether generated images could be considered offensive or harmful. Please, refer to the model card for details.
• feature_extractor (CLIPFeatureExtractor) — Model that extracts features from generated images to be used as inputs for the safety_checker.

Pipeline for text-guided image to image generation using Alt Diffusion.

This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)

#### __call__


( prompt: typing.Union[str, typing.List[str]] = None image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None strength: float = 0.8 num_inference_steps: typing.Optional[int] = 50 guidance_scale: typing.Optional[float] = 7.5 negative_prompt: typing.Union[str, typing.List[str], NoneType] = None num_images_per_prompt: typing.Optional[int] = 1 eta: typing.Optional[float] = 0.0 generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None output_type: typing.Optional[str] = 'pil' return_dict: bool = True callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None callback_steps: typing.Optional[int] = 1 **kwargs ) ~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple

Parameters

• prompt (str or List[str], optional) — The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
• image (torch.FloatTensor or PIL.Image.Image) — Image, or tensor representing an image batch, that will be used as the starting point for the process.
• strength (float, optional, defaults to 0.8) — Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image will be used as a starting point, adding more noise to it the larger the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise will be maximum and the denoising process will run for the full number of iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores image.
• num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. This parameter is modulated by strength: roughly int(num_inference_steps * strength) steps are actually run, e.g. 37 steps for strength=0.75 and num_inference_steps=50; see the strength sweep sketch after the example below.
• guidance_scale (float, optional, defaults to 7.5) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w in equation 2 of the Imagen paper. Guidance scale is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images closely linked to the text prompt, usually at the expense of lower image quality.
• negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
• num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
• eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to the DDIMScheduler and is ignored for other schedulers.
• generator (torch.Generator or List[torch.Generator], optional) — One or a list of torch generator(s) to make generation deterministic.
• prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from prompt input argument.
• negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from negative_prompt input argument.
• output_type (str, optional, defaults to "pil") — The output format of the generated image. Choose between PIL (PIL.Image.Image) or np.array.
• return_dict (bool, optional, defaults to True) — Whether or not to return a ~pipelines.stable_diffusion.AltDiffusionPipelineOutput instead of a plain tuple.
• callback (Callable, optional) — A function that will be called every callback_steps steps during inference. The function will be called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
• callback_steps (int, optional, defaults to 1) — The frequency at which the callback function will be called. If not specified, the callback will be called at every step.

Returns

~pipelines.stable_diffusion.AltDiffusionPipelineOutput or tuple

~pipelines.stable_diffusion.AltDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker.

Function invoked when calling the pipeline for generation.

Examples:

>>> import requests
>>> import torch
>>> from PIL import Image
>>> from io import BytesIO

>>> from diffusers import AltDiffusionImg2ImgPipeline

>>> device = "cuda"
>>> model_id_or_path = "BAAI/AltDiffusion-m9"
>>> pipe = AltDiffusionImg2ImgPipeline.from_pretrained(model_id_or_path, torch_dtype=torch.float16)
>>> pipe = pipe.to(device)

>>> url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"

>>> response = requests.get(url)
>>> init_image = Image.open(BytesIO(response.content)).convert("RGB")
>>> init_image = init_image.resize((768, 512))

>>> # "A fantasy landscape, trending on artstation"
>>> prompt = "幻想风景, artstation"

>>> images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
>>> images[0].save("幻想风景.png")
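Continuing the example above, a hypothetical sweep over strength shows the trade-off described in the parameter list: lower values stay closer to the input image, while higher values give the prompt more freedom:

>>> # sketch: reuses pipe, prompt and init_image from the example above
>>> for s in (0.3, 0.6, 0.9):
...     image = pipe(prompt=prompt, image=init_image, strength=s, guidance_scale=7.5).images[0]
...     image.save(f"幻想风景_strength_{s}.png")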

#### enable_sequential_cpu_offload

( gpu_id = 0 )

Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the unet, text_encoder, vae, and safety_checker have their state dicts saved to CPU, are then moved to torch.device('meta'), and are loaded onto the GPU only when their particular submodule's forward method is called.