Community pipelines

For more context about the design choices behind community pipelines, take a look at this issue. Community pipelines allow you to get creative and build your own unique pipelines to share with the community. You can find all community pipelines in the diffusers/examples/community folder along with inference and training examples for how to use them. This guide showcases some of the community pipelines and hopefully it'll inspire you to create your own (feel free to open a PR with your own pipeline and we will merge it!).

To load a community pipeline, use the custom_pipeline argument in DiffusionPipeline to specify one of the files in diffusers/examples/community:

from diffusers import DiffusionPipeline
pipe = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4", custom_pipeline="filename_in_the_community_folder", use_safetensors=True
)

If a community pipeline doesn't work as expected, please open a GitHub issue and mention the author.

You can learn more about community pipelines in the how to load community pipelines and how to contribute a community pipeline guides.

Multilingual Stable Diffusion

The multilingual Stable Diffusion pipeline uses a pretrained XLM-RoBERTa to identify a language and the mBART-large-50 model to handle the translation. This allows you to generate images from text in 20 languages.

import torch
from diffusers import DiffusionPipeline
from diffusers.utils import make_image_grid
from transformers import (
pipeline,
MBart50TokenizerFast,
MBartForConditionalGeneration,
)
device = "cuda" if torch.cuda.is_available() else "cpu"
device_dict = {"cuda": 0, "cpu": -1}
# add language detection pipeline
language_detection_model_ckpt = "papluca/xlm-roberta-base-language-detection"
language_detection_pipeline = pipeline("text-classification",
model=language_detection_model_ckpt,
device=device_dict[device])
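# Illustrative only: the text-classification pipeline returns entries of the
# form [{"label": ..., "score": ...}]; e.g. a Spanish prompt would be
# labeled "es" by this checkpoint.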
# add model for language translation
translation_tokenizer = MBart50TokenizerFast.from_pretrained("facebook/mbart-large-50-many-to-one-mmt")
translation_model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-50-many-to-one-mmt").to(device)
diffuser_pipeline = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
custom_pipeline="multilingual_stable_diffusion",
detection_pipeline=language_detection_pipeline,
translation_model=translation_model,
translation_tokenizer=translation_tokenizer,
torch_dtype=torch.float16,
)
diffuser_pipeline.enable_attention_slicing()
diffuser_pipeline = diffuser_pipeline.to(device)
prompt = ["a photograph of an astronaut riding a horse",
"Una casa en la playa",
"Ein Hund, der Orange isst",
"Un restaurant parisien"]
images = diffuser_pipeline(prompt).images
make_image_grid(images, rows=2, cols=2)

MagicMix

MagicMix is a pipeline that can mix an image and text prompt to generate a new image that preserves the image structure. The mix_factor determines how much influence the prompt has on the layout generation, kmin controls the number of steps during the content generation process, and kmax determines how much information is kept in the layout of the original image.

from diffusers import DiffusionPipeline, DDIMScheduler
from diffusers.utils import load_image, make_image_grid
pipeline = DiffusionPipeline.from_pretrained(
"CompVis/stable-diffusion-v1-4",
custom_pipeline="magic_mix",
scheduler=DDIMScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler"),
).to("cuda")
img = load_image("https://user-images.githubusercontent.com/59410571/209578593-141467c7-d831-4792-8b9a-b17dc5e47816.jpg")
mix_img = pipeline(img, prompt="bed", kmin=0.3, kmax=0.5, mix_factor=0.5)
make_image_grid([img, mix_img], rows=1, cols=2)

[Figure: the original image (left) next to the image and text prompt mix (right)]
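Since mix_factor is the main knob, a quick way to get a feel for it is to sweep a few values while holding kmin and kmax fixed. This is a minimal sketch reusing the pipeline and img from the example above; the values are illustrative, not tuned recommendations:

# Sketch: compare a few mix_factor values with kmin/kmax held fixed.
# Reuses `pipeline` and `img` from the example above; values are illustrative.
mixes = [
    pipeline(img, prompt="bed", kmin=0.3, kmax=0.5, mix_factor=f)
    for f in (0.25, 0.5, 0.75)
]
make_image_grid([img, *mixes], rows=1, cols=4)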
Attention Processor

An attention processor is a class for applying different types of attention mechanisms.

AttnProcessor

class diffusers.models.attention_processor.AttnProcessor

( )

Default processor for performing attention-related computations.

AttnProcessor2_0

class diffusers.models.attention_processor.AttnProcessor2_0

( )

Processor for implementing scaled dot-product attention (enabled by default if you're using PyTorch 2.0).

FusedAttnProcessor2_0

class diffusers.models.attention_processor.FusedAttnProcessor2_0

( )

Processor for implementing scaled dot-product attention (enabled by default if you're using PyTorch 2.0). It uses fused projection layers. For self-attention modules, all projection matrices (i.e., query, key, value) are fused. For cross-attention modules, key and value projection matrices are fused.

This API is currently 🧪 experimental in nature and can change in the future.
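As a sketch of how a processor gets attached in practice, set_attn_processor() on the UNet is the hook for swapping implementations. The model choice below is illustrative, and on PyTorch 2.0 AttnProcessor2_0 is already the default:

import torch
from diffusers import StableDiffusionPipeline
from diffusers.models.attention_processor import AttnProcessor2_0

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Explicitly select the scaled dot-product attention processor
# (redundant on PyTorch 2.0, where it is already the default).
pipe.unet.set_attn_processor(AttnProcessor2_0())

# The fused variant is usually enabled via this helper, which swaps in
# FusedAttnProcessor2_0 under the hood.
pipe.unet.fuse_qkv_projections()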
LoRAAttnProcessor

class diffusers.models.attention_processor.LoRAAttnProcessor

( hidden_size: int, cross_attention_dim: Optional[int] = None, rank: int = 4, network_alpha: Optional[int] = None, **kwargs )

Parameters

hidden_size (int, optional) — The hidden size of the attention layer.
cross_attention_dim (int, optional) — The number of channels in the encoder_hidden_states.
rank (int, defaults to 4) — The dimension of the LoRA update matrices.
network_alpha (int, optional) — Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs.
kwargs (dict) — Additional keyword arguments to pass to the LoRALinearLayer layers.

Processor for implementing the LoRA attention mechanism.

LoRAAttnProcessor2_0

class diffusers.models.attention_processor.LoRAAttnProcessor2_0

( hidden_size: int, cross_attention_dim: Optional[int] = None, rank: int = 4, network_alpha: Optional[int] = None, **kwargs )

Parameters

hidden_size (int) — The hidden size of the attention layer.
cross_attention_dim (int, optional) — The number of channels in the encoder_hidden_states.
rank (int, defaults to 4) — The dimension of the LoRA update matrices.
network_alpha (int, optional) — Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs.
kwargs (dict) — Additional keyword arguments to pass to the LoRALinearLayer layers.

Processor for implementing the LoRA attention mechanism using PyTorch 2.0's memory-efficient scaled dot-product attention.
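A minimal sketch of wiring these up by hand over a Stable Diffusion UNet follows. The per-layer hidden sizes are read from the UNet config, and self-attention layers (attn1) get cross_attention_dim=None; this mirrors the pattern older diffusers LoRA training scripts used, and is a sketch rather than the canonical API:

from diffusers import UNet2DConditionModel
from diffusers.models.attention_processor import LoRAAttnProcessor

unet = UNet2DConditionModel.from_pretrained(
    "CompVis/stable-diffusion-v1-4", subfolder="unet"
)

# Build a rank-4 LoRA processor for every attention module in the UNet.
lora_attn_procs = {}
for name in unet.attn_processors.keys():
    # attn1 is self-attention (no text conditioning); attn2 is cross-attention.
    cross_attention_dim = (
        None if name.endswith("attn1.processor") else unet.config.cross_attention_dim
    )
    if name.startswith("mid_block"):
        hidden_size = unet.config.block_out_channels[-1]
    elif name.startswith("up_blocks"):
        block_id = int(name[len("up_blocks.")])
        hidden_size = list(reversed(unet.config.block_out_channels))[block_id]
    else:  # down_blocks
        block_id = int(name[len("down_blocks.")])
        hidden_size = unet.config.block_out_channels[block_id]
    lora_attn_procs[name] = LoRAAttnProcessor(
        hidden_size=hidden_size, cross_attention_dim=cross_attention_dim, rank=4
    )

unet.set_attn_processor(lora_attn_procs)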
CustomDiffusionAttnProcessor

class diffusers.models.attention_processor.CustomDiffusionAttnProcessor

( train_kv: bool = True, train_q_out: bool = True, hidden_size: Optional[int] = None, cross_attention_dim: Optional[int] = None, out_bias: bool = True, dropout: float = 0.0 )

Parameters

train_kv (bool, defaults to True) — Whether to newly train the key and value matrices corresponding to the text features.
train_q_out (bool, defaults to True) — Whether to newly train query matrices corresponding to the latent image features.
hidden_size (int, optional, defaults to None) — The hidden size of the attention layer.
cross_attention_dim (int, optional, defaults to None) — The number of channels in the encoder_hidden_states.
out_bias (bool, defaults to True) — Whether to include the bias parameter in train_q_out.
dropout (float, optional, defaults to 0.0) — The dropout probability to use.

Processor for implementing attention for the Custom Diffusion method.

CustomDiffusionAttnProcessor2_0

class diffusers.models.attention_processor.CustomDiffusionAttnProcessor2_0

( train_kv: bool = True, train_q_out: bool = True, hidden_size: Optional[int] = None, cross_attention_dim: Optional[int] = None, out_bias: bool = True, dropout: float = 0.0 )

Parameters

train_kv (bool, defaults to True) — Whether to newly train the key and value matrices corresponding to the text features.
train_q_out (bool, defaults to True) — Whether to newly train query matrices corresponding to the latent image features.
hidden_size (int, optional, defaults to None) — The hidden size of the attention layer.
cross_attention_dim (int, optional, defaults to None) — The number of channels in the encoder_hidden_states.
out_bias (bool, defaults to True) — Whether to include the bias parameter in train_q_out.
dropout (float, optional, defaults to 0.0) — The dropout probability to use.

Processor for implementing attention for the Custom Diffusion method using PyTorch 2.0's memory-efficient scaled dot-product attention.
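Constructing one of these processors follows directly from the parameters above. A small sketch; the hidden size and cross-attention dimension are illustrative SD 1.x values, not defaults of this class:

from diffusers.models.attention_processor import CustomDiffusionAttnProcessor

# Train only the key/value projections for the text features and keep the
# query/output projections frozen; 320 and 768 are illustrative SD 1.x widths.
proc = CustomDiffusionAttnProcessor(
    train_kv=True,
    train_q_out=False,
    hidden_size=320,
    cross_attention_dim=768,
)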
AttnAddedKVProcessor

class diffusers.models.attention_processor.AttnAddedKVProcessor

( )

Processor for performing attention-related computations with extra learnable key and value matrices for the text encoder.

AttnAddedKVProcessor2_0

class diffusers.models.attention_processor.AttnAddedKVProcessor2_0

( )

Processor for performing scaled dot-product attention (enabled by default if you're using PyTorch 2.0), with extra learnable key and value matrices for the text encoder.
LoRAAttnAddedKVProcessor

class diffusers.models.attention_processor.LoRAAttnAddedKVProcessor

( hidden_size: int, cross_attention_dim: Optional[int] = None, rank: int = 4, network_alpha: Optional[int] = None )

Parameters

hidden_size (int, optional) — The hidden size of the attention layer.
cross_attention_dim (int, optional, defaults to None) — The number of channels in the encoder_hidden_states.
rank (int, defaults to 4) — The dimension of the LoRA update matrices.
network_alpha (int, optional) — Equivalent to alpha but its usage is specific to Kohya (A1111) style LoRAs.
kwargs (dict) — Additional keyword arguments to pass to the LoRALinearLayer layers.

Processor for implementing the LoRA attention mechanism with extra learnable key and value matrices for the text encoder.
XFormersAttnProcessor

class diffusers.models.attention_processor.XFormersAttnProcessor

( attention_op: Optional[Callable] = None )

Parameters

attention_op (Callable, optional, defaults to None) — The base operator to use as the attention operator. It is recommended to set to None, and allow xFormers to choose the best operator.

Processor for implementing memory efficient attention using xFormers.
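You rarely instantiate this class directly; the usual entry point is the pipeline helper below, which installs the processor for you. This requires the xformers package, and the model choice is illustrative:

import torch
from diffusers import DiffusionPipeline

pipe = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

# Installs XFormersAttnProcessor on the model; leaving attention_op as None
# lets xFormers pick the best operator.
pipe.enable_xformers_memory_efficient_attention()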
LoRAXFormersAttnProcessor

class diffusers.models.attention_processor.LoRAXFormersAttnProcessor

( hidden_size: int, cross_attention_dim: int, rank: int = 4, attention_op: Optional[Callable] = None, network_alpha: Optional[int] = None, **kwargs )

Parameters

hidden_size (int, optional) —