Parameters

generator (torch.Generator, optional) —
One or a list of torch generator(s) to make generation deterministic.

num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher-quality image at the expense of slower inference.

output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL (PIL.Image.Image) or np.array.

return_dict (bool, optional, defaults to True) —
Whether or not to return an ImagePipelineOutput instead of a plain tuple.

Returns

ImagePipelineOutput or tuple —
~pipelines.utils.ImagePipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.
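As an illustration, here is a minimal sketch of how these arguments are typically passed to a pipeline call. The DDPMPipeline class and the google/ddpm-cat-256 checkpoint below are assumptions chosen for the example, not part of this API reference:

import torch
from diffusers import DDPMPipeline

# Illustrative checkpoint; any DDPM checkpoint is called the same way.
pipe = DDPMPipeline.from_pretrained("google/ddpm-cat-256")

# A seeded generator makes generation deterministic; fewer denoising steps
# trade image quality for faster inference.
generator = torch.Generator().manual_seed(0)
output = pipe(generator=generator, num_inference_steps=50, output_type="pil", return_dict=True)
image = output.images[0]  # with return_dict=True, .images holds the generated PIL images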
Custom Pipelines |
For more information about community pipelines, please have a look at this issue. |
Community examples consist of both inference and training examples that have been added by the community. |
Please have a look at the following table to get an overview of all community examples. Click on the Code Example to get a copy-and-paste ready code example that you can try out. |
If a community pipeline doesn't work as expected, please open an issue and ping the author on it.
| Example | Description | Code Example | Colab | Author |
|---|---|---|---|---|
| CLIP Guided Stable Diffusion | Doing CLIP guidance for text-to-image generation with Stable Diffusion | CLIP Guided Stable Diffusion | | Suraj Patil |
| One Step U-Net (Dummy) | Example showcasing how to use Community Pipelines (see https://github.com/huggingface/diffusers/issues/841) | One Step U-Net | - | Patrick von Platen |
| Stable Diffusion Interpolation | Interpolate the latent space of Stable Diffusion between different prompts/seeds | Stable Diffusion Interpolation | - | Nate Raw |
| Stable Diffusion Mega | One Stable Diffusion pipeline with all the functionality of Text2Image, Image2Image, and Inpainting | Stable Diffusion Mega | - | Patrick von Platen |
| Long Prompt Weighting Stable Diffusion | One Stable Diffusion pipeline without a token-length limit, with support for parsing weighting in the prompt | Long Prompt Weighting Stable Diffusion | - | SkyTNT |
| Speech to Image | Using automatic speech recognition to transcribe text and Stable Diffusion to generate images | Speech to Image | - | Mikail Duzenli |
To load a custom pipeline, you just need to pass the custom_pipeline argument to DiffusionPipeline, set to the name of one of the files in diffusers/examples/community. Feel free to send a PR with your own pipelines; we will merge them quickly.
pipe = DiffusionPipeline.from_pretrained( |
"CompVis/stable-diffusion-v1-4", custom_pipeline="filename_in_the_community_folder" |
) |
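As a concrete sketch, the dummy one_step_unet pipeline from the table above can be loaded and run like this. The google/ddpm-cifar10-32 checkpoint is an assumption here; it only supplies the unet and scheduler components the community pipeline expects:

from diffusers import DiffusionPipeline

# "one_step_unet" refers to the One Step U-Net (Dummy) file in
# diffusers/examples/community; the checkpoint is an illustrative choice.
pipe = DiffusionPipeline.from_pretrained("google/ddpm-cifar10-32", custom_pipeline="one_step_unet")

# The dummy pipeline performs a single denoising step and returns the result.
output = pipe()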
Example usages |
CLIP Guided Stable Diffusion |
CLIP guided Stable Diffusion can help to generate more realistic images by guiding Stable Diffusion at every denoising step with an additional CLIP model. The following code requires roughly 12GB of GPU RAM.
from diffusers import DiffusionPipeline
from transformers import CLIPFeatureExtractor, CLIPModel
import torch

# Load the CLIP model and feature extractor used for guidance.
feature_extractor = CLIPFeatureExtractor.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K")
clip_model = CLIPModel.from_pretrained("laion/CLIP-ViT-B-32-laion2B-s34B-b79K", torch_dtype=torch.float16)

# Load Stable Diffusion with the community CLIP-guided pipeline.
guided_pipeline = DiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",
    custom_pipeline="clip_guided_stable_diffusion",
    clip_model=clip_model,
    feature_extractor=feature_extractor,
    torch_dtype=torch.float16,
)
guided_pipeline = guided_pipeline.to("cuda")
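From there, generation works like any other Stable Diffusion pipeline. A possible call is sketched below; the prompt is only an illustration, and the clip_guidance_scale, num_cutouts, and use_cutouts arguments follow the community pipeline's signature in diffusers/examples/community:

prompt = "fantasy book cover, full moon, fantasy forest landscape, golden vector elements"
generator = torch.Generator(device="cuda").manual_seed(0)

# clip_guidance_scale controls how strongly the CLIP model steers each
# denoising step; guidance_scale is the usual classifier-free guidance weight.
image = guided_pipeline(
    prompt,
    num_inference_steps=50,
    guidance_scale=7.5,
    clip_guidance_scale=100,
    num_cutouts=4,
    use_cutouts=False,
    generator=generator,
).images[0]
image.save("clip_guided_stable_diffusion.png")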