return_dict (bool, optional, defaults to True) —
Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
callback (Callable, optional) —
A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1) —
The frequency at which the callback function is called. If not specified, the callback is called at every step.
cross_attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.
clip_skip (int, optional) —
Number of layers to skip in CLIP while computing the prompt embeddings. A value of 1 means the output of the pre-final layer is used to compute the prompt embeddings.

Returns

StableDiffusionPipelineOutput or tuple

If return_dict is True, a StableDiffusionPipelineOutput is returned; otherwise a tuple is returned where the first element is a list with the generated images and the second element is a list of bools indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content.

The call function to the pipeline for generation.
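As a sketch of how the callback and callback_steps arguments fit together (the function name and step count here are illustrative, and pipe, prompt, source_prompt, and init_image refer to the pipeline and inputs set up in the example below):

import torch

def log_progress(step: int, timestep: int, latents: torch.FloatTensor):
    # Invoked every `callback_steps` denoising steps with the current latents.
    print(f"step {step} (timestep {timestep}): latents shape {tuple(latents.shape)}")

image = pipe(
    prompt=prompt,
    source_prompt=source_prompt,
    image=init_image,
    callback=log_progress,
    callback_steps=10,  # call the function every 10 steps instead of every step
).images[0]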
Example:

import requests
import torch
from PIL import Image
from io import BytesIO
from diffusers import CycleDiffusionPipeline, DDIMScheduler

# load the pipeline
# make sure you're logged in with `huggingface-cli login`
model_id_or_path = "CompVis/stable-diffusion-v1-4"
scheduler = DDIMScheduler.from_pretrained(model_id_or_path, subfolder="scheduler")
pipe = CycleDiffusionPipeline.from_pretrained(model_id_or_path, scheduler=scheduler).to("cuda")

# let's download an initial image
url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/An%20astronaut%20riding%20a%20horse.png"
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((512, 512))
init_image.save("horse.png")

# let's specify a prompt
source_prompt = "An astronaut riding a horse"
prompt = "An astronaut riding an elephant"

# call the pipeline
image = pipe(
    prompt=prompt,
    source_prompt=source_prompt,
    image=init_image,
    num_inference_steps=100,
    eta=0.1,
    strength=0.8,
    guidance_scale=2,
    source_guidance_scale=1,
).images[0]

image.save("horse_to_elephant.png")

# let's try another example
# See more samples at the original repo: https://github.com/ChenWu98/cycle-diffusion
url = (
    "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/A%20black%20colored%20car.png"
)
response = requests.get(url)
init_image = Image.open(BytesIO(response.content)).convert("RGB")
init_image = init_image.resize((512, 512))
init_image.save("black.png")

source_prompt = "A black colored car"
prompt = "A blue colored car"

# call the pipeline
torch.manual_seed(0)
image = pipe(
    prompt=prompt,
    source_prompt=source_prompt,
    image=init_image,
    num_inference_steps=100,
    eta=0.1,
    strength=0.85,
    guidance_scale=3,
    source_guidance_scale=1,
).images[0]
image.save("black_to_blue.png") encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: typing.Optional[torch.FloatTensor] = None negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None lora_scale: typing.Optional[float] = None clip_skip: typing.Optional[int] = None ) Parameters prompt (str or List[str], optional) β |
encode_prompt

( prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt = None, prompt_embeds: typing.Optional[torch.FloatTensor] = None, negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None, lora_scale: typing.Optional[float] = None, clip_skip: typing.Optional[int] = None )

Parameters

prompt (str or List[str], optional) —
The prompt to be encoded.
device (torch.device) —
The torch device.
num_images_per_prompt (int) —
The number of images that should be generated per prompt.
do_classifier_free_guidance (bool) —
Whether to use classifier-free guidance or not.
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings are generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
lora_scale (float, optional) —
A LoRA scale that is applied to all LoRA layers of the text encoder if LoRA layers are loaded.
clip_skip (int, optional) —
Number of layers to skip in CLIP while computing the prompt embeddings. A value of 1 means the output of the pre-final layer is used to compute the prompt embeddings.

Encodes the prompt into text encoder hidden states.
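Pre-generated embeddings of this kind are what the prompt_embeds and negative_prompt_embeds parameters described above accept. A sketch of calling encode_prompt directly, assuming the pipe loaded in the example above (the exact return format of encode_prompt can vary across diffusers versions):

prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
    prompt="An astronaut riding an elephant",
    device=pipe.device,
    num_images_per_prompt=1,
    do_classifier_free_guidance=True,
    negative_prompt=None,
    clip_skip=None,  # set to 1 to use the pre-final CLIP layer, as described above
)
print(prompt_embeds.shape)  # e.g. (1, 77, 768) for the Stable Diffusion v1 text encoder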
StableDiffusionPipelineOutput

class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray], nsfw_content_detected: typing.Optional[typing.List[bool]] )

Parameters

images (List[PIL.Image.Image] or np.ndarray) —
List of denoised PIL images of length batch_size, or a NumPy array of shape (batch_size, height, width, num_channels).
nsfw_content_detected (List[bool]) —
List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content, or None if safety checking could not be performed.

Output class for Stable Diffusion pipelines.
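A short sketch of consuming these fields, e.g. to save only images the safety checker did not flag (assuming output is the result of a pipeline call as in the examples above):

for i, img in enumerate(output.images):
    # nsfw_content_detected can be None when safety checking could not be performed.
    flagged = output.nsfw_content_detected is not None and output.nsfw_content_detected[i]
    if not flagged:
        img.save(f"result_{i}.png")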