prompt_embeds (torch.FloatTensor, optional): Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional): Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
decode_latents (bool, optional, defaults to False): Whether or not to decode the inverted latents into a generated image. Setting this argument to True decodes all inverted latents for each timestep into a list of generated images.
output_type (str, optional, defaults to "pil"): The output format of the generated image. Choose between PIL.Image or np.array.
return_dict (bool, optional, defaults to True): Whether or not to return a ~pipelines.stable_diffusion.DiffEditInversionPipelineOutput instead of a plain tuple.
callback (Callable, optional): A function that is called every callback_steps steps during inference with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1): The frequency at which the callback function is called. If not specified, the callback is called at every step.
cross_attention_kwargs (dict, optional): A kwargs dictionary that, if specified, is passed along to the AttnProcessor as defined in self.processor.
lambda_auto_corr (float, optional, defaults to 20.0): Lambda parameter to control auto correction.
lambda_kl (float, optional, defaults to 20.0): Lambda parameter to control Kullback-Leibler divergence output.
num_reg_steps (int, optional, defaults to 0): Number of regularization loss steps.
num_auto_corr_rolls (int, optional, defaults to 5): Number of auto correction roll steps.
Generate inverted latents given a prompt and image.

>>> import PIL
>>> import requests
>>> import torch
>>> from io import BytesIO

>>> from diffusers import DDIMInverseScheduler, DDIMScheduler, StableDiffusionDiffEditPipeline


>>> def download_image(url):
...     response = requests.get(url)
...     return PIL.Image.open(BytesIO(response.content)).convert("RGB")


>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
>>> init_image = download_image(img_url).resize((768, 768))

>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
>>> pipe.enable_model_cpu_offload()

>>> prompt = "A bowl of fruits"
>>> inverted_latents = pipe.invert(image=init_image, prompt=prompt).latents
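A minimal follow-up sketch (not part of the original example) showing the decode_latents and callback arguments described above. It assumes the pipe, init_image, and prompt objects from the example; progress_callback is a hypothetical helper matching the documented callback signature.

>>> # Sketch only: reuses `pipe`, `init_image`, and `prompt` from the example above.
>>> # `progress_callback` follows the documented signature
>>> # callback(step: int, timestep: int, latents: torch.FloatTensor).
>>> def progress_callback(step, timestep, latents):
...     print(f"inversion step {step} (timestep {timestep})")

>>> inversion = pipe.invert(
...     image=init_image,
...     prompt=prompt,
...     decode_latents=True,  # also decode each inverted latent into an image (see decode_latents above)
...     callback=progress_callback,
...     callback_steps=10,
... )
>>> inverted_latents = inversion.latents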
__call__( prompt: Union = None, mask_image: Union = None, image_latents: Union = None, inpaint_strength: Optional = 0.8, num_inference_steps: int = 50, guidance_scale: float = 7.5, negative_prompt: Union = None, num_images_per_prompt: Optional = 1, eta: float = 0.0, generator: Union = None, latents: Optional = None, prompt_embeds: Optional = None, negative_prompt_embeds: Optional = None, output_type: Optional = 'pil', return_dict: bool = True, callback: Optional = None, callback_steps: int = 1, cross_attention_kwargs: Optional = None, clip_skip: int = None ) → StableDiffusionPipelineOutput or tuple

Parameters

prompt (str or List[str], optional): The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.
mask_image (PIL.Image.Image): Image or tensor representing an image batch to mask the generated image. White pixels in the mask are repainted, while black pixels are preserved. If mask_image is a PIL image, it is converted to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L) instead of 3, so the expected shape would be (B, 1, H, W).
image_latents (PIL.Image.Image or torch.FloatTensor): Partially noised image latents from the inversion process to be used as inputs for image generation.
inpaint_strength (float, optional, defaults to 0.8): Indicates extent to inpaint the masked area. Must be between 0 and 1. When inpaint_strength is 1, the denoising process is run on the masked area for the full number of iterations specified in num_inference_steps. image_latents is used as a reference for the masked area, and adding more noise to a region increases inpaint_strength. If inpaint_strength is 0, no inpainting occurs.
num_inference_steps (int, optional, defaults to 50): The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
guidance_scale (float, optional, defaults to 7.5): A higher guidance scale value encourages the model to generate images closely linked to the text prompt, at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
negative_prompt (str or List[str], optional): The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
num_images_per_prompt (int, optional, defaults to 1): The number of images to generate per prompt.
eta (float, optional, defaults to 0.0): Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers.
generator (torch.Generator, optional): A torch.Generator to make generation deterministic.
latents (torch.FloatTensor, optional): Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.
prompt_embeds (torch.FloatTensor, optional): Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional): Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
output_type (str, optional, defaults to "pil"): The output format of the generated image. Choose between PIL.Image or np.array.
return_dict (bool, optional, defaults to True): Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
callback (Callable, optional): A function that is called every callback_steps steps during inference with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1): The frequency at which the callback function is called. If not specified, the callback is called at every step.
cross_attention_kwargs (dict, optional): A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.
clip_skip (int, optional): Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

Returns

StableDiffusionPipelineOutput or tuple: If return_dict is True, StableDiffusionPipelineOutput is returned, otherwise a tuple is returned where the first element is a list with the generated images and the second element is a list of bools indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content.
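A short sketch of how the two return modes are typically handled, assuming a prepared pipe together with mask_image and image_latents as produced in the workflow sketched at the end of this page; the prompt string is illustrative.

>>> # Sketch only: `pipe`, `mask_image`, and `image_latents` are assumed to be prepared already.
>>> out = pipe(prompt="A bowl of pears", mask_image=mask_image, image_latents=image_latents)
>>> image = out.images[0]  # StableDiffusionPipelineOutput when return_dict=True (the default)

>>> images, nsfw_flags = pipe(
...     prompt="A bowl of pears",
...     mask_image=mask_image,
...     image_latents=image_latents,
...     return_dict=False,
... )  # plain tuple: (list of generated images, list of nsfw booleans)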
The call function to the pipeline for generation.

>>> import PIL
>>> import requests
>>> import torch
>>> from io import BytesIO

>>> from diffusers import StableDiffusionDiffEditPipeline
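The example above is truncated after the imports in the source. A minimal end-to-end sketch of the typical DiffEdit workflow (mask generation, inversion, then the call documented above) might look as follows; the target prompt is illustrative, and generate_mask is the pipeline's mask-generation method, which is not documented in this excerpt.

>>> from diffusers import DDIMInverseScheduler, DDIMScheduler


>>> def download_image(url):
...     response = requests.get(url)
...     return PIL.Image.open(BytesIO(response.content)).convert("RGB")


>>> img_url = "https://github.com/Xiang-cd/DiffEdit-stable-diffusion/raw/main/assets/origin.png"
>>> init_image = download_image(img_url).resize((768, 768))

>>> pipe = StableDiffusionDiffEditPipeline.from_pretrained(
...     "stabilityai/stable-diffusion-2-1", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")
>>> pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
>>> pipe.inverse_scheduler = DDIMInverseScheduler.from_config(pipe.scheduler.config)
>>> pipe.enable_model_cpu_offload()

>>> source_prompt = "A bowl of fruits"
>>> target_prompt = "A bowl of pears"  # illustrative edit target

>>> # 1. Compute the edit mask from the source/target prompts.
>>> mask_image = pipe.generate_mask(image=init_image, source_prompt=source_prompt, target_prompt=target_prompt)
>>> # 2. Invert the image into partially noised latents.
>>> image_latents = pipe.invert(image=init_image, prompt=source_prompt).latents
>>> # 3. Generate the edited image with the call documented above.
>>> image = pipe(prompt=target_prompt, mask_image=mask_image, image_latents=image_latents).images[0]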