tokenizer (CLIPTokenizer) —
A CLIPTokenizer to tokenize text.
unet (UNet2DConditionModel) —
A UNet2DConditionModel to denoise the encoded image latents.
scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of
DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
safety_checker (Q16SafetyChecker) —
Classification module that estimates whether generated images could be considered offensive or harmful.
Please refer to the model card for more details about a model's potential harms.
feature_extractor (CLIPImageProcessor) —
A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker.

Pipeline for text-to-image generation using Stable Diffusion with latent editing. This model inherits from DiffusionPipeline and builds on the StableDiffusionPipeline. Check the superclass
documentation for the generic methods implemented for all pipelines (downloading, saving, running on a particular
device, etc.).

__call__

( prompt: Union[str, List[str]] height: Optional[int] = None width: Optional[int] = None num_inference_steps: int = 50 guidance_scale: float = 7.5 negative_prompt: Optional[Union[str, List[str]]] = None num_images_per_prompt: int = 1 eta: float = 0.0 generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None latents: Optional[torch.FloatTensor] = None output_type: Optional[str] = 'pil' return_dict: bool = True callback: Optional[Callable] = None callback_steps: int = 1 editing_prompt: Optional[Union[str, List[str]]] = None editing_prompt_embeddings: Optional[torch.Tensor] = None reverse_editing_direction: Union[bool, List[bool]] = False edit_guidance_scale: Union[float, List[float]] = 5 edit_warmup_steps: Union[int, List[int]] = 10 edit_cooldown_steps: Optional[Union[int, List[int]]] = None edit_threshold: Union[float, List[float]] = 0.9 edit_momentum_scale: Optional[float] = 0.1 edit_mom_beta: Optional[float] = 0.4 edit_weights: Optional[List[float]] = None sem_guidance: Optional[List[torch.Tensor]] = None ) → SemanticStableDiffusionPipelineOutput or tuple

Parameters

prompt (str or List[str]) —
The prompt or prompts to guide image generation.
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The height in pixels of the generated image.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The width in pixels of the generated image.
num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher-quality image at the
expense of slower inference.
guidance_scale (float, optional, defaults to 7.5) —
A higher guidance scale value encourages the model to generate images closely linked to the text
prompt, at the expense of lower image quality. Guidance is enabled when guidance_scale > 1.
negative_prompt (str or List[str], optional) —
The prompt or prompts to guide what not to include in image generation. If not defined, you need to
pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
num_images_per_prompt (int, optional, defaults to 1) —
The number of images to generate per prompt.
eta (float, optional, defaults to 0.0) —
Corresponds to parameter eta (η) from the DDIM paper. Only applies
to the DDIMScheduler and is ignored in other schedulers.
generator (torch.Generator or List[torch.Generator], optional) —
A torch.Generator to make generation deterministic.
latents (torch.FloatTensor, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
tensor is generated by sampling using the supplied random generator.
output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL.Image and np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return a SemanticStableDiffusionPipelineOutput instead of a plain tuple.
callback (Callable, optional) —
A function that is called every callback_steps steps during inference. The function is called with the
following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1) —
The frequency at which the callback function is called. If not specified, the callback is called at
every step.
editing_prompt (str or List[str], optional) —
The prompt or prompts to use for semantic guidance. Semantic guidance is disabled by setting
editing_prompt = None. The guidance direction of each prompt should be specified via
reverse_editing_direction.
editing_prompt_embeddings (torch.Tensor, optional) —
Pre-computed embeddings to use for semantic guidance. The guidance direction of the embedding should be
specified via reverse_editing_direction.
reverse_editing_direction (bool or List[bool], optional, defaults to False) —
Whether the influence of the corresponding prompt in editing_prompt should be increased (False) or
decreased (True).
edit_guidance_scale (float or List[float], optional, defaults to 5) —
Guidance scale for semantic guidance. If provided as a list, values should correspond to
editing_prompt.
edit_warmup_steps (int or List[int], optional, defaults to 10) —
Number of diffusion steps (for each prompt) for which semantic guidance is not applied. Momentum is
calculated for those steps and applied once all warmup periods are over.
edit_cooldown_steps (int or List[int], optional, defaults to None) —
Number of diffusion steps (for each prompt) after which semantic guidance is no longer applied.
edit_threshold (float or List[float], optional, defaults to 0.9) —
Threshold of semantic guidance.
edit_momentum_scale (float, optional, defaults to 0.1) —
Scale of the momentum to be added to the semantic guidance at each diffusion step. If set to 0.0,
momentum is disabled. Momentum is already built up during warmup (for diffusion steps smaller than
edit_warmup_steps) and is only added to the latent guidance once all warmup periods are finished.
edit_mom_beta (float, optional, defaults to 0.4) —
Defines how semantic guidance momentum builds up. edit_mom_beta indicates how much of the previous
momentum is kept. Momentum is already built up during warmup (for diffusion steps smaller than
edit_warmup_steps).
edit_weights (List[float], optional, defaults to None) —
Indicates how much each individual concept should influence the overall guidance. If no weights are
provided, all concepts are applied equally.
sem_guidance (List[torch.Tensor], optional) —
List of pre-generated guidance vectors to be applied at generation. The length of the list has to
correspond to num_inference_steps.

Returns

SemanticStableDiffusionPipelineOutput or tuple

If return_dict is True, SemanticStableDiffusionPipelineOutput is returned, otherwise a
tuple is returned where the first element is a list with the generated images and the second element
is a list of bools indicating whether the corresponding generated image contains "not-safe-for-work"
(nsfw) content.
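The interaction of edit_threshold, edit_warmup_steps, edit_momentum_scale, and edit_mom_beta can be sketched in plain Python. The following is a simplified, illustrative update for a single concept at one diffusion step; the function name and the flat list standing in for the latent tensor are hypothetical, and this is not the pipeline's actual implementation:

```python
def apply_semantic_guidance_step(
    guidance,             # per-dimension concept guidance (list of floats)
    momentum,             # running momentum (list of floats, same length)
    step,                 # current diffusion step index
    edit_threshold=0.9,   # fraction of latent dimensions to discard
    edit_momentum_scale=0.1,
    edit_mom_beta=0.4,
    edit_warmup_steps=10,
):
    """One illustrative semantic-guidance update for a single concept."""
    n = len(guidance)

    # edit_threshold is the quantile of |guidance| below which dimensions
    # are zeroed: threshold=0.9 keeps only the top 10% of dimensions by
    # magnitude.
    cutoff = sorted(abs(g) for g in guidance)[min(int(edit_threshold * n), n - 1)]
    masked = [g if abs(g) >= cutoff else 0.0 for g in guidance]

    # Momentum is only *added* to the guidance once warmup is over...
    if step >= edit_warmup_steps:
        masked = [g + edit_momentum_scale * m for g, m in zip(masked, momentum)]

    # ...but it *builds up* from the first step; edit_mom_beta controls how
    # much of the previous momentum is kept.
    momentum = [edit_mom_beta * m + (1 - edit_mom_beta) * g
                for m, g in zip(momentum, masked)]

    # During warmup, no semantic guidance is applied to the latents.
    if step < edit_warmup_steps:
        masked = [0.0] * n

    return masked, momentum
```

In the actual pipeline these operations run on torch tensors over the full latent batch, once per concept in editing_prompt; this sketch only mirrors the per-parameter behavior described above.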
The call function to the pipeline for generation.

Examples:

>>> import torch
>>> from diffusers import SemanticStableDiffusionPipeline

>>> pipe = SemanticStableDiffusionPipeline.from_pretrained(
...     "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> out = pipe(
...     prompt="a photo of the face of a woman",
...     num_images_per_prompt=1,
...     guidance_scale=7,
...     editing_prompt=[
...         "smiling, smile",  # concepts to apply
...         "glasses, wearing glasses",
...         "curls, wavy hair, curly hair",
...         "beard, full beard, mustache",
...     ],
...     reverse_editing_direction=[
...         False,
...         False,
...         False,
...         False,
...     ],  # direction of guidance, i.e. increase all concepts
...     edit_warmup_steps=[10, 10, 10, 10],  # warmup period for each concept
...     edit_guidance_scale=[4, 5, 5, 5.4],  # guidance scale for each concept
...     edit_threshold=[
...         0.99,
...         0.975,
...         0.925,
...         0.96,
...     ],  # threshold for each concept; equals the quantile of the latent space that is discarded, i.e. threshold=0.99 uses only the top 1% of the latent dimensions
...     edit_momentum_scale=0.3,  # momentum scale added to the latent guidance
...     edit_mom_beta=0.6,  # momentum beta
...     edit_weights=[1, 1, 1, 1],  # weights of the individual concepts against each other
... )
>>> image = out.images[0]

SemanticStableDiffusionPipelineOutput

class diffusers.pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput

( images: Union[List[PIL.Image.Image], np.ndarray] nsfw_content_detected: Optional[List[bool]] )

Parameters

images (List[PIL.Image.Image] or np.ndarray) —