|
<!--Copyright 2022 The HuggingFace Team. All rights reserved. |
|
|
|
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with |
|
the License. You may obtain a copy of the License at |
|
|
|
http://www.apache.org/licenses/LICENSE-2.0
|
|
|
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on |
|
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the |
|
specific language governing permissions and limitations under the License. |
|
--> |
|
|
|
# Stable diffusion pipelines |
|
|
|
Stable Diffusion is a text-to-image _latent diffusion_ model created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), and [LAION](https://laion.ai/).
|
|
|
Stable Diffusion builds directly on latent diffusion, which was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, and Björn Ommer.
|
|
|
For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, please refer to the official [launch announcement post](https://stability.ai/blog/stable-diffusion-announcement).
|
|
|
|
|
- To tweak your prompts on a specific result you liked, you can generate your own latents and reuse them across prompts.
|
### How to load and use different schedulers

Each pipeline ships with a default scheduler, but you can swap in any compatible one. For example, to use the `EulerDiscreteScheduler` instead:

```python
>>> from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

>>> pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
>>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
|
|
|
>>> # or |
|
>>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler") |
|
>>> pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler) |
|
``` |
|
|
|
|
|
### How to cover all use cases with multiple or a single pipeline
|
|
|
If you want to cover all possible use cases with a single `DiffusionPipeline` checkpoint, you can either:
|
- Make use of the [Stable Diffusion Mega Pipeline](https://github.com/huggingface/diffusers/tree/main/examples/community#stable-diffusion-mega) community pipeline, or
|
- Make use of the `components` functionality to instantiate all components in the most memory-efficient way: |
|
|
|
```python |
|
>>> from diffusers import ( |
|
... StableDiffusionPipeline, |
|
... StableDiffusionImg2ImgPipeline, |
|
... StableDiffusionInpaintPipeline, |
|
... ) |
|
|
|
>>> text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4") |
|
>>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components) |
|
>>> inpaint = StableDiffusionInpaintPipeline(**text2img.components) |
|
|
|
>>> # now you can use text2img(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline |
|
``` |
|
|
|
## StableDiffusionPipelineOutput |
|
[[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput |
|
|
|
## StableDiffusionPipeline |
|
[[autodoc]] StableDiffusionPipeline |
|
- __call__ |
|
- enable_attention_slicing |
|
- disable_attention_slicing |
|
|
|
## StableDiffusionImg2ImgPipeline |
|
[[autodoc]] StableDiffusionImg2ImgPipeline |
|
- __call__ |
|
- enable_attention_slicing |
|
- disable_attention_slicing |
|
|
|
## StableDiffusionInpaintPipeline |
|
[[autodoc]] StableDiffusionInpaintPipeline |
|
- __call__ |
|
- enable_attention_slicing |
|
- disable_attention_slicing |
|
|