Fabrice-TIERCELIN committed
Commit
ddb43d3
1 Parent(s): 5bda455

Useless doc

This view is limited to 50 files because it contains too many changes. See the raw diff for the full change set.

Files changed (50)
  1. diffusers/docs/README.md +0 -271
  2. diffusers/docs/TRANSLATING.md +0 -57
  3. diffusers/docs/source/_config.py +0 -9
  4. diffusers/docs/source/en/_toctree.yml +0 -264
  5. diffusers/docs/source/en/api/configuration.mdx +0 -25
  6. diffusers/docs/source/en/api/diffusion_pipeline.mdx +0 -47
  7. diffusers/docs/source/en/api/experimental/rl.mdx +0 -15
  8. diffusers/docs/source/en/api/loaders.mdx +0 -30
  9. diffusers/docs/source/en/api/logging.mdx +0 -98
  10. diffusers/docs/source/en/api/models.mdx +0 -107
  11. diffusers/docs/source/en/api/outputs.mdx +0 -55
  12. diffusers/docs/source/en/api/pipelines/alt_diffusion.mdx +0 -83
  13. diffusers/docs/source/en/api/pipelines/audio_diffusion.mdx +0 -98
  14. diffusers/docs/source/en/api/pipelines/audioldm.mdx +0 -82
  15. diffusers/docs/source/en/api/pipelines/cycle_diffusion.mdx +0 -100
  16. diffusers/docs/source/en/api/pipelines/dance_diffusion.mdx +0 -34
  17. diffusers/docs/source/en/api/pipelines/ddim.mdx +0 -36
  18. diffusers/docs/source/en/api/pipelines/ddpm.mdx +0 -37
  19. diffusers/docs/source/en/api/pipelines/dit.mdx +0 -59
  20. diffusers/docs/source/en/api/pipelines/latent_diffusion.mdx +0 -49
  21. diffusers/docs/source/en/api/pipelines/latent_diffusion_uncond.mdx +0 -42
  22. diffusers/docs/source/en/api/pipelines/overview.mdx +0 -213
  23. diffusers/docs/source/en/api/pipelines/paint_by_example.mdx +0 -74
  24. diffusers/docs/source/en/api/pipelines/pndm.mdx +0 -35
  25. diffusers/docs/source/en/api/pipelines/repaint.mdx +0 -77
  26. diffusers/docs/source/en/api/pipelines/score_sde_ve.mdx +0 -36
  27. diffusers/docs/source/en/api/pipelines/semantic_stable_diffusion.mdx +0 -79
  28. diffusers/docs/source/en/api/pipelines/spectrogram_diffusion.mdx +0 -54
  29. diffusers/docs/source/en/api/pipelines/stable_diffusion/attend_and_excite.mdx +0 -75
  30. diffusers/docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx +0 -280
  31. diffusers/docs/source/en/api/pipelines/stable_diffusion/depth2img.mdx +0 -33
  32. diffusers/docs/source/en/api/pipelines/stable_diffusion/image_variation.mdx +0 -31
  33. diffusers/docs/source/en/api/pipelines/stable_diffusion/img2img.mdx +0 -36
  34. diffusers/docs/source/en/api/pipelines/stable_diffusion/inpaint.mdx +0 -37
  35. diffusers/docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx +0 -33
  36. diffusers/docs/source/en/api/pipelines/stable_diffusion/model_editing.mdx +0 -61
  37. diffusers/docs/source/en/api/pipelines/stable_diffusion/overview.mdx +0 -82
  38. diffusers/docs/source/en/api/pipelines/stable_diffusion/panorama.mdx +0 -58
  39. diffusers/docs/source/en/api/pipelines/stable_diffusion/pix2pix.mdx +0 -70
  40. diffusers/docs/source/en/api/pipelines/stable_diffusion/pix2pix_zero.mdx +0 -291
  41. diffusers/docs/source/en/api/pipelines/stable_diffusion/self_attention_guidance.mdx +0 -64
  42. diffusers/docs/source/en/api/pipelines/stable_diffusion/text2img.mdx +0 -45
  43. diffusers/docs/source/en/api/pipelines/stable_diffusion/upscale.mdx +0 -32
  44. diffusers/docs/source/en/api/pipelines/stable_diffusion_2.mdx +0 -176
  45. diffusers/docs/source/en/api/pipelines/stable_diffusion_safe.mdx +0 -90
  46. diffusers/docs/source/en/api/pipelines/stable_unclip.mdx +0 -175
  47. diffusers/docs/source/en/api/pipelines/stochastic_karras_ve.mdx +0 -36
  48. diffusers/docs/source/en/api/pipelines/text_to_video.mdx +0 -130
  49. diffusers/docs/source/en/api/pipelines/unclip.mdx +0 -37
  50. diffusers/docs/source/en/api/pipelines/versatile_diffusion.mdx +0 -70
diffusers/docs/README.md DELETED
@@ -1,271 +0,0 @@
1
- <!---
2
- Copyright 2023- The HuggingFace Team. All rights reserved.
3
-
4
- Licensed under the Apache License, Version 2.0 (the "License");
5
- you may not use this file except in compliance with the License.
6
- You may obtain a copy of the License at
7
-
8
- http://www.apache.org/licenses/LICENSE-2.0
9
-
10
- Unless required by applicable law or agreed to in writing, software
11
- distributed under the License is distributed on an "AS IS" BASIS,
12
- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
- See the License for the specific language governing permissions and
14
- limitations under the License.
15
- -->
16
-
17
- # Generating the documentation
18
-
19
- To generate the documentation, you first have to build it. Several packages are necessary to build the doc;
20
- you can install them with the following command, at the root of the code repository:
21
-
22
- ```bash
23
- pip install -e ".[docs]"
24
- ```
25
-
26
- Then you need to install our open source documentation builder tool:
27
-
28
- ```bash
29
- pip install git+https://github.com/huggingface/doc-builder
30
- ```
31
-
32
- ---
33
- **NOTE**
34
-
35
- You only need to generate the documentation to inspect it locally (if you're planning changes and want to
36
- check how they look before committing for instance). You don't have to commit the built documentation.
37
-
38
- ---
39
-
40
- ## Previewing the documentation
41
-
42
- To preview the docs, first install the `watchdog` module with:
43
-
44
- ```bash
45
- pip install watchdog
46
- ```
47
-
48
- Then run the following command:
49
-
50
- ```bash
51
- doc-builder preview {package_name} {path_to_docs}
52
- ```
53
-
54
- For example:
55
-
56
- ```bash
57
- doc-builder preview diffusers docs/source/en
58
- ```
59
-
60
- The docs will be viewable at [http://localhost:3000](http://localhost:3000). You can also preview the docs once you have opened a PR: a bot will add a comment with a link to the documentation built from your changes.
61
-
62
- ---
63
- **NOTE**
64
-
65
- The `preview` command only works with existing doc files. When you add a completely new file, you need to update `_toctree.yml` and restart the `preview` command (`ctrl-c` to stop it, then run `doc-builder preview ...` again).
66
-
67
- ---
68
-
69
- ## Adding a new element to the navigation bar
70
-
71
- Accepted files are Markdown (.md or .mdx).
72
-
73
- Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting
74
- the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/diffusers/blob/main/docs/source/_toctree.yml) file.
75
-
76
- ## Renaming section headers and moving sections
77
-
78
- It helps to keep the old links working when renaming a section header and/or moving sections from one document to another. This is because the old links are likely to be used in issues, forums, and social media, and it makes for a much better user experience if users reading those months later can still easily navigate to the originally intended information.
79
-
80
- Therefore, we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.
81
-
82
- So if you renamed a section from: "Section A" to "Section B", then you can add at the end of the file:
83
-
84
- ```
85
- Sections that were moved:
86
-
87
- [ <a href="#section-b">Section A</a><a id="section-a"></a> ]
88
- ```
89
- and of course, if you moved it to another file, then:
90
-
91
- ```
92
- Sections that were moved:
93
-
94
- [ <a href="../new-file#section-b">Section A</a><a id="section-a"></a> ]
95
- ```
96
-
97
- Use the relative style to link to the new file so that the versioned docs continue to work.
98
-
99
- For an example of a rich moved section set please see the very end of [the transformers Trainer doc](https://github.com/huggingface/transformers/blob/main/docs/source/en/main_classes/trainer.mdx).
100
-
101
-
102
- ## Writing Documentation - Specification
103
-
104
- The `huggingface/diffusers` documentation follows the
105
- [Google documentation](https://sphinxcontrib-napoleon.readthedocs.io/en/latest/example_google.html) style for docstrings,
106
- although we can write them directly in Markdown.
107
-
108
- ### Adding a new tutorial
109
-
110
- Adding a new tutorial or section is done in two steps:
111
-
112
- - Add a new file under `docs/source`. This file should be Markdown (.md or .mdx).
113
- - Link that file in `docs/source/_toctree.yml` on the correct toc-tree.
114
-
115
- Make sure to put your new file under the proper section. It's unlikely to go in the first section (*Get Started*), so
116
- depending on the intended targets (beginners, more advanced users, or researchers) it should go in sections two, three, or four.
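
For example, a (hypothetical) new tutorial saved as `docs/source/en/tutorials/my_new_tutorial.mdx` would be linked with an entry like the following in `_toctree.yml` — the `local` value is the file path without its extension, and `title` is what appears in the navigation bar:

```yaml
- sections:
  - local: tutorials/tutorial_overview
    title: Overview
  - local: tutorials/my_new_tutorial   # hypothetical new file
    title: My new tutorial
  title: Tutorials
```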
117
-
118
- ### Adding a new pipeline/scheduler
119
-
120
- When adding a new pipeline:
121
-
122
- - Create a file `xxx.mdx` under `docs/source/api/pipelines` (don't hesitate to copy an existing file as a template).
123
- - Link that file in (*Diffusers Summary*) section in `docs/source/api/pipelines/overview.mdx`, along with the link to the paper, and a colab notebook (if available).
124
- - Write a short overview of the diffusion model:
125
- - Overview with paper & authors
126
- - Paper abstract
127
- - Tips and tricks and how to use it best
128
- - Possibly an end-to-end example of how to use it
129
- - Add all the pipeline classes that should be linked in the diffusion model. These classes should be added using our Markdown syntax. By default as follows:
130
-
131
- ```
132
- ## XXXPipeline
133
-
134
- [[autodoc]] XXXPipeline
135
- - all
136
- - __call__
137
- ```
138
-
139
- This will include every public method of the pipeline that is documented, as well as the `__call__` method, which is not documented by default. If you want to expose additional methods that are not documented by default, list them together with `all`:
140
-
141
- ```
142
- [[autodoc]] XXXPipeline
143
- - all
144
- - __call__
145
- - enable_attention_slicing
146
- - disable_attention_slicing
147
- - enable_xformers_memory_efficient_attention
148
- - disable_xformers_memory_efficient_attention
149
- ```
150
-
151
- You can follow the same process to create a new scheduler page under the `docs/source/api/schedulers` folder.
152
-
153
- ### Writing source documentation
154
-
155
- Values that should be put in `code` should be surrounded by backticks: \`like so\`. Note that argument names
156
- and objects like True, None, or any strings should usually be put in `code`.
157
-
158
- When mentioning a class, function, or method, it is recommended to use our syntax for internal links so that our tool
159
- adds a link to its documentation with this syntax: \[\`XXXClass\`\] or \[\`function\`\]. This requires the class or
160
- function to be in the main package.
161
-
162
- If you want to create a link to some internal class or function, you need to
163
- provide its path. For instance: \[\`pipelines.ImagePipelineOutput\`\]. This will be converted into a link with
164
- `pipelines.ImagePipelineOutput` in the description. To get rid of the path and only keep the name of the object you are
165
- linking to in the description, add a ~: \[\`~pipelines.ImagePipelineOutput\`\] will generate a link with `ImagePipelineOutput` in the description.
166
-
167
- The same works for methods, so you can use either \[\`XXXClass.method\`\] or \[\`~XXXClass.method\`\].
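
As a quick illustration, a (hypothetical) docstring using these link styles might look like the following — the `~` form renders only the object name, the full-path form keeps it:

```python
def generate(self):
    """
    Runs the (hypothetical) pipeline and returns a [`~pipelines.ImagePipelineOutput`],
    which renders as `ImagePipelineOutput` in the description, while
    [`pipelines.ImagePipelineOutput`] would keep the full path. Methods can be
    linked the same way, e.g. [`~DiffusionPipeline.from_pretrained`].
    """
```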
168
-
169
- #### Defining arguments in a method
170
-
171
- Arguments should be defined with the `Args:` (or `Arguments:` or `Parameters:`) prefix, followed by a line return and
172
- an indentation. The argument should be followed by its type, with its shape if it is a tensor, a colon, and its
173
- description:
174
-
175
- ```
176
- Args:
177
- n_layers (`int`): The number of layers of the model.
178
- ```
179
-
180
- If the description is too long to fit in one line, another indentation is necessary before writing the description
181
- after the argument.
182
-
183
- Here's an example showcasing everything so far:
184
-
185
- ```
186
- Args:
187
- input_ids (`torch.LongTensor` of shape `(batch_size, sequence_length)`):
188
- Indices of input sequence tokens in the vocabulary.
189
-
190
- Indices can be obtained using [`AlbertTokenizer`]. See [`~PreTrainedTokenizer.encode`] and
191
- [`~PreTrainedTokenizer.__call__`] for details.
192
-
193
- [What are input IDs?](../glossary#input-ids)
194
- ```
195
-
196
- For optional arguments or arguments with defaults we follow the following syntax: imagine we have a function with the
197
- following signature:
198
-
199
- ```
200
- def my_function(x: str = None, a: float = 1):
201
- ```
202
-
203
- then its documentation should look like this:
204
-
205
- ```
206
- Args:
207
- x (`str`, *optional*):
208
- This argument controls ...
209
- a (`float`, *optional*, defaults to 1):
210
- This argument is used to ...
211
- ```
212
-
213
- Note that we always omit the "defaults to \`None\`" when None is the default for any argument. Also note that even
214
- if the first line describing your argument type and its default gets long, you can't break it on several lines. You can
215
- however write as many lines as you want in the indented description (see the example above with `input_ids`).
216
-
217
- #### Writing a multi-line code block
218
-
219
- Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks as usual in Markdown:
220
-
221
-
222
- ````
223
- ```
224
- # first line of code
225
- # second line
226
- # etc
227
- ```
228
- ````
229
-
230
- #### Writing a return block
231
-
232
- The return block should be introduced with the `Returns:` prefix, followed by a line return and an indentation.
233
- The first line should be the type of the return, followed by a line return. No need to indent further for the elements
234
- building the return.
235
-
236
- Here's an example of a single value return:
237
-
238
- ```
239
- Returns:
240
- `List[int]`: A list of integers in the range [0, 1] --- 1 for a special token, 0 for a sequence token.
241
- ```
242
-
243
- Here's an example of a tuple return, comprising several objects:
244
-
245
- ```
246
- Returns:
247
- `tuple(torch.FloatTensor)` comprising various elements depending on the configuration ([`BertConfig`]) and inputs:
248
- - **loss** (*optional*, returned when `masked_lm_labels` is provided) `torch.FloatTensor` of shape `(1,)` --
249
- Total loss is the sum of the masked language modeling loss and the next sequence prediction (classification) loss.
250
- - **prediction_scores** (`torch.FloatTensor` of shape `(batch_size, sequence_length, config.vocab_size)`) --
251
- Prediction scores of the language modeling head (scores for each vocabulary token before SoftMax).
252
- ```
253
-
254
- #### Adding an image
255
-
256
- Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted `dataset` like
257
- the ones hosted on [`hf-internal-testing`](https://huggingface.co/hf-internal-testing) in which to place these files and reference
258
- them by URL. We recommend putting them in the following dataset: [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images).
259
- If you are an external contributor, feel free to add the images to your PR and ask a Hugging Face member to migrate your images
260
- to this dataset.
261
-
262
- ## Styling the docstring
263
-
264
- We have an automatic script running with the `make style` command that will make sure that:
265
- - the docstrings fully take advantage of the line width
266
- - all code examples are formatted using black, like the code of the Transformers library
267
-
268
- This script may have some weird failures if you made a syntax mistake or if you uncover a bug. Therefore, it's
269
- recommended to commit your changes before running `make style`, so you can revert the changes done by that script
270
- easily.
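
A typical workflow is sketched below, assuming you run it from the root of the repository where the `Makefile` lives:

```bash
# commit your manual edits first so they are easy to recover
git add docs/
git commit -m "docs: draft changes"

# then let the formatter rewrap docstrings and format code examples
make style
```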
271
-
 
diffusers/docs/TRANSLATING.md DELETED
@@ -1,57 +0,0 @@
1
- ### Translating the Diffusers documentation into your language
2
-
3
- As part of our mission to democratize machine learning, we'd love to make the Diffusers library available in many more languages! Follow the steps below if you want to help translate the documentation into your language 🙏.
4
-
5
- **🗞️ Open an issue**
6
-
7
- To get started, navigate to the [Issues](https://github.com/huggingface/diffusers/issues) page of this repo and check if anyone else has opened an issue for your language. If not, open a new issue by selecting the "Translation template" from the "New issue" button.
8
-
9
- Once an issue exists, post a comment to indicate which chapters you'd like to work on, and we'll add your name to the list.
10
-
11
-
12
- **🍴 Fork the repository**
13
-
14
- First, you'll need to [fork the Diffusers repo](https://docs.github.com/en/get-started/quickstart/fork-a-repo). You can do this by clicking on the **Fork** button on the top-right corner of this repo's page.
15
-
16
- Once you've forked the repo, you'll want to get the files on your local machine for editing. You can do that by cloning the fork with Git as follows:
17
-
18
- ```bash
19
- git clone https://github.com/YOUR-USERNAME/diffusers.git
20
- ```
21
-
22
- **📋 Copy-paste the English version with a new language code**
23
-
24
- The documentation files all live under a single top-level directory:
25
-
26
- - [`docs/source`](https://github.com/huggingface/diffusers/tree/main/docs/source): All the documentation materials are organized here by language.
27
-
28
- You'll only need to copy the files in the [`docs/source/en`](https://github.com/huggingface/diffusers/tree/main/docs/source/en) directory, so first navigate to your fork of the repo and run the following:
29
-
30
- ```bash
31
- cd ~/path/to/diffusers/docs
32
- cp -r source/en source/LANG-ID
33
- ```
34
-
35
- Here, `LANG-ID` should be one of the ISO 639-1 or ISO 639-2 language codes -- see [here](https://www.loc.gov/standards/iso639-2/php/code_list.php) for a handy table.
36
-
37
- **✍️ Start translating**
38
-
39
- Now comes the fun part: translating the text!
40
-
41
- The first thing we recommend is translating the part of the `_toctree.yml` file that corresponds to your doc chapter. This file is used to render the table of contents on the website.
42
-
43
- > 🙋 If the `_toctree.yml` file doesn't yet exist for your language, you can create one by copy-pasting from the English version and deleting the sections unrelated to your chapter. Just make sure it exists in the `docs/source/LANG-ID/` directory!
44
-
45
- The fields you should add are `local` (with the name of the file containing the translation; e.g. `autoclass_tutorial`), and `title` (with the title of the doc in your language; e.g. `Load pretrained instances with an AutoClass`) -- as a reference, here is the `_toctree.yml` for [English](https://github.com/huggingface/diffusers/blob/main/docs/source/en/_toctree.yml):
46
-
47
- ```yaml
48
- - sections:
49
- - local: pipeline_tutorial # Do not change this! Use the same name for your .md file
50
- title: Pipelines for inference # Translate this!
51
- ...
52
- title: Tutorials # Translate this!
53
- ```
54
-
55
- Once you have translated the `_toctree.yml` file, you can start translating the [MDX](https://mdxjs.com/) files associated with your docs chapter.
56
-
57
- > 🙋 If you'd like others to help you with the translation, you should [open an issue](https://github.com/huggingface/diffusers/issues) and tag @patrickvonplaten.
 
diffusers/docs/source/_config.py DELETED
@@ -1,9 +0,0 @@
1
- # docstyle-ignore
2
- INSTALL_CONTENT = """
3
- # Diffusers installation
4
- ! pip install diffusers transformers datasets accelerate
5
- # To install from source instead of the last release, comment the command above and uncomment the following one.
6
- # ! pip install git+https://github.com/huggingface/diffusers.git
7
- """
8
-
9
- notebook_first_cells = [{"type": "code", "content": INSTALL_CONTENT}]
 
diffusers/docs/source/en/_toctree.yml DELETED
@@ -1,264 +0,0 @@
1
- - sections:
2
- - local: index
3
- title: 🧨 Diffusers
4
- - local: quicktour
5
- title: Quicktour
6
- - local: stable_diffusion
7
- title: Effective and efficient diffusion
8
- - local: installation
9
- title: Installation
10
- title: Get started
11
- - sections:
12
- - local: tutorials/tutorial_overview
13
- title: Overview
14
- - local: using-diffusers/write_own_pipeline
15
- title: Understanding models and schedulers
16
- - local: tutorials/basic_training
17
- title: Train a diffusion model
18
- title: Tutorials
19
- - sections:
20
- - sections:
21
- - local: using-diffusers/loading_overview
22
- title: Overview
23
- - local: using-diffusers/loading
24
- title: Load pipelines, models, and schedulers
25
- - local: using-diffusers/schedulers
26
- title: Load and compare different schedulers
27
- - local: using-diffusers/custom_pipeline_overview
28
- title: Load and add custom pipelines
29
- - local: using-diffusers/kerascv
30
- title: Load KerasCV Stable Diffusion checkpoints
31
- title: Loading & Hub
32
- - sections:
33
- - local: using-diffusers/pipeline_overview
34
- title: Overview
35
- - local: using-diffusers/unconditional_image_generation
36
- title: Unconditional image generation
37
- - local: using-diffusers/conditional_image_generation
38
- title: Text-to-image generation
39
- - local: using-diffusers/img2img
40
- title: Text-guided image-to-image
41
- - local: using-diffusers/inpaint
42
- title: Text-guided image-inpainting
43
- - local: using-diffusers/depth2img
44
- title: Text-guided depth-to-image
45
- - local: using-diffusers/reusing_seeds
46
- title: Improve image quality with deterministic generation
47
- - local: using-diffusers/reproducibility
48
- title: Create reproducible pipelines
49
- - local: using-diffusers/custom_pipeline_examples
50
- title: Community Pipelines
51
- - local: using-diffusers/contribute_pipeline
52
- title: How to contribute a Pipeline
53
- - local: using-diffusers/using_safetensors
54
- title: Using safetensors
55
- - local: using-diffusers/stable_diffusion_jax_how_to
56
- title: Stable Diffusion in JAX/Flax
57
- - local: using-diffusers/weighted_prompts
58
- title: Weighting Prompts
59
- title: Pipelines for Inference
60
- - sections:
61
- - local: training/overview
62
- title: Overview
63
- - local: training/unconditional_training
64
- title: Unconditional image generation
65
- - local: training/text_inversion
66
- title: Textual Inversion
67
- - local: training/dreambooth
68
- title: DreamBooth
69
- - local: training/text2image
70
- title: Text-to-image
71
- - local: training/lora
72
- title: Low-Rank Adaptation of Large Language Models (LoRA)
73
- - local: training/controlnet
74
- title: ControlNet
75
- - local: training/instructpix2pix
76
- title: InstructPix2Pix Training
77
- title: Training
78
- - sections:
79
- - local: using-diffusers/rl
80
- title: Reinforcement Learning
81
- - local: using-diffusers/audio
82
- title: Audio
83
- - local: using-diffusers/other-modalities
84
- title: Other Modalities
85
- title: Taking Diffusers Beyond Images
86
- title: Using Diffusers
87
- - sections:
88
- - local: optimization/opt_overview
89
- title: Overview
90
- - local: optimization/fp16
91
- title: Memory and Speed
92
- - local: optimization/torch2.0
93
- title: Torch2.0 support
94
- - local: optimization/xformers
95
- title: xFormers
96
- - local: optimization/onnx
97
- title: ONNX
98
- - local: optimization/open_vino
99
- title: OpenVINO
100
- - local: optimization/mps
101
- title: MPS
102
- - local: optimization/habana
103
- title: Habana Gaudi
104
- title: Optimization/Special Hardware
105
- - sections:
106
- - local: conceptual/philosophy
107
- title: Philosophy
108
- - local: using-diffusers/controlling_generation
109
- title: Controlled generation
110
- - local: conceptual/contribution
111
- title: How to contribute?
112
- - local: conceptual/ethical_guidelines
113
- title: Diffusers' Ethical Guidelines
114
- - local: conceptual/evaluation
115
- title: Evaluating Diffusion Models
116
- title: Conceptual Guides
117
- - sections:
118
- - sections:
119
- - local: api/models
120
- title: Models
121
- - local: api/diffusion_pipeline
122
- title: Diffusion Pipeline
123
- - local: api/logging
124
- title: Logging
125
- - local: api/configuration
126
- title: Configuration
127
- - local: api/outputs
128
- title: Outputs
129
- - local: api/loaders
130
- title: Loaders
131
- title: Main Classes
132
- - sections:
133
- - local: api/pipelines/overview
134
- title: Overview
135
- - local: api/pipelines/alt_diffusion
136
- title: AltDiffusion
137
- - local: api/pipelines/audio_diffusion
138
- title: Audio Diffusion
139
- - local: api/pipelines/audioldm
140
- title: AudioLDM
141
- - local: api/pipelines/cycle_diffusion
142
- title: Cycle Diffusion
143
- - local: api/pipelines/dance_diffusion
144
- title: Dance Diffusion
145
- - local: api/pipelines/ddim
146
- title: DDIM
147
- - local: api/pipelines/ddpm
148
- title: DDPM
149
- - local: api/pipelines/dit
150
- title: DiT
151
- - local: api/pipelines/latent_diffusion
152
- title: Latent Diffusion
153
- - local: api/pipelines/paint_by_example
154
- title: PaintByExample
155
- - local: api/pipelines/pndm
156
- title: PNDM
157
- - local: api/pipelines/repaint
158
- title: RePaint
159
- - local: api/pipelines/stable_diffusion_safe
160
- title: Safe Stable Diffusion
161
- - local: api/pipelines/score_sde_ve
162
- title: Score SDE VE
163
- - local: api/pipelines/semantic_stable_diffusion
164
- title: Semantic Guidance
165
- - local: api/pipelines/spectrogram_diffusion
166
- title: "Spectrogram Diffusion"
167
- - sections:
168
- - local: api/pipelines/stable_diffusion/overview
169
- title: Overview
170
- - local: api/pipelines/stable_diffusion/text2img
171
- title: Text-to-Image
172
- - local: api/pipelines/stable_diffusion/img2img
173
- title: Image-to-Image
174
- - local: api/pipelines/stable_diffusion/inpaint
175
- title: Inpaint
176
- - local: api/pipelines/stable_diffusion/depth2img
177
- title: Depth-to-Image
178
- - local: api/pipelines/stable_diffusion/image_variation
179
- title: Image-Variation
180
- - local: api/pipelines/stable_diffusion/upscale
181
- title: Super-Resolution
182
- - local: api/pipelines/stable_diffusion/latent_upscale
183
- title: Stable-Diffusion-Latent-Upscaler
184
- - local: api/pipelines/stable_diffusion/pix2pix
185
- title: InstructPix2Pix
186
- - local: api/pipelines/stable_diffusion/attend_and_excite
187
- title: Attend and Excite
188
- - local: api/pipelines/stable_diffusion/pix2pix_zero
189
- title: Pix2Pix Zero
190
- - local: api/pipelines/stable_diffusion/self_attention_guidance
191
- title: Self-Attention Guidance
192
- - local: api/pipelines/stable_diffusion/panorama
193
- title: MultiDiffusion Panorama
194
- - local: api/pipelines/stable_diffusion/controlnet
195
- title: Text-to-Image Generation with ControlNet Conditioning
196
- - local: api/pipelines/stable_diffusion/model_editing
197
- title: Text-to-Image Model Editing
198
- title: Stable Diffusion
199
- - local: api/pipelines/stable_diffusion_2
200
- title: Stable Diffusion 2
201
- - local: api/pipelines/stable_unclip
202
- title: Stable unCLIP
203
- - local: api/pipelines/stochastic_karras_ve
204
- title: Stochastic Karras VE
205
- - local: api/pipelines/text_to_video
206
- title: Text-to-Video
207
- - local: api/pipelines/unclip
208
- title: UnCLIP
209
- - local: api/pipelines/latent_diffusion_uncond
210
- title: Unconditional Latent Diffusion
211
- - local: api/pipelines/versatile_diffusion
212
- title: Versatile Diffusion
213
- - local: api/pipelines/vq_diffusion
214
- title: VQ Diffusion
215
- title: Pipelines
216
- - sections:
217
- - local: api/schedulers/overview
218
- title: Overview
219
- - local: api/schedulers/ddim
220
- title: DDIM
221
- - local: api/schedulers/ddim_inverse
222
- title: DDIMInverse
223
- - local: api/schedulers/ddpm
224
- title: DDPM
225
- - local: api/schedulers/deis
226
- title: DEIS
227
- - local: api/schedulers/dpm_discrete
228
- title: DPM Discrete Scheduler
229
- - local: api/schedulers/dpm_discrete_ancestral
230
- title: DPM Discrete Scheduler with ancestral sampling
231
- - local: api/schedulers/euler_ancestral
232
- title: Euler Ancestral Scheduler
233
- - local: api/schedulers/euler
234
- title: Euler scheduler
235
- - local: api/schedulers/heun
236
- title: Heun Scheduler
237
- - local: api/schedulers/ipndm
238
- title: IPNDM
239
- - local: api/schedulers/lms_discrete
240
- title: Linear Multistep
241
- - local: api/schedulers/multistep_dpm_solver
242
- title: Multistep DPM-Solver
243
- - local: api/schedulers/pndm
244
- title: PNDM
245
- - local: api/schedulers/repaint
246
- title: RePaint Scheduler
247
- - local: api/schedulers/singlestep_dpm_solver
248
- title: Singlestep DPM-Solver
249
- - local: api/schedulers/stochastic_karras_ve
250
- title: Stochastic Karras VE
251
- - local: api/schedulers/unipc
252
- title: UniPCMultistepScheduler
253
- - local: api/schedulers/score_sde_ve
254
- title: VE-SDE
255
- - local: api/schedulers/score_sde_vp
256
- title: VP-SDE
257
- - local: api/schedulers/vq_diffusion
258
- title: VQDiffusionScheduler
259
- title: Schedulers
260
- - sections:
261
- - local: api/experimental/rl
262
- title: RL Planning
263
- title: Experimental Features
264
- title: API
 
diffusers/docs/source/en/api/configuration.mdx DELETED
@@ -1,25 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Configuration
14
-
15
- Schedulers from [`~schedulers.scheduling_utils.SchedulerMixin`] and models from [`ModelMixin`] inherit from [`ConfigMixin`] which conveniently takes care of storing all the parameters that are
16
- passed to their respective `__init__` methods in a JSON-configuration file.
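
As a minimal sketch of what this means in practice (using [`DDIMScheduler`] as an example; any scheduler or model works the same way):

```python
from diffusers import DDIMScheduler

scheduler = DDIMScheduler(beta_end=0.02)

# ConfigMixin stores the __init__ arguments in a JSON configuration file in that folder
scheduler.save_config("./my_scheduler")

# load_config returns the stored parameters, from_config rebuilds the object
config = DDIMScheduler.load_config("./my_scheduler")
scheduler = DDIMScheduler.from_config(config)
```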
17
-
18
- ## ConfigMixin
19
-
20
- [[autodoc]] ConfigMixin
21
- - load_config
22
- - from_config
23
- - save_config
24
- - to_json_file
25
- - to_json_string
 
diffusers/docs/source/en/api/diffusion_pipeline.mdx DELETED
@@ -1,47 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Pipelines
14
-
15
- The [`DiffusionPipeline`] is the easiest way to load any pretrained diffusion pipeline from the [Hub](https://huggingface.co/models?library=diffusers) and to use it in inference.
16
-
17
- <Tip>
18
-
19
- One should not use the [`DiffusionPipeline`] class for training or fine-tuning a diffusion model. Individual
20
- components of diffusion pipelines are usually trained individually, so we suggest working directly
21
- with [`UNet2DModel`] and [`UNet2DConditionModel`].
22
-
23
- </Tip>
24
-
25
- Any diffusion pipeline that is loaded with [`~DiffusionPipeline.from_pretrained`] will automatically
26
- detect the pipeline type, *e.g.* [`StableDiffusionPipeline`] and consequently load each component of the
27
- pipeline and pass them into the `__init__` function of the pipeline, *e.g.* [`~StableDiffusionPipeline.__init__`].
28
-
29
- Any pipeline object can be saved locally with [`~DiffusionPipeline.save_pretrained`].
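
For instance (a sketch; the checkpoint name below is just an example of a Hub repository containing a `model_index.json`):

```python
from diffusers import DiffusionPipeline

# from_pretrained reads the repository's model_index.json, figures out the
# concrete pipeline class (here StableDiffusionPipeline) and loads every component
pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# save all components locally, then reload from the local folder
pipe.save_pretrained("./my-pipeline")
pipe = DiffusionPipeline.from_pretrained("./my-pipeline")
```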
30
-
31
- ## DiffusionPipeline
32
- [[autodoc]] DiffusionPipeline
33
- - all
34
- - __call__
35
- - device
36
- - to
37
- - components
38
-
39
- ## ImagePipelineOutput
40
- By default, image-generating diffusion pipelines return an object of the following class:
41
-
42
- [[autodoc]] pipelines.ImagePipelineOutput
43
-
44
- ## AudioPipelineOutput
45
- By default, audio-generating diffusion pipelines return an object of the following class:
46
-
47
- [[autodoc]] pipelines.AudioPipelineOutput
 
diffusers/docs/source/en/api/experimental/rl.mdx DELETED
@@ -1,15 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # TODO
14
-
15
- Coming soon!
 
diffusers/docs/source/en/api/loaders.mdx DELETED
@@ -1,30 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Loaders
14
-
15
- There are many ways to train adapter neural networks for diffusion models, such as
16
- - [Textual Inversion](./training/text_inversion.mdx)
17
- - [LoRA](https://github.com/cloneofsimo/lora)
18
- - [Hypernetworks](https://arxiv.org/abs/1609.09106)
19
-
20
- Such adapter neural networks often only consist of a fraction of the number of weights compared
21
- to the pretrained model and as such are very portable. The Diffusers library offers an easy-to-use
22
- API to load such adapter neural networks via the [`loaders.py` module](https://github.com/huggingface/diffusers/blob/main/src/diffusers/loaders.py).
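
A minimal sketch of how this looks for LoRA attention weights (the weight location below is a placeholder and is assumed to contain weights saved in the `diffusers` LoRA format):

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

# UNet2DConditionLoadersMixin adds `load_attn_procs` to the UNet; it loads the
# small set of LoRA attention weights on top of the pretrained model
pipe.unet.load_attn_procs("path/to/lora-weights")  # local folder or Hub repo id
```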
23
-
24
- **Note**: This module is still highly experimental and prone to future changes.
25
-
26
- ## LoaderMixins
27
-
28
- ### UNet2DConditionLoadersMixin
29
-
30
- [[autodoc]] loaders.UNet2DConditionLoadersMixin
 
diffusers/docs/source/en/api/logging.mdx DELETED
@@ -1,98 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Logging
14
-
15
- 🧨 Diffusers has a centralized logging system, so that you can easily set up the verbosity of the library.
16
-
17
- Currently the default verbosity of the library is `WARNING`.
18
-
19
- To change the level of verbosity, just use one of the direct setters. For instance, here is how to change the verbosity
20
- to the INFO level.
21
-
22
- ```python
23
- import diffusers
24
-
25
- diffusers.logging.set_verbosity_info()
26
- ```
27
-
28
- You can also use the environment variable `DIFFUSERS_VERBOSITY` to override the default verbosity. You can set it
29
- to one of the following: `debug`, `info`, `warning`, `error`, `critical`. For example:
30
-
31
- ```bash
32
- DIFFUSERS_VERBOSITY=error ./myprogram.py
33
- ```
34
-
35
- Additionally, some `warnings` can be disabled by setting the environment variable
36
- `DIFFUSERS_NO_ADVISORY_WARNINGS` to a true value, like *1*. This will disable any warning that is logged using
37
- [`logger.warning_advice`]. For example:
38
-
39
- ```bash
40
- DIFFUSERS_NO_ADVISORY_WARNINGS=1 ./myprogram.py
41
- ```
42
-
43
- Here is an example of how to use the same logger as the library in your own module or script:
44
-
45
- ```python
46
- from diffusers.utils import logging
47
-
48
- logging.set_verbosity_info()
49
- logger = logging.get_logger("diffusers")
50
- logger.info("INFO")
51
- logger.warning("WARN")
52
- ```
53
-
54
-
55
- All the methods of this logging module are documented below. The main ones are
56
- [`logging.get_verbosity`] to get the current level of verbosity in the logger and
57
- [`logging.set_verbosity`] to set the verbosity to the level of your choice. In order (from the least
58
- verbose to the most verbose), those levels (with their corresponding int values in parentheses) are:
59
-
60
- - `diffusers.logging.CRITICAL` or `diffusers.logging.FATAL` (int value, 50): only report the most
61
- critical errors.
62
- - `diffusers.logging.ERROR` (int value, 40): only report errors.
63
- - `diffusers.logging.WARNING` or `diffusers.logging.WARN` (int value, 30): only reports errors and
64
- warnings. This is the default level used by the library.
65
- - `diffusers.logging.INFO` (int value, 20): reports errors, warnings, and basic information.
66
- - `diffusers.logging.DEBUG` (int value, 10): reports all information.
67
-
68
- By default, `tqdm` progress bars will be displayed during model download. [`logging.disable_progress_bar`] and [`logging.enable_progress_bar`] can be used to suppress or unsuppress this behavior.
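
For example, combining the helpers documented on this page (a short sketch):

```python
import diffusers

# silence everything below ERROR and hide the download progress bars
diffusers.logging.set_verbosity(diffusers.logging.ERROR)
diffusers.logging.disable_progress_bar()

assert diffusers.logging.get_verbosity() == diffusers.logging.ERROR
```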
69
-
70
- ## Base setters
71
-
72
- [[autodoc]] logging.set_verbosity_error
73
-
74
- [[autodoc]] logging.set_verbosity_warning
75
-
76
- [[autodoc]] logging.set_verbosity_info
77
-
78
- [[autodoc]] logging.set_verbosity_debug
79
-
80
- ## Other functions
81
-
82
- [[autodoc]] logging.get_verbosity
83
-
84
- [[autodoc]] logging.set_verbosity
85
-
86
- [[autodoc]] logging.get_logger
87
-
88
- [[autodoc]] logging.enable_default_handler
89
-
90
- [[autodoc]] logging.disable_default_handler
91
-
92
- [[autodoc]] logging.enable_explicit_format
93
-
94
- [[autodoc]] logging.reset_format
95
-
96
- [[autodoc]] logging.enable_progress_bar
97
-
98
- [[autodoc]] logging.disable_progress_bar
 
diffusers/docs/source/en/api/models.mdx DELETED
@@ -1,107 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Models
14
-
15
- Diffusers contains pretrained models for popular algorithms and modules for creating the next set of diffusion models.
16
- The primary function of these models is to denoise an input sample, by modeling the distribution $p_\theta(\mathbf{x}_{t-1}|\mathbf{x}_t)$.
17
- The models are built on the base class [`ModelMixin`], which is a `torch.nn.Module` with basic functionality for saving and loading models both locally and from the Hugging Face Hub.
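
For example (a sketch; the repository id is only an example and is assumed to store the UNet weights in a `unet` subfolder):

```python
from diffusers import UNet2DModel

# download the denoising model from the Hub ...
unet = UNet2DModel.from_pretrained("google/ddpm-cat-256", subfolder="unet")

# ... and save / reload it locally
unet.save_pretrained("./my-unet")
unet = UNet2DModel.from_pretrained("./my-unet")
```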
18
-
19
- ## ModelMixin
20
- [[autodoc]] ModelMixin
21
-
22
- ## UNet2DOutput
23
- [[autodoc]] models.unet_2d.UNet2DOutput
24
-
25
- ## UNet2DModel
26
- [[autodoc]] UNet2DModel
27
-
28
- ## UNet1DOutput
29
- [[autodoc]] models.unet_1d.UNet1DOutput
30
-
31
- ## UNet1DModel
32
- [[autodoc]] UNet1DModel
33
-
34
- ## UNet2DConditionOutput
35
- [[autodoc]] models.unet_2d_condition.UNet2DConditionOutput
36
-
37
- ## UNet2DConditionModel
38
- [[autodoc]] UNet2DConditionModel
39
-
40
- ## UNet3DConditionOutput
41
- [[autodoc]] models.unet_3d_condition.UNet3DConditionOutput
42
-
43
- ## UNet3DConditionModel
44
- [[autodoc]] UNet3DConditionModel
45
-
46
- ## DecoderOutput
47
- [[autodoc]] models.vae.DecoderOutput
48
-
49
- ## VQEncoderOutput
50
- [[autodoc]] models.vq_model.VQEncoderOutput
51
-
52
- ## VQModel
53
- [[autodoc]] VQModel
54
-
55
- ## AutoencoderKLOutput
56
- [[autodoc]] models.autoencoder_kl.AutoencoderKLOutput
57
-
58
- ## AutoencoderKL
59
- [[autodoc]] AutoencoderKL
60
-
61
- ## Transformer2DModel
62
- [[autodoc]] Transformer2DModel
63
-
64
- ## Transformer2DModelOutput
65
- [[autodoc]] models.transformer_2d.Transformer2DModelOutput
66
-
67
- ## TransformerTemporalModel
68
- [[autodoc]] models.transformer_temporal.TransformerTemporalModel
69
-
70
- ## TransformerTemporalModelOutput
71
- [[autodoc]] models.transformer_temporal.TransformerTemporalModelOutput
72
-
73
- ## PriorTransformer
74
- [[autodoc]] models.prior_transformer.PriorTransformer
75
-
76
- ## PriorTransformerOutput
77
- [[autodoc]] models.prior_transformer.PriorTransformerOutput
78
-
79
- ## ControlNetOutput
80
- [[autodoc]] models.controlnet.ControlNetOutput
81
-
82
- ## ControlNetModel
83
- [[autodoc]] ControlNetModel
84
-
85
- ## FlaxModelMixin
86
- [[autodoc]] FlaxModelMixin
87
-
88
- ## FlaxUNet2DConditionOutput
89
- [[autodoc]] models.unet_2d_condition_flax.FlaxUNet2DConditionOutput
90
-
91
- ## FlaxUNet2DConditionModel
92
- [[autodoc]] FlaxUNet2DConditionModel
93
-
94
- ## FlaxDecoderOutput
95
- [[autodoc]] models.vae_flax.FlaxDecoderOutput
96
-
97
- ## FlaxAutoencoderKLOutput
98
- [[autodoc]] models.vae_flax.FlaxAutoencoderKLOutput
99
-
100
- ## FlaxAutoencoderKL
101
- [[autodoc]] FlaxAutoencoderKL
102
-
103
- ## FlaxControlNetOutput
104
- [[autodoc]] models.controlnet_flax.FlaxControlNetOutput
105
-
106
- ## FlaxControlNetModel
107
- [[autodoc]] FlaxControlNetModel
 
diffusers/docs/source/en/api/outputs.mdx DELETED
@@ -1,55 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # BaseOutputs
14
-
15
- All models have outputs that are instances of subclasses of [`~utils.BaseOutput`]. Those are
16
- data structures containing all the information returned by the model, but that can also be used as tuples or
17
- dictionaries.
18
-
19
- Let's see how this looks in an example:
20
-
21
- ```python
22
- from diffusers import DDIMPipeline
23
-
24
- pipeline = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
25
- outputs = pipeline()
26
- ```
27
-
28
- The `outputs` object is a [`~pipelines.ImagePipelineOutput`]. As we can see in the
29
- documentation of that class below, this means it has an `images` attribute.
30
-
31
- You can access each attribute as you would usually do, and if that attribute has not been returned by the model, you will get `None`:
32
-
33
- ```python
34
- outputs.images
35
- ```
36
-
37
- or via keyword lookup
38
-
39
- ```python
40
- outputs["images"]
41
- ```
42
-
43
- When considering our `outputs` object as a tuple, it only considers the attributes that don't have `None` values.
44
- Here for instance, we could retrieve images via indexing:
45
-
46
- ```python
47
- outputs[:1]
48
- ```
49
-
50
- which in this case returns the one-element tuple `(outputs.images,)`.
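
The conversion can also be done explicitly with `to_tuple`, which is documented below. Continuing the example above (a short sketch):

```python
# to_tuple drops the None-valued fields and returns the remaining ones in order;
# for ImagePipelineOutput that is just the generated images
(images,) = outputs.to_tuple()
```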
51
-
52
- ## BaseOutput
53
-
54
- [[autodoc]] utils.BaseOutput
55
- - to_tuple
 
diffusers/docs/source/en/api/pipelines/alt_diffusion.mdx DELETED
@@ -1,83 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # AltDiffusion
14
-
15
- AltDiffusion was proposed in [AltCLIP: Altering the Language Encoder in CLIP for Extended Language Capabilities](https://arxiv.org/abs/2211.06679) by Zhongzhi Chen, Guang Liu, Bo-Wen Zhang, Fulong Ye, Qinghong Yang, Ledell Wu.
16
-
17
- The abstract of the paper is the following:
18
-
19
- *In this work, we present a conceptually simple and effective method to train a strong bilingual multimodal representation model. Starting from the pretrained multimodal representation model CLIP released by OpenAI, we switched its text encoder with a pretrained multilingual text encoder XLM-R, and aligned both languages and image representations by a two-stage training schema consisting of teacher learning and contrastive learning. We validate our method through evaluations of a wide range of tasks. We set new state-of-the-art performances on a bunch of tasks including ImageNet-CN, Flicker30k- CN, and COCO-CN. Further, we obtain very close performances with CLIP on almost all tasks, suggesting that one can simply alter the text encoder in CLIP for extended capabilities such as multilingual understanding.*
20
-
21
-
22
- *Overview*:
23
-
24
- | Pipeline | Tasks | Colab | Demo
25
- |---|---|:---:|:---:|
26
- | [pipeline_alt_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion.py) | *Text-to-Image Generation* | - | -
27
- | [pipeline_alt_diffusion_img2img.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/alt_diffusion/pipeline_alt_diffusion_img2img.py) | *Image-to-Image Text-Guided Generation* | - |-
28
-
29
- ## Tips
30
-
31
- - AltDiffusion is conceptually exactly the same as [Stable Diffusion](./api/pipelines/stable_diffusion/overview).
32
-
33
- - *Run AltDiffusion*
34
-
35
- AltDiffusion can be tested very easily with the [`AltDiffusionPipeline`], [`AltDiffusionImg2ImgPipeline`], and the `"BAAI/AltDiffusion-m9"` checkpoint, in exactly the same way as shown in the [Conditional Image Generation Guide](./using-diffusers/conditional_image_generation) and the [Image-to-Image Generation Guide](./using-diffusers/img2img).
36
-
37
- - *How to load and use different schedulers.*
38
-
39
- The AltDiffusion pipeline uses the [`DDIMScheduler`] by default, but `diffusers` provides many other schedulers that can be used with it, such as [`PNDMScheduler`], [`LMSDiscreteScheduler`], [`EulerDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`], etc.
40
- To use a different scheduler, you can either change it via the [`ConfigMixin.from_config`] method or pass the `scheduler` argument to the `from_pretrained` method of the pipeline. For example, to use the [`EulerDiscreteScheduler`], you can do the following:
41
-
42
- ```python
43
- >>> from diffusers import AltDiffusionPipeline, EulerDiscreteScheduler
44
-
45
- >>> pipeline = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9")
46
- >>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
47
-
48
- >>> # or
49
- >>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("BAAI/AltDiffusion-m9", subfolder="scheduler")
50
- >>> pipeline = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9", scheduler=euler_scheduler)
51
- ```
52
-
53
-
54
- - *How to reuse components across multiple pipelines*
55
-
56
- If you want to address all possible use cases with a single set of pretrained weights, we recommend using the `components` functionality to instantiate all pipelines in the most memory-efficient way:
57
-
58
- ```python
59
- >>> from diffusers import (
60
- ... AltDiffusionPipeline,
61
- ... AltDiffusionImg2ImgPipeline,
62
- ... )
63
-
64
- >>> text2img = AltDiffusionPipeline.from_pretrained("BAAI/AltDiffusion-m9")
65
- >>> img2img = AltDiffusionImg2ImgPipeline(**text2img.components)
66
-
67
- >>> # now you can use text2img(...) and img2img(...) just like the call methods of each respective pipeline
68
- ```
69
-
70
- ## AltDiffusionPipelineOutput
71
- [[autodoc]] pipelines.alt_diffusion.AltDiffusionPipelineOutput
72
- - all
73
- - __call__
74
-
75
- ## AltDiffusionPipeline
76
- [[autodoc]] AltDiffusionPipeline
77
- - all
78
- - __call__
79
-
80
- ## AltDiffusionImg2ImgPipeline
81
- [[autodoc]] AltDiffusionImg2ImgPipeline
82
- - all
83
- - __call__
 
diffusers/docs/source/en/api/pipelines/audio_diffusion.mdx DELETED
@@ -1,98 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Audio Diffusion
14
-
15
- ## Overview
16
-
17
- [Audio Diffusion](https://github.com/teticio/audio-diffusion) by Robert Dargavel Smith.
18
-
19
- Audio Diffusion leverages the recent advances in image generation using diffusion models by converting audio samples to
20
- and from mel spectrogram images.
21
-
22
- The original codebase of this implementation can be found [here](https://github.com/teticio/audio-diffusion), including
23
- training scripts and example notebooks.
24
-
25
- ## Available Pipelines:
26
-
27
- | Pipeline | Tasks | Colab
28
- |---|---|:---:|
29
- | [pipeline_audio_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/audio_diffusion/pipeline_audio_diffusion.py) | *Unconditional Audio Generation* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/teticio/audio-diffusion/blob/master/notebooks/audio_diffusion_pipeline.ipynb) |
30
-
31
-
32
- ## Examples:
33
-
34
- ### Audio Diffusion
35
-
36
- ```python
37
- import torch
38
- from IPython.display import Audio
39
- from diffusers import DiffusionPipeline
40
-
41
- device = "cuda" if torch.cuda.is_available() else "cpu"
42
- pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-256").to(device)
43
-
44
- output = pipe()
45
- display(output.images[0])
46
- display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
47
- ```
48
-
49
- ### Latent Audio Diffusion
50
-
51
- ```python
52
- import torch
53
- from IPython.display import Audio
54
- from diffusers import DiffusionPipeline
55
-
56
- device = "cuda" if torch.cuda.is_available() else "cpu"
57
- pipe = DiffusionPipeline.from_pretrained("teticio/latent-audio-diffusion-256").to(device)
58
-
59
- output = pipe()
60
- display(output.images[0])
61
- display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
62
- ```
63
-
64
- ### Audio Diffusion with DDIM (faster)
65
-
66
- ```python
67
- import torch
68
- from IPython.display import Audio
69
- from diffusers import DiffusionPipeline
70
-
71
- device = "cuda" if torch.cuda.is_available() else "cpu"
72
- pipe = DiffusionPipeline.from_pretrained("teticio/audio-diffusion-ddim-256").to(device)
73
-
74
- output = pipe()
75
- display(output.images[0])
76
- display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
77
- ```
78
-
79
- ### Variations, in-painting, out-painting etc.
80
-
81
- ```python
82
- output = pipe(
83
- raw_audio=output.audios[0, 0],
84
- start_step=int(pipe.get_default_steps() / 2),
85
- mask_start_secs=1,
86
- mask_end_secs=1,
87
- )
88
- display(output.images[0])
89
- display(Audio(output.audios[0], rate=pipe.mel.get_sample_rate()))
90
- ```
91
-
92
- ## AudioDiffusionPipeline
93
- [[autodoc]] AudioDiffusionPipeline
94
- - all
95
- - __call__
96
-
97
- ## Mel
98
- [[autodoc]] Mel
 
diffusers/docs/source/en/api/pipelines/audioldm.mdx DELETED
@@ -1,82 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # AudioLDM
14
-
15
- ## Overview
16
-
17
- AudioLDM was proposed in [AudioLDM: Text-to-Audio Generation with Latent Diffusion Models](https://arxiv.org/abs/2301.12503) by Haohe Liu et al.
18
-
19
- Inspired by [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/overview), AudioLDM
20
- is a text-to-audio _latent diffusion model (LDM)_ that learns continuous audio representations from [CLAP](https://huggingface.co/docs/transformers/main/model_doc/clap)
21
- latents. AudioLDM takes a text prompt as input and predicts the corresponding audio. It can generate text-conditional
22
- sound effects, human speech and music.
23
-
24
- This pipeline was contributed by [sanchit-gandhi](https://huggingface.co/sanchit-gandhi). The original codebase can be found [here](https://github.com/haoheliu/AudioLDM).
25
-
26
- ## Text-to-Audio
27
-
28
- The [`AudioLDMPipeline`] can be used to load pre-trained weights from [cvssp/audioldm](https://huggingface.co/cvssp/audioldm) and generate text-conditional audio outputs:
29
-
30
- ```python
31
- from diffusers import AudioLDMPipeline
32
- import torch
33
- import scipy
34
-
35
- repo_id = "cvssp/audioldm"
36
- pipe = AudioLDMPipeline.from_pretrained(repo_id, torch_dtype=torch.float16)
37
- pipe = pipe.to("cuda")
38
-
39
- prompt = "Techno music with a strong, upbeat tempo and high melodic riffs"
40
- audio = pipe(prompt, num_inference_steps=10, audio_length_in_s=5.0).audios[0]
41
-
42
- # save the audio sample as a .wav file
43
- scipy.io.wavfile.write("techno.wav", rate=16000, data=audio)
44
- ```
45
-
46
- ### Tips
47
-
48
- Prompts:
49
- * Descriptive prompt inputs work best: you can use adjectives to describe the sound (e.g. "high quality" or "clear") and make the prompt context specific (e.g., "water stream in a forest" instead of "stream").
50
- * It's best to use general terms like 'cat' or 'dog' instead of specific names or abstract objects that the model may not be familiar with.
51
-
52
- Inference:
53
- * The _quality_ of the predicted audio sample can be controlled by the `num_inference_steps` argument: higher steps give higher quality audio at the expense of slower inference.
54
- * The _length_ of the predicted audio sample can be controlled by varying the `audio_length_in_s` argument.
55
-
56
- ### How to load and use different schedulers
57
-
58
- The AudioLDM pipeline uses the [`DDIMScheduler`] by default, but `diffusers` provides many other schedulers
59
- that can be used with the AudioLDM pipeline, such as [`PNDMScheduler`], [`LMSDiscreteScheduler`], [`EulerDiscreteScheduler`],
60
- [`EulerAncestralDiscreteScheduler`], etc. We recommend the [`DPMSolverMultistepScheduler`] as it is currently the fastest
61
- scheduler available.
62
-
63
- To use a different scheduler, you can either change it via the [`ConfigMixin.from_config`]
64
- method, or pass the `scheduler` argument to the `from_pretrained` method of the pipeline. For example, to use the
65
- [`DPMSolverMultistepScheduler`], you can do the following:
66
-
67
- ```python
68
- >>> from diffusers import AudioLDMPipeline, DPMSolverMultistepScheduler
69
- >>> import torch
70
-
71
- >>> pipeline = AudioLDMPipeline.from_pretrained("cvssp/audioldm", torch_dtype=torch.float16)
72
- >>> pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
73
-
74
- >>> # or
75
- >>> dpm_scheduler = DPMSolverMultistepScheduler.from_pretrained("cvssp/audioldm", subfolder="scheduler")
76
- >>> pipeline = AudioLDMPipeline.from_pretrained("cvssp/audioldm", scheduler=dpm_scheduler, torch_dtype=torch.float16)
77
- ```
78
-
79
- ## AudioLDMPipeline
80
- [[autodoc]] AudioLDMPipeline
81
- - all
82
- - __call__
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
diffusers/docs/source/en/api/pipelines/cycle_diffusion.mdx DELETED
@@ -1,100 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Cycle Diffusion
14
-
15
- ## Overview
16
-
17
- Cycle Diffusion is a Text-Guided Image-to-Image Generation model proposed in [Unifying Diffusion Models' Latent Space, with Applications to CycleDiffusion and Guidance](https://arxiv.org/abs/2210.05559) by Chen Henry Wu, Fernando De la Torre.
18
-
19
- The abstract of the paper is the following:
20
-
21
- *Diffusion models have achieved unprecedented performance in generative modeling. The commonly-adopted formulation of the latent code of diffusion models is a sequence of gradually denoised samples, as opposed to the simpler (e.g., Gaussian) latent space of GANs, VAEs, and normalizing flows. This paper provides an alternative, Gaussian formulation of the latent space of various diffusion models, as well as an invertible DPM-Encoder that maps images into the latent space. While our formulation is purely based on the definition of diffusion models, we demonstrate several intriguing consequences. (1) Empirically, we observe that a common latent space emerges from two diffusion models trained independently on related domains. In light of this finding, we propose CycleDiffusion, which uses DPM-Encoder for unpaired image-to-image translation. Furthermore, applying CycleDiffusion to text-to-image diffusion models, we show that large-scale text-to-image diffusion models can be used as zero-shot image-to-image editors. (2) One can guide pre-trained diffusion models and GANs by controlling the latent codes in a unified, plug-and-play formulation based on energy-based models. Using the CLIP model and a face recognition model as guidance, we demonstrate that diffusion models have better coverage of low-density sub-populations and individuals than GANs.*
22
-
23
- *Tips*:
24
- - The Cycle Diffusion pipeline is fully compatible with any [Stable Diffusion](./stable_diffusion) checkpoint.
25
- - Currently Cycle Diffusion only works with the [`DDIMScheduler`].
26
-
27
- *Example*:
28
-
29
- The following shows how to best use the [`CycleDiffusionPipeline`]:
30
-
31
- ```python
32
- import requests
33
- import torch
34
- from PIL import Image
35
- from io import BytesIO
36
-
37
- from diffusers import CycleDiffusionPipeline, DDIMScheduler
38
-
39
- # load the pipeline
40
- # make sure you're logged in with `huggingface-cli login`
41
- model_id_or_path = "CompVis/stable-diffusion-v1-4"
42
- scheduler = DDIMScheduler.from_pretrained(model_id_or_path, subfolder="scheduler")
43
- pipe = CycleDiffusionPipeline.from_pretrained(model_id_or_path, scheduler=scheduler).to("cuda")
44
-
45
- # let's download an initial image
46
- url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/An%20astronaut%20riding%20a%20horse.png"
47
- response = requests.get(url)
48
- init_image = Image.open(BytesIO(response.content)).convert("RGB")
49
- init_image = init_image.resize((512, 512))
50
- init_image.save("horse.png")
51
-
52
- # let's specify a prompt
53
- source_prompt = "An astronaut riding a horse"
54
- prompt = "An astronaut riding an elephant"
55
-
56
- # call the pipeline
57
- image = pipe(
58
- prompt=prompt,
59
- source_prompt=source_prompt,
60
- image=init_image,
61
- num_inference_steps=100,
62
- eta=0.1,
63
- strength=0.8,
64
- guidance_scale=2,
65
- source_guidance_scale=1,
66
- ).images[0]
67
-
68
- image.save("horse_to_elephant.png")
69
-
70
- # let's try another example
71
- # See more samples at the original repo: https://github.com/ChenWu98/cycle-diffusion
72
- url = "https://raw.githubusercontent.com/ChenWu98/cycle-diffusion/main/data/dalle2/A%20black%20colored%20car.png"
73
- response = requests.get(url)
74
- init_image = Image.open(BytesIO(response.content)).convert("RGB")
75
- init_image = init_image.resize((512, 512))
76
- init_image.save("black.png")
77
-
78
- source_prompt = "A black colored car"
79
- prompt = "A blue colored car"
80
-
81
- # call the pipeline
82
- torch.manual_seed(0)
83
- image = pipe(
84
- prompt=prompt,
85
- source_prompt=source_prompt,
86
- image=init_image,
87
- num_inference_steps=100,
88
- eta=0.1,
89
- strength=0.85,
90
- guidance_scale=3,
91
- source_guidance_scale=1,
92
- ).images[0]
93
-
94
- image.save("black_to_blue.png")
95
- ```
96
-
97
- ## CycleDiffusionPipeline
98
- [[autodoc]] CycleDiffusionPipeline
99
- - all
100
- - __call__
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
diffusers/docs/source/en/api/pipelines/dance_diffusion.mdx DELETED
@@ -1,34 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Dance Diffusion
14
-
15
- ## Overview
16
-
17
- [Dance Diffusion](https://github.com/Harmonai-org/sample-generator) by Zach Evans.
18
-
19
- Dance Diffusion is the first in a suite of generative audio tools for producers and musicians to be released by Harmonai.
20
- For more info or to get involved in the development of these tools, please visit https://harmonai.org and fill out the form on the front page.
21
-
22
- The original codebase of this implementation can be found [here](https://github.com/Harmonai-org/sample-generator).
23
-
24
- ## Available Pipelines:
25
-
26
- | Pipeline | Tasks | Colab
27
- |---|---|:---:|
28
- | [pipeline_dance_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/dance_diffusion/pipeline_dance_diffusion.py) | *Unconditional Audio Generation* | - |
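-
- The snippet below is a minimal usage sketch; it assumes the `harmonai/maestro-150k` checkpoint, but any Dance Diffusion checkpoint should work the same way:
-
- ```python
- import torch
- from diffusers import DanceDiffusionPipeline
-
- device = "cuda" if torch.cuda.is_available() else "cpu"
- # assumed checkpoint; substitute any compatible Dance Diffusion checkpoint
- pipe = DanceDiffusionPipeline.from_pretrained("harmonai/maestro-150k").to(device)
-
- # generate roughly 4 seconds of audio; `output.audios` has shape (batch, channels, samples)
- output = pipe(audio_length_in_s=4.0)
- audio = output.audios[0]
-
- # the model's sample rate is stored in the UNet config
- sample_rate = pipe.unet.config.sample_rate
- ```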
29
-
30
-
31
- ## DanceDiffusionPipeline
32
- [[autodoc]] DanceDiffusionPipeline
33
- - all
34
- - __call__
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
diffusers/docs/source/en/api/pipelines/ddim.mdx DELETED
@@ -1,36 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # DDIM
14
-
15
- ## Overview
16
-
17
- [Denoising Diffusion Implicit Models](https://arxiv.org/abs/2010.02502) (DDIM) by Jiaming Song, Chenlin Meng and Stefano Ermon.
18
-
19
- The abstract of the paper is the following:
20
-
21
- Denoising diffusion probabilistic models (DDPMs) have achieved high quality image generation without adversarial training, yet they require simulating a Markov chain for many steps to produce a sample. To accelerate sampling, we present denoising diffusion implicit models (DDIMs), a more efficient class of iterative implicit probabilistic models with the same training procedure as DDPMs. In DDPMs, the generative process is defined as the reverse of a Markovian diffusion process. We construct a class of non-Markovian diffusion processes that lead to the same training objective, but whose reverse process can be much faster to sample from. We empirically demonstrate that DDIMs can produce high quality samples 10× to 50× faster in terms of wall-clock time compared to DDPMs, allow us to trade off computation for sample quality, and can perform semantically meaningful image interpolation directly in the latent space.
22
-
23
- The original codebase of this paper can be found here: [ermongroup/ddim](https://github.com/ermongroup/ddim).
24
- For questions, feel free to contact the author on [tsong.me](https://tsong.me/).
25
-
26
- ## Available Pipelines:
27
-
28
- | Pipeline | Tasks | Colab
29
- |---|---|:---:|
30
- | [pipeline_ddim.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ddim/pipeline_ddim.py) | *Unconditional Image Generation* | - |
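-
- A minimal usage sketch, assuming the unconditional `google/ddpm-cifar10-32` checkpoint:
-
- ```python
- from diffusers import DDIMPipeline
-
- # assumed checkpoint; substitute any compatible unconditional image checkpoint
- pipe = DDIMPipeline.from_pretrained("google/ddpm-cifar10-32")
- pipe = pipe.to("cuda")
-
- # DDIM needs far fewer denoising steps than DDPM for comparable sample quality
- image = pipe(num_inference_steps=50, eta=0.0).images[0]
- image.save("ddim_generated_image.png")
- ```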
31
-
32
-
33
- ## DDIMPipeline
34
- [[autodoc]] DDIMPipeline
35
- - all
36
- - __call__
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
diffusers/docs/source/en/api/pipelines/ddpm.mdx DELETED
@@ -1,37 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # DDPM
14
-
15
- ## Overview
16
-
17
- [Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2006.11239)
18
- (DDPM) by Jonathan Ho, Ajay Jain and Pieter Abbeel proposes the diffusion-based model of the same name; in the context of the 🤗 Diffusers library, DDPM refers to the discrete denoising scheduler from the paper as well as the pipeline.
19
-
20
- The abstract of the paper is the following:
21
-
22
- We present high quality image synthesis results using diffusion probabilistic models, a class of latent variable models inspired by considerations from nonequilibrium thermodynamics. Our best results are obtained by training on a weighted variational bound designed according to a novel connection between diffusion probabilistic models and denoising score matching with Langevin dynamics, and our models naturally admit a progressive lossy decompression scheme that can be interpreted as a generalization of autoregressive decoding. On the unconditional CIFAR10 dataset, we obtain an Inception score of 9.46 and a state-of-the-art FID score of 3.17. On 256x256 LSUN, we obtain sample quality similar to ProgressiveGAN.
23
-
24
- The original codebase of this paper can be found [here](https://github.com/hojonathanho/diffusion).
25
-
26
-
27
- ## Available Pipelines:
28
-
29
- | Pipeline | Tasks | Colab
30
- |---|---|:---:|
31
- | [pipeline_ddpm.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/ddpm/pipeline_ddpm.py) | *Unconditional Image Generation* | - |
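-
- A minimal usage sketch, assuming the `google/ddpm-celebahq-256` checkpoint:
-
- ```python
- from diffusers import DDPMPipeline
-
- # assumed checkpoint; substitute any compatible unconditional image checkpoint
- pipe = DDPMPipeline.from_pretrained("google/ddpm-celebahq-256")
- pipe = pipe.to("cuda")
-
- # DDPM runs the full 1000-step reverse Markov chain by default
- image = pipe().images[0]
- image.save("ddpm_generated_image.png")
- ```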
32
-
33
-
34
- # DDPMPipeline
35
- [[autodoc]] DDPMPipeline
36
- - all
37
- - __call__
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
diffusers/docs/source/en/api/pipelines/dit.mdx DELETED
@@ -1,59 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Scalable Diffusion Models with Transformers (DiT)
14
-
15
- ## Overview
16
-
17
- [Scalable Diffusion Models with Transformers](https://arxiv.org/abs/2212.09748) (DiT) by William Peebles and Saining Xie.
18
-
19
- The abstract of the paper is the following:
20
-
21
- *We explore a new class of diffusion models based on the transformer architecture. We train latent diffusion models of images, replacing the commonly-used U-Net backbone with a transformer that operates on latent patches. We analyze the scalability of our Diffusion Transformers (DiTs) through the lens of forward pass complexity as measured by Gflops. We find that DiTs with higher Gflops -- through increased transformer depth/width or increased number of input tokens -- consistently have lower FID. In addition to possessing good scalability properties, our largest DiT-XL/2 models outperform all prior diffusion models on the class-conditional ImageNet 512x512 and 256x256 benchmarks, achieving a state-of-the-art FID of 2.27 on the latter.*
22
-
23
- The original codebase of this paper can be found here: [facebookresearch/dit](https://github.com/facebookresearch/dit).
24
-
25
- ## Available Pipelines:
26
-
27
- | Pipeline | Tasks | Colab
28
- |---|---|:---:|
29
- | [pipeline_dit.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/dit/pipeline_dit.py) | *Conditional Image Generation* | - |
30
-
31
-
32
- ## Usage example
33
-
34
- ```python
35
- from diffusers import DiTPipeline, DPMSolverMultistepScheduler
36
- import torch
37
-
38
- pipe = DiTPipeline.from_pretrained("facebook/DiT-XL-2-256", torch_dtype=torch.float16)
39
- pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
40
- pipe = pipe.to("cuda")
41
-
42
- # pick words from Imagenet class labels
43
- pipe.labels # to print all available words
44
-
45
- # pick words that exist in ImageNet
46
- words = ["white shark", "umbrella"]
47
-
48
- class_ids = pipe.get_label_ids(words)
49
-
50
- generator = torch.manual_seed(33)
51
- output = pipe(class_labels=class_ids, num_inference_steps=25, generator=generator)
52
-
53
- image = output.images[0] # label 'white shark'
54
- ```
55
-
56
- ## DiTPipeline
57
- [[autodoc]] DiTPipeline
58
- - all
59
- - __call__
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
diffusers/docs/source/en/api/pipelines/latent_diffusion.mdx DELETED
@@ -1,49 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Latent Diffusion
14
-
15
- ## Overview
16
-
17
- Latent Diffusion was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer.
18
-
19
- The abstract of the paper is the following:
20
-
21
- *By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs.*
22
-
23
- The original codebase can be found [here](https://github.com/CompVis/latent-diffusion).
24
-
25
- ## Tips:
26
-
27
- -
28
- -
29
- -
30
-
31
- ## Available Pipelines:
32
-
33
- | Pipeline | Tasks | Colab
34
- |---|---|:---:|
35
- | [pipeline_latent_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion.py) | *Text-to-Image Generation* | - |
36
- | [pipeline_latent_diffusion_superresolution.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion/pipeline_latent_diffusion_superresolution.py) | *Super Resolution* | - |
37
-
38
- ## Examples:
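-
- A minimal text-to-image sketch, assuming the `CompVis/ldm-text2im-large-256` checkpoint:
-
- ```python
- from diffusers import DiffusionPipeline
-
- # assumed checkpoint; substitute any compatible latent diffusion text-to-image checkpoint
- pipe = DiffusionPipeline.from_pretrained("CompVis/ldm-text2im-large-256")
- pipe = pipe.to("cuda")
-
- prompt = "A painting of a squirrel eating a burger"
- image = pipe(prompt, num_inference_steps=50, eta=0.3, guidance_scale=6).images[0]
- image.save("squirrel.png")
- ```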
39
-
40
-
41
- ## LDMTextToImagePipeline
42
- [[autodoc]] LDMTextToImagePipeline
43
- - all
44
- - __call__
45
-
46
- ## LDMSuperResolutionPipeline
47
- [[autodoc]] LDMSuperResolutionPipeline
48
- - all
49
- - __call__
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
diffusers/docs/source/en/api/pipelines/latent_diffusion_uncond.mdx DELETED
@@ -1,42 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Unconditional Latent Diffusion
14
-
15
- ## Overview
16
-
17
- Unconditional Latent Diffusion was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer.
18
-
19
- The abstract of the paper is the following:
20
-
21
- *By decomposing the image formation process into a sequential application of denoising autoencoders, diffusion models (DMs) achieve state-of-the-art synthesis results on image data and beyond. Additionally, their formulation allows for a guiding mechanism to control the image generation process without retraining. However, since these models typically operate directly in pixel space, optimization of powerful DMs often consumes hundreds of GPU days and inference is expensive due to sequential evaluations. To enable DM training on limited computational resources while retaining their quality and flexibility, we apply them in the latent space of powerful pretrained autoencoders. In contrast to previous work, training diffusion models on such a representation allows for the first time to reach a near-optimal point between complexity reduction and detail preservation, greatly boosting visual fidelity. By introducing cross-attention layers into the model architecture, we turn diffusion models into powerful and flexible generators for general conditioning inputs such as text or bounding boxes and high-resolution synthesis becomes possible in a convolutional manner. Our latent diffusion models (LDMs) achieve a new state of the art for image inpainting and highly competitive performance on various tasks, including unconditional image generation, semantic scene synthesis, and super-resolution, while significantly reducing computational requirements compared to pixel-based DMs.*
22
-
23
- The original codebase can be found [here](https://github.com/CompVis/latent-diffusion).
24
-
25
- ## Tips:
26
-
27
- -
28
- -
29
- -
30
-
31
- ## Available Pipelines:
32
-
33
- | Pipeline | Tasks | Colab
34
- |---|---|:---:|
35
- | [pipeline_latent_diffusion_uncond.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/latent_diffusion_uncond/pipeline_latent_diffusion_uncond.py) | *Unconditional Image Generation* | - |
36
-
37
- ## Examples:
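-
- A minimal unconditional sketch, assuming the `CompVis/ldm-celebahq-256` checkpoint:
-
- ```python
- from diffusers import LDMPipeline
-
- # assumed checkpoint; substitute any compatible unconditional latent diffusion checkpoint
- pipe = LDMPipeline.from_pretrained("CompVis/ldm-celebahq-256")
- pipe = pipe.to("cuda")
-
- # sample a face image from the CelebA-HQ latent diffusion model
- image = pipe(num_inference_steps=200).images[0]
- image.save("ldm_generated_image.png")
- ```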
38
-
39
- ## LDMPipeline
40
- [[autodoc]] LDMPipeline
41
- - all
42
- - __call__
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
diffusers/docs/source/en/api/pipelines/overview.mdx DELETED
@@ -1,213 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Pipelines
14
-
15
- Pipelines provide a simple way to run state-of-the-art diffusion models in inference.
16
- Most diffusion systems consist of multiple independently-trained models and highly adaptable scheduler
17
- components - all of which are needed to have a functioning end-to-end diffusion system.
18
-
19
- As an example, [Stable Diffusion](https://huggingface.co/blog/stable_diffusion) has three independently trained models:
20
- - an [Autoencoder](./api/models#vae)
21
- - a [Conditional UNet](./api/models#UNet2DConditionModel)
22
- - a [CLIP text encoder](https://huggingface.co/docs/transformers/v4.27.1/en/model_doc/clip#transformers.CLIPTextModel)
23
- together with a [scheduler](./api/scheduler#pndm) component,
24
- a [CLIPImageProcessor](https://huggingface.co/docs/transformers/v4.27.1/en/model_doc/clip#transformers.CLIPImageProcessor),
25
- and a [safety checker](./stable_diffusion#safety_checker).
26
- All of these components are necessary to run stable diffusion in inference even though they were trained
27
- or created independently from each other.
28
-
29
- To that end, we strive to offer all open-sourced, state-of-the-art diffusion systems under a unified API.
30
- More specifically, we strive to provide pipelines that
31
- - 1. can load the officially published weights and yield 1-to-1 the same outputs as the original implementation according to the corresponding paper (*e.g.* [LDMTextToImagePipeline](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/latent_diffusion), uses the officially released weights of [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752)),
32
- - 2. have a simple user interface to run the model in inference (see the [Pipelines API](#pipelines-api) section),
33
- - 3. are easy to understand with code that is self-explanatory and can be read along-side the official paper (see [Pipelines summary](#pipelines-summary)),
34
- - 4. can easily be contributed by the community (see the [Contribution](#contribution) section).
35
-
36
- **Note** that pipelines do not (and should not) offer any training functionality.
37
- If you are looking for *official* training examples, please have a look at [examples](https://github.com/huggingface/diffusers/tree/main/examples).
38
-
39
- ## 🧨 Diffusers Summary
40
-
41
- The following table summarizes all officially supported pipelines, their corresponding paper, and if
42
- available a colab notebook to directly try them out.
43
-
44
-
45
- | Pipeline | Paper | Tasks | Colab
46
- |---|---|:---:|:---:|
47
- | [alt_diffusion](./alt_diffusion) | [**AltDiffusion**](https://arxiv.org/abs/2211.06679) | Image-to-Image Text-Guided Generation | -
48
- | [audio_diffusion](./audio_diffusion) | [**Audio Diffusion**](https://github.com/teticio/audio_diffusion.git) | Unconditional Audio Generation |
49
- | [controlnet](./api/pipelines/stable_diffusion/controlnet) | [**ControlNet with Stable Diffusion**](https://arxiv.org/abs/2302.05543) | Image-to-Image Text-Guided Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/controlnet.ipynb)
50
- | [cycle_diffusion](./cycle_diffusion) | [**Cycle Diffusion**](https://arxiv.org/abs/2210.05559) | Image-to-Image Text-Guided Generation |
51
- | [dance_diffusion](./dance_diffusion) | [**Dance Diffusion**](https://github.com/Harmonai-org/sample-generator) | Unconditional Audio Generation |
52
- | [ddpm](./ddpm) | [**Denoising Diffusion Probabilistic Models**](https://arxiv.org/abs/2006.11239) | Unconditional Image Generation |
53
- | [ddim](./ddim) | [**Denoising Diffusion Implicit Models**](https://arxiv.org/abs/2010.02502) | Unconditional Image Generation |
54
- | [latent_diffusion](./latent_diffusion) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752)| Text-to-Image Generation |
55
- | [latent_diffusion](./latent_diffusion) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752)| Super Resolution Image-to-Image |
56
- | [latent_diffusion_uncond](./latent_diffusion_uncond) | [**High-Resolution Image Synthesis with Latent Diffusion Models**](https://arxiv.org/abs/2112.10752) | Unconditional Image Generation |
57
- | [paint_by_example](./paint_by_example) | [**Paint by Example: Exemplar-based Image Editing with Diffusion Models**](https://arxiv.org/abs/2211.13227) | Image-Guided Image Inpainting |
58
- | [pndm](./pndm) | [**Pseudo Numerical Methods for Diffusion Models on Manifolds**](https://arxiv.org/abs/2202.09778) | Unconditional Image Generation |
59
- | [score_sde_ve](./score_sde_ve) | [**Score-Based Generative Modeling through Stochastic Differential Equations**](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation |
60
- | [score_sde_vp](./score_sde_vp) | [**Score-Based Generative Modeling through Stochastic Differential Equations**](https://openreview.net/forum?id=PxTIG12RRHS) | Unconditional Image Generation |
61
- | [semantic_stable_diffusion](./semantic_stable_diffusion) | [**SEGA: Instructing Diffusion using Semantic Dimensions**](https://arxiv.org/abs/2301.12247) | Text-to-Image Generation |
62
- | [stable_diffusion_text2img](./stable_diffusion/text2img) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Text-to-Image Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/training_example.ipynb)
63
- | [stable_diffusion_img2img](./stable_diffusion/img2img) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Image-to-Image Text-Guided Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb)
64
- | [stable_diffusion_inpaint](./stable_diffusion/inpaint) | [**Stable Diffusion**](https://stability.ai/blog/stable-diffusion-public-release) | Text-Guided Image Inpainting | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)
65
- | [stable_diffusion_panorama](./stable_diffusion/panorama) | [**MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation**](https://arxiv.org/abs/2302.08113) | Text-Guided Panorama View Generation |
66
- | [stable_diffusion_pix2pix](./stable_diffusion/pix2pix) | [**InstructPix2Pix: Learning to Follow Image Editing Instructions**](https://arxiv.org/abs/2211.09800) | Text-Based Image Editing |
67
- | [stable_diffusion_pix2pix_zero](./stable_diffusion/pix2pix_zero) | [**Zero-shot Image-to-Image Translation**](https://arxiv.org/abs/2302.03027) | Text-Based Image Editing |
68
- | [stable_diffusion_attend_and_excite](./stable_diffusion/attend_and_excite) | [**Attend and Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models**](https://arxiv.org/abs/2301.13826) | Text-to-Image Generation |
69
- | [stable_diffusion_self_attention_guidance](./stable_diffusion/self_attention_guidance) | [**Self-Attention Guidance**](https://arxiv.org/abs/2210.00939) | Text-to-Image Generation |
70
- | [stable_diffusion_image_variation](./stable_diffusion/image_variation) | [**Stable Diffusion Image Variations**](https://github.com/LambdaLabsML/lambda-diffusers#stable-diffusion-image-variations) | Image-to-Image Generation |
71
- | [stable_diffusion_latent_upscale](./stable_diffusion/latent_upscale) | [**Stable Diffusion Latent Upscaler**](https://twitter.com/StabilityAI/status/1590531958815064065) | Text-Guided Super Resolution Image-to-Image |
72
- | [stable_diffusion_2](./stable_diffusion_2/) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-to-Image Generation |
73
- | [stable_diffusion_2](./stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Image Inpainting |
74
- | [stable_diffusion_2](./stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Depth-to-Image Text-Guided Generation |
75
- | [stable_diffusion_2](./stable_diffusion_2) | [**Stable Diffusion 2**](https://stability.ai/blog/stable-diffusion-v2-release) | Text-Guided Super Resolution Image-to-Image |
76
- | [stable_diffusion_safe](./stable_diffusion_safe) | [**Safe Stable Diffusion**](https://arxiv.org/abs/2211.05105) | Text-Guided Generation | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ml-research/safe-latent-diffusion/blob/main/examples/Safe%20Latent%20Diffusion.ipynb)
77
- | [stable_unclip](./stable_unclip) | **Stable unCLIP** | Text-to-Image Generation |
78
- | [stable_unclip](./stable_unclip) | **Stable unCLIP** | Image-to-Image Text-Guided Generation |
79
- | [stochastic_karras_ve](./stochastic_karras_ve) | [**Elucidating the Design Space of Diffusion-Based Generative Models**](https://arxiv.org/abs/2206.00364) | Unconditional Image Generation |
80
- | [text_to_video_sd](./api/pipelines/text_to_video) | [Modelscope's Text-to-video-synthesis Model in Open Domain](https://modelscope.cn/models/damo/text-to-video-synthesis/summary) | Text-to-Video Generation |
81
- | [unclip](./unclip) | [Hierarchical Text-Conditional Image Generation with CLIP Latents](https://arxiv.org/abs/2204.06125) | Text-to-Image Generation |
82
- | [versatile_diffusion](./versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Text-to-Image Generation |
83
- | [versatile_diffusion](./versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Image Variations Generation |
84
- | [versatile_diffusion](./versatile_diffusion) | [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) | Dual Image and Text Guided Generation |
85
- | [vq_diffusion](./vq_diffusion) | [Vector Quantized Diffusion Model for Text-to-Image Synthesis](https://arxiv.org/abs/2111.14822) | Text-to-Image Generation |
86
-
87
-
88
- **Note**: Pipelines are simple examples of how to play around with the diffusion systems as described in the corresponding papers.
89
-
90
- However, most of them can be adapted to use different scheduler components or even different model components. Some pipeline examples are shown in the [Examples](#examples) below.
91
-
92
- ## Pipelines API
93
-
94
- Diffusion models often consist of multiple independently-trained models or other previously existing components.
95
-
96
-
97
- Each model has been trained independently on a different task, and the scheduler can easily be swapped out and replaced with a different one.
98
- During inference, however, we want to be able to easily load all components and use them in inference - even if one component, *e.g.* CLIP's text encoder, originates from a different library, such as [Transformers](https://github.com/huggingface/transformers). To that end, all pipelines provide the following functionality (a short usage sketch follows the list below):
99
-
100
- - [`from_pretrained` method](../diffusion_pipeline) that accepts a Hugging Face Hub repository id, *e.g.* [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) or a path to a local directory, *e.g.*
101
- "./stable-diffusion". To correctly retrieve which models and components should be loaded, one has to provide a `model_index.json` file, *e.g.* [runwayml/stable-diffusion-v1-5/model_index.json](https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/model_index.json), which defines all components that should be
102
- loaded into the pipelines. More specifically, for each model/component one needs to define the format `<name>: ["<library>", "<class name>"]`. `<name>` is the attribute name given to the loaded instance of `<class name>` which can be found in the library or pipeline folder called `"<library>"`.
103
- - [`save_pretrained`](../diffusion_pipeline) that accepts a local path, *e.g.* `./stable-diffusion` under which all models/components of the pipeline will be saved. For each component/model a folder is created inside the local path that is named after the given attribute name, *e.g.* `./stable_diffusion/unet`.
104
- In addition, a `model_index.json` file is created at the root of the local path, *e.g.* `./stable_diffusion/model_index.json` so that the complete pipeline can again be instantiated
105
- from the local path.
106
- - [`to`](../diffusion_pipeline) which accepts a `string` or `torch.device` to move all models that are of type `torch.nn.Module` to the passed device. The behavior is fully analogous to [PyTorch's `to` method](https://pytorch.org/docs/stable/generated/torch.nn.Module.html#torch.nn.Module.to).
107
- - [`__call__`] method to use the pipeline in inference. `__call__` defines inference logic of the pipeline and should ideally encompass all aspects of it, from pre-processing to forwarding tensors to the different models and schedulers, as well as post-processing. The API of the `__call__` method can strongly vary from pipeline to pipeline. *E.g.* a text-to-image pipeline, such as [`StableDiffusionPipeline`](./stable_diffusion) should accept among other things the text prompt to generate the image. A pure image generation pipeline, such as [DDPMPipeline](https://github.com/huggingface/diffusers/tree/main/src/diffusers/pipelines/ddpm) on the other hand can be run without providing any inputs. To better understand what inputs can be adapted for
108
- each pipeline, one should look directly into the respective pipeline.
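-
- As a short usage sketch of the loading, saving and device methods listed above (using the same [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5) repository as an example):
-
- ```python
- from diffusers import DiffusionPipeline
-
- # download all components listed in the repository's `model_index.json`
- pipe = DiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
-
- # save every component (plus `model_index.json`) to a local directory ...
- pipe.save_pretrained("./stable-diffusion-v1-5")
-
- # ... from which the complete pipeline can be re-instantiated
- pipe = DiffusionPipeline.from_pretrained("./stable-diffusion-v1-5")
-
- # move all `torch.nn.Module` components to the GPU
- pipe = pipe.to("cuda")
- ```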
109
-
110
- **Note**: All pipelines have PyTorch's autograd disabled by decorating the `__call__` method with a [`torch.no_grad`](https://pytorch.org/docs/stable/generated/torch.no_grad.html) decorator because pipelines should
111
- not be used for training. If you want to store the gradients during the forward pass, we recommend writing your own pipeline, see also our [community-examples](https://github.com/huggingface/diffusers/tree/main/examples/community).
112
-
113
- ## Contribution
114
-
115
- We are more than happy about any contribution to the officially supported pipelines 🤗. We aspire
116
- all of our pipelines to be **self-contained**, **easy-to-tweak**, **beginner-friendly** and for **one-purpose-only**.
117
-
118
- - **Self-contained**: A pipeline shall be as self-contained as possible. More specifically, this means that all functionality should either be directly defined in the pipeline file itself, be inherited from (and only from) the [`DiffusionPipeline` class](../diffusion_pipeline), or be directly attached to the model and scheduler components of the pipeline.
119
- - **Easy-to-use**: Pipelines should be extremely easy to use - one should be able to load the pipeline and
120
- use it for its designated task, *e.g.* text-to-image generation, in just a couple of lines of code. Most
121
- logic including pre-processing, an unrolled diffusion loop, and post-processing should all happen inside the `__call__` method.
122
- - **Easy-to-tweak**: Certain pipelines will not be able to handle all use cases and tasks that you might like them to. If you want to use a certain pipeline for a specific use case that is not yet supported, you might have to copy the pipeline file and tweak the code to your needs. We try to make the pipeline code as readable as possible so that each part –from pre-processing to diffusing to post-processing– can easily be adapted. If you would like the community to benefit from your customized pipeline, we would love to see a contribution to our [community-examples](https://github.com/huggingface/diffusers/tree/main/examples/community). If you feel that an important pipeline should be part of the official pipelines but isn't, a contribution to the [official pipelines](./overview) would be even better.
123
- - **One-purpose-only**: Pipelines should be used for one task and one task only. Even if two tasks are very similar from a modeling point of view, *e.g.* image2image translation and in-painting, pipelines shall be used for one task only to keep them *easy-to-tweak* and *readable*.
124
-
125
- ## Examples
126
-
127
- ### Text-to-Image generation with Stable Diffusion
128
-
129
- ```python
130
- # make sure you're logged in with `huggingface-cli login`
131
- from diffusers import StableDiffusionPipeline, LMSDiscreteScheduler
132
-
133
- pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
134
- pipe = pipe.to("cuda")
135
-
136
- prompt = "a photo of an astronaut riding a horse on mars"
137
- image = pipe(prompt).images[0]
138
-
139
- image.save("astronaut_rides_horse.png")
140
- ```
141
-
142
- ### Image-to-Image text-guided generation with Stable Diffusion
143
-
144
- The `StableDiffusionImg2ImgPipeline` lets you pass a text prompt and an initial image to condition the generation of new images.
145
-
146
- ```python
147
- import torch
- import requests
148
- from PIL import Image
149
- from io import BytesIO
150
-
151
- from diffusers import StableDiffusionImg2ImgPipeline
152
-
153
- # load the pipeline
154
- device = "cuda"
155
- pipe = StableDiffusionImg2ImgPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16).to(
156
- device
157
- )
158
-
159
- # let's download an initial image
160
- url = "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
161
-
162
- response = requests.get(url)
163
- init_image = Image.open(BytesIO(response.content)).convert("RGB")
164
- init_image = init_image.resize((768, 512))
165
-
166
- prompt = "A fantasy landscape, trending on artstation"
167
-
168
- images = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images
169
-
170
- images[0].save("fantasy_landscape.png")
171
- ```
172
- You can also run this example on colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb)
173
-
174
- ### Tweak prompts reusing seeds and latents
175
-
176
- You can generate your own latents to reproduce results, or tweak your prompt on a specific result you liked. [This notebook](https://github.com/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb) shows how to do it step by step. You can also run it in Google Colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb)
177
-
178
-
179
- ### In-painting using Stable Diffusion
180
-
181
- The `StableDiffusionInpaintPipeline` lets you edit specific parts of an image by providing a mask and text prompt.
182
-
183
- ```python
184
- import PIL
185
- import requests
186
- import torch
187
- from io import BytesIO
188
-
189
- from diffusers import StableDiffusionInpaintPipeline
190
-
191
-
192
- def download_image(url):
193
- response = requests.get(url)
194
- return PIL.Image.open(BytesIO(response.content)).convert("RGB")
195
-
196
-
197
- img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
198
- mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
199
-
200
- init_image = download_image(img_url).resize((512, 512))
201
- mask_image = download_image(mask_url).resize((512, 512))
202
-
203
- pipe = StableDiffusionInpaintPipeline.from_pretrained(
204
- "runwayml/stable-diffusion-inpainting",
205
- torch_dtype=torch.float16,
206
- )
207
- pipe = pipe.to("cuda")
208
-
209
- prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
210
- image = pipe(prompt=prompt, image=init_image, mask_image=mask_image).images[0]
211
- ```
212
-
213
- You can also run this example on colab [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
diffusers/docs/source/en/api/pipelines/paint_by_example.mdx DELETED
@@ -1,74 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # PaintByExample
14
-
15
- ## Overview
16
-
17
- [Paint by Example: Exemplar-based Image Editing with Diffusion Models](https://arxiv.org/abs/2211.13227) by Binxin Yang, Shuyang Gu, Bo Zhang, Ting Zhang, Xuejin Chen, Xiaoyan Sun, Dong Chen, Fang Wen.
18
-
19
- The abstract of the paper is the following:
20
-
21
- *Language-guided image editing has achieved great success recently. In this paper, for the first time, we investigate exemplar-guided image editing for more precise control. We achieve this goal by leveraging self-supervised training to disentangle and re-organize the source image and the exemplar. However, the naive approach will cause obvious fusing artifacts. We carefully analyze it and propose an information bottleneck and strong augmentations to avoid the trivial solution of directly copying and pasting the exemplar image. Meanwhile, to ensure the controllability of the editing process, we design an arbitrary shape mask for the exemplar image and leverage the classifier-free guidance to increase the similarity to the exemplar image. The whole framework involves a single forward of the diffusion model without any iterative optimization. We demonstrate that our method achieves an impressive performance and enables controllable editing on in-the-wild images with high fidelity.*
22
-
23
- The original codebase can be found [here](https://github.com/Fantasy-Studio/Paint-by-Example).
24
-
25
- ## Available Pipelines:
26
-
27
- | Pipeline | Tasks | Colab
28
- |---|---|:---:|
29
- | [pipeline_paint_by_example.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/paint_by_example/pipeline_paint_by_example.py) | *Image-Guided Image Painting* | - |
30
-
31
- ## Tips
32
-
33
- - PaintByExample is supported by the official [Fantasy-Studio/Paint-by-Example](https://huggingface.co/Fantasy-Studio/Paint-by-Example) checkpoint. The checkpoint has been warm-started from the [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4) and with the objective to inpaint partly masked images conditioned on example / reference images
34
- - To quickly demo *PaintByExample*, please have a look at [this demo](https://huggingface.co/spaces/Fantasy-Studio/Paint-by-Example)
35
- - You can run the following code snippet as an example:
36
-
37
-
38
- ```python
39
- # !pip install diffusers transformers
40
-
41
- import PIL
42
- import requests
43
- import torch
44
- from io import BytesIO
45
- from diffusers import DiffusionPipeline
46
-
47
-
48
- def download_image(url):
49
- response = requests.get(url)
50
- return PIL.Image.open(BytesIO(response.content)).convert("RGB")
51
-
52
-
53
- img_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/image/example_1.png"
54
- mask_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/mask/example_1.png"
55
- example_url = "https://raw.githubusercontent.com/Fantasy-Studio/Paint-by-Example/main/examples/reference/example_1.jpg"
56
-
57
- init_image = download_image(img_url).resize((512, 512))
58
- mask_image = download_image(mask_url).resize((512, 512))
59
- example_image = download_image(example_url).resize((512, 512))
60
-
61
- pipe = DiffusionPipeline.from_pretrained(
62
- "Fantasy-Studio/Paint-by-Example",
63
- torch_dtype=torch.float16,
64
- )
65
- pipe = pipe.to("cuda")
66
-
67
- image = pipe(image=init_image, mask_image=mask_image, example_image=example_image).images[0]
68
- image
69
- ```
70
-
71
- ## PaintByExamplePipeline
72
- [[autodoc]] PaintByExamplePipeline
73
- - all
74
- - __call__
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
diffusers/docs/source/en/api/pipelines/pndm.mdx DELETED
@@ -1,35 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # PNDM
14
-
15
- ## Overview
16
-
17
- [Pseudo Numerical methods for Diffusion Models on manifolds](https://arxiv.org/abs/2202.09778) (PNDM) by Luping Liu, Yi Ren, Zhijie Lin and Zhou Zhao.
18
-
19
- The abstract of the paper is the following:
20
-
21
- Denoising Diffusion Probabilistic Models (DDPMs) can generate high-quality samples such as image and audio samples. However, DDPMs require hundreds to thousands of iterations to produce final samples. Several prior works have successfully accelerated DDPMs through adjusting the variance schedule (e.g., Improved Denoising Diffusion Probabilistic Models) or the denoising equation (e.g., Denoising Diffusion Implicit Models (DDIMs)). However, these acceleration methods cannot maintain the quality of samples and even introduce new noise at a high speedup rate, which limit their practicability. To accelerate the inference process while keeping the sample quality, we provide a fresh perspective that DDPMs should be treated as solving differential equations on manifolds. Under such a perspective, we propose pseudo numerical methods for diffusion models (PNDMs). Specifically, we figure out how to solve differential equations on manifolds and show that DDIMs are simple cases of pseudo numerical methods. We change several classical numerical methods to corresponding pseudo numerical methods and find that the pseudo linear multi-step method is the best in most situations. According to our experiments, by directly using pre-trained models on Cifar10, CelebA and LSUN, PNDMs can generate higher quality synthetic images with only 50 steps compared with 1000-step DDIMs (20x speedup), significantly outperform DDIMs with 250 steps (by around 0.4 in FID) and have good generalization on different variance schedules.
22
-
23
- The original codebase can be found [here](https://github.com/luping-liu/PNDM).
24
-
25
- ## Available Pipelines:
26
-
27
- | Pipeline | Tasks | Colab
28
- |---|---|:---:|
29
- | [pipeline_pndm.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/pndm/pipeline_pndm.py) | *Unconditional Image Generation* | - |
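-
- A minimal usage sketch: since PNDM is a sampling method, it can reuse the UNet of an existing unconditional checkpoint (here we assume the `google/ddpm-cifar10-32` weights) together with a [`PNDMScheduler`]:
-
- ```python
- import torch
- from diffusers import PNDMPipeline, PNDMScheduler, UNet2DModel
-
- # assumed checkpoint; substitute any compatible unconditional UNet2DModel
- unet = UNet2DModel.from_pretrained("google/ddpm-cifar10-32", subfolder="unet")
- scheduler = PNDMScheduler()
- pipe = PNDMPipeline(unet=unet, scheduler=scheduler).to("cuda")
-
- # the pseudo linear multi-step method gives good samples in ~50 steps
- generator = torch.manual_seed(0)
- image = pipe(num_inference_steps=50, generator=generator).images[0]
- image.save("pndm_generated_image.png")
- ```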
30
-
31
-
32
- ## PNDMPipeline
33
- [[autodoc]] PNDMPipeline
34
- - all
35
- - __call__
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
diffusers/docs/source/en/api/pipelines/repaint.mdx DELETED
@@ -1,77 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # RePaint
14
-
15
- ## Overview
16
-
17
- [RePaint: Inpainting using Denoising Diffusion Probabilistic Models](https://arxiv.org/abs/2201.09865) (RePaint) by Andreas Lugmayr, Martin Danelljan, Andres Romero, Fisher Yu, Radu Timofte, Luc Van Gool.
18
-
19
- The abstract of the paper is the following:
20
-
21
- Free-form inpainting is the task of adding new content to an image in the regions specified by an arbitrary binary mask. Most existing approaches train for a certain distribution of masks, which limits their generalization capabilities to unseen mask types. Furthermore, training with pixel-wise and perceptual losses often leads to simple textural extensions towards the missing areas instead of semantically meaningful generation. In this work, we propose RePaint: A Denoising Diffusion Probabilistic Model (DDPM) based inpainting approach that is applicable to even extreme masks. We employ a pretrained unconditional DDPM as the generative prior. To condition the generation process, we only alter the reverse diffusion iterations by sampling the unmasked regions using the given image information. Since this technique does not modify or condition the original DDPM network itself, the model produces high-quality and diverse output images for any inpainting form. We validate our method for both faces and general-purpose image inpainting using standard and extreme masks.
22
- RePaint outperforms state-of-the-art Autoregressive, and GAN approaches for at least five out of six mask distributions.
23
-
24
- The original codebase can be found [here](https://github.com/andreas128/RePaint).
25
-
26
- ## Available Pipelines:
27
-
28
- | Pipeline | Tasks | Colab
29
- |-------------------------------------------------------------------------------------------------------------------------------|--------------------|:---:|
30
- | [pipeline_repaint.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/repaint/pipeline_repaint.py) | *Image Inpainting* | - |
31
-
32
- ## Usage example
33
-
34
- ```python
35
- from io import BytesIO
36
-
37
- import torch
38
-
39
- import PIL
40
- import requests
41
- from diffusers import RePaintPipeline, RePaintScheduler
42
-
43
-
44
- def download_image(url):
45
- response = requests.get(url)
46
- return PIL.Image.open(BytesIO(response.content)).convert("RGB")
47
-
48
-
49
- img_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/celeba_hq_256.png"
50
- mask_url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/repaint/mask_256.png"
51
-
52
- # Load the original image and the mask as PIL images
53
- original_image = download_image(img_url).resize((256, 256))
54
- mask_image = download_image(mask_url).resize((256, 256))
55
-
56
- # Load the RePaint scheduler and pipeline based on a pretrained DDPM model
57
- scheduler = RePaintScheduler.from_pretrained("google/ddpm-ema-celebahq-256")
58
- pipe = RePaintPipeline.from_pretrained("google/ddpm-ema-celebahq-256", scheduler=scheduler)
59
- pipe = pipe.to("cuda")
60
-
61
- generator = torch.Generator(device="cuda").manual_seed(0)
62
- output = pipe(
63
- original_image=original_image,
64
- mask_image=mask_image,
65
- num_inference_steps=250,
66
- eta=0.0,
67
- jump_length=10,
68
- jump_n_sample=10,
69
- generator=generator,
70
- )
71
- inpainted_image = output.images[0]
72
- ```
73
-
74
- ## RePaintPipeline
75
- [[autodoc]] RePaintPipeline
76
- - all
77
- - __call__
 
diffusers/docs/source/en/api/pipelines/score_sde_ve.mdx DELETED
@@ -1,36 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Score SDE VE
14
-
15
- ## Overview
16
-
17
- [Score-Based Generative Modeling through Stochastic Differential Equations](https://arxiv.org/abs/2011.13456) (Score SDE) by Yang Song, Jascha Sohl-Dickstein, Diederik P. Kingma, Abhishek Kumar, Stefano Ermon and Ben Poole.
18
-
19
- The abstract of the paper is the following:
20
-
21
- Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (\aka, score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation, and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024 x 1024 images for the first time from a score-based generative model.
22
-
23
- The original codebase can be found [here](https://github.com/yang-song/score_sde_pytorch).
24
-
25
- This pipeline implements the Variance Exploding (VE) variant of the method.
26
-
27
- ## Available Pipelines:
28
-
29
- | Pipeline | Tasks | Colab
30
- |---|---|:---:|
31
- | [pipeline_score_sde_ve.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/score_sde_ve/pipeline_score_sde_ve.py) | *Unconditional Image Generation* | - |
32
-
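## Usage example

A minimal sketch of unconditional sampling, assuming the `google/ncsnpp-celebahq-256` checkpoint; sampling is slow, since the predictor-corrector procedure typically uses on the order of 2000 steps:

```python
from diffusers import ScoreSdeVePipeline

# Load a pretrained NCSN++ score model together with its VE-SDE scheduler.
pipe = ScoreSdeVePipeline.from_pretrained("google/ncsnpp-celebahq-256")
pipe = pipe.to("cuda")

# Each step runs a predictor update followed by corrector (annealed Langevin) updates.
image = pipe(num_inference_steps=2000).images[0]
image.save("sde_ve_generated_image.png")
```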
33
- ## ScoreSdeVePipeline
34
- [[autodoc]] ScoreSdeVePipeline
35
- - all
36
- - __call__
 
diffusers/docs/source/en/api/pipelines/semantic_stable_diffusion.mdx DELETED
@@ -1,79 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Semantic Guidance
14
-
15
- Semantic Guidance for Diffusion Models was proposed in [SEGA: Instructing Diffusion using Semantic Dimensions](https://arxiv.org/abs/2301.12247) and provides strong semantic control over the image generation.
16
- Small changes to the text prompt usually result in entirely different output images. However, with SEGA a variety of changes to the image are enabled that can be controlled easily and intuitively, and stay true to the original image composition.
17
-
18
- The abstract of the paper is the following:
19
-
20
- *Text-to-image diffusion models have recently received a lot of interest for their astonishing ability to produce high-fidelity images from text only. However, achieving one-shot generation that aligns with the user's intent is nearly impossible, yet small changes to the input prompt often result in very different images. This leaves the user with little semantic control. To put the user in control, we show how to interact with the diffusion process to flexibly steer it along semantic directions. This semantic guidance (SEGA) allows for subtle and extensive edits, changes in composition and style, as well as optimizing the overall artistic conception. We demonstrate SEGA's effectiveness on a variety of tasks and provide evidence for its versatility and flexibility.*
21
-
22
-
23
- *Overview*:
24
-
25
- | Pipeline | Tasks | Colab | Demo
26
- |---|---|:---:|:---:|
27
- | [pipeline_semantic_stable_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/semantic_stable_diffusion/pipeline_semantic_stable_diffusion.py) | *Text-to-Image Generation* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ml-research/semantic-image-editing/blob/main/examples/SemanticGuidance.ipynb) | [Coming Soon](https://huggingface.co/AIML-TUDA)
28
-
29
- ## Tips
30
-
31
- - The Semantic Guidance pipeline can be used with any [Stable Diffusion](./stable_diffusion/text2img) checkpoint.
32
-
33
- ### Run Semantic Guidance
34
-
35
- The interface of [`SemanticStableDiffusionPipeline`] provides several additional parameters to influence the image generation.
36
- Exemplary usage may look like this:
37
-
38
- ```python
39
- import torch
40
- from diffusers import SemanticStableDiffusionPipeline
41
-
42
- pipe = SemanticStableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16)
43
- pipe = pipe.to("cuda")
44
-
45
- out = pipe(
46
- prompt="a photo of the face of a woman",
47
- num_images_per_prompt=1,
48
- guidance_scale=7,
49
- editing_prompt=[
50
- "smiling, smile", # Concepts to apply
51
- "glasses, wearing glasses",
52
- "curls, wavy hair, curly hair",
53
- "beard, full beard, mustache",
54
- ],
55
- reverse_editing_direction=[False, False, False, False], # Direction of guidance i.e. increase all concepts
56
- edit_warmup_steps=[10, 10, 10, 10], # Warmup period for each concept
57
- edit_guidance_scale=[4, 5, 5, 5.4], # Guidance scale for each concept
58
- edit_threshold=[
59
- 0.99,
60
- 0.975,
61
- 0.925,
62
- 0.96,
63
- ], # Threshold for each concept. Threshold equals the percentile of the latent space that will be discarded. I.e. threshold=0.99 uses 1% of the latent dimensions
64
- edit_momentum_scale=0.3, # Momentum scale that will be added to the latent guidance
65
- edit_mom_beta=0.6, # Momentum beta
66
- edit_weights=[1, 1, 1, 1], # Weights of the individual concepts against each other
67
- )
68
- ```
69
-
70
- For more examples check the Colab notebook.
71
-
72
- ## SemanticStableDiffusionPipelineOutput
73
- [[autodoc]] pipelines.semantic_stable_diffusion.SemanticStableDiffusionPipelineOutput
74
- - all
75
-
76
- ## SemanticStableDiffusionPipeline
77
- [[autodoc]] SemanticStableDiffusionPipeline
78
- - all
79
- - __call__
 
diffusers/docs/source/en/api/pipelines/spectrogram_diffusion.mdx DELETED
@@ -1,54 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Multi-instrument Music Synthesis with Spectrogram Diffusion
14
-
15
- ## Overview
16
-
17
- [Spectrogram Diffusion](https://arxiv.org/abs/2206.05408) by Curtis Hawthorne, Ian Simon, Adam Roberts, Neil Zeghidour, Josh Gardner, Ethan Manilow, and Jesse Engel.
18
-
19
- An ideal music synthesizer should be both interactive and expressive, generating high-fidelity audio in realtime for arbitrary combinations of instruments and notes. Recent neural synthesizers have exhibited a tradeoff between domain-specific models that offer detailed control of only specific instruments, or raw waveform models that can train on any music but with minimal control and slow generation. In this work, we focus on a middle ground of neural synthesizers that can generate audio from MIDI sequences with arbitrary combinations of instruments in realtime. This enables training on a wide range of transcription datasets with a single model, which in turn offers note-level control of composition and instrumentation across a wide range of instruments. We use a simple two-stage process: MIDI to spectrograms with an encoder-decoder Transformer, then spectrograms to audio with a generative adversarial network (GAN) spectrogram inverter. We compare training the decoder as an autoregressive model and as a Denoising Diffusion Probabilistic Model (DDPM) and find that the DDPM approach is superior both qualitatively and as measured by audio reconstruction and Fréchet distance metrics. Given the interactivity and generality of this approach, we find this to be a promising first step towards interactive and expressive neural synthesis for arbitrary combinations of instruments and notes.
20
-
21
- The original codebase of this implementation can be found at [magenta/music-spectrogram-diffusion](https://github.com/magenta/music-spectrogram-diffusion).
22
-
23
- ## Model
24
-
25
- ![img](https://storage.googleapis.com/music-synthesis-with-spectrogram-diffusion/architecture.png)
26
-
27
- As depicted above, the model takes a MIDI file as input and tokenizes it into a sequence of 5-second intervals. Each tokenized interval, together with positional encodings, is passed through the Note Encoder, and its representation is concatenated with the representation of the previous window's generated spectrogram, obtained via the Context Encoder (for the initial 5-second window this is set to zero). The resulting context is used as conditioning to sample the denoised spectrogram for the current MIDI window; this spectrogram is appended to the final output and also serves as the context for the next MIDI window. The process repeats until all MIDI inputs have been processed. Finally, a MelGAN decoder converts the potentially long spectrogram to audio, which is the final result of this pipeline.
28
-
29
- ## Available Pipelines:
30
-
31
- | Pipeline | Tasks | Colab
32
- |---|---|:---:|
33
- | [pipeline_spectrogram_diffusion.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/spectrogram_diffusion/pipeline_spectrogram_diffusion.py) | *Unconditional Audio Generation* | - |
34
-
35
-
36
- ## Example usage
37
-
38
- ```python
39
- from diffusers import SpectrogramDiffusionPipeline, MidiProcessor
40
-
41
- pipe = SpectrogramDiffusionPipeline.from_pretrained("google/music-spectrogram-diffusion")
42
- pipe = pipe.to("cuda")
43
- processor = MidiProcessor()
44
-
45
- # Download MIDI from: wget http://www.piano-midi.de/midis/beethoven/beethoven_hammerklavier_2.mid
46
- output = pipe(processor("beethoven_hammerklavier_2.mid"))
47
-
48
- audio = output.audios[0]
49
- ```
50
-
51
- ## SpectrogramDiffusionPipeline
52
- [[autodoc]] SpectrogramDiffusionPipeline
53
- - all
54
- - __call__
 
diffusers/docs/source/en/api/pipelines/stable_diffusion/attend_and_excite.mdx DELETED
@@ -1,75 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Attend and Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models
14
-
15
- ## Overview
16
-
17
- Attend and Excite for Stable Diffusion was proposed in [Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models](https://attendandexcite.github.io/Attend-and-Excite/) and provides textual attention control over the image generation.
18
-
19
- In short, the paper addresses the following problem:
20
-
21
- Stable Diffusion often exhibits *catastrophic neglect*: one or more of the subjects mentioned in the prompt are not generated at all, or attributes (such as colors) are bound to the wrong subject. Attend-and-Excite intervenes at inference time (a form of *Generative Semantic Nursing*) by refining the cross-attention maps so that every subject token is attended to and its activations are strengthened, or excited, which encourages the model to generate all subjects described in the prompt.
22
-
23
- Resources
24
-
25
- * [Project Page](https://attendandexcite.github.io/Attend-and-Excite/)
26
- * [Paper](https://arxiv.org/abs/2301.13826)
27
- * [Original Code](https://github.com/AttendAndExcite/Attend-and-Excite)
28
- * [Demo](https://huggingface.co/spaces/AttendAndExcite/Attend-and-Excite)
29
-
30
-
31
- ## Available Pipelines:
32
-
33
- | Pipeline | Tasks | Colab | Demo
34
- |---|---|:---:|:---:|
35
- | [pipeline_stable_diffusion_attend_and_excite.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_attend_and_excite.py) | *Text-to-Image Generation* | - | [🤗 Space](https://huggingface.co/spaces/AttendAndExcite/Attend-and-Excite)
36
-
37
-
38
- ### Usage example
39
-
40
-
41
- ```python
42
- import torch
43
- from diffusers import StableDiffusionAttendAndExcitePipeline
44
-
45
- model_id = "CompVis/stable-diffusion-v1-4"
46
- pipe = StableDiffusionAttendAndExcitePipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
47
- pipe = pipe.to("cuda")
48
-
49
- prompt = "a cat and a frog"
50
-
51
- # use get_indices function to find out indices of the tokens you want to alter
52
- print(pipe.get_indices(prompt))
53
-
54
- token_indices = [2, 5]
55
- seed = 6141
56
- generator = torch.Generator("cuda").manual_seed(seed)
57
-
58
- images = pipe(
59
- prompt=prompt,
60
- token_indices=token_indices,
61
- guidance_scale=7.5,
62
- generator=generator,
63
- num_inference_steps=50,
64
- max_iter_to_alter=25,
65
- ).images
66
-
67
- image = images[0]
68
- image.save(f"{prompt}_{seed}.png")
69
- ```
70
-
71
-
72
- ## StableDiffusionAttendAndExcitePipeline
73
- [[autodoc]] StableDiffusionAttendAndExcitePipeline
74
- - all
75
- - __call__
 
diffusers/docs/source/en/api/pipelines/stable_diffusion/controlnet.mdx DELETED
@@ -1,280 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Text-to-Image Generation with ControlNet Conditioning
14
-
15
- ## Overview
16
-
17
- [Adding Conditional Control to Text-to-Image Diffusion Models](https://arxiv.org/abs/2302.05543) by Lvmin Zhang and Maneesh Agrawala.
18
-
19
- Using the pretrained models we can provide control images (for example, a depth map) to control Stable Diffusion text-to-image generation so that it follows the structure of the depth image and fills in the details.
20
-
21
- The abstract of the paper is the following:
22
-
23
- *We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal device. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications.*
24
-
25
- This model was contributed by the amazing community contributor [takuma104](https://huggingface.co/takuma104) ❤️ .
26
-
27
- Resources:
28
-
29
- * [Paper](https://arxiv.org/abs/2302.05543)
30
- * [Original Code](https://github.com/lllyasviel/ControlNet)
31
-
32
- ## Available Pipelines:
33
-
34
- | Pipeline | Tasks | Demo
35
- |---|---|:---:|
36
- | [StableDiffusionControlNetPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_controlnet.py) | *Text-to-Image Generation with ControlNet Conditioning* | [Colab Example](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/controlnet.ipynb)
37
-
38
- ## Usage example
39
-
40
- In the following we give a simple example of how to use a *ControlNet* checkpoint with Diffusers for inference.
41
- The inference pipeline is the same for all pipelines:
42
-
43
- 1. Take an image and run it through a pre-conditioning processor.
44
- 2. Run the pre-processed image through the [`StableDiffusionControlNetPipeline`].
45
-
46
- Let's have a look at a simple example using the [Canny Edge ControlNet](https://huggingface.co/lllyasviel/sd-controlnet-canny).
47
-
48
- ```python
49
- from diffusers import StableDiffusionControlNetPipeline
50
- from diffusers.utils import load_image
51
-
52
- # Let's load the popular vermeer image
53
- image = load_image(
54
- "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
55
- )
56
- ```
57
-
58
- ![img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png)
59
-
60
- Next, we process the image to get the canny image. This is step *1.* - running the pre-conditioning processor. The pre-conditioning processor is different for every ControlNet. Please see the model cards of the [official checkpoints](#controlnet-with-stable-diffusion-1.5) for more information about other models.
61
-
62
- First, we need to install opencv:
63
-
64
- ```
65
- pip install opencv-contrib-python
66
- ```
67
-
68
- Next, let's also install all required Hugging Face libraries:
69
-
70
- ```
71
- pip install diffusers transformers git+https://github.com/huggingface/accelerate.git
72
- ```
73
-
74
- Then we can retrieve the canny edges of the image.
75
-
76
- ```python
77
- import cv2
78
- from PIL import Image
79
- import numpy as np
80
-
81
- image = np.array(image)
82
-
83
- low_threshold = 100
84
- high_threshold = 200
85
-
86
- image = cv2.Canny(image, low_threshold, high_threshold)
87
- image = image[:, :, None]
88
- image = np.concatenate([image, image, image], axis=2)
89
- canny_image = Image.fromarray(image)
90
- ```
91
-
92
- Let's take a look at the processed image.
93
-
94
- ![img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/vermeer_canny_edged.png)
95
-
96
- Now, we load the official [Stable Diffusion 1.5 Model](https://huggingface.co/runwayml/stable-diffusion-v1-5) as well as the ControlNet for Canny edges.
97
-
98
- ```py
99
- from diffusers import StableDiffusionControlNetPipeline, ControlNetModel
100
- import torch
101
-
102
- controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
103
- pipe = StableDiffusionControlNetPipeline.from_pretrained(
104
- "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
105
- )
106
- ```
107
-
108
- To speed-up things and reduce memory, let's enable model offloading and use the fast [`UniPCMultistepScheduler`].
109
-
110
- ```py
111
- from diffusers import UniPCMultistepScheduler
112
-
113
- pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
114
-
115
- # this command loads the individual model components on GPU on-demand.
116
- pipe.enable_model_cpu_offload()
117
- ```
118
-
119
- Finally, we can run the pipeline:
120
-
121
- ```py
122
- generator = torch.manual_seed(0)
123
-
124
- out_image = pipe(
125
- "disco dancer with colorful lights", num_inference_steps=20, generator=generator, image=canny_image
126
- ).images[0]
127
- ```
128
-
129
- This should take only around 3-4 seconds on GPU (depending on hardware). The output image then looks as follows:
130
-
131
- ![img](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/vermeer_disco_dancing.png)
132
-
133
-
134
- **Note**: To see how to run all other ControlNet checkpoints, please have a look at [ControlNet with Stable Diffusion 1.5](#controlnet-with-stable-diffusion-1.5).
135
-
136
- <!-- TODO: add space -->
137
-
138
- ## Combining multiple conditionings
139
-
140
- Multiple ControlNet conditionings can be combined for a single image generation. Pass a list of ControlNets to the pipeline's constructor and a corresponding list of conditionings to `__call__`.
141
-
142
- When combining conditionings, it is helpful to mask conditionings such that they do not overlap. In the example, we mask the middle of the canny map where the pose conditioning is located.
143
-
144
- It can also be helpful to vary the `controlnet_conditioning_scales` to emphasize one conditioning over the other.
145
-
146
- ### Canny conditioning
147
-
148
- The original image:
149
-
150
- <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png"/>
151
-
152
- Prepare the conditioning:
153
-
154
- ```python
155
- from diffusers.utils import load_image
156
- from PIL import Image
157
- import cv2
158
- import numpy as np
160
-
161
- canny_image = load_image(
162
- "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png"
163
- )
164
- canny_image = np.array(canny_image)
165
-
166
- low_threshold = 100
167
- high_threshold = 200
168
-
169
- canny_image = cv2.Canny(canny_image, low_threshold, high_threshold)
170
-
171
- # zero out middle columns of image where pose will be overlayed
172
- zero_start = canny_image.shape[1] // 4
173
- zero_end = zero_start + canny_image.shape[1] // 2
174
- canny_image[:, zero_start:zero_end] = 0
175
-
176
- canny_image = canny_image[:, :, None]
177
- canny_image = np.concatenate([canny_image, canny_image, canny_image], axis=2)
178
- canny_image = Image.fromarray(canny_image)
179
- ```
180
-
181
- <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/controlnet/landscape_canny_masked.png"/>
182
-
183
- ### Openpose conditioning
184
-
185
- The original image:
186
-
187
- <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png" width=600/>
188
-
189
- Prepare the conditioning:
190
-
191
- ```python
192
- from controlnet_aux import OpenposeDetector
193
- from diffusers.utils import load_image
194
-
195
- openpose = OpenposeDetector.from_pretrained("lllyasviel/ControlNet")
196
-
197
- openpose_image = load_image(
198
- "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/person.png"
199
- )
200
- openpose_image = openpose(openpose_image)
201
- ```
202
-
203
- <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/controlnet/person_pose.png" width=600/>
204
-
205
- ### Running ControlNet with multiple conditionings
206
-
207
- ```python
208
- from diffusers import StableDiffusionControlNetPipeline, ControlNetModel, UniPCMultistepScheduler
209
- import torch
210
-
211
- controlnet = [
212
- ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-openpose", torch_dtype=torch.float16),
213
- ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16),
214
- ]
215
-
216
- pipe = StableDiffusionControlNetPipeline.from_pretrained(
217
- "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
218
- )
219
- pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
220
-
221
- pipe.enable_xformers_memory_efficient_attention()
222
- pipe.enable_model_cpu_offload()
223
-
224
- prompt = "a giant standing in a fantasy landscape, best quality"
225
- negative_prompt = "monochrome, lowres, bad anatomy, worst quality, low quality"
226
-
227
- generator = torch.Generator(device="cpu").manual_seed(1)
228
-
229
- images = [openpose_image, canny_image]
230
-
231
- image = pipe(
232
- prompt,
233
- images,
234
- num_inference_steps=20,
235
- generator=generator,
236
- negative_prompt=negative_prompt,
237
- controlnet_conditioning_scale=[1.0, 0.8],
238
- ).images[0]
239
-
240
- image.save("./multi_controlnet_output.png")
241
- ```
242
-
243
- <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/controlnet/multi_controlnet_output.png" width=600/>
244
-
245
- ## Available checkpoints
246
-
247
- ControlNet requires a *control image* in addition to the text-to-image *prompt*.
248
- Each pretrained model is trained using a different conditioning method that requires different images for conditioning the generated outputs. For example, Canny edge conditioning requires the control image to be the output of a Canny filter, while depth conditioning requires the control image to be a depth map. See the overview and image examples below to know more.
249
-
250
- All checkpoints can be found under the authors' namespace [lllyasviel](https://huggingface.co/lllyasviel).
251
-
252
- ### ControlNet with Stable Diffusion 1.5
253
-
254
- | Model Name | Control Image Overview| Control Image Example | Generated Image Example |
255
- |---|---|---|---|
256
- |[lllyasviel/sd-controlnet-canny](https://huggingface.co/lllyasviel/sd-controlnet-canny)<br/> *Trained with canny edge detection* | A monochrome image with white edges on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_bird_canny.png"><img width="64" style="margin:0;padding:0;" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_bird_canny.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_canny_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_canny_1.png"/></a>|
257
- |[lllyasviel/sd-controlnet-depth](https://huggingface.co/lllyasviel/sd-controlnet-depth)<br/> *Trained with Midas depth estimation* |A grayscale image with black representing deep areas and white representing shallow areas.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_vermeer_depth.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_vermeer_depth.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_depth_2.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_depth_2.png"/></a>|
258
- |[lllyasviel/sd-controlnet-hed](https://huggingface.co/lllyasviel/sd-controlnet-hed)<br/> *Trained with HED edge detection (soft edge)* |A monochrome image with white soft edges on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_bird_hed.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_bird_hed.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_hed_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_bird_hed_1.png"/></a> |
259
- |[lllyasviel/sd-controlnet-mlsd](https://huggingface.co/lllyasviel/sd-controlnet-mlsd)<br/> *Trained with M-LSD line detection* |A monochrome image composed only of white straight lines on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_room_mlsd.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_room_mlsd.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_mlsd_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_mlsd_0.png"/></a>|
260
- |[lllyasviel/sd-controlnet-normal](https://huggingface.co/lllyasviel/sd-controlnet-normal)<br/> *Trained with normal map* |A [normal mapped](https://en.wikipedia.org/wiki/Normal_mapping) image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_human_normal.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_normal.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_normal_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_normal_1.png"/></a>|
261
- |[lllyasviel/sd-controlnet-openpose](https://huggingface.co/lllyasviel/sd-controlnet_openpose)<br/> *Trained with OpenPose bone image* |A [OpenPose bone](https://github.com/CMU-Perceptual-Computing-Lab/openpose) image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_human_openpose.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_human_openpose.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_openpose_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_human_openpose_0.png"/></a>|
262
- |[lllyasviel/sd-controlnet-scribble](https://huggingface.co/lllyasviel/sd-controlnet_scribble)<br/> *Trained with human scribbles* |A hand-drawn monochrome image with white outlines on a black background.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_vermeer_scribble.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_vermeer_scribble.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_scribble_0.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_vermeer_scribble_0.png"/></a> |
263
- |[lllyasviel/sd-controlnet-seg](https://huggingface.co/lllyasviel/sd-controlnet_seg)<br/>*Trained with semantic segmentation* |An [ADE20K](https://groups.csail.mit.edu/vision/datasets/ADE20K/)'s segmentation protocol image.|<a href="https://huggingface.co/takuma104/controlnet_dev/blob/main/gen_compare/control_images/converted/control_room_seg.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/control_images/converted/control_room_seg.png"/></a>|<a href="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_seg_1.png"><img width="64" src="https://huggingface.co/takuma104/controlnet_dev/resolve/main/gen_compare/output_images/diffusers/output_room_seg_1.png"/></a> |
264
-
265
- ## StableDiffusionControlNetPipeline
266
- [[autodoc]] StableDiffusionControlNetPipeline
267
- - all
268
- - __call__
269
- - enable_attention_slicing
270
- - disable_attention_slicing
271
- - enable_vae_slicing
272
- - disable_vae_slicing
273
- - enable_xformers_memory_efficient_attention
274
- - disable_xformers_memory_efficient_attention
275
-
276
- ## FlaxStableDiffusionControlNetPipeline
277
- [[autodoc]] FlaxStableDiffusionControlNetPipeline
278
- - all
279
- - __call__
280
-
 
diffusers/docs/source/en/api/pipelines/stable_diffusion/depth2img.mdx DELETED
@@ -1,33 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Depth-to-Image Generation
14
-
15
- ## StableDiffusionDepth2ImgPipeline
16
-
17
- The depth-guided Stable Diffusion model was created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), and [LAION](https://laion.ai/), as part of Stable Diffusion 2.0. It uses [MiDaS](https://github.com/isl-org/MiDaS) to infer depth from an input image.
18
-
19
- [`StableDiffusionDepth2ImgPipeline`] lets you pass a text prompt and an initial image to condition the generation of new images as well as a `depth_map` to preserve the images’ structure.
20
-
21
- The original codebase can be found here:
22
- - *Stable Diffusion v2*: [Stability-AI/stablediffusion](https://github.com/Stability-AI/stablediffusion#depth-conditional-stable-diffusion)
23
-
24
- Available Checkpoints are:
25
- - *stable-diffusion-2-depth*: [stabilityai/stable-diffusion-2-depth](https://huggingface.co/stabilityai/stable-diffusion-2-depth)
26
-
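## Usage example

A short sketch of how the pipeline can be used; the input image URL is only an example, and the depth map is inferred automatically when no `depth_map` is passed:

```python
import torch
import requests
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

# Any RGB image can serve as the structural reference.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
init_image = Image.open(requests.get(url, stream=True).raw)

prompt = "two tigers"
negative_prompt = "bad, deformed, ugly, bad anatomy"
image = pipe(prompt=prompt, image=init_image, negative_prompt=negative_prompt, strength=0.7).images[0]
image.save("two_tigers.png")
```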
27
- [[autodoc]] StableDiffusionDepth2ImgPipeline
28
- - all
29
- - __call__
30
- - enable_attention_slicing
31
- - disable_attention_slicing
32
- - enable_xformers_memory_efficient_attention
33
- - disable_xformers_memory_efficient_attention
 
diffusers/docs/source/en/api/pipelines/stable_diffusion/image_variation.mdx DELETED
@@ -1,31 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Image Variation
14
-
15
- ## StableDiffusionImageVariationPipeline
16
-
17
- [`StableDiffusionImageVariationPipeline`] lets you generate variations from an input image using Stable Diffusion. It uses a fine-tuned version of the Stable Diffusion model, trained by [Justin Pinkney](https://www.justinpinkney.com/) (@Buntworthy) at [Lambda](https://lambdalabs.com/).
18
-
19
- The original codebase can be found here:
20
- [Stable Diffusion Image Variations](https://github.com/LambdaLabsML/lambda-diffusers#stable-diffusion-image-variations)
21
-
22
- Available Checkpoints are:
23
- - *sd-image-variations-diffusers*: [lambdalabs/sd-image-variations-diffusers](https://huggingface.co/lambdalabs/sd-image-variations-diffusers)
24
-
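## Usage example

A short sketch of how the pipeline can be used; the input image URL is only an example, and the pipeline's feature extractor takes care of preprocessing the PIL image:

```python
from io import BytesIO

import requests
from PIL import Image
from diffusers import StableDiffusionImageVariationPipeline

pipe = StableDiffusionImageVariationPipeline.from_pretrained("lambdalabs/sd-image-variations-diffusers")
pipe = pipe.to("cuda")

url = "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/input_image_vermeer.png"
init_image = Image.open(BytesIO(requests.get(url).content)).convert("RGB")

# The pipeline is conditioned on the image only; no text prompt is needed.
out = pipe(image=init_image, guidance_scale=3.0, num_images_per_prompt=2)
out.images[0].save("vermeer_variation.png")
```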
25
- [[autodoc]] StableDiffusionImageVariationPipeline
26
- - all
27
- - __call__
28
- - enable_attention_slicing
29
- - disable_attention_slicing
30
- - enable_xformers_memory_efficient_attention
31
- - disable_xformers_memory_efficient_attention
 
diffusers/docs/source/en/api/pipelines/stable_diffusion/img2img.mdx DELETED
@@ -1,36 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Image-to-Image Generation
14
-
15
- ## StableDiffusionImg2ImgPipeline
16
-
17
- The Stable Diffusion model was created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), [runway](https://github.com/runwayml), and [LAION](https://laion.ai/). The [`StableDiffusionImg2ImgPipeline`] lets you pass a text prompt and an initial image to condition the generation of new images using Stable Diffusion.
18
-
19
- The original codebase can be found here: [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion/blob/main/scripts/img2img.py)
20
-
21
- [`StableDiffusionImg2ImgPipeline`] is compatible with all Stable Diffusion checkpoints for [Text-to-Image](./text2img)
22
-
23
- The pipeline uses the diffusion-denoising mechanism first proposed in SDEdit ([SDEdit: Guided Image Synthesis and Editing with Stochastic Differential Equations](https://arxiv.org/abs/2108.01073)
24
- by Chenlin Meng, Yutong He, Yang Song, Jiaming Song, Jiajun Wu, Jun-Yan Zhu, Stefano Ermon).
25
-
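## Usage example

A short sketch of how the pipeline can be used; the sketch image URL is only an example, and `strength` controls how much of the initial image is preserved:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from diffusers.utils import load_image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

init_image = load_image(
    "https://raw.githubusercontent.com/CompVis/stable-diffusion/main/assets/stable-samples/img2img/sketch-mountains-input.jpg"
).resize((768, 512))

prompt = "A fantasy landscape, trending on artstation"
# Lower strength keeps more of the initial image; higher strength gives the model more freedom.
image = pipe(prompt=prompt, image=init_image, strength=0.75, guidance_scale=7.5).images[0]
image.save("fantasy_landscape.png")
```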
26
- [[autodoc]] StableDiffusionImg2ImgPipeline
27
- - all
28
- - __call__
29
- - enable_attention_slicing
30
- - disable_attention_slicing
31
- - enable_xformers_memory_efficient_attention
32
- - disable_xformers_memory_efficient_attention
33
-
34
- [[autodoc]] FlaxStableDiffusionImg2ImgPipeline
35
- - all
36
- - __call__
 
diffusers/docs/source/en/api/pipelines/stable_diffusion/inpaint.mdx DELETED
@@ -1,37 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Text-Guided Image Inpainting
14
-
15
- ## StableDiffusionInpaintPipeline
16
-
17
- The Stable Diffusion model was created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), [runway](https://github.com/runwayml), and [LAION](https://laion.ai/). The [`StableDiffusionInpaintPipeline`] lets you edit specific parts of an image by providing a mask and a text prompt using Stable Diffusion.
18
-
19
- The original codebase can be found here:
20
- *Stable Diffusion V1*: [runwayml/stable-diffusion](https://github.com/runwayml/stable-diffusion#inpainting-with-stable-diffusion)
21
- - *Stable Diffusion V2*: [Stability-AI/stablediffusion](https://github.com/Stability-AI/stablediffusion#image-inpainting-with-stable-diffusion)
22
-
23
- Available checkpoints are:
24
- - *stable-diffusion-inpainting (512x512 resolution)*: [runwayml/stable-diffusion-inpainting](https://huggingface.co/runwayml/stable-diffusion-inpainting)
25
- - *stable-diffusion-2-inpainting (512x512 resolution)*: [stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting)
26
-
27
- [[autodoc]] StableDiffusionInpaintPipeline
28
- - all
29
- - __call__
30
- - enable_attention_slicing
31
- - disable_attention_slicing
32
- - enable_xformers_memory_efficient_attention
33
- - disable_xformers_memory_efficient_attention
34
-
35
- [[autodoc]] FlaxStableDiffusionInpaintPipeline
36
- - all
37
- - __call__
 
diffusers/docs/source/en/api/pipelines/stable_diffusion/latent_upscale.mdx DELETED
@@ -1,33 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Stable Diffusion Latent Upscaler
14
-
15
- ## StableDiffusionLatentUpscalePipeline
16
-
17
- The Stable Diffusion Latent Upscaler model was created by [Katherine Crowson](https://github.com/crowsonkb/k-diffusion) in collaboration with [Stability AI](https://stability.ai/). It can be used on top of any [`StableDiffusionPipeline`] checkpoint to enhance its output image resolution by a factor of 2.
18
-
19
- A notebook that demonstrates the original implementation can be found here:
20
- - [Stable Diffusion Upscaler Demo](https://colab.research.google.com/drive/1o1qYJcFeywzCIdkfKJy7cTpgZTCM2EI4)
21
-
22
- Available Checkpoints are:
23
- - *stabilityai/latent-upscaler*: [stabilityai/sd-x2-latent-upscaler](https://huggingface.co/stabilityai/sd-x2-latent-upscaler)
24
-
25
-
26
- [[autodoc]] StableDiffusionLatentUpscalePipeline
27
- - all
28
- - __call__
29
- - enable_sequential_cpu_offload
30
- - enable_attention_slicing
31
- - disable_attention_slicing
32
- - enable_xformers_memory_efficient_attention
33
- - disable_xformers_memory_efficient_attention
 
diffusers/docs/source/en/api/pipelines/stable_diffusion/model_editing.mdx DELETED
@@ -1,61 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Editing Implicit Assumptions in Text-to-Image Diffusion Models
14
-
15
- ## Overview
16
-
17
- [Editing Implicit Assumptions in Text-to-Image Diffusion Models](https://arxiv.org/abs/2303.08084) by Hadas Orgad, Bahjat Kawar, and Yonatan Belinkov.
18
-
19
- The abstract of the paper is the following:
20
-
21
- *Text-to-image diffusion models often make implicit assumptions about the world when generating images. While some assumptions are useful (e.g., the sky is blue), they can also be outdated, incorrect, or reflective of social biases present in the training data. Thus, there is a need to control these assumptions without requiring explicit user input or costly re-training. In this work, we aim to edit a given implicit assumption in a pre-trained diffusion model. Our Text-to-Image Model Editing method, TIME for short, receives a pair of inputs: a "source" under-specified prompt for which the model makes an implicit assumption (e.g., "a pack of roses"), and a "destination" prompt that describes the same setting, but with a specified desired attribute (e.g., "a pack of blue roses"). TIME then updates the model's cross-attention layers, as these layers assign visual meaning to textual tokens. We edit the projection matrices in these layers such that the source prompt is projected close to the destination prompt. Our method is highly efficient, as it modifies a mere 2.2% of the model's parameters in under one second. To evaluate model editing approaches, we introduce TIMED (TIME Dataset), containing 147 source and destination prompt pairs from various domains. Our experiments (using Stable Diffusion) show that TIME is successful in model editing, generalizes well for related prompts unseen during editing, and imposes minimal effect on unrelated generations.*
22
-
23
- Resources:
24
-
25
- * [Project Page](https://time-diffusion.github.io/).
26
- * [Paper](https://arxiv.org/abs/2303.08084).
27
- * [Original Code](https://github.com/bahjat-kawar/time-diffusion).
28
- * [Demo](https://huggingface.co/spaces/bahjat-kawar/time-diffusion).
29
-
30
- ## Available Pipelines:
31
-
32
- | Pipeline | Tasks | Demo
33
- |---|---|:---:|
34
- | [StableDiffusionModelEditingPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_model_editing.py) | *Text-to-Image Model Editing* | [🤗 Space](https://huggingface.co/spaces/bahjat-kawar/time-diffusion) |
35
-
36
- This pipeline enables editing the diffusion model weights so that its implicit assumptions about a given concept are changed. The resulting change is expected to take effect in all prompt generations pertaining to the edited concept.
37
-
38
- ## Usage example
39
-
40
- ```python
41
- import torch
42
- from diffusers import StableDiffusionModelEditingPipeline
43
-
44
- model_ckpt = "CompVis/stable-diffusion-v1-4"
45
- pipe = StableDiffusionModelEditingPipeline.from_pretrained(model_ckpt)
46
-
47
- pipe = pipe.to("cuda")
48
-
49
- source_prompt = "A pack of roses"
50
- destination_prompt = "A pack of blue roses"
51
- pipe.edit_model(source_prompt, destination_prompt)
52
-
53
- prompt = "A field of roses"
54
- image = pipe(prompt).images[0]
55
- image.save("field_of_roses.png")
56
- ```
57
-
58
- ## StableDiffusionModelEditingPipeline
59
- [[autodoc]] StableDiffusionModelEditingPipeline
60
- - __call__
61
- - all
 
diffusers/docs/source/en/api/pipelines/stable_diffusion/overview.mdx DELETED
@@ -1,82 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Stable diffusion pipelines
14
-
15
- Stable Diffusion is a text-to-image _latent diffusion_ model created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/) and [LAION](https://laion.ai/). It's trained on 512x512 images from a subset of the [LAION-5B](https://laion.ai/blog/laion-5b/) dataset. This model uses a frozen CLIP ViT-L/14 text encoder to condition the model on text prompts. With its 860M UNet and 123M text encoder, the model is relatively lightweight and can run on consumer GPUs.
16
-
17
- Latent diffusion is the research on top of which Stable Diffusion was built. It was proposed in [High-Resolution Image Synthesis with Latent Diffusion Models](https://arxiv.org/abs/2112.10752) by Robin Rombach, Andreas Blattmann, Dominik Lorenz, Patrick Esser, Björn Ommer. You can learn more details about it in the [specific pipeline for latent diffusion](pipelines/latent_diffusion) that is part of 🤗 Diffusers.
18
-
19
- For more details about how Stable Diffusion works and how it differs from the base latent diffusion model, please refer to the official [launch announcement post](https://stability.ai/blog/stable-diffusion-announcement) and [this section of our own blog post](https://huggingface.co/blog/stable_diffusion#how-does-stable-diffusion-work).
20
-
21
- *Tips*:
22
- - To tweak your prompts on a specific result you liked, you can generate your own latents, as demonstrated in the following notebook: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/pcuenca/diffusers-examples/blob/main/notebooks/stable-diffusion-seeds.ipynb)
23
-
24
- *Overview*:
25
-
26
- | Pipeline | Tasks | Colab | Demo
27
- |---|---|:---:|:---:|
28
- | [StableDiffusionPipeline](./text2img) | *Text-to-Image Generation* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_diffusion.ipynb) | [🤗 Stable Diffusion](https://huggingface.co/spaces/stabilityai/stable-diffusion)
29
- | [StableDiffusionImg2ImgPipeline](./img2img) | *Image-to-Image Text-Guided Generation* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/image_2_image_using_diffusers.ipynb) | [🤗 Diffuse the Rest](https://huggingface.co/spaces/huggingface/diffuse-the-rest)
30
- | [StableDiffusionInpaintPipeline](./inpaint) | **Experimental** – *Text-Guided Image Inpainting* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/in_painting_with_stable_diffusion_using_diffusers.ipynb) | Coming soon
31
- | [StableDiffusionDepth2ImgPipeline](./depth2img) | **Experimental** – *Depth-to-Image Text-Guided Generation* | | Coming soon
33
- | [StableDiffusionImageVariationPipeline](./image_variation) | **Experimental** – *Image Variation Generation* | | [🤗 Stable Diffusion Image Variations](https://huggingface.co/spaces/lambdalabs/stable-diffusion-image-variations)
34
- | [StableDiffusionUpscalePipeline](./upscale) | **Experimental** – *Text-Guided Image Super-Resolution* | | Coming soon
35
- | [StableDiffusionLatentUpscalePipeline](./latent_upscale) | **Experimental** – *Text-Guided Image Super-Resolution* | | Coming soon
36
- | [StableDiffusionInstructPix2PixPipeline](./pix2pix) | **Experimental** – *Text-Based Image Editing* | | [InstructPix2Pix: Learning to Follow Image Editing Instructions](https://huggingface.co/spaces/timbrooks/instruct-pix2pix)
37
- | [StableDiffusionAttendAndExcitePipeline](./attend_and_excite) | **Experimental** – *Text-to-Image Generation* | | [Attend-and-Excite: Attention-Based Semantic Guidance for Text-to-Image Diffusion Models](https://huggingface.co/spaces/AttendAndExcite/Attend-and-Excite)
38
- | [StableDiffusionPix2PixZeroPipeline](./pix2pix_zero) | **Experimental** – *Text-Based Image Editing* | | [Zero-shot Image-to-Image Translation](https://arxiv.org/abs/2302.03027)
39
- | [StableDiffusionModelEditingPipeline](./model_editing) | **Experimental** – *Text-to-Image Model Editing* | | [Editing Implicit Assumptions in Text-to-Image Diffusion Models](https://arxiv.org/abs/2303.08084)
39
-
40
-
41
-
42
- ## Tips
43
-
44
- ### How to load and use different schedulers.
45
-
46
- The Stable Diffusion pipeline uses the [`PNDMScheduler`] by default, but `diffusers` provides many other schedulers that can be used with it, such as [`DDIMScheduler`], [`LMSDiscreteScheduler`], [`EulerDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`], etc.
47
- To use a different scheduler, you can either change it via the [`ConfigMixin.from_config`] method or pass the `scheduler` argument to the `from_pretrained` method of the pipeline. For example, to use the [`EulerDiscreteScheduler`], you can do the following:
48
-
49
- ```python
50
- >>> from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
51
-
52
- >>> pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
53
- >>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
54
-
55
- >>> # or
56
- >>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="scheduler")
57
- >>> pipeline = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", scheduler=euler_scheduler)
58
- ```
59
-
60
-
61
- ### How to cover multiple use cases with a single pipeline or shared components
62
-
63
- If you want to cover text-to-image, image-to-image, and inpainting without loading the model weights more than once, you can either:
64
- - Make use of the [Stable Diffusion Mega Pipeline](https://github.com/huggingface/diffusers/tree/main/examples/community#stable-diffusion-mega) or
65
- - Make use of the `components` functionality to instantiate all components in the most memory-efficient way:
66
-
67
- ```python
68
- >>> from diffusers import (
69
- ... StableDiffusionPipeline,
70
- ... StableDiffusionImg2ImgPipeline,
71
- ... StableDiffusionInpaintPipeline,
72
- ... )
73
-
74
- >>> text2img = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
75
- >>> img2img = StableDiffusionImg2ImgPipeline(**text2img.components)
76
- >>> inpaint = StableDiffusionInpaintPipeline(**text2img.components)
77
-
78
- >>> # now you can use text2img(...), img2img(...), inpaint(...) just like the call methods of each respective pipeline
79
- ```
80
-
81
- ## StableDiffusionPipelineOutput
82
- [[autodoc]] pipelines.stable_diffusion.StableDiffusionPipelineOutput
 
 
diffusers/docs/source/en/api/pipelines/stable_diffusion/panorama.mdx DELETED
@@ -1,58 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation
14
-
15
- ## Overview
16
-
17
- [MultiDiffusion: Fusing Diffusion Paths for Controlled Image Generation](https://arxiv.org/abs/2302.08113) by Omer Bar-Tal, Lior Yariv, Yaron Lipman, and Tali Dekel.
18
-
19
- The abstract of the paper is the following:
20
-
21
- *Recent advances in text-to-image generation with diffusion models present transformative capabilities in image quality. However, user controllability of the generated image, and fast adaptation to new tasks still remains an open challenge, currently mostly addressed by costly and long re-training and fine-tuning or ad-hoc adaptations to specific image generation tasks. In this work, we present MultiDiffusion, a unified framework that enables versatile and controllable image generation, using a pre-trained text-to-image diffusion model, without any further training or finetuning. At the center of our approach is a new generation process, based on an optimization task that binds together multiple diffusion generation processes with a shared set of parameters or constraints. We show that MultiDiffusion can be readily applied to generate high quality and diverse images that adhere to user-provided controls, such as desired aspect ratio (e.g., panorama), and spatial guiding signals, ranging from tight segmentation masks to bounding boxes.*
22
-
23
- Resources:
24
-
25
- * [Project Page](https://multidiffusion.github.io/).
26
- * [Paper](https://arxiv.org/abs/2302.08113).
27
- * [Original Code](https://github.com/omerbt/MultiDiffusion).
28
- * [Demo](https://huggingface.co/spaces/weizmannscience/MultiDiffusion).
29
-
30
- ## Available Pipelines:
31
-
32
- | Pipeline | Tasks | Demo
33
- |---|---|:---:|
34
- | [StableDiffusionPanoramaPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_panorama.py) | *Text-Guided Panorama View Generation* | [🤗 Space](https://huggingface.co/spaces/weizmannscience/MultiDiffusion) |
35
-
36
- <!-- TODO: add Colab -->
37
-
38
- ## Usage example
39
-
40
- ```python
41
- import torch
42
- from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler
43
-
44
- model_ckpt = "stabilityai/stable-diffusion-2-base"
45
- scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler")
46
- pipe = StableDiffusionPanoramaPipeline.from_pretrained(model_ckpt, scheduler=scheduler, torch_dtype=torch.float16)
47
-
48
- pipe = pipe.to("cuda")
49
-
50
- prompt = "a photo of the dolomites"
51
- image = pipe(prompt).images[0]
52
- image.save("dolomites.png")
53
- ```
54
-
55
- ## StableDiffusionPanoramaPipeline
56
- [[autodoc]] StableDiffusionPanoramaPipeline
57
- - __call__
58
- - all
 
 
diffusers/docs/source/en/api/pipelines/stable_diffusion/pix2pix.mdx DELETED
@@ -1,70 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # InstructPix2Pix: Learning to Follow Image Editing Instructions
14
-
15
- ## Overview
16
-
17
- [InstructPix2Pix: Learning to Follow Image Editing Instructions](https://arxiv.org/abs/2211.09800) by Tim Brooks, Aleksander Holynski and Alexei A. Efros.
18
-
19
- The abstract of the paper is the following:
20
-
21
- *We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models -- a language model (GPT-3) and a text-to-image model (Stable Diffusion) -- to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions.*
22
-
23
- Resources:
24
-
25
- * [Project Page](https://www.timothybrooks.com/instruct-pix2pix).
26
- * [Paper](https://arxiv.org/abs/2211.09800).
27
- * [Original Code](https://github.com/timothybrooks/instruct-pix2pix).
28
- * [Demo](https://huggingface.co/spaces/timbrooks/instruct-pix2pix).
29
-
30
-
31
- ## Available Pipelines:
32
-
33
- | Pipeline | Tasks | Demo
34
- |---|---|:---:|
35
- | [StableDiffusionInstructPix2PixPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_instruct_pix2pix.py) | *Text-Based Image Editing* | [🤗 Space](https://huggingface.co/spaces/timbrooks/instruct-pix2pix) |
36
-
37
- <!-- TODO: add Colab -->
38
-
39
- ## Usage example
40
-
41
- ```python
42
- import PIL.Image
- import PIL.ImageOps
43
- import requests
44
- import torch
45
- from diffusers import StableDiffusionInstructPix2PixPipeline
46
-
47
- model_id = "timbrooks/instruct-pix2pix"
48
- pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")
49
-
50
- url = "https://huggingface.co/datasets/diffusers/diffusers-images-docs/resolve/main/mountain.png"
51
-
52
-
53
- def download_image(url):
54
- image = PIL.Image.open(requests.get(url, stream=True).raw)
55
- image = PIL.ImageOps.exif_transpose(image)
56
- image = image.convert("RGB")
57
- return image
58
-
59
-
60
- image = download_image(url)
61
-
62
- prompt = "make the mountains snowy"
63
- images = pipe(prompt, image=image, num_inference_steps=20, image_guidance_scale=1.5, guidance_scale=7).images
64
- images[0].save("snowy_mountains.png")
65
- ```
66
-
67
- ## StableDiffusionInstructPix2PixPipeline
68
- [[autodoc]] StableDiffusionInstructPix2PixPipeline
69
- - __call__
70
- - all
 
 
diffusers/docs/source/en/api/pipelines/stable_diffusion/pix2pix_zero.mdx DELETED
@@ -1,291 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Zero-shot Image-to-Image Translation
14
-
15
- ## Overview
16
-
17
- [Zero-shot Image-to-Image Translation](https://arxiv.org/abs/2302.03027).
18
-
19
- The abstract of the paper is the following:
20
-
21
- *Large-scale text-to-image generative models have shown their remarkable ability to synthesize diverse and high-quality images. However, it is still challenging to directly apply these models for editing real images for two reasons. First, it is hard for users to come up with a perfect text prompt that accurately describes every visual detail in the input image. Second, while existing models can introduce desirable changes in certain regions, they often dramatically alter the input content and introduce unexpected changes in unwanted regions. In this work, we propose pix2pix-zero, an image-to-image translation method that can preserve the content of the original image without manual prompting. We first automatically discover editing directions that reflect desired edits in the text embedding space. To preserve the general content structure after editing, we further propose cross-attention guidance, which aims to retain the cross-attention maps of the input image throughout the diffusion process. In addition, our method does not need additional training for these edits and can directly use the existing pre-trained text-to-image diffusion model. We conduct extensive experiments and show that our method outperforms existing and concurrent works for both real and synthetic image editing.*
22
-
23
- Resources:
24
-
25
- * [Project Page](https://pix2pixzero.github.io/).
26
- * [Paper](https://arxiv.org/abs/2302.03027).
27
- * [Original Code](https://github.com/pix2pixzero/pix2pix-zero).
28
- * [Demo](https://huggingface.co/spaces/pix2pix-zero-library/pix2pix-zero-demo).
29
-
30
- ## Tips
31
-
32
- * The pipeline can be conditioned on real input images. Check out the code examples below to learn more.
33
- * The pipeline exposes two arguments namely `source_embeds` and `target_embeds`
34
- that let you control the direction of the semantic edits in the final image to be generated. Let's say,
35
- you wanted to translate from "cat" to "dog". In this case, the edit direction will be "cat -> dog". To reflect
36
- this in the pipeline, you simply have to set the embeddings related to the phrases including "cat" to
37
- `source_embeds` and "dog" to `target_embeds`. Refer to the code example below for more details.
38
- * When you're using this pipeline from a prompt, specify the _source_ concept in the prompt. Taking
39
- the above example, a valid input prompt would be: "a high resolution painting of a **cat** in the style of van gogh".
40
- * If you wanted to reverse the direction in the example above, i.e., "dog -> cat", then it's recommended to:
41
- * Swap the `source_embeds` and `target_embeds`.
42
- * Change the input prompt to include "dog" (see the short sketch right after this list).
43
- * To learn more about how the source and target embeddings are generated, refer to the [original
44
- paper](https://arxiv.org/abs/2302.03027). Below, we also provide some directions on how to generate the embeddings.
45
- * Note that the quality of the outputs generated with this pipeline is dependent on how good the `source_embeds` and `target_embeds` are. Please, refer to [this discussion](#generating-source-and-target-embeddings) for some suggestions on the topic.
46
-
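- As a quick sketch of the direction-reversal tip, the snippet below swaps the two embedding tensors from the first usage example further down this page (it assumes the `pipeline`, `src_embeds`, and `target_embeds` objects created there) and mentions the new source concept ("dog") in the prompt:
-
- ```python
- prompt = "a high resolution painting of a dog in the style of van gogh"
-
- images = pipeline(
-     prompt,
-     source_embeds=target_embeds,  # the "dog" embeddings now act as the source
-     target_embeds=src_embeds,  # the "cat" embeddings become the target
-     num_inference_steps=50,
-     cross_attention_guidance_amount=0.15,
- ).images
- images[0].save("edited_image_cat.png")
- ```
-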
47
- ## Available Pipelines:
48
-
49
- | Pipeline | Tasks | Demo
50
- |---|---|:---:|
51
- | [StableDiffusionPix2PixZeroPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_pix2pix_zero.py) | *Text-Based Image Editing* | [🤗 Space](https://huggingface.co/spaces/pix2pix-zero-library/pix2pix-zero-demo) |
52
-
53
- <!-- TODO: add Colab -->
54
-
55
- ## Usage example
56
-
57
- ### Based on an image generated with the input prompt
58
-
59
- ```python
60
- import requests
61
- import torch
62
-
63
- from diffusers import DDIMScheduler, StableDiffusionPix2PixZeroPipeline
64
-
65
-
66
- def download(embedding_url, local_filepath):
67
- r = requests.get(embedding_url)
68
- with open(local_filepath, "wb") as f:
69
- f.write(r.content)
70
-
71
-
72
- model_ckpt = "CompVis/stable-diffusion-v1-4"
73
- pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained(
74
- model_ckpt, conditions_input_image=False, torch_dtype=torch.float16
75
- )
76
- pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
77
- pipeline.to("cuda")
78
-
79
- prompt = "a high resolution painting of a cat in the style of van gogh"
80
- src_embs_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/embeddings_sd_1.4/cat.pt"
81
- target_embs_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/embeddings_sd_1.4/dog.pt"
82
-
83
- for url in [src_embs_url, target_embs_url]:
84
- download(url, url.split("/")[-1])
85
-
86
- src_embeds = torch.load(src_embs_url.split("/")[-1])
87
- target_embeds = torch.load(target_embs_url.split("/")[-1])
88
-
89
- images = pipeline(
90
- prompt,
91
- source_embeds=src_embeds,
92
- target_embeds=target_embeds,
93
- num_inference_steps=50,
94
- cross_attention_guidance_amount=0.15,
95
- ).images
96
- images[0].save("edited_image_dog.png")
97
- ```
98
-
99
- ### Based on an input image
100
-
101
- When the pipeline is conditioned on an input image, we first obtain inverted
102
- noise from it using a `DDIMInverseScheduler` with the help of a generated caption. Then
103
- the inverted noise is used to start the generation process.
104
-
105
- First, let's load our pipeline:
106
-
107
- ```py
108
- import torch
109
- from transformers import BlipForConditionalGeneration, BlipProcessor
110
- from diffusers import DDIMScheduler, DDIMInverseScheduler, StableDiffusionPix2PixZeroPipeline
111
-
112
- captioner_id = "Salesforce/blip-image-captioning-base"
113
- processor = BlipProcessor.from_pretrained(captioner_id)
114
- model = BlipForConditionalGeneration.from_pretrained(captioner_id, torch_dtype=torch.float16, low_cpu_mem_usage=True)
115
-
116
- sd_model_ckpt = "CompVis/stable-diffusion-v1-4"
117
- pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained(
118
- sd_model_ckpt,
119
- caption_generator=model,
120
- caption_processor=processor,
121
- torch_dtype=torch.float16,
122
- safety_checker=None,
123
- )
124
- pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
125
- pipeline.inverse_scheduler = DDIMInverseScheduler.from_config(pipeline.scheduler.config)
126
- pipeline.enable_model_cpu_offload()
127
- ```
128
-
129
- Then, we load an input image for conditioning and obtain a suitable caption for it:
130
-
131
- ```py
132
- import requests
133
- from PIL import Image
134
-
135
- img_url = "https://github.com/pix2pixzero/pix2pix-zero/raw/main/assets/test_images/cats/cat_6.png"
136
- raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB").resize((512, 512))
137
- caption = pipeline.generate_caption(raw_image)
138
- ```
139
-
140
- Then we employ the generated caption and the input image to get the inverted noise:
141
-
142
- ```py
143
- generator = torch.manual_seed(0)
144
- inv_latents = pipeline.invert(caption, image=raw_image, generator=generator).latents
145
- ```
146
-
147
- Now, generate the image with edit directions:
148
-
149
- ```py
150
- # See the "Generating source and target embeddings" section below to
151
- # automate the generation of these captions with a pre-trained model like Flan-T5.
152
- source_prompts = ["a cat sitting on the street", "a cat playing in the field", "a face of a cat"]
153
- target_prompts = ["a dog sitting on the street", "a dog playing in the field", "a face of a dog"]
154
-
155
- source_embeds = pipeline.get_embeds(source_prompts, batch_size=2)
156
- target_embeds = pipeline.get_embeds(target_prompts, batch_size=2)
157
-
158
-
159
- image = pipeline(
160
- caption,
161
- source_embeds=source_embeds,
162
- target_embeds=target_embeds,
163
- num_inference_steps=50,
164
- cross_attention_guidance_amount=0.15,
165
- generator=generator,
166
- latents=inv_latents,
167
- negative_prompt=caption,
168
- ).images[0]
169
- image.save("edited_image.png")
170
- ```
171
-
172
- ## Generating source and target embeddings
173
-
174
- The authors originally used the [GPT-3 API](https://openai.com/api/) to generate the source and target captions for discovering
175
- edit directions. However, we can also leverage open source and public models for the same purpose.
176
- Below, we provide an end-to-end example with the [Flan-T5](https://huggingface.co/docs/transformers/model_doc/flan-t5) model
177
- for generating captions and [CLIP](https://huggingface.co/docs/transformers/model_doc/clip) for
178
- computing embeddings on the generated captions.
179
-
180
- **1. Load the generation model**:
181
-
182
- ```py
183
- import torch
184
- from transformers import AutoTokenizer, T5ForConditionalGeneration
185
-
186
- tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")
187
- model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-xl", device_map="auto", torch_dtype=torch.float16)
188
- ```
189
-
190
- **2. Construct a starting prompt**:
191
-
192
- ```py
193
- source_concept = "cat"
194
- target_concept = "dog"
195
-
196
- source_text = f"Provide a caption for images containing a {source_concept}. "
197
- "The captions should be in English and should be no longer than 150 characters."
198
-
199
- target_text = f"Provide a caption for images containing a {target_concept}. "
200
- "The captions should be in English and should be no longer than 150 characters."
201
- ```
202
-
203
- Here, we're interested in the "cat -> dog" direction.
204
-
205
- **3. Generate captions**:
206
-
207
- We can use a small utility function for this purpose:
208
-
209
- ```py
210
- def generate_captions(input_prompt):
211
- input_ids = tokenizer(input_prompt, return_tensors="pt").input_ids.to("cuda")
212
-
213
- outputs = model.generate(
214
- input_ids, temperature=0.8, num_return_sequences=16, do_sample=True, max_new_tokens=128, top_k=10
215
- )
216
- return tokenizer.batch_decode(outputs, skip_special_tokens=True)
217
- ```
218
-
219
- And then we just call it to generate our captions:
220
-
221
- ```py
222
- source_captions = generate_captions(source_text)
223
- target_captions = generate_captions(target_concept)
224
- ```
225
-
226
- We encourage you to play around with the different parameters supported by the
227
- `generate()` method ([documentation](https://huggingface.co/docs/transformers/main/en/main_classes/text_generation#transformers.GenerationMixin.generate)) to get the generation quality you are looking for.
228
-
229
- **4. Load the embedding model**:
230
-
231
- Here, we need to use the same text encoder model used by the subsequent Stable Diffusion model.
232
-
233
- ```py
234
- from diffusers import StableDiffusionPix2PixZeroPipeline
235
-
236
- pipeline = StableDiffusionPix2PixZeroPipeline.from_pretrained(
237
- "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
238
- )
239
- pipeline = pipeline.to("cuda")
240
- tokenizer = pipeline.tokenizer
241
- text_encoder = pipeline.text_encoder
242
- ```
243
-
244
- **5. Compute embeddings**:
245
-
246
- ```py
247
- import torch
248
-
249
- def embed_captions(sentences, tokenizer, text_encoder, device="cuda"):
250
- with torch.no_grad():
251
- embeddings = []
252
- for sent in sentences:
253
- text_inputs = tokenizer(
254
- sent,
255
- padding="max_length",
256
- max_length=tokenizer.model_max_length,
257
- truncation=True,
258
- return_tensors="pt",
259
- )
260
- text_input_ids = text_inputs.input_ids
261
- prompt_embeds = text_encoder(text_input_ids.to(device), attention_mask=None)[0]
262
- embeddings.append(prompt_embeds)
263
- return torch.concatenate(embeddings, dim=0).mean(dim=0).unsqueeze(0)
264
-
265
- source_embeddings = embed_captions(source_captions, tokenizer, text_encoder)
266
- target_embeddings = embed_captions(target_captions, tokenizer, text_encoder)
267
- ```
268
-
269
- And you're done! [Here](https://colab.research.google.com/drive/1tz2C1EdfZYAPlzXXbTnf-5PRBiR8_R1F?usp=sharing) is a Colab Notebook that you can use to interact with the entire process.
270
-
271
- Now, you can use these embeddings directly while calling the pipeline:
272
-
273
- ```py
274
- from diffusers import DDIMScheduler
275
-
276
- pipeline.scheduler = DDIMScheduler.from_config(pipeline.scheduler.config)
277
-
278
- images = pipeline(
279
- prompt,
280
- source_embeds=source_embeddings,
281
- target_embeds=target_embeddings,
282
- num_inference_steps=50,
283
- cross_attention_guidance_amount=0.15,
284
- ).images
285
- images[0].save("edited_image_dog.png")
286
- ```
287
-
288
- ## StableDiffusionPix2PixZeroPipeline
289
- [[autodoc]] StableDiffusionPix2PixZeroPipeline
290
- - __call__
291
- - all
 
 
diffusers/docs/source/en/api/pipelines/stable_diffusion/self_attention_guidance.mdx DELETED
@@ -1,64 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Self-Attention Guidance (SAG)
14
-
15
- ## Overview
16
-
17
- [Self-Attention Guidance](https://arxiv.org/abs/2210.00939) by Susung Hong et al.
18
-
19
- The abstract of the paper is the following:
20
-
21
- *Denoising diffusion models (DDMs) have been drawing much attention for their appreciable sample quality and diversity. Despite their remarkable performance, DDMs remain black boxes on which further study is necessary to take a profound step. Motivated by this, we delve into the design of conventional U-shaped diffusion models. More specifically, we investigate the self-attention modules within these models through carefully designed experiments and explore their characteristics. In addition, inspired by the studies that substantiate the effectiveness of the guidance schemes, we present plug-and-play diffusion guidance, namely Self-Attention Guidance (SAG), that can drastically boost the performance of existing diffusion models. Our method, SAG, extracts the intermediate attention map from a diffusion model at every iteration and selects tokens above a certain attention score for masking and blurring to obtain a partially blurred input. Subsequently, we measure the dissimilarity between the predicted noises obtained from feeding the blurred and original input to the diffusion model and leverage it as guidance. With this guidance, we observe apparent improvements in a wide range of diffusion models, e.g., ADM, IDDPM, and Stable Diffusion, and show that the results further improve by combining our method with the conventional guidance scheme. We provide extensive ablation studies to verify our choices.*
22
-
23
- Resources:
24
-
25
- * [Project Page](https://ku-cvlab.github.io/Self-Attention-Guidance).
26
- * [Paper](https://arxiv.org/abs/2210.00939).
27
- * [Original Code](https://github.com/KU-CVLAB/Self-Attention-Guidance).
28
- * [Demo](https://colab.research.google.com/github/SusungHong/Self-Attention-Guidance/blob/main/SAG_Stable.ipynb).
29
-
30
-
31
- ## Available Pipelines:
32
-
33
- | Pipeline | Tasks | Demo
34
- |---|---|:---:|
35
- | [StableDiffusionSAGPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/pipeline_stable_diffusion_sag.py) | *Text-to-Image Generation* | [Colab](https://colab.research.google.com/github/SusungHong/Self-Attention-Guidance/blob/main/SAG_Stable.ipynb) |
36
-
37
- ## Usage example
38
-
39
- ```python
40
- import torch
41
- from diffusers import StableDiffusionSAGPipeline
42
- from accelerate.utils import set_seed
43
-
44
- pipe = StableDiffusionSAGPipeline.from_pretrained("CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16)
45
- pipe = pipe.to("cuda")
46
-
47
- seed = 8978
48
- prompt = "."
49
- guidance_scale = 7.5
50
- num_images_per_prompt = 1
51
-
52
- sag_scale = 1.0
53
-
54
- set_seed(seed)
55
- images = pipe(
56
- prompt, num_images_per_prompt=num_images_per_prompt, guidance_scale=guidance_scale, sag_scale=sag_scale
57
- ).images
58
- images[0].save("example.png")
59
- ```
60
-
61
- ## StableDiffusionSAGPipeline
62
- [[autodoc]] StableDiffusionSAGPipeline
63
- - __call__
64
- - all
 
 
diffusers/docs/source/en/api/pipelines/stable_diffusion/text2img.mdx DELETED
@@ -1,45 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Text-to-Image Generation
14
-
15
- ## StableDiffusionPipeline
16
-
17
- The Stable Diffusion model was created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), [runway](https://github.com/runwayml), and [LAION](https://laion.ai/). The [`StableDiffusionPipeline`] is capable of generating photo-realistic images given any text input using Stable Diffusion.
18
-
19
- The original codebase can be found here:
20
- - *Stable Diffusion V1*: [CompVis/stable-diffusion](https://github.com/CompVis/stable-diffusion)
21
- - *Stable Diffusion v2*: [Stability-AI/stablediffusion](https://github.com/Stability-AI/stablediffusion)
22
-
23
- Available Checkpoints are:
24
- - *stable-diffusion-v1-4 (512x512 resolution)*: [CompVis/stable-diffusion-v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4)
25
- - *stable-diffusion-v1-5 (512x512 resolution)*: [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
26
- - *stable-diffusion-2-base (512x512 resolution)*: [stabilityai/stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base)
27
- - *stable-diffusion-2 (768x768 resolution)*: [stabilityai/stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2)
28
- - *stable-diffusion-2-1-base (512x512 resolution)*: [stabilityai/stable-diffusion-2-1-base](https://huggingface.co/stabilityai/stable-diffusion-2-1-base)
29
- - *stable-diffusion-2-1 (768x768 resolution)*: [stabilityai/stable-diffusion-2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1)
30
-
31
- [[autodoc]] StableDiffusionPipeline
32
- - all
33
- - __call__
34
- - enable_attention_slicing
35
- - disable_attention_slicing
36
- - enable_vae_slicing
37
- - disable_vae_slicing
38
- - enable_xformers_memory_efficient_attention
39
- - disable_xformers_memory_efficient_attention
40
- - enable_vae_tiling
41
- - disable_vae_tiling
42
-
43
- [[autodoc]] FlaxStableDiffusionPipeline
44
- - all
45
- - __call__
 
 
diffusers/docs/source/en/api/pipelines/stable_diffusion/upscale.mdx DELETED
@@ -1,32 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Super-Resolution
14
-
15
- ## StableDiffusionUpscalePipeline
16
-
17
- The upscaler diffusion model was created by the researchers and engineers from [CompVis](https://github.com/CompVis), [Stability AI](https://stability.ai/), and [LAION](https://laion.ai/), as part of Stable Diffusion 2.0. [`StableDiffusionUpscalePipeline`] can be used to enhance the resolution of input images by a factor of 4.
18
-
19
- The original codebase can be found here:
20
- - *Stable Diffusion v2*: [Stability-AI/stablediffusion](https://github.com/Stability-AI/stablediffusion#image-upscaling-with-stable-diffusion)
21
-
22
- Available Checkpoints are:
23
- - *stabilityai/stable-diffusion-x4-upscaler (x4 resolution)*: [stable-diffusion-x4-upscaler](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler)
24
-
25
-
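- A minimal usage sketch, mirroring the Super-Resolution example from the Stable Diffusion 2 page (the input image URL is only an example):
-
- ```python
- import requests
- import torch
- from io import BytesIO
- from PIL import Image
-
- from diffusers import StableDiffusionUpscalePipeline
-
- # load the upscaler in half precision
- pipeline = StableDiffusionUpscalePipeline.from_pretrained(
-     "stabilityai/stable-diffusion-x4-upscaler", torch_dtype=torch.float16
- )
- pipeline = pipeline.to("cuda")
-
- # download a low-resolution input image
- url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png"
- low_res_img = Image.open(BytesIO(requests.get(url).content)).convert("RGB").resize((128, 128))
-
- upscaled_image = pipeline(prompt="a white cat", image=low_res_img).images[0]
- upscaled_image.save("upsampled_cat.png")
- ```
-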
26
- [[autodoc]] StableDiffusionUpscalePipeline
27
- - all
28
- - __call__
29
- - enable_attention_slicing
30
- - disable_attention_slicing
31
- - enable_xformers_memory_efficient_attention
32
- - disable_xformers_memory_efficient_attention
 
 
diffusers/docs/source/en/api/pipelines/stable_diffusion_2.mdx DELETED
@@ -1,176 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Stable Diffusion 2
14
-
15
- Stable Diffusion 2 is a text-to-image _latent diffusion_ model built upon the work of [Stable Diffusion 1](https://stability.ai/blog/stable-diffusion-public-release).
16
- The project to train Stable Diffusion 2 was led by Robin Rombach and Katherine Crowson from [Stability AI](https://stability.ai/) and [LAION](https://laion.ai/).
17
-
18
- *The Stable Diffusion 2.0 release includes robust text-to-image models trained using a brand new text encoder (OpenCLIP), developed by LAION with support from Stability AI, which greatly improves the quality of the generated images compared to earlier V1 releases. The text-to-image models in this release can generate images with default resolutions of both 512x512 pixels and 768x768 pixels.
19
- These models are trained on an aesthetic subset of the [LAION-5B dataset](https://laion.ai/blog/laion-5b/) created by the DeepFloyd team at Stability AI, which is then further filtered to remove adult content using [LAION’s NSFW filter](https://openreview.net/forum?id=M3Y74vmsMcY).*
20
-
21
- For more details about how Stable Diffusion 2 works and how it differs from Stable Diffusion 1, please refer to the official [launch announcement post](https://stability.ai/blog/stable-diffusion-v2-release).
22
-
23
- ## Tips
24
-
25
- ### Available checkpoints:
26
-
27
- Note that the architecture is more or less identical to [Stable Diffusion 1](./stable_diffusion/overview) so please refer to [this page](./stable_diffusion/overview) for API documentation.
28
-
29
- - *Text-to-Image (512x512 resolution)*: [stabilityai/stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) with [`StableDiffusionPipeline`]
30
- - *Text-to-Image (768x768 resolution)*: [stabilityai/stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) with [`StableDiffusionPipeline`]
31
- - *Image Inpainting (512x512 resolution)*: [stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting) with [`StableDiffusionInpaintPipeline`]
32
- - *Super-Resolution (x4 resolution)*: [stable-diffusion-x4-upscaler](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler) with [`StableDiffusionUpscalePipeline`]
33
- - *Depth-to-Image (512x512 resolution)*: [stabilityai/stable-diffusion-2-depth](https://huggingface.co/stabilityai/stable-diffusion-2-depth) with [`StableDiffusionDepth2ImgPipeline`]
34
-
35
- We recommend using the [`DPMSolverMultistepScheduler`], as it produces good results in as few as about 25 inference steps.
36
-
37
-
38
- ### Text-to-Image
39
-
40
- - *Text-to-Image (512x512 resolution)*: [stabilityai/stable-diffusion-2-base](https://huggingface.co/stabilityai/stable-diffusion-2-base) with [`StableDiffusionPipeline`]
41
-
42
- ```python
43
- from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
44
- import torch
45
-
46
- repo_id = "stabilityai/stable-diffusion-2-base"
47
- pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16")
48
-
49
- pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
50
- pipe = pipe.to("cuda")
51
-
52
- prompt = "High quality photo of an astronaut riding a horse in space"
53
- image = pipe(prompt, num_inference_steps=25).images[0]
54
- image.save("astronaut.png")
55
- ```
56
-
57
- - *Text-to-Image (768x768 resolution)*: [stabilityai/stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2) with [`StableDiffusionPipeline`]
58
-
59
- ```python
60
- from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
61
- import torch
62
-
63
- repo_id = "stabilityai/stable-diffusion-2"
64
- pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16")
65
-
66
- pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
67
- pipe = pipe.to("cuda")
68
-
69
- prompt = "High quality photo of an astronaut riding a horse in space"
70
- image = pipe(prompt, guidance_scale=9, num_inference_steps=25).images[0]
71
- image.save("astronaut.png")
72
- ```
73
-
74
- ### Image Inpainting
75
-
76
- - *Image Inpainting (512x512 resolution)*: [stabilityai/stable-diffusion-2-inpainting](https://huggingface.co/stabilityai/stable-diffusion-2-inpainting) with [`StableDiffusionInpaintPipeline`]
77
-
78
- ```python
79
- import PIL
80
- import requests
81
- import torch
82
- from io import BytesIO
83
-
84
- from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
85
-
86
-
87
- def download_image(url):
88
- response = requests.get(url)
89
- return PIL.Image.open(BytesIO(response.content)).convert("RGB")
90
-
91
-
92
- img_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo.png"
93
- mask_url = "https://raw.githubusercontent.com/CompVis/latent-diffusion/main/data/inpainting_examples/overture-creations-5sI6fQgYIuo_mask.png"
94
-
95
- init_image = download_image(img_url).resize((512, 512))
96
- mask_image = download_image(mask_url).resize((512, 512))
97
-
98
- repo_id = "stabilityai/stable-diffusion-2-inpainting"
99
- pipe = DiffusionPipeline.from_pretrained(repo_id, torch_dtype=torch.float16, revision="fp16")
100
-
101
- pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
102
- pipe = pipe.to("cuda")
103
-
104
- prompt = "Face of a yellow cat, high resolution, sitting on a park bench"
105
- image = pipe(prompt=prompt, image=init_image, mask_image=mask_image, num_inference_steps=25).images[0]
106
-
107
- image.save("yellow_cat.png")
108
- ```
109
-
110
- ### Super-Resolution
111
-
112
- - *Image Upscaling (x4 resolution)*: [stable-diffusion-x4-upscaler](https://huggingface.co/stabilityai/stable-diffusion-x4-upscaler) with [`StableDiffusionUpscalePipeline`]
113
-
114
-
115
- ```python
116
- import requests
117
- from PIL import Image
118
- from io import BytesIO
119
- from diffusers import StableDiffusionUpscalePipeline
120
- import torch
121
-
122
- # load model and scheduler
123
- model_id = "stabilityai/stable-diffusion-x4-upscaler"
124
- pipeline = StableDiffusionUpscalePipeline.from_pretrained(model_id, torch_dtype=torch.float16)
125
- pipeline = pipeline.to("cuda")
126
-
127
- # let's download an image
128
- url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/sd2-upscale/low_res_cat.png"
129
- response = requests.get(url)
130
- low_res_img = Image.open(BytesIO(response.content)).convert("RGB")
131
- low_res_img = low_res_img.resize((128, 128))
132
- prompt = "a white cat"
133
- upscaled_image = pipeline(prompt=prompt, image=low_res_img).images[0]
134
- upscaled_image.save("upsampled_cat.png")
135
- ```
136
-
137
- ### Depth-to-Image
138
-
139
- - *Depth-Guided Text-to-Image*: [stabilityai/stable-diffusion-2-depth](https://huggingface.co/stabilityai/stable-diffusion-2-depth) with [`StableDiffusionDepth2ImgPipeline`]
140
-
141
-
142
- ```python
143
- import torch
144
- import requests
145
- from PIL import Image
146
-
147
- from diffusers import StableDiffusionDepth2ImgPipeline
148
-
149
- pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
150
- "stabilityai/stable-diffusion-2-depth",
151
- torch_dtype=torch.float16,
152
- ).to("cuda")
153
-
154
-
155
- url = "http://images.cocodataset.org/val2017/000000039769.jpg"
156
- init_image = Image.open(requests.get(url, stream=True).raw)
157
- prompt = "two tigers"
158
- n_prompt = "bad, deformed, ugly, bad anatomy"
159
- image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0]
160
- ```
161
-
162
- ### How to load and use different schedulers.
163
-
164
- The Stable Diffusion 2 pipeline uses the [`DDIMScheduler`] by default, but `diffusers` provides many other schedulers that can be used with it, such as [`PNDMScheduler`], [`LMSDiscreteScheduler`], [`EulerDiscreteScheduler`], and [`EulerAncestralDiscreteScheduler`].
165
- To use a different scheduler, you can either change it via the [`ConfigMixin.from_config`] method or pass the `scheduler` argument to the `from_pretrained` method of the pipeline. For example, to use the [`EulerDiscreteScheduler`], you can do the following:
166
-
167
- ```python
168
- >>> from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
169
-
170
- >>> pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2")
171
- >>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
172
-
173
- >>> # or
174
- >>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("stabilityai/stable-diffusion-2", subfolder="scheduler")
175
- >>> pipeline = StableDiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-2", scheduler=euler_scheduler)
176
- ```
 
 
diffusers/docs/source/en/api/pipelines/stable_diffusion_safe.mdx DELETED
@@ -1,90 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Safe Stable Diffusion
14
-
15
- Safe Stable Diffusion was proposed in [Safe Latent Diffusion: Mitigating Inappropriate Degeneration in Diffusion Models](https://arxiv.org/abs/2211.05105) and mitigates the well known issue that models like Stable Diffusion that are trained on unfiltered, web-crawled datasets tend to suffer from inappropriate degeneration. For instance Stable Diffusion may unexpectedly generate nudity, violence, images depicting self-harm, or otherwise offensive content.
16
- Safe Stable Diffusion is an extension to the Stable Diffusion that drastically reduces content like this.
17
-
18
- The abstract of the paper is the following:
19
-
20
- *Text-conditioned image generation models have recently achieved astonishing results in image quality and text alignment and are consequently employed in a fast-growing number of applications. Since they are highly data-driven, relying on billion-sized datasets randomly scraped from the internet, they also suffer, as we demonstrate, from degenerated and biased human behavior. In turn, they may even reinforce such biases. To help combat these undesired side effects, we present safe latent diffusion (SLD). Specifically, to measure the inappropriate degeneration due to unfiltered and imbalanced training sets, we establish a novel image generation test bed-inappropriate image prompts (I2P)-containing dedicated, real-world image-to-text prompts covering concepts such as nudity and violence. As our exhaustive empirical evaluation demonstrates, the introduced SLD removes and suppresses inappropriate image parts during the diffusion process, with no additional training required and no adverse effect on overall image quality or text alignment.*
21
-
22
-
23
- *Overview*:
24
-
25
- | Pipeline | Tasks | Colab | Demo
26
- |---|---|:---:|:---:|
27
- | [pipeline_stable_diffusion_safe.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion_safe/pipeline_stable_diffusion_safe.py) | *Text-to-Image Generation* | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/ml-research/safe-latent-diffusion/blob/main/examples/Safe%20Latent%20Diffusion.ipynb) | [![Huggingface Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/AIML-TUDA/unsafe-vs-safe-stable-diffusion)
28
-
29
- ## Tips
30
-
31
- - Safe Stable Diffusion may also be used with weights of [Stable Diffusion](./api/pipelines/stable_diffusion/text2img).
32
-
33
- ### Run Safe Stable Diffusion
34
-
35
- Safe Stable Diffusion can be tested very easily with the [`StableDiffusionPipelineSafe`], and the `"AIML-TUDA/stable-diffusion-safe"` checkpoint exactly in the same way it is shown in the [Conditional Image Generation Guide](./using-diffusers/conditional_image_generation).
36
-
37
- ### Interacting with the Safety Concept
38
-
39
- To check and edit the currently used safety concept, use the `safety_concept` property of [`StableDiffusionPipelineSafe`]:
40
- ```python
41
- >>> from diffusers import StableDiffusionPipelineSafe
42
-
43
- >>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe")
44
- >>> pipeline.safety_concept
45
- ```
46
- For each image generation the active concept is also contained in [`StableDiffusionSafePipelineOutput`].
47
-
48
- ### Using pre-defined safety configurations
49
-
50
- You may use the 4 configurations defined in the [Safe Latent Diffusion paper](https://arxiv.org/abs/2211.05105) as follows:
51
-
52
- ```python
53
- >>> from diffusers import StableDiffusionPipelineSafe
54
- >>> from diffusers.pipelines.stable_diffusion_safe import SafetyConfig
55
-
56
- >>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe")
57
- >>> prompt = "the four horsewomen of the apocalypse, painting by tom of finland, gaston bussiere, craig mullins, j. c. leyendecker"
58
- >>> out = pipeline(prompt=prompt, **SafetyConfig.MAX)
59
- ```
60
-
61
- The following configurations are available: `SafetyConfig.WEAK`, `SafetyConfig.MEDIUM`, `SafetyConfig.STRONG`, and `SafetyConfig.MAX`.
62
-
63
- ### How to load and use different schedulers
64
-
65
- The Safe Stable Diffusion pipeline uses the [`PNDMScheduler`] by default, but `diffusers` provides many other schedulers that can be used with it, such as [`DDIMScheduler`], [`LMSDiscreteScheduler`], [`EulerDiscreteScheduler`], and [`EulerAncestralDiscreteScheduler`].
66
- To use a different scheduler, you can either change it via the [`ConfigMixin.from_config`] method or pass the `scheduler` argument to the `from_pretrained` method of the pipeline. For example, to use the [`EulerDiscreteScheduler`], you can do the following:
67
-
68
- ```python
69
- >>> from diffusers import StableDiffusionPipelineSafe, EulerDiscreteScheduler
70
-
71
- >>> pipeline = StableDiffusionPipelineSafe.from_pretrained("AIML-TUDA/stable-diffusion-safe")
72
- >>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
73
-
74
- >>> # or
75
- >>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("AIML-TUDA/stable-diffusion-safe", subfolder="scheduler")
76
- >>> pipeline = StableDiffusionPipelineSafe.from_pretrained(
77
- ... "AIML-TUDA/stable-diffusion-safe", scheduler=euler_scheduler
78
- ... )
79
- ```
80
-
81
-
82
- ## StableDiffusionSafePipelineOutput
83
- [[autodoc]] pipelines.stable_diffusion_safe.StableDiffusionSafePipelineOutput
84
- - all
85
- - __call__
86
-
87
- ## StableDiffusionPipelineSafe
88
- [[autodoc]] StableDiffusionPipelineSafe
89
- - all
90
- - __call__
 
 
diffusers/docs/source/en/api/pipelines/stable_unclip.mdx DELETED
@@ -1,175 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Stable unCLIP
14
-
15
- Stable unCLIP checkpoints are finetuned from [stable diffusion 2.1](./stable_diffusion_2) checkpoints to condition on CLIP image embeddings.
16
- Stable unCLIP also still conditions on text embeddings. Given the two separate conditionings, stable unCLIP can be used
17
- for text-guided image variation. When combined with an unCLIP prior, it can also be used for full text-to-image generation.
18
-
19
- To know more about the unCLIP process, check out the following paper:
20
-
21
- [Hierarchical Text-Conditional Image Generation with CLIP Latents](https://arxiv.org/abs/2204.06125) by Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, Mark Chen.
22
-
23
- ## Tips
24
-
25
- Stable unCLIP takes a `noise_level` as input during inference. `noise_level` determines how much noise is added
26
- to the image embeddings. A higher `noise_level` increases variation in the final un-noised images. By default,
27
- we do not add any additional noise to the image embeddings i.e. `noise_level = 0`.
28
-
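- As a small sketch, `noise_level` can be passed directly at call time; the checkpoint matches the image-variation example below and the value used here is only illustrative:
-
- ```python
- import torch
- from diffusers import StableUnCLIPImg2ImgPipeline
- from diffusers.utils import load_image
-
- pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
-     "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variant="fp16"
- )
- pipe = pipe.to("cuda")
-
- init_image = load_image(
-     "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
- )
-
- # noise_level=0 (the default) leaves the image embeddings untouched; larger values add noise for more variation
- image = pipe(init_image, noise_level=500).images[0]
- image.save("noisy_variation.png")
- ```
-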
29
- ### Available checkpoints:
30
-
31
- * Image variation
32
- * [stabilityai/stable-diffusion-2-1-unclip](https://hf.co/stabilityai/stable-diffusion-2-1-unclip)
33
- * [stabilityai/stable-diffusion-2-1-unclip-small](https://hf.co/stabilityai/stable-diffusion-2-1-unclip-small)
34
- * Text-to-image
35
- * [stabilityai/stable-diffusion-2-1-unclip-small](https://hf.co/stabilityai/stable-diffusion-2-1-unclip-small)
36
-
37
- ### Text-to-Image Generation
38
- Stable unCLIP can be leveraged for text-to-image generation by pipelining it with the prior model of KakaoBrain's open-source DALL-E 2 replication [Karlo](https://huggingface.co/kakaobrain/karlo-v1-alpha).
39
-
40
- ```python
41
- import torch
42
- from diffusers import UnCLIPScheduler, DDPMScheduler, StableUnCLIPPipeline
43
- from diffusers.models import PriorTransformer
44
- from transformers import CLIPTokenizer, CLIPTextModelWithProjection
45
-
46
- prior_model_id = "kakaobrain/karlo-v1-alpha"
47
- data_type = torch.float16
48
- prior = PriorTransformer.from_pretrained(prior_model_id, subfolder="prior", torch_dtype=data_type)
49
-
50
- prior_text_model_id = "openai/clip-vit-large-patch14"
51
- prior_tokenizer = CLIPTokenizer.from_pretrained(prior_text_model_id)
52
- prior_text_model = CLIPTextModelWithProjection.from_pretrained(prior_text_model_id, torch_dtype=data_type)
53
- prior_scheduler = UnCLIPScheduler.from_pretrained(prior_model_id, subfolder="prior_scheduler")
54
- prior_scheduler = DDPMScheduler.from_config(prior_scheduler.config)
55
-
56
- stable_unclip_model_id = "stabilityai/stable-diffusion-2-1-unclip-small"
57
-
58
- pipe = StableUnCLIPPipeline.from_pretrained(
59
- stable_unclip_model_id,
60
- torch_dtype=data_type,
61
- variant="fp16",
62
- prior_tokenizer=prior_tokenizer,
63
- prior_text_encoder=prior_text_model,
64
- prior=prior,
65
- prior_scheduler=prior_scheduler,
66
- )
67
-
68
- pipe = pipe.to("cuda")
69
- wave_prompt = "dramatic wave, the Oceans roar, Strong wave spiral across the oceans as the waves unfurl into roaring crests; perfect wave form; perfect wave shape; dramatic wave shape; wave shape unbelievable; wave; wave shape spectacular"
70
-
71
- images = pipe(prompt=wave_prompt).images
72
- images[0].save("waves.png")
73
- ```
74
- <Tip warning={true}>
75
-
76
- For text-to-image we use `stabilityai/stable-diffusion-2-1-unclip-small`, as it was trained on CLIP ViT-L/14 embeddings, the same as the Karlo model prior. [stabilityai/stable-diffusion-2-1-unclip](https://hf.co/stabilityai/stable-diffusion-2-1-unclip) was trained on OpenCLIP ViT-H, so we don't recommend using it for text-to-image generation.
77
-
78
- </Tip>
79
-
80
- ### Text guided Image-to-Image Variation
81
-
82
- ```python
83
- from diffusers import StableUnCLIPImg2ImgPipeline
84
- from diffusers.utils import load_image
85
- import torch
86
-
87
- pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
88
- "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variant="fp16"
89
- )
90
- pipe = pipe.to("cuda")
91
-
92
- url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
93
- init_image = load_image(url)
94
-
95
- images = pipe(init_image).images
96
- images[0].save("variation_image.png")
97
- ```
98
-
99
- Optionally, you can also pass a prompt to `pipe` such as:
100
-
101
- ```python
102
- prompt = "A fantasy landscape, trending on artstation"
103
-
104
- images = pipe(init_image, prompt=prompt).images
105
- images[0].save("variation_image_two.png")
106
- ```
107
-
108
- ### Memory optimization
109
-
110
- If you are short on GPU memory, you can enable smart CPU offloading so that models that are not needed
111
- immediately for a computation can be offloaded to CPU:
112
-
113
- ```python
114
- from diffusers import StableUnCLIPImg2ImgPipeline
115
- from diffusers.utils import load_image
116
- import torch
117
-
118
- pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
119
- "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variant="fp16"
120
- )
121
- # Offload to CPU.
122
- pipe.enable_model_cpu_offload()
123
-
124
- url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
125
- init_image = load_image(url)
126
-
127
- images = pipe(init_image).images
128
- images[0]
129
- ```
130
-
131
- Further memory optimizations are possible by enabling VAE slicing on the pipeline:
132
-
133
- ```python
134
- from diffusers import StableUnCLIPImg2ImgPipeline
135
- from diffusers.utils import load_image
136
- import torch
137
-
138
- pipe = StableUnCLIPImg2ImgPipeline.from_pretrained(
139
- "stabilityai/stable-diffusion-2-1-unclip", torch_dtype=torch.float16, variant="fp16"
140
- )
141
- pipe.enable_model_cpu_offload()
142
- pipe.enable_vae_slicing()
143
-
144
- url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
145
- init_image = load_image(url)
146
-
147
- images = pipe(init_image).images
148
- images[0]
149
- ```
150
-
151
- ### StableUnCLIPPipeline
152
-
153
- [[autodoc]] StableUnCLIPPipeline
154
- - all
155
- - __call__
156
- - enable_attention_slicing
157
- - disable_attention_slicing
158
- - enable_vae_slicing
159
- - disable_vae_slicing
160
- - enable_xformers_memory_efficient_attention
161
- - disable_xformers_memory_efficient_attention
162
-
163
-
164
- ### StableUnCLIPImg2ImgPipeline
165
-
166
- [[autodoc]] StableUnCLIPImg2ImgPipeline
167
- - all
168
- - __call__
169
- - enable_attention_slicing
170
- - disable_attention_slicing
171
- - enable_vae_slicing
172
- - disable_vae_slicing
173
- - enable_xformers_memory_efficient_attention
174
- - disable_xformers_memory_efficient_attention
175
-
 
 
diffusers/docs/source/en/api/pipelines/stochastic_karras_ve.mdx DELETED
@@ -1,36 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # Stochastic Karras VE
14
-
15
- ## Overview
16
-
17
- [Elucidating the Design Space of Diffusion-Based Generative Models](https://arxiv.org/abs/2206.00364) by Tero Karras, Miika Aittala, Timo Aila and Samuli Laine.
18
-
19
- The abstract of the paper is the following:
20
-
21
- We argue that the theory and practice of diffusion-based generative models are currently unnecessarily convoluted and seek to remedy the situation by presenting a design space that clearly separates the concrete design choices. This lets us identify several changes to both the sampling and training processes, as well as preconditioning of the score networks. Together, our improvements yield new state-of-the-art FID of 1.79 for CIFAR-10 in a class-conditional setting and 1.97 in an unconditional setting, with much faster sampling (35 network evaluations per image) than prior designs. To further demonstrate their modular nature, we show that our design changes dramatically improve both the efficiency and quality obtainable with pre-trained score networks from previous work, including improving the FID of an existing ImageNet-64 model from 2.07 to near-SOTA 1.55.
22
-
23
- This pipeline implements the stochastic sampling tailored to variance-exploding (VE) models.
24
-
25
-
26
- ## Available Pipelines:
27
-
28
- | Pipeline | Tasks | Colab
29
- |---|---|:---:|
30
- | [pipeline_stochastic_karras_ve.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stochastic_karras_ve/pipeline_stochastic_karras_ve.py) | *Unconditional Image Generation* | - |
31
-
32
-
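- A minimal unconditional sampling sketch is shown below. The `google/ncsnpp-celebahq-256` checkpoint is an assumption used purely for illustration (it is not referenced on this page); any compatible unconditional `UNet2DModel` checkpoint can be substituted.
- 
- ```python
- from diffusers import KarrasVePipeline, KarrasVeScheduler, UNet2DModel
- 
- # Assumed checkpoint for illustration; swap in any unconditional UNet2DModel.
- unet = UNet2DModel.from_pretrained("google/ncsnpp-celebahq-256")
- scheduler = KarrasVeScheduler()
- 
- pipe = KarrasVePipeline(unet=unet, scheduler=scheduler)
- pipe = pipe.to("cuda")
- 
- image = pipe(num_inference_steps=50).images[0]
- image.save("karras_ve_sample.png")
- ```
- 
- Passing a seeded `torch.Generator` through the `generator` argument should make the stochastic sampling reproducible.
- 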
33
- ## KarrasVePipeline
34
- [[autodoc]] KarrasVePipeline
35
- - all
36
- - __call__
 
diffusers/docs/source/en/api/pipelines/text_to_video.mdx DELETED
@@ -1,130 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- <Tip warning={true}>
14
-
15
- This pipeline is for research purposes only.
16
-
17
- </Tip>
18
-
19
- # Text-to-video synthesis
20
-
21
- ## Overview
22
-
23
- [VideoFusion: Decomposed Diffusion Models for High-Quality Video Generation](https://arxiv.org/abs/2303.08320) by Zhengxiong Luo, Dayou Chen, Yingya Zhang, Yan Huang, Liang Wang, Yujun Shen, Deli Zhao, Jingren Zhou, Tieniu Tan.
24
-
25
- The abstract of the paper is the following:
26
-
27
- *A diffusion probabilistic model (DPM), which constructs a forward diffusion process by gradually adding noise to data points and learns the reverse denoising process to generate new samples, has been shown to handle complex data distribution. Despite its recent success in image synthesis, applying DPMs to video generation is still challenging due to high-dimensional data spaces. Previous methods usually adopt a standard diffusion process, where frames in the same video clip are destroyed with independent noises, ignoring the content redundancy and temporal correlation. This work presents a decomposed diffusion process via resolving the per-frame noise into a base noise that is shared among all frames and a residual noise that varies along the time axis. The denoising pipeline employs two jointly-learned networks to match the noise decomposition accordingly. Experiments on various datasets confirm that our approach, termed as VideoFusion, surpasses both GAN-based and diffusion-based alternatives in high-quality video generation. We further show that our decomposed formulation can benefit from pre-trained image diffusion models and well-support text-conditioned video creation.*
28
-
29
- Resources:
30
-
31
- * [Website](https://modelscope.cn/models/damo/text-to-video-synthesis/summary)
32
- * [GitHub repository](https://github.com/modelscope/modelscope/)
33
- * [🤗 Spaces](https://huggingface.co/spaces/damo-vilab/modelscope-text-to-video-synthesis)
34
-
35
- ## Available Pipelines:
36
-
37
- | Pipeline | Tasks | Demo
38
- |---|---|:---:|
39
- | [TextToVideoSDPipeline](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/text_to_video_synthesis/pipeline_text_to_video_synth.py) | *Text-to-Video Generation* | [🤗 Spaces](https://huggingface.co/spaces/damo-vilab/modelscope-text-to-video-synthesis)
40
-
41
- ## Usage example
42
-
43
- Let's start by generating a short video with the default length of 16 frames (2s at 8 fps):
44
-
45
- ```python
46
- import torch
47
- from diffusers import DiffusionPipeline
48
- from diffusers.utils import export_to_video
49
-
50
- pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16")
51
- pipe = pipe.to("cuda")
52
-
53
- prompt = "Spiderman is surfing"
54
- video_frames = pipe(prompt).frames
55
- video_path = export_to_video(video_frames)
56
- video_path
57
- ```
58
-
59
- Diffusers supports different optimization techniques to reduce the latency
60
- and memory footprint of a pipeline. Since videos are often more memory-intensive than images,
61
- we can enable CPU offloading and VAE slicing to keep memory usage in check.
62
-
63
- Let's generate a video of 8 seconds (64 frames) on the same GPU using CPU offloading and VAE slicing:
64
-
65
- ```python
66
- import torch
67
- from diffusers import DiffusionPipeline
68
- from diffusers.utils import export_to_video
69
-
70
- pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16")
71
- pipe.enable_model_cpu_offload()
72
-
73
- # memory optimization
74
- pipe.enable_vae_slicing()
75
-
76
- prompt = "Darth Vader surfing a wave"
77
- video_frames = pipe(prompt, num_frames=64).frames
78
- video_path = export_to_video(video_frames)
79
- video_path
80
- ```
81
-
82
- It takes just **7 GB of GPU memory** to generate the 64 video frames using PyTorch 2.0, "fp16" precision, and the techniques mentioned above.
83
-
84
- We can also use a different scheduler easily, using the same method we'd use for Stable Diffusion:
85
-
86
- ```python
87
- import torch
88
- from diffusers import DiffusionPipeline, DPMSolverMultistepScheduler
89
- from diffusers.utils import export_to_video
90
-
91
- pipe = DiffusionPipeline.from_pretrained("damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16")
92
- pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)
93
- pipe.enable_model_cpu_offload()
94
-
95
- prompt = "Spiderman is surfing"
96
- video_frames = pipe(prompt, num_inference_steps=25).frames
97
- video_path = export_to_video(video_frames)
98
- video_path
99
- ```
100
-
101
- Here are some sample outputs:
102
-
103
- <table>
104
- <tr>
105
- <td><center>
106
- An astronaut riding a horse.
107
- <br>
108
- <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/astr.gif"
109
- alt="An astronaut riding a horse."
110
- style="width: 300px;" />
111
- </center></td>
112
- <td ><center>
113
- Darth Vader surfing in waves.
114
- <br>
115
- <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/vader.gif"
116
- alt="Darth vader surfing in waves."
117
- style="width: 300px;" />
118
- </center></td>
119
- </tr>
120
- </table>
121
-
122
- ## Available checkpoints
123
-
124
- * [damo-vilab/text-to-video-ms-1.7b](https://huggingface.co/damo-vilab/text-to-video-ms-1.7b/)
125
- * [damo-vilab/text-to-video-ms-1.7b-legacy](https://huggingface.co/damo-vilab/text-to-video-ms-1.7b-legacy)
126
-
127
- ## TextToVideoSDPipeline
128
- [[autodoc]] TextToVideoSDPipeline
129
- - all
130
- - __call__
 
diffusers/docs/source/en/api/pipelines/unclip.mdx DELETED
@@ -1,37 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
3
- the License. You may obtain a copy of the License at
4
- http://www.apache.org/licenses/LICENSE-2.0
5
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
6
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
7
- specific language governing permissions and limitations under the License.
8
- -->
9
-
10
- # unCLIP
11
-
12
- ## Overview
13
-
14
- [Hierarchical Text-Conditional Image Generation with CLIP Latents](https://arxiv.org/abs/2204.06125) by Aditya Ramesh, Prafulla Dhariwal, Alex Nichol, Casey Chu, and Mark Chen.
15
-
16
- The abstract of the paper is the following:
17
-
18
- Contrastive models like CLIP have been shown to learn robust representations of images that capture both semantics and style. To leverage these representations for image generation, we propose a two-stage model: a prior that generates a CLIP image embedding given a text caption, and a decoder that generates an image conditioned on the image embedding. We show that explicitly generating image representations improves image diversity with minimal loss in photorealism and caption similarity. Our decoders conditioned on image representations can also produce variations of an image that preserve both its semantics and style, while varying the non-essential details absent from the image representation. Moreover, the joint embedding space of CLIP enables language-guided image manipulations in a zero-shot fashion. We use diffusion models for the decoder and experiment with both autoregressive and diffusion models for the prior, finding that the latter are computationally more efficient and produce higher-quality samples.
19
-
20
- The unCLIP model in diffusers comes from kakaobrain's karlo and the original codebase can be found [here](https://github.com/kakaobrain/karlo). Additionally, lucidrains has a DALL-E 2 recreation [here](https://github.com/lucidrains/DALLE2-pytorch).
21
-
22
- ## Available Pipelines:
23
-
24
- | Pipeline | Tasks | Colab
25
- |---|---|:---:|
26
- | [pipeline_unclip.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unclip/pipeline_unclip.py) | *Text-to-Image Generation* | - |
27
- | [pipeline_unclip_image_variation.py](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/unclip/pipeline_unclip_image_variation.py) | *Image-Guided Image Generation* | - |
28
-
29
-
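- As a rough text-to-image sketch (the `kakaobrain/karlo-v1-alpha` checkpoint name and the prompt are assumptions for illustration, not taken from this page):
- 
- ```python
- import torch
- from diffusers import UnCLIPPipeline
- 
- # Assumed Karlo checkpoint, loaded in half precision.
- pipe = UnCLIPPipeline.from_pretrained("kakaobrain/karlo-v1-alpha", torch_dtype=torch.float16)
- pipe = pipe.to("cuda")
- 
- prompt = "a high-resolution photograph of a big red frog on a green leaf"
- image = pipe(prompt).images[0]
- image.save("unclip_frog.png")
- ```
- 
- The prior, decoder, and super-resolution stages expose separate `*_num_inference_steps` arguments on `__call__`, so their step counts can be tuned independently.
- 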
30
- ## UnCLIPPipeline
31
- [[autodoc]] UnCLIPPipeline
32
- - all
33
- - __call__
34
-
35
- [[autodoc]] UnCLIPImageVariationPipeline
36
- - all
37
- - __call__
 
diffusers/docs/source/en/api/pipelines/versatile_diffusion.mdx DELETED
@@ -1,70 +0,0 @@
1
- <!--Copyright 2023 The HuggingFace Team. All rights reserved.
2
-
3
- Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
4
- the License. You may obtain a copy of the License at
5
-
6
- http://www.apache.org/licenses/LICENSE-2.0
7
-
8
- Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
9
- an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
10
- specific language governing permissions and limitations under the License.
11
- -->
12
-
13
- # VersatileDiffusion
14
-
15
- VersatileDiffusion was proposed in [Versatile Diffusion: Text, Images and Variations All in One Diffusion Model](https://arxiv.org/abs/2211.08332) by Xingqian Xu, Zhangyang Wang, Eric Zhang, Kai Wang, and Humphrey Shi.
16
-
17
- The abstract of the paper is the following:
18
-
19
- *The recent advances in diffusion models have set an impressive milestone in many generation tasks. Trending works such as DALL-E2, Imagen, and Stable Diffusion have attracted great interest in academia and industry. Despite the rapid landscape changes, recent new approaches focus on extensions and performance rather than capacity, thus requiring separate models for separate tasks. In this work, we expand the existing single-flow diffusion pipeline into a multi-flow network, dubbed Versatile Diffusion (VD), that handles text-to-image, image-to-text, image-variation, and text-variation in one unified model. Moreover, we generalize VD to a unified multi-flow multimodal diffusion framework with grouped layers, swappable streams, and other propositions that can process modalities beyond images and text. Through our experiments, we demonstrate that VD and its underlying framework have the following merits: a) VD handles all subtasks with competitive quality; b) VD initiates novel extensions and applications such as disentanglement of style and semantic, image-text dual-guided generation, etc.; c) Through these experiments and applications, VD provides more semantic insights of the generated outputs.*
20
-
21
- ## Tips
22
-
23
- VersatileDiffusion is conceptually very similar to [Stable Diffusion](./api/pipelines/stable_diffusion/overview), but instead of providing just an image data stream conditioned on text, VersatileDiffusion provides both an image and a text data stream and can be conditioned on both text and images.
24
-
25
- ### *Run VersatileDiffusion*
26
-
27
- You can either load the memory-intensive "all-in-one" [`VersatileDiffusionPipeline`] that can run all tasks
28
- with the same class, as shown in [`VersatileDiffusionPipeline.text_to_image`], [`VersatileDiffusionPipeline.image_variation`], and [`VersatileDiffusionPipeline.dual_guided`],
29
-
30
- **or**
31
-
32
- You can run the individual pipelines, which are much more memory-efficient (see the sketch after this list):
33
-
34
- - *Text-to-Image*: [`VersatileDiffusionTextToImagePipeline.__call__`]
35
- - *Image Variation*: [`VersatileDiffusionImageVariationPipeline.__call__`]
36
- - *Dual Text and Image Guided Generation*: [`VersatileDiffusionDualGuidedPipeline.__call__`]
37
-
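- For example, here is a minimal text-to-image sketch using the dedicated pipeline (the prompt is illustrative; the checkpoint is the same `shi-labs/versatile-diffusion` used in the scheduler example below):
- 
- ```python
- import torch
- from diffusers import VersatileDiffusionTextToImagePipeline
- 
- pipe = VersatileDiffusionTextToImagePipeline.from_pretrained("shi-labs/versatile-diffusion", torch_dtype=torch.float16)
- pipe = pipe.to("cuda")
- 
- # Illustrative prompt
- image = pipe("an astronaut riding a horse on mars").images[0]
- image.save("versatile_text2img.png")
- ```
- 
- The image-variation and dual-guided pipelines follow the same pattern, taking a PIL image (and, for dual guidance, both an image and a text prompt) as input.
- 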
38
- ### *How to load and use different schedulers.*
39
-
40
- The Versatile Diffusion pipelines use the [`DDIMScheduler`] by default, but `diffusers` provides many other schedulers that can be used with them, such as [`PNDMScheduler`], [`LMSDiscreteScheduler`], [`EulerDiscreteScheduler`], [`EulerAncestralDiscreteScheduler`], etc.
41
- To use a different scheduler, you can either change it via the [`ConfigMixin.from_config`] method or pass the `scheduler` argument to the `from_pretrained` method of the pipeline. For example, to use the [`EulerDiscreteScheduler`], you can do the following:
42
-
43
- ```python
44
- >>> from diffusers import VersatileDiffusionPipeline, EulerDiscreteScheduler
45
-
46
- >>> pipeline = VersatileDiffusionPipeline.from_pretrained("shi-labs/versatile-diffusion")
47
- >>> pipeline.scheduler = EulerDiscreteScheduler.from_config(pipeline.scheduler.config)
48
-
49
- >>> # or
50
- >>> euler_scheduler = EulerDiscreteScheduler.from_pretrained("shi-labs/versatile-diffusion", subfolder="scheduler")
51
- >>> pipeline = VersatileDiffusionPipeline.from_pretrained("shi-labs/versatile-diffusion", scheduler=euler_scheduler)
52
- ```
53
-
54
- ## VersatileDiffusionPipeline
55
- [[autodoc]] VersatileDiffusionPipeline
56
-
57
- ## VersatileDiffusionTextToImagePipeline
58
- [[autodoc]] VersatileDiffusionTextToImagePipeline
59
- - all
60
- - __call__
61
-
62
- ## VersatileDiffusionImageVariationPipeline
63
- [[autodoc]] VersatileDiffusionImageVariationPipeline
64
- - all
65
- - __call__
66
-
67
- ## VersatileDiffusionDualGuidedPipeline
68
- [[autodoc]] VersatileDiffusionDualGuidedPipeline
69
- - all
70
- - __call__