Update README.md
README.md
CHANGED
@@ -27,37 +27,36 @@ extra_gated_prompt: |-
extra_gated_heading: Please read the LICENSE to access this model
---

- #

- This weights here are intended to be used with the 🧨 Diffusers library. If you

## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- - **Model type:** Diffusion-based
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- - **Resources for more information:** [GitHub Repository](https://github.com/
- **Cite as:**

- @
- year = {2022},
- pages = {10684-10695}
}

## Examples

- We recommend using [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run

### PyTorch

@@ -69,124 +68,34 @@ Running the pipeline with the default PNDM scheduler:

```python
import torch

- model_id = "
device = "cuda"

- pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to(device)

- prompt = "a
- prompt = "a photo of an astronaut riding a horse on mars"
- image = pipe(prompt).images[0]

- image.save("astronaut_rides_horse.png")
- ```

- To swap out the noise scheduler, pass it to `from_pretrained`:

- ```python
- from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler

- model_id = "CompVis/stable-diffusion-v1-4"

- # Use the Euler scheduler here instead
- scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
- pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
- pipe = pipe.to("cuda")

- prompt = "a photo of an astronaut riding a horse on mars"
- image = pipe(prompt).images[0]

- image.save("astronaut_rides_horse.png")
- ```

- ### JAX/Flax

- To use StableDiffusion on TPUs and GPUs for faster inference you can leverage JAX/Flax.

- Running the pipeline with default PNDMScheduler

- ```python
- import jax
- import numpy as np
- from flax.jax_utils import replicate
- from flax.training.common_utils import shard

- from diffusers import FlaxStableDiffusionPipeline

- pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
-     "CompVis/stable-diffusion-v1-4", revision="flax", dtype=jax.numpy.bfloat16
- )

- prompt = "a photo of an astronaut riding a horse on mars"

- prng_seed = jax.random.PRNGKey(0)
- num_inference_steps = 50

- num_samples = jax.device_count()
- prompt = num_samples * [prompt]
- prompt_ids = pipeline.prepare_inputs(prompt)

- # shard inputs and rng
- params = replicate(params)
- prng_seed = jax.random.split(prng_seed, num_samples)
- prompt_ids = shard(prompt_ids)

- images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
- images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
```

- **Note**:
- If you are limited by TPU memory, please make sure to load the `FlaxStableDiffusionPipeline` in `bfloat16` precision instead of the default `float32` precision as done above. You can do so by telling diffusers to load the weights from the "bf16" branch.

- ```python
- import jax
- import numpy as np
- from flax.jax_utils import replicate
- from flax.training.common_utils import shard

- from diffusers import FlaxStableDiffusionPipeline

- pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
-     "CompVis/stable-diffusion-v1-4", revision="bf16", dtype=jax.numpy.bfloat16
- )

- prompt = "a photo of an astronaut riding a horse on mars"

- prng_seed = jax.random.PRNGKey(0)
- num_inference_steps = 50

- num_samples = jax.device_count()
- prompt = num_samples * [prompt]
- prompt_ids = pipeline.prepare_inputs(prompt)

- # shard inputs and rng
- params = replicate(params)
- prng_seed = jax.random.split(prng_seed, num_samples)
- prompt_ids = shard(prompt_ids)

- images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
- images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
- ```

# Uses

@@ -203,7 +112,7 @@ tasks include
Excluded uses are described below.

### Misuse, Malicious Use, and Out-of-Scope Use
- _Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to

The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
@@ -259,65 +168,16 @@ The concepts are passed into the model with the generated image and compared to

## Training

- **Training Data**
- The model developers used the following dataset for training the model:

- - LAION-2B (en) and subsets thereof (see next section)

- **Training Procedure**
- Stable Diffusion v1-4 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,

- - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4.
- - Text prompts are encoded through a ViT-L/14 text-encoder.
- - The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
- - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.

- We currently provide four checkpoints, which were trained as follows.
- - [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en). 194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
- - [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`. 515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en, filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
- - [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2`. 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
- - [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4): Resumed from `stable-diffusion-v1-2`. 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10 % dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).

- - **Hardware:** 32 x 8 x A100 GPUs
- - **Optimizer:** AdamW
- - **Gradient Accumulations**: 2
- - **Batch:** 32 x 8 x 2 x 4 = 2048
- - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant

- ## Evaluation Results
- Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling steps show the relative improvements of the checkpoints:

- ![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-variants-scores.jpg)

- Evaluated using 50 PLMS steps and 10000 random prompts from the COCO2017 validation set, evaluated at 512x512 resolution. Not optimized for FID scores.

- ## Environmental Impact

- **Stable Diffusion v1** **Estimated Emissions**
- Based on that information, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.

- - **Hardware Type:** A100 PCIe 40GB
- - **Hours used:** 150000
- - **Cloud Provider:** AWS
- - **Compute Region:** US-east
- - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.

## Citation

```bibtex
- @
- year = {2022},
- pages = {10684-10695}
}
```

extra_gated_heading: Please read the LICENSE to access this model
---

+ # GLIGEN: Open-Set Grounded Text-to-Image Generation

+ The GLIGEN model was created by researchers and engineers from [University of Wisconsin-Madison, Columbia University, and Microsoft](https://github.com/gligen/GLIGEN).
+ The [`StableDiffusionGLIGENPipeline`] can generate photorealistic images conditioned on grounding inputs.

+ Along with text and bounding boxes, if input images are given, this pipeline can insert the objects described by the text into the regions defined by the bounding boxes.
+ Otherwise, it generates an image described by the caption/prompt and inserts the objects described by the text into the regions defined by the bounding boxes. It was trained on the COCO2014D and COCO2014CD datasets, and the model uses a frozen CLIP ViT-L/14 text encoder to condition itself on grounding inputs.

+ The weights here are intended to be used with the 🧨 Diffusers library. If you want to use one of the official checkpoints for a task, explore the [gligen](https://huggingface.co/gligen) Hub organization!

## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
+ - **Model type:** Diffusion-based grounded text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
+ - **Model Description:** This is a model that can be used to generate and modify images based on text prompts and bounding boxes. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
+ - **Resources for more information:** [GitHub Repository](https://github.com/gligen/GLIGEN), [Paper](https://arxiv.org/pdf/2301.07093.pdf).
- **Cite as:**

+ @article{li2023gligen,
+   author = {Li, Yuheng and Liu, Haotian and Wu, Qingyang and Mu, Fangzhou and Yang, Jianwei and Gao, Jianfeng and Li, Chunyuan and Lee, Yong Jae},
+   title = {GLIGEN: Open-Set Grounded Text-to-Image Generation},
+   publisher = {arXiv:2301.07093},
+   year = {2023},
}

## Examples

+ We recommend using [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run GLIGEN.

### PyTorch


```python
import torch
+ import torchvision
+ from diffusers import StableDiffusionGLIGENPipeline

+ model_id = "masterful/gligen-1-4-generation-text-box"
device = "cuda"

+ pipe = StableDiffusionGLIGENPipeline.from_pretrained(model_id, variant="fp16", torch_dtype=torch.float16)
pipe = pipe.to(device)

+ prompt = "a water glass and a bread on the kitchen counter"
+
+ images = pipe(
+     prompt,
+     num_images_per_prompt=1,
+     gligen_phrases=['a water glass', 'a bread'],
+     gligen_boxes=[
+         [0.1387, 0.2051, 0.4277, 0.7090],
+         [0.4980, 0.4355, 0.8516, 0.7266],
+     ],
+     gligen_scheduled_sampling_beta=0.3,
+     output_type="np",
+     num_inference_steps=50
+ ).images
+
+ images = torch.stack([torch.from_numpy(image) for image in images]).permute(0, 3, 1, 2)
+ torchvision.utils.save_image(images, "./gligen-1-4-generation-text-box.jpg", nrow=1, normalize=False)
```
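
In the call above, `gligen_boxes` holds normalized `[xmin, ymin, xmax, ymax]` coordinates in the 0-1 range, and each box is paired with the phrase at the same index in `gligen_phrases`. For the grounded-inpainting mode mentioned in the introduction (inserting objects into an existing image), a minimal sketch along the same lines is shown below; the inpainting checkpoint name and the input image path are assumptions for illustration, so check the [gligen](https://huggingface.co/gligen) Hub organization for the weights you actually want to use.

```python
import torch
from PIL import Image
from diffusers import StableDiffusionGLIGENPipeline

# Assumed inpainting checkpoint name; see the gligen Hub organization for available weights.
pipe = StableDiffusionGLIGENPipeline.from_pretrained(
    "masterful/gligen-1-4-inpainting-text-box", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# Any RGB image can serve as the canvas; the path here is a placeholder.
input_image = Image.open("kitchen_counter.png").convert("RGB").resize((512, 512))

prompt = "a water glass on the kitchen counter"
images = pipe(
    prompt,
    gligen_phrases=["a water glass"],
    gligen_inpaint_image=input_image,
    gligen_boxes=[[0.1387, 0.2051, 0.4277, 0.7090]],  # normalized xyxy in the 0-1 range
    gligen_scheduled_sampling_beta=1,
    num_inference_steps=50,
).images

images[0].save("gligen-inpainting-text-box.jpg")
```

When `gligen_inpaint_image` is supplied, the pipeline inserts the described objects into the given image at the boxed regions while leaving the rest of the scene largely unchanged; without it, it falls back to the pure generation behaviour shown above.
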
# Uses

Excluded uses are described below.

### Misuse, Malicious Use, and Out-of-Scope Use
+ _Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to GLIGEN._

The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.

## Training
+ Refer to [`GLIGEN`](https://github.com/gligen/GLIGEN) for more details.

## Citation

```bibtex
+ @article{li2023gligen,
+   author = {Li, Yuheng and Liu, Haotian and Wu, Qingyang and Mu, Fangzhou and Yang, Jianwei and Gao, Jianfeng and Li, Chunyuan and Lee, Yong Jae},
+   title = {GLIGEN: Open-Set Grounded Text-to-Image Generation},
+   publisher = {arXiv:2301.07093},
+   year = {2023},
}
```