---
license: creativeml-openrail-m
pipeline_tag: text-to-image
tags:
- stable diffusion
- diffusers
---

# GLIGEN: Open-Set Grounded Text-to-Image Generation

The GLIGEN model was created by researchers and engineers from [University of Wisconsin-Madison, Columbia University, and Microsoft](https://github.com/gligen/GLIGEN).
The [`StableDiffusionGLIGENTextImagePipeline`] can generate photorealistic images conditioned on grounding inputs.

If reference images are given along with text and bounding boxes, this pipeline can insert the objects they depict into the regions defined by the bounding boxes.
Otherwise, it generates an image described by the caption/prompt and inserts the objects described by text into the regions defined by the bounding boxes. The model was trained on the COCO2014D and COCO2014CD datasets and uses a frozen CLIP ViT-L/14 text encoder to condition itself on the grounding inputs.

The weights here are intended to be used with the 🧨 Diffusers library. If you want to use one of the official checkpoints for a task, explore the [gligen](https://huggingface.co/gligen) Hub organization!

## Model Details
- **Developed by:** Yuheng Li, Haotian Liu, Qingyang Wu, Fangzhou Mu, Jianwei Yang, Jianfeng Gao, Chunyuan Li, Yong Jae Lee
- **Model type:** Diffusion-based grounded text-to-image generation model
- **Language(s):** English
- **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
- **Model Description:** This is a model that can be used to generate images based on text prompts, bounding boxes, and reference images. It can add new objects or styles to generated images without textual inversion, DreamBooth, or LoRA fine-tuning. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
- **Resources for more information:** [GitHub Repository](https://github.com/gligen/GLIGEN), [Paper](https://arxiv.org/pdf/2301.07093.pdf).
- **Cite as:**

```bibtex
@article{li2023gligen,
  author    = {Li, Yuheng and Liu, Haotian and Wu, Qingyang and Mu, Fangzhou and Yang, Jianwei and Gao, Jianfeng and Li, Chunyuan and Lee, Yong Jae},
  title     = {GLIGEN: Open-Set Grounded Text-to-Image Generation},
  publisher = {arXiv:2301.07093},
  year      = {2023},
}
```


## Examples

We recommend using [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run GLIGEN.

### PyTorch

```bash
pip install --upgrade diffusers transformers scipy
```

Running the pipeline with the default scheduler:

```python
# Using a reference image to add an object to the generated image
import torch
from diffusers import StableDiffusionGLIGENTextImagePipeline
from diffusers.utils import load_image

pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained("anhnct/Gligen_Text_Image", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a flower sitting on the beach"
boxes = [[0.0, 0.09, 0.53, 0.76]]
phrases = ["flower"]
gligen_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/pexels-pixabay-60597.jpg"
)

images = pipe(
    prompt=prompt,
    gligen_phrases=phrases,
    gligen_images=[gligen_image],
    gligen_boxes=boxes,
    gligen_scheduled_sampling_beta=1,  # GLIGEN scheduled-sampling factor (1 = grounding applied at all steps)
    output_type="pil",
    num_inference_steps=50,
).images

images[0].save("./gligen-generation-text-image-box.jpg")
```
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/flower_gligen.jpg" alt="gen-output-1" width="640"/>

```python
# Using a reference image to transfer style to the generated image
import torch
from diffusers import StableDiffusionGLIGENTextImagePipeline
from diffusers.utils import load_image

pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained("anhnct/Gligen_Text_Image", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "a dragon flying on the sky"
boxes = [[0.4, 0.2, 1.0, 0.8], [0.0, 1.0, 0.0, 1.0]] # Set `[0.0, 1.0, 0.0, 1.0]` for the style

gligen_image = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png"
)
gligen_placeholder = load_image(
    "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png"
)

images = pipe(
    prompt=prompt,
    gligen_phrases=["dragon", "placeholder"],         # any text works for the placeholder; it is masked out below
    gligen_images=[gligen_placeholder, gligen_image], # any image works for the placeholder; it is masked out below
    input_phrases_mask=[1, 0],                        # 0 masks out the placeholder phrase
    input_images_mask=[0, 1],                         # 0 masks out the placeholder image
    gligen_boxes=boxes,
    gligen_scheduled_sampling_beta=1,
    output_type="pil",
    num_inference_steps=50,
).images

images[0].save("./gligen-generation-text-image-box-style-transfer.jpg")
```
<img src="https://huggingface.co/datasets/anhnct/Gligen/resolve/main/gligen-generation-text-image-box-style-transfer.jpg" alt="gen-output-1" width="640"/>

# Uses

## Direct Use 
The model is intended for research purposes only. Possible research areas and
tasks include

- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.

Excluded uses are described below.

### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to GLIGEN._


The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.

#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.

#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:

- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation.
- Representations of egregious violence and gore.
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.

## Limitations and Bias

### Limitations

- The model does not achieve perfect photorealism.
- The model cannot render legible text.
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”.
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy.
- The model was trained on a large-scale dataset
  [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
  and is not fit for product use without additional safety mechanisms and
  considerations.
- No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
  The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.

### Bias

While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases. 
Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/), 
which consists of images that are primarily limited to English descriptions. 
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for. 
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the 
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.

### Safety Module

The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers. 
This checker works by checking model outputs against known hard-coded NSFW concepts.
The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images. 
The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
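
If you want generations to pass through the checker, one possible wiring is sketched below. It assumes the standard Stable Diffusion checker components ([`CompVis/stable-diffusion-safety-checker`](https://huggingface.co/CompVis/stable-diffusion-safety-checker)) also apply to GLIGEN outputs; this card does not specify a dedicated checker for these weights.

```python
# Sketch: attach the standard Stable Diffusion Safety Checker to the pipeline.
# Assumption: the generic checker weights are appropriate for GLIGEN outputs.
import torch
from diffusers import StableDiffusionGLIGENTextImagePipeline
from diffusers.pipelines.stable_diffusion.safety_checker import StableDiffusionSafetyChecker
from transformers import CLIPImageProcessor

checker_id = "CompVis/stable-diffusion-safety-checker"
safety_checker = StableDiffusionSafetyChecker.from_pretrained(checker_id, torch_dtype=torch.float16)
feature_extractor = CLIPImageProcessor.from_pretrained(checker_id)

pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained(
    "anhnct/Gligen_Text_Image",
    safety_checker=safety_checker,
    feature_extractor=feature_extractor,
    torch_dtype=torch.float16,
).to("cuda")
# Flagged images are returned blacked out, with `nsfw_content_detected`
# set in the pipeline output.
```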


## Training
Refer to the [GLIGEN repository](https://github.com/gligen/GLIGEN) for more details.

## Citation

```bibtex
@article{li2023gligen,
  author    = {Li, Yuheng and Liu, Haotian and Wu, Qingyang and Mu, Fangzhou and Yang, Jianwei and Gao, Jianfeng and Li, Chunyuan and Lee, Yong Jae},
  title     = {GLIGEN: Open-Set Grounded Text-to-Image Generation},
  publisher = {arXiv:2301.07093},
  year      = {2023},
}
```

*This model card was written by: [Nguyễn Công Tú Anh](https://github.com/tuanh123789) and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*