---
license: openrail++
tags:
- stable-diffusion
- text-to-image
pinned: true
---

# Stable Diffusion v2-1-unclip (small) Model Card
This model card focuses on the model associated with the Stable Diffusion v2-1 model, whose codebase is available [here](https://github.com/Stability-AI/stablediffusion).

`stable-diffusion-2-1-unclip-small` is a fine-tuned version of Stable Diffusion 2.1, modified to accept a (noisy) CLIP image embedding in addition to the text prompt. It can be used to create image variations ([Examples](#examples)) or be chained with text-to-image CLIP priors. The amount of noise added to the image embedding is specified via the `noise_level` argument (0 means no noise, 1000 means full noise).

- Use it with 🧨 [`diffusers`](#examples)

## Model Details
- **Developed by:** Robin Rombach, Patrick Esser
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s):** English
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL)
- **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip)).
- **Resources for more information:** [GitHub Repository](https://github.com/Stability-AI/stablediffusion).
- **Cite as:**

      @InProceedings{Rombach_2022_CVPR,
          author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
          title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
          booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
          month     = {June},
          year      = {2022},
          pages     = {10684-10695}
      }

## Examples

Use the 🤗 [Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion UnCLIP 2-1-small in a simple and efficient manner.

```bash
pip install git+https://github.com/huggingface/diffusers.git transformers accelerate scipy safetensors
```

Running the pipeline (if you don't swap the scheduler, it runs with the default DDIM; a snippet after the example shows how to swap in `DPMSolverMultistepScheduler`):

```python
import requests
import torch
from PIL import Image
from io import BytesIO

from diffusers import DiffusionPipeline

# Load the image-variation pipeline in half precision
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-1-unclip-small", torch_dtype=torch.float16, variant="fp16"
)
pipe.to("cuda")

# Download the conditioning image
url = "https://huggingface.co/datasets/hf-internal-testing/diffusers-images/resolve/main/stable_unclip/tarsila_do_amaral.png"
response = requests.get(url)
image = Image.open(BytesIO(response.content)).convert("RGB")

# Generate an image variation of the input image
image = pipe(image).images[0]
```
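
The `noise_level` argument described above controls how strongly the variation deviates from the input image, and the scheduler can be swapped as mentioned earlier. A minimal follow-up sketch, continuing from the snippet above (`noise_level=100` is an illustrative value, not a recommended setting):

```python
from diffusers import DPMSolverMultistepScheduler

# Optionally swap the default scheduler for DPMSolverMultistepScheduler
pipe.scheduler = DPMSolverMultistepScheduler.from_config(pipe.scheduler.config)

# noise_level sets how much noise is added to the CLIP image embedding
# (0 means no noise, 1000 means full noise)
image = pipe(image, noise_level=100).images[0]
```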

![img](./image.png)

# Uses

## Direct Use
The model is intended for research purposes only. Possible research areas and tasks include:

- Safe deployment of models which have the potential to generate harmful content.
- Probing and understanding the limitations and biases of generative models.
- Generation of artworks and use in design and other artistic processes.
- Applications in educational or creative tools.
- Research on generative models.

Excluded uses are described below.

### Misuse, Malicious Use, and Out-of-Scope Use
_Note: This section was originally taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), was used for Stable Diffusion v1, and applies in the same way to Stable Diffusion v2._

The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.

#### Out-of-Scope Use
The model was not trained to be factual or true representations of people or events, and therefore using the model to generate such content is out-of-scope for the abilities of this model.

#### Misuse and Malicious Use
Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:

- Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Impersonating individuals without their consent.
- Sexual content without consent of the people who might see it.
- Mis- and disinformation.
- Representations of egregious violence and gore.
- Sharing of copyrighted or licensed material in violation of its terms of use.
- Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.

## Limitations and Bias

### Limitations

- The model does not achieve perfect photorealism.
- The model cannot render legible text.
- The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”.
- Faces and people in general may not be generated properly.
- The model was trained mainly with English captions and will not work as well in other languages.
- The autoencoding part of the model is lossy.
- The model was trained on a subset of the large-scale dataset [LAION-5B](https://laion.ai/blog/laion-5b/), which contains adult, violent, and sexual content. To partially mitigate this, we have filtered the dataset using LAION's NSFW detector (see the Training section).

### Bias
While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
Stable Diffusion was primarily trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
which consists of images that are limited to English descriptions.
Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
Stable Diffusion v2 mirrors and exacerbates biases to such a degree that viewer discretion must be advised irrespective of the input or its intent.

## Training

**Training Data**
The model developers used the following dataset for training the model:

- LAION-5B and subsets (details below). The training data was further filtered using LAION's NSFW detector, keeping only samples with a "p_unsafe" score below 0.1 (conservative); a sketch of this style of filtering follows below. For more details, please refer to LAION-5B's [NeurIPS 2022](https://openreview.net/forum?id=M3Y74vmsMcY) paper and the reviewer discussions on the topic.
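
A hedged illustration of this kind of threshold filtering (the `punsafe` column name and the parquet file layout are assumptions for illustration, not the exact pipeline used):

```python
import pandas as pd

# Illustrative only: keep samples whose predicted "unsafe" probability is
# below the conservative 0.1 threshold described above.
metadata = pd.read_parquet("laion_subset_metadata.parquet")  # hypothetical file
filtered = metadata[metadata["punsafe"] < 0.1]
filtered.to_parquet("laion_subset_filtered.parquet")
```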

## Environmental Impact

**Stable Diffusion v1 Estimated Emissions**
Based on the information below, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were used to estimate the carbon impact.

- **Hardware Type:** A100 PCIe 40GB
- **Hours used:** 200,000
- **Cloud Provider:** AWS
- **Compute Region:** US-east
- **Carbon Emitted (power consumption × time × carbon intensity of the local power grid):** 15,000 kg CO2 eq.
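
As a sanity check, the arithmetic behind this estimate can be reproduced in a few lines. The 250 W GPU power draw and ~0.3 kg CO2eq/kWh grid intensity below are assumptions chosen to be consistent with the figures above, not reported values:

```python
# Back-of-the-envelope estimate: power consumption x time x carbon intensity
gpu_power_kw = 0.25           # 250 W per A100 PCIe 40GB (assumed)
hours = 200_000               # total GPU-hours from the list above
carbon_intensity = 0.3        # kg CO2eq per kWh, US-east (assumed)

energy_kwh = gpu_power_kw * hours             # 50,000 kWh
emissions_kg = energy_kwh * carbon_intensity  # 15,000 kg CO2 eq.
print(f"{emissions_kg:,.0f} kg CO2 eq.")
```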

## Citation

      @InProceedings{Rombach_2022_CVPR,
          author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
          title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
          booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
          month     = {June},
          year      = {2022},
          pages     = {10684-10695}
      }

*This model card was written by: Robin Rombach, Patrick Esser and David Ha and is based on the [Stable Diffusion v1](https://github.com/CompVis/stable-diffusion/blob/main/Stable_Diffusion_v1_Model_Card.md) and [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*