estellea committed on
Commit
89641dc
1 Parent(s): f0b0083

Upload 13 files

README.md CHANGED
@@ -1,81 +1,324 @@
1
  ---
2
- datasets:
3
- - laion/laion400m
4
- language:
5
- - en
6
- pipeline_tag: text-to-image
7
  ---
11
- # LDM3D model
12
 
13
- The LDM3D model was proposed in "LDM3D: Latent Diffusion Model for 3D" by Gabriela Ben Melech Stan, Diana Wofk, Scottie Fox, Alex Redden, Will Saxton, Jean Yu, Estelle Aflalo, Shao-Yen Tseng, Fabio Nonato, Matthias Muller, Vasudev Lal.
14
- It was introduced in [this paper](https://arxiv.org/abs/2305.10853.pdf)
15
 
16
- LDM3D got accepted to [CVPRW'23](https://cvpr2023.thecvf.com/).
 
 
17
 
18
- ## Model description
19
 
20
- The abstract from the paper is the following:
21
- This research paper proposes a Latent Diffusion Model for 3D (LDM3D) that generates both image and depth map data from a given text prompt, allowing users to generate RGBD images from text prompts. The LDM3D model is fine-tuned on a dataset of tuples containing an RGB image, depth map and caption, and validated through extensive experiments. We also develop an application called DepthFusion, which uses the generated RGB images and depth maps to create immersive and interactive 360-degree-view experiences using TouchDesigner. This technology has the potential to transform a wide range of industries, from entertainment and gaming to architecture and design. Overall, this paper presents a significant contribution to the field of generative AI and computer vision, and showcases the potential of LDM3D and DepthFusion to revolutionize content creation and digital experiences
22
 
23
- ## Intended uses & limitations(TODO)
 
 
24
 
25
- You can use the model to generate RGB and depth images from text prompt.
26
- A short video summarizing the approach can be found at this [URL](https://t.ly/tdi2) and a VR demo can be found [here](https://www.youtube.com/watch?v=3hbUo-hwAs0)
 
 
27
 
28
- ### How to use
 
 
29
 
30
- Here is how to use this model:
 
31
 
32
  ```python
33
 
34
- from diffusers import StableDiffusionLDM3DPipeline
35
 
36
- pipe_ldm3d = StableDiffusionLDM3DPipeline.from_pretrained("LDM3D/ldm3d-v1")
37
- pipe_ldm3d.to("cuda")
38
 
39
- prompt ="A picture of some lemons on a table"
40
- name = "lemons"
 
41
 
42
- rgb_image, depth_image = pipe_ldm3d(prompt).images
43
- rgb_image[0].save(name+"_ldm3d_rgb.jpg")
44
- depth_image[0].save(name+"_ldm3d_depth.png")
45
  ```
46
- ### Limitations and bias
47
 
48
- TODO
49
 
50
- ## Training data
 
51
 
52
- The LDM3D model was finetuned on a subset of the LAION-400M dataset, a large-scale image-caption dataset that contains over 400 million image-caption pairs
 
53
 
54
 
55
- ## Training procedure
56
 
57
- The fine-tuning process comprises two stages. In the first stage, we train an autoencoder to generate a lower-dimensional, perceptually equivalent data representation. Subsequently, we fine-tune the diffusion model using the frozen autoencoder.
58
 
59
- ### Preprocessing
60
 
61
- TODO
62
 
63
- ### Compute Infrastructure
64
 
65
- All training runs reported in this work are conducted on an Intel AI supercomputing cluster comprising of Intel Xeon processors and Intel Habana Gaudi AI accelerators. The LDM3D model training run is scaled out to 16 accelerators (Gaudis) on the corpus of 9,600 tupples (text caption, RGB image, depth map). The KL-autoencoder used in our LDM3D model was trained on Nvidia A6000 GPUs.
66
 
67
- ## Evaluation results
68
 
69
- Please refer to Table 1 and 2 of the [paper](https://arxiv.org/pdf/2305.10853.pdf)
70
 
71
- ### BibTeX entry and citation info
72
  ```bibtex
73
- @misc{stan2023ldm3d,
74
- title={LDM3D: Latent Diffusion Model for 3D},
75
- author={Gabriela Ben Melech Stan and Diana Wofk and Scottie Fox and Alex Redden and Will Saxton and Jean Yu and Estelle Aflalo and Shao-Yen Tseng and Fabio Nonato and Matthias Muller and Vasudev Lal},
76
- year={2023},
77
- eprint={2305.10853},
78
- archivePrefix={arXiv},
79
- primaryClass={cs.CV}
80
- }
81
  ```
 
 
 
1
  ---
2
+ license: creativeml-openrail-m
3
+ tags:
4
+ - stable-diffusion
5
+ - stable-diffusion-diffusers
6
+ - text-to-image
7
+ widget:
8
+ - text: "A high tech solarpunk utopia in the Amazon rainforest"
9
+ example_title: Amazon rainforest
10
+ - text: "A pikachu fine dining with a view to the Eiffel Tower"
11
+ example_title: Pikachu in Paris
12
+ - text: "A mecha robot in a favela in expressionist style"
13
+ example_title: Expressionist robot
14
+ - text: "an insect robot preparing a delicious meal"
15
+ example_title: Insect robot
16
+ - text: "A small cabin on top of a snowy mountain in the style of Disney, artstation"
17
+ example_title: Snowy disney cabin
18
+ extra_gated_prompt: |-
19
+   This model is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage.
+   The CreativeML OpenRAIL License specifies:
+
+   1. You can't use the model to deliberately produce nor share illegal or harmful outputs or content
+   2. The authors claim no rights on the outputs you generate, you are free to use them and are accountable for their use which must not go against the provisions set in the license
+   3. You may re-distribute the weights and use the model commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M to all your users (please read the license entirely and carefully)
+   Please read the full license carefully here: https://huggingface.co/spaces/CompVis/stable-diffusion-license
26
+
27
+ extra_gated_heading: Please read the LICENSE to access this model
28
  ---
29
 
30
+ # Stable Diffusion v1-4 Model Card
31
+
32
+ Stable Diffusion is a latent text-to-image diffusion model capable of generating photo-realistic images given any text input.
33
+ For more information about how Stable Diffusion functions, please have a look at [🤗's Stable Diffusion with 🧨Diffusers blog](https://huggingface.co/blog/stable_diffusion).
34
+
35
+ The **Stable-Diffusion-v1-4** checkpoint was initialized with the weights of the [Stable-Diffusion-v1-2](https://huggingface.co/CompVis/stable-diffusion-v1-2)
36
+ checkpoint and subsequently fine-tuned for 225k steps at resolution 512x512 on "laion-aesthetics v2 5+", with 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
37
+
38
+ The weights here are intended to be used with the 🧨 Diffusers library. If you are looking for the weights to be loaded into the CompVis Stable Diffusion codebase, [see here](https://huggingface.co/CompVis/stable-diffusion-v-1-4-original).
39
+
40
+ ## Model Details
41
+ - **Developed by:** Robin Rombach, Patrick Esser
42
+ - **Model type:** Diffusion-based text-to-image generation model
43
+ - **Language(s):** English
44
+ - **License:** [The CreativeML OpenRAIL M license](https://huggingface.co/spaces/CompVis/stable-diffusion-license) is an [Open RAIL M license](https://www.licenses.ai/blog/2022/8/18/naming-convention-of-responsible-ai-licenses), adapted from the work that [BigScience](https://bigscience.huggingface.co/) and [the RAIL Initiative](https://www.licenses.ai/) are jointly carrying out in the area of responsible AI licensing. See also [the article about the BLOOM Open RAIL license](https://bigscience.huggingface.co/blog/the-bigscience-rail-license) on which our license is based.
45
+ - **Model Description:** This is a model that can be used to generate and modify images based on text prompts. It is a [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) that uses a fixed, pretrained text encoder ([CLIP ViT-L/14](https://arxiv.org/abs/2103.00020)) as suggested in the [Imagen paper](https://arxiv.org/abs/2205.11487).
46
+ - **Resources for more information:** [GitHub Repository](https://github.com/CompVis/stable-diffusion), [Paper](https://arxiv.org/abs/2112.10752).
47
+ - **Cite as:**
48
+
49
+       @InProceedings{Rombach_2022_CVPR,
+           author    = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
+           title     = {High-Resolution Image Synthesis With Latent Diffusion Models},
+           booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
+           month     = {June},
+           year      = {2022},
+           pages     = {10684-10695}
+       }
57
+
58
+ ## Examples
59
+
60
+ We recommend using [🤗's Diffusers library](https://github.com/huggingface/diffusers) to run Stable Diffusion.
61
+
62
+ ### PyTorch
63
+
64
+ ```bash
65
+ pip install --upgrade diffusers transformers scipy
66
+ ```
67
+
68
+ Running the pipeline with the default PNDM scheduler:
69
+
70
+ ```python
71
+ import torch
72
+ from diffusers import StableDiffusionPipeline
73
+
74
+ model_id = "CompVis/stable-diffusion-v1-4"
75
+ device = "cuda"
76
+
77
+
78
+ pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
79
+ pipe = pipe.to(device)
80
+
81
+ prompt = "a photo of an astronaut riding a horse on mars"
82
+ image = pipe(prompt).images[0]
83
+
84
+ image.save("astronaut_rides_horse.png")
85
+ ```
86
+
87
+ **Note**:
88
+ If you are limited by GPU memory and have less than 4GB of GPU RAM available, please make sure to load the StableDiffusionPipeline in float16 precision (as done above) and, in addition, enable attention slicing to reduce peak memory usage:
89
+
90
+
91
+ ```py
92
+ import torch
+ from diffusers import StableDiffusionPipeline
+
+ model_id = "CompVis/stable-diffusion-v1-4"
+ device = "cuda"
+
94
+ pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
95
+ pipe = pipe.to(device)
96
+ pipe.enable_attention_slicing()
97
+
98
+ prompt = "a photo of an astronaut riding a horse on mars"
99
+ image = pipe(prompt).images[0]
100
+
101
+ image.save("astronaut_rides_horse.png")
102
+ ```
103
+
104
+ To swap out the noise scheduler, pass it to `from_pretrained`:
105
+
106
+ ```python
107
+ import torch
+ from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler
108
+
109
+ model_id = "CompVis/stable-diffusion-v1-4"
110
+
111
+ # Use the Euler scheduler here instead
112
+ scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler")
113
+ pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16)
114
+ pipe = pipe.to("cuda")
115
 
116
+ prompt = "a photo of an astronaut riding a horse on mars"
117
+ image = pipe(prompt).images[0]
118
+
119
+ image.save("astronaut_rides_horse.png")
120
+ ```
121
+
122
+ ### JAX/Flax
123
+
124
+ To use Stable Diffusion on TPUs and GPUs for faster inference, you can leverage JAX/Flax.
125
+
126
+ Running the pipeline with the default PNDMScheduler:
127
 
128
+ ```python
129
+ import jax
130
+ import numpy as np
131
+ from flax.jax_utils import replicate
132
+ from flax.training.common_utils import shard
133
 
134
+ from diffusers import FlaxStableDiffusionPipeline
 
135
 
136
+ pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
137
+ "CompVis/stable-diffusion-v1-4", revision="flax", dtype=jax.numpy.bfloat16
138
+ )
139
 
140
+ prompt = "a photo of an astronaut riding a horse on mars"
141
 
142
+ prng_seed = jax.random.PRNGKey(0)
143
+ num_inference_steps = 50
144
 
145
+ num_samples = jax.device_count()
146
+ prompt = num_samples * [prompt]
147
+ prompt_ids = pipeline.prepare_inputs(prompt)
148
 
149
+ # shard inputs and rng
150
+ params = replicate(params)
151
+ prng_seed = jax.random.split(prng_seed, num_samples)
152
+ prompt_ids = shard(prompt_ids)
153
 
154
+ images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
155
+ images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
156
+ ```
157
 
158
+ **Note**:
159
+ If you are limited by TPU memory, please make sure to load the `FlaxStableDiffusionPipeline` in `bfloat16` precision instead of the default `float32` precision. You can do so by telling diffusers to load the weights from the "bf16" branch:
160
 
161
  ```python
162
+ import jax
163
+ import numpy as np
164
+ from flax.jax_utils import replicate
165
+ from flax.training.common_utils import shard
166
+
167
+ from diffusers import FlaxStableDiffusionPipeline
168
+
169
+ pipeline, params = FlaxStableDiffusionPipeline.from_pretrained(
170
+ "CompVis/stable-diffusion-v1-4", revision="bf16", dtype=jax.numpy.bfloat16
171
+ )
172
 
173
+ prompt = "a photo of an astronaut riding a horse on mars"
174
 
175
+ prng_seed = jax.random.PRNGKey(0)
176
+ num_inference_steps = 50
177
 
178
+ num_samples = jax.device_count()
179
+ prompt = num_samples * [prompt]
180
+ prompt_ids = pipeline.prepare_inputs(prompt)
181
 
182
+ # shard inputs and rng
183
+ params = replicate(params)
184
+ prng_seed = jax.random.split(prng_seed, num_samples)
185
+ prompt_ids = shard(prompt_ids)
186
+
187
+ images = pipeline(prompt_ids, params, prng_seed, num_inference_steps, jit=True).images
188
+ images = pipeline.numpy_to_pil(np.asarray(images.reshape((num_samples,) + images.shape[-3:])))
189
  ```
 
190
 
191
+ # Uses
192
+
193
+ ## Direct Use
194
+ The model is intended for research purposes only. Possible research areas and
195
+ tasks include
196
+
197
+ - Safe deployment of models which have the potential to generate harmful content.
198
+ - Probing and understanding the limitations and biases of generative models.
199
+ - Generation of artworks and use in design and other artistic processes.
200
+ - Applications in educational or creative tools.
201
+ - Research on generative models.
202
+
203
+ Excluded uses are described below.
204
+
205
+ ### Misuse, Malicious Use, and Out-of-Scope Use
206
+ _Note: This section is taken from the [DALLE-MINI model card](https://huggingface.co/dalle-mini/dalle-mini), but applies in the same way to Stable Diffusion v1_.
207
+
208
+
209
+ The model should not be used to intentionally create or disseminate images that create hostile or alienating environments for people. This includes generating images that people would foreseeably find disturbing, distressing, or offensive; or content that propagates historical or current stereotypes.
210
 
211
+ #### Out-of-Scope Use
212
+ The model was not trained to produce factual or true representations of people or events; using the model to generate such content is therefore out of scope for its abilities.
213
 
214
+ #### Misuse and Malicious Use
215
+ Using the model to generate content that is cruel to individuals is a misuse of this model. This includes, but is not limited to:
216
 
217
+ - Generating demeaning, dehumanizing, or otherwise harmful representations of people or their environments, cultures, religions, etc.
218
+ - Intentionally promoting or propagating discriminatory content or harmful stereotypes.
219
+ - Impersonating individuals without their consent.
220
+ - Sexual content without consent of the people who might see it.
221
+ - Mis- and disinformation.
+ - Representations of egregious violence and gore.
223
+ - Sharing of copyrighted or licensed material in violation of its terms of use.
224
+ - Sharing content that is an alteration of copyrighted or licensed material in violation of its terms of use.
225
 
226
+ ## Limitations and Bias
227
 
228
+ ### Limitations
229
 
230
+ - The model does not achieve perfect photorealism
231
+ - The model cannot render legible text
232
+ - The model does not perform well on more difficult tasks which involve compositionality, such as rendering an image corresponding to “A red cube on top of a blue sphere”
233
+ - Faces and people in general may not be generated properly.
234
+ - The model was trained mainly with English captions and will not work as well in other languages.
235
+ - The autoencoding part of the model is lossy
236
+ - The model was trained on a large-scale dataset
237
+ [LAION-5B](https://laion.ai/blog/laion-5b/) which contains adult material
238
+ and is not fit for product use without additional safety mechanisms and
239
+ considerations.
240
+ - No additional measures were used to deduplicate the dataset. As a result, we observe some degree of memorization for images that are duplicated in the training data.
241
+ The training data can be searched at [https://rom1504.github.io/clip-retrieval/](https://rom1504.github.io/clip-retrieval/) to possibly assist in the detection of memorized images.
242
 
243
+ ### Bias
244
 
245
+ While the capabilities of image generation models are impressive, they can also reinforce or exacerbate social biases.
246
+ Stable Diffusion v1 was trained on subsets of [LAION-2B(en)](https://laion.ai/blog/laion-5b/),
247
+ which consists of images that are primarily limited to English descriptions.
248
+ Texts and images from communities and cultures that use other languages are likely to be insufficiently accounted for.
249
+ This affects the overall output of the model, as white and western cultures are often set as the default. Further, the
250
+ ability of the model to generate content with non-English prompts is significantly worse than with English-language prompts.
251
 
252
+ ### Safety Module
253
 
254
+ The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers.
255
+ This checker works by checking model outputs against known hard-coded NSFW concepts.
256
+ The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter.
257
+ Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPTextModel` *after generation* of the images.
258
+ The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept.
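+
+ A minimal sketch of how these flags surface through Diffusers (illustrative only, reusing the example prompt from above): the pipeline output exposes one boolean per image in `nsfw_content_detected`, and flagged images are returned blacked out.
+
+ ```python
+ from diffusers import StableDiffusionPipeline
+
+ # The safety checker is loaded and applied by default for this pipeline.
+ pipe = StableDiffusionPipeline.from_pretrained("CompVis/stable-diffusion-v1-4")
+ pipe = pipe.to("cuda")
+
+ result = pipe("a photo of an astronaut riding a horse on mars")
+ image = result.images[0]
+
+ # One boolean per generated image; True means the checker replaced the image.
+ print(result.nsfw_content_detected)
+ ```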
259
 
 
260
 
261
+ ## Training
262
+
263
+ **Training Data**
264
+ The model developers used the following dataset for training the model:
265
+
266
+ - LAION-2B (en) and subsets thereof (see next section)
267
+
268
+ **Training Procedure**
269
+ Stable Diffusion v1-4 is a latent diffusion model which combines an autoencoder with a diffusion model that is trained in the latent space of the autoencoder. During training,
270
+
271
+ - Images are encoded through an encoder, which turns images into latent representations. The autoencoder uses a relative downsampling factor of 8 and maps images of shape H x W x 3 to latents of shape H/f x W/f x 4 (see the shape sketch after this list).
272
+ - Text prompts are encoded through a ViT-L/14 text-encoder.
273
+ - The non-pooled output of the text encoder is fed into the UNet backbone of the latent diffusion model via cross-attention.
274
+ - The loss is a reconstruction objective between the noise that was added to the latent and the prediction made by the UNet.
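+
+ As a rough shape check (a sketch for illustration, not part of the original card; it assumes the released v1-4 components), the factor-8 downsampling can be observed by encoding a dummy image with the checkpoint's VAE:
+
+ ```python
+ import torch
+ from diffusers import AutoencoderKL
+
+ # Load only the autoencoder of the v1-4 checkpoint.
+ vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")
+
+ image = torch.randn(1, 3, 512, 512)  # stand-in for a preprocessed 512x512 RGB image
+ with torch.no_grad():
+     latents = vae.encode(image).latent_dist.sample()
+
+ print(latents.shape)  # torch.Size([1, 4, 64, 64]): H/8 x W/8 x 4, as described above
+ ```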
275
+
276
+ We currently provide four checkpoints, which were trained as follows.
277
+ - [`stable-diffusion-v1-1`](https://huggingface.co/CompVis/stable-diffusion-v1-1): 237,000 steps at resolution `256x256` on [laion2B-en](https://huggingface.co/datasets/laion/laion2B-en).
278
+ 194,000 steps at resolution `512x512` on [laion-high-resolution](https://huggingface.co/datasets/laion/laion-high-resolution) (170M examples from LAION-5B with resolution `>= 1024x1024`).
279
+ - [`stable-diffusion-v1-2`](https://huggingface.co/CompVis/stable-diffusion-v1-2): Resumed from `stable-diffusion-v1-1`.
280
+ 515,000 steps at resolution `512x512` on "laion-improved-aesthetics" (a subset of laion2B-en,
281
+ filtered to images with an original size `>= 512x512`, estimated aesthetics score `> 5.0`, and an estimated watermark probability `< 0.5`. The watermark estimate is from the LAION-5B metadata, the aesthetics score is estimated using an [improved aesthetics estimator](https://github.com/christophschuhmann/improved-aesthetic-predictor)).
282
+ - [`stable-diffusion-v1-3`](https://huggingface.co/CompVis/stable-diffusion-v1-3): Resumed from `stable-diffusion-v1-2`. 195,000 steps at resolution `512x512` on "laion-improved-aesthetics" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598).
+ - [`stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4): Resumed from `stable-diffusion-v1-2`. 225,000 steps at resolution `512x512` on "laion-aesthetics v2 5+" and 10% dropping of the text-conditioning to improve [classifier-free guidance sampling](https://arxiv.org/abs/2207.12598) (see the note on guidance below).
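+
+ A note on the guidance mentioned above (a sketch of the standard classifier-free guidance combination, not code from this repository): dropping the text-conditioning during training lets the same UNet predict noise both with and without the prompt, and at sampling time the two predictions are blended.
+
+ ```python
+ # Classifier-free guidance: blend unconditional and text-conditioned noise predictions.
+ # noise_uncond and noise_text are UNet outputs; guidance_scale sets the strength.
+ def guided_noise(noise_uncond, noise_text, guidance_scale=7.5):
+     return noise_uncond + guidance_scale * (noise_text - noise_uncond)
+ ```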
284
+
285
+ - **Hardware:** 32 x 8 x A100 GPUs
286
+ - **Optimizer:** AdamW
287
+ - **Gradient Accumulations**: 2
288
+ - **Batch:** 32 x 8 x 2 x 4 = 2048
289
+ - **Learning rate:** warmup to 0.0001 for 10,000 steps and then kept constant (see the sketch below)
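+
+ A minimal sketch of that learning-rate schedule (an illustration of the numbers above, not training code from this repository):
+
+ ```python
+ def lr_at_step(step, base_lr=1e-4, warmup_steps=10_000):
+     """Linear warmup to the base learning rate, then held constant."""
+     if step < warmup_steps:
+         return base_lr * (step + 1) / warmup_steps
+     return base_lr
+ ```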
290
+
291
+ ## Evaluation Results
292
+ Evaluations with different classifier-free guidance scales (1.5, 2.0, 3.0, 4.0,
293
+ 5.0, 6.0, 7.0, 8.0) and 50 PLMS sampling
294
+ steps show the relative improvements of the checkpoints:
295
+
296
+ ![pareto](https://huggingface.co/CompVis/stable-diffusion/resolve/main/v1-variants-scores.jpg)
297
+
298
+ Evaluated using 50 PLMS steps and 10,000 random prompts from the COCO2017 validation set at 512x512 resolution. Not optimized for FID scores.
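+
+ For reference, the two settings swept here map directly onto the Diffusers call; a small sketch assuming the `pipe` object from the PyTorch example above (the values are examples, not recommendations):
+
+ ```python
+ # guidance_scale is the classifier-free guidance weight; num_inference_steps the number of sampler steps.
+ image = pipe("a photo of an astronaut riding a horse on mars",
+              guidance_scale=7.5, num_inference_steps=50).images[0]
+ ```
+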
299
+ ## Environmental Impact
300
+
301
+ **Stable Diffusion v1 Estimated Emissions**
302
+ Based on the information below, we estimate the following CO2 emissions using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). The hardware, runtime, cloud provider, and compute region were utilized to estimate the carbon impact.
303
+
304
+ - **Hardware Type:** A100 PCIe 40GB
305
+ - **Hours used:** 150000
306
+ - **Cloud Provider:** AWS
307
+ - **Compute Region:** US-east
308
+ - **Carbon Emitted (Power consumption x Time x Carbon produced based on location of power grid):** 11250 kg CO2 eq.
309
+
310
+
311
+ ## Citation
312
+
313
  ```bibtex
314
+ @InProceedings{Rombach_2022_CVPR,
315
+ author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn},
316
+ title = {High-Resolution Image Synthesis With Latent Diffusion Models},
317
+ booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
318
+ month = {June},
319
+ year = {2022},
320
+ pages = {10684-10695}
321
+ }
322
  ```
323
+
324
+ *This model card was written by: Robin Rombach and Patrick Esser and is based on the [DALL-E Mini model card](https://huggingface.co/dalle-mini/dalle-mini).*
feature_extractor/preprocessor_config.json ADDED
@@ -0,0 +1,20 @@
1
+ {
2
+ "crop_size": 224,
3
+ "do_center_crop": true,
4
+ "do_convert_rgb": true,
5
+ "do_normalize": true,
6
+ "do_resize": true,
7
+ "feature_extractor_type": "CLIPFeatureExtractor",
8
+ "image_mean": [
9
+ 0.48145466,
10
+ 0.4578275,
11
+ 0.40821073
12
+ ],
13
+ "image_std": [
14
+ 0.26862954,
15
+ 0.26130258,
16
+ 0.27577711
17
+ ],
18
+ "resample": 3,
19
+ "size": 224
20
+ }
model_index.json ADDED
@@ -0,0 +1,32 @@
1
+ {
2
+ "_class_name": "StableDiffusionPipeline",
3
+ "_diffusers_version": "0.2.2",
4
+ "feature_extractor": [
5
+ "transformers",
6
+ "CLIPImageProcessor"
7
+ ],
8
+ "safety_checker": [
9
+ "stable_diffusion",
10
+ "StableDiffusionSafetyChecker"
11
+ ],
12
+ "scheduler": [
13
+ "diffusers",
14
+ "PNDMScheduler"
15
+ ],
16
+ "text_encoder": [
17
+ "transformers",
18
+ "CLIPTextModel"
19
+ ],
20
+ "tokenizer": [
21
+ "transformers",
22
+ "CLIPTokenizer"
23
+ ],
24
+ "unet": [
25
+ "diffusers",
26
+ "UNet2DConditionModel"
27
+ ],
28
+ "vae": [
29
+ "diffusers",
30
+ "AutoencoderKL"
31
+ ]
32
+ }
safety_checker/config.json ADDED
@@ -0,0 +1,171 @@
1
+ {
2
+ "_name_or_path": "./safety_module",
3
+ "architectures": [
4
+ "StableDiffusionSafetyChecker"
5
+ ],
6
+ "initializer_factor": 1.0,
7
+ "logit_scale_init_value": 2.6592,
8
+ "model_type": "clip",
9
+ "projection_dim": 768,
10
+ "text_config": {
11
+ "_name_or_path": "",
12
+ "add_cross_attention": false,
13
+ "architectures": null,
14
+ "attention_dropout": 0.0,
15
+ "bad_words_ids": null,
16
+ "bos_token_id": 0,
17
+ "chunk_size_feed_forward": 0,
18
+ "cross_attention_hidden_size": null,
19
+ "decoder_start_token_id": null,
20
+ "diversity_penalty": 0.0,
21
+ "do_sample": false,
22
+ "dropout": 0.0,
23
+ "early_stopping": false,
24
+ "encoder_no_repeat_ngram_size": 0,
25
+ "eos_token_id": 2,
26
+ "exponential_decay_length_penalty": null,
27
+ "finetuning_task": null,
28
+ "forced_bos_token_id": null,
29
+ "forced_eos_token_id": null,
30
+ "hidden_act": "quick_gelu",
31
+ "hidden_size": 768,
32
+ "id2label": {
33
+ "0": "LABEL_0",
34
+ "1": "LABEL_1"
35
+ },
36
+ "initializer_factor": 1.0,
37
+ "initializer_range": 0.02,
38
+ "intermediate_size": 3072,
39
+ "is_decoder": false,
40
+ "is_encoder_decoder": false,
41
+ "label2id": {
42
+ "LABEL_0": 0,
43
+ "LABEL_1": 1
44
+ },
45
+ "layer_norm_eps": 1e-05,
46
+ "length_penalty": 1.0,
47
+ "max_length": 20,
48
+ "max_position_embeddings": 77,
49
+ "min_length": 0,
50
+ "model_type": "clip_text_model",
51
+ "no_repeat_ngram_size": 0,
52
+ "num_attention_heads": 12,
53
+ "num_beam_groups": 1,
54
+ "num_beams": 1,
55
+ "num_hidden_layers": 12,
56
+ "num_return_sequences": 1,
57
+ "output_attentions": false,
58
+ "output_hidden_states": false,
59
+ "output_scores": false,
60
+ "pad_token_id": 1,
61
+ "prefix": null,
62
+ "problem_type": null,
63
+ "pruned_heads": {},
64
+ "remove_invalid_values": false,
65
+ "repetition_penalty": 1.0,
66
+ "return_dict": true,
67
+ "return_dict_in_generate": false,
68
+ "sep_token_id": null,
69
+ "task_specific_params": null,
70
+ "temperature": 1.0,
71
+ "tie_encoder_decoder": false,
72
+ "tie_word_embeddings": true,
73
+ "tokenizer_class": null,
74
+ "top_k": 50,
75
+ "top_p": 1.0,
76
+ "torch_dtype": null,
77
+ "torchscript": false,
78
+ "transformers_version": "4.21.0.dev0",
79
+ "typical_p": 1.0,
80
+ "use_bfloat16": false,
81
+ "vocab_size": 49408
82
+ },
83
+ "text_config_dict": {
84
+ "hidden_size": 768,
85
+ "intermediate_size": 3072,
86
+ "num_attention_heads": 12,
87
+ "num_hidden_layers": 12
88
+ },
89
+ "torch_dtype": "float32",
90
+ "transformers_version": null,
91
+ "vision_config": {
92
+ "_name_or_path": "",
93
+ "add_cross_attention": false,
94
+ "architectures": null,
95
+ "attention_dropout": 0.0,
96
+ "bad_words_ids": null,
97
+ "bos_token_id": null,
98
+ "chunk_size_feed_forward": 0,
99
+ "cross_attention_hidden_size": null,
100
+ "decoder_start_token_id": null,
101
+ "diversity_penalty": 0.0,
102
+ "do_sample": false,
103
+ "dropout": 0.0,
104
+ "early_stopping": false,
105
+ "encoder_no_repeat_ngram_size": 0,
106
+ "eos_token_id": null,
107
+ "exponential_decay_length_penalty": null,
108
+ "finetuning_task": null,
109
+ "forced_bos_token_id": null,
110
+ "forced_eos_token_id": null,
111
+ "hidden_act": "quick_gelu",
112
+ "hidden_size": 1024,
113
+ "id2label": {
114
+ "0": "LABEL_0",
115
+ "1": "LABEL_1"
116
+ },
117
+ "image_size": 224,
118
+ "initializer_factor": 1.0,
119
+ "initializer_range": 0.02,
120
+ "intermediate_size": 4096,
121
+ "is_decoder": false,
122
+ "is_encoder_decoder": false,
123
+ "label2id": {
124
+ "LABEL_0": 0,
125
+ "LABEL_1": 1
126
+ },
127
+ "layer_norm_eps": 1e-05,
128
+ "length_penalty": 1.0,
129
+ "max_length": 20,
130
+ "min_length": 0,
131
+ "model_type": "clip_vision_model",
132
+ "no_repeat_ngram_size": 0,
133
+ "num_attention_heads": 16,
134
+ "num_beam_groups": 1,
135
+ "num_beams": 1,
136
+ "num_hidden_layers": 24,
137
+ "num_return_sequences": 1,
138
+ "output_attentions": false,
139
+ "output_hidden_states": false,
140
+ "output_scores": false,
141
+ "pad_token_id": null,
142
+ "patch_size": 14,
143
+ "prefix": null,
144
+ "problem_type": null,
145
+ "pruned_heads": {},
146
+ "remove_invalid_values": false,
147
+ "repetition_penalty": 1.0,
148
+ "return_dict": true,
149
+ "return_dict_in_generate": false,
150
+ "sep_token_id": null,
151
+ "task_specific_params": null,
152
+ "temperature": 1.0,
153
+ "tie_encoder_decoder": false,
154
+ "tie_word_embeddings": true,
155
+ "tokenizer_class": null,
156
+ "top_k": 50,
157
+ "top_p": 1.0,
158
+ "torch_dtype": null,
159
+ "torchscript": false,
160
+ "transformers_version": "4.21.0.dev0",
161
+ "typical_p": 1.0,
162
+ "use_bfloat16": false
163
+ },
164
+ "vision_config_dict": {
165
+ "hidden_size": 1024,
166
+ "intermediate_size": 4096,
167
+ "num_attention_heads": 16,
168
+ "num_hidden_layers": 24,
169
+ "patch_size": 14
170
+ }
171
+ }
scheduler/scheduler_config.json ADDED
@@ -0,0 +1,13 @@
1
+ {
2
+ "_class_name": "PNDMScheduler",
3
+ "_diffusers_version": "0.7.0.dev0",
4
+ "beta_end": 0.012,
5
+ "beta_schedule": "scaled_linear",
6
+ "beta_start": 0.00085,
7
+ "num_train_timesteps": 1000,
8
+ "set_alpha_to_one": false,
9
+ "skip_prk_steps": true,
10
+ "steps_offset": 1,
11
+ "trained_betas": null,
12
+ "clip_sample": false
13
+ }
text_encoder/config.json ADDED
@@ -0,0 +1,24 @@
1
+ {
2
+ "_name_or_path": "openai/clip-vit-large-patch14",
3
+ "architectures": [
4
+ "CLIPTextModel"
5
+ ],
6
+ "attention_dropout": 0.0,
7
+ "bos_token_id": 0,
8
+ "dropout": 0.0,
9
+ "eos_token_id": 2,
10
+ "hidden_act": "quick_gelu",
11
+ "hidden_size": 768,
12
+ "initializer_factor": 1.0,
13
+ "initializer_range": 0.02,
14
+ "intermediate_size": 3072,
15
+ "layer_norm_eps": 1e-05,
16
+ "max_position_embeddings": 77,
17
+ "model_type": "clip_text_model",
18
+ "num_attention_heads": 12,
19
+ "num_hidden_layers": 12,
20
+ "pad_token_id": 1,
21
+ "torch_dtype": "float32",
22
+ "transformers_version": "4.21.0.dev0",
23
+ "vocab_size": 49408
24
+ }
tokenizer/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer/special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
1
+ {
2
+ "bos_token": {
3
+ "content": "<|startoftext|>",
4
+ "lstrip": false,
5
+ "normalized": true,
6
+ "rstrip": false,
7
+ "single_word": false
8
+ },
9
+ "eos_token": {
10
+ "content": "<|endoftext|>",
11
+ "lstrip": false,
12
+ "normalized": true,
13
+ "rstrip": false,
14
+ "single_word": false
15
+ },
16
+ "pad_token": "<|endoftext|>",
17
+ "unk_token": {
18
+ "content": "<|endoftext|>",
19
+ "lstrip": false,
20
+ "normalized": true,
21
+ "rstrip": false,
22
+ "single_word": false
23
+ }
24
+ }
tokenizer/tokenizer_config.json ADDED
@@ -0,0 +1,34 @@
1
+ {
2
+ "add_prefix_space": false,
3
+ "bos_token": {
4
+ "__type": "AddedToken",
5
+ "content": "<|startoftext|>",
6
+ "lstrip": false,
7
+ "normalized": true,
8
+ "rstrip": false,
9
+ "single_word": false
10
+ },
11
+ "do_lower_case": true,
12
+ "eos_token": {
13
+ "__type": "AddedToken",
14
+ "content": "<|endoftext|>",
15
+ "lstrip": false,
16
+ "normalized": true,
17
+ "rstrip": false,
18
+ "single_word": false
19
+ },
20
+ "errors": "replace",
21
+ "model_max_length": 77,
22
+ "name_or_path": "openai/clip-vit-large-patch14",
23
+ "pad_token": "<|endoftext|>",
24
+ "special_tokens_map_file": "./special_tokens_map.json",
25
+ "tokenizer_class": "CLIPTokenizer",
26
+ "unk_token": {
27
+ "__type": "AddedToken",
28
+ "content": "<|endoftext|>",
29
+ "lstrip": false,
30
+ "normalized": true,
31
+ "rstrip": false,
32
+ "single_word": false
33
+ }
34
+ }
tokenizer/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
unet/config.json ADDED
@@ -0,0 +1,36 @@
1
+ {
2
+ "_class_name": "UNet2DConditionModel",
3
+ "_diffusers_version": "0.2.2",
4
+ "act_fn": "silu",
5
+ "attention_head_dim": 8,
6
+ "block_out_channels": [
7
+ 320,
8
+ 640,
9
+ 1280,
10
+ 1280
11
+ ],
12
+ "center_input_sample": false,
13
+ "cross_attention_dim": 768,
14
+ "down_block_types": [
15
+ "CrossAttnDownBlock2D",
16
+ "CrossAttnDownBlock2D",
17
+ "CrossAttnDownBlock2D",
18
+ "DownBlock2D"
19
+ ],
20
+ "downsample_padding": 1,
21
+ "flip_sin_to_cos": true,
22
+ "freq_shift": 0,
23
+ "in_channels": 4,
24
+ "layers_per_block": 2,
25
+ "mid_block_scale_factor": 1,
26
+ "norm_eps": 1e-05,
27
+ "norm_num_groups": 32,
28
+ "out_channels": 4,
29
+ "sample_size": 64,
30
+ "up_block_types": [
31
+ "UpBlock2D",
32
+ "CrossAttnUpBlock2D",
33
+ "CrossAttnUpBlock2D",
34
+ "CrossAttnUpBlock2D"
35
+ ]
36
+ }
v1-variants-scores.jpg ADDED
vae/config.json ADDED
@@ -0,0 +1,29 @@
1
+ {
2
+ "_class_name": "AutoencoderKL",
3
+ "_diffusers_version": "0.2.2",
4
+ "act_fn": "silu",
5
+ "block_out_channels": [
6
+ 128,
7
+ 256,
8
+ 512,
9
+ 512
10
+ ],
11
+ "down_block_types": [
12
+ "DownEncoderBlock2D",
13
+ "DownEncoderBlock2D",
14
+ "DownEncoderBlock2D",
15
+ "DownEncoderBlock2D"
16
+ ],
17
+ "in_channels": 3,
18
+ "latent_channels": 4,
19
+ "layers_per_block": 2,
20
+ "out_channels": 3,
21
+ "sample_size": 512,
22
+ "scaling_factor": 0.18215,
23
+ "up_block_types": [
24
+ "UpDecoderBlock2D",
25
+ "UpDecoderBlock2D",
26
+ "UpDecoderBlock2D",
27
+ "UpDecoderBlock2D"
28
+ ]
29
+ }