---
pipeline_tag: text-to-image
inference: true
license: openrail++
language:
- en
tags:
- Deci AI
- DeciDiffusion
datasets:
- laion/laion-art
- laion/laion2B-en
---
# DeciDiffusion 1.0

DeciDiffusion 1.0 is an 820 million parameter text-to-image latent diffusion model trained on the LAION-v2 dataset and fine-tuned on the LAION-ART dataset. Advanced training techniques were used to speed up training and achieve better inference quality.

## Model Details

- **Developed by:** Deci
- **Model type:** Diffusion-based text-to-image generation model
- **Language(s) (NLP):** English
- **Code License:** The code in this repository is released under the [Apache 2.0 License](https://huggingface.co/Deci/DeciDiffusion-v1-0/blob/main/LICENSE-MODEL.md) 
- **Weights License:** The weights are released under the [CreativeML Open RAIL++-M License](https://huggingface.co/Deci/DeciDiffusion-v1-0/blob/main/LICENSE-WEIGHTS.md)

### Model Sources
- **Blog:** [A technical overview and comparison to Stable Diffusion 1.5](https://deci.ai/blog/decidiffusion-1-0-3x-faster-than-stable-diffusion-same-quality/?utm_campaign=repos&utm_source=hugging-face&utm_medium=model-card&utm_content=decidiffusion-v1)
- **Demo:** [Experience DeciDiffusion in action](https://huggingface.co/spaces/Deci/DeciDiffusion-v1-0)

## Model Architecture

DeciDiffusion 1.0 is a diffusion-based text-to-image generation model. While it maintains foundational architecture elements from Stable Diffusion, such as the Variational Autoencoder (VAE) and CLIP's pre-trained Text Encoder, DeciDiffusion introduces significant enhancements. The primary innovation is the substitution of U-Net with the more efficient U-Net-NAS, a design pioneered by Deci. This novel component streamlines the model by reducing the number of parameters, leading to superior computational efficiency.
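
As a quick sanity check on where those parameter savings land, the pipeline's components can be inspected directly. The sketch below reuses the loading recipe from the "How to Use" section further down; the counts it prints come from the published weights rather than figures asserted here.

```python
from diffusers import StableDiffusionPipeline
import torch

# Load the pipeline and its U-Net-NAS weights (same recipe as in "How to Use" below)
checkpoint = "Deci/DeciDiffusion-v1-0"
pipeline = StableDiffusionPipeline.from_pretrained(checkpoint, custom_pipeline=checkpoint, torch_dtype=torch.float16)
pipeline.unet = pipeline.unet.from_pretrained(checkpoint, subfolder='flexible_unet', torch_dtype=torch.float16)

# Count parameters per component to see where U-Net-NAS saves capacity
for name in ("unet", "vae", "text_encoder"):
    module = getattr(pipeline, name)
    print(f"{name}: {sum(p.numel() for p in module.parameters()) / 1e6:.0f}M parameters")
```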


## Training Details

### Training Procedure

The model was trained in 4 phases: 

- **Phase 1:** Trained from scratch for 1.28 million steps at resolution 256x256 on a 320-million-sample subset of LAION-v2.
- **Phase 2:** Trained for an additional 870k steps at resolution 512x512 on the same dataset to learn finer-grained detail.
- **Phase 3:** Trained for 65k steps with EMA, a different learning-rate scheduler, and higher-quality data.
- **Phase 4:** Fine-tuned on a 2M-sample subset of LAION-ART.

### Training Techniques

DeciDiffusion 1.0 was trained to be sample-efficient, i.e., to produce high-quality results using fewer diffusion timesteps during inference.
The following training techniques were used to that end (a toy sketch of two of them follows the list):

- **[V-prediction](https://arxiv.org/pdf/2202.00512.pdf)**
- **[Enforcing zero terminal SNR during training](https://arxiv.org/pdf/2305.08891.pdf)**
- **[Employing a cosine variance schedule](https://arxiv.org/pdf/2102.09672.pdf)**
- **[Using a Min-SNR loss weighting strategy](https://arxiv.org/abs/2303.09556)**
- **[Employing Rescale Classifier-Free Guidance during inference](https://arxiv.org/pdf/2305.08891.pdf)**
- **[Sampling from the last timestep](https://arxiv.org/pdf/2305.08891.pdf)**
- **[Utilizing LAMB optimizer with large batch](https://arxiv.org/abs/1904.00962)**

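As a toy illustration of two of these techniques (assumed tensor shapes and schedule variables, not the actual training code), the v-prediction target and a Min-SNR loss weight can be written as:

```python
import torch

def v_target(x0: torch.Tensor, eps: torch.Tensor, alpha_bar_t: torch.Tensor) -> torch.Tensor:
    # v-prediction target: v = sqrt(alpha_bar) * eps - sqrt(1 - alpha_bar) * x0
    return alpha_bar_t.sqrt() * eps - (1.0 - alpha_bar_t).sqrt() * x0

def min_snr_weight(alpha_bar_t: torch.Tensor, gamma: float = 5.0) -> torch.Tensor:
    # Min-SNR weighting: clamp the signal-to-noise ratio at gamma so that easy
    # (high-SNR) timesteps don't dominate the loss; for v-prediction the paper
    # divides by (SNR + 1)
    snr = alpha_bar_t / (1.0 - alpha_bar_t)
    return snr.clamp(max=gamma) / (snr + 1.0)
```
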
The following techniques were used to shorten training time: 

- **Using precomputed VAE and CLIP latents**
- **Using EMA only in the last phase of training** 

### Additional Details
#### Phase 1
- **Hardware:** 8 x 8 x A100 (80GB)
- **Optimizer:** AdamW
- **Batch:** 8192
- **Learning rate:** 1e-4
  
#### Phases 2-4
- **Hardware:** 8 x 8 x H100 (80GB)
- **Optimizer:** LAMB
- **Batch:** 6144
- **Learning rate:** 5e-3

## Evaluation

On average, DeciDiffusion’s generated images after 30 iterations achieve Fréchet Inception Distance (FID) scores comparable to those generated by Stable Diffusion 1.5 after 50 iterations.
However, many recent articles question the reliability of FID scores, warning that FID results [tend to be fragile](https://huggingface.co/docs/diffusers/conceptual/evaluation), that they are [inconsistent with human judgments on MNIST](https://arxiv.org/pdf/1803.07474.pdf) and [subjective evaluation](https://arxiv.org/pdf/2307.01952.pdf), that they are [statistically biased](https://arxiv.org/pdf/1911.07023.pdf), and that they [give better scores](https://arxiv.org/pdf/2001.03653.pdf) to memorization of the dataset than to generalization beyond it. 

Given this skepticism about FID’s reliability, we chose to assess DeciDiffusion 1.0's sample efficiency by performing a user study against Stable Diffusion 1.5. Our source for image captions was the [PartiPrompts](https://arxiv.org/pdf/2206.10789.pdf) benchmark, which was introduced to compare large text-to-image models on various challenging prompts.  

For our study, we chose 10 random prompts and, for each prompt, generated 3 images with Stable Diffusion 1.5 configured to run for 50 iterations and 3 images with DeciDiffusion configured to run for 30 iterations.

We then presented the 30 side-by-side comparisons to a group of professionals, who voted based on adherence to the prompt and aesthetic value.

According to the results, DeciDiffusion at 30 iterations exhibits an edge in aesthetics, but when it comes to prompt alignment, it’s on par with Stable Diffusion at 50 iterations.

The following table summarizes our survey results:

| Answer | Better image aesthetics | Better prompt alignment |
|:----------|:----------|:----------|
| DeciDiffusion 1.0, 30 iterations | 41.1% | 20.8% |
| Stable Diffusion v1.5, 50 iterations | 30.5% | 18.8% |
| On par | 26.3% | 39.1% |
| Neither | 2.1% | 11.4% |

## Runtime Benchmarks

The following table provides an image latency comparison between DeciDiffusion 1.0 and Stable Diffusion v1.5.

DeciDiffusion 1.0 vs. Stable Diffusion v1.5 at FP16 precision:

| Inference Tool + Iterations | DeciDiffusion 1.0 on A10 (seconds/image) | Stable Diffusion v1.5 on A10 (seconds/image) |
|:----------|:----------|:----------|
| PyTorch, 50 iterations | 2.11 | 2.95 |
| Infery, 50 iterations | 1.55 | 2.08 |
| PyTorch, 35 iterations | 1.52 | - |
| Infery, 35 iterations | 1.07 | - |
| PyTorch, 30 iterations | 1.29 | - |
| Infery, 30 iterations | 0.98 | - |
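
The PyTorch figures can be sanity-checked with a simple wall-clock loop. The snippet below is an assumed measurement setup (one warm-up run, then averaged timed runs), not the exact benchmark harness; the Infery rows require Deci's Infery SDK and are not reproduced here.

```python
import time
import torch
from diffusers import StableDiffusionPipeline

checkpoint = "Deci/DeciDiffusion-v1-0"
pipe = StableDiffusionPipeline.from_pretrained(checkpoint, custom_pipeline=checkpoint, torch_dtype=torch.float16)
pipe.unet = pipe.unet.from_pretrained(checkpoint, subfolder='flexible_unet', torch_dtype=torch.float16)
pipe = pipe.to('cuda')

prompt = 'A photo of an astronaut riding a horse on Mars'
pipe(prompt=prompt, num_inference_steps=30)  # warm-up run, excluded from timing

torch.cuda.synchronize()
start = time.perf_counter()
runs = 10
for _ in range(runs):
    pipe(prompt=prompt, num_inference_steps=30)
torch.cuda.synchronize()
print(f"{(time.perf_counter() - start) / runs:.2f} seconds/image")
```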

## How to Use

```python
# pip install diffusers transformers torch

from diffusers import StableDiffusionPipeline
import torch

device = 'cuda' if torch.cuda.is_available() else 'cpu'

checkpoint = "Deci/DeciDiffusion-v1-0"

# Load the pipeline; custom_pipeline points at the pipeline code shipped in the same repository
pipeline = StableDiffusionPipeline.from_pretrained(checkpoint, custom_pipeline=checkpoint, torch_dtype=torch.float16)

# Swap in the U-Net-NAS weights stored under the 'flexible_unet' subfolder
pipeline.unet = pipeline.unet.from_pretrained(checkpoint, subfolder='flexible_unet', torch_dtype=torch.float16)

pipeline = pipeline.to(device)

img = pipeline(prompt=['A photo of an astronaut riding a horse on Mars']).images[0]
```
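
In this snippet, `custom_pipeline=checkpoint` tells `diffusers` to load the custom pipeline code shipped in the model repository, and the `flexible_unet` subfolder holds the U-Net-NAS weights. The returned `img` is a standard PIL image, so it can be saved with, e.g., `img.save('astronaut.png')`.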

## Uses

### Misuse, Malicious Use, and Out-of-Scope Use
The model must not be employed to deliberately produce or spread images that foster hostile or unwelcoming settings for individuals. This encompasses generating visuals that might be predictably upsetting, distressing, or inappropriate, as well as content that perpetuates existing or historical biases.

#### Out-of-Scope Use
The model isn't designed to produce accurate or truthful depictions of people or events. Thus, using it for such purposes exceeds its intended capabilities.

#### Misuse and Malicious Use
Misusing the model to produce content that harms or maligns individuals is strictly discouraged. Such misuses include, but aren't limited to:

- Creating offensive, degrading, or damaging portrayals of individuals, their cultures, religions, or surroundings.
- Intentionally promoting or propagating discriminatory content or harmful stereotypes.
- Posing as someone else without their agreement.
- Generating explicit content without the knowledge or agreement of potential viewers.
- Distributing copyrighted or licensed content against its usage terms.
- Sharing modified versions of copyrighted or licensed content in breach of its usage guidelines.
  
## Limitations and Bias

### Limitations

The model has certain limitations and may not function optimally in the following scenarios:

- It doesn't produce completely photorealistic images.
- Rendering legible text is beyond its capability.
- Complex compositions, like visualizing “A green sphere to the left of a blue square”, are challenging for the model.
- Generation of faces and human figures may be imprecise.
- It is primarily optimized for English captions and might not be as effective with other languages.
- The autoencoding component of the model is lossy.

### Bias
The remarkable abilities of image generation models can unintentionally amplify societal biases. DeciDiffusion was mainly trained on subsets of LAION-v2, focused on English descriptions. Consequently, non-English communities and cultures might be underrepresented, leading to a bias towards white and western norms. Outputs from non-English prompts are notably less accurate. Given these biases, users should approach DeciDiffusion with discretion, regardless of input.


## How to Cite

Please cite this model using the following format:

```bibtex
@misc{DeciFoundationModels,
  title = {DeciDiffusion 1.0},
  author = {DeciAI Research Team},
  year = {2023},
  url = {https://huggingface.co/deci/decidiffusion-v1-0},
}
```