Spaces: Running on Zero

adamelliotfields committed · 4d6f2bc
1 Parent(s): 584d2bd

Add app
Browse files
- about.md +55 -0
- demo.css +59 -0
- demo.js +11 -0
- demo.py +222 -0
- generate.py +219 -0
- header.html +7 -0
- requirements.txt +9 -0
about.md
ADDED
@@ -0,0 +1,55 @@
## Usage

Enter a prompt and click `Generate`. [Civitai](https://civitai.com) has an excellent guide on [prompting](https://education.civitai.com/civitais-prompt-crafting-guide-part-1-basics/).

### Compel

Positive and negative prompts are embedded by [Compel](https://github.com/damian0815/compel), enabling weighting and blending. See [syntax features](https://github.com/damian0815/compel/blob/main/doc/syntax.md).

### Arrays

Arrays allow you to generate different images from a single prompt. For example, `a cute [[cat,corgi,koala]]` will expand into 3 prompts. Note that it only works for the positive prompt. You must also increase `Images` to generate more than 1 image at a time. Inspired by [Fooocus](https://github.com/lllyasviel/Fooocus/pull/1503).

### Autoincrement

If `Autoincrement` is checked, the seed will be incremented for each image. When using arrays, you might want to uncheck this so the same seed is used for each prompt variation.

## Models

Models are diffusion pipelines. All use `float16`. Recommended settings are shown below:

* [fluently/fluently-v4](https://huggingface.co/fluently/Fluently-v4)
  - sampler: DPM++ 2M, guidance: 5-7, steps: 20-30
* [lykon/dreamshaper-8](https://huggingface.co/Lykon/dreamshaper-8)
  - sampler: DEIS 2M
* [prompthero/openjourney-v4](https://huggingface.co/prompthero/openjourney-v4)
  - sampler: PNDM
* [runwayml/stable-diffusion-v1-5](https://huggingface.co/runwayml/stable-diffusion-v1-5)
  - sampler: PNDM
* [sg161222/realistic_vision_v5.1](https://huggingface.co/SG161222/Realistic_Vision_V5.1_noVAE)
  - sampler: DPM++ 2M, guidance: 4-7

### Schedulers

All are based on [k_diffusion](https://github.com/crowsonkb/k-diffusion) except [DEIS](https://github.com/qsh-zh/deis) and [DPM++](https://github.com/LuChengTHU/dpm-solver). Optionally, the [Karras](https://arxiv.org/abs/2206.00364) noise schedule can be used.

* [DEIS 2M](https://huggingface.co/docs/diffusers/en/api/schedulers/deis)
* [DPM++ 2M](https://huggingface.co/docs/diffusers/en/api/schedulers/multistep_dpm_solver)
* [DPM2 a](https://huggingface.co/docs/diffusers/api/schedulers/dpm_discrete_ancestral)
* [Euler a](https://huggingface.co/docs/diffusers/en/api/schedulers/euler_ancestral)
* [Heun](https://huggingface.co/docs/diffusers/api/schedulers/heun)
* [LMS](https://huggingface.co/docs/diffusers/api/schedulers/lms_discrete)
* [PNDM](https://huggingface.co/docs/diffusers/api/schedulers/pndm)

### VAE

All models use [madebyollin/taesd](https://huggingface.co/madebyollin/taesd) for speed.

## TODO

- [ ] Performance improvements
- [ ] Support `bfloat16`
- [ ] Support LoRA
- [ ] Add VAE radio
- [ ] Add styles
- [ ] Badges
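The `[[...]]` array syntax described in the Usage section can be sketched in a few lines; `expand` here is a hypothetical standalone helper mirroring the app's `parse_prompt`:

```python
import re
from itertools import product

def expand(prompt: str) -> list[str]:
    # find every [[a,b,...]] group, then take the cartesian product of the options
    arrays = re.findall(r"\[\[(.*?)\]\]", prompt)
    if not arrays:
        return [prompt]
    prompts = []
    for combo in product(*(a.split(",") for a in arrays)):
        out = prompt
        for arr, token in zip(arrays, combo):
            # substitute each group with one option from this combination
            out = out.replace(f"[[{arr}]]", token.strip(), 1)
        prompts.append(out)
    return prompts

print(expand("a cute [[cat,corgi,koala]]"))
# → ['a cute cat', 'a cute corgi', 'a cute koala']
```

With two arrays, the prompt count multiplies (2 × 2 = 4 prompts for `a [[red,blue]] [[car,bike]]`), which is why `Images` must be raised to see each variation.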
demo.css
ADDED
@@ -0,0 +1,59 @@
```css
.accordion {
  background-color: transparent;
}
.accordion > button {
  justify-content: flex-start;
}
.accordion > button > span:first-child {
  width: auto;
  margin-right: 4px;
}

.gr-group div {
  gap: 0px;
}

.tabs, .tabitem, .tab-nav, .tab-nav > .selected {
  border-width: 0px;
}

#about {
  padding: 20px 24px;
}

#gallery {
  background-color: var(--bg);
}
#gallery > div:nth-child(2) {
  overflow-y: hidden;
}
.dark #gallery {
  background-color: var(--background-fill-primary);
}

#header {
  display: flex;
  align-items: center;
}
#header > svg {
  display: inline-block;
  width: 1.75rem;
  height: 1.75rem;
  margin-left: 0.5rem;
  fill: #047857 !important;
  animation: spin 3s linear infinite reverse;
}
#header > svg:is(.dark *) {
  fill: #10b981 !important;
}
@keyframes spin {
  100% { transform: rotate(360deg); }
}

#menu-tabs {
  margin-top: 12px;
}

#random-seed > button {
  margin-right: 8px;
}
```
demo.js
ADDED
@@ -0,0 +1,11 @@
```js
() => {
  const menu = document.querySelector("#menu");
  const menuButton = menu.querySelector("button");

  // scroll on accordion click
  menuButton.addEventListener("click", () => {
    requestAnimationFrame(() => {
      menu.scrollIntoView({ behavior: "instant" });
    });
  });
}
```
demo.py
ADDED
@@ -0,0 +1,222 @@
```python
import time

import gradio as gr

from generate import generate

# base font stacks
mono_fonts = ["monospace"]
sans_fonts = [
    "sans-serif",
    "Apple Color Emoji",
    "Segoe UI Emoji",
    "Segoe UI Symbol",
    "Noto Color Emoji",
]


def read_file(path: str) -> str:
    with open(path, "r", encoding="utf-8") as file:
        return file.read()


def toggle_json(checkbox: gr.Checkbox, json: gr.JSON) -> None:
    json.visible = checkbox


# don't request a GPU if input is bad
def generate_btn_click(*args, **kwargs):
    start = time.perf_counter()

    if "prompt" in kwargs:
        prompt = kwargs.get("prompt")
    elif len(args) > 0:
        prompt = args[0]
    else:
        prompt = None

    if prompt is None or prompt.strip() == "":
        raise gr.Error("You must enter a prompt")

    images = generate(*args, **kwargs)
    end = time.perf_counter()
    diff = end - start
    gr.Info(f"Generated {len(images)} images in {diff:.2f}s")
    return images


with gr.Blocks(
    css="./demo.css",
    js="./demo.js",
    theme=gr.themes.Default(
        # colors
        primary_hue=gr.themes.colors.orange,
        secondary_hue=gr.themes.colors.blue,
        neutral_hue=gr.themes.colors.gray,
        # sizing
        text_size=gr.themes.sizes.text_md,
        spacing_size=gr.themes.sizes.spacing_md,
        radius_size=gr.themes.sizes.radius_sm,
        # fonts
        font=[gr.themes.GoogleFont("Inter"), *sans_fonts],
        font_mono=[gr.themes.GoogleFont("Ubuntu Mono"), *mono_fonts],
    ).set(
        block_background_fill=gr.themes.colors.gray.c50,
        block_background_fill_dark=gr.themes.colors.gray.c900,
        block_border_width="0px",
        block_border_width_dark="0px",
        block_shadow="0 0 #0000",
        block_shadow_dark="0 0 #0000",
        block_title_text_weight=500,
        form_gap_width="0px",
        section_header_text_weight=500,
    ),
) as demo:
    gr.HTML(read_file("header.html"))
    output_images = gr.Gallery(
        height=320,
        label="Output",
        show_label=False,
        columns=4,
        interactive=False,
        elem_id="gallery",
    )

    with gr.Group():
        prompt = gr.Textbox(
            label="Prompt",
            show_label=False,
            lines=2,
            placeholder="A painting of a sunset over a mountain",
            value=None,
            elem_id="prompt",
        )
        generate_btn = gr.Button("Generate", variant="primary")

    with gr.Accordion(
        label="Menu",
        open=True,
        elem_id="menu",
        elem_classes=["accordion"],
    ):
        with gr.Tabs(elem_id="menu-tabs"):
            with gr.TabItem("⚙️ Settings"):
                with gr.Group():
                    negative_prompt = gr.Textbox(
                        label="Negative Prompt",
                        lines=1,
                        placeholder="ugly, bad art, low quality",
                        value="",
                    )

                    with gr.Row():
                        num_images = gr.Dropdown(
                            label="Images",
                            choices=[1, 2, 3, 4],
                            value=1,
                            filterable=False,
                        )
                        aspect_ratio = gr.Dropdown(
                            label="Aspect Ratio",
                            choices=["1:1", "4:3", "3:4", "16:9", "9:16"],
                            value="1:1",
                            filterable=False,
                        )

                    with gr.Row():
                        guidance_scale = gr.Slider(
                            label="Guidance Scale",
                            minimum=1.0,
                            maximum=15.0,
                            step=0.1,
                            value=7,
                        )
                        inference_steps = gr.Slider(
                            label="Inference Steps",
                            minimum=1,
                            maximum=50,
                            step=1,
                            value=30,
                        )

                    with gr.Column():
                        seed = gr.Number(label="Seed", value=0)
                        with gr.Row():
                            random_seed_btn = gr.Button(
                                "🎲 Random",
                                variant="secondary",
                                size="sm",
                                scale=1,
                            )
                            increment_seed = gr.Checkbox(
                                label="Autoincrement",
                                value=True,
                                scale=8,
                                elem_classes=["checkbox"],
                                elem_id="increment-seed",
                            )

            with gr.TabItem("🧠 Model"):
                model = gr.Dropdown(
                    label="Model",
                    choices=[
                        "fluently/Fluently-v4",
                        "Lykon/dreamshaper-8",
                        "prompthero/openjourney-v4",
                        "runwayml/stable-diffusion-v1-5",
                        "SG161222/Realistic_Vision_V5.1_noVAE",
                    ],
                    value="Lykon/dreamshaper-8",
                )
                scheduler = gr.Dropdown(
                    label="Scheduler",
                    choices=[
                        "DEIS 2M",
                        "DPM++ 2M",
                        "DPM2 a",
                        "Euler a",
                        "Heun",
                        "LMS",
                        "PNDM",
                    ],
                    value="DEIS 2M",
                    elem_id="scheduler",
                )
                use_karras = gr.Checkbox(
                    label="Karras σ",
                    value=True,
                    elem_classes=["checkbox"],
                )

            with gr.TabItem("ℹ️ About", elem_id="about"):
                gr.Markdown(read_file("about.md"))

    # update the random seed using JavaScript
    random_seed_btn.click(None, outputs=[seed], js="() => Math.floor(Math.random() * 2**32)")

    generate_btn.click(
        generate_btn_click,
        api_name="generate",
        outputs=[output_images],
        inputs=[
            prompt,
            negative_prompt,
            seed,
            model,
            scheduler,
            aspect_ratio,
            guidance_scale,
            inference_steps,
            use_karras,
            num_images,
            increment_seed,
        ],
    )

# https://www.gradio.app/docs/gradio/interface#interface-queue
demo.queue().launch(server_name="0.0.0.0", server_port=7860)
```
generate.py
ADDED
@@ -0,0 +1,219 @@
```python
import re
from datetime import datetime
from itertools import product
from os import environ
from warnings import filterwarnings

import spaces
import torch
from compel import Compel
from diffusers import (
    DEISMultistepScheduler,
    DPMSolverMultistepScheduler,
    EulerAncestralDiscreteScheduler,
    HeunDiscreteScheduler,
    KDPM2AncestralDiscreteScheduler,
    LMSDiscreteScheduler,
    PNDMScheduler,
    StableDiffusionPipeline,
)
from diffusers.models import AutoencoderTiny

# some models use the deprecated CLIPFeatureExtractor class
# should use CLIPImageProcessor instead
filterwarnings("ignore", category=FutureWarning, module="transformers")


class Loader:
    _instance = None

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super(Loader, cls).__new__(cls)
            cls._instance.cpu = torch.device("cpu")
            cls._instance.gpu = torch.device("cuda")
            cls._instance.model_cpu = None
            cls._instance.model_gpu = None
        return cls._instance

    def load(self, model, scheduler, karras):
        SPACES_ZERO_GPU = (
            environ.get("SPACES_ZERO_GPU", "").lower() == "true"
            or environ.get("SPACES_ZERO_GPU", "") == "1"
        )
        model_lower = model.lower()

        scheduler_map = {
            "DEIS 2M": DEISMultistepScheduler,
            "DPM++ 2M": DPMSolverMultistepScheduler,
            "DPM2 a": KDPM2AncestralDiscreteScheduler,
            "Euler a": EulerAncestralDiscreteScheduler,
            "Heun": HeunDiscreteScheduler,
            "LMS": LMSDiscreteScheduler,
            "PNDM": PNDMScheduler,
        }

        scheduler_kwargs = {
            "beta_start": 0.00085,
            "beta_end": 0.012,
            "beta_schedule": "scaled_linear",
            "timestep_spacing": "leading",
            "steps_offset": 1,
        }

        # reuse the loaded pipeline if nothing changed
        if self.model_gpu is not None:
            same_model = self.model_gpu.config._name_or_path.lower() == model_lower
            same_scheduler = isinstance(self.model_gpu.scheduler, scheduler_map[scheduler])
            same_karras = (
                not hasattr(self.model_gpu.scheduler.config, "use_karras_sigmas")
                or self.model_gpu.scheduler.config.use_karras_sigmas == karras
            )
            if same_model and same_scheduler and same_karras:
                return self.model_gpu

        # PNDM doesn't support the Karras noise schedule
        if karras and scheduler != "PNDM":
            scheduler_kwargs["use_karras_sigmas"] = True

        variant = (
            None
            if model_lower in ["sg161222/realistic_vision_v5.1_novae", "prompthero/openjourney-v4"]
            else "fp16"
        )

        pipeline_kwargs = {
            "pretrained_model_name_or_path": model_lower,
            "requires_safety_checker": False,
            "safety_checker": None,
            "scheduler": scheduler_map[scheduler](**scheduler_kwargs),
            "torch_dtype": torch.float16,
            "variant": variant,
            "use_safetensors": True,
            "vae": AutoencoderTiny.from_pretrained(
                "madebyollin/taesd",
                torch_dtype=torch.float16,
                use_safetensors=True,
            ),
        }

        # in ZeroGPU we always start fresh
        if SPACES_ZERO_GPU:
            self.model_gpu = None
            self.model_cpu = None

        if self.model_gpu is not None:
            model_gpu_name = self.model_gpu.config._name_or_path
            self.model_cpu = self.model_gpu.to(self.cpu, silence_dtype_warnings=True)
            self.model_gpu = None
            torch.cuda.empty_cache()
            print(f"Moved {model_gpu_name} to CPU ✓")

        self.model_gpu = StableDiffusionPipeline.from_pretrained(**pipeline_kwargs).to(self.gpu)
        print(f"Moved {model_lower} to GPU ✓")
        return self.model_gpu


# prepare prompts for Compel
def join_prompt(prompt: str) -> str:
    lines = prompt.strip().splitlines()
    return '("' + '", "'.join(lines) + '").and()' if len(lines) > 1 else prompt


# parse prompts with arrays
def parse_prompt(prompt: str) -> list[str]:
    joined_prompt = join_prompt(prompt)
    arrays = re.findall(r"\[\[(.*?)\]\]", joined_prompt)

    if not arrays:
        return [joined_prompt]

    tokens = [item.split(",") for item in arrays]
    combinations = list(product(*tokens))
    prompts = []

    for combo in combinations:
        current_prompt = joined_prompt
        for i, token in enumerate(combo):
            current_prompt = current_prompt.replace(f"[[{arrays[i]}]]", token.strip(), 1)

        prompts.append(current_prompt)
    return prompts


@spaces.GPU(duration=30)
def generate(
    positive_prompt,
    negative_prompt="",
    seed=None,
    model="lykon/dreamshaper-8",
    scheduler="DEIS 2M",
    aspect_ratio="1:1",
    guidance_scale=7,
    inference_steps=30,
    karras=True,
    num_images=1,
    increment_seed=True,
):
    # image dimensions
    aspect_ratios = {
        "16:9": (640, 360),
        "4:3": (576, 432),
        "1:1": (512, 512),
        "3:4": (432, 576),
        "9:16": (360, 640),
    }
    width, height = aspect_ratios[aspect_ratio]

    with torch.inference_mode():
        loader = Loader()
        pipe = loader.load(model, scheduler, karras)

        # prompt embeds
        compel = Compel(
            tokenizer=pipe.tokenizer,
            text_encoder=pipe.text_encoder,
            truncate_long_prompts=False,
            device=pipe.device.type,
            dtype_for_device_getter=lambda _: torch.float16,
        )

        neg_prompt = join_prompt(negative_prompt)
        neg_embeds = compel(neg_prompt)

        if seed is None:
            seed = int(datetime.now().timestamp())

        current_seed = seed
        all_positive_prompts = parse_prompt(positive_prompt)
        images = []

        for i in range(num_images):
            generator = torch.Generator(device=pipe.device.type).manual_seed(current_seed)
            pos_prompt = all_positive_prompts[i % len(all_positive_prompts)]
            pos_embeds = compel(pos_prompt)
            pos_embeds, neg_embeds = compel.pad_conditioning_tensors_to_same_length(
                [pos_embeds, neg_embeds]
            )

            result = pipe(
                width=width,
                height=height,
                prompt_embeds=pos_embeds,
                negative_prompt_embeds=neg_embeds,
                num_inference_steps=inference_steps,
                guidance_scale=guidance_scale,
                generator=generator,
            )

            images.append((result.images[0], str(current_seed)))

            if increment_seed:
                current_seed += 1

    return images
```
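A quick standalone check of how `join_prompt` rewrites multi-line prompts into Compel's `.and()` conjunction syntax (the function body is copied from generate.py; the sample prompt is illustrative):

```python
def join_prompt(prompt: str) -> str:
    # multi-line prompts become a Compel .and() conjunction; single lines pass through
    lines = prompt.strip().splitlines()
    return '("' + '", "'.join(lines) + '").and()' if len(lines) > 1 else prompt

print(join_prompt("a castle on a hill\ndramatic lighting"))
# → ("a castle on a hill", "dramatic lighting").and()
```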
header.html
ADDED
@@ -0,0 +1,7 @@
```html
<div id="header">
  <h1>Stable Diffusion <em>Zero</em></h1>
  <svg xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" aria-hidden="true" focusable="false" role="img" width="1em" height="1em" preserveAspectRatio="xMidYMid meet" viewBox="0 0 15 15">
    <path d="M7.48877 6.75C7.29015 6.75 7.09967 6.82902 6.95923 6.96967C6.81879 7.11032 6.73989 7.30109 6.73989 7.5C6.73989 7.69891 6.81879 7.88968 6.95923 8.03033C7.09967 8.17098 7.29015 8.25 7.48877 8.25C7.68738 8.25 7.87786 8.17098 8.0183 8.03033C8.15874 7.88968 8.23764 7.69891 8.23764 7.5C8.23764 7.30109 8.15874 7.11032 8.0183 6.96967C7.87786 6.82902 7.68738 6.75 7.48877 6.75ZM7.8632 0C11.2331 0 11.3155 2.6775 9.54818 3.5625C8.80679 3.93 8.47728 4.7175 8.335 5.415C8.69446 5.565 9.00899 5.7975 9.24863 6.0975C12.0195 4.5975 15 5.19 15 7.875C15 11.25 12.3265 11.325 11.4428 9.5475C11.0684 8.805 10.2746 8.475 9.57813 8.3325C9.42836 8.6925 9.19621 9 8.89665 9.255C10.3869 12.0225 9.79531 15 7.11433 15C3.74438 15 3.67698 12.315 5.44433 11.43C6.17823 11.0625 6.50774 10.2825 6.65751 9.5925C6.29056 9.4425 5.96855 9.2025 5.72891 8.9025C2.96555 10.3875 0 9.8025 0 7.125C0 3.75 2.666 3.6675 3.54967 5.445C3.92411 6.1875 4.71043 6.51 5.40689 6.6525C5.54918 6.2925 5.78882 5.9775 6.09586 5.7375C4.60559 2.97 5.1972 0 7.8632 0Z"></path>
  </svg>
</div>
<p>Stable Diffusion 1.5 with extras. Powered by 🤗 <a href="https://huggingface.co/spaces/zero-gpu-explorers/README" target="_blank" rel="noopener noreferrer">ZeroGPU</a>.</p>
```
requirements.txt
ADDED
@@ -0,0 +1,9 @@
```
accelerate
compel
diffusers
hf-transfer
gradio==4.39.0
ruff
spaces
torch
torchvision
```