Tony Lian committed on
Commit 67a209d
1 Parent(s): f0bfa56
Files changed (1):
  1. app.py +2 -2
app.py CHANGED
@@ -202,7 +202,7 @@ html = f"""<h1>LLM-grounded Diffusion: Enhancing Prompt Understanding of Text-to
  <p>2. You can perform multi-round specification by giving ChatGPT follow-up requests (e.g., make the object boxes bigger).</p>
  <p>3. You can also try prompts in Simplified Chinese. If you want to try prompts in another language, translate the first line of last example to your language.</p>
  <p>4. The diffusion model only runs 20 steps by default. You can make it run 50 steps to get higher quality images (or tweak frozen steps/guidance steps for better guidance and coherence).</p>
- <p>5. Duplicate this space and add GPU to skip the queue and run our model faster. (Currently we are using a T4, and you can add a A10G or A100 to make it much faster) {duplicate_html}</p>
+ <p>5. Duplicate this space and add GPU to skip the queue and run our model faster. (Currently we are using a T4, and you can add a A10G to make it 5x faster) {duplicate_html}</p>
  <br/>
  <p>Implementation note: In this demo, we replace the attention manipulation in our layout-guided Stable Diffusion described in our paper with GLIGEN due to much faster inference speed (<b>FlashAttention supported, no backprop needed</b> during inference). Compared to vanilla GLIGEN, we have better coherence. Other parts of text-to-image pipeline, including single object generation and SAM, remain the same. The settings and examples in the prompt are simplified in this demo.</p>"""
 
@@ -244,7 +244,7 @@ with gr.Blocks(
      response = gr.Textbox(lines=5, label="Paste ChatGPT response here (no original caption needed)", placeholder=layout_placeholder)
      visualize_btn = gr.Button("Visualize Layout")
      generate_btn = gr.Button("Generate Image from Layout", variant='primary')
-     with gr.Accordion("Advanced options", open=False):
+     with gr.Accordion("Advanced options (play around for better generation)", open=False):
          seed = gr.Slider(0, 10000, value=0, step=1, label="Seed")
          num_inference_steps = gr.Slider(1, 50, value=20, step=1, label="Number of inference steps")
          dpm_scheduler = gr.Checkbox(label="Use DPM scheduler (unchecked: DDIM scheduler, may have better coherence, recommend 50 inference steps)", show_label=False, value=True)
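A minimal, self-contained sketch of how the "Advanced options" accordion in the hunk above could be wired to a generate callback in Gradio. The `generate` body and the output component here are placeholders assumed for illustration; the real app.py connects these controls to the layout-guided diffusion pipeline.

```python
# Sketch only (not the Space's actual code): the accordion controls from the
# hunk above, hooked to a placeholder callback.
import gradio as gr

def generate(seed, num_inference_steps, dpm_scheduler):
    # Placeholder: the real app runs the layout-guided diffusion pipeline here.
    return f"seed={seed}, steps={num_inference_steps}, dpm={dpm_scheduler}"

with gr.Blocks() as demo:
    generate_btn = gr.Button("Generate Image from Layout", variant="primary")
    with gr.Accordion("Advanced options (play around for better generation)", open=False):
        seed = gr.Slider(0, 10000, value=0, step=1, label="Seed")
        num_inference_steps = gr.Slider(1, 50, value=20, step=1, label="Number of inference steps")
        dpm_scheduler = gr.Checkbox(label="Use DPM scheduler", value=True)
    output = gr.Textbox(label="Output")
    generate_btn.click(generate, inputs=[seed, num_inference_steps, dpm_scheduler], outputs=output)

demo.launch()
```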
 
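The "Use DPM scheduler" checkbox toggles between DPM and DDIM sampling. A hedged sketch of how such a toggle could map onto diffusers schedulers; the checkpoint name is an assumption for illustration, and the Space's actual pipeline differs (it uses GLIGEN, per the implementation note above):

```python
# Sketch only: swapping schedulers on a diffusers pipeline. Both scheduler
# classes can be rebuilt from the current scheduler's config.
from diffusers import StableDiffusionPipeline, DPMSolverMultistepScheduler, DDIMScheduler

pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")  # assumed checkpoint

def set_scheduler(use_dpm: bool):
    cls = DPMSolverMultistepScheduler if use_dpm else DDIMScheduler
    pipe.scheduler = cls.from_config(pipe.scheduler.config)

set_scheduler(use_dpm=True)   # DPM-Solver: works well at the demo's 20-step default
set_scheduler(use_dpm=False)  # DDIM: the UI recommends ~50 steps
```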