Spaces: Running on Zero
update gradio
UltraEdit ADDED
@@ -0,0 +1 @@
+Subproject commit 1f07d3cf2026ef372d3117f0bede5240f5e0e850
app.py CHANGED
@@ -65,16 +65,44 @@ outputs = gr.Image(label="Generated Image")
 
 
 # Custom HTML content
-article_html = """
-<
-<
-"""
+# article_html = """
+# <div style="text-align: center; max-width: 1200px; margin: 20px auto;">
+# <h1 style="font-weight: 900; font-size: 3rem; margin-bottom: 0.5rem">🖼️ Stable Diffusion 3 Image Editor</h1>
+# <h2 style="font-weight: 450; font-size: 1rem; margin: 0rem">
+# This interface allows you to perform image editing using the Stable Diffusion 3 model trained with the UltraEdit dataset.
+# </h2>
+# <h2 style="font-weight: 450; font-size: 1rem; margin: 0.7rem auto; max-width: 1000px">
+# Supports both free-form (without mask) and region-based (with mask) image editing. Use the sliders to adjust the inference steps and guidance scales, and provide a seed for reproducibility.
+# </h2>
+# <h2 style="font-weight: 450; font-size: 1rem; margin: 1rem auto; max-width: 1000px">
+# <b>UltraEdit: Instruction-based Fine-Grained Image Editing at Scale</b>
+# </h2>
+# <div style="text-align: left; max-width: 1000px; margin: 0 auto;">
+# <p>
+# Haozhe Zhao<sup>1*</sup>, Xiaojian Ma<sup>2*</sup>, Liang Chen<sup>1</sup>, Shuzheng Si<sup>1</sup>, Rujie Wu<sup>1</sup>, Kaikai An<sup>1</sup>, Peiyu Yu<sup>3</sup>, Minjia Zhang<sup>4</sup>, Qing Li<sup>2</sup>, Baobao Chang<sup>2</sup>
+# <br>
+# <sup>1</sup>Peking University, <sup>2</sup>BIGAI, <sup>3</sup>UCLA, <sup>4</sup>UIUC
+# </p>
+# <p>
+# This paper presents ULTRAEDIT, a large-scale (~4M editing samples), automatically generated dataset for instruction-based image editing. Our key idea is to address the drawbacks in existing image editing datasets like InstructPix2Pix and MagicBrush, and provide a systematic approach to producing massive and high-quality image editing samples. ULTRAEDIT offers several distinct advantages:
+# </p>
+# <ul>
+# <li>It features a broader range of editing instructions by leveraging the creativity of large language models (LLMs) alongside in-context editing examples from human raters.</li>
+# <li>Its data sources are based on real images, including photographs and artworks, which provide greater diversity and reduced bias compared to datasets solely generated by text-to-image models.</li>
+# <li>It also supports region-based editing, enhanced by high-quality, automatically produced region annotations.</li>
+# </ul>
+# <p>
+# Our experiments show that canonical diffusion-based editing baselines trained on ULTRAEDIT set new records on MagicBrush and Emu-Edit benchmarks. Our analysis further confirms the crucial role of real image anchors and region-based editing data. The dataset, code, and models will be made public.
+# </p>
+# </div>
+# </div>
+# """
 
 demo = gr.Interface(
     fn=generate,
     inputs=inputs,
     outputs=outputs,
-    title=article_html
+    # title=article_html # Add article parameter
 )
 
 demo.queue().launch()
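If the header HTML is meant to come back, `gr.Interface` has a dedicated `article` parameter for long-form HTML/Markdown rendered below the interface, whereas `title` expects a short plain string; this matches the `# Add article parameter` note in the diff. A minimal sketch of wiring it back in; the `generate` stub and the component lists here are hypothetical stand-ins for the objects defined earlier in app.py:

import gradio as gr

# Hypothetical stand-ins for the pipeline and components defined earlier in app.py.
def generate(image, prompt):
    return image

inputs = [gr.Image(label="Input Image"), gr.Textbox(label="Editing Instruction")]
outputs = gr.Image(label="Generated Image")

article_html = """
<div style="text-align: center; max-width: 1200px; margin: 20px auto;">
    <h1>🖼️ Stable Diffusion 3 Image Editor</h1>
</div>
"""

demo = gr.Interface(
    fn=generate,
    inputs=inputs,
    outputs=outputs,
    title="Stable Diffusion 3 Image Editor",  # title: short plain string shown in the header
    article=article_html,                     # article: HTML/Markdown rendered below the outputs
)

demo.queue().launch()

With `article=` instead of `title=`, the full HTML block from the commented-out section could be passed through unchanged.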