Update app.py
app.py
CHANGED
@@ -116,12 +116,19 @@ with gr.Blocks() as demo:
     gr.HTML('''
     <br>
     <p style="margin-bottom: 14px; font-size: 100%"> With thousands of generated images, we found it useful to provide ways to explore the data in a structured way that did not depend on any external dataset or model. We provide two such tools, one based on <b>colorfulness</b> and one based on a <b>bag-of-visual-words</b> model computed using SIFT features.</p>
+    ''')
+    with gr.Row():
+        gr.HTML('''
     <h4>Colorfulness</h4>
     <p style="margin-bottom: 14px; font-size: 100%"> We compute an image's "colorfulness" following <a href="https://doi.org/10.1117/12.477378">this work</a> by David Hasler and Sabine E. Suesstrunk, and we allow the user to choose a specific prompt and model and explore the neighborhood of that chosen starting point. One interesting orthogonal insight is that images generated by DALL·E 2 are on average the most colorful. Images of men are on average less colorful than all other gender labels, consistently across all three models.</p>
+        ''')
+        gr.Image("images/colorfulness/nativeamerican_man.png")
+    with gr.Row():
+        gr.HTML('''
     <h4>Bag of Visual Words</h4>
     <p style="margin-bottom: 14px; font-size: 100%"> Another way of providing a structured traversal of the dataset is a nearest-neighbor explorer based on an image's SIFT features, which we quantize into a visual vocabulary so that the entire image dataset can be represented as a TF-IDF matrix. These tools are especially useful for homing in on stereotypical content that is often encoded visually, but also on failure modes of the model, such as the misinterpretation of the "stocker" profession as an imagined dog breed.</p>
-    ''')
-
+        ''')
+        gr.Image("images/bovw/librarians.png")
     gr.Markdown("""
     ### All of the tools created as part of this project:
     """)
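
The colorfulness score referenced in the diff above follows Hasler and Suesstrunk's opponent-color statistics. Here is a minimal sketch of that metric, assuming the image arrives as an RGB NumPy array; the function name and any preprocessing the Space applies are assumptions, not the app's actual code.

import numpy as np

def colorfulness(rgb: np.ndarray) -> float:
    # Hasler & Suesstrunk (2003): opponent channels rg = R - G, yb = (R + G)/2 - B.
    r, g, b = (rgb[..., i].astype(np.float64) for i in range(3))
    rg = r - g
    yb = 0.5 * (r + g) - b
    # Combine the spread and the magnitude of both opponent channels into one scalar.
    std_root = np.sqrt(rg.std() ** 2 + yb.std() ** 2)
    mean_root = np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
    return std_root + 0.3 * mean_root

Because the score is a single scalar per image and needs no external model, sorting the whole dataset by it is cheap, which is what enables the per-model and per-label comparisons mentioned in the paragraph (DALL·E 2 most colorful on average, images of men less colorful than other gender labels).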
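
The bag-of-visual-words explorer described in the second paragraph can be sketched as follows, assuming OpenCV's SIFT implementation and scikit-learn for clustering, TF-IDF weighting, and the neighbor index; the function name, vocabulary size, and cosine distance are illustrative assumptions rather than the Space's actual implementation.

import cv2
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.neighbors import NearestNeighbors

def build_bovw_index(image_paths, vocab_size=512):
    # Extract 128-dim SIFT descriptors for every image in the dataset.
    sift = cv2.SIFT_create()
    descriptors = []
    for path in image_paths:
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, desc = sift.detectAndCompute(gray, None)
        descriptors.append(desc if desc is not None else np.empty((0, 128), np.float32))

    # Quantize the pooled descriptors into a visual vocabulary.
    kmeans = MiniBatchKMeans(n_clusters=vocab_size, random_state=0)
    kmeans.fit(np.vstack([d for d in descriptors if len(d)]))

    # Per-image histogram of visual-word counts ("term frequencies").
    counts = np.zeros((len(image_paths), vocab_size))
    for i, desc in enumerate(descriptors):
        if len(desc):
            counts[i] = np.bincount(kmeans.predict(desc), minlength=vocab_size)

    # TF-IDF reweighting, then a cosine nearest-neighbor index for the explorer.
    tfidf = TfidfTransformer().fit_transform(counts)
    nn = NearestNeighbors(metric="cosine").fit(tfidf)
    return tfidf, nn

Calling nn.kneighbors on one row of the TF-IDF matrix then returns the visually closest images in the dataset, which is the kind of structured traversal the explorer exposes for surfacing visually encoded stereotypes and failure modes.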