Update app.py
app.py CHANGED

@@ -90,14 +90,16 @@ with gr.Blocks() as demo:
     gr.HTML('''
     <p style="margin-bottom: 14px; font-size: 100%"> Looking at the average faces for a given profession across multiple models can help see the dominant characteristics of that profession, as well as how much variation there is (based on how fuzzy the image is). <br> In the images shown here, we can see that representations of these professions significantly differ across the three models, while sharing common characteristics, e.g. <i> postal workers </i> all wear caps. <br> Also, the average faces of <i> hairdressers </i> seem more fuzzy than the other professions, indicating a higher diversity among the generations compared to other professions. <br> Look at the <a href='https://huggingface.co/spaces/society-ethics/Average_diffusion_faces' style='text-decoration: underline;' target='_blank'> Average Diffusion Faces </a> tool for more examples! </p>''')

-    with gr.Accordion("Exploring the
+    with gr.Accordion("Exploring the pixel space of generated images", open=False):
     gr.HTML('''
-    <
-
-
-
-    <
-
+    <br>
+    <p style="margin-bottom: 14px; font-size: 100%"> With thousands of generated images, we found it useful to provide ways to explore the data in a structured way that did not depend on any external dataset or model. We provide two such tools, one based on <b>colorfulness</b> and one based on a <b>bag-of-visual-words</b> model computed using SIFT features.</p>
+    <h4>Colorfulness</h4>
+    <p style="margin-bottom: 14px; font-size: 100%"> We compute an image's "colorfulness" following <a href="https://doi.org/10.1117/12.477378">this work</a> by David Hasler and Sabine E. Suesstrunk, and allow the user to choose a specific prompt and model and explore the neighborhood of that chosen starting point. One interesting orthogonal insight is that images generated by DALL·E 2 are on average the most colorful. Images of men are on average less colorful than images for all other gender labels, consistently across all three models. </p>
+    <h4>Bag of Visual Words</h4>
+    <p style="margin-bottom: 14px; font-size: 100%"> Another way of providing a structured traversal of the dataset is a nearest-neighbor explorer based on an image's SIFT features, which we quantize into a visual vocabulary in order to represent the entire image dataset as a TF-IDF matrix. These tools are especially useful for homing in on stereotypical content that is often encoded visually, but also on failure modes of the model, such as the misinterpretation of the "stocker" profession as an imagined dog breed.</p>
+    ''')
+
     gr.Markdown("""
     ### All of the tools created as part of this project:
     """)
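The Bag of Visual Words paragraph describes its pipeline only in prose: extract SIFT descriptors, quantize them into a visual vocabulary, and represent every image as a row of a TF-IDF matrix for nearest-neighbor lookup. The sketch below shows one way such a pipeline can be assembled with OpenCV and scikit-learn; the vocabulary size and the helper names (`sift_descriptors`, `build_index`) are illustrative assumptions, not taken from the Space.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.neighbors import NearestNeighbors

N_VISUAL_WORDS = 256  # vocabulary size, chosen here only for illustration

def sift_descriptors(path: str) -> np.ndarray:
    """Return the 128-dimensional SIFT descriptors of one image."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    _, desc = cv2.SIFT_create().detectAndCompute(gray, None)
    return desc if desc is not None else np.empty((0, 128), dtype=np.float32)

def build_index(image_paths):
    """Quantize SIFT features into visual words and index TF-IDF vectors."""
    per_image = [sift_descriptors(p) for p in image_paths]
    # 1. Learn the visual vocabulary by clustering all descriptors.
    vocab = KMeans(n_clusters=N_VISUAL_WORDS, random_state=0)
    vocab.fit(np.vstack([d for d in per_image if len(d)]))
    # 2. Turn each image into a histogram of visual-word counts.
    counts = np.zeros((len(image_paths), N_VISUAL_WORDS))
    for i, desc in enumerate(per_image):
        if len(desc):
            words, freq = np.unique(vocab.predict(desc), return_counts=True)
            counts[i, words] = freq
    # 3. Re-weight with TF-IDF and index for cosine nearest-neighbor search.
    tfidf = TfidfTransformer().fit_transform(counts)
    neighbors = NearestNeighbors(metric="cosine").fit(tfidf)
    return tfidf, neighbors
```

Calling `neighbors.kneighbors(tfidf[i], n_neighbors=6)` then returns the images whose visual-word profile is closest to image `i`, which is the kind of neighborhood exploration the accordion describes.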