sashavor committed · 792a0ad
Parent(s): 4842425
changing titles
app.py CHANGED
@@ -23,10 +23,10 @@ with gr.Blocks() as demo:
            ''')

        gr.Markdown("""
-
+        ### Looking at Identity Groups
        """)

-        with gr.Accordion("
+        with gr.Accordion("Looking at Identity Groups(ethnicity and gender)", open=False):
            gr.HTML('''
                <p style="margin-bottom: 14px; font-size: 100%"> One of the approaches that we adopted in our work is hierarchical clustering of the images generated by the text-to-image systems in response to prompts that include identity terms with regards to ethnicity and gender. <br> We computed 3 different numbers of clusters (12, 24 and 48) and created an <a href='https://huggingface.co/spaces/society-ethics/DiffusionFaceClustering' style='text-decoration: underline;' target='_blank'> Identity Representation Demo </a> that allows for the exploration of the different clusters and their contents. </p>
            ''')
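The paragraph in this hunk describes the hierarchical clustering behind the Identity Representation Demo: the generated images are clustered at three granularities (12, 24 and 48 clusters). A minimal sketch of that idea, assuming a `generated_faces/` folder and a placeholder `embed()` function (both illustrative; the diff does not say which image representation the demo actually clusters):

```python
from pathlib import Path

import numpy as np
from PIL import Image
from scipy.cluster.hierarchy import fcluster, linkage


def embed(img: Image.Image) -> np.ndarray:
    """Placeholder feature extractor; stands in for whatever representation the demo clusters."""
    arr = np.asarray(img.convert("RGB").resize((64, 64)), dtype=np.float32)
    return arr.ravel() / 255.0


# Assumed layout: one folder of generated face images.
paths = sorted(Path("generated_faces").glob("*.png"))
features = np.stack([embed(Image.open(p)) for p in paths])

# Build one linkage tree and cut it at the three granularities used in the demo.
tree = linkage(features, method="ward")
assignments = {k: fcluster(tree, t=k, criterion="maxclust") for k in (12, 24, 48)}

for k, labels in assignments.items():
    print(f"{k} clusters:", np.bincount(labels)[1:])
```

Cutting a single linkage tree at several levels yields nested partitions, which is what lets one demo switch between 12, 24 and 48 clusters without re-clustering.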
@@ -41,11 +41,11 @@ with gr.Blocks() as demo:
                <p style="margin-bottom: 14px; font-size: 100%"> You can see that the models reflect many societal biases -- for instance representing Native Americans wearing traditional headdresses, non-binary people with stereotypical haircuts and glasses, and East Asian men with features that amplify ethnic stereotypes. <br> This is problematic because it reinforces existing cultural stereotypes and fails to represent the diversity that is present in all identity groups.</p>
            ''')
        gr.Markdown("""
-
+        ### Exploring Biases
        """)


-        with gr.Accordion("
+        with gr.Accordion("Exploring Biases", open=False):
            gr.HTML('''
                <p style="margin-bottom: 14px; font-size: 100%"> We also explore the correlations between the professions that we used in our prompts and the different identity clusters that we identified. <br> Using both the <a href='https://huggingface.co/spaces/society-ethics/DiffusionClustering' style='text-decoration: underline;' target='_blank'> Diffusion Cluster Explorer </a> and the <a href='https://huggingface.co/spaces/society-ethics/DiffusionFaceClustering' style='text-decoration: underline;' target='_blank'> Identity Representation Demo </a>, we can see which clusters are most correlated with each profession and what identities are in these clusters.</p>
            ''')
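The "Exploring Biases" text above looks at which identity clusters are most associated with each profession. A small sketch of one way to build such a table, assuming a per-image record of the profession in the prompt and the assigned cluster (the CSV and column names are illustrative, not from this repo):

```python
import pandas as pd

# One row per generated image: which profession prompt produced it and
# which identity cluster it was assigned to.
df = pd.read_csv("generations.csv")

# Share of each profession's images falling into each identity cluster.
shares = pd.crosstab(df["profession"], df["cluster"], normalize="index")

# The single most-represented cluster per profession.
print(shares.idxmax(axis=1))
```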
@@ -76,10 +76,10 @@ with gr.Blocks() as demo:
                cluster4 = gr.Image(Image.open("images/bias/Cluster2.png"), label = "Cluster 2 Image", show_label=False)

        gr.Markdown("""
-
+        ### Comparing Model Generations
        """)

-        with gr.Accordion("Comparing
+        with gr.Accordion("Comparing Model Generations", open=False):
            gr.HTML('''
                <p style="margin-bottom: 14px; font-size: 100%"> One of the goals of our study was allowing users to compare model generations across professions in an open-ended way, uncovering patterns and trends on their own. This is why we created the <a href='https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer' style='text-decoration: underline;' target='_blank'> Diffusion Bias Explorer </a> and the <a href='https://huggingface.co/spaces/society-ethics/Average_diffusion_faces' style='text-decoration: underline;' target='_blank'> Average Diffusion Faces </a> tools. We show some of their functionalities below: </p> ''')
        with gr.Row():
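For orientation, every section touched by this commit follows the same Gradio layout: a `gr.Markdown` section title, then a collapsed `gr.Accordion` holding `gr.HTML` prose and a `gr.Row` of images. A stripped-down, self-contained version of that pattern (the description text is a placeholder, and the image path from the repo is assumed to be present):

```python
import gradio as gr
from PIL import Image

with gr.Blocks() as demo:
    gr.Markdown("""
    ### Comparing Model Generations
    """)
    with gr.Accordion("Comparing Model Generations", open=False):
        gr.HTML('''
            <p style="margin-bottom: 14px; font-size: 100%"> Placeholder description of the tool. </p>
        ''')
        with gr.Row():
            gr.Image(Image.open("images/bias/Cluster2.png"), label="Cluster 2 Image", show_label=False)

if __name__ == "__main__":
    demo.launch()
```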
@@ -100,9 +100,9 @@ with gr.Blocks() as demo:
                <p style="margin-bottom: 14px; font-size: 100%"> Looking at the average faces for a given profession across multiple models can help see the dominant characteristics of that profession, as well as how much variation there is (based on how fuzzy the image is). <br> In the images shown here, we can see that representations of these professions significantly differ across the three models, while sharing common characteristics, e.g. <i> postal workers </i> all wear caps. <br> Also, the average faces of <i> hairdressers </i> seem more fuzzy than the other professions, indicating a higher diversity among the generations compared to other professions. <br> Look at the <a href='https://huggingface.co/spaces/society-ethics/Average_diffusion_faces' style='text-decoration: underline;' target='_blank'> Average Diffusion Faces </a> tool for more examples! </p>''')

        gr.Markdown("""
-
+        ### Exploring the Pixel Space of Generated Images
        """)
-        with gr.Accordion("Exploring the
+        with gr.Accordion("Exploring the Pixel Space of Generated Images", open=False):
            gr.HTML('''
                <br>
                <p style="margin-bottom: 14px; font-size: 100%"> With thousands of generated images, we found it useful to provide ways to explore the data in a structured way that did not depend on any external dataset or model. We provide two such tools, one based on <b>colorfulness</b> and one based on a <b>bag-of-visual words</b> model computed using SIFT features.</p>
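The hunk above mentions average faces and reads their fuzziness as a rough proxy for variation among generations. A minimal sketch of producing one such average, assuming a folder of same-size, roughly aligned generations for a single profession (the folder name is illustrative):

```python
from pathlib import Path

import numpy as np
from PIL import Image

# Average all generations for one profession into a single image.
paths = sorted(Path("generations/postal_worker").glob("*.png"))
stack = np.stack(
    [np.asarray(Image.open(p).convert("RGB"), dtype=np.float32) for p in paths]
)
mean_face = stack.mean(axis=0)  # a blurrier mean suggests more varied generations
Image.fromarray(mean_face.astype(np.uint8)).save("average_postal_worker.png")
```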
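The final paragraph names two model-free ways of organizing the images: colorfulness and a bag-of-visual-words built from SIFT features. The diff does not say which colorfulness metric the tool uses; the Hasler and Susstrunk (2003) measure below is one common choice and is shown only as a sketch. The bag-of-visual-words side would follow the usual recipe: SIFT descriptors (e.g. `cv2.SIFT_create`) clustered with k-means into a visual vocabulary, then one descriptor histogram per image.

```python
import numpy as np
from PIL import Image


def colorfulness(path: str) -> float:
    """Hasler & Susstrunk colorfulness score for one image (higher = more colorful)."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.float32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg = r - g                  # red-green opponent channel
    yb = 0.5 * (r + g) - b      # yellow-blue opponent channel
    std = np.hypot(rg.std(), yb.std())
    mean = np.hypot(rg.mean(), yb.mean())
    return float(std + 0.3 * mean)
```

Sorting images by this score gives a simple, model-free ordering of the generations.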