yjernite committed on
Commit
f61b68d
1 Parent(s): b686bc2
Files changed (1)
  1. app.py +164 -80
app.py CHANGED
@@ -2,137 +2,220 @@ import gradio as gr
  from PIL import Image
  import os
 
  def get_images(path):
- images = [Image.open(os.path.join(path,im)) for im in os.listdir(path)]
  paths = os.listdir(path)
- return([(im, path) for im, path in zip(images,paths)])
 
  with gr.Blocks() as demo:
- gr.Markdown("""
  ## Stable Bias: Analyzing Societal Representations in Diffusion Models
- """)
- gr.HTML('''
  <p style="margin-bottom: 10px; font-size: 94%">This is the demo page for the "Stable Bias" paper, which aims to explore and quantify social biases in text-to-image systems. <br> This work was done by <a href='https://huggingface.co/sasha' style='text-decoration: underline;' target='_blank'> Alexandra Sasha Luccioni (Hugging Face) </a>, <a href='https://huggingface.co/cakiki' style='text-decoration: underline;' target='_blank'> Christopher Akiki (ScaDS.AI, Leipzig University)</a>, <a href='https://huggingface.co/meg' style='text-decoration: underline;' target='_blank'> Margaret Mitchell (Hugging Face) </a> and <a href='https://huggingface.co/yjernite' style='text-decoration: underline;' target='_blank'> Yacine Jernite (Hugging Face) </a> .</p>
- ''')
- examples_path= "images/examples"
- examples_gallery = gr.Gallery(get_images(examples_path),
- label="Example images generated by three text-to-image models (Dall-E 2, Stable Diffusion v1.4 and v.2).", show_label=True, elem_id="gallery").style(grid=[1,6], height="auto")
- gr.HTML('''
  <p style="margin-bottom: 14px; font-size: 100%"> As AI-enabled Text-to-Image models are becoming increasingly used, characterizing the social biases they exhibit is a necessary first step to lowering their risk of discriminatory outcomes. <br> We compare three such models: <b> Stable Diffusion v.1.4, Stable Diffusion v.2. </b>, and <b> Dall-E 2 </b>, prompting them to produce images of different <i> professions </i> and <i> identity characteristics </i>. <br> You explore our findings in the sections below: </p>
- ''')
-
- gr.Markdown("""
  ### Looking at Identity Groups
- """)
-
- gr.Markdown("""
  One of the goals of our study was to look at the ways in which different identity groups (ethnicity and gender) are represented by text-to-image models. Since artificial depictions of fictive humans have no inherent gender or ethnicity nor do they belong to socially-constructed groups, we pursued our analysis <i> without </i> ascribing identity categories to the images generated, using unsupervised techniques such as clustering. We find clear evidence of ethnicity and gender biases, which you can see by expanding the accordion below or directly via the [Identity Representation Demo](https://huggingface.co/spaces/society-ethics/DiffusionFaceClustering).
- """)
 
  with gr.Accordion("Looking at Identity Groups", open=False):
- gr.HTML('''
  <p style="margin-bottom: 14px; font-size: 100%"> One of the approaches that we adopted in our work is hierarchical clustering of the images generated by the text-to-image systems in response to prompts that include identity terms with regards to ethnicity and gender. <br> We computed 3 different numbers of clusters (12, 24 and 48) and created an <a href='https://huggingface.co/spaces/society-ethics/DiffusionFaceClustering' style='text-decoration: underline;' target='_blank'> Identity Representation Demo </a> that allows for the exploration of the different clusters and their contents. </p>
- ''')
- with gr.Row():
  with gr.Column(scale=2):
  impath = "images/identities"
- identity_gallery = gr.Gallery([os.path.join(impath,im) for im in os.listdir(impath)],
- label="Identity cluster images", show_label=False, elem_id="gallery"
- ).style(grid=3, height="auto")
  with gr.Column(scale=1):
- gr.HTML('''
  <p style="margin-bottom: 14px; font-size: 100%"> You can see that the models reflect many societal biases -- for instance representing Native Americans wearing traditional headdresses, non-binary people with stereotypical haircuts and glasses, and East Asian men with features that amplify ethnic stereotypes. <br> <br> This is problematic because it reinforces existing cultural stereotypes and fails to represent the diversity that is present in all identity groups.</p>
- ''')
- gr.Markdown("""
  ### Exploring Biases
- """)
- gr.Markdown("""
  Machine Learning models encode and amplify biases that are represented in the data that they are trained on -this can include, for instance, stereotypes around the appearances of members of different professions. In our study, we prompted the 3 text-to-image models with texts pertaining to 150 different professions and analyzed the presence of different identity groups in the images generated. We found evidence of many societal stereotypes in the images generated, such as the fact that people in positions of power (e.g. director, CEO) are often White- and male-appearing, while the images generated for other professions are more diverse. Read more about our findings in the accordion below or directly via the [Diffusion Cluster Explorer](https://huggingface.co/spaces/society-ethics/DiffusionClustering) tool.
- """)
  with gr.Accordion("Exploring Biases", open=False):
- gr.HTML('''
  <p style="margin-bottom: 14px; font-size: 100%"> We also explore the correlations between the professions that use used in our prompts and the different identity clusters that we identified. <br> Using both the <a href='https://huggingface.co/spaces/society-ethics/DiffusionClustering' style='text-decoration: underline;' target='_blank'> Diffusion Cluster Explorer </a> and the <a href='https://huggingface.co/spaces/society-ethics/DiffusionFaceClustering' style='text-decoration: underline;' target='_blank'> Identity Representation Demo </a>, we can see which clusters are most correlated with each profession and what identities are in these clusters.</p>
- ''')
- with gr.Row():
  with gr.Column():
- gr.HTML('''
- <p style="margin-bottom: 14px; font-size: 100%"> Using the <b><a href='https://huggingface.co/spaces/society-ethics/DiffusionClustering' style='text-decoration: underline;' target='_blank'> Diffusion Cluster Explorer</a></b>, we can see that the top cluster for the CEO and director professions is <b> Cluster 4</b>: </p> ''')
  with gr.Column():
- ceo_img = gr.Image(Image.open("images/bias/ceo_dir.png"), label = "CEO Image", show_label=False)
-
- with gr.Row():
  with gr.Column():
- gr.HTML('''
- <p style="margin-bottom: 14px; font-size: 100%"> Going back to the <b><a href='https://huggingface.co/spaces/society-ethics/DiffusionFaceClustering' style='text-decoration: underline;' target='_blank'> Identity Representation Demo </a></b>, we can see that the most represented gender term is <i> man </i> (56% of the cluster) and <i> White </i> (29% of the cluster). <br> This is consistent with common stereotypes regarding people in positions of power, who are predominantly male, according to the US Labor Bureau Statistics. </p> ''')
  with gr.Column():
- cluster4 = gr.Image(Image.open("images/bias/Cluster4.png"), label = "Cluster 4 Image", show_label=False)
- with gr.Row():
  with gr.Column():
- gr.HTML('''
- <p style="margin-bottom: 14px; font-size: 100%"> If we look at the cluster representation of professions such as social assistant and social worker, we can observe that the former is best represented by <b>Cluster 2</b>, whereas the latter has a more uniform representation across multiple clusters: </p> ''')
  with gr.Column():
- social_img = gr.Image(Image.open("images/bias/social.png"), label = "social image", show_label=False)
- with gr.Row():
  with gr.Column(scale=1):
- gr.HTML('''
- <p style="margin-bottom: 14px; font-size: 100%"> Cluster 2 is best represented by the gender term is <i> woman </i> (81%) as well as <i> Latinx </i> (19%) <br> This gender proportion is exactly the same as the one provided by the United States Labor Bureau (which you can see in the table above), with 81% of social assistants identifying as women. </p> ''')
  with gr.Column(scale=2):
- cluster4 = gr.Image(Image.open("images/bias/Cluster2.png"), label = "Cluster 2 Image", show_label=False)
 
- gr.Markdown("""
  ### Comparing Model Generations
- """)
- gr.Markdown("""
  Above and beyond quantitative analyses, one of the main goals of our project was to create accessible ways for the users to explore the generated images themselves, based on their own interests. For this purpose, we created two interactive tools: the [Diffusion Bias Explorer](https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer), which can be used to compare two models and the images they generate for a given profession or for a given model across two professions, and the [Average Diffusion Faces Tool](https://huggingface.co/spaces/society-ethics/Average_diffusion_faces), which shows an 'average' representation of faces across professions, based on the images generated by the 3 models.
- """)
  with gr.Accordion("Comparing Model Generations", open=False):
- gr.HTML('''
- <p style="margin-bottom: 14px; font-size: 100%"> One of the goals of our study was allowing users to compare model generations across professions in an open-ended way, uncovering patterns and trends on their own. This is why we created the <a href='https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer' style='text-decoration: underline;' target='_blank'> Diffusion Bias Explorer </a> and the <a href='https://huggingface.co/spaces/society-ethics/Average_diffusion_faces' style='text-decoration: underline;' target='_blank'> Average Diffusion Faces </a> tools. We show some of their functionalities below: </p> ''')
- with gr.Row():
  with gr.Column():
  explorerpath = "images/biasexplorer"
- biasexplorer_gallery = gr.Gallery(get_images(explorerpath),
- label="Bias explorer images", show_label=False, elem_id="gallery").style(grid=[2,2])
  with gr.Column():
- gr.HTML('''
- <p style="margin-bottom: 14px; font-size: 100%"> Comparing generations both between two models and within a single model can help uncover trends and patterns that are hard to measure using quantitative approaches. <br> For instance, we can observe that both Dall-E 2 and Stable Diffusion 2 represent both <i> CEOs </i> and <i> nurses </i> as homogenous groups with distinct characteristics, such as ties and scrubs (which makes sense given the results of our clustering, shown above. <br> We can also see that the images of <i> waitresses </i> generated by Dall-E 2 and Stable Diffusion v.1.4. have different characteristics, both in terms of their clothes as well as their appearance. <br> It's also possible to see harder to describe phenomena, like the fact that portraits of <i> painters </i> often look like paintings themselves. <br> We encourage you to use the <a href='https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer' style='text-decoration: underline;' target='_blank'> Diffusion Bias Explorer </a> tool to explore these phenomena further! </p>''')
- with gr.Row():
  with gr.Column():
  averagepath = "images/averagefaces"
- average_gallery = gr.Gallery(get_images(averagepath),
- label="Average Face images", show_label=False, elem_id="gallery").style(grid=[1,3], height=560)
  with gr.Column():
- gr.HTML('''
- <p style="margin-bottom: 14px; font-size: 100%"> Looking at the average faces for a given profession across multiple models can help see the dominant characteristics of that profession, as well as how much variation there is (based on how fuzzy the image is). <br> In the images shown here, we can see that representations of these professions significantly differ across the three models, while sharing common characteristics, e.g. <i> postal workers </i> all wear caps. <br> Also, the average faces of <i> hairdressers </i> seem more fuzzy than the other professions, indicating a higher diversity among the generations compared to other professions. <br> Look at the <a href='https://huggingface.co/spaces/society-ethics/Average_diffusion_faces' style='text-decoration: underline;' target='_blank'> Average Diffusion Faces </a> tool for more examples! </p>''')
 
- gr.Markdown("""
  ### Exploring the Pixel Space of Generated Images
- """)
- gr.Markdown("""
  Finally, an interesting aspect of the generations of the 3 models are the images themselves, which can be analyzed from different angles on a pixel-level. We explore the images in terms of their colorfulness using the [Colorfulness Profession Explorer](https://huggingface.co/spaces/tti-bias/identities-colorfulness-knn) and the [Colorfulness Identities Explorer](https://huggingface.co/spaces/tti-bias/professions-colorfulness-knn), which allow users to hone in on patterns in terms of colors and shades within the images generated. We also allow exploration of the images in terms of their visual features using the bag-of-visual-words approach (BoVW), which allows users to hone in on visual stereotypical content such as professions that have uniforms of a given color, of elements like glasses and hair styles -- this can be done via the [BoVW Nearest Neighbors Explorer](https://huggingface.co/spaces/tti-bias/identities-bovw-knn) and the [BoVW Professions Explorer](https://huggingface.co/spaces/tti-bias/professions-bovw-knn) -- we also present some of our salient findings in the accordion below.
- """)
  with gr.Accordion("Exploring the Pixel Space of Generated Images", open=False):
- gr.HTML('''
  <br>
  <p style="margin-bottom: 14px; font-size: 100%"> With thousands of generated images, we found it useful to provide ways to explore the data in a structured way that did not depend on any external dataset or model. We provide two such tools, one based on <b>colorfulness</b> and one based on a <b>bag-of-visual words</b> model computed using SIFT features.</p>
- ''')
- with gr.Row():
- gr.HTML('''
  <h4>Colorfulness</h4>
  <p style="margin-bottom: 14px; font-size: 100%"> We compute an image's "colorfulness" following <a href="https://doi.org/10.1117/12.477378">this work</a> by David Hasler and Sabine E. Suesstrunk and allow the user to choose a specific prompt and model and explore the neighborhood of that chosen starting point. One interesting orthogonal insight is that images generated by DALL·E 2 are on average the most colorful. Images of men are on average less colorful than all other gender labels, consistently across all three models. Patterns revealed using this explorer include for example the exoticizing depiction of native Americans as can be seen in the very stereotypical gallery of images generated in the example on the right.</p>
- ''')
  gr.Image("images/colorfulness/nativeamerican_man.png")
  with gr.Row():
- gr.HTML('''
  <h4>Bag of Visual Words</h4>
  <p style="margin-bottom: 14px; font-size: 100%"> Another way of providing the means for a structured traversal of the dataset is a nearest-neighbor explorer based on visual features provided by an image's SIFT features, which we quantize into a visual vocabulary to represent the entire image dataset as a TF-IDF matrix. These tools are especially useful in honing in on stereotypical content that is often encoded visually, but also failure modes of the model such as the misinterpetation of the "stocker" profession as an imagined dog-breed. The screenshot to the right shows how SIFT visual patterns tend to cluster together, namely in this instance the bookshelf in the background. </p>
- ''')
  gr.Image("images/bovw/librarians.png")
- gr.Markdown("""
  ### All of the tools created as part of this project:
- """)
- gr.HTML('''
  <p style="margin-bottom: 10px; font-size: 110%">
  <a href='https://huggingface.co/spaces/society-ethics/Average_diffusion_faces' style='text-decoration: underline;' target='_blank'> Average Diffusion Faces </a> <br>
  <a href='https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer' style='text-decoration: underline;' target='_blank'> Diffusion Bias Explorer </a> <br>
@@ -142,7 +225,8 @@ with gr.Blocks() as demo:
  <a href='https://huggingface.co/spaces/tti-bias/professions-bovw-knn' style='text-decoration: underline;' target='_blank'> BoVW Professions Explorer </a> <br>
  <a href='https://huggingface.co/spaces/tti-bias/identities-colorfulness-knn' style='text-decoration: underline;' target='_blank'> Colorfulness Profession Explorer </a> <br>
  <a href='https://huggingface.co/spaces/tti-bias/professions-colorfulness-knn' style='text-decoration: underline;' target='_blank'> Colorfulness Identities Explorer </a> <br> </p>
- ''')
  # gr.Interface.load("spaces/society-ethics/DiffusionBiasExplorer")
-
  demo.launch(debug=True)
 
  from PIL import Image
  import os
 
+
  def get_images(path):
+ images = [Image.open(os.path.join(path, im)) for im in os.listdir(path)]
  paths = os.listdir(path)
+ return [(im, path) for im, path in zip(images, paths)]
 
 
  with gr.Blocks() as demo:
+ gr.Markdown(
+ """
  ## Stable Bias: Analyzing Societal Representations in Diffusion Models
+ """
+ )
+ gr.HTML(
+ """
  <p style="margin-bottom: 10px; font-size: 94%">This is the demo page for the "Stable Bias" paper, which aims to explore and quantify social biases in text-to-image systems. <br> This work was done by <a href='https://huggingface.co/sasha' style='text-decoration: underline;' target='_blank'> Alexandra Sasha Luccioni (Hugging Face) </a>, <a href='https://huggingface.co/cakiki' style='text-decoration: underline;' target='_blank'> Christopher Akiki (ScaDS.AI, Leipzig University)</a>, <a href='https://huggingface.co/meg' style='text-decoration: underline;' target='_blank'> Margaret Mitchell (Hugging Face) </a> and <a href='https://huggingface.co/yjernite' style='text-decoration: underline;' target='_blank'> Yacine Jernite (Hugging Face) </a>.</p>
+ """
+ )
+ examples_path = "images/examples"
+ examples_gallery = gr.Gallery(
+ get_images(examples_path),
+ label="Example images generated by three text-to-image models (Dall-E 2, Stable Diffusion v1.4 and v.2).",
+ show_label=True,
+ elem_id="gallery",
+ ).style(grid=[1, 6], height="auto")
+ gr.HTML(
+ """
  <p style="margin-bottom: 14px; font-size: 100%"> As AI-enabled Text-to-Image models are becoming increasingly used, characterizing the social biases they exhibit is a necessary first step to lowering their risk of discriminatory outcomes. <br> We compare three such models: <b> Stable Diffusion v.1.4, Stable Diffusion v.2. </b>, and <b> Dall-E 2 </b>, prompting them to produce images of different <i> professions </i> and <i> identity characteristics </i>. <br> You can explore our findings in the sections below: </p>
+ """
+ )
+
+ gr.Markdown(
+ """
  ### Looking at Identity Groups
+ """
+ )
+
+ gr.Markdown(
+ """
  One of the goals of our study was to look at the ways in which different identity groups (ethnicity and gender) are represented by text-to-image models. Since artificial depictions of fictive humans have no inherent gender or ethnicity nor do they belong to socially-constructed groups, we pursued our analysis <i> without </i> ascribing identity categories to the images generated, using unsupervised techniques such as clustering. We find clear evidence of ethnicity and gender biases, which you can see by expanding the accordion below or directly via the [Identity Representation Demo](https://huggingface.co/spaces/society-ethics/DiffusionFaceClustering).
+ """
+ )
 
  with gr.Accordion("Looking at Identity Groups", open=False):
+ gr.HTML(
+ """
  <p style="margin-bottom: 14px; font-size: 100%"> One of the approaches that we adopted in our work is hierarchical clustering of the images generated by the text-to-image systems in response to prompts that include identity terms with regards to ethnicity and gender. <br> We computed 3 different numbers of clusters (12, 24 and 48) and created an <a href='https://huggingface.co/spaces/society-ethics/DiffusionFaceClustering' style='text-decoration: underline;' target='_blank'> Identity Representation Demo </a> that allows for the exploration of the different clusters and their contents. </p>
+ """
+ )
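The hierarchical clustering step described in the string above can be sketched in a few lines. This is a minimal illustration only, not the authors' actual pipeline: the embeddings below are random stand-ins for face/image embeddings, `scipy` is assumed to be available, and only the cut-one-dendrogram-at-12/24/48 pattern mirrors the demo.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# Hypothetical stand-ins for embeddings of generated face images.
embeddings = rng.normal(size=(60, 32))

# Build the full dendrogram once with Ward linkage, then cut it at
# several levels, mirroring the 12 / 24 / 48-cluster views in the demo.
tree = linkage(embeddings, method="ward")
for n_clusters in (12, 24, 48):
    labels = fcluster(tree, t=n_clusters, criterion="maxclust")
    print(n_clusters, len(np.unique(labels)))
```

Cutting one tree at several depths is what makes the three cluster granularities mutually consistent: a 12-cluster group is always a union of 24-cluster groups.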
+ with gr.Row():
  with gr.Column(scale=2):
  impath = "images/identities"
+ identity_gallery = gr.Gallery(
+ [os.path.join(impath, im) for im in os.listdir(impath)],
+ label="Identity cluster images",
+ show_label=False,
+ elem_id="gallery",
+ ).style(grid=3, height="auto")
  with gr.Column(scale=1):
+ gr.HTML(
+ """
  <p style="margin-bottom: 14px; font-size: 100%"> You can see that the models reflect many societal biases -- for instance representing Native Americans wearing traditional headdresses, non-binary people with stereotypical haircuts and glasses, and East Asian men with features that amplify ethnic stereotypes. <br> <br> This is problematic because it reinforces existing cultural stereotypes and fails to represent the diversity that is present in all identity groups.</p>
+ """
+ )
+ gr.Markdown(
+ """
  ### Exploring Biases
+ """
+ )
+ gr.Markdown(
+ """
  Machine Learning models encode and amplify biases that are represented in the data that they are trained on - this can include, for instance, stereotypes around the appearances of members of different professions. In our study, we prompted the 3 text-to-image models with texts pertaining to 150 different professions and analyzed the presence of different identity groups in the images generated. We found evidence of many societal stereotypes in the images generated, such as the fact that people in positions of power (e.g. director, CEO) are often White- and male-appearing, while the images generated for other professions are more diverse. Read more about our findings in the accordion below or directly via the [Diffusion Cluster Explorer](https://huggingface.co/spaces/society-ethics/DiffusionClustering) tool.
+ """
+ )
  with gr.Accordion("Exploring Biases", open=False):
+ gr.HTML(
+ """
  <p style="margin-bottom: 14px; font-size: 100%"> We also explore the correlations between the professions that we used in our prompts and the different identity clusters that we identified. <br> Using both the <a href='https://huggingface.co/spaces/society-ethics/DiffusionClustering' style='text-decoration: underline;' target='_blank'> Diffusion Cluster Explorer </a> and the <a href='https://huggingface.co/spaces/society-ethics/DiffusionFaceClustering' style='text-decoration: underline;' target='_blank'> Identity Representation Demo </a>, we can see which clusters are most correlated with each profession and what identities are in these clusters.</p>
+ """
+ )
+ with gr.Row():
  with gr.Column():
+ gr.HTML(
+ """
+ <p style="margin-bottom: 14px; font-size: 100%"> Using the <b><a href='https://huggingface.co/spaces/society-ethics/DiffusionClustering' style='text-decoration: underline;' target='_blank'> Diffusion Cluster Explorer</a></b>, we can see that the top cluster for the CEO and director professions is <b> Cluster 4</b>: </p> """
+ )
  with gr.Column():
+ ceo_img = gr.Image(
+ Image.open("images/bias/ceo_dir.png"),
+ label="CEO Image",
+ show_label=False,
+ )
+
+ with gr.Row():
  with gr.Column():
+ gr.HTML(
+ """
+ <p style="margin-bottom: 14px; font-size: 100%"> Going back to the <b><a href='https://huggingface.co/spaces/society-ethics/DiffusionFaceClustering' style='text-decoration: underline;' target='_blank'> Identity Representation Demo </a></b>, we can see that the most represented gender term is <i> man </i> (56% of the cluster) and the most represented ethnicity term is <i> White </i> (29% of the cluster). <br> This is consistent with common stereotypes regarding people in positions of power, who are predominantly male, according to the US Bureau of Labor Statistics. </p> """
+ )
  with gr.Column():
+ cluster4 = gr.Image(
+ Image.open("images/bias/Cluster4.png"),
+ label="Cluster 4 Image",
+ show_label=False,
+ )
+ with gr.Row():
  with gr.Column():
+ gr.HTML(
+ """
+ <p style="margin-bottom: 14px; font-size: 100%"> If we look at the cluster representation of professions such as social assistant and social worker, we can observe that the former is best represented by <b>Cluster 2</b>, whereas the latter has a more uniform representation across multiple clusters: </p> """
+ )
  with gr.Column():
+ social_img = gr.Image(
+ Image.open("images/bias/social.png"),
+ label="social image",
+ show_label=False,
+ )
+ with gr.Row():
  with gr.Column(scale=1):
+ gr.HTML(
+ """
+ <p style="margin-bottom: 14px; font-size: 100%"> Cluster 2 is best represented by the gender term <i> woman </i> (81%) as well as <i> Latinx </i> (19%). <br> This gender proportion is exactly the same as the one provided by the United States Bureau of Labor Statistics (which you can see in the table above), with 81% of social assistants identifying as women. </p> """
+ )
  with gr.Column(scale=2):
+ cluster4 = gr.Image(
+ Image.open("images/bias/Cluster2.png"),
+ label="Cluster 2 Image",
+ show_label=False,
+ )
 
+ gr.Markdown(
+ """
  ### Comparing Model Generations
+ """
+ )
+ gr.Markdown(
+ """
  Above and beyond quantitative analyses, one of the main goals of our project was to create accessible ways for the users to explore the generated images themselves, based on their own interests. For this purpose, we created two interactive tools: the [Diffusion Bias Explorer](https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer), which can be used to compare two models and the images they generate for a given profession or for a given model across two professions, and the [Average Diffusion Faces Tool](https://huggingface.co/spaces/society-ethics/Average_diffusion_faces), which shows an 'average' representation of faces across professions, based on the images generated by the 3 models.
+ """
+ )
  with gr.Accordion("Comparing Model Generations", open=False):
+ gr.HTML(
+ """
+ <p style="margin-bottom: 14px; font-size: 100%"> One of the goals of our study was allowing users to compare model generations across professions in an open-ended way, uncovering patterns and trends on their own. This is why we created the <a href='https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer' style='text-decoration: underline;' target='_blank'> Diffusion Bias Explorer </a> and the <a href='https://huggingface.co/spaces/society-ethics/Average_diffusion_faces' style='text-decoration: underline;' target='_blank'> Average Diffusion Faces </a> tools. We show some of their functionalities below: </p> """
+ )
+ with gr.Row():
  with gr.Column():
  explorerpath = "images/biasexplorer"
+ biasexplorer_gallery = gr.Gallery(
+ get_images(explorerpath),
+ label="Bias explorer images",
+ show_label=False,
+ elem_id="gallery",
+ ).style(grid=[2, 2])
  with gr.Column():
+ gr.HTML(
+ """
+ <p style="margin-bottom: 14px; font-size: 100%"> Comparing generations both between two models and within a single model can help uncover trends and patterns that are hard to measure using quantitative approaches. <br> For instance, we can observe that both Dall-E 2 and Stable Diffusion 2 represent both <i> CEOs </i> and <i> nurses </i> as homogeneous groups with distinct characteristics, such as ties and scrubs (which makes sense given the results of our clustering, shown above). <br> We can also see that the images of <i> waitresses </i> generated by Dall-E 2 and Stable Diffusion v.1.4 have different characteristics, both in terms of their clothes as well as their appearance. <br> It's also possible to see harder-to-describe phenomena, like the fact that portraits of <i> painters </i> often look like paintings themselves. <br> We encourage you to use the <a href='https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer' style='text-decoration: underline;' target='_blank'> Diffusion Bias Explorer </a> tool to explore these phenomena further! </p>"""
+ )
+ with gr.Row():
  with gr.Column():
  averagepath = "images/averagefaces"
+ average_gallery = gr.Gallery(
+ get_images(averagepath),
+ label="Average Face images",
+ show_label=False,
+ elem_id="gallery",
+ ).style(grid=[1, 3], height=560)
  with gr.Column():
+ gr.HTML(
+ """
+ <p style="margin-bottom: 14px; font-size: 100%"> Looking at the average faces for a given profession across multiple models can help us see the dominant characteristics of that profession, as well as how much variation there is (based on how fuzzy the image is). <br> In the images shown here, we can see that representations of these professions significantly differ across the three models, while sharing common characteristics, e.g. <i> postal workers </i> all wear caps. <br> Also, the average faces of <i> hairdressers </i> seem fuzzier than those of the other professions, indicating a higher diversity among their generations. <br> Look at the <a href='https://huggingface.co/spaces/society-ethics/Average_diffusion_faces' style='text-decoration: underline;' target='_blank'> Average Diffusion Faces </a> tool for more examples! </p>"""
+ )
 
+ gr.Markdown(
+ """
  ### Exploring the Pixel Space of Generated Images
+ """
+ )
+ gr.Markdown(
+ """
  Finally, an interesting aspect of the generations of the 3 models is the images themselves, which can be analyzed from different angles on a pixel-level. We explore the images in terms of their colorfulness using the [Colorfulness Profession Explorer](https://huggingface.co/spaces/tti-bias/identities-colorfulness-knn) and the [Colorfulness Identities Explorer](https://huggingface.co/spaces/tti-bias/professions-colorfulness-knn), which allow users to hone in on patterns in terms of colors and shades within the images generated. We also allow exploration of the images in terms of their visual features using the bag-of-visual-words approach (BoVW), which allows users to hone in on visual stereotypical content such as professions that have uniforms of a given color, or elements like glasses and hair styles -- this can be done via the [BoVW Nearest Neighbors Explorer](https://huggingface.co/spaces/tti-bias/identities-bovw-knn) and the [BoVW Professions Explorer](https://huggingface.co/spaces/tti-bias/professions-bovw-knn) -- we also present some of our salient findings in the accordion below.
+ """
+ )
  with gr.Accordion("Exploring the Pixel Space of Generated Images", open=False):
+ gr.HTML(
+ """
  <br>
  <p style="margin-bottom: 14px; font-size: 100%"> With thousands of generated images, we found it useful to provide ways to explore the data in a structured way that did not depend on any external dataset or model. We provide two such tools, one based on <b>colorfulness</b> and one based on a <b>bag-of-visual words</b> model computed using SIFT features.</p>
+ """
+ )
+ with gr.Row():
+ gr.HTML(
+ """
  <h4>Colorfulness</h4>
  <p style="margin-bottom: 14px; font-size: 100%"> We compute an image's "colorfulness" following <a href="https://doi.org/10.1117/12.477378">this work</a> by David Hasler and Sabine E. Suesstrunk and allow the user to choose a specific prompt and model and explore the neighborhood of that chosen starting point. One interesting orthogonal insight is that images generated by DALL·E 2 are on average the most colorful. Images of men are on average less colorful than all other gender labels, consistently across all three models. Patterns revealed using this explorer include for example the exoticizing depiction of Native Americans as can be seen in the very stereotypical gallery of images generated in the example on the right.</p>
+ """
+ )
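The Hasler and Suesstrunk colorfulness measure referenced above reduces to a few lines of NumPy: it combines the standard deviation and mean magnitude of the two opponent color channels. A minimal sketch (the synthetic test images are illustrative, not from the dataset):

```python
import numpy as np

def colorfulness(img):
    """Hasler & Suesstrunk (2003) colorfulness measure.
    `img` is an H x W x 3 RGB array of any numeric dtype."""
    r, g, b = (img[..., i].astype(float) for i in range(3))
    rg = r - g              # red-green opponent channel
    yb = 0.5 * (r + g) - b  # yellow-blue opponent channel
    std_root = np.sqrt(rg.std() ** 2 + yb.std() ** 2)
    mean_root = np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
    return std_root + 0.3 * mean_root

# A saturated red/blue image scores higher than a uniform gray one.
gray = np.full((8, 8, 3), 128)
stripes = np.zeros((8, 8, 3))
stripes[:, ::2, 0] = 255   # red columns
stripes[:, 1::2, 2] = 255  # blue columns
print(colorfulness(gray) < colorfulness(stripes))  # True
```

Because the metric only needs pixel statistics, it gives a model-free axis along which thousands of generations can be sorted and compared.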
  gr.Image("images/colorfulness/nativeamerican_man.png")
  with gr.Row():
+ gr.HTML(
+ """
  <h4>Bag of Visual Words</h4>
  <p style="margin-bottom: 14px; font-size: 100%"> Another way of providing the means for a structured traversal of the dataset is a nearest-neighbor explorer based on visual features provided by an image's SIFT features, which we quantize into a visual vocabulary to represent the entire image dataset as a TF-IDF matrix. These tools are especially useful in honing in on stereotypical content that is often encoded visually, but also failure modes of the model such as the misinterpretation of the "stocker" profession as an imagined dog-breed. The screenshot to the right shows how SIFT visual patterns tend to cluster together, namely in this instance the bookshelf in the background. </p>
+ """
+ )
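The SIFT-to-visual-vocabulary-to-TF-IDF pipeline described above can be sketched as follows. Everything here is a stand-in: random vectors replace real 128-D SIFT descriptors and a random matrix replaces a k-means-learned vocabulary; only the quantize, count, and reweight structure reflects the described approach.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for per-image SIFT descriptors (128-D each).
descriptors_per_image = [rng.normal(size=(40, 128)) for _ in range(5)]

# A "visual vocabulary" of k cluster centers (normally fit with k-means).
k = 16
vocab = rng.normal(size=(k, 128))

def bow_histogram(desc, vocab):
    """Quantize each descriptor to its nearest visual word and count words."""
    d2 = ((desc[:, None, :] - vocab[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    return np.bincount(words, minlength=len(vocab)).astype(float)

counts = np.stack([bow_histogram(d, vocab) for d in descriptors_per_image])

# TF-IDF over the (image x visual word) count matrix: frequent-everywhere
# words are downweighted, distinctive ones (e.g. a bookshelf texture) stand out.
tf = counts / counts.sum(axis=1, keepdims=True)
df = (counts > 0).sum(axis=0)
idf = np.log((1 + len(counts)) / (1 + df)) + 1.0
tfidf = tf * idf
```

Nearest neighbors in the rows of `tfidf` then give the kind of structured traversal the explorer offers.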
  gr.Image("images/bovw/librarians.png")
+ gr.Markdown(
+ """
  ### All of the tools created as part of this project:
+ """
+ )
+ gr.HTML(
+ """
  <p style="margin-bottom: 10px; font-size: 110%">
  <a href='https://huggingface.co/spaces/society-ethics/Average_diffusion_faces' style='text-decoration: underline;' target='_blank'> Average Diffusion Faces </a> <br>
  <a href='https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer' style='text-decoration: underline;' target='_blank'> Diffusion Bias Explorer </a> <br>
 
  <a href='https://huggingface.co/spaces/tti-bias/professions-bovw-knn' style='text-decoration: underline;' target='_blank'> BoVW Professions Explorer </a> <br>
  <a href='https://huggingface.co/spaces/tti-bias/identities-colorfulness-knn' style='text-decoration: underline;' target='_blank'> Colorfulness Profession Explorer </a> <br>
  <a href='https://huggingface.co/spaces/tti-bias/professions-colorfulness-knn' style='text-decoration: underline;' target='_blank'> Colorfulness Identities Explorer </a> <br> </p>
+ """
+ )
  # gr.Interface.load("spaces/society-ethics/DiffusionBiasExplorer")
+
  demo.launch(debug=True)