import gradio as gr from PIL import Image import os def get_images(path): images = [Image.open(os.path.join(path,im)) for im in os.listdir(path)] paths = os.listdir(path) return([(im, path) for im, path in zip(images,paths)]) with gr.Blocks() as demo: gr.Markdown(""" ## Stable Bias: Analyzing Societal Representations in Diffusion Models """) gr.HTML('''

This is the demo page for the "Stable Bias" paper, which aims to explore and quantify social biases in text-to-image systems.
This work was done by Alexandra Sasha Luccioni (Hugging Face) , Christopher Akiki (ScaDS.AI, Leipzig University), Margaret Mitchell (Hugging Face) and Yacine Jernite (Hugging Face) .

''') examples_path= "images/examples" examples_gallery = gr.Gallery(get_images(examples_path), label="Example images generated by three text-to-image models (Dall-E 2, Stable Diffusion v1.4 and v.2).", show_label=True, elem_id="gallery").style(grid=[1,6], height="auto") gr.HTML('''

As AI-enabled Text-to-Image models are becoming increasingly used, characterizing the social biases they exhibit is a necessary first step to lowering their risk of discriminatory outcomes.
We compare three such models: Stable Diffusion v.1.4, Stable Diffusion v.2. , and Dall-E 2 , prompting them to produce images of different professions and identity characteristics .
You explore our findings in the sections below:

''') gr.Markdown(""" ### Looking at Identity Groups """) gr.Markdown(""" One of the goals of our study was to look at the ways in which different identity groups (ethnicity and gender) are represented by text-to-image models. Since artificial depictions of fictive humans have no inherent gender or ethnicity nor do they belong to socially-constructed groups, we pursued our analysis without ascribing identity categories to the images generated, using unsupervised techniques such as clustering. We find clear evidence of ethnicity and gender biases, which you can see by expanding the accordion below or directly via the [Identity Representation Demo](https://huggingface.co/spaces/society-ethics/DiffusionFaceClustering). """) with gr.Accordion("Looking at Identity Groups", open=False): gr.HTML('''

One of the approaches that we adopted in our work is hierarchical clustering of the images generated by the text-to-image systems in response to prompts that include identity terms with regards to ethnicity and gender.
We computed 3 different numbers of clusters (12, 24 and 48) and created an Identity Representation Demo that allows for the exploration of the different clusters and their contents.

''') with gr.Row(): with gr.Column(scale=2): impath = "images/identities" identity_gallery = gr.Gallery([os.path.join(impath,im) for im in os.listdir(impath)], label="Identity cluster images", show_label=False, elem_id="gallery" ).style(grid=3, height="auto") with gr.Column(scale=1): gr.HTML('''

You can see that the models reflect many societal biases -- for instance representing Native Americans wearing traditional headdresses, non-binary people with stereotypical haircuts and glasses, and East Asian men with features that amplify ethnic stereotypes.

This is problematic because it reinforces existing cultural stereotypes and fails to represent the diversity that is present in all identity groups.

''') gr.Markdown(""" ### Exploring Biases """) gr.Markdown(""" Machine Learning models encode and amplify biases that are represented in the data that they are trained on -this can include, for instance, stereotypes around the appearances of members of different professions. In our study, we prompted the 3 text-to-image models with texts pertaining to 150 different professions and analyzed the presence of different identity groups in the images generated. We found evidence of many societal stereotypes in the images generated, such as the fact that people in positions of power (e.g. director, CEO) are often White- and male-appearing, while the images generated for other professions are more diverse. Read more about our findings in the accordion below or directly via the [Diffusion Cluster Explorer](https://huggingface.co/spaces/society-ethics/DiffusionClustering) tool. """) with gr.Accordion("Exploring Biases", open=False): gr.HTML('''

We also explore the correlations between the professions that use used in our prompts and the different identity clusters that we identified.
Using both the Diffusion Cluster Explorer and the Identity Representation Demo , we can see which clusters are most correlated with each profession and what identities are in these clusters.

''') with gr.Row(): with gr.Column(): gr.HTML('''

Using the Diffusion Cluster Explorer, we can see that the top cluster for the CEO and director professions is Cluster 4:

''') with gr.Column(): ceo_img = gr.Image(Image.open("images/bias/ceo_dir.png"), label = "CEO Image", show_label=False) with gr.Row(): with gr.Column(): gr.HTML('''

Going back to the Identity Representation Demo , we can see that the most represented gender term is man (56% of the cluster) and White (29% of the cluster).
This is consistent with common stereotypes regarding people in positions of power, who are predominantly male, according to the US Labor Bureau Statistics.

''') with gr.Column(): cluster4 = gr.Image(Image.open("images/bias/Cluster4.png"), label = "Cluster 4 Image", show_label=False) with gr.Row(): with gr.Column(): gr.HTML('''

If we look at the cluster representation of professions such as social assistant and social worker, we can observe that the former is best represented by Cluster 2, whereas the latter has a more uniform representation across multiple clusters:

''') with gr.Column(): social_img = gr.Image(Image.open("images/bias/social.png"), label = "social image", show_label=False) with gr.Row(): with gr.Column(scale=1): gr.HTML('''

Cluster 2 is best represented by the gender term is woman (81%) as well as Latinx (19%)
This gender proportion is exactly the same as the one provided by the United States Labor Bureau (which you can see in the table above), with 81% of social assistants identifying as women.

''') with gr.Column(scale=2): cluster4 = gr.Image(Image.open("images/bias/Cluster2.png"), label = "Cluster 2 Image", show_label=False) gr.Markdown(""" ### Comparing Model Generations """) gr.Markdown(""" Above and beyond quantitative analyses, one of the main goals of our project was to create accessible ways for the users to explore the generated images themselves, based on their own interests. For this purpose, we created two interactive tools: the [Diffusion Bias Explorer](https://huggingface.co/spaces/society-ethics/DiffusionBiasExplorer), which can be used to compare two models and the images they generate for a given profession or for a given model across two professions, and the [Average Diffusion Faces Tool](https://huggingface.co/spaces/society-ethics/Average_diffusion_faces), which shows an 'average' representation of faces across professions, based on the images generated by the 3 models. """) with gr.Accordion("Comparing Model Generations", open=False): gr.HTML('''

One of the goals of our study was allowing users to compare model generations across professions in an open-ended way, uncovering patterns and trends on their own. This is why we created the Diffusion Bias Explorer and the Average Diffusion Faces tools. We show some of their functionalities below:

''') with gr.Row(): with gr.Column(): explorerpath = "images/biasexplorer" biasexplorer_gallery = gr.Gallery(get_images(explorerpath), label="Bias explorer images", show_label=False, elem_id="gallery").style(grid=[2,2]) with gr.Column(): gr.HTML('''

Comparing generations both between two models and within a single model can help uncover trends and patterns that are hard to measure using quantitative approaches.
For instance, we can observe that both Dall-E 2 and Stable Diffusion 2 represent both CEOs and nurses as homogenous groups with distinct characteristics, such as ties and scrubs (which makes sense given the results of our clustering, shown above.
We can also see that the images of waitresses generated by Dall-E 2 and Stable Diffusion v.1.4. have different characteristics, both in terms of their clothes as well as their appearance.
It's also possible to see harder to describe phenomena, like the fact that portraits of painters often look like paintings themselves.
We encourage you to use the Diffusion Bias Explorer tool to explore these phenomena further!

''') with gr.Row(): with gr.Column(): averagepath = "images/averagefaces" average_gallery = gr.Gallery(get_images(averagepath), label="Average Face images", show_label=False, elem_id="gallery").style(grid=[1,3], height=560) with gr.Column(): gr.HTML('''

Looking at the average faces for a given profession across multiple models can help see the dominant characteristics of that profession, as well as how much variation there is (based on how fuzzy the image is).
In the images shown here, we can see that representations of these professions significantly differ across the three models, while sharing common characteristics, e.g. postal workers all wear caps.
Also, the average faces of hairdressers seem more fuzzy than the other professions, indicating a higher diversity among the generations compared to other professions.
Look at the Average Diffusion Faces tool for more examples!

''') gr.Markdown(""" ### Exploring the Pixel Space of Generated Images """) gr.Markdown(""" Finally, an interesting aspect of the generations of the 3 models are the images themselves, which can be analyzed from different angles on a pixel-level. We explore the images in terms of their colorfulness using the [Colorfulness Profession Explorer](https://huggingface.co/spaces/tti-bias/identities-colorfulness-knn) and the [Colorfulness Identities Explorer](https://huggingface.co/spaces/tti-bias/professions-colorfulness-knn), which allow users to hone in on patterns in terms of colors and shades within the images generated. We also allow exploration of the images in terms of their visual features using the bag-of-visual-words approach (BoVW), which allows users to hone in on visual stereotypical content such as professions that have uniforms of a given color, of elements like glasses and hair styles -- this can be done via the [BoVW Nearest Neighbors Explorer](https://huggingface.co/spaces/tti-bias/identities-bovw-knn) and the [BoVW Professions Explorer](https://huggingface.co/spaces/tti-bias/professions-bovw-knn) -- we also present some of our salient findings in the accordion below. """) with gr.Accordion("Exploring the Pixel Space of Generated Images", open=False): gr.HTML('''

With thousands of generated images, we found it useful to provide ways to explore the data in a structured way that did not depend on any external dataset or model. We provide two such tools, one based on colorfulness and one based on a bag-of-visual words model computed using SIFT features.

''') with gr.Row(): gr.HTML('''

Colorfulness

We compute an image's "colorfulness" following this work by David Hasler and Sabine E. Suesstrunk and allow the user to choose a specific prompt and model and explore the neighborhood of that chosen starting point. One interesting orthogonal insight is that images generated by DALLĀ·E 2 are on average the most colorful. Images of men are on average less colorful than all other gender labels, consistently across all three models. Patterns revealed using this explorer include for example the exoticizing depiction of native Americans as can be seen in the very stereotypical gallery of images generated in the example on the right.

''') gr.Image("images/colorfulness/nativeamerican_man.png") with gr.Row(): gr.HTML('''

Bag of Visual Words

Another way of providing the means for a structured traversal of the dataset is a nearest-neighbor explorer based on visual features provided by an image's SIFT features, which we quantize into a visual vocabulary to represent the entire image dataset as a TF-IDF matrix. These tools are especially useful in honing in on stereotypical content that is often encoded visually, but also failure modes of the model such as the misinterpetation of the "stocker" profession as an imagined dog-breed. The screenshot to the right shows how SIFT visual patterns tend to cluster together, namely in this instance the booksheves in the background, or the gibberish pseudo-English text that often plagues TTI systems.

''') with gr.Column(): gr.Image("images/bovw/bookshelves.png") gr.Image("images/bovw/gibberish.png") gr.Markdown(""" ### All of the tools created as part of this project: """) gr.HTML('''

Average Diffusion Faces
Diffusion Bias Explorer
Diffusion Cluster Explorer
Identity Representation Demo
BoVW Nearest Neighbors Explorer
BoVW Professions Explorer
Colorfulness Profession Explorer
Colorfulness Identities Explorer

''') # gr.Interface.load("spaces/society-ethics/DiffusionBiasExplorer") demo.launch(debug=True)