davanstrien (HF staff) committed
Commit 875c003
1 Parent(s): 1939e85

update nb link

Files changed (1): README.md (+2 −2)
README.md CHANGED
@@ -238,7 +238,7 @@ To measure whether IDEFICS demonstrates bias across various protected characteri
 
 These prompts were run across two datasets [fairface](https://huggingface.co/datasets/HuggingFaceM4/FairFace) and [Stable Bias](yjernite/stable-bias_grounding-images_multimodel_3_12_22).
 The FairFace dataset is "a face image dataset which is race balanced. It contains 108,501 images from 7 different race groups: White, Black, Indian, East Asian, Southeast Asian, Middle Eastern, and Latino. Images were collected from the YFCC-100M Flickr dataset and labelled with race, gender, and age groups".
- The Stable Bias dataset is a dataset of synthetically generated images from the prompt "A photo portrait of a (ethnicity) (gender) at work.".
+ The Stable Bias dataset is a dataset of synthetically generated images from the prompt "A photo portrait of a (ethnicity) (gender) at work".
 
 Running the above prompts across both these datasets results in two datasets containing three generated responses for each image alongside information about the ascribed ethnicity and gender of the person depicted in each image.
 This allows for the generated response to each prompt to be compared across gender and ethnicity axis.
@@ -253,7 +253,7 @@ To surface potential biases in the outputs, we consider the following simple [TF
 With this approach, we can see subtle differences in the frequency of terms across gender and ethnicity. For example, for the prompt related to resumes, we see that synthetic images generated for `non-binary` are more likely to lead to resumes that include **data** or **science** than those generated for `man` or `woman`.
 When looking at the response to the arrest prompt for the FairFace dataset, the term `theft` is more frequently associated with `East Asian`, `Indian`, `Black` and `Southeast Asian` than `White` and `Middle Eastern`.
 
- Comparing generated responses to the resume prompt by gender across both datasets, we see for FairFace that the terms `financial`, `development`, `product` and `software` appear more frequently for `man`. For StableBias, the terms `data` and `science` appear more frequently for `non-binary`. This [notebook](https://huggingface.co/spaces/davanstrien/m4-bias/blob/main/m4_bias_eval.ipynb) offers a more comprehensive overview of this evaluation.
+ Comparing generated responses to the resume prompt by gender across both datasets, we see for FairFace that the terms `financial`, `development`, `product` and `software` appear more frequently for `man`. For StableBias, the terms `data` and `science` appear more frequently for `non-binary`. This [notebook](https://huggingface.co/spaces/HuggingFaceM4/m4-bias-eval/blob/main/m4_bias_eval.ipynb) offers a more comprehensive overview of this evaluation.
 
 
 ## Other limitations
 
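The changed passage describes a TF-IDF comparison of generated responses grouped by ascribed gender or ethnicity. The sketch below is a minimal illustration of that kind of per-group term analysis, not the code from the linked notebook; it assumes pandas and scikit-learn, and the DataFrame column names (`response`, `gender`) and the helper name `top_terms_per_group` are hypothetical.

```python
# Minimal sketch (assumed, not the notebook's exact code): join each group's
# generated responses into one document, fit TF-IDF over the groups, and
# report the highest-weighted terms per group.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer


def top_terms_per_group(df: pd.DataFrame, text_col: str, group_col: str, k: int = 10):
    # One concatenated document per group (e.g. per gender or ethnicity label).
    grouped = df.groupby(group_col)[text_col].apply(" ".join)

    vectorizer = TfidfVectorizer(stop_words="english")
    tfidf = vectorizer.fit_transform(grouped)
    terms = vectorizer.get_feature_names_out()

    top = {}
    for group, row in zip(grouped.index, tfidf.toarray()):
        # k terms with the largest TF-IDF weight for this group.
        top[group] = [terms[i] for i in row.argsort()[::-1][:k]]
    return top


# Hypothetical usage: `df` has one row per generated response, with the text
# in "response" and the ascribed attribute in "gender" (or "ethnicity").
# print(top_terms_per_group(df, "response", "gender"))
```

Comparing the returned term lists across groups is what surfaces differences such as `data`/`science` appearing more often for `non-binary`, or `theft` for some ethnicity labels, as described in the diff above.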