Avijit Ghosh committed
Commit 1945d3f
Parent(s): 85b09dd

update description

Files changed (1):
1. app.py (+3, -3)
app.py CHANGED
@@ -164,15 +164,15 @@ with gr.Blocks(title="Skin Tone and Gender bias in Text to Image Models") as dem
  In this demo, we explore the potential biases in text-to-image models by generating multiple images based on user prompts and analyzing the gender and skin tone of the generated subjects. Here's how the analysis works:
 
  1. **Image Generation**: For each prompt, 10 images are generated using the selected model.
- 2. **Gender Detection**: The BLIP caption generator is used to detect gender by identifying words like "man," "boy," "woman," and "girl" in the captions.
- 3. **Skin Tone Classification**: The skin-tone-classifier library is used to extract the skin tones of the generated subjects.
+ 2. **Gender Detection**: The [BLIP caption generator](https://huggingface.co/Salesforce/blip-image-captioning-large) is used to detect gender by identifying words like "man," "boy," "woman," and "girl" in the captions.
+ 3. **Skin Tone Classification**: The [skin-tone-classifier library](https://github.com/ChenglongMa/SkinToneClassifier) is used to extract the skin tones of the generated subjects.
 
 
  #### Visualization
 
  We create visual grids to represent the data:
 
- - **Skin Tone Grids**: Skin tones are plotted as exact hex codes rather than using the Fitzpatrick scale, which can be problematic and limiting for darker skin tones.
+ - **Skin Tone Grids**: Skin tones are plotted as exact hex codes rather than using the Fitzpatrick scale, which can be [problematic and limiting for darker skin tones](https://arxiv.org/pdf/2309.05148).
  - **Gender Grids**: Light green denotes men, dark green denotes women, and grey denotes cases where the BLIP caption did not specify a binary gender.
 
  This demo provides an insightful look into how current text-to-image models handle sensitive attributes, shedding light on areas for improvement and further study.
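For reference, step 2 of the description can be sketched with the `transformers` image-to-text pipeline. This is a minimal sketch assuming the linked Salesforce/blip-image-captioning-large checkpoint; the helper name and keyword handling are illustrative, not necessarily the exact logic in app.py.

```python
# Minimal sketch of caption-based gender detection (assumed logic,
# not necessarily identical to app.py).
from transformers import pipeline

# BLIP captioning model linked in the description above.
captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-large")

def detect_gender(image) -> str:
    """Return 'man', 'woman', or 'unspecified' based on the BLIP caption."""
    caption = captioner(image)[0]["generated_text"].lower()
    words = set(caption.split())
    if words & {"man", "boy"}:
        return "man"
    if words & {"woman", "girl"}:
        return "woman"
    # Grey cells in the gender grid correspond to this branch.
    return "unspecified"
```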
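Step 3 uses the linked SkinToneClassifier project. Below is a hedged usage sketch following the `stone.process` API shown in that project's README; the call signature and return schema are assumptions taken from the README, and app.py may invoke the library differently.

```python
# Sketch of skin tone extraction with skin-tone-classifier
# (pip install skin-tone-classifier), which ships a `stone` module.
import stone

# `process` and the result schema are assumed from the project README.
result = stone.process("generated_image.png", image_type="color")
for face in result["faces"]:
    # Each detected face carries a dominant skin tone as an exact hex
    # code, which is what the skin tone grids plot directly.
    print(face["skin_tone"])
```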
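The grids themselves need nothing beyond plain matplotlib: each of the 10 generated images becomes one colored cell, filled with either the exact skin tone hex code or the light-green/dark-green/grey gender code. The sketch below uses invented hex values purely for illustration.

```python
# Illustrative 5x2 grid: one cell per generated image, colored by an
# exact hex code (values here are hypothetical, not real outputs).
import matplotlib.pyplot as plt
from matplotlib.patches import Rectangle

hex_codes = ["#9D7A54", "#6F503C", "#C99F82", "#3B2219", "#E7C1B8",
             "#9D7A54", "#C99F82", "#6F503C", "#E7C1B8", "#3B2219"]

fig, ax = plt.subplots(figsize=(5, 2))
for i, color in enumerate(hex_codes):
    ax.add_patch(Rectangle((i % 5, i // 5), 1, 1, facecolor=color, edgecolor="white"))
ax.set_xlim(0, 5)
ax.set_ylim(0, 2)
ax.set_aspect("equal")
ax.axis("off")
plt.show()
```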