yjernite committed
Commit 8024608
1 Parent(s): f7198da

conclusion sec 1

Files changed (1): app.py (+6 −7)

app.py CHANGED
@@ -123,7 +123,7 @@ with gr.Blocks() as demo:
 
  **Look like** is the operative phrase here as the people depicted in the pictures are synthetic and so do not belong to socially-constructed groups.
  Consequently, since we cannot assign a gender or ethnicity label to each data point,
- we instead focus on dataset-level trends in visual features that are correlated with social variation in the text prompts.
+ we instead focus on dataset-level trends in visual features that are correlated with social variation in the text prompts.
  We do this through *controlled prompting* and *hierarchical clustering*: for each system,
  we obtain a dataset of images corresponding to prompts of the format "*Photo portrait of a **(identity terms)** person at work*",
  where ***(identity terms)*** jointly enumerate phrases describing ethnicities and phrases denoting gender.
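For concreteness, the controlled-prompting step described in this hunk amounts to enumerating a grid of (ethnicity, gender) term pairs over a fixed template. A minimal sketch in Python; the term lists below are hypothetical placeholders, not the ones used in the app:

```python
from itertools import product

# Hypothetical term lists for illustration only; the actual lists used by the
# app are not shown in this diff. "" stands for the unspecified case.
ethnicity_terms = ["", "Black", "White", "Latino"]
gender_terms = ["", "man", "woman", "non-binary"]

prompts = []
for eth, gen in product(ethnicity_terms, gender_terms):
    terms = " ".join(t for t in (eth, gen) if t)
    # Fixed template from the text: "Photo portrait of a (identity terms) person at work"
    middle = f"{terms} person" if terms else "person"
    prompts.append(f"Photo portrait of a {middle} at work")

print(len(prompts))  # 4 * 4 = 16 controlled prompts, including the fully unspecified one
```

Including the empty string on both axes is what yields the "unspecified gender and ethnicity" prompts that the cluster analysis below relies on.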
@@ -218,7 +218,11 @@ with gr.Blocks() as demo:
  The clusters with the most examples of both prompts with unspecified gender and ethnicity terms are **clusters 5 and 19**,
  and both are also strongly associated with the words *man*, *White*, and *Caucasian*.
  This association holds across genders (as showcased by **cluster 15**, which has a majority of *woman* and *White* prompts)
- and ethnicities (comparing the proportions of unspecified genders in **clusters 0 and 6**)
+ and ethnicities (comparing the proportions of unspecified genders in **clusters 0 and 6**).
+
+ This provides the beginning of an answer to our motivating question: since users rarely specify an explicit gender or ethnicity when using
+ these systems to generate images of people, the high likelihood of defaulting to *Whiteness* and *masculinity* at least partially explains the observed lack of diversity.
+ We compare these behaviors across systems and professions in the next section.
  """
  )
  with gr.Column(scale=1):
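The comparison this hunk describes boils down to computing, per cluster, the share of images whose prompt left gender (or ethnicity) unspecified. A minimal sketch, assuming hypothetical record and field names:

```python
# Hypothetical records: one per generated image, carrying the prompt terms it
# was produced with and the cluster assigned by hierarchical clustering.
records = [
    {"gender_term": "", "ethnicity_term": "", "cluster": 19},
    {"gender_term": "woman", "ethnicity_term": "White", "cluster": 15},
    {"gender_term": "", "ethnicity_term": "Black", "cluster": 0},
]

def unspecified_share(records, cluster, key):
    """Share of a cluster's images whose prompt left `key` unspecified ("")."""
    in_cluster = [r for r in records if r["cluster"] == cluster]
    if not in_cluster:
        return float("nan")
    return sum(1 for r in in_cluster if not r[key]) / len(in_cluster)

# e.g. compare the proportion of unspecified-gender prompts in clusters 0 and 6
for c in (0, 6):
    print(c, unspecified_share(records, c, "gender_term"))
```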
@@ -233,11 +237,6 @@ with gr.Blocks() as demo:
  ),
  label="Screenshot of the Identity Exploration tool for: Cluster 19 of 24",
  )
- gr.Markdown(
- """
- Conclusion: let's use those to measure other outputs of the model that represent people!!!
- """
- )
  for var in [id_cl_id_1, id_cl_id_2, id_cl_id_3]:
  var.change(
  show_id_images,
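The `var.change(show_id_images, ...)` loop in this hunk follows Gradio's standard event-wiring pattern: each dropdown's change event re-runs a callback that updates the displayed images. A self-contained sketch, with a hypothetical `show_id_images` stub standing in for the app's real callback:

```python
import gradio as gr

def show_id_images(cluster_id):
    # Hypothetical stub: the real app looks up pre-computed cluster images;
    # here we just return placeholder file paths for the gallery.
    return [f"images/cluster_{cluster_id}.jpg"]

with gr.Blocks() as demo:
    # One dropdown per comparison slot, mirroring id_cl_id_1..3 in the diff.
    pickers = [
        gr.Dropdown(choices=[str(i) for i in range(24)], label=f"Cluster {i + 1}")
        for i in range(3)
    ]
    gallery = gr.Gallery(label="Cluster examples")
    # Same wiring pattern as the hunk above: any selection change re-renders.
    for var in pickers:
        var.change(show_id_images, inputs=var, outputs=gallery)

demo.launch()
```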
 