yjernite (HF staff) committed
Commit: 376239d
Parent: 4bceec3

Update app.py

Files changed (1): app.py (+8, -1)
app.py CHANGED
@@ -20,7 +20,9 @@ We encourage users to take advantage of this app to explore those trends, for ex
 - Do you find that some ethnicity terms lead to more stereotypical visual representations than others?
 - Do you find that some gender terms lead to more stereotypical visual representations than others?
 
-These questions only scratch the surface of what we can learn from demos like this one, let us know what you find [in the discussions tab](https://huggingface.co/spaces/society-ethics/DiffusionFaceClustering/discussions), or if you think of other relevant questions!
+These questions only scratch the surface of what we can learn from demos like this one,
+let us know what you find [in the discussions tab](https://huggingface.co/spaces/society-ethics/DiffusionFaceClustering/discussions),
+or if you think of other relevant questions!
 """
 
 _CONTEXT = """
@@ -36,6 +38,11 @@ we should not assign a specific gender or ethnicity to a synthetic figure genera
 In this app, we instead take a 2-step clustering-based approach. First, we generate 680 images for each model by varying mentions of terms that denote gender or ethnicity in the prompts.
 Then, we use a [VQA-based model](https://huggingface.co/Salesforce/blip-vqa-base) to cluster these images at different granularities (12, 24, or 48 clusters).
 Exploring these clusters allows us to examine trends in the models' associations between visual features and textual representation of social attributes.
+
+**Note:** this demo was developed with a limited set of gender- and ethnicity-related terms that are more relevant to the US context as a first approach,
+so users may not always find themselves represented.
+If you have suggestions for additional categories you would particularly like to see in the next version,
+please tell us about them [in the discussions tab](https://huggingface.co/spaces/society-ethics/DiffusionFaceClustering/discussions)!
 """
 
 clusters_12 = json.load(open("clusters/id_all_blip_clusters_12.json"))
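The diff's trailing context line shows how app.py loads the precomputed clusters: one JSON file per granularity (12, 24, or 48 clusters), named by a common pattern. A minimal sketch of that loading step, with a hypothetical `load_clusters` helper (not in the original file, which loads each granularity inline):

```python
import json

def load_clusters(granularity):
    """Load the precomputed cluster assignments for one granularity.

    Assumes the file-naming pattern visible in the diff, e.g.
    "clusters/id_all_blip_clusters_12.json" for 12 clusters.
    """
    path = f"clusters/id_all_blip_clusters_{granularity}.json"
    with open(path) as f:
        return json.load(f)
```

With this helper, the three granularities used by the app could be loaded as `load_clusters(12)`, `load_clusters(24)`, and `load_clusters(48)`, mirroring the inline `json.load(open(...))` calls in app.py.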