Commit b686bc2 by cakiki
Parent(s): 10c4625

Update app.py
Files changed (1): app.py (+2 −2)
app.py CHANGED
@@ -120,13 +120,13 @@ with gr.Blocks() as demo:
     with gr.Row():
         gr.HTML('''
         <h4>Colorfulness</h4>
-        <p style="margin-bottom: 14px; font-size: 100%"> We compute an image's "colorfulness" following <a href="https://doi.org/10.1117/12.477378">this work</a> by David Hasler and Sabine E. Suesstrunk and allow the user to choose a specific prompt and model and explore the neighborhood of that chosen starting point. One interesting orthogonal insight is that images generated by DALL·E 2 are on average the most colorful. Images of men are on average less colorful than all other gender labels, consistently across all three models. </p>
+        <p style="margin-bottom: 14px; font-size: 100%"> We compute an image's "colorfulness" following <a href="https://doi.org/10.1117/12.477378">this work</a> by David Hasler and Sabine E. Suesstrunk and allow the user to choose a specific prompt and model and explore the neighborhood of that chosen starting point. One interesting orthogonal insight is that images generated by DALL·E 2 are on average the most colorful. Images of men are on average less colorful than all other gender labels, consistently across all three models. Patterns revealed using this explorer include, for example, the exoticizing depiction of Native Americans, as can be seen in the very stereotypical gallery of images generated in the example on the right.</p>
         ''')
         gr.Image("images/colorfulness/nativeamerican_man.png")
     with gr.Row():
         gr.HTML('''
         <h4>Bag of Visual Words</h4>
-        <p style="margin-bottom: 14px; font-size: 100%"> Another way of providing the means for a structured traversal of the dataset is a nearest-neighbor explorer based on visual features provided by an image's SIFT features, which we quantize into a visual vocabulary to represent the entire image dataset as a TF-IDF matrix. These tools are especially useful in honing in on stereotypical content that is often encoded visually, but also failure modes of the model such as the misinterpetation of the "stocker" profession as an imagined dog-breed.</p>
+        <p style="margin-bottom: 14px; font-size: 100%"> Another way of providing the means for a structured traversal of the dataset is a nearest-neighbor explorer based on an image's SIFT features, which we quantize into a visual vocabulary to represent the entire image dataset as a TF-IDF matrix. These tools are especially useful in homing in on stereotypical content that is often encoded visually, but also on failure modes of the model, such as the misinterpretation of the "stocker" profession as an imagined dog breed. The screenshot to the right shows how SIFT visual patterns tend to cluster together, in this instance the bookshelf in the background. </p>
         ''')
         gr.Image("images/bovw/librarians.png")
     gr.Markdown("""
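
For context on the changed colorfulness paragraph: the Hasler–Süsstrunk metric it cites combines the spread and magnitude of two opponent color channels. A minimal sketch (this is the published formula, not the app's actual implementation, which is not shown in this diff):

```python
import numpy as np

def colorfulness(image: np.ndarray) -> float:
    """Hasler & Suesstrunk (2003) colorfulness of an H x W x 3 RGB array."""
    rgb = image.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    rg = r - g                  # red-green opponent channel
    yb = 0.5 * (r + g) - b      # yellow-blue opponent channel
    # Combine the standard deviations and mean magnitudes of both channels.
    std_root = np.sqrt(rg.std() ** 2 + yb.std() ** 2)
    mean_root = np.sqrt(rg.mean() ** 2 + yb.mean() ** 2)
    return std_root + 0.3 * mean_root

# A uniform gray image scores zero; a saturated red patch scores high.
gray = np.full((8, 8, 3), 128)
red = np.zeros((8, 8, 3))
red[..., 0] = 255
print(colorfulness(gray))  # 0.0
print(colorfulness(red) > colorfulness(gray))  # True
```

Because the metric is a single scalar per image, averaging it per model or per gender label gives exactly the kind of comparison the paragraph reports (e.g. DALL·E 2 images being the most colorful on average).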
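
The bag-of-visual-words pipeline described in the second changed paragraph can be sketched end to end. This is an illustrative NumPy-only version: random arrays stand in for per-image SIFT descriptors (the real app would extract them with a SIFT implementation), and nearest-centroid assignment stands in for a fitted k-means vocabulary:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for SIFT output: each image yields a variable number of
# 128-dimensional local descriptors.
descriptors_per_image = [rng.normal(size=(n, 128)) for n in (40, 55, 30)]

# A "visual vocabulary" of k centroids (normally fit with k-means on a
# sample of descriptors from the whole dataset).
k = 16
vocabulary = rng.normal(size=(k, 128))

def bovw_histogram(desc: np.ndarray) -> np.ndarray:
    # Quantize: assign each descriptor to its nearest visual word.
    dists = np.linalg.norm(desc[:, None, :] - vocabulary[None, :, :], axis=2)
    words = dists.argmin(axis=1)
    return np.bincount(words, minlength=k).astype(np.float64)

# Image x visual-word count matrix for the whole dataset.
counts = np.stack([bovw_histogram(d) for d in descriptors_per_image])

# TF-IDF weighting, so ubiquitous visual words are down-weighted.
tf = counts / counts.sum(axis=1, keepdims=True)
df = (counts > 0).sum(axis=0)
idf = np.log((1 + len(counts)) / (1 + df)) + 1.0  # smoothed idf
tfidf = tf * idf

# Nearest-neighbor exploration via cosine similarity between images.
normed = tfidf / np.linalg.norm(tfidf, axis=1, keepdims=True)
sims = normed @ normed.T
```

Ranking each row of `sims` then gives the nearest-neighbor explorer the paragraph describes, which is why images sharing a dominant visual pattern (such as the bookshelf background in the librarian examples) cluster together.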