"""taxonomy.py: a Gradio page, built from tabs and accordions, presenting a
taxonomy of the social impacts of AI models and measurement techniques."""

import gradio as gr
import pandas as pd
from gradio_modal import Modal

from css import custom_css

# Maps each link's underlined display text (as rendered in the tables) to its
# full markdown link, so the modal can recover the URL for a clicked cell.
metadatadict = {}

def loadtable(path):
    """Load an evaluation table from CSV, underlining each link's display text
    and recording the display-text -> markdown-link mapping in metadatadict."""
    rawdf = pd.read_csv(path)
    for i, row in rawdf.iterrows():
        metadatadict['<u>' + str(row['Link']) + '</u>'] = '[' + str(row['Link']) + '](' + str(row['URL']) + ')'
    # Show the link text underlined in the table; the URL itself is only
    # needed by the modal, so drop the column from the rendered frame.
    rawdf['Link'] = '<u>' + rawdf['Link'] + '</u>'
    rawdf = rawdf.drop(columns=['URL'])
    return rawdf
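
# A minimal sketch of the CSV layout loadtable() assumes (illustrative values;
# only the 'Link', 'URL', and 'Modality' columns are relied on by this app):
#
#   Link,URL,Modality
#   WEAT,https://example.org/weat,Text
#   Bias in Image Search,https://example.org/imgsearch,Image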

def filter_bias_table(fulltable, modality_filter):
    """Return only the rows whose 'Modality' value is among the checked boxes."""
    return fulltable[fulltable['Modality'].isin(modality_filter)]
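
# Hypothetical usage sketch (the real table comes from BiasEvals.csv; the
# frame below is made up for illustration):
#
#   df = pd.DataFrame({'Link': ['a', 'b'], 'Modality': ['Text', 'Image']})
#   filter_bias_table(df, ['Text'])   # keeps only the 'Text' row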

def showmodal(evt: gr.SelectData):
    """Open the detail modal when a cell in the 'Link' column is clicked."""
    modal = Modal(visible=False)
    md = gr.Markdown("")
    # evt.index is (row, column); the 'Link' cells land in column 4 of the
    # displayed table.
    if evt.index[1] == 4:
        modal = Modal(visible=True)
        md = gr.Markdown('# ' + metadatadict[evt.value])
    return [modal, md]
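
# A more defensive variant would look the 'Link' column up by name instead of
# hard-coding index 4 (hypothetical sketch; 'table' stands in for the loaded frame):
#
#   link_col = list(table.columns).index('Link')
#   if evt.index[1] == link_col:
#       ...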

with gr.Blocks(title="Social Impact Measurement V2", css=custom_css) as demo:
    # Section A (technical base system evaluations) and section B (people and
    # society impact evaluations) each get their own tab group below.
    with gr.Row():
        gr.Markdown("""
# Social Impact Measurement
## A taxonomy of the social impacts of AI models and measurement techniques.
                    """)
    with gr.Row():
        gr.Markdown("""
#### A: Technical Base System Evaluations:
                    
Below we list the aspects of a generative system that can be evaluated. Evaluations that lack deployment context provide only narrow insights into these aspects of a generative AI system. The depth of the evaluation literature differs by modality, with some modalities having sparse or no relevant work, but the themes for evaluations can be applied to most systems.

The following categories are high-level, non-exhaustive, and present a synthesis of findings across modalities. They refer solely to what can be evaluated in a base technical system:

                    """)
    with gr.Tabs(elem_classes="tab-buttons") as tabs1:
        with gr.TabItem("Bias/Stereotypes"):
            fulltable = loadtable('BiasEvals.csv')
            gr.Markdown("""
            Generative AI systems can perpetuate harmful biases from various sources, including systemic, human, and statistical biases. These biases, also known as "fairness" considerations, can manifest in the final system due to choices made throughout the development process. They include harmful associations and stereotypes related to protected classes, such as race, gender, and sexuality. Evaluating biases involves assessing correlations, co-occurrences, sentiment, and toxicity across different modalities, both within the model itself and in the outputs of downstream tasks.
                        """)
            with gr.Row():
                modality_filter = gr.CheckboxGroup(
                    ["Text", "Image", "Audio", "Video"],
                    value=["Text", "Image", "Audio", "Video"],
                    label="Modality",
                    show_label=True,
                )
            with gr.Row():
                # The full table stays hidden so filtering never loses rows;
                # only the filtered copy is rendered.
                biastable_full = gr.DataFrame(value=fulltable, wrap=True, datatype="markdown", visible=False)
                biastable_filtered = gr.DataFrame(value=fulltable, wrap=True, datatype="markdown", visible=True)
                modality_filter.change(filter_bias_table, inputs=[biastable_full, modality_filter], outputs=biastable_filtered)
                with Modal(visible=False) as modal:
                    # Heading placeholder; replaced with the clicked row's link
                    # by showmodal().
                    md = gr.Markdown("Test 1")
                    gr.Markdown('### Aylin Caliskan, Joanna J. Bryson, and Arvind Narayanan')
                    # Render the paper's topic tags as inline hashtag chips.
                    tags = ['Bias', 'Word Association', 'Embedding', 'NLP']
                    tagmd = ''
                    for tag in tags:
                        tagmd += '<span class="tag">#' + tag + '</span> '
                    gr.Markdown(tagmd)
                    gr.Markdown('''
Artificial intelligence and machine learning are in a period of astounding growth. However, there are concerns that these
technologies may be used, either with or without intention, to perpetuate the prejudice and unfairness that unfortunately
characterizes many human institutions. Here we show for the first time that human-like semantic biases result from the
application of standard machine learning to ordinary language—the same sort of language humans are exposed to every
day. We replicate a spectrum of standard human biases as exposed by the Implicit Association Test and other well-known
psychological studies. We replicate these using a widely used, purely statistical machine-learning model—namely, the GloVe
word embedding—trained on a corpus of text from the Web. Our results indicate that language itself contains recoverable and
accurate imprints of our historic biases, whether these are morally neutral as towards insects or flowers, problematic as towards
race or gender, or even simply veridical, reflecting the status quo for the distribution of gender with respect to careers or first
names. These regularities are captured by machine learning along with the rest of semantics. In addition to our empirical
findings concerning language, we also contribute new methods for evaluating bias in text, the Word Embedding Association
Test (WEAT) and the Word Embedding Factual Association Test (WEFAT). Our results have implications not only for AI and
machine learning, but also for the fields of psychology, sociology, and human ethics, since they raise the possibility that mere
exposure to everyday language can account for the biases we replicate here.
                                ''')
                    gr.Gallery(['Images/WEAT1.png', 'Images/WEAT2.png'])
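
                    # For reference, a minimal sketch of the WEAT statistic described in
                    # the abstract above (illustrative only; word vectors would come from
                    # GloVe or a similar embedding, which is not loaded here):
                    #
                    #   import numpy as np
                    #   def cosine(u, v):
                    #       return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
                    #   def assoc(w, A, B):  # s(w, A, B) in the paper
                    #       return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])
                    #   def weat(X, Y, A, B):  # s(X, Y, A, B) in the paper
                    #       return sum(assoc(x, A, B) for x in X) - sum(assoc(y, A, B) for y in Y)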
                # Any cell click routes through showmodal(), which only opens
                # the modal for clicks in the 'Link' column.
                biastable_filtered.select(showmodal, None, [modal, md])



        # The remaining evaluation categories are image placeholders pending content.
        with gr.TabItem("Cultural Values/Sensitive Content"):
            with gr.Row():
                gr.Image()

        with gr.TabItem("Disparate Performance"):
            with gr.Row():
                gr.Image()

        with gr.TabItem("Privacy/Data Protection"):
            with gr.Row():
                gr.Image()

        with gr.TabItem("Financial Costs"):
            with gr.Row():
                gr.Image()

        with gr.TabItem("Environmental Costs"):
            with gr.Row():
                gr.Image()
        
        with gr.TabItem("Data and Content Moderation Labor"):
            with gr.Row():
                gr.Image()
    
    with gr.Row():
        gr.Markdown("""
#### B: People and Society Impact Evaluations:
                    
The long-term effects of systems embedded in society, such as economic or labor impacts, largely require anticipating generative AI systems' possible use cases, and fewer general evaluations are available for them. The following categories depend heavily on how generative AI systems are deployed, including the sector and application. In the broader ecosystem, the method of deployment shapes the social impact.

The following categories are high-level, non-exhaustive, and present a synthesis of findings across modalities. They refer solely to what can be evaluated in people and society:
                    """)

    with gr.Tabs(elem_classes="tab-buttons") as tabs2:
        with gr.TabItem("Trustworthiness and Autonomy"):
            with gr.Accordion("Trust in Media and Information", open=False):
                gr.Image()
            with gr.Accordion("Overreliance on Outputs", open=False):
                gr.Image()
            with gr.Accordion("Personal Privacy and Sense of Self", open=False):
                gr.Image()

        with gr.TabItem("Inequality, Marginalization, and Violence"):
            with gr.Accordion("Community Erasure", open=False):
                gr.Image()
            with gr.Accordion("Long-term Amplifying Marginalization by Exclusion (and Inclusion)", open=False):
                gr.Image()
            with gr.Accordion("Abusive or Violent Content", open=False):
                gr.Image()

        with gr.TabItem("Concentration of Authority"):
            with gr.Accordion("Militarization, Surveillance, and Weaponization", open=False):
                gr.Image()
            with gr.Accordion("Imposing Norms and Values", open=False):
                gr.Image()

        with gr.TabItem("Labor and Creativity"):
            with gr.Accordion("Intellectual Property and Ownership", open=False):
                gr.Image()
            with gr.Accordion("Economy and Labor Market", open=False):
                gr.Image()

        with gr.TabItem("Ecosystem and Environment"):
            with gr.Accordion("Widening Resource Gaps", open=False):
                gr.Image()
            with gr.Accordion("Environmental Impacts", open=False):
                gr.Image()
            
        
    with gr.Row():
        with gr.Accordion("📚 Citation", open=False):
            citation_button = gr.Textbox(
                value=r"""BOOK CHAPTER CITE GOES HERE""",
                lines=7,
                label="Copy the following to cite this work.",
                elem_id="citation-button",
                show_copy_button=True,
            )


demo.launch(debug=True)