meg (HF staff) committed
Commit c5bee90 · 1 parent: 579f68b

Update app.py

Files changed (1): app.py (+1, −1)
app.py CHANGED
@@ -9,7 +9,7 @@ with gr.Blocks() as tutorial:
 
 Our approach to constructing ethical charters is based on the ethical concept of moral value pluralism, which allows different value systems present among the collaborators of a project to be considered of equal importance. In line with this approach, we adopt the practice typical of the Confucian ethical tradition that emphasizes, among other things, harmony (ε’Œ). This guides resolution, focusing on the coexistence of different parties \cite{ConfucianIdeal}. The notion of harmony thus fills one of our primary goals in helping people build ethical charters in AI development: to guide what people ought to do when making choices that affect others. Bottom-up collaboration, the support of ethics as a tool for constructing guidance, and value pluralism based on the notion of harmony make up the beacons of our approach.
 
-Our tutorial will draw on the idea that the scientific domain of artificial intelligence provides an opportunity for researchers from different disciplines to work together towards common goals. Tools of ethical reflection can help guide this interdisciplinary collaboration, aiding in distinguishing opinions from moral judgments, the importance of asking the right research questions, and the complexity of human judgment and emotion when defining "good" and "bad", ``right'' and ``wrong''. We will utilize previous work we have led as case studies, for example, the experience of drafting an ethical charter for BigScience \cite{BigScience}, an open science project focused on developing a multilingual language model and its dataset for research purposes. We will also connect to more well-known ethical charters similar to those discussed in this tutorial also include "pillars" and "principles" statements increasingly released by tech organizations,
+Our tutorial will draw on the idea that the scientific domain of artificial intelligence provides an opportunity for researchers from different disciplines to work together towards common goals. Tools of ethical reflection can help guide this interdisciplinary collaboration, aiding in distinguishing opinions from moral judgments, the importance of asking the right research questions, and the complexity of human judgment and emotion when defining "good" and "bad", "right" and "wrong". We will utilize previous work we have led as case studies, for example, the experience of drafting an ethical charter for BigScience \cite{BigScience}, an open science project focused on developing a multilingual language model and its dataset for research purposes. We will also connect to more well-known ethical charters similar to those discussed in this tutorial also include "pillars" and "principles" statements increasingly released by tech organizations,
 such as [Google's AI Principles](https://ai.google/responsibility/principles/), which Dr. Mitchell worked on operationalizing Google-internally; Microsoft's [Responsible AI Principles](https://www.microsoft.com/en-us/ai/responsible-ai) and Meta's [Five Pillars of Responsible AI](https://ai.meta.com/responsible-ai/).
 
 ## Impact statement