---
license: cc-by-4.0
---

## Mission: Open and Good ML
In our mission to democratize good machine learning (ML), we examine how supporting ML community work also empowers the examination and prevention of possible harms. Open development and science decentralize power so that many people can collectively work on AI that reflects their needs and values. While [openness enables broader perspectives to contribute to research and AI overall, it comes with the tension of less control over risk](https://arxiv.org/abs/2302.04844).

Moderating ML artifacts presents unique challenges due to the dynamic and rapidly evolving nature of these systems. As ML models become more advanced and capable of producing increasingly diverse content, the potential for harmful or unintended outputs grows, which calls for robust moderation and evaluation strategies. Moreover, the complexity of ML models and the vast amounts of data they process exacerbate the challenge of identifying and addressing potential biases and ethical concerns.

As hosts, we recognize the responsibility that comes with potentially amplifying harm to our users and the world more broadly. Often these harms disparately impact minority communities in a context-dependent manner. Our approach is to analyze the tensions in play for each context and to keep that analysis open to discussion across the company and the Hugging Face community. While many models can amplify harm, especially discriminatory content, we are taking a series of steps to identify the highest-risk models and to decide what action to take. Importantly, active perspectives from many backgrounds are key to understanding, measuring, and mitigating potential harms that affect different groups of people.

We are crafting tools and safeguards, in addition to improving our documentation practices, to ensure that open source science empowers individuals and continues to minimize potential harms.

## Ethical Categories
The first major aspect of our work to foster good open ML is promoting the tools and positive examples of ML development that prioritize values and consideration for their stakeholders. This helps users take concrete steps to address outstanding issues and present plausible alternatives to de facto damaging practices in ML development.

To help our users discover and engage with ethics-related ML work, we have compiled a set of tags. These 6 high-level categories are based on our analysis of Spaces that community members have contributed. They are designed to give you a jargon-free way of thinking about ethical technology:

- Rigorous work pays special attention to developing with best practices in mind. In ML, this can mean examining failure cases (including conducting bias and fairness audits), protecting privacy through security measures, and ensuring that potential users (technical and non-technical) are informed about the project's limitations.
- Consentful work [supports](https://www.consentfultech.io/) the self-determination of people who use and are affected by these technologies.
- Socially Conscious work shows us how technology can support social, environmental, and scientific efforts.
- Sustainable work highlights and explores techniques for making machine learning ecologically sustainable.
- Inclusive work broadens the scope of who builds and benefits in the machine learning world.
- Inquisitive work shines a light on inequities and power structures which challenge the community to rethink its relationship to technology.

Read more at https://huggingface.co/ethics

Look for these terms: we'll be using these tags, and updating them based on community contributions, across some new projects on the Hub!
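
If you want to surface your own work under one of these themes, a tag is simply an entry in the `tags` list of a repository card's YAML metadata. The sketch below is purely illustrative and uses placeholder slugs rather than confirmed tag names; check https://huggingface.co/ethics for the tags actually in use.

```yaml
# Illustrative card metadata (the YAML block at the top of a repo's README.md).
# The slugs below are placeholders for the categories described above;
# see https://huggingface.co/ethics for the exact tags in use.
license: cc-by-4.0
tags:
  - rigorous      # placeholder slug for "Rigorous" work
  - inclusive     # placeholder slug for "Inclusive" work
```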

## Safeguards
Taking an “all-or-nothing” view of open releases ignores the wide variety of contexts that determine an ML artifact’s positive or negative impacts. Having more levers of control over how ML systems are shared and re-used supports collaborative development and analysis with less risk of promoting harmful uses or misuses, allowing for more openness and participation in innovation for shared benefits.

We engage directly with contributors and have addressed pressing issues. To bring this to the next level, we are building community-based processes. This approach empowers both Hugging Face contributors and those affected by contributions to inform the limitations, sharing, and additional mechanisms necessary for models and data made available on our platform. The three main aspects we will pay attention to are the origin of the artifact, how the artifact is handled by its developers, and how the artifact has been used. In that respect, we:
- launched a [flagging feature](https://twitter.com/GiadaPistilli/status/1571865167092396033) for our community to determine whether ML artifacts or community content (model, dataset, space, or discussion) violate our [content guidelines](https://huggingface.co/content-guidelines),
- monitor our community discussion boards to ensure Hub users abide by the [code of conduct](https://huggingface.co/code-of-conduct),
- robustly document our most-downloaded models with model cards that detail social impacts, biases, and intended and out-of-scope use cases,
- create audience-guiding tags, such as the “Not For All Eyes” tag, which can be added to a repository’s card metadata to help users avoid unrequested violent and sexual content,
- promote the use of [Open Responsible AI Licenses (RAIL)](https://huggingface.co/blog/open_rail) for [models](https://www.licenses.ai/blog/2022/8/26/bigscience-open-rail-m-license), such as with LLMs ([BLOOM](https://huggingface.co/spaces/bigscience/license), [BigCode](https://huggingface.co/spaces/bigcode/license)),
- conduct research that [analyzes](https://arxiv.org/abs/2302.04844) which models and datasets have the highest potential for, or track record of, misuse and malicious use.

**How to use the flagging function:**
Click on the flag icon on any Model, Dataset, Space, or Discussion:
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img5.png" alt="screenshot pointing to the flag icon to Report this model" />
</p>

Share why you flagged this item:
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img5.png" alt="screenshot showing the text window where you describe why you flagged this item" />
</p>

In prioritizing open science, we examine potential harm on a case-by-case basis. When users flag a system, developers can directly and transparently respond to concerns. Moderators are able to disengage from discussions should behavior become hateful and/or abusive (see [code of conduct](https://huggingface.co/code-of-conduct)).

Should a specific model be flagged as high risk by our community, we consider:
- Downgrading the ML artifact’s visibility across the Hub in the trending tab and in feeds,
- Requesting that the model be made private,
- Gating access to ML artifacts (see the documentation for [models](https://huggingface.co/docs/hub/models-gated) and [datasets](https://huggingface.co/docs/hub/datasets-gated); a sketch of what this can look like in card metadata follows this list),
- Disabling access.
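
To give a sense of what gating looks like in practice, the sketch below shows optional card-metadata fields that customize the access-request form of a gated repository; gating itself is switched on in the repository settings. This is a minimal sketch, and the prompt text and field names are placeholders rather than a prescribed configuration.

```yaml
# Illustrative card metadata for a gated repository (README.md front matter).
# Gating is enabled in the repo settings; these optional fields customize the
# access-request form. Prompt and field names below are placeholders.
extra_gated_prompt: "You must agree to this artifact's usage restrictions to access it."
extra_gated_fields:
  Name: text
  Affiliation: text
  I agree to use this artifact in line with its license and usage restrictions: checkbox
```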

**How to add the “Not For All Eyes” tag:**

Edit the model/data card → add “not_for_all_eyes” in the tags section → open the PR and wait for the authors to merge it.
<p align="center">
<br>
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/blog/ethics_soc_2/img5.png" alt="screenshot showing where to add tags" />
</p>
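
In card-metadata terms, the edit above amounts to adding a single entry to the `tags` list in the README's YAML block. The sketch below is minimal and does not reproduce any particular repository's card; the other fields stand in for whatever the card already declares.

```yaml
# Illustrative card metadata after the edit described above; the tag entry is
# the only relevant change, other fields are whatever the card already has.
license: cc-by-4.0
tags:
  - not_for_all_eyes   # audience-guiding tag described above
```

Proposing the change as a pull request, as described above, leaves the final call with the repository's authors.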

Open science requires safeguards, and one of our goals is to create an environment informed by the tradeoffs among different values. Hosting models and providing access to them, in addition to cultivating community and discussion, empowers diverse groups to assess social implications and guide what good machine learning is.


## Are you working on safeguards? Share them on Hugging Face Hub!

The most important part of Hugging Face is our community. If you’re a researcher working on making ML safer to use, especially for open science, we want to support and showcase your work!

Here are some recent demos and tools from researchers in the Hugging Face community:
- [A Watermark for LLMs](https://huggingface.co/spaces/tomg-group-umd/lm-watermarking) by John Kirchenbauer, Jonas Geiping, Yuxin Wen, Jonathan Katz, Ian Miers, and Tom Goldstein ([paper](https://arxiv.org/abs/2301.10226))
- [Generate Model Cards Tool](https://huggingface.co/spaces/huggingface/Model_Cards_Writing_Tool) by the Hugging Face team
- [Photoguard](https://huggingface.co/spaces/RamAnanth1/photoguard) to safeguard images against manipulation, by Ram Ananth

Thanks for reading! 🤗

~ Irene, Nima, Giada, Yacine, and Meg, on behalf of the Ethics and Society regulars