yjernite committed 596296e (1 parent: 19a1f6c)

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -34,7 +34,7 @@ We engage directly with contributors and have addressed pressing issues. To brin
 - launched a [flagging feature](https://twitter.com/GiadaPistilli/status/1571865167092396033) for our community to determine whether ML artifacts or community content (model, dataset, space, or discussion) violate our [content guidelines](https://huggingface.co/content-guidelines),
 - monitor our community discussion boards to ensure Hub users abide by the [code of conduct](https://huggingface.co/code-of-conduct),
 - robustly document our most-downloaded models with model cards that detail social impacts, biases, and intended and out-of-scope use cases,
-- create audience-guiding tags, such as the “Not For All Eyes” tag that can be added to the repository’s card metadata to avoid un-requested violent and sexual content,
+- create audience-guiding tags, such as the “Not For All Audiences” tag that can be added to the repository’s card metadata to avoid un-requested violent and sexual content,
 - promote use of [Open Responsible AI Licenses (RAIL)](https://huggingface.co/blog/open_rail) for [models](https://www.licenses.ai/blog/2022/8/26/bigscience-open-rail-m-license), such as with LLMs ([BLOOM](https://huggingface.co/spaces/bigscience/license), [BigCode](https://huggingface.co/spaces/bigcode/license)),
 - conduct research that [analyzes](https://arxiv.org/abs/2302.04844) which models and datasets have the highest potential for, or track record of, misuse and malicious use.
 
@@ -60,7 +60,7 @@ Should a specific model be flagged as high risk by our community, we consider:
 - Gating access to ML artifacts (see documentation for [models](https://huggingface.co/docs/hub/models-gated) and [datasets](https://huggingface.co/docs/hub/datasets-gated)),
 - Disabling access.
 
-**How to add the “Not For All Eyes” tag:**
+**How to add the “Not For All Audiences” tag:**
 
 Edit the model/data card → add `not-for-all-audiences` in the tags section → open the PR and wait for the authors to merge it.
 <p align="center">
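
For reference, a minimal sketch of what the tagging step quoted in the diff looks like in practice: the tag is added to the YAML metadata block at the top of the repository's README.md card. The surrounding fields vary by repository; only the `tags` entry below is the part this how-to describes.

```yaml
---
# Card metadata: the YAML block at the top of the repo's README.md.
# Adding this entry marks the repository as "Not For All Audiences".
tags:
- not-for-all-audiences
---
```

On a repository you do not own, saving this edit opens a pull request, which then waits for the authors to review and merge, as the step above notes.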