Commit fc5e407 (parent: 1c382cb): Update README.md

README.md (changed)
A 260B token subset of cerebras/SlimPajama-627B.
In a pre-processing step, we split documents into chunks of exactly 1024 tokens. We provide tokenization with the Llama-2 tokenizer in the `input_ids` column.
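The chunking step above can be sketched as follows. This is a minimal illustration, not the dataset's actual pre-processing code: the README does not say how a trailing remainder shorter than 1024 tokens is handled, so this sketch simply drops it, and the tokenizer is stubbed out with a pre-tokenized id list.

```python
# Illustrative sketch: split a document's token ids into chunks of
# exactly 1024 tokens, as described in the pre-processing step.
# Assumption (not stated in the README): any trailing remainder
# shorter than 1024 tokens is dropped rather than padded.

CHUNK_SIZE = 1024

def chunk_input_ids(input_ids, chunk_size=CHUNK_SIZE):
    """Yield consecutive, non-overlapping chunks of exactly `chunk_size` ids."""
    for start in range(0, len(input_ids) - chunk_size + 1, chunk_size):
        yield input_ids[start:start + chunk_size]

# Example: a 2500-token document yields two complete 1024-token chunks;
# the final 452 tokens are discarded under the assumption above.
doc = list(range(2500))
chunks = list(chunk_input_ids(doc))
print(len(chunks), len(chunks[0]))  # 2 1024
```

Each resulting chunk corresponds to one row's `input_ids` value in the released dataset.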
**Paper:** [QuRating: Selecting High-Quality Data for Training Language Models](https://arxiv.org/pdf/2402.09739.pdf)
**Guidance on Responsible Use**

In the paper, we document various types of bias that are present in the quality ratings (biases related to domains, topics, social roles, regions, and languages - see Section 6 of the paper).
Hence, be aware that data selection with QuRating could have unintended and harmful effects on the language model that is being trained. We strongly recommend a comprehensive evaluation of the language model for these and other types of bias, particularly before real-world deployment. We hope that releasing the data/models can facilitate future research aimed at uncovering and mitigating such biases.
22 |
**Citation:**
|
23 |
```
|
24 |
@article{wettig2024qurating,
|