princeton-nlp committed on
Commit 4c02222
1 Parent(s): 3a6138d

Update README.md

Files changed (1)
  1. README.md +21 -31
README.md CHANGED
@@ -1,31 +1,21 @@
- ---
- dataset_info:
-   features:
-   - name: text
-     dtype: string
-   - name: input_ids
-     sequence: int32
-   - name: length
-     dtype: int64
-   - name: source_domain
-     dtype: string
-   - name: writing_style_average
-     dtype: float64
-   - name: required_expertise_average
-     dtype: float64
-   - name: facts_and_trivia_average
-     dtype: float64
-   - name: educational_value_average
-     dtype: float64
-   splits:
-   - name: train
-     num_bytes: 2010865867591
-     num_examples: 254141282
-   download_size: 1055335695720
-   dataset_size: 2010865867591
- configs:
- - config_name: default
-   data_files:
-   - split: train
-     path: data/train-*
- ---
 
+ ## QuRatedPajama
+
+ A 260B-token subset of [cerebras/SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B), annotated by [princeton-nlp/QuRater-1.3B](https://huggingface.co/princeton-nlp/QuRater-1.3B/tree/main) with sequence-level quality ratings across four criteria:
+ - **Educational Value** - e.g., the text includes clear explanations, step-by-step reasoning, or questions and answers
+ - **Facts & Trivia** - how much factual and trivia knowledge the text contains, where specific facts and obscure trivia are preferred over common knowledge
+ - **Writing Style** - how polished and well-written the text is
+ - **Required Expertise** - how much expertise and prerequisite knowledge is necessary to understand the text
+
+ In a pre-processing step, we split documents into chunks of exactly 1024 tokens. We provide the tokenization with the Llama-2 tokenizer in the `input_ids` column.
+
+ Paper: [QuRating: Selecting High-Quality Data for Training Language Models](https://arxiv.org/pdf/2402.09739.pdf)
+
+ Citation:
+ ```
+ @article{wettig2024qurating,
+    title={QuRating: Selecting High-Quality Data for Training Language Models},
+    author={Alexander Wettig and Aatmik Gupta and Saumya Malik and Danqi Chen},
+    journal={arXiv preprint arXiv:2402.09739},
+    year={2024}
+ }
+ ```
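
As a quick usage sketch, the quality-rating columns described in the README can drive streaming-time filtering with the `datasets` library. The repo id (written here as `princeton-nlp/QuRatedPajama-260B`) and the score threshold are assumptions for illustration, not part of the commit:

```python
from datasets import load_dataset

# Stream the train split; the full dataset is roughly 1 TB of shards,
# so streaming avoids a full download. Repo id is an assumption.
dataset = load_dataset(
    "princeton-nlp/QuRatedPajama-260B", split="train", streaming=True
)

# Keep only chunks rated highly for educational value
# (the 1.0 threshold is arbitrary and only for illustration).
high_edu = dataset.filter(lambda ex: ex["educational_value_average"] > 1.0)

for example in high_edu.take(3):
    print(example["source_domain"], example["length"], example["text"][:100])
```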
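Similarly, since `input_ids` stores the Llama-2 tokenization of each 1024-token chunk, the ids can be fed to a model directly or decoded back to text. A sketch, assuming access to the gated `meta-llama/Llama-2-7b-hf` tokenizer (any Llama-2 checkpoint's tokenizer should behave the same):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Repo id assumed as above; the Llama-2 tokenizer is gated on the Hub.
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
dataset = load_dataset(
    "princeton-nlp/QuRatedPajama-260B", split="train", streaming=True
)

example = next(iter(dataset))
print(example["length"])  # 1024: documents were split into fixed-size chunks

# Decode the stored ids back to (approximately) the raw text.
decoded = tokenizer.decode(example["input_ids"], skip_special_tokens=True)
print(decoded[:200])
```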