---
pretty_name: QuRatedPajama-1B_tokens_for_analysis
---

# QuRatedPajama

Paper: QuRating: Selecting High-Quality Data for Training Language Models

This dataset is a 1B token subset derived from princeton-nlp/QuRatedPajama-260B, which is a subset of cerebras/SlimPajama-627B annotated by princeton-nlp/QuRater-1.3B with sequence-level quality ratings across 4 criteria:

- **Educational Value** - e.g., the text includes clear explanations, step-by-step reasoning, or questions and answers
- **Facts & Trivia** - how much factual and trivia knowledge the text contains, where specific facts and obscure trivia are preferred over more common knowledge
- **Writing Style** - how polished and well-written the text is
- **Required Expertise** - how much expertise and prerequisite knowledge is required to understand the text

This subset is useful for analyzing quality ratings. It contains unsupervised domain clusters for the CommonCrawl and C4 domains (a description of these clusters can be found here). We also report the quality ratings for each 512-token chunk of each example.
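Since each example is longer than a single rating chunk, the per-512-token-chunk ratings can be aggregated into an example-level score, e.g. by averaging. A minimal sketch, where the field name `educational_value_chunks` is a hypothetical placeholder rather than the dataset's actual schema:

```python
# Sketch: aggregate per-512-token-chunk quality ratings into one score per
# example. The field name "educational_value_chunks" is hypothetical; check
# the dataset's actual column names before use.

def example_score(chunk_scores: list[float]) -> float:
    """Average the per-chunk ratings of one example."""
    return sum(chunk_scores) / len(chunk_scores)

# A 1024-token example contains two 512-token chunks, hence two ratings.
row = {"educational_value_chunks": [1.0, 0.5]}  # hypothetical ratings
print(example_score(row["educational_value_chunks"]))  # 0.75
```

Other aggregations (e.g. taking the maximum chunk rating) are equally possible; averaging is shown only as the simplest choice.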

In a pre-processing step, we split documents into chunks of exactly 1024 tokens. We provide the tokenization with the Llama-2 tokenizer in the `input_ids` column.
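The chunking step can be sketched as follows. This is an illustration of splitting a tokenized document into exact 1024-token chunks, not the released preprocessing code; in particular, how the original pipeline handles a trailing partial chunk is an assumption here (this sketch simply drops it):

```python
# Illustrative sketch of the pre-processing step: split a tokenized document
# into chunks of exactly 1024 tokens. Dropping the trailing remainder is an
# assumption of this sketch, not a documented detail of the pipeline.

CHUNK_SIZE = 1024

def split_into_chunks(input_ids: list[int], chunk_size: int = CHUNK_SIZE) -> list[list[int]]:
    """Return full chunks of `chunk_size` token ids; any remainder is dropped."""
    n_full = len(input_ids) // chunk_size
    return [input_ids[i * chunk_size:(i + 1) * chunk_size] for i in range(n_full)]

doc = list(range(2500))             # stand-in for Llama-2 token ids
chunks = split_into_chunks(doc)
print(len(chunks), len(chunks[0]))  # 2 1024
```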

## Guidance on Responsible Use

In the paper, we document various types of bias that are present in the quality ratings (biases related to domains, topics, social roles, regions and languages - see Section 6 of the paper). Hence, be aware that data selection with QuRating could have unintended and harmful effects on the language model that is being trained. We strongly recommend a comprehensive evaluation of the language model for these and other types of bias, particularly before real-world deployment. We hope that releasing the data/models can facilitate future research aimed at uncovering and mitigating such biases. Note that the quality ratings do not measure the social or literary value of a text and should not be used for textual or demographic studies.

## Citation

@article{wettig2024qurating,
   title={QuRating: Selecting High-Quality Data for Training Language Models},
   author={Alexander Wettig and Aatmik Gupta and Saumya Malik and Danqi Chen},
   journal={arXiv preprint arXiv:2402.09739},
   year={2024}
}