princeton-nlp committed
Commit 3a45279
1 Parent(s): 4c02222

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -8,9 +8,9 @@ A 260B token subset of [cerebras/SlimPajama-627B](https://huggingface.co/dataset
 
 In a pre-processing step, we split documents into chunks of exactly 1024 tokens. We provide tokenization with the Llama-2 tokenizer in the `input_ids` column.
 
-Paper: [QuRating: Selecting High-Quality Data for Training Language Models](https://arxiv.org/pdf/2402.09739.pdf)
+**Paper:** [QuRating: Selecting High-Quality Data for Training Language Models](https://arxiv.org/pdf/2402.09739.pdf)
 
-Citation:
+**Citation:**
 ```
 @article{wettig2024qurating,
 title={QuRating: Selecting High-Quality Data for Training Language Models},
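The pre-processing step described in the README (splitting each tokenized document into chunks of exactly 1024 tokens) can be sketched as follows. This is a minimal illustration, not the dataset's actual pipeline; in particular, dropping a trailing remainder shorter than the chunk size is an assumption, and `chunk_tokens` is a hypothetical helper name.

```python
def chunk_tokens(token_ids, chunk_size=1024):
    """Split a token-id sequence into chunks of exactly `chunk_size` tokens.

    Assumption: any trailing remainder shorter than `chunk_size` is dropped,
    so every returned chunk has exactly `chunk_size` tokens.
    """
    return [
        token_ids[i : i + chunk_size]
        for i in range(0, len(token_ids) - chunk_size + 1, chunk_size)
    ]

# Hypothetical document of 2500 token ids -> two complete 1024-token chunks;
# the 452-token remainder is discarded under the assumption above.
chunks = chunk_tokens(list(range(2500)))
print(len(chunks), len(chunks[0]))  # → 2 1024
```

In the released dataset, the result of this step (tokenized with the Llama-2 tokenizer) is what appears in the `input_ids` column.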