Maurice Weber committed
Commit 1a99058
1 Parent(s): bc6bd35

update README.md

Files changed (1):
  1. README.md +5 -4
README.md CHANGED
````diff
@@ -15,7 +15,8 @@ pretty_name: Red Pajama V2 Dataset
 RedPajama-V2 is an open dataset for training large language models. The dataset includes over 100B text
 documents coming from 84 CommonCrawl snapshots and processed using
 the [CCNet](https://github.com/facebookresearch/cc_net) pipeline. Out of these, there are 30B documents in the corpus
-that additionally come with quality signals, and 20B documents that are deduplicated.
+that additionally come with quality signals. In addition, we provide the ids of duplicate documents, which can be
+used to create a dataset with 20B deduplicated documents.
 
 Check out our [blog post](https://together.ai/blog/redpajama-data-v2) for more details on the build process, dataset
 structure and schema.
@@ -30,7 +31,7 @@ ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="sample")
 
 To download the dataset for a specific combination of `{partition} x {snapshot_id} x {language}` (e.g., English and
 German data from the `head_middle` partition of the 2023-06 and the 2022-49 dumps), you can run the following command.
-_Note that this will download the entire dumps and requires 1TB disk space per dump_.
+_Note that this will download the entire dumps and requires ~1TB disk space per dump_.
 
 ```python
 from datasets import load_dataset
@@ -172,8 +173,8 @@ for sample in ds_iterator["train"]:
 
 ### Dataset Summary
 
 RedPajama-V2 is an open dataset for training large language models and includes over 100B text documents. Out of these,
-30B documents come with quality annotations. Out of the 30B quality annotated documents, 20B are deduplicated.
+30B documents come with quality annotations. Of these, 20B are unique documents.
 
 #### Quality Annotations
````
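The download command referenced in the second hunk is truncated in this view. Below is a rough sketch of what the full call might look like with the Hugging Face `datasets` library, assuming the loader accepts `partition`, `snapshots`, and `languages` arguments matching the `{partition} x {snapshot_id} x {language}` combination described above (these parameter names are inferred from the prose, not shown in the diff):

```python
from datasets import load_dataset

# Sketch only: `partition`, `snapshots`, and `languages` are assumed keyword
# arguments inferred from the README text; check the dataset card for the
# exact loader signature.
ds = load_dataset(
    "togethercomputer/RedPajama-Data-V2",
    name="default",
    partition="head_middle",           # the head_middle partition
    snapshots=["2023-06", "2022-49"],  # the two CommonCrawl dumps named above
    languages=["en", "de"],            # English and German
)
```

The updated text also notes that ids of duplicate documents ship with the dataset and can be used to build the 20B-document deduplicated corpus. A minimal sketch of that filtering step, assuming the duplicate ids can be materialized as a Python set and that each document carries a `doc_id` field (both are assumptions; the diff does not show the duplicate-id file layout):

```python
# Hypothetical helper: materialize the provided duplicate ids as a set.
# `load_duplicate_ids` is a placeholder, not a real API.
duplicate_ids = load_duplicate_ids()

# Keep only documents whose id is not flagged as a duplicate.
deduplicated = ds["train"].filter(lambda doc: doc["doc_id"] not in duplicate_ids)
```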