Maurice Weber committed
Commit 6218229
1 Parent(s): 816154f

update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED

@@ -14,7 +14,7 @@ pretty_name: Red Pajama V2 Dataset
 
 The full RedPajama-V2 dataset is a data foundation that includes over 100B text documents coming from 84 CommonCrawl
 snapshots and processed using the [CCNet](https://github.com/facebookresearch/cc_net) pipeline. Out of these, there are
-30B documents in the corpus that additionally come with quality signals.
+30B documents in the corpus that additionally come with quality signals, and 20B documents that are deduplicated.
 
 Check out our [blog post](XXXXX) for more details on the build process, dataset structure and schema.
 
@@ -128,7 +128,7 @@ RedPajama-V2 is an open dataset for training large language models and includes o
 | rps_doc_ldnoobw_words | The number of sequences of words that are contained in the List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words blocklist. The blocklist is obtained from the [LDNOOBW](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) repo. | toxicity | [C4](https://arxiv.org/abs/1910.10683) |
 | rps_doc_ut1_blacklist | A categorical id corresponding to the list of categories of the domain of the document. Categories are obtained from the UT1 blacklist. The list is obtained from [UT-Capitole](https://dsi.ut-capitole.fr/blacklists/). | toxicity | [RefinedWeb](https://arxiv.org/abs/2306.01116) |
 
-#### Document and Token Counts for the Annotated (`head_middle`) part of the dataset
+#### Document and Token Counts for the Annotated and deduplicated `head_middle` part of the dataset
 
 | | # Documents | Estimated Token count (deduped) |
 |-------|-------------|---------------------------------|
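
For context on the change above: the annotated and deduplicated `head_middle` partition is the part of the dataset that ships with quality signals. Below is a minimal sketch of how it could be loaded with the Hugging Face `datasets` library; the Hub repo id and the `partition`/`snapshots`/`languages` keyword arguments are assumptions based on the dataset card's usage pattern, not part of this commit.

```python
# Illustrative sketch (not from this commit): loading the annotated,
# deduplicated `head_middle` slice with Hugging Face `datasets`.
# The repo id and builder kwargs below are assumptions.
from datasets import load_dataset

ds = load_dataset(
    "togethercomputer/RedPajama-Data-V2",  # assumed Hub repo id
    name="default",
    partition="head_middle",  # annotated part with quality signals
    snapshots=["2023-06"],    # any subset of the 84 CommonCrawl snapshots
    languages=["en"],
)

print(ds["train"][0])  # raw text plus per-document quality signals
```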
 
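To make the quality-signal semantics in the table above concrete, here is a minimal sketch of how a counter like `rps_doc_ldnoobw_words` could be computed from the LDNOOBW blocklist. The file layout and the normalization (lowercasing, word-boundary matching) are assumptions for illustration; this is not the pipeline's actual implementation.

```python
# Minimal sketch (assumptions, not the actual pipeline code) of a signal
# like `rps_doc_ldnoobw_words`: count occurrences of blocklisted word
# sequences in a document, matched case-insensitively on word boundaries.
import re


def load_blocklist(path: str) -> list[str]:
    # One blocklisted word or phrase per line, as in the LDNOOBW repo files.
    with open(path, encoding="utf-8") as f:
        return [line.strip().lower() for line in f if line.strip()]


def count_blocklist_sequences(text: str, blocklist: list[str]) -> int:
    lowered = text.lower()
    total = 0
    for phrase in blocklist:
        # Escape the phrase and anchor it on word boundaries so that
        # substrings inside longer words are not counted.
        pattern = r"\b" + re.escape(phrase) + r"\b"
        total += len(re.findall(pattern, lowered))
    return total


# Hypothetical usage ("en" being the English wordlist file in the LDNOOBW repo):
# n_bad = count_blocklist_sequences(document_text, load_blocklist("en"))
```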