---
task_categories:
- text-generation
language:
- en
- de
- fr
- es
- it
pretty_name: Red Pajama V2 Data Foundation
---
# Getting Started
The full RedPajama-V2 dataset is a data foundation that includes over 100B text documents coming from 84 CommonCrawl snapshots and processed using the CCNet pipeline. Out of these, there are 30B documents in the corpus that additionally come with quality signals.
Check out our blog post for more details on the build process, dataset structure and schema.
To familiarize yourself with the dataset, you can load the sample dataset using:
```python
from datasets import load_dataset

ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="sample")
```
To download the dataset for a specific combination of `{partition} x {snapshot_id} x {language}`, you can run:

```python
from datasets import load_dataset

ds = load_dataset("togethercomputer/RedPajama-Data-V2",
                  name="default",
                  partition="head_middle",
                  snapshots=["2023-06", "2022-49"],
                  languages=["en", "de"])
```
Alternatively, you can also directly download the files using the following instructions, using English data from the `2023-06` snapshot and the `head_middle` partition as an example. The full set of CC snapshots included in the dataset is given in `_CC_SNAPSHOT_IDS`, and the available partitions are `tail` and `head_middle`. The available language tags are `en`, `de`, `fr`, `es`, `it`.
CC_SNAPSHOT="2023-06"
LANG="en"
PARTITION="head_middle"
BASE_URL="https://data.together.xyz/redpajama-data-v2/v1.0.0/"
listings_file="${LANG}-${CC_SNAPSHOT}-${PARTITION}.txt"
wget "${BASE_URL}/listings/${listings_file}"
# download documents
while read line; do
url="${BASE_URL}/documents/${line}.json.gz"
dest="documents/${line}.json.gz"
mkdir -p $(dirname $dest)
wget "$line" -O "$dest"
done <"${LANG}-${CC_SNAPSHOT}-${PARTITION}.txt"
# download other components
COMPS=("quality_signals" "minhash" "duplicates")
for comp in "${COMPS[@]}"; do
while read line; do
url="${BASE_URL}/${comp}/${line}.${comp}.json.gz"
dest="${comp}/${line}.${comp}.json.gz"
mkdir -p $(dirname $dest)
wget "$line" -O "$dest"
done <"${LANG}-${CC_SNAPSHOT}-${PARTITION}.txt"
done
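The downloaded documents and quality signals are gzip-compressed files with one JSON object per line. A small sketch for reading one shard; the exact shard path is illustrative and follows the listings layout above:

```python
import gzip
import json

# illustrative shard path following the layout produced by the script above
path = "documents/2023-06/0000/en_head.json.gz"

with gzip.open(path, mode="rt", encoding="utf-8") as f:
    for i, line in enumerate(f):
        doc = json.loads(line)
        # "url" and "raw_content" follow the CCNet document schema
        print(doc["url"], len(doc["raw_content"]))
        if i == 4:
            break
```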
A full set of scripts to recreate the dataset, including the quality signals, can be found in the [RedPajama-Data repository](https://github.com/togethercomputer/RedPajama-Data).
## Dataset Summary
RedPajama-V2 is an open dataset for training large language models and includes over 100B text documents. Out of these, 30B documents come with quality annotations.
### Quality Annotations
| Annotation Tag | Description | Category | Reference |
|----------------|-------------|----------|-----------|
| ccnet_bucket | head, middle or tail bucket of the perplexity score | ccnet | CCNet |
| ccnet_language_score | score of the language identification model | ccnet | CCNet |
| ccnet_length | number of characters | ccnet | CCNet |
| ccnet_nlines | number of lines | ccnet | CCNet |
| ccnet_original_length | number of characters before in-document line deduplication | ccnet | CCNet |
| ccnet_original_nlines | number of lines before in-document line deduplication | ccnet | CCNet |
| ccnet_perplexity | perplexity of an LM trained on Wikipedia | ccnet | CCNet |
| rps_doc_books_importance | Given a bag-of-{1,2}-wordgram model p trained on Books, and a model q trained on the source domain, this is the logarithm of the ratio p(doc)/q(doc) | ML Heuristics | Importance Resampling (Xie et al.) |
| rps_doc_openwebtext_importance | Given a bag-of-{1,2}-wordgram model p trained on OpenWebText, and a model q trained on the source domain, this is the logarithm of the ratio p(doc)/q(doc) | ML Heuristics | Importance Resampling (Xie et al.) |
| rps_doc_wikipedia_importance | Given a bag-of-{1,2}-wordgram model p trained on Wikipedia articles, and a model q trained on the source domain, this is the logarithm of the ratio p(doc)/q(doc) | ML Heuristics | Importance Resampling (Xie et al.) |
| rps_doc_ml_wikiref_score | Fasttext classifier prediction for the document being a Wikipedia reference. This is the same fasttext model used in the RedPajama-1T dataset. Only applies to English data | ML Heuristics | LLaMA, RedPajama-1T |
| rps_doc_ml_palm_score | Fasttext classifier prediction for the document being a Wikipedia article, OpenWebText sample or a RedPajama-V1 book. Only for English data | ML Heuristics | PaLM, GLaM |
| rps_doc_ml_wikipedia_score | Fasttext classifier prediction for the document being a Wikipedia article. This is used for non-English data | ML Heuristics | - |
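The three `rps_doc_*_importance` signals are log-likelihood ratios log p(doc)/q(doc) between a bag-of-{1,2}-wordgram model p fit on the target domain and a model q fit on the source domain. Below is a toy sketch of that computation with add-one smoothing; the actual pipeline's tokenization, hashing and smoothing choices (see Xie et al.) differ:

```python
import math
from collections import Counter

def wordgrams(text, orders=(1, 2)):
    """Bag of {1,2}-wordgrams for a whitespace-tokenized document."""
    tokens = text.lower().split()
    grams = Counter()
    for n in orders:
        for i in range(len(tokens) - n + 1):
            grams[" ".join(tokens[i:i + n])] += 1
    return grams

def log_likelihood(doc_grams, model, vocab_size):
    # add-one smoothed bag-of-ngrams log-likelihood (toy smoothing choice)
    total = sum(model.values())
    return sum(
        count * math.log((model[gram] + 1) / (total + vocab_size))
        for gram, count in doc_grams.items()
    )

def importance_score(doc, target_model, source_model):
    """log p(doc)/q(doc), the quantity behind rps_doc_*_importance."""
    grams = wordgrams(doc)
    vocab = len(set(target_model) | set(source_model) | set(grams))
    return (log_likelihood(grams, target_model, vocab)
            - log_likelihood(grams, source_model, vocab))

# toy models: p fit on a "books-like" text, q on a "web-like" text
p = wordgrams("once upon a time in a quiet village by the sea")
q = wordgrams("click here to subscribe and get the best deals now")
print(importance_score("once upon a time there was a village", p, q))
```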
#### Document Counts for the Annotated Part of the Dataset

|             | en    | de   | fr   | es   | it   | Total |
|-------------|-------|------|------|------|------|-------|
| # Documents | 24.5B | 2.7B | 2.2B | 2.3B | 1.2B | 32.9B |
### Languages

English, German, French, Italian, Spanish

## Dataset Structure
The dataset is structured into four components, each following the same key structure:
```
├── documents
│   └── 2018-43
│       └── 0000
│           ├── en_head.json.gz
│           ├── ...
│           └── it_middle.json.gz
├── quality_signals
│   └── 2018-43
│       └── 0000
│           ├── en_head.signals.json.gz
│           ├── ...
│           └── it_middle.signals.json.gz
├── duplicates
│   └── 2018-43
│       └── 0000
│           ├── en_head.duplicates.parquet
│           ├── ...
│           └── it_middle.duplicates.parquet
└── minhash
    └── 2018-43
        └── 0000
            ├── en_head.minhash.parquet
            ├── ...
            └── it_middle.minhash.parquet
```
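All four components share the key structure `{snapshot_id}/{shard}/{lang}_{bucket}`; only the top-level directory and the file suffix differ. An illustrative helper (the function name is ours, not part of the dataset tooling):

```python
def component_paths(key):
    # key has the form "{snapshot_id}/{shard}/{lang}_{bucket}",
    # e.g. "2018-43/0000/en_head"; suffixes follow the tree above
    return {
        "documents": f"documents/{key}.json.gz",
        "quality_signals": f"quality_signals/{key}.signals.json.gz",
        "duplicates": f"duplicates/{key}.duplicates.parquet",
        "minhash": f"minhash/{key}.minhash.parquet",
    }

print(component_paths("2018-43/0000/en_head"))
```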
The documents files, which contain the text, follow the schema defined by CCNet, and the quality signals follow the schema:
```json
{
    "id": "2018-43/0000/en_head.json.gz/0",
    "id_int": 7972430436813205988,
    "metadata": {
        "cc_segment": "crawl-data/...",
        "cc_net_source": "2018-43/0000/en_head.json.gz",
        "url": "...",
        "source_domain": "...",
        "language": "en",
        "snapshot_id": "2018-43"
    },
    "quality_signals": {
        "ccnet_original_length": [
            [0, 7033, 8711.0]
        ],
        ...,
        "rps_doc_stop_word_fraction": [
            [0, 7033, 0.45121107]
        ],
        "rps_lines_num_words": [
            [0, 25, 2],
            ...,
            [6980, 7033, 10]
        ]
    }
}
```
where signal scores are encoded as a list of tuples `(start, end, score)`, with `start` and `end` being the locations in the `raw_content` string where the `score` applies.
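For document-level signals the tuple covers the whole document (as with `ccnet_original_length` above); for line-level signals such as `rps_lines_num_words` there is one tuple per line. A small sketch for recovering the annotated spans, using toy values in the documented format:

```python
def spans_with_scores(raw_content, signal):
    # signal is a list of (start, end, score) triples whose offsets
    # index into raw_content
    return [(raw_content[start:end], score) for start, end, score in signal]

text = "first line\nsecond line here"
signal = [[0, 10, 2], [11, 27, 3]]  # toy line-level (start, end, score) values
for span, score in spans_with_scores(text, signal):
    print(score, repr(span))
```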
## Dataset Creation
The dataset is based on 84 snapshots provided by Common Crawl.
## Citation
To cite RedPajama-V2, please use:
```bibtex
@software{together2023redpajama-v2,
  author = {Together Computer},
  title = {RedPajama-Data-v2: a living data foundation for training open LLM models},
  month = October,
  year = 2023,
  url = {https://github.com/togethercomputer/RedPajama-Data}
}
```
## License
Please refer to the Common Crawl Foundation Terms of Use for the data. The code used to load and process the dataset is licensed under the Apache 2.0 license.