Maurice Weber committed · Commit 34b0752 · Parent(s): fbbd8c2

update README.md

README.md CHANGED
...snapshots and processed using the [CCNet](https://github.com/facebookresearch/cc_net) pipeline.

Check out our [blog post](XXXXX) for more details on the build process, dataset structure and schema.

To familiarize yourself with the dataset, you can load the sample dataset using:
```python
from datasets import load_dataset

ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="sample")
```
To download the dataset for a specific combination of `{partition} x {snapshot_id} x {language}`, you can run:

```python
from datasets import load_dataset

ds = load_dataset("togethercomputer/RedPajama-Data-V2",
                  name="sample",
                  partition="head_middle",
                  snapshots=["2023-06", "2022-49"],
                  languages=["en", "de"])
```
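For a quick sanity check of either call above, the sketch below peeks at one record; the `train` split and the `raw_content` field are assumptions based on the document schema described further down, not guarantees of this README:

```python
from datasets import load_dataset

# Load the small sample configuration; safe to run before committing to a full download.
ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="sample")

# Assumption: records are exposed under a "train" split with a "raw_content"
# text field, matching the document schema described later in this README.
record = next(iter(ds["train"]))
print(sorted(record.keys()))
print(record["raw_content"][:200])
```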
Alternatively, you can also directly download the files using the following instructions, using English data from the `2023-06` snapshot and the `head_middle` partition as an example. The full set of CC snapshots included in the dataset is given in `_CC_SNAPSHOT_IDS`, and the available partitions are `tail` and `head_middle`. The available language tags are `en`, `de`, `fr`, `es`, `it`.
```bash
CC_SNAPSHOT="2023-06"
...
```
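The remainder of the download script is elided above. Purely as an illustration of scripting such a download, here is a hedged Python sketch; `BASE_URL` and the path layout are hypothetical placeholders, since the real URLs come from the instructions in this README:

```python
import urllib.request

# Hypothetical placeholders -- the actual base URL and shard layout are given by
# the download instructions in this README, not by this sketch.
BASE_URL = "https://example.com/redpajama-data-v2"
CC_SNAPSHOT = "2023-06"
LANG = "en"
PARTITION = "head_middle"
SHARD = "0000"

# Fetch one (snapshot, language, partition) documents shard.
url = f"{BASE_URL}/documents/{CC_SNAPSHOT}/{SHARD}/{LANG}_{PARTITION}.json.gz"
urllib.request.urlretrieve(url, f"{CC_SNAPSHOT}_{LANG}_{PARTITION}_{SHARD}.json.gz")
```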
...found [here](https://github.com/togethercomputer/RedPajama-Data).

### Dataset Summary

RedPajama-V2 is an open dataset for training large language models and includes over 100B text documents. Out of these, 30B documents come with quality annotations.
#### Quality Annotations

| Annotation Tag | Description | Category | Reference |
|----------------|-------------|----------|-----------|
| ccnet_bucket | head, middle or tail bucket of the perplexity score | ccnet | ccnet |
| ccnet_language_score | score of the language identification model | ccnet | ccnet |
| ccnet_length | number of characters | ccnet | ccnet |
| ccnet_nlines | number of lines | ccnet | ccnet |
| ccnet_original_length | number of characters before in-document line deduplication | ccnet | ccnet |
| ccnet_original_nlines | number of lines before in-document line deduplication | ccnet | ccnet |
| ccnet_perplexity | perplexity of an LM trained on Wikipedia | ccnet | ccnet |
| rps_doc_books_importance | Given a bag-of-{1,2}-wordgrams model trained on Books, p, and a model trained on the source domain, q, this is the logarithm of the ratio p(doc)/q(doc) | ML Heuristics | Importance Resampling (Xie et al.) |
| rps_doc_openwebtext_importance | Given a bag-of-{1,2}-wordgrams model trained on OpenWebText, p, and a model trained on the source domain, q, this is the logarithm of the ratio p(doc)/q(doc). | ML Heuristics | Importance Resampling (Xie et al.) |
| rps_doc_wikipedia_importance | Given a bag-of-{1,2}-wordgrams model trained on Wikipedia articles, p, and a model trained on the source domain, q, this is the logarithm of the ratio p(doc)/q(doc). | ML Heuristics | Importance Resampling (Xie et al.) |
| rps_doc_ml_wikiref_score | Fasttext classifier prediction for the document being a Wikipedia reference. This is the same fasttext model used in the RedPajama-1T dataset. Only applies to English data. | ML Heuristics | LLaMA, RedPajama-1T |
| rps_doc_ml_palm_score | Fasttext classifier prediction for the document being a Wikipedia article, OpenWebText sample or a RedPajama-V1 book. Only for English data. | ML Heuristics | PaLM, GLaM |
| rps_doc_ml_wikipedia_score | Fasttext classifier prediction for the document being a Wikipedia article. This is used for non-English data. | ML Heuristics | - |
#### Document Counts for the Annotated Part of the Dataset

|             | en    | de   | fr   | es   | it   | Total |
|-------------|-------|------|------|------|------|-------|
| # Documents | 24.5B | 2.7B | 2.2B | 2.3B | 1.2B | 32.9B |
### Languages

...

Document files, which contain the text, follow the schema defined by CCNet, and ...

```
...
}
```
where signal scores are encoded as a list of tuples `(start, end, score)`, where `start` and `end` are the locations in the `raw_content` string where the `score` applies.
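As a concrete illustration of this encoding (the spans and scores below are made up; only the slicing logic reflects the description above):

```python
# Hypothetical quality signal for a two-line document, encoded as
# (start, end, score) triples over the raw_content string.
raw_content = "First line of text.\nSecond line of text.\n"
ccnet_language_score = [(0, 19, 0.98), (20, 40, 0.87)]

for start, end, score in ccnet_language_score:
    # Slice out the span of raw_content that each score applies to.
    print(f"{raw_content[start:end]!r} -> {score}")
```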
## Dataset Creation