Maurice Weber committed on
Commit
cb715ae
1 Parent(s): 2aa608b

initial commit

Files changed (4)
  1. README.md +115 -0
  2. RedPajama-Data-V2.py +265 -0
  3. _CC_SNAPSHOT_IDS +84 -0
  4. _QUALITY_SIGNAL_TAGS +42 -0
README.md ADDED
@@ -0,0 +1,115 @@
+ ---
+ task_categories:
+ - text-generation
+ language:
+ - en
+ - de
+ - fr
+ - es
+ - it
+ pretty_name: Red Pajama V2 Data Foundation
+ ---
+
+ ### Getting Started
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="en-head-middle-all")
+ ```
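+
+ The configuration name encodes the language, the partition, and the CommonCrawl snapshot: `{lang}-head-middle-{snapshot}`, `{lang}-head-middle-all`, `{lang}-tail-{snapshot}`, or `{lang}-tail-all`, as defined in `RedPajama-Data-V2.py`.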
+
+ Alternatively, you can download the files directly with the following commands:
+
+ ```bash
+ wget 'https://data.together.xyz/redpajama-data-v2/v1.0.0/urls.txt'
+ while read -r line; do
+     # strip the URL prefix to recover the relative path of each file
+     dload_loc=${line#https://data.together.xyz/redpajama-data-v2/v1.0.0/}
+     mkdir -p "$(dirname "$dload_loc")"
+     wget "$line" -O "$dload_loc"
+ done < urls.txt
+ ```
+
+ After downloading the files, you can load the dataset from disk by setting the `RED_PAJAMA_V2_DATA_DIR`
+ environment variable (the name read by the loading script) to the directory holding the downloaded files (TODO: document the expected layout).
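+
+ As a sketch, assuming the files were downloaded into a directory that mirrors the remote layout, loading from disk might look like this (the local path is hypothetical):
+
+ ```python
+ import os
+
+ # point the loading script at the local copy before loading the dataset
+ os.environ["RED_PAJAMA_V2_DATA_DIR"] = "/path/to/redpajama-data-v2"
+
+ from datasets import load_dataset
+
+ ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="en-head-middle-all")
+ ```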
+
+ A smaller sample of the dataset can be downloaded via
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="en-sample")
+ ```
+
+ A full set of scripts to recreate the dataset can be
+ found [here](https://github.com/togethercomputer/RedPajama-Data).
+
+ ### Dataset Summary
+
+ RedPajama V2 is a dataset of web documents and their quality signals, covering five languages across 84 CommonCrawl snapshots.
+
+ ### Languages
+
+ English, German, French, Italian, Spanish
+
+ ## Dataset Structure
+
+ Each record holds the document text plus JSON-encoded metadata; for the `head_middle` partition there is an additional JSON-encoded `quality_signals` field:
+
+ ```json
+ {
+     "raw_content": "...",
+     "doc_id": "...",
+     "meta": "{\"url\": \"...\", \"source_domain\": \"...\", \"date_download\": \"...\", \"digest\": \"...\"}",
+     "quality_signals": "..."
+ }
+ ```
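+
+ The `meta` and `quality_signals` fields are stored as JSON strings and need to be decoded after loading. A minimal sketch, with field names taken from the loading script (assuming `quality_signals` decodes to a mapping keyed by the tags in `_QUALITY_SIGNAL_TAGS`):
+
+ ```python
+ import json
+ from datasets import load_dataset
+
+ ds = load_dataset("togethercomputer/RedPajama-Data-V2", name="en-head-middle-all", split="train")
+
+ record = ds[0]
+ meta = json.loads(record["meta"])                # url, source_domain, date_download, digest
+ signals = json.loads(record["quality_signals"])  # only present for head_middle configs
+ print(meta["url"], signals.get("ccnet_perplexity"))
+ ```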
+
+ ## Dataset Creation
+
+ The dataset is built from publicly available web documents, with quality annotations computed for the `head_middle` partition (see `_QUALITY_SIGNAL_TAGS` for the full list of signals).
+
+ ### Common Crawl
+
+ The corpus is derived from the 84 CommonCrawl snapshots listed in `_CC_SNAPSHOT_IDS`, spanning 2014-15 through 2023-14.
+
+ To cite RedPajama, please use:
+
+ ```
+ @software{together2023redpajama,
+   author = {Together Computer},
+   title = {RedPajama-Data-v2: a living data foundation for training open LLM models},
+   month = {October},
+   year = {2023},
+   url = {https://github.com/togethercomputer/RedPajama-Data}
+ }
+ ```
+
+ ### License
+
+ TODO: double check this
+
+ Please refer to the licenses of the data subsets you use.
+
+ * [Common Crawl Foundation Terms of Use](https://commoncrawl.org/terms-of-use/full/)
+
+ <!--
+ ### Annotations
+ #### Annotation process
+ [More Information Needed]
+ #### Who are the annotators?
+ [More Information Needed]
+ ### Personal and Sensitive Information
+ [More Information Needed]
+ ## Considerations for Using the Data
+ ### Social Impact of Dataset
+ [More Information Needed]
+ ### Discussion of Biases
+ [More Information Needed]
+ ### Other Known Limitations
+ [More Information Needed]
+ ## Additional Information
+ ### Dataset Curators
+ [More Information Needed]
+ ### Licensing Information
+ [More Information Needed]
+ ### Citation Information
+ [More Information Needed]
+ ### Contributions
+ [More Information Needed]
+ -->
RedPajama-Data-V2.py ADDED
@@ -0,0 +1,265 @@
+ # Copyright 2023 Together Computer
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Lint as: python3
+ """RedPajama V2: Quality annotated Web Text Documents."""
+
+ import gzip
+ import json
+ import os
+ import traceback
+
+ import datasets
+
+ logger = datasets.logging.get_logger(__name__)
+
+ _DESCRIPTION = """\
+ RedPajama V2 is a dataset of web documents and their quality signals.
+ """
+
+ # the snapshot id and quality signal tag lists ship alongside this script, so
+ # resolve them relative to the script location rather than the cwd
+ _THIS_DIR = os.path.dirname(os.path.abspath(__file__))
+
+ with open(os.path.join(_THIS_DIR, "_CC_SNAPSHOT_IDS"), "r") as f:
+     _CC_SNAPSHOT_IDS = [line.strip() for line in f]
+
+ with open(os.path.join(_THIS_DIR, "_QUALITY_SIGNAL_TAGS"), "r") as f:
+     _QUALITY_SIGNAL_TAGS = [line.strip() for line in f]
+
+ _URL_BASE = 'https://data.together.xyz/redpajama-data-v2/v1.0.0'
+ _LANGUAGES = ("en", "de", "fr", "es", "it")
+
+ # optional local copy of the data; when set, files are read from disk
+ _DATA_DIR = os.environ.get('RED_PAJAMA_V2_DATA_DIR', None)
+
+ _LISTINGS_PATTERN = "urls/{language}-{snapshot}-{partition}.txt"
+
+
+ class RedPajamaDataV2Config(datasets.BuilderConfig):
+     """BuilderConfig for RedPajama."""
+
+     def __init__(self, *args, language, partition, snapshots, **kwargs):
+         """BuilderConfig for RedPajama.
+
+         Args:
+             language: two-letter code of the subset's language.
+             partition: "head_middle" or "tail".
+             snapshots: list of CommonCrawl snapshot ids to include.
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(RedPajamaDataV2Config, self).__init__(*args, **kwargs)
+         self.partition = partition
+         self.snapshots = snapshots
+         self.language = language
+
+
+ _BUILDER_CONFIGS = []
+
+ # note: partition is spelled "head_middle" throughout so that it matches the
+ # partition checks in the builder below and the listings url pattern
+ for lang in _LANGUAGES:
+     _BUILDER_CONFIGS.extend(
+         [
+             # single snapshot
+             RedPajamaDataV2Config(
+                 name=f'{lang}-head-middle-{snapshot}',
+                 partition='head_middle',
+                 snapshots=[snapshot],
+                 language=lang,
+                 version=datasets.Version("1.0.0", ""),
+                 description=f"RedPajamaV2 head-middle {lang}-{snapshot}",
+             )
+             for snapshot in _CC_SNAPSHOT_IDS
+         ] + [
+             # all snapshots
+             RedPajamaDataV2Config(
+                 name=f'{lang}-head-middle-all',
+                 partition='head_middle',
+                 snapshots=_CC_SNAPSHOT_IDS,
+                 language=lang,
+                 version=datasets.Version("1.0.0", ""),
+                 description=f"RedPajamaV2 head-middle {lang}",
+             )
+         ]
+     )
+
+     _BUILDER_CONFIGS.extend(
+         [
+             # single snapshot
+             RedPajamaDataV2Config(
+                 name=f'{lang}-tail-{snapshot}',
+                 partition='tail',
+                 snapshots=[snapshot],
+                 language=lang,
+                 version=datasets.Version("1.0.0", ""),
+                 description=f"RedPajamaV2 tail {lang}-{snapshot}",
+             )
+             for snapshot in _CC_SNAPSHOT_IDS
+         ] + [
+             # all snapshots
+             RedPajamaDataV2Config(
+                 name=f'{lang}-tail-all',
+                 partition='tail',
+                 snapshots=_CC_SNAPSHOT_IDS,
+                 language=lang,
+                 version=datasets.Version("1.0.0", ""),
+                 description=f"RedPajamaV2 tail {lang}",
+             )
+         ]
+     )
+
+
+ class RedPajamaV2(datasets.GeneratorBasedBuilder):
+     """RedPajama V2: Quality annotated Web Text Documents."""
+
+     BUILDER_CONFIGS = _BUILDER_CONFIGS
+
+     def _info(self):
+         # the tail partition ships without quality signals, so its schema
+         # omits the quality_signals column
+         if self.config.partition == "tail":
+             return datasets.DatasetInfo(
+                 description=_DESCRIPTION,
+                 features=datasets.Features(
+                     {
+                         "raw_content": datasets.Value("string"),
+                         "doc_id": datasets.Value("string"),
+                         "meta": datasets.Value("string"),
+                     }
+                 ),
+                 supervised_keys=None,
+             )
+         else:
+             return datasets.DatasetInfo(
+                 description=_DESCRIPTION,
+                 features=datasets.Features(
+                     {
+                         "raw_content": datasets.Value("string"),
+                         "doc_id": datasets.Value("string"),
+                         "meta": datasets.Value("string"),
+                         "quality_signals": datasets.Value("string"),
+                     }
+                 ),
+                 supervised_keys=None,
+             )
+
+     def _split_generators(self, dl_manager):
+         # fetch the per-snapshot listings files which enumerate the shard ids
+         url_lists = dl_manager.download_and_extract({
+             snapshot_id: _LISTINGS_PATTERN.format(
+                 language=self.config.language,
+                 snapshot=snapshot_id,
+                 partition=self.config.partition,
+             )
+             for snapshot_id in self.config.snapshots
+         })
+
+         listings_ids = {}
+
+         for snapshot_id, listings_file in url_lists.items():
+             with open(listings_file, encoding="utf-8") as f:
+                 listings_ids[snapshot_id] = [line.strip() for line in f]
+
+         # mapping document type -> file location
+         if _DATA_DIR is not None:
+             # read from a local copy; this assumes the directory mirrors the
+             # remote layout (documents/... and quality_signals/...)
+             documents_files = {
+                 snapshot_id: [
+                     os.path.join(_DATA_DIR, f"documents/{lst_id}.json.gz")
+                     for lst_id in listings_ids[snapshot_id]
+                 ]
+                 for snapshot_id in self.config.snapshots
+             }
+
+             if self.config.partition == "head_middle":
+                 quality_signals_files = {
+                     snapshot_id: [
+                         os.path.join(
+                             _DATA_DIR,
+                             f"quality_signals/{lst_id}.signals.json.gz"
+                         )
+                         for lst_id in listings_ids[snapshot_id]
+                     ]
+                     for snapshot_id in self.config.snapshots
+                 }
+             else:
+                 quality_signals_files = {}
+         else:
+             # build urls pointing to documents
+             document_urls = {
+                 snapshot_id: [
+                     os.path.join(_URL_BASE, f"documents/{lst_id}.json.gz")
+                     for lst_id in listings_ids[snapshot_id]
+                 ]
+                 for snapshot_id in self.config.snapshots
+             }
+
+             documents_files = dl_manager.download(document_urls)
+
+             # build urls pointing to quality signals
+             if self.config.partition == "head_middle":
+                 quality_signals_urls = {
+                     snapshot_id: [
+                         os.path.join(
+                             _URL_BASE,
+                             f"quality_signals/{lst_id}.signals.json.gz"
+                         )
+                         for lst_id in listings_ids[snapshot_id]
+                     ]
+                     for snapshot_id in self.config.snapshots
+                 }
+
+                 quality_signals_files = dl_manager.download(
+                     quality_signals_urls
+                 )
+             else:
+                 quality_signals_files = {}
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 # note: gen_kwargs keys must match _generate_examples' signature
+                 gen_kwargs={
+                     "documents_files": {
+                         snapshot_id: documents_files[snapshot_id]
+                         for snapshot_id in self.config.snapshots
+                     },
+                     "quality_signals_files": {
+                         snapshot_id: quality_signals_files.get(snapshot_id)
+                         for snapshot_id in self.config.snapshots
+                     }
+                 }
+             )
+         ]
+
+     def _generate_examples(self, documents_files, quality_signals_files):
+         """This function returns examples."""
+         snapshots = list(documents_files.keys())
+
+         key = 0
+         for snapshot in snapshots:
+             docs_files = documents_files[snapshot]
+
+             if self.config.partition == "head_middle":
+                 qs_files = quality_signals_files[snapshot]
+                 # every documents shard needs a matching quality-signals shard
+                 assert len(docs_files) == len(qs_files)
+             else:
+                 # the tail partition ships without quality signals
+                 qs_files = [None] * len(docs_files)
+
+             for doc_file, qs_file in zip(docs_files, qs_files):
+                 with gzip.open(doc_file, "rt", encoding="utf-8") as df:
+                     qf = (
+                         gzip.open(qs_file, "rt", encoding="utf-8")
+                         if qs_file is not None else None
+                     )
+                     try:
+                         for row, doc in enumerate(df):
+                             try:
+                                 doc = json.loads(doc)
+
+                                 meta = {
+                                     "url": doc["url"],
+                                     "source_domain": doc["source_domain"],
+                                     "date_download": doc["date_download"],
+                                     "digest": doc["digest"],
+                                 }
+
+                                 if qf is None:
+                                     # tail documents carry no signals file; fall
+                                     # back to a positional id (assumption: the
+                                     # raw records expose no canonical id)
+                                     yield key, {
+                                         "raw_content": doc["raw_content"],
+                                         "doc_id": f"{snapshot}/{row}",
+                                         "meta": json.dumps(meta),
+                                     }
+                                 else:
+                                     qs = json.loads(qf.readline())
+                                     yield key, {
+                                         "raw_content": doc["raw_content"],
+                                         "doc_id": qs["id"],
+                                         "meta": json.dumps(meta),
+                                         "quality_signals": json.dumps(
+                                             qs["quality_signals"]
+                                         ),
+                                     }
+                                 key += 1
+                             except Exception as e:
+                                 logger.error(f"doc_file: {doc_file}")
+                                 logger.error(f"qs_file: {qs_file}")
+                                 logger.error(f"row: {row}")
+                                 traceback.print_exc()
+                                 raise e
+                     finally:
+                         if qf is not None:
+                             qf.close()
_CC_SNAPSHOT_IDS ADDED
@@ -0,0 +1,84 @@
+ 2014-15
+ 2014-23
+ 2014-35
+ 2014-41
+ 2014-42
+ 2014-49
+ 2014-52
+ 2015-14
+ 2015-22
+ 2015-27
+ 2015-32
+ 2015-35
+ 2015-40
+ 2015-48
+ 2016-07
+ 2016-18
+ 2016-22
+ 2016-26
+ 2016-30
+ 2016-36
+ 2016-40
+ 2016-44
+ 2016-50
+ 2017-04
+ 2017-09
+ 2017-17
+ 2017-22
+ 2017-26
+ 2017-30
+ 2017-34
+ 2017-39
+ 2017-43
+ 2017-47
+ 2017-51
+ 2018-05
+ 2018-09
+ 2018-13
+ 2018-17
+ 2018-22
+ 2018-26
+ 2018-30
+ 2018-34
+ 2018-39
+ 2018-43
+ 2018-47
+ 2018-51
+ 2019-04
+ 2019-09
+ 2019-13
+ 2019-18
+ 2019-22
+ 2019-26
+ 2019-30
+ 2019-35
+ 2019-39
+ 2019-43
+ 2019-47
+ 2019-51
+ 2020-05
+ 2020-10
+ 2020-16
+ 2020-24
+ 2020-29
+ 2020-34
+ 2020-40
+ 2020-45
+ 2020-50
+ 2021-04
+ 2021-10
+ 2021-17
+ 2021-21
+ 2021-25
+ 2021-31
+ 2021-39
+ 2021-43
+ 2021-49
+ 2022-05
+ 2022-21
+ 2022-27
+ 2022-33
+ 2022-40
+ 2022-49
+ 2023-06
+ 2023-14
_QUALITY_SIGNAL_TAGS ADDED
@@ -0,0 +1,42 @@
+ ccnet_length
+ ccnet_original_length
+ ccnet_nlines
+ ccnet_original_nlines
+ ccnet_language_score
+ ccnet_perplexity
+ ccnet_bucket
+ rps_doc_curly_bracket
+ rps_doc_ldnoobw_words
+ rps_doc_lorem_ipsum
+ rps_doc_stop_word_fraction
+ rps_doc_ut1_blacklist
+ rps_doc_frac_all_caps_words
+ rps_doc_frac_lines_end_with_ellipsis
+ rps_doc_frac_no_alph_words
+ rps_doc_frac_unique_words
+ rps_doc_mean_word_length
+ rps_doc_symbol_to_word_ratio
+ rps_doc_unigram_entropy
+ rps_doc_word_count
+ rps_num_sentences
+ rps_doc_frac_chars_dupe_10grams
+ rps_doc_frac_chars_dupe_5grams
+ rps_doc_frac_chars_dupe_6grams
+ rps_doc_frac_chars_dupe_7grams
+ rps_doc_frac_chars_dupe_8grams
+ rps_doc_frac_chars_dupe_9grams
+ rps_doc_frac_chars_top_2gram
+ rps_doc_frac_chars_top_3gram
+ rps_doc_frac_chars_top_4gram
+ rps_lines_ending_with_terminal_punctution_mark
+ rps_lines_javascript_counts
+ rps_lines_num_words
+ rps_lines_numerical_chars_fraction
+ rps_lines_start_with_bulletpoint
+ rps_lines_uppercase_letter_fraction
+ rps_doc_ml_palm_score
+ rps_doc_ml_wikipedia_score
+ rps_doc_ml_wikiref_score
+ rps_doc_books_importance
+ rps_doc_openwebtext_importance
+ rps_doc_wikipedia_importance