system HF staff committed on
Commit 5403654 (0 parents)

Update files from the datasets library (from 1.17.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.17.0

Files changed (4)
  1. .gitattributes +27 -0
  2. README.md +286 -0
  3. dummy/all/0.0.0/dummy_data.zip +3 -0
  4. the_pile.py +250 -0
.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,286 @@
+ ---
+ annotations_creators:
+ - no-annotation
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - other-
+ multilinguality:
+ - monolingual
+ pretty_name: The Pile
+ size_categories:
+ - unknown
+ source_datasets:
+ - original
+ task_categories:
+ - sequence-modeling
+ task_ids:
+ - language-modeling
+ ---
+
+ # Dataset Card for The Pile
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Instances](#data-instances)
+   - [Data Fields](#data-fields)
+   - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+   - [Curation Rationale](#curation-rationale)
+   - [Source Data](#source-data)
+   - [Annotations](#annotations)
+   - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+   - [Social Impact of Dataset](#social-impact-of-dataset)
+   - [Discussion of Biases](#discussion-of-biases)
+   - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+   - [Dataset Curators](#dataset-curators)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** https://pile.eleuther.ai/
+ - **Repository:** https://github.com/EleutherAI/the-pile
+ - **Paper:** [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027)
+ - **Leaderboard:**
+ - **Point of Contact:** [EleutherAI](mailto:contact@eleuther.ai)
+
+ ### Dataset Summary
+
+ The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality
+ datasets combined together.
+
+
+ ### Supported Tasks and Leaderboards
+
+ [More Information Needed]
+
+ ### Languages
+
+ This dataset is in English (`EN`).
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ #### all
+ ```
+ {
+     'meta': {'pile_set_name': 'Pile-CC'},
+     'text': 'It is done, and submitted. You can play “Survival of the Tastiest” on Android, and on the web. Playing on...'
+ }
+ ```
+
+ #### enron_emails
+ ```
+ {
+     'text': 'Name\t\t\tNew Title\t\t\t\tEffective Date\t\t\tMid Year promotion Yes/No\n\nFloyd, Jodie\t\tSr Cust Svc Rep (no change)\t\t7/16/01\t\t\t\tNo\n\nBuehler, Craig\t\tSr Mkt/Sup Analyst (no change)\t\t7/16/01\t\t\t\tNo\n\nWagoner, Mike\t\tTeam Advisor - Gas Control\t\t7/1/01\t\t\t\tNo\n\nClapper, Karen\t\tSr Cust Svc Rep\t\t\t8/1/01\t\t\t\tYes\n\nGreaney, Chris\t\tSr Cust Svc Rep\t\t\t8/1/01\t\t\t\tYes\n\nWilkens, Jerry\t\tSr Cust Svc Rep\t\t\t8/1/01\t\t\t\tYes\n\nMinton, Kevin\t\tPipeline Controller\t\t\t8/1/01\t\t\t\tYes\n\nCox, Don\t\tPipeline Controller\t\t\t8/1/01\t\t\t\tYes\n\nHanagriff, Richard\tSr Accounting Control Spec\t\t8/1/01\t\t\t\tYes\n\n\nThanks,\nMS',
+     'meta': "{}",
+ }
+ ```
+
+ #### europarl
+ ```
+ {
+     'text': 'Uvádění biocidních přípravků na trh - Nový návrh revize týkající se biocidních přípravků (rozprava) \nPředsedající\nDalším bodem je společná rozprava o následujících tématech:\nzpráva paní Sârbuové za Výbor pro životní prostředí, veřejné zdraví a bezpečnost potravin o návrhu...',
+     'meta': "{'language': 'cs'}",
+ }
+ ```
+
+ #### free_law
+ ```
+ {
+     'meta': "{'case_jurisdiction': 'scotus.tar.gz', 'case_ID': '110921.json', 'date_created': '2010-04-28T17:12:49Z'}",
+     'text': '\n461 U.S. 238 (1983)\nOLIM ET AL.\nv.\nWAKINEKONA\nNo. 81-1581.\nSupreme Court of United States.\nArgued...'
+ }
+ ```
+
+ #### hacker_news
+ ```
+ {
+     'text': "\nChina Deserves Donald Trump - rm2889\nhttps://www.nytimes.com/2019/05/21/opinion/china-trump-trade.html\n======\nNotPaidToPost\n> so he’d be wise to curb his nationalistic “no-one-tells-China-what-to-do”\n> bluster\n\nThis comment highlights both ignorance of Chinese history and continuing\nAmerican arrogance.\n\nChina has been painfully dictated what to do during the last 200 years. This\nhas had a profound effect on the country and has led to the collapse of\nimperial rule and the drive to 'rejuvenate'...",
+     'meta': "{'id': '19979654'}",
+ }
+ ```
+
+ #### nih_exporter
+ ```
+ {
+     'text': "The National Domestic Violence Hotline (NDVH) and the National Dating Abuse Helpline (NDAH), which are supported by the Division of Family Violence Prevention and Services within the Family and Youth Services Bureau, serve as critical partners in the intervention, prevention, and resource assistance efforts of the network of family violence, domestic violence, and dating violence service providers. They provide crisis intervention and support services; information about resources on domestic...",
+     'meta': " {'APPLICATION_ID': 100065}",
+ }
+ ```
+
+ #### pubmed
+ ```
+ {
+     'meta': {'pmid': 11409574, 'language': 'eng'},
+     'text': 'Epidemiology of hypoxaemia in children with acute lower respiratory infection.\nTo determine the prevalence of hypoxaemia in children aged under 5 years suffering acute lower respiratory infections (ALRI), the risk factors for hypoxaemia in children under 5 years of age with ALRI, and the association of hypoxaemia with an increased risk of dying in children of the same age. Systematic review of the published literature. Out-patient clinics, emergency departments and hospitalisation wards in 23 health centres from 10 countries. Cohort studies reporting the frequency of hypoxaemia in children under 5 years of age with ALRI, and the association between hypoxaemia and the risk of dying. Prevalence of hypoxaemia measured in children with ARI and relative risks for the association between the severity of illness and the frequency of hypoxaemia, and between hypoxaemia and the risk of dying. Seventeen published studies were found that included 4,021 children under 5 with acute respiratory infections (ARI) and reported the prevalence of hypoxaemia. Out-patient children and those with a clinical diagnosis of upper ARI had a low risk of hypoxaemia (pooled estimate of 6% to 9%). The prevalence increased to 31% and to 43% in patients in emergency departments and in cases with clinical pneumonia, respectively, and it was even higher among hospitalised children (47%) and in those with radiographically confirmed pneumonia (72%). The cumulated data also suggest that hypoxaemia is more frequent in children living at high altitude. Three papers reported an association between hypoxaemia and death, with relative risks varying between 1.4 and 4.6. Papers describing predictors of hypoxaemia have focused on clinical signs for detecting hypoxaemia rather than on identifying risk factors for developing this complication. Hypoxaemia is a common and potentially lethal complication of ALRI in children under 5, particularly among those with severe disease and those living at high altitude. Given the observed high prevalence of hypoxaemia and its likely association with increased mortality, efforts should be made to improve the detection of hypoxaemia and to provide oxygen earlier to more children with severe ALRI.'
+ }
+ ```
+
+ #### pubmed_central
+ ```
+ {
+     'meta': "{'id': 'PMC5595690'}",
+     'text': 'Introduction {#acel12642-sec-0001}\n============\n\nAlzheimer\\\'s disease (AD), the most common cause of...'
+ }
+ ```
+
+ #### ubuntu_irc
+ ```
+ {
+     'text': "#ubuntu 2004-07-05\n* Window 3\n* \tServer: [0] <None>\n* \tScreen: 0x817e90c\n* \tGeometry Info: [0 11 0 11 11 11] \n* \tCO, LI are [94 49] \n* \tCurrent channel: #ubuntu\n* \tQuery User: <None> \n*\tPrompt: <None>\n* \tSecond status line is OFF\n* \tSplit line is ON triple is OFF\n* \tLogging is ON\n* \tLogfile is irclogs/ubuntu.log\n* \tNotification is OFF\n* \tHold mode is OFF\n* \tWindow level is NONE\n* \tLastlog level is ALL\n* \tNotify level is ALL\n<mdz> lifeless: using tla effectively for all packages in Warty requ...",
+     'meta': "{'channel': 'ubuntu', 'month': 7}"
+ }
+ ```
+
+ #### uspto
+ ```
+ {
+     'text': "1. Field of the Invention\nIn an extensive plant breeding program, Grant Merrill, originator and now deceased, originated a large number of new and distinct varieties of fruit trees, and which included the herein-claimed variety of peach tree. Such plant breeding program was undertaken in originator's experimental orchard located near Exeter, Tulare County, Calif.\n2. Prior Varieties\nAmong the existent varieties of peach trees which were known to originator, particular reference is made to Gemfree (U.S. Plant Pat. No. 1,409) and June Lady (U.S. Plant Pat. No. 3,022) hereinafter mentioned for the purpose of comparison.",
+     'meta': "{'bibliographic_information': {'Patent Number': 'PP0049700', 'Series Code': '6', 'Application Number': '2845415', 'Application Type': '6', 'Art unit': '337', 'Application Filing Date': '19810720', 'Title of Invention': 'Peach tree (A3-10)', 'Issue Date': '19830104', 'Number of Claims': '1', 'Exemplary Claim Number(s)': '1', 'Primary Examiner': 'Bagwill; Robert E.', 'Number of Drawing Sheets': '1', 'Number of figures': '1'}, 'source_file': 'https://bulkdata.uspto.gov/data/patent/grant/redbook/fulltext/1983/pftaps19830104_wk01.zip', 'abstract': 'A peach tree which is large, vigorous, and spreading; foliated with large, lanceolate leaves having a finely serrate margin, a petiole of medium length and thickness, and medium size, reniform glands; blooms from medium size, conic, plump, pubescent buds; the flowers, medium in blooming period compared with other varieties, being of medium size, and pink; and is a regular and very productive bearer of medium but variable size, round truncate, clingstone fruit having yellow skin substantially overspread with red, yellow flesh mottled with red adjacent the skin, and an amber stone.', 'classifications': [{'OCL': ['Plt', '43'], 'EDF': ['3'], 'ICL': ['A01H', '503'], 'FSC': ['Plt'], 'FSS': ['43']}], 'inventors': [{'inventor name': 'Merrill, deceased; Grant', 'Street': '325 Breese Ave.', 'City': 'late of Red Bluff', 'State': 'CA'}, {'inventor name': 'Merrill, executrix; by Lucile B.', 'Street': '325 Breese Ave.', 'City': 'Red Bluff', 'State': 'CA', 'Zip code': '96080'}]}"
+ }
+ ```
+
+ ### Data Fields
+
+ #### all
+
+ - `text` (str): Text.
+ - `meta` (dict): Metadata of the data instance with keys:
+   - pile_set_name: Name of the subset.
+
+ #### enron_emails
+
+ - `text` (str): Text.
+ - `meta` (str): Metadata of the data instance.
+
+ #### europarl
+
+ - `text` (str): Text.
+ - `meta` (str): Metadata of the data instance with: language.
+
+ #### free_law
+
+ - `text` (str): Text.
+ - `meta` (str): Metadata of the data instance with: case_ID, case_jurisdiction, date_created.
+
+ #### hacker_news
+
+ - `text` (str): Text.
+ - `meta` (str): Metadata of the data instance with: id.
+
+ #### nih_exporter
+
+ - `text` (str): Text.
+ - `meta` (str): Metadata of the data instance with: APPLICATION_ID.
+
+ #### pubmed
+
+ - `text` (str): Text.
+ - `meta` (str): Metadata of the data instance with: pmid, language.
+
+ #### pubmed_central
+
+ - `text` (str): Text.
+ - `meta` (str): Metadata of the data instance with: id.
+
+ #### ubuntu_irc
+
+ - `text` (str): Text.
+ - `meta` (str): Metadata of the data instance with: channel, month.
+
+ #### uspto
+
+ - `text` (str): Text.
+ - `meta` (str): Metadata of the data instance with: bibliographic_information, source_file, abstract, classifications,
+   inventors.
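+
+ For every configuration except `all`, the `meta` field is serialized as a string. A minimal sketch for turning it back into a dictionary, assuming the string is a Python-literal dict as in the examples above:
+
+ ```python
+ import ast
+
+ # Subset configs store `meta` as a Python-repr string, e.g. "{'language': 'cs'}" (europarl);
+ # json.loads would reject the single quotes, so ast.literal_eval is used instead.
+ meta = ast.literal_eval("{'language': 'cs'}")
+ assert meta["language"] == "cs"
+ ```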
+
+ ### Data Splits
+
+ The "all" configuration is composed of 3 splits: train, validation and test. The individual subset configurations each contain a single train split.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [More Information Needed]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ [More Information Needed]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [More Information Needed]
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [More Information Needed]
+
+ ### Licensing Information
+
+ Please refer to the specific license depending on the subset you use:
+ - PubMed Central: [MIT License](https://github.com/EleutherAI/pile-pubmedcentral/blob/master/LICENSE)
+
+ ### Citation Information
+
+ ```
+ @misc{gao2020pile,
+     title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
+     author={Leo Gao and Stella Biderman and Sid Black and Laurence Golding and Travis Hoppe and Charles Foster and Jason Phang and Horace He and Anish Thite and Noa Nabeshima and Shawn Presser and Connor Leahy},
+     year={2020},
+     eprint={2101.00027},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@github-username](https://github.com/<github-username>) for adding this dataset.
dummy/all/0.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5fe6be3c2c842a211a5143bff5c9718208adcb61124fa3952a7cb2d64f3344f1
+ size 184510
the_pile.py ADDED
@@ -0,0 +1,250 @@
+ # coding=utf-8
+ # Copyright 2020 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """The Pile dataset."""
+
+ import json
+
+ import datasets
+
+
+ _CITATION = """\
+ @misc{gao2020pile,
+     title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
+     author={Leo Gao and Stella Biderman and Sid Black and Laurence Golding and Travis Hoppe and Charles Foster and Jason Phang and Horace He and Anish Thite and Noa Nabeshima and Shawn Presser and Connor Leahy},
+     year={2020},
+     eprint={2101.00027},
+     archivePrefix={arXiv},
+     primaryClass={cs.CL}
+ }
+ """
+
+ _DESCRIPTION = """\
+ The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality
+ datasets combined together.
+ """
+
+ _HOMEPAGE = "https://pile.eleuther.ai/"
+
+ _LICENSES = {
+     "all": "Multiple: see each subset license",
+     "enron_emails": "Unknown",
+     "europarl": "Unknown",
+     "free_law": "Unknown",
+     "hacker_news": "Unknown",
+     "nih_exporter": "Unknown",
+     "pubmed": "Unknown",
+     "pubmed_central": "Unknown",
+     "ubuntu_irc": "Unknown",
+     "uspto": "Unknown",
+ }
+
+ _DATA_URLS = {
+     "all": {
+         "train": [f"https://the-eye.eu/public/AI/pile/train/{i:0>2}.jsonl.zst" for i in range(30)],
+         "validation": ["https://the-eye.eu/public/AI/pile/val.jsonl.zst"],
+         "test": ["https://the-eye.eu/public/AI/pile/test.jsonl.zst"],
+     },
+     "enron_emails": "http://eaidata.bmk.sh/data/enron_emails.jsonl.zst",
+     "europarl": "https://the-eye.eu/public/AI/pile_preliminary_components/EuroParliamentProceedings_1996_2011.jsonl.zst",
+     "free_law": "https://the-eye.eu/public/AI/pile_preliminary_components/FreeLaw_Opinions.jsonl.zst",
+     "hacker_news": "https://the-eye.eu/public/AI/pile_preliminary_components/hn.tar.gz",
+     "nih_exporter": "https://the-eye.eu/public/AI/pile_preliminary_components/NIH_ExPORTER_awarded_grant_text.jsonl.zst",
+     "pubmed": "https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst",
+     "pubmed_central": "https://the-eye.eu/public/AI/pile_preliminary_components/PMC_extracts.tar.gz",
+     "ubuntu_irc": "https://the-eye.eu/public/AI/pile_preliminary_components/ubuntu_irc_until_2020_9_1.jsonl.zst",
+     "uspto": "https://the-eye.eu/public/AI/pile_preliminary_components/pile_uspto.tar",
+ }
+
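+ # Only the "all" configuration types "meta" as a nested dict; every individual
+ # subset keeps "meta" as a raw string (see the examples in the dataset card).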
+ _FEATURES = {
+     "all": datasets.Features(
+         {
+             "text": datasets.Value("string"),
+             "meta": {"pile_set_name": datasets.Value("string")},
+         }
+     ),
+     "enron_emails": datasets.Features(
+         {
+             "text": datasets.Value("string"),
+             "meta": datasets.Value("string"),
+         }
+     ),
+     "europarl": datasets.Features(
+         {
+             "text": datasets.Value("string"),
+             "meta": datasets.Value("string"),
+         }
+     ),
+     "free_law": datasets.Features(
+         {
+             "text": datasets.Value("string"),
+             "meta": datasets.Value("string"),
+         }
+     ),
+     "hacker_news": datasets.Features(
+         {
+             "text": datasets.Value("string"),
+             "meta": datasets.Value("string"),
+         }
+     ),
+     "nih_exporter": datasets.Features(
+         {
+             "text": datasets.Value("string"),
+             "meta": datasets.Value("string"),
+         }
+     ),
+     "pubmed": datasets.Features(
+         {
+             "text": datasets.Value("string"),
+             "meta": datasets.Value("string"),
+         }
+     ),
+     "pubmed_central": datasets.Features(
+         {
+             "text": datasets.Value("string"),
+             "meta": datasets.Value("string"),
+         }
+     ),
+     "ubuntu_irc": datasets.Features(
+         {
+             "text": datasets.Value("string"),
+             "meta": datasets.Value("string"),
+         }
+     ),
+     "uspto": datasets.Features(
+         {
+             "text": datasets.Value("string"),
+             "meta": datasets.Value("string"),
+         }
+     ),
+ }
+
+
+ class ThePileConfig(datasets.BuilderConfig):
+     """BuilderConfig for The Pile."""
+
+     def __init__(self, *args, subsets, **kwargs):
+         """BuilderConfig for The Pile.
+
+         Args:
+             subsets (:obj:`List[str]`): List of subsets to load.
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super().__init__(
+             *args,
+             name="+".join(subsets),
+             **kwargs,
+         )
+         self.subsets = subsets
+
+
+ class ThePile(datasets.GeneratorBasedBuilder):
+     """The Pile dataset."""
+
+     VERSION = datasets.Version("1.1.0")
+
+     BUILDER_CONFIG_CLASS = ThePileConfig
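+     # One predefined config per key in _DATA_URLS: "all" plus each individual subset.
+     # ThePileConfig derives the config name by joining the requested subsets with "+".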
+     BUILDER_CONFIGS = [ThePileConfig(subsets=[subset]) for subset in _DATA_URLS]
+     DEFAULT_CONFIG_NAME = "all"
+
+     def _info(self):
+         """Give information and typings for the dataset."""
+         return datasets.DatasetInfo(
+             # This is the description that will appear on the datasets page.
+             description=_DESCRIPTION,
+             # This defines the different columns of the dataset and their types
+             features=_FEATURES.get(self.config.name),
+             # If there's a common (input, target) tuple from the features,
+             # specify them here. They'll be used if as_supervised=True in
+             # builder.as_dataset.
+             supervised_keys=None,
+             # Homepage of the dataset for documentation
+             homepage=_HOMEPAGE,
+             # License for the dataset if available
+             license=_LICENSES.get(self.config.name, "Multiple: see each subset license"),
+             # Citation for the dataset
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Return SplitGenerators."""
+         if self.config.name == "all":
+             data_dir = dl_manager.download(_DATA_URLS[self.config.name])
+             return [
+                 datasets.SplitGenerator(
+                     name=split,
+                     gen_kwargs={
+                         "files": data_dir[split],
+                     },
+                 )
+                 for split in [datasets.Split.TRAIN, datasets.Split.VALIDATION, datasets.Split.TEST]
+             ]
+         else:
+             data_urls = {subset: _DATA_URLS[subset] for subset in self.config.subsets}
+             archive = dl_manager.download(data_urls)
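+             # Tar-based subsets (hacker_news, pubmed_central, uspto) are iterated lazily
+             # via iter_archive; plain .jsonl.zst downloads are passed through as local paths.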
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TRAIN,
+                     gen_kwargs={
+                         "files": {
+                             subset: dl_manager.iter_archive(archive[subset])
+                             if ".tar" in data_urls[subset]
+                             else archive[subset]
+                             for subset in self.config.subsets
+                         },
+                     },
+                 ),
+             ]
+
+     def _generate_examples(self, files):
+         """Yield examples as (key, example) tuples."""
+         key = 0
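+         # The "all" config passes a list of downloaded .jsonl.zst shards; the subset
+         # configs pass a dict mapping each subset name to a path or an archive iterator.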
+         if isinstance(files, list):
+             import zstandard as zstd
+
+             for path in files:
+                 with zstd.open(open(path, "rb"), "rt", encoding="utf-8") as f:
+                     for row in f:
+                         data = json.loads(row)
+                         yield key, data
+                         key += 1
+         else:
+             for subset in files:
+                 if subset in {"enron_emails", "europarl", "free_law", "nih_exporter", "pubmed", "ubuntu_irc"}:
+                     import zstandard as zstd
+
+                     with zstd.open(open(files[subset], "rb"), "rt", encoding="utf-8") as f:
+                         for row in f:
+                             data = json.loads(row)
+                             yield key, data
+                             key += 1
+                 elif subset in {"hacker_news", "pubmed_central"}:
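+                     # Each archive member is treated as one plain-text document; its id
+                     # is taken from the member's file name.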
+                     for path, file in files[subset]:
+                         id_ = path.split("/")[-1].split(".")[0]
+                         meta = {"id": id_}
+                         text = file.read().decode("utf-8")
+                         yield key, {
+                             "text": text,
+                             "meta": meta,
+                         }
+                         key += 1
+                 elif subset == "uspto":
+                     import zstandard as zstd
+
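+                     # Each member of pile_uspto.tar is decompressed with zstd and parsed
+                     # as JSON lines, one example per row.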
+                     for path, file in files[subset]:
+                         with zstd.open(file, "rt", encoding="utf-8") as f:
+                             for row in f:
+                                 data = json.loads(row)
+                                 yield key, data
+                                 key += 1