system HF staff committed
Commit cf21600
0 Parent(s)

Update files from the datasets library (from 1.12.0)

Release notes: https://github.com/huggingface/datasets/releases/tag/1.12.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
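
These rules route large binary artifacts (archives, Arrow files, model weights) through Git LFS. As a rough sanity check, here is a minimal Python sketch; it is illustrative only, since `fnmatch` approximates but does not fully reproduce Git's wildmatch semantics (e.g. for `saved_model/**/*`):

```python
from fnmatch import fnmatch

# A few of the simple glob patterns from the .gitattributes above.
LFS_PATTERNS = ["*.7z", "*.arrow", "*.bin", "*.gz", "*.parquet", "*.tar.*", "*.zip", "*tfevents*"]

def is_lfs_tracked(filename: str) -> bool:
    """True if the file name matches any of the listed LFS glob patterns."""
    return any(fnmatch(filename, pattern) for pattern in LFS_PATTERNS)

print(is_lfs_tracked("data.zip"))                   # True
print(is_lfs_tracked("stackexchange_dataset.tar"))  # False: only "*.tar.*" is listed, not "*.tar"
```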
README.md ADDED
@@ -0,0 +1,166 @@
+ ---
+ annotations_creators:
+ - no-annotation
+ language_creators:
+ - found
+ languages:
+ - en
+ licenses:
+ - mit
+ multilinguality:
+ - monolingual
+ pretty_name: Stack Exchange
+ size_categories:
+ - 1M<n<10M
+ source_datasets:
+ - original
+ task_categories:
+ - sequence-modeling
+ task_ids:
+ - language-modeling
+ ---
+
+ # Dataset Card for Stack Exchange
+
+ ## Table of Contents
+ - [Dataset Card for Stack Exchange](#dataset-card-for-stack-exchange)
+   - [Table of Contents](#table-of-contents)
+   - [Dataset Description](#dataset-description)
+     - [Dataset Summary](#dataset-summary)
+     - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+     - [Languages](#languages)
+   - [Dataset Structure](#dataset-structure)
+     - [Data Instances](#data-instances)
+     - [Data Fields](#data-fields)
+     - [Data Splits](#data-splits)
+   - [Dataset Creation](#dataset-creation)
+     - [Curation Rationale](#curation-rationale)
+     - [Source Data](#source-data)
+       - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+       - [Who are the source language producers?](#who-are-the-source-language-producers)
+     - [Annotations](#annotations)
+       - [Annotation process](#annotation-process)
+       - [Who are the annotators?](#who-are-the-annotators)
+     - [Personal and Sensitive Information](#personal-and-sensitive-information)
+   - [Considerations for Using the Data](#considerations-for-using-the-data)
+     - [Social Impact of Dataset](#social-impact-of-dataset)
+     - [Discussion of Biases](#discussion-of-biases)
+     - [Other Known Limitations](#other-known-limitations)
+   - [Additional Information](#additional-information)
+     - [Dataset Curators](#dataset-curators)
+     - [Licensing Information](#licensing-information)
+     - [Citation Information](#citation-information)
+     - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** [GitHub](https://github.com/EleutherAI/stackexchange-dataset)
+ - **Repository:** [Needs More Information]
+ - **Paper:** [arXiv](https://arxiv.org/abs/2101.00027)
+ - **Leaderboard:** [Needs More Information]
+ - **Point of Contact:** [Needs More Information]
+
+ ### Dataset Summary
+
+ This dataset is part of EleutherAI/The Pile and was built by processing the Stack Exchange data dump, an anonymized dump of all user-contributed content on the Stack Exchange network. It is intended for training language models.
+
+ |Size|Value|
+ |----|-----|
+ |download_size|34.28 GiB|
+ |dataset_size|10.3 GiB|
+
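+ A minimal loading sketch with the `datasets` library (assuming the dataset resolves under the `the_pile_stack_exchange` identifier defined by this repository's script; the full archive is fetched on first use):
+
+ ```python
+ from datasets import load_dataset
+
+ # Assumption: the dataset id matches this repository; the ~34 GiB source
+ # archive is downloaded and extracted on the first call.
+ dataset = load_dataset("the_pile_stack_exchange", split="train")
+ print(dataset[0]["domain"])      # e.g. 'chemistry'
+ print(dataset[0]["text"][:200])  # first 200 characters of the Q&A text
+ ```
+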
+ ### Supported Tasks and Leaderboards
+
+ The dataset is used for language modeling.
+
+ ### Languages
+
+ The dataset is in English.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ ```
+ {'domain': 'chemistry',
+  'text': "\nQ: \n \nReviving old questions or asking a new one? \n \nI'm relatively new to the Chemistry SE community, and sometimes when I go to ask a question, I notice that the same (or similar) question has already been asked. However, the previous question doesn't have a good answer (or is unanswered). In this case, is it better to ask the question again in a new post (which might be marked as duplicate) or comment on the old post (which might be several years old)? In other words, what are the customs of this site in regards to reviving old questions/discussions?\n\nA:\n\nAs Martin commented, it really depends on the type of question. In any case, you always have the following possibilities:\n\nAsk a new question\nEdit the question to bump it to the first page\nAdd a bounty\nBring it to the attention of people in chat\n\nConsider the following cases:\n\nI have exactly the same question as asked and unanswered before!\n\nIf you ask a new question which turns out to be the same question, it may be closed as a dupe (depending on whether users remember the old question). Not the ideal option.\nIf you can find something substantial to edit and bump the question, do so. Maybe add a comment that you would really love an answer.\nIf you can spare some rep for a bounty (50 is usually enough), do so.\nYou can always bring it to the attention of people in chat.\n"}
+ ```
+
+ ### Data Fields
+
+ - `domain`: Stack Exchange domain of the sample
+ - `text`: Text content containing both the question and the answer
+
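+ A quick field-access sketch (assuming `dataset` was loaded as in the Dataset Summary section above):
+
+ ```python
+ from collections import Counter
+
+ # Assumes `dataset` from the loading sketch earlier in this card.
+ sample = dataset[0]
+ print(sorted(sample.keys()))  # ['domain', 'text']
+
+ # Illustrative only: domain distribution over the first 10,000 examples.
+ print(Counter(dataset.select(range(10_000))["domain"]).most_common(5))
+ ```
+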
+ ### Data Splits
+
+ |split|num examples|
+ |-----|------------|
+ |train|5096117|
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ [Needs More Information]
+
+ ### Source Data
+
+ #### Initial Data Collection and Normalization
+
+ [Needs More Information]
+
+ #### Who are the source language producers?
+
+ [Needs More Information]
+
+ ### Annotations
+
+ #### Annotation process
+
+ [Needs More Information]
+
+ #### Who are the annotators?
+
+ [Needs More Information]
+
+ ### Personal and Sensitive Information
+
+ [Needs More Information]
+
+ ## Considerations for Using the Data
+
+ ### Social Impact of Dataset
+
+ [Needs More Information]
+
+ ### Discussion of Biases
+
+ [Needs More Information]
+
+ ### Other Known Limitations
+
+ [Needs More Information]
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ [Needs More Information]
+
+ ### Licensing Information
+
+ [Needs More Information]
+
+ ### Citation Information
+
+ ```
+ @article{pile,
+     title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling},
+     author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor},
+     journal={arXiv preprint arXiv:2101.00027},
+     year={2020}
+ }
+ ```
+
+ ### Contributions
+ Thanks to [sdtblck](https://github.com/sdtblck) for creating the dataset.
+ Thanks to [richarddwang](https://github.com/richarddwang) for adding the dataset.
dataset_infos.json ADDED
@@ -0,0 +1 @@
+ {"plain_text": {"description": "This dataset is part of EleutherAI/The Pile dataset and is a dataset for Language Models from processing stackexchange data dump, which is an anonymized dump of all user-contributed content on the Stack Exchange network.\n", "citation": "@article{pile,\n title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling},\n author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor},\n journal={arXiv preprint arXiv:2101.00027},\n year={2020}\n}\n", "homepage": "https://github.com/EleutherAI/stackexchange-dataset", "license": "", "features": {"domain": {"dtype": "string", "id": null, "_type": "Value"}, "text": {"dtype": "string", "id": null, "_type": "Value"}}, "post_processed": null, "supervised_keys": null, "task_templates": null, "builder_name": "the_pile_stack_exchange", "config_name": "plain_text", "version": {"version_str": "1.0.0", "description": null, "major": 1, "minor": 0, "patch": 0}, "splits": {"train": {"name": "train", "num_bytes": 11075434609, "num_examples": 5096117, "dataset_name": "the_pile_stack_exchange"}}, "download_checksums": {"https://the-eye.eu/public/AI/pile_preliminary_components/stackexchange_dataset.tar": {"num_bytes": 36802959360, "checksum": "f64f31d20db8d8692c1a019314a14974b4911a34ffef126feaf42da88860c666"}}, "download_size": 36802959360, "post_processing_size": null, "dataset_size": 11075434609, "size_in_bytes": 47878393969}}
dummy/plain_text/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e40eb6242535c430efb42376d61eb97fa414113fe4f79bc352dcdedef33e4701
+ size 1227
the_pile_stack_exchange.py ADDED
@@ -0,0 +1,80 @@
+ # coding=utf-8
+ # Copyright 2021 The HuggingFace Datasets Authors and the current dataset script contributor.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """The Stack Exchange Corpus"""
+
+ import os
+ from pathlib import Path
+
+ import datasets
+
+
+ _CITATION = """\
+ @article{pile,
+     title={The {P}ile: An 800GB Dataset of Diverse Text for Language Modeling},
+     author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and Presser, Shawn and Leahy, Connor},
+     journal={arXiv preprint arXiv:2101.00027},
+     year={2020}
+ }
+ """
+
+ _DESCRIPTION = """\
+ This dataset is part of EleutherAI/The Pile dataset and is a dataset for Language Models from processing stackexchange data dump, \
+ which is an anonymized dump of all user-contributed content on the Stack Exchange network.
+ """
+
+ _URL = "https://the-eye.eu/public/AI/pile_preliminary_components/stackexchange_dataset.tar"
+
+
+ class ThePileStackExchange(datasets.GeneratorBasedBuilder):
+     """The StackExchange dataset."""
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="plain_text",
+             description="Plain text",
+             version=datasets.Version("1.0.0"),
+         )
+     ]
+
+     def _info(self):
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features({"domain": datasets.Value("string"), "text": datasets.Value("string")}),
+             homepage="https://github.com/EleutherAI/stackexchange-dataset",
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         dl_dir = dl_manager.download_and_extract(_URL)
+         zips = [str(f) for f in (Path(dl_dir) / "out").iterdir()]
+         extracted = dl_manager.extract(zips, num_proc=os.cpu_count())
+         # Entries that are not directories are zero-size extraction artifacts; keep only directories.
+         dirs = [path for path in extracted if os.path.isdir(path)]
+         return [
+             datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"dirs": dirs}),
+         ]
+
+     def _generate_examples(self, dirs):
+         """Yields examples."""
+         _id = 0
+         for dir_path in sorted(dirs):
+             txt_files = sorted(Path(dir_path).glob("**/*.txt"))
+             for txt_file in txt_files:
+                 # e.g. PosixPath(/home/user/.cache/huggingface/datasets/downloads/extracted/3923d60abeeb876021dc55a897ac2f260b181556f8ca56a7c61e3b8b80afec77/academia.stackexchange_0000000001.txt)
+                 domain = txt_file.name.split(".")[0]  # the site name precedes the first dot
+                 with txt_file.open(mode="r", encoding="utf-8") as f:
+                     document = f.read()
+                 yield _id, {"domain": domain, "text": document}
+                 _id += 1
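
A minimal sketch of exercising the script above from a local checkout (`load_dataset` also accepts a path to a dataset script; the full download is triggered on first use):

```python
from datasets import load_dataset

# Assumption: this commit is checked out locally; building downloads and
# extracts the ~34 GiB source archive, so expect a long-running first call.
dataset = load_dataset("./the_pile_stack_exchange.py", split="train")
print(dataset.features)  # {'domain': Value(dtype='string', ...), 'text': Value(dtype='string', ...)}
print(dataset.num_rows)  # 5096117
```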