JonasGeiping committed
Commit 98a279b
1 Parent(s): fca102b

Update README.md

Files changed (1): README.md +140 -2
README.md CHANGED
@@ -9,7 +9,145 @@ dataset_info:
  num_examples: 43166767
  download_size: 12187746609
  dataset_size: 22274051772
  ---
- # Dataset Card for "the_pile_WordPiecex32768_97b8e776baafb99c3892e6572a9f51b3"
-
- [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+ annotations_creators:
+ - no-annotation
+ language_creators:
+ - found
+ language:
+ - en
+ license: other
+ multilinguality:
+ - monolingual
+ pretty_name: pretokenized, filtered, sorted subset of the Pile
+ size_categories:
+ - 10B<n<100B
+ source_datasets:
+ - the-pile
+ task_categories:
+ - text-generation
+ - fill-mask
+ task_ids:
+ - language-modeling
+ - masked-language-modeling
+ paperswithcode_id: the-pile-cramming
+
  ---
+ # Dataset Card for the_pile_WordPiecex32768_97b8e776baafb99c3892e6572a9f51b3
+
+ This is a preprocessed, tokenized dataset for the cramming project.
+
+ Use it only with the tokenizer uploaded here.
+ This version is `97b8e776baafb99c3892e6572a9f51b3`, which corresponds to a specific dataset construction setup, described below.
+ The raw data source is the Pile, an 825 GiB diverse, open-source language modeling dataset consisting of 22 smaller, high-quality datasets combined.
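+
+ As a quick-start sketch, one way to stream this data and pair it with the bundled tokenizer is shown below. The repo id, the `tokenizer.json` filename, and the `input_ids` column name are assumptions, not guarantees from this card:
+
+ ```python
+ from datasets import load_dataset
+ from huggingface_hub import hf_hub_download
+ from transformers import PreTrainedTokenizerFast
+
+ # Assumed repo id; adjust if the dataset lives under a different namespace.
+ repo = "JonasGeiping/the_pile_WordPiecex32768_97b8e776baafb99c3892e6572a9f51b3"
+
+ # Stream the train split (the only split) instead of downloading ~12 GB up front.
+ dataset = load_dataset(repo, split="train", streaming=True)
+
+ # The tokenizer ships inside this dataset repo; fetch its file directly
+ # ("tokenizer.json" is the conventional filename, assumed here).
+ tokenizer_file = hf_hub_download(repo, "tokenizer.json", repo_type="dataset")
+ tokenizer = PreTrainedTokenizerFast(tokenizer_file=tokenizer_file)
+
+ example = next(iter(dataset))                   # one pre-tokenized sequence
+ print(tokenizer.decode(example["input_ids"]))   # column name assumed
+ ```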
+
+
+ ## Dataset Description
+
+ - **Repository:** https://github.com/JonasGeiping/cramming
+ - **Paper:** https://arxiv.org/abs/2212.14034
+ - **Raw Data Source Paper:** [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027)
+ - **Raw Data Source Datasheet:** [Datasheet for the Pile](https://arxiv.org/abs/2201.07311)
+
+
+ ### Languages
+
+ This dataset is in tokenized English (`EN`).
+
+ ### Data Splits
+
+ This preprocessed subset contains only a train split.
+
+ ## Dataset Creation
+
+ The configuration used to create this dataset with the cramming project code (https://github.com/JonasGeiping/cramming) is:
+
+ ```yaml
+ name: the_pile
+ defaults:
+   - sources:
+     - the_pile
+
+ # Preprocessing
+ normalizer:
+   force_lowercase: True
+   strip_accents: True
+   force_english_keyboard: True
+   whitespace_escape: False
+ tokenizer: WordPiece
+ vocab_size: 32768
+
+ # Dataset Formation
+ seq_length: 128
+ include_cls_token_in_corpus: False
+ include_sep_token_in_corpus: True
+ use_type_ids: False
+ max_entries_in_raw_dataset: 16e6
+ max_seq_in_tokenized_dataset: 85e6
+
+ # Data Cleaning:
+ named_entity_simplification: False
+ remove_whitespaces: False
+ remove_trash: True
+ trash_cutoff: 0.25
+ deduplicate_entries: False
+ deduplication_threshold: 75
+
+ # Data Order:
+ ordering: sentence-length-curriculum
+ ```
+
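+ The `ordering: sentence-length-curriculum` setting means the stored sequences are arranged from shortest to longest, so early training steps see shorter examples. A minimal sketch of that idea follows; it is illustrative only, the cramming code has its own implementation, and scoring length by non-pad tokens with `pad_id=0` is an assumption:
+
+ ```python
+ def curriculum_order(sequences, pad_id=0):
+     """Sort tokenized sequences by their count of non-padding tokens."""
+     return sorted(sequences, key=lambda seq: sum(t != pad_id for t in seq))
+
+ # Shortest sequences first: a simple length-based curriculum.
+ print(curriculum_order([[7, 8, 9, 1], [3, 0, 0, 0], [5, 6, 0, 0]]))
+ # -> [[3, 0, 0, 0], [5, 6, 0, 0], [7, 8, 9, 1]]
+ ```
+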
+ ## Considerations for Using the Data
+
+ Limitations and bias:
+ This training data was filtered and sorted beyond the standard Pile preprocessing, as described above.
+ These modifications were not tested for unintended consequences.
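+
+ As one concrete illustration of such filtering, here is a minimal sketch of a `remove_trash`-style filter. The function name and the use of the unknown-token fraction as the trash score are assumptions for illustration; the actual criterion in the cramming code may differ:
+
+ ```python
+ def is_trash(token_ids, unk_id, cutoff=0.25):
+     """Flag a tokenized sequence whose unknown-token share exceeds the cutoff.
+
+     Assumption: "trash" is scored by the fraction of [UNK] tokens; the
+     cramming implementation may use a different heuristic.
+     """
+     if not token_ids:
+         return True
+     unk_fraction = sum(t == unk_id for t in token_ids) / len(token_ids)
+     return unk_fraction > cutoff
+
+ # Keep only sequences below the 0.25 cutoff from the config (unk_id=0 is a toy choice).
+ corpus = [[5, 2, 9, 9], [0, 0, 0, 7]]
+ clean = [seq for seq in corpus if not is_trash(seq, unk_id=0)]
+ print(clean)  # -> [[5, 2, 9, 9]]
+ ```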
+
+ ## Additional Information
+
+ ### Dataset Curators
+
+ This dataset is a filtered, sorted, and preprocessed subset of the Pile, made by Jonas Geiping. The original dataset was primarily curated by Leo Gao and Stella Biderman, with assistance from other authors of the Pile paper.
+
+ ### Licensing Information
+
+ Please refer to the license of the specific Pile subsets you use, listed at https://huggingface.co/datasets/EleutherAI/pile.
+
+ ### Citation Information
+
+ Filtered version for the cramming project:
+ ```bibtex
+ @article{geiping_cramming_2022,
+   title = {Cramming: {{Training}} a {{Language Model}} on a {{Single GPU}} in {{One Day}}},
+   shorttitle = {Cramming},
+   author = {Geiping, Jonas and Goldstein, Tom},
+   year = {2022},
+   month = dec,
+   eprint = {2212.14034},
+   primaryclass = {cs},
+   publisher = {{arXiv}},
+   doi = {10.48550/arXiv.2212.14034},
+   url = {http://arxiv.org/abs/2212.14034},
+   urldate = {2023-01-10},
+   archiveprefix = {arxiv},
+   keywords = {Computer Science - Computation and Language, Computer Science - Machine Learning},
+   journal = {arxiv:2212.14034[cs]}
+ }
+ ```
+
+ Original Data Curation:
+ ```bibtex
+ @article{gao2020pile,
+   title={The {P}ile: An 800{GB} dataset of diverse text for language modeling},
+   author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
+   journal={arXiv preprint arXiv:2101.00027},
+   year={2020}
+ }
+
+ @article{biderman2022datasheet,
+   title={Datasheet for the pile},
+   author={Biderman, Stella and Bicheno, Kieran and Gao, Leo},
+   journal={arXiv preprint arXiv:2201.07311},
+   year={2022}
+ }
+ ```