JonasGeiping committed
Commit 2861f55
1 Parent(s): e5214ee

Update README.md

Files changed (1)
  1. README.md +127 -6
README.md CHANGED
@@ -5,11 +5,132 @@ dataset_info:
   sequence: int32
   splits:
   - name: train
-  num_bytes: 43860000000
-  num_examples: 85000000
-  download_size: 24001057282
-  dataset_size: 43860000000
 ---
-# Dataset Card for "the_pile_WordPiecex32768_2efdb9d060d1ae95faf952ec1a50f020"
 
-[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
   sequence: int32
   splits:
   - name: train
+  num_bytes: 22274051772
+  num_examples: 43166767
+  download_size: 12187746609
+  dataset_size: 22274051772
+annotations_creators:
+- no-annotation
+language_creators:
+- found
+language:
+- en
+license: other
+multilinguality:
+- monolingual
+pretty_name: pretokenized,filtered,sorted subset of the Pile
+size_categories:
+- 10B<n<100B
+source_datasets:
+- the-pile
+task_categories:
+- text-generation
+- fill-mask
+task_ids:
+- language-modeling
+- masked-language-modeling
+paperswithcode_id: the-pile-cramming
+
 ---
+# Dataset Card for "the_pile_WordPiecex32768_97b8e776baafb99c3892e6572a9f51b3"
+
+## Dataset Description
+
+- **Repository:** https://github.com/JonasGeiping/cramming
+- **Paper:** [Cramming: Training a Language Model on a Single GPU in One Day](https://arxiv.org/abs/2212.14034)
+- **Raw Data Source Paper:** [The Pile: An 800GB Dataset of Diverse Text for Language Modeling](https://arxiv.org/abs/2101.00027)
+- **Raw Data Source Datasheet:** [Datasheet for the Pile](https://arxiv.org/abs/2201.07311)
+
+### Dataset Summary
+
+This is a preprocessed, tokenized dataset for the cramming project.
+
+Use it only with the tokenizer uploaded in this repository.
+This version is `97b8e776baafb99c3892e6572a9f51b3`, which corresponds to the specific dataset construction setup described below.
+The raw data source is the Pile, an 825 GiB diverse, open-source language-modeling dataset composed of 22 smaller, high-quality datasets.
+
+
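+The tokenizer's text normalization (forced lowercasing, accent stripping, and an English-keyboard character set, per the configuration under Dataset Creation) can be approximated with the standard library alone. This is a rough sketch of the idea, not the cramming project's actual implementation:
+
+```python
+import unicodedata
+
+def normalize(text: str) -> str:
+    # force_lowercase
+    text = text.lower()
+    # strip_accents: decompose, then drop combining marks
+    decomposed = unicodedata.normalize("NFD", text)
+    text = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
+    # force_english_keyboard, roughly: keep only ASCII characters
+    return "".join(ch for ch in text if ch.isascii())
+
+print(normalize("Héllo, Wörld!"))  # hello, world!
+```
+
+The actual pipeline lives in the cramming repository; this only mirrors the intended effect of the three normalizer flags.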
+### Languages
+
+This dataset is in English (`EN`).
+
+### Data Splits
+
+This preprocessed subset contains only a train split.
+
+## Dataset Creation
+
+The configuration used to create this dataset with the cramming project code (https://github.com/JonasGeiping/cramming) is:
+
+```yaml
+# This is a slice of the Pile, loaded from a local source
+name: the_pile
+defaults:
+  - sources:
+      - the_pile
+
+# Preprocessing
+normalizer:
+  force_lowercase: True
+  strip_accents: True
+  force_english_keyboard: True
+  whitespace_escape: False
+tokenizer: WordPiece
+vocab_size: 32768
+
+# Dataset Formation
+seq_length: 128
+include_cls_token_in_corpus: False
+include_sep_token_in_corpus: True
+use_type_ids: False
+max_entries_in_raw_dataset: 16e6  # about 40 million sequences of length 128
+max_seq_in_tokenized_dataset: 85e6  # select only this many tokenized sequences
+# max_seq_in_tokenized_dataset should be just slightly more than
+# budget * 60 * 60 * expected tokens/sec for the single epoch of training
+
+# Data Cleaning:
+named_entity_simplification: False
+remove_whitespaces: False
+remove_trash: True
+trash_cutoff: 0.25
+deduplicate_entries: False
+deduplication_threshold: 75
+
+# Data Order:
+ordering: sentence-length-curriculum  # sequences sorted by length (a curriculum)
+```
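+The budget comment in the configuration can be made concrete. With an assumed 24-hour training budget and an assumed throughput of roughly 1,000 tokenized sequences per second (both numbers are illustrative, not taken from this card), the bound lands just above the configured `85e6`:
+
+```python
+budget_hours = 24     # assumed single-day cramming budget
+seqs_per_sec = 1_000  # assumed throughput of 128-token sequences
+
+max_seq = budget_hours * 60 * 60 * seqs_per_sec
+print(max_seq)  # 86400000, i.e. slightly above the 85e6 in the config
+```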
+
+## Considerations for Using the Data
+
+### Limitations and Bias
+
+This training data was filtered and sorted beyond the normal Pile preprocessing.
+These modifications were not tested for unintended consequences.
+
+## Additional Information
+
+### Dataset Curators
+
+This dataset is a filtered, sorted, and preprocessed subset of the Pile, made by Jonas Geiping. The original dataset was primarily curated by Leo Gao and Stella Biderman, with assistance from other authors of the Pile paper.
+
+### Licensing Information
+
+Please refer to the license of the specific subsets you use at https://huggingface.co/datasets/EleutherAI/pile.
+
+### Citation Information
+
+```bibtex
+@article{gao2020pile,
+  title={The {P}ile: An 800{GB} dataset of diverse text for language modeling},
+  author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
+  journal={arXiv preprint arXiv:2101.00027},
+  year={2020}
+}
+
+@article{biderman2022datasheet,
+  title={Datasheet for the {P}ile},
+  author={Biderman, Stella and Bicheno, Kieran and Gao, Leo},
+  journal={arXiv preprint arXiv:2201.07311},
+  year={2022}
+}
+```