iNeil77 committed
Commit 78e6a7d
1 Parent(s): 9065ef9

Update README.md

Files changed (1): README.md +26 -0
README.md CHANGED
@@ -113,4 +113,30 @@ configs:
  data_files:
  - split: train
    path: wiki/train-*
+ task_categories:
+ - text-generation
+ language:
+ - en
+ size_categories:
+ - 10M<n<100M
  ---
+
+ A small, aggressively cleaned and de-duplicated pre-training corpus for academic settings. It aims to recreate something akin to [The Pile](https://huggingface.co/datasets/EleutherAI/pile), but prioritizes quality for the constrained token budgets academic researchers work with.
+
+ It has seven config subsets, plus an eighth `all` subset that combines them, for a total of ~91B tokens (GPT-2 tokenizer estimate). The subsets are as follows (a loading sketch follows the list):
+
+ 1. `c4_realnews`: The RealNews domain subset of the C4 dataset, containing news articles.
+ 2. `openwebtext`: The OpenWebText dataset, containing the contents of links mentioned in Reddit posts with at least three upvotes.
+ 3. `peS2o`: The peS2o dataset, containing academic articles from Semantic Scholar.
+ 4. `redpajama_books`: The books subset of RedPajama V1.
+ 5. `stackexchange`: The EN StackExchange non-code subset of the BigScience ROOTS dataset.
+ 6. `uspto`: The EN USPTO patent application contents subset of the BigScience ROOTS dataset.
+ 7. `wiki`: The EN Wiki subset of the BigScience ROOTS dataset.
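+
+ Any subset can be loaded on its own with the `datasets` library. A minimal sketch, assuming the usual Hub workflow (the repo id below is a placeholder for this dataset's actual path):
+
+ ```python
+ from datasets import load_dataset
+
+ # "user/dataset" is a hypothetical repo id; substitute this dataset's Hub path.
+ # The second argument picks one of the config subsets listed above.
+ wiki = load_dataset("user/dataset", "wiki", split="train")
+ print(wiki[0]["text"])  # assumes a "text" column, typical for pre-training corpora
+ ```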
+
+ The following processing and filtering steps have been applied (illustrative sketches of steps 2 through 5 follow the list):
+
+ 1. Removed citation text and bibliography information from academic texts.
+ 2. Ran a perplexity filter using a KenLM model trained on the EN OSCAR corpus, removing documents with a perplexity greater than 325 or less than 7.
+ 3. Removed samples whose proportion of repeating n-grams (n <= 4) is 15% or higher.
+ 4. Removed samples with less than 99% confidence of being EN, as measured by the lingua language detector.
+ 5. Performed an aggressive MinHash de-duplication with a shingle size of 8 and a low similarity threshold of 0.5.
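+
+ Step 2 can be sketched with the `kenlm` Python bindings. This is an illustrative reconstruction, not the pipeline's actual code: the model path is a placeholder, and normalizing by word count plus the end-of-sentence token is one common convention.
+
+ ```python
+ import kenlm
+
+ # Placeholder path; assumes a KenLM model trained on the EN OSCAR corpus.
+ model = kenlm.Model("oscar_en.arpa")
+
+ def perplexity(text: str) -> float:
+     # model.score returns the total log10 probability of the text;
+     # normalize by the word count (+1 for the end-of-sentence token).
+     return 10.0 ** (-model.score(text) / (len(text.split()) + 1))
+
+ def keep_by_perplexity(text: str) -> bool:
+     # Keep documents whose perplexity falls inside the [7, 325] band.
+     return 7.0 <= perplexity(text) <= 325.0
+ ```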
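+
+ Step 3's repetition filter might look like the following; reading the "repeating n-gram proportion" as the count-weighted fraction of n-grams (n = 1..4) that occur more than once is an assumption.
+
+ ```python
+ from collections import Counter
+
+ def repeated_ngram_fraction(text: str, max_n: int = 4) -> float:
+     # Count-weighted fraction of n-grams (n = 1..max_n) that occur more than once.
+     tokens = text.split()
+     repeated, total = 0, 0
+     for n in range(1, max_n + 1):
+         grams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
+         total += sum(grams.values())
+         repeated += sum(c for c in grams.values() if c > 1)
+     return repeated / total if total else 0.0
+
+ def keep_by_repetition(text: str) -> bool:
+     # Drop samples whose repeated-n-gram proportion reaches 15%.
+     return repeated_ngram_fraction(text) < 0.15
+ ```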
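+
+ Step 4 maps directly onto the `lingua-language-detector` package; building the detector from all languages is an assumption, since the actual pipeline may have restricted the candidate set.
+
+ ```python
+ from lingua import Language, LanguageDetectorBuilder
+
+ detector = LanguageDetectorBuilder.from_all_languages().build()
+
+ def keep_by_language(text: str) -> bool:
+     # Keep samples detected as English with at least 99% confidence.
+     return detector.compute_language_confidence(text, Language.ENGLISH) >= 0.99
+ ```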
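+
+ Step 5 can be approximated with `datasketch`. The shingle size (8) and Jaccard threshold (0.5) come from the description above; word-level shingles and 128 permutations are assumptions.
+
+ ```python
+ from datasketch import MinHash, MinHashLSH
+
+ NUM_PERM = 128
+ lsh = MinHashLSH(threshold=0.5, num_perm=NUM_PERM)  # low threshold -> aggressive matching
+
+ def minhash(text: str, shingle_size: int = 8) -> MinHash:
+     # Hash word-level shingles of the given size into a MinHash signature.
+     tokens = text.split()
+     m = MinHash(num_perm=NUM_PERM)
+     for i in range(max(len(tokens) - shingle_size + 1, 1)):
+         m.update(" ".join(tokens[i:i + shingle_size]).encode("utf8"))
+     return m
+
+ def deduplicate(corpus: dict[str, str]) -> list[str]:
+     # Greedily keep a document only if no already-kept document is a near-duplicate.
+     kept = []
+     for doc_id, text in corpus.items():
+         m = minhash(text)
+         if not lsh.query(m):
+             lsh.insert(doc_id, m)
+             kept.append(doc_id)
+     return kept
+ ```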