PatrickHaller committed · Commit 3538e0c (verified) · Parent(s): bd18da6

Update README.md

Files changed (1): README.md (+50 −0)
README.md CHANGED
@@ -21,3 +21,53 @@ configs:
  - split: train
    path: data/train-*
---

# Description

This dataset is a sampled subset of the [Pile](https://huggingface.co/datasets/EleutherAI/pile) dataset.
We used [DSIR](https://github.com/p-lambda/dsir), a data selection tool that subsamples via importance resampling.
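
The core idea behind importance resampling is to score each raw example by how much more likely it is under a target distribution than under the raw distribution, then sample proportionally to those scores. The sketch below illustrates that idea with a toy unigram model; it is not the actual DSIR implementation (which uses hashed n-gram features), and `unigram_logprobs`, `importance_weight`, and `resample` are hypothetical helpers:

```python
import math
import random
from collections import Counter

def unigram_logprobs(texts):
    # Add-one smoothed unigram log-probabilities for a toy bag-of-words model
    counts = Counter(tok for text in texts for tok in text.split())
    total = sum(counts.values())
    vocab = len(counts) + 1  # one extra bucket for unseen tokens
    lp = {tok: math.log((c + 1) / (total + vocab)) for tok, c in counts.items()}
    oov = math.log(1 / (total + vocab))
    return lp, oov

def importance_weight(text, tgt_lp, tgt_oov, raw_lp, raw_oov):
    # log p_target(x) - log p_raw(x), summed over whitespace tokens
    return sum(tgt_lp.get(t, tgt_oov) - raw_lp.get(t, raw_oov) for t in text.split())

def resample(raw_texts, target_texts, k, seed=0):
    # Gumbel top-k trick: draw k examples without replacement with
    # probability proportional to exp(importance weight)
    rng = random.Random(seed)
    tgt_lp, tgt_oov = unigram_logprobs(target_texts)
    raw_lp, raw_oov = unigram_logprobs(raw_texts)
    scored = []
    for text in raw_texts:
        gumbel = -math.log(-math.log(rng.random()))
        w = importance_weight(text, tgt_lp, tgt_oov, raw_lp, raw_oov)
        scored.append((w + gumbel, text))
    scored.sort(reverse=True)
    return [text for _, text in scored[:k]]
```

With a target corpus of pet-related sentences, raw examples sharing that vocabulary receive higher weights and are more likely to survive the resampling.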

The subset's sample distribution is:

```json
{
  "Pile-CC": 198245,
  "OpenWebText2": 122382,
  "FreeLaw": 37517,
  "USPTO Backgrounds": 10195,
  "Wikipedia (en)": 8072,
  "PubMed Central": 5849,
  "PubMed Abstracts": 4965,
  "Gutenberg (PG-19)": 2712,
  "BookCorpus2": 2550,
  "Books3": 2432,
  "StackExchange": 1753,
  "PhilPapers": 1560,
  "YoutubeSubtitles": 1187,
  "OpenSubtitles": 1015,
  "ArXiv": 610,
  "NIH ExPorter": 476,
  "Enron Emails": 439,
  "EuroParl": 419,
  "Github": 390,
  "HackerNews": 259
}
```
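
Summing these counts gives the total number of documents in the subset, and per-source shares follow directly:

```python
dist = {
    "Pile-CC": 198245, "OpenWebText2": 122382, "FreeLaw": 37517,
    "USPTO Backgrounds": 10195, "Wikipedia (en)": 8072,
    "PubMed Central": 5849, "PubMed Abstracts": 4965,
    "Gutenberg (PG-19)": 2712, "BookCorpus2": 2550, "Books3": 2432,
    "StackExchange": 1753, "PhilPapers": 1560, "YoutubeSubtitles": 1187,
    "OpenSubtitles": 1015, "ArXiv": 610, "NIH ExPorter": 476,
    "Enron Emails": 439, "EuroParl": 419, "Github": 390, "HackerNews": 259,
}

total = sum(dist.values())
print(total)  # 403027 documents in the subset

# Fraction of documents drawn from each source
shares = {name: count / total for name, count in dist.items()}
print(f"{shares['Pile-CC']:.1%}")  # 49.2%
```

Roughly half of the documents come from Pile-CC, with OpenWebText2 contributing another ~30%.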

The dataset contains ~100M words of text. This can be checked with:

```python
from datasets import load_dataset

ds = load_dataset("PatrickHaller/dsir-pile-100M-words")

# Tally whitespace-separated words across the training split
count = 0
for row in ds["train"]:
    count += len(row["text"].split(" "))

print(count)

# Out: 99999861
```
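
Iterating `ds["train"]` requires downloading the full split first; the same tally can be factored into a small helper and pointed at a streaming dataset instead (a sketch — `count_words` is a hypothetical helper, while `streaming=True` is a standard `load_dataset` option):

```python
def count_words(rows, field="text"):
    # Whitespace-split word count, matching the tally above
    return sum(len(row[field].split(" ")) for row in rows)

# Usage with streaming, which avoids downloading the whole split up front:
# ds = load_dataset("PatrickHaller/dsir-pile-100M-words", split="train", streaming=True)
# print(count_words(ds))
```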