JeanKaddour committed "Update README.md" in commit 18ad1b0 (1 parent: 9ee5f99).

Files changed (1): README.md (+79, -2)
  num_examples: 10000
  download_size: 3177432813
  dataset_size: 5967446087
18
+ annotations_creators:
19
+ - no-annotation
20
+ language_creators:
21
+ - found
22
+ language:
23
+ - en
24
+ license: other
25
+ multilinguality:
26
+ - monolingual
27
+ pretty_name: MiniPile
28
+ size_categories:
29
+ - 1M<n<10M
30
+ source_datasets:
31
+ - original
32
+ task_categories:
33
+ - text-generation
34
+ - fill-mask
35
+ task_ids:
36
+ - language-modeling
37
+ - masked-language-modeling
38
+ paperswithcode_id: minipile
39
  ---
 
# Dataset Card for MiniPile

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Languages](#languages)
- [Additional Information](#additional-information)
  - [Dataset Curators](#dataset-curators)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)

## Dataset Description

[The MiniPile Challenge for Data-Efficient Language Models](https://arxiv.org/abs/2304.08442)

### Dataset Summary

MiniPile is a 6GB subset of the [deduplicated The Pile corpus](https://huggingface.co/datasets/EleutherAI/the_pile_deduplicated). To curate MiniPile, we perform a simple, three-step data filtering process: we (1) infer embeddings for all documents of the Pile, (2) cluster the embedding space using k-means, and (3) filter out low-quality clusters.
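
The three-step recipe can be sketched as follows. This is a minimal illustration only: random vectors stand in for the real document embeddings, the k-means implementation is a plain NumPy version rather than the paper's pipeline, and the set of excluded cluster ids is arbitrary (in MiniPile, low-quality clusters are flagged by inspecting documents near each centroid).

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means: returns centroids and per-point cluster assignments."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Step 2: assign each document embedding to its nearest centroid.
        d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster is empty.
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

# Step 1 stand-in: toy "document embeddings" (dimensions and counts are
# illustrative, not those of the actual Pile embeddings).
rng = np.random.default_rng(42)
embeddings = rng.normal(size=(1000, 16))

k = 8
centroids, labels = kmeans(embeddings, k)

# Step 3: drop documents belonging to clusters flagged as low quality.
# The flagged ids here are arbitrary; the paper selects them by manual
# inspection of examples closest to each cluster centroid.
excluded = {2, 5}
keep = ~np.isin(labels, list(excluded))
filtered = embeddings[keep]
print(len(filtered), "of", len(embeddings), "documents kept")
```
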

The primary motivation for curating MiniPile is that (i) diverse pre-training datasets (like the Pile) are often too large for academic budgets and (ii) most smaller-scale datasets are fairly homogeneous and thereby unrepresentative of contemporary general-purpose language models. MiniPile aims to fill this gap and thereby facilitate data-efficient research on model architectures, training procedures, optimizers, etc.

More details on the MiniPile curation procedure and some pre-training results can be found in the [MiniPile paper](https://arxiv.org/abs/2304.08442).

For more details on the Pile corpus, we refer the reader to [the Pile datasheet](https://arxiv.org/abs/2201.07311).

### Languages

English (`EN`)

## Additional Information

### Dataset Curators

MiniPile is a subset of the Pile, curated by Jean Kaddour. The Pile was created by Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy.

### Licensing Information

Since MiniPile is a subset of the Pile, the same MIT License holds.

### Citation Information

```
@article{kaddour2023minipile,
  title={The MiniPile Challenge for Data-Efficient Language Models},
  author={Kaddour, Jean},
  journal={arXiv preprint arXiv:2304.08442},
  year={2023}
}

@article{gao2020pile,
  title={The {P}ile: An 800{GB} dataset of diverse text for language modeling},
  author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
  journal={arXiv preprint arXiv:2101.00027},
  year={2020}
}
```