bleysg committed af1ed11 (parent: 6686cf1): Update README.md

Files changed: README.md (+113 -16)
---
library_name: transformers
pipeline_tag: text-generation
datasets:
- Open-Orca/OpenOrca
size_categories:
- 1B<n<10B
---

<p><h1>🍮 The WHOLE FLAN Collection! 🍮</h1></p>

![OO-FLAN Logo](https://huggingface.co/datasets/Open-Orca/FLAN/resolve/main/OOFlanLogo.png "OO-FLAN Logo")

# Overview

This repository includes the full dataset from the [FLAN Collection](https://ai.googleblog.com/2023/02/the-flan-collection-advancing-open.html), totalling ~300GB as parquet files.
It was generated using the official seqio templating from the [Google FLAN Collection GitHub repo](https://github.com/google-research/FLAN/tree/main/flan/v2).
The data is subject to the same licensing as each of its component datasets.


# Motivation

This work was done as part of the requirements for the OpenOrca project.
No publicly available generation of the FLAN Collection was large enough to subsample from, so we opted to process the entire collection ourselves.
Generating it requires an understanding of seqio and a Linux server with 512GB of CPU RAM, as well as fast drives and custom limits for many system parameters beyond the defaults of Linux server distributions (e.g., allowing up to 45,000 threads to run at once).
It entails downloading over 400GB of source datasets, working around tfds bugs, and then processing the data over the course of several days.
We provide this repo as a resource for other ML researchers, sparing them these time-consuming and laborious steps and putting the data into a more accessible format for further consumption.

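The kinds of limit overrides involved can be sketched as below; the exact values are illustrative assumptions, not a record of the settings we used:

```shell
# Illustrative only: raising per-user and kernel-wide thread limits so a
# seqio job can run ~45,000 threads. Values are assumptions; tune per system.
ulimit -u 50000                           # per-user process/thread cap
sudo sysctl -w kernel.threads-max=120000  # system-wide thread cap
sudo sysctl -w vm.max_map_count=262144    # memory-map limit for large jobs
```
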
# FLAN Collection

* JSON files at top level are used for subsampling in OpenOrca
* Parquets in subdirectories contain the entire FLAN collection in Dask-sharded folders by submix fractions


# Data

## Zero-Shot vs Few-Shot and Options vs No-Options

The core sub-collections of FLAN are `CoT`, `Dialog`, `NIv2`, `T0`, and `flan2021`.
Within those sub-collections are four "remixes" of the data that are templated differently:
* `Zero-Shot` and `Few-Shot`
  * `Zero-Shot` provides a prompt, question, or challenge without any prior exemplars
  * `Few-Shot` provides exemplars first
* `Options` and `No-Options`
  * `Options` provides a question or challenge with multiple-choice answer options (e.g. A/B/C/D) to select from
  * `No-Options` requires a free-form answer

For any given sub-collection, only some of the "remixes" may be officially provided. All that are available have been generated in full, without any redaction or sub-sampling.

An example: the `t0_fsopt_data` folder contains the sub-collection `T0`'s Few-Shot (FS), Options (OPT) remix set.
Notably, this is the largest "remix" and the one that necessitates 512GB of CPU RAM to generate. The raw JSON output is nearly 200GB.

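The folder naming scheme described above can be decoded mechanically. A minimal sketch (the `parse_submix` helper is ours, not part of the repo):

```python
# Sketch: decode submix folder names like "t0_fsopt_data" into their parts.
import re

def parse_submix(folder: str):
    """Parse a folder name into (sub-collection, shot remix, options remix)."""
    m = re.fullmatch(r"(.+)_(zs|fs)(opt|noopt)_data", folder)
    if not m:
        raise ValueError(f"unrecognized submix folder: {folder}")
    collection, shots, options = m.groups()
    return (
        collection,
        "Few-Shot" if shots == "fs" else "Zero-Shot",
        "Options" if options == "opt" else "No-Options",
    )

print(parse_submix("t0_fsopt_data"))  # ('t0', 'Few-Shot', 'Options')
```
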
## Parquet Sizes

```
$ du -h --max-depth=1 ./
9.1G ./niv2_fsopt_data
2.4G ./niv2_zsopt_data
59G ./flan_fsopt_data
984M ./dialog_zsopt_data
11G ./flan_zsopt_data
8.6G ./dialog_fsopt_data
16G ./t0_zsnoopt_data
149M ./cot_fsopt_data
20M ./cot_zsopt_data
17G ./t0_zsopt_data
11G ./flan_zsnoopt_data
101G ./t0_fsopt_data
25G ./flan_fsnoopt_data
39G ./t0_fsnoopt_data
296G ./
```


# Citations

```bibtex
@misc{goodson2023huggyflan,
  title={Fine FLAN: Seqio to Parquet So You Don't Have To},
  author={Bleys Goodson},
  year={2023},
  publisher={HuggingFace},
  journal={HuggingFace repository},
  howpublished={\url{https://huggingface.co/datasets/Open-Orca/FLAN}},
}
```

```bibtex
@misc{longpre2023flan,
  title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning},
  author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. Le and Barret Zoph and Jason Wei and Adam Roberts},
  year={2023},
  eprint={2301.13688},
  archivePrefix={arXiv},
  primaryClass={cs.AI}
}
```

```bibtex
@misc{wei2022finetuned,
  title={Finetuned Language Models Are Zero-Shot Learners},
  author={Jason Wei and Maarten Bosma and Vincent Y. Zhao and Kelvin Guu and Adams Wei Yu and Brian Lester and Nan Du and Andrew M. Dai and Quoc V. Le},
  year={2022},
  eprint={2109.01652},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```

```bibtex
@misc{sanh2022multitask,
  title={Multitask Prompted Training Enables Zero-Shot Task Generalization},
  author={Victor Sanh and Albert Webson and Colin Raffel and Stephen H. Bach and Lintang Sutawika and Zaid Alyafeai and Antoine Chaffin and Arnaud Stiegler and Teven Le Scao and Arun Raja and Manan Dey and M Saiful Bari and Canwen Xu and Urmish Thakker and Shanya Sharma Sharma and Eliza Szczechla and Taewoon Kim and Gunjan Chhablani and Nihal Nayak and Debajyoti Datta and Jonathan Chang and Mike Tian-Jian Jiang and Han Wang and Matteo Manica and Sheng Shen and Zheng Xin Yong and Harshit Pandey and Rachel Bawden and Thomas Wang and Trishala Neeraj and Jos Rozen and Abheesht Sharma and Andrea Santilli and Thibault Fevry and Jason Alan Fries and Ryan Teehan and Tali Bers and Stella Biderman and Leo Gao and Thomas Wolf and Alexander M. Rush},
  year={2022},
  eprint={2110.08207},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
```

```bibtex
@misc{wang2022supernaturalinstructions,
  title={Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks},
  author={Yizhong Wang and Swaroop Mishra and Pegah Alipoormolabashi and Yeganeh Kordi and Amirreza Mirzaei and Anjana Arunkumar and Arjun Ashok and Arut Selvan Dhanasekaran and Atharva Naik and David Stap and Eshaan Pathak and Giannis Karamanolakis and Haizhi Gary Lai and Ishan Purohit and Ishani Mondal and Jacob Anderson and Kirby Kuznia and Krima Doshi and Maitreya Patel and Kuntal Kumar Pal and Mehrad Moradshahi and Mihir Parmar and Mirali Purohit and Neeraj Varshney and Phani Rohitha Kaza and Pulkit Verma and Ravsehaj Singh Puri and Rushang Karia and Shailaja Keyur Sampat and Savan Doshi and Siddhartha Mishra and Sujan Reddy and Sumanta Patro and Tanay Dixit and Xudong Shen and Chitta Baral and Yejin Choi and Noah A. Smith and Hannaneh Hajishirzi and Daniel Khashabi},
  year={2022},
  eprint={2204.07705},
  archivePrefix={arXiv},
  primaryClass={cs.CL}
}
```