Modalities: Text
Formats: parquet
Languages: English
Libraries: Datasets, Dask
License: Apache 2.0
slippylolo committed · Commit 22c7e4e · 1 Parent(s): e26b5c1

Improve dataset card

Files changed (1):
  1. README.md +61 -18
README.md CHANGED
@@ -30,25 +30,33 @@ size_categories:
 - 100B<n<1T
 ---

- # Falcon RefinedWeb
+ # 📀 Falcon RefinedWeb

 **Falcon RefinedWeb is a massive English web dataset built by [TII](https://www.tii.ae) and released under an Apache 2.0 license.**

- RefinedWeb is built through stringent filtering and large-scale deduplication of CommonCrawl; we found models trained on RefinedWeb to achieve performance in-line or better than models trained on curated datasets, while relying only on web data.
+ *Paper coming soon 😊.*

- This public extract should contain 500-650GT depending on your tokenizer of choice.
+ RefinedWeb is built through stringent filtering and large-scale deduplication of CommonCrawl; we found models trained on RefinedWeb to achieve performance in line with or better than models trained on curated datasets, while only relying on web data.
+
+ This public extract should contain 500-650GT depending on the tokenizer you use, and can be enhanced with the curated corpora of your choosing.

 ```python
 from datasets import load_dataset
 rw = load_dataset("tiiuae/falcon-refinedweb")
 ```
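The snippet above loads the full public extract, which amounts to 500-650GT of text. As an illustrative sketch (not part of the card itself), the `datasets` streaming mode can be used to iterate over documents without downloading everything first; the `"train"` split name below is assumed to be the default split exposed on the Hub.

```python
from datasets import load_dataset

# Stream the extract instead of materializing all of it locally.
rw = load_dataset("tiiuae/falcon-refinedweb", streaming=True)

# Peek at a single document; "train" is assumed to be the default split name.
sample = next(iter(rw["train"]))
print(sorted(sample.keys()))
```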

- # Dataset card
+ RefinedWeb is the main dataset we have used for training the [Falcon LLM](https://falconllm.tii.ae) models:
+
+ * It was used in conjunction with curated corpora to train Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b), two state-of-the-art open-source models.
+ * It was also used to train Falcon-RW-[1B](https://huggingface.co/tiiuae/falcon-rw-1b)/[7B](https://huggingface.co/tiiuae/falcon-rw-7b), two models trained on 350 billion tokens of RefinedWeb alone to demonstrate its quality compared to curated corpora.
+
+ # Dataset card for Falcon RefinedWeb

 ## Dataset Description

 * **Homepage:** [falconllm.tii.ae](https://falconllm.tii.ae)
- * **Paper:** coming soon
+ * **Paper:** coming soon 😊
 * **Point of Contact:** [falconllm@tii.ae](mailto:falconllm@tii.ae)

 ### Dataset Summary
 
@@ -57,44 +65,79 @@ Falcon RefinedWeb was created to serve as an English large-scale dataset for the

 It was built on top of CommonCrawl, leveraging stringent filtering and extensive deduplication.

+ ### Supported Tasks and Leaderboards
+
+ RefinedWeb is intended to be used primarily as a pretraining dataset for large language models. Practitioners may leverage it for upstream evaluation with a validation loss, but we do not provide any canonical split.
+
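Since no canonical split is shipped, one way to support the upstream evaluation described above is to hold out a slice of the stream yourself. A minimal sketch, assuming the `"train"` split name and using purely illustrative sizes:

```python
from datasets import load_dataset

rw = load_dataset("tiiuae/falcon-refinedweb", streaming=True)["train"]  # split name assumed

# Buffered shuffle so the held-out slice is not ordered by crawl dump.
rw = rw.shuffle(seed=42, buffer_size=10_000)

val_size = 10_000              # illustrative size, not a recommendation from the card
rw_valid = rw.take(val_size)   # held-out documents for tracking validation loss
rw_train = rw.skip(val_size)   # remainder used for pretraining
```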
+ ### Languages
+
+ RefinedWeb primarily contains English.
+
+ ## Dataset Structure
+
+ ### Data Instances
+
+ [More Information Needed]
+
+ ### Data Fields
+
+ [More Information Needed]
+
+ ### Data Splits
+
+ [More Information Needed]
+

 ## Dataset Creation

 ### Curation Rationale

- RefinedWeb is built on-top of CommonCrawl, using the Macrodata Refinement Pipeline, which combines content extraction, filtering heuristics, and deduplication.
+ Falcon RefinedWeb is built on top of [CommonCrawl](https://commoncrawl.org), using the Macrodata Refinement Pipeline, which combines content extraction, filtering heuristics, and deduplication.

 In designing RefinedWeb, we abided by the following philosophy:

- * (1) **Scale first.** We intend MDR to produce datasets to be used to train 40-200B parameters models, thus requiring trillions of tokens For English-only RefinedWeb, we target a size of 3-6 trillion tokens. Specifically, we eschew any labour intensive human curation process, and focus on CommonCrawl instead of disparate single-domain sources.
- * (2) **Strict deduplication.** Inspired by the work of Katherine Lee, which demonstrated the value of deduplication for large language models, we implement a rigorous deduplication pipeline. We combine both exact and fuzzy deduplication, and use strict settings leading to removal rates far higher than others have reported.
- * (3) **Neutral filtering.** To avoid introducing further undesirable biases into the model, we avoid using ML-based filtering outside of language identification. We stick to simple rules and heuristics, and use only URL filtering for adult content.
+ * (1) **Scale first.** We intend MDR to produce datasets to be used to train 40-200B parameter models, thus requiring trillions of tokens [(Hoffmann et al., 2022)](https://arxiv.org/abs/2203.15556). For English-only RefinedWeb, we target a size of 3-6 trillion tokens. Specifically, we eschew any labour-intensive human curation process, and focus on CommonCrawl instead of disparate single-domain sources.
+ * (2) **Strict deduplication.** Inspired by the work of [Lee et al., 2021](https://arxiv.org/abs/2107.06499), which demonstrated the value of deduplication for large language models, we implement a rigorous deduplication pipeline. We combine both exact and fuzzy deduplication, and use strict settings leading to removal rates far higher than other datasets have reported.
+ * (3) **Neutral filtering.** To avoid introducing further undesirable biases into the model, we avoid using ML-based filtering outside of language identification ([Dodge et al., 2021](https://arxiv.org/abs/2104.08758); [Welbl et al., 2021](https://arxiv.org/abs/2109.07445)). We stick to simple rules and heuristics, and use only URL filtering for adult content.

- During its development, we iterated on RefinedWeb by measuring the zero-shot performance of models trained with earlier versions of the dataset. We also manually audited samples to identify potential filtering improvements.
+ During its development, we iterated on RefinedWeb by measuring the zero-shot performance of models trained on development versions of the dataset. Our main goal was to maximize the performance obtained, bridging the gap between curated and web data. We also manually audited samples to identify potential filtering improvements.

 ### Source Data

- RefinedWeb is built from [CommonCrawl](https://commoncrawl.org) dumps.
+ RefinedWeb is built from [CommonCrawl](https://commoncrawl.org) dumps. These dumps are constructed from crawling publicly available web pages.

 ### Data Collection and Preprocessing

- See the upcoming paper for further details.
-
- We applied extensive preprocessing and cleaning of the data.
-
- We first filter URLs to remove adult content using a blocklist and a score system, we then use `trafilatura` to extract content from pages, and perform language identification with the `fastText` classifier from CCNet.
-
- After this first preprocessing stage, we filter data using heuristics from MassiveWeb, and our own line-wise corrections.
+ We applied extensive preprocessing and cleaning of the data, using our Macrodata Refinement Pipeline.
+
+ We first filter URLs to remove adult content using a blocklist and a score system; we then use `trafilatura` to extract content from pages, and perform language identification with the `fastText` classifier from CCNet ([Wenzek et al., 2019](https://arxiv.org/abs/1911.00359)). After this first preprocessing stage, we filter data using heuristics from MassiveWeb ([Rae et al., 2021](https://arxiv.org/abs/2112.11446)), and our own line-wise corrections.

 Finally, we run extensive deduplication, removing URLs revisited across dumps and subsequently performing fuzzy and exact substring deduplication.
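As an illustration only, two of the stages described above (content extraction with `trafilatura` and fastText language identification) could look roughly as follows. This is not the actual Macrodata Refinement Pipeline: the model file, minimum length, and confidence threshold are assumptions made for the example.

```python
import fasttext
import trafilatura

# fastText language-identification model (lid.176.bin), downloaded separately.
lid = fasttext.load_model("lid.176.bin")

def extract_english(html: str, min_chars: int = 200, min_conf: float = 0.65):
    """Return the main text of an English page, or None if extraction or language ID fails."""
    text = trafilatura.extract(html)  # main-content extraction, boilerplate stripped
    if not text or len(text) < min_chars:
        return None
    labels, scores = lid.predict(text.replace("\n", " "))  # fastText expects a single line
    if labels[0] == "__label__en" and scores[0] >= min_conf:
        return text
    return None
```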
+ ### Annotations
+
+ We provide automatically collected annotations for the source `url`, `timestamp` of the crawl, original CommonCrawl `dump` and `segment` in which the document was found, and `image_urls` contained in the page.
+
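These annotations can be read alongside the text, for example to check where a document comes from. A small sketch, assuming the `"train"` split name and that `image_urls` is a list:

```python
from urllib.parse import urlparse

from datasets import load_dataset

rw = load_dataset("tiiuae/falcon-refinedweb", streaming=True)["train"]  # split name assumed

# Print the provenance annotations for a handful of documents.
for doc in rw.take(5):
    host = urlparse(doc["url"]).netloc
    print(host, doc["timestamp"], doc["dump"], doc["segment"], len(doc["image_urls"]))
```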
+ ### Personal and Sensitive Information
+
+ As RefinedWeb is built upon publicly available web pages, it may contain sensitive information such as emails, phone numbers, or IP addresses. We believe that deduplication may have helped reduce the prevalence of PII in the dataset, but practitioners working with RefinedWeb should take care.

 ## Considerations for Using the Data

- Despite our best efforts to filter content that does not qualify as natural language, and to deduplicate documents, our pipeline may let through documents that may be considered as errors or redundant.
+ ### Social Impact of Dataset
+
+ With the open-source release of Falcon RefinedWeb, we aim to increase access to high-quality web data, which has typically been held private by model developers. We believe this release will in turn improve the accessibility and the spread of performant large language models.
+
+ ### Discussion of Biases

 As toxic or biased data is prevalent on the internet, it is likely our dataset contains such content. Notably, using the Perspective API, we estimated the prevalence of toxic content in the dataset to be similar to The Pile.

+ ### Other Known Limitations
+
+ Despite our best efforts to filter content that does not qualify as natural language, and to deduplicate documents, our pipeline may let through documents that may be considered errors or redundant.
+
 ## Additional Information

 ### Licensing Information

@@ -103,4 +146,4 @@ Apache 2.0.

 ### Citation Information

- Paper coming soon.
+ Paper coming soon 😊.
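The exact deduplication settings used for RefinedWeb are described in the upcoming paper. As a generic illustration of the fuzzy side of such a pipeline (MinHash-based near-duplicate detection in the spirit of Lee et al., 2021), and not the configuration actually used for the dataset:

```python
from datasketch import MinHash, MinHashLSH

NUM_PERM = 128     # number of hash permutations; illustrative value
THRESHOLD = 0.8    # Jaccard similarity threshold; illustrative, not the RefinedWeb setting

def signature(text: str, n: int = 5) -> MinHash:
    """MinHash signature over character n-gram shingles."""
    m = MinHash(num_perm=NUM_PERM)
    for i in range(max(len(text) - n + 1, 1)):
        m.update(text[i:i + n].encode("utf-8"))
    return m

def deduplicate(docs):
    """Keep each document only if no near-duplicate has been indexed before it."""
    lsh = MinHashLSH(threshold=THRESHOLD, num_perm=NUM_PERM)
    kept = []
    for idx, text in enumerate(docs):
        sig = signature(text)
        if lsh.query(sig):           # at least one near-duplicate already seen
            continue
        lsh.insert(str(idx), sig)
        kept.append(text)
    return kept
```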