num_examples: 968000015
download_size: 466888198663
dataset_size: 2766953721769
license: apache-2.0
task_categories:
- text-generation
language:
- en
pretty_name: Falcon RefinedWeb
size_categories:
- 100B<n<1T
---

# Falcon RefinedWeb

**Falcon RefinedWeb is a massive English web dataset built by [TII](https://www.tii.ae) and released under an Apache 2.0 license.**

RefinedWeb is built through stringent filtering and large-scale deduplication of CommonCrawl; we found models trained on RefinedWeb to achieve performance in line with or better than models trained on curated datasets, while relying only on web data.

This public extract should contain 500-650GT depending on your tokenizer of choice.

```python
from datasets import load_dataset
rw = load_dataset("tiiuae/falcon-refinedweb")
```
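
Given the sizes reported in the metadata above (about 467 GB to download, roughly 2.8 TB of data), a streaming pass may be more practical than a full download. A minimal sketch, assuming a single `train` split and that the extracted page text is stored in a `content` field:

```python
from datasets import load_dataset

# Stream records instead of downloading the full dataset first.
rw = load_dataset("tiiuae/falcon-refinedweb", split="train", streaming=True)

for i, sample in enumerate(rw):
    # "content" is assumed here to hold the extracted page text.
    print(sample["content"][:200])
    if i >= 2:
        break
```
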
# Dataset card

## Dataset Description

* **Homepage:** [falconllm.tii.ae](https://falconllm.tii.ae);
* **Paper:** coming soon;
* **Point of Contact:** [falconllm@tii.ae](mailto:falconllm@tii.ae).

### Dataset Summary

Falcon RefinedWeb was created to serve as a large-scale English dataset for the pretraining of large language models. It may be used on its own, or augmented with curated sources (e.g., Wikipedia, StackOverflow).

It was built on top of CommonCrawl, leveraging stringent filtering and extensive deduplication.

## Dataset Creation

### Curation Rationale

RefinedWeb is built on top of CommonCrawl, using the Macrodata Refinement Pipeline, which combines content extraction, filtering heuristics, and deduplication.

In designing RefinedWeb, we abided by the following philosophy:

* (1) **Scale first.** We intend MDR to produce datasets to be used to train 40-200B parameter models, thus requiring trillions of tokens. For English-only RefinedWeb, we target a size of 3-6 trillion tokens. Specifically, we eschew any labour-intensive human curation process, and focus on CommonCrawl instead of disparate single-domain sources.
* (2) **Strict deduplication.** Inspired by the work of Katherine Lee et al., which demonstrated the value of deduplication for large language models, we implement a rigorous deduplication pipeline. We combine both exact and fuzzy deduplication, and use strict settings leading to removal rates far higher than others have reported.
* (3) **Neutral filtering.** To avoid introducing further undesirable biases into the model, we avoid using ML-based filtering outside of language identification. We stick to simple rules and heuristics, and use only URL filtering for adult content.

During its development, we iterated on RefinedWeb by measuring the zero-shot performance of models trained with earlier versions of the dataset. We also manually audited samples to identify potential filtering improvements.

### Source Data

RefinedWeb is built from [CommonCrawl](https://commoncrawl.org) dumps.

### Data Collection and Preprocessing

See the upcoming paper for further details.

We applied extensive preprocessing and cleaning of the data.

We first filter URLs to remove adult content using a blocklist and a scoring system; we then use `trafilatura` to extract content from pages, and perform language identification with the `fastText` classifier from CCNet.

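As a purely illustrative sketch of this stage (the exact models, thresholds, and scoring used by the pipeline are not specified here), content extraction and language identification could look roughly like the following, assuming the `lid.176.bin` fastText language-identification model has been downloaded:

```python
# Illustrative sketch only -- not the exact Macrodata Refinement pipeline.
# Requires: pip install trafilatura fasttext
import trafilatura
import fasttext

# Path to a downloaded fastText language-ID model (assumption).
lid_model = fasttext.load_model("lid.176.bin")

def extract_english_text(html: str, min_confidence: float = 0.65) -> str | None:
    """Extract the main content from raw HTML and keep it only if it looks English."""
    text = trafilatura.extract(html)  # main-content extraction, boilerplate dropped
    if not text:
        return None
    # fastText expects a single line of text; labels look like "__label__en".
    labels, scores = lid_model.predict(text.replace("\n", " "))
    if labels[0] == "__label__en" and scores[0] >= min_confidence:
        return text
    return None
```
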
After this first preprocessing stage, we filter data using heuristics from MassiveWeb, and our own line-wise corrections.

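For illustration only, document-level heuristics in the spirit of the MassiveWeb rules might look like the toy filter below; the thresholds are placeholders, not the settings used to build RefinedWeb:

```python
# Toy document-level quality heuristics in the spirit of the MassiveWeb rules.
# All thresholds are placeholders, not the settings used to build RefinedWeb.
def passes_quality_heuristics(text: str) -> bool:
    words = text.split()
    if not words or not (50 <= len(words) <= 100_000):    # document length bounds
        return False
    mean_word_length = sum(len(w) for w in words) / len(words)
    if not (3 <= mean_word_length <= 10):                  # implausible average word length
        return False
    lines = [line for line in text.splitlines() if line.strip()]
    bullet_lines = sum(line.lstrip().startswith(("-", "*", "•")) for line in lines)
    if lines and bullet_lines / len(lines) > 0.9:          # page is mostly bullet points
        return False
    symbol_ratio = sum(text.count(c) for c in "#…") / len(words)
    if symbol_ratio > 0.1:                                 # too many hash/ellipsis symbols
        return False
    return True
```
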
Finally, we run extensive deduplication, removing URLs revisited across dumps, and then performing fuzzy deduplication followed by exact substring deduplication.

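As a rough sketch of the fuzzy stage only, near-duplicate detection with MinHash could be done as below, using the `datasketch` library; shingle size, number of permutations, and the similarity threshold are illustrative, not the settings used for RefinedWeb:

```python
# Minimal MinHash near-duplicate detection using the datasketch library.
# Parameters are illustrative only.
from datasketch import MinHash, MinHashLSH

def minhash_signature(text: str, num_perm: int = 128, shingle_size: int = 5) -> MinHash:
    """Build a MinHash signature over word shingles of a document."""
    words = text.lower().split()
    m = MinHash(num_perm=num_perm)
    for i in range(max(len(words) - shingle_size + 1, 1)):
        shingle = " ".join(words[i:i + shingle_size])
        m.update(shingle.encode("utf-8"))
    return m

# Keep a document only if it is not a near-duplicate of one already kept.
lsh = MinHashLSH(threshold=0.8, num_perm=128)
kept_ids = []
for doc_id, text in enumerate(["first document text ...", "second document text ..."]):
    signature = minhash_signature(text)
    if lsh.query(signature):       # estimated Jaccard similarity above the threshold
        continue
    lsh.insert(str(doc_id), signature)
    kept_ids.append(doc_id)
```
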
## Considerations for Using the Data

Despite our best efforts to filter content that does not qualify as natural language, and to deduplicate documents, our pipeline may let through documents that are erroneous or redundant.

As toxic or biased data is prevalent on the internet, it is likely that our dataset contains such content. Notably, using the Perspective API, we estimated the prevalence of toxic content in the dataset to be similar to that of The Pile.

## Additional Information

### Licensing Information

Apache 2.0.

### Citation Information

Paper coming soon.