---
dataset_info:
  features:
  - name: content
    dtype: string
  - name: url
    dtype: string
  - name: timestamp
    dtype: timestamp[s]
  - name: dump
    dtype: string
  - name: segment
    dtype: string
  - name: image_urls
    sequence:
      sequence: string
  splits:
  - name: train
    num_bytes: 2766953721769
    num_examples: 968000015
  download_size: 466888198663
  dataset_size: 2766953721769
license: odc-by
task_categories:
- text-generation
language:
- en
pretty_name: Falcon RefinedWeb
size_categories:
- 100B<n<1T
---

# 📀 Falcon RefinedWeb

**Falcon RefinedWeb is a massive English web dataset built by [TII](https://www.tii.ae) and released under an ODC-By 1.0 license.**

See the 📓 [paper on arXiv](https://arxiv.org/abs/2306.01116) for more details. 

RefinedWeb is built through stringent filtering and large-scale deduplication of CommonCrawl; we found models trained on RefinedWeb to achieve performance in line with or better than models trained on curated datasets, while relying on web data only.

RefinedWeb is also "multimodal-friendly": it contains links and alt texts for images in processed samples.

This public extract should contain 500-650GT depending on the tokenizer you use, and can be enhanced with the curated corpora of your choosing. It is about 500GB to download, and requires 2.8TB of local storage once unpacked.

```python
from datasets import load_dataset
rw = load_dataset("tiiuae/falcon-refinedweb")
```
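
Since the full extract is roughly 500GB to download, streaming the dataset can be convenient for quick inspection or on-the-fly training. The snippet below is a minimal sketch using the standard streaming mode of the `datasets` library; the number of records printed is arbitrary.

```python
from datasets import load_dataset

# Stream records instead of downloading the full ~500GB extract up front
rw = load_dataset("tiiuae/falcon-refinedweb", split="train", streaming=True)

# Peek at the first few records
for i, sample in enumerate(rw):
    print(sample["url"], len(sample["content"]))
    if i == 2:
        break
```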

RefinedWeb is the main dataset we have used for training the [Falcon LLM](https://falconllm.tii.ae) models:

* It was used in conjunction with curated corpora to train Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[40B](https://huggingface.co/tiiuae/falcon-40b), two state-of-the-art open-source models.
* It was also used to train Falcon-RW-[1B](https://huggingface.co/tiiuae/falcon-rw-1b)/[7B](https://huggingface.co/tiiuae/falcon-rw-7b), two models trained on 350 billion tokens of RefinedWeb alone to demonstrate its quality compared to curated corpora.


# Dataset card for Falcon RefinedWeb

## Dataset Description

* **Homepage:** [falconllm.tii.ae](https://falconllm.tii.ae)
* **Paper:** [https://arxiv.org/abs/2306.01116](https://arxiv.org/abs/2306.01116)
* **Point of Contact:** [falconllm@tii.ae](mailto:falconllm@tii.ae)

### Dataset Summary

Falcon RefinedWeb was created to serve as a large-scale English dataset for the pretraining of large language models. It may be used on its own, or augmented with curated sources (e.g., Wikipedia, StackOverflow).

It was built on top of CommonCrawl, leveraging stringent filtering and extensive deduplication.

### Dataset Visualization
Click the [Nomic Atlas](https://atlas.nomic.ai/map/refined_web) map below to visualize a subsample of 10 million documents from RefinedWeb.

<a href="https://atlas.nomic.ai/map/refined_web">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/6480c476cacb1c4a0696eeb8/HKNKUloDBH9xVcj6MJGv4.webp" alt="Nomic-Atlas RefinedWeb Map" width="30%"/>
</a>

### Supported Tasks and Leaderboards

RefinedWeb is intended to be used primarily as a pretraining dataset for large language models. Practitioners may leverage it for upstream evaluation with a validation loss, but we do not provide any canonical split.

### Languages

RefinedWeb primarily contains English.


## Dataset Structure

### Data Instances

Each data instance corresponds to an individual web page which has been crawled, processed, and deduplicated against all other instances.

This public extract of RefinedWeb contains about 1B instances (968M individual web pages), for a total of 2.8TB of clean text data. 

### Data Fields

* `content`: the processed and cleaned text contained in the page;
* `url`: the url of the webpage crawled to produce the sample;
* `timestamp`: timestamp of when the webpage was crawled by CommonCrawl;
* `dump`: the CommonCrawl dump the sample is a part of;
* `segment`: the CommonCrawl segment the sample is a part of;
* `image_urls`: a list of [`image_url`, `image_alt_text`] pairs for all the images found in the content of the sample (see the sketch below).
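
As a rough illustration of the schema above, here is how one might access these fields on a single record; the streaming call is only used here to avoid downloading the full extract.

```python
from datasets import load_dataset

rw = load_dataset("tiiuae/falcon-refinedweb", split="train", streaming=True)
sample = next(iter(rw))

print(sample["dump"], sample["segment"], sample["timestamp"])
print(sample["url"])
print(sample["content"][:200])  # first characters of the cleaned text

# image_urls is a list of [image_url, image_alt_text] pairs
for image_url, image_alt_text in sample["image_urls"]:
    print(image_url, image_alt_text)
```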

### Data Splits

We do not provide any canonical splits for RefinedWeb.


## Dataset Creation

### Curation Rationale

Falcon RefinedWeb is built on top of [CommonCrawl](https://commoncrawl.org), using the Macrodata Refinement Pipeline, which combines content extraction, filtering heuristics, and deduplication.

In designing RefinedWeb, we abided by the following philosophy:

* (1) **Scale first.** We intend MDR to produce datasets to be used to train 40-200B parameter models, thus requiring trillions of tokens [(Hoffmann et al., 2022)](https://arxiv.org/abs/2203.15556). For English-only RefinedWeb, we target a size of 3-6 trillion tokens. Specifically, we eschew any labour-intensive human curation process, and focus on CommonCrawl instead of disparate single-domain sources.
* (2) **Strict deduplication.** Inspired by the work of [Lee et al., 2021](https://arxiv.org/abs/2107.06499), which demonstrated the value of deduplication for large language models, we implement a rigorous deduplication pipeline. We combine both exact and fuzzy deduplication, and use strict settings leading to removal rates far higher than other datasets have reported.
* (3) **Neutral filtering.** To avoid introducing further undesirable biases into the model, we avoid using ML-based filtering outside of language identification ([Dodge et al., 2021](https://arxiv.org/abs/2104.08758); [Welbl et al., 2021](https://arxiv.org/abs/2109.07445)). We stick to simple rules and heuristics, and use only URL filtering for adult content.

During its development, we iterated on RefinedWeb by measuring the zero-shot performance of models trained on development versions of the dataset. Our main goal was to maximize the performance obtained, bridging the gap between curated and web data. We also manually audited samples to identify potential filtering improvements.

### Source Data

RefinedWeb is built from [CommonCrawl](https://commoncrawl.org) dumps. These dumps are constructed by crawling publicly available web pages.

### Data Collection and Preprocessing

We applied extensive preprocessing and cleaning of the data, using our Macrodata Refinement Pipeline. 

We first filter URLs to remove adult content using a blocklist and a scoring system; we then use `trafilatura` to extract content from pages, and perform language identification with the `fastText` classifier from CCNet ([Wenzek et al., 2019](https://arxiv.org/abs/1911.00359)). After this first preprocessing stage, we filter data using heuristics from MassiveWeb ([Rae et al., 2021](https://arxiv.org/abs/2112.11446)), and our own line-wise corrections.
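
To make the extraction and language-identification steps more concrete, the sketch below shows what they can look like with off-the-shelf tools (`trafilatura` and a fastText `lid.176.bin` language-ID model). This is only an illustration of the type of processing involved, not the actual MDR implementation; the confidence threshold is a placeholder, and the fastText model file is assumed to have been downloaded separately.

```python
import trafilatura
import fasttext

# fastText language identification model (lid.176.bin downloaded separately)
lid_model = fasttext.load_model("lid.176.bin")

def extract_english_text(html: str, min_confidence: float = 0.65):
    """Extract the main content from raw HTML and keep it only if classified as English."""
    text = trafilatura.extract(html)
    if not text:
        return None
    labels, scores = lid_model.predict(text.replace("\n", " "))
    if labels[0] == "__label__en" and scores[0] >= min_confidence:
        return text
    return None
```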

Finally, we run extensive deduplication, removing URLs revisited across dumps and subsequently performing fuzzy and exact substring deduplication.
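
As a small-scale illustration of fuzzy deduplication, the sketch below uses MinHash signatures with locality-sensitive hashing via the `datasketch` library. The shingle size and similarity threshold are placeholder values, and this is not the tooling used for RefinedWeb itself, which operates at a far larger scale.

```python
from datasketch import MinHash, MinHashLSH

def minhash_of(text: str, num_perm: int = 128) -> MinHash:
    """Build a MinHash signature from word 5-grams of a document."""
    m = MinHash(num_perm=num_perm)
    words = text.split()
    for i in range(max(len(words) - 4, 1)):
        shingle = " ".join(words[i:i + 5])
        m.update(shingle.encode("utf-8"))
    return m

# Index documents and flag near-duplicates above a Jaccard similarity threshold
lsh = MinHashLSH(threshold=0.8, num_perm=128)
documents = {"doc1": "some web page text ...", "doc2": "some web page text ..."}
for doc_id, text in documents.items():
    signature = minhash_of(text)
    duplicates = lsh.query(signature)  # previously indexed near-duplicates
    if not duplicates:
        lsh.insert(doc_id, signature)
```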

### Annotations

We provide automatically collected annotations for the source `url`, `timestamp` of the crawl, original CommonCrawl `dump` and `segment` in which the document was found, and `image_urls` contained in the page.


### Personal and Sensitive Information

As RefinedWeb is built upon publicly available web pages, it may contain sensitive information such as emails, phone numbers, or IP addresses. We believe that deduplication may have helped reduce the prevalence of PII in the dataset, but practitioners working with RefinedWeb should take care.

## Considerations for Using the Data

### Social Impact of Dataset

With the open-source release of Falcon RefinedWeb, we aim to increase access to high-quality web data, which has typically been held private by model developers. We believe this release will in turn improve the accessibility and the spread of performant large language models.  

### Discussion of Biases

As toxic or biased data is prevalent on the internet, it is likely our dataset contains such content. Notably, using the Perspective API, we estimated the prevalence of toxic content in the dataset to be similar to that of The Pile.

### Other Known Limitations

Despite our best efforts to filter content that does not qualify as natural language, and to deduplicate documents, our pipeline may let through documents that may be considered errors or redundant.

## Additional Information

### Licensing Information

This public extract is made available under an [ODC-By 1.0](https://opendatacommons.org/licenses/by/1-0/) license; users should also abide by the [CommonCrawl ToU](https://commoncrawl.org/terms-of-use/).

### Citation Information

```
@article{refinedweb,
  title={The {R}efined{W}eb dataset for {F}alcon {LLM}: outperforming curated corpora with web data, and web data only},
  author={Guilherme Penedo and Quentin Malartic and Daniel Hesslow and Ruxandra Cojocaru and Alessandro Cappelli and Hamza Alobeidli and Baptiste Pannier and Ebtesam Almazrouei and Julien Launay},
  journal={arXiv preprint arXiv:2306.01116},
  eprint={2306.01116},
  eprinttype = {arXiv},
  url={https://arxiv.org/abs/2306.01116},
  year={2023}
}
```

### Opt-out request

RefinedWeb is based on [CommonCrawl](https://commoncrawl.org/). Their crawler honors opt-out requests in `robots.txt`; see the [CC FAQ](https://commoncrawl.org/big-picture/frequently-asked-questions/) for details.

To remove a document from RefinedWeb, please message falconllm@tii.ae. 

### Contact
falconllm@tii.ae