harshit-gupta committed
Commit 50f2340 · Parent(s): d4d0c8a
Update README.md
README.md
CHANGED
@@ -69,11 +69,11 @@ It was built on top of CommonCrawl, leveraging stringent filtering and extensive
 
 ### Supported Tasks and Leaderboards
 
-RefinedWeb is intended to be
+RefinedWeb is intended to be primarily used as a pretraining dataset for large language models. Practitioners may leverage it for upstream evaluation with a validation loss, but we do not provide any canonical split.
 
 ### Languages
 
-RefinedWeb
+RefinedWeb primarily contains English.
 
 
 ## Dataset Structure
@@ -120,7 +120,7 @@ RefinedWeb is built from [CommonCrawl](https://commoncrawl.org) dumps. These dum
 
 We applied extensive preprocessing and cleaning of the data, using our Macrodata Refinement Pipeline.
 
-We first filter URLs to remove adult content using a blocklist and a
+We first filter URLs to remove adult content using a blocklist and a scoring system; we then use `trafilatura` to extract content from pages and perform language identification with the `fastText` classifier from CCNet ([Wenzek et al., 2019](https://arxiv.org/abs/1911.00359)). After this first preprocessing stage, we filter data using heuristics from MassiveWeb ([Rae et al., 2021](https://arxiv.org/abs/2112.11446)) and our own line-wise corrections.
 
 Finally, we run extensive deduplication, removing URLs revisited across dumps and subsequently performing fuzzy and exact substring deduplication.
 
@@ -169,7 +169,7 @@ This public extract is made available under an [ODC-By 1.0](https://opendatacomm
 
 ### Opt-out request
 
-RefinedWeb is based on [CommonCrawl](https://commoncrawl.org/). Their crawler honours opt-out requests in the `robots.txt`; see the [CC FAQ](https://commoncrawl.org/big-picture/frequently-asked-questions/) for details.
 
 To remove a document from RefinedWeb, please message falconllm@tii.ae.
 
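The "blocklist and a scoring system" URL filter described in the new text could be sketched roughly as below. This is a minimal illustration only: the blocklist entries, keyword weights, and threshold here are invented for the example, not the actual values or logic of the Macrodata Refinement Pipeline.

```python
from urllib.parse import urlparse

# Hypothetical blocklist and keyword weights -- the real pipeline uses a
# curated adult-content blocklist and its own scoring scheme.
BLOCKLIST = {"badsite.example"}
KEYWORD_WEIGHTS = {"casino": 2.0, "torrent": 1.0}
SCORE_THRESHOLD = 2.0

def keep_url(url: str) -> bool:
    """Return True if the URL passes both the blocklist and the score filter."""
    parsed = urlparse(url)
    # Hard filter: drop any URL whose domain is on the blocklist.
    if parsed.netloc in BLOCKLIST:
        return False
    # Soft filter: score the URL by weighted keyword matches.
    score = sum(w for kw, w in KEYWORD_WEIGHTS.items() if kw in url.lower())
    return score < SCORE_THRESHOLD

print(keep_url("https://example.com/article"))   # True
print(keep_url("https://badsite.example/page"))  # False
```

The two-stage shape (a hard domain blocklist plus a softer score) mirrors the description in the diff; everything else is a stand-in.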
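The deduplication stage mentioned in the diff (fuzzy deduplication followed by exact substring deduplication) can be approximated with word-shingle Jaccard similarity. This is a toy sketch: the shingle size and similarity threshold are illustrative assumptions, and a web-scale pipeline would use MinHash/LSH rather than the pairwise comparison shown here.

```python
def shingles(text: str, n: int = 5) -> set:
    """Word-level n-gram shingles used for fuzzy (Jaccard) matching."""
    words = text.split()
    return {" ".join(words[i:i + n]) for i in range(max(len(words) - n + 1, 1))}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the shingle sets of two documents."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def fuzzy_dedup(docs, threshold=0.8):
    """Keep a document only if it is not a near-duplicate of one already kept."""
    kept = []
    for doc in docs:
        if all(jaccard(doc, k) < threshold for k in kept):
            kept.append(doc)
    return kept
```

Exact substring deduplication, the second step the diff mentions, is typically handled separately with suffix-array-style methods rather than shingles; this sketch covers only the fuzzy part.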
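The opt-out mechanism described in the new text, a site disallowing CommonCrawl's crawler via `robots.txt`, can be checked with Python's standard `urllib.robotparser`. The `robots.txt` content below is a made-up example of such an opt-out; `CCBot` is CommonCrawl's crawler user agent.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt opting the whole site out of CommonCrawl's crawler.
robots_txt = """\
User-agent: CCBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

print(parser.can_fetch("CCBot", "https://example.com/page"))     # False
print(parser.can_fetch("OtherBot", "https://example.com/page"))  # True
```

A crawler that honours `robots.txt`, as the diff says CommonCrawl's does, would skip the site entirely in this configuration, while agents not named in the file remain unaffected.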