slippylolo committed a1dee95 (1 parent: 22c7e4e)

Update technical dataset information

Files changed (1): README.md (+13 -4)
README.md CHANGED
@@ -38,7 +38,9 @@ size_categories:

RefinedWeb is built through stringent filtering and large-scale deduplication of CommonCrawl; we found models trained on RefinedWeb to achieve performance in line with or better than models trained on curated datasets, while only relying on web data.

- This public extract should contain 500-650GT depending on the tokenizer you use, and can be enhanced with the curated corpora of your choosing.
+ RefinedWeb is also "multimodal-friendly": it contains links and alt texts for images in processed samples.
+
+ This public extract should contain 500-650GT depending on the tokenizer you use, and can be enhanced with the curated corpora of your choosing. It is about 500GB to download and requires 2.8TB of local storage once unpacked.

```python
from datasets import load_dataset
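Given the sizes quoted in the hunk above (~500GB download, 2.8TB unpacked), streaming is often the more practical way to consume the extract. Below is a minimal sketch using Hugging Face `datasets` streaming mode; the repo id `tiiuae/falcon-refinedweb` and the `train` split name are assumptions, not stated in this diff:

```python
from datasets import load_dataset

# Streaming avoids materialising the ~2.8TB extract on local disk:
# shards are fetched lazily over the network as you iterate.
# NOTE: the repo id and split name below are assumptions for illustration.
rw = load_dataset("tiiuae/falcon-refinedweb", split="train", streaming=True)

# Peek at a few samples without downloading the full dataset.
for sample in rw.take(3):
    print(sample["url"])
    print(sample["content"][:200])
```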
 
@@ -78,15 +80,22 @@ RefinedWeb primarily contains English.

### Data Instances

- [More Information Needed]
+ Each data instance corresponds to an individual web page which has been crawled, processed, and deduplicated against all other instances.
+
+ This public extract of RefinedWeb contains about 1B instances (968M individual web pages), for a total of 2.8TB of clean text data.

### Data Fields

- [More Information Needed]
+ * `content`: the processed and cleaned text contained in the page;
+ * `url`: the URL of the webpage crawled to produce the sample;
+ * `timestamp`: the timestamp at which the webpage was crawled by CommonCrawl;
+ * `dump`: the CommonCrawl dump the sample is a part of;
+ * `segment`: the CommonCrawl segment the sample is a part of;
+ * `image_urls`: a list of [`image_url`, `image_alt_text`] pairs for all the images found in the content of the sample.

### Data Splits

- [More Information Needed]
+ We do not provide any canonical splits for RefinedWeb.


## Dataset Creation
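To make the new `Data Fields` section concrete, here is a small sketch of reading the documented fields, including the [`image_url`, `image_alt_text`] pairs that make the dataset "multimodal-friendly". The sample values are invented placeholders and `extract_images` is a hypothetical helper, not part of any released tooling:

```python
# Hypothetical helper: unpack [image_url, image_alt_text] pairs from a
# sample's `image_urls` field, following the schema documented above.
def extract_images(sample):
    return [(image_url, alt_text) for image_url, alt_text in sample["image_urls"]]

# Invented placeholder sample mirroring the documented fields.
sample = {
    "content": "A cleaned page about cats...",
    "url": "https://example.com/cats",
    "timestamp": "2023-02-20T12:00:00Z",  # value format is an assumption
    "dump": "CC-MAIN-2023-06",            # illustrative dump name
    "segment": "1674764494826.88",        # illustrative segment id
    "image_urls": [["https://example.com/cat.jpg", "A sleeping cat"]],
}

for image_url, alt_text in extract_images(sample):
    print(image_url, "->", alt_text)
```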
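Since the diff states that no canonical splits are provided, any held-out set has to be carved out by the user. A minimal sketch with `Dataset.train_test_split`; the repo id and the slice size are assumptions, and note that this method requires a map-style (non-streaming) dataset:

```python
from datasets import load_dataset

# Load a map-style slice (assumed repo id; the slice keeps the example small).
rw = load_dataset("tiiuae/falcon-refinedweb", split="train[:100000]")

# Carve out 1% as a held-out set; the seed makes the split reproducible.
splits = rw.train_test_split(test_size=0.01, seed=42)
print(splits["train"].num_rows, splits["test"].num_rows)
```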