Our evaluations demonstrate that GneissWeb outperforms state-of-the-art large open datasets (5T+ tokens). Specifically, ablation models trained on GneissWeb outperform those trained on FineWeb.V1.1 by 2.14 percentage points in terms of the average score computed on a set of 11 benchmarks (both zero-shot and few-shot) commonly used to evaluate pre-training datasets. When the evaluation set is extended to 20 benchmarks (both zero-shot and few-shot), ablation models trained on GneissWeb outperform those trained on FineWeb.V1.1 by 1.49 percentage points. In the future, we plan to release a detailed technical paper with fine-grained details, together with the IBM Data Prep Kit pipeline used to create the GneissWeb dataset.
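For intuition, a toy example of how such a comparison is computed: the average score is taken as the (presumably unweighted) mean over the benchmark suite, and the gap is reported in percentage points. The scores below are made-up placeholders, not the actual evaluation results.

```python
# Made-up benchmark accuracies (%); the real evaluation uses 11 (or 20) benchmarks.
gneissweb_scores = [62.0, 48.0, 71.5]
fineweb_scores = [60.0, 47.0, 68.5]

def average(scores):
    return sum(scores) / len(scores)

# Difference of the two averages, expressed in percentage points.
delta_pp = average(gneissweb_scores) - average(fineweb_scores)
print(f"GneissWeb - FineWeb = {delta_pp:.2f} percentage points")
```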
**The GneissWeb Recipe in a Nutshell: Building on Top of FineWeb**

Hugging Face introduced FineWeb V1.1, a large-scale dataset for LLM pre-training consisting of 15 trillion tokens (44 TB of disk space). FineWeb is derived from 96 Common Crawl snapshots, focusing on English text by applying a series of processing steps, mainly language classification, deduplication, and heuristic rule-based quality filters. Models trained on FineWeb have been shown to outperform those trained on other publicly available datasets, such as C4, RefinedWeb, Dolma, RedPajama-V2, SlimPajama, and The Pile.
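For intuition only, here is a minimal sketch of the language-classification step such a pipeline applies, using the public fastText language-identification model. The model file name and the 0.65 confidence cutoff are illustrative assumptions, not FineWeb's exact configuration.

```python
import fasttext  # assumes the fasttext Python package is installed

# Public fastText language-identification model (lid.176.bin from fasttext.cc).
lid_model = fasttext.load_model("lid.176.bin")

def is_english(text: str, min_confidence: float = 0.65) -> bool:
    """Return True if the top predicted language is English with sufficient confidence."""
    # fastText's predict() does not accept newlines, so collapse them to spaces first.
    labels, probs = lid_model.predict(text.replace("\n", " "), k=1)
    return labels[0] == "__label__en" and float(probs[0]) >= min_confidence
```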
We started with the goal of distilling 10T+ high-quality tokens from FineWeb V1.1, so that we obtain a sufficiently large number of quality tokens suitable for Stage-1 pre-training. Unlike the FineWeb.Edu families, which rely on a single quality annotator and perform aggressive filtering, we developed a multi-faceted ensemble of quality annotators to enable fine-grained quality filtering. This allowed us to achieve a finer trade-off between the quality and quantity of the tokens retained. While the GneissWeb recipe is focused on obtaining 10T+ high-quality tokens suitable for Stage-1 pre-training, it is also possible to adapt the recipe by tuning the filtering parameters to produce smaller, higher-quality datasets fit for Stage-2 training.
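To make the quality-versus-quantity trade-off concrete, here is a minimal sketch of how an ensemble of quality annotators could be combined. The annotator interface, the voting rule, and the thresholds are hypothetical placeholders, not the released GneissWeb ensemble.

```python
from typing import Callable, Dict, List

# Hypothetical interface: an annotator maps a document to a quality score in [0, 1].
Annotator = Callable[[str], float]

def keep_document(doc: str,
                  annotators: Dict[str, Annotator],
                  thresholds: Dict[str, float],
                  min_votes: int) -> bool:
    """Keep a document if at least `min_votes` annotators score it above their threshold."""
    votes = sum(1 for name, score in annotators.items() if score(doc) >= thresholds[name])
    return votes >= min_votes

def filter_corpus(docs: List[str],
                  annotators: Dict[str, Annotator],
                  thresholds: Dict[str, float],
                  min_votes: int) -> List[str]:
    """Loose thresholds / few required votes retain more tokens (Stage-1 style);
    tight thresholds / more votes yield a smaller, higher-quality subset (Stage-2 style)."""
    return [doc for doc in docs if keep_document(doc, annotators, thresholds, min_votes)]
```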
**An Overview of the GneissWeb Recipe**

The GneissWeb dataset was obtained by applying the following processing steps to FineWeb (a rough end-to-end sketch is given after Figure 1):

- Exact substring deduplication at line level
- Custom-built fastText quality filter
- Custom-built fastText category classifier
- Custom-built category-aware readability score quality filter
- Custom-built category-aware extreme_tokenized quality filter

These were applied in the order shown in Figure 1.

Figure 1: GneissWeb recipe
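The following is a minimal, illustrative sketch of the five steps in the order of Figure 1. The model file names, label names, thresholds, and the exact readability metric are assumptions made for illustration; they are not the released GneissWeb parameters.

```python
import hashlib
import re
from typing import Callable, Optional, Set

import fasttext  # assumes the fasttext Python package is installed

# Hypothetical model files standing in for the custom-built classifiers.
QUALITY_MODEL = fasttext.load_model("quality_filter.bin")
CATEGORY_MODEL = fasttext.load_model("category_classifier.bin")


def dedup_lines(doc: str, seen_hashes: Set[str]) -> str:
    """Step 1: exact substring deduplication at line level -- drop lines already seen."""
    kept = []
    for line in doc.splitlines():
        digest = hashlib.sha256(line.strip().encode("utf-8")).hexdigest()
        if digest not in seen_hashes:
            seen_hashes.add(digest)
            kept.append(line)
    return "\n".join(kept)


def quality_score(doc: str) -> float:
    """Step 2: fastText quality filter, assuming a binary __label__hq / __label__lq model."""
    labels, probs = QUALITY_MODEL.predict(doc.replace("\n", " "), k=1)
    return float(probs[0]) if labels[0] == "__label__hq" else 1.0 - float(probs[0])


def category(doc: str) -> str:
    """Step 3: fastText category classifier (category names are placeholders)."""
    labels, _ = CATEGORY_MODEL.predict(doc.replace("\n", " "), k=1)
    return labels[0].replace("__label__", "")


def readability(doc: str) -> float:
    """Step 4 helper: a simple McAlpine-EFLAW-style score, (words + short words) / sentences.
    Lower means easier to read; the metric actually used by the recipe is an assumption here."""
    words = re.findall(r"[A-Za-z']+", doc)
    sentences = max(1, len(re.findall(r"[.!?]", doc)))
    short_words = sum(1 for w in words if len(w) <= 3)
    return (len(words) + short_words) / sentences


def tokens_per_char(doc: str, tokenize: Callable[[str], list]) -> float:
    """Step 5 helper: token-to-character ratio used to flag extreme_tokenized documents."""
    return len(tokenize(doc)) / max(1, len(doc))


def process(doc: str, seen_hashes: Set[str], tokenize: Callable[[str], list],
            quality_threshold: float = 0.9,
            readability_threshold_by_category: Optional[dict] = None,
            tpc_bounds: tuple = (0.1, 0.4)) -> Optional[str]:
    """Apply the five steps in order; return the deduplicated doc, or None if filtered out."""
    thresholds = readability_threshold_by_category or {"default": 30.0}
    doc = dedup_lines(doc, seen_hashes)
    if not doc or quality_score(doc) < quality_threshold:
        return None
    cat = category(doc)
    # Category-aware: the readability cutoff depends on the predicted category.
    if readability(doc) > thresholds.get(cat, thresholds["default"]):
        return None
    low, high = tpc_bounds
    if not (low <= tokens_per_char(doc, tokenize) <= high):
        return None
    return doc
```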
The net impact was that the dataset size of 15T tokens was filtered down to approximately 10T tokens. In subsequent sections, we describe the overall performance obtained using GneissWeb compared to other baselines, then dive deeper into each of these processing steps and the impact each has individually, as measured through a series of ablations.
**Dataset Summary**

Recently, IBM introduced GneissWeb, a large dataset yielding around 10 trillion tokens that caters to the data quality and quantity requirements of training LLMs. Models trained using the GneissWeb dataset outperform those trained on FineWeb 1.1.0 by 2.14 percentage points in terms of the average score computed on a set of 11 commonly used benchmarks.