The GneissWeb recipe consists of sharded exact substring deduplication and a judiciously constructed ensemble of quality filters. We present the key evaluations that guided our design choices and provide filtering thresholds that can be used to filter the dataset to match the token and quality needs of Stage-1 (early pre-training) or Stage-2 (annealing) datasets.
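For intuition, here is a toy sketch of the sharded exact substring deduplication step. At scale this is commonly implemented with suffix arrays that match duplicated substrings at arbitrary offsets; the simplification below only matches fixed-length, aligned character windows, and the window size and sharding scheme are illustrative choices, not the pipeline's actual parameters.

```python
# Toy sketch of sharded exact substring deduplication. Real pipelines find
# duplicated substrings at arbitrary offsets (commonly via suffix arrays);
# this simplification only matches fixed-length, aligned character windows.
# WINDOW and the sharding scheme are illustrative, not the pipeline's values.

from typing import List

WINDOW = 50  # minimum length (in characters) of a span treated as a duplicate

def dedup_shard(docs: List[str]) -> List[str]:
    """Remove windows already seen earlier within the same shard."""
    seen = set()
    deduped = []
    for doc in docs:
        kept = []
        for i in range(0, len(doc), WINDOW):
            chunk = doc[i:i + WINDOW]
            if len(chunk) == WINDOW and chunk in seen:
                continue  # exact repeat of an earlier span: drop it
            seen.add(chunk)
            kept.append(chunk)
        deduped.append("".join(kept))
    return deduped

def dedup_corpus(docs: List[str], num_shards: int = 4) -> List[str]:
    # Sharding keeps each dedup job small enough to fit in memory;
    # duplicates that straddle two shards are intentionally left alone.
    shards = [docs[s::num_shards] for s in range(num_shards)]
    return [doc for shard in shards for doc in dedup_shard(shard)]
```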
Our evaluations demonstrate that GneissWeb outperforms state-of-the-art large open datasets (5T+ tokens). Specifically, ablation models trained on GneissWeb outperform those trained on FineWeb.V1.1.0 by 2.14 percentage points in average score across a set of 11 benchmarks (both zero-shot and few-shot) commonly used to evaluate pre-training datasets. When the evaluation set is extended to 20 benchmarks (both zero-shot and few-shot), ablation models trained on GneissWeb outperform those trained on FineWeb.V1.1.0 by 1.49 percentage points.
**The GneissWeb Recipe in a Nutshell: Building on Top of FineWeb**
Hugging Face introduced FineWeb V1.1.0, a large-scale dataset for LLM pre-training consisting of 15 trillion tokens (44TB of disk space). We started with the goal of distilling 10T+ high-quality tokens from FineWeb V1.1.0, so that we obtain a sufficiently large number of quality tokens for Stage-1 pre-training. Unlike the FineWeb.Edu families, which rely on a single quality annotator and perform aggressive filtering, we developed a multi-faceted ensemble of quality annotators to enable fine-grained quality filtering. This allowed us to achieve a finer trade-off between the quality and quantity of the tokens retained. While the GneissWeb recipe is focused on obtaining 10T+ high-quality tokens suitable for Stage-1 pre-training, it can also be adapted, by tuning the filtering parameters, to produce smaller, higher-quality datasets suitable for Stage-2 (annealing) training.
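To illustrate how an ensemble of annotators can be tuned for quantity versus quality, here is a minimal sketch; the annotator names, score semantics, and threshold values are hypothetical placeholders, not GneissWeb's released annotators or cutoffs.

```python
# Minimal sketch of an ensemble quality filter with tunable thresholds.
# The annotator names, score semantics, and threshold values below are
# hypothetical placeholders, not GneissWeb's released annotators or cutoffs.

from dataclasses import dataclass

@dataclass
class Annotations:
    quality_prob: float        # e.g., a fastText-style classifier's quality score
    readability: float         # e.g., a readability grade (lower = easier)
    extreme_token_frac: float  # fraction of tokens flagged as anomalous

def keep_document(a: Annotations,
                  quality_min: float = 0.5,
                  readability_max: float = 14.0,
                  extreme_max: float = 0.10) -> bool:
    """A document survives only if every annotator in the ensemble passes."""
    return (a.quality_prob >= quality_min
            and a.readability <= readability_max
            and a.extreme_token_frac <= extreme_max)

# Loosening the thresholds retains more tokens (a Stage-1-style 10T+ dataset);
# tightening them trades quantity for quality (a smaller Stage-2-style set).
def stage2_keep(a: Annotations) -> bool:
    return keep_document(a, quality_min=0.8, readability_max=10.0,
                         extreme_max=0.05)
```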
**An Overview of the GneissWeb Recipe**