Commit 0e014d0 (verified) by bhatta1 · 1 parent: 10efc05

Update README.md

Files changed (1): README.md (+7 −7)
README.md CHANGED

@@ -12,19 +12,19 @@ Hugging Face introduced FineWeb V1.1, a large-scale dataset for LLM pre-training
 
 We started with the goal of distilling 10T+ high-quality tokens from FineWeb V1.1, so that we get a sufficiently large number of quality tokens suitable for Stage-1 pre-training. Unlike the FineWeb.Edu families, which rely on a single quality annotator and perform aggressive filtering, we developed a multi-faceted ensemble of quality annotators to enable fine-grained quality filtering. This allowed us to achieve a finer trade-off between the quality and quantity of the tokens retained. While the GneissWeb recipe is focused on obtaining 10T+ high-quality tokens suitable for Stage-1 pre-training, the recipe can also be adapted, by tuning the filtering parameters, to produce smaller and higher-quality datasets fit for Stage-2 training.
 
-An Overview of the GneissWeb Recipe
+**An Overview of the GneissWeb Recipe**
 
-The GneissWeb dataset was obtained by applying the following processing steps to FineWeb:
+The GneissWeb dataset was obtained by applying the following processing steps to FineWeb:
 
-Exact substring deduplication at line level
+  - Exact substring deduplication at line level
 
-Custom-built fastText quality filter
+  - Custom-built fastText quality filter
 
-Custom-built fastText category classifier
+  - Custom-built fastText category classifier
 
-Custom-built category-aware readability score quality filter
+  - Custom-built category-aware readability score quality filter
 
-Custom-built category-aware extreme_tokenized quality filter
+  - Custom-built category-aware extreme_tokenized quality filter
 
 These were applied in the order shown in Fig. 1.
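
The diff above only names the processing steps. To make the first one concrete, here is a minimal sketch of what line-level exact substring deduplication can look like: every line in the corpus is hashed, and any line already seen (in this or an earlier document) is dropped. This is an illustration under assumed behavior, not the GneissWeb implementation; the function name and the normalization choice are ours.

```python
import hashlib

def dedup_lines(documents):
    """Drop repeated lines across a corpus, keeping the first occurrence.

    A minimal sketch of line-level exact deduplication: each line is
    hashed, and a line whose hash was already seen anywhere in the
    corpus is removed from the current document.
    """
    seen = set()          # hashes of every line kept so far
    deduped_docs = []
    for doc in documents:
        kept = []
        for line in doc.splitlines():
            key = hashlib.sha1(line.strip().encode("utf-8")).digest()
            if key not in seen:
                seen.add(key)
                kept.append(line)
        deduped_docs.append("\n".join(kept))
    return deduped_docs
```

Hashing lines instead of storing them keeps the seen-set memory roughly constant per line regardless of line length, which matters at FineWeb scale.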
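
The two fastText-based steps, the quality filter and the category classifier, could be wired up as below using the standard `fasttext` Python API. The model file names, label names, and confidence threshold are stand-ins: the actual GneissWeb classifiers are custom built and not published in this diff.

```python
import fasttext  # pip install fasttext

# Hypothetical model files and labels, for illustration only.
quality_model = fasttext.load_model("gneissweb_quality.bin")
category_model = fasttext.load_model("gneissweb_category.bin")

def quality_ok(text: str, threshold: float = 0.9) -> bool:
    """Keep a document only if the quality classifier is confident."""
    # fastText's predict() expects single-line input
    labels, probs = quality_model.predict(text.replace("\n", " "))
    return labels[0] == "__label__hq" and probs[0] >= threshold

def category_of(text: str) -> str:
    """Assign the category consumed by the category-aware filters."""
    labels, _ = category_model.predict(text.replace("\n", " "))
    return labels[0].removeprefix("__label__")
```

Lowering the threshold keeps more tokens at some cost in quality, which is the quality/quantity trade-off the recipe's tunable parameters expose.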
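
The category-aware readability filter can be sketched with an off-the-shelf readability metric such as Flesch reading ease (here via the `textstat` package). Both the metric choice and the per-category cut-off values below are assumptions; the recipe tunes its own category-specific thresholds, and the diff gives no values.

```python
import textstat  # pip install textstat

# Assumed per-category cut-offs. Lower Flesch reading-ease means
# harder text, so a technical category gets a more permissive bound.
MIN_READING_EASE = {"science": 10.0, "default": 30.0}

def readability_ok(text: str, category: str = "default") -> bool:
    """Category-aware readability filter (illustrative thresholds)."""
    score = textstat.flesch_reading_ease(text)
    return score >= MIN_READING_EASE.get(category, MIN_READING_EASE["default"])
```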
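
Finally, the category-aware extreme_tokenized filter can be read as a bound on how many tokens a document produces relative to its length: documents that tokenize to extremely few or extremely many tokens per character are often boilerplate, binary junk, or mis-encoded text. The tokenizer choice and the bounds below are illustrative assumptions, not the recipe's values.

```python
from transformers import AutoTokenizer  # pip install transformers

# "gpt2" is a stand-in; any pre-training tokenizer works for the sketch.
tokenizer = AutoTokenizer.from_pretrained("gpt2")

# Assumed per-category bounds on the tokens-per-character ratio.
TOKEN_RATIO_BOUNDS = {"default": (0.10, 0.60)}

def extreme_tokenized_ok(text: str, category: str = "default") -> bool:
    """Drop documents whose token count is extreme for their length."""
    lo, hi = TOKEN_RATIO_BOUNDS.get(category, TOKEN_RATIO_BOUNDS["default"])
    n_tokens = len(tokenizer.encode(text))
    return lo <= n_tokens / max(len(text), 1) <= hi
```

Making the bounds category-dependent matters because, for example, code-heavy categories legitimately tokenize differently from prose, so a single global bound would over- or under-filter some categories.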