Modalities: Tabular, Text
Formats: parquet
Languages: English
Libraries: Datasets, Dask
loubnabnl committed

Commit 5946453 • 1 Parent(s): 8ff1c2f

Update README.md

Files changed (1): README.md (+10 -14)
README.md CHANGED
@@ -405,7 +405,7 @@ configs:
 
 📚 FineWeb-Edu dataset consists of **1.3T tokens** ([FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu)) and **5.4T tokens** of educational web pages filtered from the 🍷 FineWeb dataset. This is the 5.4 trillion token version.
 
- ### Note: this version uses a lower educational score threshold = 2, which results in more coverage, but lower quality documents.
+ ### Note: this version uses a lower educational score threshold = 2, which results in more documents, but lower quality compared to the 1.3T version. For more details, check the FineWeb [blog post](https://huggingface.co/spaces/HuggingFaceFW/blogpost-fineweb-v1).
 
 To enhance FineWeb's quality, we developed an [educational quality classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier) using annotations generated by Llama3-70B-Instruct. We then used this classifier to retain only the most educational web pages. FineWeb-Edu outperforms FineWeb on popular benchmarks and shows the power of classifiers trained on synthetic data.
 
@@ -475,23 +475,19 @@ We fine-tuned a Bert-like regression model using these annotations, based on [Sn
 
 The classifier is available at: [https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier/ ](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier/)
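If you want to score your own documents with the classifier above, a minimal sketch is shown below. It assumes the checkpoint loads as a standard single-logit regression head through `AutoModelForSequenceClassification`; see the classifier's model card for the exact recommended usage.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: the classifier returns a single logit, interpreted as an
# educational score on a roughly 0-5 scale.
name = "HuggingFaceFW/fineweb-edu-classifier"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

text = "Photosynthesis is the process by which plants convert light into chemical energy."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    score = model(**inputs).logits.squeeze(-1).item()

print(f"educational score: {score:.2f}")  # pages scoring >= 3 go into FineWeb-Edu
```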
- ### Filtering
- We filtered out samples with scores lower than 3. This removed 92% of the dataset, leaving us with 1.2T educational tokens. Our ablation demonstrated that this refined dataset significantly outperforms the original FineWeb dumps and even the best dump, FineWeb-2024-10. To retain more tokens, we also experimented with a less strict threshold of 2 instead of 3. This approach preserved 4.5T tokens and still outperformed the non-filtered dataset.
- TODO: add ablation results
-
- We release these two dataset as [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) and [FineWeb-Edu-score-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-score-2) along with the classifier.
-
- ## Dataset performance evaluation and ablations
-
- We conducted our dataset performance ablations and evaluations by training 1.8B parameters models on 28B tokens and then 350B billion tokens to validate the results.
-
- The detailed configurations for training the models can be found here (TODO).
-
- FineWeb-Edu outperforms FineWeb and other web datasets on all popular benchmarks.
-
- TODO: add barplots & agg_score curves
-
- You will find these models on [this collection](https://huggingface.co/collections/HuggingFaceFW/ablation-models-662457b0d213e8c14fe47f32). The FineWeb-Edu ablation model (trained on 350B tokens) is available at [https://huggingface.co/HuggingFaceFW/ablation-model-fineweb-edu](https://huggingface.co/HuggingFaceFW/ablation-model-fineweb-edu).
+ ### Filtering and results
+ **Note**: You can find more details about the ablations and results in the FineWeb blog post (TODO).
+
+ We investigated the impact of using different thresholds for the filtering and found that threshold 3 gave the best overall results. Although using a threshold higher than 3 improves performance on knowledge- and reasoning-intensive benchmarks, it significantly degrades performance on HellaSwag and PIQA.
+
+ We then built 📚 FineWeb-Edu by filtering out samples with scores lower than 3. This removed 92% of the dataset, leaving us with 1.3T educational tokens. Our ablation demonstrated that this refined dataset surpasses 🍷 FineWeb and all other open web datasets, with remarkable improvements on educational benchmarks such as MMLU, ARC, and OpenBookQA. To retain more tokens, we also experimented with a less strict threshold of 2 instead of 3. While less performant than threshold 3, it still outperformed FineWeb and preserved 5.4T tokens.
+ The plot below compares FineWeb-Edu to other web datasets:
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/hJlyTgDzZpYuxO9LUm0PF.png)
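As a toy illustration of the two cut-offs described above (not the production pipeline, which scores billions of pages), the two releases can be thought of as the same classifier-scored corpus filtered at different thresholds; the `score` column name is an assumption here:

```python
from datasets import Dataset

# Toy stand-in for classifier-scored web pages.
scored = Dataset.from_dict({
    "text": ["intro to photosynthesis", "celebrity gossip", "algebra exercises"],
    "score": [3.6, 0.8, 2.4],
})

# FineWeb-Edu keeps score >= 3; FineWeb-Edu-score-2 relaxes the cut-off to 2.
fineweb_edu = scored.filter(lambda x: x["score"] >= 3)
fineweb_edu_score_2 = scored.filter(lambda x: x["score"] >= 2)

print(len(fineweb_edu), len(fineweb_edu_score_2))  # 1 2
```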
+
+ We release these two datasets as [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) and [FineWeb-Edu-score-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-score-2) along with the [classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier).
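A lightweight way to peek at either release without downloading it in full is streaming with the `datasets` library; using the default config is an assumption here, so check the dataset card for the per-dump configs:

```python
from datasets import load_dataset

# Streaming avoids materializing the multi-terabyte release locally.
fw_edu_2 = load_dataset("HuggingFaceFW/fineweb-edu-score-2", split="train", streaming=True)

first = next(iter(fw_edu_2))
print(sorted(first.keys()))   # inspect the available columns
print(first["text"][:200])    # peek at one educational web page
```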
+
+ You will find all the ablation models in [this collection](https://huggingface.co/collections/HuggingFaceFW/ablation-models-662457b0d213e8c14fe47f32). The FineWeb-Edu ablation model (trained on 350B tokens) is available at [https://huggingface.co/HuggingFaceFW/ablation-model-fineweb-edu](https://huggingface.co/HuggingFaceFW/ablation-model-fineweb-edu).
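To try the FineWeb-Edu ablation model, a short generation sketch follows; it assumes the checkpoint is published in a `transformers`-compatible format (follow the model card if it is not):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "HuggingFaceFW/ablation-model-fineweb-edu"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

# Greedy completion from the 1.8B ablation model trained on 350B FineWeb-Edu tokens.
inputs = tokenizer("Photosynthesis is the process by which", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```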
 
 ## Considerations for Using the Data
  This section is copied from the parent dataset: [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb).