anton-l (HF staff) committed 20b2c9f (1 parent: 6861e17)

Update README.md

Files changed (1): README.md (+2 -2)
README.md CHANGED
@@ -415,7 +415,7 @@ configs:
 
 ## What is it?
 
-πŸ“š FineWeb-Edu dataset consists of **1.3T tokens** and **5.4T tokens** ([FineWeb-Edu-Large](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-large)) of educational web pages filtered from 🍷 FineWeb dataset. This is the 1.3 trillion version.
+πŸ“š FineWeb-Edu dataset consists of **1.3T tokens** and **5.4T tokens** ([FineWeb-Edu-score-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-score-2)) of educational web pages filtered from 🍷 FineWeb dataset. This is the 1.3 trillion version.
 
 To enhance FineWeb's quality, we developed an [educational quality classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier) using annotations generated by LLama3-70B-Instruct. We then used this classifier to retain only the most educational web pages. FineWeb-Edu outperforms FineWeb on popular benchmarks and shows the power of classifiers trained on synthetic data.
 
@@ -499,7 +499,7 @@ The classifier is available at: [https://huggingface.co/HuggingFaceFW/fineweb-ed
 We filtered out samples with scores lower than 3. This removed 92% of the dataset, leaving us with 1.2T educational tokens. Our ablation demonstrated that this refined dataset significantly outperforms the original FineWeb dumps and even the best dump, FineWeb-2024-10. To retain more tokens, we also experimented with a less strict threshold of 2 instead of 3. This approach preserved 4.5T tokens and still outperformed the non-filtered dataset.
 TODO: add ablation results
 
-We release these two dataset as [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) and [FineWeb-Edu-Large](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-large) along with the classifier.
+We release these two dataset as [FineWeb-Edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu) and [FineWeb-Edu-score-2](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-score-2) along with the classifier.
 
 ## Dataset performance evaluation and ablations
 
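
The README text above describes scoring pages with the [educational quality classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier) and keeping those scoring 3 or higher (2 for the score-2 release). Below is a minimal sketch of that scoring step, assuming the classifier follows the standard `transformers` sequence-classification interface and emits a single regression logit; the example text and the inline threshold check are illustrative, not the production filtering pipeline:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Assumption: the classifier exposes one regression logit (an educational
# score roughly on a 0-5 scale), as described on its model card.
model_name = "HuggingFaceFW/fineweb-edu-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)

def edu_score(text: str) -> float:
    """Return the classifier's educational-quality score for one page."""
    inputs = tokenizer(text, return_tensors="pt",
                       truncation=True, padding="longest")
    with torch.no_grad():
        logits = model(**inputs).logits
    return logits.squeeze().item()

page = "Photosynthesis is the process by which plants convert light into energy."
score = edu_score(page)
keep = score >= 3  # FineWeb-Edu keeps score >= 3; the score-2 variant keeps >= 2
print(f"score={score:.2f}, keep={keep}")
```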
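And a similarly hedged sketch of loading the two released datasets with the `datasets` library in streaming mode; the `sample-10BT` subset name and the `score` column are assumptions, so check each dataset card for the configs and columns that actually exist:

```python
from datasets import load_dataset

# Stream the filtered dataset (score >= 3) without downloading all of it.
# The "sample-10BT" subset name is an assumption taken from the hub layout.
fw_edu = load_dataset("HuggingFaceFW/fineweb-edu",
                      name="sample-10BT", split="train", streaming=True)

# The larger, looser-threshold release (score >= 2) lives in a separate repo.
fw_edu_2 = load_dataset("HuggingFaceFW/fineweb-edu-score-2",
                        split="train", streaming=True)

# Peek at a few rows; the "score" column name is an assumption.
for row in fw_edu.take(3):
    print(row["text"][:80], row.get("score"))
```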