The FineWeb Datasets: Decanting the Web for the Finest Text Data at Scale
Abstract
The performance of a large language model (LLM) depends heavily on the quality and size of its pretraining dataset. However, the pretraining datasets for state-of-the-art open LLMs like Llama 3 and Mixtral are not publicly available and very little is known about how they were created. In this work, we introduce FineWeb, a 15-trillion token dataset derived from 96 Common Crawl snapshots that produces better-performing LLMs than other open pretraining datasets. To advance the understanding of how best to curate high-quality pretraining datasets, we carefully document and ablate all of the design choices used in FineWeb, including in-depth investigations of deduplication and filtering strategies. In addition, we introduce FineWeb-Edu, a 1.3-trillion token collection of educational text filtered from FineWeb. LLMs pretrained on FineWeb-Edu exhibit dramatically better performance on knowledge- and reasoning-intensive benchmarks like MMLU and ARC. Along with our datasets, we publicly release our data curation codebase and all of the models trained during our ablation experiments.
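Both datasets are published on the Hugging Face Hub. Below is a minimal, hedged sketch of streaming a small sample with the `datasets` library; the repository id `HuggingFaceFW/fineweb`, the `sample-10BT` subset name, and the record fields are assumptions based on the public dataset card and may differ from the exact configuration names.

```python
# Minimal sketch: streaming a small FineWeb sample with the `datasets` library.
# The repo id, subset name, and record fields below are assumptions; check the
# dataset card on the Hub for the exact configuration names and schema.
from datasets import load_dataset

fineweb = load_dataset(
    "HuggingFaceFW/fineweb",   # assumed repository id (FineWeb-Edu: "HuggingFaceFW/fineweb-edu")
    name="sample-10BT",        # assumed ~10B-token sample subset
    split="train",
    streaming=True,            # avoid downloading the full 15-trillion-token corpus
)

for doc in fineweb.take(3):
    # each record is assumed to carry at least a `text` and a `url` field
    print(doc["url"], len(doc["text"]))
```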
Community
Paper for the latest and most popular pretraining dataset.
Cool paper!
Related recent work to continue reading on top of this work:
- https://huggingface.co/papers/2406.11794 from the amazing @vaishaal and many many co-authors (dataset is also here: https://huggingface.co/datasets/mlfoundations/dclm-baseline-1.0)
- https://huggingface.co/papers/2406.08446 from the AllenAI team @SaveBertAndGpt @OyvindTafjord @baileyk @JesseDodge
A Spaces demo for querying the dataset in SQL:
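As a rough illustration of what such a query might look like, here is a hedged sketch using DuckDB over the Parquet files published on the Hub; the `hf://` path, subset folder, and column names are assumptions and not the Space's actual implementation.

```python
# Hedged sketch: querying a FineWeb sample with SQL via DuckDB.
# The hf:// path, subset folder, and column names are assumptions; check the
# dataset card for the real Parquet layout before running.
import duckdb

con = duckdb.connect()
con.execute("INSTALL httpfs")  # remote reads; recent DuckDB versions may autoload this
con.execute("LOAD httpfs")

result = con.sql("""
    SELECT dump,
           COUNT(*)         AS n_docs,
           AVG(token_count) AS avg_tokens
    FROM 'hf://datasets/HuggingFaceFW/fineweb/sample/10BT/*.parquet'
    GROUP BY dump
    ORDER BY n_docs DESC
""").df()
print(result)
```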
@librarian-bot recommend
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- SWEb: A Large Web Dataset for the Scandinavian Languages (2024)
- InfiMM-WebMath-40B: Advancing Multimodal Pre-Training for Enhanced Mathematical Reasoning (2024)
- CCI3.0-HQ: a large-scale Chinese dataset of high quality designed for pre-training large language models (2024)
- Multilingual Pretraining Using a Large Corpus Machine-Translated from a Single Source Language (2024)
- Data Processing for the OpenGPT-X Model Family (2024)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any Paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment:
@librarian-bot recommend