anton-l (HF staff) committed
Commit 6861e17 • Parent: 9248047

Update README.md

Files changed (1):
  1. README.md +4 -4
README.md CHANGED
@@ -411,11 +411,11 @@ configs:
 <img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/wwRnEQydH9qdRtFofIE-A.png" alt="FineWeb-Edu: The finest collection of educational content the web has to offer">
 </center>

-> 1.2 trillion tokens of the finest educational data the 🌐 web has to offer
+> 1.3 trillion tokens of the finest educational data the 🌐 web has to offer

 ## What is it?

-📚 FineWeb-Edu dataset consists of **1.3T tokens** and **5.4T tokens** ([FineWeb-Edu -Large](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-large)) of educational web pages filtered from 🍷 FineWeb dataset. This is the 1.2 trillion version.
+📚 FineWeb-Edu dataset consists of **1.3T tokens** and **5.4T tokens** ([FineWeb-Edu -Large](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-large)) of educational web pages filtered from 🍷 FineWeb dataset. This is the 1.3 trillion version.

 To enhance FineWeb's quality, we developed an [educational quality classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier) using annotations generated by LLama3-70B-Instruct. We then used this classifier to retain only the most educational web pages. FineWeb-Edu outperforms FineWeb on popular benchmarks and shows the power of classifiers trained on synthetic data.

@@ -423,12 +423,12 @@ The [Dataset Curation](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu

 ## What is being released?

-Along with the dataset, which includes all filtered CommonCrawl dumps since 2013, we also release the educational classifier used for the filtering as well as the code for training it and running inference at: TODO.
+Along with the dataset, which includes all filtered CommonCrawl dumps since 2013, we also release the educational classifier used for the filtering as well as the code for training it and running inference at: https://github.com/huggingface/cosmopedia/tree/main/classification

 ## How to load the dataset
 Similarily to FineWeb, You can load the full dataset or a specific crawl/dump. Dumps have the format `CC-MAIN-(year)-(week number)`.

-### (Smaller) sample versions TODO
+### (Smaller) sample versions
 Along with config `default` (all the data), and the configs for each individual dump, you can also download the following configs:
 - `sample-350BT`: a subset randomly sampled from the whole dataset of around 350B gpt2 tokens
 - `sample-100BT`: a subset randomly sampled from the whole dataset of around 100B gpt2 tokens
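
As a quick reference for the "How to load the dataset" section in the diff above, here is a minimal sketch using the 🤗 `datasets` library. It assumes the repository id `HuggingFaceFW/fineweb-edu` and the usual `text` column, and the dump name `CC-MAIN-2024-10` is used only to illustrate the `CC-MAIN-(year)-(week number)` naming scheme; substitute a dump that actually exists in the dataset before running.

```python
from datasets import load_dataset

# Full dataset (config "default"); streaming avoids downloading everything up front.
fw = load_dataset("HuggingFaceFW/fineweb-edu", name="default", split="train", streaming=True)

# A single crawl/dump, following the CC-MAIN-(year)-(week number) naming scheme.
dump = load_dataset("HuggingFaceFW/fineweb-edu", name="CC-MAIN-2024-10", split="train", streaming=True)

# One of the smaller sampled configs listed above.
sample = load_dataset("HuggingFaceFW/fineweb-edu", name="sample-100BT", split="train", streaming=True)

# Peek at a few documents (assumes a "text" column).
for row in sample.take(3):
    print(row["text"][:200])
```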
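
The educational quality classifier mentioned in the "What is it?" section can, in principle, be scored like any `transformers` sequence-classification checkpoint. The sketch below is an illustration, not the official inference code (that lives in the linked cosmopedia repository): it assumes the checkpoint `HuggingFaceFW/fineweb-edu-classifier` exposes a single regression-style logit interpreted as an educational score, and the filtering cut-off shown is purely illustrative.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Assumption: the classifier loads as a standard sequence-classification
# model whose single logit is an educational-quality score.
model_id = "HuggingFaceFW/fineweb-edu-classifier"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Photosynthesis is the process by which plants turn sunlight into chemical energy."
inputs = tokenizer(text, return_tensors="pt", truncation=True)

with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()

print(f"educational score: {score:.2f}")

# Filtering keeps only pages above some score cut-off; the exact threshold
# used to build FineWeb-Edu is documented with the classifier (3 here is illustrative).
keep = score >= 3
```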