Update README.md

README.md CHANGED
@@ -411,11 +411,11 @@ configs:
<img src="https://cdn-uploads.huggingface.co/production/uploads/61c141342aac764ce1654e43/wwRnEQydH9qdRtFofIE-A.png" alt="FineWeb-Edu: The finest collection of educational content the web has to offer">
</center>

-> 1.
+> 1.3 trillion tokens of the finest educational data the 🌐 web has to offer

## What is it?

-📚 FineWeb-Edu dataset consists of **1.3T tokens** and **5.4T tokens** ([FineWeb-Edu-Large](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-large)) of educational web pages filtered from 🍷 FineWeb dataset. This is the 1.
+📚 FineWeb-Edu dataset consists of **1.3T tokens** and **5.4T tokens** ([FineWeb-Edu-Large](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu-large)) of educational web pages filtered from 🍷 FineWeb dataset. This is the 1.3 trillion version.

To enhance FineWeb's quality, we developed an [educational quality classifier](https://huggingface.co/HuggingFaceFW/fineweb-edu-classifier) using annotations generated by Llama3-70B-Instruct. We then used this classifier to retain only the most educational web pages. FineWeb-Edu outperforms FineWeb on popular benchmarks and shows the power of classifiers trained on synthetic data.

@@ -423,12 +423,12 @@ The [Dataset Curation](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu

## What is being released?

-Along with the dataset, which includes all filtered CommonCrawl dumps since 2013, we also release the educational classifier used for the filtering as well as the code for training it and running inference at:
+Along with the dataset, which includes all filtered CommonCrawl dumps since 2013, we also release the educational classifier used for the filtering as well as the code for training it and running inference at: https://github.com/huggingface/cosmopedia/tree/main/classification

## How to load the dataset
Similarly to FineWeb, you can load the full dataset or a specific crawl/dump. Dumps have the format `CC-MAIN-(year)-(week number)`.

### (Smaller) sample versions
Along with config `default` (all the data), and the configs for each individual dump, you can also download the following configs:
- `sample-350BT`: a subset randomly sampled from the whole dataset of around 350B gpt2 tokens
- `sample-100BT`: a subset randomly sampled from the whole dataset of around 100B gpt2 tokens
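The `CC-MAIN-(year)-(week number)` dump naming described in the README can be sketched as below. The helper is hypothetical (not part of the dataset's tooling); the `load_dataset` call shown in the comment is the standard Hugging Face `datasets` API, and `CC-MAIN-2024-10` is just an illustrative dump name.

```python
def dump_config(year: int, week: int) -> str:
    """Build the config name for one CommonCrawl dump, following the
    CC-MAIN-(year)-(week number) format, with the week zero-padded."""
    return f"CC-MAIN-{year}-{week:02d}"

# Loading a single dump, or a sample config such as "sample-100BT",
# would then use the standard `datasets` API, e.g.:
#
#   from datasets import load_dataset
#   fw = load_dataset("HuggingFaceFW/fineweb-edu",
#                     name=dump_config(2024, 10),  # or name="sample-100BT"
#                     split="train", streaming=True)

print(dump_config(2024, 10))  # CC-MAIN-2024-10
```

Streaming mode avoids downloading a full dump up front, which matters at this dataset's scale.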