Update README.md
README.md CHANGED
@@ -398,6 +398,40 @@ configs:
 > 15 trillion tokens of the finest data the 🍷 web has to offer
 >
 
+- [🍷 FineWeb](#-fineweb)
+  * [What is it?](#what-is-it)
+  * [What is being released?](#what-is-being-released)
+  * [How to download and use 🍷 FineWeb](#how-to-download-and-use-fineweb)
+    + [Using 🏭 `datatrove`](#using-datatrove)
+    + [Using `huggingface_hub`](#using-huggingface_hub)
+    + [Using `datasets`](#using-datasets)
+  * [Breakdown by dump/crawl](#breakdown-by-dumpcrawl)
+  * [Dataset performance evaluation and ablations](#dataset-performance-evaluation-and-ablations)
+    + [Hyper-parameters for ablation models](#hyper-parameters-for-ablation-models)
+    + [Ablation evaluation benchmarks](#ablation-evaluation-benchmarks)
+    + [Comparison with other datasets](#comparison-with-other-datasets)
+- [Dataset card for 🍷 FineWeb](#dataset-card-for-fineweb)
+  * [Dataset Description](#dataset-description)
+    + [Dataset Summary](#dataset-summary)
+  * [Dataset Structure](#dataset-structure)
+    + [Data Instances](#data-instances)
+    + [Data Fields](#data-fields)
+    + [Data Splits](#data-splits)
+  * [Dataset Creation](#dataset-creation)
+    + [Curation Rationale](#curation-rationale)
+    + [Source Data](#source-data)
+    + [Data processing steps](#data-processing-steps)
+    + [Annotations](#annotations)
+    + [Personal and Sensitive Information](#personal-and-sensitive-information)
+  * [Considerations for Using the Data](#considerations-for-using-the-data)
+    + [Social Impact of Dataset](#social-impact-of-dataset)
+    + [Discussion of Biases](#discussion-of-biases)
+    + [Other Known Limitations](#other-known-limitations)
+  * [Additional Information](#additional-information)
+    + [Licensing Information](#licensing-information)
+    + [Future work](#future-work)
+    + [Citation Information](#citation-information)
+
 ## What is it?
 
 The 🍷 FineWeb dataset consists of more than **15T tokens** of cleaned and deduplicated English web data from CommonCrawl. The data processing pipeline is optimized for LLM performance and was run using the 🏭 [`datatrove`](https://github.com/huggingface/datatrove/) library, our large-scale data processing library.
@@ -416,7 +450,7 @@ You will find details on the different processing decisions we took and some int
 
 You can load the full dataset or a specific crawl/dump (see table below). Dumps have the format `CC-MAIN-(year)-(week number)`.
 
-### Using 🏭 [`datatrove`](https://github.com/huggingface/datatrove/)
+### Using 🏭 [`datatrove`](https://github.com/huggingface/datatrove/)
 
 ```python
 from datatrove.pipeline.readers import ParquetReader
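
The hunk cuts the `datatrove` example off after its import line. For reference, a minimal sketch of how `datatrove`'s `ParquetReader` is typically used to stream this dataset; the `limit` value, the dump-specific path in the comment, and the print loop are illustrative assumptions rather than lines from this diff:

```python
from datatrove.pipeline.readers import ParquetReader

# Stream FineWeb parquet files directly from the Hub.
# `limit` caps how many documents are streamed (remove it to read everything);
# to read a single crawl, point the path at one dump folder, e.g.
# "hf://datasets/HuggingFaceFW/fineweb/data/CC-MAIN-2024-10" (illustrative).
data_reader = ParquetReader("hf://datasets/HuggingFaceFW/fineweb/data", limit=1000)

for document in data_reader():
    # each `document` holds the text and metadata of one web page
    print(document)
```

The table of contents added above also references a "Using `datasets`" section that this diff does not show. A hedged sketch of loading a single crawl with `datasets`, using the standard `load_dataset` API (the dump name here is illustrative):

```python
from datasets import load_dataset

# Pass a CC-MAIN-(year)-(week number) dump name as the config to load one
# crawl; streaming avoids downloading the full multi-terabyte dataset.
fw = load_dataset("HuggingFaceFW/fineweb",
                  name="CC-MAIN-2024-10",
                  split="train",
                  streaming=True)
print(next(iter(fw)))
```
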
@@ -621,7 +655,7 @@ You will find these models on [this collection](https://huggingface.co/collectio
 
 - **Homepage and Repository:** [https://huggingface.co/datasets/HuggingFaceFW/fineweb](https://huggingface.co/datasets/HuggingFaceFW/fineweb)
 - **Point of Contact:** please create a discussion on the Community tab
-- **License
+- **License:** Open Data Commons Attribution License (ODC-By) v1.0
 
 ### Dataset Summary
 