guipenedo (HF staff) committed
Commit 042ac03 (1 parent: e9b9d5a)

Update README.md

Files changed (1):
  1. README.md +24 -12
README.md CHANGED
@@ -12,6 +12,18 @@ configs:
   data_files:
   - split: train
     path: data/*/*
+- config_name: sample-10BT
+  data_files:
+  - split: train
+    path: sample/10BT/*
+- config_name: sample-100BT
+  data_files:
+  - split: train
+    path: sample/100BT/*
+- config_name: sample-350BT
+  data_files:
+  - split: train
+    path: sample/350BT/*
 - config_name: CC-MAIN-2024-10
   data_files:
   - split: train
@@ -392,18 +404,6 @@ configs:
   data_files:
   - split: train
     path: data/CC-MAIN-2013-20/*
-- config_name: sample-10BT
-  data_files:
-  - split: train
-    path: sample/10BT/*
-- config_name: sample-100BT
-  data_files:
-  - split: train
-    path: sample/100BT/*
-- config_name: sample-350BT
-  data_files:
-  - split: train
-    path: sample/350BT/*
 ---
 # 🍷 FineWeb
 <center>
@@ -464,6 +464,14 @@ You will find details on the different processing decisions we took and some int
 
 You can load the full dataset or a specific crawl/dump (see table below). Dumps have the format `CC-MAIN-(year)-(week number)`.
 
+### (Smaller) sample versions
+Along with the `default` config (all the data) and the configs for each individual dump, you can also download the following configs:
+- `sample-350BT`: a subset randomly sampled from the whole dataset of around 350B gpt2 tokens (388GB)
+- `sample-100BT`: a subset randomly sampled from the whole dataset of around 100B gpt2 tokens (277.4GB)
+- `sample-10BT`: a subset randomly sampled from the whole dataset of around 10B gpt2 tokens (27.6GB)
+
+`sample-10BT` was sampled from `sample-100BT`, which in turn was sampled from `sample-350BT`.
+
 ### Using 🏭 [`datatrove`](https://github.com/huggingface/datatrove/)
 
 ```python
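As a quick way to try the sample configs documented in the hunk above, here is a minimal sketch (not part of this diff) that streams `sample-10BT` with `datasets`; the `text` column name is an assumption based on the `doc.text` filter example further down:

```python
from datasets import load_dataset

# stream the 10BT sample without downloading it locally
fw = load_dataset("HuggingFaceFW/fineweb", name="sample-10BT", split="train", streaming=True)

# peek at the first few documents (the "text" column name is assumed here)
for i, doc in enumerate(fw):
    print(doc["text"][:200])
    if i >= 2:
        break
```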
@@ -471,6 +479,7 @@ from datatrove.pipeline.readers import ParquetReader
 
 # limit determines how many documents will be streamed (remove for all)
 # to fetch a specific dump: hf://datasets/HuggingFaceFW/fineweb/data/CC-MAIN-2024-10
+# replace "data" with "sample/100BT" to use the 100BT sample
 data_reader = ParquetReader("hf://datasets/HuggingFaceFW/fineweb/data", limit=1000)
 for document in data_reader():
     # do something with document
@@ -487,6 +496,7 @@ from datatrove.pipeline.writers import JsonlWriter
 
 pipeline_exec = LocalPipelineExecutor(
     pipeline=[
+        # replace "data/CC-MAIN-2024-10" with "sample/100BT" to use the 100BT sample
         ParquetReader("hf://datasets/HuggingFaceFW/fineweb/data/CC-MAIN-2024-10", limit=1000),
         LambdaFilter(lambda doc: "hugging" in doc.text),
         JsonlWriter("some-output-path")
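The hunk above only shows the lines around the new comment. A self-contained sketch of the same pipeline with the suggested `sample/100BT` path substituted; the import paths and the `tasks` value follow the full snippet elsewhere in this README and are assumptions here, not part of this diff:

```python
from datatrove.executor import LocalPipelineExecutor
from datatrove.pipeline.filters import LambdaFilter
from datatrove.pipeline.readers import ParquetReader
from datatrove.pipeline.writers import JsonlWriter

pipeline_exec = LocalPipelineExecutor(
    pipeline=[
        # read the 100BT sample instead of a single dump, as the new comment suggests
        ParquetReader("hf://datasets/HuggingFaceFW/fineweb/sample/100BT", limit=1000),
        # keep only documents containing the word "hugging"
        LambdaFilter(lambda doc: "hugging" in doc.text),
        # write the surviving documents as jsonl
        JsonlWriter("some-output-path"),
    ],
    tasks=10,  # illustrative parallelism, not taken from this diff
)
pipeline_exec.run()
```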
@@ -504,6 +514,7 @@ folder = snapshot_download(
     "HuggingFaceFW/fineweb",
     repo_type="dataset",
     local_dir="./fineweb/",
+    # replace "data/CC-MAIN-2023-50/*" with "sample/100BT/*" to use the 100BT sample
     allow_patterns="data/CC-MAIN-2023-50/*")
 ```
 
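Similarly, a sketch of the full `snapshot_download` call with the sample pattern swapped in, as the new comment suggests; the `huggingface_hub` import is assumed from the surrounding README section:

```python
from huggingface_hub import snapshot_download

# download only the 100BT sample files instead of a full dump
folder = snapshot_download(
    "HuggingFaceFW/fineweb",
    repo_type="dataset",
    local_dir="./fineweb/",
    allow_patterns="sample/100BT/*",  # or "data/CC-MAIN-2023-50/*" for a single dump
)
print(folder)  # local path where the parquet files were placed
```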
@@ -513,6 +524,7 @@ For faster downloads, make sure to install `pip install huggingface_hub[hf_trans
 
 ```python
 from datasets import load_dataset
+# use name="sample-10BT" to use the 10BT sample
 fw = load_dataset("HuggingFaceFW/fineweb", name="CC-MAIN-2024-10", split="train", streaming=True)
 ```
 