---
dataset_info:
  features:
  - name: tokens
    dtype: int64
  splits:
  - name: train
    num_bytes: 77760409152
    num_examples: 9720051144
  download_size: 31455581823
  dataset_size: 77760409152
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
license: odc-by
task_categories:
- fill-mask
- text-generation
language:
- en
pretty_name: FineWeb EDU 10BT Tokenized (BERT)
---
# fw-bert-tokenized-flattened
Just a tokenized and flattened version of the 10 billion token (10BT) sample of [HuggingFaceFW/fineweb-edu](https://huggingface.co/datasets/HuggingFaceFW/fineweb-edu), tokenized with the `bert-base-uncased` tokenizer. In practice it's one huge array of token ids, with each document separated by `[SEP]`.
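
The exact preprocessing script isn't included here, but a minimal sketch of how such a flat token stream can be produced with `datasets` and `transformers` might look like the following. The `sample-10BT` config name, the streaming batched `map`, and the `tokenize_and_flatten` helper are assumptions for illustration, not the pipeline actually used:

```python
# Sketch only: reproduces the *shape* of this dataset (one int64 token per
# row, documents delimited by [SEP]), not necessarily the original script.
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Stream the 10BT sample of FineWeb-Edu so nothing has to fit in memory.
fw = load_dataset(
    "HuggingFaceFW/fineweb-edu",
    name="sample-10BT",
    split="train",
    streaming=True,
)
fw = fw.select_columns(["text"])

def tokenize_and_flatten(batch):
    # Tokenize without special tokens, then append [SEP] (id 102 for
    # bert-base-uncased) after each document so docs stay delimited
    # inside the flat stream.
    ids = tokenizer(batch["text"], add_special_tokens=False)["input_ids"]
    tokens = []
    for doc in ids:
        tokens.extend(doc)
        tokens.append(tokenizer.sep_token_id)
    # One token per output row, matching the card's `tokens: int64` feature.
    return {"tokens": tokens}

flat = fw.map(tokenize_and_flatten, batched=True, remove_columns=["text"])
```

Because each stored row is a single int64 token, fixed-length training chunks can be materialized by reading contiguous slices of the `tokens` column, with no padding or re-tokenization needed.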