---
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
dataset_info:
  features:
    - name: input_ids
      sequence: int32
    - name: token_type_ids
      sequence: int8
    - name: attention_mask
      sequence: int8
  splits:
    - name: train
      num_bytes: 19918538280
      num_examples: 6458670
  download_size: 4218892705
  dataset_size: 19918538280
---

English Wikipedia pre-tokenized with the cased BERT tokenizer, with each example truncated to 512 tokens.
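A minimal sketch of how examples like these can be produced with the `transformers` library. The exact preprocessing script is not included in this card, and the specific checkpoint (`bert-base-cased`) is an assumption based on the "cased BERT tokenizer" description above:

```python
from transformers import AutoTokenizer

# Assumed checkpoint: the card only says "cased BERT tokenizer".
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

encoded = tokenizer(
    "Anarchism is a political philosophy...",  # placeholder article text
    truncation=True,  # drop tokens past max_length
    max_length=512,   # matches this dataset's 512-token cutoff
)

# The three feature columns stored in this dataset:
print(list(encoded.keys()))  # ['input_ids', 'token_type_ids', 'attention_mask']
```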

Original dataset: https://huggingface.co/datasets/wikipedia (variant `20220301.en`).
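A minimal loading sketch with the `datasets` library; the repo id `gmongaras/wikipedia_BERT_512` is inferred from this card's title and may differ:

```python
from datasets import load_dataset

# Repo id inferred from the dataset name; adjust if it differs.
ds = load_dataset("gmongaras/wikipedia_BERT_512", split="train")

print(ds.features)               # input_ids, token_type_ids, attention_mask
print(ds[0]["input_ids"][:10])   # first ten token ids of the first example
```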