---
dataset_info:
  features:
    - name: input_ids
      sequence: int32
    - name: token_type_ids
      sequence: int8
    - name: attention_mask
      sequence: int8
  splits:
    - name: train
      num_bytes: 51067549265.998314
      num_examples: 131569119
  download_size: 15915934708
  dataset_size: 51067549265.998314
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

Dataset tokenized with the BERT cased tokenizer. Sentences are truncated to 128 tokens and stored as single sentences (not sentence pairs), with all sentence pairs extracted from the source data.
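A rough sketch of the preprocessing described above, assuming the `bert-base-cased` checkpoint and a `text` column (the exact script and column names are not part of this card):

```python
from transformers import AutoTokenizer

# Assumed checkpoint; the card only says "BERT cased tokenizer".
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def encode(example):
    # Truncate single sentences to 128 tokens, producing the three
    # columns listed in the metadata above: input_ids, token_type_ids,
    # and attention_mask.
    return tokenizer(
        example["text"],  # hypothetical raw-text column name
        truncation=True,
        max_length=128,
        return_token_type_ids=True,
        return_attention_mask=True,
    )
```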

Original datasets:

- https://huggingface.co/datasets/bookcorpus
- https://huggingface.co/datasets/wikipedia (variant: 20220301.en)

Mapped from: https://huggingface.co/datasets/gmongaras/BERT_Base_Cased_128_Dataset
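A minimal loading sketch with the `datasets` library (the repository id below is a placeholder; substitute this dataset's actual id):

```python
from datasets import load_dataset

# Placeholder repository id; replace with this dataset's actual path.
ds = load_dataset("user/dataset-name", split="train")

# Each example holds the pre-tokenized columns from the metadata above:
# input_ids, token_type_ids, and attention_mask.
print(ds[0]["input_ids"][:10])
```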