---
dataset_info:
  features:
  - name: input_ids
    sequence: int32
  - name: token_type_ids
    sequence: int8
  - name: attention_mask
    sequence: int8
  splits:
  - name: train
    num_bytes: 51067549265.998314
    num_examples: 131569119
  download_size: 15915934708
  dataset_size: 51067549265.998314
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
Dataset tokenized with the bert-base-cased tokenizer. Examples are single sentences cut off at 128 tokens (not sentence pairs); all sentences from the original sentence pairs were extracted as individual examples.
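
For reference, a minimal sketch of how this preprocessing could be reproduced, assuming the `bert-base-cased` tokenizer and truncation to 128 tokens (the exact mapping script is not included here):

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Assumption: bert-base-cased tokenizer, single sentences truncated to 128 tokens.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize(batch):
    # Produces input_ids, token_type_ids and attention_mask, as listed in the metadata above.
    return tokenizer(batch["text"], max_length=128, truncation=True)

# Example on one of the original datasets; wikipedia (20220301.en) would be handled the same way.
bookcorpus = load_dataset("bookcorpus", split="train")
tokenized = bookcorpus.map(tokenize, batched=True, remove_columns=["text"])
```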

Original datasets:

- https://huggingface.co/datasets/bookcorpus
- https://huggingface.co/datasets/wikipedia (variant: 20220301.en)


Mapped from: https://huggingface.co/datasets/gmongaras/BERT_Base_Cased_128_Dataset
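
Loading follows the standard `datasets` pattern; the repository id below is a placeholder for this dataset's actual name:

```python
from datasets import load_dataset

# Placeholder repo id; replace with this dataset's actual repository name.
ds = load_dataset("username/this-dataset", split="train")

# Each example exposes the three fields declared in the YAML metadata above.
print(ds[0]["input_ids"][:10])
print(ds[0]["token_type_ids"][:10])
print(ds[0]["attention_mask"][:10])
```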