Dataset tokenized with the bert-cased tokenizer. Individual sentences (not sentence pairs) are cut off at 512 tokens, and all sentence pairs were extracted.
Original datasets:
- https://huggingface.co/datasets/bookcorpus
- https://huggingface.co/datasets/wikipedia (variant: 20220301.en)
Mapped from: https://huggingface.co/datasets/gmongaras/BERT_Base_Cased_128_Dataset
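
For reference, a minimal sketch of how a dataset like this could be produced with the Hugging Face `datasets` and `transformers` libraries. This is an assumption-based illustration, not the actual preprocessing script; the column handling and the use of `AutoTokenizer` are guesses based on the description above.

```python
# Hypothetical sketch (not the author's actual script): tokenizing
# bookcorpus + wikipedia with bert-base-cased at a 512-token cutoff.
from datasets import concatenate_datasets, load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# Source corpora listed above; both expose a "text" column.
bookcorpus = load_dataset("bookcorpus", split="train")
wikipedia = load_dataset("wikipedia", "20220301.en", split="train")
# Keep only the "text" column so the two corpora can be concatenated.
wikipedia = wikipedia.remove_columns(
    [c for c in wikipedia.column_names if c != "text"]
)
corpus = concatenate_datasets([bookcorpus, wikipedia])

def tokenize(batch):
    # Truncate each sequence to 512 tokens, per the description above.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = corpus.map(tokenize, batched=True, remove_columns=["text"])
```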