Dataset built with the bert-cased tokenizer: individual sentences are truncated to 512 tokens (the 512-token limit applies per sentence, not per sentence pair), and all sentence pairs are extracted.

Original datasets:
- https://huggingface.co/datasets/bookcorpus
- https://huggingface.co/datasets/wikipedia (variant: 20220301.en)
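
Below is a minimal sketch of how this preprocessing could be reproduced with the Hugging Face `datasets` and `transformers` libraries. It assumes the `bert-base-cased` checkpoint (the card only says "bert-cased") and that both source corpora expose their raw sentences in a `text` column; the pair-extraction step itself is not shown.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Assumption: "bert-cased" refers to the bert-base-cased checkpoint.
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

# The two source corpora listed above.
bookcorpus = load_dataset("bookcorpus", split="train")
wikipedia = load_dataset("wikipedia", "20220301.en", split="train")

def tokenize(batch):
    # Truncate each individual sentence to 512 tokens. Pairs are formed
    # from these single-sentence encodings afterwards, so the 512 limit
    # is enforced per sentence rather than per sentence pair.
    return tokenizer(batch["text"], truncation=True, max_length=512)

bookcorpus = bookcorpus.map(tokenize, batched=True, remove_columns=["text"])
wikipedia = wikipedia.map(tokenize, batched=True, remove_columns=["text"])
```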