---
dataset_info:
  features:
    - name: text
      dtype: string
  splits:
    - name: train
      num_bytes: 36961083473
      num_examples: 136338653
  download_size: 13895887135
  dataset_size: 36961083473
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
---

Dataset built with the bert-cased tokenizer. Sentences are truncated to a maximum length of 512 tokens (single sentences, not sentence pairs), with all sentence pairs extracted.
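
A minimal sketch of what this preprocessing might look like with the Hugging Face `transformers` and `datasets` APIs. The repository id below is a placeholder, and this is not necessarily the exact pipeline used to build the dataset:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

# Cased BERT tokenizer (assumed to correspond to "bert-cased" above).
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize(batch):
    # Truncate each sentence to at most 512 tokens. Only a single text
    # argument is passed, so no sentence pairs are formed.
    return tokenizer(batch["text"], truncation=True, max_length=512)

# Placeholder repository id -- substitute the actual dataset path.
dataset = load_dataset("user/dataset-name", split="train")
tokenized = dataset.map(tokenize, batched=True)
```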

Original datasets: