---
dataset_info:
  features:
    - name: input_ids
      sequence: int32
  splits:
    - name: train
      num_bytes: 5881094452
      num_examples: 5720909
    - name: test
      num_bytes: 309531828
      num_examples: 301101
  download_size: 3050952379
  dataset_size: 6190626280
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: test
        path: data/test-*
---

The first 15% of OpenWebText2, tokenized with `LlamaTokenizer` at a context length of 256. With 5,720,909 train and 301,101 test examples of 256 tokens each, the dataset holds roughly 1.5 billion tokens in total.
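
A minimal sketch of loading and inspecting the data with the `datasets` library, then decoding a sequence back to text with a LLaMA-compatible tokenizer. The repository id and tokenizer checkpoint below are assumptions for illustration, not values taken from this card.

```python
from datasets import load_dataset
from transformers import LlamaTokenizer

# Placeholder repository id; replace with this dataset's actual repo id.
REPO_ID = "xz56/openwebtext2-tokenized"
# Any LLaMA-compatible tokenizer checkpoint can decode the ids; this choice is an assumption.
TOKENIZER_ID = "huggyllama/llama-7b"

# Stream the train split so a quick look does not require the full ~3 GB download.
ds = load_dataset(REPO_ID, split="train", streaming=True)
tokenizer = LlamaTokenizer.from_pretrained(TOKENIZER_ID)

example = next(iter(ds))
print(len(example["input_ids"]))                      # expected: 256 token ids per example
print(tokenizer.decode(example["input_ids"])[:200])   # start of the decoded text
```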