---
language:
  - en
size_categories:
  - 1M<n<10M
task_categories:
  - text-generation
pretty_name: SlimPajama-6B
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
dataset_info:
  features:
    - name: text
      dtype: string
    - name: meta
      struct:
        - name: redpajama_set_name
          dtype: string
    - name: __index_level_0__
      dtype: int64
  splits:
    - name: train
      num_bytes: 23918118724
      num_examples: 5489000
  download_size: 14002333468
  dataset_size: 23918118724
---

A sampled version of [cerebras/SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B).

Since the original data was shuffled before chunking, I downloaded only train/chunk1 (one of ten chunks) and then sampled 10% of it. This should yield roughly 6B tokens, hence SlimPajama-6B.
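For reference, a minimal sketch of how such a subsample could be reproduced with 🤗 `datasets`. This is an illustration, not the exact script used: the shuffle seed is an assumption, and the `data_files` glob assumes the original repo's `train/chunk1/*` layout (reading the `.jsonl.zst` files may also require the `zstandard` package).

```python
from datasets import load_dataset

# Load only the first chunk of the original dataset
# (assumes the repo's train/chunk1/* layout).
chunk1 = load_dataset(
    "cerebras/SlimPajama-627B",
    data_files="train/chunk1/*",
    split="train",
)

# Keep a random 10% of the chunk; the seed used originally is unknown.
subsample = chunk1.shuffle(seed=42).select(range(len(chunk1) // 10))
```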

The decompressed dataset is about 24 GB on disk (the original dataset is over 2 TB) and contains 5,489,000 rows.
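The dataset loads with the standard `datasets` API; a minimal sketch, assuming the repo id `DKYoon/SlimPajama-6B` (streaming avoids the ~14 GB download):

```python
from datasets import load_dataset

# Stream the train split so nothing is downloaded up front.
ds = load_dataset("DKYoon/SlimPajama-6B", split="train", streaming=True)

# Peek at a few rows.
for example in ds.take(3):
    print(example["meta"]["redpajama_set_name"], example["text"][:80])
```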


### Data source proportions for SlimPajama-627B and SlimPajama-6B

As a sanity check, I calculated the byte proportions of the sampled version (a sketch of one way to reproduce this follows the table).

| Data source   | SlimPajama-627B | SlimPajama-6B |
| ------------- | --------------- | ------------- |
| Commoncrawl   | 52.2%           | 54.1%         |
| C4            | 26.7%           | 28.7%         |
| GitHub        | 5.2%            | 4.2%          |
| Books         | 4.2%            | 3.7%          |
| ArXiv         | 4.6%            | 3.4%          |
| Wikipedia     | 3.8%            | 3.1%          |
| StackExchange | 3.3%            | 2.8%          |
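A sketch of one way such a check could be reproduced (an illustration, not the exact script used; a plain Python loop over 5.5M rows is slow, so a batched `map` would be faster in practice):

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("DKYoon/SlimPajama-6B", split="train")

# Sum UTF-8 byte counts per RedPajama source set.
bytes_per_set = Counter()
for example in ds:
    name = example["meta"]["redpajama_set_name"]
    bytes_per_set[name] += len(example["text"].encode("utf-8"))

# Print each source's share of total bytes.
total = sum(bytes_per_set.values())
for name, n in bytes_per_set.most_common():
    print(f"{name}: {100 * n / total:.1f}%")
```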

Please refer to the original dataset card for any other information. If you use this dataset, please cite the original SlimPajama release:

```
@misc{cerebras2023slimpajama,
  author = {Soboleva, Daria and Al-Khateeb, Faisal and Myers, Robert and Steeves, Jacob R and Hestness, Joel and Dey, Nolan},
  title = {{SlimPajama: A 627B token cleaned and deduplicated version of RedPajama}},
  month = {June},
  year = {2023},
  howpublished = {\url{https://www.cerebras.net/blog/slimpajama-a-627b-token-cleaned-and-deduplicated-version-of-redpajama}},
  url = {https://huggingface.co/datasets/cerebras/SlimPajama-627B},
}
```