DKYoon committed on
Commit 6b78d1a
1 Parent(s): 58417ae

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -30,13 +30,13 @@ dataset_info:
 ---
 Sampled version of [cerebras/SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B).
 
-Based on the [fact that the original data was shuffled before chunking](https://huggingface.co/datasets/cerebras/SlimPajama-627B/discussions/4), I only downloaded chunk1 (out of 10) and further sampled 10%. This should result in roughly 6B tokens, hence SlimPajama-6B.
+[Since the original data was shuffled before chunking](https://huggingface.co/datasets/cerebras/SlimPajama-627B/discussions/4), I only downloaded train/chunk1 (of 10 total) and further sampled 10%. This should result in roughly 6B tokens, hence SlimPajama-6B.
 
-The dataset is roughly 24GBs in storage size when decompressed (original dataset is over 2TBs) and has 5489000 rows.
+The dataset is 24 GB in storage size when decompressed (the original dataset is over 2 TB) and has 5,489,000 rows.
 
 ---
 #### Data source proportions for SlimPajama-627B and SlimPajama-6B
-For sanity purpose, I recaluclated the byte proportion of the sampled version, it rougly matches the original dataset.
+As a sanity check, I calculated the byte proportions of the sampled version.
 
 
 | Data source | SlimPajama-627B | SlimPajama-6B |
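
The sampling step described in the diff (download train/chunk1, then keep 10%) is not accompanied by a script in this commit. A minimal sketch of how it could look with the `datasets` library, assuming chunk1's `.jsonl.zst` shards have already been downloaded to a local `SlimPajama-627B/train/chunk1/` directory (hypothetical path) and that `zstandard` is installed:

```python
from datasets import load_dataset

# Load only the locally downloaded train/chunk1 shards of SlimPajama-627B.
# The path below is a placeholder; point it at wherever chunk1 was saved.
chunk1 = load_dataset(
    "json",
    data_files="SlimPajama-627B/train/chunk1/*.jsonl.zst",
    split="train",
)

# Keep a random 10% of chunk1. Because the original corpus was shuffled
# before chunking, a 10% subsample of one chunk (of 10) should approximate
# a uniform ~1% sample of the full corpus. Seed chosen arbitrarily.
sampled = chunk1.train_test_split(train_size=0.1, seed=42)["train"]
print(sampled.num_rows)
```

Likewise, the byte-proportion sanity check could be reproduced roughly as follows, assuming the sampled dataset is published as `DKYoon/SlimPajama-6B` and keeps SlimPajama's `meta.redpajama_set_name` field per example (neither detail is spelled out in this commit):

```python
from collections import Counter

from datasets import load_dataset

# Stream the sampled dataset and tally UTF-8 bytes per RedPajama source.
ds = load_dataset("DKYoon/SlimPajama-6B", split="train", streaming=True)

byte_counts = Counter()
for example in ds:
    source = example["meta"]["redpajama_set_name"]
    byte_counts[source] += len(example["text"].encode("utf-8"))

total = sum(byte_counts.values())
for source, n in byte_counts.most_common():
    print(f"{source}: {100 * n / total:.2f}%")
```

The resulting percentages can then be compared against the per-source proportions reported for the original SlimPajama-627B.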