Commit ce147f0 · verified · committed by emozilla · 1 parent: abdba1b

Update README.md

Files changed (1): README.md (+3 -3)
README.md CHANGED
@@ -13,11 +13,11 @@ size_categories:
 - 100B<n<1T
 ---
 
-Tokenized (Llama 2) verison of [NousResearch/dolma-v1_7-30B](https://huggingface.co/datasets/NousResearch/dolma-v1_7-30B) as a [Nanotron](https://github.com/huggingface/nanotron) dataset split into 10 GB chunks.
+Tokenized (Llama 2) verison of [emozilla/dolma-v1_7-30B](https://huggingface.co/datasets/emozilla/dolma-v1_7-30B) as a [Nanotron](https://github.com/huggingface/nanotron) dataset split into 10 GB chunks.
 
 To download:
 ```shell
-huggingface-cli download --repo-type dataset --local-dir dolma-v1_7-30B-tokenized-llama2-nanoset --local-dir-use-symlinks False NousResearch/dolma-v1_7-30B-tokenized-llama2-nanoset
+huggingface-cli download --repo-type dataset --local-dir dolma-v1_7-30B-tokenized-llama2-nanoset --local-dir-use-symlinks False emozilla/dolma-v1_7-30B-tokenized-llama2-nanoset
 ```
 
 To recombine:
@@ -31,7 +31,7 @@ Can also be used directly with numpy, for example
 import numpy as np
 
 dataset_buffer_mmap = np.memmap("dolma-v1_7-30B-tokenized-llama2-nanoset.npy",
-    mode="r", order="C", dtype=np.int32)
+    mode="r", order="C", dtype=np.int16)
 dataset_buffer = memoryview(dataset_buffer_mmap)
 dataset_number_of_tokens = int(len(dataset_buffer))
 ```
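
The README's actual recombination command falls outside this hunk, so for illustration only, here is a hedged sketch of how the downloaded 10 GB chunks could be joined back into the single buffer the memmap example expects. The glob pattern and the assumption that the chunk filenames sort lexicographically into the correct order are guesses, not something this commit specifies.

```python
# Illustrative sketch only -- NOT the command from the README (that section is
# outside this diff hunk). Assumes the chunks in the --local-dir are raw byte
# splits whose filenames sort lexicographically into the correct order; the
# "*.npy*" pattern is a guess at the chunk naming.
import glob
import shutil

chunk_paths = sorted(glob.glob("dolma-v1_7-30B-tokenized-llama2-nanoset/*.npy*"))
with open("dolma-v1_7-30B-tokenized-llama2-nanoset.npy", "wb") as out:
    for path in chunk_paths:
        with open(path, "rb") as chunk:
            shutil.copyfileobj(chunk, out)  # stream-append each chunk in order
```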
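
A short sketch extending the README's numpy example: once recombined, the token buffer can be memmapped with the updated int16 dtype and a few ids decoded as a sanity check. Loading the tokenizer via transformers' `AutoTokenizer` from the gated `meta-llama/Llama-2-7b-hf` checkpoint is an assumption; any Llama 2 tokenizer would do.

```python
# Sketch: spot-check the recombined token buffer. The int16 dtype matches the
# updated README; the tokenizer checkpoint below is an assumption (any Llama 2
# tokenizer works) and requires access to the gated meta-llama repository.
import numpy as np
from transformers import AutoTokenizer

tokens = np.memmap("dolma-v1_7-30B-tokenized-llama2-nanoset.npy",
                   mode="r", order="C", dtype=np.int16)
print(f"total tokens: {len(tokens):,}")

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
print(tokenizer.decode(tokens[:64].tolist()))  # decode the first 64 token ids
```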