omkarenator committed 90dfb5b (1 parent: 29f96f9)

Update README.md

Files changed (1): README.md (+21 -6)
README.md CHANGED
@@ -36,15 +36,30 @@ Get access now at [LLM360 site](https://www.llm360.ai/)
 
 # Loading Amber's Pretraining Data
 
+Below is an example of how to download, sample, and detokenize any subset of AmberDatasets corresponding to an Amber checkpoint. Just set `CHECKPOINT_NUM` to the subset you are interested in (0-359) and point `CHECKPOINT_PATH` to the local checkpoint folder.
+
 ```python
-import datasets
-
-dataset = datasets.load_dataset('llm360/amber', chunk_idx=111)
-
-print(dataset[0])
-print(len(dataset[0]))
-# [1, 5, 9, 2, 6, ...]
-# 2049
+import random
+from transformers import AutoTokenizer
+from datasets import load_dataset
+
+CHECKPOINT_NUM = 0  # Pretraining dataset for this checkpoint
+NUM_SAMPLES = 10  # Number of random samples to decode
+CHECKPOINT_PATH = "/path/to/ckpt_000/"  # Local path to an Amber checkpoint
+
+dataset = load_dataset(
+    "LLM360/AmberDatasets",
+    data_files=f"train/train_{CHECKPOINT_NUM:03}.jsonl",
+    split=None,
+)
+
+tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT_PATH)
+samples = set(random.choices(range(len(dataset["train"])), k=NUM_SAMPLES))
+
+for i, line in enumerate(dataset["train"]):
+    if i in samples:
+        tokens = line["token_ids"]
+        print(f"{i}:{tokenizer.decode(tokens)}")
 ```
 
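One caveat in the committed snippet: `random.choices` draws *with* replacement, so the resulting `set` can contain fewer than `NUM_SAMPLES` indices. A minimal stdlib-only sketch of a without-replacement alternative (no dataset download; `pick_sample_indices` is a hypothetical helper, not part of the README):

```python
import random

def pick_sample_indices(n_rows, k, seed=None):
    """Return exactly min(k, n_rows) distinct row indices.

    random.sample draws without replacement, unlike random.choices,
    so the set never ends up smaller than requested.
    """
    rng = random.Random(seed)
    return set(rng.sample(range(n_rows), min(k, n_rows)))

# Example: 10 distinct indices from a hypothetical 1000-row shard.
indices = pick_sample_indices(1000, 10, seed=0)
print(len(indices))  # always 10, never fewer
```

The returned set could be used in place of `samples` in the loop above without changing anything else.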