pszemraj committed on
Commit
7d13bea
1 Parent(s): 986f197

Update README.md

Files changed (1)
  1. README.md +18 -1
README.md CHANGED
@@ -20,4 +20,21 @@ print(tokenizer.decode(ids['input_ids'], skip_special_tokens=False))
  # <bos>Hello, this is a test input.<EOT>
  len(tokenizer)
  # 65004
- ```
+ ```
+
+
+ details relevant for model configs using this:
+ ```py
+ >>> tokenizer
+ GPT2TokenizerFast(name_or_path='pszemraj/claude-tokenizer-mlm', vocab_size=65000, model_max_length=200000, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<bos>', 'eos_token': '<EOT>', 'unk_token': '<EOT>', 'sep_token': '<EOT>', 'pad_token': '<pad>', 'cls_token': '<bos>', 'mask_token': '<mask>'}, clean_up_tokenization_spaces=True), added_tokens_decoder={
+ 	0: AddedToken("<EOT>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
+ 	1: AddedToken("<META>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
+ 	2: AddedToken("<META_START>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
+ 	3: AddedToken("<META_END>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
+ 	4: AddedToken("<SOS>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
+ 	65000: AddedToken("<pad>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
+ 	65001: AddedToken("<CLS>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
+ 	65002: AddedToken("<bos>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
+ 	65003: AddedToken("<mask>", rstrip=False, lstrip=True, single_word=False, normalized=True, special=True),
+ }
+ ```
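
The added repr shows `vocab_size=65000` while `len(tokenizer)` returns 65004; the difference is the added special tokens whose IDs sit at or above the base vocab. A minimal self-contained sketch of that arithmetic (token IDs copied from the `added_tokens_decoder` in the diff above; this is an illustrative check, not a `transformers` API call):

```python
# Special-token IDs as listed in added_tokens_decoder in the diff above.
SPECIAL_TOKEN_IDS = {
    "<EOT>": 0,
    "<META>": 1,
    "<META_START>": 2,
    "<META_END>": 3,
    "<SOS>": 4,
    "<pad>": 65000,
    "<CLS>": 65001,
    "<bos>": 65002,
    "<mask>": 65003,
}

VOCAB_SIZE = 65000  # base BPE vocab, per the repr above

# len(tokenizer) counts the base vocab plus only those added tokens
# whose IDs fall outside it; IDs 0-4 already occupy base-vocab slots.
total = VOCAB_SIZE + sum(1 for i in SPECIAL_TOKEN_IDS.values() if i >= VOCAB_SIZE)
print(total)  # 65004, matching the README output
```

This is why a model config pairing with this tokenizer should size its embedding matrix to 65004 (or a padded multiple), not to the reported `vocab_size` of 65000.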