---
library_name: transformers
tags:
- tokenizer
- mlm
license: mit
---

# claude tokenizer: mlm

A variant of [Xenova/claude-tokenizer](https://huggingface.co/Xenova/claude-tokenizer) with some small changes to support usage as an MLM tokenizer.

```py
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('pszemraj/claude-tokenizer-mlm')

text = "Hello, this is a test input."
ids = tokenizer(text)
print(tokenizer.decode(ids['input_ids'], skip_special_tokens=False))
# <bos>Hello, this is a test input.<EOT>
len(tokenizer)
# 65004
```


Details relevant for model configs using this tokenizer:
```py
>>> tokenizer
GPT2TokenizerFast(name_or_path='pszemraj/claude-tokenizer-mlm', vocab_size=65000, model_max_length=200000, is_fast=True, padding_side='right', truncation_side='right', special_tokens={'bos_token': '<bos>', 'eos_token': '<EOT>', 'unk_token': '<EOT>', 'sep_token': '<EOT>', 'pad_token': '<pad>', 'cls_token': '<bos>', 'mask_token': '<mask>'}, clean_up_tokenization_spaces=True),  added_tokens_decoder={
        0: AddedToken("<EOT>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
        1: AddedToken("<META>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
        2: AddedToken("<META_START>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
        3: AddedToken("<META_END>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
        4: AddedToken("<SOS>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
        65000: AddedToken("<pad>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
        65001: AddedToken("<CLS>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
        65002: AddedToken("<bos>", rstrip=False, lstrip=False, single_word=False, normalized=False, special=True),
        65003: AddedToken("<mask>", rstrip=False, lstrip=True, single_word=False, normalized=True, special=True),
}
```

The `<CLS>` token is added but unused; both the CLS and BOS tokens are set to `<bos>`. See `tokenizer_config.json` for details.
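For MLM pretraining you would normally let `transformers.DataCollatorForLanguageModeling` handle masking, but the standard BERT-style 80/10/10 scheme can be sketched directly against the special-token ids shown above (`<mask>` = 65003, `<pad>` = 65000, `<bos>` = 65002, `<EOT>` = 0). This is a minimal illustration, not the collator's exact implementation:

```python
import random

# Special-token ids from the added_tokens_decoder above
MASK_ID, PAD_ID, BOS_ID, EOT_ID = 65003, 65000, 65002, 0
SPECIAL_IDS = {MASK_ID, PAD_ID, BOS_ID, EOT_ID}
VOCAB_SIZE = 65004  # len(tokenizer)

def mlm_mask(input_ids, mlm_prob=0.15, seed=0):
    """BERT-style masking: of the selected tokens, 80% become <mask>,
    10% a random token, 10% stay unchanged. Returns (masked_ids, labels)
    with labels = -100 at unselected positions (ignored by cross-entropy)."""
    rng = random.Random(seed)
    masked, labels = [], []
    for tok in input_ids:
        # never mask special tokens
        if tok not in SPECIAL_IDS and rng.random() < mlm_prob:
            labels.append(tok)
            roll = rng.random()
            if roll < 0.8:
                masked.append(MASK_ID)
            elif roll < 0.9:
                masked.append(rng.randrange(VOCAB_SIZE))
            else:
                masked.append(tok)
        else:
            labels.append(-100)
            masked.append(tok)
    return masked, labels
```

With `DataCollatorForLanguageModeling(tokenizer, mlm=True, mlm_probability=0.15)` the collator reads the `<mask>` id from the tokenizer config, which is why the mask token must be registered as it is here.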