Qubitium committed
Commit cb56fd5
1 Parent(s): 7cb66c0

Update README.md

Files changed (1)
  1. README.md +4 -3
README.md CHANGED
@@ -6,9 +6,10 @@ tags:
  ---
 
  ## Why should you use this and not the tiktoken included in the orignal model?
- 1. Original tokenizer pad the vocabulary to correct size with `<extra_N>` tokens but encoder never uses them
- 2. Original tokenizer use eos as pad token which may confuse trainers to mask out the eos token so model never output eos.
- 3. [NOT FIXED: INVESTIGATING] config.json embedding size of "vocab_size": 100352 does not match 100277
+ 1. This tokenizer is validated with the https://huggingface.co/datasets/xn (all languages) to be encode/decode compatible with dbrx-base tiktoken
+ 2. Original tokenizer pad the vocabulary to correct size with `<extra_N>` tokens but encoder never uses them
+ 3. Original tokenizer use eos as pad token which may confuse trainers to mask out the eos token so model never output eos.
+ 4. [NOT FIXED: INVESTIGATING] config.json embedding size of "vocab_size": 100352 does not match 100277
 
  modified from original code @ https://huggingface.co/Xenova/dbrx-instruct-tokenizer
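
The compatibility claim in item 1 of the new list, and the `<extra_N>` padding note in item 2, can be spot-checked with a short round-trip comparison. The sketch below is not part of this commit; it assumes `transformers` and `tiktoken` are installed, uses a placeholder repo id for this tokenizer, and assumes the dbrx-base tiktoken vocabulary is a cl100k-style encoding. Adjust both names to your setup.

```python
# Hedged sketch: encode/decode parity check between this HF tokenizer and a
# tiktoken encoding. The repo id and encoding name are assumptions, not
# values taken from the commit.
import tiktoken
from transformers import AutoTokenizer

hf_tok = AutoTokenizer.from_pretrained("YOUR_ORG/dbrx-base-tokenizer")  # placeholder repo id
tt_enc = tiktoken.get_encoding("cl100k_base")  # assumed base encoding for dbrx

samples = ["Hello world", "tiktoken parity?", "数字と漢字のテスト"]
for text in samples:
    hf_ids = hf_tok.encode(text, add_special_tokens=False)
    tt_ids = tt_enc.encode(text)
    assert hf_ids == tt_ids, f"id mismatch for {text!r}: {hf_ids} vs {tt_ids}"
    assert hf_tok.decode(hf_ids) == tt_enc.decode(tt_ids) == text
print("encode/decode parity holds on these samples")

# Item 2: the original tokenizer padded the vocabulary with `<extra_N>`
# tokens that the encoder never emits; count how many survive here.
extra = [t for t in hf_tok.get_vocab() if t.startswith("<extra_")]
print("extra_N-style tokens in vocab:", len(extra))
```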
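
Items 3 and 4 of the new list describe configuration-level concerns rather than encoding differences. Below is a hedged sketch of how one might inspect them; the tokenizer repo id is a placeholder, and `databricks/dbrx-base` is assumed to be the source of the original config.json with "vocab_size": 100352.

```python
# Hedged sketch: inspect pad/eos overlap and the vocab_size gap mentioned in
# items 3 and 4. Repo ids are placeholders / assumptions.
from transformers import AutoConfig, AutoTokenizer

tok = AutoTokenizer.from_pretrained("YOUR_ORG/dbrx-base-tokenizer")  # placeholder repo id
cfg = AutoConfig.from_pretrained("databricks/dbrx-base", trust_remote_code=True)  # assumed config source

# If pad and eos share an id, trainers that mask pad tokens out of the loss
# can also mask eos, so the model may never learn to emit eos (item 3).
print("pad token:", tok.pad_token, tok.pad_token_id)
print("eos token:", tok.eos_token, tok.eos_token_id)
print("pad == eos:", tok.pad_token_id == tok.eos_token_id)

# Item 4: the embedding table is sized by config.vocab_size (100352 in the
# original config) while the tokenizer exposes fewer real tokens (100277).
print("config vocab_size :", cfg.vocab_size)
print("tokenizer entries :", len(tok))
```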