## Why should you use this and not the tiktoken tokenizer included in the original model?
The original tokenizer pads the vocabulary to the required size with `<extra_N>` tokens, but its encoder never emits them. This inconsistency is detrimental to training code that may want to repurpose the unused `<extra_N>` tokens.
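As a rough illustration of the mismatch (a minimal sketch, not taken from this repo: it assumes the padding tokens are literally named `<extra_0>`, `<extra_1>`, …, and that the original tokenizer loads with `trust_remote_code`), a padding token resolves to an id by name, yet encoding its surface form never yields that id:

```py
from transformers import AutoTokenizer

# Load the original tiktoken-backed tokenizer shipped with the model.
orig = AutoTokenizer.from_pretrained('databricks/dbrx-instruct', trust_remote_code=True)

# The padding token has an id in the padded vocabulary...
pad_id = orig.convert_tokens_to_ids('<extra_0>')

# ...but the encoder splits the same text into ordinary subword pieces,
# so pad_id never shows up in encoder output.
print(pad_id, orig.encode('<extra_0>'))
```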
Modified from the original code at https://huggingface.co/Xenova/dbrx-instruct-tokenizer.
# DBRX Instruct Tokenizer
A 🤗-compatible version of the DBRX Instruct tokenizer (adapted from databricks/dbrx-instruct). This means it can be used with Hugging Face libraries including Transformers, Tokenizers, and Transformers.js.
## Example usage:
### Transformers/Tokenizers
```py
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained('Xenova/dbrx-instruct-tokenizer')
assert tokenizer.encode('hello world') == [15339, 1917]
```
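If, as the note at the top implies, this port wires the `<extra_N>` tokens up consistently, training code can look them up directly. A hypothetical check (the token name `<extra_0>` is taken from the note above and may differ from the actual vocabulary entries):

```py
# Hypothetical: a properly registered padding token resolves to a real id
# rather than the unknown-token fallback.
extra_id = tokenizer.convert_tokens_to_ids('<extra_0>')
print(extra_id)
```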
### Transformers.js
```js
import { AutoTokenizer } from '@xenova/transformers';

const tokenizer = await AutoTokenizer.from_pretrained('Xenova/dbrx-instruct-tokenizer');
const tokens = tokenizer.encode('hello world'); // [15339, 1917]
```