"vocab_size" is inconsistent with tokenizer.get_vocab()

#7
by tonyaw - opened

I got this weird data:
In config.json: "vocab_size": 32256
But: len(tokenizer.get_vocab()) == 32022
Why are they different?
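For reference, the mismatch can be reproduced like this (the repo id below is a placeholder for whichever checkpoint this discussion is attached to):

```python
from transformers import AutoConfig, AutoTokenizer

# Placeholder repo id; substitute the actual checkpoint.
model_id = "deepseek-ai/deepseek-coder-6.7b-base"

config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

print(config.vocab_size)           # 32256, read from config.json
print(len(tokenizer.get_vocab()))  # 32022, tokens the tokenizer actually maps
```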

DeepSeek org
edited Feb 4

32000 is the actual vocabulary size.
32022 is the vocabulary size plus bos, eos, and a few other special tokens.
32256 is the size of the final word embedding: our training framework, HAI-LLM, requires the embedding to be padded to this size for optimal performance.
@tonyaw
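A quick way to check this breakdown yourself (a sketch; the repo id is a placeholder and the exact special-token count can vary):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/deepseek-coder-6.7b-base"  # placeholder repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Vocabulary as seen by the tokenizer: base tokens plus added special tokens (bos, eos, ...).
print(len(tokenizer.get_vocab()))                    # 32022
print(tokenizer.all_special_tokens)                  # the extra special tokens

# Rows in the input embedding matrix: padded up to 32256 for training efficiency.
# Ids above the tokenizer vocabulary are never produced at inference time.
print(model.get_input_embeddings().weight.shape[0])  # 32256
```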
