Inquiry on Resolving Vocab Size Discrepancy Between Tokenizer and Model

#1
by SEONGRYEONG - opened

Hello,

I am currently using a Transformer-based model you developed and have encountered a discrepancy in vocab_size between the tokenizer and the model: the vocab_size reported by the tokenizer is smaller than the one the model uses. This mismatch is preventing me from using the model effectively.

Could you recommend any approaches to resolve this issue? I am particularly interested in methods for aligning the tokenizer's vocab_size with the model's, or alternatively, reducing the model's vocab_size to match the tokenizer's. I would also appreciate insight into what causes such discrepancies and how to prevent them in the future.
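For reference, this is roughly how I have been observing the mismatch and the kind of alignment I had in mind (a minimal sketch using the Hugging Face Transformers API; the checkpoint name is a placeholder, and resizing the embeddings is only one possible approach):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "your-org/your-model"  # placeholder for the actual checkpoint
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForCausalLM.from_pretrained(checkpoint)

# Compare the tokenizer's vocabulary with the model's embedding matrix.
print("tokenizer vocab size:", len(tokenizer))
print("model embedding rows:", model.get_input_embeddings().weight.shape[0])

# One possible fix: resize the model's token embeddings to match the
# tokenizer. This trims (or adds) embedding rows in place.
model.resize_token_embeddings(len(tokenizer))
```

I am not sure whether truncating the embedding matrix this way is safe for your model (for example, if the extra rows are reserved or padded for efficiency), which is part of why I am asking.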

Your expertise and advice on this matter would be invaluable, and I would be grateful for any guidance on key considerations when addressing this issue.

Thank you for your time and assistance.

Best regards,

SeongRyeong
