Vocab size mismatch in tokenizer.model?

#1
by audreyt - opened

```
llama.cpp> python3 convert.py ../b.11.0.0
Exception: Vocab size mismatch (model has 56064, but /Users/audreyt/w/b.11.0.0/tokenizer.model has 56020).
```
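The error means the `vocab_size` declared in the model's `config.json` (56064) is larger than the number of pieces actually stored in `tokenizer.model` (56020), so `convert.py` refuses to proceed. A minimal sketch of that consistency check, using the reported numbers as hypothetical in-memory values (in practice `tokenizer_vocab` would come from something like `SentencePieceProcessor.get_piece_size()`):

```python
import json

# Hypothetical config fragment; the real value lives in the model's config.json.
config = json.loads('{"vocab_size": 56064}')

# Hypothetical tokenizer vocab count; the real value is the number of
# pieces in tokenizer.model (e.g. via sentencepiece's get_piece_size()).
tokenizer_vocab = 56020

# The converter requires these to agree; here they differ by 44 entries.
diff = config["vocab_size"] - tokenizer_vocab
if diff != 0:
    print(f"Vocab size mismatch (model has {config['vocab_size']}, "
          f"but tokenizer.model has {tokenizer_vocab}).")
```

A mismatch like this is usually resolved either by correcting `vocab_size` in the config or by adding the missing tokens to the tokenizer so both sides agree.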

TAIDE org

Thank you for the feedback.
We have updated the configuration files to resolve this issue.
