Is it possible to reproduce tokenizer.model?
#193
opened by Asmedeus
I tried to convert an HF model to GGUF with the llama.cpp repo, but I get an error message when I run this code:
tokenizer = llama3_tokenizer("tokenizer.gguf")
for token, rank in (line.split() for line in contents.splitlines() if line)
ValueError: too many values to unpack (expected 2)
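For context, the line in the traceback looks like a tiktoken-style BPE loader: it expects tokenizer.model to be a plain-text file where every non-empty line is a base64-encoded token followed by an integer rank, so pointing it at a binary .gguf file makes line.split() return more than two fields. Below is a minimal sketch of that parsing logic, assuming the Llama 3 tiktoken format; load_bpe_ranks is a hypothetical helper name, not part of llama.cpp or tiktoken.

```python
import base64

# Sketch of a tiktoken-style BPE file parser (assumption: each non-empty
# line of tokenizer.model is "<base64-encoded token> <integer rank>").
def load_bpe_ranks(path: str) -> dict[bytes, int]:
    with open(path, "r", encoding="utf-8") as f:
        contents = f.read()
    ranks: dict[bytes, int] = {}
    for line in contents.splitlines():
        if not line:
            continue
        # This is the unpacking that raises "too many values to unpack"
        # when the file is not in the two-column text format above.
        token, rank = line.split()
        ranks[base64.b64decode(token)] = int(rank)
    return ranks
```

If that is what the loader expects, a .gguf file (which embeds the tokenizer in a binary format) presumably cannot be passed to it directly.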