You think you could re-quant with the regex fix?

#3 · opened by YearZero

https://github.com/ggerganov/llama.cpp/issues/7062

"For anyone wanting to use this:

Edit your HF model's tokenizer.json file
Swap the two patterns in the pretokenizer
Convert to gguf using llamacpp
Profit"

Alternatively, there may be a llama.cpp fix forthcoming (not sure yet), and we could just wait for that too.

Another commenter says:
"Note to users: there is no need to "re-quant". Replacing the regex pattern under LLAMA_VOCAB_PRE_TYPE_LLAMA3 in the llama.cpp file before building/compiling will fix the issue (at least for the fingerprint; I didn't test anything else).

[NOTE: this is the current workaround until the llama.cpp devs study this issue]

I tested for both llama.cpp CPU and GPU and I get the fingerprint. I also tested making this change to koboldcpp (but for the default BPE regex, as I cannot use override-kv options in koboldcpp) and it worked perfectly. I have yet to test using the server, but I assume it will also work."
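For anyone wanting to script that source edit before building: a rough Python sketch, with placeholder strings where the old and new regexes go (take the actual patterns from the issue linked above; nothing here is the official fix):

```python
from pathlib import Path

# Placeholders -- substitute the regex strings from the linked GitHub issue.
OLD_PATTERN = "..."  # regex currently under LLAMA_VOCAB_PRE_TYPE_LLAMA3
NEW_PATTERN = "..."  # replacement regex from the workaround

src = Path("llama.cpp/llama.cpp")  # the file the commenter edits
text = src.read_text(encoding="utf-8")
if OLD_PATTERN not in text:
    raise SystemExit("pattern not found -- the source may have changed")
src.write_text(text.replace(OLD_PATTERN, NEW_PATTERN), encoding="utf-8")
# Rebuild llama.cpp afterwards for the change to take effect.
```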

Yeah, I've been keeping my eye on that. I'm hoping there'll be a real, full fix merged soon; ideally one that doesn't involve changing the existing official files.

What's the status of the llama.cpp fix? I'm not a techie; that issue on GitHub is closed, but I don't understand whether it has been fixed.

It was a fix on the generation side of things.

That said, I'll probably be remaking this today anyway, because there was a change in Meta's repo AND bf16 conversion is about to be added to llama.cpp, so it should yield slightly more accurate quants.

"bf16 conversion is about to be added to llama.cpp"

BTW, this new comment about bf16 says there is "no statistically significant advantage over FP16": https://github.com/ggerganov/llama.cpp/issues/7062#issuecomment-2106158969
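For context on why bf16 vs. fp16 matters at all: bf16 keeps float32's 8-bit exponent (full fp32 range, coarser mantissa), while fp16 spends more bits on mantissa but tops out around 65504, so a bf16 → fp16 intermediate step can clip outlier values even though typical weights survive fine. A quick PyTorch illustration (mine, not from the thread):

```python
import torch

# A magnitude bf16 can represent but fp16 cannot (fp16 max is ~65504).
x = torch.tensor([1.0e6], dtype=torch.bfloat16)
print(x)                    # ~1e6, stored with a coarse mantissa
print(x.to(torch.float16))  # inf -- overflows fp16's range

# Ordinary weight magnitudes round-trip through either format,
# which is consistent with the "no statistically significant
# advantage" measurement: the difference lives in the outliers.
w = torch.tensor([0.0123], dtype=torch.bfloat16)
print(w.to(torch.float16))  # ~0.0123, no clipping
```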
