llama.cpp doesn't work w/ Shisa, has a bug that affects certain BPE tokenizers like ours

#1 by leonardlin - opened

GGUFs can be created, but currently llama.cpp has a known bug that causes it to crash with certain tokenizers:

```
GGML_ASSERT: llama.cpp:2683: codepoints_from_utf8(word).size() > 0
Aborted (core dumped)
```
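For anyone curious what that assert is complaining about: my read (an assumption on my part, not something confirmed in the llama.cpp issue) is that with byte-level BPE a single token can cover only *part* of a multi-byte UTF-8 character, so its raw bytes aren't valid UTF-8 on their own and `codepoints_from_utf8()` comes back empty. Here's a rough sketch that scans a vocab for such tokens; the repo id is the model this thread is about, everything else is my own illustration, not llama.cpp's actual conversion code:

```python
from transformers import AutoTokenizer

def bytes_to_unicode():
    # The standard GPT-2 byte<->unicode table used by byte-level BPE vocabs.
    bs = (list(range(ord("!"), ord("~") + 1))
          + list(range(ord("\u00a1"), ord("\u00ac") + 1))
          + list(range(ord("\u00ae"), ord("\u00ff") + 1)))
    cs = bs[:]
    n = 0
    for b in range(256):
        if b not in bs:
            bs.append(b)
            cs.append(256 + n)
            n += 1
    return dict(zip(bs, map(chr, cs)))

# Invert the table to recover each token's raw bytes from its vocab string.
byte_decoder = {c: b for b, c in bytes_to_unicode().items()}

tok = AutoTokenizer.from_pretrained("augmxnt/shisa-7b-v1")
bad = 0
for word in tok.get_vocab():
    if any(ch not in byte_decoder for ch in word):
        continue  # added/special tokens are stored as plain text
    raw = bytes(byte_decoder[ch] for ch in word)
    try:
        raw.decode("utf-8")
    except UnicodeDecodeError:
        bad += 1  # a token like this would trip the assert above
print(f"{bad} tokens are not standalone valid UTF-8")
```

Whether a given token actually crashes the conversion depends on llama.cpp's code path, but this should give a feel for why Japanese-heavy BPE vocabs are hit hardest.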

This bug was reported in September and October 2023, and it seems to also affect at least ELYZA-japanese-Llama-2-7b-fast-instruct and InternLM.

I did some poking around and submitted a new issue; anyone who wants to track the status (or have a poke at it themselves) can look here: https://github.com/ggerganov/llama.cpp/issues/4360

AUGMXNT org

Looks like there is a (rather involved) workaround for this in llama.cpp's handling of extended Unicode here: https://github.com/ggerganov/llama.cpp/issues/4360#issuecomment-1846617653

Just leaving it here for those who really need to use llama.cpp for some reason (GPTQ and AWQ quants have both been tested and work fine).
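If it helps anyone, the working quants load through plain transformers. Untested sketch; the repo id below is hypothetical (substitute whichever Shisa GPTQ/AWQ quant you actually have), and you'll need `autoawq` (or `auto-gptq` for GPTQ) plus `accelerate` installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "augmxnt/shisa-7b-v1-awq"  # hypothetical repo id, not a real upload
tok = AutoTokenizer.from_pretrained(repo)
# transformers picks up the quantization config from the repo automatically.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tok("こんにちは、", return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=50)
print(tok.decode(out[0], skip_special_tokens=True))
```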

Thanks, Leonard. I was just in the middle of trying to convert Shisa to GGUF. Good to hear the other quant methods are OK.

AUGMXNT org

@alexkoo300 Take a look at https://huggingface.co/mmnga/shisa-7b-v1-gguf - mmnga managed to use a modified conversion that uses SPM instead of BPE. Seems to work!
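Once you've downloaded one of the GGUF files from that repo, loading it should just be the usual llama-cpp-python flow. Quick sketch; the filename is my guess at one of the quant files, so check the repo for the exact names:

```python
from llama_cpp import Llama

# Filename is an assumption - pick whichever quant file you downloaded.
llm = Llama(model_path="shisa-7b-v1.Q4_K_M.gguf")
print(llm("日本の首都は", max_tokens=32)["choices"][0]["text"])
```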
