
Quants

#2 · opened by leonardlin

Thanks @TheBloke for doing his thing :)

I'll keep this list updated if GGUFs come along (see https://huggingface.co/augmxnt/shisa-7b-v1/discussions/1 to follow along on that; basically, llama.cpp is currently bugged for most BPE tokenizers, so there's no point in quantizing yet).

It looks like mmnga was able to get GGUF conversion working with a custom_shisa.py conversion script that combines the extra BPE characters into the spm tokenizer. Seems to run great, thanks!
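For anyone curious about the general idea (this is not mmnga's actual custom_shisa.py, just a rough sketch with placeholder filenames): the extra tokens can be appended to the base SentencePiece model as user-defined pieces before running the usual llama.cpp conversion, something along these lines.

```python
# Rough illustrative sketch only (not custom_shisa.py): append extra tokens
# from an added_tokens.json-style file into the base SentencePiece model as
# USER_DEFINED pieces, so llama.cpp's spm tokenizer path can handle them.
# Filenames and the JSON format are assumptions.
import json
from sentencepiece import sentencepiece_model_pb2 as sp_pb2

BASE_SPM = "tokenizer.model"        # base spm model (placeholder path)
EXTRA_TOKENS = "added_tokens.json"  # assumed {"token": id, ...} mapping
OUT_SPM = "tokenizer_merged.model"

# Load the existing SentencePiece model proto
m = sp_pb2.ModelProto()
with open(BASE_SPM, "rb") as f:
    m.ParseFromString(f.read())

existing = {p.piece for p in m.pieces}

with open(EXTRA_TOKENS) as f:
    extra = json.load(f)

# Append each new token as a user-defined piece; note that tokens stored in
# byte-level BPE form may need remapping to plain text first.
for token in extra:
    if token in existing:
        continue
    piece = sp_pb2.ModelProto.SentencePiece()
    piece.piece = token
    piece.score = 0.0
    piece.type = sp_pb2.ModelProto.SentencePiece.USER_DEFINED
    m.pieces.append(piece)

with open(OUT_SPM, "wb") as f:
    f.write(m.SerializeToString())
```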

If anyone does their own quants (EXLs, etc.), feel free to post them here.
