Could you please convert this to GGUF?

#1 opened by MouhuAI

Unfortunately, the model architecture (RecurrentGemma) is not supported by llama.cpp/GGUF at the moment. See https://huggingface.co/google/recurrentgemma-2b-it

If support is added (which could happen at any time, or never), you can notify me and I will quantize it.
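
For context, here is a minimal sketch of the usual llama.cpp conversion-and-quantization workflow that would apply if RecurrentGemma support were added. The directory and output file names are placeholders, and the script/binary names assume a recent llama.cpp checkout:

```python
# Sketch only: assumes llama.cpp supports the model architecture,
# which is not yet the case for RecurrentGemma.
import subprocess

MODEL_DIR = "recurrentgemma-2b-it"              # local HF checkout (placeholder)
F16_GGUF = "recurrentgemma-2b-it-f16.gguf"      # unquantized intermediate file
QUANT_GGUF = "recurrentgemma-2b-it-Q4_K_M.gguf" # quantized output

# 1. Convert the Hugging Face checkpoint to an unquantized GGUF file.
#    convert_hf_to_gguf.py ships with llama.cpp and fails for
#    architectures it does not recognize.
subprocess.run(
    ["python", "llama.cpp/convert_hf_to_gguf.py", MODEL_DIR,
     "--outfile", F16_GGUF, "--outtype", "f16"],
    check=True,
)

# 2. Quantize the f16 GGUF to a smaller format (Q4_K_M as an example).
subprocess.run(
    ["llama.cpp/llama-quantize", F16_GGUF, QUANT_GGUF, "Q4_K_M"],
    check=True,
)
```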

mradermacher changed discussion status to closed

Sorry, wrong URL. The URL I meant to paste (but Chrome refused) was: https://github.com/ggerganov/llama.cpp/issues/6564
