gguf is required :)
#11 by flymonk - opened
Could you support the GGUF format? Thank you very much.
Yes, I second this; we need GGUF.
Looking for the GGUF too.
In the meantime, how could I test it? With Ollama?
Read the Ollama docs on how to create a new model.
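For reference, the usual flow in the Ollama docs is a Modelfile that points at a local GGUF file; a minimal sketch (the file path and parameters below are placeholders, not specific to this model):

```
# Modelfile -- FROM points at a local GGUF file (placeholder path)
FROM ./c4ai-command-r-v01.gguf

# optional sampling parameter, shown as an example
PARAMETER temperature 0.7
```

Then `ollama create command-r -f Modelfile` and `ollama run command-r`. Note this only helps once Ollama's bundled llama.cpp actually supports the architecture.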
Apparently, it won't work with Ollama right now (from their Discord).
I was able to convert the safetensors to a GGUF model. I'm still working on adding inference support to llama.cpp.
See PR: https://github.com/ggerganov/llama.cpp/pull/6033
Where is the GGUF format of this model?
Starting to add the GGUF files here: https://huggingface.co/andrewcanis/c4ai-command-r-v01-GGUF
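For anyone downloading these files, a quick sanity check is possible because every GGUF file starts with a fixed little-endian header (4-byte magic `GGUF`, then a uint32 version, a uint64 tensor count, and a uint64 metadata key/value count). A small sketch that reads just that header, independent of any model:

```python
import struct

def read_gguf_header(path):
    """Read the fixed GGUF header: 4-byte magic, uint32 version,
    uint64 tensor count, uint64 metadata KV count (all little-endian)."""
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"not a GGUF file: magic={magic!r}")
        version, n_tensors, n_kv = struct.unpack("<IQQ", f.read(20))
        return {"version": version, "tensor_count": n_tensors, "kv_count": n_kv}
```

Useful for confirming a multi-gigabyte download is not truncated or mislabeled before loading it into llama.cpp.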
Great work, Andrew! Both on the GGUF models and especially on the PR you made in llama.cpp.
Thank you
Is there a GPTQ or AWQ version?