Hi, I see the GGUF files model-q6k.gguf and model-q4k.gguf. How do I run them? It looks like the original llama.cpp does not support madlad?