Not seeing model loaded into RAM, no errors, but not functioning properly

by HartLabs

I am running koboldcpp 1.41 on Linux, and the CodeLlama models in GGUF or GGML format (I can't test GPTQ right now) are not getting loaded into RAM. Other, non-CodeLlama models load as expected. The CodeLlama models give answers that clearly indicate something is broken, but nothing in the terminal indicates an error.
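In case it helps, this is roughly how I'm launching it (the model filename is just an example) and how I'm checking whether the weights actually land in RAM:

```bash
# Launch koboldcpp with a CodeLlama GGUF (filename is an example, not my exact file)
python koboldcpp.py --model codellama-13b-instruct.Q4_K_M.gguf --contextsize 4096 --threads 8

# In a second terminal, watch memory usage while the model loads;
# with other models the "used" figure jumps by several GB, with CodeLlama it barely moves
watch -n 1 free -h
```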

Anyone have ideas?
