GGGGGGGGGGGGGGGGGGGGGGGGGGGG
After the answer I receive a run of GGGGGGGGGG appended, or sometimes nothing but GGGGGGGGGG. The same happens with DeepSeek-Coder-V2-Lite-Instruct. Has anyone else come across this? llama.cpp - latest
Yes, it happens often enough to make the model near impossible to work with.
Can you try playing with the rope settings at all? The original looks like it has rope set to 500, but that doesn't seem to make sense to me...
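For anyone who wants to experiment, here is a rough sketch of overriding the RoPE settings at load time with llama-cli; the base/scale values below are placeholder guesses to poke at, not known-good numbers for this model:

# Not a known fix, just an experiment: override the RoPE base frequency and
# scale and check whether the trailing GGGG... output changes.
# 10000 / 1.0 are placeholder test values, not the model's official settings.
./llama-cli \
  --model ~/docs/models/codegeex4-all-9b/codegeex4-all-9b-Q4_K_L.gguf \
  --rope-freq-base 10000 \
  --rope-freq-scale 1.0 \
  --ctx-size 4096 -ngl 12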
INFO [ launch_slot_with_task] slot is processing task | tid="140318474858496" timestamp=1720791770 id_slot=0 id_task=15836
INFO [ update_slots] kv cache rm [p0, end) | tid="140318474858496" timestamp=1720791770 id_slot=0 id_task=15836 p0=0
INFO [ update_slots] kv cache rm [p0, end) | tid="140318474858496" timestamp=1720791772 id_slot=0 id_task=15836 p0=2048
I noticed that this model does not go beyond p0=2048. Could this be the cause?
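A sketch of a re-test that would rule out a context cap, assuming the usual llama.cpp behaviour where prompt processing advances in --batch-size chunks (typically 2048), so p0=2048 on its own may just be the second chunk rather than a hard limit:

# Assumption: p0 steps forward one --batch-size chunk at a time, so the value
# 2048 itself is not necessarily a ceiling. To rule out a context cap, start
# the server with an explicit, larger --ctx-size and re-test:
./llama-server \
  --model ~/docs/models/codegeex4-all-9b/codegeex4-all-9b-Q4_K_L.gguf \
  --ctx-size 8192 \
  --batch-size 2048 \
  -ngl 12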
Can you try playing with the rope settings at all? The original looks like it has rope set to 500, but that doesn't seem to make sense to me...
Since you're here: tell me, what do you think is the best model for C++ at the moment? Preferably an uncensored one, of course XD
With the latest llama.cpp, removing the system prompt fixed this issue for me (trying Q4_K_M). Command below; please change --threads and -ngl as needed:
./llama-cli --model ~/docs/models/codegeex4-all-9b/codegeex4-all-9b-Q4_K_L.gguf --color --threads 11 --keep -1 --n-predict -1 --repeat-penalty 1.1 --ctx-size 4096 --interactive --simple-io --in-prefix " <|user|>\n" --in-suffix " <|assistant|>\n" -p "[gMASK] <sop> <|system|>\n" -e --multiline-input --no-display-prompt --conversation --no-mmap -ngl 12
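In case someone hits this through the HTTP server instead (the update_slots logs above look like llama-server), a rough equivalent is sketched below; it assumes the server picks up the chat template embedded in the GGUF, so removing the system prompt then happens on the client side by simply not sending a "system" message in the request:

# Rough llama-server counterpart of the llama-cli command above; values mirror
# that command and may need adjusting for your hardware. Chat formatting comes
# from the GGUF's template, so just omit the "system" message in requests.
./llama-server \
  --model ~/docs/models/codegeex4-all-9b/codegeex4-all-9b-Q4_K_L.gguf \
  --ctx-size 4096 \
  --threads 11 \
  --no-mmap \
  -ngl 12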
I'd probably use Codestral, since it's the only one that explicitly mentions C++ in its dataset, I think?
That said, I'd bank on codegemma 2 being a beast if it ever comes out.
I'm using the llama.cpp Q8 version and have never experienced anything like that.
I'm using the llama.cpp Q8 version and have never experienced anything like that.
??? wow
I'd probably use Codestral, since it's the only one that explicitly mentions C++ in its dataset, I think?
That said, I'd bank on codegemma 2 being a beast if it ever comes out.
I use it too, but it's slower than DeepSeek, for example.