This LLM seems to be trolling me??
I have encountered this strange behavior a few times, but this last one has been especially entertaining. I asked a programming concept question about Rust and this is what I got?!! Since I am new to the whole AI and LLM world, I am not sure whether the fault comes from the original trained model or from TheBloke's model. Has anyone else faced similar issues? Please take a look at the attached screenshot.
Did you offload some of the layers to the GPU in LM Studio? If yes, try running the model without GPU offload, and make sure you have enough RAM to load it.
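If you want to take LM Studio out of the equation entirely, one quick way to test CPU-only inference is loading the same GGUF with llama-cpp-python and setting `n_gpu_layers=0` (the model path and prompt below are just placeholders, swap in your own):

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# n_gpu_layers=0 forces CPU-only inference -- the same idea as
# dragging the GPU offload slider to 0 in LM Studio
llm = Llama(
    model_path="./deepseek-coder-6.7b-instruct.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=0,
    n_ctx=4096,
)

# Illustrative prompt only; use whatever format your model card recommends
out = llm("### Instruction:\nExplain ownership in Rust.\n### Response:\n", max_tokens=256)
print(out["choices"][0]["text"])
```

If the output is clean here but garbled with offload enabled, that points at the GPU path rather than the model itself.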
Thanks for the reply!
My GPU has enough RAM (36GB MacBook with unified memory), so that is not the issue. But I don't really get the point: not having enough RAM should just make the model considerably slower, so how come it suddenly goes all philosophical on a programming question? Did you notice what it started answering in the screenshot above?
@skynet24 I am not 100% sure, but I believe DeepSeek Coder has some tokenizer issues in llama.cpp, so it can produce gibberish output sometimes. As an alternative, I would recommend CodeQwen, as I believe it is newer and better.
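If you want to check whether the tokenizer really is the culprit, a rough sanity check is a tokenize/detokenize round trip on the GGUF, something like this sketch with llama-cpp-python (model path is again a placeholder). If text doesn't survive the round trip, the tokenizer is mangling things before generation even starts:

```python
from llama_cpp import Llama

# vocab_only=True loads just the tokenizer/vocab, not the full weights
llm = Llama(
    model_path="./deepseek-coder-6.7b-instruct.Q4_K_M.gguf",  # placeholder path
    vocab_only=True,
)

text = 'fn main() { println!("hello"); }'
tokens = llm.tokenize(text.encode("utf-8"), add_bos=False)
roundtrip = llm.detokenize(tokens).decode("utf-8", errors="replace")

# A healthy tokenizer should reproduce the input; note some tokenizers
# normalize leading whitespace, so a tiny whitespace difference can be benign
print(repr(text))
print(repr(roundtrip))
print("OK" if roundtrip == text else "tokenizer mangled the text")
```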