Output is only colons and dashes

#1
by BlahBlah1 - opened

When I try to generate anything at all from the model, it's only dashes and dots. Any way to fix it? I'm using rope-freq-scale set to 0.125 for a ctx of 32k.
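For reference, a minimal llama-cpp-python sketch of the setup being described (the model path is a placeholder; a linear RoPE scale of 0.125 stretches a 4k-trained window by 8x to 32k):

```python
from llama_cpp import Llama

# Setup as described above: 32k context with linear RoPE scaling.
# The model path is a placeholder, not the actual file in question.
llm = Llama(
    model_path="./model-32k.Q4_K_M.gguf",
    n_ctx=32768,          # requested context window
    rope_freq_scale=0.125,  # 4k * (1 / 0.125) = 32k
)
```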

Hello! Sorry for the late response, but I have been quite busy in the last few days.

Which version of llama.cpp are you using? The Python version seems to have some issues; use the C++ version instead. Additionally, you may run into problems like the one you mentioned if you do not have enough memory wherever you run the inference (i.e., on CPU or GPU).

I'm using the Python version and running it on Colab. Colab offers a T4 GPU with 15 GB of VRAM; would that be an issue? I'll try changing to llama.cpp and test it. Thanks for the response!
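As a rough sanity check of the memory angle (a sketch assuming LLaMA-2-7B's published architecture and an unquantized fp16 KV cache): the cache alone at 32k context comes to about 16 GiB, already more than a T4's 15 GB before the model weights are even loaded.

```python
# Back-of-the-envelope fp16 KV-cache size for a LLaMA-2-7B-shaped model
# at 32k context (published LLaMA-2-7B architecture numbers; weights,
# activations, and scratch buffers all come on top of this).
n_ctx = 32768   # context length
n_layers = 32   # transformer layers
n_heads = 32    # attention heads
head_dim = 128  # dimension per head
bytes_fp16 = 2

kv_bytes = 2 * n_ctx * n_layers * n_heads * head_dim * bytes_fp16  # K and V
print(f"KV cache: {kv_bytes / 2**30:.0f} GiB")  # -> 16 GiB
```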

Good luck!

@BlahBlah1 any progress? I guess the Python version is still broken; my output is ''''''''''

@reddiamond nope, it never worked. I just switched to another model.

@BlahBlah1 which one did you use? I am using Llama 2 Chat 7B, but a 4k context window is not enough for me...

@reddiamond what is your use case? Because you could try chunking the input down to the model's context window and repeating that process until the end, as in the sketch below.
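Roughly what that could look like with llama-cpp-python (the prompt wording, token budgets, and helper name are illustrative, not a tested recipe):

```python
from llama_cpp import Llama

def answer_in_chunks(llm: Llama, text: str, n_ctx: int = 4096, reserve: int = 512) -> list[str]:
    """Split `text` into token chunks that fit the context window,
    leaving `reserve` tokens for the prompt template and the answer."""
    tokens = llm.tokenize(text.encode("utf-8"))
    step = n_ctx - reserve
    answers = []
    for start in range(0, len(tokens), step):
        # Decode one window's worth of tokens back to text.
        chunk = llm.detokenize(tokens[start:start + step]).decode("utf-8", errors="ignore")
        out = llm(f"Answer based on this part of the input:\n{chunk}", max_tokens=reserve)
        answers.append(out["choices"][0]["text"])
    return answers
```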

@BlahBlah1 I am using LLaMA for graph querying, so it must go over all nodes and relations... I tested the 32k model in my RAG app, which works very well with the normal q4 model, but my output was '''''''''. I tested both the old model and the new 32k-context model on the llama.cpp and llama-cpp-python examples, and there was a problem only with the 32k model... It did not generate any response. So there must be a problem with the library.
