Using oobabooga to load model fails for 70B chat GGML Q2_K and Q3_K_S

#2
by nps798 - opened

  in llamacpp_loader
    model, tokenizer = LlamaCppModel.from_pretrained(model_file)
  File "~/oobabooga_text_generation_webui/text-generation-webui/modules/llamacpp_model.py", line 58, in from_pretrained
    result.model = Llama(**params)
  File "/home/MYNAME/anaconda3/envs/textgen/lib/python3.10/site-packages/llama_cpp/llama.py", line 305, in __init__
    assert self.model is not None
AssertionError

Is it related to a memory issue? I only have 32 GB of RAM.
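(As an aside, the RAM question can be sanity-checked before attempting a load: a quantized GGML file is read fully into memory, so comparing its size against available RAM is a quick first test. The helper below is a hypothetical sketch; the 1.1 overhead factor is a rough guess for KV cache and buffers, not a figure from llama.cpp, and as the reply below notes, the real cause here was lack of 70B support, not memory.)

```python
import os

def can_fit_in_ram(model_path: str, ram_bytes: int, overhead: float = 1.1) -> bool:
    """Rough pre-flight check before handing a GGML file to llama.cpp.

    The model weights are loaded (or mmapped) whole, so the file size is a
    lower bound on memory use; `overhead` pads for KV cache and scratch
    buffers (the factor is an assumption, not a llama.cpp constant).
    """
    size = os.path.getsize(model_path)
    return size * overhead <= ram_bytes
```

For a ~27 GB Q3_K_S 70B file on a 32 GB machine, this kind of check would already be borderline before llama.cpp even starts.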

Yes, please see the README. Not supported yet.

Oh, I overlooked that. I should have checked more carefully beforehand.

Thanks for pointing that out!
