How to join the Q6 files?

#2
by AIGUYCONTENT - opened

I have downloaded both Q6 files but cannot figure out how to join them. I am using Oobabooga.

You shouldn't need to join them manually. If you download them all and select part 1, it should automatically load them all. If it doesn't, you may need to update to the latest text-gen-webui (possibly even the dev branch).

Otherwise, you can install llama.cpp and use ./gguf-split with the --merge option.
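
For example, something along these lines should merge the shards back into one file (the binary may be named llama-gguf-split in newer llama.cpp builds; adjust the paths to wherever your files actually live):

# point it at the first shard and give it an output filename; it finds the rest
./gguf-split --merge Meta-Llama-3-70B-Instruct-Q6_K-00001-of-00002.gguf Meta-Llama-3-70B-Instruct-Q6_K.gguf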

Ah, so I did download both manually to the /models folder of Oobabooga (before I attempted to merge them with the Windows command prompt, which was a complete failure. Even though I was able to merge the two files, Oobabooga gave me an error: "llama_model_load: error loading model: invalid split file: models\Meta-Llama-3-70B-Instruct-Q6_K.gguf").

I loaded Meta-Llama-3-70B-Instruct-Q6_K-00001-of-00002.gguf and it loaded in only 12 seconds. I only have a 4090 and 64 GB of DDR5 (I offloaded 34 layers to the GPU, which resulted in 23.4/24 GB of VRAM usage and only 4 GB of shared memory being used). I ran a test query in chat-instruct mode in Oobabooga and asked it what its name was. It said, "I'm Lumina, a self-hosting LLM sitting on your office computer, and I'm thrilled to assist you with any writing tasks or questions you may have."

Does it matter which of the two Q6 files I load?

I'm getting about 0.37 tokens per second.

So...nothing more I need to do besides save my allowance money for more VRAM? : )

If you're getting 0.37 tokens per second, then try the Q4 instead of the Q6 (or an even lower-bit quant), but it will still probably be too slow. Maybe consider the Llama 3 8B until you have somewhere to run the Llama 3 70B.

It's slow...but do-able.

Don't quote me on this, but I think 34 layers (especially on Windows) is too many; even if your VRAM usage didn't hit 24 GB, it's possible that Windows is doing some silly silent offloading.

Try going down to 30 layers and see if performance is any better.
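
If it helps to see the equivalent flags, here's a rough sketch of what that looks like when running llama.cpp's CLI directly instead of through the text-gen-webui loader (the binary is called main in older builds, and the prompt is just illustrative):

# offload 30 layers to the GPU and run a quick prompt to compare speed
./llama-cli -m models/Meta-Llama-3-70B-Instruct-Q6_K-00001-of-00002.gguf -ngl 30 -c 4096 -p "What is your name?"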

Yeah, I have to run the Q4 of this model on 2 x 24 GB GPUs to prevent CPU offloading.

That's one thing, but there are also some NVIDIA drivers that will just silently take any overflowing VRAM and offload it onto system RAM. That's great if you're just barely, accidentally going over on non-critical memory, but llama.cpp does a MUCH better job of splitting the load, so it's better to make sure you aren't hitting that edge case of "dumb" offloading and instead rely purely on llama.cpp's "smart" offloading.

So I lowered the GPU offloading to 30 layers and it was 0.26 tokens per second. However, shared VRAM went down from ~4 GB to 1.9 GB, while VRAM used was still 23.4 GB. I then went down to 28 layers offloaded and got the same thing... but with only ~17 GB of VRAM being used.

I did not have llama.cpp installed, but I just installed it now. Do I need to do anything else, or do I just restart Oobabooga and it will now use llama.cpp to optimize things?


I take that back... I'm having issues installing llama.cpp. Getting this error message:
note: This error originates from a subprocess, and is likely not a problem with pip.
ERROR: Failed building wheel for llama-cpp-python
Failed to build llama-cpp-python
ERROR: Could not build wheels for llama-cpp-python, which is required to install pyproject.toml-based projects

Yeah, just pip install the wheel and setuptools packages, then do whatever you were trying to do.
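
In case the exact commands help, a minimal sketch of that sequence (build prerequisites for llama-cpp-python can vary by version and platform, so treat this as a starting point):

# install build prerequisites, then retry the wheel build
pip install --upgrade pip wheel setuptools
pip install llama-cpp-python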

