Warm-up period?

#2
by gtkunit - opened

Hi, does anyone know if imatrix quants need some kind of warm-up period?

When I load the IQ4K_M here with some (16) layers offloaded to the GPU, I get about 0.5 t/s during the first few minutes, but after that it settles at what feels like a solid 3 t/s. Non-imatrix quants don't seem to do this, and previously I only used imatrix quants when I could fully offload them to the GPU (so I never ran into this warm-up thing).
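
For context, the setup is a partial offload, roughly the Python equivalent of the sketch below. I'm actually running llama-cli built from source, so this isn't my exact command; I'm using llama-cpp-python here only because it's what tgw wraps (as far as I know), and the filename, context size, and prompt are just placeholders to show what I mean by 16 layers on the GPU:

```python
# Rough illustration only: a partial offload like mine, expressed via
# llama-cpp-python. I'm actually running llama-cli built from source;
# the filename, context size, and prompt below are placeholders.
from llama_cpp import Llama

llm = Llama(
    model_path="model-IQ4K_M.gguf",  # placeholder filename
    n_gpu_layers=16,                 # only 16 layers on the GPU, the rest stays on the CPU
    n_ctx=4096,                      # placeholder context size
)

# Streamed generation; this is where I see ~0.5 t/s at first and ~3 t/s later.
for chunk in llm("Hello,", max_tokens=256, stream=True):
    print(chunk["choices"][0]["text"], end="", flush=True)
```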

This isn't much of a problem, but I would have skipped these quants if I hadn't left it running. From searching around, this doesn't seem to be common knowledge, so a lot of people may be missing out on these awesome quants. I usually stick with text-generation-webui (tgw), but I've been using llama.cpp built from source for this one because support hadn't been merged into tgw yet. I'll try tgw now.

Well, every piece of software is a bit different, but models do not and cannot require a warm-up; they are immutable blocks of data.

What you see could be explained by your system having to swap out other data, by your inference engine loading the data only on first use (and then using a bad way to measure tokens/s), or possibly by including the prompt processing step in the generation time. That would also explain why it isn't common knowledge: it is either an artefact of a bad measurement or something specific to your system (such as lack of memory, bad scheduling, or background load).
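
To illustrate the measurement artefact: if the time spent on prompt processing is folded into the tokens/s calculation, the apparent speed starts low and slowly climbs towards the real decode speed, even though nothing about the model changes. A quick sketch, where all numbers are assumptions chosen only to show the shape of the curve:

```python
# Made-up numbers, just to show the shape of the artefact: fold a few
# minutes of prompt processing into the tokens/s calculation and the
# apparent speed starts low, then creeps towards the real decode speed.
prompt_eval_seconds = 300.0   # assumption: slow prompt processing with partial offload
decode_tokens_per_s = 3.0     # assumption: the real, steady generation speed

for generated_tokens in (30, 180, 600, 1800):
    elapsed = prompt_eval_seconds + generated_tokens / decode_tokens_per_s
    apparent = generated_tokens / elapsed
    print(f"{generated_tokens:5d} tokens: apparent {apparent:.2f} t/s, "
          f"real {decode_tokens_per_s:.2f} t/s")

# Prints roughly 0.10, 0.50, 1.20, 2.00 t/s: it looks like a warm-up,
# but it is purely a question of what is divided by what.
```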

mradermacher changed discussion status to closed

Thanks for the answer and sorry for using the wrong term.
llama.cpp, through its llama-cli, starts output immediately, but it's slow at first and then speeds up.
text-generation-webui, using llama.cpp, handles it (!) slightly differently. It displays "Prompt evaluation" for roughly as long as llama-cli spends outputting text slowly, but tgw doesn't start output until this step is complete, and generation is fast from then on. Which is pretty elegant, and probably why most users don't run into this. Ollama probably handles it in a similar way. So mystery solved, I guess.
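To put it another way, the difference seems to be only in where the prompt-evaluation time shows up. A rough sketch of the two ways the same run can be reported; fake_stream and the timings are made up, and this isn't llama.cpp's or tgw's actual code:

```python
# Not llama.cpp's or tgw's actual code: a simulated run to show the two
# ways the same generation can be reported.
import time

def fake_stream(prompt_eval_s=3.0, n_tokens=15, decode_tps=3.0):
    """Pretend backend: pause for 'prompt evaluation', then yield tokens at a steady rate."""
    time.sleep(prompt_eval_s)
    for _ in range(n_tokens):
        time.sleep(1.0 / decode_tps)
        yield "tok"

def report(stream):
    start = time.monotonic()
    first_token_at = None
    tokens = 0
    for _ in stream:
        if first_token_at is None:
            first_token_at = time.monotonic()  # prompt evaluation has finished by now
        tokens += 1
    end = time.monotonic()

    naive = tokens / (end - start)                 # prompt eval included: the "slow at first" feel
    decode_only = tokens / (end - first_token_at)  # prompt eval excluded: shown after "Prompt evaluation"
    print(f"naive: {naive:.2f} t/s, decode-only: {decode_only:.2f} t/s")

report(fake_stream())
```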
It's been hard not to get defensive about my system because it's almost good enough to make RMS proud. ;p

When text is generated, prompt processing has necessarily finished, so this wouldn't explain it (text cannot be generated before the prompt has been processed). tgw and ollama are not inference engines and use llama.cpp for that, so the behaviour should be identical; what you see cannot be explained this way.

I meant the mystery of why most people don't run into this, and I don't have enough knowledge on the matter or the time to figure this out myself, so I wanted to thank you for your time and leave it at that.
