Thanks for sharing! Just noticed that you have uploaded the 64 group size version.

#1 opened by Yhyu13

Would a group size of 128, the suggested one, be better?

There isn't really a 'suggested' group size. 128 has tended to be the default for Llama models under 30B in size, but I believe that's more from convention than from any hard testing. It's possible that for Llama models a group size of <128 could increase VRAM usage too much for certain GPUs, e.g. pushing it over an 8GB or 12GB threshold, but I'm not certain that's the case.

However, Falcon 7B behaves differently. VRAM usage doesn't seem to be an issue: I was able to return 2000 tokens in under 8GB of VRAM. VRAM usage also seems to grow very little as context increases on this model, which is quite different from Llama. I'm not yet sure why that is.

So, as group_size = 64 improves inference quality a little, I have gone with that. Maybe I could even have done 32.
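
For reference, here is a minimal sketch of how a 4-bit GPTQ quantization with an explicit group size might be set up, assuming the AutoGPTQ library. The thread doesn't state the exact tooling or calibration data used, so the model ID, calibration text, and desc_act setting below are illustrative assumptions, not the actual recipe for this repo.

```python
# Minimal GPTQ quantization sketch (assumed: AutoGPTQ library).
# Model name, calibration sample, and desc_act are placeholders.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM, BaseQuantizeConfig

model_id = "tiiuae/falcon-7b-instruct"  # assumed base model

quantize_config = BaseQuantizeConfig(
    bits=4,          # 4-bit weights
    group_size=64,   # smaller groups store more scales/zeros -> slightly better quality
    desc_act=False,  # assumption: act-order disabled, a common choice for faster inference
)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoGPTQForCausalLM.from_pretrained(
    model_id, quantize_config, trust_remote_code=True
)

# A real run would use a few hundred calibration samples; one is shown for brevity.
examples = [tokenizer("The quick brown fox jumps over the lazy dog.", return_tensors="pt")]
model.quantize(examples)
model.save_quantized("falcon-7b-gptq-4bit-64g")
```

The trade-off being discussed: a smaller group size (64 or 32) means more quantization parameters are stored per weight matrix, which improves accuracy slightly at the cost of a somewhat larger file and VRAM footprint.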

Yhyu13 changed discussion status to closed

Thanks! Valuable observation there!

Due to multi-query attention, the model only needs to store one k/v pair per position instead of one per head (num_heads); for the 7B this corresponds to a factor of 71 :)
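
A quick back-of-the-envelope check of that factor. The config numbers below (32 layers, 71 query heads, head dim 64) come from the public Falcon-7B config and are my assumption rather than something stated in this thread; only the factor of 71 is from the comment above.

```python
# KV-cache size: standard multi-head attention vs Falcon's multi-query attention.
n_layers  = 32
n_heads   = 71     # query heads (assumed from the Falcon-7B config)
head_dim  = 64
seq_len   = 2000   # roughly the context length mentioned above
bytes_per = 2      # fp16

def kv_cache_bytes(kv_heads: int) -> int:
    # 2 tensors (k and v) per layer, each of shape [seq_len, kv_heads, head_dim]
    return 2 * n_layers * seq_len * kv_heads * head_dim * bytes_per

mha = kv_cache_bytes(n_heads)  # one k,v per query head
mqa = kv_cache_bytes(1)        # multi-query: a single shared k,v

print(f"MHA cache: {mha / 2**20:.0f} MiB")   # ~1109 MiB
print(f"MQA cache: {mqa / 2**20:.1f} MiB")   # ~15.6 MiB
print(f"Reduction: {mha // mqa}x")           # 71x, matching the comment above
```

That would explain why VRAM grows so little with context on this model compared to Llama: the per-token cache is tiny.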

Oh thanks @Seledorn, that's really interesting!