Is the 34B Llama 2 GPTQ actually working?

#3
by mzbac - opened

Somehow I keep getting an error for the fused llama attention.

/auto_gptq/nn_modules/fused_llama_attn.py", line 59, in forward
    query_states, key_states, value_states = torch.split(qkv_states, self.hidden_size, dim=2)
    ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

It seems that the shape of qkv_states doesn't match what the fused attention expects:

Shape of qkv_states: torch.Size([1, 62, 10240])
Expected third dimension size: 24576
Actual third dimension size: 10240
Value of self.hidden_size: 8192

Works for me on the 1a642c12b582330a4780047461905787cc3f18c5 commit of oobabooga with ExLlama_HF.


@giblesnot, thanks mate, I was using AutoGPTQ to load the model. I will try ExLlama.

Same here when running outside of oobabooga. I am using AutoGPTQForCausalLM as shown in the Hugging Face example code, and it fails with the same error. Please help.

If using AutoGPTQ, please pass inject_fused_attention=False.
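For reference, a minimal loading sketch with AutoGPTQ (the repo path below is a placeholder for whichever GPTQ checkpoint you are loading; the other kwargs are just the usual `from_quantized` options):

```python
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_name_or_path = "TheBloke/your-34B-GPTQ-repo"  # placeholder, use the actual repo id

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

model = AutoGPTQForCausalLM.from_quantized(
    model_name_or_path,
    device="cuda:0",
    use_safetensors=True,
    use_triton=False,
    inject_fused_attention=False,  # workaround: fused attention does not handle GQA yet
)

inputs = tokenizer("def fib(n):", return_tensors="pt").to("cuda:0")
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=64)[0]))
```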

The issue is that Llama 2 70B and 34B introduced a new optimisation called GQA (Grouped Query Attention), which is not supported by AutoGPTQ's fused attention optimisation.
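That is exactly why the split fails. A quick back-of-the-envelope check, assuming the usual GQA config for these models (hidden_size 8192, 64 query heads, 8 key/value heads, head_dim 128, which matches the numbers in the traceback above):

```python
# Why the fused qkv projection is 10240 wide under GQA, not 3 * hidden_size.
hidden_size = 8192
num_heads = 64
num_kv_heads = 8                       # GQA: far fewer K/V heads than Q heads
head_dim = hidden_size // num_heads    # 128

q_dim = num_heads * head_dim           # 8192
kv_dim = num_kv_heads * head_dim       # 1024 each for K and V

gqa_qkv_dim = q_dim + 2 * kv_dim       # 8192 + 2048 = 10240  (actual third dim)
mha_qkv_dim = 3 * hidden_size          # 24576               (what fused attention expects)

print(gqa_qkv_dim, mha_qkv_dim)        # 10240 24576
```

So `torch.split(qkv_states, self.hidden_size, dim=2)` assumes three equal chunks of `hidden_size`, which only holds for plain multi-head attention.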

This issue will be fixed in AutoGPTQ fairly soon, hopefully: https://github.com/PanQiWei/AutoGPTQ/pull/237
