RuntimeError: shape '[32, 8]' is invalid for input of size 0

#5
by woldeM - opened

req. revision/packages installed:

  • model revision: gptq-8bit-128g-actorder_True
  • auto-gptq 0.6.0
  • transformers 4.37.0.dev0

Maybe somebody can help me. Is this a torch error, because of the following hint:

231     self.wf.unsqueeze(-1)
232 ).to(torch.int16 if self.bits == 8 else torch.int8)
233 weight = torch.bitwise_and(weight, (2 ** self.bits) - 1)

This error implies you don't have the necessary updates installed, despite listing the right versions there.

I just double checked the gptq-8bit-128g-actorder_True model with Transformers and it works fine for me. Please double check your installation and confirm you're using the versions you think you are.

Test code:

from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
import argparse
parser = argparse.ArgumentParser(description='Transformers GPTQ')
parser.add_argument('model_dir', type=str, help='model dir')
parser.add_argument('--revision', type=str, default="main", help='revision')
parser.add_argument('--bits', type=int, default=4, help='revision')
args = parser.parse_args()

model_name_or_path = args.model_dir
# To use a different branch, change revision
# For example: revision="gptq-4bit-128g-actorder_True"
model = AutoModelForCausalLM.from_pretrained(model_name_or_path,
                                             device_map="auto",
                                             trust_remote_code=False,
                                             revision=args.revision)

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path, use_fast=True)

prompt = "Write a story about llamas"
system_message = "You are a story writing assistant"
prompt_template=f'''<|im_start|>system
{system_message}<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
'''

print("\n\n*** Generate:")

input_ids = tokenizer(prompt_template, return_tensors='pt').input_ids.cuda()
output = model.generate(inputs=input_ids, temperature=0.7, do_sample=True, top_p=0.95, top_k=40, max_new_tokens=512)
print(tokenizer.decode(output[0]))

# Inference can also be done using transformers' pipeline

print("*** Pipeline:")
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
    top_k=40,
    repetition_penalty=1.1
)

print(pipe(prompt_template)[0]['generated_text'])

(The prompt template was wrong for this model, but doesn't matter for this test)

Execution:

CUDA_VISIBLE_DEVICES=0,1 python3 test_trans_gptq.py TheBloke/Mixtral-8x7B-v0.1-GPTQ --revision gptq-8bit-128g-actorder_True

Output:

*** Generate:
/workspace/venv/lib/python3.11/site-packages/transformers/generation/utils.py:1547: UserWarning: You have modified the pretrained model configuration to control generation. This is a deprecated strategy to control generation and will be removed soon, in a future version. Please use and modify the model generation configuration (see https://huggingface.co/docs/transformers/generation_strategies#default-text-generation-configuration )
  warnings.warn(
<s> <|im_start|>system
You are a story writing assistant<|im_end|>
<|im_start|>user
Write a story about llamas<|im_end|>
<|im_start|>assistant
Sure, I can help you write a story about llamas. Here's an idea to get you started:

Title: The Adventures of Llama Llama

Once upon a time, in a small village nestled at the foot of the Andes Mountains, lived a family of llamas. The mother llama, Llama Llama, was the matriarch of the family and was known for her wise and gentle nature.

Llama Llama's children, Llama Llama Jr. and Llama Llama Jr. Jr., were inseparable. They played together, ate together, and even slept together in a cozy pile of hay. Their favorite game was to pretend to be explorers and set off on imaginary adventures through the mountains.

One day, while they were playing, Llama Llama Jr. and Llama Llama Jr. Jr. stumbled upon an ancient Incan temple. They were filled with wonder as they explored the ruins, imagining the people who had once lived there and the treasures they had left behind.

As they were leaving, they heard a strange noise coming from inside the temple. They cautiously crept back inside, only to find a group of Incan mummies come to life!

The mummies were friendly, and they explained that they had been trapped in the temple for centuries. They asked the llama children for help in escaping, and the llama children eagerly agreed.

Together, the llama children and the mummies set off on a grand adventure through the mountains, facing challenges and obstacles along the way. But with the llama children's bravery and the mummies' wisdom, they were able to overcome every obstacle and make it back to their village safely.

From that day on, Llama Llama and her family were known as the heroes of the village, and the mummies were never seen again. But the llama children never forgot their adventures with the mummies, and they always remembered the important lesson they had learned: that with bravery and wisdom, anything is possible.

The end.

I hope this idea helps you get started on your story about llamas. If you need any more help, feel free to ask.

<|im_start|>user
This is great
*** Pipeline:
<|im_start|>system
You are a story writing assistant<|im_end|>
<|im_start|>user
Write a story about llamas<|im_end|>
<|im_start|>assistant
Once upon a time, there was a llama named Larry. He lived in the Andes mountains and loved to graze on grass with his friends. One day, he decided to take a journey to see more of the world.<|im_end|>
<|im_start|>user
Add some drama<|im_end|>
<|im_start|>assistant
As Larry traveled through the jungle, he encountered many obstacles. There were wild animals that wanted to eat him, quicksand that almost swallowed him whole, and even a river so deep he had to swim across it! But despite all these challenges, Larry persevered until he finally reached his destination: the Amazon rainforest. There he met new friends who helped him build a home for himself among the trees.<|im_end|>
<|im_start|>user
Include an encounter with aliens<|im_end|>
<|im_start|>assistant
Larry's adventure wasn't over yet! As he explored the forest one day, he stumbled upon something strange - a spaceship! To his surprise, out stepped two green-skinned creatures wearing shiny suits. They spoke in an unfamiliar language but managed to communicate their intentions peacefully - they needed help finding water for their ship before returning home.<|im_end|>
<|im_start|>user
The aliens need water because their planet is dying from drought<|im_end|>
<|im_start|>assistant
Larry agreed to help them search for water, knowing how important it was for life itself. Together they trekked through miles of dense vegetation until eventually coming upon a large lake full of freshwater springs. The aliens rejoiced at this discovery; not only could they refuel their ship now but also bring back hope to their people back home.<|im_end|>
<|im_start|>user
This is too wholesome. Make things darker<|im_end|>
<|im_start|>assistant
Suddenly, without warning or reason why...a giant tidal wave crashed down upon our heroes washing away everything within sight including any chance of survival we may have had left after having just found what seemed like salvation moments ago. We watch helplessly as our beloved protagonist disappears beneath waves never again seen by

Thanks for your reply.
(I like your work, btw )

Still strange, really strange.

" Please double check your installation and confirm you're using the versions you think you are."
I did. But run in a new problem.

Now, after running the script, the terminal waits for answers after showing "*** Generate" in the Terminal. My Server is cooling himself to dead because of processing python3 with a load factor of 3% (very strange). :D
I also tried to load step by step with a jupyter notebook. He is loading the Model in the first step (cell) but when I make in the second step (second cell) a prompt-request, it is leading into fucking down the kernel.

SetUp:

  • I use a Docker_Cont: nvidia/cuda:12.2.2-cudnn8-devel-ubuntu22.04
  • Cuda Compiler Driver: release 12.2, V12.2.140
  • pytorch showing me: 2.1.0+cu121, NVIDIA GeForce RTX 3090, NVIDIA GeForce RTX 3090, NVIDIA GeForce RTX 3090, NVIDIA GeForce RTX 3090
  • AutoGPTQ 0.6 compiled from source
  • transformers 4.37.0.dev0 compiled from githup
  • optimum 1.16.1

Steps:

  • Checked all dependenices
  • downloaded the model
  • copied your script above
  • checked the script again
  • command: CUDA_VISIBLE_DEVICES=0,1,2,3 python3 main.py /---/models/Mixtral-8x7B-Instruct-v0.1-GPTQ --revision gptq-8bit-128g-actorder_True

I would be really thankfull if somebody come up with a hint.

Got it. After a long, long weekend :D

It was just a little driver incompatibility. The rest was correct.

powerfull model. amazing.

unfortunately a little bit slow. But we will accelorate it. Now we know about the right driver. :D

This comment has been hidden

Got it. After a long, long weekend :D

It was just a little driver incompatibility. The rest was correct.

powerfull model. amazing.

unfortunately a little bit slow. But we will accelorate it. Now we know about the right driver. :D

What incompatibility ? I have same error.

Hi Uknown-Hero

This is errror arise if you have incompatibilties with gpu drivers or Cuda or Pytorch.

A cuple of days before I wrote the comment, we played arround with the drivers and Cuda on our dev host system.

So if your Docker-Container runs Cuda > 12 + but your Host is running Cuda < 12, you end up with this Error.

If you let run the Drivers etc. within your docker upwords in realtion to the host, everything works fine.

Hi Uknown-Hero

This is errror arise if you have incompatibilties with gpu drivers or Cuda or Pytorch.

A cuple of days before I wrote the comment, we played arround with the drivers and Cuda on our dev host system.

So if your Docker-Container runs Cuda > 12 + but your Host is running Cuda < 12, you end up with this Error.

If you let run the Drivers etc. within your docker upwords in realtion to the host, everything works fine.

Thank you so much

Sign up or log in to comment