Does not generate text

#5
by russellsparadox - opened

Every time I use the model, it only generates <|endoftext|>. How can I fix it?

from transformers import AutoModelForCausalLM, AutoTokenizer

checkpoint = "OpenAssistant/oasst-sft-1-pythia-12b"

tokenizer = AutoTokenizer.from_pretrained(checkpoint, cache_dir='models_hf')
# Shard the 12B model across available GPUs, quantized to 8 bit
model = AutoModelForCausalLM.from_pretrained(checkpoint, device_map="auto", load_in_8bit=True, cache_dir='models_hf')

# Wrap the message in the OpenAssistant prompt format
message = "Hello, I am"
inp = "<|prompter|>" + message + "<|endoftext|><|assistant|>"
data = tokenizer([inp], return_tensors="pt")
data = {k: v.to(model.device) for k, v in data.items() if k in ("input_ids", "attention_mask")}
# Greedy decoding with default generation settings
outputs = model.generate(**data)
print(tokenizer.decode(outputs[0]))

<|prompter|>Hello, I am<|endoftext|><|assistant|><|endoftext|>

Having the same issue.

My two cents:
One cent: if I run a forward pass, the logits are all NaN.
Two cents: running the CPU version works fine.
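
For anyone who wants to confirm the NaN logits on their own setup, here is a minimal sketch that reuses the model and data variables from the snippet above (exact behaviour may depend on your transformers/bitsandbytes versions):

import torch

# Single forward pass; if the 8-bit weights produce NaN logits,
# generate() has no valid distribution to pick tokens from.
with torch.no_grad():
    logits = model(**data).logits
print(torch.isnan(logits).any().item())  # should print True if the logits are NaN as reported above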

Three cents: it works fine without load_in_8bit=True (using 2 GPUs in my case).
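
For reference, a sketch of what loading without the 8-bit quantizer could look like (an assumption, not a confirmed fix: it needs roughly 24 GB of GPU memory in total for a 12B-parameter model in fp16; device_map="auto" shards the weights across the available GPUs):

import torch
from transformers import AutoModelForCausalLM

checkpoint = "OpenAssistant/oasst-sft-1-pythia-12b"

# fp16 weights instead of 8-bit quantization, sharded across GPUs
model = AutoModelForCausalLM.from_pretrained(
    checkpoint,
    device_map="auto",
    torch_dtype=torch.float16,
    cache_dir='models_hf',
)

Dropping device_map and load_in_8bit entirely falls back to fp32 on CPU, which matches the report above that the CPU version works, just slowly.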

I have the same problem. It doesn't work with the 8-bit loader.
