How to run the model? `KeyError: 'cache_position'`

#4 · opened by sanjeev-bhandari01

To load the model in Google Colab, first install the dependencies:

!pip install omegaconf
!pip install botocore boto3 cached_path
!pip install accelerate

Then load the model as given in the model card:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline, TextStreamer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/OLMo-Bitnet-1B")
model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/OLMo-Bitnet-1B",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)

streamer = TextStreamer(tokenizer)
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    pad_token_id=tokenizer.eos_token_id,
    temperature=0.8,
    repetition_penalty=1.1,
    do_sample=True,
    streamer=streamer,
)
pipe("The capitol of Paris is", max_new_tokens=256)
Running this gives the error: KeyError: 'cache_position'

What is happening? How is cache_position being passed in?
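
From what I can tell, newer transformers releases pass a cache_position argument through generate(), and remote model code written before that change doesn't know what to do with it. A workaround sketch, assuming the repo's remote code simply predates that argument (the exact version pin below is a guess, not something confirmed in this thread):

# Assumption: the remote OLMo code was written against an older
# transformers release, before generate() started passing cache_position.
# Pinning such a release (exact version is a guess) may avoid the KeyError.
!pip install "transformers==4.38.2"

After pinning, restart the Colab runtime before re-running the loading code above.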

I upgraded transformers, but then I kept getting a different error:

ValueError: 'olmo' is already used by a Transformers config, pick another name.
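
For what it's worth, this second error looks like a registration collision: newer transformers releases ship a native 'olmo' model type, and the hf_olmo module pulled in by the remote code tries to register the same name on import. A quick diagnostic sketch, assuming a recent release (CONFIG_MAPPING is part of transformers' auto-class machinery):

# If "olmo" already sits in transformers' own config registry, importing
# hf_olmo raises the ValueError above when it tries to register the same
# model type a second time.
import transformers
from transformers.models.auto.configuration_auto import CONFIG_MAPPING

print(transformers.__version__)
print("olmo" in CONFIG_MAPPING)  # True on releases with native OLMo support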

Same here

Were you able to solve this issue?

"ValueError: 'olmo' is already used by a Transformers config, pick another name."

When instantiating the model, just remove the trust_remote_code argument:

model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/OLMo-Bitnet-1B", torch_dtype=torch.bfloat16
)

This doesn't fix "ValueError: 'olmo' is already used by a Transformers config, pick another name.". Moreover, the error is raised on the very first line (when importing hf_olmo); execution never even reaches the line where trust_remote_code is set :-(
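
If the import itself is what raises, an untested sketch of a way around it, assuming you are on a transformers release that already ships native OLMo support (so the remote hf_olmo code is redundant): skip importing hf_olmo entirely and load without trust_remote_code, uninstalling any stale copy first.

# Assumption: a recent transformers release with a native "olmo" model
# type is installed, so the remote code path isn't needed. If a stale
# hf_olmo / ai2-olmo install keeps being pulled in, remove it first:
# !pip uninstall -y ai2-olmo
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("NousResearch/OLMo-Bitnet-1B")
model = AutoModelForCausalLM.from_pretrained(
    "NousResearch/OLMo-Bitnet-1B", torch_dtype=torch.bfloat16
)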

Same here
