Could not load model

#28
by Ibrahim-Ola - opened

I'm getting the error below when I try to load my model on Ubuntu and on macOS (i7, 2018):

ValueError: Could not load model tiiuae/falcon-7b-instruct with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCausalLM'>,).

Note: all my packages are up to date. My pipeline is:

model = "tiiuae/falcon-7b-instruct"

tokenizer = AutoTokenizer.from_pretrained(model)

pipeline = pipeline(
task="text-generation",
model=model,
tokenizer=tokenizer,
torch_dtype=torch.bfloat16,
trust_remote_code=True,
device_map="auto",
max_length=200,
do_sample=True,
top_k=10,
num_return_sequences=1,
eos_token_id=tokenizer.eos_token_id
)

Any help is appreciated.

Thanks!

Same issue here

I'm on a 16" M1 Pro macbook 16GB RAM 16Core GPU ,

Python3.9.2


Traceback (most recent call last):
  File "/Users/__/Code/FalconLLM/./main.py", line 11, in <module>
    pipeline = transformers.pipeline(
               ^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/transformers/pipelines/__init__.py", line 788, in pipeline
    framework, model = infer_framework_load_model(
                       ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/lib/python3.11/site-packages/transformers/pipelines/base.py", line 278, in infer_framework_load_model
    raise ValueError(f"Could not load model {model} with any of the following classes: {class_tuple}.")
ValueError: Could not load model tiiuae/falcon-7b with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForCausalLM'>,).

I was able to load it and generate text on my 64GB M1 Max after upgrading torch to the latest 2.0.1 via pip install --upgrade torch and then changing torch_dtype=torch.bfloat16 to torch_dtype=torch.float32 in the pipeline. However, generation was extremely slow.
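For reference, here is roughly what that change looks like (a minimal sketch; the checkpoint and the remaining arguments are taken from the original post, and only the dtype differs):

import torch
import transformers
from transformers import AutoTokenizer

model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)

# float32 instead of bfloat16: heavier on memory and slower,
# but it loaded successfully on Apple Silicon with torch 2.0.1.
pipe = transformers.pipeline(
    task="text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.float32,
    trust_remote_code=True,
    device_map="auto",
)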

I changed torch_dtype=torch.bfloat16 to torch_dtype=torch.float32, but I still get the same error. My torch is the latest. I am on 16GB RAM, though.

Having the same issue as well.

Facing the same issue, running on a Mac M1. Could it be due to low memory, as I'm using 8 GB RAM?
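As a rough back-of-the-envelope check (assuming roughly 7 billion parameters and no quantization), the weights alone are about 14 GB in bfloat16 and about 28 GB in float32, before activations and the KV cache, so 8 GB of RAM is very likely too little to load the model without offloading or quantization:

# Rough estimate of the memory needed for the weights alone:
# ~7e9 parameters times the bytes per parameter for each dtype.
num_params = 7e9
for dtype, nbytes in {"float32": 4, "bfloat16": 2, "int8": 1}.items():
    print(f"{dtype}: ~{num_params * nbytes / 1e9:.0f} GB")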

Same issue here. I was using it smoothly and suddenly it threw this error, with no changes, upgrades, or downgrades.

I am on a Mac M1 as well and have the same issue.

I found a way to make it work:

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
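The pre-loaded model and tokenizer can then be handed to the text-generation pipeline; a minimal sketch (the prompt and the generation arguments here are just illustrative):

import transformers

pipe = transformers.pipeline(
    task="text-generation",
    model=model,          # the AutoModelForCausalLM instance loaded above
    tokenizer=tokenizer,
)

result = pipe("Write a short poem about falcons.", max_length=200, do_sample=True, top_k=10)
print(result[0]["generated_text"])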
Ibrahim-Ola changed discussion status to closed

I got the same error, but when I manually specify "cuda" in the device_map parameter, the model starts to load. Try this method.
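A minimal sketch of that variant, assuming a CUDA-capable GPU and a recent transformers/accelerate (the remaining arguments follow the original post):

import torch
import transformers
from transformers import AutoTokenizer

model = "tiiuae/falcon-7b-instruct"
tokenizer = AutoTokenizer.from_pretrained(model)

# Place the whole model on the GPU explicitly instead of using device_map="auto".
pipe = transformers.pipeline(
    task="text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="cuda",
)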
