OSError: tiiuae/falcon-7b does not appear to have a file named configuration_RW.py

#62 by chintan4560 - opened

When I run the Falcon fine-tuning code with model_name: "ybelkada/falcon-7b-sharded-bf16",
I get this error: OSError: tiiuae/falcon-7b does not appear to have a file named configuration_RW.py

Please help me resolve this.

I am facing the same issue. This commit is very disappointing. I can think of two possible workarounds:

A. Can we load the missing file from local storage, where the script is executing?

B. Alternatively, can we load a previous revision of the model commit in the above code? That should let us bypass the problem.

You can use the code_revision argument to specify a particular revision (a commit, for instance) for the code files. Since we are migrating this model into Transformers, it's probably safer to do this with "ybelkada/falcon-7b-sharded-bf16" until the integration is finished.
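For example (a minimal sketch; the commit hash below is a placeholder for whichever commit of the repo still contains configuration_RW.py, not a value confirmed in this thread):

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "ybelkada/falcon-7b-sharded-bf16",
    trust_remote_code=True,         # needed to run the repo's custom modelling code
    code_revision="<commit-hash>",  # placeholder: pin the remote code files to a specific commit
)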

I also tried a similar approach but failed yesterday. May I know on which line I can use the argument you highlighted? This is my code:

import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig, GenerationConfig

peft_model_id = "mrm8488/falcon-7b-ft-codeAlpaca_20k-v2"
config = PeftConfig.from_pretrained(peft_model_id)

# Load the base model in 8-bit on GPU 0; the custom Falcon code requires trust_remote_code=True
model = AutoModelForCausalLM.from_pretrained(
    config.base_model_name_or_path,
    return_dict=True,
    load_in_8bit=True,
    device_map={"": 0},
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(peft_model_id)

# Attach the PEFT adapter weights and switch to inference mode
model = PeftModel.from_pretrained(model, peft_model_id)
model.eval()

I got this same error again today

I am still getting the error. Can someone please help me out? Here is my code:
import torch
import transformers

# model and tokenizer are assumed to be loaded earlier in the script
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
)
