unable to use falcon-7b-instruct using transformers

#47
by subhashhf - opened

Hi,

I have cloned the repo using git lfs clone

I am using the following code:

import torch
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

# specify the local path where your model is
model_path = "/this/is/local/path/falcon-7b-instruct"

# Load the model and the tokenizer with trust_remote_code=True
tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForQuestionAnswering.from_pretrained(model_path, trust_remote_code=True)

# Now, you can use the model to answer questions
question = "What is the capital of France?"
context = "Paris is the capital and most populous city of France."

# Encode the question and context together as a single input
inputs = tokenizer.encode_plus(question, context, return_tensors='pt')

# Get the model output (start/end logits for the answer span)
outputs = model(**inputs)
answer_start_scores = outputs.start_logits
answer_end_scores = outputs.end_logits

# Get the most likely beginning and end of answer with the argmax of the scores
answer_start = torch.argmax(answer_start_scores)
answer_end = torch.argmax(answer_end_scores) + 1

# Get the answer. Convert the tokens to strings and join them
answer_tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0][answer_start:answer_end])
answer = tokenizer.convert_tokens_to_string(answer_tokens)

print(answer)

I get the following error:

ValueError: Unrecognized configuration class <class 'transformers_modules.falcon-7b-instruct.configuration_RW.RWConfig'> for this kind of AutoModel: AutoModelForQuestionAnswering.
Model type should be one of AlbertConfig, BartConfig, BertConfig, BigBirdConfig, BigBirdPegasusConfig, BloomConfig, CamembertConfig, CanineConfig, ConvBertConfig, Data2VecTextConfig, DebertaConfig, DebertaV2Config, DistilBertConfig, ElectraConfig, ErnieConfig, ErnieMConfig, FlaubertConfig, FNetConfig, FunnelConfig, GPT2Config, GPTNeoConfig, GPTNeoXConfig, GPTJConfig, IBertConfig, LayoutLMv2Config, LayoutLMv3Config, LEDConfig, LiltConfig, LongformerConfig, LukeConfig, LxmertConfig, MarkupLMConfig, MBartConfig, MegaConfig, MegatronBertConfig, MobileBertConfig, MPNetConfig, MvpConfig, NezhaConfig, NystromformerConfig, OPTConfig, QDQBertConfig, ReformerConfig, RemBertConfig, RobertaConfig, RobertaPreLayerNormConfig, RoCBertConfig, RoFormerConfig, SplinterConfig, SqueezeBertConfig, XLMConfig, XLMRobertaConfig, XLMRobertaXLConfig, XLNetConfig, XmodConfig, YosoConfig.

Please help me with this.

The pipeline works out of the box.
You can find that file (configuration_RW.py), though, in the model folder (model_path), along with a couple of other files you may need; just import them.
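
For reference, a minimal sketch of the pipeline approach, following the usage pattern on the model card (the prompt text is taken from the question; the torch_dtype and device_map settings are assumptions, and device_map="auto" requires the accelerate package):

import torch
from transformers import pipeline

model_path = "/this/is/local/path/falcon-7b-instruct"  # local clone path from the question

generator = pipeline(
    "text-generation",
    model=model_path,
    torch_dtype=torch.bfloat16,  # assumption: bf16 to fit the 7B weights; drop on unsupported hardware
    trust_remote_code=True,
    device_map="auto",  # assumption: requires accelerate
)

result = generator(
    "Paris is the capital and most populous city of France. What is the capital of France?",
    max_new_tokens=20,
    do_sample=False,
)
print(result[0]["generated_text"])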

I may be wrong, but I believe they mentioned that this is due to an update needed on the Hugging Face side, and that it could be ignored.
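
Either way, the error itself is just saying that Falcon's RWConfig is not registered for AutoModelForQuestionAnswering. Falcon is a causal (decoder-only) model, so loading it with AutoModelForCausalLM avoids the error entirely. A minimal sketch; the prompt format and generation settings here are illustrative assumptions:

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = "/this/is/local/path/falcon-7b-instruct"  # local clone path from the question

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # assumption: bf16; use the default float32 if bf16 is unavailable
    trust_remote_code=True,
)

# Phrase the QA task as a generation prompt instead of span extraction
prompt = "Paris is the capital and most populous city of France.\nQuestion: What is the capital of France?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=20, pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))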
