Low score and wrong answer for "question-answering" task
#58
by ybensaid - opened
Hi,
I'm trying to run the Flan-T5-XXL model on a "question-answering" task.
Here's how I loaded and ran the model:
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
import torch

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

model_id = "~/Downloads/test_LLM/flan-t5-xxl"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForQuestionAnswering.from_pretrained(model_id, return_dict=False).to(DEVICE)
qa_T5XXL = pipeline("question-answering", model=model, tokenizer=tokenizer)

question = "What is 42?"
context = "42 is the answer to life, the universe and everything"
result = qa_T5XXL({"question": question, "context": context})
However, I get a low score and the wrong answer:
{'score': 0.03840925544500351, 'start': 0, 'end': 2, 'answer': '42'}
Could you please help me figure out what I need to change to get the correct answer?
Thanks in advance.
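For what it's worth, I also wondered whether Flan-T5 (a text-to-text/seq2seq model) should instead be run through the generative "text2text-generation" pipeline rather than the extractive "question-answering" one. A sketch of what I mean, using the public google/flan-t5-small checkpoint purely for illustration (the prompt wording is my own guess, not from any docs):

```python
from transformers import pipeline

# Flan-T5 is a text-to-text (seq2seq) model, so it generates an answer string
# rather than predicting start/end spans the way extractive QA models do.
# Using the small public checkpoint here only to keep the example lightweight.
generator = pipeline("text2text-generation", model="google/flan-t5-small")

# Fold the question and context into a single instruction-style prompt.
prompt = (
    "Answer the question based on the context.\n"
    "Context: 42 is the answer to life, the universe and everything\n"
    "Question: What is 42?"
)

result = generator(prompt)
print(result[0]["generated_text"])
```

Would using the model this way be the expected approach, instead of `AutoModelForQuestionAnswering`?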