Does anybody have a solution to this issue: TypeError: forward() got an unexpected keyword argument 'token_type_ids'?

#3 opened by warfaisal

TypeError: forward() got an unexpected keyword argument 'token_type_ids'

TypeError Traceback (most recent call last)
Cell In[6], line 1
----> 1 answer = qa_pipeline({"question": question, "context": context})
2 print(answer)

File ~/Library/Python/3.9/lib/python/site-packages/transformers/pipelines/question_answering.py:390, in QuestionAnsweringPipeline.__call__(self, *args, **kwargs)
388 examples = self._args_parser(*args, **kwargs)
389 if isinstance(examples, (list, tuple)) and len(examples) == 1:
--> 390 return super().__call__(examples[0], **kwargs)
391 return super().__call__(examples, **kwargs)

File ~/Library/Python/3.9/lib/python/site-packages/transformers/pipelines/base.py:1111, in Pipeline.__call__(self, inputs, num_workers, batch_size, *args, **kwargs)
1109 return self.iterate(inputs, preprocess_params, forward_params, postprocess_params)
1110 elif self.framework == "pt" and isinstance(self, ChunkPipeline):
-> 1111 return next(
1112 iter(
1113 self.get_iterator(
1114 [inputs], num_workers, batch_size, preprocess_params, forward_params, postprocess_params
1115 )
1116 )
1117 )
1118 else:
1119 return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)

File ~/Library/Python/3.9/lib/python/site-packages/transformers/pipelines/pt_utils.py:124, in PipelineIterator.__next__(self)
121 return self.loader_batch_item()
123 # We're out of items within a batch
--> 124 item = next(self.iterator)
125 processed = self.infer(item, **self.params)
126 # We now have a batch of "inferred things".

File ~/Library/Python/3.9/lib/python/site-packages/transformers/pipelines/pt_utils.py:266, in PipelinePackIterator.__next__(self)
263 return accumulator
265 while not is_last:
--> 266 processed = self.infer(next(self.iterator), **self.params)
267 if self.loader_batch_size is not None:
268 if isinstance(processed, torch.Tensor):

File ~/Library/Python/3.9/lib/python/site-packages/transformers/pipelines/base.py:1025, in Pipeline.forward(self, model_inputs, **forward_params)
1023 with inference_context():
1024 model_inputs = self._ensure_tensor_on_device(model_inputs, device=self.device)
-> 1025 model_outputs = self._forward(model_inputs, **forward_params)
1026 model_outputs = self._ensure_tensor_on_device(model_outputs, device=torch.device("cpu"))
1027 else:

File ~/Library/Python/3.9/lib/python/site-packages/transformers/pipelines/question_answering.py:513, in QuestionAnsweringPipeline._forward(self, inputs)
511 example = inputs["example"]
512 model_inputs = {k: inputs[k] for k in self.tokenizer.model_input_names}
--> 513 output = self.model(**model_inputs)
514 if isinstance(output, dict):
515 return {"start": output["start_logits"], "end": output["end_logits"], "example": example, **inputs}

File ~/Library/Python/3.9/lib/python/site-packages/torch/nn/modules/module.py:1501, in Module._call_impl(self, *args, **kwargs)
1496 # If we don't have any hooks, we want to skip the rest of the logic in
1497 # this function, and just call forward.
1498 if not (self._backward_hooks or self._backward_pre_hooks or self._forward_hooks or self._forward_pre_hooks
1499 or _global_backward_pre_hooks or _global_backward_hooks
1500 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1501 return forward_call(*args, **kwargs)
1502 # Do not call functions when jit is used
1503 full_backward_hooks, non_full_backward_hooks = [], []

TypeError: forward() got an unexpected keyword argument 'token_type_ids'

I am having the same issue.

@Dries1 @warfaisal You can work around it with model_inputs.pop("token_type_ids"), i.e. drop the key before it reaches the model's forward().
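For anyone who wants to try that without editing transformers itself, here is a minimal sketch of the same idea done on the tokenizer side. It assumes the pipeline loads as it did for the original poster; the QA pipeline builds model_inputs from tokenizer.model_input_names (see question_answering.py:512 in the traceback), so trimming that list keeps token_type_ids from ever reaching forward():

from transformers import AutoTokenizer, pipeline

# Sketch of the workaround above: stop the tokenizer from advertising
# token_type_ids, so the pipeline never forwards it to the model.
tokenizer = AutoTokenizer.from_pretrained("medalpaca/medalpaca-7b")
tokenizer.model_input_names = ["input_ids", "attention_mask"]

qa_pipeline = pipeline(
    "question-answering",
    model="medalpaca/medalpaca-7b",
    tokenizer=tokenizer,
)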

This happens because the instructions are wrong: they say to use the 'question-answering' pipeline, but the model is only built for text generation.

I resolved it by using the model via LangChain's local Hugging Face pipeline wrapper. It's really slow on CPU, though.

from langchain import HuggingFacePipeline

llm = HuggingFacePipeline.from_model_id(
    model_id="bigscience/bloom-1b7",  # swap in the model you actually want, e.g. medalpaca/medalpaca-7b
    task="text-generation",
    model_kwargs={"temperature": 0, "max_length": 64},
)
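Once built, the wrapper can be called like any other LangChain LLM. A usage sketch with a made-up prompt:

# Usage sketch: the wrapper exposes LangChain's plain LLM interface.
print(llm("Question: What is metformin used for? Answer:"))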

Yes, using the question-answering pipeline gives errors, but text-generation works:

from transformers import pipeline

generator = pipeline("text-generation", model="medalpaca/medalpaca-7b", tokenizer="medalpaca/medalpaca-7b")
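Since the model only has a generation head, QA then has to be phrased as a prompt. A sketch of that, with a made-up context, question, and prompt format (none of this is from the model card):

# Made-up inputs, just to show the prompt shape for QA-as-generation.
context = "Metformin is a first-line medication for type 2 diabetes."
question = "What is metformin used for?"
prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
print(generator(prompt, max_new_tokens=64)[0]["generated_text"])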

Can the model config be updated to actually enable question answering? I am trying to use this for that task, and that seems to be one of the advertised use cases.
