ValueError: The following model_kwargs are not used by the model: ['token_type_ids'] (note: typos in the generate arguments will also show up in this list)

#66 opened by yiz4869

I trained the Falcon 7B model, but when I load it for inference I get the following error:
ValueError: The following model_kwargs are not used by the model: ['token_type_ids'] (note: typos in the generate arguments will also show up in this list).

Based on a previous discussion (https://huggingface.co/tiiuae/falcon-40b/discussions/7), I tried setting 'return_token_type_ids=False' in the tokenizer kwargs, as follows:

import torch
from llama_index import HuggingFaceLLMPredictor

hf_predictor = HuggingFaceLLMPredictor(
    max_input_size=2048,
    max_new_tokens=256,
    generate_kwargs={"temperature": 0.25, "do_sample": False},
    query_wrapper_prompt=query_wrapper_prompt,
    device_map="auto",
    model_name="tiiuae/falcon-7b",
    tokenizer_name="tiiuae/falcon-7b",
    tokenizer_kwargs={"max_length": 2048, "return_token_type_ids": False},
    model_kwargs={"torch_dtype": torch.bfloat16},
)

I still get the same error after this change. Any help would be greatly appreciated.
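One thing worth checking (it may depend on your llama_index version) is that tokenizer_kwargs may be forwarded to AutoTokenizer.from_pretrained rather than to the tokenizer call, in which case return_token_type_ids never reaches the encoding step. A workaround at the plain transformers level is to pass return_token_type_ids=False when encoding, or to pop the key before calling generate(). Below is a minimal sketch, not a tested fix; the prompt and generation settings are illustrative:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

# Encode without token_type_ids, since Falcon's generate() rejects them
inputs = tokenizer("What is a falcon?", return_tensors="pt",
                   return_token_type_ids=False)
inputs = inputs.to(model.device)

# Belt and braces: drop the key if a tokenizer produced it anyway
inputs.pop("token_type_ids", None)

output_ids = model.generate(**inputs, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))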

@yiz4869 I was able to fine-tune the model successfully without passing in the token_type_ids. If you use an AutoModelForCausalLM model class and pass a DataCollatorForLanguageModeling to the trainer, you should be able to train successfully!

My example is available in the train section here: https://github.com/anthonyhughes/pico-evidence-training-data/blob/main/falcon_main.py
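In outline, that setup looks roughly like the sketch below. This is not the linked script, just a minimal illustration; the toy dataset, hyperparameters, and output path are placeholders:

from datasets import Dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_id = "tiiuae/falcon-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
tokenizer.pad_token = tokenizer.eos_token  # Falcon ships without a pad token

model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Toy dataset standing in for real training text
dataset = Dataset.from_dict({"text": [
    "Falcons are birds of prey.",
    "Falcon 7B is a causal language model.",
]})

def tokenize(batch):
    # return_token_type_ids=False keeps token_type_ids out of the features,
    # so they never reach the model's forward pass
    return tokenizer(batch["text"], truncation=True, max_length=2048,
                     return_token_type_ids=False)

tokenized = dataset.map(tokenize, batched=True, remove_columns=["text"])

# mlm=False gives the causal LM objective; the collator pads and builds labels
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="falcon-7b-finetuned",
        per_device_train_batch_size=1,
        num_train_epochs=1,
    ),
    train_dataset=tokenized,
    data_collator=collator,
)
trainer.train()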
