
Model is incompatible with Inference Endpoints

#23
by sebbyjp - opened
2024/04/18 15:56:07 ~   File "/usr/local/lib/python3.10/dist-packages/transformers/models/auto/configuration_auto.py", line 1130, in from_pretrained
2024/04/18 15:56:07 ~     raise ValueError(
2024/04/18 15:56:07 ~ ValueError: The checkpoint you are trying to load has model type `idefics2` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.
2024/04/18 15:56:07 ~ Application startup failed. Exiting.

Has the transformers version on Inference Endpoints not been updated?

HuggingFaceM4 org

Hi @sebbyjp,
Let me circle back. transformers 4.40.0 was released just a few hours ago.
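
For anyone hitting the same traceback in the meantime, a quick local sanity check can confirm whether an installed transformers version recognizes the `idefics2` architecture (the checkpoint id below is an assumption; the thread only names the architecture):

```python
# Quick sanity check: the ValueError in the log comes from AutoConfig not
# recognizing the `idefics2` model type, which was added in transformers 4.40.0.
import transformers
from transformers import AutoConfig

print(transformers.__version__)  # needs to be >= 4.40.0 for idefics2

# The model id is an assumption; substitute the checkpoint you are deploying.
config = AutoConfig.from_pretrained("HuggingFaceM4/idefics2-8b")
print(config.model_type)  # prints "idefics2" once the architecture is recognized
```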

HuggingFaceM4 org

Hi @sebbyjp ,
Here's the answer from the team:

We don't support multimodal pipelines in the toolkit without a custom handler, so they need to create a handler.py and can add a requirements.txt.
They should see this warning message before deploying, which references the need for a custom handler:

[Screenshot: Inference Endpoints warning that this model requires a custom handler]

reference doc: https://huggingface.co/docs/inference-endpoints/guides/custom_handler
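
To make that concrete, here is a minimal handler.py sketch following the custom handler interface from the doc linked above (a class named `EndpointHandler` with `__init__` and `__call__`). Everything model-specific is an assumption rather than something stated in this thread: the checkpoint id, the `{"text", "image"}` request shape, and GPU availability. Adapt it to your own deployment.

```python
# handler.py — minimal sketch of a custom handler for Inference Endpoints.
# Assumptions: checkpoint is an idefics2 repo, requests look like
# {"inputs": {"text": "...", "image": "<base64-encoded image>"}}, GPU available.
import base64
from io import BytesIO
from typing import Any, Dict

import torch
from PIL import Image
from transformers import AutoModelForVision2Seq, AutoProcessor


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` points at the repository checkout inside the endpoint container.
        self.processor = AutoProcessor.from_pretrained(path)
        self.model = AutoModelForVision2Seq.from_pretrained(
            path, torch_dtype=torch.float16, device_map="auto"
        )

    def __call__(self, data: Dict[str, Any]) -> Dict[str, Any]:
        inputs = data["inputs"]
        image = Image.open(BytesIO(base64.b64decode(inputs["image"]))).convert("RGB")

        # Build an idefics2-style chat prompt with one image and one text turn.
        messages = [
            {
                "role": "user",
                "content": [
                    {"type": "image"},
                    {"type": "text", "text": inputs["text"]},
                ],
            }
        ]
        prompt = self.processor.apply_chat_template(messages, add_generation_prompt=True)
        model_inputs = self.processor(
            text=prompt, images=[image], return_tensors="pt"
        ).to(self.model.device)

        generated = self.model.generate(**model_inputs, max_new_tokens=256)
        text = self.processor.batch_decode(generated, skip_special_tokens=True)[0]
        return {"generated_text": text}
```

A requirements.txt placed next to handler.py can then pin the dependencies the default image is missing, e.g. `transformers>=4.40.0`, `accelerate` (needed for `device_map="auto"`), and `pillow`.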
