Fine-tuned idefics for inference?

#37
by baldesco - opened

I followed the fine-tuning tutorial and pushed the model to the Hugging Face Hub.

How can I use this fine-tuned version for inference?

HuggingFaceM4 org

Hi @baldesco ,
assuming you followed the tutorial, you saved some LoRA parameters.
You can load the trained model with its LoRA parameters simply by calling AutoModel.from_pretrained(PATH_TO_YOUR_MODEL_ON_THE_HUB).
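
A minimal sketch of that (the repo id below is a placeholder for whatever you pushed; this assumes the tutorial's QLoRA setup, so the repo contains a PEFT/LoRA adapter, and that peft is installed so from_pretrained can resolve the adapter against the base weights automatically):

from transformers import AutoModel

# Hypothetical repo id: replace with the repo you pushed to the Hub.
# With peft installed, from_pretrained detects the adapter_config.json,
# loads the idefics2-8b base weights, and attaches the LoRA adapter.
model = AutoModel.from_pretrained("baldesco/idefics2-8b-qlora")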

Thank you @VictorSanh for the answer.

Yes, I followed the tutorial, so I am using QLoRA.

In the sample code for using idefics2 for inference, this is how the model is loaded:

import torch
from transformers import AutoProcessor, AutoModelForVision2Seq

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
model = AutoModelForVision2Seq.from_pretrained(
    "HuggingFaceM4/idefics2-8b",
).to(DEVICE)

So I have two questions:

  1. For the processor, should I keep the original idefics2 model, or should I also point to my weights?
  2. For the model, you mention AutoModel.from_pretrained, but the snippet above uses AutoModelForVision2Seq.from_pretrained. Are these two the same, or when should I use each?

Thank you

HuggingFaceM4 org

1/ Either way; they are equivalent.
2/ AutoModelForVision2Seq is indeed safer. If you want to be really safe (i.e., avoid any mismatch in the auto-mapping), I would even recommend Idefics2ForConditionalGeneration.from_pretrained.
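
Putting both answers together, a minimal sketch (the fine-tuned repo id is hypothetical; this assumes the tutorial's QLoRA setup with peft installed, so the pushed LoRA adapter is resolved automatically):

import torch
from transformers import AutoProcessor, Idefics2ForConditionalGeneration

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

# 1/ Processor: the base repo and your fine-tuned repo are equivalent here.
processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")

# 2/ Loading through the concrete class avoids any auto-mapping mismatch.
#    The repo id is hypothetical: replace it with the one you pushed.
model = Idefics2ForConditionalGeneration.from_pretrained(
    "baldesco/idefics2-8b-qlora",
).to(DEVICE)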

VictorSanh changed discussion status to closed
