Code for instruction tuning

#10
opened by schwarzwalder

Hi Team,

Could you please provide the code used for instruction fine-tuning the base model?

I have checked the examples here (https://github.com/huggingface/notebooks/tree/main/examples/idefics), but I suppose these are for standard fine-tuning on (interleaved) image-text pairs with a next-token prediction objective:
inputs["labels"] = inputs["input_ids"]

I am specifically interested in the changes required in the input/output processors for fine-tuning on VQA datasets, which requires a conditional generation task given the image + question.

Thanks in advance.

Hi @VishnuSuganth
We have not released the codebase we used to train and perform SFT for IDEFICS (mostly because we don't have the bandwidth to maintain it).
However, the fine-tuning script will give you a good start!
The exact formatting we used to fine-tune on VQA datasets is essentially the same as what you see for inference:

User: {question}{image}<end_of_utterance>\nAssistant: {answer}<end_of_utterance>
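For concreteness, here is a minimal sketch of how one might push that template through the processor for SFT. The checkpoint name and the dataset fields (image, question, answer) are illustrative assumptions, not part of this thread, and the exact processor arguments may differ across transformers versions:

```python
from transformers import AutoProcessor

# Assumed checkpoint; substitute the model you are fine-tuning.
checkpoint = "HuggingFaceM4/idefics-9b"
processor = AutoProcessor.from_pretrained(checkpoint)

def format_vqa_example(example):
    # Mixed text/image prompt following the template above:
    # User: {question}{image}<end_of_utterance>\nAssistant: {answer}<end_of_utterance>
    prompt = [
        "User: " + example["question"],
        example["image"],  # a PIL image
        "<end_of_utterance>",
        "\nAssistant: " + example["answer"],
        "<end_of_utterance>",
    ]
    # The IDEFICS processor accepts a batch of mixed text/image prompts.
    inputs = processor([prompt], return_tensors="pt")
    return inputs
```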

Ok, thanks. Can you also share some information on how to set inputs["labels"] for SFT, and what changes are needed to the loss computation in the forward function?

No changes to the loss: standard next-token prediction!
We did not limit the loss computation to the answer tokens, but that could be a fun sanity check to try.
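To make that concrete, here is a minimal sketch of the label setup for standard next-token prediction, reusing the processor (and a model) from above; the variable names are illustrative, not the authors' training code. Setting padding positions to -100 is the usual convention so they are ignored by the cross-entropy loss (the model shifts labels internally):

```python
# prompts: a batch of formatted text/image prompts (see the template above)
inputs = processor(prompts, padding=True, return_tensors="pt")

# Labels are simply a copy of the input ids, with padding masked out.
labels = inputs["input_ids"].clone()
labels[inputs["attention_mask"] == 0] = -100  # ignored by the loss
inputs["labels"] = labels

outputs = model(**inputs)  # standard next-token cross-entropy
loss = outputs.loss
```

If you wanted to try the answer-only variant mentioned above, you would additionally set the label positions covering the User turn to -100 before the forward pass.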

@VictorSanh Thank you.

schwarzwalder changed discussion status to closed
