Prompting model for OCR

#79
by EugeneSel - opened

Hello, thank you for your great contribution.

I was wondering how one could properly prompt Idefics2 for the OCR task. Are there any specific instructions you have been using during model training or its evaluation on OCR?
Could not find examples anywhere.

HuggingFaceM4 org

Hi! It depends on the task. For example, we used "Extract the information in this CV." for examples on a CV, and something like "What does this say?" to transcribe a letter. Some tasks that we consider OCR-heavy already come with their own prompts (think DocVQA).
What were you trying to do?
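For reference, prompts like these are passed to Idefics2 through the chat-message format that `transformers`' `AutoProcessor.apply_chat_template` expects. A minimal sketch (the helper name `build_ocr_messages` is ours, not part of the library; the heavyweight model call is shown only as comments):

```python
# Sketch: wrapping an OCR instruction in the chat-message format used by
# Idefics2 via transformers' AutoProcessor.apply_chat_template.
# The helper name `build_ocr_messages` is our own, not a library function.

def build_ocr_messages(instruction: str) -> list[dict]:
    """Pair one image placeholder with a text instruction, Idefics2-style."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": instruction},
            ],
        }
    ]

messages = build_ocr_messages("What does this say?")

# With the real model (not run here, the weights are several GB):
# from transformers import AutoProcessor, AutoModelForVision2Seq
# processor = AutoProcessor.from_pretrained("HuggingFaceM4/idefics2-8b")
# model = AutoModelForVision2Seq.from_pretrained("HuggingFaceM4/idefics2-8b")
# prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
# inputs = processor(text=prompt, images=[image], return_tensors="pt")
# out = model.generate(**inputs, max_new_tokens=500)
# print(processor.batch_decode(out, skip_special_tokens=True)[0])

print(messages[0]["content"][1]["text"])  # → What does this say?
```

Swapping the instruction string (e.g. "Extract the information in this CV.") is all that changes between the tasks mentioned above.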

Hi, thanks for your answer.

I wanted to use the model purely for OCR by itself: return the full text of the image.
Potentially, also with the bounding boxes of the corresponding text regions. However, I found that even the largest MLLMs localize poorly with respect to exact pixel coordinates.

So now I am at least trying to systematically obtain all of the text with no additional commentary. Something like "What text does the image contain?", but the answer reads too much like story-telling: "The image contains 'text' on the top, ...", with no way to parse out the exact image text.
This makes me wonder whether the model was evaluated on OCRBench's Text Recognition or similar. Or are the model's OCR capabilities only useful as intermediate background work when performing more complex OCR-heavy tasks such as VQA?
