sashakunitsyn posted an update Apr 3
Introducing VLRM: Vision-Language Models act as Reward Models for Image Captioning
Paper:
https://arxiv.org/abs/2404.01911
Model weights and training logs:
sashakunitsyn/vlrm-blip2-opt-2.7b

We propose VLRM, a method for fine-tuning an existing image captioning model using reinforcement learning, with vision-language models serving as reward models.
The method significantly improves generation quality without any human-labeled data and is applicable to any image captioning model.
Our model reaches an impressive 0.90 R@1 CLIP Recall score on the MS-COCO Karpathy test split.
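
The core idea of using a vision-language model as a reward can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `clip_reward` and `policy_gradient_loss` functions are hypothetical names, the embeddings here are placeholder vectors rather than outputs of a real CLIP encoder, and the update rule shown is a generic REINFORCE-with-baseline step.

```python
import numpy as np

def clip_reward(image_emb: np.ndarray, caption_emb: np.ndarray) -> float:
    # Cosine similarity between an image embedding and a caption embedding.
    # In a VLM-as-reward setup, both would come from a frozen model such as
    # CLIP; here they are plain vectors for illustration.
    a = image_emb / np.linalg.norm(image_emb)
    b = caption_emb / np.linalg.norm(caption_emb)
    return float(a @ b)

def policy_gradient_loss(log_probs: np.ndarray,
                         rewards: np.ndarray,
                         baseline: float) -> float:
    # REINFORCE-style objective: reward-weighted negative log-likelihood
    # of sampled captions. Subtracting a baseline reduces gradient variance.
    # log_probs: per-caption sum of token log-probabilities under the policy.
    advantages = rewards - baseline
    return float(-np.mean(advantages * log_probs))

# Toy usage: a caption whose embedding matches the image gets reward 1.0.
img = np.array([0.2, 0.5, 0.8])
print(clip_reward(img, img))  # 1.0 (identical direction)
```

Captions with a reward above the batch baseline get their likelihood pushed up, and those below get pushed down, which is how the frozen VLM's similarity score steers the captioner without any human labels.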