arxiv:2309.14525

Aligning Large Multimodal Models with Factually Augmented RLHF

Published on Sep 25, 2023
· Featured in Daily Papers on Sep 27, 2023

Abstract

Large Multimodal Models (LMMs) are built across modalities, and misalignment between the two modalities can result in "hallucination": generating textual outputs that are not grounded in the multimodal information in context. To address the multimodal misalignment issue, we adapt Reinforcement Learning from Human Feedback (RLHF) from the text domain to the task of vision-language alignment, where human annotators are asked to compare two responses and pinpoint the more hallucinated one, and the vision-language model is trained to maximize the simulated human rewards. We propose a new alignment algorithm called Factually Augmented RLHF that augments the reward model with additional factual information such as image captions and ground-truth multi-choice options, which alleviates the reward hacking phenomenon in RLHF and further improves performance. We also enhance the GPT-4-generated training data (for vision instruction tuning) with previously available human-written image-text pairs to improve the general capabilities of our model. To evaluate the proposed approach in real-world scenarios, we develop a new evaluation benchmark, MMHAL-BENCH, with a special focus on penalizing hallucinations. As the first LMM trained with RLHF, our approach achieves remarkable improvement on the LLaVA-Bench dataset, reaching 94% of the performance level of the text-only GPT-4 (while previous best methods only achieve the 87% level), and an improvement of 60% on MMHAL-BENCH over other baselines. We open-source our code, model, and data at https://llava-rlhf.github.io.

Community

Here is an ML-generated summary

Objective
The paper aims to align large multimodal models (LMMs) with human values and reduce hallucinations by adapting reinforcement learning from human feedback (RLHF) to the multimodal domain.
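
At its core, the reward model in this RLHF setup is trained from pairwise comparisons: annotators mark which of two responses is more hallucinated, and the model learns to score the less-hallucinated one higher. Below is a minimal sketch of the standard pairwise preference loss commonly used for this (not the authors' code; the scalar scores standing in for reward-model outputs are an assumption):

```python
import torch
import torch.nn.functional as F

def preference_loss(score_preferred: torch.Tensor,
                    score_rejected: torch.Tensor) -> torch.Tensor:
    """Bradley-Terry style objective: push the score of the preferred
    (less hallucinated) response above the rejected one. The policy is
    later optimized, e.g. with PPO, to maximize this simulated reward."""
    return -F.logsigmoid(score_preferred - score_rejected).mean()

# Toy usage: random scalars stand in for reward-model scores over a batch
# of (image, prompt, response) comparisons.
preferred = torch.randn(4)
rejected = torch.randn(4)
print(preference_loss(preferred, rejected).item())
```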

Insights

  • High-quality instruction tuning data (VQA-v2, A-OKVQA, Flickr30k) significantly improves LMM capabilities on benchmarks.
  • RLHF further enhances human alignment, reduces hallucination, and encourages truthfulness based on evaluations.
  • Factually Augmented RLHF effectively utilizes existing human annotations to improve reward modeling (see the input-construction sketch after this list).
  • Symbolic rewards help mitigate reward hacking issues in RLHF.
  • New benchmark MMHAL-BENCH focuses on detecting hallucinations in LMM responses.
  • LLaVA-RLHF achieves state-of-the-art results across multiple benchmarks as the first LMM trained with RLHF.
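
One way to picture the factual augmentation: the reward model does not judge the conversation alone, it also sees ground-truth context for the image, such as a human-written caption or the multiple-choice options. A minimal sketch of building such an augmented reward-model text input; the field names and prompt template are illustrative assumptions, not the authors' exact format:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FactualContext:
    image_caption: str                          # human-written caption for the image
    answer_options: Optional[List[str]] = None  # e.g. A-OKVQA multiple-choice options

def build_reward_input(question: str, response: str, facts: FactualContext) -> str:
    """Concatenate factual context with the conversation so the reward model
    can check the response against ground truth instead of judging fluency alone."""
    parts = [f"Image caption: {facts.image_caption}"]
    if facts.answer_options:
        parts.append("Options: " + "; ".join(facts.answer_options))
    parts += [f"Question: {question}", f"Response: {response}"]
    return "\n".join(parts)

print(build_reward_input(
    question="What color is the bus?",
    response="The bus is red and parked next to a fire hydrant.",
    facts=FactualContext(image_caption="A red bus parked on a city street."),
))
```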

Implementation
  • Enriched the synthetic vision instruction tuning data from LLaVA with existing high-quality human-annotated image-text pairs (VQA-v2, A-OKVQA, Flickr30k).
  • Collected human preferences for 10k responses by re-sampling LLaVA responses, emphasizing multimodal alignment and minimizing hallucinations.
  • Performed RLHF on 50k LLaVA conversations to optimize against simulated human preferences.
  • Introduced Factually Augmented RLHF which utilizes additional factual information like image captions to calibrate the reward model.
  • Added symbolic rewards for correctness and length to prevent reward hacking (a sketch of one such combination follows this list).
  • Evaluated on LLaVA-Bench, MMHAL-BENCH (new benchmark to detect hallucinations), MMBench, and POPE.
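
A minimal sketch of how symbolic rewards might be mixed with the learned preference reward during policy optimization; the weights, the naive substring-match correctness check, and the length-penalty form are assumptions for illustration, not the paper's exact formulation:

```python
from typing import Optional

def combined_reward(learned_reward: float,
                    response: str,
                    gold_answer: Optional[str] = None,
                    correctness_bonus: float = 1.0,
                    length_penalty: float = 0.01) -> float:
    """Combine the reward model's score with simple symbolic signals."""
    reward = learned_reward
    # Correctness reward: applicable when ground-truth answers exist
    # (e.g. multiple-choice data); a substring match is used here purely
    # for illustration.
    if gold_answer is not None and gold_answer.lower() in response.lower():
        reward += correctness_bonus
    # Length penalty: discourage the policy from inflating the learned
    # reward by padding responses with extra text.
    reward -= length_penalty * len(response.split())
    return reward

print(combined_reward(0.8, "The bus is red.", gold_answer="red"))
```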

Results
The proposed LLaVA-RLHF model achieves significant improvements on human-alignment benchmarks such as LLaVA-Bench (+10%) and MMHAL-BENCH (+60%) over baselines, establishing new state-of-the-art results.

