👩‍⚖️ MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?

Project page: https://mj-bench.github.io/
Code repository: https://github.com/MJ-Bench/MJ-Bench

While text-to-image models like DALLE-3 and Stable Diffusion are rapidly proliferating, they often encounter challenges such as hallucination, bias, and the production of unsafe, low-quality output. To address these issues effectively, it is crucial to align these models with desired behaviors based on feedback from a multimodal judge. Despite their significance, current multimodal judges are rarely evaluated thoroughly for their capabilities and limitations, which can lead to misalignment and unsafe fine-tuning outcomes.

To address this issue, we introduce MJ-Bench, a novel benchmark that incorporates a comprehensive preference dataset to evaluate multimodal judges in providing feedback for image generation models across four key perspectives: alignment, safety, image quality, and bias.

Dataset Overview
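
As a concrete starting point, the preference data can be browsed programmatically with the Hugging Face `datasets` library. The sketch below is a minimal example, assuming the dataset is hosted under this organization as `MJ-Bench/MJ-Bench`; the exact dataset id, configs, and field names should be checked on the hub.

```python
# Minimal sketch of loading the MJ-Bench preference data.
# The dataset id is assumed from this organization's naming;
# configs and field names may differ from the published schema.
from datasets import load_dataset

dataset = load_dataset("MJ-Bench/MJ-Bench")  # assumed dataset id

# Inspect the splits and one example to discover the actual schema;
# the four perspectives (alignment, safety, quality, bias) may be
# exposed as separate splits or configs.
print(dataset)
first_split = next(iter(dataset.values()))
print(first_split[0])
```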

Specifically, we evaluate a large variety of multimodal judges, including:

  • 6 smaller-sized CLIP-based scoring models (see the scoring sketch below)
  • 11 open-source VLMs (e.g. LLaVA family)
  • 4 closed-source VLMs (e.g. GPT-4o, Claude 3)

Radar Plot
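
To make the CLIP-based judging concrete, here is a minimal sketch of how a small scoring model can act as a judge over a preference pair: it scores each candidate image against the prompt and prefers the higher-scoring one. It uses the generic `openai/clip-vit-base-patch32` checkpoint for illustration; this is not MJ-Bench's exact evaluation code, and the `judge` helper is hypothetical.

```python
# Sketch of a CLIP-based scoring judge over a preference pair.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(prompt: str, image: Image.Image) -> float:
    # Scaled cosine similarity between the image and text embeddings.
    inputs = processor(text=[prompt], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    return outputs.logits_per_image.item()

def judge(prompt: str, image_a: Image.Image, image_b: Image.Image) -> str:
    # Prefer the candidate with the higher image-text alignment score.
    return "A" if clip_score(prompt, image_a) >= clip_score(prompt, image_b) else "B"

# Usage (hypothetical files):
# print(judge("a corgi surfing a wave", Image.open("a.png"), Image.open("b.png")))
```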

🔥🔥 We are actively updating the leaderboard, and you are welcome to submit your multimodal judge's evaluation results on our dataset to the Hugging Face leaderboard.