arxiv:2308.06595

VisIT-Bench: A Benchmark for Vision-Language Instruction Following Inspired by Real-World Use

Published on Aug 12, 2023
Featured in Daily Papers on Aug 15, 2023

Abstract

We introduce VisIT-Bench (Visual InsTruction Benchmark), a benchmark for evaluating instruction-following vision-language models for real-world use. Our starting point is curating 70 'instruction families' that we envision instruction-tuned vision-language models should be able to address. Extending beyond evaluations like VQAv2 and COCO, tasks range from basic recognition to game playing and creative generation. Following curation, our dataset comprises 592 test queries, each with a human-authored instruction-conditioned caption. These descriptions surface instruction-specific factors, e.g., for an instruction asking about the accessibility of a storefront for wheelchair users, the instruction-conditioned caption describes ramps/potential obstacles. These descriptions enable 1) collecting human-verified reference outputs for each instance; and 2) automatic evaluation of candidate multimodal generations using a text-only LLM, aligning with human judgment. We quantify quality gaps between models and references using both human and automatic evaluations; e.g., the top-performing instruction-following model wins against the GPT-4 reference in just 27% of comparisons. VisIT-Bench is dynamic: to participate, practitioners simply submit their model's responses on the project website. Data, code, and the leaderboard are available at visit-bench.github.io.
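To make the automatic evaluation concrete, below is a minimal sketch of how a text-only LLM can judge candidate responses given only the instruction and the instruction-conditioned caption. The prompt wording, pairwise A/B format, and model name are illustrative assumptions, not the benchmark's actual evaluation prompt (which is given in the paper's appendix); the sketch assumes the `openai` Python client (v1+) with an API key in the environment.

```python
# Sketch of reference-free evaluation with a text-only judge model.
# Prompt wording, scoring scheme, and model choice are assumptions for
# illustration; VisIT-Bench's actual prompts are listed in its appendix.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

JUDGE_TEMPLATE = """You are judging responses to a visual instruction.
Image description (instruction-conditioned caption): {caption}
Instruction: {instruction}

Response A: {response_a}
Response B: {response_b}

Which response follows the instruction better? Answer with "A" or "B"."""


def judge_pair(caption: str, instruction: str,
               response_a: str, response_b: str) -> str:
    """Ask a text-only LLM to pick the better of two candidate responses."""
    prompt = JUDGE_TEMPLATE.format(
        caption=caption,
        instruction=instruction,
        response_a=response_a,
        response_b=response_b,
    )
    completion = client.chat.completions.create(
        model="gpt-4",  # any capable text-only LLM could stand in here
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return completion.choices[0].message.content.strip()
```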

Community

Introduces VisIT-Bench (Visual Instruction Benchmark) for evaluating vision-language models (VLMs) on real-world use; tasks include basic recognition, game playing, VQA, change captioning, and creative generation, all in a chatbot-style format. The dataset is created by human annotators and involves multiple human-judgment steps; model preference is measured with Elo ratings, an absolute measure comes from win rate against the reference, and there is also automatic preference evaluation using GPT-4. The authors name each instruction family and create a seed image-instruction pair; crowdworkers write a new instruction from the seed, write an instruction-conditioned caption detailed enough for a text-only model to respond, and assess the correctness of GPT-4's response. Instruction-conditioned captions are necessary because VLMs like BLIP-2 cannot produce captions that reliably capture the instruction-relevant context. The most frequently required skills are writing- and generation-based. Benchmarks LLaVA-13B, InstructBLIP-13B, MiniGPT-4, mPLUG-Owl, LLaMA-Adapter-v2-7B, PandaGPT, OpenFlamingo-v1, etc.; LLaVA and LLaMA-Adapter-v2 perform best under both human preference and reference-free evaluations. Also reports results per instruction category and GPT-4 auto-evaluation. The appendix has a dataset analysis (object classes from YOLO), a description of the annotator interface, existing datasets covering VisIT-Bench topics, details of the Elo rating procedure, and the GPT-4 evaluation prompts. From Google, the Allen Institute, Stanford, and LAION.
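The leaderboard's Elo and win-rate numbers come from pairwise comparisons between model outputs (and against the human-verified references). As a rough illustration, the sketch below implements a standard Elo update and a win-rate helper; the K factor, starting rating, and pairing scheme are assumptions here, and the paper's appendix describes the exact procedure used.

```python
# Minimal Elo update for head-to-head model comparisons, plus a win-rate
# helper. K factor, starting rating, and pairing scheme are assumptions;
# the paper's appendix gives the exact leaderboard procedure.
from collections import defaultdict

K = 32  # assumed update step size


def expected_score(r_a: float, r_b: float) -> float:
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))


def update_elo(ratings: dict, model_a: str, model_b: str, a_won: bool) -> None:
    """Update both models' ratings after one pairwise comparison."""
    e_a = expected_score(ratings[model_a], ratings[model_b])
    s_a = 1.0 if a_won else 0.0
    ratings[model_a] += K * (s_a - e_a)
    ratings[model_b] += K * ((1.0 - s_a) - (1.0 - e_a))


def win_rate(outcomes: list[bool]) -> float:
    """Fraction of comparisons a model wins against the reference output."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0


ratings = defaultdict(lambda: 1000.0)  # assumed starting rating
update_elo(ratings, "LLaVA-13B", "MiniGPT-4", a_won=True)
```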

Links: website, blog, arXiv, GitHub, Hugging Face Datasets, Hugging Face Spaces
