---
license: mit
task_categories:
  - zero-shot-classification
size_categories:
  - n<1K
---

# MMVP-VLM Benchmark Datacard

## Basic Information

**Title:** MMVP-VLM Benchmark

**Description:** The MMVP-VLM (Multimodal Visual Patterns - Visual Language Models) Benchmark is designed to systematically evaluate how well recent CLIP-based models understand and process visual patterns. It distills a subset of questions from the original MMVP benchmark into simpler language descriptions and categorizes them into distinct visual patterns, with each visual pattern represented by 15 text-image pairs. The benchmark assesses whether CLIP models can accurately match these image-text combinations, providing insights into the capabilities and limitations of these models.
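A minimal sketch of what such an image-text matching check could look like with an off-the-shelf CLIP model. This is an illustration rather than the official evaluation script; the checkpoint name, file paths, and texts are placeholders.

```python
# Minimal sketch, not the official evaluation code: checks whether an
# off-the-shelf CLIP model pairs each image with its own description.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

checkpoint = "openai/clip-vit-base-patch32"  # assumed example checkpoint
model = CLIPModel.from_pretrained(checkpoint)
processor = CLIPProcessor.from_pretrained(checkpoint)

def image_text_logits(images, texts):
    """Similarity logits between every image and every text (rows = images)."""
    inputs = processor(text=texts, images=images, return_tensors="pt", padding=True)
    with torch.no_grad():
        return model(**inputs).logits_per_image

# A pair is counted correct when each image is most similar to its own description.
images = [Image.open("image_a.jpg"), Image.open("image_b.jpg")]  # placeholder paths
texts = ["description of image A", "description of image B"]     # placeholder texts
logits = image_text_logits(images, texts)
correct = bool((logits.argmax(dim=1) == torch.arange(len(images))).all())
print("matched correctly:", correct)
```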

## Dataset Details

- **Content Types:** Text-image pairs
- **Volume:** Balanced across visual patterns, with 15 text-image pairs per pattern
- **Source of Data:** A subset of the MMVP benchmark, supplemented with additional questions for balance (see the download sketch after this list)
- **Data Collection Method:** Distillation and categorization of MMVP benchmark questions into simpler language
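A minimal sketch for fetching the raw text-image pairs locally, assuming the benchmark files are hosted on the Hugging Face Hub. The repository id below is an assumption; substitute this dataset's actual Hub id.

```python
# Minimal sketch: download the dataset repository to a local cache directory.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="MMVP/MMVP_VLM", repo_type="dataset")  # assumed repo id
print("downloaded to:", local_dir)
```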

## Usage

### Intended Use

- Evaluation of CLIP models' ability to understand and process various visual patterns.