MMVP

Modalities: Image
Size: < 1K
Libraries: Datasets
Dataset viewer (preview): each row pairs an image (224 px wide) with a class label drawn from 9 visual-pattern classes. The rows visible in the preview cover Camera Perspective (0), Color (1), Orientation (2), and Presence (3).

MMVP-VLM Benchmark Datacard

Basic Information

Title: MMVP-VLM Benchmark

Description: The MMVP-VLM (Multimodal Visual Patterns - Visual Language Models) Benchmark systematically evaluates how well recent CLIP-based models understand and process visual patterns. It distills a subset of questions from the original MMVP benchmark into simpler language descriptions and categorizes them into distinct visual patterns, each represented by 15 text-image pairs. The benchmark assesses whether CLIP models can accurately match these image-text combinations, providing insight into the capabilities and limitations of these models.

Dataset Details

  • Content Types: Text-Image Pairs
  • Volume: Balanced across visual patterns, with each of the 9 patterns represented by 15 text-image pairs
  • Source of Data: A subset of the MMVP benchmark, supplemented with additional questions for balance
  • Data Collection Method: Distillation and categorization of MMVP benchmark questions into simpler language descriptions
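
The pairs can be loaded with the Hugging Face Datasets library (listed under Libraries above). A minimal loading sketch; the repository id and split name below are assumptions, not confirmed by this card:

```python
# Minimal loading sketch using the Hugging Face `datasets` library.
# ASSUMPTIONS: the hub repository id "MMVP/MMVP" and the split name "train"
# are illustrative placeholders; substitute the dataset's actual path.
from datasets import load_dataset

ds = load_dataset("MMVP/MMVP", split="train")
print(ds.features["label"].names)   # the 9 visual-pattern class names
print(ds[0]["image"].size)          # PIL image, 224 px wide in the preview
```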

Usage

Intended Use

  • Evaluation of CLIP models' ability to understand and process various visual patterns via image-text matching (see the sketch below).
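
As a concrete illustration of the matching protocol described above: score each image in a pair against both candidate captions and count the pair correct only when each image's highest-scoring caption is its own. The sketch below uses the `transformers` CLIP API; the checkpoint choice and the both-directions-correct scoring rule are assumptions rather than this benchmark's reference implementation:

```python
# Sketch of CLIP image-text matching on one MMVP-VLM-style pair.
# ASSUMPTIONS: image_a/image_b are PIL images, caption_a/caption_b their
# paired descriptions, and the checkpoint below is only illustrative.
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def pair_is_matched(image_a, image_b, caption_a, caption_b) -> bool:
    """True when each image's highest-similarity caption is its own."""
    inputs = processor(text=[caption_a, caption_b],
                       images=[image_a, image_b],
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(**inputs).logits_per_image  # shape: (2 images, 2 texts)
    return bool(logits[0, 0] > logits[0, 1]) and bool(logits[1, 1] > logits[1, 0])
```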