---
license: mit
pretty_name: MMB Counterfactual Dataset
task_categories:
- visual-question-answering
- multiple-choice
language:
- en
tags:
- vision
- language
- multimodal
- counterfactual
- question-answering
- synthetic
size_categories:
- 1K<n<10K
dataset_info:
  features:
  - name: original_image
    dtype: image
  - name: counterfactual1_image
    dtype: image
  - name: counterfactual2_image
    dtype: image
  - name: counterfactual1_type
    dtype: string
  - name: counterfactual2_type
    dtype: string
  - name: counterfactual1_description
    dtype: string
  - name: counterfactual2_description
    dtype: string
  - name: original_question
    dtype: string
  - name: counterfactual1_question
    dtype: string
  - name: counterfactual2_question
    dtype: string
  - name: original_question_difficulty
    dtype: string
  - name: counterfactual1_question_difficulty
    dtype: string
  - name: counterfactual2_question_difficulty
    dtype: string
  - name: original_image_answer_to_original_question
    dtype: string
  - name: original_image_answer_to_cf1_question
    dtype: string
  - name: original_image_answer_to_cf2_question
    dtype: string
  - name: cf1_image_answer_to_original_question
    dtype: string
  - name: cf1_image_answer_to_cf1_question
    dtype: string
  - name: cf1_image_answer_to_cf2_question
    dtype: string
  - name: cf2_image_answer_to_original_question
    dtype: string
  - name: cf2_image_answer_to_cf1_question
    dtype: string
  - name: cf2_image_answer_to_cf2_question
    dtype: string
  splits:
  - name: train
    num_bytes: 29666931
    num_examples: 100
  download_size: 29653393
  dataset_size: 29666931
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# MMB Counterfactual Dataset

A counterfactual VQA dataset built from the CLEVR Blender assets, which are used to procedurally generate both negative and normal counterfactual VQA images and questions for the Multimodal Benchmark paper.
## Dataset Structure
This repository contains counterfactual visual question answering data with:
- Original images and counterfactual variants (modifications to test reasoning)
- Questions for each image variant
- Answer matrices showing how each image answers each question (9 values per scene: 3 images × 3 questions)
## Loading from Python
After pushing this repository to the Hub, load it with:
```python
from datasets import load_dataset

ds = load_dataset("scholo/MMB_dataset", split="train")
print(ds[0])
```
No `trust_remote_code=True` is needed, since the dataset uses the standard Parquet format.
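The nine answer columns follow a regular `{image}_answer_to_{question}` naming scheme, so the 3 × 3 answer matrix for a scene can be assembled programmatically. A minimal sketch (the helper name `answer_matrix` is ours, not part of the dataset):

```python
# Column-name components, matching the dataset's feature names:
# rows = which image is shown, columns = which question is asked.
IMAGE_KEYS = ["original_image", "cf1_image", "cf2_image"]
QUESTION_KEYS = ["original_question", "cf1_question", "cf2_question"]

def answer_matrix(example):
    """Return the 3x3 answer grid for one scene (rows: images, cols: questions)."""
    return [
        [example[f"{img}_answer_to_{q}"] for q in QUESTION_KEYS]
        for img in IMAGE_KEYS
    ]

# e.g. answer_matrix(ds[0])[1][0] is the cf1 image's answer
# to the original question.
```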
## Directory Structure
```
MMB-Dataset/
├── README.md                              # This file
├── .gitattributes                         # Git LFS configuration for images
├── data/                                  # Dataset files (Parquet format)
│   └── train.parquet                      # Main dataset file
├── Dataset/                               # Current dataset run
│   ├── images/                            # All PNG images (referenced by Parquet)
│   ├── scenes/                            # JSON scene descriptions (reference)
│   ├── image_mapping_with_questions.csv   # Original CSV (source)
│   ├── checkpoint.json                    # Run metadata
│   └── run_metadata.json                  # Run metadata
```
## License
MIT