---
pretty_name: MMNeedle
language:
  - en
license: cc-by-4.0
homepage: https://mmneedle.github.io/
paper: https://arxiv.org/abs/2406.11230
tagline: >-
  Benchmarking long-context multimodal retrieval under extreme visual context
  lengths.
tags:
  - multimodal
  - visual-question-answering
  - long-context
  - evaluation
size_categories:
  - 100K<n<1M
---

# MMNeedle

MMNeedle is a stress test for long-context multimodal reasoning. Each example contains a sequence of haystack images created by stitching MS COCO sub-images into 1×1, 2×2, 4×4, or 8×8 grids. Given textual needle descriptions (derived from MS COCO captions), models must predict which haystack image and which sub-image cell matches the caption—or report that the needle is absent.

This dataset card accompanies the official Hugging Face release so researchers no longer need to download from Google Drive or regenerate the benchmark from MS COCO.

## Dataset structure

- **Sequences** (`sequence_length`): either a single stitched image or a set of 10 stitched images.
- **Grid sizes** (`grid_rows`, `grid_cols`): {1, 2, 4, 8}, with square layouts.
- **Needles per query** (`needles_per_query`): {1, 2, 5}. Each query provides that many captions.
- **Examples per configuration**: 10,000. Half contain the needle(s); half are negatives.
- **Total examples**: 210,000 (21 configurations × 10,000 samples).

Every example stores the full list of haystack image paths, the ground-truth needle locations (`image_index`, `row`, `col`), the MS COCO image IDs for the needles, the natural-language captions, and a `has_needle` boolean.

## Usage

```python
from datasets import load_dataset

ds = load_dataset("Wang-ML-Lab/MMNeedle", split="test")
example = ds[0]
print(example.keys())
# dict_keys(['id', 'sequence_length', 'grid_rows', 'grid_cols', 'needles_per_query',
#            'haystack_images', 'needle_locations', 'needle_image_ids',
#            'needle_captions', 'has_needle'])
```

Each entry in `haystack_images` is a PIL-compatible image object. `needle_captions` contains one string per requested needle (even for negative examples, where the corresponding location is `(-1, -1, -1)`).
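As a quick sanity check, the positive/negative convention above can be verified with a small helper. Only the field names below come from this card; the sample record is fabricated for illustration:

```python
def is_negative(example):
    """A sample is negative when every needle location is (-1, -1, -1)."""
    return all(
        loc["image_index"] == -1 and loc["row"] == -1 and loc["col"] == -1
        for loc in example["needle_locations"]
    )

# Fabricated record mirroring the schema on this card (not real MMNeedle data).
sample = {
    "needles_per_query": 2,
    "needle_locations": [
        {"image_index": -1, "row": -1, "col": -1},
        {"image_index": -1, "row": -1, "col": -1},
    ],
    "has_needle": False,
}

# has_needle should always agree with the location sentinels.
assert is_negative(sample) == (not sample["has_needle"])
assert len(sample["needle_locations"]) == sample["needles_per_query"]
```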

## Data fields

| Field | Type | Description |
| --- | --- | --- |
| `id` | string | Unique identifier combining the configuration and sample id. |
| `sequence_length` | int | Number of stitched haystack images shown to the model. |
| `grid_rows`, `grid_cols` | int | Dimensions of the stitched grid (each cell is 256×256 px). |
| `needles_per_query` | int | Number of captions provided for the sample (1, 2, or 5). |
| `haystack_images` | list of Image | Ordered haystack images for the sequence. |
| `needle_locations` | list of dict | One dict per caption with `image_index`, `row`, and `col` (−1 when absent). |
| `needle_image_ids` | list of string | MS COCO filenames that generated each caption. |
| `needle_captions` | list of string | MS COCO captions used as the needle descriptions. |
| `has_needle` | bool | True if at least one caption corresponds to a haystack cell. |
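If a flat cell index is more convenient than a `(row, col)` pair, a location can be flattened in row-major order. This helper is a convenience assumption, not part of the released schema:

```python
def flatten_location(row, col, grid_cols):
    """Row-major cell index within a stitched image.

    The -1 sentinel used for negative examples passes through unchanged.
    """
    if row == -1 or col == -1:
        return -1
    return row * grid_cols + col

# In a 4x4 grid, cell (row=2, col=3) is flat index 2 * 4 + 3 = 11.
print(flatten_location(2, 3, grid_cols=4))    # -> 11
print(flatten_location(-1, -1, grid_cols=4))  # -> -1
```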

## Recommended evaluation protocol

  1. Feed the ordered haystack images (preserving grid layout) plus the instruction template from the MMNeedle paper to your multimodal model.
  2. Parse the model output into (image_index, row, col) triples.
  3. Compare against needle_locations to compute accuracy for positives and the false-positive rate for negatives.
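The parsing and scoring steps can be sketched as follows. The free-form output format and the regular expression are assumptions for illustration; adapt them to the instruction template you actually use:

```python
import re

def parse_prediction(text):
    """Extract an (image_index, row, col) triple from free-form model output.

    Returns None when no triple is found (assumed output format).
    """
    m = re.search(
        r"image\s*(-?\d+).*?row\s*(-?\d+).*?col(?:umn)?\s*(-?\d+)",
        text,
        flags=re.IGNORECASE | re.DOTALL,
    )
    return tuple(int(g) for g in m.groups()) if m else None

def exact_match_accuracy(predictions, gold_locations):
    """Fraction of samples whose full (image_index, row, col) triple matches."""
    hits = sum(p == g for p, g in zip(predictions, gold_locations))
    return hits / len(gold_locations)

preds = [
    parse_prediction("The needle is in image 3, row 1, column 2."),
    parse_prediction("Not present."),  # parses to None, never matches
]
gold = [(3, 1, 2), (-1, -1, -1)]
print(exact_match_accuracy(preds, gold))  # -> 0.5
```

A prediction of "absent" could instead be mapped to `(-1, -1, -1)` so that correct rejections on negative samples are credited; the false-positive rate then falls out of the same comparison.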

See the repository’s `needle.py` for a reference implementation.

## Source data

- **Images & captions**: MS COCO 2014 validation split (CC BY 4.0).
- **Needle metadata**: automatically generated by the MMNeedle authors; included here as JSON files.

## Licensing

All stitched haystack images inherit the Creative Commons Attribution 4.0 license from MS COCO. Attribution should, at minimum, cite both MMNeedle and MS COCO.

## Citations

```bibtex
@article{wang2024mmneedle,
  title={Multimodal Needle in a Haystack: Benchmarking Long-Context Capability of Multimodal Large Language Models},
  author={Wang, Hengyi and Shi, Haizhou and Tan, Shiwei and Qin, Weiyi and Wang, Wenyuan and Zhang, Tunyu and Nambi, Akshay and Ganu, Tanuja and Wang, Hao},
  journal={arXiv preprint arXiv:2406.11230},
  year={2024}
}

@article{lin2014microsoft,
  title={Microsoft COCO: Common Objects in Context},
  author={Lin, Tsung-Yi and Maire, Michael and Belongie, Serge and Hays, James and Perona, Pietro and Ramanan, Deva and Doll{\'a}r, Piotr and Zitnick, C. Lawrence},
  journal={ECCV},
  year={2014}
}
```