---
license: apache-2.0
task_categories:
  - question-answering
language:
  - en
pretty_name: MERRIN
size_categories:
  - n<1K
configs:
  - config_name: default
    data_files:
      - split: test
        path: MERRIN_encrypted.jsonl
---

MERRIN: Multimodal Evidence Retrieval and Reasoning in Noisy Web Environments

MERRIN is a human-annotated benchmark for evaluating search-augmented agents on multi-hop reasoning over noisy, multimodal web sources. It measures agents' ability to (1) identify which modalities are relevant without explicit cues, (2) retrieve multimodal evidence from the open web, and (3) reason over noisy, conflicting, and incomplete sources spanning text, images, video, and audio.

[🌐 Website] [📄 Paper] [💻 Code]

Dataset Structure

Each record contains the following fields:

  • id (int): Unique question identifier.
  • question (string, encrypted): The question text.
  • answer (string, encrypted): The gold-standard short answer.
  • question_types (list[str]): Reasoning types required: multihop (combining information across sources/modalities), multimodal_conflict (reconciling inconsistent evidence across modalities), or both.
  • multimodal_roles (list[str]): How non-text evidence is used: as_reasoning_chain (provides an intermediate fact needed to derive the answer), as_answer (the answer can only be extracted from a non-text source), or both.
  • freshness (string): How time-sensitive the answer is: never-changing (stable facts), slow-changing (changes over years), or fast-changing (changes frequently).
  • effective_year (int/str): The year in which the ground-truth answer first became valid.
  • source (string): Question origin: Scratch (newly constructed), SealQA, or ChartMuseum (adapted from existing benchmarks).
  • required_modalities (list[str]): The modalities required to answer the question (e.g., text, image, video).
  • resources (list[dict], encrypted): Annotated gold source URLs. Each resource has a modality label and a url pointing to the evidence needed to answer the question.
  • canary (string): Decryption key for encrypted fields.

How to Use

The question, answer, and resources fields are encrypted to prevent data contamination in LLM training corpora. To decrypt, run the following:

import base64, hashlib, json
from datasets import load_dataset

ENCRYPTED_FIELDS = ["question", "answer", "resources"]

def derive_key(password: str, length: int) -> bytes:
    # Repeat the SHA-256 digest of the password until it covers `length` bytes.
    key = hashlib.sha256(password.encode()).digest()
    return (key * (length // len(key) + 1))[:length]

def decrypt(ciphertext_b64: str, password: str) -> str:
    # Base64-decode the ciphertext, then XOR it against the repeating key.
    encrypted = base64.b64decode(ciphertext_b64)
    key = derive_key(password, len(encrypted))
    return bytes(a ^ b for a, b in zip(encrypted, key)).decode("utf-8")

def decrypt_record(record):
    # Each record carries its own decryption password in the `canary` field.
    canary = record.get("canary")
    if not canary:
        return record
    out = {}
    for k, v in record.items():
        if k == "canary":
            continue
        if k in ENCRYPTED_FIELDS and isinstance(v, str):
            plaintext = decrypt(v, canary)
            # `resources` decrypts to JSON; parse it back into Python objects.
            out[k] = json.loads(plaintext) if plaintext.startswith(("[", "{")) else plaintext
        else:
            out[k] = v
    return out

# Load and decrypt
dataset = load_dataset("HanNight/MERRIN", split="test")
data = [decrypt_record(record) for record in dataset]

print(f"Loaded {len(data)} questions")
print(f"Example: {data[0]['question']}")
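
Because the scheme is a repeating-key XOR, encryption and decryption are symmetric, so you can sanity-check the decryption logic locally without downloading the dataset. The self-contained sketch below adds an encrypt helper that mirrors decrypt; the password string is a made-up stand-in for a real canary value:

```python
import base64, hashlib

def derive_key(password: str, length: int) -> bytes:
    # Repeat the SHA-256 digest of the password until it covers `length` bytes.
    key = hashlib.sha256(password.encode()).digest()
    return (key * (length // len(key) + 1))[:length]

def encrypt(plaintext: str, password: str) -> str:
    # Inverse of decrypt: XOR the UTF-8 bytes, then base64-encode.
    raw = plaintext.encode("utf-8")
    key = derive_key(password, len(raw))
    return base64.b64encode(bytes(a ^ b for a, b in zip(raw, key))).decode("ascii")

def decrypt(ciphertext_b64: str, password: str) -> str:
    encrypted = base64.b64decode(ciphertext_b64)
    key = derive_key(password, len(encrypted))
    return bytes(a ^ b for a, b in zip(encrypted, key)).decode("utf-8")

# Round-trip check with a placeholder canary.
msg = "What year did the event occur?"
assert decrypt(encrypt(msg, "example-canary"), "example-canary") == msg
```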

Citation

@article{wang2026merrin,
  title={MERRIN: Multimodal Evidence Retrieval and Reasoning in Noisy Web Environments},
  author={Han Wang and David Wan and Hyunji Lee and Thinh Pham and Mikaela Cankosyan and Weiyuan Chen and Elias Stengel-Eskin and Tu Vu and Mohit Bansal},
  year={2026},
  journal={arXiv preprint arXiv:2604.13418}
}