HanNight committed
Commit 676a1c6 · verified · 1 Parent(s): 36704f3

Update README.md

Files changed (1)
  1. README.md +76 -1
README.md CHANGED
@@ -12,4 +12,79 @@ configs:
  data_files:
  - split: test
    path: "MERRIN_encrypted.jsonl"
- ---
+ ---
+
+ # MERRIN: Multimodal Evidence Retrieval and Reasoning in Noisy Web Environments
+
+ MERRIN is a human-annotated benchmark for evaluating search-augmented agents on multi-hop reasoning over noisy, multimodal web sources. It measures agents' ability to (1) identify which modalities are relevant without explicit cues, (2) retrieve multimodal evidence from the open web, and (3) reason over noisy, conflicting, and incomplete sources spanning text, images, video, and audio.
+
+ [[🌐 **Website**](https://merrin-benchmark.github.io/)] [[📄 **Paper**](https://arxiv.org/abs/)] [[💻 **Code**](https://github.com/HanNight/MERRIN)]
+
+ ## Dataset Structure
+
+ Each record contains the following fields (an illustrative record sketch follows the list):
+
+ - **`id`** (int): Unique question identifier.
+ - **`question`** (string, encrypted): The question text.
+ - **`answer`** (string, encrypted): The gold-standard short answer.
+ - **`question_types`** (list[str]): Reasoning types required, either `multihop` (combining information across sources/modalities), `multimodal_conflict` (reconciling inconsistent evidence across modalities), or both.
+ - **`multimodal_roles`** (list[str]): How non-text evidence is used, either `as_reasoning_chain` (provides an intermediate fact needed to derive the answer), `as_answer` (the answer can only be extracted from a non-text source), or both.
+ - **`freshness`** (string): How time-sensitive the answer is, one of `never-changing` (stable facts), `slow-changing` (changes over years), or `fast-changing` (changes frequently).
+ - **`effective_year`** (int/str): The year in which the ground-truth answer first became valid.
+ - **`source`** (string): Question origin, one of `Scratch` (newly constructed), `SealQA`, or `ChartMuseum` (adapted from existing benchmarks).
+ - **`required_modalities`** (list[str]): The modalities required to answer the question (e.g., `text`, `image`, `video`).
+ - **`resources`** (list[dict], encrypted): Annotated gold source URLs. Each resource has a `modality` label and a `url` pointing to the evidence needed to answer the question.
+ - **`canary`** (string): Decryption key for encrypted fields.
+
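+ For orientation, a decrypted record has roughly the following shape. Every value below is invented for illustration and is not drawn from the dataset; note that the `canary` field is dropped during decryption by `decrypt_record` in the next section.
+
+ ```python
+ # A hypothetical decrypted record; all values are made up for illustration.
+ example_record = {
+     "id": 42,
+     "question": "Which instrument opens the live performance linked from the concert poster?",
+     "answer": "cello",
+     "question_types": ["multihop"],
+     "multimodal_roles": ["as_reasoning_chain"],
+     "freshness": "never-changing",
+     "effective_year": 2021,
+     "source": "Scratch",
+     "required_modalities": ["image", "video"],
+     "resources": [
+         {"modality": "image", "url": "https://example.com/poster.jpg"},
+         {"modality": "video", "url": "https://example.com/performance"},
+     ],
+ }
+ ```
+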
+ ## How to Use
+
+ The `question`, `answer`, and `resources` fields are encrypted to prevent data contamination in LLM training corpora. The cipher is a simple XOR stream: the per-record `canary` string is hashed with SHA-256, the digest is repeated to the ciphertext length, and decryption XORs the Base64-decoded ciphertext against that keystream. To decrypt, run the following:
+
+ ```python
+ import base64, hashlib, json
+ from datasets import load_dataset
+
+ ENCRYPTED_FIELDS = ["question", "answer", "resources"]
+
+ def derive_key(password: str, length: int) -> bytes:
+     # Stretch the SHA-256 digest of the password into a keystream of `length` bytes.
+     key = hashlib.sha256(password.encode()).digest()
+     return (key * (length // len(key) + 1))[:length]
+
+ def decrypt(ciphertext_b64: str, password: str) -> str:
+     # XOR the Base64-decoded ciphertext against the derived keystream.
+     encrypted = base64.b64decode(ciphertext_b64)
+     key = derive_key(password, len(encrypted))
+     return bytes(a ^ b for a, b in zip(encrypted, key)).decode("utf-8")
+
+ def decrypt_record(record):
+     # Decrypt the encrypted fields of one record, dropping the `canary` key.
+     canary = record.get("canary")
+     if not canary:
+         return record
+     out = {}
+     for k, v in record.items():
+         if k == "canary":
+             continue
+         if k in ENCRYPTED_FIELDS and isinstance(v, str):
+             plaintext = decrypt(v, canary)
+             # `resources` decrypts to JSON; `question` and `answer` are plain strings.
+             out[k] = json.loads(plaintext) if plaintext.startswith(("[", "{")) else plaintext
+         else:
+             out[k] = v
+     return out
+
+ # Load and decrypt
+ dataset = load_dataset("HanNight/MERRIN", split="test")
+ data = [decrypt_record(record) for record in dataset]
+
+ print(f"Loaded {len(data)} questions")
+ print(f"Example: {data[0]['question']}")
+ ```
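+
+ Because XOR is its own inverse, the same keystream both encrypts and decrypts. A minimal round-trip sanity check (not part of the dataset tooling; it simply reuses `derive_key` and `decrypt` from the snippet above):
+
+ ```python
+ def encrypt(plaintext: str, password: str) -> str:
+     # Inverse of decrypt(): XOR with the same keystream, then Base64-encode.
+     raw = plaintext.encode("utf-8")
+     key = derive_key(password, len(raw))
+     return base64.b64encode(bytes(a ^ b for a, b in zip(raw, key))).decode("ascii")
+
+ assert decrypt(encrypt("hello", "some-canary"), "some-canary") == "hello"
+ ```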
+
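+ Once decrypted, the metadata fields documented above can be used to slice the benchmark. A small sketch, continuing from the `data` list built above:
+
+ ```python
+ # Keep only multihop questions whose evidence must include video.
+ video_multihop = [
+     r for r in data
+     if "multihop" in r["question_types"] and "video" in r["required_modalities"]
+ ]
+ print(f"{len(video_multihop)} multihop questions require video evidence")
+
+ # Collect the annotated gold evidence URLs for the first such question.
+ if video_multihop:
+     urls = [res["url"] for res in video_multihop[0]["resources"]]
+     print(urls)
+ ```
+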
+ ## Citation
+
+ ```bibtex
+ @article{wang2026merrin,
+   title={MERRIN: Multimodal Evidence Retrieval and Reasoning in Noisy Web Environments},
+   author={Han Wang and David Wan and Hyunji Lee and Thinh Pham and Mikaela Cankosyan and Weiyuan Chen and Elias Stengel-Eskin and Tu Vu and Mohit Bansal},
+   year={2026},
+   journal={arXiv preprint arXiv:}
+ }
+ ```