Dariusfar committed
Commit 6ed31f8 · verified · 1 Parent(s): 67e4519

update dataset card

Files changed (1): README.md +24 -24
README.md CHANGED
@@ -1,39 +1,35 @@
 ---
 license: mit
 task_categories:
- - text-generation
- - question-answering
 language:
- - en
 tags:
- - physics
- - high-energy-physics
- - particle-physics
- - LHC
- - CMS
- - benchmark
- - agentic
- - llm-agents
- - tool-use
- - simulation
 pretty_name: Collider-Bench
 size_categories:
- - n<1K
 ---

 # Collider-Bench

- **Collider-Bench** Collider-Bench is an AI benchmark for evaluating whether LLM agents can reproduce experimental analyses from the **Large Hadron Collider** (LHC) at CERN using only public papers and open scientific software.

- Each task requires multi-step scientific reasoning by an autonomous coding agent, from reading a published CMS or ATLAS search and identifying the relevant signal region, to generating and processing simulated signal events, implementing the event selection, and predicting the binned signal yields reported by the analysis
-
- The benchmark tests long-horizon scientific reasoning under realistic conditions, including ambiguous or underspecified paper descriptions, underdocumented domain-specific tools and approximate public simulation pipelines.
-
- This HuggingFace dataset hosts the **task corpus only** — the agent-facing instructions and artifacts. The **runtime harness, scorer, and hidden reference values** live in the companion GitHub repository:

 🔗 **https://github.com/dfaroughy/Collider-Bench**

- The reference yields used by the scorer are deliberately not published here to preserve the benchmark's blind-test property.

 ## Quick start

@@ -105,7 +101,9 @@ Each `sim` task asks the agent to reproduce the published per-bin yield distribu

 $$d(\hat y, y^\star) = \sqrt{\sum_k (\hat y_k - y_k^\star)^2 \big/ \sum_k (y_k^\star)^2}$$

- between the agent's bin yields $\hat y$ and the published reference $y^\star$. Scoring is offline and deterministic it does **not** require an LLM. See [`ColliderBench/Evals/`](https://github.com/dfaroughy/Collider-Bench/tree/main/ColliderBench/Evals) in the harness repo.

 ## Citation

@@ -113,13 +111,15 @@ If you use Collider-Bench in your research, please cite:

 ```
 @misc{colliderbench2026,
- title = {Collider-Bench: Benchmarking AI Agents with Particle Physics Analysis Reproduction},
  author = {Faroughy, Darius A. and contributors},
  year = {2026},
  url = {https://huggingface.co/datasets/Dariusfar/ColliderBench},
 }
 ```

 ## License

- MIT (matches the GitHub repo). The CMS paper PDFs and detector efficiency maps are reproduced here as published by the CMS Collaboration under the terms of their respective public-data policies.

 ---
 license: mit
 task_categories:
+ - text-generation
+ - question-answering
 language:
+ - en
 tags:
+ - physics
+ - high-energy-physics
+ - particle-physics
+ - LHC
+ - CMS
+ - benchmark
+ - agentic
+ - llm-agents
+ - tool-use
+ - simulation
 pretty_name: Collider-Bench
 size_categories:
+ - n<1K
 ---

 # Collider-Bench

+ **Collider-Bench** is a benchmark for evaluating whether LLM agents can reproduce experimental analyses from the Large Hadron Collider (LHC) using only public papers and open scientific software. Such analyses are often difficult to reproduce because the public toolchain only approximates the software used internally by the experimental collaborations, while the published papers inevitably omit implementation details needed for a faithful reconstruction. Agents must therefore rely on physical reasoning, domain knowledge, and trial and error to fill these gaps. Each task requires the agent to turn a published analysis into an executable simulation-and-selection pipeline and submit predicted collision-event yields in the specified signal regions.

+ This HuggingFace dataset hosts the **task corpus only**: the agent-facing instructions, the null-filled HEPData-style template the agent fills in, the CMS paper PDF, and the published object-efficiency maps. The **runtime harness, scorer, and hidden reference values** live in the companion GitHub repository:

 🔗 **https://github.com/dfaroughy/Collider-Bench**

+ The reference yields used by the scorer are deliberately not published here: leaking them would let any LLM that ingests HuggingFace datasets memorize the answers and defeat the benchmark's blind-test property.

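The task corpus itself can be pulled straight from the Hub. A minimal sketch using `huggingface_hub` (the local directory name below is arbitrary; see the Quick start and the harness repo for the full workflow):

```python
# Sketch: fetch the Collider-Bench task corpus (instructions, null-filled
# templates, paper PDFs, efficiency maps) from the HuggingFace Hub.
from huggingface_hub import snapshot_download

corpus_dir = snapshot_download(
    repo_id="Dariusfar/ColliderBench",  # this dataset
    repo_type="dataset",
    local_dir="collider_bench_tasks",   # arbitrary local path
)
print(f"Task corpus downloaded to: {corpus_dir}")
```
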
  ## Quick start

@@ -105,7 +101,9 @@ Each `sim` task asks the agent to reproduce the published per-bin yield distribu

 $$d(\hat y, y^\star) = \sqrt{\sum_k (\hat y_k - y_k^\star)^2 \big/ \sum_k (y_k^\star)^2}$$

+ between the agent's bin yields $\hat y$ and the published reference $y^\star$, plus the integrated yield error $\Delta = \big|\sum_k \hat y_k - \sum_k y_k^\star\big| \big/ \sum_k y_k^\star$. Diagnostic metrics (RMSLE, Jensen-Shannon divergence, Baker-Cousins shape p-value) are also computed per run.
+
+ Scoring is offline and deterministic; it does **not** require an LLM. See [`ColliderBench/Evals/`](https://github.com/dfaroughy/Collider-Bench/tree/main/ColliderBench/Evals) in the harness repo.

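For illustration, a minimal sketch of these two headline metrics, assuming the predicted and reference yields are plain NumPy arrays (the harness's actual scorer lives in `ColliderBench/Evals/` and may differ in detail):

```python
import numpy as np

def yield_metrics(y_hat: np.ndarray, y_ref: np.ndarray) -> tuple[float, float]:
    """Normalized per-bin yield distance d and integrated yield error Delta.

    y_hat : predicted signal yields per bin (agent's submission)
    y_ref : published reference yields per bin (hidden from the agent)
    """
    d = np.sqrt(np.sum((y_hat - y_ref) ** 2) / np.sum(y_ref ** 2))
    delta = abs(y_hat.sum() - y_ref.sum()) / y_ref.sum()
    return d, delta

# Toy example with made-up yields (not taken from any real task):
d, delta = yield_metrics(np.array([12.0, 5.5, 2.1]), np.array([10.0, 6.0, 2.5]))
print(f"d = {d:.3f}, Delta = {delta:.3f}")
```
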
  ## Citation

@@ -113,13 +111,15 @@ If you use Collider-Bench in your research, please cite:

 ```
 @misc{colliderbench2026,
+ title = {Collider-Bench: A benchmark for LHC analysis recasting by LLM agents},
  author = {Faroughy, Darius A. and contributors},
  year = {2026},
  url = {https://huggingface.co/datasets/Dariusfar/ColliderBench},
 }
 ```

+ …and the four underlying CMS papers (CMS-SUS-16-034, -046, -047, -051) as listed in the GitHub repo's [References section](https://github.com/dfaroughy/Collider-Bench#references).
+
 ## License

+ MIT (matches the GitHub repo). The CMS paper PDFs and detector efficiency maps are reproduced here as published by the CMS Collaboration under the terms of their respective public-data policies.