---
license: mit
task_categories:
- question-answering
- text-generation
language:
- en
tags:
- forecasting
- benchmark
- llm-evaluation
- reasoning
- temporal-reasoning
- contamination-control
- leakage-control
- prediction
- agent
size_categories:
- n<1K
pretty_name: OracleProto Forecasting Eval Set
---

# OracleProto: Forecasting Evaluation Set
Chinese doc: [中文文档]
GitHub repo: [MaYiding/OracleProto]
Visit Our Leaderboards: [Website]
View Our Paper: [arXiv]
A SQLite-packaged evaluation set of 80 hand-curated forecasting questions on real-world events, with resolution dates between 2026-03-12 and 2026-04-14, released alongside the GitHub repo. Both the rows and the byte-stable prompt-reconstruction recipe are packaged in a single file, `forecast_eval_set_example.db`, which exposes two tables: `forecast_eval_set_example` (the 80 rows) and `dataset_metadata` (the recipe).
## 1. Dataset at a glance

| Field | Value |
|---|---|
| Release date | 2026-04-29 |
| Rows | 80 |
| Splits | train (80); single split, intended as a held-out evaluation set |
| Resolution-date range | 2026-03-12 → 2026-04-14 |
| Question types | `yes_no`, `binary_named`, `multiple_choice` |
| Choice types | `single` (one correct letter), `multi` (one or more correct letters) |
| Database file | `forecast_eval_set_example.db` (SQLite 3, ~52 KB) |
| Tables in the file | `forecast_eval_set_example` (80 rows), `dataset_metadata` (1 row) |
| License | MIT |
| Upstream source | HuggingFace forecasting questions (levels 1+2), 322 raw → 80 curated |
### Type distribution

| question_type | choice_type | Rows |
|---|---|---|
| `yes_no` | single | 37 |
| `binary_named` | single | 3 |
| `multiple_choice` | single | 32 |
| `multiple_choice` | multi | 8 |
| **Total** | | **80** |
`yes_no` is binary Yes/No. `binary_named` is a binary choice between two named entities such as two teams, two contestants, or two competing parties. `multiple_choice` has at least three labelled options, one or more of which are correct; "None of the above" is a valid answer when it appears in the option list. Each row stores the exact option labels: letter A maps to `options[0]`, B to `options[1]`, and so on (§3.4 covers labels beyond Z).
## 2. Files

```
OracleProto/
├── forecast_eval_set_example.db    # SQLite database file (the dataset; ~52 KB)
├── forecast_eval_set_example.csv   # CSV export of the rows table; 80 rows + header (~18 KB)
├── README.md                       # this file
├── LICENSE                         # MIT
└── .gitattributes                  # standard HF binary attributes
```
The dataset is published as a single SQLite file, not as Parquet or JSONL, because the prompt-reconstruction recipe and per-row provenance share the same file as the rows (in `dataset_metadata.features_json`). A loader that converts the rows to a `datasets.Dataset` is shown in §6.3.

The CSV is a row-table export of `forecast_eval_set_example`; it does not include `dataset_metadata`, so the prompt template is reachable only via the SQLite file. Use the CSV when a downstream pipeline needs only the 80 rows (pandas, a spreadsheet, or a grep filter) and reconstructs prompts on its own. The `options` column is preserved as a JSON-encoded array string, escaped per RFC 4180.
## 3. Database schema

Two tables: `forecast_eval_set_example` holds the 80 rows; `dataset_metadata` holds the canonical recipe. The file takes its name from the primary table.

### 3.1 Table `forecast_eval_set_example` (the rows)

```sql
CREATE TABLE forecast_eval_set_example (
    id            TEXT PRIMARY KEY,
    choice_type   TEXT NOT NULL CHECK (choice_type IN ('single','multi')),
    question_type TEXT NOT NULL,  -- yes_no | binary_named | multiple_choice
    event         TEXT NOT NULL,  -- the event being predicted
    options       TEXT NOT NULL,  -- JSON array of option labels
    answer        TEXT NOT NULL,  -- canonical correct answer as letter(s)
    end_time      TEXT NOT NULL   -- 'YYYY-MM-DD'
);

CREATE INDEX idx_forecast_eval_set_example_choice_type   ON forecast_eval_set_example(choice_type);
CREATE INDEX idx_forecast_eval_set_example_question_type ON forecast_eval_set_example(question_type);
CREATE INDEX idx_forecast_eval_set_example_end_time      ON forecast_eval_set_example(end_time);
```
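The indexes exist to make filtered slices cheap. For example (an illustrative query, assuming the database file sits in the working directory):

```python
import sqlite3

conn = sqlite3.connect("forecast_eval_set_example.db")
# Equality filter on choice_type and range filter on end_time, both indexed.
multi_soon = conn.execute(
    """
    SELECT id, event, end_time
    FROM forecast_eval_set_example
    WHERE choice_type = 'multi' AND end_time < '2026-04-01'
    ORDER BY end_time
    """
).fetchall()
print(len(multi_soon), "multi-select rows resolving before April")
```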
### 3.2 Table `dataset_metadata` (the recipe)

A one-row table whose `features_json` blob stores the prompt template, the four output formats, the outcomes-block rule, the agent role, and curation provenance. The full recipe is documented in §5.

```sql
CREATE TABLE dataset_metadata (
    dataset_name    TEXT NOT NULL,
    split_name      TEXT NOT NULL,
    table_name      TEXT NOT NULL,
    row_count       INTEGER NOT NULL,
    imported_at_utc TEXT NOT NULL,
    features_json   TEXT NOT NULL
);
```
### 3.3 Column semantics

| Column | Type | Description |
|---|---|---|
| `id` | TEXT | Stable source-side question ID inherited from the upstream HuggingFace forecasting set; primary join key. |
| `choice_type` | TEXT | `'single'` if exactly one letter is correct, `'multi'` if one or more letters are correct. Derived from the number of letters in `answer`. Selects between the single-answer and multi-select templates in §5.4. |
| `question_type` | TEXT | One of `yes_no`, `binary_named`, `multiple_choice`. Selects which prompt template is rendered (§5). |
| `event` | TEXT | Natural-language description of the event being predicted, author-edited to make the time anchor, the units, and the binary framing explicit. |
| `options` | TEXT | JSON array of option labels. For `yes_no` it is fixed to `["Yes","No"]`. For `binary_named` it is the two named entities. For `multiple_choice` it is the list of choice labels, where each letter is given by its position (A=`options[0]`, B=`options[1]`, …). |
| `answer` | TEXT | Canonical correct answer encoded as letters. For `yes_no` and `binary_named` it is `'A'` or `'B'`. For `multiple_choice` it is a comma-separated letter list in option order, e.g. `'A'` or `'A, B'`. |
| `end_time` | TEXT | Resolution date in `YYYY-MM-DD`. The column stores a calendar date only; the prompt template (§5.2) attaches the GMT+8 reading at render time. If finer-grained admissibility is needed, treat each resolution as covering the whole calendar day. |
### 3.4 Letter-to-index encoding

Letters map to option indices via `index = ord(letter) - ord('A')`. Beyond Z (≥27 options) the labels continue along the contiguous ASCII range that starts at `A`: `[`, `\`, `]`, `^`, `_`, `` ` ``, `a`, `b`, …. The reference renderer wraps any non-A–Z label in backticks to keep the label intact under Markdown rendering. None of the 80 rows exceeds 26 options; the encoding is documented because the framework's parser supports it.
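A minimal sketch of the mapping in plain Python (mirroring the formula above; not imported from the framework):

```python
def letter_for_index(i: int) -> str:
    """Option label for index i: the contiguous ASCII run starting at 'A'."""
    return chr(ord("A") + i)

def index_for_letter(letter: str) -> int:
    """Inverse mapping: index = ord(letter) - ord('A')."""
    return ord(letter) - ord("A")

assert letter_for_index(0) == "A" and index_for_letter("Z") == 25
assert letter_for_index(26) == "["  # first post-Z label; backtick-wrapped when rendered
```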
## 4. Sample rows

```json
{
  "id": "699d9ffc098cca008728b6f0",
  "choice_type": "single",
  "question_type": "yes_no",
  "event": "Will the US PCE annual inflation be greater than 2.9% in January 2026?",
  "options": ["Yes", "No"],
  "answer": "B",
  "end_time": "2026-03-13"
}

{
  "id": "69a2e39e5692ef005cdbf2d3",
  "choice_type": "single",
  "question_type": "binary_named",
  "event": "Will US or Israel strike Iran first?",
  "options": ["US", "Israel"],
  "answer": "B",
  "end_time": "2026-03-31"
}

{
  "id": "6995b1073ea64b005b11f285",
  "choice_type": "single",
  "question_type": "multiple_choice",
  "event": "Which men's basketball team will win the Big 12 Conference Championship tournament in the 2025-26 season?",
  "options": ["Arizona", "Baylor", "Brigham Young University (BYU)",
              "Houston", "Iowa State", "Kansas", "Kansas State"],
  "answer": "A",
  "end_time": "2026-03-14"
}

{
  "id": "698f198bda7a8b006575444c",
  "choice_type": "multi",
  "question_type": "multiple_choice",
  "event": "Which movies will win multiple Oscars? (2026)",
  "options": ["One Battle After Another", "Sinners", "Frankenstein",
              "KPop Demon Hunters", "F1", "Sentimental Value", "Hamnet",
              "Marty Supreme", "The Secret Agent", "Avatar: Fire and Ash",
              "Train Dreams", "Bugonia", "Blue Moon", "It Was Just An Accident"],
  "answer": "A, B, C, D",
  "end_time": "2026-03-15"
}
```
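As a quick sanity check on the letter encoding, the multi-select sample's answer decodes back to its first four option labels (a throwaway snippet, not framework code):

```python
oscars = {
    "options": ["One Battle After Another", "Sinners", "Frankenstein",
                "KPop Demon Hunters", "F1", "Sentimental Value", "Hamnet",
                "Marty Supreme", "The Secret Agent", "Avatar: Fire and Ash",
                "Train Dreams", "Bugonia", "Blue Moon", "It Was Just An Accident"],
    "answer": "A, B, C, D",
}
winners = [oscars["options"][ord(l) - ord("A")]
           for l in oscars["answer"].replace(" ", "").split(",")]
print(winners)  # ['One Battle After Another', 'Sinners', 'Frankenstein', 'KPop Demon Hunters']
```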
## 5. Prompt reconstruction (canonical recipe)

Every row is rendered into a single user message via the recipe stored in `dataset_metadata.features_json.prompt_reconstruction`. The recipe is byte-stable and is the source of truth for the OracleProto evaluator; downstream users who reconstruct prompts themselves should follow it exactly to keep results comparable.
### 5.1 Static fragments

```text
agent_role: "You are an agent that can predict future events."

guidance: "Do not use any other format. Do not refuse to make a prediction.
           Do not say \"I cannot predict the future.\" You must make a clear
           prediction based on the best data currently available, using the
           box format specified above."
```
### 5.2 Master template

```text
{agent_role} The event to be predicted: "{event} (resolved around {end_time} (GMT+8)).{outcomes_block}"
IMPORTANT: Your final answer MUST end with this exact format:
{output_format}
{guidance}
```

The literal `(GMT+8)` inside the user-visible string is what attaches a timezone to the resolution date at render time.
### 5.3 `outcomes_block`

For `yes_no` and `binary_named`: empty, since the option labels are embedded directly in `output_format`.

For `multiple_choice`: a leading newline followed by one line per option in `A. <label>` form, for example `\nA. Arizona\nB. Baylor\nC. Brigham Young University (BYU)\n…`. Labels whose derived letter falls outside A–Z are wrapped in backticks.
### 5.4 `output_format` (one of four, chosen by `question_type` × `choice_type`)

`yes_no`:

```text
Your task is to predict whether the event will occur based on your analysis.
Your prediction will be scored based on its accuracy. You will only receive points if your answer is correct.
Your final answer MUST end with this exact format:
\boxed{Yes} or \boxed{No}
```

`binary_named` (the literals `<options[0]>` and `<options[1]>` are replaced by the two named entities from `options`):

```text
Your task is to predict which of the two outcomes will occur based on your analysis.
Your prediction will be scored based on its accuracy. You will only receive points if your answer is correct.
Your final answer MUST end with this exact format:
\boxed{<options[0]>} or \boxed{<options[1]>}
```

`multiple_choice` with `choice_type='single'`:

```text
This is a SINGLE-ANSWER question: exactly ONE of the listed options is correct.
Your prediction will be scored on strict equality with the unique correct letter; choosing the wrong letter, or selecting more than one letter, scores zero.
Your final answer MUST end with this exact format:
the single correct letter inside the box, e.g. \boxed{A}.
Do NOT list more than one letter, even if you believe two outcomes are tied — pick the one you find most likely.
```

`multiple_choice` with `choice_type='multi'`:

```text
This is a MULTI-SELECT question: ONE OR MORE of the listed options can be correct.
Your prediction will be scored on strict equality with the FULL set of correct letters: any extra letter, any missing letter, or any wrong letter scores zero. You must include ALL correct options and NO incorrect options.
Your final answer MUST end with this exact format:
listing all correct option(s) you have identified, separated by commas, within the box.
For example: \boxed{A} for a single correct option, or \boxed{B, C} for multiple correct options.
```
### 5.5 Answer parsing

The reference parser (`forecast_eval/parser.py::parse_answer`) applies these rules:

- Take the last `\boxed{...}` substring in the model's reply; everything else is reasoning or scratchpad and is ignored.
- For `yes_no` (case-insensitive): `Yes` → A, `No` → B. Anything else is unparsed.
- For `binary_named` (case-insensitive): match the boxed payload against `options[0]` or `options[1]`. Anything else is unparsed.
- For `multiple_choice`: split the boxed payload on commas and whitespace, validate that each token is a single letter, and check that each letter resolves to a valid option index. Out-of-range letters or multi-character tokens are unparsed.
- Score by strict set equality against the canonical letter set parsed from `answer`. A missing or unparsed boxed answer is recorded as `parse_ok = 0` rather than raised as an error, and the run continues without halting.
Reusing the framework's parser is the simplest way to get bit-identical scores across implementations.
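For pipelines that cannot vendor the reference parser, the following sketch approximates the rules above (an approximation only; edge-case handling may differ from `forecast_eval/parser.py`):

```python
import re

def parse_answer_sketch(reply: str, question_type: str, options: list[str]):
    """Approximate §5.5: return the predicted letter set, or None if unparsed."""
    boxed = re.findall(r"\\boxed\{([^}]*)\}", reply)
    if not boxed:
        return None                                # recorded as parse_ok = 0
    payload = boxed[-1].strip()                    # last \boxed{...} wins

    if question_type == "yes_no":
        return {"yes": {"A"}, "no": {"B"}}.get(payload.lower())
    if question_type == "binary_named":
        for i, label in enumerate(options[:2]):
            if payload.lower() == label.lower():
                return {chr(ord("A") + i)}
        return None
    # multiple_choice: comma/whitespace-separated single-letter tokens, in range.
    letters = set()
    for tok in re.split(r"[,\s]+", payload):
        if not tok:
            continue
        if len(tok) != 1 or not 0 <= ord(tok) - ord("A") < len(options):
            return None                            # multi-char or out-of-range token
        letters.add(tok)
    return letters or None

def exact_match(pred, answer: str) -> int:
    """Strict set equality against the canonical letter set from `answer`."""
    return int(pred == {s.strip() for s in answer.split(",")})
```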
## 6. Loading the dataset

### 6.1 With raw `sqlite3` (no extra dependencies)

```python
import sqlite3
import json

conn = sqlite3.connect("forecast_eval_set_example.db")
conn.row_factory = sqlite3.Row

# Read the rows.
rows = conn.execute("SELECT * FROM forecast_eval_set_example").fetchall()
print(f"loaded {len(rows)} rows")

sample = dict(rows[0])
sample["options"] = json.loads(sample["options"])  # JSON-decode option list
print(sample)

# Read the canonical prompt-reconstruction recipe.
meta_row = conn.execute("SELECT features_json FROM dataset_metadata").fetchone()
meta = json.loads(meta_row["features_json"])
prompt_template = meta["prompt_reconstruction"]["prompt_template"]
print(prompt_template)
```
### 6.2 With `huggingface_hub`

```python
from huggingface_hub import hf_hub_download
import sqlite3, json

db_path = hf_hub_download(
    repo_id="MaYiding/OracleProto",
    filename="forecast_eval_set_example.db",
    repo_type="dataset",
)
conn = sqlite3.connect(db_path)
rows = conn.execute("SELECT * FROM forecast_eval_set_example").fetchall()
```
### 6.3 Convert to a `datasets.Dataset`

```python
import sqlite3, json
from datasets import Dataset

conn = sqlite3.connect("forecast_eval_set_example.db")
cur = conn.execute("SELECT * FROM forecast_eval_set_example")
cols = [c[0] for c in cur.description]

def _row(r):
    d = dict(zip(cols, r))
    d["options"] = json.loads(d["options"])  # list[str]
    d["answer_letters"] = [
        s.strip() for s in d["answer"].split(",") if s.strip()
    ]  # list[str]
    return d

ds = Dataset.from_list([_row(r) for r in cur.fetchall()])
print(ds)
print(ds[0])
```
### 6.4 Render a prompt (minimal, faithful to the canonical recipe)

```python
def render_prompt(row, meta):
    rcp = meta["prompt_reconstruction"]
    options = row["options"]
    qt, ct = row["question_type"], row["choice_type"]
    if qt == "yes_no":
        outcomes_block = ""
        out_fmt = rcp["yes_no_output_format"]
    elif qt == "binary_named":
        outcomes_block = ""
        out_fmt = (
            rcp["binary_named_output_format"]
            .replace("<options[0]>", options[0])
            .replace("<options[1]>", options[1])
        )
    elif qt == "multiple_choice":
        outcomes_block = "\n" + "\n".join(
            f"{chr(ord('A') + i)}. {label}" for i, label in enumerate(options)
        )
        key = (
            "multiple_choice_single_output_format" if ct == "single"
            else "multiple_choice_multi_output_format"
        )
        out_fmt = rcp[key]
    else:
        raise ValueError(qt)
    return rcp["prompt_template"].format(
        agent_role=rcp["agent_role"],
        event=row["event"],
        end_time=row["end_time"],
        outcomes_block=outcomes_block,
        output_format=out_fmt,
        guidance=rcp["guidance"],
    )
```

The full reference renderer, which extends the example above with the >26-option backtick rule and an optional reflection / belief-elicitation tail, is implemented in `forecast_eval/prompts.py`; reusing it produces byte-identical prompts.
### 6.5 With the CSV export (stdlib `csv`, no prompt template)

```python
import csv, json

with open("forecast_eval_set_example.csv", encoding="utf-8", newline="") as f:
    rows = [
        {**r, "options": json.loads(r["options"])}
        for r in csv.DictReader(f)
    ]
print(f"loaded {len(rows)} rows; first event: {rows[0]['event']!r}")
```

The CSV path skips `dataset_metadata` entirely. To pair the rows with the prompt template, either follow §5 by hand or switch back to the SQLite path in §6.1.
## 7. Recommended evaluation protocol

Pair the dataset with the OracleProto evaluation harness, which layers information-boundary discipline on top of a plain prompt-and-score loop. Five concrete recommendations:
1. **Declare a knowledge cutoff $\kappa_M$ for every model.** A question is admissible for model $M$ only when $\kappa_M \le \chi_i < \tau_i$, where $\chi_i$ is the per-question prediction cutoff and $\tau_i$ is its resolution date (see the sketch after this list). Inadmissible questions are filtered upstream rather than counted as model errors. A model with no declared cutoff cannot be fairly compared against a model that has one.
2. **Time-mask any retrieval or browsing tool.** If the harness lets the model issue web searches, pin the search-side `end_date` to $\chi_i + \delta$ with a conservative offset; OracleProto defaults to $\delta = -1$ day. The mechanism behind this barrier (L2) is documented in the framework's DESIGN and FRAME notes.
3. **Run an independent retrieval-content auditor.** Each retrieved snippet is passed to a separate LLM auditor that decides whether the snippet leaks the resolution. This is the L3 barrier in the framework's threat model.
4. **Forbid provider-native browsing.** OracleProto refuses model slugs ending in `:online` and similar hosted-browsing variants on three layers: config validation, on-the-wire client, and detector client. This is the L4 barrier, the final check that any billable LLM call must clear before it leaves the process.
5. **Score with strict set equality on letter sets, per §5.5.** Optional probability-calibration metrics (Brier, NLL, ECE, Murphy decomposition) are supported when the model emits an additional `<belief>{ ... }</belief>` JSON block following the framework's belief-elicitation protocol; the schema is documented in `forecast_eval/prompts.py::BELIEF_PROTOCOL`.
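A minimal sketch of the admissibility filter from item 1, using hypothetical cutoff dates (`kappa_m` and `chi_i` below are illustrative assumptions, and `rows` is loaded as in §6.1):

```python
from datetime import date

def admissible(kappa_m: date, chi_i: date, tau_i: date) -> bool:
    """Item 1 above: a question is admissible iff kappa_M <= chi_i < tau_i."""
    return kappa_m <= chi_i < tau_i

# Hypothetical example: model cutoff 2025-12-31, predictions made 2026-03-01.
kappa_m, chi_i = date(2025, 12, 31), date(2026, 3, 1)
eligible = [
    r for r in rows  # rows loaded as in §6.1
    if admissible(kappa_m, chi_i, date.fromisoformat(r["end_time"]))
]
```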
Without the OracleProto harness in place, treat the resulting numbers as upper bounds on forecasting ability: any model that can browse the open web, or that was trained past a question's `end_time`, may have memorised the answer. The dataset makes the admissibility check possible; it does not enforce it on its own.
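When the belief protocol from item 5 is enabled, a per-question Brier score can be computed once the `<belief>` block has been decoded. A minimal sketch, assuming the decoded block is a letter → probability dict (the authoritative schema lives in `forecast_eval/prompts.py::BELIEF_PROTOCOL`):

```python
def brier(probs: dict[str, float], gold: set[str]) -> float:
    """Mean squared error between stated probabilities and 0/1 outcomes."""
    return sum((p - (letter in gold)) ** 2 for letter, p in probs.items()) / len(probs)

# e.g. brier({"A": 0.7, "B": 0.3}, {"A"}) == ((0.7-1)**2 + (0.3-0)**2) / 2 == 0.09
```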
## 8. Provenance and curation

- **Source.** Upstream HuggingFace forecasting questions, restricted to levels 1+2 (the easier two of the upstream difficulty bands). The raw set was harvested as 322 candidate questions.
- **Curation pipeline (5 passes).**
  1. Source-side broken-row removal and column flattening.
  2. `end_time` / answer-encoding / option-label normalization: `end_time` reduced to a `YYYY-MM-DD` calendar date; Yes/No mapped to A/B; option labels stripped of stray markdown.
  3. Down-sampling 322 → 200 → 100 → 80 with placeholder removal, deduplication, and an ambiguity audit.
  4. Final HIGH+MEDIUM ambiguity remediation: 4 rows reworded to make their time anchor, units, or binary framing explicit.
  5. CRITICAL fix on one S&P 500 multi-select truth set so that it satisfies the monotonic-threshold logic implied by the option ladder.
- **Verification.** All 80 ground truths were verified end-to-end via parser round-trip (the rendered prompt is parsed and re-encoded back to the canonical letter set). Final tally: 0 critical / 0 high / 0 medium ambiguity issues remaining.
## 9. Intended uses and limitations

### 9.1 Intended uses

- Forecasting benchmark for LLMs and LLM agents, particularly tool-using agents that combine parametric knowledge with time-masked web retrieval.
- Reproducibility testbed for forecasting harnesses. The `dataset_metadata` table makes every prompt byte-stable; pairing it with the OracleProto framework yields a run unit whose scoring artefacts are bit-identical when the configuration matches.
- Calibration and proper-scoring research. The 80-row size is small enough that per-question analysis (belief evolution, source attribution, calibration plots) stays tractable.
### 9.2 Out-of-scope uses
- Long-horizon forecasting. All resolutions land in a one-month window (2026-03-12 → 2026-04-14); the set does not represent multi-quarter or multi-year forecasting.
- Open-ended generation. Every question has a closed answer set, so this is not a generation benchmark.
## 10. License
Released under the MIT License (see LICENSE). The upstream questions originate from a public HuggingFace forecasting set; the curation work, schema, prompt-reconstruction recipe, and answer encodings in this release are the contribution of this project.
## 11. Contact
For questions about code usage, dataset construction, or reproducing results, please reach out to the developers directly:
- Yiding Ma: yidingma@bupt.edu.cn
- Chengyun Ruan: ruanchengyun815@bupt.edu.cn
For joint research, dataset and benchmark co-development, or paper collaboration, please contact the principal investigators:
- Kaibo Huang (corresponding author): huangkaibo@bupt.edu.cn
- Zhongliang Yang (corresponding author): yangzl@bupt.edu.cn
## 12. Paper
View Our Paper: arXiv
## 13. Citation
If you use this project in your research, please cite our paper:
```bibtex
@article{OracleProto,
  title={OracleProto: A Reproducible Framework for Benchmarking LLM Native Forecasting via Knowledge Cutoff and Temporal Masking},
  author={Yiding Ma and Chengyun Ruan and Kaibo Huang and Zhongliang Yang and Linna Zhou},
  journal={arXiv preprint arXiv:2605.03762},
  year={2026}
}
```