MaYiding committed
Commit 4e2dbdd · 1 Parent(s): c10bdb7

Update README

Files changed (1):
  1. README-ZH.md +114 -114
README-ZH.md CHANGED
@@ -1,71 +1,71 @@
- # OracleProto: Forecasting Evaluation Set

- **Chinese Doc:** [[`中文文档`](https://huggingface.co/datasets/MaYiding/OracleProto/blob/main/README-ZH.md)]

- **GitHub Repo:** [`Github`](https://github.com/MaYiding/OracleProto)


- A SQLite-packaged evaluation set of 80 hand-curated forecasting questions on real-world events, with resolution dates between 2026-03-12 and 2026-04-14, released alongside the [GitHub Repo](https://github.com/MaYiding/OracleProto). Both the rows and the byte-stable prompt-reconstruction recipe ship inside a single file, `forecast_eval_set_example.db`, which holds two tables: `forecast_eval_set_example` (the 80 rows) and `dataset_metadata` (the recipe).

  ---

- ## 1. Dataset at a glance

- | Field | Value |
- | ---------------------- | ---------------------------------------------------------------------- |
- | Release date | `2026-04-29` |
- | Rows | 80 |
- | Splits | `train` (80); single split, intended as a held-out evaluation set |
- | Resolution-date range | `2026-03-12` → `2026-04-14` |
- | Question types | `yes_no`, `binary_named`, `multiple_choice` |
- | Choice types | `single` (one correct letter), `multi` (one-or-more correct letters) |
- | Database file | `forecast_eval_set_example.db` (SQLite 3, ~52 KB) |
- | Tables in the file | `forecast_eval_set_example` (80 rows), `dataset_metadata` (1 row) |
- | License | MIT |
- | Source upstream | HuggingFace forecasting questions (levels 1+2), 322 raw → 80 curated |

- ### Type distribution

- | `question_type` | `choice_type` | Rows |
  | ------------------- | ------------- | ------ |
  | `yes_no` | `single` | 37 |
  | `binary_named` | `single` | 3 |
  | `multiple_choice` | `single` | 32 |
  | `multiple_choice` | `multi` | 8 |
- | **Total** | | **80** |

- `yes_no` is binary Yes/No; `binary_named` is binary between two named entities such as sports teams, fighters, or sides; `multiple_choice` carries at least three labelled options with one or more correct letters allowed, and "None of the above" is a valid answer when listed. Each row stores the exact option labels; letter `A` maps to `options[0]`, `B` to `options[1]`, and so on (§3.4 covers labels beyond `Z`).

  ---

- ## 2. Files

  ```text
  OracleProto/
- ├── forecast_eval_set_example.db # SQLite database file (the dataset; ~52 KB)
- ├── README.md # this file
  ├── LICENSE # MIT
- └── .gitattributes # standard HF binary attributes
  ```

- The dataset ships as one SQLite file, not Parquet or JSONL, because the prompt-reconstruction recipe and per-row provenance live in the same file as the rows (in `dataset_metadata.features_json`). A loader for `datasets.Dataset` and a Parquet conversion appear in §6.

  ---

- ## 3. Database schema

- Two tables: `forecast_eval_set_example` holds the 80 rows; `dataset_metadata` holds the canonical recipe. The file takes its name from the primary table.

- ### 3.1 Table `forecast_eval_set_example` (the rows)

  ```sql
  CREATE TABLE forecast_eval_set_example (
  id TEXT PRIMARY KEY,
  choice_type TEXT NOT NULL CHECK (choice_type IN ('single','multi')),
  question_type TEXT NOT NULL, -- yes_no | binary_named | multiple_choice
- event TEXT NOT NULL, -- the event being predicted
- options TEXT NOT NULL, -- JSON array of option labels
- answer TEXT NOT NULL, -- canonical correct answer as letter(s)
  end_time TEXT NOT NULL -- 'YYYY-MM-DD'
  );

@@ -74,9 +74,9 @@ CREATE INDEX idx_forecast_eval_set_example_question_type ON forecast_eval_set_ex
  CREATE INDEX idx_forecast_eval_set_example_end_time ON forecast_eval_set_example(end_time);
  ```

- ### 3.2 Table `dataset_metadata` (the recipe)

- A one-row table whose `features_json` blob carries the prompt template, the four output formats, the outcomes-block rule, the agent-role string, and curation provenance. The full recipe is rendered in §5.

  ```sql
  CREATE TABLE dataset_metadata (
@@ -89,25 +89,25 @@ CREATE TABLE dataset_metadata (
  );
  ```

- ### 3.3 Column semantics

- | Column | Type | Description |
- | --------------- | ------- | ------------------------------------------------------------------- |
- | `id` | TEXT | Stable source-side question ID inherited from the upstream HuggingFace forecasting set; primary join key. |
- | `choice_type` | TEXT | `'single'` if exactly one letter is correct, `'multi'` if one-or-more letters can be correct. Derived from the number of letters in `answer`. Drives the single-answer vs multi-select branch in §5.4. |
- | `question_type` | TEXT | One of `yes_no`, `binary_named`, `multiple_choice`. Selects which prompt template is rendered (§5). |
- | `event` | TEXT | Natural-language description of the event being predicted, author-edited for explicit time anchoring, unit explicitness, and unambiguous binary framing. |
- | `options` | TEXT | JSON array of option labels. For `yes_no` it is fixed to `["Yes","No"]`. For `binary_named` it is two named entities. For `multiple_choice` it is a list of choice labels whose letter is implied by index (`A=options[0]`, `B=options[1]`, …). |
- | `answer` | TEXT | Canonical correct answer encoded as letters. For `yes_no` and `binary_named` it is `'A'` or `'B'`. For `multiple_choice` it is a comma-separated letter list in option order, e.g. `'A'` or `'A, B'`. |
- | `end_time` | TEXT | Resolution date in `YYYY-MM-DD`. The column stores a calendar date only; the prompt template (§5.2) supplies the GMT+8 reading. If finer-grained admissibility is needed, treat each resolution as covering the whole calendar day. |

- ### 3.4 Letter-to-index encoding

- Letters map to option indices via `index = ord(letter) - ord('A')`. Beyond `Z` (≥27 options), the labels run on as `[`, `\`, `]`, `^`, `_`, `` ` ``, `a`, `b`, …, the contiguous ASCII range starting at `A`. The reference renderer wraps any non-`A`–`Z` label in backticks so it survives Markdown rendering. None of the 80 rows exceeds 26 options, but the encoding is documented because the framework's parser supports it.

  ---

- ## 4. Sample rows

  ```json
  {
@@ -163,11 +163,11 @@ Letters map to option indices via `index = ord(letter) - ord('A')`. Beyond `Z` (

  ---

- ## 5. Prompt reconstruction (canonical recipe)

- Every row is rendered into a single user message via the recipe stored in `dataset_metadata.features_json.prompt_reconstruction`. The recipe is byte-stable and is the source of truth for the OracleProto evaluator; downstream users who reconstruct prompts themselves should follow it exactly so results stay comparable.

- ### 5.1 Static fragments

  ```text
  agent_role: "You are an agent that can predict future events."
@@ -178,7 +178,7 @@ guidance: "Do not use any other format. Do not refuse to make a prediction.
  box format specified above."
  ```

- ### 5.2 Master template

  ```text
  {agent_role} The event to be predicted: "{event} (resolved around {end_time} (GMT+8)).{outcomes_block}"
@@ -188,14 +188,14 @@ IMPORTANT: Your final answer MUST end with this exact format:
  {guidance}
  ```

- The literal `(GMT+8)` inside the user-visible string is what gives `end_time` its timezone reading; the column itself stores only a date.

  ### 5.3 `outcomes_block`

- For `yes_no` and `binary_named`: empty, since the option labels are baked into `output_format`.
- For `multiple_choice`: a leading newline followed by one line per option in `A. <label>` form, e.g. `\nA. Arizona\nB. Baylor\nC. Brigham Young University (BYU)\n…`. Labels whose derived letter falls outside `A`–`Z` are wrapped in backticks.

- ### 5.4 `output_format` (one of four, chosen by `question_type` × `choice_type`)

  **`yes_no`:**
  ```text
@@ -205,7 +205,7 @@ Your final answer MUST end with this exact format:
  \boxed{Yes} or \boxed{No}
  ```

- **`binary_named`** (the literals `<options[0]>` and `<options[1]>` are replaced by the two named entities from `options`):
  ```text
  Your task is to predict which of the two outcomes will occur based on your analysis.
  Your prediction will be scored based on its accuracy. You will only receive points if your answer is correct.
@@ -213,7 +213,7 @@ Your final answer MUST end with this exact format:
  \boxed{<options[0]>} or \boxed{<options[1]>}
  ```

- **`multiple_choice` with `choice_type='single'`:**
  ```text
  This is a SINGLE-ANSWER question: exactly ONE of the listed options is correct.
  Your prediction will be scored on strict equality with the unique correct letter; choosing the wrong letter, or selecting more than one letter, scores zero.
@@ -222,7 +222,7 @@ the single correct letter inside the box, e.g. \boxed{A}.
  Do NOT list more than one letter, even if you believe two outcomes are tied — pick the one you find most likely.
  ```

- **`multiple_choice` with `choice_type='multi'`:**
  ```text
  This is a MULTI-SELECT question: ONE OR MORE of the listed options can be correct.
  Your prediction will be scored on strict equality with the FULL set of correct letters: any extra letter, any missing letter, or any wrong letter scores zero. You must include ALL correct options and NO incorrect options.
@@ -231,23 +231,23 @@ listing all correct option(s) you have identified, separated by commas, within t
  For example: \boxed{A} for a single correct option, or \boxed{B, C} for multiple correct options.
  ```

- ### 5.5 Answer parsing

- The reference parser ([`forecast_eval/parser.py::parse_answer`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/parser.py)) applies these rules:

- 1. Take the **last** `\boxed{...}` substring in the model's reply; everything else is reasoning or scratchpad and is ignored.
- 2. For `yes_no` (case-insensitive): `Yes` → `A`, `No` → `B`. Anything else is unparsed.
- 3. For `binary_named` (case-insensitive): match the boxed payload against `options[0]` or `options[1]`. Anything else is unparsed.
- 4. For `multiple_choice`: split the boxed payload on commas and whitespace, validate that each token is a single letter, and check that each letter resolves to a valid option index. Out-of-range letters or multi-character tokens are unparsed.
- 5. Score by strict set equality against the canonical letter set parsed from `answer`. A missing or unparsed boxed answer is recorded as `parse_ok = 0` rather than treated as a parser error; the run records it and moves on.

- Reusing the framework's parser is the practical way to get bit-identical scores across implementations.

  ---

- ## 6. Loading the dataset

- ### 6.1 With raw `sqlite3` (no extra deps)

  ```python
  import sqlite3
@@ -256,21 +256,21 @@ import json
  conn = sqlite3.connect("forecast_eval_set_example.db")
  conn.row_factory = sqlite3.Row

- # Read the rows.
  rows = conn.execute("SELECT * FROM forecast_eval_set_example").fetchall()
  print(f"loaded {len(rows)} rows")
  sample = dict(rows[0])
- sample["options"] = json.loads(sample["options"]) # JSON-decode option list
  print(sample)

- # Read the canonical prompt-reconstruction recipe.
  meta_row = conn.execute("SELECT features_json FROM dataset_metadata").fetchone()
  meta = json.loads(meta_row["features_json"])
  prompt_template = meta["prompt_reconstruction"]["prompt_template"]
  print(prompt_template)
  ```

- ### 6.2 With `huggingface_hub`

  ```python
  from huggingface_hub import hf_hub_download
@@ -285,7 +285,7 @@ conn = sqlite3.connect(db_path)
  rows = conn.execute("SELECT * FROM forecast_eval_set_example").fetchall()
  ```

- ### 6.3 Convert to a `datasets.Dataset`

  ```python
  import sqlite3, json
@@ -308,7 +308,7 @@ print(ds)
  print(ds[0])
  ```

- ### 6.4 Render a prompt (minimal, faithful to the canonical recipe)

  ```python
  def render_prompt(row, meta):
@@ -348,77 +348,77 @@ def render_prompt(row, meta):
  )
  ```

- The full reference renderer (with the >26-option backtick rule and an optional reflection / belief-elicitation tail) lives at [`forecast_eval/prompts.py`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/prompts.py); reusing it gives byte-identical prompts.

  ---

- ## 7. Recommended evaluation protocol

- Pair the dataset with the OracleProto evaluation harness, which layers information-boundary discipline on top of the bare prompt-and-score loop. Five concrete recommendations:

- 1. **Declare a knowledge cutoff $\kappa_M$ for every model.** A question is admissible for model $M$ only when $\kappa_M \le \chi_i < \tau_i$, where $\chi_i$ is the per-question prediction cutoff and $\tau_i$ is its resolution date. Inadmissible questions are filtered upstream rather than counted as model errors. A model with no declared cutoff cannot be fairly compared to one that has one.

- 2. **Time-mask any retrieval or browsing tool.** If the harness lets the model issue web searches, pin the search-side `end_date` to $\chi_i + \delta$ with a conservative offset; OracleProto defaults to $\delta = -1$ day. The mechanism behind this barrier (L2) is documented in the framework's DESIGN and FRAME notes.

- 3. **Run an independent retrieval-content auditor.** Each retrieved snippet is passed to a separate LLM auditor that decides whether the snippet leaks the resolution. This is the L3 barrier in the framework's threat model.

- 4. **Forbid provider-native browsing.** OracleProto refuses model slugs ending in `:online` and similar hosted-browsing variants on three layers: config validation, on-the-wire client, and detector client. This is the L4 residual that must pass before any billable LLM call leaves the process.

- 5. **Score with strict set equality on letter sets**, per §5.5. Optional probability-calibration metrics (Brier, NLL, ECE, Murphy decomposition) are supported when the model emits an additional `<belief>{ ... }</belief>` JSON block per the v4 belief protocol; the schema is documented in [`forecast_eval/prompts.py::BELIEF_PROTOCOL`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/prompts.py).

- Without the OracleProto harness in place, treat the resulting numbers as upper bounds on forecasting ability: any model that can browse the open web, or that was trained past a question's `end_time`, may have memorised the answer. The dataset makes the honesty audit possible; it does not enforce it on its own.

  ---

- ## 8. Provenance and curation

- * **Source.** Upstream HuggingFace forecasting questions, restricted to *levels 1+2* (the easier two of the upstream difficulty bands). The raw set was harvested as 322 candidate questions.
- * **Curation pipeline (5 passes).**
- 1. Source-side broken-row removal and column flattening.
- 2. `end_time` / answer-encoding / option-label normalization: `end_time` reduced to a `YYYY-MM-DD` calendar date; `Yes/No` mapped to `A/B`; option labels stripped of stray markdown.
- 3. Down-sampling 322 → 200 → 100 → 80 with placeholder removal, deduplication, and an ambiguity audit.
- 4. Final HIGH+MEDIUM ambiguity remediation: 4 rows reworded for explicit time anchoring, unit explicitness, and unambiguous binary framing.
- 5. CRITICAL fix on one S&P 500 multi-select truth set so it satisfies the monotonic-threshold logic implied by the option ladder.
- * **Verification.** All 80 ground-truths verified end-to-end via parser round-trip (the rendered prompt is parsed and re-encoded back to the canonical letter set). Final tally: 0 critical / 0 high / 0 medium ambiguity issues remaining.

  ---

- ## 9. Intended uses and limitations

- ### 9.1 Intended uses

- * **Forecasting benchmark for LLMs and LLM agents**, particularly tool-using agents that combine parametric knowledge with time-masked web retrieval.
- * **Reproducibility testbed for forecasting harnesses.** The `dataset_metadata` table makes every prompt byte-stable; pairing it with the OracleProto framework yields a run unit whose scoring artefacts are bit-identical when the configuration matches.
- * **Calibration and proper-scoring research.** The 80-row size is small enough that per-question analysis (belief evolution, source attribution, calibration plots) stays tractable.

- ### 9.2 Out-of-scope uses

- * **Training data.** Including the rows in any training, fine-tuning, or RLHF corpus contaminates downstream forecasting evaluations of the trained model. The dataset is evaluation-only.
- * **Long-horizon forecasting.** All resolutions land in a one-month window (2026-03-12 → 2026-04-14); the set does not represent multi-quarter or multi-year forecasting.
- * **Open-ended generation.** Every question has a closed answer set, so this is not a generation benchmark.

- ### 9.3 Known limitations and biases

- * **Sample size.** 80 rows is small. Confidence intervals on accuracy or Brier are wide; report them alongside point estimates and use paired tests when comparing models on the same set.
- * **Topical skew.** Questions concentrate in finance and macro indicators, sports events, awards (Oscars, NBA, UEFA, etc.), and US-centric political and geopolitical events, reflecting the upstream HuggingFace market mix. They are not a globally representative sample.
- * **English-only.** All `event` and `options` strings are English.
- * **Date-only resolution.** `end_time` is a date, not a timestamp, and the dataset does not carry a timezone column. If finer-grained admissibility is needed, treat each resolution as covering the whole GMT+8 calendar day.
- * **Provider-side residual leakage.** Any LLM that has ingested the upstream HuggingFace dataset, or that was trained past the resolution window, can recover ground truths from parametric memory. The dataset cannot patch this on its own; it relies on the harness to enforce admissibility ($\kappa_M$).
- * **Snapshot of a moving label space.** A few questions ("none of the above", "all of the above") interact non-trivially with multi-select scoring; the curation pass fixed the one S&P 500 case, but the convention for similar questions in future revisions may shift. Pin to the schema version if byte-stable behaviour across releases is required.

  ---

  ## 10. License

- Released under the **MIT License** (see `LICENSE`). The upstream questions originate from a public HuggingFace forecasting set; the curation work, schema, prompt-reconstruction recipe, and answer encodings in this release are the contribution of this project.

  ---

- ## 11. Contact and contributions

- Issues, schema feedback, and ambiguity reports are welcome. If a row's ground truth has changed, or its framing is ambiguous under §5.5, open an issue in the relevant repository:

- * Dataset: [`MaYiding/OracleProto` on Hugging Face](https://huggingface.co/datasets/MaYiding/OracleProto/discussions) for row-level questions, ambiguity reports, and label disputes.
- * Code Repo: [`MaYiding/OracleProto` on GitHub](https://github.com/MaYiding/OracleProto/issues) for evaluator, parser, or harness behaviour.

- Row-level reports should include the `id`, the disputed framing, and where available a primary source; those are the inputs the curation pipeline needs to update the row in the next release.

+ # OracleProto: Forecasting Evaluation Set

+ **English Doc:** [[`English Doc`](https://huggingface.co/datasets/MaYiding/OracleProto/blob/main/README.md)]

+ **GitHub Repo:** [`Github`](https://github.com/MaYiding/OracleProto)


+ A SQLite-packaged evaluation set of 80 hand-curated forecasting questions on real-world events, with resolution dates between 2026-03-12 and 2026-04-14, released alongside the [GitHub Repo](https://github.com/MaYiding/OracleProto). The question rows and the byte-stable prompt-reconstruction recipe share a single file, `forecast_eval_set_example.db`, which holds two tables: `forecast_eval_set_example` (the 80 rows) and `dataset_metadata` (the recipe).

  ---

+ ## 1. Dataset at a glance

+ | Field | Value |
+ | --------------------- | --------------------------------------------------------------------- |
+ | Release date | `2026-04-29` |
+ | Rows | 80 |
+ | Splits | `train` (80); single split, intended as a held-out evaluation set |
+ | Resolution-date range | `2026-03-12` → `2026-04-14` |
+ | Question types | `yes_no`, `binary_named`, `multiple_choice` |
+ | Choice types | `single` (exactly one correct letter), `multi` (one or more correct letters) |
+ | Database file | `forecast_eval_set_example.db` (SQLite 3, ~52 KB) |
+ | Tables in the file | `forecast_eval_set_example` (80 rows), `dataset_metadata` (1 row) |
+ | License | MIT |
+ | Source upstream | HuggingFace forecasting questions (levels 1+2), 322 raw → 80 curated |

+ ### Type distribution

+ | `question_type` | `choice_type` | Rows |
  | ------------------- | ------------- | ------ |
  | `yes_no` | `single` | 37 |
  | `binary_named` | `single` | 3 |
  | `multiple_choice` | `single` | 32 |
  | `multiple_choice` | `multi` | 8 |
+ | **Total** | | **80** |

+ `yes_no` is a binary Yes/No judgment; `binary_named` is a binary judgment between two named entities such as sports teams, fighters, or sides; `multiple_choice` carries at least three labelled options with one or more correct letters allowed, and "None of the above" is a valid answer when listed. Each row stores the exact option labels; letter `A` maps to `options[0]`, `B` to `options[1]`, and so on (§3.4 covers labels beyond `Z`).

  ---

+ ## 2. Files

  ```text
  OracleProto/
+ ├── forecast_eval_set_example.db # the SQLite database file (the dataset itself; ~52 KB)
+ ├── README.md # this file
  ├── LICENSE # MIT
+ └── .gitattributes # standard HF binary attributes
  ```

+ The dataset ships as a single SQLite file (rather than Parquet or JSONL) because the prompt-reconstruction recipe and the per-row provenance live in the same file as the question rows (in `dataset_metadata.features_json`). A `datasets.Dataset` loader and a Parquet conversion example appear in §6.

  ---

+ ## 3. Database schema

+ Two tables: `forecast_eval_set_example` holds the 80 rows; `dataset_metadata` holds the canonical recipe. The file takes its name from the primary table.

+ ### 3.1 `forecast_eval_set_example` (the rows)

  ```sql
  CREATE TABLE forecast_eval_set_example (
  id TEXT PRIMARY KEY,
  choice_type TEXT NOT NULL CHECK (choice_type IN ('single','multi')),
  question_type TEXT NOT NULL, -- yes_no | binary_named | multiple_choice
+ event TEXT NOT NULL, -- the event being predicted
+ options TEXT NOT NULL, -- JSON array of option labels
+ answer TEXT NOT NULL, -- canonical correct answer, encoded as letter(s)
  end_time TEXT NOT NULL -- 'YYYY-MM-DD'
  );


  CREATE INDEX idx_forecast_eval_set_example_end_time ON forecast_eval_set_example(end_time);
  ```

+ ### 3.2 `dataset_metadata` (the recipe)

+ A one-row table whose `features_json` blob carries the prompt template, the four `output_format` variants, the outcomes-block rule, the agent-role string, and curation provenance. The full recipe is rendered in §5.

  ```sql
  CREATE TABLE dataset_metadata (

  );
  ```

+ ### 3.3 Column semantics

+ | Column | Type | Description |
+ | --------------- | ------- | ------------------------------------------------------------------- |
+ | `id` | TEXT | Stable source-side question ID inherited from the upstream HuggingFace forecasting set; the primary join key. |
+ | `choice_type` | TEXT | `'single'` if exactly one letter is correct, `'multi'` if one or more letters can be correct. Derived from the number of letters in `answer`. Drives the single-answer vs multi-select branch in §5.4. |
+ | `question_type` | TEXT | One of `yes_no`, `binary_named`, `multiple_choice`. Selects which prompt template is rendered (§5). |
+ | `event` | TEXT | Natural-language description of the event being predicted, author-edited for explicit time anchoring, unit explicitness, and unambiguous binary framing. |
+ | `options` | TEXT | JSON array of option labels. For `yes_no` it is fixed to `["Yes","No"]`. For `binary_named` it is two named entities. For `multiple_choice` it is a list of choice labels whose letter is implied by index (`A=options[0]`, `B=options[1]`, …). |
+ | `answer` | TEXT | Canonical correct answer encoded as letters. For `yes_no` and `binary_named` it is `'A'` or `'B'`. For `multiple_choice` it is a comma-separated letter list in option order, e.g. `'A'` or `'A, B'`. |
+ | `end_time` | TEXT | Resolution date in `YYYY-MM-DD`. The column stores a calendar date only; the GMT+8 timezone reading is supplied by the prompt template (§5.2). If finer-grained admissibility is needed, treat each resolution as covering the whole calendar day. |

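+ A small sketch of decoding a row's `answer` and `options` into concrete labels, assuming the row comes from the `sqlite3` loader in §6.1 (`decode_answer` is an illustrative helper, not part of the package):
+
+ ```python
+ import json
+
+ def decode_answer(row: dict) -> list[str]:
+     """Map the letter-encoded answer back to its option labels."""
+     options = json.loads(row["options"])              # JSON array of labels
+     letters = [t.strip() for t in row["answer"].split(",")]
+     return [options[ord(letter) - ord("A")] for letter in letters]
+
+ # e.g. answer 'A, B' over options ["Arizona", "Baylor", "BYU", ...]
+ # decodes to ["Arizona", "Baylor"]
+ ```
+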
+ ### 3.4 Letter-to-index encoding

+ Letters map to option indices via `index = ord(letter) - ord('A')`. Beyond `Z` (≥27 options), the labels continue through the contiguous ASCII range starting at `A`: `[`, `\`, `]`, `^`, `_`, `` ` ``, `a`, `b`, …. The reference renderer wraps any non-`A`–`Z` label in backticks so it stays legible under Markdown rendering. None of the 80 rows exceeds 26 options, but the encoding is documented because the framework's parser supports it.

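+ A minimal sketch of this mapping (the helper names `label_for` and `index_for` are illustrative, not part of the shipped package):
+
+ ```python
+ def label_for(index: int) -> str:
+     """Label for a 0-based option index: the contiguous ASCII range
+     starting at 'A', with non-A-Z labels wrapped in backticks."""
+     letter = chr(ord("A") + index)
+     return letter if "A" <= letter <= "Z" else f"`{letter}`"
+
+ def index_for(letter: str) -> int:
+     """Invert the mapping: 'A' -> 0, 'B' -> 1, ..."""
+     return ord(letter) - ord("A")
+
+ assert label_for(0) == "A" and label_for(26) == "`[`"
+ assert index_for("B") == 1
+ ```
+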
  ---

+ ## 4. Sample rows

  ```json
  {

  ---

+ ## 5. Prompt reconstruction (canonical recipe)

+ Every row is rendered into a single user message via the recipe stored in `dataset_metadata.features_json.prompt_reconstruction`. The recipe is byte-stable and is the source of truth for the OracleProto evaluator; downstream users who reconstruct prompts themselves should follow it exactly so results stay comparable.

+ ### 5.1 Static fragments

  ```text
  agent_role: "You are an agent that can predict future events."

  box format specified above."
  ```

+ ### 5.2 Master template

  ```text
  {agent_role} The event to be predicted: "{event} (resolved around {end_time} (GMT+8)).{outcomes_block}"

  {guidance}
  ```

+ The literal `(GMT+8)` in the string is what gives `end_time` its timezone reading; the column itself stores only a date.

  ### 5.3 `outcomes_block`

+ For `yes_no` and `binary_named`: empty, since the option labels are baked into `output_format`.
+ For `multiple_choice`: a leading newline followed by one line per option in `A. <label>` form, e.g. `\nA. Arizona\nB. Baylor\nC. Brigham Young University (BYU)\n…`. Labels whose derived letter falls outside `A`–`Z` are wrapped in backticks.

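+ A sketch of this rule, assuming `options` has already been JSON-decoded (the reference implementation lives in `forecast_eval/prompts.py`; treat details such as trailing-newline handling as illustrative):
+
+ ```python
+ def outcomes_block(question_type: str, options: list[str]) -> str:
+     # Empty for yes_no / binary_named: labels are baked into output_format.
+     if question_type != "multiple_choice":
+         return ""
+     lines = []
+     for i, label in enumerate(options):
+         letter = chr(ord("A") + i)
+         if not ("A" <= letter <= "Z"):
+             letter = f"`{letter}`"  # backtick rule for labels beyond Z
+         lines.append(f"{letter}. {label}")
+     # Leading newline, then one "A. <label>" line per option.
+     return "\n" + "\n".join(lines)
+ ```
+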
+ ### 5.4 `output_format` (one of four, chosen by `question_type` × `choice_type`)

  **`yes_no`:**
  ```text

  \boxed{Yes} or \boxed{No}
  ```

+ **`binary_named`** (the literals `<options[0]>` and `<options[1]>` are replaced by the two named entities from `options`):
  ```text
  Your task is to predict which of the two outcomes will occur based on your analysis.
  Your prediction will be scored based on its accuracy. You will only receive points if your answer is correct.

  \boxed{<options[0]>} or \boxed{<options[1]>}
  ```

+ **`multiple_choice` with `choice_type='single'`:**
  ```text
  This is a SINGLE-ANSWER question: exactly ONE of the listed options is correct.
  Your prediction will be scored on strict equality with the unique correct letter; choosing the wrong letter, or selecting more than one letter, scores zero.

  Do NOT list more than one letter, even if you believe two outcomes are tied — pick the one you find most likely.
  ```

+ **`multiple_choice` with `choice_type='multi'`:**
  ```text
  This is a MULTI-SELECT question: ONE OR MORE of the listed options can be correct.
  Your prediction will be scored on strict equality with the FULL set of correct letters: any extra letter, any missing letter, or any wrong letter scores zero. You must include ALL correct options and NO incorrect options.

  For example: \boxed{A} for a single correct option, or \boxed{B, C} for multiple correct options.
  ```

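+ The four-way branch is just a lookup keyed on `(question_type, choice_type)`. A sketch (the `formats` dict and its key names are hypothetical stand-ins for however the recipe stores the four blocks, not the actual `features_json` keys):
+
+ ```python
+ def select_output_format(question_type: str, choice_type: str,
+                          formats: dict[str, str]) -> str:
+     # Map (question_type, choice_type) onto one of the four fragments above.
+     key = {
+         ("yes_no", "single"): "yes_no",
+         ("binary_named", "single"): "binary_named",
+         ("multiple_choice", "single"): "multiple_choice_single",
+         ("multiple_choice", "multi"): "multiple_choice_multi",
+     }[(question_type, choice_type)]
+     return formats[key]
+ ```
+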
+ ### 5.5 Answer parsing

+ The reference parser ([`forecast_eval/parser.py::parse_answer`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/parser.py)) applies these rules:

+ 1. Take the **last** `\boxed{...}` substring in the model's reply; everything else is treated as reasoning or scratchpad and ignored.
+ 2. For `yes_no` (case-insensitive): `Yes` → `A`, `No` → `B`. Anything else is recorded as unparsed.
+ 3. For `binary_named` (case-insensitive): match the boxed payload against `options[0]` or `options[1]`. Anything else is recorded as unparsed.
+ 4. For `multiple_choice`: split the boxed payload on commas and whitespace, validate that each token is a single letter, and check that each letter resolves to a valid option index. Out-of-range letters and multi-character tokens are unparsed.
+ 5. Score by strict set equality against the canonical letter set parsed from `answer`. A missing or unparsed boxed answer is recorded as `parse_ok = 0` rather than treated as a parser error; the run logs it and moves on.

+ Reusing the framework's parser is the practical way to get bit-identical scores across implementations.

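+ For readers who cannot vendor the reference parser, a condensed sketch of rules 1–4 (simplified: it assumes the boxed payload never contains nested braces):
+
+ ```python
+ import re
+
+ def parse_boxed(reply: str, question_type: str, options: list[str]):
+     """Return the parsed letter set, or None if unparsed (rules 1-4)."""
+     # Rule 1: take the LAST \boxed{...}; everything else is scratchpad.
+     hits = re.findall(r"\\boxed\{([^}]*)\}", reply)
+     if not hits:
+         return None
+     payload = hits[-1].strip()
+     if question_type == "yes_no":  # Rule 2
+         return {"yes": {"A"}, "no": {"B"}}.get(payload.lower())
+     if question_type == "binary_named":  # Rule 3
+         lowered = [o.lower() for o in options]
+         if payload.lower() in lowered:
+             return {chr(ord("A") + lowered.index(payload.lower()))}
+         return None
+     # Rule 4: split on commas/whitespace; single in-range letters only.
+     letters = set()
+     for token in [t for t in re.split(r"[,\s]+", payload) if t]:
+         if len(token) != 1 or not 0 <= ord(token) - ord("A") < len(options):
+             return None
+         letters.add(token)
+     return letters or None
+ ```
+
+ Rule 5 is then a single comparison: `parse_boxed(...) == canonical_letter_set`, with a `None` result recorded as `parse_ok = 0`.
+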
  ---

+ ## 6. Loading the dataset

+ ### 6.1 With raw `sqlite3` (no extra deps)

  ```python
  import sqlite3

  conn = sqlite3.connect("forecast_eval_set_example.db")
  conn.row_factory = sqlite3.Row

+ # Read the question rows.
  rows = conn.execute("SELECT * FROM forecast_eval_set_example").fetchall()
  print(f"loaded {len(rows)} rows")
  sample = dict(rows[0])
+ sample["options"] = json.loads(sample["options"]) # JSON-decode the option list
  print(sample)

+ # Read the canonical prompt-reconstruction recipe.
  meta_row = conn.execute("SELECT features_json FROM dataset_metadata").fetchone()
  meta = json.loads(meta_row["features_json"])
  prompt_template = meta["prompt_reconstruction"]["prompt_template"]
  print(prompt_template)
  ```

+ ### 6.2 With `huggingface_hub`

  ```python
  from huggingface_hub import hf_hub_download

  rows = conn.execute("SELECT * FROM forecast_eval_set_example").fetchall()
  ```

+ ### 6.3 Convert to a `datasets.Dataset`

  ```python
  import sqlite3, json

  print(ds[0])
  ```

+ ### 6.4 Render a prompt (minimal, faithful to the canonical recipe)

  ```python
  def render_prompt(row, meta):

  )
  ```

+ The full reference renderer (with the >26-option backtick rule and an optional reflection / belief-elicitation tail) lives at [`forecast_eval/prompts.py`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/prompts.py); reusing it yields byte-identical prompts.

  ---

+ ## 7. Recommended evaluation protocol

+ Pair the dataset with the OracleProto evaluation harness, which layers information-boundary discipline on top of the bare prompt-and-score loop. Five concrete recommendations (items 1 and 2 are sketched in code at the end of this section):

+ 1. **Declare a knowledge cutoff $\kappa_M$ for every model.** A question $i$ is admissible for model $M$ only when $\kappa_M \le \chi_i < \tau_i$, where $\chi_i$ is the question's prediction cutoff and $\tau_i$ is its resolution date. Inadmissible questions are filtered upstream rather than counted as model errors. A model with no declared cutoff cannot be fairly compared to one that has one.

+ 2. **Time-mask any retrieval or browsing tool.** If the harness lets the model issue web searches, pin the search-side `end_date` to $\chi_i + \delta$ with a conservative offset; OracleProto defaults to $\delta = -1$ day. The mechanism behind this barrier (L2) is documented in the framework's DESIGN and FRAME notes.

+ 3. **Run an independent retrieval-content auditor.** Each retrieved snippet is passed to a separate LLM auditor that decides whether the snippet leaks the resolution. This is the L3 barrier in the framework's threat model.

+ 4. **Forbid provider-native browsing.** OracleProto refuses model slugs ending in `:online` and similar hosted-browsing variants at three layers: config validation, the on-the-wire client, and the detector client. This is the L4 residual check that must pass before any billable LLM call leaves the process.

+ 5. **Score with strict set equality on letter sets**, per §5.5. Optional probability-calibration metrics (Brier, NLL, ECE, Murphy decomposition) are supported when the model emits an additional `<belief>{ ... }</belief>` JSON block per the v4 belief protocol; the schema is documented in [`forecast_eval/prompts.py::BELIEF_PROTOCOL`](https://github.com/MaYiding/OracleProto/blob/main/forecast_eval/prompts.py).

+ Without the OracleProto harness in place, treat the resulting numbers as upper bounds on forecasting ability: any model that can browse the open web, or whose training cutoff lies past a question's `end_time`, may have memorised the answer. The dataset makes the honesty audit possible; it does not enforce it on its own.

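+ Recommendations 1 and 2 in code form, as a sketch: the dataset itself carries only $\tau_i$ (as `end_time`), so `kappa_M` and `chi` must come from the harness configuration.
+
+ ```python
+ from datetime import date, timedelta
+
+ def admissible(kappa_M: date, chi: date, tau: date) -> bool:
+     # Admissible iff kappa_M <= chi < tau: the model's knowledge cutoff
+     # precedes the prediction cutoff, which precedes resolution.
+     return kappa_M <= chi < tau
+
+ def search_end_date(chi: date, delta_days: int = -1) -> date:
+     # Time-mask retrieval at chi + delta; OracleProto's default is delta = -1 day.
+     return chi + timedelta(days=delta_days)
+ ```
+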
  ---

+ ## 8. Provenance and curation

+ * **Source.** Upstream HuggingFace forecasting questions, restricted to *levels 1+2* (the easier two of the upstream difficulty bands). The raw set was harvested as 322 candidate questions.
+ * **Curation pipeline (5 passes).**
+ 1. Source-side broken-row removal and column flattening.
+ 2. `end_time` / answer-encoding / option-label normalization: `end_time` reduced to a `YYYY-MM-DD` calendar date; `Yes/No` mapped to `A/B`; option labels stripped of stray markdown.
+ 3. Down-sampling 322 → 200 → 100 → 80 with placeholder removal, deduplication, and an ambiguity audit.
+ 4. Final HIGH+MEDIUM ambiguity remediation: 4 rows reworded for explicit time anchoring, unit explicitness, and unambiguous binary framing.
+ 5. A CRITICAL fix on one S&P 500 multi-select truth set so that it satisfies the monotonic-threshold logic implied by the option ladder.
+ * **Verification.** All 80 ground truths verified end-to-end via a parser round-trip (the rendered prompt is parsed and re-encoded back to the canonical letter set). Final tally: 0 critical / 0 high / 0 medium ambiguity issues remaining.

  ---

+ ## 9. Intended uses and limitations

+ ### 9.1 Intended uses

+ * **Forecasting benchmark for LLMs and LLM agents**, particularly tool-using agents that combine parametric knowledge with time-masked web retrieval.
+ * **Reproducibility testbed for forecasting harnesses.** The `dataset_metadata` table makes every prompt byte-stable; paired with the OracleProto framework, it yields a run unit whose scoring artefacts are bit-identical when the configuration matches.
+ * **Calibration and proper-scoring research.** The 80-row size is small enough that per-question analysis (belief evolution, source attribution, calibration plots) stays tractable.

+ ### 9.2 Out-of-scope uses

+ * **Training data.** Including the rows in any training, fine-tuning, or RLHF corpus contaminates downstream forecasting evaluations of the trained model. The dataset is evaluation-only.
+ * **Long-horizon forecasting.** All resolutions land in a one-month window (2026-03-12 → 2026-04-14); the set does not represent multi-quarter or multi-year forecasting.
+ * **Open-ended generation.** Every question has a closed answer set, so this is not a generation benchmark.

+ ### 9.3 Known limitations and biases

+ * **Sample size.** 80 rows is small. Confidence intervals on accuracy or Brier scores are wide; report them alongside point estimates (see the sketch at the end of this section), and use paired tests when comparing models on the same set.
+ * **Topical skew.** Questions concentrate in finance and macro indicators, sports events, awards (Oscars, NBA, UEFA, etc.), and US-centric political and geopolitical events, reflecting the upstream HuggingFace market mix. They are not a globally representative sample.
+ * **English-only.** All `event` and `options` strings are English.
+ * **Date-only resolution.** `end_time` is a date, not a timestamp, and the dataset carries no timezone column. If finer-grained admissibility is needed, treat each resolution as covering the whole GMT+8 calendar day.
+ * **Provider-side residual leakage.** Any LLM that has ingested the upstream HuggingFace dataset, or whose training cutoff lies past the resolution window, can recover ground truths from parametric memory. The dataset cannot patch this on its own; it relies on the harness to enforce admissibility ($\kappa_M$).
+ * **Snapshot of a moving label space.** A few questions ("none of the above", "all of the above") interact non-trivially with multi-select scoring; the curation pass fixed the one S&P 500 case, but the convention for similar questions may shift in future revisions. Pin to the schema version if byte-stable behaviour across releases is required.

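+ On the sample-size point, a quick way to see how wide the intervals are: a 95% Wilson score interval for accuracy on 80 questions (the 56/80 figure below is a hypothetical example, not a reported result).
+
+ ```python
+ import math
+
+ def wilson_interval(correct: int, n: int = 80, z: float = 1.96):
+     """95% Wilson score interval for accuracy over n questions."""
+     p = correct / n
+     denom = 1 + z * z / n
+     centre = (p + z * z / (2 * n)) / denom
+     half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
+     return centre - half, centre + half
+
+ # e.g. 56/80 correct gives roughly (0.59, 0.79): a ~20-point-wide interval
+ print(wilson_interval(56))
+ ```
+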
  ---

  ## 10. License

+ Released under the **MIT License** (see `LICENSE`). The upstream questions originate from a public HuggingFace forecasting set; the curation work, schema, prompt-reconstruction recipe, and answer encodings in this release are the contribution of this project.

  ---

+ ## 11. Contact and contributions

+ Issues, schema feedback, and ambiguity reports are welcome. If a row's ground truth has changed, or its framing is ambiguous under §5.5, open an issue in the relevant repository:

+ * Dataset: [`MaYiding/OracleProto` on Hugging Face](https://huggingface.co/datasets/MaYiding/OracleProto/discussions) for row-level questions, ambiguity reports, and label disputes.
+ * Code repo: [`MaYiding/OracleProto` on GitHub](https://github.com/MaYiding/OracleProto/issues) for evaluator, parser, or harness behaviour.

+ Row-level reports should include the `id`, the disputed framing, and, where available, a primary source; these are the inputs the curation pipeline needs to update the row in the next release.