Update README
README.md
CHANGED
@@ -39,8 +39,8 @@ The dataset is shipped as **a single SQLite database file** named
 * `dataset_metadata` — a one-row table holding the canonical prompt-reconstruction recipe
   (prompt template, output formats, agent role, answer-encoding rules, provenance).
 
+The dataset is designed to be the **dataset \\(\mathcal{D}\\)** in the OracleProto run unit
+\\(\mathcal{R} = (\mathcal{D}, M, \kappa_M, \delta, T, C, R, \Psi, \phi, \Gamma)\\): every column,
 prompt template, and answer-encoding rule below is byte-stable and round-trip parsed by the
 reference parser, so a forecasting run on this set is auditable, replayable, and comparable
 across models and across calendar years.
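
Because the whole release is a single SQLite file, the reconstruction recipe can be read back with the standard library alone. A minimal sketch (the file name `dataset.db` is a placeholder for the shipped database, and the recipe columns are read generically rather than by name):

```python
import sqlite3

# Open the shipped SQLite file (placeholder name) and read the one-row
# `dataset_metadata` table that holds the prompt-reconstruction recipe.
con = sqlite3.connect("dataset.db")
con.row_factory = sqlite3.Row

row = con.execute("SELECT * FROM dataset_metadata").fetchone()
recipe = dict(row)  # prompt template, output formats, agent role, encoding rules, provenance

for key, value in recipe.items():
    print(f"{key}: {str(value)[:80]}")

con.close()
```
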
@@ -334,7 +334,7 @@ The rules are:
 parsed from `answer`. A missing or unparsed boxed answer is recorded as `parse_ok = 0`
 and is **not** an error of the parser — the run records it and moves on.
 
+> The parser is the formal answer-validator \\(\Psi\\) in the OracleProto run unit. Re-using
 > it (rather than rolling your own regex) is the easiest way to get bit-identical scores
 > across implementations.
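
The `parse_ok` flag turns a parse failure into a recorded outcome rather than an exception. A minimal sketch of a downstream consumer splitting run records on it (the record shape, a list of dicts carrying `answer` and `parse_ok`, is an illustrative assumption rather than the harness's actual output format):

```python
# Hypothetical run records: raw `answer` text plus the parse_ok flag
# written by the reference parser (1 = a boxed answer was parsed).
records = [
    {"question_id": "q-001", "answer": r"... \boxed{B} ...", "parse_ok": 1},
    {"question_id": "q-002", "answer": "no boxed answer given", "parse_ok": 0},
]

# parse_ok = 0 rows are kept and reported; they are never raised as parser errors.
scoreable = [r for r in records if r["parse_ok"] == 1]
unparsed = [r for r in records if r["parse_ok"] == 0]

print(f"scoreable: {len(scoreable)}, unparsed (recorded, not errors): {len(unparsed)}")
```
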
@@ -456,16 +456,16 @@ This dataset is meant to be paired with the **OracleProto** evaluation harness,
 information-boundary discipline on top of the bare prompt-and-score loop. The headline
 recommendations are:
 
+1. **Declare a knowledge cutoff \\(\kappa_M\\) for every model.** OracleProto admits a question
+   for model \\(M\\) only when its prediction cutoff \\(\chi_i\\) satisfies
+   \\(\kappa_M \le \chi_i < \tau_i\\), where \\(\tau_i\\) is the resolution time. Inadmissible
    questions are filtered upstream (not counted as model errors). This separates *"the model
    failed to forecast"* from *"the model already knew the answer"*. Models with no declared
    cutoff cannot be fairly compared to those with one.
 
 2. **Time-mask any retrieval / browsing tool.** If your harness lets the model issue web
+   searches (e.g. via Tavily), pin the search-side `end_date` to \\(\chi_i + \delta\\) with a
+   conservative offset (OracleProto defaults to \\(\delta = -1\\) day). This is the L2
    "tool-mediated" leakage barrier.
 
 3. **Run an independent retrieval-content auditor.** Each retrieved snippet is passed to a
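
Recommendations 1 and 2 above reduce to two small, mechanical checks. A minimal sketch of both (the function names and the use of plain `date` objects for \\(\kappa_M\\), \\(\chi_i\\) and \\(\tau_i\\) are illustrative choices, not the harness API):

```python
from datetime import date, timedelta

def is_admissible(kappa_m: date, chi_i: date, tau_i: date) -> bool:
    """Recommendation 1: admit question i for model M only if
    kappa_M <= chi_i < tau_i; inadmissible questions are filtered
    upstream, not scored as model errors."""
    return kappa_m <= chi_i < tau_i

def search_end_date(chi_i: date, delta_days: int = -1) -> date:
    """Recommendation 2: pin the search-side end_date to chi_i + delta
    (default delta = -1 day, the conservative offset), forming the L2
    "tool-mediated" leakage barrier."""
    return chi_i + timedelta(days=delta_days)

# Worked example: prediction cutoff 2024-06-30, resolution 2024-12-31,
# model knowledge cutoff 2024-01-01 (all dates are illustrative).
kappa_m, chi_i, tau_i = date(2024, 1, 1), date(2024, 6, 30), date(2024, 12, 31)

assert is_admissible(kappa_m, chi_i, tau_i)
assert search_end_date(chi_i) == date(2024, 6, 29)
```
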
@@ -552,7 +552,7 @@ ability**: any model that can browse the open web or that was trained past a que
 * **Provider-side residual leakage (L4 channel).** Any LLM that has ingested the upstream
   HuggingFace dataset, or that was trained past the resolution window, can recover ground
   truths from parametric memory. The dataset cannot patch this on its own — it relies on the
+  harness to enforce admissibility (\\(\kappa_M\\)).
 * **Snapshot of a moving label space.** A few questions ("none of the above", "all of the
   above") interact non-trivially with multi-select scoring; the curation pass fixed the one
   S&P 500 case but the convention for similar questions in future revisions may shift. Pin