jizej committed · verified
Commit 0cff67f · 1 Parent(s): 3e18ce9

Add Limitations, Biases, PII, Use Cases, Social Impact, Source Datasets, Provenance sections (Croissant RAI fields)

Files changed (1): README.md (+296 -5)

README.md CHANGED
@@ -136,12 +136,303 @@ sft = load_dataset("jizej/Competence-Based-Evaluation", "sft_noleak")
  train = sft["train"]
  ```
 
- ## Construction
-
- Datasets are generated procedurally from entity pools sourced from Wikidata,
- Wikipedia, and curated lists. The generator and entity-pool fetcher are open
- source at the project repository (linked above). All generation seeds are
- recorded in the per-subset `meta.json`.
+ ## Source Datasets
+
+ The dataset is fully synthetic; no records are copied from another corpus.
+ However, the procedural generator draws **entity names** (people, animals,
+ cities, structures, etc.) from external knowledge sources. The complete list
+ of upstream source URIs is:
+
+ - **Wikidata SPARQL endpoint:** `https://query.wikidata.org/sparql`
+   Used for the `size_animals`, `height_structures`, `age_figures`,
+   `time_events`, `brightness_stars`, and `speed_animals` (Wikidata fallback)
+   pools. Queries are stored verbatim in
+   [`invariance_bench/generate_entities.py`](https://github.com/jizej/Competence-Based-Evaluation/blob/main/invariance_bench/generate_entities.py).
+ - **English Wikipedia REST API:** `https://en.wikipedia.org/api/rest_v1/page/html/...`
+   Specific source pages:
+   - `https://en.wikipedia.org/wiki/List_of_cities_by_average_temperature`
+     (`temperature_cities` pool)
+   - `https://en.wikipedia.org/wiki/Fastest_animals` (fallback for
+     `speed_animals`)
+ - **Curated lists** embedded in the generator script (no external URI):
+   `weight_objects`, `price_items`, `rank_athletes`, `spatial_objects`, and
+   the names list used by the `pos` and SFT relations. These are
+   author-maintained and are the only non-Wikidata/Wikipedia sources.
+
+ Wikidata content is licensed CC0 and Wikipedia text is licensed CC BY-SA.
+ Cached responses for every pool are stored under `.entity_cache/{pool}.json`
+ in the open-source repository so the dataset can be regenerated bit-for-bit
+ without re-querying the upstream sources.
+
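+ A minimal sketch of that fetch-and-cache pattern (illustrative only: the
+ `fetch_pool` helper and its arguments are placeholders, not the verbatim
+ code or queries in `generate_entities.py`):
+
+ ```python
+ import json, pathlib, requests
+
+ CACHE = pathlib.Path(".entity_cache")
+ ENDPOINT = "https://query.wikidata.org/sparql"
+
+ def fetch_pool(pool: str, query: str) -> list[dict]:
+     """Return cached SPARQL rows for `pool`, querying Wikidata on a miss."""
+     cache_file = CACHE / f"{pool}.json"
+     if cache_file.exists():                      # reuse the on-disk cache
+         return json.loads(cache_file.read_text())
+     r = requests.get(ENDPOINT, params={"query": query, "format": "json"},
+                      headers={"User-Agent": "invariance-bench-sketch/0.1"})
+     r.raise_for_status()
+     rows = r.json()["results"]["bindings"]       # standard SPARQL JSON shape
+     CACHE.mkdir(exist_ok=True)
+     cache_file.write_text(json.dumps(rows))
+     return rows
+ ```
+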
+ **Synthetic-generation seeds** that fully determine the released splits are
+ recorded in the per-subset `meta.json` files (`sft/full/meta.json`,
+ `sft/noleak/meta.json`); the seed used for the released SFT subsets is
+ `42`. Eval splits use deterministic enumeration over `(N, ordering, query)`
+ triples and require no random seed.
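+
+ The recorded metadata can be inspected straight from the Hub; a sketch
+ (only the presence of a recorded seed is documented above, so nothing else
+ about the `meta.json` schema is assumed here):
+
+ ```python
+ import json
+ from huggingface_hub import hf_hub_download
+
+ # Download the noleak subset's metadata file from the dataset repo.
+ path = hf_hub_download("jizej/Competence-Based-Evaluation",
+                        "sft/noleak/meta.json", repo_type="dataset")
+ print(json.load(open(path)))   # expect the recorded seed of 42
+ ```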
+
+ ## Provenance Activities
+
+ The end-to-end activities applied to produce this dataset are:
+
+ 1. **Collection (automated, online).** Entity pools are fetched from Wikidata
+    via SPARQL and from Wikipedia via the REST API; see
+    [`invariance_bench/generate_entities.py`](https://github.com/jizej/Competence-Based-Evaluation/blob/main/invariance_bench/generate_entities.py).
+    Requests are rate-limited with retries; results are cached on disk.
+ 2. **Cleaning / filtering.** Per-pool deduplication (case-insensitive name
+    collapsing), removal of entries missing the relevant ground-truth value,
+    and merging of SPARQL results with curated fallback lists (a minimal
+    sketch of this step follows the list). For `age_figures` the SPARQL
+    query is split into three era-based sub-queries to avoid
+    `wikibase:sitelinks`-induced timeouts.
+ 3. **Curated fallback authoring.** Manual curation by the dataset authors
+    for the `weight_objects`, `price_items`, `rank_athletes`,
+    `spatial_objects`, and `names` pools (lists embedded directly in
+    `generate_entities.py` and `question_generation.py`).
+ 4. **Synthetic question generation.** Procedural construction of the
+    eval and SFT records in
+    [`invariance_bench/question_generation.py`](https://github.com/jizej/Competence-Based-Evaluation/blob/main/invariance_bench/question_generation.py)
+    and the entry-point scripts
+    [`scripts/generate_dataset.py`](https://github.com/jizej/Competence-Based-Evaluation/blob/main/scripts/generate_dataset.py),
+    [`scripts/generate_heldout_dataset.py`](https://github.com/jizej/Competence-Based-Evaluation/blob/main/scripts/generate_heldout_dataset.py),
+    and
+    [`scripts/generate_training_data.py`](https://github.com/jizej/Competence-Based-Evaluation/blob/main/scripts/generate_training_data.py).
+    This step is fully deterministic given the seed and entity pools.
+ 5. **Annotation.** None. There is no human-annotation step. All
+    `answer` / `messages` ground-truth labels are produced by the same
+    deterministic generator that creates the question text, and are derived
+    from the synthesized ground-truth ordering, not from human judgment.
+ 6. **Synthetic agents / LLMs.** **None.** No language model, embedding
+    model, or generative agent is used at any step in the pipeline.
+ 7. **Crowdsourcing platforms / human teams.** Not applicable: no
+    crowdsourcing, no human raters, and no annotation contractors were
+    involved.
+ 8. **Validation / leak audit.** The released `_shufnames` / `noleak`
+    subsets were produced after an internal audit revealed that an earlier
+    version's prompt-side names-list ordering correlated with the answer.
+    The audit is documented in `docs/paper_methodology_experiments.md` of
+    the open-source repository.
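+
+ The cleaning step, sketched (illustrative: `clean_pool`, the `name` key,
+ and the example rows are placeholders, not the repository's actual code):
+
+ ```python
+ def clean_pool(entries: list[dict], value_key: str) -> list[dict]:
+     """Case-insensitive name dedup plus removal of missing ground truth."""
+     seen, out = set(), []
+     for e in entries:
+         name = (e.get("name") or "").strip()
+         if not name or e.get(value_key) is None:  # no usable ground truth
+             continue
+         key = name.lower()                        # collapse case variants
+         if key in seen:
+             continue
+         seen.add(key)
+         out.append(e)
+     return out
+
+ rows = [{"name": "Eiffel Tower", "height_m": 330},
+         {"name": "EIFFEL TOWER", "height_m": 330},   # case duplicate: dropped
+         {"name": "Mystery Spire", "height_m": None}] # missing value: dropped
+ print(clean_pool(rows, "height_m"))                  # keeps one entry
+ ```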
+
+ ## Construction (Synthetic-Data Generation Process)
+
+ **All records in this dataset are synthetic.** They are produced by a
+ deterministic procedural generator; no model-based generation, no human
+ annotation, and no scraped natural-language Q&A is used in the pipeline.
+
+ The generation process is as follows (a toy end-to-end sketch appears after
+ the list):
+
+ 1. **Entity pools.** Names of entities (animals, structures, people, cities,
+    events, stars, etc.) are sourced from Wikidata SPARQL queries, Wikipedia
+    HTML tables, and small curated fallback lists embedded in the generator.
+    Each pool is cached on disk as JSON. See
+    [`invariance_bench/generate_entities.py`](https://github.com/jizej/Competence-Based-Evaluation/blob/main/invariance_bench/generate_entities.py).
+ 2. **Ordering sampling.** For each (relation, `n`) bucket the generator
+    samples a random permutation of `n` entities from the appropriate pool
+    and lays out the chain implied by the relation (e.g. *front-of* / *behind*).
+ 3. **Constraint expansion.** A subset of consecutive pairs is selected to
+    form the "rules" shown in the prompt; the unstated remainder is what the
+    transitive-closure query exercises.
+ 4. **Phrasing duplication.** Every ordering is rendered twice: once with
+    the canonical relation (`original`) and once with the logically inverse
+    relation (`equivalent`). The two renderings carry the same ground-truth
+    boolean answer.
+ 5. **Yes/no query selection.** A query pair `(a, b)` is sampled at a
+    configured minimum hop distance, with the ground-truth `yes`/`no` answer
+    balanced by construction.
+ 6. **(SFT subsets only) Chat formatting.** Each (ordering, query, phrasing)
+    triple is serialized into a `messages` array with system / user /
+    assistant turns ready for SFT trainers.
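+
+ A toy sketch of steps 2-6 (simplified: it states every consecutive pair
+ rather than a sampled subset, ignores the minimum-hop constraint, and uses
+ placeholder phrasings; `make_pair` is not the repository's API):
+
+ ```python
+ import random
+
+ rng = random.Random(42)  # the recorded SFT seed
+
+ def make_pair(pool, n, rel="is in front of", inv="is behind"):
+     order = rng.sample(pool, n)              # step 2: random permutation
+     rules = list(zip(order, order[1:]))      # step 3: consecutive-pair rules
+     a, b = rng.sample(order, 2)              # step 5: query pair, ~50% "yes"
+     answer = "yes" if order.index(a) < order.index(b) else "no"
+     question = f"Is it true that {a} {rel} {b}? Answer yes or no."
+     # Step 4: the same ordering rendered with the relation and its inverse.
+     original = " ".join(f"{x} {rel} {y}." for x, y in rules) + " " + question
+     equivalent = " ".join(f"{y} {inv} {x}." for x, y in rules) + " " + question
+     return {"original": original, "equivalent": equivalent, "answer": answer}
+
+ rec = make_pair(["Ada", "Ben", "Cleo", "Dana"], n=4)
+ # Step 6 (SFT subsets only): wrap one phrasing as chat messages.
+ messages = [{"role": "system", "content": "Answer yes or no."},
+             {"role": "user", "content": rec["original"]},
+             {"role": "assistant", "content": rec["answer"]}]
+ ```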
+
+ All generation seeds, the per-relation count distributions, the `N`
+ schedule, and the held-out relation list are recorded in the per-subset
+ `meta.json`. The full pipeline is reproducible from the open-source
+ repository linked above.
+
+ ## Intended Use Cases
+
+ The dataset is designed to measure **answer-level invariance** of language
+ models under semantically preserving paraphrases of logical-ordering
+ constraints. Concretely:
+
+ - **Primary use case (validated):** measuring whether a model returns the
+   same boolean answer to a transitive-closure query when the underlying
+   ordering is described with a relation versus its inverse (the metric is
+   sketched after this list). Validation is reported in our accompanying
+   NeurIPS 2026 D&B submission across proprietary and open-weight models.
+ - **Primary use case (validated):** comparing pre- and post-fine-tuning
+   checkpoints to verify that targeted SFT improves invariance without
+   destroying out-of-distribution generalization (held-out `pos` and `depth`
+   relations).
+ - **Secondary use case (partially validated):** scaling-law-style analyses
+   of invariance vs. accuracy as a function of `N` (the number of entities
+   in the ordering). Validated for `N ∈ [4, 2048]`; behavior beyond this
+   range is not characterized.
+ - **Secondary use case (not validated here):** as a regression test for
+   training pipelines that aim to preserve symbolic reasoning under
+   paraphrase. We provide the data; we do not certify any specific training
+   recipe.
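+
+ A minimal sketch of the headline metric, assuming invariance is the share
+ of paired prompts whose answers agree (any normalization performed by the
+ released scorer is not reproduced here):
+
+ ```python
+ def invariance(preds_original: list[str], preds_equivalent: list[str]) -> float:
+     """Fraction of (original, equivalent) pairs with identical answers."""
+     pairs = list(zip(preds_original, preds_equivalent, strict=True))
+     return sum(a == b for a, b in pairs) / len(pairs)
+
+ print(invariance(["yes", "no", "yes"], ["yes", "yes", "yes"]))  # 0.666...
+ ```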
+
+ Use cases for which validation **does not** exist or may not hold:
+ general-reasoning leaderboard ranking, safety / alignment evaluation,
+ detection of jailbreaks or adversarial prompts, multilingual robustness,
+ evaluation of long-form generation quality, and any clinical, legal, or
+ high-stakes decision-support setting.
+
+ ## Personal and Sensitive Information
+
+ The dataset contains **no real personal data, no real PII, and no health,
+ medical, financial, biometric, political, or religious data about
+ identifiable individuals**. All "people" in the prompts are synthetic
+ references constructed by sampling from entity pools.
+
+ The following indirect demographic signals are present and should be
+ declared:
+
+ - **Gender (indirect, via names).** First names sampled from US-style name
+   lists carry conventional masculine/feminine associations. No gender label
+   is attached to any record; gender is only implicit in the name token.
+ - **Geography.** Pools such as `temperature_cities`, `height_structures`,
+   and `time_events` contain real geographic place names sourced from
+   Wikidata and Wikipedia. These pools are skewed toward globally prominent,
+   English-Wikipedia-covered locations.
+ - **Language.** Prompts and answers are exclusively in English; this is a
+   deliberate scope restriction, not a privacy signal, but it is recorded
+   here for completeness.
+ - **Culture.** Entity selection inherits the cultural skew of Wikidata /
+   English Wikipedia (Western, anglophone over-representation).
+ - **Age (of historical figures only).** The `age_figures` pool references
+   real historical figures with their public birth years. These are
+   deceased public figures whose biographical data is already published on
+   Wikidata; no contemporary individuals' ages are present.
+
+ The following are **not** present: socio-economic status of identifiable
+ individuals, professional experience or seniority of identifiable
+ individuals (the `seniority` and `priority` relations operate on synthetic
+ placeholders, not on real employees or rankings), health or medical data,
+ political affiliation, and religious belief.
+
+ No data subjects were contacted or surveyed in producing this dataset, so
+ no consent or withdrawal procedures apply. Wikidata is licensed CC0 and
+ Wikipedia is licensed CC BY-SA; both permit redistribution of the entity
+ metadata used here.
+
+ ## Social Impact
+
+ **Intended positive impact.** Releasing a clean invariance benchmark
+ encourages the field to evaluate language models on robustness to
+ paraphrase, not only on accuracy. Reproducible held-out splits and an
+ open-source generator make it harder for the benchmark to be quietly
+ overfit, and the SFT subsets give researchers a concrete starting point
+ for studying targeted invariance training.
+
+ **Potential negative impact and risks of misuse.**
+
+ - *Over-claiming general reasoning.* High invariance scores on this
+   dataset measure invariance on transitive ordering only. A naive reader
+   could mistake them for evidence of general reasoning robustness; results
+   should always be reported with the scope of the benchmark stated.
+ - *Leaderboard pressure.* As with any public benchmark, optimizing
+   directly against this dataset risks Goodharting: gains here may not
+   transfer to natural-language reasoning. We encourage reporting paired
+   held-out evaluations from other benchmarks.
+ - *Cultural / linguistic skew.* Because entity pools are anglocentric,
+   models tuned on this data may improve on similarly distributed inputs
+   while showing little transfer to non-English or non-Western surface
+   forms.
+ - *Indirect demographic correlations.* US-style first names carry
+   conventional gender signals. If a downstream model is trained on the SFT
+   subsets in a way that picks up name-conditioned heuristics, that bias
+   will propagate. Users training on this data should audit for gendered
+   response patterns.
+
+ **Mitigations in this release.**
+
+ - The dataset is open-license (CC BY 4.0) but **gated by deliberate
+   narrowness of scope** rather than access controls: every record is
+   explicitly a synthetic transitive-ordering question, and the dataset
+   card states the intended-use boundaries above.
+ - Held-out relations (`pos`, `depth`) are **excluded from the SFT
+   subsets** so OOD generalization claims remain defensible.
+ - The earlier internal `_shufnames` / `noleak` audit (where the displayed
+   names list accidentally encoded the answer) is documented above; the
+   released eval and SFT files have the leak fixed.
+ - The generator is open-source, allowing external auditors to reproduce
+   every record from a documented seed.
+
+ No usage gating, embargo, or differential-access controls are applied.
+ Users are expected to follow the limitations and intended-use guidance
+ above and to cite the dataset when reporting results.
+
+ ## Limitations
+
+ - **Narrow reasoning skill.** Each question tests transitive closure over a
+   linear ordering induced by a single binary relation. Performance here does
+   not generalize to multi-step natural-language reasoning, common-sense
+   inference, math, code, or any non-ordering relational structure.
+ - **Synthetic phrasings.** Questions are produced by a small grammar (a fixed
+   template per relation) rather than written by humans, so surface-form
+   diversity is limited. Distributional gaps relative to natural prose,
+   conversational queries, or noisy real-world text are large.
+ - **English only.** All prompts and answers are English. The benchmark says
+   nothing about cross-lingual robustness.
+ - **Yes/no output space.** The eval rewards a literal `yes` or `no` token.
+   Models that hedge, refuse, or emit verbose chains of thought without a
+   committed answer score zero on accuracy and invariance regardless of
+   whether the underlying reasoning is correct. Practitioners using CoT-style
+   models should add an answer-extraction step (see
+   `invariance_bench/scoring.py`; a sketch follows this list).
+ - **Single deterministic ground truth.** The eval does not measure
+   calibration, uncertainty, or partial credit; orderings with ties or
+   under-specified constraints are not represented.
+ - **Long-context confound.** At large `N` (especially in `eval_pos_largeN`
+   and the `N=2048` slice of `eval_pos`), prompts can exceed the effective
+   context window of many models. Failures at large `N` may reflect context
+   handling rather than reasoning ability and should not be interpreted as
+   pure invariance violations.
+ - **Held-out coverage.** The OOD evaluation surface is two relations (`pos`,
+   `depth`); the benchmark cannot verify whether a model's invariance
+   generalizes to relations beyond those seen at train *and* eval time.
+ - **Names-list leak in earlier internal versions.** Released `_shufnames`
+   eval files and the `sft_noleak` training files **do not** have this leak.
+   Older `base2_*` artifacts (not released on HF) did, and any third-party
+   reuse of those files would over-estimate model performance.
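+
+ A sketch of such an extraction step (not the exact logic of
+ `invariance_bench/scoring.py`; the last-token heuristic is an assumption):
+
+ ```python
+ import re
+
+ def extract_answer(text: str) -> str | None:
+     """Pick the final standalone yes/no token so a closing verdict wins."""
+     hits = re.findall(r"\b(yes|no)\b", text.lower())
+     return hits[-1] if hits else None
+
+ print(extract_answer("B is behind A, so... the answer is Yes."))  # -> yes
+ ```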
+
+ **Not recommended for:** general-reasoning leaderboards, safety/alignment
+ evaluation, multilingual evaluation, and evaluating models whose primary
+ output mode is a long chain of thought without an extractable boolean answer.
+
+ ## Biases
+
+ - **Anglo/Western entity skew.** The `names` pool used by the `pos`-relation
+   questions and by the SFT data is drawn from US-style first-name lists, so
+   most prompts contain English-coded given names. The `temperature_cities`,
+   `height_structures`, and `time_events` pools likewise over-represent
+   Wikipedia/Wikidata-prominent (largely Western, English-language)
+   entities. Under-represented populations include non-Western cultures and
+   languages whose entities have lower Wikipedia coverage.
+ - **Source-driven content bias.** Wikidata and Wikipedia are themselves known
+   to be skewed toward male, Western, and modern-era subjects (especially in
+   `age_figures`). The benchmark inherits these biases. Curated fallback
+   lists for `weight_objects`, `price_items`, and `rank_athletes` reflect the
+   authors' own selections and are not demographically balanced.
+ - **Relation-template bias.** Each relation has one canonical phrasing and
+   one inverse phrasing. The grammar does not exercise the full space of
+   English ways to express ordering (passive voice, comparative clauses,
+   idiomatic expressions, etc.), so reported invariance is a conservative
+   lower bound: a model that is invariant on this dataset may still be
+   sensitive to other surface variations.
+ - **Position-of-name leak (mitigated).** In an earlier internal version,
+   the order of names listed in the prompt correlated with their position in
+   the underlying ordering, which models could exploit without reading the
+   rules. Released eval files (`*_shufnames.jsonl`) and the `sft_noleak`
+   subset shuffle the displayed names list to remove this leak. Users
+   regenerating data with the included scripts must pass
+   `--shuffle-names-display` to reproduce the no-leak setting.
+ - **Train/eval relation leakage controls.** `pos` (front/behind) and `depth`
+   (above/below) are deliberately held out of the SFT data so they remain
+   OOD for fine-tuned checkpoints. Mixing held-out-relation data into SFT
+   training defeats the OOD claim.
+ - **Per-`N` row-count imbalance.** Both eval and SFT skew toward small `N`
+   (the SFT distribution explicitly down-weights large `N`). Aggregate
+   metrics across `N` are therefore dominated by the small-`N` regime;
+   report per-`N` numbers when comparing models (a minimal aggregation
+   sketch follows this list).
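+
+ A minimal per-`N` aggregation sketch (the `n` and `answer` field names are
+ assumptions about the record schema, shown for illustration only):
+
+ ```python
+ from collections import defaultdict
+
+ def per_n_accuracy(records: list[dict], preds: list[str]) -> dict[int, float]:
+     """Accuracy bucketed by N so small-N rows cannot dominate the average."""
+     hits, total = defaultdict(int), defaultdict(int)
+     for rec, pred in zip(records, preds):
+         total[rec["n"]] += 1
+         hits[rec["n"]] += int(pred == rec["answer"])
+     return {n: hits[n] / total[n] for n in sorted(total)}
+
+ demo = [{"n": 4, "answer": "yes"}, {"n": 4, "answer": "no"},
+         {"n": 8, "answer": "yes"}]
+ print(per_n_accuracy(demo, ["yes", "no", "no"]))  # {4: 1.0, 8: 0.0}
+ ```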
 
  ## License