doc: added Dataset card

README.md (CHANGED)

@@ -190,4 +190,83 @@ configs:
    path: technology/upper_direct-*
  - split: lower_direct
    path: technology/lower_direct-*
license: cc-by-sa-4.0
task_categories:
- question-answering
- multiple-choice
language:
- en
tags:
- knowledge-probing
- llm-evaluation
- entity-resolution
- machine-unlearning
size_categories:
- 10K<n<100K
---

# ShadowBench: A Hardened Benchmark for Latent Entity Association

**ShadowBench** is a diagnostic framework designed to evaluate the "Shadow Knowledge" of Large Language Models (LLMs). While traditional benchmarks measure factual recall using explicit entity names (e.g., "Elon Musk"), ShadowBench evaluates whether a model can navigate its internal knowledge graph when these **lexical anchors** are removed.

## Dataset Summary

The core task in ShadowBench is **Dual-Trait Association (DTA)**. A model is presented with an anonymized shadow description (Trait A) and must identify a second, independent fact about the same entity (Trait B) from a set of options that also contains three "Hard Negative" distractors.

Success requires the model to use the hidden entity as a semantic bridge:

`Trait A (Shadow)` → `[Latent Entity]` → `Trait B (Target Choice)`

### Key Features

* **Adversarially Hardened:** Unlike standard MCQs, ShadowBench (v3) is filtered to prevent "shortcut learning" via gendered pronouns, chronological era-matching, or category leaks.
* **Scale Robust:** Evaluated on models ranging from 8B parameters (Llama-3, Qwen3) to frontier scales (GPT-5.4).
* **Multi-Domain:** Covers Technology, Sports (Tennis), and Entertainment (Actors).
* **Stratified:** Includes "Upper Tier" (Head) and "Lower Tier" (Tail) entities based on Wikipedia popularity metrics to evaluate popularity bias.

## Dataset Structure

### Subsets

The dataset is divided into three primary domains:

* `technology`: Corporate, product, and leadership-based associations.
* `sports`: Numerical achievements and career milestones in professional tennis.
* `entertainment`: Narrative roles and filmographic associations.

### Splits

Each subset contains the following splits:

* `upper_shadow` / `lower_shadow`: The primary anonymized DTA task.
* `upper_direct` / `lower_direct`: A control split where explicit names are restored to establish a factual "ceiling" (Direct QA).
* `upper_controlled` / `lower_controlled`: A 1:1 entity-matched subset used for sensitivity analysis.

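Because the `*_direct` splits establish a per-entity factual ceiling, one way to run the sensitivity analysis is to measure how much of that correct knowledge survives anonymization. A minimal sketch, assuming you have already recorded per-entity correctness on both splits (the two dictionaries below are hypothetical results, not part of the dataset):

```python
def shadow_gap(direct_correct: dict, shadow_correct: dict) -> float:
    """Mean accuracy drop after anonymization, restricted to entities
    the model answers correctly when names are intact (the ceiling)."""
    known = [e for e, ok in direct_correct.items() if ok]
    if not known:
        return 0.0
    retained = sum(bool(shadow_correct.get(e, False)) for e in known) / len(known)
    return 1.0 - retained

# Hypothetical per-entity results for illustration only.
direct = {"entity_1": True, "entity_2": True, "entity_3": False}
shadow = {"entity_1": True, "entity_2": False, "entity_3": False}
print(shadow_gap(direct, shadow))  # → 0.5
```

Restricting the denominator to entities the model already knows keeps the gap from being confounded by plain factual ignorance.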
### Data Schema

Each sample contains:

- `entity`: The hidden entity name.
- `question`: The shadow description (Trait A).
- `choices`: A dictionary (A, B, C, D) containing Trait B and three hard distractors.
- `answer`: The correct option key.
- `metadata`: A mapping where each key (A, B, C, D) gives the actual entity represented by that answer choice.

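A record with this schema can be rendered into a standard multiple-choice prompt. The sample below is a hypothetical placeholder illustrating the field layout, not an actual ShadowBench item:

```python
# Illustrative record matching the schema above (all values invented).
sample = {
    "entity": "Jane Doe",
    "question": "This executive led a major cloud division before founding a robotics startup.",
    "choices": {
        "A": "Authored a bestselling memoir",
        "B": "Holds a patent in warehouse automation",
        "C": "Served on a central bank board",
        "D": "Won an architecture prize",
    },
    "answer": "B",
    "metadata": {"A": "Distractor 1", "B": "Jane Doe", "C": "Distractor 2", "D": "Distractor 3"},
}

def format_prompt(sample: dict) -> str:
    """Render a sample as a four-option multiple-choice prompt."""
    lines = [sample["question"], ""]
    for key in sorted(sample["choices"]):
        lines.append(f"{key}. {sample['choices'][key]}")
    lines.append("")
    lines.append("Answer with a single letter (A-D).")
    return "\n".join(lines)

print(format_prompt(sample))
```

Note that `metadata` lets you recover which entity each distractor belongs to, which is useful for error analysis.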
## Construction & Hardening (v1 to v3)

ShadowBench was developed through an iterative process to ensure success is strictly contingent on latent semantic reasoning:

1. **v1:** Lexical Anonymization (Names removed).
2. **v2:** Chronological & Syntactic Hardening (Pronouns neutralized + Generational Proximity Filter added).
3. **v3:** Demographic Homogeneity (Gender-matched distractors added to prevent elimination via lexical cues like "WTA" or "Best Actress").

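The v2/v3 filters amount to a selection predicate over candidate distractors. The sketch below only illustrates the idea; the field names (`gender`, `birth_year`) and the 15-year window are assumptions for illustration, not the actual construction pipeline:

```python
def harden_distractors(target: dict, candidates: list, max_era_gap: int = 15) -> list:
    """Keep only distractors that cannot be eliminated by surface cues:
    same gender as the hidden entity (v3) and a nearby generation (v2)."""
    return [
        c for c in candidates
        if c["gender"] == target["gender"]
        and abs(c["birth_year"] - target["birth_year"]) <= max_era_gap
    ]

# Hypothetical entities for illustration only.
target = {"name": "Target", "gender": "F", "birth_year": 1980}
candidates = [
    {"name": "D1", "gender": "F", "birth_year": 1978},  # kept
    {"name": "D2", "gender": "M", "birth_year": 1981},  # dropped: gender cue
    {"name": "D3", "gender": "F", "birth_year": 1950},  # dropped: era cue
]
print([c["name"] for c in harden_distractors(target, candidates)])  # → ['D1']
```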
## Usage

You can load this dataset using the Hugging Face `datasets` library:

```python
from datasets import load_dataset

# Load the Technology Shadow split
dataset = load_dataset("shadow-bench/ShadowBench", "technology", split="upper_shadow")

# Inspect a sample
print(dataset[0])
```

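A scoring loop over a loaded split can then be as simple as the sketch below; `predict` is a stand-in for whatever model call you use (the two toy samples are illustrative, not real data):

```python
def accuracy(samples, predict) -> float:
    """Fraction of samples where the predicted option key matches `answer`."""
    correct = sum(predict(s["question"], s["choices"]) == s["answer"] for s in samples)
    return correct / len(samples)

# Toy stand-in model: always answers "A".
def baseline(question, choices):
    return "A"

toy = [
    {"question": "q1", "choices": {"A": "x", "B": "y"}, "answer": "A"},
    {"question": "q2", "choices": {"A": "x", "B": "y"}, "answer": "B"},
]
print(accuracy(toy, baseline))  # → 0.5
```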
## Licensing

This dataset is derived from Wikipedia and is licensed under CC BY-SA 4.0.

## Citation

If you use this dataset in your research, please cite our paper: [TBD]