EXOROBOURII committed on
Commit 596e999 · verified · 1 Parent(s): 99c8d2c

Update README.md

Files changed (1)
  1. README.md +29 -29
README.md CHANGED
@@ -8,7 +8,7 @@ language:
  - en
  size_categories:
  - 100K<n<1M
- pretty_name: 'Stanza-2: Geometry-Aware WikiText'
  tags:
  - dependency-parsing
  - universal-dependencies
@@ -18,18 +18,18 @@ tags:
  - wikipedia
  ---
 
- # Dataset Card for Stanza-2
 
  ## Dataset Description
 
- Stanza-2 is a structurally pristine, mathematically verified NLP dataset designed for multi-task language modeling, custom tokenizer training, structural NLP research, and mechanistic interpretability work.
 
- It is a rigorously modernized and annotated derivative of the `wikitext-2-raw-v1` corpus. Using the Stanford NLP `Stanza` neural pipeline, every token in the corpus has been explicitly mapped to its grammatical, syntactic, and semantic function across seven aligned annotation layers. Stanza-2 preserves document geometry, explicitly labeling Markdown headers to support structure-aware neural architectures.
 
  - **Curated by:** Jonathan R. Belanger (Exorobourii LLC)
  - **Language:** English (`en`)
  - **License:** CC-BY-SA-4.0
- - **DOI:** Locked at publication
  - **Total Sentences:** 101,455 (across all splits)
  - **Total Tokens:** 2,469,912
 
@@ -44,17 +44,17 @@ It is a rigorously modernized and annotated derivative of the `wikitext-2-raw-v1
  | Test | 10,073 | 237,742 |
  | **Total** | **101,455** | **2,469,912** |
 
- Rows discarded by degradation filter: **8** (out of ~101,463 pre-filter)
 
  ---
 
  ## Structural Characterization
 
- Unlike standard text corpora, Stanza-2 ships with a full quantitative geometric characterization derived from its dependency structure. These figures are provided to assist researchers in assessing corpus suitability before use.
 
  ### Dependency Degree Distribution
 
- Dependency degree (number of dependents per token) follows a power-law distribution with exponent **α = −1.06**. The corpus is heavily left-concentrated — the majority of tokens are leaves.
 
  | Percentile | Degree |
  |-----------|--------|
@@ -86,17 +86,17 @@ The cross-product of UPOS tags and DepRel labels yields **451 unique UPOS×DepRe
 
  ### Geometric Motif Analysis
 
- A dependency motif is defined as a parent node (UPOS×DepRel) paired with a sorted tuple of its children's (UPOS×DepRel) labels. The train split contains **106,057 unique motifs** following a power-law frequency distribution.
 
- | Coverage | Motifs Required |
- |----------|----------------|
- | 50% | 343 |
- | 80% | ~3,500 |
- | 90% | ~12,000 |
- | 95% | ~30,000 |
- | 100% | 106,057 |
 
- The top 343 motifs account for half of all motif occurrences, a Zipfian concentration consistent with the structural redundancy hypothesis underlying geometry-aware tokenizer design.
 
  ### Structural Rigidity by UPOS
 
@@ -129,13 +129,13 @@ Degree carries substantially more linguistic signal than depth. Neither measurem
 
  ### Per-Sentence Structural Complexity
 
- Per-sentence degree entropy has mean **1.555 bits** (std 0.275, max 1.954 bits). The tight distribution indicates consistent structural complexity across sentences, with limited register variance attributable to the Wikipedia source.
 
  ---
 
  ## Dataset Structure
 
- Stanza-2 uses **Parallel Arrays**. Each row represents a single sentence. All linguistic features are stored in co-indexed, equal-length arrays guaranteeing 1:1 token-to-annotation alignment.
 
  ### Schema
 
@@ -167,28 +167,28 @@ To prevent silent upstream updates from compromising downstream reproducibility,
  - **Source Archive:** `wikitext-2-raw-v1.zip`
  - **SHA-256 Checksum:** `ef7edb566e3e2b2d31b29c1fdb0c89a4cc683597484c3dc2517919c615435a11`
 
- ### Phase 2: Degradation Filtering
 
- WikiText-2's unknown token substitution (`<unk>`) is non-uniform. A penalized degradation score is computed per text block:
 
  ```
  D*(P) = (|unk| / N) · log₂(1 + √N)
  ```
 
- The logarithmic penalty prevents discrimination against longer passages with isolated `<unk>` tokens. The discard threshold is set at μ + over the distribution of affected blocks. **8 rows were discarded** under this criterion.
 
  ### Phase 3: GPU-Accelerated Normalization
 
  Text normalization was performed using NVIDIA RAPIDS cuDF on an L4 GPU. Four operations applied in sequence:
 
  1. **Whitespace normalization:** leading/trailing whitespace stripped
- 2. **Hyphen modernization:** legacy `@-@` artifacts collapsed to standard hyphens
- 3. **Punctuation normalization:** floating punctuation corrected via CPU bypass using Python `re` with backreferences *(cuDF vectorized backreferences routed through standard Python to bypass known libcudf regex injection vulnerabilities)*
  4. **Header normalization:** `= Title =` through `====== Title ======` converted to Markdown H1–H6 in strict descending order to preserve document hierarchy
 
  ### Phase 4: Stanza NLP Enrichment
 
- Stanza 1.x initialized with `tokenize, pos, lemma, depparse, ner` on GPU. Output serialized to Parquet with ZSTD compression (level 3).
 
  Following enrichment, all Parquet files were subjected to a microscopic integrity audit guaranteeing:
 
@@ -196,7 +196,7 @@ Following enrichment, all Parquet files were subjected to a microscopic integrit
  2. **Root singularity:** every sentence has exactly one dependency root (`head == 0`)
  3. **Graph bounds:** no head index points outside the sentence boundary
 
- The Stanza-2 dataset is **100% structurally valid** across all splits.
 
  ### Phase 5: Structural Metadata Injection
 
@@ -257,7 +257,7 @@ The following analytical reports are available in the dataset repository:
  | `geometric_motifs_wiki.train.enriched.csv` | 106,057 unique dependency motifs |
  | `entity_distribution.csv` | Named entity frequencies and types |
  | `entity_cooccurrence.csv` | Sentence-level entity co-occurrence pairs |
- | `motif_analytics_summary.txt` | Power-law analysis and valency statistics |
  | `structural_rigidity_full.csv` | Per-UPOS weighted valency statistics |
  | `degree_distribution.csv` | Full token degree frequency table |
  | `depth_distribution.csv` | Full token depth frequency table |
@@ -271,11 +271,11 @@ The following analytical reports are available in the dataset repository:
  ```bibtex
  @dataset{belanger2025stanza2,
  author = {Belanger, Jonathan R.},
- title = {Stanza-2: A Structurally Enriched Modernization of WikiText-2},
  year = {2026},
  publisher = {HuggingFace},
  url = {https://huggingface.co/datasets/EXOROBOURII/Stanza-Wikitext-2},
- doi = {[10.57967/hf/8060]}
  }
  ```
 
 
@@ -8,7 +8,7 @@ language:
  - en
  size_categories:
  - 100K<n<1M
+ pretty_name: 'Stanza-Wikitext-2: A Structurally Enriched Modernization of WikiText-2'
  tags:
  - dependency-parsing
  - universal-dependencies
 
@@ -18,18 +18,18 @@ tags:
  - wikipedia
  ---
 
+ # Dataset Card for Stanza-Wikitext-2
 
  ## Dataset Description
 
+ Stanza-Wikitext-2 is a structurally pristine, mathematically verified NLP dataset designed for multi-task language modeling, custom tokenizer training, structural NLP research, and mechanistic interpretability work.
 
+ It is a rigorously modernized and annotated derivative of the `wikitext-2-raw-v1` corpus. Using the Stanford NLP `Stanza` neural pipeline, every token in the corpus has been explicitly mapped to its grammatical, syntactic, and semantic function across seven aligned annotation layers. Stanza-Wikitext-2 preserves document geometry, explicitly labeling Markdown headers to support structure-aware neural architectures.
 
  - **Curated by:** Jonathan R. Belanger (Exorobourii LLC)
  - **Language:** English (`en`)
  - **License:** CC-BY-SA-4.0
+ - **DOI:** 10.57967/hf/8060
  - **Total Sentences:** 101,455 (across all splits)
  - **Total Tokens:** 2,469,912
 
 
@@ -44,17 +44,17 @@ It is a rigorously modernized and annotated derivative of the `wikitext-2-raw-v1
  | Test | 10,073 | 237,742 |
  | **Total** | **101,455** | **2,469,912** |
 
+ Rows removed by Phase 4c integrity repair: **8** (train split only)
 
  ---
 
  ## Structural Characterization
 
+ Unlike standard text corpora, Stanza-Wikitext-2 ships with a full quantitative geometric characterization derived from its dependency structure. These figures are provided to assist researchers in assessing corpus suitability before use.
 
  ### Dependency Degree Distribution
 
+ Dependency degree (number of dependents per token) is strongly right-skewed with faster-than-power-law decay. A KS-based MLE scan (Clauset et al., 2009) found no well-supported power-law regime across the observable degree range. The corpus is heavily left-concentrated — the majority of tokens are leaves.
 
  | Percentile | Degree |
  |-----------|--------|
 
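The degree statistics above are derived from per-token dependent counts. As a worked illustration (the helper below is a sketch, not part of the dataset tooling; it assumes the card's convention of 1-indexed `head` pointers with `head == 0` marking the root):

```python
from collections import Counter

def dependency_degrees(heads):
    """Degree of each token = number of dependents pointing at it.
    `heads` holds 1-indexed head pointers; head == 0 marks the root."""
    counts = Counter(h for h in heads if h != 0)
    return [counts.get(i, 0) for i in range(1, len(heads) + 1)]

# "The quick fox jumped": tokens 1-3 all attach to token 4, the root verb.
print(dependency_degrees([4, 4, 4, 0]))  # → [0, 0, 0, 3]
```

Aggregating these per-token degrees over a split yields the percentile table and the leaf-heavy shape described above.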
@@ -86,17 +86,17 @@ The cross-product of UPOS tags and DepRel labels yields **451 unique UPOS×DepRe
 
  ### Geometric Motif Analysis
 
+ A dependency motif is defined as a parent node (UPOS×DepRel) paired with a sorted tuple of its children's (UPOS×DepRel) labels. The train split contains **106,057 unique motifs** with a strongly right-skewed frequency distribution.
 
+ | Coverage | Motifs Required | % of Total Motifs |
+ |----------|----------------|-------------------|
+ | 50% | 343 | 0.32% |
+ | 80% | 7,743 | 7.30% |
+ | 90% | 33,080 | 31.19% |
+ | 95% | 69,571 | 65.60% |
+ | 100% | 106,057 | 100% |
 
+ The top 343 motifs account for half of all motif occurrences. The distribution is heavily long-tailed: 95% coverage requires 65.6% of the full motif vocabulary, indicating a compact high-frequency structural core alongside a large population of rare configurations.
 
  ### Structural Rigidity by UPOS
 
 
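The motif definition above can be made concrete in a few lines; a minimal sketch (function and argument names are illustrative, not the dataset's own tooling):

```python
from collections import defaultdict

def sentence_motifs(heads, upos, deprel):
    """A motif = a parent's (UPOS, DepRel) label paired with the sorted
    tuple of its children's (UPOS, DepRel) labels."""
    labels = list(zip(upos, deprel))
    children = defaultdict(list)
    for child, head in enumerate(heads):
        if head != 0:                      # heads are 1-indexed; 0 = root
            children[head - 1].append(labels[child])
    return {(labels[p], tuple(sorted(kids))) for p, kids in children.items()}

# "A fox": a single motif whose parent is the root NOUN with one DET child.
motifs = sentence_motifs([2, 0], ["DET", "NOUN"], ["det", "root"])
```

Counting these motifs across the train split and ranking by frequency reproduces the coverage table above.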
@@ -129,13 +129,13 @@ Degree carries substantially more linguistic signal than depth. Neither measurem
 
  ### Per-Sentence Structural Complexity
 
+ Per-sentence degree entropy has mean **1.555 bits** (std 0.275, max 1.954 bits). Structural complexity means are stable across all three splits, confirming that the canonical WikiText-2 split boundaries do not introduce distributional artifacts.
 
  ---
 
  ## Dataset Structure
 
+ Stanza-Wikitext-2 uses **Parallel Arrays**. Each row represents a single sentence. All linguistic features are stored in co-indexed, equal-length arrays guaranteeing 1:1 token-to-annotation alignment.
 
  ### Schema
 
 
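Parallel-array storage makes the 1:1 alignment guarantee trivially checkable per row. A hedged sketch (the column names here are placeholders; consult the Schema table for the actual field names):

```python
def check_alignment(row, columns):
    """Verify that the co-indexed arrays in one sentence row share a single
    length, i.e. every token has exactly one annotation in every layer."""
    lengths = {name: len(row[name]) for name in columns}
    if len(set(lengths.values())) != 1:
        raise ValueError(f"misaligned parallel arrays: {lengths}")
    return lengths[columns[0]]

row = {"tokens": ["A", "fox"], "upos": ["DET", "NOUN"], "heads": [2, 0]}
n_tokens = check_alignment(row, ["tokens", "upos", "heads"])  # → 2
```

A loader can run this check once per row on ingest; any drift between annotation layers surfaces immediately instead of silently corrupting downstream training.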
@@ -167,28 +167,28 @@ To prevent silent upstream updates from compromising downstream reproducibility,
  - **Source Archive:** `wikitext-2-raw-v1.zip`
  - **SHA-256 Checksum:** `ef7edb566e3e2b2d31b29c1fdb0c89a4cc683597484c3dc2517919c615435a11`
 
+ ### Phase 2: Degradation Audit
 
+ WikiText-2 is distributed in two variants: a pre-tokenized `.tokens` format in which low-frequency terms are replaced with `<unk>` substitution tokens, and a `.raw` format retaining original surface forms. This pipeline operates on the `.raw` files exclusively. A precautionary contamination audit computed a penalized degradation score per text block:
 
  ```
  D*(P) = (|unk| / N) · log₂(1 + √N)
  ```
 
+ The audit confirmed zero `<unk>` tokens across all 23,767 text blocks, returning a clean result. No filtering was applied or required. This validates the source file selection: by operating on `.raw` rather than `.tokens`, the pipeline inherits no vocabulary substitution artifacts, and downstream analyses reflect genuine surface token distributions.
 
  ### Phase 3: GPU-Accelerated Normalization
 
  Text normalization was performed using NVIDIA RAPIDS cuDF on an L4 GPU. Four operations applied in sequence:
 
  1. **Whitespace normalization:** leading/trailing whitespace stripped
+ 2. **Hyphen modernization:** legacy `@-@` artifacts collapsed to standard hyphens (e.g. `Apollo @-@ Soyuz` → `Apollo-Soyuz`)
+ 3. **Punctuation normalization:** floating punctuation corrected via CPU bypass using Python `re` with backreferences (e.g. `word ,` → `word,`)
  4. **Header normalization:** `= Title =` through `====== Title ======` converted to Markdown H1–H6 in strict descending order to preserve document hierarchy
 
  ### Phase 4: Stanza NLP Enrichment
 
+ Stanza 1.11.1 initialized with `tokenize, pos, lemma, depparse, ner` on GPU. Output serialized to Parquet with ZSTD compression (level 3).
 
  Following enrichment, all Parquet files were subjected to a microscopic integrity audit guaranteeing:
 
 
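The degradation score in the fenced formula above is directly computable; a minimal sketch (the helper name is assumed, the formula is taken verbatim from the card):

```python
import math

def degradation_score(n_unk, n_tokens):
    """Penalized degradation score: D*(P) = (|unk| / N) * log2(1 + sqrt(N))."""
    if n_tokens == 0:
        return 0.0
    return (n_unk / n_tokens) * math.log2(1 + math.sqrt(n_tokens))

# A clean block scores 0. Because the penalty grows only logarithmically
# with block length, one isolated <unk> in a long passage is penalized
# less than one <unk> in a short passage.
print(degradation_score(0, 400))  # → 0.0
```

On the `.raw` source described above, every block has `n_unk == 0`, so the audit returns 0.0 everywhere, consistent with the clean result reported in the card.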
@@ -196,7 +196,7 @@ Following enrichment, all Parquet files were subjected to a microscopic integrit
  2. **Root singularity:** every sentence has exactly one dependency root (`head == 0`)
  3. **Graph bounds:** no head index points outside the sentence boundary
 
+ Eight structurally invalid sentences were identified in the train split and removed via automated ledger repair. The Stanza-Wikitext-2 dataset is **100% structurally valid** across all splits.
 
  ### Phase 5: Structural Metadata Injection
 
 
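The root-singularity and graph-bounds criteria above reduce to a simple per-sentence predicate; a sketch under the card's own conventions (1-indexed heads, `head == 0` for the root; the function name is illustrative):

```python
def audit_sentence(heads):
    """Check root singularity (exactly one head == 0) and graph bounds
    (every head index stays inside the sentence)."""
    n = len(heads)
    one_root = sum(1 for h in heads if h == 0) == 1
    in_bounds = all(0 <= h <= n for h in heads)
    return one_root and in_bounds

print(audit_sentence([2, 0]))  # → True
print(audit_sentence([5, 0]))  # → False (head points outside the sentence)
```

Running such a predicate over every row is what lets the card claim 100% structural validity after the eight failing sentences were removed.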
@@ -257,7 +257,7 @@ The following analytical reports are available in the dataset repository:
  | `geometric_motifs_wiki.train.enriched.csv` | 106,057 unique dependency motifs |
  | `entity_distribution.csv` | Named entity frequencies and types |
  | `entity_cooccurrence.csv` | Sentence-level entity co-occurrence pairs |
+ | `motif_analytics_summary.txt` | Motif coverage analysis and valency statistics |
  | `structural_rigidity_full.csv` | Per-UPOS weighted valency statistics |
  | `degree_distribution.csv` | Full token degree frequency table |
  | `depth_distribution.csv` | Full token depth frequency table |
 
@@ -271,11 +271,11 @@ The following analytical reports are available in the dataset repository:
  ```bibtex
  @dataset{belanger2025stanza2,
  author = {Belanger, Jonathan R.},
+ title = {Stanza-Wikitext-2: A Structurally Enriched Modernization of WikiText-2},
  year = {2026},
  publisher = {HuggingFace},
  url = {https://huggingface.co/datasets/EXOROBOURII/Stanza-Wikitext-2},
+ doi = {10.57967/hf/8060}
  }
  ```