---
license: mit
pretty_name: Syntactic Reconstruction Corpus
task_categories:
- text-generation
language:
- en
tags:
- wiki
size_categories:
- 1K<n<10K
---

# SyReC: The Syntactic Reconstruction Corpus

## Dataset Description

**SyReC** (Syntactic Reconstruction Corpus) is a synthetically generated dataset of several thousand examples designed to benchmark and train language models on **syntactic and semantic reconstruction**. The core challenge is to rebuild a coherent, grammatically correct paragraph from a "bag of words": the paragraph's constituent words, deliberately stripped of their original order.

The dataset tests and strengthens a model's understanding of syntax, semantic relationships, and logical flow, moving beyond next-token prediction toward a structural comprehension of language. By training on SyReC, models learn to infer grammatical structure and narrative coherence from a fixed set of semantic tokens, a skill that supports tasks such as high-fidelity summarization, data grounding, and complex instruction following.

This version of SyReC was generated exclusively from the `wikimedia/wikipedia` (`20231101.en`) dataset.
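For quick experimentation, the corpus can be loaded with the `datasets` library. A minimal sketch (the `train` split name is an assumption; the repository id matches the URL in the citation section below):

```python
from datasets import load_dataset

# Load SyReC from the Hugging Face Hub (split name assumed to be "train").
ds = load_dataset("ambrosfitz/SyReC", split="train")

record = ds[0]
print(record["original_text"][:80])
print(record["scrambled_text_random"][:80])
```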
## How the Data is Generated

The dataset was created using an automated pipeline with the following steps:

1. **Source Text Extraction:** Paragraphs are streamed from the English Wikipedia dataset (`wikimedia/wikipedia`, `20231101.en` config).
2. **Filtering:** Only paragraphs between 250 and 2000 characters are selected, ensuring substantive yet manageable content.
3. **Tokenization:** Each paragraph is tokenized into a list of its constituent words, all words are lowercased, and punctuation is discarded.
4. **Scrambling:** The word list is scrambled using three distinct methods (see the sketch below), creating a diverse set of inputs for each original text.
5. **Formatting:** The original text and its scrambled versions are saved as a JSONL file.
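A minimal sketch of steps 3 and 4, assuming a simple regex tokenizer; the function names are illustrative, not the actual pipeline code:

```python
import random
import re

def tokenize(paragraph: str) -> list[str]:
    # Lowercase and extract word tokens; punctuation is discarded,
    # but internal apostrophes (e.g. "brahe’s") are kept.
    return re.findall(r"[\w’']+", paragraph.lower())

def scramble(words: list[str]) -> dict[str, str]:
    shuffled = words.copy()
    random.shuffle(shuffled)  # completely random order
    return {
        "scrambled_text_random": ", ".join(shuffled),
        # Plain alphabetical order.
        "scrambled_text_alphabetical": ", ".join(sorted(words)),
        # Word length first, then alphabetical among words of equal length.
        "scrambled_text_numerical": ", ".join(sorted(words, key=lambda w: (len(w), w))),
    }
```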
## Data Fields

Each record in the dataset is a JSON object with the following fields:

* `original_text`: (string) The original, coherent paragraph from Wikipedia. This serves as the ground truth for reconstruction.
* `scrambled_text_random`: (string) A comma-separated string of the words from the original text, shuffled into a completely random order.
* `scrambled_text_alphabetical`: (string) A comma-separated string of the words from the original text, sorted in alphabetical order.
* `scrambled_text_numerical`: (string) A comma-separated string of the words from the original text, sorted by word length (shortest to longest), with alphabetical sorting for words of the same length.

### Example Record

```json
{
  "original_text": "The path of an object through space is called its orbit. Kepler initially assumed that the orbits of planets were circles, but doing so did not allow him to find orbits that were consistent with Brahe’s observations. Working with the data for Mars, he eventually discovered that the orbit of that planet had the shape of a somewhat flattened circle, or ellipse. Next to the circle, the ellipse is the simplest kind of closed curve, belonging to a family of curves known as conic sections.",
  "scrambled_text_random": "initially, observations, the, a, kepler, had, his, he, to, conic, belonging, that, find, did, of, the, of, ellipse, eventually, planet, but, of, simplest, an, orbit, a, curves, of, with, is, a, to, were, orbits, the, somewhat, path, known, closed, allow, next, as, the, that, were, sections, flattened, with, mars, a, him, so, not, data, doing, family, planets, its, through, discovered, consistent, called, orbit, the, for, circle, assumed, working, shape, circles, kind, ellipse, space, the, is, object, curve, brahe’s",
  "scrambled_text_alphabetical": "a, a, a, a, a, allow, an, as, assumed, belonging, brahe’s, but, called, circle, circle, circles, closed, conic, consistent, curve, curves, data, did, discovered, doing, ellipse, ellipse, eventually, family, find, flattened, for, had, he, him, his, initially, is, is, its, kepler, kind, known, mars, next, not, object, observations, of, of, of, of, orbit, orbit, orbits, path, planet, planets, sections, shape, simplest, so, somewhat, space, that, that, the, the, the, the, the, the, through, to, to, were, were, with, with, working",
  "scrambled_text_numerical": "a, a, a, a, a, an, as, but, did, for, had, he, him, his, is, is, its, not, of, of, of, of, so, the, the, the, the, the, the, to, to, allow, call, curve, data, doing, find, kind, mars, next, path, shape, were, were, with, with, closed, conic, circle, curves, ellipse, family, kepler, object, orbits, planet, space, working, assumed, brahe’s, circles, planets, sections, simplest, flattened, consistent, discovered, initially, somewhat, belonging, observations"
}
```
## Intended Use

This dataset is primarily intended for **fine-tuning language models** to perform the syntactic reconstruction task. By training a model to predict the `original_text` from one of the `scrambled_text_*` fields (see the sketch after this list), you can teach it to:

* **Improve its understanding of English syntax and grammar.**
* **Strengthen its ability to maintain context and logical flow.**
* **Enhance its "source grounding" capabilities for RAG and fact-checking tasks.**
* **Follow complex, constrained instructions more reliably.**
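A minimal sketch of building supervised (prompt, completion) pairs with the `datasets` library; the prompt template and the `train` split name are illustrative assumptions:

```python
from datasets import load_dataset

ds = load_dataset("ambrosfitz/SyReC", split="train")  # split name assumed

PROMPT = "Reconstruct the original paragraph from this bag of words:\n{words}\n\nParagraph:"

def to_pair(record):
    # Train on the random scramble; the other scrambled_text_* fields
    # can be substituted to vary the input ordering.
    return {
        "prompt": PROMPT.format(words=record["scrambled_text_random"]),
        "completion": record["original_text"],
    }

pairs = ds.map(to_pair, remove_columns=ds.column_names)
```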
It can also be used as a **benchmark** to evaluate the zero-shot reasoning and reconstruction capabilities of existing language models.
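For benchmarking, one simple automatic check is whether a model's output uses exactly the supplied bag of words. A sketch using a word multiset comparison (the tokenizer here is an assumption and should mirror the one used to build the corpus):

```python
import re
from collections import Counter

def bag_of_words(text: str) -> Counter:
    # Assumed tokenizer: lowercase word tokens, punctuation discarded.
    return Counter(re.findall(r"[\w’']+", text.lower()))

def uses_exact_bag(reconstruction: str, scrambled: str) -> bool:
    # The scrambled fields are comma-separated, so split on ", ".
    given = Counter(scrambled.split(", "))
    return bag_of_words(reconstruction) == given

# A faithful reconstruction must reuse every given word exactly once.
print(uses_exact_bag("the cat sat", "cat, sat, the"))  # True
```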
## Limitations and Bias

* **Source Bias:** As the dataset is derived entirely from English Wikipedia, it inherits any biases present in the source material, including its writing style, topical coverage, and cultural perspective.
* **Lack of Punctuation:** The scrambled inputs do not contain punctuation, forcing the model to infer it. This is an intended part of the challenge but may not be suitable for all use cases.
* **Monolingual:** This version of the dataset is exclusively in English.
## Citation

If you use this dataset in your research, please consider citing the original Wikipedia source and this project.

```
@misc{syrec_dataset_2025,
  author    = {ambrosfitz},
  title     = {SyReC: The Syntactic Reconstruction Corpus},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/ambrosfitz/SyReC}
}
```