Commit e95af4a
Parent(s): 6851a63

Update MD

- .gitignore +2 -0
- README.md +13 -13
.gitignore CHANGED

@@ -2,3 +2,5 @@
 *.jsonl
 .claude
 __pycache__/
+*.json
+*.txt
README.md CHANGED

@@ -8,9 +8,7 @@ A normalized Python dataset for training small language models on code logic wit
 
 ## Why
 
-Small models already carry semantic understanding of concepts like iteration, conditions, data flow, and function composition from pretraining on natural language. Raw code forces the model to bridge that understanding through unfamiliar syntax — brackets, colons, indentation rules, and language-specific idioms it may have seen rarely. TinyDSL closes that gap by expressing the same logic in a form that maps directly onto the model's existing semantic representations, letting it reason about what code *does* rather than spending capacity parsing what it *looks like*.
+Small language models trained on natural language corpora develop latent representations of logical constructs -- iteration, conditionals, data flow, function composition -- yet struggle to apply this reasoning to source code, where syntactic overhead (delimiters, indentation conventions, language-specific idioms) occupies a disproportionate share of the token budget, requires a vocabulary of code-specific tokens rarely encountered during pretraining, and introduces a surface-form distribution shift relative to the model's prior knowledge. NPset addresses this by normalizing Python source through an AST-based converter that strips syntactic noise while preserving the full logical structure of each program, producing a pseudocode representation composed entirely of natural language tokens that aligns more directly with the semantic representations already present in small models, allowing them to reason about what code *does* rather than expending capacity learning what it *looks like*.
 
 ## Format
 
@@ -18,18 +16,20 @@ Parquet, shuffled. Each row:
 
 | Field | Type | Description |
 |---|---|---|
-| `code` | string |
+| `code` | string | Normalized pseudocode |
+| `original_code` | string | Original Python source |
 | `original_language` | string | Always `Python` |
 | `source` | string | Origin dataset identifier |
 
 ## Sources
 
-| Source | Dataset |
-|---|---|---|
-| `nomic_cornstack_python_v1` | nomic-ai/cornstack-python-v1 |
-| `zaydzuhri_stack_edu_python` | zaydzuhri/stack-edu-python
-| `jtatman_500k` | jtatman/python-code-dataset-500k | |
-| `iamtarun_python_18k_alpaca` | iamtarun/python_code_instructions_18k_alpaca | |
-| `flytech_python_25k` | flytech/python-codes-25k | |
-| `dbands_pythonMath` | dbands/pythonMath | |
-| `greatdarklord_python_dataset` | greatdarklord/python_dataset | |
+| Source | Dataset | Rows |
+|---|---|---:|
+| `nomic_cornstack_python_v1` | nomic-ai/cornstack-python-v1 | 3,498,845 |
+| `zaydzuhri_stack_edu_python` | zaydzuhri/stack-edu-python (`license_type=no_license`) | 3,543,752 |
+| `jtatman_500k` | jtatman/python-code-dataset-500k | 32,590 |
+| `iamtarun_python_18k_alpaca` | iamtarun/python_code_instructions_18k_alpaca | 17,496 |
+| `flytech_python_25k` | flytech/python-codes-25k | 42,968 |
+| `dbands_pythonMath` | dbands/pythonMath | 5,726 |
+| `greatdarklord_python_dataset` | greatdarklord/python_dataset | 18,452 |
+| | **Total** | **7,159,829** |
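The new README text describes an AST-based converter that rewrites Python into natural-language pseudocode, but the converter itself is not part of this commit. As a rough illustration of the idea only, the sketch below uses Python's stdlib `ast` module to render a tiny subset of the language; the function name `to_pseudocode` and the exact output wording are invented here and are not NPset's actual format.

```python
import ast


def to_pseudocode(source: str) -> str:
    """Render a small subset of Python as natural-language pseudocode.

    Illustrative sketch only -- handles a few common node types and
    falls back to ast.unparse for anything else.
    """
    lines = []

    def emit(node, depth=0):
        pad = "  " * depth
        if isinstance(node, ast.FunctionDef):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"{pad}define function {node.name} taking {args}")
            for child in node.body:
                emit(child, depth + 1)
        elif isinstance(node, ast.For):
            lines.append(
                f"{pad}for each {ast.unparse(node.target)} "
                f"in {ast.unparse(node.iter)}"
            )
            for child in node.body:
                emit(child, depth + 1)
        elif isinstance(node, ast.If):
            lines.append(f"{pad}if {ast.unparse(node.test)}")
            for child in node.body:
                emit(child, depth + 1)
        elif isinstance(node, ast.Return):
            value = ast.unparse(node.value) if node.value else "nothing"
            lines.append(f"{pad}return {value}")
        else:
            # Fall back to the plain statement text for unhandled nodes.
            lines.append(f"{pad}{ast.unparse(node)}")

    for node in ast.parse(source).body:
        emit(node)
    return "\n".join(lines)


example = """
def total(xs):
    s = 0
    for x in xs:
        s = s + x
    return s
"""
print(to_pseudocode(example))
```

Running this prints an indented, bracket-free rendering ("define function total taking xs", "for each x in xs", and so on), which conveys the commit's point: the logical structure survives while the Python-specific surface syntax is removed.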