madiedgar committed on
Commit 25d314a · 1 Parent(s): 43b1e15

Update dataset card: add configs, fix languages, improve documentation (#1)

- Update dataset card: add configs, fix languages, improve documentation (3cced8a21dad75c64525eb5d40290aa0769a10aa)

Files changed (1): README.md +117 -93
README.md CHANGED
@@ -1,128 +1,146 @@
  ---
  license: apache-2.0
  task_categories:
- - text-generation
- - text2text-generation
- language:
- - en
- - ur
- - am
- - zh
  tags:
- - code
- - multilingual
- - legesher
- - transpilation
- - tiny-aya-expedition
- - language-decoded
  pretty_name: Language Decoded Data
  size_categories:
- - 10K<n<100K
  dataset_info:
-   config_name: condition-1-en
    features:
-   - name: code
-     dtype: string
-   - name: code_en
-     dtype: string
-   - name: language
-     dtype: string
-   - name: file_path
-     dtype: string
-   - name: license
-     dtype: string
-   - name: token_count
-     dtype: int64
-   splits:
-   - name: train
-     num_bytes: 516073703
-     num_examples: 49500
-   - name: validation
-     num_bytes: 57341522
-     num_examples: 5500
-   download_size: 221522346
-   dataset_size: 573415225
- configs:
- - config_name: condition-1-en
-   data_files:
-   - split: train
-     path: data/condition-1-en/train-*
-   - split: validation
-     path: data/condition-1-en/validation-*
  ---

  # Language Decoded | Multilingual Code Dataset

- Multilingual Python code datasets for the **Language Decoded** project (part of Cohere's Tiny Aya Expedition), investigating whether code's reasoning benefit for language models is **language-dependent** or **structure-dependent**.

  ## Research Question

  > Does fine-tuning on non-English code (Python with translated keywords) improve multilingual reasoning as much as English code does?

- Prior work ([Aryabumi et al., 2024](https://arxiv.org/abs/2408.10914)) showed English code improves English reasoning by 8.2%, but never tested non-English code. This dataset enables that experiment.

- ## Dataset Structure

- This repo contains multiple experimental conditions as subdirectories:

- | Subdirectory | Condition | Description |
- |---|---|---|
- | `source-python/` | Source | Filtered Python files from The Stack (shared base) |
- | `baseline/` | Condition 1 | No code augmentation (control) |
- | `english-code/` | Condition 2 | Original English-keyword Python code |
- | `multilingual-code-ur/` | Condition 3a | Python transpiled to Urdu keywords via Legesher |
- | `multilingual-code-am/` | Condition 3b | Python transpiled to Amharic keywords via Legesher |
- | `multilingual-code-zh/` | Condition 3c | Python transpiled to Chinese keywords via Legesher |
- | `multilingual-text/` | Condition 4 | Non-code multilingual text (control) |

- ## Usage

- ```python
- from datasets import load_dataset
-
- # Load a specific condition
- ds = load_dataset("Legesher/language-decoded-data", data_dir="multilingual-code-ur")
- ```

- ## Transpilation

- Code translation is performed using [Legesher](https://github.com/Legesher/legesher), which translates Python reserved words (keywords, builtins, exceptions) into target languages while preserving code structure and semantics.

- Example (English → Chinese):

- ```python
- # English
- for item in range(10):
-     if item > 5:
-         print(item)
-
- # Chinese / 中文 (via Legesher)
- 循环 元素 范围(10):
-     如果 元素 > 5:
-         打印(元素)
- ```

- ## Source Data

- - **Base**: [The Stack](https://huggingface.co/datasets/bigcode/the-stack-dedup) — permissively licensed Python subset
- - **Filtering**: Quality-filtered to 50K–100K files
- - **Transpilation tool**: [Legesher v0.6.0+](https://github.com/Legesher/legesher)

- ## Evaluation Benchmarks

- Models fine-tuned on these conditions are evaluated on:

- - **XNLI** — Cross-lingual natural language inference (15 languages)
- - **XStoryCloze** — Story completion (11 languages)
- - **TyDi QA** — Question answering (11 languages)
- - **MMLU** — Multilingual knowledge

- ## Related Resources

- - **Models**: [Legesher/language-decoded-lora](https://huggingface.co/Legesher/language-decoded-lora) — LoRA adapters trained on these conditions
- - **Community code**: [Legesher/language-decoded-community](https://huggingface.co/datasets/Legesher/language-decoded-community) — Human-written native-language code
- - **Experiments**: [Legesher/language-decoded-experiments](https://huggingface.co/datasets/Legesher/language-decoded-experiments) — Training logs and eval results
- - **Paper**: Coming soon

  ## Citation

@@ -132,10 +150,16 @@ Models fine-tuned on these conditions are evaluated on:
  author={Madison Edgar and Saad Bazaz and Rafay Mustafa and Sarah Jawaid and Rashik Shahjahan and Khojasteh Mirza and Sohaib Bazaz},
  year={2026},
  publisher={Hugging Face},
- url={https://huggingface.co/datasets/Legesher/language-decoded-data}
  }
  ```

  ## License

- Apache 2.0
 
  ---
+ language:
+ - en
+ - zh
+ - es
+ - ur
  license: apache-2.0
  task_categories:
+ - text-generation
  tags:
+ - code
+ - multilingual
+ - legesher
+ - transpilation
+ - tiny-aya-expedition
+ - language-decoded
  pretty_name: Language Decoded Data
  size_categories:
+ - 10K<n<100K
+ configs:
+ - config_name: condition-1-en
+   data_files:
+   - split: train
+     path: data/condition-1-en/train-*.parquet
+   - split: validation
+     path: data/condition-1-en/validation-*.parquet
+ - config_name: condition-2-ur
+   data_files:
+   - split: train
+     path: data/condition-2-ur/train-*.parquet
+   - split: validation
+     path: data/condition-2-ur/validation-*.parquet
+ - config_name: condition-2-zh
+   data_files:
+   - split: train
+     path: data/condition-2-zh/train-*.parquet
+   - split: validation
+     path: data/condition-2-zh/validation-*.parquet
+ - config_name: condition-2-es
+   data_files:
+   - split: train
+     path: data/condition-2-es/train-*.parquet
+   - split: validation
+     path: data/condition-2-es/validation-*.parquet
  dataset_info:
    features:
+   - name: code
+     dtype: string
+   - name: code_en
+     dtype: string
+   - name: language
+     dtype: string
+   - name: file_path
+     dtype: string
+   - name: license
+     dtype: string
+   - name: token_count
+     dtype: int64
  ---

  # Language Decoded | Multilingual Code Dataset

+ Multilingual Python code datasets for the **Language Decoded** project (part of [Cohere's Tiny Aya Expedition](https://aya.for.ai)), investigating whether code's reasoning benefit for language models is **language-dependent** or **structure-dependent**.

  ## Research Question

  > Does fine-tuning on non-English code (Python with translated keywords) improve multilingual reasoning as much as English code does?

+ Prior work ([Aryabumi et al., 2024 – "To Code or Not to Code"](https://arxiv.org/abs/2408.10914)) demonstrated that including English code in pre-training data improves downstream reasoning performance by approximately 8%. However, that study only tested English code. This dataset enables the natural follow-up: does the reasoning benefit come from the _structure_ of code, or from the _language_ of its keywords?

+ ## Dataset Description

+ This dataset provides filtered, quality-controlled Python source code in four configurations: the original English and three keyword-swapped variants (Chinese, Spanish, Urdu). The source data is drawn from [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup) (Python subset), filtered for quality using the following criteria:

+ - AST-valid Python only (must parse without errors)
+ - Permissive licenses only (MIT, Apache-2.0, BSD, etc.)
+ - 10–1000 lines of code
+ - Minimum 21 GitHub stars
+ - No autogenerated files
+ - SHA-256 deduplication
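The AST-validity, line-count, and deduplication checks above can be sketched in a few lines. This is an illustrative sketch of the stated criteria, not the project's actual pipeline code:

```python
import ast
import hashlib

def is_valid_python(source: str) -> bool:
    """Return True if the source parses as syntactically valid Python."""
    try:
        ast.parse(source)
        return True
    except (SyntaxError, ValueError):
        return False

def passes_line_filter(source: str, lo: int = 10, hi: int = 1000) -> bool:
    """Keep files whose line count falls in the stated 10-1000 range."""
    return lo <= len(source.splitlines()) <= hi

def content_hash(source: str) -> str:
    """SHA-256 digest used for exact-duplicate removal."""
    return hashlib.sha256(source.encode("utf-8")).hexdigest()
```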
 
 
 
+ Keyword-swapped variants are produced using [Legesher](https://github.com/legesher/legesher) v0.7.3, which translates Python reserved words (37 keywords, 72 builtins, 66 exceptions) into the target language while preserving code structure and semantics.
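To illustrate what keyword swapping involves, here is a minimal sketch built on Python's standard `tokenize` module, using a hypothetical five-entry English-to-Chinese mapping; Legesher's real implementation covers the full 37-keyword, 72-builtin, and 66-exception tables per language:

```python
import io
import tokenize

# Hypothetical mini-mapping for illustration only; Legesher's real tables
# are far larger and curated per target language.
EN_TO_ZH = {"for": "循环", "in": "在", "if": "如果", "range": "范围", "print": "打印"}

def swap_keywords(source: str, mapping: dict) -> str:
    """Swap NAME tokens through the mapping; strings and comments are untouched."""
    toks = []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        text = mapping.get(tok.string, tok.string) if tok.type == tokenize.NAME else tok.string
        toks.append((tok.type, text))
    # Two-tuples trigger untokenize's compat mode, which rebuilds valid source
    # (spacing may differ slightly from the input).
    return tokenize.untokenize(toks)
```

Working on tokens rather than raw text is what keeps string literals, comments, and identifiers that merely contain a keyword (e.g. `information`) intact.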

+ ## Available Configs

+ | Config | Condition | Language | Description |
+ |---|---|---|---|
+ | `condition-1-en` | Condition 1 (control) | English | Unmodified filtered Python from The Stack Dedup |
+ | `condition-2-ur` | Condition 2 | Urdu | Keyword-swapped Python – 37 keywords, 72 builtins, 66 exceptions translated via Legesher v0.7.3 |
+ | `condition-2-zh` | Condition 2 | Chinese | Keyword-swapped Python – same transpilation method |
+ | `condition-2-es` | Condition 2 | Spanish | Keyword-swapped Python – same transpilation method |

+ ## Schema

+ | Column | Type | Description |
+ |---|---|---|
+ | `code` | string | Python source code. For condition-2 configs, this is the transpiled (keyword-swapped) version; for condition-1, the original English source. |
+ | `code_en` | string | Original English Python source code. Identical to `code` for condition-1-en. |
+ | `language` | string | ISO 639-1 language code: `en`, `ur`, `zh`, or `es`. |
+ | `file_path` | string | Original file path in The Stack Dedup. |
+ | `license` | string | SPDX license identifier for the source file. |
+ | `token_count` | int64 | Token count computed using the CohereLabs/tiny-aya-base tokenizer. |
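For readers consuming rows programmatically, the schema above can be expressed as a small validation helper (an illustrative sketch, not part of the dataset's tooling):

```python
# Column types and language codes as documented in the schema table.
EXPECTED_TYPES = {
    "code": str,
    "code_en": str,
    "language": str,
    "file_path": str,
    "license": str,
    "token_count": int,
}
VALID_LANGUAGES = {"en", "ur", "zh", "es"}

def validate_row(row: dict) -> bool:
    """Check a single row against the documented columns, types, and languages."""
    if set(row) != set(EXPECTED_TYPES):
        return False
    if not all(isinstance(row[k], t) for k, t in EXPECTED_TYPES.items()):
        return False
    return row["language"] in VALID_LANGUAGES
```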

+ ## Experimental Conditions

+ The Language Decoded experiment uses a ladder of six conditions to isolate the mechanism behind code's reasoning benefit. This dataset currently provides data for conditions 1 and 2:
+
+ | Condition | Name | Purpose |
+ |---|---|---|
+ | Baseline | No fine-tuning | Establishes the performance floor |
+ | Condition 1 | English code | Tests whether code fine-tuning helps at all (replicates Aryabumi et al.) |
+ | Condition 2 | Keyword-swapped code | Tests whether the _language_ of keywords matters for the reasoning benefit |
+ | Conditions 3–6 | (planned) | Additional controls not yet included in this dataset |

+ ## Usage

+ ```python
+ from datasets import load_dataset
+
+ # Load English code (control)
+ ds = load_dataset("legesher/language-decoded-data", "condition-1-en")
+
+ # Load a keyword-swapped variant
+ ds = load_dataset("legesher/language-decoded-data", "condition-2-ur")
+ ds = load_dataset("legesher/language-decoded-data", "condition-2-zh")
+ ds = load_dataset("legesher/language-decoded-data", "condition-2-es")
+
+ # Access splits
+ train = ds["train"]
+ val = ds["validation"]
+ ```

+ ## Technical Details

+ | Parameter | Value |
+ |---|---|
+ | Source dataset | [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup) (Python subset) |
+ | Transpilation tool | [Legesher](https://github.com/legesher/legesher) v0.7.3 (legesher-core, legesher-i18n) |
+ | Tokenizer | CohereLabs/tiny-aya-base |
+ | Base model | [CohereLabs/tiny-aya-base](https://huggingface.co/CohereLabs/tiny-aya-base) (3.35B params) |
+ | Train/validation split | 90% / 10% (seed 42) |
+ | File format | Parquet (snappy compression) |
+ | Filtering criteria | AST-valid, permissive licenses, 10–1000 lines, min 21 GitHub stars, no autogenerated files, SHA-256 deduplication |
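The seeded 90/10 split in the table can be mimicked with a short sketch (illustrative only; the project's actual split script is not shown here):

```python
import random

def train_val_split(items, val_fraction=0.10, seed=42):
    """Deterministic shuffle-split mirroring the card's 90/10, seed-42 parameters."""
    rng = random.Random(seed)
    indices = list(range(len(items)))
    rng.shuffle(indices)
    n_val = int(len(items) * val_fraction)
    val_set = set(indices[:n_val])
    train = [item for i, item in enumerate(items) if i not in val_set]
    val = [item for i, item in enumerate(items) if i in val_set]
    return train, val
```

Fixing the seed makes the split reproducible: running it twice over the same file list yields identical train and validation sets.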
  ## Citation

  author={Madison Edgar and Saad Bazaz and Rafay Mustafa and Sarah Jawaid and Rashik Shahjahan and Khojasteh Mirza and Sohaib Bazaz},
  year={2026},
  publisher={Hugging Face},
+ url={https://huggingface.co/datasets/legesher/language-decoded-data}
  }
  ```

+ ## Links
+
+ - [Legesher on GitHub](https://github.com/legesher/legesher)
+ - [Tiny Aya Expedition](https://aya.for.ai)
+ - [bigcode/the-stack-dedup](https://huggingface.co/datasets/bigcode/the-stack-dedup)
+
  ## License

+ Apache 2.0