kaori02 committed on
Commit b3db6ef · verified · 1 Parent(s): 2cc6ce7

Add comprehensive ARM Compiler Optimization via RL wiki

Files changed (1): README.md ADDED (+1174 -0)
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 

# RL-Based ARM Compiler Optimization — Comprehensive Research Wiki

> **Goal**: Train an LLM with reinforcement learning (PPO/GRPO) to generate optimized AArch64/ARM assembly code that outperforms `gcc -O3`, using compiler feedback (correctness + speedup) as the reward signal.

> **Last updated**: April 2026 | **Primary recipe**: SuperCoder (arxiv:2505.11480) adapted for ARM

---

## Table of Contents

1. [Executive Summary](#1-executive-summary)
2. [Landscape & Key Papers](#2-landscape--key-papers)
3. [Recipe 1: SuperCoder — RL Assembly Superoptimization (SOTA)](#3-recipe-1-supercoder--rl-assembly-superoptimization-sota)
4. [Recipe 2: Meta LLM Compiler — SFT on LLVM IR](#4-recipe-2-meta-llm-compiler--sft-on-llvm-ir)
5. [Recipe 3: Compiler Feedback — Iterative Refinement](#5-recipe-3-compiler-feedback--iterative-refinement)
6. [Recipe 4: CUDA-L1 — Contrastive RL (3-Stage Pipeline)](#6-recipe-4-cuda-l1--contrastive-rl-3-stage-pipeline)
7. [Recipe 5: StepCoder — Fine-Grained RL for Code](#7-recipe-5-stepcoder--fine-grained-rl-for-code)
8. [ARM Adaptation Guide](#8-arm-adaptation-guide)
9. [Datasets](#9-datasets)
10. [Model Selection](#10-model-selection)
11. [Reward Function Design](#11-reward-function-design)
12. [Reward Hacking & Mitigations](#12-reward-hacking--mitigations)
13. [Training Infrastructure: TRL GRPO Implementation](#13-training-infrastructure-trl-grpo-implementation)
14. [Full Training Script](#14-full-training-script)
15. [Program Transformation Taxonomy](#15-program-transformation-taxonomy)
16. [Results, Benchmarks & Ablations](#16-results-benchmarks--ablations)
17. [Citation Graph & Future Directions](#17-citation-graph--future-directions)
18. [Reference Links](#18-reference-links)

---

## 1. Executive Summary

**The problem**: Modern compilers like `gcc -O3` apply fixed heuristics. LLMs can learn program-specific optimizations that compilers miss — loop restructuring, better instruction selection, algorithmic simplification — achieving a **1.46× average speedup over gcc -O3** on x86 and potentially similar gains on ARM.

**The approach**: Use GRPO (Group Relative Policy Optimization) to train `Qwen2.5-Coder-7B-Instruct` with a reward function that:
1. Compiles the generated assembly → reward = 0 if it fails
2. Runs all test cases → reward = 0 if any fail
3. Measures speedup vs the baseline → reward = speedup ratio (continuous)

**Key insight from the literature**: RL beats SFT for this task because superoptimization is open-ended — there is no single "correct" optimized assembly. RL directly optimizes the metric we care about (speedup) rather than imitating examples.

**No ARM-specific work exists yet** — all published results are on x86-64 or CUDA. This is a greenfield opportunity.

---

## 2. Landscape & Key Papers

### Paper Dependency Graph

```
                     MLGO (Google, 2021)
              ML replaces compiler heuristics
                          │
             ┌────────────┼────────────┐
             ▼            ▼            ▼
    Meta LLM Compiler  ProGraML    ML Cost Model
      (Meta, 2023)      (2020)    for MLIR (2023)
    SFT from scratch  GNN for IR
      on LLVM IR
             │
             ▼
    Compiler Feedback        StepCoder
      (Meta, 2024)             (2024)
   Iterative refinement     FGO masking
   with oracle feedback     for code RL
             │                   │
             └─────────┬─────────┘
                       ▼
             SuperCoder (2025)  ◄── CURRENT SOTA
            PPO/GRPO on assembly
            with compiler reward
                       │
            ┌──────────┼──────────┐
            ▼          ▼          ▼
        CUDA-L1    LLM-VeriOpt      Astra
      (ICLR 2026)     (2026)        (2025)
      Contrastive     Formal      Multi-agent
      RL for CUDA  verification  GPU kernel opt
```

### Papers Ranked by Relevance to ARM RL Optimizer

| Rank | Paper | Year | Key Contribution | Result |
|------|-------|------|-----------------|--------|
| 🥇 | [SuperCoder](https://arxiv.org/abs/2505.11480) | 2025 | PPO/GRPO on assembly with compiler reward | 95% correct, 1.46× speedup over gcc -O3 |
| 🥈 | [CUDA-L1](https://arxiv.org/abs/2507.14111) | 2025 | 3-stage SFT→Self-supervised→Contrastive RL | 3.12× avg speedup on KernelBench |
| 🥉 | [Meta LLM Compiler](https://arxiv.org/abs/2309.07062) | 2023 | SFT from scratch on LLVM IR for pass ordering | 3.0% instruction reduction over -Oz |
| 4 | [Compiler Feedback](https://arxiv.org/abs/2403.14714) | 2024 | Iterative refinement with compiler oracle | +0.53% over base; sampling > feedback |
| 5 | [StepCoder](https://arxiv.org/abs/2402.01391) | 2024 | Fine-Grained Optimization masking for code RL | +8% pass@1 on APPS+ |
| 6 | [MLGO](https://arxiv.org/abs/2101.04808) | 2021 | ML in LLVM framework (Google) | Foundation work |

---

## 3. Recipe 1: SuperCoder — RL Assembly Superoptimization (SOTA)

> **Paper**: "SuperCoder: Assembly Program Superoptimization with Large Language Models"
> **ArXiv**: [2505.11480](https://arxiv.org/abs/2505.11480) | May 2025

### 3.1 Task Formulation

Framed as a **contextual multi-armed bandit** (not a full MDP):
- **Context** `s ∈ S`: source program C, baseline assembly P, test cases T
- **Action** `a ∈ A`: generate candidate optimized assembly P̃
- **Reward** `r(s,a)`: correctness-gated speedup (see §11)
- **Policy** `π: S → Δ(A)`: the LLM maps context to a distribution over assemblies

Single-turn generation — no rollout history, no multi-step environment. The objective is sketched formally below.

### 3.2 Training Configuration

| Component | Setting | Source |
|-----------|---------|--------|
| Base model | `Qwen/Qwen2.5-Coder-7B-Instruct` | Table A1 |
| Actor learning rate | `1e-6` | Appendix A.2 |
| Critic learning rate | `1e-5` (PPO only) | Appendix A.2 |
| Batch size | 16 | Appendix A.2 |
| Epochs | 1 | Appendix A.2 |
| Max prompt length | 2000 tokens | Appendix A.2 |
| Max response length | 2000 tokens | Appendix A.2 |
| Gradient checkpointing | Enabled (actor + critic) | Appendix A.2 |
| Rollout temperature | 0.5 | Appendix A.2 |
| Hardware | 4× A100 GPUs | Appendix A.2 |
| RL framework | [verl](https://github.com/volcengine/verl) | §3.3 |

### 3.3 Dataset Construction

**Source**: IBM CodeNet — 8M+ C/C++ competitive programming submissions.

**Curation strategy** (critical for performance):
1. Sample programs with the **highest relative speedup from -O0 to -O3** — this selects computationally rich programs where further optimization is possible (a sketch of this filter follows the table)
2. Compile each with `gcc -O3 -S` to get baseline x86-64 assembly
3. Use test inputs from [Li et al., 2022], but **regenerate outputs** by executing the original program (CodeNet outputs are unreliable)
4. Final dataset: **7,872 training programs, 200 evaluation programs**

| Split | Programs | Avg Tests/Program | Avg LOC (C) | Avg LOC (Assembly) |
|-------|----------|-------------------|-------------|-------------------|
| Train | 7,872 | 8.86 | 22.3 | 130.3 |
| Eval | 200 | 8.92 | 21.9 | 133.3 |
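
A minimal sketch of the curation filter (item 1 above) for the ARM setting, under the assumption that the wall-clock ratio of -O0 to -O3 under QEMU is an acceptable ranking proxy; `o0_to_o3_speedup` is a hypothetical helper, not code from the paper:

```python
import os
import subprocess
import tempfile
import time

def o0_to_o3_speedup(c_code: str, test_input: str) -> float:
    """Curation signal: how much -O3 helps over -O0. High ratios mark
    computationally rich programs where further optimization is plausible."""
    times = {}
    with tempfile.TemporaryDirectory() as tmp:
        for opt in ("-O0", "-O3"):
            bin_path = os.path.join(tmp, f"prog{opt}")
            # Cross-compile at the given optimization level
            subprocess.run(
                ["aarch64-linux-gnu-gcc", opt, "-static", "-o", bin_path, "-x", "c", "-"],
                input=c_code, text=True, capture_output=True, check=True,
            )
            # Time one run under QEMU user-mode emulation
            start = time.perf_counter()
            subprocess.run(["qemu-aarch64-static", bin_path],
                           input=test_input, text=True, capture_output=True)
            times[opt] = time.perf_counter() - start
    return times["-O0"] / times["-O3"]

# Keep the top programs by relative O0 -> O3 speedup, e.g.:
# ranked = sorted(progs, key=lambda p: o0_to_o3_speedup(p["source"], p["tests"][0][0]),
#                 reverse=True)[:7872]
```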

### 3.4 Prompt Template

```
Given the following C code and assembly code, your task is to generate
highly optimized x86-64 assembly code.

C Code: <C code here>
Assembly Code: <baseline assembly code here produced by gcc -O3>

Only output the optimized assembly code. Do not include any other text.
Do not write any comments in the assembly code.
Wrap the assembly code in assembly tags.

Optimized Assembly Code:
```

> ⚠️ **Critical finding** (Appendix A.5): Removing the baseline assembly from the prompt causes a **catastrophic drop** — correctness falls from 95% to near 0%. The model needs the gcc -O3 output as a starting point.

### 3.5 Results

| Model | Compile Pass | Test Pass | Avg Speedup |
|-------|-------------|-----------|-------------|
| Qwen2.5-Coder-7B (base) | 77.9% | 61.4% | 1.10× |
| SuperCoder (PPO) | 96.0% | 95.0% | **1.46×** |
| SuperCoder (GRPO) | 95.0% | 94.7% | **1.44×** |
| SuperCoder (SFT only) | 95.5% | 92.5% | 1.39× |

**PPO ≈ GRPO** — nearly identical results, but GRPO is simpler (no critic/value head needed).

**Best-of-N + RL**:
- Base best-of-8: ~1.39× (≈ RL best-of-1)
- SuperCoder best-of-8: **1.93×**

### 3.6 Model Evaluation (23 Models Tested)

| Model | Test Pass | Avg Speedup | Notes |
|-------|-----------|-------------|-------|
| DeepSeek-R1 | 0.0% | 1.00× | Generates verbose analysis, no actual code |
| GPT-4o | 5.0% | 1.02× | Compiles (81%) but sacrifices correctness for optimization |
| Claude-opus-4 | 51.5% | 1.43× | Best zero-shot baseline |
| Qwen2.5-Coder-7B | 61.4% | 1.10× | Best base model for RL starting point |
| llm-compiler-13b | 59.5% | 1.34× | Pretrained on assembly/IR |
| **SuperCoder (PPO)** | **95.0%** | **1.46×** | **SOTA** |

**Key failure modes**:
- Reasoning models (R1, o1) completely fail — they analyze instead of generating code
- GPT-4o compiles but breaks low-level conventions (stack canaries, .cfi directives, calling conventions)
- The best-performing models have been pretrained on assembly and code (Qwen-Coder, llm-compiler)

---

## 4. Recipe 2: Meta LLM Compiler — SFT on LLVM IR

> **Paper**: "Large Language Models for Compiler Optimization"
> **ArXiv**: [2309.07062](https://arxiv.org/abs/2309.07062) | Sep 2023 | Meta AI

### 4.1 Approach

Train a 7B transformer **from scratch** (Llama 2 architecture) on LLVM IR to predict:
1. **Optimization pass list** (primary task)
2. **Instruction counts before/after** (auxiliary task — critical for performance)
3. **Full optimized IR** (auxiliary task)

### 4.2 Training Details

| Component | Setting |
|-----------|---------|
| Architecture | Llama 2 7B (32 heads, 4096 hidden, 32 layers) |
| Training | From scratch (random init) |
| Dataset | 1,000,000 deduplicated LLVM-IR functions, 373M tokens |
| Autotuner | 37,424 compilations per function avg; 9,016 CPU days total |
| Optimizer | AdamW (β₁=0.9, β₂=0.95) |
| LR schedule | Cosine, 1000 warmup, peak=1e-5, final=1e-6 |
| Batch size | 256 (524,288 tokens/batch) |
| Steps | 30,000 (7.7 epochs, 15.7B total tokens) |
| Sequence length | 2048 tokens |
| Hardware | 64× V100 GPUs, 620 GPU-days |

### 4.3 Key Results

- **3.0% instruction count reduction over -Oz** (without invoking the compiler at inference)
- **91% compilable** generated code
- **70%** perfect emulation of compiler output
- Achieves **60% of autotuner gains** at zero additional compilation cost

### 4.4 Key Insight: Auxiliary Tasks Matter

Training with instruction count prediction + code generation alongside pass list prediction **dramatically improves** optimization quality. The instruction count acts as a self-consistency check — the model learns to verify its own predictions.
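
A hypothetical rendering of one multi-task training record; the field names below are illustrative, not taken from the paper:

```python
sample = {
    "input_ir": "...unoptimized LLVM-IR function...",
    # Primary target: the pass list the autotuner found for this function
    "pass_list": "-mem2reg -simplifycfg -instcombine -gvn",
    # Auxiliary targets: predicting counts acts as a self-consistency check
    "insn_count_before": 142,
    "insn_count_after": 98,
    # Auxiliary target: the IR that results from applying the pass list
    "optimized_ir": "...optimized LLVM-IR function...",
}
```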

---

## 5. Recipe 3: Compiler Feedback — Iterative Refinement

> **Paper**: "Compiler Generated Feedback for Large Language Models"
> **ArXiv**: [2403.14714](https://arxiv.org/abs/2403.14714) | Mar 2024 | Meta AI

### 5.1 Approach

Start from the best checkpoint of Recipe 2. Add compiler feedback to the prompt:
- Predicted instruction counts
- Whether the code compiled correctly
- Whether the IR is valid

The model outputs "I am sure!" if confident, else "Let me try again."

### 5.2 Training Details

| Component | Setting |
|-----------|---------|
| Base model | Best checkpoint from Meta LLM Compiler (7B) |
| Dataset | 1M training + 100K test LLVM-IR functions |
| Optimizer | AdamW (β₁=0.9, β₂=0.95) |
| LR | Cosine, 1000 warmup, peak=1e-5 |
| Batch size | 256 (786K–1M tokens/batch) |
| Steps | 20,000 (5.12 epochs, 16–21B tokens) |
| Hardware | 64× A100s, 60 GPU-days |

### 5.3 Key Finding

> **Sampling beats iterative feedback**: At n≥10 samples, the base model without feedback training outperforms the feedback-trained model. This strongly motivates **Best-of-N + RL** (the SuperCoder approach) over iterative SFT feedback.

---

## 6. Recipe 4: CUDA-L1 — Contrastive RL (3-Stage Pipeline)

> **Paper**: "CUDA-L1: Improving CUDA Optimization via Contrastive Reinforcement Learning"
> **ArXiv**: [2507.14111](https://arxiv.org/abs/2507.14111) | Jul 2025 | ICLR 2026

### 6.1 The 3-Stage Pipeline

This is the most important contribution for cases where the **base model has a low initial success rate**.

```
Stage 1: SFT via Data Augmentation
├── Generate CUDA code from 6 LLMs (GPT-4o, o1, DeepSeek-R1/V3, Llama-405B, Claude-3.7)
├── Filter for correct + fast implementations
├── Fine-tune DeepSeek-V3-671B on successful samples
└── Result: Model can generate correct code at a reasonable rate

Stage 2: Self-Supervised Learning
├── Sample from Stage 1 model
├── Keep only correct samples (self-filtering)
├── Retrain on filtered dataset
└── Result: Higher base success rate for RL

Stage 3: Contrastive Reinforcement Learning
├── Present model with multiple code variants + their speedup scores
├── Model analyzes WHY certain implementations are faster
├── Generates improved solution based on comparative analysis
├── Score serves dual purpose: (1) gradient update, (2) future prompt enrichment
└── Result: 3.12× average speedup on KernelBench
```

### 6.2 Why Standard GRPO/PPO Failed for CUDA-L1

> "Standard RL algorithms compute a scalar reward for each generated CUDA code sample... the reward signal is used exclusively for parameter updates and is never provided as input to the LLM. Consequently, the LLM cannot directly reason about performance trade-offs during code generation."

Their solution: **Contrastive RL** — embed performance feedback within the input prompt. The model sees previous code + scores and learns comparative analysis.
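
A minimal sketch of a Stage 3 contrastive prompt builder; the wording and structure are illustrative assumptions, since CUDA-L1's exact template is not reproduced in this wiki:

```python
def build_contrastive_prompt(problem: str, variants: list[tuple[str, float]]) -> str:
    """Embed prior candidates and their measured speedups in the prompt so the
    model can reason about WHY the faster variants win (CUDA-L1, Stage 3)."""
    parts = [f"Problem:\n{problem}\n", "Previous implementations and measured speedups:"]
    # Show variants fastest-first so the comparison is easy to attend to
    for i, (code, speedup) in enumerate(sorted(variants, key=lambda v: -v[1])):
        parts.append(f"\n### Variant {i} (speedup {speedup:.2f}x)\n{code}")
    parts.append(
        "\nAnalyze why the faster variants outperform the slower ones, "
        "then write an improved implementation."
    )
    return "\n".join(parts)
```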

### 6.3 Results

| Configuration | Mean Speedup | Max | Median | Success Rate |
|--------------|-------------|-----|--------|-------------|
| Default | 3.12× | 120× | 1.42× | 249/250 |
| vs Torch Compile | 2.77× | — | — | — |
| vs CUDA Graph | 2.81× | — | — | — |

### 6.4 ARM Relevance

- Use the **3-stage pipeline** if the base model has <40% ARM assembly correctness
- Contrastive RL is useful when you need the model to reason about WHY optimizations work
- Stage 1 data augmentation from multiple LLMs is a powerful bootstrapping technique

---
## 7. Recipe 5: StepCoder — Fine-Grained RL for Code

> **Paper**: "StepCoder: Improve Code Generation with Reinforcement Learning from Compiler Feedback"
> **ArXiv**: [2402.01391](https://arxiv.org/abs/2402.01391) | Feb 2024

### 7.1 Key Innovation: Fine-Grained Optimization (FGO)

Standard PPO updates ALL tokens in the generated code equally. StepCoder's FGO **masks tokens not executed by the unit tests** — only code segments that are actually run contribute to the gradient update.

This is crucial for sparse compiler rewards, where most of the generated code might be boilerplate. The loss sketch after this paragraph shows the idea.
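
A sketch of the FGO loss in PyTorch-style Python, under the assumption that unit-test line coverage has already been mapped to a per-token `executed_mask`; the real StepCoder implementation differs in detail:

```python
import torch

def fgo_policy_loss(logprobs: torch.Tensor, advantages: torch.Tensor,
                    executed_mask: torch.Tensor) -> torch.Tensor:
    """Fine-Grained Optimization: zero out the contribution of tokens whose
    code was never executed by the unit tests, so only exercised code
    receives gradient from the (sparse) compiler reward.

    logprobs:      (batch, seq) log-probs of the sampled tokens
    advantages:    (batch,)     per-sample advantage estimates
    executed_mask: (batch, seq) 1.0 where the token's line was covered
    """
    per_token = -logprobs * advantages.unsqueeze(-1)       # policy-gradient term
    masked = per_token * executed_mask                     # FGO masking
    return masked.sum() / executed_mask.sum().clamp(min=1) # mean over covered tokens
```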

### 7.2 Reward Design

| Outcome | Reward |
|---------|--------|
| All unit tests pass | +1.0 |
| Test failure | -0.3 |
| Runtime error | -0.6 |
| Compile error | -1.0 |

This graduated penalty scheme (vs SuperCoder's binary 0/non-zero) helps the model distinguish failure modes.

---

## 8. ARM Adaptation Guide

### 8.1 The ARM Gap

**No published work targets ARM/AArch64.** All results are on x86-64 (SuperCoder, Meta) or CUDA (CUDA-L1). Porting the recipes therefore requires the adaptations below.

### 8.2 Minimal Changes from SuperCoder (x86 → ARM)

| Component | x86 (Original) | ARM (Adapted) |
|-----------|----------------|---------------|
| Compiler | `gcc -O3` | `aarch64-linux-gnu-gcc -O3` |
| Assembler | `as` | `aarch64-linux-gnu-as` |
| Linker | `ld` | `aarch64-linux-gnu-ld` |
| Execution | Native | `qemu-aarch64-static` (emulation) |
| Timing | `hyperfine` (wall-clock) | QEMU instruction counting or real ARM HW |
| ISA in prompt | "x86-64 assembly" | "AArch64 assembly" |
| Assembly flag | `-S` | `-S` (same) |

### 8.3 ARM-Specific Prompt Template

```
Given the following C code and assembly code, your task is to generate
highly optimized AArch64 assembly code.

C Code: {c_code}
Assembly Code: {arm_baseline_asm}

Only output the optimized assembly code. Do not include any other text.
Do not write any comments in the assembly code.
Wrap the assembly code in <assembly></assembly> tags.

Optimized Assembly Code:
```

### 8.4 Execution Environment

```bash
# Install ARM cross-compilation toolchain
sudo apt-get install gcc-aarch64-linux-gnu g++-aarch64-linux-gnu
sudo apt-get install qemu-user-static

# Cross-compile to ARM assembly
aarch64-linux-gnu-gcc -O3 -S -o output.s input.c

# Cross-compile to ARM binary
aarch64-linux-gnu-gcc -O3 -o output input.c

# Run ARM binary on x86 via QEMU
qemu-aarch64-static ./output < test_input.txt
```

### 8.5 Timing Considerations

| Method | Accuracy | Availability |
|--------|----------|-------------|
| QEMU instruction counting | Deterministic but not wall-clock accurate | Any x86 machine |
| QEMU `-plugin libinsn` | Counts executed instructions precisely | QEMU 6.0+ |
| Real ARM hardware | Ground truth | Requires ARM machine |
| `perf stat` on ARM | Cycle-accurate | Requires ARM + Linux perf |

**Recommendation**: Use QEMU instruction counting for training (deterministic, reproducible) and validate final results on real ARM hardware. A plugin-based counting sketch follows.
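
A sketch of plugin-based counting. It assumes QEMU was built with TCG plugins, that the contrib `libinsn.so` plugin is installed (the path below varies by distro), and that its summary line matches `insns: N`; treat all three as assumptions to verify locally:

```python
import re
import subprocess

def count_executed_insns(binary_path: str, test_input: str,
                         plugin: str = "/usr/lib/qemu/plugins/libinsn.so") -> int:
    """Count dynamically executed instructions with QEMU's TCG plugin API.
    Unlike `-d in_asm` (which logs each block once, at translation time),
    the insn plugin counts every executed instruction. Uses the dynamically
    linked `qemu-aarch64`, since the -static build may not load plugins."""
    result = subprocess.run(
        ["qemu-aarch64", "-plugin", plugin, "-d", "plugin", binary_path],
        input=test_input, capture_output=True, text=True, timeout=60,
    )
    match = re.search(r"insns:\s*(\d+)", result.stderr)  # plugin summary line
    if match is None:
        raise RuntimeError("no plugin summary found; check the plugin path")
    return int(match.group(1))
```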

### 8.6 ARM-Specific Optimization Opportunities

The model should learn to leverage:
- **NEON SIMD** instructions (128-bit vector operations)
- **SVE/SVE2** (Scalable Vector Extension — variable-length vectors)
- **LSE atomics** (Large System Extensions)
- **Conditional select** (`csel`, `csinc`) instead of branches
- **Fused multiply-add** (`fmadd`, `fmsub`)
- **Load/store pair** (`ldp`, `stp`) for memory throughput
- **Predicated operations** in SVE (eliminate branch mispredictions)

### 8.7 3-Stage Plan for ARM (if base model correctness < 40%)

Following CUDA-L1's insight:

```
Stage 1: SFT Warmup
├── Dataset: (C source → ARM gcc -O3 assembly) pairs
├── Task: Teach the model to generate valid ARM assembly
├── Expected: Model learns ARM syntax, calling conventions, directives
└── Duration: ~2-4 hours on A100

Stage 2: Filtered Self-Training
├── Sample N completions per prompt from Stage 1 model
├── Keep only compilable + correct samples
├── Retrain on filtered dataset (higher quality)
└── Expected: Correctness improves to >60%

Stage 3: GRPO with Compiler Reward
├── Apply SuperCoder's reward function
├── Binary correctness gate + continuous speedup
├── Expected: Correctness >90%, speedup >1.3×
└── Duration: ~4-8 hours on 4×A100
```

---

## 9. Datasets

### 9.1 Available Datasets

| Dataset | Source | Format | Size | ARM Compatible | Notes |
|---------|--------|--------|------|----------------|-------|
| **IBM CodeNet** | [GitHub](https://github.com/IBM/Project_CodeNet) | C/C++ source + test I/O | 8M+ submissions | ✅ Recompile with ARM GCC | Used by SuperCoder |
| **deepmind/code_contests** | [HF Hub](https://hf.co/datasets/deepmind/code_contests) | C/C++ solutions + tests | ~2GB train | ✅ Filter C, cross-compile | Has public/private/generated tests |
| **llvm-ml/ComPile** | [HF Hub](https://hf.co/datasets/llvm-ml/ComPile) | LLVM bitcode IR | 602GB (2.7TB source) | ✅ Retarget to AArch64 via `llc` | C/C++/Rust/Swift from Spack |
| APPS+ | [GitHub](https://github.com/ablustrund/apps_plus) | Python problems + tests | ~10K | ❌ Python only | Used by StepCoder |

### 9.2 Dataset Format for GRPO Training

The dataset must have a `prompt` column in conversational format. Extra columns are forwarded to reward functions as `**kwargs`.

```python
{
    "prompt": [
        {"role": "user", "content": "Given the following C code and assembly code..."}
    ],
    "c_code": "int main() { ... }",
    "baseline_asm": ".text\n.globl main\nmain:\n...",
    "test_inputs": ["3\n1 2 3", "5\n1 2 3 4 5"],
    "test_outputs": ["6", "15"],
    "baseline_time": 0.0042  # seconds
}
```

### 9.3 Dataset Construction Pipeline

```python
from datasets import Dataset
import os
import subprocess
import tempfile
import time

def build_arm_dataset(codenet_programs):
    """Build ARM optimization dataset from CodeNet C programs."""
    samples = []

    for prog in codenet_programs:
        c_code = prog["source"]
        test_cases = prog["test_cases"]  # [(input, output), ...]

        # Step 1: Cross-compile to ARM assembly with -O3
        result = subprocess.run(
            ["aarch64-linux-gnu-gcc", "-O3", "-S", "-o", "/dev/stdout", "-x", "c", "-"],
            input=c_code, capture_output=True, text=True
        )
        if result.returncode != 0:
            continue  # Skip programs that don't compile
        baseline_asm = result.stdout

        # Step 2: Compile to a static binary and measure baseline time via QEMU
        with tempfile.TemporaryDirectory() as tmpdir:
            bin_path = os.path.join(tmpdir, "baseline")
            result = subprocess.run(
                ["aarch64-linux-gnu-gcc", "-O3", "-static", "-o", bin_path, "-x", "c", "-"],
                input=c_code, capture_output=True, text=True
            )
            if result.returncode != 0:
                continue
            start = time.perf_counter()
            subprocess.run(
                ["qemu-aarch64-static", bin_path],
                input=test_cases[0][0], capture_output=True, text=True
            )
            baseline_time = time.perf_counter() - start

        # Step 3: Build prompt
        prompt_text = f"""Given the following C code and assembly code, your task is to generate highly optimized AArch64 assembly code.

C Code: {c_code}
Assembly Code: {baseline_asm}

Only output the optimized assembly code. Do not include any other text.
Do not write any comments in the assembly code.
Wrap the assembly code in <assembly></assembly> tags.

Optimized Assembly Code:"""

        samples.append({
            "prompt": [{"role": "user", "content": prompt_text}],
            "c_code": c_code,
            "baseline_asm": baseline_asm,
            "test_inputs": [tc[0] for tc in test_cases],
            "test_outputs": [tc[1] for tc in test_cases],
            "baseline_time": baseline_time,
        })

    return Dataset.from_list(samples)
```

### 9.4 deepmind/code_contests Schema

```
| Column | Type |
|--------|------|
| name | string |
| description | string |
| public_tests | Sequence: {input: list[str], output: list[str]} |
| private_tests | Sequence: {input: list[str], output: list[str]} |
| generated_tests | Sequence: {input: list[str], output: list[str]} |
| solutions | Sequence: {language: list[int], solution: list[str]} |
| difficulty | ClassLabel (29 classes) |
| source | ClassLabel (7 classes) |
```

Filter C-family solutions with `language == 2` (the `CPP` class in the code_contests language enum; plain C submissions are labeled `CPP` as well), then look for C-like syntax in the solution strings to isolate pure C.

---

## 10. Model Selection

### 10.1 Base Model Recommendation

**Primary**: `Qwen/Qwen2.5-Coder-7B-Instruct`
- 7.6B parameters, Qwen2 architecture
- Apache-2.0 license
- **Proven by SuperCoder**: 61.4% test pass rate (highest among 7B models)
- Strong code generation baseline for an RL starting point
- Available on the [HF Hub](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct)

### 10.2 Why Not Other Models?

| Model | Issue |
|-------|-------|
| DeepSeek-R1 | 0% compilation — generates analysis, not code |
| GPT-4o | 81% compile but 5% correct — breaks low-level conventions |
| Reasoning models (o1, R1) | Fundamentally fail — spend tokens reasoning instead of generating |
| llm-compiler-7b-ftd | Fine-tuned for disassembly, not optimization |
| llm-compiler-13b | Good (1.34× speedup) but not instruction-tuned; harder to RL fine-tune |

### 10.3 Model Sizing

| Model Size | Hardware Needed | VRAM (bf16) |
|-----------|----------------|-------------|
| 7B (Qwen2.5-Coder-7B) | 4× A100 80GB | ~14GB model + ~40GB for RL |
| 13B | 4× A100 80GB | ~26GB model + ~60GB for RL |
| 70B+ | 8× A100 or 4× H100 | Multi-node |

---

## 11. Reward Function Design

### 11.1 SuperCoder Reward (Recommended)

From Section 3.3 of arxiv:2505.11480:

```
r(s, a) = {
    0,             if pass(s,a) < 1   (any test fails → zero reward)
    speedup(s,a),  if pass(s,a) = 1   (all tests pass → continuous speedup)
}
```

Where:
- `pass(s,a) = (1/|T|) × Σᵢ 𝟙[P̃(xᵢ) = yᵢ]` — fraction of test cases passed
- `speedup(s,a) = t(P) / t(P̃)` — baseline time / optimized time

**Design principles**:
1. **Hard binary correctness gate**: No partial credit. Forces the model to learn correctness first.
2. **Continuous speedup reward**: Provides a gradient signal proportional to the actual optimization gain.
3. **No reward for partial correctness**: Passing 99% of tests still gets 0 reward.

### 11.2 StepCoder Graduated Penalty (Alternative)

```
+1.0 → all unit tests pass
-0.3 → test failure
-0.6 → runtime error
-1.0 → compile error
```

Distinguishes failure modes — the model gets a stronger negative signal for "worse" failures.

### 11.3 CUDA-L1 Reward Smoothing (Anti-Hacking)

```python
import numpy as np

k = 1.5                                  # clip bound
r_normalized = (r - mu) / sigma          # normalize by running mean/std
r_smooth = np.clip(r_normalized, -k, k)  # clip to [-1.5, 1.5]
```

Prevents the model from over-optimizing outlier high-reward solutions.

### 11.4 Implementation

```python
import os
import re
import subprocess
import tempfile
import time

def arm_compiler_reward(completions, c_code, baseline_asm,
                        test_inputs, test_outputs, baseline_time, **kwargs):
    """
    Compiler feedback reward for ARM assembly optimization.
    Follows SuperCoder §3.3: binary correctness gate + continuous speedup.
    """
    rewards = []

    for i, completion in enumerate(completions):
        # Extract assembly from completion
        content = completion[0]["content"] if isinstance(completion, list) else completion
        asm_match = re.search(r'<assembly>(.*?)</assembly>', content, re.DOTALL)
        if not asm_match:
            rewards.append(0.0)
            continue
        asm_code = asm_match.group(1).strip()

        with tempfile.TemporaryDirectory() as tmpdir:
            asm_path = os.path.join(tmpdir, "opt.s")
            bin_path = os.path.join(tmpdir, "opt")

            # Write assembly
            with open(asm_path, "w") as f:
                f.write(asm_code)

            # Step 1: Assemble and link
            result = subprocess.run(
                ["aarch64-linux-gnu-gcc", "-o", bin_path, asm_path, "-static", "-lm"],
                capture_output=True, text=True, timeout=30
            )
            if result.returncode != 0:
                rewards.append(0.0)  # Compile failure
                continue

            # Step 2: Run all tests via QEMU
            all_pass = True
            for test_in, expected_out in zip(test_inputs[i], test_outputs[i]):
                try:
                    run = subprocess.run(
                        ["qemu-aarch64-static", bin_path],
                        input=test_in, capture_output=True, text=True, timeout=10
                    )
                    if run.stdout.strip() != expected_out.strip():
                        all_pass = False
                        break
                except subprocess.TimeoutExpired:
                    all_pass = False
                    break

            if not all_pass:
                rewards.append(0.0)  # Test failure
                continue

            # Step 3: Measure speedup as baseline_time / optimized_time.
            # Wall-clock under QEMU is noisy; §8.5 recommends instruction
            # counting instead (see the full script in §14 for that variant).
            try:
                start = time.perf_counter()
                subprocess.run(
                    ["qemu-aarch64-static", bin_path],
                    input=test_inputs[i][0], capture_output=True, text=True, timeout=30
                )
                opt_time = time.perf_counter() - start
                speedup = baseline_time[i] / max(opt_time, 1e-9)
                rewards.append(max(speedup, 0.1))
            except Exception:
                rewards.append(1.0)  # Default: assume baseline-equivalent

    return rewards
```

---
+
688
+ ## 12. Reward Hacking & Mitigations
689
+
690
+ ### 12.1 Known Hacking Behaviors (from CUDA-L1 Β§3.1)
691
+
692
+ | Hack | Description | Prevalence |
693
+ |------|-------------|-----------|
694
+ | **Improper timing** | Create async streams; timing only measures main stream | 32.8% of outputs |
695
+ | **Lazy evaluation** | Return lazy tensor; actual compute happens at correctness check | Found in training |
696
+ | **Hyperparameter manipulation** | Reduce batch_size/dimensions in generated code | Found in training |
697
+ | **Result caching** | Cache outputs by input address; return cached results | Found in training |
698
+
699
+ ### 12.2 Mitigations
700
+
701
+ 1. **Reward checking model**: When reward jumps significantly, use an adversarial model (e.g., DeepSeek-R1) to check for exploitation. Catches >60% of hacking.
702
+ 2. **Hacking-case database**: Maintain a growing database of known hacking patterns. Use retrieval-augmented checking.
703
+ 3. **Reward smoothing**: `r_smooth = clip((r - ΞΌ)/Οƒ, -k, k)` with k=1.5
704
+ 4. **Robust evaluation**: Synchronize all execution before timing. Validate output is real tensor with allocated storage.
705
+
706
+ ### 12.3 ARM-Specific Hacking Risks
707
+
708
+ - **QEMU timing artifacts**: Model could learn to generate code that runs fast in QEMU but slow on real ARM hardware
709
+ - **Mitigation**: Use instruction counting (deterministic) rather than wall-clock timing in QEMU
710
+ - **NOP padding**: Model could pad with NOPs that don't affect correctness but confuse instruction counting
711
+ - **Mitigation**: Count only non-NOP instructions, or use basic-block counting
712
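
A hedged sketch of the NOP filter over QEMU's `-d in_asm` log; the log-line format assumed in the comment varies across QEMU versions, so verify it against your build:

```python
import subprocess

def count_non_nop_insns(binary_path: str, test_input: str) -> int:
    """Count translated instructions excluding NOPs, so padding cannot game
    the reward. Relies on `-d in_asm` disassembly output, which logs each
    block once at translation time (a proxy, not an exact dynamic count)."""
    result = subprocess.run(
        ["qemu-aarch64-static", "-d", "in_asm", binary_path],
        input=test_input, capture_output=True, text=True, timeout=30,
    )
    count = 0
    for line in result.stderr.splitlines():
        # Disassembly lines look roughly like: "0x400580:  d503201f  nop"
        if line.startswith("0x") and not line.rstrip().endswith("nop"):
            count += 1
    return count
```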

---

## 13. Training Infrastructure: TRL GRPO Implementation

### 13.1 Why GRPO over PPO

| Feature | PPO | GRPO |
|---------|-----|------|
| Value head/critic | Required | Not needed |
| Memory usage | ~2× model | ~1× model |
| Reward model | Optional (can use custom) | Custom function native |
| Performance | 1.46× speedup | 1.44× speedup |
| Implementation complexity | Higher | Lower |
| TRL support | Experimental (`trl.experimental.ppo`) | Full support (`trl.GRPOTrainer`) |

**Verdict**: GRPO is strictly better for this use case — the same results at half the memory, with simpler code.

### 13.2 GRPOConfig Parameters

```python
from trl import GRPOConfig

config = GRPOConfig(
    # === Model ===
    output_dir="arm-compiler-optimizer",

    # === Generation ===
    num_generations=4,            # G: completions per prompt (group size for GRPO)
    max_prompt_length=2048,       # tokens; truncated from the left
    max_completion_length=2048,   # tokens
    temperature=0.5,              # rollout sampling temperature (SuperCoder: 0.5)

    # === Loss / Reward ===
    beta=0.0,                     # KL penalty (0.0 = no KL term, default in TRL)
    epsilon=0.2,                  # PPO clip range (used in GRPO's clipped objective)
    scale_rewards=False,          # Don't normalize — binary-gated rewards aren't Gaussian

    # === Training ===
    learning_rate=1e-6,           # SuperCoder Appendix A.2
    per_device_train_batch_size=4,  # Per GPU; effective batch = 4 GPUs × 4 = 16
    gradient_accumulation_steps=1,
    num_train_epochs=1,           # SuperCoder: 1 epoch
    gradient_checkpointing=True,  # Memory savings (default True in GRPOConfig)
    bf16=True,                    # Default True

    # === Logging ===
    logging_steps=10,
    log_completions=True,         # Log generated completions
    disable_tqdm=True,            # Plain-text logs for job monitoring
    logging_first_step=True,
    logging_strategy="steps",

    # === Saving ===
    push_to_hub=True,
    hub_model_id="kaori02/arm-compiler-optimizer",
    save_strategy="steps",
    save_steps=100,
)
```

### 13.3 Custom Reward Function Signature

TRL's GRPOTrainer passes these keyword arguments to reward functions:

```python
def my_reward(
    completions,          # list[list[dict]] — each is [{"role":"assistant","content":"..."}]
    prompts=None,         # list[list[dict]] or list[str]
    completion_ids=None,  # list[list[int]] — tokenized completions
    trainer_state=None,   # TrainerState — current training step, epoch, etc.
    log_extra=None,       # callable to log extra columns
    log_metric=None,      # callable to log scalar metrics
    **kwargs              # ALL extra dataset columns forwarded here
) -> list[float]:         # one reward per completion
    ...
```

### 13.4 Multiple Reward Functions (Composable)

```python
trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-Coder-7B-Instruct",
    reward_funcs=[compile_reward, correctness_reward, speedup_reward],
    reward_weights=[1.0, 2.0, 5.0],  # Weight speedup most heavily
    args=config,
    train_dataset=dataset,
)
```
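
`compile_reward`, `correctness_reward`, and `speedup_reward` above are placeholders for this project, not TRL built-ins. A minimal sketch of the first, reusing `extract_assembly` and `compile_arm` from the §14 script:

```python
import os
import tempfile

def compile_reward(completions, **kwargs):
    """1.0 if the completion contains assembly that cross-compiles, else 0.0.
    correctness_reward and speedup_reward would follow the same shape,
    returning one float per completion."""
    rewards = []
    for completion in completions:
        content = completion[0]["content"] if isinstance(completion, list) else completion
        asm = extract_assembly(content)
        if asm is None:
            rewards.append(0.0)
            continue
        with tempfile.TemporaryDirectory() as tmpdir:
            ok = compile_arm(asm, os.path.join(tmpdir, "bin"))
        rewards.append(1.0 if ok else 0.0)
    return rewards
```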

---

## 14. Full Training Script

```python
"""
ARM Compiler Optimization via GRPO
Based on: SuperCoder (arxiv:2505.11480) adapted for AArch64

Recipe:
- Base model: Qwen/Qwen2.5-Coder-7B-Instruct
- Method: GRPO with custom compiler reward
- Reward: Binary correctness gate + continuous speedup
- Dataset: CodeNet C programs → ARM assembly
"""

import os
import re
import subprocess
import tempfile
from datasets import load_dataset, Dataset
from trl import GRPOTrainer, GRPOConfig
import trackio

# ═══════════════════════════════════════════════════════════════════════════
# REWARD FUNCTION
# ═══════════════════════════════════════════════════════════════════════════

def extract_assembly(content):
    """Extract assembly code from <assembly> tags."""
    match = re.search(r'<assembly>(.*?)</assembly>', content, re.DOTALL)
    return match.group(1).strip() if match else None

def compile_arm(asm_code, output_path):
    """Cross-compile ARM assembly to a static binary."""
    with tempfile.NamedTemporaryFile(suffix=".s", mode="w", delete=False) as f:
        f.write(asm_code)
        asm_path = f.name
    try:
        result = subprocess.run(
            ["aarch64-linux-gnu-gcc", "-o", output_path, asm_path, "-static", "-lm"],
            capture_output=True, text=True, timeout=30
        )
        return result.returncode == 0
    except Exception:
        return False
    finally:
        os.unlink(asm_path)

def run_test(binary_path, test_input, expected_output, timeout=10):
    """Run a test case via QEMU and check the output."""
    try:
        result = subprocess.run(
            ["qemu-aarch64-static", binary_path],
            input=test_input, capture_output=True, text=True, timeout=timeout
        )
        return result.stdout.strip() == expected_output.strip()
    except subprocess.TimeoutExpired:
        return False

def measure_instructions(binary_path, test_input, timeout=30):
    """Approximate instruction count via QEMU for deterministic timing.
    Note: `-d in_asm` logs blocks once, at translation time, so this is a
    rough proxy for executed instructions; for exact dynamic counts use
    the libinsn TCG plugin (see §8.5)."""
    try:
        result = subprocess.run(
            ["qemu-aarch64-static", "-d", "in_asm", binary_path],
            input=test_input, capture_output=True, text=True, timeout=timeout
        )
        return result.stderr.count('\n')
    except Exception:
        return float('inf')

def arm_compiler_reward(completions, test_inputs, test_outputs,
                        baseline_insn_count, log_metric=None, **kwargs):
    """
    SuperCoder §3.3 reward adapted for ARM:
      r = 0 if compilation fails or any test fails
      r = baseline_instructions / optimized_instructions if all tests pass
    """
    rewards = []
    compile_successes = 0
    test_successes = 0

    for i, completion in enumerate(completions):
        content = completion[0]["content"] if isinstance(completion, list) else completion
        asm = extract_assembly(content)

        if asm is None:
            rewards.append(0.0)
            continue

        with tempfile.TemporaryDirectory() as tmpdir:
            bin_path = os.path.join(tmpdir, "opt_binary")

            # Step 1: Compile
            if not compile_arm(asm, bin_path):
                rewards.append(0.0)
                continue
            compile_successes += 1

            # Step 2: Run ALL tests
            all_pass = True
            inputs = test_inputs[i] if isinstance(test_inputs[i], list) else [test_inputs[i]]
            outputs = test_outputs[i] if isinstance(test_outputs[i], list) else [test_outputs[i]]

            for tin, tout in zip(inputs, outputs):
                if not run_test(bin_path, tin, tout):
                    all_pass = False
                    break

            if not all_pass:
                rewards.append(0.0)
                continue
            test_successes += 1

            # Step 3: Measure speedup via instruction count
            opt_insn = measure_instructions(bin_path, inputs[0])
            baseline = baseline_insn_count[i] if isinstance(baseline_insn_count, list) else baseline_insn_count

            if opt_insn > 0 and baseline > 0:
                speedup = baseline / opt_insn
                rewards.append(max(speedup, 0.1))
            else:
                rewards.append(1.0)

    # Log metrics
    if log_metric and len(rewards) > 0:
        log_metric("compile_rate", compile_successes / len(completions))
        log_metric("test_pass_rate", test_successes / len(completions))
        log_metric("avg_reward", sum(rewards) / len(rewards))

    return rewards

# ═══════════════════════════════════════════════════════════════════════════
# TRAINING
# ═══════════════════════════════════════════════════════════════════════════

def main():
    # Initialize tracking
    trackio.init(
        project="arm-compiler-optimizer",
        run="grpo-qwen-7b-arm",
    )

    # Load the pre-built dataset (see §9.3 for construction)
    dataset = load_dataset("your-org/arm-compiler-dataset", split="train")

    # GRPO Configuration (SuperCoder Appendix A.2, adapted for TRL)
    config = GRPOConfig(
        output_dir="arm-compiler-optimizer",

        # Generation
        num_generations=4,
        max_prompt_length=2048,
        max_completion_length=2048,
        temperature=0.5,

        # Loss
        beta=0.0,
        scale_rewards=False,

        # Training
        learning_rate=1e-6,
        per_device_train_batch_size=4,
        gradient_accumulation_steps=1,
        num_train_epochs=1,
        gradient_checkpointing=True,
        bf16=True,

        # Logging
        logging_steps=10,
        log_completions=True,
        disable_tqdm=True,
        logging_first_step=True,
        logging_strategy="steps",

        # Saving
        push_to_hub=True,
        hub_model_id="kaori02/arm-compiler-optimizer",
        save_strategy="steps",
        save_steps=100,
    )

    trainer = GRPOTrainer(
        model="Qwen/Qwen2.5-Coder-7B-Instruct",
        reward_funcs=arm_compiler_reward,
        args=config,
        train_dataset=dataset,
    )

    trainer.train()
    trainer.push_to_hub()

if __name__ == "__main__":
    main()
```

---

## 15. Program Transformation Taxonomy

SuperCoder §5.4 analyzed all 200 evaluation programs. The LLM learns these optimization patterns:

| Transformation | Description | Frequency |
|---------------|-------------|-----------|
| **Loop Restructuring** | Reorder, unroll, alter loop control flow | 45% |
| **Instruction Selection** | Use specialized CPU instructions (e.g., `popcnt`, `bsr`, `cmov`) instead of generic sequences | 35% |
| **Algorithmic Simplification** | Replace custom logic with standard library calls (`memcmp`, `strcmp`, `atoi`) | 30% |
| **Stack Canary Removal** | Eliminate stack protection checks and security instrumentation | 25% |
| **Register Allocation** | Better register assignment, reuse, reduced spills | 20% |
| **Branch Elimination** | Replace conditional branches with conditional moves (`cmov`, `setcc`) | 15% |
| **Address Calculation** | Optimize memory address computation | 10% |
| **Dead Code Elimination** | Remove unused code paths | 10% |
| **Constant Propagation** | Evaluate expressions at compile time | 5% |

### ARM-Specific Transformations to Expect

| Transformation | ARM Instructions | x86 Equivalent |
|---------------|-----------------|----------------|
| Vectorization | NEON `ld1`, `add`, `mul` (4×32-bit) | SSE/AVX |
| Predication | SVE predicated ops (no branch) | None (x86 lacks general predication) |
| Paired load/store | `ldp`, `stp` | None (x86 loads/stores one at a time) |
| Fused multiply-add | `fmadd`, `fmsub` | `vfmadd` (AVX) |
| Conditional select | `csel`, `csinc`, `csneg` | `cmov` |
| Bit manipulation | `clz`, `rbit`, `cnt` | `bsr`, `popcnt` |
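
One way to track whether training actually discovers these patterns: a hedged sketch that scans generated assembly for the mnemonics above and reports usage counts. The marker list is illustrative, not exhaustive:

```python
import re

ARM_OPT_MARKERS = {
    "conditional_select": r"\b(csel|csinc|csneg)\b",
    "paired_load_store":  r"\b(ldp|stp)\b",
    "fused_multiply_add": r"\b(fmadd|fmsub)\b",
    "bit_manipulation":   r"\b(clz|rbit|cnt)\b",
    "neon_vector":        r"\bv\d+\.(16b|8h|4s|2d)\b",  # NEON register arrangements
}

def transformation_profile(asm_code: str) -> dict[str, int]:
    """Count occurrences of ARM-specific optimization markers in generated
    assembly; useful for monitoring which table rows the policy exploits."""
    return {name: len(re.findall(pattern, asm_code))
            for name, pattern in ARM_OPT_MARKERS.items()}

# Example: aggregate over a batch of completions
# profiles = [transformation_profile(extract_assembly(c) or "") for c in batch]
```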

---

## 16. Results, Benchmarks & Ablations

### 16.1 RL vs SFT (SuperCoder §5.2-5.3)

| Method | Correctness | Speedup | Notes |
|--------|------------|---------|-------|
| Base (zero-shot) | 61.4% | 1.10× | No training |
| SFT | 92.5% | 1.39× | Trained on best samples |
| GRPO | 94.7% | 1.44× | RL with compiler reward |
| PPO | 95.0% | 1.46× | RL with compiler reward |
| PPO + best-of-8 | ~95% | **1.93×** | Inference-time scaling |

> **RL > SFT** because optimization is open-ended. There is no single "correct" optimized program — RL directly maximizes the objective (speedup) rather than imitating examples.

### 16.2 Inference-Time Scaling

Best-of-N sampling works multiplicatively with RL:

| Model | N=1 | N=2 | N=4 | N=8 |
|-------|-----|-----|-----|-----|
| Base (Qwen-7B) | 1.10× | 1.20× | 1.30× | 1.39× |
| Claude-opus-4 | 1.43× | 1.60× | 1.80× | 2.05× |
| SuperCoder (PPO) | 1.46× | 1.60× | 1.75× | **1.93×** |
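
A sketch of best-of-N selection at inference time, reusing the §14 helpers (`extract_assembly`, `compile_arm`, `run_test`, `measure_instructions`); `generate` stands in for any prompt-to-completion callable:

```python
import os
import tempfile

def best_of_n(generate, prompt, test_ins, test_outs, n=8):
    """Inference-time scaling: sample n candidates, discard any that fail to
    compile or fail a test, and return the one executing fewest instructions."""
    best_asm, best_insns = None, float("inf")
    for _ in range(n):
        asm = extract_assembly(generate(prompt))
        if asm is None:
            continue
        with tempfile.TemporaryDirectory() as tmp:
            bin_path = os.path.join(tmp, "cand")
            if not compile_arm(asm, bin_path):
                continue
            if not all(run_test(bin_path, i, o) for i, o in zip(test_ins, test_outs)):
                continue
            insns = measure_instructions(bin_path, test_ins[0])
        if insns < best_insns:
            best_asm, best_insns = asm, insns
    return best_asm  # None if no candidate was correct
```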

### 16.3 Ablation: Prompt Components

| Prompt Contains | Correctness | Speedup |
|----------------|-------------|---------|
| C code + gcc -O3 assembly | 95.0% | 1.46× |
| C code only (no assembly) | ~0% | — |
| gcc -O3 assembly only (no C) | ~60% | ~1.3× |

> The baseline assembly is **essential**. Without it, the model cannot generate valid assembly.

### 16.4 Random vs Curated Dataset (SuperCoder §A.7)

| Dataset | Correctness | Speedup |
|---------|-------------|---------|
| Curated (high O0→O3 speedup) | 95.0% | 1.46× |
| Random sample from CodeNet | 93.5% | 1.35× |

The curated dataset helps, but a random sample works too — the approach is robust.

---

## 17. Citation Graph & Future Directions

### 17.1 Papers Citing SuperCoder

| Paper | Key Insight |
|-------|------------|
| **LLM-VeriOpt** (2026) [influential] | Uses formal verification instead of test suites for correctness — eliminates false positives from finite test coverage |
| **Astra** (2025, 31 citations) | Multi-agent system for GPU kernel optimization — agent decomposition for complex optimizations |
| **InCoder-32B** (2026) | Industrial code model covering compiler optimization as a domain |
| **Genesys** (2025) | Evolutionary program synthesis with continuous optimization |

### 17.2 Future Directions

1. **Formal Verification as Reward** (LLM-VeriOpt direction): Replace test-based correctness with formal equivalence checking — eliminates false-positive rewards from insufficient test coverage.

2. **Multi-Turn Refinement** (Kevin, 2025 — arxiv:2507.11948): Let the model iteratively refine its assembly based on profiler feedback. Multiple rounds of generation → profiling → feedback.

3. **Contrastive RL** (CUDA-L1): When the base model's success rate is too low for standard GRPO, present multiple code variants with scores and let the model reason about WHY certain versions are faster.

4. **Cross-Architecture Transfer**: Train on x86, transfer to ARM. The C source code is architecture-agnostic — optimization patterns (loop unrolling, vectorization, etc.) transfer across ISAs even if specific instructions differ.

5. **SVE/SVE2 Exploitation**: ARM's Scalable Vector Extension offers variable-length SIMD. This is a unique optimization opportunity not available on x86 — models that learn to leverage SVE could achieve outsized speedups.

6. **Auto-Parallelization**: Beyond single-thread optimization — teach the model to identify parallelization opportunities and generate multi-threaded ARM code with proper synchronization.

---

## 18. Reference Links

### Papers
| Paper | ArXiv | Year |
|-------|-------|------|
| SuperCoder | [2505.11480](https://arxiv.org/abs/2505.11480) | 2025 |
| CUDA-L1 | [2507.14111](https://arxiv.org/abs/2507.14111) | 2025 |
| Meta LLM Compiler | [2309.07062](https://arxiv.org/abs/2309.07062) | 2023 |
| Compiler Feedback | [2403.14714](https://arxiv.org/abs/2403.14714) | 2024 |
| StepCoder | [2402.01391](https://arxiv.org/abs/2402.01391) | 2024 |
| MLGO | [2101.04808](https://arxiv.org/abs/2101.04808) | 2021 |
| ProGraML | [2012.01470](https://arxiv.org/abs/2012.01470) | 2020 |
| DeepSeekMath (GRPO) | [2402.03300](https://arxiv.org/abs/2402.03300) | 2024 |
| VeriReason | [2505.11849](https://arxiv.org/abs/2505.11849) | 2025 |
| ACECoder | [2502.01718](https://arxiv.org/abs/2502.01718) | 2025 |

### Code & Frameworks
| Resource | URL |
|----------|-----|
| TRL (GRPO Trainer) | [huggingface.co/docs/trl/grpo_trainer](https://huggingface.co/docs/trl/grpo_trainer) |
| TRL Reward Functions | [huggingface.co/docs/trl/rewards](https://huggingface.co/docs/trl/rewards) |
| TRL OpenEnv | [huggingface.co/docs/trl/openenv](https://huggingface.co/docs/trl/openenv) |
| verl (Volcano Engine RL) | [github.com/volcengine/verl](https://github.com/volcengine/verl) |
| CUDA-L1 Code | [github.com/deepreinforce-ai/CUDA-L1](https://github.com/deepreinforce-ai/CUDA-L1) |
| StepCoder / APPS+ | [github.com/ablustrund/apps_plus](https://github.com/ablustrund/apps_plus) |
| IBM CodeNet | [github.com/IBM/Project_CodeNet](https://github.com/IBM/Project_CodeNet) |

### Models & Datasets
| Resource | URL |
|----------|-----|
| Qwen2.5-Coder-7B-Instruct | [huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct) |
| deepmind/code_contests | [huggingface.co/datasets/deepmind/code_contests](https://huggingface.co/datasets/deepmind/code_contests) |
| llvm-ml/ComPile | [huggingface.co/datasets/llvm-ml/ComPile](https://huggingface.co/datasets/llvm-ml/ComPile) |

### Tools
| Tool | Purpose |
|------|---------|
| `aarch64-linux-gnu-gcc` | ARM cross-compiler |
| `qemu-aarch64-static` | ARM userspace emulation |
| `hyperfine` | Benchmarking tool |
| `trackio` | Experiment tracking |

---

## Appendix: Quick Reference Card

```
═══════════════════════════════════════════════════════════════
 ARM COMPILER OPTIMIZATION VIA GRPO — QUICK REFERENCE
═══════════════════════════════════════════════════════════════

 Model:      Qwen/Qwen2.5-Coder-7B-Instruct
 Method:     GRPO (no critic needed)
 Dataset:    CodeNet C programs → ARM assembly (7,872 train / 200 eval)
 Reward:     r=0 if fail, r=speedup if all tests pass
 LR:         1e-6
 Batch:      16 (4 per GPU × 4 GPUs)
 Epochs:     1
 G:          4 completions per prompt
 Temp:       0.5
 Seq len:    2048 prompt + 2048 completion
 KL (beta):  0.0
 Hardware:   4× A100 80GB
 Framework:  TRL GRPOTrainer
 Time:       ~4-8 hours

 Expected:   61% → 95% correctness, 1.10× → 1.46× speedup

 If base model ARM correctness < 40%, use the 3-stage pipeline:
   Stage 1: SFT warmup on (C → ARM assembly) pairs
   Stage 2: Self-filtered retraining
   Stage 3: GRPO with compiler reward
═══════════════════════════════════════════════════════════════
```