Infatoshi committed
Commit 80692f2 · verified · 1 Parent(s): 76a0db8

initial upload: 7 problem definitions

Files changed (50)
  1. 01_fp8_gemm/PROMPT.txt +7 -0
  2. 01_fp8_gemm/benchmark.py +128 -0
  3. 01_fp8_gemm/check.py +112 -0
  4. 01_fp8_gemm/problem.yaml +37 -0
  5. 01_fp8_gemm/reference.py +45 -0
  6. 01_fp8_gemm/shapes.py +15 -0
  7. 01_fp8_gemm/sota.py +53 -0
  8. 02_kda_cutlass/PROMPT.txt +7 -0
  9. 02_kda_cutlass/benchmark.py +133 -0
  10. 02_kda_cutlass/check.py +113 -0
  11. 02_kda_cutlass/problem.yaml +54 -0
  12. 02_kda_cutlass/reference.py +143 -0
  13. 02_kda_cutlass/shapes.py +19 -0
  14. 02_kda_cutlass/sota.py +71 -0
  15. 03_paged_attention/PROMPT.txt +7 -0
  16. 03_paged_attention/benchmark.py +131 -0
  17. 03_paged_attention/check.py +109 -0
  18. 03_paged_attention/problem.yaml +48 -0
  19. 03_paged_attention/reference.py +144 -0
  20. 03_paged_attention/shapes.py +18 -0
  21. 03_paged_attention/sota.py +84 -0
  22. 04_kahan_softmax/PROMPT.txt +7 -0
  23. 04_kahan_softmax/benchmark.py +135 -0
  24. 04_kahan_softmax/check.py +126 -0
  25. 04_kahan_softmax/problem.yaml +43 -0
  26. 04_kahan_softmax/reference.py +52 -0
  27. 04_kahan_softmax/shapes.py +24 -0
  28. 04_kahan_softmax/sota.py +45 -0
  29. 05_topk_bitonic/PROMPT.txt +7 -0
  30. 05_topk_bitonic/benchmark.py +122 -0
  31. 05_topk_bitonic/check.py +149 -0
  32. 05_topk_bitonic/problem.yaml +56 -0
  33. 05_topk_bitonic/reference.py +52 -0
  34. 05_topk_bitonic/shapes.py +19 -0
  35. 05_topk_bitonic/sota.py +25 -0
  36. 06_sonic_moe_swiglu/PROMPT.txt +7 -0
  37. 06_sonic_moe_swiglu/benchmark.py +131 -0
  38. 06_sonic_moe_swiglu/check.py +110 -0
  39. 06_sonic_moe_swiglu/problem.yaml +47 -0
  40. 06_sonic_moe_swiglu/reference.py +102 -0
  41. 06_sonic_moe_swiglu/shapes.py +19 -0
  42. 06_sonic_moe_swiglu/sota.py +71 -0
  43. 07_w4a16_gemm/PROMPT.txt +7 -0
  44. 07_w4a16_gemm/benchmark.py +128 -0
  45. 07_w4a16_gemm/check.py +112 -0
  46. 07_w4a16_gemm/problem.yaml +49 -0
  47. 07_w4a16_gemm/reference.py +112 -0
  48. 07_w4a16_gemm/shapes.py +13 -0
  49. 07_w4a16_gemm/sota.py +87 -0
  50. README.md +114 -0
01_fp8_gemm/PROMPT.txt ADDED
@@ -0,0 +1,7 @@
+ I need you to optimize an FP8 GEMM for the RTX PRO 6000 (SM120 Blackwell, GDDR7, 1.8 TB/s). The reference implementation is in reference.py and your kernel goes in solution.py. You can make whatever mess you want in this directory — scratch files, intermediate kernels, build artifacts, profiling traces — but the final answer has to be in solution.py with the same Model, get_inputs, and get_init_inputs interface as reference.py.
+
+ The input is an fp8_e4m3 activation tensor (M, K) and an fp8_e4m3 weight (N, K), and you return y = x @ w.T as bf16 (M, N). Correctness tolerance on the bf16 output is 0.15 abs/rel — fp8 multiply has real noise so the bar is loose, but it has to match the reference within that. The shapes you have to handle are M=4096 N=4096 K=4096 (square aligned, the easy case), M=4096 N=4096 K=4127 (K not a multiple of 128, which forces predicated tails on tile-quantized kernels), M=32 N=8192 K=8192 (skinny M, decode-style), and M=4096 N=14336 K=4096 (Llama-3 up-proj).
+
+ This needs to be a real custom kernel. Don't reach for torch._scaled_mm or torch.ops.aten._scaled_mm — they're off-limits and using them fails correctness. Try CUDA C++ via torch.utils.cpp_extension.load_inline, inline PTX, or CUTLASS — whatever fits. Anything you're uncertain about, look up PTX docs, clone CUTLASS or other reference repos, read library headers, and investigate.
+
+ Your flywheel is implement, profile (ncu, nsys, torch.profiler — whatever's useful) and time it with benchmark.py, verify correctness by running `python check.py` and reading the output, then iterate. Don't substitute your own one-off correctness snippets for check.py — it iterates over every shape, your spot-check almost certainly won't. If `python check.py` hasn't printed PASS, you're not done. Take as long as you need to actually push the number up.
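For orientation, a minimal solution.py skeleton that satisfies this interface contract could look like the sketch below. It is a placeholder only: the bf16 matmul stands in where the custom FP8 kernel goes, and it declares the same `weight` parameter as reference.Model so check.py's strict load_state_dict succeeds.

import torch
import torch.nn as nn


class Model(nn.Module):
    def __init__(self, M: int, N: int, K: int):
        super().__init__()
        self.M, self.N, self.K = M, N, K
        # Same parameter layout as reference.Model (required by strict loading).
        self.weight = nn.Parameter(torch.empty(N, K, dtype=torch.bfloat16))
        nn.init.normal_(self.weight, std=0.02)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Placeholder math; the custom FP8 kernel replaces this line.
        return x.to(torch.bfloat16) @ self.weight.T


M, N, K = 4096, 4096, 4096


def get_inputs():
    return [(torch.rand(M, K) * 8 - 4).to(torch.float8_e4m3fn)]


def get_init_inputs():
    return [M, N, K]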
01_fp8_gemm/benchmark.py ADDED
@@ -0,0 +1,128 @@
+ """Roofline benchmark for FP8 GEMM.
+
+ For each shape: times eager reference, compiled reference, SOTA (if available),
+ and the agent's solution. Reports achieved TFLOPS, GB/s, and peak_fraction.
+
+ Output lines the harness picks up:
+     shape=<idx> variant=<name> tflops=<N> gbps=<N> ms=<N>
+     peak_fraction: <N>   (geomean over shapes of solution's peak_fraction)
+ """
+ import sys
+ from math import exp, log
+ from pathlib import Path
+
+ import torch
+ import yaml
+
+ REPO_ROOT = Path(__file__).resolve().parents[2]
+ sys.path.insert(0, str(REPO_ROOT))
+
+ from src.eval.roofline import compute_gbps, compute_tflops, peak_fraction  # noqa: E402
+ from src.eval.timing import time_fn  # noqa: E402
+ from src.hardware import get as get_hw  # noqa: E402
+
+
+ def _eval_formula(expr: str, vars: dict) -> float:
+     # Very small eval: only names from `vars` are valid.
+     return float(eval(expr, {"__builtins__": {}}, vars))
+
+
+ def main():
+     import reference
+     import shapes
+     import solution
+
+     meta = yaml.safe_load(Path("problem.yaml").read_text())
+     hw = get_hw(meta["hardware"][0])
+     peak_tflops = hw.peak_tflops_dense.get(meta["peak_tflops_key"], 0.0)
+     peak_gbps = hw.peak_bandwidth_gb_s
+     regime = meta.get("regime", "compute")
+     flops_formula = meta["flops_formula"]
+     bytes_formula = meta["bytes_formula"]
+     num_perf_trials = int(meta.get("num_perf_trials", 30))
+
+     device = torch.device("cuda:0")
+
+     # Optional SOTA
+     try:
+         import sota as sota_mod
+         has_sota = sota_mod.is_available()
+     except Exception:
+         has_sota = False
+
+     sol_fractions: list[float] = []
+
+     for shape_idx, shape in enumerate(shapes.SHAPES):
+         reference.M = shape["M"]
+         reference.N = shape["N"]
+         reference.K = shape["K"]
+
+         init_args = reference.get_init_inputs()
+         ref_model = reference.Model(*init_args).to(device).eval()
+         sol_model = solution.Model(*init_args).to(device).eval()
+         sd = ref_model.state_dict()
+         try:
+             sol_model.load_state_dict(sd, strict=True)
+         except RuntimeError:
+             pass
+
+         torch.manual_seed(2026)
+         inputs = [t.to(device) for t in reference.get_inputs()]
+
+         # Theoretical work per call
+         flops = _eval_formula(flops_formula, shape)
+         bytes_moved = _eval_formula(bytes_formula, shape)
+
+         # Eager
+         ms_eager = time_fn(ref_model, inputs, iters=num_perf_trials)
+
+         # Compiled (best-effort)
+         try:
+             comp = torch.compile(ref_model, mode="reduce-overhead")
+             ms_comp = time_fn(comp, inputs, iters=num_perf_trials)
+         except Exception as e:
+             print(f"  [compile fallback] {type(e).__name__}: {e}")
+             ms_comp = None
+
+         # SOTA
+         ms_sota = None
+         if has_sota:
+             try:
+                 def sota_fn(x, _w=ref_model.weight):
+                     return sota_mod.sota_forward(x, _w)
+                 ms_sota = time_fn(sota_fn, inputs, iters=num_perf_trials)
+             except Exception as e:
+                 print(f"  [sota unavailable] {type(e).__name__}: {e}")
+
+         # Solution
+         ms_sol = time_fn(sol_model, inputs, iters=num_perf_trials)
+
+         for variant, ms in [
+             ("eager", ms_eager),
+             ("compiled", ms_comp),
+             ("sota", ms_sota),
+             ("solution", ms_sol),
+         ]:
+             if ms is None:
+                 continue
+             tflops = compute_tflops(flops, ms)
+             gbps = compute_gbps(bytes_moved, ms)
+             print(f"shape={shape_idx} variant={variant} tflops={tflops:.3f} gbps={gbps:.3f} ms={ms:.3f}")
+
+         # Score: peak_fraction depends on regime
+         sol_tflops = compute_tflops(flops, ms_sol)
+         sol_gbps = compute_gbps(bytes_moved, ms_sol)
+         if regime == "compute":
+             frac = peak_fraction(sol_tflops, peak_tflops)
+         else:
+             frac = peak_fraction(sol_gbps, peak_gbps)
+         sol_fractions.append(frac)
+         print(f"shape={shape_idx} solution_peak_fraction={frac:.4f}")
+
+     gmean = exp(sum(log(max(f, 1e-9)) for f in sol_fractions) / len(sol_fractions))
+     print(f"peak_fraction: {gmean:.4f}")
+     print(f"RESULT: {'OK' if gmean >= 0.1 else 'LOW'}")
+
+
+ if __name__ == "__main__":
+     main()
01_fp8_gemm/check.py ADDED
@@ -0,0 +1,112 @@
+ """Correctness runner for FP8 GEMM.
+
+ Runs solution.Model vs reference.Model across all shapes in shapes.py, 3 seeds
+ each, with per-dtype atol/rtol. Also rejects forbidden ops by grep.
+ """
+ import re
+ import sys
+ from pathlib import Path
+
+ import torch
+ import yaml
+
+ # Make the repo's src/ importable
+ REPO_ROOT = Path(__file__).resolve().parents[2]
+ sys.path.insert(0, str(REPO_ROOT))
+
+ from src.eval.correctness import check_correctness  # noqa: E402
+
+
+ def main():
+     try:
+         import reference
+         import shapes
+         import solution
+     except Exception as e:
+         print(f"FAIL: import error: {e}")
+         sys.exit(1)
+
+     problem_yaml = Path("problem.yaml")
+     meta = yaml.safe_load(problem_yaml.read_text()) if problem_yaml.exists() else {}
+
+     # --- Forbidden-op check ------------------------------------------------
+     sol_src = Path("solution.py").read_text() if Path("solution.py").exists() else ""
+     for forbidden in meta.get("forbidden", []):
+         pat = re.escape(forbidden)
+         if re.search(pat, sol_src):
+             print(f"FAIL: forbidden op used: {forbidden}")
+             sys.exit(1)
+
+     device = torch.device("cuda:0")
+     tol_override = meta.get("tolerance") or None
+
+     # --- Per-shape correctness --------------------------------------------
+     all_shapes = shapes.SHAPES
+     for shape_idx, shape in enumerate(all_shapes):
+         # Rebuild reference module's module-level M/N/K shims so get_inputs /
+         # get_init_inputs match this shape.
+         reference.M = shape["M"]
+         reference.N = shape["N"]
+         reference.K = shape["K"]
+
+         init_args = reference.get_init_inputs()
+         ref_model = reference.Model(*init_args).to(device).eval()
+         sol_model = solution.Model(*init_args).to(device).eval()
+
+         # Share weights. strict=True — if sol_model doesn't declare the same
+         # parameters, correctness fails (this closes the "identity kernel"
+         # cheat class).
+         sd = ref_model.state_dict()
+         try:
+             sol_model.load_state_dict(sd, strict=True)
+         except RuntimeError as e:
+             print(f"FAIL: state_dict mismatch at shape {shape_idx} ({shape}): {e}")
+             sys.exit(1)
+
+         for seed in (42, 123, 456):
+             torch.manual_seed(seed)
+             torch.cuda.manual_seed_all(seed)
+             inputs = [t.to(device) for t in reference.get_inputs()]
+
+             with torch.no_grad():
+                 ref_out = ref_model(*inputs)
+                 sol_out = sol_model(*inputs)
+
+             ok, msg = check_correctness(
+                 ref_out, sol_out,
+                 dtype=ref_out.dtype,
+                 override=tol_override,
+             )
+             if not ok:
+                 print(f"FAIL: shape {shape_idx} {shape} seed {seed}: {msg}")
+                 sys.exit(1)
+
+     # --- Framework label (for stats) --------------------------------------
+     _emit_framework_label()
+     print("PASS")
+
+
+ def _emit_framework_label():
+     """Write framework.txt with the detected kernel framework."""
+     patterns = [
+         ("ptx", r"asm\s+volatile|asm\s*\(|mma\.sync|tcgen05\."),
+         ("cutlass3", r"\bcute::|cutlass/gemm/collective|cutlass::arch::Sm(9|10|12)"),
+         ("cutlass2", r"cutlass/gemm/device/gemm|cutlass::gemm::device"),
+         ("cuda_wmma", r"\bnvcuda::wmma\b|wmma::fragment"),
+         ("triton", r"import\s+triton\b|@triton\.jit|\btl\.dot\b"),
+         ("cuda_raw", r"torch\.utils\.cpp_extension\.load_inline|__global__\s+void"),
+     ]
+     sol = Path("solution.py")
+     if not sol.exists():
+         return
+     code = sol.read_text()
+     label = "unknown"
+     for name, pat in patterns:
+         if re.search(pat, code):
+             label = name
+             break
+     Path("framework.txt").write_text(label + "\n")
+
+
+ if __name__ == "__main__":
+     main()
01_fp8_gemm/problem.yaml ADDED
@@ -0,0 +1,37 @@
+ name: 01_fp8_gemm
+ display_name: "FP8 e4m3 GEMM"
+ precision: fp8_e4m3
+ regime: compute
+
+ # Dense-equivalent FLOPs per invocation. Evaluated per-shape; the benchmark
+ # substitutes M/N/K from each shapes.py entry.
+ flops_formula: "2 * M * N * K"
+
+ # Bytes moved (approximate): fp8 inputs in, bf16 output.
+ bytes_formula: "M*K + K*N + M*N*2"
+
+ hardware: [RTX_PRO_6000]
+ peak_tflops_key: fp8  # index into src/hardware/rtx_pro_6000.py peak_tflops_dense
+ peak_bandwidth_key: dram
+
+ # Per-dtype tolerance override (optional). Defaults are in src/eval/correctness.py.
+ tolerance:
+   fp8_e4m3fn: 0.15
+
+ # Forbidden ops — using any of these in solution.py fails correctness post-hoc.
+ # torch._scaled_mm is Stanford-style cheating (just call the vendor library).
+ forbidden:
+   - "torch._scaled_mm"
+   - "torch.ops.aten._scaled_mm"
+
+ sota:
+   name: "FlashInfer FP8 GEMM"
+   url: "https://github.com/flashinfer-ai/flashinfer"
+   function: "flashinfer.gemm.fp8_gemm"
+   deps:
+     - "flashinfer>=0.6.8"
+   # Documented H100 throughput for this shape (informational, not graded):
+   reference_throughput_tflops_h100: 550
+
+ num_correct_trials: 3
+ num_perf_trials: 30
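To make the formula fields concrete, this is roughly how they get consumed (a sketch mirroring benchmark.py's _eval_formula; the shape dict comes from shapes.py):

import yaml

meta = yaml.safe_load(open("problem.yaml").read())
shape = {"M": 4096, "N": 4096, "K": 4127}
# Builtins are disabled, so only the shape names can appear in a formula.
flops = float(eval(meta["flops_formula"], {"__builtins__": {}}, shape))
gb = float(eval(meta["bytes_formula"], {"__builtins__": {}}, shape)) / 1e9
print(f"{flops / 1e12:.2f} TFLOP, {gb:.3f} GB per call")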
01_fp8_gemm/reference.py ADDED
@@ -0,0 +1,45 @@
+ """Naive FP8 e4m3 GEMM reference (correctness only, NOT the SOTA baseline).
+
+ We cast inputs to bf16 and use torch.matmul. The agent's solution must match
+ this numerically within the fp8 tolerance declared in problem.yaml.
+ """
+ import torch
+ import torch.nn as nn
+
+ OP_TYPE = "gemm"
+ SUPPORTED_PRECISIONS = ["fp8_e4m3"]
+ HARDWARE_REQUIRED = ["RTX_PRO_6000", "H100", "B200"]
+
+
+ class Model(nn.Module):
+     """y = (x @ w.T).to(bf16), where x is fp8_e4m3 (M, K), w is fp8_e4m3 (N, K)."""
+
+     def __init__(self, M: int, N: int, K: int):
+         super().__init__()
+         self.M, self.N, self.K = M, N, K
+         # Weights stored as parameters so state_dict is well-defined.
+         # We initialize in bf16 then cast; the fp8 dtype is set by get_inputs.
+         self.weight = nn.Parameter(torch.empty(N, K, dtype=torch.bfloat16))
+         nn.init.normal_(self.weight, std=0.02)
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         # Upcast to bf16 for the naive reference; the kernel equivalent would
+         # use mma.sync f8f6f4 kind directly.
+         x_bf = x.to(torch.bfloat16)
+         w_bf = self.weight.to(torch.bfloat16)
+         return x_bf @ w_bf.T  # (M, N) bf16
+
+
+ M = 4096
+ N = 4096
+ K = 4096
+
+
+ def get_inputs():
+     # fp8_e4m3 input; random uniform in [-4, 4] then cast.
+     x = (torch.rand(M, K) * 8 - 4).to(torch.float8_e4m3fn)
+     return [x]
+
+
+ def get_init_inputs():
+     return [M, N, K]
01_fp8_gemm/shapes.py ADDED
@@ -0,0 +1,15 @@
+ """Canonical shape sweep for FP8 GEMM.
+
+ Mix of:
+ - square aligned (the easy case)
+ - off-alignment K (common real-world failure mode for tile-quantized kernels)
+ - skinny (decode-like, memory-bound)
+ - rectangular (Llama-3 up-proj)
+ """
+
+ SHAPES = [
+     {"M": 4096, "N": 4096, "K": 4096},   # square aligned
+     {"M": 4096, "N": 4096, "K": 4127},   # K not multiple of 128 -> forces predicated tails
+     {"M": 32, "N": 8192, "K": 8192},     # skinny M (decode)
+     {"M": 4096, "N": 14336, "K": 4096},  # Llama3 up-proj shape
+ ]
01_fp8_gemm/sota.py ADDED
@@ -0,0 +1,53 @@
+ """SOTA reference for FP8 GEMM: flashinfer.gemm.fp8_gemm.
+
+ If flashinfer is not installed or the SM120 path isn't supported, this falls
+ back to torch._scaled_mm, which is the cuBLAS FP8 path. The benchmark treats
+ whichever succeeds as the SOTA reference line.
+
+ Agents are FORBIDDEN from using torch._scaled_mm in their solution (see
+ problem.yaml.forbidden). This file is only for the benchmark's reference line.
+ """
+ from __future__ import annotations
+
+ import torch
+
+
+ def _try_flashinfer(x: torch.Tensor, w: torch.Tensor) -> torch.Tensor | None:
+     try:
+         import flashinfer  # noqa: F401
+         # Note: flashinfer's FP8 GEMM API surface may differ; adapt if needed.
+         # Placeholder call — replace with the actual flashinfer entry point
+         # once validated on SM120.
+         return None
+     except ImportError:
+         return None
+
+
+ def _scaled_mm(x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
+     # torch._scaled_mm wants per-tensor scales. Use unit scales for the reference.
+     scale_a = torch.tensor(1.0, device=x.device)
+     scale_b = torch.tensor(1.0, device=x.device)
+     out = torch._scaled_mm(
+         x,
+         w.T,
+         scale_a=scale_a,
+         scale_b=scale_b,
+         out_dtype=torch.bfloat16,
+     )
+     return out if not isinstance(out, tuple) else out[0]
+
+
+ def sota_forward(x: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
+     """Best-available FP8 GEMM reference. x: (M, K) fp8, w: (N, K) fp8."""
+     out = _try_flashinfer(x, w)
+     if out is not None:
+         return out
+     return _scaled_mm(x, w)
+
+
+ def is_available() -> bool:
+     try:
+         # Verify torch._scaled_mm is callable (smoke)
+         return hasattr(torch, "_scaled_mm")
+     except Exception:
+         return False
02_kda_cutlass/PROMPT.txt ADDED
@@ -0,0 +1,7 @@
+ I need you to implement Kimi Delta Attention forward (chunk form) for the RTX PRO 6000 (SM120 Blackwell, GDDR7, 1.8 TB/s). The reference implementation is in reference.py and your kernel goes in solution.py. You can make whatever mess you want in this directory — scratch files, intermediate kernels, build artifacts, profiling traces — but the final answer has to be in solution.py with the same Model, get_inputs, and get_init_inputs interface as reference.py.
+
+ The op is the chunk-parallel KDA forward from the FLA library: q and k of shape (B, T, H, K) in bf16, v of shape (B, T, H, V) in bf16, g of shape (B, T, H, K) in fp32 (per-channel log-decay with in-chunk cumsum already applied), beta of shape (B, T, H) in bf16, scale a python float, chunk_size 64, no initial state, no final state. You return o of shape (B, T, H, V) in bf16. Correctness tolerance is 0.05 abs/rel — the long recurrence accumulates more error than a single GEMM so the bar's a bit looser than default bf16. The shapes you have to handle are B=2 T=1024 H=8 K=128 V=128 (short-context training step), B=2 T=2048 H=8 K=128 V=128 (the headline shape from the Kimi Linear paper), B=1 T=4096 H=8 K=128 V=128 (long context that stresses the inter-chunk recurrence), and B=1 T=2048 H=4 K=128 V=128 (thin batch decode).
+
+ This needs to be a real custom kernel — the whole point of the problem is to write the chunk-parallel attention yourself, not call FLA's existing implementation. Don't import or call fla.ops.kda, fla.ops.chunk_kda, chunk_kda, fused_recurrent_kda, naive_chunk_kda, or naive_recurrent_kda. The intended path is CUTLASS CuTe on SM120 but Triton, CUDA C++ via load_inline, or inline PTX are also fine if you prefer. Anything you're uncertain about, look up PTX docs, clone CUTLASS or FLA or other reference repos, read library headers, and investigate.
+
+ Your flywheel is implement, profile (ncu, nsys, torch.profiler — whatever's useful) and time it with benchmark.py, verify correctness by running `python check.py` and reading the output, then iterate. Don't substitute your own one-off correctness snippets for check.py — it iterates over every shape, your spot-check almost certainly won't. If `python check.py` hasn't printed PASS, you're not done. Take as long as you need to actually push the number up.
02_kda_cutlass/benchmark.py ADDED
@@ -0,0 +1,133 @@
+ """Roofline benchmark for KDA forward (chunk form).
+
+ For each shape: times eager reference, compiled reference, SOTA (FLA's Triton
+ chunk_kda, if available on this GPU), and the agent's solution. Reports
+ achieved TFLOPS, GB/s, and peak_fraction.
+
+ Output lines the harness picks up:
+     shape=<idx> variant=<name> tflops=<N> gbps=<N> ms=<N>
+     peak_fraction: <N>   (geomean over shapes of solution's peak_fraction)
+ """
+ import sys
+ from math import exp, log
+ from pathlib import Path
+
+ import torch
+ import yaml
+
+ REPO_ROOT = Path(__file__).resolve().parents[2]
+ sys.path.insert(0, str(REPO_ROOT))
+
+ from src.eval.roofline import compute_gbps, compute_tflops, peak_fraction  # noqa: E402
+ from src.eval.timing import time_fn  # noqa: E402
+ from src.hardware import get as get_hw  # noqa: E402
+
+
+ def _eval_formula(expr: str, vars: dict) -> float:
+     return float(eval(expr, {"__builtins__": {}}, vars))
+
+
+ def _apply_shape(reference, shape):
+     for k, v in shape.items():
+         setattr(reference, k, v)
+
+
+ def main():
+     import reference
+     import shapes
+     import solution
+
+     meta = yaml.safe_load(Path("problem.yaml").read_text())
+     hw = get_hw(meta["hardware"][0])
+     peak_tflops = hw.peak_tflops_dense.get(meta["peak_tflops_key"], 0.0)
+     peak_gbps = hw.peak_bandwidth_gb_s
+     regime = meta.get("regime", "compute")
+     flops_formula = meta["flops_formula"]
+     bytes_formula = meta["bytes_formula"]
+     num_perf_trials = int(meta.get("num_perf_trials", 20))
+
+     device = torch.device("cuda:0")
+
+     # Optional SOTA
+     try:
+         import sota as sota_mod
+         has_sota = sota_mod.is_available()
+     except Exception:
+         has_sota = False
+
+     sol_fractions: list[float] = []
+
+     for shape_idx, shape in enumerate(shapes.SHAPES):
+         _apply_shape(reference, shape)
+
+         init_args = reference.get_init_inputs()
+         ref_model = reference.Model(*init_args).to(device).eval()
+         sol_model = solution.Model(*init_args).to(device).eval()
+         sd = ref_model.state_dict()
+         try:
+             sol_model.load_state_dict(sd, strict=True)
+         except RuntimeError:
+             pass
+
+         torch.manual_seed(2026)
+         inputs = [t.to(device) if hasattr(t, "to") else t for t in reference.get_inputs()]
+
+         # Theoretical work per call
+         flops = _eval_formula(flops_formula, shape)
+         bytes_moved = _eval_formula(bytes_formula, shape)
+
+         # Eager
+         ms_eager = time_fn(ref_model, inputs, iters=num_perf_trials)
+
+         # Compiled (best-effort -- the chunk-form recurrence often defeats inductor)
+         try:
+             comp = torch.compile(ref_model, mode="reduce-overhead")
+             ms_comp = time_fn(comp, inputs, iters=num_perf_trials)
+         except Exception as e:
+             print(f"  [compile fallback] {type(e).__name__}: {e}")
+             ms_comp = None
+
+         # SOTA
+         ms_sota = None
+         if has_sota:
+             try:
+                 scale = float(shape["K"]) ** -0.5
+
+                 def sota_fn(q, k, v, g, beta, _scale=scale):
+                     return sota_mod.sota_forward(q, k, v, g, beta, scale=_scale)
+
+                 ms_sota = time_fn(sota_fn, inputs, iters=num_perf_trials)
+             except Exception as e:
+                 print(f"  [sota unavailable] {type(e).__name__}: {e}")
+
+         # Solution
+         ms_sol = time_fn(sol_model, inputs, iters=num_perf_trials)
+
+         for variant, ms in [
+             ("eager", ms_eager),
+             ("compiled", ms_comp),
+             ("sota", ms_sota),
+             ("solution", ms_sol),
+         ]:
+             if ms is None:
+                 continue
+             tflops = compute_tflops(flops, ms)
+             gbps = compute_gbps(bytes_moved, ms)
+             print(f"shape={shape_idx} variant={variant} tflops={tflops:.3f} gbps={gbps:.3f} ms={ms:.3f}")
+
+         sol_tflops = compute_tflops(flops, ms_sol)
+         sol_gbps = compute_gbps(bytes_moved, ms_sol)
+         if regime == "compute":
+             frac = peak_fraction(sol_tflops, peak_tflops)
+         else:
+             frac = peak_fraction(sol_gbps, peak_gbps)
+         sol_fractions.append(frac)
+         print(f"shape={shape_idx} solution_peak_fraction={frac:.4f}")
+
+     gmean = exp(sum(log(max(f, 1e-9)) for f in sol_fractions) / len(sol_fractions))
+     print(f"peak_fraction: {gmean:.4f}")
+     print(f"RESULT: {'OK' if gmean >= 0.1 else 'LOW'}")
+
+
+ if __name__ == "__main__":
+     main()
02_kda_cutlass/check.py ADDED
@@ -0,0 +1,113 @@
+ """Correctness runner for KDA forward (chunk form).
+
+ Runs solution.Model vs reference.Model across all shapes in shapes.py, 3 seeds
+ each, with per-dtype atol/rtol (bf16 default 1e-2 plus a 5e-2 override for
+ this problem). Also rejects forbidden ops by grep.
+ """
+ import re
+ import sys
+ from pathlib import Path
+
+ import torch
+ import yaml
+
+ # Make the repo's src/ importable
+ REPO_ROOT = Path(__file__).resolve().parents[2]
+ sys.path.insert(0, str(REPO_ROOT))
+
+ from src.eval.correctness import check_correctness  # noqa: E402
+
+
+ def _apply_shape(reference, shape):
+     """Override reference's module-level shape shims so get_inputs/get_init_inputs match."""
+     for k, v in shape.items():
+         setattr(reference, k, v)
+
+
+ def main():
+     try:
+         import reference
+         import shapes
+         import solution
+     except Exception as e:
+         print(f"FAIL: import error: {e}")
+         sys.exit(1)
+
+     problem_yaml = Path("problem.yaml")
+     meta = yaml.safe_load(problem_yaml.read_text()) if problem_yaml.exists() else {}
+
+     # --- Forbidden-op check ------------------------------------------------
+     sol_src = Path("solution.py").read_text() if Path("solution.py").exists() else ""
+     for forbidden in meta.get("forbidden", []):
+         pat = re.escape(forbidden)
+         if re.search(pat, sol_src):
+             print(f"FAIL: forbidden op used: {forbidden}")
+             sys.exit(1)
+
+     device = torch.device("cuda:0")
+     tol_override = meta.get("tolerance") or None
+
+     # --- Per-shape correctness --------------------------------------------
+     all_shapes = shapes.SHAPES
+     for shape_idx, shape in enumerate(all_shapes):
+         _apply_shape(reference, shape)
+
+         init_args = reference.get_init_inputs()
+         ref_model = reference.Model(*init_args).to(device).eval()
+         sol_model = solution.Model(*init_args).to(device).eval()
+
+         # Share weights/buffers. strict=True closes the "identity kernel" cheat.
+         sd = ref_model.state_dict()
+         try:
+             sol_model.load_state_dict(sd, strict=True)
+         except RuntimeError as e:
+             print(f"FAIL: state_dict mismatch at shape {shape_idx} ({shape}): {e}")
+             sys.exit(1)
+
+         for seed in (42, 123, 456):
+             torch.manual_seed(seed)
+             torch.cuda.manual_seed_all(seed)
+             inputs = [t.to(device) if hasattr(t, "to") else t for t in reference.get_inputs()]
+
+             with torch.no_grad():
+                 ref_out = ref_model(*inputs)
+                 sol_out = sol_model(*inputs)
+
+             ok, msg = check_correctness(
+                 ref_out, sol_out,
+                 dtype=ref_out.dtype,
+                 override=tol_override,
+             )
+             if not ok:
+                 print(f"FAIL: shape {shape_idx} {shape} seed {seed}: {msg}")
+                 sys.exit(1)
+
+     # --- Framework label (for stats) --------------------------------------
+     _emit_framework_label()
+     print("PASS")
+
+
+ def _emit_framework_label():
+     """Write framework.txt with the detected kernel framework."""
+     patterns = [
+         ("ptx", r"asm\s+volatile|asm\s*\(|mma\.sync|tcgen05\."),
+         ("cutlass3", r"\bcute::|cutlass/gemm/collective|cutlass::arch::Sm(9|10|12)"),
+         ("cutlass2", r"cutlass/gemm/device/gemm|cutlass::gemm::device"),
+         ("cuda_wmma", r"\bnvcuda::wmma\b|wmma::fragment"),
+         ("triton", r"import\s+triton\b|@triton\.jit|\btl\.dot\b"),
+         ("cuda_raw", r"torch\.utils\.cpp_extension\.load_inline|__global__\s+void"),
+     ]
+     sol = Path("solution.py")
+     if not sol.exists():
+         return
+     code = sol.read_text()
+     label = "unknown"
+     for name, pat in patterns:
+         if re.search(pat, code):
+             label = name
+             break
+     Path("framework.txt").write_text(label + "\n")
+
+
+ if __name__ == "__main__":
+     main()
02_kda_cutlass/problem.yaml ADDED
@@ -0,0 +1,54 @@
+ name: 02_kda_cutlass
+ display_name: "Kimi Delta Attention (chunk forward) via CUTLASS CuTe"
+ precision: bf16
+ regime: compute
+
+ # Approximate FLOPs per forward call. KDA forward (chunk form) breakdown per
+ # token across all heads:
+ #   - intra-chunk Aqk (q @ k.T with decay): 2*BT*K
+ #   - intra-chunk Akk + cumulative invert : ~3*BT*K + BT^2
+ #   - w = A @ (g.exp()*k): 2*BT*K
+ #   - u = A @ v: 2*BT*V
+ #   - inter-chunk q @ S: 2*K*V
+ #   - inter-chunk (Aqk @ v_i): 2*BT*V
+ #   - state update S += k.T @ v_i: 2*K*V
+ # Aggregating with a constant of 4 (typical for chunked linear attention) gives
+ # the formula below. Used only for roofline reporting -- not graded directly.
+ flops_formula: "4 * B * T * H * (K*V + CHUNK_SIZE*K + CHUNK_SIZE*V)"
+
+ # Bytes moved (lower bound): one read of q,k,v,g,beta and one write of o.
+ bytes_formula: "B*T*H*K*2 + B*T*H*K*2 + B*T*H*V*2 + B*T*H*K*4 + B*T*H*2 + B*T*H*V*2"
+
+ hardware: [RTX_PRO_6000]
+ peak_tflops_key: bf16  # KDA's heavy work is bf16 @ bf16 -> fp32 GEMM-equivalent
+ peak_bandwidth_key: dram
+
+ # bf16 default tolerance from src/eval/correctness.py is 1e-2 atol/rtol; KDA's
+ # long recurrence accumulates more error than a single GEMM, so we relax slightly.
+ tolerance:
+   torch.bfloat16: 0.05  # was "5e-2" -- PyYAML parses un-dotted sci-notation as string
+
+ # Forbidden ops -- using any in solution.py fails correctness post-hoc.
+ # The whole point of the problem is to write the kernel; calling FLA's existing
+ # Triton chunk_kda is the canonical reward-hack to block.
+ forbidden:
+   - "fla.ops.kda"
+   - "fla.ops.chunk_kda"
+   - "chunk_kda"
+   - "fused_recurrent_kda"
+   - "naive_chunk_kda"
+   - "naive_recurrent_kda"
+
+ sota:
+   name: "FLA chunk_kda (Triton)"
+   url: "https://github.com/fla-org/flash-linear-attention/tree/main/fla/ops/kda"
+   function: "fla.ops.kda.chunk_kda"
+   deps:
+     - "flash-linear-attention>=0.3"
+   # Documented H100 throughput (informational, not graded). FLA's KDA Triton
+   # kernel hits roughly 0.6-0.8x of FlashAttention-2 wall-clock on H100 at the
+   # B=2,T=2048,H=8,K=V=128 shape (per the Kimi Linear blog / FLA benchmarks).
+   reference_throughput_tflops_h100: null
+
+ num_correct_trials: 3
+ num_perf_trials: 20
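The inline comment on the tolerance value points at a real PyYAML quirk, worth a quick demonstration (YAML 1.1 resolver, as used by yaml.safe_load):

import yaml

# PyYAML's YAML 1.1 float regex requires a dot before the exponent, so a
# bare "5e-2" fails to resolve as a float and loads as the string "5e-2".
print(repr(yaml.safe_load("tol: 5e-2")["tol"]))    # '5e-2'  (str)
print(repr(yaml.safe_load("tol: 5.0e-2")["tol"]))  # 0.05    (float)
print(repr(yaml.safe_load("tol: 0.05")["tol"]))    # 0.05    (float)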
02_kda_cutlass/reference.py ADDED
@@ -0,0 +1,143 @@
+ """Naive PyTorch reference for Kimi Delta Attention (KDA) forward, chunk form.
+
+ This is the correctness oracle, NOT the SOTA baseline. It mirrors the
+ chunk-parallel formulation in fla/ops/kda/naive.py (Songlin Yang et al.)
+ without any Triton or CUDA optimization.
+
+ Inputs (per the FLA convention):
+     q, k : (B, T, H, K) bf16 -- queries / keys
+     v    : (B, T, H, V) bf16 -- values
+     g    : (B, T, H, K) fp32 -- per-channel log-decay (the in-chunk cumsum is applied below)
+     beta : (B, T, H) bf16 -- write strength
+
+ Output:
+     o : (B, T, H, V) bf16
+
+ The agent must reproduce this output (within bf16 tolerance) using a CUTLASS
+ CuTe kernel on SM120 -- NOT by calling fla.ops.chunk_kda directly.
+ """
+ from __future__ import annotations
+
+ import torch
+ import torch.nn as nn
+ from einops import rearrange
+
+ OP_TYPE = "linear_attention"
+ SUPPORTED_PRECISIONS = ["bf16"]
+ HARDWARE_REQUIRED = ["RTX_PRO_6000", "H100", "B200"]
+
+
+ def _naive_chunk_kda(
+     q: torch.Tensor,
+     k: torch.Tensor,
+     v: torch.Tensor,
+     g: torch.Tensor,
+     beta: torch.Tensor,
+     scale: float,
+     chunk_size: int = 64,
+ ) -> torch.Tensor:
+     """KDA forward, no initial state, no final state. Returns o with v's dtype."""
+     dtype = v.dtype
+     B, T, H, K = q.shape
+     V = v.shape[-1]
+     BT = chunk_size
+     assert T % BT == 0, f"T={T} must be a multiple of chunk_size={BT}"
+     NT = T // BT
+
+     q, k, v, g, beta = (x.to(torch.float32) for x in (q, k, v, g, beta))
+     q = q * scale
+
+     q = rearrange(q, "b (n c) h d -> b h n c d", c=BT)
+     k = rearrange(k, "b (n c) h d -> b h n c d", c=BT)
+     v = rearrange(v, "b (n c) h d -> b h n c d", c=BT)
+     g = rearrange(g, "b (n c) h d -> b h n c d", c=BT)
+     beta = rearrange(beta, "b (n c) h -> b h n c", c=BT)
+
+     g = g.cumsum(-2)
+
+     # ---- Build A_kk (intra-chunk K-K interaction, lower-triangular w/ diag masked) ----
+     mask_diag_upper = torch.triu(torch.ones(BT, BT, dtype=torch.bool, device=q.device), diagonal=0)
+     A = torch.zeros(*q.shape[:-1], BT, dtype=torch.float32, device=q.device)
+     for i in range(BT):
+         k_i = k[..., i, :]
+         g_i = g[..., i:i + 1, :]
+         A[..., i] = torch.einsum("... c d, ... d -> ... c", k * (g - g_i).exp(), k_i)
+     A = A * beta[..., None]
+     A = -A.masked_fill(mask_diag_upper, 0)
+
+     for i in range(1, BT):
+         A[..., i, :i] = A[..., i, :i].clone() + (A[..., i, :, None].clone() * A[..., :, :i].clone()).sum(-2)
+     A = (A + torch.eye(BT, dtype=torch.float32, device=q.device)) * beta[..., None, :]
+
+     w = A @ (g.exp() * k)
+     u = A @ v
+
+     # ---- Recurrent inter-chunk pass ----
+     S = q.new_zeros(B, H, K, V)
+     o = torch.zeros_like(v)
+     mask_strict_upper = torch.triu(torch.ones(BT, BT, dtype=torch.bool, device=q.device), diagonal=1)
+     for i in range(NT):
+         q_i, k_i, u_i, g_i, w_i = q[:, :, i], k[:, :, i], u[:, :, i], g[:, :, i], w[:, :, i]
+         Aqk = torch.zeros(B, H, BT, BT, dtype=torch.float32, device=q.device)
+         for j in range(BT):
+             k_j = k[:, :, i, j]
+             g_j = g[:, :, i, j:j + 1, :]
+             Aqk[..., j] = torch.einsum("... c d, ... d -> ... c", q_i * (g_i - g_j).exp(), k_j)
+         Aqk = Aqk.masked_fill(mask_strict_upper, 0)
+         v_i = u_i - w_i @ S
+         o[:, :, i] = (q_i * g_i.exp()) @ S + Aqk @ v_i
+         S = S * rearrange(g_i[:, :, -1].exp(), "b h k -> b h k 1")
+         S = S + rearrange((g_i[:, :, -1:] - g_i).exp() * k_i, "b h c k -> b h k c") @ v_i
+
+     o = rearrange(o, "b h n c d -> b (n c) h d")
+     return o.to(dtype)
+
+
+ class Model(nn.Module):
+     """KDA forward (chunk form). No learned parameters; all inputs are activations."""
+
+     def __init__(self, B: int, T: int, H: int, K: int, V: int, chunk_size: int = 64):
+         super().__init__()
+         self.B, self.T, self.H, self.K, self.V = B, T, H, K, V
+         self.chunk_size = chunk_size
+         self.scale = float(K) ** -0.5
+         # No learned params; declare a dummy buffer so state_dict is well-defined.
+         self.register_buffer("_dummy", torch.zeros(1), persistent=False)
+
+     def forward(
+         self,
+         q: torch.Tensor,
+         k: torch.Tensor,
+         v: torch.Tensor,
+         g: torch.Tensor,
+         beta: torch.Tensor,
+     ) -> torch.Tensor:
+         return _naive_chunk_kda(q, k, v, g, beta, scale=self.scale, chunk_size=self.chunk_size)
+
+
+ # Module-level shape shims (overridden by check.py / benchmark.py per shape).
+ B = 2
+ T = 1024
+ H = 8
+ K = 128
+ V = 128
+ CHUNK_SIZE = 64
+
+
+ def get_inputs():
+     """Return a list of activations for one forward call.
+
+     bf16 for q/k/v/beta; fp32 for the log-decay g (per FLA convention).
+     """
+     torch.manual_seed(0)
+     q = torch.randn(B, T, H, K, dtype=torch.bfloat16) * 0.1
+     k = torch.randn(B, T, H, K, dtype=torch.bfloat16) * 0.1
+     v = torch.randn(B, T, H, V, dtype=torch.bfloat16) * 0.1
+     # log-decay: small negative numbers so exp(g) is in (0, 1).
+     g = torch.randn(B, T, H, K, dtype=torch.float32) * 0.1 - 0.05
+     beta = torch.sigmoid(torch.randn(B, T, H, dtype=torch.bfloat16))
+     return [q, k, v, g, beta]
+
+
+ def get_init_inputs():
+     return [B, T, H, K, V, CHUNK_SIZE]
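In symbols, the inter-chunk loop above computes, per batch and head (a transcription of the code, not an independent derivation): S is the (K, V) running state, ⊙ is elementwise multiply, and g_{i,last} is the cumulative decay at the last row of chunk i.

$$
\begin{aligned}
v_i' &= u_i - w_i S \\
o_i  &= (q_i \odot e^{g_i})\, S + A_i^{qk}\, v_i' \\
S &\leftarrow \operatorname{diag}\!\big(e^{g_{i,\mathrm{last}}}\big)\, S
   + \big(e^{\,g_{i,\mathrm{last}} - g_i} \odot k_i\big)^{\top} v_i'
\end{aligned}
$$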
02_kda_cutlass/shapes.py ADDED
@@ -0,0 +1,19 @@
+ """Canonical shape sweep for KDA forward (chunk form).
+
+ Mix of:
+ - short-context training-step scale (T=1024)
+ - mid-context (T=2048) which is the headline benchmark
+ - long-context that stresses the inter-chunk recurrence (T=4096)
+ - thin-batch decode-style (B=1, T=2048, fewer heads)
+
+ Constraints:
+ - T % chunk_size == 0 (chunk_size = 64)
+ - K, V are the per-head channel dims; KDA in Kimi Linear uses K=V=128
+ """
+
+ SHAPES = [
+     {"B": 2, "T": 1024, "H": 8, "K": 128, "V": 128, "CHUNK_SIZE": 64},
+     {"B": 2, "T": 2048, "H": 8, "K": 128, "V": 128, "CHUNK_SIZE": 64},
+     {"B": 1, "T": 4096, "H": 8, "K": 128, "V": 128, "CHUNK_SIZE": 64},
+     {"B": 1, "T": 2048, "H": 4, "K": 128, "V": 128, "CHUNK_SIZE": 64},
+ ]
02_kda_cutlass/sota.py ADDED
@@ -0,0 +1,71 @@
+ """SOTA reference for KDA forward: fla.ops.kda.chunk_kda (Triton).
+
+ The agent's solution is forbidden from importing this module path (see
+ problem.yaml.forbidden). This file is only used by benchmark.py to draw
+ the SOTA reference line.
+
+ If FLA's Triton kernel does not run on SM120 (Blackwell consumer-lineage --
+ some Triton kernels in FLA target Hopper TMA), is_available() returns False
+ and benchmark.py omits the SOTA variant. The H100 reference is documented
+ in problem.yaml for context.
+ """
+ from __future__ import annotations
+
+ import torch
+
+
+ def _import_fla():
+     try:
+         from fla.ops.kda import chunk_kda  # noqa: F401
+         return chunk_kda
+     except Exception:
+         return None
+
+
+ def sota_forward(
+     q: torch.Tensor,
+     k: torch.Tensor,
+     v: torch.Tensor,
+     g: torch.Tensor,
+     beta: torch.Tensor,
+     scale: float | None = None,
+ ) -> torch.Tensor:
+     """Run FLA's Triton chunk_kda. Returns o (B, T, H, V) in v's dtype."""
+     chunk_kda = _import_fla()
+     if chunk_kda is None:
+         raise RuntimeError("fla.ops.kda.chunk_kda unavailable")
+     # FLA's chunk_kda has a richer signature (A_log, dt_bias, l2norm, gates, ...).
+     # We need the bare forward: pass A_log/dt_bias as None, gates off, no l2norm.
+     # The wrapper expects fp32 g; q/k/v/beta in bf16/fp16.
+     out = chunk_kda(
+         q=q,
+         k=k,
+         v=v,
+         g=g,
+         beta=beta,
+         scale=scale,
+         initial_state=None,
+         output_final_state=False,
+         use_qk_l2norm_in_kernel=False,
+         use_gate_in_kernel=False,
+     )
+     # chunk_kda returns (o, final_state) or just o depending on flags.
+     return out[0] if isinstance(out, tuple) else out
+
+
+ def is_available() -> bool:
+     if _import_fla() is None:
+         return False
+     # Probe a tiny call to confirm the kernel compiles on the current SM.
+     try:
+         device = torch.device("cuda:0")
+         B, T, H, K, V = 1, 64, 1, 64, 64
+         q = torch.randn(B, T, H, K, dtype=torch.bfloat16, device=device)
+         k = torch.randn(B, T, H, K, dtype=torch.bfloat16, device=device)
+         v = torch.randn(B, T, H, V, dtype=torch.bfloat16, device=device)
+         g = torch.randn(B, T, H, K, dtype=torch.float32, device=device) * 0.01
+         beta = torch.sigmoid(torch.randn(B, T, H, dtype=torch.bfloat16, device=device))
+         sota_forward(q, k, v, g, beta, scale=K ** -0.5)
+         return True
+     except Exception:
+         return False
03_paged_attention/PROMPT.txt ADDED
@@ -0,0 +1,7 @@
+ I need you to write a paged attention decode kernel for the RTX PRO 6000 (SM120 Blackwell, GDDR7, 1.8 TB/s). The reference implementation is in reference.py and your kernel goes in solution.py. You can make whatever mess you want in this directory — scratch files, intermediate kernels, build artifacts, profiling traces — but the final answer has to be in solution.py with the same Model, get_inputs, and get_init_inputs interface as reference.py.
+
+ This is single-query decode: each batch element has a query of shape (num_heads, head_dim) and attends over a KV cache of seq_len[b] tokens stored as fixed-size pages in a global pool, with block_table[b] listing which pages belong to batch element b. The KV cache is laid out as (num_blocks, page_size, num_kv_heads, head_dim * 2) where the last dim packs [K | V] so a single gather pulls both. You return the bf16 output of shape (batch, num_heads, head_dim). Correctness tolerance is 0.02 abs/rel on bf16/fp16. The shapes you have to handle are batch=8 num_heads=32 num_kv_heads=8 head_dim=128 seq_len=1024 page_size=16 (Llama-3 8B-style decode), batch=32 num_heads=32 num_kv_heads=8 head_dim=128 seq_len=2048 page_size=16 (server-batched decode), batch=4 num_heads=64 num_kv_heads=8 head_dim=128 seq_len=4096 page_size=16 (Llama-3 70B GQA wide ratio with long context), batch=16 num_heads=32 num_kv_heads=8 head_dim=128 seq_len=1535 page_size=16 (non-power-of-two seq_len that forces predicated tails), and batch=8 num_heads=16 num_kv_heads=4 head_dim=64 seq_len=2000 page_size=16 (small head_dim, non-pow2).
+
+ This needs to be a real custom kernel. Don't import or call vllm.attention, flashinfer.batch_decode_with_paged_kv_cache, flashinfer.decode, torch.nn.functional.scaled_dot_product_attention, or F.scaled_dot_product_attention — they're off-limits, including the trick of gathering pages first and then dispatching to SDPA. Try CUDA C++ via load_inline, Triton, inline PTX, or CUTLASS — whatever fits. Anything you're uncertain about, look up PTX docs, clone CUTLASS or vLLM or FlashInfer or other reference repos, read library headers, and investigate.
+
+ Your flywheel is implement, profile (ncu, nsys, torch.profiler — whatever's useful) and time it with benchmark.py, verify correctness by running `python check.py` and reading the output, then iterate. Don't substitute your own one-off correctness snippets for check.py — it iterates over every shape, your spot-check almost certainly won't. If `python check.py` hasn't printed PASS, you're not done. Take as long as you need to actually push the number up.
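For concreteness, the indexing this layout implies (a sketch; tensor names follow reference.py, and `pos`/`b` are hypothetical loop variables):

# Logical token position `pos` of batch element `b`:
page = block_table[b, pos // page_size]   # which pool page holds it
slot = pos % page_size                    # slot within that page
k = kv_cache[page, slot, :, :head_dim]    # keys   (num_kv_heads, head_dim)
v = kv_cache[page, slot, :, head_dim:]    # values (num_kv_heads, head_dim)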
03_paged_attention/benchmark.py ADDED
@@ -0,0 +1,131 @@
+ """Roofline benchmark for paged-attention decode.
+
+ For each shape: times eager reference, compiled reference, SOTA (if available),
+ and the agent's solution. Reports achieved TFLOPS, GB/s, and peak_fraction.
+
+ Decode is memory-bound, so peak_fraction is computed from achieved GB/s vs
+ the GPU's peak DRAM bandwidth.
+ """
+ import sys
+ from math import exp, log
+ from pathlib import Path
+
+ import torch
+ import yaml
+
+ REPO_ROOT = Path(__file__).resolve().parents[2]
+ sys.path.insert(0, str(REPO_ROOT))
+
+ from src.eval.roofline import compute_gbps, compute_tflops, peak_fraction  # noqa: E402
+ from src.eval.timing import time_fn  # noqa: E402
+ from src.hardware import get as get_hw  # noqa: E402
+
+
+ def _eval_formula(expr: str, vars: dict) -> float:
+     return float(eval(expr, {"__builtins__": {}}, vars))
+
+
+ def _apply_shape(reference, shape: dict) -> None:
+     reference.BATCH = shape["batch"]
+     reference.NUM_HEADS = shape["num_heads"]
+     reference.NUM_KV_HEADS = shape["num_kv_heads"]
+     reference.HEAD_DIM = shape["head_dim"]
+     reference.SEQ_LEN = shape["seq_len"]
+     reference.PAGE_SIZE = shape["page_size"]
+
+
+ def main():
+     import reference
+     import shapes
+     import solution
+
+     meta = yaml.safe_load(Path("problem.yaml").read_text())
+     hw = get_hw(meta["hardware"][0])
+     peak_tflops = hw.peak_tflops_dense.get(meta["peak_tflops_key"], 0.0)
+     peak_gbps = hw.peak_bandwidth_gb_s
+     regime = meta.get("regime", "memory")
+     flops_formula = meta["flops_formula"]
+     bytes_formula = meta["bytes_formula"]
+     num_perf_trials = int(meta.get("num_perf_trials", 30))
+
+     device = torch.device("cuda:0")
+
+     try:
+         import sota as sota_mod
+         has_sota = sota_mod.is_available()
+     except Exception:
+         has_sota = False
+
+     sol_fractions: list[float] = []
+
+     for shape_idx, shape in enumerate(shapes.SHAPES):
+         _apply_shape(reference, shape)
+
+         init_args = reference.get_init_inputs()
+         ref_model = reference.Model(*init_args).to(device).eval()
+         sol_model = solution.Model(*init_args).to(device).eval()
+         sd = ref_model.state_dict()
+         try:
+             sol_model.load_state_dict(sd, strict=True)
+         except RuntimeError:
+             pass
+
+         torch.manual_seed(2026)
+         inputs = [t.to(device) for t in reference.get_inputs()]
+
+         flops = _eval_formula(flops_formula, shape)
+         bytes_moved = _eval_formula(bytes_formula, shape)
+
+         ms_eager = time_fn(ref_model, inputs, iters=num_perf_trials)
+
+         try:
+             comp = torch.compile(ref_model, mode="reduce-overhead")
+             ms_comp = time_fn(comp, inputs, iters=num_perf_trials)
+         except Exception as e:
+             print(f"  [compile fallback] {type(e).__name__}: {e}")
+             ms_comp = None
+
+         ms_sota = None
+         if has_sota:
+             try:
+                 Hkv = shape["num_kv_heads"]
+                 D = shape["head_dim"]
+                 P = shape["page_size"]
+
+                 def sota_fn(q, kvc, bt, sl, _Hkv=Hkv, _D=D, _P=P):
+                     return sota_mod.sota_forward(q, kvc, bt, sl, _Hkv, _D, _P)
+
+                 ms_sota = time_fn(sota_fn, inputs, iters=num_perf_trials)
+             except Exception as e:
+                 print(f"  [sota unavailable] {type(e).__name__}: {e}")
+
+         ms_sol = time_fn(sol_model, inputs, iters=num_perf_trials)
+
+         for variant, ms in [
+             ("eager", ms_eager),
+             ("compiled", ms_comp),
+             ("sota", ms_sota),
+             ("solution", ms_sol),
+         ]:
+             if ms is None:
+                 continue
+             tflops = compute_tflops(flops, ms)
+             gbps = compute_gbps(bytes_moved, ms)
+             print(f"shape={shape_idx} variant={variant} tflops={tflops:.3f} gbps={gbps:.3f} ms={ms:.3f}")
+
+         sol_tflops = compute_tflops(flops, ms_sol)
+         sol_gbps = compute_gbps(bytes_moved, ms_sol)
+         if regime == "compute":
+             frac = peak_fraction(sol_tflops, peak_tflops)
+         else:
+             frac = peak_fraction(sol_gbps, peak_gbps)
+         sol_fractions.append(frac)
+         print(f"shape={shape_idx} solution_peak_fraction={frac:.4f}")
+
+     gmean = exp(sum(log(max(f, 1e-9)) for f in sol_fractions) / len(sol_fractions))
+     print(f"peak_fraction: {gmean:.4f}")
+     print(f"RESULT: {'OK' if gmean >= 0.1 else 'LOW'}")
+
+
+ if __name__ == "__main__":
+     main()
03_paged_attention/check.py ADDED
@@ -0,0 +1,109 @@
+ """Correctness runner for paged-attention decode.
+
+ Runs solution.Model vs reference.Model across all shapes in shapes.py, 3 seeds
+ each, with per-dtype atol/rtol. Also rejects forbidden ops by grep.
+ """
+ import re
+ import sys
+ from pathlib import Path
+
+ import torch
+ import yaml
+
+ REPO_ROOT = Path(__file__).resolve().parents[2]
+ sys.path.insert(0, str(REPO_ROOT))
+
+ from src.eval.correctness import check_correctness  # noqa: E402
+
+
+ def _apply_shape(reference, shape: dict) -> None:
+     reference.BATCH = shape["batch"]
+     reference.NUM_HEADS = shape["num_heads"]
+     reference.NUM_KV_HEADS = shape["num_kv_heads"]
+     reference.HEAD_DIM = shape["head_dim"]
+     reference.SEQ_LEN = shape["seq_len"]
+     reference.PAGE_SIZE = shape["page_size"]
+
+
+ def main():
+     try:
+         import reference
+         import shapes
+         import solution
+     except Exception as e:
+         print(f"FAIL: import error: {e}")
+         sys.exit(1)
+
+     problem_yaml = Path("problem.yaml")
+     meta = yaml.safe_load(problem_yaml.read_text()) if problem_yaml.exists() else {}
+
+     sol_src = Path("solution.py").read_text() if Path("solution.py").exists() else ""
+     for forbidden in meta.get("forbidden", []):
+         pat = re.escape(forbidden)
+         if re.search(pat, sol_src):
+             print(f"FAIL: forbidden op used: {forbidden}")
+             sys.exit(1)
+
+     device = torch.device("cuda:0")
+     tol_override = meta.get("tolerance") or None
+
+     all_shapes = shapes.SHAPES
+     for shape_idx, shape in enumerate(all_shapes):
+         _apply_shape(reference, shape)
+
+         init_args = reference.get_init_inputs()
+         ref_model = reference.Model(*init_args).to(device).eval()
+         sol_model = solution.Model(*init_args).to(device).eval()
+
+         sd = ref_model.state_dict()
+         try:
+             sol_model.load_state_dict(sd, strict=True)
+         except RuntimeError as e:
+             print(f"FAIL: state_dict mismatch at shape {shape_idx} ({shape}): {e}")
+             sys.exit(1)
+
+         for seed in (42, 123, 456):
+             torch.manual_seed(seed)
+             torch.cuda.manual_seed_all(seed)
+             inputs = [t.to(device) for t in reference.get_inputs()]
+
+             with torch.no_grad():
+                 ref_out = ref_model(*inputs)
+                 sol_out = sol_model(*inputs)
+
+             ok, msg = check_correctness(
+                 ref_out, sol_out,
+                 dtype=ref_out.dtype,
+                 override=tol_override,
+             )
+             if not ok:
+                 print(f"FAIL: shape {shape_idx} {shape} seed {seed}: {msg}")
+                 sys.exit(1)
+
+     _emit_framework_label()
+     print("PASS")
+
+
+ def _emit_framework_label():
+     patterns = [
+         ("ptx", r"asm\s+volatile|asm\s*\(|mma\.sync|tcgen05\."),
+         ("cutlass3", r"\bcute::|cutlass/gemm/collective|cutlass::arch::Sm(9|10|12)"),
+         ("cutlass2", r"cutlass/gemm/device/gemm|cutlass::gemm::device"),
+         ("cuda_wmma", r"\bnvcuda::wmma\b|wmma::fragment"),
+         ("triton", r"import\s+triton\b|@triton\.jit|\btl\.dot\b"),
+         ("cuda_raw", r"torch\.utils\.cpp_extension\.load_inline|__global__\s+void"),
+     ]
+     sol = Path("solution.py")
+     if not sol.exists():
+         return
+     code = sol.read_text()
+     label = "unknown"
+     for name, pat in patterns:
+         if re.search(pat, code):
+             label = name
+             break
+     Path("framework.txt").write_text(label + "\n")
+
+
+ if __name__ == "__main__":
+     main()
03_paged_attention/problem.yaml ADDED
@@ -0,0 +1,48 @@
+ name: 03_paged_attention
+ display_name: "Paged Attention Decode"
+ precision: bf16
+ regime: memory  # decode is bandwidth-bound (KV cache streamed once per token)
+
+ # Effective FLOPs per call: QK^T + softmax*V across all batches/heads/seq.
+ # Sum-formula evaluated per-shape; the benchmark substitutes batch/num_heads/etc.
+ # 2 * B * H * L * D for QK^T plus 2 * B * H * L * D for AV => 4 * B * H * L * D
+ flops_formula: "4 * batch * num_heads * seq_len * head_dim"
+
+ # Bytes moved (the real bottleneck): the KV cache must be streamed end-to-end.
+ # K and V each: batch * seq_len * num_kv_heads * head_dim * 2 bytes (bf16)
+ # Q: batch * num_heads * head_dim * 2 (negligible vs KV)
+ # Out: same as Q
+ # So total ~ 2 * (B * L * Hkv * D * 2) + small.
+ bytes_formula: "2 * batch * seq_len * num_kv_heads * head_dim * 2 + batch * num_heads * head_dim * 2 * 2"
+
+ hardware: [RTX_PRO_6000]
+ peak_tflops_key: bf16  # informational; regime=memory uses bandwidth
+ peak_bandwidth_key: dram  # 1.8 TB/s GDDR7 on RTX PRO 6000 Blackwell
+
+ tolerance:
+   bfloat16: 0.02
+   float16: 0.02
+
+ # Forbidden ops -- using any of these in solution.py fails correctness post-hoc.
+ # vllm.attention / flashinfer.batch_decode_with_paged_kv_cache: dispatching to
+ # the SOTA library is the trivial cheat. SDPA is also banned because the agent
+ # could gather pages then call SDPA and inherit FlashAttention "for free".
+ forbidden:
+   - "vllm.attention"
+   - "flashinfer.batch_decode_with_paged_kv_cache"
+   - "flashinfer.decode"
+   - "torch.nn.functional.scaled_dot_product_attention"
+   - "F.scaled_dot_product_attention"
+
+ sota:
+   name: "vLLM PagedAttention v2 / FlashInfer batch_decode_with_paged_kv_cache"
+   url: "https://github.com/vllm-project/vllm/blob/main/csrc/attention/paged_attention_v2.cu"
+   function: "vllm._C.ops.paged_attention_v2"
+   deps:
+     - "vllm>=0.6.0"
+     - "flashinfer>=0.2.0"
+   # Decode is memory-bound; reference reaches ~70-85% of peak HBM bandwidth on H100.
+   reference_bandwidth_gbps_h100: 2400
+
+ num_correct_trials: 3
+ num_perf_trials: 30
03_paged_attention/reference.py ADDED
@@ -0,0 +1,144 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+ """Naive PyTorch paged-attention decode reference (correctness oracle, not SOTA).
+
+ Single-query decode: each batch element has a query of shape (num_heads, head_dim)
+ and attends over a KV cache of `seq_len[b]` tokens stored as fixed-size pages in
+ a global pool. Pages for batch element b are listed in `block_table[b]`.
+
+ The reference performs the slow path:
+   1. Gather pages -> contiguous (seq_len, num_kv_heads, head_dim) per batch element.
+   2. Repeat KV heads for grouped-query (broadcast num_kv_heads -> num_heads).
+   3. Manual softmax(QK^T / sqrt(d)) @ V in fp32, cast back to bf16.
+
+ This avoids torch.nn.functional.scaled_dot_product_attention (which is on the
+ forbidden list) so the agent cannot dispatch through SDPA either.
+ """
+ import math
+
+ import torch
+ import torch.nn as nn
+
+ OP_TYPE = "attention"
+ SUPPORTED_PRECISIONS = ["bf16"]
+ HARDWARE_REQUIRED = ["RTX_PRO_6000", "H100", "B200"]
+
+
+ # --- Shape knobs (overridden by check.py / benchmark.py from shapes.py) ----
+ BATCH = 8
+ NUM_HEADS = 32
+ NUM_KV_HEADS = 8
+ HEAD_DIM = 128
+ SEQ_LEN = 1024
+ PAGE_SIZE = 16
+
+
+ class Model(nn.Module):
+     """Single-query paged attention decode.
+
+     Forward inputs (all on device):
+       query:       (batch, num_heads, head_dim) bf16
+       kv_cache:    (num_blocks, page_size, num_kv_heads, head_dim * 2)
+                    Layout: last dim packs [K | V] so a single gather pulls both.
+                    Stored as bf16.
+       block_table: (batch, max_blocks) int32
+       seq_lens:    (batch,) int32
+
+     Output:
+       attn_out: (batch, num_heads, head_dim) bf16
+     """
+
+     def __init__(
+         self,
+         batch: int,
+         num_heads: int,
+         num_kv_heads: int,
+         head_dim: int,
+         seq_len: int,
+         page_size: int,
+     ):
+         super().__init__()
+         assert num_heads % num_kv_heads == 0, "num_heads must be a multiple of num_kv_heads (GQA)"
+         self.batch = batch
+         self.num_heads = num_heads
+         self.num_kv_heads = num_kv_heads
+         self.head_dim = head_dim
+         self.seq_len = seq_len
+         self.page_size = page_size
+         self.group_size = num_heads // num_kv_heads
+         self.scale = 1.0 / math.sqrt(head_dim)
+
+         # No learned parameters: everything flows through get_inputs(). We keep
+         # an empty buffer so state_dict() round-trips trivially between reference
+         # and solution.
+         self.register_buffer("_dummy", torch.zeros(1, dtype=torch.bfloat16), persistent=False)
+
+     def forward(
+         self,
+         query: torch.Tensor,
+         kv_cache: torch.Tensor,
+         block_table: torch.Tensor,
+         seq_lens: torch.Tensor,
+     ) -> torch.Tensor:
+         B, H, D = query.shape
+         Hkv = self.num_kv_heads
+         G = self.group_size
+         P = self.page_size
+
+         out = torch.empty(B, H, D, dtype=query.dtype, device=query.device)
+
+         for b in range(B):
+             L = int(seq_lens[b].item())
+             num_pages = (L + P - 1) // P
+             pages = block_table[b, :num_pages].long()
+             # Gather: (num_pages, page_size, num_kv_heads, 2*head_dim)
+             kv = kv_cache.index_select(0, pages)
+             kv = kv.reshape(num_pages * P, Hkv, 2 * D)
+             kv = kv[:L]  # mask trailing padded slots
+             k = kv[..., :D]  # (L, Hkv, D)
+             v = kv[..., D:]  # (L, Hkv, D)
+
+             # Broadcast KV heads to query heads (GQA): (L, H, D)
+             k = k.repeat_interleave(G, dim=1)
+             v = v.repeat_interleave(G, dim=1)
+
+             q = query[b]  # (H, D)
+             # Attention in fp32 for the oracle.
+             qf = q.float()
+             kf = k.float()
+             vf = v.float()
+             # scores: (H, L) = (H, D) @ (L, H, D) -> per-head dot
+             scores = torch.einsum("hd,lhd->hl", qf, kf) * self.scale
+             probs = torch.softmax(scores, dim=-1)
+             # out: (H, D) = sum_l probs[h, l] * v[l, h, :]
+             o = torch.einsum("hl,lhd->hd", probs, vf)
+             out[b] = o.to(query.dtype)
+
+         return out
+
+
+ def get_inputs():
+     """Build random paged inputs for the current module-level shape knobs."""
+     B = BATCH
+     H = NUM_HEADS
+     Hkv = NUM_KV_HEADS
+     D = HEAD_DIM
+     L = SEQ_LEN
+     P = PAGE_SIZE
+
+     pages_per_seq = (L + P - 1) // P
+     # Keep the global pool larger than strictly needed and shuffle assignments
+     # so the block_table actually exercises non-contiguous gather.
+     total_pages = max(B * pages_per_seq + 8, 64)
+
+     query = torch.randn(B, H, D, dtype=torch.bfloat16) * 0.1
+     kv_cache = torch.randn(total_pages, P, Hkv, 2 * D, dtype=torch.bfloat16) * 0.1
+
+     perm = torch.randperm(total_pages)[: B * pages_per_seq].reshape(B, pages_per_seq).int()
+     # Pad to pages_per_seq columns; for fixed-seq-len shapes this is exact.
+     block_table = perm.contiguous()
+     seq_lens = torch.full((B,), L, dtype=torch.int32)
+
+     return [query, kv_cache, block_table, seq_lens]
+
+
+ def get_init_inputs():
+     return [BATCH, NUM_HEADS, NUM_KV_HEADS, HEAD_DIM, SEQ_LEN, PAGE_SIZE]
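For intuition, a minimal vectorized equivalent of the per-batch gather loop above, assuming every row uses the same fixed seq_len (which is what get_inputs() generates); `gathered_attention` is a hypothetical helper name for illustration, not part of the commit:

    import torch

    def gathered_attention(query, kv_cache, block_table, L, scale):
        # kv_cache[block_table] gathers whole pages per batch element:
        # (B, pages, P, Hkv, 2D) -> flatten pages -> (B, L, Hkv, 2D)
        B, H, D = query.shape
        kv = kv_cache[block_table.long()].flatten(1, 2)[:, :L]
        k, v = kv[..., :D], kv[..., D:]
        G = H // k.shape[-2]                        # GQA group size
        k = k.repeat_interleave(G, dim=2).float()   # (B, L, H, D)
        v = v.repeat_interleave(G, dim=2).float()
        scores = torch.einsum("bhd,blhd->bhl", query.float(), k) * scale
        probs = torch.softmax(scores, dim=-1)
        return torch.einsum("bhl,blhd->bhd", probs, v).to(query.dtype)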
03_paged_attention/shapes.py ADDED
@@ -0,0 +1,18 @@
+ """Shape sweep for paged attention decode.
+
+ Mix targets:
+   - small batch / long context (Llama-3 8B-style decode)
+   - large batch / medium context (server batched decode)
+   - GQA wide ratio (Llama-3 70B: 64 heads / 8 kv-heads)
+   - non-power-of-2 seq_len (forces predicated tail handling)
+   - head_dim=64 small-head case
+ """
+
+ SHAPES = [
+     # (B, H, Hkv, D, L, P)
+     {"batch": 8, "num_heads": 32, "num_kv_heads": 8, "head_dim": 128, "seq_len": 1024, "page_size": 16},
+     {"batch": 32, "num_heads": 32, "num_kv_heads": 8, "head_dim": 128, "seq_len": 2048, "page_size": 16},
+     {"batch": 4, "num_heads": 64, "num_kv_heads": 8, "head_dim": 128, "seq_len": 4096, "page_size": 16},
+     {"batch": 16, "num_heads": 32, "num_kv_heads": 8, "head_dim": 128, "seq_len": 1535, "page_size": 16},  # non-pow2
+     {"batch": 8, "num_heads": 16, "num_kv_heads": 4, "head_dim": 64, "seq_len": 2000, "page_size": 16},  # small-D, non-pow2
+ ]
03_paged_attention/sota.py ADDED
@@ -0,0 +1,84 @@
+ """SOTA reference for paged-attention decode.
+
+ Preference order:
+   1. FlashInfer's BatchDecodeWithPagedKVCacheWrapper (preferred -- portable,
+      supports SM120, GQA, arbitrary head_dim).
+   2. vLLM's paged_attention_v2 CUDA op (requires its KV-cache layout, more
+      finicky). Only the FlashInfer path is wired up below; sota_forward
+      raises if it is unavailable.
+
+ If FlashInfer is not importable, is_available() returns False and the
+ benchmark just reports eager + compiled + solution.
+
+ Agents are FORBIDDEN from importing these in solution.py (see problem.yaml).
+ This file is only for the benchmark's reference line.
+ """
+ from __future__ import annotations
+
+ import torch
+
+
+ def _try_flashinfer(
+     query: torch.Tensor,
+     kv_cache: torch.Tensor,
+     block_table: torch.Tensor,
+     seq_lens: torch.Tensor,
+     num_kv_heads: int,
+     head_dim: int,
+     page_size: int,
+ ) -> torch.Tensor | None:
+     try:
+         import flashinfer  # noqa: F401
+         from flashinfer.decode import BatchDecodeWithPagedKVCacheWrapper
+     except Exception:
+         return None
+
+     B, H, D = query.shape
+     # FlashInfer expects K and V as separate (num_blocks, page_size, num_kv_heads, head_dim) tensors.
+     # Our reference packs [K|V] on the last dim -- split here.
+     k_cache = kv_cache[..., :D].contiguous()
+     v_cache = kv_cache[..., D:].contiguous()
+
+     workspace = torch.empty(128 * 1024 * 1024, dtype=torch.uint8, device=query.device)
+     wrapper = BatchDecodeWithPagedKVCacheWrapper(workspace, kv_layout="NHD")
+
+     # Build the indptr / indices / last_page_len schedule.
+     pages_per_seq = ((seq_lens + page_size - 1) // page_size).int()
+     indptr = torch.zeros(B + 1, dtype=torch.int32, device=query.device)
+     indptr[1:] = torch.cumsum(pages_per_seq, dim=0)
+     indices_list = [block_table[b, : int(pages_per_seq[b].item())] for b in range(B)]
+     indices = torch.cat(indices_list).int()
+     last_page_len = ((seq_lens - 1) % page_size + 1).int()
+
+     wrapper.plan(
+         indptr, indices, last_page_len,
+         num_qo_heads=H,
+         num_kv_heads=num_kv_heads,
+         head_dim=D,
+         page_size=page_size,
+         data_type=query.dtype,
+     )
+     return wrapper.run(query, (k_cache, v_cache))
+
+
+ def sota_forward(
+     query: torch.Tensor,
+     kv_cache: torch.Tensor,
+     block_table: torch.Tensor,
+     seq_lens: torch.Tensor,
+     num_kv_heads: int,
+     head_dim: int,
+     page_size: int,
+ ) -> torch.Tensor:
+     out = _try_flashinfer(query, kv_cache, block_table, seq_lens, num_kv_heads, head_dim, page_size)
+     if out is not None:
+         return out
+     raise RuntimeError("No SOTA backend available (flashinfer not installed)")
+
+
+ def is_available() -> bool:
+     try:
+         import flashinfer  # noqa: F401
+         from flashinfer.decode import BatchDecodeWithPagedKVCacheWrapper  # noqa: F401
+         return True
+     except Exception:
+         return False
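The indptr / indices / last_page_len schedule built above is easiest to see on concrete numbers. A worked example (the seq_lens values are made up for illustration):

    import torch

    seq_lens = torch.tensor([5, 16, 17], dtype=torch.int32)
    page_size = 16
    pages_per_seq = (seq_lens + page_size - 1) // page_size  # [1, 1, 2]
    indptr = torch.zeros(len(seq_lens) + 1, dtype=torch.int32)
    indptr[1:] = torch.cumsum(pages_per_seq, dim=0)          # [0, 1, 2, 4]
    last_page_len = (seq_lens - 1) % page_size + 1           # [5, 16, 1]
    # The third sequence spans positions indptr[2]:indptr[3] of `indices`,
    # and only the first token of its second page is valid.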
04_kahan_softmax/PROMPT.txt ADDED
@@ -0,0 +1,7 @@
+ I need you to write a numerically tight softmax for the RTX PRO 6000 (SM120 Blackwell, GDDR7, 1.8 TB/s). The reference implementation is in reference.py and your kernel goes in solution.py. You can make whatever mess you want in this directory — scratch files, intermediate kernels, build artifacts, profiling traces — but the final answer has to be in solution.py with the same Model, get_inputs, and get_init_inputs interface as reference.py.
+
+ The op is softmax along the last dim of a 2D fp32 tensor. The reference computes ground truth in fp64 and casts back to fp32, and you have to match it within atol=rtol=1e-5 — that's a tighter bar than default fp32 (1e-4) on purpose. With long reductions naive fp16 accumulation drifts past this; fp32 accumulation with subtract-max stability is enough on most shapes; on the largest vocabs you may need compensated (Kahan-style) summation to stay under the bar. The shapes you have to handle are batch=32 vocab=4096 (sanity), batch=16 vocab=32768 (GPT-2 class), batch=8 vocab=131072 (Llama-3 vocab), batch=4 vocab=262144 (256K, DeepSeek-V3 / Gemma-3 class — naive fp16 sum drifts past 1e-5 here), and batch=8 vocab=131072 with extreme logits (a few very large positives per row to stress max-subtract — exping before subtracting overflows). The check and benchmark scripts handle the extreme-flag input generation; you just need to read x and return y.
+
+ This needs to be a real custom kernel. Don't import or call torch.nn.functional.softmax, torch.softmax, F.softmax, liger_kernel.softmax, liger_kernel.transformers.softmax, or any .softmax( method on a tensor. Try Triton, CUDA C++ via load_inline, or inline PTX — whatever fits. Anything you're uncertain about, look up PTX docs, clone Liger-Kernel or other reference repos, read library headers, and investigate.
+
+ Your flywheel is implement, profile (ncu, nsys, torch.profiler — whatever's useful) and time it with benchmark.py, verify correctness by running `python check.py` and reading the output, then iterate. Don't substitute your own one-off correctness snippets for check.py — it iterates over every shape, your spot-check almost certainly won't. If `python check.py` hasn't printed PASS, you're not done. Take as long as you need to actually push the number up.
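Since the prompt leans on compensated summation without spelling it out, here is a minimal scalar sketch of the Kahan update (plain Python for clarity; a real kernel would keep the (total, comp) pair per thread in registers and reduce pairwise across the warp):

    def kahan_sum(values):
        total = 0.0
        comp = 0.0                    # running compensation for lost low-order bits
        for v in values:
            y = v - comp              # re-inject what the last add dropped
            t = total + y             # low-order bits of y are lost here...
            comp = (t - total) - y    # ...and recovered into comp
            total = t
        return total

In the softmax denominator this replaces the plain `s += exp(x - m)` accumulation.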
04_kahan_softmax/benchmark.py ADDED
@@ -0,0 +1,135 @@
+ """Roofline benchmark for Kahan-corrected softmax.
+
+ For each shape: times eager reference, compiled reference, SOTA (if
+ available), and the agent's solution. Reports achieved TFLOPS, GB/s, and
+ peak_fraction. Softmax is memory-bound, so the score is GB/s / peak_dram.
+
+ Output lines the harness picks up:
+   shape=<idx> variant=<name> tflops=<N> gbps=<N> ms=<N>
+   peak_fraction: <N>   (geomean over shapes of solution's peak_fraction)
+ """
+ import sys
+ from math import exp, log
+ from pathlib import Path
+
+ import torch
+ import yaml
+
+ REPO_ROOT = Path(__file__).resolve().parents[2]
+ sys.path.insert(0, str(REPO_ROOT))
+
+ from src.eval.roofline import compute_gbps, compute_tflops, peak_fraction  # noqa: E402
+ from src.eval.timing import time_fn  # noqa: E402
+ from src.hardware import get as get_hw  # noqa: E402
+
+
+ def _eval_formula(expr: str, vars: dict) -> float:
+     return float(eval(expr, {"__builtins__": {}}, vars))
+
+
+ def _make_inputs(batch: int, vocab: int, extreme: bool) -> torch.Tensor:
+     if extreme:
+         x = torch.randn(batch, vocab) * 2.0
+         idx = torch.randint(0, vocab, (batch, 4))
+         x.scatter_(1, idx, 30.0)
+     else:
+         x = torch.randn(batch, vocab) * 4.0
+     return x.to(torch.float32)
+
+
+ def main():
+     import reference
+     import shapes
+     import solution
+
+     meta = yaml.safe_load(Path("problem.yaml").read_text())
+     hw = get_hw(meta["hardware"][0])
+     peak_tflops = hw.peak_tflops_dense.get(meta["peak_tflops_key"], 0.0)
+     peak_gbps = hw.peak_bandwidth_gb_s
+     regime = meta.get("regime", "memory")
+     flops_formula = meta["flops_formula"]
+     bytes_formula = meta["bytes_formula"]
+     num_perf_trials = int(meta.get("num_perf_trials", 30))
+
+     device = torch.device("cuda:0")
+
+     try:
+         import sota as sota_mod
+         has_sota = sota_mod.is_available()
+     except Exception:
+         has_sota = False
+
+     sol_fractions: list[float] = []
+
+     for shape_idx, shape in enumerate(shapes.SHAPES):
+         batch = shape["batch"]
+         vocab = shape["vocab"]
+         extreme = shape.get("extreme", False)
+
+         reference.BATCH = batch
+         reference.VOCAB = vocab
+
+         init_args = reference.get_init_inputs()
+         ref_model = reference.Model(*init_args).to(device).eval()
+         sol_model = solution.Model(*init_args).to(device).eval()
+         sd = ref_model.state_dict()
+         try:
+             sol_model.load_state_dict(sd, strict=True)
+         except RuntimeError:
+             pass
+
+         torch.manual_seed(2026)
+         x = _make_inputs(batch, vocab, extreme).to(device)
+         inputs = [x]
+
+         flops = _eval_formula(flops_formula, {"batch": batch, "vocab": vocab})
+         bytes_moved = _eval_formula(bytes_formula, {"batch": batch, "vocab": vocab})
+
+         ms_eager = time_fn(ref_model, inputs, iters=num_perf_trials)
+
+         try:
+             comp = torch.compile(ref_model, mode="reduce-overhead")
+             ms_comp = time_fn(comp, inputs, iters=num_perf_trials)
+         except Exception as e:
+             print(f"  [compile fallback] {type(e).__name__}: {e}")
+             ms_comp = None
+
+         ms_sota = None
+         if has_sota:
+             try:
+                 def sota_fn(t):
+                     return sota_mod.sota_forward(t)
+                 ms_sota = time_fn(sota_fn, inputs, iters=num_perf_trials)
+             except Exception as e:
+                 print(f"  [sota unavailable] {type(e).__name__}: {e}")
+
+         ms_sol = time_fn(sol_model, inputs, iters=num_perf_trials)
+
+         for variant, ms in [
+             ("eager", ms_eager),
+             ("compiled", ms_comp),
+             ("sota", ms_sota),
+             ("solution", ms_sol),
+         ]:
+             if ms is None:
+                 continue
+             tflops = compute_tflops(flops, ms)
+             gbps = compute_gbps(bytes_moved, ms)
+             print(f"shape={shape_idx} variant={variant} tflops={tflops:.3f} gbps={gbps:.3f} ms={ms:.3f}")
+
+         sol_tflops = compute_tflops(flops, ms_sol)
+         sol_gbps = compute_gbps(bytes_moved, ms_sol)
+         if regime == "compute":
+             frac = peak_fraction(sol_tflops, peak_tflops)
+         else:
+             frac = peak_fraction(sol_gbps, peak_gbps)
+         sol_fractions.append(frac)
+         print(f"shape={shape_idx} solution_peak_fraction={frac:.4f}")
+
+     gmean = exp(sum(log(max(f, 1e-9)) for f in sol_fractions) / len(sol_fractions))
+     print(f"peak_fraction: {gmean:.4f}")
+     print(f"RESULT: {'OK' if gmean >= 0.1 else 'LOW'}")
+
+
+ if __name__ == "__main__":
+     main()
04_kahan_softmax/check.py ADDED
@@ -0,0 +1,126 @@
+ """Correctness runner for Kahan-corrected softmax.
+
+ Runs solution.Model vs reference.Model across all shapes in shapes.py, 3
+ seeds each, with the tight (1e-5) fp32 tolerance from problem.yaml. Also
+ rejects forbidden ops via grep.
+ """
+ import re
+ import sys
+ from pathlib import Path
+
+ import torch
+ import yaml
+
+ # Make the repo's src/ importable
+ REPO_ROOT = Path(__file__).resolve().parents[2]
+ sys.path.insert(0, str(REPO_ROOT))
+
+ from src.eval.correctness import check_correctness  # noqa: E402
+
+
+ def _make_inputs(batch: int, vocab: int, extreme: bool, seed: int) -> torch.Tensor:
+     g = torch.Generator().manual_seed(seed)
+     if extreme:
+         # Adversarial: most logits are mild but a handful per row are huge.
+         # If the kernel forgets to subtract the row-max before exp, this
+         # row overflows fp32 and produces NaN/Inf. If it accumulates in
+         # fp16, the long tail of small exp() values is lost beneath the
+         # tolerance threshold.
+         x = torch.randn(batch, vocab, generator=g) * 2.0
+         # Spike: 4 very large positive logits per row.
+         idx = torch.randint(0, vocab, (batch, 4), generator=g)
+         x.scatter_(1, idx, 30.0)
+     else:
+         x = torch.randn(batch, vocab, generator=g) * 4.0
+     return x.to(torch.float32)
+
+
+ def main():
+     try:
+         import reference
+         import shapes
+         import solution
+     except Exception as e:
+         print(f"FAIL: import error: {e}")
+         sys.exit(1)
+
+     problem_yaml = Path("problem.yaml")
+     meta = yaml.safe_load(problem_yaml.read_text()) if problem_yaml.exists() else {}
+
+     # --- Forbidden-op check ------------------------------------------------
+     sol_src = Path("solution.py").read_text() if Path("solution.py").exists() else ""
+     for forbidden in meta.get("forbidden", []):
+         pat = re.escape(forbidden)
+         if re.search(pat, sol_src):
+             print(f"FAIL: forbidden op used: {forbidden}")
+             sys.exit(1)
+
+     device = torch.device("cuda:0")
+     tol_override = meta.get("tolerance") or None
+
+     # --- Per-shape correctness --------------------------------------------
+     for shape_idx, shape in enumerate(shapes.SHAPES):
+         batch = shape["batch"]
+         vocab = shape["vocab"]
+         extreme = shape.get("extreme", False)
+
+         reference.BATCH = batch
+         reference.VOCAB = vocab
+
+         init_args = reference.get_init_inputs()
+         ref_model = reference.Model(*init_args).to(device).eval()
+         sol_model = solution.Model(*init_args).to(device).eval()
+
+         sd = ref_model.state_dict()
+         try:
+             sol_model.load_state_dict(sd, strict=True)
+         except RuntimeError as e:
+             print(f"FAIL: state_dict mismatch at shape {shape_idx} ({shape}): {e}")
+             sys.exit(1)
+
+         for seed in (42, 123, 456):
+             torch.manual_seed(seed)
+             torch.cuda.manual_seed_all(seed)
+             x = _make_inputs(batch, vocab, extreme, seed).to(device)
+
+             with torch.no_grad():
+                 ref_out = ref_model(x)
+                 sol_out = sol_model(x)
+
+             ok, msg = check_correctness(
+                 ref_out, sol_out,
+                 dtype=torch.float32,
+                 override=tol_override,
+             )
+             if not ok:
+                 print(f"FAIL: shape {shape_idx} {shape} seed {seed}: {msg}")
+                 sys.exit(1)
+
+     _emit_framework_label()
+     print("PASS")
+
+
+ def _emit_framework_label():
+     """Write framework.txt with the detected kernel framework."""
+     patterns = [
+         ("ptx", r"asm\s+volatile|asm\s*\(|mma\.sync|tcgen05\."),
+         ("cutlass3", r"\bcute::|cutlass/gemm/collective|cutlass::arch::Sm(9|10|12)"),
+         ("cutlass2", r"cutlass/gemm/device/gemm|cutlass::gemm::device"),
+         ("cuda_wmma", r"\bnvcuda::wmma\b|wmma::fragment"),
+         ("triton", r"import\s+triton\b|@triton\.jit|\btl\.dot\b"),
+         ("cuda_raw", r"torch\.utils\.cpp_extension\.load_inline|__global__\s+void"),
+     ]
+     sol = Path("solution.py")
+     if not sol.exists():
+         return
+     code = sol.read_text()
+     label = "unknown"
+     for name, pat in patterns:
+         if re.search(pat, code):
+             label = name
+             break
+     Path("framework.txt").write_text(label + "\n")
+
+
+ if __name__ == "__main__":
+     main()
04_kahan_softmax/problem.yaml ADDED
@@ -0,0 +1,43 @@
+ name: 04_kahan_softmax
+ display_name: "Kahan-corrected Softmax"
+ precision: fp32
+ regime: memory  # softmax is bandwidth-bound: 2 passes over the input tensor
+
+ # Softmax FLOPs: per-element exp + 2 reductions + divide. Roughly 5 flops/elt.
+ flops_formula: "5 * batch * vocab"
+
+ # Bytes moved: read x once, write y once. Both fp32.
+ bytes_formula: "batch * vocab * 4 + batch * vocab * 4"
+
+ hardware: [RTX_PRO_6000]
+ peak_tflops_key: fp32
+ peak_bandwidth_key: dram
+
+ # TIGHTER than default (fp32 default is 1e-4). This problem exists
+ # specifically to test whether the agent uses compensated summation, so
+ # we squeeze the tolerance to 1e-5 — naive fp16 sum across 256K elements
+ # drifts past this; fp32 accumulation passes; Kahan/fp32 always passes.
+ tolerance:
+   "torch.float32": {"atol": 1.0e-5, "rtol": 1.0e-5}
+
+ # Forbidden ops — block the obvious "just call the library" cheats. The
+ # agent must implement softmax themselves with explicit (compensated)
+ # summation logic.
+ forbidden:
+   - "torch.nn.functional.softmax"
+   - "torch.softmax"
+   - "F.softmax"
+   - "liger_kernel.softmax"
+   - "liger_kernel.transformers.softmax"
+   - ".softmax("
+
+ sota:
+   name: "Liger-Kernel Softmax (Triton)"
+   url: "https://github.com/linkedin/Liger-Kernel"
+   function: "liger_kernel.ops.softmax.LigerSoftmaxFunction"
+   deps:
+     - "liger-kernel>=0.5.0"
+   reference_throughput_gbps_h100: 2800
+
+ num_correct_trials: 3
+ num_perf_trials: 30
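The fp16-drift claim behind that tolerance is easy to reproduce. A sequential fp16 accumulation (modeling a per-thread running sum; the value is an illustrative stand-in for one softmax term from a 256K-wide row) stalls once the total's ulp exceeds the addend:

    import numpy as np

    v = np.float16(1.0 / 262144.0)   # one "probability" from a 256K-row softmax
    total = np.float16(0.0)
    for _ in range(262144):
        total = np.float16(total + v)
    print(total)  # ~0.008 instead of 1.0: additions vanish once ulp(total) > v

Tree reductions fare better than this worst case, but the 1e-5 bar still rules out fp16 partial sums on the 256K shape.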
04_kahan_softmax/reference.py ADDED
@@ -0,0 +1,52 @@
+ """Naive softmax over the last dim, computed in fp64 for ground-truth.
+
+ The reference deliberately runs in float64 so that fp16 / fp32 accumulation
+ drift in agent solutions is exposed by the tight tolerance in problem.yaml.
+ The agent's job is to produce an fp32 softmax whose values match this
+ double-precision reference within atol=rtol=1e-5 — this requires either
+ fp32 accumulation or compensated (Kahan) summation when vocab is large.
+ """
+ import torch
+ import torch.nn as nn
+
+ OP_TYPE = "softmax"
+ SUPPORTED_PRECISIONS = ["fp32"]
+ HARDWARE_REQUIRED = ["RTX_PRO_6000", "H100", "B200"]
+
+
+ class Model(nn.Module):
+     """y = softmax(x, dim=-1) computed in fp64 then returned as fp32.
+
+     No learned parameters — softmax is parameter-free. We still expose an
+     empty state_dict so the harness's strict load_state_dict matches.
+     """
+
+     def __init__(self, batch: int, vocab: int):
+         super().__init__()
+         self.batch = batch
+         self.vocab = vocab
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         # Promote to fp64 for the ground-truth pathway. Even with double
+         # precision we still subtract the row-max for stability.
+         x64 = x.to(torch.float64)
+         m = x64.amax(dim=-1, keepdim=True)
+         e = torch.exp(x64 - m)
+         s = e.sum(dim=-1, keepdim=True)
+         return (e / s).to(torch.float32)
+
+
+ # Default shape; overridden per-iteration by check.py / benchmark.py.
+ BATCH = 8
+ VOCAB = 32768
+
+
+ def get_inputs():
+     # Mix of moderate-magnitude logits. The shapes module supplies an
+     # extreme-magnitude variant separately to stress numerical stability.
+     x = torch.randn(BATCH, VOCAB, dtype=torch.float32) * 4.0
+     return [x]
+
+
+ def get_init_inputs():
+     return [BATCH, VOCAB]
04_kahan_softmax/shapes.py ADDED
@@ -0,0 +1,24 @@
+ """Shape sweep for Kahan-corrected softmax.
+
+ The point of this problem is numerical accuracy on long reductions. Shapes
+ mix typical LLM vocab sizes with deliberately adversarial regimes:
+
+   - small vocab (sanity check; naive fp32 should pass)
+   - Llama3 vocab 128K (real-world, where fp16 accumulation starts to drift)
+   - 256K (DeepSeek-V3 / Gemma-3 class vocab; naive fp16 sum DOES drift past
+     the 1e-5 tolerance — this row is what proves Kahan was needed)
+   - extreme-logit edge case (large positive logits stress max-subtract +
+     summation; if the implementation accidentally exps before subtracting
+     max, this row overflows)
+
+ The 'extreme' flag is read by check.py to switch input generation to a
+ distribution that produces a few very large logits per row.
+ """
+
+ SHAPES = [
+     {"batch": 32, "vocab": 4096, "extreme": False},    # sanity
+     {"batch": 16, "vocab": 32768, "extreme": False},   # GPT-2 class
+     {"batch": 8, "vocab": 131072, "extreme": False},   # Llama3 vocab
+     {"batch": 4, "vocab": 262144, "extreme": False},   # 256K — Kahan needed
+     {"batch": 8, "vocab": 131072, "extreme": True},    # extreme logits edge
+ ]
04_kahan_softmax/sota.py ADDED
@@ -0,0 +1,45 @@
+ """SOTA reference for last-dim softmax.
+
+ Preference order:
+   1. liger-kernel's Triton softmax (LigerSoftmaxFunction) — fused, fp32-acc
+   2. torch.compile(torch.softmax) — Inductor generates a Triton kernel that
+      is competitive on bandwidth-bound reductions
+
+ Agents are FORBIDDEN from importing either of these in solution.py (see
+ problem.yaml.forbidden). This file is only the benchmark's reference line.
+ """
+ from __future__ import annotations
+
+ import torch
+
+ _compiled_softmax = None
+
+
+ def _liger_softmax(x: torch.Tensor) -> torch.Tensor | None:
+     try:
+         from liger_kernel.ops.softmax import LigerSoftmaxFunction
+         return LigerSoftmaxFunction.apply(x)
+     except Exception:
+         return None
+
+
+ def _compiled(x: torch.Tensor) -> torch.Tensor:
+     global _compiled_softmax
+     if _compiled_softmax is None:
+         _compiled_softmax = torch.compile(
+             lambda t: torch.softmax(t, dim=-1),
+             mode="reduce-overhead",
+         )
+     return _compiled_softmax(x)
+
+
+ def sota_forward(x: torch.Tensor) -> torch.Tensor:
+     """Best-available softmax reference. x: (batch, vocab) fp32."""
+     out = _liger_softmax(x)
+     if out is not None:
+         return out
+     return _compiled(x)
+
+
+ def is_available() -> bool:
+     return True  # torch.compile fallback is always available
05_topk_bitonic/PROMPT.txt ADDED
@@ -0,0 +1,7 @@
+ I need you to optimize a top-k kernel for the RTX PRO 6000 (SM120 Blackwell, GDDR7, 1.8 TB/s). The reference implementation is in reference.py and your kernel goes in solution.py. You can make whatever mess you want in this directory — scratch files, intermediate kernels, build artifacts, profiling traces — but the final answer has to be in solution.py with the same Model, get_inputs, and get_init_inputs interface as reference.py.
+
+ The input is a 2D fp32 tensor and you return the top-k values and their int64 indices along the last dim, sorted descending, same contract as torch.topk(largest=True, sorted=True). Correctness tolerance is 1e-4 abs/rel on the values; index ties are handled leniently — what matters is the returned values match the reference to tolerance, not that the index sets are bitwise identical. The shapes you have to handle are batch=1 n=131072 k=64 (decoder vocab top-k over a Llama-size vocabulary), batch=64 n=8192 k=8 (prefill / attention top-k), batch=32 n=16384 k=32 (mid-size batched), batch=16 n=12000 k=16 (non-power-of-two n, which bitonic networks don't naturally want), and batch=128 n=4096 k=1 (batched argmax).
+
+ This needs to be a real custom kernel — CUDA C++ via torch.utils.cpp_extension.load_inline, Triton, inline PTX, or CUTLASS, whatever you think fits. Don't reach for torch.topk, torch.kthvalue, torch.sort, or torch.argsort, or any of their Tensor.* / torch.ops.aten.* variants; they're off-limits and using them fails correctness. Anything you're uncertain about, look up PTX docs, clone CUTLASS or other reference repos, read library headers, and investigate.
+
+ Your flywheel is implement, profile (ncu, nsys, torch.profiler — whatever's useful) and time it with benchmark.py, verify correctness by running `python check.py` and reading the output, then iterate. Don't substitute your own one-off correctness snippets for check.py — it iterates over every shape, your spot-check almost certainly won't. If `python check.py` hasn't printed PASS, you're not done. Take as long as you need to actually push the number up.
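For a concrete picture of the comparator network the prompt names, a pure-Python descending bitonic sort (illustrative only: a kernel runs the same (k, j) schedule with one thread per compare-exchange, and typically handles non-power-of-two rows such as n=12000 by padding with -inf):

    def bitonic_sort_desc(a):
        n = len(a)
        assert n & (n - 1) == 0, "the network wants a power-of-two size"
        k = 2
        while k <= n:          # stage: merge runs into bitonic runs of length k
            j = k // 2
            while j >= 1:      # substage: compare-exchange at distance j
                for i in range(n):
                    partner = i ^ j
                    if partner > i:
                        want_desc = (i & k) == 0
                        if (a[i] < a[partner]) == want_desc:
                            a[i], a[partner] = a[partner], a[i]
                j //= 2
            k *= 2
        return a

Top-k then reads the first k entries; in practice you only sort per-block tiles and merge candidate lists rather than the whole row.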
05_topk_bitonic/benchmark.py ADDED
@@ -0,0 +1,122 @@
+ """Roofline benchmark for TopK.
+
+ For each shape: times eager reference (torch.topk), compiled reference, SOTA
+ (also torch.topk — see sota.py), and the agent's solution. Reports achieved
+ TFLOPS, GB/s, and peak_fraction (vs DRAM bandwidth, since this is memory-bound).
+
+ Output lines the harness picks up:
+   shape=<idx> variant=<name> tflops=<N> gbps=<N> ms=<N>
+   peak_fraction: <N>   (geomean over shapes of solution's peak_fraction)
+ """
+ import sys
+ from math import exp, log
+ from pathlib import Path
+
+ import torch
+ import yaml
+
+ REPO_ROOT = Path(__file__).resolve().parents[2]
+ sys.path.insert(0, str(REPO_ROOT))
+
+ from src.eval.roofline import compute_gbps, compute_tflops, peak_fraction  # noqa: E402
+ from src.eval.timing import time_fn  # noqa: E402
+ from src.hardware import get as get_hw  # noqa: E402
+
+
+ def _eval_formula(expr: str, vars: dict) -> float:
+     return float(eval(expr, {"__builtins__": {}}, vars))
+
+
+ def main():
+     import reference
+     import shapes
+     import solution
+
+     meta = yaml.safe_load(Path("problem.yaml").read_text())
+     hw = get_hw(meta["hardware"][0])
+     peak_tflops = hw.peak_tflops_dense.get(meta["peak_tflops_key"], 0.0)
+     peak_gbps = hw.peak_bandwidth_gb_s
+     regime = meta.get("regime", "memory")
+     flops_formula = meta["flops_formula"]
+     bytes_formula = meta["bytes_formula"]
+     num_perf_trials = int(meta.get("num_perf_trials", 50))
+
+     device = torch.device("cuda:0")
+
+     try:
+         import sota as sota_mod
+         has_sota = sota_mod.is_available()
+     except Exception:
+         has_sota = False
+
+     sol_fractions: list[float] = []
+
+     for shape_idx, shape in enumerate(shapes.SHAPES):
+         reference.batch = shape["batch"]
+         reference.n = shape["n"]
+         reference.k = shape["k"]
+
+         init_args = reference.get_init_inputs()
+         ref_model = reference.Model(*init_args).to(device).eval()
+         sol_model = solution.Model(*init_args).to(device).eval()
+         sd = ref_model.state_dict()
+         try:
+             sol_model.load_state_dict(sd, strict=True)
+         except RuntimeError:
+             pass
+
+         torch.manual_seed(2026)
+         inputs = [t.to(device) for t in reference.get_inputs()]
+
+         flops = _eval_formula(flops_formula, shape)
+         bytes_moved = _eval_formula(bytes_formula, shape)
+
+         ms_eager = time_fn(ref_model, inputs, iters=num_perf_trials)
+
+         try:
+             comp = torch.compile(ref_model, mode="reduce-overhead")
+             ms_comp = time_fn(comp, inputs, iters=num_perf_trials)
+         except Exception as e:
+             print(f"  [compile fallback] {type(e).__name__}: {e}")
+             ms_comp = None
+
+         ms_sota = None
+         if has_sota:
+             try:
+                 k_val = shape["k"]
+
+                 def sota_fn(x, _k=k_val):
+                     return sota_mod.sota_forward(x, _k)
+
+                 ms_sota = time_fn(sota_fn, inputs, iters=num_perf_trials)
+             except Exception as e:
+                 print(f"  [sota unavailable] {type(e).__name__}: {e}")
+
+         ms_sol = time_fn(sol_model, inputs, iters=num_perf_trials)
+
+         for variant, ms in [
+             ("eager", ms_eager),
+             ("compiled", ms_comp),
+             ("sota", ms_sota),
+             ("solution", ms_sol),
+         ]:
+             if ms is None:
+                 continue
+             tflops = compute_tflops(flops, ms)
+             gbps = compute_gbps(bytes_moved, ms)
+             print(f"shape={shape_idx} variant={variant} tflops={tflops:.3f} gbps={gbps:.3f} ms={ms:.3f}")
+
+         sol_tflops = compute_tflops(flops, ms_sol)
+         sol_gbps = compute_gbps(bytes_moved, ms_sol)
+         if regime == "compute":
+             frac = peak_fraction(sol_tflops, peak_tflops)
+         else:
+             frac = peak_fraction(sol_gbps, peak_gbps)
+         sol_fractions.append(frac)
+         print(f"shape={shape_idx} solution_peak_fraction={frac:.4f}")
+
+     gmean = exp(sum(log(max(f, 1e-9)) for f in sol_fractions) / len(sol_fractions))
+     print(f"peak_fraction: {gmean:.4f}")
+     print(f"RESULT: {'OK' if gmean >= 0.1 else 'LOW'}")
+
+
+ if __name__ == "__main__":
+     main()
05_topk_bitonic/check.py ADDED
@@ -0,0 +1,149 @@
+ """Correctness runner for TopK.
+
+ Runs solution.Model vs reference.Model across all shapes in shapes.py, 3 seeds
+ each. Top-k correctness has two parts:
+
+   1. VALUES: sol_values must match ref_values within fp32 tol. Both are
+      returned sorted descending, so positional comparison is well-defined.
+   2. INDICES: lenient — we do NOT require sol_indices == ref_indices because
+      ties in x can yield multiple valid index sets. Instead we gather x at
+      sol_indices and check those values match ref_values within tol. This
+      catches "wrong indices" without false-failing on legitimate tie-breaks.
+
+ Also rejects forbidden ops by grep.
+ """
+ import re
+ import sys
+ from pathlib import Path
+
+ import torch
+ import yaml
+
+ REPO_ROOT = Path(__file__).resolve().parents[2]
+ sys.path.insert(0, str(REPO_ROOT))
+
+ from src.eval.correctness import check_correctness  # noqa: E402
+
+
+ def main():
+     try:
+         import reference
+         import shapes
+         import solution
+     except Exception as e:
+         print(f"FAIL: import error: {e}")
+         sys.exit(1)
+
+     problem_yaml = Path("problem.yaml")
+     meta = yaml.safe_load(problem_yaml.read_text()) if problem_yaml.exists() else {}
+
+     # --- Forbidden-op check ------------------------------------------------
+     sol_src = Path("solution.py").read_text() if Path("solution.py").exists() else ""
+     for forbidden in meta.get("forbidden", []):
+         pat = re.escape(forbidden)
+         if re.search(pat, sol_src):
+             print(f"FAIL: forbidden op used: {forbidden}")
+             sys.exit(1)
+
+     device = torch.device("cuda:0")
+     tol_override = meta.get("tolerance") or None
+
+     all_shapes = shapes.SHAPES
+     for shape_idx, shape in enumerate(all_shapes):
+         reference.batch = shape["batch"]
+         reference.n = shape["n"]
+         reference.k = shape["k"]
+
+         init_args = reference.get_init_inputs()
+         ref_model = reference.Model(*init_args).to(device).eval()
+         sol_model = solution.Model(*init_args).to(device).eval()
+
+         sd = ref_model.state_dict()
+         try:
+             sol_model.load_state_dict(sd, strict=True)
+         except RuntimeError as e:
+             print(f"FAIL: state_dict mismatch at shape {shape_idx} ({shape}): {e}")
+             sys.exit(1)
+
+         for seed in (42, 123, 456):
+             torch.manual_seed(seed)
+             torch.cuda.manual_seed_all(seed)
+             inputs = [t.to(device) for t in reference.get_inputs()]
+
+             with torch.no_grad():
+                 ref_values, ref_indices = ref_model(*inputs)
+                 sol_out = sol_model(*inputs)
+
+             if not (isinstance(sol_out, (tuple, list)) and len(sol_out) == 2):
+                 print(f"FAIL: shape {shape_idx} {shape} seed {seed}: "
+                       f"solution must return (values, indices); got {type(sol_out)}")
+                 sys.exit(1)
+             sol_values, sol_indices = sol_out
+
+             # Shape checks
+             expected_shape = (shape["batch"], shape["k"])
+             if tuple(sol_values.shape) != expected_shape:
+                 print(f"FAIL: shape {shape_idx} values shape {tuple(sol_values.shape)} "
+                       f"!= expected {expected_shape}")
+                 sys.exit(1)
+             if tuple(sol_indices.shape) != expected_shape:
+                 print(f"FAIL: shape {shape_idx} indices shape {tuple(sol_indices.shape)} "
+                       f"!= expected {expected_shape}")
+                 sys.exit(1)
+
+             # 1. Strict-ish values check (positional, both are sorted desc)
+             ok, msg = check_correctness(
+                 ref_values.float(), sol_values.float(),
+                 dtype=torch.float32,
+                 override=tol_override,
+             )
+             if not ok:
+                 print(f"FAIL: shape {shape_idx} {shape} seed {seed} values: {msg}")
+                 sys.exit(1)
+
+             # 2. Lenient indices check: gather x at sol_indices, compare to ref_values.
+             #    This handles ties without false negatives.
+             x = inputs[0]
+             sol_idx_long = sol_indices.to(torch.int64)
+             if sol_idx_long.min() < 0 or sol_idx_long.max() >= shape["n"]:
+                 print(f"FAIL: shape {shape_idx} indices out of range "
+                       f"[{int(sol_idx_long.min())}, {int(sol_idx_long.max())}]")
+                 sys.exit(1)
+             gathered = torch.gather(x, dim=-1, index=sol_idx_long)
+             ok, msg = check_correctness(
+                 ref_values.float(), gathered.float(),
+                 dtype=torch.float32,
+                 override=tol_override,
+             )
+             if not ok:
+                 print(f"FAIL: shape {shape_idx} {shape} seed {seed} indices "
+                       f"(gather mismatch): {msg}")
+                 sys.exit(1)
+
+     _emit_framework_label()
+     print("PASS")
+
+
+ def _emit_framework_label():
+     patterns = [
+         ("ptx", r"asm\s+volatile|asm\s*\(|mma\.sync|tcgen05\."),
+         ("cutlass3", r"\bcute::|cutlass/gemm/collective|cutlass::arch::Sm(9|10|12)"),
+         ("cutlass2", r"cutlass/gemm/device/gemm|cutlass::gemm::device"),
+         ("cuda_wmma", r"\bnvcuda::wmma\b|wmma::fragment"),
+         ("triton", r"import\s+triton\b|@triton\.jit|\btl\.dot\b"),
+         ("cuda_raw", r"torch\.utils\.cpp_extension\.load_inline|__global__\s+void"),
+     ]
+     sol = Path("solution.py")
+     if not sol.exists():
+         return
+     code = sol.read_text()
+     label = "unknown"
+     for name, pat in patterns:
+         if re.search(pat, code):
+             label = name
+             break
+     Path("framework.txt").write_text(label + "\n")
+
+
+ if __name__ == "__main__":
+     main()
05_topk_bitonic/problem.yaml ADDED
@@ -0,0 +1,56 @@
+ name: 05_topk_bitonic
+ display_name: "TopK via Bitonic Sort"
+ precision: fp32
+ regime: memory
+
+ # Top-k is dominated by the input read (small output, no reduction over k).
+ # Comparator-network FLOPs are not the bottleneck on real hardware, so we
+ # track them but score on bandwidth.
+ flops_formula: "batch * n * 4"  # ~O(n log^2 n) compares total, but the
+                                 # bitonic network is so cheap relative
+                                 # to memory that a coarse 4*n estimate
+                                 # is fine for telemetry only.
+ bytes_formula: "batch * n * 4 + batch * k * (4 + 8)"  # fp32 input read + (fp32 value + int64 idx) output
+
+ hardware: [RTX_PRO_6000]
+ peak_tflops_key: fp32
+ peak_bandwidth_key: dram
+
+ # Top-k correctness:
+ #   - VALUES must match within fp32 atol/rtol (the kth largest value is
+ #     well-defined modulo float-equal ties, so we use a loose-ish tol).
+ #   - INDICES are checked leniently: for each row, the multiset of returned
+ #     indices must select values that match ref values within tol. Direct
+ #     index equality is NOT required (ties in x can yield different valid
+ #     index sets).
+ tolerance:
+   float32: 1.0e-4
+
+ # Forbidden ops — using any of these in solution.py fails correctness post-hoc.
+ # This problem is about IMPLEMENTING the selection, not dispatching to PyTorch's
+ # tuned top-k. torch.sort is also banned because torch.topk falls back to it.
+ forbidden:
+   - "torch.topk"
+   - "torch.kthvalue"
+   - "torch.sort"
+   - "torch.argsort"
+   - "Tensor.topk"
+   - "Tensor.kthvalue"
+   - "Tensor.sort"
+   - "Tensor.argsort"
+   - "torch.ops.aten.topk"
+   - "torch.ops.aten.sort"
+   - "torch.ops.aten.kthvalue"
+
+ sota:
+   name: "torch.topk (cuTOPK / CUB internals)"
+   url: "https://github.com/pytorch/pytorch/blob/main/aten/src/ATen/native/cuda/TensorTopK.cu"
+   function: "torch.topk"
+   deps: []
+   # Informational: torch.topk dispatches to a radix-select kernel for moderate
+   # k and to a bitonic sort kernel for small n. Beating it on the (1, 131072,
+   # 64) decoder shape requires saturating DRAM bandwidth on the input read.
+   reference_throughput_gbps_h100: 2400
+
+ num_correct_trials: 3
+ num_perf_trials: 50
05_topk_bitonic/reference.py ADDED
@@ -0,0 +1,52 @@
+ """Naive top-k reference: torch.topk over the last dim.
+
+ This is the correctness oracle. The agent's solution must produce the same
+ top-k values (and equivalent indices modulo ties) within the tolerance
+ declared in problem.yaml. Note that solution.py is FORBIDDEN from calling
+ torch.topk / torch.sort / torch.kthvalue (see problem.yaml).
+ """
+ import torch
+ import torch.nn as nn
+
+ OP_TYPE = "topk"
+ SUPPORTED_PRECISIONS = ["fp32"]
+ HARDWARE_REQUIRED = ["RTX_PRO_6000", "H100", "B200"]
+
+
+ class Model(nn.Module):
+     """Top-k over the last dim of a 2D tensor.
+
+     Input:
+       x: (batch, n) fp32
+     Output:
+       values:  (batch, k) fp32, sorted descending
+       indices: (batch, k) int64, into the last dim of x
+     """
+
+     def __init__(self, batch: int, n: int, k: int):
+         super().__init__()
+         self.batch, self.n, self.k = batch, n, k
+         # No learned parameters, but declare a dummy buffer so state_dict
+         # is non-empty and load_state_dict(strict=True) is meaningful.
+         self.register_buffer("_dummy", torch.zeros(1))
+
+     def forward(self, x: torch.Tensor):
+         values, indices = torch.topk(x, k=self.k, dim=-1, largest=True, sorted=True)
+         return values, indices
+
+
+ # Module-level shims rebuilt by check.py / benchmark.py per shape.
+ batch = 64
+ n = 8192
+ k = 8
+
+
+ def get_inputs():
+     # fp32 input drawn from a roughly Gaussian distribution; ties unlikely
+     # but possible. Seed is set by the caller.
+     x = torch.randn(batch, n, dtype=torch.float32)
+     return [x]
+
+
+ def get_init_inputs():
+     return [batch, n, k]
05_topk_bitonic/shapes.py ADDED
@@ -0,0 +1,19 @@
+ """Canonical shape sweep for TopK.
+
+ Mix of:
+   - decoder vocab top-k (single sequence, very large n, moderate k) — pure
+     bandwidth test; the input read dominates everything.
+   - prefill / batched attention top-k (many rows, moderate n, small k) — tests
+     per-row parallelism and shared-memory bitonic networks.
+   - non-power-of-2 n stress case — bitonic sort networks naturally want
+     powers of two; this forces the agent to handle padding or partial sorts.
+   - small-k limit — k=1 (argmax) is a degenerate but useful sanity case.
+ """
+
+ SHAPES = [
+     {"batch": 1, "n": 131072, "k": 64},   # decoder vocab top-k (Llama vocab ~128k)
+     {"batch": 64, "n": 8192, "k": 8},     # prefill / attention top-k
+     {"batch": 32, "n": 16384, "k": 32},   # mid-size batched
+     {"batch": 16, "n": 12000, "k": 16},   # non-power-of-2 n stress
+     {"batch": 128, "n": 4096, "k": 1},    # batched argmax (k=1 corner case)
+ ]
05_topk_bitonic/sota.py ADDED
@@ -0,0 +1,25 @@
+ """SOTA reference for TopK: torch.topk itself.
+
+ torch.topk dispatches to a CUB-backed kernel that uses radix-select for
+ moderate k and a tuned bitonic sort for small n. It is the bar the agent's
+ hand-rolled bitonic kernel must beat. There is no obvious vendor library that
+ does better on the (batch, n, k) shape mix we evaluate — Faiss BlockSelect is
+ specialized for k>=32 with much larger n, and CUB's DeviceSegmentedRadixSort
+ sorts the full row (overkill for top-k).
+
+ This file is INTENTIONALLY allowed to call torch.topk because it is the SOTA
+ oracle, not the agent's submission. The agent's solution.py is forbidden from
+ using torch.topk (see problem.yaml.forbidden).
+ """
+ from __future__ import annotations
+
+ import torch
+
+
+ def sota_forward(x: torch.Tensor, k: int):
+     """Best-available top-k reference. x: (batch, n) fp32."""
+     return torch.topk(x, k=k, dim=-1, largest=True, sorted=True)
+
+
+ def is_available() -> bool:
+     return True
06_sonic_moe_swiglu/PROMPT.txt ADDED
@@ -0,0 +1,7 @@
+ I need you to write the up-projection of a top-K MoE FFN with grouped GEMM and fused SwiGLU for the RTX PRO 6000 (SM120 Blackwell, GDDR7, 1.8 TB/s). The reference implementation is in reference.py and your kernel goes in solution.py. You can make whatever mess you want in this directory — scratch files, intermediate kernels, build artifacts, profiling traces — but the final answer has to be in solution.py with the same Model, get_inputs, and get_init_inputs interface as reference.py.
+
+ The op is: per expert e, compute h_e = silu(x_e @ W_gate[e]) * (x_e @ W_up[e]) where x_e is the slice of permuted hidden states routed to expert e. Inputs are hidden_states of shape (T_perm, H) bf16 (already permuted into expert order, T_perm = T_total * K), expert_offsets of shape (E+1,) int32 (prefix sums of token counts per expert, so expert e owns rows [offsets[e]:offsets[e+1]]), and per-expert weights W_gate and W_up each of shape (E, H, I) bf16. You return gated_up of shape (T_perm, I) bf16. Correctness tolerance is 0.02 abs/rel on bf16. The shapes you have to handle are T_total=32768 H=4096 I=1536 E=128 K=8 (the headline sonic-moe configuration, 32K tokens, 128 experts, top-8), T_total=4096 H=2048 I=1024 E=64 K=4 (smaller / faster iteration), and T_total=16384 H=2048 I=4096 E=64 K=8 (intermediate-heavy aspect ratio that forces larger N tiles).
+
+ This needs to be a real custom kernel — the whole point is to implement the variable-length grouped GEMM and fuse SwiGLU into the epilogue, not to dispatch to a vendor library. Don't import or call torch.matmul, torch.bmm, torch.nn.functional.linear, F.linear, or anything from sonic_moe. Try CUTLASS grouped GEMM (the intended path), Triton with a grouped-GEMM autotune, CUDA C++ via load_inline, or inline PTX — your call. Anything you're uncertain about, look up PTX docs, clone CUTLASS or sonic-moe or quack-kernels or other reference repos, read library headers, and investigate.
+
+ Your flywheel is implement, profile (ncu, nsys, torch.profiler — whatever's useful) and time it with benchmark.py, verify correctness by running `python check.py` and reading the output, then iterate. Don't substitute your own one-off correctness snippets for check.py — it iterates over every shape, your spot-check almost certainly won't. If `python check.py` hasn't printed PASS, you're not done. Take as long as you need to actually push the number up.
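The expert_offsets contract is the crux of the data layout, so here is a small sketch of how those offsets arise from a raw top-K routing decision (the tensors are made-up illustrations, not harness code):

    import torch

    E, T_total, K = 4, 6, 2
    expert_ids = torch.randint(0, E, (T_total * K,))   # one expert id per (token, slot)
    counts = torch.bincount(expert_ids, minlength=E)   # rows owned by each expert
    expert_offsets = torch.zeros(E + 1, dtype=torch.int32)
    expert_offsets[1:] = torch.cumsum(counts, dim=0).int()
    # After permuting hidden_states into expert order, expert e owns rows
    # [expert_offsets[e], expert_offsets[e+1]) -- exactly what reference.py slices.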
06_sonic_moe_swiglu/benchmark.py ADDED
@@ -0,0 +1,131 @@
+ """Roofline benchmark for Sonic-MoE up-projection (grouped GEMM + fused SwiGLU).
+
+ For each shape: times eager reference, compiled reference, SOTA (if available),
+ and the agent's solution. Reports achieved TFLOPS, GB/s, and peak_fraction.
+
+ Output lines the harness picks up:
+   shape=<idx> variant=<name> tflops=<N> gbps=<N> ms=<N>
+   peak_fraction: <N>   (geomean over shapes of solution's peak_fraction)
+ """
+ import sys
+ from math import exp, log
+ from pathlib import Path
+
+ import torch
+ import yaml
+
+ REPO_ROOT = Path(__file__).resolve().parents[2]
+ sys.path.insert(0, str(REPO_ROOT))
+
+ from src.eval.roofline import compute_gbps, compute_tflops, peak_fraction  # noqa: E402
+ from src.eval.timing import time_fn  # noqa: E402
+ from src.hardware import get as get_hw  # noqa: E402
+
+
+ def _eval_formula(expr: str, vars: dict) -> float:
+     return float(eval(expr, {"__builtins__": {}}, vars))
+
+
+ def main():
+     import reference
+     import shapes
+     import solution
+
+     meta = yaml.safe_load(Path("problem.yaml").read_text())
+     hw = get_hw(meta["hardware"][0])
+     peak_tflops = hw.peak_tflops_dense.get(meta["peak_tflops_key"], 0.0)
+     peak_gbps = hw.peak_bandwidth_gb_s
+     regime = meta.get("regime", "compute")
+     flops_formula = meta["flops_formula"]
+     bytes_formula = meta["bytes_formula"]
+     num_perf_trials = int(meta.get("num_perf_trials", 20))
+
+     device = torch.device("cuda:0")
+
+     # Optional SOTA
+     try:
+         import sota as sota_mod
+         has_sota = sota_mod.is_available()
+     except Exception:
+         has_sota = False
+
+     sol_fractions: list[float] = []
+
+     for shape_idx, shape in enumerate(shapes.SHAPES):
+         reference.T_total = shape["T_total"]
+         reference.H = shape["H"]
+         reference.I = shape["I"]
+         reference.E = shape["E"]
+         reference.K = shape["K"]
+
+         init_args = reference.get_init_inputs()
+         ref_model = reference.Model(*init_args).to(device).eval()
+         sol_model = solution.Model(*init_args).to(device).eval()
+         sd = ref_model.state_dict()
+         try:
+             sol_model.load_state_dict(sd, strict=True)
+         except RuntimeError:
+             pass
+
+         torch.manual_seed(2026)
+         inputs = [t.to(device) for t in reference.get_inputs()]
+
+         flops = _eval_formula(flops_formula, shape)
+         bytes_moved = _eval_formula(bytes_formula, shape)
+
+         # Eager (slow Python loop in reference)
+         ms_eager = time_fn(ref_model, inputs, iters=max(3, num_perf_trials // 4))
+
+         # Compiled (best-effort)
+         try:
+             comp = torch.compile(ref_model, mode="reduce-overhead")
+             ms_comp = time_fn(comp, inputs, iters=max(3, num_perf_trials // 4))
+         except Exception as e:
+             print(f"  [compile fallback] {type(e).__name__}: {e}")
+             ms_comp = None
+
+         # SOTA (sonic-moe). Wrap in try/except: SM120 path may be unavailable.
+         ms_sota = None
+         if has_sota:
+             try:
+                 hidden_states, expert_offsets = inputs
+                 W_gate, W_up = ref_model.W_gate, ref_model.W_up
+
+                 def sota_fn(_x=hidden_states, _o=expert_offsets, _g=W_gate, _u=W_up):
+                     return sota_mod.sota_forward(_x, _g, _u, _o)
+
+                 ms_sota = time_fn(sota_fn, [], iters=num_perf_trials)
+             except Exception as e:
+                 print(f"  [sota unavailable] {type(e).__name__}: {e}")
+
+         # Solution
+         ms_sol = time_fn(sol_model, inputs, iters=num_perf_trials)
+
+         for variant, ms in [
+             ("eager", ms_eager),
+             ("compiled", ms_comp),
+             ("sota", ms_sota),
+             ("solution", ms_sol),
+         ]:
+             if ms is None:
+                 continue
+             tflops = compute_tflops(flops, ms)
+             gbps = compute_gbps(bytes_moved, ms)
+             print(f"shape={shape_idx} variant={variant} tflops={tflops:.3f} gbps={gbps:.3f} ms={ms:.3f}")
+
+         sol_tflops = compute_tflops(flops, ms_sol)
+         sol_gbps = compute_gbps(bytes_moved, ms_sol)
+         if regime == "compute":
+             frac = peak_fraction(sol_tflops, peak_tflops)
+         else:
+             frac = peak_fraction(sol_gbps, peak_gbps)
+         sol_fractions.append(frac)
+         print(f"shape={shape_idx} solution_peak_fraction={frac:.4f}")
+
+     gmean = exp(sum(log(max(f, 1e-9)) for f in sol_fractions) / len(sol_fractions))
+     print(f"peak_fraction: {gmean:.4f}")
+     print(f"RESULT: {'OK' if gmean >= 0.1 else 'LOW'}")
+
+
+ if __name__ == "__main__":
+     main()
06_sonic_moe_swiglu/check.py ADDED
@@ -0,0 +1,110 @@
+ """Correctness runner for Sonic-MoE up-projection (grouped GEMM + fused SwiGLU).
+
+ Runs solution.Model vs reference.Model across all shapes in shapes.py, 3 seeds
+ each, with per-dtype atol/rtol. Also rejects forbidden ops by grep.
+ """
+ import re
+ import sys
+ from pathlib import Path
+
+ import torch
+ import yaml
+
+ # Make the repo's src/ importable
+ REPO_ROOT = Path(__file__).resolve().parents[2]
+ sys.path.insert(0, str(REPO_ROOT))
+
+ from src.eval.correctness import check_correctness  # noqa: E402
+
+
+ def main():
+     try:
+         import reference
+         import shapes
+         import solution
+     except Exception as e:
+         print(f"FAIL: import error: {e}")
+         sys.exit(1)
+
+     problem_yaml = Path("problem.yaml")
+     meta = yaml.safe_load(problem_yaml.read_text()) if problem_yaml.exists() else {}
+
+     # --- Forbidden-op check ------------------------------------------------
+     sol_src = Path("solution.py").read_text() if Path("solution.py").exists() else ""
+     for forbidden in meta.get("forbidden", []):
+         pat = re.escape(forbidden)
+         if re.search(pat, sol_src):
+             print(f"FAIL: forbidden op used: {forbidden}")
+             sys.exit(1)
+
+     device = torch.device("cuda:0")
+     tol_override = meta.get("tolerance") or None
+
+     # --- Per-shape correctness --------------------------------------------
+     all_shapes = shapes.SHAPES
+     for shape_idx, shape in enumerate(all_shapes):
+         # Rebuild reference module's module-level shape shims.
+         reference.T_total = shape["T_total"]
+         reference.H = shape["H"]
+         reference.I = shape["I"]
+         reference.E = shape["E"]
+         reference.K = shape["K"]
+
+         init_args = reference.get_init_inputs()
+         ref_model = reference.Model(*init_args).to(device).eval()
+         sol_model = solution.Model(*init_args).to(device).eval()
+
+         sd = ref_model.state_dict()
+         try:
+             sol_model.load_state_dict(sd, strict=True)
+         except RuntimeError as e:
+             print(f"FAIL: state_dict mismatch at shape {shape_idx} ({shape}): {e}")
+             sys.exit(1)
+
+         for seed in (42, 123, 456):
+             torch.manual_seed(seed)
+             torch.cuda.manual_seed_all(seed)
+             inputs = [t.to(device) for t in reference.get_inputs()]
+
+             with torch.no_grad():
+                 ref_out = ref_model(*inputs)
+                 sol_out = sol_model(*inputs)
+
+             ok, msg = check_correctness(
+                 ref_out, sol_out,
+                 dtype=ref_out.dtype,
+                 override=tol_override,
+             )
+             if not ok:
+                 print(f"FAIL: shape {shape_idx} {shape} seed {seed}: {msg}")
+                 sys.exit(1)
+
+     # --- Framework label (for stats) --------------------------------------
+     _emit_framework_label()
+     print("PASS")
+
+
+ def _emit_framework_label():
+     """Write framework.txt with the detected kernel framework."""
+     patterns = [
+         ("ptx", r"asm\s+volatile|asm\s*\(|mma\.sync|tcgen05\."),
+         ("cutlass3", r"\bcute::|cutlass/gemm/collective|cutlass::arch::Sm(9|10|12)"),
+         ("cutlass2", r"cutlass/gemm/device/gemm|cutlass::gemm::device"),
+         ("cuda_wmma", r"\bnvcuda::wmma\b|wmma::fragment"),
+         ("triton", r"import\s+triton\b|@triton\.jit|\btl\.dot\b"),
+         ("cuda_raw", r"torch\.utils\.cpp_extension\.load_inline|__global__\s+void"),
+     ]
+     sol = Path("solution.py")
+     if not sol.exists():
+         return
+     code = sol.read_text()
+     label = "unknown"
+     for name, pat in patterns:
+         if re.search(pat, code):
+             label = name
+             break
+     Path("framework.txt").write_text(label + "\n")
+
+
+ if __name__ == "__main__":
+     main()
06_sonic_moe_swiglu/problem.yaml ADDED
@@ -0,0 +1,47 @@
+ name: 06_sonic_moe_swiglu
+ display_name: "Sonic-MoE up-projection (Grouped GEMM + SwiGLU)"
+ precision: bf16
+ regime: compute
+ hardware: [RTX_PRO_6000]
+ peak_tflops_key: bf16
+ peak_bandwidth_key: dram
+
+ # Dense-equivalent FLOPs: gate GEMM + up GEMM (the SwiGLU elementwise is
+ # negligible). Per token per expert: 2*H*I FLOPs for gate plus 2*H*I for up.
+ # The kernel actually processes T_perm = T_total*K permuted rows, so the
+ # executed work is K times this figure; the standard MoE FLOPs convention
+ # counts only the active per-token compute, and we follow it:
+ flops_formula: "2 * T_total * H * (2 * I)"
+
+ # Bytes moved (approximate, lower bound):
+ #   read hidden (T_perm = T_total*K rows of H bf16) + read 2 weight matrices per
+ #   expert (E * H * 2*I bf16) + write output (T_perm rows of I bf16).
+ bytes_formula: "T_total*K*H*2 + E*H*(2*I)*2 + T_total*K*I*2"
+
+ tolerance:
+   bfloat16: 0.02
+
+ # Forbidden ops -- agent must write the grouped GEMM + fused SwiGLU themselves.
+ #   - torch.matmul / torch.bmm / F.linear: cuBLAS dispatch, defeats the point.
+ #   - sonic_moe imports: vendor-call cheating; the SOTA is graded separately.
+ forbidden:
+   - "torch.matmul"
+   - "torch.bmm"
+   - "torch.nn.functional.linear"
+   - "F.linear"
+   - "from sonic_moe"
+   - "import sonic_moe"
+
+ sota:
+   name: "Sonic-MoE up-projection (Tri Dao)"
+   url: "https://github.com/Dao-AILab/sonic-moe"
+   function: "sonic_moe.fused_moe_up"
+   deps:
+     - "sonic-moe>=0.1.2"   # requires Python>=3.12, sm_120 support in-progress
+     - "quack-kernels"      # CuTeDSL grouped GEMM that sonic-moe dispatches to
+   # Documented H100 paper number for this configuration (informational, not graded
+   # live on SM120). Sonic-MoE reports 1.87-4.04x over ScatterMoE/MoMoE on H100.
+   reference_throughput_tflops_h100: 480
+
+ num_correct_trials: 3
+ num_perf_trials: 20
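Plugging the headline shape into the formula above makes the accounting concrete (pure arithmetic, no harness dependencies):

    T_total, H, I, K = 32768, 4096, 1536, 8
    scored = 2 * T_total * H * (2 * I)   # ~8.25e11 FLOPs (~0.82 TFLOP) per forward
    executed = scored * K                # ~6.60e12 FLOPs actually run over T_perm rows
    # Under this convention a solution sustaining 400 TFLOPS of executed work
    # reports roughly 50 TFLOPS of scored throughput.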
06_sonic_moe_swiglu/reference.py ADDED
@@ -0,0 +1,102 @@
+ """Naive grouped GEMM + fused SwiGLU reference (correctness only, NOT the SOTA).
+
+ This is the up-projection of an MoE FFN. Each token is assigned to K experts,
+ and the hidden states arrive already permuted into expert order, so the only
+ routing metadata needed here is expert_offsets. We compute, per expert e:
+
+     h_e = silu(x_e @ W_gate[e]) * (x_e @ W_up[e])
+
+ where x_e is the slice of permuted hidden states routed to expert e, with
+ expert_offsets[e]:expert_offsets[e+1] giving its row range in the permuted layout.
+
+ The reference loops over experts in Python. Slow, but pedagogically clear and
+ correct. The forbidden-op grep (torch.matmul, torch.bmm, F.linear, sonic_moe
+ imports) only applies to solution.py; the reference is exempt and uses `@` freely.
+ """
+ from __future__ import annotations
+
+ import torch
+ import torch.nn as nn
+ import torch.nn.functional as F
+
+ OP_TYPE = "grouped_gemm_swiglu"
+ SUPPORTED_PRECISIONS = ["bf16"]
+ HARDWARE_REQUIRED = ["RTX_PRO_6000", "H100", "B200"]
+
+
+ class Model(nn.Module):
+     """Up-projection of a top-K MoE FFN with fused SwiGLU.
+
+     Inputs at call time:
+         hidden_states: (T_perm, H) bf16, already permuted to expert order
+         expert_offsets: (E+1,) int32, prefix sums of token counts per expert
+                         so expert e owns rows [offsets[e]:offsets[e+1]]
+     T_perm = T_total * K (each token visits K experts)
+
+     Output:
+         gated_up: (T_perm, I) bf16
+     """
+
+     def __init__(self, T_total: int, H: int, I: int, E: int, K: int):  # noqa: E741
+         super().__init__()
+         self.T_total = T_total
+         self.H = H
+         self.I = I
+         self.E = E
+         self.K = K
+         # Two weight tensors per expert: gate (E, H, I) and up (E, H, I).
+         self.W_gate = nn.Parameter(torch.empty(E, H, I, dtype=torch.bfloat16))
+         self.W_up = nn.Parameter(torch.empty(E, H, I, dtype=torch.bfloat16))
+         nn.init.normal_(self.W_gate, std=0.02)
+         nn.init.normal_(self.W_up, std=0.02)
+
+     def forward(
+         self,
+         hidden_states: torch.Tensor,   # (T_perm, H) bf16
+         expert_offsets: torch.Tensor,  # (E+1,) int32
+     ) -> torch.Tensor:
+         T_perm, H = hidden_states.shape
+         out = torch.empty(T_perm, self.I, dtype=torch.bfloat16, device=hidden_states.device)
+         # Loop over experts. Each expert is a small dense GEMM on its slice.
+         for e in range(self.E):
+             start = int(expert_offsets[e].item())
+             end = int(expert_offsets[e + 1].item())
+             if end == start:
+                 continue
+             x_e = hidden_states[start:end]  # (n_e, H)
+             gate = x_e @ self.W_gate[e]     # (n_e, I)
+             up = x_e @ self.W_up[e]         # (n_e, I)
+             out[start:end] = F.silu(gate) * up
+         return out
+
+
+ # Module-level shape shims rewritten by check.py / benchmark.py per shape.
+ T_total = 32768
+ H = 4096
+ I = 1536  # noqa: E741
+ E = 128
+ K = 8
+
+
+ def _build_routing(T_total: int, E: int, K: int, device: str = "cpu") -> torch.Tensor:
+     """Round-robin-ish routing metadata: balanced offsets summing to T_total*K."""
+     T_perm = T_total * K
+     # Even split with remainder distributed to the first experts.
+     base = T_perm // E
+     rem = T_perm - base * E
+     counts = torch.full((E,), base, dtype=torch.int32, device=device)
+     counts[:rem] += 1
+     offsets = torch.zeros(E + 1, dtype=torch.int32, device=device)
+     offsets[1:] = torch.cumsum(counts, dim=0)
+     return offsets
+
+
+ def get_inputs():
+     T_perm = T_total * K
+     hidden_states = torch.randn(T_perm, H, dtype=torch.bfloat16) * 0.1
+     expert_offsets = _build_routing(T_total, E, K)
+     return [hidden_states, expert_offsets]
+
+
+ def get_init_inputs():
+     return [T_total, H, I, E, K]
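A quick smoke run of this reference on the fast-iteration shape (assumes a CUDA device; checks output shape only, no performance claims):

```python
import torch
import reference

# Fast-iteration shape from shapes.py keeps the Python expert loop cheap.
reference.T_total, reference.H, reference.I = 4096, 2048, 1024
reference.E, reference.K = 64, 4

model = reference.Model(*reference.get_init_inputs()).cuda().eval()
hidden, offsets = [t.cuda() for t in reference.get_inputs()]
with torch.no_grad():
    out = model(hidden, offsets)
print(out.shape)  # torch.Size([16384, 1024]) == (T_total * K, I)
```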
06_sonic_moe_swiglu/shapes.py ADDED
@@ -0,0 +1,19 @@
+ """Shape sweep for Sonic-MoE up-projection (grouped GEMM + fused SwiGLU).
+
+ Defaults match the sonic-moe paper's headline configuration. We add:
+ - a smaller shape for fast iteration during agent development
+ - a wider intermediate (different aspect ratio) to stress N-tile selection
+ """
+
+ SHAPES = [
+     # Headline sonic-moe shape: 32K tokens, 128 experts, top-8.
+     {"T_total": 32768, "H": 4096, "I": 1536, "E": 128, "K": 8},
+
+     # Fast-iteration shape (~24x fewer dense-equivalent FLOPs): smaller token /
+     # hidden dims and half the experts, but still a meaningful grouped layout.
+     {"T_total": 4096, "H": 2048, "I": 1024, "E": 64, "K": 4},
+
+     # Different aspect ratio: smaller H, wider I (intermediate-heavy FFN).
+     # Forces tiles to handle larger N relative to K.
+     {"T_total": 16384, "H": 2048, "I": 4096, "E": 64, "K": 8},
+ ]
06_sonic_moe_swiglu/sota.py ADDED
@@ -0,0 +1,71 @@
+ """SOTA reference for Sonic-MoE up-projection: Tri Dao's sonic-moe.
+
+ Status (2026-04): sonic-moe ships on PyPI as `sonic-moe` (>=0.1.2.post1) and
+ requires Python>=3.12. It dispatches to QuACK CuTeDSL grouped GEMM kernels.
+ SM120 (RTX PRO 6000 Blackwell Workstation) support is in-progress upstream --
+ the package installs cleanly but kernels may fail at launch on SM120 (the
+ QuACK grouped-GEMM path targets Sm90/Sm100 in the public release).
+
+ If the import fails, `is_available()` returns False; if the live call fails,
+ the benchmark scores the agent against PyTorch eager + the documented H100
+ paper ceiling (see problem.yaml.sota.reference_throughput_tflops_h100). Agents
+ are FORBIDDEN from importing sonic_moe in solution.py (see problem.yaml.forbidden).
+ """
+ from __future__ import annotations
+
+ import torch
+
+
+ def _try_sonic_moe(
+     hidden_states: torch.Tensor,
+     W_gate: torch.Tensor,
+     W_up: torch.Tensor,
+     expert_offsets: torch.Tensor,
+ ) -> torch.Tensor | None:
+     try:
+         import sonic_moe  # type: ignore  # noqa: F401
+     except Exception:
+         return None
+     try:
+         # Public sonic-moe API surface is still stabilizing. The expected entry
+         # point bundles gate+up weights as a single (E, H, 2*I) tensor and fuses
+         # SwiGLU. Adapt to the actual signature once SM120 lands.
+         W = torch.cat([W_gate, W_up], dim=-1).contiguous()  # (E, H, 2*I)
+         from sonic_moe import fused_moe_up  # type: ignore
+         return fused_moe_up(hidden_states, W, expert_offsets)
+     except Exception:
+         return None
+
+
+ def sota_forward(
+     hidden_states: torch.Tensor,
+     W_gate: torch.Tensor,
+     W_up: torch.Tensor,
+     expert_offsets: torch.Tensor,
+ ) -> torch.Tensor:
+     """Best-available grouped-GEMM + SwiGLU reference."""
+     out = _try_sonic_moe(hidden_states, W_gate, W_up, expert_offsets)
+     if out is not None:
+         return out
+     raise RuntimeError("sonic-moe SOTA path unavailable on this hardware")
+
+
+ def is_available() -> bool:
+     # Import + capability gating only; we deliberately do NOT run a live smoke
+     # call here (it would require allocating real weights), so on SM120 this
+     # can return True even though the kernels may still fail at launch.
+     try:
+         import sonic_moe  # type: ignore  # noqa: F401
+     except Exception:
+         return False
+     if not torch.cuda.is_available():
+         return False
+     # Cheap capability gate: sonic-moe public release targets sm_90/sm_100.
+     major, _ = torch.cuda.get_device_capability(0)
+     if major < 9:
+         return False
+     # benchmark.py wraps sota_forward in try/except and treats any failure as
+     # "SOTA unavailable" -- see problem.yaml.sota.reference_throughput_tflops_h100
+     # for the documented paper ceiling used in that case.
+     return True
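A guarded-call sketch of how the benchmark is expected to consume this module (tensor sizes here are placeholder values, not the graded shapes):

```python
import torch
import sota

if sota.is_available():
    E, H, I, T_perm = 64, 2048, 1024, 16384  # placeholder sizes
    dev = "cuda"
    hidden = torch.randn(T_perm, H, dtype=torch.bfloat16, device=dev)
    W_gate = torch.randn(E, H, I, dtype=torch.bfloat16, device=dev)
    W_up = torch.randn(E, H, I, dtype=torch.bfloat16, device=dev)
    # Balanced routing: E+1 prefix sums over equal-sized expert slices.
    offsets = torch.arange(0, T_perm + 1, T_perm // E, dtype=torch.int32, device=dev)
    try:
        out = sota.sota_forward(hidden, W_gate, W_up, offsets)
    except RuntimeError:
        out = None  # scored as "SOTA unavailable"
```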
07_w4a16_gemm/PROMPT.txt ADDED
@@ -0,0 +1,7 @@
+ I need you to write a weight-only int4 quantized GEMM (W4A16) for the RTX PRO 6000 (SM120 Blackwell, GDDR7, 1.8 TB/s). The reference implementation is in reference.py and your kernel goes in solution.py. You can make whatever mess you want in this directory — scratch files, intermediate kernels, build artifacts, profiling traces — but the final answer has to be in solution.py with the same Model, get_inputs, and get_init_inputs interface as reference.py.
+
+ The scheme is AWQ/GPTQ-style asymmetric int4 with explicit zero-points and per-group bf16 scales. Inputs are x of shape (M, K) bf16, w_q of shape (K // 2, N) uint8 (two int4 weights packed per byte, low nibble = even-K row, high nibble = odd-K row), scales of shape (K // 128, N) bf16, and zeros of shape (K // 128, N) bf16. Group size is 128 along K. Dequant per group is w_bf[k, n] = (unpack(w_q)[k, n] - zeros[k // 128, n]) * scales[k // 128, n], and the output is (M, N) bf16. Correctness tolerance is 0.10 abs/rel — group-quant adds noise on top of bf16 accumulator slop. The shapes you have to handle are M=1 N=12288 K=4096 (decode, memory-bound on the int4 weight read), M=32 N=12288 K=4096 (small prefill, mixed regime), M=256 N=12288 K=4096 (larger prefill, approaching compute-bound), M=1 N=4096 K=4096 (decode, square), and M=16 N=14336 K=4096 (speculative-decode-ish).
+
+ This needs to be a real custom kernel that fuses unpack and GEMM in the same pass — a separate dequant-then-matmul wastes the entire bandwidth advantage of int4. Don't import or call bitsandbytes.functional.dequantize_4bit, bitsandbytes.functional.gemv_4bit, marlin_kernel.gemm, or torch.nn.functional.linear. Try CUTLASS mixed-input GEMM (the intended path), Triton with a fused dequant epilogue, CUDA C++ via load_inline, or inline PTX — your call. Anything you're uncertain about, look up PTX docs, clone CUTLASS or Marlin or bitsandbytes or other reference repos, read library headers, and investigate.
+
+ Your flywheel is implement, profile (ncu, nsys, torch.profiler — whatever's useful) and time it with benchmark.py, verify correctness by running `python check.py` and reading the output, then iterate. Don't substitute your own one-off correctness snippets for check.py — it iterates over every shape, your spot-check almost certainly won't. If `python check.py` hasn't printed PASS, you're not done. Take as long as you need to actually push the number up.
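The nibble convention from that prompt as a tiny round-trip check (toy sizes; this mirrors the `_pack_int4` / `_unpack_int4` helpers in reference.py below):

```python
import torch

K, N, G = 8, 4, 4  # toy sizes; the real group size is 128
w = torch.randint(0, 16, (K, N), dtype=torch.uint8)

# Pack: even-K rows in the low nibble, odd-K rows in the high nibble.
packed = (w[0::2] & 0xF) | ((w[1::2] & 0xF) << 4)  # (K//2, N)

# Unpack and check the round trip.
unpacked = torch.empty_like(w)
unpacked[0::2] = packed & 0xF
unpacked[1::2] = (packed >> 4) & 0xF
assert torch.equal(unpacked, w)

# Dequant: per-group scale/zero along K (group g covers rows g*G:(g+1)*G).
scales = torch.rand(K // G, N, dtype=torch.bfloat16)
zeros = torch.randint(0, 16, (K // G, N)).to(torch.bfloat16)
w_bf = (unpacked.to(torch.bfloat16) - zeros.repeat_interleave(G, dim=0)) \
    * scales.repeat_interleave(G, dim=0)
```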
07_w4a16_gemm/benchmark.py ADDED
@@ -0,0 +1,128 @@
+ """Roofline benchmark for the W4A16 GEMM problem.
+
+ For each shape: times eager reference, compiled reference, SOTA (if available),
+ and the agent's solution. Reports achieved TFLOPS, GB/s, and peak_fraction.
+
+ Output lines the harness picks up:
+     shape=<idx> variant=<name> tflops=<N> gbps=<N> ms=<N>
+     peak_fraction: <N>  (geomean over shapes of solution's peak_fraction)
+ """
+ import sys
+ from math import exp, log
+ from pathlib import Path
+
+ import torch
+ import yaml
+
+ REPO_ROOT = Path(__file__).resolve().parents[2]
+ sys.path.insert(0, str(REPO_ROOT))
+
+ from src.eval.roofline import compute_gbps, compute_tflops, peak_fraction  # noqa: E402
+ from src.eval.timing import time_fn  # noqa: E402
+ from src.hardware import get as get_hw  # noqa: E402
+
+
+ def _eval_formula(expr: str, vars: dict) -> float:
+     # Very small eval: only names from `vars` are valid.
+     return float(eval(expr, {"__builtins__": {}}, vars))
+
+
+ def main():
+     import reference
+     import shapes
+     import solution
+
+     meta = yaml.safe_load(Path("problem.yaml").read_text())
+     hw = get_hw(meta["hardware"][0])
+     peak_tflops = hw.peak_tflops_dense.get(meta["peak_tflops_key"], 0.0)
+     peak_gbps = hw.peak_bandwidth_gb_s
+     regime = meta.get("regime", "compute")
+     flops_formula = meta["flops_formula"]
+     bytes_formula = meta["bytes_formula"]
+     num_perf_trials = int(meta.get("num_perf_trials", 30))
+
+     device = torch.device("cuda:0")
+
+     # Optional SOTA
+     try:
+         import sota as sota_mod
+         has_sota = sota_mod.is_available()
+     except Exception:
+         has_sota = False
+
+     sol_fractions: list[float] = []
+
+     for shape_idx, shape in enumerate(shapes.SHAPES):
+         reference.M = shape["M"]
+         reference.N = shape["N"]
+         reference.K = shape["K"]
+
+         init_args = reference.get_init_inputs()
+         ref_model = reference.Model(*init_args).to(device).eval()
+         sol_model = solution.Model(*init_args).to(device).eval()
+         sd = ref_model.state_dict()
+         try:
+             sol_model.load_state_dict(sd, strict=True)
+         except RuntimeError:
+             pass
+
+         torch.manual_seed(2026)
+         inputs = [t.to(device) for t in reference.get_inputs()]
+
+         # Theoretical work per call
+         flops = _eval_formula(flops_formula, shape)
+         bytes_moved = _eval_formula(bytes_formula, shape)
+
+         # Eager
+         ms_eager = time_fn(ref_model, inputs, iters=num_perf_trials)
+
+         # Compiled (best-effort)
+         try:
+             comp = torch.compile(ref_model, mode="reduce-overhead")
+             ms_comp = time_fn(comp, inputs, iters=num_perf_trials)
+         except Exception as e:
+             print(f" [compile fallback] {type(e).__name__}: {e}")
+             ms_comp = None
+
+         # SOTA
+         ms_sota = None
+         if has_sota:
+             try:
+                 def sota_fn(x, _ref=ref_model):
+                     return sota_mod.sota_forward(x, _ref)
+                 ms_sota = time_fn(sota_fn, inputs, iters=num_perf_trials)
+             except Exception as e:
+                 print(f" [sota unavailable] {type(e).__name__}: {e}")
+
+         # Solution
+         ms_sol = time_fn(sol_model, inputs, iters=num_perf_trials)
+
+         for variant, ms in [
+             ("eager", ms_eager),
+             ("compiled", ms_comp),
+             ("sota", ms_sota),
+             ("solution", ms_sol),
+         ]:
+             if ms is None:
+                 continue
+             tflops = compute_tflops(flops, ms)
+             gbps = compute_gbps(bytes_moved, ms)
+             print(f"shape={shape_idx} variant={variant} tflops={tflops:.3f} gbps={gbps:.3f} ms={ms:.3f}")
+
+         # Score: peak_fraction depends on regime
+         sol_tflops = compute_tflops(flops, ms_sol)
+         sol_gbps = compute_gbps(bytes_moved, ms_sol)
+         if regime == "compute":
+             frac = peak_fraction(sol_tflops, peak_tflops)
+         else:
+             frac = peak_fraction(sol_gbps, peak_gbps)
+         sol_fractions.append(frac)
+         print(f"shape={shape_idx} solution_peak_fraction={frac:.4f}")
+
+     gmean = exp(sum(log(max(f, 1e-9)) for f in sol_fractions) / len(sol_fractions))
+     print(f"peak_fraction: {gmean:.4f}")
+     print(f"RESULT: {'OK' if gmean >= 0.1 else 'LOW'}")
+
+
+ if __name__ == "__main__":
+     main()
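`time_fn` lives in the repo's `src/eval/timing` module, which is not part of this upload. A minimal sketch of L2-flush + median CUDA-event timing in that spirit (the function name, flush-buffer size, and warmup policy are assumptions, not the actual implementation):

```python
import statistics
import torch

def time_fn_sketch(fn, inputs, iters=30, flush_mb=256):
    # Buffer larger than L2 so zeroing it between runs evicts cached operands.
    flush = torch.empty(flush_mb * 1024 * 1024, dtype=torch.uint8, device="cuda")
    times = []
    fn(*inputs)  # warmup (also triggers any JIT/compile work)
    for _ in range(iters):
        flush.zero_()  # same stream, so it completes before the timed region
        start = torch.cuda.Event(enable_timing=True)
        end = torch.cuda.Event(enable_timing=True)
        start.record()
        fn(*inputs)
        end.record()
        torch.cuda.synchronize()
        times.append(start.elapsed_time(end))  # milliseconds
    return statistics.median(times)
```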
07_w4a16_gemm/check.py ADDED
@@ -0,0 +1,112 @@
+ """Correctness runner for the W4A16 GEMM problem.
+
+ Runs solution.Model vs reference.Model across all shapes in shapes.py, 3 seeds
+ each, with per-dtype atol/rtol. Also rejects forbidden ops by grep.
+ """
+ import re
+ import sys
+ from pathlib import Path
+
+ import torch
+ import yaml
+
+ # Make the repo's src/ importable
+ REPO_ROOT = Path(__file__).resolve().parents[2]
+ sys.path.insert(0, str(REPO_ROOT))
+
+ from src.eval.correctness import check_correctness  # noqa: E402
+
+
+ def main():
+     try:
+         import reference
+         import shapes
+         import solution
+     except Exception as e:
+         print(f"FAIL: import error: {e}")
+         sys.exit(1)
+
+     problem_yaml = Path("problem.yaml")
+     meta = yaml.safe_load(problem_yaml.read_text()) if problem_yaml.exists() else {}
+
+     # --- Forbidden-op check ------------------------------------------------
+     sol_src = Path("solution.py").read_text() if Path("solution.py").exists() else ""
+     for forbidden in meta.get("forbidden", []):
+         pat = re.escape(forbidden)
+         if re.search(pat, sol_src):
+             print(f"FAIL: forbidden op used: {forbidden}")
+             sys.exit(1)
+
+     device = torch.device("cuda:0")
+     tol_override = meta.get("tolerance") or None
+
+     # --- Per-shape correctness --------------------------------------------
+     all_shapes = shapes.SHAPES
+     for shape_idx, shape in enumerate(all_shapes):
+         # Rebuild the reference module's module-level M/N/K shims so
+         # get_inputs / get_init_inputs match this shape.
+         reference.M = shape["M"]
+         reference.N = shape["N"]
+         reference.K = shape["K"]
+
+         init_args = reference.get_init_inputs()
+         ref_model = reference.Model(*init_args).to(device).eval()
+         sol_model = solution.Model(*init_args).to(device).eval()
+
+         # Share weights. strict=True — if sol_model doesn't declare the same
+         # parameters, correctness fails (this closes the "identity kernel"
+         # cheat class).
+         sd = ref_model.state_dict()
+         try:
+             sol_model.load_state_dict(sd, strict=True)
+         except RuntimeError as e:
+             print(f"FAIL: state_dict mismatch at shape {shape_idx} ({shape}): {e}")
+             sys.exit(1)
+
+         for seed in (42, 123, 456):
+             torch.manual_seed(seed)
+             torch.cuda.manual_seed_all(seed)
+             inputs = [t.to(device) for t in reference.get_inputs()]
+
+             with torch.no_grad():
+                 ref_out = ref_model(*inputs)
+                 sol_out = sol_model(*inputs)
+
+             ok, msg = check_correctness(
+                 ref_out, sol_out,
+                 dtype=ref_out.dtype,
+                 override=tol_override,
+             )
+             if not ok:
+                 print(f"FAIL: shape {shape_idx} {shape} seed {seed}: {msg}")
+                 sys.exit(1)
+
+     # --- Framework label (for stats) --------------------------------------
+     _emit_framework_label()
+     print("PASS")
+
+
+ def _emit_framework_label():
+     """Write framework.txt with the detected kernel framework."""
+     patterns = [
+         ("ptx", r"asm\s+volatile|asm\s*\(|mma\.sync|tcgen05\."),
+         ("cutlass3", r"\bcute::|cutlass/gemm/collective|cutlass::arch::Sm(9|10|12)"),
+         ("cutlass2", r"cutlass/gemm/device/gemm|cutlass::gemm::device"),
+         ("cuda_wmma", r"\bnvcuda::wmma\b|wmma::fragment"),
+         ("triton", r"import\s+triton\b|@triton\.jit|\btl\.dot\b"),
+         ("cuda_raw", r"torch\.utils\.cpp_extension\.load_inline|__global__\s+void"),
+     ]
+     sol = Path("solution.py")
+     if not sol.exists():
+         return
+     code = sol.read_text()
+     label = "unknown"
+     for name, pat in patterns:
+         if re.search(pat, code):
+             label = name
+             break
+     Path("framework.txt").write_text(label + "\n")
+
+
+ if __name__ == "__main__":
+     main()
07_w4a16_gemm/problem.yaml ADDED
@@ -0,0 +1,49 @@
+ name: 07_w4a16_gemm
+ display_name: "W4A16 Weight-only Quantized GEMM"
+ precision: int4_bf16
+ regime: memory  # decode-dominant; M=1 is bandwidth-bound on the int4 weight stream
+
+ # Dense-equivalent FLOPs (matmul work, ignoring dequant arithmetic).
+ flops_formula: "2 * M * N * K"
+
+ # Bytes moved per call (memory roofline):
+ #   x:      M*K*2        (bf16 activations, streamed in once)
+ #   w_q:    (K/2)*N      (packed int4, 0.5 B/elem)
+ #   scales: (K/128)*N*2  (bf16 scales)
+ #   zeros:  (K/128)*N*2  (bf16 zero-points)
+ #   out:    M*N*2        (bf16 store)
+ bytes_formula: "M*K*2 + (K/2)*N + (K/128)*N*2 + (K/128)*N*2 + M*N*2"
+
+ hardware: [RTX_PRO_6000]
+ peak_tflops_key: bf16
+ peak_bandwidth_key: dram
+
+ tolerance:
+   bfloat16: 0.10  # group-quant adds noise on top of bf16 accumulator slop
+
+ # Forbidden ops -- agent must write the unpack + GEMM themselves, not call a
+ # vendor library that does both.
+ forbidden:
+   - "bitsandbytes.functional.dequantize_4bit"
+   - "bitsandbytes.functional.gemv_4bit"
+   - "marlin_kernel.gemm"
+   - "torch.nn.functional.linear"
+
+ sota:
+   name: "bitsandbytes NF4 (gemv_4bit / dequantize_4bit + matmul)"
+   url: "https://github.com/TimDettmers/bitsandbytes"
+   function: "bitsandbytes.functional.gemv_4bit"
+   notes: |
+     Marlin (IST-DASLab) is the W4A16 SOTA on Ampere/Hopper but does not have
+     SM120 (Blackwell consumer) kernels yet. GPTQ-Triton is unmaintained and
+     does not target SM120. bitsandbytes 0.49.2 *does* run on SM120 -- it
+     autotunes its CUDA kernels for compute capability 12.0 -- so we use its
+     NF4 path (different quant scheme but same regime) as the SOTA reference
+     line. Note that bnb's NF4 is symmetric/non-uniform; our reference uses
+     AWQ-style asymmetric int4 with explicit zero-points, which is what the
+     agent must implement. The SOTA line is informational only.
+   deps:
+     - "bitsandbytes>=0.49.2"
+
+ num_correct_trials: 3
+ num_perf_trials: 50
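Plugging the M=1 decode shape into these formulas shows why the regime is `memory` (plain arithmetic, no harness imports):

```python
M, N, K = 1, 12288, 4096  # decode shape from shapes.py

flops = 2 * M * N * K
bytes_moved = M*K*2 + (K/2)*N + (K/128)*N*2 + (K/128)*N*2 + M*N*2

print(f"{flops / 1e6:.1f} MFLOP")                      # ~100.7 MFLOP
print(f"{bytes_moved / 1e6:.1f} MB")                   # ~26.8 MB, dominated by w_q
print(f"intensity: {flops / bytes_moved:.1f} FLOP/B")  # ~3.8 -- deep in the memory-bound regime
```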
07_w4a16_gemm/reference.py ADDED
@@ -0,0 +1,112 @@
+ """Naive W4A16 weight-only quantized GEMM reference (correctness only).
+
+ AWQ/GPTQ-style scheme:
+     x:      (M, K) bf16
+     w_q:    (K // 2, N) uint8 -- two int4 weights packed per byte (low nibble = even-K, high = odd-K)
+     scales: (K // group, N) bf16
+     zeros:  (K // group, N) bf16 -- asymmetric (stored already as float zero-point)
+     out:    (M, N) bf16
+
+ Dequant (per group along K):
+     w_bf[k, n] = (w_q[k, n] - zeros[k // group, n]) * scales[k // group, n]
+ where w_q[k, n] is the unpacked 4-bit value (0..15).
+
+ This reference unpacks to a full bf16 matrix and then runs torch.matmul. Slow and
+ memory-heavy on the dequant; the agent's solution must fuse unpack+GEMM.
+ """
+ from __future__ import annotations
+
+ import torch
+ import torch.nn as nn
+
+ OP_TYPE = "gemm_w4a16"
+ SUPPORTED_PRECISIONS = ["int4_bf16"]
+ HARDWARE_REQUIRED = ["RTX_PRO_6000", "H100", "B200"]
+
+ GROUP_SIZE = 128
+
+
+ def _pack_int4(w_q: torch.Tensor) -> torch.Tensor:
+     """Pack (K, N) uint8 in [0,15] into (K//2, N) uint8.
+
+     Even rows go in the low nibble, odd rows in the high nibble.
+     """
+     K, N = w_q.shape
+     assert K % 2 == 0
+     lo = w_q[0::2].to(torch.uint8) & 0xF
+     hi = w_q[1::2].to(torch.uint8) & 0xF
+     return (lo | (hi << 4)).contiguous()
+
+
+ def _unpack_int4(w_packed: torch.Tensor, K: int) -> torch.Tensor:
+     """Unpack (K//2, N) uint8 -> (K, N) uint8 in [0,15]."""
+     Kh, N = w_packed.shape
+     assert Kh * 2 == K
+     out = torch.empty((K, N), dtype=torch.uint8, device=w_packed.device)
+     out[0::2] = w_packed & 0xF
+     out[1::2] = (w_packed >> 4) & 0xF
+     return out
+
+
+ class Model(nn.Module):
+     """W4A16 GEMM: y = x @ dequant(w_q, scales, zeros).
+
+     Buffers are registered (not Parameters) so state_dict carries them across to
+     the agent's solution. Initialization picks scales/zeros from a normal weight,
+     then quantizes deterministically.
+     """
+
+     def __init__(self, M: int, N: int, K: int, group_size: int = GROUP_SIZE):
+         super().__init__()
+         assert K % group_size == 0, "K must be divisible by group_size"
+         assert K % 2 == 0, "K must be even (int4 packing)"
+         self.M, self.N, self.K = M, N, K
+         self.group_size = group_size
+         n_groups = K // group_size
+
+         # Synthetic quant: take a random bf16 weight, compute per-group asym
+         # quant params, then pack. This produces a *correct* set of (w_q, s, z)
+         # triples that round-trip cleanly under the dequant formula.
+         torch.manual_seed(0xC0DE ^ (M * 1315423911 + N * 2654435761 + K))
+         w_full = torch.randn(K, N, dtype=torch.float32) * 0.02
+
+         w_g = w_full.view(n_groups, group_size, N)
+         w_min = w_g.min(dim=1, keepdim=True).values  # (n_groups, 1, N)
+         w_max = w_g.max(dim=1, keepdim=True).values
+         scales = (w_max - w_min).clamp_min(1e-8) / 15.0  # (n_groups, 1, N)
+         zeros = (-w_min / scales).round().clamp(0, 15)   # (n_groups, 1, N)
+         # Quantize
+         w_q = ((w_g / scales) + zeros).round().clamp(0, 15).to(torch.uint8)
+         w_q = w_q.view(K, N)
+
+         scales_2d = scales.squeeze(1).to(torch.bfloat16)  # (n_groups, N)
+         zeros_2d = zeros.squeeze(1).to(torch.bfloat16)    # (n_groups, N)
+         w_packed = _pack_int4(w_q)  # (K//2, N)
+
+         self.register_buffer("w_q", w_packed)      # uint8
+         self.register_buffer("scales", scales_2d)  # bf16
+         self.register_buffer("zeros", zeros_2d)    # bf16
+
+     def forward(self, x: torch.Tensor) -> torch.Tensor:
+         # Naive: unpack -> dequant -> matmul.
+         K = self.K
+         w_unpacked = _unpack_int4(self.w_q, K).to(torch.bfloat16)  # (K, N) in [0,15]
+         # Broadcast scales/zeros along the group axis.
+         scales = self.scales.repeat_interleave(self.group_size, dim=0)  # (K, N) bf16
+         zeros = self.zeros.repeat_interleave(self.group_size, dim=0)    # (K, N) bf16
+         w_bf = (w_unpacked - zeros) * scales  # (K, N) bf16
+         return x.to(torch.bfloat16) @ w_bf    # (M, N) bf16
+
+
+ M = 1
+ N = 12288
+ K = 4096
+
+
+ def get_inputs():
+     x = torch.randn(M, K, dtype=torch.bfloat16)
+     return [x]
+
+
+ def get_init_inputs():
+     return [M, N, K]
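The `repeat_interleave` calls in `forward` materialize two extra (K, N) bf16 tensors. An equivalent group-broadcast dequant (same numerics; a sketch only — the graded solution must fuse the dequant into the GEMM anyway):

```python
import torch

def dequant_grouped(w_unpacked, scales, zeros, group_size):
    # w_unpacked: (K, N) bf16 in [0,15]; scales/zeros: (K // group_size, N) bf16
    K, N = w_unpacked.shape
    w_g = w_unpacked.view(K // group_size, group_size, N)
    # Insert a singleton group axis so (n_groups, 1, N) broadcasts over group_size.
    w_bf = (w_g - zeros[:, None, :]) * scales[:, None, :]
    return w_bf.view(K, N)
```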
07_w4a16_gemm/shapes.py ADDED
@@ -0,0 +1,13 @@
+ """Shape sweep for W4A16 GEMM.
+
+ Llama-style up_proj / qkv_proj shapes. Decode (M=1) is the bandwidth-bound
+ case every inference engine optimizes -- it's the bar to beat.
+ """
+
+ SHAPES = [
+     {"M": 1,   "N": 12288, "K": 4096},  # decode: memory-bound on int4 weight read
+     {"M": 32,  "N": 12288, "K": 4096},  # small prefill: mixed regime
+     {"M": 256, "N": 12288, "K": 4096},  # larger prefill: approaching compute
+     {"M": 1,   "N": 4096,  "K": 4096},  # decode: square shape
+     {"M": 16,  "N": 14336, "K": 4096},  # speculative-decode-ish
+ ]
07_w4a16_gemm/sota.py ADDED
@@ -0,0 +1,87 @@
+ """SOTA reference for W4A16 GEMM.
+
+ Library survey on RTX PRO 6000 Blackwell (SM120, CC 12.0):
+
+ - Marlin (IST-DASLab): no SM120 kernels (Ampere/Hopper only). Skip.
+ - GPTQ-Triton (fpgaminer): unmaintained; pure Triton path works on SM120
+                            but is not faster than Marlin on its target HW
+                            and has no Blackwell tuning. Skip as primary.
+ - AWQ (mit-han-lab/llm-awq): CUDA kernels not built for SM120 in the wheel.
+                              Skip.
+ - bitsandbytes >= 0.49.2: CUDA kernels compile and run on SM120 (verified
+                           on this machine). Different quant scheme (NF4,
+                           symmetric, blocksize 64) than our reference's
+                           AWQ-style asymmetric INT4 with group_size 128,
+                           but it occupies the same memory regime and is
+                           the only tuned W4A16-class kernel that runs on
+                           SM120 today. Used here as an *informational*
+                           SOTA line, not as a numerical reference.
+
+ The benchmark calls `sota_forward(x, ref_model)` and times it; correctness is
+ NOT checked against this path (the quant scheme differs).
+ """
+ from __future__ import annotations
+
+ import torch
+
+ _BNB_OK: bool | None = None
+
+
+ def is_available() -> bool:
+     global _BNB_OK
+     if _BNB_OK is not None:
+         return _BNB_OK
+     try:
+         import bitsandbytes  # noqa: F401
+         from bitsandbytes.functional import quantize_4bit  # noqa: F401
+         _BNB_OK = torch.cuda.is_available()
+     except Exception:
+         _BNB_OK = False
+     return _BNB_OK
+
+
+ _CACHE: dict[tuple[int, int, int], tuple] = {}
+
+
+ def _prepare(ref_model) -> tuple:
+     """Quantize the reference's bf16-equivalent weight with bnb NF4 once."""
+     key = (ref_model.M, ref_model.N, ref_model.K)
+     if key in _CACHE:
+         return _CACHE[key]
+     from bitsandbytes.functional import quantize_4bit
+     # Reconstruct the bf16 weight that the reference effectively uses.
+     # We dequantize the int4 packed weights via the reference's own formula
+     # so the SOTA line operates on the *same* underlying matrix.
+     # (Numerics will still differ slightly because bnb re-quantizes to NF4.)
+     K, N = ref_model.K, ref_model.N
+     w_packed = ref_model.w_q   # (K//2, N) uint8
+     scales = ref_model.scales  # (K/group, N) bf16
+     zeros = ref_model.zeros    # (K/group, N) bf16
+     g = ref_model.group_size
+
+     w_unpacked = torch.empty((K, N), dtype=torch.uint8, device=w_packed.device)
+     w_unpacked[0::2] = w_packed & 0xF
+     w_unpacked[1::2] = (w_packed >> 4) & 0xF
+     s_full = scales.repeat_interleave(g, dim=0)  # (K, N)
+     z_full = zeros.repeat_interleave(g, dim=0)
+     w_bf = (w_unpacked.to(torch.bfloat16) - z_full) * s_full  # (K, N) bf16
+
+     # bnb expects (out_features, in_features) = (N, K)
+     w_for_bnb = w_bf.t().contiguous()
+     qw, qstate = quantize_4bit(w_for_bnb, blocksize=64, quant_type="nf4")
+     _CACHE[key] = (qw, qstate, w_bf)
+     return _CACHE[key]
+
+
+ def sota_forward(x: torch.Tensor, ref_model) -> torch.Tensor:
+     """W4A16 GEMM via bitsandbytes NF4. x: (M, K) bf16, returns (M, N) bf16."""
+     from bitsandbytes.functional import dequantize_4bit, gemv_4bit
+     qw, qstate, _ = _prepare(ref_model)
+     M = x.shape[0]
+     if M == 1:
+         # Decode path: bnb gemv_4bit. Wants (1, 1, K).
+         out = gemv_4bit(x.view(1, 1, -1).contiguous(), qw.t(), state=qstate)
+         return out.view(1, -1)
+     # Prefill: dequant then matmul (bnb has no batched W4A16 GEMM kernel).
+     w_deq = dequantize_4bit(qw, qstate, blocksize=64, quant_type="nf4")  # (N, K)
+     return x @ w_deq.t()
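Guarded usage, mirroring how benchmark.py wires this path in (runs only if bitsandbytes imports cleanly; any live failure is treated as SOTA-unavailable):

```python
import torch
import reference
import sota

ref = reference.Model(*reference.get_init_inputs()).cuda().eval()
x = torch.randn(reference.M, reference.K, dtype=torch.bfloat16, device="cuda")

if sota.is_available():
    try:
        y = sota.sota_forward(x, ref)  # (M, N) bf16 via the bnb NF4 path
    except Exception:
        y = None  # scored as "SOTA unavailable" by the harness
```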
README.md ADDED
@@ -0,0 +1,114 @@
+ ---
+ license: mit
+ language:
+   - en
+ tags:
+   - gpu
+   - cuda
+   - kernels
+   - benchmarks
+   - code-generation
+   - agents
+ size_categories:
+   - n<1K
+ pretty_name: KernelBench-Hard Problems
+ ---
+
+ # KernelBench-Hard — Problem Definitions
+
+ The 7 problem definitions for **KernelBench-Hard**, a benchmark for autonomous LLM coding agents writing GPU kernels on a single Blackwell GPU (RTX PRO 6000, sm_120, CUDA 13.2).
+
+ Companion datasets:
+ - [`Infatoshi/kernelbench-hard-runs`](https://huggingface.co/datasets/Infatoshi/kernelbench-hard-runs) — 84 agent transcripts, winning solutions, leaderboard, reward-hack annotations
+ - Live site: https://kernelbench.com/hard
+ - Methodology blog: https://kernelbench.com/blog/hard
+ - Source repo: https://github.com/Infatoshi/kernelbench.com (monorepo)
+
+ ## Problems
+
+ | id | task | shapes | regime |
+ | --- | --- | --- | --- |
+ | `01_fp8_gemm` | FP8 (E4M3) GEMM with bf16 accumulation | 8 | compute-bound |
+ | `02_kda_cutlass` | Kimi Delta Attention forward (CUTLASS) | 4 | compute-bound |
+ | `03_paged_attention` | Paged-attention decode (vLLM-style) | 6 | memory-bound |
+ | `04_kahan_softmax` | Numerically stable softmax with Kahan compensation | 5 | memory-bound |
+ | `05_topk_bitonic` | Top-k via bitonic select | 6 | memory-bound |
+ | `06_sonic_moe_swiglu` | Sonic-MoE up-projection (grouped GEMM + SwiGLU) | 3 | compute-bound |
+ | `07_w4a16_gemm` | W4A16 weight-only quantized GEMM | 5 | memory-bound |
+
+ ## File layout per problem
+
+ Each `0X_<name>/` directory contains:
+
+ | file | purpose |
+ | --- | --- |
+ | `reference.py` | The PyTorch reference implementation. The agent must match its output. |
+ | `check.py` | Correctness harness — reference vs. submission with per-dtype `torch.allclose`-style tolerances |
+ | `benchmark.py` | Timing harness — L2-flush + median over `num_perf_trials` timed trials (20–50 per problem), prints `shape= variant= tflops= gbps= ms=` lines |
+ | `problem.yaml` | Metadata: regime (compute/memory bound), precision, tolerance, roofline formulas, forbidden ops |
+ | `shapes.py` | Iterable of input shapes the benchmark runs |
+ | `sota.py` | Hand-written SOTA reference (when available) for the upper-bound row in the leaderboard |
+ | `PROMPT.txt` | The exact prompt fed to the agent harness |
+
+ ## Scoring (peak_fraction)
+
+ For each (model, problem) cell, we compute `peak_fraction` ∈ [0, 1] as:
+
+ ```
+ peak_fraction = geomean over shapes of (achieved_throughput / hardware_peak)
+ ```
+
+ where `achieved_throughput` is TFLOPS for compute-bound problems or GB/s for memory-bound ones, and `hardware_peak` is the sm_120 spec (peak TFLOPS for the problem's precision, or peak DRAM bandwidth). This rewards approaching the hardware ceiling rather than the easier-to-game "speedup over PyTorch."
+
+ A solution must first pass `check.py` (correctness) before it gets a `peak_fraction`.
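For concreteness, here is the geomean exactly as benchmark.py computes it, including the 1e-9 clamp (the per-shape fractions below are made-up values):

```python
from math import exp, log

per_shape = [0.42, 0.38, 0.51]  # hypothetical per-shape peak fractions
peak_fraction = exp(sum(log(max(f, 1e-9)) for f in per_shape) / len(per_shape))
print(f"{peak_fraction:.4f}")  # 0.4334
```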
+
+ ## Hardware
+
+ - **GPU**: NVIDIA RTX PRO 6000 Blackwell Workstation
+ - **SM**: sm_120a (Blackwell)
+ - **VRAM**: 96 GB GDDR7
+ - **Peak memory bandwidth**: ~1800 GB/s (GDDR7)
+ - **CUDA**: 13.2 / NVCC 12.8 / Driver 595.58.03
+ - **Host**: Ryzen 9950X3D, 92 GB DDR5
+
+ ## Rubric leaks (known issues)
+
+ Two of the seven problems leak the rubric — meaning the easiest path to a high score involves something other than writing a fast correct kernel. We publish anyway, with these documented inline in `benchmarks/hard/SPEC.md` and per-cell in the runs dataset's annotation files:
+
+ - **`01_fp8_gemm`** — agents that downcast to bf16 + tensor cores get ~80–90% of peak without doing the actual fp8 quantization. The judge model catches some of these but not all. See annotations with `verdict: rubric_leak`.
+ - **`04_kahan_softmax`** — agents that skip Kahan compensation pass `check.py`'s atol within float32 limits and get a free ~2× speedup. The numerical instability only shows up at extreme magnitudes that the test inputs don't probe.
+
+ These are documented honestly because (a) we want the community to fix them, and (b) the rubric leaks themselves are interesting reward-hacking examples.
+
+ ## How to use
+
+ ```python
+ from datasets import load_dataset
+ # Or just clone the repo:
+ # git clone https://huggingface.co/datasets/Infatoshi/kernelbench-hard-problems
+ ```
+
+ To run a problem locally with your own kernel:
+
+ ```bash
+ cd 01_fp8_gemm
+ # Drop your solution at solution.py
+ uv run python check.py      # verifies correctness
+ uv run python benchmark.py  # measures throughput
+ ```
+
+ ## License
+
+ MIT. Cite as:
+
+ ```
+ @misc{kernelbench-hard-2026,
+   author = {Arledge, Elliot},
+   title  = {KernelBench-Hard: A GPU Kernel Engineering Benchmark for Autonomous Coding Agents},
+   year   = {2026},
+   url    = {https://kernelbench.com/hard},
+   note   = {Built on top of KernelBench (Ouyang et al., 2025).}
+ }
+ ```
+
+ Original KernelBench: Ouyang et al., https://github.com/ScalingIntelligence/KernelBench