andylizf committed
Commit d7cab26 · verified · 1 parent: f2f9820

Update dataset

Files changed (1): data/test-00000-of-00001.json (+1 −1)
data/test-00000-of-00001.json CHANGED
@@ -188,7 +188,7 @@
188
  {"problem_id": "poc_generation/heap_use_after_free", "category": "research", "statement": "", "config": "{\n \"dependencies\": {\n \"uv_project\": \"resources\"\n },\n \"datasets\": [\n \"arvo:47101\"\n ],\n \"tag\": \"security\"\n}\n"}
189
  {"problem_id": "poc_generation/stack_buffer_overflow", "category": "research", "statement": "", "config": "{\"tag\": \"security\"}\n"}
190
  {"problem_id": "poc_generation/uninitialized_value", "category": "research", "statement": "", "config": "{\"tag\": \"security\"}\n"}
191
- {"problem_id": "qknorm", "category": "research", "statement": "QKNorm Optimization Problem\n============================\n\nProblem Setting\n---------------\nDesign and optimize high-performance implementations for Query-Key Normalization (QKNorm) on GPU. This problem focuses on implementing efficient normalization kernels that apply RMSNorm to query and key tensors.\n\nThis is a **memory-bound** (even **launch-bound**) **tiny operator**. Performance optimization requires careful attention to:\n\n1. **Memory Efficiency**: Focus on **vectorized memory access patterns**. Minimize memory transactions and maximize memory bandwidth utilization.\n\n2. **Operation Fusion**: **Avoid additional transpose/contiguous kernels**. Fuse operations to reduce kernel launch overhead and memory traffic.\n\n3. **Non-Contiguous Input Handling**: **Be aware that inputs may be non-contiguous** due to weight-QKV fusion. Your implementation should efficiently handle non-contiguous memory layouts without triggering expensive memory copies.\n\nTarget\n------\n- **Primary**: Ensure correctness across diverse tensor shapes\n- **Secondary**: Maximize geometric mean speedup over baseline (higher is better)\n- **Tertiary**: Minimize kernel launch overhead and memory usage\n\nAPI Specification\n-----------------\nImplement a `Solution` class that returns a qknorm implementation:\n\n```python\nclass Solution:\n def solve(self, spec_path: str = None) -> dict:\n \"\"\"\n Returns a dict with either:\n - {\"code\": \"python_code_string\"}\n - {\"program_path\": \"path/to/kernel.py\"}\n \"\"\"\n # Your implementation\n pass\n```\n\nYour kernel implementation must provide:\n\n```python\nimport torch\nimport flashinfer\n\ndef qknorm(q: torch.Tensor, k: torch.Tensor, norm_weight: torch.Tensor):\n \"\"\"\n Apply RMSNorm to query and key tensors.\n \n Args:\n q: Query tensor of arbitrary shape (will be reshaped to 2D)\n k: Key tensor of arbitrary shape (will be reshaped to 2D)\n norm_weight: Normalization weight tensor of shape (hidden_dim,)\n \n Returns:\n Tuple of (q_normalized, k_normalized) tensors\n \"\"\"\n pass\n```\n\nRequired Default Implementation:\n```python\ndef default_qknorm(q: torch.Tensor, k: torch.Tensor, norm_weight: torch.Tensor):\n q_2d = q.contiguous().view(-1, q.shape[-1])\n k_2d = k.contiguous().view(-1, k.shape[-1])\n q_o = torch.empty_like(q_2d)\n k_o = torch.empty_like(k_2d)\n flashinfer.norm.rmsnorm(q_2d, norm_weight, out=q_o)\n flashinfer.norm.rmsnorm(k_2d, norm_weight, out=k_o)\n return q_o.view(q.shape), k_o.view(k.shape)\n```\n\nBaseline Implementation:\n```python\ndef customized_qknorm(q: torch.Tensor, k: torch.Tensor, norm_weight: torch.Tensor):\n q_o = torch.empty(q.shape, device=q.device, dtype=q.dtype)\n k_o = torch.empty(k.shape, device=k.device, dtype=k.dtype)\n flashinfer.norm.rmsnorm(q, norm_weight, out=q_o)\n flashinfer.norm.rmsnorm(k, norm_weight, out=k_o)\n return q_o, k_o\n```\n\nAPI Usage Notes\n---------------\n- The evaluator looks for a `qknorm` function in the module namespace\n- Function must handle tensor reshaping correctly (q and k may have arbitrary shapes)\n- Must use flashinfer.norm.rmsnorm for normalization\n- Function returns a tuple of (q_normalized, k_normalized) tensors\n- **Important**: Inputs q and k may be **non-contiguous** due to weight-QKV fusion\n- **Avoid**: Additional `.contiguous()` or `.transpose()` calls that trigger memory copies\n- **Focus**: Vectorized memory access and operation fusion to minimize kernel launches\n\nScoring (0-100)\n---------------\nPerformance is measured against baseline implementations:\n\n```\ngeometric_mean_speedup = geometric_mean(baseline_times / answer_times)\n\nif speedup < 0.5 or correctness is wrong:\n score = 0\nelif speedup >= 0.5 and speedup < 1.0:\n score = 50\nelif speedup >= 1.0:\n score = 100\n```\n\n- 0 points = Speedup < 0.5x OR correctness fails\n- 50 points = Speedup >= 0.5x and < 1.0x\n- 100 points = Speedup >= 1.0x\n\nEvaluation Details\n------------------\n- Shapes focus on diverse batch-sizes, head-dim, num-kv-heads, num-qo-heads, e.g.:\n - (16, 8, 32, 128)\n - (128, 32, 32, 64)\n- Correctness verified with tolerance: rtol=1e-2, atol=5e-3\n- Performance measured using median execution time\n- Requires CUDA backend and GPU support\n", "config": "{\n \"dependencies\": {\n \"uv_project\": \"resources\"\n },\n \"datasets\": [],\n \"runtime\": {\n \"resources\": {\n \"accelerators\": \"L4:1\"\n },\n \"docker\": {\n \"image\": \"andylizf/triton-tlx:tlx-nv-cu122\",\n \"gpu\": true\n },\n \"environment\": \"CUDA 12.2, Python 3.11, PyTorch 2.0+, flashinfer 0.5.0, Triton 3.0+\"\n },\n \"tag\": \"hpc\"\n}\n"}
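The qknorm record stresses that the operator is launch-bound. A minimal numpy sketch of the RMSNorm semantics (assuming `flashinfer.norm.rmsnorm` computes `y = x / sqrt(mean(x^2) + eps) * w` per row, with `eps` a small constant) shows why q and k can be fused into one call: RMSNorm is independent per row, so normalizing the stacked rows matches normalizing q and k separately.

```python
import numpy as np

def rmsnorm(x, w, eps=1e-6):
    # RMSNorm over the last dim: y = x / sqrt(mean(x^2) + eps) * w
    rms = np.sqrt(np.mean(x * x, axis=-1, keepdims=True) + eps)
    return (x / rms) * w

rng = np.random.default_rng(0)
q = rng.standard_normal((8, 64))
k = rng.standard_normal((4, 64))
w = rng.standard_normal(64)

# One call over the concatenated rows equals two separate calls,
# which is the basis for fusing q and k into a single kernel launch.
stacked = rmsnorm(np.concatenate([q, k], axis=0), w)
q_o, k_o = stacked[:8], stacked[8:]
assert np.allclose(q_o, rmsnorm(q, w))
assert np.allclose(k_o, rmsnorm(k, w))
```

On GPU the same idea would avoid one of the two `rmsnorm` launches, provided the fused view does not itself force a copy of non-contiguous inputs.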
192
  {"problem_id": "ragged_attention", "category": "research", "statement": "Ragged Attention Optimization Problem\n======================================\n\nProblem Setting\n---------------\nDesign and optimize high-performance Triton kernels for ragged attention computation on GPU. This problem focuses on implementing efficient kernels that handle variable-length sequences using ragged attention, where each query row can attend to a different number of key/value rows.\n\nThe challenge involves optimizing:\n- **Ragged attention**: Efficiently handling variable-length sequences where each row has different attention lengths\n- **Memory access patterns**: Efficient loading and storing of Q, K, V tensors with ragged masking\n- **Streaming softmax**: Computing softmax in a streaming fashion for numerical stability\n- **Row-wise masking**: Correctly masking attention scores based on row_lens\n- **Mixed precision**: Handling float16 inputs/outputs with float32 accumulation\n- **Block tiling**: Optimal block sizes for GPU execution across different matrix sizes\n- **Performance benchmarking**: Achieving speedup over baseline PyTorch implementations\n\nTarget\n------\n- **Primary**: Maximize geometric mean speedup over baseline (higher is better)\n- **Secondary**: Ensure correctness across diverse matrix sizes and ragged lengths\n- **Tertiary**: Minimize kernel launch overhead and memory usage\n\nAPI Specification\n-----------------\nImplement a `Solution` class that returns a Triton kernel implementation:\n\n```python\nclass Solution:\n def solve(self, spec_path: str = None) -> dict:\n \"\"\"\n Returns a dict with either:\n - {\"code\": \"python_code_string\"}\n - {\"program_path\": \"path/to/kernel.py\"}\n \"\"\"\n # Your implementation\n pass\n```\n\nYour kernel implementation must provide:\n\n```python\nimport torch\nimport triton\nimport triton.language as tl\n\ndef ragged_attn(Q: torch.Tensor, K: torch.Tensor, V: torch.Tensor, row_lens: torch.Tensor) -> 
torch.Tensor:\n \"\"\"\n Ragged attention computation.\n \n Args:\n Q: Query tensor of shape (M, D) - query features (float16)\n K: Key tensor of shape (N, D) - key features (float16)\n V: Value tensor of shape (N, Dv) - value features (float16)\n row_lens: Row lengths tensor of shape (M,) - number of valid K/V rows per Q row (int32 or int64)\n \n Returns:\n Output tensor of shape (M, Dv) - attention output (float16)\n \n Semantics:\n For each query row i (0 <= i < M), compute attention over the first row_lens[i] key/value rows.\n Specifically:\n - scores[i, j] = (Q[i] @ K[j].T) * scale, for j < row_lens[i], else -inf\n - P[i] = softmax(scores[i])\n - O[i] = P[i] @ V[:row_lens[i]]\n \"\"\"\n pass\n```\n\nScoring\n-------\nThe scoring system evaluates your implementation based on geometric mean speedup over GPU baseline:\n\n- **0 points**: 1x GPU baseline (same speed as PyTorch GPU baseline)\n- **100 points**: 3x GPU baseline (3x speedup over PyTorch GPU baseline)\n- **Linear interpolation**: Scores between 0-100 are linearly interpolated based on speedup\n\nThe evaluation uses the following test cases:\n- M (number of query rows): [512, 1024]\n- N (number of key/value rows): 1024\n- D (model dimension): 64\n- Dv (value dimension): 64\n- row_lens: Random integers between [min_ratio*N, N] where min_ratio=0.25\n\nCorrectness is verified using:\n- Relative tolerance: 1e-2\n- Absolute tolerance: 5e-3\n\nAll tests must pass for a non-zero score. 
If any test fails correctness, the score is 0.\n\nExample\n-------\n```python\nimport torch\nimport triton\nimport triton.language as tl\n\n@triton.jit\ndef _ragged_kernel(Q, K, V, O, ROW_LENS, ...):\n # Your kernel implementation\n pass\n\ndef ragged_attn(Q: torch.Tensor, K: torch.Tensor, V: torch.Tensor, row_lens: torch.Tensor) -> torch.Tensor:\n # Your kernel launch logic\n pass\n```\n\nConstraints\n-----------\n- All tensors must be CUDA tensors (float16 for Q, K, V; int32/int64 for row_lens)\n- Output must be float16\n- The implementation must handle variable row lengths correctly\n- Accumulation should use float32 for numerical stability\n- Must use streaming softmax for numerical stability\n\nTips\n----\n1. Use efficient block tiling (BM, BN, BD, BDV) for optimal performance\n2. Implement streaming softmax to handle large attention matrices\n3. Correctly mask attention scores based on row_lens\n4. Load row_lens once per program and broadcast for masking\n5. Use proper masking for boundary conditions\n\n", "config": "dependencies:\n uv_project: resources\ntag: hpc\nruntime:\n environment: \"Triton 3.2.0 with CUDA 12.2 (triton-tlx image)\"\n docker:\n image: andylizf/triton-tlx:tlx-nv-cu122\n gpu: true\n resources:\n accelerators: L4:1\n"}
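The docstring semantics in the ragged_attention record can be pinned down with a plain numpy reference, useful as a correctness oracle for a Triton kernel. The `1/sqrt(D)` default for `scale` is an assumption; the statement uses `scale` without defining it.

```python
import numpy as np

def ragged_attn_ref(Q, K, V, row_lens, scale=None):
    # Reference for the stated semantics: row i attends to the
    # first row_lens[i] key/value rows only.
    M, D = Q.shape
    N = K.shape[0]
    scale = 1.0 / np.sqrt(D) if scale is None else scale
    scores = (Q @ K.T) * scale                      # (M, N)
    valid = np.arange(N)[None, :] < row_lens[:, None]
    scores = np.where(valid, scores, -np.inf)       # mask j >= row_lens[i]
    scores -= scores.max(axis=1, keepdims=True)     # stabilize softmax
    P = np.exp(scores)
    P /= P.sum(axis=1, keepdims=True)
    return P @ V

rng = np.random.default_rng(1)
Q = rng.standard_normal((4, 8))
K = rng.standard_normal((6, 8))
V = rng.standard_normal((6, 3))
row_lens = np.array([1, 2, 6, 3])
O = ragged_attn_ref(Q, K, V, row_lens)
# A row with row_lens[i] == 1 attends only to K[0], so its output is V[0].
assert np.allclose(O[0], V[0])
```

A Triton kernel would compute the same thing blockwise with a streaming (online) softmax instead of materializing the full (M, N) score matrix.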
193
  {"problem_id": "symbolic_regression/mccormick", "category": "research", "statement": "Symbolic Regression Benchmark - McCormick Dataset\n=================================================\n\nProblem Setting\n---------------\nLearn a closed-form symbolic expression `f(x1, x2)` that predicts the target `y`.\n\nThis dataset is derived from the McCormick function, a classic 2D optimization test function featuring a combination of trigonometric and polynomial terms. The function exhibits a smooth, wavy surface with a global minimum.\n\nInput Format\n------------\n- Your `Solution.solve` receives:\n - `X`: numpy.ndarray of shape `(n, 2)` containing feature values\n - `y`: numpy.ndarray of shape `(n,)` containing target values\n- Dataset columns: `x1, x2, y`\n\nOutput Specification\n--------------------\nImplement a `Solution` class in `solution.py`:\n\n```python\nimport numpy as np\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n \"\"\"\n Args:\n X: Feature matrix of shape (n, 2)\n y: Target values of shape (n,)\n\n Returns:\n dict with keys:\n - \"expression\": str, a Python-evaluable expression using x1, x2\n - \"predictions\": list/array of length n (optional)\n - \"details\": dict with optional \"complexity\" int\n \"\"\"\n # Example: fit a symbolic expression to the data\n expression = \"x1 + x2\" # placeholder\n return {\n \"expression\": expression,\n \"predictions\": None, # will be computed from expression if omitted\n \"details\": {}\n }\n```\n\nExpression Requirements:\n- Must be a valid Python expression string\n- Use variable names: `x1`, `x2`\n- Allowed operators: `+`, `-`, `*`, `/`, `**`\n- Allowed functions: `sin`, `cos`, `exp`, `log`\n- Numeric constants are allowed\n\nDependencies (pinned versions)\n------------------------------\n```\npysr==0.19.0\nnumpy==1.26.4\npandas==2.2.2\nsympy==1.13.3\n```\n\nMinimal Working Examples\n------------------------\n\n**Example 1: Using PySR 
(recommended)**\n```python\nimport numpy as np\nfrom pysr import PySRRegressor\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n model = PySRRegressor(\n niterations=40,\n binary_operators=[\"+\", \"-\", \"*\", \"/\"],\n unary_operators=[\"sin\", \"cos\", \"exp\", \"log\"],\n populations=15,\n population_size=33,\n maxsize=25,\n verbosity=0,\n progress=False,\n random_state=42,\n )\n model.fit(X, y, variable_names=[\"x1\", \"x2\"])\n\n # Get best expression as sympy, convert to string\n best_expr = model.sympy()\n expression = str(best_expr)\n\n # Predictions\n predictions = model.predict(X)\n\n return {\n \"expression\": expression,\n \"predictions\": predictions.tolist(),\n \"details\": {}\n }\n```\n\n**Example 2: Manual expression (simple baseline)**\n```python\nimport numpy as np\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n # Simple linear combination as baseline\n x1, x2 = X[:, 0], X[:, 1]\n\n # Fit coefficients via least squares\n A = np.column_stack([x1, x2, np.ones_like(x1)])\n coeffs, _, _, _ = np.linalg.lstsq(A, y, rcond=None)\n a, b, c = coeffs\n\n expression = f\"{a:.6f}*x1 + {b:.6f}*x2 + {c:.6f}\"\n predictions = a * x1 + b * x2 + c\n\n return {\n \"expression\": expression,\n \"predictions\": predictions.tolist(),\n \"details\": {}\n }\n```\n\n**Example 3: Using sympy for expression manipulation**\n```python\nimport numpy as np\nimport sympy as sp\nfrom pysr import PySRRegressor\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n model = PySRRegressor(\n niterations=30,\n binary_operators=[\"+\", \"-\", \"*\", \"/\"],\n unary_operators=[\"sin\", \"cos\"],\n verbosity=0,\n progress=False,\n )\n model.fit(X, y, variable_names=[\"x1\", \"x2\"])\n\n # Get sympy expression and simplify\n sympy_expr = model.sympy()\n simplified = 
sp.simplify(sympy_expr)\n\n # Convert to evaluable string\n expression = str(simplified)\n\n return {\n \"expression\": expression,\n \"predictions\": None, # evaluator will compute from expression\n \"details\": {}\n }\n```\n\nPySR API Notes (v0.19.0)\n------------------------\n- `model.fit(X, y, variable_names=[\"x1\", \"x2\"])` - use variable_names to match expected output\n- `model.sympy()` - returns best expression as sympy object\n- `model.predict(X)` - returns predictions array\n- `model.equations_` - DataFrame of all discovered equations\n- Common parameters:\n - `niterations`: number of evolution iterations (more = better but slower)\n - `populations`: number of parallel populations\n - `maxsize`: maximum expression complexity\n - `verbosity=0, progress=False`: suppress output\n\nExpression Format Requirements\n------------------------------\n- Must be a valid Python expression string\n- Use variable names: `x1`, `x2`\n- Allowed operators: `+`, `-`, `*`, `/`, `**`\n- Allowed functions: `sin`, `cos`, `exp`, `log` (NO `np.` prefix)\n- Numeric constants are allowed\n- The evaluator uses `sympy.sympify()` to parse your expression\n\nScoring\n-------\n```\nMSE = (1/n) Σ (y_i - ŷ_i)²\nScore = 100 × clamp((m_base - MSE) / (m_base - m_ref), 0, 1) × 0.99^max(C - C_ref, 0)\n```\n\n- `m_base`: linear regression baseline MSE\n- `m_ref`, `C_ref`: reference solution MSE and complexity\n- `C = 2 × (#binary ops) + (#unary ops)`\n- Lower MSE and lower complexity yield higher scores\n\nEnvironment\n-----------\nRun `set_up_env.sh` to install dependencies.\n", "config": "{\n \"dependencies\": {\n \"uv_project\": \"resources\"\n },\n \"datasets\": [],\n \"tag\": \"pl\"\n}\n"}
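The mccormick record says the dataset is derived from the McCormick function. Its standard closed form (a well-known 2D optimization benchmark) fits the allowed operator set; whether the dataset adds noise or rescaling is not stated, so treat this as the likely target rather than a guaranteed answer.

```python
import numpy as np

def mccormick(x1, x2):
    # Standard McCormick test function:
    # f = sin(x1 + x2) + (x1 - x2)^2 - 1.5*x1 + 2.5*x2 + 1
    return np.sin(x1 + x2) + (x1 - x2) ** 2 - 1.5 * x1 + 2.5 * x2 + 1.0

# Known landmarks: f(0, 0) = 1 and the global minimum
# f ~= -1.9133 at (-0.54719, -1.54719).
assert abs(mccormick(0.0, 0.0) - 1.0) < 1e-12
assert abs(mccormick(-0.54719, -1.54719) + 1.9133) < 1e-3
```

Checking a PySR candidate against these landmarks is a quick sanity test before submitting the expression string.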
194
  {"problem_id": "symbolic_regression/mixed_polyexp_4d", "category": "research", "statement": "Symbolic Regression Benchmark - Mixed PolyExp 4D Dataset\n=========================================================\n\nProblem Setting\n---------------\nLearn a closed-form symbolic expression `f(x1, x2, x3, x4)` that predicts the target `y`.\n\nThis is a higher-dimensional dataset (4 input features) combining polynomial interactions with exponential decay. The function involves cross-terms between variables and Gaussian-like damping, making it more challenging than the 2D variants.\n\nInput Format\n------------\n- Your `Solution.solve` receives:\n - `X`: numpy.ndarray of shape `(n, 4)` containing feature values\n - `y`: numpy.ndarray of shape `(n,)` containing target values\n- Dataset columns: `x1, x2, x3, x4, y`\n\nOutput Specification\n--------------------\nImplement a `Solution` class in `solution.py`:\n\n```python\nimport numpy as np\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n \"\"\"\n Args:\n X: Feature matrix of shape (n, 4)\n y: Target values of shape (n,)\n\n Returns:\n dict with keys:\n - \"expression\": str, a Python-evaluable expression using x1, x2, x3, x4\n - \"predictions\": list/array of length n (optional)\n - \"details\": dict with optional \"complexity\" int\n \"\"\"\n # Example: fit a symbolic expression to the data\n expression = \"x1 + x2 + x3 + x4\" # placeholder\n return {\n \"expression\": expression,\n \"predictions\": None, # will be computed from expression if omitted\n \"details\": {}\n }\n```\n\nExpression Requirements:\n- Must be a valid Python expression string\n- Use variable names: `x1`, `x2`, `x3`, `x4`\n- Allowed operators: `+`, `-`, `*`, `/`, `**`\n- Allowed functions: `sin`, `cos`, `exp`, `log`\n- Numeric constants are allowed\n\nDependencies (pinned 
versions)\n------------------------------\n```\npysr==0.19.0\nnumpy==1.26.4\npandas==2.2.2\nsympy==1.13.3\n```\n\nMinimal Working Examples\n------------------------\n\n**Example 1: Using PySR (recommended)**\n```python\nimport numpy as np\nfrom pysr import PySRRegressor\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n model = PySRRegressor(\n niterations=50, # more iterations for 4D\n binary_operators=[\"+\", \"-\", \"*\", \"/\"],\n unary_operators=[\"sin\", \"cos\", \"exp\", \"log\"],\n populations=20,\n population_size=40,\n maxsize=30, # larger for 4D complexity\n verbosity=0,\n progress=False,\n random_state=42,\n )\n model.fit(X, y, variable_names=[\"x1\", \"x2\", \"x3\", \"x4\"])\n\n # Get best expression as sympy, convert to string\n best_expr = model.sympy()\n expression = str(best_expr)\n\n # Predictions\n predictions = model.predict(X)\n\n return {\n \"expression\": expression,\n \"predictions\": predictions.tolist(),\n \"details\": {}\n }\n```\n\n**Example 2: Manual expression (simple baseline)**\n```python\nimport numpy as np\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n # Simple linear combination as baseline\n x1, x2, x3, x4 = X[:, 0], X[:, 1], X[:, 2], X[:, 3]\n\n # Fit coefficients via least squares\n A = np.column_stack([x1, x2, x3, x4, np.ones_like(x1)])\n coeffs, _, _, _ = np.linalg.lstsq(A, y, rcond=None)\n a, b, c, d, e = coeffs\n\n expression = f\"{a:.6f}*x1 + {b:.6f}*x2 + {c:.6f}*x3 + {d:.6f}*x4 + {e:.6f}\"\n predictions = a * x1 + b * x2 + c * x3 + d * x4 + e\n\n return {\n \"expression\": expression,\n \"predictions\": predictions.tolist(),\n \"details\": {}\n }\n```\n\nPySR API Notes (v0.19.0)\n------------------------\n- `model.fit(X, y, variable_names=[\"x1\", \"x2\", \"x3\", \"x4\"])` - use variable_names to match expected output\n- `model.sympy()` - returns best expression as sympy object\n- 
`model.predict(X)` - returns predictions array\n- `model.equations_` - DataFrame of all discovered equations\n- Common parameters:\n - `niterations`: number of evolution iterations (more = better but slower)\n - `populations`: number of parallel populations\n - `maxsize`: maximum expression complexity\n - `verbosity=0, progress=False`: suppress output\n\nExpression Format Requirements\n------------------------------\n- Must be a valid Python expression string\n- Use variable names: `x1`, `x2`, `x3`, `x4`\n- Allowed operators: `+`, `-`, `*`, `/`, `**`\n- Allowed functions: `sin`, `cos`, `exp`, `log` (NO `np.` prefix)\n- Numeric constants are allowed\n- The evaluator uses `sympy.sympify()` to parse your expression\n\nScoring\n-------\n```\nMSE = (1/n) Σ (y_i - ŷ_i)²\nScore = 100 × clamp((m_base - MSE) / (m_base - m_ref), 0, 1) × 0.99^max(C - C_ref, 0)\n```\n\n- `m_base`: linear regression baseline MSE\n- `m_ref`, `C_ref`: reference solution MSE and complexity\n- `C = 2 × (#binary ops) + (#unary ops)`\n- Lower MSE and lower complexity yield higher scores\n\nEnvironment\n-----------\nRun `set_up_env.sh` to install dependencies.\n", "config": "{\n \"dependencies\": {\n \"uv_project\": \"resources\"\n },\n \"datasets\": [],\n \"tag\": \"pl\"\n}\n"}
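The scoring formula shared by both symbolic-regression records can be sanity-checked directly; the `m_base`, `m_ref`, and `C_ref` values below are hypothetical, since the real references are not published in the statements.

```python
def sr_score(mse, m_base, m_ref, C, C_ref):
    # Score = 100 * clamp((m_base - MSE)/(m_base - m_ref), 0, 1)
    #             * 0.99^max(C - C_ref, 0)
    accuracy = min(max((m_base - mse) / (m_base - m_ref), 0.0), 1.0)
    penalty = 0.99 ** max(C - C_ref, 0)
    return 100.0 * accuracy * penalty

# Matching the reference MSE at reference complexity scores 100;
# doing no better than the linear baseline scores 0; each unit of
# extra complexity costs 1% multiplicatively.
assert sr_score(0.01, 1.0, 0.01, 12, 12) == 100.0
assert sr_score(1.5, 1.0, 0.01, 12, 12) == 0.0
```

This makes the trade-off concrete: a modestly worse MSE at much lower complexity `C = 2*(#binary ops) + (#unary ops)` can outscore a near-perfect but bloated expression.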
 
+ {"problem_id": "qknorm", "category": "research", "statement": "QKNorm Optimization Problem\n============================\n\nProblem Setting\n---------------\nDesign and optimize high-performance implementations for Query-Key Normalization (QKNorm) on GPU. This problem focuses on implementing efficient normalization kernels that apply RMSNorm to query and key tensors.\n\nThis is a **memory-bound** (even **launch-bound**) **tiny operator**. Performance optimization requires careful attention to:\n\n1. **Memory Efficiency**: Focus on **vectorized memory access patterns**. Minimize memory transactions and maximize memory bandwidth utilization.\n\n2. **Operation Fusion**: **Avoid additional transpose/contiguous kernels**. Fuse operations to reduce kernel launch overhead and memory traffic.\n\n3. **Non-Contiguous Input Handling**: **Be aware that inputs may be non-contiguous** due to weight-QKV fusion. Your implementation should efficiently handle non-contiguous memory layouts without triggering expensive memory copies.\n\nTarget\n------\n- **Primary**: Ensure correctness across diverse tensor shapes\n- **Secondary**: Maximize geometric mean speedup over baseline (higher is better)\n- **Tertiary**: Minimize kernel launch overhead and memory usage\n\nAPI Specification\n-----------------\nImplement a `Solution` class that returns a qknorm implementation:\n\n```python\nclass Solution:\n def solve(self, spec_path: str = None) -> dict:\n \"\"\"\n Returns a dict with either:\n - {\"code\": \"python_code_string\"}\n - {\"program_path\": \"path/to/kernel.py\"}\n \"\"\"\n # Your implementation\n pass\n```\n\nYour kernel implementation must provide:\n\n```python\nimport torch\nimport flashinfer\n\ndef qknorm(q: torch.Tensor, k: torch.Tensor, norm_weight: torch.Tensor):\n \"\"\"\n Apply RMSNorm to query and key tensors.\n \n Args:\n q: Query tensor of arbitrary shape (will be reshaped to 2D)\n k: Key tensor of arbitrary shape (will be reshaped to 2D)\n norm_weight: Normalization weight tensor of shape (hidden_dim,)\n \n Returns:\n Tuple of (q_normalized, k_normalized) tensors\n \"\"\"\n pass\n```\n\nRequired Default Implementation:\n```python\ndef default_qknorm(q: torch.Tensor, k: torch.Tensor, norm_weight: torch.Tensor):\n q_2d = q.contiguous().view(-1, q.shape[-1])\n k_2d = k.contiguous().view(-1, k.shape[-1])\n q_o = torch.empty_like(q_2d)\n k_o = torch.empty_like(k_2d)\n flashinfer.norm.rmsnorm(q_2d, norm_weight, out=q_o)\n flashinfer.norm.rmsnorm(k_2d, norm_weight, out=k_o)\n return q_o.view(q.shape), k_o.view(k.shape)\n```\n\nBaseline Implementation:\n```python\ndef customized_qknorm(q: torch.Tensor, k: torch.Tensor, norm_weight: torch.Tensor):\n q_o = torch.empty(q.shape, device=q.device, dtype=q.dtype)\n k_o = torch.empty(k.shape, device=k.device, dtype=k.dtype)\n flashinfer.norm.rmsnorm(q, norm_weight, out=q_o)\n flashinfer.norm.rmsnorm(k, norm_weight, out=k_o)\n return q_o, k_o\n```\n\nAPI Usage Notes\n---------------\n- The evaluator looks for a `qknorm` function in the module namespace\n- Function must handle tensor reshaping correctly (q and k may have arbitrary shapes)\n- Must use flashinfer.norm.rmsnorm for normalization\n- Function returns a tuple of (q_normalized, k_normalized) tensors\n- **Important**: Inputs q and k may be **non-contiguous** due to weight-QKV fusion\n- **Avoid**: Additional `.contiguous()` or `.transpose()` calls that trigger memory copies\n- **Focus**: Vectorized memory access and operation fusion to minimize kernel launches\n\nScoring (0-100)\n---------------\nPerformance is measured against baseline implementations:\n\n```\ngeometric_mean_speedup = geometric_mean(baseline_times / answer_times)\n\nif speedup < 0.5 or correctness is wrong:\n score = 0\nelif speedup >= 0.5 and speedup < 1.0:\n score = 50\nelif speedup >= 1.0:\n score = 100\n```\n\n- 0 points = Speedup < 0.5x OR correctness fails\n- 50 points = Speedup >= 0.5x and < 1.0x\n- 100 points = Speedup >= 1.0x\n\nEvaluation Details\n------------------\n- Shapes focus on diverse batch-sizes, head-dim, num-kv-heads, num-qo-heads, e.g.:\n - (16, 8, 32, 128)\n - (128, 32, 32, 64)\n- Correctness verified with tolerance: rtol=1e-2, atol=5e-3\n- Performance measured using median execution time\n- Requires CUDA backend and GPU support\n", "config": "{\n \"dependencies\": {\n \"uv_project\": \"resources\"\n },\n \"datasets\": [],\n \"runtime\": {\n \"resources\": {\n \"accelerators\": \"L4:1\"\n },\n \"docker\": {\n \"image\": \"andylizf/triton-tlx:tlx-nv-cu122-nvcc\",\n \"gpu\": true\n },\n \"environment\": \"CUDA 12.2, Python 3.11, PyTorch 2.0+, flashinfer 0.5.0, Triton 3.0+\"\n },\n \"tag\": \"hpc\"\n}\n"}
  {"problem_id": "symbolic_regression/mixed_polyexp_4d", "category": "research", "statement": "Symbolic Regression Benchmark - Mixed PolyExp 4D Dataset\n=========================================================\n\nProblem Setting\n---------------\nLearn a closed-form symbolic expression `f(x1, x2, x3, x4)` that predicts the target `y`.\n\nThis is a higher-dimensional dataset (4 input features) combining polynomial interactions with exponential decay. The function involves cross-terms between variables and Gaussian-like damping, making it more challenging than the 2D variants.\n\nInput Format\n------------\n- Your `Solution.solve` receives:\n - `X`: numpy.ndarray of shape `(n, 4)` containing feature values\n - `y`: numpy.ndarray of shape `(n,)` containing target values\n- Dataset columns: `x1, x2, x3, x4, y`\n\nOutput Specification\n--------------------\nImplement a `Solution` class in `solution.py`:\n\n```python\nimport numpy as np\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n \"\"\"\n Args:\n X: Feature matrix of shape (n, 4)\n y: Target values of shape (n,)\n\n Returns:\n dict with keys:\n - \"expression\": str, a Python-evaluable expression using x1, x2, x3, x4\n - \"predictions\": list/array of length n (optional)\n - \"details\": dict with optional \"complexity\" int\n \"\"\"\n # Example: fit a symbolic expression to the data\n expression = \"x1 + x2 + x3 + x4\" # placeholder\n return {\n \"expression\": expression,\n \"predictions\": None, # will be computed from expression if omitted\n \"details\": {}\n }\n```\n\nExpression Requirements:\n- Must be a valid Python expression string\n- Use variable names: `x1`, `x2`, `x3`, `x4`\n- Allowed operators: `+`, `-`, `*`, `/`, `**`\n- Allowed functions: `sin`, `cos`, `exp`, `log`\n- Numeric constants are allowed\n\nDependencies (pinned 
versions)\n------------------------------\n```\npysr==0.19.0\nnumpy==1.26.4\npandas==2.2.2\nsympy==1.13.3\n```\n\nMinimal Working Examples\n------------------------\n\n**Example 1: Using PySR (recommended)**\n```python\nimport numpy as np\nfrom pysr import PySRRegressor\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n model = PySRRegressor(\n niterations=50, # more iterations for 4D\n binary_operators=[\"+\", \"-\", \"*\", \"/\"],\n unary_operators=[\"sin\", \"cos\", \"exp\", \"log\"],\n populations=20,\n population_size=40,\n maxsize=30, # larger for 4D complexity\n verbosity=0,\n progress=False,\n random_state=42,\n )\n model.fit(X, y, variable_names=[\"x1\", \"x2\", \"x3\", \"x4\"])\n\n # Get best expression as sympy, convert to string\n best_expr = model.sympy()\n expression = str(best_expr)\n\n # Predictions\n predictions = model.predict(X)\n\n return {\n \"expression\": expression,\n \"predictions\": predictions.tolist(),\n \"details\": {}\n }\n```\n\n**Example 2: Manual expression (simple baseline)**\n```python\nimport numpy as np\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n # Simple linear combination as baseline\n x1, x2, x3, x4 = X[:, 0], X[:, 1], X[:, 2], X[:, 3]\n\n # Fit coefficients via least squares\n A = np.column_stack([x1, x2, x3, x4, np.ones_like(x1)])\n coeffs, _, _, _ = np.linalg.lstsq(A, y, rcond=None)\n a, b, c, d, e = coeffs\n\n expression = f\"{a:.6f}*x1 + {b:.6f}*x2 + {c:.6f}*x3 + {d:.6f}*x4 + {e:.6f}\"\n predictions = a * x1 + b * x2 + c * x3 + d * x4 + e\n\n return {\n \"expression\": expression,\n \"predictions\": predictions.tolist(),\n \"details\": {}\n }\n```\n\nPySR API Notes (v0.19.0)\n------------------------\n- `model.fit(X, y, variable_names=[\"x1\", \"x2\", \"x3\", \"x4\"])` - use variable_names to match expected output\n- `model.sympy()` - returns best expression as sympy object\n- 
`model.predict(X)` - returns predictions array\n- `model.equations_` - DataFrame of all discovered equations\n- Common parameters:\n - `niterations`: number of evolution iterations (more = better but slower)\n - `populations`: number of parallel populations\n - `maxsize`: maximum expression complexity\n - `verbosity=0, progress=False`: suppress output\n\nExpression Format Requirements\n------------------------------\n- Must be a valid Python expression string\n- Use variable names: `x1`, `x2`, `x3`, `x4`\n- Allowed operators: `+`, `-`, `*`, `/`, `**`\n- Allowed functions: `sin`, `cos`, `exp`, `log` (NO `np.` prefix)\n- Numeric constants are allowed\n- The evaluator uses `sympy.sympify()` to parse your expression\n\nScoring\n-------\n```\nMSE = (1/n) Σ (y_i - ŷ_i)²\nScore = 100 × clamp((m_base - MSE) / (m_base - m_ref), 0, 1) × 0.99^max(C - C_ref, 0)\n```\n\n- `m_base`: linear regression baseline MSE\n- `m_ref`, `C_ref`: reference solution MSE and complexity\n- `C = 2 × (#binary ops) + (#unary ops)`\n- Lower MSE and lower complexity yield higher scores\n\nEnvironment\n-----------\nRun `set_up_env.sh` to install dependencies.\n", "config": "{\n \"dependencies\": {\n \"uv_project\": \"resources\"\n },\n \"datasets\": [],\n \"tag\": \"pl\"\n}\n"}