andylizf committed
Commit 0677ce4 · verified · 1 Parent(s): 1fecbf5

Update dataset

Files changed (2)
  1. README.md +2 -2
  2. data/test-00000-of-00001.json +2 -1
README.md CHANGED
@@ -19,9 +19,9 @@ A benchmark dataset for evaluating AI systems on challenging computer science pr
 
 ## Dataset Description
 
-This dataset contains 211 problems across two categories:
+This dataset contains 212 problems across two categories:
 - **Algorithmic**: 148 competitive programming problems with automated judging
-- **Research**: 63 open-ended research problems
+- **Research**: 64 open-ended research problems
 
 ## Dataset Structure
 
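The category counts in the updated README (148 algorithmic + 64 research = 212) can be recomputed directly from the JSON-lines data file; a minimal sketch, assuming each line is one record with a `category` field:

```python
import json
from collections import Counter

def category_counts(lines):
    """Count problems per category from JSON-lines records.

    Pass open("data/test-00000-of-00001.json") to run on the real file;
    any iterable of JSON strings works.
    """
    return Counter(json.loads(line)["category"] for line in lines if line.strip())
```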
data/test-00000-of-00001.json CHANGED
@@ -74,7 +74,7 @@
  {"problem_id": "205", "category": "algorithmic", "statement": "Sequence Transformation\n\nProblem Description:\nYou are given two valid parenthesis sequences s1 and s2, both of length 2n. Your goal is to transform s1 into s2 using the minimum number of operations.\n\nAvailable Operations:\n- Op 1: Transform p(((A)B)C)q into p((A)B)(C)q\n- Op 2: Transform p((A)(B)C)q into p((A)B)(C)q\n- Op 3: Transform p(A)((B)C)q into p((A)B)(C)q\n- Op 4: Transform p(A)(B)(C)q into p((A)B)(C)q\n\nWhere A, B, C are valid parenthesis sequences (possibly empty), and p, q are arbitrary sequences.\n\nSpecial Operations (Each can be used at most 2 times per case):\n- Op 5: Insert a pair of empty parentheses \"()\" at any position (max 2 times).\n- Op 6: Remove a pair of empty parentheses \"()\" from any position (max 2 times).\n\nInput Format:\n- First line: an integer n (1 <= n <= 100,000)\n- Second line: a string s1 of length 2n\n- Third line: a string s2 of length 2n\n\nOutput Format:\n- First line: an integer Q (the number of operations, must not exceed 3n)\n- Next Q lines: each line contains two integers op and x\n - op: operation number (1-6)\n - x: position where the operation is applied\n\nPosition definition:\n- For operations 1-4: x is the starting position of the leftmost '(' in the pattern\n- For operations 5-6: x is the position to insert/remove \"()\"\n- All positions are 0-indexed\n\nExample:\nInput:\n3\n(())()\n((()))\n\nOutput:\n3\n5 6\n4 0\n6 6\n\nExplanation:\nInitial: (())()\nAfter Op 5 at position 6: (())()()\nAfter Op 4 at position 0: ((()))()\nAfter Op 6 at position 6: ((()))\n\nScoring:\nThis problem is graded based on the number of operations Q:\n- If Q <= 1.9n, you receive full score (1.0).\n- If Q >= 3n, you receive 0 score.\n- Otherwise, Score = (3n - Q) / (1.1n), clamped to [0, 1]\n\nConstraints:\n- 1 <= n <= 100,000\n- Total operations must not exceed 3n.\n- Op 5 can be used at most 2 times.\n- Op 6 can be used at most 2 times.\n- Both s1 and s2 are valid 
parenthesis sequences.\n", "config": "# Set the problem type to standard\ntype: default\n\n# Specify the checker source file\nchecker: chk.cc\n\n# Time and memory limits\ntime: 2s\nmemory: 256m\n\n# The subtasks section\nsubtasks:\n - score: 100\n n_cases: 3 # Looks for 1.in/1.ans, 2.in/2.ans, ... 10.in/10.ans in testdata/\n\n"}
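For reference, the piecewise scoring rule stated in problem 205 (full score at Q <= 1.9n, zero at Q >= 3n, linear in between) collapses to one clamped expression; an illustrative helper, not part of the dataset:

```python
def score_205(n: int, q: int) -> float:
    """Score for problem 205 given n and operation count q.

    (3n - q) / (1.1n), clamped to [0, 1]: equals 1.0 at q = 1.9n
    and 0.0 at q = 3n, matching the stated thresholds.
    """
    return max(0.0, min(1.0, (3 * n - q) / (1.1 * n)))
```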
  {"problem_id": "207", "category": "algorithmic", "statement": "Efficient Sorting\n\nDescription\n\nYou are given a permutation S of N distinct integers from 0 to N-1. Your task is to sort the permutation into increasing order (i.e., S[i] = i for all 0 <= i < N) while playing a game against a character named Jerry.\n\nThe game proceeds in a sequence of rounds. You must decide in advance the total number of rounds, R, you wish to play. Jerry has a predetermined sequence of M planned swaps. In each round k (where 0 <= k < R):\n\n1. Jerry's Move: Jerry performs his k-th planned swap on the array S.\n2. Your Move: You choose two indices u_k and v_k (0 <= u_k, v_k < N) and swap the elements S[u_k] and S[v_k].\n\nAfter the R rounds are completed, the array S must be sorted. If the array becomes sorted before the R-th round, you must still complete the remaining rounds (you may perform dummy swaps, such as swapping an index with itself).\n\nWe define the \"Energy Cost\" of a single swap (u, v) as the distance between the indices: |u - v|.\n\nYour objective is to minimize the \"Total Efficiency Value\" (V), defined as:\nV = R * (Sum of |u_k - v_k| for all k from 0 to R-1)\n\nInput Format\n\nThe first line contains an integer N, the length of the permutation.\nThe second line contains N space-separated integers S_0, S_1, ..., S_{N-1}, representing the initial permutation.\nThe third line contains an integer M, the number Jerry's planned swaps.\nThe following M lines each contain two space-separated integers X_j and Y_j, representing the indices Jerry intends to swap in round j (for 0 <= j < M).\n\nOutput Format\n\nThe first line of output should contain a single integer R, the number of rounds you choose to play.\nThe following R lines should each contain two space-separated integers u_k and v_k, representing your swap in round k.\nThe last line of output should contain a single integer V, the Total Efficiency Value.\n\nThe value of R must satisfy 0 <= R <= M. 
After the completion of all R rounds (including Jerry's moves and your moves), the array S must be sorted.\n\nScoring\n\nYour score is calculated based on the Total Efficiency Value V.\n\nThe scoring function is defined as follows:\n- If V <= 10,000,000,000,000 (10^13), you receive 100 points.\n- If V >= 3,300,000,000,000,000 (3.3×10^15), you receive 0 points.\n- Otherwise, your score is calculated linearly:\n Score = 100 * (3.3×10^15 - V) / (3.3×10^15 - 10^13)\n\nConstraints\n\n- 1 <= N <= 200,000\n- 1 <= M <= 600,000\n- 0 <= S_i < N, all S_i are distinct.\n- 0 <= X_j, Y_j < N\n- It is guaranteed that it is possible to sort the array within M rounds.\n\nExample\n\nInput:\n5\n4 3 2 1 0\n6\n0 1\n1 2\n2 3\n3 4\n0 1\n1 2\n\nOutput:\n3\n0 4\n1 3\n3 4\n21\n\nExplanation:\nInitial sequence: [4, 3, 2, 1, 0]\n\nRound 0:\n- Jerry swaps indices (0, 1). Sequence becomes: [3, 4, 2, 1, 0]\n- You swap indices (0, 4). Cost |0-4| = 4. Sequence becomes: [0, 4, 2, 1, 3]\n\nRound 1:\n- Jerry swaps indices (1, 2). Sequence becomes: [0, 2, 4, 1, 3]\n- You swap indices (1, 3). Cost |1-3| = 2. Sequence becomes: [0, 1, 4, 2, 3]\n\nRound 2:\n- Jerry swaps indices (2, 3). Sequence becomes: [0, 1, 2, 4, 3]\n- You swap indices (3, 4). Cost |3-4| = 1. Sequence becomes: [0, 1, 2, 3, 4]\n\nThe array is now sorted.\nTotal cost sum = 4 + 2 + 1 = 7.\nTotal Efficiency Value V = 3 * 7 = 21.", "config": "\ntype: default\nchecker: chk.cc\n\n# Time and memory limits still apply to the contestant's solution\ntime: 10s\nmemory: 256m\n\n# The subtasks section works the same way\nsubtasks:\n- score: 100\n n_cases: 3\n \n"}
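The linear scoring rule for problem 207 maps the Total Efficiency Value V to points exactly as stated; a small reference helper:

```python
def score_207(v: float) -> float:
    """Score for problem 207 given Total Efficiency Value V.

    100 points at V <= 1e13, 0 points at V >= 3.3e15,
    linear interpolation in between.
    """
    lo, hi = 1e13, 3.3e15
    if v <= lo:
        return 100.0
    if v >= hi:
        return 0.0
    return 100.0 * (hi - v) / (hi - lo)
```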
  {"problem_id": "209", "category": "algorithmic", "statement": "Hidden Weights\n\nDescription\n\nThis is an interactive problem.\n\nYou are given a positive integer h. Let n = 2^h - 1.\nThere is a perfect binary tree G with n nodes, numbered 1 to n. The root of the tree is node 1. For any node u (2 <= u <= n), its parent is floor(u / 2).\n\nThe interactor maintains two hidden arrays:\n\n1. A permutation p of length n, containing integers from 1 to n.\n2. A weight array f of length n, where each element f_v corresponds to the weight of tree node v. All weights are non-negative integers not exceeding 10^9.\n\nYour task is to determine the sum of all weights in the tree.\n\nInteraction\n\nFirst, your program should read a single integer h (2 <= h <= 18) from standard input.\nThen, you may ask queries to the interactor. To make a query, print a line in the following format:\n\n? u d\n\nwhere u is an integer index (1 <= u <= n) and d is a distance (1 <= d <= 10^9).\n\nThe interactor will respond with a single integer: the sum of weights f_v for all nodes v in the tree such that the distance between node p_u and node v is exactly d.\nFormally, the interactor returns the sum of f_v for all v where dist(p_u, v) = d.\nIf no such nodes v exist, the interactor returns 0.\n\n* p_u denotes the u-th element of the hidden permutation p.\n* dist(x, y) is the number of edges on the simple path between node x and node y in the tree.\n\nOnce you have determined the total sum of weights, output the answer in the following format:\n\n! S\n\nwhere S is the calculated sum. 
After outputting the answer, your program must terminate immediately.\n\nConstraints\n\n* 2 <= h <= 18\n* n = 2^h - 1\n* 0 <= f_v <= 10^9\n* The interactor is not adaptive (p and f are fixed at the start).\n* 1 <= d <= 10^9 (Note: d must be at least 1).\n\nScoring\n\nYour score depends on Q, the number of queries you perform.\nLet L = 3 * n / 4 (integer division) and R = (13 * n + 21) / 8 (integer division).\n\n* If Q <= L, you receive 100 points.\n* If Q >= R, you receive 0 points.\n* Otherwise, your score is calculated linearly:\nScore = floor(100 * (R - Q) / (R - L))\n\nTechnical Note\n\nRemember to flush the output buffer after every query and the final answer.\n\n* C++: cout << endl; or fflush(stdout);\n* Python: print(..., flush=True)\n* Java: System.out.flush();\n\nExample\n\nInput:\n2\n11\n59\n11\n\nOutput:\n? 1 1\n? 2 1\n? 3 1\n! 70\n\nExplanation of Example:\nh = 2, so n = 3. The tree has nodes 1 (root), 2 (left child), 3 (right child).\nHidden permutation p = [2, 1, 3].\nHidden weights f = [11, 45, 14] (f_1=11, f_2=45, f_3=14). Total sum is 70.\n\nQuery 1: \"? 1 1\"\nu=1. Center is p_1 = 2.\nNodes at distance 1 from node 2 are {1}. (Node 3 is at distance 2).\nResponse: f_1 = 11.\n\nQuery 2: \"? 2 1\"\nu=2. Center is p_2 = 1.\nNodes at distance 1 from node 1 are {2, 3}.\nResponse: f_2 + f_3 = 45 + 14 = 59.\n\nQuery 3: \"? 3 1\"\nu=3. Center is p_3 = 3.\nNodes at distance 1 from node 3 are {1}. (Node 2 is at distance 2).\nResponse: f_1 = 11.\n\nCalculation:\nFrom Query 2, we know the sum of weights of children (nodes 2 and 3) is 59.\nFrom Query 1 (or 3), we know the weight of the root (node 1) is 11.\nTotal Sum = 59 + 11 = 70.", "config": "\ntype: interactive\ninteractor: interactor.cc\n\n# Time and memory limits still apply to the contestant's solution\ntime: 10s\nmemory: 512m\n\n# The subtasks section works the same way\nsubtasks:\n- score: 100\n n_cases: 3\n \n"}
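Problem 209's query-count scoring uses integer division throughout; the thresholds L and R and the interpolation can be sketched as:

```python
def score_209(h: int, q: int) -> int:
    """Query-count score for problem 209 (integer arithmetic as specified).

    L = 3n/4 and R = (13n + 21)/8 with n = 2^h - 1; full score at
    Q <= L, zero at Q >= R, floor-interpolated in between.
    """
    n = 2 ** h - 1
    L = 3 * n // 4
    R = (13 * n + 21) // 8
    if q <= L:
        return 100
    if q >= R:
        return 0
    return 100 * (R - q) // (R - L)
```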
- {"problem_id": "210", "category": "algorithmic", "statement": "# Military Exercise: Fighter Scheduling and Base Strikes (Blue Side)\n\nYou are the **blue side** in a simplified military exercise on a 2D grid.\n\nThe map is an \\(n \\times m\\) grid (0-indexed coordinates):\n\n- `#` : red base (enemy)\n- `*` : blue base (friendly)\n- `.` : neutral cell\n\nBoth sides have bases. Blue controls a set of fighters and must plan actions to destroy red bases and maximize score.\n\nThis is a **planning / simulation** task: your program reads the input once and outputs a sequence of per-frame commands. A custom checker simulates the world and computes your score.\n\n---\n\n## Rules\n\n### Fighters\n\nThere are \\(k\\) blue fighters, indexed \\(0..k-1\\). Each fighter has:\n\n- Initial position \\((x,y)\\) (guaranteed to be on a blue base)\n- Fuel tank capacity `G` (max fuel carried)\n- Missile capacity `C` (max missiles carried)\n\nInitial fuel and missiles are both **0**.\n\n### Movement\n\n- In one frame, a fighter may move by **1 cell** in one of 4 directions:\n - `0`: up, `1`: down, `2`: left, `3`: right\n- A **successful move consumes 1 unit of fuel**.\n- A fighter cannot leave the grid.\n- A fighter **must not enter a red base cell that is not yet destroyed**.\n\nIf a fighter does not successfully move in a frame, it is considered \"landed\" for that frame (no fuel consumption).\n\n### Refueling / Reloading (only on blue bases)\n\nIf a fighter is currently on a **blue base** cell, it can:\n\n- `fuel`: transfer fuel from the base to the fighter (up to remaining base supply and tank capacity)\n- `missile`: transfer missiles from the base to the fighter (up to remaining base supply and missile capacity)\n\nRefueling/reloading time is ignored; multiple such commands in a frame are allowed (subject to supplies/capacity).\n\n### Attacking\n\n- A fighter may attack in one of 4 directions (`0..3`) with range **1 cell** (adjacent).\n- The target cell must contain a 
**not-yet-destroyed red base**.\n- `attack <id> <dir> <count>` consumes exactly `count` missiles from the fighter.\n- Each red base has an integer defense `d`. When cumulative missiles received reaches `d`, the base is destroyed.\n\n### Scoring\n\nEach red base has a military value `v`. When a red base is **destroyed**, you gain **+v** points (only once per base).\n\nYour goal is to **maximize the total score** after up to **15000 frames**.\n\nInvalid commands are ignored (the simulation continues).\n\n---\n\n## Input Format\n\n### Map\n\n- Line 1: `n m` \\((1 \\le n,m \\le 200)\\)\n- Next `n` lines: `m` characters each, describing the grid.\n\n### Bases\n\nBlue bases first, then red bases.\n\nFor each side:\n\n- Line: integer `N` = number of bases\n- For each base:\n - Line: `x y` (0-indexed)\n - Line: `g c d v`\n - `g`: fuel supply\n - `c`: missile supply\n - `d`: defense (missiles needed to destroy)\n - `v`: military value\n\n### Fighters\n\n- Line: integer `k` \\((1 \\le k \\le 10)\\)\n- Next `k` lines: `x y G C` for fighter `id = i-1`\n\n---\n\n## Output Format\n\nFor each frame, output **zero or more** command lines, then a line:\n\n```\nOK\n```\n\nCommands:\n\n- `move <id> <dir>`\n- `attack <id> <dir> <count>`\n- `fuel <id> <count>`\n- `missile <id> <count>`\n\nThere are at most **15000 frames**. Your output may end early (remaining frames are treated as doing nothing).\n\n---\n\n## Sample Input\n\nSee `testdata/1.in`.\n\n\n", "config": "\ntype: default\nchecker: chk.cc\nchecker_type: testlib\n\n# Time and memory limits apply to the contestant's solution program.\ntime: 5s\nmemory: 512m\n\nsubtasks:\n - score: 100\n n_cases: 3\n\n\n"}
  {"problem_id": "211", "category": "algorithmic", "statement": "Communication Robots\n\n## Problem Description\n\nIn a task area, there are several robots distributed that must maintain a connected network through wireless communication to complete tasks collaboratively.\n\nWireless communication has the following characteristics:\n\n(1) Establishing communication links consumes energy.\n\n(2) There are high-power robots with more advanced communication modules, and links connected to them have lower energy consumption.\n\n(3) There are also several optional relay stations distributed in the task area that can serve as intermediate nodes for signals, helping to reduce overall energy consumption.\n\nYour task is to:\n\nDesign communication links, reasonably choose whether to enable relay stations, ensure all robots are connected, and minimize the overall energy consumption cost.\n\n## Communication Energy Consumption Rules\n\nThe square of the Euclidean distance between two nodes is the base value (D) of the communication energy consumption between the two nodes.\n\n- The energy consumption cost between an ordinary robot (R) and an ordinary robot (R) is 1 × D.\n- The energy consumption cost between an ordinary robot (R) and a high-power robot (S) is 0.8 × D.\n- The energy consumption cost between a high-power robot (S) and a high-power robot (S) is 0.8 × D.\n- The energy consumption cost between a relay station (C) and any robot (R or S) is 1 × D.\n- Relay stations (C) cannot communicate directly with each other.\n\n## Goal\n\nAll robots (R and S) must form a connected network. 
Any two robots must have a communication path between them, which can be a direct connection or pass through other robots or relay stations.\n\nYou can choose to use or not use any relay stations.\n\nMinimize the overall energy consumption cost.\n\n## Input Format\n\nThe first line contains two integers: N (number of robots) and K (number of optional relay stations).\n\nThe next N + K lines each contain: device ID, x-coordinate, y-coordinate, and type.\n\nConstraints:\n- N ≤ 1500, K ≤ 1500\n- Type R represents an ordinary robot, S represents a high-power robot, C represents an optional relay station\n- x-coordinates and y-coordinates are integers in the range [-10000, 10000]\n\n## Output Format\n\nThe first line: IDs of selected relay stations (if multiple, separated by \"#\"; if none, output \"#\").\n\nThe second line: Set of communication links (each link is represented as \"device_id-device_id\", multiple links are separated by \"#\").\n\n## Example\n\n### Input\n\n```\n3 1\n1 0 0 R\n2 100 0 R\n3 50 40 S\n4 50 0 C\n```\n\n### Output\n\n```\n4\n1-3#2-3#3-4\n```\n\n## Scoring\n\nYour solution will be evaluated based on the total energy consumption cost of the network you construct. The score is calculated as:\n\n- Base score: The minimum spanning tree (MST) cost of all non-relay nodes (without using any relay stations).\n- Your score: The actual total cost of your network.\n- Final score ratio: min(1.0, base_cost / actual_cost)\n\nIf your network cost is less than or equal to the base MST cost, you receive full score (1.0). 
Otherwise, your score decreases proportionally.\n\n## Constraints\n\n- 1 ≤ N ≤ 1500\n- 0 ≤ K ≤ 1500\n- Device coordinates: -10000 ≤ x, y ≤ 10000\n- All robots (R and S) must be connected in the final network\n- Relay stations cannot be directly connected to each other\n\n## Time Limit\n\n10 seconds per test case\n\n## Memory Limit\n\n512 MB\n", "config": "# Set the problem type to default (submit answer problems use default type)\ntype: default\n\n# Specify the checker source file\nchecker: chk.cc\n\n# Time and memory limits (for submit answer problems, these may not be strictly enforced)\ntime: 10s\nmemory: 512m\n\n# The subtasks section\nsubtasks:\n - score: 100\n n_cases: 4 # Test cases: 1.in, 2.in, ..., 10.in in testdata/\n"}
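The energy-consumption rules in problem 211 reduce to a per-link cost function: D is the squared Euclidean distance, links touching a high-power robot `S` (but no relay `C`) get the 0.8 factor, and `C`-to-`C` links are forbidden. A reference sketch:

```python
def link_cost(a, b):
    """Cost of one communication link for problem 211.

    a and b are (x, y, type) triples with type in {'R', 'S', 'C'}.
    """
    (xa, ya, ta), (xb, yb, tb) = a, b
    if ta == 'C' and tb == 'C':
        raise ValueError("relay stations cannot communicate directly")
    d = (xa - xb) ** 2 + (ya - yb) ** 2  # base value D
    # 0.8 only when an S is involved and neither endpoint is a relay
    factor = 0.8 if 'C' not in (ta, tb) and 'S' in (ta, tb) else 1.0
    return factor * d
```

On the sample input, link 1-3 (R at (0,0) to S at (50,40)) has D = 4100 and cost 0.8 × 4100 = 3280, while link 3-4 (S to relay C) keeps the full factor.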
  {"problem_id": "212", "category": "algorithmic", "statement": "I wanna cross the grid\n\nProblem Description:\nSuddenly, the save point landed in a huge grid. Only by passing through all required areas can the next save point appear...\n\nYou are given a grid with n rows and m columns, where rows and columns are numbered starting from 1. Define a pair (x, y) to represent the cell at row x and column y. For each row, the cells from column L to column R are required areas. Formally, let D be the set of required areas, then D = {(x, y) | 1 ≤ x ≤ n, L ≤ y ≤ R, x, y ∈ N+}.\n\nIn each step, kid can move one step in one of the four directions (up, down, left, right) without exceeding the boundaries. Formally, if kid is currently at (x, y), then kid can move to (x+1, y), (x, y+1), (x-1, y), or (x, y-1).\n\nInitially, kid is at (Sx, Sy) (guaranteed that Sy = L). Kid needs to pass through all required areas, and any cell can be visited at most once. Formally, kid's path is a sequence of pairs P = (x₁, y₁), (x₂, y₂), ..., (xₖ, yₖ), which must satisfy:\n- ∀ (x₀, y₀) ∈ D, ∃ i ∈ [1, k] such that (x₀, y₀) = (xᵢ, yᵢ)\n- ∀ i ≠ j, (xᵢ, yᵢ) ≠ (xⱼ, yⱼ)\n\nAdditionally, kid needs to record a completion sequence p. When kid first enters the required area of a certain row, the row number must be appended to the current sequence, and kid must immediately pass through all required areas of that row. At the same time, p must contain a subsequence q of length Lq to be a valid completion sequence and truly complete the level. Formally, p is valid if and only if there exists a sequence c of length Lq such that p[cᵢ] = qᵢ and c is monotonically increasing.\n\nTo reduce the difficulty for lindongli2004, lindongli2004 hopes that kid takes as few steps as possible.\n\nGiven n, m, L, R, Sx, Sy, and q, please plan a completion route for kid, or tell him that no such route exists. 
The rest of the operations will be left to lindongli2004!\n\nInput Format:\nThe first line contains 8 positive integers: n, m, L, R, Sx, Sy, Lq, s, representing the number of rows and columns of the grid, the left and right boundaries of the required area, the x and y coordinates of the starting point, the length of sequence q, and the scoring parameter.\n\nThe second line contains Lq distinct positive integers, representing the sequence q.\n\nOutput Format:\nThe first line contains a string \"YES\" or \"NO\" (without quotes) indicating whether there exists a valid path.\n\nIf there exists a valid path, the second line contains a positive integer cnt representing the length of the path, followed by cnt lines, each containing two positive integers x and y representing the coordinates of the path.\n\nExample:\nInput:\n5 4 2 3 2 2 2 15\n3 1\n\nOutput:\nYES\n15\n2 2\n2 3\n3 3\n3 2\n4 2\n4 3\n5 3\n5 2\n5 1\n4 1\n3 1\n2 1\n1 1\n1 2\n1 3\n\nScoring:\nThe last number in the first line of the input file is s. Let your number of steps be cnt. Then:\n- If cnt ≤ s, you will receive 10 points.\n- If cnt > s and you can complete the level, you will receive max(5, 10 - (cnt - s) / ⌊n/2⌋ - 1) points.\n- If you cannot complete the level, you will receive 0 points.\n\nConstraints:\n- 1 ≤ L ≤ R ≤ m ≤ 40\n- 1 ≤ Sx ≤ n ≤ 40\n- Other constraints are detailed in the provided files.\n- The maximum capacity of the checker is 2.5 × 10⁸, meaning your solution cannot contain more than 2.5 × 10⁸ numbers.\n\nTime limit:\n30 seconds\n\nMemory limit:\n512 MB\n", "config": "# Set the problem type to default (submit answer problems use default type)\ntype: default\n\n# Specify the checker source file\nchecker: chk.cc\n\n# Time and memory limits (for submit answer problems, these may not be strictly enforced)\ntime: 30s\nmemory: 512m\n\n# The subtasks section\nsubtasks:\n - score: 100\n n_cases: 4 # Test cases: 1.in, 2.in, ..., 10.in in testdata/\n"}
  {"problem_id": "213", "category": "algorithmic", "statement": "Sequence Shift (moqueve)\n\nProblem Description:\nYou need to sort a permutation of $1\\sim n$ on a strange computer.\n\nYou can choose a number $x$, and then each time you can cyclically shift a segment of length $x$ to the left or right (the leftmost/rightmost element moves to the rightmost/leftmost position) (shift amount is $1$).\n\nPlease restore the sequence to $1\\sim n$ within $230\\times n$ operations.\n\nInput Format:\nThe first line contains a single integer $n$.\n\nThe second line contains $n$ integers, representing the sequence $a$.\n\nOutput Format:\nThe first line contains two integers $x$ and $m$, where $m$ represents the number of operations.\n\nThe next $m$ lines each contain three integers: the first two represent the shift interval, and the last one represents the direction, where $0$ means left and $1$ means right.\n\nExample:\nInput:\n5\n4 2 3 5 1\n\nOutput:\n3\n3\n3 5 1\n1 3 1\n2 4 0\n\nExplanation:\n- Right shift (3,5): sequence becomes 4,2,1,3,5\n- Right shift (1,3): sequence becomes 1,4,2,3,5\n- Left shift (2,4): sequence becomes 1,2,3,4,5\n\nConstraints:\n- $n \\leq 1000$\n- The sequence $a$ is a permutation of $1\\sim n$\n\nScoring:\nYour score is calculated based on the number of operations $m$:\n- If $m \\leq 23n$, you receive full score (1.0).\n- If $m > 230n$, you receive 0 score.\n- Otherwise, Score = max(0.0, 1.0 - (m - 23n) / (230n - 23n)), linearly decreasing from 1.0 to 0.0.\n\nTime limit:\n5 seconds\n\nMemory limit:\n512 MB\n", "config": "# Set the problem type to default (submit answer problems use default type)\ntype: default\n\n# Specify the checker source file\nchecker: chk.cc\n\n# Time and memory limits (for submit answer problems, these may not be strictly enforced)\ntime: 2s\nmemory: 512m\n\n# The subtasks section\nsubtasks:\n - score: 100\n n_cases: 3 # Test cases: 1.in, 2.in, ..., 10.in in testdata/\n"}
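Problem 213's score is again a clamped linear function of the operation count m; for reference:

```python
def score_213(n: int, m: int) -> float:
    """Score for problem 213 given sequence length n and operation count m.

    Full score at m <= 23n, zero beyond 230n, linear in between.
    """
    if m <= 23 * n:
        return 1.0
    if m > 230 * n:
        return 0.0
    return max(0.0, 1.0 - (m - 23 * n) / (230 * n - 23 * n))
```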
@@ -195,6 +195,7 @@
  {"problem_id": "poc_generation/stack_buffer_overflow", "category": "research", "statement": "", "config": "{\"tag\": \"security\"}\n"}
  {"problem_id": "poc_generation/uninitialized_value", "category": "research", "statement": "", "config": "{\"tag\": \"security\"}\n"}
  {"problem_id": "qknorm", "category": "research", "statement": "QKNorm Optimization Problem\n============================\n\nProblem Setting\n---------------\nDesign and optimize high-performance implementations for Query-Key Normalization (QKNorm) on GPU. This problem focuses on implementing efficient normalization kernels that apply RMSNorm to query and key tensors.\n\nThis is a **memory-bound** (even **launch-bound**) **tiny operator**. Performance optimization requires careful attention to:\n\n1. **Memory Efficiency**: Focus on **vectorized memory access patterns**. Minimize memory transactions and maximize memory bandwidth utilization.\n\n2. **Operation Fusion**: **Avoid additional transpose/contiguous kernels**. Fuse operations to reduce kernel launch overhead and memory traffic.\n\n3. **Non-Contiguous Input Handling**: **Be aware that inputs may be non-contiguous** due to weight-QKV fusion. Your implementation should efficiently handle non-contiguous memory layouts without triggering expensive memory copies.\n\nTarget\n------\n- **Primary**: Ensure correctness across diverse tensor shapes\n- **Secondary**: Maximize geometric mean speedup over baseline (higher is better)\n- **Tertiary**: Minimize kernel launch overhead and memory usage\n\nAPI Specification\n-----------------\nImplement a `Solution` class that returns a qknorm implementation:\n\n```python\nclass Solution:\n def solve(self, spec_path: str = None) -> dict:\n \"\"\"\n Returns a dict with either:\n - {\"code\": \"python_code_string\"}\n - {\"program_path\": \"path/to/kernel.py\"}\n \"\"\"\n # Your implementation\n pass\n```\n\nYour kernel implementation must provide:\n\n```python\nimport torch\nimport flashinfer\n\ndef qknorm(q: torch.Tensor, k: torch.Tensor, norm_weight: torch.Tensor):\n \"\"\"\n Apply RMSNorm to query and key tensors.\n \n Args:\n q: Query tensor of arbitrary shape (will be reshaped to 2D)\n k: Key tensor of arbitrary shape (will be reshaped to 2D)\n norm_weight: Normalization 
weight tensor of shape (hidden_dim,)\n \n Returns:\n Tuple of (q_normalized, k_normalized) tensors\n \"\"\"\n pass\n```\n\nRequired Default Implementation:\n```python\ndef default_qknorm(q: torch.Tensor, k: torch.Tensor, norm_weight: torch.Tensor):\n q_2d = q.contiguous().view(-1, q.shape[-1])\n k_2d = k.contiguous().view(-1, k.shape[-1])\n q_o = torch.empty_like(q_2d)\n k_o = torch.empty_like(k_2d)\n flashinfer.norm.rmsnorm(q_2d, norm_weight, out=q_o)\n flashinfer.norm.rmsnorm(k_2d, norm_weight, out=k_o)\n return q_o.view(q.shape), k_o.view(k.shape)\n```\n\nBaseline Implementation:\n```python\ndef customized_qknorm(q: torch.Tensor, k: torch.Tensor, norm_weight: torch.Tensor):\n q_o = torch.empty(q.shape, device=q.device, dtype=q.dtype)\n k_o = torch.empty(k.shape, device=k.device, dtype=k.dtype)\n flashinfer.norm.rmsnorm(q, norm_weight, out=q_o)\n flashinfer.norm.rmsnorm(k, norm_weight, out=k_o)\n return q_o, k_o\n```\n\nAPI Usage Notes\n---------------\n- The evaluator looks for a `qknorm` function in the module namespace\n- Function must handle tensor reshaping correctly (q and k may have arbitrary shapes)\n- Must use flashinfer.norm.rmsnorm for normalization\n- Function returns a tuple of (q_normalized, k_normalized) tensors\n- **Important**: Inputs q and k may be **non-contiguous** due to weight-QKV fusion\n- **Avoid**: Additional `.contiguous()` or `.transpose()` calls that trigger memory copies\n- **Focus**: Vectorized memory access and operation fusion to minimize kernel launches\n\nScoring (0-100)\n---------------\nPerformance is measured against baseline implementations:\n\n```\ngeometric_mean_speedup = geometric_mean(answer_times / baseline_times)\n\nif speedup < 0.5 or correctness is wrong:\n score = 0\nelif speedup >= 0.5 and speedup < 1.0:\n score = 50\nelif speedup >= 1.0:\n score = 100\n```\n\n- 0 points = Speedup < 0.5x OR correctness fails\n- 50 points = Speedup >= 0.5x and < 1.0x\n- 100 points = Speedup >= 1.0x\n\nEvaluation 
Details\n------------------\n- Shapes focus on diverse batch-sizes, head-dim, num-kv-heads, num-qo-heads, e.g.:\n - (16, 8, 32, 128)\n - (128, 32, 32, 64)\n- Correctness verified with tolerance: rtol=1e-2, atol=5e-3\n- Performance measured using median execution time\n- Requires CUDA backend and GPU support\n", "config": "{\n \"dependencies\": {\n \"uv_project\": \"resources\"\n },\n \"datasets\": [],\n \"runtime\": {\n \"resources\": {\n \"accelerators\": \"L4:1\"\n },\n \"docker\": {\n \"image\": \"andylizf/triton-tlx:tlx-nv-cu122-nvcc\",\n \"gpu\": true\n },\n \"environment\": \"CUDA 12.2, Python 3.11, PyTorch 2.0+, flashinfer 0.5.0, Triton 3.0+\"\n },\n \"tag\": \"hpc\"\n}\n"}
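Since both baselines rely on `flashinfer`, which may not be installed everywhere, the operation being optimized can be pinned down with a plain NumPy reference: RMSNorm over the last dimension of q and k with a shared weight, accumulated in float32. The `eps` value here is an assumption for illustration, not taken from the statement:

```python
import numpy as np

def qknorm_reference(q, k, w, eps=1e-6):
    """NumPy reference for the qknorm semantics.

    x_norm = x / sqrt(mean(x**2) + eps) * w over the last axis,
    computed in float32 and cast back to the input dtype.
    (eps=1e-6 is an assumed default.)
    """
    def rmsnorm(x):
        x32 = x.astype(np.float32)
        rms = np.sqrt(np.mean(x32 * x32, axis=-1, keepdims=True) + eps)
        return (x32 / rms * w).astype(x.dtype)
    return rmsnorm(q), rmsnorm(k)
```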
  {"problem_id": "ragged_attention", "category": "research", "statement": "Ragged Attention Optimization Problem\n======================================\n\nProblem Setting\n---------------\nDesign and optimize high-performance Triton kernels for ragged attention computation on GPU. This problem focuses on implementing efficient kernels that handle variable-length sequences using ragged attention, where each query row can attend to a different number of key/value rows.\n\nThe challenge involves optimizing:\n- **Ragged attention**: Efficiently handling variable-length sequences where each row has different attention lengths\n- **Memory access patterns**: Efficient loading and storing of Q, K, V tensors with ragged masking\n- **Streaming softmax**: Computing softmax in a streaming fashion for numerical stability\n- **Row-wise masking**: Correctly masking attention scores based on row_lens\n- **Mixed precision**: Handling float16 inputs/outputs with float32 accumulation\n- **Block tiling**: Optimal block sizes for GPU execution across different matrix sizes\n- **Performance benchmarking**: Achieving speedup over baseline PyTorch implementations\n\nTarget\n------\n- **Primary**: Maximize geometric mean speedup over baseline (higher is better)\n- **Secondary**: Ensure correctness across diverse matrix sizes and ragged lengths\n- **Tertiary**: Minimize kernel launch overhead and memory usage\n\nAPI Specification\n-----------------\nImplement a `Solution` class that returns a Triton kernel implementation:\n\n```python\nclass Solution:\n def solve(self, spec_path: str = None) -> dict:\n \"\"\"\n Returns a dict with either:\n - {\"code\": \"python_code_string\"}\n - {\"program_path\": \"path/to/kernel.py\"}\n \"\"\"\n # Your implementation\n pass\n```\n\nYour kernel implementation must provide:\n\n```python\nimport torch\nimport triton\nimport triton.language as tl\n\ndef ragged_attn(Q: torch.Tensor, K: torch.Tensor, V: torch.Tensor, row_lens: torch.Tensor) -> 
torch.Tensor:\n \"\"\"\n Ragged attention computation.\n \n Args:\n Q: Query tensor of shape (M, D) - query features (float16)\n K: Key tensor of shape (N, D) - key features (float16)\n V: Value tensor of shape (N, Dv) - value features (float16)\n row_lens: Row lengths tensor of shape (M,) - number of valid K/V rows per Q row (int32 or int64)\n \n Returns:\n Output tensor of shape (M, Dv) - attention output (float16)\n \n Semantics:\n For each query row i (0 <= i < M), compute attention over the first row_lens[i] key/value rows.\n Specifically:\n - scores[i, j] = (Q[i] @ K[j].T) * scale, for j < row_lens[i], else -inf\n - P[i] = softmax(scores[i])\n - O[i] = P[i] @ V[:row_lens[i]]\n \"\"\"\n pass\n```\n\nScoring\n-------\nThe scoring system evaluates your implementation based on geometric mean speedup over GPU baseline:\n\n- **0 points**: 1x GPU baseline (same speed as PyTorch GPU baseline)\n- **100 points**: 3x GPU baseline (3x speedup over PyTorch GPU baseline)\n- **Linear interpolation**: Scores between 0-100 are linearly interpolated based on speedup\n\nThe evaluation uses the following test cases:\n- M (number of query rows): [512, 1024]\n- N (number of key/value rows): 1024\n- D (model dimension): 64\n- Dv (value dimension): 64\n- row_lens: Random integers between [min_ratio*N, N] where min_ratio=0.25\n\nCorrectness is verified using:\n- Relative tolerance: 1e-2\n- Absolute tolerance: 5e-3\n\nAll tests must pass for a non-zero score. 
If any test fails correctness, the score is 0.\n\nExample\n-------\n```python\nimport torch\nimport triton\nimport triton.language as tl\n\n@triton.jit\ndef _ragged_kernel(Q, K, V, O, ROW_LENS, ...):\n # Your kernel implementation\n pass\n\ndef ragged_attn(Q: torch.Tensor, K: torch.Tensor, V: torch.Tensor, row_lens: torch.Tensor) -> torch.Tensor:\n # Your kernel launch logic\n pass\n```\n\nConstraints\n-----------\n- All tensors must be CUDA tensors (float16 for Q, K, V; int32/int64 for row_lens)\n- Output must be float16\n- The implementation must handle variable row lengths correctly\n- Accumulation should use float32 for numerical stability\n- Must use streaming softmax for numerical stability\n\nTips\n----\n1. Use efficient block tiling (BM, BN, BD, BDV) for optimal performance\n2. Implement streaming softmax to handle large attention matrices\n3. Correctly mask attention scores based on row_lens\n4. Load row_lens once per program and broadcast for masking\n5. Use proper masking for boundary conditions\n\n", "config": "dependencies:\n uv_project: resources\ntag: hpc\nruntime:\n environment: \"Triton 3.2.0 with CUDA 12.2 (triton-tlx image)\"\n docker:\n image: andylizf/triton-tlx:tlx-nv-cu122\n gpu: true\n resources:\n accelerators: L4:1\n"}
  {"problem_id": "symbolic_regression/mccormick", "category": "research", "statement": "Symbolic Regression Benchmark - McCormick Dataset\n=================================================\n\nProblem Setting\n---------------\nLearn a closed-form symbolic expression `f(x1, x2)` that predicts the target `y`.\n\nThis dataset is derived from the McCormick function, a classic 2D optimization test function featuring a combination of trigonometric and polynomial terms. The function exhibits a smooth, wavy surface with a global minimum.\n\nInput Format\n------------\n- Your `Solution.solve` receives:\n - `X`: numpy.ndarray of shape `(n, 2)` containing feature values\n - `y`: numpy.ndarray of shape `(n,)` containing target values\n- Dataset columns: `x1, x2, y`\n\nOutput Specification\n--------------------\nImplement a `Solution` class in `solution.py`:\n\n```python\nimport numpy as np\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n \"\"\"\n Args:\n X: Feature matrix of shape (n, 2)\n y: Target values of shape (n,)\n\n Returns:\n dict with keys:\n - \"expression\": str, a Python-evaluable expression using x1, x2\n - \"predictions\": list/array of length n (optional)\n - \"details\": dict with optional \"complexity\" int\n \"\"\"\n # Example: fit a symbolic expression to the data\n expression = \"x1 + x2\" # placeholder\n return {\n \"expression\": expression,\n \"predictions\": None, # will be computed from expression if omitted\n \"details\": {}\n }\n```\n\nExpression Requirements:\n- Must be a valid Python expression string\n- Use variable names: `x1`, `x2`\n- Allowed operators: `+`, `-`, `*`, `/`, `**`\n- Allowed functions: `sin`, `cos`, `exp`, `log`\n- Numeric constants are allowed\n\nDependencies (pinned versions)\n------------------------------\n```\npysr==0.19.0\nnumpy==1.26.4\npandas==2.2.2\nsympy==1.13.3\n```\n\nMinimal Working Examples\n------------------------\n\n**Example 1: Using PySR 
(recommended)**\n```python\nimport numpy as np\nfrom pysr import PySRRegressor\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n model = PySRRegressor(\n niterations=40,\n binary_operators=[\"+\", \"-\", \"*\", \"/\"],\n unary_operators=[\"sin\", \"cos\", \"exp\", \"log\"],\n populations=15,\n population_size=33,\n maxsize=25,\n verbosity=0,\n progress=False,\n random_state=42,\n )\n model.fit(X, y, variable_names=[\"x1\", \"x2\"])\n\n # Get best expression as sympy, convert to string\n best_expr = model.sympy()\n expression = str(best_expr)\n\n # Predictions\n predictions = model.predict(X)\n\n return {\n \"expression\": expression,\n \"predictions\": predictions.tolist(),\n \"details\": {}\n }\n```\n\n**Example 2: Manual expression (simple baseline)**\n```python\nimport numpy as np\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n # Simple linear combination as baseline\n x1, x2 = X[:, 0], X[:, 1]\n\n # Fit coefficients via least squares\n A = np.column_stack([x1, x2, np.ones_like(x1)])\n coeffs, _, _, _ = np.linalg.lstsq(A, y, rcond=None)\n a, b, c = coeffs\n\n expression = f\"{a:.6f}*x1 + {b:.6f}*x2 + {c:.6f}\"\n predictions = a * x1 + b * x2 + c\n\n return {\n \"expression\": expression,\n \"predictions\": predictions.tolist(),\n \"details\": {}\n }\n```\n\n**Example 3: Using sympy for expression manipulation**\n```python\nimport numpy as np\nimport sympy as sp\nfrom pysr import PySRRegressor\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n model = PySRRegressor(\n niterations=30,\n binary_operators=[\"+\", \"-\", \"*\", \"/\"],\n unary_operators=[\"sin\", \"cos\"],\n verbosity=0,\n progress=False,\n )\n model.fit(X, y, variable_names=[\"x1\", \"x2\"])\n\n # Get sympy expression and simplify\n sympy_expr = model.sympy()\n simplified = 
sp.simplify(sympy_expr)\n\n # Convert to evaluable string\n expression = str(simplified)\n\n return {\n \"expression\": expression,\n \"predictions\": None, # evaluator will compute from expression\n \"details\": {}\n }\n```\n\nPySR API Notes (v0.19.0)\n------------------------\n- `model.fit(X, y, variable_names=[\"x1\", \"x2\"])` - use variable_names to match expected output\n- `model.sympy()` - returns best expression as sympy object\n- `model.predict(X)` - returns predictions array\n- `model.equations_` - DataFrame of all discovered equations\n- Common parameters:\n - `niterations`: number of evolution iterations (more = better but slower)\n - `populations`: number of parallel populations\n - `maxsize`: maximum expression complexity\n - `verbosity=0, progress=False`: suppress output\n\nExpression Format Requirements\n------------------------------\n- Must be a valid Python expression string\n- Use variable names: `x1`, `x2`\n- Allowed operators: `+`, `-`, `*`, `/`, `**`\n- Allowed functions: `sin`, `cos`, `exp`, `log` (NO `np.` prefix)\n- Numeric constants are allowed\n- The evaluator uses `sympy.sympify()` to parse your expression\n\nScoring\n-------\n```\nMSE = (1/n) Σ (y_i - ŷ_i)²\nScore = 100 × clamp((m_base - MSE) / (m_base - m_ref), 0, 1) × 0.99^max(C - C_ref, 0)\n```\n\n- `m_base`: linear regression baseline MSE\n- `m_ref`, `C_ref`: reference solution MSE and complexity\n- `C = 2 × (#binary ops) + (#unary ops)`\n- Lower MSE and lower complexity yield higher scores\n\nEnvironment\n-----------\nRun `set_up_env.sh` to install dependencies.\n", "config": "{\n \"dependencies\": {\n \"uv_project\": \"resources\"\n },\n \"datasets\": [],\n \"tag\": \"pl\"\n}\n"}
  {"problem_id": "symbolic_regression/mixed_polyexp_4d", "category": "research", "statement": "Symbolic Regression Benchmark - Mixed PolyExp 4D Dataset\n=========================================================\n\nProblem Setting\n---------------\nLearn a closed-form symbolic expression `f(x1, x2, x3, x4)` that predicts the target `y`.\n\nThis is a higher-dimensional dataset (4 input features) combining polynomial interactions with exponential decay. The function involves cross-terms between variables and Gaussian-like damping, making it more challenging than the 2D variants.\n\nInput Format\n------------\n- Your `Solution.solve` receives:\n - `X`: numpy.ndarray of shape `(n, 4)` containing feature values\n - `y`: numpy.ndarray of shape `(n,)` containing target values\n- Dataset columns: `x1, x2, x3, x4, y`\n\nOutput Specification\n--------------------\nImplement a `Solution` class in `solution.py`:\n\n```python\nimport numpy as np\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n \"\"\"\n Args:\n X: Feature matrix of shape (n, 4)\n y: Target values of shape (n,)\n\n Returns:\n dict with keys:\n - \"expression\": str, a Python-evaluable expression using x1, x2, x3, x4\n - \"predictions\": list/array of length n (optional)\n - \"details\": dict with optional \"complexity\" int\n \"\"\"\n # Example: fit a symbolic expression to the data\n expression = \"x1 + x2 + x3 + x4\" # placeholder\n return {\n \"expression\": expression,\n \"predictions\": None, # will be computed from expression if omitted\n \"details\": {}\n }\n```\n\nExpression Requirements:\n- Must be a valid Python expression string\n- Use variable names: `x1`, `x2`, `x3`, `x4`\n- Allowed operators: `+`, `-`, `*`, `/`, `**`\n- Allowed functions: `sin`, `cos`, `exp`, `log`\n- Numeric constants are allowed\n\nDependencies (pinned 
versions)\n------------------------------\n```\npysr==0.19.0\nnumpy==1.26.4\npandas==2.2.2\nsympy==1.13.3\n```\n\nMinimal Working Examples\n------------------------\n\n**Example 1: Using PySR (recommended)**\n```python\nimport numpy as np\nfrom pysr import PySRRegressor\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n model = PySRRegressor(\n niterations=50, # more iterations for 4D\n binary_operators=[\"+\", \"-\", \"*\", \"/\"],\n unary_operators=[\"sin\", \"cos\", \"exp\", \"log\"],\n populations=20,\n population_size=40,\n maxsize=30, # larger for 4D complexity\n verbosity=0,\n progress=False,\n random_state=42,\n )\n model.fit(X, y, variable_names=[\"x1\", \"x2\", \"x3\", \"x4\"])\n\n # Get best expression as sympy, convert to string\n best_expr = model.sympy()\n expression = str(best_expr)\n\n # Predictions\n predictions = model.predict(X)\n\n return {\n \"expression\": expression,\n \"predictions\": predictions.tolist(),\n \"details\": {}\n }\n```\n\n**Example 2: Manual expression (simple baseline)**\n```python\nimport numpy as np\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n # Simple linear combination as baseline\n x1, x2, x3, x4 = X[:, 0], X[:, 1], X[:, 2], X[:, 3]\n\n # Fit coefficients via least squares\n A = np.column_stack([x1, x2, x3, x4, np.ones_like(x1)])\n coeffs, _, _, _ = np.linalg.lstsq(A, y, rcond=None)\n a, b, c, d, e = coeffs\n\n expression = f\"{a:.6f}*x1 + {b:.6f}*x2 + {c:.6f}*x3 + {d:.6f}*x4 + {e:.6f}\"\n predictions = a * x1 + b * x2 + c * x3 + d * x4 + e\n\n return {\n \"expression\": expression,\n \"predictions\": predictions.tolist(),\n \"details\": {}\n }\n```\n\nPySR API Notes (v0.19.0)\n------------------------\n- `model.fit(X, y, variable_names=[\"x1\", \"x2\", \"x3\", \"x4\"])` - use variable_names to match expected output\n- `model.sympy()` - returns best expression as sympy object\n- 
`model.predict(X)` - returns predictions array\n- `model.equations_` - DataFrame of all discovered equations\n- Common parameters:\n - `niterations`: number of evolution iterations (more = better but slower)\n - `populations`: number of parallel populations\n - `maxsize`: maximum expression complexity\n - `verbosity=0, progress=False`: suppress output\n\nExpression Format Requirements\n------------------------------\n- Must be a valid Python expression string\n- Use variable names: `x1`, `x2`, `x3`, `x4`\n- Allowed operators: `+`, `-`, `*`, `/`, `**`\n- Allowed functions: `sin`, `cos`, `exp`, `log` (NO `np.` prefix)\n- Numeric constants are allowed\n- The evaluator uses `sympy.sympify()` to parse your expression\n\nScoring\n-------\n```\nMSE = (1/n) Σ (y_i - ŷ_i)²\nScore = 100 × clamp((m_base - MSE) / (m_base - m_ref), 0, 1) × 0.99^max(C - C_ref, 0)\n```\n\n- `m_base`: linear regression baseline MSE\n- `m_ref`, `C_ref`: reference solution MSE and complexity\n- `C = 2 × (#binary ops) + (#unary ops)`\n- Lower MSE and lower complexity yield higher scores\n\nEnvironment\n-----------\nRun `set_up_env.sh` to install dependencies.\n", "config": "{\n \"dependencies\": {\n \"uv_project\": \"resources\"\n },\n \"datasets\": [],\n \"tag\": \"pl\"\n}\n"}
 
  {"problem_id": "205", "category": "algorithmic", "statement": "Sequence Transformation\n\nProblem Description:\nYou are given two valid parenthesis sequences s1 and s2, both of length 2n. Your goal is to transform s1 into s2 using the minimum number of operations.\n\nAvailable Operations:\n- Op 1: Transform p(((A)B)C)q into p((A)B)(C)q\n- Op 2: Transform p((A)(B)C)q into p((A)B)(C)q\n- Op 3: Transform p(A)((B)C)q into p((A)B)(C)q\n- Op 4: Transform p(A)(B)(C)q into p((A)B)(C)q\n\nWhere A, B, C are valid parenthesis sequences (possibly empty), and p, q are arbitrary sequences.\n\nSpecial Operations (Each can be used at most 2 times per case):\n- Op 5: Insert a pair of empty parentheses \"()\" at any position (max 2 times).\n- Op 6: Remove a pair of empty parentheses \"()\" from any position (max 2 times).\n\nInput Format:\n- First line: an integer n (1 <= n <= 100,000)\n- Second line: a string s1 of length 2n\n- Third line: a string s2 of length 2n\n\nOutput Format:\n- First line: an integer Q (the number of operations, must not exceed 3n)\n- Next Q lines: each line contains two integers op and x\n - op: operation number (1-6)\n - x: position where the operation is applied\n\nPosition definition:\n- For operations 1-4: x is the starting position of the leftmost '(' in the pattern\n- For operations 5-6: x is the position to insert/remove \"()\"\n- All positions are 0-indexed\n\nExample:\nInput:\n3\n(())()\n((()))\n\nOutput:\n3\n5 6\n4 0\n6 6\n\nExplanation:\nInitial: (())()\nAfter Op 5 at position 6: (())()()\nAfter Op 4 at position 0: ((()))()\nAfter Op 6 at position 6: ((()))\n\nScoring:\nThis problem is graded based on the number of operations Q:\n- If Q <= 1.9n, you receive full score (1.0).\n- If Q >= 3n, you receive 0 score.\n- Otherwise, Score = (3n - Q) / (1.1n), clamped to [0, 1]\n\nConstraints:\n- 1 <= n <= 100,000\n- Total operations must not exceed 3n.\n- Op 5 can be used at most 2 times.\n- Op 6 can be used at most 2 times.\n- Both s1 and s2 are valid 
parenthesis sequences.\n", "config": "# Set the problem type to standard\ntype: default\n\n# Specify the checker source file\nchecker: chk.cc\n\n# Time and memory limits\ntime: 2s\nmemory: 256m\n\n# The subtasks section\nsubtasks:\n - score: 100\n n_cases: 3 # Looks for 1.in/1.ans, 2.in/2.ans, 3.in/3.ans in testdata/\n\n"}
  {"problem_id": "207", "category": "algorithmic", "statement": "Efficient Sorting\n\nDescription\n\nYou are given a permutation S of N distinct integers from 0 to N-1. Your task is to sort the permutation into increasing order (i.e., S[i] = i for all 0 <= i < N) while playing a game against a character named Jerry.\n\nThe game proceeds in a sequence of rounds. You must decide in advance the total number of rounds, R, you wish to play. Jerry has a predetermined sequence of M planned swaps. In each round k (where 0 <= k < R):\n\n1. Jerry's Move: Jerry performs his k-th planned swap on the array S.\n2. Your Move: You choose two indices u_k and v_k (0 <= u_k, v_k < N) and swap the elements S[u_k] and S[v_k].\n\nAfter the R rounds are completed, the array S must be sorted. If the array becomes sorted before the R-th round, you must still complete the remaining rounds (you may perform dummy swaps, such as swapping an index with itself).\n\nWe define the \"Energy Cost\" of a single swap (u, v) as the distance between the indices: |u - v|.\n\nYour objective is to minimize the \"Total Efficiency Value\" (V), defined as:\nV = R * (Sum of |u_k - v_k| for all k from 0 to R-1)\n\nInput Format\n\nThe first line contains an integer N, the length of the permutation.\nThe second line contains N space-separated integers S_0, S_1, ..., S_{N-1}, representing the initial permutation.\nThe third line contains an integer M, the number of Jerry's planned swaps.\nThe following M lines each contain two space-separated integers X_j and Y_j, representing the indices Jerry intends to swap in round j (for 0 <= j < M).\n\nOutput Format\n\nThe first line of output should contain a single integer R, the number of rounds you choose to play.\nThe following R lines should each contain two space-separated integers u_k and v_k, representing your swap in round k.\nThe last line of output should contain a single integer V, the Total Efficiency Value.\n\nThe value of R must satisfy 0 <= R <= M. 
After the completion of all R rounds (including Jerry's moves and your moves), the array S must be sorted.\n\nScoring\n\nYour score is calculated based on the Total Efficiency Value V.\n\nThe scoring function is defined as follows:\n- If V <= 10,000,000,000,000 (10^13), you receive 100 points.\n- If V >= 3,300,000,000,000,000 (3.3×10^15), you receive 0 points.\n- Otherwise, your score is calculated linearly:\n Score = 100 * (3.3×10^15 - V) / (3.3×10^15 - 10^13)\n\nConstraints\n\n- 1 <= N <= 200,000\n- 1 <= M <= 600,000\n- 0 <= S_i < N, all S_i are distinct.\n- 0 <= X_j, Y_j < N\n- It is guaranteed that it is possible to sort the array within M rounds.\n\nExample\n\nInput:\n5\n4 3 2 1 0\n6\n0 1\n1 2\n2 3\n3 4\n0 1\n1 2\n\nOutput:\n3\n0 4\n1 3\n3 4\n21\n\nExplanation:\nInitial sequence: [4, 3, 2, 1, 0]\n\nRound 0:\n- Jerry swaps indices (0, 1). Sequence becomes: [3, 4, 2, 1, 0]\n- You swap indices (0, 4). Cost |0-4| = 4. Sequence becomes: [0, 4, 2, 1, 3]\n\nRound 1:\n- Jerry swaps indices (1, 2). Sequence becomes: [0, 2, 4, 1, 3]\n- You swap indices (1, 3). Cost |1-3| = 2. Sequence becomes: [0, 1, 4, 2, 3]\n\nRound 2:\n- Jerry swaps indices (2, 3). Sequence becomes: [0, 1, 2, 4, 3]\n- You swap indices (3, 4). Cost |3-4| = 1. Sequence becomes: [0, 1, 2, 3, 4]\n\nThe array is now sorted.\nTotal cost sum = 4 + 2 + 1 = 7.\nTotal Efficiency Value V = 3 * 7 = 21.", "config": "\ntype: default\nchecker: chk.cc\n\n# Time and memory limits still apply to the contestant's solution\ntime: 10s\nmemory: 256m\n\n# The subtasks section works the same way\nsubtasks:\n- score: 100\n n_cases: 3\n \n"}
  {"problem_id": "209", "category": "algorithmic", "statement": "Hidden Weights\n\nDescription\n\nThis is an interactive problem.\n\nYou are given a positive integer h. Let n = 2^h - 1.\nThere is a perfect binary tree G with n nodes, numbered 1 to n. The root of the tree is node 1. For any node u (2 <= u <= n), its parent is floor(u / 2).\n\nThe interactor maintains two hidden arrays:\n\n1. A permutation p of length n, containing integers from 1 to n.\n2. A weight array f of length n, where each element f_v corresponds to the weight of tree node v. All weights are non-negative integers not exceeding 10^9.\n\nYour task is to determine the sum of all weights in the tree.\n\nInteraction\n\nFirst, your program should read a single integer h (2 <= h <= 18) from standard input.\nThen, you may ask queries to the interactor. To make a query, print a line in the following format:\n\n? u d\n\nwhere u is an integer index (1 <= u <= n) and d is a distance (1 <= d <= 10^9).\n\nThe interactor will respond with a single integer: the sum of weights f_v for all nodes v in the tree such that the distance between node p_u and node v is exactly d.\nFormally, the interactor returns the sum of f_v for all v where dist(p_u, v) = d.\nIf no such nodes v exist, the interactor returns 0.\n\n* p_u denotes the u-th element of the hidden permutation p.\n* dist(x, y) is the number of edges on the simple path between node x and node y in the tree.\n\nOnce you have determined the total sum of weights, output the answer in the following format:\n\n! S\n\nwhere S is the calculated sum. 
After outputting the answer, your program must terminate immediately.\n\nConstraints\n\n* 2 <= h <= 18\n* n = 2^h - 1\n* 0 <= f_v <= 10^9\n* The interactor is not adaptive (p and f are fixed at the start).\n* 1 <= d <= 10^9 (Note: d must be at least 1).\n\nScoring\n\nYour score depends on Q, the number of queries you perform.\nLet L = 3 * n / 4 (integer division) and R = (13 * n + 21) / 8 (integer division).\n\n* If Q <= L, you receive 100 points.\n* If Q >= R, you receive 0 points.\n* Otherwise, your score is calculated linearly:\nScore = floor(100 * (R - Q) / (R - L))\n\nTechnical Note\n\nRemember to flush the output buffer after every query and the final answer.\n\n* C++: cout << endl; or fflush(stdout);\n* Python: print(..., flush=True)\n* Java: System.out.flush();\n\nExample\n\nInput:\n2\n11\n59\n11\n\nOutput:\n? 1 1\n? 2 1\n? 3 1\n! 70\n\nExplanation of Example:\nh = 2, so n = 3. The tree has nodes 1 (root), 2 (left child), 3 (right child).\nHidden permutation p = [2, 1, 3].\nHidden weights f = [11, 45, 14] (f_1=11, f_2=45, f_3=14). Total sum is 70.\n\nQuery 1: \"? 1 1\"\nu=1. Center is p_1 = 2.\nNodes at distance 1 from node 2 are {1}. (Node 3 is at distance 2).\nResponse: f_1 = 11.\n\nQuery 2: \"? 2 1\"\nu=2. Center is p_2 = 1.\nNodes at distance 1 from node 1 are {2, 3}.\nResponse: f_2 + f_3 = 45 + 14 = 59.\n\nQuery 3: \"? 3 1\"\nu=3. Center is p_3 = 3.\nNodes at distance 1 from node 3 are {1}. (Node 2 is at distance 2).\nResponse: f_1 = 11.\n\nCalculation:\nFrom Query 2, we know the sum of weights of children (nodes 2 and 3) is 59.\nFrom Query 1 (or 3), we know the weight of the root (node 1) is 11.\nTotal Sum = 59 + 11 = 70.", "config": "\ntype: interactive\ninteractor: interactor.cc\n\n# Time and memory limits still apply to the contestant's solution\ntime: 10s\nmemory: 512m\n\n# The subtasks section works the same way\nsubtasks:\n- score: 100\n n_cases: 3\n \n"}
+ {"problem_id": "210", "category": "algorithmic", "statement": "# Military Exercise: Fighter Scheduling and Base Strikes (Blue Side)\n\nYou are the **blue side** in a simplified military exercise on a 2D grid.\n\nThe map is an \\(n \\times m\\) grid (0-indexed coordinates):\n\n- `#` : red base (enemy)\n- `*` : blue base (friendly)\n- `.` : neutral cell\n\nBoth sides have bases. Blue controls a set of fighters and must plan actions to destroy red bases and maximize score.\n\nThis is a **planning / simulation** task: your program reads the input once and outputs a sequence of per-frame commands. A custom checker simulates the world and computes your score.\n\n---\n\n## Rules\n\n### Fighters\n\nThere are \\(k\\) blue fighters, indexed \\(0..k-1\\). Each fighter has:\n\n- Initial position \\((x,y)\\) (guaranteed to be on a blue base)\n- Fuel tank capacity `G` (max fuel carried)\n- Missile capacity `C` (max missiles carried)\n\nInitial fuel and missiles are both **0**.\n\n### Movement\n\n- In one frame, a fighter may move by **1 cell** in one of 4 directions:\n - `0`: up, `1`: down, `2`: left, `3`: right\n- A **successful move consumes 1 unit of fuel**.\n- A fighter cannot leave the grid.\n- A fighter **must not enter a red base cell that is not yet destroyed**.\n\nIf a fighter does not successfully move in a frame, it is considered \"landed\" for that frame (no fuel consumption).\n\n### Refueling / Reloading (only on blue bases)\n\nIf a fighter is currently on a **blue base** cell, it can:\n\n- `fuel`: transfer fuel from the base to the fighter (up to remaining base supply and tank capacity)\n- `missile`: transfer missiles from the base to the fighter (up to remaining base supply and missile capacity)\n\nRefueling/reloading time is ignored; multiple such commands in a frame are allowed (subject to supplies/capacity).\n\n### Attacking\n\n- A fighter may attack in one of 4 directions (`0..3`) with range **1 cell** (adjacent).\n- The target cell must contain a 
**not-yet-destroyed red base**.\n- `attack <id> <dir> <count>` consumes exactly `count` missiles from the fighter.\n- Each red base has an integer defense `d`. When cumulative missiles received reaches `d`, the base is destroyed.\n\n### Scoring\n\nEach red base has a military value `v`. When a red base is **destroyed**, you gain **+v** points (only once per base).\n\nYour goal is to **maximize the total score** after up to **15000 frames**.\n\nInvalid commands are ignored (the simulation continues).\n\n---\n\n## Input Format\n\n### Map\n\n- Line 1: `n m` \\((1 \\le n,m \\le 200)\\)\n- Next `n` lines: `m` characters each, describing the grid.\n\n### Bases\n\nBlue bases first, then red bases.\n\nFor each side:\n\n- Line: integer `N` = number of bases\n- For each base:\n - Line: `x y` (0-indexed)\n - Line: `g c d v`\n - `g`: fuel supply\n - `c`: missile supply\n - `d`: defense (missiles needed to destroy)\n - `v`: military value\n\n### Fighters\n\n- Line: integer `k` \\((1 \\le k \\le 10)\\)\n- Next `k` lines: `x y G C` for fighter `id = i-1`\n\nConstraints (from released datasets):\n\n- `1 ≤ G ≤ 1000`\n- `1 ≤ C ≤ 1000`\n\n---\n\n## Output Format\n\nFor each frame, output **zero or more** command lines, then a line:\n\n```\nOK\n```\n\nCommands:\n\n- `move <id> <dir>`\n- `attack <id> <dir> <count>`\n- `fuel <id> <count>`\n- `missile <id> <count>`\n\nThere are at most **15000 frames**. Your output may end early (remaining frames are treated as doing nothing).\n\n---\n\n## Sample Input\n\nSee `testdata/1.in`.\n\n\n", "config": "\ntype: default\nchecker: chk.cc\nchecker_type: testlib\n\n# Time and memory limits apply to the contestant's solution program.\ntime: 5s\nmemory: 512m\n\nsubtasks:\n - score: 100\n n_cases: 3\n\n\n"}
  {"problem_id": "211", "category": "algorithmic", "statement": "Communication Robots\n\n## Problem Description\n\nIn a task area, there are several robots distributed that must maintain a connected network through wireless communication to complete tasks collaboratively.\n\nWireless communication has the following characteristics:\n\n(1) Establishing communication links consumes energy.\n\n(2) There are high-power robots with more advanced communication modules, and links connected to them have lower energy consumption.\n\n(3) There are also several optional relay stations distributed in the task area that can serve as intermediate nodes for signals, helping to reduce overall energy consumption.\n\nYour task is to:\n\nDesign communication links, reasonably choose whether to enable relay stations, ensure all robots are connected, and minimize the overall energy consumption cost.\n\n## Communication Energy Consumption Rules\n\nThe square of the Euclidean distance between two nodes is the base value (D) of the communication energy consumption between the two nodes.\n\n- The energy consumption cost between an ordinary robot (R) and an ordinary robot (R) is 1 × D.\n- The energy consumption cost between an ordinary robot (R) and a high-power robot (S) is 0.8 × D.\n- The energy consumption cost between a high-power robot (S) and a high-power robot (S) is 0.8 × D.\n- The energy consumption cost between a relay station (C) and any robot (R or S) is 1 × D.\n- Relay stations (C) cannot communicate directly with each other.\n\n## Goal\n\nAll robots (R and S) must form a connected network. 
Any two robots must have a communication path between them, which can be a direct connection or pass through other robots or relay stations.\n\nYou can choose to use or not use any relay stations.\n\nMinimize the overall energy consumption cost.\n\n## Input Format\n\nThe first line contains two integers: N (number of robots) and K (number of optional relay stations).\n\nThe next N + K lines each contain: device ID, x-coordinate, y-coordinate, and type.\n\nConstraints:\n- N ≤ 1500, K ≤ 1500\n- Type R represents an ordinary robot, S represents a high-power robot, C represents an optional relay station\n- x-coordinates and y-coordinates are integers in the range [-10000, 10000]\n\n## Output Format\n\nThe first line: IDs of selected relay stations (if multiple, separated by \"#\"; if none, output \"#\").\n\nThe second line: Set of communication links (each link is represented as \"device_id-device_id\", multiple links are separated by \"#\").\n\n## Example\n\n### Input\n\n```\n3 1\n1 0 0 R\n2 100 0 R\n3 50 40 S\n4 50 0 C\n```\n\n### Output\n\n```\n4\n1-3#2-3#3-4\n```\n\n## Scoring\n\nYour solution will be evaluated based on the total energy consumption cost of the network you construct. The score is calculated as:\n\n- Base score: The minimum spanning tree (MST) cost of all non-relay nodes (without using any relay stations).\n- Your score: The actual total cost of your network.\n- Final score ratio: min(1.0, base_cost / actual_cost)\n\nIf your network cost is less than or equal to the base MST cost, you receive full score (1.0). 
Otherwise, your score decreases proportionally.\n\n## Constraints\n\n- 1 ≤ N ≤ 1500\n- 0 ≤ K ≤ 1500\n- Device coordinates: -10000 ≤ x, y ≤ 10000\n- All robots (R and S) must be connected in the final network\n- Relay stations cannot be directly connected to each other\n\n## Time Limit\n\n10 seconds per test case\n\n## Memory Limit\n\n512 MB\n", "config": "# Set the problem type to default (submit answer problems use default type)\ntype: default\n\n# Specify the checker source file\nchecker: chk.cc\n\n# Time and memory limits (for submit answer problems, these may not be strictly enforced)\ntime: 10s\nmemory: 512m\n\n# The subtasks section\nsubtasks:\n - score: 100\n n_cases: 4 # Test cases: 1.in, 2.in, ..., 10.in in testdata/\n"}
  {"problem_id": "212", "category": "algorithmic", "statement": "I wanna cross the grid\n\nProblem Description:\nSuddenly, the save point landed in a huge grid. Only by passing through all required areas can the next save point appear...\n\nYou are given a grid with n rows and m columns, where rows and columns are numbered starting from 1. Define a pair (x, y) to represent the cell at row x and column y. For each row, the cells from column L to column R are required areas. Formally, let D be the set of required areas, then D = {(x, y) | 1 ≤ x ≤ n, L ≤ y ≤ R, x, y ∈ N+}.\n\nIn each step, kid can move one step in one of the four directions (up, down, left, right) without exceeding the boundaries. Formally, if kid is currently at (x, y), then kid can move to (x+1, y), (x, y+1), (x-1, y), or (x, y-1).\n\nInitially, kid is at (Sx, Sy) (guaranteed that Sy = L). Kid needs to pass through all required areas, and any cell can be visited at most once. Formally, kid's path is a sequence of pairs P = (x₁, y₁), (x₂, y₂), ..., (xₖ, yₖ), which must satisfy:\n- ∀ (x₀, y₀) ∈ D, ∃ i ∈ [1, k] such that (x₀, y₀) = (xᵢ, yᵢ)\n- ∀ i ≠ j, (xᵢ, yᵢ) ≠ (xⱼ, yⱼ)\n\nAdditionally, kid needs to record a completion sequence p. When kid first enters the required area of a certain row, the row number must be appended to the current sequence, and kid must immediately pass through all required areas of that row. At the same time, p must contain a subsequence q of length Lq to be a valid completion sequence and truly complete the level. Formally, p is valid if and only if there exists a sequence c of length Lq such that p[cᵢ] = qᵢ and c is monotonically increasing.\n\nTo reduce the difficulty for lindongli2004, lindongli2004 hopes that kid takes as few steps as possible.\n\nGiven n, m, L, R, Sx, Sy, and q, please plan a completion route for kid, or tell him that no such route exists. 
The rest of the operations will be left to lindongli2004!\n\nInput Format:\nThe first line contains 8 positive integers: n, m, L, R, Sx, Sy, Lq, s, representing the number of rows and columns of the grid, the left and right boundaries of the required area, the x and y coordinates of the starting point, the length of sequence q, and the scoring parameter.\n\nThe second line contains Lq distinct positive integers, representing the sequence q.\n\nOutput Format:\nThe first line contains a string \"YES\" or \"NO\" (without quotes) indicating whether there exists a valid path.\n\nIf there exists a valid path, the second line contains a positive integer cnt representing the length of the path, followed by cnt lines, each containing two positive integers x and y representing the coordinates of the path.\n\nExample:\nInput:\n5 4 2 3 2 2 2 15\n3 1\n\nOutput:\nYES\n15\n2 2\n2 3\n3 3\n3 2\n4 2\n4 3\n5 3\n5 2\n5 1\n4 1\n3 1\n2 1\n1 1\n1 2\n1 3\n\nScoring:\nThe last number in the first line of the input file is s. Let your number of steps be cnt. Then:\n- If cnt ≤ s, you will receive 10 points.\n- If cnt > s and you can complete the level, you will receive max(5, 10 - (cnt - s) / ⌊n/2⌋ - 1) points.\n- If you cannot complete the level, you will receive 0 points.\n\nConstraints:\n- 1 ≤ L ≤ R ≤ m ≤ 40\n- 1 ≤ Sx ≤ n ≤ 40\n- Other constraints are detailed in the provided files.\n- The maximum capacity of the checker is 2.5 × 10⁸, meaning your solution cannot contain more than 2.5 × 10⁸ numbers.\n\nTime limit:\n30 seconds\n\nMemory limit:\n512 MB\n", "config": "# Set the problem type to default (submit answer problems use default type)\ntype: default\n\n# Specify the checker source file\nchecker: chk.cc\n\n# Time and memory limits (for submit answer problems, these may not be strictly enforced)\ntime: 30s\nmemory: 512m\n\n# The subtasks section\nsubtasks:\n - score: 100\n n_cases: 4 # Test cases: 1.in, 2.in, ..., 10.in in testdata/\n"}
  {"problem_id": "213", "category": "algorithmic", "statement": "Sequence Shift (moqueve)\n\nProblem Description:\nYou need to sort a permutation of $1\sim n$ on a strange computer.\n\nYou can choose a number $x$, and then each time you can cyclically shift a segment of length $x$ to the left or right (the leftmost/rightmost element moves to the rightmost/leftmost position) (shift amount is $1$).\n\nPlease restore the sequence to $1\sim n$ within $230\times n$ operations.\n\nInput Format:\nThe first line contains a single integer $n$.\n\nThe second line contains $n$ integers, representing the sequence $a$.\n\nOutput Format:\nThe first line contains two integers $x$ and $m$, where $m$ represents the number of operations.\n\nThe next $m$ lines each contain three integers: the first two represent the shift interval, and the last one represents the direction, where $0$ means left and $1$ means right.\n\nExample:\nInput:\n5\n4 2 3 5 1\n\nOutput:\n3\n3\n3 5 1\n1 3 1\n2 4 0\n\nExplanation:\n- Right shift (3,5): sequence becomes 4,2,1,3,5\n- Right shift (1,3): sequence becomes 1,4,2,3,5\n- Left shift (2,4): sequence becomes 1,2,3,4,5\n\nConstraints:\n- $n \leq 1000$\n- The sequence $a$ is a permutation of $1\sim n$\n\nScoring:\nYour score is calculated based on the number of operations $m$:\n- If $m \leq 23n$, you receive full score (1.0).\n- If $m > 230n$, you receive 0 score.\n- Otherwise, Score = max(0.0, 1.0 - (m - 23n) / (230n - 23n)), linearly decreasing from 1.0 to 0.0.\n\nTime limit:\n5 seconds\n\nMemory limit:\n512 MB\n", "config": "# Set the problem type to default (submit answer problems use default type)\ntype: default\n\n# Specify the checker source file\nchecker: chk.cc\n\n# Time and memory limits (for submit answer problems, these may not be strictly enforced)\ntime: 2s\nmemory: 512m\n\n# The subtasks section\nsubtasks:\n - score: 100\n n_cases: 3 # Test cases: 1.in, 2.in, 3.in in testdata/\n"}
  {"problem_id": "poc_generation/stack_buffer_overflow", "category": "research", "statement": "", "config": "{\"tag\": \"security\"}\n"}
  {"problem_id": "poc_generation/uninitialized_value", "category": "research", "statement": "", "config": "{\"tag\": \"security\"}\n"}
  {"problem_id": "qknorm", "category": "research", "statement": "QKNorm Optimization Problem\n============================\n\nProblem Setting\n---------------\nDesign and optimize high-performance implementations for Query-Key Normalization (QKNorm) on GPU. This problem focuses on implementing efficient normalization kernels that apply RMSNorm to query and key tensors.\n\nThis is a **memory-bound** (even **launch-bound**) **tiny operator**. Performance optimization requires careful attention to:\n\n1. **Memory Efficiency**: Focus on **vectorized memory access patterns**. Minimize memory transactions and maximize memory bandwidth utilization.\n\n2. **Operation Fusion**: **Avoid additional transpose/contiguous kernels**. Fuse operations to reduce kernel launch overhead and memory traffic.\n\n3. **Non-Contiguous Input Handling**: **Be aware that inputs may be non-contiguous** due to weight-QKV fusion. Your implementation should efficiently handle non-contiguous memory layouts without triggering expensive memory copies.\n\nTarget\n------\n- **Primary**: Ensure correctness across diverse tensor shapes\n- **Secondary**: Maximize geometric mean speedup over baseline (higher is better)\n- **Tertiary**: Minimize kernel launch overhead and memory usage\n\nAPI Specification\n-----------------\nImplement a `Solution` class that returns a qknorm implementation:\n\n```python\nclass Solution:\n def solve(self, spec_path: str = None) -> dict:\n \"\"\"\n Returns a dict with either:\n - {\"code\": \"python_code_string\"}\n - {\"program_path\": \"path/to/kernel.py\"}\n \"\"\"\n # Your implementation\n pass\n```\n\nYour kernel implementation must provide:\n\n```python\nimport torch\nimport flashinfer\n\ndef qknorm(q: torch.Tensor, k: torch.Tensor, norm_weight: torch.Tensor):\n \"\"\"\n Apply RMSNorm to query and key tensors.\n \n Args:\n q: Query tensor of arbitrary shape (will be reshaped to 2D)\n k: Key tensor of arbitrary shape (will be reshaped to 2D)\n norm_weight: Normalization 
weight tensor of shape (hidden_dim,)\n \n Returns:\n Tuple of (q_normalized, k_normalized) tensors\n \"\"\"\n pass\n```\n\nRequired Default Implementation:\n```python\ndef default_qknorm(q: torch.Tensor, k: torch.Tensor, norm_weight: torch.Tensor):\n q_2d = q.contiguous().view(-1, q.shape[-1])\n k_2d = k.contiguous().view(-1, k.shape[-1])\n q_o = torch.empty_like(q_2d)\n k_o = torch.empty_like(k_2d)\n flashinfer.norm.rmsnorm(q_2d, norm_weight, out=q_o)\n flashinfer.norm.rmsnorm(k_2d, norm_weight, out=k_o)\n return q_o.view(q.shape), k_o.view(k.shape)\n```\n\nBaseline Implementation:\n```python\ndef customized_qknorm(q: torch.Tensor, k: torch.Tensor, norm_weight: torch.Tensor):\n q_o = torch.empty(q.shape, device=q.device, dtype=q.dtype)\n k_o = torch.empty(k.shape, device=k.device, dtype=k.dtype)\n flashinfer.norm.rmsnorm(q, norm_weight, out=q_o)\n flashinfer.norm.rmsnorm(k, norm_weight, out=k_o)\n return q_o, k_o\n```\n\nAPI Usage Notes\n---------------\n- The evaluator looks for a `qknorm` function in the module namespace\n- Function must handle tensor reshaping correctly (q and k may have arbitrary shapes)\n- Must use flashinfer.norm.rmsnorm for normalization\n- Function returns a tuple of (q_normalized, k_normalized) tensors\n- **Important**: Inputs q and k may be **non-contiguous** due to weight-QKV fusion\n- **Avoid**: Additional `.contiguous()` or `.transpose()` calls that trigger memory copies\n- **Focus**: Vectorized memory access and operation fusion to minimize kernel launches\n\nScoring (0-100)\n---------------\nPerformance is measured against baseline implementations:\n\n```\ngeometric_mean_speedup = geometric_mean(answer_times / baseline_times)\n\nif speedup < 0.5 or correctness is wrong:\n score = 0\nelif speedup >= 0.5 and speedup < 1.0:\n score = 50\nelif speedup >= 1.0:\n score = 100\n```\n\n- 0 points = Speedup < 0.5x OR correctness fails\n- 50 points = Speedup >= 0.5x and < 1.0x\n- 100 points = Speedup >= 1.0x\n\nEvaluation 
Details\n------------------\n- Shapes focus on diverse batch-sizes, head-dim, num-kv-heads, num-qo-heads, e.g.:\n - (16, 8, 32, 128)\n - (128, 32, 32, 64)\n- Correctness verified with tolerance: rtol=1e-2, atol=5e-3\n- Performance measured using median execution time\n- Requires CUDA backend and GPU support\n", "config": "{\n \"dependencies\": {\n \"uv_project\": \"resources\"\n },\n \"datasets\": [],\n \"runtime\": {\n \"resources\": {\n \"accelerators\": \"L4:1\"\n },\n \"docker\": {\n \"image\": \"andylizf/triton-tlx:tlx-nv-cu122-nvcc\",\n \"gpu\": true\n },\n \"environment\": \"CUDA 12.2, Python 3.11, PyTorch 2.0+, flashinfer 0.5.0, Triton 3.0+\"\n },\n \"tag\": \"hpc\"\n}\n"}
+ {"problem_id": "quant_dot_int4", "category": "research", "statement": "Quantized Dot (Int4 Packed) Optimization Problem\n================================================\n\nProblem Setting\n---------------\nDesign and optimize high-performance Triton kernels for a **quantized matrix multiplication** where the left-hand matrix is stored as **packed int4 weights** plus per-group scale/offset, and the right-hand matrix is fp16 activations.\n\nThe challenge involves optimizing:\n- **Bit unpacking**: Efficiently unpacking int4 values from int32 lanes\n- **Dequantization fusion**: Fusing (unpack - offset) * scale directly into the dot product\n- **Memory access patterns**: Efficient access for packed weights / scales / activations\n- **Block tiling**: Choosing good block sizes for the small-K GEMM\n- **Performance benchmarking**: Achieving speedup over the baseline implementation\n\nTarget\n------\n- **Primary**: Maximize geometric mean speedup over baseline (higher is better)\n- **Secondary**: Ensure correctness across diverse matrix shapes\n- **Tertiary**: Minimize kernel launch overhead and memory usage\n\nAPI Specification\n-----------------\nImplement a `Solution` class that returns a Triton kernel implementation:\n\n```python\nclass Solution:\n def solve(self, spec_path: str = None) -> dict:\n \"\"\"\n Returns a dict with either:\n - {\"code\": \"python_code_string\"}\n - {\"program_path\": \"path/to/kernel.py\"}\n \"\"\"\n # Your implementation\n pass\n```\n\nYour kernel implementation must provide:\n\n```python\nimport torch\nimport triton\nimport triton.language as tl\n\ndef quant_dot(scale: torch.Tensor, offset_packed: torch.Tensor, weight_packed: torch.Tensor, activation: torch.Tensor) -> torch.Tensor:\n \"\"\"\n Args:\n scale: float16/float32 tensor of shape (M, K/8)\n offset_packed: int32 tensor of shape (M,)\n Each int32 packs 8 int4 offsets (one per 8-wide group).\n weight_packed: int32 tensor of shape (M, K/8)\n Each int32 packs 8 int4 weights.\n 
activation: float16 tensor of shape (K, N)\n \n Returns:\n Output tensor of shape (M, N), dtype float16\n \"\"\"\n pass\n```\n\nSemantics (matches Triton-Puzzles \"Quantized Matrix Mult\"):\n----------------------------------------------------------\n- Constants: `FPINT = 8` (8 int4 values per int32), `GROUP = 8`, so `K = FPINT * GROUP = 64`.\n- Unpack int4 weights: `w_int4` has shape (M, K) from `weight_packed`.\n- Unpack int4 offsets per row: `o_int4` has shape (M, FPINT) from `offset_packed`, then expanded to (M, K) by repeating each offset across `GROUP` lanes.\n- Expand scale similarly from shape (M, FPINT) to (M, K).\n- Dequantized A: `A = scale * (w_int4 - o_int4)` (float16/float32).\n- Output: `Z = A @ activation` (accumulate in fp32 recommended), return fp16.\n\nAPI Usage Notes\n---------------\n- The evaluator looks for a `quant_dot` function in the module namespace\n- Must use Triton JIT compilation for kernel definition\n- Scale/activation are CUDA tensors; packed tensors are int32 CUDA tensors\n- `K` is fixed to 64 in evaluation\n\nScoring (0-100)\n---------------\nPerformance is measured against baseline implementations:\n\n```\ngeometric_mean_speedup = geometric_mean(answer_times / baseline_times)\nraw_score = min(geometric_mean_speedup, 3.0) # Cap at 3x speedup\nscore = (raw_score - 1.0) / 2.0 * 100 # Map 1x-3x to 0-100\n```\n\n- 0 points = No speedup (1x baseline performance)\n- 50 points = 2x speedup over baseline\n- 100 points = 3x+ speedup over baseline\n\nEvaluation Details\n------------------\n- K is fixed to 64.\n- Tested on multiple (M, N) shapes (see `resources/benchmark.py`).\n- Correctness verified with tolerance: rtol=1e-2, atol=5e-3.\n- Performance measured using median execution time via `triton.testing.do_bench`.\n- Requires CUDA backend and GPU support.\n", "config": "dependencies:\n uv_project: resources\ndatasets: []\ntag: hpc\nruntime:\n docker:\n image: andylizf/triton-tlx:tlx-nv-cu122\n gpu: true\n"}
  {"problem_id": "ragged_attention", "category": "research", "statement": "Ragged Attention Optimization Problem\n======================================\n\nProblem Setting\n---------------\nDesign and optimize high-performance Triton kernels for ragged attention computation on GPU. This problem focuses on implementing efficient kernels that handle variable-length sequences using ragged attention, where each query row can attend to a different number of key/value rows.\n\nThe challenge involves optimizing:\n- **Ragged attention**: Efficiently handling variable-length sequences where each row has different attention lengths\n- **Memory access patterns**: Efficient loading and storing of Q, K, V tensors with ragged masking\n- **Streaming softmax**: Computing softmax in a streaming fashion for numerical stability\n- **Row-wise masking**: Correctly masking attention scores based on row_lens\n- **Mixed precision**: Handling float16 inputs/outputs with float32 accumulation\n- **Block tiling**: Optimal block sizes for GPU execution across different matrix sizes\n- **Performance benchmarking**: Achieving speedup over baseline PyTorch implementations\n\nTarget\n------\n- **Primary**: Maximize geometric mean speedup over baseline (higher is better)\n- **Secondary**: Ensure correctness across diverse matrix sizes and ragged lengths\n- **Tertiary**: Minimize kernel launch overhead and memory usage\n\nAPI Specification\n-----------------\nImplement a `Solution` class that returns a Triton kernel implementation:\n\n```python\nclass Solution:\n def solve(self, spec_path: str = None) -> dict:\n \"\"\"\n Returns a dict with either:\n - {\"code\": \"python_code_string\"}\n - {\"program_path\": \"path/to/kernel.py\"}\n \"\"\"\n # Your implementation\n pass\n```\n\nYour kernel implementation must provide:\n\n```python\nimport torch\nimport triton\nimport triton.language as tl\n\ndef ragged_attn(Q: torch.Tensor, K: torch.Tensor, V: torch.Tensor, row_lens: torch.Tensor) -> 
torch.Tensor:\n \"\"\"\n Ragged attention computation.\n \n Args:\n Q: Query tensor of shape (M, D) - query features (float16)\n K: Key tensor of shape (N, D) - key features (float16)\n V: Value tensor of shape (N, Dv) - value features (float16)\n row_lens: Row lengths tensor of shape (M,) - number of valid K/V rows per Q row (int32 or int64)\n \n Returns:\n Output tensor of shape (M, Dv) - attention output (float16)\n \n Semantics:\n For each query row i (0 <= i < M), compute attention over the first row_lens[i] key/value rows.\n Specifically:\n - scores[i, j] = (Q[i] @ K[j].T) * scale, for j < row_lens[i], else -inf\n - P[i] = softmax(scores[i])\n - O[i] = P[i] @ V[:row_lens[i]]\n \"\"\"\n pass\n```\n\nScoring\n-------\nThe scoring system evaluates your implementation based on geometric mean speedup over GPU baseline:\n\n- **0 points**: 1x GPU baseline (same speed as PyTorch GPU baseline)\n- **100 points**: 3x GPU baseline (3x speedup over PyTorch GPU baseline)\n- **Linear interpolation**: Scores between 0-100 are linearly interpolated based on speedup\n\nThe evaluation uses the following test cases:\n- M (number of query rows): [512, 1024]\n- N (number of key/value rows): 1024\n- D (model dimension): 64\n- Dv (value dimension): 64\n- row_lens: Random integers between [min_ratio*N, N] where min_ratio=0.25\n\nCorrectness is verified using:\n- Relative tolerance: 1e-2\n- Absolute tolerance: 5e-3\n\nAll tests must pass for a non-zero score. 
If any test fails correctness, the score is 0.\n\nExample\n-------\n```python\nimport torch\nimport triton\nimport triton.language as tl\n\n@triton.jit\ndef _ragged_kernel(Q, K, V, O, ROW_LENS, ...):\n # Your kernel implementation\n pass\n\ndef ragged_attn(Q: torch.Tensor, K: torch.Tensor, V: torch.Tensor, row_lens: torch.Tensor) -> torch.Tensor:\n # Your kernel launch logic\n pass\n```\n\nConstraints\n-----------\n- All tensors must be CUDA tensors (float16 for Q, K, V; int32/int64 for row_lens)\n- Output must be float16\n- The implementation must handle variable row lengths correctly\n- Accumulation should use float32 for numerical stability\n- Must use streaming softmax for numerical stability\n\nTips\n----\n1. Use efficient block tiling (BM, BN, BD, BDV) for optimal performance\n2. Implement streaming softmax to handle large attention matrices\n3. Correctly mask attention scores based on row_lens\n4. Load row_lens once per program and broadcast for masking\n5. Use proper masking for boundary conditions\n\n", "config": "dependencies:\n uv_project: resources\ntag: hpc\nruntime:\n environment: \"Triton 3.2.0 with CUDA 12.2 (triton-tlx image)\"\n docker:\n image: andylizf/triton-tlx:tlx-nv-cu122\n gpu: true\n resources:\n accelerators: L4:1\n"}
  {"problem_id": "symbolic_regression/mccormick", "category": "research", "statement": "Symbolic Regression Benchmark - McCormick Dataset\n=================================================\n\nProblem Setting\n---------------\nLearn a closed-form symbolic expression `f(x1, x2)` that predicts the target `y`.\n\nThis dataset is derived from the McCormick function, a classic 2D optimization test function featuring a combination of trigonometric and polynomial terms. The function exhibits a smooth, wavy surface with a global minimum.\n\nInput Format\n------------\n- Your `Solution.solve` receives:\n - `X`: numpy.ndarray of shape `(n, 2)` containing feature values\n - `y`: numpy.ndarray of shape `(n,)` containing target values\n- Dataset columns: `x1, x2, y`\n\nOutput Specification\n--------------------\nImplement a `Solution` class in `solution.py`:\n\n```python\nimport numpy as np\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n \"\"\"\n Args:\n X: Feature matrix of shape (n, 2)\n y: Target values of shape (n,)\n\n Returns:\n dict with keys:\n - \"expression\": str, a Python-evaluable expression using x1, x2\n - \"predictions\": list/array of length n (optional)\n - \"details\": dict with optional \"complexity\" int\n \"\"\"\n # Example: fit a symbolic expression to the data\n expression = \"x1 + x2\" # placeholder\n return {\n \"expression\": expression,\n \"predictions\": None, # will be computed from expression if omitted\n \"details\": {}\n }\n```\n\nExpression Requirements:\n- Must be a valid Python expression string\n- Use variable names: `x1`, `x2`\n- Allowed operators: `+`, `-`, `*`, `/`, `**`\n- Allowed functions: `sin`, `cos`, `exp`, `log`\n- Numeric constants are allowed\n\nDependencies (pinned versions)\n------------------------------\n```\npysr==0.19.0\nnumpy==1.26.4\npandas==2.2.2\nsympy==1.13.3\n```\n\nMinimal Working Examples\n------------------------\n\n**Example 1: Using PySR 
(recommended)**\n```python\nimport numpy as np\nfrom pysr import PySRRegressor\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n model = PySRRegressor(\n niterations=40,\n binary_operators=[\"+\", \"-\", \"*\", \"/\"],\n unary_operators=[\"sin\", \"cos\", \"exp\", \"log\"],\n populations=15,\n population_size=33,\n maxsize=25,\n verbosity=0,\n progress=False,\n random_state=42,\n )\n model.fit(X, y, variable_names=[\"x1\", \"x2\"])\n\n # Get best expression as sympy, convert to string\n best_expr = model.sympy()\n expression = str(best_expr)\n\n # Predictions\n predictions = model.predict(X)\n\n return {\n \"expression\": expression,\n \"predictions\": predictions.tolist(),\n \"details\": {}\n }\n```\n\n**Example 2: Manual expression (simple baseline)**\n```python\nimport numpy as np\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n # Simple linear combination as baseline\n x1, x2 = X[:, 0], X[:, 1]\n\n # Fit coefficients via least squares\n A = np.column_stack([x1, x2, np.ones_like(x1)])\n coeffs, _, _, _ = np.linalg.lstsq(A, y, rcond=None)\n a, b, c = coeffs\n\n expression = f\"{a:.6f}*x1 + {b:.6f}*x2 + {c:.6f}\"\n predictions = a * x1 + b * x2 + c\n\n return {\n \"expression\": expression,\n \"predictions\": predictions.tolist(),\n \"details\": {}\n }\n```\n\n**Example 3: Using sympy for expression manipulation**\n```python\nimport numpy as np\nimport sympy as sp\nfrom pysr import PySRRegressor\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n model = PySRRegressor(\n niterations=30,\n binary_operators=[\"+\", \"-\", \"*\", \"/\"],\n unary_operators=[\"sin\", \"cos\"],\n verbosity=0,\n progress=False,\n )\n model.fit(X, y, variable_names=[\"x1\", \"x2\"])\n\n # Get sympy expression and simplify\n sympy_expr = model.sympy()\n simplified = 
sp.simplify(sympy_expr)\n\n # Convert to evaluable string\n expression = str(simplified)\n\n return {\n \"expression\": expression,\n \"predictions\": None, # evaluator will compute from expression\n \"details\": {}\n }\n```\n\nPySR API Notes (v0.19.0)\n------------------------\n- `model.fit(X, y, variable_names=[\"x1\", \"x2\"])` - use variable_names to match expected output\n- `model.sympy()` - returns best expression as sympy object\n- `model.predict(X)` - returns predictions array\n- `model.equations_` - DataFrame of all discovered equations\n- Common parameters:\n - `niterations`: number of evolution iterations (more = better but slower)\n - `populations`: number of parallel populations\n - `maxsize`: maximum expression complexity\n - `verbosity=0, progress=False`: suppress output\n\nExpression Format Requirements\n------------------------------\n- Must be a valid Python expression string\n- Use variable names: `x1`, `x2`\n- Allowed operators: `+`, `-`, `*`, `/`, `**`\n- Allowed functions: `sin`, `cos`, `exp`, `log` (NO `np.` prefix)\n- Numeric constants are allowed\n- The evaluator uses `sympy.sympify()` to parse your expression\n\nScoring\n-------\n```\nMSE = (1/n) Σ (y_i - ŷ_i)²\nScore = 100 × clamp((m_base - MSE) / (m_base - m_ref), 0, 1) × 0.99^max(C - C_ref, 0)\n```\n\n- `m_base`: linear regression baseline MSE\n- `m_ref`, `C_ref`: reference solution MSE and complexity\n- `C = 2 × (#binary ops) + (#unary ops)`\n- Lower MSE and lower complexity yield higher scores\n\nEnvironment\n-----------\nRun `set_up_env.sh` to install dependencies.\n", "config": "{\n \"dependencies\": {\n \"uv_project\": \"resources\"\n },\n \"datasets\": [],\n \"tag\": \"pl\"\n}\n"}
  {"problem_id": "symbolic_regression/mixed_polyexp_4d", "category": "research", "statement": "Symbolic Regression Benchmark - Mixed PolyExp 4D Dataset\n=========================================================\n\nProblem Setting\n---------------\nLearn a closed-form symbolic expression `f(x1, x2, x3, x4)` that predicts the target `y`.\n\nThis is a higher-dimensional dataset (4 input features) combining polynomial interactions with exponential decay. The function involves cross-terms between variables and Gaussian-like damping, making it more challenging than the 2D variants.\n\nInput Format\n------------\n- Your `Solution.solve` receives:\n - `X`: numpy.ndarray of shape `(n, 4)` containing feature values\n - `y`: numpy.ndarray of shape `(n,)` containing target values\n- Dataset columns: `x1, x2, x3, x4, y`\n\nOutput Specification\n--------------------\nImplement a `Solution` class in `solution.py`:\n\n```python\nimport numpy as np\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n \"\"\"\n Args:\n X: Feature matrix of shape (n, 4)\n y: Target values of shape (n,)\n\n Returns:\n dict with keys:\n - \"expression\": str, a Python-evaluable expression using x1, x2, x3, x4\n - \"predictions\": list/array of length n (optional)\n - \"details\": dict with optional \"complexity\" int\n \"\"\"\n # Example: fit a symbolic expression to the data\n expression = \"x1 + x2 + x3 + x4\" # placeholder\n return {\n \"expression\": expression,\n \"predictions\": None, # will be computed from expression if omitted\n \"details\": {}\n }\n```\n\nExpression Requirements:\n- Must be a valid Python expression string\n- Use variable names: `x1`, `x2`, `x3`, `x4`\n- Allowed operators: `+`, `-`, `*`, `/`, `**`\n- Allowed functions: `sin`, `cos`, `exp`, `log`\n- Numeric constants are allowed\n\nDependencies (pinned 
versions)\n------------------------------\n```\npysr==0.19.0\nnumpy==1.26.4\npandas==2.2.2\nsympy==1.13.3\n```\n\nMinimal Working Examples\n------------------------\n\n**Example 1: Using PySR (recommended)**\n```python\nimport numpy as np\nfrom pysr import PySRRegressor\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n model = PySRRegressor(\n niterations=50, # more iterations for 4D\n binary_operators=[\"+\", \"-\", \"*\", \"/\"],\n unary_operators=[\"sin\", \"cos\", \"exp\", \"log\"],\n populations=20,\n population_size=40,\n maxsize=30, # larger for 4D complexity\n verbosity=0,\n progress=False,\n random_state=42,\n )\n model.fit(X, y, variable_names=[\"x1\", \"x2\", \"x3\", \"x4\"])\n\n # Get best expression as sympy, convert to string\n best_expr = model.sympy()\n expression = str(best_expr)\n\n # Predictions\n predictions = model.predict(X)\n\n return {\n \"expression\": expression,\n \"predictions\": predictions.tolist(),\n \"details\": {}\n }\n```\n\n**Example 2: Manual expression (simple baseline)**\n```python\nimport numpy as np\n\nclass Solution:\n def __init__(self, **kwargs):\n pass\n\n def solve(self, X: np.ndarray, y: np.ndarray) -> dict:\n # Simple linear combination as baseline\n x1, x2, x3, x4 = X[:, 0], X[:, 1], X[:, 2], X[:, 3]\n\n # Fit coefficients via least squares\n A = np.column_stack([x1, x2, x3, x4, np.ones_like(x1)])\n coeffs, _, _, _ = np.linalg.lstsq(A, y, rcond=None)\n a, b, c, d, e = coeffs\n\n expression = f\"{a:.6f}*x1 + {b:.6f}*x2 + {c:.6f}*x3 + {d:.6f}*x4 + {e:.6f}\"\n predictions = a * x1 + b * x2 + c * x3 + d * x4 + e\n\n return {\n \"expression\": expression,\n \"predictions\": predictions.tolist(),\n \"details\": {}\n }\n```\n\nPySR API Notes (v0.19.0)\n------------------------\n- `model.fit(X, y, variable_names=[\"x1\", \"x2\", \"x3\", \"x4\"])` - use variable_names to match expected output\n- `model.sympy()` - returns best expression as sympy object\n- 
`model.predict(X)` - returns predictions array\n- `model.equations_` - DataFrame of all discovered equations\n- Common parameters:\n - `niterations`: number of evolution iterations (more = better but slower)\n - `populations`: number of parallel populations\n - `maxsize`: maximum expression complexity\n - `verbosity=0, progress=False`: suppress output\n\nExpression Format Requirements\n------------------------------\n- Must be a valid Python expression string\n- Use variable names: `x1`, `x2`, `x3`, `x4`\n- Allowed operators: `+`, `-`, `*`, `/`, `**`\n- Allowed functions: `sin`, `cos`, `exp`, `log` (NO `np.` prefix)\n- Numeric constants are allowed\n- The evaluator uses `sympy.sympify()` to parse your expression\n\nScoring\n-------\n```\nMSE = (1/n) Σ (y_i - ŷ_i)²\nScore = 100 × clamp((m_base - MSE) / (m_base - m_ref), 0, 1) × 0.99^max(C - C_ref, 0)\n```\n\n- `m_base`: linear regression baseline MSE\n- `m_ref`, `C_ref`: reference solution MSE and complexity\n- `C = 2 × (#binary ops) + (#unary ops)`\n- Lower MSE and lower complexity yield higher scores\n\nEnvironment\n-----------\nRun `set_up_env.sh` to install dependencies.\n", "config": "{\n \"dependencies\": {\n \"uv_project\": \"resources\"\n },\n \"datasets\": [],\n \"tag\": \"pl\"\n}\n"}