Research Problems
Real-world systems challenges requiring domain expertise in GPU computing, distributed systems, ML pipelines, databases, and security.
Basic Usage
```bash
# List all problems
frontier-eval --list

# Evaluate a solution (requires Docker)
frontier-eval flash_attn <your_solution.py>

# Evaluate multiple problems
frontier-eval --problems flash_attn,cross_entropy <your_solution.py>
```
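If you want to drive the CLI from a script, here is a minimal sketch using Python's `subprocess` (it assumes `frontier-eval` is on your PATH; the zero-exit-code convention is an assumption, not documented behavior):

```python
import subprocess

# Hypothetical wrapper around the CLI; `frontier-eval` must be on PATH.
def evaluate(problem_id: str, solution_path: str) -> bool:
    proc = subprocess.run(
        ["frontier-eval", problem_id, solution_path],
        capture_output=True, text=True,
    )
    print(proc.stdout)
    # Assumption: a zero exit code means the evaluation ran to completion.
    return proc.returncode == 0

evaluate("flash_attn", "solutions/my_flash_attn_v1.py")
```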
Cloud Evaluation with SkyPilot
Some problems require GPUs or specific hardware. Use SkyPilot to run evaluations on cloud VMs.
Setup:
```bash
sky check
```
See the SkyPilot docs for cloud credential setup.
Usage:
```bash
frontier-eval flash_attn <your_solution.py> --skypilot
```
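Before launching a cloud run from a script, it can help to fail fast if SkyPilot is missing; a small pre-flight sketch (the `sky` binary name comes from the `sky check` step above):

```python
import shutil
import subprocess

# Fail fast if the SkyPilot CLI is missing before paying for a cloud launch.
if shutil.which("sky") is None:
    raise SystemExit("SkyPilot CLI not found; install it, then run `sky check`")

# `sky check` reports which clouds have working credentials.
subprocess.run(["sky", "check"], check=True)
```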
Batch Evaluation
To evaluate multiple solutions at once, create a pairs file that maps each solution to a problem:
```
# pairs.txt format: solution_path:problem_id
solutions/my_flash_attn_v1.py:flash_attn
solutions/my_flash_attn_v2.py:flash_attn
solutions/my_cross_entropy.py:cross_entropy
```
Then run:
```bash
# Evaluate all pairs
frontier-eval batch --pairs-file pairs.txt

# Resume interrupted evaluation
frontier-eval batch --pairs-file pairs.txt --resume

# Check status
frontier-eval batch --status --results-dir results/batch
```
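Writing `pairs.txt` by hand gets tedious with many solutions. A sketch that generates it from a directory layout (the `solution_path:problem_id` format is from above; the directory-name-to-problem mapping here is a made-up convention):

```python
from pathlib import Path

# Hypothetical convention: solutions are stored as solutions/<problem_id>/*.py
pairs = [
    f"{path}:{path.parent.name}"
    for path in sorted(Path("solutions").glob("*/*.py"))
]
Path("pairs.txt").write_text("\n".join(pairs) + "\n")
```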
Python API
```python
from frontier_cs import FrontierCSEvaluator

evaluator = FrontierCSEvaluator()

# Single problem
result = evaluator.evaluate("research", problem_id="flash_attn", code=my_code)
print(f"Score: {result.score}")

# With SkyPilot
result = evaluator.evaluate(
    "research", problem_id="flash_attn", code=my_code, backend="skypilot"
)

# Batch evaluation
results = evaluator.evaluate_batch(
    "research",
    problem_ids=["flash_attn", "cross_entropy"],
    code=my_code,
)
```
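A usage sketch for the batch results, assuming each entry exposes the same `score` attribute as the single-problem result and that results preserve the order of `problem_ids`:

```python
# Assumption: results come back in the same order as problem_ids.
for problem_id, result in zip(["flash_attn", "cross_entropy"], results):
    print(f"{problem_id}: {result.score}")
```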
Problem Structure
Each problem is in its own directory under `research/problems/`:
```
research/problems/
├── flash_attn/              # Single problem
│   ├── config.yaml
│   ├── readme
│   ├── evaluator.py
│   └── resources/
├── gemm_optimization/       # Problem with variants
│   ├── squares/
│   ├── rectangles/
│   └── ...
└── ...
```
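Given this layout, problems can be discovered by walking the tree; a sketch that treats any directory containing a `config.yaml` as one evaluable problem (the variant handling is an assumption based on the layout above):

```python
from pathlib import Path

# A directory with a config.yaml is treated as one evaluable problem;
# variant problems like gemm_optimization/squares nest one level deeper.
root = Path("research/problems")
problem_ids = [
    str(cfg.parent.relative_to(root))
    for cfg in sorted(root.rglob("config.yaml"))
]
print(problem_ids)
```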
File Reference
| File | Purpose |
|---|---|
| `config.yaml` | Runtime config (Docker image, GPU requirement, timeout) |
| `readme` | Problem description, API spec, scoring formula |
| `set_up_env.sh` | Environment setup (install deps, check CUDA) |
| `download_datasets.sh` | Download datasets (for local pre-download) |
| `evaluate.sh` | Evaluation entry point |
| `run_evaluator.sh` | Invokes `evaluator.py` |
| `evaluator.py` | Core evaluation logic |
| `resources/` | Baseline code, benchmark, test data |
config.yaml Example
```yaml
dependencies:
  uv_project: resources   # Optional: uv project in resources/
  datasets: []            # Optional: dataset URLs
tag: hpc                  # Category: os, hpc, ai, db, pl, security
runtime:
  docker:
    image: andylizf/triton-tlx:tlx-nv-cu122
  gpu: true
  timeout_seconds: 1800
```
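A sketch that loads such a config and pulls out the runtime fields shown above (requires PyYAML; the field layout follows this example, and the fallback defaults are assumptions):

```python
import yaml  # PyYAML

with open("research/problems/flash_attn/config.yaml") as f:
    cfg = yaml.safe_load(f)

runtime = cfg.get("runtime", {})
image = runtime.get("docker", {}).get("image")
needs_gpu = runtime.get("gpu", False)           # assumed default: no GPU
timeout = runtime.get("timeout_seconds", 1800)  # assumed default
print(f"image={image} gpu={needs_gpu} timeout={timeout}s")
```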
Evaluation Flow
Inside the Docker container, the execution order is:
1. `set_up_env.sh` → Initialize environment
2. Copy `solution.py` → `/work/execution_env/solution_env/`
3. `evaluate.sh` → Check files, call `run_evaluator.sh`
4. `run_evaluator.sh` → `python3 evaluator.py`
5. `evaluator.py` → Load `Solution.solve()`, run benchmark, print score
The final score is extracted from the last numeric line of stdout.
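A sketch of that extraction rule: scan stdout bottom-up and take the first line that parses as a number (this mirrors the stated behavior, not the harness's actual code):

```python
def extract_score(stdout: str) -> float:
    # The score is the last line of stdout that parses as a number.
    for line in reversed(stdout.strip().splitlines()):
        try:
            return float(line.strip())
        except ValueError:
            continue
    raise ValueError("no numeric line found in stdout")

assert extract_score("warming up...\nbenchmarking...\n0.87\n") == 0.87
```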
Solution Interface
Submit a `solution.py` implementing the `Solution` class. The interface varies by problem type:
Triton Kernel Problems (flash_attn, cross_entropy, gemm_optimization...)
```python
class Solution:
    def solve(self, spec_path: str = None) -> dict:
        """
        Returns either:
        - {"code": "python_code_string"}
        - {"program_path": "path/to/kernel.py"}
        """
        kernel_code = '''
import triton
import triton.language as tl

@triton.jit
def my_kernel(...):
    ...

def entry_function(...):
    ...
'''
        return {"code": kernel_code}
```
ML Training Problems (imagenet_pareto...)
```python
import torch

class Solution:
    def solve(self, train_loader, val_loader, metadata: dict) -> torch.nn.Module:
        """
        Train and return a model.

        metadata contains: num_classes, input_dim, param_limit,
        baseline_accuracy, device, etc.
        """
        model = MyModel(...)
        # ... training loop ...
        return model
```
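Since `metadata` includes `param_limit`, it is worth checking that the trained model fits under it before submitting; a sketch (the actual enforcement is the evaluator's job):

```python
def check_param_limit(model: torch.nn.Module, metadata: dict) -> None:
    # Count all parameters and compare against the problem's budget.
    n_params = sum(p.numel() for p in model.parameters())
    limit = metadata["param_limit"]
    if n_params > limit:
        raise ValueError(f"model has {n_params:,} params, limit is {limit:,}")
```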
Other Problems
Check each problem's `readme` for the specific `solve()` signature and return type.