---
dataset_info:
- config_name: vllm
  features:
  - name: commit_hash
    dtype: string
  - name: pr_url
    dtype: string
  - name: pr_date
    dtype: string
  - name: timeline_extracted_at
    dtype: string
  - name: analysis_extracted_at
    dtype: string
  - name: models
    list: string
  - name: perf_command
    dtype: string
  - name: has_serving
    dtype: bool
  - name: has_latency
    dtype: bool
  - name: has_throughput
    dtype: bool
  - name: uses_lm_eval
    dtype: bool
  - name: commit_subject
    dtype: string
  - name: commit_message
    dtype: string
  - name: commit_date
    dtype: string
  - name: files_changed
    list: string
  - name: stats
    struct:
    - name: commit_year
      dtype: int64
    - name: num_edited_lines
      dtype: int64
    - name: num_files
      dtype: int64
    - name: num_hunks
      dtype: int64
    - name: num_non_test_edited_lines
      dtype: int64
    - name: num_non_test_files
      dtype: int64
    - name: num_test_files
      dtype: int64
    - name: only_non_test_files
      dtype: int64
    - name: only_test_files
      dtype: int64
  - name: diff_text
    dtype: string
  - name: apis
    list: string
  - name: affected_paths
    list: string
  - name: repo
    dtype: string
  - name: hardware
    dtype: string
  - name: lm_eval_command
    dtype: string
  splits:
  - name: train
    num_bytes: 545364
    num_examples: 39
  download_size: 192570
  dataset_size: 545364
- config_name: sglang
  features:
  - name: commit_hash
    dtype: string
  - name: pr_url
    dtype: string
  - name: pr_date
    dtype: string
  - name: timeline_extracted_at
    dtype: string
  - name: analysis_extracted_at
    dtype: string
  - name: models
    list: string
  - name: perf_command
    dtype: string
  - name: has_serving
    dtype: bool
  - name: has_latency
    dtype: bool
  - name: has_throughput
    dtype: bool
  - name: uses_lm_eval
    dtype: bool
  - name: commit_subject
    dtype: string
  - name: commit_message
    dtype: string
  - name: commit_date
    dtype: string
  - name: files_changed
    list: string
  - name: stats
    struct:
    - name: commit_year
      dtype: int64
    - name: num_edited_lines
      dtype: int64
    - name: num_files
      dtype: int64
    - name: num_hunks
      dtype: int64
    - name: num_non_test_edited_lines
      dtype: int64
    - name: num_non_test_files
      dtype: int64
    - name: num_test_files
      dtype: int64
    - name: only_non_test_files
      dtype: int64
    - name: only_test_files
      dtype: int64
  - name: diff_text
    dtype: string
  - name: apis
    list: string
  - name: affected_paths
    list: string
  - name: repo
    dtype: string
  - name: hardware
    dtype: string
  - name: lm_eval_command
    dtype: string
  splits:
  - name: train
    num_bytes: 91137
    num_examples: 15
  download_size: 52410
  dataset_size: 91137
configs:
- config_name: vllm
  data_files:
  - split: train
    path: vllm/train-*
- config_name: sglang
  data_files:
  - split: train
    path: sglang/train-*
license: apache-2.0
task_categories:
- text-generation
language:
- en
size_categories:
- n<1K
tags:
- performance-optimization
- software-engineering
- benchmark
- ai-agents
- code
---
# ISO-Bench Dataset
A curated dataset of real-world software performance optimization commits from vLLM and SGLang, designed for evaluating AI agents on code optimization tasks.
## Dataset Summary

| Config | Commits | Repository |
|---|---|---|
| `vllm` | 39 | vLLM (LLM inference engine) |
| `sglang` | 15 | SGLang (LLM serving framework) |
Each entry represents a human-authored performance optimization commit with:

- The original commit diff and message
- Performance benchmark commands (`perf_command`)
- Model configurations for benchmarking
- Hardware requirements
- API surface analysis
## Usage

```python
from datasets import load_dataset

# Load vLLM optimization commits
vllm = load_dataset('Lossfunk/ISO-Bench', 'vllm', split='train')

# Load SGLang optimization commits
sglang = load_dataset('Lossfunk/ISO-Bench', 'sglang', split='train')

# Example: inspect a commit
print(vllm[0]['commit_subject'])
print(vllm[0]['perf_command'])
print(vllm[0]['models'])
```
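
The boolean capability flags can be used to slice the dataset. A minimal sketch using the documented fields and the standard `datasets.Dataset.filter` API (the specific filter criteria here are just an illustration):

```python
from datasets import load_dataset

vllm = load_dataset('Lossfunk/ISO-Bench', 'vllm', split='train')

# Keep only commits that affect serving performance and also ship an
# lm-eval correctness check alongside the benchmark command.
serving_with_eval = vllm.filter(
    lambda row: row['has_serving'] and row['uses_lm_eval']
)

for row in serving_with_eval:
    print(row['commit_hash'], row['models'], row['lm_eval_command'])
```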
## Schema

| Field | Type | Description |
|---|---|---|
| `commit_hash` | string | Short hash of the optimization commit |
| `pr_url` | string | URL to the pull request |
| `commit_subject` | string | Commit message subject line |
| `commit_message` | string | Full commit message |
| `diff_text` | string | Unified diff of the optimization |
| `models` | list[string] | HuggingFace model IDs used for benchmarking |
| `perf_command` | string | Command to run the performance benchmark |
| `has_serving` | bool | Whether the commit affects serving performance |
| `has_latency` | bool | Whether the commit affects latency |
| `has_throughput` | bool | Whether the commit affects throughput |
| `uses_lm_eval` | bool | Whether correctness is validated via lm-eval |
| `lm_eval_command` | string | lm-eval command for correctness validation |
| `files_changed` | list[string] | Files modified in the commit |
| `apis` | list[string] | Affected API endpoints/functions |
| `affected_paths` | list[string] | Code paths affected by the change |
| `hardware` | string | Required hardware (e.g., GPU type) |
| `stats` | struct | Commit statistics (lines changed, files, hunks) |
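
The nested `stats` struct is returned as a plain dict in Python. A small sketch that ranks commits by non-test diff size, using the field names listed above (the ranking itself is just an example of how the struct can be consumed):

```python
from datasets import load_dataset

sglang = load_dataset('Lossfunk/ISO-Bench', 'sglang', split='train')

# Iterating a Dataset yields one dict per row; sort by a stats sub-field.
smallest_first = sorted(
    sglang,
    key=lambda row: row['stats']['num_non_test_edited_lines'],
)

for row in smallest_first[:5]:
    s = row['stats']
    print(f"{row['commit_hash']}: {s['num_non_test_files']} files, "
          f"{s['num_non_test_edited_lines']} non-test lines changed")
```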
## How It Works
Each dataset entry captures a real performance optimization made by an expert developer. AI agents are given the codebase at the parent commit (before optimization) and must independently discover and implement a performance improvement. Their patches are then benchmarked against the human expert's solution using wall-clock timing comparisons.
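
A simplified sketch of how such a comparison could be scripted is shown below. The local repository path, environment setup, and single-run timing are illustrative assumptions, not the benchmark's actual harness; only the commit hash and `perf_command` come from the dataset.

```python
import subprocess
import time

from datasets import load_dataset

vllm = load_dataset('Lossfunk/ISO-Bench', 'vllm', split='train')
entry = vllm[0]

# Assumption: a local clone of the target repository already exists here
# and the benchmark environment is installed; neither is provided by the
# dataset itself.
REPO_DIR = '/path/to/vllm'

def run_benchmark(perf_command: str) -> float:
    """Run the dataset's benchmark command and return wall-clock seconds."""
    start = time.perf_counter()
    subprocess.run(perf_command, shell=True, cwd=REPO_DIR, check=True)
    return time.perf_counter() - start

# 1. Baseline: the codebase at the parent of the optimization commit.
subprocess.run(['git', 'checkout', f"{entry['commit_hash']}^"],
               cwd=REPO_DIR, check=True)
baseline_s = run_benchmark(entry['perf_command'])

# 2. Human reference: the expert's optimization commit itself.
subprocess.run(['git', 'checkout', entry['commit_hash']],
               cwd=REPO_DIR, check=True)
human_s = run_benchmark(entry['perf_command'])

# An agent's patch would be applied on top of the parent commit, timed the
# same way, and compared against the human reference.
print(f'baseline: {baseline_s:.1f}s  human: {human_s:.1f}s')
```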
## License
Apache 2.0