---
dataset_info:
  features:
  - name: instance_id
    dtype: string
  - name: repo
    dtype: string
  - name: base_commit
    dtype: string
  - name: problem_statement
    dtype: string
  - name: test_patch
    dtype: string
  - name: human_patch
    dtype: string
  - name: pr_number
    dtype: int64
  - name: pr_url
    dtype: string
  - name: pr_merged_at
    dtype: string
  - name: issue_number
    dtype: int64
  - name: issue_url
    dtype: string
  - name: human_changed_lines
    dtype: int64
  - name: FAIL_TO_PASS
    dtype: string
  - name: PASS_TO_PASS
    dtype: string
  splits:
  - name: test
    num_examples: 119
license: mit
task_categories:
- other
language:
- en
tags:
- code-generation
- software-engineering
- complexity
- swe-bench
- contamination-free
- post-training-cutoff
pretty_name: SWE-bench Complex
size_categories:
- n<1K
---

# SWE-bench Complex

**A contamination-free, complexity-focused evaluation set for AI coding agents.**

SWE-bench Complex is a curated dataset of **119 real-world GitHub issues** from major Python open-source projects, designed specifically for studying code complexity in AI-generated patches. All tasks were merged between **January and March 2026**, after the training cutoffs of current frontier models (Claude Opus 4.6, OpenAI GPT-5.4, Gemini 3.1 Pro).

## Why SWE-bench Complex?

Existing benchmarks like [SWE-bench Verified](https://huggingface.co/datasets/princeton-nlp/SWE-bench_Verified) suffer from two problems for complexity research:

### 1. Data Contamination

Over 94% of SWE-bench issues predate current LLM training cutoffs. Aleithan et al. found that **32.67% of successful patches involve "cheating"** through solution leakage, and that resolution rates dropped from 12.47% to 3.97% when leaked instances were filtered out ([SWE-bench+, 2024](https://arxiv.org/abs/2410.06992)).

All SWE-bench Complex instances postdate the training cutoffs of:

| Model | Provider | Training Cutoff | Gap |
|---|---|---|---|
| Claude Opus 4.6 | Anthropic | Oct 2025 | 3+ months |
| GPT-5.3-Codex | OpenAI | Sep 2025 | 4+ months |
| GPT-5.4 | OpenAI | Nov 2025 | 2+ months |
| Gemini 3.1 Pro | Google | Oct 2025 | 3+ months |

### 2. Trivial Patches

SWE-bench Verified has a median patch size of just **7 changed lines**: 44.6% of tasks require only 1–5 lines. These trivial patches yield near-zero complexity deltas, reducing statistical power for quality studies.

SWE-bench Complex targets **substantive patches** with a median of **48 changed lines**, 6.9× larger than SWE-bench Verified.
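
The "changed lines" figures here follow the usual unified-diff convention of counting added plus removed lines; whether the dataset's exact counting rules match this sketch is an assumption:

```python
def changed_lines(patch: str) -> int:
    """Count added and removed lines in a unified diff.

    File-header lines (`+++ `/`--- `) are excluded; hunk headers and
    context lines never start with a bare `+` or `-`.
    """
    count = 0
    for line in patch.splitlines():
        if line.startswith(("+++ ", "--- ")):
            continue
        if line.startswith(("+", "-")):
            count += 1
    return count

example_patch = """\
--- a/app.py
+++ b/app.py
@@ -1,2 +1,3 @@
 def f(x):
-    return x
+    if x:
+        return x
"""
print(changed_lines(example_patch))  # 3
```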

## Dataset Comparison

| Characteristic | SWE-bench Verified | SWE-bench Complex |
|---|---|---|
| Tasks | 500 | 119 |
| Repositories | 12 | 8 |
| Median changed lines | 7 | **48** |
| Mean changed lines | 14.3 | **74.9** |
| Mean Python files changed | 1.2 | **3.9** |
| Human ΔCC (mean) | +1.14 | **+4.06** |
| Human ΔLLOC (mean) | +2.77 | **+19.08** |
| Human ΔMI (mean) | −0.230 | **−0.417** |
| Human ΔCogC (mean) | N/A | **+3.63** |
| Post-training-cutoff | <6% | **100%** |

Complexity metrics measured using [Wily v2](https://github.com/tonybaloney/wily):

- **ΔCC**: Cyclomatic Complexity change (McCabe, 1976)
- **ΔLLOC**: Logical Lines of Code change
- **ΔMI**: Maintainability Index change (Oman & Hagemeister, 1992)
- **ΔCogC**: Cognitive Complexity change (Campbell, 2018)
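
Wily (via radon) performs the actual measurement; as a rough stdlib-only illustration of what a ΔCC value captures, one can count branch points before and after a patch (the node list below is an approximation, not radon's exact rule set):

```python
import ast

# Branch-point node types: an approximation of McCabe counting
BRANCHES = (ast.If, ast.For, ast.While, ast.ExceptHandler,
            ast.BoolOp, ast.IfExp, ast.comprehension)

def cyclomatic_complexity(source: str) -> int:
    """McCabe-style estimate: 1 + number of branch points in the module."""
    return 1 + sum(isinstance(n, BRANCHES) for n in ast.walk(ast.parse(source)))

before = "def clamp(x):\n    return x\n"
after = "def clamp(x):\n    if x < 0:\n        return 0\n    return x\n"

# The patch adds one `if` branch, so the complexity delta is +1
delta_cc = cyclomatic_complexity(after) - cyclomatic_complexity(before)
print(delta_cc)  # 1
```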

## Repository Distribution

| Repository | Instances |
|---|---|
| django/django | 38 |
| astropy/astropy | 22 |
| pydata/xarray | 17 |
| scikit-learn/scikit-learn | 14 |
| pylint-dev/pylint | 10 |
| matplotlib/matplotlib | 9 |
| sympy/sympy | 8 |
| pallets/flask | 1 |

## Selection Criteria

Instances were collected from merged pull requests in the SWE-bench ecosystem repositories with the following filters:

1. **Date range**: Merged January 1 – March 10, 2026 (post-training-cutoff)
2. **Issue linkage**: PR explicitly references a GitHub issue via "fixes #N" or equivalent
3. **Test coverage**: PR includes both implementation and test changes to Python files
4. **Minimum complexity**: Implementation patch modifies ≥4 changed lines
5. **Python files**: Only `.py` file changes retained
6. **Manual review**: Each candidate reviewed for solvability; documentation-only changes, large-scale refactors (>300 lines or >10 files), and tasks requiring external domain knowledge were excluded

From 1,043 scraped PRs → 712 with issue references → 224 after automated filters → **119 after manual review**.
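
Filters 2–5 are mechanical and can be applied automatically; a minimal sketch, assuming a hypothetical scraped-PR record shape (the field names here are illustrative, not the scraper's actual schema):

```python
def passes_automated_filters(pr: dict) -> bool:
    """Apply selection criteria 2-5 to one scraped PR record (hypothetical shape)."""
    has_issue = pr["issue_number"] is not None                     # 2. linked issue
    has_tests = bool(pr["impl_files"]) and bool(pr["test_files"])  # 3. impl + tests
    big_enough = pr["impl_changed_lines"] >= 4                     # 4. minimum size
    py_only = all(f.endswith(".py")
                  for f in pr["impl_files"] + pr["test_files"])    # 5. .py only
    return has_issue and has_tests and big_enough and py_only

candidate = {
    "issue_number": 123,
    "impl_files": ["pkg/core.py"],
    "test_files": ["tests/test_core.py"],
    "impl_changed_lines": 12,
}
print(passes_automated_filters(candidate))  # True
```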

## Schema

Each instance contains:

| Field | Type | Description |
|---|---|---|
| `instance_id` | string | Unique identifier (`{owner}__{repo}-{pr_number}`) |
| `repo` | string | GitHub repository (`owner/repo`) |
| `base_commit` | string | Parent commit SHA |
| `problem_statement` | string | GitHub issue text (title + body) |
| `test_patch` | string | Unified diff of test-file changes |
| `human_patch` | string | Unified diff of implementation-file changes |
| `pr_number` | int | Pull request number |
| `pr_url` | string | Pull request URL |
| `pr_merged_at` | string | Merge timestamp (ISO 8601) |
| `issue_number` | int | Referenced issue number |
| `issue_url` | string | Issue URL |
| `human_changed_lines` | int | Total changed lines in the human patch |
| `FAIL_TO_PASS` | string | JSON array of test IDs that must go FAIL→PASS |
| `PASS_TO_PASS` | string | JSON array of test IDs that must remain PASS |
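
Note that `FAIL_TO_PASS` and `PASS_TO_PASS` are stored as JSON-encoded strings rather than native lists, so decode them before use (the row below is invented for illustration):

```python
import json

# A dict shaped like one dataset instance; the values are made up
row = {
    "instance_id": "django__django-17500",
    "FAIL_TO_PASS": '["tests.ops::test_rename", "tests.ops::test_alter"]',
    "PASS_TO_PASS": '["tests.ops::test_noop"]',
}

fail_to_pass = json.loads(row["FAIL_TO_PASS"])
pass_to_pass = json.loads(row["PASS_TO_PASS"])
print(len(fail_to_pass), len(pass_to_pass))  # 2 1
```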

## SWE-bench Compatibility

SWE-bench Complex uses the same schema as SWE-bench Verified and can be evaluated using the standard [SWE-bench harness](https://github.com/princeton-nlp/SWE-bench):

```bash
python -m swebench.harness.run_evaluation \
    --dataset_name anthonypjshaw/SWE-bench_Complex \
    --split test \
    --predictions_path predictions.jsonl \
    --run_id my_run \
    --max_workers 4
```

## Usage

```python
from datasets import load_dataset

dataset = load_dataset("anthonypjshaw/SWE-bench_Complex", split="test")
print(f"Tasks: {len(dataset)}")
print(f"Repos: {len(set(dataset['repo']))}")
```

## License

MIT License. The dataset contains references to publicly available open-source code under their respective licenses.