configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: dev
    path: data/dev-*
  - split: test
    path: data/test-*
  - split: validation
    path: data/validation-*
dataset_info:
  features:
  - name: instance_id
    dtype: string
  - name: text
    dtype: string
  - name: repo
    dtype: string
  - name: base_commit
    dtype: string
  - name: problem_statement
    dtype: string
  - name: hints_text
    dtype: string
  - name: created_at
    dtype: string
  - name: patch
    dtype: string
  - name: test_patch
    dtype: string
  - name: version
    dtype: string
  - name: FAIL_TO_PASS
    dtype: string
  - name: PASS_TO_PASS
    dtype: string
  - name: environment_setup_commit
    dtype: string
  splits:
  - name: train
    num_bytes: 2606110505
    num_examples: 18817
  - name: dev
    num_bytes: 29104632
    num_examples: 225
  - name: test
    num_bytes: 291720091
    num_examples: 2294
  - name: validation
    num_bytes: 27170684
    num_examples: 191
  download_size: 1292447999
  dataset_size: 2954105912
Dataset Card for "SWE-bench_bm25_27K"
Dataset Summary
SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
The dataset was released as part of SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
This dataset, SWE-bench_bm25_27K, formats each instance using Pyserini's BM25 retrieval as described in the paper. The retrieved code context is limited to 27,000 cl100k_base tokens, counted with the tiktoken tokenization package used for OpenAI models.
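As an illustration of how that limit is measured, the sketch below counts cl100k_base tokens with tiktoken. It only shows the counting step; the exact truncation logic used to build the dataset is not reproduced here.

```python
import tiktoken

# cl100k_base is the tokenizer used for OpenAI models such as GPT-4.
encoding = tiktoken.get_encoding("cl100k_base")

def count_tokens(text: str) -> int:
    """Count cl100k_base tokens, the unit used for the 27,000-token context limit."""
    # disallowed_special=() treats special-token strings in code as plain text.
    return len(encoding.encode(text, disallowed_special=()))

print(count_tokens("def hello():\n    return 'world'\n"))
```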
The text column can be used directly with LMs to generate patch files.
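For example, the dataset can be loaded with the datasets library and the text column passed to a model as the prompt. The hub path below (princeton-nlp/SWE-bench_bm25_27K) is an assumption based on the dataset name; substitute the actual repository id if it differs.

```python
from datasets import load_dataset

# Hub path assumed from the dataset name; adjust to the actual repository id.
dataset = load_dataset("princeton-nlp/SWE-bench_bm25_27K", split="test")

example = dataset[0]
prompt = example["text"]       # instructions + BM25-retrieved code context
print(example["instance_id"])
print(prompt[:500])            # this prompt is passed directly to the LM
```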
Models are instructed to generate a patch-formatted file using the following template:
<patch>
diff
--- a/path/to/file.py
+++ b/path/to/file.py
@@ -1,3 +1,3 @@
 This is a test file.
-It contains several lines.
+It has been modified.
 This is the third line.
</patch>
This format can be used directly with the SWE-bench inference scripts. Please refer to these scripts for more details on inference.
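As a rough illustration of how a completion in this format might be post-processed, the sketch below pulls the diff out of the <patch> tags and applies it with git apply. This is a simplified stand-in for the official inference and evaluation scripts, and the helper names are hypothetical.

```python
import re
import subprocess

def extract_patch(model_output: str) -> str | None:
    """Return the unified diff between <patch> ... </patch> tags, if present."""
    match = re.search(r"<patch>\n?(.*?)</patch>", model_output, re.DOTALL)
    return match.group(1) if match else None

def apply_patch(patch_text: str, repo_dir: str) -> bool:
    """Apply the extracted diff to a checked-out repository with git apply."""
    result = subprocess.run(
        ["git", "apply", "-"],   # "-" reads the patch from stdin
        input=patch_text,
        text=True,
        cwd=repo_dir,
        capture_output=True,
    )
    return result.returncode == 0
```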
Supported Tasks and Leaderboards
SWE-bench proposes a new task: issue resolution, given a full repository and a GitHub issue. The leaderboard can be found at www.swebench.com
Languages
The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.
Dataset Structure
Data Instances
An example of a SWE-bench datum is as follows:
instance_id: (str) - A formatted instance identifier, usually as repo_owner__repo_name-PR-number.
text: (str) - The input text including instructions, the BM25-retrieved code context, and an example of the patch format for output.
patch: (str) - The gold patch, the patch generated by the PR (minus test-related code), that resolved the issue.
repo: (str) - The repository owner/name identifier from GitHub.
base_commit: (str) - The commit hash of the repository representing the HEAD of the repository before the solution PR is applied.
hints_text: (str) - Comments made on the issue prior to the creation date of the solution PR’s first commit.
created_at: (str) - The creation date of the pull request.
test_patch: (str) - A test-file patch that was contributed by the solution PR.
problem_statement: (str) - The issue title and body.
version: (str) - Installation version to use for running evaluation.
environment_setup_commit: (str) - The commit hash to use for environment setup and installation.
FAIL_TO_PASS: (str) - A JSON-encoded list of strings representing the set of tests resolved by the PR and tied to the issue resolution (see the sketch after this list).
PASS_TO_PASS: (str) - A JSON-encoded list of strings representing tests that should pass both before and after the PR is applied.
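Because FAIL_TO_PASS and PASS_TO_PASS are stored as JSON-encoded strings, they need to be decoded before use. A minimal sketch, assuming an instance loaded as in the earlier example:

```python
import json

def parse_test_lists(example: dict) -> tuple[list[str], list[str]]:
    """Decode the JSON-encoded test lists stored as strings in each instance."""
    fail_to_pass = json.loads(example["FAIL_TO_PASS"])
    pass_to_pass = json.loads(example["PASS_TO_PASS"])
    return fail_to_pass, pass_to_pass

# Example usage with an instance loaded from the dataset:
# f2p, p2p = parse_test_lists(dataset[0])
# print(len(f2p), "tests must flip from failing to passing")
```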