princeton-nlp committed 53b1cc4 (parent: a198af5): Update README.md

Files changed (1): README.md (+50, -0)
  dataset_size: 2954123612
---

# Dataset Card for "SWE-bench_bm25_27K"

### Dataset Summary
SWE-bench is a dataset that tests systems' ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit-test verification, using post-PR behavior as the reference solution.
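The unit-test verification described above can be sketched as a simple predicate. This is a hypothetical helper for illustration, not the official evaluation harness; the function and field names are assumptions:

```python
def is_resolved(fail_to_pass_results, pass_to_pass_results):
    """An instance counts as resolved only if every FAIL_TO_PASS test
    now passes and every PASS_TO_PASS test still passes.

    Each argument maps a test identifier -> True (passed) / False (failed).
    """
    return all(fail_to_pass_results.values()) and all(pass_to_pass_results.values())

# A regression in PASS_TO_PASS means the instance is not resolved,
# even if the issue's own test now passes.
print(is_resolved({"test_bugfix": True}, {"test_existing": False}))  # False
```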

The dataset was released as part of [SWE-bench: Can Language Models Resolve Real-World GitHub Issues?](https://arxiv.org/abs/2310.06770).

This dataset, `SWE-bench_bm25_27K`, formats each instance using Pyserini's BM25 retrieval, as described in the paper. The code context is limited to 27,000 `cl100k_base` tokens, as measured by the [`tiktoken`](https://github.com/openai/tiktoken) tokenization package used for OpenAI models.
The `text` column can be used directly with LMs to generate patch files.
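The token budget amounts to keeping retrieved files until the limit would be exceeded. A minimal sketch of that idea, under stated assumptions: a whitespace tokenizer stands in for `cl100k_base` (which in practice comes from `tiktoken.get_encoding("cl100k_base")`), and the greedy rank-order policy is illustrative, not the exact dataset-construction code:

```python
def count_tokens(text):
    # Stand-in tokenizer: whitespace split. The real dataset counts
    # tokens with tiktoken's cl100k_base encoding instead.
    return len(text.split())

def fit_files_to_budget(retrieved_files, budget):
    """Greedily keep BM25-retrieved (path, contents) pairs, in rank
    order, while they fit within the token budget (27,000 for
    SWE-bench_bm25_27K)."""
    kept, used = [], 0
    for path, contents in retrieved_files:
        cost = count_tokens(contents)
        if used + cost > budget:
            break
        kept.append(path)
        used += cost
    return kept

files = [("a.py", "def f(): return 1"), ("b.py", "x = 2"), ("c.py", "y " * 50)]
print(fit_files_to_budget(files, budget=10))  # ['a.py', 'b.py']
```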
Models are instructed to generate a [`patch`](https://en.wikipedia.org/wiki/Patch_(Unix))-formatted file using the following template:
```diff
<patch>
diff
--- a/path/to/file.py
+++ b/path/to/file.py
@@ -1,3 +1,3 @@
 This is a test file.
-It contains several lines.
+It has been modified.
 This is the third line.
</patch>
```

This format can be used directly with the [SWE-bench inference scripts](https://github.com/princeton-nlp/SWE-bench/tree/main/inference). Please refer to these scripts for more details on inference.
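Before a generated patch can be applied, the diff body has to be pulled out of the `<patch>` wrapper. A hypothetical post-processing helper (not part of the official inference scripts) might look like:

```python
import re

def extract_patch(completion):
    """Return the text between <patch> and </patch> tags, or None
    if the completion does not contain a well-formed patch block."""
    match = re.search(r"<patch>\n?(.*?)\n?</patch>", completion, re.DOTALL)
    return match.group(1) if match else None

completion = "<patch>\ndiff\n--- a/f.py\n+++ b/f.py\n</patch>"
print(extract_patch(completion))
```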

### Supported Tasks and Leaderboards
SWE-bench proposes a new task: resolving a GitHub issue, given the full repository and the issue text. The leaderboard can be found at [www.swebench.com](https://www.swebench.com).

### Languages

The text of the dataset is primarily English, but we make no attempt to filter or otherwise clean the data based on language.

## Dataset Structure

### Data Instances
An example of a SWE-bench datum is as follows:

```
instance_id: (str) - A formatted instance identifier, usually as repo_owner__repo_name-PR-number.
text: (str) - The input text, including instructions, the BM25-retrieved code files, and an example of the patch format for output.
patch: (str) - The gold patch, the patch generated by the PR (minus test-related code), that resolved the issue.
repo: (str) - The repository owner/name identifier from GitHub.
base_commit: (str) - The commit hash of the repository representing the HEAD of the repository before the solution PR is applied.
hints_text: (str) - Comments made on the issue prior to the creation of the solution PR's first commit.
created_at: (str) - The creation date of the pull request.
test_patch: (str) - A test-file patch that was contributed by the solution PR.
problem_statement: (str) - The issue title and body.
version: (str) - The installation version to use for running evaluation.
environment_setup_commit: (str) - The commit hash to use for environment setup and installation.
FAIL_TO_PASS: (str) - A JSON list of strings representing the set of tests resolved by the PR and tied to the issue resolution.
PASS_TO_PASS: (str) - A JSON list of strings representing tests that should pass before and after the PR application.
```
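Note that `FAIL_TO_PASS` and `PASS_TO_PASS` are serialized as JSON strings, so they need a `json.loads` before use. A small illustration, with a made-up instance fragment (the instance ID and test names are invented for the example; real rows come from the dataset itself):

```python
import json

# Made-up fragment of a row; field names follow the schema above.
row = {
    "instance_id": "sympy__sympy-12345",
    "FAIL_TO_PASS": '["test_issue_regression"]',
    "PASS_TO_PASS": '["test_existing_behavior", "test_other"]',
}

fail_to_pass = json.loads(row["FAIL_TO_PASS"])
pass_to_pass = json.loads(row["PASS_TO_PASS"])
print(len(fail_to_pass), len(pass_to_pass))  # 1 2
```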
  [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)