---
license: apache-2.0
language:
- en
tags:
- benchmark
- code retrieval
- code generation
- java
size_categories:
- 1K<n<10K
---
# Name of Dataset:
**Mozzarella-0.3.1**

---

## Motivation
- Mozzarella is a dataset that matches issues (= problem statements) with the corresponding pull requests (PRs = problem solutions) from a selection of well-maintained Java GitHub repositories. Its original purpose is to serve as training and evaluation data for ML models concerned with fault localization and automated program repair of complex code bases, but other use cases may benefit from this data as well.
- Inspired by the SWE-bench paper (https://arxiv.org/abs/2310.06770), which collected similar data (though only at file level) for Python code bases.

## Author

- Feedback2Code Bachelor's Project at Hasso Plattner Institute, Potsdam, in cooperation with SAP.

## Composition
- Each instance is called a task and represents a match between a GitHub issue and the corresponding fix. Each task contains information about the issue/PR (ids, comments, ...), the problem statement, and the solution that was applied by a human developer, including relevant files, relevant methods, and the actual changed code.
- The dataset currently contains 2734 tasks from 8 repositories. For a repository to be included in the dataset, it has to be written mostly in Java, have a large number of issues and pull requests in English, have good test coverage, and be published under a permissive license.
- Included in the dataset are three different train/validate/test splits (a loading sketch follows this list):
  - Random split: The tasks are randomly split with the proportions 60/20/20.
  - Repository split: Instead of splitting the individual tasks, the repositories are allocated to train/validate/test in 60/20/20 proportions, and each task receives the same split as the repository it belongs to.
  - Time split: Within each repository, all tasks in the test split were created earlier than tasks in the validation split, and all tasks in the train split were created earlier than tasks in the test split.
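
Each split is a plain string column, so selecting one is a simple row filter over the parquet data. A minimal pandas sketch; the file name below is a placeholder, not the dataset's actual location:

```python
# Minimal sketch, assuming the tasks sit in a single parquet file;
# the path is a placeholder.
import pandas as pd

df = pd.read_parquet("mozzarella.parquet")

# Each split column ('split_random', 'split_repo', 'split_time')
# holds 'train'/'val'/'test' labels, so a split is a row filter.
train = df[df["split_random"] == "train"]
val = df[df["split_random"] == "val"]
test = df[df["split_random"] == "test"]
print(len(train), len(val), len(test))  # roughly 60/20/20 of the 2734 tasks
```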

## Repositories
- mockito/mockito (MIT)
- square/retrofit (Apache 2.0)
- iluwatar/java-design-patterns (MIT)
- netty/netty (Apache 2.0)
- pinpoint-apm/pinpoint (Apache 2.0)
- kestra-io/kestra (Apache 2.0)
- provectus/kafka-ui (Apache 2.0)
- bazelbuild/bazel (Apache 2.0)

## Which columns exist?
- instance_id: (str) - Unique identifier for this task/instance. Format: *username*__*reponame*-*issueid*
- repo: (str) - The repository owner/name identifier from GitHub.
- issue_id: (str) - A formatted identifier for an issue/problem, usually as repo_owner/repo_name/issue-number.
- pr_id: (str) - A formatted identifier for the corresponding PR/solution, usually as repo_owner/repo_name/PR-number.
- linking_methods: (list str) - The methods used to create this task (e.g. timestamp, keyword, ...). See details below.
- base_commit: (str) - The commit hash representing the HEAD of the repository before the solution PR is applied.
- merge_commit: (str) - The commit hash representing the HEAD of the repository after the PR is merged.
- hints_text: (str) - Comments made on the issue.
- resolved_comments: (str) - Comments made on the PR.
- created_at: (str) - The creation date of the pull request.
- labeled_as: (list str) - List of labels applied to the issue.
- problem_statement: (str) - The issue title and body.
- gold_files: (list str) - List of paths (at the base commit) to the non-test files that were changed in the PR.
- test_files: (list str) - List of paths (at the base commit) to the test files that were changed in the PR.
- gold_patch: (str) - The gold patch, i.e. the patch generated by the PR (minus test-related code) that resolved the issue, as a diff.
- test_patch: (str) - The test-file patch contributed by the solution PR, as a diff.
- split_random: (str) - The random split this task belongs to ('train'/'test'/'val'). See details above.
- split_repo: (str) - The repository split this task belongs to ('train'/'test'/'val'). See details above.
- split_time: (str) - The time split this task belongs to ('train'/'test'/'val'). See details above.
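
Since gold_patch and test_patch are plain diffs against base_commit, a task can be reconstructed locally. A minimal sketch, assuming git is installed and that the clone URL can be derived from the repo column; the helper name and paths are illustrative:

```python
# Minimal sketch: check out a task's base_commit and apply its gold_patch.
# Assumes git is available; repository URL construction is an assumption.
import subprocess
import tempfile

def apply_gold_patch(repo: str, base_commit: str, gold_patch: str) -> str:
    workdir = tempfile.mkdtemp()
    subprocess.run(["git", "clone", f"https://github.com/{repo}.git", workdir], check=True)
    subprocess.run(["git", "checkout", base_commit], cwd=workdir, check=True)
    patch_file = f"{workdir}/gold.patch"
    with open(patch_file, "w") as f:
        f.write(gold_patch)
    subprocess.run(["git", "apply", patch_file], cwd=workdir, check=True)
    return workdir

# Usage with one task row from the dataframe above:
# workdir = apply_gold_patch(task["repo"], task["base_commit"], task["gold_patch"])
```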

## Collection Process
- All data is taken from publicly accessible GitHub repositories under MIT or Apache-2.0 licenses.
- The data is collected by gathering issue and PR information from the GitHub API. To create a task instance, we attempt for each PR to find the issues that were solved by that PR, using all of the following linking methods (an illustrative sketch follows this list):
  - connected: GitHub offers a feature to assign to a PR the issues that the PR addresses. These preexisting links are used as links in our dataset.
  - keyword: Each PR is scanned for mentions of issues and each issue is scanned for mentions of PRs. The proximity of those matches is then checked for certain keywords indicating a solution relationship.
  - timestamp: Possible matches are determined by looking at issues and PRs that were closed around the same time. Their titles and descriptions are then checked for semantic similarity using OpenAI embeddings.
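
As an illustration of the keyword linking idea (not our exact pipeline), the sketch below scans a PR body for issue numbers preceded by a closing keyword; the keyword list mirrors GitHub's closing keywords and is an assumption here:

```python
# Illustrative sketch of keyword linking: find issue numbers in a PR body
# that are preceded by a closing keyword. The keyword list is an assumption.
import re

CLOSING_KEYWORDS = r"(?:close[sd]?|fix(?:e[sd])?|resolve[sd]?)"
PATTERN = re.compile(rf"\b{CLOSING_KEYWORDS}\b\s+#(\d+)", re.IGNORECASE)

def linked_issue_numbers(pr_body: str) -> list[int]:
    return [int(num) for num in PATTERN.findall(pr_body)]

print(linked_issue_numbers("This PR fixes #123 and closes #456."))  # [123, 456]
```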

## Preprocessing
- We removed tasks that modify more than ten files (because we deem them overly complex for our purposes) as well as tasks that modify no files or only test files; a sketch of this filter follows below.
- To improve the accuracy of timestamp linking, tasks linked exclusively by timestamp are removed if there are keyword/connected tasks that suggest a different matching, or if there are other exclusively timestamp-linked tasks with a higher similarity.
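
The file-count filter can be expressed directly over the columns described above. A minimal pandas sketch, reusing the dataframe from the earlier loading example; whether the ten-file limit counts test files is an assumption here:

```python
# Keep tasks that change at least one non-test file and at most ten files
# in total (treating the limit as gold + test files is an assumption).
n_gold = df["gold_files"].apply(len)
n_test = df["test_files"].apply(len)

kept = df[(n_gold >= 1) & (n_gold + n_test <= 10)]
print(f"kept {len(kept)} of {len(df)} tasks")
```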

## Uses
- The dataset is currently being used to train and validate models for fault localization at file and method level.
- The dataset will be used to train and validate models for automatic code generation / bug fixing.
- Other uses could be possible, but they have not yet been explored by our project.

## Maintenance

- The dataset is currently only available on our team's DVC on DELab. It will probably be uploaded to Huggingface (or similar) publicly in the future.
- More repositories will most likely be added fairly soon. All fields are subject to change depending on what we deem sensible.
- This is the newest version of the dataset as of 18/07/2024.

## License

Copyright 2024 Feedback2Code

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.