Update dataset card: rewrite the labels section, add code example for downloading repos, clean up

#2
Files changed (1)
  1. README.md +67 -42
README.md CHANGED
@@ -1,6 +1,6 @@
  ---
  dataset_info:
- - config_name: commitchronicle-py-long
    features:
    - name: hash
      dtype: string
@@ -25,7 +25,7 @@ dataset_info:
    splits:
    - name: test
      num_examples: 163
- - config_name: commitchronicle-py-long-labels
    features:
    - name: hash
      dtype: string
@@ -46,11 +46,11 @@ dataset_info:
      num_bytes: 272359
      num_examples: 858
  configs:
- - config_name: commitchronicle-py-long
    data_files:
    - split: test
      path: commitchronicle-py-long/test-*
- - config_name: commitchronicle-py-long-labels
    data_files:
    - split: test
      path: commitchronicle-py-long-labels/test-*
@@ -62,46 +62,35 @@ license: apache-2.0
  This is the benchmark for the Commit message generation task as part of the
  🏟️ [Long Code Arena benchmark](https://huggingface.co/spaces/JetBrains-Research/long-code-arena).

- The current version is a manually curated subset of the Python test set from the 🤗 [CommitChronicle dataset](https://huggingface.co/datasets/JetBrains-Research/commit-chronicle), tailored for larger commits.

  All the repositories are published under permissive licenses (MIT, Apache-2.0, and BSD-3-Clause). The datapoints can be removed upon request.

  ## How-to

- 1. List all the available configs
-    via [`datasets.get_dataset_config_names`](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.get_dataset_config_names)
-    and choose an appropriate one.

-    Current configs: `commitchronicle-py-long`, `commitchronicle-py-long-labels`
-
- 2. Load the data
-    via [`load_dataset`](https://huggingface.co/docs/datasets/v2.14.3/en/package_reference/loading_methods#datasets.load_dataset):
-
-    ```
-    from datasets import load_dataset
-
-    configuration = "TODO"  # select a configuration
-    dataset = load_dataset("JetBrains-Research/lca-cmg", configuration, split="test")
-    ```

- Note that all the data we have is considered to be in the test split.

  **Note.** Working with git repositories
  under [`repos`](https://huggingface.co/datasets/JetBrains-Research/lca-cmg/tree/main/repos) directory is not supported
- via 🤗 Datasets. Download and extract the contents of each repository. We provide a full list of files
- in [`paths.json`](https://huggingface.co/datasets/JetBrains-Research/lca-cmg/blob/main/paths.json).

- ## Dataset Structure

- This dataset contains three kinds of data:

- * *full data* about each commit (including modifications)
- * metadata with quality *labels*
- * compressed *git repositories*

- ### Full data

- This section concerns configuration with *full data* about each commit (no `-labels` suffix).

  Each example has the following fields:

@@ -125,7 +114,7 @@ Each file modification has the following fields:
  Data point example:

- ```
  {'hash': 'b76ed0db81b3123ede5dc5e5f1bddf36336f3722',
   'repo': 'apache/libcloud',
   'date': '05.03.2022 17:52:34',
@@ -138,9 +127,53 @@ Data point example:
   }
  ```

- ### Labels

- This section concerns configuration with metadata and *labels* (with `-labels` suffix).

  Each example has the following fields:

@@ -164,7 +197,7 @@ Labels are in 1–5 scale, where:

  Data point example:

- ```
  {'hash': '1559a4c686ddc2947fc3606e1c4279062cc9480f',
   'repo': 'appscale/gts',
   'date': '15.07.2018 21:00:39',
@@ -172,12 +205,4 @@ Data point example:
   'message': 'Add auto_id_policy and logs_path flags\n\nThese changes were introduced in the 1.7.5 SDK.',
   'label': 1,
   'comment': 'no way to know the version'}
- ```
-
- ### Git Repositories
-
- This section concerns the [`repos`](https://huggingface.co/datasets/JetBrains-Research/lca-cmg/tree/main/repos) directory,
- which stores compressed Git repositories for all the commits in this benchmark. After you download and extract it, you
- can work with each repository either via Git or via Python libraries
- like [GitPython](https://github.com/gitpython-developers/GitPython)
- or [PyDriller](https://github.com/ishepard/pydriller).
 
  ---
  dataset_info:
+ - config_name: default
    features:
    - name: hash
      dtype: string

    splits:
    - name: test
      num_examples: 163
+ - config_name: labels
    features:
    - name: hash
      dtype: string

      num_bytes: 272359
      num_examples: 858
  configs:
+ - config_name: default
    data_files:
    - split: test
      path: commitchronicle-py-long/test-*
+ - config_name: labels
    data_files:
    - split: test
      path: commitchronicle-py-long-labels/test-*
 
  This is the benchmark for the Commit message generation task as part of the
  🏟️ [Long Code Arena benchmark](https://huggingface.co/spaces/JetBrains-Research/long-code-arena).

+ The dataset is a manually curated subset of the Python test set from the 🤗 [CommitChronicle dataset](https://huggingface.co/datasets/JetBrains-Research/commit-chronicle), tailored for larger commits.

  All the repositories are published under permissive licenses (MIT, Apache-2.0, and BSD-3-Clause). The datapoints can be removed upon request.

  ## How-to

+ ```py
+ from datasets import load_dataset
+
+ dataset = load_dataset("JetBrains-Research/lca-cmg", split="test")
+ ```

+ Note that all the data we have is considered to be in the test split.
  **Note.** Working with git repositories
  under [`repos`](https://huggingface.co/datasets/JetBrains-Research/lca-cmg/tree/main/repos) directory is not supported
+ via 🤗 Datasets. See the [Git Repositories](#git-repositories) section for more details.

+ ## About

+ ### Overview

+ In total, there are 163 commits from 34 repositories. For length statistics, refer to the [notebook](https://github.com/JetBrains-Research/lca-baselines/blob/main/commit_message_generation/notebooks/cmg_data_stats.ipynb) in our repository.
 
 
+ ### Dataset Structure

+ The dataset contains two kinds of data: data about each commit (under the [`commitchronicle-py-long`](https://huggingface.co/datasets/JetBrains-Research/lca-commit-message-generation/tree/main/commitchronicle-py-long) folder) and compressed git repositories (under the [`repos`](https://huggingface.co/datasets/JetBrains-Research/lca-commit-message-generation/tree/main/repos) folder).
+
+ #### Commits

  Each example has the following fields:
 
 
  Data point example:

+ ```py
  {'hash': 'b76ed0db81b3123ede5dc5e5f1bddf36336f3722',
   'repo': 'apache/libcloud',
   'date': '05.03.2022 17:52:34',

   }
  ```
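The per-file modifications in `mods` can be flattened into a single reviewable diff string. A minimal sketch: the field names (`change_type`, `old_path`, `new_path`, `diff`) follow the parent CommitChronicle schema and are an assumption here, and the sample record is hypothetical.

```py
# Sketch: assemble one git-style diff from a data point's `mods` list.
# Field names follow the parent CommitChronicle schema (an assumption,
# not taken from this card); the sample entry below is hypothetical.

def assemble_diff(mods: list) -> str:
    """Concatenate per-file modifications into one diff string."""
    chunks = []
    for mod in mods:
        # Added/deleted files have no old/new path, respectively.
        old_path = mod["old_path"] or "/dev/null"
        new_path = mod["new_path"] or "/dev/null"
        chunks.append(f"--- {old_path}\n+++ {new_path}\n{mod['diff']}")
    return "\n".join(chunks)

example_mods = [
    {
        "change_type": "MODIFY",
        "old_path": "libcloud/compute/base.py",
        "new_path": "libcloud/compute/base.py",
        "diff": "@@ -1,3 +1,4 @@\n+import logging\n",
    }
]

print(assemble_diff(example_mods))
```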
+ #### Git Repositories
+
+ The compressed Git repositories for all the commits in this benchmark are stored under the [`repos`](https://huggingface.co/datasets/JetBrains-Research/lca-cmg/tree/main/repos) directory.
+
+ Working with git repositories under the [`repos`](https://huggingface.co/datasets/JetBrains-Research/lca-cmg/tree/main/repos) directory is not supported directly via 🤗 Datasets.
+ You can use the [`huggingface_hub`](https://huggingface.co/docs/huggingface_hub/index) package to download the repositories. Sample code is provided below:
+
+ ```py
+ import os
+ import tarfile
+
+ from huggingface_hub import list_repo_tree, hf_hub_download
+
+ data_dir = "..."  # replace with a path to where you want to store repositories locally
+
+ for repo_file in list_repo_tree("JetBrains-Research/lca-commit-message-generation", "repos", repo_type="dataset"):
+     file_path = hf_hub_download(
+         repo_id="JetBrains-Research/lca-commit-message-generation",
+         filename=repo_file.path,
+         repo_type="dataset",
+         local_dir=data_dir,
+     )
+
+     with tarfile.open(file_path, "r:gz") as tar:
+         tar.extractall(path=os.path.join(data_dir, "extracted_repos"))
+ ```
+
+ For convenience, we also provide a full list of files in [`paths.json`](https://huggingface.co/datasets/JetBrains-Research/lca-cmg/blob/main/paths.json).

+ After you download and extract the repositories, you can work with each repository either via Git or via Python libraries like [GitPython](https://github.com/gitpython-developers/GitPython) or [PyDriller](https://github.com/ishepard/pydriller).
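For instance, to restore the repository state a commit message should be generated for, one could check out the parent of the target commit with plain git via `subprocess` (a sketch; the helper name is ours, and the repository path comes from the extracted archives):

```py
import subprocess

def checkout_parent(repo_dir: str, commit_hash: str) -> None:
    """Check out the repository state right before `commit_hash`."""
    # `<hash>^` refers to the first parent of the commit.
    subprocess.run(["git", "checkout", f"{commit_hash}^"], cwd=repo_dir, check=True)
```

Usage would look like `checkout_parent("path/to/extracted/repo", example["hash"])`, with the path pointing at one of the extracted repositories.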
+
+ # 🏷️ Extra: commit labels
+
+ To facilitate further research, we additionally provide manual labels for all 858 commits that made it through the initial filtering. The final version of the dataset described above consists of the commits labeled either 4 or 5.
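The relationship between the labels and the main benchmark can be sketched in plain Python (the records below are hypothetical; labels use the 1–5 scale described in this card):

```py
# Hypothetical records mimicking the labels schema; only commits
# rated 4 or 5 enter the final benchmark.
labeled = [
    {"hash": "a1", "label": 5, "comment": ""},
    {"hash": "b2", "label": 1, "comment": "no way to know the version"},
    {"hash": "c3", "label": 4, "comment": ""},
]

benchmark_hashes = [ex["hash"] for ex in labeled if ex["label"] >= 4]
print(benchmark_hashes)  # → ['a1', 'c3']
```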
+
+ ## How-to
+
+ ```py
+ from datasets import load_dataset
+
+ dataset = load_dataset("JetBrains-Research/lca-cmg", "labels", split="test")
+ ```
+
+ Note that all the data we have is considered to be in the test split.
+
+ ## About
+
+ ### Dataset Structure

  Each example has the following fields:
  Data point example:

+ ```py
  {'hash': '1559a4c686ddc2947fc3606e1c4279062cc9480f',
   'repo': 'appscale/gts',
   'date': '15.07.2018 21:00:39',

   'message': 'Add auto_id_policy and logs_path flags\n\nThese changes were introduced in the 1.7.5 SDK.',
   'label': 1,
   'comment': 'no way to know the version'}
+ ```