---
dataset_info:
- config_name: default
  features:
  - name: hash
    dtype: string
  - name: repo
    dtype: string
  - name: date
    dtype: string
  - name: license
    dtype: string
  - name: message
    dtype: string
  - name: mods
    list:
    - name: change_type
      dtype: string
    - name: old_path
      dtype: string
    - name: new_path
      dtype: string
    - name: diff
      dtype: string
  splits:
  - name: test
    num_examples: 163
- config_name: labels
  features:
  - name: hash
    dtype: string
  - name: repo
    dtype: string
  - name: date
    dtype: string
  - name: license
    dtype: string
  - name: message
    dtype: string
  - name: label
    dtype: int8
  - name: comment
    dtype: string
  splits:
  - name: test
    num_bytes: 272359
    num_examples: 858
- config_name: retrieval_bm25
  features:
  - name: hash
    dtype: string
  - name: repo
    dtype: string
  - name: mods
    dtype: string
  - name: context
    list:
    - name: source
      dtype: string
    - name: content
      dtype: string
configs:
- config_name: default
  data_files:
  - split: test
    path: commitchronicle-py-long/test-*
- config_name: labels
  data_files:
  - split: test
    path: commitchronicle-py-long-labels/test-*
- config_name: full_files
  data_files:
  - split: 4k
    path: context/files/files_4k.parquet
  - split: 8k
    path: context/files/files_8k.parquet
  - split: 16k
    path: context/files/files_16k.parquet
  - split: full
    path: context/files/files_full.parquet
- config_name: retrieval_bm25
  data_files:
  - split: 4k
    path: context/retrieval/bm25_4k.parquet
  - split: 8k
    path: context/retrieval/bm25_8k.parquet
  - split: 16k
    path: context/retrieval/bm25_16k.parquet
  - split: 32k
    path: context/retrieval/bm25_32k.parquet
  - split: 64k
    path: context/retrieval/bm25_64k.parquet
license: apache-2.0
---

# 🏟️ Long Code Arena (Commit message generation)

This is the benchmark for the Commit message generation task as part of the 🏟️ [Long Code Arena benchmark](https://huggingface.co/spaces/JetBrains-Research/long-code-arena).
The dataset is a manually curated subset of the Python test set from the 🤗 [CommitChronicle dataset](https://huggingface.co/datasets/JetBrains-Research/commit-chronicle), tailored for larger commits.

All the repositories are published under permissive licenses (MIT, Apache-2.0, and BSD-3-Clause). The datapoints can be removed upon request.

## How-to

```py
from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-cmg", split="test")
```

Note that all the data we have is considered to be in the test split.

**Note.** Working with git repositories under the [`repos`](https://huggingface.co/datasets/JetBrains-Research/lca-cmg/tree/main/repos) directory is not supported via 🤗 Datasets. See the [Git Repositories](#git-repositories) section for more details.

## About

### Overview

In total, there are 163 commits from 34 repositories. For length statistics, refer to the [notebook](https://github.com/JetBrains-Research/lca-baselines/blob/main/commit_message_generation/notebooks/cmg_data_stats.ipynb) in our repository.

### Dataset Structure

The dataset contains two kinds of data: data about each commit (under the [`commitchronicle-py-long`](https://huggingface.co/datasets/JetBrains-Research/lca-commit-message-generation/tree/main/commitchronicle-py-long) folder) and compressed git repositories (under the [`repos`](https://huggingface.co/datasets/JetBrains-Research/lca-commit-message-generation/tree/main/repos) folder).

#### Commits

Each example has the following fields:

| **Field** | **Description**                           |
|:---------:|:-----------------------------------------:|
| `repo`    | Commit repository.                        |
| `hash`    | Commit hash.                              |
| `date`    | Commit date.                              |
| `license` | Commit repository's license.              |
| `message` | Commit message.                           |
| `mods`    | List of file modifications from a commit. |

Each file modification has the following fields:

| **Field**     | **Description**                                                                                    |
|:-------------:|:--------------------------------------------------------------------------------------------------:|
| `change_type` | Type of change to current file. One of: `ADD`, `COPY`, `RENAME`, `DELETE`, `MODIFY` or `UNKNOWN`.  |
| `old_path`    | Path to file before change (might be empty).                                                       |
| `new_path`    | Path to file after change (might be empty).                                                        |
| `diff`        | `git diff` for current file.                                                                       |

Data point example:

```json
{'hash': 'b76ed0db81b3123ede5dc5e5f1bddf36336f3722',
 'repo': 'apache/libcloud',
 'date': '05.03.2022 17:52:34',
 'license': 'Apache License 2.0',
 'message': 'Add tests which verify that all OpenStack driver can be instantiated\nwith all the supported auth versions.\nNOTE: Those tests will fail right now due to the regressions being\nintroduced recently which breaks auth for some versions.',
 'mods': [{'change_type': 'MODIFY',
           'new_path': 'libcloud/test/compute/test_openstack.py',
           'old_path': 'libcloud/test/compute/test_openstack.py',
           'diff': '@@ -39,6 +39,7 @@ from libcloud.utils.py3 import u\n<...>'}],
}
```
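For reference, below is a minimal sketch of how the per-file diffs in `mods` can be combined into a single patch-like string to feed to a model. The `mods_to_diff` helper and its header format are our illustration, not part of the dataset or the benchmark baselines:

```py
from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-cmg", split="test")


def mods_to_diff(mods):
    # Combine per-file diffs into one patch-like string,
    # prefixing each diff with the change type and file path.
    chunks = []
    for mod in mods:
        path = mod["old_path"] or mod["new_path"]
        header = f"{mod['change_type']} {path}"
        if mod["old_path"] and mod["new_path"] and mod["old_path"] != mod["new_path"]:
            header += f" -> {mod['new_path']}"
        chunks.append(header + "\n" + mod["diff"])
    return "\n".join(chunks)


example = dataset[0]
print(example["message"])             # reference commit message
print(mods_to_diff(example["mods"]))  # candidate model input
```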
#### Git Repositories

The compressed Git repositories for all the commits in this benchmark are stored under the [`repos`](https://huggingface.co/datasets/JetBrains-Research/lca-cmg/tree/main/repos) directory.

Working with git repositories under the [`repos`](https://huggingface.co/datasets/JetBrains-Research/lca-cmg/tree/main/repos) directory is not supported directly via 🤗 Datasets. You can use the [`huggingface_hub`](https://huggingface.co/docs/huggingface_hub/index) package to download the repositories. The sample code is provided below:

```py
import os
import tarfile

from huggingface_hub import list_repo_tree, hf_hub_download

data_dir = "..."  # replace with a path to where you want to store repositories locally

for repo_file in list_repo_tree("JetBrains-Research/lca-commit-message-generation", "repos", repo_type="dataset"):
    file_path = hf_hub_download(
        repo_id="JetBrains-Research/lca-commit-message-generation",
        filename=repo_file.path,
        repo_type="dataset",
        local_dir=data_dir,
    )
    with tarfile.open(file_path, "r:gz") as tar:
        tar.extractall(path=os.path.join(data_dir, "extracted_repos"))
```

For convenience, we also provide a full list of files in [`paths.json`](https://huggingface.co/datasets/JetBrains-Research/lca-cmg/blob/main/paths.json).

After you download and extract the repositories, you can work with each repository either via Git or via Python libraries like [GitPython](https://github.com/gitpython-developers/GitPython) or [PyDriller](https://github.com/ishepard/pydriller), as shown in the sketch below.
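For instance, here is a minimal GitPython sketch that restores the state of a repository right before a given commit, i.e., the state a commit message generation model would condition on. The path to the extracted repository is hypothetical and depends on your `data_dir` and the archive layout (see [`paths.json`](https://huggingface.co/datasets/JetBrains-Research/lca-cmg/blob/main/paths.json)):

```py
import os

from git import Repo  # pip install GitPython

data_dir = "..."  # same directory as in the download snippet above
# Hypothetical path: adjust to the actual layout of the extracted archives.
repo_path = os.path.join(data_dir, "extracted_repos", "apache__libcloud")

repo = Repo(repo_path)
commit = repo.commit("b76ed0db81b3123ede5dc5e5f1bddf36336f3722")
print(commit.message)

# Check out the parent commit, i.e., the repository state before the change.
repo.git.checkout(f"{commit.hexsha}~1")
```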
# Extra: longer context

## Full Files

To facilitate further research, we additionally provide the full contents of the modified files before and after each commit in the `full_files` dataset config. The `full` split provides the whole files, while the remaining splits truncate each file given the maximum allowed number of tokens `n`. The files are truncated uniformly, essentially limiting the number of tokens for each file to `max_num_tokens // num_files`. We use the [DeepSeek-V3 tokenizer](https://huggingface.co/deepseek-ai/DeepSeek-V3) to obtain the number of tokens.

```py
from datasets import load_dataset

dataset = load_dataset(
    "JetBrains-Research/lca-commit-message-generation",
    "full_files",
    split="16k",  # should be one of: '4k', '8k', '16k', 'full'
)
```

Each example has the following fields:

* `repo`: commit repository
* `hash`: commit hash
* `mods`: commit modification (combined into a single diff)
* `files`: a list of dictionaries, where each corresponds to a specific file changed in the commit and has the following keys:
  * `old_path`: file path before the commit
  * `old_contents`: file contents before the commit
  * `new_path`: file path after the commit
  * `new_contents`: file contents after the commit

## Retrieval

To facilitate further research, we additionally provide context for each commit as retrieved by a BM25 retriever in the `retrieval_bm25` dataset config. For each commit, we run BM25 over all `.py` files in the corresponding repository at the state before the commit (excluding the files that were changed in this commit). We retrieve up to 50 files most relevant to the commit diff, and then, given the maximum allowed number of tokens `n`, we add files until the total context length (including the diff), measured in tokens with the [DeepSeek-V3 tokenizer](https://huggingface.co/deepseek-ai/DeepSeek-V3), exceeds `n`, possibly truncating the last included file.

To access these, run the following:

```py
from datasets import load_dataset

dataset = load_dataset(
    "JetBrains-Research/lca-commit-message-generation",
    "retrieval_bm25",
    split="16k",  # should be one of: '4k', '8k', '16k', '32k', '64k'
)
```

Each example has the following fields:

* `repo`: commit repository
* `hash`: commit hash
* `mods`: commit modification (combined into a single diff)
* `context`: context retrieved for the current commit; a list of dictionaries, where each corresponds to a specific file and has the following keys:
  * `source`: file path
  * `content`: file content
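A straightforward way to use this config is to concatenate the retrieved files and the diff into a single model prompt. The sketch below illustrates this; the exact layout (file headers, separators, ordering) is our assumption, not a format prescribed by the benchmark:

```py
from datasets import load_dataset

dataset = load_dataset(
    "JetBrains-Research/lca-commit-message-generation",
    "retrieval_bm25",
    split="16k",
)

example = dataset[0]

# Hypothetical prompt layout: retrieved files first, then the commit diff.
context_blocks = [
    f"# File: {item['source']}\n{item['content']}" for item in example["context"]
]
prompt = "\n\n".join(context_blocks) + "\n\nDiff:\n" + example["mods"]
print(prompt[:500])
```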
# 🏷️ Extra: commit labels

To facilitate further research, we additionally provide the manual labels for all the 858 commits that made it through the initial filtering. The final version of the dataset described above consists of the commits labeled either 4 or 5.

## How-to

```py
from datasets import load_dataset

dataset = load_dataset("JetBrains-Research/lca-commit-message-generation", "labels", split="test")
```

Note that all the data we have is considered to be in the test split.

## About

### Dataset Structure

Each example has the following fields:

| **Field** | **Description**                                                         |
|:---------:|:------------------------------------------------------------------------:|
| `repo`    | Commit repository.                                                      |
| `hash`    | Commit hash.                                                            |
| `date`    | Commit date.                                                            |
| `license` | Commit repository's license.                                            |
| `message` | Commit message.                                                         |
| `label`   | Label of the current commit as a target for the CMG task.               |
| `comment` | Comment for the label for the current commit (optional, might be empty). |

Labels are on a 1–5 scale, where:

* 1 – strong no
* 2 – weak no
* 3 – unsure
* 4 – weak yes
* 5 – strong yes

Data point example:

```json
{'hash': '1559a4c686ddc2947fc3606e1c4279062cc9480f',
 'repo': 'appscale/gts',
 'date': '15.07.2018 21:00:39',
 'license': 'Apache License 2.0',
 'message': 'Add auto_id_policy and logs_path flags\n\nThese changes were introduced in the 1.7.5 SDK.',
 'label': 1,
 'comment': 'no way to know the version'}
```

## Citing

```
@article{bogomolov2024long,
  title={Long Code Arena: a Set of Benchmarks for Long-Context Code Models},
  author={Bogomolov, Egor and Eliseeva, Aleksandra and Galimzyanov, Timur and Glukhov, Evgeniy and Shapkin, Anton and Tigina, Maria and Golubev, Yaroslav and Kovrigin, Alexander and van Deursen, Arie and Izadi, Maliheh and Bryksin, Timofey},
  journal={arXiv preprint arXiv:2406.11612},
  year={2024}
}
```

You can find the paper [here](https://arxiv.org/abs/2406.11612).