
🏟️ Long Code Arena (Project-level code completion)

This is the benchmark for the project-level code completion task, part of the 🏟️ Long Code Arena benchmark. Each datapoint contains the file to be completed, a list of lines to complete with their categories (see the categorization below), and a repository snapshot that can be used to build the context. All the repositories are published under permissive licenses (MIT, Apache-2.0, BSD-3-Clause, and BSD-2-Clause). The datapoints can be removed upon request.

How-to

Load the data via load_dataset:

from datasets import load_dataset

config_names = [
  'small_context',
  'medium_context',
  'large_context',
  'huge_context'
]

# Pick one of the configurations listed above
config_name = 'small_context'

ds = load_dataset('JetBrains-Research/lca-project-level-code-completion', config_name, split='test')
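As a quick sanity check, the fields of a loaded datapoint can be inspected directly. This is a minimal sketch that assumes ds was loaded as above and accesses the fields described in the Dataset Structure section below:

datapoint = ds[0]

print(datapoint['repo'])                             # {GitHub_user_name}__{repository_name}
print(datapoint['commit_hash'])                      # commit that added the completion file
print(datapoint['completion_file']['filename'])      # path of the file to complete
print(sorted(datapoint['completion_lines'].keys()))  # line categories (committed, inproject, ...)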

Dataset Structure

Datapoints in the dataset have the following structure:

  • repo – repository name in the format {GitHub_user_name}__{repository_name}
  • commit_hash – commit hash of the repository
  • completion_file – dictionary with the completion file content in the following format:
    • filename – path to the completion file
    • content – content of the completion file
  • completion_lines – dictionary where the keys are line categories and the values are lists of integers (the numbers of the lines to complete). The categories are:
    • committed – line contains at least one function or class from the files that were added in the same commit as the completion file
    • inproject – line contains at least one function or class from the repository snapshot at the moment of completion
    • infile – line contains at least one function or class from the completion file
    • common – line contains at least one function or class with common names, e.g., main, get, etc.
    • non_informative – line that was classified to be non-informative, e.g., too short, contains comments, etc.
    • random – other lines.
  • repo_snapshot – dictionary with a snapshot of the repository before the commit. It has the same structure as completion_file, but the filenames and contents are organized as lists.
  • completion_lines_raw – same as completion_lines, but before sampling
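To show how these fields fit together, here is a minimal sketch that rebuilds a {filename: content} mapping from repo_snapshot (which uses the same filename/content keys as completion_file, stored as parallel lists) and resolves the lines to complete. The helper name is illustrative, and the integers in completion_lines are assumed to be 0-based line indices:

def get_completion_targets(datapoint):
    # Repository snapshot: filenames and contents are stored as parallel lists
    snapshot = dict(zip(datapoint['repo_snapshot']['filename'],
                        datapoint['repo_snapshot']['content']))

    # Completion file, split into lines so that completion_lines indices can be resolved
    file_lines = datapoint['completion_file']['content'].splitlines()

    # Map each category to the text of the lines to complete (0-based indexing assumed)
    targets = {
        category: [file_lines[i] for i in line_numbers]
        for category, line_numbers in datapoint['completion_lines'].items()
    }
    return snapshot, targets

snapshot, targets = get_completion_targets(ds[0])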

How we collected the data

To collect the data, we cloned repositories from GitHub where the main language is Python. The completion file for each datapoint is a .py file that was added to the repository in a commit. The state of the repository before this commit is the repo snapshot.

The dataset configurations are based on the number of characters in .py files from the repository snapshot:

  • small_context – less than 48K characters;
  • medium_context – from 48K to 192K characters;
  • large_context – from 192K to 768K characters;
  • huge_context – more than 768K characters.
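For illustration, the bucket of a given repository snapshot could be recomputed by summing the characters of its .py files. The function below is a sketch, not part of the dataset tooling, and it reads 48K/192K/768K as 48,000/192,000/768,000 characters:

def context_bucket(repo_snapshot):
    # Total number of characters across .py files in the snapshot
    py_chars = sum(
        len(content)
        for filename, content in zip(repo_snapshot['filename'], repo_snapshot['content'])
        if filename.endswith('.py')
    )
    if py_chars < 48_000:
        return 'small_context'
    if py_chars < 192_000:
        return 'medium_context'
    if py_chars < 768_000:
        return 'large_context'
    return 'huge_context'

print(context_bucket(ds[0]['repo_snapshot']))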

Dataset Stats

Dataset          Number of datapoints   Number of repositories   Number of commits
small_context    144                    46                       63
medium_context   224                    80                       175
large_context    270                    75                       219
huge_context     296                    75                       252

Completion File

Dataset          Lines, min   Lines, max   Lines, median
small_context    201          1916         310.5
medium_context   200          1648         310.0
large_context    200          1694         278.0
huge_context     200          1877         313.5

Repository Snapshot .py files

Dataset          .py files, min   .py files, max   .py files, median   .py lines, median
small_context    0                52               4.0                 128.0
medium_context   3                117              34.0                3786.0
large_context    3                255              84.0                15466.5
huge_context     47               5227             261.0               49811.0

Repository Snapshot non-.py files

Dataset          Non-.py files, min   Non-.py files, max   Non-.py files, median   Non-.py lines, median
small_context    1                    1044                 19.5                    1227.0
medium_context   3                    3977                 64.5                    9735.0
large_context    8                    2174                 155.0                   18759.0
huge_context     24                   7687                 262.0                   60163.0

Line Counts

Dataset          infile   inproject   common   committed   non_informative   random   all
small_context    1430     95          500      1426        532               703      4686
medium_context   2224     2236        779      1495        858               1084     8676
large_context    2691     2595        693      1322        1019              1311     9631
huge_context     2608     2901        692      1019        1164              1426     9810
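The per-category counts above can be reproduced from a loaded split by tallying the lengths of the completion_lines lists. A minimal sketch, assuming ds is the test split of one configuration loaded as in the How-to section:

from collections import Counter

line_counts = Counter()
for datapoint in ds:
    for category, line_numbers in datapoint['completion_lines'].items():
        line_counts[category] += len(line_numbers)

print(dict(line_counts), 'total:', sum(line_counts.values()))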

Scores

You can find the results of running various models on this dataset on our leaderboard.

Citing

@article{bogomolov2024long,
  title={Long Code Arena: a Set of Benchmarks for Long-Context Code Models},
  author={Bogomolov, Egor and Eliseeva, Aleksandra and Galimzyanov, Timur and Glukhov, Evgeniy and Shapkin, Anton and Tigina, Maria and Golubev, Yaroslav and Kovrigin, Alexander and van Deursen, Arie and Izadi, Maliheh and Bryksin, Timofey},
  journal={arXiv preprint arXiv:2406.11612},
  year={2024}
}

You can find the paper at https://arxiv.org/abs/2406.11612.
