---
dataset_info:
  features:
  - name: repo
    dtype: string
  - name: commit_hash
    dtype: string
  - name: completion_file
    struct:
    - name: filename
      dtype: string
    - name: content
      dtype: string
  - name: completion_lines
    struct:
    - name: infile
      sequence: int32
    - name: inproject
      sequence: int32
    - name: common
      sequence: int32
    - name: commited
      sequence: int32
    - name: non_informative
      sequence: int32
    - name: random
      sequence: int32
  - name: repo_snapshot
    sequence:
    - name: filename
      dtype: string
    - name: content
      dtype: string
  - name: completion_lines_raw
    struct:
    - name: commited
      sequence: int64
    - name: common
      sequence: int64
    - name: infile
      sequence: int64
    - name: inproject
      sequence: int64
    - name: non_informative
      sequence: int64
    - name: other
      sequence: int64
  splits:
  - name: test
    num_bytes: 2972013125
    num_examples: 270
  download_size: 1242136049
  dataset_size: 2972013125
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---

# LCA Project Level Code Completion

## How to load the dataset

```python
from datasets import load_dataset

ds = load_dataset('JetBrains-Research/lca-codegen-large', split='test')
```

## Data Point Structure

* `repo` – repository name in the format `{GitHub_user_name}__{repository_name}`
* `commit_hash` – commit hash
* `completion_file` – dictionary with the completion file content in the following format:
    * `filename` – filepath to the completion file
    * `content` – content of the completion file
* `completion_lines` – dictionary where keys are classes of lines and values are lists of integers (numbers of lines to complete). The classes are:
    * `committed` – line contains at least one function or class that was declared in the files committed in `commit_hash`
    * `inproject` – line contains at least one function or class that was declared in the project (excluding the previous category)
    * `infile` – line contains at least one function or class that was declared in the completion file (excluding the previous categories)
    * `common` – line contains at least one function or class that was classified as common, e.g., `main`, `get`, etc. (excluding the previous categories)
    * `non_informative` – line that was classified as non-informative, e.g., too short or containing comments
    * `random` – line randomly sampled from the rest of the lines
* `repo_snapshot` – dictionary with a snapshot of the repository before the commit. It has the same structure as `completion_file`, but filenames and contents are organized as lists.
* `completion_lines_raw` – the same as `completion_lines`, but before sampling.

## How we collected the data

To collect the data, we cloned repositories from GitHub whose main language is Python. The completion file for each data point is a `.py` file that was added to the repository in a commit. The state of the repository before this commit is the repository snapshot.

The large dataset is defined by the number of characters in the `.py` files of the repository snapshot: this number ranges from 192K to 768K.

## Dataset Stats

* Number of data points: 270
* Number of repositories: 75
* Number of commits: 219

### Completion File

* Number of lines, median: 278
* Number of lines, min: 200
* Number of lines, max: 1694

### Repository Snapshot

* `.py` files: median 84, from 3 to 255
* non-`.py` files: median 155, from 8 to 2174
* `.py` lines: median 15466.5
* non-`.py` lines: median 18759

### Line Counts

* infile: 2691
* inproject: 2595
* common: 693
* committed: 1322
* non-informative: 1019
* random: 1311
* **total**: 9631

## Scores

Evaluation results are reported in the [HF Space](https://huggingface.co/spaces/JetBrains-Research/long-code-arena).
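
## Usage Examples

To make the field layout above concrete, here is a small sketch that inspects one data point. It only relies on the fields described in the Data Point Structure section; the choice of `ds[0]` and the printed summaries are illustrative.

```python
from datasets import load_dataset

ds = load_dataset('JetBrains-Research/lca-codegen-large', split='test')

dp = ds[0]  # one data point

print(dp['repo'], dp['commit_hash'])

# Completion file: path and size in lines
file_lines = dp['completion_file']['content'].splitlines()
print(dp['completion_file']['filename'], '-', len(file_lines), 'lines')

# Numbers of lines to complete, grouped by line class
for line_class, line_numbers in dp['completion_lines'].items():
    print(f'{line_class}: {len(line_numbers)} lines to complete')

# Total characters in the .py files of the repository snapshot
# (for this large config, it should fall between 192K and 768K)
snapshot = dp['repo_snapshot']
py_chars = sum(
    len(content)
    for filename, content in zip(snapshot['filename'], snapshot['content'])
    if filename.endswith('.py')
)
print('snapshot .py characters:', py_chars)
```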
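
Below is a minimal, hypothetical sketch of how a prefix context could be formed for one line marked for completion. This is not the official evaluation harness; in particular, it assumes the values in `completion_lines` are 0-based indices into the completion file.

```python
# Hypothetical sketch: build the in-file prefix for the first `infile`
# completion line of a data point. Assumes 0-based line numbers.
dp = ds[0]
file_lines = dp['completion_file']['content'].splitlines()
infile_targets = dp['completion_lines']['infile']

if infile_targets:
    line_number = infile_targets[0]                   # assumption: 0-based index
    prefix = '\n'.join(file_lines[:line_number])      # context above the target line
    ground_truth = file_lines[line_number]            # the line to be completed

    print('prefix ends with:', prefix.splitlines()[-1] if prefix else '<empty>')
    print('line to complete:', ground_truth)
```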