
LCA Project Level Code Completion

How to load the dataset

from datasets import load_dataset

ds = load_dataset('JetBrains-Research/lca-codegen-medium', split='test')
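Once loaded, each data point is a plain dictionary. The sketch below uses a hand-made point that mirrors the fields described under Data Point Structure; all values are illustrative, and line numbers are assumed to be 0-based (matching Python list indexing):

```python
# Illustrative data point mirroring the dataset's fields; the values are made up.
point = {
    "repo": "user__project",
    "commit_hash": "0" * 40,
    "completion_file": {
        "filename": "src/app.py",
        "content": "import os\n\ndef main():\n    print(os.getcwd())\n",
    },
    # Keys are line classes, values are 0-based line numbers to complete.
    "completion_lines": {"infile": [3], "non_informative": [1]},
}

# The completion file content is a single string; split it to index lines.
lines = point["completion_file"]["content"].splitlines()
for category, line_numbers in point["completion_lines"].items():
    for n in line_numbers:
        print(f"{category}: line {n}: {lines[n]!r}")
```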

Data Point Structure

  • repo – repository name in the format {GitHub_user_name}__{repository_name}
  • commit_hash – hash of the commit
  • completion_file – dictionary with the completion file content, in the following format:
      • filename – path to the completion file
      • content – content of the completion file
  • completion_lines – dictionary where keys are line classes and values are lists of integers (the numbers of the lines to complete). The classes are:
      • committed – the line contains at least one function or class that was declared in the files committed in commit_hash
      • inproject – the line contains at least one function or class that was declared elsewhere in the project (excluding the previous class)
      • infile – the line contains at least one function or class that was declared in the completion file (excluding the previous classes)
      • common – the line contains at least one function or class that was classified as common, e.g., main, get, etc. (excluding the previous classes)
      • non_informative – the line was classified as non-informative, e.g., too short or containing only comments
      • random – the line was randomly sampled from the remaining lines
  • repo_snapshot – dictionary with a snapshot of the repository before the commit. It has the same structure as completion_file, but the filenames and contents are organized as parallel lists.
  • completion_lines_raw – the same as completion_lines, but before sampling.
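Because repo_snapshot stores filenames and contents as parallel lists, a convenient first step is to zip them back into a path-to-content mapping. The snapshot below is a hypothetical two-file example, not taken from the dataset:

```python
# Hypothetical snapshot in the card's parallel-list layout; values are made up.
repo_snapshot = {
    "filename": ["README.md", "pkg/util.py"],
    "content": ["# demo\n", "def helper():\n    return 42\n"],
}

# Zip the parallel lists back into a {path: content} dictionary.
files = dict(zip(repo_snapshot["filename"], repo_snapshot["content"]))
```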

How we collected the data

To collect the data, we cloned repositories from GitHub where the main language is Python. The completion file for each data point is a .py file that was added to the repository in a commit. The state of the repository before this commit is the repo snapshot.

The medium dataset is defined by the total number of characters in the .py files of the repository snapshot: between 48K and 192K.
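Assuming a snapshot has already been turned into a path-to-content mapping, the size criterion can be checked with a simple character count over .py files (the files below are a tiny made-up example, so they fall far below the medium band):

```python
# Tiny illustrative snapshot as a {path: content} mapping; values are made up.
files = {
    "README.md": "# demo\n",
    "pkg/util.py": "def helper():\n    return 42\n",
    "pkg/main.py": "print('hi')\n",
}

# Total characters across .py files only; the medium split requires 48K-192K.
py_chars = sum(len(content) for path, content in files.items() if path.endswith(".py"))
in_medium_band = 48_000 <= py_chars <= 192_000
```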

Dataset Stats

  • Number of datapoints: 224
  • Number of repositories: 80
  • Number of commits: 175

Completion File

  • Number of lines, median: 310
  • Number of lines, min: 200
  • Number of lines, max: 1648

Repository Snapshot

  • .py files: median 34, from 3 to 117
  • non .py files: median 64.5, from 3 to 3977
  • .py lines: median 3786
  • non .py lines: median 9735

Line Counts

  • infile: 2224
  • inproject: 2236
  • common: 779
  • committed: 1495
  • non-informative: 858
  • random: 1084
  • total: 8676
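Per-class totals like the ones above can be recomputed by summing the list lengths in completion_lines across data points. The sketch below runs on two made-up points (note that the stored key spelling for the committed class may differ from the prose above; check the actual field names when loading):

```python
from collections import Counter

# Two illustrative completion_lines dictionaries; the values are made up.
points = [
    {"infile": [3, 7], "inproject": [1], "common": [2], "random": [9]},
    {"infile": [4], "inproject": [1, 5, 6], "committed": [8], "non_informative": [0]},
]

# Count how many lines of each class appear across all data points.
totals = Counter()
for completion_lines in points:
    for category, line_numbers in completion_lines.items():
        totals[category] += len(line_numbers)
```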
