---
dataset_info:
  features:
    - name: instance_id
      dtype: string
    - name: version
      dtype: string
    - name: gold_patches
      struct:
        - name: code
          dtype: string
        - name: test
          dtype: string
    - name: test_patch
      dtype: 'null'
    - name: pre_patches
      struct:
        - name: code
          dtype: string
        - name: test
          dtype: string
    - name: pre_scripts
      dtype: 'null'
    - name: repo
      dtype: string
    - name: base_commit
      dtype: string
    - name: base_commit_timestamp
      dtype: string
    - name: hints_text
      dtype: 'null'
    - name: created_at
      dtype: 'null'
    - name: problem_statement
      struct:
        - name: code
          dtype: string
        - name: test
          dtype: string
    - name: environment_setup_commit
      dtype: string
    - name: evaluation
      struct:
        - name: FAIL_TO_PASS
          sequence: string
        - name: PASS_TO_PASS
          dtype: 'null'
  splits:
    - name: test
      num_bytes: 75055139
      num_examples: 200
  download_size: 20308767
  dataset_size: 75055139
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---

Can Language Models Replace Programmers? REPOCOD Says 'Not Yet'

Large language models (LLMs) have achieved high accuracy, i.e., more than 90% pass@1, in solving Python coding problems in HumanEval and MBPP. A natural question, then, is whether LLMs can achieve code completion performance comparable to that of human developers. Unfortunately, this question cannot be answered with existing manually crafted or simple (e.g., single-line) code generation benchmarks, since such tasks fail to represent real-world software development tasks. In addition, existing benchmarks often use poor code correctness metrics, leading to misleading conclusions.

To address these challenges, we create REPOCOD, a code generation benchmark with 980 problems collected from 11 popular real-world projects, more than 58% of which require file-level or repository-level context information. In addition, REPOCOD has the longest average canonical solution length (331.6 tokens) and the highest average cyclomatic complexity (9.00) among existing benchmarks. Each task in REPOCOD includes 313.5 developer-written test cases on average for better correctness evaluation. In our evaluation of ten LLMs, none achieves more than 30% pass@1 on REPOCOD, revealing the need for stronger LLMs that can help developers in real-world software development.

For easier evaluation, we sample the 200 hardest problems in REPOCOD to create REPOCOD-Lite, using the product of the prompt length and the canonical solution length (in lines) as an indicator of difficulty. From the three categories of problems (self-contained, file-level, and repo-level) we select the 66, 67, and 67 highest-scoring samples, respectively; a sketch of this selection is shown below.
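The selection criterion is simple enough to sketch in pandas. This is a minimal illustration, not the authors' script: the column names (`category`, `prompt`, `solution`) are hypothetical placeholders for however the prompt, canonical solution, and context category are actually stored.

```python
import pandas as pd

def select_lite(df: pd.DataFrame) -> pd.DataFrame:
    """Pick the 200 hardest problems: 66/67/67 per context category."""
    line_count = lambda col: df[col].str.count("\n") + 1
    # Difficulty score: prompt length x canonical solution length, in lines.
    scored = df.assign(score=line_count("prompt") * line_count("solution"))
    quotas = {"self-contained": 66, "file-level": 67, "repo-level": 67}
    return pd.concat(
        scored[scored["category"] == cat].nlargest(n, "score")
        for cat, n in quotas.items()
    )
```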

REPOCOD_Lite_Unified is a variant of REPOCOD-Lite in a format similar to SWE-bench's, for easier integration into established inference pipelines.
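Since the dataset is a single parquet-backed `test` split, it loads directly with the `datasets` library. The repository ID below is a placeholder, not the dataset's actual Hub ID:

```python
from datasets import load_dataset

# Placeholder Hub ID; substitute the actual repository for this dataset.
ds = load_dataset("<org>/REPOCOD_Lite_Unified", split="test")

print(len(ds))  # 200 examples
ex = ds[0]
print(ex["repo"], ex["base_commit"])
print(ex["evaluation"]["FAIL_TO_PASS"][:3])  # relevant test cases
```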

  • For more details on data collection and evaluation results, please refer to our arXiv preprint.

  • Example code for downloading repositories, preparing repository snapshots, and running test cases for evaluation is provided at code.

  • Check our Leaderboard for preliminary results using SOTA LLMs with RAG.


"instance_id": Instance ID in REPOCOD
"version": Version of REPOCOD
"gold_patches": {
    "code": Patch file to restore the target code,
    "test": Patch file to restore the relevant tests for the target code
}
"test_patch": None,
"pre_patches": {
    "code": Patch file to remove the target code,
    "test": Patch file to remove the relevant tests for the target code
}
"pre_scripts": None,
"repo": {GitHub User Name}/{Project Name}
"base_commit": base commit
"base_commit_timestamp": time of the base commit
"hints_text": None,
"created_at": None,
"problem_statement": {
    "code": Problem statement for code generation.
    "test": Problem statement for test generation.
}
# "problem_statement_source": "repocod",
"environment_setup_commit": base commit
"evaluation": {
    "FAIL_TO_PASS": list of relevant test cases
    "PASS_TO_PASS": None, (all remaining tests that passes, we choose not to run the PASS_TO_PASS tests to avoid the computational cost)
}
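Putting the fields together, a typical evaluation round trips through git: check out `base_commit`, apply `pre_patches` to remove the target code and its tests, insert the model's completion, restore the reference tests with `gold_patches["test"]`, and run the `FAIL_TO_PASS` cases. The sketch below assumes an already-cloned repository, that the patches are unified diffs consumable by `git apply`, and that the `FAIL_TO_PASS` entries are pytest node IDs; the official scripts in the linked code repository are authoritative.

```python
import subprocess

def git_apply(repo_dir: str, patch: str) -> None:
    # Feed a unified-diff patch string to `git apply` via stdin.
    subprocess.run(["git", "apply", "-"], cwd=repo_dir,
                   input=patch, text=True, check=True)

def prepare_snapshot(repo_dir: str, inst: dict) -> None:
    """Reset to the base commit, then strip the target code and tests."""
    subprocess.run(["git", "checkout", "-f", inst["base_commit"]],
                   cwd=repo_dir, check=True)
    git_apply(repo_dir, inst["pre_patches"]["code"])
    git_apply(repo_dir, inst["pre_patches"]["test"])

def run_fail_to_pass(repo_dir: str, inst: dict) -> bool:
    """Restore the reference tests and run only the FAIL_TO_PASS cases."""
    git_apply(repo_dir, inst["gold_patches"]["test"])
    tests = inst["evaluation"]["FAIL_TO_PASS"]  # assumed pytest node IDs
    proc = subprocess.run(["python", "-m", "pytest", *tests], cwd=repo_dir)
    return proc.returncode == 0
```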