---
annotations_creators:
  - expert-generated
language_creators:
  - expert-generated
language:
  - en
license:
  - mit
multilinguality:
  - monolingual
size_categories:
  - n<1K
source_datasets:
  - original
task_categories:
  - text2text-generation
task_ids: []
paperswithcode_id: canitedit
pretty_name: CanItEdit
tags:
  - code-generation
  - code
dataset_info:
  features:
    - name: id
      dtype: int64
    - name: name
      dtype: string
    - name: full_name
      dtype: string
    - name: before
      dtype: string
    - name: after
      dtype: string
    - name: tests
      dtype: string
    - name: instruction_descriptive
      dtype: string
    - name: instruction_lazy
      dtype: string
    - name: taxonomy
      struct:
        - name: change_kind
          dtype: string
        - name: libraries
          sequence: string
        - name: topic
          dtype: string
  splits:
    - name: test
      num_bytes: 564910
      num_examples: 105
  download_size: 250477
  dataset_size: 564910
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---

# Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions

CanItEdit is a benchmark for evaluating LLMs on instructional code editing, the task of updating a program given a natural language instruction. The benchmark contains 105 hand-crafted Python editing problems, each with a before and after program, two kinds of natural language instruction (descriptive and lazy), and a hidden test suite.
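
To inspect the problems locally, the test split can be loaded with the Hugging Face `datasets` library. This is a minimal sketch; the hub id `nuprl/CanItEdit` is assumed here and may differ for your copy of the dataset.

```python
from datasets import load_dataset

# Load the 105-problem test split. The hub id below is an assumption;
# point load_dataset at wherever your copy of the dataset lives.
ds = load_dataset("nuprl/CanItEdit", split="test")

problem = ds[0]
print(problem["name"])                     # short problem name
print(problem["instruction_descriptive"])  # detailed instruction
print(problem["instruction_lazy"])         # informal instruction
print(problem["before"][:200])             # original program (truncated)
```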

The dataset's two kinds of natural language instruction evaluate models in two scenarios (a sketch of building a prompt for either setting follows the list):

  1. Descriptive: detailed instructions that replicate situations where a user provides a precise specification or another model outlines a plan, similar to Reflexion-style prompting.
  2. Lazy: informal instructions that resemble the typical queries users give LLMs for code generation.
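
The template below is a hypothetical sketch, not the exact prompt format used in the paper: it simply pairs a problem's `before` code with whichever instruction field matches the chosen setting.

```python
def build_prompt(problem: dict, mode: str = "descriptive") -> str:
    """Build a simple editing prompt from one CanItEdit problem.

    `mode` selects the instruction field: "descriptive" or "lazy".
    The template is illustrative only, not the paper's exact prompt.
    """
    instruction = problem[f"instruction_{mode}"]
    return (
        "## Code Before:\n"
        f"{problem['before']}\n"
        "## Instruction:\n"
        f"{instruction}\n"
        "## Code After:\n"
    )

# Prompts for both settings from the same problem:
# descriptive_prompt = build_prompt(ds[0], "descriptive")
# lazy_prompt = build_prompt(ds[0], "lazy")
```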

For more information and results, see our paper.

## Citation

If you use our work, please cite our paper as follows:

@inproceedings{cassano2023edit,
      title={{Can It Edit? Evaluating the Ability of Large Language Models to Follow Code Editing Instructions}}, 
      author={Federico Cassano and Luisa Li and Akul Sethi and Noah Shinn and Abby Brennan-Jones and Anton Lozhkov and Carolyn Jane Anderson and Arjun Guha},
      booktitle={The First International Workshop on Large Language Model for Code},
      year={2024},
      url={https://arxiv.org/abs/2312.12450}
}

## How To Evaluate

All the code for evaluating the benchmark can be found in our GitHub repository.
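
The repository contains the full harness, including prompt formatting and pass@k computation over the hidden test suites. As a rough local spot check (a hypothetical stand-in, not the official harness), one way to validate a single candidate edit is to append the problem's `tests` string to the candidate program and execute the result in a subprocess. Note that this runs untrusted model output, so only do it in a sandboxed environment.

```python
import subprocess
import sys
import tempfile

def passes_tests(candidate_program: str, tests: str, timeout: int = 30) -> bool:
    """Run the candidate program with the problem's test suite appended.

    Simplified stand-in for the official evaluation harness; it executes
    arbitrary code, so use it only inside a sandbox.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(candidate_program + "\n\n" + tests)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=timeout
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False

# Sanity check: the reference solution should pass its own hidden tests.
# assert passes_tests(ds[0]["after"], ds[0]["tests"])
```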