---
license: apache-2.0
---

# Dataset Card for perturbed_humaneval

## Dataset Description

### Dataset Summary

The ReCode benchmark applies code and natural-language transformations to code-generation benchmarks in order to evaluate the robustness of code-generation models. This dataset contains the perturbed version of HumanEval released by the ReCode authors; it was generated automatically from the original HumanEval dataset.

### Subsets

Four transformation categories form the subsets of this dataset: `func_name` (function-name renamings), `nlaugmenter` (natural-language perturbations of the docstring), `natgen` (code-structure transformations), and `format` (code-formatting changes).
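Assuming the data is hosted under the `RaymondLi/perturbed_humaneval` repository id (inferred from where this card is hosted; verify before use), one subset can be loaded with the `datasets` library roughly as follows:

```python
# Sketch of loading one transformation-category subset of perturbed HumanEval.
# The repository id below is an assumption, not confirmed by this card.
REPO_ID = "RaymondLi/perturbed_humaneval"
SUBSETS = ("func_name", "nlaugmenter", "natgen", "format")

def load_subset(name: str):
    """Load one transformation-category subset; only a test split exists."""
    if name not in SUBSETS:
        raise ValueError(f"unknown subset {name!r}, expected one of {SUBSETS}")
    from datasets import load_dataset  # requires `pip install datasets`
    return load_dataset(REPO_ID, name, split="test")
```

Each subset shares the schema described under Data Fields below.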

### Languages

The programming problems are written in Python, with docstrings and comments in English.

## Dataset Structure

### Data Instances

[More Information Needed]

### Data Fields

- `task_id`: ID of the original HumanEval example
- `prompt`: the perturbed prompt
- `entry_point`: name of the entry-point function for the tests
- `canonical_solution`: a correct solution to the problem in the prompt
- `test`: contains the function used to test generated code for correctness
- `seed`: seed of the perturbed prompt
- `perturbation_name`: name of the applied perturbation
- `partial`: partial solution to the problem. This field is only present for the transformation categories that perturb a partial solution: `natgen` and `format`.
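A perturbed problem is checked the same way as in the original HumanEval: concatenate the prompt with a model completion, append the `test` code, and run its check function on the entry point. A minimal sketch (the record below is a toy illustration, not an actual dataset row):

```python
# Toy HumanEval-style record; real rows come from the dataset itself.
record = {
    "task_id": "HumanEval/0",
    "prompt": 'def add(a, b):\n    """Return the sum of a and b."""\n',
    "entry_point": "add",
    "canonical_solution": "    return a + b\n",
    "test": (
        "def check(candidate):\n"
        "    assert candidate(1, 2) == 3\n"
        "    assert candidate(-1, 1) == 0\n"
    ),
}

def passes(record, completion):
    """Run the record's tests against prompt + completion; True if all pass."""
    program = record["prompt"] + completion + "\n" + record["test"]
    namespace = {}
    try:
        exec(program, namespace)  # defines the solution and check()
        namespace["check"](namespace[record["entry_point"]])
    except Exception:
        return False
    return True

print(passes(record, record["canonical_solution"]))  # canonical solution should pass
```

Note that this executes untrusted generated code directly; in practice it should run in a sandboxed subprocess with a timeout.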

### Data Splits

The dataset has only a test split.

## Dataset Creation

### Curation Rationale

[More Information Needed]

### Source Data

#### Initial Data Collection and Normalization

[More Information Needed]

#### Who are the source language producers?

[More Information Needed]

### Annotations

#### Annotation process

[More Information Needed]

#### Who are the annotators?

[More Information Needed]

### Personal and Sensitive Information

[More Information Needed]

## Considerations for Using the Data

### Social Impact of Dataset

[More Information Needed]

### Discussion of Biases

[More Information Needed]

### Other Known Limitations

[More Information Needed]

## Additional Information

### Dataset Curators

[More Information Needed]

### Licensing Information

The dataset is released under the Apache 2.0 license (see the `license` field in the card metadata).

### Citation Information

```bibtex
@article{wang2022recode,
  title={ReCode: Robustness Evaluation of Code Generation Models},
  author={Wang, Shiqi and Li, Zheng and Qian, Haifeng and Yang, Chenghao and Wang, Zijian and Shang, Mingyue and Kumar, Varun and Tan, Samson and Ray, Baishakhi and Bhatia, Parminder and others},
  journal={arXiv preprint arXiv:2212.10264},
  year={2022}
}
```

## Contributions

[More Information Needed]