---
annotations_creators:
  - expert-generated
language_creators:
  - expert-generated
language:
  - en
license:
  - mit
multilinguality:
  - monolingual
size_categories:
  - n<1K
source_datasets:
  - original
task_categories:
  - text2text-generation
task_ids: []
paperswithcode_id: humaneval
pretty_name: OpenAI HumanEval
tags:
  - code-generation
dataset_info:
  config_name: openai_humaneval
  features:
    - name: task_id
      dtype: string
    - name: prompt
      dtype: string
    - name: canonical_solution
      dtype: string
    - name: test
      dtype: string
    - name: entry_point
      dtype: string
  splits:
    - name: test
      num_bytes: 194394
      num_examples: 164
  download_size: 83920
  dataset_size: 194394
configs:
  - config_name: openai_humaneval
    data_files:
      - split: test
        path: openai_humaneval/test-*
    default: true
---

Dataset Card for OpenAI HumanEval

Table of Contents

  • Dataset Description
  • Dataset Structure
  • Dataset Creation
  • Considerations for Using the Data
  • Additional Information

Dataset Description

Dataset Summary

The HumanEval dataset released by OpenAI consists of 164 programming problems, each comprising a function signature, a docstring, a function body, and several unit tests. The problems were handwritten to ensure they would not appear in the training sets of code generation models.

Supported Tasks and Leaderboards

Languages

The programming problems are written in Python and contain English natural text in comments and docstrings.

Dataset Structure

from datasets import load_dataset
load_dataset("openai_humaneval")

DatasetDict({
    test: Dataset({
        features: ['task_id', 'prompt', 'canonical_solution', 'test', 'entry_point'],
        num_rows: 164
    })
})

Data Instances

An example of a dataset instance:

{
    "task_id": "test/0",
    "prompt": "def return1():\n",
    "canonical_solution": "    return 1",
    "test": "def check(candidate):\n    assert candidate() == 1",
    "entry_point": "return1"
}

Data Fields

  • task_id: identifier for the data sample
  • prompt: input for the model, containing the function header and docstring
  • canonical_solution: reference solution for the problem posed in the prompt
  • test: check function used to verify generated code for correctness
  • entry_point: name of the function to be tested (used in the evaluation sketch below)
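
Taken together, these fields support functional-correctness evaluation: a completion is appended to prompt, the result is executed together with test, and the check function defined there is called on the function named by entry_point. Below is a minimal sketch of that flow using canonical_solution as a stand-in for a model completion; run_problem is an illustrative helper, not part of the dataset or the datasets library, and untrusted model output should never be executed in-process like this (see the safety note under Considerations for Using the Data):

from datasets import load_dataset

def run_problem(problem, completion):
    # Assemble one program: function signature + body + unit tests.
    program = problem["prompt"] + completion + "\n" + problem["test"] + "\n"
    namespace = {}
    exec(program, namespace)  # defines the candidate function and check()
    # check() raises an AssertionError if any unit test fails.
    namespace["check"](namespace[problem["entry_point"]])

problem = load_dataset("openai_humaneval")["test"][0]
run_problem(problem, problem["canonical_solution"])
print("canonical solution passed for", problem["task_id"])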

Data Splits

The dataset consists of a single test split with 164 samples.

Dataset Creation

Curation Rationale

Since code generation models are often trained on dumps of GitHub, a dataset not included in those dumps was necessary to evaluate such models properly. However, since this dataset was published on GitHub, it is likely to be included in future dumps.

Source Data

The dataset was handcrafted by engineers and researchers at OpenAI.

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

[More Information Needed]

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

None.

Considerations for Using the Data

Make sure you execute generated Python code in a safe environment when evaluating against this dataset, as generated code could be harmful.
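
One simple precaution, sketched below, is to run each candidate program in a separate Python process with a hard timeout. This is an illustrative sketch, not the official evaluation harness; run_sandboxed is a hypothetical helper, and a production setup should additionally restrict file system, network, and process access.

import os
import subprocess
import sys
import tempfile

from datasets import load_dataset

def run_sandboxed(program: str, timeout: float = 10.0) -> bool:
    # Write the candidate program to a temporary file and execute it in a
    # child process; a non-zero exit code or a timeout counts as failure.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(program)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], timeout=timeout, capture_output=True
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
    finally:
        os.unlink(path)

problem = load_dataset("openai_humaneval")["test"][0]
completion = problem["canonical_solution"]   # stand-in for a model completion
program = (
    problem["prompt"] + completion + "\n"
    + problem["test"] + "\n"
    + f"check({problem['entry_point']})\n"
)
print(run_sandboxed(program))                # True if all unit tests pass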

Social Impact of Dataset

With this dataset, code-generating models can be evaluated more rigorously, which helps reduce the number of issues introduced when such models are used.

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

OpenAI

Licensing Information

MIT License

Citation Information

@misc{chen2021evaluating,
      title={Evaluating Large Language Models Trained on Code},
      author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser and Mohammad Bavarian and Clemens Winter and Philippe Tillet and Felipe Petroski Such and Dave Cummings and Matthias Plappert and Fotios Chantzis and Elizabeth Barnes and Ariel Herbert-Voss and William Hebgen Guss and Alex Nichol and Alex Paino and Nikolas Tezak and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain and William Saunders and Christopher Hesse and Andrew N. Carr and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba},
      year={2021},
      eprint={2107.03374},
      archivePrefix={arXiv},
      primaryClass={cs.LG}
}

Contributions

Thanks to @lvwerra for adding this dataset.