Datasets:

Languages:
English
Multilinguality:
monolingual
Size Categories:
n<1K
Language Creators:
expert-generated
Annotations Creators:
expert-generated
Source Datasets:
original
ArXiv:
2107.03374
Tags:
code-generation
License:
mit
system HF staff committed on
Commit
d009b64
0 Parent(s):

Update files from the datasets library (from 1.13.0)


Release notes: https://github.com/huggingface/datasets/releases/tag/1.13.0

.gitattributes ADDED
@@ -0,0 +1,27 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bin.* filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zstandard filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,181 @@
+ ---
+ annotations_creators:
+ - expert-generated
+ language_creators:
+ - expert-generated
+ languages:
+ - en
+ licenses:
+ - mit
+ multilinguality:
+ - monolingual
+ pretty_name: OpenAI HumanEval
+ size_categories:
+ - n<1K
+ source_datasets:
+ - original
+ task_categories:
+ - conditional-text-generation
+ task_ids:
+ - conditional-text-generation-other-code-generation
+ ---
+
+ # Dataset Card for OpenAI HumanEval
+
+ ## Table of Contents
+ - [OpenAI HumanEval](#openai-humaneval)
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+ - [Dataset Summary](#dataset-summary)
+ - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+ - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+ - [Data Instances](#data-instances)
+ - [Data Fields](#data-fields)
+ - [Data Splits](#data-splits)
+ - [Dataset Creation](#dataset-creation)
+ - [Curation Rationale](#curation-rationale)
+ - [Source Data](#source-data)
+ - [Initial Data Collection and Normalization](#initial-data-collection-and-normalization)
+ - [Who are the source language producers?](#who-are-the-source-language-producers)
+ - [Annotations](#annotations)
+ - [Annotation process](#annotation-process)
+ - [Who are the annotators?](#who-are-the-annotators)
+ - [Personal and Sensitive Information](#personal-and-sensitive-information)
+ - [Considerations for Using the Data](#considerations-for-using-the-data)
+ - [Social Impact of Dataset](#social-impact-of-dataset)
+ - [Discussion of Biases](#discussion-of-biases)
+ - [Other Known Limitations](#other-known-limitations)
+ - [Additional Information](#additional-information)
+ - [Dataset Curators](#dataset-curators)
+ - [Licensing Information](#licensing-information)
+ - [Citation Information](#citation-information)
+ - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Repository:** [GitHub Repository](https://github.com/openai/human-eval)
+ - **Paper:** [Evaluating Large Language Models Trained on Code](https://arxiv.org/abs/2107.03374)
+
+ ### Dataset Summary
+
+ The HumanEval dataset released by OpenAI consists of 164 programming problems, each with a function signature, docstring, body, and several unit tests. The problems were written by hand to ensure they were not included in the training data of code generation models.
+
+ ### Supported Tasks and Leaderboards
+
+ ### Languages
+ The programming problems are written in Python and contain English natural text in comments and docstrings.
+
+ ## Dataset Structure
+
+ ```python
+ from datasets import load_dataset
+ load_dataset("openai_humaneval")
+
+ DatasetDict({
+     test: Dataset({
+         features: ['task_id', 'prompt', 'canonical_solution', 'test', 'entry_point'],
+         num_rows: 164
+     })
+ })
+ ```
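+
+ A quick way to sanity-check the structure above is to load the single split and inspect it; this is a minimal sketch that uses only the standard `datasets` API:
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("openai_humaneval", split="test")
+ print(ds.column_names)   # ['task_id', 'prompt', 'canonical_solution', 'test', 'entry_point']
+ print(ds.num_rows)       # 164
+ print(ds[0]["task_id"])  # identifier of the first problem
+ ```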
+
+ ### Data Instances
+
+ An example of a dataset instance:
+
+ ```
+ {
+     "task_id": "test/0",
+     "prompt": "def return1():\n",
+     "canonical_solution": " return 1",
+     "test": "def check(candidate):\n assert candidate() == 1",
+     "entry_point": "return1"
+ }
+ ```
+
+ ### Data Fields
+
+ - `task_id`: identifier for the data sample
+ - `prompt`: input for the model, consisting of the function header and docstring
+ - `canonical_solution`: a reference solution for the problem posed in the `prompt`
+ - `test`: a function that checks generated code for functional correctness
+ - `entry_point`: the name of the function to be tested
+
+
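+ These fields are meant to be combined at evaluation time: the `prompt` is fed to the model, the model's completion is appended to it, and the resulting function is checked with `test` and `entry_point`. The sketch below illustrates that flow; it is not the official `human-eval` harness, and it substitutes `canonical_solution` for a model completion so that it runs on its own:
+
+ ```python
+ from datasets import load_dataset
+
+ sample = load_dataset("openai_humaneval", split="test")[0]
+
+ # Stand-in for a model-generated completion.
+ completion = sample["canonical_solution"]
+
+ # Assemble the full program: prompt + completion + unit tests.
+ program = sample["prompt"] + completion + "\n" + sample["test"]
+
+ # Run it and apply check() to the entry point.
+ # Only execute untrusted completions in a sandbox (see Considerations below).
+ env = {}
+ exec(program, env)
+ env["check"](env[sample["entry_point"]])
+ ```
+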
+ ### Data Splits
+
+ The dataset consists of a single test split with 164 samples.
+
+ ## Dataset Creation
+
+ ### Curation Rationale
+
+ Since code generation models are often trained on dumps of GitHub, a dataset that was not included in those dumps was needed to evaluate them properly. However, since this dataset has itself been published on GitHub, it is likely to be included in future dumps.
+
+ ### Source Data
+
+ The dataset was handcrafted by engineers and researchers at OpenAI.
+
+ #### Initial Data Collection and Normalization
+
+ [More Information Needed]
+
+ #### Who are the source language producers?
+
+ [More Information Needed]
+
+ ### Annotations
+
+ [More Information Needed]
+
+ #### Annotation process
+
+ [More Information Needed]
+
+ #### Who are the annotators?
+
+ [More Information Needed]
+
+ ### Personal and Sensitive Information
+
+ None.
+
+ ## Considerations for Using the Data
+ Make sure to execute generated Python code in a safe environment when evaluating it against this dataset, as generated code could be harmful.
+
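+ One common precaution, sketched below, is to run each assembled program (the `prompt`, a completion, the `test` function, and a final call to `check` on the `entry_point`) in a separate subprocess with a hard timeout. The helper name `run_candidate` is illustrative, not part of any library; for serious evaluations, prefer the official `human-eval` harness and stronger isolation such as containers or VMs.
+
+ ```python
+ import subprocess
+ import sys
+ import tempfile
+
+ def run_candidate(program: str, timeout: float = 5.0) -> bool:
+     """Run an untrusted program in a child process; True if it exits cleanly."""
+     with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
+         f.write(program)
+         path = f.name
+     try:
+         result = subprocess.run([sys.executable, path], capture_output=True, timeout=timeout)
+         return result.returncode == 0
+     except subprocess.TimeoutExpired:
+         return False
+ ```
+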
+ ### Social Impact of Dataset
+ With this dataset, code-generating models can be evaluated more rigorously, which should lead to fewer issues when such models are put to use.
+
+ ### Discussion of Biases
+
+ [More Information Needed]
+
+ ### Other Known Limitations
+
+ [More Information Needed]
+
+ ## Additional Information
+
+ ### Dataset Curators
+ OpenAI
+
+ ### Licensing Information
+
+ MIT License
+
+ ### Citation Information
+ ```
+ @misc{chen2021evaluating,
+ title={Evaluating Large Language Models Trained on Code},
+ author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser and Mohammad Bavarian and Clemens Winter and Philippe Tillet and Felipe Petroski Such and Dave Cummings and Matthias Plappert and Fotios Chantzis and Elizabeth Barnes and Ariel Herbert-Voss and William Hebgen Guss and Alex Nichol and Alex Paino and Nikolas Tezak and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain and William Saunders and Christopher Hesse and Andrew N. Carr and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba},
+ year={2021},
+ eprint={2107.03374},
+ archivePrefix={arXiv},
+ primaryClass={cs.LG}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@lvwerra](https://github.com/lvwerra) for adding this dataset.
dataset_infos.json ADDED
@@ -0,0 +1,65 @@
+ {
+ "openai_humaneval": {
+ "description": "The HumanEval dataset released by OpenAI contains 164 handcrafted programming challenges together with unittests to verify the viability of a proposed solution.\n",
+ "citation": "@misc{chen2021evaluating,\n title={Evaluating Large Language Models Trained on Code},\n author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser and Mohammad Bavarian and Clemens Winter and Philippe Tillet and Felipe Petroski Such and Dave Cummings and Matthias Plappert and Fotios Chantzis and Elizabeth Barnes and Ariel Herbert-Voss and William Hebgen Guss and Alex Nichol and Alex Paino and Nikolas Tezak and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain and William Saunders and Christopher Hesse and Andrew N. Carr and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba},\n year={2021},\n eprint={2107.03374},\n archivePrefix={arXiv},\n primaryClass={cs.LG}\n}",
+ "homepage": "https://github.com/openai/human-eval",
+ "license": "MIT",
+ "features": {
+ "task_id": {
+ "dtype": "string",
+ "id": null,
+ "_type": "Value"
+ },
+ "prompt": {
+ "dtype": "string",
+ "id": null,
+ "_type": "Value"
+ },
+ "canonical_solution": {
+ "dtype": "string",
+ "id": null,
+ "_type": "Value"
+ },
+ "test": {
+ "dtype": "string",
+ "id": null,
+ "_type": "Value"
+ },
+ "entry_point": {
+ "dtype": "string",
+ "id": null,
+ "_type": "Value"
+ }
+ },
+ "post_processed": null,
+ "supervised_keys": null,
+ "task_templates": null,
+ "builder_name": "openai_humaneval",
+ "config_name": "openai_humaneval",
+ "version": {
+ "version_str": "1.0.0",
+ "description": null,
+ "major": 1,
+ "minor": 0,
+ "patch": 0
+ },
+ "splits": {
+ "test": {
+ "name": "test",
+ "num_bytes": 194414,
+ "num_examples": 164,
+ "dataset_name": "openai_humaneval"
+ }
+ },
+ "download_checksums": {
+ "https://raw.githubusercontent.com/openai/human-eval/master/data/HumanEval.jsonl.gz": {
+ "num_bytes": 44877,
+ "checksum": "b796127e635a67f93fb35c04f4cb03cf06f38c8072ee7cee8833d7bee06979ef"
+ }
+ },
+ "download_size": 44877,
+ "post_processing_size": null,
+ "dataset_size": 194414,
+ "size_in_bytes": 239291
+ }
+ }
dummy/openai_humaneval/1.0.0/dummy_data.zip ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4ce64c6e256090e7dfabca6d95095f4c6cdf1b2582d5ccd528b200f70ff189c6
+ size 1846
openai_humaneval.py ADDED
@@ -0,0 +1,78 @@
+ import json
+
+ import datasets
+
+
+ _DESCRIPTION = """\
+ The HumanEval dataset released by OpenAI contains 164 handcrafted programming challenges together with unittests to verify the viability of a proposed solution.
+ """
+ _URL = "https://raw.githubusercontent.com/openai/human-eval/master/data/HumanEval.jsonl.gz"
+
+ _CITATION = """\
+ @misc{chen2021evaluating,
+ title={Evaluating Large Language Models Trained on Code},
+ author={Mark Chen and Jerry Tworek and Heewoo Jun and Qiming Yuan and Henrique Ponde de Oliveira Pinto and Jared Kaplan and Harri Edwards and Yuri Burda and Nicholas Joseph and Greg Brockman and Alex Ray and Raul Puri and Gretchen Krueger and Michael Petrov and Heidy Khlaaf and Girish Sastry and Pamela Mishkin and Brooke Chan and Scott Gray and Nick Ryder and Mikhail Pavlov and Alethea Power and Lukasz Kaiser and Mohammad Bavarian and Clemens Winter and Philippe Tillet and Felipe Petroski Such and Dave Cummings and Matthias Plappert and Fotios Chantzis and Elizabeth Barnes and Ariel Herbert-Voss and William Hebgen Guss and Alex Nichol and Alex Paino and Nikolas Tezak and Jie Tang and Igor Babuschkin and Suchir Balaji and Shantanu Jain and William Saunders and Christopher Hesse and Andrew N. Carr and Jan Leike and Josh Achiam and Vedant Misra and Evan Morikawa and Alec Radford and Matthew Knight and Miles Brundage and Mira Murati and Katie Mayer and Peter Welinder and Bob McGrew and Dario Amodei and Sam McCandlish and Ilya Sutskever and Wojciech Zaremba},
+ year={2021},
+ eprint={2107.03374},
+ archivePrefix={arXiv},
+ primaryClass={cs.LG}
+ }"""
+
+ _HOMEPAGE = "https://github.com/openai/human-eval"
+
+ _LICENSE = "MIT"
+
+
+ class OpenaiHumaneval(datasets.GeneratorBasedBuilder):
+     """HumanEval: A benchmark for code generation."""
+
+     VERSION = datasets.Version("1.0.0")
+
+     BUILDER_CONFIGS = [
+         datasets.BuilderConfig(
+             name="openai_humaneval",
+             version=datasets.Version("1.0.0"),
+             description=_DESCRIPTION,
+         )
+     ]
+
+     def _info(self):
+         features = datasets.Features(
+             {
+                 "task_id": datasets.Value("string"),
+                 "prompt": datasets.Value("string"),
+                 "canonical_solution": datasets.Value("string"),
+                 "test": datasets.Value("string"),
+                 "entry_point": datasets.Value("string"),
+             }
+         )
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=features,
+             supervised_keys=None,
+             homepage=_HOMEPAGE,
+             license=_LICENSE,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         """Returns SplitGenerators."""
+         data_dir = dl_manager.download_and_extract(_URL)
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TEST,
+                 gen_kwargs={
+                     "filepath": data_dir,
+                 },
+             )
+         ]
+
+     def _generate_examples(self, filepath):
+         """Yields examples."""
+         with open(filepath, encoding="utf-8") as file:
+             data = [json.loads(line) for line in file]
+         id_ = 0
+         for sample in data:
+             yield id_, sample
+             id_ += 1