---
dataset_info:
  features:
  - name: task_id
    dtype: string
  - name: language
    dtype: string
  - name: prompt
    dtype: string
  - name: test
    dtype: string
  - name: entry_point
    dtype: string
  splits:
  - name: multi-humaneval_python
    num_bytes: 165716
    num_examples: 164
  download_size: 67983
  dataset_size: 165716
license: apache-2.0
task_categories:
- text-generation
tags:
- mxeval
- code-generation
- multi-humaneval
- humaneval
pretty_name: multi-humaneval
language:
- en
---
# Multi-HumanEval

## Table of Contents
- [multi-humaneval](#multi-humaneval)
  - [Table of Contents](#table-of-contents)
  - [Dataset Description](#dataset-description)
    - [Dataset Summary](#dataset-summary)
    - [Related Tasks and Leaderboards](#related-tasks-and-leaderboards)
    - [Languages](#languages)
  - [Dataset Structure](#dataset-structure)
    - [Data Instances](#data-instances)
    - [Data Fields](#data-fields)
    - [Data Splits](#data-splits)
  - [Executional Correctness](#execution)
    - [Execution Example](#execution-example)
    - [Considerations for Using the Data](#considerations-for-using-the-data)
  - [Dataset Creation](#dataset-creation)
    - [Curation Rationale](#curation-rationale)
    - [Personal and Sensitive Information](#personal-and-sensitive-information)
    - [Social Impact of Dataset](#social-impact-of-dataset)
  - [Additional Information](#additional-information)
    - [Dataset Curators](#dataset-curators)
    - [Licensing Information](#licensing-information)
    - [Citation Information](#citation-information)
    - [Contributions](#contributions)

# multi-humaneval

## Dataset Description

- **Repository:** [GitHub Repository](https://github.com/amazon-science/mbxp-exec-eval)
- **Paper:** [Multi-lingual Evaluation of Code Generation Models](https://openreview.net/forum?id=Bo7eeXm6An8)

### Dataset Summary

This repository contains data and code to perform execution-based multi-lingual evaluation of code generation capabilities across three multi-lingual benchmarks: MBXP, multi-lingual MathQA (MathQA-X), and multi-lingual HumanEval (this dataset).
<br>Results and findings can be found in the paper ["Multi-lingual Evaluation of Code Generation Models"](https://arxiv.org/abs/2210.14868).


### Related Tasks and Leaderboards
* [Multi-HumanEval](https://huggingface.co/datasets/mxeval/multi-humaneval)
* [MBXP](https://huggingface.co/datasets/mxeval/mbxp)
* [MathQA-X](https://huggingface.co/datasets/mxeval/mathqa-x)

### Languages
The programming problems are written in multiple programming languages and contain English natural text in comments and docstrings.


## Dataset Structure
To look up the currently supported language configurations:
```python
from datasets import get_dataset_config_names

get_dataset_config_names("mxeval/multi-humaneval")
# ['python', 'csharp', 'go', 'java', 'javascript', 'kotlin', 'perl', 'php', 'ruby', 'scala', 'swift', 'typescript']
```
To load the dataset for a specific language:
```python
from datasets import load_dataset

load_dataset("mxeval/multi-humaneval", "python")
# DatasetDict({
#     test: Dataset({
#         features: ['task_id', 'language', 'prompt', 'test', 'entry_point', 'canonical_solution', 'description'],
#         num_rows: 164
#     })
# })
```

### Data Instances

An example of a dataset instance:

```python
{
  "task_id": "HumanEval/0",
  "language": "python",
  "prompt": "from typing import List\n\n\ndef has_close_elements(numbers: List[float], threshold: float) -> bool:\n    \"\"\" Check if in given list of numbers, are any two numbers closer to each other than\n    given threshold.\n    >>> has_close_elements([1.0, 2.0, 3.0], 0.5)\n    False\n    >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\n    True\n    \"\"\"\n",
  "test": "\n\nMETADATA = {\n    \"author\": \"jt\",\n    \"dataset\": \"test\"\n}\n\n\ndef check(candidate):\n    assert candidate([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.3) == True\n    assert candidate([1.0, 2.0, 3.9, 4.0, 5.0, 2.2], 0.05) == False\n    assert candidate([1.0, 2.0, 5.9, 4.0, 5.0], 0.95) == True\n    assert candidate([1.0, 2.0, 5.9, 4.0, 5.0], 0.8) == False\n    assert candidate([1.0, 2.0, 3.0, 4.0, 5.0, 2.0], 0.1) == True\n    assert candidate([1.1, 2.2, 3.1, 4.1, 5.1], 1.0) == True\n    assert candidate([1.1, 2.2, 3.1, 4.1, 5.1], 0.5) == False\n\n",
  "entry_point": "has_close_elements",
  "canonical_solution": "    for idx, elem in enumerate(numbers):\n        for idx2, elem2 in enumerate(numbers):\n            if idx != idx2:\n                distance = abs(elem - elem2)\n                if distance < threshold:\n                    return True\n\n    return False\n",
  "description": "Check if in given list of numbers, are any two numbers closer to each other than\n    given threshold.\n    >>> has_close_elements([1.0, 2.0, 3.0], 0.5)\n    False\n    >>> has_close_elements([1.0, 2.8, 3.0, 4.0, 5.0, 2.0], 0.3)\n    True"
}
```

### Data Fields

- `task_id`: identifier for the data sample
- `prompt`: input for the model containing function header and docstrings
- `canonical_solution`: solution for the problem in the `prompt`
- `description`: task description
- `test`: contains function to test generated code for correctness
- `entry_point`: entry point for test
- `language`: programming language identifier used to select the appropriate subprocess call for program execution
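
The fields fit together in the standard HumanEval way: `prompt` plus a completion forms the candidate program, `test` defines a `check(candidate)` function, and `entry_point` names the function under test. The sketch below is a minimal illustration for the `python` configuration only, using the canonical solution as the completion; for other languages (and for real evaluations), use the `mbxp-exec-eval` harness described under [Execution](#execution).

```python
# Minimal sketch (python configuration only); assumes the HumanEval convention
# that `test` defines check(candidate) and `entry_point` names the function under test.
from datasets import load_dataset

problem = load_dataset("mxeval/multi-humaneval", "python", split="test")[0]

# prompt + completion forms the candidate program; the canonical solution
# stands in for a model-generated completion here.
program = problem["prompt"] + problem["canonical_solution"] + problem["test"]

namespace = {}
exec(program, namespace)  # defines the candidate function and check()
namespace["check"](namespace[problem["entry_point"]])  # raises AssertionError on failure
print("passed")
```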


### Data Splits

 - Multi-HumanEval
   - Python
   - Csharp
   - Go
   - Java
   - Javascript
   - Kotlin
   - Perl
   - Php
   - Ruby
   - Scala
   - Swift
   - Typescript
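
Each language above corresponds to one dataset configuration. As a quick sketch (assuming the configuration names returned by `get_dataset_config_names` and that every configuration exposes a `test` split, as in the example above), all of them can be loaded in a loop:

```python
from datasets import get_dataset_config_names, load_dataset

# Iterate over every language configuration and report its size.
for lang in get_dataset_config_names("mxeval/multi-humaneval"):
    ds = load_dataset("mxeval/multi-humaneval", lang, split="test")
    print(f"{lang}: {len(ds)} problems")
```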

## Dataset Creation

### Curation Rationale

Since code generation models are often trained on dumps of GitHub, a dataset not included in those dumps was necessary to evaluate such models properly. However, because this dataset is published on GitHub, it is likely to be included in future dumps.

### Personal and Sensitive Information

None.

### Social Impact of Dataset
With this dataset, code-generating models can be evaluated more thoroughly, which should lead to fewer issues being introduced when such models are used.

## Execution

### Execution Example
Install the repo [mbxp-exec-eval](https://github.com/amazon-science/mbxp-exec-eval) to execute generations or canonical solutions for the prompts from this dataset.

```python
>>> from datasets import load_dataset
>>> from mxeval.execution import check_correctness
>>> humaneval_python = load_dataset("mxeval/multi-humaneval", "python", split="test")
>>> example_problem = humaneval_python[0]
>>> check_correctness(example_problem, example_problem["canonical_solution"], timeout=20.0)
{'task_id': 'HumanEval/0', 'passed': True, 'result': 'passed', 'completion_id': None, 'time_elapsed': 9.636878967285156}
```
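
Building on the example above, one simple way to score a whole split is to loop `check_correctness` over every problem and count passes. The sketch below is illustrative rather than the official evaluation script; it reuses the canonical solutions as stand-in generations and assumes the `check_correctness(problem, completion, timeout)` call shown above.

```python
from datasets import load_dataset
from mxeval.execution import check_correctness

problems = load_dataset("mxeval/multi-humaneval", "python", split="test")

results = []
for problem in problems:
    # Substitute a model-generated completion for the canonical solution here.
    completion = problem["canonical_solution"]
    results.append(check_correctness(problem, completion, timeout=20.0))

pass_rate = sum(r["passed"] for r in results) / len(results)
print(f"Fraction of problems passed: {pass_rate:.1%}")
```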

### Considerations for Using the Data
The tests in this dataset are meant to be run against model-generated code, so make sure to sandbox the execution environment.

### Dataset Curators
AWS AI Labs

### Licensing Information

[LICENSE](https://huggingface.co/datasets/mxeval/multi-humaneval/blob/main/multi-humaneval-LICENSE) <br>
[THIRD PARTY LICENSES](https://huggingface.co/datasets/mxeval/multi-humaneval/blob/main/THIRD_PARTY_LICENSES)

### Citation Information
```
@article{mbxp_athiwaratkun2022,
  title = {Multi-lingual Evaluation of Code Generation Models},
  author = {Athiwaratkun, Ben and
   Gouda, Sanjay Krishna and
   Wang, Zijian and
   Li, Xiaopeng and
   Tian, Yuchen and
   Tan, Ming and
   Ahmad, Wasi Uddin and
   Wang, Shiqi and
   Sun, Qing and
   Shang, Mingyue and
   Gonugondla, Sujan Kumar and
   Ding, Hantian and
   Kumar, Varun and
   Fulton, Nathan and
   Farahani, Arash and
   Jain, Siddhartha and
   Giaquinto, Robert and
   Qian, Haifeng and
   Ramanathan, Murali Krishna and
   Nallapati, Ramesh and
   Ray, Baishakhi and
   Bhatia, Parminder and
   Sengupta, Sudipta and
   Roth, Dan and
   Xiang, Bing},
  doi = {10.48550/ARXIV.2210.14868},
  url = {https://arxiv.org/abs/2210.14868},
  keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```

### Contributions

[skgouda@](https://github.com/sk-g) [benathi@](https://github.com/benathi)