---
annotations_creators:
- machine-generated
language:
- en
language_creators:
- machine-generated
- expert-generated
license:
- mit
multilinguality:
- monolingual
pretty_name: MultiPL-E
size_categories:
- 1K<n<10K
source_datasets:
- original
- extended|openai_humaneval
- extended|mbpp
tags: []
task_categories: []
task_ids: []
---
# Dataset Card for MultiPL-E
## Dataset Description
- **Homepage:** https://nuprl.github.io/MultiPL-E/
- **Repository:** https://github.com/nuprl/MultiPL-E
- **Paper:** https://arxiv.org/abs/2208.08227
- **Point of Contact:** carolyn.anderson@wellesley.edu, mfeldman@oberlin.edu, a.guha@northeastern.edu
## Dataset Summary
MultiPL-E is a dataset for evaluating large language models for code
generation that supports 18 programming languages. It takes the OpenAI
"HumanEval" and the MBPP Python benchmarks and uses small compilers to
translate them to the other languages. It is easy to add support for new
languages and benchmarks.
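Each benchmark-language pair is exposed as a separate dataset configuration.
As a minimal sketch, the standard `datasets` API can list the available
configurations; the `humaneval-lua` naming is taken from the example below,
while the exact set of names should be confirmed from the output:
```python
import datasets

# List every benchmark-language configuration of MultiPL-E.
# Names follow a "<benchmark>-<language>" pattern, e.g. "humaneval-lua".
for config in datasets.get_dataset_config_names("nuprl/MultiPL-E"):
    print(config)
```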
## Example
The following script uses the Salesforce/codegen model to generate Lua
completions and MultiPL-E to produce runnable programs with unit tests for
luaunit.
```python
import datasets
from transformers import AutoTokenizer, AutoModelForCausalLM

LANG = "lua"
MODEL_NAME = "Salesforce/codegen-350M-multi"

# Load the model in fp16 on the GPU.
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).half().cuda()
problems = datasets.load_dataset("nuprl/MultiPL-E", f"humaneval-{LANG}")

def stop_at_stop_token(decoded_string, problem):
    """
    Truncates the output at stop tokens, taking care to skip the prompt,
    which may itself contain stop tokens.
    """
    min_stop_index = len(decoded_string)
    for stop_token in problem["stop_tokens"]:
        stop_index = decoded_string.find(stop_token)
        if stop_index != -1 and stop_index > len(problem["prompt"]) and stop_index < min_stop_index:
            min_stop_index = stop_index
    return decoded_string[:min_stop_index]

for problem in problems["test"]:
    input_ids = tokenizer(
        problem["prompt"],
        return_tensors="pt",
    ).input_ids.cuda()
    # Pad with the EOS token to suppress the missing-pad-token warning.
    generated_ids = model.generate(
        input_ids, max_length=256, pad_token_id=tokenizer.eos_token_id
    )
    # Cut the completion at the first stop token after the prompt, then
    # write the program followed by its unit tests.
    truncated_string = stop_at_stop_token(tokenizer.decode(generated_ids[0]), problem)
    filename = problem["name"] + "." + LANG
    with open(filename, "w") as f:
        print(f"Created {filename}")
        f.write(truncated_string)
        f.write("\n")
        f.write(problem["tests"])
```
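Each generated file is a self-contained Lua program: the model's completion
followed by its luaunit tests, which typically exit nonzero on failure. A
minimal sketch of running them, assuming a local `lua` interpreter with
luaunit installed (neither ships with MultiPL-E):
```python
import glob
import subprocess

# Run every generated Lua program; a zero exit code means its tests passed.
# Assumes `lua` is on PATH and luaunit is installed (e.g. via luarocks).
files = glob.glob("*.lua")
passed = 0
for path in files:
    result = subprocess.run(["lua", path], capture_output=True)
    if result.returncode == 0:
        passed += 1
print(f"{passed}/{len(files)} programs passed")
```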