|
--- |
|
annotations_creators: |
|
- machine-generated |
|
language: |
|
- en |
|
language_creators: |
|
- machine-generated |
|
- expert-generated |
|
license: |
|
- mit |
|
multilinguality: |
|
- monolingual |
|
pretty_name: MultiPL-E
|
size_categories: |
|
- 1K<n<10K |
|
source_datasets: |
|
- original |
|
- extended|openai_humaneval |
|
tags: [] |
|
task_categories: [] |
|
task_ids: [] |
|
--- |
|
|
|
# Dataset Card for MultiPL-E |
|
|
|
## Dataset Description |
|
|
|
- **Homepage:** https://nuprl.github.io/MultiPL-E/ |
|
- **Repository:** https://github.com/nuprl/MultiPL-E |
|
- **Paper:** https://arxiv.org/abs/2208.08227 |
|
- **Point of Contact:** carolyn.anderson@wellesley.edu, mfeldman@oberlin.edu, a.guha@northeastern.edu |
|
|
|
## Dataset Summary |
|
|
|
MultiPL-E is a dataset for evaluating large language models for code generation that supports 18 programming languages. It takes the OpenAI "HumanEval" Python benchmark and uses small compilers to translate its problems to the other languages. It is easy to add support for new languages and benchmarks.
|
|
|
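The snippet below is a minimal sketch of how the problems for one target language might be loaded with the Hugging Face `datasets` library. The configuration name (`humaneval-lua`), the split, and the field names are assumptions made for illustration; consult the repository for the actual list of configurations and fields.

```python
# Hedged example: load one MultiPL-E language configuration and inspect a problem.
# The config name "humaneval-lua", the "test" split, and the field names below
# are assumptions, not confirmed by this card.
from datasets import load_dataset

problems = load_dataset("nuprl/MultiPL-E", "humaneval-lua", split="test")

for problem in problems:
    # Each entry is expected to carry a prompt to complete in the target
    # language, plus metadata such as the problem name (field names assumed).
    print(problem["name"])
    print(problem["prompt"])
    break
```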
|
|
|
|