|
---
license: mit
dataset_info:
  config_name: jhumaneval
  features:
  - name: task_id
    dtype: string
  - name: prompt_en
    dtype: string
  - name: prompt
    dtype: string
  - name: entry_point
    dtype: string
  - name: canonical_solution
    dtype: string
  - name: test
    dtype: string
  splits:
  - name: test
    num_bytes: 275012
    num_examples: 164
  download_size: 66597
  dataset_size: 275012
task_categories:
- text2text-generation
language:
- ja
- en
source_datasets:
- openai_humaneval
size_categories:
- n<1K
---
|
|
|
# Dataset Card for JHumanEval: Japanese Hand-Translated HumanEval |
|
|
|
## Dataset Description |
|
|
|
- **Repository:** [GitHub Repository](https://github.com/KuramitsuLab/jhuman-eval) |
|
|
|
## Dataset Summary |
|
This is a Japanese-translated version of HumanEval, the benchmark for evaluating the code-generation ability of large language models described in the paper "Evaluating Large Language Models Trained on Code".

The prompts were first machine-translated (DeepL, GPT-4) and then fully revised by hand: Japanese programmers checked that each translated docstring can be read, understood, and implemented as code. Mistakes in the original English HumanEval were deliberately left uncorrected, so that, like HumanEval, the benchmark measures the ability to generate code from imperfect documentation. Please use it as a benchmark for Japanese LLMs.
|
|
|
## Languages |
|
The programming problems are written in Python and contain English and Japanese natural-language text in comments and docstrings. The English and Japanese versions are stored in separate fields (`prompt_en` and `prompt`).
|
|
|
## How to Load the Dataset
|
|
|
```python |
|
from datasets import load_dataset |
|
|
|
dataset = load_dataset("kogi-jwu/jhumaneval")['test'] |
|
``` |
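
Each record carries both the Japanese prompt and the parallel English prompt, so a single task can be inspected directly after loading. A minimal example (field names as defined in this card):

```python
from datasets import load_dataset

dataset = load_dataset("kogi-jwu/jhumaneval")['test']

example = dataset[0]
print(example["task_id"])    # unique identifier of the task
print(example["prompt"])     # function header with the Japanese docstring
print(example["prompt_en"])  # the same header with the English docstring
```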
|
|
|
|
|
## Dataset Structure |
|
|
|
```python
from datasets import load_dataset

load_dataset("kogi-jwu/jhumaneval")

DatasetDict({
    test: Dataset({
        features: ['task_id', 'prompt_en', 'prompt', 'entry_point', 'canonical_solution', 'test'],
        num_rows: 164
    })
})
```
|
|
|
## Data Instances |
|
An example of a dataset instance: |
|
|
|
```
{
    "task_id": "test/0",
    "prompt_en": "def return1():\n \"\"\"\n A simple function that returns the integer 1.\n \"\"\"\n",
    "prompt": "def return1():\n \"\"\"\n 整数1を返すシンプルな関数。\n \"\"\"\n",
    "canonical_solution": " return 1",
    "test": "def check(candidate):\n assert candidate() == 1",
    "entry_point": "return1"
}
```
|
|
|
## Data Fields |
|
- `task_id` : Unique identifier for the task.
- `prompt_en` : Function header and English docstring, used as model input.
- `prompt` : Function header and Japanese docstring, parallel to `prompt_en`.
- `canonical_solution` : The reference implementation of the function.
- `test` : A function that verifies the correctness of generated code.
- `entry_point` : The name of the function that the test calls.
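
Taken together, these fields form a complete, runnable test program: the function definition (`prompt` + `canonical_solution`), the `test` function, and a call to `check` on the `entry_point`. A minimal sketch that executes the reference solution directly (no sandboxing, so only suitable for trusted reference code):

```python
from datasets import load_dataset

ds = load_dataset("kogi-jwu/jhumaneval")['test']
ex = ds[0]

# Assemble the full program: function definition, test function, and the test call.
program = (
    ex["prompt"] + ex["canonical_solution"] + "\n"
    + ex["test"] + "\n"
    + f"check({ex['entry_point']})\n"
)

exec(program)  # raises AssertionError if the reference solution fails its tests
print(ex["task_id"], "passed")
```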
|
|
|
## Data Splits |
|
The dataset consists of a single `test` split with 164 examples.
|
|
|
## How to Use |
|
|
|
An example of computing pass@1 with the canonical (reference) solutions:
|
|
|
```python |
|
import os |
|
from datasets import load_dataset |
|
from evaluate import load |
|
|
|
os.environ["HF_ALLOW_CODE_EVAL"] = "1" |
|
|
|
ds = load_dataset("kogi-jwu/jhumaneval")['test']
|
|
|
code_eval = load("code_eval") |
|
|
|
candidates = [] |
|
test_cases = [] |
|
|
|
for d in ds:
    # FIXME: the canonical solution is used as the candidate here;
    # replace it with model-generated code when evaluating a model
    candidates.append([d['prompt'] + d['canonical_solution']])
    # Turn the test into an executable form by appending a call to check()
    test_cases.append([d['test'] + f"\n\ncheck({d['entry_point']})\n"])
|
|
|
pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1]) |
|
print(pass_at_k) |
|
``` |
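
To evaluate an actual model, the canonical solutions in the loop above are replaced with generated completions. A minimal sketch using the `transformers` text-generation pipeline (the model name is a placeholder; greedy decoding with a single sample corresponds to pass@1, and stop-sequence truncation of the generations is omitted for brevity):

```python
import os
from datasets import load_dataset
from evaluate import load
from transformers import pipeline

os.environ["HF_ALLOW_CODE_EVAL"] = "1"

ds = load_dataset("kogi-jwu/jhumaneval")['test']
code_eval = load("code_eval")
generator = pipeline("text-generation", model="your-code-model")  # placeholder model name

candidates, test_cases = [], []
for d in ds:
    # generated_text includes the prompt itself, so it already defines the target function
    out = generator(d["prompt"], max_new_tokens=512, do_sample=False)[0]["generated_text"]
    candidates.append([out])
    test_cases.append([d["test"] + f"\n\ncheck({d['entry_point']})\n"])

pass_at_k, results = code_eval.compute(references=test_cases, predictions=candidates, k=[1])
print(pass_at_k)
```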
|
|
|
## Additional Information |
|
|
|
### Licensing Information
|
MIT License |
|
|
|
|