---
license: mit
dataset_info:
  config_name: jhumaneval
  features:
  - name: task_id
    dtype: string
  - name: prompt_en
    dtype: string
  - name: prompt
    dtype: string
  - name: entry_point
    dtype: string
  - name: canonical_solution
    dtype: string
  - name: test
    dtype: string
  splits:
  - name: test
    num_bytes: 275012
    num_examples: 164
  download_size: 66597
  dataset_size: 275012
task_categories:
- text2text-generation
language:
- ja
- en
source_datasets:
- openai_humaneval
size_categories:
- n<1K
---

# Dataset Card for JHumanEval: Japanese Hand-Translated HumanEval

## Dataset Summary
This is a Japanese-translated version of HumanEval, the standard benchmark for evaluating the code-generation ability of LLMs, introduced in the paper "Evaluating Large Language Models Trained on Code".

Every prompt was first machine-translated (DeepL, GPT-4) and then fully revised by hand; Japanese programmers checked that each translated prompt can be read, understood, and implemented as code. Mistakes in the original English HumanEval were deliberately left uncorrected so that, like the original, the benchmark measures code generation from imperfect documentation. Use this dataset as a benchmark for Japanese LLMs.

## Languages
The programming problems are written in Python. The `prompt` field contains Japanese natural-language text in docstrings, while `prompt_en` retains the original English.

## Dataset Structure

```python
from datasets import load_dataset

ds = load_dataset("kogi-jwu/jhumaneval")
print(ds)
# DatasetDict({
#     test: Dataset({
#         features: ['task_id', 'prompt_en', 'prompt', 'entry_point', 'canonical_solution', 'test'],
#         num_rows: 164
#     })
# })
```
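
A typical use is to feed the Japanese `prompt` to the model under evaluation and collect one completion per task. A minimal sketch, where `generate_completion` is a hypothetical placeholder for your model's API:

```python
from datasets import load_dataset

ds = load_dataset("kogi-jwu/jhumaneval", split="test")

def generate_completion(prompt: str) -> str:
    # Hypothetical stand-in for a real model call; returns a trivial body
    # so the loop runs end to end. Replace with your model's API.
    return "    pass\n"

samples = [
    {"task_id": ex["task_id"], "completion": generate_completion(ex["prompt"])}
    for ex in ds
]
```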

### Data Instances
An example of a dataset instance:

```
{
    "task_id": "test/0",
    "prompt_en": "def return1():\n    \"\"\"\n    A simple function that returns the integer 1.\n    \"\"\"\n",
    "prompt": "def return1():\n    \"\"\"\n    整数1を返すシンプルな関数。\n    \"\"\"\n",
    "canonical_solution": "    return 1",
    "test": "def check(candidate):\n    assert candidate() == 1",
    "entry_point": "return1"
}
```

### Data Fields
- task_id: Unique identifier for a task.
- prompt_en: Function signature and English docstring, used as model input.
- prompt: Function signature and Japanese docstring, a parallel translation of prompt_en.
- canonical_solution: The reference implementation of the function.
- test: A check(candidate) function that verifies the correctness of generated code (see the evaluation sketch below).
- entry_point: The name of the function passed to check.
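
The fields compose the same way as in the original HumanEval harness: concatenate `prompt` with a completion, append `test`, and call `check` on the entry point. A minimal sketch that verifies the reference solution this way (note that `exec` on model-generated code should only ever run in a sandboxed process, as the original harness does):

```python
from datasets import load_dataset

ds = load_dataset("kogi-jwu/jhumaneval", split="test")
ex = ds[0]

# Build a self-contained check program, HumanEval-style:
# prompt + completion (here: the reference solution) + test + check call.
check_program = (
    ex["prompt"]
    + ex["canonical_solution"]
    + "\n"
    + ex["test"]
    + f"\ncheck({ex['entry_point']})"
)

# WARNING: exec() runs arbitrary code; sandbox this for model output.
exec(check_program, {})
print(f"{ex['task_id']}: passed")
```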

### Data Splits
The dataset consists of a single test split with 164 samples.

## Additional Information

### Licensing Information
MIT License

## GitHub Repository

[https://github.com/KuramitsuLab/jhuman-eval](https://github.com/KuramitsuLab/jhuman-eval)