artms007 committed on
Commit 31c3cd7
1 Parent(s): f91945a

Update README.md

Files changed (1):
README.md +60 -1
README.md CHANGED
@@ -30,4 +30,63 @@ source_datasets:
 - openai_humaneval
 size_categories:
 - n<1K
- ---
+ ---
+
+ # Dataset Card for JHumanEval: Japanese Hand-Translated HumanEval
+
+ ## Dataset Summary
+ This is a hand-translated Japanese version of HumanEval, the problem-solving dataset described in the paper "Evaluating Large Language Models Trained on Code".
+
+ JHumanEval is a Japanese translation of HumanEval, the standard benchmark for the code-generation ability of LLMs.
+ All machine translations (DeepL, GPT-4) were revised by hand: Japanese programmers read each translated prompt and checked that it is understandable and sufficient to write the code.
+ However, errors in the English HumanEval were deliberately left uncorrected, so that, like HumanEval, the benchmark measures generation ability from imperfect documentation.
+ Please use it as a benchmark for Japanese LLMs.
+
+ ## Languages
+ The programming problems are written in Python and contain English and Japanese natural text in comments and docstrings.
+
+ ## Dataset Structure
+
+ ```python
+ from datasets import load_dataset
+
+ dataset = load_dataset("kogi-jwu/jhumaneval")
+ print(dataset)
+ # DatasetDict({
+ #     test: Dataset({
+ #         features: ['task_id', 'prompt_en', 'prompt', 'entry_point', 'canonical_solution', 'test'],
+ #         num_rows: 164
+ #     })
+ # })
+ ```
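+
+ A quick access sketch (the indexing calls are the standard `datasets` API; the row index 0 is arbitrary): each row pairs the English and Japanese prompts, so the two can be compared directly.
+
+ ```python
+ from datasets import load_dataset
+
+ # Load the single test split and inspect one task's parallel prompts.
+ ds = load_dataset("kogi-jwu/jhumaneval")["test"]
+ row = ds[0]
+ print(row["task_id"])
+ print(row["prompt_en"])  # function header with the English docstring
+ print(row["prompt"])     # same header with the hand-translated Japanese docstring
+ ```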
+
+ ### Data Instances
+ An example of a dataset instance:
+
+ ```python
+ {
+     "task_id": "test/0",
+     "prompt_en": "def return1():\n    \"\"\"\n    A simple function that returns the integer 1.\n    \"\"\"\n",
+     "prompt": "def return1():\n    \"\"\"\n    整数1を返すシンプルな関数。\n    \"\"\"\n",
+     "canonical_solution": "    return 1",
+     "test": "def check(candidate):\n    assert candidate() == 1",
+     "entry_point": "return1"
+ }
+ ```
+
+ ### Data Fields
+ - `task_id`: Unique identifier for a task.
+ - `prompt_en`: Function header and English docstring, used as model input.
+ - `prompt`: Function header and Japanese docstring, parallel to `prompt_en`.
+ - `canonical_solution`: The expected function implementation.
+ - `test`: Function that verifies the correctness of generated code (see the sketch after this list).
+ - `entry_point`: Name of the function the test invokes.
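+
+ To make the fields' roles concrete, here is a minimal evaluation sketch. It is not the official harness: `canonical_solution` stands in for a model completion, and a real harness must sandbox the `exec` of untrusted generated code.
+
+ ```python
+ from datasets import load_dataset
+
+ ds = load_dataset("kogi-jwu/jhumaneval")["test"]
+ ex = ds[0]
+
+ # Assemble a complete program: Japanese prompt + completion + test code.
+ program = ex["prompt"] + ex["canonical_solution"] + "\n" + ex["test"]
+
+ env = {}
+ exec(program, env)                     # defines the solution and check()
+ env["check"](env[ex["entry_point"]])   # raises AssertionError on failure
+ print(ex["task_id"], "passed")
+ ```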
+
+ ### Data Splits
+ The dataset consists of a single test split with 164 samples.
+
+ ## Additional Information
+
+ ### Licensing Information
+ MIT License
+
+ ### GitHub Repository
+ [https://github.com/KuramitsuLab/jhuman-eval](https://github.com/KuramitsuLab/jhuman-eval)