Modalities: Text
Formats: parquet
Size: < 1K
Libraries: Datasets, pandas
License:
Areyde committed on
Commit 4a23160
1 Parent(s): 819b804

Update README.md

Files changed (1)
  1. README.md +7 -7
README.md CHANGED
@@ -29,19 +29,19 @@ configs:
   path: data/train-*
 ---
 
-# Evaluation summary
+# Benchmark summary
 
 We introduce HumanEval for Kotlin, created from scratch by human experts.
-All HumanEval solutions and tests are written by an expert olympiad programmer with 6 years experience in Kotlin, and independently checked by a programmer with 4 years experience in Kotlin.
-The tests we implement are eqivalent to the original HumanEval tests for Python, and we fix the prompt signatures to address the generic variable signature we describe above.
+Solutions and tests for all 161 HumanEval tasks are written by an expert olympiad programmer with 6 years of experience in Kotlin, and independently checked by a programmer with 4 years of experience in Kotlin.
+The tests we implement are equivalent to the original HumanEval tests for Python.
 
 # How to use
 
-The evaluation presented as dataset which is prepared in a format suitable for MXEval and can be easily integrated into the MXEval pipeline.
+The benchmark is prepared in a format suitable for MXEval and can be easily integrated into the MXEval pipeline.
 
-During the code generation step, we use early stopping on the `}\n}` sequence to expedite the process. We also perform some code post-processing before evaluation—specifically, we remove all comments and signatures.
+When testing models on this benchmark, during the code generation step we use early stopping on the `}\n}` sequence to expedite the process. We also perform some code post-processing before evaluation; specifically, we remove all comments and signatures.
 
-The early stopping method, post-processing steps, and evaluation code are available in the example below.
+The code for running an example model on the benchmark, using the early stopping and post-processing, is available below.
 
 ```python
 import json
@@ -151,6 +151,6 @@ print(f'Pass rate: {correct/total}')
 ```
 
 
-# Results:
+# Results
 
 We evaluated multiple coding models using this benchmark, and the results are presented in the table below.
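The README's full evaluation script is elided from this diff. For orientation, here is a minimal sketch of loading the benchmark with the Datasets library listed in the card metadata above; the repository id and the column names are assumptions, not taken from this page.

```python
# Minimal sketch (assumption: the repository id and column names are illustrative).
from datasets import load_dataset

tasks = load_dataset("JetBrains/Kotlin_HumanEval")["train"]  # assumed repo id
print(len(tasks))        # the card lists the size as < 1K rows
print(tasks[0].keys())   # MXEval-style fields such as prompt and test are expected
```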
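Early stopping on the `}\n}` sequence could be wired into a `transformers` generation loop along these lines. This is a hedged sketch rather than the pipeline from the elided example; the model name and toy prompt are placeholders.

```python
# Hedged sketch: stop generation as soon as the decoded completion contains the
# "}\n}" sequence mentioned in the README.
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          StoppingCriteria, StoppingCriteriaList)

class StopOnClosingBraces(StoppingCriteria):
    def __init__(self, tokenizer, prompt_len, stop_str="}\n}"):
        self.tokenizer = tokenizer
        self.prompt_len = prompt_len
        self.stop_str = stop_str

    def __call__(self, input_ids, scores, **kwargs):
        completion = self.tokenizer.decode(input_ids[0][self.prompt_len:],
                                           skip_special_tokens=True)
        return self.stop_str in completion

model_name = "bigcode/starcoder2-3b"  # placeholder model, purely illustrative
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "fun add(a: Int, b: Int): Int {\n"  # toy prompt, not a dataset task
inputs = tokenizer(prompt, return_tensors="pt")
criteria = StoppingCriteriaList([StopOnClosingBraces(tokenizer, inputs.input_ids.shape[1])])
output = model.generate(**inputs, max_new_tokens=256, stopping_criteria=criteria)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```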
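The comment and signature stripping could look roughly like the sketch below; the exact rules used for evaluation are in the elided script, so the regexes and the helper names here are assumptions.

```python
# Hedged sketch of the described post-processing: drop comments from the generated
# Kotlin code and cut the completion back to the text produced after the prompt.
# The real rules live in the elided README example; these regexes are assumptions.
import re

def strip_kotlin_comments(code: str) -> str:
    code = re.sub(r"/\*.*?\*/", "", code, flags=re.DOTALL)  # block comments
    code = re.sub(r"//[^\n]*", "", code)                     # line comments
    return code

def strip_prompt_signature(full_text: str, prompt: str) -> str:
    # Keep only what the model generated after the prompt (which carries the signature).
    return full_text[len(prompt):] if full_text.startswith(prompt) else full_text

sample = "fun add(a: Int, b: Int): Int {\n    // add the numbers\n    return a + b\n}"
print(strip_kotlin_comments(sample))
```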