Update README.md

# Benchmark summary

We introduce HumanEval for Kotlin, created from scratch by human experts.
Solutions and tests for all 161 HumanEval tasks are written by an expert olympiad programmer with 6 years of experience in Kotlin, and independently checked by a programmer with 4 years of experience in Kotlin. The tests we implement are equivalent to the original HumanEval tests for Python.

# How to use

The benchmark is prepared in a format suitable for MXEval and can be easily integrated into the MXEval pipeline.
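
As a quick way to inspect the tasks before wiring them into the MXEval pipeline, the sketch below loads the dataset with the Hugging Face `datasets` library. The dataset ID and the record layout are assumptions for illustration, not details confirmed by this README.

```python
# A minimal sketch, assuming the tasks are hosted on the Hugging Face Hub;
# the dataset ID below is an assumption, not confirmed by this README.
from datasets import load_dataset

tasks = load_dataset("JetBrains/Kotlin_HumanEval", split="train")
print(len(tasks))       # number of tasks (161 per the summary above)
print(tasks[0].keys())  # inspect the MXEval-style fields of one record
```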

When testing models on this benchmark, during the code generation step we use early stopping on the `}\n}` sequence to expedite the process. We also perform some code post-processing before evaluation: specifically, we remove all comments and signatures.
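
The snippet below is a minimal sketch of these two steps, assuming a Hugging Face `transformers` causal LM. The model ID and prompt are placeholders, the comment-stripping regexes are illustrative rather than the exact rules used here, and signature removal is omitted for brevity.

```python
# A minimal sketch of early stopping on "}\n}" plus comment stripping,
# assuming a Hugging Face transformers causal LM. The model ID is a
# placeholder, not a model evaluated in this repository.
import re

from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          StoppingCriteria, StoppingCriteriaList)

class StopOnSubstring(StoppingCriteria):
    """Stop generation once the newly generated text contains `stop`."""

    def __init__(self, tokenizer, stop, prompt_len):
        self.tokenizer = tokenizer
        self.stop = stop
        self.prompt_len = prompt_len

    def __call__(self, input_ids, scores, **kwargs):
        generated = self.tokenizer.decode(input_ids[0][self.prompt_len:])
        return self.stop in generated

def strip_comments(code: str) -> str:
    """Remove Kotlin block and line comments before evaluation."""
    code = re.sub(r"/\*.*?\*/", "", code, flags=re.DOTALL)
    return re.sub(r"//[^\n]*", "", code)

tokenizer = AutoTokenizer.from_pretrained("my-org/my-code-model")  # placeholder
model = AutoModelForCausalLM.from_pretrained("my-org/my-code-model")

prompt = "fun add(a: Int, b: Int): Int {\n"  # a Kotlin signature from a task
inputs = tokenizer(prompt, return_tensors="pt")
prompt_len = inputs.input_ids.shape[1]
stopping = StoppingCriteriaList([StopOnSubstring(tokenizer, "}\n}", prompt_len)])

output = model.generate(**inputs, max_new_tokens=256,
                        stopping_criteria=stopping)
completion = tokenizer.decode(output[0][prompt_len:],
                              skip_special_tokens=True)
print(strip_comments(completion))
```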

The code for running an example model on the benchmark using the early stopping and post-processing is available below.

```python
import json

# ...

print(f'Pass rate: {correct/total}')
```

# Results

We evaluated multiple coding models using this benchmark, and the results are presented in the table below.