---
license: apache-2.0
dataset_info:
  features:
  - name: task_id
    dtype: string
  - name: prompt
    dtype: string
  - name: entry_point
    dtype: string
  - name: test
    dtype: string
  - name: description
    dtype: string
  - name: language
    dtype: string
  - name: canonical_solution
    sequence: string
  splits:
  - name: train
    num_bytes: 505355
    num_examples: 161
  download_size: 174830
  dataset_size: 505355
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---

# Evaluation summary

We introduce HumanEval for Kotlin, created from scratch by human experts.
All HumanEval solutions and tests were written by an expert olympiad programmer with six years of experience in Kotlin, and independently checked by a programmer with four years of experience in Kotlin.
The tests we implement are equivalent to the original HumanEval tests for Python, and we adapt the prompt signatures to resolve the generic variable signatures of the original prompts.
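
Each record follows the original HumanEval layout, using the fields listed in the metadata above. A minimal sketch of loading the dataset and inspecting one record (the comments describe the expected contents, not verified output):

```python
from datasets import load_dataset

dataset = load_dataset("jetbrains/Kotlin_HumanEval")['train']
record = dataset[0]

print(record['task_id'])      # task identifier
print(record['prompt'])       # Kotlin function signature and docstring to complete
print(record['test'])         # Kotlin tests, equivalent to the Python originals
```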

# How to use 

The evaluation is presented as a dataset prepared in a format suitable for MXEval, so it can be easily integrated into the MXEval pipeline.
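
Concretely, MXEval consumes completions as JSON Lines, one record per task. The script below writes records of the following shape (the `task_id` value here is a hypothetical placeholder; real values come from the dataset):

```python
# One line of the completions file passed to evaluate_functional_correctness.
{"task_id": "<task_id from the dataset>", "completion": "    return n * 2\n}", "language": "kotlin"}
```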

During the code generation step, we use early stopping on the `\n}\n` sequence (the closing brace of the top-level function) to expedite the process. We also post-process the generated code before evaluation: specifically, we remove all comments and the function signature.

The early stopping method, post-processing steps, and evaluation code are available in the example below.

```python
import torch
import jsonlines
import re
from tqdm import tqdm 
from transformers import (
    AutoTokenizer,
    AutoModelForCausalLM,
    StoppingCriteria,
    StoppingCriteriaList,
)
from mxeval.evaluation import evaluate_functional_correctness
from datasets import load_dataset

class StoppingCriteriaSub(StoppingCriteria):
    """Stops generation once the decoded tail of the sequence matches `stops`."""

    def __init__(self, stops, tokenizer):
        super().__init__()
        self.stops = stops  # used as a regex pattern below
        self.tokenizer = tokenizer

    def __call__(
        self, input_ids: torch.LongTensor, scores: torch.FloatTensor, **kwargs
    ) -> bool:
        # Decoding the last three tokens is enough to cover the "\n}\n"
        # sequence that closes a top-level Kotlin function.
        last_three_tokens = [int(x) for x in input_ids.data[0][-3:]]
        decoded_last_three_tokens = self.tokenizer.decode(last_three_tokens)
        return bool(re.search(self.stops, decoded_last_three_tokens))


def generate(problem):
    # Stop as soon as the model emits "\n}\n", i.e. once the generated
    # function body is closed, to avoid producing unrelated trailing code.
    stopping_criteria = StoppingCriteriaList(
        [StoppingCriteriaSub(stops="\n}\n", tokenizer=tokenizer)]
    )

    problem = tokenizer.encode(problem, return_tensors="pt").to('cuda')
    sample = model.generate(
        problem,
        max_new_tokens=256,
        min_new_tokens=128,
        pad_token_id=tokenizer.eos_token_id,
        do_sample=False,  # greedy decoding, so no temperature is needed
        num_beams=1,
        stopping_criteria=stopping_criteria,
    )

    return tokenizer.decode(sample[0], skip_special_tokens=True)

def clean_answer(code):
    # Strip line comments and block comments from the completion.
    code = re.sub(r"//.*", "", code)
    code = re.sub(r"/\*.*?\*/", "", code, flags=re.DOTALL)

    # Drop everything up to and including the function signature: the
    # signature is already part of the prompt, so only the body is kept.
    lines = code.split("\n")
    for i, line in enumerate(lines):
        if line.startswith("fun "):
            return "\n".join(lines[i + 1:])

    return code


model_name = "JetBrains/CodeLlama-7B-Kexer"
dataset = load_dataset("jetbrains/Kotlin_HumanEval")['train']
problem_dict = {problem['task_id']: problem for problem in dataset}

model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16).to('cuda')
tokenizer = AutoTokenizer.from_pretrained(model_name)


output = []
for key in tqdm(list(problem_dict.keys()), leave=False):
    problem = problem_dict[key]["prompt"]
    answer = generate(problem)
    answer = clean_answer(answer)
    output.append({"task_id": key, "completion": answer, "language": "kotlin"})


output_file = "answers"
with jsonlines.open(output_file, mode="w") as writer:
    for line in output:
        writer.write(line)

# Execute each completion against its Kotlin tests and compute pass@k.
evaluate_functional_correctness(
    sample_file=output_file,
    k=[1],
    n_workers=16,
    timeout=15,
    problem_file=problem_dict,
)

```
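
With `k=[1]` and a single greedy sample per task, `evaluate_functional_correctness` executes every completion against its Kotlin tests and reports the pass@1 score.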


# Results

We evaluated multiple coding models using this benchmark, and the results are presented in the table below.