---
license: apache-2.0
datasets:
- JetBrains/KStack
results:
  - task:
      type: text-generation
    dataset:
      name: MultiPL-HumanEval (Kotlin)
      type: openai_humaneval
    metrics:
      - name: pass@1
        type: pass@1
        value: 29.19
tags:
- code
---

# KStack-full models

KStack-full models are a collection of open-source generative text models fine-tuned on the KStack dataset with rule-based filtering. 
This repository contains the fine-tuned CodeLlama-7B model in the Hugging Face Transformers format.

# Model use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load pre-trained model and tokenizer
model_name = 'JetBrains/CodeLlama-7B-KStack-full'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to('cuda')

# Create and encode input
input_text = """\
This function takes an integer n and returns factorial of a number:
fun factorial(n: Int): Int {\
"""
input_ids = tokenizer.encode(
    input_text, return_tensors='pt'
).to('cuda')

# Generate
output = model.generate(
    input_ids, max_length=60, num_return_sequences=1, 
    pad_token_id=tokenizer.eos_token_id,
)

# Decode output
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```

As with the base model, fill-in-the-middle (FIM) completion is supported. To use it, the prompt must follow this format: 
```
'<PRE> ' + prefix + ' <SUF> ' + suffix + ' <MID>'
```
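
For example, a minimal FIM completion with the `model` and `tokenizer` loaded above might look like this (the Kotlin prefix and suffix strings are purely illustrative):

```python
# Build a FIM prompt in the format described above
prefix = "fun sum(a: Int, b: Int): Int {\n    return "
suffix = "\n}"
fim_prompt = '<PRE> ' + prefix + ' <SUF> ' + suffix + ' <MID>'

input_ids = tokenizer.encode(fim_prompt, return_tensors='pt').to('cuda')

# The completion for the missing middle part follows the <MID> token
output = model.generate(
    input_ids, max_new_tokens=32,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```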

# Training setup

The model was trained on one A100 GPU with the following hyperparameters:

|    **Hyperparameter**    |           **Value**           |
|:------------------------:|:-----------------------------:|
|         `warmup`         |              5%               |
|         `max_lr`         |             1e-6              |
|       `num_epochs`       |               1               |
|   `attention_dropout`    |              0.1              |
|       `scheduler`        |            cosine             |
|    `total_batch_size`    |  128 (~65K tokens per step)   |
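
As a rough illustration only, these settings map onto a Hugging Face `TrainingArguments` configuration along the following lines; the per-device batch size, gradient accumulation split, and mixed-precision choice are assumptions, not details from the actual training script:

```python
from transformers import TrainingArguments

# Hypothetical sketch of the hyperparameters above; not the actual training script
args = TrainingArguments(
    output_dir='codellama-7b-kstack-full',
    num_train_epochs=1,                # num_epochs = 1
    learning_rate=1e-6,                # max_lr = 1e-6
    lr_scheduler_type='cosine',        # scheduler = cosine
    warmup_ratio=0.05,                 # warmup = 5%
    per_device_train_batch_size=8,     # assumed split of the total batch
    gradient_accumulation_steps=16,    # 8 * 16 = total_batch_size of 128
    bf16=True,                         # assumed mixed precision on an A100
)
# attention_dropout = 0.1 is set on the model config rather than here, e.g.
# model.config.attention_dropout = 0.1
```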

More details about fine-tuning can be found in the technical report.

# Data filtering

To increase the quality of the dataset and filter out statistical outliers such as homework assignments, we filter dataset entries according to the following rules:
* We filter out files that belong to low-popularity repositories (the sum of stars and forks is less than 6)
* Next, we filter out files that belong to repositories with fewer than 5 Kotlin files
* Finally, we remove files with fewer than 20 SLOC

We clean the content of the remaining dataset entries according to the following rules (a code sketch of both steps follows this list):
* We remove all non-ASCII entries
* We remove all package lines such as _package kotlinx.coroutines.channels_
* We remove half of the import lines
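
For illustration, the filtering and cleaning rules above could be expressed roughly as follows; the record fields (`stars`, `forks`, `kotlin_file_count`, `content`) are hypothetical names, not the actual KStack schema:

```python
import random

def keep_file(record: dict) -> bool:
    """Repository- and file-level filters (hypothetical field names)."""
    if record['stars'] + record['forks'] < 6:   # low-popularity repositories
        return False
    if record['kotlin_file_count'] < 5:         # repos with fewer than 5 Kotlin files
        return False
    sloc = sum(1 for line in record['content'].splitlines() if line.strip())
    return sloc >= 20                           # drop files with fewer than 20 SLOC

def clean_content(content: str) -> str:
    """Content cleaning: non-ASCII, package lines, and half of the imports."""
    kept = []
    for line in content.splitlines():
        if not line.isascii():                  # one reading of the non-ASCII rule
            continue
        stripped = line.strip()
        if stripped.startswith('package '):     # drop package declarations
            continue
        if stripped.startswith('import ') and random.random() < 0.5:
            continue                            # drop roughly half of the imports
        kept.append(line)
    return '\n'.join(kept)
```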

# Evaluation 

For evaluation, we used the [Kotlin HumanEval](https://huggingface.co/datasets/JetBrains/Kotlin_HumanEval) benchmark.

Results:

|     **Model name**      | **Kotlin HumanEval pass@1** |
|:-----------------------:|:---------------------------:|
|      `base model`       |            26.09            |
|   `fine-tuned model`    |          **29.19**          |

# Ethical Considerations and Limitations

CodeLlama-7B-KStack-full and its variants are a new technology that carries risks with use. The testing conducted to date could not cover all scenarios. For these reasons, as with all LLMs, CodeLlama-7B-KStack-full's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. The model was fine-tuned on a specific data format (Kotlin tasks), and deviation from this format can also lead to inaccurate or undesirable responses to user queries. Therefore, before deploying any applications of CodeLlama-7B-KStack-full, developers should perform safety testing and tuning tailored to their specific applications of the model.