---
license: apache-2.0
datasets:
- JetBrains/KExercises
base_model: deepseek-ai/deepseek-coder-6.7b-base
results:
- task:
    type: text-generation
  dataset:
    name: MultiPL-HumanEval (Kotlin)
    type: openai_humaneval
  metrics:
  - name: pass@1
    type: pass@1
    value: 55.28
tags:
- code
---

# Kexer models

Kexer models are a collection of open-source generative text models fine-tuned on the Kotlin Exercises dataset.
This is the repository for the fine-tuned Deepseek-coder-6.7b model in the Hugging Face Transformers format.

# Model use

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the pre-trained model and tokenizer
model_name = 'JetBrains/Deepseek-7B-Kexer'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to('cuda')

# Create and encode input
input_text = """\
This function takes an integer n and returns factorial of a number:
fun factorial(n: Int): Int {\
"""
input_ids = tokenizer.encode(
    input_text, return_tensors='pt'
).to('cuda')

# Generate
output = model.generate(
    input_ids,
    max_length=60,
    num_return_sequences=1,
    early_stopping=True,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode output
generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
print(generated_text)
```

As with the base model, fill-in-the-middle (FIM) generation is supported. To use it, the prompt must follow this format:

```
'<|fim▁begin|>' + prefix + '<|fim▁hole|>' + suffix + '<|fim▁end|>'
```

A short FIM usage sketch is included at the end of this card.

# Training setup

The model was trained on a single A100 GPU with the following hyperparameters:

| **Hyperparameter** | **Value**                   |
|:------------------:|:---------------------------:|
| `warmup`           | 10%                         |
| `max_lr`           | 1e-4                        |
| `scheduler`        | linear                      |
| `total_batch_size` | 256 (~130K tokens per step) |
| `num_epochs`       | 4                           |

More details about fine-tuning can be found in the technical report.

# Fine-tuning data

For this model, we used 15K examples from the [Kotlin Exercises dataset](https://huggingface.co/datasets/JetBrains/KExercises). Every example follows a HumanEval-like format. In total, the dataset contains about 3.5M tokens. For more information about the dataset, follow the link.

# Evaluation

For evaluation, we used the Kotlin HumanEval benchmark ([more information here](https://huggingface.co/datasets/JetBrains/Kotlin_HumanEval)).

Results:

| **Model name**     | **Kotlin HumanEval Pass Rate** |
|:------------------:|:------------------------------:|
| `base model`       | 40.99                          |
| `fine-tuned model` | 55.28                          |

# Ethical Considerations and Limitations

Deepseek-7B-Kexer and its variants are a new technology that carries risks with use. The testing conducted to date could not cover all scenarios. For these reasons, as with all LLMs, Deepseek-7B-Kexer's potential outputs cannot be predicted in advance, and the model may in some instances produce inaccurate or objectionable responses to user prompts. The model was fine-tuned on a specific data format (Kotlin tasks), and deviation from this format can also lead to inaccurate or undesirable responses to user queries. Therefore, before deploying any applications of Deepseek-7B-Kexer, developers should perform safety testing and tuning tailored to their specific applications of the model.
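
# FIM usage example

Below is a minimal fill-in-the-middle sketch, assuming the DeepSeek FIM special tokens shown in the "Model use" section; the `prefix`/`suffix` strings and generation settings are illustrative and not part of the original card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Same checkpoint as above; the FIM tokens are assumed to be in the tokenizer's vocabulary.
model_name = 'JetBrains/Deepseek-7B-Kexer'
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).to('cuda')

# Illustrative prefix/suffix: the model is asked to fill in the body of a Kotlin function.
prefix = "fun sumOfSquares(numbers: List<Int>): Int {\n"
suffix = "\n}"
fim_prompt = '<|fim▁begin|>' + prefix + '<|fim▁hole|>' + suffix + '<|fim▁end|>'

input_ids = tokenizer.encode(fim_prompt, return_tensors='pt').to('cuda')
output = model.generate(
    input_ids,
    max_new_tokens=64,
    pad_token_id=tokenizer.eos_token_id,
)

# Decode only the newly generated tokens, i.e. the proposed middle part.
middle = tokenizer.decode(output[0][input_ids.shape[1]:], skip_special_tokens=True)
print(middle)
```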