
NOTE: You will need a recent build of llama.cpp to run these quants (at least commit 494c870).

GGUF importance matrix (imatrix) quants for https://huggingface.co/TechxGenus/starcoder2-15b-instruct

starcoder2-15b fine-tuned on an additional 0.7 billion high-quality, code-related tokens for 3 epochs, using DeepSpeed ZeRO 3 and Flash Attention 2 to accelerate training. It achieves 77.4 pass@1 on HumanEval-Python. The model uses the Alpaca instruction format (without the system prompt).

| Layers | Context | Template |
| --- | --- | --- |
| <pre>40</pre> | <pre>16384</pre> | <pre>### Instruction<br>{instruction}<br>### Response<br>{response}</pre> |
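
Below is a minimal sketch of running one of these quants with llama-cpp-python and the Alpaca-style template from the table above. The GGUF filename and the example instruction are placeholders; substitute the quant file you actually downloaded.

```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python).
# The model filename below is hypothetical -- replace it with your downloaded quant.
from llama_cpp import Llama


def build_prompt(instruction: str) -> str:
    # Alpaca-style template without a system prompt, matching the table above.
    return f"### Instruction\n{instruction}\n### Response\n"


llm = Llama(model_path="starcoder2-15b-instruct.IQ4_XS.gguf", n_ctx=16384)

prompt = build_prompt("Write a Python function that reverses a string.")
output = llm(prompt, max_tokens=256, stop=["### Instruction"])
print(output["choices"][0]["text"])
```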