---
license: bigcode-openrail-m
pipeline_tag: text-generation
library_name: gguf
---
**NOTE**: This model is currently unsupported; these quants are for testing [PR#5795](https://github.com/ggerganov/llama.cpp/pull/5795).
GGUF quants for https://huggingface.co/bigcode/starcoder2-15b.
> StarCoder2-15B model is a 15B parameter model trained on 600+ programming languages from The Stack v2, with opt-out requests excluded. The model uses Grouped Query Attention, a context window of 16,384 tokens with a sliding window attention of 4,096 tokens, and was trained using the Fill-in-the-Middle objective on 4+ trillion tokens.
| Layers | Context | Template (None/Base Model) |
| --- | --- | --- |
| <pre>40</pre> | <pre>16384</pre> | <pre>{prompt}</pre> |
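Since this is a base model with no chat template, the prompt is passed through verbatim, as the table above shows. A minimal sketch of running one of these quants with llama.cpp's `main` example, built from the PR branch linked above (the quant filename below is a placeholder for whichever file you download):

```shell
# Sketch: plain completion with llama.cpp's main example.
# -c 16384 uses the model's full context window; the prompt is raw text
# because the base model has no chat template.
./main \
  -m starcoder2-15b-Q4_K_M.gguf \
  -c 16384 \
  -p "def fibonacci(n):"
```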