GGUF and imatrix files of https://huggingface.co/microsoft/Phi-3-medium-128k-instruct

Chat Format

<|user|>\nQuestion <|end|>\n<|assistant|>

For example:

<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>

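Below is a minimal sketch of feeding this prompt format to one of the GGUF files with llama-cpp-python. The model file name (and the q4_k_m quant it implies) is an assumption; substitute whichever file you actually downloaded.

```python
# Minimal sketch: run a Phi-3-medium GGUF with the chat format above.
from llama_cpp import Llama

llm = Llama(
    model_path="phi3-medium-128k-q4_k_m.gguf",  # assumed file name, use your own download
    n_ctx=4096,  # long context is untested beyond 4k, see the note below
)

prompt = (
    "<|user|>\n"
    "How to explain Internet for a medieval knight?<|end|>\n"
    "<|assistant|>\n"
)

out = llm(prompt, max_tokens=256, stop=["<|end|>"])
print(out["choices"][0]["text"])
```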
More uploads and perplexity benchmarks will be posted soon. The long-context configuration may change; it has only been tested up to 4k context so far.

Cheers, Nisten
