
Model Summary

This repo provides the GGUF format for the Phi-3-Mini-128K-Instruct.

For more details check out the original model at microsoft/Phi-3-mini-128k-instruct.

The Phi-3-Mini-128K-Instruct is a 3.8 billion-parameter, lightweight, state-of-the-art open model trained on the Phi-3 datasets, which include both synthetic data and filtered publicly available website data, with an emphasis on high-quality and reasoning-dense properties. The model belongs to the Phi-3 family; the Mini version comes in two variants, 4K and 128K, which denote the context length (in tokens) each can support.

After initial training, the model underwent a post-training process involving supervised fine-tuning and direct preference optimization to enhance its ability to follow instructions and adhere to safety measures. When evaluated against benchmarks testing common sense, language understanding, mathematics, coding, long-context, and logical reasoning, Phi-3-Mini-128K-Instruct demonstrated robust and state-of-the-art performance among models with fewer than 13 billion parameters.

Resources and Technical Documentation:

This repo provides GGUF files and Llamafiles (d228e01d) for the Phi-3 Mini-128K-Instruct model.

| Name | Quant method | Bits | Size | Use case |
| ---- | ---- | ---- | ---- | ---- |
| Phi-3-mini-128k-instruct-Q4_K_M.gguf | Q4_K_M | 4 | 2.39 GB | medium, balanced quality - recommended |
| Phi-3-mini-128k-instruct-Q4_K_M.llamafile | Q4_K_M | 4 | 2.4 GB | medium, balanced quality - recommended |
| Phi-3-mini-128k-instruct-f16.gguf | None | 16 | 7.64 GB | minimal quality loss |
| Phi-3-mini-128k-instruct-f16.llamafile | None | 16 | 7.65 GB | minimal quality loss |

Note: When using the llamafile version, make sure to specify the context size, e.g., `./Phi-3-mini-128k-instruct-Q4_K_M.llamafile -c 0 -p "your prompt"`.
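When running the raw GGUF through llama.cpp-style tools that do not apply a chat template automatically, the prompt usually has to be wrapped in the Phi-3 instruct format by hand. A minimal sketch, assuming the `<|user|>`/`<|end|>`/`<|assistant|>` tags documented in the upstream Phi-3 model card (the helper function name here is illustrative, not part of any library):

```python
def build_phi3_prompt(user_message: str) -> str:
    """Wrap a user message in the Phi-3 instruct chat template.

    The model is expected to generate its reply after the
    <|assistant|> tag and to stop at an <|end|> token.
    """
    return f"<|user|>\n{user_message}<|end|>\n<|assistant|>\n"


if __name__ == "__main__":
    # Pass the result as the -p argument to your runner.
    print(build_phi3_prompt("Explain the Internet to a medieval knight."))
```

The resulting string can be passed directly via `-p`, or the template can be supplied once and reused across prompts in an interactive session.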

License

The model is licensed under the MIT license.
