Description

These are GGUF-format model files for the rhysjones/Phi-3-mini-mango-1 model, a Phi-3 4k fine-tune.

Conversion process

The GGUF-n-Go conversion script by thesven was used, together with llama.cpp, to generate the different quantized sizes of the model.
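The typical llama.cpp conversion flow looks roughly like the sketch below. This is a hedged illustration, not the exact commands the GGUF-n-Go script runs; the local paths, the assumption that the model has already been downloaded to a local directory, and the output filenames are all assumptions.

```shell
# Convert the downloaded Hugging Face model directory to a 16-bit GGUF file
# (convert_hf_to_gguf.py ships with llama.cpp; path is an assumption):
python llama.cpp/convert_hf_to_gguf.py ./Phi-3-mini-mango-1 \
    --outfile phi-3-mini-mango-1.fp16.gguf

# Quantize the fp16 GGUF down to a smaller variant, e.g. 4-bit Q4_K_M:
./llama.cpp/llama-quantize phi-3-mini-mango-1.fp16.gguf \
    phi-3-mini-mango-1.Q4_K_M.gguf Q4_K_M
```

Repeating the second step with different quantization type names (e.g. Q2_K, Q8_0) produces the other sizes listed below.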

Model details

Format: GGUF
Model size: 3.82B params
Architecture: phi3

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, and 16-bit.
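Whichever quantization you download, every GGUF file begins with the same 4-byte magic, `GGUF`, followed by a little-endian version number, so a quick integrity check is easy to write. A minimal sketch (the demo filename is an assumption; real files come from this repo):

```python
import struct

def is_gguf(path):
    """Return True if the file starts with the 4-byte GGUF magic."""
    with open(path, "rb") as f:
        return f.read(4) == b"GGUF"

# Demo with a stand-in file containing just a GGUF header:
with open("demo.gguf", "wb") as f:
    f.write(b"GGUF")               # magic bytes
    f.write(struct.pack("<I", 3))  # GGUF format version 3, little-endian uint32

print(is_gguf("demo.gguf"))  # True
```

This catches truncated or mislabeled downloads before handing the file to llama.cpp.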

