---
tags:
- quantized
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- GGUF
- phi
- phi-3
- text-generation
model_name: Phi-3-mini-mango-1-GGUF
base_model: rhysjones/Phi-3-mini-mango-1
inference: false
model_creator: rhysjones
pipeline_tag: text-generation
quantized_by: rhysjones
license: mit
---

## Description

These are GGUF format model files for the [rhysjones/Phi-3-mini-mango-1](https://huggingface.co/rhysjones/Phi-3-mini-mango-1) Phi-3 4k model.

## Conversion process

The conversion script [GGUF-n-Go](https://github.com/thesven/GGUF-n-Go) by [thesven](https://github.com/thesven) was used together with [llama.cpp](https://github.com/ggerganov/llama.cpp) to generate the different quantized sizes of the model.
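
## Example usage

As a minimal sketch of how a quantized file from this repo can be run locally, the snippet below uses the `llama-cpp-python` bindings for llama.cpp. The filename shown is hypothetical; substitute the actual GGUF file you download from this repository.

```python
# Minimal sketch using llama-cpp-python (pip install llama-cpp-python).
# The model_path below is a hypothetical example; point it at the GGUF file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Phi-3-mini-mango-1.Q4_K_M.gguf",  # hypothetical local path to a quantized file from this repo
    n_ctx=4096,  # the base model is a Phi-3 4k-context model
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain GGUF quantization in one sentence."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```

The same files can also be used directly with the llama.cpp command-line tools or any other runtime that supports the GGUF format.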