---
license: mit
pipeline_tag: text-generation
---

This is HuggingFaceH4's [zephyr-7b-beta](https://huggingface.co/HuggingFaceH4/zephyr-7b-beta), converted to GGUF without quantization. No other changes were made.

The model was converted using `convert.py` from Georgi Gerganov's llama.cpp repository, as it appears [here](https://github.com/ggerganov/llama.cpp/blob/ff5a3f0c09dfa0a8e0bf76d1748df5c6dee0e8ff/convert.py) (that is, the last change to the file was in commit `ff5a3f0`).

All credit belongs to [HuggingFaceH4](https://huggingface.co/HuggingFaceH4) for fine-tuning and releasing this model. Thank you!
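
The exact command used for the conversion is not recorded here; a minimal sketch of an equivalent invocation, assuming the `--outtype` and `--outfile` flags of `convert.py` as present at that commit and a local copy of the original zephyr-7b-beta checkpoint, would look roughly like this:

```bash
# Check out llama.cpp at (or after) the referenced commit and install convert.py's dependencies
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
git checkout ff5a3f0c09dfa0a8e0bf76d1748df5c6dee0e8ff
pip install -r requirements.txt

# Convert the original Hugging Face checkpoint to GGUF, keeping 16-bit weights (no quantization)
python convert.py /path/to/zephyr-7b-beta \
    --outtype f16 \
    --outfile zephyr-7b-beta.fp16.gguf
```

The paths and output filename above are placeholders; substitute your own local checkpoint directory and desired output name.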