---
license: apache-2.0
pipeline_tag: text-generation
---
|
This is Eric Hartford's [dolphin-2.1-mistral-7b](https://huggingface.co/ehartford/dolphin-2.1-mistral-7b), converted to GGUF without quantization. No other changes were made.
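A GGUF file like this one can be loaded with llama.cpp or its Python bindings. Below is a minimal sketch using llama-cpp-python; the file name, context size, and sampling parameters are illustrative assumptions, not part of this repo. Dolphin 2.1 expects the ChatML prompt format.

```python
# Minimal sketch: loading the unquantized GGUF with llama-cpp-python.
# The .gguf file name below is an assumption; point it at the file in this repo.
from llama_cpp import Llama

llm = Llama(model_path="dolphin-2.1-mistral-7b.gguf", n_ctx=4096)

# dolphin-2.1 uses the ChatML prompt format
prompt = (
    "<|im_start|>system\nYou are Dolphin, a helpful AI assistant.<|im_end|>\n"
    "<|im_start|>user\nWhy is the sky blue?<|im_end|>\n"
    "<|im_start|>assistant\n"
)

output = llm(prompt, max_tokens=256, stop=["<|im_end|>"])
print(output["choices"][0]["text"])
```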
|
|
|
The model was converted using `convert.py` from Georgi Gerganov's llama.cpp repo as it appears [here](https://github.com/ggerganov/llama.cpp/blob/ff5a3f0c09dfa0a8e0bf76d1748df5c6dee0e8ff/convert.py) (that is, at commit `ff5a3f0`, the last commit to touch that file at the time of conversion).
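For anyone reproducing the conversion, the invocation looked roughly like the following sketch. The checkpoint path, output file name, and the `--outtype`/`--outfile` flags are assumptions about `convert.py`'s interface at that commit; check `python convert.py --help` in your checkout.

```python
# Sketch of the conversion step (not part of this repo): run llama.cpp's convert.py
# on the original Hugging Face checkpoint directory. Paths and flags are illustrative.
import subprocess

subprocess.run(
    [
        "python", "convert.py",
        "./dolphin-2.1-mistral-7b",            # original HF checkpoint directory
        "--outtype", "f16",                    # keep weights unquantized (f16)
        "--outfile", "dolphin-2.1-mistral-7b.gguf",
    ],
    check=True,
)
```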
|
|
|
All credit belongs to [Eric Hartford](https://huggingface.co/ehartford) for fine-tuning and releasing this model. Thank you!