---
base_model: mistralai/Mistral-7B-Instruct-v0.2
license: apache-2.0
---

Base model: https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.2
This is a custom 4-bit imatrix quant made to run optimally on a MacBook with 8 GB of RAM.
For use with llama.cpp: https://github.com/ggerganov/llama.cpp
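A minimal usage sketch for running the quant with llama.cpp on macOS. The GGUF filename below is illustrative, not the actual file in this repo; the flag values (`-ngl`, `-c`) are assumptions tuned toward an 8 GB machine, so adjust as needed:

```shell
# Build llama.cpp (Metal acceleration is enabled by default on macOS)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build && cmake --build build --config Release

# Run the quant; -ngl 99 offloads all layers to the GPU,
# and a modest context size (-c 2048) helps stay within 8 GB of RAM.
# Replace the .gguf filename with the file from this repo.
./build/bin/llama-cli \
  -m mistral-7b-instruct-v0.2-imatrix-q4.gguf \
  -ngl 99 -c 2048 \
  -p "[INST] Write a haiku about the sea. [/INST]"
```

The `[INST] ... [/INST]` wrapper is the Mistral-Instruct prompt format expected by the base model.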