Mistral-7B-Instruct-v0.1-GGUF (Q4_K_M)

Using the llama-cpp-python package, this Space serves the GGUF model through an OpenAI-compatible API hosted in Hugging Face Docker Spaces. The Space includes full API documentation to make integration straightforward.

Go ahead and try the API endpoint yourself with the mistral-7b-instruct.ipynb Jupyter notebook, or adapt the sketch below.
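
Since the server exposes an OpenAI-compatible API, any OpenAI client should work against it. The snippet below is a minimal sketch using the official `openai` Python client; the base URL and model name are placeholders, not the Space's actual values, so check the Space's API documentation for the correct ones.

```python
# Minimal sketch: querying the Space's OpenAI-compatible endpoint.
# base_url and model below are placeholders -- see the Space's API docs for the real values.
from openai import OpenAI

client = OpenAI(
    base_url="https://<your-space-url>/v1",  # replace with the Space's URL
    api_key="sk-no-key-required",            # llama-cpp-python servers typically ignore the key
)

response = client.chat.completions.create(
    model="mistral-7b-instruct-v0.1.Q4_K_M.gguf",  # placeholder model identifier
    messages=[
        {"role": "user", "content": "Explain what a GGUF file is in one sentence."}
    ],
    max_tokens=128,
)

print(response.choices[0].message.content)
```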

If you find this resource valuable, please consider starring the Space. Your support strengthens the application for a community GPU grant, which would improve the performance and accessibility of this Space.