<!DOCTYPE html>
<html>
<head>
<title>Mistral-7B-Instruct-v0.1-GGUF (Q4_K_M)</title>
</head>
<body>
<h1>Mistral-7B-Instruct-v0.1-GGUF (Q4_K_M)</h1>
<p>
Using the
<a href="https://github.com/abetlen/llama-cpp-python">llama-cpp-python</a>
package, this space hosts the GGUF model and serves it through an
OpenAI-compatible API. The space also includes comprehensive API
documentation to make integration straightforward.
</p>
<ul>
<li>
The API endpoint:
<a href="https://limcheekin-mistral-7b-instruct-v0-1-gguf.hf.space/v1"
>https://limcheekin-mistral-7b-instruct-v0-1-gguf.hf.space/v1</a
>
</li>
<li>
The API doc:
<a href="https://limcheekin-mistral-7b-instruct-v0-1-gguf.hf.space/docs"
>https://limcheekin-mistral-7b-instruct-v0-1-gguf.hf.space/docs</a
>
</li>
</ul>
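<p>
Because the endpoint is OpenAI-compatible, it can be queried with the
standard OpenAI Python client. The snippet below is a minimal sketch: the
model id and the placeholder API key are assumptions, so check the API doc
(or the /v1/models endpoint) for the actual values.
</p>
<pre><code class="language-python">
# Minimal sketch of calling the space's OpenAI-compatible API.
# Assumes the openai Python package (v1+) is installed.
from openai import OpenAI

client = OpenAI(
    base_url="https://limcheekin-mistral-7b-instruct-v0-1-gguf.hf.space/v1",
    api_key="sk-no-key-required",  # assumed: the space may not require a real key
)

response = client.chat.completions.create(
    model="mistral-7b-instruct-v0.1.Q4_K_M.gguf",  # assumed model id; see /v1/models
    messages=[{"role": "user", "content": "Write a haiku about open-source LLMs."}],
    max_tokens=128,
)
print(response.choices[0].message.content)
</code></pre>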
<p>
Go ahead and try out the API endpoint yourself with the
<a
href="https://huggingface.co/spaces/limcheekin/Mistral-7B-Instruct-v0.1-GGUF/blob/main/mistral-7b-instruct.ipynb"
target="_blank"
>
mistral-7b-instruct.ipynb</a
>
jupyter notebook.
</p>
<p>
If you find this resource valuable, please consider starring the space.
Your support strengthens the application for a community GPU grant, which
would ultimately enhance the capabilities and accessibility of this space.
</p>
</body>
</html>