# Supa-AI/gemma2-9b-cpt-sahabatai-v1-instruct-q8_0-gguf
This model was converted to GGUF format from GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct using llama.cpp. Refer to the original model card for more details on the model.
## Use with llama.cpp
CLI:

```shell
llama-cli --hf-repo Supa-AI/gemma2-9b-cpt-sahabatai-v1-instruct-q8_0-gguf --hf-file gemma2-9b-cpt-sahabatai-v1-instruct.q8_0.gguf -p "Your prompt here"
```
Server:

```shell
llama-server --hf-repo Supa-AI/gemma2-9b-cpt-sahabatai-v1-instruct-q8_0-gguf --hf-file gemma2-9b-cpt-sahabatai-v1-instruct.q8_0.gguf -c 2048
```
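Once `llama-server` is running, you can send it requests over HTTP via its OpenAI-compatible chat endpoint. A minimal sketch, assuming the server's default address and port (`localhost:8080`); the prompt text is just an illustration:

```shell
# Query the locally running llama-server (assumes the default
# http://localhost:8080). /v1/chat/completions is llama.cpp's
# OpenAI-compatible chat endpoint.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Apa khabar?"}
    ],
    "temperature": 0.7
  }'
```

This requires the server from the command above to already be running; the response comes back as an OpenAI-style JSON chat completion.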
## Model Details
- Quantization Type: q8_0
- Original Model: GoToCompany/gemma2-9b-cpt-sahabatai-v1-instruct
- Format: GGUF
- Base Model: google/gemma-2-9b