# paul-stansifer/qw-us-gemma2-9b-Q8_0-GGUF

This LoRA adapter was converted to GGUF format from paul-stansifer/qw-us-gemma2-9b via ggml.ai's GGUF-my-lora space. Refer to the original adapter repository for more details.
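The same conversion can also be done locally with the `convert_lora_to_gguf.py` script that ships with llama.cpp. This is a minimal sketch, assuming the PEFT adapter and the original base model weights are downloaded to the placeholder paths shown; exact flags may differ between llama.cpp releases:

```bash
# Convert a PEFT LoRA adapter to GGUF, quantizing the adapter weights to Q8_0.
# --base points at the unquantized base model the adapter was trained on.
python convert_lora_to_gguf.py ./qw-us-gemma2-9b \
    --base ./gemma-2-9b \
    --outfile qw-us-gemma2-9b-q8_0.gguf \
    --outtype q8_0
```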

## Use with llama.cpp

```bash
# with cli
llama-cli -m base_model.gguf --lora qw-us-gemma2-9b-q8_0.gguf (...other args)

# with server
llama-server -m base_model.gguf --lora qw-us-gemma2-9b-q8_0.gguf (...other args)
```
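Note that `--lora` applies the adapter on top of a GGUF copy of the base model, which is not bundled with this repository. One way to produce `base_model.gguf` is llama.cpp's `convert_hf_to_gguf.py` script; a sketch, assuming the google/gemma-2-9b checkpoint is downloaded to the placeholder path:

```bash
# Convert the Hugging Face checkpoint of the base model into a single GGUF file.
python convert_hf_to_gguf.py ./gemma-2-9b \
    --outfile base_model.gguf \
    --outtype q8_0
```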

To learn more about LoRA usage with the llama.cpp server, refer to the llama.cpp server documentation.
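Once `llama-server` is running with the adapter loaded, completions go through its HTTP API as usual. A minimal sketch, assuming the server's default host and port (`localhost:8080`):

```bash
# Request a short completion from the running server; the loaded LoRA
# adapter is applied to every request.
curl http://localhost:8080/completion \
    -H "Content-Type: application/json" \
    -d '{"prompt": "Once upon a time", "n_predict": 64}'
```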

## Model details

- Base model: google/gemma-2-9b
- Architecture: gemma2
- Adapter size: 54M params
- Quantization: 8-bit (Q8_0)
- Format: GGUF

