# Kyllima-34B-v1-GGUF

This model was converted to GGUF format from sirmyrrh/Kyllima-34B-v1 using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.

- Format: GGUF
- Model size: 34.4B params
- Architecture: llama

Available quantizations: 3-bit, 4-bit, 6-bit, and 8-bit.
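
This card ships no usage snippet, so below is a minimal sketch of loading one of these quants with llama-cpp-python; the choice of library is an assumption (any GGUF-compatible runtime such as llama.cpp also works), and the `filename` glob is hypothetical, so match it against the actual GGUF file names in the repo.

```python
# Minimal sketch: load a GGUF quant of this repo with llama-cpp-python.
# Requires: pip install llama-cpp-python huggingface_hub
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="sirmyrrh/Kyllima-34B-v1-GGUF",
    filename="*q4_k_m.gguf",  # hypothetical glob for the 4-bit file; check the repo's file list
    n_ctx=4096,               # context window; adjust for your RAM/VRAM
)

# Plain text completion; use llm.create_chat_completion() for chat-style prompts.
output = llm("Q: What is the GGUF format? A:", max_tokens=64)
print(output["choices"][0]["text"])
```

Lower-bit quants (3-bit, 4-bit) trade some output quality for a smaller memory footprint; the 6-bit and 8-bit files stay closer to the original weights at a higher cost.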

