roleplaiapp/Confucius-o1-14B-Q4_K_M-GGUF

Repo: roleplaiapp/Confucius-o1-14B-Q4_K_M-GGUF
Original Model: Confucius-o1-14B
Quantized File: Confucius-o1-14B-Q4_K_M.gguf
Quantization: GGUF
Quantization Method: Q4_K_M

Overview

This is a GGUF Q4_K_M quantized version of Confucius-o1-14B.
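
The GGUF file can be run with any llama.cpp-compatible runtime. Below is a minimal sketch using the huggingface_hub and llama-cpp-python packages; the context size, GPU offload setting, and prompt are illustrative assumptions, not settings recommended by the model authors.

```python
# Minimal sketch: download the quantized file and run a chat completion.
# Assumes: pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the GGUF file from this repo into the local Hugging Face cache.
model_path = hf_hub_download(
    repo_id="roleplaiapp/Confucius-o1-14B-Q4_K_M-GGUF",
    filename="Confucius-o1-14B-Q4_K_M.gguf",
)

# Load the quantized model. n_ctx is the context window (an assumed value);
# n_gpu_layers=-1 offloads all layers to the GPU when one is available.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=-1)

# Run a simple chat-style completion.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Briefly introduce yourself."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```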

Quantization By

I often have idle GPUs while building and testing the RolePlai app, so I put them to use quantizing models. I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai.

Model size: 14.8B params
Architecture: qwen2

