Tags: Text Generation · GGUF · ggml · quantized · q2_k · q3_k_m · q4_k_m · q5_k_m · q6_k · q8_0

euclaise/Echo-3B-GGUF

Quantized GGUF model files for euclaise's Echo-3B.

| Name | Quant method | Size |
| --- | --- | --- |
| echo-3b.fp16.gguf | fp16 | 5.59 GB |
| echo-3b.q2_k.gguf | q2_k | 1.20 GB |
| echo-3b.q3_k_m.gguf | q3_k_m | 1.39 GB |
| echo-3b.q4_k_m.gguf | q4_k_m | 1.71 GB |
| echo-3b.q5_k_m.gguf | q5_k_m | 1.99 GB |
| echo-3b.q6_k.gguf | q6_k | 2.30 GB |
| echo-3b.q8_0.gguf | q8_0 | 2.97 GB |
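
These files can be run with any GGUF-compatible runtime such as llama.cpp. Below is a minimal sketch using the `llama-cpp-python` bindings together with `huggingface_hub`; the repo id `afrideva/Echo-3B-GGUF`, the choice of the q4_k_m file, and the example prompt are assumptions for illustration and are not taken from the original model card.

```python
# Minimal sketch: download one quantized file and run it locally with
# llama-cpp-python. The filename matches the table above; the repo id and
# the prompt are assumptions, not documented values from this card.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch the q4_k_m file (~1.71 GB) from the Hub.
model_path = hf_hub_download(
    repo_id="afrideva/Echo-3B-GGUF",
    filename="echo-3b.q4_k_m.gguf",
)

# Load the model; n_ctx and n_threads are illustrative defaults.
llm = Llama(model_path=model_path, n_ctx=2048, n_threads=4)

# Plain-text completion call.
output = llm(
    "Explain what GGUF quantization does in one sentence.",
    max_tokens=128,
    temperature=0.7,
)
print(output["choices"][0]["text"])
```

Larger quants (q6_k, q8_0) trade more disk space and memory for output closer to the fp16 original; smaller quants (q2_k, q3_k_m) fit in less RAM at some quality cost.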

Original Model Card:
