GEITje-7B - GGUF

This is a quantized (GGUF: Q8_0, Q6_K, Q5_K_M, Q4_K_M) version of GEITje-7B.

For more information about the model, see the original page.
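
The sketch below shows one way to run a quantized file locally, assuming the llama-cpp-python package is installed. The filename and prompt are illustrative; substitute the actual .gguf file you download from this repository.

```python
# Minimal usage sketch, assuming llama-cpp-python is installed
# (pip install llama-cpp-python). The filename below is hypothetical:
# replace it with the .gguf file downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="GEITje-7B.Q4_K_M.gguf",  # 4-bit quant; pick a larger quant for higher quality
    n_ctx=2048,                          # context window size
)

# GEITje-7B is a Dutch language model, so prompt in Dutch.
output = llm("Schrijf een korte samenvatting over Nederland.", max_tokens=128)
print(output["choices"][0]["text"])
```

Lower-bit quantizations (Q4_K_M) are smaller and faster at some cost in output quality; Q8_0 stays closest to the original weights.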

Model details
Format: GGUF
Model size: 7.24B params
Architecture: llama

Available quantizations
4-bit: Q4_K_M
5-bit: Q5_K_M
6-bit: Q6_K
8-bit: Q8_0

Inference API
The serverless Inference API has been turned off for this model.
