GEITje-7B - GGUF

This is a quantized (GGUF: Q8_0, Q6_K, Q5_K_M, Q4_K_M) version of GEITje-7B.

For more information about the model, see the original GEITje-7B page.

Format: GGUF
Model size: 7.24B params
Architecture: llama

Available quantizations: Q4_K_M (4-bit), Q5_K_M (5-bit), Q6_K (6-bit), Q8_0 (8-bit)
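
These files run in any GGUF-compatible runtime such as llama.cpp or its bindings. Below is a minimal sketch using llama-cpp-python; the .gguf filename glob and the prompt are illustrative assumptions, so check the repository's file list for the exact names.

```python
# Minimal sketch: download and run one of these quants with llama-cpp-python
# (pip install llama-cpp-python). The filename glob is an assumption; check
# the repository's file list for the exact .gguf names.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="Sombressoul/GEITje-7B-GGUF",
    filename="*Q4_K_M.gguf",  # 4-bit quant; swap for *Q5_K_M, *Q6_K, or *Q8_0
    n_ctx=2048,               # assumed context window; adjust to your needs
)

# Dutch prompt: "Question: What is the capital of the Netherlands? Answer:"
output = llm("Vraag: Wat is de hoofdstad van Nederland?\nAntwoord:", max_tokens=48)
print(output["choices"][0]["text"])
```

Lower-bit quants trade some output quality for a smaller download and memory footprint: Q4_K_M is a common starting point when memory is tight, while Q8_0 stays closest to the original weights.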
