GGUF version of Felladrin/Minueza-32M-Chat.

It was not possible to quantize the model after converting it to F16/F32 GGUF, so only those versions are available; F32 is the recommended one, as it preserves the best precision.

Recommended Inference Parameters

temperature: 0.4
min_p: 0.1
top_p: 1
top_k: 0
repeat_penalty: 1.0
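
As a minimal sketch, these parameters can be passed to llama-cpp-python once the F32 GGUF file has been downloaded locally. The local file name and prompt below are illustrative assumptions, not part of this repository.

```python
# Sketch: chat completion with the recommended sampling parameters.
# Assumes llama-cpp-python is installed and the F32 GGUF file is present locally.
from llama_cpp import Llama

llm = Llama(model_path="Minueza-32M-Chat.F32.gguf")  # assumed local file name

output = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello! Who are you?"}],
    temperature=0.4,     # temperature
    min_p=0.1,           # min_p
    top_p=1.0,           # top_p (1 effectively disables nucleus filtering)
    top_k=0,             # top_k (0 considers all tokens)
    repeat_penalty=1.0,  # repeat_penalty (1.0 applies no penalty)
)

print(output["choices"][0]["message"]["content"])
```

If the GGUF metadata includes a chat template, llama-cpp-python will use it to format the messages; otherwise a template can be selected explicitly when constructing the Llama object.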
Model details: llama architecture · 32.8M parameters · GGUF in 16-bit (F16) and 32-bit (F32).