GGUF version of Felladrin/Minueza-32M-Base.

It was not possible to quantize the model, so only the F16 and F32 GGUF files are available.

Try it with llama.cpp:

brew install ggerganov/ggerganov/llama.cpp
llama-cli \
  --hf-repo Felladrin/gguf-Minueza-32M-Base \
  --model Minueza-32M-Base.F32.gguf \
  --random-prompt \
  --temp 1.3 \
  --dynatemp-range 1.2 \
  --top-k 0 \
  --top-p 1 \
  --min-p 0.1 \
  --typical 0.85 \
  --mirostat 2 \
  --mirostat-ent 3.5 \
  --repeat-penalty 1.1 \
  --repeat-last-n -1 \
  -n 256
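For scripting, the same invocation can be assembled programmatically. A minimal Python sketch: the flag values are copied verbatim from the command above, and `build_argv` is a hypothetical helper, not part of llama.cpp.

```python
# Sampling settings mirroring the llama-cli command above.
SAMPLING = {
    "--temp": "1.3",
    "--dynatemp-range": "1.2",
    "--top-k": "0",
    "--top-p": "1",
    "--min-p": "0.1",
    "--typical": "0.85",
    "--mirostat": "2",
    "--mirostat-ent": "3.5",
    "--repeat-penalty": "1.1",
    "--repeat-last-n": "-1",
    "-n": "256",
}

def build_argv(model_path: str) -> list[str]:
    """Build the llama-cli argument vector (hypothetical helper)."""
    argv = ["llama-cli", "--model", model_path, "--random-prompt"]
    for flag, value in SAMPLING.items():
        argv += [flag, value]
    return argv

print(" ".join(build_argv("Minueza-32M-Base.F32.gguf")))
```

The resulting list can be passed to `subprocess.run` instead of being joined into a shell string, which avoids quoting issues.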
Model details: GGUF format, 32.8M parameters, llama architecture.


