This is pansophic's rocket-3B, converted to GGUF. No other changes were made.

Two files are available here; a sample invocation follows the list:

  • `rocket-3B-fp16.gguf`: the original model converted to GGUF without quantization
  • `rocket-3B-q8_0-LOT.gguf`: the original model converted to GGUF with q8_0 quantization using the `--leave-output-tensor` command-line option
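
Either file can be run directly with llama.cpp. The snippet below is a minimal sketch using the `main` example program from the same era of the repo; the binary location, model path, prompt, and token count are placeholders, not settings recommended by the model author:

```sh
# Minimal sketch: run the quantized model with llama.cpp's `main` example.
# The model path, prompt, and -n (number of tokens to generate) are
# placeholders; adjust them to your setup.
./main -m ./rocket-3B-q8_0-LOT.gguf -p "Hello, my name is" -n 128
```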

From `llama.cpp/quantize --help`:

--leave-output-tensor: Will leave output.weight un(re)quantized. Increases model size but may also increase quality, especially when requantizing
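
For illustration, the quantization step likely looked something like the sketch below; the binary path and filenames are assumptions, not the exact command that was used:

```sh
# Sketch of the q8_0 quantization with --leave-output-tensor, using the
# `quantize` tool from llama.cpp of that era. Paths are assumed.
./quantize --leave-output-tensor rocket-3B-fp16.gguf rocket-3B-q8_0-LOT.gguf q8_0
```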

The model was converted using `convert-hf-to-gguf.py` from Georgi Gerganov's llama.cpp repo, commit `8e672ef`.
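
As a sketch of that conversion step (the local checkout path and output flags are assumptions based on the script's usual interface, not the exact command used):

```sh
# Assumed invocation of convert-hf-to-gguf.py; ./rocket-3B is a placeholder
# for a local checkout of the original pansophic/rocket-3B model.
python convert-hf-to-gguf.py ./rocket-3B --outtype f16 --outfile rocket-3B-fp16.gguf
```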

All credit belongs to pansophic for training and releasing this model. Thank you!
