This is a GGUF quantization of TinyDolphin-2.8.2-1.1b-laser, provided in Q4_0 and Q8_0 variants along with the converted FP16 model.

Original model: https://huggingface.co/cognitivecomputations/TinyDolphin-2.8.2-1.1b-laser