
Custom GGUF quants of arcee-ai's Llama-3.1-SuperNova-Lite-8B, where the output tensors are quantized to Q8_0 while the embeddings are kept at F32. Enjoy! 🧠🔥🚀
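If you want to reproduce this tensor-type split yourself, llama.cpp's `llama-quantize` tool supports per-tensor-class overrides on top of the base quant type. A minimal sketch, assuming you already have a full-precision GGUF conversion of the model and a llama.cpp build (filenames here are hypothetical):

```shell
# Quantize to IQ4_XS as the base type, while overriding two tensor classes:
#   - output tensors      -> Q8_0
#   - token embeddings    -> F32
./llama-quantize \
  --output-tensor-type q8_0 \
  --token-embedding-type f32 \
  Llama-3.1-SuperNova-Lite-8B-F32.gguf \
  Llama-3.1-SuperNova-Lite-8B.OQ8_0.EF32.IQ4_XS.gguf \
  IQ4_XS
```

Swapping the `--output-tensor-type` override to `f32` would produce the OF32-style variant instead.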

Update: For some reason, the model initially came out smaller than Llama-3.1-8B-Instruct after quantizing. We have since rectified this: if you want the most intelligent and most capable quantized GGUF version of Llama-3.1-SuperNova-Lite-8B, use the OF32.EF32.IQuants. The original OQ8_0.EF32.IQuants will remain in the repo for those who want to use them. Cheers! 😁