Joseph717171 committed on
Commit 4e7126d · verified · Parent(s): 2423621

Update README.md

Files changed (1):
README.md (+3 -1)
README.md CHANGED
@@ -1,3 +1,5 @@
 Custom GGUF quants of Hermes-3-Llama-3.1-8B, where the Output Tensors are quantized to Q8_0 while the Embeddings are kept at F32. Enjoy! 🧠🔥🚀
 
-Update: This repo now contains OF32.EF32 GGUF IQuants for even more accuracy. Enjoy! 😋
+Update: This repo now contains OF32.EF32 GGUF IQuants for even more accuracy. Enjoy! 😋
+
+UPDATE: This repo now contains updated O.E.IQuants, quantized with llama.cpp version 4067 (54ef9cfc). This version of llama.cpp makes all K*Q mat_mul computations run in F32 instead of BF16. That change, together with the earlier, equally impactful change that switched K*Q mat_muls to F32 (float32) precision, has improved these O.E.IQuants and made this update necessary. Cheers!