Custom GGUF quants of fine-tunes of Meta's Llama-3.2-Instruct, where the Output Tensors are quantized to Q8_0 or F32 and the Embeddings are kept at F32.
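For readers wondering how quants in this style can be produced, below is a minimal sketch using llama.cpp's llama-quantize with its per-tensor type overrides. The file names, imatrix path, and exact flag spellings are assumptions (check `llama-quantize --help` for your build), and this is not necessarily the author's exact pipeline; IQ4_K in particular follows the repo naming and may require a fork of llama.cpp that provides that type.

```python
#!/usr/bin/env python3
"""Sketch: build an OQ8_0/EF32-style GGUF quant with llama.cpp's llama-quantize,
keeping the output tensor at Q8_0 and the token embeddings at F32 while the rest
of the model uses a low-bit quantization type. Paths and flags are assumptions."""
import subprocess

SRC = "Hermes-3-Llama-3.1-8B-F32.gguf"                       # full-precision source GGUF (assumed filename)
DST = "Hermes-3-Llama-3.1-8B-OQ8_0-F32.EF32.IQ4_K.gguf"      # output filename following the repo naming scheme
BODY_TYPE = "IQ4_K"  # per the repo names; substitute e.g. Q4_K_M if your build lacks IQ4_K

subprocess.run(
    [
        "./llama-quantize",                # path to your llama.cpp quantize binary
        "--imatrix", "imatrix.dat",        # optional importance matrix (assumed filename)
        "--output-tensor-type", "q8_0",    # keep the output tensor at Q8_0
        "--token-embedding-type", "f32",   # keep the token embeddings at F32
        SRC,
        DST,
        BODY_TYPE,                         # quantization type for the remaining tensors
    ],
    check=True,  # raise if llama-quantize exits with a non-zero status
)
```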
Joseph (Joseph717171)
AI & ML interests: None yet
Recent Activity
- liked a model 5 days ago: NyxKrage/Microsoft_Phi-4
- upvoted a paper 5 days ago: Deliberation in Latent Space via Differentiable Cache Augmentation
- liked a model 7 days ago: black-forest-labs/FLUX.1-schnell
Collections (3)
Custom GGUF quants of Llama-3.1-8B-Instruct fine-tunes, where the Output Tensors are quantized to Q8_0 while the Embeddings are kept at F32. 🧠🔥🚀
- Joseph717171/Hermes-3-Llama-3.1-8B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF • Updated • 578 downloads • 2 likes
- Joseph717171/Llama-3.1-SuperNova-Lite-8.0B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF • Updated • 539 downloads • 2 likes
- Joseph717171/Hermes-3-Llama-3.1-8B_TIES_with_base_Embeds_Initialized_dtypeF32-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF • Updated • 104 downloads • 1 like
Models (31)
- Joseph717171/Granite-3.1-8B-instruct-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF • Updated • 57 downloads
- Joseph717171/Imatrices • Updated • 3 downloads
- Joseph717171/Hermes-3-Llama-3.1-8B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF • Updated • 578 downloads • 2 likes
- Joseph717171/Llama-3.1-SuperNova-Lite-8.0B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF • Updated • 539 downloads • 2 likes
- Joseph717171/Hermes-3-Llama-3.2-3B-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF • Updated • 238 downloads • 1 like
- Joseph717171/Llama-3.1-SuperNova-Lite-14B • Text Generation • Updated • 16 downloads
- Joseph717171/SuperNova-Lite-Hermes-3-Llama-3.1-8B_TIES_with_base_Embeddings_Pre-Initialized-dtypeF32 • Text Generation • Updated • 16 downloads • 2 likes
- Joseph717171/Llama-3.1-8B-InitializedEmbeddings_with_Hermes-3 • Text Generation • Updated • 9 downloads
- Joseph717171/Hermes-3-Llama-3.1-8B_TIES_with_Base_Embeds_Initialized_to_Special_Instruct_Toks_dtypeF32 • Text Generation • Updated • 63 downloads • 1 like
- Joseph717171/Hermes-3-Llama-3.1-8B_TIES_with_base_Embeds_Initialized_dtypeF32-OQ8_0-F32.EF32.IQ4_K-Q8_0-GGUF • Updated • 104 downloads • 1 like
Datasets: None public yet