Locutusque/LocutusqueXFelladrin-TinyMistral248M-Instruct-GGUF
Quantized GGUF model files for LocutusqueXFelladrin-TinyMistral248M-Instruct from Locutusque
| Name | Quant method | Size |
| --- | --- | --- |
| locutusquexfelladrin-tinymistral248m-instruct.fp16.gguf | fp16 | 497.76 MB |
| locutusquexfelladrin-tinymistral248m-instruct.q2_k.gguf | q2_k | 116.20 MB |
| locutusquexfelladrin-tinymistral248m-instruct.q3_k_m.gguf | q3_k_m | 131.01 MB |
| locutusquexfelladrin-tinymistral248m-instruct.q4_k_m.gguf | q4_k_m | 156.61 MB |
| locutusquexfelladrin-tinymistral248m-instruct.q5_k_m.gguf | q5_k_m | 180.17 MB |
| locutusquexfelladrin-tinymistral248m-instruct.q6_k.gguf | q6_k | 205.20 MB |
| locutusquexfelladrin-tinymistral248m-instruct.q8_0.gguf | q8_0 | 265.26 MB |
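To compare the quants at a glance, a small sketch that computes each file's approximate compression ratio versus the fp16 file, using only the sizes listed in the table above (the ratios are rough, since on-disk GGUF sizes include metadata):

```python
# Approximate compression ratio of each quant relative to the fp16
# file, using the file sizes (in MB) from the table above.
sizes_mb = {
    "fp16": 497.76,
    "q2_k": 116.20,
    "q3_k_m": 131.01,
    "q4_k_m": 156.61,
    "q5_k_m": 180.17,
    "q6_k": 205.20,
    "q8_0": 265.26,
}

ratios = {quant: sizes_mb["fp16"] / size for quant, size in sizes_mb.items()}

print(round(ratios["q4_k_m"], 2))  # -> 3.18
print(round(ratios["q2_k"], 2))   # -> 4.28
```

So q4_k_m, a common balance point between quality and size, is roughly a 3.2x reduction over fp16.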
Original Model Card:
LocutusqueXFelladrin-TinyMistral248M-Instruct
This model was created by merging Locutusque/TinyMistral-248M-Instruct and Felladrin/TinyMistral-248M-SFT-v4 using mergekit. After the two models were merged, the resulting model was further trained on ~20,000 examples from the Locutusque/inst_mix_v2_top_100k dataset at a low learning rate to further normalize the weights. The following is the YAML config used for the merge:
```yaml
models:
  - model: Felladrin/TinyMistral-248M-SFT-v4
    parameters:
      weight: 0.5
  - model: Locutusque/TinyMistral-248M-Instruct
    parameters:
      weight: 1.0
merge_method: linear
dtype: float16
```
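A minimal sketch of what a linear merge like the one above does: each parameter of the merged model is a weighted average of the two source models' corresponding parameters, with the weights normalized by their sum. The toy one-dimensional "tensors" below are hypothetical stand-ins for real checkpoint weights, not values from these models:

```python
# Sketch of a linear model merge: a per-parameter weighted average,
# normalized by the total weight (toy lists stand in for tensors).

def linear_merge(params_a, params_b, weight_a, weight_b):
    total = weight_a + weight_b
    return [
        (weight_a * a + weight_b * b) / total
        for a, b in zip(params_a, params_b)
    ]

# Weights taken from the YAML config above: 0.5 for the SFT model,
# 1.0 for the Instruct model.
sft_params = [0.0, 3.0, 6.0]       # hypothetical parameters
instruct_params = [3.0, 0.0, 6.0]  # hypothetical parameters

merged = linear_merge(sft_params, instruct_params, 0.5, 1.0)
print(merged)  # -> [2.0, 1.0, 6.0]
```

With these weights the Instruct model contributes twice as much as the SFT model to every merged parameter.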
The resulting model combines the best of both worlds: Locutusque/TinyMistral-248M-Instruct's coding and reasoning skills, and Felladrin/TinyMistral-248M-SFT-v4's low hallucination rate and strong instruction-following. The merged model performs remarkably well for its size.
Evaluation
Coming soon...