---
base_model: v000000/L3-8B-MegaSerpentine
library_name: transformers
tags:
- mergekit
- merge
- llama
- not-for-all-audiences
- llama-cpp
---

# L3-8B-MegaSerpentine-imat-GGUFs

This model was converted to GGUF format from [`v000000/L3-8B-MegaSerpentine`](https://huggingface.co/v000000/L3-8B-MegaSerpentine). Refer to the [original model card](https://huggingface.co/v000000/L3-8B-MegaSerpentine) for more details on the model.

This repo contains various GGUF-format quants, including FP16 and pre-generated imatrix data, for v000000/L3-8B-MegaSerpentine.

# List of quants in repo:

* Q8_0 imatrix ~8.3 GB
* Q6_K imatrix ~6.4 GB
* Q5_K_S imatrix ~5.4 GB
* IQ4_XS imatrix ~4.3 GB
* FP16

![image/png](https://cdn-uploads.huggingface.co/production/uploads/64f74b6e6389380c77562762/OizoIG2tGezOo30KYXT0y.png)

```
,=e  `-. _,-'   ,=e  `-. _,-'   ,=e  `-. _,-'
```

Merge configuration of the original model:

```yaml
models:
  - model: Nitral-AI/Poppy_Porpoise-0.72-L3-8B+Blackroot/Llama-3-8B-Abomination-LORA
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
  - model: cgato/L3-TheSpice-8b-v0.8.3
  - model: Vdr1/L3-8B-Sunfall-v0.3-Stheno-v3.2
  - model: jondurbin/bagel-8b-v1.0
  - model: lodrick-the-lafted/Fuselage-8B
  - model: HPAI-BSC/Llama3-Aloe-8B-Alpha
  - model: maldv/llama-3-fantasy-writer-8b+ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA
  - model: lodrick-the-lafted/Limon-8B
  - model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
  - model: princeton-nlp/Llama-3-Instruct-8B-SimPO
  - model: jeiku/Orthocopter_8B+mpasila/Llama-3-LimaRP-Instruct-LoRA-8B
  - model: jondurbin/bagel-8b-v1.0
  - model: v000000/sauce
base_model: princeton-nlp/Llama-3-Instruct-8B-SimPO
merge_method: model_stock
dtype: bfloat16
```

# Prompt Template:

```bash
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```

The imatrix quants were generated with llama.cpp's `llama-quantize`, using the pre-generated imatrix data:

```bash
./llama-quantize --imatrix ./imatrix.dat ./L3-8B-MegaSerpentine-Tria.fp16.gguf <name>.gguf <quantsize>
```
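
A minimal sketch for running one of the quants locally with llama.cpp's `llama-cli`, feeding a prompt in the Llama-3 template above. The GGUF filename is a placeholder for whichever quant you downloaded, and the sampling settings are only examples:

```bash
# Placeholder filename: point -m at the quant you actually downloaded.
# -e enables escape processing so the \n sequences below become real newlines.
./llama-cli \
  -m ./L3-8B-MegaSerpentine.IQ4_XS.gguf \
  -c 8192 \
  -n 256 \
  --temp 0.8 \
  -e \
  -p "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\nYou are a helpful assistant.<|eot_id|><|start_header_id|>user<|end_header_id|>\n\nWrite a short poem about serpents.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
```

Recent llama.cpp builds also offer a conversation mode (`-cnv`), which applies the chat template stored in the GGUF metadata so you don't have to hand-write the special tokens.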