---
base_model: v000000/L3-8B-MegaSerpentine-Tria
library_name: transformers
tags:
- mergekit
- merge
- llama
- not-for-all-audiences
- llama-cpp
---
# L3-8B-MegaSerpentine-Tria-GGUFs
This model was converted to GGUF format from v000000/L3-8B-MegaSerpentine-Tria. Refer to the original model card for more details on the model.

This repo contains various GGUF quants, including an FP16 conversion, along with pre-generated imatrix data for v000000/L3-8B-MegaSerpentine-Tria.
List of quants in repo:
- Q8_0 imatrix
- Q6_K imatrix
- Q5_K_S imatrix
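As a quick usage sketch (assuming llama.cpp is already built and the file name below matches the quant actually downloaded from this repo), any of these quants can be run directly with `llama-cli`:

```shell
# File name is illustrative; substitute the quant you downloaded from this repo.
./llama-cli -m ./L3-8B-MegaSerpentine-Tria.Q6_K.gguf \
  -c 4096 -n 256 \
  -p "Write a short poem about serpents."
```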
Mergekit configuration used to create the original model:

```yaml
models:
  - model: Nitral-AI/Poppy_Porpoise-0.72-L3-8B+Blackroot/Llama-3-8B-Abomination-LORA
  - model: Hastagaras/Jamet-8B-L3-MK.V-Blackroot
  - model: cgato/L3-TheSpice-8b-v0.8.3
  - model: Vdr1/L3-8B-Sunfall-v0.3-Stheno-v3.2
  - model: jondurbin/bagel-8b-v1.0
  - model: lodrick-the-lafted/Fuselage-8B
  - model: HPAI-BSC/Llama3-Aloe-8B-Alpha
  - model: maldv/llama-3-fantasy-writer-8b+ToastyPigeon/Llama-3-8B-Instruct-SpringDragon-V2-QLoRA
  - model: lodrick-the-lafted/Limon-8B
  - model: Nitral-AI/Hathor_Stable-v0.2-L3-8B
  - model: princeton-nlp/Llama-3-Instruct-8B-SimPO
  - model: jeiku/Orthocopter_8B+mpasila/Llama-3-LimaRP-Instruct-LoRA-8B
  - model: jondurbin/bagel-8b-v1.0
  - model: v000000/sauce
base_model: princeton-nlp/Llama-3-Instruct-8B-SimPO
merge_method: model_stock
dtype: bfloat16
```
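To reproduce the merge itself, a minimal sketch with mergekit (assuming the configuration above is saved as `config.yml`, all listed component models are accessible, and the output path is arbitrary):

```shell
pip install mergekit
# Runs the model_stock merge defined in config.yml and writes the merged model to ./merged
mergekit-yaml config.yml ./merged --cuda
```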
Prompt Template (Llama 3 Instruct):

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{system_prompt}<|eot_id|><|start_header_id|>user<|end_header_id|>

{input}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

{output}<|eot_id|>
```
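For interactive testing, `llama-server` reads the chat template embedded in the GGUF metadata and exposes an OpenAI-compatible endpoint, so the template above does not need to be assembled by hand. A sketch (file name and port are assumptions):

```shell
# Serve a quant locally.
./llama-server -m ./L3-8B-MegaSerpentine-Tria.Q6_K.gguf -c 4096 --port 8080

# Query it with OpenAI-style chat messages; the Llama 3 template is applied server-side.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "Hello!"}]}'
```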
The quants were generated with llama.cpp's `llama-quantize`, applying the provided imatrix:

```shell
./llama-quantize --imatrix ./imatrix.dat ./L3-8B-MegaSerpentine-Tria.fp16.gguf <output-name>.gguf <quant-type>
```
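For reference, imatrix data like the file shipped here is typically generated with `llama-imatrix` over a calibration corpus; a hedged sketch (`calibration.txt` is a placeholder for whatever text was actually used):

```shell
./llama-imatrix -m ./L3-8B-MegaSerpentine-Tria.fp16.gguf -f calibration.txt -o imatrix.dat
```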