Templar-r128-LoRA
This is a LoRA adapter extracted from a language model using mergekit.
LoRA Details
This LoRA adapter was extracted from ChaoticNeutrals/Templar_v1_8B and uses unsloth/llama-3-8b-Instruct as a base.
Parameters
The following command was used to extract this LoRA adapter:
mergekit-extract-lora ChaoticNeutrals/Templar_v1_8B unsloth/llama-3-8b-Instruct OUTPUT_PATH --no-lazy-unpickle --skip-undecomposable --rank=128 --extend-vocab --model_name=Templar-r128-LoRA --verbose
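To use the adapter, load it on top of the base model with the peft library. Below is a minimal sketch, assuming the adapter is published on the Hugging Face Hub as kromcomp/L3-Templar-r128-LoRA (the repository named in the model tree below) and loads as a standard PEFT LoRA adapter; adjust identifiers, dtype, and device placement for your setup.

```python
# Minimal usage sketch. Assumptions: the adapter lives at
# kromcomp/L3-Templar-r128-LoRA on the Hugging Face Hub and is a
# standard PEFT LoRA adapter for the base model named in this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "unsloth/llama-3-8b-Instruct"       # base model named in this card
adapter_id = "kromcomp/L3-Templar-r128-LoRA"  # assumed Hub location of this adapter

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # requires the accelerate package
)

# Attach the extracted LoRA weights to the base model.
model = PeftModel.from_pretrained(base, adapter_id)

# Optionally fold the rank-128 LoRA deltas back into the base weights,
# recovering an approximation of ChaoticNeutrals/Templar_v1_8B.
# model = model.merge_and_unload()
```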
Model tree for kromcomp/L3-Templar-r128-LoRA
Base model: ChaoticNeutrals/T-900-8B
Finetuned: ChaoticNeutrals/Templar_v1_8B