---
base_model:
- Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2
- kloodia/lora-8b-bio
- NousResearch/Hermes-3-Llama-3.1-8B
- kloodia/lora-8b-physic
- cgato/L3-TheSpice-8b-v0.8.3
- kloodia/lora-8b-medic
- ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1
- Blackroot/Llama-3-8B-Abomination-LORA
- DreadPoor/L3-8B-Stheno-v3.2-TASKBLATED
- kloodia/lora-8b-math
- arcee-ai/Llama-3.1-SuperNova-Lite
- Blackroot/Llama3-RP-Lora
library_name: transformers
tags:
- mergekit
- merge
---

# merge
This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
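Since the card declares `library_name: transformers`, the merged model loads like any other causal LM. A minimal inference sketch; the repo id below is a hypothetical placeholder, since this card does not state the model's hub path:

```python
# Minimal inference sketch with transformers; "your-name/merged-model"
# is a placeholder, not this model's actual hub path.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-name/merged-model"  # hypothetical placeholder

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the dtype the merge was done in
    device_map="auto",
)

prompt = "Explain the Model Stock merge method in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```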
## Merge Details
### Merge Method
This model was merged using the [Model Stock](https://arxiv.org/abs/2403.19522) merge method, with [DreadPoor/L3-8B-Stheno-v3.2-TASKBLATED](https://huggingface.co/DreadPoor/L3-8B-Stheno-v3.2-TASKBLATED) + [kloodia/lora-8b-math](https://huggingface.co/kloodia/lora-8b-math) as the base.
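For intuition: Model Stock averages the fine-tuned checkpoints, then interpolates that average back toward the base with a ratio derived from the angle between the fine-tuned weights. A toy per-tensor sketch of that idea, assuming the ratio formula from the paper; it is not mergekit's actual implementation:

```python
# Toy sketch of the Model Stock interpolation formula (arXiv:2403.19522),
# shown per-tensor for intuition. NOT mergekit's implementation.
import torch
import torch.nn.functional as F

def model_stock_merge(base: torch.Tensor, finetuned: list[torch.Tensor]) -> torch.Tensor:
    k = len(finetuned)
    if k < 2:
        raise ValueError("Model Stock needs at least two fine-tuned checkpoints")
    # Task vectors: each fine-tuned weight relative to the shared base.
    deltas = [(w - base).flatten() for w in finetuned]
    # Estimate cos(theta) as the mean pairwise cosine similarity of task vectors.
    cos_theta = torch.stack([
        F.cosine_similarity(deltas[i], deltas[j], dim=0)
        for i in range(k) for j in range(i + 1, k)
    ]).mean()
    # Interpolation ratio from the paper: t = k*cos / (1 + (k-1)*cos).
    t = k * cos_theta / (1 + (k - 1) * cos_theta)
    # Merged weight: pull the average of the fine-tunes back toward the base.
    w_avg = torch.stack(finetuned).mean(dim=0)
    return t * w_avg + (1 - t) * base
```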
### Models Merged
The following models were included in the merge; each entry pairs a base checkpoint with a LoRA adapter via the `+` notation (see the sketch after the list):
- [Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2](https://huggingface.co/Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2) + [kloodia/lora-8b-bio](https://huggingface.co/kloodia/lora-8b-bio)
- [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B) + [kloodia/lora-8b-physic](https://huggingface.co/kloodia/lora-8b-physic)
- [cgato/L3-TheSpice-8b-v0.8.3](https://huggingface.co/cgato/L3-TheSpice-8b-v0.8.3) + [kloodia/lora-8b-medic](https://huggingface.co/kloodia/lora-8b-medic)
- [ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1) + [Blackroot/Llama-3-8B-Abomination-LORA](https://huggingface.co/Blackroot/Llama-3-8B-Abomination-LORA)
- [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) + [Blackroot/Llama3-RP-Lora](https://huggingface.co/Blackroot/Llama3-RP-Lora)
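A minimal sketch of what one `base + adapter` pair amounts to, using the PEFT library to fold the LoRA weights into their base checkpoint (mergekit does this internally before merging; the output path here is illustrative):

```python
# Sketch: applying a LoRA adapter to its base model with PEFT, roughly
# what the "base+adapter" notation asks mergekit to do before merging.
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("NousResearch/Hermes-3-Llama-3.1-8B")
model = PeftModel.from_pretrained(base, "kloodia/lora-8b-physic")
merged = model.merge_and_unload()  # folds the LoRA weights into the base
merged.save_pretrained("./hermes-3-physic-merged")  # illustrative path
```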
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: Orenguteng/Llama-3.1-8B-Lexi-Uncensored-V2+kloodia/lora-8b-bio
  - model: arcee-ai/Llama-3.1-SuperNova-Lite+Blackroot/Llama3-RP-Lora
  - model: NousResearch/Hermes-3-Llama-3.1-8B+kloodia/lora-8b-physic
  - model: cgato/L3-TheSpice-8b-v0.8.3+kloodia/lora-8b-medic
  - model: ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1+Blackroot/Llama-3-8B-Abomination-LORA
merge_method: model_stock
base_model: DreadPoor/L3-8B-Stheno-v3.2-TASKBLATED+kloodia/lora-8b-math
normalize: false
int8_mask: true
dtype: bfloat16
```
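To reproduce the merge, the YAML above can be saved (e.g. as `config.yml`) and run through mergekit. A sketch assuming a standard `pip install mergekit` and mergekit's current Python API; the CLI equivalent is `mergekit-yaml config.yml ./merged-model`:

```python
# Sketch: running the merge via mergekit's Python API, assuming the
# YAML configuration above has been saved as config.yml.
import yaml
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("config.yml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    "./merged-model",                    # output directory
    options=MergeOptions(
        cuda=False,                      # set True to merge on GPU
        copy_tokenizer=True,             # carry the tokenizer into the output
    ),
)
```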