# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/arcee-ai/mergekit).
## Merge Details

### Merge Method

This model was merged using the `breadcrumbs_ties` merge method, with [failspy/Llama-3-8B-Instruct-MopeyMule](https://huggingface.co/failspy/Llama-3-8B-Instruct-MopeyMule) as the base model.
### Models Merged
The following models were included in the merge:
- [Nitral-AI/Poppy_Porpoise-1.4-L3-8B](https://huggingface.co/Nitral-AI/Poppy_Porpoise-1.4-L3-8B)
- [maldv/badger-kappa-llama-3-8b](https://huggingface.co/maldv/badger-kappa-llama-3-8b)
- [Sao10K/L3-8B-Stheno-v3.2](https://huggingface.co/Sao10K/L3-8B-Stheno-v3.2)
- [openlynn/Llama-3-Soliloquy-8B-v2](https://huggingface.co/openlynn/Llama-3-Soliloquy-8B-v2)
- [Hastagaras/Jamet-8B-L3-MK.II](https://huggingface.co/Hastagaras/Jamet-8B-L3-MK.II)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: failspy/Llama-3-8B-Instruct-MopeyMule
  - model: maldv/badger-kappa-llama-3-8b # 7/10
    parameters:
      density: 0.4
      weight: 0.14
  - model: Nitral-AI/Poppy_Porpoise-1.4-L3-8B # 7/10
    parameters:
      density: 0.5
      weight: 0.18
  - model: openlynn/Llama-3-Soliloquy-8B-v2 # 8/10
    parameters:
      density: 0.5
      weight: 0.18
  - model: Hastagaras/Jamet-8B-L3-MK.II # 6/10
    parameters:
      density: 0.3
      weight: 0.1
  - model: Sao10K/L3-8B-Stheno-v3.2 # 9/10
    parameters:
      density: 0.6
      weight: 0.23
merge_method: breadcrumbs_ties
base_model: failspy/Llama-3-8B-Instruct-MopeyMule
parameters:
  normalize: false
  rescale: true
  gamma: 0.01
dtype: float16
```
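To reproduce the merge, save the configuration above as a YAML file and run mergekit's CLI, e.g. `mergekit-yaml config.yaml ./output-model-directory`. Below is a minimal sketch of loading the resulting model with Hugging Face `transformers`; note that `your-username/merged-model` is a placeholder repository id, since this card does not name the published repo.

```python
# Minimal sketch: load and sample from the merged model with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "your-username/merged-model"  # placeholder; not named in this card

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the merge's `dtype: float16`
    device_map="auto",
)

# Llama-3-Instruct derivatives expect the chat template to be applied.
messages = [{"role": "user", "content": "Introduce yourself in one sentence."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=64, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```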