---
base_model:
- TheDrummer/Anubis-70B-v1
- Sao10K/70B-L3.3-Cirrus-x1
- ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- NeverSleep/Lumimaid-v0.2-70B
library_name: transformers
tags:
- mergekit
- merge
---
# M-MERGE3
This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
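The merge was exported in `bfloat16` with a `llama3` chat template (see the configuration below), so it drops into the standard `transformers` chat workflow. A minimal loading sketch; the repo id is a placeholder, substitute this model's actual Hub id:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "Tarek07/M-MERGE3"  # hypothetical id; replace with the real repo id

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype
    device_map="auto",           # a 70B model needs multiple GPUs or offloading
)

messages = [{"role": "user", "content": "Hello!"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
output = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))
```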
## Merge Details
### Merge Method
This model was merged using the [Linear DELLA](https://arxiv.org/abs/2406.11617) merge method, with [Sao10K/70B-L3.3-Cirrus-x1](https://huggingface.co/Sao10K/70B-L3.3-Cirrus-x1) as the base model.
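For context, Linear DELLA computes each model's parameter delta from the base, stochastically drops low-magnitude delta entries (with keep probabilities spread across a window of width `epsilon` around `density`), rescales the survivors so the pruned delta is unbiased in expectation, and takes a weighted linear sum. A simplified per-tensor sketch of that idea, not mergekit's actual implementation; parameter names mirror the configuration below:

```python
import torch

def della_linear_sketch(base, tuned_list, weights, lambdas,
                        density=0.7, epsilon=0.2):
    """Toy per-tensor Linear DELLA: rank deltas by magnitude, drop entries
    stochastically, rescale survivors, then take a weighted linear sum."""
    merged_delta = torch.zeros_like(base).flatten()
    for tuned, w, lam in zip(tuned_list, weights, lambdas):
        delta = (tuned - base).flatten()
        # Rank entries by |delta|: larger magnitude -> higher keep probability.
        ranks = delta.abs().argsort().argsort().float()
        ranks /= max(delta.numel() - 1, 1)  # normalize ranks to [0, 1]
        # Keep probabilities span [density - eps/2, density + eps/2].
        keep_p = (density - epsilon / 2) + ranks * epsilon
        mask = torch.bernoulli(keep_p)
        # Divide by keep_p so the pruned delta is unbiased in expectation,
        # then apply the per-model lambda and merge weight.
        merged_delta += w * lam * (mask * delta / keep_p)
    return base + merged_delta.reshape(base.shape)

# Example with random tensors standing in for real weight matrices:
base = torch.randn(8, 8)
tuned = [base + 0.01 * torch.randn(8, 8) for _ in range(5)]
merged = della_linear_sketch(
    base, tuned,
    weights=[0.15, 0.20, 0.20, 0.20, 0.25],
    lambdas=[1.1, 1.1, 1.1, 1.1, 1.0],
)
```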
### Models Merged
The following models were included in the merge:
* [TheDrummer/Anubis-70B-v1](https://huggingface.co/TheDrummer/Anubis-70B-v1)
* [ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4](https://huggingface.co/ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4)
* [LatitudeGames/Wayfarer-Large-70B-Llama-3.3](https://huggingface.co/LatitudeGames/Wayfarer-Large-70B-Llama-3.3)
* [NeverSleep/Lumimaid-v0.2-70B](https://huggingface.co/NeverSleep/Lumimaid-v0.2-70B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: NeverSleep/Lumimaid-v0.2-70B
    parameters:
      weight: 0.15
      density: 0.7
      epsilon: 0.2
      lambda: 1.1
  - model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
    parameters:
      weight: 0.20
      density: 0.7
      epsilon: 0.2
      lambda: 1.1
  - model: TheDrummer/Anubis-70B-v1
    parameters:
      weight: 0.20
      density: 0.7
      epsilon: 0.2
      lambda: 1.1
  - model: ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
    parameters:
      weight: 0.20
      density: 0.7
      epsilon: 0.2
      lambda: 1.1
  - model: Sao10K/70B-L3.3-Cirrus-x1
    parameters:
      weight: 0.25
      density: 0.7
      epsilon: 0.1
      lambda: 1
base_model: Sao10K/70B-L3.3-Cirrus-x1
merge_method: della_linear
parameters:
  normalize: false
  int8_mask: true
chat_template: llama3
tokenizer:
  source: union
dtype: bfloat16
```
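A few notes on the parameters, as I read them: `weight` sets each model's share of the linear combination (the five weights already sum to 1.0, which is why `normalize: false` is safe here), `density: 0.7` retains roughly 70% of each model's delta parameters, `epsilon` sets the width of the magnitude-based keep-probability window, and `lambda` rescales each model's pruned delta. To reproduce the merge, save the YAML above as `config.yaml` and run mergekit's CLI, e.g. `mergekit-yaml config.yaml ./M-MERGE3 --cuda` (the output path is arbitrary).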