|
---
base_model:
- byroneverson/Mistral-Small-Instruct-2409-abliterated
- rAIfle/Acolyte-LORA
- InferenceIllusionist/SorcererLM-22B
- allura-org/MS-Meadowlark-22B
- Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
- ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
- Darkknight535/MS-Moonlight-22B-v3
- crestf411/MS-sunfall-v0.7.0
- TheDrummer/Cydonia-22B-v1.2
- TheDrummer/Cydonia-22B-v1.1
- hf-100/Mistral-Small-Instruct-2409-Spellbound-StoryWriter-22B-instruct-0.4-chkpt-336-16bit
- Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
- unsloth/Mistral-Small-Instruct-2409
- anthracite-org/magnum-v4-22b
- TheDrummer/Cydonia-22B-v1.3
- Kaoeiri/Moingooistrial-22B-V1-Lora
- TroyDoesAI/BlackSheep-MermaidMistral-22B
- spow12/ChatWaifu_v2.0_22B
library_name: transformers
tags:
- mergekit
- merge
---
|
|
|
|
|
![image/webp](https://cdn-uploads.huggingface.co/production/uploads/66246c2984db70bddd3f9f5f/wbHTZeHPXlslbiIvLXAF_.webp) |
|
|
|
--- |
|
# **Model Merge Overview**
|
|
|
This is a **merge of pre-trained and fine-tuned language models** created using [**mergekit**](https://github.com/cg123/mergekit).
|
|
|
--- |
|
|
|
## **Merge Method**
|
|
|
This model was merged using the **[DARE](https://arxiv.org/abs/2311.03099)** and **[TIES](https://arxiv.org/abs/2306.01708)** methods (mergekit's `dare_ties`), with [**unsloth/Mistral-Small-Instruct-2409**](https://huggingface.co/unsloth/Mistral-Small-Instruct-2409) as the base model.
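
In brief, DARE drops each fine-tuned model's parameter deltas at random with probability (1 − density) and rescales the survivors by 1/density so their expected value is unchanged; TIES then elects a sign per parameter and discards contributions that conflict with it. The sketch below illustrates both steps on a single tensor. It is a simplified illustration, not mergekit's implementation, and it assumes `lambda` from the config acts as a final scale on the merged delta:

```python
# Simplified, per-tensor sketch of DARE + TIES (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def dare_sparsify(delta: np.ndarray, density: float) -> np.ndarray:
    """Drop each entry with probability (1 - density); rescale survivors
    by 1/density so the expected delta is preserved."""
    mask = rng.random(delta.shape) < density
    return (delta * mask) / density

def ties_merge(deltas: list[np.ndarray], weights: list[float]) -> np.ndarray:
    """Per entry, keep only contributions whose sign agrees with the
    weighted-majority sign, then average what survives."""
    stacked = np.stack([w * d for w, d in zip(weights, deltas)])
    elected = np.sign(stacked.sum(axis=0))
    agree = np.sign(stacked) == elected
    kept = np.where(agree, stacked, 0.0)
    counts = np.maximum(agree.sum(axis=0), 1)  # avoid dividing by zero
    return kept.sum(axis=0) / counts

# Toy example: one base tensor and two "fine-tuned" variants of it.
base = rng.normal(size=(4, 4))
tuned = [base + rng.normal(scale=0.1, size=base.shape) for _ in range(2)]

deltas = [dare_sparsify(t - base, density=0.7) for t in tuned]
merged = base + 1.22 * ties_merge(deltas, weights=[0.24, 0.32])  # 1.22 ~ lambda
```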
|
|
|
--- |
|
|
|
## **Models Merged**
|
|
|
The following models were included in the merge:
|
|
|
- [**byroneverson/Mistral-Small-Instruct-2409-abliterated**](https://huggingface.co/byroneverson/Mistral-Small-Instruct-2409-abliterated) + [**rAIfle/Acolyte-LORA**](https://huggingface.co/rAIfle/Acolyte-LORA) |
|
- [**InferenceIllusionist/SorcererLM-22B**](https://huggingface.co/InferenceIllusionist/SorcererLM-22B) |
|
- [**allura-org/MS-Meadowlark-22B**](https://huggingface.co/allura-org/MS-Meadowlark-22B) |
|
- [**Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B**](https://huggingface.co/Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B) |
|
- [**ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1**](https://huggingface.co/ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1) |
|
- [**Darkknight535/MS-Moonlight-22B-v3**](https://huggingface.co/Darkknight535/MS-Moonlight-22B-v3) |
|
- [**crestf411/MS-sunfall-v0.7.0**](https://huggingface.co/crestf411/MS-sunfall-v0.7.0) |
|
- [**TheDrummer/Cydonia-22B-v1.2**](https://huggingface.co/TheDrummer/Cydonia-22B-v1.2) |
|
- [**TheDrummer/Cydonia-22B-v1.1**](https://huggingface.co/TheDrummer/Cydonia-22B-v1.1) |
|
- [**hf-100/Mistral-Small-Instruct-2409-Spellbound-StoryWriter-22B-instruct-0.4-chkpt-336-16bit**](https://huggingface.co/hf-100/Mistral-Small-Instruct-2409-Spellbound-StoryWriter-22B-instruct-0.4-chkpt-336-16bit) |
|
- [**Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small**](https://huggingface.co/Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small) |
|
- [**anthracite-org/magnum-v4-22b**](https://huggingface.co/anthracite-org/magnum-v4-22b) |
|
- [**TheDrummer/Cydonia-22B-v1.3**](https://huggingface.co/TheDrummer/Cydonia-22B-v1.3) |
|
- [**unsloth/Mistral-Small-Instruct-2409**](https://huggingface.co/unsloth/Mistral-Small-Instruct-2409) + [**Kaoeiri/Moingooistrial-22B-V1-Lora**](https://huggingface.co/Kaoeiri/Moingooistrial-22B-V1-Lora) |
|
- [**TroyDoesAI/BlackSheep-MermaidMistral-22B**](https://huggingface.co/TroyDoesAI/BlackSheep-MermaidMistral-22B) |
|
- [**spow12/ChatWaifu_v2.0_22B**](https://huggingface.co/spow12/ChatWaifu_v2.0_22B) |
|
|
|
--- |
|
|
|
## **Configuration Details**
|
|
|
The following YAML configuration was used to create this merge:
|
|
|
```yaml
models:
  - model: anthracite-org/magnum-v4-22b
    parameters:
      weight: 1.0
      density: 0.85
  - model: TheDrummer/Cydonia-22B-v1.3
    parameters:
      weight: 0.24
      density: 0.69
  - model: TheDrummer/Cydonia-22B-v1.2
    parameters:
      weight: 0.14
      density: 0.67
  - model: TheDrummer/Cydonia-22B-v1.1
    parameters:
      weight: 0.16
      density: 0.67
  - model: Gryphe/Pantheon-RP-Pure-1.6.2-22b-Small
    parameters:
      weight: 0.26
      density: 0.75
  - model: allura-org/MS-Meadowlark-22B
    parameters:
      weight: 0.27
      density: 0.71
  - model: spow12/ChatWaifu_v2.0_22B
    parameters:
      weight: 0.27
      density: 0.7
  - model: Saxo/Linkbricks-Horizon-AI-Japanese-Superb-V1-22B
    parameters:
      weight: 0.2
      density: 0.58
  - model: crestf411/MS-sunfall-v0.7.0
    parameters:
      weight: 0.22
      density: 0.71
  - model: byroneverson/Mistral-Small-Instruct-2409-abliterated+rAIfle/Acolyte-LORA
    parameters:
      weight: 0.24
      density: 0.7
  - model: InferenceIllusionist/SorcererLM-22B
    parameters:
      weight: 0.21
      density: 0.72
  - model: unsloth/Mistral-Small-Instruct-2409+Kaoeiri/Moingooistrial-22B-V1-Lora
    parameters:
      weight: 0.32
      density: 0.76
  - model: ArliAI/Mistral-Small-22B-ArliAI-RPMax-v1.1
    parameters:
      weight: 0.12
      density: 0.65
  - model: Darkknight535/MS-Moonlight-22B-v3
    parameters:
      weight: 0.12
      density: 0.62
  - model: hf-100/Mistral-Small-Instruct-2409-Spellbound-StoryWriter-22B-instruct-0.4-chkpt-336-16bit
    parameters:
      weight: 0.28
      density: 0.74
  - model: TroyDoesAI/BlackSheep-MermaidMistral-22B
    parameters:
      weight: 0.24
      density: 0.7

merge_method: dare_ties
base_model: unsloth/Mistral-Small-Instruct-2409
parameters:
  density: 0.85
  epsilon: 0.09
  lambda: 1.22
dtype: bfloat16
```
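
To reproduce a merge like this, mergekit reads a configuration file in exactly this format via its `mergekit-yaml` entry point. A minimal sketch, assuming mergekit is installed (`pip install mergekit`) and the YAML above is saved as `config.yml`; the output directory name is a placeholder:

```python
# Invoke mergekit's mergekit-yaml entry point on the config above.
# Assumes `pip install mergekit`; "./merged-model" is a placeholder
# output directory.
import subprocess

subprocess.run(["mergekit-yaml", "config.yml", "./merged-model"], check=True)
```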
|
|
|
--- |
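
## **Usage**

The card declares `library_name: transformers`, so the merged weights load like any other Mistral-Small checkpoint. A minimal sketch; the repository id below is a placeholder for wherever the merge is hosted:

```python
# Load the merged model and generate a short completion.
# "user/merged-model" is a hypothetical repo id; substitute the real
# repository or a local path to the mergekit output directory.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "user/merged-model"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's dtype
    device_map="auto",
)

prompt = "[INST] Write a short scene set in a rainy harbor town. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```

---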
|
|
|
|
|
# Special Thanks

A huge thank you to the authors and creators of these models, whose work and dedication made this merge possible. We deeply appreciate their contributions to the machine learning community.

Special acknowledgment goes to Hugging Face for providing the platform where these models are shared and developed, and to the creators of the DARE and TIES techniques for pushing the boundaries of model merging. Without their efforts, this project would not have been possible.
|
|
|
|