---
base_model:
- NousResearch/Hermes-2-Pro-Llama-3-8B
- cognitivecomputations/dolphin-2.9-llama3-8b
- Danielbrdz/Barcenas-Llama3-8b-ORPO
- NousResearch/Meta-Llama-3-8B
- maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
- asiansoul/Llama-3-Open-Ko-Linear-8B
- MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3
library_name: transformers
tags:
- mergekit
- merge
---
# <span style="color:blue;">AIA-Llama-3-MAAL-Ko-8B</span>
[<img src="https://i.ibb.co/TmGjFkj/llm-v1.png" alt="llm-v1" width="400"/>](https://ibb.co/cD9f71f)
I'm not going to claim that my merged model is the best one ever made.
I'm not going to promise that you'll enjoy chatting with it.
All I want to say is thank you for taking time out of your day to visit.
<span style="color:red;font-weight:bold;">Without users like you, my merge model would be meaningless.</span>
<span style="color:navy;font-weight:bold;">Let's go on a fun trip together that we've never been on before, helping each other along the way.</span>
Isn't it boring to just run an LLM by itself?
<span style="color:purple;font-weight:bold;">Since I am an application engineer, I will soon release a very cool Streamlit application built on this merged model. Please wait until then.</span>
I haven't tested this merged model in depth yet; I'm posting it here first and will test it as I go. ^^
## Merge Details
### Merge Method
This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [NousResearch/Meta-Llama-3-8B](https://huggingface.co/NousResearch/Meta-Llama-3-8B) as the base model.
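To give an intuition for what `dare_ties` does, here is a minimal toy sketch of the two steps on plain Python lists of weights. This is an illustration of the idea from the two papers, not mergekit's actual implementation; the function name and the per-model `density`/`weight` arguments mirror the YAML fields in the configuration below.

```python
import random

def sign(x):
    # Returns 1, -1, or 0 for positive, negative, or zero values.
    return (x > 0) - (x < 0)

def dare_ties(base, finetuned, densities, weights, seed=0):
    """Toy DARE-TIES sketch: merge fine-tuned weight lists onto a base list."""
    rnd = random.Random(seed)
    # DARE step: sparsify each model's delta from the base by randomly
    # dropping a (1 - density) fraction of entries, rescaling survivors
    # by 1/density so the expected delta is preserved.
    deltas = []
    for ft, density in zip(finetuned, densities):
        deltas.append([
            (f - b) / density if rnd.random() < density else 0.0
            for b, f in zip(base, ft)
        ])
    merged = []
    for i, b in enumerate(base):
        # TIES step: elect the majority sign of the weighted deltas at
        # index i, then weighted-average only the deltas that agree with it.
        elected = sign(sum(w * d[i] for w, d in zip(weights, deltas)))
        num = sum(w * d[i] for w, d in zip(weights, deltas) if sign(d[i]) == elected != 0)
        den = sum(w for w, d in zip(weights, deltas) if sign(d[i]) == elected != 0)
        merged.append(b + (num / den if den else 0.0))
    return merged

base = [0.0, 0.0, 0.0, 0.0]
# With density 1.0 nothing is dropped, so the single delta survives intact.
print(dare_ties(base, [[2.0] * 4], densities=[1.0], weights=[1.0]))
# Two models with exactly opposing deltas cancel in the sign election,
# leaving the base weights unchanged.
print(dare_ties(base, [[1.0] * 4, [-1.0] * 4], [1.0, 1.0], [0.5, 0.5]))
```

Note how sign conflicts between donor models are resolved before averaging; this is what lets DARE-TIES combine several fine-tunes without their edits destructively interfering.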
### Models Merged
The following models were included in the merge:
* [NousResearch/Hermes-2-Pro-Llama-3-8B](https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B)
* [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b)
* [Danielbrdz/Barcenas-Llama3-8b-ORPO](https://huggingface.co/Danielbrdz/Barcenas-Llama3-8b-ORPO)
* [maum-ai/Llama-3-MAAL-8B-Instruct-v0.1](https://huggingface.co/maum-ai/Llama-3-MAAL-8B-Instruct-v0.1)
* [asiansoul/Llama-3-Open-Ko-Linear-8B](https://huggingface.co/asiansoul/Llama-3-Open-Ko-Linear-8B)
* [MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3](https://huggingface.co/MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
models:
  - model: NousResearch/Meta-Llama-3-8B
    # Base model providing a general foundation without specific parameters
  - model: maum-ai/Llama-3-MAAL-8B-Instruct-v0.1
    parameters:
      density: 0.60
      weight: 0.4
  - model: asiansoul/Llama-3-Open-Ko-Linear-8B
    parameters:
      density: 0.55
      weight: 0.25
  - model: MaziyarPanahi/Llama-3-8B-Instruct-DPO-v0.3
    parameters:
      density: 0.55
      weight: 0.15
  - model: cognitivecomputations/dolphin-2.9-llama3-8b
    parameters:
      density: 0.55
      weight: 0.05
  - model: Danielbrdz/Barcenas-Llama3-8b-ORPO
    parameters:
      density: 0.55
      weight: 0.125
  - model: NousResearch/Hermes-2-Pro-Llama-3-8B
    parameters:
      density: 0.55
      weight: 0.125

merge_method: dare_ties
base_model: NousResearch/Meta-Llama-3-8B
parameters:
  int8_mask: true
dtype: bfloat16
```
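To reproduce the merge, the configuration above can be saved to a file (here assumed to be `config.yaml`) and passed to mergekit's CLI, assuming mergekit is installed:

```shell
# Install mergekit, then run the merge; the output directory name is arbitrary.
pip install mergekit
mergekit-yaml config.yaml ./merged-model
```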