---
license: other
license_name: llama-3
license_link: https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct/raw/main/LICENSE
---

Merged [cognitivecomputations/dolphin-2.9-llama3-8b](https://huggingface.co/cognitivecomputations/dolphin-2.9-llama3-8b) and [meta-llama/Meta-Llama-3-8B-Instruct](https://huggingface.co/meta-llama/Meta-Llama-3-8B-Instruct) because I thought the Dolphin finetune's answers were a bit too 'robot-y'.

GGUF files can be found here: [RDson/Dolphin-less-Llama-3-Instruct-8B-GGUF](https://huggingface.co/RDson/Dolphin-less-Llama-3-Instruct-8B-GGUF).

Mergekit yaml:

```yaml
tokenizer_source: union
slices:
  - sources:
      - model: ollama/llama3/Meta-Llama-3-8B-Instruct
        layer_range: [0, 32]
      - model: dolphin-2.9-llama3-8b
        layer_range: [0, 32]
        parameters:
          weight: 0.75
merge_method: slerp
base_model: ollama/llama3/Meta-Llama-3-8B-Instruct
parameters:
  normalize: true
  embed_slerp: true
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
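
The `merge_method: slerp` above interpolates each pair of tensors along a great circle rather than a straight line, and the `t` schedules control how far each layer moves from the Instruct base (t=0) toward Dolphin (t=1) — here self_attn and mlp get opposite gradients across the layer stack, with 0.5 for everything else. A minimal pure-Python sketch of spherical interpolation, for intuition only (not mergekit's actual implementation):

```python
import math

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between two weight vectors.

    t=0 returns a (the base model's tensor), t=1 returns b.
    """
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    dot = sum(x * y for x, y in zip(a, b)) / (norm_a * norm_b + eps)
    theta = math.acos(max(-1.0, min(1.0, dot)))
    if theta < eps:  # vectors nearly parallel: fall back to plain lerp
        return [(1 - t) * x + t * y for x, y in zip(a, b)]
    s = math.sin(theta)
    return [
        (math.sin((1 - t) * theta) * x + math.sin(t * theta) * y) / s
        for x, y in zip(a, b)
    ]

# Midpoint between two orthogonal unit vectors stays on the unit circle,
# which is the property that distinguishes slerp from a linear average.
print(slerp(0.5, [1.0, 0.0], [0.0, 1.0]))  # ~[0.7071, 0.7071]
```

Unlike a linear merge, the interpolated tensor keeps roughly the same norm as its parents, which is why slerp tends to preserve model behavior better at intermediate `t` values.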