---
base_model:
- Undi95/Meta-Llama-3-8B-Instruct-hf
- mpasila/Llama-3-LimaRP-Instruct-8B
library_name: transformers
tags:
- mergekit
- merge
license: llama3
---
# Llama-3-MetaRP-V2-8B

This might have issues with the prompt template, because Unsloth messed up the prompt format for Llama 3 (it added `gpt` and `user` role names that did not exist in the original Llama 3 Instruct format).

This appears to have degraded some of the model's prompt-following ability, so I wonder if there's a better way to merge models with the instruct model.
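
For reference, the stock Llama 3 Instruct format uses `system`/`user`/`assistant` headers wrapped in `<|start_header_id|>`/`<|end_header_id|>`, with each turn closed by `<|eot_id|>`. A quick way to check whether this model's tokenizer has drifted from that format is to render its chat template next to a hand-built reference prompt; a minimal sketch (the repo id is assumed from this card's title):

```python
from transformers import AutoTokenizer

# Repo id assumed from the card title; adjust if it differs.
tok = AutoTokenizer.from_pretrained("mpasila/Llama-3-MetaRP-V2-8B")

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Hello!"},
]

# The stock Llama 3 Instruct prompt, built by hand for comparison.
reference = (
    "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
    "You are a helpful assistant.<|eot_id|>"
    "<|start_header_id|>user<|end_header_id|>\n\n"
    "Hello!<|eot_id|>"
    "<|start_header_id|>assistant<|end_header_id|>\n\n"
)

rendered = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
print(rendered)
print("matches stock format:", rendered == reference)
```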

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [Undi95/Meta-Llama-3-8B-Instruct-hf](https://huggingface.co/Undi95/Meta-Llama-3-8B-Instruct-hf) as the base.
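
For intuition: DARE operates on the task vector (fine-tune minus base), randomly dropping most of its entries and rescaling the survivors by `1/density` so the delta stays unbiased in expectation; TIES-style sign consensus then resolves conflicts when several donor models are merged (here there is only one). A minimal, illustrative sketch of the drop-and-rescale step, not mergekit's actual implementation:

```python
import torch

def dare_delta(finetuned: torch.Tensor, base: torch.Tensor, density: float) -> torch.Tensor:
    # Keep each entry of the task vector with probability `density`,
    # then rescale survivors by 1/density (unbiased in expectation).
    delta = finetuned - base
    mask = torch.bernoulli(torch.full_like(delta, density))
    return mask * delta / density

base = torch.randn(4, 4)
finetuned = base + 0.1 * torch.randn(4, 4)
# density=0.15 as in the config below: roughly 85% of the delta is dropped.
# The 0.5 weight mirrors the config's mlp filter (all other tensors get weight 0).
merged = base + 0.5 * dare_delta(finetuned, base, density=0.15)
```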

### Models Merged

The following models were included in the merge:
* [mpasila/Llama-3-LimaRP-Instruct-8B](https://huggingface.co/mpasila/Llama-3-LimaRP-Instruct-8B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: mpasila/Llama-3-LimaRP-Instruct-8B
    parameters:
      density: 0.15
      weight:
        - filter: mlp
          value: 0.5
        - value: 0
merge_method: dare_ties
base_model: Undi95/Meta-Llama-3-8B-Instruct-hf
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
```
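
To try the merged model, a standard Transformers loading sketch (again assuming the repo id from this card's title):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mpasila/Llama-3-MetaRP-V2-8B"  # assumed repo id
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

messages = [{"role": "user", "content": "Write a short greeting."}]
input_ids = tok.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

out = model.generate(input_ids, max_new_tokens=128)
print(tok.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```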