---
library_name: transformers
language:
  - en
pipeline_tag: text-generation
tags:
  - pytorch
  - llama
  - llama-3
  - mergekit
  - merge
license: llama3

---
# llama3

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the passthrough merge method.
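
In a passthrough merge no weights are averaged: the selected `layer_range` slices are copied from the source model(s) and stacked in order, so the output can end up deeper than any input (a so-called frankenmerge). The following is a minimal, illustrative Python sketch of that idea, not mergekit's actual implementation; it treats each `sources` block in the configuration further down as contributing its layer range once, and the source path and output directory are placeholders.

```python
import copy

import torch
from transformers import AutoModelForCausalLM

# Local source path from the configuration below; substitute any copy of
# Meta-Llama-3-8B-Instruct (e.g. "meta-llama/Meta-Llama-3-8B-Instruct").
SRC = "D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct"

model = AutoModelForCausalLM.from_pretrained(SRC, torch_dtype=torch.bfloat16)

# Half-open layer ranges [start, end), one per `sources` block in the config.
slices = [(0, 1), (1, 24), (8, 20), (18, 32)]

# Stack deep copies of the requested decoder layers in order.
stacked = torch.nn.ModuleList(
    [
        copy.deepcopy(model.model.layers[i])
        for start, end in slices
        for i in range(start, end)
    ]
)

model.model.layers = stacked
model.config.num_hidden_layers = len(stacked)  # 1 + 23 + 12 + 14 = 50 layers
model.save_pretrained("./llama3-passthrough-sketch")  # placeholder output dir
```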

### Models Merged

The following models were included in the merge:
* D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct # embed_tokens comes along for the ride with whatever is the first layer
        layer_range: [0, 1]
      - model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct # add a dummy second model with 0 weight so the tokenizer-based merge routine is invoked for embed_tokens
        layer_range: [0, 1]
  - sources:
      - model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct
        layer_range: [1, 24]
  - sources:
      - model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct
        layer_range: [8, 20]
  - sources:
      - model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct
        layer_range: [18, 32]
      - model: D:/text-generation-webui/models/meta-llama_Meta-Llama-3-8B-Instruct
        layer_range: [18, 32]
merge_method: passthrough
dtype: bfloat16
```
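
Since the card declares `library_name: transformers`, the merged model loads like any other causal LM. A minimal usage sketch follows; `MODEL_PATH` is a placeholder for the local directory (or Hugging Face repo id) where this merge is stored.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "./llama3-merge"  # placeholder: local output dir or a HF repo id

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_PATH, torch_dtype=torch.bfloat16, device_map="auto"
)

# Llama-3-Instruct uses a chat template, so build the prompt through it.
messages = [{"role": "user", "content": "Briefly explain what a passthrough merge is."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=128, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```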