---
base_model:
- Sao10K/Fimbulvetr-11B-v2
- Undi95/Mistral-11B-CC-Air-RP
library_name: transformers 
tags:
- mergekit
- merge
- 👍
---
# Fimbul-Airo-18B 👍

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit). 👍

I tested it for thirteen seconds 👍

Works pretty well and seems uncensored. I'll update with more results/observations as I continue to test.

## Merge Details
### Merge Method

This model was merged using the passthrough merge method: instead of averaging weights, slices of layers from each source model are stacked on top of one another. Taking models and smashing 'em all together 👍

### Models Merged

The following models were included in the merge:
* [Sao10K/Fimbulvetr-11B-v2](https://huggingface.co/Sao10K/Fimbulvetr-11B-v2) 👍
* [Undi95/Mistral-11B-CC-Air-RP](https://huggingface.co/Undi95/Mistral-11B-CC-Air-RP) 👍
  * [CollectiveCognition-v1.1-Mistral-7B](https://huggingface.co/teknium/CollectiveCognition-v1.1-Mistral-7B)
  * [airoboros-mistral2.2-7b](https://huggingface.co/teknium/airoboros-mistral2.2-7b/)
  * PIPPA dataset 11B QLoRA
  * LimaRPv3 dataset 11B QLoRA


### The Sauce 

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
    - model: Sao10K/Fimbulvetr-11B-v2
      layer_range: [0, 40]
  - sources:
    - model: Undi95/Mistral-11B-CC-Air-RP
      layer_range: [8, 48]
merge_method: passthrough
dtype: bfloat16

# 👍

```
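If you want to reproduce the merge (or remix the sauce), a rough sketch using mergekit's Python API is below. The exact `run_merge` / `MergeOptions` signatures can shift between mergekit versions, and the `config.yaml` filename is just an assumption for where you saved the YAML above, so treat this as a starting point. The `mergekit-yaml config.yaml ./output-dir` CLI does the same thing in one line.

```python
# Sketch: re-running this merge with mergekit's Python API (details may vary by version).
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# The YAML config above, saved to disk as config.yaml (hypothetical path)
with open("config.yaml", "r", encoding="utf-8") as f:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(f))

run_merge(
    merge_config,
    out_path="./Fimbul-Airo-18B",  # where the merged weights land
    options=MergeOptions(
        cuda=False,            # set True to merge on GPU
        copy_tokenizer=True,   # copy the tokenizer into the output dir
        lazy_unpickle=True,    # stream tensors instead of loading whole checkpoints
    ),
)
```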

### Prompt Format: Alpaca 👍

```
### Instruction:
<Prompt>

### Input:
<Insert Context Here>

### Response:

```

👍
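For a quick smoke test, here's a rough `transformers` snippet that wires up the Alpaca template above. The repo id is a placeholder (swap in wherever this merge actually lives), and the sampling settings are just reasonable defaults, not tuned recommendations.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Fimbul-Airo-18B"  # placeholder: use the actual Hugging Face repo id for this merge

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Build an Alpaca-style prompt matching the format above
prompt = (
    "### Instruction:\n"
    "Describe the village in two sentences.\n\n"
    "### Input:\n"
    "A small fishing village in the far north, long after the last thaw.\n\n"
    "### Response:\n"
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.8, top_p=0.95)

# Print only the newly generated tokens, not the echoed prompt
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```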

Don't forget to take care of yourself and have a wonderful day!