---
library_name: transformers
tags:
- llama-3
license: cc-by-nc-4.0
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65b19c1b098c85365af5a83e/kQpfZwQ2tmpUhHx7E7jFF.png)

[GGUF Quants](https://huggingface.co/mradermacher/spring-chicken-8x8b-GGUF)

# Spring Chicken 8x8b

I've been really impressed with how well these frankenmoe models quant compared to the base Llama 8b, while offering far better speed than the 70b. There have been some great 4x8b models released recently, so I tried an 8x8b.

```yaml
base_model: ./maldv/spring
gate_mode: hidden 
dtype: bfloat16
experts_per_token: 2
experts:
  - source_model: ./models/Llama3-ChatQA-1.5-8B
    positive_prompts:
      - 'add numbers'
      - 'solve for x'
    negative_prompts:
      - 'I love you'
      - 'Help me'
  - source_model: ./models/InfinityRP-v2-8B
    positive_prompts:
    - 'they said'
  - source_model: ./models/Einstein-v6.1-Llama3-8B
    positive_prompts:
    - 'the speed of light'
    - 'chemical reaction'
  - source_model: ./models/Llama-3-Soliloquy-8B-v2
    positive_prompts:
    - 'write a'
  - source_model: ./models/Llama-3-Lumimaid-8B-v0.1
    positive_prompts:
    - 'she looked'
  - source_model: ./models/L3-TheSpice-8b-v0.8.3
    positive_prompts:
    - 'they felt'
  - source_model: ./models/Llama3-OpenBioLLM-8B
    positive_prompts:
    - 'the correct treatment'
  - source_model: ./models/Llama-3-SauerkrautLM-8b-Instruct
    positive_prompts:
    - 'help me'
    - 'should i'
```
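A config like the one above is typically fed to mergekit's `mergekit-moe` command. A minimal sketch of the invocation, assuming mergekit is installed and the expert models are present locally (the file and output names here are illustrative, not the actual ones used):

```shell
# Illustrative only: config path and output directory are placeholders.
pip install mergekit
mergekit-moe config.yaml ./spring-chicken-8x8b
```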

### Spring

Spring is a cascading dare-ties merge of the following models:

```python
[
  'Einstein-v6.1-Llama3-8B',
  'L3-TheSpice-8b-v0.8.3',
  'Configurable-Hermes-2-Pro-Llama-3-8B',
  'Llama3-ChatQA-1.5-8B',
  'Llama3-OpenBioLLM-8B',
  'InfinityRP-v2-8B',
  'Llama-3-Soliloquy-8B-v2',
  'Tiamat-8b-1.2-Llama-3-DPO',
  'Llama-3-8B-Instruct-Gradient-1048k',
  'Llama-3-Lumimaid-8B-v0.1',
  'Llama-3-SauerkrautLM-8b-Instruct',
  'Meta-Llama-3-8B-Instruct-DPO',
]
```
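"Cascading dare-ties" refers to mergekit's `dare_ties` merge method, applied here by folding the models in step by step rather than in one pass. A sketch of what a single step might look like as a mergekit config, assuming the conventions above (the density and weight values are illustrative, not the actual recipe):

```yaml
# Illustrative single dare-ties step; parameters are placeholders.
merge_method: dare_ties
base_model: ./models/Meta-Llama-3-8B-Instruct-DPO
models:
  - model: ./models/Einstein-v6.1-Llama3-8B
    parameters:
      density: 0.5
      weight: 0.3
dtype: bfloat16
```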

I'm finding my iq4_xs quant to be working well. The Llama 3 instruct format works well, but a minimal format is highly creative.
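For reference, the Llama 3 instruct format mentioned above wraps each turn in special header tokens. A minimal sketch of building such a prompt by hand (the helper function is mine, not part of this repo):

```python
def llama3_prompt(user_message: str, system: str = "") -> str:
    """Build a Llama 3 instruct-format prompt string by hand."""
    parts = ["<|begin_of_text|>"]
    if system:
        # Optional system turn before the user turn.
        parts.append(f"<|start_header_id|>system<|end_header_id|>\n\n{system}<|eot_id|>")
    parts.append(f"<|start_header_id|>user<|end_header_id|>\n\n{user_message}<|eot_id|>")
    # Open the assistant header so the model continues from here.
    parts.append("<|start_header_id|>assistant<|end_header_id|>\n\n")
    return "".join(parts)

print(llama3_prompt("Write a short scene in a spring meadow."))
```

In practice the model's bundled tokenizer chat template (`tokenizer.apply_chat_template`) produces this same layout; the "minimal format" contrast above just means prompting without these wrappers.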

## Scores

Not greater than the sum of its parts, based on the scores; but it is really smart for an emotive RP model.

Metric | Score
---|---
Average | 65.89
ARC | 63.05
HellaSwag | 82.49
MMLU | 64.45
TruthfulQA | 51.63
Winogrande | 76.24
GSM8K | 51.63

[Details](https://huggingface.co/datasets/open-llm-leaderboard/details_maldv__spring-chicken-8x8b)