|
--- |
|
library_name: transformers |
|
tags: |
|
- llama-3 |
|
license: cc-by-nc-4.0 |
|
--- |
|
|
|
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65b19c1b098c85365af5a83e/kQpfZwQ2tmpUhHx7E7jFF.png) |
|
|
|
[GGUF Quants](https://huggingface.co/mradermacher/spring-chicken-8x8b-GGUF) |
|
|
|
# Spring Chicken 8x8b |
|
|
|
I've been really impressed with how well these frankenmoe models hold up under quantization compared to the base llama 8b, while running far faster than the 70b. There have been some great 4x8b models released recently, so I tried an 8x8b.
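A rough back-of-envelope shows why: assuming standard Llama 3 8b dimensions (32 layers, hidden size 4096, MLP size 14336, 128256-token vocab, GQA with 8 KV heads — my assumptions, not taken from this card), an 8x8b MoE with 2 experts per token is ~47B parameters total but only activates ~14B per token, so it decodes much closer to 8b speed than 70b speed.

```python
# Back-of-envelope parameter count for a Llama-3-based 8x8b MoE.
# All dimensions below are assumed standard Llama 3 8b values.
layers, hidden, mlp, vocab = 32, 4096, 14336, 128256
kv_hidden = 1024  # 8 KV heads * 128 head dim (GQA)

embed = 2 * vocab * hidden                                       # input + output embeddings
attn = layers * (2 * hidden * hidden + 2 * hidden * kv_hidden)   # q,o + k,v projections
mlp_one = layers * 3 * hidden * mlp                              # gate, up, down projections

experts, active = 8, 2  # 8 experts, experts_per_token: 2
total = embed + attn + experts * mlp_one
active_params = embed + attn + active * mlp_one

print(f"total:  {total / 1e9:.1f}B")   # ~47B on disk
print(f"active: {active_params / 1e9:.1f}B")  # ~14B per token
```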
|
|
|
```yaml
|
base_model: ./maldv/spring |
|
gate_mode: hidden |
|
dtype: bfloat16 |
|
experts_per_token: 2 |
|
experts: |
|
- source_model: ./models/Llama3-ChatQA-1.5-8B |
|
positive_prompts: |
|
- 'add numbers' |
|
- 'solve for x' |
|
negative_prompts: |
|
- 'I love you' |
|
- 'Help me' |
|
- source_model: ./models/InfinityRP-v2-8B |
|
positive_prompts: |
|
- 'they said' |
|
- source_model: ./models/Einstein-v6.1-Llama3-8B |
|
positive_prompts: |
|
- 'the speed of light' |
|
- 'chemical reaction' |
|
- source_model: ./models/Llama-3-Soliloquy-8B-v2 |
|
positive_prompts: |
|
- 'write a' |
|
- source_model: ./models/Llama-3-Lumimaid-8B-v0.1 |
|
positive_prompts: |
|
- 'she looked' |
|
- source_model: ./models/L3-TheSpice-8b-v0.8.3 |
|
positive_prompts: |
|
- 'they felt' |
|
- source_model: ./models/Llama3-OpenBioLLM-8B |
|
positive_prompts: |
|
- 'the correct treatment' |
|
- source_model: ./models/Llama-3-SauerkrautLM-8b-Instruct |
|
positive_prompts: |
|
- 'help me' |
|
- 'should i' |
|
``` |
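For intuition on what `experts_per_token: 2` does at inference time: each token's hidden state is scored against a gate vector per expert, and only the top-2 experts run. Below is a toy sketch of that routing step (illustrative only, not mergekit's or transformers' actual code):

```python
import math

def top2_route(hidden, gates):
    """Pick the top-2 of N experts for one token (illustrative sketch).

    hidden: token hidden state (list of floats); gates: one gate vector
    per expert. With gate_mode 'hidden', mergekit derives these gate
    vectors from hidden-state representations of each expert's positive
    and negative prompts.
    """
    logits = [sum(g * h for g, h in zip(gate, hidden)) for gate in gates]
    top2 = sorted(range(len(gates)), key=lambda i: logits[i], reverse=True)[:2]
    m = max(logits[i] for i in top2)
    exps = [math.exp(logits[i] - m) for i in top2]
    s = sum(exps)
    return top2, [e / s for e in exps]  # expert indices + softmax mixing weights

# Toy call: expert 0 scores highest, expert 1 second.
experts, weights = top2_route([1.0, 0.0], [[2, 0], [1, 0], [0, 1], [-1, 0]])
```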
|
|
|
### Spring |
|
|
|
Spring is a cascading dare-ties merge of the following models: |
|
|
|
```python |
|
[ |
|
'Einstein-v6.1-Llama3-8B', |
|
'L3-TheSpice-8b-v0.8.3', |
|
'Configurable-Hermes-2-Pro-Llama-3-8B', |
|
'Llama3-ChatQA-1.5-8B', |
|
'Llama3-OpenBioLLM-8B', |
|
'InfinityRP-v2-8B', |
|
'Llama-3-Soliloquy-8B-v2', |
|
'Tiamat-8b-1.2-Llama-3-DPO', |
|
'Llama-3-8B-Instruct-Gradient-1048k', |
|
'Llama-3-Lumimaid-8B-v0.1', |
|
'Llama-3-SauerkrautLM-8b-Instruct', |
|
'Meta-Llama-3-8B-Instruct-DPO', |
|
] |
|
``` |
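The DARE half of dare-ties randomly drops most of each model's delta from the base and rescales the survivors so the expected delta is unchanged. A toy sketch of that drop-and-rescale step (illustrative, not mergekit's implementation):

```python
import random

def dare_drop(delta, p, rng):
    """DARE: zero each delta weight with probability p, rescale survivors by 1/(1-p)."""
    return [0.0 if rng.random() < p else d / (1 - p) for d in delta]

rng = random.Random(0)
delta = [1.0] * 10000            # toy task vector (finetune minus base)
dropped = dare_drop(delta, 0.9, rng)
# ~90% of entries are zeroed, but the mean stays near 1.0 in expectation.
mean = sum(dropped) / len(dropped)
```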
|
|
|
I'm finding the iq4_xs quant to be working well. The Llama 3 instruct format works well, but a minimal format is highly creative.
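For reference, the standard Llama 3 instruct template looks like the sketch below (in practice, `tokenizer.apply_chat_template` in transformers builds this for you):

```python
# Minimal builder for the standard Llama 3 instruct prompt format.
def llama3_prompt(system: str, user: str) -> str:
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = llama3_prompt("You are a helpful assistant.", "Hello!")
```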
|
|
|
## Scores |
|
|
|
Not greater than the sum of its parts, going by the scores; but it is really smart for an emotive RP model.
|
|
|
Metric | Score |
|
---|--- |
|
Average | 65.89 |
|
ARC | 63.05 |
|
HellaSwag | 82.49 |
|
MMLU | 64.45 |
|
TruthfulQA | 51.63 |
|
Winogrande | 76.24 |
|
GSM8K | 51.63 |
|
|
|
[Details](https://huggingface.co/datasets/open-llm-leaderboard/details_maldv__spring-chicken-8x8b) |