---
license: cc-by-nc-4.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
base_model:
- mlabonne/AlphaMonarch-7B
- beowolx/CodeNinja-1.0-OpenChat-7B
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- mlabonne/NeuralDaredevil-7B
---

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/9XVgxKyuXTQVO5mO-EOd4.jpeg)

# 🔮 Beyonder-4x7B-v3

Beyonder-4x7B-v3 is an improvement over the popular [Beyonder-4x7B-v2](https://huggingface.co/mlabonne/Beyonder-4x7B-v2). It's a Mixture of Experts (MoE) model built from the following models with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B)

Special thanks to [beowolx](https://huggingface.co/beowolx) for making the best Mistral-based code model and to [SanjiWatsuki](https://huggingface.co/SanjiWatsuki) for creating one of the very best RP models.

**Try the demo**: https://huggingface.co/spaces/mlabonne/Beyonder-4x7B-v3

## ๐Ÿ” Applications

This model has an 8k context window. I recommend using it with the Mistral Instruct chat template (it works perfectly with LM Studio).

If you use SillyTavern, you might want to tweak the inference parameters. For reference, here are the values LM Studio uses: `temp` 0.8, `top_k` 40, `top_p` 0.95, `min_p` 0.05, `repeat_penalty` 1.1.
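
For a local script outside those UIs, here is a minimal sketch of the same settings using [llama-cpp-python](https://github.com/abetlen/llama-cpp-python) and one of the GGUF quants linked below; the file path is a placeholder, and the parameter names follow llama-cpp-python (other backends may name them differently).

```python
# Minimal sketch, assuming llama-cpp-python and a local GGUF file (placeholder path).
from llama_cpp import Llama

llm = Llama(
    model_path="path/to/Beyonder-4x7B-v3.Q4_K_M.gguf",  # placeholder: any GGUF quant from the links below
    n_ctx=8192,                        # 8k context window
    chat_format="mistral-instruct",    # Mistral Instruct chat template
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Give me three prompt ideas for a sci-fi story."}],
    temperature=0.8,
    top_k=40,
    top_p=0.95,
    min_p=0.05,
    repeat_penalty=1.1,
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```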

Thanks to its four experts, it's a well-rounded model capable of handling most tasks. Since two experts are always active for each token, every task also benefits from complementary capabilities, such as role-play during chat or code during math.
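
Concretely, the router scores all four experts for every token and keeps the two best, mixing their outputs. The snippet below is an illustrative, Mixtral-style top-2 gating sketch with toy dimensions and random weights; it is not the model's actual code, just a picture of the mechanism.

```python
# Illustrative top-2 routing sketch (toy sizes, random weights), not the real implementation.
import torch
import torch.nn.functional as F

hidden_dim, num_experts, top_k = 64, 4, 2
router = torch.nn.Linear(hidden_dim, num_experts, bias=False)
experts = torch.nn.ModuleList(
    torch.nn.Linear(hidden_dim, hidden_dim) for _ in range(num_experts)
)  # stand-ins for the four experts' feed-forward blocks

x = torch.randn(1, hidden_dim)            # hidden state of one token
probs = F.softmax(router(x), dim=-1)      # routing probabilities over the 4 experts
weights, idx = torch.topk(probs, top_k)   # keep the 2 highest-scoring experts
weights = weights / weights.sum(dim=-1, keepdim=True)  # renormalize over the 2 picks
y = sum(w * experts[int(i)](x) for w, i in zip(weights[0], idx[0]))
print(idx[0].tolist(), y.shape)           # e.g. [2, 0] torch.Size([1, 64])
```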

## ⚡ Quantized models

Thanks to [bartowski](https://huggingface.co/bartowski) for quantizing this model.

* **GGUF**: https://huggingface.co/mlabonne/Beyonder-4x7B-v3-GGUF
* **More GGUF**: https://huggingface.co/bartowski/Beyonder-4x7B-v3-GGUF
* **ExLlamaV2**: https://huggingface.co/bartowski/Beyonder-4x7B-v3-exl2

## ๐Ÿ† Evaluation

This model is not designed to excel in traditional benchmarks, since the code and role-play experts contribute little in those contexts. Nonetheless, it performs remarkably well thanks to its strong general-purpose experts.

### Nous

Beyonder-4x7B-v3 is one of the best models on Nous' benchmark suite (evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval)) and significantly outperforms the v2. See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B) [📄](https://gist.github.com/mlabonne/1d33c86824b3a11d2308e36db1ba41c1) | 62.74 | 45.37 | 77.01 | 78.39 | 50.2 |
| [**mlabonne/Beyonder-4x7B-v3**](https://huggingface.co/mlabonne/Beyonder-4x7B-v3) [📄](https://gist.github.com/mlabonne/3740020807e559f7057c32e85ce42d92) | **61.91** | **45.85** | **76.67** | **74.98** | **50.12** |
| [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B) [📄](https://gist.github.com/mlabonne/cbeb077d1df71cb81c78f742f19f4155) | 59.39 | 45.23 | 76.2 | 67.61 | 48.52 |
| [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B) [📄](https://gist.github.com/mlabonne/895ff5171e998abfdf2a41a4f9c84450) | 58.29 | 44.79 | 75.05 | 65.68 | 47.65 |
| [mlabonne/Beyonder-4x7B-v2](https://huggingface.co/mlabonne/Beyonder-4x7B-v2) [📄](https://gist.github.com/mlabonne/f73baa140a510a676242f8a4496d05ca) | 57.13 | 45.29 | 75.95 | 60.86 | 46.4 |
| [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B) [📄](https://gist.github.com/mlabonne/08b5280c221fbd7f98eb27561ae902a3) | 50.35 | 39.98 | 71.77 | 48.73 | 40.92 |

### EQ-Bench

Beyonder-4x7B-v3 is the best 4x7B model on the EQ-Bench leaderboard, outperforming older versions of ChatGPT and Llama-2-70b-chat. It is very close to Mixtral-8x7B-Instruct-v0.1 and Gemini Pro. Thanks to [Sam Paech](https://huggingface.co/sam-paech) for running the eval.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/-OSHe2ImrxN8wAREnSZAZ.png)

### Open LLM Leaderboard

It's also a strong performer on the Open LLM Leaderboard, significantly outperforming the v2 model.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/NFRYqzwuy9TB-s-Hy3gRy.png)

## 🧩 Configuration

```yaml
base_model: mlabonne/AlphaMonarch-7B
experts:
  - source_model: mlabonne/AlphaMonarch-7B
    positive_prompts:
    - "chat"
    - "assistant"
    - "tell me"
    - "explain"
    - "I want"
  - source_model: beowolx/CodeNinja-1.0-OpenChat-7B
    positive_prompts:
    - "code"
    - "python"
    - "javascript"
    - "programming"
    - "algorithm"
  - source_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
    positive_prompts:
    - "storywriting"
    - "write"
    - "scene"
    - "story"
    - "character"
  - source_model: mlabonne/NeuralDaredevil-7B
    positive_prompts:
    - "reason"
    - "math"
    - "mathematics"
    - "solve"
    - "count"
```
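
To reproduce the merge, save the YAML above as `config.yaml` and run mergekit's MoE script, which is roughly what LazyMergekit automates. A notebook-style sketch follows; exact flags and package layout can differ between mergekit releases.

```python
# Rough sketch, assuming a recent mergekit release that ships the mergekit-moe entry point.
# The leading "!" is notebook syntax; drop it in a regular shell.
!pip install -qU mergekit
!mergekit-moe config.yaml Beyonder-4x7B-v3
```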

## 🌳 Model Family Tree

![image/png](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/zQi5VgmdqJv6pFaGoQ2AL.png)

## 💻 Usage

```python
# Install dependencies (the leading "!" is notebook syntax; drop it in a regular shell)
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "mlabonne/Beyonder-4x7B-v3"

# Load the tokenizer and build a 4-bit text-generation pipeline (requires a CUDA GPU)
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Apply the chat template, then sample a completion
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
Output:

> A Mixture of Experts (MoE) is a neural network architecture that tackles complex tasks by dividing them into simpler subtasks, delegating each to specialized expert modules. These experts learn to independently handle specific problem aspects. The MoE structure combines their outputs, leveraging their expertise for improved overall performance. This approach promotes modularity, adaptability, and scalability, allowing for better generalization in various applications.