Commit 2ab6f57 by mlabonne • Parent(s): d8f0911

Update README.md

Files changed (1): README.md (+37 -3)

README.md CHANGED
@@ -13,14 +13,45 @@ base_model:
  - mlabonne/NeuralDaredevil-7B
---

- # Beyonder-4x7B-v3
- Beyonder-4x7B-v3 is a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/61b8e2ba285851687028d395/9XVgxKyuXTQVO5mO-EOd4.jpeg)

# 🔮 Beyonder-4x7B-v3

Beyonder-4x7B-v3 is an improvement over the popular [Beyonder-4x7B-v2](https://huggingface.co/mlabonne/Beyonder-4x7B-v2). It's a Mixture of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):

* [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B)
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B)

## 🔍 Applications

This model uses a context window of 8k. I recommend using it with the Mistral Instruct chat template (works perfectly with LM Studio).

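For reference, here is a minimal sketch of building a prompt through the tokenizer's chat template, using the same `apply_chat_template` call as the usage snippet further down; whether this repo's tokenizer ships the exact Mistral Instruct template is an assumption worth checking.

```python
# Minimal sketch: format a conversation with the tokenizer's chat template.
# Assumption: the tokenizer config of mlabonne/Beyonder-4x7B-v3 provides the
# Mistral Instruct template (verify the rendered prompt before relying on it).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("mlabonne/Beyonder-4x7B-v3")

messages = [{"role": "user", "content": "Summarize what a Mixture of Experts is."}]
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Expected to look roughly like "<s>[INST] ... [/INST]" if the Mistral template is used.
print(prompt)
```
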
If you use SillyTavern, you might want to tweak the inference parameters. Here's what LM Studio uses as a reference: `temp` 0.8, `top_k` 40, `top_p` 0.95, `min_p` 0.05, `repeat_penalty` 1.1.

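A minimal sketch of those settings, assuming the llama-cpp-python bindings and a locally downloaded GGUF file (the filename below is a placeholder):

```python
# Minimal sketch: LM Studio's reference sampling settings mapped onto llama-cpp-python.
# The GGUF filename is hypothetical; pick a real file from the GGUF repo linked below.
from llama_cpp import Llama

llm = Llama(model_path="beyonder-4x7b-v3.Q4_K_M.gguf", n_ctx=8192)  # 8k context window

output = llm(
    "[INST] Write a short scene-setting paragraph for a fantasy roleplay. [/INST]",
    max_tokens=256,
    temperature=0.8,    # temp
    top_k=40,
    top_p=0.95,
    min_p=0.05,
    repeat_penalty=1.1,
)
print(output["choices"][0]["text"])
```
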
Thanks to its four experts, it's a well-rounded model, capable of handling most tasks. Since two experts are always used to generate each answer, every task benefits from capabilities outside its own domain: roleplay chats can lean on the conversational expert, and math questions on the code expert.

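As a toy illustration of that top-2 routing idea (a generic Mixtral-style gate, not this model's actual implementation):

```python
# Toy illustration of top-2 routing over 4 experts (not this model's code).
import torch

hidden = torch.randn(1, 16)                              # one token's hidden state (toy size)
experts = [torch.nn.Linear(16, 16) for _ in range(4)]    # stand-ins for the 4 expert FFNs
router = torch.nn.Linear(16, 4)                          # gating network: one logit per expert

logits = router(hidden)                                  # shape (1, 4)
weights, idx = torch.topk(logits, k=2, dim=-1)           # keep the 2 best-scoring experts
weights = torch.softmax(weights, dim=-1)                 # renormalize over the selected pair

# The token's output is a weighted mix of the two selected experts' outputs.
out = sum(weights[0, i] * experts[idx[0, i].item()](hidden) for i in range(2))
print(out.shape)  # torch.Size([1, 16])
```
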
## ⚡ Quantized models

* **GGUF**: https://huggingface.co/mlabonne/Beyonder-4x7B-v3-GGUF

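A quick way to fetch one of those files with `huggingface_hub` (the exact `.gguf` filename is hypothetical; check the repo's file list):

```python
# Sketch: download one quantized file from the GGUF repo.
# The filename below is an assumption; use a real one from the repo's "Files" tab.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="mlabonne/Beyonder-4x7B-v3-GGUF",
    filename="beyonder-4x7b-v3.Q4_K_M.gguf",
)
print(path)
```
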
## 🏆 Evaluation

### Nous

Beyonder-4x7B-v3 is one of the best models on Nous' benchmark suite (evaluation performed using [LLM AutoEval](https://github.com/mlabonne/llm-autoeval)) and significantly outperforms the v2. See the entire leaderboard [here](https://huggingface.co/spaces/mlabonne/Yet_Another_LLM_Leaderboard).

| Model | Average | AGIEval | GPT4All | TruthfulQA | Bigbench |
|---|---:|---:|---:|---:|---:|
| [mlabonne/AlphaMonarch-7B](https://huggingface.co/mlabonne/AlphaMonarch-7B) [📄](https://gist.github.com/mlabonne/1d33c86824b3a11d2308e36db1ba41c1) | 62.74 | 45.37 | 77.01 | 78.39 | 50.2 |
| [**mlabonne/Beyonder-4x7B-v3**](https://huggingface.co/mlabonne/Beyonder-4x7B-v3) [📄](https://gist.github.com/mlabonne/3740020807e559f7057c32e85ce42d92) | **61.91** | **45.85** | **76.67** | **74.98** | **50.12** |
| [mlabonne/NeuralDaredevil-7B](https://huggingface.co/mlabonne/NeuralDaredevil-7B) [📄](https://gist.github.com/mlabonne/cbeb077d1df71cb81c78f742f19f4155) | 59.39 | 45.23 | 76.2 | 67.61 | 48.52 |
| [mlabonne/Beyonder-4x7B-v2](https://huggingface.co/mlabonne/Beyonder-4x7B-v2) [📄](https://gist.github.com/mlabonne/f73baa140a510a676242f8a4496d05ca) | 57.13 | 45.29 | 75.95 | 60.86 | 46.4 |

### Open LLM Leaderboard

Running...

## 🧩 Configuration

```yaml

@@ -78,4 +109,7 @@ messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```

Output:

> A Mixture of Experts (MoE) is a neural network architecture that tackles complex tasks by dividing them into simpler subtasks, delegating each to specialized expert modules. These experts learn to independently handle specific problem aspects. The MoE structure combines their outputs, leveraging their expertise for improved overall performance. This approach promotes modularity, adaptability, and scalability, allowing for better generalization in various applications.