---
license: apache-2.0
tags:
- moe
- frankenmoe
- merge
- mergekit
- lazymergekit
- dvilasuero/DistilabelBeagle14-7B
- beowolx/CodeNinja-1.0-OpenChat-7B
- WizardLM/WizardMath-7B-V1.1
- Maths
- Code
- Python
base_model:
- dvilasuero/DistilabelBeagle14-7B
- beowolx/CodeNinja-1.0-OpenChat-7B
- WizardLM/WizardMath-7B-V1.1
language:
- en
library_name: transformers
pipeline_tag: text-generation
---

<center><img src='https://i.imgur.com/0xFTuAX.png' width='450px'></center>

# Pearl-3x7B, an xtraordinary Mixture of Experts (MoE) for data science

Pearl-3x7B is a Mixture of Experts (MoE) made with the following models:
* [dvilasuero/DistilabelBeagle14-7B](https://huggingface.co/dvilasuero/DistilabelBeagle14-7B)
* [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
* [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)

A Mixture of Experts (MoE) model combines several specialized models within a unified framework so that a single system can address a wide array of tasks. For a MoE tailored to chat applications, integrating expertise from three distinct domains - chat, code, and mathematics - substantially improves its ability to provide nuanced and precise responses to a diverse spectrum of user inquiries.

The initial expert model, honed for chat applications, exhibits prowess in comprehending natural language nuances, conversational dynamics, and contextual cues. Drawing upon extensive conversational data, it adeptly generates engaging and contextually pertinent responses, thereby fostering meaningful interactions with users.

The subsequent expert model, centered on code, brings to the fore proficiency in programming languages, algorithms, and software engineering principles. Possessing a deep-seated understanding of syntax, logical constructs, and problem-solving methodologies, it deftly tackles queries spanning coding challenges, debugging assistance, and software development inquiries.

Lastly, the third expert model, specializing in mathematics, boasts expertise in mathematical reasoning, problem-solving strategies, and analytical techniques. Armed with a breadth of knowledge encompassing arithmetic, algebra, calculus, and beyond, it offers precise solutions, lucid explanations, and profound insights for mathematical queries, equations, and proofs.
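
In a sparse MoE of this kind, a small router (gate) scores each token's hidden state against the experts, and only the best-matching experts process that token. The snippet below is a minimal, self-contained sketch of top-2 gating over three toy experts; it only illustrates the idea and is not the actual routing code of Pearl-3x7B or mergekit.

```python
# Minimal, illustrative top-2 gating over three toy experts.
# This is NOT Pearl-3x7B's actual routing code, just a sketch of the idea.
import torch
import torch.nn.functional as F

hidden_dim, num_experts, top_k = 16, 3, 2

gate = torch.nn.Linear(hidden_dim, num_experts, bias=False)            # the router
experts = [torch.nn.Linear(hidden_dim, hidden_dim) for _ in range(num_experts)]

x = torch.randn(1, hidden_dim)                                         # one token's hidden state

# Score every expert, keep the top-2, and renormalize their weights.
scores = F.softmax(gate(x), dim=-1)                                    # (1, num_experts)
weights, indices = torch.topk(scores, top_k)                           # (1, top_k) each
weights = weights / weights.sum(dim=-1, keepdim=True)

# The token's output is the weighted sum of the selected experts' outputs.
output = sum(w * experts[i](x) for w, i in zip(weights[0], indices[0]))
print(output.shape)  # torch.Size([1, 16])
```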
  
## Configuration

```yaml
base_model: argilla/CapybaraHermes-2.5-Mistral-7B
experts:
  - source_model: dvilasuero/DistilabelBeagle14-7B
    positive_prompts:
      - "chat"
      - "assistant"
      - "tell me"
      - "explain"
      - "help"
      - "guide"
      - "assist"
      - "answer"
      - "support"
      - "clarify"
      - "elaborate"
      - "educate"
      - "inform"
      - "advise"
      - "instruct"
  - source_model: beowolx/CodeNinja-1.0-OpenChat-7B
    positive_prompts:
      - "code"
      - "python"
      - "javascript"
      - "programming"
      - "algorithm"
      - "develop"
      - "debug"
      - "optimize"
      - "software"
      - "engineer"
      - "web"
      - "application"
      - "framework"
      - "library"
      - "syntax"
      - "logic"
      - "compile"
      - "execute"
  - source_model: WizardLM/WizardMath-7B-V1.1
    positive_prompts:
      - "reason"
      - "math"
      - "mathematics"
      - "solve"
      - "count"
      - "calculate"
      - "analyze"
      - "derive"
      - "compute"
      - "numbers"
      - "equation"
      - "theorem"
      - "proof"
      - "geometry"
      - "trigonometry"
      - "statistics"
      - "probability"
      - "algebra"
      - "integral"
```
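
The `positive_prompts` act as routing hints: at merge time they are used to initialize each expert's gate so that inputs resembling them are directed to the corresponding expert. As a rough, notebook-style sketch (assuming a recent mergekit release with MoE support; the exact command-line options can vary across versions), the merge could be reproduced with mergekit's MoE entry point:

```python
# Notebook-style sketch, assuming a recent mergekit release with MoE support;
# the exact options may differ between versions.
!pip install -qU mergekit

# Save the YAML above as config.yaml, then build the merged MoE into ./merge.
!mergekit-moe config.yaml merge
```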

## Usage

```python
# Notebook-style install (Jupyter/Colab); use plain `pip install` in a shell.
!pip install -qU transformers bitsandbytes accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "louisbrulenaudet/Pearl-3x7B"

# Load the tokenizer and build a text-generation pipeline with 4-bit weights.
tokenizer = AutoTokenizer.from_pretrained(model)
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    model_kwargs={"torch_dtype": torch.float16, "load_in_4bit": True},
)

# Format the conversation with the model's chat template, then sample a completion.
messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in less than 100 words."}]
prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
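
Because the merge packs three 7B experts into a single checkpoint, the model is considerably larger than a plain 7B model; the 4-bit loading shown above (which relies on `bitsandbytes` and a CUDA GPU) is a practical way to fit it on consumer hardware. Dropping `load_in_4bit` from `model_kwargs` falls back to plain `float16` inference.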

## Citing & Authors

If you use this code in your research, please use the following BibTeX entry.

```BibTeX
@misc{louisbrulenaudet2023,
  author =       {Louis Brulé Naudet},
  title =        {Pearl-3x7B, an xtraordinary Mixture of Experts (MoE) for data science},
  year =         {2023},
  howpublished = {\url{https://huggingface.co/louisbrulenaudet/Pearl-3x7B}},
}
```

## Feedback

If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com).