louisbrulenaudet committed
Commit 9bfcdeb
Parent: 641f99e

Update README.md

Files changed (1)
  1. README.md +38 -5
README.md CHANGED
@@ -9,20 +9,36 @@ tags:
  - dvilasuero/DistilabelBeagle14-7B
  - beowolx/CodeNinja-1.0-OpenChat-7B
  - WizardLM/WizardMath-7B-V1.1
+ - Maths
+ - Code
+ - Python
  base_model:
  - dvilasuero/DistilabelBeagle14-7B
  - beowolx/CodeNinja-1.0-OpenChat-7B
  - WizardLM/WizardMath-7B-V1.1
+ language:
+ - en
+ library_name: transformers
  ---

- # Pearl-3x7B
+ <center><img src='https://i.imgur.com/0xFTuAX.png' width='450px'></center>

- Pearl-3x7B is a Mixure of Experts (MoE) made with the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
+ # Pearl-3x7B, an extraordinary Mixture of Experts (MoE) for data science
+
+ Pearl-3x7B is a Mixture of Experts (MoE) made with the following models:
  * [dvilasuero/DistilabelBeagle14-7B](https://huggingface.co/dvilasuero/DistilabelBeagle14-7B)
  * [beowolx/CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B)
  * [WizardLM/WizardMath-7B-V1.1](https://huggingface.co/WizardLM/WizardMath-7B-V1.1)

- ## 🧩 Configuration
+ A Mixture of Experts (MoE) model combines several specialized models into a single architecture able to address a wide array of tasks. For a chat-oriented model, integrating expertise from three distinct domains - chat, code, and mathematics - substantially enhances its capacity to give nuanced and precise responses to a diverse spectrum of user queries.
+
+ The first expert, tuned for chat applications, excels at natural language nuance, conversational dynamics, and contextual cues. Drawing on extensive conversational data, it generates engaging and contextually relevant responses, fostering meaningful interactions with users.
+
+ The second expert, centered on code, brings proficiency in programming languages, algorithms, and software engineering principles. With a solid understanding of syntax, logical constructs, and problem-solving methodology, it handles coding challenges, debugging assistance, and software development questions.
+
+ The third expert, specializing in mathematics, covers mathematical reasoning, problem-solving strategies, and analytical techniques. Spanning arithmetic, algebra, calculus, and beyond, it offers precise solutions and clear explanations for mathematical queries, equations, and proofs.
+
+ ## Configuration

  ```yaml
  base_model: argilla/CapybaraHermes-2.5-Mistral-7B
@@ -87,7 +103,7 @@ experts:
  - "integral"
  ```

- ## 💻 Usage
+ ## Usage

  ```python
  !pip install -qU transformers bitsandbytes accelerate
@@ -109,4 +125,21 @@ messages = [{"role": "user", "content": "Explain what a Mixture of Experts is in
  prompt = pipeline.tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
  outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
  print(outputs[0]["generated_text"])
- ```
+ ```
+
+ ## Citing & Authors
+
+ If you use this code in your research, please use the following BibTeX entry.
+
+ ```BibTeX
+ @misc{louisbrulenaudet2023,
+ author = {Louis Brulé Naudet},
+ title = {Pearl-3x7B, an extraordinary Mixture of Experts (MoE) for data science},
+ year = {2023},
+ howpublished = {\url{https://huggingface.co/louisbrulenaudet/Pearl-3x7B}},
+ }
+ ```
+
+ ## Feedback
+
+ If you have any feedback, please reach out at [louisbrulenaudet@icloud.com](mailto:louisbrulenaudet@icloud.com).
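
The README's description of how the three experts split chat, code, and mathematics can be made concrete with a small routing sketch. The PyTorch example below is a minimal, illustrative toy and not the actual Pearl-3x7B or mergekit implementation: a learned gate scores each token, the top-k experts are selected, and their outputs are mixed with the normalized gate weights. All names (`ToyExpert`, `ToyMoELayer`) and dimensions are invented for the illustration.

```python
# Minimal, illustrative sketch of top-k expert routing (a toy, not the
# actual Pearl-3x7B / mergekit code). Requires: pip install torch
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyExpert(nn.Module):
    """A small feed-forward block standing in for one expert (chat, code or math)."""

    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


class ToyMoELayer(nn.Module):
    """Scores each token with a learned gate, keeps the top-k experts,
    and mixes their outputs with the normalized gate weights."""

    def __init__(self, dim: int = 64, hidden: int = 256, num_experts: int = 3, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList([ToyExpert(dim, hidden) for _ in range(num_experts)])
        self.gate = nn.Linear(dim, num_experts, bias=False)  # learned router
        self.top_k = top_k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, dim) -> per-token expert scores: (batch, seq, num_experts)
        scores = self.gate(x)
        weights, indices = torch.topk(scores, self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)  # normalize over the selected experts only

        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = indices[..., slot] == e  # tokens whose slot-th choice is expert e
                if mask.any():
                    out[mask] += weights[..., slot][mask].unsqueeze(-1) * expert(x[mask])
        return out


if __name__ == "__main__":
    layer = ToyMoELayer()
    tokens = torch.randn(2, 8, 64)   # (batch, seq, dim)
    print(layer(tokens).shape)       # torch.Size([2, 8, 64])
```

In mergekit-style MoE merges such as the configuration in the diff above, the per-expert positive prompts (of which only the `"integral"` entry is visible in the hunk) are typically used to initialize this gate, so that prompts resembling them tend to be routed toward the corresponding expert.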