Commit 23acf09 (parent e1606e9): Update README.md
---
license: gpl-3.0
---
This model is intended only for conceptual validation; however, the expert models do not seem to be working as expected.
There are 8 distinct expert models based on Qwen-7B / CausalLM. Six of them are domain-specific experts, each trained on roughly 50–100 billion tokens: a Toolformer/Agent expert, a multilingual-translation expert, a mathematics expert, a visual expert, a coding-and-computer expert, and an unreviewed-knowledge model. Together with Qwen-Chat and Qwen-Base, these form the MoE model.
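As a rough illustration of the composition described above, the sketch below combines 8 stand-in "experts" with a softmax gate and top-k routing. All names, dimensions, and the top-k scheme are assumptions for illustration only, not this model's actual configuration.

```python
import numpy as np

N_EXPERTS = 8   # 6 domain experts + Qwen-Chat + Qwen-Base (per the README)
HIDDEN = 16     # illustrative hidden size, not the real one

rng = np.random.default_rng(0)

# Each expert is stood in for by a random linear map over the hidden state.
expert_weights = [rng.standard_normal((HIDDEN, HIDDEN)) for _ in range(N_EXPERTS)]
gate_weight = rng.standard_normal((HIDDEN, N_EXPERTS))

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def moe_forward(h, top_k=2):
    """Route hidden state h through the top_k highest-scoring experts."""
    scores = softmax(h @ gate_weight)          # gate distribution over experts
    top = np.argsort(scores)[-top_k:]          # indices of the chosen experts
    weights = scores[top] / scores[top].sum()  # renormalize over chosen experts
    return sum(w * (h @ expert_weights[i]) for w, i in zip(weights, top))

h = rng.standard_normal(HIDDEN)
out = moe_forward(h)
print(out.shape)  # (16,)
```

The output keeps the input's hidden dimension; only the mixture of experts contributing to it changes per token.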
The gate is initialized from the hidden states produced by each expert model on few-shot prompt inputs, and then undergoes simple alignment training.
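One plausible reading of this initialization can be sketched as follows: seed one gate direction per expert from the mean hidden state that expert produces on a few-shot prompt, leaving the result to be refined by alignment training. The shapes and the mean-pooling scheme are assumptions, not the authors' actual recipe.

```python
import numpy as np

N_EXPERTS = 8
HIDDEN = 16
rng = np.random.default_rng(1)

# Stand-in for the hidden states of the few-shot prompt from each expert:
# (n_prompt_tokens, HIDDEN) activations per expert.
fewshot_hiddens = [rng.standard_normal((10, HIDDEN)) for _ in range(N_EXPERTS)]

# Initialize one gate column per expert from its mean prompt activation;
# alignment training would then fine-tune this matrix.
gate_init = np.stack([h.mean(axis=0) for h in fewshot_hiddens], axis=1)

print(gate_init.shape)  # (16, 8)
```

Under this scheme, a hidden state similar to what an expert produced on its few-shot prompt scores highly for that expert before any training.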