JosephusCheung committed
Commit 7f34ed2
Parent: 60354c5

Update README.md

Files changed (1): README.md +3 -1
README.md CHANGED

```diff
@@ -7,7 +7,9 @@ Only intended for conceptual validation, however the expert models do not seem t
 
 There are 8 completely different expert models based on Qwen-7B / CausalLM, six of which are specific domain models that have seen 50~100 billion tokens, including: a Toolformer/Agent expert model, a multilingual translation expert model, a mathematics expert model, a visual expert model, a coding and computer expert model, and an uncensored knowledge model — together forming the MoE model along with Qwen-Chat and Qwen-Base.
 
-The initialization of the gate is based on the hidden state of the few-shot prompt input from each expert model and undergoes simple alignment training.
+The initialization of the gate is based on the hidden state of the few-shot prompt input from each expert model and undergoes simple alignment training in FLAN/Orca style.
+
+For multimodal input, please use visual.bin / visual.py; this should work the same as in Qwen-VL.
 
 Prompt format: ChatML
 
```
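
The gate-initialization step described in the diff (one gate row per expert, derived from that expert's hidden state on a shared few-shot prompt) can be sketched roughly as below. This is a minimal illustration under stated assumptions, not this repository's actual code: the helper name `init_gate_from_experts`, the mean pooling over the last hidden layer, and the single-linear-layer gate are all assumptions.

```python
import torch

@torch.no_grad()
def init_gate_from_experts(gate, expert_models, tokenizer, few_shot_prompt):
    """Initialize each gate row from one expert's hidden state for a shared
    few-shot prompt. A sketch only; pooling and gate shape are assumptions."""
    ids = tokenizer(few_shot_prompt, return_tensors="pt").input_ids
    rows = []
    for expert in expert_models:
        out = expert(ids, output_hidden_states=True)
        # Mean-pool the final hidden layer over the sequence dimension,
        # giving one vector per expert (an assumed pooling choice).
        rows.append(out.hidden_states[-1].mean(dim=1).squeeze(0))
    # gate is assumed to be torch.nn.Linear(hidden_size, num_experts).
    gate.weight.copy_(torch.stack(rows))
```

Per the diff, this initialization is only a starting point; the gate then undergoes simple FLAN/Orca-style alignment training.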
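The multimodal note says input handling should match Qwen-VL. If that holds, usage would presumably follow Qwen-VL's interleaved image/text format; the snippet below assumes Qwen-VL's real `from_list_format` and `chat` helpers carry over to this model, which the commit does not confirm, and the model ID is a placeholder.

```python
# Hypothetical usage, assuming the Qwen-VL chat interface carries over
# unchanged; the model ID below is a placeholder, not this repo's real ID.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "path/to/this-model"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True).eval()

# Qwen-VL-style interleaved image/text input.
query = tokenizer.from_list_format([
    {"image": "https://example.com/demo.jpg"},  # placeholder image URL
    {"text": "What is shown in this picture?"},
])
response, history = model.chat(tokenizer, query=query, history=None)
print(response)
```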
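Since the README specifies "Prompt format: ChatML", a minimal example of that layout may help. ChatML, as used by Qwen-Chat, wraps each turn in `<|im_start|>` / `<|im_end|>` markers; the system message here is only illustrative.

```
<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>user
Hello, who are you?<|im_end|>
<|im_start|>assistant
```

Generation continues after the final `<|im_start|>assistant` line and stops at `<|im_end|>`.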