yuuko-eth committed on
Commit ea31edc
1 Parent(s): 7846abe

Update README.md

Files changed (1):
  1. README.md +47 -0
README.md CHANGED
@@ -1,3 +1,50 @@
  ---
+ inference: false
+ language:
+ - zh
+ - en
  license: unknown
+ model_name: Rain-2x7B-MoE-32k-v0.2
+ pipeline_tag: text-generation
+ prompt_template: '<s> SYS_PROMPT [INST] QUERY1 [/INST] RESPONSE1 [INST] QUERY2 [/INST]'
+ tags:
+ - nlp
+ - chinese
+ - mistral
+ - mixtral
+ - traditional_chinese
+ - merge
+ - mergekit
+ - MediaTek-Research/Breeze-7B-Instruct-v0_1
+ - beowolx/CodeNinja-1.0-OpenChat-7B
+ - mlabonne/Marcoro14-7B-slerp
  ---
+
+ <br/>
+
+ # 小雨同學 2x7B
+
+ A Mandarin MoE (Mixture-of-Experts) model built on MediaTek's Breeze 7B Instruct as its base, with two expert models in total.
+
+ Please use the prompt format recommended by Marcoro14-7B or Breeze-7B-Instruct; the model configuration is shown below.
+
+ ![](https://i.imgur.com/f3Ro6Fu.png)
+
+ ### Rain-2x7B-MoE-32k-v0.2
+
+ This is an experimental Mixtral-architecture MoE model composed of two 7B-sized fine-tunes: Breeze and CodeNinja serve as the experts on top of [Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp).
+
+ Model configuration is as follows:
+
+ * [Marcoro14-7B-slerp](https://huggingface.co/mlabonne/Marcoro14-7B-slerp) as base.
+ * [Breeze-7B-Instruct-v0_1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0_1) as model 0.
+ * [CodeNinja-1.0-OpenChat-7B](https://huggingface.co/beowolx/CodeNinja-1.0-OpenChat-7B) as model 1.
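+
+ As a quick sanity check of this composition, the merged checkpoint's Mixtral config should expose the expert count and the 32k context window. A minimal sketch, assuming the repo id `yuuko-eth/Rain-2x7B-MoE-32k-v0.2` (hypothetical; not stated on this card):
+
+ ```python
+ # Minimal sketch: inspect the merged MoE configuration.
+ # Assumption: the repo id below is hypothetical; adjust to the actual repository.
+ from transformers import AutoConfig
+
+ config = AutoConfig.from_pretrained("yuuko-eth/Rain-2x7B-MoE-32k-v0.2")
+ print(config.model_type)               # expected: "mixtral"
+ print(config.num_local_experts)        # expected: 2 (Breeze + CodeNinja)
+ print(config.num_experts_per_tok)      # experts routed per token
+ print(config.max_position_embeddings)  # expected: 32768, the "32k" in the name
+ ```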
+
+ To use the model, use either of the prompt templates suggested by the base models above.
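+
+ For example, a minimal generation sketch with `transformers`, following the `[INST]` template given in the metadata above (`<s> SYS_PROMPT [INST] QUERY1 [/INST] RESPONSE1 [INST] QUERY2 [/INST]`); the repo id is again an assumption:
+
+ ```python
+ # Minimal generation sketch using the card's prompt template.
+ # Assumption: the repo id below is hypothetical; adjust to the actual repository.
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "yuuko-eth/Rain-2x7B-MoE-32k-v0.2"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
+
+ sys_prompt = "You are a helpful assistant."
+ query = "Write a quicksort in Python."
+ # The template already includes <s>, so skip the tokenizer's own special tokens.
+ prompt = f"<s> {sys_prompt} [INST] {query} [/INST]"
+ inputs = tokenizer(prompt, return_tensors="pt", add_special_tokens=False).to(model.device)
+
+ outputs = model.generate(**inputs, max_new_tokens=256)
+ print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
+ ```
+
+ Since the query mentions `python`, the router should favor the coding expert, per the gate keywords listed in the Notes below.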
+
+ ## Notes
+
+ Please evaluate the model before using it in any application pipeline. Gate-activation keywords for the coding expert are `'code'`, `'python'`, `'typescript'`, `'javascript'`, `'programming'`, and `'algorithm'`.