Taishi-N324 committed on
Commit 7b4f46a
1 Parent(s): 8b1b5f2

Upload README.md

Files changed (1)
  1. README.md +109 -49
README.md CHANGED
@@ -4,66 +4,127 @@ language:
  - ja
  library_name: transformers
  pipeline_tag: text-generation
- tag: moe
  license: apache-2.0
  ---

- # Swallow-MX-8x7b-NVE-v0.1

- Our Swallow-MX-8x7b-NVE-v0.1 model has undergone continual pre-training from [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1), primarily with the addition of Japanese language data.

  ![logo](./logo.png)

  ## Model Details

- * **Model type**: Please refer to the [Mixtral technical report](https://arxiv.org/abs/2401.04088) for details on the model architecture.
  * **Language(s)**: Japanese, English
- * **Tokenizer**: This model uses the same tokenizer as Mixtral-8x7B-Instruct-v0.1.
  * **Contact**: swallow[at]nlp.c.titech.ac.jp

  ## Base Model Performance

- ### Japanese version
-
- |Model|Size|JCommonsenseQA|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|
- |---|---|---|---|---|---|---|---|---|---|
- | | |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot|
- | Llama 2 | 7B | 0.3852 | 0.4240 | 0.3410 | 0.7917 | 0.1905 | 0.0760 | 0.1783 | 0.1738 |
- | Swallow | 7B | 0.4808 | 0.5078 | 0.5968 | 0.8573 | 0.1830 | 0.1240 | 0.2510 | 0.1511 |
- | Swallow-Plus | 7B | 0.5478 | 0.5493 | 0.6030 | 0.8544 | 0.1806 | 0.1360 | 0.2568 | 0.1441 |
- | Swallow-NVE | 7B | 0.5433 | 0.5425 | 0.5729 | 0.8684 | 0.2117 | 0.1200 | 0.2405 | 0.1512 |
- | Mistral-7B-v0.1 | 7B | 0.7301 | 0.4245 | 0.2722 | 0.8563 | 0.2006 | 0.1760 | 0.1405 | 0.1733 |
- | Swallow-MS-7b-v0.1 | 7B | 0.8570 | 0.4915 | 0.5519 | 0.8802 | 0.1988 | 0.2240 | 0.2494 | 0.1667 |
- | Llama 2 | 13B | 0.6997 | 0.4415 | 0.4170 | 0.8533 | 0.2139 | 0.1320 | 0.2146 | 0.1982 |
- | Swallow | 13B | 0.7837 | 0.5063 | 0.6398 | 0.9005 | 0.2168 | 0.2040 | 0.2720 | 0.1771 |
- | Swallow-NVE | 13B | 0.7712 | 0.5438 | 0.6351 | 0.9030 | 0.2294 | 0.2120 | 0.2735 | 0.1817 |
- | Llama 2 | 70B | 0.8686 | 0.4656 | 0.5256 | 0.9080 | 0.2361 | 0.3560 | 0.2643 | **0.2398** |
- | Swallow | 70B | 0.9348 | **0.6290** | 0.6960 | 0.9176 | 0.2266 | **0.4840** | **0.3043** | 0.2298 |
- | Swallow-NVE | 70B | **0.9410** | 0.5759 | **0.7024** | **0.9254** | **0.2758** | 0.4720 | 0.3042 | 0.2322 |
- | Mixtral-8x7B-v0.1 | 8x7B | 0.8347 | 0.5335 | 0.3549 | 0.8847 | 0.2192 | 0.3120 | 0.1970 | 0.1987 |
- | Swallow-MX-8x7b-NVE-v0.1 | 8x7B | 0.9258 | 0.5843 | 0.5687 | 0.9148 | 0.2589 | 0.4360 | 0.2705 | 0.2074 |
-
- ### English version
-
- |Model|Size|OpenBookQA|TriviaQA|HellaSwag|SQuAD2.0|XWINO|GSM8K|
- |---|---|---|---|---|---|---|---|
- | | |8-shot|8-shot|8-shot|8-shot|8-shot|8-shot|
- | Llama 2 | 7B | 0.3580 | 0.6265 | 0.5860 | 0.3207 | 0.9049 | 0.1410 |
- | Swallow | 7B | 0.3180 | 0.4836 | 0.5308 | 0.3125 | 0.8817 | 0.1130 |
- | Swallow-Plus | 7B | 0.3280 | 0.4558 | 0.5259 | 0.3134 | 0.8929 | 0.1061 |
- | Swallow-NVE | 7B | 0.3180 | 0.5079 | 0.5329 | 0.2919 | 0.8817 | 0.0986 |
- | Mistral-7B-v0.1 | 7B | 0.3660 | 0.7050 | 0.6264 | 0.3799 | 0.9157 | 0.3533 |
- | Swallow-MS-7b-v0.1 | 7B | 0.3440 | 0.5976 | 0.5810 | 0.3364 | 0.9037 | 0.2623 |
- | Llama 2 | 13B | 0.3760 | 0.7255 | 0.6148 | 0.3681 | 0.9140 | 0.2403 |
- | Swallow | 13B | 0.3500 | 0.5852 | 0.5660 | 0.3406 | 0.9075 | 0.2039 |
- | Swallow-NVE | 13B | 0.3460 | 0.6025 | 0.5700 | 0.3478 | 0.9006 | 0.1751 |
- | Llama 2 | 70B | **0.4280** | **0.8239** | **0.6742** | 0.3770 | **0.9290** | 0.5284 |
- | Swallow | 70B | 0.4220 | 0.7756 | 0.6458 | 0.3745 | 0.9204 | 0.4867 |
- | Swallow-NVE | 70B | 0.4240 | 0.7817 | 0.6439 | 0.3451 | 0.9256 | 0.4943 |
- | Mixtral-8x7B-v0.1 | 8x7B | 0.3960 | 0.7989 | 0.6678 | **0.3842** | 0.9204 | **0.5747** |
- | Swallow-MX-8x7b-NVE-v0.1 | 8x7B | 0.3740 | 0.7847 | 0.6520 | 0.3801 | 0.9170 | 0.5694 |
-
- Please note that Swallow-MX-8x7b-NVE-v0.1 is not derived from Mixtral-8x7B-v0.1; it underwent continual pre-training from Mixtral-8x7B-Instruct-v0.1.

  ## Usage

@@ -79,7 +140,7 @@ pip install -r requirements.txt
  from transformers import AutoModelForCausalLM, AutoTokenizer
  import torch

- model_name = "tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1"
  tokenizer = AutoTokenizer.from_pretrained(model_name)

  model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
@@ -111,7 +172,6 @@ The following datasets were used for continual pre-training.
  - [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
  - [Swallow Corpus](https://arxiv.org/abs/2404.17733)
  - [The Pile](https://huggingface.co/datasets/EleutherAI/pile)
- - [The Vault](https://github.com/FSoft-AI4Code/TheVault)

  ## Risks and Limitations

@@ -119,7 +179,7 @@ The models released here are still in the early stages of our research and devel

  ## Acknowledgements

- We thank Mistral AI for releasing Mixtral-8x7B-Instruct-v0.1 under an open license for others to build on.

  Our project is supported by the [ABCI Large-scale Language Model Building Support Program](https://abci.ai/en/link/llm_support_program.html) of the National Institute of Advanced Industrial Science and Technology.

  - ja
  library_name: transformers
  pipeline_tag: text-generation
+ model_type: mistral
  license: apache-2.0
  ---

+ # Swallow-MS-7b-v0.1

+ Our Swallow-MS-7b-v0.1 model has undergone continual pre-training from [Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1), primarily with the addition of Japanese language data.

+ # Model Release Updates
+
+ We are excited to share the release schedule for our latest models:
+ - **April 26, 2024**: Released [Swallow-MS-7b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-instruct-v0.1)
+ - **March 11, 2024**: Released [Swallow-MS-7b-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1)

  ![logo](./logo.png)

+ This repository provides large language models developed by [TokyoTech-LLM](https://tokyotech-llm.github.io/).
+
  ## Model Details

+ * **Model type**: Please refer to the [Mistral technical report](https://arxiv.org/abs/2310.06825) for details on the model architecture.
  * **Language(s)**: Japanese, English
+ * **Tokenizer**: This model employs a tokenizer whose vocabulary has been broadened with Japanese data. It represents the same text with fewer tokens, which makes inference notably faster.
  * **Contact**: swallow[at]nlp.c.titech.ac.jp

+
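To make the tokenizer note concrete, here is a minimal sketch comparing token counts against the base Mistral tokenizer (the sample sentence is our own illustration, not from the README; it assumes both repositories are reachable via `transformers`):

```python
from transformers import AutoTokenizer

# Compare how many tokens each tokenizer needs for the same Japanese text.
# The sample sentence is an arbitrary illustration, not from the README.
base = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
swallow = AutoTokenizer.from_pretrained("tokyotech-llm/Swallow-MS-7b-v0.1")

text = "東京工業大学は、日本の国立大学です。"
print("Mistral-7B-v0.1  :", len(base.encode(text)), "tokens")
print("Swallow-MS-7b-v0.1:", len(swallow.encode(text)), "tokens")
```

A vocabulary broadened on Japanese data should yield the smaller count on the second line, which is what drives the faster inference claimed above.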
  ## Base Model Performance

+ ### Japanese tasks
+
+ |Model|Size|JCommonsenseQA|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|Average|
+ |---|---|---|---|---|---|---|---|---|---|---|
+ | | |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot||
+ | CyberAgentLM2-7B |7B| 0.2198 | 0.5047 | 0.5066 | 0.7799 | 0.0233 | 0.0600 | 0.2345 | 0.1499 | 0.3098 |
+ | Llama 2 |7B| 0.3852 | 0.4240 | 0.3410 | 0.7917 | 0.1905 | 0.0760 | 0.1783 | 0.1738 | 0.3201 |
+ | japanese-stablelm-base-beta-7b |7B| 0.3610 | 0.4478 | 0.4432 | 0.8318 | 0.2195 | 0.0720 | 0.1946 | 0.1226 | 0.3366 |
+ | japanese-stablelm-base-ja_vocab-beta-7b |7B| 0.2172 | 0.4482 | 0.4309 | 0.8202 | 0.0757 | 0.0520 | 0.1601 | 0.1453 | 0.2937 |
+ | ELYZA-japanese-Llama-2-7b |7B| 0.5791 | 0.4703 | 0.4019 | 0.8226 | 0.1312 | 0.0600 | 0.1795 | 0.1289 | 0.3467 |
+ | ELYZA-japanese-Llama-2-7b-fast |7B| 0.5308 | 0.4330 | 0.3898 | 0.8131 | 0.1289 | 0.0720 | 0.1678 | 0.1143 | 0.3312 |
+ | youri-7b (base) |7B| 0.4620 | 0.4776 | 0.4999 | 0.8506 | 0.1957 | 0.0640 | 0.2671 | **0.1971** | 0.3768 |
+ | Swallow-7b |7B| 0.4808 | 0.5078 | 0.5968 | 0.8573 | 0.1830 | 0.1240 | 0.2510 | 0.1511 | 0.3940 |
+ | Swallow-7b-plus |7B| 0.5478 | **0.5493** | **0.6030** | 0.8544 | 0.1806 | 0.1360 | 0.2568 | 0.1441 | 0.4090 |
+ | Qwen-7B |7B| 0.7712 | 0.4234 | 0.2376 | 0.8594 | 0.1371 | 0.2160 | 0.1689 | 0.1801 | 0.3742 |
+ | nekomata-7b |7B| 0.7417 | 0.4928 | 0.5022 | 0.8707 | 0.1676 | 0.1240 | **0.2673** | 0.1815 | 0.4185 |
+ | Mistral-7B-v0.1 |7B| 0.7301 | 0.4245 | 0.2722 | 0.8563 | 0.2006 | 0.1760 | 0.1405 | 0.1733 | 0.3717 |
+ | japanese-stablelm-base-gamma-7b |7B| 0.7364 | 0.4643 | 0.5568 | **0.8910** | **0.2293** | 0.1680 | 0.2390 | 0.1561 | 0.4301 |
+ | Swallow-MS-7b-v0.1 |7B| **0.8570** | 0.4915 | 0.5519 | 0.8802 | 0.1988 | **0.2240** | 0.2494 | 0.1667 | **0.4524** |
+
+ ### English tasks
+
+ |Model|Size|OpenBookQA|TriviaQA|HellaSwag|SQuAD2.0|XWINO|GSM8K|Average|
+ |---|---|---|---|---|---|---|---|---|
+ | | |8-shot|8-shot|8-shot|8-shot|8-shot|8-shot||
+ | CyberAgentLM2-7B |7B| 0.2860 | 0.3496 | 0.5003 | 0.3510 | 0.8581 | 0.0705 | 0.4026 |
+ | Llama 2 |7B| 0.3580 | 0.6265 | 0.5860 | 0.3207 | 0.9049 | 0.1410 | 0.4895 |
+ | japanese-stablelm-base-beta-7b |7B| 0.3620 | 0.5903 | 0.5707 | 0.2992 | 0.8994 | 0.1198 | 0.4736 |
+ | japanese-stablelm-base-ja_vocab-beta-7b |7B| 0.3520 | 0.5549 | 0.5644 | 0.3079 | 0.8942 | 0.0538 | 0.4545 |
+ | ELYZA-japanese-Llama-2-7b |7B| 0.3400 | 0.5875 | 0.5595 | 0.2721 | 0.8989 | 0.1638 | 0.4703 |
+ | ELYZA-japanese-Llama-2-7b-fast |7B| 0.3280 | 0.5817 | 0.5530 | 0.2605 | 0.8989 | 0.1425 | 0.4608 |
+ | youri-7b (base) |7B| 0.3400 | 0.5257 | 0.5540 | 0.3297 | 0.8938 | 0.0963 | 0.4566 |
+ | Swallow-7b |7B| 0.3180 | 0.4836 | 0.5308 | 0.3125 | 0.8817 | 0.1130 | 0.4399 |
+ | Swallow-7b-plus |7B| 0.3280 | 0.4558 | 0.5259 | 0.3134 | 0.8929 | 0.1061 | 0.4370 |
+ | Qwen-7B |7B| 0.3640 | 0.5695 | 0.5787 | **0.3799** | 0.8933 | **0.4617** | 0.5412 |
+ | nekomata-7b |7B| 0.3340 | 0.4371 | 0.5340 | 0.2933 | 0.8766 | 0.1531 | 0.4380 |
+ | Mistral-7B-v0.1 |7B| **0.3660** | **0.7050** | **0.6264** | **0.3799** | **0.9157** | 0.3533 | **0.5577** |
+ | japanese-stablelm-base-gamma-7b |7B| 0.3240 | 0.5745 | 0.5739 | 0.3546 | 0.8976 | 0.1911 | 0.4860 |
+ | Swallow-MS-7b-v0.1 |7B| 0.3440 | 0.5976 | 0.5810 | 0.3364 | 0.9037 | 0.2623 | 0.5042 |
+
+ ### Code generation tasks
+
+ |Model|Size|JHumanEval|HumanEval|
+ |---|---|---|---|
+ | | |pass@1|pass@1|
+ | CyberAgentLM2-7B |7B| 0.0634 | 0.0756 |
+ | Llama 2 |7B| 0.1152 | 0.1378 |
+ | japanese-stablelm-base-beta-7b |7B| 0.1018 | 0.1280 |
+ | japanese-stablelm-base-ja_vocab-beta-7b |7B| 0.0896 | 0.1122 |
+ | ELYZA-japanese-Llama-2-7b |7B| 0.0287 | 0.0427 |
+ | ELYZA-japanese-Llama-2-7b-fast |7B| 0.0000 | 0.0037 |
+ | youri-7b (base) |7B| 0.0829 | 0.0982 |
+ | Swallow-7b |7B| 0.0183 | 0.0183 |
+ | Swallow-7b-plus |7B| 0.0061 | 0.0037 |
+ | Qwen-7B |7B| 0.1701 | 0.1805 |
+ | nekomata-7b |7B| 0.0988 | 0.1402 |
+ | Mistral-7B-v0.1 |7B| **0.2555** | **0.2933** |
+ | japanese-stablelm-base-gamma-7b |7B| 0.1823 | 0.1915 |
+ | Swallow-MS-7b-v0.1 |7B| 0.2305 | 0.2768 |
+
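The columns above report pass@1. As a reminder of the metric, here is a minimal sketch of the standard unbiased pass@k estimator from HumanEval [Chen+, 2021] (the function name is ours; the README does not state how many samples per problem were drawn):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from Chen+ (2021).

    n: samples generated per problem, c: samples that pass the unit tests.
    """
    if n - c < k:
        return 1.0  # every size-k draw contains at least one passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k=1 this reduces to the fraction of passing samples per problem:
print(pass_at_k(n=20, c=5, k=1))  # 0.25
```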
+ ## Evaluation Benchmarks
+
+ ### Japanese evaluation benchmarks
+
+ We used llm-jp-eval (v1.0.0) and the JP Language Model Evaluation Harness (commit #9b42d41). The details are as follows:
+
+ - Multiple-choice question answering (JCommonsenseQA [Kurihara+, 2022])
+ - Open-ended question answering (JEMHopQA [Ishii+, 2023])
+ - Open-ended question answering (NIILC [Sekine, 2003])
+ - Machine reading comprehension (JSQuAD [Kurihara+, 2022])
+ - Automatic summarization (XL-Sum [Hasan+, 2021])
+ - Machine translation (WMT2020 ja-en [Barrault+, 2020])
+ - Machine translation (WMT2020 en-ja [Barrault+, 2020])
+ - Mathematical reasoning (MGSM [Shi+, 2023])
+
+ ### English evaluation benchmarks
+
+ We used the Language Model Evaluation Harness (v0.3.0). The details are as follows:
+
+ - Multiple-choice question answering (OpenBookQA [Mihaylov+, 2018])
+ - Open-ended question answering (TriviaQA [Joshi+, 2017])
+ - Machine reading comprehension (SQuAD 2.0 [Rajpurkar+, 2018])
+ - Commonsense reasoning (XWINO [Tikhonov & Ryabinin, 2021])
+ - Natural language inference (HellaSwag [Zellers+, 2019])
+ - Mathematical reasoning (GSM8K [Cobbe+, 2021])
+
+ ### Code evaluation benchmarks
+
+ We used the Code Generation LM Evaluation Harness [Allal+, 2022] (commit #0261c52). The details are as follows:
+
+ - Code generation (HumanEval [Chen+, 2021])
+ - Code generation in Japanese (JHumanEval [Satoh+, 2024])
+
 
  ## Usage

  from transformers import AutoModelForCausalLM, AutoTokenizer
  import torch

+ model_name = "tokyotech-llm/Swallow-MS-7b-v0.1"
  tokenizer = AutoTokenizer.from_pretrained(model_name)

  model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
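The hunk above only shows loading the model. For completeness, a minimal generation sketch continuing from the `model` and `tokenizer` just loaded (the prompt and sampling settings are our illustrative choices, not values from the README):

```python
# Illustrative continuation of the Usage snippet above; the prompt and
# sampling parameters are assumptions, not README-provided values.
prompt = "東京工業大学の主なキャンパスは、"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        top_p=0.95,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```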
 
  - [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
  - [Swallow Corpus](https://arxiv.org/abs/2404.17733)
  - [The Pile](https://huggingface.co/datasets/EleutherAI/pile)

  ## Risks and Limitations

  ## Acknowledgements

+ We thank Mistral AI for releasing Mistral 7B v0.1 under an open license for others to build on.

  Our project is supported by the [ABCI Large-scale Language Model Building Support Program](https://abci.ai/en/link/llm_support_program.html) of the National Institute of Advanced Industrial Science and Technology.