Taishi-N324 committed
Commit 8b1b5f2 · Parent(s): aba0d74
Upload README.md

README.md CHANGED
@@ -4,127 +4,66 @@ language:
 - ja
 library_name: transformers
 pipeline_tag: text-generation
-
+tag: moe
 license: apache-2.0
 ---
 
-# Swallow-
+# Swallow-MX-8x7b-NVE-v0.1
 
-Our Swallow-
+Our Swallow-MX-8x7b-NVE-v0.1 model has undergone continual pre-training from [Mixtral-8x7B-Instruct-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1), primarily with the addition of Japanese language data.
 
-# Model Release Updates
-
-We are excited to share the release schedule for our latest models:
-- **April 26, 2024**: Released the [Swallow-MS-7b-instruct-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-instruct-v0.1)
-- **March 11, 2024**: Released the [Swallow-MS-7b-v0.1](https://huggingface.co/tokyotech-llm/Swallow-MS-7b-v0.1)
 ![logo](./logo.png)
 
-This repository provides large language models developed by [TokyoTech-LLM](https://tokyotech-llm.github.io/).
-
 ## Model Details
 
-* **Model type**: Please refer to
+* **Model type**: Please refer to the [Mixtral technical report](https://arxiv.org/abs/2401.04088) for details on the model architecture.
 * **Language(s)**: Japanese English
-* **Tokenizer**: This model
+* **Tokenizer**: This model uses the same tokenizer as Mixtral-8x7B-Instruct-v0.1.
 * **Contact**: swallow[at]nlp.c.titech.ac.jp
 
-
 ## Base Model Performance
 
-### Japanese
-
-*(benchmark table rows not captured in this diff view; surviving row labels include "Llama 2" and "Swallow")*
-
-### English
-
-|Model|Size|OpenBookQA|TriviaQA|HellaSwag|SQuAD2.0|XWINO|GSM8K|
-|---|---|---|---|---|---|---|---|
-| | |8-shot|8-shot|8-shot|8-shot|8-shot|8-shot|
-*(score rows not captured in this diff view; surviving row labels include "Swallow" and "Swallow-")*
-
-### Code generation tasks
-
-|Model|Size|JHumanEval|HumanEval|
-|---|---|---|---|
-| | |pass@1|pass@1|
-| CyberAgentLM2-7B |7B|0.0634|0.0756|
-| Llama 2 |7B|0.1152|0.1378|
-| japanese-stablelm-base-beta-7b|7B|0.1018|0.1280|
-| japanese-stablelm-base-ja_vocab-beta-7b|7B|0.0896|0.1122|
-| ELYZA-japanese-Llama-2-7b|7B|0.0287|0.0427|
-| ELYZA-japanese-Llama-2-7b-fast|7B| 0.0000 |0.0037|
-| youri-7b (base) |7B|0.0829|0.0982|
-| Swallow-7b |7B|0.0183|0.0183|
-| Swallow-7b-plus |7B| 0.0061|0.0037|
-| Qwen-7B |7B|0.1701|0.1805|
-| nekomata-7b |7B|0.0988|0.1402|
-| Mistral-7B-v0.1 |7B|**0.2555**|**0.2933**|
-| japanese-stablelm-base-gamma-7b|7B|0.1823|0.1915|
-| Swallow-MS-7b-v0.1 |7B|0.2305|0.2768|
-
-## Evaluation Benchmarks
-
-### Japanese evaluation benchmarks
-
-We used llm-jp-eval (v1.0.0) and the JP Language Model Evaluation Harness (commit #9b42d41). The details are as follows:
-
-- Multiple-choice question answering (JCommonsenseQA [Kurihara+, 2022])
-- Open-ended question answering (JEMHopQA [Ishii+, 2023])
-- Open-ended question answering (NIILC [Sekine, 2003])
-- Machine reading comprehension (JSQuAD [Kurihara+, 2022])
-- Automatic summarization (XL-Sum [Hasan+, 2021])
-- Machine translation (WMT2020 ja-en [Barrault+, 2020])
-- Machine translation (WMT2020 en-ja [Barrault+, 2020])
-- Mathematical reasoning (MGSM [Shi+, 2023])
-
-### English evaluation benchmarks
-
-We used the Language Model Evaluation Harness (v0.3.0). The details are as follows:
-
-- Multiple-choice question answering (OpenBookQA [Mihaylov+, 2018])
-- Open-ended question answering (TriviaQA [Joshi+, 2017])
-- Machine reading comprehension (SQuAD 2.0 [Rajpurkar+, 2018])
-- Commonsense reasoning (XWINO [Tikhonov & Ryabinin, 2021])
-- Natural language inference (HellaSwag [Zellers+, 2019])
-- Mathematical reasoning (GSM8k [Cobbe+, 2021])
-
-### Code evaluation benchmarks
-
-We utilized the Code Generation LM Evaluation Harness [Allal+, 2022] (commit #0261c52). The details are as follows:
-
-- Code generation (HumanEval [Chen+, 2021])
-- Code generation in Japanese (JHumanEval [Satoh+, 2024])
-
+### Japanese version
+
+|Model|Size|JCommonsenseQA|JEMHopQA|NIILC|JSQuAD|XL-Sum|MGSM|WMT20-en-ja|WMT20-ja-en|
+|---|---|---|---|---|---|---|---|---|---|
+| | |4-shot|4-shot|4-shot|4-shot|1-shot|4-shot|4-shot|4-shot|
+| Llama 2 | 7B | 0.3852 | 0.4240 | 0.3410 | 0.7917 | 0.1905 | 0.0760 | 0.1783 | 0.1738 |
+| Swallow | 7B | 0.4808 | 0.5078 | 0.5968 | 0.8573 | 0.1830 | 0.1240 | 0.2510 | 0.1511 |
+| Swallow-Plus | 7B | 0.5478 | 0.5493 | 0.6030 | 0.8544 | 0.1806 | 0.1360 | 0.2568 | 0.1441 |
+| Swallow-NVE | 7B | 0.5433 | 0.5425 | 0.5729 | 0.8684 | 0.2117 | 0.1200 | 0.2405 | 0.1512 |
+| Mistral-7B-v0.1 | 7B | 0.7301 | 0.4245 | 0.2722 | 0.8563 | 0.2006 | 0.1760 | 0.1405 | 0.1733 |
+|Swallow-MS-7b-v0.1| 7B | 0.8570 | 0.4915 | 0.5519 | 0.8802 | 0.1988 | 0.2240 | 0.2494 | 0.1667 |
+| Llama 2 | 13B | 0.6997 | 0.4415 | 0.4170 | 0.8533 | 0.2139 | 0.1320 | 0.2146 | 0.1982 |
+| Swallow | 13B | 0.7837 | 0.5063 | 0.6398 | 0.9005 | 0.2168 | 0.2040 | 0.2720 | 0.1771 |
+| Swallow-NVE | 13B | 0.7712 | 0.5438 | 0.6351 | 0.9030 | 0.2294 | 0.2120 | 0.2735 | 0.1817 |
+| Llama 2 | 70B | 0.8686 | 0.4656 | 0.5256 | 0.9080 | 0.2361 | 0.3560 | 0.2643 | **0.2398** |
+| Swallow | 70B | 0.9348 | **0.6290** | 0.6960 | 0.9176 | 0.2266 | **0.4840** | **0.3043** | 0.2298 |
+| Swallow-NVE | 70B | **0.9410** | 0.5759 | **0.7024** | **0.9254** | **0.2758** | 0.4720 | 0.3042 | 0.2322 |
+|Mixtral-8x7B-v0.1|8x7B|0.8347|0.5335|0.3549|0.8847|0.2192|0.3120|0.1970|0.1987|
+|Swallow-MX-8x7b-NVE-v0.1|8x7B|0.9258|0.5843|0.5687|0.9148|0.2589|0.4360|0.2705|0.2074|
+
+### English version
+
+|Model|Size|OpenBookQA|TriviaQA|HellaSwag|SQuAD2.0|XWINO|GSM8K|
+|---|---|---|---|---|---|---|---|
+| | |8-shot|8-shot|8-shot|8-shot|8-shot|8-shot|
+| Llama 2 | 7B | 0.3580 | 0.6265 | 0.5860 | 0.3207 | 0.9049 | 0.1410 |
+| Swallow | 7B | 0.3180 | 0.4836 | 0.5308 | 0.3125 | 0.8817 | 0.1130 |
+| Swallow-Plus | 7B | 0.3280 | 0.4558 | 0.5259 | 0.3134 | 0.8929 | 0.1061 |
+| Swallow-NVE | 7B | 0.3180 | 0.5079 | 0.5329 | 0.2919 | 0.8817 | 0.0986 |
+| Mistral-7B-v0.1 | 7B | 0.3660 | 0.7050 | 0.6264 | 0.3799 | 0.9157 | 0.3533 |
+|Swallow-MS-7b-v0.1| 7B | 0.3440 | 0.5976 | 0.5810 | 0.3364 | 0.9037 | 0.2623 |
+| Llama 2 | 13B | 0.3760 | 0.7255 | 0.6148 | 0.3681 | 0.9140 | 0.2403 |
+| Swallow | 13B | 0.3500 | 0.5852 | 0.5660 | 0.3406 | 0.9075 | 0.2039 |
+| Swallow-NVE | 13B | 0.3460 | 0.6025 | 0.5700 | 0.3478 | 0.9006 | 0.1751 |
+| Llama 2 | 70B | **0.4280** | **0.8239** | **0.6742** | 0.3770 | **0.9290** | 0.5284 |
+| Swallow | 70B | 0.4220 | 0.7756 | 0.6458 | 0.3745 | 0.9204 | 0.4867 |
+| Swallow-NVE | 70B | 0.4240 | 0.7817 | 0.6439 | 0.3451 | 0.9256 | 0.4943 |
+|Mixtral-8x7B-v0.1|8x7B|0.3960|0.7989|0.6678|**0.3842**|0.9204|**0.5747**|
+|Swallow-MX-8x7b-NVE-v0.1|8x7B|0.3740|0.7847|0.6520|0.3801|0.9170|0.5694|
+
+Please note that Swallow-MX-8x7b-NVE-v0.1 is not derived from Mixtral-8x7B-v0.1, but rather underwent continued pre-training from Mixtral-8x7B-Instruct-v0.1.
 
 ## Usage
 
@@ -140,7 +79,7 @@ pip install -r requirements.txt
 from transformers import AutoModelForCausalLM, AutoTokenizer
 import torch
 
-model_name = "tokyotech-llm/Swallow-
+model_name = "tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1"
 tokenizer = AutoTokenizer.from_pretrained(model_name)
 
 model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16, device_map="auto")
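For reference, here is a minimal end-to-end generation sketch built around the lines changed in this hunk. Only the model name, dtype, and `device_map` mirror the snippet above; the prompt text and sampling settings are illustrative assumptions, not taken from this commit.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "tokyotech-llm/Swallow-MX-8x7b-NVE-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"
)

# Illustrative Japanese prompt; any plain-text prompt works for a base (non-chat) model.
prompt = "東京工業大学の主なキャンパスは、"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,   # illustrative length cap
        do_sample=True,       # sample rather than decode greedily
        temperature=0.99,
        top_p=0.95,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Note that loading an 8x7B checkpoint in bfloat16 needs on the order of 90 GB of accelerator memory, so `device_map="auto"` placement across several GPUs (or quantization) is typically required.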
@@ -170,8 +109,9 @@ The following datasets were used for continual pre-training.
 - [Algebraic Stack](https://huggingface.co/datasets/EleutherAI/proof-pile-2)
 - [Japanese Wikipedia](https://dumps.wikimedia.org/other/cirrussearch)
 - [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb)
-- [Swallow Corpus](https://
+- [Swallow Corpus](https://arxiv.org/abs/2404.17733)
 - [The Pile](https://huggingface.co/datasets/EleutherAI/pile)
+- [The Vault](https://github.com/FSoft-AI4Code/TheVault)
 
 ## Risks and Limitations
 
@@ -179,7 +119,7 @@ The models released here are still in the early stages of our research and devel
 
 ## Acknowledgements
 
-We thank Mistral AI for releasing
+We thank Mistral AI for releasing Mixtral-8x7B-Instruct-v0.1 under an open license for others to build on.
 
 Our project is supported by the [ABCI Large-scale Language Model Building Support Program](https://abci.ai/en/link/llm_support_program.html) of the National Institute of Advanced Industrial Science and Technology.
 