Edit tokenizer
#1
by wonhosong · opened
- README.md +11 -132
- config.json +1 -2
- generation_config.json +1 -1
- solar-api-banner.png +0 -0
- solar_logo.png +0 -0
- tokenizer_config.json +2 -2
README.md
CHANGED
@@ -1,110 +1,21 @@
 ---
-datasets:
-- c-s-ale/alpaca-gpt4-data
-- Open-Orca/OpenOrca
-- Intel/orca_dpo_pairs
-- allenai/ultrafeedback_binarized_cleaned
-language:
-- en
-license: cc-by-nc-4.0
-base_model:
-- upstage/SOLAR-10.7B-v1.0
+license: apache-2.0
 ---

-<p align="left">
-<a href="https://console.upstage.ai/">
-<img src="https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0/resolve/main/solar-api-banner.png" width="100%"/>
-</a>
-<p>
-
 # **Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!**

-**(This model is [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) fine-tuned version for single-turn conversation.)**
+**(This model is [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) fine-tuned version for single-turn conversation. Detailed description to be added.)**


 # **Introduction**
-We introduce SOLAR-10.7B, an advanced large language model (LLM) with 10.7 billion parameters, demonstrating superior performance in various natural language processing (NLP) tasks. It's compact, yet remarkably powerful, and demonstrates unparalleled state-of-the-art performance in models with parameters under 30B.
-
-We present a methodology for scaling LLMs called depth up-scaling (DUS) , which encompasses architectural modifications and continued pretraining. In other words, we integrated Mistral 7B weights into the upscaled layers, and finally, continued pre-training for the entire model.
-
-
-SOLAR-10.7B has remarkable performance. It outperforms models with up to 30B parameters, even surpassing the recent Mixtral 8X7B model. For detailed information, please refer to the experimental table.
-Solar 10.7B is an ideal choice for fine-tuning. SOLAR-10.7B offers robustness and adaptability for your fine-tuning needs. Our simple instruction fine-tuning using the SOLAR-10.7B pre-trained model yields significant performance improvements.
-
-For full details of this model please read our [paper](https://arxiv.org/abs/2312.15166).
-
-
-# **Instruction Fine-Tuning Strategy**
-
-We utilize state-of-the-art instruction fine-tuning methods including supervised fine-tuning (SFT) and direct preference optimization (DPO) [1].
-
-We used a mixture of the following datasets
-- c-s-ale/alpaca-gpt4-data (SFT)
-- Open-Orca/OpenOrca (SFT)
-- in-house generated data utilizing Metamath [2] (SFT, DPO)
-- Intel/orca_dpo_pairs (DPO)
-- allenai/ultrafeedback_binarized_cleaned (DPO)
-
-where we were careful of data contamination by not using GSM8K samples when generating data and filtering tasks when applicable via the following list.
-```python
-filtering_task_list = [
-'task228_arc_answer_generation_easy',
-'ai2_arc/ARC-Challenge:1.0.0',
-'ai2_arc/ARC-Easy:1.0.0',
-'task229_arc_answer_generation_hard',
-'hellaswag:1.1.0',
-'task1389_hellaswag_completion',
-'cot_gsm8k',
-'cot_gsm8k_ii',
-'drop:2.0.0',
-'winogrande:1.1.0'
-]
-```
-
-Using the datasets mentioned above, we applied SFT and iterative DPO training, a proprietary alignment strategy, to maximize the performance of our resulting model.
-
-[1] Rafailov, R., Sharma, A., Mitchell, E., Ermon, S., Manning, C.D. and Finn, C., 2023. Direct preference optimization: Your language model is secretly a reward model. NeurIPS.

-
+We introduce the first 10.7 billion (B) parameter model, SOLAR-10.7B. It's compact, yet remarkably powerful, and demonstrates unparalleled state-of-the-art performance in models with parameters under 30B.

-
+We developed the Depth Up-Scaling technique. Built on the Llama2 architecture, SOLAR-10.7B incorporates the innovative Upstage Depth Up-Scaling. We then integrated Mistral 7B weights into the upscaled layers, and finally, continued pre-training for the entire model.

-
-
-We also ensured the integrity of our model by conducting a data contamination test [3] that is also used by the HuggingFace team [4, 5].
+Depth-Upscaled SOLAR-10.7B has remarkable performance. It outperforms models with up to 30B parameters, even surpassing the recent Mixtral 8X7B model. For detailed information, please refer to the experimental table ([link to be updated soon]).
+Solar 10.7B is an ideal choice for fine-tuning. SOLAR-10.7B offers robustness and adaptability for your fine-tuning needs. Our simple instruction fine-tuning using the SOLAR-10.7B pre-trained model yields significant performance improvements. [[link to be updated soon]]

-Our results, with `result < 0.1, %:` being well below 0.9, indicate that our model is free from contamination.
-
-*The data contamination test results of HellaSwag and Winograde will be added once [3] supports them.*
-
-| Model | ARC | MMLU | TruthfulQA | GSM8K |
-|------------------------------|-------|-------|-------|-------|
-| **SOLAR-10.7B-Instruct-v1.0**| result < 0.1, %: 0.06 |result < 0.1, %: 0.15 | result < 0.1, %: 0.28 | result < 0.1, %: 0.70 |
-
-[3] https://github.com/swj0419/detect-pretrain-code-contamination
-
-[4] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474#657f2245365456e362412a06
-
-[5] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/265#657b6debf81f6b44b8966230
-
-# **Evaluation Results**
-
-| Model | H6 | Model Size |
-|----------------------------------------|-------|------------|
-| **SOLAR-10.7B-Instruct-v1.0** | **74.20** | **~ 11B** |
-| mistralai/Mixtral-8x7B-Instruct-v0.1 | 72.62 | ~ 46.7B |
-| 01-ai/Yi-34B-200K | 70.81 | ~ 34B |
-| 01-ai/Yi-34B | 69.42 | ~ 34B |
-| mistralai/Mixtral-8x7B-v0.1 | 68.42 | ~ 46.7B |
-| meta-llama/Llama-2-70b-hf | 67.87 | ~ 70B |
-| tiiuae/falcon-180B | 67.85 | ~ 180B |
-| **SOLAR-10.7B-v1.0** | **66.04** | **~11B** |
-| mistralai/Mistral-7B-Instruct-v0.2 | 65.71 | ~ 7B |
-| Qwen/Qwen-14B | 65.86 | ~ 14B |
-| 01-ai/Yi-34B-Chat | 65.32 | ~34B |
-| meta-llama/Llama-2-70b-chat-hf | 62.4 | ~ 70B |
-| mistralai/Mistral-7B-v0.1 | 60.97 | ~ 7B |
-| mistralai/Mistral-7B-Instruct-v0.1 | 54.96 | ~ 7B |

 # **Usage Instructions**

@@ -142,48 +53,16 @@ conversation = [ {'role': 'user', 'content': 'Hello?'} ]
 prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)

 inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
-outputs = model.generate(**inputs, use_cache=True, max_length=4096)
-output_text = tokenizer.decode(outputs[0])
+outputs = model.generate(**inputs, use_cache=True, max_length=4096) output_text = tokenizer.decode(outputs[0])
 print(output_text)
 ```

 Below is an example of the output.
 ```
-<s>
-Hello
-
-
-Hello, how can I assist you today? Please feel free to ask any questions or request help with a specific task.</s>
-```
-
-### **License**
-- [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0): apache-2.0
-- [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0): cc-by-nc-4.0
-- Since some non-commercial datasets such as Alpaca are used for fine-tuning, we release this model as cc-by-nc-4.0.
-
-### **How to Cite**
-
-Please cite the following papers using the below format when using this model.
-
-```bibtex
-@misc{kim2023solar,
-title={SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling},
-author={Dahyun Kim and Chanjun Park and Sanghoon Kim and Wonsung Lee and Wonho Song and Yunsu Kim and Hyeonwoo Kim and Yungi Kim and Hyeonju Lee and Jihoo Kim and Changbae Ahn and Seonghoon Yang and Sukyung Lee and Hyunbyung Park and Gyoungjin Gim and Mikyoung Cha and Hwalsuk Lee and Sunghun Kim},
-year={2023},
-eprint={2312.15166},
-archivePrefix={arXiv},
-primaryClass={cs.CL}
-}
-```
-```bibtext
-@misc{kim2024sdpo,
-title={sDPO: Don't Use Your Data All at Once},
-author={Dahyun Kim and Yungi Kim and Wonho Song and Hyeonwoo Kim and Yunsu Kim and Sanghoon Kim and Chanjun Park},
-year={2024},
-eprint={2403.19270},
-archivePrefix={arXiv},
-primaryClass={cs.CL}
-}
+<s> <|im_start|>user
+Hello?<|im_end|>
+<|im_start|>assistant
+Hello, how can I assist you today?</s>
 ```

 ### **The Upstage AI Team** ###

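In the updated snippet the `generate` and `decode` calls now sit on a single line; for reference, a minimal runnable sketch of the same single-turn flow is shown below with the statements separated. The model-loading lines are not part of this diff and are assumptions.

```python
# Minimal sketch of the single-turn usage flow from the README.
# The loading code is an assumption (it is outside the hunks shown above);
# adjust dtype and device placement to your setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "upstage/SOLAR-10.7B-Instruct-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Single-turn conversation, rendered with the tokenizer's chat template
# (the ChatML-style template added in tokenizer_config.json below).
conversation = [{"role": "user", "content": "Hello?"}]
prompt = tokenizer.apply_chat_template(
    conversation, tokenize=False, add_generation_prompt=True
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, use_cache=True, max_length=4096)
output_text = tokenizer.decode(outputs[0])
print(output_text)
```

The leading `<s>` in the example output comes from the BOS token added when the rendered prompt is encoded, not from the chat template itself.
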
config.json
CHANGED
@@ -6,7 +6,6 @@
 "attention_bias": false,
 "bos_token_id": 1,
 "eos_token_id": 2,
-"pad_token_id": 2,
 "hidden_act": "silu",
 "hidden_size": 4096,
 "initializer_range": 0.02,
@@ -23,6 +22,6 @@
 "tie_word_embeddings": false,
 "torch_dtype": "float16",
 "transformers_version": "4.35.0",
-"use_cache":
+"use_cache": false,
 "vocab_size": 32000
 }

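A side effect of dropping `pad_token_id` here (together with the `"pad_token": null` entry in tokenizer_config.json below) is that no pad token is defined anywhere in the shipped configs. The sketch below is an assumption about caller-side handling for batched generation, not part of this change.

```python
# Hypothetical caller-side handling (not part of this diff): with no pad token
# in config.json, generation_config.json, or tokenizer_config.json, batched
# generation needs one to be supplied at runtime.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("upstage/SOLAR-10.7B-Instruct-v1.0")
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # reuse </s> (id 2) for padding
# Alternatively, pass pad_token_id=tokenizer.eos_token_id to model.generate(...).
```
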
generation_config.json
CHANGED
@@ -2,7 +2,7 @@
 "_from_model_config": true,
 "bos_token_id": 1,
 "eos_token_id": 2,
-"pad_token_id": 2,
 "transformers_version": "4.35.2",
 "use_cache": false
 }
+

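For context, the defaults in this file become `model.generation_config` when the model is loaded; the sketch below (an assumption, not part of this diff) shows how to inspect them, and a per-call argument such as the `use_cache=True` in the README snippet still takes precedence.

```python
# Inspect the shipped generation defaults (sketch; repo id taken from this model).
from transformers import GenerationConfig

gen_config = GenerationConfig.from_pretrained("upstage/SOLAR-10.7B-Instruct-v1.0")
print(gen_config.use_cache)  # False, per this file
# Per-call arguments override these defaults:
# outputs = model.generate(**inputs, use_cache=True, max_length=4096)
```
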
solar-api-banner.png
DELETED
Binary file (138 kB)
solar_logo.png
DELETED
Binary file (77.1 kB)
tokenizer_config.json
CHANGED
@@ -28,13 +28,13 @@
 }
 },
 "additional_special_tokens": [],
-"chat_template": "{%
+"chat_template": "{% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% for message in messages %}{{'<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>' + '\n'}}{% endfor %}{% if add_generation_prompt %}{{ '<|im_start|>assistant\n' }}{% endif %}",
 "bos_token": "<s>",
 "clean_up_tokenization_spaces": false,
 "eos_token": "</s>",
 "legacy": true,
 "model_max_length": 1000000000000000019884624838656,
-"pad_token":
+"pad_token": null,
 "sp_model_kwargs": {},
 "spaces_between_special_tokens": false,
 "tokenizer_class": "LlamaTokenizer",
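The added template is the ChatML-style format that the README's example output reflects. As a quick check, here is a sketch (assuming this revision of the tokenizer) of rendering the README's example conversation:

```python
# Render the README's example conversation with the new chat_template (sketch).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("upstage/SOLAR-10.7B-Instruct-v1.0")
conversation = [{"role": "user", "content": "Hello?"}]
prompt = tokenizer.apply_chat_template(
    conversation, tokenize=False, add_generation_prompt=True
)
print(prompt)
# Expected rendering, per the template string above:
# <|im_start|>user
# Hello?<|im_end|>
# <|im_start|>assistant
```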