Quantization made by Richard Erkhov.

[GitHub](https://github.com/RichardErkhov)

[Discord](https://discord.gg/pvy7H8DZMG)

[Request more models](https://github.com/RichardErkhov/quant_request)


SOLAR-10.7B-v1.0 - bnb 8bits
- Model creator: https://huggingface.co/upstage/
- Original model: https://huggingface.co/upstage/SOLAR-10.7B-v1.0/
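
Since this repository hosts the bitsandbytes 8-bit quantization, here is a minimal loading sketch. It assumes `bitsandbytes` and `accelerate` are installed alongside `transformers`; the repo id below is a placeholder for this repository, not a confirmed model id:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Placeholder id -- substitute this quant repository's actual model id.
repo_id = "RichardErkhov/SOLAR-10.7B-v1.0-8bits"

tokenizer = AutoTokenizer.from_pretrained(repo_id)
# A pre-quantized bnb checkpoint ships with its quantization config, so it
# loads without extra arguments; device_map="auto" places it on the GPU.
model = AutoModelForCausalLM.from_pretrained(repo_id, device_map="auto")

# Alternatively, quantize the original fp16 weights to 8-bit at load time:
# model = AutoModelForCausalLM.from_pretrained(
#     "upstage/SOLAR-10.7B-v1.0",
#     quantization_config=BitsAndBytesConfig(load_in_8bit=True),
#     device_map="auto",
# )
```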


Original model description:
---
license: apache-2.0
---

<p align="left">
<a href="https://go.upstage.ai/solar-obt-hf-modelcardv1">
<img src="https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0/resolve/main/solar-api-banner.png" width="100%"/>
</a>
</p>


# **Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!**


# **Introduction**
We introduce SOLAR-10.7B, an advanced large language model (LLM) with 10.7 billion parameters that demonstrates superior performance across a range of natural language processing (NLP) tasks. It is compact yet remarkably powerful, achieving state-of-the-art results among models with fewer than 30B parameters.

We present a methodology for scaling LLMs called depth up-scaling (DUS), which combines architectural modification with continued pretraining: we integrated Mistral 7B weights into the depth-upscaled layers and then continued pre-training the entire model.
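
As a concrete illustration of the depth up-scaling recipe reported in the paper (duplicate a 32-layer base, trim 8 layers at the seam of each copy, and stack the remainder into 48 layers), here is a small bookkeeping sketch; the layer counts come from the paper, while the index arithmetic is purely illustrative:

```python
# Depth up-scaling (DUS) layer bookkeeping, per the configuration in the paper:
# a 32-layer base (initialized from Mistral 7B weights) is duplicated, 8 layers
# are trimmed at the seam of each copy, and the remainders are stacked.
n_layers, n_trim = 32, 8

bottom = list(range(0, n_layers - n_trim))  # layers 0..23 of the first copy
top = list(range(n_trim, n_layers))         # layers 8..31 of the duplicate
upscaled = bottom + top                     # the 48-layer depth-upscaled model

assert len(upscaled) == 2 * (n_layers - n_trim)  # 48 layers, ~10.7B parameters
```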

SOLAR-10.7B delivers remarkable performance, outperforming models with up to 30B parameters and even surpassing the recent Mixtral 8x7B model. For detailed information, please refer to the evaluation table below.
SOLAR-10.7B is also an ideal base for fine-tuning, offering robustness and adaptability: our simple instruction fine-tuning of the pre-trained model yields significant performance improvements ([SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)).

For full details of this model, please read our [paper](https://arxiv.org/abs/2312.15166).
# **Evaluation Results**
| Model | H6 (Open LLM Leaderboard avg.) | Model Size |
|----------------------------------------|-----------|------------|
| **SOLAR-10.7B-Instruct-v1.0** | **74.20** | **~ 11B** |
| mistralai/Mixtral-8x7B-Instruct-v0.1 | 72.62 | ~ 46.7B |
| 01-ai/Yi-34B-200K | 70.81 | ~ 34B |
| 01-ai/Yi-34B | 69.42 | ~ 34B |
| mistralai/Mixtral-8x7B-v0.1 | 68.42 | ~ 46.7B |
| meta-llama/Llama-2-70b-hf | 67.87 | ~ 70B |
| tiiuae/falcon-180B | 67.85 | ~ 180B |
| **SOLAR-10.7B-v1.0** | **66.04** | **~ 11B** |
| Qwen/Qwen-14B | 65.86 | ~ 14B |
| mistralai/Mistral-7B-Instruct-v0.2 | 65.71 | ~ 7B |
| 01-ai/Yi-34B-Chat | 65.32 | ~ 34B |
| meta-llama/Llama-2-70b-chat-hf | 62.40 | ~ 70B |
| mistralai/Mistral-7B-v0.1 | 60.97 | ~ 7B |
| mistralai/Mistral-7B-Instruct-v0.1 | 54.96 | ~ 7B |

# **Usage Instructions**

This model is a pre-trained base model, so on its own it only continues input text; it is not tuned to follow instructions. To use it for chatting, you must fine-tune the model first.

### **Version**

Make sure you have the correct version of the transformers library installed:

```sh
pip install transformers==4.35.2
```

### **Loading the Model**

Use the following Python code to load the model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Upstage/SOLAR-10.7B-v1.0")
model = AutoModelForCausalLM.from_pretrained(
    "Upstage/SOLAR-10.7B-v1.0",
    device_map="auto",          # place layers on available GPUs/CPU automatically
    torch_dtype=torch.float16,  # load the weights in half precision
)
```

### **Generating Text**

To generate text, use the following Python code:

```python
text = "Hi, my name is "
# Move the inputs to the model's device so generation also works on GPU.
inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
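
By default `generate` decodes greedily, which tends to be repetitive for open-ended completion. Below is a sketch of a sampling variant using standard `transformers` generation parameters; these knobs and values are illustrative additions, not part of the original card:

```python
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,   # sample from the distribution instead of greedy decoding
    temperature=0.7,  # soften the next-token distribution
    top_p=0.9,        # nucleus sampling cutoff
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```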

### **License**
- [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0): apache-2.0
- [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0): cc-by-nc-4.0
  - Since some non-commercial datasets such as Alpaca are used for fine-tuning, we release the fine-tuned model as cc-by-nc-4.0.

### **How to Cite**

Please cite this model using the following format:

```bibtex
@misc{kim2023solar,
      title={SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling},
      author={Dahyun Kim and Chanjun Park and Sanghoon Kim and Wonsung Lee and Wonho Song and Yunsu Kim and Hyeonwoo Kim and Yungi Kim and Hyeonju Lee and Jihoo Kim and Changbae Ahn and Seonghoon Yang and Sukyung Lee and Hyunbyung Park and Gyoungjin Gim and Mikyoung Cha and Hwalsuk Lee and Sunghun Kim},
      year={2023},
      eprint={2312.15166},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```

### **The Upstage AI Team**
Upstage is creating the best LLM and DocAI. Please find more information at https://upstage.ai

### **Contact Us**
For any questions or suggestions, please use the discussion tab. If you want to contact us directly, drop an email to [contact@upstage.ai](mailto:contact@upstage.ai).