Limerobot committed on
Commit
7698e3e
1 Parent(s): ee0b265

Update README.md

Files changed (1)
  1. README.md +4 -2
README.md CHANGED
@@ -4,12 +4,14 @@ license: apache-2.0
 
 # **Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!**
 
+**(This model is a version of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) fine-tuned for single-turn conversation. A detailed description will be added.)**
+
 
 # **Introduction**
 
-We introduce the first 10.7 billion (B) parameter model, [SOLAR-10.7B](https://huggingface.co/upstage/SOLAR-10.7B-v1.0). It's compact, yet remarkably powerful, and demonstrates unparalleled state-of-the-art performance in models with parameters under 30B.
+We introduce the first 10.7 billion (B) parameter model, SOLAR-10.7B. It's compact, yet remarkably powerful, and demonstrates unparalleled state-of-the-art performance in models with parameters under 30B.
 
-We developed the Depth Up-Scaling technique. Built on the Llama2 architecture, [SOLAR-10.7B](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) incorporates the innovative Upstage Depth Up-Scaling. We then integrated Mistral 7B weights into the upscaled layers, and finally, continued pre-training for the entire model.
+We developed the Depth Up-Scaling technique. Built on the Llama2 architecture, SOLAR-10.7B incorporates the innovative Upstage Depth Up-Scaling. We then integrated Mistral 7B weights into the upscaled layers and, finally, continued pre-training the entire model.
 
 Depth-Upscaled SOLAR-10.7B has remarkable performance. It outperforms models with up to 30B parameters, even surpassing the recent Mixtral 8x7B model. For detailed information, please refer to the experimental table ([link to be updated soon]).
 Solar 10.7B is an ideal choice for fine-tuning. SOLAR-10.7B offers robustness and adaptability for your fine-tuning needs. Our simple instruction fine-tuning using the SOLAR-10.7B pre-trained model yields significant performance improvements. [[link to be updated soon]]
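The Depth Up-Scaling step the README describes (duplicate the base model's layer stack, then continue pre-training) can be sketched on layer indices alone. This is a minimal toy sketch, not the actual implementation: the 32-layer base, 48-layer result, and 8-layer trim at the seam are assumptions drawn from the SOLAR paper's description of DUS, and are not stated in this diff.

```python
# Toy sketch of Depth Up-Scaling (DUS) using plain layer indices.
# Assumed numbers (from the SOLAR paper, not this README): a 32-layer
# base model is duplicated and 8 layers are dropped on each side of the
# seam, yielding a 48-layer up-scaled model.
def depth_up_scale(layers, drop=8):
    """Duplicate the layer stack, trimming `drop` layers at the seam."""
    top = layers[: len(layers) - drop]  # first copy minus its last `drop` layers
    bottom = layers[drop:]              # second copy minus its first `drop` layers
    return top + bottom

base = list(range(32))       # stand-ins for 32 transformer decoder blocks
scaled = depth_up_scale(base)
print(len(scaled))           # 48 blocks after up-scaling
```

In the real pipeline, the resulting 48-layer stack is initialized from the pre-trained weights (Mistral 7B, per the README) and the whole model is then continually pre-trained.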