Chanjun committed
Commit aac9da5
1 Parent(s): 5c6538b

Update README.md

Files changed (1): README.md +1 -1
README.md CHANGED
@@ -25,7 +25,7 @@ We developed the Depth Up-Scaling technique. Built on the Llama2 architecture, S
 Depth-Upscaled SOLAR-10.7B has remarkable performance. It outperforms models with up to 30B parameters, even surpassing the recent Mixtral 8X7B model. For detailed information, please refer to the experimental table.
 Solar 10.7B is an ideal choice for fine-tuning. SOLAR-10.7B offers robustness and adaptability for your fine-tuning needs. Our simple instruction fine-tuning using the SOLAR-10.7B pre-trained model yields significant performance improvements.
 
-For full details of this model please read our [paper](https://arxiv.org/submit/5313698).
+For full details of this model please read our [paper](Coming Soon).
 
 # **Instruction Fine-Tuning Strategy**