wangrongsheng committed
Commit 2297715
1 Parent(s): 410e824

Update README.md

Files changed (1):
  1. README.md +4 -3
README.md CHANGED
@@ -15,17 +15,18 @@ pipeline_tag: text-generation
 </h2>
 </div>
 
+1. <h1>Please follow our Github: <a href="https://github.com/WangRongsheng/Aurora">https://github.com/WangRongsheng/Aurora</a></h1>
+2. <h1>Please follow our Paper: <a href="https://arxiv.org/abs/2312.14557">https://arxiv.org/abs/2312.14557</a></h1>
+
 ## Overview
 
 Existing research has demonstrated that refining large language models (LLMs) through the utilization of machine-generated instruction-following data empowers these models to exhibit impressive zero-shot capabilities for novel tasks, without requiring human-authored instructions. In this paper, we systematically investigate, preprocess, and integrate three Chinese instruction-following datasets with the aim of enhancing the Chinese conversational capabilities of Mixtral-8x7B sparse Mixture-of-Experts model. Through instruction fine-tuning on this carefully processed dataset, we successfully construct the Mixtral-8x7B sparse Mixture-of-Experts model named "Aurora." To assess the performance of Aurora, we utilize three widely recognized benchmark tests: C-Eval, MMLU, and CMMLU. Empirical studies validate the effectiveness of instruction fine-tuning applied to Mixtral-8x7B sparse Mixture-of-Experts model. This work is pioneering in the execution of instruction fine-tuning on a sparse expert-mixed model, marking a significant breakthrough in enhancing the capabilities of this model architecture.
 
-<h1>Please follow our Github: <a href="https://github.com/WangRongsheng/Aurora">https://github.com/WangRongsheng/Aurora</a></h1>
-
 ![](./training_loss.png)
 
 ## Citation
 If you find our work helpful, feel free to give us a cite.
-```bib
+```latex
 @misc{wang2023auroraactivating,
 title={Aurora:Activating Chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning},
 author={Rongsheng Wang and Haoming Chen and Ruizhe Zhou and Yaofei Duan and Kunyan Cai and Han Ma and Jiaxi Cui and Jian Li and Patrick Cheong-Iao Pang and Yapeng Wang and Tao Tan},
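
For readers who want to try the Aurora checkpoint this model card describes, here is a minimal inference sketch using 🤗 Transformers. The repo id `wangrongsheng/Aurora`, the presence of a chat template in the tokenizer, and the generation settings are assumptions for illustration; they are not confirmed by this commit.

```python
# Minimal inference sketch for an instruction-tuned Mixtral-8x7B chat model.
# Assumptions (not confirmed by this commit): the fine-tuned weights are on the
# Hugging Face Hub under a repo id like "wangrongsheng/Aurora" and the tokenizer
# ships a chat template.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "wangrongsheng/Aurora"  # hypothetical repo id; replace with the actual one

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # Mixtral-8x7B is large; bf16 still needs substantial GPU memory
    device_map="auto",
)

# Build a single-turn Chinese chat prompt via the tokenizer's chat template.
messages = [{"role": "user", "content": "请用中文介绍一下混合专家模型。"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

If GPU memory is tight, the same `from_pretrained` call is commonly combined with quantized loading (for example 4-bit via bitsandbytes) to fit Mixtral-8x7B on a single GPU.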