---
license: mit
datasets:
- ChiyuSONG/dynamics-of-instruction-tuning
language:
- zh
---

<p align="center">
  💻 <a href="https://github.com/ChiyuSONG/data-efficient-training-of-LLMs" target="_blank">[Github Repo]</a> • 📃 <a href="https://arxiv.org/abs/2310.19651" target="_blank">[Paper]</a> 
</p>


This [project](https://github.com/ChiyuSONG/data-efficient-training-of-LLMs) explores data-efficient training methods for large language models, studying how to construct and use datasets effectively so that models better acquire language expression and general-purpose abilities. We will keep improving model performance with state-of-the-art NLP research; the model weights are fully open-sourced, and we provide straightforward recipes for training, inference, and deployment.

In version 1, we focus on the instruction-tuning stage. [Dynamics of Instruction Tuning](https://arxiv.org/abs/2310.19651) shows that, during instruction tuning, the different abilities of large language models are influenced by multiple factors and therefore grow at different paces. We use the paper's open-sourced, human-curated instruction set [*DoIT*](https://huggingface.co/datasets/ChiyuSONG/dynamics-of-instruction-tuning), which covers ten ability categories such as creative writing, code generation, and logical reasoning, to validate training a general-purpose model based on [*Baichuan2-13B-Base*](https://github.com/baichuan-inc/Baichuan2), as sketched below.
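
As a reference point (not the official training script), here is a minimal sketch of loading the *DoIT* data and the *Baichuan2-13B-Base* backbone with the `datasets` and `transformers` libraries; the dataset's default configuration, splits, and field names are assumptions here, so inspect the dataset card before adapting this into a fine-tuning pipeline.

```python
# Minimal sketch: fetch the DoIT instruction data and the Baichuan2-13B-Base
# backbone prior to instruction tuning. Splits/columns of the dataset are not
# documented here and are assumptions -- print the object to inspect them.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

doit = load_dataset("ChiyuSONG/dynamics-of-instruction-tuning")
print(doit)  # inspect available splits and columns first

base_model = "baichuan-inc/Baichuan2-13B-Base"
tokenizer = AutoTokenizer.from_pretrained(base_model, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    torch_dtype="auto",      # keep the checkpoint's native precision
    device_map="auto",       # requires `accelerate` for multi-GPU placement
    trust_remote_code=True,  # Baichuan2 ships custom modeling code
)
```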

Citation:
```
@article{song2023dynamics,
  title={Dynamics of Instruction Tuning: Each Ability of Large Language Models Has Its Own Growth Pace},
  author={Song, Chiyu and Zhou, Zhanchao and Yan, Jianhao and Fei, Yuejiao and Lan, Zhenzhong and Zhang, Yue},
  journal={arXiv preprint arXiv:2310.19651},
  year={2023}
}
```