Upload 936 files
This view is limited to 50 files because it contains too many changes.
- data/README_zh-CN.md +304 -0
- data/xtuner/.github/CONTRIBUTING.md +258 -0
- data/xtuner/.github/workflows/deploy.yml +26 -0
- data/xtuner/.github/workflows/lint.yml +23 -0
- data/xtuner/.gitignore +124 -0
- data/xtuner/.owners.yml +8 -0
- data/xtuner/.pre-commit-config-zh-cn.yaml +51 -0
- data/xtuner/.pre-commit-config.yaml +53 -0
- data/xtuner/LICENSE +201 -0
- data/xtuner/MANIFEST.in +2 -0
- data/xtuner/README.md +302 -0
- data/xtuner/docs/en/.readthedocs.yaml +16 -0
- data/xtuner/docs/en/Makefile +20 -0
- data/xtuner/docs/en/_static/css/readthedocs.css +6 -0
- data/xtuner/docs/en/_static/image/logo.png +0 -0
- data/xtuner/docs/en/acceleration/benchmark.rst +2 -0
- data/xtuner/docs/en/acceleration/deepspeed.rst +2 -0
- data/xtuner/docs/en/acceleration/flash_attn.rst +2 -0
- data/xtuner/docs/en/acceleration/hyper_parameters.rst +2 -0
- data/xtuner/docs/en/acceleration/length_grouped_sampler.rst +2 -0
- data/xtuner/docs/en/acceleration/pack_to_max_length.rst +2 -0
- data/xtuner/docs/en/acceleration/train_extreme_long_sequence.rst +2 -0
- data/xtuner/docs/en/acceleration/train_large_scale_dataset.rst +2 -0
- data/xtuner/docs/en/acceleration/varlen_flash_attn.rst +2 -0
- data/xtuner/docs/en/chat/agent.md +1 -0
- data/xtuner/docs/en/chat/llm.md +1 -0
- data/xtuner/docs/en/chat/lmdeploy.md +1 -0
- data/xtuner/docs/en/chat/vlm.md +1 -0
- data/xtuner/docs/en/conf.py +109 -0
- data/xtuner/docs/en/dpo/modify_settings.md +83 -0
- data/xtuner/docs/en/dpo/overview.md +27 -0
- data/xtuner/docs/en/dpo/quick_start.md +71 -0
- data/xtuner/docs/en/evaluation/hook.md +1 -0
- data/xtuner/docs/en/evaluation/mmbench.md +1 -0
- data/xtuner/docs/en/evaluation/mmlu.md +1 -0
- data/xtuner/docs/en/evaluation/opencompass.md +1 -0
- data/xtuner/docs/en/get_started/installation.md +52 -0
- data/xtuner/docs/en/get_started/overview.md +5 -0
- data/xtuner/docs/en/get_started/quickstart.md +308 -0
- data/xtuner/docs/en/index.rst +123 -0
- data/xtuner/docs/en/internevo_migration/ftdp_dataset/Case1.rst +2 -0
- data/xtuner/docs/en/internevo_migration/ftdp_dataset/Case2.rst +2 -0
- data/xtuner/docs/en/internevo_migration/ftdp_dataset/Case3.rst +2 -0
- data/xtuner/docs/en/internevo_migration/ftdp_dataset/Case4.rst +2 -0
- data/xtuner/docs/en/internevo_migration/ftdp_dataset/ftdp.rst +2 -0
- data/xtuner/docs/en/internevo_migration/internevo_migration.rst +2 -0
- data/xtuner/docs/en/make.bat +35 -0
- data/xtuner/docs/en/models/supported.md +1 -0
- data/xtuner/docs/en/notes/changelog.md +25 -0
- data/xtuner/docs/en/preparation/pretrained_model.rst +2 -0
data/README_zh-CN.md
ADDED
@@ -0,0 +1,304 @@
<div align="center">
<img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/>
<br /><br />

[![GitHub Repo stars](https://img.shields.io/github/stars/InternLM/xtuner?style=social)](https://github.com/InternLM/xtuner/stargazers)
[![license](https://img.shields.io/github/license/InternLM/xtuner.svg)](https://github.com/InternLM/xtuner/blob/main/LICENSE)
[![PyPI](https://img.shields.io/pypi/v/xtuner)](https://pypi.org/project/xtuner/)
[![Downloads](https://static.pepy.tech/badge/xtuner)](https://pypi.org/project/xtuner/)
[![issue resolution](https://img.shields.io/github/issues-closed-raw/InternLM/xtuner)](https://github.com/InternLM/xtuner/issues)
[![open issues](https://img.shields.io/github/issues-raw/InternLM/xtuner)](https://github.com/InternLM/xtuner/issues)

👋 Join us on [![Static Badge](https://img.shields.io/badge/-grey?style=social&logo=wechat&label=微信)](https://cdn.vansin.top/internlm/xtuner.jpg)
[![Static Badge](https://img.shields.io/badge/-grey?style=social&logo=twitter&label=推特)](https://twitter.com/intern_lm)
[![Static Badge](https://img.shields.io/badge/-grey?style=social&logo=discord&label=Discord)](https://discord.gg/xa29JuW87d)

🔍 Explore our models on
[![Static Badge](https://img.shields.io/badge/-gery?style=social&label=🤗%20Huggingface)](https://huggingface.co/xtuner)
[![Static Badge](https://img.shields.io/badge/-gery?style=social&label=🤖%20ModelScope)](https://www.modelscope.cn/organization/xtuner)
[![Static Badge](https://img.shields.io/badge/-gery?style=social&label=🧰%20OpenXLab)](https://openxlab.org.cn/usercenter/xtuner)
[![Static Badge](https://img.shields.io/badge/-gery?style=social&label=🧠%20WiseModel)](https://www.wisemodel.cn/organization/xtuner)

[English](README.md) | 简体中文

</div>

## 🚀 Speed Benchmark

- Training throughput of XTuner vs. LLaMA-Factory on the Llama2-7B model

<div align=center>
  <img src="https://github.com/InternLM/xtuner/assets/41630003/9c9dfdf4-1efb-4daf-84bf-7c379ae40b8b" style="width:80%">
</div>

- Training throughput of XTuner vs. LLaMA-Factory on the Llama2-70B model

<div align=center>
  <img src="https://github.com/InternLM/xtuner/assets/41630003/5ba973b8-8885-4b72-b51b-c69fa1583bdd" style="width:80%">
</div>

## 🎉 News

- **\[2024/07\]** Support [MiniCPM](xtuner/configs/minicpm/) models!
- **\[2024/07\]** Support [DPO](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/dpo), [ORPO](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/orpo) and [Reward Model](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/reward_model) training, with support for packed data and sequence parallelism! See the [documents](https://xtuner.readthedocs.io/zh-cn/latest/dpo/overview.html) for more details.
- **\[2024/07\]** Support [InternLM 2.5](xtuner/configs/internlm/internlm2_5_chat_7b/) models!
- **\[2024/06\]** Support [DeepSeek V2](xtuner/configs/deepseek/deepseek_v2_chat/) models! **Training speed is doubled!**
- **\[2024/04\]** The multimodal model [LLaVA-Phi-3-mini](https://huggingface.co/xtuner/llava-phi-3-mini-hf) is released! See this [document](xtuner/configs/llava/phi3_mini_4k_instruct_clip_vit_large_p14_336) for a quick start!
- **\[2024/04\]** The multimodal models [LLaVA-Llama-3-8B](https://huggingface.co/xtuner/llava-llama-3-8b) and [LLaVA-Llama-3-8B-v1.1](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1) are released! See this [document](xtuner/configs/llava/llama3_8b_instruct_clip_vit_large_p14_336) for a quick start!
- **\[2024/04\]** Support [Llama 3](xtuner/configs/llama) models!
- **\[2024/04\]** Support sequence-parallel training for extremely long-context language models! \[[Document](https://github.com/InternLM/xtuner/blob/docs/docs/zh_cn/acceleration/train_extreme_long_sequence.rst)\] \[[Speed Benchmark](https://github.com/InternLM/xtuner/blob/docs/docs/zh_cn/acceleration/benchmark.rst)\]
- **\[2024/02\]** Support [Gemma](xtuner/configs/gemma) models!
- **\[2024/02\]** Support [Qwen1.5](xtuner/configs/qwen/qwen1_5) models!
- **\[2024/01\]** Support [InternLM2](xtuner/configs/internlm) models! The latest multimodal models [LLaVA-Internlm2-7B](https://huggingface.co/xtuner/llava-internlm2-7b) / [20B](https://huggingface.co/xtuner/llava-internlm2-20b) are also released, showing strong performance!
- **\[2024/01\]** Support the [DeepSeek-MoE](https://huggingface.co/deepseek-ai/deepseek-moe-16b-chat) model! QLoRA fine-tuning needs only 20GB of GPU memory, and full-parameter fine-tuning needs only 4x80GB. See the corresponding [configs](xtuner/configs/deepseek/) for a quick start!
- **\[2023/12\]** 🔥 Support pre-training and instruction fine-tuning of multimodal VLMs ([LLaVA-v1.5](https://github.com/haotian-liu/LLaVA))! See this [document](xtuner/configs/llava/README_zh-CN.md) for a quick start!
- **\[2023/12\]** 🔥 Support the [Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) model! See this [document](xtuner/configs/mixtral/README.md) for a quick start!
- **\[2023/11\]** Support the [ChatGLM3-6B](xtuner/configs/chatglm) model!
- **\[2023/10\]** Support the [MSAgent-Bench](https://modelscope.cn/datasets/damo/MSAgent-Bench) dataset; the fine-tuned LLMs can be applied in the [Lagent](https://github.com/InternLM/lagent) framework!
- **\[2023/10\]** Optimize the data-processing logic to be compatible with the `system` field; see the [document](docs/zh_cn/user_guides/dataset_format.md) for details!
- **\[2023/09\]** Support the [InternLM-20B](xtuner/configs/internlm) series of models!
- **\[2023/09\]** Support the [Baichuan2](xtuner/configs/baichuan) series of models!
- **\[2023/08\]** XTuner is officially released! Many fine-tuned models have been uploaded to [HuggingFace](https://huggingface.co/xtuner)!

## 📖 Introduction

XTuner is an efficient, flexible and full-featured lightweight toolkit for fine-tuning large models.

**Efficient**

- Supports pre-training and lightweight fine-tuning of LLMs and multimodal VLMs. XTuner can fine-tune a 7B model with only 8GB of GPU memory, and also supports multi-node fine-tuning of larger models (70B+).
- Automatically dispatches high-performance operators (such as FlashAttention and Triton kernels) to increase training throughput.
- Compatible with [DeepSpeed](https://github.com/microsoft/DeepSpeed) 🚀, making it easy to apply various ZeRO training optimization strategies.

**Flexible**

- Supports a wide range of LLMs, including but not limited to [InternLM](https://huggingface.co/internlm), [Mixtral-8x7B](https://huggingface.co/mistralai), [Llama 2](https://huggingface.co/meta-llama), [ChatGLM](https://huggingface.co/THUDM), [Qwen](https://huggingface.co/Qwen) and [Baichuan](https://huggingface.co/baichuan-inc).
- Supports pre-training and fine-tuning of the multimodal VLM LLaVA; the XTuner-trained [LLaVA-InternLM2-20B](https://huggingface.co/xtuner/llava-internlm2-20b) performs remarkably well.
- A well-designed data pipeline that is compatible with datasets of any format, so both open-source and custom data can be used with minimal effort.
- Supports multiple fine-tuning algorithms such as [QLoRA](http://arxiv.org/abs/2305.14314), [LoRA](http://arxiv.org/abs/2106.09685) and full-parameter fine-tuning, so users can pick the best fit for their needs.

**Full-featured**

- Supports incremental pre-training, instruction fine-tuning and agent fine-tuning.
- Provides many predefined open-source chat templates for chatting with open-source or fine-tuned models.
- Trained models can be seamlessly plugged into the deployment toolkit [LMDeploy](https://github.com/InternLM/lmdeploy) and the large-scale evaluation toolkits [OpenCompass](https://github.com/open-compass/opencompass) and [VLMEvalKit](https://github.com/open-compass/VLMEvalKit).

## 🔥 Supports

<table>
<tbody>
<tr align="center" valign="middle">
<td>
  <b>Models</b>
</td>
<td>
  <b>Datasets</b>
</td>
<td>
  <b>Data Formats</b>
</td>
<td>
  <b>Fine-tuning Algorithms</b>
</td>
</tr>
<tr valign="top">
<td align="left" valign="top">
<ul>
  <li><a href="https://huggingface.co/internlm">InternLM 2 / 2.5</a></li>
  <li><a href="https://huggingface.co/meta-llama">Llama 2 / 3</a></li>
  <li><a href="https://huggingface.co/collections/microsoft/phi-3-6626e15e9585a200d2d761e3">Phi-3</a></li>
  <li><a href="https://huggingface.co/THUDM/chatglm2-6b">ChatGLM2</a></li>
  <li><a href="https://huggingface.co/THUDM/chatglm3-6b">ChatGLM3</a></li>
  <li><a href="https://huggingface.co/Qwen/Qwen-7B">Qwen</a></li>
  <li><a href="https://huggingface.co/baichuan-inc/Baichuan2-7B-Base">Baichuan2</a></li>
  <li><a href="https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1">Mixtral</a></li>
  <li><a href="https://huggingface.co/deepseek-ai/DeepSeek-V2-Chat">DeepSeek V2</a></li>
  <li><a href="https://huggingface.co/google">Gemma</a></li>
  <li><a href="https://huggingface.co/openbmb">MiniCPM</a></li>
  <li>...</li>
</ul>
</td>
<td>
<ul>
  <li><a href="https://modelscope.cn/datasets/damo/MSAgent-Bench">MSAgent-Bench</a></li>
  <li><a href="https://huggingface.co/datasets/fnlp/moss-003-sft-data">MOSS-003-SFT</a> 🔧</li>
  <li><a href="https://huggingface.co/datasets/tatsu-lab/alpaca">Alpaca en</a> / <a href="https://huggingface.co/datasets/silk-road/alpaca-data-gpt4-chinese">zh</a></li>
  <li><a href="https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k">WizardLM</a></li>
  <li><a href="https://huggingface.co/datasets/timdettmers/openassistant-guanaco">oasst1</a></li>
  <li><a href="https://huggingface.co/datasets/garage-bAInd/Open-Platypus">Open-Platypus</a></li>
  <li><a href="https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K">Code Alpaca</a></li>
  <li><a href="https://huggingface.co/datasets/burkelibbey/colors">Colorist</a> 🎨</li>
  <li><a href="https://github.com/WangRongsheng/ChatGenTitle">Arxiv GenTitle</a></li>
  <li><a href="https://github.com/LiuHC0428/LAW-GPT">Chinese Law</a></li>
  <li><a href="https://huggingface.co/datasets/Open-Orca/OpenOrca">OpenOrca</a></li>
  <li><a href="https://huggingface.co/datasets/shibing624/medical">Medical Dialogue</a></li>
  <li>...</li>
</ul>
</td>
<td>
<ul>
  <li><a href="docs/zh_cn/user_guides/incremental_pretraining.md">Incremental Pre-training</a></li>
  <li><a href="docs/zh_cn/user_guides/single_turn_conversation.md">Single-turn Conversation SFT</a></li>
  <li><a href="docs/zh_cn/user_guides/multi_turn_conversation.md">Multi-turn Conversation SFT</a></li>
</ul>
</td>
<td>
<ul>
  <li><a href="http://arxiv.org/abs/2305.14314">QLoRA</a></li>
  <li><a href="http://arxiv.org/abs/2106.09685">LoRA</a></li>
  <li>Full-parameter fine-tuning</li>
  <li><a href="https://arxiv.org/abs/2305.18290">DPO</a></li>
  <li><a href="https://arxiv.org/abs/2403.07691">ORPO</a></li>
  <li>Reward Model</li>
</ul>
</td>
</tr>
</tbody>
</table>

## 🛠️ Quick Start

### Installation

- It is recommended to build a Python 3.10 virtual environment with conda first:

  ```bash
  conda create --name xtuner-env python=3.10 -y
  conda activate xtuner-env
  ```

- Install XTuner via pip:

  ```shell
  pip install -U xtuner
  ```

  or install it together with DeepSpeed:

  ```shell
  pip install -U 'xtuner[deepspeed]'
  ```

- Install XTuner from source:

  ```shell
  git clone https://github.com/InternLM/xtuner.git
  cd xtuner
  pip install -e '.[all]'
  ```
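Once installed, a quick sanity check is to confirm that the `xtuner` entry point is on your `PATH` and that the built-in configs are visible; a minimal sketch, assuming a POSIX shell (`xtuner list-cfg` is the same command used in the fine-tuning steps below):

```shell
# Locate the installed CLI entry point
which xtuner
# Print the first few built-in config names
xtuner list-cfg | head
```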
### Fine-tune

XTuner supports fine-tuning large language models. For the dataset preprocessing guide, see the [document](./docs/zh_cn/user_guides/dataset_prepare.md).

- **Step 0**, prepare the config. XTuner provides many ready-to-use configs, which users can list with:

  ```shell
  xtuner list-cfg
  ```

  Or, if the provided configs do not meet your needs, export one and modify it accordingly:

  ```shell
  xtuner copy-cfg ${CONFIG_NAME} ${SAVE_PATH}
  vi ${SAVE_PATH}/${CONFIG_NAME}_copy.py
  ```

- **Step 1**, start fine-tuning.

  ```shell
  xtuner train ${CONFIG_NAME_OR_PATH}
  ```

  For example, we can fine-tune InternLM2.5-Chat-7B on the oasst1 dataset with the QLoRA algorithm:

  ```shell
  # Single GPU
  xtuner train internlm2_5_chat_7b_qlora_oasst1_e3 --deepspeed deepspeed_zero2
  # Multiple GPUs
  (DIST) NPROC_PER_NODE=${GPU_NUM} xtuner train internlm2_5_chat_7b_qlora_oasst1_e3 --deepspeed deepspeed_zero2
  (SLURM) srun ${SRUN_ARGS} xtuner train internlm2_5_chat_7b_qlora_oasst1_e3 --launcher slurm --deepspeed deepspeed_zero2
  ```

  - `--deepspeed` enables [DeepSpeed](https://github.com/microsoft/DeepSpeed) 🚀 to optimize the training process. XTuner ships with several built-in strategies, including ZeRO-1, ZeRO-2 and ZeRO-3. Simply remove this argument to disable it.

  - For more examples, see the [document](./docs/zh_cn/user_guides/finetune.md).

- **Step 2**, convert the saved PTH model (a directory if DeepSpeed was used) to a HuggingFace model (see the combined sketch after this list):

  ```shell
  xtuner convert pth_to_hf ${CONFIG_NAME_OR_PATH} ${PTH} ${SAVE_PATH}
  ```
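Putting Step 1 and Step 2 together, an end-to-end run might look as follows. The checkpoint path is hypothetical: XTuner typically writes checkpoints under `./work_dirs/<config_name>/`, and the actual file name depends on how long the run lasted, so substitute the checkpoint your run actually produced:

```shell
# Fine-tune with a built-in config (single GPU)
xtuner train internlm2_5_chat_7b_qlora_oasst1_e3 --deepspeed deepspeed_zero2

# Convert the resulting checkpoint to a HuggingFace adapter
# (iter_500.pth is a placeholder name; use your run's checkpoint)
xtuner convert pth_to_hf internlm2_5_chat_7b_qlora_oasst1_e3 \
    ./work_dirs/internlm2_5_chat_7b_qlora_oasst1_e3/iter_500.pth \
    ./hf_adapter
```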
### Chat

XTuner provides tools for chatting with large language models.

```shell
xtuner chat ${NAME_OR_PATH_TO_LLM} --adapter {NAME_OR_PATH_TO_ADAPTER} [optional arguments]
```

For example:

Chat with InternLM2.5-Chat-7B:

```shell
xtuner chat internlm/internlm2-chat-7b --prompt-template internlm2_chat
```
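To chat with a model plus a freshly trained adapter, a plausible invocation, reusing the hypothetical `./hf_adapter` output from the conversion sketch above, is:

```shell
xtuner chat internlm/internlm2-chat-7b \
    --adapter ./hf_adapter \
    --prompt-template internlm2_chat
```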
For more examples, see the [document](./docs/zh_cn/user_guides/chat.md).

### Deployment

- **Step 0**, merge the HuggingFace adapter into the large language model:

  ```shell
  xtuner convert merge \
      ${NAME_OR_PATH_TO_LLM} \
      ${NAME_OR_PATH_TO_ADAPTER} \
      ${SAVE_PATH} \
      --max-shard-size 2GB
  ```
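  As a concrete illustration of the placeholders, merging the hypothetical `./hf_adapter` from the fine-tuning section into its base model might look like:

  ```shell
  xtuner convert merge \
      internlm/internlm2-chat-7b \
      ./hf_adapter \
      ./merged_model \
      --max-shard-size 2GB
  ```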
- **Step 1**, deploy the fine-tuned large language model with any inference framework, e.g. [LMDeploy](https://github.com/InternLM/lmdeploy) 🚀:

  ```shell
  pip install lmdeploy
  python -m lmdeploy.pytorch.chat ${NAME_OR_PATH_TO_LLM} \
      --max_new_tokens 256 \
      --temperature 0.8 \
      --top_p 0.95 \
      --seed 0
  ```

  🔥 Looking for faster inference with a lower memory footprint? Try the 4-bit quantization provided by [LMDeploy](https://github.com/InternLM/lmdeploy)! See the [document](https://github.com/InternLM/lmdeploy/tree/main#quantization) for usage.

### Evaluation

- We recommend the one-stop platform [OpenCompass](https://github.com/InternLM/opencompass) for evaluating large language models; it currently covers 50+ datasets with around 300,000 questions.

## 🤝 Contributing

We appreciate all contributors' efforts to improve XTuner. See the [contributing guide](.github/CONTRIBUTING.md) for guidelines on participating in the project.

## 🎖️ Acknowledgement

- [Llama 2](https://github.com/facebookresearch/llama)
- [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- [QLoRA](https://github.com/artidoro/qlora)
- [LMDeploy](https://github.com/InternLM/lmdeploy)
- [LLaVA](https://github.com/haotian-liu/LLaVA)

## 🖊️ Citation

```bibtex
@misc{2023xtuner,
    title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
    author={XTuner Contributors},
    howpublished = {\url{https://github.com/InternLM/xtuner}},
    year={2023}
}
```

## License

This project is released under the [Apache License 2.0](LICENSE). Please also comply with the licenses of the models and datasets you use.
data/xtuner/.github/CONTRIBUTING.md
ADDED
@@ -0,0 +1,258 @@
## Contributing to XTuner

Welcome to the XTuner community! All kinds of contributions are welcome, including but not limited to the following.

**Fix bugs**

You can directly post a pull request to fix a typo in the code or documents.

The steps to fix a bug in the code are as follows.

1. If the modification involves significant changes, you should create an issue first that describes the error and how to trigger the bug. Other developers will discuss it with you and propose a proper solution.

2. Post a pull request after fixing the bug and adding the corresponding unit test.

**New feature or enhancement**

1. If the modification involves significant changes, you should create an issue to discuss a proper design with our developers.
2. Post a pull request after implementing the new feature or enhancement and add the corresponding unit test.

**Documents**

You can directly post a pull request to fix documents. If you want to add a document, you should first create an issue to check whether it is reasonable.

### Pull Request Workflow

If you're not familiar with pull requests, don't worry! The following guidance will tell you how to create a pull request step by step. If you want to dive into the development mode of pull requests, you can refer to the [official documents](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/about-pull-requests).

#### 1. Fork and clone

If you are posting a pull request for the first time, you should fork the XTuner repository by clicking the **Fork** button in the top right corner of the GitHub page; the forked repository will appear under your GitHub profile.

<img src="https://user-images.githubusercontent.com/57566630/167305749-43c7f4e9-449b-4e98-ade5-0c9276d5c9ce.png" width="1200">

Then, you can clone the repository to your local machine:

```shell
git clone git@github.com:{username}/xtuner.git
```

After that, you should add the official repository as the upstream repository:

```bash
git remote add upstream git@github.com:InternLM/xtuner.git
```

Check whether the remote repository has been added successfully with `git remote -v`:

```bash
origin    git@github.com:{username}/xtuner.git (fetch)
origin    git@github.com:{username}/xtuner.git (push)
upstream  git@github.com:InternLM/xtuner.git (fetch)
upstream  git@github.com:InternLM/xtuner.git (push)
```

> Here's a brief introduction to origin and upstream. When we use `git clone`, an "origin" remote is created by default, pointing to the repository we cloned from. As for "upstream", we add it ourselves to point to the target repository. Of course, if you don't like the name "upstream", you can name it as you wish. We usually push code to "origin". If the pushed code conflicts with the latest code in the official "upstream", we pull the latest code from upstream to resolve the conflicts and then push to "origin" again. The posted pull request will be updated automatically.

#### 2. Configure pre-commit

You should configure [pre-commit](https://pre-commit.com/#intro) in your local development environment to make sure the code style matches that of the project. **Note**: the following commands should be executed in the XTuner directory.

```shell
pip install -U pre-commit
pre-commit install
```

Check that pre-commit is configured successfully and install the hooks defined in `.pre-commit-config.yaml`:

```shell
pre-commit run --all-files
```

<img src="https://user-images.githubusercontent.com/57566630/173660750-3df20a63-cb66-4d33-a986-1f643f1d8aaf.png" width="1200">

<img src="https://user-images.githubusercontent.com/57566630/202368856-0465a90d-8fce-4345-918e-67b8b9c82614.png" width="1200">

If the installation process is interrupted, you can rerun `pre-commit run ...` to continue it.

If the code does not conform to the code style specification, pre-commit will raise a warning and fix some of the errors automatically.

<img src="https://user-images.githubusercontent.com/57566630/202369176-67642454-0025-4023-a095-263529107aa3.png" width="1200">

If you want to commit your code while bypassing the pre-commit hook, you can use the `--no-verify` option (**only for temporary commits**):

```shell
git commit -m "xxx" --no-verify
```
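When iterating on a single failing check, it is often faster to run one hook rather than the whole suite. A small sketch using standard pre-commit options; the hook ids come from `.pre-commit-config.yaml`, and the file path below is just an example:

```shell
# Run only the flake8 hook over the whole repository
pre-commit run flake8 --all-files

# Run all hooks, but only against specific files
pre-commit run --files xtuner/tools/train.py
```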
#### 3. Create a development branch

After configuring pre-commit, create a branch based on the master branch to develop the new feature or fix the bug. The proposed branch name is `username/pr_name`:

```shell
git checkout -b yhc/refactor_contributing_doc
```

In subsequent development, if the master branch of your local repository falls behind the master branch of "upstream", pull from upstream to synchronize, and then run the command above again:

```shell
git pull upstream master
```

#### 4. Commit the code and pass the unit test

- XTuner introduces mypy for static type checking to increase the robustness of the code. We therefore need to add type hints to our code and pass the mypy check. If you are not familiar with type hints, you can refer to [this tutorial](https://docs.python.org/3/library/typing.html).

- The committed code should pass the unit tests:

  ```shell
  # Pass all unit tests
  pytest tests

  # Pass the unit tests of runner
  pytest tests/test_runner/test_runner.py
  ```

  If the unit tests fail because of missing dependencies, you can install them by referring to the [guidance](#unit-test).

- If documents are modified or added, check the rendering result by referring to the [guidance](#document-rendering).

#### 5. Push the code to remote

Push the local commits to the remote after they pass the unit tests and the pre-commit checks. You can associate the local branch with a remote branch by adding the `-u` option:

```shell
git push -u origin {branch_name}
```

This allows you to push code directly with `git push` next time, without having to specify a branch or the remote repository.

#### 6. Create a pull request

(1) Create a pull request in GitHub's Pull requests interface.

<img src="https://user-images.githubusercontent.com/57566630/201533288-516f7ac4-0b14-4dc8-afbd-912475c368b5.png" width="1200">

(2) Modify the PR description according to the guidelines so that other developers can better understand your changes.

<img src="https://user-images.githubusercontent.com/57566630/202242953-c91a18ff-e388-4ff9-8591-5fae0ead6c1e.png" width="1200">

Find more details about pull request descriptions in the [pull request guidelines](#pr-specs).

**Note**

(a) The pull request description should state the reason for the change, the content of the change, and its impact, and be associated with the relevant issue (see the [documentation](https://docs.github.com/en/issues/tracking-your-work-with-issues/linking-a-pull-request-to-an-issue)).

(b) If it is your first contribution, please sign the CLA.

<img src="https://user-images.githubusercontent.com/57566630/167307569-a794b967-6e28-4eac-a942-00deb657815f.png" width="1200">

(c) Check whether the pull request passes the CI.

<img src="https://user-images.githubusercontent.com/57566630/167307490-f9ebf9fa-63c0-4d83-8ba1-081ea169eb3a.png" width="1200">

XTuner runs unit tests for each posted pull request on different platforms (Linux, Windows, Mac) and against different versions of Python, PyTorch and CUDA, to make sure the code is correct. We can see the specific test information by clicking `Details` in the image above and modify the code accordingly.

(3) If the pull request passes the CI, you can wait for reviews from other developers. Modify the code based on the reviewers' comments and repeat steps [4](#4-commit-the-code-and-pass-the-unit-test)-[5](#5-push-the-code-to-remote) until all reviewers approve it. Then we will merge it as soon as possible.

<img src="https://user-images.githubusercontent.com/57566630/202145400-cc2cd8c4-10b0-472f-ba37-07e6f50acc67.png" width="1200">

#### 7. Resolve conflicts

If your local branch conflicts with the latest master branch of "upstream", you'll need to resolve the conflicts. There are two ways to do this:

```shell
git fetch --all --prune
git rebase upstream/master
```

or

```shell
git fetch --all --prune
git merge upstream/master
```

If you are very good at handling conflicts, you can use rebase, as it keeps your commit log tidy. If you are not familiar with `rebase`, use `merge` to resolve the conflicts.

### Guidance

#### Unit test

If some unit tests cannot run because some dependencies are missing, such as the [video](https://github.com/open-mmlab/mmcv/tree/master/mmcv/video) module, you can try installing the following dependencies:

```shell
# Linux
sudo apt-get update -y
sudo apt-get install -y libturbojpeg
sudo apt-get install -y ffmpeg

# Windows
conda install ffmpeg
```

We should also make sure the committed code does not decrease the unit-test coverage. Run the following commands to check it:

```shell
python -m coverage run -m pytest /path/to/test_file
python -m coverage html
# check the report in htmlcov/index.html
```

#### Document rendering

If documents are modified or added, check the rendering result. Install the dependencies and run the following commands to render the documents and inspect the output:

```shell
pip install -r requirements/docs.txt
cd docs/zh_cn/
# or docs/en
make html
# check the result in ./docs/zh_cn/_build/html/index.html
```

### Code style

#### Python

We adopt [PEP8](https://www.python.org/dev/peps/pep-0008/) as the preferred code style.

We use the following tools for linting and formatting:

- [flake8](https://github.com/PyCQA/flake8): a wrapper around several linting tools.
- [isort](https://github.com/timothycrosley/isort): a Python utility for sorting imports.
- [yapf](https://github.com/google/yapf): a formatter for Python files.
- [codespell](https://github.com/codespell-project/codespell): a Python utility for fixing common misspellings in text files.
- [mdformat](https://github.com/executablebooks/mdformat): an opinionated Markdown formatter that can be used to enforce a consistent style in Markdown files.
- [docformatter](https://github.com/myint/docformatter): a formatter for docstrings.

Style configurations for yapf and isort can be found in [setup.cfg](../setup.cfg).

We use a [pre-commit hook](https://pre-commit.com/) that, on every commit, checks and formats `flake8`, `yapf`, `isort`, trailing whitespace and Markdown files, fixes `end-of-files`, `double-quoted-strings`, `python-encoding-pragma` and `mixed-line-ending` issues, and sorts `requirements.txt` automatically. The config of the pre-commit hooks is stored in [.pre-commit-config](../.pre-commit-config.yaml).

#### C++ and CUDA

We follow the [Google C++ Style Guide](https://google.github.io/styleguide/cppguide.html).

### PR Specs

1. Use the [pre-commit](https://pre-commit.com) hook to avoid code-style issues.

2. One short-lived branch should be matched with only one PR.

3. Accomplish one detailed change per PR and avoid large PRs.

   - Bad: support Faster R-CNN
   - Acceptable: add a box head to Faster R-CNN
   - Good: add a parameter to the box head to support a custom number of conv layers

4. Provide clear and significant commit messages.

5. Provide a clear and meaningful PR description.

   - Clarify the task name in the title; the general format is: \[Prefix\] Short description of the PR (Suffix)
   - Prefix: new feature \[Feature\], bug fix \[Fix\], documents \[Docs\], in development \[WIP\] (not reviewed for now)
   - Briefly introduce the main changes, results, and influences on other modules
   - Associate related issues and pull requests with a milestone
data/xtuner/.github/workflows/deploy.yml
ADDED
@@ -0,0 +1,26 @@
name: deploy

on: push

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  build-n-publish:
    runs-on: ubuntu-latest
    if: startsWith(github.event.ref, 'refs/tags')
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Build XTuner
        run: |
          pip install wheel
          python setup.py sdist bdist_wheel
      - name: Publish distribution to PyPI
        run: |
          pip install twine
          twine upload dist/* -u __token__ -p ${{ secrets.pypi_password }}
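Because of the `if: startsWith(github.event.ref, 'refs/tags')` guard, the `build-n-publish` job above only runs when a tag is pushed; ordinary branch pushes are skipped. A sketch of cutting a release under that assumption (the tag name is hypothetical):

```shell
git tag v0.1.23
git push origin v0.1.23  # this push triggers build-n-publish
```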
data/xtuner/.github/workflows/lint.yml
ADDED
@@ -0,0 +1,23 @@
name: lint

on: [push, pull_request]

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.8
        uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Install pre-commit hook
        run: |
          pip install pre-commit
          pre-commit install
      - name: Linting
        run: pre-commit run --all-files
data/xtuner/.gitignore
ADDED
@@ -0,0 +1,124 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class

# C extensions
*.so

# Distribution / packaging
.Python
build/
develop-eggs/
dist/
downloads/
eggs/
.eggs/
lib/
lib64/
parts/
sdist/
var/
wheels/
*.egg-info/
.installed.cfg
*.egg
MANIFEST

# PyInstaller
# Usually these files are written by a python script from a template
# before PyInstaller builds the exe, so as to inject date/other infos into it.
*.manifest
*.spec

# Installer logs
pip-log.txt
pip-delete-this-directory.txt

# Unit test / coverage reports
htmlcov/
.tox/
.coverage
.coverage.*
.cache
nosetests.xml
coverage.xml
*.cover
.hypothesis/
.pytest_cache/

# Translations
*.mo
*.pot

# Django stuff:
*.log
local_settings.py
db.sqlite3

# Flask stuff:
instance/
.webassets-cache

# Scrapy stuff:
.scrapy

# Sphinx documentation
docs/*/_build/

# PyBuilder
target/

# Jupyter Notebook
.ipynb_checkpoints

# pyenv
.python-version

# celery beat schedule file
celerybeat-schedule

# SageMath parsed files
*.sage.py

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Spyder project settings
.spyderproject
.spyproject

# Rope project settings
.ropeproject

# mkdocs documentation
/site

# mypy
.mypy_cache/

# custom
data/
data
.vscode
.idea
.DS_Store
*.pkl
*.pkl.json
*.log.json
work_dirs/

# Pytorch
*.pth
*.py~
*.sh~

# srun
*.out
batchscript-*
data/xtuner/.owners.yml
ADDED
@@ -0,0 +1,8 @@
assign:
  issues: disabled
  pull_requests: disabled
  strategy:
    random
    # daily-shift-based
  schedule:
    '*/1 * * * *'
data/xtuner/.pre-commit-config-zh-cn.yaml
ADDED
@@ -0,0 +1,51 @@
exclude: ^tests/data/|^xtuner/tools/model_converters/modeling_internlm2_reward/
repos:
  - repo: https://gitee.com/openmmlab/mirrors-flake8
    rev: 5.0.4
    hooks:
      - id: flake8
        args: ["--exclude=xtuner/model/transformers_models/*"]
  - repo: https://gitee.com/openmmlab/mirrors-isort
    rev: 5.11.5
    hooks:
      - id: isort
  - repo: https://gitee.com/openmmlab/mirrors-yapf
    rev: v0.32.0
    hooks:
      - id: yapf
  - repo: https://gitee.com/openmmlab/mirrors-pre-commit-hooks
    rev: v4.3.0
    hooks:
      - id: trailing-whitespace
      - id: check-yaml
      - id: end-of-file-fixer
      - id: requirements-txt-fixer
      - id: double-quote-string-fixer
      - id: check-merge-conflict
      - id: fix-encoding-pragma
        args: ["--remove"]
      - id: mixed-line-ending
        args: ["--fix=lf"]
  - repo: https://gitee.com/openmmlab/mirrors-codespell
    rev: v2.2.1
    hooks:
      - id: codespell
  - repo: https://gitee.com/openmmlab/mirrors-mdformat
    rev: 0.7.9
    hooks:
      - id: mdformat
        args: ["--number"]
        additional_dependencies:
          - mdformat-openmmlab
          - mdformat_frontmatter
          - linkify-it-py
  - repo: https://gitee.com/openmmlab/mirrors-docformatter
    rev: v1.3.1
    hooks:
      - id: docformatter
        args: ["--in-place", "--wrap-descriptions", "79"]
  - repo: https://github.com/asottile/pyupgrade
    rev: v3.0.0
    hooks:
      - id: pyupgrade
        args: ["--py36-plus"]
data/xtuner/.pre-commit-config.yaml
ADDED
@@ -0,0 +1,53 @@
exclude: ^tests/data/|^xtuner/tools/model_converters/modeling_internlm2_reward/
repos:
  - repo: https://github.com/PyCQA/flake8
    rev: 5.0.4
    hooks:
      - id: flake8
        args: ["--exclude=xtuner/model/transformers_models/*"]
  - repo: https://github.com/PyCQA/isort
    rev: 5.11.5
    hooks:
      - id: isort
  - repo: https://github.com/pre-commit/mirrors-yapf
    rev: v0.32.0
    hooks:
      - id: yapf
        exclude: 'xtuner/parallel/sequence/__init__.py'
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.3.0
    hooks:
      - id: trailing-whitespace
      - id: check-yaml
      - id: end-of-file-fixer
      - id: requirements-txt-fixer
      - id: double-quote-string-fixer
      - id: check-merge-conflict
      - id: fix-encoding-pragma
        args: ["--remove"]
      - id: mixed-line-ending
        args: ["--fix=lf"]
  - repo: https://github.com/codespell-project/codespell
    rev: v2.2.1
    hooks:
      - id: codespell
  - repo: https://github.com/executablebooks/mdformat
    rev: 0.7.9
    hooks:
      - id: mdformat
        args: ["--number"]
        additional_dependencies:
          - mdformat-openmmlab
          - mdformat_frontmatter
          - linkify-it-py
        exclude: 'docs/zh_cn/user_guides/sequence_parallel.md'
  - repo: https://github.com/myint/docformatter
    rev: v1.3.1
    hooks:
      - id: docformatter
        args: ["--in-place", "--wrap-descriptions", "79"]
  - repo: https://github.com/asottile/pyupgrade
    rev: v3.0.0
    hooks:
      - id: pyupgrade
        args: ["--py36-plus"]
data/xtuner/LICENSE
ADDED
@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
data/xtuner/MANIFEST.in
ADDED
@@ -0,0 +1,2 @@
recursive-include xtuner/configs *.py *.yml *.json
recursive-include xtuner/tools *.sh *.py
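These two directives pull the non-Python config templates and tool scripts into the source distribution alongside the package code. One way to verify what actually lands in the sdist is to build it locally and list the archive; a sketch that assumes the build succeeds and that the archive follows the default version-derived naming:

```shell
python setup.py sdist
tar -tzf dist/xtuner-*.tar.gz | grep -E 'configs|tools' | head
```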
data/xtuner/README.md
ADDED
@@ -0,0 +1,302 @@
1 |
+
<div align="center">
|
2 |
+
<img src="https://github.com/InternLM/lmdeploy/assets/36994684/0cf8d00f-e86b-40ba-9b54-dc8f1bc6c8d8" width="600"/>
|
3 |
+
<br /><br />
|
4 |
+
|
5 |
+
[![GitHub Repo stars](https://img.shields.io/github/stars/InternLM/xtuner?style=social)](https://github.com/InternLM/xtuner/stargazers)
|
6 |
+
[![license](https://img.shields.io/github/license/InternLM/xtuner.svg)](https://github.com/InternLM/xtuner/blob/main/LICENSE)
|
7 |
+
[![PyPI](https://img.shields.io/pypi/v/xtuner)](https://pypi.org/project/xtuner/)
|
8 |
+
[![Downloads](https://static.pepy.tech/badge/xtuner)](https://pypi.org/project/xtuner/)
|
9 |
+
[![issue resolution](https://img.shields.io/github/issues-closed-raw/InternLM/xtuner)](https://github.com/InternLM/xtuner/issues)
|
10 |
+
[![open issues](https://img.shields.io/github/issues-raw/InternLM/xtuner)](https://github.com/InternLM/xtuner/issues)
|
11 |
+
|
12 |
+
👋 join us on [![Static Badge](https://img.shields.io/badge/-grey?style=social&logo=wechat&label=WeChat)](https://cdn.vansin.top/internlm/xtuner.jpg)
|
13 |
+
[![Static Badge](https://img.shields.io/badge/-grey?style=social&logo=twitter&label=Twitter)](https://twitter.com/intern_lm)
|
14 |
+
[![Static Badge](https://img.shields.io/badge/-grey?style=social&logo=discord&label=Discord)](https://discord.gg/xa29JuW87d)
|
15 |
+
|
16 |
+
🔍 Explore our models on
|
17 |
+
[![Static Badge](https://img.shields.io/badge/-gery?style=social&label=🤗%20Huggingface)](https://huggingface.co/xtuner)
|
18 |
+
[![Static Badge](https://img.shields.io/badge/-gery?style=social&label=🤖%20ModelScope)](https://www.modelscope.cn/organization/xtuner)
|
19 |
+
[![Static Badge](https://img.shields.io/badge/-gery?style=social&label=🧰%20OpenXLab)](https://openxlab.org.cn/usercenter/xtuner)
|
20 |
+
[![Static Badge](https://img.shields.io/badge/-gery?style=social&label=🧠%20WiseModel)](https://www.wisemodel.cn/organization/xtuner)
|
21 |
+
|
22 |
+
English | [简体中文](README_zh-CN.md)
|
23 |
+
|
24 |
+
</div>
|
25 |
+
|
26 |
+
## 🚀 Speed Benchmark
|
27 |
+
|
28 |
+
- Llama2 7B Training Speed
|
29 |
+
|
30 |
+
<div align=center>
|
31 |
+
<img src="https://github.com/InternLM/xtuner/assets/41630003/9c9dfdf4-1efb-4daf-84bf-7c379ae40b8b" style="width:80%">
|
32 |
+
</div>
|
33 |
+
|
34 |
+
- Llama2 70B Training Speed
|
35 |
+
|
36 |
+
<div align=center>
|
37 |
+
<img src="https://github.com/InternLM/xtuner/assets/41630003/5ba973b8-8885-4b72-b51b-c69fa1583bdd" style="width:80%">
|
38 |
+
</div>
|
39 |
+
|
40 |
+
## 🎉 News
|
41 |
+
- **\[2024/07\]** Support [MiniCPM](xtuner/configs/minicpm/) models!
|
42 |
+
- **\[2024/07\]** Support [DPO](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/dpo), [ORPO](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/orpo) and [Reward Model](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/reward_model) training with packed data and sequence parallel! See [documents](https://xtuner.readthedocs.io/en/latest/dpo/overview.html) for more details.
|
43 |
+
- **\[2024/07\]** Support [InternLM 2.5](xtuner/configs/internlm/internlm2_5_chat_7b/) models!
|
44 |
+
- **\[2024/06\]** Support [DeepSeek V2](xtuner/configs/deepseek/deepseek_v2_chat/) models! **2x faster!**
|
45 |
+
- **\[2024/04\]** [LLaVA-Phi-3-mini](https://huggingface.co/xtuner/llava-phi-3-mini-hf) is released! Click [here](xtuner/configs/llava/phi3_mini_4k_instruct_clip_vit_large_p14_336) for details!
|
46 |
+
- **\[2024/04\]** [LLaVA-Llama-3-8B](https://huggingface.co/xtuner/llava-llama-3-8b) and [LLaVA-Llama-3-8B-v1.1](https://huggingface.co/xtuner/llava-llama-3-8b-v1_1) are released! Click [here](xtuner/configs/llava/llama3_8b_instruct_clip_vit_large_p14_336) for details!
|
47 |
+
- **\[2024/04\]** Support [Llama 3](xtuner/configs/llama) models!
|
48 |
+
- **\[2024/04\]** Support Sequence Parallel for enabling highly efficient and scalable LLM training with extremely long sequence lengths! \[[Usage](https://github.com/InternLM/xtuner/blob/docs/docs/zh_cn/acceleration/train_extreme_long_sequence.rst)\] \[[Speed Benchmark](https://github.com/InternLM/xtuner/blob/docs/docs/zh_cn/acceleration/benchmark.rst)\]
|
49 |
+
- **\[2024/02\]** Support [Gemma](xtuner/configs/gemma) models!
|
50 |
+
- **\[2024/02\]** Support [Qwen1.5](xtuner/configs/qwen/qwen1_5) models!
|
51 |
+
- **\[2024/01\]** Support [InternLM2](xtuner/configs/internlm) models! The latest VLM [LLaVA-Internlm2-7B](https://huggingface.co/xtuner/llava-internlm2-7b) / [20B](https://huggingface.co/xtuner/llava-internlm2-20b) models are released, with impressive performance!
|
52 |
+
- **\[2024/01\]** Support [DeepSeek-MoE](https://huggingface.co/deepseek-ai/deepseek-moe-16b-chat) models! 20GB GPU memory is enough for QLoRA fine-tuning, and 4x80GB for full-parameter fine-tuning. Click [here](xtuner/configs/deepseek/) for details!
|
53 |
+
- **\[2023/12\]** 🔥 Support multi-modal VLM pretraining and fine-tuning with [LLaVA-v1.5](https://github.com/haotian-liu/LLaVA) architecture! Click [here](xtuner/configs/llava/README.md) for details!
|
54 |
+
- **\[2023/12\]** 🔥 Support [Mixtral 8x7B](https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1) models! Click [here](xtuner/configs/mixtral/README.md) for details!
|
55 |
+
- **\[2023/11\]** Support [ChatGLM3-6B](xtuner/configs/chatglm) model!
|
56 |
+
- **\[2023/10\]** Support [MSAgent-Bench](https://modelscope.cn/datasets/damo/MSAgent-Bench) dataset, and the fine-tuned LLMs can be applied by [Lagent](https://github.com/InternLM/lagent)!
|
57 |
+
- **\[2023/10\]** Optimize the data processing to accommodate `system` context. More information can be found on [Docs](docs/en/user_guides/dataset_format.md)!
|
58 |
+
- **\[2023/09\]** Support [InternLM-20B](xtuner/configs/internlm) models!
|
59 |
+
- **\[2023/09\]** Support [Baichuan2](xtuner/configs/baichuan) models!
|
60 |
+
- **\[2023/08\]** XTuner is released, with multiple fine-tuned adapters on [Hugging Face](https://huggingface.co/xtuner).
|
61 |
+
|
62 |
+
## 📖 Introduction
|
63 |
+
|
64 |
+
XTuner is an efficient, flexible and full-featured toolkit for fine-tuning large models.
|
65 |
+
|
66 |
+
**Efficient**
|
67 |
+
|
68 |
+
- Support LLM, VLM pre-training / fine-tuning on almost all GPUs. XTuner is capable of fine-tuning 7B LLM on a single 8GB GPU, as well as multi-node fine-tuning of models exceeding 70B.
|
69 |
+
- Automatically dispatch high-performance operators such as FlashAttention and Triton kernels to increase training throughput.
|
70 |
+
- Compatible with [DeepSpeed](https://github.com/microsoft/DeepSpeed) 🚀, easily utilizing a variety of ZeRO optimization techniques.
|
71 |
+
|
72 |
+
**Flexible**
|
73 |
+
|
74 |
+
- Support various LLMs ([InternLM](https://huggingface.co/internlm), [Mixtral-8x7B](https://huggingface.co/mistralai), [Llama 2](https://huggingface.co/meta-llama), [ChatGLM](https://huggingface.co/THUDM), [Qwen](https://huggingface.co/Qwen), [Baichuan](https://huggingface.co/baichuan-inc), ...).
|
75 |
+
- Support VLM ([LLaVA](https://github.com/haotian-liu/LLaVA)). The performance of [LLaVA-InternLM2-20B](https://huggingface.co/xtuner/llava-internlm2-20b) is outstanding.
|
76 |
+
- Well-designed data pipeline, accommodating datasets in any format, including but not limited to open-source and custom formats.
|
- Support various training algorithms ([QLoRA](http://arxiv.org/abs/2305.14314), [LoRA](http://arxiv.org/abs/2106.09685), full-parameter fine-tuning), allowing users to choose the most suitable solution for their requirements.

**Full-featured**

- Support continuous pre-training, instruction fine-tuning, and agent fine-tuning.
- Support chatting with large models with pre-defined templates.
- The output models can seamlessly integrate with deployment and server toolkit ([LMDeploy](https://github.com/InternLM/lmdeploy)), and large-scale evaluation toolkit ([OpenCompass](https://github.com/open-compass/opencompass), [VLMEvalKit](https://github.com/open-compass/VLMEvalKit)).

## 🔥 Supports

<table>
<tbody>
<tr align="center" valign="middle">
<td>
  <b>Models</b>
</td>
<td>
  <b>SFT Datasets</b>
</td>
<td>
  <b>Data Pipelines</b>
</td>
<td>
  <b>Algorithms</b>
</td>
</tr>
<tr valign="top">
<td align="left" valign="top">
<ul>
  <li><a href="https://huggingface.co/internlm">InternLM2 / 2.5</a></li>
  <li><a href="https://huggingface.co/meta-llama">Llama 2 / 3</a></li>
  <li><a href="https://huggingface.co/collections/microsoft/phi-3-6626e15e9585a200d2d761e3">Phi-3</a></li>
  <li><a href="https://huggingface.co/THUDM/chatglm2-6b">ChatGLM2</a></li>
  <li><a href="https://huggingface.co/THUDM/chatglm3-6b">ChatGLM3</a></li>
  <li><a href="https://huggingface.co/Qwen/Qwen-7B">Qwen</a></li>
  <li><a href="https://huggingface.co/baichuan-inc/Baichuan2-7B-Base">Baichuan2</a></li>
  <li><a href="https://huggingface.co/mistralai/Mixtral-8x7B-Instruct-v0.1">Mixtral</a></li>
  <li><a href="https://huggingface.co/deepseek-ai/DeepSeek-V2-Chat">DeepSeek V2</a></li>
  <li><a href="https://huggingface.co/google">Gemma</a></li>
  <li><a href="https://huggingface.co/openbmb">MiniCPM</a></li>
  <li>...</li>
</ul>
</td>
<td>
<ul>
  <li><a href="https://modelscope.cn/datasets/damo/MSAgent-Bench">MSAgent-Bench</a></li>
  <li><a href="https://huggingface.co/datasets/fnlp/moss-003-sft-data">MOSS-003-SFT</a> 🔧</li>
  <li><a href="https://huggingface.co/datasets/tatsu-lab/alpaca">Alpaca en</a> / <a href="https://huggingface.co/datasets/silk-road/alpaca-data-gpt4-chinese">zh</a></li>
  <li><a href="https://huggingface.co/datasets/WizardLM/WizardLM_evol_instruct_V2_196k">WizardLM</a></li>
  <li><a href="https://huggingface.co/datasets/timdettmers/openassistant-guanaco">oasst1</a></li>
  <li><a href="https://huggingface.co/datasets/garage-bAInd/Open-Platypus">Open-Platypus</a></li>
  <li><a href="https://huggingface.co/datasets/HuggingFaceH4/CodeAlpaca_20K">Code Alpaca</a></li>
  <li><a href="https://huggingface.co/datasets/burkelibbey/colors">Colorist</a> 🎨</li>
  <li><a href="https://github.com/WangRongsheng/ChatGenTitle">Arxiv GenTitle</a></li>
  <li><a href="https://github.com/LiuHC0428/LAW-GPT">Chinese Law</a></li>
  <li><a href="https://huggingface.co/datasets/Open-Orca/OpenOrca">OpenOrca</a></li>
  <li><a href="https://huggingface.co/datasets/shibing624/medical">Medical Dialogue</a></li>
  <li>...</li>
</ul>
</td>
<td>
<ul>
  <li><a href="docs/zh_cn/user_guides/incremental_pretraining.md">Incremental Pre-training</a></li>
  <li><a href="docs/zh_cn/user_guides/single_turn_conversation.md">Single-turn Conversation SFT</a></li>
  <li><a href="docs/zh_cn/user_guides/multi_turn_conversation.md">Multi-turn Conversation SFT</a></li>
</ul>
</td>
<td>
<ul>
  <li><a href="http://arxiv.org/abs/2305.14314">QLoRA</a></li>
  <li><a href="http://arxiv.org/abs/2106.09685">LoRA</a></li>
  <li>Full parameter fine-tune</li>
  <li><a href="https://arxiv.org/abs/2305.18290">DPO</a></li>
  <li><a href="https://arxiv.org/abs/2403.07691">ORPO</a></li>
  <li>Reward Model</li>
</ul>
</td>
</tr>
</tbody>
</table>

## 🛠️ Quick Start

### Installation

- It is recommended to build a Python-3.10 virtual environment using conda

  ```bash
  conda create --name xtuner-env python=3.10 -y
  conda activate xtuner-env
  ```

- Install XTuner via pip

  ```shell
  pip install -U xtuner
  ```

  or with DeepSpeed integration

  ```shell
  pip install -U 'xtuner[deepspeed]'
  ```

- Install XTuner from source

  ```shell
  git clone https://github.com/InternLM/xtuner.git
  cd xtuner
  pip install -e '.[all]'
  ```

### Fine-tune

XTuner supports efficient fine-tuning (*e.g.*, QLoRA) for LLMs. Dataset preparation guides can be found in [dataset_prepare.md](./docs/en/user_guides/dataset_prepare.md).

- **Step 0**, prepare the config. XTuner provides many ready-to-use configs and we can view all configs by

  ```shell
  xtuner list-cfg
  ```

  Or, if the provided configs cannot meet the requirements, please copy the provided config to the specified directory and make specific modifications by

  ```shell
  xtuner copy-cfg ${CONFIG_NAME} ${SAVE_PATH}
  vi ${SAVE_PATH}/${CONFIG_NAME}_copy.py
  ```

- **Step 1**, start fine-tuning.

  ```shell
  xtuner train ${CONFIG_NAME_OR_PATH}
  ```

  For example, we can start the QLoRA fine-tuning of InternLM2.5-Chat-7B with the oasst1 dataset by

  ```shell
  # On a single GPU
  xtuner train internlm2_5_chat_7b_qlora_oasst1_e3 --deepspeed deepspeed_zero2
  # On multiple GPUs
  (DIST) NPROC_PER_NODE=${GPU_NUM} xtuner train internlm2_5_chat_7b_qlora_oasst1_e3 --deepspeed deepspeed_zero2
  (SLURM) srun ${SRUN_ARGS} xtuner train internlm2_5_chat_7b_qlora_oasst1_e3 --launcher slurm --deepspeed deepspeed_zero2
  ```

  - `--deepspeed` means using [DeepSpeed](https://github.com/microsoft/DeepSpeed) 🚀 to optimize the training. XTuner comes with several integrated strategies including ZeRO-1, ZeRO-2, and ZeRO-3. If you wish to disable this feature, simply remove this argument.

  - For more examples, please see [finetune.md](./docs/en/user_guides/finetune.md).

- **Step 2**, convert the saved PTH model (if using DeepSpeed, it will be a directory) to a Hugging Face model, by

  ```shell
  xtuner convert pth_to_hf ${CONFIG_NAME_OR_PATH} ${PTH} ${SAVE_PATH}
  ```

### Chat

XTuner provides tools to chat with pretrained / fine-tuned LLMs.

```shell
xtuner chat ${NAME_OR_PATH_TO_LLM} --adapter {NAME_OR_PATH_TO_ADAPTER} [optional arguments]
```

For example, we can start the chat with InternLM2.5-Chat-7B:

```shell
xtuner chat internlm/internlm2_5-chat-7b --prompt-template internlm2_chat
```

For more examples, please see [chat.md](./docs/en/user_guides/chat.md).

### Deployment

- **Step 0**, merge the Hugging Face adapter to the pretrained LLM, by

  ```shell
  xtuner convert merge \
      ${NAME_OR_PATH_TO_LLM} \
      ${NAME_OR_PATH_TO_ADAPTER} \
      ${SAVE_PATH} \
      --max-shard-size 2GB
  ```

- **Step 1**, deploy the fine-tuned LLM with any other framework, such as [LMDeploy](https://github.com/InternLM/lmdeploy) 🚀.

  ```shell
  pip install lmdeploy
  python -m lmdeploy.pytorch.chat ${NAME_OR_PATH_TO_LLM} \
      --max_new_tokens 256 \
      --temperature 0.8 \
      --top_p 0.95 \
      --seed 0
  ```

🔥 Seeking efficient inference with less GPU memory? Try 4-bit quantization from [LMDeploy](https://github.com/InternLM/lmdeploy)! For more details, see [here](https://github.com/InternLM/lmdeploy/tree/main#quantization).

### Evaluation

- We recommend using [OpenCompass](https://github.com/InternLM/opencompass), a comprehensive and systematic LLM evaluation library, which currently supports 50+ datasets with about 300,000 questions.

## 🤝 Contributing

We appreciate all contributions to XTuner. Please refer to [CONTRIBUTING.md](.github/CONTRIBUTING.md) for the contributing guideline.

## 🎖️ Acknowledgement

- [Llama 2](https://github.com/facebookresearch/llama)
- [DeepSpeed](https://github.com/microsoft/DeepSpeed)
- [QLoRA](https://github.com/artidoro/qlora)
- [LMDeploy](https://github.com/InternLM/lmdeploy)
- [LLaVA](https://github.com/haotian-liu/LLaVA)

## 🖊️ Citation

```bibtex
@misc{2023xtuner,
    title={XTuner: A Toolkit for Efficiently Fine-tuning LLM},
    author={XTuner Contributors},
    howpublished = {\url{https://github.com/InternLM/xtuner}},
    year={2023}
}
```

## License

This project is released under the [Apache License 2.0](LICENSE). Please also adhere to the Licenses of models and datasets being used.
data/xtuner/docs/en/.readthedocs.yaml
ADDED
@@ -0,0 +1,16 @@
version: 2

build:
  os: ubuntu-22.04
  tools:
    python: "3.8"

formats:
  - epub

python:
  install:
    - requirements: requirements/docs.txt

sphinx:
  configuration: docs/en/conf.py
data/xtuner/docs/en/Makefile
ADDED
@@ -0,0 +1,20 @@
# Minimal makefile for Sphinx documentation
#

# You can set these variables from the command line, and also
# from the environment for the first two.
SPHINXOPTS    ?=
SPHINXBUILD   ?= sphinx-build
SOURCEDIR     = .
BUILDDIR      = _build

# Put it first so that "make" without argument is like "make help".
help:
	@$(SPHINXBUILD) -M help "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)

.PHONY: help Makefile

# Catch-all target: route all unknown targets to Sphinx using the new
# "make mode" option. $(O) is meant as a shortcut for $(SPHINXOPTS).
%: Makefile
	@$(SPHINXBUILD) -M $@ "$(SOURCEDIR)" "$(BUILDDIR)" $(SPHINXOPTS) $(O)
data/xtuner/docs/en/_static/css/readthedocs.css
ADDED
@@ -0,0 +1,6 @@
.header-logo {
  background-image: url("../image/logo.png");
  background-size: 177px 40px;
  height: 40px;
  width: 177px;
}
data/xtuner/docs/en/_static/image/logo.png
ADDED
data/xtuner/docs/en/acceleration/benchmark.rst
ADDED
@@ -0,0 +1,2 @@
Benchmark
=========

data/xtuner/docs/en/acceleration/deepspeed.rst
ADDED
@@ -0,0 +1,2 @@
DeepSpeed
=========

data/xtuner/docs/en/acceleration/flash_attn.rst
ADDED
@@ -0,0 +1,2 @@
Flash Attention
===============

data/xtuner/docs/en/acceleration/hyper_parameters.rst
ADDED
@@ -0,0 +1,2 @@
HyperParameters
===============

data/xtuner/docs/en/acceleration/length_grouped_sampler.rst
ADDED
@@ -0,0 +1,2 @@
Length Grouped Sampler
======================

data/xtuner/docs/en/acceleration/pack_to_max_length.rst
ADDED
@@ -0,0 +1,2 @@
Pack to Max Length
==================

data/xtuner/docs/en/acceleration/train_extreme_long_sequence.rst
ADDED
@@ -0,0 +1,2 @@
Train Extreme Long Sequence
===========================

data/xtuner/docs/en/acceleration/train_large_scale_dataset.rst
ADDED
@@ -0,0 +1,2 @@
Train Large-scale Dataset
=========================

data/xtuner/docs/en/acceleration/varlen_flash_attn.rst
ADDED
@@ -0,0 +1,2 @@
Varlen Flash Attention
======================
data/xtuner/docs/en/chat/agent.md
ADDED
@@ -0,0 +1 @@
# Chat with Agent

data/xtuner/docs/en/chat/llm.md
ADDED
@@ -0,0 +1 @@
# Chat with LLM

data/xtuner/docs/en/chat/lmdeploy.md
ADDED
@@ -0,0 +1 @@
# Accelerate chat by LMDeploy

data/xtuner/docs/en/chat/vlm.md
ADDED
@@ -0,0 +1 @@
# Chat with VLM
data/xtuner/docs/en/conf.py
ADDED
@@ -0,0 +1,109 @@
# Configuration file for the Sphinx documentation builder.
#
# This file only contains a selection of the most common options. For a full
# list see the documentation:
# https://www.sphinx-doc.org/en/master/usage/configuration.html

# -- Path setup --------------------------------------------------------------

# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.

import os
import sys

from sphinx.ext import autodoc

sys.path.insert(0, os.path.abspath('../..'))

# -- Project information -----------------------------------------------------

project = 'XTuner'
copyright = '2024, XTuner Contributors'
author = 'XTuner Contributors'

# The full version, including alpha/beta/rc tags
version_file = '../../xtuner/version.py'
with open(version_file) as f:
    exec(compile(f.read(), version_file, 'exec'))
__version__ = locals()['__version__']
# The short X.Y version
version = __version__
# The full version, including alpha/beta/rc tags
release = __version__

# -- General configuration ---------------------------------------------------

# Add any Sphinx extension module names here, as strings. They can be
# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
# ones.
extensions = [
    'sphinx.ext.napoleon',
    'sphinx.ext.viewcode',
    'sphinx.ext.intersphinx',
    'sphinx_copybutton',
    'sphinx.ext.autodoc',
    'sphinx.ext.autosummary',
    'myst_parser',
    'sphinxarg.ext',
]

# Add any paths that contain templates here, relative to this directory.
templates_path = ['_templates']

# List of patterns, relative to source directory, that match files and
# directories to ignore when looking for source files.
# This pattern also affects html_static_path and html_extra_path.
exclude_patterns = ['_build', 'Thumbs.db', '.DS_Store']

# Exclude the prompt "$" when copying code
copybutton_prompt_text = r'\$ '
copybutton_prompt_is_regexp = True

language = 'en'

# -- Options for HTML output -------------------------------------------------

# The theme to use for HTML and HTML Help pages. See the documentation for
# a list of builtin themes.
#
html_theme = 'sphinx_book_theme'
html_logo = '_static/image/logo.png'
html_theme_options = {
    'path_to_docs': 'docs/en',
    'repository_url': 'https://github.com/InternLM/xtuner',
    'use_repository_button': True,
}
# Add any paths that contain custom static files (such as style sheets) here,
# relative to this directory. They are copied after the builtin static files,
# so a file named "default.css" will overwrite the builtin "default.css".
# html_static_path = ['_static']

# Mock out external dependencies here.
autodoc_mock_imports = [
    'cpuinfo',
    'torch',
    'transformers',
    'psutil',
    'prometheus_client',
    'sentencepiece',
    'vllm.cuda_utils',
    'vllm._C',
    'numpy',
    'tqdm',
]


class MockedClassDocumenter(autodoc.ClassDocumenter):
    """Remove note about base class when a class is derived from object."""

    def add_line(self, line: str, source: str, *lineno: int) -> None:
        if line == '   Bases: :py:class:`object`':
            return
        super().add_line(line, source, *lineno)


autodoc.ClassDocumenter = MockedClassDocumenter

navigation_with_keys = False
data/xtuner/docs/en/dpo/modify_settings.md
ADDED
@@ -0,0 +1,83 @@
## Modify DPO Training Configuration

This section introduces config parameters related to DPO (Direct Preference Optimization) training. For more details on XTuner config files, please refer to [Modifying Training Configuration](https://xtuner.readthedocs.io/zh-cn/latest/training/modify_settings.html).

### Loss Function

In DPO training, you can choose different types of loss functions according to your needs. XTuner provides various loss function options, such as `sigmoid`, `hinge`, `ipo`, etc. You can select the desired loss function type by setting the `dpo_loss_type` parameter.

Additionally, you can control the temperature coefficient of the loss function by adjusting the `loss_beta` parameter, and use the `label_smoothing` parameter for label smoothing.

```python
#######################################################################
#                          PART 1  Settings                           #
#######################################################################
# Model
dpo_loss_type = 'sigmoid'  # One of ['sigmoid', 'hinge', 'ipo', 'kto_pair', 'sppo_hard', 'nca_pair', 'robust']
loss_beta = 0.1
label_smoothing = 0.0
```

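For intuition, the snippet below is a minimal sketch of what the default `sigmoid` variant computes from per-sample sequence log-probabilities. It is the textbook DPO loss, not XTuner's actual implementation (which additionally handles packing, sequence parallel, and the other `dpo_loss_type` variants):

```python
import torch
import torch.nn.functional as F


def dpo_sigmoid_loss(policy_chosen_logps: torch.Tensor,
                     policy_rejected_logps: torch.Tensor,
                     ref_chosen_logps: torch.Tensor,
                     ref_rejected_logps: torch.Tensor,
                     beta: float = 0.1) -> torch.Tensor:
    # beta-scaled implicit rewards: log-prob ratios of policy vs. reference
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # negative log-sigmoid of the reward margin, averaged over the batch
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

A larger `loss_beta` sharpens the preference margin; the `hinge` variant applies a hinge loss to the same margin, and `ipo` instead penalizes the squared deviation of the margin from a fixed target.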
### Modifying the Model

Users can modify `pretrained_model_name_or_path` to change the pretrained model.

```python
#######################################################################
#                          PART 1  Settings                           #
#######################################################################
# Model
pretrained_model_name_or_path = 'internlm/internlm2-chat-1_8b-sft'
```

### Training Data

In DPO training, you can specify the maximum number of tokens for a single sample sequence using the `max_length` parameter. XTuner will automatically truncate or pad the data.

```python
# Data
max_length = 2048
```

In the configuration file, we use the `train_dataset` field to specify the training dataset. You can specify the dataset loading method using the `dataset` field and the dataset mapping function using the `dataset_map_fn` field.

```python
#######################################################################
#                      PART 3  Dataset & Dataloader                   #
#######################################################################
sampler = SequenceParallelSampler \
    if sequence_parallel_size > 1 else DefaultSampler

train_dataset = dict(
    type=build_preference_dataset,
    dataset=dict(type=load_dataset, path='mlabonne/orpo-dpo-mix-40k'),
    tokenizer=tokenizer,
    max_length=max_length,
    dataset_map_fn=orpo_dpo_mix_40k_map_fn,
    is_dpo=True,
    is_reward=False,
    reward_token_id=-1,
    num_proc=32,
    use_varlen_attn=use_varlen_attn,
    max_packed_length=max_packed_length,
    shuffle_before_pack=True,
)

train_dataloader = dict(
    batch_size=batch_size,
    num_workers=dataloader_num_workers,
    dataset=train_dataset,
    sampler=dict(type=sampler, shuffle=True),
    collate_fn=dict(
        type=preference_collate_fn, use_varlen_attn=use_varlen_attn))
```

In the above configuration, we use `load_dataset` to load the `mlabonne/orpo-dpo-mix-40k` dataset from Hugging Face and use `orpo_dpo_mix_40k_map_fn` as the dataset mapping function.

For more information on handling datasets and writing dataset mapping functions, please refer to the [Preference Dataset Section](../reward_model/preference_data.md).

### Accelerating Training

When training with preference data, we recommend enabling the [Variable-Length Attention Mechanism](https://xtuner.readthedocs.io/zh-cn/latest/acceleration/varlen_flash_attn.html) to avoid memory waste caused by length differences between the chosen and rejected samples within a single preference pair. You can enable the variable-length attention mechanism by setting `use_varlen_attn=True`.

XTuner also supports many training acceleration methods. For details on how to use them, please refer to the [Acceleration Strategies Section](https://xtuner.readthedocs.io/zh-cn/latest/acceleration/hyper_parameters.html).
data/xtuner/docs/en/dpo/overview.md
ADDED
@@ -0,0 +1,27 @@
## Introduction to DPO

### Overview

DPO (Direct Preference Optimization) is a method used in large language model training for directly optimizing human preferences. Unlike traditional reinforcement learning methods, DPO directly uses human preference data to optimize the model, thereby improving the quality of generated content to better align with human preferences. DPO also eliminates the need to train a Reward Model and a Critic Model, avoiding the complexity of reinforcement learning algorithms, reducing training overhead, and enhancing training efficiency.

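Concretely, the standard DPO objective from the original paper trains the policy $\pi_\theta$ directly against a frozen reference model $\pi_{\text{ref}}$ on triples of a prompt $x$, a chosen response $y_w$, and a rejected response $y_l$:

$$
\mathcal{L}_{\text{DPO}}(\pi_\theta; \pi_{\text{ref}}) = -\,\mathbb{E}_{(x, y_w, y_l) \sim \mathcal{D}} \left[ \log \sigma\!\left( \beta \log \frac{\pi_\theta(y_w \mid x)}{\pi_{\text{ref}}(y_w \mid x)} - \beta \log \frac{\pi_\theta(y_l \mid x)}{\pi_{\text{ref}}(y_l \mid x)} \right) \right]
$$

where $\sigma$ is the sigmoid function and $\beta$ is the temperature coefficient (exposed as `loss_beta` in XTuner's DPO configs).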
Many algorithms have made certain improvements to DPO's loss function. In XTuner, besides DPO, we have also implemented loss functions from papers such as [Identity Preference Optimization (IPO)](https://huggingface.co/papers/2310.12036). To use these algorithms, please refer to the [Modify DPO Settings](./modify_settings.md) section. We also provide some [example configurations](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/dpo) for reference.

In addition to DPO, there are alignment algorithms like [ORPO](https://arxiv.org/abs/2403.07691) that do not require a reference model. ORPO uses the concept of odds ratio to optimize the model by penalizing rejected samples during training, thereby adapting more effectively to the chosen samples. ORPO eliminates the dependence on a reference model, making the training process simpler and more efficient. The training method for ORPO in XTuner is very similar to DPO, and we provide some [example configurations](https://github.com/InternLM/xtuner/tree/main/xtuner/configs/orpo). Users can refer to the DPO tutorial to modify the configuration.

### Features of DPO Training in XTuner

DPO training in XTuner offers the following significant advantages:

1. **Latest Algorithms**: In addition to supporting standard DPO, XTuner also supports improved DPO algorithms and memory-efficient algorithms like ORPO that do not rely on reference models.

2. **Reducing Memory Waste**: Due to the length differences between the chosen and rejected data in preference datasets, padding tokens during data concatenation can cause memory waste. In XTuner, by utilizing the variable-length attention feature from Flash Attention 2, preference pairs are packed into the same sequence during training, significantly reducing memory waste caused by padding tokens. This not only improves memory efficiency but also allows for training larger models or handling more data under the same hardware conditions.

   ![img](../../zh_cn/reward_model/images/var_len_atten.png)

3. **Efficient Training**: Leveraging XTuner's QLoRA training capabilities, the reference model can be converted into a policy model with the LoRA adapter removed, eliminating the memory overhead of the reference model weights and significantly reducing DPO training costs.

4. **Long Text Training**: With XTuner's sequence parallel functionality, long text data can be trained efficiently.

### Getting Started

Refer to the [Quick Start Guide](./quick_start.md) to understand the basic concepts. For more information on configuring training parameters, please see the [Modify DPO Settings](./modify_settings.md) section.
data/xtuner/docs/en/dpo/quick_start.md
ADDED
@@ -0,0 +1,71 @@
## Quick Start with DPO

In this section, we will introduce how to use XTuner to train a 1.8B DPO (Direct Preference Optimization) model to help you get started quickly.

### Preparing Pretrained Model Weights

We use the model [InternLM2-chat-1.8b-sft](https://huggingface.co/internlm/internlm2-chat-1_8b-sft) as the initial model for DPO training to align human preferences.

Set `pretrained_model_name_or_path = 'internlm/internlm2-chat-1_8b-sft'` in the training configuration file, and the model files will be automatically downloaded when training starts. If you need to download the model weights manually, please refer to the section [Preparing Pretrained Model Weights](https://xtuner.readthedocs.io/zh-cn/latest/preparation/pretrained_model.html), which provides detailed instructions on how to download model weights from HuggingFace or ModelScope. Here are the links to the model on HuggingFace and ModelScope:

- HuggingFace link: https://huggingface.co/internlm/internlm2-chat-1_8b-sft
- ModelScope link: https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm2-chat-1_8b-sft/summary

### Preparing Training Data

In this tutorial, we use the [mlabonne/orpo-dpo-mix-40k](https://huggingface.co/datasets/mlabonne/orpo-dpo-mix-40k) dataset from HuggingFace as an example.

```python
train_dataset = dict(
    type=build_preference_dataset,
    dataset=dict(
        type=load_dataset,
        path='mlabonne/orpo-dpo-mix-40k'),
    dataset_map_fn=orpo_dpo_mix_40k_map_fn,
    is_dpo=True,
    is_reward=False,
)
```

Using the above configuration in the configuration file will automatically download and process this dataset. If you want to use other open-source datasets from HuggingFace or custom datasets, please refer to the [Preference Dataset](../reward_model/preference_data.md) section.

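Conceptually, each sample in a preference dataset pairs one prompt with a preferred and a dispreferred response. The record below is only an illustrative sketch of that structure, not the exact schema of `mlabonne/orpo-dpo-mix-40k` or of XTuner's internal format (see the [Preference Dataset](../reward_model/preference_data.md) section for the formats actually accepted):

```python
# Illustrative only: one preference record with assumed field names.
preference_sample = {
    'prompt': 'What is the capital of France?',
    'chosen': 'The capital of France is Paris.',  # preferred response
    'rejected': 'France has no capital city.',    # dispreferred response
}
```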
### Preparing the Configuration File

XTuner provides several ready-to-use configuration files, which can be viewed using `xtuner list-cfg`. Execute the following command to copy a configuration file to the current directory.

```bash
xtuner copy-cfg internlm2_chat_1_8b_dpo_full .
```

Open the copied configuration file. If you choose to download the model and dataset automatically, no modifications are needed. If you want to specify paths to your pre-downloaded model and dataset, modify the `pretrained_model_name_or_path` and the `path` parameter in `dataset` under `train_dataset`.

For more training parameter configurations, please refer to the [Modifying DPO Training Configuration](./modify_settings.md) section.

### Starting the Training

After completing the above steps, you can start the training task using the following commands.

```bash
# Single machine, single GPU
xtuner train ./internlm2_chat_1_8b_dpo_full_copy.py
# Single machine, multiple GPUs
NPROC_PER_NODE=${GPU_NUM} xtuner train ./internlm2_chat_1_8b_dpo_full_copy.py
# Slurm cluster
srun ${SRUN_ARGS} xtuner train ./internlm2_chat_1_8b_dpo_full_copy.py --launcher slurm
```

### Model Conversion

XTuner provides integrated tools to convert models to HuggingFace format. Simply execute the following commands:

```bash
# Create a directory for HuggingFace format parameters
mkdir work_dirs/internlm2_chat_1_8b_dpo_full_copy/iter_15230_hf

# Convert format
xtuner convert pth_to_hf internlm2_chat_1_8b_dpo_full_copy.py \
    work_dirs/internlm2_chat_1_8b_dpo_full_copy/iter_15230.pth \
    work_dirs/internlm2_chat_1_8b_dpo_full_copy/iter_15230_hf
```

This will convert XTuner's checkpoint to the HuggingFace format.
data/xtuner/docs/en/evaluation/hook.md
ADDED
@@ -0,0 +1 @@
# Evaluation during training

data/xtuner/docs/en/evaluation/mmbench.md
ADDED
@@ -0,0 +1 @@
# MMBench (VLM)

data/xtuner/docs/en/evaluation/mmlu.md
ADDED
@@ -0,0 +1 @@
# MMLU (LLM)

data/xtuner/docs/en/evaluation/opencompass.md
ADDED
@@ -0,0 +1 @@
# Evaluate with OpenCompass
data/xtuner/docs/en/get_started/installation.md
ADDED
@@ -0,0 +1,52 @@
# Installation

In this section, we will show you how to install XTuner.

## Installation Process

We recommend that users follow our best practices for installing XTuner.
It is recommended to use a conda virtual environment with Python-3.10 to install XTuner.

### Best Practices

**Step 0.** Create a Python-3.10 virtual environment using conda.

```shell
conda create --name xtuner-env python=3.10 -y
conda activate xtuner-env
```

**Step 1.** Install XTuner.

Case a: Install XTuner via pip:

```shell
pip install -U xtuner
```

Case b: Install XTuner with DeepSpeed integration:

```shell
pip install -U 'xtuner[deepspeed]'
```

Case c: Install XTuner from the source code:

```shell
git clone https://github.com/InternLM/xtuner.git
cd xtuner
pip install -e '.[all]'
# "-e" indicates installing the project in editable mode, so any local modifications to the code will take effect without reinstalling.
```

## Verify the installation

To verify whether XTuner is installed correctly, we will use a command to print the configuration files.

**Print Configuration Files:** Use the command `xtuner list-cfg` in the command line to verify whether the configuration files can be printed.

```shell
xtuner list-cfg
```

You should see a list of XTuner configuration files, corresponding to the ones in [xtuner/configs](https://github.com/InternLM/xtuner/tree/main/xtuner/configs) in the source code.
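As an additional sanity check, you can query the installed version through standard packaging metadata. This is a minimal sketch using only the Python standard library; it works for any pip-installed distribution, including XTuner:

```python
# Look up the installed XTuner version from the package metadata.
from importlib.metadata import version

print(version('xtuner'))
```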
data/xtuner/docs/en/get_started/overview.md
ADDED
@@ -0,0 +1,5 @@
# Overview

This chapter introduces you to the framework and workflow of XTuner, and provides detailed tutorial links.

## What is XTuner
data/xtuner/docs/en/get_started/quickstart.md
ADDED
@@ -0,0 +1,308 @@
# Quickstart

In this section, we will show you how to use XTuner to fine-tune a model to help you get started quickly.

After installing XTuner successfully, we can start fine-tuning the model. In this section, we will demonstrate how to use XTuner to apply the QLoRA algorithm to fine-tune InternLM2-Chat-7B on the Colorist dataset.

The Colorist dataset ([HuggingFace link](https://huggingface.co/datasets/burkelibbey/colors); [ModelScope link](https://www.modelscope.cn/datasets/fanqiNO1/colors/summary)) is a dataset that provides color choices and suggestions based on color descriptions. A model fine-tuned on this dataset can be used to give a hexadecimal color code based on the user's description of the color. For example, when the user enters "a calming but fairly bright light sky blue, between sky blue and baby blue, with a hint of fluorescence due to its brightness", the model will output ![#66ccff](https://img.shields.io/badge/%2366ccff-66CCFF), which matches the user's description. Here are a few samples from this dataset:

| English Description | Chinese Description | Color |
| ------------------- | ------------------- | ----- |
| Light Sky Blue: A calming, fairly bright color that falls between sky blue and baby blue, with a hint of slight fluorescence due to its brightness. | 浅天蓝色:一种介于天蓝和婴儿蓝之间的平和、相当明亮的颜色,由于明亮而带有一丝轻微的荧光。 | #66ccff: ![#66ccff](https://img.shields.io/badge/%2366ccff-66CCFF) |
| Bright red: This is a very vibrant, saturated and vivid shade of red, resembling the color of ripe apples or fresh blood. It is as red as you can get on a standard RGB color palette, with no elements of either blue or green. | 鲜红色: 这是一种非常鲜艳、饱和、生动的红色,类似成熟苹果或新鲜血液的颜色。它是标准 RGB 调色板上的红色,不含任何蓝色或绿色元素。 | #ee0000: ![#ee0000](https://img.shields.io/badge/%23ee0000-EE0000) |
| Bright Turquoise: This color mixes the freshness of bright green with the tranquility of light blue, leading to a vibrant shade of turquoise. It is reminiscent of tropical waters. | 明亮的绿松石色:这种颜色融合了鲜绿色的清新和淡蓝色的宁静,呈现出一种充满活力的绿松石色调。它让人联想到热带水域。 | #00ffcc: ![#00ffcc](https://img.shields.io/badge/%2300ffcc-00FFCC) |

## Prepare the model weights

Before fine-tuning the model, we first need to prepare the weights of the model.

### Download from HuggingFace

```bash
pip install -U huggingface_hub

# Download the model weights to Shanghai_AI_Laboratory/internlm2-chat-7b
huggingface-cli download internlm/internlm2-chat-7b \
    --local-dir Shanghai_AI_Laboratory/internlm2-chat-7b \
    --local-dir-use-symlinks False \
    --resume-download
```

### Download from ModelScope

Since pulling model weights from HuggingFace can be unstable and slow, we can instead download the weights of InternLM2-Chat-7B from ModelScope when experiencing network issues.

```bash
pip install -U modelscope

# Download the model weights to the current directory
python -c "from modelscope import snapshot_download; snapshot_download('Shanghai_AI_Laboratory/internlm2-chat-7b', cache_dir='.')"
```

After completing the download, we can start to prepare the dataset for fine-tuning.

The HuggingFace link and ModelScope link are attached here:

- The HuggingFace link is located at: https://huggingface.co/internlm/internlm2-chat-7b
- The ModelScope link is located at: https://modelscope.cn/models/Shanghai_AI_Laboratory/internlm2-chat-7b/summary

## Prepare the fine-tuning dataset

### Download from HuggingFace

```bash
git clone https://huggingface.co/datasets/burkelibbey/colors
```

### Download from ModelScope

For the same reason, we can choose to download the dataset from ModelScope.

```bash
git clone https://www.modelscope.cn/datasets/fanqiNO1/colors.git
```

The HuggingFace link and ModelScope link are attached here:

- The HuggingFace link is located at: https://huggingface.co/datasets/burkelibbey/colors
- The ModelScope link is located at: https://modelscope.cn/datasets/fanqiNO1/colors

## Prepare the config

XTuner provides several configs out-of-the-box, which can be viewed via `xtuner list-cfg`. We can use the following command to copy a config to the current directory.

```bash
xtuner copy-cfg internlm2_7b_qlora_colorist_e5 .
```

Explanation of the config name:

| Config Name | internlm2_7b_qlora_colorist_e5 |
| ----------- | ------------------------------ |
| Model Name  | internlm2_7b                   |
| Algorithm   | qlora                          |
| Dataset     | colorist                       |
| Epochs      | 5                              |

The directory structure at this point should look like this:

```bash
.
├── colors
│   ├── colors.json
│   ├── dataset_infos.json
│   ├── README.md
│   └── train.jsonl
├── internlm2_7b_qlora_colorist_e5_copy.py
└── Shanghai_AI_Laboratory
    └── internlm2-chat-7b
        ├── config.json
        ├── configuration_internlm2.py
        ├── configuration.json
        ├── generation_config.json
        ├── modeling_internlm2.py
        ├── pytorch_model-00001-of-00008.bin
        ├── pytorch_model-00002-of-00008.bin
        ├── pytorch_model-00003-of-00008.bin
        ├── pytorch_model-00004-of-00008.bin
        ├── pytorch_model-00005-of-00008.bin
        ├── pytorch_model-00006-of-00008.bin
        ├── pytorch_model-00007-of-00008.bin
        ├── pytorch_model-00008-of-00008.bin
        ├── pytorch_model.bin.index.json
        ├── README.md
        ├── special_tokens_map.json
        ├── tokenization_internlm2_fast.py
        ├── tokenization_internlm2.py
        ├── tokenizer_config.json
        └── tokenizer.model
```

## Modify the config

In this step, we need to modify the model path and dataset path to local paths and modify the dataset loading method.
In addition, since the copied config is based on the Base model, we also need to modify the `prompt_template` to adapt to the Chat model.

```diff
#######################################################################
#                          PART 1  Settings                           #
#######################################################################
# Model
- pretrained_model_name_or_path = 'internlm/internlm2-7b'
+ pretrained_model_name_or_path = './Shanghai_AI_Laboratory/internlm2-chat-7b'

# Data
- data_path = 'burkelibbey/colors'
+ data_path = './colors/train.jsonl'
- prompt_template = PROMPT_TEMPLATE.default
+ prompt_template = PROMPT_TEMPLATE.internlm2_chat

...
#######################################################################
#                      PART 3  Dataset & Dataloader                   #
#######################################################################
train_dataset = dict(
    type=process_hf_dataset,
-   dataset=dict(type=load_dataset, path=data_path),
+   dataset=dict(type=load_dataset, path='json', data_files=dict(train=data_path)),
    tokenizer=tokenizer,
    max_length=max_length,
    dataset_map_fn=colors_map_fn,
    template_map_fn=dict(
        type=template_map_fn_factory, template=prompt_template),
    remove_unused_columns=True,
    shuffle_before_pack=True,
    pack_to_max_length=pack_to_max_length)
```

Therefore, `pretrained_model_name_or_path`, `data_path`, `prompt_template`, and the `dataset` field in `train_dataset` are modified.

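For reference, a dataset map function converts one raw record into the conversation format consumed by XTuner's data pipeline. The snippet below is only a hypothetical sketch of such a function; the real `colors_map_fn` ships with XTuner, and the `description`/`color` field names are assumptions based on this dataset's description-to-color pairs:

```python
def colors_map_fn_sketch(example: dict) -> dict:
    # Hypothetical sketch: turn one raw colors record into a
    # single-turn conversation of the form XTuner's pipeline expects.
    return {
        'conversation': [{
            'input': example['description'],  # assumed field: color description
            'output': example['color'],       # assumed field: hex color code
        }]
    }
```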
160 |
+
## Start fine-tuning
|
161 |
+
|
162 |
+
Once having done the above steps, we can start fine-tuning using the following command.
|
163 |
+
|
164 |
+
```bash
|
165 |
+
# Single GPU
|
166 |
+
xtuner train ./internlm2_7b_qlora_colorist_e5_copy.py
|
167 |
+
# Multiple GPUs
|
168 |
+
NPROC_PER_NODE=${GPU_NUM} xtuner train ./internlm2_7b_qlora_colorist_e5_copy.py
|
169 |
+
# Slurm
|
170 |
+
srun ${SRUN_ARGS} xtuner train ./internlm2_7b_qlora_colorist_e5_copy.py --launcher slurm
|
171 |
+
```
|
172 |
+
|
173 |
+
The correct training log may look similar to the one shown below:
|
174 |
+
|
175 |
+
```text
|
176 |
+
01/29 21:35:34 - mmengine - INFO - Iter(train) [ 10/720] lr: 9.0001e-05 eta: 0:31:46 time: 2.6851 data_time: 0.0077 memory: 12762 loss: 2.6900
|
177 |
+
01/29 21:36:02 - mmengine - INFO - Iter(train) [ 20/720] lr: 1.9000e-04 eta: 0:32:01 time: 2.8037 data_time: 0.0071 memory: 13969 loss: 2.6049 grad_norm: 0.9361
|
178 |
+
01/29 21:36:29 - mmengine - INFO - Iter(train) [ 30/720] lr: 1.9994e-04 eta: 0:31:24 time: 2.7031 data_time: 0.0070 memory: 13969 loss: 2.5795 grad_norm: 0.9361
|
179 |
+
01/29 21:36:57 - mmengine - INFO - Iter(train) [ 40/720] lr: 1.9969e-04 eta: 0:30:55 time: 2.7247 data_time: 0.0069 memory: 13969 loss: 2.3352 grad_norm: 0.8482
|
180 |
+
01/29 21:37:24 - mmengine - INFO - Iter(train) [ 50/720] lr: 1.9925e-04 eta: 0:30:28 time: 2.7286 data_time: 0.0068 memory: 13969 loss: 2.2816 grad_norm: 0.8184
|
181 |
+
01/29 21:37:51 - mmengine - INFO - Iter(train) [ 60/720] lr: 1.9863e-04 eta: 0:29:58 time: 2.7048 data_time: 0.0069 memory: 13969 loss: 2.2040 grad_norm: 0.8184
|
182 |
+
01/29 21:38:18 - mmengine - INFO - Iter(train) [ 70/720] lr: 1.9781e-04 eta: 0:29:31 time: 2.7302 data_time: 0.0068 memory: 13969 loss: 2.1912 grad_norm: 0.8460
|
183 |
+
01/29 21:38:46 - mmengine - INFO - Iter(train) [ 80/720] lr: 1.9681e-04 eta: 0:29:05 time: 2.7338 data_time: 0.0069 memory: 13969 loss: 2.1512 grad_norm: 0.8686
|
184 |
+
01/29 21:39:13 - mmengine - INFO - Iter(train) [ 90/720] lr: 1.9563e-04 eta: 0:28:36 time: 2.7047 data_time: 0.0068 memory: 13969 loss: 2.0653 grad_norm: 0.8686
|
185 |
+
01/29 21:39:40 - mmengine - INFO - Iter(train) [100/720] lr: 1.9426e-04 eta: 0:28:09 time: 2.7383 data_time: 0.0070 memory: 13969 loss: 1.9819 grad_norm: 0.9127
|
186 |
+
```
|
187 |
+
|
188 |
+
Before training begins, the output of the model is as shown below:
|
189 |
+
|
190 |
+
```text
|
191 |
+
2024/01/29 21:34:58 - mmengine - INFO - before_train in EvaluateChatHook.
|
192 |
+
2024/01/29 21:35:03 - mmengine - INFO - Sample output:
|
193 |
+
<s><|im_start|>system
|
194 |
+
You are a professional color designer. Please provide the corresponding colors based on the description of Human.
|
195 |
+
<|im_end|>
|
196 |
+
<|im_start|>user
|
197 |
+
请给我一个像天空一样清澈透明的蓝色。<|im_end|>
|
198 |
+
<|im_start|>assistant
|
199 |
+
为了匹配您所描述的“像天空一样清澈透明的蓝色”,我建议您选择一种名为“天蓝”(Cerulean)的颜色。这种颜色通常被用来代表天空、海洋和清澈的水域,它具有明亮
|
200 |
+
|
201 |
+
2024/01/29 21:35:07 - mmengine - INFO - Sample output:
|
202 |
+
<s><|im_start|>system
|
203 |
+
You are a professional color designer. Please provide the corresponding colors based on the description of Human.
|
204 |
+
<|im_end|>
|
205 |
+
<|im_start|>user
|
206 |
+
Please give me a clear blue like the sky.<|im_end|>
|
207 |
+
<|im_start|>assistant
|
208 |
+
The color you're describing is a clear, bright blue that resembles the color of the sky on a sunny day. This color is often referred to as "sky blue" or "cerulean blue." Here are some specific color codes and names that
|
209 |
+
```
|
210 |
+
|
211 |
+
After training, the output of the model is as shown below:
|
212 |
+
|
213 |
+
```text
|
214 |
+
2024/01/29 22:08:07 - mmengine - INFO - Sample output:
|
215 |
+
<s><|im_start|>system
|
216 |
+
You are a professional color designer. Please provide the corresponding colors based on the description of Human.
|
217 |
+
<|im_end|>
|
218 |
+
<|im_start|>user
|
219 |
+
请给我一个像天空一样清澈透明的蓝色。<|im_end|>
|
220 |
+
<|im_start|>assistant
|
221 |
+
#1099ee<|im_end|>
|
222 |
+
|
223 |
+
2024/01/29 22:08:08 - mmengine - INFO - Sample output:
|
224 |
+
<s><|im_start|>system
|
225 |
+
You are a professional color designer. Please provide the corresponding colors based on the description of Human.
|
226 |
+
<|im_end|>
|
227 |
+
<|im_start|>user
|
228 |
+
Please give me a clear blue like the sky.<|im_end|>
|
229 |
+
<|im_start|>assistant
|
230 |
+
#0066dd<|im_end|>
|
231 |
+
```
|
232 |
+
|
233 |
+
The color of the model output is shown below:
|
234 |
+
|
235 |
+
- 天空一样清澈透明的蓝色:![天空一样清澈透明的蓝色](https://img.shields.io/badge/天空一样清澈透明的蓝色-1099EE)
|
236 |
+
- A clear blue like the sky: ![A clear blue like the sky](https://img.shields.io/badge/A_clear_blue_like_the_sky-0066DD)
|
237 |
+
|
238 |
+
It is clear that the output of the model after training has been fully aligned with the content of the dataset.
|
239 |
+
|
240 |
+
## Model Convert + LoRA Merge

After training, we will get several `.pth` files that do **NOT** contain all the parameters of the model; instead, they store only the parameters updated during QLoRA training. Therefore, we need to convert these `.pth` files to HuggingFace format and merge them into the original LLM weights.

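As an optional sanity check, we can peek at the checkpoint's keys to confirm that it holds only adapter parameters rather than the full model. This is a minimal sketch, not part of the XTuner workflow; it assumes the run did not use DeepSpeed (so `iter_720.pth` is a single file) and that PyTorch is installed:

```python
import torch

# Load the mmengine checkpoint on CPU; it is small because it only stores the
# QLoRA-updated parameters. Depending on your PyTorch version, you may need to
# pass weights_only=False here.
ckpt = torch.load(
    "work_dirs/internlm2_7b_qlora_colorist_e5_copy/iter_720.pth",
    map_location="cpu",
)

# mmengine usually nests the weights under 'state_dict'; fall back to the root otherwise.
state = ckpt.get("state_dict", ckpt)
print(len(state), "tensors")
print(list(state)[:5])  # expect LoRA-style parameter names, not the full model weights
```
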
### Model Convert

XTuner already integrates a tool for converting models to HuggingFace format. We can use the following command to convert the model:

```bash
# Create the directory to store parameters in hf format
mkdir work_dirs/internlm2_7b_qlora_colorist_e5_copy/iter_720_hf

# Convert the model to hf format
xtuner convert pth_to_hf internlm2_7b_qlora_colorist_e5_copy.py \
                         work_dirs/internlm2_7b_qlora_colorist_e5_copy/iter_720.pth \
                         work_dirs/internlm2_7b_qlora_colorist_e5_copy/iter_720_hf
```

This command converts `work_dirs/internlm2_7b_qlora_colorist_e5_copy/iter_720.pth` to HuggingFace format, based on the contents of the config `internlm2_7b_qlora_colorist_e5_copy.py`, and saves the result in `work_dirs/internlm2_7b_qlora_colorist_e5_copy/iter_720_hf`.

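For illustration only: if the converted directory follows the standard PEFT adapter layout (which is what LoRA/QLoRA conversion typically produces), it can also be attached to the base model directly with the `peft` library. A minimal sketch, assuming `transformers` and `peft` are installed and the paths above exist:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_path = "Shanghai_AI_Laboratory/internlm2-chat-7b"
adapter_path = "work_dirs/internlm2_7b_qlora_colorist_e5_copy/iter_720_hf"

# InternLM2 ships custom modeling code, hence trust_remote_code=True.
tokenizer = AutoTokenizer.from_pretrained(base_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(base_path, trust_remote_code=True)

# Attach the LoRA adapter produced by `xtuner convert pth_to_hf`.
model = PeftModel.from_pretrained(model, adapter_path)
```
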
### LoRA Merge

XTuner also integrates a tool for merging LoRA weights; we just need to execute the following command:

```bash
# Create the directory to store the merged weights
mkdir work_dirs/internlm2_7b_qlora_colorist_e5_copy/merged

# Merge the weights
xtuner convert merge Shanghai_AI_Laboratory/internlm2-chat-7b \
                     work_dirs/internlm2_7b_qlora_colorist_e5_copy/iter_720_hf \
                     work_dirs/internlm2_7b_qlora_colorist_e5_copy/merged \
                     --max-shard-size 2GB
```

Similar to the command above, this command reads the original parameter path `Shanghai_AI_Laboratory/internlm2-chat-7b` and the path of the parameters already converted to hf format, `work_dirs/internlm2_7b_qlora_colorist_e5_copy/iter_720_hf`, merges the two sets of parameters, and saves them in `work_dirs/internlm2_7b_qlora_colorist_e5_copy/merged`, where the maximum file size of each parameter shard is 2GB.

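Because the merged directory is a self-contained HuggingFace checkpoint, it can be loaded like any other model, without the adapter machinery. A minimal sketch, assuming `transformers` is installed:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

merged_path = "work_dirs/internlm2_7b_qlora_colorist_e5_copy/merged"

# No PEFT needed: the LoRA deltas are already folded into the weights.
tokenizer = AutoTokenizer.from_pretrained(merged_path, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(merged_path, trust_remote_code=True)
```
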
## Chat with the model

To better appreciate the model's capabilities after merging the weights, we can chat with it. XTuner also integrates a tool for chatting with models; we can start a simple demo with the following command:

```bash
xtuner chat work_dirs/internlm2_7b_qlora_colorist_e5_copy/merged \
            --prompt-template internlm2_chat \
            --system-template colorist
```

Of course, we can also choose not to merge the weights and instead chat directly with the LLM plus the LoRA adapter; we just need to execute the following command:

```bash
xtuner chat Shanghai_AI_Laboratory/internlm2-chat-7b \
            --adapter work_dirs/internlm2_7b_qlora_colorist_e5_copy/iter_720_hf \
            --prompt-template internlm2_chat \
            --system-template colorist
```

where `work_dirs/internlm2_7b_qlora_colorist_e5_copy/merged` is the path to the merged weights, `--prompt-template internlm2_chat` specifies that the chat template is InternLM2-Chat, and `--system-template colorist` specifies that the system prompt for the conversation is the template required by the Colorist dataset.

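For reference, these two template flags together assemble prompts in the format visible in the training logs above. A minimal sketch of that string, purely for illustration (the formatting mirrors the logged samples, not XTuner internals):

```python
system = ("You are a professional color designer. Please provide the "
          "corresponding colors based on the description of Human.")
user = "Please give me a clear blue like the sky."

# Mirrors the <|im_start|>/<|im_end|> structure shown in the logged samples.
prompt = (
    f"<s><|im_start|>system\n{system}\n<|im_end|>\n"
    f"<|im_start|>user\n{user}<|im_end|>\n"
    f"<|im_start|>assistant\n"
)
print(prompt)
```
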
An example chat session is shown below:

```text
double enter to end input (EXIT: exit chat, RESET: reset history) >>> A calming but fairly bright light sky blue, between sky blue and baby blue, with a hint of fluorescence due to its brightness.

#66ccff<|im_end|>
```

The color of the model output is shown below:

A calming but fairly bright light sky blue, between sky blue and baby blue, with a hint of fluorescence due to its brightness: ![#66ccff](https://img.shields.io/badge/A_calming_but_fairly_bright_light_sky_blue_between_sky_blue_and_baby_blue_with_a_hint_of_fluorescence_due_to_its_brightness-66CCFF).
data/xtuner/docs/en/index.rst
ADDED
@@ -0,0 +1,123 @@
.. xtuner documentation master file, created by
   sphinx-quickstart on Tue Jan 9 16:33:06 2024.
   You can adapt this file completely to your liking, but it should at least
   contain the root `toctree` directive.

Welcome to XTuner's documentation!
==================================

.. figure:: ./_static/image/logo.png
   :align: center
   :alt: xtuner
   :class: no-scaled-link

.. raw:: html

   <p style="text-align:center">
   <strong>All-IN-ONE toolbox for LLM
   </strong>
   </p>

   <p style="text-align:center">
   <script async defer src="https://buttons.github.io/buttons.js"></script>
   <a class="github-button" href="https://github.com/InternLM/xtuner" data-show-count="true" data-size="large" aria-label="Star">Star</a>
   <a class="github-button" href="https://github.com/InternLM/xtuner/subscription" data-icon="octicon-eye" data-size="large" aria-label="Watch">Watch</a>
   <a class="github-button" href="https://github.com/InternLM/xtuner/fork" data-icon="octicon-repo-forked" data-size="large" aria-label="Fork">Fork</a>
   </p>


Documentation
-------------
.. toctree::
   :maxdepth: 2
   :caption: Get Started

   get_started/overview.md
   get_started/installation.md
   get_started/quickstart.md

.. toctree::
   :maxdepth: 2
   :caption: Preparation

   preparation/pretrained_model.rst
   preparation/prompt_template.rst

.. toctree::
   :maxdepth: 2
   :caption: Training

   training/modify_settings.rst
   training/custom_sft_dataset.rst
   training/custom_pretrain_dataset.rst
   training/custom_agent_dataset.rst
   training/multi_modal_dataset.rst
   training/open_source_dataset.rst
   training/visualization.rst

.. toctree::
   :maxdepth: 2
   :caption: DPO

   dpo/overview.md
   dpo/quick_start.md
   dpo/modify_settings.md

.. toctree::
   :maxdepth: 2
   :caption: Reward Model

   reward_model/overview.md
   reward_model/quick_start.md
   reward_model/modify_settings.md
   reward_model/preference_data.md

.. toctree::
   :maxdepth: 2
   :caption: Acceleration

   acceleration/deepspeed.rst
   acceleration/pack_to_max_length.rst
   acceleration/flash_attn.rst
   acceleration/varlen_flash_attn.rst
   acceleration/hyper_parameters.rst
   acceleration/length_grouped_sampler.rst
   acceleration/train_large_scale_dataset.rst
   acceleration/train_extreme_long_sequence.rst
   acceleration/benchmark.rst

.. toctree::
   :maxdepth: 2
   :caption: Chat

   chat/llm.md
   chat/agent.md
   chat/vlm.md
   chat/lmdeploy.md

.. toctree::
   :maxdepth: 2
   :caption: Evaluation

   evaluation/hook.md
   evaluation/mmlu.md
   evaluation/mmbench.md
   evaluation/opencompass.md

.. toctree::
   :maxdepth: 2
   :caption: Models

   models/supported.md

.. toctree::
   :maxdepth: 2
   :caption: InternEvo Migration

   internevo_migration/internevo_migration.rst
   internevo_migration/ftdp_dataset/ftdp.rst
   internevo_migration/ftdp_dataset/Case1.rst
   internevo_migration/ftdp_dataset/Case2.rst
   internevo_migration/ftdp_dataset/Case3.rst
   internevo_migration/ftdp_dataset/Case4.rst
data/xtuner/docs/en/internevo_migration/ftdp_dataset/Case1.rst
ADDED
@@ -0,0 +1,2 @@
Case 1
======
data/xtuner/docs/en/internevo_migration/ftdp_dataset/Case2.rst
ADDED
@@ -0,0 +1,2 @@
Case 2
======
data/xtuner/docs/en/internevo_migration/ftdp_dataset/Case3.rst
ADDED
@@ -0,0 +1,2 @@
Case 3
======
data/xtuner/docs/en/internevo_migration/ftdp_dataset/Case4.rst
ADDED
@@ -0,0 +1,2 @@
Case 4
======
data/xtuner/docs/en/internevo_migration/ftdp_dataset/ftdp.rst
ADDED
@@ -0,0 +1,2 @@
ftdp
====
data/xtuner/docs/en/internevo_migration/internevo_migration.rst
ADDED
@@ -0,0 +1,2 @@
InternEVO Migration
===================
data/xtuner/docs/en/make.bat
ADDED
@@ -0,0 +1,35 @@
@ECHO OFF

pushd %~dp0

REM Command file for Sphinx documentation

if "%SPHINXBUILD%" == "" (
	set SPHINXBUILD=sphinx-build
)
set SOURCEDIR=.
set BUILDDIR=_build

%SPHINXBUILD% >NUL 2>NUL
if errorlevel 9009 (
	echo.
	echo.The 'sphinx-build' command was not found. Make sure you have Sphinx
	echo.installed, then set the SPHINXBUILD environment variable to point
	echo.to the full path of the 'sphinx-build' executable. Alternatively you
	echo.may add the Sphinx directory to PATH.
	echo.
	echo.If you don't have Sphinx installed, grab it from
	echo.https://www.sphinx-doc.org/
	exit /b 1
)

if "%1" == "" goto help

%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%
goto end

:help
%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%

:end
popd
data/xtuner/docs/en/models/supported.md
ADDED
@@ -0,0 +1 @@
# Supported Models
data/xtuner/docs/en/notes/changelog.md
ADDED
@@ -0,0 +1,25 @@
<!--

## vX.X.X (YYYY.MM.DD)

### Highlights

### New Features & Improvements

### Bug Fixes

### Contributors

-->

# Changelog

## v0.1.0 (2023.08.30)

XTuner is released! 🔥🔥🔥

### Highlights

- XTuner supports LLM fine-tuning on consumer-grade GPUs. The minimum GPU memory required for 7B LLM fine-tuning is only **8GB**.
- XTuner supports various LLMs, datasets, algorithms and training pipelines.
- Several fine-tuned adapters are released simultaneously, including various gameplays such as the colorist LLM, plugins-based LLM, and many more. For further details, please visit [XTuner on HuggingFace](https://huggingface.co/xtuner)!
data/xtuner/docs/en/preparation/pretrained_model.rst
ADDED
@@ -0,0 +1,2 @@
Pretrained Model
================