[![Gradio](https://img.shields.io/badge/Gradio-Online%20Demo-blue)](https://b3cfcf9e79ff42df5f.gradio.live)

# LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models

<font size=6><div align='center' > <a href=http://arxiv.org/abs/2309.12307>**Paper**</a> | <a href="https://huggingface.co/Yukang">**Models**</a> | <a href="https://github.com/dvlab-research/LongLoRA">**Code**</a> | <a href="https://b3cfcf9e79ff42df5f.gradio.live">**Demo**</a></div></font>

**LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models [[Paper](http://arxiv.org/abs/2309.12307)]** <br />
[Yukang Chen](https://scholar.google.com/citations?user=6p0ygKUAAAAJ&hl=en),
[Shengju Qian](https://scholar.google.com/citations?user=QNnWmasAAAAJ),
[Haotian Tang](https://scholar.google.com/citations?user=WxL13BAAAAAJ&hl),
[Xin Lai](https://scholar.google.com/citations?user=tqNDPA4AAAAJ&hl=zh-CN),
[Zhijian Liu](https://scholar.google.com/citations?user=3coYSTUAAAAJ&hl=en),
[Song Han](https://scholar.google.com/citations?user=E0iCaa4AAAAJ&hl=zh-CN),
[Jiaya Jia](https://scholar.google.com/citations?user=XPAkzTEAAAAJ&hl=en)<br />

## Abstract
We present LongLoRA, an efficient fine-tuning approach that extends the context sizes of pre-trained large language models (LLMs) with limited computation cost.
Typically, training LLMs with long context sizes is computationally expensive, requiring extensive training hours and GPU resources.
In this paper, we speed up the context extension of LLMs in two aspects. On the one hand, although dense global attention is needed during inference, fine-tuning the model can be done effectively and efficiently with sparse local attention. The proposed shifted short attention enables context extension, leading to non-trivial computation savings with performance similar to fine-tuning with vanilla attention. On the other hand, we find that LoRA for context extension works well under the premise of trainable embedding and normalization. LongLoRA demonstrates strong empirical results on various tasks with LLaMA2 models from 7B and 13B to 70B. LongLoRA extends LLaMA2 7B from 4k context to 100k, or LLaMA2 70B to 32k, on a single 8x A100 machine. LongLoRA extends models' context while retaining their original architectures, and is compatible with most existing techniques, such as FlashAttention-2. In addition, to make LongLoRA practical, we collect a dataset, LongQA, for supervised fine-tuning. It contains more than 3k long-context question-answer pairs. For more details, please refer to the [paper](http://arxiv.org/abs/2309.12307).

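To make the attention-level idea concrete, below is a minimal PyTorch sketch (assuming PyTorch >= 2.0 for `scaled_dot_product_attention`) of group-wise attention in which half of the heads are shifted by half a group. The function and variable names (`shifted_short_attention`, `group_size`) are illustrative, not the implementation in the code repository; a full implementation would also need to handle the wrap-around at the sequence boundary for the shifted heads. Please refer to the [code](https://github.com/dvlab-research/LongLoRA) for the actual fine-tuning patch.

```python
import torch
import torch.nn.functional as F

def shifted_short_attention(q, k, v, group_size):
    # q, k, v: (batch, seq_len, num_heads, head_dim); seq_len must be a multiple of group_size.
    bsz, seq_len, n_heads, head_dim = q.shape
    assert seq_len % group_size == 0
    n_groups = seq_len // group_size

    def shift(x, direction):
        # Roll half of the heads by half a group so that information can flow
        # across neighboring groups.
        x = x.clone()
        x[:, :, n_heads // 2:] = x[:, :, n_heads // 2:].roll(direction * (group_size // 2), dims=1)
        return x

    q, k, v = shift(q, -1), shift(k, -1), shift(v, -1)

    # Attend only within local groups by folding each group into the batch dimension.
    def to_groups(x):
        return x.reshape(bsz * n_groups, group_size, n_heads, head_dim).transpose(1, 2)

    out = F.scaled_dot_product_attention(to_groups(q), to_groups(k), to_groups(v), is_causal=True)

    # Undo the grouping and the head shift.
    out = out.transpose(1, 2).reshape(bsz, seq_len, n_heads, head_dim)
    return shift(out, +1)
```
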
## Highlights
**LongLoRA** speeds up the context extension of pre-trained large language models at both the attention level and the weight level (a sketch of the weight-level recipe is given after this list).
1. The proposed shifted short attention is easy to implement, compatible with Flash-Attention, and not required during inference.
2. We release all our models, ranging from 7B to 70B in size and from 8k to 100k in context length, including [LLaMA2-LongLoRA-7B-100k](https://huggingface.co/Yukang/Llama-2-7b-longlora-100k-ft), [LLaMA2-LongLoRA-13B-64k](https://huggingface.co/Yukang/Llama-2-13b-longlora-64k), and [LLaMA2-LongLoRA-70B-32k](https://huggingface.co/Yukang/Llama-2-70b-longlora-32k).
3. We build a long-context QA dataset, LongQA, for supervised fine-tuning (SFT). We release 13B and 70B models with 32k context trained with SFT, [Llama-2-13b-chat-longlora-32k-sft](https://huggingface.co/Yukang/Llama-2-13b-chat-longlora-32k-sft) and [Llama-2-70b-chat-longlora-32k-sft](https://huggingface.co/Yukang/Llama-2-70b-chat-longlora-32k-sft). We will release the dataset itself next week.

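As a rough illustration of the weight-level side (the "LoRA+" entries in the tables below), the sketch below combines standard LoRA on the attention projections with fully trainable embedding and normalization layers, using Hugging Face `peft`. The base checkpoint, rank, and target module names are assumptions for illustration, not the exact training configuration; please refer to the [code](https://github.com/dvlab-research/LongLoRA) for the real setup.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Base checkpoint and hyperparameters are illustrative placeholders.
model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    # Keep embedding and normalization weights fully trainable, which the paper
    # identifies as important for LoRA-based context extension. The "norm"
    # suffix matching normalization modules is an assumption about peft behavior.
    modules_to_save=["embed_tokens", "norm"],
    task_type="CAUSAL_LM",
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```
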
## Released models

### Models with supervised fine-tuning
| Model                             | Size | Context (tokens) | Train | Link                                                                    |
|:----------------------------------|------|------------------|-------|-------------------------------------------------------------------------|
| Llama-2-13b-chat-longlora-32k-sft | 13B  | 32768            | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-13b-chat-longlora-32k-sft) |
| Llama-2-70b-chat-longlora-32k-sft | 70B  | 32768            | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-70b-chat-longlora-32k-sft) |

### Models with context extension via full fine-tuning
| Model                       | Size | Context (tokens) | Train   | Link                                                              |
|:----------------------------|------|------------------|---------|-------------------------------------------------------------------|
| Llama-2-7b-longlora-8k-ft   | 7B   | 8192             | Full FT | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-8k-ft)   |
| Llama-2-7b-longlora-16k-ft  | 7B   | 16384            | Full FT | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-16k-ft)  |
| Llama-2-7b-longlora-32k-ft  | 7B   | 32768            | Full FT | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-32k-ft)  |
| Llama-2-7b-longlora-100k-ft | 7B   | 100000           | Full FT | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-100k-ft) |
| Llama-2-13b-longlora-8k-ft  | 13B  | 8192             | Full FT | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-8k-ft)  |
| Llama-2-13b-longlora-16k-ft | 13B  | 16384            | Full FT | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-16k-ft) |
| Llama-2-13b-longlora-32k-ft | 13B  | 32768            | Full FT | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-32k-ft) |

### Models with context extension via improved LoRA fine-tuning
| Model                         | Size | Context (tokens) | Train | Link                                                                |
|:------------------------------|------|------------------|-------|---------------------------------------------------------------------|
| Llama-2-7b-longlora-8k        | 7B   | 8192             | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-8k)        |
| Llama-2-7b-longlora-16k       | 7B   | 16384            | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-16k)       |
| Llama-2-7b-longlora-32k       | 7B   | 32768            | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-7b-longlora-32k)       |
| Llama-2-13b-longlora-8k       | 13B  | 8192             | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-8k)       |
| Llama-2-13b-longlora-16k      | 13B  | 16384            | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-16k)      |
| Llama-2-13b-longlora-32k      | 13B  | 32768            | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-32k)      |
| Llama-2-13b-longlora-64k      | 13B  | 65536            | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-13b-longlora-64k)      |
| Llama-2-70b-longlora-32k      | 70B  | 32768            | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-70b-longlora-32k)      |
| Llama-2-70b-chat-longlora-32k | 70B  | 32768            | LoRA+ | [link](https://huggingface.co/Yukang/Llama-2-70b-chat-longlora-32k) |

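Below is a minimal sketch of loading one of the released checkpoints above for inference with Hugging Face `transformers`. The chosen model, dtype, and generation settings are illustrative; long-prompt inference may additionally require Flash-Attention and enough GPU memory, and `device_map="auto"` assumes `accelerate` is installed.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Yukang/Llama-2-13b-chat-longlora-32k-sft"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # shards the model across available GPUs
)

# A long document would normally go where the ellipsis is.
prompt = "Below is a long document.\n...\nSummarize the document above."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```
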
## Citation
If you find this project useful in your research, please consider citing:

```
@article{longlora,
  title={LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models},
  author={Yukang Chen and Shengju Qian and Haotian Tang and Xin Lai and Zhijian Liu and Song Han and Jiaya Jia},
  journal={arXiv:2309.12307},
  year={2023}
}
```

## Acknowledgement
- This work is built upon [LLaMA2](https://ai.meta.com/llama) as the pre-trained model.
- This work uses [DeepSpeed](https://github.com/microsoft/DeepSpeed), [peft](https://github.com/huggingface/peft), and [Flash-Attention2](https://github.com/Dao-AILab/flash-attention) for acceleration.
- The perplexity evaluation code is modified from [Landmark Attention](https://github.com/epfml/landmark-attention).
- We use [LongChat](https://github.com/DachengLi1/LongChat) for the retrieval evaluation.