Guanzheng committed af992ba (1 parent: 1dc5479): Update README.md
Files changed (1): README.md (+72, -0)
---
license: mit
---

# CLEX: Continuous Length Extrapolation for Large Language Models
This repo stores the checkpoint of CLEX-LLaMA-2-7B-64K.

## Features and Highlights of CLEX
![CLEX_diagram](https://github.com/DAMO-NLP-SG/CLEX/assets/18526640/063ffe34-0116-4759-92bf-e22fc7264cdf)

- **Simple and Clear**: _MINIMAL_ code and architecture changes. Only one up-and-down projection layer is introduced; _NO_ recurrent memory caching or sparse attention is required.
- **Train Short, Test Long**: _NO_ performance drop on sequences _4x~8x longer_ than the training ones (see [here](https://github.com/DAMO-NLP-SG/CLEX#language-modelling)).
- **Continuous Length Extrapolation**: Explicitly models the continuous dynamics of the context window size during length extrapolation (a conceptual sketch follows this list).
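
To give a rough intuition (this is an illustrative sketch under our own simplifying assumptions, not the released CLEX implementation; the class and argument names below are hypothetical): a single up-and-down projection MLP parameterizes how the RoPE frequency-scaling factors change as the context window grows, and integrating that dynamics up to a target length factor yields the scaling applied at inference.

```python
import torch
import torch.nn as nn

class ContinuousRoPEScalingSketch(nn.Module):
    """Toy illustration of continuous length extrapolation: an up-and-down
    projection MLP models the rate of change of the (log) RoPE frequency
    scaling factors as the context-length factor t grows beyond 1."""

    def __init__(self, half_dim: int = 64, expansion: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(half_dim, half_dim * expansion),  # up projection
            nn.SiLU(),
            nn.Linear(half_dim * expansion, half_dim),  # down projection
        )

    def forward(self, target_scale: float, steps: int = 16) -> torch.Tensor:
        half_dim = self.net[0].in_features
        log_scale = torch.zeros(half_dim)   # t = 1: no scaling (log 1 = 0)
        dt = (target_scale - 1.0) / steps
        for _ in range(steps):              # simple Euler integration over t
            log_scale = log_scale + dt * self.net(log_scale)
        return log_scale.exp()              # per-frequency scaling factors

# e.g. extrapolating to a window 4x longer than the training length
factors = ContinuousRoPEScalingSketch()(target_scale=4.0)
```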

If you have any questions, feel free to contact us. (Emails: guanzzh.chen@gmail.com, lixin4ever@gmail.com)

## Model Zoo
<div align="center">

| Model Name | Model Type | Starting Point | Train Data | Train Length | Max Test Length | HF Repo |
|:-----|:-----|:-----------|:-----------|:-----------|:-----------|:------:|
| CLEX-LLaMA-2-7B-16K | base | LLaMA-2-7B | [Redpajama-Book](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | 16K | 64K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-7B-16K) |
| CLEX-LLaMA-2-7B-Chat-16K | chat | CLEX-7B-16K | [UltraChat](https://github.com/thunlp/UltraChat) | 16K | 64K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-7B-Chat-16K) |
| CLEX-LLaMA-2-7B-64K | base | LLaMA-2-7B | [Redpajama-Book](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | 64K | 256K | Pending Upload |
| CLEX-Phi-2-7B-32K | base | Phi-2-2.7B | [LongCorpus-2.5B](https://huggingface.co/datasets/DAMO-NLP-SG/LongCorpus-2.5B) | 32K | 128K | Pending Upload |
| CLEX-Mixtral-8x7B-32K | base | Mixtral-8x7B-v0.1 | [LongCorpus-2.5B](https://huggingface.co/datasets/DAMO-NLP-SG/LongCorpus-2.5B) | 32K | >128K | Pending Upload |
| CLEX-Mixtral-8x7B-Chat-32K | chat | CLEX-Mixtral-8x7B-32K | [UltraChat 200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) | 32K | >128K | Pending Upload |
</div>

## Usage

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# CLEX ships custom modeling code, so trust_remote_code=True is required
tokenizer = AutoTokenizer.from_pretrained("DAMO-NLP-SG/CLEX-LLaMA-2-7B-64K", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("DAMO-NLP-SG/CLEX-LLaMA-2-7B-64K", torch_dtype=torch.bfloat16, trust_remote_code=True)

inputs = tokenizer("What is CLEX?", return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
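
Since this checkpoint targets a 64K-token context window, a natural follow-up is to run generation over a long input. A minimal sketch (the file name and prompt template below are placeholders, not part of the original README):

```python
# Hypothetical long-context example: reuses `tokenizer` and `model` from above.
long_text = open("long_document.txt").read()  # e.g. a document of tens of thousands of tokens
prompt = long_text + "\n\nQuestion: Summarize the document above.\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
# Decode only the newly generated tokens
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```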

## Evaluation
### Language Modelling
Here are the evaluation PPLs of the base models trained with CLEX. We train and evaluate on a 2B-token subset of the [RedPajama-Book](https://github.com/togethercomputer/RedPajama-Data) corpus, split 99:1 into training and test sets. (A generic sketch of the perplexity computation follows the table.)

| Model | Train Length | Eval. PPL (32K) | Eval. PPL (64K) | Eval. PPL (128K) | Eval. PPL (256K) |
| --------------- | ------------ | --------------- | --------------- | ---------------- | ---------------- |
| CLEX-LLaMA-2-7B | 64K | 5.99 | 5.89 | 6.04 | 5.98 |
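
As a rough illustration of how such long-sequence perplexity numbers are typically computed (a generic sketch, not the evaluation script behind the table above; the function name is hypothetical):

```python
import math
import torch

@torch.no_grad()
def sequence_ppl(model, input_ids: torch.Tensor) -> float:
    """Perplexity of one long test sequence, scored in a single forward pass.
    input_ids: shape (1, seq_len), e.g. a 64K-token chunk of the test split."""
    out = model(input_ids, labels=input_ids)  # HF causal LMs return mean cross-entropy as .loss
    return math.exp(out.loss.item())
```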

## Citation
If you find our project useful, we hope you can star our repo and cite our paper as follows:
```bibtex
@article{damonlpsg2023clex,
  author = {Chen, Guanzheng and Li, Xin and Meng, Zaiqiao and Liang, Shangsong and Bing, Lidong},
  title = {CLEX: Continuous Length Extrapolation for Large Language Models},
  year = 2023,
  journal = {arXiv preprint arXiv:2310.16450},
  url = {https://arxiv.org/abs/2310.16450}
}
```