---
license: mit
language:
- en
metrics:
- perplexity
---
# CLEX: Continuous Length Extrapolation for Large Language Models
This repo stores the checkpoint of CLEX-7B-Chat-16K.

## Features and Highlights of CLEX
![CLEX_diagram](https://github.com/DAMO-NLP-SG/CLEX/assets/18526640/063ffe34-0116-4759-92bf-e22fc7264cdf)

- **Simple and Clear**: _MINIMAL_ code and architecture changes. Only one up-and-down projection layer is introduced; _NO_ recurrent memory caching or sparse attention is required.
- **Train Short, Test Long**: _NO_ performance drop on sequences _4x~8x longer_ than the training ones (see [here](https://github.com/DAMO-NLP-SG/CLEX#language-modelling)).
- **Continuous Length Extrapolation**: Explicitly models the continuous dynamics of the context window size during length extrapolation.

More details about long-text modeling with CLEX can be found in the git [repo](https://github.com/DAMO-NLP-SG/CLEX).

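For rough intuition about the frequency-scaling view of length extrapolation, the toy sketch below shows uniform (Position Interpolation-style) scaling of RoPE inverse frequencies. This is only a reference point, not CLEX's method: CLEX instead learns how the scaling should evolve continuously with the target length. All names here are illustrative.

```python
def rope_inv_freq(dim, base=10000.0, scale=1.0):
    """RoPE inverse frequencies. Dividing by `scale` stretches positions so
    that sequences `scale`x longer than training map into the trained position
    range (uniform scaling; CLEX generalizes this to learned, continuous
    dynamics of the scaling factor)."""
    return [1.0 / (scale * base ** (2 * i / dim)) for i in range(dim // 2)]

base_freqs = rope_inv_freq(8)          # standard RoPE, training-length regime
stretched = rope_inv_freq(8, scale=4.0)  # uniformly stretched for 4x longer inputs

# Uniform scaling lowers every frequency by the same factor.
for s, b in zip(stretched, base_freqs):
    assert abs(s - b / 4.0) < 1e-12
```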
## Model Zoo
| Model Name | Model Type | Starting Point | Train Data | Train Length | Max Test Length |
|:-----|:-----|:-----------|:-----------|:-----------|:-----------|
| CLEX-7B-4K | base | LLaMA-2-7B | [Redpajama-Book](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | 4K | 16K |
| CLEX-7B-Chat-4K | chat | CLEX-7B-4K | [UltraChat](https://github.com/thunlp/UltraChat) | 4K | 16K |
| CLEX-7B-16K | base | LLaMA-2-7B | [Redpajama-Book](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | 16K | 64K |
| **CLEX-7B-Chat-16K** (this checkpoint) | chat | CLEX-7B-16K | [UltraChat](https://github.com/thunlp/UltraChat) | 16K | 64K |

## How to Use
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# trust_remote_code=True is needed for both the tokenizer and the model,
# since CLEX ships custom modeling code with the checkpoint.
tokenizer = AutoTokenizer.from_pretrained("DAMO-NLP-SG/CLEX-7B-Chat-16K", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("DAMO-NLP-SG/CLEX-7B-Chat-16K", torch_dtype=torch.bfloat16, trust_remote_code=True)
inputs = tokenizer("What is CLEX?", return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
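Since this chat checkpoint is evaluated on inputs up to 64K tokens (see the table above), it can be worth checking a prompt's token count before generation. A minimal, hypothetical helper, where the budget of 65,536 is inferred from the table; with the real tokenizer you would pass `lambda t: tokenizer(t).input_ids`:

```python
def fits_context(text, tokenize, max_tokens=65536):
    # True if `text` tokenizes to at most `max_tokens` tokens.
    return len(tokenize(text)) <= max_tokens

# Toy check with whitespace splitting standing in for the tokenizer:
assert fits_context("What is CLEX?", str.split, max_tokens=4)
assert not fits_context("a " * 100, str.split, max_tokens=16)
```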

## Citation
If you find our project useful, we hope you will star our repo and cite our paper as follows:
```
@article{damonlpsg2023clex,
  author = {Chen, Guanzheng and Li, Xin and Meng, Zaiqiao and Liang, Shangsong and Bing, Lidong},
  title = {CLEX: Continuous Length Extrapolation for Large Language Models},
  year = 2023,
  journal = {arXiv preprint arXiv:2310.16450},
  url = {https://arxiv.org/abs/2310.16450}
}
```