|
---
license: mit
language:
- en
---
|
|
|
# CLEX: Continuous Length Extrapolation for Large Language Models |
|
This repo stores the checkpoint of CLEX-LLaMA-2-7B-64K, a LLaMA-2-7B base model trained with CLEX on 64K-length sequences and evaluated on sequences of up to 256K tokens.
|
|
|
|
|
## Features and Highlights of CLEX |
|
![CLEX_diagram](https://github.com/DAMO-NLP-SG/CLEX/assets/18526640/063ffe34-0116-4759-92bf-e22fc7264cdf) |
|
|
|
- **Simple and Clear**: _MINIMAL_ code and architecture changes. Only one up-and-down projection layer is introduced; _NO_ recurrent memory caching or sparse attention is required.

- **Train Short, Test Long**: _NO_ performance drop on sequences _4x~8x longer_ than the training ones (see [here](https://github.com/DAMO-NLP-SG/CLEX#language-modelling)).

- **Continuous Length Extrapolation**: CLEX explicitly models the continuous dynamics of the context window size during length extrapolation (see the illustrative sketch after this list).
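
For intuition, below is a minimal, illustrative sketch of what an up-and-down projection block of this kind looks like. In CLEX such a block parameterizes the continuous dynamics applied to the RoPE frequencies; the hidden width, activation, and class name here are assumptions for illustration, not the exact released implementation.

```python
import torch
import torch.nn as nn


class UpDownProjection(nn.Module):
    """Minimal sketch of an up-and-down projection block (illustrative assumptions only)."""

    def __init__(self, dim: int, hidden_dim: int = 256):
        super().__init__()
        self.up_proj = nn.Linear(dim, hidden_dim)    # project up to a wider hidden space
        self.act = nn.SiLU()                          # nonlinearity (assumed)
        self.down_proj = nn.Linear(hidden_dim, dim)   # project back down to the input size

    def forward(self, freqs: torch.Tensor) -> torch.Tensor:
        return self.down_proj(self.act(self.up_proj(freqs)))
```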
|
|
|
If you have any questions, feel free to contact us (emails: guanzzh.chen@gmail.com, lixin4ever@gmail.com).
|
|
|
## Model Zoo |
|
<div align="center"> |
|
|
|
| Model Name | Model Type | Starting Point | Train Data | Train Length | Max Test Length | HF Repo |
|:-----|:-----|:-----------|:-----------|:-----------|:-----------|:------:|
| CLEX-LLaMA-2-7B-16K | base | LLaMA-2-7B | [RedPajama-Book](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | 16K | 64K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-7B-16K) |
| CLEX-LLaMA-2-7B-Chat-16K | chat | CLEX-7B-16K | [UltraChat](https://github.com/thunlp/UltraChat) | 16K | 64K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-7B-Chat-16K) |
| **CLEX-LLaMA-2-7B-64K** (this checkpoint) | base | LLaMA-2-7B | [RedPajama-Book](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) | 64K | 256K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-LLaMA-2-7B-64K) |
| CLEX-Phi-2-32K | base | Phi-2-2.7B | [LongCorpus-2.5B](https://huggingface.co/datasets/DAMO-NLP-SG/LongCorpus-2.5B) | 32K | 128K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-Phi-2-32K) |
| CLEX-Mixtral-8x7B-32K | base | Mixtral-8x7B-v0.1 | [LongCorpus-2.5B](https://huggingface.co/datasets/DAMO-NLP-SG/LongCorpus-2.5B) | 32K | >128K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-Mixtral-8x7B-32K) |
| CLEX-Mixtral-8x7B-Chat-32K | chat | CLEX-Mixtral-8x7B-32K | [UltraChat 200k](https://huggingface.co/datasets/HuggingFaceH4/ultrachat_200k) | 32K | >128K | [link](https://huggingface.co/DAMO-NLP-SG/CLEX-Mixtral-8x7B-Chat-32K) |
|
</div> |
|
|
|
|
|
## Usage |
|
|
|
|
|
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# trust_remote_code=True is required to load the custom CLEX modeling code shipped with this repo.
tokenizer = AutoTokenizer.from_pretrained("DAMO-NLP-SG/CLEX-LLaMA-2-7B-64K", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    "DAMO-NLP-SG/CLEX-LLaMA-2-7B-64K",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

inputs = tokenizer("What is CLEX?", return_tensors="pt")
sample = model.generate(**inputs, max_length=128)
print(tokenizer.decode(sample[0]))
```
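
For long inputs, you will typically want to shard the model across available GPUs. The snippet below is a minimal sketch, assuming an `accelerate` installation for `device_map="auto"`; the file name `long_report.txt` and the summarization prompt are placeholders for your own long context.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "DAMO-NLP-SG/CLEX-LLaMA-2-7B-64K"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",          # requires `accelerate`; shards the model across available GPUs
    trust_remote_code=True,
)

# Placeholder for your own long context (up to the 64K training length,
# or beyond it, up to the 256K tested length).
document = open("long_report.txt").read()
prompt = document + "\n\nSummarize the report above."

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```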
|
|
|
|
|
|
|
|
|
## Evaluation |
|
### Language Modelling |
|
Below are the evaluation perplexities (PPLs) of the base models trained with CLEX. Training and evaluation use a 2B-token subset of the [RedPajama-Book](https://github.com/togethercomputer/RedPajama-Data) corpus, split 99:1 into training and test sets.
|
|
|
|
|
|
|
| Model | Train Length | Eval. PPL (32K) | Eval. PPL (64K) | Eval. PPL (128K) | Eval. PPL (256K) |
| --------------- | ------------ | ---------- | ---------- | ----------- | ----------- |
| CLEX-LLaMA-2-7B | 64K | 5.99 | 5.89 | 6.04 | 5.98 |
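
For reference, a minimal sketch of how one might compute perplexity over such long sequences is shown below; the 64K window size, the non-overlapping chunking, and the helper name are assumptions for illustration, not the exact evaluation script used for the numbers above.

```python
import math
import torch


@torch.no_grad()
def perplexity(model, tokenizer, text: str, window: int = 65536) -> float:
    """Rough perplexity over non-overlapping windows of `window` tokens (illustrative only)."""
    ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
    nll, n_tokens = 0.0, 0
    for start in range(0, ids.size(1), window):
        chunk = ids[:, start:start + window]
        if chunk.size(1) < 2:                 # skip a trailing chunk with nothing to predict
            continue
        out = model(chunk, labels=chunk)      # HF causal LMs return the mean NLL as `loss`
        n = chunk.size(1) - 1                 # number of predicted tokens in this chunk
        nll += out.loss.item() * n
        n_tokens += n
    return math.exp(nll / n_tokens)
```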
|
|
|
|
|
|
|
|
|
|
|
|
|
## Citation |
|
If you find our project useful, please consider starring our repo and citing our paper:
|
```bibtex
@article{damonlpsg2023clex,
  author  = {Chen, Guanzheng and Li, Xin and Meng, Zaiqiao and Liang, Shangsong and Bing, Lidong},
  title   = {CLEX: Continuous Length Extrapolation for Large Language Models},
  year    = {2023},
  journal = {arXiv preprint arXiv:2310.16450},
  url     = {https://arxiv.org/abs/2310.16450}
}
```