
LongQLoRA: Efficient and Effective Method to Extend Context Length of LLMs

Technical Report: LongQLoRA: Efficient and Effective Method to Extend Context Length of Large Language Models

Introduction

LongQLoRA is a memory-efficient and effective method for extending the context length of Large Language Models with fewer training GPUs. On a single 32GB V100 GPU, LongQLoRA can extend the context length of LLaMA2 7B and 13B from 4096 to 8192, and even to 12k. After only 1000 finetuning steps, LongQLoRA achieves competitive perplexity on the PG19 and Proof-pile datasets: our model outperforms LongLoRA and is very close to MPT-7B-8K.
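For illustration, a setup in this spirit can be expressed with Hugging Face transformers, peft, and bitsandbytes: the base model is loaded in 4-bit, its RoPE positions are scaled to the longer target context, and only low-rank adapters are trained. This is a minimal sketch under stated assumptions; the model id, LoRA rank, target modules, and RoPE scaling factor below are illustrative choices, not necessarily the exact LongQLoRA configuration.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization keeps the frozen base weights small enough for a single 32GB GPU.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",          # assumed base checkpoint
    quantization_config=bnb_config,
    # Extend positions from 4096 to 8192 via linear RoPE scaling (factor 2.0 is an assumption).
    rope_scaling={"type": "linear", "factor": 2.0},
    max_position_embeddings=8192,
    device_map="auto",
)

# Train only low-rank adapters on the attention projections; hyperparameters are illustrative.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```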

Evaluation perplexity on the PG19 validation set and the Proof-pile test set, with an evaluation context length of 8192:

| Model | PG19 | Proof-pile |
| --- | --- | --- |
| LLaMA2-7B | >1000 | >1000 |
| MPT-7B-8K | 7.98 | 2.67 |
| LongLoRA-LoRA-7B-8K | 8.20 | 2.78 |
| LongLoRA-Full-7B-8K | 7.93 | 2.73 |
| LongQLoRA-7B-8K | 7.96 | 2.73 |
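For reference, perplexity at a fixed evaluation context length can be computed with a simple windowed loop like the sketch below. The windowing strategy (non-overlapping 8192-token chunks) is an assumption of this sketch and may not match the exact protocol used to produce the numbers above.

```python
import math
import torch

def chunked_perplexity(model, tokenizer, text, ctx_len=8192, device="cuda"):
    """Perplexity over non-overlapping windows of ctx_len tokens (illustrative protocol)."""
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    nlls, n_tokens = [], 0
    for start in range(0, ids.size(0) - 1, ctx_len):
        window = ids[start:start + ctx_len].unsqueeze(0).to(device)
        with torch.no_grad():
            out = model(window, labels=window)
        # out.loss is the mean NLL over (window length - 1) predicted tokens.
        nlls.append(out.loss * (window.size(1) - 1))
        n_tokens += window.size(1) - 1
    return math.exp(torch.stack(nlls).sum().item() / n_tokens)
```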