|
--- |
|
library_name: peft |
|
base_model: meta-llama/Llama-2-7b-hf |
|
--- |
|
|
|
# LoftQ Initialization |
|
|
|
| [Paper](https://arxiv.org/abs/2310.08659) | [Code](https://github.com/yxli2123/LoftQ) | [PEFT Example](https://github.com/huggingface/peft/tree/main/examples/loftq_finetuning) | |
|
|
|
LoftQ (LoRA-fine-tuning-aware Quantization) provides a quantized backbone Q and LoRA adapters A and B, given a full-precision pre-trained weight W. Q, A, and B are initialized jointly so that Q + AB approximates W, which narrows the gap between quantized and full-precision fine-tuning.
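
At its core, LoftQ alternates between quantizing the residual and fitting a low-rank correction to the quantization error. The following toy NumPy sketch is not the official implementation; the uniform quantizer below stands in for the real k-bit quantizer (e.g., NF4), but it illustrates the alternating scheme:

```python
import numpy as np

def quantize(M, levels=16):
    # Stand-in uniform quantizer; real LoftQ uses k-bit (e.g., NF4) quantization.
    scale = np.abs(M).max() / (levels // 2)
    return np.round(M / scale) * scale

def loftq_init(W, rank, num_iters=5):
    # Alternate between quantizing the residual and fitting a rank-`rank`
    # correction to the quantization error, so that Q + A @ B approximates W.
    A = np.zeros((W.shape[0], rank))
    B = np.zeros((rank, W.shape[1]))
    for _ in range(num_iters):
        Q = quantize(W - A @ B)           # quantize the current residual
        U, S, Vt = np.linalg.svd(W - Q)   # SVD of the quantization error
        A = U[:, :rank] * S[:rank]        # absorb singular values into A
        B = Vt[:rank]
    return Q, A, B

W = np.random.randn(128, 64)              # a stand-in "pre-trained" weight
Q, A, B = loftq_init(W, rank=8)
print(np.linalg.norm(W - (Q + A @ B)) / np.linalg.norm(W))  # small relative error
```

After a few iterations, `Q + A @ B` tracks `W` more closely than one-shot quantization of `W`, which is the property LoftQ exploits when initializing the adapters.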
|
|
|
This model, `LoftQ/Llama-2-7b-hf-fp16-64rank-gsm8k`, is LoRA fine-tuned from [LLAMA-2-7b](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the [GSM8K](https://huggingface.co/datasets/gsm8k) dataset. As the repository name indicates, it keeps a full-precision (fp16) backbone with rank-64 LoRA adapters.
|
|
|
## Model Info |
|
### LoRA adapters |
|
- rank: 64 |
|
- lora_alpha: 16 |
|
- target_modules: ["down_proj", "up_proj", "q_proj", "k_proj", "v_proj", "o_proj", "gate_proj"] |
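
These settings can be reproduced with PEFT's built-in LoftQ initialization. Below is a minimal sketch following the [PEFT example](https://github.com/huggingface/peft/tree/main/examples/loftq_finetuning) linked above; note that `LoftQConfig(loftq_bits=4)` is illustrative here, since this particular repository keeps an fp16 backbone:

```python
from transformers import AutoModelForCausalLM
from peft import LoftQConfig, LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

loftq_config = LoftQConfig(loftq_bits=4)  # quantization settings for LoftQ init
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    target_modules=["down_proj", "up_proj", "q_proj", "k_proj", "v_proj", "o_proj", "gate_proj"],
    init_lora_weights="loftq",            # use LoftQ instead of random init
    loftq_config=loftq_config,
)
model = get_peft_model(base_model, lora_config)
```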
|
|
|
## Usage |
|
|
|
**Inference** Here is example code for running inference after the model has been fine-tuned on [GSM8K](https://huggingface.co/datasets/gsm8k).
|
|
|
```python
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

MODEL_ID = "LoftQ/Llama-2-7b-hf-fp16-64rank-gsm8k"

# Load the full-precision backbone.
base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-2-7b-hf",
    torch_dtype=torch.bfloat16,  # you may change it with different models
    token=YOUR_HF_TOKEN,  # your Hugging Face access token (Llama 2 is gated)
)

# Attach the LoRA adapters fine-tuned on GSM8K.
model = PeftModel.from_pretrained(base_model, MODEL_ID)

# You can also merge the LoRA adapters into the backbone if you like.
model = model.merge_and_unload()

# Do inference with `model` ...
```
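
The inference step elided above can be filled in with a standard `generate` call. Here is a hypothetical prompt-and-generate snippet (the question text is made up; the exact prompt template used for evaluation is in the script linked below):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

question = "Janet has 3 apples and buys 2 more. How many apples does she have?"
inputs = tokenizer(question, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```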
|
|
|
See the full GSM8K evaluation script on [GitHub](https://github.com/yxli2123/LoftQ/blob/main/test_gsm8k.py).
|
|
|
## Experiment Results |
|
We have conducted supervised fine-tuning experiments on [GSM8K](https://huggingface.co/datasets/gsm8k)
and [WikiText-2](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1).
The table below reports GSM8K test-set accuracy (%).
|
|
|
| Model | Bits | Rank | LoRA Initialization | GSM8K (acc. %) |
|
| -------------- | ---- | ---- | -------------------- | ----- | |
|
| **LLAMA-2-7b** | 16 | 64 | Gaussian + 0 | 36.9 | |
|
| LLAMA-2-7b | 4 | 64 | Gaussian + 0 (QLoRA) | 35.1 | |
|
| LLAMA-2-7b | 4 | 64 | LoftQ | 35.0 | |
|
|
|
|
|
## Citation |
|
|
|
```bibtex |
|
@article{li2023loftq, |
|
  title={{LoftQ}: {LoRA}-fine-tuning-aware quantization for large language models},
|
author={Li, Yixiao and Yu, Yifan and Liang, Chen and He, Pengcheng and Karampatziakis, Nikos and Chen, Weizhu and Zhao, Tuo}, |
|
journal={arXiv preprint arXiv:2310.08659}, |
|
year={2023} |
|
} |
|
``` |