---
license: mit
language:
- en
pipeline_tag: text-generation
tags:
- quantization
- lora
---
# LoftQ Initialization

| [Paper](https://arxiv.org/abs/2310.08659) | [Code](https://github.com/yxli2123/LoftQ) | [PEFT Example](https://github.com/huggingface/peft/tree/main/examples/loftq_finetuning) |

LoftQ (LoRA-fine-tuning-aware Quantization) takes a full-precision pre-trained weight W and produces a quantized backbone Q plus LoRA adapters A and B, initialized so that Q + AB stays close to W.
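
Conceptually, the initialization alternates between quantizing the residual and refitting a low-rank approximation of what the quantizer lost. The NumPy sketch below illustrates the idea only; `fake_nf4_quantize` is a stand-in for the real bitsandbytes NF4 quantizer, not the actual implementation.

```python
import numpy as np

def fake_nf4_quantize(w):
    # Stand-in uniform 4-bit quantizer; the real backbone uses bitsandbytes NF4.
    scale = np.abs(w).max() / 7
    return np.clip(np.round(w / scale), -8, 7) * scale

def loftq_init(W, rank=64, steps=5):
    # Alternating optimization: keep Q + A @ B close to W.
    A = np.zeros((W.shape[0], rank))
    B = np.zeros((rank, W.shape[1]))
    for _ in range(steps):
        Q = fake_nf4_quantize(W - A @ B)   # quantize what the adapters miss
        U, S, Vt = np.linalg.svd(W - Q, full_matrices=False)
        A = U[:, :rank] * S[:rank]         # best rank-r approximation of W - Q
        B = Vt[:rank]
    return Q, A, B
```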

This model, `Llama-2-13b-hf-4bit-64rank`, is obtained from [LLAMA-2-13b](https://huggingface.co/meta-llama/Llama-2-13b-hf).
The backbone is stored at the root of `LoftQ/Llama-2-13b-hf-4bit-64rank`, and the LoRA adapters are stored in the `loftq_init` subfolder.

## Model Info
### Backbone
- Stored format: `torch.bfloat16`
- Size: ~26 GiB
- Loaded format: bitsandbytes NF4
- Size loaded on GPU: ~6.5 GiB

### LoRA adapters
- rank: 64
- lora_alpha: 16
- target_modules: ["down_proj", "up_proj", "q_proj", "k_proj", "v_proj", "o_proj", "gate_proj"]
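
For reference, the adapter hyperparameters above correspond to a `peft` `LoraConfig` roughly like the sketch below (a reconstruction from the listed values, not the shipped config file):

```python
from peft import LoraConfig

lora_config = LoraConfig(
    r=64,                  # LoRA rank
    lora_alpha=16,         # scaling factor
    target_modules=[
        "down_proj", "up_proj", "q_proj", "k_proj",
        "v_proj", "o_proj", "gate_proj",
    ],
    lora_dropout=0.0,      # assumed; not stated above
    task_type="CAUSAL_LM",
)
```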

## Usage

**Training** Here is an example of loading this model and preparing it for LoRA fine-tuning.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

MODEL_ID = "LoftQ/Llama-2-13b-hf-4bit-64rank"

base_model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # you may change this for different models
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,  # bfloat16 is recommended
        bnb_4bit_use_double_quant=False,
        bnb_4bit_quant_type="nf4",
    ),
)
peft_model = PeftModel.from_pretrained(
    base_model,
    MODEL_ID,
    subfolder="loftq_init",
    is_trainable=True,
)

# Do training with peft_model ...
```
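
From here, `peft_model` behaves like any other `transformers` model. A minimal training sketch with the `Trainer` API, assuming you supply a tokenized causal-LM dataset named `train_dataset`:

```python
from transformers import Trainer, TrainingArguments

trainer = Trainer(
    model=peft_model,
    args=TrainingArguments(
        output_dir="outputs",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=16,  # illustrative values; tune for your setup
        learning_rate=1e-4,
        bf16=True,
    ),
    train_dataset=train_dataset,  # hypothetical: any tokenized dataset works
)
trainer.train()  # only the LoRA adapters receive gradients
```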

## Experiment Results
We conducted supervised fine-tuning experiments on [GSM8K](https://huggingface.co/datasets/gsm8k)
and [WikiText-2](https://huggingface.co/datasets/wikitext/viewer/wikitext-2-raw-v1).

| Model           | Bits | Rank | LoRA Initial         | GSM8K (acc, ↑) | WikiText-2 (ppl, ↓) |
| --------------- | ---- | ---- | -------------------- | -------------- | ------------------- |
| LLAMA-2-13b     | 16   | 64   | Gaussian + 0         | 45.3           | 5.12                |
| LLAMA-2-13b     | 4    | 64   | Gaussian + 0 (QLoRA) | 39.9           | 5.22                |
| **LLAMA-2-13b** | 4    | 64   | LoftQ                | 45.0           | 5.16                |

**Inference** Here is example code for inference after the model has been fine-tuned on [GSM8K](https://huggingface.co/datasets/gsm8k).

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

MODEL_ID = "LoftQ/Llama-2-13b-hf-4bit-64rank"

base_model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.bfloat16,  # you may change this for different models
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,  # bfloat16 is recommended
        bnb_4bit_use_double_quant=False,
        bnb_4bit_quant_type="nf4",
    ),
)
peft_model = PeftModel.from_pretrained(
    base_model,
    MODEL_ID,
    subfolder="gsm8k",
    is_trainable=False,  # inference only; no gradients needed
)

# Do inference with peft_model ...
```
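
For example, a minimal generation loop might look like the following (the prompt is hypothetical, and we assume the tokenizer is shipped alongside the model repo):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
prompt = "Question: Natalia sold 48 clips in April and half as many in May. How many clips did she sell in total?\nAnswer:"
inputs = tokenizer(prompt, return_tensors="pt").to(peft_model.device)
outputs = peft_model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```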

See the full code in our [GitHub repo](https://github.com/yxli2123/LoftQ).

## Citation

```bibtex
@article{li2023loftq,
  title={{LoftQ}: {LoRA}-fine-tuning-aware quantization for large language models},
  author={Li, Yixiao and Yu, Yifan and Liang, Chen and He, Pengcheng and Karampatziakis, Nikos and Chen, Weizhu and Zhao, Tuo},
  journal={arXiv preprint arXiv:2310.08659},
  year={2023}
}
```