5c4b7bca883ff70dd2c0471b328f95050e328e9ac75b3a3d139d61d93285f7c2
- peft_md_files/developer_guides/low_level_api.md +97 -0
- peft_md_files/developer_guides/mixed_models.md +37 -0
- peft_md_files/developer_guides/model_merging.md +157 -0
- peft_md_files/developer_guides/quantization.md +200 -0
- peft_md_files/developer_guides/torch_compile.md +76 -0
- peft_md_files/developer_guides/troubleshooting.md +273 -0
- peft_md_files/index.md +49 -0
- peft_md_files/install.md +47 -0
- peft_md_files/package_reference/adalora.md +31 -0
- peft_md_files/package_reference/adapter_utils.md +31 -0
- peft_md_files/package_reference/auto_class.md +48 -0
- peft_md_files/package_reference/boft.md +31 -0
- peft_md_files/package_reference/config.md +22 -0
- peft_md_files/package_reference/fourierft.md +38 -0
- peft_md_files/package_reference/helpers.md +12 -0
- peft_md_files/package_reference/ia3.md +31 -0
- peft_md_files/package_reference/layernorm_tuning.md +34 -0
- peft_md_files/package_reference/llama_adapter.md +31 -0
- peft_md_files/package_reference/loha.md +31 -0
- peft_md_files/package_reference/lokr.md +27 -0
- peft_md_files/package_reference/lora.md +35 -0
- peft_md_files/package_reference/merge_utils.md +33 -0
- peft_md_files/package_reference/multitask_prompt_tuning.md +31 -0
- peft_md_files/package_reference/oft.md +31 -0
- peft_md_files/package_reference/p_tuning.md +31 -0
- peft_md_files/package_reference/peft_model.md +77 -0
- peft_md_files/package_reference/peft_types.md +27 -0
- peft_md_files/package_reference/poly.md +44 -0
- peft_md_files/package_reference/prefix_tuning.md +31 -0
- peft_md_files/package_reference/prompt_tuning.md +31 -0
- peft_md_files/package_reference/tuners.md +27 -0
- peft_md_files/package_reference/vera.md +42 -0
- peft_md_files/quicktour.md +170 -0
- peft_md_files/task_guides/ia3.md +239 -0
- peft_md_files/task_guides/lora_based_methods.md +348 -0
- peft_md_files/task_guides/prompt_based_methods.md +305 -0
- peft_md_files/tutorial/peft_integrations.md +152 -0
- peft_md_files/tutorial/peft_model_config.md +182 -0
peft_md_files/developer_guides/low_level_api.md
ADDED
@@ -0,0 +1,97 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Adapter injection

With PEFT, you can inject trainable adapters into any `torch` module, which allows you to use adapter methods without relying on the modeling classes in PEFT. Currently, PEFT supports injecting [LoRA](../conceptual_guides/adapter#low-rank-adaptation-lora), [AdaLoRA](../conceptual_guides/adapter#adaptive-low-rank-adaptation-adalora), and [IA3](../conceptual_guides/ia3) into models, because for these adapters an inplace modification of the model is sufficient for finetuning it.

Check the table below to see when you should inject adapters.

| Pros | Cons |
|---|---|
| the model is modified inplace, keeping all the original attributes and methods | you have to manually write Hugging Face's `from_pretrained` and `save_pretrained` utility functions to save and load adapters |
| works for any `torch` module and modality | doesn't work with any of the utility methods provided by `PeftModel` such as disabling and merging adapters |

To perform the adapter injection, use the [`inject_adapter_in_model`] method. This method takes 3 arguments: the PEFT config, the model, and an optional adapter name. You can also attach multiple adapters to the model by calling [`inject_adapter_in_model`] multiple times with different adapter names.

For example, to inject LoRA adapters into the `linear` submodule of the `DummyModel` module:

```python
import torch
from peft import inject_adapter_in_model, LoraConfig


class DummyModel(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.embedding = torch.nn.Embedding(10, 10)
        self.linear = torch.nn.Linear(10, 10)
        self.lm_head = torch.nn.Linear(10, 10)

    def forward(self, input_ids):
        x = self.embedding(input_ids)
        x = self.linear(x)
        x = self.lm_head(x)
        return x


lora_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    target_modules=["linear"],
)

model = DummyModel()
model = inject_adapter_in_model(lora_config, model)

dummy_inputs = torch.LongTensor([[0, 1, 2, 3, 4, 5, 6, 7]])
dummy_outputs = model(dummy_inputs)
```

Print the model to see that the adapters have been correctly injected.

```bash
DummyModel(
  (embedding): Embedding(10, 10)
  (linear): Linear(
    in_features=10, out_features=10, bias=True
    (lora_dropout): ModuleDict(
      (default): Dropout(p=0.1, inplace=False)
    )
    (lora_A): ModuleDict(
      (default): Linear(in_features=10, out_features=64, bias=False)
    )
    (lora_B): ModuleDict(
      (default): Linear(in_features=64, out_features=10, bias=False)
    )
    (lora_embedding_A): ParameterDict()
    (lora_embedding_B): ParameterDict()
  )
  (lm_head): Linear(in_features=10, out_features=10, bias=True)
)
```

To only save the adapter, use the [`get_peft_model_state_dict`] function:

```python
from peft import get_peft_model_state_dict

peft_state_dict = get_peft_model_state_dict(model)
print(peft_state_dict)
```

Otherwise, `model.state_dict()` returns the full state dict of the model.
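
If you later want to load these weights back into a freshly injected model, [`set_peft_model_state_dict`] is the counterpart to [`get_peft_model_state_dict`]. A minimal sketch continuing the example above (the file name is arbitrary):

```python
import torch
from peft import set_peft_model_state_dict

# persist only the adapter weights
torch.save(peft_state_dict, "dummy_adapter.pt")

# later: rebuild the model, re-inject the adapter, then restore the adapter weights
new_model = DummyModel()
new_model = inject_adapter_in_model(lora_config, new_model)
set_peft_model_state_dict(new_model, torch.load("dummy_adapter.pt"))
```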
peft_md_files/developer_guides/mixed_models.md
ADDED
@@ -0,0 +1,37 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
-->

# Mixed adapter types

Normally, it isn't possible to mix different adapter types in 🤗 PEFT. You can create a PEFT model with two different LoRA adapters (which can have different config options), but it is not possible to combine a LoRA and LoHa adapter. With [`PeftMixedModel`] however, this works as long as the adapter types are compatible. The main purpose of allowing mixed adapter types is to combine trained adapters for inference. While it is possible to train a mixed adapter model, this has not been tested and is not recommended.

To load different adapter types into a PEFT model, use [`PeftMixedModel`] instead of [`PeftModel`]:

```py
from peft import PeftMixedModel

base_model = ...  # load the base model, e.g. from transformers
# load first adapter, which will be called "default"
peft_model = PeftMixedModel.from_pretrained(base_model, <path_to_adapter1>)
peft_model.load_adapter(<path_to_adapter2>, adapter_name="other")
peft_model.set_adapter(["default", "other"])
```

The [`~PeftMixedModel.set_adapter`] method is necessary to activate both adapters, otherwise only the first adapter would be active. You can keep adding more adapters by calling [`~PeftModel.add_adapter`] repeatedly.

[`PeftMixedModel`] does not support saving and loading mixed adapters. The adapters should already be trained, and loading the model requires a script to be run each time.

## Tips

- Not all adapter types can be combined. See [`peft.tuners.mixed.COMPATIBLE_TUNER_TYPES`](https://github.com/huggingface/peft/blob/1c1c7fdaa6e6abaa53939b865dee1eded82ad032/src/peft/tuners/mixed/model.py#L35) for a list of compatible types. An error will be raised if you try to combine incompatible adapter types.
- It is possible to mix multiple adapters of the same type, which can be useful for combining adapters with very different configs.
- If you want to combine a lot of different adapters, the most performant way to do it is to consecutively add the same adapter types. For example, add LoRA1, LoRA2, LoHa1, LoHa2 in this order, instead of LoRA1, LoHa1, LoRA2, and LoHa2. While the order can affect the output, there is no inherently *best* order, so it is best to choose the fastest one (see the sketch below).
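
A minimal sketch of the last tip, with hypothetical adapter paths and names:

```py
from peft import PeftMixedModel

base_model = ...  # load the base model, e.g. from transformers
# hypothetical checkpoints; adapters of the same type are added consecutively
peft_model = PeftMixedModel.from_pretrained(base_model, "path/to/lora_1", adapter_name="lora_1")
peft_model.load_adapter("path/to/lora_2", adapter_name="lora_2")
peft_model.load_adapter("path/to/loha_1", adapter_name="loha_1")
peft_model.load_adapter("path/to/loha_2", adapter_name="loha_2")
peft_model.set_adapter(["lora_1", "lora_2", "loha_1", "loha_2"])
```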
peft_md_files/developer_guides/model_merging.md
ADDED
@@ -0,0 +1,157 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Model merging

Training a model for each task can be costly, take up storage space, and the models aren't able to learn new information to improve their performance. Multitask learning can overcome some of these limitations by training a model to learn several tasks, but it is expensive to train and designing a dataset for it is challenging. *Model merging* offers a solution to these challenges by combining multiple pretrained models into one model, giving it the combined abilities of each individual model without any additional training.

PEFT provides several methods for merging models like a linear or SVD combination. This guide focuses on two methods that are more efficient for merging LoRA adapters by eliminating redundant parameters:

* [TIES](https://hf.co/papers/2306.01708) - TrIm, Elect, and Merge (TIES) is a three-step method for merging models. First, redundant parameters are trimmed, then conflicting signs are resolved into an aggregated vector, and finally the parameters whose signs are the same as the aggregate sign are averaged. This method takes into account that some values (redundant and sign disagreement) can degrade performance in the merged model.
* [DARE](https://hf.co/papers/2311.03099) - Drop And REscale is a method that can be used to prepare for other model merging methods like TIES. It works by randomly dropping parameters according to a drop rate and rescaling the remaining parameters. This helps to reduce the number of redundant and potentially interfering parameters among multiple models.

Models are merged with the [`~LoraModel.add_weighted_adapter`] method, and the specific model merging method is specified in the `combination_type` parameter.

## Merge method

With TIES and DARE, merging is enabled by setting `combination_type` and `density`, which determines the fraction of weights to keep from the individual models. For example, let's merge three finetuned [TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T](https://huggingface.co/TinyLlama/TinyLlama-1.1B-intermediate-step-1431k-3T) models: [tinyllama_lora_norobots](https://huggingface.co/smangrul/tinyllama_lora_norobots), [tinyllama_lora_sql](https://huggingface.co/smangrul/tinyllama_lora_sql), and [tinyllama_lora_adcopy](https://huggingface.co/smangrul/tinyllama_lora_adcopy).

<Tip warning={true}>

When you're attempting to merge fully trained models with TIES, you should be aware of any special tokens each model may have added to the embedding layer which are not a part of the original checkpoint's vocabulary. This may cause an issue because each model may have added a special token to the same embedding position. If this is the case, you should use the [`~transformers.PreTrainedModel.resize_token_embeddings`] method to avoid merging the special tokens at the same embedding index.

<br>

This shouldn't be an issue if you're only merging LoRA adapters trained from the same base model.

</Tip>

Load a base model and use the [`~PeftModel.load_adapter`] method to load and assign each adapter a name:

```py
from peft import PeftConfig, PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

config = PeftConfig.from_pretrained("smangrul/tinyllama_lora_norobots")
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, load_in_4bit=True, device_map="auto").eval()
tokenizer = AutoTokenizer.from_pretrained("smangrul/tinyllama_lora_norobots")

model = PeftModel.from_pretrained(model, "smangrul/tinyllama_lora_norobots", adapter_name="norobots")
_ = model.load_adapter("smangrul/tinyllama_lora_sql", adapter_name="sql")
_ = model.load_adapter("smangrul/tinyllama_lora_adcopy", adapter_name="adcopy")
```

Set the adapters, weights, `adapter_name`, `combination_type`, and `density` with the [`~LoraModel.add_weighted_adapter`] method.

<hfoptions id="merge-method">
<hfoption id="TIES">

Weight values greater than `1.0` typically produce better results because they preserve the correct scale. A good default starting value for the weights is to set all values to `1.0`.

```py
adapters = ["norobots", "adcopy", "sql"]
weights = [2.0, 1.0, 1.0]
adapter_name = "merge"
density = 0.2
model.add_weighted_adapter(adapters, weights, adapter_name, combination_type="ties", density=density)
```

</hfoption>
<hfoption id="DARE">

```py
adapters = ["norobots", "adcopy", "sql"]
weights = [2.0, 0.3, 0.7]
adapter_name = "merge"
density = 0.2
model.add_weighted_adapter(adapters, weights, adapter_name, combination_type="dare_ties", density=density)
```

</hfoption>
</hfoptions>

Set the newly merged model as the active model with the [`~LoraModel.set_adapter`] method.

```py
model.set_adapter("merge")
```

Now you can use the merged model as an instruction-tuned model to write ad copy or SQL queries!

<hfoptions id="ties">
<hfoption id="instruct">

```py
messages = [
    {"role": "user", "content": "Write an essay about Generative AI."},
]
text = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(text, return_tensors="pt")
inputs = {k: v.to("cuda") for k, v in inputs.items()}
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, top_p=0.95, temperature=0.2, repetition_penalty=1.2, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0]))
```

</hfoption>
<hfoption id="ad copy">

```py
messages = [
    {"role": "system", "content": "Create a text ad given the following product and description."},
    {"role": "user", "content": "Product: Sony PS5 PlayStation Console\nDescription: The PS5 console unleashes new gaming possibilities that you never anticipated."},
]
text = tokenizer.apply_chat_template(messages, add_generation_prompt=True, tokenize=False)
inputs = tokenizer(text, return_tensors="pt")
inputs = {k: v.to("cuda") for k, v in inputs.items()}
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.95, temperature=0.2, repetition_penalty=1.2, eos_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(outputs[0]))
```

</hfoption>
<hfoption id="SQL">

```py
text = """Table: 2-11365528-2
Columns: ['Team', 'Head Coach', 'President', 'Home Ground', 'Location']
Natural Query: Who is the Head Coach of the team whose President is Mario Volarevic?
SQL Query:"""

inputs = tokenizer(text, return_tensors="pt")
inputs = {k: v.to("cuda") for k, v in inputs.items()}
outputs = model.generate(**inputs, max_new_tokens=64, repetition_penalty=1.1, eos_token_id=tokenizer("</s>").input_ids[-1])
print(tokenizer.decode(outputs[0]))
```

</hfoption>
</hfoptions>


## Merging (IA)³ Models

The (IA)³ models facilitate linear merging of adapters. To merge adapters in an (IA)³ model, utilize the `add_weighted_adapter` method from the `IA3Model` class. This method is analogous to the `add_weighted_adapter` method used in `LoraModel`, with the key difference being the absence of the `combination_type` parameter. For example, to merge three (IA)³ adapters into a PEFT model, you would proceed as follows:

```py
adapters = ["adapter1", "adapter2", "adapter3"]
weights = [0.4, 0.3, 0.3]
adapter_name = "merge"
model.add_weighted_adapter(adapters, weights, adapter_name)
```

It is recommended that the weights sum to 1.0 to preserve the scale of the model. The merged model can then be set as the active model using the `set_adapter` method:

```py
model.set_adapter("merge")
```
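
At this point the merged adapter only exists in memory. If you want to reuse it later, one option is to save the PEFT model; this is a minimal sketch with an arbitrary output path, not an official recipe from this guide. In recent PEFT versions, non-default adapters such as `"merge"` are written to their own subfolders.

```py
# persist all adapters, including the newly created "merge" adapter
model.save_pretrained("merged-adapters")

# later, the merged adapter can be loaded like any other adapter, e.g.:
# model = PeftModel.from_pretrained(base_model, "merged-adapters/merge")
```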
peft_md_files/developer_guides/quantization.md
ADDED
@@ -0,0 +1,200 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Quantization

Quantization represents data with fewer bits, making it a useful technique for reducing memory usage and accelerating inference, especially when it comes to large language models (LLMs). There are several ways to quantize a model, including:

* optimizing which model weights are quantized with the [AWQ](https://hf.co/papers/2306.00978) algorithm
* independently quantizing each row of a weight matrix with the [GPTQ](https://hf.co/papers/2210.17323) algorithm
* quantizing to 8-bit and 4-bit precision with the [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) library
* quantizing to as low as 2-bit precision with the [AQLM](https://arxiv.org/abs/2401.06118) algorithm

However, after a model is quantized it isn't typically further trained for downstream tasks because training can be unstable due to the lower precision of the weights and activations. But since PEFT methods only add *extra* trainable parameters, this allows you to train a quantized model with a PEFT adapter on top! Combining quantization with PEFT can be a good strategy for training even the largest models on a single GPU. For example, [QLoRA](https://hf.co/papers/2305.14314) is a method that quantizes a model to 4-bits and then trains it with LoRA. This method allows you to finetune a 65B parameter model on a single 48GB GPU!

In this guide, you'll see how to quantize a model to 4-bits and train it with LoRA.

## Quantize a model

[bitsandbytes](https://github.com/TimDettmers/bitsandbytes) is a quantization library with a Transformers integration. With this integration, you can quantize a model to 8 or 4-bits and enable many other options by configuring the [`~transformers.BitsAndBytesConfig`] class. For example, you can:

* set `load_in_4bit=True` to quantize the model to 4-bits when you load it
* set `bnb_4bit_quant_type="nf4"` to use a special 4-bit data type for weights initialized from a normal distribution
* set `bnb_4bit_use_double_quant=True` to use a nested quantization scheme to quantize the already quantized weights
* set `bnb_4bit_compute_dtype=torch.bfloat16` to use bfloat16 for faster computation

```py
import torch
from transformers import BitsAndBytesConfig

config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
```

Pass the `config` to the [`~transformers.AutoModelForCausalLM.from_pretrained`] method.

```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", quantization_config=config)
```

Next, you should call the [`~peft.utils.prepare_model_for_kbit_training`] function to preprocess the quantized model for training.

```py
from peft import prepare_model_for_kbit_training

model = prepare_model_for_kbit_training(model)
```

Now that the quantized model is ready, let's set up a configuration.

## LoraConfig

Create a [`LoraConfig`] with the following parameters (or choose your own):

```py
from peft import LoraConfig

config = LoraConfig(
    r=16,
    lora_alpha=8,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)
```

Then use the [`get_peft_model`] function to create a [`PeftModel`] from the quantized model and configuration.

```py
from peft import get_peft_model

model = get_peft_model(model, config)
```

You're all set for training with whichever training method you prefer!

### LoftQ initialization

[LoftQ](https://hf.co/papers/2310.08659) initializes LoRA weights such that the quantization error is minimized, and it can improve performance when training quantized models. To get started, follow [these instructions](https://github.com/huggingface/peft/tree/main/examples/loftq_finetuning).

In general, for LoftQ to work best, it is recommended to target as many layers with LoRA as possible, since those not targeted cannot have LoftQ applied. This means that passing `LoraConfig(..., target_modules="all-linear")` will most likely give the best results. Also, you should use `nf4` as the quant type in your quantization config when using 4-bit quantization, i.e. `BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")`.
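
As a rough sketch of the configuration side only (the full workflow, including how the base model should be loaded, is covered by the linked instructions), LoftQ is selected through `init_lora_weights="loftq"` together with a [`LoftQConfig`]:

```py
from peft import LoftQConfig, LoraConfig, get_peft_model

loftq_config = LoftQConfig(loftq_bits=4)  # minimize the quantization error for 4-bit weights
lora_config = LoraConfig(
    init_lora_weights="loftq",
    loftq_config=loftq_config,
    target_modules="all-linear",  # target as many linear layers as possible
    task_type="CAUSAL_LM",
)
# assuming `base_model` is a causal LM loaded as described in the linked example:
# model = get_peft_model(base_model, lora_config)
```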
### QLoRA-style training

QLoRA adds trainable weights to all the linear layers in the transformer architecture. Since the attribute names for these linear layers can vary across architectures, set `target_modules` to `"all-linear"` to add LoRA to all the linear layers:

```py
config = LoraConfig(target_modules="all-linear", ...)
```

## AQLM quantization

Additive Quantization of Language Models ([AQLM](https://arxiv.org/abs/2401.06118)) is a compression method for large language models. It quantizes multiple weights together and takes advantage of interdependencies between them. AQLM represents groups of 8-16 weights as a sum of multiple vector codes. This allows it to compress models down to as low as 2-bit precision with considerably low accuracy loss.

Since the AQLM quantization process is computationally expensive, using prequantized models is recommended. A partial list of available models can be found in the official aqlm [repository](https://github.com/Vahe1994/AQLM).

The models support LoRA adapter tuning. To tune the quantized model, you'll need to install the `aqlm` inference library: `pip install aqlm>=1.0.2`. Finetuned LoRA adapters must be saved separately, as merging them with AQLM quantized weights is not possible.

```py
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

quantized_model = AutoModelForCausalLM.from_pretrained(
    "BlackSamorez/Mixtral-8x7b-AQLM-2Bit-1x16-hf-test-dispatch",
    torch_dtype="auto", device_map="auto", low_cpu_mem_usage=True,
)

peft_config = LoraConfig(...)

quantized_model = get_peft_model(quantized_model, peft_config)
```

You can refer to the [Google Colab](https://colab.research.google.com/drive/12GTp1FCj5_0SnnNQH18h_2XFh9vS_guX?usp=sharing) example for an overview of AQLM+LoRA finetuning.

## EETQ quantization

You can also perform LoRA fine-tuning on EETQ quantized models. The [EETQ](https://github.com/NetEase-FuXi/EETQ) package offers a simple and efficient way to perform 8-bit quantization, which is claimed to be faster than the `LLM.int8()` algorithm. First, make sure that you have a Transformers version that is compatible with EETQ (e.g. by installing it from the latest PyPI release or from source).

```py
import torch
from transformers import EetqConfig

config = EetqConfig("int8")
```

Pass the `config` to the [`~transformers.AutoModelForCausalLM.from_pretrained`] method.

```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1", quantization_config=config)
```

and create a `LoraConfig` and pass it to `get_peft_model`:

```py
from peft import LoraConfig, get_peft_model

config = LoraConfig(
    r=16,
    lora_alpha=8,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM"
)

model = get_peft_model(model, config)
```

## HQQ quantization

Models quantized with Half-Quadratic Quantization of Large Machine Learning Models ([HQQ](https://mobiusml.github.io/hqq_blog/)) support LoRA adapter tuning. To tune the quantized model, you'll need to install the `hqq` library with: `pip install hqq`.

```py
from hqq.engine.hf import HQQModelForCausalLM
from peft import LoraConfig, get_peft_model

quantized_model = HQQModelForCausalLM.from_quantized(save_dir_or_hfhub, device='cuda')

peft_config = LoraConfig(...)

quantized_model = get_peft_model(quantized_model, peft_config)
```

Alternatively, you can use a Transformers version that is compatible with HQQ (e.g. by installing it from the latest PyPI release or from source).

```python
from transformers import HqqConfig, AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

quant_config = HqqConfig(nbits=4, group_size=64)

quantized_model = AutoModelForCausalLM.from_pretrained(save_dir_or_hfhub, device_map="cuda", quantization_config=quant_config)

peft_config = LoraConfig(...)

quantized_model = get_peft_model(quantized_model, peft_config)
```

## Next steps

If you're interested in learning more about quantization, the following may be helpful:

* Learn more about QLoRA and check out some benchmarks on its impact in the [Making LLMs even more accessible with bitsandbytes, 4-bit quantization and QLoRA](https://huggingface.co/blog/4bit-transformers-bitsandbytes) blog post.
* Read more about different quantization schemes in the Transformers [Quantization](https://hf.co/docs/transformers/main/quantization) guide.
peft_md_files/developer_guides/torch_compile.md
ADDED
@@ -0,0 +1,76 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# torch.compile

In PEFT, [torch.compile](https://pytorch.org/tutorials/intermediate/torch_compile_tutorial.html) works for some but not all features. It won't always work because PEFT is highly dynamic in certain places (loading and switching between multiple adapters, for instance), which can cause trouble for `torch.compile`. In other places, `torch.compile` may work, but won't be as fast as expected because of graph breaks.

If you don't see an error, it doesn't necessarily mean that `torch.compile` worked correctly. It might give you an output, but the output is incorrect. This guide describes what works with `torch.compile` and what doesn't.

> [!TIP]
> Unless indicated otherwise, the default `torch.compile` settings were used.

## Training and inference with `torch.compile`

These features **work** with `torch.compile`. Everything listed below was tested with a causal LM:

- Training with `Trainer` from 🤗 transformers
- Training with a custom PyTorch loop
- Inference
- Generation

The following adapters were tested successfully:

- AdaLoRA
- BOFT
- IA³
- Layer Norm Tuning
- LoHa
- LoRA
- LoRA + DoRA
- OFT
- VeRA
- HRA

The following adapters **don't work** correctly for training or inference when using `torch.compile`:

- LoKr
- LoRA targeting embedding layers
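
For the features that do work, the basic pattern is simply to create the PEFT model first and compile it afterwards. Below is a minimal sketch (the model id and prompt are only illustrative choices, not taken from the PEFT test suite):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_id = "facebook/opt-125m"  # illustrative small causal LM
tokenizer = AutoTokenizer.from_pretrained(model_id)
base_model = AutoModelForCausalLM.from_pretrained(model_id)

# wrap the base model with a LoRA adapter, then compile with the default settings
model = get_peft_model(base_model, LoraConfig(task_type="CAUSAL_LM"))
model = torch.compile(model)

inputs = tokenizer("torch.compile works with LoRA", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)  # runs the compiled forward pass
print(outputs.logits.shape)
```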
## Advanced PEFT features with `torch.compile`

Below are some of the more advanced PEFT features that **work**. They were all tested with LoRA.

- `modules_to_save` (i.e. `config = LoraConfig(..., modules_to_save=...)`)
- Merging adapters (one or multiple)
- Merging multiple adapters into one adapter (i.e. calling `model.add_weighted_adapter(...)`)

Generally, we can expect that if a feature works correctly with LoRA and is also supported by other adapter types, it should also work for that adapter type.

The more advanced PEFT features below **don't work** in conjunction with `torch.compile`. Tests were run with LoRA:

- Using PEFT adapters with quantization (bitsandbytes)
- Inference with multiple adapters
- Unloading (i.e. calling `model.merge_and_unload()`)
- Disabling adapters (i.e. using `with model.disable_adapter()`)
- Mixed adapter batches (i.e. calling `model(batch, adapter_names=["__base__", "default", "other", ...])`)

## Test cases

All the use cases listed above are tested inside of [`peft/tests/test_torch_compile.py`](https://github.com/huggingface/peft/blob/main/tests/test_torch_compile.py). If you want to check in more detail how we tested a certain feature, please go to that file and check the test that corresponds to your use case.

> [!TIP]
> If you have another use case where you know that `torch.compile` does or does not work with PEFT, please contribute by letting us know or by opening a PR to add this use case to the covered test cases.
peft_md_files/developer_guides/troubleshooting.md
ADDED
@@ -0,0 +1,273 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Troubleshooting

If you encounter any issue when using PEFT, please check the following list of common issues and their solutions.

## Examples don't work

Examples often rely on the most recent package versions, so please ensure they're up-to-date. In particular, check the following package versions:

- `peft`
- `transformers`
- `accelerate`
- `torch`

In general, you can update the package version by running this command inside your Python environment:

```bash
python -m pip install -U <package_name>
```

Installing PEFT from source is useful for keeping up with the latest developments:

```bash
python -m pip install git+https://github.com/huggingface/peft
```

## ValueError: Attempting to unscale FP16 gradients

This error probably occurred because the model was loaded with `torch_dtype=torch.float16` and then used in an automatic mixed precision (AMP) context, e.g. by setting `fp16=True` in the [`~transformers.Trainer`] class from 🤗 Transformers. The reason is that when using AMP, trainable weights should never use fp16. To make this work without loading the whole model in fp32, add the following to your code:

```python
peft_model = get_peft_model(...)

# add this:
for param in peft_model.parameters():
    if param.requires_grad:
        param.data = param.data.float()

# proceed as usual
trainer = Trainer(model=peft_model, fp16=True, ...)
trainer.train()
```

Alternatively, you can use the [`~utils.cast_mixed_precision_params`] function to correctly cast the weights:

```python
from peft import cast_mixed_precision_params

peft_model = get_peft_model(...)
cast_mixed_precision_params(peft_model, dtype=torch.float16)

# proceed as usual
trainer = Trainer(model=peft_model, fp16=True, ...)
trainer.train()
```

<Tip>

Starting from PEFT version v0.12.0, PEFT automatically promotes the dtype of adapter weights from `torch.float16` and `torch.bfloat16` to `torch.float32` where appropriate. To _prevent_ this behavior, you can pass `autocast_adapter_dtype=False` to [`~get_peft_model`], to [`~PeftModel.from_pretrained`], and to [`~PeftModel.load_adapter`].

</Tip>

## Bad results from a loaded PEFT model

There can be several reasons for getting a poor result from a loaded PEFT model, which are listed below. If you're still unable to troubleshoot the problem, see if anyone else had a similar [issue](https://github.com/huggingface/peft/issues) on GitHub, and if you can't find any, open a new issue.

When opening an issue, it helps a lot if you provide a minimal code example that reproduces the issue. Also, please report if the loaded model performs at the same level as the model did before fine-tuning, if it performs at a random level, or if it is only slightly worse than expected. This information helps us identify the problem more quickly.

### Random deviations

If your model outputs are not exactly the same as previous runs, there could be an issue with random elements. For example:

1. please ensure the model is in `.eval()` mode, which is important, for instance, if the model uses dropout
2. if you use [`~transformers.GenerationMixin.generate`] on a language model, there could be random sampling, so obtaining the same result requires setting a random seed
3. if you used quantization and merged the weights, small deviations are expected due to rounding errors

### Incorrectly loaded model

Please ensure that you load the model correctly. A common error is trying to load a _trained_ model with [`get_peft_model`], which is incorrect. Instead, the loading code should look like this:

```python
from peft import PeftModel, PeftConfig

base_model = ...  # to load the base model, use the same code as when you trained it
config = PeftConfig.from_pretrained(peft_model_id)
peft_model = PeftModel.from_pretrained(base_model, peft_model_id)
```

### Randomly initialized layers

For some tasks, it is important to correctly configure `modules_to_save` in the config to account for randomly initialized layers.

As an example, this is necessary if you use LoRA to fine-tune a language model for sequence classification because 🤗 Transformers adds a randomly initialized classification head on top of the model. If you do not add this layer to `modules_to_save`, the classification head won't be saved. The next time you load the model, you'll get a _different_ randomly initialized classification head, resulting in completely different results.

PEFT tries to correctly guess the `modules_to_save` if you provide the `task_type` argument in the config. This should work for transformers models that follow the standard naming scheme. It is always a good idea to double check though because we can't guarantee all models follow the naming scheme.

When you load a transformers model that has randomly initialized layers, you should see a warning along the lines of:

```
Some weights of <MODEL> were not initialized from the model checkpoint at <ID> and are newly initialized: [<LAYER_NAMES>].
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.
```

The mentioned layers should be added to `modules_to_save` in the config to avoid the described problem.
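
For example, a minimal sketch for sequence classification, assuming a decoder-style base model whose classification head is named `score` (the actual head name depends on the architecture, so check the warning for the exact layer names):

```python
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

# "facebook/opt-125m" is only an illustrative choice of base model
base_model = AutoModelForSequenceClassification.from_pretrained("facebook/opt-125m", num_labels=2)

# the randomly initialized head ("score" here) is trained and stored alongside the adapter
config = LoraConfig(task_type="SEQ_CLS", modules_to_save=["score"])
model = get_peft_model(base_model, config)
```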
### Extending the vocabulary

For many language fine-tuning tasks, extending the model's vocabulary is necessary since new tokens are being introduced. This requires extending the embedding layer to account for the new tokens and also storing the embedding layer in addition to the adapter weights when saving the adapter.

Save the embedding layer by adding it to the `target_modules` of the config. The embedding layer name must follow the standard naming scheme from Transformers. For example, the Mistral config could look like this:

```python
config = LoraConfig(..., target_modules=["embed_tokens", "lm_head", "q_proj", "v_proj"])
```

Once added to `target_modules`, PEFT automatically stores the embedding layer when saving the adapter if the model has the [`~transformers.PreTrainedModel.get_input_embeddings`] and [`~transformers.PreTrainedModel.get_output_embeddings`] methods. This is generally the case for Transformers models.

If the model's embedding layer doesn't follow the Transformers naming scheme, you can still save it by manually passing `save_embedding_layers=True` when saving the adapter:

```python
model = get_peft_model(...)
# train the model
model.save_pretrained("my_adapter", save_embedding_layers=True)
```

For inference, load the base model first and resize it the same way you did before you trained the model. After you've resized the base model, you can load the PEFT checkpoint.
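
A minimal sketch of that inference flow, assuming the extended tokenizer was saved to the same `my_adapter` directory and that Mistral was the base model used during training:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# load the tokenizer that already contains the added tokens
tokenizer = AutoTokenizer.from_pretrained("my_adapter")

# load the base model and resize its embeddings exactly as before training
base_model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-v0.1")
base_model.resize_token_embeddings(len(tokenizer))

# only now load the PEFT checkpoint
model = PeftModel.from_pretrained(base_model, "my_adapter")
```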
For a complete example, please check out [this notebook](https://github.com/huggingface/peft/blob/main/examples/causal_language_modeling/peft_lora_clm_with_additional_tokens.ipynb).

### Check layer and model status

Sometimes a PEFT model can end up in a bad state, especially when handling multiple adapters. There can be some confusion around what adapters exist, which one is active, which one is merged, etc. To help investigate this issue, call the [`~peft.PeftModel.get_layer_status`] and the [`~peft.PeftModel.get_model_status`] methods.

The [`~peft.PeftModel.get_layer_status`] method gives you a detailed overview of each targeted layer's active, merged, and available adapters.

```python
>>> from transformers import AutoModel
>>> from peft import get_peft_model, LoraConfig

>>> model_id = "google/flan-t5-small"
>>> model = AutoModel.from_pretrained(model_id)
>>> model = get_peft_model(model, LoraConfig())

>>> model.get_layer_status()
[TunerLayerStatus(name='model.encoder.block.0.layer.0.SelfAttention.q',
                  module_type='lora.Linear',
                  enabled=True,
                  active_adapters=['default'],
                  merged_adapters=[],
                  requires_grad={'default': True},
                  available_adapters=['default']),
 TunerLayerStatus(name='model.encoder.block.0.layer.0.SelfAttention.v',
                  module_type='lora.Linear',
                  enabled=True,
                  active_adapters=['default'],
                  merged_adapters=[],
                  requires_grad={'default': True},
                  available_adapters=['default']),
...]

>>> model.get_model_status()
TunerModelStatus(
    base_model_type='T5Model',
    adapter_model_type='LoraModel',
    peft_types={'default': 'LORA'},
    trainable_params=344064,
    total_params=60855680,
    num_adapter_layers=48,
    enabled=True,
    active_adapters=['default'],
    merged_adapters=[],
    requires_grad={'default': True},
    available_adapters=['default'],
)
```

In the model state output, you should look out for entries that say `"irregular"`. This means PEFT detected an inconsistent state in the model. For instance, if `merged_adapters="irregular"`, it means that for at least one adapter, it was merged on some target modules but not on others. The inference results will most likely be incorrect as a result.

The best way to resolve this issue is to reload the whole model and adapter checkpoint(s). Ensure that you don't perform any incorrect operations on the model, e.g. manually merging adapters on some modules but not others.

Convert the layer status into a pandas `DataFrame` for an easier visual inspection.

```python
from dataclasses import asdict
import pandas as pd

df = pd.DataFrame(asdict(layer) for layer in model.get_layer_status())
```

It is possible to get this information for non-PEFT models if they are using PEFT layers under the hood, but some information like the `base_model_type` or the `peft_types` cannot be determined in that case. As an example, you can call this on a [diffusers](https://huggingface.co/docs/diffusers/index) model like so:

```python
>>> import torch
>>> from diffusers import StableDiffusionPipeline
>>> from peft import get_model_status, get_layer_status

>>> path = "runwayml/stable-diffusion-v1-5"
>>> lora_id = "takuma104/lora-test-text-encoder-lora-target"
>>> pipe = StableDiffusionPipeline.from_pretrained(path, torch_dtype=torch.float16)
>>> pipe.load_lora_weights(lora_id, adapter_name="adapter-1")
>>> pipe.load_lora_weights(lora_id, adapter_name="adapter-2")
>>> pipe.set_lora_device(["adapter-2"], "cuda")
>>> get_layer_status(pipe.text_encoder)
[TunerLayerStatus(name='text_model.encoder.layers.0.self_attn.k_proj',
                  module_type='lora.Linear',
                  enabled=True,
                  active_adapters=['adapter-2'],
                  merged_adapters=[],
                  requires_grad={'adapter-1': False, 'adapter-2': True},
                  available_adapters=['adapter-1', 'adapter-2'],
                  devices={'adapter-1': ['cpu'], 'adapter-2': ['cuda']}),
 TunerLayerStatus(name='text_model.encoder.layers.0.self_attn.v_proj',
                  module_type='lora.Linear',
                  enabled=True,
                  active_adapters=['adapter-2'],
                  merged_adapters=[],
                  requires_grad={'adapter-1': False, 'adapter-2': True},
                  devices={'adapter-1': ['cpu'], 'adapter-2': ['cuda']}),
...]

>>> get_model_status(pipe.unet)
TunerModelStatus(
    base_model_type='other',
    adapter_model_type='None',
    peft_types={},
    trainable_params=797184,
    total_params=861115332,
    num_adapter_layers=128,
    enabled=True,
    active_adapters=['adapter-2'],
    merged_adapters=[],
    requires_grad={'adapter-1': False, 'adapter-2': True},
    available_adapters=['adapter-1', 'adapter-2'],
    devices={'adapter-1': ['cpu'], 'adapter-2': ['cuda']},
)
```

## Reproducibility

### Models using batch norm

When loading a trained PEFT model where the base model uses batch norm (e.g. `torch.nn.BatchNorm1d` or `torch.nn.BatchNorm2d`), you may find that you cannot reproduce the exact same outputs. This is because the batch norm layers keep track of running stats during training, but these stats are not part of the PEFT checkpoint. Therefore, when you load the PEFT model, the running stats of the base model will be used (i.e. from before training with PEFT).

Depending on your use case, this may not be a big deal. If, however, you need your outputs to be 100% reproducible, you can achieve this by adding the batch norm layers to `modules_to_save`. Below is an example of this using resnet and LoRA. Notice that we set `modules_to_save=["classifier", "normalization"]`. We need the `"classifier"` argument because our task is image classification, and we add the `"normalization"` argument to ensure that the batch norm layers are saved in the PEFT checkpoint.

```python
from transformers import AutoModelForImageClassification
from peft import LoraConfig, get_peft_model

model_id = "microsoft/resnet-18"
base_model = AutoModelForImageClassification.from_pretrained(model_id)
config = LoraConfig(
    target_modules=["convolution"],
    modules_to_save=["classifier", "normalization"],
)
model = get_peft_model(base_model, config)
```

Depending on the type of model you use, the batch norm layers could have different names than `"normalization"`, so please ensure that the name matches your model architecture.
peft_md_files/index.md
ADDED
@@ -0,0 +1,49 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# PEFT

🤗 PEFT (Parameter-Efficient Fine-Tuning) is a library for efficiently adapting large pretrained models to various downstream applications without fine-tuning all of a model's parameters, since full fine-tuning is prohibitively costly. PEFT methods only fine-tune a small number of (extra) model parameters - significantly decreasing computational and storage costs - while yielding performance comparable to a fully fine-tuned model. This makes it more accessible to train and store large language models (LLMs) on consumer hardware.

PEFT is integrated with the Transformers, Diffusers, and Accelerate libraries to provide a faster and easier way to load, train, and use large models for inference.

<div class="mt-10">
  <div class="w-full flex flex-col space-y-4 md:space-y-0 md:grid md:grid-cols-2 md:gap-y-4 md:gap-x-5">
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="quicktour"
      ><div class="w-full text-center bg-gradient-to-br from-blue-400 to-blue-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Get started</div>
      <p class="text-gray-700">Start here if you're new to 🤗 PEFT to get an overview of the library's main features, and how to train a model with a PEFT method.</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./task_guides/image_classification_lora"
      ><div class="w-full text-center bg-gradient-to-br from-indigo-400 to-indigo-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">How-to guides</div>
      <p class="text-gray-700">Practical guides demonstrating how to apply various PEFT methods across different types of tasks like image classification, causal language modeling, automatic speech recognition, and more. Learn how to use 🤗 PEFT with the DeepSpeed and Fully Sharded Data Parallel scripts.</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./conceptual_guides/lora"
      ><div class="w-full text-center bg-gradient-to-br from-pink-400 to-pink-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Conceptual guides</div>
      <p class="text-gray-700">Get a better theoretical understanding of how LoRA and various soft prompting methods help reduce the number of trainable parameters to make training more efficient.</p>
    </a>
    <a class="!no-underline border dark:border-gray-700 p-5 rounded-lg shadow hover:shadow-lg" href="./package_reference/config"
      ><div class="w-full text-center bg-gradient-to-br from-purple-400 to-purple-500 rounded-lg py-1.5 font-semibold mb-5 text-white text-lg leading-relaxed">Reference</div>
      <p class="text-gray-700">Technical descriptions of how 🤗 PEFT classes and methods work.</p>
    </a>
  </div>
</div>

<iframe
  src="https://stevhliu-peft-methods.hf.space"
  frameborder="0"
  width="850"
  height="620"
></iframe>
peft_md_files/install.md
ADDED
@@ -0,0 +1,47 @@
# Installation

Before you start, you will need to set up your environment, install the appropriate packages, and configure 🤗 PEFT. 🤗 PEFT is tested on **Python 3.8+**.

🤗 PEFT is available on PyPI, as well as GitHub:

## PyPI

To install 🤗 PEFT from PyPI:

```bash
pip install peft
```

## Source

New features that haven't been released yet are added every day, which also means there may be some bugs. To try them out, install from the GitHub repository:

```bash
pip install git+https://github.com/huggingface/peft
```

If you're working on contributing to the library or wish to play with the source code and see live results as you run the code, an editable version can be installed from a locally-cloned version of the repository:

```bash
git clone https://github.com/huggingface/peft
cd peft
pip install -e .
```
peft_md_files/package_reference/adalora.md
ADDED
@@ -0,0 +1,31 @@
# AdaLoRA

[AdaLoRA](https://hf.co/papers/2303.10512) is a method for optimizing the number of trainable parameters to assign to weight matrices and layers, unlike LoRA, which distributes parameters evenly across all modules. More parameters are budgeted for important weight matrices and layers while less important ones receive fewer parameters.
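A minimal usage sketch is shown below; the base checkpoint, `target_modules`, and rank budget values are illustrative assumptions rather than recommendations:

```py
from transformers import AutoModelForCausalLM
from peft import AdaLoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
config = AdaLoraConfig(
    init_r=12,     # initial rank before the budget is pruned
    target_r=4,    # average target rank after adaptation
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```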
The abstract from the paper is:
*Fine-tuning large pre-trained language models on downstream tasks has become an important paradigm in NLP. However, common practice fine-tunes all of the parameters in a pre-trained model, which becomes prohibitive when a large number of downstream tasks are present. Therefore, many fine-tuning methods are proposed to learn incremental updates of pre-trained weights in a parameter efficient way, e.g., low-rank increments. These methods often evenly distribute the budget of incremental updates across all pre-trained weight matrices, and overlook the varying importance of different weight parameters. As a consequence, the fine-tuning performance is suboptimal. To bridge this gap, we propose AdaLoRA, which adaptively allocates the parameter budget among weight matrices according to their importance score. In particular, AdaLoRA parameterizes the incremental updates in the form of singular value decomposition. Such a novel approach allows us to effectively prune the singular values of unimportant updates, which is essentially to reduce their parameter budget but circumvent intensive exact SVD computations. We conduct extensive experiments with several pre-trained models on natural language processing, question answering, and natural language generation to validate the effectiveness of AdaLoRA. Results demonstrate that AdaLoRA manifests notable improvement over baselines, especially in the low budget settings. Our code is publicly available at https://github.com/QingruZhang/AdaLoRA*.
## AdaLoraConfig

[[autodoc]] tuners.adalora.config.AdaLoraConfig

## AdaLoraModel

[[autodoc]] tuners.adalora.model.AdaLoraModel
peft_md_files/package_reference/adapter_utils.md
ADDED
@@ -0,0 +1,31 @@
# LyCORIS

[LyCORIS](https://hf.co/papers/2309.14859) (Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion) are LoRA-like matrix decomposition adapters that modify the cross-attention layer of the UNet. The [LoHa](loha) and [LoKr](lokr) methods inherit from the `Lycoris` classes here.

## LycorisConfig

[[autodoc]] tuners.lycoris_utils.LycorisConfig

## LycorisLayer

[[autodoc]] tuners.lycoris_utils.LycorisLayer

## LycorisTuner

[[autodoc]] tuners.lycoris_utils.LycorisTuner
peft_md_files/package_reference/auto_class.md
ADDED
@@ -0,0 +1,48 @@
# AutoPeftModels

The `AutoPeftModel` classes load the appropriate PEFT model for the task type by automatically inferring it from the configuration file. They are designed to quickly and easily load a PEFT model in a single line of code without having to worry about which exact model class you need or manually loading a [`PeftConfig`].
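For example (the adapter repository name below is purely illustrative), loading a trained adapter together with its base model takes a single call:

```py
from peft import AutoPeftModelForCausalLM

# Downloads the adapter config, infers the task and base model, and loads both
model = AutoPeftModelForCausalLM.from_pretrained("ybelkada/opt-350m-lora")
model.eval()
```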
## AutoPeftModel

[[autodoc]] auto.AutoPeftModel
    - from_pretrained

## AutoPeftModelForCausalLM

[[autodoc]] auto.AutoPeftModelForCausalLM

## AutoPeftModelForSeq2SeqLM

[[autodoc]] auto.AutoPeftModelForSeq2SeqLM

## AutoPeftModelForSequenceClassification

[[autodoc]] auto.AutoPeftModelForSequenceClassification

## AutoPeftModelForTokenClassification

[[autodoc]] auto.AutoPeftModelForTokenClassification

## AutoPeftModelForQuestionAnswering

[[autodoc]] auto.AutoPeftModelForQuestionAnswering

## AutoPeftModelForFeatureExtraction

[[autodoc]] auto.AutoPeftModelForFeatureExtraction
peft_md_files/package_reference/boft.md
ADDED
@@ -0,0 +1,31 @@
# BOFT

[Orthogonal Butterfly (BOFT)](https://hf.co/papers/2311.06243) is a generic method designed for finetuning foundation models. It improves the parameter efficiency of the Orthogonal Finetuning (OFT) paradigm by taking inspiration from the Cooley-Tukey fast Fourier transform, and shows favorable results when finetuning different foundation models, including large vision transformers, large language models, and text-to-image diffusion models.
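A hedged configuration sketch follows; the checkpoint, `target_modules`, and the block/butterfly values are illustrative assumptions only:

```py
from transformers import AutoModelForCausalLM
from peft import BOFTConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
config = BOFTConfig(
    boft_block_size=4,          # size of each orthogonal block (illustrative)
    boft_n_butterfly_factor=2,  # number of butterfly factors (illustrative)
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```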
The abstract from the paper is:
*Large foundation models are becoming ubiquitous, but training them from scratch is prohibitively expensive. Thus, efficiently adapting these powerful models to downstream tasks is increasingly important. In this paper, we study a principled finetuning paradigm -- Orthogonal Finetuning (OFT) -- for downstream task adaptation. Despite demonstrating good generalizability, OFT still uses a fairly large number of trainable parameters due to the high dimensionality of orthogonal matrices. To address this, we start by examining OFT from an information transmission perspective, and then identify a few key desiderata that enable better parameter-efficiency. Inspired by how the Cooley-Tukey fast Fourier transform algorithm enables efficient information transmission, we propose an efficient orthogonal parameterization using butterfly structures. We apply this parameterization to OFT, creating a novel parameter-efficient finetuning method, called Orthogonal Butterfly (BOFT). By subsuming OFT as a special case, BOFT introduces a generalized orthogonal finetuning framework. Finally, we conduct an extensive empirical study of adapting large vision transformers, large language models, and text-to-image diffusion models to various downstream tasks in vision and language*.
## BOFTConfig

[[autodoc]] tuners.boft.config.BOFTConfig

## BOFTModel

[[autodoc]] tuners.boft.model.BOFTModel
peft_md_files/package_reference/config.md
ADDED
@@ -0,0 +1,22 @@
# Configuration

[`PeftConfigMixin`] is the base configuration class for storing the adapter configuration of a [`PeftModel`], and [`PromptLearningConfig`] is the base configuration class for soft prompt methods (p-tuning, prefix tuning, and prompt tuning). These base classes contain methods for saving and loading model configurations from the Hub, specifying the PEFT method to use, type of task to perform, and model configurations like number of layers and number of attention heads.
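For instance, a saved adapter configuration can be loaded back from the Hub with `from_pretrained` (the repository id below is only an illustrative placeholder):

```py
from peft import PeftConfig

# Reads adapter_config.json from the Hub or a local path
config = PeftConfig.from_pretrained("ybelkada/opt-350m-lora")
print(config.peft_type, config.task_type)
```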
## PeftConfigMixin

[[autodoc]] config.PeftConfigMixin
    - all

## PeftConfig

[[autodoc]] PeftConfig
    - all

## PromptLearningConfig

[[autodoc]] PromptLearningConfig
    - all
peft_md_files/package_reference/fourierft.md
ADDED
@@ -0,0 +1,38 @@
# FourierFT: Discrete Fourier Transformation Fine-Tuning

[FourierFT](https://huggingface.co/papers/2405.03003) is a parameter-efficient fine-tuning technique that leverages Discrete Fourier Transform to compress the model's tunable weights. This method outperforms LoRA on the GLUE benchmark and common ViT classification tasks while using far fewer parameters.

FourierFT currently has the following constraints:

- Only `nn.Linear` layers are supported.
- Quantized layers are not supported.

If these constraints don't work for your use case, consider other methods instead.
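A minimal sketch under these constraints is shown below; the model name, `target_modules`, and the `n_frequency` budget are illustrative assumptions:

```py
from transformers import AutoModelForSequenceClassification
from peft import FourierFTConfig, get_peft_model

base_model = AutoModelForSequenceClassification.from_pretrained("roberta-base")
config = FourierFTConfig(
    task_type="SEQ_CLS",
    n_frequency=1000,                   # number of learnable spectral coefficients (illustrative)
    target_modules=["query", "value"],  # only nn.Linear layers are supported
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```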
The abstract from the paper is:
> Low-rank adaptation (LoRA) has recently gained much interest in fine-tuning foundation models. It effectively reduces the number of trainable parameters by incorporating low-rank matrices A and B to represent the weight change, i.e., Delta W=BA. Despite LoRA's progress, it faces storage challenges when handling extensive customization adaptations or larger base models. In this work, we aim to further compress trainable parameters by enjoying the powerful expressiveness of the Fourier transform. Specifically, we introduce FourierFT, which treats Delta W as a matrix in the spatial domain and learns only a small fraction of its spectral coefficients. With the trained spectral coefficients, we implement the inverse discrete Fourier transform to recover Delta W. Empirically, our FourierFT method shows comparable or better performance with fewer parameters than LoRA on various tasks, including natural language understanding, natural language generation, instruction tuning, and image classification. For example, when performing instruction tuning on the LLaMA2-7B model, FourierFT surpasses LoRA with only 0.064M trainable parameters, compared to LoRA's 33.5M.
## FourierFTConfig

[[autodoc]] tuners.fourierft.config.FourierFTConfig

## FourierFTModel

[[autodoc]] tuners.fourierft.model.FourierFTModel
peft_md_files/package_reference/helpers.md
ADDED
@@ -0,0 +1,12 @@
# Helper methods

A collection of helper functions for PEFT.

## Checking if a model is a PEFT model

[[autodoc]] helpers.check_if_peft_model
    - all
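A small usage sketch, assuming the import path shown in the reference above; the repository ids are illustrative placeholders:

```py
from peft.helpers import check_if_peft_model

# True if the repo (or local path) contains a PEFT adapter configuration
print(check_if_peft_model("ybelkada/opt-350m-lora"))
# False for a plain transformers checkpoint
print(check_if_peft_model("facebook/opt-125m"))
```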
peft_md_files/package_reference/ia3.md
ADDED
@@ -0,0 +1,31 @@
# IA3

Infused Adapter by Inhibiting and Amplifying Inner Activations, or [IA3](https://hf.co/papers/2205.05638), is a method that adds three learned vectors to rescale the keys and values of the self-attention and encoder-decoder attention layers, and the intermediate activation of the position-wise feed-forward network.
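A minimal configuration sketch is shown below; the base model and the module names are assumptions for a T5-style architecture:

```py
from transformers import AutoModelForSeq2SeqLM
from peft import IA3Config, get_peft_model

base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
config = IA3Config(
    task_type="SEQ_2_SEQ_LM",
    target_modules=["k", "v", "wo"],  # attention keys/values and the feed-forward output (illustrative)
    feedforward_modules=["wo"],       # modules treated as feed-forward layers
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```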
The abstract from the paper is:
*Few-shot in-context learning (ICL) enables pre-trained language models to perform a previously-unseen task without any gradient-based training by feeding a small number of training examples as part of the input. ICL incurs substantial computational, memory, and storage costs because it involves processing all of the training examples every time a prediction is made. Parameter-efficient fine-tuning (PEFT) (e.g. adapter modules, prompt tuning, sparse update methods, etc.) offers an alternative paradigm where a small set of parameters are trained to enable a model to perform the new task. In this paper, we rigorously compare few-shot ICL and PEFT and demonstrate that the latter offers better accuracy as well as dramatically lower computational costs. Along the way, we introduce a new PEFT method called (IA)^3 that scales activations by learned vectors, attaining stronger performance while only introducing a relatively tiny amount of new parameters. We also propose a simple recipe based on the T0 model called T-Few that can be applied to new tasks without task-specific tuning or modifications. We validate the effectiveness of T-Few on completely unseen tasks by applying it to the RAFT benchmark, attaining super-human performance for the first time and outperforming the state-of-the-art by 6% absolute. All of the code used in our experiments is publicly available*.
## IA3Config

[[autodoc]] tuners.ia3.config.IA3Config

## IA3Model

[[autodoc]] tuners.ia3.model.IA3Model
peft_md_files/package_reference/layernorm_tuning.md
ADDED
@@ -0,0 +1,34 @@
# LayerNorm Tuning

LayerNorm Tuning ([LN Tuning](https://huggingface.co/papers/2312.11420)) is a PEFT method that only fine-tunes the parameters of the LayerNorm layers in a model.
The paper tested this method on large language models and showed that it can achieve strong performance with a significant reduction in the number of trainable parameters and GPU memory usage.
However, the method is not limited to language models and can be applied to any model that uses LayerNorm layers.
In this implementation, all LayerNorm layers inside a model are fine-tuned by default, but the method can also be used to target other layer types such as `MLP` or `Attention` layers by specifying the `target_modules` in the `LNTuningConfig`.
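A minimal sketch of the default behavior is shown below; the checkpoint is an illustrative assumption:

```py
from transformers import AutoModelForCausalLM
from peft import LNTuningConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
config = LNTuningConfig(task_type="CAUSAL_LM")  # LayerNorm layers are selected by default
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```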
The abstract from the paper is:
*This paper introduces an efficient strategy to transform Large Language Models (LLMs) into Multi-Modal Large Language Models (MLLMs). By conceptualizing this transformation as a domain adaptation process, i.e., transitioning from text understanding to embracing multiple modalities, we intriguingly note that, within each attention block, tuning LayerNorm suffices to yield strong performance. Moreover, when benchmarked against other tuning approaches like full parameter finetuning or LoRA, its benefits on efficiency are substantial. For example, when compared to LoRA on a 13B model scale, performance can be enhanced by an average of over 20% across five multi-modal tasks, and meanwhile, results in a significant reduction of trainable parameters by 41.9% and a decrease in GPU memory usage by 17.6%. On top of this LayerNorm strategy, we showcase that selectively tuning only with conversational data can improve efficiency further. Beyond these empirical outcomes, we provide a comprehensive analysis to explore the role of LayerNorm in adapting LLMs to the multi-modal domain and improving the expressive power of the model.*
## LNTuningConfig

[[autodoc]] tuners.ln_tuning.config.LNTuningConfig

## LNTuningModel

[[autodoc]] tuners.ln_tuning.model.LNTuningModel
peft_md_files/package_reference/llama_adapter.md
ADDED
@@ -0,0 +1,31 @@
# Llama-Adapter

[Llama-Adapter](https://hf.co/papers/2303.16199) is a PEFT method specifically designed for turning Llama into an instruction-following model. The Llama model is frozen and only a set of adaptation prompts prefixed to the input instruction tokens are learned. Since randomly initialized modules inserted into the model can cause the model to lose some of its existing knowledge, Llama-Adapter uses zero-initialized attention with zero gating to progressively add the instructional prompts to the model.
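A minimal configuration sketch follows; the (gated) checkpoint, prompt length, and number of adapted layers are illustrative assumptions:

```py
from transformers import AutoModelForCausalLM
from peft import AdaptionPromptConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
config = AdaptionPromptConfig(
    adapter_len=10,      # number of adaption prompt tokens per adapted layer
    adapter_layers=30,   # how many of the top transformer layers receive prompts
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```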
The abstract from the paper is:
*We present LLaMA-Adapter, a lightweight adaption method to efficiently fine-tune LLaMA into an instruction-following model. Using 52K self-instruct demonstrations, LLaMA-Adapter only introduces 1.2M learnable parameters upon the frozen LLaMA 7B model, and costs less than one hour for fine-tuning on 8 A100 GPUs. Specifically, we adopt a set of learnable adaption prompts, and prepend them to the input text tokens at higher transformer layers. Then, a zero-init attention mechanism with zero gating is proposed, which adaptively injects the new instructional cues into LLaMA, while effectively preserves its pre-trained knowledge. With efficient training, LLaMA-Adapter generates high-quality responses, comparable to Alpaca with fully fine-tuned 7B parameters. Furthermore, our approach can be simply extended to multi-modal input, e.g., images, for image-conditioned LLaMA, which achieves superior reasoning capacity on ScienceQA. We release our code at https://github.com/ZrrSkywalker/LLaMA-Adapter*.
## AdaptionPromptConfig

[[autodoc]] tuners.adaption_prompt.config.AdaptionPromptConfig

## AdaptionPromptModel

[[autodoc]] tuners.adaption_prompt.model.AdaptionPromptModel
peft_md_files/package_reference/loha.md
ADDED
@@ -0,0 +1,31 @@
# LoHa

Low-Rank Hadamard Product ([LoHa](https://huggingface.co/papers/2108.06098)) is similar to LoRA except it approximates the large weight matrix with more low-rank matrices and combines them with the Hadamard product. This method is even more parameter-efficient than LoRA and achieves comparable performance.
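A minimal configuration sketch is shown below; the checkpoint, `target_modules`, and the `r`/`alpha` values are illustrative assumptions:

```py
from transformers import AutoModelForCausalLM
from peft import LoHaConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
config = LoHaConfig(
    r=8,
    alpha=8,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```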
The abstract from the paper is:
*In this work, we propose a communication-efficient parameterization, FedPara, for federated learning (FL) to overcome the burdens on frequent model uploads and downloads. Our method re-parameterizes weight parameters of layers using low-rank weights followed by the Hadamard product. Compared to the conventional low-rank parameterization, our FedPara method is not restricted to low-rank constraints, and thereby it has a far larger capacity. This property enables to achieve comparable performance while requiring 3 to 10 times lower communication costs than the model with the original layers, which is not achievable by the traditional low-rank methods. The efficiency of our method can be further improved by combining with other efficient FL optimizers. In addition, we extend our method to a personalized FL application, pFedPara, which separates parameters into global and local ones. We show that pFedPara outperforms competing personalized FL methods with more than three times fewer parameters*.
## LoHaConfig

[[autodoc]] tuners.loha.config.LoHaConfig

## LoHaModel

[[autodoc]] tuners.loha.model.LoHaModel
peft_md_files/package_reference/lokr.md
ADDED
@@ -0,0 +1,27 @@
# LoKr

Low-Rank Kronecker Product ([LoKr](https://hf.co/papers/2309.14859)) is a LoRA-variant method that approximates the large weight matrix with two low-rank matrices and combines them with the Kronecker product. LoKr also provides an optional third low-rank matrix to provide better control during fine-tuning.
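A minimal configuration sketch follows; the checkpoint, `target_modules`, and the `r`/`alpha` values are illustrative assumptions:

```py
from transformers import AutoModelForCausalLM
from peft import LoKrConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
config = LoKrConfig(
    r=8,
    alpha=8,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```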
## LoKrConfig

[[autodoc]] tuners.lokr.config.LoKrConfig

## LoKrModel

[[autodoc]] tuners.lokr.model.LoKrModel
peft_md_files/package_reference/lora.md
ADDED
@@ -0,0 +1,35 @@
# LoRA

Low-Rank Adaptation ([LoRA](https://huggingface.co/papers/2309.15223)) is a PEFT method that decomposes a large matrix into two smaller low-rank matrices in the attention layers. This drastically reduces the number of parameters that need to be fine-tuned.
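A minimal end-to-end sketch is shown below; the checkpoint, `target_modules`, and hyperparameter values are illustrative assumptions rather than recommendations:

```py
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```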
The abstract from the paper is:
*We propose a neural language modeling system based on low-rank adaptation (LoRA) for speech recognition output rescoring. Although pretrained language models (LMs) like BERT have shown superior performance in second-pass rescoring, the high computational cost of scaling up the pretraining stage and adapting the pretrained models to specific domains limit their practical use in rescoring. Here we present a method based on low-rank decomposition to train a rescoring BERT model and adapt it to new domains using only a fraction (0.08%) of the pretrained parameters. These inserted matrices are optimized through a discriminative training objective along with a correlation-based regularization loss. The proposed low-rank adaptation Rescore-BERT (LoRB) architecture is evaluated on LibriSpeech and internal datasets with decreased training times by factors between 5.4 and 3.6.*.
## LoraConfig

[[autodoc]] tuners.lora.config.LoraConfig

## LoraModel

[[autodoc]] tuners.lora.model.LoraModel

## Utility

[[autodoc]] utils.loftq_utils.replace_lora_weights_loftq
peft_md_files/package_reference/merge_utils.md
ADDED
@@ -0,0 +1,33 @@
# Model merge

PEFT provides several internal utilities for [merging LoRA adapters](../developer_guides/model_merging) with the TIES and DARE methods.
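These utilities are normally reached through a LoRA model's `add_weighted_adapter` method rather than called directly; a hedged sketch of a TIES merge is shown below, where the adapter repository ids and the `density` value are illustrative placeholders:

```py
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
# The two adapter repositories below are illustrative placeholders.
model = PeftModel.from_pretrained(base, "user/opt-350m-lora-task-a", adapter_name="task_a")
model.load_adapter("user/opt-350m-lora-task-b", adapter_name="task_b")

model.add_weighted_adapter(
    adapters=["task_a", "task_b"],
    weights=[1.0, 1.0],
    adapter_name="merged",
    combination_type="ties",  # backed by the TIES utilities documented below
    density=0.2,              # fraction of weights kept when pruning
)
model.set_adapter("merged")
```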
[[autodoc]] utils.merge_utils.prune

[[autodoc]] utils.merge_utils.calculate_majority_sign_mask

[[autodoc]] utils.merge_utils.disjoint_merge

[[autodoc]] utils.merge_utils.task_arithmetic

[[autodoc]] utils.merge_utils.ties

[[autodoc]] utils.merge_utils.dare_linear

[[autodoc]] utils.merge_utils.dare_ties
peft_md_files/package_reference/multitask_prompt_tuning.md
ADDED
@@ -0,0 +1,31 @@
# Multitask prompt tuning

[Multitask prompt tuning](https://huggingface.co/papers/2303.02861) decomposes the soft prompts of each task into a single learned transferable prompt instead of a separate prompt for each task. The single learned prompt can be adapted for each task by multiplicative low rank updates.

The abstract from the paper is:
*Prompt tuning, in which a base pretrained model is adapted to each task via conditioning on learned prompt vectors, has emerged as a promising approach for efficiently adapting large language models to multiple downstream tasks. However, existing methods typically learn soft prompt vectors from scratch, and it has not been clear how to exploit the rich cross-task knowledge with prompt vectors in a multitask learning setting. We propose multitask prompt tuning (MPT), which first learns a single transferable prompt by distilling knowledge from multiple task-specific source prompts. We then learn multiplicative low rank updates to this shared prompt to efficiently adapt it to each downstream target task. Extensive experiments on 23 NLP datasets demonstrate that our proposed approach outperforms the state-of-the-art methods, including the full finetuning baseline in some cases, despite only tuning 0.035% as many task-specific parameters*.
## MultitaskPromptTuningConfig

[[autodoc]] tuners.multitask_prompt_tuning.config.MultitaskPromptTuningConfig

## MultitaskPromptEmbedding

[[autodoc]] tuners.multitask_prompt_tuning.model.MultitaskPromptEmbedding
peft_md_files/package_reference/oft.md
ADDED
@@ -0,0 +1,31 @@
# OFT

[Orthogonal Finetuning (OFT)](https://hf.co/papers/2306.07280) is a method developed for adapting text-to-image diffusion models. It works by reparameterizing the pretrained weight matrices with an orthogonal matrix to preserve information in the pretrained model. To reduce the number of parameters, OFT introduces a block-diagonal structure in the orthogonal matrix.
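A hedged configuration sketch follows; the checkpoint, `target_modules`, and the `r` value (which controls the block structure) are illustrative assumptions:

```py
from transformers import AutoModelForCausalLM
from peft import OFTConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
config = OFTConfig(
    r=8,                                  # controls the block-diagonal structure (illustrative)
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```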
The abstract from the paper is:
*Large text-to-image diffusion models have impressive capabilities in generating photorealistic images from text prompts. How to effectively guide or control these powerful models to perform different downstream tasks becomes an important open problem. To tackle this challenge, we introduce a principled finetuning method -- Orthogonal Finetuning (OFT), for adapting text-to-image diffusion models to downstream tasks. Unlike existing methods, OFT can provably preserve hyperspherical energy which characterizes the pairwise neuron relationship on the unit hypersphere. We find that this property is crucial for preserving the semantic generation ability of text-to-image diffusion models. To improve finetuning stability, we further propose Constrained Orthogonal Finetuning (COFT) which imposes an additional radius constraint to the hypersphere. Specifically, we consider two important finetuning text-to-image tasks: subject-driven generation where the goal is to generate subject-specific images given a few images of a subject and a text prompt, and controllable generation where the goal is to enable the model to take in additional control signals. We empirically show that our OFT framework outperforms existing methods in generation quality and convergence speed*.
## OFTConfig

[[autodoc]] tuners.oft.config.OFTConfig

## OFTModel

[[autodoc]] tuners.oft.model.OFTModel
peft_md_files/package_reference/p_tuning.md
ADDED
@@ -0,0 +1,31 @@
# P-tuning

[P-tuning](https://hf.co/papers/2103.10385) adds trainable prompt embeddings to the input that are optimized by a prompt encoder to find a better prompt, eliminating the need to manually design prompts. The prompt tokens can be added anywhere in the input sequence, and p-tuning also introduces anchor tokens for improving performance.
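A minimal configuration sketch is shown below; the checkpoint and hyperparameter values are illustrative assumptions:

```py
from transformers import AutoModelForCausalLM
from peft import PromptEncoderConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
config = PromptEncoderConfig(
    task_type="CAUSAL_LM",
    num_virtual_tokens=20,     # length of the learned prompt
    encoder_hidden_size=128,   # hidden size of the prompt encoder
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```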
The abstract from the paper is:
*While GPTs with traditional fine-tuning fail to achieve strong results on natural language understanding (NLU), we show that GPTs can be better than or comparable to similar-sized BERTs on NLU tasks with a novel method P-tuning -- which employs trainable continuous prompt embeddings. On the knowledge probing (LAMA) benchmark, the best GPT recovers 64\% (P@1) of world knowledge without any additional text provided during test time, which substantially improves the previous best by 20+ percentage points. On the SuperGlue benchmark, GPTs achieve comparable and sometimes better performance to similar-sized BERTs in supervised learning. Importantly, we find that P-tuning also improves BERTs' performance in both few-shot and supervised settings while largely reducing the need for prompt engineering. Consequently, P-tuning outperforms the state-of-the-art approaches on the few-shot SuperGlue benchmark.*.
## PromptEncoderConfig

[[autodoc]] tuners.p_tuning.config.PromptEncoderConfig

## PromptEncoder

[[autodoc]] tuners.p_tuning.model.PromptEncoder
peft_md_files/package_reference/peft_model.md
ADDED
@@ -0,0 +1,77 @@
# Models

[`PeftModel`] is the base model class for specifying the base Transformer model and configuration to apply a PEFT method to. The base `PeftModel` contains methods for loading and saving models from the Hub.
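For example (the adapter repository id below is an illustrative placeholder), attaching a trained adapter to a base model and saving only the adapter weights looks like this:

```py
from transformers import AutoModelForCausalLM
from peft import PeftModel

base_model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
model = PeftModel.from_pretrained(base_model, "ybelkada/opt-350m-lora")
model.save_pretrained("./my-adapter")  # writes only the adapter weights and config
```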
## PeftModel

[[autodoc]] PeftModel
    - all

## PeftModelForSequenceClassification

A `PeftModel` for sequence classification tasks.

[[autodoc]] PeftModelForSequenceClassification
    - all

## PeftModelForTokenClassification

A `PeftModel` for token classification tasks.

[[autodoc]] PeftModelForTokenClassification
    - all

## PeftModelForCausalLM

A `PeftModel` for causal language modeling.

[[autodoc]] PeftModelForCausalLM
    - all

## PeftModelForSeq2SeqLM

A `PeftModel` for sequence-to-sequence language modeling.

[[autodoc]] PeftModelForSeq2SeqLM
    - all

## PeftModelForQuestionAnswering

A `PeftModel` for question answering.

[[autodoc]] PeftModelForQuestionAnswering
    - all

## PeftModelForFeatureExtraction

A `PeftModel` for extracting features/embeddings from transformer models.

[[autodoc]] PeftModelForFeatureExtraction
    - all

## PeftMixedModel

A `PeftModel` for mixing different adapter types (e.g. LoRA and LoHa).

[[autodoc]] PeftMixedModel
    - all

## Utilities

[[autodoc]] utils.cast_mixed_precision_params

[[autodoc]] get_peft_model

[[autodoc]] inject_adapter_in_model

[[autodoc]] utils.get_peft_model_state_dict

[[autodoc]] utils.prepare_model_for_kbit_training

[[autodoc]] get_layer_status

[[autodoc]] get_model_status
peft_md_files/package_reference/peft_types.md
ADDED
@@ -0,0 +1,27 @@
# PEFT types

[`PeftType`] includes the supported adapters in PEFT, and [`TaskType`] includes PEFT-supported tasks.
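For example, `TaskType` values are what you pass as `task_type` when building an adapter config (the LoRA settings here are illustrative):

```py
from peft import LoraConfig, TaskType

config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, r=8, lora_alpha=16)
print(config.peft_type)  # the PeftType set by this config, e.g. PeftType.LORA
```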
## PeftType

[[autodoc]] utils.peft_types.PeftType

## TaskType

[[autodoc]] utils.peft_types.TaskType
peft_md_files/package_reference/poly.md
ADDED
@@ -0,0 +1,44 @@
# Polytropon

[Polytropon](https://hf.co/papers/2202.13914) is a multitask model with a number of different LoRA adapters in its "inventory". The model learns the correct combination of adapters from the inventory with a routing function to choose the best subset of modules for a specific task. PEFT also supports [Multi-Head Adapter Routing (MHR)](https://hf.co/papers/2211.03831) for Polytropon which builds on and improves the routing function by combining the adapter heads more granularly. The adapter heads are separated into disjoint blocks and a different routing function is learned for each one, allowing for more expressivity.

<hfoptions id="paper">
<hfoption id="Combining Modular Skills in Multitask Learning">

The abstract from the paper is:
*A modular design encourages neural models to disentangle and recombine different facets of knowledge to generalise more systematically to new tasks. In this work, we assume that each task is associated with a subset of latent discrete skills from a (potentially small) inventory. In turn, skills correspond to parameter-efficient (sparse / low-rank) model parameterisations. By jointly learning these and a task-skill allocation matrix, the network for each task is instantiated as the average of the parameters of active skills. To favour non-trivial soft partitions of skills across tasks, we experiment with a series of inductive biases, such as an Indian Buffet Process prior and a two-speed learning rate. We evaluate our latent-skill model on two main settings: 1) multitask reinforcement learning for grounded instruction following on 8 levels of the BabyAI platform; and 2) few-shot adaptation of pre-trained text-to-text generative models on CrossFit, a benchmark comprising 160 NLP tasks. We find that the modular design of a network significantly increases sample efficiency in reinforcement learning and few-shot generalisation in supervised learning, compared to baselines with fully shared, task-specific, or conditionally generated parameters where knowledge is entangled across tasks. In addition, we show how discrete skills help interpretability, as they yield an explicit hierarchy of tasks.*
</hfoption>
<hfoption id="Multi-Head Adapter Routing for Cross-Task Generalization">

The abstract from the paper is:
*Parameter-efficient fine-tuning (PEFT) for cross-task generalization consists in pre-training adapters on a multi-task training set before few-shot adaptation to test tasks. Polytropon [Ponti et al., 2023] (Poly) jointly learns an inventory of adapters and a routing function that selects a (variable-size) subset of adapters for each task during both pre-training and few-shot adaptation. In this paper, we investigate the role that adapter routing plays in its success and design new variants based on our findings. First, we build on the intuition that finer-grained routing provides more expressivity. Hence, we propose MHR (Multi-Head Routing), which combines subsets of adapter parameters and outperforms Poly under a comparable parameter budget; by only fine-tuning the routing function and not the adapters (MHR-z), we achieve competitive performance with extreme parameter efficiency. Second, we find that Poly/MHR performance is a result of better multi-task optimization, rather than modular inductive biases that facilitate adapter recombination and local adaptation, as previously hypothesized. In fact, we find that MHR exhibits higher gradient alignment between tasks than any other method. Since this implies that routing is only crucial during multi-task pre-training, we propose MHR-mu, which discards routing and fine-tunes the average of the pre-trained adapters during few-shot adaptation. This establishes MHR-mu as an effective method for single-adapter fine-tuning.*.
</hfoption>
</hfoptions>

## PolyConfig

[[autodoc]] tuners.poly.config.PolyConfig

## PolyModel

[[autodoc]] tuners.poly.model.PolyModel
peft_md_files/package_reference/prefix_tuning.md
ADDED
@@ -0,0 +1,31 @@
# Prefix tuning

[Prefix tuning](https://hf.co/papers/2101.00190) prefixes a series of task-specific vectors to the input sequence that can be learned while keeping the pretrained model frozen. The prefix parameters are inserted in all of the model layers.
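A minimal configuration sketch follows; the checkpoint and prefix length are illustrative assumptions:

```py
from transformers import AutoModelForSeq2SeqLM
from peft import PrefixTuningConfig, get_peft_model

base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
config = PrefixTuningConfig(
    task_type="SEQ_2_SEQ_LM",
    num_virtual_tokens=20,  # length of the learned prefix inserted in each layer
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()
```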
The abstract from the paper is:
*Fine-tuning is the de facto way to leverage large pretrained language models to perform downstream tasks. However, it modifies all the language model parameters and therefore necessitates storing a full copy for each task. In this paper, we propose prefix-tuning, a lightweight alternative to fine-tuning for natural language generation tasks, which keeps language model parameters frozen, but optimizes a small continuous task-specific vector (called the prefix). Prefix-tuning draws inspiration from prompting, allowing subsequent tokens to attend to this prefix as if it were "virtual tokens". We apply prefix-tuning to GPT-2 for table-to-text generation and to BART for summarization. We find that by learning only 0.1\% of the parameters, prefix-tuning obtains comparable performance in the full data setting, outperforms fine-tuning in low-data settings, and extrapolates better to examples with topics unseen during training*.
|
24 |
+
|
25 |
+
## PrefixTuningConfig
|
26 |
+
|
27 |
+
[[autodoc]] tuners.prefix_tuning.config.PrefixTuningConfig
|
28 |
+
|
29 |
+
## PrefixEncoder
|
30 |
+
|
31 |
+
[[autodoc]] tuners.prefix_tuning.model.PrefixEncoder
|
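As a quick orientation, a minimal sketch of applying prefix tuning with this configuration might look as follows; the base model and `num_virtual_tokens` value are illustrative assumptions:

```py
from transformers import AutoModelForSeq2SeqLM
from peft import PrefixTuningConfig, TaskType, get_peft_model

model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# learn a prefix of 20 virtual tokens that is inserted in every model layer
peft_config = PrefixTuningConfig(task_type=TaskType.SEQ_2_SEQ_LM, num_virtual_tokens=20)

model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```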
peft_md_files/package_reference/prompt_tuning.md
ADDED
@@ -0,0 +1,31 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Prompt tuning

[Prompt tuning](https://hf.co/papers/2104.08691) adds task-specific prompts to the input, and these prompt parameters are updated independently of the pretrained model parameters which are frozen.

The abstract from the paper is:

*In this work, we explore "prompt tuning", a simple yet effective mechanism for learning "soft prompts" to condition frozen language models to perform specific downstream tasks. Unlike the discrete text prompts used by GPT-3, soft prompts are learned through backpropagation and can be tuned to incorporate signal from any number of labeled examples. Our end-to-end learned approach outperforms GPT-3's "few-shot" learning by a large margin. More remarkably, through ablations on model size using T5, we show that prompt tuning becomes more competitive with scale: as models exceed billions of parameters, our method "closes the gap" and matches the strong performance of model tuning (where all model weights are tuned). This finding is especially relevant in that large models are costly to share and serve, and the ability to reuse one frozen model for multiple downstream tasks can ease this burden. Our method can be seen as a simplification of the recently proposed "prefix tuning" of Li and Liang (2021), and we provide a comparison to this and other similar approaches. Finally, we show that conditioning a frozen model with soft prompts confers benefits in robustness to domain transfer, as compared to full model tuning*.

## PromptTuningConfig

[[autodoc]] tuners.prompt_tuning.config.PromptTuningConfig

## PromptEmbedding

[[autodoc]] tuners.prompt_tuning.model.PromptEmbedding
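As a quick orientation, a minimal sketch of applying prompt tuning with this configuration might look as follows; the base model, prompt text, and number of virtual tokens are illustrative assumptions:

```py
from transformers import AutoModelForCausalLM
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")

# initialize 8 soft prompt tokens from a natural-language prompt
peft_config = PromptTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    num_virtual_tokens=8,
    prompt_tuning_init_text="Classify if the tweet is a complaint or not:",
    tokenizer_name_or_path="bigscience/bloomz-560m",
)

model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```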
peft_md_files/package_reference/tuners.md
ADDED
@@ -0,0 +1,27 @@
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# Tuners

A tuner (or adapter) is a module that can be plugged into a `torch.nn.Module`. [`BaseTuner`] is the base class for other tuners and provides shared methods and attributes for preparing an adapter configuration and replacing a target module with the adapter module. [`BaseTunerLayer`] is a base class for adapter layers. It offers methods and attributes for managing adapters such as activating and disabling adapters.

## BaseTuner

[[autodoc]] tuners.tuners_utils.BaseTuner

## BaseTunerLayer

[[autodoc]] tuners.tuners_utils.BaseTunerLayer
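As a small illustration of how these base classes can be used for introspection, the sketch below walks an existing PEFT model and lists every module that is an adapter layer; the `peft_model` variable is assumed to have been created elsewhere, for example with [`get_peft_model`]:

```py
from peft.tuners.tuners_utils import BaseTunerLayer

# `peft_model` is assumed to be an existing PeftModel
adapter_layer_names = [
    name for name, module in peft_model.named_modules() if isinstance(module, BaseTunerLayer)
]
print(f"{len(adapter_layer_names)} adapter layers, e.g. {adapter_layer_names[:3]}")

# BaseTunerLayer also exposes adapter management attributes, such as the currently active adapter(s)
first_layer = peft_model.get_submodule(adapter_layer_names[0])
print(first_layer.active_adapters)
```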
peft_md_files/package_reference/vera.md
ADDED
@@ -0,0 +1,42 @@
<!--Copyright 2024 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# VeRA: Vector-based Random Matrix Adaptation

[VeRA](https://huggingface.co/papers/2310.11454) is a parameter-efficient fine-tuning technique that is similar to LoRA but requires even fewer extra parameters while promising similar or even better performance. As such, it is particularly useful when the parameter budget is very limited, e.g. when scaling to very large models. The reduction of the count of trainable parameters is achieved by sharing the same low-rank matrices across all layers, and only training two additional vectors per layer.

When saving the adapter parameters, it's possible to eschew storing the low rank matrices by setting `save_projection=False` on the `VeraConfig`. In that case, these matrices will be restored based on the fixed random seed from the `projection_prng_key` argument. This cuts down on the size of the checkpoint, but we cannot guarantee reproducibility on all devices and for all future versions of PyTorch. If you want to ensure reproducibility, set `save_projection=True` (which is the default).

To handle different shapes of adapted layers, VeRA initializes shared A and B matrices with the largest required size for each dimension. During the forward pass, submatrices A and B for a given layer are sliced out from these shared matrices and used as described in the paper. For example, adapting two linear layers of shapes (100, 20) and (80, 50) will create A and B matrices of shapes (rank, 50) and (100, rank) respectively. Then, to adapt a layer of shape (100, 20), submatrices A and B of shapes (rank, 20) and (100, rank) will be extracted.

VeRA currently has the following constraints:

- Only `nn.Linear` layers are supported.
- Quantized layers are not supported.

If these constraints don't work for your use case, use LoRA instead.

The abstract from the paper is:

> Low-rank adapation (LoRA) is a popular method that reduces the number of trainable parameters when finetuning large language models, but still faces acute storage challenges when scaling to even larger models or deploying numerous per-user or per-task adapted models. In this work, we present Vector-based Random Matrix Adaptation (VeRA), which significantly reduces the number of trainable parameters compared to LoRA, yet maintains the same performance. It achieves this by using a single pair of low-rank matrices shared across all layers and learning small scaling vectors instead. We demonstrate its effectiveness on the GLUE and E2E benchmarks, image classification tasks, and show its application in instruction-tuning of 7B and 13B language models.

## VeRAConfig

[[autodoc]] tuners.vera.config.VeraConfig

## VeRAModel

[[autodoc]] tuners.vera.model.VeraModel
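As a quick orientation, a minimal sketch of applying VeRA to a causal language model could look like this; the base model, rank, and `target_modules` are illustrative assumptions:

```py
from transformers import AutoModelForCausalLM
from peft import VeraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")

# VeRA typically uses a larger rank than LoRA because only two small vectors per layer are trained
config = VeraConfig(r=256, target_modules=["q_proj", "v_proj"], save_projection=True)

model = get_peft_model(model, config)
model.print_trainable_parameters()
```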
peft_md_files/quicktour.md
ADDED
@@ -0,0 +1,170 @@
1 |
+
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
|
2 |
+
|
3 |
+
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
4 |
+
the License. You may obtain a copy of the License at
|
5 |
+
|
6 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
7 |
+
|
8 |
+
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
9 |
+
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
10 |
+
specific language governing permissions and limitations under the License.
|
11 |
+
|
12 |
+
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
|
13 |
+
rendered properly in your Markdown viewer.
|
14 |
+
|
15 |
+
-->
|
16 |
+
|
17 |
+
# Quicktour
|
18 |
+
|
19 |
+
PEFT offers parameter-efficient methods for finetuning large pretrained models. The traditional paradigm is to finetune all of a model's parameters for each downstream task, but this is becoming exceedingly costly and impractical because of the enormous number of parameters in models today. Instead, it is more efficient to train a smaller number of prompt parameters or use a reparametrization method like low-rank adaptation (LoRA) to reduce the number of trainable parameters.
|
20 |
+
|
21 |
+
This quicktour will show you PEFT's main features and how you can train or run inference on large models that would typically be inaccessible on consumer devices.
|
22 |
+
|
23 |
+
## Train
|
24 |
+
|
25 |
+
Each PEFT method is defined by a [`PeftConfig`] class that stores all the important parameters for building a [`PeftModel`]. For example, to train with LoRA, load and create a [`LoraConfig`] class and specify the following parameters:
|
26 |
+
|
27 |
+
- `task_type`: the task to train for (sequence-to-sequence language modeling in this case)
|
28 |
+
- `inference_mode`: whether you're using the model for inference or not
|
29 |
+
- `r`: the dimension of the low-rank matrices
|
30 |
+
- `lora_alpha`: the scaling factor for the low-rank matrices
|
31 |
+
- `lora_dropout`: the dropout probability of the LoRA layers
|
32 |
+
|
33 |
+
```python
|
34 |
+
from peft import LoraConfig, TaskType
|
35 |
+
|
36 |
+
peft_config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1)
|
37 |
+
```
|
38 |
+
|
39 |
+
<Tip>
|
40 |
+
|
41 |
+
See the [`LoraConfig`] reference for more details about other parameters you can adjust, such as the modules to target or the bias type.
|
42 |
+
|
43 |
+
</Tip>
|
44 |
+
|
45 |
+
Once the [`LoraConfig`] is set up, create a [`PeftModel`] with the [`get_peft_model`] function. It takes a base model - which you can load from the Transformers library - and the [`LoraConfig`] containing the parameters for how to configure a model for training with LoRA.
|
46 |
+
|
47 |
+
Load the base model you want to finetune.
|
48 |
+
|
49 |
+
```python
|
50 |
+
from transformers import AutoModelForSeq2SeqLM
|
51 |
+
|
52 |
+
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")
|
53 |
+
```
|
54 |
+
|
55 |
+
Wrap the base model and `peft_config` with the [`get_peft_model`] function to create a [`PeftModel`]. To get a sense of the number of trainable parameters in your model, use the [`print_trainable_parameters`] method.
|
56 |
+
|
57 |
+
```python
|
58 |
+
from peft import get_peft_model
|
59 |
+
|
60 |
+
model = get_peft_model(model, peft_config)
|
61 |
+
model.print_trainable_parameters()
|
62 |
+
"output: trainable params: 2359296 || all params: 1231940608 || trainable%: 0.19151053100118282"
|
63 |
+
```
|
64 |
+
|
65 |
+
Out of [bigscience/mt0-large's](https://huggingface.co/bigscience/mt0-large) 1.2B parameters, you're only training 0.19% of them!
|
66 |
+
|
67 |
+
That is it 🎉! Now you can train the model with the Transformers [`~transformers.Trainer`], Accelerate, or any custom PyTorch training loop.
|
68 |
+
|
69 |
+
For example, to train with the [`~transformers.Trainer`] class, set up a [`~transformers.TrainingArguments`] class with some training hyperparameters.
|
70 |
+
|
71 |
+
```py
|
72 |
+
from transformers import TrainingArguments

training_args = TrainingArguments(
|
73 |
+
output_dir="your-name/bigscience/mt0-large-lora",
|
74 |
+
learning_rate=1e-3,
|
75 |
+
per_device_train_batch_size=32,
|
76 |
+
per_device_eval_batch_size=32,
|
77 |
+
num_train_epochs=2,
|
78 |
+
weight_decay=0.01,
|
79 |
+
evaluation_strategy="epoch",
|
80 |
+
save_strategy="epoch",
|
81 |
+
load_best_model_at_end=True,
|
82 |
+
)
|
83 |
+
```
|
84 |
+
|
85 |
+
Pass the model, training arguments, dataset, tokenizer, and any other necessary component to the [`~transformers.Trainer`], and call [`~transformers.Trainer.train`] to start training.
|
86 |
+
|
87 |
+
```py
|
88 |
+
from transformers import Trainer

trainer = Trainer(
|
89 |
+
model=model,
|
90 |
+
args=training_args,
|
91 |
+
train_dataset=tokenized_datasets["train"],
|
92 |
+
eval_dataset=tokenized_datasets["test"],
|
93 |
+
tokenizer=tokenizer,
|
94 |
+
data_collator=data_collator,
|
95 |
+
compute_metrics=compute_metrics,
|
96 |
+
)
|
97 |
+
|
98 |
+
trainer.train()
|
99 |
+
```
|
100 |
+
|
101 |
+
### Save model
|
102 |
+
|
103 |
+
After your model is finished training, you can save your model to a directory using the [`~transformers.PreTrainedModel.save_pretrained`] function.
|
104 |
+
|
105 |
+
```py
|
106 |
+
model.save_pretrained("output_dir")
|
107 |
+
```
|
108 |
+
|
109 |
+
You can also save your model to the Hub (make sure you're logged in to your Hugging Face account first) with the [`~transformers.PreTrainedModel.push_to_hub`] function.
|
110 |
+
|
111 |
+
```python
|
112 |
+
from huggingface_hub import notebook_login
|
113 |
+
|
114 |
+
notebook_login()
|
115 |
+
model.push_to_hub("your-name/bigscience/mt0-large-lora")
|
116 |
+
```
|
117 |
+
|
118 |
+
Both methods only save the extra PEFT weights that were trained, meaning it is super efficient to store, transfer, and load. For example, this [facebook/opt-350m](https://huggingface.co/ybelkada/opt-350m-lora) model trained with LoRA only contains two files: `adapter_config.json` and `adapter_model.safetensors`. The `adapter_model.safetensors` file is just 6.3MB!
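To reuse those saved adapter weights later, you can also attach them to a fresh copy of the base model with [`PeftModel.from_pretrained`]; a minimal sketch, assuming the local `output_dir` from above:

```py
from transformers import AutoModelForSeq2SeqLM
from peft import PeftModel

base_model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")
# attach the LoRA adapter that was saved with save_pretrained("output_dir")
model = PeftModel.from_pretrained(base_model, "output_dir")
```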
|
119 |
+
|
120 |
+
<div class="flex flex-col justify-center">
|
121 |
+
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/PEFT-hub-screenshot.png"/>
|
122 |
+
<figcaption class="text-center">The adapter weights for an opt-350m model stored on the Hub are only ~6MB compared to the full size of the model weights, which can be ~700MB.</figcaption>
|
123 |
+
</div>
|
124 |
+
|
125 |
+
## Inference
|
126 |
+
|
127 |
+
<Tip>
|
128 |
+
|
129 |
+
Take a look at the [AutoPeftModel](package_reference/auto_class) API reference for a complete list of available `AutoPeftModel` classes.
|
130 |
+
|
131 |
+
</Tip>
|
132 |
+
|
133 |
+
Easily load any PEFT-trained model for inference with the [`AutoPeftModel`] class and the [`~transformers.PreTrainedModel.from_pretrained`] method:
|
134 |
+
|
135 |
+
```py
|
136 |
+
from peft import AutoPeftModelForCausalLM
|
137 |
+
from transformers import AutoTokenizer
|
138 |
+
import torch
|
139 |
+
|
140 |
+
model = AutoPeftModelForCausalLM.from_pretrained("ybelkada/opt-350m-lora")
|
141 |
+
tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
|
142 |
+
|
143 |
+
model = model.to("cuda")
|
144 |
+
model.eval()
|
145 |
+
inputs = tokenizer("Preheat the oven to 350 degrees and place the cookie dough", return_tensors="pt")
|
146 |
+
|
147 |
+
outputs = model.generate(input_ids=inputs["input_ids"].to("cuda"), max_new_tokens=50)
|
148 |
+
print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True)[0])
|
149 |
+
|
150 |
+
"Preheat the oven to 350 degrees and place the cookie dough in the center of the oven. In a large bowl, combine the flour, baking powder, baking soda, salt, and cinnamon. In a separate bowl, combine the egg yolks, sugar, and vanilla."
|
151 |
+
```
|
152 |
+
|
153 |
+
For other tasks that aren't explicitly supported with an `AutoPeftModelFor` class - such as automatic speech recognition - you can still use the base [`AutoPeftModel`] class to load a model for the task.
|
154 |
+
|
155 |
+
```py
|
156 |
+
from peft import AutoPeftModel
|
157 |
+
|
158 |
+
model = AutoPeftModel.from_pretrained("smangrul/openai-whisper-large-v2-LORA-colab")
|
159 |
+
```
|
160 |
+
|
161 |
+
## Next steps
|
162 |
+
|
163 |
+
Now that you've seen how to train a model with one of the PEFT methods, we encourage you to try out some of the other methods like prompt tuning. The steps are very similar to the ones shown in the quicktour:
|
164 |
+
|
165 |
+
1. prepare a [`PeftConfig`] for a PEFT method
|
166 |
+
2. use the [`get_peft_model`] method to create a [`PeftModel`] from the configuration and base model
|
167 |
+
|
168 |
+
Then you can train it however you like! To load a PEFT model for inference, you can use the [`AutoPeftModel`] class.
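For example, a minimal sketch of that two-step recipe with p-tuning might look like this; the base model and hyperparameters are illustrative assumptions:

```py
from transformers import AutoModelForCausalLM
from peft import PromptEncoderConfig, TaskType, get_peft_model

# 1. prepare a PeftConfig for the PEFT method (here: p-tuning)
peft_config = PromptEncoderConfig(task_type=TaskType.CAUSAL_LM, num_virtual_tokens=20, encoder_hidden_size=128)

# 2. create a PeftModel from the configuration and a base model
base_model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
model = get_peft_model(base_model, peft_config)
model.print_trainable_parameters()
```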
|
169 |
+
|
170 |
+
Feel free to also take a look at the task guides if you're interested in training a model with another PEFT method for a specific task such as semantic segmentation, multilingual automatic speech recognition, DreamBooth, token classification, and more.
|
peft_md_files/task_guides/ia3.md
ADDED
@@ -0,0 +1,239 @@
1 |
+
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
|
2 |
+
|
3 |
+
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
4 |
+
the License. You may obtain a copy of the License at
|
5 |
+
|
6 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
7 |
+
|
8 |
+
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
9 |
+
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
10 |
+
specific language governing permissions and limitations under the License.
|
11 |
+
|
12 |
+
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
|
13 |
+
rendered properly in your Markdown viewer.
|
14 |
+
|
15 |
+
-->
|
16 |
+
|
17 |
+
# IA3
|
18 |
+
|
19 |
+
[IA3](../conceptual_guides/ia3) multiplies the model's activations (the keys and values in the self-attention and encoder-decoder attention blocks, and the intermediate activation of the position-wise feedforward network) by three learned vectors. This PEFT method introduces an even smaller number of trainable parameters than LoRA which introduces weight matrices instead of vectors. The original model's parameters are kept frozen and only these vectors are updated. As a result, it is faster, cheaper and more efficient to finetune for a new downstream task.
|
20 |
+
|
21 |
+
This guide will show you how to train a sequence-to-sequence model with IA3 to *generate a sentiment* given some financial news.
|
22 |
+
|
23 |
+
<Tip>
|
24 |
+
|
25 |
+
Some familiarity with the general process of training a sequence-to-sequence model would be really helpful and allow you to focus on how to apply IA3. If you’re new, we recommend taking a look at the [Translation](https://huggingface.co/docs/transformers/tasks/translation) and [Summarization](https://huggingface.co/docs/transformers/tasks/summarization) guides first from the Transformers documentation. When you’re ready, come back and see how easy it is to drop PEFT into your training!
|
26 |
+
|
27 |
+
</Tip>
|
28 |
+
|
29 |
+
## Dataset
|
30 |
+
|
31 |
+
You'll use the sentences_allagree subset of the [financial_phrasebank](https://huggingface.co/datasets/financial_phrasebank) dataset. This subset contains financial news with 100% annotator agreement on the sentiment label. Take a look at the [dataset viewer](https://huggingface.co/datasets/financial_phrasebank/viewer/sentences_allagree) for a better idea of the data and sentences you'll be working with.
|
32 |
+
|
33 |
+
Load the dataset with the [`~datasets.load_dataset`] function. This subset of the dataset only contains a train split, so use the [`~datasets.train_test_split`] function to create a train and validation split. Create a new `text_label` column so it is easier to understand what the `label` values `0`, `1`, and `2` mean.
|
34 |
+
|
35 |
+
```py
|
36 |
+
from datasets import load_dataset
|
37 |
+
|
38 |
+
ds = load_dataset("financial_phrasebank", "sentences_allagree")
|
39 |
+
ds = ds["train"].train_test_split(test_size=0.1)
|
40 |
+
ds["validation"] = ds["test"]
|
41 |
+
del ds["test"]
|
42 |
+
|
43 |
+
classes = ds["train"].features["label"].names
|
44 |
+
ds = ds.map(
|
45 |
+
lambda x: {"text_label": [classes[label] for label in x["label"]]},
|
46 |
+
batched=True,
|
47 |
+
num_proc=1,
|
48 |
+
)
|
49 |
+
|
50 |
+
ds["train"][0]
|
51 |
+
{'sentence': 'It will be operated by Nokia , and supported by its Nokia NetAct network and service management system .',
|
52 |
+
'label': 1,
|
53 |
+
'text_label': 'neutral'}
|
54 |
+
```
|
55 |
+
|
56 |
+
Load a tokenizer and create a preprocessing function that:
|
57 |
+
|
58 |
+
1. tokenizes the inputs, and pads and truncates the sequence to the `max_length`
2. applies the same tokenizer to the labels but with a shorter `max_length` that corresponds to the label
3. masks the padding tokens
|
61 |
+
|
62 |
+
```py
|
63 |
+
from transformers import AutoTokenizer
|
64 |
+
|
65 |
+
text_column = "sentence"
|
66 |
+
label_column = "text_label"
|
67 |
+
max_length = 128
|
68 |
+
|
69 |
+
tokenizer = AutoTokenizer.from_pretrained("bigscience/mt0-large")
|
70 |
+
|
71 |
+
def preprocess_function(examples):
|
72 |
+
inputs = examples[text_column]
|
73 |
+
targets = examples[label_column]
|
74 |
+
model_inputs = tokenizer(inputs, max_length=max_length, padding="max_length", truncation=True, return_tensors="pt")
|
75 |
+
labels = tokenizer(targets, max_length=3, padding="max_length", truncation=True, return_tensors="pt")
|
76 |
+
labels = labels["input_ids"]
|
77 |
+
labels[labels == tokenizer.pad_token_id] = -100
|
78 |
+
model_inputs["labels"] = labels
|
79 |
+
return model_inputs
|
80 |
+
```
|
81 |
+
|
82 |
+
Use the [`~datasets.Dataset.map`] function to apply the preprocessing function to the entire dataset.
|
83 |
+
|
84 |
+
```py
|
85 |
+
processed_ds = ds.map(
|
86 |
+
preprocess_function,
|
87 |
+
batched=True,
|
88 |
+
num_proc=1,
|
89 |
+
remove_columns=ds["train"].column_names,
|
90 |
+
load_from_cache_file=False,
|
91 |
+
desc="Running tokenizer on dataset",
|
92 |
+
)
|
93 |
+
```
|
94 |
+
|
95 |
+
Create a training and evaluation [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader), and set `pin_memory=True` to speed up data transfer to the GPU during training if your dataset samples are on a CPU.
|
96 |
+
|
97 |
+
```py
|
98 |
+
from torch.utils.data import DataLoader
|
99 |
+
from transformers import default_data_collator
|
100 |
+
|
101 |
+
train_ds = processed_ds["train"]
|
102 |
+
eval_ds = processed_ds["validation"]
|
103 |
+
|
104 |
+
batch_size = 8
|
105 |
+
|
106 |
+
train_dataloader = DataLoader(
|
107 |
+
train_ds, shuffle=True, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True
|
108 |
+
)
|
109 |
+
eval_dataloader = DataLoader(eval_ds, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True)
|
110 |
+
```
|
111 |
+
|
112 |
+
## Model
|
113 |
+
|
114 |
+
Now you can load a pretrained model to use as the base model for IA3. This guide uses the [bigscience/mt0-large](https://huggingface.co/bigscience/mt0-large) model, but you can use any sequence-to-sequence model you like.
|
115 |
+
|
116 |
+
```py
|
117 |
+
from transformers import AutoModelForSeq2SeqLM
|
118 |
+
|
119 |
+
model = AutoModelForSeq2SeqLM.from_pretrained("bigscience/mt0-large")
|
120 |
+
```
|
121 |
+
|
122 |
+
### PEFT configuration and model
|
123 |
+
|
124 |
+
All PEFT methods need a configuration that contains and specifies all the parameters for how the PEFT method should be applied. Create an [`IA3Config`] with the task type and set the inference mode to `False`. You can find additional parameters for this configuration in the [API reference](../package_reference/ia3#ia3config).
|
125 |
+
|
126 |
+
<Tip>
|
127 |
+
|
128 |
+
Call the [`~PeftModel.print_trainable_parameters`] method to compare the number of trainable parameters of [`PeftModel`] versus the number of parameters in the base model!
|
129 |
+
|
130 |
+
</Tip>
|
131 |
+
|
132 |
+
Once the configuration is setup, pass it to the [`get_peft_model`] function along with the base model to create a trainable [`PeftModel`].
|
133 |
+
|
134 |
+
```py
|
135 |
+
from peft import IA3Config, get_peft_model
|
136 |
+
|
137 |
+
peft_config = IA3Config(task_type="SEQ_2_SEQ_LM")
|
138 |
+
model = get_peft_model(model, peft_config)
|
139 |
+
model.print_trainable_parameters()
|
140 |
+
"trainable params: 282,624 || all params: 1,229,863,936 || trainable%: 0.022980103060766553"
|
141 |
+
```
|
142 |
+
|
143 |
+
### Training
|
144 |
+
|
145 |
+
Set up an optimizer and learning rate scheduler.
|
146 |
+
|
147 |
+
```py
|
148 |
+
import torch
|
149 |
+
from transformers import get_linear_schedule_with_warmup
|
150 |
+
|
151 |
+
lr = 8e-3
|
152 |
+
num_epochs = 3
|
153 |
+
|
154 |
+
optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
|
155 |
+
lr_scheduler = get_linear_schedule_with_warmup(
|
156 |
+
optimizer=optimizer,
|
157 |
+
num_warmup_steps=0,
|
158 |
+
num_training_steps=(len(train_dataloader) * num_epochs),
|
159 |
+
)
|
160 |
+
```
|
161 |
+
|
162 |
+
Move the model to the GPU and create a training loop that reports the loss and perplexity for each epoch.
|
163 |
+
|
164 |
+
```py
|
165 |
+
from tqdm import tqdm
|
166 |
+
|
167 |
+
device = "cuda"
|
168 |
+
model = model.to(device)
|
169 |
+
|
170 |
+
for epoch in range(num_epochs):
|
171 |
+
model.train()
|
172 |
+
total_loss = 0
|
173 |
+
for step, batch in enumerate(tqdm(train_dataloader)):
|
174 |
+
batch = {k: v.to(device) for k, v in batch.items()}
|
175 |
+
outputs = model(**batch)
|
176 |
+
loss = outputs.loss
|
177 |
+
total_loss += loss.detach().float()
|
178 |
+
loss.backward()
|
179 |
+
optimizer.step()
|
180 |
+
lr_scheduler.step()
|
181 |
+
optimizer.zero_grad()
|
182 |
+
|
183 |
+
model.eval()
|
184 |
+
eval_loss = 0
|
185 |
+
eval_preds = []
|
186 |
+
for step, batch in enumerate(tqdm(eval_dataloader)):
|
187 |
+
batch = {k: v.to(device) for k, v in batch.items()}
|
188 |
+
with torch.no_grad():
|
189 |
+
outputs = model(**batch)
|
190 |
+
loss = outputs.loss
|
191 |
+
eval_loss += loss.detach().float()
|
192 |
+
eval_preds.extend(
|
193 |
+
tokenizer.batch_decode(torch.argmax(outputs.logits, -1).detach().cpu().numpy(), skip_special_tokens=True)
|
194 |
+
)
|
195 |
+
|
196 |
+
eval_epoch_loss = eval_loss / len(eval_dataloader)
|
197 |
+
eval_ppl = torch.exp(eval_epoch_loss)
|
198 |
+
train_epoch_loss = total_loss / len(train_dataloader)
|
199 |
+
train_ppl = torch.exp(train_epoch_loss)
|
200 |
+
print(f"{epoch=}: {train_ppl=} {train_epoch_loss=} {eval_ppl=} {eval_epoch_loss=}")
|
201 |
+
```
|
202 |
+
|
203 |
+
## Share your model
|
204 |
+
|
205 |
+
After training is complete, you can upload your model to the Hub with the [`~transformers.PreTrainedModel.push_to_hub`] method. You'll need to login to your Hugging Face account first and enter your token when prompted.
|
206 |
+
|
207 |
+
```py
|
208 |
+
from huggingface_hub import notebook_login
|
209 |
+
|
210 |
+
notebook_login()

account = "<your-hf-account-name>"  # replace with your Hub username
|
211 |
+
peft_model_id = f"{account}/mt0-large-ia3"
|
212 |
+
model.push_to_hub(peft_model_id)
|
213 |
+
```
|
214 |
+
|
215 |
+
## Inference
|
216 |
+
|
217 |
+
To load the model for inference, use the [`~AutoPeftModelForSeq2SeqLM.from_pretrained`] method. Let's also load a sentence of financial news from the dataset to generate a sentiment for.
|
218 |
+
|
219 |
+
```py
|
220 |
+
from peft import AutoPeftModelForSeq2SeqLM
|
221 |
+
|
222 |
+
model = AutoPeftModelForSeq2SeqLM.from_pretrained("<your-hf-account-name>/mt0-large-ia3").to("cuda")
|
223 |
+
tokenizer = AutoTokenizer.from_pretrained("bigscience/mt0-large")
|
224 |
+
|
225 |
+
i = 15
|
226 |
+
inputs = tokenizer(ds["validation"][text_column][i], return_tensors="pt")
|
227 |
+
print(ds["validation"][text_column][i])
|
228 |
+
"The robust growth was the result of the inclusion of clothing chain Lindex in the Group in December 2007 ."
|
229 |
+
```
|
230 |
+
|
231 |
+
Call the [`~transformers.GenerationMixin.generate`] method to generate the predicted sentiment label.
|
232 |
+
|
233 |
+
```py
|
234 |
+
with torch.no_grad():
|
235 |
+
inputs = {k: v.to(device) for k, v in inputs.items()}
|
236 |
+
outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=10)
|
237 |
+
print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))
|
238 |
+
['positive']
|
239 |
+
```
|
peft_md_files/task_guides/lora_based_methods.md
ADDED
@@ -0,0 +1,348 @@
1 |
+
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
|
2 |
+
|
3 |
+
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
4 |
+
the License. You may obtain a copy of the License at
|
5 |
+
|
6 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
7 |
+
|
8 |
+
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
9 |
+
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
10 |
+
specific language governing permissions and limitations under the License.
|
11 |
+
|
12 |
+
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
|
13 |
+
rendered properly in your Markdown viewer.
|
14 |
+
|
15 |
+
-->
|
16 |
+
|
17 |
+
# LoRA methods
|
18 |
+
|
19 |
+
A popular way to efficiently train large models is to insert (typically in the attention blocks) smaller trainable matrices that are a low-rank decomposition of the delta weight matrix to be learnt during finetuning. The pretrained model's original weight matrix is frozen and only the smaller matrices are updated during training. This reduces the number of trainable parameters, reducing memory usage and training time which can be very expensive for large models.
|
20 |
+
|
21 |
+
There are several different ways to express the weight matrix as a low-rank decomposition, but [Low-Rank Adaptation (LoRA)](../conceptual_guides/adapter#low-rank-adaptation-lora) is the most common method. The PEFT library supports several other LoRA variants, such as [Low-Rank Hadamard Product (LoHa)](../conceptual_guides/adapter#low-rank-hadamard-product-loha), [Low-Rank Kronecker Product (LoKr)](../conceptual_guides/adapter#low-rank-kronecker-product-lokr), and [Adaptive Low-Rank Adaptation (AdaLoRA)](../conceptual_guides/adapter#adaptive-low-rank-adaptation-adalora). You can learn more about how these methods work conceptually in the [Adapters](../conceptual_guides/adapter) guide. If you're interested in applying these methods to other tasks and use cases like semantic segmentation, token classification, take a look at our [notebook collection](https://huggingface.co/collections/PEFT/notebooks-6573b28b33e5a4bf5b157fc1)!
|
22 |
+
|
23 |
+
This guide will show you how to quickly train an image classification model - with a low-rank decomposition method - to identify the class of food shown in an image.
|
24 |
+
|
25 |
+
<Tip>
|
26 |
+
|
27 |
+
Some familiarity with the general process of training an image classification model would be really helpful and allow you to focus on the low-rank decomposition methods. If you're new, we recommend taking a look at the [Image classification](https://huggingface.co/docs/transformers/tasks/image_classification) guide first from the Transformers documentation. When you're ready, come back and see how easy it is to drop PEFT in to your training!
|
28 |
+
|
29 |
+
</Tip>
|
30 |
+
|
31 |
+
Before you begin, make sure you have all the necessary libraries installed.
|
32 |
+
|
33 |
+
```bash
|
34 |
+
pip install -q peft transformers datasets
|
35 |
+
```
|
36 |
+
|
37 |
+
## Dataset
|
38 |
+
|
39 |
+
In this guide, you'll use the [Food-101](https://huggingface.co/datasets/food101) dataset which contains images of 101 food classes (take a look at the [dataset viewer](https://huggingface.co/datasets/food101/viewer/default/train) to get a better idea of what the dataset looks like).
|
40 |
+
|
41 |
+
Load the dataset with the [`~datasets.load_dataset`] function.
|
42 |
+
|
43 |
+
```py
|
44 |
+
from datasets import load_dataset
|
45 |
+
|
46 |
+
ds = load_dataset("food101")
|
47 |
+
```
|
48 |
+
|
49 |
+
Each food class is labeled with an integer, so to make it easier to understand what these integers represent, you'll create a `label2id` and `id2label` dictionary to map the integer to its class label.
|
50 |
+
|
51 |
+
```py
|
52 |
+
labels = ds["train"].features["label"].names
|
53 |
+
label2id, id2label = dict(), dict()
|
54 |
+
for i, label in enumerate(labels):
|
55 |
+
label2id[label] = i
|
56 |
+
id2label[i] = label
|
57 |
+
|
58 |
+
id2label[2]
|
59 |
+
"baklava"
|
60 |
+
```
|
61 |
+
|
62 |
+
Load an image processor to properly resize and normalize the pixel values of the training and evaluation images.
|
63 |
+
|
64 |
+
```py
|
65 |
+
from transformers import AutoImageProcessor
|
66 |
+
|
67 |
+
image_processor = AutoImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
|
68 |
+
```
|
69 |
+
|
70 |
+
You can also use the image processor to prepare some transformation functions for data augmentation and pixel scaling.
|
71 |
+
|
72 |
+
```py
|
73 |
+
from torchvision.transforms import (
|
74 |
+
CenterCrop,
|
75 |
+
Compose,
|
76 |
+
Normalize,
|
77 |
+
RandomHorizontalFlip,
|
78 |
+
RandomResizedCrop,
|
79 |
+
Resize,
|
80 |
+
ToTensor,
|
81 |
+
)
|
82 |
+
|
83 |
+
normalize = Normalize(mean=image_processor.image_mean, std=image_processor.image_std)
|
84 |
+
train_transforms = Compose(
|
85 |
+
[
|
86 |
+
RandomResizedCrop(image_processor.size["height"]),
|
87 |
+
RandomHorizontalFlip(),
|
88 |
+
ToTensor(),
|
89 |
+
normalize,
|
90 |
+
]
|
91 |
+
)
|
92 |
+
|
93 |
+
val_transforms = Compose(
|
94 |
+
[
|
95 |
+
Resize(image_processor.size["height"]),
|
96 |
+
CenterCrop(image_processor.size["height"]),
|
97 |
+
ToTensor(),
|
98 |
+
normalize,
|
99 |
+
]
|
100 |
+
)
|
101 |
+
|
102 |
+
def preprocess_train(example_batch):
|
103 |
+
example_batch["pixel_values"] = [train_transforms(image.convert("RGB")) for image in example_batch["image"]]
|
104 |
+
return example_batch
|
105 |
+
|
106 |
+
def preprocess_val(example_batch):
|
107 |
+
example_batch["pixel_values"] = [val_transforms(image.convert("RGB")) for image in example_batch["image"]]
|
108 |
+
return example_batch
|
109 |
+
```
|
110 |
+
|
111 |
+
Define the training and validation datasets, and use the [`~datasets.Dataset.set_transform`] function to apply the transformations on-the-fly.
|
112 |
+
|
113 |
+
```py
|
114 |
+
train_ds = ds["train"]
|
115 |
+
val_ds = ds["validation"]
|
116 |
+
|
117 |
+
train_ds.set_transform(preprocess_train)
|
118 |
+
val_ds.set_transform(preprocess_val)
|
119 |
+
```
|
120 |
+
|
121 |
+
Finally, you'll need a data collator to create a batch of training and evaluation data and convert the labels to `torch.tensor` objects.
|
122 |
+
|
123 |
+
```py
|
124 |
+
import torch
|
125 |
+
|
126 |
+
def collate_fn(examples):
|
127 |
+
pixel_values = torch.stack([example["pixel_values"] for example in examples])
|
128 |
+
labels = torch.tensor([example["label"] for example in examples])
|
129 |
+
return {"pixel_values": pixel_values, "labels": labels}
|
130 |
+
```
|
131 |
+
|
132 |
+
## Model
|
133 |
+
|
134 |
+
Now let's load a pretrained model to use as the base model. This guide uses the [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) model, but you can use any image classification model you want. Pass the `label2id` and `id2label` dictionaries to the model so it knows how to map the integer labels to their class labels, and you can optionally pass the `ignore_mismatched_sizes=True` parameter if you're finetuning a checkpoint that has already been finetuned.
|
135 |
+
|
136 |
+
```py
|
137 |
+
from transformers import AutoModelForImageClassification, TrainingArguments, Trainer
|
138 |
+
|
139 |
+
model = AutoModelForImageClassification.from_pretrained(
|
140 |
+
"google/vit-base-patch16-224-in21k",
|
141 |
+
label2id=label2id,
|
142 |
+
id2label=id2label,
|
143 |
+
ignore_mismatched_sizes=True,
|
144 |
+
)
|
145 |
+
```
|
146 |
+
|
147 |
+
### PEFT configuration and model
|
148 |
+
|
149 |
+
Every PEFT method requires a configuration that holds all the parameters specifying how the PEFT method should be applied. Once the configuration is setup, pass it to the [`~peft.get_peft_model`] function along with the base model to create a trainable [`PeftModel`].
|
150 |
+
|
151 |
+
<Tip>
|
152 |
+
|
153 |
+
Call the [`~PeftModel.print_trainable_parameters`] method to compare the number of parameters of [`PeftModel`] versus the number of parameters in the base model!
|
154 |
+
|
155 |
+
</Tip>
|
156 |
+
|
157 |
+
<hfoptions id="loras">
|
158 |
+
<hfoption id="LoRA">
|
159 |
+
|
160 |
+
[LoRA](../conceptual_guides/adapter#low-rank-adaptation-lora) decomposes the weight update matrix into *two* smaller matrices. The size of these low-rank matrices is determined by its *rank* or `r`. A higher rank means the model has more parameters to train, but it also means the model has more learning capacity. You'll also want to specify the `target_modules` which determine where the smaller matrices are inserted. For this guide, you'll target the *query* and *value* matrices of the attention blocks. Other important parameters to set are `lora_alpha` (scaling factor), `bias` (whether `none`, `all` or only the LoRA bias parameters should be trained), and `modules_to_save` (the modules apart from the LoRA layers to be trained and saved). All of these parameters - and more - are found in the [`LoraConfig`].
|
161 |
+
|
162 |
+
```py
|
163 |
+
from peft import LoraConfig, get_peft_model
|
164 |
+
|
165 |
+
config = LoraConfig(
|
166 |
+
r=16,
|
167 |
+
lora_alpha=16,
|
168 |
+
target_modules=["query", "value"],
|
169 |
+
lora_dropout=0.1,
|
170 |
+
bias="none",
|
171 |
+
modules_to_save=["classifier"],
|
172 |
+
)
|
173 |
+
model = get_peft_model(model, config)
|
174 |
+
model.print_trainable_parameters()
|
175 |
+
"trainable params: 667,493 || all params: 86,543,818 || trainable%: 0.7712775047664294"
|
176 |
+
```
|
177 |
+
|
178 |
+
</hfoption>
|
179 |
+
<hfoption id="LoHa">
|
180 |
+
|
181 |
+
[LoHa](../conceptual_guides/adapter#low-rank-hadamard-product-loha) decomposes the weight update matrix into *four* smaller matrices and each pair of smaller matrices is combined with the Hadamard product. This allows the weight update matrix to keep the same number of trainable parameters when compared to LoRA, but with a higher rank (`r^2` for LoHA when compared to `2*r` for LoRA). The size of the smaller matrices is determined by its *rank* or `r`. You'll also want to specify the `target_modules` which determines where the smaller matrices are inserted. For this guide, you'll target the *query* and *value* matrices of the attention blocks. Other important parameters to set are `alpha` (scaling factor), and `modules_to_save` (the modules apart from the LoHa layers to be trained and saved). All of these parameters - and more - are found in the [`LoHaConfig`].
|
182 |
+
|
183 |
+
```py
|
184 |
+
from peft import LoHaConfig, get_peft_model
|
185 |
+
|
186 |
+
config = LoHaConfig(
|
187 |
+
r=16,
|
188 |
+
alpha=16,
|
189 |
+
target_modules=["query", "value"],
|
190 |
+
module_dropout=0.1,
|
191 |
+
modules_to_save=["classifier"],
|
192 |
+
)
|
193 |
+
model = get_peft_model(model, config)
|
194 |
+
model.print_trainable_parameters()
|
195 |
+
"trainable params: 1,257,317 || all params: 87,133,642 || trainable%: 1.4429753779831676"
|
196 |
+
```
|
197 |
+
|
198 |
+
</hfoption>
|
199 |
+
<hfoption id="LoKr">
|
200 |
+
|
201 |
+
[LoKr](../conceptual_guides/adapter#low-rank-kronecker-product-lokr) expresses the weight update matrix as a decomposition of a Kronecker product, creating a block matrix that is able to preserve the rank of the original weight matrix. The size of the smaller matrices are determined by its *rank* or `r`. You'll also want to specify the `target_modules` which determines where the smaller matrices are inserted. For this guide, you'll target the *query* and *value* matrices of the attention blocks. Other important parameters to set are `alpha` (scaling factor), and `modules_to_save` (the modules apart from the LoKr layers to be trained and saved). All of these parameters - and more - are found in the [`LoKrConfig`].
|
202 |
+
|
203 |
+
```py
|
204 |
+
from peft import LoKrConfig, get_peft_model
|
205 |
+
|
206 |
+
config = LoKrConfig(
|
207 |
+
r=16,
|
208 |
+
alpha=16,
|
209 |
+
target_modules=["query", "value"],
|
210 |
+
module_dropout=0.1,
|
211 |
+
modules_to_save=["classifier"],
|
212 |
+
)
|
213 |
+
model = get_peft_model(model, config)
|
214 |
+
model.print_trainable_parameters()
|
215 |
+
"trainable params: 116,069 || all params: 87,172,042 || trainable%: 0.13314934162033282"
|
216 |
+
```
|
217 |
+
|
218 |
+
</hfoption>
|
219 |
+
<hfoption id="AdaLoRA">
|
220 |
+
|
221 |
+
[AdaLoRA](../conceptual_guides/adapter#adaptive-low-rank-adaptation-adalora) efficiently manages the LoRA parameter budget by assigning important weight matrices more parameters and pruning less important ones. In contrast, LoRA evenly distributes parameters across all modules. You can control the average desired *rank* or `r` of the matrices, and which modules to apply AdaLoRA to with `target_modules`. Other important parameters to set are `lora_alpha` (scaling factor), and `modules_to_save` (the modules apart from the AdaLoRA layers to be trained and saved). All of these parameters - and more - are found in the [`AdaLoraConfig`].
|
222 |
+
|
223 |
+
```py
|
224 |
+
from peft import AdaLoraConfig, get_peft_model
|
225 |
+
|
226 |
+
config = AdaLoraConfig(
|
227 |
+
r=8,
|
228 |
+
init_r=12,
|
229 |
+
tinit=200,
|
230 |
+
tfinal=1000,
|
231 |
+
deltaT=10,
|
232 |
+
target_modules=["query", "value"],
|
233 |
+
modules_to_save=["classifier"],
|
234 |
+
)
|
235 |
+
model = get_peft_model(model, config)
|
236 |
+
model.print_trainable_parameters()
|
237 |
+
"trainable params: 520,325 || all params: 87,614,722 || trainable%: 0.5938785036606062"
|
238 |
+
```
|
239 |
+
|
240 |
+
</hfoption>
|
241 |
+
</hfoptions>
|
242 |
+
|
243 |
+
### Training
|
244 |
+
|
245 |
+
For training, let's use the [`~transformers.Trainer`] class from Transformers. The [`Trainer`] contains a PyTorch training loop, and when you're ready, call [`~transformers.Trainer.train`] to start training. To customize the training run, configure the training hyperparameters in the [`~transformers.TrainingArguments`] class. With LoRA-like methods, you can afford to use a higher batch size and learning rate.
|
246 |
+
|
247 |
+
> [!WARNING]
|
248 |
+
> AdaLoRA has an [`~AdaLoraModel.update_and_allocate`] method that should be called at each training step to update the parameter budget and mask, otherwise the adaptation step is not performed. This requires writing a custom training loop or subclassing the [`~transformers.Trainer`] to incorporate this method. As an example, take a look at this [custom training loop](https://github.com/huggingface/peft/blob/912ad41e96e03652cabf47522cd876076f7a0c4f/examples/conditional_generation/peft_adalora_seq2seq.py#L120).
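For illustration, here is one hedged sketch of wiring [`~AdaLoraModel.update_and_allocate`] into a [`~transformers.Trainer`] subclass; the class name is hypothetical and the exact `training_step` signature may differ slightly between Transformers versions:

```py
from transformers import Trainer

class AdaLoraTrainer(Trainer):  # hypothetical subclass for this guide
    def training_step(self, model, inputs):
        loss = super().training_step(model, inputs)
        # update the AdaLoRA rank budget and mask after each optimization step
        model.base_model.update_and_allocate(self.state.global_step)
        return loss
```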
|
249 |
+
|
250 |
+
```py
|
251 |
+
from transformers import TrainingArguments, Trainer
|
252 |
+
|
253 |
+
account = "stevhliu"
|
254 |
+
peft_model_id = f"{account}/google/vit-base-patch16-224-in21k-lora"
|
255 |
+
batch_size = 128
|
256 |
+
|
257 |
+
args = TrainingArguments(
|
258 |
+
peft_model_id,
|
259 |
+
remove_unused_columns=False,
|
260 |
+
evaluation_strategy="epoch",
|
261 |
+
save_strategy="epoch",
|
262 |
+
learning_rate=5e-3,
|
263 |
+
per_device_train_batch_size=batch_size,
|
264 |
+
gradient_accumulation_steps=4,
|
265 |
+
per_device_eval_batch_size=batch_size,
|
266 |
+
fp16=True,
|
267 |
+
num_train_epochs=5,
|
268 |
+
logging_steps=10,
|
269 |
+
load_best_model_at_end=True,
|
270 |
+
label_names=["labels"],
|
271 |
+
)
|
272 |
+
```
|
273 |
+
|
274 |
+
Begin training with [`~transformers.Trainer.train`].
|
275 |
+
|
276 |
+
```py
|
277 |
+
trainer = Trainer(
|
278 |
+
model,
|
279 |
+
args,
|
280 |
+
train_dataset=train_ds,
|
281 |
+
eval_dataset=val_ds,
|
282 |
+
tokenizer=image_processor,
|
283 |
+
data_collator=collate_fn,
|
284 |
+
)
|
285 |
+
trainer.train()
|
286 |
+
```
|
287 |
+
|
288 |
+
## Share your model
|
289 |
+
|
290 |
+
Once training is complete, you can upload your model to the Hub with the [`~transformers.PreTrainedModel.push_to_hub`] method. You’ll need to login to your Hugging Face account first and enter your token when prompted.
|
291 |
+
|
292 |
+
```py
|
293 |
+
from huggingface_hub import notebook_login
|
294 |
+
|
295 |
+
notebook_login()
|
296 |
+
```
|
297 |
+
|
298 |
+
Call [`~transformers.PreTrainedModel.push_to_hub`] to save your model to your repository.
|
299 |
+
|
300 |
+
```py
|
301 |
+
model.push_to_hub(peft_model_id)
|
302 |
+
```
|
303 |
+
|
304 |
+
## Inference
|
305 |
+
|
306 |
+
Let's load the model from the Hub and test it out on a food image.
|
307 |
+
|
308 |
+
```py
|
309 |
+
from peft import PeftConfig, PeftModel
|
310 |
+
from transformers import AutoImageProcessor
|
311 |
+
from PIL import Image
|
312 |
+
import requests
|
313 |
+
|
314 |
+
config = PeftConfig.from_pretrained("stevhliu/vit-base-patch16-224-in21k-lora")
|
315 |
+
model = AutoModelForImageClassification.from_pretrained(
|
316 |
+
config.base_model_name_or_path,
|
317 |
+
label2id=label2id,
|
318 |
+
id2label=id2label,
|
319 |
+
ignore_mismatched_sizes=True,
|
320 |
+
)
|
321 |
+
model = PeftModel.from_pretrained(model, "stevhliu/vit-base-patch16-224-in21k-lora")
|
322 |
+
|
323 |
+
url = "https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/beignets.jpeg"
|
324 |
+
image = Image.open(requests.get(url, stream=True).raw)
|
325 |
+
image
|
326 |
+
```
|
327 |
+
|
328 |
+
<div class="flex justify-center">
|
329 |
+
<img src="https://huggingface.co/datasets/sayakpaul/sample-datasets/resolve/main/beignets.jpeg">
|
330 |
+
</div>
|
331 |
+
|
332 |
+
Convert the image to RGB and return the underlying PyTorch tensors.
|
333 |
+
|
334 |
+
```py
|
335 |
+
encoding = image_processor(image.convert("RGB"), return_tensors="pt")
|
336 |
+
```
|
337 |
+
|
338 |
+
Now run the model and return the predicted class!
|
339 |
+
|
340 |
+
```py
|
341 |
+
with torch.no_grad():
|
342 |
+
outputs = model(**encoding)
|
343 |
+
logits = outputs.logits
|
344 |
+
|
345 |
+
predicted_class_idx = logits.argmax(-1).item()
|
346 |
+
print("Predicted class:", model.config.id2label[predicted_class_idx])
|
347 |
+
"Predicted class: beignets"
|
348 |
+
```
|
peft_md_files/task_guides/prompt_based_methods.md
ADDED
@@ -0,0 +1,305 @@
1 |
+
<!--Copyright 2024 The HuggingFace Team. All rights reserved.
|
2 |
+
|
3 |
+
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
4 |
+
the License. You may obtain a copy of the License at
|
5 |
+
|
6 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
7 |
+
|
8 |
+
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
9 |
+
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
10 |
+
specific language governing permissions and limitations under the License.
|
11 |
+
|
12 |
+
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
|
13 |
+
rendered properly in your Markdown viewer.
|
14 |
+
|
15 |
+
-->
|
16 |
+
|
17 |
+
# Prompt-based methods
|
18 |
+
|
19 |
+
A prompt can describe a task or provide an example of a task you want the model to learn. Instead of manually creating these prompts, soft prompting methods add learnable parameters to the input embeddings that can be optimized for a specific task while keeping the pretrained model's parameters frozen. This makes it both faster and easier to finetune large language models (LLMs) for new downstream tasks.
|
20 |
+
|
21 |
+
The PEFT library supports several types of prompting methods (p-tuning, prefix tuning, prompt tuning) and you can learn more about how these methods work conceptually in the [Soft prompts](../conceptual_guides/prompting) guide. If you're interested in applying these methods to other tasks and use cases, take a look at our [notebook collection](https://huggingface.co/spaces/PEFT/soft-prompting)!
|
22 |
+
|
23 |
+
This guide will show you how to train a causal language model - with a soft prompting method - to *generate a classification* for whether a tweet is a complaint or not.
|
24 |
+
|
25 |
+
<Tip>
|
26 |
+
|
27 |
+
Some familiarity with the general process of training a causal language model would be really helpful and allow you to focus on the soft prompting methods. If you're new, we recommend taking a look at the [Causal language modeling](https://huggingface.co/docs/transformers/tasks/language_modeling) guide first from the Transformers documentation. When you're ready, come back and see how easy it is to drop PEFT in to your training!
|
28 |
+
|
29 |
+
</Tip>
|
30 |
+
|
31 |
+
Before you begin, make sure you have all the necessary libraries installed.
|
32 |
+
|
33 |
+
```bash
|
34 |
+
pip install -q peft transformers datasets
|
35 |
+
```
|
36 |
+
|
37 |
+
## Dataset
|
38 |
+
|
39 |
+
For this guide, you'll use the `twitter_complaints` subset of the [RAFT](https://huggingface.co/datasets/ought/raft) dataset. The `twitter_complaints` subset contains tweets labeled as `complaint` and `no complaint` and you can check out the [dataset viewer](https://huggingface.co/datasets/ought/raft/viewer/twitter_complaints) for a better idea of what the data looks like.
|
40 |
+
|
41 |
+
Use the [`~datasets.load_dataset`] function to load the dataset and create a new `text_label` column so it is easier to understand what the `Label` values `1` and `2` mean.
|
42 |
+
|
43 |
+
```py
|
44 |
+
from datasets import load_dataset
|
45 |
+
|
46 |
+
ds = load_dataset("ought/raft", "twitter_complaints")
|
47 |
+
|
48 |
+
classes = [k.replace("_", " ") for k in ds["train"].features["Label"].names]
|
49 |
+
ds = ds.map(
|
50 |
+
lambda x: {"text_label": [classes[label] for label in x["Label"]]},
|
51 |
+
batched=True,
|
52 |
+
num_proc=1,
|
53 |
+
)
|
54 |
+
ds["train"][0]
|
55 |
+
{"Tweet text": "@HMRCcustomers No this is my first job", "ID": 0, "Label": 2, "text_label": "no complaint"}
|
56 |
+
```
|
57 |
+
|
58 |
+
Load a tokenizer, define the padding token to use, and determine the maximum length of the tokenized label.
|
59 |
+
|
60 |
+
```py
|
61 |
+
from transformers import AutoTokenizer
|
62 |
+
|
63 |
+
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")
|
64 |
+
if tokenizer.pad_token_id is None:
|
65 |
+
tokenizer.pad_token_id = tokenizer.eos_token_id
|
66 |
+
target_max_length = max([len(tokenizer(class_label)["input_ids"]) for class_label in classes])
|
67 |
+
print(target_max_length)
|
68 |
+
```
|
69 |
+
|
70 |
+
Create a preprocessing function that tokenizes the tweet text and labels, pads the inputs and labels in each batch, creates an attention mask, and truncates sequences to the `max_length`. Then convert the `input_ids`, `attention_mask`, and `labels` to PyTorch tensors.
|
71 |
+
|
72 |
+
```py
|
73 |
+
import torch
|
74 |
+
|
75 |
+
max_length = 64
|
76 |
+
|
77 |
+
def preprocess_function(examples, text_column="Tweet text", label_column="text_label"):
|
78 |
+
batch_size = len(examples[text_column])
|
79 |
+
inputs = [f"{text_column} : {x} Label : " for x in examples[text_column]]
|
80 |
+
targets = [str(x) for x in examples[label_column]]
|
81 |
+
model_inputs = tokenizer(inputs)
|
82 |
+
labels = tokenizer(targets)
|
83 |
+
classes = [k.replace("_", " ") for k in ds["train"].features["Label"].names]
|
84 |
+
for i in range(batch_size):
|
85 |
+
sample_input_ids = model_inputs["input_ids"][i]
|
86 |
+
label_input_ids = labels["input_ids"][i]
|
87 |
+
model_inputs["input_ids"][i] = [tokenizer.pad_token_id] * (
|
88 |
+
max_length - len(sample_input_ids)
|
89 |
+
) + sample_input_ids
|
90 |
+
model_inputs["attention_mask"][i] = [0] * (max_length - len(sample_input_ids)) + model_inputs[
|
91 |
+
"attention_mask"
|
92 |
+
][i]
|
93 |
+
labels["input_ids"][i] = [-100] * (max_length - len(label_input_ids)) + label_input_ids
|
94 |
+
model_inputs["input_ids"][i] = torch.tensor(model_inputs["input_ids"][i][:max_length])
|
95 |
+
model_inputs["attention_mask"][i] = torch.tensor(model_inputs["attention_mask"][i][:max_length])
|
96 |
+
labels["input_ids"][i] = torch.tensor(labels["input_ids"][i][:max_length])
|
97 |
+
model_inputs["labels"] = labels["input_ids"]
|
98 |
+
return model_inputs
|
99 |
+
```
|
100 |
+
|
101 |
+
Apply the preprocessing function to the entire dataset with the [`~datasets.Dataset.map`] function, and remove the unprocessed columns because the model won't need them.
|
102 |
+
|
103 |
+
```py
|
104 |
+
processed_ds = ds.map(
|
105 |
+
preprocess_function,
|
106 |
+
batched=True,
|
107 |
+
num_proc=1,
|
108 |
+
remove_columns=ds["train"].column_names,
|
109 |
+
load_from_cache_file=False,
|
110 |
+
desc="Running tokenizer on dataset",
|
111 |
+
)
|
112 |
+
```
|
113 |
+
|
114 |
+
Finally, create a training and evaluation [`DataLoader`](https://pytorch.org/docs/stable/data.html#torch.utils.data.DataLoader). You can set `pin_memory=True` to speed up the data transfer to the GPU during training if the samples in your dataset are on a CPU.
|
115 |
+
|
116 |
+
```py
|
117 |
+
from torch.utils.data import DataLoader
|
118 |
+
from transformers import default_data_collator
|
119 |
+
|
120 |
+
train_ds = processed_ds["train"]
|
121 |
+
eval_ds = processed_ds["test"]
|
122 |
+
|
123 |
+
batch_size = 16
|
124 |
+
|
125 |
+
train_dataloader = DataLoader(train_ds, shuffle=True, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True)
|
126 |
+
eval_dataloader = DataLoader(eval_ds, collate_fn=default_data_collator, batch_size=batch_size, pin_memory=True)
|
127 |
+
```
|
128 |
+
|
129 |
+
## Model
|
130 |
+
|
131 |
+
Now let's load a pretrained model to use as the base model for the soft prompt method. This guide uses the [bigscience/bloomz-560m](https://huggingface.co/bigscience/bloomz-560m) model, but you can use any causal language model you want.
|
132 |
+
|
133 |
+
```py
|
134 |
+
from transformers import AutoModelForCausalLM
|
135 |
+
|
136 |
+
model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-560m")
|
137 |
+
```
|
138 |
+
|
139 |
+
### PEFT configuration and model
|
140 |
+
|
141 |
+
For any PEFT method, you'll need to create a configuration which contains all the parameters that specify how the PEFT method should be applied. Once the configuration is set up, pass it to the [`~peft.get_peft_model`] function along with the base model to create a trainable [`PeftModel`].
|
142 |
+
|
143 |
+
<Tip>
|
144 |
+
|
145 |
+
Call the [`~PeftModel.print_trainable_parameters`] method to compare the number of trainable parameters of [`PeftModel`] versus the number of parameters in the base model!
|
146 |
+
|
147 |
+
</Tip>
|
148 |
+
|
149 |
+
<hfoptions id="configurations">
|
150 |
+
<hfoption id="p-tuning">
|
151 |
+
|
152 |
+
[P-tuning](../conceptual_guides/prompting#p-tuning) adds a trainable embedding tensor where the prompt tokens can be added anywhere in the input sequence. Create a [`PromptEncoderConfig`] with the task type, the number of virtual tokens to add and learn, and the hidden size of the encoder for learning the prompt parameters.
|
153 |
+
|
154 |
+
```py
|
155 |
+
from peft import PromptEncoderConfig, get_peft_model
|
156 |
+
|
157 |
+
peft_config = PromptEncoderConfig(task_type="CAUSAL_LM", num_virtual_tokens=20, encoder_hidden_size=128)
|
158 |
+
model = get_peft_model(model, peft_config)
|
159 |
+
model.print_trainable_parameters()
|
160 |
+
"trainable params: 300,288 || all params: 559,514,880 || trainable%: 0.05366935013417338"
|
161 |
+
```
|
162 |
+
|
163 |
+
</hfoption>
|
164 |
+
<hfoption id="prefix tuning">
|
165 |
+
|
166 |
+
[Prefix tuning](../conceptual_guides/prompting#prefix-tuning) adds task-specific parameters in all of the model layers, which are optimized by a separate feed-forward network. Create a [`PrefixTuningConfig`] with the task type and number of virtual tokens to add and learn.
|
167 |
+
|
168 |
+
```py
|
169 |
+
from peft import PrefixTuningConfig, get_peft_model
|
170 |
+
|
171 |
+
peft_config = PrefixTuningConfig(task_type="CAUSAL_LM", num_virtual_tokens=20)
|
172 |
+
model = get_peft_model(model, peft_config)
|
173 |
+
model.print_trainable_parameters()
|
174 |
+
"trainable params: 983,040 || all params: 560,197,632 || trainable%: 0.1754809274167014"
|
175 |
+
```
|
176 |
+
|
177 |
+
</hfoption>
|
178 |
+
<hfoption id="prompt tuning">
|
179 |
+
|
180 |
+
[Prompt tuning](../conceptual_guides/prompting#prompt-tuning) formulates all tasks as a *generation* task and it adds a task-specific prompt to the input which is updated independently. The `prompt_tuning_init_text` parameter describes the task to finetune the model on (in this case, classifying whether tweets are complaints or not). For the best results, `num_virtual_tokens` should match the number of tokens in `prompt_tuning_init_text`, which is why the example below sets it to the tokenized length of the initialization text.
|
181 |
+
|
182 |
+
Create a [`PromptTuningConfig`] with the task type, the initial prompt tuning text to train the model with, the number of virtual tokens to add and learn, and a tokenizer.
|
183 |
+
|
184 |
+
```py
|
185 |
+
from peft import PromptTuningConfig, PromptTuningInit, get_peft_model
|
186 |
+
|
187 |
+
prompt_tuning_init_text = "Classify if the tweet is a complaint or no complaint.\n"
|
188 |
+
peft_config = PromptTuningConfig(
|
189 |
+
task_type="CAUSAL_LM",
|
190 |
+
prompt_tuning_init=PromptTuningInit.TEXT,
|
191 |
+
num_virtual_tokens=len(tokenizer(prompt_tuning_init_text)["input_ids"]),
|
192 |
+
prompt_tuning_init_text=prompt_tuning_init_text,
|
193 |
+
tokenizer_name_or_path="bigscience/bloomz-560m",
|
194 |
+
)
|
195 |
+
model = get_peft_model(model, peft_config)
|
196 |
+
model.print_trainable_parameters()
|
197 |
+
"trainable params: 8,192 || all params: 559,222,784 || trainable%: 0.0014648902430985358"
|
198 |
+
```
|
199 |
+
|
200 |
+
</hfoption>
|
201 |
+
</hfoptions>
|
202 |
+
|
203 |
+
### Training
|
204 |
+
|
205 |
+
Set up an optimizer and learning rate scheduler.
|
206 |
+
|
207 |
+
```py
|
208 |
+
from transformers import get_linear_schedule_with_warmup
|
209 |
+
|
210 |
+
lr = 3e-2
|
211 |
+
num_epochs = 50
|
212 |
+
|
213 |
+
optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
|
214 |
+
lr_scheduler = get_linear_schedule_with_warmup(
|
215 |
+
optimizer=optimizer,
|
216 |
+
num_warmup_steps=0,
|
217 |
+
num_training_steps=(len(train_dataloader) * num_epochs),
|
218 |
+
)
|
219 |
+
```
|
220 |
+
|
221 |
+
Move the model to the GPU and create a training loop that reports the loss and perplexity for each epoch.
|
222 |
+
|
223 |
+
```py
|
224 |
+
from tqdm import tqdm
|
225 |
+
|
226 |
+
device = "cuda"
|
227 |
+
model = model.to(device)
|
228 |
+
|
229 |
+
for epoch in range(num_epochs):
|
230 |
+
model.train()
|
231 |
+
total_loss = 0
|
232 |
+
for step, batch in enumerate(tqdm(train_dataloader)):
|
233 |
+
batch = {k: v.to(device) for k, v in batch.items()}
|
234 |
+
outputs = model(**batch)
|
235 |
+
loss = outputs.loss
|
236 |
+
total_loss += loss.detach().float()
|
237 |
+
loss.backward()
|
238 |
+
optimizer.step()
|
239 |
+
lr_scheduler.step()
|
240 |
+
optimizer.zero_grad()
|
241 |
+
|
242 |
+
model.eval()
|
243 |
+
eval_loss = 0
|
244 |
+
eval_preds = []
|
245 |
+
for step, batch in enumerate(tqdm(eval_dataloader)):
|
246 |
+
batch = {k: v.to(device) for k, v in batch.items()}
|
247 |
+
with torch.no_grad():
|
248 |
+
outputs = model(**batch)
|
249 |
+
loss = outputs.loss
|
250 |
+
eval_loss += loss.detach().float()
|
251 |
+
eval_preds.extend(
|
252 |
+
tokenizer.batch_decode(torch.argmax(outputs.logits, -1).detach().cpu().numpy(), skip_special_tokens=True)
|
253 |
+
)
|
254 |
+
|
255 |
+
eval_epoch_loss = eval_loss / len(eval_dataloader)
|
256 |
+
eval_ppl = torch.exp(eval_epoch_loss)
|
257 |
+
train_epoch_loss = total_loss / len(train_dataloader)
|
258 |
+
train_ppl = torch.exp(train_epoch_loss)
|
259 |
+
print(f"{epoch=}: {train_ppl=} {train_epoch_loss=} {eval_ppl=} {eval_epoch_loss=}")
|
260 |
+
```
|
261 |
+
|
262 |
+
## Share your model
|
263 |
+
|
264 |
+
Once training is complete, you can upload your model to the Hub with the [`~transformers.PreTrainedModel.push_to_hub`] method. You'll need to log in to your Hugging Face account first and enter your token when prompted.
|
265 |
+
|
266 |
+
```py
|
267 |
+
from huggingface_hub import notebook_login

notebook_login()

account = "<your-hf-account-name>"  # replace with your Hugging Face username
|
270 |
+
peft_model_id = f"{account}/bloomz-560-m-peft-method"
|
271 |
+
model.push_to_hub(peft_model_id)
|
272 |
+
```
|
273 |
+
|
274 |
+
If you check the model file size in the repository, you'll see that it is a lot smaller than a full-sized model!
|
275 |
+
|
276 |
+
<div class="flex flex-col justify-center">
|
277 |
+
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/PEFT-hub-screenshot.png"/>
|
278 |
+
<figcaption class="text-center">For example, the adapter weights for an opt-350m model stored on the Hub are only ~6MB compared to the full model size, which can be ~700MB.</figcaption>
|
279 |
+
</div>
|
280 |
+
|
281 |
+
## Inference
|
282 |
+
|
283 |
+
Let's load the model for inference and test it out on a tweet!
|
284 |
+
|
285 |
+
```py
|
286 |
+
from peft import AutoPeftModelForCausalLM
|
287 |
+
|
288 |
+
model = AutoPeftModelForCausalLM.from_pretrained(peft_model_id).to("cuda")
|
289 |
+
tokenizer = AutoTokenizer.from_pretrained("bigscience/bloomz-560m")
|
290 |
+
|
291 |
+
i = 15
|
292 |
+
inputs = tokenizer(f'Tweet text : {ds["test"][i]["Tweet text"]} Label : ', return_tensors="pt")
|
293 |
+
print(ds["test"][i]["Tweet text"])
|
294 |
+
"@NYTsupport i have complained a dozen times & yet my papers are still thrown FAR from my door. Why is this so hard to resolve?"
|
295 |
+
```
|
296 |
+
|
297 |
+
Call the [`~transformers.GenerationMixin.generate`] method to generate the predicted classification label.
|
298 |
+
|
299 |
+
```py
|
300 |
+
with torch.no_grad():
|
301 |
+
inputs = {k: v.to(device) for k, v in inputs.items()}
|
302 |
+
outputs = model.generate(input_ids=inputs["input_ids"], max_new_tokens=10)
|
303 |
+
print(tokenizer.batch_decode(outputs.detach().cpu().numpy(), skip_special_tokens=True))
|
304 |
+
"['Tweet text : @NYTsupport i have complained a dozen times & yet my papers are still thrown FAR from my door. Why is this so hard to resolve? Label : complaint']"
|
305 |
+
```
|
peft_md_files/tutorial/peft_integrations.md
ADDED
@@ -0,0 +1,152 @@
1 |
+
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
|
2 |
+
|
3 |
+
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
4 |
+
the License. You may obtain a copy of the License at
|
5 |
+
|
6 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
7 |
+
|
8 |
+
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
9 |
+
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
10 |
+
specific language governing permissions and limitations under the License.
|
11 |
+
|
12 |
+
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
|
13 |
+
rendered properly in your Markdown viewer.
|
14 |
+
|
15 |
+
-->
|
16 |
+
|
17 |
+
# PEFT integrations
|
18 |
+
|
19 |
+
PEFT's practical benefits extend to other Hugging Face libraries like [Diffusers](https://hf.co/docs/diffusers) and [Transformers](https://hf.co/docs/transformers). One of the main benefits of PEFT is that an adapter file generated by a PEFT method is a lot smaller than the original model, which makes it super easy to manage and use multiple adapters. You can use one pretrained base model for multiple tasks by simply loading a new adapter finetuned for the task you're solving. Or you can combine multiple adapters with a text-to-image diffusion model to create new effects.
|
20 |
+
|
21 |
+
This tutorial will show you how PEFT can help you manage adapters in Diffusers and Transformers.
|
22 |
+
|
23 |
+
## Diffusers
|
24 |
+
|
25 |
+
Diffusers is a generative AI library for creating images and videos from text or images with diffusion models. LoRA is an especially popular training method for diffusion models because you can very quickly train and share diffusion models to generate images in new styles. To make it easier to use and try multiple LoRA models, Diffusers uses the PEFT library to help manage different adapters for inference.
|
26 |
+
|
27 |
+
For example, load a base model and then load the [artificialguybr/3DRedmond-V1](https://huggingface.co/artificialguybr/3DRedmond-V1) adapter for inference with the [`load_lora_weights`](https://huggingface.co/docs/diffusers/v0.24.0/en/api/loaders/lora#diffusers.loaders.LoraLoaderMixin.load_lora_weights) method. The `adapter_name` argument in the loading method is enabled by PEFT and allows you to set a name for the adapter so it is easier to reference.
|
28 |
+
|
29 |
+
```py
|
30 |
+
import torch
|
31 |
+
from diffusers import DiffusionPipeline
|
32 |
+
|
33 |
+
pipeline = DiffusionPipeline.from_pretrained(
|
34 |
+
"stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
|
35 |
+
).to("cuda")
|
36 |
+
pipeline.load_lora_weights(
|
37 |
+
"peft-internal-testing/artificialguybr__3DRedmond-V1",
|
38 |
+
weight_name="3DRedmond-3DRenderStyle-3DRenderAF.safetensors",
|
39 |
+
adapter_name="3d"
|
40 |
+
)
|
41 |
+
image = pipeline("sushi rolls shaped like kawaii cat faces").images[0]
|
42 |
+
image
|
43 |
+
```
|
44 |
+
|
45 |
+
<div class="flex justify-center">
|
46 |
+
<img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/test-lora-diffusers.png"/>
|
47 |
+
</div>
|
48 |
+
|
49 |
+
Now let's try another cool LoRA model, [ostris/super-cereal-sdxl-lora](https://huggingface.co/ostris/super-cereal-sdxl-lora). All you need to do is load and name this new adapter with `adapter_name`, and use the [`set_adapters`](https://huggingface.co/docs/diffusers/api/loaders/unet#diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters) method to set it as the currently active adapter.
|
50 |
+
|
51 |
+
```py
|
52 |
+
pipeline.load_lora_weights(
|
53 |
+
"ostris/super-cereal-sdxl-lora",
|
54 |
+
weight_name="cereal_box_sdxl_v1.safetensors",
|
55 |
+
adapter_name="cereal"
|
56 |
+
)
|
57 |
+
pipeline.set_adapters("cereal")
|
58 |
+
image = pipeline("sushi rolls shaped like kawaii cat faces").images[0]
|
59 |
+
image
|
60 |
+
```
|
61 |
+
|
62 |
+
<div class="flex justify-center">
|
63 |
+
<img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/test-lora-diffusers-2.png"/>
|
64 |
+
</div>
|
65 |
+
|
66 |
+
Finally, you can call the [`disable_lora`](https://huggingface.co/docs/diffusers/api/loaders/unet#diffusers.loaders.UNet2DConditionLoadersMixin.disable_lora) method to restore the base model.
|
67 |
+
|
68 |
+
```py
|
69 |
+
pipeline.disable_lora()
|
70 |
+
```
|
71 |
+
|
72 |
+
Learn more about how PEFT supports Diffusers in the [Inference with PEFT](https://huggingface.co/docs/diffusers/tutorials/using_peft_for_inference) tutorial.
|
73 |
+
|
74 |
+
## Transformers
|
75 |
+
|
76 |
+
🤗 [Transformers](https://hf.co/docs/transformers) is a collection of pretrained models for all types of tasks in all modalities. You can load these models for training or inference. Many of the models are large language models (LLMs), so it makes sense to integrate PEFT with Transformers to manage and train adapters.
|
77 |
+
|
78 |
+
Load a base pretrained model to train.
|
79 |
+
|
80 |
+
```py
|
81 |
+
from transformers import AutoModelForCausalLM
|
82 |
+
|
83 |
+
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
|
84 |
+
```
|
85 |
+
|
86 |
+
Next, add an adapter configuration to specify how to adapt the model parameters. Call the [`~PeftModel.add_adapter`] method to add the configuration to the base model.
|
87 |
+
|
88 |
+
```py
|
89 |
+
from peft import LoraConfig
|
90 |
+
|
91 |
+
peft_config = LoraConfig(
|
92 |
+
lora_alpha=16,
|
93 |
+
lora_dropout=0.1,
|
94 |
+
r=64,
|
95 |
+
bias="none",
|
96 |
+
task_type="CAUSAL_LM"
|
97 |
+
)
|
98 |
+
model.add_adapter(peft_config)
|
99 |
+
```
|
100 |
+
|
101 |
+
Now you can train the model with Transformers' [`~transformers.Trainer`] class or whichever training framework you prefer.
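For instance, a minimal [`~transformers.Trainer`] setup might look like the sketch below; the `train_dataset`, `data_collator`, and hyperparameter values here are placeholders you would replace with your own data pipeline and settings.

```py
from transformers import Trainer, TrainingArguments

# minimal sketch: swap in your own dataset, collator, and hyperparameters
training_args = TrainingArguments(
    output_dir="opt-350m-lora",
    per_device_train_batch_size=4,
    num_train_epochs=1,
)
trainer = Trainer(
    model=model,                  # the base model with the adapter added above
    args=training_args,
    train_dataset=train_dataset,  # placeholder: your prepared dataset
    data_collator=data_collator,  # placeholder: your data collator
)
trainer.train()
```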
|
102 |
+
|
103 |
+
To use the newly trained model for inference, the [`~transformers.AutoModel`] class uses PEFT on the backend to load the adapter weights and configuration file into a base pretrained model.
|
104 |
+
|
105 |
+
```py
|
106 |
+
from transformers import AutoModelForCausalLM
|
107 |
+
|
108 |
+
model = AutoModelForCausalLM.from_pretrained("peft-internal-testing/opt-350m-lora")
|
109 |
+
```
|
110 |
+
|
111 |
+
Alternatively, you can use the Transformers [Pipelines](https://huggingface.co/docs/transformers/en/main_classes/pipelines) API to load the model and conveniently run inference:
|
112 |
+
|
113 |
+
```py
|
114 |
+
from transformers import pipeline
|
115 |
+
|
116 |
+
model = pipeline("text-generation", "peft-internal-testing/opt-350m-lora")
|
117 |
+
print(model("Hello World"))
|
118 |
+
```
|
119 |
+
|
120 |
+
If you're interested in comparing or using more than one adapter, you can call the [`~PeftModel.add_adapter`] method to add the adapter configuration to the base model. The only requirement is that the adapter types must be the same (you can't mix a LoRA and a LoHa adapter). The `LoraConfig` values in the snippets below are illustrative; adjust them for your own task.
|
121 |
+
|
122 |
+
```py
|
123 |
+
from transformers import AutoModelForCausalLM
|
124 |
+
from peft import LoraConfig
|
125 |
+
|
126 |
+
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
|
127 |
+
# illustrative example config for the first adapter
lora_config_1 = LoraConfig(r=16, lora_alpha=16, task_type="CAUSAL_LM")
model.add_adapter(lora_config_1, adapter_name="adapter_1")
|
128 |
+
```
|
129 |
+
|
130 |
+
Call [`~PeftModel.add_adapter`] again to attach a new adapter to the base model.
|
131 |
+
|
132 |
+
```py
|
133 |
+
# a second illustrative config, e.g. with a different rank
lora_config_2 = LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM")
model.add_adapter(lora_config_2, adapter_name="adapter_2")
|
134 |
+
```
|
135 |
+
|
136 |
+
Then you can use [`~PeftModel.set_adapter`] to set the currently active adapter.
|
137 |
+
|
138 |
+
```py
|
139 |
+
model.set_adapter("adapter_1")
|
140 |
+
output = model.generate(**inputs)
|
141 |
+
print(tokenizer.decode(output[0], skip_special_tokens=True))
|
142 |
+
```
|
143 |
+
|
144 |
+
To disable the adapter, call the [disable_adapters](https://github.com/huggingface/transformers/blob/4e3490f79b40248c53ee54365a9662611e880892/src/transformers/integrations/peft.py#L313) method.
|
145 |
+
|
146 |
+
```py
|
147 |
+
model.disable_adapters()
|
148 |
+
```
|
149 |
+
|
150 |
+
The [enable_adapters](https://github.com/huggingface/transformers/blob/4e3490f79b40248c53ee54365a9662611e880892/src/transformers/integrations/peft.py#L336) method can be used to enable the adapters again.
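```py
model.enable_adapters()
```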
|
151 |
+
|
152 |
+
If you're curious, check out the [Load and train adapters with PEFT](https://huggingface.co/docs/transformers/main/peft) tutorial to learn more.
|
peft_md_files/tutorial/peft_model_config.md
ADDED
@@ -0,0 +1,182 @@
1 |
+
<!--Copyright 2023 The HuggingFace Team. All rights reserved.
|
2 |
+
|
3 |
+
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
|
4 |
+
the License. You may obtain a copy of the License at
|
5 |
+
|
6 |
+
http://www.apache.org/licenses/LICENSE-2.0
|
7 |
+
|
8 |
+
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
|
9 |
+
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
|
10 |
+
specific language governing permissions and limitations under the License.
|
11 |
+
|
12 |
+
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be
|
13 |
+
rendered properly in your Markdown viewer.
|
14 |
+
|
15 |
+
-->
|
16 |
+
|
17 |
+
# PEFT configurations and models
|
18 |
+
|
19 |
+
The sheer size of today's large pretrained models - which commonly have billions of parameters - presents a significant training challenge because they require more storage space and more computational power to crunch all those calculations. You'll need access to powerful GPUs or TPUs to train these large pretrained models, which is expensive, not widely accessible to everyone, not environmentally friendly, and not very practical. PEFT methods address many of these challenges. There are several types of PEFT methods (soft prompting, matrix decomposition, adapters), but they all focus on the same thing: reducing the number of trainable parameters. This makes it more accessible to train and store large models on consumer hardware.
|
20 |
+
|
21 |
+
The PEFT library is designed to help you quickly train large models on free or low-cost GPUs, and in this tutorial, you'll learn how to set up a configuration to apply a PEFT method to a pretrained base model for training. Once the PEFT configuration is set up, you can use any training framework you like (Transformers' [`~transformers.Trainer`] class, [Accelerate](https://hf.co/docs/accelerate), a custom PyTorch training loop).
|
22 |
+
|
23 |
+
## PEFT configurations
|
24 |
+
|
25 |
+
<Tip>
|
26 |
+
|
27 |
+
Learn more about the parameters you can configure for each PEFT method in their respective API reference page.
|
28 |
+
|
29 |
+
</Tip>
|
30 |
+
|
31 |
+
A configuration stores important parameters that specify how a particular PEFT method should be applied.
|
32 |
+
|
33 |
+
For example, take a look at the following [`LoraConfig`](https://huggingface.co/ybelkada/opt-350m-lora/blob/main/adapter_config.json) for applying LoRA and [`PromptEncoderConfig`](https://huggingface.co/smangrul/roberta-large-peft-p-tuning/blob/main/adapter_config.json) for applying p-tuning (these configuration files are already JSON-serialized). Whenever you load a PEFT adapter, it is a good idea to check whether it has an associated adapter_config.json file, which is required (a quick way to check for it is shown after the examples below).
|
34 |
+
|
35 |
+
<hfoptions id="config">
|
36 |
+
<hfoption id="LoraConfig">
|
37 |
+
|
38 |
+
```json
|
39 |
+
{
|
40 |
+
"base_model_name_or_path": "facebook/opt-350m", #base model to apply LoRA to
|
41 |
+
"bias": "none",
|
42 |
+
"fan_in_fan_out": false,
|
43 |
+
"inference_mode": true,
|
44 |
+
"init_lora_weights": true,
|
45 |
+
"layers_pattern": null,
|
46 |
+
"layers_to_transform": null,
|
47 |
+
"lora_alpha": 32,
|
48 |
+
"lora_dropout": 0.05,
|
49 |
+
"modules_to_save": null,
|
50 |
+
"peft_type": "LORA", #PEFT method type
|
51 |
+
"r": 16,
|
52 |
+
"revision": null,
|
53 |
+
"target_modules": [
|
54 |
+
"q_proj", #model modules to apply LoRA to (query and value projection layers)
|
55 |
+
"v_proj"
|
56 |
+
],
|
57 |
+
"task_type": "CAUSAL_LM" #type of task to train model on
|
58 |
+
}
|
59 |
+
```
|
60 |
+
|
61 |
+
You can create your own configuration for training by initializing a [`LoraConfig`].
|
62 |
+
|
63 |
+
```py
|
64 |
+
from peft import LoraConfig, TaskType
|
65 |
+
|
66 |
+
lora_config = LoraConfig(
|
67 |
+
r=16,
|
68 |
+
target_modules=["q_proj", "v_proj"],
|
69 |
+
task_type=TaskType.CAUSAL_LM,
|
70 |
+
lora_alpha=32,
|
71 |
+
lora_dropout=0.05
|
72 |
+
)
|
73 |
+
```
|
74 |
+
|
75 |
+
</hfoption>
|
76 |
+
<hfoption id="PromptEncoderConfig">
|
77 |
+
|
78 |
+
```json
|
79 |
+
{
|
80 |
+
"base_model_name_or_path": "roberta-large", #base model to apply p-tuning to
|
81 |
+
"encoder_dropout": 0.0,
|
82 |
+
"encoder_hidden_size": 128,
|
83 |
+
"encoder_num_layers": 2,
|
84 |
+
"encoder_reparameterization_type": "MLP",
|
85 |
+
"inference_mode": true,
|
86 |
+
"num_attention_heads": 16,
|
87 |
+
"num_layers": 24,
|
88 |
+
"num_transformer_submodules": 1,
|
89 |
+
"num_virtual_tokens": 20,
|
90 |
+
"peft_type": "P_TUNING", #PEFT method type
|
91 |
+
"task_type": "SEQ_CLS", #type of task to train model on
|
92 |
+
"token_dim": 1024
|
93 |
+
}
|
94 |
+
```
|
95 |
+
|
96 |
+
You can create your own configuration for training by initializing a [`PromptEncoderConfig`].
|
97 |
+
|
98 |
+
```py
|
99 |
+
from peft import PromptEncoderConfig, TaskType
|
100 |
+
|
101 |
+
p_tuning_config = PromptEncoderConfig(
|
102 |
+
encoder_reparameterization_type="MLP",
|
103 |
+
encoder_hidden_size=128,
|
104 |
+
num_attention_heads=16,
|
105 |
+
num_layers=24,
|
106 |
+
num_transformer_submodules=1,
|
107 |
+
num_virtual_tokens=20,
|
108 |
+
token_dim=1024,
|
109 |
+
task_type=TaskType.SEQ_CLS
|
110 |
+
)
|
111 |
+
```
|
112 |
+
|
113 |
+
</hfoption>
|
114 |
+
</hfoptions>
|
115 |
+
|
116 |
+
## PEFT models
|
117 |
+
|
118 |
+
With a PEFT configuration in hand, you can now apply it to any pretrained model to create a [`PeftModel`]. Choose from any of the state-of-the-art models from the [Transformers](https://hf.co/docs/transformers) library, a custom model, or even new and unsupported transformer architectures.
|
119 |
+
|
120 |
+
For this tutorial, load a base [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) model to finetune.
|
121 |
+
|
122 |
+
```py
|
123 |
+
from transformers import AutoModelForCausalLM
|
124 |
+
|
125 |
+
model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
|
126 |
+
```
|
127 |
+
|
128 |
+
Use the [`get_peft_model`] function to create a [`PeftModel`] from the base facebook/opt-350m model and the `lora_config` you created earlier.
|
129 |
+
|
130 |
+
```py
|
131 |
+
from peft import get_peft_model
|
132 |
+
|
133 |
+
lora_model = get_peft_model(model, lora_config)
|
134 |
+
lora_model.print_trainable_parameters()
|
135 |
+
"trainable params: 1,572,864 || all params: 332,769,280 || trainable%: 0.472659014678278"
|
136 |
+
```
|
137 |
+
|
138 |
+
Now you can train the [`PeftModel`] with your preferred training framework! After training, you can save your model locally with [`~PeftModel.save_pretrained`] or upload it to the Hub with the [`~transformers.PreTrainedModel.push_to_hub`] method.
|
139 |
+
|
140 |
+
```py
|
141 |
+
# save locally
|
142 |
+
lora_model.save_pretrained("your-name/opt-350m-lora")
|
143 |
+
|
144 |
+
# push to Hub
|
145 |
+
lora_model.push_to_hub("your-name/opt-350m-lora")
|
146 |
+
```
|
147 |
+
|
148 |
+
To load a [`PeftModel`] for inference, you'll need to provide the [`PeftConfig`] used to create it and the base model it was trained from.
|
149 |
+
|
150 |
+
```py
|
151 |
+
from peft import PeftModel, PeftConfig
|
152 |
+
|
153 |
+
config = PeftConfig.from_pretrained("ybelkada/opt-350m-lora")
|
154 |
+
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
|
155 |
+
lora_model = PeftModel.from_pretrained(model, "ybelkada/opt-350m-lora")
|
156 |
+
```
|
157 |
+
|
158 |
+
<Tip>
|
159 |
+
|
160 |
+
By default, the [`PeftModel`] is set for inference, but if you'd like to train the adapter some more, you can set `is_trainable=True`.
|
161 |
+
|
162 |
+
```py
|
163 |
+
lora_model = PeftModel.from_pretrained(model, "ybelkada/opt-350m-lora", is_trainable=True)
|
164 |
+
```
|
165 |
+
|
166 |
+
</Tip>
|
167 |
+
|
168 |
+
The [`PeftModel.from_pretrained`] method is the most flexible way to load a [`PeftModel`] because it doesn't matter what model framework was used (Transformers, timm, a generic PyTorch model). Other classes, like [`AutoPeftModel`], are just convenient wrappers around the base [`PeftModel`], and make it easier to load PEFT models directly from the Hub or locally where the PEFT weights are stored.
|
169 |
+
|
170 |
+
```py
|
171 |
+
from peft import AutoPeftModelForCausalLM
|
172 |
+
|
173 |
+
lora_model = AutoPeftModelForCausalLM.from_pretrained("ybelkada/opt-350m-lora")
|
174 |
+
```
|
175 |
+
|
176 |
+
Take a look at the [AutoPeftModel](package_reference/auto_class) API reference to learn more about the [`AutoPeftModel`] classes.
|
177 |
+
|
178 |
+
## Next steps
|
179 |
+
|
180 |
+
Once you have the appropriate [`PeftConfig`], you can apply it to any pretrained model to create a [`PeftModel`] and train large, powerful models faster on freely available GPUs! To learn more about PEFT configurations and models, the following guide may be helpful:
|
181 |
+
|
182 |
+
* Learn how to configure a PEFT method for models that aren't from Transformers in the [Working with custom models](../developer_guides/custom_models) guide.
|