omarsol committed
Commit 02c2286
1 Parent(s): af50d9c

peft_md_files/tutorial/peft_integrations.md DELETED
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# PEFT integrations

PEFT's practical benefits extend to other Hugging Face libraries like [Diffusers](https://hf.co/docs/diffusers) and [Transformers](https://hf.co/docs/transformers). One of the main benefits of PEFT is that an adapter file generated by a PEFT method is much smaller than the original model, which makes it easy to manage and use multiple adapters. You can use one pretrained base model for multiple tasks by simply loading a new adapter finetuned for the task you're solving, or you can combine multiple adapters with a text-to-image diffusion model to create new effects.

This tutorial will show you how PEFT can help you manage adapters in Diffusers and Transformers.

## Diffusers

Diffusers is a generative AI library for creating images and videos from text or images with diffusion models. LoRA is an especially popular training method for diffusion models because you can quickly train and share diffusion models that generate images in new styles. To make it easier to use and try multiple LoRA models, Diffusers uses the PEFT library to help manage different adapters for inference.

For example, load a base model and then load the [artificialguybr/3DRedmond-V1](https://huggingface.co/artificialguybr/3DRedmond-V1) adapter for inference with the [`load_lora_weights`](https://huggingface.co/docs/diffusers/v0.24.0/en/api/loaders/lora#diffusers.loaders.LoraLoaderMixin.load_lora_weights) method. The `adapter_name` argument in the loading method is enabled by PEFT and allows you to set a name for the adapter so it is easier to reference.

```py
import torch
from diffusers import DiffusionPipeline

pipeline = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
pipeline.load_lora_weights(
    "peft-internal-testing/artificialguybr__3DRedmond-V1",
    weight_name="3DRedmond-3DRenderStyle-3DRenderAF.safetensors",
    adapter_name="3d"
)
image = pipeline("sushi rolls shaped like kawaii cat faces").images[0]
image
```

<div class="flex justify-center">
  <img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/test-lora-diffusers.png"/>
</div>

Now let's try another cool LoRA model, [ostris/super-cereal-sdxl-lora](https://huggingface.co/ostris/super-cereal-sdxl-lora). All you need to do is load and name this new adapter with `adapter_name`, and use the [`set_adapters`](https://huggingface.co/docs/diffusers/api/loaders/unet#diffusers.loaders.UNet2DConditionLoadersMixin.set_adapters) method to set it as the currently active adapter.

```py
pipeline.load_lora_weights(
    "ostris/super-cereal-sdxl-lora",
    weight_name="cereal_box_sdxl_v1.safetensors",
    adapter_name="cereal"
)
pipeline.set_adapters("cereal")
image = pipeline("sushi rolls shaped like kawaii cat faces").images[0]
image
```

<div class="flex justify-center">
  <img src="https://huggingface.co/datasets/ybelkada/documentation-images/resolve/main/test-lora-diffusers-2.png"/>
</div>
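
Because both adapters are loaded into the same pipeline, you can also activate them together and blend their styles with `set_adapters`. A minimal sketch, assuming the `3d` and `cereal` adapters loaded above; the blending weights are illustrative:

```py
# activate both adapters at once, weighting each one's contribution
pipeline.set_adapters(["3d", "cereal"], adapter_weights=[0.7, 0.5])
image = pipeline("sushi rolls shaped like kawaii cat faces").images[0]
image
```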

Finally, you can call the [`disable_lora`](https://huggingface.co/docs/diffusers/api/loaders/unet#diffusers.loaders.UNet2DConditionLoadersMixin.disable_lora) method to restore the base model.

```py
pipeline.disable_lora()
```

Learn more about how PEFT supports Diffusers in the [Inference with PEFT](https://huggingface.co/docs/diffusers/tutorials/using_peft_for_inference) tutorial.

## Transformers

🤗 [Transformers](https://hf.co/docs/transformers) is a collection of pretrained models for all types of tasks in all modalities. You can load these models for training or inference. Many of the models are large language models (LLMs), so it makes sense to integrate PEFT with Transformers to manage and train adapters.

Load a base pretrained model to train.

```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
```

Next, add an adapter configuration to specify how to adapt the model parameters. Call the [`~PeftModel.add_adapter`] method to add the configuration to the base model.

```py
from peft import LoraConfig

peft_config = LoraConfig(
    lora_alpha=16,
    lora_dropout=0.1,
    r=64,
    bias="none",
    task_type="CAUSAL_LM"
)
model.add_adapter(peft_config)
```

Now you can train the model with Transformers' [`~transformers.Trainer`] class or whichever training framework you prefer.
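
For example, a minimal sketch of such a training run; the dataset and hyperparameters below are placeholders rather than part of the original recipe:

```py
from transformers import Trainer, TrainingArguments

trainer = Trainer(
    model=model,  # the base model with the LoRA adapter attached above
    args=TrainingArguments(output_dir="opt-350m-lora", num_train_epochs=1),
    train_dataset=train_dataset,  # assumes a tokenized dataset you've prepared
)
trainer.train()
```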

To use the newly trained model for inference, the [`~transformers.AutoModel`] class uses PEFT under the hood to load the adapter weights and configuration file into a base pretrained model.

```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("peft-internal-testing/opt-350m-lora")
```

Alternatively, you can use the Transformers [Pipelines](https://huggingface.co/docs/transformers/en/main_classes/pipelines) API to load the model and conveniently run inference:

```py
from transformers import pipeline

model = pipeline("text-generation", "peft-internal-testing/opt-350m-lora")
print(model("Hello World"))
```

If you're interested in comparing or using more than one adapter, you can call the [`~PeftModel.add_adapter`] method to add the adapter configuration to the base model. The only requirement is that the adapter type must be the same (you can't mix a LoRA and LoHa adapter).

```py
from transformers import AutoModelForCausalLM
from peft import LoraConfig

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
lora_config_1 = LoraConfig(r=64, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")  # example values
model.add_adapter(lora_config_1, adapter_name="adapter_1")
```

Call [`~PeftModel.add_adapter`] again to attach a new adapter to the base model.

```py
lora_config_2 = LoraConfig(r=32, lora_alpha=16, lora_dropout=0.1, task_type="CAUSAL_LM")  # example values
model.add_adapter(lora_config_2, adapter_name="adapter_2")
```

Then you can use [`~PeftModel.set_adapter`] to set the currently active adapter.

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")
inputs = tokenizer("Hello World", return_tensors="pt")

model.set_adapter("adapter_1")
output = model.generate(**inputs)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

To disable the adapter, call the [disable_adapters](https://github.com/huggingface/transformers/blob/4e3490f79b40248c53ee54365a9662611e880892/src/transformers/integrations/peft.py#L313) method.

```py
model.disable_adapters()
```

The [enable_adapters](https://github.com/huggingface/transformers/blob/4e3490f79b40248c53ee54365a9662611e880892/src/transformers/integrations/peft.py#L336) method can be used to enable the adapters again.
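
```py
model.enable_adapters()
```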

If you're curious, check out the [Load and train adapters with PEFT](https://huggingface.co/docs/transformers/main/peft) tutorial to learn more.

peft_md_files/tutorial/peft_model_config.md DELETED
<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->

# PEFT configurations and models

The sheer size of today's large pretrained models - which commonly have billions of parameters - presents a significant training challenge because they require more storage space and more computational power to crunch all those calculations. You'll need access to powerful GPUs or TPUs to train these large pretrained models, which is expensive, not widely accessible to everyone, not environmentally friendly, and not very practical. PEFT methods address many of these challenges. There are several types of PEFT methods (soft prompting, matrix decomposition, adapters), but they all focus on the same thing: reducing the number of trainable parameters. This makes it more accessible to train and store large models on consumer hardware.

The PEFT library is designed to help you quickly train large models on free or low-cost GPUs, and in this tutorial, you'll learn how to set up a configuration to apply a PEFT method to a pretrained base model for training. Once the PEFT configuration is set up, you can use any training framework you like (Transformers' [`~transformers.Trainer`] class, [Accelerate](https://hf.co/docs/accelerate), a custom PyTorch training loop).

## PEFT configurations

<Tip>

Learn more about the parameters you can configure for each PEFT method in their respective API reference page.

</Tip>

A configuration stores important parameters that specify how a particular PEFT method should be applied.

For example, take a look at the following [`LoraConfig`](https://huggingface.co/ybelkada/opt-350m-lora/blob/main/adapter_config.json) for applying LoRA and [`PromptEncoderConfig`](https://huggingface.co/smangrul/roberta-large-peft-p-tuning/blob/main/adapter_config.json) for applying p-tuning (these configuration files are already JSON-serialized). Whenever you load a PEFT adapter, it is a good idea to check whether it has an associated `adapter_config.json` file, which is required.

<hfoptions id="config">
<hfoption id="LoraConfig">

```json
{
  "base_model_name_or_path": "facebook/opt-350m", #base model to apply LoRA to
  "bias": "none",
  "fan_in_fan_out": false,
  "inference_mode": true,
  "init_lora_weights": true,
  "layers_pattern": null,
  "layers_to_transform": null,
  "lora_alpha": 32,
  "lora_dropout": 0.05,
  "modules_to_save": null,
  "peft_type": "LORA", #PEFT method type
  "r": 16,
  "revision": null,
  "target_modules": [
    "q_proj", #model modules to apply LoRA to (query and value projection layers)
    "v_proj"
  ],
  "task_type": "CAUSAL_LM" #type of task to train model on
}
```

You can create your own configuration for training by initializing a [`LoraConfig`].

```py
from peft import LoraConfig, TaskType

lora_config = LoraConfig(
    r=16,
    target_modules=["q_proj", "v_proj"],
    task_type=TaskType.CAUSAL_LM,
    lora_alpha=32,
    lora_dropout=0.05
)
```
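
As a quick check, you can serialize this configuration yourself; calling `save_pretrained` on the config writes an `adapter_config.json` like the one shown above (the directory name here is just an example):

```py
lora_config.save_pretrained("./opt-350m-lora-config")  # writes adapter_config.json
```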

</hfoption>
<hfoption id="PromptEncoderConfig">

```json
{
  "base_model_name_or_path": "roberta-large", #base model to apply p-tuning to
  "encoder_dropout": 0.0,
  "encoder_hidden_size": 128,
  "encoder_num_layers": 2,
  "encoder_reparameterization_type": "MLP",
  "inference_mode": true,
  "num_attention_heads": 16,
  "num_layers": 24,
  "num_transformer_submodules": 1,
  "num_virtual_tokens": 20,
  "peft_type": "P_TUNING", #PEFT method type
  "task_type": "SEQ_CLS", #type of task to train model on
  "token_dim": 1024
}
```

You can create your own configuration for training by initializing a [`PromptEncoderConfig`].

```py
from peft import PromptEncoderConfig, TaskType

p_tuning_config = PromptEncoderConfig(
    encoder_reparameterization_type="MLP",
    encoder_hidden_size=128,
    num_attention_heads=16,
    num_layers=24,
    num_transformer_submodules=1,
    num_virtual_tokens=20,
    token_dim=1024,
    task_type=TaskType.SEQ_CLS
)
```

</hfoption>
</hfoptions>
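
To inspect the stored configuration of an existing adapter on the Hub, a short sketch with [`PeftConfig.from_pretrained`] (using the LoRA adapter linked above):

```py
from peft import PeftConfig

config = PeftConfig.from_pretrained("ybelkada/opt-350m-lora")
print(config.peft_type, config.base_model_name_or_path)
```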

## PEFT models

With a PEFT configuration in hand, you can now apply it to any pretrained model to create a [`PeftModel`]. Choose from any of the state-of-the-art models from the [Transformers](https://hf.co/docs/transformers) library, a custom model, or even new and unsupported transformer architectures.

For this tutorial, load a base [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) model to finetune.

```py
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("facebook/opt-350m")
```

Use the [`get_peft_model`] function to create a [`PeftModel`] from the base facebook/opt-350m model and the `lora_config` you created earlier.

```py
from peft import get_peft_model

lora_model = get_peft_model(model, lora_config)
lora_model.print_trainable_parameters()
"trainable params: 1,572,864 || all params: 332,769,280 || trainable%: 0.472659014678278"
```
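
As a sanity check on that count (assuming OPT-350m's 24 decoder layers with 1024-dimensional hidden states): LoRA with `r=16` adds a pair of low-rank matrices totalling `r × (d_in + d_out)` parameters per targeted module, so 16 × (1024 + 1024) = 32,768 parameters for each of the 2 targeted projections (`q_proj`, `v_proj`) in each of the 24 layers, and 32,768 × 2 × 24 = 1,572,864 trainable parameters.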

Now you can train the [`PeftModel`] with your preferred training framework! After training, you can save your model locally with [`~PeftModel.save_pretrained`] or upload it to the Hub with the [`~transformers.PreTrainedModel.push_to_hub`] method.

```py
# save locally
lora_model.save_pretrained("your-name/opt-350m-lora")

# push to Hub
lora_model.push_to_hub("your-name/opt-350m-lora")
```

To load a [`PeftModel`] for inference, you'll need to provide the [`PeftConfig`] used to create it and the base model it was trained from.

```py
from peft import PeftModel, PeftConfig

config = PeftConfig.from_pretrained("ybelkada/opt-350m-lora")
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path)
lora_model = PeftModel.from_pretrained(model, "ybelkada/opt-350m-lora")
```
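
To quickly sanity-check the loaded adapter, you can run a short generation; the prompt here is arbitrary:

```py
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
inputs = tokenizer("Hello World", return_tensors="pt")
output = lora_model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```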

<Tip>

By default, the [`PeftModel`] is set for inference, but if you'd like to train the adapter some more you can set `is_trainable=True`.

```py
lora_model = PeftModel.from_pretrained(model, "ybelkada/opt-350m-lora", is_trainable=True)
```

</Tip>

The [`PeftModel.from_pretrained`] method is the most flexible way to load a [`PeftModel`] because it doesn't matter what model framework was used (Transformers, timm, a generic PyTorch model). Other classes, like [`AutoPeftModel`], are just convenient wrappers around the base [`PeftModel`], and make it easier to load PEFT models directly from the Hub or locally where the PEFT weights are stored.

```py
from peft import AutoPeftModelForCausalLM

lora_model = AutoPeftModelForCausalLM.from_pretrained("ybelkada/opt-350m-lora")
```

Take a look at the [AutoPeftModel](package_reference/auto_class) API reference to learn more about the [`AutoPeftModel`] classes.

## Next steps

Once you have the appropriate [`PeftConfig`], you can apply it to any pretrained model to create a [`PeftModel`] and train large powerful models faster on freely available GPUs! To learn more about PEFT configurations and models, the following guide may be helpful:

* Learn how to configure a PEFT method for models that aren't from Transformers in the [Working with custom models](../developer_guides/custom_models) guide.