Update README.md
README.md CHANGED
@@ -81,9 +81,13 @@ There are actually 3 existing "multiplicative-LoRA" methods in [PEFT/tuners](https://github.com/huggingface/peft/tree/main/src/peft/tuners):

- https://github.com/huggingface/peft/tree/main/src/peft/tuners/boft (https://arxiv.org/abs/2311.06243)
- https://github.com/huggingface/peft/tree/main/src/peft/tuners/hra (https://arxiv.org/abs/2405.17484)

but as explained in [this conceptual guide](https://github.com/huggingface/peft/blob/main/docs/source/conceptual_guides/oft.md):

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65995c45539c808e84c38bf1/AQ_m88vjvYXZwesZxrJDj.png)

all 3 methods *deliberately* maintain [orthogonality](https://en.wikipedia.org/wiki/Orthogonal_matrix), and thus are more restrictive in the types of transformations they can perform (i.e. [Rotations](https://en.wikipedia.org/wiki/Rotation) and/or [Improper Rotations](https://en.wikipedia.org/wiki/Improper_rotation) only; with no scaling and/or shear possible...).

For example, these can't perform the orthogonal projection needed for ["abliteration"](https://www.lesswrong.com/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction):

`h' = h - v @ v^T @ h`
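
To make the restriction concrete, here is a minimal PyTorch sketch of this projection (an illustration added here, not code from the repo; `v` is assumed to be a unit-norm "refusal direction", and the small size is just for a quick demo):

```python
import torch

hidden_size = 512                # small, for a quick self-contained demo
h = torch.randn(hidden_size)     # hidden state / residual stream activation
v = torch.randn(hidden_size)
v = v / v.norm()                 # unit-norm "refusal direction"

# Abliteration: project out the component of h that lies along v.
h_ablated = h - v * (v @ h)      # h' = h - v @ v^T @ h

# As a matrix this is (I - v v^T), which collapses the v direction
# entirely (determinant 0) -- so no rotation / improper rotation
# (determinant +/-1) can ever express it.
P = torch.eye(hidden_size) - torch.outer(v, v)
assert torch.allclose(P @ h, h_ablated, atol=1e-4)
```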
@@ -93,10 +97,12 @@ whereas the general (non-orthogonal) "multiplicative-LoRA" method can do this by
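
Presumably via a rank-1 update that encodes `v` in both factors; a minimal sketch (the update form `h' = h + lora_B @ (lora_A @ h)` and the shapes are assumptions based on the description below, not code from this repo):

```python
import torch

hidden_size = 512
h = torch.randn(hidden_size)          # output of down_proj for one token
v = torch.randn(hidden_size)
v = v / v.norm()                      # unit-norm "refusal direction"

# Rank-1 "multiplicative-LoRA": h' = h + lora_B @ (lora_A @ h)
lora_A = v.unsqueeze(0)               # (1, hidden_size): detects the v direction
lora_B = -v.unsqueeze(1)              # (hidden_size, 1): subtracts it back out

h_new = h + lora_B @ (lora_A @ h)     # == (I + lora_B @ lora_A) @ h
assert torch.allclose(h_new, h - v * (v @ h), atol=1e-4)
```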
81 |
- https://github.com/huggingface/peft/tree/main/src/peft/tuners/boft (https://arxiv.org/abs/2311.06243)
|
82 |
- https://github.com/huggingface/peft/tree/main/src/peft/tuners/hra (https://arxiv.org/abs/2405.17484)
|
83 |
|
84 |
+
but as explained in [this conceptual guide](https://github.com/huggingface/peft/blob/main/docs/source/conceptual_guides/oft.md):
|
85 |
|
86 |
+
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65995c45539c808e84c38bf1/AQ_m88vjvYXZwesZxrJDj.png)
|
87 |
+
|
88 |
+
all 3 methods *deliberately* maintain [orthogonality](https://en.wikipedia.org/wiki/Orthogonal_matrix), and thus are more restrictive in the types of transformations they can perform (ie: [Rotations](https://en.wikipedia.org/wiki/Rotation) and/or [Improper Rotations](https://en.wikipedia.org/wiki/Improper_rotation) only; with no scaling and/or sheer possible...).
|
89 |
+
|
90 |
+
For example, these can't perform the orthogonal projection needed for ["abliteration"](https://www.lesswrong.com/posts/jGuXSZgv6qfdhMCuJ/refusal-in-llms-is-mediated-by-a-single-direction):
|
91 |
|
92 |
`h' = h - v @ v^T @ h`
|
93 |
|
|
|

In general, the way to think about these (non-orthogonal) "multiplicative-LoRAs" is as a kind of "conditional control-vector":

- Each vector in `lora_A` looks for a certain direction, and via the dot-product generates a (signed) weighting factor that measures its similarity with the output of the `down_proj` transformation.
- Each corresponding vector in `lora_B` then gets added to the hidden state / residual stream, scaled by the corresponding weighting factor.

So instead of having just a single vector that we add (in essence we add a bias term and create an [affine transformation](https://en.wikipedia.org/wiki/Affine_transformation)), we now have many different control vectors that can be added (stored in `lora_B`), based on how well they match another set of "directional detection vectors" (stored in `lora_A`).
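
A minimal sketch of this "conditional control-vector" view (illustrative only; the shapes, rank, and random values are assumptions for the demo):

```python
import torch

hidden_size, rank = 512, 8
h = torch.randn(hidden_size)              # output of down_proj for one token

lora_A = torch.randn(rank, hidden_size)   # "directional detection vectors"
lora_B = torch.randn(hidden_size, rank)   # conditional "control vectors"

# Each row of lora_A yields a (signed) weighting factor via a
# dot-product with the down_proj output...
weights = lora_A @ h                      # (rank,)

# ...and each column of lora_B is added to the residual stream,
# scaled by its weighting factor.
h_new = h + lora_B @ weights              # == h + sum_i weights[i] * lora_B[:, i]

# Equivalent "multiplicative" view of the same update:
W = torch.eye(hidden_size) + lora_B @ lora_A
assert torch.allclose(W @ h, h_new, atol=1e-3)
```

The last two lines show why this still counts as a "multiplicative" method: the whole update folds into a single matrix `I + lora_B @ lora_A` acting on `h`, rather than a fixed additive offset.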

**NOTE**: The [LoRA+](https://arxiv.org/abs/2402.12354) paper uses a similar way of viewing the purpose of `lora_A` and `lora_B`, but there `lora_A` looks at the ***input*** to the `down_proj` transformation (for "additive-LoRAs"), instead of its ***output*** like the "multiplicative-LoRA" method does...
---