Update README.md
README.md CHANGED
@@ -102,7 +102,11 @@ In general, the way to think about these (non-orthogonal) "multiplicative-LoRAs"

So instead of having just a single vector that we add (and in essence adding a bias weight to create an [affine transformation](https://en.wikipedia.org/wiki/Affine_transformation)), we now have many different control vectors that can be added (stored in `lora_B`), based on how well they match another set of "directional detection vectors" (stored in `lora_A`).

- **NOTE**: The [LoRA+](https://arxiv.org/abs/2402.12354) paper uses a similar way of viewing the purpose of `lora_A` and `lora_B`

+ **NOTE**: The [LoRA+](https://arxiv.org/abs/2402.12354) paper uses a similar way of viewing the purpose of `lora_A` and `lora_B` (for "additive-LoRAs"):
+
+ ![image/png](https://cdn-uploads.huggingface.co/production/uploads/65995c45539c808e84c38bf1/vZ2Gys3huKAWIVe0wz2-q.png)
+
+ but where `lora_A` looks at the ***input*** to the transformation, instead of the ***output*** of the (`down_proj`) transformation like these new (non-orthogonal) "multiplicative-LoRAs":

---
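For readers skimming the diff, here is a minimal PyTorch sketch of the additive-vs-multiplicative distinction the added note draws. All names and dimensions (`W`, `A_add`, `B_mul`, `in_dim`, etc.) are illustrative assumptions, not the repo's actual code:

```python
import torch

torch.manual_seed(0)
in_dim, out_dim, rank = 32, 8, 4

W = torch.randn(out_dim, in_dim)  # stand-in for a frozen down_proj weight
x = torch.randn(in_dim)           # input to the transformation
y = W @ x                         # output of the (down_proj) transformation

# Additive-LoRA (the LoRA+ picture): lora_A's "directional detection
# vectors" score the *input* x, and the scores weight how much of each
# "control vector" (a column of lora_B) gets added to the output.
A_add = torch.randn(rank, in_dim)        # detection vectors over the input
B_add = torch.randn(out_dim, rank)       # control vectors to add
y_additive = y + B_add @ (A_add @ x)     # h = Wx + B(Ax)

# Multiplicative-LoRA: lora_A instead scores the *output* y = Wx, so the
# low-rank update composes with W rather than adding alongside it.
A_mul = torch.randn(rank, out_dim)       # detection vectors over the output
B_mul = torch.randn(out_dim, rank)       # control vectors to add
y_multiplicative = y + B_mul @ (A_mul @ y)  # h = Wx + B(A(Wx)) = (I + BA)Wx
```

Expanding the second update as `(I + BA)Wx` shows why these count as "multiplicative": the learned low-rank factor multiplies `down_proj`'s output rather than adding a term computed from its input.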