nozomuteruyo14 committed
Commit 43dc00b · verified · 1 Parent(s): 60d9ec5

Update README.md

Files changed (1):
  1. README.md +160 -3

README.md CHANGED
@@ -1,3 +1,160 @@
- ---
- license: mit
- ---
---
license: mit
datasets:
- glue
language:
- en
metrics:
- accuracy
- f1
- spearmanr
- pearsonr
- matthews_correlation
base_model: google-bert/bert-base-uncased
pipeline_tag: text-classification
tags:
- adapter
- low-rank
- fine-tuning
- LoRA
- DiffLoRA
eval_results: "Refer to GLUE experiments in the examples folder"
view_doc: "https://huggingface.co/nozomuteruyo14/Diff_LoRA"
---

# Model Card for DiffLoRA

DiffLoRA is an adapter architecture that extends conventional low-rank adaptation (LoRA) by fine-tuning a pre-trained model through a pair of differential low-rank matrices. Instead of updating all model parameters, DiffLoRA trains only a small set of low-rank matrices, which keeps the number of trainable parameters low and makes fine-tuning efficient.

## Model Details

### Model Description

DiffLoRA is an original method developed by the author, inspired by conceptual ideas from the Differential Transformer paper (https://arxiv.org/abs/2410.05258). It decomposes the weight update into two components, a positive and a negative contribution, which enables a more fine-grained adjustment than traditional LoRA. The output of a single adapted layer is computed as:

\[
y = W x + \Delta y
\]

where:
- \(x \in \mathbb{R}^{d_{in}}\) is the input vector (or each sample in a batch).
- \(W \in \mathbb{R}^{d_{out} \times d_{in}}\) is the fixed pre-trained weight matrix.
- \(\Delta y\) is the differential update computed as:

\[
\Delta y = \frac{\alpha}{r} \Big( x' A_{\text{pos}} B_{\text{pos}} - \tau \, x' A_{\text{neg}} B_{\text{neg}} \Big)
\]

with:
- \(x'\) being the input after dropout (or another regularization), treated as a row vector so that it multiplies the low-rank factors from the left.
- \(A_{\text{pos}} \in \mathbb{R}^{d_{in} \times r}\) and \(B_{\text{pos}} \in \mathbb{R}^{r \times d_{out}}\) capturing the positive contribution.
- \(A_{\text{neg}} \in \mathbb{R}^{d_{in} \times r}\) and \(B_{\text{neg}} \in \mathbb{R}^{r \times d_{out}}\) capturing the negative contribution.
- \(\tau \in \mathbb{R}\) is a learnable scalar that balances the two contributions.
- \(\alpha\) is a scaling factor and \(r\) is the chosen rank.

For computational efficiency, the two low-rank components are fused via concatenation:
- \( \text{combined\_A} = \big[ A_{\text{pos}}, A_{\text{neg}} \big] \in \mathbb{R}^{d_{in} \times 2r} \)
- \( \text{combined\_B} = \begin{bmatrix} B_{\text{pos}} \\ -\tau \, B_{\text{neg}} \end{bmatrix} \in \mathbb{R}^{2r \times d_{out}} \)

The update is then calculated as:

\[
\text{update} = x' \cdot \text{combined\_A} \cdot \text{combined\_B},
\]

resulting in the final output:

\[
y = W x + \frac{\alpha}{r} \, \text{update}.
\]
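
The following is a minimal PyTorch sketch of this fused forward pass, written to illustrate the equations above. It is not the reference implementation from this repository: the class name `DiffLoRALinear`, the parameter initialization, and the default values of `r`, `alpha`, and the dropout rate are assumptions made for the example.

```python
import torch
import torch.nn as nn


class DiffLoRALinear(nn.Module):
    """Illustrative DiffLoRA-adapted linear layer (sketch, not the official implementation)."""

    def __init__(self, base_linear: nn.Linear, r: int = 8, alpha: float = 16.0, dropout: float = 0.1):
        super().__init__()
        self.base = base_linear                      # frozen pre-trained W (and bias)
        for p in self.base.parameters():
            p.requires_grad = False

        d_in, d_out = base_linear.in_features, base_linear.out_features
        self.scaling = alpha / r
        self.dropout = nn.Dropout(dropout)

        # Positive and negative low-rank factors (assumed init: small-random A, zero B).
        self.A_pos = nn.Parameter(torch.randn(d_in, r) * 0.01)
        self.B_pos = nn.Parameter(torch.zeros(r, d_out))
        self.A_neg = nn.Parameter(torch.randn(d_in, r) * 0.01)
        self.B_neg = nn.Parameter(torch.zeros(r, d_out))
        self.tau = nn.Parameter(torch.tensor(0.5))   # learnable balance between the two paths

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x_drop = self.dropout(x)
        # Fuse the two paths: combined_A = [A_pos, A_neg], combined_B = [B_pos; -tau * B_neg].
        combined_A = torch.cat([self.A_pos, self.A_neg], dim=1)              # (d_in, 2r)
        combined_B = torch.cat([self.B_pos, -self.tau * self.B_neg], dim=0)  # (2r, d_out)
        update = x_drop @ combined_A @ combined_B                            # (..., d_out)
        return self.base(x) + self.scaling * update
```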

- **Developed by:** Nozomu Fujisawa (Kondo Lab)
- **Model type:** Differential Low-Rank Adapter (DiffLoRA)
- **Language(s) (NLP):** en
- **License:** MIT
- **Finetuned from model:** bert-base-uncased

### Model Sources

- **Repository:** [https://huggingface.co/nozomuteruyo14/Diff_LoRA](https://huggingface.co/nozomuteruyo14/Diff_LoRA)
- **Paper:** DiffLoRA is inspired by ideas from the Differential Transformer (https://arxiv.org/abs/2410.05258), but it is an original method developed by the author.

## Uses

### Direct Use

DiffLoRA is intended to be integrated as an adapter module into pre-trained transformer models. It enables efficient fine-tuning by updating only a small number of low-rank parameters, which makes it well suited to settings where computational resources are limited.

### Out-of-Scope Use

DiffLoRA is not designed for training models from scratch, nor is it recommended for tasks where full parameter updates are necessary. It is optimized for transformer-based NLP tasks and may not generalize well to non-NLP domains. In addition, only a limited set of base models is currently supported.

## Bias, Risks, and Limitations

While DiffLoRA offers a parameter-efficient fine-tuning approach, it inherits limitations from its base models (e.g., BERT, MiniLM). It may not capture all domain-specific nuances when only a limited number of parameters are updated. Users should carefully evaluate performance and consider potential biases in their applications.

### Recommendations

Users should:
- Experiment with different rank \(r\) and scaling factor \(\alpha\) values.
- Compare DiffLoRA with other adapter techniques.
- Be cautious about over-relying on the adapter when full model adaptation might be necessary.

## How to Get Started with the Model

To integrate DiffLoRA into your fine-tuning workflow, start from the example script `examples/run_glue_experiment.py`. A rough sketch of the workflow is shown below.
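
For orientation only, this sketch shows how adapters of this kind can be injected into a BERT classifier. It reuses the illustrative `DiffLoRALinear` class from the sketch above rather than this repository's actual API, the helper `inject_difflora` is hypothetical, and the choice of target modules (query and value projections) is an assumption; consult `examples/run_glue_experiment.py` for the real workflow.

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer


def inject_difflora(model, r=8, alpha=16.0):
    """Hypothetical helper: wrap BERT's query/value projections with DiffLoRALinear."""
    for layer in model.bert.encoder.layer:
        attn = layer.attention.self
        attn.query = DiffLoRALinear(attn.query, r=r, alpha=alpha)
        attn.value = DiffLoRALinear(attn.value, r=r, alpha=alpha)
    return model


model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Freeze the entire base model, then inject adapters; only the new adapter
# parameters (and the classification head, unfrozen below) are trained.
for p in model.parameters():
    p.requires_grad = False
model = inject_difflora(model)
for p in model.classifier.parameters():
    p.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Trainable parameters: {trainable:,}")
```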

## Training Details

### Training Data

This implementation has been demonstrated on GLUE tasks using the Hugging Face Datasets library.
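
For reference, GLUE tasks can be loaded directly with the Datasets library; the task name below (`sst2`) is purely an example.

```python
from datasets import load_dataset

# Load one GLUE task (SST-2 used here only as an example).
dataset = load_dataset("glue", "sst2")
print(dataset["train"][0])  # {'sentence': ..., 'label': ..., 'idx': ...}
```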

### Training Procedure

DiffLoRA is applied by freezing the base model weights and updating only the low-rank adapter parameters. The procedure involves:
- Preprocessing text inputs (concatenating multiple text columns if necessary).
- Injecting DiffLoRA adapters into target linear layers.
- Fine-tuning on a downstream task while the base model remains frozen.

#### Training Hyperparameters

- **Training regime:** Fine-tuning with frozen base weights; only adapter parameters are updated.
- **Learning rate:** 2e-5 (example)
- **Batch size:** 32 per device
- **Epochs:** 3 (example)
- **Optimizer:** AdamW with weight decay
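
These hyperparameters map onto a standard `transformers.TrainingArguments` configuration, sketched below. The values mirror the examples listed above and are starting points rather than tuned settings; the output directory and weight-decay value are assumptions.

```python
from transformers import TrainingArguments, Trainer

training_args = TrainingArguments(
    output_dir="difflora-glue",        # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=3,
    weight_decay=0.01,                 # Transformers uses AdamW by default
)

# trainer = Trainer(model=model, args=training_args,
#                   train_dataset=encoded["train"],       # tokenized splits (hypothetical names)
#                   eval_dataset=encoded["validation"])
# trainer.train()
```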

## Evaluation

### Testing Data, Factors & Metrics

#### Testing Data

The GLUE validation sets are used for evaluation.

#### Factors

Evaluation is performed across multiple GLUE tasks so that performance is assessed over a range of task types.

#### Metrics

Evaluation metrics include accuracy, F1 score, Matthews correlation, and Pearson and Spearman correlations, depending on the task.
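
Each GLUE task comes with its own metric bundle, which can be loaded with the Evaluate library; MRPC is used below only as an example.

```python
import evaluate

# The GLUE metric loader selects the right metrics per task
# (e.g. accuracy/F1 for MRPC, Pearson/Spearman for STS-B, Matthews correlation for CoLA).
metric = evaluate.load("glue", "mrpc")
result = metric.compute(predictions=[0, 1, 1], references=[0, 1, 0])
print(result)  # {'accuracy': ..., 'f1': ...}
```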

### Results

For detailed evaluation results, refer to the GLUE experiment script in the `examples` directory.

#### Summary

In the GLUE experiments, DiffLoRA converges faster than comparable parameter-efficient fine-tuning methods while achieving competitive task performance.

## Citation

A paper describing DiffLoRA is currently in preparation; a citation will be added once it is available.

## Model Card Contact

For any questions regarding this model card, please contact nozomu_fujisawa@keio.jp.