\pdfoutput=1

\documentclass[11pt]{article}

\usepackage{ACL2023}

\usepackage{times}
\usepackage{latexsym}
\usepackage{graphicx}
\usepackage[T1]{fontenc}

\usepackage[utf8]{inputenc}
\usepackage{makecell}
\usepackage{microtype}

\usepackage{stfloats}
\usepackage{inconsolata}
\usepackage{amssymb}
\usepackage{subcaption}
\usepackage{amsmath}
\usepackage{cleveref}
\usepackage{bm}
\usepackage{float}
\usepackage[linesnumbered,ruled,vlined,algo2e]{algorithm2e}
\usepackage{pifont}
\usepackage{booktabs}
\usepackage{multirow}
\usepackage{enumitem}
\usepackage{algorithmic}
\usepackage{algorithm}
\usepackage{CJKutf8}

\title{Head-wise Shareable Attention for Large Language Models}

\author{Zouying Cao$^{1,2}$, Yifei Yang$^{1,2}$, Hai Zhao$^{1,2,}$\thanks{$\ $ Corresponding author.}\\
$^{1}$Department of Computer Science and Engineering, Shanghai Jiao Tong University\\
$^{2}$MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University\\
{\tt \{yifeiyang, zouyingcao\}@sjtu.edu.cn, zhaohai@cs.sjtu.edu.cn}}

\begin{document}
\maketitle
\begin{abstract}
Large Language Models (LLMs) suffer from a huge number of parameters, which restricts their deployment on edge devices.
Weight sharing is one promising solution that encourages weight reuse, effectively reducing memory usage with little performance drop. However, current weight sharing techniques primarily focus on small-scale models like BERT and employ coarse-grained sharing rules, e.g., layer-wise.
This becomes limiting given the prevalence of LLMs, and sharing an entire layer or block obviously diminishes the flexibility of weight sharing.
In this paper, we present a perspective on \textit{\textbf{head-wise shareable attention for large language models}}.
We further propose two memory-efficient methods that share parameters across attention heads, with a specific focus on LLMs. Both of them use the same dynamic strategy to select the shared weight matrices.
The first method directly reuses the pre-trained weights without retraining, denoted as \textbf{DirectShare}.
The second method first post-trains with a constraint on weight matrix similarity and then shares, denoted as \textbf{PostShare}.
Experimental results reveal that our head-wise shared models still maintain satisfactory capabilities, demonstrating the feasibility of fine-grained weight sharing applied to LLMs.
\end{abstract}

\section{Introduction}
Large Language Models (LLMs) have achieved breakthrough performance in a variety of natural language processing tasks~\citep{wei2022emergent,bubeck2023sparks,zhao2023survey}.
However, such remarkable capability typically comes at the cost of a substantial increase in model size~\citep{kaplan2020scaling}.
Thus, LLMs with billions of parameters~\citep{brown2020language,touvron2023llama} are more resource-hungry despite a wide margin of superiority over small-scale models~\citep{devlin2018bert,liu2019roberta}.
This also poses challenges for deployment on low-capability devices due to limited storage and GPU memory.

To address the high memory requirements of models, weight sharing~\citep{takase2021lessons,liu2023enhancing} aims to reuse the same parameters to achieve memory- and storage-efficiency while preserving model performance.
For small-scale models, e.g., BERT, several techniques~\citep{lan2019albert,liu2023enhancing} have been proposed to explore cross-layer parameter sharing.
However, \citet{zhang2022minivit} demonstrate that identical weights across different layers are the main cause of training instability and performance degradation.
Moreover, the effectiveness of similar techniques at the scale of LLMs remains uncertain.

Thus, we strive to solve this central question: \textbf{\textit{Can we design a fine-grained weight sharing strategy that smoothly applies to large language models}?}
For an effective memory-efficient weight sharing method tailored to LLMs, two key challenges must be tackled: a) the choice of shared modules whose weights are reused; b) the trade-off between reducing memory footprint and preserving diverse capabilities.

In preliminary work, we empirically evaluate the feasibility of weight sharing across the attention heads in LLMs, inspired by attention map (i.e., attention scores) reuse.
Subsequently, we introduce our design of a head-wise shareable attention strategy.
It is a simple and intuitive technique for parameter sharing that can be implemented in a few minutes.
Specifically, given the pre-trained weight matrices, we concatenate the weight matrices $W^q$ and $W^k$ of each head and measure the cosine similarity between the concatenations to determine which heads can be shared.
Meanwhile, head-wise weight sharing promotes parameter diversity in the layers, and thus its performance degradation is acceptable when the number of shared parameters is below 30\%.
Even as the weight sharing ratio increases rapidly, our proposed constrained post-training method can narrow the performance drop, albeit at the cost of additional time.

In summary, our key contributions include:
\begin{itemize}[leftmargin=0.4cm,itemsep=0pt]
\item We investigate the feasibility of head-wise weight sharing for large language models and propose two corresponding methods named \textbf{DirectShare} and \textbf{PostShare}.
\item The proposed \textbf{DirectShare} is time-efficient and retains a large portion of the performance when the sharing ratio is below 30\%.
Complementarily, \textbf{PostShare} yields satisfactory performance via post-training, especially under large ratios.
\item Experiments show our proposal achieves performance comparable to competitive memory-efficient methods.
Additional analysis also indicates its effectiveness on small-scale models.
\end{itemize}

\section{Related Works}
\subsection{Memory-efficient Approaches for LLMs}
With the growing size of language models, several memory-efficient techniques have been proposed to address this issue. One line of work for reducing the memory footprint involves network compression, such as quantization~\citep{bai2020binarybert,tao2022compression}, pruning~\citep{yang2022gradient,tao2023structured} and knowledge distillation~\citep{wu2023ad,tan2023gkd}.
However, when applied to LLMs, many of these approaches become infeasible~\citep{frantar2023sparsegpt}.
To recover accuracy, they require extensive post-training of the model~\citep{dettmers2023spqr,sun2023simple}.

In addition to these conventional methods, researchers have also investigated more efficient variations of the self-attention mechanism for LLMs~\citep{kitaev2020reformer,lv2023lightformer}.
Reformer~\citep{kitaev2020reformer} leverages sparsity in the attention layers to improve efficiency on long sequences with small memory use.
Lightformer~\citep{lv2023lightformer} deploys SVD weight transfer and parameter sharing, which significantly reduces the number of parameters while preserving model performance.
In this paper, our focus is on weight sharing across attention heads.

\subsection{Weight Sharing}
Weight sharing is a widely used technique~\citep{lan2019albert,liu2023enhancing,lv2023lightformer,xu2023compressionsurvey} that aims to improve parameter efficiency and reduce the inference memory footprint.
It enables model compression by eliminating redundant parameters and decouples computation from parameters by reusing the same parameters for multiple computations.

\textbf{Task-oriented Weight Sharing.}
One of the prevalent tasks using weight sharing mechanisms is neural machine translation (NMT). Tied Transformer~\citep{xia2019tied} considers model-level sharing and shares the weights of the encoder and decoder of an NMT model. \citet{dabre2019recurrent} propose a method that shares the weights across all Transformer layers while maintaining NMT performance.
Besides, \citet{chi2021audio} bring the idea of ALBERT~\citep{lan2019albert} to the speech recognition task.

\textbf{Layer-wise Weight Sharing.}
Universal Transformer~\citep{dehghani2018universal} shares the weights across all layers with a dynamic halting mechanism and improves accuracy on several tasks.
Subformer~\citep{reid2021subformer} utilizes sandwich-style parameter sharing, which only shares the central layers while leaving the first and last layers independent.
\citet{takase2021lessons} study strategies for preparing the parameters of $M$ layers and assigning them to $N$ layers ($1 \leq M \leq N$).
\begin{figure*}[ht]
\centering
\includegraphics[width=\textwidth]{images/similarity.pdf}
\caption{(a) Layer-wise Attention Map Similarity. Taking the last layer as an example, the layer whose attention map is most similar to it is marked with $\surd$. (b) Head-wise Attention Map Similarity. $\surd$ marks the top $n$ heads whose attention maps are most similar to that of the 6th head in the last layer ($n$ = the number of heads per layer). (c) Weight Matrix Similarity. \color{green!50!black}{\textbf{$\bigcirc$}}\color{black}{ marks the connection between attention map similarity and weight similarity.}}
\label{fig:similarity}
\end{figure*}

\section{Motivation and Empirical Analysis}\label{sec:motivation}
In this section, we analyze the feasibility of head-wise weight sharing from the perspective of attention map reuse.

\subsection{Attention Map Similarity: From Layer-wise to Head-wise}\label{sec:attention_map}
Prior studies \citep{xiao2019sharing,ying2021lazyformer,bhojanapalli2021leveraging} demonstrate the effectiveness of attention map reuse due to the high similarity of attention scores between different layers (especially for adjacent layers).
Motivated by this, we delve into attention map similarity, specifically transitioning from layer-wise to head-wise analysis.
To measure the evolution of the attention maps over layers and heads, we use the cosine similarity $\mathcal{S}_{cos}$.
When $\mathcal{S}_{cos}$ equals one, the attention maps are perfectly similar.
Considering two specific self-attention layers, the cosine similarity is calculated as follows:
\begin{equation}
\mathcal{S}_{cos}(\textbf{A}_p,\textbf{A}_q)=\frac{\textbf{A}_p^T\textbf{A}_q}{\|\textbf{A}_p\| \|\textbf{A}_q\|}
\end{equation}
where $\textbf{A}_p,\textbf{A}_q$ denote the attention maps of layers $p$ and $q$.
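
For clarity, the following minimal PyTorch sketch (illustrative only, not the original implementation) shows one way this similarity can be computed; it assumes the attention maps are flattened into vectors before comparison, and the tensor names and shapes are merely examples.
\begin{verbatim}
import torch

def attention_map_similarity(A_p, A_q):
    # Cosine similarity S_cos between two attention maps of
    # identical shape, e.g. (seq_len, seq_len) for one head.
    # Assumption: the maps are flattened into vectors first.
    a = A_p.flatten().float()
    b = A_q.flatten().float()
    return torch.nn.functional.cosine_similarity(
        a, b, dim=0).item()
\end{verbatim}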

We visualize the layer-wise and head-wise attention map similarity across three task-specific datasets: WMT14 (En-Fr) \citep{bojar-EtAl:2014:W14-33}, CommonsenseQA \citep{commonsenseqa} and WSC \citep{levesque2012winograd}.
As shown in Fig.~\ref{fig:similarity}(a) and (b), the degrees of similarity in attention scores computed in different layers and heads present a certain level of consistency across different tasks.
In addition, we find that the cosine similarity values for pairs with high similarity are higher among different heads compared to different layers.
Specifically, the most similar self-attention layers reach a cosine similarity value of approximately 0.90, while in the case of head-wise comparisons, several pairs have a remarkable similarity of nearly 0.99.

One observation is that as the number of parameters increases, modules with high similarity exhibit variations, particularly in the fine-grained (e.g., head-wise) comparisons within large-scale pre-trained language models.
Existing approaches employ ``learning to share'' techniques to dynamically adjust the sharing strategy~\citep{xiao2019sharing} or use a uniform sharing strategy but train the modified model from scratch~\citep{ying2021lazyformer,shim2023exploring}.
However, such strategies pay little attention to reusing attention maps among heads and incur high computational costs for LLMs.

\subsection{From Attention Map Similarity to Weight Matrix Similarity}\label{sec:weight_similar}
Attention weight matrix similarity provides a complementary perspective to attention map similarity, since the attention scores are calculated based on the weight matrices $W^q,W^k$.
Weight sharing is traditionally based on the assumption that overparameterization is evident in large-scale Transformer models, i.e., the difference in weights decreases as model size increases~\citep{li2020train}.
In this paper, we explore a potential relationship between attention map similarity and weight similarity.

As mentioned in Section~\ref{sec:attention_map}, head-wise attention map similarity is higher than the cross-layer similarity, yet to the best of our knowledge, head-wise attention map reuse remains unexplored.
This might be attributed to the difficulty in finding an optimal dynamic head-wise sharing strategy across different tasks.
One intuitive solution is to first measure the attention map similarity between every pair of heads in each dataset separately, and then choose the overlapping modules to share.

Combined with the analysis of weight matrix similarity, we have made a key discovery: given a pre-trained LLM, when concatenating the weight matrices $W^q$ and $W^k$ of each head and measuring the cosine similarity, \textit{\textbf{the most similar weight matrices correspond to the overlapping modules with highly similar attention maps observed across different datasets}}.
As illustrated in Fig.~\ref{fig:similarity}(b) and (c), deep green circles mark the connection between attention map similarity and weight similarity.

This finding implies that attention heads with high weight matrix similarity also demonstrate analogous attention map similarity regardless of the datasets and model size.
Furthermore, since different heads within a layer present sufficient diversity~\citep{zhou2021deepvit,vig2019multiscale}, we suppose that weight sharing among these heads can result in higher model behavior consistency compared to layer-wise weight sharing.
Thus, we further propose a simple yet effective method for head-wise weight sharing, and in particular validate its feasibility in large-scale models.
\begin{figure*}[htbp]
\centering
\includegraphics[width=\textwidth]{images/pipeline.pdf}
\caption{\ding{172} \textbf{DirectShare}: Inspired by attention map reuse, directly share weight matrices across different heads based on cosine similarity; \ding{173} \textbf{PostShare}: To balance memory usage and performance, perform post-training with the constraint of weight matrix similarity and then share.}
\label{fig:pipeline}
\end{figure*}

\section{Head-wise Shareable Attention}\label{sec:method}
Inspired by Section~\ref{sec:motivation}, we present a perspective on head-wise shareable attention for LLMs.
Based on one straightforward yet effective weight sharing strategy, we propose two complementary methods, named \textbf{DirectShare} and \textbf{PostShare}.
The overview of our proposal is presented in Figure~\ref{fig:pipeline}.

\subsection{Head-wise Weight Sharing Strategy}\label{sec:strategy}
The Multi-Head Attention (MHA) block is essentially a procedure that computes the relevance of each token in a sentence with respect to all other tokens.
Let $L$ be the number of input tokens and $M$ be the number of attention heads in total.
Given the input $X\in \mathbb{R}^{L\times D}$, we can obtain queries, keys, and values in the $i$-th ($1 \leq i \leq M$) head via three weight matrices, denoted by $W^q_i \in \mathbb{R}^{D\times d_q}$, $W^k_i \in \mathbb{R}^{D\times d_k}$ and $W^v_i \in \mathbb{R}^{D\times d_v}$, respectively.
$D$ is the embedding dimension, and $d_q, d_k (=d_q), d_v$ represent the dimensions of the three weight matrices, respectively.

To investigate the strategy of weight sharing applied to all the above three weight matrices across heads for LLMs, we perform preliminary experiments on the choice of head-wise match functions \textbf{Match(}$\cdot$,$\cdot$\textbf{)}.
For the match functions, the inputs are the weight matrices of heads $\bm{i},\bm{j}$ and the outputs are called matching scores $\bm{m}$.
The higher the score, the more likely it is to share parameters across the heads.
\begin{equation}
\bm{m_{i,j}^*}=\textbf{Match(}W_i^*,W_j^*\textbf{)},*\in \{q,k,v\}
\end{equation}
Based on our intuitive analysis in Section~\ref{sec:weight_similar}, we choose the cosine similarity between \textbf{the concatenation matrices of $W^q_i$ and $W^k_i$}:
\begin{equation}
\bm{m_{i,j}^q}=\bm{m_{i,j}^k}=\bm{m_{i,j}^v}=\mathcal{S}_{cos}(W^q_i||W^k_i,W^q_j||W^k_j)
\label{eq:match_func}
\end{equation}
In addition, we compare five other match functions: (1) \textbf{only $W^q_i$} used to measure the cosine similarity, i.e., $\bm{m_{i,j}^*}=\mathcal{S}_{cos}(W^q_i,W^q_j)$; (2) \textbf{only $W^k_i$} used to measure the cosine similarity, i.e., $\bm{m_{i,j}^*}=\mathcal{S}_{cos}(W^k_i,W^k_j)$; (3) \textbf{only $W^v_i$} used to measure the cosine similarity, i.e., $\bm{m_{i,j}^*}=\mathcal{S}_{cos}(W^v_i,W^v_j)$; (4) \textbf{concatenate all three matrices} and then calculate the cosine similarity, i.e., $\bm{m_{i,j}^*}=\mathcal{S}_{cos}(W^q_i||W^k_i||W^v_i,W^q_j||W^k_j||W^v_j)$; (5) \textbf{separately use $W^q_i, W^k_i, W^v_i$} to measure the cosine similarity and share the corresponding weights respectively, i.e., $\bm{m_{i,j}^*}=\mathcal{S}_{cos}(W^*_i,W^*_j)$ with again $*\in\{q,k,v\}$.
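
To make the chosen match function concrete, a minimal PyTorch sketch is given below (illustrative only, not the released implementation); it assumes the per-head weight slices are available as tensors and flattens the concatenation before computing the cosine similarity.
\begin{verbatim}
import torch

def match_score(Wq_i, Wk_i, Wq_j, Wk_j):
    # m_{i,j}: cosine similarity between [W^q_i || W^k_i]
    # and [W^q_j || W^k_j], each flattened into a vector.
    a = torch.cat([Wq_i, Wk_i], dim=-1).flatten().float()
    b = torch.cat([Wq_j, Wk_j], dim=-1).flatten().float()
    return torch.nn.functional.cosine_similarity(
        a, b, dim=0).item()
\end{verbatim}
The alternative match functions listed above differ only in which matrices are passed in and concatenated.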

\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{images/sharing_strategy.pdf}
\setlength{\abovecaptionskip}{-10pt}
\setlength{\belowcaptionskip}{-10pt}
\caption{Experiments performed on PIQA and OpenBookQA using different head-wise match functions for the Baichuan 2-7B model.}
\label{fig:sharing_strategy}
\end{figure}

Figure~\ref{fig:sharing_strategy} shows the results of our exploratory study via \textbf{DirectShare}.
As evidenced by the performance curve, sharing the three weight matrices separately causes a significant decline in performance compared with sharing them together.
Moreover, it is sufficient to base head-wise weight sharing only on the concatenation of $W^q_i$ and $W^k_i$, since this achieves a favorable trade-off between reducing the memory footprint and maintaining performance.

\subsection{DirectShare}
In practice, we traverse all head pairs to compute matching scores with Equation~\ref{eq:match_func} and, for each head, select the head with the highest score as its match.
Once the $M$ candidate head pairs are prepared, we select the top-$N$ pairs in descending order of score according to the desired sharing ratio.
Finally, we share the weight matrices between each selected attention head pair.
A detailed algorithm for our \textbf{DirectShare} is presented in Algorithm~\ref{alg:sharing}.

\begin{algorithm2e}
\caption{\textbf{DirectShare} using Head-wise Weight Sharing Strategy}\label{alg:sharing}
\SetInd{0.5em}{0.6em}
\KwIn{Sharing Ratio $\alpha$, Weight Matrices of the MHA $\left\{ W^*\right\},* \in \left\{q,k,v\right\}$}
\KwOut{The LLM after weight sharing}
\For{$layer_i\leftarrow 2$ \KwTo $l$}{
\For{$i\leftarrow 1$ \KwTo $head\_num$}{
$match\_index$ $\leftarrow$ -$1$ \\
$match\_score$ $\leftarrow$ -$1$ \\
\For{$layer_j\leftarrow 1$ \KwTo $layer_i-1$}{
\For{$j\leftarrow 1$ \KwTo $head\_num$}{
Compute $\bm{m_{i,j}^*}$ using Eq.~\ref{eq:match_func}\;
\If{$\bm{m_{i,j}^*}>match\_score$}{
$match\_score \leftarrow \bm{m_{i,j}^*}$\\
$match\_index \leftarrow (layer_j,j)$
}
}
}
Get one candidate head pair $<i,match\_index>$ for sharing\;
}
}
Sort the matching scores in descending order\;
Select the top-$N$ pairs according to $\alpha$\;
Share the weight matrices between each selected attention head pair.
\end{algorithm2e}
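
For readers who prefer code, a compact Python sketch of this selection step is shown below. It is an illustration under simplifying assumptions (per-head weights collected into a dictionary, and the \texttt{match\_score} helper from the previous sketch), not the exact released implementation; the returned pairs would then have their $W^q, W^k, W^v$ tied, e.g., by assigning the same parameter tensors.
\begin{verbatim}
def direct_share_pairs(head_weights, sharing_ratio):
    # head_weights: dict mapping (layer, head) -> (W_q, W_k).
    # Returns the top-N head pairs to share, where
    # N = sharing_ratio * total number of heads.
    keys = sorted(head_weights.keys())
    candidates = []
    for (li, i) in keys:
        if li == 0:
            continue  # first layer: no earlier layer to match
        best_score, best_key = -1.0, None
        for (lj, j) in keys:
            if lj >= li:
                continue  # only match heads in earlier layers
            s = match_score(*head_weights[(li, i)],
                            *head_weights[(lj, j)])
            if s > best_score:
                best_score, best_key = s, (lj, j)
        candidates.append((best_score, (li, i), best_key))
    candidates.sort(key=lambda c: c[0], reverse=True)
    n_shared = int(sharing_ratio * len(keys))
    return [(a, b) for _, a, b in candidates[:n_shared]]
\end{verbatim}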

\subsection{PostShare}
Although \textbf{DirectShare} demonstrates effectiveness in our experiments, we have also encountered a noticeable performance drop on some reading comprehension datasets.
To alleviate this problem, we propose \textbf{PostShare}, which softly aligns model weights during the post-training process.

With the same sharing strategy (Section \ref{sec:strategy}), \textbf{PostShare} first selects the set of weight matrices to share.
Next, we incorporate a regularization term into the loss function to constrain the post-training process, encouraging the selected weight matrices to become more similar:
\begin{equation}
\mathcal{L}_w=\frac{1}{N}\sum_{n=1}^N\left( \sum_{* \in \{q,k,v\}} \left\| W_{n,i}^*-W_{n,j}^*\right\|_2\right)
\end{equation}
where $N$ is the number of selected attention head pairs $\langle i,j\rangle$ for sharing.
With this regularization weight loss, the proposed \textbf{PostShare} learns model weights $W$ by minimizing the following combined loss function:
\begin{equation}
\min_{W}\mathcal{L}_{\text{post-training}}+\gamma \times \mathcal{L}_w
\end{equation}
where $\mathcal{L}_{\text{post-training}}$ is the original post-training loss and $\gamma$ controls the strength of $\mathcal{L}_w$.
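
This regularizer admits a direct implementation; the following PyTorch-style sketch (illustrative only, with per-head weights collected into a dictionary rather than read from the model) shows how $\mathcal{L}_w$ could be added to the post-training loss with strength $\gamma$.
\begin{verbatim}
import torch

def weight_alignment_loss(head_qkv, shared_pairs):
    # L_w: mean over the selected head pairs <i, j> of the
    # summed L2 (Frobenius) norms of the differences of
    # their q/k/v weight matrices.
    # head_qkv: dict head_id -> {'q': W, 'k': W, 'v': W}.
    total = 0.0
    for head_i, head_j in shared_pairs:
        for kind in ("q", "k", "v"):
            diff = (head_qkv[head_i][kind]
                    - head_qkv[head_j][kind])
            total = total + torch.norm(diff)
    return total / len(shared_pairs)

# Inside the training loop (gamma = 0.5 in the post-training
# setup described in the appendix):
#   loss = lm_loss + gamma * weight_alignment_loss(head_qkv,
#                                                  pairs)
\end{verbatim}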
After the post-training process, the corresponding weight matrices can be shared as \textbf{DirectShare} does.
Although post-training indeed increases the time cost of weight sharing, \textbf{PostShare} achieves stable and satisfactory performance across different tasks while reducing memory usage.
%
\section{Experiments}
\subsection{Experimental Settings}
\textbf{Backbone Models.}
We evaluate \textbf{DirectShare} and \textbf{PostShare} on two open-source LLMs: Llama 2~\citep{touvron2023llama} and Baichuan 2~\citep{baichuan2023baichuan2} with 7B and 13B parameters.
In \textbf{PostShare}, we use English Wikipedia~\citep{wikidump} to post-train the backbone models for weight sharing.

\textbf{Evaluation.}
To comprehensively evaluate the model capabilities, we experiment on five distinct tasks: reasoning, understanding, language, knowledge and examination.
For consistent comparisons, we deploy the open-source LLM evaluation platform OpenCompass~\citep{2023opencompass}.

\textbf{Baselines.}
Since existing weight sharing techniques do not support LLMs,
we compare \textbf{DirectShare} against \textbf{Magnitude Pruning}~\citep{zhu2017prune} and \textbf{LLM-Pruner}~\citep{ma2023llm}, two influential model pruning methods.
Admittedly, they are not directly comparable.
To ensure fairness in the experiments, both of them only prune the multi-head attention module, and we compare at the same number of reduced parameters.
See Appendix~\ref{app:experiments} for additional information.

\subsection{Main Results}
\subsubsection{Evaluation on DirectShare}\label{sec:directshare}
\textbf{Logical and Common Sense Reasoning.}
In the domain of reasoning, we consider two Chinese natural language inference benchmarks, CMNLI~\citep{xu2020clue} and OCNLI~\citep{hu2020ocnli},
along with three English benchmarks, AX-b, AX-g and RTE from SuperGLUE~\citep{wang2020superglue}.

In Table~\ref{tab:reasoning}, we show the results of the memory-efficient Llama 2 models on the above five tasks.
The corresponding results for Baichuan 2 models can be found in Appendix~\ref{app:b_reasoning}.
When applying 30\% parameter sharing to Llama 2-7B, our \textbf{DirectShare} still maintains an average performance of 99.51\% across the five benchmarks, compared to the base model.
With the same setting, the shared Llama 2-13B retains 99.21\% of the performance.
This suggests that head-wise shareable attention for LLMs can indeed work without significant performance degradation on reasoning tasks.

The overall efficacy of our \textbf{DirectShare} rivals the structured pruning results of LLM-Pruner, without any training.
Moreover, our method is quite simple and fast, independent of the original training corpus, while structured pruning nearly fails in zero-shot generation tasks without dependencies~\citep{ma2023llm}.

\begin{table}[htbp]
\centering
\setlength\tabcolsep{3pt}
\large
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}ccccccc@{}}
\toprule
\toprule
\textbf{Ratio} & \textbf{Method}
& \textbf{CMNLI} & \textbf{OCNLI} & \textbf{AX-b} & \textbf{AX-g} & \textbf{RTE}
\\
\midrule
\textbf{0\%} & \textbf{Llama 2-7B} & 32.98 &33.12 &53.53&55.34&49.82\\
\midrule
\multirow{3}{*}{\textbf{10\%}} &Magnitude&\underline{32.99}&30.63&\underline{56.70}&49.44&47.29\\
& LLM-Pruner & \underline{32.99}& \textbf{33.75}&\textbf{57.61}&\underline{50.00}&\underline{48.38}\\
& DirectShare &\textbf{33.00}&\underline{32.50}&54.17&\textbf{51.97}&\textbf{50.90}\\
\midrule
\multirow{3}{*}{\textbf{30\%}}
&Magnitude&\underline{33.16}&\textbf{35.00}&54.71&50.56&46.93\\
& LLM-Pruner& 32.99&31.25 &\underline{56.34}&\textbf{52.53}&\underline{48.74}\\
& DirectShare & \textbf{33.33} &\underline{32.50}&\textbf{57.07}&\underline{51.69}&\textbf{49.10}\\
\midrule
\midrule
\textbf{0\%} & \textbf{Llama 2-13B} &32.99&35.00&58.81&50.56&47.29\\
\midrule
\multirow{3}{*}{\textbf{10\%}}
&Magnitude&\underline{32.82}&\underline{33.12}&51.99&\textbf{50.56}&\textbf{48.38}\\
& LLM-Pruner &\textbf{32.99}&\textbf{36.25}&\textbf{58.70}&\underline{50.00}&46.93 \\
& DirectShare & \textbf{32.99}&\textbf{36.25}&\underline{57.61}&\underline{50.00}&\underline{47.29}\\
\midrule
\multirow{3}{*}{\textbf{30\%}}
&Magnitude&\textbf{33.78}&33.75&46.65&\underline{50.00}&\textbf{51.99}\\
& LLM-Pruner &\underline{32.99}&\underline{34.38}&\underline{57.16}&\textbf{54.21}&45.85 \\
& DirectShare &\underline{32.99}&\textbf{35.00}&\textbf{58.33}&\underline{50.00}&\underline{46.57}\\
\bottomrule
\bottomrule
\end{tabular}
}
\setlength{\abovecaptionskip}{5pt}
\setlength{\belowcaptionskip}{-5pt}
\caption{Evaluation Results on Reasoning of the Memory-efficient Llama 2-7B \& Llama 2-13B.}
\label{tab:reasoning}
\end{table}

\textbf{Natural Language Understanding (NLU).}
In this field, we cover multiple tasks, including RACE~\citep{lai2017race} and OpenBookQA~\citep{OpenBookQA2018} for reading comprehension, CSL~\cite{li2022csl} for content summarization and TNEWS~\citep{xu2020clue} for content analysis.

Table~\ref{tab:understanding} shows the detailed results of \textbf{DirectShare} applied to the Llama 2 model family on these benchmarks.
We provide a more comprehensive evaluation on Baichuan 2 models in Appendix~\ref{app:b_nlu}.
Compared to reasoning tasks, our experimental results unveil a notable performance decrease of approximately 30\% on large-scale reading comprehension datasets when applying \textbf{DirectShare} to the Llama 2-7B model.
Beyond this, we discover that on content summarization and analysis tasks, \textbf{DirectShare} manages to retain 94.23\% of the performance exhibited by the base model.
The evaluation results of Llama 2-13B align with those of Llama 2-7B, and we find the accuracy gap grows larger as model size increases.
This trend also exists for Magnitude Pruning and LLM-Pruner, whose performance drop is even larger: LLM-Pruner drops $\approx$3 points more than ours on average, while Magnitude Pruning is outperformed by ours by a large margin.

To mitigate this degradation, some post-training pruning methods like SparseGPT~\citep{frantar2023sparsegpt} preserve accuracy via a weight update procedure.
Similarly, LLM-Pruner uses low-rank adaptation (LoRA,~\citealp{hu2021lora}) to post-train the pruned model.
Motivated by this, our \textbf{PostShare} proves to be beneficial, substantially improving accuracy by 17.80\%, albeit at a certain time cost.
For more details refer to Section~\ref{sec:postshare}.
However, this does not diminish the significance of our \textbf{DirectShare}.
The absence of post-training allows us to better understand the feasibility of head-wise weight sharing for LLMs.

\begin{table}[htbp]
\centering
\setlength\tabcolsep{3pt}
\Large
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}ccccccc@{}}
\toprule
\toprule
\textbf{Ratio} & \textbf{Method}
& \textbf{\makecell{RACE- \\ middle}} & \textbf{\makecell{RACE- \\ high}} &
\textbf{OBQA} & \textbf{CSL} & \textbf{TNEWS}
\\
\midrule
\textbf{0\%} & \textbf{Llama 2-7B} & 33.15 &35.51&31.80&55.62&20.22\\
\midrule
\multirow{3}{*}{\textbf{10\%}}
&Magnitude&25.42&26.47&\textbf{28.20}&49.38&14.85\\
& LLM-Pruner & \underline{28.20}&\textbf{30.73}&\underline{27.20}&\underline{53.12}&\underline{19.76}\\
& DirectShare &\textbf{28.34}&\underline{28.96}&\textbf{28.20}&\textbf{54.37}&\textbf{20.86}\\
\midrule
\multirow{3}{*}{\textbf{30\%}}
&Magnitude&\textbf{21.80}&\underline{21.53}&25.00&45.62&7.01\\
& LLM-Pruner& \underline{21.52}&\textbf{22.21} &\textbf{26.80}&\underline{50.00}&\underline{10.20}\\
& DirectShare & 21.45&\underline{21.53}&\underline{26.00}&\textbf{51.25}&\textbf{20.22}\\
\midrule
\midrule
\textbf{0\%} & \textbf{Llama 2-13B} &60.24 & 58.03&42.40&58.75&22.13\\
\midrule
\multirow{3}{*}{\textbf{10\%}}
&Magnitude&22.42&21.78&27.40&51.25&15.39\\
& LLM-Pruner & \underline{51.46}&\underline{50.80}&\textbf{47.00}&\underline{56.25}& \textbf{20.95}\\
& DirectShare & \textbf{54.04}&\textbf{55.63}&\underline{39.40}&\textbf{56.88}&\underline{17.94}\\
\midrule
\multirow{3}{*}{\textbf{30\%}}
&Magnitude&21.80&22.01&\textbf{28.80}&46.25&4.19\\
& LLM-Pruner & \underline{23.96}&\underline{25.33}&26.40&\underline{53.75}&\textbf{16.76}\\
& DirectShare &\textbf{26.53}&\textbf{27.53}&\underline{27.40}&\textbf{59.38}&\underline{16.12}\\
\bottomrule
\bottomrule
\end{tabular}
}
\setlength{\abovecaptionskip}{5pt}
\setlength{\belowcaptionskip}{-5pt}
\caption{NLU Abilities of the Memory-efficient Models.}
\label{tab:understanding}
\end{table}

\begin{table*}[t]
\centering
\setlength\tabcolsep{3pt}
\large
\resizebox{\textwidth}{!}{
\begin{tabular}{@{}cccccccccccc@{}}
\toprule
\toprule
\textbf{Ratio} & \textbf{Method}
& \textbf{WinoGrande}& \textbf{BoolQ} & \textbf{C-Eval} & \textbf{MMLU}& \textbf{RACE-middle} & \textbf{RACE-high} & \textbf{OBQA}& \textbf{OBQA-fact}
\\
\midrule
\textbf{0\%} & \textbf{Llama 2-7B} & 54.04&70.67 & 32.20 & 46.69 & 33.15 & 35.51 & 31.8&42.2\\
\midrule
\multirow{2}{*}{\textbf{30\%}}& DirectShare & 50.18&54.43 & 26.24 & 26.53 & 21.45 & 21.53& 26.00&27.60\\
& PostShare &52.98 \small{\textcolor{red}{$\uparrow2.80$}}&66.57 \small{\textcolor{red}{$\uparrow12.14$}}&26.38 \small{\textcolor{red}{$\uparrow0.14$}}&33.36 \small{\textcolor{red}{$\uparrow6.83$}}&29.81 \small{\textcolor{red}{$\uparrow8.36$}}&29.45 \small{\textcolor{red}{$\uparrow7.92$}}&27.60 \small{\textcolor{red}{$\uparrow1.60$}}&33.60 \small{\textcolor{red}{$\uparrow6.00$}}\\
\bottomrule
\bottomrule
\end{tabular}
}
\setlength{\belowcaptionskip}{-10pt}
\caption{Performance of Memory-efficient Llama 2-7B via \textbf{PostShare}. See Appendix~\ref{app:postshare_13b} for results on Llama 2-13B.}
\label{tab:postshare}
\end{table*}

\textbf{Knowledge-related Tasks.}
We evaluate knowledge-related abilities on various datasets: WinoGrande~\citep{levesque2012winograd} for language, BoolQ~\citep{clark2019boolq} for knowledge question answering, and C-Eval~\citep{huang2023ceval} and MMLU~\citep{hendryckstest2021} as two comprehensive examination benchmarks.
Table~\ref{tab:knowledge} summarizes the mean accuracies on those tasks after \textbf{DirectShare} is applied to the Llama 2 models.
See Appendix~\ref{app:b_knowledge} for the results based on Baichuan 2 models.

As depicted in Table~\ref{tab:knowledge}, \textbf{DirectShare} holds a clear advantage over the other approaches on the examination benchmarks.
The chosen C-Eval and MMLU span diverse disciplines to test both world knowledge and problem-solving ability, in a Chinese and an English context, respectively.
To make this more concrete, Figure~\ref{fig:exam} vividly contrasts the performance across different subjects based on Llama 2-7B on C-Eval and MMLU.
However, we have to admit that directly sharing weights across attention heads results in an obvious decline in knowledge-related abilities, which can be addressed by \textbf{PostShare}.

\begin{table}[htbp]
\centering
\setlength\tabcolsep{3pt}
\large
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}cccccc@{}}
\toprule
\toprule
\textbf{Ratio} & \textbf{Method}
& \textbf{WinoGrande} & \textbf{BoolQ} &
\textbf{C-Eval} & \textbf{MMLU}
\\
\midrule
\textbf{0\%} & \textbf{Llama 2-7B} & 54.04&70.67&32.20&46.69\\
\midrule
\multirow{3}{*}{\textbf{10\%}}
&Magnitude&51.58&60.80&22.16&28.20\\
& LLM-Pruner & \textbf{52.98}&\underline{66.09}&\underline{22.31}&\underline{38.11} \\
& DirectShare &\underline{52.63}&\textbf{67.74}&\textbf{28.75}&\textbf{43.43}\\
\midrule
\multirow{3}{*}{\textbf{30\%}}
&Magnitude&\textbf{50.88}&44.59&\underline{24.38}&23.15\\
& LLM-Pruner& \textbf{50.88}&\textbf{54.77}&22.82&\underline{25.16} \\
& DirectShare & \underline{50.18}&\underline{54.43} &\textbf{26.24}&\textbf{26.53}\\
\midrule
\midrule
\textbf{0\%} & \textbf{Llama 2-13B} &55.44&71.50&40.17&55.81\\
\midrule
\multirow{3}{*}{\textbf{10\%}}
&Magnitude&49.82&62.32&22.52&27.54\\
& LLM-Pruner & \textbf{55.44}&\underline{68.07}&\underline{30.25}&\underline{51.45}\\
& DirectShare & \underline{54.39}&\textbf{69.45}&\textbf{37.17}&\textbf{52.81}\\
\midrule
\multirow{3}{*}{\textbf{30\%}}
&Magnitude&49.12&56.45&\textbf{23.99}&22.86\\
& LLM-Pruner & \textbf{51.58}&\textbf{63.21}&22.17&\underline{27.22}\\
& DirectShare &\underline{50.18}&\underline{59.36}&\underline{22.30}&\textbf{30.79}\\
\bottomrule
\bottomrule
\end{tabular}
}
\setlength{\belowcaptionskip}{-15pt}
\caption{Results on Knowledge-related Tasks of the Memory-efficient Models.}
\label{tab:knowledge}
\end{table}
\begin{figure}[htbp]
\centering
\includegraphics[width=\linewidth]{images/exam.pdf}
\setlength{\abovecaptionskip}{-10pt}
\setlength{\belowcaptionskip}{-15pt}
\caption{Performance across Different Subjects based on Llama 2-7B on C-Eval and MMLU.}
\label{fig:exam}
\end{figure}

\subsubsection{Evaluation on PostShare}\label{sec:postshare}
Based on the evaluation conducted on \textbf{DirectShare}, we experiment on \textbf{PostShare}, with a special focus on those benchmarks where \textbf{DirectShare} experiences a large accuracy degradation.

Table~\ref{tab:postshare} reports how the performance improves with only 0.5 training epochs for the Llama 2-7B model.
Specifically, on the reading comprehension and knowledge-related tasks mentioned above, \textbf{PostShare} achieves 87.53\% of the overall accuracy attained by the original model.
Most of the gap between models after \textbf{DirectShare} and their original counterparts can be narrowed via \textbf{PostShare}, especially on the BoolQ and RACE datasets.

Last, it is important to emphasize that here we perform post-training with a limited training corpus, and thus it runs the risk of overfitting even within a single training epoch.
For example, PostShare achieves higher accuracy on BoolQ at 0.3 epochs than at 0.5 epochs (68.29 vs. 66.57).
In contrast, as training progresses from 0.5 to 0.9 epochs, the accuracy on WinoGrande rises (52.98 vs. 54.39).
This means that, due to the domain-constrained corpus, overfitting to one specific dataset can potentially compromise the capabilities on other tasks.
The in-depth analysis is provided in Appendix~\ref{app:overfitting}.

\subsection{Additional Analysis}\label{sec:addition_study}

\begin{table*}[t]
\centering
\vspace{-15pt}
\setlength\tabcolsep{3pt}
\large
\resizebox{\textwidth}{!}{
\begin{tabular}{@{}ccccccccccccccccc@{}}
\toprule
\toprule
\textbf{\makecell{Method\\Ratio=30\%}} & \textbf{CMNLI}& \textbf{OCNLI}& \textbf{AX-b}& \textbf{AX-g}& \textbf{RTE} & \textbf{\makecell{Wino-\\Grande}}& \textbf{BoolQ} & \textbf{C-Eval} & \textbf{MMLU}& \textbf{\makecell{RACE-\\middle}} & \textbf{\makecell{RACE-\\high}} & \textbf{OBQA}& \textbf{\makecell{OBQA-\\fact}}&\textbf{CSL}
\\
\midrule
DirectShare &33.33&32.50&57.07&51.69&49.10& 50.18&54.43 & 26.24 & 26.53 & 21.45 & 21.53& 26.00&27.60&51.25\\
\midrule
\makecell{DirectShare \\+ 4bit GPTQ} &\makecell{34.61\\\small{\textcolor{red!75!black}{$\uparrow1.28$}}}&\makecell{30.63\\\small{\textcolor{green!50!black}{$\downarrow1.87$}}}&\makecell{57.79\\\small{\textcolor{red!75!black}{$\uparrow0.72$}}}&\makecell{47.47\\\small{\textcolor{green!50!black}{$\downarrow4.22$}}}&\makecell{49.82\\\small{\textcolor{red!75!black}{$\uparrow0.72$}}}&\makecell{49.12\\\small{\textcolor{green!50!black}{$\downarrow1.06$}}}&\makecell{51.95\\\small{\textcolor{green!50!black}{$\downarrow2.48$}}}&\makecell{21.88\\\small{\textcolor{green!50!black}{$\downarrow4.34$}}}&\makecell{25.38\\\small{\textcolor{green!50!black}{$\downarrow1.15$}}}&\makecell{21.24\\\small{\textcolor{green!50!black}{$\downarrow0.21$}}}&\makecell{21.33\\\small{\textcolor{green!50!black}{$\downarrow0.20$}}}&\makecell{23.40\\\small{\textcolor{green!50!black}{$\downarrow2.60$}}}&\makecell{26.60\\\small{\textcolor{green!50!black}{$\downarrow1.00$}}}&\makecell{50.00\\\small{\textcolor{green!50!black}{$\downarrow1.25$}}}\\
\bottomrule
\bottomrule
\end{tabular}
}
\setlength{\abovecaptionskip}{5pt}
\setlength{\belowcaptionskip}{-10pt}
\caption{Weight Sharing and Quantization on Llama 2-7B.}
\label{tab:quant}
\end{table*}

\textbf{Ablation on the Impact of Different Head-wise Matching Functions.}
For weight sharing, the choice of shared heads is critical.
In Figure~\ref{fig:sharing_strategy}, we plot the performance curve on PIQA~\citep{bisk2019piqa} and OpenBookQA using different head-wise match functions for the Baichuan 2-7B model.
The corresponding detailed results are presented in Appendix~\ref{app:ablation}.
Notably, using the cosine similarity of the concatenated $W^q$ and $W^k$ attains the most favorable outcomes.
This may be because it maximizes the similarity between the attention maps of the model before and after weight sharing.
Also, this choice is much more stable and robust on some tasks like reading comprehension (e.g., OpenBookQA).

\textbf{Robustness on the Model Size.}
In previous experiments, we adopt our approach in LLM settings.
Since small-scale models are not as highly over-parameterized as large-scale models~\citep{gao2023small}, we further verify the effectiveness of our method on smaller models such as BERT-base and GPT2-small.
For this analysis, we vary the sharing ratio from 0\% to 50\% with a step of 10\% for the fine-tuned GPT2-small model on the WMT-14 En-Fr dataset.
As shown in Table~\ref{tab:gpt2-small}, at a 50\% sharing ratio, GPT2-small can still yield a BLEU score of 39.44 without any post-training.
Such a variance in performance is acceptable, which to some degree shows that our method is also suitable for small-scale models.

\begin{table}[htbp]
\centering
\setlength\tabcolsep{3pt}
\large
\resizebox{\linewidth}{!}{\begin{tabular}{c|cccccc}
\toprule
\toprule
\textbf{Sharing Ratio}& 0\% & 10\% & 20\% & 30\%& 40\% & 50\%\\
\midrule
\textbf{BLEU}&\textbf{43.62}&42.49&41.95 &41.34&39.96&39.44\\
\textbf{Meteor}&\textbf{42.33}&40.75 &40.18 &38.43&37.21&36.62\\
\bottomrule
\bottomrule
\end{tabular}
}
\setlength{\abovecaptionskip}{5pt}
\setlength{\belowcaptionskip}{0pt}
\caption{Robustness on the Model Size via \textbf{PostShare} (Performed on GPT2-small using WMT-14 En-Fr).}
\label{tab:gpt2-small}
\vspace{-1.0em}
\end{table}

\textbf{Combine Weight Sharing with Quantization.}
In terms of saving memory, post-training quantization reduces the precision of the LLM parameters, while weight sharing reduces the number of parameters.
Coming from these two different directions, we suppose that integrating weight sharing and quantization may lead to even greater memory reduction for LLMs.
Hence, we choose GPTQ~\citep{frantar2022gptq} as a representative and test the effectiveness of applying the two techniques in tandem.
Specifically, we quantize the Llama 2-7B model after 30\% \textbf{DirectShare} to 4-bit precision.
As reported in Table~\ref{tab:quant}, the two can be effectively combined with a performance drop of no more than 5 points.

\textbf{Combine PostShare with DirectShare.}
Another interesting finding is the combination of our DirectShare and PostShare, where PostShare can play a role in fast performance recovery for DirectShare.
Specifically, if we set the sharing ratio to 30\% and post-train for only 0.5 epochs, the combination based on Llama 2-7B performs on par with PostShare, as Figure~\ref{fig:direct_post} shows.
It can also be seen that DirectShare+PostShare outperforms on some specific datasets like BoolQ and WinoGrande, which we speculate is because it mitigates the overfitting problem of PostShare to some extent.

\begin{figure}[t]
\centering
\includegraphics[width=0.88\linewidth]{images/direct_post.pdf}
\setlength{\abovecaptionskip}{5pt}
\setlength{\belowcaptionskip}{-10pt}
\caption{Results on Various Benchmarks via \textbf{DirectShare+PostShare} based on the Llama 2-7B model.}
\label{fig:direct_post}
\end{figure}

\textbf{Visualization Study on the Shared Weights.}
To provide a more detailed explanation of our rationale behind head-wise weight sharing, we conduct a visualization study on the ratios of weight sharing across the MHA layers in two models of different scales (see Appendix~\ref{app:visualization}).
Results indicate that the distribution of shareable weights across attention heads is similar regardless of the sharing ratio.
We also observe a more balanced sharing ratio across MHA layers than in layer-wise weight sharing, which may seem counter-intuitive.
However, we find that such fine-grained operations on weights have already been used in model pruning~\cite{sun2023simple,ma2023llm}, consistently outperforming layer-wise pruning.

\section{Conclusion}
In this paper, we illustrate the feasibility of a fine-grained weight sharing strategy applied to LLMs, namely head-wise shareable attention.
Consequently, we propose two methods for head-wise weight sharing called \textbf{DirectShare} and \textbf{PostShare}, which are complementary in terms of time and performance.
Our DirectShare concentrates on a simple, training-free yet effective sharing strategy, performing competitively with one of the state-of-the-art model pruning methods.
PostShare, on the other hand, shows impressive performance in preserving LLMs' capabilities, at the cost of time efficiency.
Last, we hope our work inspires researchers to explore better fine-grained weight sharing techniques for memory-efficient LLMs.

\section*{Limitations}
This paper primarily focuses on head-wise weight sharing in the Multi-Head Attention (MHA) block, inspired by the attention map similarity across heads.
However, the Feed-Forward Network (FFN) block has more parameters than the MHA block.
To further reduce the memory usage of LLMs, it is necessary to investigate the feasibility of applying weight sharing to the FFN block.
Subsequently, similar to our exploration of the MHA block, we should determine whether layer-wise weight sharing in the FFN block is sufficient; otherwise, fine-grained shared modules are needed to preserve more performance.
We leave this as future work.

Furthermore, limited computing resources restricted our ability to conduct experiments on LLMs with a model size of more than 13B.
Although we hypothesize that our approach can still work on larger models, which have been shown to contain redundant parameters~\citep{frantar2023sparsegpt}, it is crucial to validate this hypothesis with further exploration.

\bibliography{anthology,reference}
\bibliographystyle{acl_natbib}

\appendix
\section{Implementation Details}\label{app:experiments}
In this section, we provide additional information about our experimental implementation.

\subsection{For the Baselines}
To our knowledge, there is no existing baseline for our methods, due to the absence of prior research on fine-grained weight sharing for LLMs.
To provide a comprehensive demonstration of the effectiveness of our \textbf{DirectShare}, we can only choose important memory-efficient methods of a different category for comparison.
Here, we select two model pruning methods applied to LLMs: the classical \textbf{Magnitude Pruning} and the state-of-the-art structured pruning method \textbf{LLM-Pruner}.
We do not consider unstructured pruning methods in this paper since they cannot achieve real memory reduction without specialized hardware or software.

Based on the results presented in Tables~\ref{tab:b_reasoning}, \ref{tab:b_understanding} and \ref{tab:b_knowledge}, it is evident that our \textbf{DirectShare} performs on par with one of the prior best structured pruning methods in terms of overall performance and is superior to standard magnitude pruning.
Consequently, we claim that designing such a fine-grained (i.e., head-wise) weight sharing strategy with a specific focus on LLMs is indeed simple but effective, and this would be a good direction for future work.

\subsection{For the Post-training}
For the post-training process, we employ the code framework from the LLaMA-Factory repository\footnote{https://github.com/hiyouga/LLaMA-Factory} with DeepSpeed ZeRO-1\footnote{Because of the special loss function we designed for the post-training stage, only DeepSpeed ZeRO-1 works.}.
The Adam optimizer with a learning rate of 5e-5 is used in our experiments, with $\beta_1=0.9$ and $\beta_2=0.95$.
For the Llama 2-7B model, we set the batch size to 32, while for the Llama 2-13B model the batch size is only 8, subject to the limited computational resources.
Besides, the maximum context size and $\gamma$ are set to 4096 and 0.5, respectively.
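
For reference, a minimal PyTorch sketch of the optimizer setup implied by these hyperparameters is given below; it is an illustration only, not the actual LLaMA-Factory configuration, and the placeholder module merely stands in for the backbone model.
\begin{verbatim}
import torch

model = torch.nn.Linear(8, 8)  # placeholder for the backbone

optimizer = torch.optim.Adam(
    model.parameters(),
    lr=5e-5,            # learning rate stated above
    betas=(0.9, 0.95),  # beta_1 and beta_2 stated above
)
gamma = 0.5        # strength of the alignment loss L_w
max_length = 4096  # maximum context size
batch_size = 32    # 8 for Llama 2-13B
\end{verbatim}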

\section{Experimental Results based on Baichuan 2 Models}\label{sec:baichuan2}
We re-implement \textbf{Magnitude Pruning} and \textbf{LLM-Pruner} with their public code to accommodate Baichuan 2 models.
\subsection{Logical and Common Sense Reasoning} \label{app:b_reasoning}
Table~\ref{tab:b_reasoning} compares the three memory-efficient methods on five reasoning datasets for the Baichuan 2 models.

Our results show that, compared to NLU and knowledge-related abilities (listed in Tables~\ref{tab:b_understanding} and \ref{tab:b_knowledge}), \textbf{DirectShare} can indeed maintain its reasoning abilities to a large extent.
Specifically, at a 30\% ratio, DirectShare remains competitive with LLM-Pruner.

\begin{table}[htbp]
\centering
\setlength\tabcolsep{3pt}
\large
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}ccccccc@{}}
\toprule
\toprule
\textbf{Ratio} & \textbf{Method}
& \textbf{CMNLI} & \textbf{OCNLI} & \textbf{AX-b} & \textbf{AX-g} & \textbf{RTE}
\\
\midrule
\textbf{0\%} & \textbf{Baichuan 2-7B} &33.37&41.88&51.90&50.28&57.40\\
\midrule
\multirow{3}{*}{\textbf{10\%}} &Magnitude&\underline{33.11}&33.12&\textbf{55.62}&\underline{50.84}&55.96\\
& LLM-Pruner&\textbf{37.31}&\underline{40.62}&49.18&50.00&\underline{60.65}\\
& DirectShare&33.00&\textbf{41.25}&\underline{49.55}&\textbf{51.12}&\underline{60.29} \\
\midrule
\multirow{3}{*}{\textbf{30\%}}
&Magnitude&\underline{32.97}&\underline{31.25}&\underline{48.28}&\textbf{51.97}&46.57\\
& LLM-Pruner&\textbf{34.20}&\textbf{34.38}&47.55&50.84&\textbf{51.26}\\
& DirectShare & \underline{32.97} &30.63&\textbf{54.71}&\underline{51.69}&\underline{49.82}\\
\midrule
\midrule
\textbf{0\%} & \textbf{Baichuan 2-13B} &33.21&40.62&59.69&50.59&44.77\\
\midrule
\multirow{3}{*}{\textbf{10\%}}
&Magnitude&33.21&31.25& \underline{55.62}&48.60&46.93\\
& LLM-Pruner &\textbf{33.66}&\underline{36.88}&\textbf{58.51}&\underline{49.72}&\underline{47.65}\\
& DirectShare &\underline{33.23}&\textbf{40.00}&53.71&\textbf{53.37}& \textbf{53.07}\\
\midrule
\multirow{3}{*}{\textbf{30\%}}
&Magnitude&\textbf{33.21}&\underline{30.00}&50.91&48.03& 43.32\\
& LLM-Pruner &33.04&\textbf{36.88}&\textbf{55.71}&\textbf{50.28}&\underline{44.04} \\
& DirectShare &\underline{33.11}&\underline{30.00}&\underline{54.98}&\underline{50.00}&\textbf{45.13}\\
\bottomrule
\bottomrule
\end{tabular}
}
\caption{Evaluation Results on Reasoning of the Memory-efficient Baichuan 2-7B \& Baichuan 2-13B.}
\label{tab:b_reasoning}
\end{table}

\subsection{Natural Language Understanding}\label{app:b_nlu}
Table~\ref{tab:b_understanding} presents the performance on each NLU task discussed in Section~\ref{sec:directshare} when applying \textbf{DirectShare} to the Baichuan 2 models.
Consistent with the experiments on the Llama 2-7B and Llama 2-13B models, a similar performance drop exists.
Thus, at the cost of post-training time, our PostShare can narrow the gap observed across the majority of datasets.
With regard to individual datasets, it remains to be seen whether the gap can be largely recovered given the best training epoch\footnote{We speculate that this may be attributed to the overfitting issue. Furthermore, as the model size increases, it becomes increasingly difficult to determine the optimal training epoch for effectively mitigating overfitting.}.

\begin{table}[htbp]
\centering
\setlength\tabcolsep{3pt}
\Large
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}ccccccc@{}}
\toprule
\toprule
\textbf{Ratio} & \textbf{Method}
& \textbf{\makecell{RACE- \\ middle}} & \textbf{\makecell{RACE- \\ high}} &
\textbf{OBQA}
& \textbf{CSL} & \textbf{TNEWS}
\\
\midrule
\textbf{0\%} & \textbf{Baichuan 2-7B} &51.04&52.63& 32.20&66.25&28.60 \\
\midrule
\multirow{3}{*}{\textbf{10\%}}
&Magnitude&24.37&28.13&\underline{30.20}&57.50&\textbf{27.60}\\
& LLM-Pruner &\underline{25.42}&\underline{35.36}&\textbf{32.60}&\underline{61.25}&26.05 \\
& DirectShare &\textbf{50.49}&\textbf{48.46}&28.20&\textbf{63.75}&\underline{26.23}\\
\midrule
\multirow{3}{*}{\textbf{30\%}}
&Magnitude&21.80&21.67&\textbf{27.60}&\textbf{57.50}&13.66\\
& LLM-Pruner&\underline{22.56}&\underline{22.67}&\underline{27.40}&\underline{53.12}&\textbf{21.31}\\
& DirectShare &\textbf{25.14}&\textbf{23.44}&\textbf{27.60}&52.50&\underline{18.40}\\
\midrule
\midrule
\textbf{0\%} & \textbf{Baichuan 2-13B} &68.94 &67.27 &42.20&63.12&28.96\\
\midrule
\multirow{3}{*}{\textbf{10\%}}
&Magnitude&25.56&26.33&26.20&45.62&11.38\\
& LLM-Pruner & \underline{41.71}&\underline{46.80}&\textbf{32.40}&\underline{62.50}&\textbf{29.23}\\
& DirectShare &\textbf{47.56} &\textbf{49.34}&\underline{31.20}&\textbf{64.38}&\underline{22.22}\\
\midrule
\multirow{3}{*}{\textbf{30\%}}
&Magnitude&\textbf{24.58}&\textbf{24.58}&25.40&50.62&6.65\\
& LLM-Pruner &\underline{22.63}&21.81&\textbf{26.80}&\textbf{55.00}&\textbf{24.13}\\
& DirectShare &22.14&\underline{23.99}&\underline{26.60}&\underline{53.13}&\underline{17.58}\\
\bottomrule
\bottomrule
\end{tabular}
}
\caption{NLU Abilities of the Memory-efficient Models.}
\label{tab:b_understanding}
\end{table}

\subsection{Knowledge-related Tasks} \label{app:b_knowledge}
The results of the Baichuan 2 models on knowledge-related tasks are shown in Table~\ref{tab:b_knowledge}.
A decline similar to that observed in the Llama 2-7B and Llama 2-13B models appears here as well.

\begin{table}[htbp]
\centering
\setlength\tabcolsep{3pt}
\large
\resizebox{\linewidth}{!}{
\begin{tabular}{@{}cccccc@{}}
\toprule
\toprule
\textbf{Ratio} & \textbf{Method}
& \textbf{WinoGrande} & \textbf{BoolQ} &
\textbf{C-Eval} & \textbf{MMLU}
\\
\midrule
\textbf{0\%} & \textbf{Baichuan 2-7B} &54.04 &63.30&56.19&54.65 \\
\midrule
\multirow{3}{*}{\textbf{10\%}}
&Magnitude&50.18&57.06&34.70&45.47\\
& LLM-Pruner &\underline{50.53}&\textbf{59.30}&\underline{48.14}&\textbf{51.78} \\
& DirectShare &\textbf{51.58}&\underline{58.01}&\textbf{50.41}&\underline{49.96}\\
\midrule
\multirow{3}{*}{\textbf{30\%}}
&Magnitude&49.12&\textbf{55.41}&\textbf{23.91}&\underline{24.36} \\
& LLM-Pruner&\underline{51.23}&48.93&\underline{22.11}&\textbf{25.62} \\
& DirectShare &\textbf{51.58}&\underline{51.53}&21.86&24.05\\
\midrule
\midrule
\textbf{0\%} & \textbf{Baichuan 2-13B} &56.14&67.00&59.21&59.58\\
\midrule
\multirow{3}{*}{\textbf{10\%}}
&Magnitude&50.53&40.55&25.22&25.55\\
& LLM-Pruner &\underline{51.23}&\textbf{65.87}&\underline{49.60}&\underline{51.49} \\
& DirectShare & \textbf{53.33}&\underline{61.04}&\textbf{53.65}&\textbf{52.60}\\
\midrule
\multirow{3}{*}{\textbf{30\%}}
&Magnitude&\underline{50.18}& \underline{50.09}&\textbf{25.35}&24.66\\
& LLM-Pruner&\textbf{50.53}&\textbf{59.42}&21.09&\textbf{24.95} \\
& DirectShare &48.77&40.83&\underline{23.25}&\underline{24.82}\\
\bottomrule
\bottomrule
\end{tabular}
}
\caption{Results on Knowledge-related Tasks of the Memory-efficient Models.}
\label{tab:b_knowledge}
\end{table}
794
+
795
+ \section{PostShare on Llama 2-13B Model}\label{app:postshare_13b}
796
+ In addition to Llama 2-7B, we also experiment with Llama 2-13B to evaluate \textbf{PostShare} (see Table~\ref{tab:postshare_13b}).
797
+ Compared to Llama 2-7B, the best training epoch for Llama 2-13B is much smaller: a few hundred training steps are enough, and training longer risks overfitting.
798
+ Moreover, the overfitting problem becomes more pronounced as the model size increases, which makes choosing the best training epoch more challenging.
799
+ \begin{table*}[htbp]
800
+ \centering
801
+ \setlength\tabcolsep{3pt}
802
+ \large
803
+ \resizebox{\textwidth}{!}{
804
+ \begin{tabular}{@{}cccccccccccc@{}}
805
+ \toprule
806
+ \toprule
807
+ \textbf{Ratio} & \textbf{Method}
808
+ & \textbf{WinoGrande}& \textbf{BoolQ} & \textbf{C-Eval} & \textbf{MMLU}& \textbf{RACE-middle} & \textbf{RACE-high} & \textbf{OBQA}& \textbf{OBQA-fact}
809
+ \\
810
+ \midrule
811
+ \textbf{0\%} & \textbf{Llama 2-13B} &55.44&71.50&40.17&55.81&60.24&58.03& 42.40&60.00\\
812
+ \midrule
813
+ \multirow{2}{*}{\textbf{30\%}}& DirectShare &50.18&59.36&22.30&30.79&26.53&27.53&27.40&27.80\\
814
+ & PostShare$^*$
815
+ &53.68 \small{\textcolor{red}{$\uparrow3.50$}}&71.25 \small{\textcolor{red}{$\uparrow11.89$}}&25.80 \small{\textcolor{red}{$\uparrow3.50$}}&33.90 \small{\textcolor{red}{$\uparrow3.11$}}&32.03 \small{\textcolor{red}{$\uparrow5.50$}}&29.07 \small{\textcolor{red}{$\uparrow1.54$}}&33.60 \small{\textcolor{red}{$\uparrow6.20$}}&38.80 \small{\textcolor{red}{$\uparrow11.00$}}\\
816
+ \bottomrule
817
+ \bottomrule
818
+ \end{tabular}
819
+ }
820
+ \setlength{\abovecaptionskip}{5pt}
821
+ \caption{Performance of the Memory-efficient Llama 2-13B via \textbf{PostShare}. $*$ denotes the checkpoint with relatively good performance chosen across different training steps.}
822
+ \label{tab:postshare_13b}
823
+ \end{table*}
824
+
825
+ \section{More Analysis}
826
+
827
+ \begin{figure*}[htbp]
828
+ \centering
829
+ \vspace{-0.6em}
830
+ \begin{subfigure}{\textwidth}
831
+ \centering
832
+ \includegraphics[width=\linewidth]{images/match_distribution_20_blue.pdf}
833
+ \setlength{\abovecaptionskip}{-13pt}
834
+ \setlength{\belowcaptionskip}{1pt}
835
+ \caption{Sharing Ratio=20\%}
836
+ \end{subfigure}
837
+ \begin{subfigure}{\textwidth}
838
+ \centering
839
+ \includegraphics[width=\linewidth]{images/match_distribution_30_blue.pdf}
840
+ \setlength{\abovecaptionskip}{-13pt}
841
+ \caption{Sharing Ratio=30\%}
842
+ \end{subfigure}
843
+ \setlength{\abovecaptionskip}{-5pt}
844
+ \setlength{\belowcaptionskip}{-5pt}
845
+ \caption{Ratios of Weight Sharing across the MHA Layers in Llama 2-7B/13B \& Baichuan 2-7B/13B.}
846
+ \label{fig:match_distribution}
847
+ \end{figure*}
848
+
849
+ \subsection{Overfitting Phenomenon in PostShare}
850
+ \label{app:overfitting}
851
+ Figure~\ref{fig:overfitting} shows the performance curves on different kinds of datasets across various post-training steps.
852
+ Remarkably, our \textbf{PostShare} requires no more than one epoch to push the selected weights closer for sharing while preserving performance.
853
+ However, we observe a slight overfitting phenomenon in \textbf{PostShare}, i.e., the capabilities initially improve and then experience a slight decline.
854
+ Besides, the performance turning point clearly varies across datasets.
855
+ Detailed statistical data are provided in Table~\ref{tab:overfitting}.
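+ To make this checkpoint selection concrete, the following is a minimal sketch in Python (not our exact evaluation script): the \texttt{evaluate\_checkpoint} helper and the checkpoint naming are hypothetical placeholders for whatever evaluation harness and saving scheme are used.
+ \begin{verbatim}
+ # Minimal sketch: evaluate PostShare checkpoints saved at fractional
+ # epochs and keep the best one before the late-training decline.
+ # `evaluate_checkpoint` is a hypothetical helper.
+
+ def evaluate_checkpoint(ckpt_path, task):
+     # Placeholder: load the partially post-trained model at `ckpt_path`
+     # and return its accuracy on `task`.
+     raise NotImplementedError
+
+ def select_best_checkpoint(ckpt_paths, task="BoolQ"):
+     best_path, best_acc = None, float("-inf")
+     for path in ckpt_paths:          # e.g., one checkpoint per 0.1 epoch
+         acc = evaluate_checkpoint(path, task)
+         if acc > best_acc:
+             best_path, best_acc = path, acc
+     return best_path, best_acc
+
+ # Usage (hypothetical checkpoint paths):
+ # ckpts = [f"postshare_ckpt_epoch_{e/10:.1f}" for e in range(1, 11)]
+ # best, acc = select_best_checkpoint(ckpts, task="BoolQ")
+ \end{verbatim}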
856
+ \begin{figure}[htbp]
857
+ \centering
858
+ \includegraphics[width=\linewidth]{images/overfitting.pdf}
859
+ \setlength{\abovecaptionskip}{-10pt}
860
+ \setlength{\belowcaptionskip}{-10pt}
861
+ \caption{Accuracy across Different Training Steps during \textbf{PostShare}.}
862
+ \label{fig:overfitting}
863
+ \end{figure}
864
+
865
+ \begin{table}[htbp]
866
+ \centering
867
+ \setlength\tabcolsep{3pt}
868
+ \large
869
+ \resizebox{\linewidth}{!}{
870
+ \begin{tabular}{@{}ccccccc@{}}
871
+ \toprule
872
+ \toprule
873
+ \textbf{Epoch} &
874
+ \textbf{\makecell{RACE-\\middle}} & \textbf{\makecell{RACE-\\high}} & \textbf{OBQA}
875
+ & \textbf{BoolQ} & \textbf{PIQA} & \textbf{\makecell{Wino-\\Grande}}
876
+ \\
877
+ \midrule
878
+ 0.10&27.72&27.56&\textbf{29.40}&65.38&71.06&51.58\\
879
+ 0.20&27.72&27.59&\underline{28.80}&\underline{67.80}&73.29&52.28\\
880
+ 0.30&28.48&27.90&27.60&\textbf{68.29}&75.24&53.33\\
881
+ 0.40&28.13&27.99&27.00&66.09&75.79&52.98\\
882
+ 0.50&29.81&29.45&27.60&66.57&76.00&52.98\\
883
+ 0.60&\underline{30.36}&\textbf{30.36}&27.40&65.72&75.90&52.98\\
884
+ 0.70&\textbf{30.43}&\underline{30.25}&27.60&66.15&75.90&52.98\\
885
+ 0.80&29.60&30.10&27.80&65.44&75.90&\textbf{55.09}\\
886
+ 0.90&29.53&29.87&27.60&65.54&\textbf{76.33}&\underline{54.39}\\
887
+ 1.00&29.67&30.02&27.80&65.38&\underline{76.06}&54.04\\
888
+ \bottomrule
889
+ \bottomrule
890
+ \end{tabular}
891
+ }
892
+ \caption{Accuracy across Different Training Steps
893
+ during \textbf{PostShare}.}
894
+ \label{tab:overfitting}
895
+ \end{table}
896
+
897
+ \subsection{Impact of Different Head-wise Matching Functions}\label{app:ablation}
898
+ The selection of shared heads plays a crucial role in weight sharing.
899
+ An ablation experiment on the head-wise matching function is shown in Table~\ref{tab:match_func}.
900
+
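+ To make the compared alternatives concrete, the following is a minimal sketch in Python of these head-wise matching functions. It assumes cosine similarity over flattened per-head weight slices and a layout in which heads are split along the output dimension of each projection matrix; both are illustrative assumptions rather than a description of our exact implementation.
+ \begin{verbatim}
+ # Minimal sketch of the head-wise matching functions compared above.
+ # Assumptions: cosine similarity over flattened per-head slices;
+ # projection weights have shape (num_heads * head_dim, hidden), so a
+ # head's slice lies along the output (row) dimension.
+ import torch
+ import torch.nn.functional as F
+
+ def head_slice(W, head, head_dim):
+     # Flattened weight slice belonging to one attention head.
+     return W[head * head_dim:(head + 1) * head_dim, :].flatten()
+
+ def match_score(heads_a, heads_b, mode="q||k"):
+     # heads_a / heads_b: dicts {"q": ..., "k": ..., "v": ...} holding the
+     # flattened W^q, W^k, W^v slices of a single head.
+     if mode in ("q", "k", "v"):               # single-matrix variants
+         a, b = heads_a[mode], heads_b[mode]
+     elif mode == "q||k":                      # the variant marked (Ours)
+         a = torch.cat([heads_a["q"], heads_a["k"]])
+         b = torch.cat([heads_b["q"], heads_b["k"]])
+     elif mode == "q||k||v":
+         a = torch.cat([heads_a["q"], heads_a["k"], heads_a["v"]])
+         b = torch.cat([heads_b["q"], heads_b["k"], heads_b["v"]])
+     else:
+         raise ValueError(f"unknown matching function: {mode}")
+     return F.cosine_similarity(a, b, dim=0).item()
+ \end{verbatim}
+ In this sketch, head pairs with the highest scores under the chosen matching function would be selected for sharing, up to the target sharing ratio.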
901
+ \begin{table*}[htbp]
902
+ \centering
903
+ \setlength\tabcolsep{3pt}
904
+ \large
905
+ \resizebox{\textwidth}{!}{
906
+ \begin{tabular}{@{}c|cc|cc|cc|cc|cc|cc|c|c@{}}
907
+ \toprule
908
+ \toprule
909
+ \textbf{Sharing Ratio} & \multicolumn{2}{c}{\textbf{5\% }} & \multicolumn{2}{c}{\textbf{10\% }} & \multicolumn{2}{c}{\textbf{15\% }} & \multicolumn{2}{c}{\textbf{20\% }} & \multicolumn{2}{c}{\textbf{25\% }} & \multicolumn{2}{c}{\textbf{30\% }} & \multicolumn{1}{c}{\textbf{35\% }} & \multicolumn{1}{c}{\textbf{40\% }} \\
910
+ \midrule
911
+ \textbf{Dataset} & PIQA & OBQA&PIQA & OBQA&PIQA & OBQA&PIQA & OBQA&PIQA & OBQA&PIQA & OBQA&OBQA&OBQA\\
912
+ \midrule
913
+ $W^q$&\underline{74.92}&29.2&\underline{74.97}&27.5&73.29&\underline{27.8}&\underline{70.89}&27.7&64.64&\underline{27.5}&58.43&25.5&24.4&25.7\\
914
+ $W^k$&\underline{74.92}&28.7&74.27&27.6&71.71&27.7&70.35&27.6&\underline{68.77}&\textbf{27.6}&\underline{64.36}&27.2&\underline{27.6}&\underline{26.9}\\
915
+ $W^v$&\underline{74.92}&28.1&74.48&27.7&73.29&26.7&70.46&\textbf{28.5}&68.39&25.6&60.17&23.1&23.9&22.5\\
916
+ $W^q$,$W^k$,$W^v$&71.71&27.6&63.55&27.8&54.03&26.8&50.16&24.5&51.41&25.5&51.09&25.5&\textbf{29.0}&25.3\\
917
+ $W^q||W^k||W^v$&74.59&\textbf{34.7}&74.59&\textbf{30.0}&\underline{73.45}&\textbf{30.3}&70.73&\underline{28.2}&66.59&\textbf{27.6}&63.33&\underline{27.6}&27.1&25.0\\
918
+ $W^q||W^k$(Ours)&\textbf{75.84}&\underline{33.9}&\textbf{75.30}&\underline{28.2}&\textbf{74.54}&27.5&\textbf{73.01}&27.3&\textbf{69.37}&\underline{27.5}&\textbf{65.56}&\textbf{28.0}&\underline{27.6}&\textbf{28.6}\\
919
+ \bottomrule
920
+ \bottomrule
921
+ \end{tabular}
922
+ }
923
+ \setlength{\belowcaptionskip}{-5pt}
924
+ \caption{Results on PIQA and OBQA with Different Head-wise Matching Functions for the Baichuan 2-7B model.}
925
+ \label{tab:match_func}
926
+ \end{table*}
927
+
928
+
929
+ \subsection{Visualization Study on the Shared Weights}\label{app:visualization}
930
+ As depicted in Figure~\ref{fig:match_distribution}, the distribution of the ratios of shared weights across attention heads is similar regardless of the overall sharing ratio.
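+ For reference, the per-layer ratios in Figure~\ref{fig:match_distribution} can be computed from the set of matched head pairs; the following is a minimal Python sketch assuming a hypothetical pair format \texttt{((src\_layer, src\_head), (tgt\_layer, tgt\_head))}, not the exact plotting script.
+ \begin{verbatim}
+ # Minimal sketch (assumed pair format): fraction of heads in each MHA
+ # layer that participate in weight sharing.
+
+ def layer_sharing_ratios(shared_pairs, num_layers, num_heads):
+     shared = [set() for _ in range(num_layers)]
+     for (src_layer, src_head), (tgt_layer, tgt_head) in shared_pairs:
+         shared[src_layer].add(src_head)
+         shared[tgt_layer].add(tgt_head)
+     return [len(heads) / num_heads for heads in shared]
+
+ # Example: one pair shared between (layer 0, head 3) and (layer 5, head 7)
+ # ratios = layer_sharing_ratios([((0, 3), (5, 7))],
+ #                               num_layers=32, num_heads=32)
+ \end{verbatim}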
931
+
932
+
933
+ \section{Generations from Models after Weight Sharing}
934
+ Tables~\ref{tab:shared_llama2_7b}, \ref{tab:shared_baichuan2_7b} and \ref{tab:gpt2_small_translation} show more example generations from the models after head-wise weight sharing.
935
+ We present the generation results of the shared models obtained via both DirectShare and PostShare.
936
+ \begin{table}[htbp]
937
+ \centering
938
+ \setlength\tabcolsep{3pt}
939
+ \large
940
+ \resizebox{\linewidth}{!}{
941
+ \Large
942
+ \begin{tabular}{p{3cm}|p{10cm}}
943
+ \toprule
944
+ \toprule
945
+ \textbf{Model } & \textbf{Generated Responses} \\
946
+ \midrule
947
+ \multirow{13}{3cm}{Llama 2-7B \\w/ DirectShare} & \underline{The future is going to be one that presents many} \underline{challenges,} but also a great deal of opportunity. Everybody knows this, and it’s why we have been working so hard on the newest version of the future. The problem with the future is that it’s not always easy to see what you are doing wrong. It can be difficult to find \\
948
+ \cmidrule{2-2}
949
+ &\underline{Neural Sequential Model, especially transformers,} are a powerful tool for the analysis of time-series data. surely, they can be used to predict the future. Their use is not limited to the financial industry. In this article, we will discuss the benefits and limitations of the \\
950
+ \midrule
951
+ \multirow{13}{3cm}{Llama 2-7B \\w/ PostShare} & \underline{The future is going to be one that presents many} \underline{challenges,} and the world will need a new generation of leaders. The United Nations has identified the following as key issues for the future: Climate change, Food security, Energy security, Water scarcity, Global health, Human rights, International law, Peacekeeping, Sustainable\\
952
+ \cmidrule{2-2}
953
+ &\underline{Neural Sequential Model, especially transformers,} is a type of artificial neural network that uses a sequence of vectors to represent the input and output. The model was introduced by Geoffrey Hinton in 1987. It has been used for speech recognition, handwriting recognition,\\
954
+ \bottomrule
955
+ \bottomrule
956
+ \end{tabular}
957
+ }
958
+ \setlength{\belowcaptionskip}{-5pt}
959
+ \caption{Responses from Llama 2-7B (Sharing Ratio=30\%).}
960
+ \label{tab:shared_llama2_7b}
961
+ \end{table}
962
+
963
+ \begin{table}[htbp]
964
+ \centering
965
+ \setlength\tabcolsep{3pt}
966
+ \large
967
+ \resizebox{\linewidth}{!}{
968
+ \Large
969
+ \begin{tabular}{p{3cm}|p{10cm}}
970
+ \toprule
971
+ \toprule
972
+ \textbf{Model } & \textbf{Generated Responses} \\
973
+ \midrule
974
+ \multirow{14}{3.5cm}{Baichuan 2-7B w/ DirectShare (English)} & \underline{There are many reasons to support why persever-} \underline{ance is of great importance.} The first and foremost, it's the only way you can get better at what ever your doing! It also gives us an opportunity to learn from our mistakes or \\
975
+ \cmidrule{2-2}
976
+ &\underline{Her eyes shone with happiness.} “I’m so happy to see you,” She said, “and I am very proud of the work that we have done together in this project and it is a great honor for me as an artist because there are many people who has been working on these projects over the years but not all of them were able to do what was needed from us at any given time when things got out of hand or if they had lost their way” \\
977
+ \midrule
978
+ \multirow{8}{3.5cm}{Baichuan 2-7B w/ DirectShare (Chinese)} & \begin{CJK*}{UTF8}{gbsn}\underline{如何健康的生活} 1. 注意休息,保证睡眠。2.合理饮食:多吃水果、蔬菜等。3.适当运动(如散步) 4.保持乐观心态 5.多饮水 6.避免劳累7.不要熬夜 8.劳逸结合 9.注意安全 10.尊重生命\end{CJK*}\\
979
+ \cmidrule{2-2}
980
+ &\begin{CJK*}{UTF8}{gbsn}\underline{想要成功需要把握}什么?1. 首先,要有一个好的心态。2.其次,要有一定的经济基础3.接着就是自己的实力了4. 最后一个条件是:自己一定要努力!\end{CJK*}\\
981
+ \bottomrule
982
+ \bottomrule
983
+ \end{tabular}
984
+ }
985
+ \setlength{\belowcaptionskip}{-5pt}
986
+ \caption{Responses from Baichuan 2-7B (Sharing Ratio=30\%).}
987
+ \label{tab:shared_baichuan2_7b}
988
+ \end{table}
989
+
990
+ \begin{table}[htbp]
991
+ \centering
992
+ \setlength\tabcolsep{3pt}
993
+ \Large
994
+ \resizebox{\linewidth}{!}{
995
+ \Large
996
+ \begin{tabular}{p{2.6cm}|p{7.1cm}|p{4.8cm}}
997
+ \toprule
998
+ \toprule
999
+ \textbf{Model } & \textbf{Generated Responses} &\textbf{Reference}\\
1000
+ \midrule
1001
+ \multirow{24}{3cm}{GPT2-small w/ PostShare} & \underline{In the letter, Mr Snowden said he} \underline{believed the support of the intern-} \underline{ational community could persuade} \underline{the US government to abandon cr-} \underline{iminal charges against him.<to-fr>} Dans la lettre, M. le député a dit que l'appui de la communauté internationale pourrait convaincre le gouvernement américain de abandonner la charge pénale contre lui.
1002
+ & Dans sa lettre, M. Snowden a écrit qu'il pensait que le soutien de la communauté internationale pourrait persuader le gouvernement américain d'abandonner les charges pénales retenues contre lui. \\
1003
+ \cmidrule{2-3}
1004
+ &\underline{The letter extends an offer to coop-} \underline{erate with German authorities "wh-} \underline{en the difficulties of this humanita-} \underline{rian situation have been resolved."} \underline{<to-fr>} La lettre élargit une invitation à coopérer avec les autorités allemands lorsque les difficultés de cette situation humanitaire sont réglées. & La lettre avance une offre de coopération avec les autorités allemandes « lorsque les difficultés de cette situation humanitaire auront été résolues ». \\
1005
+ \cmidrule{2-3}
1006
+ & \underline{The first test plane was unveiled in} \underline{March and took flight for the first} \underline{time in September after months of} \underline{delays.<to-fr>} Le premier étudiant a été démontré en mars et a fait l'avion pour la première fois après des mois de retard. & Le premier avion d'essai a été dévoilé en mars et s'est envolé pour la première fois en septembre après des mois de retard.\\
1007
+ \bottomrule
1008
+ \bottomrule
1009
+ \end{tabular}
1010
+ }
1011
+ \caption{Responses from GPT2-small (Sharing Ratio=30\%).}
1012
+ \label{tab:gpt2_small_translation}
1013
+ \end{table}
1014
+ \end{document}