\typeout{IJCAI--23 Instructions for Authors}

\documentclass{article}
\pdfpagewidth=8.5in
\pdfpageheight=11in

\usepackage{ijcai24}

\usepackage{times}
\usepackage{soul}
\usepackage{url}
\usepackage[hidelinks]{hyperref}
\usepackage[utf8]{inputenc}
\usepackage[small]{caption}
\usepackage{graphicx}
\usepackage{amsmath}
\usepackage{amsthm}
\usepackage{booktabs}
\usepackage{algorithm}
\usepackage{algorithmic}
\usepackage[switch]{lineno}
\usepackage{fontawesome}
\usepackage{tikz}
\usepackage{inconsolata}
\usepackage{pifont}
\usepackage[edges]{forest}
\usepackage{makecell}
\usepackage{cleveref}
\usepackage{setspace}

\definecolor{hidden-draw}{RGB}{20,68,106}
\definecolor{hidden-blue}{RGB}{194,232,247}
\definecolor{hidden-orange}{RGB}{243,202,120}
\definecolor{hidden-yellow}{RGB}{255,229,204}
\definecolor{hidden-red}{RGB}{255,204,204}
\definecolor{hidden-pink}{RGB}{255,245,247}
\urlstyle{same}

\newtheorem{example}{Example}
\newtheorem{theorem}{Theorem}
\newcommand{\todo}[1]{{\color{red}{[To-do: #1]}}}
\newcommand{\trang}[1]{\textcolor{blue}{#1 }}

\pdfinfo{
/TemplateVersion (IJCAI.2023.0)
}

\title{Continual Learning for Large Language Models: A Survey}

\author{
Tongtong Wu$^1$\and
Linhao Luo$^1$\and
Yuan-Fang Li$^1$\and
Shirui Pan$^2$\and
Thuy-Trang Vu$^1$\and \\
Gholamreza Haffari$^1$
\affiliations
$^1$Monash University{ }
$^2$Griffith University\\
\emails
\{first-name.last-name\}@monash.edu,
s.pan@griffith.edu.au
}
\begin{document}

\maketitle

\begin{abstract}
Large language models (LLMs) are not amenable to frequent re-training, due to the high training costs arising from their massive scale.
However, updates are necessary to endow LLMs with new skills and keep them up-to-date with rapidly evolving human knowledge.
This paper surveys recent works on continual learning for LLMs. Due to the unique nature of LLMs, we catalog continual learning techniques in a novel multi-staged categorization scheme, involving continual pretraining, instruction tuning, and alignment. We contrast continual learning for LLMs with simpler adaptation methods used in smaller models, as well as with other enhancement strategies such as retrieval-augmented generation and model editing. Moreover, informed by a discussion of benchmarks and evaluation, we identify a number of challenges and future directions for this crucial task.
\end{abstract}
\section{Introduction}
Recent years have witnessed rapid advances in large language models' (LLMs) capabilities in solving a diverse range of problems. At the same time, it is vital for LLMs to be regularly updated to accurately reflect ever-evolving human knowledge, values and linguistic patterns, calling for the investigation of \emph{continual learning} for LLMs.
Whilst continual learning bears some resemblance to other strategies for model improvement, such as retrieval-augmented generation (RAG)~\cite{LewisPPPKGKLYR020} and model editing~\cite{yao-etal-2023-editing}, their main purposes differ (\Cref{tab:comp}).
Unlike these strategies, whose primary focus is on refining domain-specific accuracy or expanding the model's factual knowledge base, continual learning aims to enhance the overall linguistic and reasoning capabilities of LLMs. This distinction is crucial, as it shifts the focus from merely updating information to developing a model's ability to process and generate language in a more comprehensive and nuanced manner~\cite{ZhangFCNW23}.

\begin{figure}[t]
\centering
\includegraphics[width=.48\textwidth]{figure/framework.png}
\caption{
Continual learning for large language models involves hybrid multi-stage training with multiple training objectives.}
\label{fig:cl4llm}
\end{figure}

\begin{figure*}[tb]
\centering
\includegraphics[width=\textwidth]{figure/multi-stage.png}
\caption{The continual learning of LLMs involves multi-stage and cross-stage iteration, which may lead to substantial forgetting problems. For example, when an instruction-tuned model resumes continual pre-training, it may encounter cross-stage forgetting, resulting in reduced performance on instruction-following tasks.}
\label{fig:stage}
\end{figure*}

Continual learning for LLMs also differs from its use in smaller models, including smaller pre-trained language models (PLMs). Due to their vast size and complexity, LLMs require a multi-faceted approach to continual learning. We categorise it into three different stages: \emph{continual pretraining} to expand the model's fundamental understanding of language~\cite{JinZZ00WA022}, \emph{continual instruction tuning} to improve the model's response to specific user commands~\cite{zhang2023citb}, and \emph{continual alignment} to ensure the model's outputs adhere to values, ethical standards and societal norms~\cite{zhang2023copf}. This multi-stage process is distinct from the more linear adaptation strategies used in smaller models, as illustrated in \Cref{fig:cl4llm}, highlighting the unique challenges and requirements of applying continual learning to LLMs.

This survey differentiates itself from previous studies by its unique focus and structure. While previous surveys in the field are typically organized around continual learning strategies~\cite{biesialska-etal-2020-continual}, ours is the first to specifically address continual learning in the context of LLMs. We structure our analysis around the types of information that are updated continually and the distinct stages of learning involved in LLMs. This survey offers a detailed and novel perspective on how continual learning is applied to LLMs, shedding light on the specific challenges and opportunities of this application. Our goal is to provide a thorough understanding of the effective implementation of continual learning in LLMs, contributing to the development of more advanced and adaptable language models in the future.
\begin{table}[t]
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{@{}ccc|c@{}}
\toprule
Information & RAG & Model Editing & Continual Learning \\ \midrule
Fact & \faCheckCircleO & \faCheckCircleO & \faCheckCircleO \\
Domain & \faCheckCircleO & $\times$ & \faCheckCircleO \\
Language & $\times$ & $\times$ & \faCheckCircleO \\
Task & $\times$ & $\times$ & \faCheckCircleO \\
Skills (Tool use) & $\times$ & $\times$ & \faCheckCircleO \\
Values & $\times$ & $\times$ & \faCheckCircleO \\
Preference & $\times$ & $\times$ & \faCheckCircleO \\ \bottomrule
\end{tabular}}
\caption{Continual learning vs.\ RAG and model editing.}\label{tab:comp}
\end{table}
\section{Preliminary and Categorization}
\subsection{Large Language Model}
Large language models (LLMs) like ChatGPT\footnote{\url{https://openai.com/blog/chatgpt}} and LLaMA \cite{touvron2023llama} have shown superior performance in many tasks.
They are usually trained in multiple stages, including pre-training, instruction tuning, and alignment, as illustrated in \Cref{fig:cl4llm}. In the \emph{pre-training} stage, LLMs are trained on a large corpus in a self-supervised manner \cite{dong2019unified}, where parts of the training text are masked and the LLMs are asked to predict the masked tokens. In the \emph{instruction tuning} stage, LLMs are fine-tuned on a set of instruction-output pairs in a supervised fashion \cite{zhang2023instruction}: given a task-specific instruction as input, LLMs are asked to generate the corresponding output. In the \emph{alignment} stage, LLMs are further fine-tuned with human feedback to align their outputs with human expectations \cite{wang2023aligning}. The outputs of LLMs are scored by human annotators, and the LLMs are updated to generate more human-like responses.
\subsection{Continual Learning}
Continual learning focuses on developing learning algorithms to accumulate knowledge from non-stationary data, often delineated by classes, tasks, domains or instances. In supervised continual learning, a sequence of tasks $\left\{\mathcal{D}_1, \ldots, \mathcal{D}_T\right\}$ arrives in a streaming fashion. Each task $\mathcal{D}_t=\left\{\left(\boldsymbol{x}_i^t, y_i^t\right)\right\}_{i=1}^{n_t}$ contains a separate target dataset, where $\boldsymbol{x}_i^t\in \mathcal{X}_t$ and $y_i^t\in \mathcal{Y}_t$.
A single model needs to adapt to the tasks sequentially, with access only to $\mathcal{D}_t$ at the $t$-th task. This setting requires models to acquire, update, accumulate, and exploit knowledge throughout their lifetime~\cite{biesialska-etal-2020-continual}.

The major challenge that conventional continual learning tackles is \emph{catastrophic forgetting}, where the performance of a model on old tasks significantly diminishes when it is trained on new data. Existing studies can be roughly grouped into three categories: experience replay methods \cite{chaudhry2019tiny,WuLLHQZX21}, regularization-based methods \cite{kirkpatrick2017overcoming,0002GWQLD23}, and dynamic architecture methods \cite{mallya2018piggyback}. Recently, researchers have designed hybrid methods that combine the aforementioned techniques \cite{abs-2305-08698,he2024lifelong}.
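The experience-replay idea can be sketched in a few lines. The following is a minimal, framework-agnostic sketch of our own (not any specific paper's implementation): a fixed-size memory buffer filled by reservoir sampling, whose stored examples are mixed into each new-task batch so the model keeps seeing old data while learning the current task.

```python
import random


class ReplayBuffer:
    """Fixed-size memory of past examples, filled by reservoir sampling."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.memory = []
        self.n_seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        self.n_seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            # Reservoir sampling: every example seen so far is retained
            # with equal probability capacity / n_seen.
            j = self.rng.randrange(self.n_seen)
            if j < self.capacity:
                self.memory[j] = example

    def sample(self, k):
        return self.rng.sample(self.memory, min(k, len(self.memory)))


def make_batch(new_examples, buffer, replay_ratio=0.5):
    """Mix fresh task-t examples with replayed ones from earlier tasks."""
    n_replay = int(len(new_examples) * replay_ratio)
    batch = list(new_examples) + buffer.sample(n_replay)
    for ex in new_examples:  # the new data becomes replayable later
        buffer.add(ex)
    return batch
```

Regularization-based methods such as EWC instead add a penalty on deviations from parameters important to old tasks, and dynamic-architecture methods allocate new parameters per task; the buffer above is the simplest of the three families.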
\subsection{Continual Learning for LLMs}
Continual learning for large language models aims to enable LLMs to learn from a continuous data stream over time. Despite its importance, it is non-trivial to directly apply existing continual learning settings to LLMs. We now provide a forward-looking framework of continual learning for LLMs, then present a categorization of research in this area.

\paragraph{Framework}
Our framework of continual learning for LLMs is illustrated in \Cref{fig:stage}. We align continual learning for LLMs with the different training stages, namely Continual Pre-training (CPT), Continual Instruction Tuning (CIT), and Continual Alignment (CA). The \emph{Continual Pre-training} stage conducts self-supervised training on a sequence of corpora to enrich LLMs' knowledge and adapt them to new domains. The \emph{Continual Instruction Tuning} stage finetunes LLMs on a stream of supervised instruction-following data, aiming to empower LLMs to follow users' instructions while transferring acquired knowledge to subsequent tasks. Responding to the evolving nature of human values and preferences, the \emph{Continual Alignment} stage continuously aligns LLMs with human values over time.

While continual learning on LLMs can be conducted in each stage sequentially, the iterative application of continual learning also makes it essential to transfer across stages without forgetting the abilities and knowledge learned in previous stages. For instance, we can conduct continual pre-training based on either instruction-tuned models or aligned models. However, we do not want the LLM to lose its ability to follow users' instructions and align with human values. Therefore, as shown in \Cref{fig:stage}, we use arrows with different colors to show the transfer between stages.
\paragraph{Categorization}
To better understand the research in this area, we provide a fine-grained categorization for each stage of the framework.

\textbf{Continual Pre-training (CPT)}
\begin{itemize}
\item \emph{CPT for Updating Facts} includes works that adapt LLMs to learn new factual knowledge.
\item \emph{CPT for Updating Domains} includes research that tailors LLMs to specific fields such as the medical and legal domains.
\item \emph{CPT for Language Expansion} includes studies that extend the languages LLMs support.
\end{itemize}

\textbf{Continual Instruction Tuning (CIT)}
\begin{itemize}
\item \emph{Task-incremental CIT} contains works that finetune LLMs on a series of tasks to acquire the ability to solve new tasks.
\item \emph{Domain-incremental CIT} contains methods that finetune LLMs on a stream of instructions to solve domain-specific tasks.
\item \emph{Tool-incremental CIT} contains research that continually teaches LLMs to use new tools to solve problems.
\end{itemize}

\textbf{Continual Alignment (CA)}
\begin{itemize}
\item \emph{Continual Value Alignment} incorporates studies that continually align LLMs with new ethical guidelines and social norms.
\item \emph{Continual Preference Alignment} incorporates works that adapt LLMs to dynamically match different human preferences.
\end{itemize}

Besides categorizing methods based on training stages, we also provide an alternative categorization based on the information updated during continual learning. In Table \ref{tab:information}, we list representative types of information that are updated for LLMs, e.g., facts, domains, tasks, values, and preferences. Based on the training objectives of LLMs, this information can be updated in different stages of LLM continual learning.
The taxonomy in \Cref{fig:survey} shows our categorization scheme and recent representative work in each category.
\tikzstyle{my-box}=[
rectangle,
draw=hidden-draw,
rounded corners,
text opacity=1,
minimum height=1.5em,
minimum width=5em,
inner sep=2pt,
align=center,
fill opacity=.5,
line width=0.8pt,
]
\tikzstyle{leaf}=[my-box, minimum height=1.5em,
text=black, align=left,font=\normalsize,
inner xsep=2pt,
inner ysep=4pt,
line width=0.8pt,
]
\begin{figure*}[th!]
\centering
\resizebox{\textwidth}{!}{
\begin{forest}
forked edges,
for tree={
grow=east,
reversed=true,
anchor=base west,
parent anchor=east,
child anchor=west,
base=left,
font=\large,
rectangle,
draw=hidden-draw,
rounded corners,
align=left,
minimum width=4em,
edge+={darkgray, line width=1pt},
s sep=7pt,
inner xsep=2pt,
inner ysep=3pt,
line width=0.8pt,
ver/.style={rotate=90, child anchor=north, parent anchor=south, anchor=center},
},
where level=1{text width=14em,font=\normalsize,}{},
where level=2{text width=11em,font=\normalsize,}{},
where level=3{text width=11em,font=\normalsize,}{},
where level=4{text width=10em,font=\normalsize,}{},
[
Continual Learning for Large Language Models,fill=hidden-yellow!70,ver
[
Continual Pre-training (\S \ref{sect:cpt}),fill=hidden-red!70
[
Update Fact (\S \ref{sect:cpt_fact}),fill=hidden-blue!70
[
\cite{JangYYSHKCS22}{, }\cite{SunWLFTWW20}{, }\cite{JangYLYSHKS22}, leaf, text width=48.5em,fill=hidden-blue!70
]
]
[
Update Domain (\S \ref{sect:cpt_domain}),fill=hidden-blue!70
[
Domain-Incremental \\ Pre-training,fill=hidden-blue!70
[
\cite{KeSLKK023}{, }\cite{abs-2205-09357}{, }\cite{QinQHLWXLSZ23}
, leaf, text width=38em,fill=hidden-blue!70
]
]
[
Domain-Specific \\ Continual Pre-training,fill=hidden-blue!70
[
\cite{abs-2401-01600}{, }\cite{abs-2311-08545}{, }\cite{abs-2312-15696}
, leaf, text width=38em,fill=hidden-blue!70
]
]
]
[
Update Language (\S \ref{sect:cpt_language}),fill=hidden-blue!70
[
Natural Language,fill=hidden-blue!70
[
\cite{CastellucciFC020}{, }\cite{abs-2311-01200}
, leaf, text width=38em,fill=hidden-blue!70
]
]
[
Programming Language,fill=hidden-blue!70
[
\cite{YadavSDLZTBMNRB23}{, }\cite{ZanCYLKGWCL22}, leaf, text width=38em,fill=hidden-blue!70
]
]
]
]
[
Continual Instruction Tuning (\S \ref{sect:cit}),fill=hidden-red!70
[
Update Task (\S \ref{sect:cit_task}),fill=hidden-blue!70
[
\cite{razdaibiedina2023progressive}{, }\cite{anonymous2024scalable}{, }\cite{mok2023large}{, }\cite{jang2023exploring}{, }\cite{wang2023orthogonal}
, leaf, text width=48.5em,fill=hidden-blue!70
]
]
[
Update Domain (\S \ref{sect:cit_domain}),fill=hidden-blue!70
[
\cite{wang2023trace}{, }\cite{song2023conpet}{, }\cite{cheng2023adapting}{, }\cite{cheng2023language}{, }\cite{zhang2023reformulating}
, leaf, text width=48.5em,fill=hidden-blue!70
]
]
[
Update Skill (\S \ref{sect:cit_tool}),fill=hidden-blue!70
[
\cite{hao2023toolkengpt}{, }\cite{kong2023tptu}{, }\cite{qin2023toolllm}{, }\cite{jin2023genegpt}
, leaf, text width=48.5em,fill=hidden-blue!70
]
]
]
[
Continual Alignment (\S \ref{sect:ca}),fill=hidden-red!70
[
Update Values (\S \ref{sect:ca_value}),fill=hidden-blue!70
[
\cite{zhang2023copf}{, }\cite{anonymous2023cppo}
, leaf, text width=48.5em,fill=hidden-blue!70
]
]
[
Update Preference (\S \ref{sect:ca_preference}),fill=hidden-blue!70
[
\cite{suhr2023continual}
, leaf, text width=48.5em,fill=hidden-blue!70
]
]
]
]
\end{forest}
}
\caption{Taxonomy of trends in continual learning for large language models.}
\label{fig:survey}
\end{figure*}
\begin{table}[htb]
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{@{}cccc@{}}
\toprule
Information & Pretraining & Instruction-tuning & Alignment \\ \midrule
Fact & \faCheckCircleO & $\times$ & $\times$ \\
Domain & \faCheckCircleO & \faCheckCircleO & $\times$ \\
Language & \faCheckCircleO & $\times$ & $\times$ \\
Task & $\times$ & \faCheckCircleO & $\times$ \\
Skill (Tool use) & $\times$ & \faCheckCircleO & $\times$ \\
Value & $\times$ & $\times$ & \faCheckCircleO \\
Preference & $\times$ & $\times$ & \faCheckCircleO \\ \bottomrule
\end{tabular}}
\caption{Information updated during different stages of continual learning for LLMs.}\label{tab:information}
\end{table}
\section{Continual Pre-training (CPT)}
\label{sect:cpt}
Continual pretraining of large language models is essential for keeping LLMs relevant and effective. This process involves regularly updating the models with the latest information~\cite{JangYLYSHKS22}, adapting them to specialized domains~\cite{KeSLKK023}, enhancing their coding capabilities~\cite{YadavSDLZTBMNRB23}, and expanding their linguistic range~\cite{CastellucciFC020}. With CPT, LLMs can stay current with new developments, adapt to evolving user needs, and remain effective across diverse applications. Continual pretraining ensures LLMs are not just knowledgeable but also adaptable and responsive to a changing world.

\subsection{CPT for Updating Facts}
\label{sect:cpt_fact}
The capability of LLMs to integrate and adapt to recent information is crucial. A pivotal strategy here is the employment of dynamic datasets that facilitate the real-time assimilation of data from a variety of sources such as news feeds~\cite{SunWLFTWW20}, scholarly articles~\cite{abs-2205-09357}, and social media~\cite{abs-2205-09357}.
\cite{SunWLFTWW20} presents ERNIE 2.0, a continual pre-training framework that incrementally builds and learns from multiple tasks to maximize knowledge extraction from training data. \cite{JangYYSHKCS22} introduces continual knowledge learning, a method for updating temporal knowledge in LLMs that reduces forgetting while acquiring new information. \cite{JangYLYSHKS22} shows that continual learning on different data achieves comparable or better perplexity in language models than training on the entire snapshot, confirming that factual knowledge in LMs can be updated efficiently with minimal training data. Integral to this process is the implementation of automated systems for the verification of newly acquired data, ensuring both the accuracy and dependability of the information.
\subsection{CPT for Updating Domains}
\label{sect:cpt_domain}
Continual pre-training updates domain knowledge through two approaches: 1) domain-incremental pre-training, which accumulates knowledge across multiple domains, and 2) domain-specific continual pre-training, which evolves a general model into a domain expert by training on domain-specific datasets and tasks.
In domain-incremental pre-training, \cite{abs-2205-09357} explores how models can be continually pre-trained on new data streams for both language and vision, preparing them for various downstream tasks. \cite{QinQHLWXLSZ23} examines continual retraining by assessing model compatibility and the benefits of recyclable tuning via parameter initialization and knowledge distillation. \cite{KeSLKK023} introduces a soft-masking mechanism to update language models (LMs) with domain corpora, aiming to boost performance while preserving general knowledge.
For domain-specific continual pre-training, \cite{abs-2311-08545} develops FinPythia-6.9B through domain-adaptive pre-training for the financial sector. EcomGPT-CT~\cite{abs-2312-15696} investigates the effects of continual pre-training in the e-commerce domain.
These studies collectively highlight the evolving landscape of continual pre-training, demonstrating its effectiveness in enhancing model adaptability and expertise across a wide range of domains.
\subsection{CPT for Language Expansion}
\label{sect:cpt_language}
Expanding the range of languages that LLMs can understand and process is essential for ensuring broader accessibility~\cite{CastellucciFC020}. This expansion is not just about including a wider variety of languages, particularly underrepresented ones, but also about embedding cultural contexts into language processing.
A significant challenge here is the model's ability to recognize and interpret regional dialects and contemporary slang~\cite{abs-2311-01200}, which is crucial for effective and relevant communication across diverse racial, social and cultural groups.

In addition to mastering natural languages, LLMs have also made significant strides in understanding and generating programming languages.
\cite{YadavSDLZTBMNRB23} introduces CodeTask-CL, a benchmark for continual code learning that encompasses a diverse array of tasks, featuring various input and output formats across different programming languages.
\cite{ZanCYLKGWCL22} explores using an unlabeled code corpus to train models for library-oriented code generation, addressing the challenge of scarce text-code pairs due to extensive library reuse by programmers. They introduce CERT, a method in which a ``sketcher'' outlines a code structure and a ``generator'' completes it, both continually pre-trained on unlabeled data to capture common patterns in library-focused code snippets. These developments highlight LLMs' potential to transform both natural and programming language processing, leading to more efficient coding practices.
\section{Continual Instruction Tuning (CIT)}
\label{sect:cit}
LLMs have shown great instruction-following abilities, allowing them to complete a variety of tasks given a few-shot task prompt. Continual Instruction Tuning (CIT) involves continually finetuning LLMs so that they learn to follow instructions and transfer knowledge to future tasks \cite{zhang2023citb}. Based on the ability and knowledge updated during instruction tuning, we further divide CIT into three categories: \emph{1) task-incremental CIT}, \emph{2) domain-incremental CIT}, and \emph{3) tool-incremental CIT}.
\subsection{Task-incremental CIT}
\label{sect:cit_task}
Task-incremental Continual Instruction Tuning (Task-incremental CIT) aims to continuously finetune LLMs on a sequence of task-specific instructions so that they acquire the ability to solve novel tasks. A straightforward solution is to continuously generate instruction-tuning data for new tasks and directly fine-tune LLMs on it \cite{wang2023trace}.
However, studies have shown that continuously finetuning LLMs on task-specific data causes catastrophic forgetting of the knowledge and problem-solving skills learned in previous tasks \cite{kotha2023understanding}. TAPT \cite{gururangan2020don} presents a simple data selection strategy that retrieves unlabeled text from the in-domain corpus, aligning it with the task distribution. This retrieved text is then utilized to fine-tune LLMs, preventing catastrophic forgetting and enhancing task performance. To mitigate catastrophic forgetting, Continual-T0 \cite{scialom2022fine} employs rehearsal with a memory buffer \cite{shin2017continual} to store previous tasks' data and replay them during training. ConTinTin \cite{yin2022contintin} presents InstructionSpeak, which includes two strategies that make full use of task instructions to improve forward transfer and backward transfer: the first involves learning from negative outputs, while the second focuses on revisiting instructions from previous tasks. RationaleCL \cite{xiong2023rationale} conducts contrastive rationale replay to alleviate catastrophic forgetting. DynaInst \cite{mok2023large} proposes a hybrid approach incorporating Dynamic Instruction Replay and a local-minima-inducing regularizer; these two components enhance the generalizability of LLMs while decreasing memory and computation usage in the replay module.
Unlike previous replay-based or regularization-based methods, SLM \cite{anonymous2024scalable} incorporates vector-space retrieval into the language model, which aids in achieving scalable knowledge expansion and management. This enables LLMs' quick adaptation to novel tasks without the performance degradation caused by catastrophic forgetting.
LLMs with billions of parameters impose a huge computational burden on continual learning. To address this issue, the Progressive Prompts technique \cite{razdaibiedina2023progressive} freezes the majority of parameters and learns only a fixed number of tokens (prompts) for each new task. Progressive Prompts significantly reduces the computational cost while alleviating catastrophic forgetting and improving the transfer of knowledge to future tasks. ELM \cite{jang2023exploring} first trains a small expert adapter on top of the LLM for each task, then employs a retrieval-based approach to choose the most pertinent expert LLM for every new task. Based on the parameter-efficient tuning (PET) framework, O-LoRA \cite{wang2023orthogonal} proposes an orthogonal low-rank adaptation for CIT: it incrementally learns new tasks in an orthogonal subspace while fixing the LoRA parameters learned for past tasks to minimize catastrophic forgetting. Similarly, DAPT \cite{zhao2024dapt} proposes a novel Dual Attention Framework to align the learning and selection of LoRA parameters via a Dual Attentive Learning \& Selection module. LLaMA Pro \cite{wu2024llama} proposes a novel block expansion technique, which enables the injection of new knowledge into LLMs while preserving their initial capabilities through efficient post-training.
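The parameter-efficient idea behind Progressive Prompts can be illustrated with a schematic sketch (our own simplification, not the authors' code; the parameter names are hypothetical): the backbone is frozen, each task adds one small trainable prompt, and the prompts learned for earlier tasks are kept frozen and prepended to the input.

```python
def trainable_names(all_param_names, task_id):
    """Select trainable parameters for task `task_id`: only the soft
    prompt for the current task receives gradients; the backbone and
    all earlier prompts stay frozen."""
    return [n for n in all_param_names if n == f"prompt.task{task_id}"]


def build_prompted_input(learned_prompts, input_ids):
    """Prepend the prompts learned so far (earlier ones frozen, the
    last one trainable) to the input token sequence."""
    prefix = [tok for prompt in learned_prompts for tok in prompt]
    return prefix + list(input_ids)
```

Because the trainable set is tiny and disjoint across tasks, old-task behaviour cannot be overwritten, which is how this family of methods sidesteps catastrophic forgetting at the cost of a growing prompt prefix.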
\subsection{Domain-incremental CIT}
\label{sect:cit_domain}
Domain-incremental Continual Instruction Tuning (Domain-incremental CIT) aims to continually finetune LLMs on a sequence of domain-specific instructions so that they acquire the knowledge to solve tasks in novel domains. TAPT \cite{gururangan2020don} adaptively tunes LLMs on a series of domain-specific data including biomedicine, computer science, news, and shopping reviews, and then evaluates the LLMs' text classification ability in each domain. ConPET \cite{song2023conpet} applies previous continual learning methods, initially developed for smaller models, to LLMs using PET and a dynamic replay strategy. This approach significantly reduces tuning costs and mitigates overfitting and forgetting problems; experiments conducted on a typical continual learning scenario, where new knowledge types gradually emerge, demonstrate the superior performance of ConPET. AdaptLLM \cite{cheng2023adapting} adapts LLMs to different domains by enriching the raw training corpus into a series of reading comprehension tasks relevant to its content. These tasks are designed to help the model leverage domain-specific knowledge while enhancing prompting performance. PlugLM \cite{cheng2023language} uses a differentiable plug-in memory (DPM) to explicitly store domain knowledge, and can easily be adapted to different domains by plugging in in-domain memory. \cite{zhang2023reformulating} designs an adapt-retrieve-revise process that adapts LLMs to new domains: it first uses the initial LLM's response to retrieve knowledge from the domain database, and the retrieved knowledge is then used to revise the initial response and obtain the final answer. \cite{dong2023abilities} analyzes LLMs continually tuned on different domains and finds that the order of the training data has a significant impact on the performance of LLMs; the authors also offer a Mixed Fine-tuning (DMT) strategy to learn multiple abilities across different domains.
\subsection{Tool-incremental CIT}
\label{sect:cit_tool}
Tool-incremental Continual Instruction Tuning (Tool-incremental CIT) aims to fine-tune LLMs continuously, enabling them to interact with the real world and enhance their abilities by integrating with tools such as calculators, search engines, and databases \cite{qin2023toolllm}. With the rapid emergence of new tools such as advanced software libraries, novel APIs, and domain-specific utilities \cite{liang2023taskmatrix,jin2023genegpt}, there is a growing need to continually update LLMs so they can quickly adapt to and master these new tools. Llemma \cite{azerbayev2023llemma} continues tuning LLMs on a mixture of math-related text and code, enabling them to solve mathematical problems with external tools. ToolkenGPT \cite{hao2023toolkengpt} represents each tool as a new token (``toolken'') whose embedding is learned during instruction tuning. This approach offers an efficient way for LLMs to master tools and swiftly adapt to new ones by adding additional tokens.
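The toolken mechanism can be illustrated with a small sketch (a simplification under our own naming conventions, not the ToolkenGPT code): each newly added tool contributes exactly one vocabulary entry, and only the embeddings of those entries are marked trainable, so adding a tool never touches the rest of the model.

```python
class ToolkenVocab:
    """Sketch of the toolken idea: one new token per tool, whose
    embedding is the only parameter learned when the tool is added."""

    def __init__(self, base_vocab):
        # The base vocabulary (and the backbone LM) stays frozen.
        self.token_to_id = {tok: i for i, tok in enumerate(base_vocab)}
        self.trainable_ids = set()  # ids whose embeddings get gradients

    def add_tool(self, tool_name):
        """Register a tool; idempotent if the toolken already exists."""
        toolken = f"<tool:{tool_name}>"
        if toolken not in self.token_to_id:
            new_id = len(self.token_to_id)
            self.token_to_id[toolken] = new_id
            self.trainable_ids.add(new_id)
        return self.token_to_id[toolken]
```

At generation time, emitting a toolken would switch the decoder into a tool-calling mode; here we only show the vocabulary bookkeeping that makes tool addition cheap and incremental.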
444
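The toolken mechanism can be sketched in a few lines: the illustration below is a hedged sketch, not ToolkenGPT's actual implementation, in which a frozen base embedding table is extended with one trainable row per tool, so registering a new tool touches no existing parameters. All names and sizes here are assumptions for illustration.

```python
import random

class ToolkenEmbeddings:
    """Minimal sketch of the toolken idea: a frozen base embedding table
    extended with one new trainable row per registered tool."""

    def __init__(self, base_vocab_size, dim):
        self.dim = dim
        # Base LM token embeddings: kept frozen while tools are learned.
        self.base = [[random.gauss(0.0, 0.02) for _ in range(dim)]
                     for _ in range(base_vocab_size)]
        self.toolkens = {}  # tool name -> its (trainable) embedding row
        self.tool_ids = {}  # tool name -> token id appended after base vocab

    def add_tool(self, name):
        # Adding a tool only appends a fresh row; existing embeddings are
        # untouched, so new tools can be added without retraining the model.
        tool_id = len(self.base) + len(self.toolkens)
        self.tool_ids[name] = tool_id
        self.toolkens[name] = [0.0] * self.dim
        return tool_id

    def trainable_parameters(self):
        # Only the toolken rows would be handed to the optimizer.
        return list(self.toolkens.values())

emb = ToolkenEmbeddings(base_vocab_size=32000, dim=16)
calc_id = emb.add_tool("<calculator>")
search_id = emb.add_tool("<search>")
```

Because training touches only the appended rows, the cost of mastering an additional tool is independent of the size of the base model.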
\section{Continual Alignment (CA)}
\label{sect:ca}

LLMs need to adapt to evolving societal values, social norms, and ethical guidelines. Furthermore, there is substantial diversity in preferences across demographic groups, and individuals' preferences change over time. The need to respond to these changes gives rise to continual alignment.
In the context of continual alignment, two scenarios emerge: (i) updating LLMs to reflect shifts in societal values,
and (ii) integrating new demographic groups or value types into existing LLMs. We describe each in the following subsections.


\subsection{Continual Value Alignment}
\label{sect:ca_value}
Continual value alignment aims to continually incorporate ethical guidelines or adapt to cultural sensitivities and norms.
It requires unlearning outdated notions and incorporating new values, akin to model editing and unlearning tasks. Model editing and knowledge unlearning have been studied in the pretraining and instruction tuning phases~\cite{yao-etal-2023-editing}; however, they have not yet been explored in preference learning.

\subsection{Continual Preference Alignment}
\label{sect:ca_preference}
Adding new demographic groups or value types is a natural continual learning problem: the goal is to guide LLMs to generate responses aligned with emerging values while adhering to previously learned ones. For example, many open-source aligned LLMs employ reinforcement learning from human feedback (RLHF) for safety; we may then want to align them for additional attributes such as helpfulness and faithfulness. Beyond the challenge of retaining past preferences while maximizing the reward on new ones, continual preference learning also faces difficulties in stable and efficient training with a large action space (vocabulary) and a large number of parameters.
Previous works have demonstrated proofs of concept of such systems; however, there is a lack of standardized benchmarks to systematically evaluate how well new preferences are learned over time.
Continual Proximal Policy Optimization (CPPO)~\cite{anonymous2023cppo} applies sample-wise weighting to the Proximal Policy Optimization (PPO) algorithm~\cite{schulman2017proximal} to balance policy learning with knowledge retention when imitating the old policy's output. Meanwhile, \cite{zhang2023copf} extend the Direct Preference Optimization (DPO) algorithm \cite{rafailov2023direct} to the continual learning setting, employing Monte Carlo estimation to derive a sequence of optimal policies for the given sequence of tasks and incorporating them to regularize policy learning on new tasks.
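For context, the single-task DPO objective that these continual methods build on can be written in a few lines. The sketch below is an illustrative implementation of the published DPO formula, operating on assumed per-sequence log-probabilities; it is not code from either paper.

```python
import math

def dpo_loss(pi_chosen, pi_rejected, ref_chosen, ref_rejected, beta=0.1):
    """DPO loss for a single preference pair, from per-sequence log-probs.

    pi_*  : log-probabilities under the policy being trained
    ref_* : log-probabilities under the frozen reference policy
    """
    margin = beta * ((pi_chosen - ref_chosen) - (pi_rejected - ref_rejected))
    # -log sigmoid(margin): small when the policy prefers the chosen
    # response more strongly than the reference policy does.
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# With no learned preference the loss is log(2); it shrinks as the policy's
# margin for the chosen response grows relative to the reference policy.
weak = dpo_loss(-10.0, -10.0, -10.0, -10.0)
strong = dpo_loss(-8.0, -12.0, -10.0, -10.0)
```

A continual variant such as COPF adds regularization toward policies learned on earlier preference tasks on top of this per-pair objective.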
\section{Benchmarks}
\label{sect:eval}
The systematic evaluation of LLMs' continual learning performance demands benchmarks with high-quality data sources and diverse content. Below we summarize notable benchmark datasets.

\subsection{Benchmarks for CPT}
TemporalWiki~\cite{JangYLYSHKS22} serves as a lifelong benchmark that trains and evaluates language models on consecutive snapshots of Wikipedia and Wikidata, helping assess an LM's ability to retain past knowledge and acquire new knowledge over time. Social media datasets such as Firehose~\cite{HuSSK23} comprise 100 million tweets from one million users over six years. CKL~\cite{JangYYSHKCS22} focuses on web and news data, aiming to retain time-invariant world knowledge from initial pretraining while efficiently learning new knowledge through continued pretraining on different corpora. TRACE~\cite{wang2023trace} encompasses eight diverse datasets covering specialized domains, multilingual tasks, code generation, and mathematical reasoning; these datasets are harmonized into a standard format, facilitating straightforward and automated evaluation of LLMs. Because time-sensitive data quickly becomes outdated, continual pretraining benchmarks require frequent updates to remain useful for model evaluation.

\subsection{Benchmarks for CIT} The Continual Instruction Tuning Benchmark (CITB)~\cite{zhang2023citb} is based on SuperNI, encompassing over 1,600 Natural Language Processing (NLP) tasks across 76 types, such as language generation and classification, all in a text-to-text format. ConTinTin~\cite{yin2022contintin}, another benchmark derived from NATURAL-INSTRUCTIONS, includes 61 tasks across six categories, such as question generation and classification.
When using these benchmarks to evaluate black-box LLMs whose training data is inaccessible, the selection of datasets is crucial to avoid task contamination and ensure reliable performance assessment in continual instruction tuning.


\subsection{Benchmarks for CA}
COPF~\cite{zhang2023copf} conducts experiments on continual alignment using the Stanford Human Preferences (SHP)~\cite{EthayarajhCS22} and Helpful \& Harmless (HH)~\cite{abs-2204-05862} datasets. The SHP dataset comprises 385,000 human preferences across 18 subjects, from cooking to legal advice. The HH dataset consists of two parts: one where crowdworkers interact with AI models to elicit helpful responses, and another where they elicit harmful responses, selecting the more impactful response in each case.
Despite growing interest in this field, there is a notable absence of dedicated benchmarks for continual alignment, presenting an opportunity for future research in this area.
498
\section{Evaluation}
\subsection{Evaluation for Target Task Sequence}
Continual learning for large language models involves evaluating the model's performance over a task sequence. Performance can be measured by three typical continual learning metrics: (1) Forward Transfer Rate (FWT), (2) Backward Transfer Rate (BWT), and (3) average performance~\cite{lopez2017gradient,WuCLLQH22}:

(1) FWT assesses the impact of knowledge acquired from previous tasks on the initial ability to perform a new task, prior to any dedicated training for that new task.
\begin{align}
FWT = \frac{1}{T-1}\sum_{i=2}^{T}\left(A_{i-1,i} - \tilde{b}_{i}\right)
\end{align}

(2) BWT measures catastrophic forgetting by comparing a model's performance on old tasks before and after learning new ones.
\begin{align}
BWT = \frac{1}{T-1}\sum_{i=1}^{T-1}\left(A_{T,i} - A_{i,i}\right)
\end{align}

(3) Average performance, e.g., average accuracy, assesses the ability of a model to effectively learn from and adapt to a sequence of data streams or tasks over time.
\begin{align}
Avg.\ ACC = \frac{1}{T}\sum_{i=1}^{T}A_{T,i}
\end{align}
where $A_{t,i}$ is the accuracy of the model on the test set of the $i$-th task after learning the $t$-th task, and $\tilde{b}_{i}$ is the test accuracy on task $i$ at random initialization.

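Given the accuracy matrix $A$ and the random-initialization accuracies $\tilde{b}$, all three metrics are simple to compute. The sketch below follows the standard definitions of \cite{lopez2017gradient}; the function and variable names are illustrative, not from any particular library.

```python
def cl_metrics(A, b_init):
    """Compute the three continual learning metrics (0-indexed).

    A[t][i]  : accuracy on task i's test set after training on task t
    b_init[i]: accuracy on task i at random initialization
    """
    T = len(A)
    avg_acc = sum(A[T - 1]) / T
    # BWT: how accuracy on old tasks changed after learning all T tasks
    bwt = sum(A[T - 1][i] - A[i][i] for i in range(T - 1)) / (T - 1)
    # FWT: zero-shot accuracy on each task just before training on it,
    # relative to a randomly initialized model
    fwt = sum(A[i - 1][i] - b_init[i] for i in range(1, T)) / (T - 1)
    return avg_acc, bwt, fwt

# Two tasks: after learning task 2, accuracy on task 1 dropped 0.90 -> 0.80,
# so BWT is negative (forgetting) while FWT is zero (no transfer).
A = [[0.90, 0.50],
     [0.80, 0.85]]
avg_acc, bwt, fwt = cl_metrics(A, b_init=[0.50, 0.50])
```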
\subsection{Evaluation for Cross-stage Forgetting}
Large language models continually trained across different stages can unconsciously forget earlier capabilities \cite{abs-2309-06256}: continual instruction tuning can erode an LLM's general knowledge. Previous studies \cite{abs-2310-03693} also demonstrate that the behavior of safety-aligned LLMs can be easily degraded by instruction tuning. To quantify these limitations, TRACE~\cite{wang2023trace} proposes to evaluate LLMs with three novel metrics: General Ability Delta (GAD), Instruction Following Delta (IFD), and Safety Delta (SD):

(1) GAD assesses the performance difference of an LLM on general tasks after training on sequential target tasks.
\begin{align}
GAD = \frac{1}{T} \sum_{i=1}^T (R_{t,i}^G - R_{0,i}^G)
\end{align}

(2) IFD assesses the change in the model's instruction-following ability after training on sequential tasks.
\begin{align}
IFD = \frac{1}{T} \sum_{i=1}^T (R_{t,i}^I - R_{0,i}^I)
\end{align}

(3) SD assesses the variation in the safety of the model's responses after sequential training.
\begin{align}
SD = \frac{1}{T} \sum_{i=1}^T (R_{t,i}^S - R_{0,i}^S)
\end{align}
Here, $R_{0,i}$ is the baseline performance of the initial LLM on the $i$-th task, and $R_{t,i}$ is the score on the $i$-th task after incrementally learning up to the $t$-th task; $R^G$, $R^I$, and $R^S$ denote performance on general tasks (assessing the information obtained from pretraining), instruction-following tasks, and alignment tasks, respectively.
These metrics measure changes in an LLM's overall capabilities, adherence to instructions, and safety after continual learning, going beyond traditional benchmarks by focusing on maintaining inherent skills and aligning with human preferences.

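All three deltas share the same form, a mean score change relative to the initial model, as the minimal sketch below shows; the names are illustrative, not TRACE's actual API.

```python
def ability_delta(R_t, R_0):
    """Mean score change versus the initial model over T evaluation tasks.

    With general-task scores this gives GAD, with instruction-following
    scores IFD, and with safety scores SD.
    """
    assert len(R_t) == len(R_0)
    return sum(rt - r0 for rt, r0 in zip(R_t, R_0)) / len(R_t)

# A drop in general-task scores after continual tuning yields a negative GAD.
gad = ability_delta(R_t=[0.70, 0.60, 0.75], R_0=[0.80, 0.65, 0.75])
```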
\section{Challenges and Future Work}

\paragraph{Computation-efficient Continual Learning}

In the realm of computation efficiency, the focus is on enhancing the continual pretraining process with minimal computational resources~\cite{abs-2311-11908}. This involves developing innovative architectures that can handle the increasing complexity of pretraining tasks without proportional increases in computational demands. Efficient algorithms and data structures become crucial, especially for managing the extensive data involved in pretraining. Additionally, energy-efficient learning models are vital for sustainably scaling LLMs, in line with Green AI initiatives. This area requires balancing computational cost against the benefits in model performance and capabilities.


\paragraph{Social Good Continual Learning}

Social responsibility in continual learning encompasses ensuring privacy and data security, particularly in the context of continual instruction tuning~\cite{gabriel_artificial_2020}. As LLMs are fine-tuned with more specific instructions or tasks, sensitive or personal data must be handled securely and ethically. Aligning with human values and culture is also paramount, especially in continual preference learning; this involves incorporating ethical AI principles and cultural sensitivities so that the model's outputs remain aligned with societal norms and values.


\paragraph{Automatic Continual Learning}
A significant challenge lies in creating systems capable of autonomously overseeing their own learning processes, seamlessly adjusting to novel tasks (instruction tuning) and user preferences (alignment) while relying solely on the inherent capabilities of LLMs, without manual intervention~\cite{qiao2024autoact}. Automatic continual learning includes multi-agent systems capable of collaborative learning and self-planning algorithms that autonomously adjust learning strategies based on performance feedback. Such systems would represent a significant advance in the autonomy of LLMs.


\paragraph{Continual Learning with Controllable Forgetting}
Controllable forgetting is particularly relevant to continual pretraining. The ability to selectively retain or forget information as the model is exposed to new data streams can prevent catastrophic forgetting~\cite{abs-2310-03693} and enhance model adaptability~\cite{wang2023trace}. This challenge also extends to managing misinformation and unlearning incorrect or outdated information~\cite{ChenY23}, ensuring the accuracy and reliability of the LLM over time.


\paragraph{Continual Learning with History Tracking}
Effective history tracking is vital for understanding the evolution of an LLM through its phases of pretraining, instruction tuning, and preference learning. Managing history in model parameters and using external memory architectures can help track the influence of past learning on current model behavior and decisions~\cite{abs-2302-07842}. This is crucial for analyzing the effectiveness of continual learning processes and making informed adjustments.

\paragraph{Theoretical Insights on LLMs in Continual Learning}
Numerous evaluation studies have examined cross-stage forgetting~\cite{abs-2309-06256} and demonstrated the weak robustness of aligned LLMs~\cite{abs-2310-03693}. However, theoretical analyses of how multi-stage training impacts the performance of large language models in subsequent continual learning tasks are scarce. This gap highlights the need for a deeper understanding of the specific changes multi-stage training introduces to LLMs' learning capabilities and long-term performance.
\section{Conclusion}
Continual learning is vital for keeping large language models regularly and efficiently updated so that they remain current with constantly changing human knowledge, language, and values.
We showcase the complex, multi-stage process of continual learning in LLMs, encompassing continual pretraining, instruction tuning, and alignment, a paradigm more intricate than those used in continual learning on smaller models. As the first survey of its kind to thoroughly explore continual learning in LLMs, this paper categorizes the updates by learning stages and information types, providing a detailed understanding of how to effectively implement continual learning in LLMs. With a discussion of major challenges and future work directions, our goal is to provide a comprehensive account of recent developments in continual learning for LLMs, shedding light on the development of more advanced and adaptable language models.
\let\oldthebibliography\thebibliography
\let\endoldthebibliography\endthebibliography
\renewenvironment{thebibliography}[1]{
  \begin{oldthebibliography}{#1}
  \setlength{\itemsep}{0em}
  \setlength{\parskip}{0em}
}
{
  \end{oldthebibliography}
}
{
\setstretch{0.96}
\bibliographystyle{named}
\bibliography{ijcai24}
}
\end{document}