Performance of different input features on Ogbn-arxiv and Ogbn-products (accuracy, %):

Input features                    Ogbn-arxiv                                Ogbn-products
                                  GCN          MLP          RevGAT         SAGE         SAGN         MLP
Non-contextualized Shallow Embeddings
TF-IDF                            72.23 ±0.21  66.60 ±0.25  75.16 ±0.14    79.73 ±0.48  84.40 ±0.07  64.42 ±0.18
Word2Vec                          71.74 ±0.29  55.50 ±0.23  73.78 ±0.19    81.33 ±0.79  84.12 ±0.18  69.27 ±0.54
PLM/LLM Embeddings without Fine-tuning
Deberta-base                      45.70 ±5.59  40.33 ±4.53  71.20 ±0.48    62.03 ±8.82  74.90 ±0.48  7.18 ±1.09
Local Sentence Embedding Models
Sentence-BERT (MiniLM)            73.10 ±0.25  71.62 ±0.10  76.94 ±0.11    82.51 ±0.53  84.79 ±0.23  72.73 ±0.34
e5-large                          73.74 ±0.12  72.75 ±0.00  76.59 ±0.44    82.46 ±0.91  85.47 ±0.21  77.49 ±0.29
Online Sentence Embedding Models
text-ada-embedding-002            72.76 ±0.23  72.17 ±0.00  76.64 ±0.20    82.90 ±0.42  85.20 ±0.19  76.42 ±0.31
Fine-tuned PLM Embeddings
Fine-tuned Deberta-base           74.65 ±0.12  72.90 ±0.11  75.80 ±0.39    82.15 ±0.16  84.01 ±0.05  79.08 ±0.23
Others
GIANT                             73.29 ±0.10  73.06 ±0.11  75.90 ±0.19    83.16 ±0.19  86.67 ±0.09  79.82 ±0.07
Iterative Structure
GLEM-GNN                          75.93 ±0.19  N/A          76.97 ±0.19    83.16 ±0.09  87.36 ±0.07  N/A
GLEM-LM                           75.71 ±0.24  N/A          75.45 ±0.12    81.25 ±0.15  84.83 ±0.04  N/A

Efficiency comparison on Ogbn-arxiv (embedding dimension in brackets):

Input features                  Backbone   LM-phase                         GNN-phase
                                           Running time (s)  Memory (GB)    Running time (s)  Memory (GB)
TF-IDF (1024)                   GCN        N/A               N/A            53                9.81
                                RevGAT     N/A               N/A            873               7.32
Sentence-BERT (384)             GCN        239               1.30           48                7.11
                                RevGAT     239               1.30           674               4.37
text-ada-embedding-002 (1536)   GCN        165               N/A            73                11.00
                                RevGAT     165               N/A            1038              8.33
Deberta-base (768)              GCN        13560             12.53          50                9.60
                                RevGAT     13560             12.53          122               6.82
GLEM (768)                      GCN        68071             18.22          N/A               N/A
                                RevGAT     68294             18.22          N/A               N/A
GIANT (768)                     GCN        N/A               N/A            50                9.60
                                RevGAT     N/A               N/A            122               6.82
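The feature-level pipeline compared in the tables above amounts to encoding each node's raw text with an embedding model and handing the resulting vectors to a GNN as its input features. The sketch below illustrates this for the Sentence-BERT (MiniLM) row; it is a minimal sketch rather than the authors' code, and the use of the sentence-transformers MiniLM checkpoint and PyTorch Geometric's GCNConv is an illustrative assumption.

```python
# Minimal sketch: text attributes -> sentence embeddings -> 2-layer GCN.
# Assumes `texts` (list of node strings), `edge_index` (2 x E LongTensor),
# `labels`, and `train_mask` are already loaded for the dataset.
import torch
import torch.nn.functional as F
from sentence_transformers import SentenceTransformer
from torch_geometric.nn import GCNConv

encoder = SentenceTransformer("all-MiniLM-L6-v2")            # local sentence embedding model
x = torch.tensor(encoder.encode(texts), dtype=torch.float)   # [N, 384] node features

class GCN(torch.nn.Module):
    def __init__(self, in_dim, hidden, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, num_classes)

    def forward(self, x, edge_index):
        h = F.relu(self.conv1(x, edge_index))
        h = F.dropout(h, p=0.5, training=self.training)
        return self.conv2(h, edge_index)

model = GCN(x.size(1), 64, int(labels.max()) + 1)
opt = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=5e-4)
for epoch in range(200):
    model.train()
    opt.zero_grad()
    out = model(x, edge_index)
    loss = F.cross_entropy(out[train_mask], labels[train_mask])
    loss.backward()
    opt.step()
```

Swapping the encoder line for a fine-tuned PLM or an online embedding API reproduces the other rows of the comparison; the GNN side stays unchanged.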
4.2 Text-level Enhancement

For feature-level enhancement, LLMs in the pipeline must be embedding-visible; text-level enhancement, by contrast, enhances the text attributes themselves and can therefore also adopt embedding-invisible LLMs such as ChatGPT. We study the following two approaches:

1. TAPE [22]: The motivation of TAPE is to leverage the knowledge of LLMs to generate high-quality node features. Specifically, it uses LLMs to generate pseudo labels and explanations. These explanations aim to make the logical relationship between the text features and the corresponding labels clearer. For example, given the original attributes "mean-field approximation" and the ground truth label "probabilistic methods", it will generate a description such as "mean-field approximation is a widely adopted simplification technique for probabilistic models", which makes the connection between these two attributes much clearer. After generating pseudo labels and explanations, they further fine-tune PLMs on the original text attributes and on the explanations generated by LLMs, separately. Next, they generate the corresponding text features and augmented text features based on the original and augmented text attributes respectively, and finally ensemble them together as the initial node features for GNNs.

2. Knowledge-Enhanced Augmentation (KEA): The motivation behind Knowledge-Enhanced Augmentation (KEA) is to enrich the text attributes by providing additional information. KEA is inspired by knowledge-enhanced PLMs such as ERNIE [61] and K-BERT [36] and aims to explicitly incorporate external knowledge. In KEA, we prompt the LLMs to generate a list of knowledge entities along with their text descriptions. For example, we can generate a description for the abstract term "Hopf-Rinow theorem" as follows: "The Hopf-Rinow theorem establishes that a Riemannian manifold, which is both complete and connected, is geodesically complete if and only if it is simply connected." By providing such descriptions, we establish a clearer connection between the theorem and the category "Riemannian geometry". Once we obtain the entity list, we encode it either together with the original text attribute or separately. We try encoding the text attributes with fine-tuned PLMs and with deep sentence embedding models, and we also employ ensemble methods to combine these embeddings. One potential advantage of KEA is that it is loosely coupled with the prediction performance of LLMs. In cases where LLMs generate incorrect predictions, TAPE may produce low-quality node features because the explanations provided by LLMs may also be incorrect. With KEA, however, the augmented features may exhibit better stability since we do not rely on explicit predictions from LLMs.

Figure 3: Illustrations for TAPE and KEA. TAPE leverages the knowledge of LLMs to generate explanations for their predictions. For KEA, we prompt the LLMs to generate a list of technical terms with their descriptions. The main motivation is to augment the attribute information.
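Both augmentations boil down to one LLM call per node over its raw text. The templates below give a rough idea of the two prompts; the wording is an illustrative assumption rather than the exact prompts used in TAPE [22] or in our KEA experiments, and the OpenAI Python client is assumed as the query interface.

```python
# Illustrative prompt templates for the two text-level augmentations.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def tape_augment(title_abstract: str, categories: list[str]) -> str:
    """TAPE-style: ask for a predicted category plus an explanation."""
    prompt = (
        f"Abstract: {title_abstract}\n"
        f"Question: Which of the following categories does this paper belong to: "
        f"{', '.join(categories)}? Give your answer and explain your reasoning."
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo-0613",
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content  # pseudo label + explanation text

def kea_augment(title_abstract: str) -> str:
    """KEA-style: ask for a list of technical terms with short descriptions."""
    prompt = (
        f"Text: {title_abstract}\n"
        "List the key technical terms appearing in the text and give a short "
        "description of each, as 'term: description' lines."
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo-0613",
        temperature=0,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content  # knowledge entity list
```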
4.2.1 Experimental Setups

To evaluate these two strategies, we conduct experiments on two small datasets, Cora and Pubmed, considering the cost of using the LLMs. For the low and high labeling ratios, we adopt the same settings as in Table 1 and Table 2. For predictors, we adopt GCN, GAT, and MLP to study the quality of the textual embeddings both before and after aggregation. For LLMs, we adopt ChatGPT with the latest version (gpt-3.5-turbo-0613). To better understand the effectiveness of TAPE, we separate it into TA, P, and E, where "TA" refers to "text attributes", "P" refers to "pseudo labels", and "E" refers to "explanations". For KEA, we try two approaches to inject the augmented textual attributes. The first approach appends the augmented textual attributes to the original attribute, which is denoted as "KEA-I"; the combined attributes are then encoded into features. The second approach encodes the augmented attributes and the original attributes separately, which is denoted as "KEA-S". We report the results for the original, augmented, and ensembled features. Both TAPE and KEA adopt the cascading structure. After encoding the text attributes with LLMs, the generated embeddings are adopted as the initial features for GNNs. We try two approaches to encode the attributes: fine-tuned PLMs and local sentence embedding models. Specifically, we adopt Deberta-base and e5-large. To conduct a fair comparison, we first determine the better text encoder by evaluating their overall performance. Once the text encoder is selected, we proceed to compare the performance of the augmented attributes against the original attributes.
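The two injection strategies differ only in where the augmented text enters the encoder. A minimal sketch is given below; the use of e5-large through sentence-transformers and the concatenation used to combine the separately encoded attributes are assumptions for illustration, not the exact implementation.

```python
# Sketch of KEA-I (inject then encode) vs. KEA-S (encode separately, then combine).
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("intfloat/e5-large")

def kea_i_features(original: list[str], augmented: list[str]) -> np.ndarray:
    # KEA-I: append the generated descriptions to the original attribute,
    # then encode the combined text once.
    combined = [f"{orig} {aug}" for orig, aug in zip(original, augmented)]
    return encoder.encode(combined)

def kea_s_features(original: list[str], augmented: list[str]) -> np.ndarray:
    # KEA-S: encode original and augmented attributes separately,
    # then combine the two embeddings (here: concatenation) as the node feature.
    x_orig = encoder.encode(original)
    x_aug = encoder.encode(augmented)
    return np.concatenate([x_orig, x_aug], axis=1)
```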
A comprehensive evaluation of TAPE. We first gain a deeper understanding of TAPE through a comprehensive ablation study. The experimental results are shown in Table 5 and Table 6. We show the approach we adopt to encode the text attributes in brackets; in particular, we mainly consider fine-tuned Deberta-base, which is denoted as PLM, and e5-large, which is denoted as e5.

Observation 7. The effectiveness of TAPE mainly comes from the explanations E generated by LLMs.
From the ablation study, we can see that compared to the pseudo labels P, the explanations present better stability across different datasets. One main advantage of adopting explanations generated by LLMs is that these augmented attributes present better performance in the low labeling rate setting. From Table 5, we note that when choosing PLM as the encoder, E performs much better than TA in the low labeling rate setting. Compared to explanations, we find that the effectiveness of P mainly depends on the zero-shot performance of LLMs, which may present large variances across different datasets. In the following analysis, we use TA + E and neglect the pseudo labels generated by LLMs.

Observation 8. Replacing fine-tuned PLMs with deep sentence embedding models can further improve the overall performance of TAPE.
From Table 5 and Table 6, we observe that adopting e5-large to encode the text attributes achieves good performance across different datasets and different data splits. Specifically, TA + E encoded with e5 achieves top-3 performance in almost all settings. In the following analysis, we adopt e5 to encode the original and enhanced attributes TA + E.

Table 5: A detailed ablation study of TAPE on the Cora and Pubmed datasets in the low labeling rate setting. For each combination of features and models, we use yellow to denote the best performance under a specific GNN/MLP model, green the second best one, and pink the third best one. (Both tables report rows for TAPE, TA, P, E, and TA + E, each encoded with PLM or e5.)

Table 6: A detailed ablation study of TAPE on the Cora and Pubmed datasets in the high labeling rate setting. For each combination of features and models, we use yellow to denote the best performance under a specific GNN/MLP model, green the second best one, and pink the third best one.
4.2.1.1 Effectiveness of KEA.
We then show the results of KEA in Table 7 and Table 8. For KEA-I, we inject the description of each technical term directly into the original attribute. For KEA-S, we encode the generated description and the original attribute separately.

Observation 9. The proposed knowledge enhancement attributes KEA can enhance the performance of the original attribute TA.
From Table 7 and Table 8, we first compare the performance of features encoded by e5 and PLM. We see that the proposed KEA is better suited to the e5 encoder, and fine-tuned PLM embeddings perform poorly in the low labeling rate setting, so we also select e5 as the encoder to further compare the quality of the attributes. From Table 9 we can see that the proposed KEA-I + TA and KEA-S + TA attributes consistently outperform the original attributes TA.

Observation 10. For different datasets, the most effective enhancement methods may vary.
Moreover, we compare the performance of our proposed KEA with TA + E; the results are shown in Table 9. We can see that on Cora our methods achieve better performance, while TA + E achieves better performance on Pubmed. One potential explanation for this phenomenon is that TA + E relies more on the capability of LLMs. Although we have removed the pseudo labels P, we find that the explanations still contain LLMs' predictions. As a result, the effectiveness of TA + E will be influenced by LLMs' performance on the dataset. As shown in [22], the LLMs can achieve superior performance on the Pubmed dataset but perform poorly on the Cora dataset. Compared to TA + E, our proposed KEA only utilizes the commonsense knowledge of the LLMs, which may have better stability across different datasets.
Table 7: A detailed ablation study of KEA on the Cora and Pubmed datasets in the low labeling rate setting. For each combination of features and models, we use yellow to denote the best performance under a specific GNN/MLP model, green the second best one, and pink the third best one. (Both tables report rows for TA, KEA-I, KEA-S, KEA-I + TA, and KEA-S + TA, each encoded with PLM or e5.)

Table 8: A detailed ablation study of KEA on the Cora and Pubmed datasets in the high labeling rate setting. For each combination of features and models, we use yellow to denote the best performance under a specific GNN/MLP model, green the second best one, and pink the third best one.

Table 9: Comparison of the performance of TA, KEA-I, KEA-S, and TA + E. The best performance is shown with an underline. Cora (low) means the low labeling rate setting, and Cora (high) denotes the high labeling rate setting.

              Cora (low)                               Pubmed (low)
              GCN          GAT          MLP            GCN          GAT          MLP
TA            82.56 ±0.73  81.62 ±1.09  74.26 ±0.93    82.63 ±1.13  79.67 ±0.80  80.38 ±1.94
KEA-I + TA    83.20 ±0.56  83.38 ±0.63  74.34 ±0.97    83.30 ±1.75  81.16 ±0.87  80.74 ±2.44
KEA-S + TA    84.63 ±0.58  85.02 ±0.40  76.11 ±2.66    82.93 ±2.38  81.34 ±1.51  80.74 ±2.44
TA + E        83.38 ±0.42  84.00 ±0.09  75.73 ±0.53    87.44 ±0.49  86.71 ±0.92  90.25 ±1.56

              Cora (high)                              Pubmed (high)
              GCN          GAT          MLP            GCN          GAT          MLP
TA            90.53 ±2.33  89.10 ±3.22  86.19 ±4.38    89.65 ±0.85  89.55 ±1.16  91.39 ±0.47
KEA-I + TA    91.12 ±1.76  90.24 ±2.93  87.88 ±4.44    90.19 ±0.83  90.60 ±1.22  92.12 ±0.74
KEA-S + TA    91.09 ±1.78  92.30 ±1.69  88.95 ±4.96    90.40 ±0.92  90.82 ±1.30  91.78 ±0.56
TA + E        90.68 ±2.12  91.86 ±1.36  87.00 ±4.83    92.64 ±1.00  93.35 ±1.24  94.34 ±0.86
5. LLMS AS THE PREDICTORS

In the LLMs-as-Enhancers pipeline, the role of the LLMs remains somewhat limited since we only utilize their pre-trained knowledge but overlook their reasoning capability. Drawing inspiration from LLMs' proficiency in handling complex tasks with implicit structures, such as logical reasoning [7] and recommendation [14], we ask: Is it possible for the LLM to independently perform predictive tasks on graph structures? By shifting our focus to node attributes and overlooking the graph structures, we can perceive node classification as a text classification problem. In [60], the LLMs demonstrate significant promise, suggesting that they can proficiently process text attributes. However, one key problem is that LLMs are not originally designed to process graph structures; therefore, they cannot directly process structural information like GNNs.

In this section, we aim to explore the potential of LLMs as predictors. In particular, we first check whether LLMs can perform well without any structural information. Then, we further explore prompts that incorporate structural information in natural language. Finally, we show a case study in Section 5.3 to explore their potential usage as annotators for graphs.

5.1 How Can LLMs Perform on Popular Graph Benchmarks without Structural Information?

In this subsection, we treat the node classification problem as a text classification problem by ignoring the structural information. We adopt ChatGPT (gpt-3.5-turbo-0613) as the LLM to conduct all the experiments. We choose five popular textual graph datasets with raw text attributes: Cora [40], Citeseer [15], Pubmed [57], Ogbn-arxiv, and Ogbn-products [23]. The details of these datasets can be found in Appendix A. Considering the cost of querying LLMs' APIs, it is not possible for us to test the whole dataset for these graphs. Considering the rate limit imposed by OpenAI (https://platform.openai.com/docs/guides/rate-limits/overview), we randomly select 200 nodes from the test sets as our test data. To ensure that these 200 nodes better represent the performance on the entire set, we repeat all experiments twice. Additionally, we employ zero-shot performance as a sanity check, comparing it with the results in TAPE [22] to ensure minimal discrepancies.

We explore the following strategies (illustrative prompt templates are sketched below):
1. Zero-shot prompts: This approach solely involves the attributes of a given node.
2. Few-shot prompts: On the basis of zero-shot prompts, few-shot prompts provide in-context learning samples together with their labels so that LLMs can better understand the task. In addition to the node's content, this approach integrates the content and labels of randomly selected in-context samples from the training set. In this section, we adopt random sampling to select the few-shot samples.
3. Zero-shot prompts with Chain-of-Thoughts (CoT): CoT [70] presents its effectiveness in various reasoning tasks and can greatly improve LLMs' reasoning abilities. In this study, we test whether CoT can improve LLMs' capability on node classification tasks. On the basis of zero-shot prompts, we guide the LLMs to generate the thought process by using the prompt "think it step by step".
4. Few-shot prompts with CoT: Inspired by [82], which demonstrates that incorporating the CoT process generated by LLMs can further improve their reasoning capabilities, this approach builds upon few-shot prompts and enables the LLMs to generate a step-by-step thought process for the in-context samples. Subsequently, the generated CoT processes are inserted into the prompt as auxiliary information.
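In code, the four strategies differ only in how the prompt string is assembled. The builders below are a hedged sketch; the exact wording used in our experiments may differ, and the answer-format instruction simply mirrors the "python list" convention described under Output Parsing.

```python
# Illustrative prompt builders for the four strategies (wording is an assumption).

def zero_shot_prompt(text: str, categories: list[str]) -> str:
    return (
        f"Paper: {text}\n"
        f"Task: classify the paper into one of the following categories: "
        f"{', '.join(categories)}.\n"
        "Answer with a python list containing exactly one category, e.g. ['Neural Networks']."
    )

def few_shot_prompt(text: str, categories: list[str], examples: list[tuple[str, str]]) -> str:
    # `examples` holds (paper text, label) pairs randomly sampled from the training set;
    # we cap them at 2 samples to keep the output format stable.
    demo = "\n".join(f"Paper: {t}\nAnswer: ['{y}']" for t, y in examples[:2])
    return demo + "\n" + zero_shot_prompt(text, categories)

def zero_shot_cot_prompt(text: str, categories: list[str]) -> str:
    return zero_shot_prompt(text, categories) + "\nThink it step by step."

def few_shot_cot_prompt(text, categories, examples_with_cot):
    # `examples_with_cot` holds (paper text, LLM-generated reasoning, label) triples.
    demo = "\n".join(
        f"Paper: {t}\nReasoning: {cot}\nAnswer: ['{y}']" for t, cot, y in examples_with_cot[:2]
    )
    return demo + "\n" + zero_shot_prompt(text, categories) + "\nThink it step by step."
```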
Output Parsing. In addition, we need a parser to extract the output from LLMs. We devise a straightforward approach to retrieve the predictions from the outputs. Initially, we instruct the LLMs to generate the results in a formatted output like "a python list". Then, we can use the symbols "[" and "]" to locate the expected outputs. It should be noted that this design aims to extract the information more easily and has little influence on the performance. We observe that sometimes LLMs will output contents that are slightly different from the expected format, for example, outputting "Information Extraction" instead of the expected category "Information Retrieval". In such cases, we compute the edit distance between the extracted output and the category names and select the one with the smallest distance. This method proves effective when the input context is relatively short. If this strategy encounters errors, we resort to extracting the first mentioned category in the output text as the prediction. If there is no match, then the model's prediction for the node is counted as incorrect.
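A sketch of this parser is given below. The edit-distance routine and the fallback order follow the description above; the helper names and the exact tie-breaking are our own assumptions.

```python
# Sketch of the output parser: locate the bracketed list, then fall back to
# edit-distance matching and finally to the first category mentioned in the text.

def edit_distance(a: str, b: str) -> int:
    # Standard dynamic-programming Levenshtein distance.
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def parse_prediction(output: str, categories: list[str]):
    # 1) The LLM is asked to answer with a python-style list, e.g. "['Neural Networks']".
    if "[" in output and "]" in output:
        candidate = output[output.index("[") + 1 : output.index("]")].strip(" '\"")
        if candidate in categories:
            return candidate
        # 2) Slightly malformed answers: pick the category with the smallest edit distance.
        return min(categories, key=lambda c: edit_distance(candidate.lower(), c.lower()))
    # 3) No brackets at all: take the first category mentioned anywhere in the output.
    mentions = [(output.lower().find(c.lower()), c) for c in categories]
    mentions = [(pos, c) for pos, c in mentions if pos != -1]
    return min(mentions)[1] if mentions else None  # None counts as a wrong prediction
```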
To reduce the variance of LLMs' predictions, we set the temperature to 0. For few-shot cases, we find that providing too much context will cause LLMs to generate outputs that are not compatible with the expected formats. Therefore, we set a maximum number of samples to ensure that LLMs generate outputs with valid formats. In this study, we set this number to 2 and adopt accuracy as the performance metric.

5.1.1 Observations

Observation 11. LLMs present preliminary effectiveness on some datasets.
According to the results in Table 10, it is evident that LLMs demonstrate remarkable zero-shot performance on Pubmed. When it comes to Ogbn-products, LLMs can achieve performance levels comparable to fine-tuned PLMs. However, there is a noticeable performance gap between LLMs and GNNs on the Cora and Citeseer datasets. To gain a deeper understanding of this observation, it is essential to analyze the output of LLMs.

Observation 12. Wrong predictions made by LLMs are sometimes also reasonable.
After investigating the output of LLMs, we find that a part of the wrong predictions made by LLMs are very reasonable. An example is shown in Table 11. In this example, we can see that besides the ground truth label "Reinforcement Learning", "Neural Networks" is also a reasonable label, which also appears in the text. We find that this is a common problem for Cora, Citeseer, and Ogbn-arxiv. For Ogbn-arxiv, there are usually multiple labels for one paper on the website. However, in the Ogbn-arxiv dataset, only one of them is chosen as the ground truth. This leads to a misalignment between LLMs' commonsense knowledge and the annotation bias inherent in these datasets. Moreover, we find that introducing few-shot samples provides little help in mitigating the annotation bias.

Observation 13. Chain-of-thoughts do not bring performance gains.
For reasoning tasks in the general domain, chain-of-thoughts is believed to be an effective approach to increase LLMs' reasoning capability [70]. However, we find that it is not effective for the node classification task. This phenomenon can potentially be explained by Observation 12. In contrast to mathematical reasoning, where a single answer is typically expected, multiple reasonable chains of thought can exist for node classification. An example is shown in Table 12. This poses a challenge for LLMs as they may struggle to match the ground truth labels due to the presence of multiple reasonable labels.

Observation 14. For prompts that are very similar in semantics, there may be huge differences in their effects.
In addition, we observe that TAPE [22] implements a unique prompt on the Ogbn-arxiv dataset, yielding impressive results via zero-shot prompts. The primary distinction between their prompts and ours lies in the label design. Given that all papers originate from the computer science subcategories of Arxiv, they employ the brief term "arxiv cs subcategories" as a substitute for the 40 categories. Remarkably, this minor alteration contributes to a substantial enhancement in performance. To delve deeper into this phenomenon, we experiment with three disparate label designs: (1) Strategy 1: the original Arxiv identifier, such as "arxiv cs.CV"; (2) Strategy 2: natural language descriptors, like "computer vision"; and (3) Strategy 3: the specialized prompt, utilizing "arxiv cs subcategory" to denote all categories. Unexpectedly, we discover that Strategy 3 significantly outperforms the other two (refer to Table 13).
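Concretely, the three designs only change how the label space is verbalized in the prompt. The sketch below is illustrative: the category lists are shortened placeholders and the prompt wording is an assumption.

```python
# The three label designs for Ogbn-arxiv differ only in how the 40 classes are named
# in the prompt (category lists shortened; wording illustrative).
ARXIV_IDENTIFIERS = ["arxiv cs.AI", "arxiv cs.CV", "arxiv cs.LG", "..."]                       # Strategy 1
NATURAL_LANGUAGE = ["artificial intelligence", "computer vision", "machine learning", "..."]   # Strategy 2

def build_prompt(paper_text: str, strategy: int) -> str:
    if strategy == 1:
        label_space = ", ".join(ARXIV_IDENTIFIERS)
    elif strategy == 2:
        label_space = ", ".join(NATURAL_LANGUAGE)
    else:  # Strategy 3: refer to the label space indirectly
        label_space = "one of the arxiv cs subcategories"
    return f"Paper: {paper_text}\nWhich category does it belong to? Answer with {label_space}."
```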
Given that LLMs undergo pre-training on extensive text corpora, it is likely that these corpora include papers from the Arxiv database. That specific prompt could potentially enhance the "activation" of these models' corresponding memory. However, the reason for the excellent results achieved by this kind of prompt might not stem from simple data memorization by the LLM [25]. When applied to papers from after 2023 that are not included in the pre-training corpus of the LLMs, this prompt also achieves similar effectiveness. This phenomenon reminds us that when using ChatGPT, sometimes providing more information in the prompt (such as category information from the Ogbn-arxiv dataset) may actually lead to a decrease in performance.

Table 10: Performance of LLMs on real-world text-attributed graphs without structural information. We also include the result of GCN (or SAGE for Ogbn-products) together with Sentence-BERT features. For Cora, Citeseer, and Pubmed, we show the results of the low labeling rate setting.

                      Cora         Citeseer     Pubmed       Ogbn-arxiv   Ogbn-products
Zero-shot             67.00 ±1.41  65.50 ±3.53  90.75 ±5.30  51.75 ±3.89  70.75 ±2.48
Few-shot              67.75 ±3.53  66.00 ±5.66  85.50 ±2.80  50.25 ±1.06  77.75 ±1.06
Zero-shot with CoT    64.00 ±0.71  66.50 ±2.82  86.25 ±3.29  50.50 ±1.41  71.25 ±1.06
Few-shot with CoT     64.00 ±1.41  60.50 ±4.94  85.50 ±4.94  47.25 ±2.47  73.25 ±1.77
GCN/SAGE              82.20 ±0.49  71.19 ±1.10  81.01 ±1.32  73.10 ±0.25  82.51 ±0.53

Table 11: A wrong but reasonable prediction made by LLMs

Paper: The Neural Network House: An overview; Typical home comfort systems utilize only rudimentary forms of energy management and conservation. The most sophisticated technology in common use today is an automatic setback thermostat. Tremendous potential remains for improving the efficiency of electric and gas usage...
Ground Truth: Reinforcement Learning
LLM's Prediction: Neural Networks

Table 12: An example in which the LLM generates a CoT process that does not match the ground truth label

Paper: The Neural Network House: An overview: Typical home comfort systems utilize only rudimentary forms of energy management and conservation. The most sophisticated technology in common use today is an automatic setback thermostat. Tremendous potential remains for improving the efficiency of electric and gas usage...
Generated Chain-of-thoughts: The paper discusses the use of neural networks for intelligent control and mentions the utilization of neural network reinforcement learning and prediction techniques. Therefore, the most likely category for this paper is 'Neural Networks'.
Ground Truth: Reinforcement Learning
LLM's Prediction: Neural Networks

Table 13: Performance of LLMs on the Ogbn-arxiv dataset with three different label designs.

              Strategy 1   Strategy 2   Strategy 3
Ogbn-arxiv    48.5         51.8         74.5
5.2 Incorporating Structural Information in the Prompts

As we note, LLMs can already present superior zero-shot performance on some datasets without being provided any structural information. However, there is still a large performance gap between LLMs and GNNs on Cora, Citeseer, and Ogbn-arxiv. A question then naturally arises: can we further increase LLMs' performance by incorporating structural information? To answer this question, we first need to identify how to denote the structural information in the prompt. LLMs such as ChatGPT are not originally designed for graph structures, so they cannot process adjacency matrices the way GNNs do. In this part, we study several ways to convey structural information and test their effectiveness on the Cora dataset.

Specifically, we first consider inputting the whole graph into the LLMs. Using the Cora dataset as an example, we try prompts like "node 1: ⟨paper content⟩" to represent attributes, and prompts like "node 1 cites node 2" to represent edges. However, we find that this approach is not feasible since LLMs usually have a small input context length. As a result, we consider an "ego-graph" view, which refers to the subgraphs induced from the center nodes. In this way, we can narrow down the number of nodes to be considered.

Specifically, we first organize the neighbors of the current node as a list of dictionaries consisting of the attributes and, for training nodes, the labels of the neighboring nodes. Then, the LLM summarizes the neighborhood information. It should be noted that we only consider 2-hop neighbors because GNNs typically have 2 layers, indicating that the 2-hop neighbor information is the most useful in most cases. Considering the input context limit of LLMs, we empirically find that we can summarize the attribute information of 5 neighbors at a time. In this paper, we sample neighbors once and only summarize those selected neighbors. In practice, we can sample multiple times and summarize each sample to obtain more fine-grained neighborhood information.
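The construction of such a structure-aware prompt can be sketched as follows. The dictionary fields, the summarization wording, and the networkx-style graph interface are assumptions; the paper only specifies sampled 2-hop neighbors, at most 5 per summary, with labels attached for training nodes.

```python
# Sketch: build a 2-hop "ego-graph" neighbor summary prompt for a center node.
# Assumes a networkx-style graph with node text stored under the "text" attribute.
import random

def neighbor_dicts(node, graph, train_labels, hops=2, k=5):
    # Collect the 2-hop neighborhood, then sample at most k neighbors.
    frontier, seen = {node}, {node}
    for _ in range(hops):
        frontier = {m for n in frontier for m in graph[n]} - seen
        seen |= frontier
    candidates = sorted(seen - {node})
    sampled = random.sample(candidates, min(k, len(candidates)))
    return [
        {"content": graph.nodes[m]["text"],
         "label": train_labels.get(m)}   # None for non-training nodes
        for m in sampled
    ]

def summary_prompt(neighbors):
    return (
        f"Neighbor papers (with labels when available): {neighbors}\n"
        "Summarize the topics shared by these neighboring papers in a few sentences."
    )

# The returned summary is then appended to the center node's own attributes
# in the zero-shot or few-shot prompt.
```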
Observation 15. Neighborhood summarization is likely to achieve performance gains.
From Table 14, we note that incorporating neighborhood information in either the zero-shot or the few-shot approach yields performance gains compared to the zero-shot prompt without structural information, except on the Pubmed dataset. Following the "homophily" assumption [87; 39], which suggests that neighboring nodes tend to share the same labels, the inclusion of neighboring information can potentially alleviate annotation bias. For instance, consider a paper from Arxiv covering general topics like transformers. Merely analyzing the content of this paper makes it difficult to determine which category the author would choose, as categories such as "Artificial Intelligence," "Machine Learning," and "Computer Vision" are all plausible options. However, by examining its citation relationships, we can better infer the author's bias. If the paper cites numerous sources from the "Computer Vision" domain, it is likely that the author is also a researcher in that field, thereby favoring the selection of this category. Consequently, structural information provides implicit supervision to assist LLMs in capturing the inherent annotation bias in the dataset. On the Pubmed dataset, however, we observe that incorporating neighborhood information results in a clear performance drop, which necessitates the deeper analysis below.

Observation 16. LLMs with structure-aware prompts may suffer from heterophilous neighboring nodes.
From Table 14, we observe that LLMs perform worse on Pubmed after incorporating the structural information. To gain a deeper understanding, we focus on those nodes where zero-shot prompts without structural information lead to correct predictions but prompts with 2-hop information do not. An example of this kind of node is shown in Table 15. After analyzing the 2-hop neighbors of this node, we find that 15 out of 19 2-hop neighboring nodes have labels different from this node. This case is usually denoted as "heterophily" [87], a phenomenon in graph theory where nodes tend to connect with nodes that are dissimilar to them. In this case, we find that both GNNs and LLMs with a structure-aware prompt make wrong predictions, whereas LLMs ignoring structural information make correct predictions. This indicates that LLMs with a structure-aware prompt may also suffer from the "heterophily" problem.

Table 14: Performance of LLMs on real-world text-attributed graphs with summarized neighborhood information. For Cora, Citeseer, and Pubmed, we show the results of the low labeling rate setting. We also include the result of GCN (or SAGE for Ogbn-products) together with Sentence-BERT features.

                            Cora         Citeseer     Pubmed       Ogbn-arxiv   Ogbn-products
Zero-shot                   67.00 ±1.41  65.50 ±3.53  90.75 ±5.30  51.75 ±3.89  70.75 ±2.48
Zero-shot with 2-hop info   71.75 ±0.35  62.00 ±1.41  88.00 ±1.41  55.00 ±2.83  75.25 ±3.53
Few-shot                    67.75 ±3.53  66.00 ±5.66  85.50 ±2.80  50.25 ±1.06  77.75 ±1.06
Few-shot with 2-hop info    74.00 ±4.24  67.00 ±4.94  79.25 ±6.71  52.25 ±3.18  76.00 ±2.82
GCN/SAGE                    82.20 ±0.49  71.19 ±1.10  81.01 ±1.32  73.10 ±0.25  82.51 ±0.53

Table 15: GNNs and LLMs with structure-aware prompts are both wrong

Paper: Title: C-reactive protein and incident cardiovascular events among men with diabetes.
Abstract: OBJECTIVE: Several large prospective studies have shown that baseline levels of C-reactive protein (CRP) ...
Neighbor Summary: This paper focuses on different aspects of type 2 diabetes mellitus. It explores the levels of various markers such as tumor necrosis factor-alpha, interleukin-2...
Ground truth: "Diabetes Mellitus Type 1"
Structure-ignorant prompt: "Diabetes Mellitus Type 1"
Structure-aware prompt: "Diabetes Mellitus Type 2"
GNN: "Diabetes Mellitus Type 2"
5.3 Case Study: LLMs as the Pseudo Annotators

Table 10 shows that LLMs can be good zero-shot predictors on several real-world graphs, which opens up the possibility of conducting zero-shot inference on datasets without labels. Despite the effectiveness of LLMs, two problems remain: (1) the price of using LLMs' APIs is not cheap, and conducting inference on all testing nodes of large graphs incurs high costs; (2) whether it is a locally deployed open-source LLM or a closed-source LLM accessed through an API, inference with these LLMs is much slower than with GNNs, since the former has high computational resource requirements while the latter is subject to rate limits. One potential solution to these challenges is leveraging the knowledge of LLMs to train smaller models like GNNs, which inspires a potential application of LLMs as annotators.

Based on the preliminary experimental outcomes, LLMs display encouraging results on certain datasets, highlighting their potential for generating high-quality pseudo labels. However, using LLMs as annotators introduces a new challenge: deciding which nodes should be annotated. Unlike self-labeling in GNNs [8; 34; 32], where confidence-based or information-based metrics are employed to estimate the quality of pseudo labels, it remains a difficult task to determine the confidence of pseudo labels generated by LLMs. Additionally, different nodes within a graph have distinct impacts on other nodes [72], and annotating certain nodes can result in a more significant performance improvement than annotating others. Consequently, the primary challenge can be summarized as follows: how can we effectively select the nodes that are both critical within the graph and reliable in the context of LLMs?

Taking into account the complexity of these two challenges, we do not intend to address them comprehensively in this paper. Instead, we present a preliminary study that evaluates the performance of a simple strategy: randomly selecting a subset of nodes for annotation. It is worth noting that advanced selection strategies such as active learning [72] could be adopted to improve the final performance; we leave such exploration as future work. Regarding the annotation budget, we adopt a "low labeling rate" setting, wherein we randomly select a total of 20 nodes multiplied by the number of classes. Of the selected nodes, we use 75% as training nodes and the rest as validation nodes. Consequently, we annotate a total of 140 nodes in the Cora dataset and 60 nodes in the Pubmed dataset. In this part, we use GCN as the GNN model and adopt the embeddings generated by the Sentence-BERT model. The results are shown in Table 16. We can observe that training GCN on the pseudo labels leads to satisfying performance; in particular, it can match the performance of GCN trained on ground truth labels with 10 shots per class. As a reference, around 67% of the pseudo labels for Cora match the ground truth labels, while around 93% of the pseudo labels for Pubmed do.

Table 16: Performance of GCN trained on either pseudo labels generated by LLMs or ground truth labels

                          Cora           Pubmed
Using pseudo labels
20 shots × #class         64.95 ± 0.98   71.70 ± 1.06
Using ground truth
3 shots per class         52.63 ± 1.46   59.35 ± 2.67
5 shots per class         58.97 ± 1.41   65.98 ± 0.74
10 shots per class        69.87 ± 2.27   71.51 ± 0.77
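Putting the pieces together, the annotation pipeline is small. The sketch below follows the stated budget and split; the helper names are ours, and `llm_predict` stands in for the structure-ignorant zero-shot prompt of Section 5.1 returning a class index.

```python
# Sketch of the LLMs-as-annotators pipeline: random budget, 75/25 split,
# pseudo labels from the LLM, then a GCN trained on them.
import random
import torch

def annotate_and_train(texts, edge_index, num_classes, llm_predict, build_gcn):
    budget = 20 * num_classes                       # "low labeling rate" budget
    picked = random.sample(range(len(texts)), budget)
    train_nodes = picked[: int(0.75 * budget)]      # 75% for training
    val_nodes = picked[int(0.75 * budget):]         # 25% for validation

    # Pseudo labels come from the structure-ignorant zero-shot prompt (Section 5.1).
    pseudo = {i: llm_predict(texts[i]) for i in picked}

    y = torch.full((len(texts),), -1, dtype=torch.long)
    for i, label in pseudo.items():
        y[i] = label

    model = build_gcn(num_classes)                  # e.g. GCN over Sentence-BERT features
    train_mask = torch.zeros(len(texts), dtype=torch.bool)
    train_mask[train_nodes] = True
    # ... standard GCN training loop on (y, train_mask), model selection on val_nodes ...
    return model
```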
Observation 17. The quality of pseudo labels is key to downstream performance.
Although we do not place significant emphasis on the selection of nodes to be labeled, the preliminary results show that there is relatively little variance among different random selections. Comparing this to the impact of pseudo labels, we observe that the quality of the pseudo labels can make a significant difference. When higher quality pseudo labels are used, GNNs perform much better on Pubmed than on Cora. This result highlights the importance of developing an approach to select confident nodes for LLMs.

Observation 18. Getting the confidence by simply prompting the LLMs may not work, since they are too "confident".
Based on the previous observations, we check some simple strategies to obtain the confidence level of LLMs' outputs. Initially, we attempt to prompt the LLMs directly for their confidence level. However, we discover that most of the time LLMs simply output a value of 1, rendering it meaningless. Examples are shown in Table 17. Another potential solution is to utilize LLMs that expose prediction logits, such as text-davinci-003. However, we observe that the probability of the outputs from these models is consistently close to 1, rendering the output not helpful.
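For completeness, the confidence probe looks roughly as follows; the instruction wording follows Table 17, while the dict-style parsing is an assumption.

```python
# Sketch of the (unsuccessful) confidence probe: ask for a dict with prediction and
# confidence, then parse it. In practice the returned confidence is almost always 1.
import ast

CONFIDENCE_SUFFIX = (
    "Output the confidence level in the range of 0 to 1 and the most 1 possible "
    'category of this paper as a python dict, like "prediction": "XX", "confidence": "XX"'
)

def parse_confidence(output: str):
    start, end = output.find("{"), output.rfind("}")
    if start == -1 or end == -1:
        return None, None
    try:
        d = ast.literal_eval(output[start : end + 1])
        return d.get("prediction"), float(d.get("confidence", 0.0))
    except (ValueError, SyntaxError):
        return None, None
```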
Table 17: The prompt used to elicit a confidence level from the LLMs

Instruction
Output the confidence level in the range of 0 to 1 and the most 1 possible category of this paper as a python dict, like "prediction": "XX", "confidence": "XX"

5.4 Case Study: Applying LLMs to Handle Out-of-Distribution Data

Out-of-distribution (OOD) learning addresses scenarios where training and test data are drawn from different distributions. Given the ubiquity of distribution shifts in graph data [29], OOD generalization on graphs has emerged as a crucial research direction in recent years. A recent benchmark, GOOD [17], reveals that existing GNN-based models struggle with robustness when confronted with distributional shifts. In contrast, LLMs have demonstrated commendable robustness on textual data in the presence of OOD scenarios [67]. Node classification on TAGs, when disregarding graph structures, can also be considered a text classification task. Therefore, in this section, we initiate a preliminary exploration into the application of LLMs to OOD scenarios on graphs.

Experimental Setups. We adopt the GOOD-Arxiv dataset from the GOOD benchmark [17] considering its text attribute availability. Specifically, we adopt all four types of OOD shift: "Concept-degree", "Covariate-degree", "Concept-time", and "Covariate-time" from GOOD. The final results are shown in Table 18. We adopt the prompt from TAPE [22] since it achieves better performance on the Ogbn-arxiv dataset. For comparison, we take the best baseline models from the GOOD benchmark.

Table 18: OOD performance comparison. "Val" means the results on the IID validation sets. "Test" indicates the results on the OOD test sets. We can see that LLMs-as-Predictors consistently outperform the best GNN-based OOD baselines. Moreover, the gap between IID performance and OOD performance is small.

                     Val      Test     Best baseline (test)
Concept-degree       73.01    72.79    63.00
Covariate-degree     70.23    68.21    59.08
Concept-time         72.66    71.98    67.45
Covariate-time       74.28    74.37    71.34

Observation 19. LLMs-as-Predictors demonstrate robustness when facing OOD data.
From Table 18, we find that LLMs-as-Predictors present promising robustness against OOD data. It should be noted that we only try a simple structure-ignorant prompt, and we may further improve the OOD performance of LLMs by selecting proper in-context samples and incorporating structural information. In a nutshell, LLMs present great potential to enhance the OOD generalization capability of graph models.
6. RELATED WORK

Following our two proposed pipelines, i.e., LLMs as the Enhancers and LLMs as the Predictors, we review existing works in this section.

6.1 LLMs as the Enhancers

In the recent surge of research, increasing attention has been paid to the intersection of LLMs and GNNs in the realm of TAGs [83; 6; 78; 77; 49; 22; 86; 24; 33; 10]. Compared to shallow embeddings, LLMs can provide a richer repository of commonsense knowledge, which could potentially enhance the performance of downstream tasks [51].

Several studies employ PLMs as text encoders, transforming text attributes into node features, and can thus be classified as feature-level enhancement. The integration structures vary among these works: some adopt a simple cascading structure [49; 6; 78; 37], while others opt for an iterative structure [83; 74; 77]. For those utilizing the cascading structure, preliminary investigations have been conducted to determine how the quality of text embeddings affects downstream classification performance [49]. GIANT [6] attempts to incorporate structural information into the pre-training stage of PLMs, achieving improved performance albeit with additional training overhead. SimTEG [10] suggests that replacing the original embeddings of pre-trained language models with embeddings obtained through parameter-efficient fine-tuning can mitigate overfitting during fine-tuning, thereby further enhancing the performance of the cascading structure. OneForAll [33] further adopts a sentence embedding model to unify the feature space and proposes a unified model for diverse tasks across multiple datasets. The cascading structure has also been successfully applied to tasks such as fact verification [37] and question answering [78]. However, despite its simplicity, recent studies [83] have identified potential drawbacks of the cascading structure: it establishes only a tenuous connection between the text attributes and the graph, since the embeddings generated by the PLMs do not take graph structures into account and the parameters of the PLMs remain constant during the GNN training process. Alternatively, in the iterative structure, Graphformers [74] facilitates the co-training of PLMs and GNNs using each other's generated embeddings. GLEM [83] takes this a step further by considering pseudo labels generated by both PLMs and GNNs and incorporating them into the optimization process. DRAGON [77] successfully extends the iterative structure to the knowledge graph domain.
Compared to these studies focusing on PLMs, a recent study [22] considers the use of embedding-invisible LLMs such as ChatGPT [45] for representation learning on TAGs. It aims to adopt LLMs to enhance the text attributes and can thus be categorized as text-level enhancement. This work introduces a prompt designed to generate explanations for the predictions made by LLMs. These generated explanations are subsequently encoded into augmented features by PLMs. Through the ensemble of these augmented features with the original features, the proposed methodology demonstrates its efficacy and accomplishes state-of-the-art performance on the Ogbn-arxiv leaderboard [23]. Nevertheless, the study offers limited analytical insight into the underlying reasons for the success of this approach. Additionally, we have identified a potential concern regarding the prompts utilized in the referenced study.

Another work pertaining to the integration of LLMs and GNNs is Graph-Toolformer [80]. Drawing inspiration from Toolformer [56], this study utilizes LLMs as an interface to bridge natural language commands and GNNs. This approach does not change the features or training of GNNs and is therefore out of our scope.
6.2 LLMs as the Predictors

While LLMs-as-Enhancers have proven to be effective, the pipeline still requires GNNs for the final predictions. In a significant shift from this approach, recent studies [18; 65] have begun exploring a unique pipeline that relies solely on LLMs for the final predictions. These works fall under the category of LLMs-as-Predictors. The first series of works focuses on applying closed-source LLMs without tuning their parameters. GPT4Graph [18] evaluates the potential of LLMs in executing knowledge graph (KG) reasoning and node classification tasks. Their findings indicate that these models can deliver competitive results for short-range KG reasoning but struggle with long-range KG reasoning and node classification tasks. However, its presentation is rather vague, and the detailed format of the prompts they use is not given. Considering the public availability of the Arxiv data, the data leakage problem in evaluation is further studied in [25]. NLGraph [65] introduces a synthetic benchmark to assess graph structure reasoning capabilities. The study primarily concentrates on traditional graph reasoning tasks such as shortest path, maximum flow, and bipartite matching, while offering only limited analysis of node classification tasks. This does not align with our central focus, which is primarily on graph learning with a specific emphasis on node classification. GraphText [84] further tries to apply LLMs to a broader range of non-text-attributed graphs by converting the original features into clustering centers or pseudo labels. LLM4Dyg [81] further evaluates LLMs' capability on temporal graph-related tasks. LLMGNN [4] and GPT4GNAS [66] apply LLMs-as-Predictors as annotators and as agents for neural architecture search, respectively.
As these closed-source LLMs only accept text-type inputs, the first type of methods requires transforming graphs into some form of natural language, either by directly using node attributes or by describing the graph structure in natural language. Meanwhile, due to the input length limitations of LLMs, this transformation process often results in the loss of a considerable amount of information from the graph. Therefore, the second type of work involves fine-tuning LLMs to enable them to understand graph information represented as embeddings. InstructGLM [79] combines textual instructions with node features in embedding form, enabling LLMs to understand node features through instruction tuning; it then predicts the type of nodes based on the given instructions. GraphGPT [62] further introduces cross-modal contrastive learning to align the graph and text feature spaces. It also introduces dual-stage instruction tuning, where the first stage adopts self-supervised instruction tuning to make LLMs better understand graph-structured information, and the second stage adopts task-specific fine-tuning to allow LLMs to acquire task-specific knowledge and then make predictions. GraphLLM [3] and DGTL [50] apply this pipeline to graph reasoning tasks and graph representation learning.
7. CONCLUSIONS, LIMITATIONS, AND FUTURE DIRECTIONS

In this section, we summarize our key findings, present the limitations of this study, and discuss potential directions for leveraging LLMs in graph machine learning.

7.1 Key Findings

In this paper, we propose two potential pipelines, LLMs-as-Enhancers and LLMs-as-Predictors, that incorporate LLMs to handle text-attributed graphs. Our rigorous empirical studies reveal several interesting findings which provide new insights for future studies. We highlight some key findings below; more can be found from Observation 1 to Observation 19.

Finding 1. For LLMs-as-Enhancers, deep sentence embedding models present effectiveness in terms of performance and efficiency. We empirically find that when we adopt deep sentence embedding models as enhancers at the feature level, they present good performance under different dataset split settings, as well as good scalability. This indicates that they are good candidates to enhance text attributes at the feature level.
Finding 2. For LLMs-as-Enhancers, the combination of LLMs' augmentations and ensembling demonstrates its effectiveness. As demonstrated in Section 4.2, when LLMs are utilized as enhancers at the text level, we observe performance improvements by ensembling the augmented attributes with the original attributes across datasets and data splits. This suggests a promising approach to enhance the performance of attribute-related tasks. The proposed pipeline involves augmenting the attributes with LLMs and subsequently ensembling the original attributes with the augmented ones.

Finding 3. For LLMs-as-Predictors, LLMs present preliminary effectiveness but also indicate potential evaluation problems. In Section 5, we conduct preliminary experiments on applying LLMs as predictors, utilizing both textual attributes and edge relationships. The results demonstrate that LLMs present effectiveness in processing textual attributes and achieve good zero-shot performance on certain datasets. Moreover, our analysis reveals two potential problems within the existing evaluation framework: (1) there are instances where LLMs' inaccurate predictions can also be considered reasonable, particularly in the case of citation datasets where multiple labels may be appropriate; (2) we find a potential test data leakage problem on Ogbn-arxiv, which underscores the need for a careful reconsideration of how to appropriately evaluate the performance of LLMs on real-world datasets.
7.2 Limitations

A deeper understanding of the effectiveness of text embeddings. Despite the effectiveness of deep sentence embedding models, our understanding of why their embeddings outperform PLMs' on node classification tasks remains limited. Furthermore, we observe a performance gap between deep sentence embedding models and GLEM on the Ogbn-products dataset, which may be related to the domains of the dataset. Moreover, as shown in Observation 4, GNNs demonstrate different levels of effectiveness on different text embeddings, yet we give only limited explanations for this phenomenon. To gain a deeper understanding, we need to examine the original feature space and the feature space after aggregation. This phenomenon may potentially be related to the anisotropy of language model embeddings [12]. More in-depth analysis is required to better understand these phenomena.

Costs of LLM augmentations. In this work, we study TAPE and KEA to enhance the textual attributes at the text level. Although these methods have proven to be effective, they require querying LLMs' APIs at least N times for a graph with N nodes. Given the cost associated with LLMs, this poses a significant expense when dealing with large-scale datasets. Consequently, we have not presented results for the Ogbn-arxiv and Ogbn-products datasets.
Text-formatted hand-crafted prompts to represent graphs. In Section 5, we limit our study to the use of "natural language" prompts for graph representation. However, various other formats exist for representing graphs in text, such as XML, YAML, GML, and more [55]. Moreover, we mainly design these prompts in a hand-crafted way, based largely on trial and error. It is thus worthwhile to explore more prompt formats and how to come up with automatic prompts.

7.3 Future Directions

Extending the current pipelines to more tasks and more types of graphs. In this study, our primary focus is on investigating the node classification task for text-attributed graphs. Nevertheless, it remains unexplored whether these two pipelines can be extended to other graph-learning tasks or other types of graphs. Certain tasks necessitate the utilization of long-range information [11], and representing such information within LLMs' limited input context poses a significant challenge. Furthermore, we demonstrate that LLMs exhibit promising initial results on graphs containing abundant textual information, particularly in natural language. However, their effective extension to other types of graphs with non-natural-language information, such as molecular graphs [13; 30], still needs further exploration.
Evaluating LLMs properly. As noted in Finding 3, the current evaluation faces two problems: (1) the test data may already be included in the pre-training corpora of LLMs; (2) the ground truth labels may present ambiguity, and the performance calculated based on them may not reflect LLMs' genuine capability. For the first problem, one possible mitigation is to use the latest datasets that are not included in the training corpora of LLMs. However, that means we need to keep collecting and annotating data, which does not seem to be an efficient solution. For the second problem, one possible solution is to reconsider the ground truth design. For instance, for the categorization of academic papers, we may adopt a multi-label setting and select all applicable categories as the ground truth. However, for more general tasks, it remains a challenge to design more reasonable ground truths. Generally speaking, it is a valuable future direction to rethink how to properly evaluate LLMs.
effectively under-
LLMs, this poses a significant expense when dealing with stand information in the graph domain. There are mainly
large-scale datasets. Consequently, we have not presented two approaches to address this issue in current work. The
results for the Ogbn-arxiv and Ogbn-products datasets. first approach is to translate the information on the graph
Text-formatted hand-crafted prompts to represent into natural language that LLMs can understand. The sec-
graphs. In Section 5, we limit our study to the use of “nat- ond approach involves directly inputting the graph infor-
ural language” prompts for graph representation. However, mation in the form of embeddings and then using instruc-
variousotherformatsexistforrepresentinggraphsinnatural tion tuning to enable LLMs to understand this information.
language such as XML, YAML, GML, and more [55]. More- However, both methods have their evident limitations. For
over, wemainlydesignthesepromptsinahand-craftedway, the first method, the translation process can result in in-
which is mainly based on trial and error. It’s thus worth- formation loss, and the inherent input length limitation of
while to consider exploring more prompt formats and how LLMs also prevents users from inputting large-scale graphs.
to come up with automatic prompts. For the second method, the introduction of tuning signifi-
cantly increases computational overhead. Is there a better
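To make the discussion of prompt formats concrete, the following minimal sketch (not code from this paper) serializes the 1-hop ego-graph of a node either as a hand-crafted natural-language prompt or in GML, one of the alternative exchange formats mentioned above. The helper graph_to_prompt and the toy citation graph are illustrative assumptions.

import networkx as nx

def graph_to_prompt(graph: nx.Graph, node: int, fmt: str = "natural") -> str:
    """Serialize the 1-hop ego-graph of `node` as text for an LLM prompt."""
    ego = nx.ego_graph(graph, node, radius=1)
    if fmt == "gml":
        # GML is one of the graph-exchange formats surveyed in [55].
        return "\n".join(nx.generate_gml(ego))
    neighbors = [n for n in ego.nodes if n != node]
    lines = [f"Paper {node}: {graph.nodes[node].get('text', '')}",
             "It is connected to the following papers:"]
    lines += [f"- Paper {n}: {graph.nodes[n].get('text', '')}" for n in neighbors]
    return "\n".join(lines)

# Toy usage on a two-node citation graph.
g = nx.Graph()
g.add_node(0, text="Mean-field approximation for probabilistic models")
g.add_node(1, text="Variational inference in graphical models")
g.add_edge(0, 1)
print(graph_to_prompt(g, 0))             # hand-crafted natural-language prompt
print(graph_to_prompt(g, 0, fmt="gml"))  # the same ego-graph in GML

Searching over such serializations automatically, rather than hand-crafting them, is exactly the open question raised above.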
7.3 Future Directions
Extending the current pipelines to more tasks and more types of graphs. In this study, our primary focus is on investigating the node classification task for text-attributed graphs. Nevertheless, it remains unexplored whether these two pipelines can be extended to other graph-learning tasks or other types of graphs. Certain tasks necessitate the utilization of long-range information [11], and representing such information within LLMs' limited input context poses a significant challenge. Furthermore, we demonstrate that LLMs exhibit promising initial results in graphs containing abundant textual information, particularly in natural language. However, their effective extension to other types of graphs with non-natural-language information, such as molecular graphs [13; 30], still requires further exploration.

Using LLMs more efficiently. Despite the effectiveness of LLMs, their operational efficiency and cost still pose significant challenges. Taking ChatGPT, which is accessed through an API, as an example, the current billing model incurs high costs for processing large-scale graphs. As for locally deployed open-source large models, even just using them for inference requires substantial hardware resources, not to mention training the models with parameter updates. Therefore, developing more efficient strategies to utilize LLMs is currently a challenge.

Evaluating LLMs' capability for graph learning tasks. In this paper, we briefly discuss the potential pitfalls of the current evaluation framework. There are mainly two problems: (1) the test data may already appear in the training corpus of LLMs, which is referred to as "contamination" 5; (2) the ground truth labels may present ambiguity, and the performance calculated based on them may not reflect LLMs' genuine capability. For the first problem, one possible mitigation is to use the latest datasets, which are not included in the training corpus of LLMs. However, this means we would need to keep collecting and annotating data, which does not seem to be an effective solution. For the second problem, one possible solution is to reconsider the ground truth design. For instance, for the categorization of academic papers, we may adopt a multi-label setting and select all applicable categories as the ground truth. However, for more general tasks, it remains a challenge to design more reasonable ground truths. Generally speaking, it is a valuable future direction to rethink how to properly evaluate LLMs.

5 https://hitz-zentroa.github.io/lm-contamination/

Aligning the feature space of graph models and LLMs. Currently, a major obstacle hindering the wider application of LLMs in the field of graph learning is the discrepancy between the feature space of LLMs and that of graphs. This discrepancy makes it difficult for LLMs to effectively understand information in the graph domain. There are mainly two approaches to address this issue in current work. The first approach is to translate the information on the graph into natural language that LLMs can understand. The second approach involves directly inputting the graph information in the form of embeddings and then using instruction tuning to enable LLMs to understand this information. However, both methods have evident limitations. For the first method, the translation process can result in information loss, and the inherent input length limitation of LLMs also prevents users from inputting large-scale graphs. For the second method, the introduction of tuning significantly increases computational overhead. Is there a better way to align LLMs with graphs? A recent work targeting multimodality [47] has shown new possibilities. It demonstrates that with fixed LLM parameters, only a linear transformation layer is needed to convert information from the visual domain into content that can be effectively processed by LLMs, and such an architecture also holds great potential in the field of graph machine learning.
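As a rough illustration of that idea, the sketch below maps frozen GNN node embeddings into an LLM's token-embedding space with a single trainable linear layer, in the spirit of [47]. The class name and the dimensions are illustrative assumptions, not an implementation from any of the cited works.

import torch
import torch.nn as nn

class GraphToLLMProjector(nn.Module):
    """Train only this linear map; the GNN and the LLM stay frozen."""

    def __init__(self, gnn_dim: int, llm_dim: int):
        super().__init__()
        self.proj = nn.Linear(gnn_dim, llm_dim)

    def forward(self, node_embeddings: torch.Tensor) -> torch.Tensor:
        # (num_nodes, gnn_dim) -> (num_nodes, llm_dim); the projected vectors
        # can be prepended to the LLM input as "graph tokens".
        return self.proj(node_embeddings)

# Minimal usage: 128-d GNN embeddings projected into a 4096-d LLM space.
projector = GraphToLLMProjector(gnn_dim=128, llm_dim=4096)
graph_tokens = projector(torch.randn(16, 128))
print(graph_tokens.shape)  # torch.Size([16, 4096])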
8. REFERENCES

[1] R. Anil, A. M. Dai, O. Firat, M. Johnson, D. Lepikhin, A. Passos, S. Shakeri, E. Taropa, P. Bailey, Z. Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023.
[2] S. Bubeck, V. Chandrasekaran, R. Eldan, J. A. Gehrke, E. Horvitz, E. Kamar, P. Lee, Y. T. Lee, Y.-F. Li, S. M. Lundberg, H. Nori, H. Palangi, M. T. Ribeiro, and Y. Zhang. Sparks of artificial general intelligence: Early experiments with gpt-4. ArXiv, abs/2303.12712, 2023.
[3] Z. Chai, T. Zhang, L. Wu, K. Han, X. Hu, X. Huang, and Y. Yang. Graphllm: Boosting graph reasoning ability of large language model. arXiv preprint arXiv:2310.05845, 2023.
[4] Z. Chen, H. Mao, H. Wen, H. Han, W. Jin, H. Zhang, H. Liu, and J. Tang. Label-free node classification on graphs with large language models (llms). arXiv preprint arXiv:2310.04668, 2023.
[5] W.-L. Chiang, X. Liu, S. Si, Y. Li, S. Bengio, and C.-J. Hsieh. Cluster-gcn: An efficient algorithm for training deep and large graph convolutional networks. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2019.
[6] E. Chien, W.-C. Chang, C.-J. Hsieh, H.-F. Yu, J. Zhang, O. Milenkovic, and I. S. Dhillon. Node feature extraction by self-supervised multi-scale neighborhood prediction. In ICLR 2022, 2022.
[7] A. Creswell, M. Shanahan, and I. Higgins. Selection-inference: Exploiting large language models for interpretable logical reasoning. In The Eleventh International Conference on Learning Representations, 2023.
[8] E. Dai, C. Aggarwal, and S. Wang. Nrgnn: Learning a label noise resistant graph neural network on sparsely and noisily labeled graphs. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, KDD '21, pages 227-236, New York, NY, USA, 2021. Association for Computing Machinery.
[9] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June 2019. Association for Computational Linguistics.
[10] K. Duan, Q. Liu, T.-S. Chua, S. Yan, W. T. Ooi, Q. Xie, and J. He. Simteg: A frustratingly simple approach improves textual graph learning. arXiv preprint arXiv:2308.02565, 2023.
[11] V. P. Dwivedi, L. Rampášek, M. Galkin, A. Parviz, G. Wolf, A. T. Luu, and D. Beaini. Long range graph benchmark. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022.
[12] K. Ethayarajh. How contextual are contextualized word representations? Comparing the geometry of BERT, ELMo, and GPT-2 embeddings. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 55-65, Hong Kong, China, Nov. 2019. Association for Computational Linguistics.
[13] M. Fey and J. E. Lenssen. Fast graph representation learning with pytorch geometric. ArXiv, abs/1903.02428, 2019.
[14] Y. Gao, T. Sheng, Y. Xiang, Y. Xiong, H. Wang, and J. Zhang. Chat-rec: Towards interactive and explainable llms-augmented recommender system. ArXiv, abs/2303.14524, 2023.
[15] C. L. Giles, K. D. Bollacker, and S. Lawrence. Citeseer: An automatic citation indexing system. In Proceedings of the Third ACM Conference on Digital Libraries, DL '98, pages 89-98, New York, NY, USA, 1998. ACM.
[16] J. Gilmer, S. S. Schoenholz, P. F. Riley, O. Vinyals, and G. E. Dahl. Neural message passing for quantum chemistry. ArXiv, abs/1704.01212, 2017.
[17] S. Gui, X. Li, L. Wang, and S. Ji. GOOD: A graph out-of-distribution benchmark. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022.
[18] J. Guo, L. Du, and H. Liu. Gpt4graph: Can large language models understand graph structured data? an empirical evaluation and benchmarking. arXiv preprint arXiv:2305.15066, 2023.
[19] W. L. Hamilton, R. Ying, and J. Leskovec. Inductive representation learning on large graphs. In NIPS, 2017.
[20] Z. S. Harris. Distributional structure. Word, 10(2-3):146-162, 1954.
[21] P. He, X. Liu, J. Gao, and W. Chen. Deberta: Decoding-enhanced bert with disentangled attention. arXiv preprint arXiv:2006.03654, 2020.
[22] X. He, X. Bresson, T. Laurent, and B. Hooi. Explanations as features: Llm-based features for text-attributed graphs. arXiv preprint arXiv:2305.19523, 2023.
[23] W. Hu, M. Fey, M. Zitnik, Y. Dong, H. Ren, B. Liu, M. Catasta, and J. Leskovec. Open graph benchmark: Datasets for machine learning on graphs. Advances in Neural Information Processing Systems, 33:22118-22133, 2020.
[24] Z. Hu, Y. Dong, K. Wang, K.-W. Chang, and Y. Sun. Gpt-gnn: Generative pre-training of graph neural networks. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2020.
[25] J. Huang, X. Zhang, Q. Mei, and J. Ma. Can llms effectively leverage graph structural information: When and why. arXiv preprint arXiv:2309.16595, 2023.
[26] Y. Ji, Y. Gong, Y. Peng, C. Ni, P. Sun, D. Pan, B. Ma, and X. Li. Exploring chatgpt's ability to rank content: A preliminary study on consistency with human preferences. ArXiv, abs/2303.07610, 2023.
[27] T. N. Kipf and M. Welling. Semi-supervised classification with graph convolutional networks. In International Conference on Learning Representations, 2017.
[28] G. Li, M. Müller, B. Ghanem, and V. Koltun. Training graph neural networks with 1000 layers. In International Conference on Machine Learning, pages 6437-6449. PMLR, 2021.
[29] H. Li, X. Wang, Z. Zhang, and W. Zhu. Out-of-distribution generalization on graphs: A survey. arXiv preprint arXiv:2202.07987, 2022.
[30] J. Li, Y. Liu, W. Fan, X. Wei, H. Liu, J. Tang, and Q. Li. Empowering molecule discovery for molecule-caption translation with large language models: A chatgpt perspective. ArXiv, abs/2306.06615, 2023.
[31] Q. Li, X. Li, L. Chen, and D. Wu. Distilling knowledge on text graph for social media attribute inference. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '22, pages 2024-2028, New York, NY, USA, 2022. Association for Computing Machinery.
[32] Y. Li, J. Yin, and L. Chen. Informative pseudo-labeling for graph neural networks with few labels. Data Mining and Knowledge Discovery, 37(1):228-254, 2023.
[33] H. Liu, J. Feng, L. Kong, N. Liang, D. Tao, Y. Chen, and M. Zhang. One for all: Towards training one graph model for all classification tasks. arXiv preprint arXiv:2310.00149, 2023.
[34] H. Liu, B. Hu, X. Wang, C. Shi, Z. Zhang, and J. Zhou. Confidence may cheat: Self-training on graph neural networks under distribution shift. In Proceedings of the ACM Web Conference 2022, WWW '22, pages 1248-1258, New York, NY, USA, 2022. Association for Computing Machinery.
[35] J. Liu, C. Liu, R. Lv, K. Zhou, and Y. Zhang. Is chatgpt a good recommender? a preliminary study. arXiv preprint arXiv:2304.10149, 2023.
[36] W. Liu, P. Zhou, Z. Zhao, Z. Wang, Q. Ju, H. Deng, and P. Wang. K-bert: Enabling language representation with knowledge graph. In AAAI Conference on Artificial Intelligence, 2019.
[37] Z. Liu, C. Xiong, M. Sun, and Z. Liu. Fine-grained fact verification with kernel graph attention network. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7342-7351, Online, July 2020. Association for Computational Linguistics.
[38] Y. Ma and J. Tang. Deep Learning on Graphs. Cambridge University Press, 2021.
[39] H. Mao, Z. Chen, W. Jin, H. Han, Y. Ma, T. Zhao, N. Shah, and J. Tang. Demystifying structural disparity in graph neural networks: Can one size fit all? arXiv preprint arXiv:2306.01323, 2023.
[40] A. McCallum, K. Nigam, J. D. M. Rennie, and K. Seymore. Automating the construction of internet portals with machine learning. Information Retrieval, 3:127-163, 2000.
[41] A. Miaschi and F. Dell'Orletta. Contextual and non-contextual word embeddings: an in-depth linguistic investigation. In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 110-119, Online, July 2020. Association for Computational Linguistics.
[42] T. Mikolov, K. Chen, G. Corrado, and J. Dean. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781, 2013.
[43] N. Muennighoff, N. Tazi, L. Magne, and N. Reimers. MTEB: Massive text embedding benchmark. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 2014-2037, Dubrovnik, Croatia, May 2023. Association for Computational Linguistics.
[44] A. Neelakantan, T. Xu, R. Puri, A. Radford, J. M. Han, J. Tworek, Q. Yuan, N. A. Tezak, J. W. Kim, C. Hallacy, J. Heidecke, P. Shyam, B. Power, T. E. Nekoul, G. Sastry, G. Krueger, D. P. Schnurr, F. P. Such, K. S.-K. Hsu, M. Thompson, T. Khan, T. Sherbakov, J. Jang, P. Welinder, and L. Weng. Text and code embeddings by contrastive pre-training. ArXiv, abs/2201.10005, 2022.
[45] OpenAI. Introducing chatgpt, 2022.
[46] OpenAI. Gpt-4 technical report. ArXiv, abs/2303.08774, 2023.
[47] Z. Pang, Z. Xie, Y. Man, and Y.-X. Wang. Frozen transformers in language models are effective visual encoder layers. arXiv preprint arXiv:2310.12973, 2023.
[48] F. Petroni, T. Rocktäschel, P. Lewis, A. Bakhtin, Y. Wu, A. H. Miller, and S. Riedel. Language models as knowledge bases? ArXiv, abs/1909.01066, 2019.
[49] S. Purchase, A. Zhao, and R. D. Mullins. Revisiting embeddings for graph neural networks. ArXiv, abs/2209.09338, 2022.
[50] Y. Qin, X. Wang, Z. Zhang, and W. Zhu. Disentangled representation learning with large language models for text-attributed graphs. arXiv preprint arXiv:2310.18152, 2023.
[51] X. Qiu, T. Sun, Y. Xu, Y. Shao, N. Dai, and X. Huang. Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63:1872-1897, 2020.
[52] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners. 2019.
[53] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu. Exploring the limits of transfer learning with a unified text-to-text transformer. Journal of Machine Learning Research, 21(140):1-67, 2020.
[54] N. Reimers and I. Gurevych. Sentence-BERT: Sentence embeddings using Siamese BERT-networks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 3982-3992, Hong Kong, China, Nov. 2019. Association for Computational Linguistics.
[55] M. Roughan and S. J. Tuke. Unravelling graph-exchange file formats. ArXiv, abs/1503.02781, 2015.
[56] T. Schick, J. Dwivedi-Yu, R. Dessì, R. Raileanu, M. Lomeli, L. Zettlemoyer, N. Cancedda, and T. Scialom. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023.
[57] P. Sen, G. Namata, M. Bilgic, L. Getoor, B. Galligher, and T. Eliassi-Rad. Collective classification in network data. AI Magazine, 29(3):93, Sep. 2008.
[58] C. Sun, H. Gu, and J. Hu. Scalable and adaptive graph neural networks with self-label-enhanced training. arXiv preprint arXiv:2104.09376, 2021.
[59] T. Sun, Y. Shao, H. Qian, X. Huang, and X. Qiu. Black-box tuning for language-model-as-a-service. In International Conference on Machine Learning, pages 20841-20855. PMLR, 2022.
[60] X. Sun, X. Li, J. Li, F. Wu, S. Guo, T. Zhang, and G. Wang. Text classification via large language models. ArXiv, abs/2305.08377, 2023.
[61] Y. Sun, S. Wang, Y. Li, S. Feng, X. Chen, H. Zhang, X. Tian, D. Zhu, H. Tian, and H. Wu. Ernie: Enhanced representation through knowledge integration. ArXiv, abs/1904.09223, 2019.
[62] J. Tang, Y. Yang, W. Wei, L. Shi, L. Su, S. Cheng, D. Yin, and C. Huang. Graphgpt: Graph instruction tuning for large language models. arXiv preprint arXiv:2310.13023, 2023.
[63] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[64] P. Veličković, G. Cucurull, A. Casanova, A. Romero, P. Liò, and Y. Bengio. Graph attention networks. In International Conference on Learning Representations, 2018.
[65] H. Wang, S. Feng, T. He, Z. Tan, X. Han, and Y. Tsvetkov. Can language models solve graph problems in natural language? arXiv preprint arXiv:2305.10037, 2023.
[66] H. Wang, Y. Gao, X. Zheng, P. Zhang, H. Chen, and J. Bu. Graph neural architecture search with gpt-4. arXiv preprint arXiv:2310.01436, 2023.
[67] J. Wang, X. Hu, W. Hou, H. Chen, R. Zheng, Y. Wang, L. Yang, H. Huang, W. Ye, X. Geng, et al. On the robustness of chatgpt: An adversarial and out-of-distribution perspective. arXiv preprint arXiv:2302.12095, 2023.
[68] L. Wang, N. Yang, X. Huang, B. Jiao, L. Yang, D. Jiang, R. Majumder, and F. Wei. Text embeddings by weakly-supervised contrastive pre-training. arXiv preprint arXiv:2212.03533, 2022.
[69] M. Wang, L. Yu, D. Zheng, Q. Gan, Y. Gai, Z. Ye, M. Li, J. Zhou, Q. Huang, C. Ma, Z. Huang, Q. Guo, H. Zhang, H. Lin, J. J. Zhao, J. Li, A. Smola, and Z. Zhang. Deep graph library: Towards efficient and scalable deep learning on graphs. ArXiv, abs/1909.01315, 2019.
[70] J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022.
[71] T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, T. Rault, R. Louf, M. Funtowicz, and J. Brew. Huggingface's transformers: State-of-the-art natural language processing. ArXiv, abs/1910.03771, 2019.
[72] Y. Wu, Y. Xu, A. Singh, Y. Yang, and A. W. Dubrawski. Active learning for graph neural networks via node feature propagation. ArXiv, abs/1910.07567, 2019.
[73] F. Xia, K. Sun, S. Yu, A. Aziz, L. Wan, S. Pan, and H. Liu. Graph learning: A survey. IEEE Transactions on Artificial Intelligence, 2:109-127, 2021.
[74] J. Yang, Z. Liu, S. Xiao, C. Li, D. Lian, S. Agrawal, A. S, G. Sun, and X. Xie. Graphformers: GNN-nested transformers for representation learning on textual graph. In A. Beygelzimer, Y. Dauphin, P. Liang, and J. W. Vaughan, editors, Advances in Neural Information Processing Systems, 2021.
[75] Z. Yang, W. W. Cohen, and R. Salakhutdinov. Revisiting semi-supervised learning with graph embeddings. ArXiv, abs/1603.08861, 2016.
[76] L. Yao, C. Mao, and Y. Luo. Graph convolutional networks for text classification. ArXiv, abs/1809.05679, 2018.
[77] M. Yasunaga, A. Bosselut, H. Ren, X. Zhang, C. D. Manning, P. Liang, and J. Leskovec. Deep bidirectional language-knowledge graph pretraining. In Neural Information Processing Systems (NeurIPS), 2022.
[78] M. Yasunaga, J. Leskovec, and P. Liang. Linkbert: Pretraining language models with document links. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8003-8016, 2022.
[79] R. Ye, C. Zhang, R. Wang, S. Xu, and Y. Zhang. Natural language is all a graph needs. arXiv preprint arXiv:2308.07134, 2023.
[80] J. Zhang. Graph-toolformer: To empower llms with graph reasoning ability via prompt augmented by chatgpt. arXiv preprint arXiv:2304.11116, 2023.
[81] Z. Zhang, X. Wang, Z. Zhang, H. Li, Y. Qin, S. Wu, and W. Zhu. Llm4dyg: Can large language models solve problems on dynamic graphs? arXiv preprint arXiv:2310.17110, 2023.
[82] Z. Zhang, A. Zhang, M. Li, and A. J. Smola. Automatic chain of thought prompting in large language models. ArXiv, abs/2210.03493, 2022.
[83] J. Zhao, M. Qu, C. Li, H. Yan, Q. Liu, R. Li, X. Xie, and J. Tang. Learning on large-scale text-attributed graphs via variational inference. In The Eleventh International Conference on Learning Representations, 2023.
[84] J. Zhao, L. Zhuo, Y. Shen, M. Qu, K. Liu, M. Bronstein, Z. Zhu, and J. Tang. Graphtext: Graph reasoning in text space. arXiv preprint arXiv:2310.01089, 2023.
[85] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong, Y. Du, C. Yang, Y. Chen, Z. Chen, J. Jiang, R. Ren, Y. Li, X. Tang, Z. Liu, P. Liu, J. Nie, and J.-R. Wen. A survey of large language models. ArXiv, abs/2303.18223, 2023.
[86] J. Zhu, Y. Cui, Y. Liu, H. Sun, X. Li, M. Pelger, L. Zhang, T. Yan, R. Zhang, and H. Zhao. Textgnn: Improving text encoder via graph neural network in sponsored search. In Proceedings of the Web Conference 2021, 2021.
[87] J. Zhu, Y. Yan, L. Zhao, M. Heimann, L. Akoglu, and D. Koutra. Beyond homophily in graph neural networks: Current limitations and effective designs. In H. Larochelle, M. Ranzato, R. Hadsell, M. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 7793-7804. Curran Associates, Inc., 2020.

APPENDIX

A. DATASETS
In this work, we mainly use the following five real-world graph datasets. Their statistics are shown in Table 19.

Table 19: Statistics of the graph datasets.
Dataset             #Nodes      #Edges       Task                Metric
Cora [40]           2,708       5,429        7-class classif.    Accuracy
Citeseer* [15]      3,186       4,277        6-class classif.    Accuracy
Pubmed [57]         19,717      44,338       3-class classif.    Accuracy
Ogbn-arxiv [23]     169,343     1,166,243    40-class classif.   Accuracy
Ogbn-products [23]  2,449,029   61,859,140   47-class classif.   Accuracy

A.1 Dataset Description
In this part, we give a brief introduction to each graph dataset. It should be noted that it is cumbersome to get the raw text attributes for some datasets, and we elaborate on this below. The structural information and label information of these datasets can be obtained from PyG 6. We will also release the pre-processed versions of these datasets to assist future related studies.

Cora [40] Cora is a paper citation dataset with the following seven categories: ['Rule Learning', 'Neural Networks', 'Case Based', 'Genetic Algorithms', 'Theory', 'Reinforcement Learning', 'Probabilistic Methods']. The raw text attributes can be obtained from https://people.cs.umass.edu/~mccallum/data.html.

Citeseer [15] Citeseer is a paper citation dataset with the following six categories: ["Agents", "ML", "IR", "DB", "HCI", "AI"]. Note that we find that the TAG version only contains the text attributes for 3,186 nodes. As a result, we take the graph consisting of these 3,186 nodes with 4,277 edges.

Pubmed [57] Pubmed is a paper citation dataset consisting of scientific journals collected from the PubMed database with the following three categories: ['Diabetes Mellitus, Experimental', 'Diabetes Mellitus Type 1', 'Diabetes Mellitus Type 2'].

Ogbn-arxiv and Ogbn-products [23] These datasets are selected from the popular OGB benchmark [23], and descriptions for these datasets can be found at https://ogb.stanford.edu/docs/nodeprop.

6 https://pytorch-geometric.readthedocs.io/en/latest/modules/data.html
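The structural information can be loaded directly with PyG [13] and the ogb package; the snippet below is a minimal sketch under that assumption. It only retrieves the processed graphs and labels: the raw text attributes discussed above still have to be obtained from the sources listed for each dataset.

from torch_geometric.datasets import Planetoid
from ogb.nodeproppred import PygNodePropPredDataset

# Citation graphs (structure + labels only; no raw text attributes).
cora = Planetoid(root="data/planetoid", name="Cora")[0]
pubmed = Planetoid(root="data/planetoid", name="PubMed")[0]

# Large-scale graph from the OGB benchmark [23].
arxiv = PygNodePropPredDataset(name="ogbn-arxiv", root="data/ogb")[0]

print(cora)    # Data(x=..., edge_index=..., y=...)
print(pubmed)
print(arxiv)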
B. EXPERIMENT SETUPS
B.1 Computing Environment
We implement all the baseline models with PyG [13], DGL [69], and transformers [71] modules. The experiments were conducted on a GPU server with eight NVIDIA RTX A5000 GPUs, each with 24GB VRAM.

B.2 Hyperparameters
For RevGAT, GraphSAGE, and SAGN models, we directly adopt the best hyperparameters from the OGB leaderboard 7. For Deberta-base on Cora and Pubmed, we follow the hyperparameter setting of TAPE [22]. In terms of GLEM, for the LM part, we follow the hyperparameter setting in their repository 8. For GCN, GAT, and MLP, we use the following hyperparameter search range (a sketch of the corresponding grid search is given after this list):
(a) Hidden dimension: {8, 16, 32, 64, 128, 256}
(b) Number of layers: {1, 2, 3}
(c) Normalization: {None, BatchNorm}
(d) Learning rate: {1e-2, 5e-2, 5e-3, 1e-3}
(e) Weight decay: {1e-5, 5e-5, 5e-4, 0}
(f) Dropout: {0., 0.1, 0.5, 0.8}
(g) Number of heads for GAT: {1, 4, 8}

7 https://github.com/snap-stanford/ogb
8 https://github.com/AndyJZhao/GLEM
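The sketch below (not the authors' code) enumerates the search ranges (a)-(g) above; train_and_eval is a placeholder for the actual training loop, which the appendix does not show.

from itertools import product

search_space = {
    "hidden_dim": [8, 16, 32, 64, 128, 256],
    "num_layers": [1, 2, 3],
    "norm": [None, "BatchNorm"],
    "lr": [1e-2, 5e-2, 5e-3, 1e-3],
    "weight_decay": [1e-5, 5e-5, 5e-4, 0],
    "dropout": [0.0, 0.1, 0.5, 0.8],
    "gat_heads": [1, 4, 8],  # only relevant when the predictor is GAT
}

def train_and_eval(cfg):
    """Placeholder: train the predictor with `cfg` and return validation accuracy."""
    return 0.0

keys = list(search_space)
configs = [dict(zip(keys, values)) for values in product(*search_space.values())]
best = max(configs, key=train_and_eval)
print(len(configs), "configurations searched")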
C. DEMONSTRATIONS OF TAPE
Examples for Pubmed After analyzing the Pubmed dataset, we find an interesting phenomenon: sometimes the label of a paper appears verbatim in its raw text attributes. An example is shown in Table 20. This property of Pubmed may be related to the superior zero-shot performance of LLMs on this dataset. It can also potentially explain why GCN and GAT are outperformed by MLP in the high labeling ratio setting. When the link between node attributes and node labels can be easily found and is adequate to determine the categories, incorporating neighbors coming from other categories will introduce noise.

Table 20: An illustrative example for Pubmed
Title: Predictive power of sequential measures of albuminuria for progression to ESRD or death in Pima Indians with type 2 diabetes.
... (content omitted here)
Ground truth label: Diabetes Mellitus Type 2
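A quick way to quantify this property is to count how often the ground-truth label string occurs verbatim in a node's raw text attributes. The snippet below is a small sketch of that check; pubmed_records is a placeholder for the actual (text, label) pairs of the TAG version of Pubmed.

# Placeholder data: in practice, load the (raw text, label) pairs of Pubmed.
pubmed_records = [
    ("Predictive power of sequential measures of albuminuria for progression "
     "to ESRD or death in Pima Indians with type 2 diabetes.",
     "Diabetes Mellitus Type 2"),
]

def label_leakage_rate(records):
    """Fraction of nodes whose label string appears verbatim in their text."""
    hits = sum(1 for text, label in records if label.lower() in text.lower())
    return hits / len(records)

print(f"label appears in text for {label_leakage_rate(pubmed_records):.1%} of nodes")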
SELF-DISCOVER: Large Language Models Self-Compose Reasoning Structures

Pei Zhou 1  Jay Pujara 1  Xiang Ren 1  Xinyun Chen 2  Heng-Tze Cheng 2
Quoc V. Le 2  Ed H. Chi 2  Denny Zhou 2  Swaroop Mishra 2  Huaixiu Steven Zheng 2

1 University of Southern California  2 Google DeepMind. Correspondence to: Pei Zhou <peiz@usc.edu>, Swaroop Mishra <swaroopmishra@google.com>, Huaixiu Steven Zheng <stevenzheng@google.com>. Preprint.
Abstract
We introduce SELF-DISCOVER, a general framework for LLMs to self-discover the task-intrinsic reasoning structures to tackle complex reasoning problems that are challenging for typical prompting methods. Core to the framework is a self-discovery process where LLMs select multiple atomic reasoning modules such as critical thinking and step-by-step thinking, and compose them into an explicit reasoning structure for LLMs to follow during decoding. SELF-DISCOVER substantially improves GPT-4 and PaLM 2's performance on challenging reasoning benchmarks such as BigBench-Hard, grounded agent reasoning, and MATH, by as much as 32% compared to Chain of Thought (CoT). Furthermore, SELF-DISCOVER outperforms inference-intensive methods such as CoT-Self-Consistency by more than 20%, while requiring 10-40x fewer inference compute. Finally, we show that the self-discovered reasoning structures are universally applicable across model families: from PaLM 2-L to GPT-4, and from GPT-4 to Llama2, and share commonalities with human reasoning patterns.

1. Introduction
Large Language Models (LLM) (Brown et al., 2020; Chowdhery et al., 2022; OpenAI, 2023b; Anil et al., 2023) powered by transformers (Vaswani et al., 2017) have produced impressive breakthroughs in generating coherent texts (OpenAI, 2022), and following instructions (Zhong et al., 2021; Mishra et al., 2022c; Wei et al., 2021; Chung et al., 2022; Ouyang et al., 2022). In pursuit of the goal to enhance LLMs' capability to reason and solve complex problems, various prompting methods have been proposed, drawing inspirations from cognitive theories of how humans reason. For example, few-shot and zero-shot chain-of-thought (CoT) (Nye et al., 2021; Wei et al., 2022; Kojima et al., 2022; Yasunaga et al., 2023) resembles how humans solve problems step-by-step, decomposition-based prompting (Zhou et al., 2022a; Drozdov et al., 2022; Patel et al., 2022; Hao et al., 2023; Khot et al., 2022) is inspired by how humans break down a complex problem into a series of smaller subproblems, and then solve those subproblems one by one (Polya, 2004), and step-back prompting (Zheng et al., 2023) is motivated by how humans reflect on task nature to derive general principles. However, a fundamental limitation is that each technique itself serves as an atomic reasoning module making an implicit prior assumption of the process on how to tackle a given task. Instead, we argue that each task has a unique intrinsic structure underlying the reasoning process involved in solving it efficiently. For instance, least-to-most prompting (Zhou et al., 2022a; Drozdov et al., 2022) has shown to be much more effective than CoT (Wei et al., 2022) at solving tasks such as symbolic manipulation and compositional generalization, due to the decomposition structure of the tasks.

This paper aims at self-discovering the underlying reasoning structure unique to each task, while being highly efficient in terms of computation. Our approach, SELF-DISCOVER, is inspired by how humans internally devise a reasoning program for problem-solving (Newell et al., 1958; Rasmussen, 1983), as illustrated in Figure 2. From a set of atomic reasoning modules described in natural language such as "break down into subtasks" and "critical thinking", an LLM, and task examples without labels, SELF-DISCOVER composes a coherent reasoning structure intrinsic to the task (Stage 1) and then solves instances of the task using the discovered structure (Stage 2). Stage 1 operates at the task-level and uses three actions to guide the LLM to generate a reasoning structure for the task. At Stage 2, during the final decoding, the LLM simply follows the self-discovered structure to arrive at the final answer.

Solving problems using SELF-DISCOVER brings several benefits compared to other methods for LLM reasoning. First, the discovered reasoning structure is grounded in atomic reasoning modules benefiting from the strengths of multiple reasoning modules in contrast to applying a priori module such as CoT. Second, SELF-DISCOVER is efficient
Figure 1. SELF-DISCOVER guides LLMs to self-discover and compose atomic reasoning modules into a reasoning structure to solve challenging tasks. Through testing on challenging reasoning benchmarks including BigBench-Hard (BBH), agent reasoning (T4D), and MATH, we find that SELF-DISCOVER outperforms Direct Answering on 23/25 and CoT on 21/25 tasks in zero-shot setting using PaLM 2-L. Full BBH results are in Appendix C Table 3.

in computation as it only requires 3 more inference steps on the task-level, while being more performant than inference-heavy ensemble approaches such as self-consistency (Wang et al., 2022). Lastly, the discovered reasoning structure is intrinsic to the task, and conveys LLMs' insights about the task in a more interpretable way than the optimized prompts (Zhou et al., 2022b; Yang et al., 2023).

We test SELF-DISCOVER on 25 challenging reasoning tasks including Big Bench-Hard (BBH) (Suzgun et al., 2022), Thinking for Doing (T4D) (Zhou et al., 2023) and MATH (Hendrycks et al., 2021). SELF-DISCOVER outperforms CoT on 21/25 tasks with performance gains up to 42% (Figure 1), highlighting the advantage of the self-discovered reasoning structure composed from the atomic reasoning modules against a single a priori CoT module. Furthermore, we demonstrate that SELF-DISCOVER achieves superior performance against inference-heavy methods such as CoT + Self-Consistency and majority voting of every module while requiring 10-40x fewer inference compute (Figure 5). Finally, we compare SELF-DISCOVER with prompts optimized (OPRO) using a training set (Yang et al., 2023) (Figure 9). We find that SELF-DISCOVER still performs on par or better than OPRO while the self-discovered reasoning structures are much more interpretable.

We conduct a set of analyses to understand the effectiveness of SELF-DISCOVER. By breaking down BBH tasks into 4 different categories, we find that SELF-DISCOVER performs best on tasks requiring world knowledge and has a moderate performance boost on algorithmic tasks compared to CoT (Figure 4). This is further confirmed by the error analysis on MATH, where 74.7% of model failures come from computation errors (e.g. math). We also take a closer look at the self-discovered reasoning structures, and show their universality by a transferability study from PaLM 2-L to GPT-4, and from GPT-4 to Llama-2-70B. We hope to encourage more future work on structured reasoning for solving challenging problems using LLMs.

2. Self-Discovering Reasoning Structures for Problem-Solving
We take inspiration from how humans use prior knowledge and skills to devise a reasoning program to solve problems (Newell et al., 1958; Rasmussen, 1983). When we face a new problem, we often first search internally what knowledge and skills from our prior experience might be helpful to solve it. Then we will attempt to apply relevant knowledge and skills to this task. And finally we will connect multiple individual skills and knowledge to solve the problem. We design SELF-DISCOVER to enact these steps into two stages as illustrated in Figure 2.

Given a task and a set of reasoning module descriptions representing high-level problem-solving heuristics such as "Use critical thinking" and "Let's think step by step", Stage 1 of SELF-DISCOVER aims to uncover the intrinsic reasoning structure for solving this task via meta-reasoning. Specifically, we use three meta-prompts to guide LLMs to select, adapt, and implement an actionable reasoning structure with no labels or training required. We format the structure in key-value pairs similar to JSON due to interpretability and findings that following JSON boosts reasoning and generation quality (Zhou et al., 2023; OpenAI, 2023a).
Figure 2. Illustration of using SELF-DISCOVER for problem-solving. Given a generative LM, task, and seed reasoning module descriptions, we guide LMs to generate a reasoning structure in key-value format to solve the task. Finally, models can follow the self-discovered structures to solve every instance from the task by filling in the values in JSON step-by-step.

The structure of the meta-prompts and the full prompts are shown in the Appendix. Stage 1 operates on the task level, meaning we only need to run SELF-DISCOVER once for each task. Then, in Stage 2, we can simply use the discovered reasoning structure to solve every instance of the given task by instructing models to follow the provided structure by filling each key and arrive at a final answer.

2.1. Stage 1: Self-Discover Task-Specific Structures
The first stage consists of three actions: 1) SELECT, where relevant reasoning modules for task-solving are chosen from the set of reasoning module descriptions; 2) ADAPT, where descriptions of selected reasoning modules are rephrased to be more specific to the task at hand; and 3) IMPLEMENT, where the adapted reasoning descriptions are implemented into a structured actionable plan so that the task can be solved by following the structure.

SELECT First, not every reasoning module is helpful for every task, so the first stage of SELF-DISCOVER guides the model to select modules that are useful based on task examples. For example, "reflective thinking" might help search for first-principle theories on science problems, while "creative thinking" helps on generating a novel continuation to a story. Given the raw set of reasoning module descriptions D such as "critical thinking" and "break the problem into sub-problems" (full set in Appendix A), and a few task examples without labels t_i ∈ T, SELF-DISCOVER first selects a subset of reasoning modules D_S that are useful for solving the tasks by using a model M and a meta-prompt p_S:

D_S = M(p_S ∥ D ∥ t_i).    (1)

ADAPT Since each reasoning module provides a general description of how to solve problems, the next step of SELF-DISCOVER aims at tailoring each selected module to the task at hand. For example, from "break the problem into sub-problems" to "calculate each arithmetic operation in order" for arithmetic problems. Given the selected reasoning module subset D_S from the previous step, ADAPT rephrases each of the selected modules to be more specific to the task. Similarly to SELECT, this stage uses a meta-prompt p_A and a generative model M to generate the adapted reasoning module descriptions D_A:

D_A = M(p_A ∥ D_S ∥ t_i).    (2)

IMPLEMENT Finally, given the adapted reasoning module descriptions D_A, SELF-DISCOVER operationalizes the reasoning modules into an implemented reasoning structure D_I with specified instructions on what to generate for each step. In addition to a meta-prompt p_I, IMPLEMENT also provides a demonstration of a human-written reasoning structure S_human on another task to better convert the natural language descriptions into a reasoning structure:

D_I = M(p_I ∥ S_human ∥ D_A ∥ t_i).    (3)

2.2. Stage 2: Tackle Tasks Using Discovered Structures
After the three actions, we have an implemented reasoning structure D_I uniquely adapted for the task we need to solve T. Then we can simply append the reasoning structure to all instances of the task and prompt models to follow the reasoning structure to generate an answer A:

A = M(D_I ∥ t), ∀t ∈ T.    (4)

More details of the prompts are included in Appendix A.
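The two stages in Eqs. (1)-(4) amount to straightforward prompt composition around a generic LLM call. In the sketch below, llm, the three meta-prompt strings, and the human-written demonstration are placeholders (the actual meta-prompts are given in the paper's Appendix A), so this is a minimal illustration rather than the authors' implementation.

from typing import Callable, List

def self_discover_structure(llm: Callable[[str], str],
                            seed_modules: List[str],
                            task_examples: List[str],
                            p_select: str, p_adapt: str, p_implement: str,
                            s_human: str) -> str:
    """Stage 1 (task-level, run once per task): SELECT, ADAPT, IMPLEMENT."""
    t_i = "\n".join(task_examples)
    d = "\n".join(seed_modules)
    d_s = llm(f"{p_select}\n{d}\n{t_i}")                   # Eq. (1)
    d_a = llm(f"{p_adapt}\n{d_s}\n{t_i}")                  # Eq. (2)
    d_i = llm(f"{p_implement}\n{s_human}\n{d_a}\n{t_i}")   # Eq. (3)
    return d_i  # key-value (JSON-like) reasoning structure

def solve_with_structure(llm: Callable[[str], str], d_i: str, instance: str) -> str:
    """Stage 2 (per instance): follow the discovered structure, Eq. (4)."""
    return llm(f"Follow this reasoning structure step by step, "
               f"filling in each value:\n{d_i}\n\nTask instance:\n{instance}")

# Tiny demo with a stub LLM that just reports the prompt length.
structure = self_discover_structure(lambda p: f"<output for {len(p)}-char prompt>",
                                    ["critical thinking", "break into sub-problems"],
                                    ["example task instance"],
                                    "SELECT:", "ADAPT:", "IMPLEMENT:", "{...}")
print(solve_with_structure(lambda p: "answer", structure, "new instance"))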
Figure 3. Illustration of three actions of SELF-DISCOVER. We use LMs to compose a coherent reasoning structure by selecting relevant modules, adapting them to task-specific descriptions, and implementing a reasoning structure in JSON.

3. Experiment Setup
3.1. Tasks
We focus on diverse reasoning benchmarks that are still challenging for LLMs: BIG-Bench Hard (BBH) (Suzgun et al., 2022) contains 23 carefully-selected challenging tasks from BIG-Bench (Srivastava et al., 2023). BBH tasks cover a diverse range of reasoning problems spanning the following 4 categories according to their authors: 1) Algorithmic and Multi-Step Arithmetic Reasoning, 2) Natural Language Understanding, 3) Use of World Knowledge, and 4) Multilingual Knowledge and Reasoning. We also test on a grounded social agent reasoning task called Thinking for Doing (T4D) where models must leverage mental state reasoning to determine actions to perform (Zhou et al., 2023), where GPT-4 with CoT only reaches around 50%. Finally, we subsample 200 examples from the MATH (Hendrycks et al., 2021) test set, and generate instance-level reasoning structures via a one-shot demonstration to adapt to the complexity of MATH tasks. For evaluations, we use accuracy to measure the model performance on BBH, T4D and MATH (details can be found in Appendix B).

3.2. Models
We use several state-of-the-art LLMs: GPT-4 (gpt-4-turbo-preview) (OpenAI, 2023b), GPT-3.5-turbo (ChatGPT) (OpenAI, 2022) 1, instruction-tuned PaLM 2-L (Anil et al., 2023) 2, and an open-source LLM Llama2-70B (Touvron et al., 2023).

1 accessed in October-December 2023
2 For MATH, we use a PaLM 2-L model with a stronger instruction tuning to enable better instruction following of more complex reasoning structures.

3.3. Baselines
We compare SELF-DISCOVER with other zero-shot prompting methods for LLM reasoning:
• Direct Prompting, where the model directly generates the answer without intermediate reasoning steps.
• CoT (Wei et al., 2022; Kojima et al., 2022), where models are prompted to generate a reasoning process leading to the final answer.
• Plan-and-Solve (Wang et al., 2023), where models are prompted to first generate a plan and then solve the problem. SELF-DISCOVER differs by grounding the reasoning structure in atomic reasoning modules, and prompting the decoding to follow the explicit key-value reasoning structure.

Next, we also consider other baselines that make use of the raw seed reasoning modules (RM) we pass to SELF-DISCOVER. We compare with the following methods' performance and inference-call efficiency on a subset of tasks.
• CoT-Self-Consistency (Wang et al., 2022): we sample multiple outputs from the LLM with CoT and aggregate answers to get the final answer. We compare this method on a subset of tasks due to the cost of repetitive queries.
• Majority voting of each RM: we prompt models to solve the tasks by appending each RM and use majority voting of all answers to get the final answer. We examine whether integrating multiple RMs into a coherent reasoning structure is advantageous over applying each RM to solve the task and using majority voting to ensemble them post-hoc, which costs much more inference computation.
• Best of each RM: this method assumes that we have access to oracle labels and uses the highest accuracy from
applying each RM. We compare with this to examine whether SELF-DISCOVER competes with methods that depend on perfect prior knowledge of which RM to use on a new task.

Furthermore, for analysis on the universality of reasoning structures, we compare with a prompt-optimization method that requires a training set to improve prompts: LLMs as optimizers (OPRO) (Yang et al., 2023). We aim to show that when we apply structures or prompts optimized from one model, the reasoning structures can retain more performance gains than the wordings of prompts.

4. Results
We answer the following questions through experimental results: 1) Does discovering reasoning structures improve LLM reasoning capabilities? (4.1) 2) Which categories of problems does SELF-DISCOVER perform the best on? (4.2) and 3) Can SELF-DISCOVER boost LLM performance efficiently? (4.3) Finally, we will show qualitative examples of self-discovered structures, LLM output following the structures, and compare with LLM output following other prompting methods for reasoning (4.4).

4.1. Does SELF-DISCOVER Improve LLM Reasoning?
Overall, SELF-DISCOVER improves PaLM 2-L and GPT-4's reasoning across a diverse set of reasoning tasks. Table 1 shows the overall results on complex reasoning tasks of BBH, T4D and MATH using PaLM 2-L and GPT-4. We compare Self-Discover with baselines including direct prompting, CoT, and Plan-and-Solve (PS).

Table 1. Self-Discover significantly improves LLM reasoning across a diverse set of 25 complex tasks: BBH, T4D and MATH. CoT: zero-shot Chain of Thought (Kojima et al., 2022). PS: plan-and-solve prompting (Wang et al., 2023).
Method                      BBH    T4D    MATH
PaLM 2-L                    56%    30%    45%
PaLM 2-L + CoT              60%    40%    42%
PaLM 2-L + PS               61%    42%    49%
PaLM 2-L + Self-Discover    67%    69%    50.5%
GPT-4                       58%    51%    70.5%
GPT-4 + CoT                 75%    52%    71%
GPT-4 + PS                  73%    53%    70%
GPT-4 + Self-Discover       81%    85%    73%

On the aggregated 23 tasks of BBH, SELF-DISCOVER achieves 7% and 6% absolute improvement on PaLM 2-L over Chain-of-Thought and Plan-and-Solve, respectively. Similar gains (6% and 8%) are observed when SELF-DISCOVER is applied to GPT-4. Breakdown results of each task's improvement over direct answering and CoT of PaLM 2-L are shown in Figure 1, where we find SELF-DISCOVER outperforms them on over 20/24 tasks. For a per-task performance for all 23 BBH tasks, please refer to Appendix C.

On the grounded social agent task T4D, SELF-DISCOVER reaches over ≥27% (32%) absolute improvement over all baselines on PaLM 2-L (GPT-4). SELF-DISCOVER achieves 69% and 85% accuracy on PaLM 2-L and GPT-4, significantly outperforming previous SoTA prompting methods such as Foresee and Reflect (FaR), which employs an expert-designed reasoning structure. In contrast, SELF-DISCOVER generates the reasoning structure automatically from a set of atomic reasoning modules without human interventions.

For MATH, we observe a moderate gain of 1%-7% (2%-3%) on PaLM 2-L (GPT-4) from SELF-DISCOVER compared to the baselines. Upon error analysis (see Appendix D for details), we find that the reasoning structures generated by PaLM 2-L from SELF-DISCOVER are correct 87.5% of the time: human experts can follow the reasoning structures to solve the tasks perfectly. The majority of the failures (74.7%) comes from errors in executing the computations, consistent with prior findings (Zheng et al., 2023).

4.2. Which Types of Problems Does SELF-DISCOVER Help the Most?
Figure 4. Breakdown of SELF-DISCOVER performance improvement on 4 categories on PaLM 2-L. SELF-DISCOVER performs the best on tasks requiring world knowledge.

SELF-DISCOVER performs best on tasks that require diverse world knowledge. Figure 4 presents the average improvement in terms of delta in accuracy of SELF-DISCOVER over direct answer and CoT on the 4 categories of reasoning tasks we test. We adopt the categorization from Suzgun et al. (2022). We find that SELF-DISCOVER improves over these two baselines on all categories, but especially on tasks that require world knowledge such as sports understanding, movie recommendation, and ruin names.

These tasks demand models to reason using fact and general
Figure 5. Comparison of accuracy with number of inference calls required per instance. For CoT-Self-Consistency, we sample 10 times. Best of each RM method requires gold labels (*). SELF-DISCOVER requires only 1 inference call per instance (plus 3 more meta-prompts on the task-level), same as Direct and CoT, while reaching better performance compared with methods requiring 40x more calls (majority voting of each RM) on GPT-4. We acknowledge that SELF-DISCOVER input and output are longer than CoT and Direct prompting, increasing cost. However, as the number of instances increases, the efficiency of SELF-DISCOVER in terms of inference per instance is highly desirable.

commonsense knowledge. We interpret SELF-DISCOVER's advantages on these tasks as strength from integrating multiple reasoning modules from various perspectives, as only applying CoT might miss key knowledge in the reasoning process. We observe that the gain on the Algorithmic category is moderate, consistent with the findings from Sec. 4.1 on MATH.

4.3. How Efficient is SELF-DISCOVER?
SELF-DISCOVER achieves better performance while requiring 10-40x fewer inference compute compared to self-consistency or majority voting. Here we examine a subset of 2 tasks from BBH and present a more thorough comparison of methods, including those requiring many inference calls that are too costly to run on all 24 tasks. Figure 5 shows average accuracy and number of inference calls required per instance for each method using GPT-4. Accuracy-wise (y-axis), we find that SELF-DISCOVER outperforms other baselines, even those that require repeated inference calls such as CoT-self-consistency and majority voting of applying each RM. Efficiency-wise (x-axis), SELF-DISCOVER only requires one call per instance and three more inference calls on the task-level; CoT-self-consistency requires 10 times more since we have to sample 10 times for each instance, and methods using each RM require 40 times more as we use 40 RMs. In summary, SELF-DISCOVER presents itself as a strong reasoning-boosting method that is efficient to deploy at large scale.

4.4. Qualitative Examples
We show examples of model-discovered structures for different reasoning tasks in Figure 6 from PaLM 2-L. We observe that each structure is uniquely adapted to the task, integrates multiple reasoning modules, and provides insights on how to solve the tasks. Furthermore, an example comparing reasoning processes from CoT, Plan-and-Solve, and SELF-DISCOVER is shown in Figure 7. We find that CoT and Plan-and-Solve make incorrect assertions early and arrive at a wrong answer, while following the structure from SELF-DISCOVER leads the model to generate logical conclusions ("path is closed as the beginning and ending coordinates are the same") and arrive at the correct answer.

Figure 6. Examples of self-discovered structures on BBH tasks using PaLM 2-L. We observe traits of atomic reasoning modules such as "step-by-step thinking", "reflect on task nature", and an interesting creative thinking case where models devise an algorithm using a stack to solve the parenthesis parsing task.

5. Deep Diving Into Self-Discovered Reasoning Structures
After experimental results showing the effectiveness and efficiency of SELF-DISCOVER on a range of reasoning
Figure 7. Comparison of generated reasoning process from CoT, Plan-and-Solve, and SELF-DISCOVER on the BBH geometric shape task. Both CoT and Plan-and-Solve incorrectly assert that the path does not form a regular shape as it is not a closed path (highlighted in red) and arrive at a wrong answer. The reasoning structure (in blue Courier font) from SELF-DISCOVER first breaks down each line segment and analyzes the coordinates carefully, then leverages logical reasoning to conclude that it forms a closed shape as the path ends at the same coordinate (highlighted in purple and orange), and selects the correct answer through final reasoning.

tasks, this section further analyzes whether all actions of SELF-DISCOVER are needed and what other benefits self-discovered structures can bring. In Sec. 5.1, we show that it is critical to the model's performance to use the reasoning structures discovered through the three steps of SELECT, ADAPT and IMPLEMENT. In Sec. 5.2, we demonstrate the universality of the self-discovered reasoning structures by (1) applying the structures discovered by PaLM 2-L to GPT-4, and (2) applying the structures discovered by GPT-4 to Llama-2-70B. We further show the commonalities between the reasoning structures and human reasoning patterns in Appendix E.

5.1. Importance of SELF-DISCOVER Actions
We conduct an ablation study on the three actions: SELECT, ADAPT, and IMPLEMENT to analyze the effects of SELF-DISCOVER actions. Figure 8 shows results using GPT-4 on 4 reasoning tasks when we apply SELECT (-S), apply SELECT and ADAPT (-SA), or apply all three actions. We find that with each stage, the model's zero-shot reasoning capability improves consistently across tasks, indicating that all three actions are beneficial. In particular, after all three actions SAI, the reasoning structures are adapted to be task-specific, and bring the most gain to solving the reasoning tasks.

Figure 8. Ablation study on three SELF-DISCOVER actions on 4 reasoning tasks: all three actions are beneficial for task-solving.

5.2. Towards Universality of Discovered Reasoning Structures
Applying PaLM 2-L Discovered Structures to GPT-4 We first use a PaLM 2-L model to discover the reasoning structures of 4 reasoning tasks. Then, we apply the resulting reasoning structures to the decoding of GPT-4 as grounding. We compare our approach to OPRO (Yang et al., 2023), which discovered zero-shot prompts through optimization. We apply OPRO prompts optimized using PaLM 2-L on each task to GPT-4 on the same reasoning tasks. Figure 9 shows that SELF-DISCOVER outperforms OPRO on 3 out of 4 tasks despite that OPRO used 20% data to optimize the
prompt. In contrast, SELF-DISCOVER is done in a zero-shot manner, demonstrating the efficiency of our method and the universality of the discovered reasoning structures.

Figure 9. Transferrability tests of optimized prompts (OPRO) and composed structures (SELF-DISCOVER). The results shown are from GPT-4 using the prompts and structures optimized or composed using PaLM 2-L. We find that the self-discovered reasoning structure transfers more robustly than optimized prompts.

Applying GPT-4 Discovered Structures to Llama2 and ChatGPT Motivated by the transferrability performance across LLMs, we further investigate whether self-discovered reasoning structures from LLMs can boost reasoning for smaller LMs that are challenged to come up with structures themselves 3. We use GPT-4 to discover the task-intrinsic reasoning structures, and then apply those structures to the decoding of open-sourced Llama2-70B as well as GPT-3.5-turbo (ChatGPT) on two subsets of tasks from BBH. We find that using self-discovered structures on Llama2 (52%) outperforms CoT (42%) on disambiguation QA zero-shot, and on GPT-3.5-turbo (56%) outperforms CoT (51%) on geometry with a 3-shot demonstration from the structured reasoning process.

6. Related Work
6.1. Prompting Methods
Recent advancements in the area of LLMs have given rise to a plethora of few-shot (Brown et al., 2020) and instruction (Mishra et al., 2022c; Wei et al., 2021; Ouyang et al., 2022) prompting techniques, including Chain-of-Thought prompting (CoT) (Nye et al., 2021; Wei et al., 2022), Least-to-most prompting (Zhou et al., 2022a; Drozdov et al., 2022), Decomposed prompting (Khot et al., 2022), Reframing (Mishra et al., 2022b), Help Me Think Prompting (Mishra & Nouri, 2023), Stepback Prompting (Zheng et al., 2023) and search-based approaches like Tree-of-Thought. Each of these prompting methods has some strengths and weaknesses in terms of their successful application domain. Our work SELF-DISCOVER presents the missing piece in the prompting literature, as SELF-DISCOVER provides a way to self-compose over various prompting methods via the proposed self-discovery mechanism. Composing over prompting methods in SELF-DISCOVER is analogous to the programming literature, where a program is written using various basic building blocks such as for loops, if/else conditions, etc.

6.2. Reasoning and Planning
With the development of various reasoning and planning benchmarks such as GSM8K (Cobbe et al., 2021), Math (Hendrycks et al.), BigBench (Srivastava et al., 2023), etc., various methods have been proposed to improve model performance. Often these methods induce specific reasoning structures mimicking the reasoning structure of the underlying task associated with the dataset. For example, chain of thought (Wei et al., 2022) and scratchpad (Nye et al., 2021) induce generation of explanations associated with a reasoning question. Similarly, other methods induce specific reasoning structures such as question summarization (Kuznia et al., 2022), question decomposition (Patel et al., 2022), program generation (Mishra et al., 2022a; Chen et al., 2022; Gao et al., 2023b), etc. However, in real-world user traffic, queries can be diverse, covering various reasoning structures. Our work SELF-DISCOVER allows models to combine multiple reasoning approaches by self-composing into a structure without the need to access task labels. There have been some related works that explore LLMs combining skills in-context, such as SkiC (Chen et al., 2023), devising a strategy (Gao et al., 2023a), and planning with iterative querying (Liu et al., 2023). However, they require humans to annotate skills and reasoning plans, while SELF-DISCOVER leverages a scalable solution with the help of LLMs' meta-task reasoning capabilities.

7. Conclusion
We introduce SELF-DISCOVER, an efficient and performant framework for models to self-discover a reasoning structure for any task from a seed set of general problem-solving skills. We observe drastic improvements on challenging reasoning benchmarks from multiple LLMs, up to 30%. Ablation study of SELF-DISCOVER demonstrates that the composed reasoning structures are universally transferable between LLMs. Forward looking, we are excited to explore more on LLM structured reasoning to push the boundary of problem-solving and discover potentials for Human-AI ...
Thought(ToT)(Yaoetal.,2023a),Graph-of-Thought(Besta collaboration.
et al.,2023;Yao et al.,2023b), Branch-solve-merge (Saha
et al.,2023) and RAP (Hao et al.,2023). Each of the
3We triedzero-shot meta prompting Llama2 but observedlow-
quality structure outputs.
Acknowledgement

We thank Andrew Dai and Adams Yu of Google DeepMind for their insightful feedback on this paper.

References

Anil, R., Dai, A. M., Firat, O., Johnson, M., Lepikhin, D., Passos, A., Shakeri, S., Taropa, E., Bailey, P., Chen, Z., et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023.

Besta, M., Blach, N., Kubicek, A., Gerstenberger, R., Gianinazzi, L., Gajda, J., Lehmann, T., Podstawski, M., Niewiadomski, H., Nyczyk, P., et al. Graph of thoughts: Solving elaborate problems with large language models. arXiv preprint arXiv:2308.09687, 2023.

Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.

Chen, J., Pan, X., Yu, D., Song, K., Wang, X., Yu, D., and Chen, J. Skills-in-context prompting: Unlocking compositionality in large language models. arXiv preprint arXiv:2308.00304, 2023.

Chen, W., Ma, X., Wang, X., and Cohen, W. W. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588, 2022.

Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H. W., Sutton, C., Gehrmann, S., et al. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311, 2022.

Chung, H. W., Hou, L., Longpre, S., Zoph, B., Tay, Y., Fedus, W., Li, Y., Wang, X., Dehghani, M., Brahma, S., et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022.

Cobbe, K., Kosaraju, V., Bavarian, M., Chen, M., Jun, H., Kaiser, L., Plappert, M., Tworek, J., Hilton, J., Nakano, R., et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.

Drozdov, A., Schärli, N., Akyürek, E., Scales, N., Song, X., Chen, X., Bousquet, O., and Zhou, D. Compositional semantic parsing with large language models. arXiv preprint arXiv:2209.15003, 2022.

Fernando, C., Banarse, D., Michalewski, H., Osindero, S., and Rocktäschel, T. Promptbreeder: Self-referential self-improvement via prompt evolution. arXiv preprint arXiv:2309.16797, 2023.

Gao, C., Jiang, H., Cai, D., Shi, S., and Lam, W. Strategyllm: Large language models as strategy generators, executors, optimizers, and evaluators for problem solving. arXiv preprint arXiv:2311.08803, 2023a.

Gao, L., Madaan, A., Zhou, S., Alon, U., Liu, P., Yang, Y., Callan, J., and Neubig, G. Pal: Program-aided language models. In International Conference on Machine Learning, pp. 10764–10799. PMLR, 2023b.

Hao, S., Gu, Y., Ma, H., Hong, J. J., Wang, Z., Wang, D. Z., and Hu, Z. Reasoning with language model is planning with world model. arXiv preprint arXiv:2305.14992, 2023.

Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., and Steinhardt, J. Measuring mathematical problem solving with the math dataset. Sort, 2(4):0–6.

Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., and Steinhardt, J. Measuring mathematical problem solving with the math dataset, 2021.

Khot, T., Trivedi, H., Finlayson, M., Fu, Y., Richardson, K., Clark, P., and Sabharwal, A. Decomposed prompting: A modular approach for solving complex tasks. In The Eleventh International Conference on Learning Representations, 2022.

Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., and Iwasawa, Y. Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35:22199–22213, 2022.

Kuznia, K., Mishra, S., Parmar, M., and Baral, C. Less is more: Summary of long instructions is better for program synthesis. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 4532–4552, 2022.

Liu, T., Guo, Q., Yang, Y., Hu, X., Zhang, Y., Qiu, X., and Zhang, Z. Plan, verify and switch: Integrated reasoning with diverse x-of-thoughts. arXiv preprint arXiv:2310.14628, 2023.

Mishra, S. and Nouri, E. HELP ME THINK: A simple prompting strategy for non-experts to create customized content with models. In Rogers, A., Boyd-Graber, J., and Okazaki, N. (eds.), Findings of the Association for Computational Linguistics: ACL 2023, pp. 11834–11890, Toronto, Canada, July 2023. Association for Computational Linguistics. doi: 10.18653/v1/2023.findings-acl.751. URL https://aclanthology.org/2023.findings-acl.751.

Mishra, S., Finlayson, M., Lu, P., Tang, L., Welleck, S., Baral, C., Rajpurohit, T., Tafjord, O., Sabharwal, A., Clark, P., et al. Lila: A unified benchmark for mathematical reasoning. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 5807–5832, 2022a.

Mishra, S., Khashabi, D., Baral, C., Choi, Y., and Hajishirzi, H. Reframing instructional prompts to gptk's language. In Findings of the Association for Computational Linguistics: ACL 2022, pp. 589–612, 2022b.

Mishra, S., Khashabi, D., Baral, C., and Hajishirzi, H. Cross-task generalization via natural language crowdsourcing instructions. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 3470–3487, 2022c.

Newell, A., Shaw, J. C., and Simon, H. A. Elements of a theory of human problem solving. Psychological Review, 65(3):151, 1958.

Nye, M., Andreassen, A. J., Gur-Ari, G., Michalewski, H., Austin, J., Bieber, D., Dohan, D., Lewkowycz, A., Bosma, M., Luan, D., et al. Show your work: Scratchpads for intermediate computation with language models. arXiv preprint arXiv:2112.00114, 2021.

OpenAI. ChatGPT: Optimizing language models for dialogue, 2022. URL https://openai.com/blog/chatgpt/.

OpenAI. JSON generation mode, 2023a. URL https://platform.openai.com/docs/guides/text-generation/json-mode.

OpenAI. GPT-4 technical report. arXiv preprint arXiv:2303.08774, 2023b.

Ouyang, L., Wu, J., Jiang, X., Almeida, D., Wainwright, C., Mishkin, P., Zhang, C., Agarwal, S., Slama, K., Ray, A., et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35:27730–27744, 2022.

Patel, P., Mishra, S., Parmar, M., and Baral, C. Is a question decomposition unit all we need? In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 4553–4569, 2022.

Polya, G. How to solve it: A new aspect of mathematical method, volume 85. Princeton University Press, 2004.

Rasmussen, J. Skills, rules, and knowledge; signals, signs, and symbols, and other distinctions in human performance models. IEEE Transactions on Systems, Man, and Cybernetics, (3):257–266, 1983.

Saha, S., Levy, O., Celikyilmaz, A., Bansal, M., Weston, J., and Li, X. Branch-solve-merge improves large language model evaluation and generation. arXiv preprint arXiv:2310.15123, 2023.

Srivastava, A., Rastogi, A., Rao, A., Shoeb, A. A. M., Abid, A., Fisch, A., Brown, A. R., Santoro, A., Gupta, A., Garriga-Alonso, A., et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. Transactions on Machine Learning Research, 2023.

Suzgun, M., Scales, N., Schärli, N., Gehrmann, S., Tay, Y., Chung, H. W., Chowdhery, A., Le, Q. V., Chi, E. H., Zhou, D., et al. Challenging big-bench tasks and whether chain-of-thought can solve them. arXiv preprint arXiv:2210.09261, 2022.

Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.

Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A. N., Kaiser, L., and Polosukhin, I. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/3f5ee243547dee91fbd053c1c4a845aa-Paper.pdf.

Wang, L., Xu, W., Lan, Y., Hu, Z., Lan, Y., Lee, R. K.-W., and Lim, E.-P. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models. arXiv preprint arXiv:2305.04091, 2023.

Wang, X., Wei, J., Schuurmans, D., Le, Q. V., Chi, E. H., Narang, S., Chowdhery, A., and Zhou, D. Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations, 2022.

Wei, J., Bosma, M., Zhao, V., Guu, K., Yu, A. W., Lester, B., Du, N., Dai, A. M., and Le, Q. V. Finetuned language models are zero-shot learners. In International Conference on Learning Representations, 2021.

Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q. V., Zhou, D., et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.

Yang, C., Wang, X., Lu, Y., Liu, H., Le, Q. V., Zhou, D., and Chen, X. Large language models as optimizers. arXiv preprint arXiv:2309.03409, 2023.

Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., and Narasimhan, K. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023a.

Yao, Y., Li, Z., and Zhao, H. Beyond chain-of-thought, effective graph-of-thought reasoning in large language models. arXiv preprint arXiv:2305.16582, 2023b.

Yasunaga, M., Chen, X., Li, Y., Pasupat, P., Leskovec, J., Liang, P., Chi, E. H., and Zhou, D. Large language models as analogical reasoners. arXiv preprint arXiv:2310.01714, 2023.

Zheng, H. S., Mishra, S., Chen, X., Cheng, H.-T., Chi, E. H., Le, Q. V., and Zhou, D. Take a step back: Evoking reasoning via abstraction in large language models. arXiv preprint arXiv:2310.06117, 2023.

Zhong, R., Lee, K., Zhang, Z., and Klein, D. Adapting language models for zero-shot learning by meta-tuning on dataset and prompt collections. arXiv preprint arXiv:2104.04670, 2021.

Zhou, D., Schärli, N., Hou, L., Wei, J., Scales, N., Wang, X., Schuurmans, D., Cui, C., Bousquet, O., Le, Q. V., et al. Least-to-most prompting enables complex reasoning in large language models. In The Eleventh International Conference on Learning Representations, 2022a.

Zhou, P., Madaan, A., Potharaju, S. P., Gupta, A., McKee, K. R., Holtzman, A., Pujara, J., Ren, X., Mishra, S., Nematzadeh, A., et al. How far are large language models from agents with theory-of-mind? arXiv preprint arXiv:2310.03051, 2023.

Zhou, Y., Muresanu, A. I., Han, Z., Paster, K., Pitis, S., Chan, H., and Ba, J. Large language models are human-level prompt engineers. In The Eleventh International Conference on Learning Representations, 2022b.
A. Self-Discover Prompt Details

Table 2 shows all 39 reasoning modules we use for SELF-DISCOVER, adopted from Fernando et al. (2023), that contain cognitive heuristics of problem-solving.

Figure 10 contains the structure of the three actions of SELF-DISCOVER during Stage 1, where it discovers an intrinsic reasoning structure on the task level.

For Stage 2, where we use the self-discovered structure to solve the task instances, we start with the prompt: "Follow the step-by-step reasoning plan in JSON to correctly solve the task. Fill in the values following the keys by reasoning specifically about the task given. Do not simply rephrase the keys.", followed by the reasoning structure, and finally the task instance.
Figure 10. Meta-Prompts for the three actions of SELF-DISCOVER. Each meta-prompt consists of an instruction in the beginning and the end, reasoning module descriptions, and task examples without labels. For IMPLEMENT, to show the model an example of a reasoning structure (plan), we present a human-written structure in JSON for another task.
B. Evaluation Details

We use accuracy and exact matching as with other methods tested on BBH, T4D and MATH. To properly evaluate the generated answers from LLMs, we prompt the models to end the answer with "Thus, the final answer is [X]", where X is either one answer option such as "A" or a string such as "valid". During evaluation, we manually examine each task's outputs from LLMs and design heuristics to extract the final answers. For the MATH dataset, we find that it is challenging to extract the answers accurately. As a result, we subsample 200 test examples from MATH, and manually sanity check and annotate the extracted answers for all methods tested in our paper.
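To make the extraction step concrete, the sketch below shows one simple heuristic of the kind described above: a regular expression that pulls the final answer out of a completion ending in "Thus, the final answer is [X]". The paper does not specify its per-task heuristics, so this is an assumed, minimal variant.

```python
import re

def extract_final_answer(completion: str):
    """Extract X from a completion ending in 'Thus, the final answer is X.'
    Returns None if the expected phrase is missing (a human would then inspect it)."""
    match = re.search(r"the final answer is\s*(.+?)\s*\.?\s*$",
                      completion.strip(), flags=re.IGNORECASE)
    return match.group(1).strip() if match else None

print(extract_final_answer("Step 3 checks the path. Thus, the final answer is (A)."))   # -> "(A)"
print(extract_final_answer("The statement is consistent. Thus, the final answer is valid"))  # -> "valid"
```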
C. BBH Per-Task Performance

Per-task performance on BBH (23 tasks in total) is shown in Table 3.
D. Error Analysis

We perform an error analysis of SELF-DISCOVER on the MATH dataset of 200 samples to understand the failure modes. We manually annotate whether the generated reasoning structure is correct or not, together with the correctness of the model prediction using SELF-DISCOVER. A reasoning structure is defined as correct if a human expert can solve the task by simply following the reasoning structure.
Table 2. All 39 reasoning modules consisting of high-level cognitive heuristics for problem-solving. We adopt them from Fernando et al. (2023).

Reasoning Modules
1. How could I devise an experiment to help solve that problem?
2. Make a list of ideas for solving this problem, and apply them one by one to the problem to see if any progress can be made.
3. How could I measure progress on this problem?
4. How can I simplify the problem so that it is easier to solve?
5. What are the key assumptions underlying this problem?
6. What are the potential risks and drawbacks of each solution?
7. What are the alternative perspectives or viewpoints on this problem?
8. What are the long-term implications of this problem and its solutions?
9. How can I break down this problem into smaller, more manageable parts?
10. Critical Thinking: This style involves analyzing the problem from different perspectives, questioning assumptions, and evaluating the evidence or information available. It focuses on logical reasoning, evidence-based decision-making, and identifying potential biases or flaws in thinking.
11. Try creative thinking, generate innovative and out-of-the-box ideas to solve the problem. Explore unconventional solutions, thinking beyond traditional boundaries, and encouraging imagination and originality.
12. Seek input and collaboration from others to solve the problem. Emphasize teamwork, open communication, and leveraging the diverse perspectives and expertise of a group to come up with effective solutions.
13. Use systems thinking: Consider the problem as part of a larger system and understanding the interconnectedness of various elements. Focuses on identifying the underlying causes, feedback loops, and interdependencies that influence the problem, and developing holistic solutions that address the system as a whole.
14. Use Risk Analysis: Evaluate potential risks, uncertainties, and tradeoffs associated with different solutions or approaches to a problem. Emphasize assessing the potential consequences and likelihood of success or failure, and making informed decisions based on a balanced analysis of risks and benefits.
15. Use Reflective Thinking: Step back from the problem, take the time for introspection and self-reflection. Examine personal biases, assumptions, and mental models that may influence problem-solving, and being open to learning from past experiences to improve future approaches.
16. What is the core issue or problem that needs to be addressed?
17. What are the underlying causes or factors contributing to the problem?
18. Are there any potential solutions or strategies that have been tried before? If yes, what were the outcomes and lessons learned?
19. What are the potential obstacles or challenges that might arise in solving this problem?
20. Are there any relevant data or information that can provide insights into the problem? If yes, what data sources are available, and how can they be analyzed?
21. Are there any stakeholders or individuals who are directly affected by the problem? What are their perspectives and needs?
22. What resources (financial, human, technological, etc.) are needed to tackle the problem effectively?
23. How can progress or success in solving the problem be measured or evaluated?
24. What indicators or metrics can be used?
25. Is the problem a technical or practical one that requires a specific expertise or skill set? Or is it more of a conceptual or theoretical problem?
26. Does the problem involve a physical constraint, such as limited resources, infrastructure, or space?
27. Is the problem related to human behavior, such as a social, cultural, or psychological issue?
28. Does the problem involve decision-making or planning, where choices need to be made under uncertainty or with competing objectives?
29. Is the problem an analytical one that requires data analysis, modeling, or optimization techniques?
30. Is the problem a design challenge that requires creative solutions and innovation?
31. Does the problem require addressing systemic or structural issues rather than just individual instances?
32. Is the problem time-sensitive or urgent, requiring immediate attention and action?
33. What kinds of solution typically are produced for this kind of problem specification?
34. Given the problem specification and the current best solution, have a guess about other possible solutions.
35. Let's imagine the current best solution is totally wrong, what other ways are there to think about the problem specification?
36. What is the best way to modify this current best solution, given what you know about these kinds of problem specification?
37. Ignoring the current best solution, create an entirely new solution to the problem.
38. Let's think step by step.
39. Let's make a step by step plan and implement it with good notion and explanation.
Out of 200 examples, we find that 87.5% (175) examples have correct reasoning structures, while 12.5% (25) examples have incorrect reasoning structures leading to prediction errors. Table 4 shows 4 such examples where the LLM misunderstands the task, makes an error in one of the steps, or adds unnecessary steps to the reasoning structure.

Next, we analyze the errors made by the model in SELF-DISCOVER: out of 99 examples where the model prediction is wrong, wrong reasoning structures account for only 25.3% of the errors. The remaining 74.7% of errors are due to mistakes in the intermediate calculations, such as math computations. Table 5 shows 3 examples of such errors. This insight indicates that future improvements should aim at improving the step-wise calculation accuracy of LLMs, such as using tools or code generation.
E. Further Analysis

Model-Discovered Reasoning Structures vs. Human Reasoning Patterns. We investigate whether LLM-discovered reasoning structures share some commonalities with human reasoning patterns. We give humans 3 task instances without labels and an example reasoning structure (same as the SELF-DISCOVER meta-reasoning stage) and ask them to write a reasoning structure for a task before solving it. Figure 11 shows a comparison of human- and LLM-composed reasoning structures on the BBH-navigation task. We observe similar structures, such as mental-noting after each movement. Given these promising findings that LLM self-discovered structures boost performance and share traits of human meta-reasoning, we hope to encourage more future work to study human-AI collaboration for complex problem-solving.
Table 3. Big Bench-Hard (Suzgun et al., 2022) per-task performance of GPT-4 and PaLM 2-L with SELF-DISCOVER.

BigBench-Hard Task                        Human (Avg.)  Human (Max)  GPT-4 Direct  GPT-4 +CoT  GPT-4 +Self-Discover  PaLM 2-L Direct  PaLM 2-L +CoT  PaLM 2-L +Self-Discover
boolean_expressions                        79   100   73   83   85   71   84   84
causal_judgement                           70   100   67   75   80   46   59   61
date_understanding                         77   100   74   80   81   73   78   78
disambiguation_qa                          67    93   60   70   80   54   50   57
dyck_languages                             48   100   69   73   77   94   95   98
formal_fallacies                           91   100   60   60   80   60   63   69
geometric_shapes                           54   100   30   56   60   33   34   39
hyperbaton                                 75   100   68   69   76   80   75   82
logical_deduction_seven_objects            40    89   60   70   70   45   39   50
movie_recommendation                       61    90   70   70   86   83   54   66
multistep_arithmetic_two                   10    25   10   92   70    4   50   47
navigate                                   82   100   70   90   90   38   63   67
object_counting                            86   100   90  100  100   27   44   70
penguins_in_a_table                        78   100   80  100   90   70   67   75
reasoning_about_colored_objects            75   100   77   80   79   36   79   75
ruin_names                                 78   100   90   80   97   79   58   90
salient_translation_error_detection        37    80   40   50   70   56   48   60
snarks                                     77   100   73   89   97   58   62   86
sports_understanding                       71   100   54   61   90   44   47   89
temporal_sequences                         91   100   96   99  100   99   97   99
tracking_shuffled_objects_seven_objects    65   100   24   80   68   22   58   36
web_of_lies                                81   100   15   80   71   54   42   67
word_sorting                               63   100   65   90   85   12    4   15
Table 4. Examples of wrong reasoning structures for MATH. The first error in the reasoning structure is highlighted in red.

Prompt: How many numbers between 1 and 2005 are integer multiples of 3 or 4 but not 12?
Reasoning Structure: 1. Find the number of multiples of 3 between 1 and 2005. 2. Find the number of multiples of 4 between 1 and 2005. 3. Find the number of multiples of 12 between 1 and 2005. 4. Add the number of multiples of 3 and 4. 5. Subtract the number of multiples of 12 (to avoid double counting). 6. The result is the number of integers between 1 and 2005 that are multiples of 3 or 4 but not 12.
Error: Need to subtract the number of multiples of 12 twice instead of once.

Prompt: How many numbers are in the list 6, 7, 10, 11, 14, 15, ..., 94, 95, 98?
Reasoning Structure: 1. Find the number of pairs in the list. 2. Find the number of numbers in each pair. 3. Multiply the number of pairs by the number of numbers in each pair to find the total number of numbers in the list.
Error: The LLM misunderstands the task: the reasoning structure is irrelevant.

Prompt: A restaurant offers three desserts, and exactly twice as many appetizers as main courses. A dinner consists of an appetizer, a main course, and a dessert. What is the least number of main courses that the restaurant should offer so that a customer could have a different dinner each night in the year 2003?
Reasoning Structure: 1. Define the variables: M: number of main courses; A: number of appetizers (twice the number of main courses); D: number of desserts (fixed at 3); T: total number of possible dinner combinations. 2. Calculate the total number of possible dinner combinations: T = M * A * D. 3. Set the total number of possible dinner combinations equal to the number of nights in the year 2003: T = 2003. 4. Substitute the expression for A in terms of M: T = M * (2 * M) * 3. 5. Solve the equation for M: 6M^2 = 2003. 6. Find the smallest integer value of M that satisfies the equation.
Error: There are 365 nights in the year 2003.

Prompt: Determine the number of ways to arrange the letters of the word NINE.
Reasoning Structure: 1. Count the total number of letters in the word NINE. 2. Identify the number of times each letter appears in the word. 3. Calculate the factorial of the total number of letters. 4. Divide the factorial by the factorial of the number of times each letter appears. 5. Multiply the results from step 4 for each unique letter.
Error: Step 5 is unnecessary and leads to error in the final result.
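For reference on the first example above, the correct count follows from inclusion-exclusion once multiples of 12 are removed from both the multiples of 3 and the multiples of 4, which is exactly the double subtraction the Error column points to. A quick check of the arithmetic (ours, not from the paper):

```python
# Multiples of 3 or 4 but not 12, between 1 and 2005 (inclusion-exclusion check).
m3, m4, m12 = 2005 // 3, 2005 // 4, 2005 // 12   # 668, 501, 167
print(m3 + m4 - 2 * m12)                          # 835: multiples of 12 are subtracted twice
# Brute-force confirmation of the same count.
print(sum(1 for n in range(1, 2006) if (n % 3 == 0 or n % 4 == 0) and n % 12 != 0))
```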
Table 5. Examples of wrong calculations for MATH. The first error in the intermediate computations is highlighted in red.

Prompt: How many ways are there to arrange 6 people around a circular table with 7 seats? (Two seatings are considered the same if one is a rotation of the other.)
Self-Discover Prediction: "1. Calculate the total number of ways to arrange 6 people in 7 seats without considering rotations as the same.": 7 * 6 * 5 * 4 * 3 * 2 = 5040. There are 7 choices for the first person, 6 choices for the second person, 5 choices for the third person, 4 choices for the fourth person, 3 choices for the fifth person, and 2 choices for the sixth person. "2. Divide the result from step 1 by the number of rotations that are considered the same.": 5040 / 6 = 840. There are 6 rotations that are considered the same.
Error: There are 7 rotations instead of 6 that are considered the same.

Prompt: A customer ordered 15 pieces of gourmet chocolate. The order can be packaged in small boxes that contain 1, 2 or 4 pieces of chocolate. Any box that is used must be full. How many different combinations of boxes can be used for the customer's 15 chocolate pieces? One such combination to be included is to use seven 2-piece boxes and one 1-piece box.
Self-Discover Prediction: "1. Find the number of ways to package the chocolates using only boxes of 1 piece.": 1, "2. Find the number of ways to package the chocolates using only boxes of 2 pieces.": 1, "3. Find the number of ways to package the chocolates using only boxes of 4 pieces.": 0, "4. Find the number of ways to package the chocolates using a combination of boxes of 1 and 2 pieces.": 7, "5. Find the number of ways to package the chocolates using a combination of boxes of 1 and 4 pieces.": 3, "6. Find the number of ways to package the chocolates using a combination of boxes of 2 and 4 pieces.": 2, "7. Find the number of ways to package the chocolates using a combination of boxes of 1, 2, and 4 pieces.": 4, "8. Sum up the total number of ways from the seven cases above.": 1 + 1 + 0 + 7 + 3 + 2 + 4 = 18
Error: There is 0 way to package 15 chocolates using only boxes of 2 pieces.

Prompt: How many primes are in the row of Pascal's Triangle that starts with a 1 followed by a 6?
Self-Discover Prediction: "1. Identify the row of Pascal's Triangle that starts with a 1 followed by a 6.": "1, 6, 15, 20, 15, 6, 1", "2. List all the numbers in that row.": [1, 6, 15, 20, 15, 6, 1], "3. Check each number in the list for primality.": [False, True, False, False, False, True, False], "4. Count the number of primes found in the list.": 2
Error: 6 is not a prime.
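For the first example above, the corrected computation divides by 7 rotations rather than 6, as the Error column notes. A quick check of the arithmetic (ours, not from the paper):

```python
import math

# 6 people in 7 seats around a circular table; rotations of a seating are identical.
linear_arrangements = math.perm(7, 6)   # 7 * 6 * 5 * 4 * 3 * 2 = 5040
print(linear_arrangements // 7)         # 720 distinct seatings after dividing by the 7 rotations
```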
Figure 11. Case study of a human-written structure sharing commonalities with an LLM-discovered reasoning structure. We observe similar reasoning patterns: both structures contain step-wise analysis of each instruction.
Understanding LLMs: A Comprehensive Overview from Training to Inference

Yiheng Liu a, Hao He a, Tianle Han a, Xu Zhang a, Mengyuan Liu a, Jiaming Tian a, Yutong Zhang b, Jiaqi Wang c, Xiaohui Gao d, Tianyang Zhong d, Yi Pan e, Shaochen Xu e, Zihao Wu e, Zhengliang Liu e, Xin Zhang b, Shu Zhang c, Xintao Hu d, Tuo Zhang d, Ning Qiang a, Tianming Liu e and Bao Ge a

a School of Physics and Information Technology, Shaanxi Normal University, Xi'an, 710119, Shaanxi, China
b Institute of Medical Research, Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, China
c School of Computer Science, Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, China
d School of Automation, Northwestern Polytechnical University, Xi'an, 710072, Shaanxi, China
e School of Computing, The University of Georgia, Athens, 30602, USA

ARTICLE INFO

Keywords: Large Language Models, Training, Inference, Survey

ABSTRACT

The introduction of ChatGPT has led to a significant increase in the utilization of Large Language Models (LLMs) for addressing downstream tasks. There is an increasing focus on cost-efficient training and deployment within this context. Low-cost training and deployment of LLMs represent the future development trend. This paper reviews the evolution of large language model training techniques and inference deployment technologies aligned with this emerging trend. The discussion on training includes various aspects, including data preprocessing, training architecture, pre-training tasks, parallel training, and relevant content related to model fine-tuning. On the inference side, the paper covers topics such as model compression, parallel computation, memory scheduling, and structural optimization. It also explores LLMs' utilization and provides insights into their future development.
1. Introduction

Language modeling (LM) is a fundamental approach for achieving cognitive intelligence in the field of natural language processing (NLP), and its progress has been notable in recent years [1; 2; 3]. It assumes a central role in understanding, generating, and manipulating human language, serving as the cornerstone for a diverse range of NLP applications [4], including machine translation, chatbots, sentiment analysis, and text summarization. With the evolution of deep learning, the early statistical language models (SLM) have gradually transformed into neural language models (NLM) based on neural networks. This shift is characterized by the adoption of word embeddings, representing words as distributed vectors. Notably, these word embeddings have consistently excelled in practical NLP tasks, profoundly shaping the field's progress. Pre-trained language models (PLM) represent a subsequent phase in the evolution of language models following NLM. Early attempts at PLMs included ELMo [5], which was built on a Bidirectional LSTM architecture. However, with the advent of the transformer architecture [6], characterized by parallel self-attention mechanisms, the pre-training and fine-tuning learning paradigm has propelled PLM to prominence as the prevailing approach. These models are typically trained via self-supervision on extensive datasets, cementing their status as the primary methodology in the field.

The Transformer architecture is exceptionally well-suited for scaling up models, and research analysis has revealed that increasing the model's scale or training data size can significantly enhance its performance. Many studies have pushed the boundaries of model performance by continuously expanding the scale of PLM [7; 8; 9; 10]. As models grow larger, a remarkable phenomenon known as "emergence" occurs, wherein they exhibit astonishing performance [8]. These models are capable of generating high-quality text and possess robust learning and reasoning abilities. They can even tackle few-shot learning tasks through in-context learning (ICL) [8]. This remarkable capability enables their seamless application to a wide range of downstream tasks across diverse domains [11; 12; 13; 14].
Pre-trained language models (PLMs) with significantly larger parameter sizes and extensive training data are typically denoted as Large Language Models (LLMs) [15; 16; 17]. The model size usually exceeds 6-10 billion (6-10B) parameters. A prominent milestone in the development of LLMs is exemplified by the GPT series [18; 7; 8; 19]. Notably, OpenAI released ChatGPT in November 2022, marking a pivotal moment in the era of LLMs and a game-changing moment in the field of artificial intelligence. ChatGPT has empowered current AI algorithms to achieve unprecedented levels of strength and effectiveness, reshaping the way humans employ or develop AI algorithms. Its emergence has captured the attention of the research community. However, owing to ChatGPT's absence as an open-source platform, the principal way to use ChatGPT currently is by accessing it through OpenAI's website at https://chat.openai.com or via their API interface. Training LLMs that can serve as alternatives to ChatGPT, or domain-specific LLMs, has become highly necessary [20; 21; 22; 23; 24; 1; 25; 26]. Training and deploying LLMs demand expertise in handling large-scale data and substantial practical experience in distributed parallel training [27; 28; 29]. This requirement emphasizes the need for researchers developing LLMs to possess significant engineering capabilities in addressing the challenges encountered during LLM development. Researchers who are interested in the field of LLMs must possess engineering skills or learn to collaborate effectively with engineers.

For the above reasons, the primary objective of this paper is to provide a comprehensive overview of LLM training and inference techniques to equip researchers with the knowledge required for developing, deploying, and applying LLMs. The structure of the rest of this review is as follows: In Section 2, we will introduce the relevant background and foundational knowledge of LLMs. In Section 3, we will delve into the technical aspects of training LLMs, while in Section 4 we will explore the technologies related to LLMs' inference and deployment. In Section 5, we will discuss the utilization of LLMs, and Section 6 will explore the future directions and their implications for LLMs.
2. Background Knowledge

2.1. Transformer

Transformer is a deep learning model based on an attention mechanism for processing sequence data that can effectively solve complex natural language processing problems. This model was first proposed in 2017 [6], and replaced the traditional recurrent neural network architecture [30] in machine translation tasks as the state-of-the-art model at that time. Due to its suitability for parallel computing and the complexity of the model itself, Transformer outperforms the previously popular recurrent neural networks in terms of accuracy and performance. The Transformer architecture consists primarily of two modules, an Encoder and a Decoder, as well as the attention mechanism within these modules.

2.1.1. Self-Attention

Self-Attention Structure [6]: Essentially, the attention mechanism aims at selecting a small amount of important information from a large amount of data and focusing on these important pieces while ignoring the majority of unimportant information. The self-attention mechanism, as a variant of the attention mechanism, reduces reliance on external information and excels at capturing internal correlations within data or features. Applying the self-attention mechanism in text primarily involves calculating the mutual influence between words to address the issue of long-range dependencies. Additionally, self-attention is the core idea behind transformers. The core formula for key-value attention is as follows:

\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{QK^{T}}{\sqrt{d_k}}\right)V \quad (1)
Self-attention allows the model to weigh the importance of different words in a sentence when predicting a particular word. It calculates a weighted sum of the values of all words in the sentence, where the weights are determined by the relevance of each word to the target word.

The self-attention mechanism consists of three steps: calculating the query, key, and value vectors. The query vector represents the word being attended to, while the key vectors represent all the words in the sentence. The value vectors store the information associated with each word. The attention weights are computed by taking the dot product between the query and key vectors, followed by a softmax operation to obtain a distribution over the words.
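To ground equation (1) and the three-step description above, here is a minimal NumPy sketch of scaled dot-product attention. The matrix shapes and random values are arbitrary illustrative choices, not taken from any particular model.

```python
import numpy as np

def softmax(x, axis=-1):
    # Subtract the max for numerical stability before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Equation (1): softmax(Q K^T / sqrt(d_k)) V."""
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # (seq_len, seq_len) relevance scores
    weights = softmax(scores, axis=-1)   # attention distribution over words
    return weights @ V                   # weighted sum of value vectors

# Toy example: a "sentence" of 4 tokens with d_k = d_v = 8.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```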
Multi-Head Attention [6]: Multi-head self-attention extends the self-attention mechanism by performing it multiple times in parallel. Each attention head learns to focus on different aspects of the input, capturing different dependencies and patterns. The outputs of the attention heads are then concatenated and linearly transformed to obtain the final representation. By using multiple attention heads, the model can capture both local and global dependencies, allowing for a more comprehensive understanding of the input sequence. This parallelization also enhances the model's capacity to capture complex relationships between words. Multi-head attention can be formulated as follows:

\mathrm{MultiHeadAttention}(Q, K, V) = \mathrm{Concat}[\mathrm{head}_1, \ldots, \mathrm{head}_h]\,W^{O}, \quad \mathrm{where}\ \mathrm{head}_i = \mathrm{Attention}(QW_i^{Q}, KW_i^{K}, VW_i^{V}) \quad (2)

In this case, "Concat" means to concatenate the attention calculation results of each head, and "W^O" is the weight matrix of the output layer, used to linearly transform the concatenated results. This yields the output of multi-head attention. In summary, multi-head attention enhances the model's ability to represent input sequences by performing parallel attention calculations under different linear transformations, then concatenating and linearly transforming the results. This mechanism plays an important role in the Transformer model, helping to handle long-range dependencies and improve model performance.
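Continuing the sketch above, the following illustrates equation (2): the input is projected separately for each head with its own (here randomly initialized) weight matrices, each head runs scaled dot-product attention, and the concatenated heads are projected by an output matrix W_O. It reuses the scaled_dot_product_attention helper from the previous sketch; the dimensions are arbitrary toy values.

```python
import numpy as np

rng = np.random.default_rng(1)

def multi_head_attention(x, params):
    """Equation (2): run attention once per head with its own projections,
    concatenate the head outputs, then project with the output matrix W_O."""
    heads = [
        scaled_dot_product_attention(x @ W_Q, x @ W_K, x @ W_V)  # helper from the sketch above
        for (W_Q, W_K, W_V) in params["heads"]
    ]
    return np.concatenate(heads, axis=-1) @ params["W_O"]

# Toy setup: d_model = 16, 4 heads of size 4, a sequence of 5 tokens.
d_model, num_heads, d_head = 16, 4, 4
params = {
    "heads": [tuple(rng.normal(size=(d_model, d_head)) for _ in range(3)) for _ in range(num_heads)],
    "W_O": rng.normal(size=(num_heads * d_head, d_model)),
}
x = rng.normal(size=(5, d_model))
print(multi_head_attention(x, params).shape)  # (5, 16)
```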
2.1.2. Encoder

The encoder module [6] of the Transformer model is composed of multiple identical layers, each of which includes a multi-head attention mechanism and a feed-forward neural network [31]. In the multi-head attention mechanism, each position in the input sequence is calculated for attention with other positions to capture the dependencies between different positions in the input sequence. The feed-forward neural network is then used to further process and extract features from the output of the attention mechanism. The encoder module gradually extracts features of the input sequence through the stacking of multiple such layers and passes the final encoding result to the decoder module for decoding. The design of the encoder module enables it to effectively handle long-range dependencies within the input sequence and has significantly improved performance in various NLP tasks.
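As an illustration of this stacked design, the sketch below builds a small encoder from PyTorch's built-in layers; each layer combines multi-head self-attention with a feed-forward network as described above. The dimensions, layer count, and random input are arbitrary toy choices, not those of any specific model.

```python
import torch
import torch.nn as nn

d_model, n_heads, n_layers = 64, 4, 3
encoder_layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           dim_feedforward=128, batch_first=True)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)

tokens = torch.randn(2, 10, d_model)   # (batch, sequence length, embedding dim)
encoded = encoder(tokens)              # same shape; features refined layer by layer
print(encoded.shape)                   # torch.Size([2, 10, 64])
```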
2.1.3. Decoder

The decoder module [32] of the Transformer model is also composed of multiple identical layers, each of which includes a multi-head attention mechanism and a feed-forward neural network. Unlike the encoder, the decoder also includes an additional encoder-decoder attention mechanism, used to compute attention on the input sequence during the decoding process. At each position, the decoder can only perform self-attention calculations with the positions before it to ensure that the generation of the sequence does not violate grammar rules. Masks play an important role in the decoder, ensuring that only information before the current time step is focused on when generating the output sequence, and not leaking information from future time steps. Specifically, the decoder's self-attention mechanism uses masks to prevent the model from accessing future information when generating predictions at each time step, maintaining the causality of the model. This ensures that the output generated by the model depends on the information at the current time step and before, without being influenced by future information.
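A minimal sketch of the causal mask described here: positions above the diagonal (future tokens) are set to negative infinity before the softmax, so they receive zero attention weight. It reuses the softmax helper from the self-attention sketch above; the sequence length and random scores are toy values.

```python
import numpy as np

seq_len = 5
# Entry (i, j) is -inf when j > i, i.e. token i may not attend to the future token j.
causal_mask = np.triu(np.full((seq_len, seq_len), -np.inf), k=1)

scores = np.random.default_rng(0).normal(size=(seq_len, seq_len))  # raw Q K^T / sqrt(d_k) scores
weights = softmax(scores + causal_mask, axis=-1)  # softmax helper from the self-attention sketch
print(np.round(weights, 2))  # each row sums to 1 and is zero strictly above the diagonal
```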
2.1.4. Positional Embedding

Position and order are crucial for certain tasks, such as understanding a sentence or a video. Position and order define the grammar of a sentence; they are integral to the semantics of sentences. The Transformer utilizes Multi-Head Self-Attention (MHSA) to avoid the recursive approach of RNNs, thus speeding up the training process. Additionally, it can capture long-range dependencies in sentences and handle longer inputs. When each token in a sentence passes through the Transformer's Encoder/Decoder stack, the model itself lacks any sense of position/order for each token (permutation invariance). Therefore, a method is still needed to incorporate the sequential information of tokens into the model. To enable the model to perceive the input sequence, positional information about the location of each token in the sentence can be added, and this technique is known as positional embedding (PE), which is used in the Transformer model to incorporate the sequential order of tokens into the input representation. Since the Transformer does not have recurrent connections, it lacks the inherent notion of token order present in recurrent neural networks. To address this, positional embedding assigns a unique vector to each token position in the input sequence. These positional embeddings are added to the word embedding before being fed into the model. By including positional information, the model can differentiate between tokens based on their position in the sequence. In the Transformer model, the core formula of the position embedding can be expressed as:

PE(pos, 2i) = \sin\left(\frac{pos}{10000^{2i/d_{\mathrm{model}}}}\right) \quad (3)

PE(pos, 2i+1) = \cos\left(\frac{pos}{10000^{2i/d_{\mathrm{model}}}}\right) \quad (4)

In these equations, PE represents the position embedding matrix, pos represents the position of a token in the sentence, i represents the dimension index of the position embedding, and d_model represents the hidden layer dimension of the Transformer model. By using sine and cosine functions and performing different calculations on the position (pos) and dimension (i), this formula generates unique position embedding values for each position and dimension. As a result, each token is assigned a unique position embedding vector, allowing the model to perceive the sequential information of tokens in the sentence. In practical applications, the position embedding matrix is added to the input word embedding matrix to combine position information and semantic information, thereby providing a more comprehensive input representation for the Transformer model.
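A small sketch of equations (3) and (4): it builds the sinusoidal position embedding matrix for a toy sequence length and model dimension, which would then be added element-wise to the word embedding matrix. The sizes are illustrative assumptions.

```python
import numpy as np

def sinusoidal_position_embedding(max_len, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos(...)."""
    pos = np.arange(max_len)[:, None]        # (max_len, 1) token positions
    i = np.arange(0, d_model, 2)[None, :]    # even dimension indices
    angle = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angle)
    pe[:, 1::2] = np.cos(angle)
    return pe

pe = sinusoidal_position_embedding(max_len=50, d_model=16)
print(pe.shape)  # (50, 16), added to a (50, 16) word embedding matrix
```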
Two commonly used positional encoding methods in Transformer are Absolute Positional Encoding and Relative Positional Encoding.

(1) Absolute Positional Encoding: It generates unique positional embedding values for each position and dimension by using sine and cosine functions. This method uses sine and cosine functions in the mentioned formula to calculate the positional embedding values and adds them to the word embeddings. Absolute Positional Encoding provides a unique encoding for each position, enabling the model to perceive the sequential information of words in the sentence.

(2) Relative Positional Encoding: It is an encoding method based on relative positional relationships. Relative Positional Encoding represents positional information by calculating the relative distances between words. This method is used in models like Transformer-XL [33], and Relative Positional Encoding can better capture the relative positional relationships between words when dealing with long sequences. Both of these positional encoding methods aim to provide the positional information of words in the input sequence to the Transformer model, enabling the model to better comprehend and process sequential data. The specific choice of positional encoding method depends on the specific application scenario and model design.

There are also other positional encoding methods applied to other models, such as RoPE [34] and ALiBi [35].

RoPE is a method that uses Absolute Positional Encoding to represent Relative Positional Encoding and is applied in the design of large language models like PaLM [36], LLaMA [9], and GLM-130B [37].

ALiBi does not add positional embeddings to word embeddings but instead adds a pre-defined bias matrix to the attention score based on the distance between tokens. It is applied in the design of large language models like BLOOM [38].

Some other positional encoding methods, such as mixed positional encoding, multi-digit positional encoding, and implicit positional encoding, are also used by some models.
2.2. Prompt Learning

Prompt learning serves as a widely adopted machine learning approach, particularly in the field of NLP. At its core, this methodology involves guiding a model to produce specific behaviors or outputs through the careful design of prompt statements. It is commonly employed to fine-tune and guide pre-trained LLMs for executing particular tasks or generating desired results. Researchers have observed that the design of specific prompt statements can steer pre-trained models to perform various tasks, such as question-answering, text generation, and semantic understanding [39; 40; 41; 42; 43; 44; 45; 46; 47; 48; 49; 50]. The strength of this approach lies in its ability to adapt to different tasks through simple modifications to prompt statements, eliminating the need for retraining the entire model. For LLMs like the GPT series and other pre-trained models, prompt learning provides a straightforward and powerful means for model fine-tuning. By supplying appropriate prompts, researchers and practitioners can customize the model's behavior, making it more suitable for specific domains or task requirements. In short, prompt learning is a machine learning approach that builds upon pre-trained language models and guides the model to perform various tasks through the design of prompt statements, offering increased flexibility for customizing model applications. In this section, we will introduce the basic knowledge of prompt learning.

2.2.1. Background and Overview

Prompt learning is a new approach to machine learning [51]. In the early days of natural language processing (NLP), researchers mainly used the fully supervised learning mode [52], which trained models for specific tasks on input and output example datasets of the target task. However, due to the limited training datasets, this method could not train high-quality models well, so early NLP relied more on feature engineering. With the emergence of neural network models and their use in the field of NLP, people began to pay attention to architecture engineering [53].

However, between 2017 and 2019, the learning approach of NLP models shifted from fully supervised learning to a new mode: the pre-train and fine-tune paradigm [54]. In this paradigm, a model with a fixed architecture is pre-trained as a language model to predict the probability of observed text data. Due to the abundant raw text data required for training language models, these language models can be trained on large datasets. During this process, language models can learn robust universal features of the language they are modeling. Then, by introducing additional parameters and fine-tuning them using task-specific objective functions, the PLM mentioned above will adapt to different downstream tasks. At this point, the focus of research shifted to objective engineering, which is to design training objectives during pre-training and fine-tuning. Since BERT, NLP has been using pre-training and fine-tuning methods for a long period of time, but this approach requires a new model to be fine-tuned for each task and cannot be shared. For an LLM, this amounts to customizing the model for each task, which is very inefficient [51].

Prompt learning has demonstrated amazing capabilities in GPT-3. The GPT-3 model can handle many tasks with only a few samples by using natural language prompts and task demonstrations as context, without updating the parameters of the underlying model. Prompt learning replaces the process of pre-training and fine-tuning with pre-training, prompts, and predictions. In this paradigm, the downstream task is not adapted to the pre-trained LM through objective engineering; instead, the downstream task is redefined with the help of text prompts, making it look more like the tasks solved during the original LM training. For prompt learning, it is only necessary to insert different prompt parameters to adapt to different tasks. That is to say, each task only needs to train the prompt parameters separately, without the need to train the entire pre-trained language model [55]. This approach greatly improves the efficiency of using pre-trained language models and significantly shortens training time.
2.2.2. Basic Components and Process of Prompt Learning

In the traditional pre-train + fine-tune paradigm, there is a gap between the pre-training stage and downstream tasks [51], while prompt learning can maintain consistency between the pre-training target format and the downstream task output format, that is, align the form of downstream tasks with the form of PLMs' pre-training tasks. When training PLMs, we can transform the original target task into a fill-in-the-blank or continuation task similar to the pre-training task of PLMs by constructing a prompt. The advantage of this method is that, through a series of appropriate prompts, we can use a single language model to solve various downstream tasks.

Prompt learning optimizes the performance of models on different tasks by using pre-trained models and designing appropriate templates. Prompt learning consists of prompt templates, answer mappings, and pre-trained language models. The prompt template is the main body of the prompt; fill-in-the-blank [56] and generate-based-on-prefix [57] are two common types of prompt learning templates. The fill-in-the-blank template selects one or more positions in the text and represents them with [MASK] tags, used to prompt the model to fill in the corresponding words; prefix-based template generation involves adding a specific prefix before a sentence to guide the model in generating appropriate text. Answer mapping is the process of evaluating all possible answers according to a probability distribution, selecting the most likely answer as the predicted output, and converting it into appropriate category mapping words. This process typically involves converting labels into natural language vocabulary, known as a Verbalizer [58].

The workflow of prompt learning mainly includes the following four parts:

(1) Use PLMs as base encoders
(2) Add additional context (template) with a [MASK] position
(3) Project labels to label words (verbalizer)
(4) Bridge the gap between pre-training and fine-tuning

After defining the template and answer space, we need to choose a suitable pre-trained language model. There are now various pre-trained models (PTMs) with good performance, and when selecting a model, one usually considers its paradigm, such as auto-regressive, masked language modeling, or encoder-decoder. Based on this, for the summarization task, the Bidirectional and Auto-Regressive Transformers (BART) model is a more suitable choice.
Theselectionofatemplateplaysaveryimportantroleinthepromptlearning.Templatescangenerallybe
distinguishedbasedonwhethertheyaremanuallyspecified:artificiallyconstructedtemplatesorautomaticallysearched
templates.Artificiallycreatedtemplatesarethemostintuitivemethod,easytounderstand,andhavegoodperformance
inpracticalapplications.However,artificiallyconstructedtemplatesalsohavesomedrawbacks:priorknowledgeis
requiredwhendesigningtemplatesmanually[59],andtheremaybefailures[60].Therearetwotypesofautomatically
YihengLiuetal.:PreprintsubmittedtoElsevier Page5of30 AComprehensiveOverviewfromTrainingtoInference
generatedtemplates:discretepromptsandcontinuousprompts.Discretepromptsallowthemodeltoselecttheoptimal
templateinasetofdiscretetemplatespaces,whilecontinuouspromptsallowthelanguagemodeltoautomatically
train a prompt. According to research, using multiple templates [61] can improve the performance of the model. The simplest way to use multiple templates and aggregate them into a single answer is to take the average [60] or a weighted average [58] of each template's output.
Verbalizer is the process of mapping labels to label words, and the selection of verbalizers is also crucial for prompt learning. There are two ways to construct a verbalizer: manual definition and automatic search. Manual definition requires professional knowledge and may have disadvantages such as strong subjectivity and a small coverage area. To address this, we can choose among the following solutions: (1) manually design with human prior knowledge; (2) start with an initial label word, then paraphrase and expand; (3) start with an initial label word, then expand it using external knowledge; (4) decompose the label into multiple tokens; (5) use virtual tokens and optimize the label embedding. In addition, we can use external knowledge bases to expand and improve label words, thereby achieving better text classification results [62].
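As an illustrative sketch of the template and verbalizer components described above, the following snippet scores a manually designed fill-in-the-blank prompt with an off-the-shelf masked language model; the model name, template wording, and label words are assumptions made for this example rather than choices prescribed by the surveyed methods.

```python
# Illustrative sketch: a fill-in-the-blank template plus a manual verbalizer,
# scored with an off-the-shelf masked language model (assumed: bert-base-uncased).
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")

# Template: wrap the input text and leave one [MASK] position for the label word.
text = "The movie was a waste of two hours."
prompt = f"{text} Overall, it was a [MASK] film."

# Verbalizer: map each class label to one or more label words.
verbalizer = {"positive": ["great", "good"], "negative": ["terrible", "bad"]}

inputs = tokenizer(prompt, return_tensors="pt")
mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]

with torch.no_grad():
    logits = model(**inputs).logits[0, mask_pos]      # scores over the vocabulary

# Aggregate label-word scores per class and pick the most likely class.
scores = {
    label: max(logits[tokenizer.convert_tokens_to_ids(w)].item() for w in words)
    for label, words in verbalizer.items()
}
print(max(scores, key=scores.get))                     # expected: "negative"
```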
2.2.3. Learning strategy
The emergence of the new paradigm of prompt learning has brought significant changes to the training process. The learning strategies for prompt learning mainly include the following: (1) Pre-training then fine-tuning, which is the traditional pre-training + fine-tuning method [63]; (2) Tuning-free prompting, relying solely on the design of prompts for the LM to directly provide answers [64]; (3) Fixed-LM prompt tuning, which updates only the prompt-related parameters using downstream task training data; (4) Fixed-prompt LM tuning, which fine-tunes the parameters of the LM while keeping the prompt parameters fixed; (5) Prompt + LM tuning, a strategy that updates both the prompt-related parameters and the LM parameters.
These different learning strategies can be selected based on specific tasks and needs. Pre-training + fine-tuning is the most common strategy, suitable for most tasks [63]. Tuning-free prompting is suitable for simple tasks and can greatly reduce training time and computational resource consumption. Fixed-LM prompt tuning and fixed-prompt LM tuning are suitable for tasks that require more precise control and can optimize model performance by adjusting prompt parameters or language model parameters. Combining prompt tuning and LM fine-tuning merges the advantages of both and can further improve model performance [51].
Insummary,Promptlearningprovidesuswithanewtrainingparadigmthatcanoptimizemodelperformance
onvariousdownstreamtasksthroughappropriatepromptdesignandlearningstrategies.Choosingtheappropriate
template,constructinganeffectiveverbalizer,andadoptingappropriatelearningstrategiesareallimportantfactorsin
improvingtheeffectivenessofpromptlearning.
3. TrainingofLargeLanguageModels
ThetrainingofLLMscanbebroadlydividedintothreesteps.Thefirststepinvolvesdatacollectionandprocessing.
Thesecondstepencompassesthepre-trainingprocess,whichincludesdeterminingthemodel’sarchitectureandpre-
trainingtasksandutilizingsuitableparalleltrainingalgorithmstocompletethetraining.Thethirdstepinvolvesfine-
tuningandalignment.Inthissection,wewillprovideanoverviewofthemodeltrainingtechniques.Thiswillincludean
introductiontotherelevanttrainingdatasets,datapreparationandpreprocessing,modelarchitecture,specifictraining
methodologies,modelevaluation,andcommonlyusedtrainingframeworksforLLMs.
3.1. DataPreparationandPreprocessing
3.1.1. Dataset
TrainingLLMsrequirevastamountsoftextdata,andthequalityofthisdatasignificantlyimpactsLLM
performance.Pre-trainingonlarge-scalecorporaprovidesLLMswithafundamentalunderstandingoflanguageand
somegenerativecapability.ThefirststepinLLMtrainingiscollectingsubstantialcorporaofnaturallanguagetext.
Pre-trainingdatasourcesarediverse,commonlyincorporatingwebtext,conversationaldata,andbooksasgeneral
pre-trainingcorpora.Additionally,someresearcheffortsintroducespecializeddatafromprofessionaldomains,such
ascodeorscientificdata,toenhanceLLMcapabilitiesinthosefields.LeveragingdiversesourcesoftextdataforLLM
trainingcansignificantlyenhancethemodel’sgeneralizationcapabilities.Inthefollowingsection,wewillpresent
thecommonlyuseddatasetsfortrainingLLMsasshowninTable1.Thesecorporaarecategorizedinto5groupsfor
discussion.
Table 1
Commonlyusedcorporainformation.
Corpora Type Links
BookCorpus[65] Books https://github.com/soskek/bookcorpus
Gutenberg[66] Books https://www.gutenberg.org
Books1[8] Books Notopensourceyet
Books2[8] Books Notopensourceyet
CommonCrawl[67] CommonCrawl https://commoncrawl.org
C4[68] CommonCrawl https://www.tensorflow.org/datasets/catalog/c4
CC-Stories[69] CommonCrawl Notopensourceyet
CC-News[70] CommonCrawl https://commoncrawl.org/blog/news-dataset-available
RealNews[71] CommonCrawl https://github.com/rowanz/grover/tree/master/realnews
RefinedWeb[72] CommonCrawl https://huggingface.co/datasets/tiiuae/falcon-refinedweb
WebText RedditLink Notopensourceyet
OpenWebText[73] RedditLink https://skylion007.github.io/OpenWebTextCorpus/
PushShift.io[74] RedditLink https://pushshift.io/
Wikipedia[75] Wikipedia https://dumps.wikimedia.org/zhwiki/latest/
BigQuery[76] Code https://cloud.google.com/bigquery
CodeParrot Code https://huggingface.co/codeparrot
thePile[77] Other https://github.com/EleutherAI/the-pile
ROOTS[78] Other https://huggingface.co/bigscience-data
Books:TwocommonlyutilizedbooksdatasetsforLLMstrainingareBookCorpus[65]andGutenberg[66].These
datasetsincludeawiderangeofliterarygenres,includingnovels,essays,poetry,history,science,philosophy,andmore.
WidelyemployedbynumerousLLMs[9;79],thesedatasetscontributetothemodels’trainingbyexposingthemtoa
diversearrayoftextualgenresandsubjectmatter,fosteringamorecomprehensiveunderstandingoflanguageacross
variousdomains.
CommonCrawl:CommonCrawl[67]managesanaccessiblerepositoryofwebcrawldata,freelyavailablefor
utilizationbyindividualsandorganizations.Thisrepositoryencompassesavastcollectionofdata,comprisingover
250billionwebpagesaccumulatedoveraspanof16years.Establishedin2007,CommonCrawlhasevolvedintoa
widelyrecognizedandreferencedcorpusintheacademicandresearchcommunities,citedinmorethan10,000research
papers.Thiscontinuouslyexpandingcorpusisadynamicresource,withanadditionof3–5billionnewwebpageseach
month.Itssignificanceextendstothefieldofnaturallanguageprocessing,whereitservesasaprimarytrainingcorpus
innumerouslargelanguagemodels.Notably,asubstantialportionoftherawtokensemployedintrainingGPT-3
[8],amountingto82%,issourcedfromtheCommonCrawl.However,duetothepresenceofasubstantialamountof
low-qualitydatainwebarchives,preprocessingisessentialwhenworkingwithCommonCrawldata.Currently,four
commonlyusedfiltereddatasetsbasedonCommonCrawlareavailable:C4[68],CC-Stories[69],CC-News[70],and
RealNews[71].
RedditLinks:Redditisasocialmediaplatformwhereuserscansubmitlinksandposts,andotherscanvoteon
themusingthe"upvote"or"downvote"system.Thischaracteristicmakesitavaluableresourceforcreatinghigh-quality
datasets.
Wikipedia:Wikipedia[75],afreeandopenonlineencyclopediaproject,hostsavastrepositoryofhigh-quality
encyclopediccontentspanningawidearrayoftopics.TheEnglishversionofWikipediaisextensivelyutilizedinthe
trainingofmanyLLMs[8;9;80],servingasavaluableresourceforlanguageunderstandingandgenerationtasks.
Additionally,Wikipediaisavailableinmultiplelanguages,providingdiverselanguageversionsthatcanbeleveraged
fortraininginmultilingualenvironments.
Code:Thereisalimitedavailabilityofpubliclyaccessiblecodedatasetsatpresent.Existingeffortsprimarily
involvewebscrapingofcodewithopen-sourcelicensesfromtheinternet.ThemainsourcesincludeGithubandStack
Overflow.
WehaveorganizeddatasetsutilizedbydistinctLLMs.Duringthetrainingprocess,LLMsaretypicallytrainedon
multipledatasets,asspecifiedinTable2forreference.
Table 2
DatasetsutilizedbydistinctLLMs
LLMs Datasets
GPT-3[8] CommonCrawl[67],WebText2[8],Books1[8],Books2[8],Wikipedia[75]
LLaMA[9] CommonCrawl[67],C4[68],Wikipedia[75],Github,Books,Arxiv,StackExchange
PaLM[36] SocialMedia,Webpages,Books,Github,Wikipedia,News(total780Btokens)
T5[68] C4[68],WebText,Wikipedia,RealNews
CodeGen[81] thePile,BIGQUERY,BIGPYTHON
CodeGeeX[82] CodeParrot,thePile,Github
GLM[37] BooksCorpus,Wikipedia
BLOOM[38] ROOTS
OPT[83] BookCorpus,CCNews,CC-Stories,thePile,Pushshift.io
3.1.2. Datapreprocessing
Onceanadequatecorpusofdataiscollected,thesubsequentstepisdatapreprocessing.Thequalityofdata
preprocessingdirectlyimpactsthemodel’sperformanceandsecurity.Thespecificpreprocessingstepsinvolvefiltering
low-qualitytext,includingeliminatingtoxicandbiasedcontenttoensurethemodelalignswithhumanethical
standards.Italsoincludesdeduplication,removingduplicatesinthetrainingset,andexcludingredundantcontent
inthetestsettomaintainthesampledistributionbalance.Privacyscrubbingisappliedtoensurethemodel’ssecurity,
preventinginformationleakageorotherprivacy-relatedconcerns.Additionally,iffine-tuningLLMsisconsidered,
expandingthevocabularyshouldalsobeconsidered.Ontheotherhand,LLaMA2models[10]representanotable
exception.Thesemodelsforegofilteringintheirpretrainingcorpus,asaggressivefiltrationmightaccidentallyfilter
outsomedemographicgroups.ThisapproachenhancesthegeneralizabilityofthebaseLLaMA2models,making
themmoreadeptacrossarangeofdownstreamtasks,suchashatespeechdetectionandprivacyde-identification.
Observationsindicatethatabstainingfromadditionalfilteringinthepretrainingdataenablesthebasemodeltoachieve
reasonablesafetyalignmentwithfewerexamples[10].Whilethisincreasesbothgeneralizabilityandsafetyalignment
efficiency,theimplementationofadditionalsafetymitigationsisstillimperativepriortopublicdeployment,asfurther
discussedinSection3.5.4.
Qualityfiltering:Filteringlow-qualitydataistypicallydoneusingheuristic-basedmethodsorclassifier-based
methods.Heuristicmethodsinvolveemployingmanuallydefinedrulestoeliminatelow-qualitydata[84;72].For
instance,rulescouldbesettoretainonlytextcontainingdigits,discardsentencescomposedentirelyofuppercase
letters,andremovefileswithasymbolandwordratioexceeding0.1,andsoforth.Classifier-basedmethodsinvolve
trainingaclassifieronahigh-qualitydatasetsuchasWebText[85]tofilteroutlow-qualitydatasets.
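The following minimal sketch illustrates the heuristic, rule-based style of quality filtering described above; the specific thresholds (minimum word count, symbol-to-word ratio, uppercase-line ratio) are illustrative assumptions rather than values from any particular pipeline.

```python
# Minimal sketch of heuristic quality filtering; thresholds are illustrative.
import re

def keep_document(text: str, max_symbol_word_ratio: float = 0.1, min_words: int = 50) -> bool:
    words = text.split()
    if len(words) < min_words:
        return False                       # too short to be useful
    symbols = re.findall(r"[#{}\[\]<>|\\^~]", text)
    if len(symbols) / max(len(words), 1) > max_symbol_word_ratio:
        return False                       # likely markup or boilerplate
    lines = [l for l in text.splitlines() if l.strip()]
    upper = sum(1 for l in lines if l.isupper())
    if lines and upper / len(lines) > 0.5:
        return False                       # mostly uppercase lines
    return True

corpus = ["SOME SHOUTY HEADER\nANOTHER HEADER", "A normal paragraph of prose ... " * 20]
filtered = [doc for doc in corpus if keep_document(doc)]
print(len(filtered))                        # only the prose document survives
```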
Deduplication:Languagemodelsmaysometimesrepetitivelygeneratethesamecontentduringtextgeneration,
potentiallyduetoahighdegreeofrepetitioninthetrainingdata.Extensiverepetitioncanleadtotraininginstability,
resultinginadeclineintheperformanceofLLMs[86].Additionally,itiscrucialtoconsideravoidingdataset
contaminationbyremovingduplicateddatapresentinboththetrainingandtestingset[87].
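A minimal sketch of exact deduplication by hashing normalized documents is shown below; production pipelines typically add near-duplicate detection (e.g., MinHash/LSH) on top of this, which is omitted here.

```python
# Sketch of exact deduplication by hashing normalized documents.
import hashlib

def normalize(text: str) -> str:
    return " ".join(text.lower().split())

def deduplicate(docs):
    seen, unique = set(), []
    for doc in docs:
        digest = hashlib.sha256(normalize(doc).encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(doc)
    return unique

print(len(deduplicate(["Hello  world", "hello world", "Something else"])))  # 2
```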
Privacyscrubbing:LLMs,astext-generatingmodels,aretrainedondiversedatasets,whichmayposeprivacy
concernsandtheriskofinadvertentinformationdisclosure[88].Inthepreprocessingphaseoflanguagedatasets,itis
imperativetoaddressprivacyconcernsbysystematicallyremovinganysensitiveinformation.Thisinvolvesemploying
techniquessuchasanonymization,redaction,ortokenizationtoeliminatepersonallyidentifiabledetails,geolocation,
andotherconfidentialdata.Bycarefullyscrubbingthedatasetofsuchsensitivecontent,researchersanddeveloperscan
ensurethatthelanguagemodelstrainedonthesedatasetsupholdprivacystandardsandmitigatetheriskofunintentional
disclosureofprivateinformation.Itisessentialtostrikeabalancebetweendatautilityandprivacyprotection,fostering
responsibleandethicaluseoflanguagedatasetsinvariousapplications.
Filteringouttoxicandbiasedtext:Inthepreprocessingstepsoflanguagedatasets,acriticalconsiderationisthe
removaloftoxicandbiasedcontenttoensurethedevelopmentoffairandunbiasedlanguagemodels.Thisinvolves
implementingrobustcontentmoderationtechniques,suchasemployingsentimentanalysis,hatespeechdetection,and
biasidentificationalgorithms.Byleveragingthesetools[89],researcherscansystematicallyidentifyandfilterouttext
thatmayperpetuateharmfulstereotypes,offensivelanguage,orbiasedviewpoints.
3.2. Architecture
Currently,allLLMsarebuiltupontheTransformerarchitecture,allowingthesemodelstoscaletoseveral10billion
orevenatrillionparameters.Typically,PLMarchitecturesfallintothreecategories:Encoder-only[90],Encoder-
decoder[68]andDecoder-only[18].TheEncoder-onlyarchitectureisnolongeremployedinthelatestLLMsand
won’tbefurtherdiscussedhere.Instead,thissectionwillfocusonintroducingtheEncoder-decoderandDecoder-only
architectures.
Figure 1:ThefiguresfromlefttorightrepresenttheEncoder-decoderarchitecture,CausalDecoderarchitecture,Prefix
Decoderarchitecture,andtheirmaskconfigurations,respectively.Thisdiagramillustratestherangeoftokensthateach
inputtokencanattendto.
3.2.1. Encoder-decoderArchitecture
TheEncoder-decoderarchitectureofLLMsisbuiltuponthetraditionalTransformerEncoder-decoderarchitecture.
TheEncoder-decoderarchitectureconsistsoftwomaincomponents:theEncoderandtheDecoder.Eachpartofthe
EncoderiscomposedofmultiplelayersofTransformer’sMulti-HeadSelf-Attentionlayers,whichencodetheinput
sequence.TheDecoder,ontheotherhand,utilizescross-attentionovertheoutputrepresentationoftheEncoderand
generatesthetargetsequenceinanautoregressivemanner.Theencoder-decoderarchitectureservesasthefoundation
forprominentLLMssuchasT5[68],flan-T5[91],andBART[92].
3.2.2. Decoder-onlyArchitecture
LLMswithaDecoder-onlyarchitectureutilizethedecodercomponentofthetraditionalTransformerarchitecture.
UnliketheEncoder-Decoderarchitecture,whichincorporatesbothanencoderandadecoder,theDecoder-only
architectureissolelyfocusedonthedecodingprocess.Inthisconfiguration,themodelsequentiallygeneratestokens,
attendingtoprecedingtokensinthesequence.Thisarchitecturehasbeenappliedtovariouslanguagegenerationtasks,
showcasingitseffectivenessinvarioustaskssuchastextgenerationwithouttheneedforanexplicitencodingphase.
TheDecoder-onlyarchitecturecanbefurtherclassifiedintotwocategories:theCausalDecoderarchitectureandthe
PrefixDecoderarchitecture.
TheCausalDecoderArchitecture:IntheCausalDecoderarchitecture,eachtokeninthemodelinputsequence
canonlyattendtopastinputtokensanditselfduringthedecodingprocess.Itachievesunidirectionalattentiontothe
inputsequencebyusingaspecificmaskasshowninFigure1.Infact,differentarchitecturesaremainlyimplementedby
configuring different mask matrices. The figure illustrates a comparison of mask configurations between the Encoder-decoder and Decoder-only architectures (including the Causal Decoder and Prefix Decoder). The representative LLMs for
theCausalDecoderarchitecturearetheGPTseries[18;7;8;93;19].TheGPTseriesofLLMsarecurrentlyknownfor
theirsuperiorperformance,withtheirfoundationalCausalDecoderarchitecturewidelyappliedinotherLLMssuchas
BLOOM[38],OPT[83],Gopher[84],andLLaMA[9].
ThePrefixDecoderArchitecture:ThePrefixDecoderarchitecturecombinestheadvantagesofboththeEncoder-
decoderandCausalDecoderarchitectures.Itleveragesitsuniquemaskconfigurations,asillustratedinFigure1,
enablingbidirectionalattentionfortokensintheprefixwhilemaintainingunidirectionalattentionforgenerating
subsequenttokens[54].Thisdesignallowsfortheautoregressivegenerationoftheoutputsequencewiththeflexibility
toattendbi-directionallytotheprefixtokens.RepresentativeLLMsutilizingthePrefixDecoderarchitectureinclude
PaLM[36]andGLM[37].
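The mask configurations sketched in Figure 1 can be made concrete with a few lines of code. The snippet below is an illustrative construction of a causal mask and a prefix mask for a toy sequence length; it is not taken from any specific model implementation.

```python
# Illustrative attention masks (cf. Figure 1) for a sequence of length T with a
# prefix of length P. True means "may attend", False means "masked out".
import torch

def causal_mask(T: int) -> torch.Tensor:
    # Token i attends to tokens 0..i only.
    return torch.tril(torch.ones(T, T)).bool()

def prefix_mask(T: int, P: int) -> torch.Tensor:
    # Prefix tokens attend bidirectionally among themselves; the remaining
    # tokens are generated autoregressively and attend causally.
    mask = causal_mask(T)
    mask[:P, :P] = True
    return mask

print(causal_mask(4).int())
print(prefix_mask(4, P=2).int())
```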
3.3. Pre-trainingTasks
LargeLanguageModels(LLMs)typicallylearnrichlanguagerepresentationsthroughapre-trainingprocess.
Duringpre-training,thesemodelsleverageextensivecorpora,suchastextdatafromtheinternet,andundergotraining
throughself-supervisedlearningmethods.Languagemodelingisonecommonformofself-supervisedlearningtask
inwhichthemodelistaskedwithpredictingthenextwordinagivencontext.Throughthistask,themodelacquires
theabilitytocaptureinformationrelatedtovocabulary,grammar,semantics,andtextstructure.
Inlanguagemodeling[18;7;8;36],themodelisrequiredtopredictthenextwordinagivencontext.Thistask
enablesthemodeltodevelopanuancedunderstandingoflanguage.Specifically,themodelobserveslargeamounts
oftextualdataandattemptstopredictthenextwordateachpositioninthetext.Thisgraduallearningprocess
allowsthemodeltocapturethepatternsandinformationinherentinlanguage,encodingavastamountoflinguistic
knowledgeintoitsparameters.Oncepre-trainingiscomplete,thesemodelparameterscanbefine-tunedforvarious
naturallanguageprocessingtaskstoadapttospecifictaskrequirements.Theobjectiveoflanguagemodelingistotrain
a model to maximize the likelihood of textual data. For a given text sequence, denoted as $w_1, w_2, \ldots, w_T$, where $w_t$ represents the token at position $t$ and $P(w_t \mid w_1, w_2, \ldots, w_{t-1})$ is the probability of predicting $w_t$ given the preceding context $w_1, w_2, \ldots, w_{t-1}$, the objective function for language modeling can be expressed using the cross-entropy loss. Here, we define the objective as maximizing the conditional probability of the given text sequence:

$$L_{LM} = \frac{1}{T} \sum_{t=1}^{T} -\log P(w_t \mid w_1, w_2, \ldots, w_{t-1}) \qquad (5)$$
LanguagemodelingservesasaprevalentpretrainingobjectiveformostLLMs.Inadditiontolanguagemodeling,
thereareotherpretrainingtaskswithintherealmoflanguagemodeling.Forinstance,somemodels[68;37]usetext
withcertainportionsrandomlyreplaced,andthenemployautoregressivemethodstorecoverthereplacedtokens.The
primarytrainingapproachinvolvestheautoregressiverecoveryofthereplacedintervals.
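As a concrete illustration of the objective in Equation (5), the following sketch computes the averaged next-token cross-entropy for a batch of token ids; the logits are assumed to come from a decoder-only model with shape (batch, sequence length, vocabulary size).

```python
# Sketch of the language-modeling loss in Equation (5): shift targets by one
# position and average the per-token cross-entropy.
import torch
import torch.nn.functional as F

def lm_loss(logits: torch.Tensor, input_ids: torch.Tensor) -> torch.Tensor:
    shift_logits = logits[:, :-1, :]     # logits at position t predict token t+1
    shift_labels = input_ids[:, 1:]      # token t is predicted from tokens < t
    return F.cross_entropy(
        shift_logits.reshape(-1, shift_logits.size(-1)),
        shift_labels.reshape(-1),
    )

logits = torch.randn(2, 8, 100)          # toy batch: 2 sequences, vocabulary of 100
input_ids = torch.randint(0, 100, (2, 8))
print(lm_loss(logits, input_ids).item())
```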
3.4. ModelTraining
3.4.1. ParallelTraining
Intheparalleltrainingmentionedbelow,therewillbediscussionsaboutcollectivecommunicationwhichhelpsus
betterunderstandtheprinciplesofparalleltraining.Figure2haslistedfivereductionrelationships.1)Broadcast:Send
datafromoneGPUtootherGPUs.2)Reduce:Reduce(sum/average)dataofallGPUs,sendtooneGPU.3)AllReduce:
ReducealldataofGPUs,sendtoallGPUs.4)ReduceScatter:ReducealldataofGPUs,sendportionstoallGPUs.5)All
Gather:GatherdataofallGPUs,sendallGPUs.
Data Parallel: The process of data parallelism [94] is shown in Figure 3: there is a parameter server that stores
themodel’sparametersandtheentirebatchofdata.EachGPUusesbroadcasttosynchronizethemodelparameters
anddividesthedataintooneportionperGPU,witheachGPUreceivingaportionofthedata.EachGPUusesthe
completemodelparametersandaportionofthedatatoperformforwardandbackwardpropagation.Thisway,the
gradientsareobtainedforeachGPU.Finally,weaggregatethegradientsandsendtheaggregatedgradientsbacktothe
parameterserver,wheretheoriginalmodelparametersandtheaggregatedcompletegradientsareavailable.Withthis
information,wecanuseanoptimizertoupdatethemodelparameters.Theupdatedparameterswillthenenterthenext
roundofmodeltrainingiterations.Distributeddataparallelism[95]abandonstheuseofaparameterserverandinstead
employsall-reduceongradientinformation,ensuringthateachGPUreceivesthesamegradientinformation.Theresult
ofall-reduceiscommunicatedtoallGPUs,allowingthemtoindependentlyupdatetheirrespectivemodeloptimizers.
Aftereachroundofupdates,themodel’sparameters,gradients,andthehistoricalinformationoftheoptimizerare
consistentacrossallGPUs.
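A minimal sketch of distributed data parallelism with PyTorch's DistributedDataParallel is given below; it assumes the process group is launched externally (e.g., with torchrun), and the model, batch size, and optimizer are placeholders.

```python
# Minimal sketch of distributed data parallelism: gradients are all-reduced
# automatically inside backward(). Launch is assumed to be handled by torchrun,
# which sets RANK / WORLD_SIZE / LOCAL_RANK in the environment.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group("nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Linear(1024, 1024).cuda()
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):
        x = torch.randn(32, 1024, device="cuda")   # each rank sees its own shard of data
        loss = model(x).pow(2).mean()
        loss.backward()                             # gradients are all-reduced here
        optimizer.step()
        optimizer.zero_grad()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```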
YihengLiuetal.:PreprintsubmittedtoElsevier Page10of30 AComprehensiveOverviewfromTrainingtoInference
Figure 2:Fivecollectivecommunicationsthatareusedbyparalleltrainingmethods.
TheoccupationofGPUmemoryofintermediateresultsisrelatedtothebatchsize,sentencelength,andmodel
dimensions.Whenusingdataparallelism,abatchofdataisdividedintomanyparts,allowingeachGPUtoprocessa
portionofthedata.Inequivalentterms,thebatchsizeprocessedoneachGPUisreducedtooneovertheoriginalnumber
ofGPUs.Dataparallelismhasreducedtheinputdimensions,resultinginanoverallreductionintheintermediateresults
ofthemodel.Adrawbackisthattosupportmodeltraining,eachGPUneedstoreceiveatleastonepieceofdata.In
themostextremecase,wheneachGPUreceivesonlyonepieceofdata,ourparameters,gradients,andoptimizerstill
needtobefullystoredontheGPU.Evenifwedon’tstoreanyintermediateresultsontheGPU,ourmodelmaystillbe
unabletoperformcomputationsonasingleGPU.
ModelParallel:Modelparallelism[96]wasfirstintroducedbyMegatron-LMtoalleviatememorypressure.From
figure4,wecanclearlyunderstandtheoverallarchitectureofmodelparallelism.Takingadvantageofthemostcommon
linearlayerintheTransformerasanexample,theparametersofthelinearlayerformamatrixofsizeA*B,andthe
input to the linear layer is a vector of size B*1. Representing this as $y_{A*B} = W_{A*B}\, x_B$, we can horizontally partition the model's parameters into many segments using the properties of matrix multiplication, each segment being of size $\frac{A}{n} \times B$. Using these properties, we can move $x_B$ inside the brackets, and finally, the result of the linear layer is obtained by multiplying each small parameter matrix with the input. Through this approach, the parameters of the linear layer can be distributed across multiple GPUs. However, it is
Figure 3:Thearchitectureofdataparallelismanddistributeddataparallelism.Thediagramillustratesthedifference
betweendataparallelismanddistributeddataparallelismandtheadvantagesofdistributeddataparallelism.
crucialtoensurethattheinputstothemodelonmultipleGPUsareidentical.Insteadofusingadataparallelapproach
topartitionthedata,weneedtoensurethattheinputsobtainedoneachGPUarethesame,meaningtheybelongto
thesamebatchofdata.WecanthenpartitionaparameterlikethelinearlayeracrossGPUs,witheachGPUreceiving
a small portion of the matrix. By performing model calculations with this small portion and the data, we obtain a sub-result, as shown in Formula 6. The results of these computations need to be concatenated using the all-gather operator and communicated to all GPUs.
$$y_{A*B} = W_{A*B}\, x_B = \left[ W^{(1)}_{\frac{A}{n}*B};\; W^{(2)}_{\frac{A}{n}*B};\; \ldots;\; W^{(n)}_{\frac{A}{n}*B} \right] x_B = \left[ W^{(1)}_{\frac{A}{n}*B}\, x_B;\; W^{(2)}_{\frac{A}{n}*B}\, x_B;\; \ldots;\; W^{(n)}_{\frac{A}{n}*B}\, x_B \right] \qquad (6)$$
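The row-wise partitioning in Equation (6) can be simulated on a single process as follows; the concatenation stands in for the all-gather step, and the sizes are illustrative.

```python
# Toy, single-process simulation of Equation (6): the weight matrix of a linear
# layer is split row-wise across n "devices"; each shard computes its sub-result,
# and the all-gather is simulated here by a simple concatenation.
import torch

A, B, n = 8, 4, 2
W = torch.randn(A, B)
x = torch.randn(B)

shards = W.chunk(n, dim=0)                 # n shards of shape (A/n, B), one per GPU
partials = [w_i @ x for w_i in shards]     # each device computes W^(i) x_B
y_parallel = torch.cat(partials)           # "all-gather" of the sub-results

assert torch.allclose(y_parallel, W @ x, atol=1e-6)
```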
ZeRO: ZeRO [97] is a framework built on data parallelism. During the parameter updating process on each GPU, the same set of parameters is used, leading to computational redundancy. Each GPU uses reduce-scatter to eliminate this redundancy and obtain its portion of the gradient results. After updating its portion of the model parameters on each GPU, an all-gather operation is performed to synchronize the parameters across all GPUs. After the all-gather operation, the original gradient no longer needs to be saved on the graphics card and can be removed. Figure 5 shows the update process of ZeRO. In ZeRO1, the original gradient is removed after backward propagation, while in ZeRO2, the reduce-scattered gradient partition (gradient*) is computed in advance during backward propagation, and only the gradient* is saved on the graphics card, so the full gradient can be removed. This way, the deletion of the gradient is advanced, leading to further savings in GPU
Figure4:Theoverallarchitectureofmodelparallelism.Theleftsideofthediagramshowstheprocessofmodelparallelism,
andtherightsideshowsthememoryusageofparameters,gradients,andoptimizersinthegraphicscardofthemodel
parallelismmethod.
memoryspace.ZeRO3conductsadetaileddivisionofthemodelparameters.Eachgraphicscardretainsonlyaportion
ofthegradientsforupdating,andparameterupdatesalsoonlyaffectaportionofthemodelparameters.Therefore,
eachgraphicscardonlyneedstostoretheparameters,gradients,andoptimizerrelatedtothepartoftheparameters
itisresponsiblefor.Duringforwardandbackwardpropagation,anall-gatheroperationisrequiredonce,andafterthe
operationiscomplete,themodelparametersarereleasedfromthegraphicscard.Zero3doesnotuseallgatherduring
parameterupdates,butitrequiresanall-gatheroperationduringbothforwardandbackwardpropagation,addingone
communicationstep.ComparedtoZeRO2,ZeRO3isanalgorithmthattradestimeforspace.
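As a hedged illustration, ZeRO can be enabled in practice through a configuration passed to the DeepSpeed library (introduced later in Section 3.7); the keys below follow DeepSpeed's configuration schema, while the batch size, precision, and offload settings are illustrative assumptions.

```python
# Hedged sketch: enabling ZeRO stage 3 (with optional CPU offload, cf. Section
# 3.4.3) through a DeepSpeed configuration dictionary. Values are illustrative.
import deepspeed
import torch

ds_config = {
    "train_batch_size": 32,
    "fp16": {"enabled": True},
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
    "zero_optimization": {
        "stage": 3,                                  # partition params, grads, optimizer states
        "offload_optimizer": {"device": "cpu"},      # move optimizer states to CPU memory
        "offload_param": {"device": "cpu"},          # optionally offload parameters as well
    },
}

model = torch.nn.Sequential(torch.nn.Linear(1024, 4096), torch.nn.Linear(4096, 1024))
engine, optimizer, _, _ = deepspeed.initialize(
    model=model, model_parameters=model.parameters(), config=ds_config
)
```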
PipelineParallel:Pipelineparallelism[98]andmodelparallelismsharesimilarities.Inmodelparallelism,linear
layersaredividedintomanysmallmatrices,whicharethendistributedtodifferentGPUs.Forpipelineparallelism,
differentlayersofthemodelareassignedtodifferentGPUs.Specifically,ifwehaveann-layertransformer,wecan
assignthe𝑙𝑎𝑦𝑒𝑟 𝑖ofthetransformertothe𝐺𝑃𝑈 𝑖,andsoon.Duringtheforwardpropagationofthemodel,weneed
toperformthecomputationofthe𝑙𝑎𝑦𝑒𝑟 𝑖onthe𝐺𝑃𝑈 𝑖,thenpasstheresulttothe𝐺𝑃𝑈 𝑖+1 .The𝐺𝑃𝑈 𝑖+1 receivesthe
outputfromthe 𝐺𝑃𝑈 𝑖,performsthecomputationforthatlayerandpassestheresulttothenextGPU.Thismethod
partitionstheparameters,gradients,optimizer,andintermediateresultsforeachlayer.
3.4.2. MixedPrecisionTraining
Inrecentyears,topre-trainextremelylargelanguagemodels,someresearch[99]hasbeguntoutilize16-bitfloating-
pointnumbers(FP16)toreducememoryusageandcommunicationoverhead.FP16hasasmallernumericalrangeand
lowerprecisionineffectivedigits[100;38],butcomputationstendtobefasterthanFP32.Ingeneralmodeltraining,
FP32 is often used as the default representation for training parameters. However, in actual model training, the values of the parameters typically do not exceed the order of thousands, well within the numerical range of FP16. To improve computational speed, we can convert from FP32 to FP16. During parameter updates, the magnitude of the update is roughly equal to the gradient multiplied by the learning rate. The minimum representable value of FP16 is on the order of 1e-5. As the product of the gradient and learning rate often falls below the representable range of FP16, the parameter update would be lost, a phenomenon known as underflow. Therefore, we represent the parameter update obtained by
Figure 5:TheoverallarchitectureofZeRO.TheupperdemonstratesZeROstage1andZeROstage2.Thelowerdisplays
ZeROstage3.ThegraphillustratestheoptimizationofmemoryusageofgraphicscardparametersinrelationtoZeRO3
versusZeRO1andZeRO2
multiplyingthegradientbythelearningrateasFP32.Wecannotdirectlyaddthishigh-precisionparameterupdate
toalower-precisionmodel,asthiswouldstillresultinfloating-pointunderflow.Consequently,weneedtosavean
additionalsingle-precisionparameterontheoptimizer.Toacceleratebothforwardandbackwardpassesinthemodel,
half-precisionparametersandgradientsareusedandpassedtotheoptimizerforupdating.Theoptimizer’supdate
quantityissavedasFP32,andweaccumulateiteffectivelythroughatemporarilycreatedFP32parameterinthe
optimizer.Aftereffectiveaccumulation,itisthenconvertedbacktoFP16parameters.
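The following sketch shows one common realization of this scheme with torch.cuda.amp, where loss scaling plays the role of protecting small updates from FP16 underflow while the optimizer keeps FP32 state; the model and hyperparameters are placeholders.

```python
# Sketch of mixed-precision training: forward/backward run in reduced precision
# where safe, while the optimizer keeps FP32 state; GradScaler scales the loss so
# that small gradients survive FP16.
import torch

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()

for step in range(10):
    x = torch.randn(32, 1024, device="cuda")
    with torch.cuda.amp.autocast():            # run ops in FP16 where appropriate
        loss = model(x).pow(2).mean()
    scaler.scale(loss).backward()              # scaled loss avoids gradient underflow
    scaler.step(optimizer)                     # unscale and apply the FP32 master update
    scaler.update()
    optimizer.zero_grad()
```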
3.4.3. Offloading
Theparametersintheoptimizerareatleasttwiceasmanyasthemodelparameters,andastudy[101]proposesthe
idea of moving the optimizer's parameters from the GPU to the CPU. Although GPU computation is much faster than CPU computation, the question arises whether offloading this operation could become a bottleneck for the overall training speed. In practice, offloading is combined with ZeRO3: after the ZeRO3 optimization, the size of the parameters, gradients, and optimizer states on each device is reduced to 1/n, where n is the number of GPUs. By binding one GPU to multiple CPUs, we effectively lower the computational load on each CPU.
3.4.4. Overlapping
Memory operations are typically asynchronous. Thus, we can send a request to the memory in advance and then
proceedwithothercomputations.Aftercompletingothercomputations,wecomebacktohandlethememoryrequest.
Thistwo-stepoperationisusedintheforwardpropagationprocessofmodeltraining.Weneedtoobtaintheparameters
of𝑙𝑎𝑦𝑒𝑟 𝑖throughagatheroperation.Afterobtainingtheparametersof𝑙𝑎𝑦𝑒𝑟 𝑖,intheforwardpropagationprocessof
𝑙𝑎𝑦𝑒𝑟 𝑖,weproactivelyretrievetheparametersof𝑙𝑎𝑦𝑒𝑟 𝑖+1 throughanasynchronousfetch.Oncetheforwardpropagation
Table 3
Commonlyusedinstructiontuningdatasets.
Datasets Links
static-hh https://huggingface.co/datasets/Dahoas/static-hh
OIG https://huggingface.co/datasets/laion/OIG
Self-Instruct[102] https://github.com/yizhongw/self-instruct
Naturalinstructions[103] https://github.com/allenai/natural-instructions
P3[104] https://huggingface.co/datasets/bigscience/P3
Promptsource[105] https://github.com/bigscience-workshop/promptsource
WebGPT[106] https://huggingface.co/datasets/openai/webgpt_comparisons
Flan[107] https://github.com/google-research/flan
MVPCorpus[108] https://github.com/RUCAIBox/MVP
calculationfor𝑙𝑎𝑦𝑒𝑟 𝑖iscompleted,theparametersfor𝑙𝑎𝑦𝑒𝑟 𝑖+1 havebeenobtainedandarestoredintheGPU.Wecan
thenimmediatelyproceedwiththeforwardpropagationcalculation,andsoon.
3.4.5. Checkpoint
In order to support the backward propagation of the model, all intermediate results in GPU memory need to be saved during the forward propagation of the model. To optimize this process, a checkpoint mechanism is utilized, which does not save all intermediate results in GPU memory but only retains certain checkpoints.
Thediagrambelowillustratesasimplifiedstructureofatransformer.Eachtransformerblocktakesamodelinput,
undergoescomplexcomputationsthroughattentionandfeed-forwardprocesses,andproducestheoveralloutputof
thatlayer.Wekeeponlytheinputofeachmajorlayerinthetransformerasourcheckpoint.
Duringthebackwardpropagationprocess,howdowecomputethegradientsofthelinearlayerswithineachmajor
layer?Wecanperformatechniquecalledrecomputation,whichinvolvesre-executingtheforwardpassofeachmajor
layerduringthebackwardpropagationprocess.Wetemporarilyobtaintheinputsofthelinearlayerswithineachmajor
layer,andtheintermediateresultsobtainedcanbeusedforbackwardpropagation.Oncethebackwardpropagationfor
thatlayeriscomplete,wecandiscardthecheckpointandthetemporarilyrecomputedintermediateresultsofthelinear
layerswithinthemodelfromtheGPUmemory.
Assumingwehaveatransformerwith24layers,eachlayercontainingfourtofivelinearlayers,usingthecheckpoint
mechanismreducestheoriginallyrequiredstorageof120intermediateresultstoonly24intermediateresults.
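A minimal sketch of this checkpoint-and-recompute mechanism using torch.utils.checkpoint is shown below; the block definition and depth are illustrative.

```python
# Sketch of the checkpoint/recomputation mechanism: only the inputs of each block
# are kept, and intermediate activations are recomputed during the backward pass.
import torch
from torch.utils.checkpoint import checkpoint

blocks = torch.nn.ModuleList(
    [torch.nn.Sequential(torch.nn.Linear(512, 512), torch.nn.GELU()) for _ in range(24)]
)

def forward(x):
    for block in blocks:
        x = checkpoint(block, x, use_reentrant=False)  # store x, recompute block(x) in backward
    return x

x = torch.randn(8, 512, requires_grad=True)
forward(x).sum().backward()
```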
3.5. Fine-Tuning
ThetrainingofLLMsinthispaperisdividedintothreestages:datacollectionandprocessing,pre-training,and
fine-tuning.Thissectionwillprovideareviewofthefine-tuningmethodsforLLMs.Specifically,wecategorizefine-
tuningtechniquesintothreetypes:supervisedfine-tuning(SFT)[93],alignmenttuning,andparameter-efficienttuning.
3.5.1. SupervisedFine-Tuning
Thecoreconceptofsupervisedfine-tuninginvolvesadjustingthemodelinasupervisedmanneronthebasisof
large-scalepre-training,enhancingitscapabilitytobetteradapttothespecificrequirementsofthetargettask.Inthe
processofSFT,itisnecessarytopreparealabeleddatasetforthetargettask,whichincludesinputtextalongwith
correspondinglabels.Instructiontuningisacommonlyusedtechniqueinthefine-tuningprocessofLLMsandcan
beconsideredasaspecificformofSFT.ItinvolvesfurthertrainingLLMsonadatasetcomposedof(instruction,
output)pairs,focusingonenhancingthecapabilitiesandcontrollabilityoflargelanguagemodelsbyunderstanding
andfollowinghumaninstructions.Wecompiledcommonlyusedinstructiontuningdatasets,asillustratedinTable3.
3.5.2. AlignmentTuning
DuetoLLMsbeingpre-trainedonmassiveanddiverseinternetdata,eventhoughthetrainingdataundergoes
somepreprocessing,itisstillchallengingtoguaranteetheabsenceofbiasedorharmfulcontentinterabyte-scale
trainingdatasets.DespiteLLMsdemonstratingimpressiveperformanceacrossvariousnaturallanguageprocessing
tasks,theyfrequentlyexhibitbehaviorsdivergingfromhumanintent.Thisincludesgeneratingfalseinformation,
producingexpressionswithbiasormisleadingcontent,andsoon[93;109].ToaddresstheseissuesofLLMsdisplaying
behaviorsbeyondhumanintent,alignmenttuningbecomescrucial[93;110].
Ingeneral,alignmenttuningaimstomeetthefollowingthreecriteria:beinghelpful,honest,andharmless.
Helpful:Theconceptofhelpfulnessrevolvesaroundwhetherthemodel-generatedoutputprovesgenuinely
beneficialforaspecifictaskorinquiry.Intherealmofnaturallanguageprocessing,themodel’sgeneratedtextor
responsesshouldfurnishvaluableinformation,positivelyimpactingtheuser’srequirementsortaskobjectives.
Honest:Honestyentailswhetherthemodel-generatedoutputisauthenticandreliable.Themodelshouldproduce
informationconsistentwithfacts,steeringclearoffabricationordistortion.Thiscontributestomaintainingusertrust
intheauthenticityofthemodel’soutputs.
Harmless:Harmlessnessisconcernedwithwhetherthemodel-generatedoutputposesnoharmtousersorsociety.
Themodelshouldrefrainfromgeneratingcontentthatisharmful,offensive,orperilous,ensuringitsutilizationremains
safeforallrelevantstakeholders.
IntrainingLLMs,anoteworthyapproachtoalignmenttuningisbasedonReinforcementLearningwithHuman
Feedback(RLHF)[93].Thismethodinvolvescollectinghumanfeedbackdatatotrainarewardmodel(RM)for
reinforcementlearning.TheRMservesastherewardfunctionduringreinforcementlearningtraining,andalgorithms
suchasProximalPolicyOptimization(PPO)[111]areemployedtofine-tunetheLLM.Inthiscontext,LLMis
consideredasthepolicy,andtheactionspaceisconsideredasthevocabularyoftheLLM.
3.5.3. Parameter-efficientTuning
Currently,large-scalePLMssuchasChatGPT[93;19]continuetogrowinscale.However,forthemajorityof
researchers,conductingfullfine-tuningonconsumer-gradehardwarehasbecomecost-prohibitiveandimpractical.
UnlikeSFTandalignmenttuning,theobjectiveofparameter-efficienttuningistoreducecomputationalandmemory
overhead.Thismethodinvolvesfine-tuningonlyasmalloradditionalsubsetofmodelparameterswhilekeeping
themajorityofpre-trainedparametersfixed,therebysignificantlyloweringcomputationalandstoragecosts.Itis
noteworthythatstate-of-the-artparameter-efficienttuningtechniqueshaveachievedperformancelevelscomparable
tofullfine-tuning.Somecommonparameter-efficienttuningmethodsincludeLow-RankAdaptation(LoRA)[112],
PrefixTuning[113]andP-Tuning[114;115].Theadoptionofthesemethodsenablesefficientmodeltuningevenin
resource-constrainedenvironments,offeringfeasibilityandefficiencyforpracticalapplications.
WiththeriseofLLMs,parameter-efficienttuninghasgarneredincreasingattention,withLoRAbeingwidely
employedinthelatestreleasesofLLMs.LoRA[112]anditsrelatedadvancements[116;117]arenoteworthyand
deserveattention.
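As an illustrative sketch of the LoRA idea, the following snippet freezes a pre-trained linear layer and trains only a low-rank update scaled by alpha/r; the rank and scaling values are assumptions for the example.

```python
# Minimal sketch of LoRA: the pre-trained weight W is frozen and only a low-rank
# update (B @ A), scaled by alpha / r, is trained. Rank and scaling are illustrative.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)                      # freeze the pre-trained layer
        self.lora_A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
print(trainable)    # only the low-rank matrices A and B are updated
```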
3.5.4. SafetyFine-Tuning
ToenhancethesafetyandresponsibilityofLLMs,theintegrationofadditionalsafetytechniquesduringfine-tuning
isessential.Thisencompassesthreeprimarytechniques,applicabletobothSFTandRLHFphases.
SupervisedSafetyFine-Tuning:Inthistechnique,labelersaretaskedwithgeneratingdemonstrationdatathat
incorporateshighsafetyriskadversarialprompts.Thishandcraftsafetydemonstrationdataisthenincorporatedinto
theSFTphase,therebyaugmentingthemodel’scapacitytomanagesafetyrisks.
SafetyRLHF:Thistechniqueemploysthesameorevenmoreaggressiveadversarialpromptstoquerythemodels.
ThesafestresponseexhibitingrefusalbehavioristhenusedtotrainasafetyrewardmodelwithintheRLHFframework.
SafetyContextDistillation:Thistechniqueemployscontextdistillation[118]byinitiallyprefixingsafety
preprompts,like“Youareasafeandresponsibleassistant,”toadversarialprompts.Thisprocessyieldssafergenerated
responses.Themodelisthenfine-tunedonthesesaferdemonstrationdatabutwithouttheinclusionofthesafety
pre-prompts.Thissafetydistillationfurtherenhancesthemodel’ssafetycapabilities.
3.6. Evaluation
Unlikeinthepast,large-scaledeeplearningmodelshaveawiderrangeofapplicationsandstrongerperformance
comparedtoordinarymodels.However,withgreatpowercomesgreatresponsibility,andevaluatingthesemodelshas
becomemorecomplex,requiringconsiderationofpotentialproblemsandrisksfromallaspects.Sincethepopularity
ofChatGPT,manyrelatedstudieshavebeenpublished,includingthesurveyandsummaryofLLMsevaluationin
reference[119;120],whichishelpfulfordevelopinglarge-scaledeeplearningmodels.Thissectionwillintroduce
sometestingdatasets,evaluationdirectionsandmethods,andpotentialthreatsthatneedtobeconsideredbasedon
previousevaluationworkonlargemodels.
3.6.1. Statictestingdataset
Theevaluationoflargemodels’capabilitiesrequiresappropriatedatasetsforvalidation.Here,weintroduceseveral
commonlyuseddatasetsfortestingpurposes.Consideringmultimodallargemodels,typicaldatasetsforcomputer
visionincludeImageNet[121]andOpenImages[122].InadditiontothecommonlyusedGLUE[123]andSuperGLUE
[124]forLLMs,MMLU[125]ishighlycompetitiveintestingcomprehensivecapability.Ifyourmodelprimarilyuses
Chineselanguage,thenCMMLU[126],asabenchmarkforChineselargemodels,shouldalsobeconsidered,and
XTREME[127]andXTREME-R[128]aresuitablechoicesformultilinguallargemodels.Forassessingmathematical
knowledgecapabilities,therearedatasetssuchasMATH[129]andGSM8K[130],whileHumanEval[131]andMBPP
[132]canserveasbenchmarksforcodegeneration.Forcommonsensereasoningtestsindailyhumanlifeandwork,
the following datasets are available: HellaSwag [133], PIQA [134], BoolQ [135], SIQA [136], WinoGrande [137],
ARC[138],andOpenBookQA[139].Formedicalknowledge,therearedatasetssuchasMedQA-USMLE[140]and
MedMCQA[141].
3.6.2. OpendomainQ&Aevaluation
Currently,LLMsinteractwithhumansintheformofquestionsandanswers.Comparedtothefragmentedand
ambiguousinformationreturnedbytraditionalsearches,LLMsprovidemorerealisticandefficientquestion-and-
answerresultsthatalignwithhumanhabits.Therefore,theevaluationofODQA(OpenDomainQuestionAnswering)
[142]capabilityisessential.Theperformanceofopen-domainquestionansweringgreatlyaffectsuserexperience.
CommonlyuseddatasetsfortestingincludeSquAD[143]andNaturalQuestions[144],withF1scoreandExact-Match
accuracy(EM)asevaluationmetrics.However,notethatthemethodofwordmatchingmayhavecertainissues,such
aswhenafactuallycorrectanswerisnotinthegoldenanswerlist.Therefore,humanevaluationseemstobenecessary,
andliterature[145]hasconducteddetailedresearchonthismatter.
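The Exact-Match and F1 metrics mentioned above can be sketched as follows; the answer normalization here is deliberately simplified (lowercasing and punctuation stripping) compared to the official SQuAD evaluation script.

```python
# Sketch of Exact-Match (EM) and token-level F1 for SQuAD-style ODQA evaluation.
import re
from collections import Counter

def normalize(s: str) -> str:
    return re.sub(r"[^a-z0-9 ]", "", s.lower()).strip()

def exact_match(pred: str, gold: str) -> float:
    return float(normalize(pred) == normalize(gold))

def f1(pred: str, gold: str) -> float:
    p, g = normalize(pred).split(), normalize(gold).split()
    common = sum((Counter(p) & Counter(g)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(g)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "Eiffel Tower."), f1("The Eiffel Tower", "Eiffel Tower."))
```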
3.6.3. Securityevaluation
Asanemergingandhotresearchfield,LLMsmustpayattentiontotheirpotentialsecuritythreats,preventmalicious
useorvulnerabilitiestomaliciousattacks,andaddressanypotentiallong-termissuesthatmayposeathreatto
humandevelopment.Additionally,redteaminginvariousdomainsisnecessarytocriticallyassessandtestthemodel,
identifyingvulnerabilities,biases,inaccuracies,andareasforsafetyimprovement.
Potentialbias:ThetrainingdataforLLMsmaycontainpotentialbiases,suchasgenderorrace.Security
assessmentsneedtoaddresswhetherthemodelgeneratesoramplifiesthesebiasesandhowtoreduceorcorrectthem.
Reference[146]discussesindetailthecausesofbiasinLLMsandtheseriousconsequencesthatmayarise.Reference
[147]extensivelystudieshowpre-trainedlanguagemodelsgenerateharmfulcontenttowhatextent,andhowtouse
controlledtextgenerationalgorithmstopreventthegenerationoftoxiccontent.CHBias[148]isaChinesedatasetthat
canbeusedtoevaluateandmitigatethebiasproblemofLLMs.
Privacyprotection:LLMsmaycomeintocontactwithalargeamountofuserdata,suchastextandimages,
duringthetrainingprocess.Securityassessmentsneedtoensuretheeffectiveprotectionofuserdataprivacytoprevent
leaksandmisuse.Reference[149]conductedresearchonmodelslikeChatGPTandfoundthatitispossibletoextract
trainingdataeffectivelyfromthesemodels.Reference[150]providesasolutionbyproposingaframeworkcalled
DEPN(DetectandEditingPrivacyNeurons)todetectandeditprivacyneuronsinpre-trainedlanguagemodels.It
alsointroducesaprivacyneuronaggregatortoeliminateprivacyinformationinabatch-processingmanner,effectively
reducingtheleakageofprivacydatawhilemaintainingmodelperformance.
Adversarialattacks:LLMsmaybesusceptibletoadversarialattacks,suchasinputtampering,intentional
misinformation,orgeneratingfalseinformation.Securityassessmentsneedtoconsidertherobustnessofthemodel,
i.e.,itsabilitytowithstandsuchattacks.Asmentionedinreference[151],LLMsstillhave"jailbreak"risks,whereusers
canmanipulatethemodeltogeneratetoxiccontentusingspecificinputmethodslikerole-playingoraddingspecial
suffixesasstudiedinthereferencedpaper.Especiallywhenusingopen-sourcepre-trainedmodels,anyvulnerabilities
inthepre-trainingmodelsregardingadversarialattacksareinheritedaswell.Reference[152]providesasolutionto
mitigatetheharmcausedbythesevulnerabilities.
3.6.4. Evaluationmethod
Automated evaluation and manual evaluation play crucial roles in large language model (LLM) research. Automated
evaluationtypicallyinvolvesusingvariousmetricsandindicatorstoquantifytheperformanceofmodels,suchas
BLEU [153], ROUGE [154], and BERTScore [155], which can measure the accuracy of LLM-generated content.
Thesemetricscanhelpresearchersquicklyassessmodelperformanceonlarge-scaledataandcomparedifferent
models.However,automatedevaluationalsohaslimitationsasitcannotfullycapturethecomplexityoflanguage
understandingandgeneration.Researchinreference[156]hasshownthatmanualevaluationismorereliablefor
someopen-endedgenerationtasks.Manualevaluationtypicallyinvolveshumanannotatorssubjectivelyjudgingand
assessingthequalityofmodel-generatedoutputs.Thisevaluationmethodcanhelprevealhowmodelsperformin
specifictasksorscenariosandidentifysubtleissuesanderrorsthatautomatedevaluationmayoverlook.However,
manualevaluationalsofaceschallengessuchashightimecostsandsubjectivity.Therefore,itisoftennecessaryto
combinethestrengthsofautomatedandmanualevaluationtocomprehensivelyassesstheperformanceoflanguage
models.
3.7. LLMFramework
Largedeeplearningmodelsoffersignificantaccuracygains,buttrainingbillionstotrillionsofparametersis
challenging.Existingsolutionssuchasdistributedtraininghavesolvedfundamentallimitationstofitthesemodelsinto
limiteddevicememorywhileobtainingcomputation,communication,anddevelopmentefficiency.Next,thissection
willintroduceseverallargelanguagemodelframeworksthatutilizedistributedtrainingtechnologyleveragingGPU,
CPU,andNVMememorytoallowforunprecedentedmodelscaleonlimitedresourceswithoutrequiringmodelcode
refactoring.
TransformersTransformers[157],anopen-sourcePythonlibrarybyHuggingFace,isdedicatedtobuildingmodels
usingtheTransformerarchitecture.Featuringasimpleanduser-friendlyAPI,itfacilitateseasycustomizationofvarious
pre-trainedmodels.Witharobustcommunityofusersanddevelopers,transformerscontinuouslyupdateandimprove
modelsandalgorithms.
DeepSpeed: DeepSpeed [158], an open-source optimization library compatible with PyTorch, is developed by Microsoft and utilized in training LLMs like MT-NLG [79] and BLOOM [38]. It currently provides full support for ZeRO technology, which includes optimizer state partitioning, gradient partitioning and parameter partitioning, custom mixed-precision training, a range of fast CUDA-extension-based optimizers [159], and ZeRO-offload to CPU and Disk/NVMe. Through these technologies, DeepSpeed achieves excellent scalability and efficiency with small memory requirements.
BMTrain:BMTrain[160]isanefficientlargemodeltrainingtoolkitdevelopedbyTsinghuaUniversitythatcanbe
usedtotrainlargemodelswithtensofbillionsofparameters.Itcantrainmodelsinadistributedmannerwhilekeeping
thecodeassimpleasstand-alonetraining.BMTraindoesnotrequiremodelrefactoringtowork.Infact,PyTorchusers
canenableBMTrainwithafewlinesofcodechangetotheirexistingtrainingpipeline.Itprovidesthesupportof
variousoptimizationtechniquessuchasZeROoptimizationandcommunicationoptimization.
Megatron-LM:Megatron-LM[96;161;162]isadeeplearninglibrarydevelopedbyNVIDIAfortraininglarge-
scalelanguagemodels.Megatron-LMpresentstheirtechniquesincludingmodelanddataparallelism,mixed-precision
training,andFlashAttentionfortrainingverylargetransformermodels.Specifically,ittakesadvantageofthestructure
oftransformernetworkstocreateasimplemodelparallelimplementationbyaddingafewsynchronizationprimitives
anditenablestrainingtransformermodelswithbillionsofparametersandtrainsefficientlyinPyTorch.Italsoperforms
anin-depthempiricalanalysisoftheirmodelanddataparalleltechniqueanddemonstratesupto76%scalingefficiency
using512GPUswhichcanlargelyimprovethetrainingefficiencyandspeed,enablingefficientdistributedtraining
acrossGPUs.
Inadditiontotheaforementionedframeworks,Colossal-AI[163]andFastMoE[164;165]arealsotwopopular
frameworksfortrainingLLMs.Inprinciple,anydeeplearningframeworkthatsupportsparallelcomputingcanbeused
totrainLLMs.ExamplesincludePyTorch[166],TensorFlow[167;168],PaddlePaddle[169],MXNet[170],OneFlow
[171],MindSpore[172]andJAX[173].
4. InferencewithLargeLanguageModels
Thescaleoflargemodelsisgrowingatarateofnearly10timesperyear,whichbringsabouthugecomputational
consumptionandcarbonemissions[174].Therefore,reducingthecomputationalburdenoftraininglargemodelswhile
retainingtheirreasoningabilityhasbecomeacommonconcernforeveryone.Inthischapter,wemainlyintroducehow
toreducecostsfrombothcomputationalandstorageaspects,thatis,howtoefficientlyperformlarge-scalemodel
inferencefromfouraspects:modelcompression,memoryscheduling,parallelism,andstructuraloptimization.
4.1. ModelCompression
4.1.1. KnowledgeDistillation
KnowledgeDistillation[175]referstotransferringknowledgefromacumbersome(teacher)modeltoasmaller
(student)modelthatismoresuitablefordeployment.Thisisachievedbyfittingthesofttargetsofthetwomodels,
assofttargetsprovidemoreinformationthangoldlabels.Initially,thecalculationformodeldistillationinvolvedonly
fittingtheoutputsfromthelastlayerofboththeteacherandstudentmodels[176].PKD[177]improvesthisprocessby
computingthemean-squarelossbetweennormalizedhiddenstates,allowingthestudentmodeltolearnfrommultiple
intermediatelayersoftheteachermodel.Inordertodiscovermoreintermediaterepresentationssuitableforknowledge
distillation,Jiaoetal.[178]proposedTinyBERT.Thisenablesthestudentmodeltolearnfromtheembeddinglayer
andattentionmatricesoftheteachermodel.
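A minimal sketch of fitting soft targets is given below: a temperature-softened KL term between teacher and student logits is combined with the usual hard-label cross-entropy; the temperature and mixing weight are illustrative.

```python
# Sketch of soft-target fitting in knowledge distillation: KL divergence between
# temperature-softened teacher and student distributions plus hard-label loss.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)                                  # rescale gradients for the temperature
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

student_logits = torch.randn(4, 10)
teacher_logits = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(distillation_loss(student_logits, teacher_logits, labels).item())
```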
4.1.2. ModelPruning
Modelpruninginvolvesremovingredundantportionsfromtheparametermatricesoflargemodels.Itisdivided
intounstructuredpruningandstructuredpruning.Unstructuredpruninginvolvesremovingindividualconnections
orweightsinaneuralnetworkwithoutadheringtoanyspecificstructuralpattern.Instructuredpruning,specific
structuralpatternsorunitswithinaneuralnetworkareprunedorremoved.Gordonetal.[179]comparedtheeffects
ofunstructuredandstructuredpruningontheBERTmodel.Theyfoundthattheeffectivenessofunstructuredpruning
significantlydecreasesasthepruningratioincreases,whileinstructuredpruning,30-40%oftheweightscanbe
discardedwithoutaffectingBERT’suniversality.Differentstructuresinthemodelcanbestructurallypruned.Michel
etal.[180]prunedattentionheadsandfoundthatablatingoneheadoftenpositivelyimpactstheperformanceofWMT
andBERT.Theyproposedagradient-basedmetricforevaluatingtheimportanceofattentionheadstoenhancepruning
effectiveness.Fanetal.[179]performedlayerpruningbyextendingdropoutfromweightstolayers.Duringtraining,
theyrandomlydroppedlayersandachievedgoodinferenceresultsbyselectingsub-networkswithanydesireddepth
duringtesting.
4.1.3. ModelQuantization
Thefundamentalideabehindmodelquantizationistoreducethenumberoffloating-pointbitsusedinnumerical
calculationswithinalargemodelnetwork,therebydecreasingstorageandcomputationcosts.Thisinvolvesconverting
floating-pointoperationsintofixed-precisionoperations.However,asprecisiondecreases,themodel’slossgradually
increases,andwhenprecisiondropsto1bit,themodel’sperformanceexperiencesasuddendecline.Toaddressthe
optimizationchallengesintroducedbylow-precisionquantization,Baietal.[181]proposedBinaryBERT.Theyinitially
trainedahalf-sizedternarymodelandtheninitializedabinarymodelwiththeternarymodelthroughweightsplitting.
Finally,theyfine-tunedthebinarymodel.Thisapproachyieldedbetterresultsforthebinarymodelcomparedtotraining
abinarymodelfromscratch.
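As an illustrative sketch of the basic idea, the snippet below performs symmetric per-tensor INT8 quantization of a weight matrix, storing 8-bit integers plus a single floating-point scale; real quantization schemes for LLMs are considerably more elaborate.

```python
# Sketch of symmetric per-tensor INT8 quantization: weights are stored as 8-bit
# integers plus one floating-point scale, and dequantized on use.
import torch

def quantize_int8(w: torch.Tensor):
    scale = w.abs().max() / 127.0
    q = torch.clamp((w / scale).round(), -128, 127).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return q.float() * scale

w = torch.randn(4096, 4096)
q, scale = quantize_int8(w)
print((w - dequantize(q, scale)).abs().max())   # error introduced by 8-bit storage
```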
4.1.4. WeightSharing
ThebasicideaofweightsharingistousethesamesetofparametersformultiplepartsofaLLM.Insteadoflearning
differentparametersforeachinstanceorcomponent,themodelsharesacommonsetofparametersacrossvariousparts.
Weightsharinghelpsreducethenumberofparametersthatneedtobelearned,makingthemodelmorecomputationally
efficientandreducingtheriskofoverfitting,especiallyinsituationswherethereislimiteddata.ALBERT[182]uses
theCross-layerparameter-sharingstrategytoeffectivelyreducethenumberofparametersofthemodel,andcanachieve
bettertrainingresultsthanthebaselinewiththesameparameternumber.
4.1.5. Low-rankApproximation
Low-rankdecompositionmethodsarecrucialinthefieldofmodelcompression,astheyallowforthecreation
ofmorecompactmodelswithfewerparameters.Thisreductioninmodelsizeisparticularlybeneficialfordeploying
neuralnetworksonresource-constraineddevices,improvingefficiencyduringinference.Chenetal.[183]performeda
low-rankdecompositionontheinputmatrix,enablingmatrixoperationswithinthelargemodeltooccuratalower-rank
level,effectivelyreducingthecomputationalworkload.Fromtheresults,theirproposedmethod,DRONE,notonly
ensurestheinferenceperformanceofthelargemodelbutalsoachievesanaccelerationratioofmorethan1.3times
comparedtothebaselinemethod.Thespecificchoiceoflow-rankdecompositionmethoddependsonthearchitecture
oftheneuralnetworkandtherequirementsofthetargetapplication.
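The following sketch illustrates the general low-rank idea with a truncated SVD of a linear layer's weight (specific methods such as DRONE differ in how the decomposition is chosen); the retained rank is an illustrative assumption.

```python
# Sketch of low-rank approximation via truncated SVD: W (out x in) is replaced by
# two smaller factors, reducing parameters and compute when the rank r is small.
import torch

W = torch.randn(1024, 1024)
r = 64

U, S, Vh = torch.linalg.svd(W, full_matrices=False)
W1 = U[:, :r] * S[:r]          # (out x r)
W2 = Vh[:r, :]                 # (r x in)

x = torch.randn(1024)
y_lowrank = W1 @ (W2 @ x)      # two small matmuls instead of one large one
print((W @ x - y_lowrank).norm() / (W @ x).norm())   # relative approximation error
```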
4.2. MemoryScheduling
DeployingLLMsonasingleconsumer-gradeGPUisconstrainedbythelimitationsoftheavailablevideomemory,
giventhesubstantialparametersofLLMs.Therefore,appropriateMemorySchedulingstrategiescanbeusedtosolve
thehardwarelimitationsoflargemodelinference.Memoryschedulinginlargemodelinferenceinvolvestheefficient
organizationandmanagementofmemoryaccesspatternsduringthereasoningorinferencephaseofcomplexneural
networkmodels.Inthecontextofsophisticatedreasoningtasks,suchasnaturallanguageunderstandingorcomplex
decision-making,largemodelsoftenhaveintricatearchitecturesandconsiderablememoryrequirements.Memory
schedulingoptimizestheretrievalandstorageofintermediaterepresentations,modelparameters,andactivationvalues,
ensuringthattheinferenceprocessisbothaccurateandperformedwithminimallatency.Forexample,BMInf[184]
utilizestheprincipleofvirtualmemory,achievingefficientinferenceforlargemodelsbyintelligentlyschedulingthe
parametersofeachlayerbetweentheGPUandCPU.
4.3. Parallelism
Bothinferenceandtrainingcanleverageparallelizationtechniques.Presently,parallelizationtechniquesfor
inferenceprimarilymanifestacrossthreedimensions:DataParallelism,TensorParallelism,andPipelineParallelism.
DataParallelismprimarilyinvolvesincreasingtheoverallthroughputoftheinferencesystembyaddingmoreGPU
devices[101;97;159;185].Tensorparallelismisaformofmodelparallelismwherethemodel’sparametersare
partitionedintomultipletensors,eachcomputedondifferentprocessingunits.Thisapproachprovesbeneficialwhen
dealingwithmodelsthataretoolargetofitintothememoryofasingleGPU.Tensorparallelismprimarilyinvolves
increasingthenumberofdeviceshorizontallythroughparallelcomputationtoreducelatency[96].Pipelineparallelism
primarilyinvolvesverticallyincreasingthenumberofGPUdevicesthroughparallelcomputationtosupportlarger
modelsandenhancedeviceutilization.Typically,itiscombinedwithtensorparallelismtoachieveoptimalperformance
[98].
4.4. Structural Optimization
In the forward propagation computation of LLMs, the calculation speed is significantly faster than the speed of memory access, so inference speed can be dominated by the large number of memory access operations. One goal in LLM inference is therefore to minimize the number of memory accesses during forward propagation. FlashAttention [186] and PagedAttention [187] enhance computational speed by employing a chunked computation approach that mitigates the storage overhead associated with the attention matrices: the attention operation is tiled so that intermediate results stay within SRAM, reducing the number of accesses to High Bandwidth Memory (HBM) and significantly boosting computational speed, while PagedAttention additionally manages the key-value cache in fixed-size blocks to reduce memory waste. Both FlashAttention and PagedAttention have been adopted by mainstream inference frameworks and are seamlessly integrated into these frameworks for straightforward utilization.
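The memory-saving idea behind such chunked computation can be sketched as follows: queries are processed in blocks so that the largest temporary is a (block x sequence) score matrix rather than the full (sequence x sequence) matrix. FlashAttention goes further by also tiling over keys and values with an online softmax executed in SRAM, which this simplified sketch does not reproduce.

import torch

def chunked_attention(q, k, v, block_size=128):
    # Process queries block by block so the score matrix is never fully materialized.
    scale = q.shape[-1] ** -0.5
    outputs = []
    for start in range(0, q.shape[0], block_size):
        q_blk = q[start:start + block_size]                # (block, d)
        scores = (q_blk @ k.T) * scale                     # (block, seq)
        outputs.append(torch.softmax(scores, dim=-1) @ v)  # (block, d)
    return torch.cat(outputs, dim=0)

torch.manual_seed(0)
q = k = v = torch.randn(1024, 64)
reference = torch.softmax((q @ k.T) * 64 ** -0.5, dim=-1) @ v
assert torch.allclose(chunked_attention(q, k, v), reference, atol=1e-5)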
4.5. Inference Framework
Parallel computing, model compression, memory scheduling, and specific optimizations for transformer structures, all integral to LLM inference, have been effectively implemented in mainstream inference frameworks. These frameworks furnish the foundational infrastructure and tools required for deploying and running LLM models. They offer a spectrum of tools and interfaces, streamlining the deployment and inference process for researchers and engineers across diverse application scenarios. The choice of a framework typically hinges on project requirements, hardware support, and user preferences. In Table 4, we compile some of these frameworks for reference.
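As one example of how little glue code such frameworks require, the following sketch runs offline batched inference with vLLM, one of the frameworks in Table 4; the checkpoint name and sampling settings are illustrative.

# Offline batched generation with vLLM; PagedAttention is applied internally.
from vllm import LLM, SamplingParams

prompts = ["Large language models are", "The key to efficient inference is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

llm = LLM(model="facebook/opt-125m")   # any compatible Hugging Face checkpoint
for output in llm.generate(prompts, sampling_params):
    print(output.prompt, "->", output.outputs[0].text)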
5. Utilization of LLMs
The application scope of LLMs is extensive, and they can be practically employed in almost any specialized domain [1; 193; 46; 194; 195]. Following pre-training and fine-tuning, LLMs are primarily utilized by designing suitable prompts for various tasks. Leveraging powerful zero-shot capabilities, many tasks can be accomplished directly by guiding LLMs with straightforward prompts. For more complex tasks that cannot be achieved through simple prompts, a few-shot approach involving in-context learning is employed to guide LLMs in task completion. Additionally, incorporating chain-of-thought [196; 197] prompts enhances in-context learning by introducing a reasoning process; a sketch of such a prompt is given below. The pipeline of in-context learning and chain-of-thought is shown in Figure 6. In some specialized research directions, obtaining intermediate-layer representations of LLMs may be necessary. For instance, in neuroscience studies, embedding representations from the model are used to investigate activation regions of brain functions [198; 199; 200; 201].
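As a concrete illustration, the sketch below assembles a few-shot, chain-of-thought prompt by prepending worked demonstrations to the test question; the demonstration content is illustrative, and any instruction-following LLM can consume the resulting string.

demonstrations = [
    {
        "question": "Tom has 3 boxes with 4 apples each. How many apples does he have?",
        "reasoning": "Each box holds 4 apples and there are 3 boxes, so 3 * 4 = 12.",
        "answer": "12",
    },
]

def build_cot_prompt(test_question: str) -> str:
    # Each demonstration shows a question, step-by-step reasoning, and the answer,
    # so the model imitates the reasoning pattern when completing the final question.
    parts = []
    for demo in demonstrations:
        parts.append(
            f"Q: {demo['question']}\n"
            f"A: Let's think step by step. {demo['reasoning']} "
            f"The answer is {demo['answer']}.\n"
        )
    parts.append(f"Q: {test_question}\nA: Let's think step by step.")
    return "\n".join(parts)

print(build_cot_prompt("A train travels 60 km/h for 2 hours. How far does it go?"))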
Table 4
List of LLM inference frameworks.
Framework Links
TensorRT https://github.com/NVIDIA/TensorRT-LLM
FasterTransformer https://github.com/NVIDIA/FasterTransformer
Megatron-LM[96] https://github.com/NVIDIA/Megatron-LM
FlexGen[188] https://github.com/FMInference/FlexGen
DeepSpeed[158] https://github.com/microsoft/DeepSpeed
vLLM[187] https://github.com/vllm-project/vllm
FlexFlow[189] https://github.com/flexflow/FlexFlow
StreamingLLM[190] https://github.com/mit-han-lab/streaming-llm
ColossalAI[163] https://github.com/hpcaitech/ColossalAI
BMCook[191] https://github.com/OpenBMB/BMCook
BMInf[184] https://github.com/OpenBMB/BMInf
Petals[192] https://github.com/bigscience-workshop/petals
Figure 6: A) In-context learning; B) Chain-of-thought.
Generally, there are several approaches to employing LLMs. The first involves accessing the capabilities of robust proprietary models through open API services, such as utilizing the API provided by ChatGPT [19]. The second approach includes deploying open-source LLMs for local use [9]. The third method entails fine-tuning open-source LLMs to meet specific domain standards [43; 202], enabling their application in a particular field, and subsequently deploying them locally. In Table 5, we have compiled information on various open-source LLMs for reference. Researchers can choose from these open-source LLMs to deploy applications that best suit their needs.
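For the second approach (local deployment of an open-source LLM), a minimal sketch using the Hugging Face Transformers library is shown below; the checkpoint name is illustrative, and any model in Table 5 with a compatible license and sufficient GPU memory could be substituted.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-chat-hf"   # illustrative, gated checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",        # requires the accelerate package
)

inputs = tokenizer("Explain pipeline parallelism in one sentence.", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))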
6. Future Directions and Implications
This section will delve into the future trends and impact of LLM technology. Our discussion will be structured into three parts: firstly, an exploration of the developmental trends within LLM technology itself; secondly, an examination of the developmental directions for AI researchers; and finally, an analysis of the societal impact resulting from the ongoing development of LLMs.
Based on existing experience, it is evident that an ample supply of high-quality data and a sufficient number of parameters significantly contribute to enhancing the performance of models [8].
Table 5
List of open-source LLMs.
LLM Size (B) Links
T5[68] 11B https://github.com/google-research/text-to-text-transfer-transformer
CodeGen[81] 16B https://github.com/salesforce/CodeGen
MOSS[203] 16B https://github.com/OpenLMLab/MOSS
GLM[37] 130B https://github.com/THUDM/GLM
ChatGLM[37] 6B https://github.com/THUDM/ChatGLM3
ChatYuan[204] 0.7B https://github.com/clue-ai/ChatYuan
OPT[83] 175B https://github.com/facebookresearch/metaseq
BLOOM[38] 176B https://huggingface.co/bigscience/bloom
LLaMA[9] 65B https://github.com/facebookresearch/llama
CodeGeeX[82] 13B https://github.com/THUDM/CodeGeeX
Baichuan[205] 13B https://github.com/baichuan-inc/Baichuan2
Aquila 7B https://github.com/FlagAI-Open/FlagAI/tree/master/examples/Aquila
MiniGPT-4[206] 25B https://github.com/Vision-CAIR/MiniGPT-4
Vicuna[207] 13B https://github.com/lm-sys/FastChat
Looking ahead, the model scale of LLMs is expected to continue expanding, thereby augmenting their learning capabilities and overall performance. Moreover, the majority of currently available LLMs are confined to a single natural language modality, lacking extensions to process multimodal data such as images, videos, and speech. There is a potential future trajectory for LLMs to evolve towards handling information beyond text, incorporating multimodal data like images and audio. This evolution would empower models to comprehensively understand and generate multimodal content, significantly broadening the application scope of LLMs. The inevitable expansion of LLMs into the field of multimodality is bound to incur increased training costs. A pivotal focus for future developments therefore lies in the efficient fine-tuning of parameters and the deployment of LLMs through techniques such as knowledge distillation, model compression, and quantization, aimed at reducing both the training and inference costs of LLMs. Another emerging trend is the domain-specific training and fine-tuning of LLMs for particular sectors, facilitating a more adept adaptation to and understanding of industry-specific terminologies and contexts. Lastly, regarding the exploration of potential new architectures for LLMs, the current landscape predominantly relies on the transformer architecture. While the transformer architecture naturally boasts advantages such as parallel computing and adaptability to various input modalities, its design typically necessitates fixed-size inputs. This requirement may necessitate padding or truncation when dealing with variable-length sequences, potentially leading to computational and information inefficiencies, as well as challenges in generating coherent data. Investigating the potential of Recurrent Neural Network (RNN) architectures in the era of LLMs could emerge as a pivotal research direction. For instance, RWKV [208], an LLM designed under the RNN architecture, has demonstrated competitive performance in various third-party evaluations, proving itself comparable to the majority of transformer-based LLMs.
For researchers in the field of AI, working in isolation is becoming increasingly impractical. The future direction of AI development will intertwine with various industries, necessitating close collaboration with professionals from diverse fields. It is crucial to engage in collaborative efforts, bridging research disciplines, and collectively addressing challenges by combining expertise from different domains. Simultaneously, there is a fresh set of requirements for the comprehensive skills of AI researchers. Training and deploying LLMs necessitate proficiency in managing large-scale data and substantial practical experience in distributed parallel training. This criterion underscores the importance for researchers involved in LLM development to possess substantial engineering capabilities, addressing the challenges inherent in the process. Researchers who are interested in the field of LLMs must either possess engineering skills or adeptly collaborate with engineers to navigate the complexities of model development [3].
As LLMs find widespread applications in societal life, concerns about ethical issues and societal impact are on a continuous rise. This may involve research and improvements in areas such as managing model biases and controlling the risk of misuse [4]. Considering the paramount importance of privacy and data security, the future development of LLMs might involve more federated learning and decentralized approaches to enhance model performance while safeguarding user privacy. Developers should engage in interdisciplinary collaboration with experts from various fields, including decision-making, legal studies, and sociology, to establish standards and ethical frameworks for the development, deployment, and utilization of LLMs, mitigating potential harmful consequences. In terms of public awareness and education, mandatory awareness training should be implemented before large-scale public deployment and applications. This aims to enhance public understanding of the capabilities and limitations of LLMs, fostering responsible and informed use, especially in industries such as education and journalism.
7. Conclusion
The introduction of ChatGPT has ushered in a transformative era in the realm of large language models (LLMs), significantly influencing their utilization for diverse downstream tasks. The emphasis on cost-effective training and deployment has emerged as a crucial aspect in the evolution of LLMs. This paper has provided a comprehensive survey of the evolution of large language model training techniques and inference deployment technologies in alignment with the emerging trend of low-cost development. The progression from traditional statistical language models to neural language models, and subsequently to PLMs such as ELMo and the transformer architecture, has set the stage for the dominance of LLMs. The scale and performance of these models, particularly exemplified by the GPT series, have reached unprecedented levels, showcasing the phenomenon of emergence and enabling versatile applications across various domains. Notably, the release of ChatGPT by OpenAI in November 2022 has marked a pivotal moment in the LLM landscape, revolutionizing the strength and effectiveness of AI algorithms. However, the current reliance on OpenAI's infrastructure underscores the necessity for alternative LLMs, emphasizing the need for domain-specific models and advancements in the training and deployment processes.
Training and deploying LLMs present challenges that demand expertise in handling large-scale data and distributed parallel training. The engineering capabilities required for LLM development highlight the collaborative efforts needed between researchers and engineers. As we explore the technical aspects of LLM training and inference in this review, it becomes evident that a deep understanding of these processes is essential for researchers venturing into the field. Looking ahead, the future of LLMs holds promising directions, including further advancements in model architectures, improved training efficiency, and broader applications across industries. The insights provided in this review aim to equip researchers with the knowledge and understanding necessary to navigate the complexities of LLM development, fostering innovation and progress in this dynamic field. As LLMs continue to evolve, their impact on natural language processing and AI as a whole is poised to shape the future landscape of intelligent systems.
References
[1]Y.Liu,T.Han,S.Ma,J.Zhang,Y.Yang,J.Tian,H.He,A.Li,M.He,Z.Liu etal.,“Summaryofchatgpt-relatedresearchandperspective
towardsthefutureoflargelanguagemodels,”Meta-Radiology,p.100017,2023.
[2]J.Wang,E.Shi,S.Yu,Z.Wu,C.Ma,H.Dai,Q.Yang,Y.Kang,J.Wu,H.Hu etal.,“Promptengineeringforhealthcare:Methodologiesand
applications,”arXivpreprintarXiv:2304.14670,2023.
[3]W.X.Zhao,K.Zhou,J.Li,T.Tang,X.Wang,Y.Hou,Y.Min,B.Zhang,J.Zhang,Z.Dong etal.,“Asurveyoflargelanguagemodels,”
arXivpreprintarXiv:2303.18223,2023.
[4]J.Kaddour,J.Harris,M.Mozes,H.Bradley,R.Raileanu,andR.McHardy,“Challengesandapplicationsoflargelanguagemodels,” arXiv
preprintarXiv:2307.10169,2023.
[5]M.E.Peters,M.Neumann,M.Iyyer,M.Gardner,C.Clark,K.Lee,andL.Zettlemoyer,“Deepcontextualizedwordrepresentations,”in
Proceedingsofthe2018ConferenceoftheNorthAmericanChapteroftheAssociationforComputationalLinguistics:HumanLanguage
Technologies,Volume1(LongPapers),Jun.2018,pp.2227–2237.
[6]A.Vaswani,N.Shazeer,N.Parmar,J.Uszkoreit,L.Jones,A.N.Gomez,Ł.Kaiser,andI.Polosukhin,“Attentionisallyouneed,” Advances
inneuralinformationprocessingsystems,vol.30,2017.
[7]A.Radford,J.Wu,D.Amodei,D.Amodei,J.Clark,M.Brundage,andI.Sutskever,“Betterlanguagemodelsandtheirimplications,” OpenAI
Bloghttps://openai.com/blog/better-language-models,vol.1,no.2,2019.
[8]T.Brown,B.Mann,N.Ryder,M.Subbiah,J.D.Kaplan,P.Dhariwal,A.Neelakantan,P.Shyam,G.Sastry,A.Askell etal.,“Language
modelsarefew-shotlearners,”Advancesinneuralinformationprocessingsystems,vol.33,pp.1877–1901,2020.
[9]H.Touvron,T.Lavril,G.Izacard,X.Martinet,M.-A.Lachaux,T.Lacroix,B.Rozière,N.Goyal,E.Hambro,F.Azhar etal.,“Llama:Open
andefficientfoundationlanguagemodels,”arXivpreprintarXiv:2302.13971,2023.
[10]H.Touvron,L.Martin,K.Stone,P.Albert,A.Almahairi,Y.Babaei,N.Bashlykov,S.Batra,P.Bhargava,S.Bhosale etal.,“Llama2:Open
foundationandfine-tunedchatmodels,”arXivpreprintarXiv:2307.09288,2023.
[11]S.Rezayi,H.Dai,Z.Liu,Z.Wu,A.Hebbar,A.H.Burns,L.Zhao,D.Zhu,Q.Li,W.Liu etal.,“Clinicalradiobert:Knowledge-infusedfew
shotlearningforclinicalnotesnamedentityrecognition,”inInternationalWorkshoponMachineLearninginMedicalImaging. Springer,
2022,pp.269–278.
[12]Z.Liu,M.He,Z.Jiang,Z.Wu,H.Dai,L.Zhang,S.Luo,T.Han,X.Li,X.Jiang etal.,“Surveyonnaturallanguageprocessinginmedical
imageanalysis.”Zhongnandaxuexuebao.Yixueban=JournalofCentralSouthUniversity.MedicalSciences,vol.47,no.8,pp.981–993,
2022.
[13]W.Liao,Z.Liu,H.Dai,Z.Wu,Y.Zhang,X.Huang,Y.Chen,X.Jiang,D.Zhu,T.Liu etal.,“Mask-guidedbertforfewshottextclassification,”
arXivpreprintarXiv:2302.10447,2023.
[14]S.Rezayi,Z.Liu,Z.Wu,C.Dhakal,B.Ge,H.Dai,G.Mai,N.Liu,C.Zhen,T.Liu etal.,“Exploringnewfrontiersinagriculturalnlp:
Investigatingthepotentialoflargelanguagemodelsforfoodapplications,”arXivpreprintarXiv:2306.11892,2023.
[15]T.Zhong,W.Zhao,Y.Zhang,Y.Pan,P.Dong,Z.Jiang,X.Kui,Y.Shang,L.Yang,Y.Wei etal.,“Chatradio-valuer:Achatlargelanguage
modelforgeneralizableradiologyreportgenerationbasedonmulti-institutionandmulti-systemdata,”arXivpreprintarXiv:2310.05242,
2023.
[16]Z.Liu,T.Zhong,Y.Li,Y.Zhang,Y.Pan,Z.Zhao,P.Dong,C.Cao,Y.Liu,P.Shu etal.,“Evaluatinglargelanguagemodelsforradiology
naturallanguageprocessing,”arXivpreprintarXiv:2307.13693,2023.
[17]T.Zhong,Y.Wei,L.Yang,Z.Wu,Z.Liu,X.Wei,W.Li,J.Yao,C.Ma,X.Li etal.,“Chatabl:Abductivelearningvianaturallanguage
interactionwithchatgpt,”arXivpreprintarXiv:2304.11107,2023.
[18]A.Radford,K.Narasimhan,T.Salimans,I.Sutskever etal.,“Improvinglanguageunderstandingbygenerativepre-training,”OpenAI,2018.
[19]OpenAI,“Gpt-4technicalreport,”2023.
[20]H.Dai,Z.Liu,W.Liao,X.Huang,Y.Cao,Z.Wu,L.Zhao,S.Xu,W.Liu,N.Liu,S.Li,D.Zhu,H.Cai,L.Sun,Q.Li,D.Shen,T.Liu,and
X.Li,“Auggpt:Leveragingchatgptfortextdataaugmentation,”2023.
[21]Z.Liu,X.Yu,L.Zhang,Z.Wu,C.Cao,H.Dai,L.Zhao,W.Liu,D.Shen,Q.Li etal.,“Deid-gpt:Zero-shotmedicaltextde-identification
bygpt-4,”arXivpreprintarXiv:2303.11032,2023.
[22]C.Ma,Z.Wu,J.Wang,S.Xu,Y.Wei,Z.Liu,L.Guo,X.Cai,S.Zhang,T.Zhang etal.,“Impressiongpt:aniterativeoptimizingframework
forradiologyreportsummarizationwithchatgpt,”arXivpreprintarXiv:2304.08448,2023.
[23]W.Liao,Z.Liu,H.Dai,S.Xu,Z.Wu,Y.Zhang,X.Huang,D.Zhu,H.Cai,T.Liu etal.,“Differentiatechatgpt-generatedandhuman-written
medicaltexts,”arXivpreprintarXiv:2304.11567,2023.
[24]H.Dai,Y.Li,Z.Liu,L.Zhao,Z.Wu,S.Song,Y.Shen,D.Zhu,X.Li,S.Li etal.,“Ad-autogpt:Anautonomousgptforalzheimer’sdisease
infodemiology,”arXivpreprintarXiv:2306.10095,2023.
[25]Z.Guan,Z.Wu,Z.Liu,D.Wu,H.Ren,Q.Li,X.Li,andN.Liu,“Cohortgpt:Anenhancedgptforparticipantrecruitmentinclinicalstudy,”
arXivpreprintarXiv:2307.11346,2023.
[26]Z.Liu,Z.Wu,M.Hu,B.Zhao,L.Zhao,T.Zhang,H.Dai,X.Chen,Y.Shen,S.Li etal.,“Pharmacygpt:Theaipharmacist,”arXivpreprint
arXiv:2307.10432,2023.
[27]Y.Wei,T.Zhang,H.Zhang,T.Zhong,L.Zhao,Z.Liu,C.Ma,S.Zhang,M.Shang,L.Du etal.,“Chat2brain:Amethodformapping
open-endedsemanticqueriestobrainactivationmaps,”arXivpreprintarXiv:2309.05021,2023.
[28]T.Zhong,X.Wei,E.Shi,J.Gao,C.Ma,Y.Wei,S.Zhang,L.Guo,J.Han,T.Liu etal.,“Asmall-samplemethodwitheegsignalsbased
onabductivelearningformotorimagerydecoding,”inInternationalConferenceonMedicalImageComputingandComputer-Assisted
Intervention. Springer,2023,pp.416–424.
[29]J.Gao,L.Zhao,T.Zhong,C.Li,Z.He,Y.Wei,S.Zhang,L.Guo,T.Liu,J.Han etal.,“Predictionofcognitivescoresbyjointuseof
movie-watchingfmriconnectivityandeyetrackingviaattention-censnet,”Psychoradiology,vol.3,2023.
[30]I.Sutskever,O.Vinyals,andQ.V.Le,“Sequencetosequencelearningwithneuralnetworks,” Advancesinneuralinformationprocessing
systems,vol.27,2014.
[31]G.BebisandM.Georgiopoulos,“Feed-forwardneuralnetworks,” IeeePotentials,vol.13,no.4,pp.27–31,1994.
[32]Y.Yang,L.Wang,S.Shi,P.Tadepalli,S.Lee,andZ.Tu,“Onthesub-layerfunctionalitiesoftransformerdecoder,” arXivpreprint
arXiv:2010.02648,2020.
[33]Z.Dai,Z.Yang,Y.Yang,J.Carbonell,Q.V.Le,andR.Salakhutdinov,“Transformer-xl:Attentivelanguagemodelsbeyondafixed-length
context,”arXivpreprintarXiv:1901.02860,2019.
[34]J.Su,M.Ahmed,Y.Lu,S.Pan,W.Bo,andY.Liu,“Roformer:Enhancedtransformerwithrotarypositionembedding,” Neurocomputing,p.
127063,2023.
[35]O.Press,N.A.Smith,andM.Lewis,“Trainshort,testlong:Attentionwithlinearbiasesenablesinputlengthextrapolation,” arXivpreprint
arXiv:2108.12409,2021.
[36]A.Chowdhery,S.Narang,J.Devlin,M.Bosma,G.Mishra,A.Roberts,P.Barham,H.W.Chung,C.Sutton,S.Gehrmann etal.,“Palm:
Scalinglanguagemodelingwithpathways,”arXivpreprintarXiv:2204.02311,2022.
[37]A.Zeng,X.Liu,Z.Du,Z.Wang,H.Lai,M.Ding,Z.Yang,Y.Xu,W.Zheng,X.Xia etal.,“Glm-130b:Anopenbilingualpre-trained
model,”arXivpreprintarXiv:2210.02414,2022.
[38]B.Workshop,T.L.Scao,A.Fan,C.Akiki,E.Pavlick,S.Ilić,D.Hesslow,R.Castagné,A.S.Luccioni,F.Yvon etal.,“Bloom:A176b-
parameteropen-accessmultilinguallanguagemodel,”arXivpreprintarXiv:2211.05100,2022.
[39]L.Zhao,L.Zhang,Z.Wu,Y.Chen,H.Dai,X.Yu,Z.Liu,T.Zhang,X.Hu,X.Jiang etal.,“Whenbrain-inspiredaimeetsagi,”Meta-
Radiology,p.100005,2023.
[40]J.Holmes,Z.Liu,L.Zhang,Y.Ding,T.T.Sio,L.A.McGee,J.B.Ashman,X.Li,T.Liu,J.Shen,andW.Liu,“Evaluatinglargelanguage
modelsonahighly-specializedtopic,radiationoncologyphysics,”FrontiersinOncology,vol.13,Jul.2023.
[41]Z.Wu,L.Zhang,C.Cao,X.Yu,H.Dai,C.Ma,Z.Liu,L.Zhao,G.Li,W.Liu etal.,“Exploringthetrade-offs:Unifiedlargelanguagemodels
vslocalfine-tunedmodelsforhighly-specificradiologynlitask,”arXivpreprintarXiv:2304.09138,2023.
[42]S.Rezayi,Z.Liu,Z.Wu,C.Dhakal,B.Ge,C.Zhen,T.Liu,andS.Li,“Agribert:knowledge-infusedagriculturallanguagemodelsfor
matchingfoodandnutrition,”inProceedingsoftheThirty-FirstInternationalJointConferenceonArtificialIntelligence,vol.7,2022,pp.
5150–5156.
[43]Z.Liu,A.Zhong,Y.Li,L.Yang,C.Ju,Z.Wu,C.Ma,P.Shu,C.Chen,S.Kim etal.,“Radiology-gpt:Alargelanguagemodelforradiology,”
arXivpreprintarXiv:2306.08666,2023.
[44]Z.Liu,X.He,L.Liu,T.Liu,andX.Zhai, ContextMatters:AStrategytoPre-trainLanguageModelforScienceEducation. SpringerNature
Switzerland,2023,p.666–674. |
[45]J.Wang,Z.Liu,L.Zhao,Z.Wu,C.Ma,S.Yu,H.Dai,Q.Yang,Y.Liu,S.Zhang etal.,“Reviewoflargevisionmodelsandvisualprompt
engineering,”arXivpreprintarXiv:2307.00855,2023.
[46]X.Li,L.Zhang,Z.Wu,Z.Liu,L.Zhao,Y.Yuan,J.Liu,G.Li,D.Zhu,P.Yan etal.,“Artificialgeneralintelligenceformedicalimaging,”
arXivpreprintarXiv:2306.05480,2023.
[47]H.Cai,W.Liao,Z.Liu,Y.Zhang,X.Huang,S.Ding,H.Ren,Z.Wu,H.Dai,S.Li etal.,“Coarse-to-fineknowledgegraphdomainadaptation
basedondistantly-supervisediterativetraining,”arXivpreprintarXiv:2211.02849,2022.
[48]H.Dai,C.Ma,Z.Liu,Y.Li,P.Shu,X.Wei,L.Zhao,Z.Wu,D.Zhu,W.Liu etal.,“Samaug:Pointpromptaugmentationforsegment
anythingmodel,”arXivpreprintarXiv:2307.01187,2023.
[49]L.Zhang,Z.Liu,L.Zhang,Z.Wu,X.Yu,J.Holmes,H.Feng,H.Dai,X.Li,Q.Li etal.,“Segmentanythingmodel(sam)forradiation
oncology,”arXivpreprintarXiv:2306.11730,2023.
[50]Z.Xiao,Y.Chen,L.Zhang,J.Yao,Z.Wu,X.Yu,Y.Pan,L.Zhao,C.Ma,X.Liu etal.,“Instruction-vit:Multi-modalpromptsforinstruction
learninginvit,”arXivpreprintarXiv:2305.00201,2023.
[51]P.Liu,W.Yuan,J.Fu,Z.Jiang,H.Hayashi,andG.Neubig,“Pre-train,prompt,andpredict:Asystematicsurveyofpromptingmethodsin
naturallanguageprocessing,”ACMComputingSurveys,vol.55,no.9,pp.1–35,2023.
[52]S.B.Kotsiantis,I.Zaharakis,P.Pintelas etal.,“Supervisedmachinelearning:Areviewofclassificationtechniques,”Emergingartificial
intelligenceapplicationsincomputerengineering,vol.160,no.1,pp.3–24,2007.
[53]Y.Bengio,A.Courville,andP.Vincent,“Representationlearning:Areviewandnewperspectives,” IEEEtransactionsonpatternanalysis
andmachineintelligence,vol.35,no.8,pp.1798–1828,2013.
[54]L.Dong,N.Yang,W.Wang,F.Wei,X.Liu,Y.Wang,J.Gao,M.Zhou,andH.-W.Hon,“Unifiedlanguagemodelpre-trainingfornatural
languageunderstandingandgeneration,”Advancesinneuralinformationprocessingsystems,vol.32,2019.
[55]T.SchickandH.Schütze,“It’snotjustsizethatmatters:Smalllanguagemodelsarealsofew-shotlearners,” arXivpreprintarXiv:2009.07118,
2020.
[56]F.Petroni,T.Rocktäschel,P.Lewis,A.Bakhtin,Y.Wu,A.H.Miller,andS.Riedel,“Languagemodelsasknowledgebases?” arXivpreprint
arXiv:1909.01066,2019.
[57]B.Lester,R.Al-Rfou,andN.Constant,“Thepowerofscaleforparameter-efficientprompttuning,” arXivpreprintarXiv:2104.08691,2021.
[58]T.SchickandH.Schütze,“Exploitingclozequestionsforfewshottextclassificationandnaturallanguageinference,” arXivpreprint
arXiv:2001.07676,2020.
[59]R.Shin,C.H.Lin,S.Thomson,C.Chen,S.Roy,E.A.Platanios,A.Pauls,D.Klein,J.Eisner,andB.VanDurme,“Constrainedlanguage
modelsyieldfew-shotsemanticparsers,”arXivpreprintarXiv:2104.08768,2021.
[60]Z.Jiang,F.F.Xu,J.Araki,andG.Neubig,“Howcanweknowwhatlanguagemodelsknow?” TransactionsoftheAssociationfor
ComputationalLinguistics,vol.8,pp.423–438,2020.
[61]K.Duh,K.Sudoh,X.Wu,H.Tsukada,andM.Nagata,“Generalizedminimumbayesrisksystemcombination,”in Proceedingsof5th
InternationalJointConferenceonNaturalLanguageProcessing,2011,pp.1356–1360.
[62]Z.Jiang,J.Araki,H.Ding,andG.Neubig,“Howcanweknowwhenlanguagemodelsknow?onthecalibrationoflanguagemodelsfor
questionanswering,”TransactionsoftheAssociationforComputationalLinguistics,vol.9,pp.962–977,2021.
[63]M.McCloskeyandN.J.Cohen,“Catastrophicinterferenceinconnectionistnetworks:Thesequentiallearningproblem,”in Psychologyof
learningandmotivation. Elsevier,1989,vol.24,pp.109–165.
[64]T.Brown,B.Mann,N.Ryder,M.Subbiah,J.D.Kaplan,P.Dhariwal,A.Neelakantan,P.Shyam,G.Sastry,A.Askell etal.,“Language
modelsarefew-shotlearners,”Advancesinneuralinformationprocessingsystems,vol.33,pp.1877–1901,2020.
[65]Y.Zhu,R.Kiros,R.Zemel,R.Salakhutdinov,R.Urtasun,A.Torralba,andS.Fidler,“Aligningbooksandmovies:Towardsstory-likevisual
explanationsbywatchingmoviesandreadingbooks,”inProceedingsoftheIEEEinternationalconferenceoncomputervision,2015,pp.
19–27.
[66]“Projectgutenberg.”[Online].Available:https://www.gutenberg.org/
[67]“Commoncrawl.”[Online].Available:https://commoncrawl.org/
[68]C.Raffel,N.Shazeer,A.Roberts,K.Lee,S.Narang,M.Matena,Y.Zhou,W.Li,andP.J.Liu,“Exploringthelimitsoftransferlearning
withaunifiedtext-to-texttransformer,”TheJournalofMachineLearningResearch,vol.21,no.1,pp.5485–5551,2020.
[69]T.H.TrinhandQ.V.Le,“Asimplemethodforcommonsensereasoning,” arXivpreprintarXiv:1806.02847,2018.
[70]Y.Liu,M.Ott,N.Goyal,J.Du,M.Joshi,D.Chen,O.Levy,M.Lewis,L.Zettlemoyer,andV.Stoyanov,“Roberta:Arobustlyoptimizedbert
pretrainingapproach,”arXivpreprintarXiv:1907.11692,2019.
[71]R.Zellers,A.Holtzman,H.Rashkin,Y.Bisk,A.Farhadi,F.Roesner,andY.Choi,“Defendingagainstneuralfakenews,” Advancesinneural
informationprocessingsystems,vol.32,2019.
[72]G.Penedo,Q.Malartic,D.Hesslow,R.Cojocaru,H.Alobeidli,A.Cappelli,B.Pannier,E.Almazrouei,andJ.Launay,“Therefinedweb
datasetforfalconllm:Outperformingcuratedcorporawithwebdataonly,”inThirty-seventhConferenceonNeuralInformationProcessing
SystemsDatasetsandBenchmarksTrack,2023.
[73]A.Gokaslan,V.C.E.Pavlick,andS.Tellex,“Openwebtextcorpus,”http://Skylion007.github.io/OpenWebTextCorpus,2019.
[74]J.Baumgartner,S.Zannettou,B.Keegan,M.Squire,andJ.Blackburn,“Thepushshiftredditdataset,”in Proceedingsoftheinternational
AAAIconferenceonwebandsocialmedia,vol.14,2020,pp.830–839.
[75]“Wikipedia.”[Online].Available:https://en.wikipedia.org/wiki/Main_Page
[76]“Bigquerydataset.”[Online].Available:https://cloud.google.com/bigquery
[77]L.Gao,S.Biderman,S.Black,L.Golding,T.Hoppe,C.Foster,J.Phang,H.He,A.Thite,N.Nabeshima etal.,“Thepile:An800gbdataset
ofdiversetextforlanguagemodeling,”arXivpreprintarXiv:2101.00027,2020.
[78]H.Laurençon,L.Saulnier,T.Wang,C.Akiki,A.VillanovadelMoral,T.LeScao,L.VonWerra,C.Mou,E.GonzálezPonferrada,H.Nguyen
etal.,“Thebigsciencerootscorpus:A1.6tbcompositemultilingualdataset,”AdvancesinNeuralInformationProcessingSystems,vol.35,
pp.31809–31826,2022.
[79]S.Smith,M.Patwary,B.Norick,P.LeGresley,S.Rajbhandari,J.Casper,Z.Liu,S.Prabhumoye,G.Zerveas,V.Korthikanti etal.,“Using
deepspeedandmegatrontotrainmegatron-turingnlg530b,alarge-scalegenerativelanguagemodel,”arXivpreprintarXiv:2201.11990,
2022.
[80]R.Thoppilan,D.DeFreitas,J.Hall,N.Shazeer,A.Kulshreshtha,H.-T.Cheng,A.Jin,T.Bos,L.Baker,Y.Du etal.,“Lamda:Language
modelsfordialogapplications,”arXivpreprintarXiv:2201.08239,2022.
[81]E.Nijkamp,B.Pang,H.Hayashi,L.Tu,H.Wang,Y.Zhou,S.Savarese,andC.Xiong,“Codegen:Anopenlargelanguagemodelforcode
withmulti-turnprogramsynthesis,”arXivpreprintarXiv:2203.13474,2022.
[82]Q.Zheng,X.Xia,X.Zou,Y.Dong,S.Wang,Y.Xue,L.Shen,Z.Wang,A.Wang,Y.Li etal.,“Codegeex:Apre-trainedmodelforcode
generationwithmultilingualbenchmarkingonhumaneval-x,”inProceedingsofthe29thACMSIGKDDConferenceonKnowledgeDiscovery
andDataMining,2023,pp.5673–5684.
[83]S.Zhang,S.Roller,N.Goyal,M.Artetxe,M.Chen,S.Chen,C.Dewan,M.Diab,X.Li,X.V.Lin etal.,“Opt:Openpre-trainedtransformer
languagemodels,”arXivpreprintarXiv:2205.01068,2022.
[84]H.W.Chung,L.Hou,S.Longpre,B.Zoph,Y.Tay,W.Fedus,Y.Li,X.Wang,M.Dehghani,S.Brahma etal.,“Scalinginstruction-finetuned
languagemodels,”arXivpreprintarXiv:2210.11416,2022.
[85]A.Radford,J.Wu,R.Child,D.Luan,D.Amodei,I.Sutskever etal.,“Languagemodelsareunsupervisedmultitasklearners,”OpenAIblog,
vol.1,no.8,p.9,2019.
[86]D.Hernandez,T.Brown,T.Conerly,N.DasSarma,D.Drain,S.El-Showk,N.Elhage,Z.Hatfield-Dodds,T.Henighan,T.Hume etal.,
“Scalinglawsandinterpretabilityoflearningfromrepeateddata,”arXivpreprintarXiv:2205.10487,2022.
[87]K.Lee,D.Ippolito,A.Nystrom,C.Zhang,D.Eck,C.Callison-Burch,andN.Carlini,“Deduplicatingtrainingdatamakeslanguagemodels
better,”arXivpreprintarXiv:2107.06499,2021.
[88]N.Carlini,F.Tramer,E.Wallace,M.Jagielski,A.Herbert-Voss,K.Lee,A.Roberts,T.Brown,D.Song,U.Erlingsson etal.,“Extracting
trainingdatafromlargelanguagemodels,”in30thUSENIXSecuritySymposium(USENIXSecurity21),2021,pp.2633–2650.
[89]S.Gehman,S.Gururangan,M.Sap,Y.Choi,andN.A.Smith,“Realtoxicityprompts:Evaluatingneuraltoxicdegenerationinlanguage
models,”arXivpreprintarXiv:2009.11462,2020.
[90]J.Devlin,M.-W.Chang,K.Lee,andK.Toutanova,“Bert:Pre-trainingofdeepbidirectionaltransformersforlanguageunderstanding,” arXiv
preprintarXiv:1810.04805,2018.
[91]H.W.Chung,L.Hou,S.Longpre,B.Zoph,Y.Tay,W.Fedus,Y.Li,X.Wang,M.Dehghani,S.Brahma etal.,“Scalinginstruction-finetuned
languagemodels,”arXivpreprintarXiv:2210.11416,2022.
[92]M.Lewis,Y.Liu,N.Goyal,M.Ghazvininejad,A.Mohamed,O.Levy,V.Stoyanov,andL.Zettlemoyer,“Bart:Denoisingsequence-to-
sequencepre-trainingfornaturallanguagegeneration,translation,andcomprehension,”arXivpreprintarXiv:1910.13461,2019.
[93]L.Ouyang,J.Wu,X.Jiang,D.Almeida,C.L.Wainwright,P.Mishkin,C.Zhang,S.Agarwal,K.Slama,A.Ray etal.,“Traininglanguage
modelstofollowinstructionswithhumanfeedback,”arXivpreprintarXiv:2203.02155,2022.
[94]S.Li,Y.Zhao,R.Varma,O.Salpekar,P.Noordhuis,T.Li,A.Paszke,J.Smith,B.Vaughan,P.Damania etal.,“Pytorchdistributed:
Experiencesonacceleratingdataparalleltraining,”arXivpreprintarXiv:2006.15704,2020.
[95]M.Isard,M.Budiu,Y.Yu,A.Birrell,andD.Fetterly,“Dryad:distributeddata-parallelprogramsfromsequentialbuildingblocks,”in
Proceedingsofthe2ndACMSIGOPS/EuroSysEuropeanConferenceonComputerSystems2007,2007,pp.59–72.
[96]M.Shoeybi,M.Patwary,R.Puri,P.LeGresley,J.Casper,andB.Catanzaro,“Megatron-lm:Trainingmulti-billionparameterlanguagemodels
usingmodelparallelism,”arXivpreprintarXiv:1909.08053,2019.
[97]S.Rajbhandari,J.Rasley,O.Ruwase,andY.He,“Zero:Memoryoptimizationstowardtrainingtrillionparametermodels,”in SC20:
InternationalConferenceforHighPerformanceComputing,Networking,StorageandAnalysis. IEEE,2020,pp.1–16.
[98]Y.Huang,Y.Cheng,A.Bapna,O.Firat,D.Chen,M.Chen,H.Lee,J.Ngiam,Q.V.Le,Y.Wu etal.,“Gpipe:Efficienttrainingofgiant
neuralnetworksusingpipelineparallelism,”Advancesinneuralinformationprocessingsystems,vol.32,2019.
[99]P.Micikevicius,S.Narang,J.Alben,G.Diamos,E.Elsen,D.Garcia,B.Ginsburg,M.Houston,O.Kuchaiev,G.Venkatesh etal.,“Mixed
precisiontraining,”arXivpreprintarXiv:1710.03740,2017.
[100]J.W.Rae,S.Borgeaud,T.Cai,K.Millican,J.Hoffmann,F.Song,J.Aslanides,S.Henderson,R.Ring,S.Young etal.,“Scalinglanguage
models:Methods,analysis&insightsfromtraininggopher,”arXivpreprintarXiv:2112.11446,2021.
[101]J.Ren,S.Rajbhandari,R.Y.Aminabadi,O.Ruwase,S.Yang,M.Zhang,D.Li,andY.He,“Zero-offload:Democratizingbillion-scale
modeltraining,”in2021USENIXAnnualTechnicalConference(USENIXATC21),2021,pp.551–564.
[102]Y.Wang,Y.Kordi,S.Mishra,A.Liu,N.A.Smith,D.Khashabi,andH.Hajishirzi,“Self-instruct:Aligninglanguagemodelwithself
generatedinstructions,”arXivpreprintarXiv:2212.10560,2022.
[103]Y.Wang,S.Mishra,P.Alipoormolabashi,Y.Kordi,A.Mirzaei,A.Arunkumar,A.Ashok,A.S.Dhanasekaran,A.Naik,D.Stap etal.,
“Super-naturalinstructions:Generalizationviadeclarativeinstructionson1600+nlptasks,”arXivpreprintarXiv:2204.07705,2022.
[104]S.H.Bach,V.Sanh,Z.-X.Yong,A.Webson,C.Raffel,N.V.Nayak,A.Sharma,T.Kim,M.S.Bari,T.Fevry etal.,“Promptsource:An
integrateddevelopmentenvironmentandrepositoryfornaturallanguageprompts,”arXivpreprintarXiv:2202.01279,2022.
[105]S.Victor,W.Albert,R.Colin,B.Stephen,S.Lintang,A.Zaid,C.Antoine,S.Arnaud,R.Arun,D.Manan etal.,“Multitaskprompted
trainingenableszero-shottaskgeneralization,”inInternationalConferenceonLearningRepresentations,2022.
[106]R.Nakano,J.Hilton,S.Balaji,J.Wu,L.Ouyang,C.Kim,C.Hesse,S.Jain,V.Kosaraju,W.Saunders etal.,“Webgpt:Browser-assisted
question-answeringwithhumanfeedback,”arXivpreprintarXiv:2112.09332,2021.
[107]J.Wei,M.Bosma,V.Zhao,K.Guu,A.W.Yu,B.Lester,N.Du,A.M.Dai,andQ.V.Le,“Finetunedlanguagemodelsarezero-shotlearners,”
inInternationalConferenceonLearningRepresentations.
[108]T.Tang,J.Li,W.X.Zhao,andJ.-R.Wen,“Mvp:Multi-tasksupervisedpre-trainingfornaturallanguagegeneration,” arXivpreprint
arXiv:2206.12131,2022.
[109]Z.Kenton,T.Everitt,L.Weidinger,I.Gabriel,V.Mikulik,andG.Irving,“Alignmentoflanguageagents,” arXivpreprintarXiv:2103.14659,
2021.
[110]A.Glaese,N.McAleese,M.Trębacz,J.Aslanides,V.Firoiu,T.Ewalds,M.Rauh,L.Weidinger,M.Chadwick,P.Thacker etal.,“Improving
alignmentofdialogueagentsviatargetedhumanjudgements,”arXivpreprintarXiv:2209.14375,2022.
[111]J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization algorithms,” arXiv preprint
arXiv:1707.06347,2017.
[112]E.J.Hu,Y.Shen,P.Wallis,Z.Allen-Zhu,Y.Li,S.Wang,L.Wang,andW.Chen,“Lora:Low-rankadaptationoflargelanguagemodels,”
arXivpreprintarXiv:2106.09685,2021.
[113]X.L.LiandP.Liang,“Prefix-tuning:Optimizingcontinuouspromptsforgeneration,” arXivpreprintarXiv:2101.00190,2021.
[114]X.Liu,K.Ji,Y.Fu,W.L.Tam,Z.Du,Z.Yang,andJ.Tang,“P-tuningv2:Prompttuningcanbecomparabletofine-tuninguniversallyacross
scalesandtasks,”arXivpreprintarXiv:2110.07602,2021.
[115]X.Liu,Y.Zheng,Z.Du,M.Ding,Y.Qian,Z.Yang,andJ.Tang,“Gptunderstands,too,” AIOpen,2023.
[116]Q.Zhang,M.Chen,A.Bukharin,P.He,Y.Cheng,W.Chen,andT.Zhao,“Adaptivebudgetallocationforparameter-efficientfine-tuning,”
arXivpreprintarXiv:2303.10512,2023.
[117]T.Dettmers,A.Pagnoni,A.Holtzman,andL.Zettlemoyer,“Qlora:Efficientfinetuningofquantizedllms,” arXivpreprintarXiv:2305.14314,
2023.
[118]A.Askell,Y.Bai,A.Chen,D.Drain,D.Ganguli,T.Henighan,A.Jones,N.Joseph,B.Mann,N.DasSarma etal.,“Agenerallanguage
assistantasalaboratoryforalignment,”arXivpreprintarXiv:2112.00861,2021.
[119]Y.Chang,X.Wang,J.Wang,Y.Wu,K.Zhu,H.Chen,L.Yang,X.Yi,C.Wang,Y.Wang etal.,“Asurveyonevaluationoflargelanguage
models,”arXivpreprintarXiv:2307.03109,2023.
[120]Z.Liu,H.Jiang,T.Zhong,Z.Wu,C.Ma,Y.Li,X.Yu,Y.Zhang,Y.Pan,P.Shu etal.,“Holisticevaluationofgpt-4vforbiomedicalimaging,”
arXivpreprintarXiv:2312.05256,2023.
[121]J.Deng,W.Dong,R.Socher,L.-J.Li,K.Li,andL.Fei-Fei,“Imagenet:Alarge-scalehierarchicalimagedatabase,”in 2009IEEEconference
oncomputervisionandpatternrecognition. Ieee,2009,pp.248–255.
[122]A.Kuznetsova,H.Rom,N.Alldrin,J.Uijlings,I.Krasin,J.Pont-Tuset,S.Kamali,S.Popov,M.Malloci,A.Kolesnikov etal.,“Theopen
imagesdatasetv4:Unifiedimageclassification,objectdetection,andvisualrelationshipdetectionatscale,”InternationalJournalofComputer
Vision,vol.128,no.7,pp.1956–1981,2020.
[123]A.Wang,A.Singh,J.Michael,F.Hill,O.Levy,andS.R.Bowman,“Glue:Amulti-taskbenchmarkandanalysisplatformfornaturallanguage
understanding,”2018.
[124]A.Wang,Y.Pruksachatkun,N.Nangia,A.Singh,J.Michael,F.Hill,O.Levy,andS.Bowman,“Superglue:Astickierbenchmarkfor
general-purposelanguageunderstandingsystems,”Advancesinneuralinformationprocessingsystems,vol.32,2019.
[125]D.Hendrycks,C.Burns,S.Basart,A.Zou,M.Mazeika,D.Song,andJ.Steinhardt,“Measuringmassivemultitasklanguageunderstanding,”
arXivpreprintarXiv:2009.03300,2020.
[126]H.Li,Y.Zhang,F.Koto,Y.Yang,H.Zhao,Y.Gong,N.Duan,andT.Baldwin,“Cmmlu:Measuringmassivemultitasklanguage
understandinginchinese,”arXivpreprintarXiv:2306.09212,2023.
[127]J.Hu,S.Ruder,A.Siddhant,G.Neubig,O.Firat,andM.Johnson,“Xtreme:Amassivelymultilingualmulti-taskbenchmarkforevaluating
cross-lingualgeneralisation,”inInternationalConferenceonMachineLearning. PMLR,2020,pp.4411–4421.
[128]S.Ruder,N.Constant,J.Botha,A.Siddhant,O.Firat,J.Fu,P.Liu,J.Hu,D.Garrette,G.Neubig etal.,“Xtreme-r:Towardsmorechallenging
andnuancedmultilingualevaluation,”arXivpreprintarXiv:2104.07412,2021.
[129]D.Hendrycks,C.Burns,S.Kadavath,A.Arora,S.Basart,E.Tang,D.Song,andJ.Steinhardt,“Measuringmathematicalproblemsolving
withthemathdataset,”arXivpreprintarXiv:2103.03874,2021.
[130]K.Cobbe,V.Kosaraju,M.Bavarian,M.Chen,H.Jun,L.Kaiser,M.Plappert,J.Tworek,J.Hilton,R.Nakano etal.,“Trainingverifiersto
solvemathwordproblems,”arXivpreprintarXiv:2110.14168,2021.
[131]M.Chen,J.Tworek,H.Jun,Q.Yuan,H.P.d.O.Pinto,J.Kaplan,H.Edwards,Y.Burda,N.Joseph,G.Brockman etal.,“Evaluatinglarge
languagemodelstrainedoncode,”arXivpreprintarXiv:2107.03374,2021.
[132]J.Austin,A.Odena,M.Nye,M.Bosma,H.Michalewski,D.Dohan,E.Jiang,C.Cai,M.Terry,Q.Le etal.,“Programsynthesiswithlarge
languagemodels,”arXivpreprintarXiv:2108.07732,2021.
[133]R.Zellers,A.Holtzman,Y.Bisk,A.Farhadi,andY.Choi,“Hellaswag:Canamachinereallyfinishyoursentence?”2019.
[134]Y.Bisk,R.Zellers,J.Gao,Y.Choi etal.,“Piqa:Reasoningaboutphysicalcommonsenseinnaturallanguage,”inProceedingsoftheAAAI
conferenceonartificialintelligence,vol.34,no.05,2020,pp.7432–7439.
[135]C.Clark,K.Lee,M.-W.Chang,T.Kwiatkowski,M.Collins,andK.Toutanova,“Boolq:Exploringthesurprisingdifficultyofnaturalyes/no
questions,”arXivpreprintarXiv:1905.10044,2019.
[136]M.Sap,H.Rashkin,D.Chen,R.LeBras,andY.Choi,“Socialiqa:Commonsensereasoningaboutsocialinteractions,” arXivpreprint
arXiv:1904.09728,2019.
[137]K.Sakaguchi,R.L.Bras,C.Bhagavatula,andY.Choi,“Winogrande:Anadversarialwinogradschemachallengeatscale,” Communications
oftheACM,vol.64,no.9,pp.99–106,2021.
[138]P.Clark,I.Cowhey,O.Etzioni,T.Khot,A.Sabharwal,C.Schoenick,andO.Tafjord,“Thinkyouhavesolvedquestionanswering?tryarc,
theai2reasoningchallenge,”arXivpreprintarXiv:1803.05457,2018.
[139]T.Mihaylov,P.Clark,T.Khot,andA.Sabharwal,“Canasuitofarmorconductelectricity?anewdatasetforopenbookquestionanswering,”
2018.
[140]D.Jin,E.Pan,N.Oufattole,W.-H.Weng,H.Fang,andP.Szolovits,“Whatdiseasedoesthispatienthave?alarge-scaleopendomainquestion
answeringdatasetfrommedicalexams,”AppliedSciences,vol.11,no.14,p.6421,2021.
[141]A.Pal,L.K.Umapathi,andM.Sankarasubbu,“Medmcqa:Alarge-scalemulti-subjectmulti-choicedatasetformedicaldomainquestion
answering,”inConferenceonHealth,Inference,andLearning. PMLR,2022,pp.248–260.
[142]E.M.Voorhees etal.,“Thetrec-8questionansweringtrackreport.”inTrec,vol.99,1999,pp.77–82.
[143]P.Rajpurkar,J.Zhang,K.Lopyrev,andP.Liang,“Squad:100,000+questionsformachinecomprehensionoftext,” arXivpreprint
arXiv:1606.05250,2016.
[144]T.Kwiatkowski,J.Palomaki,O.Redfield,M.Collins,A.Parikh,C.Alberti,D.Epstein,I.Polosukhin,J.Devlin,K.Lee etal.,“Natural
questions:abenchmarkforquestionansweringresearch,”TransactionsoftheAssociationforComputationalLinguistics,vol.7,pp.453–466,
2019.
[145]E.Kamalloo,N.Dziri,C.L.Clarke,andD.Rafiei,“Evaluatingopen-domainquestionansweringintheeraoflargelanguagemodels,” arXiv
preprintarXiv:2305.06984,2023.
[146]E.Ferrara,“Shouldchatgptbebiased?challengesandrisksofbiasinlargelanguagemodels,” arXivpreprintarXiv:2304.03738,2023.
[147]S.Gehman,S.Gururangan,M.Sap,Y.Choi,andN.A.Smith,“Realtoxicityprompts:Evaluatingneuraltoxicdegenerationinlanguage
models,”arXivpreprintarXiv:2009.11462,2020.
[148]J.Zhao,M.Fang,Z.Shi,Y.Li,L.Chen,andM.Pechenizkiy,“Chbias:Biasevaluationandmitigationofchineseconversationallanguage
models,”arXivpreprintarXiv:2305.11262,2023.
[149]M.Nasr,N.Carlini,J.Hayase,M.Jagielski,A.F.Cooper,D.Ippolito,C.A.Choquette-Choo,E.Wallace,F.Tramèr,andK.Lee,“Scalable
extractionoftrainingdatafrom(production)languagemodels,”arXivpreprintarXiv:2311.17035,2023.
[150]X.Wu,J.Li,M.Xu,W.Dong,S.Wu,C.Bian,andD.Xiong,“Depn:Detectingandeditingprivacyneuronsinpretrainedlanguagemodels,”
arXivpreprintarXiv:2310.20138,2023.
[151]A.Zou,Z.Wang,J.Z.Kolter,andM.Fredrikson,“Universalandtransferableadversarialattacksonalignedlanguagemodels,”2023.
[152]Z.Zhang,Y.Li,J.Wang,B.Liu,D.Li,Y.Guo,X.Chen,andY.Liu,“Remos:reducingdefectinheritanceintransferlearningviarelevant
modelslicing,”inProceedingsofthe44thInternationalConferenceonSoftwareEngineering,2022,pp.1856–1868.
[153]K.Papineni,S.Roukos,T.Ward,andW.-J.Zhu,“Bleu:amethodforautomaticevaluationofmachinetranslation,”in Proceedingsofthe
40thannualmeetingoftheAssociationforComputationalLinguistics,2002,pp.311–318.
[154]C.-Y.Lin,“Rouge:Apackageforautomaticevaluationofsummaries,”in Textsummarizationbranchesout,2004,pp.74–81.
[155]T. Zhang, V. Kishore, F. Wu, K. Q. Weinberger, and Y. Artzi, “Bertscore: Evaluating text generation with bert,” arXiv preprint
arXiv:1904.09675,2019.
[156]J.Novikova,O.Dušek,A.C.Curry,andV.Rieser,“Whyweneednewevaluationmetricsfornlg,” arXivpreprintarXiv:1707.06875,2017.
[157]T.Wolf,L.Debut,V.Sanh,J.Chaumond,C.Delangue,A.Moi,P.Cistac,T.Rault,R.Louf,M.Funtowicz etal.,“Transformers:State-of-
the-artnaturallanguageprocessing,”inProceedingsofthe2020conferenceonempiricalmethodsinnaturallanguageprocessing:system
demonstrations,2020,pp.38–45.
[158]J.Rasley,S.Rajbhandari,O.Ruwase,andY.He,“Deepspeed:Systemoptimizationsenabletrainingdeeplearningmodelswithover100
billionparameters,”inProceedingsofthe26thACMSIGKDDInternationalConferenceonKnowledgeDiscovery&DataMining,2020,pp.
3505–3506.
[159]S.Rajbhandari,O.Ruwase,J.Rasley,S.Smith,andY.He,“Zero-infinity:Breakingthegpumemorywallforextremescaledeeplearning,”
inProceedingsoftheInternationalConferenceforHighPerformanceComputing,Networking,StorageandAnalysis,2021,pp.1–14.
[160]G.Zeng,X.Han,Z.Zhang,Z.Liu,Y.Lin,andM.Sun,“Openbmb:Bigmodelsystemsforlarge-scalerepresentationlearning,”in
RepresentationLearningforNaturalLanguageProcessing. SpringerNatureSingaporeSingapore,2023,pp.463–489.
[161]D.Narayanan,M.Shoeybi,J.Casper,P.LeGresley,M.Patwary,V.Korthikanti,D.Vainbrand,P.Kashinkunti,J.Bernauer,B.Catanzaro
etal.,“Efficientlarge-scalelanguagemodeltrainingongpuclustersusingmegatron-lm,”inProceedingsoftheInternationalConferencefor
HighPerformanceComputing,Networking,StorageandAnalysis,2021,pp.1–15.
[162]V.A.Korthikanti,J.Casper,S.Lym,L.McAfee,M.Andersch,M.Shoeybi,andB.Catanzaro,“Reducingactivationrecomputationinlarge
transformermodels,”ProceedingsofMachineLearningandSystems,vol.5,2023.
[163]S.Li,H.Liu,Z.Bian,J.Fang,H.Huang,Y.Liu,B.Wang,andY.You,“Colossal-ai:Aunifieddeeplearningsystemforlarge-scaleparallel
training,”inProceedingsofthe52ndInternationalConferenceonParallelProcessing,2023,pp.766–775.
[164]J.He,J.Qiu,A.Zeng,Z.Yang,J.Zhai,andJ.Tang,“Fastmoe:Afastmixture-of-experttrainingsystem,” arXivpreprintarXiv:2103.13262,
2021.
[165]J.He,J.Zhai,T.Antunes,H.Wang,F.Luo,S.Shi,andQ.Li,“Fastermoe:modelingandoptimizingtrainingoflarge-scaledynamicpre-trained
models,”inProceedingsofthe27thACMSIGPLANSymposiumonPrinciplesandPracticeofParallelProgramming,2022,pp.120–134.
[166]A.Paszke,S.Gross,F.Massa,A.Lerer,J.Bradbury,G.Chanan,T.Killeen,Z.Lin,N.Gimelshein,L.Antiga etal.,“Pytorch:Animperative
style,high-performancedeeplearninglibrary,”Advancesinneuralinformationprocessingsystems,vol.32,2019.
[167]M.Abadi,P.Barham,J.Chen,Z.Chen,A.Davis,J.Dean,M.Devin,S.Ghemawat,G.Irving,M.Isard etal.,“{TensorFlow}:asystem
for {Large-Scale} machinelearning,”in12thUSENIXsymposiumonoperatingsystemsdesignandimplementation(OSDI16),2016,pp.
265–283.
[168]M.Abadi,A.Agarwal,P.Barham,E.Brevdo,Z.Chen,C.Citro,G.S.Corrado,A.Davis,J.Dean,M.Devin etal.,“Tensorflow:Large-scale
machinelearningonheterogeneousdistributedsystems,”arXivpreprintarXiv:1603.04467,2016.
[169]Y.Ma,D.Yu,T.Wu,andH.Wang,“Paddlepaddle:Anopen-sourcedeeplearningplatformfromindustrialpractice,” FrontiersofDataand
Computing,vol.1,no.1,pp.105–115,2019.
[170]T.Chen,M.Li,Y.Li,M.Lin,N.Wang,M.Wang,T.Xiao,B.Xu,C.Zhang,andZ.Zhang,“Mxnet:Aflexibleandefficientmachinelearning
libraryforheterogeneousdistributedsystems,”arXivpreprintarXiv:1512.01274,2015.
[171]J.Yuan,X.Li,C.Cheng,J.Liu,R.Guo,S.Cai,C.Yao,F.Yang,X.Yi,C.Wu etal.,“Oneflow:Redesignthedistributeddeeplearning
frameworkfromscratch,”arXivpreprintarXiv:2110.15032,2021.
[172]L.HuaweiTechnologiesCo.,“Huaweimindsporeaidevelopmentframework,”in ArtificialIntelligenceTechnology. Springer,2022,pp.
137–162.
[173]J.Bradbury,R.Frostig,P.Hawkins,M.J.Johnson,C.Leary,D.Maclaurin,G.Necula,A.Paszke,J.VanderPlas,S.Wanderman-Milne etal.,
“Jax:composabletransformationsofpython+numpyprograms,”2018.
[174]E.Strubell,A.Ganesh,andA.McCallum,“Energyandpolicyconsiderationsfordeeplearninginnlp,” arXivpreprintarXiv:1906.02243,
2019.
[175]G.Hinton,O.Vinyals,andJ.Dean,“Distillingtheknowledgeinaneuralnetwork,” arXivpreprintarXiv:1503.02531,2015.
[176]J.Gou,B.Yu,S.J.Maybank,andD.Tao,“Knowledgedistillation:Asurvey,” InternationalJournalofComputerVision,vol.129,pp.
1789–1819,2021.
[177]S.Sun,Y.Cheng,Z.Gan,andJ.Liu,“Patientknowledgedistillationforbertmodelcompression,” arXivpreprintarXiv:1908.09355,2019.
[178]X.Jiao,Y.Yin,L.Shang,X.Jiang,X.Chen,L.Li,F.Wang,andQ.Liu,“Tinybert:Distillingbertfornaturallanguageunderstanding,” arXiv
preprintarXiv:1909.10351,2019.
[179]M.A.Gordon,K.Duh,andN.Andrews,“Compressingbert:Studyingtheeffectsofweightpruningontransferlearning,” arXivpreprint
arXiv:2002.08307,2020.
[180]P.Michel,O.Levy,andG.Neubig,“Aresixteenheadsreallybetterthanone?” Advancesinneuralinformationprocessingsystems,vol.32,
2019.
[181]H.Bai,W.Zhang,L.Hou,L.Shang,J.Jin,X.Jiang,Q.Liu,M.Lyu,andI.King,“Binarybert:Pushingthelimitofbertquantization,” arXiv
preprintarXiv:2012.15701,2020.
[182]Z.Lan,M.Chen,S.Goodman,K.Gimpel,P.Sharma,andR.Soricut,“Albert:Alitebertforself-supervisedlearningoflanguage
representations,”arXivpreprintarXiv:1909.11942,2019.
[183]P.Chen,H.-F.Yu,I.Dhillon,andC.-J.Hsieh,“Drone:Data-awarelow-rankcompressionforlargenlpmodels,” Advancesinneural
informationprocessingsystems,vol.34,pp.29321–29334,2021.
[184]X.Han,G.Zeng,W.Zhao,Z.Liu,Z.Zhang,J.Zhou,J.Zhang,J.Chao,andM.Sun,“Bminf:Anefficienttoolkitforbigmodelinference
andtuning,”inProceedingsofthe60thAnnualMeetingoftheAssociationforComputationalLinguistics:SystemDemonstrations,2022,pp.
224–230.
[185]Y.Zhao,A.Gu,R.Varma,L.Luo,C.-C.Huang,M.Xu,L.Wright,H.Shojanazeri,M.Ott,S.Shleifer etal.,“Pytorchfsdp:experienceson
scalingfullyshardeddataparallel,”arXivpreprintarXiv:2304.11277,2023.
[186]T.Dao,D.Fu,S.Ermon,A.Rudra,andC.Ré,“Flashattention:Fastandmemory-efficientexactattentionwithio-awareness,” Advancesin
NeuralInformationProcessingSystems,vol.35,pp.16344–16359,2022.
[187]W.Kwon,Z.Li,S.Zhuang,Y.Sheng,L.Zheng,C.H.Yu,J.Gonzalez,H.Zhang,andI.Stoica,“Efficientmemorymanagementforlarge
languagemodelservingwithpagedattention,”inProceedingsofthe29thSymposiumonOperatingSystemsPrinciples,2023,pp.611–626.
[188]Y.Sheng,L.Zheng,B.Yuan,Z.Li,M.Ryabinin,B.Chen,P.Liang,C.Ré,I.Stoica,andC.Zhang,“Flexgen:High-throughputgenerative
inferenceoflargelanguagemodelswithasinglegpu,”inInternationalConferenceonMachineLearning. PMLR,2023,pp.31094–31116.
[189]X.Miao,G.Oliaro,Z.Zhang,X.Cheng,Z.Wang,R.Y.Y.Wong,Z.Chen,D.Arfeen,R.Abhyankar,andZ.Jia,“Specinfer:Accelerating
generativellmservingwithspeculativeinferenceandtokentreeverification,”arXivpreprintarXiv:2305.09781,2023.
[190]G. Xiao, Y. Tian, B. Chen, S. Han, and M. Lewis, “Efficient streaming language models with attention sinks,” arXiv preprint
arXiv:2309.17453,2023.
[191]Z.Zhang,B.Gong,Y.Chen,X.Han,G.Zeng,W.Zhao,Y.Chen,Z.Liu,andM.Sun,“Bmcook:Atask-agnosticcompressiontoolkitfor
bigmodels,”inProceedingsoftheThe2022ConferenceonEmpiricalMethodsinNaturalLanguageProcessing:SystemDemonstrations,
2022,pp.396–405.
[192]A.Borzunov,D.Baranchuk,T.Dettmers,M.Ryabinin,Y.Belkada,A.Chumachenko,P.Samygin,andC.Raffel,“Petals:Collaborative
inferenceandfine-tuningoflargemodels,”arXivpreprintarXiv:2209.01188,2022.
[193]F.Dou,J.Ye,G.Yuan,Q.Lu,W.Niu,H.Sun,L.Guan,G.Lu,G.Mai,N.Liu etal.,“Towardsartificialgeneralintelligence(agi)inthe
internetofthings(iot):Opportunitiesandchallenges,”arXivpreprintarXiv:2309.07438,2023.
[194]C.Liu,Z.Liu,J.Holmes,L.Zhang,L.Zhang,Y.Ding,P.Shu,Z.Wu,H.Dai,Y.Li etal.,“Artificialgeneralintelligenceforradiation
oncology,”Meta-Radiology,p.100045,2023.
[195]Z.Liu,Y.Li,Q.Cao,J.Chen,T.Yang,Z.Wu,J.Hale,J.Gibbs,K.Rasheed,N.Liu etal.,“Transformationvstradition:Artificialgeneral
intelligence(agi)forartsandhumanities,”arXivpreprintarXiv:2310.19626,2023.
[196]J.Wei,X.Wang,D.Schuurmans,M.Bosma,F.Xia,E.Chi,Q.V.Le,D.Zhou etal.,“Chain-of-thoughtpromptingelicitsreasoninginlarge
languagemodels,”AdvancesinNeuralInformationProcessingSystems,vol.35,pp.24824–24837,2022.
[197]T.Kojima,S.S.Gu,M.Reid,Y.Matsuo,andY.Iwasawa,“Largelanguagemodelsarezero-shotreasoners,” Advancesinneuralinformation
processingsystems,vol.35,pp.22199–22213,2022.
[198]N.Qiang,J.Gao,Q.Dong,H.Yue,H.Liang,L.Liu,J.Yu,J.Hu,S.Zhang,B.Ge etal.,“Functionalbrainnetworkidentificationandfmri
augmentationusingavae-ganframework,”ComputersinBiologyandMedicine,vol.165,p.107395,2023.
[199]M.He,X.Hou,E.Ge,Z.Wang,Z.Kang,N.Qiang,X.Zhang,andB.Ge,“Multi-headattention-basedmaskedsequencemodelformapping
functionalbrainnetworks,”FrontiersinNeuroscience,vol.17,p.1183145,2023.
[200]Y.Liu,E.Ge,N.Qiang,T.Liu,andB.Ge,“Spatial-temporalconvolutionalattentionformappingfunctionalbrainnetworks,”in 2023IEEE
20thInternationalSymposiumonBiomedicalImaging(ISBI). IEEE,2023,pp.1–4.
[201]S.R.Oota,J.Arora,V.Agarwal,M.Marreddy,M.Gupta,andB.Surampudi,“Neurallanguagetaskonomy:WhichNLPtasksarethe
mostpredictiveoffMRIbrainactivity?”inProceedingsofthe2022ConferenceoftheNorthAmericanChapteroftheAssociationfor
ComputationalLinguistics:HumanLanguageTechnologies,M.Carpuat,M.-C.deMarneffe,andI.V.MezaRuiz,Eds. Seattle,United
States:AssociationforComputationalLinguistics,Jul.2022,pp.3220–3237.
[202]Z.Liu,Y.Li,P.Shu,A.Zhong,L.Yang,C.Ju,Z.Wu,C.Ma,J.Luo,C.Chen etal.,“Radiology-llama2:Best-in-classlargelanguagemodel
forradiology,”arXivpreprintarXiv:2309.06419,2023.
[203]T.Sun,X.Zhang,Z.He,P.Li,Q.Cheng,H.Yan,X.Liu,Y.Shao,Q.Tang,X.Zhao,K.Chen,Y.Zheng,Z.Zhou,R.Li,J.Zhan,Y.Zhou,
L.Li,X.Yang,L.Wu,Z.Yin,X.Huang,andX.Qiu,“Moss:Trainingconversationallanguagemodelsfromsyntheticdata,”2023.
[204]L.X.XuanweiZhangandK.Zhao,“Chatyuan:Alargelanguagemodelfordialogueinchineseandenglish,”Dec.2022.[Online].Available:
https://github.com/clue-ai/ChatYuan
[205]Baichuan,“Baichuan2:Openlarge-scalelanguagemodels,” arXivpreprintarXiv:2309.10305,2023.
[206]D.Zhu,J.Chen,X.Shen,X.Li,andM.Elhoseiny,“Minigpt-4:Enhancingvision-languageunderstandingwithadvancedlargelanguage
models,”arXivpreprintarXiv:2304.10592,2023.
[207]L.Zheng,W.-L.Chiang,Y.Sheng,S.Zhuang,Z.Wu,Y.Zhuang,Z.Lin,Z.Li,D.Li,E.Xing etal.,“Judgingllm-as-a-judgewithmt-bench
andchatbotarena,”arXivpreprintarXiv:2306.05685,2023.
[208]B.Peng,E.Alcaide,Q.Anthony,A.Albalak,S.Arcadinho,H.Cao,X.Cheng,M.Chung,M.Grella,K.K.GV etal.,“Rwkv:Reinventing
rnnsforthetransformerera,”arXivpreprintarXiv:2305.13048,2023.