What Can Large Language Models Tell Us about Time Series Analysis

…predict the next token y based on a given context sequence X, trained by maximizing the probability of the token sequence given the context:

    P(y | X) = ∏_{t=1}^{T} P(y_t | x_1, x_2, …, x_{t−1}),    (1)

where T represents the sequence length. Through this, the model achieves intelligent compression and language generation in an autoregressive manner.

Emergent Abilities of LLMs. Large language models exhibit emergent abilities that set them apart from traditional neural networks. These abilities, present in large models but not in smaller ones, are a significant aspect of LLMs (Wei et al., 2022a). Three key emergent abilities of LLMs include: (1) in-context learning (ICL), introduced…

…stationarity and seasonality (Hamilton, 2020). These models assumed past trends would continue into the future. Deep neural networks, like recurrent and temporal convolutional neural networks (Gamboa, 2017), processed larger, complex datasets, capturing non-linear and long-term dependencies without heavy reliance on prior knowledge, thus transforming predictive time series analysis. Recent research like TimeCLR (Yeh et al., 2023) introduced pre-training on diverse, large-scale time series data, allowing fine-tuning for specific tasks with relatively smaller data samples (Jin et al., 2023b), reducing the time and resources required for model training. This allows for the application of sophisticated models in scenarios where collecting large-scale time series data is challenging. Despite the successes of previous generations, we posit that the emergence of LLMs is set to revolutionize time series analysis, shifting it from predictive to general intelligence. LLM-centric models, processing both language instructions and time series (Jin et al., 2024; Anonymous, 2024a), extend capabilities to general question answering, interpretable predictions, and complex reasoning, moving beyond conventional predictive analytics.
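The chain-rule factorization in Equation (1) can be made concrete with a toy sketch. The vocabulary and conditional-probability table below are illustrative assumptions, not from any actual model, and context and generated tokens are treated uniformly as one sequence:

```python
import math

# Toy illustration of Eq. (1): a sequence's probability factorizes into
# per-step conditionals P(y_t | y_1, ..., y_{t-1}). The table below is a
# made-up stand-in for an LLM's next-token distribution.
def sequence_log_prob(tokens, cond_prob):
    """Sum log P(y_t | prefix) over the whole sequence."""
    total = 0.0
    for t, tok in enumerate(tokens):
        prefix = tuple(tokens[:t])
        total += math.log(cond_prob[(prefix, tok)])
    return total

cond_prob = {
    ((), "the"): 0.5,
    (("the",), "cat"): 0.4,
    (("the", "cat"), "sat"): 0.6,
}

p = math.exp(sequence_log_prob(["the", "cat", "sat"], cond_prob))
# p == 0.5 * 0.4 * 0.6 = 0.12
```

Working in log space, as sketched here, is the standard way such products are computed in practice, since multiplying many small probabilities underflows floating-point arithmetic.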
3. LLM-assisted Enhancer for Time Series
Numerous time series analysis models have been devised to address temporal data. LLMs, owing to vast internal knowledge and reasoning capabilities, seamlessly enhance both aspects. Hence, we can intuitively distinguish LLM-assisted enhancer methods from data and model perspectives.

Figure 3: Categories of LLM-centered predictor.
3.1. Data-Based Enhancer

LLM-assisted enhancers not only enhance data interpretability but also provide supplementary improvements, facilitating a more thorough understanding and effective use of time series data. For interpretability, LLMs offer textual descriptions and summaries, helping to understand patterns and anomalies in time series data. Examples include LLM-MPE (Liang et al., 2023) for human mobility data, SignalGPT (Liu et al., 2023a) for biological signals, and Insight Miner (Zhang et al., 2023f) for trend mining. Additionally, AmicroN (Chatterjee et al., 2023) and SST (Ghosh et al., 2023) use LLMs for detailed sensor and spatial time series analysis. Supplementary enhancements involve integrating diverse data sources, enriching time series data context and improving model robustness, as explored in (Yu et al., 2023b) and (Fatouros et al., 2024) for financial decision-making. Such enhancements help improve domain models' inherent capabilities and make them more robust.

3.2. Model-Based Enhancer

Model-based enhancers aim to augment time series models by addressing their limitations in external knowledge and domain-specific contexts. Transferring knowledge from LLMs boosts the performance of domain models in handling complex tasks. Such approaches often employ a dual-tower model; those in (Qiu et al., 2023b; Li et al., 2023b; Yu et al., 2023a), for example, use frozen LLMs for electrocardiogram (ECG) analysis. Some methods further utilize contrastive learning to achieve certain alignments. For example, IMU2CLIP (Moon et al., 2023) aligns text and video with sensor data, while STLLM (Anonymous, 2024b) enhances spatial time series prediction. Another line of work utilizes prompting techniques to harness the inferential decision-making capability of LLMs. For instance, TrafficGPT (Zhang et al., 2023e) exemplifies decision analysis, integrating traffic models with LLMs for user-tailored solutions, offering detailed insights to enhance system interpretability.

3.3. Discussion

LLM-assisted enhancers effectively address the inherent sparsity and noise characteristics of time series data, providing existing time series models with more effective external knowledge and analytical capabilities. Moreover, this technology is plug-and-play, enabling flexible assistance for real-world time series data and model challenges. However, a notable hurdle is that using an LLM as an enhancer introduces significant time and cost overheads when dealing with large-scale datasets. In addition, the inherent diversity and range of application scenarios in time series data add layers of complexity to the creation of universally effective LLM-assisted enhancers.

Our position: LLM-assisted enhancers represent a promising avenue for augmenting time series data and models, meriting further exploration. Future directions should focus on developing efficient, accountable, and universally adaptable plug-and-play solutions that effectively address practical challenges, such as data sparsity and noise, while also considering the time and cost efficiencies for large-scale dataset applications.

4. LLM-centered Predictor for Time Series

LLM-centered predictors utilize the extensive knowledge within LLMs for diverse time series tasks such as prediction and anomaly detection. Adapting LLMs to time series data involves unique challenges such as differences in data sampling and information completeness. In the following discussion, approaches are categorized into tuning-based and non-tuning-based methods based on whether access to LLM parameters is required, primarily focusing on building general or domain-specific time series models.

4.1. Tuning-Based Predictor

Tuning-based predictors use accessible LLM parameters, typically involving patching and tokenizing numerical signals and related text data, followed by fine-tuning for time
series tasks. Figure 3(a) shows this process: (1) with a Patching(·) operation (Nie et al., 2022), a time series is chunked to form patch-based tokens X_inp; an additional option is to perform a Tokenizer(·) operation on time series-related text data to form text sequence tokens T_inp; (2) the time series patches (and optional text tokens) are fed into the LLM with accessible parameters; (3) an extra task layer, denoted as Task(·), is finally introduced to perform different analysis tasks with the instruction prompt P. This process is formulated below:

    Pre-processing: X_inp = Patching(X),  T_inp = Tokenizer(T),    (2)
    Analysis: Ŷ = Task(f_△LLM(X_inp, T_inp, P)),

where X and T denote the set of time series samples and related text samples, respectively. These two (the latter is optional) are fed together into the LLM f_△LLM with partial unfreezing or additional adapter layers to predict the label Ŷ.

Adapting pre-trained large language models directly to raw time series numerical signals for downstream time series analysis tasks is often counterintuitive due to the inherent modality gap between text and time series data. Nevertheless, FPT (Zhou et al., 2023a) and similar studies found that LLMs, even when frozen, can perform comparably in time series tasks due to the self-attention mechanism's universality. Others, like GATGPT (Chen et al., 2023b) and ST-LLM (Liu et al., 2024a), applied these findings to spatial-temporal data, while UniTime (Liu et al., 2024b) used manual instructions for domain identification. This allows them to handle time series data with different characteristics and distinguish between different domains.

However, the above methods all require modifications that disrupt the parameters of the original LLMs, potentially leading to catastrophic forgetting. In contrast, another line of work aims to avoid this by introducing additional lightweight adaptation layers. Time-LLM (Jin et al., 2024) uses text data as a prompt prefix and reprograms input time series into language space, enhancing the LLM's performance in various forecasting scenarios. TEST (Sun et al., 2024) tackles inconsistent embedding spaces by constructing an encoder for time series data, employing alignment contrasts and soft prompts for efficient fine-tuning with frozen LLMs. LLM4TS (Chang et al., 2023) integrates multi-scale time series data into LLMs using a two-level aggregation strategy, improving their interpretation of temporal information. TEMPO (Cao et al., 2023) combines seasonal and trend decompositions with frozen LLMs, using prompt pooling to address distribution changes in forecasting non-stationary time series.

4.2. Non-Tuning-Based Predictor

Non-tuning-based predictors, suitable for closed-source models, involve preprocessing time series data to fit LLM input spaces. As Figure 3(b) illustrates, this typically involves two steps: (1) preprocessing raw time series, including optional operations such as a prompt Template(·) and a customized Tokenizer(·); (2) feeding the processed inputs X_inp into the LLM to obtain responses. A Parse(·) function is then employed to retrieve prediction labels. This process is formulated below:

    Pre-processing: X_inp = Template(X, P) or X_inp = Tokenizer(X),    (3)
    Analysis: Ŷ = Parse(f_▲LLM(X_inp)),

where P represents the instruction prompt for the current analysis task, and f_▲LLM denotes the black-box LLM model.

(Spathis & Kawsar, 2023) initially noted that LLM tokenizers, not designed for numerical values, separate continuous values and ignore their temporal relationships. They suggested using lightweight embedding layers and prompt engineering as solutions. Following this, LLMTime (Gruver et al., 2023) introduced a novel tokenization approach, converting tokens into flexible continuous values, enabling non-tuned LLMs to match or exceed the zero-shot prediction performance of domain-specific models. This success is attributed to LLMs' ability to represent multimodal distributions. Using in-context learning, evaluations were performed in tasks like sequence transformation and completion. (Mirchandani et al., 2023) suggested that LLMs' capacity to handle abstract patterns positions them as foundational general pattern machines. This has led to applying LLMs in areas like human mobility mining (Wang et al., 2023c; Zhang et al., 2023g), financial forecasting (Lopez-Lira & Tang, 2023), and health prediction (Kim et al., 2024).

4.3. Others

Beyond the previously discussed methods, another significant approach in temporal analysis involves building foundation models from scratch, as shown in Figure 3(c). This approach focuses on creating large, scalable models, both generic and domain-specific, aiming to emulate the scaling law (Kaplan et al., 2020) of LLMs. PreDcT (Das et al., 2023) used Google Trends data to build a vast time series corpus, altering a foundational model's attention architecture for time prediction. Lag-Llama (Rasul et al., 2023) introduced a univariate probabilistic time series forecasting model, applying the smoothly broken power-law (Caballero et al., 2022) for model scaling analysis. TimeGPT (Garza & Mergenthaler-Canseco, 2023) further advanced time series forecasting, enabling zero-shot inference through extensive dataset knowledge. Recent efforts, including (Ekambaram et al., 2024), have focused on making these foundational models more efficient and applicable in specific areas like weather forecasting (Chen et al., 2023a), path planning (Sun et al., 2023), epidemic detection (Kamarthi & Prakash, 2023), and cloud operations (Woo et al., 2023).
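The Patching(·), Template(·), and Parse(·) operators in Equations (2) and (3) can be given a rough sketch. The patch length, prompt wording, and number-extraction logic below are illustrative assumptions, not any specific method's implementation:

```python
def patching(series, patch_len):
    """Patching(.): chunk a univariate series into patch tokens (Eq. 2)."""
    return [series[i:i + patch_len] for i in range(0, len(series), patch_len)]

def template(series, horizon):
    """Template(.): wrap raw values into a text prompt for a frozen LLM (Eq. 3)."""
    vals = ", ".join(f"{v:.1f}" for v in series)
    return f"Given the series: {vals}. Predict the next {horizon} values."

def parse(response):
    """Parse(.): retrieve numeric predictions from the LLM's text response."""
    return [float(tok) for tok in response.replace(",", " ").split()
            if tok.replace(".", "", 1).replace("-", "", 1).isdigit()]

x = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6]
patches = patching(x, 2)        # [[0.1, 0.2], [0.3, 0.4], [0.5, 0.6]]
prompt = template(x, horizon=2)
preds = parse("0.7, 0.8")       # [0.7, 0.8] once the LLM's reply is parsed
```

In the tuning-based setting the patches would be embedded and passed through the partially unfrozen LLM with a task head; in the non-tuning setting the prompt string goes to a black-box model and only the textual response is parsed.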
4.4. Discussion

LLM-centric predictors have advanced significantly in time series analysis, outperforming most domain-specific models in few-shot and zero-shot scenarios. Tuning-based methods, with their adjustable parameters, generally show better performance and adaptability to specific domains. However, they are prone to catastrophic forgetting and involve high training costs due to parameter modification. While adapter layers have somewhat alleviated this issue, the challenge of expensive training persists. Conversely, non-tuning methods, offering text-based predictions, depend heavily on manual prompt engineering, and their prediction stability is not always reliable. Additionally, building foundational time series models from scratch involves balancing high development costs against their applicability. Therefore, further refinement is needed to address these challenges in LLM-centric predictors.

Our position: LLM-centric predictors, though burgeoning in time series analysis, are still in their infancy and warrant deeper consideration. Future advancements should not only build upon but also enhance time series foundation models. By harnessing unique LLM capabilities such as in-context learning and chain-of-thought reasoning, these advancements can overcome current limitations like catastrophic forgetting, and improve prediction stability and reliability.

5. LLM-empowered Agent for Time Series

As demonstrated in the previous section, tuning-based approaches in time series utilize LLMs as robust model checkpoints, attempting to adjust certain parameters for specific domain applications. However, this approach often sacrifices the interactive capabilities of LLMs and may not fully exploit the benefits offered by LLMs, such as in-context learning or chain-of-thought. On the other hand, non-tuning approaches, integrating time series data into textual formats or developing specialized tokenizers, face limitations due to LLMs' primary training on linguistic data, hindering their comprehension of complex time series patterns not easily captured in language. Addressing these challenges, there are limited works that directly leverage LLMs as time series agents for general-purpose analysis and problem-solving. We first endeavor to provide an overview of such approaches across various modalities in Appendix B, aiming to delineate strategies for constructing a robust general-purpose time series analysis agent.

In the subsequent section, we employ prompt engineering techniques to compel LLMs to assist in executing basic time series analytical tasks. Our demonstration reveals that LLMs undeniably possess the potential to function as time series agents. Nevertheless, their proficiency is constrained when it comes to comprehending intricate time series data, leading to the generation of hallucinatory outputs. Ultimately, we identify and discuss promising avenues that can empower us to develop more robust and reliable general-purpose time series agents.

5.1. Empirical Insights: LLMs as Time Series Analysts

This subsection presents experiments evaluating LLMs' zero-shot capability in serving as agents for human interaction and time series data analysis. We utilize the HAR (Anguita et al., 2013) database, derived from recordings of 30 study participants engaged in activities of daily living (ADL) while carrying a waist-mounted smartphone equipped with inertial sensors. The end goal is to classify activities into four categories (Stand, Sit, Lay, Walk), with ten instances per class for evaluation.

Figure 4: Confusion matrix of HAR classification.

The prompts used for GPT-3.5 are illustrated in Figure 7, and the classification confusion matrix is presented in Figure 4. Our key findings include:

LLMs as Effective Agents: The experiments demonstrate that current LLMs serve adeptly as agents for human interaction and time series data analysis, producing accurate predictions as shown in Figure 7. Notably, all instances with label Stand were correctly classified, underscoring the LLMs' proficiency in zero-shot tasks. The models exhibit a profound understanding of common-sense behaviors, encompassing various labels in time series classification, anomaly detection, and skillful application of data augmentation (Figure 8).

Interpretability and Truthfulness: Agent LLMs prioritize high interpretability and truthfulness, allowing users to inquire about the reasons behind their decisions with confidence. The intrinsic classification reasoning is articulated in natural language, fostering a user-friendly interaction.

Limitations in Understanding Complex Patterns: Despite their capabilities, current LLMs show limitations in comprehending complex time series patterns. When faced
with complex queries, they may initially refuse to provide answers, citing the lack of access to detailed information about the underlying classification algorithm.

Bias and Task Preferences: LLMs display a bias towards the training language distributions, exhibiting a strong preference for specific tasks. In Figure 7, instances of Lay are consistently misclassified as Sit and Stand, with better performance observed for Sit and Stand.

Hallucination Problem: The LLMs are susceptible to hallucination, generating reasonable but false answers. For instance, in Figure 8, the augmented data is merely a copy of the given instances, although the model knows how to apply data augmentation: "These instances continue the hourly trend of oil temperature and power load features, maintaining the structure and characteristics of the provided dataset." Subsequent inquiries into the misclassification in Figure 4, particularly regarding why LLMs classify Lay instances as Sit and Stand, elicit seemingly plausible justifications (see Table 1). However, these justifications expose the model's inclination to fabricate explanations.

5.2. Key Lessons for Advancing Time Series Agents

In light of the empirical insights from earlier experiments, it is apparent that LLMs, when serving as advanced time series analytical agents, exhibit notable limitations when dealing with questions about data distribution and specific features. Their responses often show a reliance on requesting additional information or highlight an inability to provide accurate justifications without access to the underlying model or specific data details.

To surmount such limitations and develop practical time series agents built upon LLMs, it becomes paramount to seamlessly integrate time series knowledge into LLMs. Drawing inspiration from studies that have successfully injected domain-specific knowledge into LLMs (Wang et al., 2023b; Liu et al., 2023b; Wu et al., 2023; Schick et al., 2023), we propose several research directions. These include innovative methods to enhance LLMs' proficiency in time series analysis by endowing them with a deep understanding of temporal patterns and relevant contextual information.

Figure 5: Different directions for incorporating time series knowledge to LLMs. (a) Align; (b) Fusion; (c) Using external tools.

• Aligning Time Series Features with Language Model Representations (Figure 5a): Explicitly aligning time series features with pre-trained language model representations can potentially enhance the model's understanding of temporal patterns. This alignment may involve mapping specific features to the corresponding linguistic elements within the model.

• Fusing Text Embeddings and Time Series Features (Figure 5b): Exploring the fusion of text embeddings and time series features in a format optimized for LLMs is a promising avenue. This fusion aims to create a representation that leverages the strengths of LLMs in natural language processing while accommodating the intricacies of time series data.

• Teaching LLMs to Utilize External Pre-trained Time Series Models (Figure 5c): The goal here is to instruct the LLM on identifying the appropriate pre-trained time series model from an external pool and guiding its usage based on user queries. The time series knowledge resides within this external model hub, while the LLM assumes the role of a high-level agent responsible for orchestrating their utilization and facilitating interaction with users.

Differentiating from approaches like model repurposing or fine-tuning on specific tasks, the focus of future research should be on harnessing the inherent zero-shot capabilities of LLMs for general pattern manipulation. Establishing a framework that facilitates seamless interaction between users and LLM agents for solving general time series problems through in-context learning is a promising direction.

5.3. Exploring Alternative Research Avenues

Addressing the urgent and crucial need to enhance the capabilities of time series agents built upon LLMs, we recognize that incorporating time series knowledge is a pivotal direction. Concurrently, mitigating risks associated with such agents is equally paramount. In this regard, we pinpoint key challenges and suggest potential directions to boost both the reliability and effectiveness of our time series agents.
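The external-model-hub direction sketched in Figure 5(c) can be illustrated with a toy routing loop. The registry, keyword-based routing, and dummy models below are illustrative assumptions standing in for the LLM's tool-selection step, not a proposed implementation:

```python
# Sketch of Figure 5(c): the LLM acts as a high-level agent that selects a
# pre-trained time series model from an external hub and routes the user
# query to it. Keyword matching stands in for the LLM's decision; both
# models are illustrative dummies.

def forecast_model(series):
    """Placeholder forecaster: repeat the last observed value."""
    return series[-1]

def anomaly_model(series):
    """Placeholder detector: flag indices far from the median."""
    s = sorted(series)
    median = s[len(s) // 2]
    return [i for i, v in enumerate(series) if abs(v - median) > 1.0]

MODEL_HUB = {"forecast": forecast_model, "anomaly": anomaly_model}

def agent_route(query):
    """Stand-in for the LLM agent's choice of which hub model to invoke."""
    q = query.lower()
    if "predict" in q or "forecast" in q:
        return "forecast"
    if "anomaly" in q or "outlier" in q:
        return "anomaly"
    raise ValueError("no suitable model in the hub")

def answer(query, series):
    name = agent_route(query)
    return name, MODEL_HUB[name](series)

name, result = answer("Predict the next value", [1.0, 2.0, 3.0])
# name == "forecast", result == 3.0
```

The key design point is that the time series expertise lives in the hub models, while the agent only decides which one to call and relays the result back to the user.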
Hallucination, a recurring challenge documented in various foundational models (Zhou et al., 2023b; Rawte et al., 2023; Li et al., 2023a), proves to be a noteworthy concern when employing LLMs as agents for time series data analysis, as observed in our experiments. This phenomenon involves the generation of content that deviates from factual or accurate information. Addressing hallucination in this context commonly relies on two primary methods: the identification of reliable prompts (Vu et al., 2023; Madaan et al., 2023) and the fine-tuning of models using dependable instruction datasets (Tian et al., 2023; Zhang et al., 2023a). Nevertheless, it is crucial to recognize that these approaches often demand substantial human effort, introducing challenges related to scalability and efficiency. While certain initiatives have sought to integrate domain-specific knowledge into ICL prompts (Da et al., 2023a; Yang et al., 2022b) and construct instruction datasets tailored to specific domains (Liu et al., 2023b; Ge et al., 2023), the optimal format of instructions or prompts for effective time series analysis remains unclear. Exploring and establishing guidelines for crafting impactful instructions within the context of time series analysis represents a compelling avenue for future research.

Beyond this, there are ongoing concerns regarding the alignment of time series agents with human preferences (Lee et al., 2023), for instance, with a focus on ensuring the generation of content that is both helpful and harmless (Bai et al., 2022). These unresolved issues underscore the imperative necessity for the development of more robust and trustworthy time series agents. Moreover, the world is perpetually in a state of flux, and the internet undergoes continuous evolution, witnessing the addition of petabytes of new data on a daily basis (Wenzek et al., 2019). In the realm of time series analysis, the evolving pattern assumes greater significance due to the inherent concept drift in time series data (Tsymbal, 2004), where future data may exhibit patterns different from those observed in the past. Addressing this challenge involves enabling agents to continually acquire new knowledge (Garg et al., 2023) or adopting a lifelong learning pattern without the need for expensive retraining, a crucial aspect for time series agents.

Our Position: Current LLMs excel as agents for human interaction and time series data analysis, but they encounter issues such as occasional inaccuracies and a tendency toward hallucination. To improve their reliability in decision-making, it is crucial to develop guidelines for effective instructions and to incorporate domain-specific knowledge. Overcoming challenges like hallucination, aligning with human preferences, and adjusting to evolving time series data is key to maximizing their capabilities and minimizing risks. Our future vision is to develop robust and adaptable LLM-empowered agents that can adeptly handle the intricacies of time series analysis.

6. Further Discussion

Our perspectives serve as a starting point for ongoing discussion. We acknowledge that readers may have diverse views and be curious about aspects of LLM-centric time series analysis not addressed previously. Below, we objectively examine several of these alternate viewpoints:

Accountability and Transparency: Despite intense public and academic interest, LLMs remain somewhat enigmatic, raising fundamental questions like their capabilities, operational mechanisms, and efficiency levels. These concerns are also pertinent in LLM-centric time series analysis, especially in recent studies using prompting methods without altering off-the-shelf LLMs, such as PromptCast (Xue & Salim, 2023). To foster transparent and accountable LLM-centric time series models, we advocate two key considerations. Firstly, a robust scientific process should aim to understand underlying mechanisms, as suggested in (Gruver et al., 2023). Secondly, establishing a transparent development and evaluation framework is necessary, one that the community can adopt for achieving better clarity; this includes consistent model reporting, benchmarking results, clear explanations of internal processes and outputs, and effective communication of model uncertainties (Liao & Vaughan, 2023).

Privacy and Security: LLM-centric time series analysis introduces significant privacy and security challenges. Considering that much of the real-world time series data is confidential and sensitive, concerns about data leakage and misuse are paramount. LLMs are known to sometimes memorize segments of their training data, which may include private information (Peris et al., 2023). Consequently, developing and deploying LLM-centric time series models necessitates implementing privacy-preserving measures against threats like adversarial attacks and unauthorized data extraction. Formulating and adhering to ethical guidelines and regulatory frameworks is also important (Zhuo et al., 2023). These should specifically address the complex challenges posed by the use of LLMs in time series analysis, ensuring their responsible and secure application.

Environmental and Computational Costs: The environmental and computational costs of LLM-centric time series analysis are subjects of considerable concern. Critics contend that the current benefits of these models do not outweigh the substantial resources needed for their functioning. In response, we argue that: (1) much of the existing research in this area is still in its infancy, offering substantial opportunities for optimization in tandem with LLM development; (2) exploring more efficient alignment and inference strategies is worthwhile. This is especially pertinent given the large context windows needed for handling tokenized high-precision numerical data.
7. Conclusion

This paper aims to draw the attention of researchers and practitioners to the potential of LLMs in advancing time series analysis and to underscore the importance of trust in these endeavors. Our position is that LLMs can serve as the central hub for understanding and advancing time series analysis, steering towards more universal intelligent systems for general-purpose analysis, whether as augmenters, predictors, or agents. To substantiate our positions, we have reviewed relevant literature, exploring and debating possible directions towards LLM-centric time series analysis to bridge existing gaps.

Our objective is to amplify the awareness of this area within the research community and pinpoint avenues for future investigations. While our positions may attract both agreement and dissent, the primary purpose of this paper is to spark discussion on this interdisciplinary topic. If it serves to shift the discourse within the community, it will have achieved its intended objective.

Impact Statements

This position paper aims to reshape perspectives within the time series analysis community by exploring the untapped potential of LLMs. We advocate a shift towards integrating LLMs with time series analysis, proposing a future where decision-making and analytical intelligence are significantly enhanced through this synergy. While our work primarily contributes to academic discourse and research directions, it also touches upon potential societal impacts, particularly in decision-making processes across various industries. Ethically, the responsible and transparent use of LLMs in time series analysis is emphasized, highlighting the need for trust and understanding in their capabilities. While we foresee no immediate societal consequences requiring specific emphasis, we acknowledge the importance of ongoing ethical considerations and the potential for future societal impacts as this interdisciplinary field evolves.

References

Achiam, J., Adler, S., Agarwal, S., Ahmad, L., Akkaya, I., Aleman, F. L., Almeida, D., Altenschmidt, J., Altman, S., Anadkat, S., et al. GPT-4 technical report. arXiv preprint

Anguita, D., et al. …recognition using smartphones. In Esann, volume 3, pp. 3, 2013.

Anonymous. Sociodojo: Building lifelong analytical agents with real-world text and time series. In The Twelfth International Conference on Learning Representations, 2024a. URL https://openreview.net/forum?id=s9z0HzWJJp.

Anonymous. Spatio-temporal graph learning with large language model. 2024b. URL https://openreview.net/forum?id=QUkcfqa6GX.

Bai, Y., Kadavath, S., Kundu, S., Askell, A., Kernion, J., Jones, A., Chen, A., Goldie, A., Mirhoseini, A., McKinnon, C., et al. Constitutional AI: Harmlessness from AI feedback. arXiv preprint arXiv:2212.08073, 2022.

Besta, M., Blach, N., Kubicek, A., Gerstenberger, R., Gianinazzi, L., Gajda, J., Lehmann, T., Podstawski, M., Niewiadomski, H., Nyczyk, P., et al. Graph of thoughts: Solving elaborate problems with large language models. arXiv preprint arXiv:2308.09687, 2023.

Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J. D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020.

Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y. T., Li, Y., Lundberg, S., et al. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712, 2023.

Caballero, E., Gupta, K., Rish, I., and Krueger, D. Broken neural scaling laws. arXiv preprint arXiv:2210.14891, 2022.

Cao, D., Jia, F., Arik, S. O., Pfister, T., Zheng, Y., Ye, W., and Liu, Y. TEMPO: Prompt-based generative pre-trained transformer for time series forecasting. arXiv preprint arXiv:2310.04948, 2023.

Chang, C., Peng, W.-C., and Chen, T.-F. LLM4TS: Two-stage fine-tuning for time-series forecasting with pre-trained LLMs. arXiv preprint arXiv:2308.08469, 2023.
arXiv:2303.08774, 2023. Chatterjee,S.,Mitra,B.,andChakraborty,S. Amicron: A
Alghamdi, T., Elgazzar, K., Bayoumi, M., Sharaf, T., and frameworkfor generatingannotations forhuman activity
Shah,S.Forecastingtrafficcongestionusingarimamodel- recognitionwithgranularmicro-activities. arXivpreprint
ing. In201915thinternationalwirelesscommunications arXiv:2306.13149, 2023.
& mobile computing conference (IWCMC), pp. 1227– Chen, S., Long, G., Shen, T., and Jiang, J. Prompt fed-
1232. IEEE, 2019. erated learning for weather forecasting: Toward foun-
Anguita, D., Ghio, A., Oneto, L., Parra, X., Reyes-Ortiz, dation models on meteorological data. arXiv preprint
J. L., et al. A public domain dataset for human activity arXiv:2301.09152, 2023a.
Chen, Y., Wang, X., and Xu, G. GATGPT: A pre-trained large language model with graph attention network for spatiotemporal imputation. arXiv preprint arXiv:2311.14332, 2023b.

Chen, Z., Mao, H., Li, H., Jin, W., Wen, H., Wei, X., Wang, S., Yin, D., Fan, W., Liu, H., et al. Exploring the potential of large language models (LLMs) in learning on graphs. arXiv preprint arXiv:2307.03393, 2023c.

Chowdhery, A., Narang, S., Devlin, J., Bosma, M., Mishra, G., Roberts, A., Barham, P., Chung, H. W., Sutton, C., Gehrmann, S., et al. PaLM: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1–113, 2023.

Da, L., Gao, M., Mei, H., and Wei, H. LLM powered sim-to-real transfer for traffic signal control. arXiv preprint arXiv:2308.14284, 2023a.

Da, L., Liou, K., Chen, T., Zhou, X., Luo, X., Yang, Y., and Wei, H. Open-TI: Open traffic intelligence with augmented language model. arXiv preprint arXiv:2401.00211, 2023b.

Das, A., Kong, W., Sen, R., and Zhou, Y. A decoder-only foundation model for time-series forecasting. arXiv preprint arXiv:2310.10688, 2023.

Ekambaram, V., Jati, A., Nguyen, N. H., Dayama, P., Reddy, C., Gifford, W. M., and Kalagnanam, J. TTMs: Fast multi-level tiny time mixers for improved zero-shot and few-shot forecasting of multivariate time series. arXiv preprint arXiv:2401.03955, 2024.

Fatouros, G., Metaxas, K., Soldatos, J., and Kyriazis, D. Can large language models beat Wall Street? Unveiling the potential of AI in stock selection. arXiv preprint arXiv:2401.03737, 2024.

Fuller, W. A. Introduction to Statistical Time Series. John Wiley & Sons, 2009.

Gamboa, J. C. B. Deep learning for time-series analysis. arXiv preprint arXiv:1701.01887, 2017.

Garg, S., Farajtabar, M., Pouransari, H., Vemulapalli, R., Mehta, S., Tuzel, O., Shankar, V., and Faghri, F. TiC-CLIP: Continual training of CLIP models. arXiv preprint arXiv:2310.16226, 2023.

Garza, A. and Mergenthaler-Canseco, M. TimeGPT-1. arXiv preprint arXiv:2310.03589, 2023.

Ge, Y., Hua, W., Ji, J., Tan, J., Xu, S., and Zhang, Y. OpenAGI: When LLM meets domain experts. arXiv preprint arXiv:2304.04370, 2023.

Ghosh, S., Sengupta, S., and Mitra, P. Spatio-temporal storytelling? Leveraging generative models for semantic trajectory analysis. arXiv preprint arXiv:2306.13905, 2023.

Gruver, N., Finzi, M., Qiu, S., and Wilson, A. G. Large language models are zero-shot time series forecasters. Advances in Neural Information Processing Systems, 2023.

Gu, Z., Zhu, B., Zhu, G., Chen, Y., Tang, M., and Wang, J. AnomalyGPT: Detecting industrial anomalies using large vision-language models. arXiv preprint arXiv:2308.15366, 2023.

Gudibande, A., Wallace, E., Snell, C., Geng, X., Liu, H., Abbeel, P., Levine, S., and Song, D. The false promise of imitating proprietary LLMs. arXiv preprint arXiv:2305.15717, 2023.

Hamilton, J. D. Time Series Analysis. Princeton University Press, 2020.

Huang, W., Abbeel, P., Pathak, D., and Mordatch, I. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, pp. 9118–9147. PMLR, 2022.

Jin, M., Koh, H. Y., Wen, Q., Zambon, D., Alippi, C., Webb, G. I., King, I., and Pan, S. A survey on graph neural networks for time series: Forecasting, classification, imputation, and anomaly detection. arXiv preprint arXiv:2307.03759, 2023a.

Jin, M., Wen, Q., Liang, Y., Zhang, C., Xue, S., Wang, X., Zhang, J., Wang, Y., Chen, H., Li, X., et al. Large models for time series and spatio-temporal data: A survey and outlook. arXiv preprint arXiv:2310.10196, 2023b.

Jin, M., Wang, S., Ma, L., Chu, Z., Zhang, J. Y., Shi, X., Chen, P.-Y., Liang, Y., Li, Y.-F., Pan, S., et al. Time-LLM: Time series forecasting by reprogramming large language models. In International Conference on Machine Learning, 2024.

Kalekar, P. S. et al. Time series forecasting using Holt-Winters exponential smoothing. Kanwal Rekhi School of Information Technology, 4329008(13):1–13, 2004.

Kamarthi, H. and Prakash, B. A. PEMs: Pre-trained epidemic time-series models. arXiv preprint arXiv:2311.07841, 2023.

Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J., and Amodei, D. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361, 2020.
Kim, Y., Xu, X., McDuff, D., Breazeal, C., and Park, H. W. Health-LLM: Large language models for health prediction via wearable sensor data. arXiv preprint arXiv:2401.06866, 2024.

Kojima, T., Gu, S. S., Reid, M., Matsuo, Y., and Iwasawa, Y. Large language models are zero-shot reasoners. Advances in Neural Information Processing Systems, 35:22199–22213, 2022.

Lai, S., Xu, Z., Zhang, W., Liu, H., and Xiong, H. Large language models as traffic signal control agents: Capacity and opportunity. arXiv preprint arXiv:2312.16044, 2023.

Lee, K., Liu, H., Ryu, M., Watkins, O., Du, Y., Boutilier, C., Abbeel, P., Ghavamzadeh, M., and Gu, S. S. Aligning text-to-image models using human feedback. arXiv preprint arXiv:2302.12192, 2023.

Li, J., Cheng, X., Zhao, W. X., Nie, J.-Y., and Wen, J.-R. HELMA: A large-scale hallucination evaluation benchmark for large language models. arXiv preprint arXiv:2305.11747, 2023a.

Li, J., Liu, C., Cheng, S., Arcucci, R., and Hong, S. Frozen language model helps ECG zero-shot learning. arXiv preprint arXiv:2303.12311, 2023b.

Liang, Y., Liu, Y., Wang, X., and Zhao, Z. Exploring large language models for human mobility prediction under public events. arXiv preprint arXiv:2311.17351, 2023.

Liao, Q. V. and Vaughan, J. W. AI transparency in the age of LLMs: A human-centered research roadmap. arXiv preprint arXiv:2306.01941, 2023.

Liu, C., Ma, Y., Kothur, K., Nikpour, A., and Kavehei, O. Biosignal Copilot: Leveraging the power of LLMs in drafting reports for biomedical signals. medRxiv, pp. 2023–06, 2023a.

Liu, C., Yang, S., Xu, Q., Li, Z., Long, C., Li, Z., and Zhao, R. Spatial-temporal large language model for traffic prediction. arXiv preprint arXiv:2401.10134, 2024a.

Liu, H., Li, C., Wu, Q., and Lee, Y. J. Visual instruction tuning. Advances in Neural Information Processing Systems, 2023b.

Liu, X., Hu, J., Li, Y., Diao, S., Liang, Y., Hooi, B., and Zimmermann, R. UniTime: A language-empowered unified model for cross-domain time series forecasting. In The Web Conference 2024 (WWW), 2024b.

Lopez-Lira, A. and Tang, Y. Can ChatGPT forecast stock price movements? Return predictability and large language models. arXiv preprint arXiv:2304.07619, 2023.

Madaan, A., Tandon, N., Gupta, P., Hallinan, S., Gao, L., Wiegreffe, S., Alon, U., Dziri, N., Prabhumoye, S., Yang, Y., et al. Self-Refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, 2023.

Mirchandani, S., Xia, F., Florence, P., Driess, D., Arenas, M. G., Rao, K., Sadigh, D., Zeng, A., et al. Large language models as general pattern machines. In 7th Annual Conference on Robot Learning, 2023.

Moon, S., Madotto, A., Lin, Z., Saraf, A., Bearman, A., and Damavandi, B. IMU2CLIP: Language-grounded motion sensor translation with multimodal contrastive learning. In Findings of the Association for Computational Linguistics: EMNLP 2023, pp. 13246–13253, 2023.

Nie, Y., Nguyen, N. H., Sinthong, P., and Kalagnanam, J. A time series is worth 64 words: Long-term forecasting with transformers. In The Eleventh International Conference on Learning Representations, 2022.

Oh, J., Bae, S., Lee, G., Kwon, J.-m., and Choi, E. ECG-QA: A comprehensive question answering dataset combined with electrocardiogram. arXiv preprint arXiv:2306.15681, 2023.

Peris, C., Dupuy, C., Majmudar, J., Parikh, R., Smaili, S., Zemel, R., and Gupta, R. Privacy in the time of language models. In Proceedings of the Sixteenth ACM International Conference on Web Search and Data Mining, pp. 1291–1292, 2023.

Qiu, J., Han, W., Zhu, J., Xu, M., Rosenberg, M., Liu, E., Weber, D., and Zhao, D. Transfer knowledge from natural language to electrocardiography: Can we detect cardiovascular disease through language models? arXiv preprint arXiv:2301.09017, 2023a.

Qiu, J., Zhu, J., Liu, S., Han, W., Zhang, J., Duan, C., Rosenberg, M. A., Liu, E., Weber, D., and Zhao, D. Automated cardiovascular record retrieval by multimodal learning between electrocardiogram and clinical report. In Machine Learning for Health (ML4H), pp. 480–497. PMLR, 2023b.

Rasul, K., Ashok, A., Williams, A. R., Khorasani, A., Adamopoulos, G., Bhagwatkar, R., Biloš, M., Ghonia, H., Hassen, N. V., Schneider, A., et al. Lag-Llama: Towards foundation models for time series forecasting. arXiv preprint arXiv:2310.08278, 2023.

Rawte, V., Sheth, A., and Das, A. A survey of hallucination in large foundation models. arXiv preprint arXiv:2309.05922, 2023.

Sanh, V., Webson, A., Raffel, C., Bach, S. H., Sutawika, L., Alyafeai, Z., Chaffin, A., Stiegler, A., Scao, T. L., Raja, A., et al. Multitask prompted training enables zero-shot
task generalization. arXiv preprint arXiv:2110.08207, 2021.

Schick, T., Dwivedi-Yu, J., Dessì, R., Raileanu, R., Lomeli, M., Zettlemoyer, L., Cancedda, N., and Scialom, T. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761, 2023.

Shumway, R. H. and Stoffer, D. S. ARIMA models. Time Series Analysis and Its Applications: With R Examples, pp. 75–163, 2017.

Singh, I., Blukis, V., Mousavian, A., Goyal, A., Xu, D., Tremblay, J., Fox, D., Thomason, J., and Garg, A. ProgPrompt: Generating situated robot task plans using large language models. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 11523–11530. IEEE, 2023.

Spathis, D. and Kawsar, F. The first step is the hardest: Pitfalls of representing and tokenizing temporal data for large language models. arXiv preprint arXiv:2309.06236, 2023.

Sun, C., Li, Y., Li, H., and Hong, S. TEST: Text prototype aligned embedding to activate LLM's ability for time series. In International Conference on Machine Learning, 2024.

Sun, Q., Zhang, S., Ma, D., Shi, J., Li, D., Luo, S., Wang, Y., Xu, N., Cao, G., and Zhao, H. Large trajectory models are scalable motion predictors and planners. arXiv preprint arXiv:2310.19620, 2023.

Tian, K., Mitchell, E., Yao, H., Manning, C. D., and Finn, C. Fine-tuning language models for factuality. arXiv preprint arXiv:2311.08401, 2023.

Touvron, H., Lavril, T., Izacard, G., Martinet, X., Lachaux, M.-A., Lacroix, T., Rozière, B., Goyal, N., Hambro, E., Azhar, F., et al. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023a.

Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S., et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023b.

Tsay, R. S. Analysis of Financial Time Series. John Wiley & Sons, 2005.

Tsymbal, A. The problem of concept drift: Definitions and related work. Computer Science Department, Trinity College Dublin, 106(2):58, 2004.

Vu, T., Iyyer, M., Wang, X., Constant, N., Wei, J., Wei, J., Tar, C., Sung, Y.-H., Zhou, D., Le, Q., et al. FreshLLMs: Refreshing large language models with search engine augmentation. arXiv preprint arXiv:2310.03214, 2023.

Wang, L., Ma, C., Feng, X., Zhang, Z., Yang, H., Zhang, J., Chen, Z., Tang, J., Chen, X., Lin, Y., et al. A survey on large language model based autonomous agents. arXiv preprint arXiv:2308.11432, 2023a.

Wang, W., Bao, H., Dong, L., Bjorck, J., Peng, Z., Liu, Q., Aggarwal, K., Mohammed, O. K., Singhal, S., Som, S., and Wei, F. Image as a foreign language: BEiT pretraining for vision and vision-language tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023b.

Wang, X., Fang, M., Zeng, Z., and Cheng, T. Where would I go next? Large language models as human mobility predictors. arXiv preprint arXiv:2308.15197, 2023c.

Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Zhou, D., Metzler, D., et al. Emergent abilities of large language models. arXiv preprint arXiv:2206.07682, 2022a.

Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q. V., Zhou, D., et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022b.

Wen, Q., Sun, L., Yang, F., Song, X., Gao, J., Wang, X., and Xu, H. Time series data augmentation for deep learning: A survey. In IJCAI, pp. 4653–4660, 2021.

Wen, Q., Yang, L., Zhou, T., and Sun, L. Robust time series analysis and applications: An industrial perspective. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pp. 4836–4837, 2022.

Wen, Q., Zhou, T., Zhang, C., Chen, W., Ma, Z., Yan, J., and Sun, L. Transformers in time series: A survey. In International Joint Conference on Artificial Intelligence (IJCAI), 2023.

Wenzek, G., Lachaux, M.-A., Conneau, A., Chaudhary, V., Guzmán, F., Joulin, A., and Grave, E. CCNet: Extracting high quality monolingual datasets from web crawl data. arXiv preprint arXiv:1911.00359, 2019.

Woo, G., Liu, C., Kumar, A., and Sahoo, D. Pushing the limits of pre-training for time series forecasting in the CloudOps domain. arXiv preprint arXiv:2310.05063, 2023.

Wu, C., Yin, S., Qi, W., Wang, X., Tang, Z., and Duan, N. Visual ChatGPT: Talking, drawing and editing with visual foundation models. arXiv preprint arXiv:2303.04671, 2023.
Xie, Q., Han, W., Zhang, X., Lai, Y., Peng, M., Lopez-Lira, A., and Huang, J. PIXIU: A large language model, instruction data and evaluation benchmark for finance. arXiv preprint arXiv:2306.05443, 2023.

Xue, H. and Salim, F. D. PromptCast: A new prompt-based learning paradigm for time series forecasting. IEEE Transactions on Knowledge and Data Engineering, 2023.

Xue, S., Zhou, F., Xu, Y., Zhao, H., Xie, S., Jiang, C., Zhang, J., Zhou, J., Xu, P., Xiu, D., et al. WeaverBird: Empowering financial decision-making with large language model, knowledge base, and search engine. arXiv preprint arXiv:2308.05361, 2023.

Yan, Y., Wen, H., Zhong, S., Chen, W., Chen, H., Wen, Q., Zimmermann, R., and Liang, Y. When urban region profiling meets large language models. arXiv preprint arXiv:2310.18340, 2023.

Yang, A., Miech, A., Sivic, J., Laptev, I., and Schmid, C. Zero-shot video question answering via frozen bidirectional language models. Advances in Neural Information Processing Systems, 35:124–141, 2022a.

Yang, Z., Gan, Z., Wang, J., Hu, X., Lu, Y., Liu, Z., and Wang, L. An empirical study of GPT-3 for few-shot knowledge-based VQA. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pp. 3081–3089, 2022b.

Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., and Narasimhan, K. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601, 2023.

Yeh, C.-C. M., Dai, X., Chen, H., Zheng, Y., Fan, Y., Der, A., Lai, V., Zhuang, Z., Wang, J., Wang, L., et al. Toward a foundation model for time series data. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management, pp. 4400–4404, 2023.

Yin, Z., Wang, J., Cao, J., Shi, Z., Liu, D., Li, M., Sheng, L., Bai, L., Huang, X., Wang, Z., et al. LAMM: Language-assisted multi-modal instruction-tuning dataset, framework, and benchmark. arXiv preprint arXiv:2306.06687, 2023.

Yu, H., Guo, P., and Sano, A. Zero-shot ECG diagnosis with large language models and retrieval-augmented generation. In Machine Learning for Health (ML4H), pp. 650–663. PMLR, 2023a.

Yu, X., Chen, Z., Ling, Y., Dong, S., Liu, Z., and Lu, Y. Temporal data meets LLM – explainable financial time series forecasting. arXiv preprint arXiv:2306.11025, 2023b.

Yu, X., Chen, Z., and Lu, Y. Harnessing LLMs for temporal data - a study on explainable financial time series forecasting. In Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track, pp. 739–753, Singapore, December 2023c.

Zhang, H., Diao, S., Lin, Y., Fung, Y. R., Lian, Q., Wang, X., Chen, Y., Ji, H., and Zhang, T. R-Tuning: Teaching large language models to refuse unknown questions. arXiv preprint arXiv:2311.09677, 2023a.

Zhang, J., Huang, J., Jin, S., and Lu, S. Vision-language models for vision tasks: A survey. arXiv preprint arXiv:2304.00685, 2023b.

Zhang, K., Wen, Q., Zhang, C., Cai, R., Jin, M., Liu, Y., Zhang, J., Liang, Y., Pang, G., Song, D., et al. Self-supervised learning for time series analysis: Taxonomy, progress, and prospects. arXiv preprint arXiv:2306.10125, 2023c.

Zhang, S., Dong, L., Li, X., Zhang, S., Sun, X., Wang, S., Li, J., Hu, R., Zhang, T., Wu, F., et al. Instruction tuning for large language models: A survey. arXiv preprint arXiv:2308.10792, 2023d.

Zhang, S., Fu, D., Zhang, Z., Yu, B., and Cai, P. TrafficGPT: Viewing, processing and interacting with traffic foundation models. arXiv preprint arXiv:2309.06719, 2023e.

Zhang, Y., Zhang, Y., Zheng, M., Chen, K., Gao, C., Ge, R., Teng, S., Jelloul, A., Rao, J., Guo, X., et al. Insight Miner: A time series analysis dataset for cross-domain alignment with natural language. In NeurIPS 2023 AI for Science Workshop, 2023f.

Zhang, Z., Amiri, H., Liu, Z., Züfle, A., and Zhao, L. Large language models for spatial trajectory patterns mining. arXiv preprint arXiv:2310.04942, 2023g.

Zhao, W. X., Zhou, K., Li, J., Tang, T., Wang, X., Hou, Y., Min, Y., Zhang, B., Zhang, J., Dong, Z., et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023.

Zhou, T., Niu, P., Wang, X., Sun, L., and Jin, R. One fits all: Power general time series analysis by pretrained LM. Advances in Neural Information Processing Systems, 2023a.

Zhou, Y., Cui, C., Yoon, J., Zhang, L., Deng, Z., Finn, C., Bansal, M., and Yao, H. Analyzing and mitigating object hallucination in large vision-language models. arXiv preprint arXiv:2310.00754, 2023b.

Zhuo, T. Y., Huang, Y., Chen, C., and Xing, Z. Exploring AI ethics of ChatGPT: A diagnostic analysis. arXiv preprint arXiv:2301.12867, 2023.
A. Literature Review
Figure 6: An overview of LLM-centric time series analysis and related research.
B. LLM-empowered Agent for Time Series
B.1. Overview of Related Works
The realm of leveraging LLMs as agents for general-purpose time series analysis is still nascent. In the following, we provide an overview of related approaches across different modalities, focusing on strategies for developing robust, general-purpose time series agents. These methods fall into two primary categories. (1) External knowledge integration: this strategy employs ICL prompts to enhance LLMs' understanding of specific domains. Yang et al. embed object descriptions and relationships into prompts to aid LLMs in image query analysis (Yang et al., 2022b). Similarly, Da et al. use prompts containing traffic states, weather types, and road types for domain-informed inferences (Da et al., 2023a). Other studies, such as (Huang et al., 2022; Singh et al., 2023), include states, object lists, and actions in prompts, allowing LLMs to plan across varied environments and tasks. Wu et al. introduce a prompt manager for ChatGPT to leverage pretrained vision models (Wu et al., 2023), while SocioDojo (Anonymous, 2024a) employs ICL for accessing external knowledge sources like news and journals for decision-making. Despite their efficiency and freedom from additional training, these prompt-based methods face limitations such as input length constraints and difficulties in capturing complex time series patterns linguistically. (2) Alignment of LLMs to target modality content: this method aligns LLMs with specific modality content. Schick et al. enable LLMs to annotate datasets with API calls, fine-tuning them for diverse tool usage (Schick et al., 2023). LLaVA (Liu et al., 2023b) generates multimodal language-image instruction data using GPT-4, while PIXIU (Xie et al., 2023) creates a multi-task instruction dataset for financial applications, leading to the development of FinMA, a financial LLM fine-tuned for various financial tasks. Yin et al. offer a multi-modal instruction-tuning dataset for 2D and 3D understanding, helping LLMs bridge the gap between word prediction and user instructions (Yin et al., 2023). However, designing comprehensive instructions remains a complex task (Zhang et al., 2023d), and there is concern that this approach may favor tasks over-represented in the training data (Gudibande et al., 2023).
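The external knowledge integration strategy above reduces, at its core, to constructing a text prompt that embeds structured domain facts alongside the raw series. The sketch below illustrates this; the field names and the traffic scenario are illustrative assumptions, not the exact schema used by any cited work.

```python
# Minimal sketch of external knowledge integration via an ICL prompt.
# The domain fields (weather, road type, signal state) are hypothetical
# placeholders, not the schema of Da et al. (2023a).

def build_domain_prompt(observations, domain_context):
    """Embed structured domain knowledge and a numeric series into a prompt."""
    context_lines = "\n".join(f"- {key}: {value}" for key, value in domain_context.items())
    series = ", ".join(f"{v:.1f}" for v in observations)
    return (
        "You are a traffic analysis assistant.\n"
        "Known domain context:\n"
        f"{context_lines}\n"
        f"Recent vehicle counts per minute: {series}\n"
        "Question: will congestion form in the next 15 minutes? Explain briefly."
    )

prompt = build_domain_prompt(
    observations=[12.0, 18.5, 25.0, 31.5],
    domain_context={
        "weather": "heavy rain",
        "road type": "arterial",
        "signal state": "fixed cycle",
    },
)
print(prompt)
```

The prompt string would then be sent to any chat-style LLM API; no model fine-tuning is involved, which is precisely why these methods inherit the input length constraints noted above.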
B.2. Demonstrations
LLM-assisted Enhancer (Section 3)
  Data-Based Enhancer: SignalGPT (Liu et al., 2023a), LLM-MPE (Liang et al., 2023), SST (Ghosh et al., 2023), Insight Miner (Zhang et al., 2023f), AmicroN (Chatterjee et al., 2023), (Yu et al., 2023b), (Yu et al., 2023c), (Fatouros et al., 2024)
  Model-Based Enhancer: IMU2CLIP (Moon et al., 2023), STLLM (Anonymous, 2024b), (Qiu et al., 2023b), TrafficGPT (Zhang et al., 2023e), (Li et al., 2023b), (Yu et al., 2023a), (Qiu et al., 2023a)

LLM-centered Predictor (Section 4)
  Tuning-Based Predictor: Time-LLM (Jin et al., 2024), FPT (Zhou et al., 2023a), UniTime (Liu et al., 2024b), TEMPO (Cao et al., 2023), LLM4TS (Chang et al., 2023), ST-LLM (Liu et al., 2024a), GATGPT (Chen et al., 2023b), TEST (Sun et al., 2024)
  Non-Tuning-Based Predictor: PromptCast (Xue & Salim, 2023), LLMTIME (Gruver et al., 2023), (Spathis & Kawsar, 2023), (Mirchandani et al., 2023), (Zhang et al., 2023g)
  Others: Lag-Llama (Rasul et al., 2023), PreDcT (Das et al., 2023), CloudOps (Woo et al., 2023), TTMs (Ekambaram et al., 2024), STR (Sun et al., 2023), MetaPFL (Chen et al., 2023a), Time-GPT (Garza & Mergenthaler-Canseco, 2023), PEMs (Kamarthi & Prakash, 2023)

LLM-empowered Agent (Section 5)
  External Knowledge: GPT3-VQA (Yang et al., 2022b), PromptGAT (Da et al., 2023a), Open-TI (Da et al., 2023b), Planner (Huang et al., 2022), Sociodojo (Anonymous, 2024a), ProgPrompt (Singh et al., 2023), Visual ChatGPT (Wu et al., 2023)
  Adapt Target Modality: Toolformer (Schick et al., 2023), LLaVA (Liu et al., 2023b), PIXIU (Xie et al., 2023)

Table 1: Justification for classifying Sit and Stand activities

Sit: Instances where there is relatively low movement and consistent values in the accelerometer and gyroscope readings, typical of a sedentary position.

Stand: Instances where there is minimal movement, but the sensor readings may show more variability compared to sitting. Standing typically involves slight variations in body position and may exhibit more fluctuations in sensor readings.
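The justification in Table 1 amounts to a variability rule: both postures show little overall motion, but standing produces more spread in the sensor readings. A minimal sketch of that rule, with an illustrative threshold and made-up sample windows (not values from the dataset used in the paper):

```python
# Sketch of the rule implied by Table 1: sitting yields near-constant
# accelerometer readings, standing yields slightly more variability.
# The threshold and the sample windows are illustrative assumptions.
from statistics import pstdev

def classify_posture(accel_magnitudes, variability_threshold=0.05):
    """Label a window of accelerometer magnitudes (in g) as 'sit' or 'stand'."""
    spread = pstdev(accel_magnitudes)  # population std. dev. of the window
    return "stand" if spread > variability_threshold else "sit"

sitting_window = [1.00, 1.01, 1.00, 0.99, 1.00]  # near-constant gravity reading
standing_window = [1.0, 1.1, 0.9, 1.1, 0.9]      # slight postural sway

print(classify_posture(sitting_window))   # sit
print(classify_posture(standing_window))  # stand
```

An LLM asked to justify a classification, as in Figure 7, is effectively being asked to verbalize a decision boundary of this kind from the raw readings.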
Figure 7: Human interaction with ChatGPT for time series classification task.
Figure 8: Human interaction with ChatGPT for time series data augmentation and anomaly detection tasks.
JOURNAL OF LATEX CLASS FILES, VOL. ??, NO. ??, MONTH 20YY 1
Unifying Large Language Models and
Knowledge Graphs: A Roadmap
Shirui Pan, Senior Member, IEEE, Linhao Luo,
Yufei Wang, Chen Chen, Jiapu Wang, Xindong Wu, Fellow, IEEE

Abstract—Large language models (LLMs), such as ChatGPT and GPT4, are making new waves in the field of natural language processing and artificial intelligence, due to their emergent ability and generalizability. However, LLMs are black-box models, which often fall short of capturing and accessing factual knowledge. In contrast, Knowledge Graphs (KGs), Wikipedia and Huapu for example, are structured knowledge models that explicitly store rich factual knowledge. KGs can enhance LLMs by providing external knowledge for inference and interpretability. Meanwhile, KGs are difficult to construct and evolve by nature, which challenges the existing methods in KGs to generate new facts and represent unseen knowledge. Therefore, it is complementary to unify LLMs and KGs together and simultaneously leverage their advantages. In this article, we present a forward-looking roadmap for the unification of LLMs and KGs. Our roadmap consists of three general frameworks, namely, 1) KG-enhanced LLMs, which incorporate KGs during the pre-training and inference phases of LLMs, or for the purpose of enhancing understanding of the knowledge learned by LLMs; 2) LLM-augmented KGs, that leverage LLMs for different KG tasks such as embedding, completion, construction, graph-to-text generation, and question answering; and 3) Synergized LLMs + KGs, in which LLMs and KGs play equal roles and work in a mutually beneficial way to enhance both LLMs and KGs for bidirectional reasoning driven by both data and knowledge. We review and summarize existing efforts within these three frameworks in our roadmap and pinpoint their future research directions.

Index Terms—Natural Language Processing, Large Language Models, Generative Pre-Training, Knowledge Graphs, Roadmap, Bidirectional Reasoning.
✦
1 INTRODUCTION
Large language models (LLMs)1 (e.g., BERT [1], RoBERTA
[2], and T5 [3]), pre-trained on the large-scale corpus,
have shown great performance in various natural language
processing (NLP) tasks, such as question answering [4],
machine translation [5], and text generation [6]. Recently,
the dramatically increasing model size further enables the
LLMs with the emergent ability [7], paving the road for
applying LLMs as Artificial General Intelligence (AGI).
Advanced LLMs like ChatGPT2 and PaLM23, with billions
of parameters, exhibit great potential in many complex
practical tasks, such as education [8], code generation [9]
and recommendation [10].
• Shirui Pan is with the School of Information and Communication Technology and Institute for Integrated and Intelligent Systems (IIIS), Griffith University, Queensland, Australia. E-mail: s.pan@griffith.edu.au.
• Linhao Luo and Yufei Wang are with the Department of Data Science and AI, Monash University, Melbourne, Australia. E-mail: linhao.luo@monash.edu, garyyufei@gmail.com.
• Chen Chen is with the Nanyang Technological University, Singapore. E-mail: s190009@ntu.edu.sg.
• Jiapu Wang is with the Faculty of Information Technology, Beijing University of Technology, Beijing, China. E-mail: jpwang@emails.bjut.edu.cn.
• Xindong Wu is with the Key Laboratory of Knowledge Engineering with Big Data (the Ministry of Education of China), Hefei University of Technology, Hefei, China, and also with the Research Center for Knowledge Engineering, Zhejiang Lab, Hangzhou, China. E-mail: xwu@hfut.edu.cn.
• Shirui Pan and Linhao Luo contributed equally to this work.
• Corresponding Author: Xindong Wu.
1. LLMs are also known as pre-trained language models (PLMs).
2. https://openai.com/blog/chatgpt
3. https://ai.google/discover/palm2
0000–0000/00$00.00 © 2023 IEEE

Fig. 1. Summarization of the pros and cons for LLMs and KGs. LLM pros: General Knowledge [11], Language Processing [12], Generalizability [13]; LLM cons: Implicit Knowledge [14], Hallucination [15], Indecisiveness [16], Black-box [17], Lacking Domain-specific/New Knowledge [18]. KG pros: Structural Knowledge [19], Accuracy [20], Decisiveness [21], Interpretability [22], Domain-specific Knowledge [23], Evolving Knowledge [24]; KG cons: Incompleteness [25], Lacking Language Understanding [26], Unseen Facts [27]. Pros. and Cons. are selected based on their representativeness. Detailed discussion can be found in Appendix A.

Despite their success in many applications, LLMs have been criticized for their lack of factual knowledge. Specifically, LLMs memorize facts and knowledge contained in the training corpus [14]. However, further studies reveal that LLMs are not able to recall facts and often experience hallucinations by generating statements that are factually
incorrect [15], [28]. For example, LLMs might say “Einstein discovered gravity in 1687” when asked, “When did Einstein discover gravity?”, which contradicts the fact that Isaac Newton formulated the gravitational theory. This issue severely impairs the trustworthiness of LLMs.

As black-box models, LLMs are also criticized for their lack of interpretability. LLMs represent knowledge implicitly in their parameters. It is difficult to interpret or validate the knowledge obtained by LLMs. Moreover, LLMs perform reasoning by a probability model, which is an indecisive process [16]. The specific patterns and functions LLMs use to arrive at predictions or decisions are not directly accessible or explainable to humans [17]. Even though some LLMs are equipped to explain their predictions by applying chain-of-thought [29], their reasoning explanations also suffer from the hallucination issue [30]. This severely impairs the application of LLMs in high-stakes scenarios, such as medical diagnosis and legal judgment. For instance, in a medical diagnosis scenario, LLMs may incorrectly diagnose a disease and provide explanations that contradict medical commonsense. This raises another issue: LLMs trained on a general corpus might not generalize well to specific domains or new knowledge, due to the lack of domain-specific knowledge or new training data [18].

To address the above issues, a potential solution is to incorporate knowledge graphs (KGs) into LLMs. Knowledge graphs (KGs), storing enormous facts in the way of triples, i.e., (head entity, relation, tail entity), are a structured and decisive manner of knowledge representation (e.g., Wikidata [20], YAGO [31], and NELL [32]). KGs are crucial for various applications as they offer accurate explicit knowledge [19]. Besides, they are renowned for their symbolic reasoning ability [22], which generates interpretable results. KGs can also actively evolve with new knowledge continuously added [24]. Additionally, experts can construct domain-specific KGs to provide precise and dependable domain-specific knowledge [23].

Nevertheless, KGs are difficult to construct [25], and current approaches in KGs [27], [33], [34] are inadequate in handling the incomplete and dynamically changing nature of real-world KGs. These approaches fail to effectively model unseen entities and represent new facts. In addition, they often ignore the abundant textual information in KGs. Moreover, existing methods in KGs are often customized for specific KGs or tasks, which are not generalizable enough. Therefore, it is also necessary to utilize LLMs to address the challenges faced in KGs. We summarize the pros and cons of LLMs and KGs in Fig. 1, respectively.

Recently, the possibility of unifying LLMs with KGs has attracted increasing attention from researchers and practitioners. LLMs and KGs are inherently interconnected and can mutually enhance each other. In KG-enhanced LLMs, KGs can not only be incorporated into the pre-training and inference stages of LLMs to provide external knowledge [35]–[37], but also be used for analyzing LLMs and providing interpretability [14], [38], [39]. In LLM-augmented KGs, LLMs have been used in various KG-related tasks, e.g., KG embedding [40], KG completion [26], KG construction [41], KG-to-text generation [42], and KGQA [43], to improve the performance and facilitate the application of KGs. In Synergized LLM + KG, researchers marry the merits of LLMs and KGs to mutually enhance performance in knowledge representation [44] and reasoning [45], [46]. Although there are some surveys on knowledge-enhanced LLMs [47]–[49], which mainly focus on using KGs as external knowledge to enhance LLMs, they ignore other possibilities of integrating KGs for LLMs and the potential role of LLMs in KG applications.

In this article, we present a forward-looking roadmap for unifying both LLMs and KGs, to leverage their respective strengths and overcome the limitations of each approach, for various downstream tasks. We propose detailed categorization, conduct comprehensive reviews, and pinpoint emerging directions in these fast-growing fields. Our main contributions are summarized as follows:
1) Roadmap. We present a forward-looking roadmap for integrating LLMs and KGs. Our roadmap, consisting of three general frameworks to unify LLMs and KGs, namely, KG-enhanced LLMs, LLM-augmented KGs, and Synergized LLMs + KGs, provides guidelines for the unification of these two distinct but complementary technologies.
2) Categorization and review. For each integration framework of our roadmap, we present a detailed categorization and novel taxonomies of research on unifying LLMs and KGs. In each category, we review the research from the perspectives of different integration strategies and tasks, which provides more insights into each framework.
3) Coverage of emerging advances. We cover the advanced techniques in both LLMs and KGs. We include the discussion of state-of-the-art LLMs like ChatGPT and GPT-4 as well as novel KGs, e.g., multi-modal knowledge graphs.
4) Summary of challenges and future directions. We highlight the challenges in existing research and present several promising future research directions.

The rest of this article is organized as follows. Section 2 first explains the background of LLMs and KGs. Section 3 introduces the roadmap and the overall categorization of this article. Section 4 presents the different KG-enhanced LLM approaches. Section 5 describes the possible LLM-augmented KG methods. Section 6 shows the approaches of synergizing LLMs and KGs. Section 7 discusses the challenges and future research directions. Finally, Section 8 concludes this paper.

2 INTRODUCTION OF BACKGROUND

2 BACKGROUND
In this section, we will first briefly introduce a few representative large language models (LLMs) and discuss the prompt engineering that efficiently uses LLMs for a variety of applications. Then, we illustrate the concept of knowledge graphs (KGs) and present different categories of KGs.

2.1 Large Language Models (LLMs)
Large language models (LLMs) pre-trained on large-scale corpora have shown great potential in various NLP tasks [13]. As shown in Fig. 3, most LLMs derive from the Transformer design [50], which contains the encoder and decoder
modules empowered by a self-attention mechanism. Based on the architecture structure, LLMs can be categorized into three groups: 1) encoder-only LLMs, 2) encoder-decoder LLMs, and 3) decoder-only LLMs. As shown in Fig. 2, we summarize several representative LLMs with different model architectures, model sizes, and open-source availabilities.

Fig. 2. Representative large language models (LLMs) in recent years. Open-source models are represented by solid squares, while closed-source models are represented by hollow squares.

Fig. 3. An illustration of the Transformer-based LLMs with self-attention mechanism.

2.1.1 Encoder-only LLMs.
Encoder-only large language models only use the encoder to encode the sentence and understand the relationships between words. The common training paradigm for these models is to predict the masked words in an input sentence. This method is unsupervised and can be trained on a large-scale corpus. Encoder-only LLMs like BERT [1], ALBERT [51], RoBERTa [2], and ELECTRA [52] require adding an extra prediction head to resolve downstream tasks. These models are most effective for tasks that require understanding the entire sentence, such as text classification [26] and named entity recognition [53].

2.1.2 Encoder-decoder LLMs.
Encoder-decoder large language models adopt both the encoder and decoder modules. The encoder module is responsible for encoding the input sentence into a hidden space, and the decoder is used to generate the target output text. The training strategies in encoder-decoder LLMs can be more flexible. For example, T5 [3] is pre-trained by masking and predicting spans of masked words. UL2 [54] unifies several training targets such as different masking spans and masking frequencies. Encoder-decoder LLMs (e.g., T0 [55], ST-MoE [56], and GLM-130B [57]) are able to directly resolve tasks that generate sentences based on some context, such as summarization, translation, and question answering [58].

2.1.3 Decoder-only LLMs.
Decoder-only large language models only adopt the decoder module to generate target output text. The training paradigm for these models is to predict the next word in the sentence. Large-scale decoder-only LLMs can generally perform downstream tasks from a few examples or simple instructions, without adding prediction heads or finetuning [59]. Many state-of-the-art LLMs (e.g., ChatGPT [60] and GPT-44) follow the decoder-only architecture. However, since these models are closed-source, it is challenging for academic researchers to conduct further research. Recently, Alpaca5 and Vicuna6 were released as open-source decoder-only LLMs. These models are finetuned based on LLaMA [61] and achieve comparable performance with ChatGPT and GPT-4.
4. https://openai.com/product/gpt-4
5. https://github.com/tatsu-lab/stanford_alpaca
6. https://lmsys.org/blog/2023-03-30-vicuna/
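The three training paradigms above can be made concrete with a toy sketch of how each objective turns a token sequence into (input, target) pairs. This is our own illustrative simplification, not the actual pre-processing pipeline of BERT, T5, or GPT; the sentinel token name and masking rate are assumptions:

```python
import random

def masked_lm_example(tokens, mask_rate=0.15, seed=0):
    """Encoder-only objective (BERT-style): mask tokens, predict them."""
    rng = random.Random(seed)
    inputs, targets = [], []
    for tok in tokens:
        if rng.random() < mask_rate:
            inputs.append("[MASK]")
            targets.append(tok)   # the model must recover the original token
        else:
            inputs.append(tok)
            targets.append(None)  # no loss on unmasked positions
    return inputs, targets

def span_corruption_example(tokens, span=(2, 4)):
    """Encoder-decoder objective (T5-style): replace a span with a sentinel;
    the decoder reconstructs the missing span."""
    lo, hi = span
    inputs = tokens[:lo] + ["<extra_id_0>"] + tokens[hi:]
    targets = ["<extra_id_0>"] + tokens[lo:hi]
    return inputs, targets

def next_token_examples(tokens):
    """Decoder-only objective (GPT-style): every prefix predicts the next token."""
    return [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

sent = ["knowledge", "graphs", "store", "structured", "facts"]
print(next_token_examples(sent)[0])  # (['knowledge'], 'graphs')
```

The contrast is visible in the targets: the masked and span objectives supervise only the corrupted positions, while the next-token objective supervises every position, which is what lets decoder-only models be queried by simple continuation.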
2.1.4 Prompt Engineering
Prompt engineering is a novel field that focuses on creating and refining prompts to maximize the effectiveness of large language models (LLMs) across various applications and research areas [62]. As shown in Fig. 4, a prompt is a sequence of natural language inputs for LLMs that are specified for the task, such as sentiment classification. A prompt could contain several elements, i.e., 1) Instruction, 2) Context, and 3) Input Text. Instruction is a short sentence that instructs the model to perform a specific task. Context provides the context for the input text or few-shot examples. Input Text is the text that needs to be processed by the model.

Fig. 4. An example of sentiment classification prompt.

Prompt engineering seeks to improve the capacity of large language models (e.g., ChatGPT) in diverse complex tasks such as question answering, sentiment classification, and common sense reasoning. Chain-of-thought (CoT) prompt [63] enables complex reasoning capabilities through intermediate reasoning steps. Prompt engineering also enables the integration of structural data like knowledge graphs (KGs) into LLMs. Li et al. [64] simply linearizes the KGs and uses templates to convert the KGs into passages. Mindmap [65] designs a KG prompt to convert graph structure into a mind map that enables LLMs to perform reasoning on it. Prompt offers a simple way to utilize the potential of LLMs without finetuning. Proficiency in prompt engineering leads to a better understanding of the strengths and weaknesses of LLMs.
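The three prompt elements can be assembled programmatically, in the spirit of the sentiment example in Fig. 4. The helper below is a hypothetical illustration: the function name, field labels, and example strings are ours, not a standard API:

```python
def build_prompt(instruction, context, input_text):
    """Assemble the three prompt elements into one prompt string.
    `context` (e.g., few-shot examples) is optional."""
    parts = []
    if instruction:
        parts.append(f"Instruction: {instruction}")
    if context:
        parts.append(f"Context: {context}")
    parts.append(f"Input: {input_text}")
    return "\n".join(parts)

# A sentiment-classification prompt; the context slot carries a
# one-shot demonstration example.
prompt = build_prompt(
    instruction="Classify the sentiment of the input as positive or negative.",
    context='Example: "The movie was great." -> positive',
    input_text="The service was painfully slow.",
)
print(prompt)
```

Making the elements explicit like this also makes it easy to ablate them, e.g., dropping `context` to compare zero-shot against few-shot behavior.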
2.2 Knowledge Graphs (KGs)
Knowledge graphs (KGs) store structured knowledge as a collection of triples KG = {(h, r, t) ⊆ E × R × E}, where E and R respectively denote the set of entities and relations. Existing knowledge graphs (KGs) can be classified into four groups based on the stored information: 1) encyclopedic KGs, 2) commonsense KGs, 3) domain-specific KGs, and 4) multi-modal KGs. We illustrate examples of KGs of the different categories in Fig. 5.

Fig. 5. Examples of different categories’ knowledge graphs, i.e., encyclopedic KGs, commonsense KGs, domain-specific KGs, and multi-modal KGs.

2.2.1 Encyclopedic Knowledge Graphs.
Encyclopedic knowledge graphs are the most ubiquitous KGs, which represent the general knowledge in the real world. Encyclopedic knowledge graphs are often constructed by integrating information from diverse and extensive sources, including human experts, encyclopedias, and databases. Wikidata [20] is one of the most widely used encyclopedic knowledge graphs, which incorporates varieties of knowledge extracted from articles on Wikipedia. Other typical encyclopedic knowledge graphs, like Freebase [66], DBpedia [67], and YAGO [31], are also derived from Wikipedia. In addition, NELL [32] is a continuously improving encyclopedic knowledge graph, which automatically extracts knowledge from the web, and uses that knowledge to improve its performance over time. There are several encyclopedic knowledge graphs available in languages other than English, such as CN-DBpedia [68] and Vikidia [69]. The largest knowledge graph, named Knowledge Occean (KO)7, currently contains 4,8784,3636 entities and 17,3115,8349 relations in both English and Chinese.

2.2.2 Commonsense Knowledge Graphs.
Commonsense knowledge graphs formulate the knowledge about daily concepts, e.g., objects and events, as well as their relationships [70]. Compared with encyclopedic knowledge graphs, commonsense knowledge graphs often model the tacit knowledge extracted from text, such as (Car, UsedFor, Drive). ConceptNet [71] contains a wide range of commonsense concepts and relations, which can help computers understand the meanings of words people use. ATOMIC [72], [73] and ASER [74] focus on the causal effects between events, which can be used for commonsense reasoning. Some other commonsense knowledge graphs, such as TransOMCS [75] and CausalBank [76], are automatically constructed to provide commonsense knowledge.
7. https://ko.zhonghuapu.com/
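The triple definition above is easy to make concrete. The sketch below (with made-up facts mirroring the encyclopedic, commonsense, and domain-specific categories just described) represents a KG as a Python set of (h, r, t) triples, from which the entity set E and relation set R fall out directly:

```python
# A tiny KG as a set of (head, relation, tail) triples; the facts are
# illustrative stand-ins for the KG categories discussed in Section 2.2.
KG = {
    ("Albert Einstein", "bornIn", "Ulm"),   # encyclopedic-style fact
    ("Car", "UsedFor", "Drive"),            # commonsense-style fact
    ("Aspirin", "treats", "Pain"),          # domain-specific-style fact
}

E = {h for h, _, _ in KG} | {t for _, _, t in KG}  # entity set
R = {r for _, r, _ in KG}                          # relation set

def tails(head, relation):
    """Return all tail entities matching a (head, relation, ?) query."""
    return {t for h, r, t in KG if h == head and r == relation}

print(tails("Car", "UsedFor"))  # {'Drive'}
```

Real KG stores add indexing, schemas, and identifiers on top of this, but the (h, r, t) core is exactly the KG = {(h, r, t) ⊆ E × R × E} formulation above.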
Fig. 6. The general roadmap of unifying KGs and LLMs. (a.) KG-enhanced LLMs. (b.) LLM-augmented KGs. (c.) Synergized LLMs + KGs.
TABLE 1
Representative applications of using LLMs and KGs.

Name           Category               LLMs  KGs  URL
ChatGPT/GPT-4  Chat Bot               ✓          https://shorturl.at/cmsE0
ERNIE 3.0      Chat Bot               ✓     ✓    https://shorturl.at/sCLV9
Bard           Chat Bot               ✓     ✓    https://shorturl.at/pDLY6
Firefly        Photo Editing          ✓          https://shorturl.at/fkzJV
AutoGPT        AI Assistant           ✓          https://shorturl.at/bkoSY
Copilot        Coding Assistant       ✓          https://shorturl.at/lKLUV
New Bing       Web Search             ✓          https://shorturl.at/bimps
Shop.ai        Recommendation         ✓          https://shorturl.at/alCY7
Wikidata       Knowledge Base               ✓    https://shorturl.at/lyMY5
KO             Knowledge Base               ✓    https://shorturl.at/sx238
OpenBG         Recommendation               ✓    https://shorturl.at/pDMV9
Doctor.ai      Health Care Assistant  ✓     ✓    https://shorturl.at/dhlK0

2.2.3 Domain-specific Knowledge Graphs.
Domain-specific knowledge graphs are often constructed to represent knowledge in a specific domain, e.g., medical, biology, and finance [23]. Compared with encyclopedic knowledge graphs, domain-specific knowledge graphs are often smaller in size, but more accurate and reliable. For example, UMLS [77] is a domain-specific knowledge graph in the medical domain, which contains biomedical concepts and their relationships. In addition, there are some domain-specific knowledge graphs in other domains, such as finance [78], geology [79], biology [80], chemistry [81] and genealogy [82].

2.2.4 Multi-modal Knowledge Graphs.
Unlike conventional knowledge graphs that only contain textual information, multi-modal knowledge graphs represent facts in multiple modalities such as images, sounds, and videos [83]. For example, IMGpedia [84], MMKG [85], and Richpedia [86] incorporate both the text and image information into the knowledge graphs. These knowledge graphs can be used for various multi-modal tasks such as image-text matching [87], visual question answering [88], and recommendation [89].

2.3 Applications
LLMs and KGs have been widely applied in various real-world applications. We summarize some representative applications of using LLMs and KGs in Table 1. ChatGPT/GPT-4 are LLM-based chatbots that can communicate with humans in a natural dialogue format. To improve the knowledge awareness of LLMs, ERNIE 3.0 and Bard incorporate KGs into their chatbot applications. Instead of a chatbot, Firefly develops a photo editing application that allows users to edit photos by using natural language descriptions. Copilot, New Bing, and Shop.ai adopt LLMs to empower their applications in the areas of coding assistant, web search, and recommendation, respectively. Wikidata and KO are two representative knowledge graph applications that are used to provide external knowledge. OpenBG [90] is a knowledge graph designed for recommendation. Doctor.ai develops a health care assistant that incorporates LLMs and KGs to provide medical advice.

3 ROADMAP & CATEGORIZATION
In this section, we first present a road map of explicit frameworks that unify LLMs and KGs. Then, we present the categorization of research on unifying LLMs and KGs.

3.1 Roadmap
The roadmap of unifying KGs and LLMs is illustrated in Fig. 6. In the roadmap, we identify three frameworks for the unification of LLMs and KGs, including KG-enhanced LLMs, LLM-augmented KGs, and Synergized LLMs + KGs. The KG-enhanced LLMs and LLM-augmented KGs are two parallel frameworks that aim to enhance the capabilities of LLMs and KGs, respectively. Building upon these frameworks, Synergized LLMs + KGs is a unified framework that aims to synergize LLMs and KGs to mutually enhance each other.

3.1.1 KG-enhanced LLMs
LLMs are renowned for their ability to learn knowledge from large-scale corpora and achieve state-of-the-art performance in various NLP tasks. However, LLMs are often criticized for their hallucination issues [15] and lack of interpretability. To address these issues, researchers have proposed to enhance LLMs with knowledge graphs (KGs). KGs store enormous knowledge in an explicit and structured way, which can be used to enhance the knowledge awareness of LLMs. Some researchers have proposed to incorporate KGs into LLMs during the pre-training stage, which can help LLMs learn knowledge from KGs [35], [91]. Other researchers have proposed to incorporate KGs into LLMs during the inference stage. By retrieving knowledge from KGs, it can significantly improve the performance of LLMs in accessing domain-specific knowledge [92]. To improve the interpretability of LLMs, researchers also utilize
KGs to interpret the facts [14] and the reasoning process of LLMs [38].

3.1.2 LLM-augmented KGs
KGs store structured knowledge, playing an essential role in many real-world applications [19]. Existing methods in KGs fall short of handling incomplete KGs [33] and processing text corpora to construct KGs [93]. With the generalizability of LLMs, many researchers are trying to harness the power of LLMs for addressing KG-related tasks.

The most straightforward way is to apply LLMs as text encoders for KG-related tasks. Researchers take advantage of LLMs to process the textual corpus in the KGs and then use the representations of the text to enrich KG representations [94]. Some studies also use LLMs to process the original corpus and extract relations and entities for KG construction [95]. Recent studies try to design a KG prompt that can effectively convert structural KGs into a format that can be comprehended by LLMs. In this way, LLMs can be directly applied to KG-related tasks, e.g., KG completion [96] and KG reasoning [97].

3.1.3 Synergized LLMs + KGs
The synergy of LLMs and KGs has attracted increasing attention from researchers in recent years [40], [42]. LLMs and KGs are two inherently complementary techniques, which should be unified into a general framework to mutually enhance each other.

To further explore the unification, we propose a unified framework of the synergized LLMs + KGs in Fig. 7. The unified framework contains four layers: 1) Data, 2) Synergized Model, 3) Technique, and 4) Application. In the Data layer, LLMs and KGs are used to process the textual and structural data, respectively. With the development of multi-modal LLMs [98] and KGs [99], this framework can be extended to process multi-modal data, such as video, audio, and images. In the Synergized Model layer, LLMs and KGs could synergize with each other to improve their capabilities. In the Technique layer, related techniques that have been used in LLMs and KGs can be incorporated into this framework to further enhance the performance. In the Application layer, LLMs and KGs can be integrated to address various real-world applications, such as search engines [100], recommender systems [10], and AI assistants [101].

Fig. 7. The general framework of the Synergized LLMs + KGs, which contains four layers: 1) Data, 2) Synergized Model, 3) Technique, and 4) Application.

3.2 Categorization
To better understand the research on unifying LLMs and KGs, we further provide a fine-grained categorization for each framework in the roadmap. Specifically, we focus on different ways of integrating KGs and LLMs, i.e., KG-enhanced LLMs, LLM-augmented KGs, and Synergized LLMs + KGs. The fine-grained categorization of the research is illustrated in Fig. 8.

KG-enhanced LLMs. Integrating KGs can enhance the performance and interpretability of LLMs in various downstream tasks. We categorize the research on KG-enhanced LLMs into three groups:
1) KG-enhanced LLM pre-training includes works that apply KGs during the pre-training stage and improve the knowledge expression of LLMs.
2) KG-enhanced LLM inference includes research that utilizes KGs during the inference stage of LLMs, which enables LLMs to access the latest knowledge without retraining.
3) KG-enhanced LLM interpretability includes works that use KGs to understand the knowledge learned by LLMs and interpret the reasoning process of LLMs.

LLM-augmented KGs. LLMs can be applied to augment various KG-related tasks. We categorize the research on LLM-augmented KGs into five groups based on the task types:
1) LLM-augmented KG embedding includes studies that apply LLMs to enrich representations of KGs by encoding the textual descriptions of entities and relations.
2) LLM-augmented KG completion includes papers that utilize LLMs to encode text or generate facts for better KGC performance.
3) LLM-augmented KG construction includes works that apply LLMs to address the entity discovery, coreference resolution, and relation extraction tasks for KG construction.
4) LLM-augmented KG-to-text generation includes research that utilizes LLMs to generate natural language that describes the facts from KGs.
5) LLM-augmented KG question answering includes studies that apply LLMs to bridge the gap between natural language questions and retrieving answers from KGs.

Synergized LLMs + KGs. The synergy of LLMs and KGs aims to integrate LLMs and KGs into a unified framework to mutually enhance each other. In this categorization, we review the recent attempts of Synergized LLMs + KGs from the perspectives of knowledge representation and reasoning.

In the following sections (Sec. 4, 5, and 6), we will provide details on these categorizations.
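As a preview of the KG-enhanced LLM inference category above, retrieval from a KG at inference time can be sketched in a few lines. The string-matching retriever, the facts, and the prompt format are deliberately naive stand-ins for the retrieval-augmented methods surveyed in Section 4:

```python
# Toy KG; facts are illustrative, not drawn from any real knowledge base.
KG = {
    ("Isaac Newton", "formulated", "the law of universal gravitation"),
    ("Albert Einstein", "developed", "the theory of relativity"),
    ("Albert Einstein", "bornIn", "Ulm"),
}

def retrieve(question, kg):
    """Naive retrieval: keep triples whose head entity appears in the question.
    Real systems use entity linking and embedding-based retrieval instead."""
    return [t for t in kg if t[0].lower() in question.lower()]

def augment_prompt(question, kg):
    """Verbalize retrieved triples as explicit context, so the LLM can answer
    from retrieved knowledge rather than (possibly hallucinated) parameters."""
    facts = ". ".join(" ".join(t) for t in sorted(retrieve(question, kg)))
    return f"Facts: {facts}.\nQuestion: {question}\nAnswer:"

q = "When was Albert Einstein born, and what did he develop?"
print(augment_prompt(q, KG))
```

Because the knowledge enters through the prompt rather than the weights, updating the KG updates the model's answers without any retraining, which is exactly the appeal of KG-enhanced inference.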
Fig. 8. Fine-grained categorization of research on unifying large language models (LLMs) with knowledge graphs (KGs).
4 KG-ENHANCED LLMS
Large language models (LLMs) achieve promising results in many natural language processing tasks. However, LLMs have been criticized for their lack of practical knowledge and tendency to generate factual errors during inference. To address this issue, researchers have proposed integrating knowledge graphs (KGs) to enhance LLMs. In this section, we first introduce the KG-enhanced LLM pre-training, which aims to inject knowledge into LLMs during the pre-training stage. Then, we introduce the KG-enhanced LLM inference, which enables LLMs to consider the latest knowledge while generating sentences. Finally, we introduce the KG-enhanced LLM interpretability, which aims to improve the interpretability of LLMs by using KGs. Table 2 summarizes the typical methods that integrate KGs for LLMs.

TABLE 2
Summary of KG-enhanced LLM methods.

Task                              Method                   Year  KG     Technique
KG-enhanced LLM pre-training      ERNIE [35]               2019  E      Integrating KGs into Training Objective
                                  GLM [102]                2020  C      Integrating KGs into Training Objective
                                  Ebert [103]              2020  D      Integrating KGs into Training Objective
                                  KEPLER [40]              2021  E      Integrating KGs into Training Objective
                                  Deterministic LLM [104]  2022  E      Integrating KGs into Training Objective
                                  KALA [105]               2022  D      Integrating KGs into Training Objective
                                  WKLM [106]               2020  E      Integrating KGs into Training Objective
                                  K-BERT [36]              2020  E+D    Integrating KGs into Language Model Inputs
                                  CoLAKE [107]             2020  E      Integrating KGs into Language Model Inputs
                                  ERNIE 3.0 [101]          2021  E+D    Integrating KGs into Language Model Inputs
                                  DkLLM [108]              2022  E      Integrating KGs into Language Model Inputs
                                  KP-PLM [109]             2022  E      KGs Instruction-tuning
                                  OntoPrompt [110]         2022  E+D    KGs Instruction-tuning
                                  ChatKBQA [111]           2023  E      KGs Instruction-tuning
                                  RoG [112]                2023  E      KGs Instruction-tuning
KG-enhanced LLM inference         KGLM [113]               2019  E      Retrieval-augmented knowledge fusion
                                  REALM [114]              2020  E      Retrieval-augmented knowledge fusion
                                  RAG [92]                 2020  E      Retrieval-augmented knowledge fusion
                                  EMAT [115]               2022  E      Retrieval-augmented knowledge fusion
                                  Li et al. [64]           2023  C      KGs Prompting
                                  Mindmap [65]             2023  E+D    KGs Prompting
                                  ChatRule [116]           2023  E+D    KGs Prompting
                                  CoK [117]                2023  E+C+D  KGs Prompting
KG-enhanced LLM interpretability  LAMA [14]                2019  E      KGs for LLM probing
                                  LPAQA [118]              2020  E      KGs for LLM probing
                                  Autoprompt [119]         2020  E      KGs for LLM probing
                                  MedLAMA [120]            2022  D      KGs for LLM probing
                                  LLM-facteval [121]       2023  E+D    KGs for LLM probing
                                  KagNet [38]              2019  C      KGs for LLM analysis
                                  Interpret-lm [122]       2021  E      KGs for LLM analysis
                                  knowledge-neurons [39]   2021  E      KGs for LLM analysis
                                  Shaobo et al. [123]      2022  E      KGs for LLM analysis
E: Encyclopedic Knowledge Graphs, C: Commonsense Knowledge Graphs, D: Domain-Specific Knowledge Graphs.

4.1 KG-enhanced LLM Pre-training
Existing large language models mostly rely on unsupervised training on large-scale corpora. While these models may exhibit impressive performance on downstream tasks, they often lack practical knowledge relevant to the real world. Previous works that integrate KGs into large language models can be categorized into three parts: 1) Integrating KGs into the training objective, 2) Integrating KGs into LLM inputs, and 3) KGs Instruction-tuning.

4.1.1 Integrating KGs into Training Objective
The research efforts in this category focus on designing novel knowledge-aware training objectives. An intuitive idea is to expose more knowledge entities in the pre-training objective. GLM [102] leverages the knowledge graph structure to assign a masking probability. Specifically, entities that can be reached within a certain number of hops are considered to be the most important entities for learning, and they are given a higher masking probability during pre-training. Furthermore, E-BERT [103] further controls the balance between the token-level and entity-level training losses. The training loss values are used as indications of the learning process for token and entity, which dynamically determines their ratio for the next training epochs. SKEP [124] also follows a similar fusion to inject sentiment knowledge during LLM pre-training. SKEP first determines words with positive and negative sentiment by utilizing PMI along with a predefined set of seed sentiment words. Then, it assigns a higher masking probability to those identified
Fig. 9. Injecting KG information into LLMs training objective via text-
knowledge alignment loss, where h denotes the hidden representation
generated by LLMs.
sentiment words in the word masking objective.
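The knowledge-aware masking idea shared by GLM and SKEP, raising the masking probability of tokens tied to knowledge (KG entities or sentiment words), can be sketched as follows. This is a minimal illustration; the token list, entity set, and probability values are assumptions for the example, not values from either paper.

```python
import random

def choose_mask_positions(tokens, knowledge_tokens, base_p=0.15, boost_p=0.5, seed=0):
    """Pick positions to mask, biased toward knowledge-bearing tokens.

    Tokens in `knowledge_tokens` (e.g. KG entities for GLM, sentiment
    words for SKEP) are masked with probability `boost_p`; all other
    tokens with the ordinary MLM probability `base_p`.
    """
    rng = random.Random(seed)
    positions = []
    for i, tok in enumerate(tokens):
        p = boost_p if tok in knowledge_tokens else base_p
        if rng.random() < p:
            positions.append(i)
    return positions

tokens = "Obama was born in Honolulu and served as president".split()
entities = {"Obama", "Honolulu", "president"}
masked = choose_mask_positions(tokens, entities)
```

Setting `base_p=0.0` and `boost_p=1.0` degenerates to masking exactly the knowledge tokens, which makes the bias easy to verify.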
The other line of work explicitly leverages the connections between knowledge and input text. As shown in Fig. 9, ERNIE [35] proposes a novel word-entity alignment training objective as a pre-training objective. Specifically, ERNIE feeds both sentences and the corresponding entities mentioned in the text into LLMs, and then trains the LLMs to predict alignment links between textual tokens and entities in knowledge graphs. Similarly, KALM [91] enhances the input tokens by incorporating entity embeddings and adds an entity prediction pre-training task in addition to the token-only pre-training objective. This approach aims to improve the ability of LLMs to capture knowledge related to entities. Finally, KEPLER [40] directly combines a knowledge graph embedding training objective and a masked token pre-training objective in a shared transformer-based encoder. Deterministic LLM [104] focuses on pre-training language models to capture deterministic factual knowledge. It only masks spans that have a deterministic entity as the answer and introduces additional clue contrastive learning and clue classification objectives. WKLM [106] first replaces entities in the text with other same-type entities and then feeds them into LLMs. The model is further pre-trained to distinguish whether the entities have been replaced or not.

4.1.2 Integrating KGs into LLM Inputs
As shown in Fig. 10, this kind of research focuses on introducing relevant knowledge sub-graphs into the inputs of LLMs. Given a knowledge graph triple and the corresponding sentences, ERNIE 3.0 [101] represents the triple as a sequence of tokens and directly concatenates them with the sentences. It further randomly masks either the relation token in the triple or tokens in the sentences to better combine knowledge with textual representations. However, such direct knowledge triple concatenation allows the tokens in the sentence to interact intensively with the tokens in the knowledge sub-graph, which could result in Knowledge Noise [36]. To solve this issue, K-BERT [36] takes the first step of injecting the knowledge triple into the sentence via a visible matrix, where only the knowledge entities have access to the knowledge triple information, while the tokens in the sentences can only see each other in the self-attention module. To further reduce Knowledge Noise, Colake [107] proposes a unified word-knowledge graph (shown in Fig. 10), where the tokens in the input sentences form a fully connected word graph and tokens aligned with knowledge entities are connected with their neighboring entities.

Fig. 10. Injecting KG information into LLMs inputs using graph structure.

The above methods can indeed inject a large amount of knowledge into LLMs. However, they mostly focus on popular entities and overlook low-frequency and long-tail ones. DkLLM [108] aims to improve the LLM representations of those entities. DkLLM first proposes a novel measurement to determine long-tail entities and then replaces these selected entities in the text with pseudo-token embeddings as new input to the large language models. Furthermore, Dict-BERT [125] proposes to leverage external dictionaries to solve this issue. Specifically, Dict-BERT improves the representation quality of rare words by appending their definitions from the dictionary at the end of the input text, and trains the language model to locally align rare word representations in input sentences and dictionary definitions, as well as to discriminate whether the input text and definition are correctly mapped.

4.1.3 KGs Instruction-tuning
Instead of injecting factual knowledge into LLMs, KGs Instruction-tuning aims to fine-tune LLMs to better comprehend the structure of KGs and effectively follow user instructions to conduct complex tasks. KGs Instruction-tuning utilizes both the facts and the structure of KGs to create instruction-tuning datasets. LLMs finetuned on these datasets can extract both factual and structural knowledge from KGs, enhancing the reasoning ability of LLMs. KP-PLM [109] first designs several prompt templates to transfer structural graphs into natural language text. Then, two self-supervised tasks are proposed to finetune LLMs to further leverage the knowledge from these prompts. OntoPrompt [110] proposes an ontology-enhanced prompt-tuning method that can place knowledge of entities into the context of LLMs, which are further finetuned on several downstream tasks. ChatKBQA [111] finetunes LLMs on KG structure to generate logical queries, which can be executed on KGs to obtain answers. To better reason on graphs, RoG [112] presents a planning-retrieval-reasoning framework. RoG is finetuned on KG structure to generate relation paths grounded by KGs as faithful plans. These plans are then used to retrieve valid
reasoning paths from the KGs for LLMs to conduct faithful
reasoning and generate interpretable results.
KGs Instruction-tuning can better leverage the knowl-
edge from KGs for downstream tasks. However, it requires
retraining the models, which is time-consuming and re-
quires lots of resources.
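The dataset-construction step behind KGs Instruction-tuning can be sketched as follows: each KG fact is verbalized into an instruction-response pair. The template and phrasing below are illustrative assumptions; KP-PLM and related methods design their own prompt templates for turning KG structure into natural language.

```python
def triple_to_instruction(head, relation, tail):
    """Turn one KG fact (head, relation, tail) into an
    instruction-tuning example. Template is a hypothetical
    stand-in for the hand-designed templates used in practice."""
    return {
        "instruction": f"What is the {relation} of {head}?",
        "output": tail,
    }

example = triple_to_instruction("Obama", "profession", "president")
```

Running the same function over every triple in a KG yields a finetuning corpus that exposes both the facts and, with richer templates (e.g. for relation paths), the structure of the graph.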
4.2 KG-enhanced LLM Inference

The above methods can effectively fuse knowledge into LLMs. However, real-world knowledge is subject to change, and a limitation of these approaches is that they do not permit updates to the incorporated knowledge without retraining the model. As a result, they may not generalize well to unseen knowledge at inference time [126]. Therefore, considerable research has been devoted to keeping the knowledge space and the text space separate and injecting the knowledge at inference time. These methods mostly focus on Question Answering (QA) tasks, because QA requires the model to capture both textual semantic meanings and up-to-date real-world knowledge.

Fig. 11. Retrieving external knowledge to enhance the LLM generation.

4.2.1 Retrieval-Augmented Knowledge Fusion
Retrieval-augmented knowledge fusion is a popular method of injecting knowledge into LLMs during inference. The key idea is to retrieve relevant knowledge from a large corpus and then fuse the retrieved knowledge into the LLM. As shown in Fig. 11, RAG [92] proposes to combine non-parametric and parametric modules to handle external knowledge. Given the input text, RAG first searches for relevant KG in the non-parametric module via MIPS to obtain several documents. RAG then treats these documents as hidden variables z and feeds them into the output generator, empowered by Seq2Seq LLMs, as additional context information. The research indicates that using different retrieved documents as conditions at different generation steps performs better than using only a single document to guide the whole generation process. The experimental results show that RAG outperforms other parametric-only and non-parametric-only baseline models in open-domain QA. RAG can also generate more specific, diverse, and factual text than other parametric-only baselines. Story-fragments [127] further improves this architecture by adding an additional module that determines salient knowledge entities and fuses them into the generator, improving the quality of generated long stories. EMAT [115] further improves the efficiency of such a system by encoding external knowledge into a key-value memory and exploiting fast maximum inner product search for memory querying. REALM [114] proposes a novel knowledge retriever that helps the model retrieve and attend over documents from a large corpus during the pre-training stage, and successfully improves the performance of open-domain question answering. KGLM [113] selects facts from a knowledge graph using the current context to generate factual sentences. With the help of an external knowledge graph, KGLM can describe facts using out-of-domain words or phrases.

4.2.2 KGs Prompting
To better feed the KG structure into the LLM during inference, KGs prompting aims to design a crafted prompt that converts structured KGs into text sequences, which can be fed as context into LLMs. In this way, LLMs can better take advantage of the structure of KGs to perform reasoning. Li et al. [64] adopt a pre-defined template to convert each triple into a short sentence that can be understood by LLMs for reasoning. Mindmap [65] designs a KG prompt to convert graph structure into a mind map that enables LLMs to perform reasoning by consolidating the facts in KGs and the implicit knowledge from LLMs. ChatRule [116] samples several relation paths from KGs, which are verbalized and fed into LLMs. Then, LLMs are prompted to generate meaningful logical rules that can be used for reasoning. CoK [117] proposes chain-of-knowledge prompting, which uses a sequence of triples to elicit the reasoning ability of LLMs to reach the final answer.
KGs prompting presents a simple way to synergize LLMs and KGs. By using prompts, we can easily harness the power of LLMs to perform reasoning based on KGs without retraining the models. However, the prompts are usually designed manually, which requires a lot of human effort.

4.3 Comparison between KG-enhanced LLM Pre-training and Inference
KG-enhanced LLM pre-training methods commonly enrich large amounts of unlabeled corpus with semantically relevant real-world knowledge. These methods allow the knowledge representations to be aligned with the appropriate linguistic context and explicitly train LLMs to leverage that knowledge from scratch. When the resulting LLMs are applied to downstream knowledge-intensive tasks, they should achieve optimal performance. In contrast, KG-enhanced LLM inference methods only present the knowledge to LLMs at the inference stage, and the underlying LLMs may not be trained to fully leverage this knowledge when conducting downstream tasks, potentially resulting in sub-optimal model performance.
However, real-world knowledge is dynamic and requires frequent updates. Despite being effective, KG-enhanced LLM pre-training methods do not permit knowledge updates or editing without model re-training. As a result, they may generalize poorly to recent or unseen knowledge. KG-enhanced LLM inference methods can easily accommodate knowledge updates by changing the inference inputs. These methods help improve LLM performance on new knowledge and domains.
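The inference-time injection shared by retrieval fusion and KGs prompting, verbalizing retrieved triples into the model's input so that knowledge can be updated without retraining, can be sketched as below. The prompt template and the toy fact are illustrative assumptions, not from any of the cited systems.

```python
def build_prompt(question, retrieved_triples):
    """Inject up-to-date KG facts at inference time by verbalizing
    retrieved (head, relation, tail) triples into the prompt context.
    Updating the KG changes the answer without touching model weights."""
    facts = "\n".join(f"{h} {r} {t}." for h, r, t in retrieved_triples)
    return f"Facts:\n{facts}\n\nQuestion: {question}\nAnswer:"

prompt = build_prompt(
    "What is the capital of France?",
    [("France", "has capital", "Paris")],
)
```

Swapping the triple list is all that is needed to reflect new knowledge, which is exactly the maintainability advantage of the inference-time methods discussed above.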
In summary, when to use these methods depends on the application scenario. If one wishes to apply LLMs to handle time-insensitive knowledge in particular domains (e.g., commonsense and reasoning knowledge), KG-enhanced LLM pre-training methods should be considered. Otherwise, KG-enhanced LLM inference methods can be used to handle open-domain knowledge with frequent updates.

4.4 KG-enhanced LLM Interpretability
Although LLMs have achieved remarkable success in many NLP tasks, they are still criticized for their lack of interpretability. Large language model (LLM) interpretability refers to the understanding and explanation of the inner workings and decision-making processes of a large language model [17]. Improving it can increase the trustworthiness of LLMs and facilitate their application in high-stakes scenarios such as medical diagnosis and legal judgment. Knowledge graphs (KGs) represent knowledge structurally and can provide good interpretability for reasoning results. Therefore, researchers try to utilize KGs to improve the interpretability of LLMs, which can be roughly grouped into two categories: 1) KGs for language model probing, and 2) KGs for language model analysis.

4.4.1 KGs for LLM Probing
Fig. 12. The general framework of using knowledge graph for language model probing.

LLM probing aims to understand the knowledge stored in LLMs. LLMs, trained on large-scale corpora, are often said to contain enormous knowledge. However, LLMs store this knowledge in a hidden way, making it hard to figure out what is actually stored. Moreover, LLMs suffer from the hallucination problem [15], generating statements that contradict facts. This issue significantly affects their reliability. Therefore, it is necessary to probe and verify the knowledge stored in LLMs.
LAMA [14] is the first work to probe the knowledge in LLMs by using KGs. As shown in Fig. 12, LAMA first converts the facts in KGs into cloze statements with a pre-defined prompt template and then uses LLMs to predict the missing entity. The prediction results are used to evaluate the knowledge stored in LLMs. For example, to probe whether an LLM knows the fact (Obama, profession, president), we first convert the fact triple into the cloze question "Obama's profession is ___." with the object masked. Then, we test whether the LLM can predict the object "president" correctly.
However, LAMA ignores that the prompts themselves can be inappropriate. For example, the prompt "Obama worked as a ___" may be more favorable to the prediction of the blank by the language model than "Obama is a ___ by profession". Thus, LPAQA [118] proposes a mining- and paraphrasing-based method to automatically generate high-quality and diverse prompts for a more accurate assessment of the knowledge contained in the language model. Moreover, Adolphs et al. [128] attempt to use examples to make the language model understand the query, and their experiments obtain substantial improvements for BERT-large on the T-REx data. Unlike approaches that use manually defined prompt templates, Autoprompt [119] proposes an automated method based on gradient-guided search to create prompts. LLM-facteval [121] designs a systematic framework that automatically generates probing questions from KGs. The generated questions are then used to evaluate the factual knowledge stored in LLMs.
Instead of probing general knowledge with encyclopedic and commonsense knowledge graphs, BioLAMA [129] and MedLAMA [120] probe the medical knowledge in LLMs by using medical knowledge graphs. Alex et al. [130] investigate the capacity of LLMs to retain less popular factual knowledge. They select unpopular facts from the Wikidata knowledge graph that involve low-frequency clicked entities. These facts are then used for evaluation, where the results indicate that LLMs encounter difficulties with such knowledge, and that scaling fails to appreciably improve memorization of factual knowledge in the tail.

4.4.2 KGs for LLM Analysis
Fig. 13. The general framework of using knowledge graph for language model analysis.

Knowledge graphs (KGs) for LLM analysis aim to answer questions such as "how do LLMs generate their results?" and "how do the functions and structures inside LLMs work?". To analyze the inference process of LLMs, as shown in Fig. 13, KagNet [38] and QA-GNN [131] ground the results generated by LLMs at each reasoning step in knowledge graphs. In this way, the reasoning process of LLMs can be explained by extracting the graph structure from KGs. Shaobo et al. [123] investigate how LLMs generate results correctly. They adopt a causal-inspired analysis over facts extracted from KGs, which quantitatively measures the word patterns that LLMs depend on to generate results. The results show that LLMs generate missing facts more from positionally close words than from knowledge-dependent words. Thus, they claim that LLMs are inadequate at memorizing factual knowledge because of this inaccurate dependence.
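The LAMA-style cloze probing from Sec. 4.4.1 can be sketched as a template-filling step: each KG fact is converted into a masked statement, and the model's fill-in is compared against the gold object. The per-relation templates below are illustrative assumptions in the spirit of LAMA's pre-defined templates.

```python
RELATION_TEMPLATES = {
    # Hypothetical per-relation cloze templates; LAMA defines
    # its own template for each KG relation.
    "profession": "{subject}'s profession is [MASK].",
    "place_of_birth": "{subject} was born in [MASK].",
}

def fact_to_cloze(subject, relation, obj):
    """Convert a KG fact into a cloze probe plus its expected answer.
    A masked LM is then asked to fill [MASK]; predicting `obj`
    counts as evidence the fact is stored in the model."""
    statement = RELATION_TEMPLATES[relation].format(subject=subject)
    return statement, obj

cloze, answer = fact_to_cloze("Obama", "profession", "president")
```

As the LPAQA discussion above notes, accuracy depends heavily on the template wording, which is why mined or paraphrased template sets probe more reliably than a single hand-written one.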
To interpret the training of LLMs, Swamy et al. [122] adopt the language model during pre-training to generate knowledge graphs. The knowledge acquired by LLMs during training can then be unveiled explicitly by the facts in the KGs. To explore how implicit knowledge is stored in the parameters of LLMs, Dai et al. [39] propose the concept of knowledge neurons. Specifically, activation of the identified knowledge neurons is highly correlated with knowledge expression. Thus, they explore the knowledge and facts represented by each neuron by suppressing and amplifying knowledge neurons.

5 LLM-AUGMENTED KGS

Knowledge graphs are famous for representing knowledge in a structured manner. They have been applied in many downstream tasks such as question answering, recommendation, and web search. However, conventional KGs are often incomplete, and existing methods often do not consider textual information. To address these issues, recent research has explored integrating LLMs to augment KGs, taking textual information into account and improving performance in downstream tasks. In this section, we introduce the recent research on LLM-augmented KGs: methods that integrate LLMs for KG embedding, KG completion, KG construction, KG-to-text generation, and KG question answering, respectively. Representative works are summarized in Table 3.

TABLE 3
Summary of representative LLM-augmented KG methods (Method | Year | LLM | Technique).

Task: LLM-augmented KG embedding
  Pretrain-KGE [94] | 2020 | E | LLMs as Text Encoders
  KEPLER [40] | 2020 | E | LLMs as Text Encoders
  Nayyeri et al. [132] | 2022 | E | LLMs as Text Encoders
  Huang et al. [133] | 2022 | E | LLMs as Text Encoders
  CoDEx [134] | 2022 | E | LLMs as Text Encoders
  LMKE [135] | 2022 | E | LLMs for Joint Text and KG Embedding
  kNN-KGE [136] | 2022 | E | LLMs for Joint Text and KG Embedding
  LambdaKG [137] | 2023 | E+D+ED | LLMs for Joint Text and KG Embedding

Task: LLM-augmented KG completion
  KG-BERT [26] | 2019 | E | Joint Encoding
  MTL-KGC [138] | 2020 | E | Joint Encoding
  PKGC [139] | 2022 | E | Joint Encoding
  LASS [140] | 2022 | E | Joint Encoding
  MEM-KGC [141] | 2021 | E | MLM Encoding
  OpenWorld KGC [142] | 2023 | E | MLM Encoding
  StAR [143] | 2021 | E | Separated Encoding
  SimKGC [144] | 2022 | E | Separated Encoding
  LP-BERT [145] | 2022 | E | Separated Encoding
  GenKGC [96] | 2022 | ED | LLM as decoders
  KGT5 [146] | 2022 | ED | LLM as decoders
  KG-S2S [147] | 2022 | ED | LLM as decoders
  AutoKG [93] | 2023 | D | LLM as decoders

Task: LLM-augmented KG construction
  ELMO [148] | 2018 | E | Named Entity Recognition
  GenerativeNER [149] | 2021 | ED | Named Entity Recognition
  LDET [150] | 2019 | E | Entity Typing
  BOX4Types [151] | 2021 | E | Entity Typing
  ELQ [152] | 2020 | E | Entity Linking
  ReFinED [153] | 2022 | E | Entity Linking
  BertCR [154] | 2019 | E | CR (Within-document)
  Spanbert [155] | 2020 | E | CR (Within-document)
  CDLM [156] | 2021 | E | CR (Cross-document)
  CrossCR [157] | 2021 | E | CR (Cross-document)
  CR-RL [158] | 2021 | E | CR (Cross-document)
  SentRE [159] | 2019 | E | RE (Sentence-level)
  Curriculum-RE [160] | 2021 | E | RE (Sentence-level)
  DREEAM [161] | 2023 | E | RE (Document-level)
  Kumar et al. [95] | 2020 | E | End-to-End Construction
  Guo et al. [162] | 2021 | E | End-to-End Construction
  Grapher [41] | 2021 | ED | End-to-End Construction
  PiVE [163] | 2023 | D+ED | End-to-End Construction
  COMET [164] | 2019 | D | Distilling KGs from LLMs
  BertNet [165] | 2022 | E | Distilling KGs from LLMs
  West et al. [166] | 2022 | D | Distilling KGs from LLMs

Task: LLM-augmented KG-to-text generation
  Ribeiro et al. [167] | 2021 | ED | Leveraging Knowledge from LLMs
  JointGT [42] | 2021 | ED | Leveraging Knowledge from LLMs
  FSKG2Text [168] | 2021 | D+ED | Leveraging Knowledge from LLMs
  GAP [169] | 2022 | ED | Leveraging Knowledge from LLMs
  GenWiki [170] | 2020 | - | Constructing KG-text aligned Corpus
  KGPT [171] | 2020 | ED | Constructing KG-text aligned Corpus

Task: LLM-augmented KGQA
  Lukovnikov et al. [172] | 2019 | E | Entity/Relation Extractor
  Luo et al. [173] | 2020 | E | Entity/Relation Extractor
  QA-GNN [131] | 2021 | E | Entity/Relation Extractor
  Nan et al. [174] | 2023 | E+D+ED | Entity/Relation Extractor
  DEKCOR [175] | 2021 | E | Answer Reasoner
  DRLK [176] | 2022 | E | Answer Reasoner
  OreoLM [177] | 2022 | E | Answer Reasoner
  GreaseLM [178] | 2022 | E | Answer Reasoner
  ReLMKG [179] | 2022 | E | Answer Reasoner
  UniKGQA [43] | 2023 | E | Answer Reasoner

E: Encoder-only LLMs, D: Decoder-only LLMs, ED: Encoder-decoder LLMs.

5.1 LLM-augmented KG Embedding

Knowledge graph embedding (KGE) aims to map each entity and relation into a low-dimensional vector (embedding) space. These embeddings contain both the semantic and structural information of KGs and can be utilized for various tasks such as question answering [180], reasoning [38], and recommendation [181]. Conventional knowledge graph embedding methods mainly rely on the structural information of KGs to optimize a scoring function defined on the embeddings (e.g., TransE [33] and DistMult [182]). However, these approaches often fall short in representing unseen entities and long-tailed relations due to their limited structural connectivity [183], [184]. To address this issue, as shown in Fig. 14, recent research adopts LLMs to enrich the representations of KGs by encoding the textual descriptions of entities and relations [40], [94].

Fig. 14. LLMs as text encoder for knowledge graph embedding (KGE).

5.1.1 LLMs as Text Encoders
Pretrain-KGE [94] is a representative method that follows the framework shown in Fig. 14. Given a triple (h, r, t) from a KG, it first uses an LLM encoder to encode the textual descriptions of the entities h, t and the relation r into representations:

e_h = LLM(Text_h), e_t = LLM(Text_t), e_r = LLM(Text_r),   (1)

where e_h, e_r, and e_t denote the initial embeddings of h, r, and t, respectively. Pretrain-KGE uses BERT as the LLM encoder in its experiments. The initial embeddings are then fed into a KGE model to generate the final embeddings v_h, v_r, and v_t. During the KGE training phase, the KGE model is optimized with the standard margin-based KGE loss:

L = [γ + f(v_h, v_r, v_t) − f(v'_h, v'_r, v'_t)],   (2)

where f is the KGE scoring function, γ is a margin hyperparameter, and v'_h, v'_r, and v'_t are negative samples. In this way, the KGE model learns adequate structural information while retaining partial knowledge from the LLM, enabling better knowledge graph embedding.
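The margin loss of Eq. (2) can be made concrete with TransE as the scoring function f. This is a minimal sketch: it assumes the common hinge form of the loss (the bracket in Eq. (2) clipped at zero) and treats f as the L2 distance ||v_h + v_r − v_t||, with tiny hand-picked 2-d embeddings in place of LLM-initialized ones.

```python
def transe_distance(h, r, t):
    """TransE score f as a distance: ||h + r - t||_2.
    Lower distance means a more plausible triple."""
    return sum((hi + ri - ti) ** 2 for hi, ri, ti in zip(h, r, t)) ** 0.5

def margin_loss(pos, neg, gamma=1.0):
    """Hinge form of Eq. (2): L = max(0, gamma + f(pos) - f(neg)),
    where pos/neg are (h, r, t) embedding triples."""
    return max(0.0, gamma + transe_distance(*pos) - transe_distance(*neg))

# Toy 2-d embeddings: a plausible triple (h + r == t) vs. a corrupted tail.
h, r = [1.0, 0.0], [0.0, 1.0]
t_pos, t_neg = [1.0, 1.0], [3.0, 3.0]
loss = margin_loss((h, r, t_pos), (h, r, t_neg))  # 0.0: already separated by > gamma
```

Since the corrupted tail is already more than γ farther than the true tail, the hinge is inactive and the loss is zero; swapping the roles of the two triples yields a positive loss that training would push down.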
KEPLER [40] offers a unified model for knowledge embedding and pre-trained language representation. It not only generates effective text-enhanced knowledge embeddings using powerful LLMs but also seamlessly integrates factual knowledge into LLMs. Nayyeri et al. [132] use LLMs to generate word-level, sentence-level, and document-level representations, which are integrated with graph structure embeddings into a unified vector through Dihedron and Quaternion representations of 4D hypercomplex numbers. Huang et al. [133] combine LLMs with other vision and graph encoders to learn multi-modal knowledge graph embeddings that enhance the performance of downstream tasks. CoDEx [134] presents a novel loss function empowered by LLMs that guides KGE models in measuring the likelihood of triples by considering the textual information. The proposed loss function is agnostic to model structure and can be incorporated with any KGE model.

5.1.2 LLMs for Joint Text and KG Embedding
Fig. 15. LLMs for joint text and knowledge graph embedding.

Instead of using a KGE model to account for graph structure, another line of methods directly employs LLMs to incorporate both the graph structure and the textual information into the embedding space simultaneously. As shown in Fig. 15, kNN-KGE [136] treats the entities and relations as special tokens in the LLM. During training, it transfers each triple (h, r, t) and the corresponding text descriptions into a sentence x as

x = [CLS] h Text_h [SEP] r [SEP] [MASK] Text_t [SEP],   (3)

where the tail entity is replaced by [MASK]. The sentence is fed into the LLM, which is then finetuned to predict the masked entity, formulated as

P_LLM(t | h, r) = P([MASK] = t | x, Θ),   (4)

where Θ denotes the parameters of the LLM. The LLM is optimized to maximize the probability of the correct entity t. After training, the corresponding token representations in the LLM are used as embeddings for entities and relations. Similarly, LMKE [135] proposes a contrastive learning method to improve the learning of embeddings generated by LLMs for KGE. Meanwhile, to better capture graph structure, LambdaKG [137] samples 1-hop neighbor entities and concatenates their tokens with the triple as a sentence that is fed into the LLM.

5.2 LLM-augmented KG Completion

Knowledge Graph Completion (KGC) refers to the task of inferring missing facts in a given knowledge graph. Similar to KGE, conventional KGC methods mainly focused on the structure of the KG, without considering the extensive textual information. However, the recent integration of LLMs enables KGC methods to encode text or generate facts for better KGC performance. These methods fall into two distinct categories based on their utilization styles: 1) LLM as Encoders (PaE), and 2) LLM as Generators (PaG).

5.2.1 LLM as Encoders (PaE)
As shown in Fig. 16 (a), (b), and (c), this line of work first uses encoder-only LLMs to encode textual information as well as KG facts. The plausibility of the triples or the masked entities is then predicted by feeding the encoded representation into a prediction head, which can be a simple MLP or a conventional KG scoring function (e.g., TransE [33] and TransR [185]).
Joint Encoding. Since encoder-only LLMs (e.g., BERT [1]) are good at encoding text sequences, KG-BERT [26] represents a triple (h, r, t) as a text sequence and encodes it with an LLM (Fig. 16(a)):

x = [CLS] Text_h [SEP] Text_r [SEP] Text_t [SEP],   (5)

The final hidden state of the [CLS] token is fed into a classifier to predict the plausibility of the triple, formulated as

s = σ(MLP(e_[CLS])),   (6)

where σ(·) denotes the sigmoid function and e_[CLS] denotes the representation encoded by the LLM. To improve the efficacy of KG-BERT, MTL-KGC [138] proposes a Multi-Task Learning framework for KGC, which incorporates additional auxiliary tasks into the model's training, i.e., relation prediction (RP) and relevance ranking (RR). PKGC [139] assesses the validity of a triple (h, r, t) by transforming the triple and its supporting information into natural language sentences with pre-defined templates. These sentences are then processed by LLMs for binary classification. The supporting information of the triple is derived from the attributes of h and t with a verbalizing function. For instance, if the triple is (Lebron James, member of sports team, Lakers), the information regarding Lebron James is verbalized as "Lebron James: American basketball player". LASS [140] observes that language semantics and graph structures are equally vital to KGC. As a result, LASS is proposed to jointly learn two types of embeddings: semantic embeddings and structure embeddings. In this method, the full text of a triple is forwarded to the LLM, and the mean poolings of the corresponding LLM outputs for h, r, and t are calculated separately. These embeddings are then passed to a graph-based method, i.e., TransE, to reconstruct the KG structure.
MLM Encoding. Instead of encoding the full text of a triple, many works introduce the concept of the Masked Language Model (MLM) to encode KG text (Fig. 16(b)). MEM-KGC [141] uses a Masked Entity Model (MEM) classification mechanism to predict the masked entities of the triple. The input text is of the form

x = [CLS] Text_h [SEP] Text_r [SEP] [MASK] [SEP],   (7)

Similar to Eq. 4, it tries to maximize the probability that the masked entity is the correct entity t.
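The joint-encoding input of Eq. (5) and the scoring head of Eq. (6) can be sketched as follows. The MLP of Eq. (6) is simplified to a single linear layer here purely for illustration, and the weights are arbitrary placeholders rather than trained parameters.

```python
import math

def kg_bert_input(text_h, text_r, text_t):
    """Linearize a triple into the joint-encoding sequence of Eq. (5)."""
    return f"[CLS] {text_h} [SEP] {text_r} [SEP] {text_t} [SEP]"

def triple_plausibility(cls_embedding, weights, bias=0.0):
    """Eq. (6) with the MLP reduced to one linear layer (an
    illustrative simplification): s = sigmoid(w . e_[CLS] + b)."""
    z = sum(w * e for w, e in zip(weights, cls_embedding)) + bias
    return 1.0 / (1.0 + math.exp(-z))

x = kg_bert_input("Lebron James", "member of sports team", "Lakers")
s = triple_plausibility([0.0, 0.0], [1.0, 1.0])  # zero logit -> 0.5
```

In a real PaE system the `[CLS]` embedding would come from the encoder's final hidden state, and s close to 1 marks the triple as plausible.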
Fig. 16. The general framework of adopting LLMs as encoders (PaE) for KG Completion.
Fig. 17. The general framework of adopting LLMs as decoders (PaG) for KG Completion. The En. and De. denote the encoder and decoder, respectively.

Additionally, to enable the model to learn unseen entities, MEM-KGC integrates multitask learning for entity and super-class prediction based on the text description of entities:

x = [CLS] [MASK] [SEP] Text_h [SEP].   (8)

OpenWorld KGC [142] expands the MEM-KGC model to address the challenges of open-world KGC with a pipeline framework in which two sequential MLM-based modules are defined: Entity Description Prediction (EDP), an auxiliary module that predicts a corresponding entity for a given textual description, and Incomplete Triple Prediction (ITP), the target module that predicts a plausible entity for a given incomplete triple (h, r, ?). EDP first encodes the triple with Eq. 8 and generates the final hidden state, which is then forwarded into ITP as the embedding of the head entity in Eq. 7 to predict target entities.
Separated Encoding. As shown in Fig. 16(c), these methods partition a triple (h, r, t) into two distinct parts, i.e., (h, r) and t, which can be expressed as

x_(h,r) = [CLS] Text_h [SEP] Text_r [SEP],   (9)
x_t = [CLS] Text_t [SEP].   (10)

The two parts are then encoded separately by LLMs, and the final hidden states of the [CLS] tokens are used as the representations of (h, r) and t, respectively. SimKGC [144] is another instance of leveraging a Siamese textual encoder to encode textual representations. Following the encoding process, SimKGC applies contrastive learning techniques to these representations: it computes the similarity between the encoded representation of a given triple and its positive and negative samples. The similarity between the encoded representation of the triple and a positive sample is maximized, while the similarity to a negative sample is minimized. This enables SimKGC to learn a representation space that separates plausible and implausible triples. To avoid overfitting to textual information, CSPromp-KG [186] employs parameter-efficient prompt learning for KGC.
LP-BERT [145] is a hybrid KGC method that combines both MLM Encoding and Separated Encoding. This approach consists of two stages, namely pre-training and fine-tuning. During pre-training, the method uses the standard MLM mechanism to pre-train an LLM with KGC data. During the fine-tuning stage, the LLM encodes both parts and is optimized using a contrastive learning strategy (similar to SimKGC [144]).

5.2.2 LLM as Generators (PaG)
Recent works use LLMs as sequence-to-sequence generators
are then fed into a scoring function to predict the possibility inKGC.AspresentedinFig.17(a)and(b),theseapproaches
of the triple, formulated as involve encoder-decoder or decoder-only LLMs. The LLMs
s = fscore (e(h,r ),et), (11)receiveasequencetextinputofthequerytriple(h,r, ?) ,and
generate the text of tail entityt directly.
wherefscore denotes the score function like TransE. GenKGC [96] uses the large language model BART [5]
StAR [143] applies Siamese-style textual encoders on as the backbone model. Inspired by the in-context learning
their text, encoding them into separate contextualized rep- approach used in GPT-3 [59], where the model concatenates
resentations.Toavoidthecombinatorialexplosionoftextual relevant samples to learn correct output answers, GenKGC
encoding approaches, e.g., KG-BERT, StAR employs a scor- proposes a relation-guided demonstration technique that
ing module that involves both deterministic classifier and includes triples with the same relation to facilitating the
spatial measurement for representation and structure learn- model’s learning process. In addition, during generation,
ing respectively, which also enhances structured knowledge an entity-aware hierarchical decoding method is proposed
by exploring the spatial characteristics. SimKGC [144] is to reduce the time complexity. KGT5 [146] introduces aJOURNAL OF LATEX CLASS FILES, VOL. ??, NO. ??, MONTH 20YY 14
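As a concrete (toy) illustration of the separated-encoding scheme of Eqs. 9–11, the sketch below encodes the (h, r) text and the t text independently and scores the pair with a TransE-style negative distance. The `encode` function here is a deterministic hash-based stand-in for a real LLM [CLS] embedding, so the resulting ranking is meaningless; only the pipeline shape is the point.

```python
import hashlib
import math

def encode(text, dim=8):
    """Stand-in for an LLM's [CLS] embedding: a deterministic toy unit vector."""
    digest = hashlib.sha256(text.encode()).digest()
    v = [digest[i] / 255.0 - 0.5 for i in range(dim)]
    norm = math.sqrt(sum(x * x for x in v))
    return [x / norm for x in v]

def score_triple(head, rel, tail):
    e_hr = encode(f"[CLS] {head} [SEP] {rel} [SEP]")  # Eq. 9
    e_t = encode(f"[CLS] {tail} [SEP]")               # Eq. 10
    # Eq. 11 with a TransE-like f_score: negative distance, higher = more plausible
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(e_hr, e_t)))
    return -dist

# Rank candidate tails for the incomplete triple (France, capital, ?)
candidates = ["Paris", "Berlin", "Rome"]
ranked = sorted(candidates, key=lambda t: score_triple("France", "capital", t),
                reverse=True)
print(ranked)
```

In a real PaE system, `encode` would be a finetuned BERT-style encoder and the candidate set would be every entity in the KG, which is exactly the per-candidate inference cost noted in the PaE/PaG comparison.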
KGT5 [146] introduces a novel KGC model that fulfils four key requirements of such models: scalability, quality, versatility, and simplicity. To address these objectives, the proposed model employs a straightforward T5-small architecture. The model is distinct from previous KGC methods in that it is randomly initialized rather than using pre-trained models. KG-S2S [147] is a comprehensive framework that can be applied to various types of KGC tasks, including static KGC, temporal KGC, and few-shot KGC. To achieve this objective, KG-S2S reformulates the standard KG triple by introducing an additional element, forming a quadruple (h, r, t, m), where m represents an additional "condition" element. Although different KGC tasks may refer to different conditions, they typically have a similar textual format, which enables unification across different KGC tasks. KG-S2S incorporates various techniques, such as entity descriptions, soft prompts, and Seq2Seq dropout, to improve the model's performance. In addition, it utilizes constrained decoding to ensure the generated entities are valid. For closed-source LLMs (e.g., ChatGPT and GPT-4), AutoKG adopts prompt engineering to design customized prompts [93]. As shown in Fig. 18, these prompts contain a task description, few-shot examples, and the test input, which instruct LLMs to predict the tail entity for KG completion.

Fig. 18. The framework of prompt-based PaG for KG Completion.

Comparison between PaE and PaG. LLMs as Encoders (PaE) applies an additional prediction head on top of the representation encoded by LLMs. The PaE framework is therefore much easier to finetune, since one can optimize only the prediction heads and freeze the LLMs. Moreover, the output of the prediction can be easily specified and integrated with existing KGC functions for different KGC tasks. However, during the inference stage, PaE requires computing a score for every candidate in the KG, which can be computationally expensive. Besides, such methods cannot generalize to unseen entities. Furthermore, PaE requires the representation output of the LLMs, whereas some state-of-the-art LLMs (e.g., GPT-4) are closed-source and do not grant access to the representation output.

LLMs as Generators (PaG), on the other hand, does not need a prediction head and can be used without finetuning or access to representations. The PaG framework is therefore suitable for all kinds of LLMs. In addition, PaG directly generates the tail entity, making it efficient in inference without ranking all the candidates, and it easily generalizes to unseen entities. However, a challenge of PaG is that the generated entities may be diverse and not lie in the KG. What is more, the time for a single inference is longer due to auto-regressive generation. Last, how to design a powerful prompt that feeds KGs into LLMs is still an open question. Consequently, while PaG has demonstrated promising results for KGC tasks, the trade-off between model complexity and computational efficiency must be carefully considered when selecting an appropriate LLM-based KGC framework.

5.2.3 Model Analysis

Justin et al. [187] provide a comprehensive analysis of KGC methods integrated with LLMs. Their research investigates the quality of LLM embeddings and finds that they are suboptimal for effective entity ranking. In response, they propose several techniques for processing embeddings to improve their suitability for candidate retrieval. The study also compares different model selection dimensions, such as embedding extraction, query entity extraction, and language model selection. Lastly, the authors propose a framework that effectively adapts LLMs for knowledge graph completion.

5.3 LLM-augmented KG Construction

Knowledge graph construction involves creating a structured representation of knowledge within a specific domain. This includes identifying entities and their relationships with each other. The process of knowledge graph construction typically involves multiple stages, including 1) entity discovery, 2) coreference resolution, and 3) relation extraction. Fig. 19 presents the general framework of applying LLMs for each stage in KG construction. More recent approaches have explored 4) end-to-end knowledge graph construction, which involves constructing a complete knowledge graph in one step, or directly 5) distilling knowledge graphs from LLMs.

5.3.1 Entity Discovery

Entity discovery in KG construction refers to the process of identifying and extracting entities from unstructured data sources, such as text documents, web pages, or social media posts, and incorporating them into a knowledge graph.

Named Entity Recognition (NER) involves identifying and tagging named entities in text with their positions and classifications. Named entities include people, organizations, locations, and other entity types. State-of-the-art NER methods usually employ LLMs to leverage their contextual understanding and linguistic knowledge for accurate entity recognition and classification. There are three NER sub-tasks based on the types of spans identified, i.e., flat NER, nested NER, and discontinuous NER. 1) Flat NER identifies non-overlapping named entities in input text. It is usually conceptualized as a sequence labelling problem, where each token in the text is assigned a unique label based on its position in the sequence [1], [148], [188], [189]. 2) Nested NER considers complex scenarios which allow a token to belong to multiple entities.
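Since flat NER is conceptualized as sequence labelling, turning a model's per-token BIO tags into entity spans is a small, model-independent decoding step. A minimal decoder (independent of any particular cited system):

```python
def decode_bio(tokens, labels):
    """Convert per-token BIO labels into (entity_text, type) spans."""
    spans, current, etype = [], [], None
    for tok, lab in zip(tokens, labels):
        if lab.startswith("B-"):
            if current:
                spans.append((" ".join(current), etype))
            current, etype = [tok], lab[2:]
        elif lab.startswith("I-") and current and lab[2:] == etype:
            current.append(tok)
        else:  # "O" or an inconsistent I- tag closes the open span
            if current:
                spans.append((" ".join(current), etype))
            current, etype = [], None
    if current:
        spans.append((" ".join(current), etype))
    return spans

tokens = ["Barack", "Obama", "visited", "Paris", "."]
labels = ["B-PER", "I-PER", "O", "B-LOC", "O"]
print(decode_bio(tokens, labels))  # [('Barack Obama', 'PER'), ('Paris', 'LOC')]
```

Nested and discontinuous NER break exactly this one-label-per-token assumption, which is why they need span-based or fragment-linking formulations instead.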
The span-based method [190]–[194] is a popular branch of nested NER, which involves enumerating all candidate spans and classifying them into entity types (including a non-entity type). Parsing-based methods [195]–[197] reveal similarities between nested NER and constituency parsing tasks (predicting nested and non-overlapping spans), and propose to integrate the insights of constituency parsing into nested NER. 3) Discontinuous NER identifies named entities that may not be contiguous in the text. To address this challenge, [198] uses the LLM output to identify entity fragments and determine whether they are overlapped or in succession.

Unlike these task-specific methods, GenerativeNER [149] uses a sequence-to-sequence LLM with a pointer mechanism to generate an entity sequence, which is capable of solving all three types of NER sub-tasks.

Entity Typing (ET) aims to provide fine-grained and ultra-grained type information for a given entity mentioned in context. These methods usually utilize LLMs to encode mentions, context, and types. LDET [150] applies pre-trained ELMo embeddings [148] for word representation and adopts LSTM as its sentence and mention encoders. BOX4Types [151] recognizes the importance of type dependency and uses BERT to represent the hidden vector and each type in a hyperrectangular (box) space. LRN [199] considers extrinsic and intrinsic dependencies between labels. It encodes the context and entity with BERT and employs these output embeddings to conduct deductive and inductive reasoning. MLMET [200] uses predefined patterns to construct input samples for the BERT MLM and employs [MASK] to predict context-dependent hypernyms of the mention, which can be viewed as type labels. PL [201] and DFET [202] utilize prompt learning for entity typing. LITE [203] formulates entity typing as textual inference and uses RoBERTa-large-MNLI as the backbone network.

Entity Linking (EL), also known as entity disambiguation, involves linking entity mentions appearing in text to their corresponding entities in a knowledge graph. [204] proposed BERT-based end-to-end EL systems that jointly discover and link entities. ELQ [152] employs a fast bi-encoder architecture to jointly perform mention detection and linking in one pass for downstream question answering systems. Unlike previous models that frame EL as matching in vector space, GENRE [205] formulates it as a sequence-to-sequence problem, autoregressively generating a version of the input markup annotated with the unique identifiers of an entity expressed in natural language. GENRE is extended to its multilingual version, mGENRE [206]. Considering the efficiency challenges of generative EL approaches, [207] parallelizes autoregressive linking across all potential mentions and relies on a shallow and efficient decoder. ReFinED [153] proposes an efficient zero-shot-capable EL approach by taking advantage of fine-grained entity types and entity descriptions, which are processed by a LLM-based encoder.

Fig. 19. The general framework of LLM-based KG construction.

5.3.2 Coreference Resolution (CR)

Coreference resolution is to find all expressions (i.e., mentions) that refer to the same entity or event in a text.

Within-document CR refers to the CR sub-task where all these mentions are in a single document. Mandar et al. [154] initialize LLM-based coreference resolution by replacing the previous LSTM encoder [208] with BERT. This work is followed by the introduction of SpanBERT [155], which is pre-trained on the BERT architecture with a span-based masked language model (MLM). Inspired by these works, Tuan Manh et al. [209] present a strong baseline by incorporating the SpanBERT encoder into a non-LLM approach, e2e-coref [208]. CorefBERT leverages a Mention Reference Prediction (MRP) task which masks one or several mentions and requires the model to predict the masked mention's corresponding referents. CorefQA [210] formulates coreference resolution as a question answering task, where contextual queries are generated for each candidate mention and the coreferent spans are extracted from the document using the queries. Tuan Manh et al. [211] introduce a gating mechanism and a noisy training method to extract information from event mentions using the SpanBERT encoder.

In order to reduce the large memory footprint faced by large LLM-based NER models, Yuval et al. [212] and Raghuveer et al. [213] propose start-to-end and approximation models, respectively, both utilizing bilinear functions to calculate mention and antecedent scores with reduced reliance on span-level representations.

Cross-document CR refers to the sub-task where the mentions referring to the same entity or event might be spread across multiple documents. CDML [156] proposes a cross-document language modeling method which pre-trains a Longformer [214] encoder on concatenated related documents and employs an MLP for binary classification to determine whether a pair of mentions is coreferent or not. CrossCR [157] utilizes an end-to-end model for cross-document coreference resolution which pre-trains the mention scorer on gold mention spans and uses a pairwise scorer to compare mentions with all spans across all documents. CR-RL [158] proposes an actor-critic deep reinforcement learning-based coreference resolver for cross-document CR.

5.3.3 Relation Extraction (RE)

Relation extraction involves identifying semantic relationships between entities mentioned in natural language text.
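Many sentence-level RE encoders (e.g., the matching-the-blanks setup of BERT-MTB discussed below) feed the LLM a sentence in which the two entity mentions are highlighted with marker tokens before encoding. The `[E1]`/`[E2]` markers below are one common convention for this preprocessing step, sketched here as an illustration rather than the exact format of any single cited paper:

```python
def mark_entities(sentence, head, tail):
    """Wrap the two entity mentions in marker tokens before LLM encoding.

    [E1]...[/E1] and [E2]...[/E2] are a common marker convention in
    sentence-level RE; exact token names vary between papers.
    """
    out = sentence.replace(head, f"[E1] {head} [/E1]", 1)
    out = out.replace(tail, f"[E2] {tail} [/E2]", 1)
    return out

s = "Marie Curie was born in Warsaw."
print(mark_entities(s, "Marie Curie", "Warsaw"))
# [E1] Marie Curie [/E1] was born in [E2] Warsaw [/E2].
```

The marked sentence is then encoded, and the hidden states at the marker positions (or the [CLS] token) feed a relation classifier.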
There are two types of relation extraction methods, i.e., sentence-level RE and document-level RE, according to the scope of the text analyzed.

Sentence-level RE focuses on identifying relations between entities within a single sentence. Peng et al. [159] and TRE [215] introduce LLMs to improve the performance of relation extraction models. BERT-MTB [216] learns relation representations based on BERT by performing the matching-the-blanks task and incorporating designed objectives for relation extraction. Curriculum-RE [160] utilizes curriculum learning to improve relation extraction models by gradually increasing the difficulty of the data during training. RECENT [217] introduces SpanBERT and exploits entity type restrictions to reduce the noisy candidate relation types. Jiewen [218] extends RECENT by combining both the entity information and the label information into sentence-level embeddings, which makes the embeddings entity-label aware.

Document-level RE (DocRE) aims to extract relations between entities across multiple sentences within a document. Hong et al. [219] propose a strong baseline for DocRE by replacing the BiLSTM backbone with LLMs. HIN [220] uses LLMs to encode and aggregate entity representations at different levels, including entity, sentence, and document levels. GLRE [221] is a global-to-local network, which uses LLMs to encode the document information in terms of entity global and local representations as well as context relation representations. SIRE [222] uses two LLM-based encoders to extract intra-sentence and inter-sentence relations. LSR [223] and GAIN [224] propose graph-based approaches which induce graph structures on top of LLMs to better extract relations. DocuNet [225] formulates DocRE as a semantic segmentation task and introduces a U-Net [226] on the LLM encoder to capture local and global dependencies between entities. ATLOP [227] focuses on the multi-label problems in DocRE, which are handled with two techniques, i.e., adaptive thresholding for the classifier and localized context pooling for the LLM. DREEAM [161] further extends and improves ATLOP by incorporating evidence information.

End-to-End KG Construction. Currently, researchers are exploring the use of LLMs for end-to-end KG construction. Kumar et al. [95] propose a unified approach to build KGs from raw text, which contains two LLM-powered components. They first finetune a LLM on named entity recognition tasks to make it capable of recognizing entities in raw text. Then, they propose another "2-model BERT" for solving the relation extraction task, which contains two BERT-based classifiers. The first classifier learns the relation class, whereas the second binary classifier learns the direction of the relation between the two entities. The predicted triples and relations are then used to construct the KG. Guo et al. [162] propose an end-to-end knowledge extraction model based on BERT, which can be applied to construct KGs from Classical Chinese text. Grapher [41] presents a novel end-to-end multi-stage system. It first utilizes LLMs to generate KG entities, followed by a simple relation construction head, enabling efficient KG construction from textual descriptions. PiVE [163] proposes a prompting-with-iterative-verification framework that utilizes a smaller LLM like T5 to correct the errors in KGs generated by a larger LLM (e.g., ChatGPT). To further explore advanced LLMs, AutoKG designs several prompts for different KG construction tasks (e.g., entity typing, entity linking, and relation extraction). It then adopts the prompts to perform KG construction using ChatGPT and GPT-4.

Fig. 20. The general framework of distilling KGs from LLMs.

5.3.4 Distilling Knowledge Graphs from LLMs

LLMs have been shown to implicitly encode massive knowledge [14]. As shown in Fig. 20, some research aims to distill knowledge from LLMs to construct KGs. COMET [164] proposes a commonsense transformer model that constructs commonsense KGs by using existing tuples as a seed set of knowledge on which to train. Using this seed set, a LLM learns to adapt its learned representations to knowledge generation and produces novel tuples of high quality. Experimental results reveal that implicit knowledge from LLMs is transferred to generate explicit knowledge in commonsense KGs. BertNet [165] proposes a novel framework for automatic KG construction empowered by LLMs. It requires only a minimal definition of relations as input, automatically generates diverse prompts, and performs an efficient knowledge search within a given LLM for consistent outputs. The constructed KGs show competitive quality, diversity, and novelty, with a richer set of new and complex relations which cannot be extracted by previous methods. West et al. [166] propose a symbolic knowledge distillation framework that distills symbolic knowledge from LLMs. They first finetune a small student LLM by distilling commonsense facts from a large LLM like GPT-3. Then, the student LLM is utilized to generate commonsense KGs.

5.4 LLM-augmented KG-to-text Generation

The goal of knowledge-graph-to-text (KG-to-text) generation is to generate high-quality texts that accurately and consistently describe the input knowledge graph information [228]. KG-to-text generation connects knowledge graphs and texts, significantly improving the applicability of KGs in more realistic NLG scenarios, including storytelling [229] and knowledge-grounded dialogue [230]. However, it is challenging and costly to collect large amounts of graph-text parallel data, resulting in insufficient training and poor generation quality. Thus, many research efforts resort to either 1) leveraging knowledge from LLMs or 2) constructing large-scale weakly-supervised KG-text corpora to solve this issue.

5.4.1 Leveraging Knowledge from LLMs

As pioneering research efforts in using LLMs for KG-to-text generation, Ribeiro et al. [167] and Kale and Rastogi [231] directly fine-tune various LLMs, including BART and T5, with the goal of transferring LLM knowledge for this task.
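Before fine-tuning a seq2seq model such as BART or T5 on KG-to-text data, the input (sub-)KG must be serialized into a token sequence. The linear-traversal idea amounts to flattening triples into one string; the `<H>`/`<R>`/`<T>` markers below are one common convention in the KG-to-text literature, and the exact format varies between papers:

```python
def linearize_kg(triples):
    """Represent a (sub-)KG as a linear token sequence for a seq2seq LLM.

    The <H>/<R>/<T> marker convention is one common choice; the exact
    serialization format differs between papers.
    """
    parts = []
    for head, rel, tail in triples:
        parts.append(f"<H> {head} <R> {rel} <T> {tail}")
    return " ".join(parts)

sub_kg = [("Alan Turing", "field", "computer science"),
          ("Alan Turing", "born in", "London")]
print(linearize_kg(sub_kg))
# <H> Alan Turing <R> field <T> computer science <H> Alan Turing <R> born in <T> London
```

Such a flat serialization discards the graph topology, which is precisely the limitation that structure-aware methods like JointGT and KG-BART (below) are designed to address.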
As shown in Fig. 21, both works simply represent the input graph as a linear traversal and find that such a naive approach successfully outperforms many existing state-of-the-art KG-to-text generation systems. Interestingly, Ribeiro et al. [167] also find that continued pre-training could further improve model performance. However, these methods are unable to explicitly incorporate rich graph semantics in KGs. To enhance LLMs with KG structure information, JointGT [42] proposes to inject KG structure-preserving representations into Seq2Seq large language models. Given input sub-KGs and corresponding text, JointGT first represents the KG entities and their relations as a sequence of tokens, then concatenates them with the textual tokens, which are fed into the LLM. After the standard self-attention module, JointGT uses a pooling layer to obtain the contextual semantic representations of knowledge entities and relations. Finally, these pooled KG representations are aggregated in another structure-aware self-attention layer. JointGT also deploys additional pre-training objectives, including KG and text reconstruction tasks given masked inputs, to improve the alignment between text and graph information. Li et al. [168] focus on the few-shot scenario. Their method first employs a novel breadth-first search (BFS) strategy to better traverse the input KG structure and feed the enhanced linearized graph representations into LLMs for high-quality generated outputs, then aligns the GCN-based and LLM-based KG entity representations. Colas et al. [169] first transform the graph into its appropriate representation before linearizing it. Next, each KG node is encoded via a global attention mechanism, followed by a graph-aware attention module, ultimately being decoded into a sequence of tokens. Different from these works, KG-BART [37] keeps the structure of KGs and leverages graph attention to aggregate the rich concept semantics in the sub-KG, which enhances model generalization on unseen concept sets.

Fig. 21. The general framework of KG-to-text generation.

5.4.2 Constructing Large Weakly KG-text Aligned Corpora

Although LLMs have achieved remarkable empirical success, their unsupervised pre-training objectives are not necessarily well aligned with the task of KG-to-text generation, motivating researchers to develop large-scale KG-text aligned corpora. Jin et al. [170] propose a 1.3M unsupervised KG-to-graph training dataset from Wikipedia. Specifically, they first detect the entities appearing in the text via hyperlinks and named entity detectors, and then only add text that shares a common set of entities with the corresponding knowledge graph, similar to the idea of distant supervision in the relation extraction task [232]. They also provide 1,000+ human-annotated KG-to-text test examples to verify the effectiveness of the pre-trained KG-to-text models. Similarly, Chen et al. [171] also propose a KG-grounded text corpus collected from the English Wikidump. To ensure the connection between KG and text, they only extract sentences with at least two Wikipedia anchor links. Then, they use the entities from those links to query their surrounding neighbors in WikiData and calculate the lexical overlap between these neighbors and the original sentences. Finally, only highly overlapping pairs are selected. The authors explore both graph-based and sequence-based encoders and identify their advantages in various tasks and settings.

5.5 LLM-augmented KG Question Answering

Knowledge graph question answering (KGQA) aims to find answers to natural language questions based on the structured facts stored in knowledge graphs [233], [234]. The inevitable challenge in KGQA is to retrieve related facts and extend the reasoning advantage of KGs to QA. Therefore, recent studies adopt LLMs to bridge the gap between natural language questions and structured knowledge graphs [174], [175], [235]. The general framework of applying LLMs for KGQA is illustrated in Fig. 22, where LLMs can be used as 1) entity/relation extractors and 2) answer reasoners.

5.5.1 LLMs as Entity/relation Extractors

Entity/relation extractors are designed to identify entities and relationships mentioned in natural language questions and retrieve related facts in KGs. Given their proficiency in language comprehension, LLMs can be effectively utilized for this purpose. Lukovnikov et al. [172] are the first to utilize LLMs as classifiers for relation prediction, resulting in a notable improvement in performance compared to shallow neural networks. Nan et al. [174] introduce two LLM-based KGQA frameworks that adopt LLMs to detect mentioned entities and relations. Then, they query the answer in KGs using the extracted entity-relation pairs. QA-GNN [131] uses LLMs to encode the question and candidate answer pairs, which are adopted to estimate the importance of relevant KG entities. The entities are retrieved to form a subgraph, where answer reasoning is conducted by a graph neural network. Luo et al. [173] use LLMs to calculate the similarities between relations and questions to retrieve related facts, formulated as

    s(r, q) = LLM(r)^⊤ LLM(q),   (12)

where q denotes the question, r denotes the relation, and LLM(·) generates the representations of q and r, respectively. Furthermore, Zhang et al. [236] propose a LLM-based path retriever to retrieve question-related relations hop-by-hop and construct several paths. The probability of each path can be calculated as

    P(p|q) = ∏_{t=1}^{|p|} s(r_t, q),   (13)

where p denotes the path and r_t denotes the relation at the t-th hop of p.
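Eqs. 12–13 can be sketched end-to-end: score each relation against the question with a dot product of embeddings, then multiply the hop scores along a path. The hash-based `embed` below is a toy stand-in for LLM(·), and the sigmoid squashing is an added assumption, since the survey leaves unspecified how the raw dot products are normalized into probabilities.

```python
import hashlib
import math

def embed(text, dim=6):
    """Toy stand-in for LLM(.) in Eq. 12: a deterministic pseudo-embedding."""
    digest = hashlib.sha256(text.encode()).digest()
    return [digest[i] / 255.0 - 0.5 for i in range(dim)]

def relation_score(r, q):
    # Eq. 12: s(r, q) = LLM(r)^T LLM(q). The sigmoid (our assumption) maps
    # the raw dot product into (0, 1) so hop scores compose like probabilities.
    dot = sum(a * b for a, b in zip(embed(r), embed(q)))
    return 1.0 / (1.0 + math.exp(-dot))

def path_score(path, q):
    # Eq. 13: P(p|q) = product over hops t of s(r_t, q)
    score = 1.0
    for r in path:
        score *= relation_score(r, q)
    return score

q = "Where was the director of Inception born?"
print(path_score(["directed_by", "born_in"], q))
```

With real LLM embeddings, the retriever would score each candidate relation at every hop and keep the highest-probability paths as evidence for the answer reasoner.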
Fig. 22. The general framework of applying LLMs for knowledge graph question answering (KGQA).

The retrieved relations and paths can be used as context knowledge to improve the performance of answer reasoners as

    P(a|q) = Σ_{p∈P} P(a|p) P(p|q),   (14)

where P denotes the retrieved paths and a denotes the answer.

5.5.2 LLMs as Answer Reasoners

Answer reasoners are designed to reason over the retrieved facts and generate answers. LLMs can be used as answer reasoners to generate answers directly. For example, as shown in Fig. 22, DEKCOR [175] concatenates the retrieved facts with questions and candidate answers as

    x = [CLS] q [SEP] Related Facts [SEP] a [SEP],   (15)

where a denotes a candidate answer. Then, it feeds them into LLMs to predict answer scores. After utilizing LLMs to generate the representation of x as the QA context, DRLK [176] proposes a Dynamic Hierarchical Reasoner to capture the interactions between the QA context and answers for answer prediction. Yan et al. [235] propose a LLM-based KGQA framework consisting of two stages: (1) retrieve related facts from KGs and (2) generate answers based on the retrieved facts. The first stage is similar to the entity/relation extractors. Given a candidate answer entity a, it extracts a series of paths p_1, ..., p_n from KGs. The second stage is a LLM-based answer reasoner, which first verbalizes the paths using the entity names and relation names in KGs, and then concatenates the question q and all paths p_1, ..., p_n to make an input sample as

    x = [CLS] q [SEP] p_1 [SEP] ··· [SEP] p_n [SEP].   (16)

To better guide LLMs to reason through KGs, OreoLM [177] proposes a Knowledge Interaction Layer (KIL) which is inserted amid LLM layers. KIL interacts with a KG reasoning module, where it discovers different reasoning paths, and the reasoning module can then reason over the paths to generate answers. GreaseLM [178] fuses the representations from LLMs and graph neural networks to effectively reason over KG facts and language context. UniKGQA [43] unifies fact retrieval and reasoning into a unified framework consisting of two modules. The first is a semantic matching module that uses a LLM to match questions with their corresponding relations semantically. The second is a matching information propagation module, which propagates the matching information along directed edges on KGs for answer reasoning. Similarly, ReLMKG [179] performs joint reasoning on a large language model and the associated knowledge graph. The question and verbalized paths are encoded by the language model, and different layers of the language model produce outputs that guide a graph neural network to perform message passing. This process utilizes the explicit knowledge contained in the structured knowledge graph for reasoning purposes. StructGPT [237] adopts a customized interface to allow large language models (e.g., ChatGPT) to reason directly on KGs to perform multi-step question answering.

6 SYNERGIZED LLMS + KGS

TABLE 4
Summary of methods that synergize KGs and LLMs.

Task                                   Method                Year
Synergized Knowledge Representation    JointGT [42]          2021
                                       KEPLER [40]           2021
                                       DRAGON [44]           2022
                                       HKLM [238]            2023
Synergized Reasoning                   LARK [45]             2023
                                       Siyuan et al. [46]    2023
                                       KSL [239]             2023
                                       StructGPT [237]       2023
                                       Think-on-graph [240]  2023

The synergy of LLMs and KGs has attracted increasing attention in recent years, marrying the merits of LLMs and KGs to mutually enhance performance in various downstream applications. For example, LLMs can be used to understand natural language, while KGs are treated as a knowledge base, which provides factual knowledge. The unification of LLMs and KGs could result in a powerful model for knowledge representation and reasoning.
In this section, we will discuss the state-of-the-art Syn-
These paths are regarded as the related facts for the can- ergized LLMs + KGs from two perspectives: 1) Synergized
didate answer a. Finally, it uses LLMs to predict whether KnowledgeRepresentation,and2)SynergizedReasoning.Rep-
the hypothesis: “a is the answer ofq” is supported by those resentative works are summarized in Table 4.
facts, which is formulated as
e[CLS] = LLM(x), (17)6.1 SynergizedKnowledgeRepresentation
s = σ (MLP(e[CLS])), (18) Text corpus and knowledge graphs both contain enormous
knowledge. However, the knowledge in text corpus is
where it encodes x using a LLM and feeds representation usually implicit and unstructured, while the knowledge
corresponding to[CLS] token for binary classification, and in KGs is explicit and structured. Synergized Knowledge
σ (·) denotes the sigmoid function. Representation aims to design a synergized model that canJOURNAL OF LATEX CLASS FILES, VOL. ??, NO. ??, MONTH 20YY 19
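The answer-reasoning formulation in Eqs. (16)-(18) can be sketched as follows. This is a hedged toy illustration, not code from any cited system: `verbalize`, `toy_encode`, and the fixed-weight linear layer are hypothetical stand-ins for real path verbalization, an LLM encoder, and a trained MLP.

```python
import math
import random

CLS, SEP = "[CLS]", "[SEP]"

def verbalize(path):
    """Turn a KG path [(head, relation, tail), ...] into plain text."""
    return " ".join(f"{h} {r} {t}." for h, r, t in path)

def build_input(question, paths):
    """Eq. (16): x = [CLS] q [SEP] p1 [SEP] ... [SEP] pn [SEP]."""
    parts = [CLS, question]
    for p in paths:
        parts += [SEP, verbalize(p)]
    parts.append(SEP)
    return " ".join(parts)

def toy_encode(text, dim=8):
    """Stand-in for e_[CLS] = LLM(x): a deterministic pseudo-embedding."""
    rng = random.Random(hash(text) % (2**32))
    return [rng.uniform(-1, 1) for _ in range(dim)]

def score_hypothesis(question, paths, weights):
    """Eqs. (17)-(18): s = sigmoid(MLP(e_[CLS])); the MLP is one linear layer here."""
    e = toy_encode(build_input(question, paths))
    logit = sum(w * v for w, v in zip(weights, e))
    return 1.0 / (1.0 + math.exp(-logit))

paths = [[("Obama", "born_in", "Honolulu")], [("Honolulu", "located_in", "USA")]]
x = build_input("Where was Obama born?", paths)
s = score_hypothesis("Where was Obama born?", paths, weights=[0.1] * 8)
```

The sigmoid output s plays the role of the support probability for the hypothesis "a is the answer of q"; a real system would replace `toy_encode` with an LLM's [CLS] embedding.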
Synergized Knowledge Representation aims to design a synergized model that can effectively represent knowledge from both LLMs and KGs. The synergized model can provide a better understanding of the knowledge from both sources, making it valuable for many downstream tasks.

Fig. 23. Synergized knowledge representation by additional KG fusion modules.

To jointly represent the knowledge, researchers propose synergized models that introduce additional KG fusion modules, which are jointly trained with LLMs. As shown in Fig. 23, ERNIE [35] proposes a textual-knowledge dual encoder architecture where a T-encoder first encodes the input sentences, then a K-encoder processes knowledge graphs, which are fused with the textual representation from the T-encoder. BERT-MK [241] employs a similar dual-encoder architecture but introduces additional information about neighboring entities in the knowledge encoder component during the pre-training of LLMs. However, some of the neighboring entities in KGs may not be relevant to the input text, resulting in extra redundancy and noise. CokeBERT [242] focuses on this issue and proposes a GNN-based module to filter out irrelevant KG entities using the input text. JAKET [243] proposes to fuse the entity information in the middle of the large language model.

KEPLER [40] presents a unified model for knowledge embedding and pre-trained language representation. In KEPLER, textual entity descriptions are encoded with an LLM as their embeddings, and the knowledge embedding and language modeling objectives are then jointly optimized. JointGT [42] proposes a graph-text joint representation learning model with three pre-training tasks to align representations of graph and text. DRAGON [44] presents a self-supervised method to pre-train a joint language-knowledge foundation model from text and KG. It takes text segments and relevant KG subgraphs as input and bidirectionally fuses information from both modalities. Then, DRAGON utilizes two self-supervised reasoning tasks, i.e., masked language modeling and KG link prediction, to optimize the model parameters. HKLM [238] introduces a unified LLM which incorporates KGs to learn representations of domain-specific knowledge.

6.2 Synergized Reasoning
To better utilize the knowledge from text corpora and knowledge graphs, Synergized Reasoning aims to design a synergized model that can effectively conduct reasoning with both LLMs and KGs.

Fig. 24. The framework of LLM-KG Fusion Reasoning.

LLM-KG Fusion Reasoning. LLM-KG Fusion Reasoning leverages two separate LLM and KG encoders to process the text and relevant KG inputs [244]. The two encoders are equally important and jointly fuse the knowledge from the two sources for reasoning. To improve the interaction between text and knowledge, KagNet [38] proposes to first encode the input KG, and then augment the input textual representation. In contrast, MHGRN [234] uses the final LLM outputs of the input text to guide the reasoning process on the KGs. Yet, both of them only design a single-direction interaction between the text and KGs. To tackle this issue, QA-GNN [131] proposes to use a GNN-based model to jointly reason over input context and KG information via message passing. Specifically, QA-GNN represents the input textual information as a special node via a pooling operation and connects this node with other entities in the KG. However, the textual inputs are only pooled into a single dense vector, limiting the information fusion performance. JointLK [245] then proposes a framework with fine-grained interaction between any tokens in the textual inputs and any KG entities through an LM-to-KG and KG-to-LM bi-directional attention mechanism. As shown in Fig. 24, pairwise dot-product scores are calculated over all textual tokens and KG entities, and the bi-directional attentive scores are computed separately. In addition, at each JointLK layer, the KGs are also dynamically pruned based on the attention scores to allow later layers to focus on more important sub-KG structures. Despite being effective, in JointLK the fusion process between the input text and KG still uses the final LLM outputs as the input text representations. GreaseLM [178] designs deep and rich interaction between the input text tokens and KG entities at each layer of the LLMs. The architecture and fusion approach is mostly similar to ERNIE [35], discussed in Section 6.1, except that GreaseLM does not use the text-only T-encoder to handle input text.

LLMs as Agents Reasoning. Instead of using two encoders to fuse the knowledge, LLMs can also be treated as agents that interact with the KGs to conduct reasoning [246], as illustrated in Fig. 25. KD-CoT [247] iteratively retrieves facts from KGs and produces faithful reasoning traces, which guide LLMs to generate answers. KSL [239] teaches LLMs to search on KGs to retrieve relevant facts and then generate answers. StructGPT [237] designs several API interfaces to allow LLMs to access the structural data and perform reasoning by traversing on KGs.
Think-on-graph [240] provides a flexible plug-and-play framework where LLM agents iteratively execute beam searches on KGs to discover the reasoning paths and generate answers. To enhance the agent abilities, AgentTuning [248] presents several instruction-tuning datasets to guide LLM agents to perform reasoning on KGs.

Fig. 25. Using LLMs as agents for reasoning on KGs.

Comparison and Discussion. LLM-KG Fusion Reasoning combines the LLM encoder and KG encoder to represent knowledge in a unified manner. It then employs a synergized reasoning module to jointly reason over the results. This framework allows for different encoders and reasoning modules, which are trained end-to-end to effectively utilize the knowledge and reasoning capabilities of LLMs and KGs. However, these additional modules may introduce extra parameters and computational costs while lacking interpretability. LLMs as Agents for KG reasoning provides a flexible framework for reasoning on KGs without additional training cost, which can be generalized to different LLMs and KGs. Meanwhile, the reasoning process is interpretable, which can be used to explain the results. Nevertheless, defining the actions and policies for LLM agents is also challenging. The synergy of LLMs and KGs is still an ongoing research topic, with the potential to yield more powerful frameworks in the future.

7 FUTURE DIRECTIONS AND MILESTONES
In this section, we discuss the future directions and several milestones in the research area of unifying KGs and LLMs.

7.1 KGs for Hallucination Detection in LLMs
The hallucination problem in LLMs, which generates factually incorrect content, significantly hinders the reliability of LLMs. As discussed in Section 4, existing studies try to utilize KGs to obtain more reliable LLMs through pre-training or KG-enhanced inference. Despite these efforts, the issue of hallucination may continue to persist in the realm of LLMs for the foreseeable future. Consequently, in order to gain the public's trust and enable broader applications, it is imperative to detect and assess instances of hallucination within LLMs and other forms of AI-generated content (AIGC). Existing methods strive to detect hallucination by training a neural classifier on a small set of documents [249], which is neither robust nor powerful enough to handle ever-growing LLMs. Recently, researchers have tried to use KGs as an external source to validate LLMs [250]. Further studies combine LLMs and KGs to achieve a generalized fact-checking model that can detect hallucinations across domains [251]. Therefore, it opens a new door to utilizing KGs for hallucination detection.

7.2 KGs for Editing Knowledge in LLMs
Although LLMs are capable of storing massive real-world knowledge, they cannot quickly update their internal knowledge as real-world situations change. There are some research efforts proposed for editing knowledge in LLMs [252] without re-training the whole model. Yet, such solutions still suffer from poor performance or computational overhead [253]. Existing studies [254] also reveal that editing a single fact can cause a ripple effect on other related knowledge. Therefore, it is necessary to develop a more efficient and effective method to edit knowledge in LLMs. Recently, researchers have tried to leverage KGs to edit knowledge in LLMs efficiently.

7.3 KGs for Black-box LLMs Knowledge Injection
Although pre-training and knowledge editing could update LLMs to catch up with the latest knowledge, they still need to access the internal structures and parameters of LLMs. However, many state-of-the-art LLMs (e.g., ChatGPT) only provide APIs for users and developers to access, making them black-box to the public. Consequently, it is impossible to follow the conventional KG injection approaches described in [38], [244] that change the LLM structure by adding additional knowledge fusion modules. Converting various types of knowledge into different text prompts seems to be a feasible solution. However, it is unclear whether these prompts can generalize well to new LLMs. Moreover, the prompt-based approach is limited by the input token length of LLMs. Therefore, how to enable effective knowledge injection for black-box LLMs is still an open question for us to explore [255], [256].

7.4 Multi-Modal LLMs for KGs
Current knowledge graphs typically rely on text and graph structure to handle KG-related applications. However, real-world knowledge graphs are often constructed from data of diverse modalities [99], [257], [258]. Therefore, effectively leveraging representations from multiple modalities would be a significant challenge for future research in KGs [259]. One potential solution is to develop methods that can accurately encode and align entities across different modalities. Recently, with the development of multi-modal LLMs [98], [260], leveraging LLMs for modality alignment holds promise in this regard. But bridging the gap between multi-modal LLMs and KG structure remains a crucial challenge in this field, demanding further investigation and advancements.

7.5 LLMs for Understanding KG Structure
Conventional LLMs trained on plain text data are not designed to understand structured data like knowledge graphs. Thus, LLMs might not fully grasp or understand the information conveyed by the KG structure. A straightforward way is to linearize the structured data into a sentence that LLMs can understand.
However, the scale of the KGs makes it impossible to linearize whole KGs as input. Moreover, the linearization process may lose some underlying information in KGs. Therefore, it is necessary to develop LLMs that can directly understand the KG structure and reason over it [237].

7.6 Synergized LLMs and KGs for Bidirectional Reasoning
KGs and LLMs are two complementary technologies that can synergize each other. However, the synergy of LLMs and KGs is less explored by existing researchers. A desired synergy of LLMs and KGs would involve leveraging the strengths of both technologies to overcome their individual limitations. LLMs, such as ChatGPT, excel at generating human-like text and understanding natural language, while KGs are structured databases that capture and represent knowledge in a structured manner. By combining their capabilities, we can create a powerful system that benefits from the contextual understanding of LLMs and the structured knowledge representation of KGs. To better unify LLMs and KGs, many advanced techniques need to be incorporated, such as multi-modal learning [261], graph neural networks [262], and continuous learning [263]. Last, the synergy of LLMs and KGs can be applied to many real-world applications, such as search engines [100], recommender systems [10], [89], and drug discovery.

With a given application problem, we can apply a KG to perform a knowledge-driven search for potential goals and unseen data, and simultaneously start with LLMs to perform a data/text-driven inference to see what new data/goal items can be derived. When the knowledge-based search is combined with data/text-driven inference, they can mutually validate each other, resulting in efficient and effective solutions powered by dual driving wheels. Therefore, we can anticipate increasing attention to unlocking the potential of integrating KGs and LLMs for diverse downstream applications with both generative and reasoning capabilities in the near future.

8 CONCLUSION
Unifying large language models (LLMs) and knowledge graphs (KGs) is an active research direction that has attracted increasing attention from both academia and industry. In this article, we provide a thorough overview of the recent research in this field. We first introduce different manners that integrate KGs to enhance LLMs. Then, we introduce existing methods that apply LLMs for KGs and establish a taxonomy based on the varieties of KG tasks. Finally, we discuss the challenges and future directions in this field.

We envision that there will be multiple stages (milestones) in the roadmap of unifying KGs and LLMs, as shown in Fig. 26. In particular, we anticipate increasing research on three stages: Stage 1: KG-enhanced LLMs and LLM-augmented KGs; Stage 2: Synergized LLMs + KGs; and Stage 3: Graph Structure Understanding, Multi-modality, and Knowledge Updating. We hope that this article will provide a guideline to advance future research.

Fig. 26. The milestones of unifying KGs and LLMs.

ACKNOWLEDGMENTS
This research was supported by the Australian Research Council (ARC) under grants FT210100097 and DP240101547 and the National Natural Science Foundation of China (NSFC) under grant 62120106008.

REFERENCES
[1] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "Bert: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[2] Y. Liu, M. Ott, N. Goyal, J. Du, M. Joshi, D. Chen, O. Levy, M. Lewis, L. Zettlemoyer, and V. Stoyanov, "Roberta: A robustly optimized bert pretraining approach," arXiv preprint arXiv:1907.11692, 2019.
[3] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, "Exploring the limits of transfer learning with a unified text-to-text transformer," The Journal of Machine Learning Research, vol. 21, no. 1, pp. 5485–5551, 2020.
[4] D. Su, Y. Xu, G. I. Winata, P. Xu, H. Kim, Z. Liu, and P. Fung, "Generalizing question answering system with pre-trained language model fine-tuning," in Proceedings of the 2nd Workshop on Machine Reading for Question Answering, 2019, pp. 203–211.
[5] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer, "Bart: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension," in ACL, 2020, pp. 7871–7880.
[6] J. Li, T. Tang, W. X. Zhao, and J.-R. Wen, "Pretrained language models for text generation: A survey," arXiv preprint arXiv:2105.10311, 2021.
[7] J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler et al., "Emergent abilities of large language models," Transactions on Machine Learning Research.
[8] K. Malinka, M. Perešíni, A. Firc, O. Hujňák, and F. Januš, "On the educational impact of chatgpt: Is artificial intelligence ready to obtain a university degree?" arXiv preprint arXiv:2303.11146, 2023.
[9] Z. Li, C. Wang, Z. Liu, H. Wang, S. Wang, and C. Gao, "Cctest: Testing and repairing code completion systems," ICSE, 2023.
[10] J. Liu, C. Liu, R. Lv, K. Zhou, and Y. Zhang, "Is chatgpt a good recommender? A preliminary study," arXiv preprint arXiv:2304.10149, 2023.
[11] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong et al., "A survey of large language models," arXiv preprint arXiv:2303.18223, 2023.
[12] X. Qiu, T. Sun, Y. Xu, Y. Shao, N. Dai, and X. Huang, "Pre-trained models for natural language processing: A survey," Science China Technological Sciences, vol. 63, no. 10, pp. 1872–1897, 2020.
[13] J. Yang, H. Jin, R. Tang, X. Han, Q. Feng, H. Jiang, B. Yin, and X. Hu, "Harnessing the power of llms in practice: A survey on chatgpt and beyond," arXiv preprint arXiv:2304.13712, 2023.
[14] F. Petroni, T. Rocktäschel, S. Riedel, P. Lewis, A. Bakhtin, Y. Wu, and A. Miller, "Language models as knowledge bases?" in EMNLP-IJCNLP, 2019, pp. 2463–2473.
[15] Z. Ji, N. Lee, R. Frieske, T. Yu, D. Su, Y. Xu, E. Ishii, Y. J. Bang, A. Madotto, and P. Fung, "Survey of hallucination in natural language generation," ACM Computing Surveys, vol. 55, no. 12, pp. 1–38, 2023.
[16] H. Zhang, H. Song, S. Li, M. Zhou, and D. Song, "A survey of controllable text generation using transformer-based pre-trained language models," arXiv preprint arXiv:2201.05337, 2022.
[17] M. Danilevsky, K. Qian, R. Aharonov, Y. Katsis, B. Kawas, and P. Sen, "A survey of the state of explainable ai for natural language processing," arXiv preprint arXiv:2010.00711, 2020.
[18] J. Wang, X. Hu, W. Hou, H. Chen, R. Zheng, Y. Wang, L. Yang, H. Huang, W. Ye, X. Geng et al., "On the robustness of chatgpt: An adversarial and out-of-distribution perspective," arXiv preprint arXiv:2302.12095, 2023.
[19] S. Ji, S. Pan, E. Cambria, P. Marttinen, and S. Y. Philip, "A survey on knowledge graphs: Representation, acquisition, and applications," IEEE TNNLS, vol. 33, no. 2, pp. 494–514, 2021.
[20] D. Vrandečić and M. Krötzsch, "Wikidata: a free collaborative knowledge base," Communications of the ACM, vol. 57, no. 10, pp. 78–85, 2014.
[21] S. Hu, L. Zou, and X. Zhang, "A state-transition framework to answer complex questions over knowledge base," in EMNLP, 2018, pp. 2098–2108.
[22] J. Zhang, B. Chen, L. Zhang, X. Ke, and H. Ding, "Neural, symbolic and neural-symbolic reasoning on knowledge graphs," AI Open, vol. 2, pp. 14–35, 2021.
[23] B. Abu-Salih, "Domain-specific knowledge graphs: A survey," Journal of Network and Computer Applications, vol. 185, p. 103076, 2021.
[24] T. Mitchell, W. Cohen, E. Hruschka, P. Talukdar, B. Yang, J. Betteridge, A. Carlson, B. Dalvi, M. Gardner, B. Kisiel, K. Jayant, L. Ni, M. Kathryn, M. Thahir, N. Ndapandula, P. Emmanouil, R. Alan, S. Mehdi, S. Burr, W. Derry, G. Abhinav, C. Xi, S. Abulhair, and W. Joel, "Never-ending learning," Communications of the ACM, vol. 61, no. 5, pp. 103–115, 2018.
[25] L. Zhong, J. Wu, Q. Li, H. Peng, and X. Wu, "A comprehensive survey on automatic knowledge graph construction," arXiv preprint arXiv:2302.05019, 2023.
[26] L. Yao, C. Mao, and Y. Luo, "Kg-bert: Bert for knowledge graph completion," arXiv preprint arXiv:1909.03193, 2019.
[27] L. Luo, Y.-F. Li, G. Haffari, and S. Pan, "Normalizing flow-based neural process for few-shot knowledge graph completion," SIGIR, 2023.
[28] Y. Bang, S. Cahyawijaya, N. Lee, W. Dai, D. Su, B. Wilie, H. Lovenia, Z. Ji, T. Yu, W. Chung et al., "A multitask, multilingual, multimodal evaluation of chatgpt on reasoning, hallucination, and interactivity," arXiv preprint arXiv:2302.04023, 2023.
[29] X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, and D. Zhou, "Self-consistency improves chain of thought reasoning in language models," arXiv preprint arXiv:2203.11171, 2022.
[30] O. Golovneva, M. Chen, S. Poff, M. Corredor, L. Zettlemoyer, M. Fazel-Zarandi, and A. Celikyilmaz, "Roscoe: A suite of metrics for scoring step-by-step reasoning," ICLR, 2023.
[31] F. M. Suchanek, G. Kasneci, and G. Weikum, "Yago: a core of semantic knowledge," in WWW, 2007, pp. 697–706.
[32] A. Carlson, J. Betteridge, B. Kisiel, B. Settles, E. Hruschka, and T. Mitchell, "Toward an architecture for never-ending language learning," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 24, no. 1, 2010, pp. 1306–1313.
[33] A. Bordes, N. Usunier, A. Garcia-Duran, J. Weston, and O. Yakhnenko, "Translating embeddings for modeling multi-relational data," NeurIPS, vol. 26, 2013.
[34] G. Wan, S. Pan, C. Gong, C. Zhou, and G. Haffari, "Reasoning like human: Hierarchical reinforcement learning for knowledge graph reasoning," in AAAI, 2021, pp. 1926–1932.
[35] Z. Zhang, X. Han, Z. Liu, X. Jiang, M. Sun, and Q. Liu, "ERNIE: Enhanced language representation with informative entities," in ACL, 2019, pp. 1441–1451.
[36] W. Liu, P. Zhou, Z. Zhao, Z. Wang, Q. Ju, H. Deng, and P. Wang, "K-BERT: enabling language representation with knowledge graph," in AAAI, 2020, pp. 2901–2908.
[37] Y. Liu, Y. Wan, L. He, H. Peng, and P. S. Yu, "KG-BART: knowledge graph-augmented BART for generative commonsense reasoning," in AAAI, 2021, pp. 6418–6425.
[38] B. Y. Lin, X. Chen, J. Chen, and X. Ren, "KagNet: Knowledge-aware graph networks for commonsense reasoning," in EMNLP-IJCNLP, 2019, pp. 2829–2839.
[39] D. Dai, L. Dong, Y. Hao, Z. Sui, B. Chang, and F. Wei, "Knowledge neurons in pretrained transformers," arXiv preprint arXiv:2104.08696, 2021.
[40] X. Wang, T. Gao, Z. Zhu, Z. Zhang, Z. Liu, J. Li, and J. Tang, "KEPLER: A unified model for knowledge embedding and pre-trained language representation," Transactions of the Association for Computational Linguistics, vol. 9, pp. 176–194, 2021.
[41] I. Melnyk, P. Dognin, and P. Das, "Grapher: Multi-stage knowledge graph construction using pretrained language models," in NeurIPS 2021 Workshop on Deep Generative Models and Downstream Applications, 2021.
[42] P. Ke, H. Ji, Y. Ran, X. Cui, L. Wang, L. Song, X. Zhu, and M. Huang, "JointGT: Graph-text joint representation learning for text generation from knowledge graphs," in ACL Findings, 2021, pp. 2526–2538.
[43] J. Jiang, K. Zhou, W. X. Zhao, and J.-R. Wen, "Unikgqa: Unified retrieval and reasoning for solving multi-hop question answering over knowledge graph," in ICLR, 2023.
[44] M. Yasunaga, A. Bosselut, H. Ren, X. Zhang, C. D. Manning, P. S. Liang, and J. Leskovec, "Deep bidirectional language-knowledge graph pretraining," NeurIPS, vol. 35, pp. 37309–37323, 2022.
[45] N. Choudhary and C. K. Reddy, "Complex logical reasoning over knowledge graphs using large language models," arXiv preprint arXiv:2305.01157, 2023.
[46] S. Wang, Z. Wei, J. Xu, and Z. Fan, "Unifying structure reasoning and language model pre-training for complex reasoning," arXiv preprint arXiv:2301.08913, 2023.
[47] C. Zhen, Y. Shang, X. Liu, Y. Li, Y. Chen, and D. Zhang, "A survey on knowledge-enhanced pre-trained language models," arXiv preprint arXiv:2212.13428, 2022.
[48] X. Wei, S. Wang, D. Zhang, P. Bhatia, and A. Arnold, "Knowledge enhanced pretrained language models: A comprehensive survey," arXiv preprint arXiv:2110.08455, 2021.
[49] D. Yin, L. Dong, H. Cheng, X. Liu, K.-W. Chang, F. Wei, and J. Gao, "A survey of knowledge-intensive nlp with pre-trained language models," arXiv preprint arXiv:2202.08772, 2022.
[50] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," NeurIPS, vol. 30, 2017.
[51] Z. Lan, M. Chen, S. Goodman, K. Gimpel, P. Sharma, and R. Soricut, "Albert: A lite bert for self-supervised learning of language representations," in ICLR, 2019.
[52] K. Clark, M.-T. Luong, Q. V. Le, and C. D. Manning, "Electra: Pre-training text encoders as discriminators rather than generators," arXiv preprint arXiv:2003.10555, 2020.
[53] K. Hakala and S. Pyysalo, "Biomedical named entity recognition with multilingual bert," in Proceedings of the 5th Workshop on BioNLP Open Shared Tasks, 2019, pp. 56–61.
[54] Y. Tay, M. Dehghani, V. Q. Tran, X. Garcia, J. Wei, X. Wang, H. W. Chung, D. Bahri, T. Schuster, S. Zheng et al., "Ul2: Unifying language learning paradigms," in ICLR, 2022.
[55] V. Sanh, A. Webson, C. Raffel, S. Bach, L. Sutawika, Z. Alyafeai, A. Chaffin, A. Stiegler, A. Raja, M. Dey et al., "Multitask prompted training enables zero-shot task generalization," in ICLR, 2022.
[56] B. Zoph, I. Bello, S. Kumar, N. Du, Y. Huang, J. Dean, N. Shazeer, and W. Fedus, "St-moe: Designing stable and transferable sparse expert models," arXiv preprint arXiv:2202.08906, 2022.
[57] A. Zeng, X. Liu, Z. Du, Z. Wang, H. Lai, M. Ding, Z. Yang, Y. Xu, W. Zheng, X. Xia, W. L. Tam, Z. Ma, Y. Xue, J. Zhai, W. Chen, Z. Liu, P. Zhang, Y. Dong, and J. Tang, "GLM-130b: An open bilingual pre-trained model," in ICLR, 2023.
[58] L. Xue, N. Constant, A. Roberts, M. Kale, R. Al-Rfou, A. Siddhant, A. Barua, and C. Raffel, "mt5: A massively multilingual pre-trained text-to-text transformer," in NAACL, 2021, pp. 483–498.
[59] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., "Language models are few-shot learners," Advances in Neural Information Processing Systems, vol. 33, pp. 1877–1901, 2020.
[60] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray et al., "Training language models to follow instructions with human feedback," NeurIPS, vol. 35, pp. 27730–27744, 2022.
[61] H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar et al., "Llama: Open and efficient foundation language models," arXiv preprint arXiv:2302.13971, 2023.
[62] E. Saravia, "Prompt Engineering Guide," https://github.com/dair-ai/Prompt-Engineering-Guide, 2022, accessed: 2022-12.
[63] J. Wei, X. Wang, D. Schuurmans, M. Bosma, F. Xia, E. H. Chi, Q. V. Le, D. Zhou et al., "Chain-of-thought prompting elicits reasoning in large language models," in NeurIPS.
[64] S. Li, Y. Gao, H. Jiang, Q. Yin, Z. Li, X. Yan, C. Zhang, and B. Yin, "Graph reasoning for question answering with triplet retrieval," in ACL, 2023.
[65] Y. Wen, Z. Wang, and J. Sun, "Mindmap: Knowledge graph prompting sparks graph of thoughts in large language models," arXiv preprint arXiv:2308.09729, 2023.
[66] K. Bollacker, C. Evans, P. Paritosh, T. Sturge, and J. Taylor, "Freebase: A collaboratively created graph database for structuring human knowledge," in SIGMOD, 2008, pp. 1247–1250.
[67] S. Auer, C. Bizer, G. Kobilarov, J. Lehmann, R. Cyganiak, and Z. Ives, "Dbpedia: A nucleus for a web of open data," in The Semantic Web: 6th International Semantic Web Conference. Springer, 2007, pp. 722–735.
[68] B. Xu, Y. Xu, J. Liang, C. Xie, B. Liang, W. Cui, and Y. Xiao, "Cn-dbpedia: A never-ending chinese knowledge extraction system," in 30th International Conference on Industrial Engineering and Other Applications of Applied Intelligent Systems. Springer, 2017, pp. 428–438.
[69] P. Hai-Nyzhnyk, "Vikidia as a universal multilingual online encyclopedia for children," The Encyclopedia Herald of Ukraine, vol. 14, 2022.
[70] F. Ilievski, P. Szekely, and B. Zhang, "Cskg: The commonsense knowledge graph," Extended Semantic Web Conference (ESWC), 2021.
[71] R. Speer, J. Chin, and C. Havasi, "Conceptnet 5.5: An open multilingual graph of general knowledge," in Proceedings of the AAAI conference on artificial intelligence, vol. 31, no. 1, 2017.
[72] H. Ji, P. Ke, S. Huang, F. Wei, X. Zhu, and M. Huang, "Language generation with multi-hop reasoning on commonsense knowledge graph," in EMNLP, 2020, pp. 725–736.
[73] J. D. Hwang, C. Bhagavatula, R. Le Bras, J. Da, K. Sakaguchi, A. Bosselut, and Y. Choi, "(comet-) atomic 2020: On symbolic and neural commonsense knowledge graphs," in AAAI, vol. 35, no. 7, 2021, pp. 6384–6392.
[74] H. Zhang, X. Liu, H. Pan, Y. Song, and C. W.-K. Leung, "Aser: A large-scale eventuality knowledge graph," in Proceedings of the web conference 2020, 2020, pp. 201–211.
[75] H. Zhang, D. Khashabi, Y. Song, and D. Roth, "Transomcs: from linguistic graphs to commonsense knowledge," in IJCAI, 2021, pp. 4004–4010.
[76] Z. Li, X. Ding, T. Liu, J. E. Hu, and B. Van Durme, "Guided generation of cause and effect," in IJCAI, 2020.
[77] O. Bodenreider, "The unified medical language system (umls): integrating biomedical terminology," Nucleic acids research, vol. 32, no. suppl 1, pp. D267–D270, 2004.
[78] Y. Liu, Q. Zeng, J. Ordieres Meré, and H. Yang, "Anticipating stock market of the renowned companies: a knowledge graph approach," Complexity, vol. 2019, 2019.
[79] Y. Zhu, W. Zhou, Y. Xu, J. Liu, Y. Tan et al., "Intelligent learning for knowledge graph towards geological data," Scientific Programming, vol. 2017, 2017.
[80] W. Choi and H. Lee, "Inference of biomedical relations among chemicals, genes, diseases, and symptoms using knowledge representation learning," IEEE Access, vol. 7, pp. 179373–179384, 2019.
[81] F. Farazi, M. Salamanca, S. Mosbach, J. Akroyd, A. Eibeck, L. K. Aditya, A. Chadzynski, K. Pan, X. Zhou, S. Zhang et al., "Knowledge graph approach to combustion chemistry and interoperability," ACS omega, vol. 5, no. 29, pp. 18342–18348, 2020.
[82] X. Wu, T. Jiang, Y. Zhu, and C. Bu, "Knowledge graph for china's genealogy," IEEE TKDE, vol. 35, no. 1, pp. 634–646, 2023.
[83] X. Zhu, Z. Li, X. Wang, X. Jiang, P. Sun, X. Wang, Y. Xiao, and N. J. Yuan, "Multi-modal knowledge graph construction and application: A survey," IEEE TKDE, 2022.
[84] S. Ferrada, B. Bustos, and A. Hogan, "Imgpedia: a linked dataset with content-based analysis of wikimedia images," in The Semantic Web–ISWC 2017. Springer, 2017, pp. 84–93.
[85] Y. Liu, H. Li, A. Garcia-Duran, M. Niepert, D. Onoro-Rubio, and D. S. Rosenblum, "Mmkg: multi-modal knowledge graphs," in The Semantic Web: 16th International Conference, ESWC 2019, Portorož, Slovenia, June 2–6, 2019, Proceedings 16. Springer, 2019, pp. 459–474.
[86] M. Wang, H. Wang, G. Qi, and Q. Zheng, "Richpedia: a large-scale, comprehensive multi-modal knowledge graph," Big Data Research, vol. 22, p. 100159, 2020.
[87] B. Shi, L. Ji, P. Lu, Z. Niu, and N. Duan, "Knowledge aware semantic concept expansion for image-text matching," in IJCAI, vol. 1, 2019, p. 2.
[88] S. Shah, A. Mishra, N. Yadati, and P. P. Talukdar, "Kvqa: Knowledge-aware visual question answering," in AAAI, vol. 33, no. 01, 2019, pp. 8876–8884.
[89] R. Sun, X. Cao, Y. Zhao, J. Wan, K. Zhou, F. Zhang, Z. Wang, and K. Zheng, "Multi-modal knowledge graphs for recommender systems," in CIKM, 2020, pp. 1405–1414.
[90] S. Deng, C. Wang, Z. Li, N. Zhang, Z. Dai, H. Chen, F. Xiong, M. Yan, Q. Chen, M. Chen, J. Chen, J. Z. Pan, B. Hooi, and H. Chen, "Construction and applications of billion-scale pre-trained multimodal business knowledge graph," in ICDE, 2023.
[91] C. Rosset, C. Xiong, M. Phan, X. Song, P. Bennett, and S. Tiwary, "Knowledge-aware language model pretraining," arXiv preprint arXiv:2007.00655, 2020.
[92] P. Lewis, E. Perez, A. Piktus, F. Petroni, V. Karpukhin, N. Goyal, H. Küttler, M. Lewis, W.-t. Yih, T. Rocktäschel, S. Riedel, and D. Kiela, "Retrieval-augmented generation for knowledge-intensive nlp tasks," in NeurIPS, vol. 33, 2020, pp. 9459–9474.
[93] Y. Zhu, X. Wang, J. Chen, S. Qiao, Y. Ou, Y. Yao, S. Deng, H. Chen, and N. Zhang, "Llms for knowledge graph construction and reasoning: Recent capabilities and future opportunities," arXiv preprint arXiv:2305.13168, 2023.
[94] Z. Zhang, X. Liu, Y. Zhang, Q. Su, X. Sun, and B. He, "Pretrain-kge: learning knowledge representation from pretrained language models," in EMNLP Finding, 2020, pp. 259–266.
[95] A. Kumar, A. Pandey, R. Gadia, and M. Mishra, "Building knowledge graph using pre-trained language model for learning entity-aware relationships," in 2020 IEEE International Conference on Computing, Power and Communication Technologies (GUCON). IEEE, 2020, pp. 310–315.
[96] X. Xie, N. Zhang, Z. Li, S. Deng, H. Chen, F. Xiong, M. Chen, and H. Chen, "From discrimination to generation: Knowledge graph completion with generative transformer," in WWW, 2022, pp. 162–165.
[97] Z. Chen, C. Xu, F. Su, Z. Huang, and Y. Dou, "Incorporating structured sentences with time-enhanced bert for fully-inductive temporal relation prediction," SIGIR, 2023.
[98] D. Zhu, J. Chen, X. Shen, X. Li, and M. Elhoseiny, "Minigpt-4: Enhancing vision-language understanding with advanced large language models," arXiv preprint arXiv:2304.10592, 2023.
[99] M. Warren, D. A. Shamma, and P. J. Hayes, "Knowledge engineering with image data in real-world settings," in AAAI, ser. CEUR Workshop Proceedings, vol. 2846, 2021.
[100] R. Thoppilan, D. De Freitas, J. Hall, N. Shazeer, A. Kulshreshtha, H.-T. Cheng, A. Jin, T. Bos, L. Baker, Y. Du et al., "Lamda: Language models for dialog applications," arXiv preprint arXiv:2201.08239, 2022.
[101] Y. Sun, S. Wang, S. Feng, S. Ding, C. Pang, J. Shang, J. Liu, X. Chen, Y. Zhao, Y. Lu et al., "Ernie 3.0: Large-scale knowledge enhanced pre-training for language understanding and generation," arXiv preprint arXiv:2107.02137, 2021.
[102] T. Shen, Y. Mao, P. He, G. Long, A. Trischler, and W. Chen, "Exploiting structured knowledge in text via graph-guided representation learning," in EMNLP, 2020, pp. 8980–8994.
[103] D. Zhang, Z. Yuan, Y. Liu, F. Zhuang, H. Chen, and H. Xiong, "E-bert: A phrase and product knowledge enhanced language model for e-commerce," arXiv preprint arXiv:2009.02835, 2020.
[104] S. Li, X. Li, L. Shang, C. Sun, B. Liu, Z. Ji, X. Jiang, and Q. Liu, "Pre-training language models with deterministic factual knowledge," in EMNLP, 2022, pp. 11118–11131.
[105] M. Kang, J. Baek, and S. J. Hwang, "Kala: Knowledge-augmented language model adaptation," in NAACL, 2022, pp. 5144–5167.
[106] W. Xiong, J. Du, W. Y. Wang, and V. Stoyanov, "Pretrained encyclopedia: Weakly supervised knowledge-pretrained language model," in ICLR, 2020.
[107] T. Sun, Y. Shao, X. Qiu, Q. Guo, Y. Hu, X. Huang, and Z. Zhang, "CoLAKE: Contextualized language and knowledge embedding," in Proceedings of the 28th International Conference on Computational Linguistics, 2020, pp. 3660–3670.
[108] T. Zhang, C. Wang, N. Hu, M. Qiu, C. Tang, X. He, and J. Huang, "DKPLM: decomposable knowledge-enhanced pre-trained language model for natural language understanding," in AAAI, 2022, pp. 11703–11711.
[109] J. Wang, W. Huang, M. Qiu, Q. Shi, H. Wang, X. Li, and M. Gao, "Knowledge prompting in pre-trained language model for natural language understanding," in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022, pp. 3164–3177.
[110] H. Ye, N. Zhang, S. Deng, X. Chen, H. Chen, F. Xiong, X. Chen, and H. Chen, "Ontology-enhanced prompt-tuning for few-shot learning," in Proceedings of the ACM Web Conference 2022, 2022, pp. 778–787.
[111] H. Luo, Z. Tang, S. Peng, Y. Guo, W. Zhang, C. Ma, G. Dong, M. Song, W. Lin et al., "Chatkbqa: A generate-then-retrieve framework for knowledge base question answering with fine-tuned large language models," arXiv preprint arXiv:2310.08975, 2023.
[112] L. Luo, Y.-F. Li, G. Haffari, and S. Pan, "Reasoning on graphs: Faithful and interpretable large language model reasoning," arXiv preprint arXiv:2310.01061, 2023.
[113] R. Logan, N. F. Liu, M. E. Peters, M. Gardner, and S. Singh, "Barack's wife hillary: Using knowledge graphs for fact-aware language modeling," in ACL, 2019, pp. 5962–5971.
[114] K. Guu, K. Lee, Z. Tung, P. Pasupat, and M.-W. Chang, "Realm: Retrieval-augmented language model pre-training," in ICML, 2020.
[115] Y. Wu, Y. Zhao, B. Hu, P. Minervini, P. Stenetorp, and S. Riedel, "An efficient memory-augmented transformer for knowledge-intensive NLP tasks," in EMNLP, 2022, pp. 5184–5196.
[116] L. Luo, J. Ju, B. Xiong, Y.-F. Li, G. Haffari, and S. Pan, "Chatrule: Mining logical rules with large language models for knowledge graph reasoning," arXiv preprint arXiv:2309.01538, 2023.
[117] J. Wang, Q. Sun, N. Chen, X. Li, and M. Gao, "Boosting language models reasoning with chain-of-knowledge prompting," arXiv preprint arXiv:2306.06427, 2023.
[118] Z. Jiang, F. F. Xu, J. Araki, and G. Neubig, "How can we know what language models know?" Transactions of the Association for Computational Linguistics, vol. 8, pp. 423–438, 2020.
[119] T. Shin, Y. Razeghi, R. L. Logan IV, E. Wallace, and S. Singh, "Autoprompt: Eliciting knowledge from language models with automatically generated prompts," arXiv preprint arXiv:2010.15980, 2020.
[120] Z. Meng, F. Liu, E. Shareghi, Y. Su, C. Collins, and N. Collier, "Rewire-then-probe: A contrastive recipe for probing biomedical knowledge of pre-trained language models," arXiv preprint arXiv:2110.08173, 2021.
[121] L. Luo, T.-T. Vu, D. Phung, and G. Haffari, "Systematic assessment of factual knowledge in large language models," in EMNLP, 2023.
[122] V. Swamy, A. Romanou, and M. Jaggi, "Interpreting language models through knowledge graph extraction," arXiv preprint arXiv:2111.08546, 2021.
[123] S. Li, X. Li, L. Shang, Z. Dong, C. Sun, B. Liu, Z. Ji, X. Jiang, and Q. Liu, "How pre-trained language models capture factual knowledge? a causal-inspired analysis," arXiv preprint arXiv:2203.16747, 2022.
[124] H. Tian, C. Gao, X. Xiao, H. Liu, B. He, H. Wu, H. Wang, and F. Wu, "SKEP: Sentiment knowledge enhanced pre-training for sentiment analysis," in ACL, 2020, pp. 4067–4076.
[125] W. Yu, C. Zhu, Y. Fang, D. Yu, S. Wang, Y. Xu, M. Zeng, and M. Jiang, "Dict-BERT: Enhancing language model pre-training with dictionary," in ACL, 2022, pp. 1907–1918.
[126] T. McCoy, E. Pavlick, and T. Linzen, "Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference," in ACL, 2019, pp. 3428–3448.
[127] D. Wilmot and F. Keller, "Memory and knowledge augmented language models for inferring salience in long-form stories," in EMNLP, 2021, pp. 851–865.
[128] L. Adolphs, S. Dhuliawala, and T. Hofmann, "How to query language models?" arXiv preprint arXiv:2108.01928, 2021.
[129] M. Sung, J. Lee, S. Yi, M. Jeon, S. Kim, and J. Kang, "Can language models be biomedical knowledge bases?" in EMNLP, 2021, pp. 4723–4734.
[130] A. Mallen, A. Asai, V. Zhong, R. Das, H. Hajishirzi, and D. Khashabi, "When not to trust language models: Investigating effectiveness and limitations of parametric and non-parametric memories," arXiv preprint arXiv:2212.10511, 2022.
[131] M. Yasunaga, H. Ren, A. Bosselut, P. Liang, and J. Leskovec, "QA-GNN: Reasoning with language models and knowledge graphs for question answering," in NAACL, 2021, pp. 535–546.
[132] M. Nayyeri, Z. Wang, M. Akter, M. M. Alam, M. R. A. H. Rony, J. Lehmann, S. Staab et al., "Integrating knowledge graph embedding and pretrained language models in hypercomplex spaces," arXiv preprint arXiv:2208.02743, 2022.
[133] N. Huang, Y. R. Deshpande, Y. Liu, H. Alberts, K. Cho, C. Vania, and I. Calixto, "Endowing language models with multimodal knowledge graph representations," arXiv preprint arXiv:2206.13163, 2022.
[134] M. M. Alam, M. R. A. H. Rony, M. Nayyeri, K. Mohiuddin, M. M. Akter, S. Vahdati, and J. Lehmann, "Language model guided knowledge graph embeddings," IEEE Access, vol. 10, pp. 76008–76020, 2022.
[135] X. Wang, Q. He, J. Liang, and Y. Xiao, "Language models as knowledge embeddings," arXiv preprint arXiv:2206.12617, 2022.
[136] N. Zhang, X. Xie, X. Chen, S. Deng, C. Tan, F. Huang, X. Cheng, and H. Chen, "Reasoning through memorization: Nearest neighbor knowledge graph embeddings," arXiv preprint arXiv:2201.05575, 2022.
[137] X. Xie, Z. Li, X. Wang, Y. Zhu, N. Zhang, J. Zhang, S. Cheng, B. Tian, S. Deng, F. Xiong, and H. Chen, "Lambdakg: A library for pre-trained language model-based knowledge graph embeddings," 2022.
[138] B. Kim, T. Hong, Y. Ko, and J. Seo, "Multi-task learning for knowledge graph completion with pre-trained language models," in COLING, 2020, pp. 1737–1743.
[139] X. Lv, Y. Lin, Y. Cao, L. Hou, J. Li, Z. Liu, P. Li, and J. Zhou, "Do pre-trained models benefit knowledge graph completion? A reliable evaluation and a reasonable approach," in ACL, 2022, pp. 3570–3581.
[140] J. Shen, C. Wang, L. Gong, and D. Song, "Joint language semantic and structure embedding for knowledge graph completion," in COLING, 2022, pp. 1965–1978.
[141] B. Choi, D. Jang, and Y. Ko, "MEM-KGC: masked entity model for knowledge graph completion with pre-trained language model," IEEE Access, vol. 9, pp. 132025–132032, 2021.
[142] B. Choi and Y. Ko, "Knowledge graph extension with a pre-trained language model via unified learning method," Knowl. Based Syst., vol. 262, p. 110245, 2023.
[143] B. Wang, T. Shen, G. Long, T. Zhou, Y. Wang, and Y. Chang, "Structure-augmented text representation learning for efficient knowledge graph completion," in WWW, 2021, pp. 1737–1748.
[144] L. Wang, W. Zhao, Z. Wei, and J. Liu, "Simkgc: Simple contrastive knowledge graph completion with pre-trained language models," in ACL, 2022, pp. 4281–4294.
[145] D. Li, M. Yi, and Y. He, "Lp-bert: Multi-task pre-training knowledge graph bert for link prediction," arXiv preprint arXiv:2201.04843, 2022.
[146] A. Saxena, A. Kochsiek, and R. Gemulla, "Sequence-to-sequence knowledge graph completion and question answering," in ACL, 2022, pp. 2814–2828.
[147] C. Chen, Y. Wang, B. Li, and K. Lam, "Knowledge is flat: A seq2seq generative framework for various knowledge graph completion," in COLING, 2022, pp. 4005–4017.
[148] M. E. Peters, M. Neumann, M. Iyyer, M. Gardner, C. Clark, K. Lee, and L. Zettlemoyer, "Deep contextualized word representations," in NAACL, 2018, pp. 2227–2237.
[149] H. Yan, T. Gui, J. Dai, Q. Guo, Z. Zhang, and X. Qiu, "A unified generative framework for various NER subtasks," in ACL, 2021, pp. 5808–5822.
[150] Y. Onoe and G. Durrett, "Learning to denoise distantly-labeled data for entity typing," in NAACL, 2019, pp. 2407–2417.
[151] Y. Onoe, M. Boratko, A. McCallum, and G. Durrett, "Modeling fine-grained entity types with box embeddings," in ACL, 2021, pp. 2051–2064.
[152] B. Z. Li, S. Min, S. Iyer, Y. Mehdad, and W. Yih, "Efficient one-pass end-to-end entity linking for questions," in EMNLP, 2020, pp. 6433–6441.
[153] T. Ayoola, S. Tyagi, J. Fisher, C. Christodoulopoulos, and A. Pierleoni, "Refined: An efficient zero-shot-capable approach to end-to-end entity linking," in NAACL, 2022, pp. 209–220.
[154] M. Joshi, O. Levy, L. Zettlemoyer, and D. S. Weld, "BERT for coreference resolution: Baselines and analysis," in EMNLP, 2019, pp. 5802–5807.
[155] M. Joshi, D. Chen, Y. Liu, D. S. Weld, L. Zettlemoyer, and O. Levy, "Spanbert: Improving pre-training by representing and predicting spans," Trans. Assoc. Comput. Linguistics, vol. 8, pp. 64–77, 2020.
[156] A. Caciularu, A. Cohan, I. Beltagy, M. E. Peters, A. Cattan, and I. Dagan, "CDLM: cross-document language modeling," in EMNLP, 2021, pp. 2648–2662.
[157] A. Cattan, A. Eirew, G. Stanovsky, M. Joshi, and I. Dagan, "Cross-document coreference resolution over predicted mentions," in ACL, 2021, pp. 5100–5107.
[158] Y. Wang, Y. Shen, and H. Jin, "An end-to-end actor-critic-based neural coreference resolution system," in IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2021, Toronto, ON, Canada, June 6-11, 2021, 2021, pp. 7848–7852.
[159] P. Shi and J. Lin, "Simple BERT models for relation extraction and semantic role labeling," CoRR, vol. abs/1904.05255, 2019.
[160] S. Park and H. Kim, "Improving sentence-level relation extraction through curriculum learning," CoRR, vol. abs/2107.09332, 2021.
[161] Y. Ma, A. Wang, and N. Okazaki, "DREEAM: guiding attention with evidence for improving document-level relation extraction," in EACL, 2023, pp. 1963–1975.
[162] Q. Guo, Y. Sun, G. Liu, Z. Wang, Z. Ji, Y. Shen, and X. Wang, "Constructing chinese historical literature knowledge graph based on bert," in Web Information Systems and Applications: 18th International Conference, WISA 2021, Kaifeng, China, September 24–26, 2021, Proceedings 18. Springer, 2021, pp. 323–334.
[163] J. Han, N. Collier, W. Buntine, and E. Shareghi, "Pive: Prompting with iterative verification improving graph-based generative capability of llms," arXiv preprint arXiv:2305.12392, 2023.
[164] A. Bosselut, H. Rashkin, M. Sap, C. Malaviya, A. Celikyilmaz, and Y. Choi, "Comet: Commonsense transformers for knowledge graph construction," in ACL, 2019.
[165] S. Hao, B. Tan, K. Tang, H. Zhang, E. P. Xing, and Z. Hu, "Bertnet: Harvesting knowledge graphs from pretrained language models," arXiv preprint arXiv:2206.14268, 2022.
[166] P. West, C. Bhagavatula, J. Hessel, J. Hwang, L. Jiang, R. Le Bras, X. Lu, S. Welleck, and Y. Choi, "Symbolic knowledge distillation: from general language models to commonsense models," in NAACL, 2022, pp. 4602–4625.
[167] L. F. R. Ribeiro, M. Schmitt, H. Schütze, and I. Gurevych, "Investigating pretrained language models for graph-to-text generation," in Proceedings of the 3rd Workshop on Natural Language Processing for Conversational AI, 2021, pp. 211–227.
[168] J. Li, T. Tang, W. X. Zhao, Z. Wei, N. J. Yuan, and J.-R. Wen, "Few-shot knowledge graph-to-text generation with pretrained language models," in ACL, 2021, pp. 1558–1568.
[169] A. Colas, M. Alvandipour, and D. Z. Wang, "GAP: A graph-aware language model framework for knowledge graph-to-text generation," in Proceedings of the 29th International Conference on Computational Linguistics, 2022, pp. 5755–5769.
[170] Z. Jin, Q. Guo, X. Qiu, and Z. Zhang, "GenWiki: A dataset of 1.3 million content-sharing text and graphs for unsupervised graph-to-text generation," in Proceedings of the 28th International Conference on Computational Linguistics, 2020, pp. 2398–2409.
[171] W. Chen, Y. Su, X. Yan, and W. Y. Wang, "KGPT: Knowledge-grounded pre-training for data-to-text generation," in EMNLP, 2020, pp. 8635–8648.
[172] D. Lukovnikov, A. Fischer, and J. Lehmann, "Pretrained transformers for simple question answering over knowledge graphs," in The Semantic Web–ISWC 2019: 18th International Semantic Web Conference, Auckland, New Zealand, October 26–30, 2019, Proceedings, Part I 18. Springer, 2019, pp. 470–486.
[173] D. Luo, J. Su, and S. Yu, "A bert-based approach with relation-aware attention for knowledge base question answering," in IJCNN. IEEE, 2020, pp. 1–8.
[174] N. Hu, Y. Wu, G. Qi, D. Min, J. Chen, J. Z. Pan, and Z. Ali, "An empirical study of pre-trained language models in simple knowledge graph question answering," arXiv preprint arXiv:2303.10368, 2023.
[175] Y. Xu, C. Zhu, R. Xu, Y. Liu, M. Zeng, and X. Huang, "Fusing context into knowledge graph for commonsense question answering," in ACL, 2021, pp. 1201–1207.
[176] M. Zhang, R. Dai, M. Dong, and T. He, "Drlk: Dynamic hierarchical reasoning with language model and knowledge graph for question answering," in EMNLP, 2022, pp. 5123–5133.
[177] Z. Hu, Y. Xu, W. Yu, S. Wang, Z. Yang, C. Zhu, K.-W. Chang, and Y. Sun, "Empowering language models with knowledge graph reasoning for open-domain question answering," in EMNLP, 2022, pp. 9562–9581.
[178] X. Zhang, A. Bosselut, M. Yasunaga, H. Ren, P. Liang, C. D. Manning, and J. Leskovec, "Greaselm: Graph reasoning enhanced language models," in ICLR, 2022.
[179] X. Cao and Y. Liu, "Relmkg: reasoning with pre-trained language models and knowledge graphs for complex question answering," Applied Intelligence, pp. 1–15, 2022.
[180] X. Huang, J. Zhang, D. Li, and P. Li, "Knowledge graph embedding based question answering," in WSDM, 2019, pp. 105–113.
[181] H. Wang, F. Zhang, X. Xie, and M. Guo, "Dkn: Deep knowledge-aware network for news recommendation," in WWW, 2018, pp. 1835–1844.
[182] B. Yang, S. W.-t. Yih, X. He, J. Gao, and L. Deng, "Embedding entities and relations for learning and inference in knowledge bases," in ICLR, 2015.
[183] W. Xiong, M. Yu, S. Chang, X. Guo, and W. Y. Wang, "One-shot relational learning for knowledge graphs," in EMNLP, 2018, pp. 1980–1990.
[184] P. Wang, J. Han, C. Li, and R. Pan, "Logic attention based neighborhood aggregation for inductive knowledge graph embedding," in AAAI, vol. 33, no. 01, 2019, pp. 7152–7159.
[185] Y. Lin, Z. Liu, M. Sun, Y. Liu, and X. Zhu, "Learning entity and relation embeddings for knowledge graph completion," in Proceedings of the AAAI conference on artificial intelligence, vol. 29, no. 1, 2015.
[186] C. Chen, Y. Wang, A. Sun, B. Li, and L. Kwok-Yan, "Dipping plms sauce: Bridging structure and text for effective knowledge graph completion via conditional soft prompting," in ACL, 2023.
[187] J. Lovelace and C. P. Rosé, "A framework for adapting pre-trained language models to knowledge graph completion," in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, 2022, pp. 5937–5955.
[188] J. Fu, L. Feng, Q. Zhang, X. Huang, and P. Liu, "Larger-context tagging: When and why does it work?" in Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2021, Online, June 6-11, 2021, 2021, pp. 1463–1475.
[189] X. Liu, K. Ji, Y. Fu, Z. Du, Z. Yang, and J. Tang, "P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks," CoRR, vol. abs/2110.07602, 2021.
[190] J. Yu, B. Bohnet, and M. Poesio, "Named entity recognition as dependency parsing," in ACL, 2020, pp. 6470–6476.
[191] F. Li, Z. Lin, M. Zhang, and D. Ji, "A span-based model for joint overlapped and discontinuous named entity recognition," in ACL, 2021, pp. 4814–4828.
[192] C. Tan, W. Qiu, M. Chen, R. Wang, and F. Huang, "Boundary enhanced neural span classification for nested named entity recognition," in The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, 2020, pp. 9016–9023.
[193] Y. Xu, H. Huang, C. Feng, and Y. Hu, "A supervised multi-head self-attention network for nested named entity recognition," in Thirty-Fifth AAAI Conference on Artificial Intelligence, AAAI 2021, Thirty-Third Conference on Innovative Applications of Artificial Intelligence, IAAI 2021, The Eleventh Symposium on Educational Advances in Artificial Intelligence, EAAI 2021, Virtual Event, February 2-9, 2021, 2021, pp. 14185–14193.
[194] J. Yu, B. Ji, S. Li, J. Ma, H. Liu, and H. Xu, "S-NER: A concise and efficient span-based model for named entity recognition," Sensors, vol. 22, no. 8, p. 2852, 2022.
[195] Y. Fu, C. Tan, M. Chen, S. Huang, and F. Huang, "Nested named entity recognition with partially-observed treecrfs," in AAAI, 2021, pp. 12839–12847.
[196] C. Lou, S. Yang, and K. Tu, "Nested named entity recognition as latent lexicalized constituency parsing," in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, 2022, pp. 6183–6198.
[197] S. Yang and K. Tu, "Bottom-up constituency parsing and nested named entity recognition with pointer networks," in Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), ACL 2022, Dublin, Ireland, May 22-27, 2022, 2022, pp. 2403–2416.
[198] F. Li, Z. Lin, M. Zhang, and D. Ji, "A span-based model for joint overlapped and discontinuous named entity recognition," in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, 2021, pp. 4814–4828.
[199] Q. Liu, H. Lin, X. Xiao, X. Han, L. Sun, and H. Wu, "Fine-grained entity typing via label reasoning," in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, 2021, pp. 4611–4622.
[200] H. Dai, Y. Song, and H. Wang, "Ultra-fine entity typing with weak supervision from a masked language model," in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, ACL/IJCNLP 2021, (Volume 1: Long Papers), Virtual Event, August 1-6, 2021, 2021, pp. 1790–1799.
[201] N. Ding, Y. Chen, X. Han, G. Xu, X. Wang, P. Xie, H. Zheng, Z. Liu, J. Li, and H. Kim, "Prompt-learning for fine-grained entity typing," in Findings of the Association for Computational Linguistics: EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, 2022, pp. 6888–6901.
[202] W. Pan, W. Wei, and F. Zhu, "Automatic noisy label correction for fine-grained entity typing," in Proceedings of the Thirty-First International Joint Conference on Artificial Intelligence, IJCAI 2022, Vienna, Austria, 23-29 July 2022, 2022, pp. 4317–4323.
[203] B. Li, W. Yin, and M. Chen, "Ultra-fine entity typing with indirect supervision from natural language inference," Trans. Assoc. Comput. Linguistics, vol. 10, pp. 607–622, 2022.
[204] S. Broscheit, "Investigating entity knowledge in BERT with simple neural end-to-end entity linking," CoRR, vol. abs/2003.05473, 2020.
[205] N. D. Cao, G. Izacard, S. Riedel, and F. Petroni, "Autoregressive entity retrieval," in 9th ICLR, ICLR 2021, Virtual Event, Austria, May 3-7, 2021, 2021.
[206] N. D. Cao, L. Wu, K. Popat, M. Artetxe, N. Goyal, M. Plekhanov, L. Zettlemoyer, N. Cancedda, S. Riedel, and F. Petroni, "Multilingual autoregressive entity linking," Trans. Assoc. Comput. Linguistics, vol. 10, pp. 274–290, 2022.
[207] N. D. Cao, W. Aziz, and I. Titov, "Highly parallel autoregressive entity linking with discriminative correction," in Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, EMNLP 2021, Virtual Event / Punta Cana, Dominican Republic, 7-11 November, 2021, 2021, pp. 7662–7669.
[208] K. Lee, L. He, and L. Zettlemoyer, "Higher-order coreference resolution with coarse-to-fine inference," in NAACL, 2018, pp. …
…"…extraction," in PAKDD, ser. Lecture Notes in Computer Science, vol. 12084, 2020, pp. 197–209.
[221] D. Wang, W. Hu, E. Cao, and W. Sun, "Global-to-local neural networks for document-level relation extraction," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, 2020, pp. 3711–3721.
[222] S. Zeng, Y. Wu, and B. Chang, "SIRE: separate intra- and inter-sentential reasoning for document-level relation extraction," in Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, ser. Findings of ACL, vol. ACL/IJCNLP 2021, 2021, pp. 524–534.
[223] G. Nan, Z. Guo, I. Sekulic, and W. Lu, "Reasoning with latent structure refinement for document-level relation extraction," in ACL, 2020, pp. 1546–1557.
[224] S. Zeng, R. Xu, B. Chang, and L. Li, "Double graph based reasoning for document-level relation extraction," in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16-20, 2020, 2020, pp. 1630–1640.
[225] N. Zhang, X. Chen, X. Xie, S. Deng, C. Tan, M. Chen, F. Huang, L. Si, and H. Chen, "Document-level relation extraction as semantic segmentation," in IJCAI, 2021, pp. 3999–4006.
[226] O. Ronneberger, P. Fischer, and T. Brox, "U-net: Convolutional networks for biomedical image segmentation," in Medical Image Computing and Computer-Assisted Intervention - MICCAI 2015 - 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III, ser. Lecture Notes in Computer Science, vol. 9351, 2015, pp. 234–241.
687–692. [227]W. Zhou, K. Huang, T. Ma, and J. Huang, “Document-level rela-
[209]T. M. Lai, T. Bui, and D. S. Kim, “End-to-end neural coreference tion extraction with adaptive thresholding and localized context
resolution revisited: A simple yet effective baseline,” in IEEE pooling,” in AAAI, 2021, pp. 14612–14620.
International Conference on Acoustics, Speech and Signal Processing, [228]C. Gardent, A. Shimorina, S. Narayan, and L. Perez-Beltrachini,
ICASSP 2022, Virtual and Singapore, 23-27 May 2022, 2022, pp. “The WebNLG challenge: Generating text from RDF data,” in
8147–8151. Proceedings of the 10th International Conference on Natural Language
[210]W. Wu, F. Wang, A. Yuan, F. Wu, and J. Li, “Corefqa: Coreference Generation, 2017, pp. 124–133.
resolution as query-based span prediction,” in Proceedings of the [229]J. Guan, Y. Wang, and M. Huang, “Story ending generation with
58thAnnualMeetingoftheAssociationforComputationalLinguistics, incremental encoding and commonsense knowledge,” in AAAI,
ACL 2020, Online, July 5-10, 2020, 2020, pp. 6953–6963. 2019, pp. 6473–6480.
[211]T. M. Lai, H. Ji, T. Bui, Q. H. Tran, F. Dernoncourt, and W. Chang, [230]H. Zhou, T. Young, M. Huang, H. Zhao, J. Xu, and X. Zhu,
“A context-dependent gated module for incorporating symbolic “Commonsense knowledge aware conversation generation with
semantics into event coreference resolution,” in Proceedings of the graph attention,” in IJCAI, 2018, pp. 4623–4629.
2021 Conference of the North American Chapter of the Association for [231]M. Kale and A. Rastogi, “Text-to-text pre-training for data-to-text
Computational Linguistics: Human Language Technologies, NAACL- tasks,” in Proceedings of the 13th International Conference on Natural
HLT 2021, Online, June 6-11, 2021, 2021, pp. 3491–3499. Language Generation, 2020, pp. 97–102.
[212]Y.Kirstain,O.Ram,andO.Levy,“Coreferenceresolutionwithout [232]M. Mintz, S. Bills, R. Snow, and D. Jurafsky, “Distant supervision
span representations,” in Proceedings of the 59th Annual Meeting of for relation extraction without labeled data,” in ACL, 2009, pp.
the Association for Computational Linguistics and the 11th Interna- 1003–1011.
tional JointConference on Natural Language Processing,ACL/I |
102.
[212]Y.Kirstain,O.Ram,andO.Levy,“Coreferenceresolutionwithout [232]M. Mintz, S. Bills, R. Snow, and D. Jurafsky, “Distant supervision
span representations,” in Proceedings of the 59th Annual Meeting of for relation extraction without labeled data,” in ACL, 2009, pp.
the Association for Computational Linguistics and the 11th Interna- 1003–1011.
tional JointConference on Natural Language Processing,ACL/IJCNLP [233]A. Saxena, A. Tripathi, and P. Talukdar, “Improving multi-hop
2021, (Volume 2: Short Papers), Virtual Event, August 1-6, 2021, question answering over knowledge graphs using knowledge
2021, pp. 14–19. base embeddings,” in ACL, 2020, pp. 4498–4507.
[213]R. Thirukovalluru, N. Monath, K. Shridhar, M. Zaheer, [234]Y. Feng, X. Chen, B. Y. Lin, P. Wang, J. Yan, and X. Ren, “Scalable
M. Sachan, and A. McCallum, “Scaling within document corefer- multi-hop relational reasoning for knowledge-aware question
ence to long texts,” in Findings of the Association for Computational answering,” in EMNLP, 2020, pp. 1295–1309.
Linguistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, ser. [235]Y. Yan, R. Li, S. Wang, H. Zhang, Z. Daoguang, F. Zhang, W. Wu,
Findings of ACL, vol. ACL/IJCNLP 2021, 2021, pp. 3921–3931. and W. Xu, “Large-scale relation learning for question answering
[214]I. Beltagy, M. E. Peters, and A. Cohan, “Longformer: The long- over knowledge bases with pre-trained language models,” in
document transformer,” CoRR, vol. abs/2004.05150, 2020. EMNLP, 2021, pp. 3653–3660.
[215]C.Alt,M.H ¨ubner,andL.Hennig,“Improvingrelationextraction [236]J. Zhang, X. Zhang, J. Yu, J. Tang, J. Tang, C. Li, and H. Chen,
by pre-trained language representations,” in 1st Conference on “Subgraph retrieval enhanced model for multi-hop knowledge
Automated Knowledge Base Construction, AKBC 2019, Amherst, MA, base question answering,” in ACL (Volume 1: Long Papers), 2022,
USA, May 20-22, 2019, 2019. pp. 5773–5784.
[216]L. B. Soares, N. FitzGerald, J. Ling, and T. Kwiatkowski, “Match- [237]J. Jiang, K. Zhou, Z. Dong, K. Ye, W. X. Zhao, and J.-R. Wen,
ing the blanks: Distributional similarity for relation learning,” in “Structgpt: A general framework for large language model to
ACL, 2019, pp. 2895–2905. reason over structured data,” arXiv preprint arXiv:2305.09645,
[217]S. Lyu and H. Chen, “Relation classification with entity type 2023.
restriction,” in Findings of the Association for Computational Lin- [238]H. Zhu, H. Peng, Z. Lyu, L. Hou, J. Li, and J. Xiao, “Pre-training
guistics: ACL/IJCNLP 2021, Online Event, August 1-6, 2021, ser. language model incorporating domain-specific heterogeneous
Findings of ACL, vol. ACL/IJCNLP 2021, 2021, pp. 390–395. knowledge into a unified representation,” Expert Systems with
[218]J. Zheng and Z. Chen, “Sentence-level relation extraction via Applications, vol. 215, p. 119369, 2023.
contrastive learning with descriptive relation prompts,” CoRR, [239]C. Feng, X. Zhang, and Z. Fei, “Knowledge solver: Teaching llms
vol. abs/2304.04935, 2023. to s |
90–395. knowledge into a unified representation,” Expert Systems with
[218]J. Zheng and Z. Chen, “Sentence-level relation extraction via Applications, vol. 215, p. 119369, 2023.
contrastive learning with descriptive relation prompts,” CoRR, [239]C. Feng, X. Zhang, and Z. Fei, “Knowledge solver: Teaching llms
vol. abs/2304.04935, 2023. to search for domain knowledge from knowledge graphs,” arXiv
[219]H. Wang, C. Focke, R. Sylvester, N. Mishra, and W. Y. Wang, preprint arXiv:2309.03118, 2023.
“Fine-tune bert for docred with two-step process,” CoRR, vol. [240]J. Sun, C. Xu, L. Tang, S. Wang, C. Lin, Y. Gong, H.-Y. Shum,
abs/1909.11898, 2019. and J. Guo, “Think-on-graph: Deep and responsible reasoning
[220]H. Tang, Y. Cao, Z. Zhang, J. Cao, F. Fang, S. Wang, and P. Yin, of large language model with knowledge graph,” arXiv preprint
“HIN:hierarchicalinferencenetworkfordocument-levelrelation arXiv:2307.07697, 2023.JOURNAL OF LATEX CLASS FILES, VOL. ??, NO. ??, MONTH 20YY 27
[241] B. He, D. Zhou, J. Xiao, X. Jiang, Q. Liu, N. J. Yuan, and T. Xu, “BERT-MK: Integrating graph contextualized knowledge into pre-trained language models,” in EMNLP, 2020, pp. 2281–2290.
[242] Y. Su, X. Han, Z. Zhang, Y. Lin, P. Li, Z. Liu, J. Zhou, and M. Sun, “Cokebert: Contextual knowledge selection and embedding towards enhanced pre-trained language models,” AI Open, vol. 2, pp. 127–134, 2021.
[243] D. Yu, C. Zhu, Y. Yang, and M. Zeng, “JAKET: joint pre-training of knowledge graph and language understanding,” in AAAI, 2022, pp. 11630–11638.
[244] X. Wang, P. Kapanipathi, R. Musa, M. Yu, K. Talamadupula, I. Abdelaziz, M. Chang, A. Fokoue, B. Makni, N. Mattei, and M. Witbrock, “Improving natural language inference using external knowledge in the science questions domain,” in AAAI, 2019, pp. 7208–7215.
[245] Y. Sun, Q. Shi, L. Qi, and Y. Zhang, “JointLK: Joint reasoning with language models and knowledge graphs for commonsense question answering,” in NAACL, 2022, pp. 5049–5060.
[246] X. Liu, H. Yu, H. Zhang, Y. Xu, X. Lei, H. Lai, Y. Gu, H. Ding, K. Men, K. Yang et al., “Agentbench: Evaluating llms as agents,” arXiv preprint arXiv:2308.03688, 2023.
[247] Y. Wang, N. Lipka, R. A. Rossi, A. Siu, R. Zhang, and T. Derr, “Knowledge graph prompting for multi-document question answering,” arXiv preprint arXiv:2308.11730, 2023.
[248] A. Zeng, M. Liu, R. Lu, B. Wang, X. Liu, Y. Dong, and J. Tang, “Agenttuning: Enabling generalized agent abilities for llms,” 2023.
[249] W. Kryściński, B. McCann, C. Xiong, and R. Socher, “Evaluating the factual consistency of abstractive text summarization,” arXiv preprint arXiv:1910.12840, 2019.
[250] Z. Ji, Z. Liu, N. Lee, T. Yu, B. Wilie, M. Zeng, and P. Fung, “Rho (ρ): Reducing hallucination in open-domain dialogues with knowledge grounding,” arXiv preprint arXiv:2212.01588, 2022.
[251] S. Feng, V. Balachandran, Y. Bai, and Y. Tsvetkov, “Factkb: Generalizable factuality evaluation using language models enhanced with factual knowledge,” arXiv preprint arXiv:2305.08281, 2023.
[252] Y. Yao, P. Wang, B. Tian, S. Cheng, Z. Li, S. Deng, H. Chen, and N. Zhang, “Editing large language models: Problems, methods, and opportunities,” arXiv preprint arXiv:2305.13172, 2023.
[253] Z. Li, N. Zhang, Y. Yao, M. Wang, X. Chen, and H. Chen, “Unveiling the pitfalls of knowledge editing for large language models,” arXiv preprint arXiv:2310.02129, 2023.
[254] R. Cohen, E. Biran, O. Yoran, A. Globerson, and M. Geva, “Evaluating the ripple effects of knowledge editing in language models,” arXiv preprint arXiv:2307.12976, 2023.
[255] S. Diao, Z. Huang, R. Xu, X. Li, Y. Lin, X. Zhou, and T. Zhang, “Black-box prompt learning for pre-trained language models,” arXiv preprint arXiv:2201.08531, 2022.
[256] T. Sun, Y. Shao, H. Qian, X. Huang, and X. Qiu, “Black-box tuning for language-model-as-a-service,” in International Conference on Machine Learning. PMLR, 2022, pp. 20841–20855.
[257] X. Chen, A. Shrivastava, and A. Gupta, “NEIL: extracting visual knowledge from web data,” in IEEE International Conference on Computer Vision, ICCV 2013, Sydney, Australia, December 1-8, 2013, 2013, pp. 1409–1416.
[258] M. Warren and P. J. Hayes, “Bounding ambiguity: Experiences with an image annotation system,” in Proceedings of the 1st Workshop on Subjectivity, Ambiguity and Disagreement in Crowdsourcing, ser. CEUR Workshop Proceedings, vol. 2276, 2018, pp. 41–54.
[259] Z. Chen, Y. Huang, J. Chen, Y. Geng, Y. Fang, J. Z. Pan, N. Zhang, and W. Zhang, “Lako: Knowledge-driven visual question answering via late knowledge-to-text injection,” 2022.
[260] R. Girdhar, A. El-Nouby, Z. Liu, M. Singh, K. V. Alwala, A. Joulin, and I. Misra, “Imagebind: One embedding space to bind them all,” in ICCV, 2023, pp. 15180–15190.
[261] J. Zhang, Z. Yin, P. Chen, and S. Nichele, “Emotion recognition using multi-modal data and machine learning techniques: A tutorial and review,” Information Fusion, vol. 59, pp. 103–126, 2020.
[262] H. Zhang, B. Wu, X. Yuan, S. Pan, H. Tong, and J. Pei, “Trustworthy graph neural networks: Aspects, methods and trends,” arXiv:2205.07424, 2022.
[263] T. Wu, M. Caccia, Z. Li, Y.-F. Li, G. Qi, and G. Haffari, “Pretrained language model in continual learning: A comparative study,” in ICLR, 2022.
[264] X. L. Li, A. Kuncoro, J. Hoffmann, C. de Masson d’Autume, P. Blunsom, and A. Nematzadeh, “A systematic investigation of commonsense knowledge in large language models,” in Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, 2022, pp. 11838–11855.
[265] Y. Zheng, H. Y. Koh, J. Ju, A. T. Nguyen, L. T. May, G. I. Webb, and S. Pan, “Large language models for scientific synthesis, inference and explanation,” arXiv preprint arXiv:2310.07984, 2023.
[266] B. Min, H. Ross, E. Sulem, A. P. B. Veyseh, T. H. Nguyen, O. Sainz, E. Agirre, I. Heintz, and D. Roth, “Recent advances in natural language processing via large pre-trained language models: A survey,” ACM Computing Surveys, vol. 56, no. 2, pp. 1–40, 2023.
[267] J. Wei, M. Bosma, V. Zhao, K. Guu, A. W. Yu, B. Lester, N. Du, A. M. Dai, and Q. V. Le, “Finetuned language models are zero-shot learners,” in International Conference on Learning Representations, 2021.
[268] Y. Zhang, Y. Li, L. Cui, D. Cai, L. Liu, T. Fu, X. Huang, E. Zhao, Y. Zhang, Y. Chen, L. Wang, A. T. Luu, W. Bi, F. Shi, and S. Shi, “Siren’s song in the ai ocean: A survey on hallucination in large language models,” arXiv preprint arXiv:2309.01219, 2023.

APPENDIX A
PROS AND CONS FOR LLMS AND KGS

In this section, we introduce the pros and cons of LLMs and KGs in detail. We summarize the pros and cons of LLMs and KGs in Fig. 1, respectively.

LLM pros.
• General Knowledge [11]: LLMs are pre-trained on large-scale corpora that contain a large amount of general knowledge, such as commonsense knowledge [264] and factual knowledge [14]. Such knowledge can be distilled from LLMs and used for downstream tasks [265].
• Language Processing [12]: LLMs have shown great performance in understanding natural language [266]. Therefore, LLMs can be used in many natural language processing tasks, such as question answering [4], machine translation [5], and text generation [6].
• Generalizability [13]: LLMs offer great generalizability and can be applied to various downstream tasks [267]. By providing few-shot examples [59] or fine-tuning on multi-task data [3], LLMs achieve great performance on many tasks.

LLM cons.
• Implicit Knowledge [14]: LLMs represent knowledge implicitly in their parameters. It is difficult to interpret or validate the knowledge obtained by LLMs.
• Hallucination [15]: LLMs often hallucinate, generating content that is seemingly plausible but factually incorrect [268]. This problem greatly reduces the trustworthiness of LLMs in real-world scenarios.
• Indecisiveness [16]: LLMs perform reasoning by generating from a probability model, which is an indecisive process. The generated results are sampled from a probability distribution, which is difficult to control.
• Black-box [17]: LLMs are criticized for their lack of interpretability. It is unclear which specific patterns and functions LLMs use to arrive at predictions or decisions.
• Lacking Domain-specific/New Knowledge [18]: LLMs trained on general corpora might not generalize well to specific domains or new knowledge, due to the lack of domain-specific or up-to-date training data.

KG pros.
• Structural Knowledge [19]: KGs store facts in a structural format (i.e., triples) that is understandable by both humans and machines.
• Accuracy [20]: Facts in KGs are usually manually curated or validated by experts, making them more accurate and dependable than those in LLMs.
• Decisiveness [21]: The factual knowledge in KGs is stored in a decisive manner. The reasoning algorithms in KGs are also deterministic and can provide decisive results.
• Interpretability [22]: KGs are renowned for their symbolic reasoning ability, which provides an interpretable reasoning process that can be understood by humans.
• Domain-specific Knowledge [23]: Many domains can construct their own KGs, curated by experts, to provide precise and dependable domain-specific knowledge.
• Evolving Knowledge [24]: The facts in KGs continuously evolve. KGs can be updated with new facts by inserting new triples and deleting outdated ones.

KG cons.
• Incompleteness [25]: KGs are hard to construct and often incomplete, which limits their ability to provide comprehensive knowledge.
• Lacking Language Understanding [33]: Most studies on KGs model the structure of knowledge but ignore the textual information in KGs, which is consequently underused in KG-related tasks such as KG completion [26] and KGQA [43].
• Unseen Facts [27]: KGs are dynamically changing, which makes it difficult to model unseen entities and represent new facts.
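The structural and decisive nature of KG facts discussed above can be made concrete with a small sketch: a KG stored as explicit (head, relation, tail) triples supports deterministic, inspectable lookup. The facts, relation names, and `query` helper below are invented for illustration and do not come from any dataset in this survey.

```python
# Minimal sketch: a knowledge graph as a set of (head, relation, tail) triples.
# All triples here are hypothetical examples.
triples = {
    ("Aspirin", "treats", "Headache"),
    ("Aspirin", "type_of", "NSAID"),
    ("NSAID", "subclass_of", "Drug"),
}

def query(head, relation):
    """Deterministic lookup: every call returns the same, inspectable answer set,
    in contrast to sampling from an LLM's output distribution."""
    return sorted(t for h, r, t in triples if h == head and r == relation)

print(query("Aspirin", "treats"))   # ['Headache']
print(query("NSAID", "treats"))     # [] -- incompleteness: a missing fact is simply absent
```

The empty result for the second query also illustrates the incompleteness limitation: a triple store can only answer with facts it explicitly contains.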
Exploring the Potential of Large Language Models (LLMs) in Learning on Graphs
Zhikai Chen1, Haitao Mao1, Hang Li1, Wei Jin3, Hongzhi Wen1, Xiaochi Wei2, Shuaiqiang Wang2, Dawei Yin2, Wenqi Fan4, Hui Liu1, Jiliang Tang1
1 Michigan State University  2 Baidu Inc.  3 Emory University  4 The Hong Kong Polytechnic University
{chenzh85, haitaoma, lihang4, wenhongz, liuhui7, tangjili}@msu.edu, {weixiaochi, wangshuaiqiang}@baidu.com, yindawei@acm.org, wei.jin@emory.edu, wenqifan03@gmail.com
ABSTRACT
Learning on Graphs has attracted immense attention due to its wide real-world applications. The most popular pipeline for learning on graphs with textual node attributes primarily relies on Graph Neural Networks (GNNs), and utilizes shallow text embeddings as initial node representations, which have limitations in general knowledge and profound semantic understanding. In recent years, Large Language Models (LLMs) have been proven to possess extensive common knowledge and powerful semantic comprehension abilities that have revolutionized existing workflows for handling text data. In this paper, we aim to explore the potential of LLMs in graph machine learning, especially the node classification task, and investigate two possible pipelines: LLMs-as-Enhancers and LLMs-as-Predictors. The former leverages LLMs to enhance nodes' text attributes with their massive knowledge and then generates predictions through GNNs. The latter attempts to directly employ LLMs as standalone predictors. We conduct comprehensive and systematic studies on these two pipelines under various settings. From comprehensive empirical results, we make original observations and find new insights that open new possibilities and suggest promising directions for leveraging LLMs for learning on graphs. Our codes and datasets are available at: https://github.com/CurryTang/Graph-LLM.

1. INTRODUCTION
Graphs are ubiquitous in various disciplines and applications, encompassing a wide range of real-world scenarios [73]. Many of these graphs have nodes that are associated with text attributes, resulting in the emergence of text-attributed graphs, such as citation graphs [23; 57] and product graphs [5]. For example, in the Ogbn-products dataset [23], each node represents a product, and its corresponding textual description is treated as the node's attribute. These graphs have seen widespread use across a myriad of domains, from social network analysis [31] and information retrieval [86] to a diverse range of natural language processing tasks [37; 76].

Given the prevalence of text-attributed graphs (TAGs), we aim to explore how to effectively handle these graphs, with a focus on the node classification task. Intuitively, TAGs provide both node attribute and graph structural information. Thus, it is important to effectively capture both while modeling their interrelated correlation. Graph Neural Networks (GNNs) [38] have emerged as the de facto technique for handling graph-structured data, often leveraging a message-passing paradigm to effectively capture the graph structure. To encode textual information, conventional pipelines typically make use of non-contextualized shallow embeddings, e.g., Bag-of-Words [20] and Word2Vec [42] embeddings, as seen in the common graph benchmark datasets [23; 57], where GNNs are subsequently employed to process these embeddings. Recent studies demonstrate that these non-contextualized shallow embeddings suffer from some limitations, such as the inability to capture polysemous words [51] and deficiency in semantic information [41; 12], which may lead to sub-optimal performance on downstream tasks.

Compared to these non-contextualized shallow textual embeddings, large language models (LLMs) present massive context-aware knowledge and superior semantic comprehension capability through the process of pre-training on large-scale text corpora [48; 12]. This knowledge achieved from pre-training has led to a surge of revolutions for downstream NLP tasks [85]. Exemplars such as ChatGPT and GPT4 [46], equipped with hundreds of billions of parameters, exhibit superior performance [2] on numerous text-related tasks from various domains. Considering the exceptional ability of these LLMs to process and understand textual data, a pertinent question arises: (1) Can we leverage the knowledge of LLMs to compensate for the deficiency of contextualized knowledge and semantic comprehension inherent in the conventional GNN pipelines? In addition to the knowledge learned via pre-training, recent studies suggest that LLMs present preliminary success on tasks with implicit graph structures such as recommendation [35; 14], ranking [26], and multi-hop reasoning [7], in which LLMs are adopted to make the final predictions. Given such success, we further question: (2) Can LLMs, beyond merely integrating with GNNs, independently perform predictive tasks with explicit graph structures? In this paper, we aim to embark upon a preliminary investigation of these two questions by undertaking a series of extensive empirical analyses. Particularly, the key challenge is
how to design an LLM-compatible pipeline for graph learn-ing tasks. Consequently, we explore two potential pipelines two pipelines to leverage LLMs under the task of node clas-
to incorporate LLMs: (1) LLMs-as-Enhancers : LLMs are sification. Section 4 explores the first pipeline, LLMs-as-
adopted to enhance the textual information; subsequently, Enhancers , which adopts LLMs to enhance text attributes.
GNNs utilize refined textual data to generate predictions. Section 5 details the second pipeline, LLMs-as-Predictors ,
(2) LLMs-as-Predictors : LLMs are adapted to generate the exploring the potential for directly applying LLMs to solve
finalpredictions,wherestructuralandattributeinformation graph learning problems as a predictor. Section 6 discusses
is present completely through natural languages. works relevant to the applications of LLMs in the graph do-
Inthiswork,weembracethechallengesandopportunitiesto main. Section 7 summarizes our insights and discusses the
studytheutilizationofLLMsingraph-relatedproblemsand limitationsofourstudyandthepotentialdirectionsofLLMs
aim to deepen our understanding of the potential of LLMs in the graph domain.
on graph machine learning , with a focus on the node clas-
sification task. First , we aim to investigate how LLMs can 2. PRELIMINARIES
enhance GNNs by leveraging their extensive knowledge and In this section, we present concepts, notations and problem
semantic comprehension capability. It is evident that differ- settings used in the work. We primarily delve into the node
ent types of LLMs possess varying levels of capability, and classification task on the text-attributed graphs, which is
more powerful models often come with more usage restric- one of the most important downstream tasks in the graph
tions [59; 85; 51]. Therefore, we strive to design different learning domain. Next, we first give the definition of text-
strategies tailored to different types of models, and better attributed graphs.
leverage their capabilities within the constraints of these us- |
one of the most important downstream tasks in the graph
tions [59; 85; 51]. Therefore, we strive to design different learning domain. Next, we first give the definition of text-
strategies tailored to different types of models, and better attributed graphs.
leverage their capabilities within the constraints of these us- Text-Attributed Graphs A text-attributed graph (TAG)
age limitations. Second , we want to explore how LLMs GS is defined as a structure consisting of nodes V and their
can be adapted to explicit graph structures as a predictor. corresponding adjacency matrix A ∈ R |V |×| V |. For each
A principal challenge lies in crafting a prompt that enables node vi ∈V , it is associated with a text attribute, denoted
theLLMstoeffectivelyusestructuralandattributeinforma- as si.
tion. To address this challenge, we attempt to explore what In this study, we focus on node classification, which is one
information can assist LLMs in better understanding and of the most commonly adopted graph-related tasks.
utilizing graph structures. Through these investigations, we Node Classification on TAGs Given a set of labeled
make some insightful observations and gain a better under- nodes L ⊂ V with their labels yL , we aim to predict the
standingofthecapabilitiesofLLMsingraphmachinelearn- labels yU for the remaining unlabeled nodes U = V\L .
ing. We use the citation network dataset Ogbn-arxiv [23] as an
Contributions. Our contributions are summarized as fol- illustrative example. In such a graph, each node represents
lows: an individual paper from the computer science subcategory,
1.We explore two pipelines that incorporate LLMs to han- with the attribute of the node embodying the paper’s ti-
dle TAGs: LLMs-as-Enhancers and LLMs-as-Predictors . tle and abstracts. The edges denote the citation relation-
ThefirstpipelinetreatstheLLMsasattributeenhancers, ships. The task is to classify the papers into their corre-
seamlessly integrating them with GNNs. The second sponding categories, for example, “cs.cv” (i.e., computer vi-
pipeline directly employs the LLMs to generate predic- sion). Next, we introduce the models adopted in this study,
tions. including graph neural networks and large language models.
2.For LLMs-as-Enhancers , we introduce two strategies to Graph Neural Networks. When applied to TAGs for
enhance text attributes via LLMs. W |
sion). Next, we introduce the models adopted in this study,
tions. including graph neural networks and large language models.
2.For LLMs-as-Enhancers , we introduce two strategies to Graph Neural Networks. When applied to TAGs for
enhance text attributes via LLMs. We further conduct a node classification, Graph Neural Networks (GNNs) lever-
seriesofexperimentstocomparetheeffectivenessofthese age the structural interactions between nodes. Given initial
enhancements. node features h 0i, GNNs update the representation of each
3.For LLMs-as-Predictors , we design a series of experi- nodebyaggregatingtheinformationfromneighboringnodes
ments to explore LLMs’ capability in utilizing structural in a message-passing manner [16]. The l-th layer can be for-
and attribute information. From empirical results, we mulated as:
summarize some original observations and provide new
insights. h li =UPD l h l− 1i ,AGG j∈N (i) MSG l h l− 1i ,h l− 1j , (1)
Key Insights. Through comprehensive empirical evalua-
tions, we find the following key insights: where AGG is often an aggregation function such as sum-
1.For LLMs-as-Enhancers , using deep sentence embedding mation, or maximum. UPD and MSG are usually some
models to generate embeddings for node attributes show differentiable functions, such as MLP. The final hidden rep-
both effectiveness and efficiency. resentations can be passed through a fully connected layer
2.For LLMs-as-Enhancers ,utilizingLLMstoaugmentnode to make classification predictions.
attributes at the text level also leads to improvements in Large Language Models. In this work, we primarily uti-
downstream performance. lizetheterm”largelanguagemodels”(LLMs)todenotelan-
3.For LLMs-as-Predictors , LLMspresentpreliminaryeffec- guage models that have been pre-trained on extensive text
tiveness but we should be careful about their inaccurate corpora. Despite the diversity of pre-training objectives [9;
predictions and the potential test data leakage problem. 52; 53], the shared goal of these LLMs is to harness the
4.LLMs demonstrate the potential to serve as good anno- knowledge acquired during the pre-training phase and re-
tators for labeling nodes, as a decent portion of their purpose it for a range of downstream tasks. Based on their
annotations is accurate. |
the shared goal of these LLMs is to harness the
4.LLMs demonstrate the potential to serve as good anno- knowledge acquired during the pre-training phase and re-
tators for labeling nodes, as a decent portion of their purpose it for a range of downstream tasks. Based on their
annotations is accurate. interfaces,specificallyconsideringwhethertheirembeddings
Organization. The remaining of this paper is organized as areaccessibletousersornot,inthisworkweroughlyclassify
follows. Section 2 introduces necessary preliminary knowl- LLMs as below:
Embedding-visible LLMs. Embedding-visible LLMs provide access to their embeddings, allowing users to interact with and manipulate the underlying language representations. Embedding-visible LLMs enable users to extract embeddings for specific words, sentences, or documents, and to perform various natural language processing tasks using those embeddings. Examples of embedding-visible LLMs include BERT [9], Sentence-BERT [54], and Deberta [21].

Embedding-invisible LLMs. Embedding-invisible LLMs do not provide direct access to their embeddings or allow users to manipulate the underlying language representations. Instead, they are typically deployed as web services [59] and offer restricted interfaces. For instance, ChatGPT [45], along with its API, solely provides a text-based interface. Users can only engage with these LLMs through text interactions.

In addition to the interfaces, the size, capability, and model structure are crucial factors in determining how LLMs can be leveraged for graphs. Consequently, we take into account the following four types of LLMs:
1. Pre-trained Language Models: We use the term "pre-trained language models" (PLMs) to refer to those relatively small large language models, such as Bert [9] and Deberta [21], which can be fine-tuned for downstream tasks. It should be noted that, strictly speaking, all LLMs can be viewed as PLMs. Here we adopt the commonly used terminology for models like BERT [51] to distinguish them from other LLMs, following the convention in a recent paper [85].

2. Deep Sentence Embedding Models: These models typically use PLMs as the base encoders and adopt the bi-encoder structure [54; 68; 44]. They further pre-train the models in a supervised [54] or contrastive manner [68; 44]. In most cases, there is no need for these models to conduct additional fine-tuning for downstream tasks. These models can be further categorized into local sentence embedding models and online sentence embedding models. Local sentence embedding models are open-source and can be accessed locally, with Sentence-BERT (SBERT) being an example. On the other hand, online sentence embedding models are closed-source and deployed as services, with OpenAI's text-ada-embedding-002 [44] being an example.
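The bi-encoder structure mentioned above can be illustrated with a minimal, self-contained sketch: the two texts are encoded independently, and the resulting vectors are compared by cosine similarity. The bag-of-words encoder below is a toy stand-in for a PLM encoder, not part of the paper's setup.

```python
from collections import Counter
import math

def bow_encode(text):
    """Toy stand-in for a PLM encoder: a bag-of-words count vector.
    A real bi-encoder runs a PLM over each text independently."""
    return Counter(text.lower().split())

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[w] * v[w] for w in u if w in v)
    norm = lambda c: math.sqrt(sum(x * x for x in c.values()))
    return dot / (norm(u) * norm(v))

# Bi-encoder usage: encode the two texts independently, then compare.
a = bow_encode("graph neural networks for node classification")
b = bow_encode("node classification with graph neural networks")
c = bow_encode("protein folding dynamics")
sim_ab = cosine(a, b)
sim_ac = cosine(a, c)
```

Because each text is encoded on its own, embeddings can be pre-computed once and reused, which is why these models need no task-specific fine-tuning.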
3. Large Language Models: Compared to PLMs, Large Language Models (LLMs) exhibit significantly enhanced capabilities with orders of magnitude more parameters. LLMs can be categorized into two types. The first type consists of open-source LLMs, which can be deployed locally, providing users with transparent access to the models' parameters and embeddings. However, the substantial size of these models poses a challenge, as fine-tuning them can be quite cumbersome. One representative example of an open-source LLM is LLaMA [63]. The second type of LLMs is typically deployed as services [59], with restrictions placed on user interfaces. In this case, users are unable to access the model parameters, embeddings, or logits directly. The most powerful LLMs, such as ChatGPT [45] and GPT4 [46], belong to this kind.

Among the four types of LLMs, PLMs, deep sentence embedding models, and open-source LLMs are often embedding-visible LLMs. Closed-source LLMs are embedding-invisible LLMs.

3. PIPELINES FOR LLMS IN GRAPHS

Given the superior power of LLMs in understanding textual information, we now investigate different strategies to leverage LLMs for node classification in textual graphs. Specifically, we present two distinct pipelines: LLMs-as-Enhancers and LLMs-as-Predictors. Figure 1 provides figurative illustrations of these two pipelines, and we elaborate on their details as follows.

LLMs-as-Enhancers. In this pipeline, LLMs are leveraged to enhance the text attributes. As shown in Figure 1, for LLMs-as-Enhancers, LLMs are adopted to pre-process the text attributes, and GNNs are then trained on the enhanced attributes as the predictors. Considering the different structures of LLMs, we conduct enhancements either at the feature level or at the text level, as shown in Figure 2.

1. Feature-level enhancement: For feature-level enhancement, embedding-visible LLMs inject their knowledge by simply encoding the text attribute s_i into text embeddings h_i ∈ R^d. We investigate two feasible integrating structures for feature-level enhancement. (1) Cascading structure: Embedding-visible LLMs and GNNs are combined sequentially. Embedding-visible LLMs first encode text attributes into text features, which are then adopted as the initial node features for GNNs. (2) Iterative structure [83]: PLMs and GNNs are co-trained together by generating pseudo labels for each other. Only PLMs are suitable for this structure since it involves fine-tuning.

2. Text-level enhancement: For text-level enhancement, given the text attribute s_i, LLMs first transform the text attribute into an augmented attribute s_i^Aug. The enhanced attributes are then encoded into enhanced node features h_i^Aug ∈ R^d through embedding-visible LLMs. GNNs make predictions by ensembling the original node features and the augmented node features.

LLMs-as-Predictors. In this pipeline, LLMs are leveraged to directly make predictions for the node classification task. As shown in Figure 1b, for LLMs-as-Predictors, the first step is to design prompts to represent graph structural information, text attributes, and label information with texts. Then, embedding-invisible LLMs make predictions based on the information embedded in the prompts.
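As a concrete illustration of the prompt-design step, the sketch below assembles a node's text attribute, its neighbors' texts, and the label set into one textual prompt for an embedding-invisible LLM. The template wording and the helper name are our own illustration, not the exact prompts used in the paper.

```python
def build_node_prompt(text, neighbor_texts, labels):
    """Format a node's text attribute, its neighbors' text attributes
    (structural information), and the candidate labels into a single
    classification prompt (hypothetical template)."""
    lines = [f"Paper title and abstract: {text}"]
    if neighbor_texts:
        lines.append("Titles of connected (cited/citing) papers:")
        for t in neighbor_texts:
            lines.append(f"- {t}")
    lines.append(f"Classify the paper into one of: {', '.join(labels)}.")
    lines.append("Answer with the category name only.")
    return "\n".join(lines)

prompt = build_node_prompt(
    "Attention Is All You Need. We propose the Transformer ...",
    ["BERT: Pre-training of Deep Bidirectional Transformers"],
    ["cs.LG", "cs.CL", "cs.CV"],
)
```

The returned string would then be sent through the service's text-only interface, and the text reply parsed back into a label.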
Figure 1: Pipelines for integrating LLMs into graph learning. (a) An illustration of LLMs-as-Enhancers, where LLMs pre-process the text attributes, and GNNs eventually make the predictions; three different structures for this pipeline are demonstrated in Figure 2. (b) An illustration of LLMs-as-Predictors, where LLMs directly make the predictions; the key component of this pipeline is how to design an effective prompt to incorporate structural and attribute information. In all figures, we use "PLM" to denote small-scale PLMs that can be fine-tuned on downstream datasets, "LLM *" to denote embedding-visible LLMs, and "LLM" to denote embedding-invisible LLMs.

Figure 2: Three strategies to adopt LLMs as enhancers. The first two integrating structures are designed for feature-level enhancement, while the last structure is designed for text-level enhancement. From left to right: (1) Cascading structure: Embedding-visible LLMs enhance text attributes directly by encoding them into initial node features for GNNs. (2) Iterative structure: GNNs and PLMs are co-trained in an iterative manner. (3) Text-level enhancement structure: Embedding-invisible LLMs are initially adopted to enhance the text attributes by generating augmented attributes. The augmented attributes and original attributes are encoded and then ensembled together.
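The cascading structure in Figure 2 can be sketched as follows: an encoder maps each text attribute to a vector, and a GNN-style aggregation then consumes those vectors as initial node features. The hash-based encoder and the single mean-aggregation step below are toy stand-ins for an embedding-visible LLM and a trained GNN.

```python
import hashlib

def toy_text_embedding(text, dim=8):
    """Stand-in for an embedding-visible LLM encoder: maps a text
    attribute to a deterministic dim-dimensional vector in [0, 1]."""
    digest = hashlib.sha256(text.encode()).digest()
    return [digest[i] / 255.0 for i in range(dim)]

def mean_aggregate(features, adj):
    """One GCN-style propagation step: average each node's feature
    with its neighbors' features (no learned weights, for brevity)."""
    out = []
    for i, f in enumerate(features):
        neigh = [features[j] for j in adj[i]] + [f]
        out.append([sum(vals) / len(neigh) for vals in zip(*neigh)])
    return out

texts = ["graph neural networks", "language models", "optimization"]
adj = {0: [1], 1: [0, 2], 2: [1]}            # toy citation graph
x = [toy_text_embedding(t) for t in texts]   # LLM encodes text attributes
h = mean_aggregate(x, adj)                   # GNN consumes them as node features
```

In the actual pipeline, the encoder would be frozen or fine-tuned once, and `h` would pass through further message-passing layers and a classifier head.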
4. LLMS AS THE ENHANCERS

In this section, we investigate the potential of employing LLMs to enrich the text attributes of nodes. As presented in Section 3, we consider feature-level enhancement, which injects LLMs' knowledge by encoding text attributes into features, and text-level enhancement, which injects LLMs' knowledge by augmenting the text attributes at the text level. We first study feature-level enhancement.

4.1 Feature-level Enhancement

In feature-level enhancement, we mainly study how to combine embedding-visible LLMs with GNNs at the feature level. The embeddings generated by LLMs are adopted as the initial features of GNNs. We first briefly introduce the datasets and the dataset split settings we use.

Datasets. In this study, we adopt Cora [40], Pubmed [57], Ogbn-arxiv, and Ogbn-products [23], four popular benchmarks for node classification. We present their detailed statistics and descriptions in Appendix A. Specifically, we examine two classification dataset split settings, tailored for the Cora and Pubmed datasets; for Ogbn-arxiv and Ogbn-products, we adopt the official dataset splits. (1) For Cora and Pubmed, the first splitting setting addresses low-labeling-rate conditions, which is a commonly adopted setting [75]. To elaborate, we randomly select 20 nodes from each class to form the training set. Then, 500 nodes are chosen for the validation set, while 1000 additional random nodes from the remaining pool are used for the test set. (2) The second splitting setting caters to high-labeling-rate scenarios, which is also a commonly used setting, adopted by TAPE [22] as well. In this setting, 60% of the nodes are designated for the training set, 20% for the validation set, and the remaining 20% are set aside for the test set. We take the output of GNNs and compare it with the ground truth of the dataset. We conduct all the experiments on 10 different seeds and report both average accuracy and variance.

Baseline Models. In our exploration of how LLMs augment node attributes at the feature level, we consider three main components: (1) Selection of GNNs, (2) Selection of LLMs, and (3) Integrating structures for LLMs and GNNs. In this study, we choose the most representative models for each component, and the details are listed below.

1. Selection of GNNs: For GNNs on Cora and Pubmed, we consider Graph Convolutional Network (GCN) [27] and Graph Attention Network (GAT) [64]. We also include the performance of MLP to evaluate the quality of text embeddings without aggregations. For Ogbn-arxiv, we consider GCN, MLP, and a better-performing GNN model, RevGAT [28]. For Ogbn-products, we consider GraphSAGE [19], which supports neighborhood sampling for large graphs, MLP, and a state-of-the-art model, SAGN [58]. For RevGAT and SAGN, we adopt all tricks utilized in the OGB leaderboard [23] 1.

1 https://ogb.stanford.edu/docs/leader_nodeprop/
2. Selection of LLMs: To enhance the text attributes at the feature level, we specifically require embedding-visible LLMs. Specifically, we select (1) Fixed PLM/LLMs without fine-tuning: We consider Deberta [21] and LLaMA [63]. The first is adapted from GLEM [83], and we follow the setting of GLEM [83] to adopt the [CLS] token of PLMs as the text embeddings. LLaMA is a widely adopted open-source LLM, which has also been included in Langchain 2. We adopt LLaMA-cpp 3, which uses the [EOS] token as the text embedding, in our experiments. (2) Local sentence embedding models: We adopt Sentence-BERT [54] and e5-large [68]. The former is one of the most popular lightweight deep text embedding models, while the latter is the state-of-the-art model on the MTEB leaderboard [43]. (3) Online sentence embedding models: We consider two online sentence embedding models, i.e., text-ada-embedding-002 [44] from OpenAI and Palm-Cortex-001 [1] from Google. Although the strategy to train these models has been discussed [1; 44], their detailed parameters are not known to the public, nor is their capability on node classification tasks. (4) Fine-tuned PLMs: We consider fine-tuning Deberta on the downstream dataset, and adopt the last hidden states of PLMs as the text embeddings. For fine-tuning, we consider the two integrating structures below.

3. Integration structures: We consider the cascading structure and the iterative structure. (1) Cascading structure: we first fine-tune the PLMs on the downstream dataset. Subsequently, the text embeddings engendered by the fine-tuned PLM are employed as the initial node features for GNNs. (2) Iterative structure: PLMs and GNNs are first trained separately and further co-trained in an iterative manner by generating pseudo labels for each other. This grants us the flexibility to choose either the final iteration of PLMs or GNNs as the predictive models, which are denoted as "GLEM-LM" and "GLEM-GNN", respectively.

2 https://python.langchain.com/
3 https://github.com/ggerganov/llama.cpp
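A minimal sketch of the iterative structure: two learners are first trained on the labeled nodes and then co-trained by generating pseudo labels for each other. The 1-nearest-neighbor learners on scalar features below are toy stand-ins for the PLM and GNN used by GLEM.

```python
def fit_1nn(xs, ys):
    """Stand-in learner (1-nearest neighbor on scalar features); in the
    real iterative structure these would be a PLM and a GNN."""
    def predict(x):
        return min(zip(xs, ys), key=lambda pair: abs(pair[0] - x))[1]
    return predict

def iterative_cotrain(x, y_known, rounds=2):
    """Co-training sketch: model A labels the unlabeled nodes, and those
    pseudo labels become extra supervision for model B, repeatedly."""
    labeled = dict(y_known)  # node index -> (pseudo) label
    model_a = model_b = None
    for _ in range(rounds):
        xs = [x[i] for i in labeled]
        ys = [labeled[i] for i in labeled]
        model_a = fit_1nn(xs, ys)            # "PLM" phase
        for i in range(len(x)):              # pseudo-label unlabeled nodes
            labeled.setdefault(i, model_a(x[i]))
        xs = [x[i] for i in labeled]
        ys = [labeled[i] for i in labeled]
        model_b = fit_1nn(xs, ys)            # "GNN" phase
    return model_a, model_b

x = [0.1, 0.2, 0.9, 1.0]                     # toy 1-D node features
model_a, model_b = iterative_cotrain(x, {0: "A", 3: "B"})
```

Either final learner can serve as the predictor, mirroring the paper's "GLEM-LM" and "GLEM-GNN" variants.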
We also consider non-contextualized shallow embeddings [41], including TF-IDF and Word2vec [23], as a comparison. TF-IDF is adopted to process the original text attributes for Pubmed [57], and Word2vec is utilized to encode the original text attributes for Ogbn-arxiv [23]. For Ogbn-arxiv and Ogbn-products, we also consider the GIANT features [6], which cannot be directly applied to Cora and Pubmed because of their special pre-training strategy. Furthermore, we do not include LLaMA for Ogbn-arxiv and Ogbn-products because it imposes an excessive computational burden when dealing with large-scale datasets.

The results are shown in Table 1, Table 2, and Table 3. In these tables, we demonstrate the performance of different combinations of text encoders and GNNs. We also include the performance of MLPs, which suggests the original quality of the textual embeddings before aggregation. Moreover, we use colors to show the top 3 LLMs under each GNN (or MLP) model. Specifically, we use yellow to denote the best one under a specific GNN/MLP model, green the second best one, and pink the third best one.
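The non-contextualized TF-IDF baseline can be sketched in a few lines. This is a generic TF-IDF formulation (raw term frequency times log inverse document frequency), not necessarily the exact variant used in the experiments.

```python
import math
from collections import Counter

def tfidf_features(docs):
    """Non-contextualized shallow embedding: one TF-IDF vector per
    document over a shared vocabulary (generic formulation)."""
    tokenized = [d.lower().split() for d in docs]
    vocab = sorted({w for doc in tokenized for w in doc})
    df = Counter(w for doc in tokenized for w in set(doc))
    n = len(docs)
    feats = []
    for doc in tokenized:
        tf = Counter(doc)
        feats.append([tf[w] / len(doc) * math.log(n / df[w]) for w in vocab])
    return vocab, feats

docs = [
    "gene expression in cancer",
    "cancer therapy response",
    "graph representation learning",
]
vocab, x = tfidf_features(docs)
```

Like the LLM embeddings, the resulting vectors `x` would simply serve as the initial node features for a GNN.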
4.1.1 Node Classification Performance Comparison

Observation 1. Combined with different types of text embeddings, GNNs demonstrate distinct effectiveness.
From Table 3, if we compare the performance of TF-IDF and fine-tuned PLM embeddings when MLP is the predictor, we can see that the latter usually achieves much better performance. However, when a GNN model is adopted as the predictor, the performance of the TF-IDF embedding is close to, and even surpasses, that of the PLM embedding. This result is consistent with the findings in [49], which suggest that GNNs present distinct effectiveness for different types of text embeddings. However, we do not find a simple metric to determine the effectiveness of GNNs on different text embeddings. We will further discuss this limitation in Section 7.2.

Observation 2. Fine-tune-based LLMs may fail at low labeling rate settings.
From Table 1, we note that, whether the cascading structure or the iterative structure is used, fine-tune-based LLMs' embeddings perform poorly in low labeling rate settings. Both fine-tuned PLM and GLEM present a large gap against deep sentence embedding models and TF-IDF, which do not involve fine-tuning. When training samples are limited, fine-tuning may fail to transfer sufficient knowledge for the downstream tasks.

Observation 3. With a simple cascading structure, the combination of deep sentence embeddings with GNNs makes a strong baseline.
From Table 1, Table 2, and Table 3, we can see that with a simple cascading structure, the combination of deep sentence embedding models (including both local and online sentence embedding models) with GNNs shows competitive performance under all dataset split settings. The intriguing aspect is that, during the pre-training stage of these deep sentence embedding models, no structural information is incorporated. Therefore, it is astonishing that these structure-unaware models can outperform GIANT on Ogbn-arxiv, which entails a structure-aware self-supervised learning stage.

Observation 4. Simply enlarging the model size of LLMs may not help with the node classification performance.
From Table 1 and Table 2, we can see that although the embeddings generated by LLaMA outperform those of Deberta-base without fine-tuning by a large margin, there is still a large gap against the embeddings generated by deep sentence embedding models in the low labeling rate setting. This result indicates that simply increasing the model size may not be sufficient to generate high-quality embeddings for node classification. The pre-training objective may be an important factor.

Table 1: Experimental results for feature-level LLMs-as-Enhancers on Cora and Pubmed with a low labeling ratio. Since MLPs do not provide structural information, it is meaningless to co-train them with PLMs (their performance is shown as N/A). We use yellow to denote the best performance under a specific GNN/MLP model, green the second best one, and pink the third best one.

Table 2: Experimental results for feature-level LLMs-as-Enhancers on Cora and Pubmed with a high labeling ratio. We use yellow to denote the best performance under a specific GNN/MLP model, green the second best one, and pink the third best one.
4.1.2 Scalability Investigation

In the aforementioned experimental process, we empirically find that on larger datasets like Ogbn-arxiv, methods like GLEM that require fine-tuning of the PLMs take several orders of magnitude more time in the training stage than those that do not require fine-tuning. This presents a hurdle for these approaches to be applied to even larger datasets or to scenarios with limited computing resources. To gain a more comprehensive understanding of the efficiency and scalability of different LLMs and integrating structures, we conduct an experiment to measure the running time and memory usage of different approaches. It should be noted that we mainly consider the scalability problem in the training stage, which is different from the efficiency problem in the inference stage.

In this study, we choose representative models from each type of LLMs and each kind of integrating structure. For TF-IDF, a shallow embedding that involves neither training nor inference, the time and memory complexity of the LM phase can be neglected. In terms of Sentence-BERT, this kind of local sentence embedding model does not involve a fine-tuning stage in the LM phase and only needs to generate the initial embeddings. For text-ada-embedding-002, which is offered as an API service, we make API calls to generate embeddings. In this part, we set the batch size of Ada to 1,024 and call the API asynchronously, then measure the time consumption to generate embeddings as the LM phase running time. For Deberta-base, we record the time used to fine-tune the model and generate the text embeddings as the LM phase running time. For GLEM, since it co-trains the PLM and GNNs, we consider the LM phase running time and GNN phase running time together (and show the total training time in the "LM phase" column). The efficiency results are shown in Table 4. We also report the peak memory usage in the table. We adopt the default output dimension of each text encoder, which is shown in brackets.

Observation 5. For integrating structures, the iterative structure introduces massive computation overhead in the training stage.
From Table 2 and Table 3, GLEM presents superior performance on datasets with an adequate number of labeled training samples, especially on large-scale datasets like Ogbn-arxiv and Ogbn-products. However, from Table 4, we can see that it introduces massive computation overhead in the training stage compared to Deberta-base with a cascading structure, which indicates a potential efficiency problem of iterative structures.
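The running-time and peak-memory measurement described above can be sketched as a small profiling harness. Note that the paper reports GPU peak memory, whereas `tracemalloc` below tracks Python-heap allocations only; it is a CPU-side stand-in, and `fake_lm_phase` is a placeholder for the actual embedding-generation or fine-tuning step.

```python
import time
import tracemalloc

def profile_phase(fn, *args):
    """Measure wall-clock running time and peak Python-heap memory of one
    training phase (LM phase or GNN phase)."""
    tracemalloc.start()
    t0 = time.perf_counter()
    result = fn(*args)
    elapsed = time.perf_counter() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

def fake_lm_phase(n_nodes):
    # Placeholder for embedding generation over n_nodes text attributes.
    return [[0.0] * 128 for _ in range(n_nodes)]

emb, seconds, peak_bytes = profile_phase(fake_lm_phase, 1000)
```

Running such a harness separately on the LM phase and the GNN phase reproduces the structure of the efficiency comparison in Table 4.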
Tables 1 and 2 report, for each text encoder, the accuracy (mean ± std over 10 seeds) under GCN, GAT, and MLP on Cora and Pubmed. The encoders are grouped as: non-contextualized shallow embeddings (TF-IDF, Word2Vec); PLM/LLM embeddings without fine-tuning (Deberta-base, LLaMA-7B); local sentence embedding models (Sentence-BERT (MiniLM), e5-large); online sentence embedding models (text-ada-embedding-002, Google Palm-Cortex-001); fine-tuned PLM embeddings (fine-tuned Deberta-base); and the iterative structure (GLEM-GNN, GLEM-LM). [The numeric cells of Tables 1 and 2 were garbled in extraction and are omitted here.]

Table 3: Experimental results for feature-level LLMs-as-Enhancers on the Ogbn-arxiv and Ogbn-products datasets (columns: GCN, MLP, RevGAT for Ogbn-arxiv; GraphSAGE, SAGN, MLP for Ogbn-products). MLPs do not provide structural information, so it is meaningless to co-train them with PLMs; thus we do not show that performance. We use yellow to denote the best performance under a specific GNN/MLP model, green the second best one, and pink the third best one.

Table 4: Efficiency analysis on Ogbn-arxiv. Note that we show the dimension of the generated embeddings in brackets. For GIANT, a special pre-training stage is adopted, which introduces computation overhead orders of magnitude larger than that of fine-tuning. The specific time was not discussed in the original paper; therefore its cost in the LM phase is not shown in the table.
Moreover, from Table 4, we note that for the GNN phase, the dimension of the initial node features, which is the default output dimension of the text encoders, mainly determines memory usage and time cost.

Observation 6. In terms of different LLM types, deep sentence embedding models present better efficiency in the training stage.
In Table 4, we analyze the efficiency of different types of LLMs by selecting representative models from each category. Comparing fine-tune-based PLMs with deep sentence embedding models, we observe that the latter demonstrate significantly better time efficiency, as they do not require a fine-tuning stage. Additionally, deep sentence embedding models exhibit improved memory efficiency, as they solely involve the inference stage and do not need to store additional information such as gradients.

4.2 Text-level Enhancement

Feature-level enhancement requires the adopted LLMs to be embedding-visible. However, the most powerful LLMs, such as ChatGPT [45], PaLM [1], and GPT4 [46], are all deployed as online services [59] with strict restrictions, so that users cannot get access to model parameters and embeddings. Users can only interact with these embedding-invisible LLMs through texts, which means that user inputs must be formatted as texts and LLMs will only yield text outputs. In this section, we explore the potential for these embedding-invisible LLMs to perform text-level enhancement. To enhance the text attribute at the text level, the key is to expand information that is not contained in the original text attributes. Based on this motivation and a recent paper [22], we study the following two potential text-level enhancements; illustrative examples of these two augmentations are shown in Figure 3.

1. TAPE [22]: The motivation of TAPE is to leverage the knowledge of LLMs to generate high-quality node features.