execute complex, low-level tasks. By successfully handling task complexity and allowing for the discovery of unexpected high-performing policies, EUREKA successfully deals with two key reasons that inhibit the translation of the desired agent behavior to rewards, which were identified in subsection 5.1. Finally, similarly to the LLM4RL-Reward studies discussed in 5.1, a key benefit of EUREKA is the alignment of rewards to human preferences by incorporating human knowledge about the state through appropriate initialization of the reward.

Similarly, Song et al. [113] proposed a three-step, self-refined LLM framework for generating reward functions that help robotic agents achieve specific goals. The first step is the initial design of the reward function based on the natural language input provided by the user. The input includes a description of the environment, a description of the task at hand broken down into goals, a description of the observable state (such as the position and velocity of the robot), and a list of rules that should be followed when designing the reward function (for example, restricting the reward to depend exclusively on known quantities). In the second step, the initial reward function is applied, the robot acts, and its behavior is evaluated. In the evaluation step, the user collects their observations on the training process and convergence, the objective metrics, and the success rate of the tasks, and makes an overall assessment ("good" or "bad") of the robotic agent's performance. Finally, in the self-refinement step, the feedback of the user is embedded in a feedback prompt, which is then used by the LLM to generate the next reward signal. The authors evaluated the performance of the framework on nine different continuous control tasks across three different robotic systems and achieved a success rate between 93% and 100% for all tasks, outperforming the corresponding manual reward setting in almost all of them.
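The three-step loop of Song et al. [113] can be sketched as follows; the helper callables, the prompt contents, and the metric names are illustrative assumptions, not the authors' implementation:

```python
def self_refine_reward(llm, evaluate, initial_prompt, max_iterations=3):
    """Iteratively design, apply, and refine an LLM-generated reward function."""
    prompt = initial_prompt  # environment, goals, observable state, design rules
    reward_code = None
    for _ in range(max_iterations):
        reward_code = llm(prompt)            # step 1: design the reward function
        metrics = evaluate(reward_code)      # step 2: the robot acts and is evaluated
        if metrics["assessment"] == "good":  # the user's overall judgment
            break
        # step 3: embed the user's feedback in a refinement prompt
        prompt = (
            f"{initial_prompt}\n"
            f"Previous reward function:\n{reward_code}\n"
            f"Success rate: {metrics['success_rate']:.0%}, "
            f"assessment: {metrics['assessment']}. Please revise the reward function."
        )
    return reward_code
```

The loop terminates either when the user judges the behavior acceptable or after a fixed budget of refinement rounds.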
RL/LLM Taxonomy Tree

5.2 LLM4RL-Goal

Goal setting is a key element in intrinsically motivated reinforcement learning [28], which addresses key challenges of the traditional Deep RL setting, including the difficulty of abstracting actions and of exploring the environment [6]. Contrary to traditional RL, where the training of the agent relies exclusively on external rewards from the environment, intrinsic RL builds on the psychology concepts of intrinsic motivation and developmental learning, which are inspired by babies developing their skills by exploring the environment [6]. In a similar manner, intrinsic motivation in RL allows agents to learn reusable, general skills that are applicable to various tasks over their lifetime. With the right internal goals, agents can autonomously explore open-ended environments and build useful skills in a pretraining phase. Intrinsic RL is therefore particularly useful when the design of a reward function is not straightforward. However, the benefits of intrinsic RL are limited when the environment complexity and size increase significantly, since there is no guarantee that the skills the agent builds throughout its lifetime will be useful for any downstream tasks at all.

To address this shortcoming, Du et al. [41] proposed a new, intrinsically motivated RL method called ELLM (Exploring with LLMs). The authors build on the observation that, when faced with new tasks, humans do not uniformly (and therefore blindly) explore the outcome spaces of their actions, but rely on their physical and social common sense to prioritize the exploration of plausibly useful behaviors that have the reasonably highest likelihood to succeed, such as using a key to open a door. ELLM leverages knowledge from text corpora to capture useful semantic information and thus enable structured exploration of task-agnostic (i.e., pretraining) environments by allowing the agent to use this knowledge to reason about the usefulness of new behaviors. In particular, the authors used a pre-trained LLM, GPT-2 [102], to suggest goals during exploration.
In each step, the LLM is prompted with a list of the agent's available actions, along with a text description of the current observation, and suggests a goal, which should be diverse, common-sense sensitive, and context sensitive. After the agent takes an action and transitions in the environment, the goal-conditioned rewards are computed based on the semantic similarity between the LLM-generated goal and the description of the agent's transition. The exploration of RL training is therefore controlled and enhanced without the need for explicit human intervention. ELLM was shown capable of producing context-sensitive, common-sensical, and diverse goals, boosting pretraining exploration performance, as well as performance on downstream tasks.

TaskExplore by Quartey et al. [101] is another framework that uses LLMs to facilitate RL training by generating intermediate tasks. The goal of TaskExplore is to maximize the use of previous experiences collected by a robotic agent during its training in a simulated home environment by generating and learning useful auxiliary tasks while solving a larger target task. The process starts by converting a task specification to a graph using Linear Temporal Logic. Then, the graph is traversed to create an abstract task template where object propositions are replaced with their context-aware embedding representations, which have been generated by an LLM. The abstract task template is subsequently used to generate auxiliary tasks for each proposition node by swapping relevant objects from the environment and selecting the objects whose embeddings are most similar to the template embedding node under consideration.
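Both ELLM's goal-conditioned reward and TaskExplore's object selection hinge on embedding similarity. A minimal sketch of the ELLM-style reward, assuming precomputed embeddings and an illustrative similarity threshold:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def goal_conditioned_reward(goal_embedding, transition_embedding, threshold=0.5):
    """Reward the agent by the semantic similarity between the LLM-suggested
    goal and the text description of the agent's transition; zero reward
    below the (illustrative) similarity threshold."""
    similarity = cosine_similarity(goal_embedding, transition_embedding)
    return similarity if similarity > threshold else 0.0
```

In practice the embeddings would come from a sentence encoder applied to the goal text and the transition caption; here they are treated as given.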
The training process consists of both online RL (where a policy for the given task is learned by following an epsilon-greedy behavioral policy) and offline RL (where all Q-value functions, including those of subtasks, are updated). Interestingly, the authors found that learning the auxiliary tasks does not adversely affect the performance of learning the main task. In addition, the auxiliary tasks learned through TaskExplore performed better compared to the same tasks learned through a random behavior policy. Finally, the curriculum built on auxiliary tasks developed through TaskExplore outperformed the corresponding curriculum developed from randomly sampled tasks.

5.3 LLM4RL-Policy

In this subclass, a large language model directly assists the policy of an RL agent by generating trajectories for pretraining [106], creating a policy prior [56], acting as a planner in a Planner-Actor-Reporter scheme [34], directly representing the policy [22], or combining an adapter model to fine-tune the prompt of another LLM that generates instructions [146].

5.3.1 Trajectory Generation for Pretraining

The power of Reinforcement Learning largely lies in the ability of the agent to learn through interacting with its environment. However, sometimes the interaction with the environment is impractical, either because it is expensive (e.g., in robotics) or dangerous (as in healthcare or autonomous vehicles) [68]. Moreover, as explained in the LLM4RL-Reward subsection (5.1), even when such interaction is possible, and assuming available training data and computational resources, training a model from scratch might still face slow convergence. Therefore, pretraining with previously collected data can benefit the agent, particularly in complex domains where large datasets are needed for scalable generalization. In such cases, Offline Reinforcement Learning is commonly used. Offline RL handles the control problem as a sequence modeling problem [27, 61, 46] and uses supervised learning to fit a dataset consisting of state-action-reward trajectories.
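The trajectory representation that such sequence-modeling approaches fit can be sketched as follows; the per-step lists and the choice of the remaining cumulative reward as the first element of each step are illustrative assumptions:

```python
def flatten_trajectory(rewards, states, actions):
    """Represent a state-action-reward trajectory as one flat sequence for
    supervised sequence modeling: each step contributes the cumulative
    reward still to be collected, the state, and the action."""
    sequence = []
    for i in range(len(states)):
        remaining_reward = sum(rewards[i:])  # reward collected from step i onward
        sequence.extend([remaining_reward, states[i], actions[i]])
    return sequence
```

The flattened sequence can then be tokenized and fed to an autoregressive model exactly like a sentence.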
By framing offline RL as a sequence modeling problem, Reid et al. [106] investigate whether LLMs can be successfully transferred to other domains when fine-tuned on offline RL tasks that have no relation to language. The authors modeled trajectories autoregressively, representing each trajectory t as a sequence of the form t = (R̂_1, s_1, a_1, R̂_2, s_2, a_2, ..., R̂_N, s_N, a_N), with R̂_i, s_i, and a_i representing the cumulative reward, state, and action, respectively, at time step i. The final objective of the supervised learning was a weighted sum of three loss functions: The primary loss function is a classic Mean Squared Error loss. The second loss function is a cosine similarity loss that quantifies the similarity between language representations and offline RL input representations, with the goal of making the input embeddings as similar as possible to their language counterparts. Finally, the third loss function is the negative log-likelihood-based language modeling objective, which allows for joint training of language modeling and trajectory modeling. The authors used the pre-trained models GPT2-small and ChibiT, a model that was pretrained on the Wikitext-103 dataset [77]. As shown by experiments on four Atari tasks and three OpenAI Gym tasks, pretraining on language datasets can successfully transfer to offline RL tasks, most importantly achieving significant gains in terms of both convergence speed and total reward received, compared to offline RL baselines including Decision Transformer [24], CQL [66], TD3+BC [45], BRAC [135], and AWR [95].

5.3.2 Creating a Policy Prior

In subsection 5.1, we reviewed studies where human-agent alignment is achieved through reward design.
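The weighted three-part objective described for Reid et al. [106] above can be sketched as follows; the weights, tensor shapes, and numpy implementation are illustrative assumptions, not the authors' code:

```python
import numpy as np

def combined_loss(pred_actions, true_actions,
                  rl_embeds, lang_embeds,
                  lm_token_probs,
                  lambda_cos=0.1, lambda_lm=0.1):
    """Weighted sum of the three objectives: action-prediction MSE,
    a cosine-similarity term pulling offline-RL input embeddings toward
    language embeddings, and a language-modeling negative log-likelihood."""
    # 1) primary loss: mean squared error on predicted actions
    mse = np.mean((pred_actions - true_actions) ** 2)
    # 2) cosine-similarity loss, averaged over the batch (1 - cosine similarity)
    num = np.sum(rl_embeds * lang_embeds, axis=1)
    den = np.linalg.norm(rl_embeds, axis=1) * np.linalg.norm(lang_embeds, axis=1)
    cos_loss = np.mean(1.0 - num / den)
    # 3) language-modeling loss: negative log-likelihood of the target tokens
    lm_loss = -np.mean(np.log(lm_token_probs))
    return mse + lambda_cos * cos_loss + lambda_lm * lm_loss
```

Each term vanishes when its inputs are already perfect (exact actions, aligned embeddings, probability-one tokens), so the minimum of the combined objective is zero.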
An alternative way to train RL agents that behave according to human preferences is the creation of a policy prior that is aligned with human preferences and reasoning. Hu and Sadigh [56] propose Instruct-RL, a framework for human-AI coordination where the human uses high-level natural language instructions to specify to the AI agent the type of behavior they expect. The human natural language instruction is then passed to pre-trained LLMs, which produce a prior policy. To construct the prior, the LLM is supplied with the initial human instruction as well as a language prompt, which provides a language description of the observations of the environment while the policy is being followed. In practice, the LLM uses a softmax function to predict the probability of possible actions given the observed state and the human instruction. Finally, the policy prior is used as a reference policy during the training of the RL agent to regularize the training objective, with the regularization technique varying according to the RL training algorithm: augmenting the epsilon-greedy method in Q-learning, and adding a KL penalty to the training objective in PPO. The experimental results, based on both the performance of the algorithms in benchmarking tasks (Hanabi and a negotiating game) and on human evaluation, demonstrated that Instruct-RL successfully incorporated human preferences to produce high-performance policies, even when the prior policies were imperfect and generated with simple prompts. However, the authors highlighted the need for fine-tuning to improve test-time performance, since adding the LLM priors to trained agents was shown to add no meaningful improvement to the policy.
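The KL-regularized objective described for Instruct-RL's PPO variant [56] can be sketched as follows; the inputs and the penalty coefficient are illustrative assumptions:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a vector of scores."""
    z = np.exp(x - np.max(x))
    return z / z.sum()

def regularized_objective(policy_logits, llm_logits, reward_term, beta=0.1):
    """The LLM's softmax over actions serves as a prior policy; a KL penalty
    keeps the learned policy close to that prior while maximizing reward."""
    policy = softmax(policy_logits)  # current policy distribution over actions
    prior = softmax(llm_logits)      # LLM-induced prior policy
    kl = float(np.sum(policy * np.log(policy / prior)))
    return reward_term - beta * kl
```

When the learned policy matches the prior, the KL term is zero and the objective reduces to the plain reward term; the further the policy drifts from the prior, the larger the penalty.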
5.3.3 LLM Being the Policy

While LLMs possess general reasoning capabilities, they are not trained to solve environment-specific problems during their training, and thus cannot influence or explore the specific environment where a task needs to be accomplished. Carta et al. [22] proposed a framework to overcome the lack of alignment between the general statistical knowledge of the LLM and the environment by functionally grounding LLMs and using them directly as the policy to be updated. Using a simple grid-world-based text environment (BabyAI), the authors formulated a goal-augmented, partially observable MDP where, given a prompt p, the LLM outputs a probability distribution over the possible actions and an action is sampled according to this distribution. The authors used Proximal Policy Optimization to train the agent, used Flan-T5 780M as the policy LLM, and observed an 80% success rate after 250K training steps, which is a significant improvement compared to previous models. The authors also considered generalization to new objects, where a drop in performance was observed, with the model still outperforming the benchmark. The biggest drop is observed when testing generalization to new tasks or a different language.

5.3.4 Planner

Dasgupta et al. [34] combined the reasoning capabilities of the LLM with the specialization and the knowledge of the environment of an RL agent in a Planner-Actor-Reporter scheme. This study belongs to the RL + LLM category, and will therefore be analyzed in section 6. However, the authors also designed a variation of the main framework where the Planner is embedded in the training loop of the Reporter with the goal of increasing its truthfulness, so that it reports accurate information to the Planner. The Reporter uses the reward received at the end of the episode to update its reporting policy such that it eventually learns to only report helpful, i.e., truthful and relevant, information back to the Planner. The reader shall refer to section 6 for more details on this publication.
Table 4: Breakdown of LLM4RL Synergy Goals per Subclass. (Studies: Kwon et al. [67]; Xie et al. [138]; Ma et al. [75]; Song et al. [113]; Du et al. [41]; Quartey et al. [101]; Reid et al. [106]; Hu and Sadigh [56]; Carta et al. [22]; Zhang and Lu [146]. Synergy goal columns: Test-Time Performance, Training Efficiency, Alignment with Human Intent, Grounding, Learning Complex Tasks, Improving Exploration, Policy Reuse.)

5.3.5 Using an Adapter Model

Zhang and Lu [146] developed the RLAdapter framework, a complex system that, apart from the RL agent and the LLM, additionally includes an Adapter model to improve the connection between the RL agent and the LLM without requiring the costly and often impossible fine-tuning of the base LLM. The Adapter is itself an LLM: in particular, a 4-bit quantized version of the LLaMA2-7B model, which is fine-tuned with feedback from the RL agent and the LLM. Embedding the adapter model in the RL training loop is shown to result in more meaningful guidance overall, by enhancing both the LLM's comprehension of downstream tasks and the agent's understanding capability and effective learning of difficult tasks. The key metric that aids this closed-loop feedback is the understanding score, which quantifies the semantic similarity between the agent's recent actions and the sub-goals suggested by the LLM, as measured by the cosine similarity between the embeddings of the LLM-provided sub-goals and the episode trajectory. The prompt of the Adapter includes a description of the player's observations, past actions, past sub-goals, and the understanding score. The Adapter generates a prompt for the base LLM, containing a natural language summary of the player's past observations, actions, and understanding. In turn, the base LLM generates updated instructions for the RL agent, which, as usual, takes an action, receives the response from the environment, and updates its policy. The understanding score is then calculated and is used to fine-tune the adapter model, which then goes on to generate a new prompt. The performance of RLAdapter with GPT-3.5 [86] was compared to baseline models, including ELLM by [41]. RLAdapter was shown to outperform all baselines at 1 million steps, apart from ELLM; however, at 5 million steps, the performance of RLAdapter with GPT-4 [87] exceeded that of ELLM as well, and RLAdapter with GPT-3.5 [86] matched SPRING by [136] in terms of performance. The main novelty of RLAdapter lies in fine-tuning the lightweight, cheaper-to-update adapter model, while only updating the prompt of the base LLM. Despite the LLM prompt being updated through this feedback loop, we do not classify this study as RL4LLM, since RL is not used to improve the performance of a downstream language task.

6 RL+LLM: Combining Independently Trained Models for Planning

The last major class of studies includes those where the RL agent and the LLM are fundamental components of the same framework and where, contrary to the two previous categories, they are independent from each other. In this class, an RL agent is trained to learn specific skills, and the LLM leverages its knowledge of the real world to determine ways to plan over those skills in order to accomplish a task. This combination results in an agent that knows how to perform long-horizon tasks in a specific environment.
Table 6: LLM4RL Base Algorithms and RL Architecture

Study | Base LLM | RL Architecture
Kwon et al. [67] | GPT-3 [17] | DQN [80]
Xie et al. [138] | GPT-4 [87] | PPO [110], SAC [54]
Ma et al. [75] | GPT-4 [87] (also experiments with GPT-3.5) | PPO [110]
Song et al. [113] | GPT-4 [87] | PPO [110]
Du et al. [41] | Codex [26] | DQN [80], with double Q-learning [122], dueling networks [129], and multi-step learning [117]
Quartey et al. [101] | InstructGPT text-davinci-003 (for generating instructions) [90], Sentence-T5 [83] (for encoding object descriptions to vectors) | LPOPL (LTL Progression for Off-Policy Learning) [121]
Reid et al. [106] | GPT-2 [102], CLIP [103], iGPT [25] | Decision Transformer [24], CQL [66], TD3+BC [45], BRAC [135], AWR [95]
Hu and Sadigh [56] | GPT-3.5 [86] (text-davinci-003) | Q-learning [81] and PPO [110]
Carta et al. [22] | Flan-T5 780M [104] | PPO [110]
Zhang and Lu [146] | GPT-3.5 [86] and GPT-4 [87] | PPO [110]

Table 5: LLM4RL Environment and Applications

Study | Environment and Application
Kwon et al. [67] | Ultimatum Game, 2-Player Matrix Games, DEAL OR NO DEAL negotiation task [69]
Xie et al. [138] | 17 manipulation tasks across two robotic manipulation benchmarks (MANISKILL2 [52] and METAWORLD [141]) and two locomotion environments of MUJOCO [15]. 1) METAWORLD: benchmark for multi-task robotics learning and preference-based reinforcement learning; robot arm for tabletop tasks: Drawer Open/Close, Window Open/Close, Button Press, Sweep, Door Unlock/Close, Handle Press/Press Slide. 2) MANISKILL2: object manipulation tasks in environments with realistic physical simulations; tasks: Lift/Pick/Stack Cube, Turn Faucet, Open Cabinet Door/Drawer, Push Chair. 3) Gym MuJoCo: Hopper (Move Forward, Front Flip, Back Flip), Ant (Move Forward, Wave Leg, Lie Down). 4) Real robot: manipulation tasks (pick-and-place, assembly, articulated object manipulation with revolute or sliding joint, and mobile manipulation)
Ma et al. [75] | Isaac Gym environments: Cartpole, Quadcopter, FrankaCabinet, Anymal, BallBalance, Ant, AllegroHand, Humanoid, ShadowHand, Over, DoorCloseInward, DoorCloseOutward, DoorOpenInward, DoorOpenOutward, Scissors, SwingCup, Switch, Kettle, LiftUnderarm, Pen, BottleCap, CatchAbreast, CatchOver2Underarm, CatchUnderarm, ReOrientation, GraspAndPlace, BlockStack, PushBlock, TwoCatchUnderarm. Pen spinning as a complicated dexterous task
Song et al. [113] | Three robotic systems: 1) robotic manipulator for ball catching, ball balancing, and ball pushing (Franka Emika Panda [55]); 2) quadruped robot for velocity tracking, running, and walking to target (Anymal by ANYbotics [4]); 3) quadcopter for hovering, flying through a wind field, and velocity tracking (Crazyflie by Bitcraze [14])
Du et al. [41] | 1) Crafter game environment (2-D version of Minecraft), modified to a) replace the general "Do" command with more specific commands and b) increase damage against enemies and reduce the amount of wood needed to craft a table. 2) Housekeep robotic simulator by [64], where an agent cleans up a house by rearranging objects
Quartey et al. [101] | HomeGrid. Used a food preparation task (maps to visiting the right squares in the right order)
Reid et al. [106] | Two multi-agent coordination games: Say-Select (a cooperative game where two players collect rewards by selecting from a set of balls, each of which is mapped to a reward value that is only known to one of the players) and Hanabi [10], a cooperative card game
Hu and Sadigh [56] | Multi-agent coordination games: Say-Select and Hanabi [10]
Carta et al. [22] | New environment introduced: BabyAI-Text (adaptation of BabyAI [29] to be text-only). Minigrid environment where an agent navigates and interacts with objects through 6 commands: turn left, turn right, go forward, pick up, drop, toggle
Zhang and Lu [146] | Crafter game with 22 different tasks (e.g., collecting resources, crafting items, defeating monsters)

This category can be further refined based on whether planning relies on conversational feedback or not. In the first subcategory, RL+LLM-No Language Feedback, the LLM generates a static skill graph but does not participate in the planning process after that. In the second subcategory, RL+LLM-With Language Feedback, the user query or the LLM prompt is updated according to the results of the interaction between the agent and the environment in each step. However, we should highlight that RL+LLM-With Language Feedback studies where the prompt is modified during planning are completely different from the RL4LLM-Prompt studies that were analyzed in section 4.2: the goal of frameworks in the RL4LLM-Prompt category is the improvement of the LLM itself, while the goal of frameworks in the RL+LLM category is planning towards a downstream task that is not directly related to the LLM, or to natural language in general.

6.1 RL+LLM-Without Language Feedback

Aside from grounding robotic agents to their environment, RL+LLM combinations have been shown to be beneficial for learning multiple, long-horizon tasks in an open-ended environment, as in the case of Yuan et al. [142], who developed Plan4MC, a framework for executing Minecraft tasks. As the authors explain, exploration under long-horizon tasks owes its difficulty to the size, complexity, and partial observability of open-ended environments. The most common strategy so far in reinforcement learning literature and practice has been imitation learning, which relies on expert demonstrations or video datasets, which are frequently difficult to obtain.
However, even setting aside this obstacle, training an RL agent in a large state space is inevitably hard due to sample inefficiency, while skipping demonstrations is not a viable choice, since it might allow the agent to learn only a very restricted set of skills. As such, the main idea of [142] is to break down tasks into basic, short-horizon skills, learn those separately, and plan over skills, in other words, to find the proper sequence of skills to be executed. In this context, reinforcement learning is used to train the fundamental skills, which are classified as "Find", "Manipulate", and "Craft", independently in advance. It is worth noting that each type of skill is trained with a different algorithm. The authors prompt ChatGPT [85] by providing the context, along with an example of the output in the desired format, and ChatGPT produces the skill graph in the desired format, where nodes represent skills and arcs represent "require" and "consume" relationships. During online planning, the agent alternates between skill planning (i.e., identifying a feasible plan to achieve the goal by performing depth-first search on the skill graph) and skill execution (i.e., selecting policies to solve the planned skills, and reverting to skill planning if the execution of a task fails). The experimental results confirmed the validity of Plan4MC, which was shown to achieve a higher success rate compared to other variations of the algorithm, including those without task decomposition or without separate learning of "Find". In a separate set of experiments, ChatGPT was also used to generate the plan, but this model variation was outperformed by the original Plan4MC version.

6.2 RL+LLM-With Language Feedback

In a common planning scheme under this category, the LLMs generate instructions for tasks that the agent has already learned through Reinforcement Learning, while the feedback from the environment is used to update the instructions.
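Plan4MC's skill-planning step (section 6.1), a depth-first search over the LLM-generated skill graph, can be sketched as follows; the graph encoding (each skill mapped to its prerequisite skills) is an illustrative assumption, and "consume" relationships and execution failures are not modeled:

```python
def plan_skills(skill_graph, goal):
    """Depth-first search over a skill graph whose edges encode "require"
    relationships, emitting skills in the order they must be executed
    to reach `goal`."""
    plan, visited = [], set()

    def dfs(skill):
        if skill in visited:
            return
        visited.add(skill)
        for prerequisite in skill_graph.get(skill, []):
            dfs(prerequisite)  # satisfy requirements before the skill itself
        plan.append(skill)

    dfs(goal)
    return plan
```

In the full framework, the agent would execute the resulting sequence skill by skill and re-plan whenever a skill's execution fails.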
The feedback from the environment includes natural language (although it is not necessarily limited to it; see [60]) and is used to update the user query or prompt based on the results of the interaction between the agent and the environment in each step.

"SayCan" by Ahn et al. [3] involves a robotic assistant that performs household tasks in a kitchen environment. The authors rely on the observation that the knowledge of LLMs about the world, while vast, is not physically grounded to the environment that the robotic agent is operating in. As a result, a fully trained robotic agent might not be able to select the appropriate skills to accomplish a task. The role of reinforcement learning in this case is to achieve grounding by helping the agent obtain awareness of the scene. In this case, grounding is measured by calculating the probability of successfully executing a task using a particular skill in a given state. Both the LLM and RL directly participate in computing this probability: the LLM is used to calculate the probability that each skill contributes to completing the instruction, while the affordance function of the RL agent provides the probability that each skill will be executed successfully. The product of those two quantities is the probability that a skill will successfully perform the instruction. Then, the most probable skill is selected, its policy is executed, and the LLM query is amended to include the language description of the skill. The plan is formulated as a dialogue between the robot and the user, where the user provides a high-level instruction and the robot responds by listing the skill sequence that it is going to execute. SayCan was evaluated on 101 different tasks in a real kitchen environment. To improve the performance of the system, the LLM undergoes prompt engineering to ensure that it produces skill recommendations that are consistent with the user query.
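SayCan's skill selection can be sketched as the product of the two probabilities described above; the dictionaries of per-skill probabilities are illustrative assumptions:

```python
def select_skill(llm_probs, affordance_probs):
    """Pick the skill maximizing the probability of successfully performing
    the instruction: the LLM's estimate that the skill helps complete the
    instruction, times the RL affordance function's estimate that the skill
    succeeds in the current state."""
    scores = {s: llm_probs[s] * affordance_probs[s] for s in llm_probs}
    best = max(scores, key=scores.get)
    return best, scores[best]
```

For example, a skill the LLM strongly recommends but that is infeasible in the current state (low affordance) loses to a moderately relevant but reliably executable skill.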
In fact, the authors found that the performance of the LLM improved when a) the sequential steps were explicitly numbered, b) the objects mentioned varied across the prompt examples, and c) the names of skills and objects were phrased carefully and without errors. In a separate set of experiments, SayCan was integrated with Chain of Thought [131], which was shown to improve its performance at negotiation tasks. In addition, it was able to successfully execute instructions provided in languages other than English.

Similarly to "SayCan", Huang et al. [60] proposed Inner Monologue, a framework for planning and interaction with robotic agents that have been trained to execute a variety of skills. Like the previous study of [3], LLMs help the agent understand what the available skills are, how they affect the environment when executed, and how the changes in the environment translate to feedback in natural language. However, contrary to SayCan, Inner Monologue also provides closed-loop feedback to the LLM predictions.

Inner Monologue chains together three components: a) the pre-trained language-conditioned robotic manipulation skills, b) a set of perception models, like scene descriptors and success detectors, and c) human feedback provided by a user that generates natural language instructions towards the robot. The pretrained manipulation skills are short-horizon skills accompanied by short language descriptions, and may be trained through RL. The LLM plays the role of the Planner, whose goal is to find a sequence of skills that achieve the goal expressed by the user. First, the Planner receives the human instruction and breaks it down into a sequence of steps.
As it executes the generated plan, the Planner receives three types of textual feedback from the environment: a) Success Detection, which answers whether the low-level skill was successful; b) Passive Scene Description, which is provided without explicitly querying the Planner and includes object recognition feedback and task-progress scene descriptions; and c) Active Scene Description, which is provided by a person or a pretrained model (like a Visual Question Answering model) in response to explicit questions asked by the LLM. As the robot interacts with its environment, the collected feedback is continuously appended to the LLM prompt, thus forming an "inner monologue" that closes the loop from the environment to the agent and therefore enhances the planning capabilities of the LLM. Inner Monologue was tested on simulated and real table-top manipulation environments, as well as a real kitchen mobile manipulation environment. In the latter, pre-trained affordance functions are used for action grounding, and the results are compared to SayCan by [3] under standard conditions and under adversarial conditions, with added disturbances during control policy executions that cause the plan to fail. In all cases, the embodied feedback provided in the Inner Monologue framework was shown to improve the success rate of the tasks compared to its predecessor, while under adversarial conditions only Inner Monologue was able to consistently complete the instructions successfully. In addition, Inner Monologue was shown to possess significant reasoning capabilities, including continuous adaptation to new instructions, self-proposing goals in cases of infeasibility, multi-lingual understanding, interactive scene understanding, and robustness to disturbances in human instructions, like swapped order of feedback or typos. Three failure modes were also observed: false positive and negative success detections, LLM planning errors due to ignoring the environment feedback, and control errors.

Dasgupta et al.
[34] proposed a Planner-Actor-Reporter scheme to take advantage of the reasoning capabilities of the LLM and the specialized control skills of a trained RL agent: the LLM acts as the Planner that receives a task description, performs logical reasoning, and decomposes the task into a sequence of instructions that it passes to the Actor, which executes the instructions, while the Reporter provides feedback on the action effects back to the Planner. The framework is implemented in a partially observable 2-D grid-world [35], with each object possessing a unique combination of color, shape, and texture. Both the Actor and the Reporter are RL agents trained with a V-trace loss. The Actor follows a pre-trained policy that has been trained on simple tasks in the same environment. To allow the agent to receive feedback both from environment observations and through natural language, the policy uses two encoders: a convolutional visual encoder for visual observations and an LSTM-based language encoder for natural language instructions (e.g., "Pick up X"), and its action space includes movement within the grid, picking up objects, and examining objects. The Reporter possesses an architecture similar to the Planner's, since it also includes encoders for vision and language, but it additionally possesses a memory module and a policy head, which is a binary classifier that chooses one of two possible reports. The Reporter observes the results of the Actor's interaction with the environment and communicates with the Planner to inform it of its next command. In every step, the information generated by the Reporter is appended to the dialogue transcript, which is then used as the updated prompt of the Planner, which generates a new instruction for the next step. The authors argue that training the Reporter to ignore noise and produce useful feedback for the Planner is more efficient compared to utilizing a large – and therefore expensive – planner.
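Both Inner Monologue and the Planner-Actor-Reporter scheme close the loop by appending textual feedback to the planner's prompt. The following minimal sketch illustrates that loop; all three components are stubbed with hypothetical behavior and stand in for an LLM planner and trained RL agents, not the papers' actual implementations.

```python
# Hypothetical stubs for the Planner-Actor-Reporter loop (illustrative only).

def planner(transcript: str) -> str:
    """Stub Planner (an LLM in the paper): issues the next instruction."""
    if "examined object1" not in transcript:
        return "Examine object1"
    return "Pick up object1" if "good" in transcript else "Pick up object2"

def actor(instruction: str) -> str:
    """Stub Actor (a pre-trained RL policy): returns an observation."""
    return "object1 glows green"  # the "secret" property revealed by examining

def reporter(observation: str) -> str:
    """Stub Reporter: a binary classifier over two possible reports."""
    return "examined object1: good" if "green" in observation else "examined object1: bad"

transcript = "Task: if object1 is good, pick it up; otherwise pick up object2.\n"
first = planner(transcript)                  # -> "Examine object1"
transcript += reporter(actor(first)) + "\n"  # the report is appended to the transcript
second = planner(transcript)                 # -> "Pick up object1"
```

The report appended to the transcript is what lets the Planner condition its next instruction on the outcome of the previous one.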
From a robustness perspective, they showed that the framework exhibited robustness for challenging tasks where the Planner needs to explicitly request specific information to incorporate into its next set of instructions, as well as for tasks performed in environments for which the LLM lacks previous semantic experience and therefore needs to perform abstract logical reasoning. They also showed that the Planner-Actor-Reporter scheme is better at learning tasks that are difficult to learn with pure RL baselines. Table 7 summarizes the tasks and applications of RL+LLM studies.

Table 7: RL+LLM Environment and Applications

Yuan et al. [142]. Environment and long-term task: Minecraft tasks (e.g., "Harvest cooked beef with sword in plains"). Low-level skills: three types of skills: Finding, Manipulation, Crafting.
Ahn et al. [3]. Environment and long-term task: 101 real-world robotic tasks in a kitchen; 551 skills of seven skill families and 17 objects. Low-level skills: include picking, placing and rearranging objects, opening and closing drawers, navigating to various locations, and placing objects in specific configurations.
Huang et al. [60]. Environment and long-term task: three families of tasks (Manipulation, Mobile Manipulation, Drawer Manipulation) in a mock office kitchen environment with 5 locations and 15 objects. Low-level skills: navigation and manipulation skills with policies trained from RGB observation.
Dasgupta et al. [34]. Environment and long-term task: 2-D partially observable grid-world [35] with 4 objects, each of which has a "secret" property; three types of tasks: 1) secret property conditional task ("If [object 1] is good, pick up [object 1], otherwise pick up [object 2]"), 2) secret property search task ("The objects are [], [], [], and []. Pick up the object with the good secret property"). Low-level skills: primitive skills like "Pick up object X" or "Examine object X".

7 Discussion

7.1 Goals of Synergy and Reasons for Success

So far, we have built our taxonomy based on the structural breakdown of the methods analyzed in the studies that combine Reinforcement Learning and Large Language Models, by identifying the way that each of the two models is embedded in the integrated framework where they coexist and potentially interact, and by pinpointing specific components within this framework where the two model types are interlocked. In this section, we provide a broader view of the studies by examining the goal of this synergy, in line with the inherent features of the two models that make the synergy successful.

7.1.1 RL4LLM: Responsible AI, Alignment with Human Preference, Performance Improvement

In the RL4LLM case, particular emphasis is placed on improving the quality of the output of the NLP application that the LLM aims to complete, the performance of the LLM at task execution, or both. Overall, the primary quality considerations in the RL4LLM category are Responsible AI and alignment with human preferences and intent.

Not surprisingly, given the generative nature of LLMs, Responsible AI is a primary concern of researchers, who wish to ensure the design of models that are not only helpful, but also harmless. Research on mitigating potentially harmful output precedes the era of Large Language Models, with "harmfulness" manifesting itself in a variety of ways: offensive responses [96] (e.g., unkind responses, offensive jokes, or references to morally questionable or sexual desires); data leakage [96, 9] (i.e., the use of confidential data, such as Social Security Numbers, for training the model, which can then be inferred by an adversary); generated contact information (such as phone numbers, home addresses, and e-mail addresses) [96]; distributional bias, i.e., text that is negative more often for specific groups [96, 8]; and engagement with sensitive questions [8, 9]. Notably, all studies which emphasize LLM harmlessness fall under the "fine-tuning" umbrella, either with or without human feedback, a fact which indicates that careful prompt design is not always sufficient to guarantee the adherence of the output to Responsible AI principles. In fact, the variety of ways in which harmful content can be generated can only be covered through extensive examples, for which few-shot learning is not enough. Naturally, the construction of fine-tuning datasets – which embed the preferences and ethical standards of humans – and the subsequent tuning of the model parameters through Reinforcement Learning is a natural choice. Helpfulness, i.e., the alignment between the goals and preferences of the user and the LLM output, is another aspect of output quality: for example, an LLM assistant that is tasked to generate Python code is expected to produce clean, executable, and correct results.

7.1.2 LLM4RL: Efficiency, Grounding, and Human Preferences

Studies in this class depart from the field of Natural Language Processing and extend to applications where the use of a language model would have seemed irrelevant in the pre-LLM era.
A detailed review of the studies presented in section 4 reveals that LLMs possess three particular features which make the collaboration between them and RL agents successful:

1. Ability for zero-shot or few-shot learning: the ability of LLMs to learn through no examples or few examples of desired behavior allows their output to be aligned with human feedback. This alignment is primarily utilized for RL agent reward design (5.1), with the goal of generating appropriate reward signals that successfully represent human preferences.
2. Real-world knowledge: LLMs possess vast "knowledge" of the real world, which allows them to explore new behaviors and to generate training data. Both capabilities result in time- and cost-efficient RL training by a) helping the agent avoid expensive exploration, particularly in open-ended environments, b) eliminating the need for expensive data collection, and c) reducing the need for from-scratch training, since they allow policies to be transferred between agents.
3. Reasoning capabilities: for applications involving robotic manipulation, the robotic agent is the one possessing real-world knowledge about its environment, while the LLM is used for grounding the actions of the agent to the environment by ensuring those actions achieve the desired task.

7.1.3 RL+LLM: Planning

The goal of all studies in the RL+LLM category is successful planning and execution of relatively complex tasks. In all cases, the agent is equipped with a set of skills that it has already learned through RL. The LLM then helps the agent combine those skills in order to execute longer-horizon, complex tasks that generally require the execution of more than one of those simple skills, in the correct order. Without the LLM, the agent would have to learn the long-term skills from scratch. However, as we discussed in section 6, training for long-term tasks, especially in complex and partially observable environments, can be data-intensive, suffer from sample inefficiency, and ultimately prove unsuccessful. Therefore, instead of explicitly learning complex tasks, planning determines the appropriate sequence of basic skills that have to be executed. LLMs are suitable planners because they can reason about possible skill execution sequences based on their knowledge of the real world.

7.2 Shortcomings

7.2.1 LLM4RL

While the available results of the LLM4RL and RL+LLM synergy are impressive and more than promising with regard to the potential for future development, we can identify a set of shortcomings of this class of problems, which refer to two key metrics: the applicability of each framework and the scalability of the process.

Applicability.
Despite the wide applicability of RL agents across domains (see subsection 2.3 for more details), our review of the work on the LLM4RL and RL+LLM classes reveals that practically all applications of the relevant studies are limited to either benchmarking environments, games, or robotic environments (Tables 5 and 7), a trend that might initially raise questions about the applicability of the synergy to real-world scenarios beyond household tasks or games. We can attribute this apparent limitation to three reasons. First, the majority of studies presented in sections 4 and 6 focus on introducing novel concepts involving the use of LLMs for tasks that have traditionally been performed otherwise. As proofs-of-concept, they are therefore well suited to benchmarking environments, like Atari games. Second, before a combined modeling framework is deployed in the real world, its behavior must be extensively tested for safety, security, and Responsible AI considerations. The breadth of research on Responsible AI for LLMs, both within the RL4LLM domain and beyond it, serves as proof that these considerations are taken seriously by the scientific community. Therefore, it will likely not be long until the LLM4RL classes encompass practical real-world applications of control systems in areas like healthcare and finance. Third, the key strength of LLMs that enables this synergy, i.e., the ability to convey human sense and preferences, restricts, at the same time, the range of applications that can be accommodated by LLM4RL frameworks. This limitation applies both to the specification of goals or desired behavior through human language and to the representation of the state using natural language [41]. Even within the realm of the commonly used benchmarking environments, the applicability of LLM4RL methods is often constrained by the limitations of the frameworks themselves.
For example, some frameworks, such as the GLAM method by [22], are exclusively limited to textual environments, while the ELLM method by [41] assumes a natural language textual representation of the agent's state. Other methods (e.g., TEXT2REWARD by [138]) are capable of handling relatively simple tasks, but are yet to be tested on more complex tasks.

Performance. Aside from applicability, performance is another parameter that requires further evaluation in LLM4RL studies, with the specific requirements varying among studies.
For example, [56] identify the need to fine-tune InstructRL to improve its performance at test time. In other cases, the performance of the underlying LLM is shown to be sensitive to prompt choice or even prone to errors despite well-formulated prompts (e.g., [41]). Certain language model shortcomings had already been identified prior to the rapid expansion of LLMs – for example, the policy transfer framework of [62] was shown to occasionally suffer from "catastrophic forgetting", which significantly reduced the benefits of the agent policy initialization.

Scalability. Finally, the scalability of the solution as the state and action space of the RL agent grows is a potential challenge. As pointed out by [22], scaling up can be computationally inefficient and therefore constrain the application to a single environment and relatively small LLMs.

7.2.2 RL+LLM

The combination of RL and LLM for planning long and complex tasks is showing promising results in the studies included in the RL+LLM class. However, an inevitable outcome of such a synergy is that the final model can eventually only be as good as the individual skills for which the agent has already been trained, irrespective of the robustness of the skill plan generated by the LLM. As pointed out in the SayCan paper [3], there is a chance that the system cannot react in case the execution of the intermediate skills fails. Similarly, the low success rate of specific individual skills ("Find-Skills") is the key limitation highlighted by [142], hindering the end-to-end execution of the plan generated by the Plan4MC method.
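The RL+LLM planning pattern, and the way a single failing skill undermines it, can be sketched as follows. The skill names, the state dictionary, and the stubbed planner are illustrative assumptions, loosely modeled on the Minecraft "cooked beef" task; this is not any paper's implementation.

```python
# Toy sketch: an LLM planner (stubbed) sequences skills from a fixed
# library of pre-trained RL policies (stubbed as state transformations).

SKILL_LIBRARY = {
    "find_cow": lambda s: dict(s, near_cow=True),
    "attack_cow": lambda s: dict(s, raw_beef=True) if s.get("near_cow") else s,
    "cook_beef": lambda s: dict(s, cooked_beef=True) if s.get("raw_beef") else s,
}

def llm_plan(task: str) -> list:
    """Stub LLM planner: returns an ordered sequence of known skill names."""
    return ["find_cow", "attack_cow", "cook_beef"]

def execute_plan(task: str) -> dict:
    state = {}
    for skill in llm_plan(task):
        state = SKILL_LIBRARY[skill](state)  # each skill is a pre-trained policy
    return state

final = execute_plan("Harvest cooked beef")  # full plan succeeds end to end

# If an intermediate skill fails (here "find_cow" never runs), the rest
# of the plan is ineffective: the model is only as good as its skills.
broken = SKILL_LIBRARY["cook_beef"](SKILL_LIBRARY["attack_cow"]({}))
```

The second call chain illustrates the shortcoming above: however sound the LLM's plan, a failed intermediate skill leaves the long-horizon task unfinished.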
7.3 Alternatives

Having reviewed the ways that RL and LLMs collaborate, along with the strengths and weaknesses of each framework, we now explore the existence of LLM-based approaches designed to achieve the same goals without involving RL agents. We investigate the following questions:

1. Is RL required for fine-tuning an LLM?
2. Is RL required for prompt optimization?
3. Is RL required for an LLM to achieve a non-NLP-related task?

Interestingly, the answer to all the above questions is "no". In the discussion that follows, we offer a brief review of state-of-the-art frameworks that serve as counterexamples. These frameworks are out of the scope of this taxonomy, since they do not rely on the synergy between an RL agent and an LLM – aside, of course, from the use of RLHF for the initial training of the LLM.

7.3.1 Fine-tuning an LLM without RL: SYNDICOM, RAIN, LIMA

In section 4.1, we presented how RL is used to fine-tune a trained LLM to improve its output for specific tasks. In this section, we present recent studies that achieve fine-tuning without using RL – instead, they use either supervised training methods [149, 107] or self-evaluation [71] with specially crafted datasets.

The LIMA (Less Is More for Alignment) model [149] was fine-tuned using supervised learning. The authors analyzed their Superficial Alignment Hypothesis, according to which "a model's knowledge and capabilities are learnt almost entirely during pretraining, while alignment teaches it which subdistribution of formats should be used when interacting with users". LIMA's creators fine-tune a LLaMA language model with 65 billion parameters using a standard supervised loss and a small dataset of 1,000 prompt-response pairs. The responses of LIMA outperform those of GPT-4 [87], Bard, and DaVinci003, based on human evaluation, and demonstrate the ability to handle complex queries and generalize well to previously unseen tasks.

In the SYNDICOM framework by [107], the creators fine-tuned a conversational agent to enhance its commonsense reasoning.
SYNDICOM consists of two components: first, a dataset containing valid and invalid responses in dialogue contexts, with the invalid ones accompanied by natural language feedback. The authors build a template by randomly sampling from ATOMIC and use GPT-3 to convert the template to natural-sounding dialogue and to mark the invalid responses, while human feedback is provided by crowd workers. The second key component of SYNDICOM is a training procedure for a feedback model and a response generation model: first, the feedback model is trained to predict the natural language feedback for invalid responses. Then, the response generation model is trained based on the invalid response, the predicted feedback, and the dialogue. The quality of SYNDICOM's responses was shown to outperform ChatGPT based on both ROUGE-1 score and human evaluation.

In a different study, [71] proposed the RAIN (Rewindable Auto-regressive Inference) method to produce LLM responses aligned with human intent through self-evaluation and rewind mechanisms. RAIN is a self-alignment method, i.e., it does not receive external supervision, but rather allows LLMs to evaluate their output and use the evaluation results to improve it. In a nutshell, RAIN searches over token sets, with each token set mapping to a node of a search tree. The search consists of an inner and an outer loop. The inner loop, which updates token attributes, consists of a forward search step from root to leaf through heuristic simulation and a backward rewind step back to the root. The outer loop adjusts the token set probabilities and determines the next token set.
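A heavily simplified sketch can convey the rewind intuition. This is not RAIN's algorithm – the real method searches a tree of token sets – but a one-step, hypothetical analogue in which a draft is chosen, self-evaluated, and down-weighted ("rewound") when the evaluation is poor.

```python
# Hypothetical, simplified rewind-style self-evaluating decoder
# (loosely inspired by RAIN's forward-search / backward-rewind idea).

CANDIDATES = ["a harmful reply", "a harmless reply", "an off-topic reply"]

def self_evaluate(text: str) -> float:
    """Stub self-evaluation: the model scores its own draft, with no external supervision."""
    return 1.0 if "harmless" in text else 0.0

def rewindable_decode(max_tries: int = 10) -> str:
    weights = {c: 1.0 for c in CANDIDATES}
    draft = CANDIDATES[0]
    for _ in range(max_tries):
        draft = max(weights, key=weights.get)  # forward step: most promising branch
        if self_evaluate(draft) >= 1.0:
            break                              # accept the aligned draft
        weights[draft] *= 0.5                  # rewind: down-weight the bad branch
    return draft
```

The self-evaluation score plays the role of the search's token attributes: branches judged poorly are revisited with lower probability.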
RAIN was shown to outperform LLaMA 30B in terms of harmlessness, to perform equally well in terms of helpfulness, and to outperform LLaMA-2-chat 13B in terms of truthfulness.

7.3.2 Prompt Optimization Without RL: Learning-Based Prompt Optimization

In subsection 4.2, we reviewed studies where RL is used for LLM prompt engineering. Nonetheless, RL is not the sole method for conducting prompt engineering: [115] summarized state-of-the-art methods on learning-based prompt optimization, with examples where prompt optimization is achieved through methods like Beam Search [99] or Evolution Strategy [150]. However, every one of the RL4LLM-Prompt frameworks presented in this study was able to overcome traditional challenges primarily related to the training efficiency of supervised learning methods. RLPrompt [36] combined multiple desirable properties which had not previously been collectively present in any framework: it is automated; gradient-free (therefore eliminating the need to access or compute gradients, which can be computationally expensive); uses frozen LMs (thus not updating any LM parameters); efficient (since it guides optimization through the RL reward information); transferable between different language models (due to the use of discrete prompts rather than embeddings); and capable of few-shot and zero-shot learning (since the reward function eliminates the necessity for supervised data). TEMPERA [145] outperformed RLPrompt in multiple tasks like fine-tuning, prompt tuning, and discrete prompt search. Finally, Prompt-OIRL was the first model to address the challenges of inference-time evaluation (through offline prompt evaluation) and expensive online prompt optimization (through offline prompt optimization without access to the target LLM).
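The following toy illustrates gradient-free, learning-based prompt search in the spirit of the beam-search alternative mentioned above. The candidate tokens and the black-box scoring function are made-up stand-ins for a real downstream-task metric; no LLM gradients are needed, only score evaluations.

```python
# Toy discrete prompt search: expand a beam of token sequences and keep
# the top-k under a black-box score (a hypothetical task metric).

CANDIDATE_TOKENS = ["Classify", "the", "sentiment", "politely", "briefly"]

def score(prompt: tuple) -> int:
    """Hypothetical black-box score: rewards coverage, penalizes length."""
    target = {"Classify", "the", "sentiment"}
    return len(target & set(prompt)) - max(0, len(prompt) - 3)

def beam_search(width: int = 2, length: int = 3) -> tuple:
    beam = [()]
    for _ in range(length):
        expansions = [b + (t,) for b in beam for t in CANDIDATE_TOKENS]
        beam = sorted(expansions, key=score, reverse=True)[:width]  # keep top-k
    return beam[0]

best = beam_search()
```

Because only score evaluations are required, the same loop works with any frozen LLM as the scorer, which is precisely what makes such methods competitive with RL-based prompt optimization.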
7.3.3 LLMs for non-NLP tasks

As established in section 5, integrating a Large Language Model in an RL framework allows us to utilize the vast knowledge and grounding capabilities of LLMs to achieve a variety of control tasks that are not inherently related to natural language, ranging from playing games to robotic manipulation. We also reviewed studies where the output of LLMs together with learned robotic policies can be used for planning or sequential decision-making tasks in the RL+LLM category. Particularly in the realm of robotics, we showed (sections 5 and 6) that grounding the agent in the natural environment is a key challenge that LLMs have successfully addressed.

KOSMOS [59] is a multimodal Large Language Model that has been trained on web-scale multimodal corpora, including text, image-caption pairs, and documents with both images and text. The goal of KOSMOS is to align perception with LLMs, practically allowing models to see and talk. The key idea behind KOSMOS is directly analogous to that of Large Language Models, since it is trained to predict the most likely next token. However, it extends this principle beyond language, showing successful performance on vision tasks as well. More specifically, the model is capable of successfully executing dialogue tasks, visual explanation and question answering, number recognition, and image captioning.

Similarly, PaLM-E [38] is a general-purpose multimodal language model for embodied reasoning, visual-language, and language tasks. Inputs such as images and neural 3D representations are embedded alongside text tokens and passed as input to the Transformer. Incorporating continuous inputs from the various sensor modalities of the embodied agent can enable the multimodal language model itself to make grounded inferences for sequential decision making in the real world.
PaLM-E transfers knowledge from visual-language domains into embodied reasoning, such as sequential robotic planning and answering questions about the observable world, and this knowledge transfer leads to high data efficiency for robotics tasks. PaLM-E operates on multimodal sequences of tokens, with inputs such as images and neural 3D representations alongside text tokens. The authors demonstrate that a generalist multi-embodiment agent can be trained by leveraging transfer learning across modalities, incorporating embodied data into the training of a multimodal LLM. Like KOSMOS-1, PaLM-E can perform tasks such as zero-shot multimodal chain-of-thought, visually-conditioned jokes, zero-shot multi-image relationships, spatial grounding, robot visual perception, dialogue, and planning.

GPT-4V by OpenAI [1] is a multimodal LLM that has been trained to analyze and understand text and image input and generate text outputs, and it demonstrates impressive performance on various tasks, such as exams and logic puzzles, as well as vision and language tasks. GPT-4V was trained on a large-scale corpus of web data, including both positive and negative examples (right and wrong solutions to problems, weak and strong reasoning, self-contradictory and consistent statements) and data representing various ideologies and ideas. Note that the model's capabilities seem to come primarily from the pre-training process.

It is interesting to note that multimodality is not necessary for an LLM to successfully execute non-NLP tasks. A typical example is the SPRING framework [136], where an LLM learns to play complex, open-world games like Crafter or Minecraft by reading the LaTeX source code of the related academic papers. A directed acyclic graph is constructed, with gameplay-specific questions as nodes and dependencies between questions as edges. The experiments demonstrated that, with chain-of-thought prompting, LLMs can successfully execute complex tasks, while SPRING's zero-shot performance exceeded that of state-of-the-art RL algorithms trained for 1 million training steps.
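A SPRING-style question DAG can be sketched as a topological traversal in which answers to parent questions are passed as context when answering a child. The questions below and the `answer` function (a stand-in for a chain-of-thought LLM query) are illustrative assumptions, not SPRING's actual prompts.

```python
from graphlib import TopologicalSorter

# node -> set of predecessor questions it depends on
DAG = {
    "What resources are needed?": set(),
    "What tools can be crafted?": {"What resources are needed?"},
    "What action should be taken now?": {"What tools can be crafted?"},
}

def answer(question: str, context: dict) -> str:
    """Stub LLM call: answers a question given its parents' answers."""
    return f"answer({question} | {len(context)} parent answers)"

answers = {}
for q in TopologicalSorter(DAG).static_order():  # parents always come first
    answers[q] = answer(q, {p: answers[p] for p in DAG[q]})
```

Traversing in topological order guarantees that every question is answered only after the questions it depends on, so the final node yields the action recommendation conditioned on the whole reasoning chain.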
8 Conclusions and Future Work

In this work, we have proposed the RL/LLM Taxonomy Tree, a comprehensive classification of state-of-the-art computational frameworks that combine Reinforcement Learning agents and Large Language Models to achieve a target task. We have identified three core classes, namely RL4LLM, which uses RL to improve the performance of an LLM; LLM4RL, where an LLM assists the training of an RL agent; and RL+LLM, where an RL agent and an LLM participate in a common framework for planning downstream tasks. We have further divided each class into subcategories based on observations that differentiate the studies belonging to each one. Since each category corresponds to a distinct type of synergy between RL and LLMs, we have explored the key motivations behind the development of the frameworks included in each category and have explained which key strengths of RL and LLMs are utilized each time. The adaptability of RL to NLP tasks, thanks to their sequential decision-making nature, as well as the reasoning capabilities and vast knowledge about the real world that LLMs possess, serve as testaments to the success of the synergy, resulting in models that are aligned with human intent and Responsible AI principles. In addition, by reviewing the prolific literature on alternative methods, we acknowledge that, for most applications, this synergy is not the only choice. Finally, since LLMs are a relatively new area of Artificial Intelligence, there still exist potential shortcomings; those primarily concern the applicability of LLM4RL and RL+LLM frameworks, along with aspects like computational efficiency and scalability.
Nevertheless, the pace of research is so rapid that we can only anticipate substantial improvements in those areas as well. This review is intended to help researchers understand RL-LLM synergies and develop their own AI frameworks. In the future, we will keep classifying new studies based on the RL/LLM Taxonomy Tree and, if appropriate, expand it to capture novel categories that, given the pace of ongoing research, will almost certainly arise. Undoubtedly, the future holds boundless possibilities for RL-LLM synergies.

References

[1] GPT-4 technical report. Technical report, OpenAI, 2023. URL https://openai.com/contributions/gpt-4v.

[2] P. Abbeel and A. Y. Ng. Apprenticeship learning via inverse reinforcement learning. In Proceedings of the Twenty-First International Conference on Machine Learning, ICML '04, page 1, New York, NY, USA, 2004. Association for Computing Machinery. ISBN 1581138385. doi: 10.1145/1015330.1015430. URL https://doi.org/10.1145/1015330.1015430.

[3] M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, C. Fu, K. Gopalakrishnan, K. Hausman, A. Herzog, D. Ho, J. Hsu, J. Ibarz, B. Ichter, A. Irpan, E. Jang, R. J. Ruano, K. Jeffrey, S. Jesmonth, N. J. Joshi, R. Julian, D. Kalashnikov, Y. Kuang, K.-H. Lee, S. Levine, Y. Lu, L. Luu, C. Parada, P. Pastor, J. Quiambao, K. Rao, J. Rettinghouse, D. Reyes, P. Sermanet, N. Sievers, C. Tan, A. Toshev, V. Vanhoucke, F. Xia, T. Xiao, P. Xu, S. Xu, M. Yan, and A. Zeng. Do as I can, not as I say: Grounding language in robotic affordances, 2022.

[4] AnyRobotics. ANYmal, 2023. URL https://www.anybotics.com/robotics/anymal/.

[5] K. Arulkumaran, M. P. Deisenroth, M. Brundage, and A. A. Bharath. A brief survey of deep reinforcement learning. arXiv preprint arXiv:1708.05866, 2017.

[6] A. Aubret, L. Matignon, and S. Hassas. A survey on intrinsic motivation in reinforcement learning, 2019.

[7] H. Bai, R. Cheng, and Y. Jin. Evolutionary reinforcement learning: A survey. Intelligent Computing, 2:0025, 2023. doi: 10.34133/icomputing.0025. URL https://spj.science.org/doi/abs/10.34133/icomputing.0025.
[8] Y. Bai, A. Jones, K. Ndousse, A. Askell, A. Chen, N. DasSarma, D. Drain, S. Fort, D. Ganguli, T. Henighan, N. Joseph, S. Kadavath, J. Kernion, T. Conerly, S. El-Showk, N. Elhage, Z. Hatfield-Dodds, D. Hernandez, T. Hume, S. Johnston, S. Kravec, L. Lovitt, N. Nanda, C. Olsson, D. Amodei, T. Brown, J. Clark, S. McCandlish, C. Olah, B. Mann, and J. Kaplan. Training a helpful and harmless assistant with reinforcement learning from human feedback, 2022.

[9] Y. Bai, S. Kadavath, S. Kundu, A. Askell, J. Kernion, A. Jones, A. Chen, A. Goldie, A. Mirhoseini, C. McKinnon, C. Chen, C. Olsson, C. Olah, D. Hernandez, D. Drain, D. Ganguli, D. Li, E. Tran-Johnson, E. Perez, J. Kerr, J. Mueller, J. Ladish, J. Landau, K. Ndousse, K. Lukosuite, L. Lovitt, M. Sellitto, N. Elhage, N. Schiefer, N. Mercado, N. DasSarma, R. Lasenby, R. Larson, S. Ringer, S. Johnston, S. Kravec, S. E. Showk, S. Fort, T. Lanham, T. Telleen-Lawton, T. Conerly, T. Henighan, T. Hume, S. R. Bowman, Z. Hatfield-Dodds, B. Mann, D. Amodei, N. Joseph, S. McCandlish, T. Brown, and J. Kaplan. Constitutional AI: Harmlessness from AI feedback, 2022.

[10] N. Bard, J. N. Foerster, S. Chandar, N. Burch, M. Lanctot, H. F. Song, E. Parisotto, V. Dumoulin, S. Moitra, E. Hughes, I. Dunning, S. Mourad, H. Larochelle, M. G. Bellemare, and M. Bowling. The Hanabi challenge: A new frontier for AI research. Artificial Intelligence, 280:103216, 2020. ISSN 0004-3702. doi: https://doi.org/10.1016/j.artint.2019.103216.
URL https://www.sciencedirect.com/science/article/pii/S0004370219300116.

[11] J. Beck, R. Vuorio, E. Zheran Liu, Z. Xiong, L. Zintgraf, C. Finn, and S. Whiteson. A survey of meta-reinforcement learning. arXiv e-prints, art. arXiv:2301.08028, Jan. 2023. doi: 10.48550/arXiv.2301.08028.

[12] R. Bellman. A Markovian decision process. Indiana Univ. Math. J., 6:679–684, 1957. ISSN 0022-2518.

[13] Y. Bengio, J. Louradour, R. Collobert, and J. Weston. Curriculum learning. In Proceedings of the 26th Annual International Conference on Machine Learning, ICML '09, pages 41–48, New York, NY, USA, 2009. Association for Computing Machinery. ISBN 9781605585161. doi: 10.1145/1553374.1553380. URL https://doi.org/10.1145/1553374.1553380.

[14] BitCraze. Crazyflie, 2023. URL https://www.bitcraze.io/products/crazyflie-2-1/.

[15] G. Brockman, V. Cheung, L. Pettersson, J. Schneider, J. Schulman, J. Tang, and W. Zaremba. OpenAI Gym, 2016.

[16] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners, 2020.

[17] T. B. Brown, B. Mann, N. Ryder, M. Subbiah, J. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. M. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners, 2020.
[19] B. Cao, H. Lin, X. Han, and L. Sun. The life cycle of knowledge in big language models: A survey, 2023. [20] Y. Cao, L. Yao, J. McAuley, and Q. Z. Sheng. Reinforcement learning for generative AI: A survey. arXiv preprint arXiv:2308.14328, 2023. [21] Z. Cao, R. A. Ramachandra, and K. Yu. Temporal video-language alignment network for reward shaping in reinforcement learning, 2023. [22] T. Carta, C. Romac, T. Wolf, S. Lamprier, O. Sigaud, and P.-Y. Oudeyer. Grounding large language models in interactive environments with online reinforcement learning, 2023. [23] Y. Chang, X. Wang, J. Wang, Y. Wu, K. Zhu, H. Chen, L. Yang, X. Yi, C. Wang, Y. Wang, et al. A survey on evaluation of large language models. arXiv preprint arXiv:2307.03109, 2023. [24] L. Chen, K. Lu, A. Rajeswaran, K. Lee, A. Grover, M. Laskin, P. Abbeel, A. Srinivas, and I. Mordatch. Decision transformer: Reinforcement learning via sequence modeling, 2021. [25] M. Chen, A. Radford, R. Child, J. Wu, H. Jun, D. Luan, and I. Sutskever. Generative pretraining from pixels. In Proceedings of the 37th International Conference on Machine Learning, ICML '20. JMLR.org, 2020. [26] M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. d. O. Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021. [27] T. Chen, A. Murali, and A. Gupta. Hardware conditioned policies for multi-robot transfer learning, 2019.
[28] N. Chentanez, A. Barto, and S. Singh. Intrinsically motivated reinforcement learning. In L. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural Information Processing Systems, volume 17. MIT Press, 2004. URL https://proceedings.neurips.cc/paper_files/paper/2004/file/4be5a36cbaca8ab9d2066debfe4e65c1-Paper.pdf. [29] M. Chevalier-Boisvert, D. Bahdanau, S. Lahlou, L. Willems, C. Saharia, T. H. Nguyen, and Y. Bengio. BabyAI: A platform to study the sample efficiency of grounded language learning, 2019. [30] K. Choi, C. Cundy, S. Srivastava, and S. Ermon. LMPriors: Pre-trained language models as task-specific priors, 2022. [31] K. Chowdhary and K. Chowdhary. Natural language processing. Fundamentals of Artificial Intelligence, pages 603–649, 2020. [32] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann, P. Schuh, K. Shi, S. Tsvyashchenko, J. Maynez, A. Rao, P. Barnes, Y. Tay, N. Shazeer, V. Prabhakaran, E. Reif, N. Du, B. Hutchinson, R. Pope, J. Bradbury, J. Austin, M. Isard, G. Gur-Ari, P. Yin, T. Duke, A. Levskaya, S. Ghemawat, S. Dev, H. Michalewski, X. Garcia, V. Misra, K. Robinson, L. Fedus, D. Zhou, D. Ippolito, D. Luan, H. Lim, B. Zoph, A. Spiridonov, R. Sepassi, D. Dohan, S. Agrawal, M. Omernick, A. M. Dai, T. S. Pillai, M. Pellat, A. Lewkowycz, E. Moreira, R. Child, O. Polozov, K. Lee, Z. Zhou, X. Wang, B. Saeta, M. Diaz, O. Firat, M. Catasta, J. Wei, K. Meier-Hellstern, D. Eck, J. Dean, S. Petrov, and N. Fiedel. PaLM: Scaling language modeling with pathways, 2022. [33] K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano, C. Hesse, and J. Schulman. Training verifiers to solve math word problems, 2021. [34] I. Dasgupta, C. Kaeser-Chen, K. Marino, A. Ahuja, S. Babayan, F. Hill, and R. Fergus. Collaborating with language models for embodied reasoning, 2023. [35] G. DeepMind. pycolab. URL https://github.com/google-deepmind/pycolab.
[36] M. Deng, J. Wang, C.-P. Hsieh, Y. Wang, H. Guo, T. Shu, M. Song, E. P. Xing, and Z. Hu. RLPrompt: Optimizing discrete text prompts with reinforcement learning, 2022. [37] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding, 2019. [38] D. Driess, F. Xia, M. S. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson, Q. Vuong, T. Yu, et al. PaLM-E: An embodied multimodal language model. arXiv preprint arXiv:2303.03378, 2023. [39] M. Du, F. He, N. Zou, D. Tao, and X. Hu. Shortcut learning of large language models in natural language understanding, 2023. [40] W. Du and S. Ding. A survey on multi-agent deep reinforcement learning: from the perspective of challenges and applications. Artificial Intelligence Review, 54:3215–3238, 2021. [41] Y. Du, O. Watkins, Z. Wang, C. Colas, T. Darrell, P. Abbeel, A. Gupta, and J. Andreas. Guiding pretraining in reinforcement learning with large language models, 2023. [42] J. Eschmann. Reward Function Design in Reinforcement Learning, pages 25–33. Springer International Publishing, Cham, 2021. ISBN 978-3-030-41188-6. doi: 10.1007/978-3-030-41188-6_3. URL https://doi.org/10.1007/978-3-030-41188-6_3. [43] A. Fan, B. Gokkaya, M. Harman, M. Lyubarskiy, S. Sengupta, S. Yoo, and J. M. Zhang. Large language models for software engineering: Survey and open problems, 2023. [44] V. François-Lavet, P. Henderson, R. Islam, M. G. Bellemare, J. Pineau, et al.
An introduction to deep reinforcement learning. Foundations and Trends® in Machine Learning, 11(3-4):219–354, 2018. [45] S. Fujimoto and S. S. Gu. A minimalist approach to offline reinforcement learning, 2021. [46] H. Furuta, Y. Matsuo, and S. S. Gu. Generalized decision transformer for offline hindsight information matching, 2022. [47] I. O. Gallegos, R. A. Rossi, J. Barrow, M. M. Tanjim, S. Kim, F. Dernoncourt, T. Yu, R. Zhang, and N. K. Ahmed. Bias and fairness in large language models: A survey, 2023. [48] L. C. Garaffa, M. Basso, A. A. Konzen, and E. P. de Freitas. Reinforcement learning for mobile robotics exploration: A survey. IEEE Transactions on Neural Networks and Learning Systems, 34(8):3796–3810, 2023. doi: 10.1109/TNNLS.2021.3124466. [49] D. G. Ghalandari, C. Hokamp, and G. Ifrim. Efficient unsupervised sentence compression by fine-tuning transformers with reinforcement learning, 2022. [50] P. Goyal, S. Niekum, and R. J. Mooney. Using natural language for reward shaping in reinforcement learning, 2019. [51] S. Gronauer and K. Diepold. Multi-agent deep reinforcement learning: a survey. Artificial Intelligence Review, pages 1–49, 2022. [52] J. Gu, F. Xiang, X. Li, Z. Ling, X. Liu, T. Mu, Y. Tang, S. Tao, X. Wei, Y. Yao, X. Yuan, P. Xie, Z. Huang, R. Chen, and H. Su. ManiSkill2: A unified benchmark for generalizable manipulation skills, 2023. [53] Z. Guo, R. Jin, C. Liu, Y. Huang, D. Shi, L. Yu, Y. Liu, J. Li, B. Xiong, D. Xiong, et al. Evaluating large language models: A comprehensive survey. arXiv preprint arXiv:2310.19736, 2023. [54] T. Haarnoja, A. Zhou, P. Abbeel, and S. Levine. Soft actor-critic: Off-policy maximum entropy deep reinforcement learning with a stochastic actor, 2018. [55] S. Haddadin, S. Parusel, L. Johannsmeier, S. Golz, S. Gabl, F. Walch, M. Sabaghian, C. Jähne, L. Hausperger, and S. Haddadin. The Franka Emika robot: A reference platform for robotics research and education. IEEE Robotics & Automation Magazine, 29(2):46–64, 2022. doi: 10.1109/MRA.2021.3138382.
[56] H. Hu and D. Sadigh. Language instructed reinforcement learning for human-AI coordination, 2023. [57] J. Hu, L. Tao, J. Yang, and C. Zhou. Aligning language models with offline reinforcement learning from human feedback. arXiv preprint arXiv:2308.12050, 2023. [58] J. Huang and K. C.-C. Chang. Towards reasoning in large language models: A survey. arXiv preprint arXiv:2212.10403, 2022. [59] S. Huang, L. Dong, W. Wang, Y. Hao, S. Singhal, S. Ma, T. Lv, L. Cui, O. K. Mohammed, Q. Liu, et al. Language is not all you need: Aligning perception with language models. arXiv preprint arXiv:2302.14045, 2023. [60] W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mordatch, Y. Chebotar, P. Sermanet, N. Brown, T. Jackson, L. Luu, S. Levine, K. Hausman, and B. Ichter. Inner monologue: Embodied reasoning through planning with language models, 2022. [61] M. Janner, Q. Li, and S. Levine. Reinforcement learning as one big sequence modeling problem. CoRR, abs/2106.02039, 2021. URL https://arxiv.org/abs/2106.02039. [62] Y. Jiang, Q. Gao, G. Thattai, and G. Sukhatme. Language-informed transfer learning for embodied household activities, 2023. [63] L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237–285, 1996. [64] Y. Kant, A. Ramachandran, S. Yenamandra, I. Gilitschenski, D. Batra, A. Szot, and H. Agrawal.
Housekeep: Tidying virtual households using commonsense reasoning. In S. Avidan, G. Brostow, M. Cissé, G. M. Farinella, and T. Hassner, editors, Computer Vision – ECCV 2022, pages 355–373, Cham, 2022. Springer Nature Switzerland. ISBN 978-3-031-19842-7. [65] J. Kim and B. Lee. AI-augmented surveys: Leveraging large language models for opinion prediction in nationally representative surveys, 2023. [66] A. Kumar, A. Zhou, G. Tucker, and S. Levine. Conservative Q-learning for offline reinforcement learning, 2020. [67] M. Kwon, S. M. Xie, K. Bullard, and D. Sadigh. Reward design with language models, 2023. [68] S. Levine, A. Kumar, G. Tucker, and J. Fu. Offline reinforcement learning: Tutorial, review, and perspectives on open problems. arXiv preprint arXiv:2005.01643, 2020. [69] M. Lewis, D. Yarats, Y. N. Dauphin, D. Parikh, and D. Batra. Deal or no deal? End-to-end learning for negotiation dialogues, 2017. [70] L. Li, Y. Zhang, D. Liu, and L. Chen. Large language models for generative recommendation: A survey and visionary discussions, 2023. [71] Y. Li, F. Wei, J. Zhao, C. Zhang, and H. Zhang. RAIN: Your language models can align themselves without finetuning. arXiv preprint arXiv:2309.07124, 2023. [72] J. Lin, X. Dai, Y. Xi, W. Liu, B. Chen, X. Li, C. Zhu, H. Guo, Y. Yu, R. Tang, and W. Zhang. How can recommender systems benefit from large language models: A survey, 2023. [73] X. Liu, H. Yu, H. Zhang, Y. Xu, X. Lei, H. Lai, Y. Gu, H. Ding, K. Men, K. Yang, et al. AgentBench: Evaluating LLMs as agents. arXiv preprint arXiv:2308.03688, 2023. [74] Y. Liu, T. Han, S. Ma, J. Zhang, Y. Yang, J. Tian, H. He, A. Li, M. He, Z. Liu, Z. Wu, L. Zhao, D. Zhu, X. Li, N. Qiang, D. Shen, T. Liu, and B. Ge. Summary of ChatGPT-related research and perspective towards the future of large language models. Meta-Radiology, 1(2):100017, Sep 2023. doi: 10.1016/j.metrad.2023.100017. URL https://doi.org/10.1016%2Fj.metrad.2023.100017.
[75] Y. J. Ma, W. Liang, G. Wang, D.-A. Huang, O. Bastani, D. Jayaraman, Y. Zhu, L. Fan, and A. Anandkumar. Eureka: Human-level reward design via coding large language models. arXiv preprint arXiv:2310.12931, 2023. [76] N. Mazyavkina, S. Sviridov, S. Ivanov, and E. Burnaev. Reinforcement learning for combinatorial optimization: A survey. Computers & Operations Research, 134:105400, 2021. ISSN 0305-0548. doi: https://doi.org/10.1016/j.cor.2021.105400. URL https://www.sciencedirect.com/science/article/pii/S0305054821001660. [77] S. Merity, C. Xiong, J. Bradbury, and R. Socher. Pointer sentinel mixture models, 2016. [78] G. Mialon, R. Dessì, M. Lomeli, C. Nalmpantis, R. Pasunuru, R. Raileanu, B. Rozière, T. Schick, J. Dwivedi-Yu, A. Celikyilmaz, E. Grave, Y. LeCun, and T. Scialom. Augmented language models: a survey, 2023. [79] B. Min, H. Ross, E. Sulem, A. P. B. Veyseh, T. H. Nguyen, O. Sainz, E. Agirre, I. Heinz, and D. Roth. Recent advances in natural language processing via large pre-trained language models: A survey, 2021. [80] V. Mnih, K. Kavukcuoglu, D. Silver, A. Graves, I. Antonoglou, D. Wierstra, and M. Riedmiller. Playing Atari with deep reinforcement learning, 2013. [81] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves, M. A. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou, H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis. Human-level control through deep reinforcement learning. Nature, 518:529–533, 2015. URL https://api.semanticscholar.org/CorpusID:205242740.
[82] V. Nastase, R. Mihalcea, and D. R. Radev. A survey of graphs in natural language processing. Natural Language Engineering, 21(5):665–698, 2015. [83] J. Ni, G. H. Ábrego, N. Constant, J. Ma, K. B. Hall, D. Cer, and Y. Yang. Sentence-T5: Scalable sentence encoders from pre-trained text-to-text models, 2021. [84] M. Nie, D. Chen, and D. Wang. Reinforcement learning on graphs: A survey. IEEE Transactions on Emerging Topics in Computational Intelligence, 2023. [85] OpenAI. ChatGPT, 2023. URL https://chat.openai.com/chat. [86] OpenAI. GPT-3.5, 2023. URL https://platform.openai.com/docs/models/gpt-3-5. [87] OpenAI. GPT-4 technical report, 2023. [88] R. Oshikawa, J. Qian, and W. Y. Wang. A survey on natural language processing for fake news detection. arXiv preprint arXiv:1811.00770, 2018. [89] D. W. Otter, J. R. Medina, and J. K. Kalita. A survey of the usages of deep learning for natural language processing. IEEE Transactions on Neural Networks and Learning Systems, 32(2):604–624, 2020. [90] L. Ouyang, J. Wu, X. Jiang, D. Almeida, C. L. Wainwright, P. Mishkin, C. Zhang, S. Agarwal, K. Slama, A. Ray, J. Schulman, J. Hilton, F. Kelton, L. Miller, M. Simens, A. Askell, P. Welinder, P. Christiano, J. Leike, and R. Lowe. Training language models to follow instructions with human feedback, 2022. [91] S. Padakandla. A survey of reinforcement learning algorithms for dynamically varying environments. ACM Comput. Surv., 54(6), Jul 2021. ISSN 0360-0300. doi: 10.1145/3459991. URL https://doi.org/10.1145/3459991. [92] S. Pan, L. Luo, Y. Wang, C. Chen, J. Wang, and X. Wu. Unifying large language models and knowledge graphs: A roadmap, 2023. [93] A. Patel, S. Bhattamishra, and N. Goyal. Are NLP models really able to solve simple math word problems?, 2021. [94] S. Pateria, B. Subagdja, A.-h. Tan, and C. Quek. Hierarchical reinforcement learning: A comprehensive survey. ACM Comput. Surv., 54(5), Jun 2021. ISSN 0360-0300. doi: 10.1145/3453160. URL https://doi.org/10.1145/3453160.
[95] X. B. Peng, A. Kumar, G. Zhang, and S. Levine. Advantage-weighted regression: Simple and scalable off-policy reinforcement learning, 2019. [96] E. Perez, S. Huang, H. F. Song, T. Cai, R. Ring, J. Aslanides, A. Glaese, N. McAleese, and G. Irving. Red teaming language models with language models. CoRR, abs/2202.03286, 2022. URL https://arxiv.org/abs/2202.03286. [97] J. Peters and S. Schaal. Reinforcement learning by reward-weighted regression for operational space control. In Proceedings of the 24th International Conference on Machine Learning, pages 745–750, 2007. [98] R. F. Prudencio, M. R. Maximo, and E. L. Colombini. A survey on offline reinforcement learning: Taxonomy, review, and open problems. IEEE Transactions on Neural Networks and Learning Systems, 2023. [99] R. Pryzant, D. Iter, J. Li, Y. T. Lee, C. Zhu, and M. Zeng. Automatic prompt optimization with "gradient descent" and beam search, 2023. [100] X. Qiu, T. Sun, Y. Xu, Y. Shao, N. Dai, and X. Huang. Pre-trained models for natural language processing: A survey. Science China Technological Sciences, 63(10):1872–1897, 2020. [101] B. Quartey, A. Shah, and G. Konidaris. Exploiting contextual structure to generate useful auxiliary tasks, 2023. [102] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners. 2019. URL https://api.semanticscholar.org/CorpusID:160025533. [103] A. Radford, J. W. Kim, C. Hallacy, A. Ramesh, G. Goh, S. Agarwal, G. Sastry, A. Askell, P. Mishkin, J. Clark, G.
Krueger, and I. Sutskever. Learning transferable visual models from natural language supervision, 2021. [104] J. W. Rae, S. Borgeaud, T. Cai, K. Millican, J. Hoffmann, F. Song, J. Aslanides, S. Henderson, R. Ring, S. Young, E. Rutherford, T. Hennigan, J. Menick, A. Cassirer, R. Powell, G. van den Driessche, L. A. Hendricks, M. Rauh, P.-S. Huang, A. Glaese, J. Welbl, S. Dathathri, S. Huang, J. Uesato, J. Mellor, I. Higgins, A. Creswell, N. McAleese, A. Wu, E. Elsen, S. Jayakumar, E. Buchatskaya, D. Budden, E. Sutherland, K. Simonyan, M. Paganini, L. Sifre, L. Martens, X. L. Li, A. Kuncoro, A. Nematzadeh, E. Gribovskaya, D. Donato, A. Lazaridou, A. Mensch, J.-B. Lespiau, M. Tsimpoukelli, N. Grigorev, D. Fritz, T. Sottiaux, M. Pajarskas, T. Pohlen, Z. Gong, D. Toyama, C. de Masson d'Autume, Y. Li, T. Terzi, V. Mikulik, I. Babuschkin, A. Clark, D. de Las Casas, A. Guy, C. Jones, J. Bradbury, M. Johnson, B. Hechtman, L. Weidinger, I. Gabriel, W. Isaac, E. Lockhart, S. Osindero, L. Rimell, C. Dyer, O. Vinyals, K. Ayoub, J. Stanway, L. Bennett, D. Hassabis, K. Kavukcuoglu, and G. Irving. Scaling language models: Methods, analysis & insights from training Gopher, 2022. [105] R. Ramamurthy, P. Ammanabrolu, K. Brantley, J. Hessel, R. Sifa, C. Bauckhage, H. Hajishirzi, and Y. Choi. Is reinforcement learning (not) for natural language processing: Benchmarks, baselines, and building blocks for natural language policy optimization, 2023. [106] M. Reid, Y. Yamada, and S. S. Gu. Can Wikipedia help offline reinforcement learning?, 2022. [107] C. Richardson, A. Sundar, and L. Heck. SYNDICOM: Improving conversational commonsense with error-injection and natural language feedback. arXiv preprint arXiv:2309.10015, 2023. [108] S. Roy and D. Roth. Solving general arithmetic word problems, 2016. [109] V. Sanh, L. Debut, J. Chaumond, and T. Wolf. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter, 2020. [110] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov. Proximal policy optimization algorithms, 2017.
[111] T. Shen, R. Jin, Y. Huang, C. Liu, W. Dong, Z. Guo, X. Wu, Y. Liu, and D. Xiong. Large language model alignment: A survey. arXiv preprint arXiv:2309.15025, 2023. [112] I. Solaiman and C. Dennison. Process for adapting language models to society (PALMS) with values-targeted datasets. Advances in Neural Information Processing Systems, 34:5861–5873, 2021. [113] J. Song, Z. Zhou, J. Liu, C. Fang, Z. Shu, and L. Ma. Self-refined large language model as automated reward function designer for deep reinforcement learning in robotics. arXiv preprint arXiv:2309.06687, 2023. [114] N. Stiennon, L. Ouyang, J. Wu, D. Ziegler, R. Lowe, C. Voss, A. Radford, D. Amodei, and P. F. Christiano. Learning to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008–3021, 2020. [115] H. Sun. Offline prompt evaluation and optimization with inverse reinforcement learning. arXiv preprint arXiv:2309.06553, 2023. [116] H. Sun. Reinforcement learning in the era of LLMs: What is essential? What is needed? An RL perspective on RLHF, prompting, and beyond, 2023. [117] R. Sutton and A. Barto. Reinforcement learning: An introduction. IEEE Transactions on Neural Networks, 9(5):1054–1054, 1998. doi: 10.1109/TNN.1998.712192. [118] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. A Bradford Book, Cambridge, MA, USA, 2018. ISBN 0262039249.
[119] R. Thoppilan, D. D. Freitas, J. Hall, N. Shazeer, A. Kulshreshtha, H.-T. Cheng, A. Jin, T. Bos, L. Baker, Y. Du, Y. Li, H. Lee, H. S. Zheng, A. Ghafouri, M. Menegali, Y. Huang, M. Krikun, D. Lepikhin, J. Qin, D. Chen, Y. Xu, Z. Chen, A. Roberts, M. Bosma, V. Zhao, Y. Zhou, C.-C. Chang, I. Krivokon, W. Rusch, M. Pickett, P. Srinivasan, L. Man, K. Meier-Hellstern, M. R. Morris, T. Doshi, R. D. Santos, T. Duke, J. Soraker, B. Zevenbergen, V. Prabhakaran, M. Diaz, B. Hutchinson, K. Olson, A. Molina, E. Hoffman-John, J. Lee, L. Aroyo, R. Rajakumar, A. Butryna, M. Lamm, V. Kuzmina, J. Fenton, A. Cohen, R. Bernstein, R. Kurzweil, B. Aguera-Arcas, C. Cui, M. Croak, E. Chi, and Q. Le. LaMDA: Language models for dialog applications, 2022. [120] A. Torfi, R. A. Shirvani, Y. Keneshloo, N. Tavaf, and E. A. Fox. Natural language processing advancements by deep learning: A survey. arXiv preprint arXiv:2003.01200, 2020. [121] R. Toro Icarte, T. Q. Klassen, R. Valenzano, and S. A. McIlraith. Teaching multiple tasks to an RL agent using LTL. In Proceedings of the 17th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS '18, page 452–461, Richland, SC, 2018. International Foundation for Autonomous Agents and Multiagent Systems. [122] H. Van Hasselt, A. Guez, and D. Silver. Deep reinforcement learning with double Q-learning. Proceedings of the AAAI Conference on Artificial Intelligence, 30, 09 2015. doi: 10.1609/aaai.v30i1.10295. [123] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017. [124] J. Wang, Y. Huang, C. Chen, Z. Liu, S. Wang, and Q. Wang. Software testing with large language model: Survey, landscape, and vision, 2023. [125] L. Wang, C. Ma, X. Feng, Z. Zhang, H. Yang, J. Zhang, Z. Chen, J. Tang, X. Chen, Y. Lin, et al. A survey on large language model based autonomous agents. arXiv preprint arXiv:2308.11432, 2023. [126] S. Wang, Y. Zhu, H. Liu, Z. Zheng, C. Chen, and J. Li. Knowledge editing for large language models: A survey, 2023.
[127] X. Wang, G. Chen, G. Qian, P. Gao, X.-Y. Wei, Y. Wang, Y. Tian, and W. Gao. Large-scale multi-modal pre-trained models: A comprehensive survey, 2023. [128] Y. Wang, W. Zhong, L. Li, F. Mi, X. Zeng, W. Huang, L. Shang, X. Jiang, and Q. Liu. Aligning large language models with human: A survey, 2023. [129] Z. Wang, T. Schaul, M. Hessel, H. Hasselt, M. Lanctot, and N. Freitas. Dueling network architectures for deep reinforcement learning. In M. F. Balcan and K. Q. Weinberger, editors, Proceedings of The 33rd International Conference on Machine Learning, volume 48 of Proceedings of Machine Learning Research, pages 1995–2003, New York, New York, USA, 20–22 Jun 2016. PMLR. URL https://proceedings.mlr.press/v48/wangf16.html. [130] J. Wei, Y. Tay, R. Bommasani, C. Raffel, B. Zoph, S. Borgeaud, D. Yogatama, M. Bosma, D. Zhou, D. Metzler, E. H. Chi, T. Hashimoto, O. Vinyals, P. Liang, J. Dean, and W. Fedus. Emergent abilities of large language models, 2022. [131] J. Wei, X. Wang, D. Schuurmans, M. Bosma, B. Ichter, F. Xia, E. Chi, Q. Le, and D. Zhou. Chain-of-thought prompting elicits reasoning in large language models, 2023. [132] G. Weiss. Dynamic programming and Markov processes. Ronald A. Howard. Technology Press and Wiley, New York, 1960. viii + 136 pp. illus. $5.75. Science, 132(3428):667–667, 1960. doi: 10.1126/science.132.3428.667.a. URL https://www.science.org/doi/abs/10.1126/science.132.3428.667.a. [133] H. Wu, M. Wang, J. Wu, F. Francis, Y.-H. Chang, A. Shavick, H. Dong, M. T.
Poon, N. Fitzpatrick, A. P. Levine, et al. A survey on clinical natural language processing in the United Kingdom from 2007 to 2022. NPJ Digital Medicine, 5(1):186, 2022. [134] L. Wu, Z. Zheng, Z. Qiu, H. Wang, H. Gu, T. Shen, C. Qin, C. Zhu, H. Zhu, Q. Liu, H. Xiong, and E. Chen. A survey on large language models for recommendation, 2023. [135] Y. Wu, G. Tucker, and O. Nachum. Behavior regularized offline reinforcement learning, 2019. [136] Y. Wu, S. Prabhumoye, S. Y. Min, Y. Bisk, R. Salakhutdinov, A. Azaria, T. Mitchell, and Y. Li. SPRING: GPT-4 out-performs RL algorithms by studying papers and reasoning, 2023. [137] Z. Xi, W. Chen, X. Guo, W. He, Y. Ding, B. Hong, M. Zhang, J. Wang, S. Jin, E. Zhou, et al. The rise and potential of large language model based agents: A survey. arXiv preprint arXiv:2309.07864, 2023. [138] T. Xie, S. Zhao, C. H. Wu, Y. Liu, Q. Luo, V. Zhong, Y. Yang, and T. Yu. Text2Reward: Automated dense reward function generation for reinforcement learning. arXiv preprint arXiv:2309.11489, 2023. [139] J. Yang, H. Jin, R. Tang, X. Han, Q. Feng, H. Jiang, B. Yin, and X. Hu. Harnessing the power of LLMs in practice: A survey on ChatGPT and beyond, 2023. [140] C. Yu, J. Liu, S. Nemati, and G. Yin. Reinforcement learning in healthcare: A survey. ACM Comput. Surv., 55(1), Nov 2021. ISSN 0360-0300. doi: 10.1145/3477600. URL https://doi.org/10.1145/3477600. [141] T. Yu, D. Quillen, Z. He, R. Julian, A. Narayan, H. Shively, A. Bellathur, K. Hausman, C. Finn, and S. Levine. Meta-World: A benchmark and evaluation for multi-task and meta reinforcement learning, 2021. [142] H. Yuan, C. Zhang, H. Wang, F. Xie, P. Cai, H. Dong, and Z. Lu. Plan4MC: Skill reinforcement learning and planning for open-world Minecraft tasks, 2023. [143] Z. Zeng, H. Shi, Y. Wu, Z. Hong, et al. Survey of natural language processing techniques in bioinformatics. Computational and Mathematical Methods in Medicine, 2015, 2015.
[144] S. Zhang, L. Dong, X. Li, S. Zhang, X. Sun, S. Wang, J. Li, R. Hu, T. Zhang, F. Wu, and G. Wang. Instruction tuning for large language models: A survey, 2023. [145] T. Zhang, X. Wang, D. Zhou, D. Schuurmans, and J. E. Gonzalez. TEMPERA: Test-time prompting via reinforcement learning, 2022. [146] W. Zhang and Z. Lu. RLAdapter: Bridging large language models to reinforcement learning in open worlds, 2023. [147] H. Zhao, H. Chen, F. Yang, N. Liu, H. Deng, H. Cai, S. Wang, D. Yin, and M. Du. Explainability for large language models: A survey, 2023. [148] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong, et al. A survey of large language models. arXiv preprint arXiv:2303.18223, 2023. [149] C. Zhou, P. Liu, P. Xu, S. Iyer, J. Sun, Y. Mao, X. Ma, A. Efrat, P. Yu, L. Yu, et al. LIMA: Less is more for alignment. arXiv preprint arXiv:2305.11206, 2023. [150] Y. Zhou, A. I. Muresanu, Z. Han, K. Paster, S. Pitis, H. Chan, and J. Ba. Large language models are human-level prompt engineers, 2023. [151] Y. Zhu, H. Yuan, S. Wang, J. Liu, W. Liu, C. Deng, Z. Dou, and J.-R. Wen. Large language models for information retrieval: A survey, 2023.
JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021

Large Language Models on Graphs: A Comprehensive Survey

Bowen Jin*, Gang Liu*, Chi Han*, Meng Jiang, Heng Ji, Jiawei Han

Abstract—Large language models (LLMs), such as GPT4 and LLaMA, are creating significant advancements in natural language processing, due to their strong text encoding/decoding ability and newly found emergent capability (e.g., reasoning). While LLMs are mainly designed to process pure texts, there are many real-world scenarios where text data is associated with rich structure information in the form of graphs (e.g., academic networks, and e-commerce networks) or scenarios where graph data are paired with rich textual information (e.g., molecules with descriptions). Besides, although LLMs have shown their pure text-based reasoning ability, it is underexplored whether such ability can be generalized to graphs (i.e., graph-based reasoning). In this paper, we provide a systematic review of scenarios and techniques related to large language models on graphs. We first summarize potential scenarios of adopting LLMs on graphs into three categories, namely pure graphs, text-attributed graphs, and text-paired graphs. We then discuss detailed techniques for utilizing LLMs on graphs, including LLM as Predictor, LLM as Encoder, and LLM as Aligner, and compare the advantages and disadvantages of different schools of models. Furthermore, we discuss the real-world applications of such methods and summarize open-source codes and benchmark datasets. Finally, we conclude with potential future research directions in this fast-growing field. The related source can be found at https://github.com/PeterGriffinJin/Awesome-Language-Model-on-Graphs.

Index Terms—Large Language Models, Graph Neural Networks, Natural Language Processing, Graph Representation Learning

Fig. 1. According to the relationship between graph and text, we categorize three LLM on graph scenarios.
Depending on the role of the LLM, we summarize three LLM-on-graph techniques. "LLM as Predictor" is where LLMs are responsible for predicting the final answer. "LLM as Aligner" aligns the input-output pairs with those of GNNs. "LLM as Encoder" refers to using LLMs to encode and obtain feature vectors.

1 INTRODUCTION

LARGE language models (LLMs) (e.g., BERT [23], T5 [29], LLaMA [119]), which represent a direction of ever-increasing model sizes pre-trained on larger corpora, have demonstrated powerful capabilities in solving natural language processing (NLP) tasks, including question answering [1], text generation [2] and document understanding [3]. There are no clear and static thresholds regarding the model sizes. Early LLMs (e.g., BERT [23], RoBERTa [24]) adopt an encoder-only architecture and show capabilities in text representation learning [4] and natural language understanding [3]. In recent years, more focus has been given to larger decoder-only architectures [119] or encoder-decoder architectures [29]. As the model size scales up, such LLMs have also shown reasoning ability and even more advanced emergent ability [5], exposing a strong potential for Artificial General Intelligence (AGI).

While LLMs are extensively applied to process pure texts, there is an increasing number of applications where the text data are associated with structure information represented in the form of graphs. As presented in Fig. 1, in academic networks, papers (with title and description) and authors (with profile text) are interconnected with authorship relationships. Understanding both the author/paper text information and the author-paper structure information on such graphs can contribute to advanced author/paper modeling and accurate recommendations for collaboration. In the scientific domain, molecules are represented as graphs and are often paired with text that describes their basic properties (e.g., mass and weight). Joint modeling of both the molecule structure (graph) and the associated rich knowledge (text) is important for deeper molecule understanding. Since LLMs are mainly proposed for modeling texts that lie in a sequential fashion, the scenarios mentioned above pose new challenges on how to enable LLMs to encode the structure information on graphs. In addition, since LLMs have demonstrated their superb text-based reasoning ability, it is promising to explore whether they have the potential to address fundamental graph reasoning problems on pure graphs. These graph reasoning tasks include inferring connectivity [6], shortest path [7], subgraph matching [8], and logical rule induction [18].

Recently, there has been an increasing interest [9] in extending LLMs for graph-based applications (summarized in Fig. 1). According to the relationship between graph and text presented in Fig. 1, the application scenarios can be categorized into pure graphs, text-attributed graphs (nodes/edges are associated with texts), and text-paired graphs. Depending on the role of LLMs and their interaction with graph neural networks (GNNs), the LLM-on-graph techniques can be classified into treating LLMs as the final predictor (LLM as Predictor), treating LLMs as the feature encoder for GNNs (LLM as Encoder), and aligning LLMs with GNNs (LLM as Aligner).

* The first three authors contributed equally to this work.
• Bowen Jin, Chi Han, Heng Ji, Jiawei Han: University of Illinois at Urbana-Champaign. {bowenj4, chihan3, hengji, hanj}@illinois.edu
• Gang Liu, Meng Jiang: University of Notre Dame. {gliu7, mjiang2}@nd.edu

JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021

TABLE 1
Notations of Concepts.

Notations            Descriptions
|·|                  The length of a set.
[A, B]               The concatenation of A and B.
∥                    Concatenate operation.
G                    A graph.
V                    The set of nodes in a graph.
v                    A node v ∈ V.
E                    The set of edges in a graph.
e                    An edge e ∈ E.
Gv                   The ego graph associated with v in G.
N(v)                 The neighbors of a node v.
M                    A meta-path or a meta-graph.
NM(v)                The nodes which are reachable from node v with meta-path or meta-graph M.
D                    The text set.
s ∈ S                The text token in a text sentence S.
d_vi                 The text associated with the node vi.
d_eij                The text associated with the edge eij.
d_G                  The text associated with the graph G.
n                    The number of nodes, n = |V|.
b                    The dimension of a node hidden state.
x_vi ∈ R^d           The initial feature vector of the node vi.
H_v ∈ R^(n×b)        The node hidden feature matrix.
h_vi ∈ R^b           The hidden representation of node vi.
h_G ∈ R^b            The hidden representation of a graph G.
h_dv ∈ R^b           The representation of text dv.
H_dv ∈ R^(|dv|×b)    The hidden states of tokens in dv.
W, Θ, w, θ           Learnable model parameters.
LLM(·)               Large language model.
GNN(·)               Graph neural network.
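To make the notation above concrete, here is a minimal sketch (our illustration, not code from the survey) that stores a graph G = (V, E) as an edge set, computes the neighbor set N(v), and answers a shortest-path query with breadth-first search, two of the pure-graph reasoning tasks mentioned in the introduction.

```python
from collections import deque

def neighbors(edges, v):
    """N(v): the nodes adjacent to v in an undirected graph G = (V, E)."""
    return {u for (a, b) in edges for u in (a, b) if v in (a, b) and u != v}

def shortest_path_length(edges, source, target):
    """Breadth-first search over the edge set; returns hop count or None."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        v = queue.popleft()
        if v == target:
            return dist[v]
        for u in neighbors(edges, v):
            if u not in dist:
                dist[u] = dist[v] + 1
                queue.append(u)
    return None  # target unreachable from source

V = {0, 1, 2, 3}
E = {(0, 1), (1, 2), (2, 0), (2, 3)}
print(neighbors(E, 2))                 # {0, 1, 3}
print(shortest_path_length(E, 0, 3))   # 2
```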
There are a limited number of existing surveys exploring the intersection between LLMs and graphs. Related to deep learning on graphs, Wu et al. [19] give a comprehensive overview of graph neural networks (GNNs) with detailed illustrations on recurrent graph neural networks, convolutional graph neural networks, graph autoencoders, and spatial-temporal graph neural networks. Liu et al. [20] discuss pretrained foundation models on graphs, including their backbone architectures, pretraining methods, and adaptation techniques. Pan et al. [21] review the connection between LLMs and knowledge graphs (KGs), especially on how KGs can enhance LLM training and inference, and how LLMs can facilitate KG construction and reasoning. In summary, existing surveys either focus more on GNNs than on LLMs or fail to provide a systematic perspective on their applications in various graph scenarios as in Fig. 1. Our paper provides a comprehensive review of LLMs on graphs for broader researchers from diverse backgrounds beyond the computer science and machine learning community who want to enter this rapidly developing field.

Our Contributions. The notable contributions of our paper are summarized as follows:
• Categorization of Graph Scenarios. We systematically summarize the graph scenarios where language models can be adopted into: pure graphs, text-attributed graphs, and text-paired graphs.
• Systematic Review of Techniques. We provide the most comprehensive overview of language models on graph techniques. For different graph scenarios, we summarize the representative models, provide detailed illustrations of each of them, and make necessary comparisons.
• Abundant Resources. We collect abundant resources on language models on graphs, including benchmark datasets, open-source codebases, and practical applications.
• Future Directions. We delve into the foundational principles of language models on graphs and propose six prospective avenues for future exploration.

Organization of Survey. The rest of this survey is organized as follows. Section 2 introduces the background of LLMs and GNNs, lists commonly used notations, and defines related concepts. Section 3 categorizes graph scenarios where LLMs can be adopted and summarizes LLM-on-graph techniques. Sections 4-6 provide a detailed illustration of LLM methodologies for different graph scenarios. Section 7 delivers available datasets, open-source codebases, and a collection of applications across various domains. Section 8 introduces some potential future directions. Section 9 summarizes the paper.

2 DEFINITIONS & BACKGROUND

2.1 Definitions

We provide definitions of various types of graphs and introduce the notations (as shown in Table 1) in this section.

Definition 1 (Graph): A graph can be defined as G = (V, E). Here V signifies the set of nodes, while E denotes the set of edges. A specific node can be represented by vi ∈ V, and an edge directed from node vj to vi can be expressed as eij = (vi, vj) ∈ E. The set of nodes adjacent to a particular node v is articulated as N(v) = {u ∈ V | (v, u) ∈ E}.

A graph containing a node type set A and an edge type set R, where |A| + |R| > 2, is called a heterogeneous graph. A heterogeneous graph is also associated with a node type mapping function ϕ : V → A and an edge type mapping function ψ : E → R.

Definition 2 (Graph with node-level textual information): A graph with node-level textual information can be denoted as G = (V, E, D), where V, E and D are the node set, edge set, and text set, respectively. Each vi ∈ V is associated with some textual information d_vi ∈ D. For instance, in an academic citation network, one can interpret v ∈ V as the scholarly articles, e ∈ E as the citation links between them, and d ∈ D as the textual content of these articles. A graph with node-level textual information is also called a text-attributed graph [31], a text-rich graph [62], or a textual graph [72].

Definition 3 (Graph with edge-level textual information): A graph with edge-level textual information can be denoted as G = (V, E, D), where V, E and D are the node set, edge set, and text set, respectively. Each eij ∈ E is associated with some textual information d_eij ∈ D. For example, in a social network, one can interpret v ∈ V as the users, e ∈ E as the interactions between the users, and d ∈ D as the textual content of the messages sent between the users.

Definition 4 (Graph with graph-level textual information): It can be denoted as the pair (G, d_G), where G = (V, E). V and E are the node set and edge set, and d_G is the text set paired to the graph G. For instance, in a molecular graph G, v ∈ V denotes an atom, e ∈ E represents the strong attractive forces or chemical bonds that hold molecules together, and d_G represents the textual description of the molecule. We note that texts may also be associated with subgraph-level concepts and then paired with the entire graph.
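The three definitions above map directly onto a simple data structure. The following sketch is our illustration (field names are hypothetical, not from the survey) of a graph carrying node-level, edge-level, and graph-level texts in one container.

```python
from dataclasses import dataclass, field

@dataclass
class TextualGraph:
    """G = (V, E, D): a graph plus the texts attached to it (Definitions 2-4)."""
    nodes: set = field(default_factory=set)        # V
    edges: set = field(default_factory=set)        # E, directed pairs (vi, vj)
    node_text: dict = field(default_factory=dict)  # d_vi, node-level text (Def. 2)
    edge_text: dict = field(default_factory=dict)  # d_eij, edge-level text (Def. 3)
    graph_text: str = ""                           # d_G, graph-level text (Def. 4)

# A tiny citation network (Definition 2: text-attributed graph).
g = TextualGraph(
    nodes={"p1", "p2"},
    edges={("p2", "p1")},  # paper p2 cites paper p1
    node_text={
        "p1": "Attention Is All You Need",
        "p2": "BERT: Pre-training of Deep Bidirectional Transformers",
    },
)
print(g.node_text["p2"])
```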
2.2 Background

(Large) Language Models. Language Models (LMs), or language modeling, is an area in the field of natural language processing (NLP) on understanding and generation from text distributions. In recent years, large language models (LLMs) have demonstrated impressive capabilities in tasks such as machine translation, text summarization, and question answering [26], [43], [112]–[115], [195].

Language models have evolved significantly over time. BERT [23] marks significant progress in language modeling and representation. BERT models the conditional probability of a word given its bidirectional context, also named the masked language modeling (MLM) objective:

E_{S∼D} [ ∑_{s_i ∈ S} log p(s_i | s_1, ..., s_{i−1}, s_{i+1}, ..., s_{N_S}) ],   (1)

where S is a sentence sampled from the corpus D, s_i is the i-th word in the sentence, and N_S is the length of the sentence. BERT utilizes the Transformer architecture with attention mechanisms as the core building block. In the vanilla Transformer, the attention mechanism is defined as:

Attention(Q, K, V) = softmax(QK^T / √d_k) V,   (2)

where Q, K, V ∈ R^{N_S × d_k} are the query, key, and value vectors for each word in the sentence, respectively. Following BERT, other masked language models were proposed, such as RoBERTa [24], ALBERT [116], and ELECTRA [117], with similar architectures and objectives of text representation.

Although the original Transformer paper [93] experimented on machine translation, it was not until the release of GPT-2 [115] that language generation (aka. causal language modeling) became impactful on downstream tasks. Causal language modeling is the task of predicting the next word given the previous words in a sentence. The objective of causal language modeling is defined as:

E_{S∼D} [ ∑_{s_i ∈ S} log p(s_i | s_1, ..., s_{i−1}) ].   (3)

Simple but powerful, subsequent models like GPT-3 [26], GPT-4 [118], LLaMA [119], LLaMA2 [119], Mistral 7B [120], and T5 [29] show impressive emergent capabilities such as few-shot learning, chain-of-thought reasoning, and programming. Efforts have been made to combine language models with other modalities such as vision [96], [121] and biochemical structures [47], [122], [123]. We will discuss their combination with graphs in this paper.

We would like to point out that the word "large" in LLM is not associated with a clear and static threshold to divide language models. "Large" actually refers to a direction in which language models are inevitably evolving, and larger foundational models tend to possess significantly more representation and generalization power. Hence, we define LLMs to encompass both medium-scale PLMs, such as BERT, and large-scale LMs, like GPT-4, as suggested by [21].

Graph Neural Networks & Graph Transformers. In real-world scenarios, not all data are sequential like text; much data lies in a more complex non-Euclidean structure, i.e., graphs. GNNs were proposed as deep-learning architectures for graph data. Primary GNNs, including GCN [84], GraphSAGE [85], and GAT [86], are designed for solving node-level tasks. They mainly adopt a propagation-aggregation paradigm to obtain node representations:

a^{(l−1)}_{v_i v_j} = PROP^{(l)}(h^{(l−1)}_{v_i}, h^{(l−1)}_{v_j}), ∀v_j ∈ N(v_i);   (4)
h^{(l)}_{v_i} = AGG^{(l)}(h^{(l−1)}_{v_i}, {a^{(l−1)}_{v_i v_j} | v_j ∈ N(v_i)}).   (5)

Later works such as GIN [189] explore GNNs for solving graph-level tasks. They obtain graph representations by adopting a READOUT function on node representations:

h_G = READOUT({h_{v_i} | v_i ∈ G}).   (6)

The READOUT functions include mean pooling, max pooling, and so on. Subsequent work on GNNs tackles the issues of over-smoothing [139], over-squashing [140], interpretability [145], and bias [143]. While message-passing-based GNNs have demonstrated advanced structure encoding capability, researchers are exploring further enhancing their expressiveness with Transformers (i.e., graph Transformers). Graph Transformers utilize a global multi-head attention mechanism to expand the receptive field of each graph encoding layer [141]. They integrate the inductive biases of graphs into the model by positional encoding, structural encoding, the combination of message-passing layers with attention layers [142], or improving the efficiency of attention on large graphs [144]. Graph Transformers have been proven to be the state-of-the-art solution for many pure graph problems.
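The propagation-aggregation paradigm of Eqs. (4)-(6) can be sketched in a few lines of plain Python. This is our simplified illustration (not a specific model from the survey): PROP simply passes the neighbor's state, AGG mean-pools it with the node's own state, and READOUT mean-pools over all nodes.

```python
def prop(h_vi, h_vj):
    # PROP^(l): the message from neighbor vj to vi; here just the neighbor state (Eq. 4).
    return h_vj

def agg(h_vi, messages):
    # AGG^(l): mean of the node's own state and its neighbor messages (Eq. 5).
    states = [h_vi] + messages
    return [sum(s[k] for s in states) / len(states) for k in range(len(h_vi))]

def gnn_layer(h, neighbors):
    # One propagation-aggregation step over every node.
    return {v: agg(h[v], [prop(h[v], h[u]) for u in neighbors[v]]) for v in h}

def readout(h):
    # READOUT: mean pooling of node representations into h_G (Eq. 6).
    dim = len(next(iter(h.values())))
    return [sum(h[v][k] for v in h) / len(h) for k in range(dim)]

# Triangle graph with 2-dimensional node states.
h0 = {0: [1.0, 0.0], 1: [0.0, 1.0], 2: [1.0, 1.0]}
nbrs = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
h1 = gnn_layer(h0, nbrs)   # node representations after one layer
hG = readout(h1)           # graph representation; each coordinate is 2/3 here
```

Real models replace the identity PROP and mean AGG with learnable transformations (e.g., the weight matrices of GCN or the attention weights of GAT).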
Language Models vs. Graph Transformers. Modern language models and graph Transformers both use Transformers [93] as the base model architecture. This makes the two concepts hard to distinguish, especially when language models are adopted for graph applications. In this paper, "Transformers" typically refers to Transformer language models for simplicity. Here, we provide three points to help distinguish them: 1) Tokens (word token vs. node token): Transformers take a token sequence as inputs. For language models, the tokens are word tokens, while for graph Transformers, the tokens are node tokens. In those cases where tokens include both word tokens and node tokens, if the backbone Transformer is pretrained on a text corpus (e.g., BERT [23] and LLaMA [119]), we will call it a "language model". 2) Positional Encoding (sequence vs. graph): Language models typically adopt absolute or relative positional encoding considering the position of the word token in the sequence, while graph Transformers adopt the shortest path distance [141], random walk distance, or the eigenvalues of the graph Laplacian [142] to consider the distance of nodes in the graph. 3) Goal (text vs. graph): Language models are originally proposed for text encoding and generation, while graph Transformers are proposed for node encoding or graph encoding. In those cases where texts are served as nodes/edges on the graph, if the backbone Transformer is pretrained on a text corpus, we will call it a "language model".

Fig. 2. A taxonomy of LLM on graph scenarios and techniques with representative examples.

3 CATEGORIZATION AND FRAMEWORK

In this section, we first introduce our categorization of graph scenarios where language models can be adopted. Then we discuss the categorization of LLM-on-graph techniques. Finally, we summarize the training & inference framework for language models on graphs.

3.1 Categorization of Graph Scenarios with LLMs

Pure Graphs without Textual Information are graphs with no text information or no semantically rich text information. Examples include traffic graphs and power transmission graphs. Those graphs often serve as context to test the graph reasoning ability of large language models (solving graph theory problems) or serve as knowledge sources to enhance large language models (alleviating hallucination).

Text-Attributed Graphs refer to graphs where nodes or edges are associated with semantically rich text information. They are also called text-rich networks [31], textual graphs [72] or textual-edge networks [74]. Examples include academic networks, e-commerce networks, social networks, and legal case networks. On these graphs, researchers are interested in learning representations for nodes or edges with both textual and structure information [72], [74].

Text-Paired Graphs have textual descriptions defined for the entire graph structure. For example, graphs like molecules may be paired with captions or textual features. While the graph structure significantly contributes to molecular properties, text descriptions can complement our understanding of molecules. The graph scenarios can be found in Fig. 1.

3.2 Categorization of LLMs on Graph Techniques

According to the roles of LLMs and what the final components for solving graph-related problems are, we classify LLM-on-graph techniques into three main categories:

LLM as Predictor. This category of methods serves the LLM as the final component to output representations or predictions. It can be enhanced with GNNs and can be classified depending on how the graph information is injected into the LLM: 1) Graph as Sequence: This type of method makes no changes to the LLM architecture but makes it aware of graph structure by taking a "graph token sequence" as input. The "graph token sequence" can be natural language descriptions for a graph or hidden representations outputted by graph encoders. 2) Graph-Empowered LLM: This type of method modifies the architecture of the LLM base model (i.e., Transformers) and enables it to conduct joint text and graph encoding inside the architecture. 3) Graph-Aware LLM Finetuning: This type of method makes no changes to the input of the LLMs or the LLM architecture, but only fine-tunes the LLMs with supervision from the graph.

LLM as Encoder. This method is mostly utilized for graphs where nodes or edges are associated with text information (solving node-level or edge-level tasks). GNNs are the final components, and the LLM is adopted as the initial text encoder. To be specific, LLMs are first utilized to encode the text associated with the nodes/edges. The feature vectors outputted by the LLMs then serve as input embeddings for GNNs for graph structure encoding. The output embeddings from the GNNs are adopted as the final node/edge representations for downstream tasks. However, these methods suffer from convergence issues, sparse data issues, and inefficiency issues, for which we summarize solutions from optimization, data augmentation, and knowledge distillation perspectives.
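The LLM-as-Encoder pipeline just described (LLM encodes node text, then a GNN encodes structure) can be sketched as below. This is our illustration under loud assumptions: text_encoder is a trivial stand-in for a real pretrained LLM, and the single mean-pooling GNN layer is a simplification.

```python
def text_encoder(text):
    # Stand-in for LLM(d_vi): map a node's text to a feature vector x_vi.
    # A real system would call a pretrained language model here.
    return [float(len(text)), float(text.count(" ") + 1)]  # [characters, words]

def gnn_encode(features, neighbors):
    # One mean-aggregation GNN layer over the LLM-produced features.
    out = {}
    for v, h in features.items():
        states = [h] + [features[u] for u in neighbors[v]]
        out[v] = [sum(s[k] for s in states) / len(states) for k in range(len(h))]
    return out

node_text = {0: "graph neural networks", 1: "language models on graphs"}
neighbors = {0: [1], 1: [0]}

x = {v: text_encoder(t) for v, t in node_text.items()}  # step 1: LLM as text encoder
h = gnn_encode(x, neighbors)                            # step 2: GNN as structure encoder
# h[v] is the final node representation used for downstream tasks
```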
LLM as Aligner. This category of methods adopts LLMs as text-encoding components and aligns them with GNNs which serve as graph structure encoding components. LLMs and GNNs are adopted together as the final components for task solving. To be specific, the alignment between LLMs and GNNs can be categorized into 1) Prediction Alignment, where the generated pseudo labels from one modality are utilized for training on the other modality in an iterative learning fashion, and 2) Latent Space Alignment, where contrastive learning is adopted to align text embeddings generated by LLMs and graph embeddings generated by GNNs.

In the following sections, we will follow our categorization in Section 3 and discuss detailed methodologies for each graph scenario.

4 PURE GRAPHS

Problems on pure graphs provide a fundamental motivation for why and how LLMs are introduced into graph-related reasoning problems. Investigated thoroughly in graph theory, pure graphs serve as a universal representation format for a wide range of classical algorithmic problems in all perspectives of computer science. Many graph-based concepts, such as shortest paths, particular sub-graphs, and flow networks, have strong connections with real-world applications [133]–[135], [193]. Therefore, pure graph-based reasoning is vital in providing theoretical solutions and insights for reasoning problems grounded in real-world applications.

Nevertheless, many reasoning tasks require a computation capacity beyond traditional GNNs. GNNs are typically designed to carry out a bounded number of operations given a graph size. In contrast, graph reasoning problems can require up to indefinite complexity depending on the task's nature. On the other hand, LLMs have recently demonstrated excellent emergent reasoning ability [48], [112], [113]. This is partially due to their autoregressive mechanism, which enables computing indefinite sequences of intermediate steps with careful prompting or training [48], [49].

The following subsections discuss the attempts to incorporate LLMs into pure graph reasoning problems. We will also discuss the corresponding challenges, limitations, and findings. Table 4 in the Appendix lists a categorization of these efforts. Usually, input graphs are serialized as part of the input sequence, either by verbalizing the graph structure [124]–[126], [128]–[132] or by encoding the graph structure into implicit feature sequences [42]. The studied reasoning problems range from simpler ones like connectivity, shortest paths, and cycle detection to harder ones like maximum flow and Hamiltonian pathfinding (an NP-complete problem). A comprehensive list of the studied problems is given in Appendix Table 5. Note that we only list representative problems here. This table does not include more domain-specific problems, such as the spatial-temporal reasoning problems in [128].

4.1 Direct Answering

Although graph-based reasoning problems usually involve complex computation, researchers still attempt to let language models directly generate answers from the serialized input graphs as a starting point or a baseline, partially because of the simplicity of the approach and partially in awe of other emergent abilities of LLMs. Although various attempts have been made to optimize how graphs are presented in the input sequence, which we will discuss in the following sections, bounded by the finite sequence length and computational operations, there is a fundamental limitation of this approach to solving complex reasoning problems such as NP-complete ones. Unsurprisingly, most studies find that LLMs possess preliminary graph understanding ability, but the performance is less satisfactory on more complex problems or larger graphs [42], [124]–[126], [128], [131] where reasoning is necessary.

Plainly Verbalizing Graphs. Verbalizing the graph structure in natural language is the most straightforward way of representing graphs. Representative approaches include describing the edge and adjacency lists, widely studied in [124], [125], [128], [131]. For example, for a triangle graph with three nodes, the edge list can be written as "[(0, 1), (1, 2), (2, 0)]", which means node 0 is connected to node 1, node 1 is connected to node 2, and node 2 is connected to node 0. It can also be written in natural language, such as "There is an edge between node 0 and node 1, an edge between node 1 and node 2, and an edge between node 2 and node 0." On the other hand, we can describe the adjacency list from the nodes' perspective. For example, for the same triangle graph, the adjacency list can be written as "Node 0 is connected to node 1 and node 2. Node 1 is connected to node 0 and node 2. Node 2 is connected to node 0 and node 1." On these inputs, one can prompt LLMs to answer questions in either zero-shot or few-shot (in-context learning) settings; the former directly asks questions given the graph structure, while the latter asks questions about the graph structure after providing a few examples of questions and answers. [124]–[126] do confirm that LLMs can answer easier questions such as connectivity, neighbor identification, and graph size counting, but fail to answer more complex questions such as cycle detection and Hamiltonian pathfinding. Their results also reveal that providing more examples in the few-shot setting increases the performance, especially on easier problems, although it is still not satisfactory.
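Both verbalizations above can be generated mechanically from the edge set. The helper below is our illustration (not code from any of the cited works); it emits an edge-list description and an adjacency-list description for an undirected graph and wraps them into a zero-shot question.

```python
def edge_list_text(edges):
    # "an edge between node a and node b, ..."
    return ", ".join(f"an edge between node {a} and node {b}" for a, b in edges)

def adjacency_text(nodes, edges):
    # "Node v is connected to node u1 and node u2. ..."
    lines = []
    for v in sorted(nodes):
        nbrs = sorted({b if a == v else a for a, b in edges if v in (a, b)})
        joined = " and ".join(f"node {u}" for u in nbrs)
        lines.append(f"Node {v} is connected to {joined}.")
    return " ".join(lines)

triangle = [(0, 1), (1, 2), (2, 0)]
prompt = (
    f"There is {edge_list_text(triangle)}. "
    f"{adjacency_text({0, 1, 2}, triangle)} "
    "Is there a path from node 0 to node 2?"  # zero-shot question
)
print(prompt)
```

A few-shot variant would simply prepend a handful of (question, answer) pairs about other small graphs before the final question.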
Paraphrasing Graphs. The verbalized graphs can be lengthy, unstructured, and complicated to read, even for humans, so they might not be the best input format for LLMs to infer the answers. To this end, researchers also attempt to paraphrase the graph structure into more natural or concise sentences. [126] find that by prompting LLMs to generate a format explanation of the raw graph inputs for itself (Format-Explanation) or to pretend to play a role in a natural task (Role Prompting), the performance on some problems can be improved, but not systematically. [131] explores the effect of grounding the pure graph in a real-world scenario, such as social networks, friendship graphs, or co-authorship graphs. In such graphs, nodes are described as people, and edges are relationships between people. Results indicate that encoding in real-world scenarios can improve the performance on some problems, but still not consistently.

Encoding Graphs Into Implicit Feature Sequences. Finally, researchers also attempt to encode the graph structure into implicit feature sequences as part of the input sequence [42]. Unlike the previous verbalizing approaches, this usually involves training a graph encoder to encode the graph structure into a sequence of features and fine-tuning the LLMs to adapt to the new input format. [42] demonstrates drastic performance improvements on problems including substructure counting, maximum triplet sum, shortest path, and bipartite matching, indicating that fine-tuning LLMs has great fitting power on a specific task distribution.

4.2 Heuristic Reasoning

Direct mapping to the output leverages the LLMs' powerful representation power to "guess" the answers. Still, it does not fully utilize the LLMs' impressive emergent reasoning ability, which is essential for solving complex reasoning problems. To this end, attempts have been made to let LLMs perform heuristic reasoning on graphs. This approach encourages LLMs to perform a series of intermediate reasoning steps that might heuristically lead to the correct answer, which resembles a path-finding reasoning schema [203].

Reasoning Step by Step. Encouraged by the success of chain-of-thought (CoT) reasoning [48], [113], researchers also attempt to let LLMs perform reasoning step by step on graphs. Chain-of-thought encourages LLMs to roll out a sequence of reasoning steps to solve a problem, similar to how humans solve problems. Zero-shot CoT is a similar approach that does not require any examples. These techniques are studied in [42], [124]–[126], [128], [131], [132]. Results indicate that CoT-style reasoning can improve the performance on simpler problems, such as cycle detection and shortest path detection. Still, the improvement is inconsistent or diminishes on more complex problems, such as Hamiltonian pathfinding and topological sorting.

Retrieving Subgraphs as Evidence. Many graph reasoning problems, such as node degree counting and neighborhood detection, only involve reasoning on a subgraph of the whole graph. Such properties allow researchers to let LLMs retrieve the subgraphs as evidence and perform reasoning on the subgraphs. Build-a-Graph prompting [124] encourages LLMs to reconstruct the graph structures relevant to the questions and then perform reasoning on them. This method demonstrates promising results on problems except for Hamiltonian pathfinding, a notoriously tricky problem requiring reasoning on the whole graph. Another approach, Context-Summarization [126], encourages LLMs to summarize the key nodes, edges, or sub-graphs and perform reasoning on the summary.

Searching on Graphs. This kind of reasoning is related to the search algorithms on graphs, such as breadth-first search (BFS) and depth-first search (DFS). Although not universally applicable, BFS and DFS are the most intuitive and effective ways to solve some graph reasoning problems. Numerous explorations have been made to simulate searching-based reasoning, especially on knowledge-graph question answering. This approach enjoys the advantage of providing interpretable evidence besides the answer. Reasoning-on-Graphs (RoG) [129] is a representative approach that prompts LLMs to generate several relation paths as plans, which are then retrieved from the knowledge graph (KG) and used as evidence to answer the questions. Another approach is to iteratively retrieve and reason on the subgraphs from the KG [130], [132], simulating a dynamic searching process. At each step, the LLMs retrieve neighbors of the current nodes and then decide whether to answer the question or continue to the next search step. These methods address the scalability challenge when knowledge from multiple graphs is available.

4.3 Algorithmic Reasoning

The previous two approaches are heuristic, which means that the reasoning process accords with human intuition but is not guaranteed to lead to the correct answer. In contrast, such problems are usually solved by algorithms in computer science. Therefore, researchers also attempt to let LLMs perform algorithmic reasoning on graphs. [124] proposed "Algorithmic Prompting", which prompts the LLMs to recall the algorithms that are relevant to the questions and then perform reasoning step by step according to the algorithms. Their results, however, do not show consistent improvement over the heuristic reasoning approach. A more direct approach, Graph-ToolFormer [127], lets LLMs generate API calls as explicit reasoning steps. These API calls are then executed externally to acquire answers on an external graph. This approach is suitable for converting tasks grounded in real tasks into pure graph reasoning problems, demonstrating efficacy on various applications such as knowledge graphs, social networks, and recommendation systems.

4.4 Discussion

The above approaches are not mutually exclusive, and they can be combined to achieve better performance; for example, one can prompt language models for heuristics in algorithmic searching. Moreover, heuristic reasoning can also conduct direct answering, while algorithmic reasoning contains the capacity of heuristic reasoning as a special case. Researchers are advised to select the most suitable approach for a specific problem.
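The iterative retrieve-then-decide loop described under "Searching on Graphs" can be sketched as a BFS-style frontier expansion. In this illustration of ours, should_stop is a stub standing in for the LLM's decision to answer or keep searching, and the knowledge graph is a toy adjacency map; none of this is code from the cited systems.

```python
def retrieve_neighbors(kg, node):
    # One retrieval step: fetch the (relation, neighbor) pairs of the current node.
    return kg.get(node, [])

def search(kg, start, should_stop, max_steps=5):
    """Expand the frontier until the stub 'LLM' decides it can answer."""
    frontier, visited = [start], {start}
    for _ in range(max_steps):
        answer = should_stop(frontier)  # LLM stand-in: answer now, or None to continue
        if answer is not None:
            return answer, visited
        next_frontier = []
        for node in frontier:
            for _, nbr in retrieve_neighbors(kg, node):
                if nbr not in visited:
                    visited.add(nbr)
                    next_frontier.append(nbr)
        frontier = next_frontier
    return None, visited

# Toy KG: who directed the film that won Best Picture in 2020?
kg = {
    "Best Picture 2020": [("winner", "Parasite")],
    "Parasite": [("directed_by", "Bong Joon-ho")],
}
answer, visited = search(
    kg, "Best Picture 2020",
    lambda frontier: "Bong Joon-ho" if "Bong Joon-ho" in frontier else None,
)
print(answer)  # Bong Joon-ho
```

The visited set doubles as the interpretable evidence trail that the text above highlights as an advantage of searching-based reasoning.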
Results indicate that can be combined to achieve better performance, for example, CoT-stylereasoningcanimprovetheperformanceonsimpler by prompting language models for heuristics in algorithmic problems, such as cycle detection and shortest path detection. searching. Moreover, heuristic reasoning can also conduct Still, the improvement is inconsistent or diminishes on more direct answering, while algorithmic reasoning contains the complex problems, such as Hamiltonian path finding and capacity of heuristic reasoning as a special case. Researchers topological sorting. are advised to select the most suitable approach for a specific Retrieving Subgraphs as Evidence. Many graph reasoning problem. problems, such as node degree counting and neighborhood detection, only involve reasoning on a subgraph of the 5 TEXT-ATTRIBUTED GRAPHS. whole graph. Such properties allow researchers to let LLMs Text-attributed graphs exist ubiquitously in the real world, retrieve the subgraphs as evidence and perform reasoning e.g., academic networks, and legal case networks. Learning on the subgraphs. Build-a-Graph prompting [124] encour- on such networks requires the model to encode both the ages LLMs to reconstruct the relevant graph structures textual information associated with the nodes/edges and to the questions and then perform reasoning on them. the structure information lying inside the input graph. This method demonstrates promising results on problems
academic networks, and legal case networks. Learning on the subgraphs. Build-a-Graph prompting [124] encour- on such networks requires the model to encode both the ages LLMs to reconstruct the relevant graph structures textual information associated with the nodes/edges and to the questions and then perform reasoning on them. the structure information lying inside the input graph. This method demonstrates promising results on problems Depending on the role of LLM, existing works can be except for Hamiltonian pathfinding, a notoriously tricky categorized into three types: LLM as Predictor, LLM as problem requiring reasoning on the whole graph. Another Encoder, and LLM as Aligner. We summarize all surveyed approach,Context-Summarization[126],encouragesLLMsto methods in Appendix Table 6.JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021 7 5.1 LLM as Predictor The strength of these methods is that they can capture These methods serve the language model as the main model the hidden representations of useful structure information architecture to capture both the text information and graph with a strong graph encoder, while the challenge is how structure information. They can be categorized into three to fill the gap between graph modality and text modality. types: Graph as Sequence methods, Graph-Empowered LLMs, GNP [41] adopts a similar philosophy from LLaVA [91], and Graph-Aware LLM finetuning methods, depending on how where they utilize GNN to generate graph tokens and then structure information in graphs is injected into language project the graph tokens into the text token space with models (input vs. architecture vs. loss). In the Graph as Se- learnable projection matrices. The projected graph tokens are quence methods, graphs are converted into sequences that can concatenated with text tokens and fed into the language be understood by language models together with texts from model. GraphGPT [45] further proposes to train a text- the inputs. 
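The CoT-style prompting surveyed above amounts to verbalizing the graph and appending a step-by-step instruction. A minimal sketch of such a prompt builder follows; the function name and exact wording are illustrative, not taken from any specific paper.

```python
def build_graph_cot_prompt(edges, question):
    """Construct a chain-of-thought style prompt for a graph reasoning task.

    The graph is verbalized as an edge list, then the model is nudged to
    roll out intermediate reasoning steps (zero-shot CoT style).
    """
    edge_text = ", ".join(f"({u}, {v})" for u, v in edges)
    return (
        f"You are given an undirected graph with edges: {edge_text}.\n"
        f"Question: {question}\n"
        "Let's think step by step."
    )

prompt = build_graph_cot_prompt([(0, 1), (1, 2), (2, 0)],
                                "Does this graph contain a cycle?")
print(prompt)
```

For few-shot CoT, one would prepend worked examples with their reasoning traces; zero-shot CoT, as here, relies on the trailing instruction alone.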
In the Graph-Empowered LLMs methods, people modify the architecture of Transformers (which is the base architecture for LLMs) to enable them to encode text and graph structure simultaneously. In the Graph-Aware LLM finetuning methods, the LLM is fine-tuned with graph structure supervision and can generate graph-contextualized representations.

5.1.1 Graph as Sequence.
In these methods, the graph information is mainly encoded into the LLM from the "input" side. The ego-graphs associated with nodes/edges are serialized into a sequence $H_{\mathcal{G}_v}$ which can be fed into the LLM together with the texts $d_v$:

$H_{\mathcal{G}_v} = \text{Graph2Seq}(\mathcal{G}_v)$, (7)
$h_v = \text{LLM}([H_{\mathcal{G}_v}, d_v])$. (8)

Depending on the choice of the Graph2Seq(·) function, the methods can be further categorized into rule-based methods and GNN-based methods. The illustration of the categories can be found in Fig. 3.

Rule-based: Linearizing Graphs into Text Sequence with Rules. These methods design rules to describe the structure with natural language and adopt a text prompt template as Graph2Seq(·). For example, given an ego-graph $\mathcal{G}_{v_i}$ of the paper node $v_i$ connecting to author nodes $v_j$ and $v_k$ and venue nodes $v_t$ and $v_s$, $H_{\mathcal{G}_{v_i}} = \text{Graph2Seq}(\mathcal{G}_{v_i}) =$ "The center paper node is $v_i$. Its author neighbor nodes are $v_j$ and $v_k$ and its venue neighbor nodes are $v_t$ and $v_s$".
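The paper-node template above can be sketched as a small Graph2Seq helper. The function name and signature are illustrative (not from any surveyed system); it verbalizes a 1-hop ego-graph grouped by neighbor type.

```python
def graph2seq(center, neighbors_by_type):
    """Rule-based Graph2Seq: verbalize a 1-hop ego-graph into a text prompt.

    `neighbors_by_type` maps a relation name (e.g. "author") to a list of
    neighbor node names; the template mirrors the paper-node example above.
    """
    parts = [f"The center paper node is {center}."]
    for ntype, nodes in neighbors_by_type.items():
        joined = " and ".join(nodes)
        parts.append(f"Its {ntype} neighbor nodes are {joined}.")
    return " ".join(parts)

seq = graph2seq("v_i", {"author": ["v_j", "v_k"], "venue": ["v_t", "v_s"]})
print(seq)
```

Templates of this kind can be extended to multi-hop neighborhoods, as InstructGLM does with up-to-3-hop descriptions.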
This is the most straightforward and easiest way (without introducing extra model parameters) to encode graph structures into language models. Along this line, InstructGLM [46] designs templates to describe local ego-graph structure (maximum 3-hop connection) for each node and conducts instruction tuning for node classification and link prediction. GraphText [65] further proposes a syntax tree-based method to transfer structure into text sequence. Researchers [82] also study when and why the linearized structure information on graphs can improve the performance of LLMs on node classification, and find that the structure information is beneficial when the textual information associated with the node is scarce (in this case, the structure information can provide auxiliary information gain).

GNN-based: Encoding Graphs into Special Tokens with GNNs. Different from rule-based methods, which use natural language prompts to linearize graphs into sequences, GNN-based methods adopt graph encoder models (i.e., GNNs) to encode the ego-graph associated with nodes into special token representations which are concatenated with the pure text information into the language model:

$H_{\mathcal{G}_v} = \text{Graph2Seq}(\mathcal{G}_v) = \text{GraphEnc}(\mathcal{G}_v)$. (9)

The strength of these methods is that they can capture the hidden representations of useful structure information with a strong graph encoder, while the challenge is how to fill the gap between the graph modality and the text modality. GNP [41] adopts a similar philosophy from LLaVA [91], where they utilize a GNN to generate graph tokens and then project the graph tokens into the text token space with learnable projection matrices. The projected graph tokens are concatenated with text tokens and fed into the language model. GraphGPT [45] further proposes to train a text-grounded GNN for the projection with a text encoder and contrastive learning. DGTL [76] introduces disentangled graph learning, serves graph representations as positional encoding, and adds them to the text sequence. METERN [75] adds learnable relation embeddings to node textual sequences for text-based multiplex representation learning on graphs [92].

5.1.2 Graph-Empowered LLMs.
In these methods, researchers design advanced LLM architectures (i.e., Graph-Empowered LLMs) which can conduct joint text and graph encoding inside their model architecture. Transformers [93] serve as the base model for nowadays pre-trained LMs [23] and LLMs [36]. However, they are designed for natural language (sequence) encoding and do not take non-sequential structure information into consideration. To this end, Graph-Empowered LLMs are proposed. They have a shared philosophy of introducing virtual structure tokens $H_{\mathcal{G}_v}$ inside each Transformer layer:

$\tilde{H}^{(l)}_{d_v} = [H^{(l)}_{\mathcal{G}_v}, H^{(l)}_{d_v}]$ (10)

where $H_{\mathcal{G}_v}$ can be learnable embeddings or output from graph encoders. Then the original multi-head attention (MHA) in Transformers is modified into an asymmetric MHA to take the structure tokens into consideration:

$\text{MHA}_{\text{asy}}(H^{(l)}_{d_v}, \tilde{H}^{(l)}_{d_v}) = \big\Vert_{u=1}^{U}\, \text{head}_u(H^{(l)}_{d_v}, \tilde{H}^{(l)}_{d_v})$,
where $\text{head}_u(H^{(l)}_{d_v}, \tilde{H}^{(l)}_{d_v}) = \text{softmax}\!\left(\frac{Q^{(l)}_u \tilde{K}^{(l)\top}_u}{\sqrt{d/U}}\right)\tilde{V}^{(l)}_u$,
$Q^{(l)}_u = H^{(l)}_{d_v} W^{(l)}_{Q,u}$, $\tilde{K}^{(l)}_u = \tilde{H}^{(l)}_{d_v} W^{(l)}_{K,u}$, $\tilde{V}^{(l)}_u = \tilde{H}^{(l)}_{d_v} W^{(l)}_{V,u}$. (11)

With the asymmetric MHA mechanism, the node encoding process of the $(l+1)$-th layer will be:

$\tilde{H}^{(l)\prime}_{d_v} = \text{Normalize}\big(H^{(l)}_{d_v} + \text{MHA}_{\text{asy}}(\tilde{H}^{(l)}_{d_v}, H^{(l)}_{d_v})\big)$,
$H^{(l+1)}_{d_v} = \text{Normalize}\big(\tilde{H}^{(l)\prime}_{d_v} + \text{MLP}(\tilde{H}^{(l)\prime}_{d_v})\big)$. (12)

Along this line of work, GreaseLM [67] proposes to have a language encoding component and a graph encoding component in each layer. These two components interact through a modality-fusion layer (MInt layer), where a special structure token is added to the text Transformer input and a special node is added to the graph encoding layer. DRAGON [81] further proposes strategies to pretrain GreaseLM with unsupervised signals. GraphFormers [72] are designed for node representation learning on homogeneous text-attributed networks, where the current-layer [CLS] token hidden states of neighboring documents are aggregated and added as a new token on the current layer's center node text encoding. Patton [31] proposes to pretrain GraphFormers with two novel strategies: network-contextualized masked language modeling and masked node prediction.

Fig. 3. The illustration of various LLM as Predictor methods, including (a) Rule-based Graph As Sequence, (b) GNN-based Graph As Sequence, (c) Graph-Empowered LLMs.
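The asymmetric attention idea — text tokens query over structure tokens plus text tokens — can be sketched in NumPy. This is a minimal single-layer illustration of Eqs. (10)–(11): it uses one shared set of projection matrices sliced across heads, whereas real Graph-Empowered LLMs use per-head, per-layer learned parameters.

```python
import numpy as np

def mha_asy(H_dv, H_gv, W_q, W_k, W_v, num_heads):
    """Asymmetric multi-head attention sketch.

    H_dv: (n, d) text-token states; H_gv: (m, d) virtual structure tokens.
    Queries come from text tokens only; keys/values come from the
    concatenation [H_gv, H_dv] (Eq. 10), so text attends to structure.
    """
    n, d = H_dv.shape
    H_tilde = np.concatenate([H_gv, H_dv], axis=0)      # Eq. (10)
    dh = d // num_heads
    heads = []
    for u in range(num_heads):
        sl = slice(u * dh, (u + 1) * dh)
        Q = H_dv @ W_q[:, sl]                 # queries: text tokens only
        K = H_tilde @ W_k[:, sl]              # keys: structure + text
        V = H_tilde @ W_v[:, sl]              # values: structure + text
        scores = Q @ K.T / np.sqrt(dh)
        attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
        attn /= attn.sum(axis=-1, keepdims=True)        # row-wise softmax
        heads.append(attn @ V)
    return np.concatenate(heads, axis=-1)               # ||_u head_u

rng = np.random.default_rng(0)
n, m, d, U = 4, 2, 8, 2
out = mha_asy(rng.normal(size=(n, d)), rng.normal(size=(m, d)),
              rng.normal(size=(d, d)), rng.normal(size=(d, d)),
              rng.normal(size=(d, d)), U)
print(out.shape)  # (4, 8): one d-dimensional output per text token
```

Note the asymmetry: the output has one row per text token (the queries), while the structure tokens only contribute keys and values, matching Eq. (12)'s residual update of the text states.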
Heterformer [73] introduces virtual neighbor tokens for text-rich neighbors and textless neighbors, which are concatenated with the original text tokens and fed into each Transformer layer. Edgeformers [74] are proposed for representation learning on textual-edge networks where edges are associated with rich textual information. When conducting edge encoding, virtual node tokens are concatenated onto the original edge text tokens for joint encoding.

5.1.3 Graph-Aware LLM finetuning.
In these methods, the graph information is mainly injected into the LLM by "fine-tuning on graphs". Researchers assume that the structure of graphs can provide hints on what documents are "semantically similar" to what other documents. For example, papers citing each other in an academic graph can be of similar topics. These methods adopt vanilla language models that take text as input (e.g., BERT [23] and SciBERT [25]) as the base model and fine-tune them with structure signals on the graph [51]. After that, the LLMs will learn node/edge representations that capture the graph homophily from the text perspective. This is the simplest way to utilize LLMs on graphs. However, during encoding, the model itself can only consider text.

Most methods adopt the two-tower encoding and training pipeline, where the representation of each node is obtained separately and the model is optimized as follows:

$h_{v_i} = \text{LLM}_\theta(d_{v_i})$, $\quad \min_\theta f\big(h_{v_i}, \{h_{v_i^+}\}, \{h_{v_i^-}\}\big)$. (13)

Here $v_i^+$ represents the positive nodes to $v_i$, $v_i^-$ represents the negative nodes to $v_i$, and $f(\cdot)$ denotes the pairwise training objective. Different methods have different strategies for $v_i^+$ and $v_i^-$ with different training objectives $f(\cdot)$. SPECTER [51] constructs the positive text/node pairs with the citation relation, explores random negatives and structure hard negatives, and fine-tunes SciBERT [25] with the triplet loss. SciNCL [52] extends SPECTER by introducing more advanced positive and negative sampling methods based on embeddings trained on graphs. Touchup-G [54] proposes the measurement of feature homophily on graphs and brings up a binary cross-entropy fine-tuning objective. TwHIN-BERT [56] mines positive node pairs with off-the-shelf heterogeneous information network embeddings and trains the model with a contrastive social loss. MICoL [59] discovers semantically positive node pairs with meta-paths [90] and adopts the InfoNCE objective. E2EG [60] utilizes a similar philosophy from GIANT [58] and adds a neighbor prediction objective apart from the downstream task objective. WalkLM [61] conducts random walks for structure linearization before fine-tuning the language model. A summarization of the two-tower graph-centric LLM fine-tuning objectives can be found in Appendix Table 7.

There are other methods using the one-tower pipeline, where node pairs are concatenated and encoded together:

$h_{v_i, v_j} = \text{LLM}_\theta(d_{v_i}, d_{v_j})$, $\quad \min_\theta f(h_{v_i, v_j})$. (14)

LinkBERT [30] proposes a document relation prediction objective (an extension of next sentence prediction in BERT [23]) which aims to classify the relation of two node text pairs as contiguous, random, or linked. MICoL [59] explores predicting the node pairs' binary meta-path- or meta-graph-indicated relation with the one-tower language model.

5.1.4 Discussion
Although the community is making good progress, there are still some open questions to be solved.
Graph as Code Sequence. Existing graph-as-sequence methods are mainly rule-based or GNN-based. The former relies on natural language to describe the graphs, which is not natural for structure data, while the latter has a GNN component that needs to be trained. A more promising way is to obtain a structure-aware sequence for graphs that can support zero-shot inference. A potential solution is to adopt code (which can capture structures) to describe the graphs and utilize code LLMs [22].
Advanced Graph-Empowered LLM techniques. Graph-empowered LLM is a promising direction to achieve foundational models for graphs. However, existing works are far from enough: 1) Task. Existing methods are mainly designed for representation learning (with encoder-only LLMs), which are hard to adopt for generation tasks. A potential solution is to design Graph-Empowered LLMs with decoder-only or encoder-decoder LLMs as the base architecture. 2) Pretraining. Pretraining is important to enable LLMs with contextualized data understanding capability, which can be generalized to other tasks. However, existing works mainly focus on pretraining LLMs on homogeneous text-attributed networks. Future studies are needed to explore LLM pretraining in more diverse real-world scenarios, including heterogeneous text-attributed networks [73], dynamic text-attributed networks [128], and textual-edge networks [74].

5.2 LLM as Encoder
LLMs extract textual features to serve as initial node feature vectors for GNNs, which then generate node/edge representations and make predictions. These methods typically adopt an LLM-GNN cascaded architecture to obtain the final representation $h_{v_i}$ for node $v_i$:

$x_{v_i} = \text{LLM}(d_{v_i})$, $\quad h_{v_i} = \text{GNN}(X_v, \mathcal{G})$. (15)

Here $x_{v_i}$ is the feature vector that captures the textual information $d_{v_i}$ associated with $v_i$.

Fig. 4. The illustration of various techniques related to LLM as Encoder, including (a) One-step Training, (b) Two-step Training, (c) Data Augmentation, and (d) Knowledge Distillation.
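The cascaded architecture of Eq. (15) can be sketched end-to-end in a few lines. The text encoder below is a toy stand-in (a bag-of-words sum of random word vectors, not a real LLM), and the GNN is a single mean-aggregation layer; both function names are illustrative.

```python
import numpy as np

def llm_encode(texts, dim=4, seed=0):
    """Stand-in for the LLM text encoder: one feature vector per node text.

    A real system would call a pretrained language model (e.g. BERT) here;
    this sketch sums random per-word vectors so the example is runnable.
    """
    rng = np.random.default_rng(seed)
    vocab = {}
    X = np.zeros((len(texts), dim))
    for i, t in enumerate(texts):
        for w in t.split():
            if w not in vocab:
                vocab[w] = rng.normal(size=dim)
            X[i] += vocab[w]
    return X

def gnn_mean_layer(X, adj):
    """One mean-aggregation GNN layer: average each node with its neighbors."""
    H = X.copy()
    for v in range(X.shape[0]):
        neigh = adj.get(v, [])
        if neigh:
            H[v] = (X[v] + X[neigh].sum(axis=0)) / (1 + len(neigh))
    return H

texts = ["graph neural network", "language model", "survey paper"]
adj = {0: [1], 1: [0, 2], 2: [1]}   # path graph over three nodes
X = llm_encode(texts)               # x_vi = LLM(d_vi)
H = gnn_mean_layer(X, adj)          # h_vi = GNN(X_v, G)
print(H.shape)  # (3, 4)
```

In the one-step regime both components are optimized jointly; in the two-step regime the text encoder is first adapted to the graph and then the whole cascade is fine-tuned.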
The final representation $h_{v_i}$ will contain both the textual information and the structure information of $v_i$ and can be used for downstream tasks. In the following sections, we will discuss the optimization, augmentation, and distillation of such models. The figures for these techniques can be found in Fig. 4.

5.2.1 Optimization
One-step training refers to training the LLM and the GNN together in the cascaded architecture for the downstream tasks. TextGNN [77] explores GCN [84], GraphSAGE [85], and GAT [86] as the base GNN architecture, adds a skip connection between the LLM output and the GNN output, and optimizes the whole architecture for the sponsored search task. AdsGNN [78] further extends TextGNN by proposing edge-level information aggregation. GNN-LM [66] adds GNN layers to enable the vanilla language model to reference similar contexts in the corpus for language modeling. Jointly training LLMs and GNNs in a cascaded pipeline is convenient but may suffer from efficiency [68] (only supporting sampling a few one-hop neighbors regarding memory complexity) and local minima [35] (the LLM underfits the data) issues.

Two-step training means first adapting LLMs to the graph, and then finetuning the whole LLM-GNN cascaded pipeline. GIANT [58] proposes to conduct neighborhood prediction with the use of XR-Transformers [79] and results in an LLM that can output better feature vectors than bag-of-words and vanilla BERT [23] embeddings for node classification. LM-GNN [68] introduces graph-aware pre-fine-tuning to warm up the LLM on the given graph before fine-tuning the whole LLM-GNN pipeline, demonstrating significant performance gains. SimTeG [35] finds that the simple framework of first training the LLMs on the downstream task and then fixing the LLMs and training the GNNs can result in outstanding performance. They further find that using an efficient fine-tuning method, e.g., LoRA [40], to tune the LLM can alleviate overfitting issues. GaLM [80] explores ways to pretrain the LLM-GNN cascaded architecture. The two-step strategy can effectively alleviate the insufficient training of the LLM, which contributes to higher text representation quality, but is more computationally expensive and time-consuming than the one-step training strategy.

5.2.2 Data Augmentation
With their demonstrated zero-shot capability [43], LLMs can be used for data augmentation to generate additional text data for the LLM-GNN cascaded architecture. The philosophy of using LLMs to generate pseudo data is widely explored in NLP [83], [89]. LLM-GNN [64] proposes to conduct zero-shot node classification on text-attributed networks by labeling a few nodes and using the pseudo labels to fine-tune GNNs. TAPE [70] presents a method that uses an LLM to generate prediction text and explanation text, which serve as augmented text data compared with the original text data. A following medium-scale language model is adopted to encode the texts and output features for the augmented texts and the original text, respectively, before feeding into GNNs. ENG [71] brings forward the idea of generating labeled nodes for each category, adding edges between labeled nodes and other nodes, and conducting semi-supervised GNN learning for node classification.

5.2.3 Knowledge Distillation
The LLM-GNN cascaded pipeline is capable of capturing both text information and structure information. However, the pipeline suffers from time complexity issues during inference, since GNNs need to conduct neighbor sampling and LLMs need to encode the text associated with both the center node and its neighbors. A straightforward solution is to serve the LLM-GNN cascaded pipeline as the teacher model and distill it into an LLM as the student model. In this case, during inference, the model (which is a pure LLM) only needs to encode the text on the center node and avoids time-consuming neighbor sampling. AdsGNN [78] proposes an L2-loss to force the outputs of the student model to preserve topology after the teacher model is trained. GraD [69] introduces three strategies, including the distillation objective and the task objective, to optimize the teacher model and distill its capability to the student model.

5.2.4 Discussion
Given that GNNs are demonstrated as powerful models in encoding graphs, "LLMs as encoders" seems to be the most straightforward way to utilize LLMs on graphs. However, there are still open questions.
Limited Task: Go Beyond Representation Learning. Current "LLMs as encoders" methods or LLM-GNN cascaded architectures are mainly focusing on representation learning, given the single embedding propagation-aggregation mechanism of GNNs, which prevents them from being adopted for generation tasks (e.g., node/text generation). A potential solution to this challenge can be to conduct GNN encoding for LLM-generated token-level representations and to design proper decoders that can perform generation based on the LLM-GNN cascaded model outputs.
Low Efficiency: Advanced Knowledge Distillation. The LLM-GNN cascaded pipeline suffers from time complexity issues since the model needs to conduct neighbor sampling and then embedding encoding for each neighboring node. Although there are methods that explore distilling the learned LLM-GNN model into an LLM model for fast inference, they are far from enough given that the inference of the LLM itself is time-consuming. A potential solution is to distill the model into a much smaller LM or even an MLP. Similar methods [87] have been proven effective in GNN-to-MLP distillation and are worth exploring for the LLM-GNN cascaded pipeline as well.

Fig. 5. The illustration of LLM as Aligner methods, including (a) LLM-GNN Prediction Alignment and (b) LLM-GNN Latent Space Alignment.

5.3 LLM as Aligner
These methods contain an LLM component for text encoding and a GNN component for structure encoding. These two components are served equally and trained iteratively or in parallel.
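The L2-based distillation idea can be sketched numerically: freeze the teacher's (LLM-GNN) node embeddings and fit a text-only student to reproduce them. The setup below is hypothetical (a linear student trained by plain gradient descent stands in for a fine-tuned LLM).

```python
import numpy as np

def l2_distill_loss(student_out, teacher_out):
    """Mean squared (L2) distillation loss between student and teacher outputs."""
    return float(np.mean((student_out - teacher_out) ** 2))

rng = np.random.default_rng(1)
X = rng.normal(size=(16, 8))            # text-only features for 16 nodes
teacher_emb = rng.normal(size=(16, 4))  # frozen teacher (LLM-GNN) outputs
W = np.zeros((8, 4))                    # student parameters (linear map)

lr = 0.05
first = l2_distill_loss(X @ W, teacher_emb)
for _ in range(200):                    # gradient descent on the L2 loss
    grad = 2 * X.T @ (X @ W - teacher_emb) / X.shape[0]
    W -= lr * grad
last = l2_distill_loss(X @ W, teacher_emb)
print(last < first)  # True: the student fits the teacher's outputs
```

At inference time only the student runs, so no neighbor sampling or neighbor-text encoding is needed; this is exactly the efficiency gain the distillation methods above target.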
LLMs and GNNs can mutually enhance each other, since the LLMs can provide textual signals to the GNNs, while the GNNs can deliver structure information to the LLMs. According to how the LLM and the GNN interact, these methods can be further categorized into: LLM-GNN Prediction Alignment and LLM-GNN Latent Space Alignment. The illustration of these two categories of methods can be found in Fig. 5.

5.3.1 LLM-GNN Prediction Alignment
This refers to training the LLM with the text data on a graph and training the GNN with the structure data on a graph iteratively. The LLM will generate labels for nodes from the text perspective and serve them as pseudo-labels for GNN training, while the GNN will generate labels for nodes from the structure perspective and serve them as pseudo-labels for LLM training. By this design, these two modality encoders can learn from each other and contribute to a final joint text and graph encoding. In this direction, LTRN [57] proposes a novel GNN architecture with personalized PageRank [94] and an attention mechanism for structure encoding while adopting BERT [23] as the language model. The pseudo labels generated by the LLM and the GNN are merged for the next iteration of training. GLEM [62] formulates the iterative training process into a pseudo-likelihood variational framework, where the E-step is to optimize the LLM and the M-step is to train the GNN.

5.3.2 LLM-GNN Latent Space Alignment
This denotes connecting text encoding (LLM) and structure encoding (GNN) with cross-modality contrastive learning:

$h_{d_{v_i}} = \text{LLM}(d_{v_i})$, $\quad h_{v_i} = \text{GNN}(\mathcal{G}_v)$, (16)
$l(h_{d_{v_i}}, h_{v_i}) = \dfrac{\text{Sim}(h_{d_{v_i}}, h_{v_i})}{\sum_{j \neq i} \text{Sim}(h_{d_{v_i}}, h_{v_j})}$, (17)
$\mathcal{L} = \sum_{v_i \in \mathcal{G}} \dfrac{1}{2|\mathcal{G}|}\big(l(h_{d_{v_i}}, h_{v_i}) + l(h_{v_i}, h_{d_{v_i}})\big)$. (18)

A similar philosophy is widely used in vision-language modality learning [96]. Along this line of approaches, ConGrat [53] adopts GAT [86] as the graph encoder and tries MPNet [34] as the language model encoder. They have expanded the original InfoNCE loss by incorporating graph-specific elements. These elements pertain to the most likely second, third, and subsequent choices regarding the nodes from which a text originates and the texts that a node generates. In addition to the node-level multi-modality contrastive objective, GRENADE [55] proposes a KL-divergence-based neighbor-level knowledge alignment: minimizing the neighborhood similarity distribution calculated between the LLM and the GNN. G2P2 [63] further extends node-text contrastive learning by adding text-summary interaction and node-summary interaction. Then, they introduce using label texts in the text modality for zero-shot classification, and using soft prompts for few-shot classification. THLM [33] proposes to pretrain the language model by contrastive learning with a heterogeneous GNN on heterogeneous text-attributed networks. The pretrained LLM can be fine-tuned on downstream tasks.

5.3.3 Discussion
In "LLMs as Aligners" methods, most research is adopting shallow GNNs (e.g., GCN, GAT, with thousands of parameters) to be the graph encoders that are aligned with LLMs through iterative training (i.e., prediction alignment) or contrastive training (i.e., latent space alignment). Although LLMs (with millions or billions of parameters) have strong expressive capability, the shallow GNNs (with limited representative capability) can constrain the mutual learning effectiveness between LLMs and GNNs. A potential solution is to adopt GNNs which can be scaled up [88]. Furthermore, deeper research to explore what is the best model size combination for LLMs and GNNs in such "LLMs as Aligners" LLM-GNN mutual enhancement frameworks is very important.

6 TEXT-PAIRED GRAPHS

Graphs are prevalent data objects in scientific disciplines such as cheminformatics [183], [194], [200], material informatics [181], bioinformatics [201], and computer vision [147]. Within these diverse fields, graphs frequently come paired with critical graph-level text information. For instance, molecular graphs in cheminformatics are annotated with text properties such as toxicity, water solubility, and permeability properties [181], [183]. Research on such graphs (scientific discovery) could be accelerated by the text information and the adoption of LLMs. In this section, we review the application of LLMs on graph-captioned graphs with a focus on molecular graphs. According to the technique categorization in Section 3.2, we begin by investigating methods that utilize LLMs as Predictor. Then, we discuss methods that align GNNs with LLMs. We summarize all surveyed methods in Appendix Table 8.

6.1 LLM as Predictor
In this subsection, we review how to conduct "LLM as Predictor" for graph-level tasks. Existing methods can be categorized into Graph as Sequence (treating graph data as a sequence input) and Graph-Empowered LLMs (designing model architectures to encode graphs).

6.1.1 Graph as Sequence
For text-paired graphs, we have three steps to utilize existing LLMs for graph inputs. Step 1: Linearize graphs into a sequence with rule-based methods. Step 2: Tokenize the linearized sequence. Step 3: Train/Finetune different LLMs (e.g., Encoder-only, Encoder-Decoder, Decoder-only) for specific tasks. We will discuss each step as follows.

Step 1: Rule-based Graph Linearization. Rule-based linearization converts molecular graphs into text sequences that can be processed by LLMs.
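The depth-first flavor of such linearizations can be sketched with a toy serializer. This is illustrative only: it emits atom symbols along a DFS traversal in the spirit of SMILES-style line notations, but ignores bond orders, branches, and ring-closure digits that real SMILES grammars must handle.

```python
def linearize_molecule(atoms, bonds, start=0):
    """Toy rule-based linearization of a molecular graph.

    atoms: list of atom symbols indexed by node id.
    bonds: iterable of (i, j) undirected edges.
    Returns atom symbols in depth-first order from `start`.
    """
    adj = {i: [] for i in range(len(atoms))}
    for i, j in bonds:
        adj[i].append(j)
        adj[j].append(i)
    seen, out = set(), []

    def dfs(i):
        seen.add(i)
        out.append(atoms[i])
        for j in sorted(adj[i]):
            if j not in seen:
                dfs(j)

    dfs(start)
    return "".join(out)

# Ethanol as a 3-node graph C-C-O:
print(linearize_molecule(["C", "C", "O"], {(0, 1), (1, 2)}))  # CCO
```

For this linear molecule the toy output coincides with the actual SMILES string "CCO"; for branched or cyclic molecules a full SMILES writer must additionally emit parentheses and ring-closure digits, which is where the invalidity issues discussed next arise.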
To achieve this, researchers molecule modality. Text+Chem T5 [171] extends the input develop specifications based on human expertise in the form and output domains to include both SMILES and texts, oflinenotations[148].Forexample,theSimplifiedMolecular- unlocking LLMs for more generation functions such as Input Line-Entry System (SMILES) [148] records the symbols text or reaction generation. ChatMol [166] exploits the of nodes encountered during a depth-first traversal of interactive capabilities of LLMs and proposes designing a molecular graph. The International Chemical Identifier molecule structures through multi-turn dialogs with T5. (InChI) [149] encodes molecular structures into unique string Decoder-only LLMs. Decoder-only architectures have been texts with more hierarchical information. Canonicalization adopted for recent LLMs due to their advanced generation algorithms produce unique SMILES for each molecule, ability.MolGPT[177]andMolXPT[169]areGPT-stylemodels often referred to as canonical SMILES. However, there are used for molecule classification and generation. Specifically, more than one SMILES corresponding to a single molecule MolGPT [177] focuses on conditional molecule generation and SMILES sometimes represent invalid molecules; LLMs
to their advanced generation algorithms produce unique SMILES for each molecule, often referred to as canonical SMILES. However, more than one SMILES string can correspond to a single molecule, and SMILES strings sometimes represent invalid molecules; LLMs trained on these linearized sequences can easily generate invalid molecules (e.g., incorrect ring-closure symbols and unmatched parentheses) due to syntactic errors. To this end, DeepSMILES [150] was proposed. It alleviates this issue in most cases but does not guarantee 100% robustness: the linearized string can still violate basic physical constraints. To fully address this problem, SELFIES [151] was introduced, which consistently yields valid molecular graphs.

Step 2: Tokenization. Tokenization approaches for linearized sequences are typically language-independent. They operate at both the character level [167], [178] and the substring level [162], [169], [173]–[176], based on SentencePiece or BPE [155]. Additionally, RT [164] proposes a tokenization approach that facilitates handling regression tasks within LM Transformers.

Step 3: Encoding the Linearized Graph with LLMs.
Encoder-only LLMs. Earlier LLMs like SciBERT [25] and BioBERT [180] are trained on scientific literature to understand natural language descriptions related to molecules, but are not capable of comprehending molecular graph structures. To this end, SMILES-BERT [179] and MFBERT [176] were proposed for molecular graph classification with linearized SMILES strings. Since scientific natural language descriptions contain human expertise that can serve as a supplement to molecular graph structures, recent advances emphasize joint understanding of the two [159], [175]: the linearized graph sequence is concatenated with the raw natural language data and then input into the LLM. Specifically, KV-PLM [175] is built on BERT [23] to understand molecular structure in a biomedical context. CatBERTa [159], developed from RoBERTa [24], specializes in the prediction of catalyst properties for molecular graphs.

Encoder-Decoder LLMs. Encoder-only LLMs may lack the capability for generation tasks, so we next discuss LLMs with encoder-decoder architectures. For example, Chemformer [156] uses an architecture similar to BART [28]. The representation from the encoder can be used for property prediction tasks, and the whole encoder-decoder architecture can be optimized for molecule generation. Others focus on molecule captioning, which involves generating textual descriptions for molecules.

Decoder-only LLMs. MolGPT [177] and MolXPT [169] are GPT-style models used for molecule classification and generation. Specifically, MolGPT [177] focuses on conditional molecule generation tasks using scaffolds, while MolXPT [169] formulates the classification task as a question-answering problem with yes-or-no responses. RT [164] adopts XLNet [27] and focuses on molecular regression tasks, framing regression as a conditional sequence modeling problem. Galactica [178] is a set of LLMs with up to 120 billion parameters, pretrained on two million compounds from PubChem [183]; Galactica can therefore understand molecular graph structures through SMILES. With instruction-tuning data and domain knowledge, researchers have also adapted general-domain LLMs such as LLaMA to recognize molecular graph structures and solve molecule tasks [160]. Recent studies also explore the in-context learning capabilities of LLMs on graphs. LLM-ICL [168] assesses the performance of LLMs across eight tasks in the molecular domain, ranging from property classification to molecule-text translation. MolReGPT [165] proposes retrieving molecules with similar structures and descriptions to improve in-context learning. LLM4Mol [163] uses the summarization capability of LLMs as a feature extractor and combines it with a smaller, tunable LLM for specific prediction tasks.

6.1.2 Graph-Empowered LLMs
Different from methods that adopt the original LLM architecture (i.e., the Transformer) and input graphs to LLMs as sequences, graph-empowered LLMs attempt to design LLM architectures that can jointly encode text and graph structures. Some works modify the positional encoding of the Transformer. For instance, GIMLET [47] treats the nodes of a graph as tokens. It uses one Transformer to manage both the graph structure and the text sequence [v1, v2, ..., v|V|, s|V|+1, ..., s|V|+|dG|], where v ∈ V is a node and s ∈ dG is a token in the text associated with G. Because this sequence alone cannot reflect the graph structure, a new position encoding (PE) is used to jointly encode graph structures and text sequences. It defines the relative distance between tokens i and j as:

  PE(i, j) =
    i − j,                                          if i, j ∈ dG,
    GSD(i, j) + Mean_{e_k ∈ SP(i, j)}(x_{e_k}),     if i, j ∈ V,        (19)
    −∞,                                             if i ∈ V, j ∈ dG,
    0,                                              if i ∈ dG, j ∈ V.

JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021
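To make the piecewise definition in Eq. (19) concrete, here is a toy pure-Python sketch (function and variable names are mine, not from GIMLET's code): the graph shortest distance GSD is computed by BFS, and scalar edge features stand in for the edge-feature vectors x_ek averaged along one shortest path SP(i, j).

```python
from collections import deque

NEG_INF = float("-inf")

def shortest_path_edges(adj, i, j):
    """BFS from i to j; returns the list of edges (u, v) on one shortest path."""
    prev = {i: None}
    q = deque([i])
    while q:
        u = q.popleft()
        if u == j:
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    path = []
    node = j
    while prev[node] is not None:
        path.append((prev[node], node))
        node = prev[node]
    return path[::-1]

def pe(i, j, nodes, adj, edge_feat):
    """Piecewise relative distance between tokens i and j, following Eq. (19).
    Tokens in `nodes` are graph nodes (members of V); all others are text tokens (dG)."""
    i_node, j_node = i in nodes, j in nodes
    if not i_node and not j_node:          # both text tokens: plain offset
        return i - j
    if i_node and j_node:                  # both graph nodes: GSD + mean edge feature
        sp = shortest_path_edges(adj, i, j)
        mean_feat = sum(edge_feat[frozenset(e)] for e in sp) / len(sp)
        return len(sp) + mean_feat
    if i_node and not j_node:              # node attending to text: masked out
        return NEG_INF
    return 0                               # text attending to node

# Toy graph: path 0-1-2 with scalar edge features; tokens 3 and 4 are text tokens.
nodes = {0, 1, 2}
adj = {0: [1], 1: [0, 2], 2: [1]}
edge_feat = {frozenset((0, 1)): 0.5, frozenset((1, 2)): 1.5}
print(pe(0, 2, nodes, adj, edge_feat))  # GSD = 2, mean edge feature = 1.0 -> 3.0
print(pe(3, 4, nodes, adj, edge_feat))  # text-text: i - j = -1
```

The −∞ branch acts as an attention mask: node tokens never attend to text positions under this encoding, while text tokens may attend to all nodes at a uniform relative distance of 0.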
GSD(i, j) is the graph shortest-path distance between two nodes, and Mean_{e_k ∈ SP(i, j)}(x_{e_k}) denotes the mean pooling of the edge features x_{e_k} along the shortest path SP(i, j) between nodes i and j. GIMLET [47] adapts bi-directional attention for node tokens and enables texts to selectively attend to nodes. These designs render the Transformer submodule that handles the graph part equivalent to a Graph Transformer [141].

Cross-attention is also used to let the representations of graphs and texts interact. Given the graph hidden state h_G, its node-level hidden states H_v, and the text hidden state H_dG, Text2Mol [122] implements interaction between the representations in the hidden layers of encoders, while Prot2Text [161] implements this interaction within the layers between the encoder and decoder: H_dG = softmax(W_Q H_dG · (W_K H_v)^T / √d_k) · W_V H_v, where W_Q, W_K, and W_V are trainable parameters that transform the query modality (e.g., sequences) and the key/value modality (e.g., graphs) into the attention space. Furthermore, Prot2Text [161] utilizes two trainable parameter matrices W_1 and W_2 to integrate the graph representation into the sequence representation: H_dG = H_dG + 1_{|dG|} h_G W_1 W_2.

Atomic features also include the number of radical electrons, the hybridization state, aromaticity, and presence in a ring. Bond features encompass the bond type (e.g., single, double, or triple), the bond stereochemistry (e.g., E/Z or cis/trans), and whether the bond is conjugated [188]. Each feature provides specific information about atomic properties and structure, crucial for molecular modeling and cheminformatics. One may directly vectorize the molecular graph structure into binary vectors [186] and then apply parameterized Multilayer Perceptrons (MLPs) on top of these vectors to get the graph representation. These vectorization approaches are based on human-defined rules and vary, such as the MACCS, ECFP, and CDK fingerprints [186]. These rules take a molecule as input and output a vector of 0/1 bits, where each bit denotes a specific type of substructure related to functional groups that can be used for various property predictions. Fingerprints consider atoms and structures, but they cannot automatically learn from the graph structure. GNNs can serve as automatic feature extractors to replace or enhance fingerprints. Some specific methods are explored in Section 6.1.2, while other graph priors, such as the eigenvectors of the graph Laplacian and the random-walk prior, can also be used [142].

6.1.3 Discussion
LLM Outputs for Prediction.
LMs like KV-PLM [175], SMILES-BERT [179], MFBERT [176], and Chemformer [156] use a prediction head on the output vector of the last layer. These models are finetuned with standard classification and regression losses but may not fully utilize all the parameters and advantages of the complete architecture. In contrast, models like RT [164], MolXPT [169], and Text+Chem T5 [171] frame prediction as a text generation task. These models are trained with either masked language modeling or autoregressive targets, which requires a meticulous design of the context words in the text [164]. Specifically, domain-knowledge instructions may be necessary to activate the in-context learning ability of LLMs, thereby making them domain experts [168]. For example, a possible template could be divided into four parts: {General Description}{Task-Specific Description}{Question-Answer Examples}{Test Question}.

LLM Inputs with Sequence Prior. The first challenge is that progress in advanced linearization methods has not kept pace with the development of LLMs. Linearization methods for molecular graphs like SELFIES, which emerged around 2020, offer significant grammatical advantages, yet advanced LMs and LLMs from the graph machine learning and language model communities might not fully utilize them, as these encodings were not part of pretraining corpora prior to their proposal. Consequently, recent studies [168] indicate that LLMs such as GPT-3.5/4 may be less adept at using SELFIES compared to SMILES. Therefore, the performance of LM-only and LLM-only methods may be limited by the expressiveness of older linearization methods, as there is no way to optimize these hard-coded rules within the learning pipeline of LLMs. The second challenge is that the inductive bias of graphs may be broken by linearization. Rule-based linearization methods introduce inductive biases for sequence modeling, thereby breaking the permutation-invariance assumption inherent in molecular graphs. Linearization may reduce task difficulty by introducing a sequence order that shrinks the search space, but this does not imply model generalization: there can be multiple string-based representations for a single graph, produced by a single approach or by different ones. Numerous studies [152]–[154] have shown that training on different string-based views of the same molecule can improve a sequential model's performance, as these data augmentation approaches manage to retain the permutation-invariant nature of graphs. These advantages are also achievable with a permutation-invariant GNN, potentially simplifying the model by reducing the need for complex, string-based data augmentation designs.

LLM Inputs with Graph Prior. Rule-based linearization may be considered less expressive and generalizable compared to the direct graph representation with rich node features, edge features, and the adjacency matrix [187]. Atomic features include the atomic number, chirality, degree, formal charge, number of hydrogen atoms, and number of radical electrons.

LLM Outputs for Reasoning. Since string representations of molecular graphs usually carry new and in-depth domain knowledge beyond that of LLMs, recent work [146], [157], [165] also attempts to utilize the reasoning ability of LLMs, instead of using them as a knowledge source, for predicting the properties of molecular graphs. ReLM [157] utilizes GNNs to suggest top-k candidates, which are then used to construct multiple-choice answers for in-context learning. ChemCrow [146] designs the LLM as a chemical agent that operates various chemical tools, avoiding direct inference in an expertise-intensive domain.

6.2 LLM as Aligner
6.2.1 Latent Space Alignment
One may directly align the latent spaces of the GNN and the LLM through contrastive learning and predictive regularization. Typically, a graph representation from a GNN can be read out by summarizing all node-level representations, and a sequence representation can be obtained from the [CLS] token. We first use two projection heads, usually MLPs, to map the separate representation vectors from the GNN and the LLM into a unified space as h_G and h_dG, and then align them within this space. Specifically, MoMu [174] and MoMu-v2 [173] retrieve two sentences from the corpus for each molecular graph. During training, graph data augmentation is applied to the molecular graphs, creating two augmented views. Consequently, there are four pairs of G and dG. For each pair, the contrastive loss for space alignment is

  ℓ_MoMu = − log [ exp(cos(h_G, h_dG)/τ) / Σ_{d̃G ≠ dG} exp(cos(h_G, h_d̃G)/τ) ],

where τ is the temperature hyper-parameter and d̃G denotes a sequence not paired to the graph G. MoleculeSTM [172] also applies contrastive learning to minimize the representation distance between a molecular graph G and its corresponding text dG, while maximizing the distance between the molecule and unrelated descriptions. MoleculeSTM [172] randomly samples negative graphs or texts to construct the negative pairs (G, d̃) and (G̃, d).

The scale of GNNs may be a bottleneck in learning semantically meaningful representations, and there is a risk of over-reliance on one modality, neglecting the other. Therefore, for future large-scale GNN designs comparable to LLMs, scaling up the dimension size and adding deeper layers may be considered. Besides, Transformer encoders [142] may also improve the expressive power of deep GNNs.

Generation Decoder with GNNs. GNNs are often not used as decoders for graph generation. The prevalent decoders are mostly text-based, generating linearized graph structures such as SMILES. These methods may be sensitive to the sequence order in the linearized graph. Generative diffusion models [202] on graphs could be utilized in future work to design generators with GNNs.
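The InfoNCE-style objective ℓ_MoMu above can be sketched in a few lines of pure Python (names are mine; real implementations batch this over tensors, and the denominator here sums only over the unpaired sequences d̃G, matching the formula as written):

```python
import math

def cos_sim(u, v):
    """Cosine similarity between two plain-list vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def momu_loss(h_g, h_pos, negatives, tau=0.1):
    """l = -log( exp(cos(h_G, h_dG)/tau) / sum_{~dG != dG} exp(cos(h_G, h_~dG)/tau) ),
    with the sum taken over the unpaired (negative) sequence embeddings."""
    pos = math.exp(cos_sim(h_g, h_pos) / tau)
    denom = sum(math.exp(cos_sim(h_g, h_n) / tau) for h_n in negatives)
    return -math.log(pos / denom)

h_g = [1.0, 0.0]                                   # toy projected graph embedding
loss_easy = momu_loss(h_g, [1.0, 0.0], [[0.0, 1.0]], tau=1.0)  # aligned pair
loss_hard = momu_loss(h_g, [0.0, 1.0], [[1.0, 0.0]], tau=1.0)  # misaligned pair
print(loss_easy < loss_hard)  # aligned pairs yield lower loss -> True
```

Lowering τ sharpens the softmax over similarities, which is why the temperature is treated as a tunable hyper-parameter in MoMu and related aligners.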
Similarly, MolFM [162] and GIT-Mol [158] implement a contrastive loss with mutual information and negative sampling. These two methods also use cross-entropy to regularize the unified space, under the assumption that randomly permuted graph and text inputs are predictable if they originate from the same molecule.

However, the aforementioned methods cannot leverage task labels. Given a classification label y, CLAMP [170] learns to map active molecules (y = 1) so that they align with the corresponding assay description for each molecular graph G:

  ℓ_CLAMP = y log σ(τ⁻¹ h_G^T h_dG) + (1 − y) log(1 − σ(τ⁻¹ h_G^T h_dG)).

CLAMP [170] requires labels to encourage that active molecules and their corresponding text descriptions are clustered together in the latent space. To advance the alignment between the two modalities, MolCA [167] trains a Query Transformer (Q-Former) [190] for molecule-text projection and contrastive alignment. The Q-Former initializes N_q learnable query tokens {q_k}, k = 1, ..., N_q. These query tokens are updated with self-attention and interact with the output of the GNN through cross-attention to obtain the k-th queried molecular representation vector, (h_G)_k := Q-Former(q_k). The query tokens share the same self-attention modules with the texts but use different MLPs, allowing the Q-Former to also obtain the representation of a text sequence, h_dG := Q-Former([CLS]). Then we have ℓ_MolCA = − ℓ_g2t − ℓ_t2g, where

  ℓ_g2t = log [ exp(max_k cos((h_G)_k, h_dG)/τ) / Σ_{d̃G ≠ dG} exp(max_k cos((h_G)_k, h_d̃G)/τ) ],
  ℓ_t2g = log [ exp(max_k cos(h_dG, (h_G)_k)/τ) / Σ_{G̃ ≠ G} exp(max_k cos(h_dG, (h_G̃)_k)/τ) ].

6.2.2 Discussion
Larger-Scale GNNs. GNNs integrate atomic and graph structural features for molecular representation learning [145]. Specifically, Text2Mol [122] utilizes the GCN [84] as its graph encoder and extracts unique identifiers for node features based on Morgan fingerprints [186]. MoMu [174], MoMu-v2 [173], MolFM [162], GIT-Mol [158], and MolCA [167] prefer the GIN [189] as the backbone, as the GIN has been proven to be as expressive and powerful as the Weisfeiler-Lehman graph isomorphism test. As described in Section 2.2, there has been notable progress in making GNNs deeper, more generalizable, and more powerful since the proposal of the GCN [84] in 2016 and the GIN [189] in 2018. However, most reviewed works [158], [162], [167], [173], [174] are developed using the GIN [189] as a proof of concept for their approaches. These pretrained GINs feature five layers and 300 hidden dimensions.

7 APPLICATIONS
7.1 Datasets, Splitting and Evaluation
We summarize the datasets for three scenarios (namely pure graphs, text-attributed graphs, and text-paired graphs) in Table 5, Table 2, and Table 3, respectively.

7.1.1 Pure Graphs
In Table 5, we summarize the pure graph reasoning problems discussed in Section 4. Many problems are shared or revisited in different datasets due to their commonality. NLGraph [124], LLMtoGraph [125], and GUC [126] study a set of standard graph reasoning problems, including connectivity, shortest path, and graph diameter. GraphQA [131] benchmarks a similar set of problems but additionally describes the graphs in real-world scenarios to study the effect of graph grounding. LLM4DyG [128] focuses on reasoning tasks over temporally evolving graphs. Accuracy is the most common evaluation metric, as these problems are primarily formulated as graph question-answering tasks.

7.1.2 Text-Attributed Graphs
We summarize well-known datasets for evaluating models on text-attributed graphs in Table 2. The datasets are mostly from the academic, e-commerce, book, social media, and Wikipedia domains. The popular tasks for evaluating models on those datasets include node classification, link prediction, edge classification, regression, and recommendation. The evaluation metrics for node/edge classification include Accuracy, Macro-F1, and Micro-F1. For link prediction and recommendation evaluation, Mean Reciprocal Rank (MRR), Normalized Discounted Cumulative Gain (NDCG), and Hit Ratio (Hit) usually serve as metrics. When evaluating model performance on regression tasks, people tend to adopt mean absolute error (MAE) or root mean square error (RMSE).

7.1.3 Text-Paired Graphs
Table 3 shows text-paired graph datasets (including text-available and graph-only datasets). For data splitting, options include random splitting, source-based splitting, activity cliffs and scaffolds [196], and data balancing [143]. Graph classification usually adopts AUC [188] as the metric, while regression uses MAE, RMSE, and R² [145]. For text generation evaluation, people tend to use the Bilingual Evaluation Understudy (BLEU) score, while for molecule generation evaluation, heuristic methods (based on factors including validity, novelty, and uniqueness) are adopted. However, it is worth noting that the BLEU score is efficient but less accurate, while heuristic evaluation methods are problematic and subject to unintended modes, such as the superfluous addition of carbon atoms in [197].

TABLE 2: Data collection in Section 5 for text-attributed graphs. Task: "NC", "UAP", "LP", "Rec", "EC", "RG" denote node classification, user activity prediction, link prediction, recommendation, edge classification, and regression tasks.

TABLE 3: Data collection in Section 6 for text-captioned graphs. "PT", "FT", "Cap.", "GC", "Retr.", and "Gen." refer to pretraining, finetuning, caption, graph classification, retrieval, and graph generation, respectively. The superscript for the size denotes # graph-text pairs¹, # graphs², # assays³.

7.2 Open-source Implementations
HuggingFace. HF Transformers¹ is the most popular Python library for Transformer-based language models. Besides, it also provides two additional packages: Datasets² for easily accessing and sharing datasets, and Evaluate³ for easily evaluating machine learning models and datasets.
Fairseq. Fairseq⁴ is another open-source Python library for Transformer-based language models.
PyTorch Geometric. PyG⁵ is an open-source Python library for graph machine learning. It packages more than 60 types of GNN, aggregation, and pooling layers.

7.3 Practical applications
7.3.1 Scientific Discovery
Virtual Screening. Virtual screening aims to search a library of unlabeled molecules to identify useful structures for a given task. Machine learning models can automatically screen out trivial candidates to accelerate this process. However, training accurate models is not easy, since labeled molecules are limited in size and imbalanced in distribution [143]. There are many efforts to improve GNNs against data sparsity [143], [145], [192]. However, it is difficult for a model to generalize and understand in-depth domain knowledge that it has never been trained on. Texts can be complementary knowledge sources.
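The ranking metrics named in Section 7.1.2 for link prediction and recommendation (MRR and Hit Ratio) are straightforward to compute from the 1-based rank assigned to each ground-truth item; a minimal sketch (function names are mine):

```python
def mrr(ranks):
    """Mean Reciprocal Rank: `ranks` holds the 1-based position of the
    true item in each query's ranked candidate list."""
    return sum(1.0 / r for r in ranks) / len(ranks)

def hit_at_k(ranks, k):
    """Hit Ratio: fraction of queries whose true item appears in the top-k."""
    return sum(1 for r in ranks if r <= k) / len(ranks)

ranks = [1, 3, 2, 10]          # e.g., positions of the held-out link per query
print(mrr(ranks))              # (1 + 1/3 + 1/2 + 1/10) / 4 ≈ 0.4833
print(hit_at_k(ranks, 3))      # 3 of 4 queries ranked in the top-3 -> 0.75
```

NDCG follows the same per-query pattern but discounts each hit by log2(rank + 1) before normalizing by the ideal ordering.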
Discovering task-related content from massive scientific papers and using it as instructions has great potential for designing accurate GNNs in virtual screening [47].
Molecular Generation. Molecular generation and optimization is a fundamental goal of drug and material discovery. Scientific hypotheses about molecules [199] can be represented in the joint space of GNNs and LLMs. One may then search the latent space for a better hypothesis that aligns with the text description (human requirements) and adheres to structural constraints such as chemical validity. Chemical space has been found to contain more than 10^60 molecules [198], which is beyond the capacity of exploration in wet-lab experiments. Generating constrained candidates within relevant subspaces is a challenge [202] and promising, especially when incorporating textual conditions.
Synthesis Planning. Synthesis designs start from available molecules and involve planning a sequence of steps that can finally produce a desired chemical compound through a series of reactions [199]. This procedure includes a sequence of reactant molecules and reaction conditions.

Deep Graph Library. DGL⁶ is another open-source Python library for graph machine learning.
RDKit. RDKit⁷ is one of the most popular open-source cheminformatics software programs and facilitates various operations on, and visualizations of, molecular graphs. It offers many useful APIs, such as the linearization implementation for molecular graphs, to convert them into easily stored SMILES and to convert these SMILES back into graphs.

1. https://huggingface.co/docs/transformers/index
2. https://huggingface.co/docs/datasets/index
3. https://huggingface.co/docs/evaluate/index
4. https://github.com/facebookresearch/fairseq
5. https://pytorch-geometric.readthedocs.io/en/latest/index.html
6. https://www.dgl.ai/
7. https://www.rdkit.org/docs/

Table 2 (text-attributed graph datasets):
Data | Year | Task | # Nodes | # Edges | Domain | Source & Notes
ogb-arxiv | 2020.5 | NC | 169,343 | 1,166,243 | Academic | OGB [188]
ogb-products | 2020.5 | NC | 2,449,029 | 61,859,140 | E-commerce | OGB [188]
ogb-papers110M | 2020.5 | NC | 111,059,956 | 1,615,685,872 | Academic | OGB [188]
ogb-citation2 | 2020.5 | LP | 2,927,963 | 30,561,187 | Academic | OGB [188]
Cora | 2000 | NC | 2,708 | 5,429 | Academic | [10]
Citeseer | 1998 | NC | 3,312 | 4,732 | Academic | [11]
DBLP | 2023.1 | NC, LP | 5,259,858 | 36,630,661 | Academic | www.aminer.org/citation
MAG | 2020 | NC, LP, Rec, RG | ~10M | ~50M | Academic | multiple domains [12] [13]
Goodreads-books | 2018 | NC, LP | ~2M | ~20M | Books | multiple domains [14]
Amazon-items | 2018 | NC, LP, Rec | ~15.5M | ~100M | E-commerce | multiple domains [15]
SciDocs | 2020 | NC, UAP, LP, Rec | - | - | Academic | [51]
PubMed | 2020 | NC | 19,717 | 44,338 | Academic | [16]
Wikidata5M | 2021 | LP | ~4M | ~20M | Wikipedia | [17]
Twitter | 2023 | NC, LP | 176,279 | 2,373,956 | Social | [53]
Goodreads-reviews | 2018 | EC, LP | ~3M | ~100M | Books | multiple domains [14]
Amazon-reviews | 2018 | EC, LP | ~15.5M | ~200M | E-commerce | multiple domains [15]
Stackoverflow | 2023 | EC, LP | 129,322 | 281,657 | Social | [74]

Table 3 (text-captioned graph datasets):
Data | Date | Task | Size | Source & Notes
ChEMBL-2023 [185] | 2023 | Various | 2.4M², 20.3M³ | Drug-like
PubChem [183] | 2019 | Various | 96M², 237M³ | Biomedical
PC324K [167] | 2023 | PT, Cap. | 324K¹ | PubChem [183]
MolXPT-PT [169] | 2023 | PT | 30M² | PubChem [183], PubMed, ChEBI [182]
ChE-bio [47] | 2023 | PT | 365K² | ChEMBL [184]
ChE-phy [47] | 2023 | PT | 365K² | ChEMBL [184]
ChE ZS [47] | 2023 | GC | 91K² | ChEMBL [184]
PC223M [170] | 2023 | PT, Retr. | 223M¹, 2M², 20K³ | PubChem [183]
PCSTM [172] | 2022 | PT | 281K¹ | PubChem [183]
PCdes [183] | 2022 | FT, Cap., Retr. | 15K¹ | PubChem [183]
ChEBI-20 [122] | 2021 | FT, Retr., Gen., Cap. | 33K¹ | PubChem [183], ChEBI [182]

Education Domain. In the education domain, we can construct a graph with coursework as nodes and their relations as edges. A model learned on such a graph can be utilized for knowledge tracing [136] and student performance prediction [137].
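As a side note on the SMILES round-trip that RDKit provides: Section 6.1.1 noted that naively generated SMILES fail on unmatched parentheses and ring-closure symbols. Those two failure modes can be screened with a toy syntactic check (my own sketch; a real parser such as RDKit's also validates valence and aromaticity, and two-digit %nn ring closures are not handled here):

```python
def looks_syntactically_valid(smiles: str) -> bool:
    """Toy check for two SMILES failure modes: unmatched parentheses
    and unpaired ring-closure digits. Not a real SMILES parser."""
    depth = 0
    ring_open = set()
    for ch in smiles:
        if ch == "(":
            depth += 1
        elif ch == ")":
            depth -= 1
            if depth < 0:          # ')' with no matching '('
                return False
        elif ch.isdigit():
            ring_open ^= {ch}      # each ring digit must appear in pairs
    return depth == 0 and not ring_open

print(looks_syntactically_valid("c1ccccc1"))   # benzene: True
print(looks_syntactically_valid("c1ccccc"))    # dangling ring bond 1: False
print(looks_syntactically_valid("CC(=O)O"))    # acetic acid: True
print(looks_syntactically_valid("CC(=O"))      # unmatched '(': False
```

Strings that pass this check can still be chemically invalid, which is exactly the gap DeepSMILES narrows and SELFIES closes by construction.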
Both graphs and texts play important roles in this process. For example, graphs may represent the fundamental structure of molecules, while texts may describe the reaction conditions, additives, and solvents. LLMs can assist in the planning by suggesting possible synthesis paths directly or by serving as agents that operate existing planning tools [146].

8 FUTURE DIRECTIONS
Better Benchmark Datasets. Most pure graph benchmarks evaluate LLMs' reasoning ability on homogeneous graphs but do not include evaluations on heterogeneous or spatial-temporal graphs. For text-attributed graphs, as summarized in Table 2, most benchmark datasets are from the academic and e-commerce domains. However, in the real world, text-attributed graphs are ubiquitous across multiple domains (e.g., legal and health). More diverse datasets are needed to comprehensively evaluate LLMs in real-world scenarios. For text-paired graphs, as summarized in Table 3, there is a lack of comprehensive datasets covering various machine learning tasks in chemistry. Although a massive number of scientific papers are available, preprocessing them into a ready-to-use format and pairing them with the specific molecular graph data points of interest remains a cumbersome and challenging task. Besides, we could investigate graph-text pairs in 3D space, where each molecule may be associated with atomic coordinates [138].

7.3.2 Computational Social Science
In computational social science, researchers are interested in modeling the behavior of people/users and discovering new knowledge that can be used to forecast the future. The behaviors of users and the interactions between users can be modeled as graphs, where the nodes are associated with rich text information (e.g., user profiles, messages, and emails). We show two example scenarios below.
E-commerce. On e-commerce platforms, there are many interactions (e.g., purchase, view) between users and products. For example, users can view or purchase products. In addition, the users, products, and their interactions are associated with rich text information. For instance, products have titles/descriptions, and users can leave reviews of products. In this case, we can construct a graph [102] where the nodes are users and products, while the edges are their interactions. Both nodes and edges are associated with text. It is important to utilize both the text information and the graph structure information (user behavior) to model users and items and to solve complex downstream tasks (e.g., item recommendation [106], bundle recommendation [107], and product understanding [108]).

Broader Task Space with LLMs. More comprehensive studies on the performance of LLMs for graph tasks hold promise for the future. While LLM-as-encoder approaches have been explored for text-attributed graphs, their application to text-captioned molecular graphs remains underexplored. Promising directions include using LLMs for data augmentation and knowledge distillation to design domain-specific GNNs for various text-paired graph tasks.
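The user-product graph described under "E-commerce" can be sketched as a small text-attributed structure; all identifiers and strings below are illustrative, not from a real dataset:

```python
# Minimal text-attributed interaction graph: nodes carry text (profiles,
# product descriptions); edges carry an interaction type plus optional
# text (e.g., a review).
graph = {
    "nodes": {
        "u1": {"type": "user", "text": "Enjoys trail running and hiking."},
        "p1": {"type": "product", "text": "Lightweight trail running shoes."},
    },
    "edges": [
        {"src": "u1", "dst": "p1", "rel": "purchase",
         "text": "Great grip on wet rocks."},
    ],
}

def neighbors(g, node_id, rel=None):
    """Destinations of edges leaving `node_id`, optionally filtered by relation."""
    return [e["dst"] for e in g["edges"]
            if e["src"] == node_id and (rel is None or e["rel"] == rel)]

print(neighbors(graph, "u1", rel="purchase"))   # ['p1']
```

A downstream recommender would combine the edge structure returned by `neighbors` with encodings of the node and edge texts, which is precisely the joint text-plus-structure modeling the section argues for.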
odel users explored for text-attributed graphs, their application to text- and items and solve complex downstream tasks (e.g., item captioned molecular graphs remains underexplored. Promis- recommendation [106], bundle recommendation [107], and ing directions include using LLMs for data augmentation product understanding [108]). and knowledge distillation to design domain-specific GNNs Social Media. In social media platforms, there are many for various text-paired graph tasks. Furthermore, although users and they interact with each other through messages, graph generation has been approached in text-paired graphs, emails, and so on. In this case, we can build a graph where it remains an open problem for text-attributed graphs (i.e., nodes are users and edges are the interaction between users. how to conduct joint text and graph structure generation) There will be text associated with nodes (e.g., user profile) Multi-Modal Foundation Models. One open question is, and edges (e.g., messages). Interesting research questions “Should we use one foundation model to unify different will be how to do joint text and graph structure modeling modalities, and how?” The modalities can include texts, to deeply understand the users for friend recommendation graphs, and even images. For instance, molecules can be [109], user analysis [110], community detection [111], and represented as graphs, described as texts, and photographed personalized response generation [97], [98]. as images; products can be treated as nodes in a graph, 7.3.3 Specific Domains associated with a title/description, and combined with an image. Designing a model that can conduct joint encoding In many specific domains, text data are interconnected and forallmodalitieswillbeusefulbutchallenging.Furthermore, lie in the format of graphs. 
The structure information on the there has always been tension between building a unified graphs can be utilized to better understand the text unit and foundational model and customizing model architectures contribute to advanced problem-solving. for different domains. It is thus intriguing to ask whether a Academic Domain. In the academic domain, graphs [12] unified architecture will suit different data types, or if tailor- are constructed with papers as nodes and their relations ing model designs according to domains will be necessary. (e.g., citation, authorship, etc) as edges. The representation Correctly answering this question can save economic and learned for papers on such graphs can be utilized for paper intellectual resources from unnecessary attempts and also recommendation [103], paper classification [104], and author shed light on a deeper understanding of graph-related tasks. identification [105]. Efficienct LLMs on Graphs. While LLMs have shown Legal Domain. In the legal domain, opinions given by a strong capability to learn on graphs, they suffer from the judges always contain references to opinions given for inefficiency in graph linearization and model optimization. previous cases. In such scenarios, people can construct a On one hand, as discussed in Section 5.1.1 and 6.1.1, many graph [99] based on the citation relations between opinions. methods rely on transferring graphs into sequences that can The representations learned on such a graph with both beinputtedintoLLMs.However,thelengthofthetr
ays contain references to opinions given for inefficiency in graph linearization and model optimization. previous cases. In such scenarios, people can construct a On one hand, as discussed in Section 5.1.1 and 6.1.1, many graph [99] based on the citation relations between opinions. methods rely on transferring graphs into sequences that can The representations learned on such a graph with both beinputtedintoLLMs.However,thelengthofthetransferred text and structure information can be utilized for clause sequence will increase significantly as the size of the graph classification [100] and opinion recommendation [101]. increases. This poses challenges since LLMs always have aJOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021 16 maximum sequence input length and a long input sequence [4] Reimers, N. and Gurevych, I., “Sentence-BERT: Sentence Embed- willleadtohighertimeandmemorycomplexity.Ontheother dings using Siamese BERT-Networks,” in EMNLP, 2019. hand, optimizing LLMs itself is computationally expensive. [5] Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Although some general efficient tuning methods such as Yogatama, D., Bosma, M., Zhou, D., Metzler, D. and Chi, E.H., “Emergent Abilities of Large Language Models,” in TMLR, 2022. LoRA are proposed, there is a lack of discussion on graph- [6] Nagamochi, H. and Ibaraki, T., “Algorithmic aspects of graph aware LLM efficient tuning methods. connectivity,” in Cambridge University Press, 2018. Generalizable and Robust LLMs on Graphs. Another [7] Goldberg, A.V. and Harrelson, C., “Computing the shortest path: A interesting direction is to explore the generalizability and search meets graph theory,” in SODA (Vol. 5, pp. 156-165), 2005. [8] Sun, Z., Wang, H., Wang, H., Shao, B. and Li, J., “Efficient subgraph robustness of LLMs on graphs. Generalizability refers to matching on billion node graphs,” in arXiv preprint arXiv:1205.6691, having the ability to transfer the knowledge learned from 2012. 
Generalizable and Robust LLMs on Graphs. Another interesting direction is to explore the generalizability and robustness of LLMs on graphs. Generalizability refers to the ability to transfer knowledge learned on one domain graph to another, while robustness denotes producing consistent predictions under obfuscations and attacks. Although LLMs have demonstrated strong generalizability in processing text, they still suffer from robustness and hallucination issues, which remain to be solved for graph data modeling as well.

LLM as Dynamic Agents on Graphs. Although LLMs have shown advanced capability in generating text, one-pass generation suffers from hallucination and misinformation issues due to the lack of accurate parametric knowledge. Simply augmenting retrieved knowledge in context is also bottlenecked by the capacity of the retriever. In many real-world scenarios, graphs such as academic networks and Wikipedia are dynamically looked up by humans for knowledge-guided reasoning. Simulating such a role of dynamic agents can help LLMs retrieve relevant information more accurately via multi-hop reasoning, thereby correcting their answers and alleviating hallucinations.
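A minimal sketch of the dynamic-agent idea, assuming a toy dictionary-based knowledge graph and a fixed relation chain (both hypothetical, not a surveyed system):

```python
# Hypothetical sketch: an agent answering a multi-hop question by looking
# up a graph one hop at a time, rather than generating an answer in one pass.
# The toy graph and fixed relation chain are illustrative assumptions.

# Toy knowledge graph: entity -> list of (relation, tail-entity) edges.
GRAPH = {
    "Marie Curie": [("advised_by", "Gabriel Lippmann")],
    "Gabriel Lippmann": [("won", "Nobel Prize in Physics")],
}

def multi_hop_lookup(start, relations):
    """Follow a chain of relations from a start entity, one hop at a time."""
    entity = start
    for rel in relations:
        matches = [tail for r, tail in GRAPH.get(entity, []) if r == rel]
        if not matches:
            return None  # retrieval failed; the agent should abstain
        entity = matches[0]
    return entity

# "What prize did Marie Curie's advisor win?" resolved in two grounded hops.
print(multi_hop_lookup("Marie Curie", ["advised_by", "won"]))
```

A real agent would let the LLM decide which relation to follow at each hop; the point of the sketch is only that retrieval proceeds hop by hop over the graph instead of in a single generation pass.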
9 CONCLUSION

In this paper, we provide a comprehensive review of large language models on graphs. We first categorize the graph scenarios where LMs can be adopted and summarize the large language models on graphs techniques. We then provide a thorough review, analysis, and comparison of the methods within each scenario. Furthermore, we summarize available datasets, open-source codebases, and multiple applications. Finally, we suggest future directions for large language models on graphs.

ACKNOWLEDGMENTS

This work was supported in part by US DARPA KAIROS Program No. FA8750-19-2-1004 and INCAS Program No. HR001121C0165, National Science Foundation IIS-19-56151, the Molecule Maker Lab Institute: an AI Research Institutes program supported by NSF under Award No. 2019897, and the Institute for Geospatial Understanding through an Integrative Discovery Environment (I-GUIDE) by NSF under Award No. 2118329. Any opinions, findings, and conclusions or recommendations expressed herein are those of the authors and do not necessarily represent the views, either expressed or implied, of DARPA or the U.S. Government.

REFERENCES

[1] Yang, W., Xie, Y., Lin, A., Li, X., Tan, L., Xiong, K., Li, M. and Lin, J., "End-to-end open-domain question answering with BERTserini," in NAACL, 2019.
[2] Liu, Y. and Lapata, M., "Text Summarization with Pretrained Encoders," in EMNLP, 2019.
[3] Wang, A., Singh, A., Michael, J., Hill, F., Levy, O. and Bowman, S.R., "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding," in ICLR, 2018.
[4] Reimers, N. and Gurevych, I., "Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks," in EMNLP, 2019.
[5] Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Zhou, D., Metzler, D. and Chi, E.H., "Emergent Abilities of Large Language Models," in TMLR, 2022.
[6] Nagamochi, H. and Ibaraki, T., "Algorithmic aspects of graph connectivity," Cambridge University Press, 2018.
[7] Goldberg, A.V. and Harrelson, C., "Computing the shortest path: A* search meets graph theory," in SODA, pp. 156-165, 2005.
[8] Sun, Z., Wang, H., Wang, H., Shao, B. and Li, J., "Efficient subgraph matching on billion node graphs," in arXiv preprint arXiv:1205.6691, 2012.
[9] Chen, Z., Mao, H., Li, H., Jin, W., Wen, H., Wei, X., ... and Tang, J., "Exploring the potential of large language models (LLMs) in learning on graphs," in arXiv preprint arXiv:2307.03393, 2023.
[10] McCallum, A.K., Nigam, K., Rennie, J. and Seymore, K., "Automating the construction of internet portals with machine learning," in Information Retrieval, 3, pp. 127-163, 2000.
[11] Giles, C.L., Bollacker, K.D. and Lawrence, S., "CiteSeer: An automatic citation indexing system," in Proceedings of the third ACM Conference on Digital Libraries, pp. 89-98, 1998.
[12] Wang, K., Shen, Z., Huang, C., Wu, C.H., Dong, Y. and Kanakia, A., "Microsoft Academic Graph: When experts are not enough," in Quantitative Science Studies, 1(1), pp. 396-413, 2020.
[13] Zhang, Y., Jin, B., Zhu, Q., Meng, Y. and Han, J., "The Effect of Metadata on Scientific Literature Tagging: A Cross-Field Cross-Model Study," in WWW, 2023.
[14] Wan, M. and McAuley, J., "Item recommendation on monotonic behavior chains," in Proceedings of the 12th ACM Conference on Recommender Systems, 2018.
[15] Ni, J., Li, J. and McAuley, J., "Justifying recommendations using distantly-labeled reviews and fine-grained aspects," in EMNLP-IJCNLP, 2019.
[16] Sen, P., Namata, G., Bilgic, M., Getoor, L., Galligher, B. and Eliassi-Rad, T., "Collective classification in network data," in AI Magazine, 29(3), 2008.
[17] Wang, X., Gao, T., Zhu, Z., Zhang, Z., Liu, Z., Li, J. and Tang, J., "KEPLER: A unified model for knowledge embedding and pre-trained language representation," in TACL, 2021.
[18] Liu, L., Du, B., Ji, H., Zhai, C. and Tong, H., "Neural-answering logical queries on knowledge graphs," in KDD, 2021.
[19] Wu, Z., Pan, S., Chen, F., Long, G., Zhang, C. and Yu, P.S., "A comprehensive survey on graph neural networks," in IEEE Transactions on Neural Networks and Learning Systems, 32(1), pp. 4-24, 2020.
[20] Liu, J., Yang, C., Lu, Z., Chen, J., Li, Y., Zhang, M., Bai, T., Fang, Y., Sun, L., Yu, P.S. and Shi, C., "Towards Graph Foundation Models: A Survey and Beyond," in arXiv preprint arXiv:2310.11829, 2023.
[21] Pan, S., Luo, L., Wang, Y., Chen, C., Wang, J. and Wu, X., "Unifying Large Language Models and Knowledge Graphs: A Roadmap," in arXiv preprint arXiv:2306.08302, 2023.
[22] Wang, Y., Le, H., Gotmare, A.D., Bui, N.D., Li, J. and Hoi, S.C., "CodeT5+: Open code large language models for code understanding and generation," in arXiv preprint arXiv:2305.07922, 2023.
[23] Devlin, J., Chang, M.W., Lee, K. and Toutanova, K., "BERT: Pre-training of deep bidirectional transformers for language understanding," in NAACL, 2019.
[24] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Levy, O., Lewis, M., Zettlemoyer, L. and Stoyanov, V., "RoBERTa: A robustly optimized BERT pretraining approach," in arXiv preprint arXiv:1907.11692, 2019.
[25] Beltagy, I., Lo, K. and Cohan, A., "SciBERT: A pretrained language model for scientific text," in arXiv preprint arXiv:1903.10676, 2019.
[26] Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J.D., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A. and Agarwal, S., "Language models are few-shot learners," in NeurIPS, 2020.
[27] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R.R. and Le, Q.V., "XLNet: Generalized autoregressive pretraining for language understanding," in NeurIPS, 2019.
[28] Lewis, M., Liu, Y., Goyal, N., Ghazvininejad, M., Mohamed, A., Levy, O., Stoyanov, V. and Zettlemoyer, L., "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension," in ACL, 2020.
[29] Raffel, C., Shazeer, N., Roberts, A., Lee, K., Narang, S., Matena, M., Zhou, Y., Li, W. and Liu, P.J., "Exploring the limits of transfer learning with a unified text-to-text transformer," in JMLR, 2020.
[30] Yasunaga, M., Leskovec, J. and Liang, P., "LinkBERT: Pretraining Language Models with Document Links," in ACL, 2022.
[31] Jin, B., Zhang, W., Zhang, Y., Meng, Y., Zhang, X., Zhu, Q. and Han, J., "Patton: Language Model Pretraining on Text-Rich Networks," in ACL, 2023.
[32] Zhang, X., Malkov, Y., Florez, O., Park, S., McWilliams, B., Han, J. and El-Kishky, A., "TwHIN-BERT: A socially-enriched pre-trained language model for multilingual Tweet representations," in KDD, 2023.
[33] Zou, T., Yu, L., Huang, Y., Sun, L. and Du, B., "Pretraining Language Models with Text-Attributed Heterogeneous Graphs," in arXiv preprint arXiv:2310.12580, 2023.
[34] Song, K., Tan, X., Qin, T., Lu, J. and Liu, T.Y., "MPNet: Masked and permuted pre-training for language understanding," in NeurIPS, 2020.
[35] Duan, K., Liu, Q., Chua, T.S., Yan, S., Ooi, W.T., Xie, Q. and He, J., "SimTeG: A frustratingly simple approach improves textual graph learning," in arXiv preprint arXiv:2308.02565, 2023.
[36] Kasneci, E., Seßler, K., Küchemann, S., Bannert, M., Dementieva, D., Fischer, F., Gasser, U., Groh, G., Günnemann, S., Hüllermeier, E. and Krusche, S., "ChatGPT for good? On opportunities and challenges of large language models for education," in Learning and Individual Differences, 103, 2023.
[37] Lester, B., Al-Rfou, R. and Constant, N., "The power of scale for parameter-efficient prompt tuning," in EMNLP, 2021.
[38] Li, X.L. and Liang, P., "Prefix-tuning: Optimizing continuous prompts for generation," in ACL, 2021.
[39] Houlsby, N., Giurgiu, A., Jastrzebski, S., Morrone, B., De Laroussilhe, Q., Gesmundo, A., Attariyan, M. and Gelly, S., "Parameter-efficient transfer learning for NLP," in ICML, 2019.
[40] Hu, E.J., Shen, Y., Wallis, P., Allen-Zhu, Z., Li, Y., Wang, S., Wang, L. and Chen, W., "LoRA: Low-rank adaptation of large language models," in ICLR, 2022.
[41] Tian, Y., Song, H., Wang, Z., Wang, H., Hu, Z., Wang, F., Chawla, N.V. and Xu, P., "Graph Neural Prompting with Large Language Models," in arXiv preprint arXiv:2309.15427, 2023.
[42] Chai, Z., Zhang, T., Wu, L., Han, K., Hu, X., Huang, X. and Yang, Y., "GraphLLM: Boosting Graph Reasoning Ability of Large Language Model," in arXiv preprint arXiv:2310.05845, 2023.
[43] Wei, J., Bosma, M., Zhao, V.Y., Guu, K., Yu, A.W., Lester, B., Du, N., Dai, A.M. and Le, Q.V., "Finetuned language models are zero-shot learners," in ICLR, 2022.
[44] Sanh, V., Webson, A., Raffel, C., Bach, S.H., Sutawika, L., Alyafeai, Z., Chaffin, A., Stiegler, A., Scao, T.L., Raja, A. and Dey, M., "Multitask prompted training enables zero-shot task generalization," in ICLR, 2022.
[45] Tang, J., Yang, Y., Wei, W., Shi, L., Su, L., Cheng, S., Yin, D. and Huang, C., "GraphGPT: Graph Instruction Tuning for Large Language Models," in arXiv preprint arXiv:2310.13023, 2023.
[46] Ye, R., Zhang, C., Wang, R., Xu, S. and Zhang, Y., "Natural language is all a graph needs," in arXiv preprint arXiv:2308.07134, 2023.
[47] Zhao, H., Liu, S., Ma, C., Xu, H., Fu, J., Deng, Z.H., Kong, L. and Liu, Q., "GIMLET: A Unified Graph-Text Model for Instruction-Based Molecule Zero-Shot Learning," in bioRxiv, 2023.
[48] Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q.V. and Zhou, D., "Chain-of-thought prompting elicits reasoning in large language models," in NeurIPS, 2022.
[49] Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T.L., Cao, Y. and Narasimhan, K., "Tree of thoughts: Deliberate problem solving with large language models," in arXiv preprint arXiv:2305.10601, 2023.
[50] Besta, M., Blach, N., Kubicek, A., Gerstenberger, R., Gianinazzi, L., Gajda, J., Lehmann, T., Podstawski, M., Niewiadomski, H., Nyczyk, P. and Hoefler, T., "Graph of thoughts: Solving elaborate problems with large language models," in arXiv preprint arXiv:2308.09687, 2023.
[51] Cohan, A., Feldman, S., Beltagy, I., Downey, D. and Weld, D.S., "SPECTER: Document-level representation learning using citation-informed transformers," in ACL, 2020.
[52] Ostendorff, M., Rethmeier, N., Augenstein, I., Gipp, B. and Rehm, G., "Neighborhood contrastive learning for scientific document representations with citation embeddings," in EMNLP, 2022.
[53] Brannon, W., Fulay, S., Jiang, H., Kang, W., Roy, B., Kabbara, J. and Roy, D., "ConGraT: Self-Supervised Contrastive Pretraining for Joint Graph and Text Embeddings," in arXiv preprint arXiv:2305.14321, 2023.
[54] Zhu, J., Song, X., Ioannidis, V.N., Koutra, D. and Faloutsos, C., "TouchUp-G: Improving Feature Representation through Graph-Centric Finetuning," in arXiv preprint arXiv:2309.13885, 2023.
[55] Li, Y., Ding, K. and Lee, K., "GRENADE: Graph-Centric Language Model for Self-Supervised Representation Learning on Text-Attributed Graphs," in EMNLP, 2023.
[56] Zhang, X., Malkov, Y., Florez, O., Park, S., McWilliams, B., Han, J. and El-Kishky, A., "TwHIN-BERT: A Socially-Enriched Pre-trained Language Model for Multilingual Tweet Representations at Twitter," in KDD, 2023.
[57] Zhang, X., Zhang, C., Dong, X.L., Shang, J. and Han, J., "Minimally-supervised structure-rich text categorization via learning on text-rich networks," in WWW, 2021.
[58] Chien, E., Chang, W.C., Hsieh, C.J., Yu, H.F., Zhang, J., Milenkovic, O. and Dhillon, I.S., "Node feature extraction by self-supervised multi-scale neighborhood prediction," in ICLR, 2022.
[59] Zhang, Y., Shen, Z., Wu, C.H., Xie, B., Hao, J., Wang, Y.Y., Wang, K. and Han, J., "Metadata-induced contrastive learning for zero-shot multi-label text classification," in WWW, 2022.
[60] Dinh, T.A., Boef, J.D., Cornelisse, J. and Groth, P., "E2EG: End-to-End Node Classification Using Graph Topology and Text-based Node Attributes," in arXiv preprint arXiv:2208.04609, 2022.
[61] Tan, Y., Zhou, Z., Lv, H., Liu, W. and Yang, C., "WalkLM: A uniform language model fine-tuning framework for attributed graph embedding," in NeurIPS, 2023.
[62] Zhao, J., Qu, M., Li, C., Yan, H., Liu, Q., Li, R., Xie, X. and Tang, J., "Learning on large-scale text-attributed graphs via variational inference," in ICLR, 2023.
[63] Wen, Z. and Fang, Y., "Augmenting Low-Resource Text Classification with Graph-Grounded Pre-training and Prompting," in SIGIR, 2023.
[64] Chen, Z., Mao, H., Wen, H., Han, H., Jin, W., Zhang, H., Liu, H. and Tang, J., "Label-free Node Classification on Graphs with Large Language Models (LLMs)," in arXiv preprint arXiv:2310.04668, 2023.
[65] Zhao, J., Zhuo, L., Shen, Y., Qu, M., Liu, K., Bronstein, M., Zhu, Z. and Tang, J., "GraphText: Graph reasoning in text space," in arXiv preprint arXiv:2310.01089, 2023.
[66] Meng, Y., Zong, S., Li, X., Sun, X., Zhang, T., Wu, F. and Li, J., "GNN-LM: Language modeling based on global contexts via GNN," in ICLR, 2022.
[67] Zhang, X., Bosselut, A., Yasunaga, M., Ren, H., Liang, P., Manning, C.D. and Leskovec, J., "GreaseLM: Graph reasoning enhanced language models for question answering," in ICLR, 2022.
[68] Ioannidis, V.N., Song, X., Zheng, D., Zhang, H., Ma, J., Xu, Y., Zeng, B., Chilimbi, T. and Karypis, G., "Efficient and effective training of language and graph neural network models," in AAAI, 2023.
[69] Mavromatis, C., Ioannidis, V.N., Wang, S., Zheng, D., Adeshina, S., Ma, J., Zhao, H., Faloutsos, C. and Karypis, G., "Train Your Own GNN Teacher: Graph-Aware Distillation on Textual Graphs," in PKDD, 2023.
[70] He, X., Bresson, X., Laurent, T. and Hooi, B., "Explanations as Features: LLM-Based Features for Text-Attributed Graphs," in arXiv preprint arXiv:2305.19523, 2023.
[71] Yu, J., Ren, Y., Gong, C., Tan, J., Li, X. and Zhang, X., "Empower Text-Attributed Graphs Learning with Large Language Models (LLMs)," in arXiv preprint arXiv:2310.09872, 2023.
[72] Yang, J., Liu, Z., Xiao, S., Li, C., Lian, D., Agrawal, S., Singh, A., Sun, G. and Xie, X., "GraphFormers: GNN-nested transformers for representation learning on textual graph," in NeurIPS, 2021.
[73] Jin, B., Zhang, Y., Zhu, Q. and Han, J., "Heterformer: Transformer-based deep node representation learning on heterogeneous text-rich networks," in KDD, 2023.
[74] Jin, B., Zhang, Y., Meng, Y. and Han, J., "Edgeformers: Graph-Empowered Transformers for Representation Learning on Textual-Edge Networks," in ICLR, 2023.
[75] Jin, B., Zhang, W., Zhang, Y., Meng, Y., Zhao, H. and Han, J., "Learning Multiplex Embeddings on Text-rich Networks with One Text Encoder," in arXiv preprint arXiv:2310.06684, 2023.
[76] Qin, Y., Wang, X., Zhang, Z. and Zhu, W., "Disentangled Representation Learning with Large Language Models for Text-Attributed Graphs," in arXiv preprint arXiv:2310.18152, 2023.
[77] Zhu, J., Cui, Y., Liu, Y., Sun, H., Li, X., Pelger, M., Yang, T., Zhang, L., Zhang, R. and Zhao, H., "TextGNN: Improving text encoder via graph neural network in sponsored search," in WWW, 2021.
[78] Li, C., Pang, B., Liu, Y., Sun, H., Liu, Z., Xie, X., Yang, T., Cui, Y., Zhang, L. and Zhang, Q., "AdsGNN: Behavior-graph augmented relevance modeling in sponsored search," in SIGIR, 2021.
[79] Zhang, J., Chang, W.C., Yu, H.F. and Dhillon, I., "Fast multi-resolution transformer fine-tuning for extreme multi-label text classification," in NeurIPS, 2021.
[80] Xie, H., Zheng, D., Ma, J., Zhang, H., Ioannidis, V.N., Song, X., Ping, Q., Wang, S., Yang, C., Xu, Y. and Zeng, B., "Graph-Aware Language Model Pre-Training on a Large Graph Corpus Can Help Multiple Graph Applications," in KDD, 2023.
[81] Yasunaga, M., Bosselut, A., Ren, H., Zhang, X., Manning, C.D., Liang, P.S. and Leskovec, J., "Deep bidirectional language-knowledge graph pretraining," in NeurIPS, 2022.
[82] Huang, J., Zhang, X., Mei, Q. and Ma, J., "Can LLMs Effectively Leverage Graph Structural Information: When and Why," in arXiv preprint arXiv:2309.16595, 2023.
[83] Jin, X., Vinzamuri, B., Venkatapathy, S., Ji, H. and Natarajan, P., "Adversarial Robustness for Large Language NER models using Disentanglement and Word Attributions," in EMNLP, 2023.
[84] Kipf, T.N. and Welling, M., "Semi-supervised classification with graph convolutional networks," in ICLR, 2017.
[85] Hamilton, W., Ying, Z. and Leskovec, J., "Inductive representation learning on large graphs," in NeurIPS, 2017.
[86] Veličković, P., Cucurull, G., Casanova, A., Romero, A., Liò, P. and Bengio, Y., "Graph attention networks," in ICLR, 2018.
[87] Zhang, S., Liu, Y., Sun, Y. and Shah, N., "Graph-less Neural Networks: Teaching Old MLPs New Tricks Via Distillation," in ICLR, 2022.
[88] Liu, M., Gao, H. and Ji, S., "Towards deeper graph neural networks," in KDD, 2020.
[89] Meng, Y., Huang, J., Zhang, Y. and Han, J., "Generating training data with language models: Towards zero-shot language understanding," in NeurIPS, 2022.
[90] Sun, Y., Han, J., Yan, X., Yu, P.S. and Wu, T., "PathSim: Meta path-based top-k similarity search in heterogeneous information networks," in VLDB, 2011.
[91] Liu, H., Li, C., Wu, Q. and Lee, Y.J., "Visual instruction tuning," in NeurIPS, 2023.
[92] Park, C., Kim, D., Han, J. and Yu, H., "Unsupervised attributed multiplex network embedding," in AAAI, 2020.
[93] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł. and Polosukhin, I., "Attention is all you need," in NeurIPS, 2017.
[94] Haveliwala, T.H., "Topic-sensitive PageRank," in WWW, 2002.
[95] Oord, A.v.d., Li, Y. and Vinyals, O., "Representation learning with contrastive predictive coding," in arXiv preprint arXiv:1807.03748, 2018.
[96] Radford, A., Kim, J.W., Hallacy, C., Ramesh, A., Goh, G., Agarwal, S., Sastry, G., Askell, A., Mishkin, P., Clark, J. and Krueger, G., "Learning transferable visual models from natural language supervision," in ICML, 2021.
[97] Sun, C., Li, J., Fung, Y.R., Chan, H.P., Abdelzaher, T., Zhai, C. and Ji, H., "Decoding the silent majority: Inducing belief augmented social graph with large language model for response forecasting," in arXiv preprint arXiv:2310.13297, 2023.
[98] Sun, C., Li, J., Chan, H.P., Zhai, C. and Ji, H., "Measuring the Effect of Influential Messages on Varying Personas," in ACL, 2023.
[99] Whalen, R., "Legal networks: The promises and challenges of legal network analysis," in Mich. St. L. Rev., 2016.
[100] Friedrich, A., Palmer, A. and Pinkal, M., "Situation entity types: automatic classification of clause-level aspect," in ACL, 2016.
[101] Guha, N., Nyarko, J., Ho, D.E., Ré, C., Chilton, A., Narayana, A., Chohlas-Wood, A., Peters, A., Waldon, B., Rockmore, D.N. and Zambrano, D., "LegalBench: A collaboratively built benchmark for measuring legal reasoning in large language models," in arXiv preprint arXiv:2308.11462, 2023.
[102] Lin, Y., Wang, H., Chen, J., Wang, T., Liu, Y., Ji, H., Liu, Y. and Natarajan, P., "Personalized entity resolution with dynamic heterogeneous knowledge graph representations," in arXiv preprint arXiv:2104.02667, 2021.
[103] Bai, X., Wang, M., Lee, I., Yang, Z., Kong, X. and Xia, F., "Scientific paper recommendation: A survey," in IEEE Access, 2019.
[104] Chowdhury, S. and Schoen, M.P., "Research paper classification using supervised machine learning techniques," in Intermountain Engineering, Technology and Computing, 2020.
[105] Madigan, D., Genkin, A., Lewis, D.D., Argamon, S., Fradkin, D. and Ye, L., "Author identification on the large scale," in CSNA, 2005.
[106] He, X., Deng, K., Wang, X., Li, Y., Zhang, Y. and Wang, M., "LightGCN: Simplifying and powering graph convolution network for recommendation," in SIGIR, 2020.
[107] Chang, J., Gao, C., He, X., Jin, D. and Li, Y., "Bundle recommendation with graph convolutional networks," in SIGIR, 2020.
[108] Xu, H., Liu, B., Shu, L. and Yu, P., "Open-world learning and application to product classification," in WWW, 2019.
[109] Chen, L., Xie, Y., Zheng, Z., Zheng, H. and Xie, J., "Friend recommendation based on multi-social graph convolutional network," in IEEE Access, 8, pp. 43618-43629, 2020.
[110] Wang, G., Zhang, X., Tang, S., Zheng, H. and Zhao, B.Y., "Unsupervised clickstream clustering for user behavior analysis," in CHI, 2016.
[111] Shchur, O. and Günnemann, S., "Overlapping community detection with graph neural networks," in arXiv preprint arXiv:1909.12201, 2019.
[112] Wei, J., Tay, Y., Bommasani, R., Raffel, C., Zoph, B., Borgeaud, S., Yogatama, D., Bosma, M., Zhou, D., Metzler, D. and Chi, E.H., "Emergent Abilities of Large Language Models," in TMLR, 2022.
[113] Kojima, T., Gu, S.S., Reid, M., Matsuo, Y. and Iwasawa, Y., "Large language models are zero-shot reasoners," in NeurIPS, 2022.
[114] Wei, J., Wang, X., Schuurmans, D., Bosma, M., Xia, F., Chi, E., Le, Q.V. and Zhou, D., "Chain-of-thought prompting elicits reasoning in large language models," in NeurIPS, 2022.
[115] Radford, A., "Language Models are Unsupervised Multitask Learners," in OpenAI Blog, 2019.
[116] Lan, Z., Chen, M., Goodman, S., Gimpel, K., Sharma, P. and Soricut, R., "ALBERT: A Lite BERT for Self-supervised Learning of Language Representations," in ICLR, 2020.
[117] Clark, K., Luong, M.T., Le, Q.V. and Manning, C.D., "ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators," in ICLR, 2020.
[118] Bubeck, S., Chandrasekaran, V., Eldan, R., Gehrke, J., Horvitz, E., Kamar, E., Lee, P., Lee, Y.T., Li, Y., Lundberg, S. and Nori, H., "Sparks of artificial general intelligence: Early experiments with GPT-4," in arXiv preprint arXiv:2303.12712, 2023.
[119] Touvron, H., Martin, L., Stone, K., Albert, P., Almahairi, A., Babaei, Y., Bashlykov, N., Batra, S., Bhargava, P., Bhosale, S. and Bikel, D., "Llama 2: Open foundation and fine-tuned chat models," in arXiv preprint arXiv:2307.09288, 2023.
[120] Jiang, A.Q., Sablayrolles, A., Mensch, A., Bamford, C., Chaplot, D.S., Casas, D.D.L., Bressand, F., Lengyel, G., Lample, G., Saulnier, L. and Lavaud, L.R., "Mistral 7B," in arXiv preprint arXiv:2310.06825, 2023.
[121] Alayrac, J.B., Donahue, J., Luc, P., Miech, A., Barr, I., Hasson, Y., Lenc, K., Mensch, A., Millican, K., Reynolds, M. and Ring, R., "Flamingo: a visual language model for few-shot learning," in NeurIPS, 2022.
[122] Edwards, C., Zhai, C. and Ji, H., "Text2Mol: Cross-modal molecule retrieval with natural language queries," in EMNLP, 2021.
[123] Edwards, C., Lai, T., Ros, K., Honke, G., Cho, K. and Ji, H., "Translation between Molecules and Natural Language," in EMNLP, 2022.
[124] Wang, H., Feng, S., He, T., Tan, Z., Han, X. and Tsvetkov, Y., "Can Language Models Solve Graph Problems in Natural Language?," in arXiv preprint arXiv:2305.10037, 2023.
[125] Liu, C. and Wu, B., "Evaluating large language models on graphs: Performance insights and comparative analysis," in arXiv preprint arXiv:2308.11224, 2023.
[126] Guo, J., Du, L. and Liu, H., "GPT4Graph: Can Large Language Models Understand Graph Structured Data? An Empirical Evaluation and Benchmarking," in arXiv preprint arXiv:2305.15066, 2023.
[127] Zhang, J., "Graph-ToolFormer: To Empower LLMs with Graph Reasoning Ability via Prompt Augmented by ChatGPT," in arXiv preprint arXiv:2304.11116, 2023.
[128] Zhang, Z., Wang, X., Zhang, Z., Li, H., Qin, Y., Wu, S. and Zhu, W., "LLM4DyG: Can Large Language Models Solve Problems on Dynamic Graphs?," in arXiv preprint arXiv:2310.17110, 2023.
[129] Luo, L., Li, Y.F., Haffari, G. and Pan, S., "Reasoning on graphs: Faithful and interpretable large language model reasoning," in arXiv preprint arXiv:2310.01061, 2023.
[130] Jiang, J., Zhou, K., Dong, Z., Ye, K., Zhao, W.X. and Wen, J.R., "StructGPT: A general framework for large language model to reason over structured data," in arXiv preprint arXiv:2305.09645, 2023.
[131] Fatemi, B., Halcrow, J. and Perozzi, B., "Talk like a graph: Encoding graphs for large language models," in arXiv preprint arXiv:2310.04560, 2023.
[132] Sun, J., Xu, C., Tang, L., Wang, S., Lin, C., Gong, Y., Shum, H.Y. and Guo, J., "Think-on-Graph: Deep and responsible reasoning of large language model with knowledge graph," in arXiv preprint arXiv:2307.07697, 2023.
[133] Chen, D.Z., "Developing algorithms and software for geometric path planning problems," in ACM Comput. Surv., 28(4es), 1996. https://doi.org/10.1145/242224.242246
[134] Iqbal, A., Hossain, Md. and Ebna, A., "Airline Scheduling with Max Flow algorithm," in IJCA, 2018.
[135] Jiang, L., Zang, X., Alghoul, I.I.Y., Fang, X., Dong, J. and Liang, C., "Scheduling the covering delivery problem in last mile delivery," in Expert Systems with Applications, 2022.
[136] Nakagawa, H., Iwasawa, Y. and Matsuo, Y., "Graph-based knowledge tracing: modeling student proficiency using graph neural network," in WI, 2019.
[137] Li, H., Wei, H., Wang, Y., Song, Y. and Qu, H., "Peer-inspired student performance prediction in interactive online question pools with graph neural network," in CIKM, 2020.
[138] Zhang, X., Wang, L., Helwig, J., Luo, Y., Fu, C., Xie, Y., ... and Ji, S., "Artificial intelligence for science in quantum, atomistic, and continuum systems," in arXiv preprint arXiv:2307.08423, 2023.
[139] Rusch, T.K., Bronstein, M.M. and Mishra, S., "A survey on oversmoothing in graph neural networks," in arXiv preprint arXiv:2303.10993, 2023.
[140] Topping, J., Di Giovanni, F., Chamberlain, B.P., Dong, X. and Bronstein, M.M., "Understanding over-squashing and bottlenecks on graphs via curvature," in arXiv preprint arXiv:2111.14522, 2021.
[141] Ying, C., Cai, T., Luo, S., Zheng, S., Ke, G., He, D., ... and Liu, T.Y., "Do transformers really perform badly for graph representation?," in NeurIPS, 2021.
[142] Rampášek, L., Galkin, M., Dwivedi, V.P., Luu, A.T., Wolf, G. and Beaini, D., "Recipe for a general, powerful, scalable graph transformer," in NeurIPS, 2022.
[143] Liu, G., Zhao, T., Inae, E., Luo, T. and Jiang, M., "Semi-Supervised Graph Imbalanced Regression," in arXiv preprint arXiv:2305.12087, 2023.
[144] Wu, Q., Zhao, W., Li, Z., Wipf, D.P. and Yan, J., "NodeFormer: A scalable graph structure learning transformer for node classification," in NeurIPS, 2022.
[145] Liu, G., Zhao, T., Xu, J., Luo, T. and Jiang, M., "Graph rationalization with environment-based augmentations," in KDD, 2022.
[146] Bran, A.M., Cox, S., White, A.D. and Schwaller, P., "ChemCrow: Augmenting large-language models with chemistry tools," in arXiv preprint arXiv:2304.05376, 2023.
[147] Riesen, K. and Bunke, H., "IAM graph database repository for graph based pattern recognition and machine learning," in Structural, Syntactic, and Statistical Pattern Recognition: Joint IAPR International Workshop.
[148] Weininger, D., "SMILES, a chemical language and information system. 1. Introduction to methodology and encoding rules," in Journal of Chemical Information and Computer Sciences, 28(1), pp. 31-36, 1988.
[149] Heller, S., McNaught, A., Stein, S., Tchekhovskoi, D. and Pletnev, I., "InChI, the worldwide chemical structure identifier standard," in Journal of Cheminformatics, 5(1), pp. 1-9, 2013.
[150] O'Boyle, N. and Dalke, A., "DeepSMILES: an adaptation of SMILES for use in machine-learning of chemical structures," 2018.
[151] Krenn, M., Häse, F., Nigam, A., Friederich, P. and Aspuru-Guzik, A., "Self-referencing embedded strings (SELFIES): A 100% robust molecular string representation," in Machine Learning: Science and Technology.
[152] Bjerrum, E.J., "SMILES enumeration as data augmentation for neural network modeling of molecules," in arXiv preprint arXiv:1703.07076, 2017.
[159] Ock, J., Guntuboina, C. and Farimani, A.B., "Catalyst Property Prediction with CatBERTa: Unveiling Feature Exploration Strategies through Large Language Models," in arXiv preprint arXiv:2309.00563, 2023.
[160] Fang, Y., Liang, X., Zhang, N., Liu, K., Huang, R., Chen, Z., Fan, X. and Chen, H., "Mol-Instructions: A Large-Scale Biomolecular Instruction Dataset for Large Language Models," in arXiv preprint arXiv:2306.08018, 2023.
[161] Abdine, H., Chatzianastasis, M., Bouyioukos, C. and Vazirgiannis, M., "Prot2Text: Multimodal Protein's Function Generation with GNNs and Transformers," in arXiv preprint arXiv:2307.14367, 2023.
[162] Luo, Y., Yang, K., Hong, M., Liu, X. and Nie, Z., "MolFM: A Multimodal Molecular Foundation Model," in arXiv preprint arXiv:2307.09484, 2023.
[163] Qian, C., Tang, H., Yang, Z., Liang, H. and Liu, Y., "Can large language models empower molecular property prediction?," in arXiv preprint arXiv:2307.07443, 2023.
[164] Born, J. and Manica, M., "Regression Transformer enables concurrent sequence regression and generation for molecular language modelling," in Nature Machine Intelligence, 5(4), pp. 432-444, 2023.
[165] Li, J., Liu, Y., Fan, W., Wei, X.Y., Liu, H., Tang, J. and Li, Q., "Empowering Molecule Discovery for Molecule-Caption Translation with Large Language Models: A ChatGPT Perspective," in arXiv preprint, 2023.
[166] Zeng, Z., Yin, B., Wang, S., Liu, J., Yang, C., Yao, H., ... and Liu, Z., "Interactive Molecular Discovery with Natural Language," in arXiv preprint, 2023.
[167] Liu, Z., Li, S., Luo, Y., Fei, H., Cao, Y., Kawaguchi, K., Wang, X. and Chua, T.S., "MolCA: Molecular Graph-Language Modeling with Cross-Modal Projector and Uni-Modal Adapter," in EMNLP, 2023.
[168] Guo, T., Guo, K., Liang, Z., Guo, Z., Chawla, N.V., Wiest, O. and Zhang, X., "What indeed can GPT models do in chemistry? A comprehensive benchmark on eight tasks," in NeurIPS, 2023.
[169] Liu, Z., Zhang, W., Xia, Y., Wu, L., Xie, S., Qin, T., Zhang, M. and Liu, T.Y., "MolXPT: Wrapping Molecules with Text for Generative Pre-training," in ACL, 2023.
[170] Seidl, P., Vall, A., Hochreiter, S. and Klambauer, G., "Enhancing activity prediction models in drug discovery with the ability to understand human language," in ICML, 2023.
[171] Christofidellis, D., Giannone, G., Born, J., Winther, O., Laino, T. and Manica, M., "Unifying molecular and textual representations via multi-task language modelling," in ICML, 2023.
[172] Liu, S., Nie, W., Wang, C., Lu, J., Qiao, Z., Liu, L., ... and Anandkumar, A., "Multi-modal molecule structure-text model for text-based retrieval and editing," in Nature Machine Intelligence, 2023.
[173] Lacombe, R., Gaut, A., He, J., Lüdeke, D. and Pistunova, K., "Extracting Molecular Properties from Natural Language with Multimodal Contrastive Learning," in ICML Workshop on Computational Biology, 2023.
[174] Su, B., Du, D., Yang, Z., Zhou, Y., Li, J., Rao, A., ... and Wen, J.R., "A molecular multimodal foundation model associating molecule graphs with natural language," in arXiv preprint arXiv:2209.05481, 2022.
[175] Zeng, Z., Yao, Y., Liu, Z. and Sun, M., "A deep-learning system bridging molecule structure and biomedical text with comprehension comparable to human professionals," in Nature Communications.
[176] Iwayama, M., Wu, S., Liu, C. and Yoshida, R., "Functional Output Regression for Machine Learning in Materials Science," in Journal of Chemical Information and Modeling, 62(20), pp. 4837-4851, 2022.
[177] Bagal, V., Aggarwal, R., Vinod, P.K. and Priyakumar, U.D., "MolGPT: molecular generation using a transformer-decoder model," in Journal of Chemical Information and Modeling, 62(9), pp. 2064-2076, 2021.
[153] Ar´us-Pous, J., Johansson, S. V., Prykhodko, O., Bjerrum, E. J., [178] Taylor, R., Kardas, M., Cucurull, G., Scialom, T., Hartshorn, A., Tyrchan, C., Reymond, J. L., ... & Engkvist, O. (2019). Randomized Saravia, E., ... & Stojnic, R., Galactica: A large language model for SMILES strings improve the quality of molecular generative models. science. arXiv, 2022. Journal of cheminformatics, 11(1), 1-13. [179] Wang,S.,Guo,Y.,Wang,Y.,Sun,H.,&Huang,J.,Smiles-bert:large [154] Tetko IV, Karpov P, Bruno E, Kimber TB, Godin G. Augmentation scale unsupervised pre-training for molecular property prediction. is what you need!. InInternational Conference on Artificial Neural In BCB, 2019. Networks 2019 Sep 9 (pp. 831-835). Cham: Springer International [180] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C. H., & Kang, J., Publishing. BioBERT: a pre-trained biomedical language representation model [155] Kudo, T., & Richardson, J., Sentencepiece: A simple and language for biomedical text mining. Bioinformatics, 36(4), 1234-1240, 2020. independent subword tokenizer and detokenizer for neural text
onal [180] Lee, J., Yoon, W., Kim, S., Kim, D., Kim, S., So, C. H., & Kang, J., Publishing. BioBERT: a pre-trained biomedical language representation model [155] Kudo, T., & Richardson, J., Sentencepiece: A simple and language for biomedical text mining. Bioinformatics, 36(4), 1234-1240, 2020. independent subword tokenizer and detokenizer for neural text [181] Ma, R., & Luo, T. (2020). PI1M: a benchmark database for polymer processing, in EMNLP, 2018. informatics. Journal of Chemical Information and Modeling. [156] Irwin, R., Dimitriadis, S., He, J., & Bjerrum, E. J. (2022). Chem- [182] Hastings, J., Owen, G., Dekker, A., Ennis, M., Kale, N., Muthukr- former: a pre-trained transformer for computational chemistry. ishnan, V., ... & Steinbeck, C., ChEBI in 2016: Improved services and Machine Learning: Science and Technology, 3(1), 015022. an expanding collection of metabolites. Nucleic acids research. [157] Shi, Y., Zhang, A., Zhang, E., Liu, Z., & Wang, X., ReLM: [183] Kim, S., Chen, J., Cheng, T., Gindulyte, A., He, J., He, S., ... & Leveraging Language Models for Enhanced Chemical Reaction Bolton, E. E., PubChem 2019 update: improved access to chemical Prediction, in EMNLP, 2023. data, Nucleic acids research, 47(D1), D1102-D1109, 2019. [158] LiuP,RenY,RenZ.,Git-mol:Amulti-modallargelanguagemodel [184] Gaulton, A., Bellis, L. J., Bento, A. P., Chambers, J., Davies, M., for molecular science with graph, image, and text, arXiv preprint Hersey, A., ... & Overington, J. P., ChEMBL: a large-scale bioactivity arXiv:2308.06911, 2023 database for drug discovery. Nucleic acids research.JOURNAL OF LATEX CLASS FILES, VOL. 14, NO. 8, AUGUST 2021 20 [185] Zdrazil B, Felix E, Hunter F, Manners EJ, Blackshaw J, Corbett APPENDIX S, de Veij M, Ioannidis H, Lopez DM, Mosquera JF, Magarinos MP. .1 Training & Inference Framework with LLMs The ChEMBL Database in 2023: a drug discovery platform spanning multiple bioactivity data types and time periods. 
Nucleic Acids There are two typical training and inference paradigms Research. 2023 Nov 2:gkad1004. to apply language models on graphs: 1) Pretraining-then- [186] Mellor, C. L., Robinson, R. M., Benigni, R., Ebbrell, D., Enoch, S. J., Firman, J. W., ... & Cronin, M. T. D. (2019). Molecular fingerprint- finetuning: typically adopted for medium-scale large lan- derived similarity measures for toxicological read-across: Recom- guage models; and 2) Pretraining-then-prompting: typically mendations for optimal use. Regulatory Toxicology and Pharmacology. adopted for large-scale large language models. [187] Krenn, M., Ai, Q., Barthel, S., Carson, N., Frei, A., Frey, N. C., ... Pretraining denotes training the language model with unsu- & Aspuru-Guzik, A. (2022). SELFIES and the future of molecular string representations. Patterns, 3(10). pervised objectives to initialize them with language under- [188] Hu, W., Fey, M., Zitnik, M., Dong, Y., Ren, H., Liu, B., ... & standing and inference ability for downstream tasks. Typical Leskovec, J., Open graph benchmark: Datasets for machine learning pretraining objectives for pure text includ
he language model with unsu- & Aspuru-Guzik, A. (2022). SELFIES and the future of molecular string representations. Patterns, 3(10). pervised objectives to initialize them with language under- [188] Hu, W., Fey, M., Zitnik, M., Dong, Y., Ren, H., Liu, B., ... & standing and inference ability for downstream tasks. Typical Leskovec, J., Open graph benchmark: Datasets for machine learning pretraining objectives for pure text include masked language on graphs. In NeurIPS, 2020. modeling [23], auto-regressive causal language modeling [189] Xu,K.,Hu,W.,Leskovec,J.,&Jegelka,S.Howpowerfularegraph neural networks? In ICLR, 2019. [26], corruption-reconstruction language modeling [28] and [190] Li,J.,Li,D.,Savarese,S.,&Hoi,S.,Blip-2:Bootstrappinglanguage- text-to-text transfer modeling [29]. When extended in the image pre-training with frozen image encoders and large language graphdomain,languagemodelpretrainingstrategiesinclude models. arXiv preprint arXiv:2301.12597. [191] Zang, C., & Wang, F. Moflow: an invertible flow model for document relation prediction [30], network-contextualized generating molecular graphs. In ACM SIGKDD, 2020. maskedlanguagemodeling[31],contrastivesocialprediction [192] Liu, G., Inae, E., Zhao, T., Xu, J., Luo, T., & Jiang, M. (2023). Data- [32] and context graph prediction [33]. Centric Learning from Unlabeled Graphs with Diffusion Model. Finetuning refers to the process of training the language arXiv preprint arXiv:2303.10108. [193] Wang, Y., Lipka, N., Rossi, R. A., Siu, A., Zhang, R., & Derr, T. modelwithlabeleddataforthedownstreamtasks.Language Knowledge graph prompting for multi-document question answer- model fine-tuning methodology can be further categorized ing. AAAI, 2024. into fully fine-tuning, efficient fine-tuning, and instruction [194] Guo, Z., Yu, W., Zhang, C., Jiang, M. and Chawla, N.V., GraSeq: graph and sequence fusion learning for molecular property predic- tuning. tion. CIKM, 2020. 
[195] Yu, W., Zhu, C., Qin, L., Zhang, Z., Zhao, T., & Jiang, M. • Full Finetuning means updating all the parameters Diversifying content generation for commonsense reasoning with inside the language model. It is the most commonly mixture of knowledge graph experts. ACL findings, 2022. used fine-tuning method that fully stimulates the [196] Deng, J., Yang, Z., Wang, H., Ojima, I., Samaras, D., & Wang, F. language model’s potential for downstream tasks, but (2023). A systematic study of key elements underlying molecular property prediction. Nature Communications, 14(1), 6395. can suffer from heavy computational overload [36] [197] Renz, P., Van Rompaey, D., Wegner, J. K., Hochreiter, S., & and result in overfitting issues [35]. Klambauer, G. (2019). On failure modes in molecule generation • Efficient Finetuning refers to only fine-tuning a and optimization. Drug Discovery Today: Technologies, 32, 55-63. [198] Reymond, J. L. (2015). The chemical space project. Accounts of subset of parameters inside the language model. Chemical Research, 48(3), 722-730. Efficient tuning methods for pure text include prompt [199] Wang, H., Fu, T., Du, Y., Gao, W., Huang, K., Liu, Z., ... & Zitnik, tuning [37], prefix tuning [38], adapter [39] and LoRA M. (2023). Scientific discovery in the age of artificial intelligence. [40]. Efficient language model
ct. Accounts of subset of parameters inside the language model. Chemical Research, 48(3), 722-730. Efficient tuning methods for pure text include prompt [199] Wang, H., Fu, T., Du, Y., Gao, W., Huang, K., Liu, Z., ... & Zitnik, tuning [37], prefix tuning [38], adapter [39] and LoRA M. (2023). Scientific discovery in the age of artificial intelligence. [40]. Efficient language model fine-tuning methods Nature, 620(7972), 47-60. [200] Wang, H., Li, W., Jin, X., Cho, K., Ji, H., Han, J., & Burke, M. D. particularly designed for graph data include graph Chemical-reaction-aware molecule representation learning. arXiv, neural prompt [41] and graph-enhanced prefix [42]. 2021. • Instruction Tuning denotes fine-tuning language [201] Lai, T. M., Zhai, C., & Ji, H. (2023). KEBLM: Knowledge-Enhanced model with downstream task instructions [43] [44] to Biomedical Language Models. Journal of Biomedical Informatics. [202] Liu G, Xu J, Luo T, Jiang M. Inverse Molecular Design with Multi- encourage model generalization to unseen tasks in Conditional Diffusion Guidance. arXiv, 2024. inference. It is an orthogonal concept with full fine- [203] Li, M., Li, S., Wang, Z., Huang, L., Cho, K., Ji, H., Han, J. and tuning and efficient fine-tuning, in other words, one Voss, C. The future is not one-dimensional: Complex event schema induction by graph modeling for event prediction. arXiv preprint can choose both full fine-tuning and efficient fine- arXiv:2104.06344. tuning for instruction tuning. Instruction tuning is adopted in the graph domain for node classification [45], link prediction [46], and graph-level tasks [47]. Prompting is a technique to apply language model for downstreamtasksolvingwithoutupdatingthemodelparam- eters. One needs to formulate the test samples into natural language sequences and ask the language model to directly conduct inference based on the in-context demonstrations. This is a technique particularly popular for large-scale au- toregressive language models. 
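To make the prompting paradigm concrete, here is a minimal, hypothetical sketch (the helper names are ours, not taken from any surveyed system) of how a graph test sample can be formulated as a natural-language sequence — a verbalized edge list — optionally preceded by in-context demonstrations, so that a frozen LLM can answer directly without any parameter update:

```python
def verbalize_graph(edges):
    """Turn an edge list into a 'verbalized edge list' text description."""
    return " ".join(f"Node {u} is connected to node {v}." for u, v in edges)

def build_prompt(edges, question, demonstrations=()):
    """Assemble a zero-shot (no demonstrations) or few-shot prompt.

    The model parameters stay frozen; the LLM is expected to answer
    this prompt directly, which is the defining property of prompting.
    """
    parts = list(demonstrations)           # optional in-context examples
    parts.append(verbalize_graph(edges))   # the verbalized test graph
    parts.append(f"Question: {question}\nAnswer:")
    return "\n\n".join(parts)

prompt = build_prompt([(0, 1), (1, 2)], "Is node 0 connected to node 2?")
```

Few-shot prompting differs only in passing worked examples through `demonstrations`, matching the "preceded with a few demonstrative examples" formats listed in Table 4.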
Apart from direct prompting, follow-up works propose chain-of-thought prompting [48], tree-of-thought prompting [49], and graph-of-thought prompting [50].

TABLE 4
A collection of LLM reasoning methods on pure graph discussed in Section 4. We do not include the backbone models used in these methods as studied in the original papers, as these methods generally apply to any LLMs. The "Papers" column lists the papers that study the specific methods.

| Method | Graph Format or Encoding | Reasoning Process | Reasoning Category | Papers |
|---|---|---|---|---|
| Zero-Shot | Verbalized edge or adjacency list. | Directly answering. | Direct Answering | [124]–[126], [128], [131] |
| Role Prompting | Verbalized edge or adjacency list. | Directly answering by designating a specific role to the LLM. | Direct Answering | [126] |
| Format Explanation | Verbalized edge or adjacency list. | Encouraging the LLM to explain the input graph format first. | Direct Answering | [126] |
| GraphLLM | Prefix tokens encoded by a graph encoder. | Directly answering. | Direct Answering | [42] |
| Few-Shot (In-Context Learning) | Verbalized edge or adjacency lists preceded with a few demonstrative examples. | Directly answering by following the examples. | Direct Answering | [124], [125], [128], [131] |
| Chain-of-Thought | Verbalized edge or adjacency lists preceded with a few demonstrative examples. | Reasoning through a series of intermediate reasoning steps in the generation following the examples. | Heuristic Reasoning | [124]–[126], [128], [131], [132] |
| Self-Consistency | Verbalized edge or adjacency lists preceded with a few demonstrative examples. | Reasoning through a series of intermediate reasoning steps in generation, and then selecting the most consistent answer. | Heuristic Reasoning | [124] |
| Build-a-Graph | Verbalized edge or adjacency list. | Reconstructing the graph in output, and then reasoning on the graph. | Heuristic Reasoning | [124], [131] |
| Context-Summarization | Verbalized edge or adjacency list. | Directly answering by first summarizing the key elements in the graph. | Heuristic Reasoning | [126] |
| Reasoning-on-Graph | Retrieved paths from external graphs. | First, plan the reasoning process in the form of paths to be retrieved, and then infer on the retrieved paths. | Heuristic Reasoning | [129] |
| Iterative Reading-then-Reasoning | Retrieved neighboring edges or nodes from external graphs. | Iteratively retrieving neighboring edges or nodes from external graphs and inferring from the retrieved information. | Heuristic Reasoning | [130], [132] |
| Algorithmic Reasoning | Verbalized edge or adjacency list. | Simulating the reasoning process of a relevant algorithm in generation. | Algorithmic Reasoning | [124] |
| Calling APIs | External knowledge base. | Generate the reasoning process as (probably nested) API calls to be executed externally on the knowledge base. | Algorithmic Reasoning | [127], [132] |

TABLE 5
A collection of pure graph reasoning problems studied in Section 4. G = (V, E) denotes a graph with vertices V and edges E. v and e denote individual vertices and edges, respectively. The "Papers" column lists the papers that study the problem using LLMs. The "Complexity" column lists the time complexity of standard algorithms for the problem, ignoring more advanced but complex algorithms that are not comparable to LLMs' reasoning processes.

| Problem | Definition | Applications | Typical Complexity | Papers |
|---|---|---|---|---|
| Connectivity | Given a graph G and two nodes u and v, tell if they are connected by a path. | Relationship Detection, Link Prediction | O(|E|) or O(|V|^2) | [124], [125] |
| Neighbor Detection | Given a graph G and a node v, find the nodes connected to v. | Recommendation, Knowledge QA | O(min(|E|, |V|)) | [126] |
| Node Degree | Given a graph G and a node v, find the number of edges connected to v. | Entity Popularity, Importance Ranking | O(min(|E|, |V|)) | [125], [126] |
| Attribute Retrieval | Given a graph G with node-level information and a node v, return the attribute of v. | Recommendation, Node Classification, Node QA | O(1) | [126] |
| Graph Size | Given a graph G, find the number of nodes and edges. | Graph-level Classification | O(|V| + |E|) | [126] |
| Cycle Detection | Given a graph G, tell if it contains a cycle. | Loop Elimination, Program Loop Detection | O(|V|) | [124] |
| Diameter | Given a graph G, find the diameter of G. | Graph-level Classification, Clustering | O(|V|^3) or O(|V|^2 log|V| + |V||E|) | [126] |
| Topological Sort | Given a directed acyclic graph G, find a topological ordering of its vertices so that for every edge (u, v), u comes before v in the ordering. | Timeline Generation, Dependency Parsing, Scheduling | O(|V| + |E|) | [124] |
| Wedge or Triangle Detection | Given a graph G and a vertex v, identify if there is a wedge or triangle centered at v. | Relationship Detection, Link Prediction | O(|V| + |E|) | [125] |
| Maximum Triplet Sum | Given a graph G, find the maximum sum of the weights of three vertices that are connected. | Community Detection | O(|V|^3) | [42] |
| Shortest Path | Given a graph G and two nodes u and v, find the shortest path between u and v. | Navigation, Planning | O(|E|) or O(|V|^2) | [42], [124], [125] |
| Maximum Flow | Given a directed graph G with a source node s and a sink node t, find the maximum flow from s to t. | Transportation Planning, Network Design | O(|V||E|^2), O(|E||V| log|V|) or O(|V|^3) | [124] |
| Bipartite Graph Matching | Given a bipartite graph G with two disjoint sets of vertices V1 and V2, find a matching between V1 and V2 that maximizes the number of matched pairs. | Recommendation, Resource Allocation, Scheduling | O(|E|·sqrt(|V|)) | [42], [124] |
| Graph Neural Networks | Given a graph G with node features X of dimension d, simulate a graph neural network with l layers and return the encoded node features. | Node Classification, Graph-level Classification | O(l·d·|V|^2) | [124] |
| Clustering Coefficient | Given a graph G, find the clustering coefficient of G. | Community Detection, Node Clustering | O(|V|^3) | [126] |
| Substructure Counting | Given a graph G and a subgraph G', count the number of occurrences of G' in G. | Pattern Matching, Subgraph Detection, Abnormality Detection | NP-Complete | [42] |
| Hamilton Path | Given a graph G, find a path that visits every vertex exactly once. | Route Planning, Drilling Machine Planning, DNA Sequencing | NP-Complete | [124] |
| (Knowledge) Graph QA | Given a (knowledge) graph G and a question q, find the answer to q. | Dialogue System, Smart Assistant, Recommendation | — | [126], [129]–[132] |
| Graph Query Language Generation | Given a graph G and a query q, generate a query language that can be used to query G. | Graph Summarization, FAQ Generation, Query Suggestions | — | [126] |
| Node Classification | Given a graph G, predict the class of a node v. | Recommendation, User Profiling, Abnormality Detection | — | [126], [127] |
| Graph Classification | Given a graph G, predict the class of G. | Molecule Property Prediction, Molecule QA, Graph QA | — | [126], [127] |

TABLE 6
Summary of large language models on text-attributed graphs. Role of LM: "TE", "SE", "ANN" and "AUG" denote text encoder, structure encoder, annotator (labeling the nodes/edges), and augmentator (conducting data augmentation). Task: "NC", "UAP", "LP", "Rec", "QA", "NLU", "EC", "LM", "RG" denote node classification, user activity prediction, link prediction, recommendation, question answering, natural language understanding, edge classification, language modeling, and regression task.

| Approach | Category | Role of LM | LM Size | Focus | Task |
|---|---|---|---|---|---|
| GNN-LM [66] | LLM as Encoder | TE | 237M | Task | LM |
| GIANT [58] | LLM as Encoder | TE | 110M | Task | NC |
| TextGNN [77] | LLM as Encoder | TE | 110M | Task | Search |
| AdsGNN [78] | LLM as Encoder | TE | 110M | Task | Search |
| LM-GNN [68] | LLM as Encoder | TE | 110M | Efficiency | NC, LP, EC |
| GraD [69] | LLM as Encoder | TE | 110M/66M | Efficiency | LP, NC |
| TAPE [70] | LLM as Encoder | TE, AUG | 129M/GPT-3.5 | Task | NC |
| SimTeG [35] | LLM as Encoder | TE | 80M/355M | Task | NC, LP |
| LLM-GNN [64] | LLM as Encoder | ANN | GPT-3.5 | Task | NC |
| ENG [71] | LLM as Encoder | TE, AUG | 80M/GPT-3.5 | Task | NC |
| SPECTER [51] | LLM as Predictor | TE | 110M | Representation | NC, UAP, LP, Rec |
| GraphFormers [72] | LLM as Predictor | TE, SE | 110M | Representation | LP |
| GreaseLM [67] | LLM as Predictor | TE, SE | 355M | Task | QA |
| SciNCL [52] | LLM as Predictor | TE | 110M | Representation | NC, UAP, LP, Rec |
| MICoL [59] | LLM as Predictor | TE | 110M | Supervision | NC |
| LinkBERT [30] | LLM as Predictor | TE | 110M | Pretraining | QA, NLU |
| Heterformer [73] | LLM as Predictor | TE, SE | 110M | Representation | NC, LP |
| E2EG [60] | LLM as Predictor | TE | 66M | Task | NC |
| TwHIN-BERT [56] | LLM as Predictor | TE | 110M/355M | Pretraining | NC, LP |
| Edgeformers [74] | LLM as Predictor | TE, SE | 110M | Representation | NC, LP, EC |
| Patton [31] | LLM as Predictor | TE, RE | 110M | Pretraining | NC, LP, Search |
| InstructGLM [46] | LLM as Predictor | TE, SE | 250M/7B | Generalization | NC, LP |
| GNP [41] | LLM as Predictor | TE, SE | 3B/11B | Task | QA |
| Touchup-G [54] | LLM as Predictor | TE | 110M | Representation | NC, LP |
| DGTL [76] | LLM as Predictor | TE, SE | 13B | Task | NC |
| GraphText [65] | LLM as Predictor | TE, SE | GPT-3.5/4 | Task | NC |
| GraphGPT [45] | LLM as Predictor | TE, SE | 7B | Generalization | NC |
| METERN [75] | LLM as Predictor | TE, RE | 110M | Representation | NC, LP, Rec, RG |
| LTRN [57] | LLM as Aligner | TE | 110M | Supervision | NC |
| GLEM [62] | LLM as Aligner | TE | 110M | Task | NC |
| G2P2 [63] | LLM as Aligner | TE | 110M | Supervision | NC |
| ConGraT [53] | LLM as Aligner | TE | 110M/82M | Representation | LP, LM, NC |
| GRENADE [55] | LLM as Aligner | TE | 110M | Representation | NC, LP |
| THLM [33] | LLM as Aligner | TE | 110B | Pretraining | NC, LP |

TABLE 7
A summarization of graph-aware LLM finetuning objectives on text-attributed graphs. v+_i and v-_i denote a positive training node and a negative training node to v_i, respectively.

| Method | Positive v+_i | Negative v-_i | Objective f(·) |
|---|---|---|---|
| SPECTER [51] | (v_i, v+_i) ∈ E | (v_i, v-_i) ∉ E; or (v_i, v_u) ∈ E, (v_u, v-_i) ∈ E, (v_i, v-_i) ∉ E | max{||h_{v_i} − h_{v+_i}||_2 − ||h_{v_i} − h_{v-_i}||_2 + m, 0} |
| SciNCL [52] | ||h_{v_i} − h_{v+_i}||_2 ∈ (k+ − c+; k+] | ||h_{v_i} − h_{v-_i}||_2 ∈ (k-_hard − c-_hard; k-_hard] | max{||h_{v_i} − h_{v+_i}||_2 − ||h_{v_i} − h_{v-_i}||_2 + m, 0} |
| Touchup-G [54] | (v_i, v+_i) ∈ E | (v_i, v-_i) ∉ E | log(h_{v_i} · h_{v+_i}) + log(1 − h_{v_i} · h_{v-_i}) |
| TwHIN-BERT [56] | cos(x_{v_i}, x_{v+_i}) < k | in-batch random | −log [ exp(cos(h_{v_i}, h_{v+_i})/η) / Σ_{v-_i} exp(cos(h_{v_i}, h_{v-_i})/η) ] |
| MICoL [59] | v+_i ∈ N_M(v_i) | in-batch random | −log [ exp(cos(h_{v_i}, h_{v+_i})/η) / Σ_{v-_i} exp(cos(h_{v_i}, h_{v-_i})/η) ] |
| E2EG [60] | (v_i, v+_i) ∈ E | (v_i, v-_i) ∉ E | log(h_{v_i} · h_{v+_i}) + Σ_{v-_i} log(1 − h_{v_i} · h_{v-_i}) |

TABLE 8
Model collection in Section 6 for text-captioned graphs. "Lin." and "Vec." represent linearized graph encoding and vectorized graph encoding. "Classif.", "Regr.", "NER", "RE", "Retr.", "Gen.", "Cap." represent classification, regression, named entity recognition, relation extraction, (molecule) graph retrieval, (molecule) graph generation, (molecule) graph captioning.

| Model | LM Encoder | Graph Encoder | Gen. Decoder | LM Size | Task |
|---|---|---|---|---|---|
| SMILES-BERT [179] | Transformer [93] | Linearized | N.A. | 30M-565M | Classification |
| Text2Mol [122] | SciBERT [25] | GCN | Transformer [93] | ≥ 110M | Retrieval |
| MolGPT [177] | N.A. | Linearized | GPT | 6M | Generation |
| Chemformer [156] | BART [28] | Linearized | BART [28] | 45M-230M | Regression, Gen. |
| KV-PLM [175] | BERT [23] | Linearized | N.A. | 110M-340M | Classif., NER, RE, Retrieval |
| MFBERT [176] | RoBERTa [24] | Linearized | N.A. | 110M-340M | Classification |
| Galactica [178] | N.A. | Linearized | Transformer [93] | 125M-120B | Classification, Caption |
| MolT5 [123] | T5.1.1 [29] | Linearized | Transformer | 80M-780M | Gen., Cap. |
| Text+Chem T5 [171] | T5 [29] | Linearized | T5 [29] | 80M-780M | Classif., Gen., Caption |
| LLM-ICL [168] | N.A. | Linearized | GPT-3.5/4, LLaMA2 [119] | ≥ 780M | Classification, Generation |
| GIMLET [47] | T5 [29] | GT | T5 [29] | 80M-780M | Classif., Regr. |
| MolXPT [169] | N.A. | Linearized | GPT-2 | 350M | Classif., Gen., Caption |
| ChatMol [166] | T5 [29] | Linearized | T5 [29] | 80M-780M | Gen., Cap. |
| MolReGPT [165] | N.A. | Linearized | GPT-3.5 | N.A. | Gen., Cap. |
| RT [164] | N.A. | Linearized | XLNet [27] | 27M | Regr., Gen. |
| LLM4Mol [163] | RoBERTa [24] | Linearized | GPT-3.5 | N.A. | Classif., Regr. |
| LLaMA-Mol [160] | N.A. | Linearized | LLaMA [119] | 7B | Regr., Gen. |
| Prot2Text [161] | N.A. | GNN | GPT-2 | 256M-760M | Caption |
| CatBERTa [159] | N.A. | Linearized | RoBERTa [24] | N.A. | Regression |
| ReLM [157] | N.A. | GNN | GPT-3.5 | N.A. | Classification |
| MoMu [174] | SciBERT [25] | GNN | MolT5 [123], KV-PLM [175], MoFlow [191] | 82M-782M | Classif., Gen., Caption, Retr. |
| MoleculeSTM [172] | BART [28] | GIN, Linearized | BART [28] | 45M-230M | Classif., Gen., Caption |
| CLAMP [170] | BioBERT [180], CLIP [96], T5 [29] | GNN, Lin., Vec. | N.A. | ≤ 11B | Classification, Retrieval |
| MolFM [162] | BERT [23] | GIN | MolT5 [123] | 61.8M | Classif., Gen., Caption, Retr. |
| MoMu-v2 [173] | SciBERT [25] | GIN | N.A. | 82M-782M | Classification |
| GIT-Mol [158] | SciBERT [25] | GIN, Linearized | MolT5 [123] | 190M-890M | Classif., Gen., Caption |
| MolCA [167] | Galactica [178] | GIN | N.A. | 100M-877M | Classif., Regr., Retrieval |
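The finetuning objectives in Table 7 are straightforward to instantiate. The sketch below is our own illustration (not code from any of the cited systems): it implements the SPECTER-style margin objective and the InfoNCE-style objective as printed in the table, with the softmax denominator summed over the negative samples:

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors given as lists of floats."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def margin_loss(h_i, h_pos, h_neg, m=1.0):
    """SPECTER-style: max{ ||h_i - h_pos||_2 - ||h_i - h_neg||_2 + m, 0 }."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return max(dist(h_i, h_pos) - dist(h_i, h_neg) + m, 0.0)

def info_nce(h_i, h_pos, h_negs, eta=0.1):
    """TwHIN-BERT/MICoL-style, as printed in Table 7:
    -log( exp(cos(h_i, h_pos)/eta) / sum_neg exp(cos(h_i, h_neg)/eta) )."""
    num = math.exp(cosine(h_i, h_pos) / eta)
    den = sum(math.exp(cosine(h_i, h_n) / eta) for h_n in h_negs)
    return -math.log(num / den)
```

In practice the embeddings h come from the LLM text encoder and the positives/negatives are mined from the graph structure as specified in the "Positive" and "Negative" columns of the table; the functions above only capture the loss arithmetic.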
Jamba: A Hybrid Transformer-Mamba Language Model

Opher Lieber∗ Barak Lenz∗ Hofit Bata Gal Cohen Jhonathan Osin Itay Dalmedigos Erez Safahi Shaked Meirom Yonatan Belinkov Shai Shalev-Shwartz Omri Abend Raz Alon Tomer Asida Amir Bergman Roman Glozman Michael Gokhman Avashalom Manevich Nir Ratner Noam Rozen Erez Shwartz Mor Zusman Yoav Shoham

Abstract

We present Jamba, a new base large language model based on a novel hybrid Transformer-Mamba mixture-of-experts (MoE) architecture. Specifically, Jamba interleaves blocks of Transformer and Mamba layers, enjoying the benefits of both model families. MoE is added in some of these layers to increase model capacity while keeping active parameter usage manageable. This flexible architecture allows resource- and objective-specific configurations. In the particular configuration we have implemented, we end up with a powerful model that fits in a single 80GB GPU. Built at large scale, Jamba provides high throughput and small memory footprint compared to vanilla Transformers, and at the same time state-of-the-art performance on standard language model benchmarks and long-context evaluations. Remarkably, the model presents strong results for up to 256K tokens context length. We study various architectural decisions, such as how to combine Transformer and Mamba layers, and how to mix experts, and show that some of them are crucial in large scale modeling. We also describe several interesting properties of these architectures which the training and evaluation of Jamba have revealed, and plan to release checkpoints from various ablation runs, to encourage further exploration of this novel architecture. We make the weights of our implementation of Jamba publicly available under a permissive license.

Model: https://huggingface.co/ai21labs/Jamba-v0.1

1 Introduction

We introduce Jamba, a new publicly available large language model.
Jamba is based on a novel hybrid architecture, which combines Transformer layers [46] with Mamba layers [16], a recent state-space model[17, 18], as wellas a mixture-of-experts (MoE)component [13, 41]. Jambathus combinestwoorthogonalarchitecturaldesignsthattogethergiveitimprovedperformanceandhigher throughput,whilemaintainingamanageablememoryfootprint. The7B-basedJambamodel(12B active parameters, 52B total available parameters) we are releasing was designed to fit in a single 80GBGPU,buttheJambaarchitecturesupportsotherdesignchoices,depending onone’shardware and performance requirements. ∗ Equal contribution.ThefundamentalnoveltyofJambaisitshybridTransformer-Mambaarchitecture(thoughseemention belowofrecentrelatedefforts). DespitetheimmensepopularityoftheTransformerasthepredominant architecture for language models, it suffers from two main drawbacks. First, its high memory and compute requirementshinders theprocessing oflong contexts, wherethe key-value (KV) cachesize becomesalimitingfactor. Second,itslackofasinglesummarystateentailsslowinferenceandlow throughput, since each generated token performs a computation on the entire context. In contrast, older recurrent neural network (RNN) models, which summarize an arbitrarily long context in a singlehiddenstate,donotsufferfromtheselimitations. RNNmodelshavetheirownshortcomings, however. They are costly to train since trainingcannot be parallelized across time steps. And they struggle with long distance relationships, which the hidden state captures to only a limited extent. Recentstatespacemodels(SSMs)likeMambaare moreefficienttotrainthanRNNsandaremore capableathandlinglongdistancerelationships,butstilllagbehindtheperformanceofcomparably sized Transformer language models. Taking advantage of both model families, Jamba combines Transformer and Mamba layers, at a certain ratio
ings, however. They are costly to train since trainingcannot be parallelized across time steps. And they struggle with long distance relationships, which the hidden state captures to only a limited extent. Recentstatespacemodels(SSMs)likeMambaare moreefficienttotrainthanRNNsandaremore capableathandlinglongdistancerelationships,butstilllagbehindtheperformanceofcomparably sized Transformer language models. Taking advantage of both model families, Jamba combines Transformer and Mamba layers, at a certain ratio. Varying the ratio of Transformer/Mamba layers allows balancing memory usage, efficient training, and long context capabilities. A few other recent attempts to combine Attention and SSM modules are worth noting. [50] mixes an S4 layer [17] with a local attention layer, followed by a sequence of local attention layers; it shows experimentswithsmallmodelsand simpletasks. [16]reportsthatinterleavingMambaand attention layers is only slightly better than pure Mamba in terms of perplexity, with models up to 1.3B parameters. [33] starts with an SSM layer followed by chunk-based Transformers, with models up to 1.3B showing improved perplexity. [12] adds an SSM layer before the self-attention in a Transformer layer, while [38] adds theSSM after the self-attention, both showing improvements on speechrecognition. [32]replacestheMLPlayersintheTransformerbyMambalayers,andshows benefitsinsimpletasks. TheseeffortsaredifferentfromJambabothintheparticularwayinwhich theSSMcomponentismixedwiththeattentionone,andinthescaleofimplementation. Closestare perhapsH3[14],aspeciallydesignedSSMthatenablesinductioncapabilities,andageneralization called Hyena [35]. The former proposed ahybrid architecture that replaces the second and middle layerswithself-attention,andwasimplementedwithupto2.7Bparametersand400Btrainingtokens. However, as shown in [16], its perfomance lags that of pure Mamba. Based on Hyena, StripedHyena [36] interleaves attention and SSM layers in a 7B parameter model. 
However, it lags behind the attention-only Mistral-7B [22]. All of this renders Jamba the first production-grade attention-SSM hybrid model. Scaling the hybrid Jamba architecture required overcoming several obstacles, which we discuss in Section 6.

Jamba also includes MoE layers [13, 41], which allow increasing the model capacity (total number of available parameters) without increasing compute requirements (number of active parameters). MoE is a flexible approach that enables training extremely large models with strong performance [23]. In Jamba, MoE is applied to some of the MLP layers. The more MoE layers, and the more experts in each MoE layer, the larger the total number of model parameters. In contrast, the more experts we use at each forward pass, the larger the number of active parameters as well as the compute requirement. In our implementation of Jamba, we apply MoE at every other layer, with 16 experts and the top-2 experts used at each token (a more detailed discussion of the model architecture is provided below).

We evaluated our implementation of Jamba on a wide range of benchmarks and found it performs comparably to Mixtral-8x7B [23], which has a similar number of parameters, and also to the larger Llama-2 70B [45]. In addition, our model supports a context length of 256K tokens – the longest supported context length for production-grade publicly available models. On long-context evaluations, Jamba outperforms Mixtral on most of the evaluated datasets. At the same time, Jamba is extremely efficient; for example, its throughput is 3x that of Mixtral-8x7B for long contexts. Moreover, our model fits in a single GPU (with 8-bit weights) even with contexts of over 128K tokens, which is impossible with similar-size attention-only models such as Mixtral-8x7B.

Somewhat unusually for a new architecture, we release Jamba (12B active parameters, 52B total available parameters) under the Apache 2.0 license: https://huggingface.co/ai21labs/Jamba-v0.1. We do so since we feel that the novel architecture of Jamba calls for further study, experimentation, and optimization by the community.
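The decoupling of total from active parameters described above can be sketched with a simple parameter count. The hidden and expert dimensions below are illustrative assumptions for the sake of the arithmetic, not the released model's exact values; the point is only that top-K routing makes the active fraction K/n of the expert parameters.

```python
def moe_mlp_params(d_model, d_ff, n_experts, top_k):
    """Parameter counts for a simplified MoE MLP layer.

    Each expert is modeled as an MLP with two weight matrices
    (d_model x d_ff and d_ff x d_model). 'total' counts all experts;
    'active' counts only the top_k experts routed to for a given token.
    """
    per_expert = 2 * d_model * d_ff
    return {"total": n_experts * per_expert, "active": top_k * per_expert}

# Illustrative dimensions (assumed, not Jamba's actual configuration):
counts = moe_mlp_params(d_model=4096, d_ff=14336, n_experts=16, top_k=2)
# With 16 experts and top-2 routing, only 2/16 = 1/8 of the expert
# parameters participate in any forward pass.
print(counts["active"] / counts["total"])  # 0.125
```

This is why Jamba's 52B total parameters translate to only 12B active parameters per token.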
Our design was based on various ablation experiments we conducted to explore the effect of different tradeoffs and design choices, and insights gleaned from those. These ablations were performed at scales of up to 7B parameters, and training runs of up to 250B tokens. We plan to release model checkpoints from these runs.

Figure 1: (a) A single Jamba block. (b) Different types of layers. The implementation shown here is with l = 8, a : m = 1 : 7 ratio of attention-to-Mamba layers, and MoE applied every e = 2 layers.

Important notice: The Jamba model released is a pretrained base model, which did not go through alignment or instruction tuning, and does not have moderation mechanisms. It should not be used in production environments or with end users without additional adaptation.

2 Model Architecture

Jamba is a hybrid decoder architecture that mixes Transformer layers [46] with Mamba layers [16], a recent state-space model (SSM) [17, 18], in addition to a mixture-of-experts (MoE) module [13, 41]. We call the combination of these three elements a Jamba block. See Figure 1 for an illustration.

Combining Transformer, Mamba, and MoE elements allows flexibility in balancing among the sometimes conflicting objectives of low memory usage, high throughput, and high quality. In terms of memory usage, note that comparing the total size of the model parameters can be misleading. In an MoE model, the number of active parameters that participate in any given forward step may be much smaller than the total number of parameters. Another important consideration is the KV cache – the memory required to store the attention keys and values in the context.
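A back-of-the-envelope estimate makes the KV-cache consideration concrete. The head counts and dimensions below are illustrative assumptions rather than the released model's exact values; the point is that the cache grows linearly with the number of attention layers, so replacing attention layers with Mamba layers shrinks it proportionally.

```python
def kv_cache_bytes(n_attn_layers, n_kv_heads, head_dim, context_len,
                   bytes_per_elem=2):
    """KV cache = keys + values (factor of 2), stored per attention layer,
    per KV head, per head dimension, per position in the context."""
    return 2 * n_attn_layers * n_kv_heads * head_dim * context_len * bytes_per_elem

# Illustrative: a 32-layer all-attention model vs. a hybrid that keeps
# 4 of the 32 layers as attention (a 1:7 attention-to-Mamba ratio),
# both with an assumed 8 KV heads of dimension 128 and a 16-bit cache:
full = kv_cache_bytes(n_attn_layers=32, n_kv_heads=8, head_dim=128,
                      context_len=256_000)
hybrid = kv_cache_bytes(n_attn_layers=4, n_kv_heads=8, head_dim=128,
                        context_len=256_000)
print(full // hybrid)  # 8 -> an 8x smaller KV cache, as in the text
```

Under these assumptions, only the ratio of attention layers matters for the saving: 32/4 attention layers gives the 8x KV-cache reduction the architecture aims for.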
When scaling Transformer models to long contexts, the KV cache becomes a limiting factor. Trading off attention layers for Mamba layers reduces the total size of the KV cache. Our architecture aims to provide not only a small number of active parameters but also an 8x smaller KV cache compared to a vanilla Transformer. Table 1 compares Jamba with recent publicly available models, showing its advantage in maintaining a small KV cache even with 256K token contexts.

          Available params   Active params   KV cache (256K context, 16bit)
LLAMA-2   6.7B               6.7B            128GB
Mistral   7.2B               7.2B            32GB
Mixtral   46.7B              12.9B           32GB
Jamba     52B                12B             4GB

Table 1: Comparison of Jamba and recent open models in terms of total available parameters, active parameters, and KV cache memory on long contexts. Jamba provides a substantial reduction in the KV cache memory requirements.

In terms of throughput, with short sequences, attention operations take up a small fraction of the inference and training FLOPS [6]. However, with long sequences, attention hogs most of the compute. In contrast, Mamba layers are more compute-efficient. Thus, increasing the ratio of Mamba layers improves throughput, especially for long sequences.

Here is a description of the main configuration, which provides improved performance and efficiency. Section 6 contains results from ablation experiments supporting the design choices.

The basic component is a Jamba block, which may be repeated in sequence. Each Jamba block is a combination of Mamba or attention layers. Each such layer contains either an attention or a Mamba module, followed by a multi-layer perceptron (MLP). The different possible types of layers are shown in Figure 1(b).² A Jamba block contains l layers, which are mixed at a ratio of a : m, meaning a attention layers for every m Mamba layers.

In Jamba, some of the MLPs may be replaced by MoE layers, which helps increase the model capacity while keeping the active number of parameters, and thus the compute, small. The MoE module may be applied to MLPs every e layers. When using MoE, there are n possible experts per layer, with a router choosing the top K experts at each token. In summary, the different degrees of freedom in the Jamba architecture are:

• l: the number of layers.
• a : m: ratio of attention-to-Mamba layers.
• e: how often to use MoE instead of a single MLP.
• n: total number of experts per layer.
• K: number of top experts used at each token.

Given this design space, Jamba provides flexibility in preferring certain properties over others. For example, increasing m and decreasing a, that is, increasing the ratio of Mamba layers at the expense of attention layers, reduces the required memory for storing the key-value cache. This reduces the overall memory footprint, which is especially important for processing long sequences. Increasing the ratio of Mamba layers also improves throughput, especially at long sequences. However, decreasing a might lower the model's capabilities. Additionally, balancing n, K, and e affects the relationship between active parameters and total available parameters. A larger n increases the model capacity at the expense of memory footprint, while a larger K increases the active parameter usage and the compute requirement.
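The degrees of freedom above can be made concrete with a small sketch that lays out one Jamba block. This is an illustrative reconstruction of the scheme in Figure 1, not the reference implementation; in particular, the exact position of the attention layer within each group of a + m layers is an assumption here.

```python
def jamba_block(l, a, m, e):
    """Lay out one Jamba block: l layers mixed at an a:m attention-to-Mamba
    ratio, each layer followed by an MLP that is replaced by an MoE module
    every e layers. Returns a list of (mixer, ffn) pairs."""
    assert l % (a + m) == 0, "l must be a multiple of a + m"
    layers = []
    for i in range(l):
        # Place the a attention layers at the start of each (a + m) group
        # (an assumption; the paper's figure fixes a specific placement).
        mixer = "attention" if i % (a + m) < a else "mamba"
        ffn = "moe" if i % e == e - 1 else "mlp"
        layers.append((mixer, ffn))
    return layers

block = jamba_block(l=8, a=1, m=7, e=2)
print(sum(mixer == "attention" for mixer, _ in block))  # 1 attention layer per block
print(sum(ffn == "moe" for _, ffn in block))            # 4 MoE layers per block
```

With the released configuration (l = 8, a : m = 1 : 7, e = 2), each block holds one attention layer, seven Mamba layers, and four MoE modules.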
In contrast, a larger e decreases the model capacity, while decreasing both compute (when K > 1) and memory requirements, and allowing for fewer communication dependencies (decreasing memory transfers as well as inter-GPU communication during expert-parallel training and inference).

Jamba's implementation of Mamba layers incorporates several normalizations that help stabilize training at large model scales. In particular, we apply RMSNorm [48] in the Mamba layers.

²The figure shows a potential Attention MoE layer, which our architecture does not use, but future variants could.

We found that with the Mamba layer, positional embeddings or mechanisms like RoPE [42] are not necessary, and so we do not use any explicit positional information.

Other architecture details are standard, including grouped-query attention (GQA), SwiGLU activation function [6, 40, 45], and load balancing for the MoE [13]. The vocabulary size is 64K. The tokenizer is trained with BPE [15, 29, 39] and each digit is a separate token [6]. We also remove the dummy space used in Llama and Mistral tokenizers for more consistent and reversible tokenization.

3 Reaping the Benefits

3.1 Jamba Implementation for a Single 80GB GPU

The specific configuration in our implementation was chosen to fit in a single 80GB GPU, while achieving best performance in the sense of quality and throughput. In our implementation we have a sequence of 4 Jamba blocks. Each Jamba block has the following configuration:

• l = 8: the number of layers.
• a : m = 1 : 7: ratio of attention-to-Mamba layers.
• e = 2: how often to use MoE instead of a single MLP.
• n = 16: total number of experts.
• K = 2: number of top experts used at each token.

The a : m = 1 : 7 ratio was chosen according to preliminary ablations, as shown in Section 6, since this ratio was the most compute-efficient variant amongst the best performing variants in terms of quality. The configuration of the experts was chosen to enable the model to fit in a single 80GB GPU (with int8 weights), while including sufficient memory for the inputs. In particular, n and e were balanced to have an average of ∼8 experts per layer. In addition, we balanced n, K, and e to allow for high quality, while keeping both compute requirements and communication dependencies (memory transfers) in check. Accordingly, we chose to replace the MLP module with MoE on every other layer, as well as have a total of 16 experts, two of which are used at each token. These choices were inspired by prior work on MoE [7, 49] and verified in preliminary experiments.

Figure 2 shows the maximal context length that fits a single 80GB GPU with our Jamba implementation compared to Mixtral 8x7B and Llama-2-70B. Jamba provides 2x the context length of Mixtral and 7x that of Llama-2-70B.

Figure 2: Comparison of maximum context length fitting in a single A100 80GB GPU. Jamba enables 2x the context length of Mixtral and 7x that of Llama-2-70B.

Overall, our Jamba implementation was successfully trained on context lengths of up to 1M tokens. The released model supports lengths of up to 256K tokens.

3.2 Throughput Analysis

For concreteness, we present results of the throughput in two specific settings.³ In the first setting, we have varying batch size, a single A100 80GB GPU, int8 quantization, 8K context length, generating output of 512 tokens. As Figure 3a shows, Jamba allows processing of large batches, leading to a 3x increase in throughput (tokens/second) over Mixtral, which does not fit with a batch of 16 despite having a similar number of active parameters.
In the second setting, we have a single batch, 4 A100 GPUs, no quantization, varying context lengths, generating output of 512 tokens. As demonstrated in Figure 3b, at small context lengths all models have a similar throughput. Jamba excels at long contexts; with 128K tokens its throughput is 3x that of Mixtral. Note that this is despite the fact that Jamba has not yet enjoyed optimizations of the kind the community has developed for pure Transformer models over the past six years. We can expect the throughput gap to increase as such optimizations are developed also for Jamba.

Figure 3: Comparison of throughput (tokens/second) with Jamba and recent open models. (a) Throughput at different batch sizes (single A100 GPU, 8K context length). Jamba allows processing large batches, with a throughput 3x greater than Mixtral. (b) Throughput at different context lengths (single batch, 4 A100 GPUs). With a context of 128K tokens, Jamba obtains 3x the throughput of Mixtral, while Llama-2-70B does not fit with this long context.

4 Training Infrastructure and Dataset

The model was trained on NVIDIA H100 GPUs. We used an in-house proprietary framework allowing efficient large-scale training including FSDP, tensor parallelism, sequence parallelism, and expert parallelism. Jamba is trained on an in-house dataset that contains text data from the Web, books, and code, with the last update in March 2024. Our data processing pipeline includes quality filters and deduplication.

5 Evaluation

In general we approach benchmarks cautiously, as they correlate only partially with what matters in real applications, and furthermore invite gaming the system in order to boast vanity numbers. Nevertheless, we present several indicative results.

5.1 Academic Benchmarks

We report results with a wide range of standard academic benchmarks. Common sense reasoning: HellaSwag (10-shot) [47], WinoGrande (5-shot) [37], ARC-E (0-shot) and ARC-Challenge (25-shot) [9], and PIQA (zero-shot) [3]. Reading comprehension: BoolQ (10-shot) [8] and QuAC (zero-shot) [5]. Others: GSM8K (3-shot CoT) [10], HumanEval (pass@1) [4], Natural Questions closed-book (NQ; 5-shot) [26], and TruthfulQA (zero-shot) [27]. Aggregate benchmarks: MMLU (5-shot) [20] and BBH (3-shot) [43].

³Referring to end-to-end throughput (encoding + decoding). The results should be taken relatively rather than absolutely, as they are without possible optimizations.

Reasoning:
              HellaSwag   WinoGrande   ARC-E   ARC-C   PIQA   NQ     TruthfulQA
Llama-2 13B   80.7        72.8         77.3    59.4    80.5   37.7   37.4
Llama-2 70B   85.3        80.2         80.2    67.3    82.8   46.9   44.9
Gemma         81.2        72.3         81.5    53.2    81.2   32.6   44.8
Mixtral       86.7        81.2         77.6    66      83     44.8   46.8
Jamba         87.1        82.5         73.5    64.4    83.2   45.9   46.4

Comprehension and aggregate:
              BoolQ   QuAC   GSM8K   HumanEval   MMLU   BBH
Llama-2 13B   81.7    42.7   34.7    18.3        54.8   39.4
Llama-2 70B   85      42.4   55.3    29.9        69.8   51.2
Gemma         87.2    39.2   54.5    32.3        64.3   55.1
Mixtral       88.4    40.9   60.4    34.8        70.6   50.3
Jamba         88.2    40.9   59.9    29.3        67.4   45.4

Table 2: Comparison of Jamba with other publicly available models. Jamba obtains similar or better performance with much better throughput.

Table 2 compares Jamba to several publicly available models on common academic benchmarks for evaluating language models. We compare with Llama-2 13B [45], which has about the same number of active parameters as our model, Llama-2 70B, which is larger than our model, Gemma [44], which has 7B parameters, and Mixtral [23], which has about the same number of active and total parameters as our model.
Noticeably, Jamba performs comparably to the leading publicly available models of similar or larger size, including Llama-2 70B and Mixtral. At the same time, our model has a smaller number of total available parameters than Llama-2 (52B compared to 70B). Moreover, as a sparse model, Jamba has only 12B active parameters, similar to Mixtral's 12.9B active parameters. However, as a fully-attentional model, Mixtral has a large memory footprint with long sequences, requiring 32GB for the KV cache with 256K tokens. In contrast, thanks to its hybrid Attention-Mamba architecture, Jamba's KV cache takes only 4GB even at such a long context (Section 2). Importantly, our Jamba achieves such strong performance while having much better throughput than Llama-2 70B and Mixtral, up to 3x improvement (Section 3.2).

In summary, Jamba demonstrates the ability of hybrid architectures to reach the performance of state-of-the-art Transformer-based models of the same size class, while having the benefits of an SSM.

5.2 Long-Context Evaluations

We have successfully trained Jamba models with context lengths of up to 1M tokens. The released model handles context lengths of up to 256K tokens. In this section, we evaluate it on synthetic and naturalistic benchmarks that test its long-context capabilities.

5.2.1 Needle-in-a-haystack

As Figure 4 shows, Jamba has excellent performance in the needle-in-a-haystack evaluation, which requires retrieving a simple statement planted in a long context window [24]. This result is noteworthy especially given that our implementation of Jamba uses only 4 attention layers.

5.2.2 Naturalistic long-context evaluation

We evaluate Jamba's ability to handle long contexts using question-answering benchmarks consisting of long inputs. To this end, we repurpose five of the longest-context datasets from L-Eval [2], by structuring them in a few-shot format (we use 3-shots in all experiments here). Specifically, we evaluated the models on the following datasets: NarrativeQA (QA on narratives; [25]), LongFQA (finance; [2]), NaturalQuestions (NQ; Wikipedia; [26]), CUAD (law; [21]), and SFiction (science fiction). The average input length in these datasets ranges from 6K to 62K tokens. These context lengths are further highly expanded by the few-shot format.

Figure 4: A needle-in-a-haystack evaluation showing Jamba's ability to recall statements placed in the middle of contexts of up to 256K tokens length.

Table 3 summarizes the evaluation results, in terms of F1. Jamba outperforms Mixtral on most of the datasets as well as on average.
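As a sanity check, the average reported for these long-context results can be reproduced from the per-dataset F1 scores themselves, assuming the average is an unweighted (macro) mean over the five datasets:

```python
# Per-dataset F1 scores as reported for the long-context QA benchmarks
# (order: LongFQA, CUAD, NarrativeQA, NQ, SFiction):
scores = {
    "Mixtral": [0.42, 0.46, 0.29, 0.58, 0.42],
    "Jamba":   [0.44, 0.44, 0.30, 0.60, 0.40],
}

def macro_average(f1s):
    """Unweighted mean of per-dataset F1 scores."""
    return sum(f1s) / len(f1s)

for model, f1s in scores.items():
    print(model, round(macro_average(f1s), 2))
# Mixtral 0.43, Jamba 0.44 -- Jamba ahead on the macro average.
```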
In addition, as these long-context tasks require substantial computation, here Jamba's efficiency shines, with much better throughput with long contexts (Section 3.2).

          LongFQA   CUAD   NarrativeQA   NQ     SFiction   Avg
Mixtral   0.42      0.46   0.29          0.58   0.42       0.43
Jamba     0.44      0.44   0.30          0.60   0.40       0.44

Table 3: Results (F1) on long-context QA benchmarks, with a 3-shot format.

6 Ablations and Insights

This section discusses ablation experiments we ran for different design choices in our implementation of the Jamba architecture. First we show the benefit of combining attention and Mamba layers, at which ratio they should be combined, and how to interleave them. We investigate cases where pure Mamba fails, suggesting that it struggles to develop in-context learning capabilities, while the Attention–Mamba hybrid exhibits in-context learning similar to vanilla Transformers. Then we show the benefit of adding MoE on top of a hybrid Attention–Mamba model. Finally, we share two additional learnings that we found useful: explicit positional information is not needed in Jamba, and Mamba layers necessitate special normalization to stabilize training at large scale.⁴

For these ablations, we report the following measures, which exhibit informative performance even at small data or model scale.

• Academic benchmarks: HellaSwag (10-shot) [47], WinoGrande (5-shot) [37], Natural Questions closed-book (NQ; 5-shot) [26].
• HuggingFace OpenLLM leaderboard (OLLM) [11]: a summary statistic of several datasets. We report results with our reproduction.
• Perplexity evaluations: we report log-prob (per byte) on texts from three domains: C4, Books, and code.

⁴In all the ablation experiments, “pure Mamba” refers to models with Mamba layers interleaved with MLP layers.

6.1 Benefits of combining Attention and Mamba

We first investigate the ratio of Attention to Mamba layers (a : m), with 1.3B parameter models trained for 250B tokens. As Table 4 shows, the hybrid Jamba model outperforms the pure attention or Mamba models. The ratio of attention-to-Mamba layers may be 1:3 or 1:7 with virtually no performance difference. Figure 5 shows the training loss of these models, where Jamba exhibits improved loss during training. Given that a 1:7 ratio is more compute-efficient and shows similar performance, we opt for it in our larger-scale experiments.

                                OLLM   HellaSwag   WinoGrande   NQ     C4       Books    Code
Attention                       36.4   62.4        59.6         14.5   -0.543   -0.659   -0.331
Mamba                           36.1   62.6        59.4         14.5   -0.543   -0.661   -0.334
Jamba (a : m = 1 : 3, no MoE)   37.2   65.1        61.7         16.5   -0.533   -0.649   -0.321
Jamba (a : m = 1 : 7, no MoE)   37.2   65.1        61.7         16.0   -0.533   -0.650   -0.321

Table 4: Results on academic benchmarks and log-probability evaluations (C4, Books, Code columns) showing an improved performance of Attention-Mamba (no MoE) compared to vanilla Attention and Mamba models. There is no substantial difference between 1:3 and 1:7 ratios of Attention-to-Mamba layers. Models are 1.3B parameters, trained for 250B tokens.

Figure 5: Training loss curves for pure Attention, pure Mamba, and Attention-Mamba hybrids (no MoE), with ratios a : m of 1:3 and 1:7. All models are 1.3B parameters. The two hybrids achieve better loss throughout this training run, without any noticeable difference between the different Attention/Mamba ratios.

Next, we compare performance of vanilla Transformer, vanilla Mamba, and Attention-Mamba hybrid models, at 7B model size, after training on 50B tokens.
As Table 5 shows, the pure Mamba layer is quite competitive, but lags slightly behind pure Attention. The hybrid Attention-Mamba (without MoE) outperforms the pure models while obtaining better throughput than the vanilla Transformer (Section 3.2).

                                OLLM   HellaSwag   WinoGrande   NQ     C4       Books    Code
Attention                       36.1   60.4        59.7         13.7   -0.555   -0.666   -0.347
Mamba                           35.3   60.2        55.8         14.0   -0.554   -0.667   -0.355
Jamba (a : m = 1 : 7, no MoE)   36.6   62.5        58.8         15.4   -0.547   -0.658   -0.340

Table 5: Results on academic benchmarks and log-prob evaluations (C4, Books, Code columns), comparing pure Attention, pure Mamba, and Attention-Mamba hybrid (no MoE). Models are 7B parameters, trained for 50B tokens.

Figure 6 shows the training loss of the three architectures. While the pure Transformer and Mamba models have a similar convergence, the hybrid Jamba (no MoE) has a lower loss throughout this run.

Figure 6: Training loss curves for pure Attention, pure Mamba, and an Attention-Mamba hybrid (no MoE). All models are 7B parameters. The hybrid achieves better loss throughout this training run.

6.2 Why does the Combination Work?

The pure Mamba model showed fairly good results in most tasks early on, including in general perplexity evaluations. However, it performed substantially worse than the pure Attention model in three common benchmark tasks: IMDB [28], QuAC [5], and NarrativeQA [25]. In contrast, the hybrid Attention-Mamba performed similarly to the Attention model on these datasets. Table 6 shows the results for 1.3B models after 250B tokens.

                  IMDB   QuAC   NarrativeQA
Attention         84.1   27.9   45.8
Mamba             48.8   20.2   27.7
Attention-Mamba   90.9   26.6   43.7

Table 6: Mamba performs poorly on certain datasets, while the Attention-Mamba hybrid performs on par with the Attention model.

Looking into these results further, we found that the pure Mamba model often does not follow the correct format. For instance, in the IMDB dataset, answer choices are “Positive” or “Negative”. While the Attention model adheres to this format, the pure Mamba model often produces other answers, such as “Very Good”, “Very Positive”, “Funny”, “Bad”, “Poor”, and “3/10”. While these may be considered correct answers, the difficulty of Mamba to adhere to the format suggests a potential problem. Indeed, to perform successful in-context learning, it is important for models to capture the input-output format [30]. The hybrid Attention-Mamba model follows the format successfully, just like the pure Attention model.

We hypothesize that this phenomenon points to a limitation of SSMs – a potential difficulty in in-context learning (ICL). Indeed, the ability to perform ICL has been linked to the emergence of so-called induction heads in Transformer language models during training, which perform approximate copying operations that are supportive of ICL [31].
We conjecture that the lack of an attention mechanism in the pure Mamba model makes it difficult for it to learn in-context. While Mamba may learn to copy and perform simple ICL when explicitly trained to do so ([16, 32]), it is not clear if ICL is an emergent capability in SSMs, as is typical of Transformer models. In contrast, the hybrid Attention–Mamba model does perform successful ICL, even when only 1 out of 8 layers is an attention one.

As anecdotal evidence of an emergent induction mechanism, we visualize in Figure 7 the attention of an example head from a 1.3B Attention-Mamba hybrid model (no MoE), on an IMDB example where the pure Mamba failed and the hybrid succeeded. Clearly, the attention from the last token (“:”) is focused on the labels from the few-shot examples. We have found 12 such heads in our hybrid model, in all three attention layers (which correspond to layers 4, 12, 20 in the model).

Figure 7: Example induction head (H3, first attention layer) from a hybrid Attention-Mamba model. Highlighted words reflect strong attention from the last token, “:”, just before the model is about to predict the label. We see that the attention is focused on label tokens from the few-shot examples.

Future work can further investigate the emergence of ICL in hybrid models at large scale. Our released checkpoints would hopefully facilitate such investigations. Finally, very recent work has attempted to extract attention-like scores from state-space models like Mamba [1], which opens another direction to search for induction capabilities in state-space models.

6.3 The Effect of Mixture-of-Experts (MoE)

Recent work has shown that MoE improves Transformer language models while keeping compute manageable [23].⁵ However, it is not clear if MoE integrates well with state-space models at a large scale, and specifically with our hybrid Attention–Mamba architecture. Indeed, Table 7 shows that MoE improves the performance of the hybrid Attention-Mamba architecture at large scale (7B parameters trained on 50B tokens). The MoE variant has n = 16 total experts, K = 2 experts used at each token, and MoE applied every e = 2 layers, as described in Section 3.1.

                 OLLM   HellaSwag   WinoGrande   NQ     C4       Books    Code
Jamba (no MoE)   36.6   62.5        58.8         15.4   -0.547   -0.658   -0.340
Jamba+MoE        38.1   66.0        61.2         18.9   -0.534   -0.645   -0.326

Table 7: Mixture-of-experts improves the Attention-Mamba hybrid.

⁵There is also initial evidence that MoE helps Mamba layers, albeit at small model and data scale [34].

6.4 Stabilizing Mamba at large scale

When training Jamba models of up to 1.3B parameters, we observed stable training without special problems. However, when scaling to the largest model released here (7B-based, which has 12B/52B active/total parameters), we encountered large loss spikes. Investigating this revealed that inner parts of the Mamba layers suffer from large activation values, leading to the spikes. We therefore added RMSNorm [48] to internal activations. As Figure 8 shows, this stabilized training and prevented additional loss spikes.

Figure 8: Adding RMSNorm to Mamba layers prevents loss spikes.

6.5 Jamba does not Require Explicit Positional Information

Table 8 shows results of the Jamba architecture (with MoE) with no positional information and when applying RoPE [42] in the attention layers (1.3B parameter models, 250B tokens).
The results are similar, suggesting that explicit positional information may not be required for the hybrid architecture. Presumably, the Mamba layers, which are placed before attention layers, provide implicit position information.⁶

             OLLM   HellaSwag   WinoGrande   ARC-C   NarrativeQA   NQ     BoolQ   C4       Books    Code
Jamba        39.6   71.5        64.2         40.7    50.5          22.2   68.9    -0.516   -0.623   -0.299
Jamba+RoPE   40.1   71.8        65.5         40.4    46.2          22.2   67.9    -0.516   -0.623   -0.299

Table 8: Comparison of Jamba with and without explicit positional information.

7 Conclusion

We presented Jamba, a novel architecture which combines Attention and Mamba layers, with MoE modules, and an open implementation of it, reaching state-of-the-art performance and supporting long contexts. We showed how Jamba provides flexibility for balancing performance and memory requirements, while maintaining a high throughput. We experimented with several design choices such as the ratio of Attention-to-Mamba layers and discussed some discoveries made during the development process, which will inform future work on hybrid attention–state-space models. To facilitate such research, we plan to release model checkpoints from smaller-scale training runs. The largest model we provide with this release has 12B active and 52B total available parameters, supporting context lengths of up to 256K tokens and fitting in a single 80GB GPU even when processing 140K-token texts.

⁶Some prior evidence suggested that Transformer decoder models do not need positional encodings [19]. However, all existing large scale models do use some sort of explicit position information.

References

[1] Ameen Ali, Itamar Zimerman, and Lior Wolf. The hidden attention of Mamba models. arXiv preprint arXiv:2403.01590, 2024.

[2] Chenxin An, Shansan Gong, Ming Zhong, Mukai Li, Jun Zhang, Lingpeng Kong, and Xipeng Qiu. L-Eval: Instituting standardized evaluation for long context language models. arXiv preprint arXiv:2307.11088, 2023.

[3] Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. PIQA: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 7432–7439, 2020.

[4] Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.

[5] Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-tau Yih, Yejin Choi, Percy Liang, and Luke Zettlemoyer. QuAC: Question answering in context. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2174–2184, 2018.

[6] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. PaLM: Scaling language modeling with pathways. Journal of Machine Learning Research, 24(240):1–113, 2023.
[7] Aidan Clark, Diego de Las Casas, Aurelia Guy, Arthur Mensch, Michela Paganini, Jordan Hoffmann, Bogdan Damoc, Blake Hechtman, Trevor Cai, Sebastian Borgeaud, et al. Unified scaling laws for routed language models. In International Conference on Machine Learning, pages 4057–4086. PMLR, 2022.
[8] Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 2924–2936, 2019.
[9] Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot, Ashish Sabharwal, Carissa Schoenick, and Oyvind Tafjord. Think you have solved question answering? Try ARC, the AI2 reasoning challenge. arXiv preprint arXiv:1803.05457, 2018.
[10] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
[11] Hugging Face. Open LLM leaderboard. https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard, 2024.
[12] Yassir Fathullah, Chunyang Wu, Yuan Shangguan, Junteng Jia, Wenhan Xiong, Jay Mahadeokar, Chunxi Liu, Yangyang Shi, Ozlem Kalinli, Mike Seltzer, and Mark J. F. Gales. Multi-head state space model for speech recognition. In Proceedings of INTERSPEECH 2023, pages 241–245, 2023.
[13] William Fedus, Barret Zoph, and Noam Shazeer. Switch Transformers: Scaling to trillion parameter models with simple and efficient sparsity. Journal of Machine Learning Research, 23(120):1–39, 2022.
[14] Daniel Y Fu, Tri Dao, Khaled Kamal Saab, Armin W Thomas, Atri Rudra, and Christopher Re. Hungry Hungry Hippos: Towards language modeling with state space models. In The Eleventh International Conference on Learning Representations, 2022.
[15] Philip Gage. A new algorithm for data compression. The C Users Journal, 12(2):23–38, 1994.
[16] Albert Gu and Tri Dao. Mamba: Linear-time sequence modeling with selective state spaces. arXiv preprint arXiv:2312.00752, 2023.
[17] Albert Gu, Karan Goel, and Christopher Re. Efficiently modeling long sequences with structured state spaces. In International Conference on Learning Representations, 2021.
[18] Albert Gu, Isys Johnson, Karan Goel, Khaled Saab, Tri Dao, Atri Rudra, and Christopher Ré. Combining recurrent, convolutional, and continuous-time models with linear state space layers. Advances in Neural Information Processing Systems, 34:572–585, 2021.
[19] Adi Haviv, Ori Ram, Ofir Press, Peter Izsak, and Omer Levy. Transformer language models without positional encodings still learn positional information. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 1382–1390, 2022.
[20] Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. In International Conference on Learning Representations, 2020.
[21] Dan Hendrycks, Collin Burns, Anya Chen, and Spencer Ball. CUAD: An expert-annotated NLP dataset for legal contract review. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1), 2021.
[22] Albert Q Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lucile Saulnier, et al. Mistral 7B. arXiv preprint arXiv:2310.06825, 2023.
[23] Albert Q Jiang, Alexandre Sablayrolles, Antoine Roux, Arthur Mensch, Blanche Savary, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Emma Bou Hanna, Florian Bressand, et al. Mixtral of experts. arXiv preprint arXiv:2401.04088, 2024.
[24] Greg Kamradt. Needle in a haystack - pressure testing LLMs. https://github.com/gkamradt/LLMTest_NeedleInAHaystack/, 2023.
[25] Tomas Kocisky, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gabor Melis, and Edward Grefenstette. The NarrativeQA reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317–328, 2018.
[26] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, et al. Natural Questions: a benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466, 2019.
[27] Stephanie Lin, Jacob Hilton, and Owain Evans. TruthfulQA: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3214–3252, Dublin, Ireland, May 2022. Association for Computational Linguistics.
[28] Andrew Maas, Raymond E Daly, Peter T Pham, Dan Huang, Andrew Y Ng, and Christopher Potts. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 142–150, 2011.
[29] Sabrina J Mielke, Zaid Alyafeai, Elizabeth Salesky, Colin Raffel, Manan Dey, Matthias Gallé, Arun Raja, Chenglei Si, Wilson Y Lee, Benoît Sagot, et al. Between words and characters: A brief history of open-vocabulary modeling and tokenization in NLP. arXiv preprint arXiv:2112.10508, 2021.
[30] Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettlemoyer. Rethinking the role of demonstrations: What makes in-context learning work? In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11048–11064, 2022.
[31] Catherine Olsson, Nelson Elhage, Neel Nanda, Nicholas Joseph, Nova DasSarma, Tom Henighan, Ben Mann, Amanda Askell, Yuntao Bai, Anna Chen, et al. In-context learning and induction heads. arXiv preprint arXiv:2209.11895, 2022.
[32] Jongho Park, Jaeseung Park, Zheyang Xiong, Nayoung Lee, Jaewoong Cho, Samet Oymak, Kangwook Lee, and Dimitris Papailiopoulos. Can Mamba learn how to learn? A comparative study on in-context learning tasks. arXiv preprint arXiv:2402.04248, 2024.
[33] Jonathan Pilault, Mahan Fathi, Orhan Firat, Christopher Pal, Pierre-Luc Bacon, and Ross Goroshin. Block-state transformers. In Thirty-seventh Conference on Neural Information Processing Systems, 2023.
[34] Maciej Pióro, Kamil Ciebiera, Krystian Król, Jan Ludziejewski, and Sebastian Jaszczur. MoE-Mamba: Efficient selective state space models with mixture of experts. arXiv preprint arXiv:2401.04081, 2024.
[35] Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, and Christopher Ré. Hyena hierarchy: Towards larger convolutional language models. In International Conference on Machine Learning, pages 28043–28078. PMLR, 2023.
[36] Michael Poli, Jue Wang, Stefano Massaroli, Jeffrey Quesnelle, Ryan Carlow, Eric Nguyen, and Armin Thomas. StripedHyena: Moving beyond Transformers with hybrid signal processing models. https://github.com/togethercomputer/stripedhyena, 2023.
[37] Keisuke Sakaguchi, Ronan Le Bras, Chandra Bhagavatula, and Yejin Choi. WinoGrande: An adversarial Winograd schema challenge at scale. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, pages 8732–8740, 2020.
[38] George Saon, Ankit Gupta, and Xiaodong Cui. Diagonal state space augmented transformers for speech recognition. In ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. IEEE, 2023.
[39] Rico Sennrich, Barry Haddow, and Alexandra Birch. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715–1725, 2016.
[40] Noam Shazeer. GLU variants improve transformer. arXiv preprint arXiv:2002.05202, 2020.
[41] Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, and Jeff Dean. Outrageously large neural networks: The sparsely-gated mixture-of-experts layer. In International Conference on Learning Representations, 2017.
[42] Jianlin Su, Murtadha Ahmed, Yu Lu, Shengfeng Pan, Wen Bo, and Yunfeng Liu. RoFormer: Enhanced transformer with rotary position embedding. Neurocomputing, 568:127063, 2024.
[43] Mirac Suzgun, Nathan Scales, Nathanael Schärli, Sebastian Gehrmann, Yi Tay, Hyung Won Chung, Aakanksha Chowdhery, Quoc Le, Ed Chi, Denny Zhou, et al. Challenging BIG-Bench tasks and whether chain-of-thought can solve them. In Findings of the Association for Computational Linguistics: ACL 2023, pages 13003–13051, 2023.
[44] Gemma Team, Thomas Mesnard, Cassidy Hardin, Robert Dadashi, Surya Bhupatiraju, Shreya Pathak, Laurent Sifre, Morgane Rivière, Mihir Sanjay Kale, Juliette Love, et al. Gemma: Open models based on Gemini research and technology. arXiv preprint arXiv:2403.08295, 2024.
[45] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023.
[46] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in Neural Information Processing Systems, 30, 2017.
[47] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4791–4800, 2019.
[48] Biao Zhang and Rico Sennrich. Root mean square layer normalization. Advances in Neural Information Processing Systems, 32, 2019.
[49] Barret Zoph, Irwan Bello, Sameer Kumar, Nan Du, Yanping Huang, Jeff Dean, Noam Shazeer, and William Fedus. ST-MoE: Designing stable and transferable sparse expert models. arXiv preprint arXiv:2202.08906, 2022.
[50] Simiao Zuo, Xiaodong Liu, Jian Jiao, Denis Charles, Eren Manavoglu, Tuo Zhao, and Jianfeng Gao. Efficient long sequence modeling via state space augmented transformer. arXiv preprint arXiv:2212.08136, 2022.
Position Paper: What Can Large Language Models Tell Us about Time Series Analysis

Ming Jin*1 Yifan Zhang*2 Wei Chen*3 Kexin Zhang4 Yuxuan Liang†3 Bin Yang5 Jindong Wang6 Shirui Pan7 Qingsong Wen†8

Abstract

Time series analysis is essential for comprehending the complexities inherent in various real-world systems and applications. Although large language models (LLMs) have recently made significant strides, the development of artificial general intelligence (AGI) equipped with time series analysis capabilities remains in its nascent phase. Most existing time series models heavily rely on domain knowledge and extensive model tuning, predominantly focusing on prediction tasks. In this paper, we argue that current LLMs have the potential to revolutionize time series analysis, thereby promoting efficient decision-making and advancing towards a more universal form of time series analytical intelligence. Such advancement could unlock a wide range of possibilities, including modality switching and time series question answering. We encourage researchers and practitioners to recognize the potential of LLMs in advancing time series analysis and emphasize the need for trust in these related efforts. Furthermore, we detail the seamless integration of time series analysis with existing LLM technologies and outline promising avenues for future research.

Figure 1: Across a myriad of time series analytical domains, the integration of time series and LLMs demonstrates potential in solving complex real-world problems.

*Equal contribution. †Corresponding authors. 1Monash University. 2Chinese Academy of Sciences. 3The Hong Kong University of Science and Technology (Guangzhou). 4Zhejiang University. 5East China Normal University. 6Microsoft Research Asia. 7Griffith University. 8Squirrel AI. Correspondence to: Yuxuan Liang <yuxliang@outlook.com>, Qingsong Wen <qingsongedu@gmail.com>. Preliminary work.

1. Introduction

Time series, a fundamental data type for recording dynamic system variable changes, is widely applied across diverse disciplines and applications (Hamilton, 2020; Wen et al., 2022). Its analysis is instrumental in uncovering patterns and relationships over time, thus facilitating the understanding of complex real-world systems and supporting informed decision-making. Many real-world dynamic laws, such as financial market fluctuations (Tsay, 2005) and traffic patterns during peak hours (Alghamdi et al., 2019), are fundamentally encapsulated in time series data. In addressing practical scenarios, time series analysis employs methods ranging from traditional statistics (Fuller, 2009) to recent deep learning techniques (Gamboa, 2017; Wen et al., 2021). In the era of sensory artificial intelligence, these domain-specific models efficiently extract meaningful representations for prediction tasks like forecasting and classification. Despite such successes, a notable gap persists between mainstream time series research and the development of artificial general intelligence (AGI) (Bubeck et al., 2023) with time series capabilities to address various problems in a unified manner.

The recent emergence of large language models (LLMs), such as Llama (Touvron et al., 2023a;b) and GPT-4 (Achiam et al., 2023), has swept through and propelled advancements in various interdisciplinary fields (Zhao et al., 2023). Their outstanding zero-shot capabilities (Kojima et al., 2022), along with emerging reasoning and planning abilities (Wang et al., 2023a), have garnered increasing attention. However, their focus has primarily been on text sequences. The exploration of extending LLMs' capabilities to accommodate and process more data modalities, such as images (Zhang et al., 2023b) and graphs (Chen et al., 2023c), has begun to receive preliminary attention.

With the integration of LLMs, time series analysis is undergoing significant transformation (Jin et al., 2023b). Time series models are conventionally designed for specific tasks, depend heavily on prior domain knowledge and extensive model tuning, and lack assurances of effective updates and validations (Zhou et al., 2023a). Conversely, LLMs hold enormous potential not only to improve prediction performance (Jin et al., 2024) but also to support cross-disciplinary (Yan et al., 2023), interactive (Xue et al., 2023), and interpretative (Gu et al., 2023) analyses. By aligning time series and natural language, large language and specialistic time series models constitute a new technology paradigm, where the LLM is prompted with both time series and text-based instructions. In this paradigm, time series and textual information provide essential contexts, LLMs contribute internal knowledge and reasoning capabilities, and pre-trained time series models offer fundamental pattern recognition assurances. This novel integration is depicted in Figure 1, where the successful amalgamation of these components showcases the potential for a general-purpose, unified system in next-generation time series analysis.

Why This Position Paper? Given the remarkable capabilities emerging in recent research (Jin et al., 2023b), we believe that the field of time series analysis research is undergoing an exciting transformative moment. Our standpoint is that LLMs can serve as the central hub for understanding and advancing time series analysis. Specifically, we present key insights that LLMs can profoundly impact time series analysis in three fundamental ways: (1) as effective data and model enhancers, augmenting time series data and existing approaches with enhanced external knowledge and analytical prowess; (2) as superior predictors, utilizing their extensive internal knowledge and emerging reasoning abilities to benefit a range of prediction tasks; and (3) as next-generation agents, transcending conventional roles to actively engage in and transform time series analysis. We advocate attention to related research and efforts, moving towards more universal intelligent systems for general-purpose time series analysis. To this end, we thoroughly examine relevant literature, and present and discuss potential formulations of LLM-centric time series analysis to bridge the gap between the two. We also identify and outline prospective research opportunities and challenges, calling for greater commitment and exploration in this promising interdisciplinary field.

Contributions: The contributions of this work can be summarized in three aspects: (1) Offering new perspectives. We articulate our stance on LLM-centric time series analysis, outlining the potential synergies between LLMs and time series analytical models. This underscores the need for increased research focus and dedication in this area; (2) Systematic review and categorization. We meticulously examine existing preliminary work and present a clear roadmap, highlighting three potential integration forms of LLMs and time series analysis; (3) Identifying future opportunities. We explore and articulate areas that current research has not yet addressed, presenting promising directions for future investigations in this evolving interdisciplinary field.

2. Background

This section provides an overview of the fundamental concepts in time series analysis and large language models. Furthermore, it outlines a developmental roadmap for time series analytical models, tracing the progression from traditional statistical methods to advanced, next-generation LLM-centric approaches, thereby synthesizing the foundational principles of both fields.

2.1. Time Series Analysis

Data Modality. Time series data, comprising sequential observations over time, can be either regularly or irregularly sampled, with the latter often leading to missing values. This data falls into two main categories: univariate and multivariate. Univariate time series consist of single scalar observations over time, represented as $X = \{x_1, x_2, \cdots, x_T\} \in \mathbb{R}^{T}$. Multivariate time series, on the other hand, involve $N$-dimensional vector observations, denoted as $X \in \mathbb{R}^{N \times T}$. In complex real-world systems, multivariate time series often exhibit intricate spatial dependencies in addition to temporal factors. This has led to some recent studies modeling them as graphs (Jin et al., 2023a), also referred to as spatial time series. In this approach, a time series is conceptualized as a sequence of graph snapshots, $G = \{G_1, G_2, \cdots, G_T\}$, with each $G_t = (A_t, X_t)$ representing an attributed graph characterized by an adjacency matrix $A_t \in \mathbb{R}^{N \times N}$ and node features $X_t \in \mathbb{R}^{N \times D}$.

Analytical Tasks. Time series analytics is crucial for deriving insights from data, with recent deep learning advancements spurring a rise in neural network-based methods (Wen et al., 2023). These methods focus on modeling complex inter-temporal and/or inter-variable relationships in time series (Zhang et al., 2023c; Jin et al., 2023b), aiding in tasks like forecasting, classification, anomaly detection, and imputation. Forecasting predicts future values, classification categorizes series by patterns, anomaly detection identifies anomalous events, and imputation estimates missing data. Beyond these tasks, emerging research has shown promise in modality switching and question answering (Xue & Salim, 2023; Jin et al., 2024; Yang et al., 2022a). These novel approaches highlight the potential for cross-disciplinary, interactive, and interpretative advancements in time series analytics. Such advancements open a realm of possibilities in practical applications, such as (zero-shot) medical question answering (Yu et al., 2023a; Oh et al., 2023) and intelligent traffic agents (Da et al., 2023b; Lai et al., 2023).

Figure 2: A roadmap of time series analysis delineating four generations of models based on their task-solving capabilities.

2.2. Large Language Models

Basic Concept. Large language models typically refer to transformer-based pre-trained language models (PLMs) with billions or more parameters. The scaling of PLMs, both in terms of model and data size, has been found to enhance model performance across various downstream tasks (Zhao et al., 2023). These models, such as GPT-4 (Achiam et al., 2023), PaLM (Chowdhery et al., 2023), and Llama (Touvron et al., 2023a), undergo extensive pre-training on extensive text corpora, enabling them to acquire wide-ranging knowledge and problem-solving capabilities for diverse NLP tasks. Technically, language modeling (LM) is a fundamental pre-training task in LLMs and a key method for advancing machine language intelligence. The primary objective of LM is to model the probability of generating word sequences, encompassing both non-autoregressive and autoregressive language model categories. Autoregressive models, like the GPT series (Bubeck et al., 2023), predict the next token $y$ based on a given context sequence $X$, trained by maximizing the probability of the token sequence given the context: $\prod_{t=1}^{T} P(y_t \mid X, y_{<t})$.

Emergent abilities of LLMs include: (1) In-context learning, introduced by GPT-3 (Brown et al., 2020), allowing LLMs to generate relevant outputs for new instances using instructions and examples without additional training; (2) Instruction following, where LLMs, through instruction tuning, excel at novel tasks presented in an instructional format, enhancing their generalization (Sanh et al., 2021); (3) Step-by-step reasoning, where LLMs use strategies like chain-of-thought (CoT) (Wei et al., 2022b) or other prompting strategies (Yao et al., 2023; Besta et al., 2023) to address complex tasks requiring multiple reasoning steps.

2.3. Research Roadmap

Time series analytical model development spans four generations: (1) statistical models, (2) deep neural networks, (3) pre-trained models, and (4) LLM-centric models, as shown in Figure 2. This categorization hinges on the evolving task-solving capabilities of each model generation. Traditional analytics relied on statistical models like ARIMA (Shumway et al., 2017) and Holt-Winters (Kalekar et al., 2004), optimized for small-scale data and based on heuristics like stationarity and seasonality (Hamilton, 2020). These models assumed past trends would continue into the future. Deep neural networks, like recurrent and temporal convolution neural networks (Gamboa, 2017), processed larger, complex
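The data modalities defined in Section 2.1 can be sketched as minimal Python structures. All names, sizes, and toy values below are illustrative assumptions, not anything taken from the paper:

```python
# Minimal sketch of the three data modalities: univariate series,
# multivariate series, and spatial time series (graph snapshots).
# All names, sizes, and toy values are illustrative assumptions.
T, N, D = 4, 3, 2  # time steps, variables/nodes, node-feature dimension

# Univariate: X = {x_1, ..., x_T} in R^T
univariate = [0.5, 0.7, 0.6, 0.9]

# Multivariate: X in R^{N x T}, one N-dimensional observation per time step
multivariate = [
    [0.5, 0.7, 0.6, 0.9],  # variable 1
    [1.2, 1.1, 1.3, 1.0],  # variable 2
    [3.1, 3.0, 3.2, 3.3],  # variable 3
]

# Spatial time series: a sequence of graph snapshots G_t = (A_t, X_t),
# with adjacency A_t in R^{N x N} and node features X_t in R^{N x D}.
adjacency = [[0, 1, 0],
             [1, 0, 1],
             [0, 1, 0]]  # held fixed here for simplicity; A_t may vary with t

def snapshot(t):
    """Build G_t = (A_t, X_t) for time step t."""
    features = [[multivariate[i][t], float(t)] for i in range(N)]  # N x D
    return adjacency, features

G = [snapshot(t) for t in range(T)]  # G = {G_1, ..., G_T}
```

A graph-based model would consume one `(A_t, X_t)` pair per step; irregular sampling or missing values would appear as gaps in these lists, which is exactly what the imputation task estimates.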
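The autoregressive objective described in Section 2.2, maximizing the probability of a token sequence one next-token prediction at a time, can be illustrated with a toy model. The bigram probability table below is a hypothetical stand-in for a neural language model, not anything from the paper:

```python
import math

# Toy autoregressive "model": P(y_t | y_{t-1}) read from a fixed bigram table.
# The table and tokens are illustrative assumptions only.
bigram_probs = {
    ("<bos>", "the"): 1.0,
    ("the", "cat"): 0.5, ("the", "dog"): 0.5,
    ("cat", "sat"): 1.0, ("sat", "<eos>"): 1.0,
}

def sequence_log_prob(tokens):
    """log P(y_1..y_T) = sum over t of log P(y_t | y_{t-1})."""
    return sum(math.log(bigram_probs[pair])
               for pair in zip(tokens, tokens[1:]))

# P(the cat sat <eos> | <bos>) = 1.0 * 0.5 * 1.0 * 1.0 = 0.5
lp = sequence_log_prob(["<bos>", "the", "cat", "sat", "<eos>"])
print(round(math.exp(lp), 3))  # 0.5
```

Training maximizes this log-probability over a corpus; in an actual LLM the lookup table is replaced by a network conditioned on the full prefix $y_{<t}$ and, in the prompted setting the paper describes, on the context $X$ as well.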