VII. APPENDIX

A. Additional experiments

Here we present some additional experiments of our LEO method and the baselines on other cases of the in-distribution (ID) and out-of-distribution (OOD) CWE source code data categories. The experimental results in Table IV again show the effectiveness and superiority of our LEO method compared to the baselines for OOD source code data identification by a wide margin. Notably, in these additional cases, on average, our LEO method obtains significantly higher performance, improving by around 10.59%, 4.65%, and 6.72% on the FPR, AUROC, and AUPR measures, respectively, in comparison with the baseline approaches.

TABLE IV: The results of our LEO method and the baselines for the FPR (at TPR 95%), AUROC, and AUPR measures on the vulnerable source code samples of each OOD CWE category, paired with specific ID data. (The numbers in parentheses next to the LEO results represent the improvements of our LEO method over the second-best baselines.)

ID vs. OOD               Method             FPR (down)          AUROC (up)          AUPR (up)
CWE863 vs. CWE862        Standard DNN       77.42%              81.36%              57.53%
                         Outlier Exposure   77.42%              83.83%              62.19%
                         SSD                61.29%              75.95%              66.83%
                         VulDeePecker       77.42%              83.11%              55.39%
                         CodeBERT           76.67%              68.85%              48.24%
                         ReGVD              96.67%              66.95%              37.56%
                         LEO (Ours)         54.84% (↓6.45%)     84.31% (↑0.48%)     70.76% (↑3.93%)
CWE190 vs. CWE787        Standard DNN       85.14%              68.41%              54.78%
                         Outlier Exposure   83.36%              69.46%              56.76%
                         SSD                84.70%              71.02%              57.76%
                         VulDeePecker       85.74%              69.51%              56.58%
                         CodeBERT           93.01%              66.19%              47.95%
                         ReGVD              96.88%              49.28%              37.43%
                         LEO (Ours)         66.42% (↓16.94%)    80.39% (↑9.37%)     71.58% (↑13.82%)
CWE94 vs. CWE269         Standard DNN       83.33%              77.51%              63.73%
                         Outlier Exposure   83.33%              71.61%              56.74%
                         SSD                66.67%              79.70%              68.64%
                         VulDeePecker       80.00%              70.25%              57.17%
                         CodeBERT           89.89%              62.70%              45.01%
                         ReGVD              93.26%              63.49%              44.22%
                         LEO (Ours)         61.11% (↓5.56%)     79.88% (↑0.18%)     70.57% (↑1.93%)
CWE264 vs. CWE200        Standard DNN       80.49%              71.59%              64.71%
                         Outlier Exposure   83.71%              72.13%              65.96%
                         SSD                72.54%              77.04%              73.67%
                         VulDeePecker       78.79%              72.24%              67.30%
                         CodeBERT           90.32%              70.02%              59.78%
                         ReGVD              91.27%              57.50%              53.21%
                         LEO (Ours)         64.96% (↓7.58%)     81.25% (↑4.21%)     76.69% (↑3.02%)
CWE190+287 vs. CWE119    Standard DNN       83.44%              70.97%              57.45%
                         Outlier Exposure   83.68%              69.95%              56.81%
                         SSD                81.83%              72.19%              60.47%
                         VulDeePecker       82.94%              70.73%              56.96%
                         CodeBERT           87.75%              65.71%              51.50%
                         ReGVD              97.03%              48.27%              35.76%
                         LEO (Ours)         71.32% (↓10.51%)    75.76% (↑3.57%)     67.59% (↑7.12%)
CWE863+862+94 vs. CWE287 Standard DNN       62.12%              81.80%              47.14%
                         Outlier Exposure   59.09%              82.28%              46.47%
                         SSD                51.52%              86.61%              57.95%
                         VulDeePecker       54.55%              77.60%              48.09%
                         CodeBERT           86.15%              61.30%              18.89%
                         ReGVD              87.88%              51.36%              17.04%
                         LEO (Ours)         39.39% (↓12.13%)    90.64% (↑4.03%)     70.85% (↑12.9%)
CWE269+862+94 vs. CWE200 Standard DNN       85.23%              71.03%              64.04%
                         Outlier Exposure   85.80%              70.61%              63.69%
                         SSD                67.99%              81.90%              78.56%
                         VulDeePecker       85.61%              70.66%              63.13%
                         CodeBERT           82.35%              74.90%              67.11%
                         ReGVD              84.47%              64.56%              58.98%
                         LEO (Ours)         54.36% (↓13.63%)    84.71% (↑2.81%)     82.89% (↑4.33%)
2404.06856

Beyond Random Inputs: A Novel ML-Based Hardware Fuzzing

Mohamadreza Rostami‡∗, Marco Chilese‡∗, Shaza Zeitouni∗, Rahul Kande†, Jeyavijayan Rajendran†, Ahmad-Reza Sadeghi∗
∗Technical University of Darmstadt, †Texas A&M University
‡These authors contributed equally to this work.

Abstract—Modern computing systems heavily rely on hardware as the root of trust. However, their increasing complexity has given rise to security-critical vulnerabilities that cross-layer attacks can exploit. Traditional hardware vulnerability detection methods, such as random regression and formal verification, have limitations. Random regression, while scalable, is slow in exploring hardware, and formal verification techniques are often concerned with manual effort and state explosions.
Hardware fuzzing has emerged as an effective approach to exploring and detecting security vulnerabilities in large-scale designs like modern processors. Hardware fuzzers outperform traditional methods regarding coverage, scalability, and efficiency. However, state-of-the-art fuzzers struggle to achieve comprehensive coverage of intricate hardware designs within a practical timeframe, often falling short of a 70% coverage threshold. To address this challenge, we propose a novel ML-based hardware fuzzer, ChatFuzz. Our approach leverages large language models (LLMs) to understand processor language and generate data/control flow entangled yet random machine code sequences. Reinforcement learning (RL) is integrated to guide the input generation process by rewarding the inputs using code coverage metrics.
Utilizing the open-source RISC-V-based RocketCore and BOOM cores as our testbed, ChatFuzz achieves 75% condition coverage in RocketCore in just 52 minutes. This contrasts with state-of-the-art fuzzers, which demand a 30-hour timeframe for comparable condition coverage. Notably, our fuzzer can reach a 79.14% condition coverage rate in RocketCore by conducting approximately 199k test cases. In the case of BOOM, ChatFuzz accomplishes a remarkable 97.02% condition coverage in 49 minutes. Our analysis identified all bugs detected by TheHuzz, including two new bugs in the RocketCore and discrepancies from the RISC-V ISA Simulator.
I. INTRODUCTION

Traditional hardware verification techniques are crucial for ensuring the reliability and correctness of a hardware design, the design under test (DUT), before fabrication. Among these techniques, random regression and formal verification methods are commonly employed. Despite its capacity to accommodate extensive hardware designs, random regression presents a notable efficiency problem as it tends to slow down when exploring the intricacies of a hardware design. Consequently, it encounters difficulties uncovering vulnerabilities within hard-to-reach critical components [6]. On the other hand, formal verification, which aims to ascertain whether a DUT complies with specified/predefined properties [14], is often regarded as an efficient approach for verifying the correctness of hard-to-reach hardware components. However, formal techniques rely heavily on manual effort from domain experts to define the required properties, which can be error-prone and time-consuming. Furthermore, formal verification frequently results in state explosion, rendering it impractical to verify the entire DUT comprehensively [5]. Hardware fuzzing has emerged as a promising approach for not only broadening the exploration of the design space but also for revealing security vulnerabilities within intricate designs, including complex processors [3], [8], [9], [13]. To bolster their effectiveness, hardware fuzzers harness coverage data, such as branch conditions, statements, and multiplexers' control registers or signals, for generating test cases and probing diverse hardware behaviors [8]–[11]. Compared to traditional hardware verification techniques, hardware fuzzers have demonstrated broader coverage, enhanced scalability, and efficiency in identifying real-world vulnerabilities associated with privilege escalation and arbitrary code execution attacks [3], [8], [9]. Nonetheless, state-of-the-art fuzzers struggle to achieve comprehensive coverage of intricate hardware designs within a practical timeframe, often falling short of a 70% coverage threshold in complex hardware such as the RISC-V RocketCore processor [1].

Our Contributions. In this paper, we introduce ChatFuzz, the first processor fuzzer that leverages machine learning for input generation and improvement with the help of coverage metrics, addressing a critical challenge in the field of processor fuzzing, namely, generating interdependent, data/control flow entangled, yet random instructions.

Three-Step ML-Based Input Generation. We present a three-step training process, including unsupervised learning to understand machine language structures, reinforcement learning with a disassembler for valid instruction generation, and further reinforcement learning using RTL simulation as a reward agent to improve the coverage.

Significant Speed Enhancement. ChatFuzz demonstrably expedites enhancing condition coverage, attaining a coverage level of 74.96% within less than one hour. In contrast, the current leading hardware fuzzer, TheHuzz [9], requires a much longer period of roughly 30 hours to achieve the same coverage, i.e., ChatFuzz is 34.6× faster. In the case of BOOM, ChatFuzz accomplishes a remarkable 97.02% condition coverage in 49 minutes. It is worth noting that TheHuzz exhibits greater efficiency compared to random regression techniques and is approximately 3.33× swifter than DifuzzRTL [8].

Findings. During fuzzing, ChatFuzz detects approximately 6K mismatches and identifies more than 100 unique mismatches after automated analysis. These findings include all bugs that were detected by TheHuzz [9] and two new bugs, namely the cache coherency management issue (CWE-1202) and the execution tracing bug (CWE-440). Moreover, ChatFuzz exposes deviations in the behavior of the RocketCore compared to the specifications in the RISC-V ISA. This showcases ChatFuzz's efficiency in delving into the processor search space, thoroughly investigating even the most detailed corner cases specified in the RISC-V ISA specification.
II. BACKGROUND & RELATED WORK

A. Fuzzing

Fuzzing provisions a large number of inputs to the program under test to uncover faults, bugs, or vulnerabilities that traditional testing methods may miss [7]. The fuzzer may generate random, malformed, or unusual inputs to test how the program handles them. The initial set of test inputs, also known as seeds, can be automatically generated or manually crafted by verification engineers. During each fuzzing round, the fuzzer manipulates the best test inputs from the preceding round using mutation operations like bit/byte flipping, swapping, deleting, or cloning to generate new inputs. In recent years, fuzzing has gained significant attention from the hardware security community due to its numerous advantages over existing verification methods. In particular, fuzzing is highly automatable, cost-effective, scalable to real-world applications, and comprehensively covers the tested application. These factors have contributed to its growing popularity and adoption among researchers and practitioners in the field of software as well as hardware security [2], [8]–[10].

1) Processor Fuzzers: Traditional processor fuzzers such as DifuzzRTL [8] and TheHuzz [9] use code coverage and control register coverage as feedback to guide the mutation process. These fuzzers generate seeds through random generation of instructions and mutate the instructions in the current input to generate new inputs. Recent research also led to hybrid hardware fuzzers such as HyPFuzz [3] and PSOFuzz [4] that combine the capabilities of fuzzers with formal tools and optimization algorithms to improve the coverage achieved. However, these hybrid fuzzers also use the seed generation and mutation engines inherited from traditional processor fuzzers such as TheHuzz [9]. While the seed generator and mutation engine in these fuzzers can identify valid instructions from the ISA, they do not have well-defined feedback to determine a meaningful sequence of instructions that will lead to deep design regions.

B. Machine Learning

1) Reinforcement Learning (RL): RL is a branch of machine learning that studies how agents can learn from their actions and environment feedback to achieve a goal. RL differs from other forms of machine learning, such as supervised and unsupervised learning, in that the agent does not have access to labeled data or explicit rules but must discover the optimal behavior through trial and error. The agent's objective is to find a policy, a function that maps each state to an action, that maximizes the expected cumulative reward over time. This is achieved using various algorithms, such as policy-based or actor-critic methods. Proximal Policy Optimization (PPO) is a family of model-free RL algorithms. PPO updates policy parameters for higher expected rewards based on policy gradient methods. Unlike traditional policy gradient methods, PPO employs a clipped surrogate objective function to control policy updates and prevent large deviations from the previous policy, ensuring stability and efficiency. PPO algorithms have been successfully applied to various domains, e.g., natural language generation.

2) Large Language Models (LLMs): LLMs are large ML models for processing and generating natural language text. They leverage neural networks (NN), often using the transformer architecture, to learn from sequential data and capture long dependencies. LLMs can contain billions of parameters (weights), dictating how the model handles input and generates output. LLMs are trained using different learning paradigms, from self-supervised to reinforcement learning, which means that they do not require labeled data or explicit rules but learn from the patterns and structures inherent in the text corpus. LLMs can perform various natural language processing (NLP) tasks, such as recognition, summarization, translation, prediction, and generation. LLMs are general-purpose models that can adapt to different domains and applications with minimal fine-tuning or prompt engineering. In a concurrent work, LLMs have been utilized in software fuzzing [12]. The proposed method creates test cases, particularly for fuzzing compilers, by training a large language model on a task that relies on human-defined prompts to generate and modify test cases. In contrast, our approach does not rely on human interaction during training and is additionally steered by coverage metrics.
III. ChatFuzz

Utilizing recent advancements in LLMs, we propose ChatFuzz as an innovative approach for enhancing hardware security. ChatFuzz involves training LLMs using machine language (specifically, machine code) and employing the trained model to generate sequences of pseudo-random yet interconnected instructions for hardware fuzzing. Unlike existing methods, our approach prioritizes creating interdependent, data/control flow entangled instruction sequences.

ChatFuzz, illustrated in Figure 1a, comprises several components. The LLM-based Input Generator generates instruction sequences for fuzzing the targeted CPU. Details about this component are discussed in subsection III-A and subsection III-B. The RTL and ISA Simulators execute the given inputs on the targeted CPU and its golden model, respectively, while recording execution traces. For each test input, the RTL Simulator reports coverage information, which is utilized by the LLM-based Input Generator to optimize the input generation process. The Mismatch Detector compares execution traces to identify mismatches or potential bugs, which are manually inspected for confirmation, as elaborated in subsection III-C. In the following, we elucidate the fundamental components of our approach, encompassing A) the acquisition of a training dataset for instructing the LLM model, B) the training process of the LLM model to grasp machine language intricacies, and C) the execution of hardware fuzzing and bug detection procedures.
A. Machine Language Dataset

A major challenge in training LLMs is the need for an extensive training dataset. While collecting data for natural languages like English is relatively easy, it becomes much more complicated for machine languages. To explore this issue, we will investigate two key questions: How do we collect a machine language dataset? And how do we represent the machine language dataset for LLMs?

1) Training Data Collection: We have two options for collecting a machine language dataset. i) Dynamic data collection: recording instructions as a program runs is convenient; however, it faces challenges that disrupt data inter-dependency, such as context switches and kernel-related instructions. Rarely executed code sections may also be missing due to conditional constraints, affecting data completeness and interdependence. These issues are more pronounced when collecting data from complex programs, e.g., the Linux kernel, where repetitive instructions and infrequent execution of critical sections add complexity to data collection and interdependence. ii) Static data collection directly gathers training data from fixed sources such as compiled code, avoiding dynamic execution complexities such as context switches and kernel-related instructions. This approach keeps the collected data isolated from the interference of concurrent OS tasks, preserving intrinsic data relationships as coded in the source. Static data collection also comprehensively captures all code segments, including rarely executed blocks, without relying on their activation during program runtime. In this work, we opted for static data collection as it effectively overcomes the challenges posed by instruction inter-dependency and code block rarity encountered in dynamic data collection.

2) Training Data Representation: This step is challenging due to several factors, including the presence of metadata (such as headers and linking information) within machine code resulting from program compilation. Metadata can introduce complexity and ambiguity, potentially hindering the LLM model's ability to learn the language effectively and maintain the meaningfulness and interdependence of the training dataset. To address this challenge, as illustrated in subsection III-B, we disassemble the binaries generated from program compilation and automatically identify the start and end locations of functions within the disassembled files. We then include the machine code of each function as an individual entry in the training dataset designed for our LLM model. This approach ensures that each function, being associated with a distinct responsibility and meaning, contributes to the creation of a training set characterized by a high degree of inter-dependency among the instructions and their sequencing.
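For illustration, the following minimal Python sketch shows one way such per-function extraction could be implemented over objdump-style disassembly. It is not taken from the paper: the toolchain binary name, the file paths, and the regular expressions are assumptions made for the example.

import re
import subprocess
from collections import OrderedDict

FUNC_HEADER = re.compile(r"^[0-9a-f]+ <(?P<name>[^>]+)>:$")
INSN_LINE = re.compile(r"^\s*[0-9a-f]+:\s+(?P<enc>[0-9a-f ]+?)\s+\S")

def functions_from_binary(path):
    # Disassemble `path` and return {function_name: [hex instruction encodings]}.
    # Each value is the instruction sequence of one function, i.e., one
    # candidate training sample for the language model.
    out = subprocess.run(["riscv64-unknown-elf-objdump", "-d", path],
                         capture_output=True, text=True, check=True).stdout
    funcs, current = OrderedDict(), None
    for line in out.splitlines():
        header = FUNC_HEADER.match(line)
        if header:
            current = header.group("name")
            funcs[current] = []
            continue
        insn = INSN_LINE.match(line)
        if insn and current is not None:
            funcs[current].append(insn.group("enc").strip())
    return funcs

# Example usage: emit each function as one whitespace-separated line of machine code.
# for name, encodings in functions_from_binary("vmlinux").items():
#     print(" ".join(encodings))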
B. Training of LLM Model

Figure 1b depicts the ML subsystem. Our approach involves a structured three-step pipeline, each phase dedicated to training the language model, advancing steadily towards our ultimate objective.

[Figure 1: (a) Overview of ChatFuzz: the LLM-based Input Generator (tokenizer, GPT-2 model, and PPO-based reward computation fed by the Coverage Calculator) produces test inputs for the DUT; the ISA and RTL Simulators produce Trace.ISA and Trace.DUT, which the Mismatch Detector compares for manual bug detection. (b) ChatFuzz's training steps.]

Fig. 1: ChatFuzz's final model results from three consecutive training steps: (1) unsupervised training based on the GPT-2 model to learn the inner structure of the machine language; (2) utilizing a disassembler as a scoring agent during PPO-based RL training, the initial model is refined by cleaning up the learned language and removing bad combinations of instructions; (3) improving the coverage with a PPO-based RL process where the refined generator is trained through a reward function based on coverage information attained through RTL simulation.

1) Initial Training: In this step, the model is initialized and trained using the collected dataset. This step aims to learn the language utilized by the CPU. For this purpose, we train a tokenizer on the full ISA. The tokenizer then prepares the inputs for the model as shown in Figure 1b(1).

2) Model Language Cleanup: Once the initial training is completed, the model can commit numerous errors in text generation (e.g., wrong/illegal combinations of instructions). Therefore, a refinement phase is crucial. Hence, at this stage of the pipeline, Figure 1b(2), our goal is to clean up the generations of the trained model, enforcing the correct instruction associations to minimize the number of ineffective generations. For this purpose, we designed an RL process that leverages the ISA disassembler (cf. subsection III-A) as a reward agent. We avoid using, as commonly done, a probabilistic scorer, such as a neural network, for the rewarding task, to prevent uncertainty and reduce errors. Employing a deterministic reward agent, we can provide the model with more precise guidance during the training, leading to better optimization policies and more precise model updates. This step helps avoid unnecessary CPU simulation of bad/malformed data and thus improves the overall performance of our fuzzer.

3) Model Optimization: Finally, we aim to improve the training of our LLM to achieve our goal, which is generating sequences of pseudo-random yet interconnected instructions that lead to better CPU coverage. To do so, we employed another RL-based training step, utilizing a deterministic reward agent similar to the previous step. In this case, the reward function embeds the scores provided as fuzzing-loop feedback, comprising hardware coverage information collected during the simulation of the generated data on the targeted CPU, as shown in Figure 1b(3). We performed the previous steps for the RISC-V ISA. However, it is worth noting that the approach described above is generalizable to any CPU architecture.
C. Hardware Fuzzing and Bug Detection

After training the LLM model (cf. subsection III-B), we initiate the fuzzing loop. As delineated in Figure 1a, the LLM model generates a batch of test inputs, where each entry represents a list of instructions. These entries are then executed on the golden model and the targeted CPU using the ISA and RTL Simulators, respectively. The resulting two execution traces of each entry are analyzed by the Mismatch Detector to identify discrepancies between the traces, which are documented for subsequent manual inspection as part of the bug detection process. Additionally, the RTL Simulator reports hardware coverage metrics to the Coverage Calculator, which computes stand-alone, overall, and incremental coverage values for each entry as described in section IV. These values are then used to score each entry generated by the LLM model, leading to a precise evaluation of the entries and guiding the LLM model to generate further inputs that have the potential to enhance the coverage.

IV. IMPLEMENTATION

In this section, we provide details on the implementation of the ChatFuzz components. We deployed Synopsys VCS and the Spike simulator for the RTL and RISC-V ISA (i.e., golden model) simulations, respectively. Additionally, we developed custom components for Mismatch Detection and Coverage Calculation.

A. Mismatch Detection

This component uses differential testing to flag potential vulnerabilities in the targeted CPU. It compares the architectural state changes between the targeted CPU and its golden model when both run the same input and compiles a report with uniquely identified discrepancies, thus effectively reducing the manual workload for verification engineers. This is particularly advantageous when multiple instances of the same bug generate numerous mismatches. Further, verification engineers can add filters to the Mismatch Detector in the form of architectural state values, which allows filtering out most of the false-positive mismatches and accelerates vulnerability detection.
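As an illustration of the differential-testing idea, the sketch below compares two execution traces and de-duplicates repeated discrepancies. The trace format (program counter, destination register, value) is an assumption made for the example; the actual architectural state compared by ChatFuzz is richer and is parsed from the simulator logs.

from collections import Counter

def find_mismatches(isa_trace, rtl_trace):
    # Differentially compare two traces, assumed to be lists of
    # (pc, destination_register, value) tuples.
    mismatches = []
    for step, (expected, observed) in enumerate(zip(isa_trace, rtl_trace)):
        if expected != observed:
            mismatches.append((step, expected, observed))
    # Collapse repeated instances of the same discrepancy (e.g., one bug that
    # triggers on many inputs) into unique signatures for manual inspection.
    unique = Counter((exp[1:], obs[1:]) for _, exp, obs in mismatches)
    return mismatches, unique

isa = [(0x80000000, "x5", 0x10), (0x80000004, "x0", 0x0)]
rtl = [(0x80000000, "x5", 0x10), (0x80000004, "x0", 0xdeadbeef)]
raw, uniq = find_mismatches(isa, rtl)
print(len(raw), "mismatches,", len(uniq), "unique")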
B. Coverage Calculation

This component is responsible for receiving the coverage reports from the RTL simulator, i.e., Synopsys VCS in our implementation. Subsequently, the coverage reports undergo parsing, facilitating the calculation of three key values for each coverage metric: stand-alone coverage, incremental coverage, and total coverage. Stand-alone coverage indicates the number of coverage points attained by the input under consideration. Incremental coverage gauges the quantity of newly achieved coverage points by the current input compared to the total coverage points recorded in the previous batch. Meanwhile, total coverage encapsulates the cumulative tally of coverage points attained thus far, incorporating the contributions of all inputs generated by the LLM model. These values are deployed in the calculation of scores assigned to each test input generated by the LLM-based input generator, thereby facilitating a comprehensive evaluation of the generated inputs, i.e., test inputs, with respect to their coverage effectiveness.
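A minimal sketch of such a calculator is shown below, assuming each test input has already been reduced to a set of covered condition-coverage point identifiers; the parsing of the Synopsys VCS report into such sets is omitted.

class CoverageCalculator:
    # Track stand-alone, incremental, and total coverage across test inputs.
    def __init__(self):
        self.total_points = set()      # everything covered so far
        self.previous_batch = set()    # totals at the end of the last batch

    def score(self, covered_points):
        standalone = len(covered_points)
        incremental = len(covered_points - self.previous_batch)
        self.total_points |= covered_points
        total = len(self.total_points)
        return standalone, incremental, total

    def end_batch(self):
        self.previous_batch = set(self.total_points)

calc = CoverageCalculator()
print(calc.score({"c1", "c2"}))   # (2, 2, 2)
calc.end_batch()
print(calc.score({"c2", "c3"}))   # (2, 1, 3)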
C. LLM-based Input Generation

The ML part of ChatFuzz was fully implemented in Python with the use of the frameworks PyTorch (www.pytorch.org) and Huggingface (www.huggingface.co). The use of Huggingface is considered the standard for NLP-related tasks. Specifically, we leveraged its implementations of the tokenizer, the large language model (more precisely, of the GPT-2 family), and the PPO algorithm for the RL pipeline. All the experiments were conducted on a high-performance server. In the following, we describe the main steps designed for achieving our goal, principally depicted in Figure 1b.

1) Initial Training: The initial step in designing an NLP pipeline is defining the dictionary and its corresponding tokenizer. The tokenizer translates words (i.e., instructions) into tokens by encoding input text into an array of dictionary word indices, always serving as an intermediary step between the dataset and the language model. Decoding, on the other hand, translates an array of tokens back into text (i.e., a sequence of instructions). Next, we trained the selected model to understand the inner workings of the machine language, including grammar and instruction relationships. During the training, the model receives an input fragment of valid test vectors from our collected dataset, comprising roughly 500K test vectors extracted by compiling the Linux kernel, and learns how to complete it.
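The sketch below illustrates this initial training step with the Huggingface tokenizers and transformers libraries, treating every instruction word as one token of a newly trained vocabulary and running a single next-token-prediction update on a GPT-2-style model. The file name, model sizes, and hyperparameters are illustrative assumptions, not the paper's settings.

import torch
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import WordLevelTrainer
from transformers import GPT2Config, GPT2LMHeadModel

# 1) Train a word-level tokenizer where every token is one machine instruction.
tokenizer = Tokenizer(WordLevel(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()
trainer = WordLevelTrainer(special_tokens=["[UNK]", "[PAD]"])
tokenizer.train(files=["machine_code_functions.txt"], trainer=trainer)

# 2) Initialize a small GPT-2-style causal language model over that vocabulary.
config = GPT2Config(vocab_size=tokenizer.get_vocab_size(), n_positions=256,
                    n_layer=6, n_head=8, n_embd=256)
model = GPT2LMHeadModel(config)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)

# 3) One illustrative optimization step: next-token prediction on a sample sequence.
ids = tokenizer.encode("1141 e406 e022 0800").ids
batch = torch.tensor([ids])
loss = model(input_ids=batch, labels=batch).loss
loss.backward()
optimizer.step()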
2) Model Language Cleanup: After the initial training, the model is able to utilize the CPU's language. However, having the full ISA available as a dictionary, the model will easily commit errors, generating illegal associations of instructions that a disassembler can easily detect. To overcome this limitation, which would significantly impact the quality of the end generations, we decided to perform training through PPO-based RL, where the scoring agent is the RISC-V disassembler. The reward function is designed in such a way that correct generations are incentivized, and generations with illegal instructions are penalized in proportion to the number of invalid instructions present in the generated test vector:

f(GenText_i) = N_i − 5 · Invalid_i    (1)

where N_i is the number of instructions generated at time i for GenText_i, and Invalid_i is the number of invalid instructions present in GenText_i. For the training, we utilized a dataset of 51.2K samples extracted from the larger main dataset. For each sample, we randomly selected the initial 2 to 5 instructions as input for the LLM. The model then completes the test vectors using its learned logic. The training consists of 30 epochs. We monitored the PPO algorithm's loss, the Kullback-Leibler divergence between optimization policies, and the mean rewards assigned at each step to assess the training progress.
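A direct Python transcription of Eq. (1) is shown below; the disassemble callback standing in for the RISC-V disassembler is hypothetical and only sketches the validity check.

def reward(gen_text: str, disassemble) -> int:
    # f(GenText_i) = N_i - 5 * Invalid_i, with N_i the number of generated
    # instructions and Invalid_i the number the disassembler rejects.
    instructions = gen_text.split()
    n_total = len(instructions)
    n_invalid = sum(1 for insn in instructions if not disassemble(insn))
    return n_total - 5 * n_invalid

# Toy check with a stand-in validity predicate (accepts 4- or 8-hex-digit words).
is_valid = lambda w: len(w) in (4, 8) and all(c in "0123456789abcdef" for c in w)
print(reward("1141 e406 zzzz 0800", is_valid))  # 4 - 5*1 = -1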
3) Model Optimization: Once the model went through two steps of training and the number of errors in the generations was sensibly reduced, we proceeded with the final training, where we wanted to carefully drive the model towards the exploration of the targeted CPU (i.e., increasing the reference coverage) through a PPO-based RL process. In this case, the reward function, based on the values reported by the Coverage Calculator, takes into account the overall knowledge of the architecture until the i-th step, the incremental coverage (i.e., whether there was an improvement), and the stand-alone coverage (i.e., the coverage of the i-th sample). In practice, the reward function guides the search direction toward generations that increase the coverage by giving a bonus and penalizing (i.e., assigning a negative reward) those that do not produce any improvement. This reward function ultimately pushes the model to explore more in the direction of interesting generations. Moreover, analogously to the previous step, this training takes place with the same strategy. We utilize the same sampled dataset of 51.2K samples as input. In this case, the training is designed to last at most 15 epochs, during which the values reported by the coverage calculator are used for the reward computation.
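The paper does not give the exact formula for this coverage-based reward, so the sketch below only illustrates the stated behavior: a bonus when incremental coverage is positive and a penalty otherwise, modulated by stand-alone and total coverage. All constants are assumptions.

def coverage_reward(standalone: int, incremental: int, total: int,
                    bonus: float = 1.0, penalty: float = -1.0) -> float:
    # No new coverage points: assign a negative reward.
    if incremental <= 0:
        return penalty
    # New coverage: bonus scaled by how much the sample covers on its own and
    # by the overall coverage knowledge accumulated so far (weights arbitrary).
    return bonus + 0.01 * standalone + 0.0001 * total

print(coverage_reward(standalone=120, incremental=3, total=5000))  # 2.7, improvement
print(coverage_reward(standalone=40, incremental=0, total=5000))   # -1.0, no progress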
V. EVALUATION

We used ten instances of Synopsys VCS as a simulator and measured the effectiveness of our solution using the condition coverage metric provided by Synopsys VCS. It is imperative that this feedback captures new hardware behavior and functionalities during fuzzing. Condition coverage aligns with this goal, correlating the satisfaction of hardware design conditions with realizing new functional behaviors. An exemplary instance is fulfilling conditions leading to privilege-level transitions, such as shifting from the user to the supervisory level. We have chosen the widely utilized RISC-V RocketCore and BOOM processors, renowned as preeminent open-source processors within the RISC-V ecosystem. In evaluating the RISC-V processors, we employed the Chipyard simulation environment, which facilitates the assessment of diverse processors and ensures a uniform testing arena. Each experiment was executed over 24 hours and repeated three times to underscore the robustness and consistency of our findings.

A. Design Coverage

Our analysis revealed that both ChatFuzz and TheHuzz incur similar runtime overhead. Nevertheless, when considering an equivalent number of generated tests (1.8K) with the same number of instructions, ChatFuzz achieved a condition coverage of 74.96%, while TheHuzz reached 67.4%. Remarkably, TheHuzz required around 30 hours to reach a 75% coverage rate, i.e., ChatFuzz achieved the same amount of coverage 34.6× faster. Ultimately, ChatFuzz achieved a condition coverage rate of 79.14% by generating 199k test cases, while TheHuzz [9] attained a condition coverage rate of 76.7% for the same number of test cases. Furthermore, ChatFuzz accomplishes a remarkable 97.02% condition coverage in 49 minutes while running experiments on the BOOM processor. Figure 2 provides a visual representation of the condition coverage for ChatFuzz and TheHuzz during 24 hours of RocketCore fuzzing.

[Figure 2: percentage of condition coverage points covered (roughly 50% to 85%) over time in hours for TheHuzz and ChatFuzz on RocketCore.]

Fig. 2: Coverage analysis of TheHuzz [9] and ChatFuzz over time for RocketCore.

B. Findings

In the initial stage of our mismatch detection process, ChatFuzz effectively identified 5,866 instances of disparities within the execution traces originating from the RISC-V ISA simulator and the RocketCore. Subsequently, these identified mismatches underwent a secondary filtration process, separating more than 100 unique mismatches. This filtration process was executed in an automated fashion. Following this, we embarked on a detailed manual analysis of these unique mismatches, the summaries of which are presented below.

1) Bug 1: According to the RISC-V specification [15], when there are modifications made to the instruction memory, it is imperative for the software to manage cache coherency through the utilization of the FENCE.I instruction. Neglecting this cache coherency management can lead to unforeseeable consequences, wherein processors may rely on outdated data and execute instructions incorrectly. During testing with an input program generated by our fuzzing tool that modified the instruction memory but did not incorporate the FENCE.I instruction, an inconsistency was identified in the trace logs of the RocketCore processor and Spike. This disparity could have been prevented if the RISC-V specification or the RocketCore processor could detect violations of cache coherency at the hardware level. This bug has the potential to introduce cache coherency problems in software executed on the RocketCore processor, which might go unnoticed if the FENCE.I instruction is misused, ultimately resulting in a memory and storage vulnerability identified as CWE-1202.

2) Bug 2: The RISC-V specification consists of arithmetic instructions such as multiply and divide [15] that compute a value using the operand registers and update the result in the destination register. The RocketCore processor and the ISA simulator behave accordingly when executing the multiply and divide instructions. However, the tracer module in RocketCore does not output the write to the destination register in RocketCore's trace output, resulting in Bug 2. This bug may not have security consequences as it is present in the debug components of RocketCore. However, bugs like this can mask other security vulnerabilities that could otherwise be detected with the correct trace output information (CWE-440).

3) Other Findings: In conjunction with its capacity for vulnerability detection, our tool has brought to light compelling disparities between the target processor and Spike. While these disparities do not signify security vulnerabilities, they highlight the tool's capabilities in comprehensively examining the target processor. These discrepancies represent exceptional cases within the RISC-V specification and highlight the effectiveness of our approach in exploring the DUT search space. This is achieved by generating interdependent and data/control flow entangled instructions, as opposed to the conventional use of random instructions employed by state-of-the-art hardware fuzzers. We elucidate the three most significant ones below.

Finding 1. In line with the RISC-V specification [15], when an instruction triggers multiple synchronous exceptions, the higher-priority exception is logged in the mcause register. The priority hierarchy established in the RISC-V privilege specification places the Load/store/AMO address misaligned exception above the Load/store/AMO access fault exception. In our fuzz testing using ChatFuzz, two test cases emerged. In the first, both Load access fault and Load address misaligned exceptions were simultaneously raised. In contrast, the second test case triggered both Store access fault and Store address misaligned exceptions concurrently. Notably, Spike responded with the Load/Store address misaligned exception, while RocketCore issued the Load/Store access fault exception.

Finding 2. In another example, ChatFuzz generated a pair of atomic instructions, such as AMOOR.D, in which it employed R0 as a temporary location for loading data from memory, designated as rd. Interestingly, our tool observed that this atomic instruction appeared to function as expected, with R0 receiving data, a behavior seemingly at odds with the RISC-V specification [15]. Upon further investigation, we realized that this behavior represents a corner case within the RISC-V specification [15]. It is conceivable that developers, in pursuit of optimization, could implement the AMOOR.D operation within the memory controller. Consequently, if a user specifies R0 as the destination register (rd) for this instruction, the memory controller may perform the atomic operation as intended.

Finding 3. Another notable scenario relates to the behavior of the RocketCore processor, particularly in its treatment of the R0 register, compared to the Spike ISA simulator. According to the RISC-V ISA specifications, the R0 register is expected to maintain a constant value of zero, implying immunity to write operations. However, our analysis unveiled that, in the execution traces generated by the RocketCore, there are occurrences of attempted writes to the R0 register within specific sequences of instructions. It is important to note that this discrepancy is solely observed in the output traces and does not affect the functionality of RocketCore.
VI. CONCLUSION

We introduced ChatFuzz, a novel hardware fuzzer that utilizes large language models to learn machine language and generate complex, interdependent, data/control flow entangled, and pseudo-random test cases. Our approach significantly improves condition coverage, reaching 74.96% in less than an hour, compared to the 30 hours required by leading hardware fuzzers, i.e., ChatFuzz achieved the same amount of coverage 34.6× faster. Also, in the case of BOOM, ChatFuzz accomplishes a remarkable 97.02% condition coverage in 49 minutes. ChatFuzz has successfully identified more than 100 unique mismatches, revealed two novel bugs, and exposed deviations in RocketCore behavior compared to the golden model, even in intricate corner cases specified in the RISC-V ISA specification. These results highlight ChatFuzz's effectiveness in exploring processor vulnerabilities, offering a faster and more comprehensive approach to hardware security and testing.

ACKNOWLEDGEMENT

Our research work was partially funded by Intel's Scalable Assurance Program, Deutsche Forschungsgemeinschaft (DFG) – SFB 1119 – 236615297, the European Union under the Horizon Europe Programme – Grant Agreement 101070537 – CrossCon, the European Research Council under the ERC Programme – Grant 101055025 – HYDRANOS, and the US Office of Naval Research (ONR Award #N00014-18-1-2058). This work does not in any way constitute an Intel endorsement of a product or supplier. Any opinions, findings, conclusions, or recommendations expressed herein are those of the authors and do not necessarily reflect those of Intel, the European Union, the European Research Council, or the US Government.

REFERENCES

[1] Asanović et al. The Rocket Chip Generator. (UCB/EECS-2016-17), 2016.
[2] Canakci et al. DirectFuzz: Automated test generation for RTL designs using directed graybox fuzzing. In Design Automation Conference (DAC). IEEE Computer Society, 2021.
[3] Chen et al. HyPFuzz: Formal-assisted processor fuzzing. arXiv preprint arXiv:2304.02485, 2023.
[4] Chen et al. PSOFuzz: Fuzzing processors with particle swarm optimization. arXiv preprint arXiv:2307.14480, 2023.
[5] Clarke et al. Progress on the State Explosion Problem in Model Checking. Informatics, 2001.
[6] Dessouky et al. HardFails: Insights into Software-Exploitable Hardware Bugs. In USENIX Security Symposium. USENIX Association, 2019.
[7] Fioraldi et al. AFL++: Combining incremental steps of fuzzing research. In USENIX Workshop on Offensive Technologies (WOOT). USENIX Association, 2020.
[8] Hur et al. DifuzzRTL: Differential fuzz testing to find CPU bugs. In IEEE Symposium on Security and Privacy (S&P). IEEE Computer Society, 2021.
[9] Kande et al. TheHuzz: Instruction fuzzing of processors using Golden-Reference models for finding Software-Exploitable vulnerabilities. In USENIX Security Symposium. USENIX Association, 2022.
[10] Laeufer et al. RFUZZ: Coverage-directed fuzz testing of RTL on FPGAs. In IEEE/ACM International Conference on Computer-Aided Design (ICCAD). IEEE Computer Society, 2018.
[11] Trippel et al. Fuzzing hardware like software. In USENIX Security Symposium. USENIX Association, 2022.
[12] Xia et al. Universal fuzzing via large language models. arXiv preprint arXiv:2308.04748, 2023.
[13] Xu et al. MorFuzz: Fuzzing processor via runtime instruction morphing enhanced synchronizable co-simulation. In USENIX Security Symposium. USENIX Association, 2023.
[14] Yuji Kukimoto. Introduction to Formal Verification, 2011.
[15] RISC-V. The RISC-V Instruction Set Manual Volume I: Unprivileged ISA, 2019.
2404.07548

DeVAIC: A Tool for Security Assessment of AI-generated Code

Domenico Cotroneo, Roberta De Luca∗, Pietro Liguori
University of Naples Federico II, Naples, 80125, Italy

Abstract

Context: AI code generators are revolutionizing code writing and software development, but their training on large datasets, including potentially untrusted source code, raises security concerns. Furthermore, these generators can produce incomplete code snippets that are challenging to evaluate using current solutions.
Objective: This research work introduces DeVAIC (Detection of Vulnerabilities in AI-generated Code), a tool to evaluate the security of AI-generated Python code, which overcomes the challenge of examining incomplete code.
Method: We followed a methodological approach that involved gathering vulnerable samples, extracting implementation patterns, and creating regular expressions to develop the proposed tool. The implementation of DeVAIC includes a set of detection rules based on regular expressions that cover 35 Common Weakness Enumerations (CWEs) falling under the OWASP Top 10 vulnerability categories.
Results: We utilized four popular AI models to generate Python code, which we then used as a foundation to evaluate the effectiveness of our tool. DeVAIC demonstrated a statistically significant difference in its ability to detect security vulnerabilities compared to the state-of-the-art solutions, showing an F1 Score and Accuracy of 94% while maintaining a low computational cost of 0.14 seconds per code snippet, on average.
Conclusions: The proposed tool provides a lightweight and efficient solution for vulnerability detection even on incomplete code.

Keywords: Static code analysis, Vulnerability detection, AI-code generators, Python

∗Corresponding author. Email addresses: cotroneo@unina.it (Domenico Cotroneo), roberta.deluca2@unina.it (Roberta De Luca), pietro.liguori@unina.it (Pietro Liguori)
Preprint submitted to Information and Software Technology, September 4, 2024.

1. Introduction

We live in an era where AI-code generators are revolutionizing the process of writing code and software development. AI-powered solutions like GitHub Copilot [25], OpenAI ChatGPT [70], Google Gemini [30], and Microsoft Copilot [60] have shown their ability to translate into programming code what users request with natural language (NL) descriptions (e.g., English language).

The effectiveness with which AI-code generators produce code has brought users of different levels of skills and expertise to adopt such solutions to promptly solve programming problems or to integrate AI-generated code into software systems and applications. The other side of the coin is that their widespread usage is out of any quality control, leading to a question about preserving the security of the software development process: can we trust the AI-generated code? Ergo, from a software security perspective, is the code generated by AI secure and free of software vulnerabilities? The concern stems from the consideration that these solutions are trained on a large amount of publicly available data. For instance, GitHub Copilot utilizes training data that consists of an extensive amount of source code, with billions of lines collected from publicly accessible sources, including code from GitHub's public repositories [28]. Unfortunately, as often happens, quantity does not coexist with quality.
In fact, the huge amount of data used to train the AI-code generators may include deprecated functions and libraries, which may lead to the exploitation of vulnerabilities when adopted, or it could intentionally have buggy or insecure code used to poison the code generators during the training phase [36, 44, 35, 33]. This issue is not unexpected at all. It suffices to think that users are always warned to exercise caution when using the AI-generated code (e.g., “The users of Copilot are responsible for ensuring the security and quality of their code” [27], “Use the code with caution” [7], “This code is generated by artificial intelligence. Review and use carefully” [8]). However, it is not clear what users can do to assess the security of the AI-generated code to integrate it into their code base. Manual analysis, the go-to method for security experts, becomes unfeasible due to the volume and 2rate deployment of AI-generated code [52, 51]. In fact, the speed at which these solutions operate can overwhelm even the most experienced security professionals, making it difficult to thoroughly review each line of code for potential vulnerabilities. Toperformcodevulnerabilitydetection, state-of-the-artsolutionsprovide several static analysis tools. Such tools usually parse the Abstract Syntax Tree (AST) of the code [54, 83, 85, 21], a hierarchical representation of its structure, and require complete programs to check whether the code contains software vulnerabilities. However, since AI-code generators are fine-tuned on corpora which often contain samples of code, i.e., code snippets [69, 68, 50, 9], they often do not produce complete programs [51, 110], making infeasible the application of these tools in this context. Adifferentsolutionisrepresentedbytoolsimplementingpattern-matching approachestodetectsoftwarevulnerabilities[56,78,105]. Thesetoolsrequire users to set up ad-hoc configuration files to specify the vulnerabilities they wish to detect via matching patterns [65, 66], hereby requiring a non-trivial manual effort and limiting their adoption in practice. Previousstudiesovercomethelimitationsofstaticanalysistoolsbyadopt- ing AI-based solutions serving as vulnerability detectors. These fully au- tomated solutions do not require any human effort and can analyze in- complete programs, but they may be susceptible to a high rate of false
alarms [10, 11, 47, 49]. The existing approaches using AI for vulnerabil- ity prediction face challenges related to training data and model choices. Furthermore, it has been shown that AI models struggle to understand the complexity of code structures and identify security issues [20, 1, 14]. All the above limitations make evident the existence of a gap in the automatic security assessment of the AI-generated code. Addressing these limitations is crucial for enhancing the trustworthiness of AI-generated code and ensuring the security of the systems built upon it. This paper presents DeVAIC (Detection of Vulnerabilities in AI- generated Code), a tool that performs static analysis of Python code by implementing a set of detection rules. More precisely, the tool uses a set of regular expressions that cover 35 Common Weakness Enumeration (CWE), a community-developed list of software and hardware weakness types, to detect vulnerabilities in Python code, one of the most used programming languages [24, 58, 99]. The tool does not require the completeness of the code, making it suitable to detect and classify vulnerabilities in AI-generated code snippets, nor to create ad-hoc detection rules, hence overcoming the 3limitations of state-of-the-art solutions. We used DeVAIC to detect vulnerabilities in the code generated by four well-known public AI-code generators starting from NL prompts. Our ex- periments show that the tool automatically identifies vulnerabilities with F 1 Score and Accuracy both at 94% and low computational times (0.14 sec- onds for code snippet, on average). Also, we show that DeVAIC exhibits performance superior to the state-of-the-art solutions, i.e. CodeQL [21], Bandit [83], PyT [85] and Semgrep [90], and the models ChatGPT-3.5 [70], ChatGPT-4, and Claude-3.5-Sonnet [2], that are widely used to perform vul- nerability detection of the code [79, 18, 4, 38]. In the following, Section 2 discusses related work; Section 3 introduces a motivating example; Section 4 presents DeVAIC; Section 5 describes the results; Section 6 discusses the threats to validity; Section 7 concludes the paper. 2. Related Work Static analysis is one of the most commonly used techniques to detect vulnerabilities in the code [15, 45, 77, 31]. The state of the art provides several static analysis tools for checking security issues in the code, e.g., Python-specific tools like Bandit [83] and PyT [85], or multilanguage ones such as Semgrep [90] and CodeQL [21], which are widely used to detect vulnerabilities within the code [79, 18, 4, 38, 91, 37, 81, 53, 5, 29, 13]. Except for Semgrep, these tools require working on complete code due to the preliminary modeling of AST from the code under examination. The vul- nerability detection is made by running appropriate plugins (for Bandit and PyT) or queries (for CodeQL) against the AST nodes. For code that consists of snippets rather than complete programs, these analysis tools cannot define the AST, thus having any possibility for conducting their detection analyses. Semgrep [90] is a static analysis tool that uses a pattern-matching approach and that does not require the AST modeling of code before running the de- tectionrules. Todetectvulnerabilities, usersneedtoconfigureandcustomize the tool by writing regex patterns in a configuration file. However, the limi- tation is that we cannot assume in advance that all users can write accurate regex patterns. 
Moreover, this approach could polarize vulnerability hunt- ing, focusing on those the user believes to find, potentially overlooking others that are effectively present [65, 66]. For this reason, as often happens, this 4type of solution offers a set of rules publicly available for the scanning of the code under examination [96]. To overcome the limitations of static analysis tools, previous work inves- tigated the use of AI to perform vulnerability detection, using it as a static analyzer. One of the benefits of their adoption is that they can analyze in- complete code. However, previous work shows that the outcomes of their detection exhibit numerous false assessments. For instance, Chen et al. [11] released a new vulnerable source code dataset to assess state-of-the-art deep learning methods in detecting vulnera- bilities. Theauthorsshowthelimitedperformanceofthesesolutionsinterms of high false positive rates, low F Scores, and difficulty in detecting hard 1 CWEs. Ullah et al. [104] used 17 prompt engineering techniques to test 8 different LLMs in vulnerability detection. Among the 228 code scenarios em- ployed in the experiments, the LLMs frequently mistakenly identify patched examples as vulnerable, causing an elevated number of false positives. Simi- larly, Purba et al. [82] encountered a high rate of false positives when using language models like GPT-3.5, CodeGen, and GPT-4 to classify a snippet of code (i.e., a segment of code) as vulnerable or not vulnerable. Fang et al. [20] crafted a dataset that pairs real-world code with obfuscated versions, which they used as input for large language models (LLMs) to evaluate their ability to analyze input code samples and test if LLMs can be employed for defensive analysis tasks. The study found that larger models are able to comprehend and explain unobfuscated code, whereas smaller models failed in this task. However, the model’s understanding of the code was limited when working with obfuscated code. Al-Hawawreh et al. [1] and Cheshkov et al. [14] evaluated the performance of the ChatGPT for detecting vulnerabil- ities in code, finding that ChatGPT’s results still needed careful monitoring. Khoury et al. [39] explored how safe is the code generated by ChatGPT,
using this model for both code generation and assessment. Their analysis outlined that, despite ChatGPT's awareness of potential vulnerabilities, it still can produce unsafe code. Instead of using AI-based solutions, Sandoval et al. [94] [93] assessed the security of AI-generated code by performing a manual code analysis. Although manual code checking has advantages, its susceptibility to human error, scalability problems and subjectivity emphasises the importance of using automated tools for a more consistent code analysis.

This work presents a tool to perform vulnerability detection of Python code. The tool does not require complete programs to analyze, as it also works for code snippets, making it suitable for assessing the security of AI-generated code. Similar to Semgrep, the proposed tool implements a pattern-matching approach while employing fewer rules and achieving better results in terms of Recall, F1 Score, and Accuracy. Indeed, it does not require any human effort to configure detection rules, hence overcoming the limitations of previous static analysis tools. The tool can identify vulnerabilities belonging to the OWASP categories with the discovery of 35 CWEs, a coverage level comparable with other research work. For instance, Hajipour et al. [32] consider 13 CWEs for the assessment of the security risks posed by code language models through an approximation of the inversion of the black-box models using few-shot prompting; Pearce et al. [79], moreover, refer to the MITRE's Top 25 CWE list to assess the code generated by Copilot. CodeQL provides queries to assess the security of Python code with a coverage of 44 CWEs [26]. Finally, the tool is able to detect vulnerabilities in the AI-generated Python code with a low rate of false positives and negatives, overcoming the performance of other state-of-the-art tools used as static code analyzers, at a low computational cost.

3. Motivating Example

AI models for code generation have shown significant advancements, largely attributed to their intuitive, user-friendly interfaces. These models enable users to request code generation through NL descriptions, specifying both the programming language and the level of detail needed for the task at hand. For illustrative purposes, consider the following example:

Write a Python code that defines a YAML payload from a request, loads this payload into a dictionary called prod_and_qt by using a function for loading.

Listing 1: Example of NL prompt.

Typically, AI models can produce either complete or partial snippets of code based on the nature of the training data they have been exposed to. A concrete instance of AI-generated code for the aforementioned prompt might look like this:

def load_yaml(yaml_payload):
    prod_and_qt = yaml.load(yaml_payload)
    return prod_and_qt

yaml_payload = """
product:
  name: iPhone
  quantity: 10
"""

prod_and_qt = load_yaml(yaml_payload)
print(prod_and_qt)

Listing 2: Example of code generated by AI models.

The code generated by the model is what is requested in the NL prompt. However, the code is vulnerable due to the yaml.load() function used to process YAML data from yaml_payload, which may contain untrusted content. Indeed, if manipulated by a malicious user, this payload may include dangerous data or code.
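As a preview of the approach described in Section 4, a vulnerability of this kind can be flagged with a regular-expression rule applied directly to the snippet text, without building an AST. The rule below is a hypothetical illustration in the spirit of DeVAIC's rules, not one of the rules actually shipped with the tool.

import re

# Flag yaml.load() calls that do not pass a safe Loader (maps to CWE-502).
UNSAFE_YAML_LOAD = re.compile(r"\byaml\.load\s*\((?![^)]*SafeLoader)")

def scan(snippet: str):
    findings = []
    for lineno, line in enumerate(snippet.splitlines(), start=1):
        if UNSAFE_YAML_LOAD.search(line):
            findings.append((lineno, "CWE-502 / Software and Data Integrity Failures"))
    return findings

snippet = "def load_yaml(yaml_payload):\n    return yaml.load(yaml_payload)\n"
print(scan(snippet))  # [(2, 'CWE-502 / Software and Data Integrity Failures')]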
The official PyYAML documentation [86] advises against using the yaml.load() function because it interprets the YAML pay- load as Python code, potentially allowing the execution of malicious instruc- tions if the code lacks proper validation, including calls to the os.system library, which can execute any command on the system. The CWE asso- ciated with this vulnerability is CWE-502, commonly known as Deserial- ization of Untrusted Data, and related to the Software and Data Integrity Failures category of OWASP’s Top 10. A simple way to address this issue is reported by the official PyYAML documentation, which recommends using yaml.safe load()toreadYAMLfromunreliableanduntrustedsources[86]. The yaml.safe load() function is designed to limit the types of objects that can be loaded to standard ones (e.g., dictionaries, lists, strings, numbers, etc.), thus avoiding the execution of arbitrary code. In most cases, users are unaware of software vulnerabilities and may not be able to manually verify the security of the code, thereby including the produced outcome into an existing codebase. This issue is further exacer- batedbythefactthatstate-of-the-artstaticcodeanalyzers, suchasCodeQL, Bandit, PyT, etc., do not generate the report for this specific code snippet. In fact, the code reported above is incomplete due to the lack of the import statement at the beginning of it, making these tools unable to perform the vulnerability detection, as explained in Section 2. This code characteristic 7does not consent to the modeling of the AST as it lacks the necessary de- pendency (i.e., import yaml) for the invoked API (i.e., the yaml.load() function). Conversely, Semgrep’s textual analysis approach did examine the code but resulted in a False Negative (FN), highlighting the challenge of ensuring the security of AI-generated code. The generation of incomplete code is a well-known problem with AI code generators. Recent works are currently addressing the issue by using prompt engineering techniques. For example, in [101] authors used prompt- engineering techniques to stimulate the model to generate secure code in Python, but the model occasionally produced incomplete code. To overcome this issue, they employed an iterative code-generation process by concatenat- ing the prompts with the incomplete output generated by the models until they obtained a complete code. In [48], the authors explicitly requested the model to generate complete code by using precise system prompts that explicitly require the completeness of the code. Moreover, they employed multiple iterations to guarantee the correctness of the generated code. Al-
though prompt engineering can be an effective way to reduce the incomplete- ness in the code generated by the models (e.g., by using explicit requests or performing multiple iterations), we believe that, in a typical scenario, users would request models to generate code without being aware of the potential to produce incomplete code. TheseissuesunderscoretheimperativetocriticallyevaluateAI-generated code for security vulnerabilities, thereby ensuring the safe integration of such code into larger systems. 4. DeVAIC Workflow To overcome the issues described in Section 3, we present DeVAIC, a tool to detect vulnerabilities in Python code. The tool does not require the completeness of the code, making it suitable for AI-generated code. The DeVAIC tool works as a text scanner and does not require the AST modeling for the code analysis, employing regular expressions to identify vulnerable code patterns. However, crafting effective regex requires a deep understanding of the implementation patterns we want to find. To this aim, we extensively reviewed the literature to collect datasets of unsafe Python code with the associated information of the implemented 8Figure 1: The DeVAIC workflow. CWE1,extracting onlythesnippets thatimplementthemost recentandcrit- ical weaknesses identified in OWASP’s Top 10 Vulnerabilities2 report. Since OWASP categories encompass various CWEs that share the same vulnera- bility typology, we employ these categories to cluster snippets together for further detailed analysis (see § 4.1). Using a named entity tagger, we standardize snippets by replacing vari- able names, and input and output parameters of functions to reduce the variability of the code. Then, we compare snippet pairs within each OWASP category based on their similarity level, facilitating automated identification ofcommonimplementationpatternsusingtheLongestCommonSubsequence (LCS), i.e., the longest sequence that appears as a (not necessarily contigu- ous) subsequence in both snippets. By using the identified patterns, we infer 1The Common Weakness Enumeration (CWE) is a list of common software and hard- ware weakness types published annually by the Massachusetts Institute of Technology Research and Engineering® (MITRE) Corporation. 2The Open Web Application Security Project® (OWASP) draws up the Top 10 Ap- plication Security List every four years to define the 10 most prevalent web application vulnerabilities. 9regex-based detection rules able to identify vulnerabilities with similar pat- terns and assess their corresponding vulnerability types (see § 4.2). Finally, we collect all the detection rules in a bash script that takes an input file containing line-by-line code to analyze. After scanning the file, the tool generates a report of the detection, including the number of snippets identified as vulnerable and their category according to the OWASP (see § 4.3). Figure 1 shows a detailed flowchart outlining the steps through which we developed the tool. In the rest of this Section, we detail the steps of the workflow. 4.1. Vulnerable Code Collection In our pursuit to collect vulnerable snippets, we performed a compre- hensive study of the literature. State-of-the-art provides numerous corpora containing vulnerable code, some aimed at poisoning models to induce them to generate vulnerabilities, while others serve to evaluate AI models as vul- nerability detectors [16, 11]. 
Since we target Python code, we selected two corpora suitable to our pur- pose, discarding those containing snippets written in different programming languages (e.g., DiverseVuln [11], Big-Vul [107, 19], SARD [67]): 1. SecurityEval [95]: It is a dataset built to evaluate the security of code generation models and to compare the performance of some state-of- the-art static analysis tools, such as Bandit and CodeQL [97]. The dataset contains 130 Python code samples covering 75 vulnerability types,whicharemappedtopromptswritteninNaturalLanguage(NL). 2. Copilot CWE Scenarios Dataset [80]: It encompasses the source code utilized to craft NL prompts for generating code with Large Language Models (LLMs) [98, 102]. It covers code associated with 89 distinct vulnerable scenarios. To develop detection rules that address the most recent and critical weak- nesses, we extracted from these corpora only the vulnerable code snippets that implement the CWEs present in the most recent OWASP Top 10 Vul- nerabilities report from 2021 [71, 34]. Overall, we selected 240 vulnerable snippets that implement 35 CWEs related to 9 OWASP categories3. Among 3MITRE itself is the authority responsible for establishing the correlation between CWEs and the OWASP Top 10 10Table 1: List of 35 selected CWEs. In blue we indicate the CWEs belonging to at least one of the MITRE’s top 25 of the last three years. We use “-” to indicate the absence of a specific CWE in the relative top 40. Rank Rank Rank OWASP 2021 CWE MITRE MITRE MITRE 2021 2022 2023 CWE-022 8 8 8 CWE-377 - - - Broken Access Control CWE-425 - - - CWE-601 37 35 32 CWE-319 35 39 - CWE-321 - - - CWE-326 - - - CWE-327 - - - Cryptographic Failures CWE-329 - - - CWE-330 - - - CWE-347 - - - CWE-759 - - - CWE-760 - - - CWE-295 26 26 34 Identification and Authentication Failures CWE-384 - - - CWE-020 4 4 6 CWE-078 5 6 5 CWE-079 2 2 2 CWE-080 - - - CWE-090 - - - Injection CWE-094 28 25 23 CWE-095 - - - CWE-096 - - - CWE-099 - - - CWE-113 - - - CWE-116 - - - CWE-643 - - - CWE-1236 - - - CWE-209 - - - Insecure Design CWE-269 29 29 22 CWE-434 10 10 10 Security Logging and Monitoring Failures (SLMF) CWE-117 - - - Security Misconfiguration CWE-611 23 24 28 Server-Side Request Forgery (SSRF) CWE-918 24 21 19
Software and Data Integrity Failures (SDIF) CWE-502 13 12 15 the selected CWEs, 10 of them can be found in at least one of the MITRE’s Top25listsfromthepastthreeyears[61,62,63]. Theuseof35CWEsiscon- sistent with other detection setups used in previous research studies [32, 79]. Table 1 shows the selected CWEs with their OWASP category. 114.2. Detection Rules Implementation To define detection rules, we search for common patterns among the se- lected code snippets based on the assumption that similar vulnerabilities manifest in similar ways [100, 34, 56]. Pattern-matching approaches such as Longest Common Subsequence (LCS), N-gram and others, have been widely used to identify the same family malware [56, 105, 78, 109]. In our work, we applied the LCS algorithm to identify vulnerable patterns among snippets belonging to the same OWASP category. As an example, consider the pair of code snippets shown in Table 2 - “Original Snippet” column, which belongs to the Injection OWASP cate- gory (i.e., user-supplied data is not validated, filtered, or sanitized by the application). Specifically, the code snippet in Row #1 addresses CWE-020 (Improper Input Validation), whereas the snippet in Row #2 is linked to CWE-080 (Basic XSS). Despite implementing distinct CWEs, these samples share analogous implementation patterns, i.e., the use of an output value from a function of a Python module without any sanitization, subsequently used as an input parameter in a different function. Table2: Examplesofstandardizationforcodesnippets. Thecommonlongestsubsequences of original snippets and the standardized snippets are in blue. Original Snippet Standardized Snippet def temperature for location(): file yaml = def temperature for location(): S1 request.args.get("file", None) var0 = request.args.get(var1, json payload = var2) var3 = yaml.load(var0) yaml.load(file yaml) name = request.args.get(‘name’, var0 = request.args.get(var1, ‘’) response = S2 var2) var3 = make response(name) return make response(var0) return var3 response Therefore, to search for similar code snippets, we first check the similar- ity for each pair of snippets grouped in the same OWASP category. To this aim, we use SequenceMatcher, a class of the Python module difflib, that computes the Longest Common Subsequence (LCS) with no junk elements, i.e., tokens that are not considered in the matching (e.g., the newline char- acter \n), among different code snippets. First, we compute the similarity ratio between two code snippets, ranging from 0 (total mismatching) to 1 12(perfect matching). Based on empirical observations, we focus exclusively on pairs exhibiting a similarity ratio exceeding 50% [87, 88, 89]. This decision is grounded in the assumption that when two snippets share at least half of their content, we can infer the existence of a meaningful common pattern. Then, we find common patterns among snippets that meet the similarity threshold within each OWASP category by computing the LCS [55, 56]. To support the LCS in finding common patterns, we standardize all the snippets for each OWASP category, i.e., we reduce the randomness of the code snippets, by using a named entity tagger, which returns a dictionary of standardizable tokens for the input and output parameters of functions, extracted through regular expressions. We replace the selected tokens in every intent with “var#”, where # denotes a number from 0 to |l|, and |l| is the number of tokens to standardize. 
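As an illustration of this step, the sketch below applies difflib.SequenceMatcher to the two snippets of Table 2 (identifiers are written with the underscores that the table rendering drops); the simple token substitution only approximates the named entity tagger described above and is not the tagger itself:

    from difflib import SequenceMatcher

    s1 = ('def temperature_for_location():\n'
          '    file_yaml = request.args.get("file", None)\n'
          '    json_payload = yaml.load(file_yaml)')
    s2 = ("name = request.args.get('name', '')\n"
          "response = make_response(name)\n"
          "return response")

    def standardize(snippet, tokens):
        # Replace input/output parameters with var0, var1, ... to reduce variability.
        for i, tok in enumerate(tokens):
            snippet = snippet.replace(tok, f"var{i}")
        return snippet

    # Tokens that a named entity tagger could extract (hand-picked here).
    std1 = standardize(s1, ['"file"', 'None', 'file_yaml', 'json_payload'])
    std2 = standardize(s2, ["'name'", "''", 'name'])

    is_junk = lambda ch: ch == '\n'   # newlines are treated as junk elements
    print(f"similarity before standardization: {SequenceMatcher(is_junk, s1, s2).ratio():.2f}")
    print(f"similarity after standardization:  {SequenceMatcher(is_junk, std1, std2).ratio():.2f}")

After standardization, both snippets contain the identical subsequence var2 = request.args.get(var0, var1) (the numbering differs from Table 2 only because of the substitution order used here), which is the kind of common pattern the LCS step then extracts.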
This data processing method prevents snippets that have a low similarity score due to parameters (e.g., parameter names containing a high number of tokens) from not being grouped to find common patterns. Table 2 - “Standardized Snippet” column shows the standardization pro- cess performed on the snippets, where the parameters of the original code snippets are replaced (e.g., the input parameter “file” is replaced with var1 in the code snippet of row #1). Due to the standardization, the similarity ratio increases from ∼ 37% (obtained on the original snippets) to ∼ 63%. As a result, the LCS returns a clearer and easier-to-identify pattern, as shown in the table. ItisinterestingtoanalyzehowN-gramswouldbehaveforthesamepairof standardized snippets, whose similarity is higher than the original ones. To the best of our knowledge, N-grams can be used with two granularities, i.e., character N-grams and word (token) N-grams [12, 40, 42, 57], and the typical valuesofNforthecommonpatternextractionare4or6[40,42]. Thecharac- ter N-grams is unusable for our purpose due to a too-fine granularity, causing thegenerationofalotofnoise(e.g.,referringtoTable2-“StandardizedSnip- pet” column, we had the common patterns [’var0’, ’ar0 ’, ’r0 =’, ’0 = ’, ’ = r’, ’= re’, ’req’, etc.] for character 4-grams, and [’var0 = ’, ’ar0 = r’, ’r0 = re’, ’0 = req’, etc.] for character 6-grams). On the other hand, the word N-grams produced clearer results for pattern extraction, comparable with LCS results. However, in the case of Injection-related patterns, this solution did not always extract the pattern correctly. Referring once again to the standardized snippets in Table 2, we obtained the common patterns [(’var0’, ’=’, ’request.args.get(var1,’, ’var2)’), (’=’, ’request.args.get(var1,’, 13’var2)’, ’var3’), (’request.args.get(var1,’, ’var2)’, ’var3’, ’=’] for word
4-grams, [(’var0’, ’=’, ’request.args.get(var1,’, ’var2)’, ’var3’, ’=’)] for word 6-grams. The pattern resulting from the word 6-grams is near to the oneweextractedfromLCS,asshowninblueinTable2-“StandardizedSnip- pet”. However, with the word 6-grams we lost the information about the use of the variable var0, which contains the output of the request.args.get() function. In the snippets S1 and S2, var0 is passed as a parameter in another function, and, if not sanitized, can introduce flaws in the code. The use of var0 as a parameter is intercepted by LCS (i.e., var0 appears enclosed in round brackets) while is totally lost with the N-grams. For these reasons, we adoptedthewidelyusedLCSalgorithm[46,106,41]toextractthevulnerable patterns for the regex creation. Toidentifypatterns,wecreatedetectionrulesbyusingregularexpressions (regex) that, by their nature, operate on patterns within the text (code). As they do not require the completeness of the code to identify specific patterns, they are well-suited for analyzing incomplete or partial programs, which is often the case of AI-generated code. For example, a detection rule based on LCS result of the example in Table 2 can involve detecting the input function request.args.get(), whose output (i.e., var0) lacks proper sanitization and is subsequently utilized as an input parameter for another sink function (i.e., passing the variable var0 as a parameter in brackets, regardless of the specific function in which it is passed). Since rules are created by patterns found in the same OWASP category, each rule is associated with a category. Therefore, when the tool analyzes a code snippet containing the previous pattern, the rule detects the vulnerability and indicates the related OWASP category. In the implementation phase of detection rules, we treated the code under examination as a text on which we searched for specific patterns that imple- ment vulnerabilities. Then, we adopt domain-specific language designed for textprocessingsuchasawkandgrep,whicharestandardutilitiesinUnix-like operating systems and are well-suited for text processing tasks. To better describe the logic behind the implemented detection rules, and how the rules are triggered during the analysis, we refer again to the vul- nerable snippets S1 and S2 shown in Table 2. As the first step, we consider these snippets as textual strings by transforming them into single-line code, where the new lines in the code are replaced with the “\n” symbol. This transformation facilitates the use of text-processing tools. Using the LCS method along with standardized snippets, we can identify 14the use of the unsafe function request.args.get() with two input parame- ters and one output parameter. The request.args.get() function is com- monly used in web frameworks to retrieve query parameters from the URL in HTTP requests. This function is considered unsafe if the extracted values are used without proper validation or sanitization, as it could lead to security vulnerabilities such as injection attacks. The rule then employs the grep command to locate occurrences of the request.args.get()function. If grepfindsamatch, thedetectionruleuses the awk command to extract the name of the output variable. For instance, the rule identifies var0 as the output variable in the snippets. The next critical step is to determine if var0, the output of request.args.get(), is subsequently passed as an input to another function. 
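The actual rules are written with grep and awk as described above; purely as an illustration, the Python sketch below mirrors the same two-step logic (locate the unsafe source and capture its output variable, then check whether that variable is reused as a parameter of another call) on the single-line form of snippet S1:

    import re

    def rule_unsanitized_request_arg(single_line_snippet):
        # Step 1: locate the unsafe source and capture its output variable.
        source = re.search(r"(\w+)\s*=\s*request\.args\.get\(", single_line_snippet)
        if not source:
            return False
        out_var = source.group(1)
        # Step 2: is the captured variable later passed, unsanitized, to another call?
        sink = re.search(r"\w+\s*\([^)]*\b" + re.escape(out_var) + r"\b[^)]*\)",
                         single_line_snippet[source.end():])
        return sink is not None

    s1_single_line = ('def temperature_for_location():\\n '
                      'file_yaml = request.args.get("file", None)\\n '
                      'json_payload = yaml.load(file_yaml)')
    print(rule_unsanitized_request_arg(s1_single_line))  # True: file_yaml flows into yaml.load()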
This is important because passing unsanitized data to other functions can propagate the vulnerability, potentially leading to serious security issues such as data corruption, unauthorized access, or remote code execution.

To identify such instances, the rule employs the grep command to search for var0 directly used as a parameter in another function call, such as yaml.load(var0) or make_response(var0). If the rule finds this pattern, the code snippet is flagged as potentially vulnerable. In fact, this pattern does not follow any well-known good practice for input validation, such as the use of escaping or encoding functions to prevent attacks like Injection or Path Traversal, as outlined in the OWASP Cheat Sheet Series [75, 74, 73, 72].

Overall, we implement 85 detection rules that cover the 35 CWEs over the 9 OWASP categories shown in Table 1. Since a single CWE can be implemented with different programming patterns, i.e., there is no unique way to implement a CWE, we needed to implement a number of detection rules (85) that is higher than the number of CWEs covered by the rules themselves (35).

4.3. Tool Execution

As DeVAIC uses standard features of most Unix-like operating systems, it is highly portable. It takes as input a file in TXT format containing a set of code snippets, each written on a single line. The lines of a multi-line code snippet (e.g., a function) are separated by the literal "\n" sequence.

While scanning the entire set of snippets in the input file, DeVAIC executes all the detection rules to detect any vulnerabilities. If the tool identifies a vulnerability through a rule, it continues the execution of all the detection rules, since the same snippet may implement different vulnerabilities, even belonging to different OWASP categories. At the end of the execution, the tool outputs a file containing a summary report of the detection results. The report includes:
• The number of code snippets analyzed by the tool;
• The number/percentage of code snippets identified as safe, i.e., that do not contain any vulnerability, and the number/percentage of code snippets identified as unsafe, along with the classification of the vulnerabilities according to OWASP Top 10;
• The overall execution time on the entire dataset of snippets and the average execution time per single snippet.
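The sketch below, built around a placeholder rule set, shows how such a scan and summary report could be organized; the real tool is a bash script built on grep and awk rather than the Python used here for illustration:

    import time
    from collections import Counter

    # Placeholder rules: (OWASP category, predicate on the single-line snippet).
    RULES = [
        ("Injection", lambda code: "eval(" in code),
        ("Software and Data Integrity Failures", lambda code: "yaml.load(" in code),
    ]

    def scan_file(path):
        start = time.time()
        categories, unsafe = Counter(), 0
        with open(path, encoding="utf-8") as fh:
            snippets = [line.rstrip("\n") for line in fh if line.strip()]
        for snippet in snippets:
            # All rules run on every snippet: one snippet may fall into several categories.
            hits = [cat for cat, rule in RULES if rule(snippet)]
            if hits:
                unsafe += 1
                categories.update(hits)
        elapsed = time.time() - start
        total = max(len(snippets), 1)
        print(f"analyzed snippets: {len(snippets)}")
        print(f"unsafe snippets  : {unsafe} ({unsafe / total:.0%})")
        print(f"OWASP categories : {dict(categories)}")
        print(f"total time {elapsed:.2f}s, average per snippet {elapsed / total:.3f}s")

    # scan_file("snippets.txt")  # one code snippet per line, as described in this section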
The tool also produces a file that exhibits the detection results, highlight- ing the OWASP categories for each vulnerable snippet. Since a single code may encompass multiple vulnerabilities from different categories, the occur- rences of snippets that fall in the various OWASP categories may exceed the total number of unsafe code snippets. To illustrate how DeVAIC works in practice, we refer to the vulnerable code generated by AI provided in Section 3, in which we have already dis- cussed the vulnerabilities exposed and the issues related to analyzing it using state-of-the-art tools, which is due to the code’s incompleteness. In contrast, theevaluationusingDeVAIC highlighteditseffectivenessinanalyzingincom- plete code snippets. Specifically, as previously mentioned, the preliminary step is to convert the code into a single-line format, with the carriage return represented by the “\n” symbol. Subsequently, we ran DeVAIC, obtaining the identification of the OWASP category associated with the vulnerability implemented in the analyzed code, i.e., Software and Data Integrity Failures. Going deeper into the tool functioning, the code triggered a rule that uses the grep command to intercept the vulnerable function yaml.load(). We share DeVAIC and the files to reproduce our experiments on the following URL: https://github.com/dessertlab/DeVAIC. 5. Experimental Evaluation We evaluate the effectiveness of our tool, DeVAIC, through experiments conducted on code generated by four distinct publicly available AI models, 16i.e., Google Gemini (LaMDA’s successor), Microsoft Copilot (GPT-4), Ope- nAI ChatGPT (GPT-3.5), and GitHub Copilot (GPT-4). These models are accessible via APIs and generate code suggestions starting from NL prompts. 5.1. NL Prompts Details We used a set of 125 NL prompts for each of the mentioned models to generate Python code, for a total of 500 code snippets used to assess DeVAIC’s detection skills. The NL inputs used to query the models are extracted from the test set used in [17]. In this work, the authors queried state-of-the-art code gener- ators to assess whether models generate unsafe code. The authors, which included also security experts, manually constructed the test set by using 100 NL descriptions of code by combining two benchmark datasets used for evaluating the security of AI-generated code (i.e., SecurityEval [92], and LLMSecEval[103]). Althoughthese100promptsdonotcontaintriggersthat force the models to circumvent their security measures for code generation, they still potentially lead the models to generate unsafe code when they are “poisoned”, i.e., fine-tuned with vulnerable corpora. This test set perfectly fits with the scope of our paper since it allows us to evaluate the performance of DeVAIC and other baseline tools on the AI-generated code. To perform a morecomprehensiveassessment, weenrichedthesetofNLinputswithanad- ditional set of 25 examples (for a total of 125 unique prompts) that require performing operations in the context of Flask (a Python web application framework) and extracted from the CodeXGLUE benchmark [59]. The 125 prompts cover a comprehensive set of domains, such as GET and POST im- plementation, file opening, temporary file creation and permissions changing, operations on input read from an external source, etc, without including spe- cific implementation details about how to implement the requested task. In this way, we stimulated the models to generate code snippets representative of real-case scenarios. 
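For completeness, a sketch of how such a collection of model outputs could be assembled is shown below; generate_code() is a hypothetical placeholder for the vendor-specific APIs, which are not reproduced here, and the prompt file name is arbitrary:

    MODELS = ["Google Gemini", "Microsoft Copilot", "OpenAI ChatGPT", "GitHub Copilot"]

    def generate_code(model, prompt):
        # Hypothetical wrapper around each service's API; not reproduced here.
        raise NotImplementedError("replace with the actual client for each model")

    def collect_snippets(prompt_file):
        with open(prompt_file, encoding="utf-8") as fh:
            prompts = [line.strip() for line in fh if line.strip()]   # the 125 NL prompts
        outputs = {model: [] for model in MODELS}
        for model in MODELS:
            for prompt in prompts:
                code = generate_code(model, prompt)
                # Flatten to one line so the snippet can later be scanned by DeVAIC.
                outputs[model].append(code.replace("\n", "\\n"))
        return outputs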
The number of tokens for these 125 prompts has an average value of 15.5 (median value is 14). Typically, most prompts fall within the range of 11 to 19 tokens, with a few outliers presenting a minimum of 7 and a maximum of 34 tokens. The variation in prompt length is attributed to the need for additional details to clarify the request, although it is important to remark that prompts do not contain any implementation details. Table 3 displays two examples of NL prompts and a sample of vulner- able code generated by the models we employed. Row #1 showcases a 17Table 3: Examples of vulnerable code produced by models. The vulnerable part of the code is highlighted in red. NL prompt Unsafe prediction “Write a Python code with a get name() function that gets def get name(): \n request = flask.request the name parameter from the \n name = request.args.get("name") \n if URL by using request module by name is None: \n return None \n importing flask and opens the file object = open("names/" + name, "r") \n file names/name returning a file return file object object” user input = input("Please enter your “Get the user input from the input: ")\n evaluated input = request and eval it” eval(user input) \n print(evaluated input) model’s prediction that uses the request.args.get() function of the Flask module. The only validation produced in the code (i.e., the only one de- fined by the user in the prompt) is to verify if the output parameter of the request.args.get() function is an empty variable. Nevertheless, it is not sufficient because, even if this parameter is not empty, it may contain dan- gerous characters (i.e., “../”, “../..”, etc.) that can allow a malicious user to access files or directories outside of the intended directory (i.e., name/). The CWE associated with this kind of vulnerability is CWE-022, commonly known as “Path Traversal”. This CWE is related to the Broken Access Con- trol category of OWASP’s Top 10. To enhance the security, the code has to ensure that the file path is actually within the expected directory before opening the file itself [76]. We also notice that the generated code snippet lacks completeness, making the application of some static analysis tools not feasible. Finally, Row #2 shows that the request to evaluate a user input involved the generation of code containing the eval() function. This function is con-
sidered dangerous due to the potential injection of malicious code and is related to CWE-095, also known as “Eval Injection”, which belongs to the Injection OWASP category. In fact, the official ast documentation recom- mends using literal eval(), a more secure alternative to eval(), that can evaluate only a restricted set of expressions [84]. 185.2. Manual Analysis After submitting the NL prompts, the models generated a total of 500 code snippets. The average number of tokens for the code is 54 (with a median of 42, a maximum of 205, and a minimum of 4), with most of the code snippets (254 in total, ∼ 51%) falling within the range of 22 to 88 tokens. Furthermore, as shown in Figure 2, every model produced some instances of incomplete code, i.e., the model provided code functions without the necessary import statement at the beginning. For each group of 125 snippets produced by each model, we reported 8 incomplete code snippets for Google Gemini (∼ 6%), 12 for Microsoft Copilot (∼ 10%), 39 for GitHub Copilot (∼ 31%) and 6 for OpenAI ChatGPT (∼ 5%), reaching a total of 13% of incomplete code on 500 code generated. This result underlines the importance of evaluating AI-generated code with a tool able to overcome this issue. Figure 2: Occurrences of complete and incomplete code generated by each model. After grouping the code for each model and converting them in a TXT file with snippets written line by line, we ran DeVAIC to check for vulnerable implementation patterns in the outcomes. The assessment of the detection results needs manual inspection, involving the evaluation of each code pro- curedbymodelsintermsofTruePositives(TPs), FalsePositives(FPs), True Negatives (TNs), and False Negatives (FNs). More precisely, we have a TP case when both DeVAIC and manual analysis detect a software vulnerabil- ity; similarly, when both DeVAIC and manual analysis do not detect any 19vulnerability in the code, then we have a TN case. When DeVAIC identifies a vulnerability not confirmed by manual inspection, then it is an FP case. Conversely, when the tool does not identify a vulnerability within the code but manual analysis does, then it is an FN case. As manual classification can be susceptible to errors, it was conducted by a diverse group of 3 human evaluators, all with a strong background and expertise in cyber security and AI code generators. The group included indi- viduals with varying degrees of professional experience and educational qual- ifications. In particular, 2 Ph.D. students and a post-doctoral researcher, all with a computer engineering degree. To minimize the potential for human error, the 3 human evaluators independently examined each code snippet generated by the four models to check whether it contained or not vulner- abilities, by assigning a score of 1 or 0, respectively. Then, the evaluators comparedtheirresultsandperformedanin-depthanalysisofthefewdiscrep- ancy cases. The few discrepancies, which consisted of a ∼ 2% on cases, were attributed to human misclassification and subsequently resolved in a com- plete alignment, achieving a 100% consensus in the final evaluation. Thanks to the diversity and expertise of our evaluators and the iterative process of analysis, we ensured the reliability of our human evaluation process. Onaverage,thefourmodelsproduced54%ofvulnerablecode(271vulner- able code over 500 predictions). Furthermore, the same human experts that identified the vulnerabilities in the code generated by models, also checked for the CWE associated with the code. 
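The consensus procedure can be summarized with the short sketch below, in which the three label vectors are placeholders for the evaluators' actual annotations (1 = vulnerable, 0 = safe):

    evaluator_a = [1, 0, 1, 1, 0]   # placeholder labels, one entry per snippet
    evaluator_b = [1, 0, 1, 0, 0]
    evaluator_c = [1, 0, 1, 1, 0]

    # Snippets on which the three independent evaluations do not coincide.
    disputed = [idx for idx, labels in enumerate(zip(evaluator_a, evaluator_b, evaluator_c))
                if len(set(labels)) > 1]
    rate = len(disputed) / len(evaluator_a)
    print(f"snippets needing joint review: {disputed} ({rate:.0%})")
    # Disputed cases are then discussed until the evaluators reach full agreement.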
Again, this in-depth analysis was performed independently by consulting the MITRE reports [64]. Afterward, the evaluators compared their results, collaboratively reviewing the few dis- crepancies encountered, until reaching a 100% consensus. The details about the CWE categories associated with each vulnerable snippet generated by the four models are listed in Table 4. In particular, we marked in bold the 21 CWEs that overlap with those listed in Table 1, i.e., the CWEs of the samples we used for the rule creation. The total of 271 vulnerable snippets generated showed 118 CWEs for the vulnerable snippets produced by the model Google Gemini, 136 CWEs for OpenAI ChatGPT-3.5, 71 CWEs for Microsoft Copilot and 79 CWEs for GitHub Copilot, as shown in Table 4 - “Total for model” row. For each model, therearesomevulnerableinstancesforwhichtheevaluatorsidentified more than a single CWE (i.e., 34% out of vulnerable snippets for Gemini, 53% for ChatGPT, 15% for Microsoft Copilot, and 12% for GitHub Copilot). Some of these CWEs are closely related to each other and allowed us to cap- 20Table 4: Occurrences of CWEs in vulnerable snippets generated by each model. The “-” signifies that a specific CWE is not present. CWEs overlapping with Table 1 are in bold. OpenAI Google Microsoft GitHub Total for CWE ChatGPT- Gemini Copilot Copilot CWE 3.5 CWE-020 13 16 10 10 49 CWE-022 5 5 7 4 21 CWE-078 4 5 2 6 17 CWE-079 3 14 8 4 29 CWE-089 1 - - 2 3 CWE-090 3 - - 1 4 CWE-094 15 14 2 2 33 CWE-095 1 - - - 1 CWE-113 - 1 - - 1 CWE-117 - 2 2 - 4 CWE-209 - 25 - - 25 CWE-215 15 12 2 2 31 CWE-259 6 9 2 2 19 CWE-276 - - 1 - 1 CWE-295 2 - 1 2 5 CWE-319 2 1 2 3 8 CWE-326 3 - 1 1 5 CWE-327 5 3 5 7 20 CWE-330 2 - 1 2 5 CWE-337 2 1 1 - 4 CWE-338 - 1 1 2 4 CWE-347 4 1 - 2 7 CWE-377 - - 1 1 2 CWE-400 5 - - 4 9 CWE-477 - - - 1 1 CWE-489 8 - - - 8 CWE-502 8 8 9 8 33 CWE-601 1 1 1 1 4 CWE-611 5 1 - - 6 CWE-614 - 1 1 - 2 CWE-703 1 1 2 - 4 CWE-732 2 6 3 8 19 CWE-776 1 2 2 - 5 CWE-798 1 6 3 4 14
CWE-918 - - 1 - 1 Total for model 118 136 71 79 21turethedifferentcharacteristicsofthevulnerabilitiesshowninthesesnippets (e.g., the generated snippet in Table 5 is vulnerable to CWE-95 (Eval Injec- tion) and CWE-209 (Information Exposure Through an Error Message), the latter is closely related to CWE-215 (Insertion of Sensitive Information Into Debugging Code) and CWE-489 (Active Debug Code)). Table 5: Example of CWE labeling for a vulnerable generate snippet. CWE Vulnerable snippet import flask \n def eval input(request):\n user input = CWE-095 request.args.get(‘‘user input’’) \n result=eval(user input) \n return CWE-209 result\n app = flask.Flask( name )\n @app.route(‘‘/’’) \n def index():\n CWE-215 result = eval input( flask.request)\n print(result) \n if name == CWE-489 ‘‘ main ’’:\n app.run(debug=True) Overall, the 500 generated code snippets allowed us to comprehensively assess the tool’s detection ability in both vulnerable and non-vulnerable in- stances, providing a robust and fair measure of their overall performance across various scenarios. 5.3. Experimental Results We assessed the tool’s detection ability by using standard metrics typi- cally employed in this field, i.e., Precision, Recall, F Score, and Accuracy. 1 We computed these metrics with the TPs, TNs, FPs, and FNs identified during the manual analysis (see § 5.2). To provide context for the evaluation, we compared DeVAIC’s perfor- mance with a baseline. In our analysis, we utilized CodeQL version v2.16.4 and the two Security test suites [22, 23] of queries for Python. We also used Bandit version 1.7.7, Semgrep version 1.61.1, and python-taint module version 0.42, also known as PyT. As introduced in Section 2, Semgrep uses a pattern-matching approach by executing regular expressions (regex) to search for vulnerable patterns in the code. In the official Registry [96], Semgrep provides ready-to-use configuration files containing regex for vulnerability detection. Instead, tools like CodeQL, Bandit and PyT first model the code under examination with an Abstract Syntax Tree (AST) and then they execute their detection rules on the AST nodes. More in detail, Bandit builds the AST of the code under examination, and runs appropriate plugins (i.e., assert used, exec used, set bad file permissions, etc.) [6] against the AST nodes. Once Bandit has finished scanning all the source code, it generates a final report. CodeQL 22Table 6: Evaluation of detection results comparing DeVAIC with the state of the art. 
OpenAI Google Microsoft GitHub All Detection Tool ChatGPT- Gemini Copilot Copilot models 3.5 noisicerP DeVAIC 0.97 1.00 0.95 0.95 0.97 Bandit 0.89 0.81 0.83 0.82 0.84 CodeQL 0.79 0.86 0.83 0.95 0.85 Semgrep 0.87 0.98 0.94 0.85 0.91 PyT 1.00 0.89 1.00 1.00 0.96 ChatGPT-3.5 0.93 0.91 0.77 0.82 0.85 ChaGPT-4 0.71 0.72 0.71 0.68 0.71 Claude-3.5-Sonnet 0.66 0.74 0.74 0.74 0.72 llaceR DeVAIC 0.95 0.96 0.90 0.86 0.92 Bandit 0.70 0.69 0.51 0.58 0.62 CodeQL 0.36 0.54 0.42 0.26 0.39 Semgrep 0.55 0.69 0.58 0.51 0.58 PyT 0.11 0.11 0.08 0.04 0.09 ChatGPT-3.5 0.71 0.57 0.68 0.71 0.67 ChatGPT-4 0.77 0.66 0.69 0.72 0.71 Claude-3.5-Sonnet 0.75 0.64 0.78 0.88 0.76 erocS F 1 DeVAIC 0.96 0.98 0.92 0.90 0.94 Bandit 0.78 0.74 0.63 0.68 0.72 CodeQL 0.49 0.67 0.56 0.41 0.54 Semgrep 0.67 0.81 0.72 0.64 0.71 PyT 0.20 0.20 0.16 0.08 0.16 ChatGPT-3.5 0.81 0.70 0.72 0.76 0.75 ChaGPT-4 0.74 0.69 0.70 0.70 0.71 Claude-3.5-Sonnet 0.71 0.69 0.76 0.81 0.74 ycaruccA DeVAIC 0.95 0.98 0.93 0.89 0.94 Bandit 0.77 0.74 0.71 0.69 0.73 CodeQL 0.56 0.70 0.68 0.58 0.63 Semgrep 0.69 0.82 0.78 0.67 0.74 PyT 0.48 0.50 0.56 0.46 0.50 ChaGPT-3.5 0.80 0.73 0.75 0.75 0.76 ChatGPT-4 0.68 0.66 0.71 0.65 0.68 Claude-3.5-Sonnet 0.63 0.67 0.76 0.76 0.71 treats code like data, which means that it is necessary to generate a CodeQL database to represent the codebase before running the analysis queries. Even inthiscase, CodeQLprovidespublictestsuiteswithready-to-usequeries[22, 2323]. After the code scanning, CodeQL generates a report. Finally, PyT generatestheASTandcreatestheControlFlowGraph(CFG);then,itpasses the CFG to a Framework Adaptor, which will mark the arguments of certain functions (by Flask, Django and other libraries) as tainted sources. In the final step, PyT checks if the output from the tainted sources is sanitized. If not, it raises an alert and generates a report. We remark again that several snippets generated by the models in our setup were incomplete, so the static analysis tools just mentioned cannot define the AST, thus having issues conducting their detection analyses. At first, we analyzed the model predictions with DeVAIC. Then, we modified the incomplete ones (65 in total) by inserting the import statement when lacking to compare DeVAIC’s evaluation results with the state-of-the-art. Furthermore, we enriched our analysis by comparing DeVAIC even with
LLMs-based detection methods, which are nowadays a current trend and a widely used solution in the state of the art to perform vulnerability detec- tion [108, 43]. In particular, we adopted ChatGPT-3.5 and ChatGPT-4 [70], and the new model Claude-3.5-Sonnet [2]. ChatGPT-3.5 and ChatGPT-4 are developed by OpenAI and represent successive advancements in language model performance, with ChatGPT-4 offering enhanced understanding, rea- soning, and generation capabilities compared to its predecessors. Claude- 3.5-Sonnet, on the other hand, is the new model developed by Anthropic AI, which has set new performance standards in several tasks [3]. We prompted these 3 models using a Zero-Shot Role-Oriented (ZS-RO) prompt[104], i.e., weassignedtothemodeltheroleofvulnerabilitydetection system (i.e., RO), successively asking to perform the vulnerability detection of the 500 generated snippets with a question about the related CWE (NL prompt: ‘‘You are a vulnerability detection system. Your task is to analyze the following code snippet and identify any potential security vulnerabilities. Specifically, please determine if the code is susceptible to any known Com- mon Weakness Enumerations (CWEs) and provide the corresponding CWE identifier.” Considering the average values for all models, Table 6 shows that De- VAIC achieved high values for each metric. Regarding Precision, DeVAIC reaches 97%, which is comparable with the values achieved by Semgrep4 4When we used Semgrep, we executed a total of 1291 rules from the official ruleset registry for Python [96]. 24(91%) and PyT (96%), while surpassing the values achieved by ChatGPT- 4 (71%) and Claude-3.5-Sonnet (72%). Furthermore, DeVAIC achieved a maximum Recall of 92% on average, surpassing the baseline with a consid- erable gap compared to the employed solutions, which obtain a maximum of 76% (i.e., Claude-3.5-Sonnet). Finally, DeVAIC shows superior performance to other state-of-the-art solutions (≥ 20%) with average values of 94% for both F Score and Accuracy. 1 Considering the CWEs mapped to the experimental dataset shown in Table 4, we analyzed the snippets detected by DeVAIC and the baseline tools to evaluate how many CWE categories were covered. We found that DeVAIC detected a set of vulnerable snippets associated with 31 out of the 35 CWEs listed in Table 4 (i.e., 89%), confirming the high performance exhibited by the evaluation metrics in Table 6. Regarding the baseline tools, Bandit detected snippets associated with 18 CWEs (i.e., 51%), while we obtained22CWEsforCodeQL (i.e., 63%), 21CWEsfor Semgrep (i.e., 60%), 8 CWEs for PyT (i.e., 23%), 29 CWEs for Claude-3.5-Sonnet (i.e., 83%), 19 CWEs for ChatGPT-3.5 (i.e., 54%), and 21 CWEs for ChatGPT-4 (i.e., 60%). The baseline tools correctly identified some instances of generated snippets labelled with multiple CWEs, justifying the high number of CWEs covered. However, they missed other vulnerable instances, obtaining lower evaluation metrics than DeVAIC, as shown in Table 6. We further evaluated if, for each metric (i.e., Precision, Recall, F1 Score, and Accuracy), there is a statistical difference between DeVAIC and the state-of-the-art solutions using the non-parametric Wilcoxon rank sum test. The null hypothesis is that the two samples derive from the same popula- tion (i.e., the two populations have equal medians). If the null hypothesis is rejected, the Wilcoxon rank sum test indicates that the two samples are statistically different. 
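As a compact illustration of how the reported figures and the statistical comparison can be obtained, the sketch below computes the four metrics from placeholder confusion-matrix counts and runs the rank sum test on the per-model Recall values of DeVAIC and Bandit taken from Table 6; the choice of scipy.stats.ranksums as the test implementation is an assumption of this sketch:

    from scipy.stats import ranksums

    def detection_metrics(tp, tn, fp, fn):
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)
        accuracy = (tp + tn) / (tp + tn + fp + fn)
        return {"precision": precision, "recall": recall, "f1": f1, "accuracy": accuracy}

    # Placeholder confusion-matrix counts for one model's snippets.
    print(detection_metrics(tp=60, tn=55, fp=2, fn=8))

    # Per-model Recall values for DeVAIC and Bandit (from Table 6); a small p-value
    # rejects the null hypothesis that the two samples have equal medians.
    devaic_recall = [0.95, 0.96, 0.90, 0.86]
    bandit_recall = [0.70, 0.69, 0.51, 0.58]
    statistic, p_value = ranksums(devaic_recall, bandit_recall)
    print(f"Wilcoxon rank sum: statistic={statistic:.2f}, p-value={p_value:.3f}")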
As deducible by the considerations explained above, for the Precision, DeVAIC is statistically different with Bandit, and with the models ChatGPT-4 and Claude-3.5-Sonnet. For all of the other metrics, DeVAIC achieves optimal results, which are statistically different (and bet- ter) than any other results obtained with the state of the art. The previously mentioned considerations highlight the outstanding performance of DeVAIC. Then, we performed an in-depth analysis of the results provided by our tool to investigate the cases of FN and FP. Since DeVAIC behaves like a text scanner, we first checked whether the performance of the tool is affected by thecomplexity, intermsofnumberoftokens, ofthecodetoanalyze. Figure3 shows four boxplots to illustrate the complexity of the code across the TP, 25Figure3: Analysisofsnippetlengthsvariabilityforeachclassification. Themedianvalues are in orange, while the mean values are in green. TN, FP, and FN cases. The figure highlights that the interquartile ranges, i.e., the height of the boxplots, are greater for TP and TN cases (59 and 57, respectively) than the FN (46) and FP cases (29). Moreover, the average number of tokens per category, which is 46 for TP, 40 for TN, 44 for FP, and 43 for FN, proves that the cases of misclassification do not depend on the complexity of snippets as DeVAIC is effective in detecting vulnerabilities even for complex code snippets. 5.4. Computational Cost We analyzed the computational times of DeVAIC. We run the tool on a computer with a 13th Gen Intel(R) Core(TM) i9-13900H CPU, and 32 GB of RAM. We compared the execution time taken by DeVAIC with the other so- lutions employed. Figure 4 shows the median execution time of each tool in analyzing 500 code snippets from the four models. Bandit is the faster
(0.67 seconds), while CodeQL is the slower (570 seconds, which corresponds to approximately 9 minutes). PyT and Semgrep employed 253.40 and 123.67 seconds, respectively (2 and 4 minutes approximately). In this context, while theothertoolsemployedminutesonaveragetoanalyzethesnippets,DeVAIC remains in the realm of seconds, as Bandit does. 26Figure 4: Comparison of median execution times for DeVAIC, Bandit, CodeQL, Semgrep, and PyT for the analysis of all the 500 codes generated by the 4 models. When using LLM models as vulnerability detectors, the time it takes for analysis depends heavily on human-AI interaction. If we approximate the time for a single detection, using the same prompt but changing snippets from time to time, to be about one minute, and if we analyze 500 snippets, the total time for a single model would be 500 minutes (∼ 8 hours). Furthermore, we deeply inspected the distribution of the times required by DeVAIC to execute the detection rules on every code snippet. Despite the different lengths of the analyzed code snippets, Figure 5 shows that the tool exhibited an average execution time of 0.16 seconds (with a median of 0.14, a maximum of 0.59, and a minimum of 0.10). The figure highlights that for most of the snippets (169 in total, ∼ 34%), the execution time falls within the range of 0.10 to 0.13 seconds. Moreover, we did not find any relation between the length of snippets, in termsoftokens,toanalyzeandthecomputationtimesofthetool. Indeed,the outliers in execution times were due to cases of multiple identified OWASP categories, which require the tool to perform multiple write operations to the report file. Finally, we conducted a study to evaluate the performance of our tool when analyzing large programs. To increase the code complexity, we con- catenated the snippets generated by Gemini (we could have chosen the code 27Figure 5: Occurrences of execution times taken by DeVAIC for scanning and evaluating individual code snippets generated by the 4 models. generated by any other AI model as well) to create a new set of code charac- terized by 125 snippets with an incremental number of tokens, starting from 73 for the first snippet to 6892 for the last one. Figure6showstheDeVAIC’sperformanceinevaluatingthenewcodecol- lection. Analyzing snippets with more tokens led to longer execution times. While the initial snippet, comprising 73 tokens, only required 0.12 seconds forevaluation, thefinalsnippet, consistingof6892tokens, tookover1minute (75.23 seconds) to process. Furthermore, we applied a second-degree polyno- mial curve to fit the data, obtaining an R2 value of 0.984. This value suggests a strong fit between the predicted execution times and the actual data and implies that the relationship between the snippet length and execution time is not linear but rather quadratic, with execution times increasing at a rate that accelerates with the number of tokens. The polynomial relationship might be due to various factors inherent in code analysis processes, such as increased memory allocation, more intensive parsing, and a higher number of computational operations needed to process and analyze larger codebases. 28Figure 6: Execution times of DeVAIC plotted against the cumulative number of tokens in code snippets. Each data point represents the time taken as snippets are progressively combined, starting from single snippets and incrementally merging with additional ones. 6. 
Threats to Validity

Rule Creation: The creation of detection rules based on regular expressions could introduce bias, as these rules are derived from patterns observed within a limited set of vulnerable code snippets. This limitation may result in rules that are either too specific or too general, affecting the tool's accuracy in real-world scenarios. To mitigate this threat, we employed a diverse dataset of vulnerabilities covering a broad range of CWEs and OWASP categories [95, 80]. We also conducted iterative refinements of our rules by testing them on separate validation sets to ensure they accurately capture the intended patterns without being overly broad or narrow.

Coverage of CWEs: A key threat to the validity of our study concerns the comprehensive coverage of CWEs using detection rules. Given the variability in how a single CWE can manifest across different code patterns, it is challenging to ensure that all possible implementations are adequately covered. This complexity arises because a CWE can present itself in multiple distinct ways, making it difficult to design a definitive set of rules that captures every variation. To mitigate this threat, we implemented 85 detection rules, which is significantly more than the 35 CWEs addressed in our study. This was done to capture a broad spectrum of implementation patterns for each CWE. However, we acknowledge that the possibility remains that some patterns might not be covered by the existing rules, potentially leaving certain vulnerabilities undetected. Despite these challenges, the effectiveness of our detection tool, DeVAIC, is demonstrated by the results obtained from our experimental dataset. DeVAIC successfully detected 91% (248 out of 271) of the vulnerable snippets generated by AI models. This high detection rate suggests that our rule set is robust and capable of identifying a wide range of CWE implementations. However, it is important to note that while this result is promising, it does not guarantee that all potential patterns were captured, as the remaining 9% of vulnerabilities were not detected. Future work will enhance the robustness of our detection tool by continuously updating the rule set to incorporate new CWE patterns, which is crucial for maintaining comprehensive coverage in a dynamic cybersecurity landscape.
AI Models: The selection of AI models for evaluating DeVAIC may intro- duce bias, as the performance of these models can vary significantly. This variation could inadvertently affect the perceived effectiveness of DeVAIC. However, we remark that we selected four widely used AI code generation models,representingarangeofunderlyingtechnologiesandtrainingdatasets. Thisdiversityhelpsensurethatourevaluationencompassesavarietyofcode- generation behaviours and vulnerabilities. Metrics: The use of Precision, Recall, F Score, and Accuracy as met- 1 rics relies on correctly classifying vulnerabilities. Any misclassification could impact these metrics and the interpretation of DeVAIC effectiveness. To enhance the reliability of our classification, we employed a multi-researcher approach, where multiple experts independently assessed the vulnerabilities before reaching a consensus. This process reduces the risk of subjective bias and ensures a more accurate classification of true and false positives/nega- tives. Manual Evaluation: A potential threat to the validity of our manual anal- ysis is the subjective nature of human evaluation, which can introduce biases and inconsistencies. Despite the evaluators’ strong backgrounds in cyber- security and AI code generation, differences in interpretation and judgment could affect the classification of vulnerabilities in the generated code snip- pets. To address these concerns, we implemented several rigorous measures to mitigate the risk of bias and ensure consistency. First, each code snip- pet was independently evaluated by three experts (a group comprising two Ph.D. students and a post-doctoral researcher, all with substantial expertise in the relevant domains). This independence in evaluation helps reduce the 30influence of individual biases, as each evaluator’s assessment is made without knowledge of the others’ judgments. Following the independent evaluations, we conducted a further inspection process for cases with discrepancies, which accounted for only about 2% of the total evaluations. This low discrepancy rate suggests a high level of agreement among the evaluators, indicating ro- bust initial assessments. For the discrepancies that did occur, the evaluators engaged in detailed discussions to reach a consensus, ensuring that any po- tential misunderstandings or differing interpretations were resolved. This iterative approach helped align the evaluators’ perspectives and provided an opportunity for in-depth analysis, enhancing the overall reliability of the evaluation. Generalizability: TheeffectivenessofDeVAIC iscurrentlylimitedtoPython code. This focus raises questions about the tool’s applicability to other pro- gramming languages that may have different syntax, semantics, and com- mon vulnerabilities. While DeVAIC is initially designed for Python, the methodology used to create detection rules based on regular expressions and patterns observed in vulnerable code can be adapted to other languages. We are currently investigating this methodology for the extraction of vul- nerable implementation patterns and rule creation in the C/C++ language. Adapting DeVAIC to C/C++ requires additional considerations due to the complexity and variety in how equivalent functionalities are implemented in these languages compared to Python (e.g., what can be accomplished with a single Python instruction might require several more complex instructions in C/C++). 
This complexity increases the difficulty of pattern identification and necessitates the development of more sophisticated detection rules. Future work will involve extending DeVAIC's capabilities to additional languages and leveraging domain experts to ensure the accuracy and relevance of new rules.

7. Conclusion

In this work, we introduced DeVAIC, a tool that implements a set of detection rules to identify vulnerabilities in Python code. The tool is designed to overcome the limitations of static analysis tools since it does not require complete programs, making it particularly suitable for AI-generated code. To define rules, we extracted vulnerable code from public datasets and, after grouping the code according to their OWASP categories and similarity, we found common patterns by standardizing the code snippets and using the LCS. We evaluated DeVAIC on code generated by four public AI code generators. The results outline the tool's ability to detect vulnerabilities, showing an average F1 Score and Accuracy both at 94%, overcoming the performance of other state-of-the-art solutions used as a baseline for the evaluation, with a limited computational cost. Future work aims to extend the number of programming languages analyzed, enhancing the set of detection rules available. Moreover, we also aim to enrich the list of CWEs covered by the tool.

Acknowledgments

We are grateful to our students Francesco Balassone and Ferdinando Simone D'Agostino for their help in the early stage of this work.

Data availability

The tool and the files to reproduce our experiments are publicly available at the following URL: https://github.com/dessertlab/DeVAIC.

References

[1] Muna Al-Hawawreh, Ahamed Aljuhani, and Yaser Jararweh. Chatgpt for cybersecurity: practical applications, challenges, and future directions. Cluster Computing, pages 1–16, 2023.
[2] Anthropic. Claude-3.5-Sonnet. https://www.anthropic.com/news/claude-3-5-sonnet, 2024.
[3] Anthropic. Claude 3.5 Sonnet Model Card Addendum. https://www-cdn.anthropic.com/fed9cc193a14b84131812372d8d5857f8f304c52/Model_Card_Claude_3_Addendum.pdf, 2024.
[4] Atieh Bakhshandeh, Abdalsamad Keramatfar, Amir Norouzi, and Mo-
hammad Mahdi Chekidehkhoun. Using chatgpt as a static application security testing tool. arXiv preprint arXiv:2308.14434, 2023. [5] Vinuri Bandara, Thisura Rathnayake, Nipuna Weerasekara, Charitha Elvitigala, Kenneth Thilakarathna, Primal Wijesekera, and Chamath 32Keppitiyagama. Fix that fix commit: A real-world remediation anal- ysis of javascript projects. In 2020 IEEE 20th International Working Conference on Source Code Analysis and Manipulation (SCAM), pages 198–202. IEEE, 2020. [6] Bandit. Test Plugins. https://bandit.readthedocs.io/en/latest/ plugins/, 2024. [7] Bard/FAQ. WhatCanBardDoandOtherFrequentlyAskedQuestions - Bard. https://bard.google.com/faq?hl=en, 2023. [8] Bing/FAQ. Microsoft Bing - Frequently Asked Questions. https: //www.microsoft.com/en-us/bing?form=MA13FV, 2023. [9] Carnegie Mellon University NeuLab and STRUDEL Lab. CoNaLa. https://conala-corpus.github.io/. [10] Saikat Chakraborty, Rahul Krishna, Yangruibo Ding, and Baishakhi Ray. Deep learning based vulnerability detection: Are we there yet. IEEE Transactions on Software Engineering, 2021. [11] YizhengChen, ZhoujieDing, LamyaAlowain, XinyunChen, andDavid Wagner. Diversevul: A new vulnerable source code dataset for deep learningbased vulnerability detection. InProceedings of the 26th Inter- national Symposium on Research in Attacks, Intrusions and Defenses, pages 654–668, 2023. [12] Boris Chernis and Rakesh Verma. Machine learning methods for soft- ware vulnerability detection. In Proceedings of the fourth ACM in- ternational workshop on security and privacy analytics, pages 31–39, 2018. [13] Boris Cherry, Pol Benats, Maxime Gobert, Loup Meurice, Csaba Nagy, and Anthony Cleve. Static analysis of database accesses in mongodb applications. In2022 IEEE International Conference on Software Anal- ysis, Evolution and Reengineering (SANER), pages 930–934. IEEE, 2022. [14] Anton Cheshkov, Pavel Zadorozhny, and Rodion Levichev. Evalu- ation of chatgpt model for vulnerability detection. arXiv preprint arXiv:2304.07232, 2023. 33[15] Brian Chess and Gary McGraw. Static analysis for security. IEEE security & privacy, 2(6):76–79, 2004. [16] Domenico Cotroneo, Cristina Improta, Pietro Liguori, and Roberto Natella. Vulnerabilities in ai code generators: Exploring targeted data poisoning attacks. arXiv preprint arXiv:2308.04451, 2023. [17] Domenico Cotroneo, Cristina Improta, Pietro Liguori, and Roberto Natella. Vulnerabilities in ai code generators: Exploring targeted data poisoningattacks. InProceedings of the 32nd IEEE/ACM International Conference on Program Comprehension, pages 280–292, 2024. [18] Trevor Dunlap, Seaver Thorn, William Enck, and Bradley Reaves. Finding fixed vulnerabilities with off-the-shelf static analysis. In 2023 IEEE 8th European Symposium on Security and Privacy (EuroS&P), pages 489–505. IEEE, 2023. [19] Jiahao Fan, Yi Li, Shaohua Wang, and Tien N Nguyen. Ac/c++ code vulnerability dataset with code changes and cve summaries. In Proceedings of the 17th International Conference on Mining Software Repositories, pages 508–512, 2020. [20] Chongzhou Fang, Ning Miao, Shaurya Srivastav, Jialin Liu, Ruoyu Zhang, Ruijie Fang, Asmita Asmita, Ryan Tsang, Najmeh Nazari, Han Wang, et al. Large language models for code analysis: Do llms really do their job? arXiv preprint arXiv:2310.12357, 2023. [21] GitHub. CodeQL. https://codeql.github.com/. [22] GitHub. CodeQL: Experimental Security Queries for Python lan- guage. https://github.com/github/codeql/tree/main/python/ ql/src/experimental/Security. [23] GitHub. 
CodeQL: Security Queries for Python language. https:// github.com/github/codeql/tree/main/python/ql/src/Security. [24] GitHub. The top programming languages. https://octoverse. github.com/2022/top-programming-languages. [25] GitHub. GitHub Copilot. https://github.com/features/copilot, 2023. 34[26] GitHub Docs. Python queries for CodeQL analy- sis. https://docs.github.com/en/code-security/ code-scanning/managing-your-code-scanning-configuration/ python-built-in-queries, 2023. [27] GitHub. [n. d.]. GitHub Copilot for Individuals. https: //docs.github.com/en/copilot/overview-of-github-copilot/ about-github-copilot-for-individuals. [28] GitHub/Blog. GitHub Copilot now has a better AI model and new capabilities. https://github.blog/ 2023-02-14-github-copilot-now-has-a-better-ai-model-and-new-capabilities/, 2023. [29] Mat´ıas Gobbi and Johannes Kinder. Poster: Using codeql to detect malware in npm. In Proceedings of the 2023 ACM SIGSAC Conference on Computer and Communications Security, pages 3519–3521, 2023. [30] Google. Google Gemini. https://gemini.google.com/app, 2023. [31] Katerina Goseva-Popstojanova and Andrei Perhinschi. On the capabil- ityofstaticcodeanalysistodetectsecurityvulnerabilities. Information and Software Technology, 68:18–33, 2015. [32] HosseinHajipour, ThorstenHolz, LeaSch¨onherr, andMarioFritz. Sys- tematicallyfindingsecurityvulnerabilitiesinblack-boxcodegeneration models. arXiv preprint arXiv:2302.04012, 2023. [33] Sivana Hamer, Marcelo d’Amorim, and Laurie Williams. Just an- other copy and paste? comparing the security vulnerabilities of chatgpt generated code and stackoverflow answers. arXiv preprint arXiv:2403.15600, 2024. [34] Anne Honkaranta, Tiina Lepp¨anen, and Andrei Costin. Towards prac- tical cybersecurity mapping of stride and cwe—a multi-perspective
2404.08562
Dynamic Neural Control Flow Execution: An Agent-Based Deep Equilibrium Approach for Binary Vulnerability Detection
Litao Li1, Steven H. H. Ding1, Andrew Walenstein2, Philippe Charland3, and Benjamin C. M. Fung4
1 L1NNA Lab, School of Computing, Queen's University, Canada
2 BlackBerry Ltd., Canada
3 Mission Critical Cyber Security Section, Defence R&D Canada
4 Data Mining and Security (DMaS) Lab, McGill University, Canada
April 15, 2024

Abstract
Software vulnerabilities are a challenge in cybersecurity. Manual security patches are often difficult and slow to be deployed, while new vulnerabilities are created. Binary code vulnerability detection is less studied and more complex compared to source code, and this has important practical implications. Deep learning has become an efficient and powerful tool in the security domain, where it provides end-to-end and accurate prediction. Modern deep learning approaches learn the program semantics through sequence and graph neural networks, using various intermediate representations of programs, such as abstract syntax trees (AST) or control flow graphs (CFG). Due to the complex nature of program execution, the output of an execution depends on the many program states and inputs. Also, a CFG generated from static analysis can be an overestimation of the true program flow. Moreover, the size of programs often does not allow a graph neural network with fixed layers to aggregate global information. To address these issues, we propose DeepEXE, an agent-based implicit neural network that mimics the execution path of a program. We use reinforcement learning to enhance the branching decision at every program state transition and create a dynamic environment to learn the dependency between a vulnerability and certain program states. An implicitly defined neural network enables nearly infinite state transitions until convergence, which captures the structural information at a higher level. The experiments are conducted on two semi-synthetic and two real-world datasets. We show that DeepEXE is an accurate and efficient method and outperforms the state-of-the-art vulnerability detection methods.

1. Introduction
Software vulnerabilities have been an ongoing challenge in the cybersecurity domain. It is an inevitable problem, as the scale of software grows in complexity. Many malicious cyberattacks exploit vulnerabilities within systems and can cause tremendous economical and security damages. Often, the security analysts cannot even patch vulnerabilities fast enough, as new ones are created (Alexopoulos et al., 2020; Farris et al., 2018). Common Vulnerability Exposures (CVE) show that the total number of vulnerabilities more than doubled from 2016 to 2017 and it continued to increase throughout the recent years1. Many traditional static and dynamic analysis methods are manually expensive and inefficient. This motivates automated and end-to-end approaches, such as neural networks.
1 Statistics on Common Vulnerabilities and Exposures (CVE) Details

Vulnerabilities can be detected at either the source code or binary code level. Source code provides much more meaningful semantics, syntax, and structures, which in turn help both analysts and machine learning models to track vulnerabilities. Existing methods at the source code level are accurate and capable of finding complex vulnerabilities (Li et al., 2018; Harer et al., 2018). For binary code, as much information is lost during the compilation process, it is much harder to detect vulnerabilities. Moreover, the absence of the original source is a practical problem under many circumstances, such as third-party or off-the-shelf programs. Binary code is best analyzed as assembly code, a form of intermediate representation that provides analysts readable content. Assembly code contains instructions that provide some semantics and structures of the program. In this paper, we are only interested in binary code vulnerability detection, as it is still a prevalent challenge in the security domain.

Deep learning methods aim to learn the latent representation of a piece of binary code for classification. Existing works for binary code learning can be categorized into two main streams. The first approach focuses on text-based representation learning to extract the token semantics. The instructions are broken down and embedded into vectors through some unsupervised learning such as Word2Vec (Mikolov et al., 2013), then these vectors are fed into a sequential deep learning model for classification. Instruction2Vec (Lee et al., 2019), HAN-BSVD (Yan et al., 2021), and BVDetector (Tian et al., 2020) all use this semantic-based approach for detection.

all the possible program states associated with all possible execution paths. This will cause the path explosion problem (Xie et al., 2009), especially for large functions with loops. Existing works try to address the path finding problem statically from an incomplete view, focusing on partial or local structures. For example, DeepBinDiff (Duan et al., 2020) and InnerEye (Zuo et al., 2018) match the CFGs based on semi-exhaustive path comparison, which is not scalable, and also misses the iterative graph learning. Genius (Xu et al., 2017), BinGo (Chandramohan et al., 2016), and Tracelet (David and Yahav, 2014) use partial path matching, which lacks robustness when programs are easily altered through artificial means. BinaryAI (Yu et al.,

The second method involves collecting and aggregating structural information at a higher level. Usually, CFGs are parsed from the assembly code basic blocks,
whichcreatedependenciesbetweendifferentblocksofcode. 2020)usesgraphconvolutionformessagepassing. How- This is crucial in vulnerability detection, since programs ever, this approach does not consider mutually exclusive arecomplexandhierarchical,andvulnerabilitiesareoften dependencies among edges, covering invalid paths. The triggeredinspecificprogramstates. Usingonlytheseman- messagepassingmechanismalsoassumesastaticadjacency ticsofinstructiontokensareofteninsufficient. Gemini(Xu matrix,whichlackshigh-levelguidancefromaglobalstate. etal.,2017),Diff(Liuetal.,2018),Order(Yuetal.,2020), Thecurrentresearchinthisdomainlacksadedicatedway InnerEye (Zuo et al., 2018), and BinDeep (Tian et al., tosimulatetheprogramstatetransitionsalongtheguided 2021)allusegraph-basedmethodsforbinarycodestructure validexecutionpath,withafocusonahigherorderofnode embedding. neighbourhoodproximity. Unfortunately,therearemajordrawbackstoeitherapproach Inspired by symbolic execution for path-finding, we pro- thatcanhindertheperformanceorscalabilityofthemodel. pose a neural network model, DeepEXE, which mimics Themoreobviousdisadvantageisthescalabilitywhenlarge a program state-guided execution process over the CFG programsarepresent. Semantic-basedapproachesusually todetectbinarycodevulnerabilitiesatthefunctionorfile introduceamaximuminputlength,inordertopreventvan- level. DeepEXEreliesonanexecutionagentthatsimulates ishinggradient,especiallyforlargeanddeepsequencemod- andlearnswhichdirectiontotake, resultinginsimulated els. Structure-basedapproachesperformgraphneuralnet- pathsacrossdifferentepochs. Thecombinednodeembed- work(GNN)foraggregatingnodeinformation. Thenumber dingrepresentstheprogramstate,andthebranchingactions of layers dictates the receptive field of the model by per- guidingtheprogramflowarebasedontheprogramstateand formingk-hopmessagepassing,thuslimitingtheamount codesemanticsofthecurrentnode. DeepEXEleveragesthe of global information that can be learned. Both of them implicitneuralnetworkparadigm,whereonlythefinalpro- needtocarefullymanagethememoryfootprintduringtrain- gramstateisstoredbeforeback-propagation. Thisenables ing. Theotherdrawbackistheabsenceofmodellinghow alargesimulationstepovertheexecutionflow. Compared programsnaturallyrun. Unlikenaturallanguage,programs to the existing methods with only local or partial graph areexecuteddynamically. Thestateofaprogramcanbe information, DeepEXE enables modelling on the highest different,dependingontheinputanditspreviousstates. By global-levelviewovertheexecutionpath. Ourcontributions usingfixedgraphlearningtechniques,thedynamicnature areasfollows: oftheprogramstructureisdifficulttocaptureandthuslead • We propose DeepEXE, a neural program execution toundesiredperformance. modeloveraCFGforbinaryvulnerabilitydetection. Given assembly code, one has to respectively find a pro- It simulates a semantic-guided decision process for gram execution path that can potentially yield the same steppingthroughagivenfunction’sCFG. finalprogramstate. Ingeneral,asoundandcompletestatic • Tosimulatetheprogramexecutionstepsoverthegraph, analysismethodgeneratesarepresentationofthecode(i.e., we propose a learning agent for making branching CFG)withoverestimation. Thismeanspathscreatedina decisions with an implicit neural network structure graphcanpotentiallyneverexecute. Therefore,learningthe for program state transitions. 
It enables modelling topological information solely from the default CFG can program semantics on a higher level views over the beinaccurateandresultinafalseexecutionpath. Ideally, executionpath. symbolic execution (Baldoni et al., 2018; King, 1976) is • Toaddressthescalabilityandlimitedreceptivefield oneoftheformalmethodsthatenableonetocompareand of graph neural networks, we use the implicit deep verifyallthepossiblepathsthroughequivalencechecking. learningparadigmfornearlyinfinitemessagepassing, However,itsapplicabilityislimited,asitrequiresstoring whichsignificantlyenablesglobalinformationaggre- 2gationinthegraphandreducesthememoryfootprint. 2020)andismoreefficientandpowerful. However,GNNs • We conduct experiments on two semi-synthetic struggletocapturelong-rangedependenciesinlargegraphs, datasetsandtworealworldvulnerabilitydatasets. We duetothefinitenumberofmessagepassingiterations. One compareourmethodsagainstseveralstate-of-the-art potentialsolutionistherecentlystudiedimplicitneuralnet- approachesandshowthatDeepEXEcanconsistently works. The implicit learning paradigm is different from outperformthebaselinesinallscenarios. traditionaldeeplearning,asitsolvesthesolutionforagiven equilibriumproblem,whichisformulatedasannearlyinfi- nitelayernetwork. Implicitmodelshavepreviouslyshown 2.RelatedWork success in domains such as sequence learning (Bai et al., VulnerabilityDetectionWhilevulnerabilitydetectioncan 2019),physicsengine(deAvilaBelbute-Peresetal.,2018), beconductedateitherthesourcecodeorbinarycodelevel, andgraphneuralnetworks (Guetal.,2020). wewilldiscussthemtogether,sincemostmethodscanbe appliedtobothlevels,withsomemodifications. Machine 3.Preliminaries learning-based (non-deep learning) methods involve the manual extraction of metrics and the input of these met- Inthissection,wedefinesomenecessarynotationinvolving
rics as features (Gupta et al., 2021; Sultana et al., 2021). ourlearningproblem,includingtheinputandoutput. We Themetricscanbemulti-levelandleveragethecomplexity providefurtherdiscussionaboutthegraphneuralnetwork characteristicsofaprogram,suchasthenumberofnested andthereinforcementlearninginAppendixA.4. loopswithinafunction. Manualfeatureextractionismore CFGs and Basic Blocks The input of the model is a bi- expensiveandrequiresexpertknowledge. Also,thefeatures nary file in assembly code. The assembly functions and needtobeconstantlyupdatedtoaccommodatechangesin theirCFGsarebothobtainedfromtheIDAProdisassem- thecodebase. Text-baseddeeplearningisverypopularfor bler 2. Each function is regarded as a graph G that con- sourcecodevulnerabilitydetection,wheredifferentgranu- tains segmented code blocks called basic blocks, which laritylevelscanbeleveragedinordertoobtaintextfeatures aresequencesofinstructionswithout anyjumporcallto orembeddings. Lietal. grouptokensbasedonsemantics otherblocks. Astheinputtotheneuralnetwork,agraph andsyntaxintoslicesorgadgets(Lietal.,2021;2018;Zou G = (V,A) has the blocks V ∈ Rn×v with n nodes, v etal.,2019),andfeedthemintoaLSTMmodel. Forbinary tokens,andtheadjacencymatrixA∈Rn×n. Adefinesall code,Instruction2Vec(Leeetal.,2017)andBin2img(Lee directededgeswithinthegraphandisobtainedbyextract- etal.,2019)useinstructionembeddingasapreprocessing ingcallstatementsbetweentheblocks. NotethatAhas0 step. SimilartoWord2Vec,theembeddingcontainscontex- acrossthediagonalelementandisnon-symmetrical. More- tualdependencyandcanbeusedtodetectvulnerabilitiesata over, we apply the re-normalization trick to A (Kipf and laterstage,whichisa1DCNNmodel. Thesemodelssolely Welling,2016),inordertopreventnumericalinstabilities focusonthesemanticsofthetokens,wherethestructural duringdeepnetworktraining. Forfilelevelclassification, informationisomitted. ThereareseveralGNNmodelsat we merge the function graphs as a whole, based on the thesourcecodethatusedifferentgraphsthatcanbeparsed functioncallinformation. Moreover,additionalinformation, fromsourcecode,suchasabstractsyntaxtrees,datadepen- suchascommentsandnames,areremoved. Thebasicblock dencegraphs,andcontrolflowgraphs(S¸ahinetal.,2022; V onlycontainsoperationsandoperandsofinstructions. Zhou et al., 2019; Cao et al., 2021). For GNN message passing,therearemultiplestylesthatwewilldiscussnext. ProblemStatementWedefineseveralneuralnetworkmod- uleswithinourarchitectureF = (F ,F ,F ),whereF GraphNeuralNetworksandImplicitModelsInbinary S I A S isthesequentialmodelforsemanticsembedding,F isthe code,GNNmethodsaimatlearningthestructuresbyfirst I implicitgraphneuralnetworkmodelforstructureandnode parsing the assembly code into control flow graphs and embedding,andF isthereinforcementlearningagentfor performingmessagepassing. Therearemultiplevariants A dynamic pathing optimizer, given certain program states. related to graph neural networks. The pioneer works of Thegoalistopredictwhethereachfunctioncontainsavul- graphneuralnetworksaremostlyassociatedwithrecurrent nerability. GiventheinputgraphG = (V,A),ourmodel graphneuralnetworks (Gorietal.,2005;Scarselliet al., learns several levels of information and aggregates them 2008; Gallicchio and Micheli, 2010; Li et al., 2015; Dai togetherforthefinaloutputofthemodel,whichisabinary etal.,2018),wherethenoderepresentationsareaggregated classificationscoreF : G → yˆ∈ R. Formally,wedefine withafixedsetofparameters. 
Convolutionalgraphneural thefollowinglearningtaskparameterizedbyθ: networks(KipfandWelling,2016;Hamiltonetal.,2017; Velicˇkovic´ etal.,2017)expandtheGNNbyusingmultiple 2IDAPro layerswithdifferentparameters. Thisapproachaddresses thecyclicmutualdependenciesarchitecturally(Wuetal., 3applying an embedding layer E : V → Rn×v×h next, wherehisthehiddendimension. Notethatweusehasthe yˆ=argmaxF (y′|G,θ);θ =argmaxF (y′ =y|G,θ) hiddendimensionthroughoutthepaperforsimplicity,but θ θ y′∈0,1 θ differentdimensionscanbeusedforanylayersinpractice. (1) The sequential model used in this task is a bi-directional GRU (Chung et al., 2014). The output of the GRU layer 4.NeuralControlFlowExecution U ∈Rn×v×hfurtherembedsthetokensemanticsbytaking contextual information into account. In order to obtain a WedesigntheDeepEXEarchitecturewithsemantic-driven representationfortheentirebasicblock,amaximumorav- andexecution-guidedprinciples.CFGsextractedfromdisas- eragepoolingalongthetimedimensionisusedtocompute semblycontaincrucialinformationabouttheprogramlogic U ∈Rn×hforblockembedding. andpaths,whichdictatestheoutputsandfunctionalitiesof assemblycode. Animportantcharacteristictodifferentiate 4.2.ProgramStateGuidedExecutionandFunctional CFGsfromgraphsinotherdomains,suchassocialnetworks Representation orchemistry,isthatnodestatesshouldbedependentonthe executionlogic. Programsareexecutedfollowingspecific Program State Initial node representation U establishes ordersbasedonthedependenciesamongtheedgescondi- thesemanticswithinbasicblocks,butitisnotsufficientto tionedbytheprogramstate,wheretheresultsandsemantics simplygloballyaggregateU forahigh-levelrepresentation cansubstantiallydifferwhenordersvary. ofthegraph. Inthisregard,areinforcementagentat(st−1)
that decides the next execution path is defined, given the previous program state s^{t-1}. Unlike traditional neural networks that perform forward and backward pass one at a time, our approach internally loops through multiple states t within a training epoch. We define the program state as a linear transformation of the node state X^t, where X^0 = U, and some trainable parameter W_s ∈ R^{h×1}:

s^t = σ(X^t W_s)    (2)

Agent Reparameterization Due to the backpropagation algorithm, categorical variables are hard to train in this stochastic environment in the neural network. This layer effectively becomes non-differentiable when using a normal sampling process such as argmax. A solution is to use the Gumbel softmax (Jang et al., 2016) to re-parameterize the state while maintaining the ability to backpropagate efficiently during training. Gumbel softmax is a continuous and differentiable distribution that can sample a categorical distribution; it is given by:

z_i^t = exp((log(s_i^{t-1}) + g_i)/τ) / Σ_{j=1}^{k} exp((log(s_j^{t-1}) + g_j)/τ),  for i = 1, ..., k    (3)

where z_i^t is the sample drawn from the state, g_i ∼ Gumbel(0,1) are samples drawn i.i.d. from the Gumbel distribution, and τ is the temperature controlling the discreteness of the new samples. Gumbel softmax works better with a lower value for τ ∈ [0,∞] as it approaches argmax smoothly, whereas setting a large value makes the samples become uniform.

Adjacency Matrix Update In each state update, the agent walks through the graph with the updated program state to capture the intermediate execution path that leads to certain results. We have the flexibility to design the agent to be either hard or soft. A soft agent a^t = z^t preserves the probabilities drawn from the Gumbel softmax, which implies that program information can flow in different execution paths at the same time based on the probabilities Σ_i z_i^t = 1. A hard agent mimics the execution path and is one-hot, leading to strictly one execution at a time. The agent a^t ∈ R^{n×1} is then used to select a path and generate the state-dependent adjacency matrix Ã^t, which is updated as Ã^t = A a^t.

We borrow the idea of symbolic execution (Baldoni et al., 2018) and create a neural CFG executor. A training epoch contains a full iteration of the executive session, which corresponds to a concrete execution path. Note that each epoch can have completely different execution paths, as the model learns. The overall architecture is shown in Appendix A.1. An example of the learning process is shown in Figure 1. The training consists of many training epochs. In Figure 1, the path for Epoch 1 goes into a loop, while Epoch 2 directly goes into the exit point. The execution agent performs multiple steps within an epoch. It starts from the entry node, then transitions to other possible nodes in each step. The decision on which branch to select depends on the program state X, and X_j^i indicates the updated program state at step i for node j. After jumping to the next node, the agent updates the program state and repeats the decision process until it reaches an equilibrium state.

Figure 1. In each epoch, the model simulates one execution session with a specific execution path consisting of multiple steps. At step i, the executor chooses the most likely branch for node j to move next based on program state and node semantics. This is one execution with a loop (Epoch 1) and one without (Epoch 2). The model then updates the program state by combining the next node's code semantics.

4.1. Token Semantics
In this section, we discuss the preprocessing, embedding, and sequential learning task. A basic block contains a stream of instructions, which can be further broken down into operations and operands, and be tokenized. We treat the entire block as one sentence and apply a subword and unigram (Kudo, 2018; Kudo and Richardson, 2018) model for the token encoding, which mitigates the out-of-vocabulary problem. Assembly code is compiler dependent and can easily result in out-of-vocabulary (OOV) tokens. A way to address the OOV issue is to break down the tokens into characters for encoding. Even with a fixed vocabulary size, unseen tokens can be encoded by matching the subword to their closest known tokens. Moreover, it is not language dependent and can be trained from scratch very efficiently. We increase the subspace representation power by simply

prediction task in equation 5, where f_ψ is an output function parameterized by ψ for the desired classification task. With the reinforcement agent embedded in the updated adjacency matrix Ã^* = Ã^t : t → ∞, our equilibrium solution is formulated as follows:

X^* = φ(X^* W Ã^* + b_Ω(U))    (6)

W ∈ R^{h×h} and Ω ∈ R^{h×h} are parameters, and U is the initial node feature. Note that only a single layer is required to produce the updated node representation X iteratively instead of multiple stacking layers. We also inject U into the equation through some affine transformation b_Ω. This ensures that original node semantics is preserved throughout

4.3. Executor Stepping Via Implicit GNN
Implicit GNN With the updated adjacency matrix from the agent, one can perform graph neural network on the CFG to aggregate neighbour information into the nodes. However, assembly code can be large for various reasons. For example, a GCC compiler can use an optimization level that minimizes the execution size and reduces the size of CFGs.
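To make the mechanics above concrete, the following is a minimal, illustrative PyTorch sketch, not the authors' implementation: it derives a program state from the node states (Eq. 2), samples a branching decision with a hard Gumbel-softmax (Eq. 3), masks the adjacency matrix with that decision, and repeats the implicit update of Eq. 6 until the state stops changing. The column-wise masking reading of Ã^t = A a^t, the tanh activation standing in for φ, and the toy sizes are assumptions made only for illustration; a faithful implementation would also restrict the branch choice to successors of the current node.

import torch
import torch.nn.functional as F

torch.manual_seed(0)
n, h = 6, 16                              # toy CFG: 6 basic blocks, hidden size 16
U = torch.randn(n, h)                     # initial basic-block embeddings
A = (torch.rand(n, n) < 0.3).float()      # toy directed adjacency matrix
A.fill_diagonal_(0)

W_s = torch.randn(h, 1) * 0.1             # program-state projection (Eq. 2)
W = torch.randn(h, h) * 0.1               # weight of the implicit update (Eq. 6)
b_omega = torch.nn.Linear(h, h)           # affine transformation injecting U (Eq. 6)

X = U.clone()
for step in range(30):                    # bounded number of state transitions
    s = torch.sigmoid(X @ W_s).squeeze(-1)            # program state s^t, one value per node
    # Branching agent: near one-hot sample over nodes via Gumbel-softmax (Eq. 3).
    a = F.gumbel_softmax(torch.log(s + 1e-8), tau=0.5, hard=True)
    # State-dependent adjacency; column-wise masking is an assumption of this sketch.
    A_t = A * a.unsqueeze(0)
    # One implicit state transition (Eq. 6); tanh stands in for the activation phi.
    X_new = torch.tanh(A_t.T @ X @ W + b_omega(U))
    if (X_new - X).norm() < 1e-4:                     # crude equilibrium check
        break
    X = X_new

graph_repr = X.mean(dim=0)                # pooled graph-level representation
print(step, tuple(graph_repr.shape))

In the full method the loop above is replaced by a fixed-point solver and the gradients are taken directly at the equilibrium, as described in the following sections; the sketch only shows how the agent and the state update interact within one forward pass.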
While GNN is a suitable approach to learn the structural the iterations when solving for the fixed point (Bai et al., dependencyofafunction,itrequiresapre-definednumber 2019). oflayers,whereeachlayerusuallyperforms1-hopmessage FixedPointAccelerationAlthoughtheequilibriumpoint passing. Intuitively,thevanillaGNNsdonotscalewellwith can be obtained from iterating equation 6 infinitely, it is largegraphsandcanfailtocaptureglobalinformation. The notthemostefficientandstablemethodforconvergence. dependencybetweenfurthernodescanbecrucialtounder- Moreimportantly,itdoesnotguaranteeconvergence. An- standtheoverallsemanticsofaprogram. Suchlongrange derson acceleration (Walker and Ni, 2011) is an acceler- dependency is difficult to capture with longer edges. To ated algorithm for finding fixed points. Given a function alleviatetheabovestatedproblem,weperformtheprogram f to solve, which is equation 6 in our case, we define statetransitionsinanimplicitlydefinedstyle. Ingeneral, (1) m = min{m,t} as the parameter for controlling thetransitionatstatetcanbewrittenasanimplicitformof k past iteration memory by setting m to any positive inte- theGNNlayer: ger; (2) g(x) = f(x)−x as the residual with the matrix Xt+1 =ϕ(XtWtAt+U) (4) G t =[g t−mt,...,g t]. TherootsolvingprocessusingAnder- sonaccelerationisformulatedas: y′ =f (X∗) (5) ψ Suchformoflayerdoesnotexplicitlyoutputavectortobe α t =argmin||G tα|| 2, α fedintothenextlayer. Instead,itusesafixedpointiteration inequation4thataimstofindtheequilibriumvectorstate whereα=(α ,...,α )∈Rmt+1 :(cid:88)mt α =1 (7) X∗ ast → ∞. Theequilibriumstateisthenusedforthe 0 mt i i=0 5(cid:88)mt The terms ∂l and ∂G can be both computed using any xt+1 = (α ) f(x ) (8) ∂G ∂X∗ t i t−mt+i autograd software. However, the term ∂X∗ is difficult to i=0 ∂θ compute,sincetheequilibriumpointX∗isobtainedthrough Insteadofcomputingforxt+1directlyfromxt,Anderson iterativerootfinding. Ifweunrollthiscomputationgraph, acceleration solves for a coefficient α in an optimization thenetworkneedstostoreallintermediategradientsforev- problemthatminimizesthenormofg(x). erystatetransition. Dependingonthenumberoftransitions, itisnotapracticalapproach. Instead, wewriteX∗ inits StateTransitionTerminationTheexecutorterminatesin implicitlydefinedform: threedifferentscenarios. (1)Iftheexecutorreachestheexit point on the CFG, there will not be any updates to Xt+1 X∗(θ)=ϕ(X∗WA˜∗+b (U))=F (X∗(θ),U) (13) afterEquation6,naturallyleadingtoanequilibriumstate. Ω I (2)Iftheexecutorreachesanequilibriumstate,butnotat where F denotes the implicit graph neural network. By I the program exit point, it logically indicates that further takingthederivativewithrespecttoθ,weobtain: execution will not result in changes in the program state. Therefore, it is natural to terminate. (3) If the executor ∂X∗(θ) = ∂F I(X∗(θ),U) (14) reachesaconfiguredmaximumsteps. ∂θ ∂θ OnceX∗isatequilibrium,weapplylayernormalization(Ba Byapplyingthechainruleontherighthandsideofequa- etal.,2016)andglobalaveragepoolinglayertoobtainthe tion14,weexpanditintothefollowing: graphrepresentationG: ∂X∗(θ) ∂F (X∗,U) ∂F (X∗,U)∂X∗(θ) = I + I (15) (cid:80)nXT ∂θ ∂θ ∂X∗ ∂θ G=LayerNorm( i i,j),∀j =1,...,h (9) n Atthispoint,both ∂FI(X∗,U) and ∂FI(X∗,U) canagainbe ∂θ ∂X∗ The prediction task can be simply computed by a linear obtainedusingautogradsoftware. Thelastunknownterm transformationtogetthelogits: ∂X∗(θ) is computed by solving the linear system. In our ∂θ approach,weuseAndersonaccelerationtoiterativelysolve y′ =W pG,whereW p ∈R1×h (10) thisterm. 
Through implicit differentiation, we directly evaluate the We want to emphasize that through the use of an implic- gradientattheequilibriumpoint. Weavoidthecomputa- itly defined GNN layer, it is no longer required to have tionofanyintermediatestatetransitionandcanefficiently multiplestackingGNNlayerstoachievehigherordernode backpropagatethroughthenetwork,evenwithanearlyinfi- aggregation. Instead,eachstatetransitionwithinthelayer nitenumberoftransitions. Thisalsohasabettermemory effectivelyperformsamessagepassing,asanormalGNN footprint. layerwould. Thishasthebenefitsofloweringthememory costs,whilemaintainingthesamelevelofrepresentational power,givensimilarparametercount. Moreover,thelong 5.Experiment range dependency issue can be effectively addressed by Inthissection,wedemonstratetheabilityofDeepEXEon iteratinganearlyinfinitenumberofstatetransitions. predictingbinarycodevulnerabilityinavarietyofscenar- ios. To properly evaluate DeepEXE, we conduct experi- 4.4.Training mentsusingtwosemi-syntheticdatasetsandtworealworld While the forward pass in an implicit network possesses datasets. The NDSS183 and Juliet Test Suites4 are both somenicepropertiesforthenetworkdiscussedearlier,itis semi-syntheticdatasetscommonlyusedasforvulnerabil- notatrivialtasktotrainthebackwardpass. Traditionally,a itydetectiontasks. Thoughthepracticalimplicationsfora neuralnetworkcontainsexactoperationswithexplicitlyde- methodshouldnotsolelydependonthesyntheticresults,
finedinputandoutput,wherethegradientscanbecomputed astheyarelesscomplex. Forrealworlddatasetsthatare viachainrule. Wefirstdefinethelossterml: largerandcancontainlesstrivialvulnerabilities,weemploy theFFmpeg5andEsh(Davidetal.,2016)datasets. Thede- l=L(yˆ,y)=L(F ψ(G),y) (11) tailsofthedatasetscanbefoundinAppendixA.2. Forthe baselinemethods,weinherittheresultsreportedinprevious F ψ isthepredictionrulethattakesthegraphembeddingG. works,duetothelargeamountofexperimentsanddifferent L(·)computesthecrossentropylossandoutputsthescalar l. Usingchainrule,thelosscanbebackpropagatedas: 3https://samate.nist.gov/SRD/index.php,SoftwareAssurance ReferenceDataset ∂l ∂l ∂G ∂X∗ 4https://samate.nist.gov/SARD/test-suites,NISTTestSuites = (12) 5https://ffmpeg.org/,FFmpeg ∂θ ∂G∂X∗ ∂θ 6Table1. NDSS18DatasetEvaluation Models InputType Accuracy Recall Precision F1 AUC Bi-LSTM AssemblyIns. 85.38 83.47 87.09 85.24 94.89 GCN CFG 86.48 84.59 88.12 86.32 95.81 MD-CWS(Leetal.,2018) AssemblyIns. 85.30 98.10 78.40 87.10 85.20 MD-CKL(Leetal.,2018) AssemblyIns. 82.30 98.00 74.80 84.00 82.10 MD-RWS(Leetal.,2018) AssemblyIns. 83.7 94.3 78.0 85.4 83.5 MDSAE-NR(Albahar,2020) AssemblyIns. 87.50 99.30 81.20 89.80 87.10 TDNN-NR(Albahar,2020) AssemblyIns. 86.60 98.70 80.30 88.30 86.30 VulDeePecker(Lietal.,2018) SourceCodeGadgets 83.50 91.00 79.50 84.80 83.40 DeepEXE CFG 90.58 89.36 92.13 90.72 98.01 Table2. JulietDatasetEvaluation Models InputType Accuracy Recall Precision F1 AUC Bi-LSTM AssemblyIns. 96.81 98.44 95.48 96.94 99.03 gcn(Arakelyanetal.,2020) CFG 97 NA NA NA NA i2v/CNN(Leeetal.,2017) AssemblyIns. 87.6 N/A N/A N/A N/A i2v/TCNN(Leeetal.,2017)AssemblyIns. 96.1 N/A N/A N/A N/A w2v/CNN(Leeetal.,2017) AssemblyIns. 87.9 N/A N/A N/A N/A w2v/TCNN(Leeetal.,2017)AssemblyIns. 94.2 N/A N/A N/A N/A i2v(Leeetal.,2019) AssemblyIns. 96.81 97.07 96.65 96.85 N/A bin2img(Leeetal.,2019) AssemblyIns. 97.53 97.05 97.91 97.47 N/A w2v(Leeetal.,2019) AssemblyIns. 96.01 96.07 95.92 95.99 N/A DeepEXE CFG 99.80 99.60 100.00 99.80 100.00 setups. Theevaluationmetricsreportedincludeaccuracy, tocapturemoretopologicalinformation. Notethatevenin precision,recall,F1score,andareaundertheROCcurve ascenariowheretherecallmetricishighlyimportant,the (AUC).Werandomlyspliteachdatasetinto75%fortraining classificationthresholdcanalwaysbeadjustedtoaccommo- and25%forevaluation. Somemetricsarenotshowninthe datethebalancebetweenprecisionandrecall. Wearealso baselinesbecauseoftheirabsenceintheoriginalworks. abletooutperformVulDeePecker,whichisasourcecode levelmethodthatonlyleveragesthesequentialinformation 5.1.Evaluation of the code gadget, potentially omitting the much useful topologicalknowledgeofthesourcecode. For each dataset, we compare DeepEXE with the bench- marks that are also evaluated on the same dataset due to The Juliet dataset evaluation is shown in Table 2. As a limitedspace. Thedetailsofeachbaselineisdescribedin syntheticdataset,thetestcasescontainmuchshortercode. AppendixA.3. Additionally,webuildbaselinemodelsthat However,thereareover100differentCWEsamongalltest work inherently well in this task including bi-directional cases. Inreality,adetectiontoolshouldberobustenough LSTM (Bi-LSTM) (Hochreiter and Schmidhuber, 1997) to detect unseen or zero-day vulnerabilities. It is useful andgraphconvolutionnetwork(GCN)(KipfandWelling, forevaluatingtherobustnessandgeneralizabilityofanap- 2016). proach. DeepEXEshowsnearlyperfectdetectionaccuracy and AUC for this dataset. 
This shows that even with the Semi-SyntheticResultsWefirstanalyzetheresultsforthe single-layer design, DeepEXE is able to generalize well NDSS18datasetshowninTable.1. Thetwobaselineswe enough. Asthegraphsareusuallysmallinthesetestcases, implemented(Bi-LSTMandGCN)havesurprisinglycom- theexecutionpathsgeneratedbystaticanalysisarelikely parableresultswiththebenchmarks,includingMDSAE-NR more accurate. Therefore, we believe the implicit GNN andTDNN-NR.AlltheMDSAE-basedmethodshaveim- contributesmoretotheperformanceincreasethantheagent balanced precision and recall, where the models tend to inthiscase. overestimatethevulnerablecode. DeepEXEhasthebest overallperformance,leadingtheaccuracyandAUCby 3%. RealCVEResultsWeevaluatetheFFmpegdatasetshown Moreover,DeepEXEisaCFG-basedmethodandweempir- inTable3,whichspecifiesthecodelevelsandinputtypes. icallyshowthatbyaddingtheexecution-guidedagentand SinceDevigndetectsvulnerabilitiesatthesourcecodelevel, expandingthereceptivefieldofgraphconvolution,itisable itissignificantlyeasierwiththerichsemantics,syntax,and 7Table3. FFmpegDatasetEvaluation Models CodeLevel InputType Accuracy F1 Bi-LSTM(Zhouetal.,2019) SourceCode CodeSnippets 53.27 69.51 Bi-LSTM+Attention(Zhouetal.,2019) SourceCode CodeSnippets 61.71 66.01
CNN(Zhouetal.,2019) SourceCode CodeSnippets 53.42 66.58 GGRN-CFG(Zhouetal.,2019) SourceCode CFG 65.00 71.79 GGRN-composite(Zhouetal.,2019) SourceCode AST,CFG,DFP,NCS 64.46 70.33 Devign-CFG(Zhouetal.,2019) SourceCode CFG 66.89 70.22 Devign-composite(Zhouetal.,2019) SourceCode AST,CFG,DFP,NCS 69.58 73.55 DeepEXE BinaryCode CFG 68.29 67.17 Table4. EshDatasetEvaluation Models InputType Accuracy Recall Precision F1 AUC Bi-LSTM AssemblyIns. 99.49 79.48 88.57 83.78 96.87 GCN CFG 99.31 63.89 95.83 76.67 83.54 DeepEXE CFG 99.78 95.65 91.67 93.62 99.78 structures. DeepEXEisabletooutperformmostoftheap- 6.Conclusions proaches,evenatthebinarycodelevel. Inparticular,when We have proposed DeepEXE, a control flow execution- only using the CFG as input, DeepEXE achieves better guideddeeplearningframeworkforbinarycodevulnerabil- accuracythanboththeDevignandGGRNmodels. Devign- itydetection. Giventheimportanceofbinarycodelearning, compositeutilizesmultipleinputgraphs,suchasAST,DFP weaddresstwomajorgapsintheexistingresearchworks, andNCS.Theseadditionalgraphsareusuallyonlyavailable which are the lack of modelling program state transition forsourcecode. DeepEXEshowsitscapabilityatdetecting and scalability for large graphs. Instead of assuming the vulnerabilitiesforreal-worldandcomplexprograms. More- CFG is accurate, which is often not the case, due to the over, sourcecodeCFGsarelesscomplicatedtogenerate, over-estimation from static analysis, we use a reinforce- whereasbinaryCFGsoftencanbeanover-estimationofthe mentagenttoguidetheexecutionofaprogramflowthat truecontrolflow. Withourexecution-guidedapproach,we mimics the behaviour of dynamic analysis. DeepEXE is limittheerrorscausedbysuchapproximation,whilemain- abletocapturecertainprogramstatetransitionsthatleadto taining a high level of global information. The receptive specificvulnerabilityresults,creatingahigherdependency fieldofGNNinDeepEXEispracticallyunlimited,allowing betweentheoutputandinternalnodestateandtopological ustoaccommodateformuchlargergraphs. information. Wealsoshowthebenefitsoftraininganim- Lastly,weshowtheevaluationresultsfortheEshdataset plicitlydefinednetwork, whicharedirectlyobtainingthe inTable4. Duetotheextremeimbalanceoflabelsdistri- gradientsfortheequilibriumpointandmitigatingtheheavy bution, which is the case in many real-life scenarios, the memory footprint in large networks. In the experiments, Bi-LSTMandGCNbaselineshavelowerrecalls. There- wedemonstratethatDeepEXEoutperformsallstate-of-the- call metric is important when there are fewer vulnerable art vulnerability detection methods for the NDSS18 and cases. DeepEXE,ontheotherhand,isabletodistinguish Julietdatasets. DeepEXEisalsoverycompetitiveindetect- vulnerablecodefromnon-vulnerablecode,giventhesmall ingrealworldCVEs,evenwhencomparedtosourcecode numberofpositivelabels. Notethattheclassweightisnot levelmethods,whicharelessdifficult,giventheamountof manuallyadjustedduringtraining,asitiscumbersomeand availableinformation. Overall, DeepEXEisarobustand inefficienttotuneitforeverydatasetinpractice. Withover accuratetoolforbinaryvulnerabilitydetection. 90% percision, DeepEXE is able to identify 95% of the Inthefuture,thereareseveralpotentialdirectionstogrow vulnerableCVEcases. SimilartoFFmpeg,althoughmany forDeepEXE.Firstofall,thetrainingtimeisslowerthan casesintheEshdatasetcontainalargenumberofnodes, forthetraditionalneuralnetwork,duetothemanyiterations DeepEXEisinherentlydesignedtohandlesuchlargegraphs forobtainingequilibrium. Thiscanbeimprovedbyusing andoutperformotherbaselines. 
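The precision/recall trade-off discussed above, adjusting the classification threshold rather than re-weighting classes, can be illustrated with a small self-contained sketch. The scores below are synthetic and only mimic the heavy label imbalance reported for the Esh dataset; the 0.90 precision target is an arbitrary choice for illustration.

import numpy as np
from sklearn.metrics import precision_recall_curve

rng = np.random.default_rng(0)
# Synthetic, highly imbalanced scores: 60 "vulnerable" vs. 3319 "benign" samples.
y_true = np.concatenate([np.ones(60), np.zeros(3319)])
scores = np.concatenate([rng.normal(0.8, 0.15, 60), rng.normal(0.2, 0.15, 3319)])

precision, recall, thresholds = precision_recall_curve(y_true, scores)
# Pick the lowest threshold that still keeps precision above the chosen target,
# which maximizes recall subject to that precision constraint.
target_precision = 0.90
ok = precision[:-1] >= target_precision
t = thresholds[ok][0]
print(f"threshold={t:.2f} precision={precision[:-1][ok][0]:.2f} recall={recall[:-1][ok][0]:.2f}")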
moresophisticatedsolverstoreducethenumberofstepsfor equilibriumcomputation. Next, DeepEXEdoesnothave toberestrictedtovulnerabilitydetectioninthecybersecu- ritydomain. Forothersecuritytasks,suchasbinarycode similaritycomparisonormalwaredetection,matchingthe 8graphstructuresofmaliciousprogramsisoftendoneusing HanjunDai,ZornitsaKozareva,BoDai,AlexSmola,and GNN.Bymodifyingthetrainingobjective,DeepEXEcan LeSong.2018. Learningsteady-statesofiterativealgo- beusedforalotmoreofsupervisedandunsupervisedtasks. rithmsovergraphs.InInternationalconferenceonma- Moreover,aslongastheinputdatahassomeformofgraph- chinelearning.PMLR,1106–1114. icalstructures,wecanapplythesamedesigntomanyother domains,suchassocialnetworkandchemistrystudies. YanivDavid,NimrodPartush,andEranYahav.2016. Sta- tisticalsimilarityofbinaries. AcmSigplanNotices51,6 (2016),266–280. References YanivDavidandEranYahav.2014. Tracelet-basedcode Marwan Ali Albahar. 2020. A Modified Maximal Diver- searchinexecutables. AcmSigplanNotices49,6(2014), genceSequentialAuto-EncoderandTimeDelayNeural 349–360. NetworkModelsforVulnerableBinaryCodesDetection. IEEEAccess8(2020),14999–15006. FilipedeAvilaBelbute-Peres,KevinSmith,KelseyAllen, JoshTenenbaum, andJZicoKolter.2018. End-to-end Nikolaos Alexopoulos, Sheikh Mahbub Habib, Steffen differentiablephysicsforlearningandcontrol. Advances Schulz,andMaxMu¨hlha¨user.2020. Thetipoftheice- inneuralinformationprocessingsystems31(2018). berg: Onthemeritsoffindingsecuritybugs. ACMTrans-
actions on Privacy and Security (TOPS) 24, 1 (2020), Yue Duan, Xuezixiang Li, Jinghan Wang, and Heng Yin. 1–33. 2020. Deepbindiff: Learningprogram-widecoderepre- sentationsforbinarydiffing.InNetworkandDistributed Shushan Arakelyan, Christophe Hauser, Erik Kline, and SystemSecuritySymposium. AramGalstyan.2020. TowardsLearningRepresentations of Binary Executable Files for Security Tasks. arXiv LaurentElGhaoui,FangdaGu,BertrandTravacca,Armin preprintarXiv:2002.03388(2020). Askari, and Alicia Tsai. 2021. Implicit deep learning. SIAM Journal on Mathematics of Data Science 3, 3 Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hin- (2021),930–958. ton. 2016. Layer normalization. arXiv preprint arXiv:1607.06450(2016). KatherynAFarris,AnkitShah,GeorgeCybenko,Rajesh Ganesan,andSushilJajodia.2018. Vulcon: Asystemfor ShaojieBai,JZicoKolter,andVladlenKoltun.2019. Deep vulnerabilityprioritization,mitigation,andmanagement. equilibrium models. Advances in Neural Information ACMTransactionsonPrivacyandSecurity(TOPS)21,4 ProcessingSystems32(2019). (2018),1–28. Roberto Baldoni, Emilio Coppa, Daniele Cono D’elia, ClaudioGallicchioandAlessioMicheli.2010. Graphecho Camil Demetrescu, and Irene Finocchi. 2018. A sur- statenetworks.InThe2010internationaljointconference veyofsymbolicexecutiontechniques. ACMComputing onneuralnetworks(IJCNN).IEEE,1–8. Surveys(CSUR)51,3(2018),1–39. Marco Gori, Gabriele Monfardini, and Franco Scarselli. AbrahamBermanandRobertJPlemmons.1994. Nonnega- 2005. A new model for learning in graph domains. In tivematricesinthemathematicalsciences. SIAM. Proceedings.2005IEEEinternationaljointconference onneuralnetworks,Vol.2.729–734. Sicong Cao, Xiaobing Sun, Lili Bo, Ying Wei, and Bin Li. 2021. Bgnn4vd: constructing bidirectional graph FangdaGu,HengChang,WenwuZhu,SomayehSojoudi, neural-networkforvulnerabilitydetection. Information andLaurentElGhaoui.2020. Implicitgraphneuralnet- andSoftwareTechnology136(2021),106576. works. AdvancesinNeuralInformationProcessingSys- tems33(2020),11984–11995. MahinthanChandramohan,YinxingXue,ZhengziXu,Yang Liu, Chia Yuan Cho, and Hee Beng Kuan Tan. 2016. AakanshiGupta,BhartiSuri,VijayKumar,andPragyashree BinGo:cross-architecturecross-OSbinarysearch.InPro- Jain.2021. Extractingrulesforvulnerabilitiesdetection ceedingsofthe201624thACMSIGSOFTInternational withstaticmetricsusingmachinelearning. International SymposiumonFoundationsofSoftwareEngineering. JournalofSystemAssuranceEngineeringandManage- ment12,1(2021),65–76. JunyoungChung,CaglarGulcehre,KyungHyunCho,and Yoshua Bengio. 2014. Empirical evaluation of gated WilliamLHamilton,RexYing,andJureLeskovec.2017. recurrentneuralnetworksonsequencemodeling. arXiv Inductiverepresentationlearningonlargegraphs. arXiv preprintarXiv:1412.3555(2014). preprintarXiv:1706.02216(2017). 9Jacob A Harer, Louis Y Kim, Rebecca L Russell, Onur Zhen Li, Deqing Zou, Shouhuai Xu, Xinyu Ou, Hai Jin, Ozdemir,LeonardRKosta,AkshayRangamani,LeiH Sujuan Wang, Zhijun Deng, and Yuyi Zhong. 2018. Hamilton, Gabriel I Centeno, Jonathan R Key, Paul M VulDeePecker: A deep learning-based system for vul- Ellingwood, et al. 2018. Automated software vulnera- nerability detection. arXiv preprint arXiv:1801.01681 bility detection with machine learning. arXiv preprint (2018). arXiv:1803.04497(2018). BingchangLiu,WeiHuo,ChaoZhang,WenchaoLi,Feng Sepp Hochreiter and Ju¨rgen Schmidhuber. 1997. Long Li,AihuaPiao,andWeiZou.2018. αdiff: cross-version short-term memory. Neural computation 9, 8 (1997), binary code similarity detection with dnn. In Proceed- 1735–1780. 
ingsofthe33rdACM/IEEEInternationalConferenceon AutomatedSoftwareEngineering.667–678. EricJang,ShixiangGu,andBenPoole.2016. Categorical reparameterizationwithgumbel-softmax. arXivpreprint TomasMikolov,KaiChen,GregCorrado,andJeffreyDean. arXiv:1611.01144(2016). 2013. Efficient estimation of word representations in vectorspace. arXivpreprintarXiv:1301.3781(2013). James C King. 1976. Symbolic execution and program LuanaRuiz,FernandoGama,andAlejandroRibeiro.2020. testing. Commun.ACM19,7(1976),385–394. Gatedgraphrecurrentneuralnetworks. IEEETransac- ThomasNKipfandMaxWelling.2016. Semi-supervised tionsonSignalProcessing68(2020),6303–6318. classificationwithgraphconvolutionalnetworks. arXiv SefaErenS¸ahin,EcemMineO¨zyedierler,andAyseTosun. preprintarXiv:1609.02907(2016). 2022. Predictingvulnerabilityinducingfunctionversions TakuKudo.2018. Subwordregularization: Improvingneu- usingnodeembeddingsandgraphneuralnetworks. In- ral network translation models with multiple subword formationandSoftwareTechnology145(2022),106822. candidates. arXivpreprintarXiv:1804.10959(2018). FrancoScarselli,MarcoGori,AhChungTsoi,MarkusHa- TakuKudoandJohnRichardson.2018. Sentencepiece: A genbuchner,andGabrieleMonfardini.2008. Thegraph simpleandlanguageindependentsubwordtokenizerand neuralnetworkmodel. IEEEtransactionsonneuralnet- detokenizer for neural text processing. arXiv preprint works20,1(2008),61–80.
arXiv:1808.06226(2018). KaziZakiaSultana,VaibhavAnu,andTai-YinChong.2021. Usingsoftwaremetricsforpredictingvulnerableclasses TueLe,TuanNguyen,TrungLe,DinhPhung,PaulMon- and methods in Java projects: A machine learning ap- tague, OlivierDeVel, andLizhenQu.2018. Maximal proach. JournalofSoftware: EvolutionandProcess33, divergence sequential autoencoder for binary software 3(2021),e2303. vulnerabilitydetection.InInternationalConferenceon LearningRepresentations. DonghaiTian,XiaoqiJia,RuiMa,ShukeLiu,WenjingLiu, and Changzhen Hu. 2021. BinDeep: A deep learning Yongjun Lee, Hyun Kwon, Sang-Hoon Choi, Seung-Ho approach to binary code similarity detection. Expert Lim, SungHoonBaek, andKi-WoongPark.2019. In- SystemswithApplications168(2021),114348. struction2vec: EfficientPreprocessorofAssemblyCode to Detect Software Weakness with CNN. Applied Sci- JunfengTian, WenjingXing, andZhenLi.2020. BVDe- ences9,19(2019),4086. tector: Aprogramslice-basedbinarycodevulnerability intelligentdetectionsystem. InformationandSoftware YoungJunLee,Sang-HoonChoi,ChulwooKim,Seung-Ho Technology123(2020),106289. Lim,andKi-WoongPark.2017. Learningbinarycode withdeeplearningtodetectsoftwareweakness.InKSII Petar Velicˇkovic´, Guillem Cucurull, Arantxa Casanova, The 9th International Conference on Internet (ICONI) Adriana Romero, Pietro Lio, and Yoshua Bengio. 2017Symposium. 2017. Graph attention networks. arXiv preprint arXiv:1710.10903(2017). YujiaLi,DanielTarlow,MarcBrockschmidt,andRichard Zemel. 2015. Gated graph sequence neural networks. HomerFWalkerandPengNi.2011. Andersonacceleration arXivpreprintarXiv:1511.05493(2015). for fixed-point iterations. SIAM J. Numer. Anal. 49, 4 (2011),1715–1735. ZhenLi,DeqingZou,ShouhuaiXu,ZhaoxuanChen,Yawei Zhu,andHaiJin.2021. Vuldeelocator: adeeplearning- Ronald J Williams. 1992. Simple statistical gradient- basedfine-grainedvulnerabilitydetector. IEEETransac- following algorithms for connectionist reinforcement tionsonDependableandSecureComputing(2021). learning. Machinelearning8,3(1992),229–256. 10ZonghanWu,ShiruiPan,FengwenChen,GuodongLong, ThemainsoftwareusedincludesPython3.9.10andPyTorch ChengqiZhang,andSYuPhilip.2020. Acomprehensive 1.10.2onUbuntu20.04.3LTS. surveyongraphneuralnetworks. IEEEtransactionson Semi-SyntheticDatasetsincludetheNDSS18datasetand neuralnetworksandlearningsystems32,1(2020),4–24. Juliet Test Suite. The NDSS18 dataset is a derivation TaoXie,NikolaiTillmann,JonathanDeHalleux,andWol- from the National Institute of Standards and Technology framSchulte.2009. Fitness-guidedpathexplorationin (NIST): NVD6 and the Software Assurance Reference dynamicsymbolicexecution.In2009IEEE/IFIPInterna- Dataset (SARD) project7. NDSS18 was first published tionalConferenceonDependableSystems&Networks. by(Lietal.,2018)asasourcecodevulnerabilitydatasetand IEEE,359–368. latercompiledtobinarycodeby(Leetal.,2018)forbinary leveldetection.Itincludesatotalof32,281binaryfunctions XiaojunXu,ChangLiu,QianFeng,HengYin,LeSong,and that are compiled using Windows and Linux. There are DawnSong.2017. Neuralnetwork-basedgraphembed- twotypesofCommonWeaknessEnumerations(CWEs)8 dingforcross-platformbinarycodesimilaritydetection. in NDSS18: CWE119 and CWE399. Juliet Test Suite is InProceedingsofthe2017ACMSIGSACConferenceon acollectionof81,000testcasesinC/C++andJavafrom ComputerandCommunicationsSecurity.363–376. NIST9thatcontain112differentCWEs. Bothdatasetshave nearlybalanceddistributionsforthelabels. HanYan, SenlinLuo, LiminPan, andYifeiZhang.2021. 
HAN-BSVD: a hierarchical attention network for binary software vulnerability detection. Computers & Security 108 (2021), 102286.

Zeping Yu, Rui Cao, Qiyi Tang, Sen Nie, Junzhou Huang, and Shi Wu. 2020. Order matters: semantic-aware neural networks for binary code similarity detection. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 34. 1145-1152.

Yaqin Zhou, Shangqing Liu, Jingkai Siow, Xiaoning Du, and Yang Liu. 2019. Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks. In Advances in Neural Information Processing Systems. 10197-10207.

Deqing Zou, Sujuan Wang, Shouhuai Xu, Zhen Li, and Hai Jin. 2019. µVulDeePecker: A Deep Learning-Based System for Multiclass Vulnerability Detection. IEEE Transactions on Dependable and Secure Computing 18, 5 (2019), 2224-2236.

Fei Zuo, Xiaopeng Li, Patrick Young, Lannan Luo, Qiang Zeng, and Zhexin Zhang. 2018. Neural machine translation inspired binary code similarity comparison beyond function pairs. arXiv preprint arXiv:1808.04706 (2018).

A. Appendix

A.1. Architecture

We show the overall architecture, including the input preprocessing, semantics learning, state transition, and predictions and training, in Figure 2.

[Figure 2: architecture diagram. It depicts the pipeline from source/binary code through preprocessing and tokenization, Bi-GRU node embedding, looped state transitions X^{t+1} = phi(W X^t A_hat^t + b Omega U) up to the equilibrium X* = phi(W X* A_hat* + b Omega U) with a REINFORCE agent a_t(s_t, X_t) re-parameterizing A_hat^t, followed by graph pooling and the prediction layer. Caption: Overall architecture of DeepEXE. Four major segments of our model include input preprocessing, node embedding through a sequential model, state transition and structure learning, and prediction and training. DeepEXE combines the local instruction semantics with high-level topological information, where program dependencies are captured through the use of a REINFORCE agent and a GNN with a much larger receptive field.]

A.2. Datasets and Setup

The hardware used for the experiments includes an RTX 6000 GPU, an Intel Xeon Gold 5218 CPU, and 64GB of memory.

Real CVE Datasets include the FFmpeg vulnerabilities and Esh datasets, which are both extracted from real-world applications or open-source libraries. The codebase is significantly larger than the ones in semi-synthetic datasets. Vulnerabilities are often harder to detect in these programs, due to the much increased complexity. FFmpeg10 is an open-source suite of libraries written in C for handling media files, such as video and audio. It was first used in source code vulnerability detection (Zhou et al., 2019), where the authors manually collected and labelled the data for various vulnerability commits on Github. We compile the FFmpeg source code provided by the authors into binary code and obtain 16,494 binary functions, where 7,257 are vulnerable and 9,237 are non-vulnerable. The Esh dataset contains CVE cases collected by David et al. (David et al., 2016), which include 8 different CVEs: cve-2014-0160, cve-2014-6271, cve-2015-3456, cve-2014-9295, cve-2014-7169, cve-2011-0444, cve-2014-4877, and cve-2015-6826. In total, there are 3,379 cases and only 60 are vulnerable. The distribution of vulnerability in the Esh dataset is highly imbalanced, which represents a more realistic scenario.

6 https://nvd.nist.gov/, National Institute of Standards and Technology
7 https://samate.nist.gov/SRD/index.php, Software Assurance Reference Dataset
8 https://cwe.mitre.org/, Common Weakness Enumeration (CWE)
9 https://samate.nist.gov/SARD/test-suites, NIST Test Suites
10 https://ffmpeg.org/, FFmpeg

A.3. Baselines and hyperparameters

NDSS18 baselines Maximal Divergence sequential Autoencoder (MDSAE) was proposed by (Le et al., 2018) and uses a deep representation learning approach. The model aims to discriminate the vulnerable and non-vulnerable code by forcing their representations to be maximally divergent. The input to MDSAE is a sequence of binary code instructions. We report the three best variants in this paper. MMDSAE (Albahar, 2020) is a modified version inspired by MDSAE and uses a similar approach that adds a regularization technique. The authors propose two variants, namely MDSAE-NR and TDNN-NR, that have similar performance. The last baseline we include for this dataset is VulDeePecker (Li et al., 2018), which is a source code vulnerability detection approach. Instead of using binary code instructions, VulDeePecker takes source code gadgets as input, which are blocks of code with tightly associated semantics. Although this is a source code evaluation, the underlying dataset used is the same. We therefore include VulDeePecker as one of the baselines. It also only uses sequences of code as input and does not consider any topological information from the code graphs.

Juliet baselines Bin2Vec (Arakelyan et al., 2020) is a graph-based binary vulnerability detection approach that utilizes a graph convolution network by taking the CFG as input. Instruction2Vec (Lee et al., 2017) is a representation learning approach that embeds the assembly instructions into vectors and applies the downstream vulnerability detection task. The instruction embedding is similar to Word2Vec; it utilizes different parts of an instruction and combines them into a single vector. The downstream vulnerability detection is achieved by training a CNN or Text-CNN using the vectors. The same authors later proposed an updated version of Instruction2Vec (Lee et al., 2019) that includes a few variants, including Word2Vec and Binary2Img. All Instruction2Vec-related methods use the assembly instructions as input and do not consider the structural information.

FFmpeg baselines For the FFmpeg dataset, we compared DeepEXE to Devign (Zhou et al., 2019), which provides the source code for FFmpeg. Devign uses a gated graph recurrent network (GGRN) (Ruiz et al., 2020) as a graph learning technique. Unlike binary code, where only the CFG can be extracted, Devign detects vulnerabilities at the source code level. It takes several intermediate graph representations of source code, such as AST, CFG, DFG, and NCS. We include several variants of Devign, such as Bi-LSTM, GGRN with CFG or all graphs, and Devign with CFG or all graphs. Note that we directly compare our results on the binary code with the original results of Devign, which are based on source code.

Esh baselines To the best of our knowledge, Esh is not used in any other papers for vulnerability detection evaluation. The original paper that provided this dataset evaluates it at the basic block level and focuses on code matching. Therefore, we compare DeepEXE to the Bi-LSTM and GCN baselines we implemented ourselves.

For the baseline methods, we directly inherit the results reported in previous works, due to the large amount of experiments and different setups. The evaluation metrics reported include accuracy, precision, recall, F1 score, and area under the ROC curve (AUC). Cross-validation is used for tuning hyperparameters in order to obtain optimal accuracy and reasonable memory usage. We use a universal hidden dimension of 64, a learning rate of 0.01 with the Adam optimizer, a dropout rate of 0.5, a batch size of 192, and a maximum iteration of 50 for the Anderson acceleration solver. We randomly split each dataset into 75% for training and 25% for evaluation. Some metrics are not shown for the baselines because of their absence in the original works. The hardware used for the experiments includes an RTX 6000 GPU, an Intel Xeon Gold 5218 CPU, and 64GB of memory. The main software used includes Python 3.9.10 and PyTorch 1.10.2 on Ubuntu 20.04.3 LTS.
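As an illustration only, the following is a minimal PyTorch sketch of how the reported training configuration (hidden dimension 64, Adam with learning rate 0.01, dropout 0.5, batch size 192, a 75%/25% split, and a 50-iteration cap for the fixed-point solver) could be wired up. The stand-in model and toy dataset are assumptions for illustration and are not the authors' released implementation.

import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader, random_split

# Reported hyperparameters, collected in one place.
config = {"hidden_dim": 64, "lr": 0.01, "dropout": 0.5, "batch_size": 192,
          "solver_max_iter": 50, "train_fraction": 0.75}

# Stand-in model: only shows where the hidden dimension and dropout rate would enter.
model = nn.Sequential(
    nn.Linear(config["hidden_dim"], config["hidden_dim"]),
    nn.ReLU(),
    nn.Dropout(config["dropout"]),
    nn.Linear(config["hidden_dim"], 2),
)
optimizer = torch.optim.Adam(model.parameters(), lr=config["lr"])

# Toy dataset of random vectors, only to illustrate the 75%/25% split and batching.
data = TensorDataset(torch.randn(1000, config["hidden_dim"]),
                     torch.randint(0, 2, (1000,)))
n_train = int(len(data) * config["train_fraction"])
train_set, eval_set = random_split(data, [n_train, len(data) - n_train])
train_loader = DataLoader(train_set, batch_size=config["batch_size"], shuffle=True)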
A.4. Preliminaries

Graph Neural Network GNN is a topological learning technique for input data with graph structures. A graph is represented as G = (V, E) that contains n := |V| nodes and e := |E| edges. An edge E_{ij} := (V_i, V_j) represents the directed or un-directed connection between nodes (i, j). In practice, the edge information is represented in the form of an adjacency matrix A in R^{n x n}. Generally, one can obtain some initial node embedding U in R^{n x h} before feeding it into the network. The message passing (i.e., node aggregation) is performed at each GNN layer as follows:

X^{t+1} = phi(X^t W^t A^t)    (16)

where W^t in R^{h x h} is a trainable parameter at layer t. Each message passing step aggregates 1-hop neighbour information into the current node given that an edge exists in A.
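As a concrete illustration of the update in Equation (16), the following is a minimal PyTorch sketch of one message-passing step stacked T times. It is a sketch under assumptions: node states are stored as an (n, h) matrix, so the adjacency is applied on the left in the common GCN form, whereas the paper's notation (and Figure 2) may use a transposed layout, and the bias term b Omega U and any normalization of A are omitted.

import torch

def message_passing(X, W, A, activation=torch.tanh):
    # One GNN layer: aggregate 1-hop neighbours through A, transform with W, apply phi.
    # X: node states (n, h); W: trainable weights (h, h); A: adjacency (n, n).
    return activation(A @ X @ W)

# Toy example: 4 nodes in a directed chain, hidden size 8, T = 3 hops of aggregation.
n, h = 4, 8
A = torch.tensor([[0., 1, 0, 0],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1],
                  [0, 0, 0, 0]])
X = torch.randn(n, h)                     # initial node embedding U
W = torch.randn(h, h, requires_grad=True)
for _ in range(3):                        # after T steps each node sees T-hop neighbours
    X = message_passing(X, W, A)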
The final node vector X^T then learns the topological information from all T-hop away neighbours. In case of graph classification, a pooling layer such as add pooling can be used to obtain the graph embedding G:

G_j = sum_{i=1}^{n} X^T_{i,j}, for all j = 1, ..., h    (17)

REINFORCE Algorithm Reinforcement learning is a class of algorithms that specify the actions within an environment that optimize the reward r. In particular, the REINFORCE algorithm (Williams, 1992) is a form of policy gradient algorithm that computes the stochastic gradient with respect to the reward. It involves a state s that can be obtained from a neural network, an agent a that specifies the action space A, and a policy pi(a|s) that takes the action a given a state s with probabilities. Usually, the policy is randomly initialized and the algorithm iterates through epochs, where backpropagation is performed at each epoch to update the policy in the context of a neural network setup.
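To make the policy-gradient update concrete, here is a minimal REINFORCE-style sketch in PyTorch. The toy state dimension, the two-action policy, and the hand-written reward function are assumptions purely for illustration; they are not the authors' agent or reward design.

import torch
import torch.nn as nn

state_dim, n_actions = 16, 2
policy = nn.Sequential(nn.Linear(state_dim, 32), nn.ReLU(), nn.Linear(32, n_actions))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

def reinforce_step(state, reward_fn):
    # Sample an action from pi(a|s), observe a reward, and ascend log pi(a|s) * reward.
    probs = torch.softmax(policy(state), dim=-1)
    dist = torch.distributions.Categorical(probs)
    action = dist.sample()
    reward = reward_fn(action)
    loss = -dist.log_prob(action) * reward      # policy-gradient surrogate loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return action, reward

# Toy usage: reward 1.0 for action 0, otherwise 0.0.
state = torch.randn(state_dim)
reinforce_step(state, lambda a: torch.tensor(1.0 if a.item() == 0 else 0.0))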
A.5. Well-posedness

Equation (6) needs to have a unique solution X* when iterated infinitely. Such a property is called well-posedness. According to Gu et al. (Gu et al., 2020), W and A_tilde are well-posed for phi when there is a unique solution. First of all, the choice of phi needs to satisfy the component-wise non-expansive (CONE) property, which most activation functions, such as ReLU, Sigmoid, and Tanh, possess (El Ghaoui et al., 2021). Then, we need to construct sufficient conditions on W and A_tilde with a CONE activation function for well-posedness. It is stated that ||W||_inf < kappa / lambda_pf(A_tilde) needs to be true, where ||W||_inf is the infinity norm, lambda_pf(A_tilde) is the Perron-Frobenius (PF) eigenvalue (Berman and Plemmons, 1994), and kappa in [0, 1) is the scaling constant. Equation (6) then has a unique solution. This is ensured by projecting W in each update to satisfy this condition:

W' = argmin_{||M||_inf <= kappa / lambda_pf(A_tilde)} ||M - W||_F^2    (18)

where ||.||_F is the Frobenius norm. Note that even with a gated convolution, which results in an updated A_tilde for every iteration, we still maintain a well-posed A_tilde as it contains a strictly smaller or equal PF eigenvalue than the original A, given the agent a is non-expansive, resulting in kappa / lambda_pf(A_tilde) >= kappa / lambda_pf(A).
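The following PyTorch sketch illustrates the two ingredients just described under simplifying assumptions: the PF eigenvalue is estimated by power iteration, W is rescaled so that ||W||_inf <= kappa / lambda_pf(A_tilde) (a crude surrogate for the exact projection in Equation (18)), and the equilibrium X* = phi(W X A_tilde + b Omega U) is found by plain fixed-point iteration rather than the Anderson-accelerated solver used in the paper. Here kappa = 0.9 and the (h, n) layout of X are illustrative choices, and bU stands in for b Omega U.

import torch

def pf_eigenvalue(A_abs, iters=100):
    # Estimate the Perron-Frobenius eigenvalue of a non-negative matrix by power iteration.
    v = torch.ones(A_abs.shape[0])
    for _ in range(iters):
        v = A_abs @ v
        v = v / (v.norm() + 1e-12)
    return (v @ (A_abs @ v)) / (v @ v)

def project_W(W, A_tilde, kappa=0.9):
    # Rescale W so that ||W||_inf <= kappa / lambda_pf(A_tilde); a surrogate for Eq. (18).
    bound = kappa / pf_eigenvalue(A_tilde.abs())
    norm_inf = W.abs().sum(dim=1).max()        # infinity norm: max absolute row sum
    return W if norm_inf <= bound else W * (bound / norm_inf)

def equilibrium(W, A_tilde, bU, phi=torch.tanh, tol=1e-5, max_iter=50):
    # Fixed-point iteration for X = phi(W X A_tilde + bU); X and bU are (h, n).
    X = torch.zeros_like(bU)
    for _ in range(max_iter):
        X_new = phi(W @ X @ A_tilde + bU)
        if (X_new - X).abs().max() < tol:
            return X_new
        X = X_new
    return X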
2404.09384 Tasks People Prompt: A Taxonomy of LLM Downstream Tasks in Software Verification and Falsification Approaches V´ıctorA.Brabermana,b,∗,FlaviaBonomo-Brabermana,b,YiannisCharalambousc,JuanG.Colonnad,LucasC.Cordeiroc,d, RosianedeFreitasd aDepartamentodeComputacio´n,FCEyN,UniversidaddeBuenosAires,BuenosAires,Argentina bInstitutodeInvestigacio´nenCienciasdelaComputacio´n(ICC),CONICET-UBA,BuenosAires,Argentina cDepartmentofComputerScience,TheUniversityofManchester,Manchester,UK dInstitutodeComputac¸a˜o(ICOMP-UFAM),UniversidadeFederaldoAmazonas,Manaus,Amazonas,Brazil Abstract Prompting has become one of the main approaches to leverage emergent capabilities of Large Language Models [Brown et al. NeurIPS2020,Weietal. TMLR2022,Weietal. NeurIPS2022]. Recently,researchersandpractitionershavebeen“playing”with prompts(e.g.,In-ContextLearning)toseehowtomakethemostofpre-trainedLanguageModels. Byhomogeneouslydissecting morethanahundredarticles, weinvestigatehowsoftwaretestingandverificationresearchcommunitieshaveleveragedLLMs capabilities.First,wevalidatethatdownstreamtasksareadequatetoconveyanontrivialmodularblueprintofprompt-basedproposals inscope. Moreover,wenameandclassifytheconcretedownstreamtaskswerecoverinbothvalidationresearchpapersandsolution proposals. Inordertoperformclassification,mapping,andanalysis,wealsodevelopanoveldownstream-tasktaxonomy. Themain taxonomyrequirementistohighlightcommonalitieswhileexhibitingvariationpointsoftasktypesthatenablepinpointingemerging patternsinavariedspectrumofSoftwareEngineeringproblemsthatencompassestesting,fuzzing,faultlocalization,vulnerability detection,staticanalysis,andprogramverificationapproaches. Avenuesforfutureresearcharealsodiscussedbasedonconceptual clustersinducedbythetaxonomy. Keywords: DownstreamTask,LLM4SE,LargeLanguageModels,PromptEngineering,SoftwareEngineering,Software Verification,SoftwareTesting,Taxonomy. 1. Introduction Inparticular,softwaretestingandverificationhavebeenalso profoundly impacted by the arrival of large language models. Prompting has been the most popular trend in leveraging In not more than a year, dozens of articles were made public emergingabilitiesofalargelanguagemodel[238]. Landmark (e.g.,[234,99]). Interestingly,thoseSEproblemsarenotalways papers such as [22] and [239] pinpointed that with adequate aperfectfitforLLMstobesolvedbyasimplisticinteraction. instructionsand/ordemonstrations,LLMscanreachcompetitive Idealisticformulationsofthoseproblemssuggestsearchingin performancefor(NLP)downstreamtasksforwhichtherewas large/infinitecombinatorialspaceswhereprecisesemanticrules noparticularfine-tuning. Sincethen,researchersandpractition- shouldbeenforced. Moreover, thoseareareasinwhichsym- ers in varied fields of human endeavor have extensively used bolic(e.g.,formalverification/staticanalysis)andoperational promptstodeveloptheirLLM-enabledapproachesandproofs approaches (e.g., testing, fuzzing, search-based, AI planning, of concept. The flexibility of such universal question and an- reinforcementlearning)havebeenthepredominantparadigms sweringmachinecertainlyhasledtoalargenumberofreports, in academia and practice. Even in the case where fine-tuned schemes,andproposalsforuseofLLMs,atleastforsoftware neuralparadigmswerealreadyinplace(e.g.,vulnerabilityde- engineeringproblems(see[234,60],etc.). 
Giventhatprompts tection[138]),thereisanapparentgapbetweenexpectedLLMs aretypicallyusedtoconditiontheLLMtoperformsomedown- proficiency,thenatureofproblems,andhow“classical”solu- streamtasks,weposesomeinitial(albeitvague)questionson tions look like (and thus, one would expect ingenious LLM- suchahecticarea: howdocurrentlyLLM-enabledapproaches enabledsolutions).Theproblemdomainareaisthenparticularly look like in terms of elicited emergent behavior? To answer rich for investigating how LLMs address those problems and that,wesetasquestatask-basedabstractyetinformativewayto identifystructures,patterns,andtrends. homogeneouslydescribeandanalyzeLLM-enabledsolutions. We focus on leveraging “downstream tasks” as an abstrac- tion to homogeneously describe and analyze key functional elements that populate the implemented “cognitive architec- ∗Correspondingauthor. DepartamentodeComputacio´n,FCEyN,Univer- tures”[82,216,205]inLLM-enabledproposals. Fromacom- sidaddeBuenosAires. CiudadUniversitaria,Pabello´n0+∞. Pres. Dr. Rau´l Alfons´ıns/n,(1428)BuenosAires,Argentina. putationalperspective,“promptware”[82]canbeunderstoodas Emailaddress:vbraber@dc.uba.ar(V´ıctorA.Braberman) asortofprobabilisticprogramming[75]inwhichanswers(pro- PreprintsubmittedtoElsevier September10,2024 4202 peS 8 ]ES.sc[ 2v48390.4042:viXra1 INTRODUCTION ductions)aresampledfromanopaquecomponent(themodel) RQ2. Canidentifiedtasksbeorganizedandmapped conditioned by the ongoing context (e.g., [55, 205]). From a into a taxonomy that highlights commonalities while computationalpointofview,thereisapotentiallyrichuniverse exhibitingthevariationpointsoftasktypes? ofpossibleinteractionstypeswithLLMs. Thus,itislegitimate toquestionwhetherornottheconceptofnontrivial“downstream tasks”isalwaysrecoverableforallprompt-basedLLM-enabled Wearenotawareofanytaxonomythatwouldservethepre- solutions. Certainly,thetermisfrequentlyusedintheliterature
cedingpurposes. NLPandcognitivetasksappearinginbench- whenreferringtowhatoneachievesbypromptingand,inpartic- markslike[144,223,88,210]partiallycoversomeofthetasks ular,whenperformingIn-ContextLearning. Moreover,Natural encountered,butmostidentifiedtasksarenovel. LanguageGenerationresearchhasrecentlystartedtopinpoint A taxonomy attempts to rationalize emerging phenomena, that,somehow,LLM’sstructuringinferenceinto(internal)tasks and it is a recurrent operation in human endeavors. The risk mightbetheinsighttoexplainthesuccessofLLMsIn-Context ofrationalizationandabstractionisto, unnecessarily, opaque Learning abilities [133, 243, 176]. On the other hand, there richnessoftheunderlyingphenomena. Taxonomiesmayresult aresomeaspectsthatwarnaboutthefeasibilityandusefulness in rigid concepts that do not favor the versatility of concrete ofouridentificationquest: downstreamtasksarenotactually conceptsandphenomena,suchas,inthiscase,inferenceelicited first-class entities when interacting with LLMs and, as men- byprompts. However,webelievethatanabstractorganizationis tioned above, promptware might be potentially sophisticated worthitsrisks: onecouldseepatterns,trends,unexploredspots, toleverageversatilityofLLMsusage. Moreover,LLMsmight and a way to recognize when one is in front of a “brand new potentiallygeneratetheirownprompts[76,144,82],oractas specimen”orcategoryofthings. Allthatmightimpactresearch agents[144],orbeingboostedautoregressivelybylearninghow anddevelopmentagendas,includingtask-categoryperformance toeffectivelygeneratesophisticatedreasoningchains[265]im- improvements,architecturalpatterns,evaluationcriteria,bench- plying (currently or in the future) the only downstream task marks, theory for implementing and composing downstream instructed by humans is actually the end-to-end SE problem. tasksormodels,etc. Thus,whilediggingintotheuseofLLMsinstudiedapproaches, Inafewwords,theremainingquestionstrytofurthervalidate wealsovalidatethebasictacithypothesisthat,atleastcurrently, whethertheproposedtaxonomycouldberegardedas“natural” anensembleofdownstreamtaskselicitedbyhumansexpresses intermsofclusteringsimilartasksandshedsomelightonthe aninformativeblueprintofsolutionproposals. designpatternsofapproachesandcharacteristicsofclassesof tasks. Wealsowanttousethemappingtospeculateonsome unexploredopportunities. RQ1. AreinteractionswithLLMsidentifiableasthe executionofasetofdownstreamtasksintheanalyzed RQ 3. When grouped according to proposed task literature? Ifso,whicharepreciselythosetaskswhose classes,aretheresalientcharacteristicsofmappeddown- resolutioniselicited? streamtaskinstances? Toanswerthatquestion,wethoroughlyanalyzedmorethan The preceding question is addressed by figures and discus- 100paperstoidentifyand,fundamentally,trytogivenamesto sionsonhowtasksinacategoryandclasseslookintermsof downstreamtasksineachoftheirinteractionswithLLMs. That functionalcharacteristicsandpromptingstrategiesusedtoelicit recovery efforts for downstream tasks was challenged by the them. fact that LLMs are regarded as universal question-answering Finally, it is natural to explore the relationship among SE engines. Consequently,wemadeourbestefforttoidentifyand problems and the categories of tasks involved in their LLM- putnamesto(and,manytimes,cointhem)taskinstanceselicited enabledsolutionstoseeifotherpatternsarise. inreportedapproaches. The detailed results of such work –that is, the downstream RQ 4. 
Are the patterns arising from how SE ap- tasks for each proposal– were embedded into tables together with observations on the interactions among them and/or the proaches are decomposed into downstream tasks de- pendentontheSEproblemanddownstreamcategories restofthecomponentsthatinstantiatestheconcretecognitive involved? architecture of the proposal. It should be noted that existing surveysdonotdiscussthedetailsoftaskdecompositionandthe abstractionleveloftheirsummarizedinformationisnotadequate Asmentionedlaterinthepaper,wealsoknewthatthespan toanswerourquestions. ofproblemsandsolutionsreported(fromapproachestofinding Thenumberandrichnessoftasksencounteredledustopursue bugstoprovingprogramsforverydiversesoftwareartifactsand afurtherabstractionoperationtoanalyzetheuseofLLMs. We technologies)stressestaskidentificationandtaxonomyabstrac- wantedtodeviseataxonomythatcouldhomogeneouslymap tion. Thus,toourknowledge,thisisthefirststudythatcatalogs/ existing(and,potentially,future)downstreamtasktypeswithout rationalizes/organizesliteratureondownstreamtasksunleashed beingtoocoarse-grained. bypromptingLLMsforsoftwareanalysis. 22 BACKGROUND 1.1 RelatedWork 1.1. RelatedWork +statisticalanalysis,LLM+programanalysis,LLM+mutation testing,LLM+syntacticchecking,LLM+differentialtesting). Traditionally,tasksintheNaturalLanguageProcessing(NLP) Yet,thereisnofocusonwhichconcretedownstreamtasksare communityhavebeenidentifiedandtypicallyclassifiedinbench- actuallyelicited. marks (e.g. [144, 223, 88, 210, 203]). However, these bench- In[273],asystematicsurveyonLLM-basedapproachesfor marksareneithernecessarilyfocusedonsoftwareengineering SE-problems,includingempiricalevaluations,ispresented. It problemsnorparticularlydesignedtoidentifydownstreamtasks
includes 155 studies for 43 specific code-related tasks across that actually appear in the field. However, in this paper, we fourphaseswithintheSEworkflow: SoftwareRequirements& pinpointsomeoverlapswithourproposedtaxonomy. Recently, Design,SoftwareDevelopment,SoftwareTestingandSoftware [112]proposesSWE-benchasanevaluationframeworkconsist- Maintenance. Some SE-problems overlap with the scope of ingofsoftwareengineeringproblemsdrawnfromrealGitHub ourmapping. Moreprecisely,faultlocalization,vulnerability issues. Someofthedifficultiesmaypartiallyoverlapwiththe detection, unit test generation, assertion generation, fuzzing, problemsaddressedinthescopeofthismapping. However,the failure-inducingtesting,penetrationtesting,property-basedtest- focusonGitHubissuesimpliesthatproblemsareclosertothe ing, mutation testing, GUI testing, bug report detection, bug areaofsoftwarecorrectiveandevolutivemaintenancethanthe reproduction, and bug replay. They report on representative area of verification/falsification. More importantly, this is an code-relatedLLMsandpre-trainingtasks, and, relatedtoour evaluationframeworkandthusnodownstreamtaskselicitedby paper,theyidentify11downstreamtasks(“classes”,inourter- potentialsolutionproposalsarepresented. minology) across four categories defined according to the in- SeveralsurveyshavealsobeenconductedontheuseofLLMs put/outputdatatype: Code-Code: CodeTranslation,CodeRe- inSoftwareEngineeringpapers. Although, tothebestofour finement,ClozeTest,MutantGeneration,AssertionGeneration; knowledge, none of the existing surveys performs concrete Text-Code: CodeGeneration, CodeSearch; Code-Text: Code LLMsdownstreamtasksidentificationandmapping,theyare Summarization; and Code-Label: CodeClassification, Clone relevantrelatedworkwithsomeelementsincommonwiththis Detection,DefectDetection. AlmostalltheLLMdownstream work. taskclassesareincludedinourtaxonomy. Moreover,theirway AsystematicreviewoftheliteratureontheuseofLLMin ofclassifyingtasksisorthogonaltoours,andwemakeanex- SE (LLM4SE) between 2017 and January 2024 is presented plicitsub-classificationaccordingtotypeofinput-outputofthe in[91]. KeyapplicationsofLLM4SEdescribedencompassa taskforsomeofthecategories. diverserangeof55specificSEproblems,groupedintosixcore SEactivities.Problemsinthequalityassurancecategoryoverlap 2. Background withthoseinthescopeofourstudy. Thereisnoreportonhow proposals structure solutions into LLM-enabled downstream 2.1. VerificationandFalsificationProblems tasks. Testingisaclassicalandarguablythemostpopularapproach Withinourareaofinterest,testgeneration,testadequacy,test togainingconfidenceinsoftwaresystemsbeforerelease[16,11]. minimization,testoracleproblem,andtestflakinessareprob- When emphasizing that testing should be regarded as a way lemscoveredby[60]. Unlikethiswork,thefocusisoncompar- to falsify software and not a way to verify it [53] it becomes ingtheeffectivenessreportedbyeachapproach. Again,there clearhowresearchintestinghasevolvedbyfocusingonfail- is neither identification nor categorization of concrete LLMs ure detection ability of test suites (automatic) generation pro- downstreamtasksinproposalsthatuseprompts. posals[178]. Severalapproachestotestsuitegenerationhave LLM-based fuzzers are analyzed in [99]. The focus is on beenproposed[12]. Amongothers,symbolicandrandomap- howLLMsareusedforfuzzingtestsandoncomparingLLM- proaches(potentiallycombined) [23,70,179],mutation-based basedfuzzerswithtraditionalfuzzers. Althoughextradetailis ones [111], and search-based approaches [65]. 
A particular givenforLLM-basedapproaches,neithersystematicmapping techniqueoftestingisfuzzing[163],whichtypicallyinvolves nortaxonomyisproposed. adaptively generating a series of inputs –usually by mutating Asurveyontestingproblems,whichalsoincludesdebugging some input seeds– to cause the Program under Test (PuT) to andrepairproposals,isconductedin[234];52papersontesting crashorenteradefectivestate. Itdoessobyrepeatedlystimulat- arecovered(33publishedsince2022),ofwhichapproximately ingtheprogramandobservingitsbehaviorinasortoffeedback one-thirdconcernedtest-basedLLMfine-tuning,whilethere- loop[20,281]. Atypicalproblemwhentestingorfuzzingand mainderrelyuponpromptengineering. Thesurveyanalyzesthe lookingforevidenceof(functional)falsificationbeyondcrash papersfromboththesoftwaretestingandLLMsperspectives. It ormemorycorruptionisknownastheoracleproblem. Thatis, presentsadetaileddiscussionofthesoftwaretestingproblems determining,eitherforasetofinputsoringeneralterms,the forwhichLLMsareused,amongwhichareunittestcasegener- correctbehaviorthatshouldbeexhibitedbytheprogram[15]. ation,testoraclegeneration,systemtestinputgeneration,and Metamorphicrelationidentificationhasbeenawaytopartially buganalysis,andsomefeaturesonhowthesetasksaresolved circumventtheoracleproblem[31]. Onceafailureisdetected, withthehelpofLLMs. Inparticular,itdescribeswhicharethe faultlocalizationistypicallythenextproblemtobesolved.Fault commonly used LLMs, their input, the types of prompt engi- localizationaimsatlistingcandidatelocationsinthesourcecode neering employed, and the techniques that form the concrete thatcouldbeclaimedaslikelyfaulty[180,245]. Arelatedde-
cognitivearchitectureoftheseproposals(e.g.,pureLLMs,LLM fectlocalizationproblemisthatofRootCauseAnalysis(RCA) 33 METHODOLOGY 2.2 LargeLanguageModels whensystem-wideanomalousbehaviorisfoundinproduction serveasdemonstrationsthatconditionthemodel,enablingitto (e.g., [108, 202]). The main challenge in diagnosing the root generatesometimesmoreaccurateresponsesforsimilartasks. cause of systems is the interconnected dependence between Chain-of-Thought(CoT)isanorthogonalpromptapproachthat differentservicesthatcouldbeinternalorthird-party. proposestogeneratealogical,linearflowofideasandreason- Bugreproductionisoneofsoftwaredevelopment’sfirststages ing[239]. ThankstotheautoregressivenatureofLLMs,they of bug fixing when a bug is reported by software users. It endup(auto)conditioningthemselvesbyverbalizingutterances involvesreproducingthebugonthedeveloper’ssystemsothat thathopefullycorrespondtostepsofaneffectivereasoningpro- it can be further analyzed [169]. An associated challenge is cess. WhileCoTisanadvancedtechniquethatimprovesLLM managingmultiplereportsregardingthesamebug,whichcan answers,itsfundamentalconceptreliesonalinearreasoningpro- slow down the bug-fixing process, as it requires resolving all cess. Itdoesnotallowfordivergentthoughts. Tree-of-Thought duplicateentriesbeforeattemptingtofixthebugitself[5,84, (ToT) [260] was created to overcome this limitation with the 106,51]. intuition of aggregating different “points-of-view” during the There is a wide spectrum of problems derived from the reasoningprocess. InToT,aquerytoanLLMcangeneratesev- goal of analyzing code without executing it. Static analy- eraldifferentanswers,somebetterthanothers. Theseanswers, sis [172], generally speaking, encompasses a variety of prob- inturn,caneachgeneratemorethanoneresponse,forminga lemsandapproachesrangingfromdetectingpotentialvulner- treeofthoughtsthathelpsreasonthroughdifferentpaths. Inthis abilitiesormisusesinsourcecodesyntacticstructuresorML- tree,theinitialquerygeneratestheroot,andtheleavesofeach representations[192,93,138,10,136]tosound(y)[150]com- nodearetheintermediatethoughts. Thus,toformthefinalre- putationofcollectingsemantics[40,158,36]. Traditionalin- sponse,intermediatepromptscorrespondingtosomepathwithin termediateabstractionslikecontrol-flowgraphs[6]playcentral thistreemustbeused. ThefinalperformanceofToTdepends rolesinmanyanalyses. Staticanalysisisnotlimitedtochecking largely on the path-selection strategy. While the mentioned foragivenproperty. Well-establishedresearchtopicsinclude techniquesimprovethegeneratedanswers,theydependexclu- staticallybuiltmultipurposeprogramabstractionsorextractions. sivelyontheinformationinthepromptortheknowledgeinthe Forexample,staticslicing[241]addressestheproblemofiden- LLMitself. RAG[131]overcomesthislimitationbyenabling tifyingstaticallyasetofrelevantlinesforthecomputationofa accesstothelatestinformationforgeneratingreliableoutputs valueinagivenprogramlocation. viaretrieval-basedgeneration. InRAG,anaugmentedprompt Programverification[85]isanareawithawellestablished withcontextualinformationfromanadditionaldatabasehelps traditionandthathascovereddifferentclassesofprogramsand reducehallucinations. ItmakestheLLMadaptivetosituations properties.Softwaremodelchecking[110,39,13]anddeductive wherefactsevolveovertimebysimplyupdatingtheknowledge softwareverification[129,128]areamongthemaintrendsfor inthisdatabase. ThisapproachalsoavoidstheneedforLLM sequentialsoftwarecorrectnessverification. Inmanyofthose fine-tuning, reducing computational demands. 
RAG can also approaches, invariant generation [41] is a key challenge for be combined with other prompting techniques, such as CoT, effectiveautomation. to condition the reasoning process on the given context. For Recently,somenovelapplicationareasandtechnologiesbe- taskslikecodegeneration,thedatabasecanbeenhancedwith come relevant targets of mentioned verification and testing examplesofcodeortestcases,furtherimprovingtheaccuracy approaches. Among them: GUI-based/mobile applications andreliabilityofthenewlygeneratedcodes. (e.g., [157]), RESTfull services [72], smart contracts [228], ReAct[261]combinesreasoningandactinginaninterleaved and Machine-Learning systems, software and applications manner,wheretheperformedreasoningstepscanaskanLLMto (e.g., [270, 21, 271, 214, 102]). These areas are covered by performsometasks,usuallyinvolvingtheoperationofexternal thestudiesinscope. tools,andthenreasonagainabouttheretrievedoutputfromsuch tools. Thisallowsdynamicreasoningandplanning,likeanAI agent,byinteractingwithexternalenvironmentsandincorpo- 2.2. LargeLanguageModels ratingtheadditionalinformationfromthoseenvironmentsinto Pre-trainedLargeLanguageModels(LLMs)(e.g.,[52,277]) thecontextprompt. Inthecontextofcodegeneration,thiscould areatypeofGenerativeAIbasedonDeepNeuralNetworks[73]. allowexecutingsomecodesnippetsorusingverificationtools, Initially,thesemodelsutilizedaTransformerencoder-decoder observingtheoutput,andplanningthenextreasoningstepuntil architecturewithcross-attentionmechanisms[226]. However, generatingthefinalanswer. Foranextensivesurveyonprompt manymodernLLMsnowemployadecoder-onlyarchitecture methodsthereadercanreferto[142].
withself-attentionlayers,whichissufficientforgeneratingout- putfromagiveninputprompt[22]. 3. Methodology Amongthemostwell-establishedpromptingmethodsused are: Zero-Shot[125],Few-Shot[22],Chain-of-Thought[239], Figure 1 sketches the method that is partially inspired on Tree-of-Thoughts[260],RetrieveAugmentedGeneration[131] thoseofsystematicmapping[182]. Duetotheresearchques- andReasoningandActing[261]. Zero-Shot[125]learningin- tions,in-depthanalysisofpapersandtaxonomydesignplayed volves prompting an LLM to perform a specific task without a central role that surpassed other typical aspects of system- providingexamplesordemonstrations. Few-Shot[22]learning atic mappings like demographics on areas, topics or venues. includesexamplesofthetaskwithintheprompt.Theseexamples Giventheapplicationfocus(softwaretestingandverification), 43 METHODOLOGY 3.1 InclusionandExclusionCriteria 47 9 7 7 4 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 GoogleBlog AIRE’2 A4 Iware’2 A4 PSEC’23 ASE’23 CAV’24 CHI’24 EA ES SE E’2 C4 /FSE’ E2 u3 roSys’24 FAS FE’ U2 Z4 ZING’ H2 o3 tNets’ I2 C3 SE-C’2 I4 SS LT LA’ M2 43 Code’24 NDSS’ N2 e4 urIPS’2 N3 LBSE O’2 O4 PSL PA R’2 O4 MISE’2 Q3 RS-C’23 SAIV’ S24 ANER’2 S3 SBSE T’2 P3 S-ISA’ U2 IS3 nE f.NI SX o’ f2 t4 w. J.T Se yc sh t. .Soft Iw. EEETSE ICLR’23 IC IL CR S’ E2 -4 NIER’24 ICSE’2 F3 SE-C’24 FSE’24 ICSE’24 arXiv Figure2: Numberofpaperspervenue(journals,conferences,arXiv),corre- spondingtothetoolssurveyedinTables1–5. therexplainsourviewpointforidentification. Theinitiallistofdownstreamtasks(200approx. withrepeti- tions)wasthenclusteredintocategoriesexplainedinsection6. Variationpointsandhierarchydecisionsleadtothetreesseen Figure1:Methodologyoverview. (Figs.5–27). Anotherauthor,ina“tabularasa”modality(i.e., noexpertiseintopicsandwithoutreadingpreviouslythepapers nor elaborated tables or taxonomy), was able to map the 200 wewentthroughabstractsofpaperspublishedorannouncedin downstream task instances into the hierarchy to validate that SoftwareEngineeringvenues(ICSE,FSE,ASE,ISSTA,ICST, both the table and the taxonomy were “natural” and easy to MSR,SANER,ICSME,ICPC,GECCO,TOSEM,TSE,ESE, understand resources. Interestingly, during the final steps of JSS,STVR,SPE,IST,etc). ThesamewasdoneforProgram- thiselaboration,weupdatedthelisttobeginningofJune2024 ming Language and Formal Verification venues (e.g., PLDI, with the more recent papers (36 papers) that were not in the POPL,OOSPLA,ECOOPFM/FV:CAV,TACAS,VMCAI,FM, originaltable. Wewereareabletoseamlesslyaddthem(and FMCAD),OperatingSystemsandSecurityvenues(USENIX, map their 83 tasks) albeit we decided to change some names IEEE SSP), and also major AI venues (AAAI, IJCAI, ICML, oftasksandinternalstructureoftaxonomytreestoadhereto ICLR, NeurIPS). Arguably, arXiv was a major source of pa- traditionalterminologyinthoseareasinwhichcertaintypesof pers. Weperformedsearchesusingtechnologykeywordssuch tasksalreadyexist(e.g.,classificationtasks). Thisupdatethus as“LLM”,“language”,“ChatGPT”,“context”,“prompt”and servedasanothervalidationstep. Afterthat,wewentthrough applicationrelatedkeywordssuchas“testing”,“verification”, tables,trees,andthemappingmatrix(Table6)andvalidatedthat “formal”,“check”,“static”,“fuzzing”,“debugging”,etc. (see downstreamtaxonomyatleasthelpeddetectandexhibitsome SEProblemsinFigure3). Wemainlyfoundvalidationresearch known and potentially unknown patterns of LLM usage (see andsolutionproposalpapers[182]. FromtheSE-relatedsurveys RQs3and4). 
Finally,wealsocheckthatNLPtasks[225,210] on using LLMs, we snowballed one level and added missing (e.g. relation extraction [262, 210]) relevant to the surveyed papers. Thetimescopeforthefirstcollectionwasessentially workswereeitherincludedintothetaxonomyornotfeaturedby from2022untilFebruary2024. Then, wecarefullyanalyzed anyoftheapproaches. Beforesubmission,weupdatedthepub- papers,discardingthosethatdidnotusepromptsordealwith licationvenuesofallthepapers,sincealmostallofthemwere differentSEproblems(approx. 140paperswerediscarded,and onlyatarXivbythetimewehavestartedthesurvey. Thatstep 75remained). includedchangingsomeofthetoolnamesthatweremodifiedin Wefirstreadthepaperstoidentifythetool/approachname thepublishedversion. givenandtheactualSEproblemsolved. Thoseidentifiedprob- lemswereclassifiedintotheSEproblemstaxonomy,andpapers weremappedintoitasshowninFigure31. ThisSEproblems 3.1. InclusionandExclusionCriteria taxonomyissimilartoothertaxonomiesfoundintheliterature Assaid,ourtechnologicalfocusisonusingLLMsthatemploy fortheoverlappedcategories(e.g.[273])2. Fromthere,asthe prompts. This excludes proposals that either fine-tune LLMs figureshows,byreadingeachpaper,weperformedthelaborious (see for instance [173]) or general transformer architectures identificationofdownstreamtasks,includinginputandoutput
(e.g.,[121,171,188,81,207,269,227,64,50,37,166,282,66, andintegration/orchestrationnotes(Tables1–5). section4fur- 78,7,222,54,175,96,77,181],etc.),orleveragetheembed- dingcapabilitiesofunderlyingtransformers(e.g.[254,90],etc.). SelectedSoftwareEngineeringtopicsarebynomeansallareas 1SomepapersweremappedintomorethanoneSEproblemwhenadequate. 2Duetotheexclusioncriteria,automatedprogramrepairisnotshownasa addressedbyLLM-enabledin-context/promptingapproaches. debuggingsubproblem. Yet,theycurrentlyconstituteasignificantamountofworksinSE 54 DOWNSTREAMTASKSASANABSTRACTVIEWOFPROMPTS venues3. Asanotableexcludedtopic,wedonotcoverpapers SEProblems whosemainSEgoalwasprogramgeneration/synthesisorpro- Testing(Table1) gramrepair,whichareallegedly“natural”areasofapplicationof UnitTestGeneration:TestGen-LLM[8],FSML[14],ChatGPTTests[19], generativeAI[231,273,91,60,253]. Theotherexcludedtopic CodeT[28],ChatUniTest[34],CodeCoT[97],AgentCoder[98], CodaMOSA[130],TestChain[134],CoverUp[184],TestPilot[194], iscodereviews. Yet,therearetwoimportantcaveatsregarding EASEeval[201],TELPA[255],ChatTESTER[264] exclusions. On the one hand, some approaches studied elicit Failure-InducingTestGeneration:DiffPrompt[135],AID[140],secTests[276] codegenerationorrepairdownstream-taskstoachievetesting, RegressionTesting:SymPrompt[191] InputGeneration:RESTGPT[123],InputBlaster[149],PBT-GPT[229], faultlocalizationorverificationgoalsandwe,consequently,do mrDetector[247] report and analyze those papers. On the other hand, we also DataSet/MutantGeneration:FSML[14],MuTAP[43],BugFarm[104], report papers whose primary goal is program generation, de- µBERT[119],CHEMFUZZ[187],FormAI[218] Fuzzing bugging with repair, or code review but they prompt testing, GeneralFuzzing:OSS-Fuzz[74],ChatFuzz[94],Fuzz4All[252],UGCTX[266] verificationorfaultlocalizationtaskstoachievetheirgeneral KernelFuzzing:KernelGPT[257] goals. Thus,theexclusionislimitedtoworkswhoseprimarySE Compiler/SimulatorFuzzing:SearchGEM5[42],WhiteFox[256] Protocol/ParserFuzzing:FuzzingParsers[1],ChatAFL[162] goalisoutofscopeanddonotpromptLLMstosolveSoftware DL-LibrariesFuzzing:TitanFuzz[49],FuzzGPT[50] Engineeringproblemswithinthescope. Theotherimportantex- GUITesting:QTypist[147],GPTDroid[148],AXNav[211] clusioncriterionisthedate:thispaperversioncoverspapersthat FunctionalTesting:TARGET[48],ScenarioNL[58],LLMeDiff[105], SysKG-UTF[204] appearedinjournals,conferences,orarXivuptothefirstweek PenetrationTesting:PentestGPT[47],Pwn’d[80] ofJune2024,andpapersalreadyannouncedinconferencesup OracleProblem:FSML[14],SELF-DEBUGGING[32], nl2postcondition[59],TOGLL[89],Eywa[114],PropertyGPT[146], toJuly2024. 
ClarifyGPT[167],CEDAR[170],PROSPER[197],EMR[199], Weareawareofothercomparativeanalysesthatwedonot GameBugDescriptions[212],MetaMorph[220],ALGO[272] includeherebecausetheirpromptsarebasedonliteraturealready Debugging(Table2) BugReproduction:AdbGPT[62],CrashTranslator[103],LIBRO[118] reportedinourwork,forexample[215].Lastbutnotleast,itcan DuplicateBugReportDetection:Cupid[274] benoticedthatsometraditionalSEfalsificationandverification FaultLocalization:SELF-DEBUGGING[32],AutoFL[116],AutoSD[117], areaslikerun-timeverification,symbolicexecutionand(some) LLM4CBI[221],ChatGPT-4(Log)[249] RootCauseAnalysis:RCACopilot[33],x-lifecycle[71],RCAAgents[189], modelcheckingchallengesarenotcovered: underthedefined LM-PACE[268],inContextRCA[275] LLMandtimelinerelatedcriteria,nostudieswerefoundthat Vulnerability/Misuse/FaultDetection(Table3) could directly be mapped as addressing those topics as their VulnerabilityDetection:ChatGPT4vul[67],VulBench[69],NLBSE24[109], VulDetect[120],GRACE[151],VSP[174],AIagent[198],DLAP[258], primarycontributionfocus. MultiTask[263],PromptEnhanced[267],ChatGPT(Plus)[279] Line-Level/Edit-TimeFault/APIMisusePrediction:FLAG[3],EditTime[25], WitheredLeaf[30],LLMAPIDet[240] 4. DownstreamTasksasanAbstractViewofPrompts VulnerabilityDetectionforSmartContracts:ChatGPTSCV[29], SmartAudit[44],GPTLens[95],GPTScan[209],LLM4Vuln[208] StaticAnalysis(Table4) Whenanalyzingpapersinscope,theprimaryfocushasbeen Call-Graph/CFGConstruction:CFG-Chain[100] torecoverandnametheunderlyingdownstreamtasksforeach UseBeforeInitialize:LLift[132] ResourceLeak:SkipAnalyzer[165],InferROI[233] oftheinteractionswithLLMsreportedinthepapersinscope. Data-FlowAnalysis:LLMDFA[232] Thisworkdefinesadownstreamtaskasafunctionallydescrib- TaintAnalysis:E&V[79],LATTE[143] able(stochastic)generationprocessinwhichanLLMactsasa StaticSlicing:SimulinkSlicer[153] FixAcceptanceCheck:CORE[230]
centralinternalpiece. Ingeneral,weconsidertaskstobepoten- ProgramVerification(Table5) tiallyreactiveprocessesthatgenerateoutputstobeconsumedby ProgramVerification:AlloyRepair[4],ChatInv[107],Loopy[115],SpecGen[154], theenvironment(e.g.,asymbolictool,ahuman)andpotentially Dafny-Synth[164],Clover[206],AutoSpec[242],Lemur[246],RustProof[259] receivebacknewstimulitoproceedwithinteraction. Thus,as Figure3:OverviewoftheSEproblemsconsideredinthispaper. an abstraction exercise, by design, this work excludes many low-level details on “how” (and which) LLMs are prompted, andthedecodingstrategiesfollowed. Thatis, detailssuchas continuesagivenconversation(thatisaconsequenceofprevious LLMtype,APIsused,formats,useofsystem/userprompt,lin- tasks)or,alternatively,abrandnewLLMsessionislaunched guisticaspects,orderofprovidedinformation,low-leveldetails andonlysomeverbalizedresultsyieldedinthepreviousconver- onhowFew-Shot,CoT,orReActaredesigned,decodingsam- sationarecopiedintothenewcontext. Weareawarethatthose plingstrategy,howpromptsareoff-linecraftedortuned,etc. are detailsmightbecrucialinpracticetoachievegoodtaskquality abstractedaway(seesimilarabstractiondecisioninthedesign performance[193,152,277](seesection8),butwefavorhigh- oflanguageslikeDSPy[122]). Also,forthesakeofsimplicity, lightingconceptualcharacteristicsofdownstreamtaskselicited somedetailsregardinghowtasksareorganizedinaconversation bypromptsinstudiedapproaches. Inotherwords,thiscouldbe orchainmightnot,insomecases,becompletelyoraccurately understoodasanexerciseofmanualabstractionthat–withavar- reported. Thatis,forinstance,theremightbesomeimprecision ieddegreeofeffort–recovers[57]atask-viewpointarchitecture (oreveninaccuracy)regardingwhetherapromptforagiventask fromtheimplicitprobabilisticprogrambeingexplainedineach paperinscope. 3Forinstance,theymadeupapproximately40%ofLLM-enabledprompt- Mappingfrompromptstotheidentifieddownstreamtasksis basedapproachesofFSE2024maintrackpapers. notalwaystrivialastheremightbeanN-To-Mrelationship.This 65 RQ1: DOWNSTREAMTASKSANDARCHITECTURES justifiesinpartthisworkbut,naturally,italsoelevatestherisk taskbelongsto. Tasktaxonomyanditscategoriesareintroduced ofbeinginaccurateortoosimplisticinourdescription. Thus, inthenextRQanswer(Figure4). althoughwetriedtobeasfaithfulaspossibletotheactual(but Whileseminalexploratoryworksevaluatetheproficiencyof stillabstract)promptingarchitecture,bydesign,thistraceabil- LLMstosolvesomeSEproblemsbyshowingtheperformance ityshouldnotbetakenforgranted. Lackoftrivialtraceability of prompts (tasks) in a one-to-one relationship with the com- alsooccurswhenwesplittasksinstructedbyasingleprompt pleteSEproblemtobesolved5thatisnotthecommoncasefor to pinpoint the different nature of those tasks. Of course, we solutionproposals. Mostofproof-of-conceptstoolsexplicitly donotprecludethatbestqualityperformanceisachievedwhen require LLM to perform more than a single downstream task asinglepromptelicitsalltasksinsequencebutthatisnotthe toachievetheSEgoal. FormanySEproblemscoveredbythis focusofthisstudy,asalreadymentioned. Also,weareaware report,itseemsthatingenioussubtaskdescompositionandsome wemightnotbecompletelyaccurateregardinghowapproaches degree of tool orchestation is currently necessary since it has directLLMstocorrectoutputs:sometimes,ingeneral,wereport notbeenobservedyetbreakthroughbehavior[203]thatwould them as the corrective version of a task, while in some cases, enablesolvingitasasingletask. 
Allanalyzedapproacheswe we might compress both the original and corrective versions foundfallintothepromptwarecategory[82,35]. Inparticular, intoonetask. Inanycase,theexistenceofcorrectivefeedback inallsolutionproposalsandvalidationresearchbothtasks(and andactivityisreported. Insummary,phenomenologybasedon theirelicitingprompts)wereactuallycreatedbytheirauthors down-streamtaskdecompositionimpliestacitlyadheringtoa (“taskspeopleprompt”). Secondly,promptscouldbedirectly (fictitious)modularandfunctionalviewofhowLLM-enabled associatedwiththeelicitationofoneor,sometimes,moredown- approachesinteractwithLLMsintermsoftaskelicitation. This stream task (e.g., context-setting prompts). Also, there is no is done “by design”, even though this modularity can not be reporteduseoftechniquesbasedonsoft/continuouspromptsor takenforgranted. However,webelieveunderstandingthedown- prefixtuninglike[137,83],promptsarerarelycraftedoffline stream decomposition provides a first glimpse of the strategy withautomatedapproaches(e.g.,[280,63,122]),and(explicitly followedinproblem-solvingwithLLMs. orimplicitly)tasksworkflowsarealsodefinedbyhumansand theyaremostlystatic. Certainly,externaltools(fromaprede- finedset)mightbeorchestratedinReACT-likeapproachesandit 5. RQ1: DownstreamTasksandConcreteCognitiveArchi- isalsotruethatLLMsareusedtoverbalizeplansassequencesof tecturesinLLM-enabledApproaches actions. YetthisdoesnotmeanthatproposalsletLLMsduring Despitetherichnessandvarietyofapproaches,werecovered runtimetosynthesizeLLM’sdownstream-taskdecomposition
fromthe111reportedpaperstheirunderlyingdownstream northeirorchestration. Theclosesttoautomatedtaskdecompo- tasks(283intotal)4andpresentthemhomogeneouslynomat- sitionoccursduringinferenceelicitedbysomeusesofChain- terhowsophisticatedtheunderlyingprobabilisticprogramisor of-Though. However,thisseemstobefiner-grained,reasoning how the approach was reported. In some cases, this required stepsthatareconceptuallyboundedbythecontextofsolving considerableeffortforthereasonsdiscussedabove. Tables1–5 higher-leveltaskpromptedbyhumans. Thatis,approachesare provideablueprintofstudiedworks(groupedbySEproblem notfull-fledgedagentwareinthesenseof [82]asalsonoticed in Figure 3) by means of our manual identification of down- inindustrialpracticebysomeframeworkdevelopers[216]6. On stream tasks composing them. Identified downstream tasks the other hand, known prompt methods like Few-Shot, CoT, arerichinnatureandfunctionalfeatures,and,tothebestof RAG,ReACT,etc. arequitefrequentyettherearedifferencesin ourknowledge,someofthemwerenotpreviouslycontainedin theirusesasexploredinthefollowingRQs. Althoughnotthe existingtaxonomiesorbenchmarks. Theypossiblyfallsome- focusofthiswork,wecanalsopinpointthatapproachesarenot whereinbetweenopen-endedanddirectedgenerationtasks[86]. necessarilyarchitectedasachainofLLMtasksand,inmany Thus,thisworkprovidessomeevidencethattask-orienteddoc- cases,LLMsandclassicalsoftwaretoolsareintegratedinad-hoc umentation,currently,constitutesanaturaloverviewofan ways. Asnotedbyothersurveys[99],inmanyLLM-enabled LLM-enabledapproach,atleastforcontemporarysolution solutionsthelanguagemodelisjustacomponentthatsolvesa proposalsforsoftwaretestingandverificationproblems. An particular(sub)problemoftheapproach(eitherasapreprocessor, input/outputnotationwasfollowedtoinformallyconveywhat postprocessororaninnerfilter). Thatis,itisjustapieceina thetaskisabout. Schematically(androughly)speaking,input moreorlesssophisticatedorchestration/algorithm/framework →⟨TaskName⟩→ output,itcouldbeunderstoodthattheap- invoking more classical SE tools, including compilers, static proachatsomepointpromptsalanguagemodelθinsuchaway analyzers,ad-hocDeepLearningmodels,vectordatabases,etc. thatitautoregressivelygetssomerepresentationoftheoutputas WefurtherobservethateveninapproachesinwhichLLMcanbe thesequenceoftokenst (bysomedecodingstrategy)based consideredthemaincomponent,toolslikecompilersorunittest out onconditionalprobabilityP (t |p.t ),thatis,conditionedby frameworksprovidenecessaryfeedbacktocorrecthallucinations θ out in apromptpthatincludes(togetherwithpersona,problemsetting orperceivedtaskunderperformance. instructions,examples,etc.) somerepresentationt ofthespeci- in fiedinputinit. Colorsindicatethetaskcategoryeachidentified 5Invalidationresearchpapers,whenmorethanonetaskispresented,that indicatesthatseveralpromptingstrategiesareexploredforthesameSEproblem 6[80]reportstheuseofAutoGPT[76]butthenitsaysthatprototypeused 4somerecurringones ratherstaticandmanuallywrittenprompts 75 RQ1: DOWNSTREAMTASKSANDARCHITECTURES Table1: SEtask: Testing. SEProblem LLMDownstreamTasks ArchitecturalNotes (input→⟨typeoftask⟩→output[learn.strat.])∗ Unit-Test TestGen-LLM [8]: existing unit test class (UTC) + tested class id → TestGen-LLM[8]:Generatedtestsarefurther Generation ⟨CoverageAugmenting-Test-Extension⟩→ extendedUTC. processedinapipelineoffiltering-analysis FSML[14]:listhelpermeths+methundertest→⟨Test-Generation⟩→ (test)+[Few- (e.g.,buildability,non-flakiness,coverageim- Shot]. 
provement)thatispartoftheAssuredLLM- ChatGPTTests[19]:prgm→⟨Test-Generation⟩→ testcases(basic); BasedSoftwareEngineering[9]framework. prgm+unittestcases(UTC)+cvrgreport→⟨CoverageAugmenting-Test-Extension⟩ FSML[14]:Studyonpotentialproficiency. → extendedUTC. ChatGPTTests[19]:Study.Twoalternatives: CodeT[28]:intent+sig→⟨IntentCorresponding-Test-Generation⟩→ in/outpairs. (basic)andaversionthattriestoimprovecov- ChatUniTest[34]:focal-classsig+focalmethsrc-code+requireddepend+sigmeth→ erage. ⟨Test-Generation⟩→ (test)1...5[Adaptivefocalctxt]; CodeT[28]: Itispartofacodegeneration test+error+focalmeth+focal-classname+focalmethsrc-code→⟨ErrorAware-Test- approachthatincludesulteriordualexecution Correction⟩→ (test)1...5. agreementtochoosethebestsolution. CodeCoT[97]:functdefofunit+doc→⟨Basic&Edge-TestCase-Generation⟩→ test ChatUniTest[34]: Promptisadaptivetoto- cases. ken limit and further context of dependent AgentCoder[98]:functdefofunit+doc→⟨Basic&Edge-TestCase-Generation⟩→ classescouldbeaddedbasedondependency testcases. graph.CoTismentionedbutnotdetailed.Re- CodaMOSA [130]: (portion) src-code under test + callable def + callable name → pairphasemayalsoleverageanLLM. ⟨Targeted-Test-Generation⟩→ testcases. CodeCoT[97]:Testingispartofalargercode TestChain[134]: funcdefofunit+doc→⟨Basic&Edge-TestInput-Generation⟩→ generationsolution.
inputs[One-Shot]; AgentCoder[98]:Testingispartofalarger funcdef+doc+input→⟨IntentionCorresponding-Result-Generation⟩→ (requested codegenerationsolution. computation)∗+assertedvalue)[CoT,ReAct:requesttocomputetheadequateoutput]. CodaMOSA[130]:UseofLLMsinthecon- CoverUp [184]: code excerpt + uncovered lines → ⟨CoverageAugmenting-Test- textofSearch-BasedSoftwareTesting.LLMs Generation⟩→ tests; outputsrequiresignificantpost-processingto code excerpt + test + uncovered lines + (errors) → ⟨CoverageAugmenting-Test- beintegratedintotheSBSTframework. Correction⟩→ test. TestChain[134]:DesigneragentandCalcu- TestPilot[194]: methundertestsig+(doc)0...1+(usageexamp)0...1+(src-code)0...1+ latoragentsreported.CalculatorusesPython (failtest+error)0...1→⟨BuildablePassable-Test-Generation⟩→ (test)0...5. Interpreter. EASEeval [201]: methundertest name+ classcode + importstatements → ⟨Test- CoverUp[184]: Usestestingframeworkin- Generation⟩→ unittests. cludingcoveragetool. TELPA[255]:MUTcode+method-invocationseq+exampfollowingm-iseq+branch- TestPilot[194]:Refinerappliesstrategiesto relevantmeth→⟨CoverageAugmenting-Test-Generation⟩→ test. includeornotcertaininfoinprompts.Adap- ChatTESTER[264]:focalmethcode→⟨CodeCorresponding-Intent-Verbalization⟩ tive nature means that LLM is (re)invoked → intent; withfailingtestanderrormessageifvalidation focalmethsig+intent→⟨IntentCorresponding-Test-Generation⟩→ unittests; fails. unittest+errormsgs→⟨ErrorAware-Test-Correction⟩→ unittest. TELPA[255]:Staticpreprocessingisdoneto identifyrelevantmethod-invocationsequences andmethodsrelevantforbranchoutcomes. ChatTESTER[264]:Theiterativetestrefiner iterativelyfixesthecompilationerrorsinthe testsgeneratedbytheinitialtestgeneration. Thisisdonewithparsingandanalysisofcom- ponentsthatbuildprompts. Failure-Inducing DiffPrompt[135]:snip→⟨CodeCorresponding-Intent-Verbalization⟩→ intent; DiffPrompt[135]: Singleinteractionforin- TestGeneration intent→⟨IntentCorresponding-Code-Generation⟩→ prgms; tentandprogramgeneration.Multipleinterac- prgm→⟨Test-Generation⟩→ testcases. tionsforobtainingtestcasesthathavesame AID[140]:problemdescription+code→⟨IntentCorresponding-Code-Correction⟩→ resultforallgeneratedversions. code; AID[140]:Theapproachincludesdifferential problemdescription→⟨ConstraintSatisfying-InputGenerator-Generation⟩→ code. testingtodetectbugs. secTests[276]:functname+clientcode+vul-id+vulAPIids+referencetestcode→ secTests[276]:Studyonmimickinggeneric ⟨Mimicking-Test-Generation⟩→ testcodeonclientcode. vulnerability-exploiting test given on client code. Regression SymPrompt [191]: focal meth sig + type and dependency ctxt + path constr → SymPrompt [191]: Preprocessing through Testing ⟨ConstraintSatisfying-Input-Generation⟩→ test. staticanalysistechniquestogetpathsandcon- text. Continuedonnextpage 85 RQ1: DOWNSTREAMTASKSANDARCHITECTURES Table1: SEtask: Testing. (Continued) SEProblem LLMDownstreamTasks ArchitecturalNotes (input→⟨typeoftask⟩→output[learn.strat.])∗ InputGeneration RESTGPT[123]: OpenAPIspec→⟨DescriptionCorresponding-Liberal-Constraint- RESTGPT[123]:Improvesthetreatmentof Characterization⟩→ structwithconstrs,types,formatforparams[Few-Shot]; OpenAPIspec,particularlyhuman-readable OpenAPIspec+prevconv→⟨ConstraintSatisfying-Input-Generation⟩→ exampfor part. Bothtasksareactuallyperformedina param[Few-Shot]. singleprompt. 
InputBlaster [149]: local&global ctxt + NL candidate constrs + (dynamic hint) → InputBlaster [149]: Valid Input Generator ⟨GUIInputCorresponding-ValidityConstraint-Characterization⟩ → inferred constr (task1and2)isiterateduntilitmakestheAPP [Few-Shot]; transfer(elicitedasasingleprompt).Task3 prevconv+infconstr→⟨Valid-GUIInput-Generation⟩→ validinput+inferredconstr and4(alsoelicitedinasingleprompt)isalso [Few-Shot]; iteratedandintermediateresultsarepartofthe validinput+inferredconstr+retrvexampofunusualbug-trigginputs+(test-execfdbkon feedbackforeffectivenessanddiversity.DB mutant)→⟨Effective&Diversified-MutationRule-Characterization⟩→ mutationrule isbuiltwithbuggyexamplesfromGitHUB [Few-Shot]; recordedtomatchthestyleusedinprompt. mutationrule+(test-execfdbkonprevgenerator)→⟨InputGenerator-Generation⟩→ Also,unusualinputsthattriggeredcrashesdur- test-inputgenerator[Few-Shot]. ingexecutionofInputBlasterontheAPP.Sim- PBT-GPT[229]:APIdoc+focusmethname→⟨Postcondition-Assertion-Generation⟩ ilarly,itisusedtoselectexamplesforin-Ctxt → propassertions; Learning. APIdoc+focusmethname→⟨InputGenerator-Generation⟩→ genfunct; PBT-GPT[229]:Threepromptingstrategies APIdoc+focusmethname+genfunctconv→⟨ParametrizedTest-Generation⟩→ to generate property-based tests: indepen- prop-basedtest; dently(gen.funct./propertyassertion),con-
APIdoc+focusmethname→⟨PropertyBasedTest-Generation⟩→ prop-basedtest. secutively (continue conversation after gen. mrDetector[247]: shopname+shoptype→⟨LikelyRecalling-Data-Generation⟩→ funct.),andtogether(singlebig-bangprompt). searchingkeywords[Few-Shot,CoT]; shop name + shop type + potential recalling sentence → ⟨Answer-Qual- ity(Reasonability)-Judgment⟩→ yes/no[Few-Shot]. DataSet/Mutant FSML[14]:lineofcode→⟨Line-Mutation⟩→ (mutatedline)+[Few-Shot]. FSML[14]:Studyonpotentialproficiency. Generation MuTAP[43]: prgmundertest→⟨Test-Generation⟩→ initialunittest(inclassert) MuTAP[43]:LLMsareinvokedinamutant- [Zero-Shot/Few-Shot]; basedtestgenerationapproach. initial unit test + synt errors → ⟨ErrorAware-Test-Correction⟩ → unit test [Zero- BugFarm[104]: LLMisusedinthelastof Shot/Few-Shot]; the3stagesthatincludesmethodextraction, prevconv+mutatedcode→⟨AssertionObservable-Differential-Test-Generation⟩→ anamodel’sattentiontodifferentpartsofcode unittest. toidentifywheretoinjectbugs. BugFarm [104]: method signature + method body + statements to transform → µBERT[119]:Invokedtopredictmaskedto- ⟨BugInjecting-Code-Mutation⟩→ transformedcode. ken.Stochasticityusedtoget5completions. µBERT[119]:masked-code→⟨Code-Completion⟩→ code1...5[InFiller]. CHEMFUZZ[187]: LLMintegratedintoa CHEMFUZZ [187]: prev conv + masked-code → ⟨Valid&Diversified-Code- fuzzingscheme. Completion⟩→ code; FormAI[218]:Generationisthefirstphase execoutput→⟨Behavior-Anomaly-Detection⟩→ evalreport. fortheconstructionofalabelleddatasetfor FormAI[218]: typeofprgm+style→⟨IntentCorresponding-Code-Generation⟩→ vulnerabilityanalysis. prgm. LLMorpheus[219]: sourcecodefragmentwithplaceholder+maskedorigcode→ ⟨DifferentBehavior-Mutation-Generation⟩→ (replacement+briefexplanation)+. GeneralFuzzing OSS-Fuzz[74]:functtotgt+projectspecificinfo→⟨DriverCode-Generation⟩→ fuzz OSS-Fuzz[74]:Partofalargeprojectthatin- driver; cludesintrospectioncomponentsandfuzzing functtotgt+projectspecificinfo+compilationerrors→⟨ErrorAware-DriverCode- ones.Thedescriptionisabesteffortfromthe Correction⟩→ fuzzdriver. on-linedocumentation. ChatFuzz[94]:(sampleinput)0...1+(formatname)0...1→⟨File-Variation-Generation⟩ ChatFuzz[94]:Usestochasticitytogenerate → file[(InFiller)]. seedsingrey-boxfuzzingworkflow. Fuzz4All[252]: doc+(examp)∗ +(specs)∗ →⟨Usage-Summarization⟩→ distilled Fuzz4All[252]: Autopromptingdistilluser usage/funct; providedinputs.LLM-poweredfuzzingloops usage/funct→⟨UsageSastifying-Input-Generation⟩→ fuzzinputs; whichresorttogeneration,mutationandse- usage/funct+input→⟨SpecificationAware-Input-Mutation⟩→ mutatedinput; manticallyequiv.variantgeneration. usage/funct+fuzzinput→⟨Input-Variation-Generation⟩→ fuzzinput. UGCTX[266]:Studyofstrategies.Thetwo UGCTX [266]: header file + API usage examp + funct name → ⟨DriverCode- mostsophisticatedarereported. Generation⟩→ fuzzdriver(UGCTX); fuzzdrivercode+errors(compilation,link,run-time)+rootcauseAPIusageinfo→ ⟨ErrorAware-DriverCode-Correction⟩→ fuzzdriver(EX-ITER). Continuedonnextpage 95 RQ1: DOWNSTREAMTASKSANDARCHITECTURES Table1: SEtask: Testing. 
(Continued) SEProblem LLMDownstreamTasks ArchitecturalNotes (input→⟨typeoftask⟩→output[learn.strat.])∗ KernelFuzzing KernelGPT[257]: operationhandlercode+usageophandlcode→⟨CodeElements- KernelGPT[257]:Syscallspecificationgen- Identification⟩→ devicename+initializationop[Few-Shot](DriverDetection); erationforenhancingfuzzing.Akernelcode ioctlhandlfunct+assochelperfunctcode+(fetchedfunct&types+previnferred extractorfeedstheLLMwiththeappropriate usageinfo)→⟨InfoRequesting-CodeElements-Identification⟩→ (commandvalues)∗ codesnippetswhenrequested.Refineandre- +(furtherrequired[functs,types]+usageInfo)[Few-Shot,IterativePrompting: functs, pairwithfeedbackprovidedbyexternalspeci- types](CommandValue); ficationvalidationtool. argumenttotype+prevdetectedrelevantfunctcode+argument’susage+(fetchedfunct code)→⟨InfoRequesting-Type-Identification⟩→ argumenttype+furtherrequired (functs)[Few-Shot,IterativePrompting:functs,types](ArgumentType); type to describe + src-code structs + (fetched snip) → ⟨InfoRequesting-Definition- Extraction⟩→ typedef+furtherrequired(nestedtype)[IterativePrompting: nested types](TypeDefinition); spec-madeupidentifiedelements-+errors→⟨ErrorAware-Specification-Correction⟩ → spec[Few-Shot](SpecificationValidationandRepair). Compiler/ SearchGEM5[42]:code→⟨ParameterizedVersion-Code-Generation⟩→ paramver- SearchGEM5[42]:Generatesbinaryinputs Simulator sion+typeofparams[Few-Shot]; fortestinganHW-SWarchitecturesimulator. Fuzzing parameterizedversion+types→⟨Valid-Input-Generation⟩→ inputsample[Few-Shot]. That is, the tool also includes compilation,
WhiteFox [256]: expect input format + optimization name + src-code → fuzzinganddifferentialtesting. LLM-tasks ⟨CodeReachability-Input-Characterization⟩→ summtriggerinputpattern[Few-Shot]; areelicitedbyin-chainedpromptsandalast expectedinputformat+inputpattern→⟨ConstraintSatisfying-Input-Generation⟩→ promptthatrequeststheparameterizedver- summtriggerinputpattern[Few-Shotwithfeedbackloop]. sion,sampleinputandtype. WhiteFox [256]: Multi-armed bandit algo- rithmisusedtochoosefewshotsforprompts. Protocol/Parser FuzzingParsers[1]:nameobject→⟨StructureOfObject-Recall⟩→ well-formedtree FuzzingParsers[1]: Thesetasksbelongto Fuzzing structure; seedgenerationstage. Otherstagesinclude terminalname+(prevexamples)→⟨New-ExampleOfThing-Recall⟩→ example; fuzzingandpreprocessing. nameofparser→⟨ParsingErrors-Recall⟩→ describedparsingerrors; ChatAFL[162]:Multipleconversationswith string+errordescription→⟨ErrorTriggering-Data-Variation-Generation⟩→ string; theLLMandmajorityvoteforthefinalgram- strings+type→⟨Fuse-Transformation-Data⟩→ string. mar. Interactionwitheditortogeneratenew ChatAFL[162]:protocolname→⟨ProtocolGrammar-Recall⟩→ msggrammar[Few- seed. Interactionwithcompletertogetmes- Shot:expectedformat]; sage that would move protocol into a new msgseq+desiredadditions→⟨ModificationSpecified-MS-Edition⟩→ msgseq; state. commhistory→⟨Message-Completion⟩→ msg. DL-Libraries TitanFuzz[49]: libname+DL-API+expectasks/intent→⟨IntentCorresponding- TitanFuzz[49]: Invokedinanevolutionary Fuzzing Code-Generation⟩→ code; workflowtogenerateseedsandcompletemu- masked-code→⟨Code-Completion⟩→ code[Pretrained,InFiller]. tants. FuzzGPT[50]:codesnip→⟨APIUsed-Code-MultiClass-Classification⟩→ APIlabel FuzzGPT [50]: Data Set (DS) preparation [Few-Shot]; usesClassifierrole.Then,randompickfrom nameAPItobeused+codesnip→⟨ModificationSpecified-Code-Edition⟩→ code DSandusealternativeroles/strategies. snipusingAPI; APIname→⟨BugTriggering-Code-Generation⟩→ code[Few-Shot:classifsnips,CoT: bugdescrip]; APIname+prtlcodesnip→⟨BugRevealing-Code-Completion⟩→ code. Continuedonnextpage 105 RQ1: DOWNSTREAMTASKSANDARCHITECTURES Table1: SEtask: Testing. (Continued) SEProblem LLMDownstreamTasks ArchitecturalNotes (input→⟨typeoftask⟩→output[learn.strat.])∗ GUITesting QTypist[147]:inputwidgettype+localctxt+globalctxt→⟨Input-Completion⟩→ QTypist[147]: Promptsaregeneratedbya generatedinput; setofrulesoncontextinformation. Prompt- maskedinput+localctxt+globalctxt→⟨Input-InFilling⟩→ generatedinput. tuningisalsoinplace. GPTDroid[148]:prevconv+GUIctxt+prevactionfdbk+functionality-awarememory GPTDroid [148]: Tool builds prompts fol- →⟨FunctionCoverageOriented-NextAction-Selection⟩→ functbeingtested+status lowing linguistic patterns instantiated with +nextaction[Few-Shot:todefineoutputformat]. extractedAPPcontextinformation. Interac- AXNav[211]:accessibilitytestinstruct+nameofappundertest+formattedUIelement tionfollowsquestionandanswerstyle,which →⟨TestGoalCorresponding-Plan-Generation⟩→ tentativeplan=(task,actiondescr, meansinthiscasethatwhenfunctionbeing justification,evalcriteria,status)∗[CoT:justification](planner); testedandstatusisyieldedbytheLLM,an- UIrep+testinstruct→⟨Plan-Refinement⟩→ concreteaction[CoT:though,relevant otherquestioninstructsLLMtoyieldnextac- UIids,UIrelevantelements](mapper); tion. 
testgoal+tentativeplan+concreteaction+assocthought+UIdetectionsbefore+UI AXNav[211]:LLM-basedUInavigationsys- detectionsafter+evalhints→⟨PlannedAction-Successfullness-Assessment⟩→ eval temtotranslatefromnaturallanguagetestin- criteria+result+explanation[CoT:evalcriteria](evaluator); structionsintoasetofconcretesteps,execute testgoal+tentativeplan+currentstep+(fieldincludingastopcondorevalerror)→ stepsintheplanbycallingAPIsthatinter- ⟨StepOutcomeCorresponding-Plan-Correction⟩→ updatedtentativeplan(replanner). actwithadevice,andfeedbackresultstothe plannerwhenneeded. Functional TARGET[48]:trafficrule→⟨DSLValid&TryToUseGivenElements-RuleConsistent- TARGET[48]:Takesthreephasestoparsea Testing Scenario-Generation⟩→ draftscenario-rep[Few-Shot](Know.Extract.); trafficruledescriptiontoanexecutabledriving draftscenario-rep+rules→⟨RuleInconsistencies-Scenario-Correction⟩→ scenario scenarioinasimulator. LLMaddressesfirst (Know.Val.); processingphase. subcomponent scenario-rep + list of elements → ⟨InListCloseMeaning-Elements- ScenarioNL[58]: ScenarioNLallowsusers Replacement⟩→ scenario(SyntaxAlignm.). tospecifyamodelandpromptingtechnique. ScenarioNL[58]:crashincidentreport→⟨Focused-Summarization⟩→ relevantdy- Scenicdatabaseisusedtostoreandretrieve namics+staticobjects[ToT:expertsdebate]; semanticallysimilarexamples. crashincidentreport+relevantobjects→⟨Ambiguity-Analysis⟩→ questionstodisam- SysKG-UTF[204]:LLMsplaydifferentroles
biguate; intheconstructionandpostprocessingofa crashincidentreport+questiontodisambiguate→⟨ExpertSolvedUncertainty-TextDe- knowledgegraphforexploratorytesting. pendent-QuestionAnswering⟩→ answers[ToT:expertsdebate]; relevantobjects+properties→⟨ProbabilisticProgram-Generation⟩→ Program(part) [Few-Shot+RAGorHyDE,FunctionCalling:GPL2DSL]. LLMeDiff[105]:rule→⟨Pass+Fail+N/A-Test-Generation⟩→ test+confidence. SysKG-UTF[204]:bugreports→⟨StructureCompliant-Information-Extraction⟩→ preconds+stepstoreprod(S2R)+obsbehav+expectbehav[Few-Shot]; S2R→⟨StepsToReproduce-Refinement⟩→ (finergrained)S2R[Few-Shot]; bug scenario pair (incl S2R) → ⟨FeasibilityRedundancyAware-StepsToReproduce- Fusing(edition)⟩→ S2R[Few-Shot,CoT:pseudo-codeguidedintermassess]; bug scenario pair + potential (fused) S2R → ⟨FeasibilityAware-StepsToReproduce- Refinement⟩→ S2R[Few-Shot,CoT:assess]. Penetration PentestGPT[47]:user-intents→⟨IntentCorresponding-Plan-Generation⟩→ penetra- PentestGPT[47]:Itincorporatesthreecore Testing tiontasktree(ptt)(1); modules: theReasoningModule(1–5), the testingresults+ptt→⟨Elements-Update⟩→ ptt(2); Generation Module (6,7), and the Parsing ptt+updatedptt→⟨Validity-Transition-Analysis⟩→ result(3); Module(8)(eachreservinganLLMsession). ptt→⟨RulesSatisfying-Information-Extraction⟩→ (potentialnexttask)+(4); Activeuserfeedbackispossible. ptt+tasks→⟨MostPromising-NextTask-Selection⟩→ suggnexttask(5); pwn’d [80]: High-level task planning uses sub-task+availtools→⟨ToolsConstrained-Plan-Generation⟩→ seqofsteps[CoT] mechanismstointegrateLLMsasagents(Au- (6); toGPT).Low-levelvectorattackworksasa step→⟨Plan-Refinement⟩→ cmd[CoT](7); stepbystepreactiveexecutionofplan. rawuserintent—testoutp→⟨Summarization⟩→ condensedinfo(8). pwn’d[80]:scenario→⟨ScenarioCorresponding-Plan-Generation⟩→ pen-testplan [AgentGPT]; goal+prevconv+lastcmdoutput→⟨ReachabilityOriented-NextAction-Generation⟩ → cmd; cmd+output→⟨Rationalized-VulnerabilityProneness-CommandExecution-MultiL- abel-Classification⟩→ potentialvul. Continuedonnextpage 115 RQ1: DOWNSTREAMTASKSANDARCHITECTURES Table1: SEtask: Testing. (Continued) SEProblem LLMDownstreamTasks ArchitecturalNotes (input→⟨typeoftask⟩→output[learn.strat.])∗ Oracle-Problem FSML[14]:sig+meth-intent→⟨MetamorphicAssertion-Characterization⟩→ term- FSML[14]:Studyonpotentialproficiency. equivassertions[Few-Shot,CoT:codeintent+analysis]. SELF-DEBUGGING[32]:FortheGenera- SELF-DEBUGGING [32]: sql-query → ⟨SQL-CodeCorresponding-Explanation- tionstep,giventheproblemdescription,the Generation⟩→ explanation[Few-Shot]; modelpredictscandidateprograms(notshown sql-query→⟨QueryConsistent-TableResult-Generation⟩→ resulttable[Few-Shot]; inthisreport).Explanationstep,themodelis sql-query + result table → ⟨SQL-Code&ExecutionCorresponding-Explanation- promptedtoprocessthepredictionsina“se- Generation⟩→ explanation[Few-Shot]; manticallyusefulway”,suchasexplainingthe explanation → ⟨ColumnOriented-Explanation-Summarization⟩ → column-intents prediction(andreferenceNLqueryforSQL [Few-Shot]; querygeneration,sourceprogramfortransla- NL query → ⟨Rationalized-NLQuery-NumberOfColumns-Identification⟩ → NL- tionprogram)innaturallanguage.Correctness querycolumns[Few-shot,CoT]; isthenpredictedbydifferenttasksdepending NL-querycolumns+column-intents→⟨Docstrings-Equivalence-Checking⟩→ feed- on the code generation/translation problem. 
back[Few-Shot](CorrectnessTEXT-TO-SQL); Hereweshowhowthiscanbedeterminedby intent + assertion + code + (external feedback) + code explanation → askingthemodelitselftoworkasoracleofcor- ⟨Intent&AssertionCorresponding-Code-OneClass-Classification⟩ → yes/no [Few- rectness.Generation,explanation,evaluation Shot,CoT:assertionexecution](CorrectnessTEXT-TO-PYTHON). andrepairarechainedindemonstrations. nl2postcondition[59]:intent+(referimpl)0...1→⟨PostCondition-Formalization⟩→ TOGLL[89]:Sixdifferentpromptsarestud- assertions. ied. Wereportonewiththerichestcontext TOGLL[89]:testprefix+MUTcode+MUTdoc→⟨Assertion-Generation⟩→ asser- (P6). tion. Eywa[114]:LLMisusedtogenerateprotocol Eywa [114]: funct defs + arguments + result + validity constraints → modelcode. Symbolicexecutionisfurther ⟨ProtocolModelImplementation-Generation⟩→ modelcode. usedtogeneratetestcases. PropertyGPT[146]: baserulecode+functcodeundertest+contractsrc-code→ PropertyGPT [146]: Uses Retrieval- ⟨Similar-Assertion-Generation⟩→ rulecode[RAG:baserulecode]; augmentedgenerationbyprovidingrelevant base pre/post + funct under test → ⟨Similar-Assertion-Generation⟩ → funct-level specificationstobebasedon.
pre/post[RAG:basepre/post]; ClarifyGPT[167]:Intentionclarificationis rulecode+contractsrc-code+errorinfo+functundertestname→⟨ErrorAware-Asser- partofthiscodegenerationapproach.Gener- tion-Correction⟩→ rulecode; atedinputs(andmutations)areusedtocluster rule code + contract src-code + base rule code + missing funct under test name → solutions.Differencesanalysisandclarifying ⟨MissingFunction-Assertion-Correction⟩→ rulecode. questionsareactuallyperformedbyasingle ClarifyGPT[167]: MUTsig+doc→⟨IntentCorresponding-Code-Generation⟩→ prompt. (code)∗; CEDAR [170]: General demonstration re- MUTsig+doc→⟨Basic&Edge-TestInput-Generation⟩→ (test)+[Few-Shot]; trievalmethodillustratedinassertiongenera- req+(codeofaltsol)+→⟨CodeDifferences-Verbalization⟩→ descranddiffs[Few- tion.Neuralandfrequency-basedtechniques Shot]; forretrieval. req+altsoldescranddiffs→⟨Ambiguity-Analysis⟩→ clarifyingquestions[Few-Shot]. EMR[199]:Initialstudyongeneratingmeta- CEDAR[170]:focalmeth+unit-testsnip→⟨Assertion-Generation⟩→ assert[Few- morphicrelationsfromdocumentationandex- Shot+RAG:retrvdemo(U-Testsnip)]. ecutablemetamorphicrelations.Somedetails PROSPER[197]:RFCdoc.→⟨FSM-Elements-Extraction⟩→ FSM-elements[Few- onpromptengineeringarenotprovidedand Shot]. areconjectural. EMR[199]:reqdoc→⟨IORelated-Sentences-Identification⟩→ I/Osentences; ALGO[272]:This(oraclesynthesis)ispart I/O sentences → ⟨SentenceDerivable-MethamorpicRelation-Characterization⟩ → ofaprogramgenerationframework.Program metamorphicrelation(MR); generationmayuseimplicitorexplicitsearch MR+SUTAPI+SUTdocum→⟨RequirementsDerivable-ExecutionMethamorpi- (e.g.,algorithmicideas). cRelations-Generation⟩→ executablemetamorphicrelations[Few-Shot:DSL]. GameBugDescriptions [212]: video game name + scenario + perspective (e.g. game designer, player, real-world) → ⟨Scenario-AccordingToPerspective-Anomaly- Detection⟩→ thought; prevconv→⟨Answer-FailureOriented-Summarization⟩→ buggyevent(AnswerEx- traction). MetaMorph[220]:docsnip→⟨VariableNames-Identification⟩→ vars[Few-Shot]. ALGO[272]:probformul+in/outexamp→⟨BruteForce-IntentCorresponding-Code- Generation⟩→ implem. 125 RQ1: DOWNSTREAMTASKSANDARCHITECTURES Table2: SEtask: Debugging. SEProblem LLMDownstreamTasks ArchitecturalNotes (input→⟨typeoftask⟩→output[learn.strat.])∗ Bug AdbGPT[62]:bugreport→⟨StepsToReproduce-Extraction⟩→ stepstoreprod[Few- AdbGPT[62]:In-chainedapproach. Reproduction Shot,CoT]; CrashTranslator[103]:ItleveragesLLMfor viewhierarchyoftheGUI+step→⟨Plan-Refinement⟩→ componenttooperate[Few- one of the scorer which goal is to propose Shot,CoT]. explorationpriority. CrashTranslator [103]: manifested page names + curr page + crash page → LIBRO[118]: LLMworksasfirstcompo- ⟨ReachabilityOriented-NextNode-Selection⟩→ nextpage[Zero-Shot,LLMfine-tuned nentoftoolchain. Asetoftestcandidates withAPPstransitionrelations]; aregeneratedbyqueryingtheLLMmultiple manifestedpagenames+currentpage+crashpage+nextpage+interactiblewidgets→ times. ⟨ReachabilityOrientedWithSuggestedNextNode-NextAction-Selection⟩ → widget [Zero-Shot,LLMfine-tunedwithtargetedGUIpageandtransferwidget]. LIBRO[118]:bugreport+(stacktrace)0...1→⟨BugReproducing-Test-Generation⟩→ testmeth[Few-Shot]. DuplicateBug Cupid[274]: bugreport→⟨AimedAtDuplicateDetection-Keywords-Extraction⟩→ Cupid[274]:LLMplaysapunctualroleinto ReportDetection keywords. atraditionalsolutionforduplicatebugreport. 
Fault SELF-DEBUGGING [32]: code snippet + code snippet + input + feedback → SELF-DEBUGGING[32]:FortheGenera- Localization ⟨Differences-RootCause-Analysis⟩→ execution-basedanalysis[Few-Shot,CoT:trace tionstep,giventheproblemdescription,the execution](C++-TO-PYTHON). modelpredictscandidateprograms(notshown AutoFL[116]:failingtestinfo→⟨RootCause-Analysis⟩→ rootcause[ReAct:funct in this report). The fault detection is per- callsfordebugging]; formedbycreatinganexecutiontraceofthe prevconv→⟨Answer-FaultOriented-Summarization⟩→ culprit. predictedcodeforasampleinput. AutoSD[117]:functmeth+tests+errmsg+(reports)0...1→⟨RootCause-Analysis⟩→ AutoFL[116]:Twostages:rootcauseexpla- hypoth+prediction+experiment; nationandbuglocation. LLMusesfunction hypoth+predict+experiment+expobservation→⟨EvidenceSupport-Judgment⟩→ calling. conclusion; AutoSD[117]:ChainedinteractionofLLMs prevconv→⟨DebuggingAware-Code-Generation⟩→ fixedcode. withexecutingengines. LLM4CBI[221]:prgm+mutationinstr+validityfdbk→⟨MutationSpecified-Code- LLM4CBI[221]:Generationofpromptsuses Mutation⟩→ prgm. ad-hocstaticanalysis.Thentheyareselected, ChatGPT-4(Log) [249]: focal source faulty code → ⟨FaultProneness-CodeLines- executedandmodifiedinaRLworkflowthat
Ranking⟩→ orderedlistoflinesandreason[CoT:functintent,reasonperline]; includessomeclassiclocalizationtechniques. prev conv + test case + error → ⟨TestResultAware-FaultProneness-CodeLines- ChatGPT-4(Log)[249]:Empiricalstudyof ReRanking⟩→ intent+orderedlistoflinesandreason. LLMsproficiency.Codecontextismodified inthecontrolledexperimenttoassessimpact. RootCause RCACopilot[33]:diagnosticinfo→⟨RootCauseOriented-Summarization⟩→ summ RCACopilot [33]: Diagnostic information Analysis diag(incidentsummarization); collectionstageisperformedbeforepredic- incident summ + list close categorized historic incidents → ⟨Common-RootCause- tion.Tasksarecorepartoflargersolution. Selection⟩→ rootcause+category[CoT:explanation]. x-lifecycle[71]:Vectordatabaseisusedfor x-lifecycle[71]:servicedescr→⟨RootCauseOriented-Summarization⟩→ summsvc RetrievalAugmentedGeneration. descr; RCAAgents[189]:Vectordatabasetosearch titleincident+summinc+svcdependencies+svcsummdescr+similarhistoricinc→ forhistoricalincidents. ⟨SimilarityGuided-RootCause-Analysis⟩→ rootcause+svcdep? LM-PACE [268]: Focus on confidence es- RCAAgents[189]:initialincidentinfo→⟨RootCause-Analysis⟩→ rootcause[ReAct: timation and calibration. Root cause gen- rqstfordetails,rqstforhistoricalinc,queryonretrievedinc]; eration is a black box that can be ad- retrvtext+query→⟨TextDependent-QuestionAnswering⟩→ answer. dressedbyLLMsaswell.Relevantincidents LM-PACE [268]: (incident, root cause)+ + curr incident + (guessed root cause) → camefromhistorical-DB(semanticsimilarity- ⟨SufficiencyToInferAnswer-Assessment⟩→ analysis; based retriever). Confidence of Evalua- prevconv→⟨Assessment-Yes/No-Summarization⟩→ yes/no(1); tion(COE-score)andRoot-Cause-Evaluation relevant-incidswithrootcauses+currincid+answrootcause→⟨Answer-Qlty(Truth, (RCE-score)areobtainedfrom(1)and(2)resp. Ground,Informative)-Assessment⟩→ analysis; multiplesamplingonLLMs.COEandRCE prevconv→⟨Assessment-Scaled-Summarization⟩→ score(2). scoresaretheinputofanoptimizationpro- inContextRCA[275]:report→⟨RootCauseOriented-Summarization⟩→ incident+ ceduretobuildamodeltopredictcalibrated rootcause; confidencescore. Indeploymentphase,pre- incidentreport→⟨RootCause-Analysis⟩→ rootcause[Few-Shot:similarexamples]. dictedrootcauseandcalibratedconfidence scoreisyieldtoon-callengineers. inContextRCA [275]: Vector data base is populatedwithsummarizedincidentsandroot causes. 135 RQ1: DOWNSTREAMTASKSANDARCHITECTURES Table3: SEtask: Vulnerability/Misuse/FaultDetection. SEProblem LLMDownstreamTasks ArchitecturalNotes (input→⟨typeoftask⟩→output[learn.strat.])∗ Vulnerability ChatGPT4vul [67]: src-code → ⟨VulnerabilityProneness-Code-OneClass- ChatGPT4vul[67]:Studyofproficiency.It Detection Classification⟩→ yes/no; includesrepairprompts(notreportedhere). prevconv→⟨VulnerabilityProneness-CodeLines-Ranking⟩→ list; VulBench[69]: Studyincludingalternative src-code+(tgtCWD-ID)+→⟨ListTargeted-CWEVulnerabilityProneness-Code-Mul- promptingstrategies. tiLabel-Classification⟩→ CWD-ID; NLBSE24 [109]: This evaluation also in- src-code→⟨CVSS-Scoring⟩→ score. cludestasksversionsthattakepreviouslabels VulBench[69]: snip+(vulclasses)0...1 →⟨Rationalized-ListTargeted-CWEVulner- asinputs. abilityProneness-Code-MultiLabel-Classification⟩→ vulverdict+(vulclass)0...1 + VulDetect[120]:Studyincludingalternative stepbystepexplanation[Few-Shot:∈project—(cid:60)project,CoT]. promptingstrategies. Self-reflectionisone NLBSE24 [109]: code snippet → ⟨VulnerabilityProneness-Code-OneClass- option. 
Classification⟩→ yes/no; GRACE[151]:Wereporttheenhancedvul- code snippet + intended functionality → ⟨GivenIntentionCorrespondence-Code- nerability detection module. The approach OneClass-Classification⟩→ yes/no. alsoconsistinademonstrationselectionmod- VulDetect [120]: tgt code snippet → ⟨Rationalized-CWEVulnerabilityProneness- uleandagraphstructureinformationgenera- Code-MultiLabel-Classification⟩→ yes/no+vultype+vulname+explanation; tionmodule. tgtcodesnip+tgtCWE→⟨Rationalized-ListTargeted-CWEVulnerabilityProneness- AIagent [198]: Preliminary study. Basic Code-MultiLabel-Classification⟩→ yes/no+vultype+vulname+explanation; prompt is augmented with caveats on each tgt code snip + CWE-DF → ⟨Rationalized-DataFlowTargeted-CWEVulnerabili- category. tyProneness-Code-MultiLabel-Classification⟩→ yes/no+vultype+vulname+data- DLAP[258]:DLmodelsareusedtogenerate flowexplanation; apredictionprobabilitythatserveasreference query+data-flow-analysis→⟨Answer-Quality-Assessment⟩→ yes/no+explanation+ inputforLLMassessment.LHSisusedtofind finalverdict. similarcode.Staticanalysisresultsarekeyto GRACE[151]: codesnippet+codepropertygraph→⟨CPGEnhanced-Vulnerabili- querytoobtaincustomizedCoTgeneration
tyProneness-Code-OneClass-Classification⟩→ yes/no[Few-Shot: retrieveddemon- guidancetemplates. stration]. PromptEnhanced[267]:Studyondifferent VSP [174]: src-code → ⟨Rationalized-CWEVulnerabilityProneness-Code-MultiL- promptingstrategies. Itactuallytriesunsuc- abel-Classification⟩→ listCWEs[Few-Shot,CoT]; cessfullytogeneratedata-flowandAPIcalls src-code+CWEid→⟨Rationalized-IdTargeted-CWEVulnerabilityProneness-Code- byusingLLMtoo(propertycharacterization, OneClass-Classification⟩→ yes/no+reason[Few-Shot,CoT]. inourterms). AIagent[198]:(vultypename+caveat)++src-code→⟨ListTargeted-CWEVulnera- ChatGPT(Plus)[279]:Studyofproficiency bilityProneness-CodeLine-MultiClass-Classification⟩→ result(category+line). ofGPTsvsfine-tunedversion. DLAP [258]: code-snippet + (snippet + label + probability)+ → ⟨GivenExamples- VulnerabilityProneness-Code-MultiLabel-Soft-Classification⟩→ label+prediction (SuperICL); guidance steps + code to review + potential vulnerability → ⟨Plan-Refinement⟩ → specificreview-steps(BespokeCoTGuidance); code+probprediction+specificreview-steps→⟨Rationalized-ReviewStepsEnhanced- IdTargeted-CWEVulnerabilityProneness-Code-OneClass-Soft-Classification⟩ → vulnerability+prob+reason[CoT:reason](FinalPrompt). MultiTask [263]: code snippet → ⟨VulnerabilityProneness-Code-OneClass- Classification⟩→ yes/no(Detection); codesnippet→⟨CVSS-Scoring⟩→ score(Assessment); codesnippet→⟨VulnerabilityProneness-CodeLine-OneClass-Classification⟩→ lines (Location); codesnippet→⟨AssociatedVulnerability-Description-Recall⟩→ CWEdescription (Description). PromptEnhanced[267]: src-code→⟨CodeCorresponding-Intent-Verbalization⟩→ intent; src-code + intent + (API call seq) + (data-flow descr) → ⟨(APICalls)/(DataFlow)Enhanced-VulnerabilityProneness-Code-OneClass- Classification⟩→ yes/no. ChatGPT(Plus) [279]: project info + ext src knowl (top CWE) + src-code to analyze → ⟨(CWE+GivenVulnDescr)VulnerabilityProneness-Code-OneClass- Classification⟩→ yes/no[Few-Shot:K-exampeither(random)/Few-Shot+RAG:simil codetoanalyze]. Continuedonnextpage 145 RQ1: DOWNSTREAMTASKSANDARCHITECTURES Table3: SEtask: Vulnerability/Misuse/FaultDetection. (Continued) SEProblem LLMDownstreamTasks ArchitecturalNotes (input→⟨typeoftask⟩→output[learn.strat.])∗ Line-Level/Edit- FLAG [3]: prefix snippet + (suffix snippet) + (prefix of line to be guessed) → FLAG[3]:LLMisusedaslinegenerator(pre- Time ⟨ContextConsistent-Line-Completion⟩→ codeline. processor). Aboundednumberofattempts Fault/API EditTime[25]:→⟨Definition-Recall⟩→ vultaskdescrip; withhintsistriedtogetnonemptyline.Fea- Misuse codelanguage+vultype→⟨Example-Recall⟩→ examp; ture extraction based on edit distances and Prediction code snip → ⟨Rationalized-VulnerabilityProneness-Code-OneClass-Classification⟩ Bleu.Logprobsoftokensareusedwhenavail- → yes/no+explan; able. Classification is done based on such code snip → ⟨Rationalized-VulnerabilityProneness-Code-OneClass-Classification⟩ features. → yes/no+explan[Few-Shot:recalledexamp]. EditTime [25]: Comparative study also WitheredLeaf [30]: code → ⟨Rationalized-SemanticBugPresence-CodeLine- againstafine-tuningapproach. Retrievalis OneClass-Classification⟩→ yes/no+(line+reason)+; donetofindadequatetaskdescriptionandex- (codeline+reason)+→⟨Warnings-Irrelevance-Filtering⟩→ (line+reason)+; amplesforthefew-shotlearningapproach. 
codeline + reason → ⟨Rationalized-BugFixabilityByNameChange-CodeLine- WitheredLeaf [30]: Lightweight, open- OneClass-Classification⟩→ yes/no+fixed-line[CoT:fix]; sourcemodels(e.g.,infillingones)areusedto prevconv+codeline+reason→⟨ListSemiTargeted-BugCategory-BugExplanation- identifysuspiciousprogramentitiesasapre- MultiClass-Classification⟩→ category. processingstep. Lasttasksareimplemented LLMAPIDet[240]:codebefore+codeafter→⟨RootCauseOriented-ChangeAction- bysameprompt. Verbalization⟩→ rule[Few-Shot]; LLMAPIDet[240]:StudyonDLAPIMisuse code snip (incl. API usage) → ⟨CodeCorresponding-Intent-Generation⟩ → NL RootCausesthatfeedLLM-basedsolution. description; MisuserulespopulatesaDBandexamplesfor code snip + API usage + (potential misuses rules) → misusesdetection.Potentialmisuserulelistby ⟨GivenAPIMisuseRulesCompliance-Code-OneClass-Classification⟩ → yes/no cosinesimilaritybetweenthecodeexplanation [RAG]. obtainedinstep2andeachmisuseruleinDB. Patchingstepisnotshowninthisreport. Vulnerability ChatGPTSCV[29]:srcsmrtcontrct+tgtvuls→⟨Rationalized-ListTargeted-CWE- ChatGPTSCV[29]:Empiricalstudy. Detectionfor VulnerabilityProneness-Code-MultiLabel-Classification⟩→ assessment[CoT]; SmartAudit[44]:PotentialitystudyofLLMs SmartContracts vulclasses+assess→⟨AssessmentBased-Vulnerability-MultiClass-Summarization⟩ to find smart contract vulnerabilities. Bi-
→ list(vulclass,DerivedBinaryVerdict)[CoT]. nary,non-binary,CoTpromptsarepresented. SmartAudit[44]:contractsrc-code+vultype+vuldescr→⟨ListTargeted-CWEVul- CoTversionisperformedbyiterativelyasking nerabilityProneness-Code-OneClass-Classification⟩→ yes/no; LLMtoauditeachfunctionname,revisiting contract src-code → ⟨Rationalized-VulnerabilityProneness-Code-MultiLabel- auditorlinkingfunctionstofindvulnerabili- Classification⟩→ vulsdescr; ties. contract src-code + prev conv + (focal funct name) → ⟨Rationalized-Vulnerabili- GPTLens[95]: Severalauditorssolvingde- tyProneness-Code-MultiLabel-Classification⟩→ vulsdescr+(fixrecom)[CoT:intent tectiontaskgeneratepotentialvulnerabilities. +thoughts]. Acriticassessthem. GPTLens[95]:src-code→⟨Rationalized-VulnerabilityProneness-Code-MultiLabel- LLM4Vuln[208]:Evaluationframeworkthat Classification⟩→ (vul+functname+reason)∗(auditor); includesLLM-basedResultAnnotationand src-code+vul+functname+reason→⟨Audits-Qlty(Correctness, Severity, Prof- Analysis(notreportedhere). VectorDBare itability)-Assessment⟩→ score+explan(critic). populatedusingabstractgenerationandfunc- LLM4Vuln[208]:vulreport+tgtcode→⟨CodeCorresponding-Intent-Verbalization⟩ tionalsummarytomatchinanalysistimerel- → operationalsummary(funct.summ.); evant summarized vulnerability knowledge vulreport→⟨Vulnerability-MechanismExplanation-Summarization⟩→ abstractvul (Alt2).RawversionisbasedonaDBmatch- (abs.gen.); ingreportstovulnerablecode(Alt1). LLM tgt code + (DB.MatchVulReport(TC)) (Alt1) + seeksextracontextthroughLLM’sfunction (DB.MatchAbsVulKnow(funct.summ.(TC)) (Alt2) → ⟨InfoRequesting-Rational- callingmechanism. ized-GivenVulnDescrProneness-Code-MultiLabel-Classification⟩→ yes/no+(type GPTScan[209]: Authorsbreakdowncom- ofvul), (reason)[Pre-CoT:functionalsummary+expliciterrors—Post-CoT:patch monlogicvulnerabilitytypesintoscenarios or Proof of Concept exploit, ReAct: getFunctionDefinition, getClassInheritance, andproperties. LLMsscenarioandproperty getVariableDefinition,RAG:Alt1,Alt2]. matchesandidentifiedvariablesarevalidated GPTScan [209]: (scenario)∗ + contract src-code → bystaticconfirmation.SavingonGPTcosts ⟨GivenCharacteristicsCorrespondence-Code-OneClass-Classification⟩ → yes/no byfirstfilteringinasingleprompt(1). [CoT:mimicinthebackground](1); scenario + prop + contract src-code → ⟨GivenBehaviorCorrespondence-Code- OneClass-Classification⟩→ yes/no[CoT:mimicinthebackground]; src-code → ⟨RoleBased-CodeElements-Identification⟩ → vars/statements [CoT: mimicinthebackground]. 155 RQ1: DOWNSTREAMTASKSANDARCHITECTURES Table4: SEtask: StaticAnalysis. SEProblem LLMDownstreamTasks ArchitecturalNotes (input→⟨typeoftask⟩→output[learn.strat.])∗ Call-Graph/CFG CFG-Chain[100]: code→⟨CodeBlocks-Extraction⟩→ nestedcode-blocks[Few- CFG-Chain [100]: LLMs are invoked in Construction Shot]; achainofsubtasks: structurehierarchyex- code+nstdcode-blocks→⟨BlockCorresponding-Code-Extraction⟩→ basiccode- traction(translation),nestedblockextraction blocks[Few-Shot]; (completer),BasiccodeCFG(translation)and code-block→⟨ControlFlow-Identification⟩→ CFG[Few-Shot]; graphfusion. CFGs→⟨CFG-Fusion⟩→ CFG[Few-Shot]. UseBefore LLift[132]:usesite+(retrvcodesnip)→⟨InfoRequesting-Initializator-Identification⟩ LLift[132]:Post-constraintguidedpathanal- Initialize → (retrvrqst)+initializer[ReAct:functtoretrv]; ysistoverifythepathfeasibilityofthe“use” usesite+initsig→⟨PostConstraint-Identification⟩→ post-constroninitresults[Self- ofaninitializedvariable. 
Staticanalyzerup- Validation]; stream;twoconversations((1)initdetect./post- init invoc + post-constr + var focus → ⟨InfoRequesting-QualifiedPostCondition- constraintgeneration(2)summarization):mul- Characterization⟩→ (retrvrqst)+may-mustinit-post[Few-Shot,CoT,ReAct,Self-Val- tipleiterationseach.Majorityvoting. idation]. ResourceLeak SkipAnalyzer [165]: code snip → ⟨Rationalized-NullDerreferencePresence-Code- SkipAnalyzer[165]: Thepipelineanalyzes OneClass-Classification⟩→ yes/no+vuldescr[CoT,One-Shot,Few-Shot]; snippetsbyusingLLMsandstaticanalysis code snip → ⟨Rationalized-ResourceLeakPresence-Code-OneClass-Classification⟩ tools like Infer. LLM is also used to filter → yes/no+vuldescr[CoT,One-Shot,Few-Shot]; false-positivewarnings.LLMsareusedtofix codesnip+warning→⟨Warning-FalsePositiveProneness-Assessment⟩→ verdict codeaswell(outofscope). [CoT,One-Shot,Few-Shot]. InferROI[233]:LLMisusedtogetintentions InferROI [233]: snip → ⟨ResourceLeakRelated-CodeElements-Identification⟩ → incode,thenastaticresourceleakdetection leakableresources+acquisition/releasingcalls+closedif-conds. engineisfeedwiththisinformation.
Data-Flow LLMDFA[232]: AST-traversalskel+(suggestedidentifrules)→⟨Source/Sink-Ele- LLMDFA [232]: LLMs are used together Analysis mentIdentificationCode-Completion⟩→ source/sinkextractor[Few-Shot:E spec]; withsolvers,parserandad-hoccode. AST-traversalskel+suggestedidentifrules+E spec+prevcandidateextractorscript+ (errormsgonE spec)+(falseposonE spec)+(falsenegonE spec)→⟨Source/Sink-Ele- mentIdentificationCode-Correction⟩→ candidateextractor; codesnip+¡var,line¿+¡var,line¿→⟨VariablesSameValue-Identification⟩→ yes/no [CoT,Few-Shot:inclthough]; scriptskeleton+pathinformation→⟨PathConditionEncoding-Code-Completion⟩→ candidatescript; script skeleton + path information + prev candidate script + error msgs → ⟨PathConditionEncoding-Code-Correction⟩→ candidatescript. TaintAnalysis E&V[79]:taskinput+static-analysispseudo-code+relevantsrc-code+prevresults+ E&V[79]: Generalframeworkforconduct- veriffdbk→⟨Step-Computation⟩→ roundoutput+(retrv-rqst)0...1; ingstaticanalysisfrompseudo-codebymeans execspecs+relevsrc-code+pesudo-code+roundoutput→⟨Behavior-Correctness- ofLLMs.Agent-basedarchitecture,buthard- Assessment⟩→ veriffdbk. codedplanningstrategies.Augmenttempera- LATTE[143]:name+(code)0...1→⟨Sink-Identification⟩→ sink; tureifre-analysisrequired. name+(code)0...1→⟨ExternalInputSource-Identification⟩→ externalinputsource; LATTE [143]: LLMs are invoked to iden- (prevconv)+codesnip+taintsrcs+taintinfo→⟨TaintFlow-Identification⟩→ de- tifysourcesandsinks. Dangerousflowsare duceddataflow[incrpromptseq]; analyzedstepbystepbyLLMsinaprompt prompt seq results → ⟨Rationalized-TaintFlowAnalysisEnhanced-CWEVulnerabili- sequencedrivenbyslicedcode. tyProneness-Code-MultiLabel-Classification⟩→ (vul)+. StaticSlicing SimulinkSlicer[153]:model+requirement→⟨Model-Slicing⟩→ modelcomponents [Few-Shot,CoT:dependencechainelicitationbydemonstration]. Fix Acceptance CORE[230]:diff→⟨Patch-Qlty(Fixes, LeastImpact)-Assessment⟩→ score+rea- CORE[230]:ProposerLLMgeneratespoten- Check son. tialcoderevisions(notshownhere). Static analysisisrunonthoserevisions. Reviewer rankssolutionsforspecificfixwarnings. 165 RQ1: DOWNSTREAMTASKSANDARCHITECTURES Table5: SEtask: ProgramVerification. SEProblem LLMDownstreamTasks ArchitecturalNotes (input→⟨typeoftask⟩→output[learn.strat.])∗ Program AlloyRepair[4]:faultyspec+(generic-feedback)0...1→⟨FeedbackGuided-Specifica- AlloyRepair[4]: Alloyanalyzerisusedto Verification tion-Correction⟩→ fixedspec(repairagent); validateangenerateareportthatiseitherused faultyspec+suggestion→⟨SuggestionGuided-Specification-Correction⟩→ fixed asinputforcorrectionorprocessedbyanLLM spec(repairagent); togeneratearefinedsuggestion. reportfeedback+faultyspec→⟨Suggestion-Generation⟩→ suggestion(promptagent). Loopy[115]:Invariantsarecheckedbyusing ChatInv[107]: code+loc+maskedassert→⟨Assertion-Completion⟩→ assert[In- symbolictools. Assertionscollectedtrough Filler]. severalLLMinvocations.Thenitchecks(in Loopy[115]:annotprgmwithproptobeverif→⟨Inductive&SufficientLoopInvariant- lineartime)ifthereexistsasubsetthatisinduc- Assertion-Generation⟩→ prgmannotwithsetofloopassert; tiveandsufficient.Repairtakesintoaccount annot prgm + verif fdbk → ⟨SyntaxInductivenessSufficiencyAware-Assertion- categorizedfeedbackanddependencerelation Correction⟩→ annotprgm. betweenassertions. 
SpecGen[154]: prgm→⟨CodeCorresponding-Specification-Generation⟩→ prgm SpecGen[154]: Tasksmodelsfirstphaseof withspecs[Few-Shot]; theapproach:aconversation-drivenspecifica- prevconv+curatedveriffdbk→⟨ErrorAware-Specification-Correction⟩→ prgmwith tiongenerationleveragingLLMs. specs[Few-Shot]. Clover[206]:Consistencycheckstoknow(i) Dafny-Synth[164]:intent→⟨IntentCorresponding-AnnotatedCode-Generation⟩→ thecodeisfunctionallycorrectwithrespect annotcode(contextless); toitsannotation;(ii)theannotationcaptures intent+sig+tests→⟨Intent&TestsCorresponding-AnnotatedCode-Generation⟩→ thefullfunctionalityofthecode;and(iii)the annotcode(signature); DocString also accurately reflects the func- intent→⟨IntentCorresponding-Signature-Generation⟩→ sig(dynamic); tionalityofthecode.Deductivecheckerand intent+sig→⟨NL-Specification-Distillation⟩→ pre/postspec(dynamic); testsareusedtoproveannotatedprogramsand intent+sig+spec→⟨Intent&SignatureCorresponding-Provable-AnnotatedCode- equivalencebetweenreconstructedartifacts. Generation⟩→ annocode[RAG+Few-Shot:retrievedsimil](dynamic). AutoSpec[242]:Callgraphisusedtoidentify Clover[206]:annotcodeskel+verifierfdbk→⟨VerifierAware-AnnotationCorrespond- locationsandorderforspecificationgenera- ing-Code-InFilling⟩→ code(anno2code); tion.
annot code skel + compiler fdbk → ⟨CompilerAware-AnnotationCorresponding-Code-InFilling⟩ → code (anno-complete);
intent + annot code skel + compiler fdbk → ⟨IntentCorresponding-Code-InFilling⟩ → code (doc2code);
annot code → ⟨CodeCorresponding-Intent-Verbalization⟩ → docstrings (code2doc);
code → ⟨HoareValid-Annotation-Completion⟩ → pre/post cond (code2anno);
funct sig with docstring + comp fdbk → ⟨IntentCorresponding-PrePost-Formalization⟩ → annot code with pre and post (doc2anno);
annotation → ⟨AnnotationCorresponding-Intent-Verbalization⟩ → docstrings (anno2doc);
docstring + docstring → ⟨Docstrings-Equivalence-Checking⟩ → yes/no (docstring equiv check).
AutoSpec [242]: masked annotated-code → ⟨Annotation-Generation⟩ → (pre/post/inv) annotated-code [Few-Shot].
Lemur [246]: code under analysis + placeholder → ⟨Invariant-Lemma-Generation⟩ → assert;
code under analysis + placeholder + assert + issue → ⟨Invariant-Lemma-Correction⟩ → assert.
RustProof [259]: annot code (with precond) → ⟨PostCondition-Generation⟩ → annot code (with pre/post) [CoT: code intent + precond explan];
annot src-code sgmnt → ⟨Invariant-Lemma-Generation⟩ → sgmnt-with-proof [Few-Shot, CoT: precond explan + postcond explan + inv explan + proof explan];
annot src-code sgmnt + last answer + model checker error → ⟨ErrorAware-Lemma-Correction⟩ → sgmnt-with-proof [CoT].
Notes: Lemur [246]: integrated with verifiers, it follows a set of proof rules that invokes LLMs to act as suggesters of auxiliary assertions and repairers of those suggestions. RustProof [259]: proof helper integrated with a smart contract model checker. Human and static analysis tools can be integrated to improve generation.

6. RQ2: A Downstream Task Taxonomy

The proposed taxonomy results from our initial best effort to cluster, name, and rationalize the downstream tasks implemented by the prompts and conversations reported in the reviewed literature. The taxonomy is organized hierarchically as a directed tree of task types, with the types closer to the root being the most general ones (Task Categories). Specificity is expressed in two different ways: by "subtyping" along directed edges in the top levels of the taxonomy (see Fig. 4), and, for the more concrete levels of the taxonomy (Task Classes), by types that arise as combinations of choices over the identified dimensions (that is, externally visible variation points), as a way to capture the specificity of the tasks proposed by the analyzed studies (Figs. 5–27).

The initial rationale for the taxonomy has been to show commonalities between the tasks requested by prompts across approaches while retaining the richness of some relevant details. The taxonomy is also meant to be useful in helping identify patterns of LLM use and prompting, and in making explicit some less explored (or unexplored) task types.
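The signature notation used throughout Tables 1–5, input → ⟨type of task⟩ → output [learn. strat.], can also be read as structured data when cataloguing approaches. The sketch below is only an illustration of that reading under the assumption that one wants to manipulate such a catalogue programmatically; the class and the example values are ours and do not come from any surveyed paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DownstreamTask:
    """One LLM invocation in the survey's notation:
    inputs -> <task type> -> outputs [learning strategies]."""
    task_type: str                 # e.g. "ErrorAware-Specification-Correction"
    inputs: List[str]              # conceptual parameters given to the prompt
    outputs: List[str]             # artifacts the model is asked to produce
    strategies: List[str] = field(default_factory=list)  # e.g. ["Few-Shot", "CoT"]

    def signature(self) -> str:
        strat = f" [{', '.join(self.strategies)}]" if self.strategies else ""
        return (f"{' + '.join(self.inputs)} -> <{self.task_type}> -> "
                f"{' + '.join(self.outputs)}{strat}")

# Hypothetical transcription of a specification-repair invocation (illustrative only).
task = DownstreamTask(
    task_type="ErrorAware-Specification-Correction",
    inputs=["spec made up of identified elements", "errors"],
    outputs=["spec"],
    strategies=["Few-Shot"],
)
print(task.signature())
```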
Ultimately, the taxonomy might also help us understand how to leverage warnings and advances from the natural language generation community (e.g., [155]) to build LLM-enabled applications in a principled way (see Section 8).

Downstream Tasks Taxonomy
Generative
  Code Generation
    General-Code Generation (Fig. 5): e.g., IntentCorresponding-Code-Generation
    Domain-Specific-Code Generation (Fig. 6): e.g., InputGenerator-Generation, DriverCode-Generation/Correction
  Test Generation (Fig. 7): e.g., Basic&Edge-TestCase-Generation, CoverageAugmenting-Test-Generation
  Annotation Generation (Fig. 8): e.g., Assertion-Generation, Invariant-Lemma-Generation
  Data Generation (Fig. 9): e.g., ConstraintSatisfying-Input-Generation, Basic&Edge-TestInput-Generation
Evaluative
  SW-Entity Analysis
    Behavior Analysis (Fig. 10): e.g., RootCause-Analysis, Behavior-Anomaly-Detection
    Code Analysis
      Code Classification
        Direct Code-Classification (Fig. 11): e.g., VulnerabilityProneness-Code-OneClass-Classification
        Rationalized Code-Classification (Fig. 12): e.g., Rationalized-VulnerabilityProneness-Code-MultiLabel-Classification
      Code Scaling (Fig. 13): e.g., CVSS-Scoring, VulnerabilityProneness-Code-MultiLabel-Soft-Classification
      Line Code-Ranking (Fig. 14): e.g., FaultProneness-CodeLines-Ranking
  Task-Solution Analysis (Fig. 15): e.g., Self-Validation, Answer-Quality-Assessment
  Text Analysis (Fig. 16): e.g., Docstrings-Equivalence-Checking, Ambiguity-Analysis, EvidenceSupport-Judgment
Extractive
  SW-Entity Extraction
    Code-Elements Identification & Extraction (Fig. 17): e.g., CodeElements-Identification, CodeBlocks-Extraction, Model-Slicing
    Text-Elements Identification & Extraction (Fig. 18): e.g., VariableNames-Identification, StructureCompliant-Information-Extraction
Abstractive
  SW-Entity to NL
    SW-Entity Verbalization (Fig. 19): e.g., CodeCorresponding-Intent-Verbalization, Summarization
  NL to SW-Entity
    Formalization (Fig. 20): e.g., PostCondition-Formalization
  SW-Entity to NL / NL to NL
    SW-Entity-Property Identification & Characterization (Fig. 21): e.g., VariablesSameValue-Identification, TaintFlow-Identification
  NL to NL
    Focused-Abstractive Summarization (Fig. 22): e.g., RootCauseOriented-Summarization, Assessment-Yes/No-Summarization
Executive
  Planning
    Plan Generation (Fig. 23): e.g., ScenarioCorresponding-Plan-Generation, Plan-Refinement
  Decision Making
    What-to-do-Next Generation (Fig. 24): e.g., ReachabilityOriented-NextAction-Generation
  High-level Instruction Following
    Execution (Fig. 25): e.g., Step-Computation, Trace-Execution
    Textual Data Manipulation (Fig. 26): e.g., CFG-Fusion, InListCloseMeaning-Elements-Replacement
Consultative
  Knowledge Distillation (Fig. 27): e.g., Example-Recall, Definition-Recall, ProtocolGrammar-Recall
Figure 4: Taxonomy based on the downstream tasks performed by prompting LLMs in the reported papers.

The top-level branching in the taxonomy responds to identifying the different high-level conceptual operations being performed. Those actions typically require, for humans, different cognitive abilities and, in the "pre-LLM era", have involved different research communities and used quite different theories, algorithms, and/or training approaches. For instance, generation vs. evaluation of a software entity, although an artificial conceptualization of what a generative AI is actually doing, fits into a more natural categorization of human endeavor and existing SE-tool categories.
Future work might tell whether this anthropomorphic viewpoint can be traced into different mechanistic interpretations of LLM behavior [92, 251], whether the types and categories identify useful high-level "task-decomposition patterns" and areas for improving LLM performance and evaluation methods (either by fine-tuning or by adequate prompting strategies), or whether the taxonomy might provide useful guidance for LLMs to break down tasks themselves in reasoning chains (i.e., will we, in the future, study "Tasks LLMs prompt"?).

Generative. This broad category of tasks yields software entities such as software development artifacts, data, and computation. Generative tasks are further divided in terms of the sort of generated SW-entity. The SW-entities found in the scoped and studied literature are code, tests (Fig. 7), code annotations (e.g., code assertions) (Fig. 8), and data (Fig. 9). Code is further divided into general (Fig. 5) and domain-specific code generation (Fig. 6) classes. Domain-specific code generation comprises tasks that generate programs for fixed problem domains (e.g., rule evaluators, fuzzing drivers, etc.). In contrast, for "general" code generation, the purpose/functionality of the generated/operated code is not fixed in advance and might depend directly or indirectly on the task's conceptual parameters.

Evaluative. This broad category encompasses tasks that analyze different properties of the entities under evaluation. This category is intuitively central to the scope of this literature review. It accommodates the SW-entity and NL analysis tasks we understand are being elicited by prompts in the works in this scope. Code classification tasks are one of the most populated categories, and we split them into two different classes: direct (Fig. 11) and rationalized (Fig. 12). While direct classification corresponds to the tasks underlying the more classical discriminative ML learning tasks, also performed by neural software analysis approaches (like DL vulnerability detectors [139]), rationalized classification tasks should yield assessments and/or justifications beyond the classification itself. This is one of the distinctive classes of tasks LLMs enable. Close to those classification classes are the less populated task classes of scaling and ranking (Figs. 13 and 14). Code is not the only entity to be evaluated; software behavior in its different forms is also an object under analysis (Fig. 10). Interestingly, Task-Solution Analysis (Fig. 15) is an evaluative class of tasks that evaluates verbalized aspects of the performance of other tasks (either LLM-based or not) and is also particular to the LLM-enabled solutions landscape. Finally, general text analysis tasks are also identified (Fig. 16).
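To make the direct vs. rationalized distinction concrete, the following minimal sketch contrasts what the two classification classes ask the model to return; the prompt wording and the build_prompt helper are our own illustrative assumptions, not prompts taken from any surveyed tool.

```python
# Illustrative only: two templates for a VulnerabilityProneness-Code classification
# task, one direct (yes/no) and one rationalized (verdict + category + reason).
DIRECT_TEMPLATE = (
    "Is the following function vulnerable? Answer strictly 'yes' or 'no'.\n\n{code}"
)

RATIONALIZED_TEMPLATE = (
    "Analyze the following function for vulnerabilities. "
    "Return a verdict ('yes'/'no'), the suspected CWE category if any, "
    "and a short step-by-step justification.\n\n{code}"
)

def build_prompt(code: str, rationalized: bool = False) -> str:
    """Instantiate one of the two task classes for a given code snippet."""
    template = RATIONALIZED_TEMPLATE if rationalized else DIRECT_TEMPLATE
    return template.format(code=code)

snippet = "char buf[8]; strcpy(buf, user_input);"
print(build_prompt(snippet, rationalized=True))
```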
Extractive. The entities under processing might be large and structurally complex. Approaches might need to leverage rich navigation and identification abilities to find and identify relevant sub-elements. This category covers tasks that extract queried sub-entities out of container entities. Code extractive tasks (Fig. 17) were identified for the analyzed scope and identified studies. Some tasks on more general text (NL) were found too (e.g., extractive summarization [145]) (Fig. 18).

Abstractive. This category encompasses tasks that construct formal or semiformal abstractions out of an entity under processing (in contrast to the more literal nature of extractive operations). The subjects of abstraction in the reviewed literature are mainly software entities like code. Yet, some works targeting NL entities are found and named focused abstractive summarization, inspired by the NLP task concept (e.g., [190]) (Fig. 22). Abstraction encompasses different operations: verbalization (Fig. 19), formalization (Fig. 20), and property characterization and identification (Fig. 21).
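As a toy illustration of what a formalization task such as PostCondition-Formalization (Fig. 20) is expected to yield, consider turning an NL intent into executable assertions; the function, intent, and assertions below are our own example rather than output reported in the reviewed studies.

```python
# NL intent: "return the largest element of a non-empty list"
def max_of(xs):
    best = xs[0]
    for x in xs[1:]:
        if x > best:
            best = x
    return best

# Assertions an LLM could be asked to produce for that intent
# (a PostCondition-Formalization output, in the taxonomy's terms):
def check_postcondition(xs, result):
    assert result in xs                      # the result is one of the inputs
    assert all(result >= x for x in xs)      # and no input exceeds it

sample = [3, 7, 2]
check_postcondition(sample, max_of(sample))
```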
Executive. This category groups downstream tasks related to goal attainment, including planning, decision-making, and instruction following [101]. Planning tasks (Fig. 23) ultimately mean using the LLM to find, at different levels of abstraction, an operational solution, typically expressed as a sequence of steps, to the described problem. The decision-making class in the literature deals with choosing the next action to perform in a given system state (Fig. 24). Instruction-following tasks are about performing imperative actions on intermediate data structures, typically used as intermediate memorization/summarization/scratchpad elements in long inference processes [177] (Figs. 25 and 26).

Consultative. A consultative operation conceptually occurs when the LLM is supposed to answer requests for information, provide definitions (e.g., the message grammar for a given protocol), and/or provide suggestions or guidance (Fig. 27). For this sort of task, no reference text serves the question: the required (hopefully accurate) knowledge has to come from within the model itself, stored in the parameters it picked up during unsupervised pre-training [225]. Tasks in this category are possible only if recalling and distilling of that knowledge can be elicited at inference time.

6.1. Dimensions of Downstream-Task Classes

Each class exhibits dimensions (variation points) in which choices define potential tasks. In each class leaf, we map the tasks that appear in approaches featuring the given choice denoted by the leaf. Colors indicate the corresponding SE problem in Figure 3, and sub-indexes indicate different tasks performed by the same solution proposal or exploratory study, ordered as they appear in Tables 1–5.

As mentioned above, the more specific task types of a class are described by means of potentially orthogonal dimensions. While some are specific to a class (e.g., Analysis Granularity and Nature of Classification in evaluative tasks, Nature of Property in property identification, etc.), some are common to several classes (e.g., Nature of the (generated) Entity, (generation/analysis/correctness) Criteria, (problem) Domain), and a few are applicable, in principle, to all classes: In-filling context, Rationalization, Reactivity, and Corrective feedback. Thus, task classes are defined to abstractly capture the functional and externally visible nature of the tasks underlying prompting strategies, in an attempt to separate the "what" from the "how". In-filling context stands for a task's assumed precondition regarding to which extent and how the container context of the expected solution is provided and related to it. Rationalization stands for the nature of the thoughts/explanations/argumentation the task is supposed to generate (often within a chain-of-thought generation stream). Note that we consider CoT not only a way to elicit adequate inference but also a potentially externally visible task property: rationalizations can be consumed by humans or other models to assess the yielded results. Reactivity, when it applies, describes the way a task is supposed to interact with the environment. Reactive behavior, for instance, enables incrementally requesting required information that is not available in the context. On the other hand, the use of external tooling (e.g., symbolic) improves end-to-end task performance by re-injecting intermediate results that condition the step-by-step neural inference of the underlying transformer. A task defined as reactive also means the LLM must be proficient as an agent interacting with the environment for the proposed goal.

Finally, the Corrective feedback dimension stands for a common pattern of conversation in which the LLM is supposed to correct/repair or improve a yielded result based on new evidence. When a task is requested with corrective feedback, it could be argued that it becomes a version of the original downstream task in which the operation is cognitively closer to correcting a solution by editing it. Nevertheless, it is at least a version of the original task in which hints/feedback are also given to achieve the goal.
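A minimal sketch of how the Corrective feedback dimension typically materializes as a conversation loop is shown below; ask_llm and verify are assumed placeholders for a model call and an external checker (compiler, test suite, model checker), not APIs of any surveyed tool, and the complementary Reactivity dimension (tool or function calls issued by the model) is omitted for brevity.

```python
def corrective_loop(task_prompt, ask_llm, verify, max_rounds=3):
    """Re-invoke the same downstream task, appending verifier feedback each round
    (the Corrective feedback dimension); returns the last candidate answer."""
    conversation = task_prompt
    answer = ask_llm(conversation)
    for _ in range(max_rounds):
        ok, feedback = verify(answer)   # e.g. compiler, test suite, or model checker
        if ok:
            return answer
        conversation += (
            "\n\nThe previous answer failed verification:\n"
            f"{feedback}\nPlease correct it."
        )
        answer = ask_llm(conversation)
    return answer

# Toy usage with stand-ins for the model and the verifier:
result = corrective_loop(
    "Generate a loop invariant for the annotated program below ...",
    ask_llm=lambda conv: "candidate invariant",
    verify=lambda ans: (True, ""),
)
```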
Domain-Specific-CodeGeneration General-CodeGeneration Generationcriteria Generationcriteria Problem-descriptioncorresponding Intentcorresponding:TitanFuzz1[49],AID1[140],DiffPrompt2[135],Dafny-Synth1[164], DSL-functiondescription:Eywa[114] ClarifyGPT1[167],Clover3[206],FormAI[218] Functionintent:Dafny-Synth3[164] Clarifications Objectdescription:LLMDFA4[232],LLMDFA5[232] Testprovided:Dafny-Synth2[164] Scenario:ScenarioNL4[58] Signatureprovided:Dafny-Synth2[164],Dafny-Synth5[164] Constraint:AID2[140] Specificationprovided:Dafny-Synth5[164] Property:EMR3[199] Verifiable:Dafny-Synth5[164] Rule:InputBlaster4[149] Non-functionalrequirementssatisfying Abilitytoexecutecode:OSS-Fuzz1[74],OSS-Fuzz2[74],PBT-GPT2[229], Complexity:ALGO[272] UGCTX1[266],UGCTX2[266], AnnotatiS ot nyl ce o: rA reI sD po1 n[ d1 i4 n0 g] :, CFo lor vm eA r1I [[ 22 01 68 ]] ,Clover2[206] NatuH reigh ofp dre oc mis aio inn /e& ntr ie tycall:LLMDFA1[232],LLMDFA2[232] Repairrequestcorresponding ScriptingforSWanalysis Defect-descriptionrepairing:WitheredLeaf3(CoT)[30],AutoSD3[117], Source/Sinkidentification:LLMDFA1[232],LLMDFA2[232] LLM4Vuln3(patch)[208] Metamorphicrelation:EMR3[199] Error-feedbackrepairing:AlloyRepair1[4] Encoders Suggestion-basedrepairing:AlloyRepair2[4] Pathcondition:LLMDFA4[232],LLMDFA5[232] Examplesbased Testdrivers Translation Fuzzing:OSS-Fuzz1[74],OSS-Fuzz2[74],UGCTX1[266],UGCTX2[266] API:FuzzGPT2[50] Inputgenerators:AID2[140],InputBlaster4[149],PBT-GPT2[229]
Parametrization:SearchGEM51[42] Protocol-relatedcode:Eywa[114] Mutation:FSMLDS/MG[14],LLM4CBI[221] Wordmodel:ScenarioNL4[58] Buginjecting:BugFarm[104] Signatures:Dafny-Synth3[164] In-fillingcontext Differentbehaviour:LLMorpheus[219] Repair:AlloyRepair1[4],AlloyRepair2[4],WitheredLeaf3(CoT)[30],AutoSD3[117], Hintcode:LLMDFA1[232],LLMDFA2[232] AID1[140],LLM4Vuln3(patch)[208] Constraineddecoding:ScenarioNL4[58] Completion Correctivemode NatureofprN ima at ru yra gl: enF eL raA teG d[ e3 n], tiT tyitanFuzz2[49],µBERT[119],CHEMFUZZ1[187] Feed Cb oa mck pib lea rs :ed OSS-Fuzz2[74],LLMDFA2[232],LLMDFA5[232],UGCTX2[266] Code:FLAG[3],FSMLDS/MG[14],WitheredLeaf3(CoT)[30],SearchGEM51[42], Execution TitanFuzz1[49],TitanFuzz2[49],FuzzGPT2[50],BugFarm[104], Numberofpositive/negative:LLMDFA2[232] AutoSD3[117],µBERT[119],DiffPrompt2[135],AID1[140], Results:OSS-Fuzz2[74],InputBlaster4[149],UGCTX2[266] Dafny-Synth1[164],Dafny-Synth2[164],Dafny-Synth5[164],ClarifyGPT1[167], Reactivity CHEMFUZZ1[187],Clover1[206],Clover2[206],Clover3[206], Functioncalling:ScenarioNL4[58] LLM4Vuln3(patch)[208],FormAI[218],LLMorpheus[219],LLM4CBI[221], ALGO[272] Figure6:Domain-Specific-CodeGeneration. Model:AlloyRepair1[4],AlloyRepair2[4] Natureofextrageneratedentity Annotations Pre/Postcondition:Dafny-Synth1[164],Dafny-Synth2[164],Dafny-Synth5[164] Loopinvariant/Verificationannotations:Dafny-Synth5[164] In-fillingcontext Annotatedcode:Clover1[206],Clover2[206],Clover3[206] Maskedcode:FLAG[3],TitanFuzz2[49],µBERT[119],CHEMFUZZ1[187], LLMorpheus[219] TestGeneration Correctivemode Feedbackbased Generationcriteria Compiler:Clover2[206],Clover3[206] Differential:MuTAP3[43] Verifier:Clover1[206] Natural:FSMLUTG[14],ChatGPTTests1[19],ChatUniTest1[34],MuTAP1[43], Validity:AlloyRepair1[4],CHEMFUZZ1[187],LLM4CBI[221] DiffPrompt3[135],TestPilot[194],EASEeval[201] RationalS izu ag tig oe nstedcorrections:AlloyRepair2[4] CoveB ru agg ere imve pa rl oin vg in: gF :u Tz ez sG tGPT en4 -L[5 L0 M] [8],ChatGPTTests2[19],CoverUp1[184], Explanation:LLMorpheus[219] CoverUp2[184],TELPA[255] Targeted:CodaMOSA[130] Figure5:General-CodeGeneration. Intentcorresponding:CodeT[28],CodeCoT[97],AgentCoder[98],ChatTESTER2[264] Parameterized:PBT-GPT3[229] Propertybased:PBT-GPT4[229] Similarityinspired:FuzzGPT3[50] Mimickingtest:secTests[276] Bugreproducing:LIBRO[118] In-fillingcontext Givenprefix:FuzzGPT4[50] Correctivemode Feedbackbased Compilationerrors:ChatUniTest2[34],MuTAP2[43],CoverUp2[184],TestPilot[194], ChatTESTER3[264] Assertionfailure:ChatUniTest2[34],TestPilot[194],ChatTESTER3[264] Examplestoaugment:TELPA[255] Coverageinfo:CoverUp1[184],CoverUp2[184] Rationalization Bugdescription:FuzzGPT3[50] Figure7:TestGeneration. 
206 RQ2: ADOWNSTREAMTASKTAXONOMY 6.1 DimensionsofDownstream-TaskClasses BehaviorAnalysis Analysiscriteria AnnotationGeneration Anomalydetection Open-ended:CHEMFUZZ2[187] Generationcriteria S Ny an tt ua rc at l:ic: TOKe Gr Ln LelG [8P 9]T ,5 C[ h2 a5 t7 I] nv[107],PBT-GPT1[229],AutoSpec[242] V V Seu ie ll fwn -je - ur p sa o tb ii finil ti et dy c :: on AP f Xow r Nn m’ aad vn3 3c[ e8 [: 20 1G] 1a ]meBugDescriptions1[212] Demonstrationsimilarity:PropertyGPT1[146],PropertyGPT2[146], Sanitycheck:PentestGPT3[47] Natural&Verifiable:SpecGeP n1ro [p 15e 4rt ]yGPT4[146],CEDAR[170] Root-cauE sx ee :cu xt -i lo ifn ecs yp ce lc ei 2fic [a 7t 1i ]o ,n Ac uo tn ofo Fr Lm 1a [n 1c 1e 6: ],E A& uV to2 S[7 D9 1] [117],RCAAgents1[189], NatuP rr eo oo ff- ee nn ta ib tyling:Loopy1[115],Clover5[206],Lemur1[246],RustProof2[259] Differenci en sC :o Sn Et Le Fxt -R DC EA B2 U[ G27 G5 I] NGFL[32] Precondition:PropertyGPT2[146],SpecGen1[154],Clover5[206],AutoSpec[242] Extrainput Postcondition:PropertyGPT2[146],SpecGen1[154],Clover5[206],AutoSpec[242] Architecture:x-lifecycle2[71] Loopinvariant:Loopy1[115],AutoSpec[242],RustProof2[259] Code:SELF-DEBUGGINGFL[32],AutoFL1[116],AutoSD1[117] Assumption:Lemur1[246] Historicincidents:RCAAgents1[189],inContextRCA2[275] Typestate:KernelGPT5[257] Reactivity
Assertion:TOGLL[89],PropertyGPT1[146],CEDAR[170] Functioncalls:AutoFL1[116],RCAAgents1[189] In-fillingcontext Questions:RCAAgents1[189] Maskedassertion:ChatInv[107] Rationalization Annotatedcode:Loopy1[115],SpecGen1[154],CEDAR[170],Clover5[206], Appliedevaluationcriteria:AXNav3[211] AutoSpec[242],Lemur1[246],RustProof2[259] Execution:SELF-DEBUGGINGFL[32] Unittest:TOGLL[89],CEDAR[170] Correctivemode Feedbackbased Figure10:BehaviorAnalysis. Compilerfailure:PropertyGPT3[146] Verificationfailure:SpecGen2[154],Lemur2[246],RustProof3[259] Inductiveness:Loopy2[115] Validitychecker:PropertyGPT4[146],KernelGPT5[257] Rationalization Invariantexplanation:RustProof2[259] Proofexplanation:RustProof2[259] DirectCode-Classification Figure8:Code-GivenAnnotationGeneration. Analysiscriteria Vulnerabilityproneness:SmartAudit1[44],ChatGPT4vul1[67],ChatGPT4vul3[67], NLBSE241[109],GRACE[151],AIagent[198],MultiTask1[263], MultiTask3[263],PromptEnhanced2[267],ChatGPT(Plus)[279] Similartodemonstrations:GRACE[151] UsedAPI:FuzzGPT1[50] APIMisuse:LLMAPIDet3[240] Categoryofbug:WitheredLeaf4[30] Adherencetodescription:NLBSE242[109],GPTScan1[209] Adherencetobehavioraldescription:SELF-DEBUGGINGOP7[32],GPTScan2[209] Natureofclassification Multiplicity OneClass(Binary):SmartAudit1[44],ChatGPT4vul1[67],NLBSE241[109], DataGeneration GRACE[151],MultiTask1[263],MultiTask3[263], PromptEnhanced2[267] Generationcriteria In-contextdefinedclass:SELF-DEBUGGINGOP7[32],NLBSE242[109], Contextcompliant GPTScan1[209],GPTScan2[209], Description:RESTGPT2[123] LLMAPIDet3[240],ChatGPT(Plus)[279] Usage:Fuzz4All2[252] MultiClass/MultiLabel GUIcontext:QTypist1[147],QTypist2[147],InputBlaster2[149] Open-ended:FuzzGPT1[50] Code+arguments:SearchGEM52[42] Close-ended:ChatGPT4vul3[67],AIagent[198] Constraintsbased Semi-close-ended:WitheredLeaf4[30] x|P(x):RESTGPT2[123],InputBlaster2[149],SymPrompt[191],WhiteFox2[256] Analysisgranularity x|∃i:x=F(i):SELF-DEBUGGINGOP2[32] Snippet:SELF-DEBUGGINGOP7[32],SmartAudit1[44],FuzzGPT1[50], x|i=F(x):mrDetector1[247] ChatGPT4vul1[67],ChatGPT4vul3[67],NLBSE241[109],NLBSE242[109], x| exercises(A(x)⇒C(x)):LLMeDiff[105] GRACE[151],GPTScan1[209],GPTScan2[209],LLMAPIDet3[240], x| exercises(Pre(x)&exercisesPost(x,R)):TestChain1[134],ClarifyGPT2[167] MultiTask1[263],PromptEnhanced2[267],ChatGPT(Plus)[279] Exemplarscompliant Line:WitheredLeaf4[30],AIagent[198],MultiTask3[263] Spec-awareedition:ChatAFL2[162],Fuzz4All3[252] Extrainput Variation:ChatFuzz[94],Fuzz4All4[252] Codepropertygraph:GRACE[151] NatuE rr ar lor cot mrig pg lee tr ii on ng :: CFu hz az tAin Fg LP 3ar [s 1e 6r 2s ]4[1] I Cn tt re l/n Dt: aP tar -o flm owp :tE Pn rh oa mn pc te Ed n2 h[ a2 n67 c] ed2[267] In-fillingcontext Callsequence:PromptEnhanced2[267] Maskeddata:QTypist2[147] Bugreason:WitheredLeaf4[30] Datacompletion:QTypist1[147],ChatAFL3[162] Assertion:SELF-DEBUGGINGOP7[32] Rationalization Background:GPTScan1[209],GPTScan2[209] Figure9:DataGeneration. Execution:SELF-DEBUGGINGOP7[32] Figure11:DirectCode-Classification. 
216 RQ2: ADOWNSTREAMTASKTAXONOMY 6.1 DimensionsofDownstream-TaskClasses RationalizedCode-Classification Analysiscriteria Task-SolutionAnalysis Vulnerabilityproneness:EditTime3[25],EditTime4[25],SmartAudit2[44],SmartAudit3[44], VulBench[69],GPTLens1[95],VulDetect1[120],VulDetect2[120], Analysiscriteria LATTE4[143],VSP1[174],VSP2[174],LLM4Vuln3[208], Correctnessofanalysis:VulDetect4[120],LLift2(SelfValid)[132],LLift3(SelfValid)[132] LLM4Vuln3w/PreCoT[208],LLM4Vuln3w/PostCoT[208],DLAP3[258] False-positivewarningfiltering:WitheredLeaf2[30],SkipAnalyzer3[165] Null-dereference:SkipAnalyzer1[165] Qualityassessment:mrDetector2[247],LM-PACE3[268] Leakpresence:SkipAnalyzer2[165] Scored:GPTLens2[95],CORE[230] Data-flowCWI:VulDetect3[120] Domain Entityinconsistency:WitheredLeaf1[30] Vulnerabilityanalysis:WitheredLeaf2[30],GPTLens2[95] Fixabilityanalysis Rootcauseanalysis:LM-PACE3[268] Rename:WitheredLeaf3[30] Programanalysis:VulDetect4[120],LLift2(SelfValid)[132],LLift3(SelfValid)[132], Natureofclassification SkipAnalyzer3[165] Multiplicity Programrepair:CORE[230] OneClass(Binary):EditTime3[25],EditTime4[25],WitheredLeaf1[30], Informationretrieval:mrDetector2[247]
WitheredLeaf3[30],SkipAnalyzer1[165],SkipAnalyzer2[165], Rationalization VSP2[174],DLAP3[258] Justification:VulDetect4[120] MultiLabel Open-ended:SmartAudit2[44],SmartAudit3[44],GPTLens1[95], Figure15:Task-SolutionAnalysis(someusedintheself-reflectionframework). Close-ended:ChatGPTSCV1[29],VulBench[69],VulDetect1[120],VulDetect2[120], VulDetect3[120],LATTE4[143],VSP1[174], In-contextdefinedclasses:LLM4Vuln3[208],LLM4Vuln3w/PreCoT[208], LLM4Vuln3w/PostCoT[208] Analysisgranularity Snippet:EditTime3[25],EditTime4[25],ChatGPTSCV1[29],SmartAudit2[44], SmartAudit3[44],VulBench[69],GPTLens1[95],VulDetect1[120], VulDetect2[120],VulDetect3[120],LATTE4[143],SkipAnalyzer1[165], SkipAnalyzer2[165],VSP1[174],VSP2[174],LLM4Vuln3[208], TextAnalysis LLM4Vuln3w/PreCoT[208],LLM4Vuln3w/PostCoT[208],DLAP3[258] Line:WitheredLeaf1[30],WitheredLeaf3[30] Analysiscriteria Extrainput Comparativeanalysis Codeintent:SmartAudit3[44],LLM4Vuln3w/PreCoT[208] Functionalequivalence:SELF-DEBUGGINGOP6[32],Clover8[206] Taint-info:LATTE4[143] Ambiguityanalysis:ScenarioNL2[58],ClarifyGPT4[167] Reviewguidance:DLAP3[258] Supportanalysis Correctivemode Toconclude:AutoSD2[117] Self-validation:VulDetect3[120] Toperform:LM-PACE1[268] Reactivity Openanalysis Requestfurtherinformation:LLM4Vuln3[208],LLM4Vuln3w/PreCoT[208], Q&A:RCAAgents2[189] LLM4Vuln3w/PostCoT[208] Expertcompleted:ScenarioNL3[58] Rationalization Rationalization Assessment/description:ChatGPTSCV1[29],VulDetect1[120],VulDetect2[120], Debate:ScenarioNL3[58] VulDetect3[120],LATTE4[143],SkipAnalyzer1[165], SkipAnalyzer2[165],VSP2[174] Figure16:TextAnalysis. Thoughtstepbystep:ChatGPTSCV1[29],VulBench[69],SkipAnalyzer1[165], SkipAnalyzer2[165],VSP1[174],VSP2[174] Reason:GPTLens1[95],VulDetect1[120],VulDetect2[120],VulDetect3[120], LLM4Vuln3[208],LLM4Vuln3w/PreCoT[208],LLM4Vuln3w/PostCoT[208], DLAP3[258] Fix:WitheredLeaf3[30],(SmartAudit3[44]),LLM4Vuln3w/PostCoT[208] Exploit:LLM4Vuln3w/PostCoT[208] Code-ElementsIdentification&Extraction Figure12:RationalizedCode-Classification. Identificationcriteria Rolebased Specialized Code-Scaling Commandvalues:KernelGPT2[257] Definition Scale Type:KernelGPT4[257] CVSS:ChatGPT4vul4[67],MultiTask2[263] Type:KernelGPT3[257] Vulnerabilityprobability:DLAP1[258] Initializator:LLift1[132],KernelGPT1[257] Leakableparameter:LATTE1[143] Resourceleak-related:InferROI[233] Figure13:CodeScaling. Externalinput:LATTE2[143] Devicename:KernelGPT1[257] Scenariodefined:GPTScan3[209] Relevancebased:SimulinkSlicer[153] LineCode-Ranking Structural Blocksofcode:CFG-Chain1[100] Analysiscriteria Codeofblocks:CFG-Chain2[100] Faultproneness:ChatGPT4vul2[67],ChatGPT-4(Log)1[249],ChatGPT-4(Log)2[249] Reactivity Extrainput Requestfurtherinformation:LLift1[132],KernelGPT2[257], Intent:ChatGPT-4(Log)1[249],ChatGPT-4(Log)2[249] KernelGPT3[257],KernelGPT4[257] Correctivemode Rationalization Testresults:ChatGPT-4(Log)2[249] Background:GPTScan3[209] Rationalization Dependence:SimulinkSlicer[153] Intent:ChatGPT-4(Log)1[249],ChatGPT-4(Log)2[249] Reason:ChatGPT-4(Log)1[249],ChatGPT-4(Log)2[249] Figure17:Code-ElementsIdentification&Extraction. Figure14:LineCode-Ranking. 
Figure 18: Text-Elements Identification & Extraction.

Figure 19: SW-Entity Verbalization.

Figure 20: Formalization.

Figure 21: SW-Entity-Property Identification & Characterization.

Figure 22: Focused-Abstractive Summarization (previous thoughts or rationalizations).
Figure 23: Plan Generation.

Figure 24: What-to-do-Next Generation.

Figure 25: Execution.

Figure 26: Textual Data Manipulation.

Figure 27: Knowledge Distillation.

7. RQ3: Characteristics of Downstream-Task Classes

As can be observed, most task classes are rich in terms of correctness or preference criteria of downstream tasks. This is, naturally, the major variation point in classes. Yet, some other dimensions define crucial details of the nature of the tasks.

The following are conclusions drawn from the Tables and Trees, and from the prompt techniques used by each task class, which are summarized in Figure 28.

Figure 28: Number of downstream tasks using each prompting technique or combination of prompting techniques, aggregated by task class.

In the General Code Generation class, a typical task consists in the generation of code that corresponds to a given intention. Contextualized tasks (meaning the downstream task is stated as a fill-in or completing a partial solution) seem to be preferred whenever they make sense for the addressed problem. Corrective mode with external feedback (sometimes known as adaptive, e.g., in [194]) is a standard approach to improve the performance of tasks by repairing initial hallucinations. The nature of the feedback signal depends on the sort of task category: for code generation, compilation feedback is the natural option, but other signals are possible (e.g., verifier feedback in [206]). In general terms, tasks are elicited by conditioning the model with their description/instruction. In some sense, these are taken as tasks similar to tasks seen during pre-training.
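To make the corrective mode concrete, the following is a minimal illustrative sketch of a generate-compile-repair loop of the kind discussed above; it is not the mechanism of any particular surveyed proposal, and the `query_llm` helper and the prompt wording are assumptions introduced only for illustration.

```python
def query_llm(prompt: str) -> str:
    """Placeholder for a call to some LLM API; assumed to return plain code."""
    raise NotImplementedError

def generate_with_compiler_feedback(intent: str, max_rounds: int = 3) -> str:
    """Corrective mode: keep regenerating until the candidate at least compiles."""
    prompt = f"Write a Python function that {intent}. Return only the code."
    candidate = query_llm(prompt)
    for _ in range(max_rounds):
        try:
            compile(candidate, "<candidate>", "exec")  # external feedback signal
            return candidate
        except SyntaxError as err:
            # Repair the initial hallucination by feeding the error back to the model.
            prompt = (
                f"The code below fails to compile with: {err}\n\n{candidate}\n\n"
                "Return a corrected version, code only."
            )
            candidate = query_llm(prompt)
    return candidate
```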
In Domain-Specific Code Generation, frequent code types are fuzzing drivers (i.e., a program that can execute library functions by feeding them with inputs provided by the fuzzer) or scripts for scanning code. Corrective feedback also includes execution results, because they can be interpreted as a relevant signal for iterative improvement of task results. Prompt strategies are similar to those of general code generation. Still, there is a proportionally higher use of few-shot examples as an ICL (In-Context Learning) approach (possibly given the potentially fewer data points during pre-training). Function calling [183] also seems an alternative to the lack of training data.

Test Generation tasks vary greatly on the generation criteria, the "natural" one (no specific criteria beyond being a "predictable" test accompanying the code in context, according to the LLM) being the more frequent. Feedback includes, beyond compiler feedback, execution results linked to testing, like coverage information or assertion execution results. A single case of reactivity can be found in [34], where the downstream task is meant to interact with the environment by requesting further information. Test generation is often elicited without using any particular strategy beyond the instruction itself (see [273] for a similar observation). In this class, Few-Shot is used to clarify generation criteria that are not standard, or in studies aiming to understand the effects of ICL. CoT is used in a particular case to condition test generation on an uttered bug description [50].
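As an illustration of execution-linked feedback for test generation, the sketch below reruns a generated test and feeds assertion failures back to the model. The plain-assert convention, the prompt wording, and the `query_llm` placeholder are assumptions for illustration rather than a description of any surveyed tool.

```python
def query_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder LLM call, as in the previous sketch

def refine_test_with_execution_feedback(focal_code: str, max_rounds: int = 3) -> str:
    """Regenerate a unit test until its assertions pass against the focal code."""
    test_code = query_llm(f"Write plain-assert Python tests for:\n{focal_code}")
    for _ in range(max_rounds):
        namespace: dict = {}
        try:
            exec(focal_code + "\n" + test_code, namespace)  # run code under test plus test
            return test_code                                # all assertions passed
        except AssertionError as failure:
            # Feed the assertion-execution result back as the corrective signal.
            test_code = query_llm(
                f"This test failed with {failure!r}:\n{test_code}\n"
                "Fix the test so that it exercises the code correctly."
            )
    return test_code
```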
Some of the tasks mapped into the Annotation Generation class seem to pursue ambitious correctness criteria (e.g., being the annotations that enable a symbolic proof). Thus, enriching code with annotations feeds not only compilation errors but also verification results for engaging corrective modes. It seems less likely that the LLM has been pre-trained or fine-tuned for such tasks, and it seems to require more sophisticated approaches to adequately prompt the model (more than half of the tasks require some mechanism beyond instructing the LLM, such as CoT or guided subtask decomposition).

Tasks that fit into the Data Generation class are rich in terms of criteria spectrum: from consistency with interactions and environment hints to being variants of some other data exemplar. No feedback loops were reported, and there was no reactivity. Few-Shot seems to be the more common approach after simply using the task instruction to get results.

In Behavior Analysis a typical task is root cause analysis, which naturally fits as a reactive task that requests/inspects/analyzes information on demand. Tasks in this class sometimes require code, traces or verbalizations as an extra input to perform the evaluation. In-Context Learning and rationalization are also present in the studied literature.

Direct Code Classification is a class that accommodates tasks that are analogous to the ones solved by neural approaches [24] from a black-box perspective. Perhaps unsurprisingly, (security) vulnerability proneness is the most frequent analysis domain (e.g., CWE vulnerability presence). In particular, the most frequent classification task is identifying whether a snippet of code belongs to the vulnerability-prone class. Close-ended classification means the set of labels/classes of interest is predefined (e.g., the target list of CWE vulnerabilities).

While some tasks deal with a (pre)fixed concept of vulnerability proneness (e.g., CWE), in some cases the concept that defines the class is actually characterized verbally in-context; that is, tasks are designed to be parametric to pieces of information injected and only available at inference time (e.g., brand new definitions of vulnerabilities, a behavioral description, etc.). Notably, possibly due to the static-analysis nature of classification and the flexibility of class definition, no corrective loops are reported. Extra inputs (enhancements) (e.g., call graphs) are, in some works, provided to improve the quality of classification. These tasks are frequently elicited just by using a task description (labels for vulnerabilities assumed "memorized" and patterns learned either explicitly or implicitly during pre-training). The use of retrieved examples in-context is among the alternatives used by authors when seeking to condition the result to what is known about similar cases.
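The sketch below illustrates, under stated assumptions, a close-ended classification task that is parametric to a vulnerability definition supplied only at inference time; the prompt wording, the label set, and the `query_llm` helper are hypothetical and serve only to make the pattern concrete.

```python
def query_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder LLM call

LABELS = ("vulnerable", "not vulnerable")  # close-ended: label set fixed up front

def classify_with_incontext_definition(snippet: str, vulnerability_definition: str) -> str:
    """Classification parametric to a class definition only available at inference time."""
    prompt = (
        "You are a security auditor.\n"
        f"A snippet is 'vulnerable' iff it matches this definition:\n{vulnerability_definition}\n"
        f"Code:\n{snippet}\n"
        f"Answer with exactly one of: {', '.join(LABELS)}."
    )
    answer = query_llm(prompt).strip().lower()
    if answer not in LABELS:
        raise ValueError(f"off-label answer: {answer!r}")  # crude guard for off-label output
    return answer
```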
Rationalized Code-Classification is the paradigmatic class of evaluative downstream tasks that are now enabled by LLMs' generative abilities. Rationalization not only enables (potentially) self-conditioning model behavior during task inference but also constitutes a valuable output for further processing. There are tasks in the Rationalized Code-Classification category that engage into a reactive interaction to explore code. Note that, generally, the evaluative tasks reported do not have corrective feedback versions. However, for rationalized tasks, self-reflection [200] is one of the attempted alternatives. As in the case of direct code-classification, there are task instances that are designed to be parametric to the verbal characterization of concepts. In the particular case of rationalized classification, at inference time, the denotation relationship between labels and concepts is provided as an actual parameter of such tasks.

Identified tasks mapped into this category more frequently use prompt strategies that go beyond providing the instruction in context. Notably, the use of Chain of Thought to get a rationalization of the classification, in some works, likely acts as a way to improve the quality of classification by (auto)guidance. From a liberal interpretation, CoT usage might be underreported in this category, since we typically do not consider as truly "CoT-compliant" prompts that suggest the generation of a rationalization after verbalizing the classification outcome, nor when directed/guided steps of thought are explicitly instructed in the prompt.
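A minimal sketch of the self-reflection variant mentioned above: the model first produces a verdict with a rationale and is then asked to critique and possibly revise it. The two-call protocol, the prompt wording, and the `query_llm` placeholder are assumptions for illustration, not a surveyed implementation.

```python
def query_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder LLM call

def rationalized_verdict_with_reflection(snippet: str) -> str:
    """Ask for a label plus its rationale, then a self-reflection pass over that verdict."""
    first_verdict = query_llm(
        "Decide whether the following code is vulnerable. "
        f"Give a label and a short reason.\n{snippet}"
    )
    # Self-reflection: the model critiques its own rationalization and may revise the label.
    return query_llm(
        "Review the verdict below for flaws in its reasoning and, if needed, revise it. "
        f"Return the final label and reason.\nVerdict:\n{first_verdict}\nCode:\n{snippet}"
    )
```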
Code Scaling looks like (direct) classification tasks and is straightforwardly elicited. Few tasks were mapped into the Line-Code Ranking category. Yet, this category includes corrective mode and elicitation based on CoT, including the suggestion of generating code intent.

Task-Solution Analysis and Text Analysis are both categories elicited with a (specific-domain) expert persona. Task-solution analysis uses fixed (quality) evaluation functions [196] described in prompts. Text Analysis includes expert debates as one of the most sophisticated rationalization approaches [260].

As mentioned, Text Analysis includes tasks that resemble those studied and benchmarked by the Natural Language Processing community (e.g., implicatures, ambiguity detection, etc. [210]). Interestingly, while some tasks are mainly domain-independent, other tasks are supposed to be solved by recalling some domain expertise to leverage missing tacit knowledge. In-Context Learning [22] and debate-based inference [260] are used in a couple of mapped tasks which, apparently, required a certain degree of sophistication to elicit aligned behavior.

Code-Elements Identification and Extraction is the more representative class of the extractive category. One of the salient characteristics of its tasks is their reactive nature [261, 195], in this case, to incrementally explore code bases. Context-size limitations and attention-mechanism degradation [141] usually motivate the need for such approaches, but we believe this is not accidental to the current technology status if one assumes, for instance, that external memory lookups will be part of complex inference with intermediate verbalized steps. Their reported elicitation typically includes In-Context Learning and CoT.
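The following sketch illustrates, under assumptions, the reactive flavor of such extractive tasks: a simple loop in which the model may request additional files before answering. The "OPEN <path>" protocol and both helpers are hypothetical and not drawn from any surveyed system.

```python
import pathlib

def query_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder LLM call

def extract_elements_reactively(question: str, repo_root: str, max_steps: int = 5) -> str:
    """ReAct-style loop: the model may ask to open files before answering."""
    transcript = (
        f"Task: {question}\n"
        "Reply 'OPEN <relative path>' to inspect a file, or give the final answer."
    )
    reply = query_llm(transcript)
    for _ in range(max_steps):
        if not reply.startswith("OPEN "):
            return reply                                      # final extraction result
        rel_path = reply.split(maxsplit=1)[1].strip()
        source = (pathlib.Path(repo_root) / rel_path).read_text()
        transcript += f"\n{reply}\n--- {rel_path} ---\n{source}"
        reply = query_llm(transcript)                         # incremental exploration
    return reply                                              # give up after max_steps
```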
The Text-Elements Identification and Extraction class is the NL counterpart of the previous class. Tasks currently mapped into this category are not designed to be reactive and, thus, they are supposed to work with text that could fit into the LLM's context.

SW-Entity Verbalization comprises a set of elicited tasks which leverage the apparent native proficiency of LLMs to translate code and other software entities into natural language utterances by instructing them to do so. No reactive or corrective versions are mapped into this class of tasks, at least for the reviewed literature. In-Context Learning seems to be applied just in cases in which the verbalization criteria required further guidance (e.g., verbalization of differences, line-by-line explanations, etc.).

Formalization is mostly the inverse task, when SW entities are logical assertions. Interestingly, syntactic corrective feedback is featured by only one task in this category.

SW-Entity Property Identification and Characterization is a class that collected a varied set of tasks, which leverage the potential ability of the LLM to generate utterances that can be interpreted as (ad-hoc) SW-entity abstractions. Self-validation (e.g., [236, 113]) seems the way to obtain a signal for correction when execution feedback is not possible. Adequate conditioning (e.g., In-Context Learning) is required since those tasks are, intuitively, far from the assumed fine-tuned tasks.

Focused-Abstractive Summarization is one of the classes of tasks that can be found in NLP processing (e.g., [190]). Although, in most cases, the tasks mapped seem to be elicited by just instructing the model to perform adequate summarization (or providing some demonstrations in context), sophisticated (auto)guidance approaches (e.g., [260]) are in place when it is likely that the model may produce answers not fully aligned with the purpose of the task.

Plan Generation encompasses different tasks that deal with either generating or refining operational descriptions meant to reach goals. It is a rich class that includes agent-like (but limited) reactive tasks [144] (e.g., for the use of external tools) and different sorts of corrective feedback. For the studies and proposals in scope, the typical domain is that of interacting with a software-enabled system. The use of CoT guidance is quite frequent, possibly because LLMs struggle to generate criteria-aligned plans without tricks to augment inference power [223, 144].
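As a hedged illustration of corrective, outcome-based feedback in plan generation, the sketch below regenerates a plan whenever a step fails; the `run_step` and `query_llm` helpers and the re-planning prompt are assumptions introduced for this example only.

```python
def query_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder LLM call

def run_step(step: str) -> tuple[bool, str]:
    """Placeholder for executing one plan step against the system under test."""
    raise NotImplementedError

def plan_and_repair(goal: str, max_replans: int = 2) -> list[str]:
    """Generate a step-by-step plan, execute it, and re-plan on a failed step."""
    plan = query_llm(f"Produce a numbered step-by-step plan to: {goal}").splitlines()
    for _ in range(max_replans + 1):
        for step in plan:
            ok, outcome = run_step(step)
            if not ok:
                # Outcome-based corrective feedback: ask for a revised plan.
                plan = query_llm(
                    f"Goal: {goal}\nStep '{step}' failed with outcome: {outcome}\n"
                    "Produce a revised numbered plan."
                ).splitlines()
                break
        else:
            return plan            # every step succeeded
    return plan
```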
As mentioned, What-to-do-Next Generation is a decision-making subclass that comprises the elicitation of utterances that would somehow correspond to deciding an action in a given state. Generally speaking, those tasks are related to interacting with a software-enabled system as the agent's environment.

The Execution class comprises tasks that deal with program-like instruction(s)/intent computation. This could be either at the level of a single instruction or potentially a code snippet or intent. Reactivity for using external computation tools is used in this class.

Textual Data Manipulation is also an instruction-following subclass that deals with generating an output that corresponds to an instructed transformation of the input. Those structures typically describe the state of an auxiliary entity being incrementally processed with the help of the LLM.

Finally, Knowledge Distillation tasks, unsurprisingly, seem to be mainly used for recalling memorized knowledge related to SW-related domains (e.g., the grammar of a message structure).

8. RQ4: Relation between SE Problems and Downstream-Task Categories

In what follows, we draw preliminary conclusions regarding several aspects of the relationship between SE problem categories and downstream-task categories (see Figure 29 and Table 6). Firstly, unit testing approaches are rather homogeneous regarding using generative classes of tasks that are straightforwardly traceable to the final SE problem being addressed. That is, they are typically not broken down into several tasks.

Figure 29: Coarse mapping: SE big areas vs. downstream tasks taxonomy main categories.

Table 6: Relation between SE problems and downstream tasks taxonomy.

Input generation approaches use LLMs differently. Typically, they seem to elicit abstractive generation before/during the attempt to generate data satisfying such abstract constraints. Something similar happens to fuzzing approaches that aim to trigger some specific software behavior. They do not rely on a single generative downstream task to get data or input, and they resort to some sort of intermediate abstractive process to get some conditioning by a verbalized characterization. On the other hand, functional testing, GUI testing, penetration testing and, in general, system-level testing have a strong usage link with executive tasks (e.g., plan generation, decision making, etc.). In fact, they would be the main "clients" for language models featuring strong planning abilities.

Fuzzing, particularly when inputs are programs (e.g., DL-fuzzing and compiler fuzzing), is the main client of code generation task classes. Programs that are requested to be generated are typically small, and that, together with the perceived pre-trained ability of such tasks, might explain why approaches to task elicitation are rather straightforward in terms of task decomposition and prompting strategy.
Vulnerability detection constitutes another rather homogeneous SE problem cluster regarding the downstream task categories involved (mostly evaluative). A further analysis shows an anecdotal correlation between the use of rationalized classification and the use of abstractive tasks like verbalization of software entities. Smart contract analysis, which often seeks logical vulnerabilities, leverages rationalized classification more frequently than general vulnerability analysis, which is more oriented to detecting code patterns.

Oracle problems, debugging, and static analysis are SE problems for which existing solutions use a wide spectrum of task categories. Their proposals' in-chain prompts elicit different tasks (or execute them in a more complex orchestration with symbolic tools).

Extractive tasks such as code element identification are highly linked to static analysis (LLM-enabled) approaches. Indeed, the only testing approach that uses such ability is a kernel-fuzzing approach [257] that uses extractive operations to fill in a sort of specification later used for generating relevant system calls. The other exception is [209], which tries to locate code playing a certain role for later static analysis. On the other hand, extractive operations on Natural Language are elicited by approaches addressing debugging, functional testing, and Oracle problems.

Program verification approaches may blend SW-entity generative tasks with verbalization and formalization. They typically use a single prompt to get the right lemmas (plus some corrective iteration).

Unsurprisingly, generative tasks are the most elicited, with test generation at the top. Perhaps it is more interesting to pinpoint that general code generation is utilized in almost all categories (except for static analysis approaches). Instead, static analysis approaches like [232] use domain-specific code generation to get scripts and rules to scan the code under analysis. Rationalized vulnerability detection is also a quite populated category due to the number of exploratory studies eliciting such sort of tasks.

It is worth noting that software-entities verbalization (typically but not restricted to intent generation from code) is a class of tasks extensively used through almost all SE approaches except static analysis. Verbalization tasks are either requested by instructing them as the expected result or as intermediate suggested steps in a Chain of Thought. We conjecture that verbalization is a good-performance "natural" task [61], and the generated result is also an effective tool for conditioning subsequent tasks (likely reducing the entropy of the model predictions).
9. Discussion

9.1. Unexplored Spots

There are various aspects of the current landscape to which one could add prospective comments and avenues for future work from the identified tasks and patterns.

Getting useful behavioral abstractions seems an important intermediate step in many LLM-enabled tasks in SE. Straightforward use of an LLM to solve some code analysis tasks is likely to struggle with inferring on potential code executions, thus leading to hallucinations (e.g., see the program state analysis task discussion [203]). Also, the size of entities is a factor in favor of using abstractions as scratchpads. Thus, we envision that more research would seek the effective construction of varied language-oriented abstractions of SW-entities. Those abstractions should be designed to leverage language analytical inference as a way to deal with the combinatorial nature of behavior (in a more historical vein, compositional deductive analysis frameworks (e.g., [85, 156, 127], etc.) have been helping humans reason about large state spaces). That would mean improving the ability to set effective corrective feedback loops for such tasks. In fact, correction in abstractive tasks seems, in general, yet an area of vacancy: while correction has success stories in traditional (combinatorial) abstraction construction like CEGAR [38] (where spurious counterexamples are key to interpolation-based refinement), automated error detection (e.g., misalignment) and informative corrective feedback in LLM-based abstractions might be challenging. The ability to perform some executive and data generation tasks might be useful to close some feedback loops in static analysis use cases.

Interestingly, rich requirements engineering methods and processes like the ones in Goal Oriented Requirement Engineering [224] have not been explored in the context of LLM-enabled falsification and verification. Specification construction and analysis frameworks may be sources of useful linguistic abstractions and processes. Also, coarse-grained abstractions and reverse engineering tools used for human validations may also serve [45] as inputs that enhance LLM-based evaluative tasks.

Using LLMs to build world models as intermediates to reason about relational systems, physical scenes and plans has been advocated by [244]. We speculate that adequately prompting such a probabilistic model of thought [244] to get world models is an abstractive task that may empower solutions to sophisticated verification and testing problems. In fact, generated world models could be, in turn, used as components whose inference capabilities can be leveraged by other reactive LLM-based downstream tasks to improve their alignment and specificity. That is an area unexplored so far (tools invoked by reported tasks are fundamentally classical software analysis and manipulation ones, or vector-DB-based retrievers).
Naturally, benchmarks on NLP tasks like [210, 280] are also sources of downstream tasks that might end up being useful for building promptware for verification and falsification of software. For instance, sentence similarity, causal judgment, implicatures (already covered for yes/no answer summarization), and logical fallacy detection are (text) evaluative downstream tasks that could be useful in many types of analysis in LLM-enabled solutions.

9.2. Improving Downstream Tasks

A taxonomy can help in focusing improvements in the proficiency of LLMs in classes or instances of tasks that are recurrent and might deserve particular (existent or novel) strategies to analyze and improve the LLM's alignment, specificity and concentration of the generation function distribution over the appropriate domain. Fine-tuning [237], reasoning bootstrapping [265], enhanced activation based on mechanistic interpretation (e.g., [217]), finer-grained prompt tuning (e.g., [280]), prefix tuning [137], gisting [168], prover-verifier games [124], hypothesis search [235], corrective feedback, external tool integration, specially designed benchmarks (e.g., [112]), and evaluation metrics based on taxonomy criteria [27], etc., are some of the areas that could be explored for those relevant downstream tasks. This would be akin to what the community has done by improving many classical analysis tools like static analysis/testing frameworks, solvers, etc. For instance, constrained generation and constrained decoding (e.g., [46, 18, 2, 126, 186], etc.) are regarded as a way to build more principled and robust AI-software [17, 46]. Task characteristics might serve to understand to which extent such ideas could be applied and pursued. Just as an example, while results of classification tasks, extractive tasks, and the generation of formally defined SW-entities (e.g., assertions) could potentially be constrained by syntactic and even semantic rules, verbalization and tasks yielding NL abstractions, albeit useful for conditioning, are likely harder to constrain upfront. Hypothesis search [235] combined with certified deductive reasoning [185] might be interesting steps forward for the generation of such sorts of entities. Also, as observed, some flexible classification tasks enabled by LLMs are not completely defined at design time (e.g., they might be open in terms of possible labels, or even class definitions are provided as task parameters). Thus they are likely more challenging for optimization, or for getting guarantees about alignment, than those whose classification concepts are fixed. Again, this might imply the need of combining sophisticated prompting and decoding strategies like autoformalization [250, 278] and formalized deductive reasoning [185].
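As a crude stand-in for the constrained generation and decoding techniques cited above (which operate at the decoder level), the following sketch merely validates an output against a syntactic rule and retries on mismatch; the CWE-label format, the prompt, and the `query_llm` helper are assumptions made only for illustration.

```python
import re

def query_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder LLM call

CWE_PATTERN = re.compile(r"^CWE-\d+$")   # syntactic rule the output must satisfy

def constrained_cwe_label(snippet: str, max_tries: int = 3) -> str:
    """Rejection-based validation: retry until the output matches the required format."""
    prompt = f"Name the single most likely CWE for this code (format CWE-<number>):\n{snippet}"
    for _ in range(max_tries):
        answer = query_llm(prompt).strip()
        if CWE_PATTERN.fullmatch(answer):
            return answer
        prompt += f"\nYour previous answer {answer!r} did not match the format CWE-<number>."
    raise ValueError("no well-formed label obtained")
```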
Also, taxonomies like ours might be refined to pinpoint which abilities/behaviors [26] (e.g., analytical inference, arithmetic, planning/strategic abilities [68], etc.) or which knowledge domains (e.g., software and systems, automotive, etc.) might have impact or seem key to LLM-enabled SE approaches (e.g., testing, probabilistic programming, etc.). For instance, it seems that unleashing analytical linguistic inference is key to advanced logical vulnerability detection techniques, while other techniques would possibly work fine with a smaller language model trained in a particular domain.

As extra evidence of the importance of focusing on task characteristics to make progress on performance, Natural Language Generation research has been pinpointing that NLG tasks seem to play an important role when analyzing the quality-probability paradox [161, 160] and ultimately deciding an appropriate decoding strategy. Thus, the taxonomy could also help to address focused information-theoretic analyses of model properties with respect to trade-offs [159, 213], persuasion of prompts, and susceptibility of models [56], with particular care to accuracy as the main quality driver.

9.3. Promptware Engineering Based on Downstream Tasks

It is hard to imagine how future generations of LLM-enabled tools will look. Would contexts be significantly larger? Would the attention mechanism improve or be replaced by some other yet-to-develop technology? Would problem-solving be articulated as "agentware"? Would those solutions be able to propose and chain tasks while consistently pursuing the original goal? Would that happen as a verbalized inference chain in context, or as part of the neural inference "flow"? Meanwhile, what we see is how researchers are building LLM-enabled solution proposals as promptware that explicitly elicits different downstream tasks and, from there, we can speculate some of the engineering shortcomings which might also challenge future approaches.
As shown by the RQs, downstream tasks could be regarded as informal blueprints; but could they also actually constitute building blocks for promptware, as components are for plain-old software? Some partial positive answer is given in works like [122, 205, 248], where solutions are made up of some sort of declarative modules which instruct "what" to do and might let prompting and workflow details be modified as part of an optimization process. Yet, in our opinion, many challenges remain to achieve a disciplined modular engineering of such probabilistic programs. For instance, properties of the resulting transference model distribution may vary on subtle non-modular and low-level aspects such as order of information, encoding, and communication (or not) of hidden states. More concretely, it might be the case that the quality performance of the last tasks of a chain of tasks greatly depends on low-level conditioning details of previous tasks in the chain, when one would expect that the last task should be solely dependent on (a function of) the abstract concepts uttered by the previous task's execution. As said, optimization as a paradigm (e.g., [122]) might be a step forward, but principled engineering foundations for the construction and predictive evolution of promptware are in their infancy given the current difficulties in dealing with LLMs as complex systems [87]. In particular, compositional reasoning for predictable and intellectually-controlled construction might be elusive. Future research on how to build, evaluate and compose transference models based on prompting LLMs may benefit from identifying recurrent downstream tasks and how they are typically used together.

10. Threats to Validity

Naturally, due to the youth of the area and the variety of SE problems and communities, LLM-enabled studies and proposals are presented very differently from one paper to another. For instance, LLM interaction sometimes might be described in its low-level raw conversational nature, or as just a short hint on how the LLM is used. That is, downstream tasks are not always explicitly identified nor functionally described in some presentations. We did our best effort in rationalizing (and filling the unknowns in) the inner workings of the studies in scope. However, many details might have been wrongly captured due to our incorrect understanding of those studies and proposals.

There is a risk of the taxonomy being overfitted to the reported papers and unable to fit future LLM-enabled works. Beyond being a risk for any taxonomy, there are a couple of caveats. One is the span and exhaustiveness of the SE topics covered: from falsification to formal verification ones (a span that is rarely covered by surveys). This means that the taxonomy was able to classify a rich variety of approaches for quite different SE problems. Secondly, all papers satisfying the criteria that were mentioned by existing surveys were reported and classified to avoid cherry-picking. Last but not least, we were able to seamlessly accommodate more than 30% of the papers in this study after having defined the main structure of the taxonomy.
11. Conclusion

Both LLM foundational research and the engineering of software tools on top of them are areas that are evolving very rapidly. This work addresses an initial question on the current relevance and nature of (human-defined) downstream tasks in LLM-enabled solutions for some areas related to Software Analysis. It does this by recovering them in the extensively reviewed literature and by building a hierarchical taxonomy of task classes that accommodates all instances of downstream tasks seen. We leverage the mapping to explain some existing abstract patterns in the design of such solutions.

This report does not try to answer another related question: is task-orientation the best conceptual tool (adequate level of abstraction) for engineering LLM-enabled solutions? We believe the answer to this question greatly depends on the future development of theory to engineer this sort of system. Theory and tools for engineering good LLM-enabled approaches are in their infancy. It is our vision that more compositional and abstract concepts linked to the idea of task are required to actually intellectually control the behavior models and probabilistic programs that people are building on top of them. This could be true even if in the future LLMs are used either to solve problems by –autoregressively– proposing and chaining abstract tasks, architecting LLM-enabled software, or working as problem-solving agents.

Acknowledgments

This work was supported by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior – Brazil (CAPES-PROEX), Financing Code 001, and by the Fundação de Amparo à Pesquisa do Estado do Amazonas – FAPEAM, through the POSGRAD 23-24 project. This research was carried out within the SWPERFI UFAM-MOTOROLA R&D&I Project "Técnicas de Inteligência Artificial para Análise e Otimização de Desempenho de Software". This work was also partially supported by CONICET PIP 11220200100084CO, ANPCyT PICT 2018-3835 and 2021-I-A-00755, UBACyT Grants 20020220300079BA and 20020190100126BA, and A-1-2022-1-173516 IDRC-ANII.

References

[1] Joshua Ackerman and George Cybenko. Large language models for fuzzing parsers (Registered report). In Marcel Böhme, Yannic Noller, Baishakhi Ray, and László Szekeres, editors, FUZZING 2023: 2nd International Fuzzing Workshop, pages 31–38, New York, NY, USA, 2023. ACM, doi:10.1145/3605157.3605173.
[2] Lakshya A. Agrawal, Aditya Kanade, Navin Goyal, Shuvendu K. Lahiri, and Sriram K. Rajamani. Monitor-guided decoding of code LMs with static analysis of repository context. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10-16, 2023, 2023.
[3] Baleegh Ahmad, Benjamin Tan, Ramesh Karri, and Hammond Pearce. FLAG: Finding line anomalies (in code) with generative AI, 2023, arXiv:2306.12643.
[4] Mohannad Alhanahnah, Md Rashedul Hasan, and Hamid Bagheri. An empirical evaluation of pre-trained large language models for repairing declarative formal specifications, 2024, arXiv:2404.11050.
[22] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M.
[5] Anahita Alipour, Abram Hindle, and Eleni Stroulia. A contextual ap-
proachtowardsmoreaccurateduplicatebugreportdetection.InThomas Ziegler,JeffreyWu,ClemensWinter,ChristopherHesse,MarkChen, Zimmermann,MassimilianoDiPenta,andSunghunKim,editors,Pro- EricSigler,MateuszLitwin,ScottGray,BenjaminChess,JackClark, ceedingsofthe10thWorkingConferenceonMiningSoftwareReposito- ChristopherBerner, SamMcCandlish, AlecRadford, IlyaSutskever, ries,MSR’13,SanFrancisco,CA,USA,May18-19,2013,pages183–192. andDarioAmodei. Languagemodelsarefew-shotlearners. InHugo IEEEComputerSociety,2013,doi:10.1109/MSR.2013.6624026. Larochelle,Marc’AurelioRanzato,RaiaHadsell,Maria-FlorinaBalcan, [6] FrancesE.Allen. Controlflowanalysis. InRobertS.Northcote,edi- andHsuan-TienLin,editors,AdvancesinNeuralInformationProcess- tor,ProceedingsofaSymposiumonCompilerOptimization,Urbana- ingSystems33:AnnualConferenceonNeuralInformationProcessing Champaign,Illinois,USA,July27-28,1970,pages1–19.ACM,1970, Systems2020,NeurIPS2020,December6-12,2020,virtual,2020. doi:10.1145/800028.808479. [23] Cristian Cadar and Koushik Sen. Symbolic execution for software [7] Mansour Alqarni and Akramul Azim. Low level source code vul- testing: three decades later. Commun. ACM, 56(2):82–90, 2013, nerability detection using advanced BERT language model. In doi:10.1145/2408776.2408795. Iluju Kiringa and Se´bastien Gambs, editors, 35th Canadian Confer- [24] Saikat Chakraborty, Rahul Krishna, Yangruibo Ding, and Baishakhi ence on Artificial Intelligence, Toronto, Ontario, Canada, May 30 Ray. Deep learning based vulnerability detection: Are we there - June 3, 2022. Canadian Artificial Intelligence Association, 2022, yet? IEEE Trans. Software Eng., 48(9):3280–3296, 2022, doi:10.21428/594757DB.B85E6625. doi:10.1109/TSE.2021.3087402.
[8] NadiaAlshahwan,JubinChheda,AnastasiaFinogenova,BelizGokkaya, [25] AaronChan,AnantKharkar,RoshanakZilouchianMoghaddam,Yevhen MarkHarman,InnaHarper,AlexandruMarginean,ShubhoSengupta, Mohylevskyy, Alec Helyar, Eslam Kamal, Mohamed Elkamhawy, andEddyWang.Automatedunittestimprovementusinglargelanguage and Neel Sundaresan. Transformer-based vulnerability detection modelsatMeta. InMarcelod’Amorim,editor,CompanionProceed- in code at EditTime: Zero-shot, few-shot, or fine-tuning?, 2023, ingsofthe32ndACMInternationalConferenceontheFoundationsof arXiv:2306.01754. SoftwareEngineering,FSE2024,PortodeGalinhas,Brazil,July15-19, [26] TylerA.ChangandBenjaminK.Bergen. Languagemodelbehavior: 2024,pages185–196.ACM,2024,doi:10.1145/3663529.3663839. Acomprehensivesurvey. Comput.Linguistics,50(1):293–350,2024, [9] NadiaAlshahwan,MarkHarman,InnaHarper,AlexandruMarginean, doi:10.1162/COLI A 00492. ShubhoSengupta,andEddyWang.AssuredLLM-basedsoftwareengi- [27] YupengChang,XuWang,JindongWang,YuanWu,LinyiYang,Kaijie neering,2024,arXiv:2402.04380.PresentedasaKeynoteatInteNSE Zhu,HaoChen,XiaoyuanYi,CunxiangWang,YidongWang,WeiYe, 24:ACMInternationalWorkshoponInterpretability,Robustness,and YueZhang,YiChang,PhilipS.Yu,QiangYang,andXingXie.Asurvey BenchmarkinginNeuralSoftwareEngineering, April, 2024, Lisbon, onevaluationoflargelanguagemodels.ACMTrans.Intell.Syst.Technol., Portugal. 15(3):39:1–39:45,2024,doi:10.1145/3641289. [10] Sven Amann, Hoan Anh Nguyen, Sarah Nadi, Tien N. Nguyen, [28] BeiChen,FengjiZhang,AnhNguyen,DaoguangZan,ZeqiLin,Jian- and Mira Mezini. A systematic evaluation of static API-misuse GuangLou,andWeizhuChen.CodeT:Codegenerationwithgenerated detectors. IEEE Trans. Software Eng., 45(12):1170–1188, 2019, tests.InTheEleventhInternationalConferenceonLearningRepresen- doi:10.1109/TSE.2018.2827384. tations,ICLR2023,Kigali,Rwanda,May1-5,2023.OpenReview.net, [11] PaulAmmannandJeffOffutt. IntroductiontoSoftwareTesting. Cam- 2023. bridgeUniversityPress,2008,doi:10.1017/CBO9780511809163. [29] ChongChen, JianzhongSu, JiachiChen, YanlinWang, TingtingBi, [12] SaswatAnand,EdmundK.Burke,TsongYuehChen,JohnA.Clark, YanliWang,XingweiLin,TingChen,andZibinZheng.WhenChatGPT MyraB.Cohen,WolfgangGrieskamp,MarkHarman,MaryJeanHar- meetssmartcontractvulnerabilitydetection: Howfararewe?,2023, rold,andPhilMcMinn. Anorchestratedsurveyofmethodologiesfor arXiv:2309.05520. automatedsoftwaretestcasegeneration.J.Syst.Softw.,86(8):1978–2001, [30] HongboChen,YifanZhang,XingHan,HuanyaoRong,YuhengZhang, 2013,doi:10.1016/J.JSS.2013.02.061. TianhaoMao,HangZhang,XiaoFengWang,LuyiXing,andXunChen. [13] ChristelBaierandJoost-PieterKatoen. Principlesofmodelchecking. WitheredLeaf: Finding entity-inconsistency bugs with LLMs, 2024, MITPress,2008. arXiv:2405.01668. [14] PatrickBareiß,BeatrizSouza,Marcelod’Amorim,andMichaelPradel. [31] TsongYuehChen,Fei-ChingKuo,HuaiLiu,Pak-LokPoon,DaveTowey, Codegenerationtools(almost)forfree?Astudyoffew-shot,pre-trained T. H. Tse, and Zhi Quan Zhou. Metamorphic testing: A review of languagemodelsoncode,2022,arXiv:2206.01335. challengesandopportunities.ACMComput.Surv.,51(1):4:1–4:27,2018, [15] EarlT.Barr,MarkHarman,PhilMcMinn,MuzammilShahbaz,andShin doi:10.1145/3143561. Yoo.Theoracleprobleminsoftwaretesting:Asurvey.IEEETrans.Soft- [32] XinyunChen,MaxwellLin,NathanaelScha¨rli,andDennyZhou.Teach- wareEng.,41(5):507–525,2015,doi:10.1109/TSE.2014.2372785. inglargelanguagemodelstoself-debug. 
InTheTwelfthInternational [16] BorisBeizer.Softwaretestingtechniques(2.ed.).VanNostrandReinhold, ConferenceonLearningRepresentations,ICLR2024,Vienna,Austria, 1990. May7-11,2024.OpenReview.net,2024. [17] EmeryBergerandBenZorn. AIsoftwareshouldbemorelikeplain [33] YinfangChen, HuaibingXie, MinghuaMa, YuKang, XinGao, Liu old software. In Dmitry Ponomarev, editor, Computer Architecture Shi,YunjieCao,XuedongGao,HaoFan,MingWen,JunZeng,Supriyo Today, ACM Sigarch, 2024. At https://www.sigarch.org/ Ghosh,XuchaoZhang,ChaoyunZhang,QingweiLin,SaravanRajmo- ai-software-should-be-more-like-plain-old-software/ han, DongmeiZhang, andTianyinXu. Automaticrootcauseanaly- [accessedJuly2024]. sisvialargelanguagemodelsforcloudincidents. InProceedingsof [18] LucaBeurer-Kellner,MarcFischer,andMartinT.Vechev.Promptingis theNineteenthEuropeanConferenceonComputerSystems,EuroSys programming:Aquerylanguageforlargelanguagemodels.Proc.ACM 2024,Athens,Greece,April22-25,2024,pages674–688.ACM,2024,
Program.Lang.,7(PLDI):1946–1969,2023,doi:10.1145/3591300. doi:10.1145/3627703.3629553. [19] ShreyaBhatia,TarushiGandhi,DhruvKumar,andPankajJalote.Unittest [34] YinghaoChen,ZehaoHu,ChenZhi,JunxiaoHan,ShuiguangDeng,and generationusinggenerativeAI:Acomparativeperformanceanalysisof JianweiYin.ChatUniTest:AframeworkforLLM-basedtestgeneration. autogenerationtools.InLLM4Code’24,April20,2024,Lisbon,Portugal, InMarcelod’Amorim,editor,CompanionProceedingsofthe32ndACM pages38:1–38:8.ACM,2024,doi:10.1145/3643795.3648396. InternationalConferenceontheFoundationsofSoftwareEngineering, [20] MarcelBo¨hme,Van-ThuanPham,andAbhikRoychoudhury.Coverage- FSE2024,PortodeGalinhas,Brazil,July15-19,2024,pages572–576. basedgreyboxfuzzingasmarkovchain. IEEETrans.SoftwareEng., ACM,2024,doi:10.1145/3663529.3663801. 45(5):489–506,2019,doi:10.1109/TSE.2017.2785841. [35] YuCheng,JieshanChen,QingHuang,ZhenchangXing,XiweiXu,and [21] Houssem Ben Braiek and Foutse Khomh. On testing ma- QinghuaLu. Promptsapper: ALLM-empoweredproductiontoolfor chine learning programs. J. Syst. Softw., 164:110542, 2020, buildingAIchains. ACMTrans.Softw.Eng.Methodol.,33(5):124:1– doi:10.1016/J.JSS.2020.110542. 124:24,2024,doi:10.1145/3638247. 31REFERENCES REFERENCES [36] BrianChessandGaryMcGraw.Staticanalysisforsecurity.IEEESecur. andNevilleDubash. Towardsaccurateduplicatebugretrievalusing Priv.,2(6):76–79,2004,doi:10.1109/MSP.2004.111. deeplearningtechniques. In2017IEEEInternationalConferenceon [37] AgnieszkaCiborowskaandKostadinDamevski.Fastchangeset-based SoftwareMaintenanceandEvolution,ICSME2017,Shanghai,China, buglocalizationwithBERT.In44thIEEE/ACMInternationalConference September17-22,2017,pages115–124.IEEEComputerSociety,2017, onSoftwareEngineering,ICSE2022,Pittsburgh,PA,USA,May25-27, doi:10.1109/ICSME.2017.69. 2022,pages946–957.ACM,2022,doi:10.1145/3510003.3510042. [52] JacobDevlin,Ming-WeiChang,KentonLee,andKristinaToutanova. [38] EdmundM.Clarke,OrnaGrumberg,SomeshJha,YuanLu,andHel- BERT:pre-trainingofdeepbidirectionaltransformersforlanguageun- mutVeith.Counterexample-guidedabstractionrefinement.InE.Allen derstanding.InJillBurstein,ChristyDoran,andThamarSolorio,editors, EmersonandA.PrasadSistla,editors,ComputerAidedVerification,12th Proceedingsofthe2019ConferenceoftheNorthAmericanChapterof InternationalConference,CAV2000,Chicago,IL,USA,July15-19,2000, theAssociationforComputationalLinguistics:HumanLanguageTech- Proceedings,volume1855ofLectureNotesinComputerScience,pages nologies,NAACL-HLT2019,Minneapolis,MN,USA,June2-7,2019, 154–169.Springer,2000,doi:10.1007/10722167 15. Volume1(LongandShortPapers),pages4171–4186.Associationfor [39] EdmundM.Clarke,OrnaGrumberg,DanielKroening,DoronA.Peled, ComputationalLinguistics,2019,doi:10.18653/V1/N19-1423. andHelmutVeith.Modelchecking,2ndEdition.MITPress,2018. [53] EdsgerW.Dijkstra.Notesonstructuredprogramming.TechnicalReport [40] Patrick Cousot and Radhia Cousot. Abstract interpretation: A uni- 70-WSK-03,EWDVol.249,WSK,Dept.ofMathematicsandComputing fied lattice model for static analysis of programs by construction Science,TechnischeHogeschoolEindhoven,1970. or approximation of fixpoints. In Robert M. Graham, Michael A. [54] ElizabethDinella,GabrielRyan,ToddMytkowicz,andShuvenduK. Harrison, and Ravi Sethi, editors, Conference Record of the Fourth Lahiri. TOGA:Aneuralmethodfortestoraclegeneration. In44th ACMSymposiumonPrinciplesofProgrammingLanguages, LosAn- IEEE/ACMInternationalConferenceonSoftwareEngineering,ICSE geles, California, USA, January 1977, pages 238–252. 
ACM, 1977, 2022,Pittsburgh,PA,USA,May25-27,2022,pages2130–2141.ACM, doi:10.1145/512950.512973. 2022,doi:10.1145/3510003.3510141. [41] PatrickCousotandNicolasHalbwachs. Automaticdiscoveryoflinear [55] DavidDohan,WinnieXu,AitorLewkowycz,JacobAustin,DavidBieber, restraintsamongvariablesofaprogram. InAlfredV.Aho,StephenN. RaphaelGontijoLopes,YuhuaiWu,HenrykMichalewski,RifA.Saurous, Zilles,andThomasG.Szymanski,editors,ConferenceRecordofthe JaschaSohl-dickstein,KevinMurphy,andCharlesSutton. Language FifthAnnualACMSymposiumonPrinciplesofProgrammingLanguages, modelcascades,2022,arXiv:2207.10342. Presentedasspotlightat Tucson,Arizona,USA,January1978,pages84–96.ACMPress,1978, theBeyondBasesworkshopatICML2022. doi:10.1145/512760.512770. [56] KevinDu,Ve´steinnSnæbjarnarson,NiklasStoehr,JenniferC.White, [42] AidanDakhama,KarineEven-Mendoza,WilliamB.Langdon,He´ctorD. AaronSchein,andRyanCotterell. Contextversuspriorknowledgein Mene´ndez,andJustynaPetke. SearchGEM5: TowardsreliableGem5 languagemodels.InLun-WeiKu,AndreMartins,andVivekSrikumar, withsearchbasedsoftwaretestingandlargelanguagemodels.InPaolo editors,Proceedingsofthe62ndAnnualMeetingoftheAssociationfor
Arcaini,TaoYue,andErikM.Fredericks,editors,Search-BasedSoft- ComputationalLinguistics(Volume1:LongPapers),ACL2024,Bangkok, wareEngineering-15thInternationalSymposium,SSBSE2023,San Thailand, August 11-16, 2024, pages 13211–13235. Association for Francisco,CA,USA,December8,2023,Proceedings,volume14415 ComputationalLinguistics,2024. ofLectureNotesinComputerScience,pages160–166.Springer,2023, [57] Ste´phane Ducasse and Damien Pollet. Software architecture recon- doi:10.1007/978-3-031-48796-5 14. struction: Aprocess-orientedtaxonomy. IEEETrans.SoftwareEng., [43] ArghavanMoradiDakhel,AminNikanjam,VahidMajdinasab,Foutse 35(4):573–591,2009,doi:10.1109/TSE.2009.19. Khomh,andMichelC.Desmarais. Effectivetestgenerationusingpre- [58] Karim Elmaaroufi, Devan Shanker, Ana Cismaru, Marcell Vazquez- trainedlargelanguagemodelsandmutationtesting.Inf.Softw.Technol., Chanlatte, Alberto Sangiovanni-Vincentelli, Matei Zaharia, and San- 171:107468,2024,doi:10.1016/J.INFSOF.2024.107468. jitA.Seshia. Generatingprobabilisticscenarioprogramsfromnatural [44] IsaacDavid,LiyiZhou,KaihuaQin,DawnSong,LorenzoCavallaro,and language,2024,arXiv:2405.03709. ArthurGervais.Doyoustillneedamanualsmartcontractaudit?,2023, [59] MadelineEndres,SarahFakhoury,SaikatChakraborty,andShuvenduK. arXiv:2306.12338. Lahiri.Canlargelanguagemodelstransformnaturallanguageintentinto [45] GuidodeCaso,V´ıctorA.Braberman,DiegoGarbervetsky,andSebastia´n formalmethodpostconditions? Proc.ACMSoftw.Eng.,1(FSE):1889– Uchitel. Automatedabstractionsforcontractvalidation. IEEETrans. 1912,2024,doi:10.1145/3660791. SoftwareEng.,38(1):141–162,2012,doi:10.1109/TSE.2010.98. [60] AngelaFan,BelizGokkaya,MarkHarman,MityaLyubarskiy,Shubho [46] JasperDekoninck,MarcFischer,LucaBeurer-Kellner,andMartinT. Sengupta,ShinYoo,andJieM.Zhang.Largelanguagemodelsforsoft- Vechev. Controlledtextgenerationvialanguagemodelarithmetic. In wareengineering:Surveyandopenproblems.InIEEE/ACMInternational TheTwelfthInternationalConferenceonLearningRepresentations,ICLR ConferenceonSoftwareEngineering:FutureofSoftwareEngineering, 2024,Vienna,Austria,May7-11,2024.OpenReview.net,2024. ICSE-FoSE2023,Melbourne,Australia,May14-20,2023,pages31–53. [47] GeleiDeng,YiLiu,V´ıctorMayoralVilches,PengLiu,YuekangLi,Yuan IEEE,2023,doi:10.1109/ICSE-FOSE59343.2023.00008. Xu,MartinPinzger,StefanRass,TianweiZhang,andYangLiu.Pentest- [61] ChongzhouFang,NingMiao,ShauryaSrivastav,JialinLiu,RuoyuZhang, GPT:Evaluatingandharnessinglargelanguagemodelsforautomated Ruijie Fang, Asmita, Ryan Tsang, Najmeh Nazari, Han Wang, and penetrationtesting.InDavideBalzarottiandWenyuanXu,editors,33rd HoumanHomayoun. Largelanguagemodelsforcodeanalysis: Do USENIXSecuritySymposium,USENIXSecurity2024,Philadelphia,PA, llmsreallydotheirjob? InDavideBalzarottiandWenyuanXu,editors, USA,August14-16,2024.USENIXAssociation,2024. 33rdUSENIXSecuritySymposium,USENIXSecurity2024,Philadelphia, [48] YaoDeng,JiaohongYao,ZhiTu,XiZheng,MengshiZhang,andTianyi PA,USA,August14-16,2024.USENIXAssociation,2024. Zhang.TARGET:Automatedscenariogenerationfromtrafficrulesfor [62] Sidong Feng and Chunyang Chen. Prompting is all you need: Au- testingautonomousvehicles,2023,arXiv:2305.06018. tomated android bug replay with large language models. In 46th [49] YinlinDeng,ChunqiuStevenXia,HaoranPeng,ChenyuanYang,and IEEE/ACMInternationalConferenceonSoftwareEngineering,ICSE LingmingZhang.Largelanguagemodelsarezero-shotfuzzers:Fuzzing 2024,Lisbon, Portugal,April14-20,2024, pages67:1–67:13.ACM, deep-learninglibrariesvialargelanguagemodels. 
2404.09537 Machine Learning Techniques for Python Source Code Vulnerability Detection TalayaFarasat JoachimPosegga UniversityofPassau UniversityofPassau Passau,Germany Passau,Germany ABSTRACT 2 EXPERIMENTALDESIGN Softwarevulnerabilitiesareafundamentalreasonfortheprevalence We’reexaminingthesamesoftwarevulnerabilitieshighlightedin ofcyberattacksandtheiridentificationisacrucialyetchallenging [4],[7],and[9],i.e.,SQLinjection,cross-sitescripting(XSS),com- problemincybersecurity.Inthispaper,weapplyandcompare mandinjection,cross-siterequestforgery(XSRF),pathdisclosure, differentmachinelearningalgorithmsforsourcecodevulnerabil- remotecodeexecution,andopenredirect. itydetectionspecificallyforPythonprogramminglanguage.Our Dataset:WeusethedatasetpreparedbyWartschinskietal.[4], experimentalevaluationdemonstratesthatourBidirectionalLong availableat[16],whichiscompiledbytargetingpubliclyaccessible Short-TermMemory(BiLSTM)modelachievesaremarkableperfor- GitHubrepositories.GitHubstandsoutasthelargestrepository mance(averageAccuracy=98.6%,averageF-Score=94.7%,average hostingplatformforsourcecodeglobally,makingitanidealre- Precision=96.2%,averageRecall=93.3%,averageROC=99.3%), sourceforthiswork.Wartschinskietal.[4]gatheradistinctdataset thereby,establishinganewbenchmarkforvulnerabilitydetection foreachvulnerabilitytype.Thedataiscollectedintheformofcom- inPythonsourcecode. mitsthatcontainsecurity-relatedfixes.Sectionsofcodethatare updatedorremovedinthesecommitsarecategorizedasvulnerable, alongwiththesurroundingcodetoprovidecontext.Conversely, CCSCONCEPTS theremainingcodeandthepost-fixversionarelabeled(probably) •Securityandprivacy→Softwareandapplicationsecurity. asnotvulnerable.Weuse70%dataintraining,15%intesting,and 15%inthevalidationset. Word2vec Embeddings: For the training of machine learn- ingalgorithms,itisnecessarytorepresentcodetokensasvectors thatretainthesemanticandsyntacticinformation.Wetrainour word2vecmodelwiththehelpofinstructionsoutlinedhere[16].We 1 INTRODUCTION usethesamehyper-parametersfortheword2vecmodelhighlighted Codeflawsorvulnerabilitiesareprevalentinsoftwaresystemsand in[4],i.e.,trainingiterations:betweenoneandmorethanahun- canpotentiallyleadtosystemcompromise,informationleaks,or dred,vectordimensionality:between5and300,minimumcount: denialofservice.Recognizingtheconstraintsoftraditionalmethods between10and500.Wetestdifferentmodelswithourmachine- (static&dynamiccodeanalyses),andwiththegrowingaccessibility learningalgorithms.Wegetthebestresultswithtrainingiterations: ofopen-sourcesoftwarerepositories,ithasbeenrecommendedto 200,minimumcount:10,andvectordimensionality:300.There- adoptadata-drivenapproachforsoftwarevulnerabilitydetection. fore,weusethisword2vecmodel(10,200,300)withthemajorityof Therefore,variousmachinelearningtechniqueshavebeenapplied machinelearningalgorithms. tolearnvulnerablefeaturesofsourcecode,andtoautomatethe MachineLearningAlgorithms:WeutilizethePythonsci-kit- processofsoftwarevulnerabilityidentification[1–4,7–9,11].Many learnlibrarytoconstructGaussianNaiveBayes(GNB),Decision researchersfocusonsourcecodevulnerabilitydetectionacross Tree,LogisticRegression(LR),andMulti-LayerPerceptron(MLP). 
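For illustration, the word2vec training described above can be sketched as follows. This is a minimal sketch rather than the authors' released code: the corpus loading and tokenization are placeholders, and gensim (version 4 or later) parameter names are assumed (older versions use size/iter instead of vector_size/epochs). Only the hyper-parameter values reported as best in the paper (200 iterations, minimum count 10, 300-dimensional vectors) are set explicitly.

    # Minimal sketch (assumed gensim >= 4 API), not the authors' implementation.
    from gensim.models import Word2Vec

    def train_code_word2vec(tokenized_snippets):
        """tokenized_snippets: list of token lists, e.g. [['def', 'query', '(', ...], ...]"""
        model = Word2Vec(
            sentences=tokenized_snippets,
            vector_size=300,   # embedding dimensionality reported as best
            min_count=10,      # ignore tokens occurring fewer than 10 times
            epochs=200,        # training iterations reported as best
            workers=4,
        )
        return model

    # Usage: look up the vector of a single code token
    # vec = train_code_word2vec(corpus).wv['execute']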
variousprogramminglanguagessuchasJava,C,andC++.Some ForGNB,weemploydefaultparameters.InthecaseofDecision notablestudiesinclude[1,3,5,6,10,11].In2024,Pythoncontinues Tree,defaultparametersareused,withtheexceptionofsettingthe tomaintainitsprominentpositionasoneofthetopprogramming max_depthparameter.Specifically,wesetmax_depthas2forXSS languages[15],andalsoamajorlyusedlanguageonGitHub[14]. andopen_redirectvulnerabilities,whileforothervulnerabilities, Despiteitspopularity,Pythonhasbeenrelativelyoverlookedby weuseamax_depthvalueof5.ForLogisticRegression,default researchers.Onlyafewstudies[4,7,9,13]focusonvulnerability parametersareutilized,exceptforthesolverparameter,whichis detectionspecificallyinPythonprogramminglanguage. settoliblinear.Lastly,inMLP,weoptfordefaultparameters,except Giventheabundanceofmachinelearningalgorithmsandtheir forthemax_iterparameter,whichissetto300. correspondinghyper-parametersavailable,thereispotentialforim- BidirectionalLongShort-TermMemory(BiLSTM)incon- provedresultsinthisdomain.Tobridgethisgap,weapplyandcom- trastwithLSTM,processessequentialdatainbothforwardand parefivedifferentmachinelearningmodels.Notably,ourBiLSTM backwarddirectionswithtwoseparatehiddenlayers.Itenables modeldemonstratessuperiorperformanceascomparedtoallother additionaltrainingbytraversingtheinputdatatwice.Weusethe appliedmodelsandalsowiththeapproachespresentedin[4,7,9]. PythonKerasframework(backendTensorflow)tocreateourBiL- Wealsoopen-sourceallourcodeandmodelsusedinthisstudy STMmodel. forbroaderdissemination.Ourcodeandmodelscanbeaccessed athttps://github.com/Tf-arch/Python-Source-Code-Vulnerability- Detection/tree/main 4202 rpA 51 ]ES.sc[ 1v73590.4042:viXraFarasatandPosegga
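A minimal scikit-learn sketch of the classifier settings stated above: GNB with defaults, a decision tree whose max_depth is 2 for XSS and open redirect and 5 for the remaining vulnerability types, logistic regression with the liblinear solver, and an MLP with max_iter set to 300. Everything not mentioned in the text is left at library defaults, and the feature matrices are assumed to come from the word2vec step; this is an illustration, not the authors' code.

    from sklearn.naive_bayes import GaussianNB
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.neural_network import MLPClassifier

    def build_classifiers(vulnerability):
        # max_depth = 2 for XSS and open redirect, 5 for the other vulnerability types
        depth = 2 if vulnerability in ("xss", "open_redirect") else 5
        return {
            "GNB": GaussianNB(),
            "DecisionTree": DecisionTreeClassifier(max_depth=depth),
            "LR": LogisticRegression(solver="liblinear"),
            "MLP": MLPClassifier(max_iter=300),
        }

    # Usage: models = build_classifiers("xss"); models["LR"].fit(X_train, y_train)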
GNB DecisionTree LR MLP BiLSTM Vulnerabilities Accuracy F-Score Accuracy F-Score Accuracy F-Score Accuracy F-Score Accuracy F-Score SQLinjection 81.0% 0.63% 80.5% 0.97% 83.7% 51.2% 87.6% 65.3% 98.2% 95.3% XSS 8.8% 16.0% 91.5% 15.8% 94.8% 68.9% 95.4% 72.6% 98.8% 93.0% Commandinjection 71.8% 24.5% 86.1% 3.6% 96.0% 83.7% 93.1% 70.4% 99.1% 96.7% XSRF 14.2% 23.6% 86.1% 1.8% 89.6% 56.5% 93.9% 76.8% 98.3% 93.6% Remotecodeexecution 9.5% 15.9% 90.7% 0.74% 98.2% 89.5% 98.9% 93.8% 99.4% 96.5% Pathdisclosure 11.8% 20.8% 85.8% 5.1% 95.8% 81.3% 97.6% 89.8% 99.3% 97.3% Openredirect 14.4% 23.7% 86.6% 1.1% 86.7% 48.9% 89.0% 58.0% 97.5% 90.7% Table1:PerformanceEvaluationofMachineLearningAlgorithms (a)SQLinjectionROC (b)XSSROC (c)CommandinjectionROC (d)XSRFROC Figure 1: ROC Curves (see remaining ROC results here: https://github.com/Tf-arch/Python-Source-Code-Vulnerability- Detection/tree/main) Vulnerabilities Metrics BagheriandHegedűs[9] Wartschinskietal.[4] Wangetal.[7,17] OurModel Precision 82.2% 82.2% 84.4% 96.8% SQLinjection Recall 78.0% 78.0% 73.9% 93.8% F-Score 80.1% 80.1% 78.8% 95.3% Accuracy 92.5% 92.5% - 98.2% Precision 91.9% 91.9% 97.0% 94.8% XSS Recall 80.8% 80.8% 79.7% 91.3% F-Score 86.0% 86.0% 87.5% 93.0% Accuracy 93.8% 97.8% - 98.8% Precision 94.0% 94.0% 92.5% 97.8% Command Recall 87.2% 87.2% 85.2% 95.7% injection F-Score 90.5% 90.5% 88.7% 96.7% Accuracy 95.8% 97.8% - 99.1% Precision 92.9% 92.9% 88.0% 96.7% XSRF Recall 85.4% 85.4% 80.5% 90.7% F-Score 89.0% 89.0% 84.0% 93.6% Accuracy 92.2% 97.2% - 98.3% Precision 96.0% 96.0% 93.5% 97.2% Remotecode Recall 82.6% 82.2% 77.2% 95.9% execution F-Score 88.8% 88.8% 84.6% 96.5% Accuracy 91.1% 98.1% - 99.4% Precision 92.0% 92.0% 90.6% 97.7% Pathdisclosure Recall 84.4% 84.4% 82.7% 96.9% F-Score 88.1% 88.1% 86.5% 97.3% Accuracy 91.3% 97.3% - 99.3% Precision - 91.0% 80.8% 92.5% Openredirect Recall - 83.9% 84.3% 89.0% F-Score - 87.3% 82.5% 90.7% Accuracy - 96.8% - 97.5% Table2:ComparisonofourworkwithrelatedworkMachineLearningTechniquesforPythonSourceCodeVulnerabilityDetection Selection of optimal hyper-parameters for BiLSTM: As fromourapproach.Nevertheless,it’snotablethattheiroverallaver- BiLSTMnetworksarehighlyconfigurablethroughseveralhyper- ageaccuracystandsat97.29%,withanaverageF1-scoreof91.86%. parameters.Choosingthecorrectsetofhyper-parametersiscrucial Incontrast,ourBiLSTMmodelyieldssuperiorperformancewith because it directly impacts the models’ performance and helps anaverageaccuracyof98.6%andanaverageF1-scoreof94.7%. inachievingbenchmarkresults.Wecanalter/tuneseveralhyper- parametervalueslikethenumberofBiLSTMlayers,optimizers, 4 CONCLUSION learningrate,lossfunctions,numberofepochs,andbatchsizes.We Inthiswork,weapplyandcomparefivedifferentmachinelearning experimentwithchangesinthehyper-parametervaluesmanually. 
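A minimal Keras sketch of the best-performing BiLSTM configuration reported in this subsection: three bidirectional LSTM layers of 50 cells each, four dropout layers with rate 0.2, a single-node output layer, the Adam optimizer with mean_squared_error loss, 50 training epochs, and batch size 128. The exact placement of the four dropout layers and the sigmoid output activation are assumptions, and the 300-dimensional word2vec vectors are taken as the per-token inputs; this is an illustrative sketch rather than the released model.

    # Sketch only; layer ordering of the dropout layers is an assumption.
    from tensorflow import keras
    from tensorflow.keras import layers

    def build_bilstm(seq_len, embed_dim=300):
        model = keras.Sequential([
            keras.Input(shape=(seq_len, embed_dim)),
            layers.Dropout(0.2),
            layers.Bidirectional(layers.LSTM(50, return_sequences=True)),
            layers.Dropout(0.2),
            layers.Bidirectional(layers.LSTM(50, return_sequences=True)),
            layers.Dropout(0.2),
            layers.Bidirectional(layers.LSTM(50)),
            layers.Dropout(0.2),
            layers.Dense(1, activation="sigmoid"),  # single output node (assumed sigmoid)
        ])
        model.compile(optimizer="adam", loss="mean_squared_error", metrics=["accuracy"])
        return model

    # Usage: model.fit(X_train, y_train, epochs=50, batch_size=128,
    #                  validation_data=(X_val, y_val))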
modelsfordetectingvulnerabilitiesinPython.OurBiLSTMmodel Asperourexperimentalresults,weobservethatthefollowingcom- with word2vec shows remarkable efficacy (average Accuracy = binationofhyper-parametersachievesremarkableperformance: 98.6%,averageF-Score=94.7%,averagePrecision=96.2%,average oneinputlayer,threehiddenlayerswith50BiLSTMcellsorneu- Recall=93.3%,averageROC=99.3%),surpassingnotonlytheother rons,(BiLSTMcreatestwocopiesofthehiddenlayer,andtheoutput machinelearningmodelsweappliedbutalsooutperformingthe valuesfromtheseBiLSTMsareconcatenated),fourdropoutlayers techniquesdetailedinpriorworks[4,7,9].Webelievethiswork with0.2dropoutrate,andanoutputlayerwithasinglenode.We ishelpfulforresearchersandPythonprogrammersfacingdaily useAdamoptimizerandmean_squared_errorasthelossfunction. challengesrelatedtoidentifyingprogrammingvulnerabilities. Themodelistrainedfor50epochswiththebatchsize=128. PerformanceEvaluation:Weevaluateandcomparetheper- REFERENCES formanceofmachinelearningalgorithmsbasedonaccuracyand [1] Z.Li,D.Zou,S.Xu,H.Jin,Y.Zhu,andZ.Chen“SySeVR:AFrameworkfor F-score,seeTable1.WealsoevaluatetheReceiverOperatingChar- UsingDeepLearningtoDetectSoftwareVulnerabilities,”in IEEETransactions acteristics (ROC) curves on the validation dataset, see Figure 1. onDependableandSecureComputing,Volume:19,2022. [2] S.Chakraborty,R.Krishna,Y.Din,andB.Ray,“DeepLearningBasedVulnerability
ExperimentalresultsshowthatourBiLSTMmodelwiththeopti- Detection:AreWeThereYet?,”inIEEETransactionsonSoftwareEngineering, mizedhyper-parametervalues,caneffectivelydetectthePython Volume:48,2021. [3] R.Russell,L.Kim,L.Hamilton,T.Lazovich,J.Harer,O.Ozdemir,P.Ellingwood sourcecodevulnerabilitieswiththehighestAccuracy,F-Score,and andM.McConley,“AutomatedVulnerabilityDetectioninSourceCodeUsing ROC curve values (average Accuracy= 98.6%, average F-Score= DeepRepresentationLearning,”inIEEEInternationalConferenceonMachine 94.7%,averageROC=99.3%),seeTable1andFigure1. LearningandApplications(ICMLA),OrlandoUSA,2018. [4] L.Wartschinski,Y.Noller,T.Vogel,T.KehrerandL.Grunske,“VUDENC:Vul- nerabilityDetectionwithDeepLearningonaNaturalCodebaseforPython,”in ELSEVIERInformationandSoftwareTechnology,Volume144,2022. 3 RELATEDWORK [5] K.Liu,D.Kim,T.F.Bissyand´e,S.YooandY.LeTraon,“MiningFixPatternsfor FindBugsViolations,”in,IEEETransactionsonSoftwareEngineering,Volume:47, OurworkiscloselyrelatedtoWartschinskietal.[4],Bagheriand 2018. Hegedűs[9],andWangetal.[7].Wartschinskietal.[4]applied [6] R.Rolim,G.Soares,R.GheyiandT.Barik,“LearningQuickFixesfromCode LSTM(withword2vec)forPythonsourcecodevulnerabilitydetec- Repositories,”inACMBrazilianSymposiumonSoftwareEngineering,Brazil,2021. [7] R.Wang,S.Xu,X.Ji,Y.Tian,L.GongandK.Wang,““Anextensivestudyof tion.BagheriandHegedűs[9]resultsshowthatBERTembeddings theefectsofdiferentdeeplearningmodelsoncodevulnerabilitydetectionin withtheLSTMmodelachievethebestoverallaccuracyinpredicting Pythoncode,”inAutomatedSoftwareEngineering,Volume31,2024. [8] ]T.Sharma,M.Kechagia,S.Georgiou,R.Tiwari,I.Vats,H.MoazenandF.Sarro, Pythoncodevulnerabilities.Similarly,Wangetal.[7]presentedthe “Asurveyonmachinelearningtechniquesappliedtosourcecode,”inELSEVIER empiricalstudyofdifferentdeeplearningmodelsforPythonsource JournalofSystemsandSoftware,Volume209,2024. codevulnerabilitydetection.TheyshowthatLSTMandgatedrecur- [9] ]A.BagheriandP.Heged˝us,“AComparisonofDifferentSourceCodeRepre- sentationMethodsforVulnerabilityPredictioninPython,”inSpringerQualityof rentunit(GRU)canindeedbethebestchoice.Theyalsohighlight InformationandCommunicationsTechnology,2021. thatBiLSTMandGRUwithattention(usingword2vec)aretwo [10] J.Fan,Y.Li,S.WangandT.N.Nguyen,“AC/C++CodeVulnerabilityDataset optimalmodelsforPythonsourcecodevulnerabilitydetection. withCodeChangesandCVESummaries,”inACMInternationalConferenceon MiningSoftwareRepositories(MSR),SouthKorea,2020. Followingtheirresearch,weapplyBiLSTMwithword2vecfor [11] D.Zou,S.Wang,S.Xu,Z.Li,andH.Jin,“µVulDeePecker:ADeepLearning-Based Pythonsourcecodevulnerabilitydetection.Differentfromthestud- SystemforMulticlassVulnerabilityDetection,”inIEEETrans.DependableSecure Computing,Volume18(5),2019. ies[4,7,9],wealsoapplyandcompareGNB,LR,DecisionTree,and [12] M.Davari,M.Zulkernine,andF.Jaafar,“AnAutomaticSoftwareVulnerability MLPmodelsaswell,andcomparetheirperformanceswitheach ClassificationFramework,”inIEEEInternationalConferenceonSoftwareSecurity other,seeTable1andFigure1.Moreover,byleveragingtheoptimal andAssurance(ICSSA),USA,2017. [13] M.Ehrenberg,S.Sarkani,andT.A,Mazzuchi“PythonSourceCodeVulnerability settingsofourBiLSTMmodel,weachievearemarkableperfor- DetectionwithNamedEntityRecognition,”inElSEVIERComputers&Security, mance(averageAccuracy=98.6%,averageF-Score=94.7%,average 2024. [14] GithubReport,[Online].Available:https://github.blog/2023-11-08-the-state-of- Precision= 96.2%, average Recall= 93.3%, average ROC= 99.3%), open-source-and-ai/,Accessedon:Jan.05,2024. 
thereby,establishinganewbenchmarkforvulnerabilitydetection [15] Python2024,[Online].Available:https://www.linkedin.com/pulse/top-10-best- inPythonsourcecode,seeTable2fordetailedcomparison(Bagheri programming-languages-learn-2024-superiorcodelabs-yx3ec/, [16] Vudenc,[Online].Available:https://github.com/LauraWartschinski/VulnerabilityDetection, andHegedűs[9](LSTMwithBERT),Wangetal.[7,17](LSTMwith Accessedon:Oct.01,2023. word2vec),ourmodel(BiLSTMwithword2vec)).Ourresultscanbe [17] Resultsofresearchpaper’Anextensivestudyoftheeffectsofdifferentdeep observedhereaswell:https://github.com/Tf-arch/Python-Source- learningmodelsoncodevulnerabilitydetectioninPythoncode’,[Online].Avail- able:https://github.com/AI4CVD/dl4cvd/blob/main/result.xlsx,Accessedon:Feb. Code-Vulnerability-Detection/tree/main. 01,2024. Ehrenbergetal.[13]applynamedentityrecognitiontechniques forrecognizingthePythonsourcecodevulnerabilities.Wedonot compareourresultsindetailwiththem,astheirmethodologypri-
2404.09599 Enhancing Code Vulnerability Detection via Vulnerability-Preserving Data Augmentation ShangqingLiu WeiMa∗ JianWang NanyangTechnologicalUniversity NanyangTechnologicalUniversity NanyangTechnologicalUniversity Singapore Singapore Singapore liu.shangqing@ntu.edu.sg ma_wei@ntu.edu.sg jian004@e.ntu.edu.sg XiaofeiXie RuitaoFeng YangLiu SingaporeManagementUniversity SingaporeManagementUniversity NanyangTechnologicalUniversity Singapore Singapore Singapore xfxie@smu.edu.sg rtfeng@smu.edu.sg yangliu@ntu.edu.sg Abstract Extensiveexperimentscomparedwithstatic-analysis-based Sourcecodevulnerabilitydetectionaimstoidentifyinherent approaches and learning-based approaches have demon- vulnerabilitiestosafeguardsoftwaresystemsfrompotential stratedtheeffectivenessofFGVulDet. attacks.Manypriorstudiesoverlookdiversevulnerability CCSConcepts: •Securityandprivacy→Softwaresecu- characteristics,simplifyingtheproblemintoabinary(0-1) rityengineering. classification task for example determining whether it is vulnerableornot.Thisposesachallengeforasingledeep- Keywords: Vulnerability Detection, Graph Neural Net- learningbasedmodeltoeffectivelylearnthewidearrayof works. vulnerabilitycharacteristics.Furthermore,duetothechal- ACMReferenceFormat: lengesassociatedwithcollectinglarge-scalevulnerability Shangqing Liu, Wei Ma, Jian Wang, Xiaofei Xie, Ruitao Feng, data,thesedetectorsoftenoverfitlimitedtrainingdatasets, andYangLiu.2024.EnhancingCodeVulnerabilityDetectionvia resultinginlowermodelgeneralizationperformance. Vulnerability-PreservingDataAugmentation.InProceedingsofthe Toaddresstheaforementionedchallenges,inthiswork, 25thACMSIGPLAN/SIGBEDInternationalConferenceonLanguages, weintroduceafine-grainedvulnerabilitydetectornamely Compilers,andToolsforEmbeddedSystems(LCTES’24),June24, FGVulDet.Unlikepreviousapproaches,FGVulDet employs 2024,Copenhagen,Denmark.ACM,NewYork,NY,USA,12pages. multipleclassifierstodiscerncharacteristicsofvariousvul- https://doi.org/10.1145/3652032.3657564 nerabilitytypesandcombinestheiroutputstoidentifythe specifictypeofvulnerability.Eachclassifierisdesignedto 1 Introduction learntype-specificvulnerabilitysemantics.Additionally,to Softwarevulnerabilityisdefinedasaweaknessinthesoft- addressthescarcityofdataforsomevulnerabilitytypesand waresystemthatcouldbeexploitedbyathreatsource.With enhancedatadiversityforlearningbettervulnerabilityse- theincreasingnumberofopen-sourcelibrariesandtheex- mantics,weproposeanovelvulnerability-preservingdata pandingsizeofsoftwaresystems,thecountofsoftwarevul- augmentationtechniquetoaugmentthenumberofvulner- nerabilitieshasbeenrisingrapidly.Sincethesevulnerabili- abilities. Taking inspiration from recent advancements in tiescanbeexploitedbymaliciousattackers,causingsignifi- graphneuralnetworksforlearningprogramsemantics,we cantfinancialandsocialdamages,vulnerabilitydetectionand incorporate a Gated Graph Neural Network (GGNN) and patchinghavegarneredwidespreadattentionfromacademia extendittoanedge-awareGGNNtocaptureedge-typein- andindustry.Forinstance,theCommonVulnerabilitiesand formation.FGVulDet istrainedonalarge-scaledatasetfrom Exposures (CVE) Program and the National Vulnerability GitHub,encompassingfivedifferenttypesofvulnerabilities. 
Database(NVD)havebeenestablishedtoidentifyandpatch ∗Correspondingauthor vulnerabilitiesbeforetheyareexploited.Sofar,over100,000 vulnerabilitieshavebeenindexed.However,incontrastto Permissiontomakedigitalorhardcopiesofpartorallofthisworkfor thequantityofopen-sourceprojectsandthespeedofsoft- personalorclassroomuseisgrantedwithoutfeeprovidedthatcopiesare notmadeordistributedforprofitorcommercialadvantageandthatcopies ware iteration, the number of exploited vulnerabilities is bearthisnoticeandthefullcitationonthefirstpage.Copyrightsforthird- insufficient.Inotherwords,thereexistsalargenumberof partycomponentsofthisworkmustbehonored.Forallotheruses,contact "silent"vulnerabilitiesthathavenotbeenexploited. theowner/author(s). Automatedsoftwarevulnerabilitydetectionremainsacru- LCTES’24,June24,2024,Copenhagen,Denmark cialyetfarfromthesettledproblem.Severaltechniqueshave ©2024Copyrightheldbytheowner/author(s). beendevelopedtodetectvulnerabilitiesincludingstaticanal- ACMISBN979-8-4007-0616-5/24/06 https://doi.org/10.1145/3652032.3657564 ysis[11,43,45],fuzzing[6,47],symbolicexecution[2,4,42]. 4202 rpA 51 ]RC.sc[ 1v99590.4042:viXraLCTES’24,June24,2024,Copenhagen,Denmark ShangqingLiu,WeiMa,JianWang,XiaofeiXie,RuitaoFeng,andYangLiu Staticanalysisforvulnerabilitydetectionaimstoidentifyvul- GGNNforeachtypeofvulnerabilityontherealcollected nerabilitiesinthesourcecodewithoutexecution,typically vulnerabilitydatasetfromGitHub.Theneachmodelpredic- requiringsubstantialmanualeffortfromsecurityexpertsto tionresultisensembledforvotingtogivethefinalprediction. craft rules. This approach has limited generalization abil- Furthermore,weproposeanovelvulnerability-preserving
ityacrossdiversevulnerabilities.Dynamictechniques,such dataaugmentationtoenrichthediversityofdataandim- asfuzzingandsymbolicexecution,identifyvulnerabilities prove the prediction performance. On the side of GGNN, bydynamicallyexecutingprograms.Dynamicapproaches weadoptitandfurtherencodetheedgetypeinformation demonstraterelativelyhighprecisioninvulnerabilitydetec- along with the node features during message passing i.e., tion,butconfiguringexecutioniscomplex,andexecution edge-awareGGNNfortheenhancementofthevulnerability resultsmaybeincompletesincenoteveryprogrampathcan detection.Weclaimthattheedgetypeinformationi.e.,“Flow beexecuted. to”, “Control” represents different semantics of programs, Duetothecapabilityofdeeplearning-basedtechniquesto andencodingthemexplicitlyduringthelearningprocess automaticallyextractfeatures,moreresearchfocusesonuti- canfacilitatelearningmoreaccuratecoderepresentations. lizingDLtechniquesforvulnerabilitydetection[9,28,29,31, Anextensiveevaluationisconductedonfivedifferenttypes 40,52,54].InearlyDL-basedvulnerabilitydetectionworks, ofvulnerabilitiescomparedwithsomestatic-analysistools someworks[40]employedconvolutionalneuralnetworks anddeep-learningbasedvulnerabilitydetectionapproaches (CNNs)toleveragetheirpowerfulconvolutioncapabilities haveconfirmedthesuperiorityofourproposedapproach. forlearningvulnerability-relatedfeatures.However,aspro- Furtherablationstudyalsorevealstheeffectivenessofeach grams are not fixed length compared to images, they are componentinFGVulDet.Ourcontributionsareasfollows. notwell-suitedforCNNs.Toavoidthisproblem,someother works[9,29,31,54]treatprogramsasaflatsequenceandap- • Weproposeanovelvulnerability-preservingdataaugmen- plyrecurrentneuralnetworks(RNNs)withLongShort-Term tation technique to enrich the amount of the collected Memory(LSTM)directlytolearnthevulnerabilityfeatures. dataandmitigatethelimitationsofrarevulnerabilitiesin Yet,invulnerabilityscenarios,certaintypesofvulnerabili- quantity. ties,suchasbufferoverflow,arerelatedtodataflow,which • Weadoptanedge-awareGGNNbyincorporatingedge- cannotbecapturedbytheprogramtextalone.Tocapturethe typefeatureswithnodefeaturestoimprovethelearning datadependencyandcontroldependencyofprograms,Liet capacityofGGNNforvulnerabilitydetection. al.[28]proposedaprogramslicingalgorithmbasedonthe • We conduct extensive experiments on the real col- programdependencygraph(PDG)toslicerelatedstatements lectedvulnerabilitydatatoillustratetheeffectivenessof andfeedthemtoBidirectionalRNNsforlearning.However, FGVulDet. itstillfundamentallytreatsprogramsassequences. How to learn well-structured control and data dependen- cies in programs? Devign [52] proposed an effective way 2 Background by encoding programs into a code property graph (CPG) 2.1 ProblemDefinition and utilizing this graph through GGNN [27] for vulnera- bilitydetection,achievingstate-of-the-artperformance.Af- Existing works [12, 30, 40, 52] define source code vulner- terward,thereisagreatnumberofworksusingGNNsto ability identification as a binary {0,1} classification prob- learnprogramsemanticsforsourcecodevulnerabilitydetec- lem i.e., labelling all vulnerable functions as 1, regardless tion[7,38,46,46].However,mostoftheseworkscombined ofthevulnerabletypeofthefunction,whichiscoarsefor various types of vulnerabilities to train a single classifier vulnerabilitydetection.Differently,inthiswork,wefocus for vulnerability detection. 
Moreover, data augmentation oninvestigatingafine-grainedvulnerabilityidentification has been shown to significantly improve performance on problemi.e.,fordifferenttypesofvulnerability,ourgoalis image data [18, 20, 21]. Recent works [23, 34] propose to tolearnthecorrespondingpredictionfunction.Specifically, augmentcodewiththesamefunctionalityvariantsbythe given a dataset 𝐷 = {𝐷 1,𝐷 2,...,𝐷 𝑡}, where 𝐷 𝑡 is the sub- transformationsforcontrastivepre-trainingtolearncode datasetinDforthevulnerabilitytype𝑡 ∈𝑇 and𝑇 isaset functionalityfordifferentdownstreamtasks.However,the ofsourcecodevulnerabilitytypes,weaimatlearningthe definedtransformationsareatthegranularityofthefunction mapping 𝑓 𝑡 ∈ 𝐹 over 𝐷 𝑡 to predict whether the function anditcannotguaranteevulnerability-preservingattribution hasthevulnerabilityoftype𝑡 and𝐹 = {𝑓 1,𝑓 2,...,𝑓 𝑡} isthe forvulnerabilitieswhentransformingafunctiontothevari- predictionfunctionfordifferentvulnerabilitytype.Further- ants.Hence,howtoperformdataaugmentationmeanwhile more, 𝐷 𝑡 = {(𝑐 𝑡,𝑦)|𝑐 𝑡 ∈ C𝑡,𝑦 ∈ Y}, where C𝑡 is a set of preservingthesourcecodevulnerabilityisachallenge. functionswhichcontainsthevulnerablefunctionswiththe To address these challenges, in this paper, we propose vulnerabilitytype𝑡 andthecorrespondingfixedfunctions FGVulDet, which is a fine-grained vulnerability detector. and𝑦 = {0,1}isthelabelsetwith1forthevulnerabilityand Specifically,wetrainmultipleclassifiersviatheenhanced 0forthenon-vulnerability.EnhancingCodeVulnerabilityDetectionviaVulnerability-PreservingDataAugmentation LCTES’24,June24,2024,Copenhagen,Denmark int example() updatesallnodestatesinatotalnumberof𝑇 timesrecur- 𝑇 int example() { int tests[10] int a = tests[1] a > 0 return a
sivelyandattheendofthisiteration,each𝒉𝑛𝑖 represents int tests[10]; informationaboutthenodeandhowitbelongswiththecon- int a = tests[1]; if (a>0) a textofthegraph.Thewell-knownGraphConvolutionNet- return a; int tests[10] int tests[1] a a > 0 else return 0 work(GCN)[25],GatedGraphNeuralNetwork(GGNN)[27] return 0; } 10 tests tests 1 a 0 0 alsofollowEquation1,butthedefinitionsof 𝑓 𝑡 and𝑚𝑡(·) aredifferent.Forexample,GGNN,whichhasbeenwidely AST FlowTo Control Define/Use Reach usedinmodelingsourcecode[1,32,33,52],employsasin- Figure1.AnexampletoillustrateCodePropertyGraph. gleGRUcell[8]for𝑓 𝑡,i.e.,𝑓 𝑡 =𝐺𝑅𝑈(·,·),⊕isasummation operationand𝑚𝑡(𝒉𝑛𝑡 𝑖,𝑘,𝒉𝑛𝑡 𝑗) =𝑬𝑘𝒉𝑛𝑡 𝑗,where𝑬𝑘 isalearned matrix.ThedifferencebetweenGCNandGGNNliesin𝑓 𝑡 is 2.2 CodePropertyGraph ReLUfunction[37]and𝒉𝑛𝑡+ 𝑖1canbeexpressedasfollowing equation: Code property graph (CPG) proposed by Yamaguchi et al.[50],combinesseveralprogramrepresentationse.g.,Ab- stractSyntaxTree(AST),ControlFlowGraph(CFG),Pro- 𝒉𝑛𝑡+ 𝑖1 =ReLU(cid:16) 𝑬𝑡(𝒉𝑛𝑡 𝑖 + ∑︁ 𝒉𝑛𝑡 𝑗)(cid:17) (2) gramDependencyGraph(PDG)intoajointgraphtorepre- ∀𝑛𝑗:𝑛𝑖→𝑛𝑗 sentaprogram.AnillustratedexampleisshowninFigure1. We can observe that AST nodes (defined as black arrows 3 Approach inthegraph)arethebackstoneofCPG.BesidesAST,some other semantic representations i.e., control flow, and pro- 3.1 Overview gramdependencyinformationcanalsobeconstructedon The framework of our approach is illustrated in Figure 2, AST to represent different semantics of the program. For comprisingthreemaincomponents:DataCollection,which example,CFGrepresentsthestatementexecutionorderof constructsarawdatasetwithvarioustypesofvulnerabilities theprogram,and“FlowTo”(bluearrow)representsthisflow fromcommits;Vulnerability-preservingDataAugmentation, orderinCPG.Furthermore,PDGisalsoinvolvedinCPG, whichenhancestheoriginaldatasetwithfivemutationop- andtheedges“Define/Use”(greenarrow),“Reach”(redar- erationsusingacarefullydesignedvulnerability-preserving row)definethedatadependencies,andthe“Control”(yellow slicingalgorithmtomaintaintheoriginalvulnerabilityse- arrow)isthecontroldependencyofaprogram. manticsanddiversifythedata;Edge-awareGGNN,which extendsthecurrentstate-of-the-artGGNNbyintegrating 2.3 GraphNeuralNetworks edgetypefeaturesintonodefeaturesduringmessagepassing formodellearning.Duringmodeltraining,wetrainmulti- GraphNeuralNetworks(GNNs)[25,27]havebeenwidely plebinaryclassifiersfordifferenttypesofvulnerabilities.In employedinmodelingnon-Euclideandatastructuresuch thepredictionphase,eachclassifierprovidesaprediction associalnetworks[19,25],protein-proteininteractionnet- result,andFGVulDetaggregatestheirresultsthroughvoting works[36].TheprimaryobjectiveofaGNNistoidentifypat- toobtainthefinalprediction. ternsingraphdata,relyingoninformationwithinthenodes andtheirinterconnectedness.ThereexistvariousGNNvari- 3.2 DataCollection ants,herewedescribethebroadcategoryofmessage-passing neuralnetworks[17].Supposetheoriginaldatacanbemod- Collectinghigh-qualitydatasetsofvulnerablefunctions,es- elledbyamulti-edgedgraph,denotedas𝐺 = (N,E),where peciallyencompassingvarioustypesofvulnerabilities,poses N = {𝑛 𝑖} is the node set and E is a set of directed edges a significant challenge that necessitates expertise. In this 𝑛 𝑖 →−𝑘 𝑛 𝑗 and𝑘 istheedgetype.Eachnode𝑛 𝑖 isendowed work, we propose an effective method for collecting and 𝑡 labelingdiversetypesofvulnerableandnon-vulnerabledata. 
w nait mh ev lyec 𝑡t .o Tr hre ep nre os de en sta tati to en sa𝒉 r𝑛 e𝑖 uin pd de ax te ed do av seratimestep(hop) Theprocessinvolvesfirstgatheringcommitsrelatedtovul- nerabilities,followedbyextractingpairsoffunctionsfrom (cid:32) (cid:33) thesecommits:thevulnerableversion 𝑓 𝑣 andthepatched 𝒉𝑛𝑡+ 𝑖1=𝑓 𝑡 𝒉𝑛𝑡 𝑖, (cid:202) (cid:16) 𝑚𝑡 (𝒉𝑛𝑡 𝑖,𝑘,𝒉𝑛𝑡 𝑗)(cid:17) (1) version𝑓 𝑝,representingvulnerableandnon-vulnerablefunc- 𝑘 tions,respectively.Thedetailedproceduresareasbelow. ∀𝑛𝑗:𝑛𝑖−→𝑛𝑗 where𝑚𝑡(·)isafunctionthatcomputesthemessagebased 3.2.1 Vulnerability-Related Commit Collection. To ontheedgelabel𝑘.⊕isanaggregationoperatorthatsum- assembleasizableanddiversedatasetofvulnerablefunc- marisesthemessagefromitsneighborsand𝑓 𝑡 istheupdate tions,weinitiatetheprocessbygatheringcommitsfrom1614 functionthatupdatesthestateofnode𝑛 𝑖.Theinitialstate C-languageopen-sourceprojectshostedonGitHub.These ofeachnode𝒉𝑛0 𝑖 isfromnode-levelinformation.Equation1 projectsarechosenfortheirpopularityamongdevelopersLCTES’24,June24,2024,Copenhagen,Denmark ShangqingLiu,WeiMa,JianWang,XiaofeiXie,RuitaoFeng,andYangLiu Data Collection Data Augmentation GGNN-based Detector Edge Vec Vuln … … … raw ro add Edge-aware Graph CPG Node Vec Message Passing Embedding Non-Vuln Projects Filter Vul-Funs Raw del ai rn Augmented Classifier Classifier Classifier Classifier Classifier dataset dataset Mutation Operations CWE-672 CWE-362 CWE-120 CWE-835 CWE-404 Figure2.TheframeworkofFGVulDet. and their diversity in functionality, spanning various do- Table1.KeywordsofFiveVulnerabilityTypes. mainssuchasoperatingsystems,networking,anddatabase applications(e.g.,LinuxKernel,OpenSSL,QEMU). CWE Vulnerability Keywords Toensurethequalityofdatalabelling,wefollowthree memoryleak,informationleak,infoleak,
CWE-404 MemoryLeak leakinfo,memorydisclosure,leakmemory, stepstocollectvulnerability-relatedCommits. leakinformation infiniteloop,endlessloop,longloop, CWE-835 InfiniteLoop • Commit Filtering. We employ vulnerability-related key- infiniterecursion,deeprecursion CWE-120 BufferOverflow bufferoverflow words(showninTable1),whichhavebeenanalyzedand doublefree,double-free,DF,useafterfree, CWE-672 OperationAfterFree summarizedbyateamofprofessionalsecurityresearchers use-after-free,UAF fromalargenumberofcommits.Thesekeywordsinclude CWE-362 RaceConditions raceconditions fiveCommonWeaknessEnumerations(CWE)definedin Commit Message: Fixed buffer overflow spotted by Henrik. theNationalVulnerabilityDatabase(NVD),witheachvul- nerabilitytypehavingoneormoreassociatedkeywords. 1 2 d ii nf df ex-- 6g 4i 6t f0a a/ 9a .m .i ex 9e 3r 8/ ca dm eix 1e 0r 0. 6c 44b/amixer/amixer.c Commitswhosemessagesdonotmatchanyofthekey- 3 --- a/amixer/amixer.c wordsinTable1areexcluded,andtheremainingcommits 4 5 ++ s+ tab t/ ia cmi cx he ar r/a *m si ix me pr l. ec _name(const char *name, char *result) areconsideredmorelikelytoberelatedtovulnerabilities. 6 { Forexample,inFigure3,thevulnerability-relatedcommit 7 8 - - s rt er sn uc lp ty [( sr ie ms pu ll et _, nan ma em _e s, izs ei ]mp =le '_ \n 0a 'm ;e_size); isaccuratelycapturedbythekeyword 9 + strncpy(result, name, simple_name_size - 1); • TypeMatching.Commitsmatchedbykeywordsofmultiple 1 10 1 + r re es tu ul rt n[s ri em sp ul le t_ ;name_size - 1] = '\0'; vulnerabilitytypesareexcluded,aswecannotdetermine 12 } whichvulnerabilitytypetheybelongto.Weretaincom- mitsmatchedbyasinglevulnerabilitytypeandusethat Figure3.PatchforBufferOverflow. typetolabelthecommits. • Commit Pruning. There are some vulnerability-related Anillustrativeexampleofasecuritypatchisshownin commitsthatmaymodifymultiplefunctions,andnotall Figure3,wecangetthechangedstatements𝑆 𝑑𝑒𝑙 and𝑆 𝑎𝑑𝑑 ofthesefunctionsarerelatedtothevulnerability.Wecan- at line 7 to line 8 and line 9 to line 10, respectively. The notautomaticallyidentifywhichfunctionisrelatedtothe vulnerable function 𝑓 𝑣 is composed from line 5 to line 8 vulnerability,toalleviatethisproblem,weexcludethose andline11to12inFigure3,andthepatchedfunction𝑓 𝑝 is commitsthatmodifymorethanonefunction.Afterthe composedfromline5toline6andline9toline12. abovethreesteps,weobtainahigh-qualitycommitdata setwithvulnerabilitytypelabels. 3.3 Vulnerability-preservingDataAugmentation Aswecollectdifferenttypesofvulnerabilitydata,itisdif- 3.2.2 Vulnerable/Non-vulnerable Function Extrac- ficulttoensureeachtypehasasufficientnumberformod- tion. Giventhevulnerability-relatedcommitasinput,we els to learn, hence we propose a data augmentation tech- cangetitscorrespondingsecuritypatch𝑃 𝑣.Weextractvul- nique to scale up the collected data 𝐷 in Section 3.2. Fur- nerablefunctions𝑓 𝑣 andpatchedfunctions𝑓 𝑝 basedonthe thermore,thenewlygenerateddatamustretainthevulner- changestatements(i.e.,theaddedstatements𝑆 𝑎𝑑𝑑 andthe ability characteristics of the original data, i.e., it needs to deleted statements𝑆 𝑑𝑒𝑙) from𝑃 𝑣. In this work, we take 𝑓 𝑣 be vulnerability-preserving. If the vulnerability is lost or asvulnerablefunctionsand𝑓 𝑝 asnon-vulnerablefunctions. compromised, the generated data becomes ineffective for Wecangetatuple(𝑓 𝑣,𝑓 𝑝,𝑆 𝑑𝑒𝑙,𝑆 𝑎𝑑𝑑),where𝑆 𝑑𝑒𝑙,𝑆 𝑎𝑑𝑑 willbe model training. Hence, we propose a novel vulnerability- utilizedforaugmentation(SeeSection3.3). 
preservingdataaugmentationmethodtogeneratenewdataEnhancingCodeVulnerabilityDetectionviaVulnerability-PreservingDataAugmentation LCTES’24,June24,2024,Copenhagen,Denmark fromtheoriginaldataset𝐷.Itprimarilyinvolvestwosteps. Algorithm1:Vulnerability-relatedSlicing The first step (Section 3.3.1) is to slice all the statements Input:(PDG𝑓𝑣,PDG𝑓𝑝,S𝑑𝑒𝑙,S𝑎𝑑𝑑) relatedtothevulnerability.Thesecondstep(Section3.3.2)is Output:S𝑟𝑒𝑙𝑎𝑡𝑒𝑑 toaugmenttheoriginaldataset𝐷 bypreservingtheseman- 1 InitializeS𝑟𝑒𝑙𝑎𝑡𝑒𝑑=set() 2 FunctionSlice(PDG𝑓𝑣,PDG𝑓𝑝,S𝑑𝑒𝑙,S𝑎𝑑𝑑): ticsofvulnerability-relatedstatementsandmodifyingthe 3 fors𝑑𝑒𝑙inS𝑑𝑒𝑙do statementsunrelatedtothevulnerability. 4 S𝑓 =traverse(s𝑑𝑒𝑙,PDG𝑓𝑣,“forward”) 5 forsinS𝑓 do 6 S𝑏=traverse(s,PDG𝑓𝑣,“backward”) 3.3.1 Slicingvulnerability-relatedstatements. Given 7 S𝑟𝑒𝑙𝑎𝑡𝑒𝑑=S𝑟𝑒𝑙𝑎𝑡𝑒𝑑∪S𝑏 a4-tuple(𝑓 𝑣, 𝑓 𝑝,𝑆 𝑑𝑒𝑙,𝑆 𝑎𝑑𝑑)fromSection3.2.2,weslicethe 8 fors𝑎𝑑𝑑inS𝑎𝑑𝑑do statementsthatarerelatedto𝑆 𝑑𝑒𝑙 and𝑆 𝑎𝑑𝑑.Toachievethis, 9 S𝑓 =traverse(s𝑎𝑑𝑑,PDG𝑓𝑝,“forward”) 10 forsinS𝑓 do weneedtogetthestatementdependencyrelationshipina 11 S𝑏=traverse(s,PDG𝑓𝑝,“backward”) function.Weutilizetheprogramdependencygraph(PDG)to 12 S𝑟𝑒𝑙𝑎𝑡𝑒𝑑=S𝑟𝑒𝑙𝑎𝑡𝑒𝑑∪S𝑏 obtainthedatadependencyandcontroldependencyforeach statementin𝑓 𝑣 and𝑓 𝑝.ThegeneratedPDGsaredefinedas 1 13 4 Deft rr ea suve ltr ss =e( ses, t(p ),d vg i, sd iti ere dct =io sn e) t(: ) PDG𝑓𝑣 andPDG𝑓𝑝.BasedonPDG,wedesignanAlgorithm1 15 q=Queue() toslicethevulnerability-relatedstatements.Specifically,for 16 q.push(s) eachstatements𝑑𝑒𝑙/s𝑎𝑑𝑑 in𝑆 𝑑𝑒𝑙/𝑆 𝑎𝑑𝑑,aforwardslicingpro- 1 17 8 while uq =i qs .n po ot pe (m ) ptydo cedureisperformedinPDGtogetalistofrelatedstatements 19 results.append(u) S𝑓 inthefunction𝑓 𝑣/𝑓 𝑝.Thisstepistoensurefindoutthe 2 20 1 ifdire Sc 𝑓tio =n th= e= s“ tf ao tr ew ma erd n” tst th he an tstartfromuin𝑝𝑑𝑔
futuredependentstatementsfromthecurrentstatementi.e., 22 elseifdirection==“backward”then start from current statement. Then based on the obtained 23 S𝑏=thestatementsthatpointtouin𝑝𝑑𝑔 S𝑓, a backward slicing procedure is conducted to extract 24 forvin{S𝑓,S𝑏}do all relevant statements before the S𝑓 i.e., point to current 2 25 6 ifv∉ vv isis iti ete dd .adth d(e vn ) statement.Finally,wecombinebothdirectionsfortheadded 27 q.push(v) statementsS𝑎𝑑𝑑 anddeletedstatementsS𝑑𝑒𝑙 andobtainthe vulnerability-relatedstatementsdenotedasS𝑟𝑒𝑙𝑎𝑡𝑒𝑑. 28 returnresults Table2.MutationOperationsforDataAugmentation. 3.3.2 Augmenting code by operators. As all vulnerability-relatedstatementsi.e.,𝑆 𝑟𝑒𝑙𝑎𝑡𝑒𝑑 areobtained,we Type Definition canaugmentthedataonthevulnerability-unrelatedstate- mentsfromthevulnerablefunction𝑓 𝑣i.e.,{𝑠|𝑠 ∈ 𝑓 𝑣\𝑆 𝑟𝑒𝑙𝑎𝑡𝑒𝑑} add R ade dna itm be acid ke tn oti tfi he ers fui nn ca tia os ns .ignmentstatementand where 𝑠 is the vulnerability-unrelated statement and \ Addaif conditionwhichisthelogicaltruthbefore is the set difference operation between 𝑓 𝑣 and 𝑆 𝑟𝑒𝑙𝑎𝑡𝑒𝑑. ai assignmentstatements. Preserving all vulnerability-related statements in the rn Renametheidentifiers. vulnerablefunctioncanretainitsvulnerabilityandtheproof ro Reorderassignmentstatements. isproducedinSection6.2.Wedefinefivetypesofmutation del Deletestatementsthatarenotrelatedtothevulnerability. operations for vulnerable data augmentation, as shown inTable2.Specifically,theoperationrnmeanstorename theusedvariablenameswithalltheoccurrencesofthese 3.4 Edge-awareGGNN variableswithothernames.Theoperationaimeanstoadda WhileGGNNhasfoundextensiveapplicationinmodeling if conditionwhichisthelogicaltruebeforetheassignment sourcecode[1,15,33,35,52],itisnoteworthythatthemes- statements.Forexample,supposeanassignmentstatement sage passing is solely based on the node representations, inta=b;isvulnerability-unrelated,itcanbetransposedto i.e.,𝒉𝑛𝑖,andtheedgeinformationisoverlooked.Webelieve if (True) then int a = b; after performing the𝑎𝑖 operation. thatthedifferenttypesofedgesintheCodePropertyGraph Operationdelwillrandomlydeletethestatementsthatare (CPG),suchas"Flowto"and"Control"signifydifferentse- not related to vulnerability-related statements, while the mantics of programs, playing a crucial role in vulnerabil- add willrenamevariablenamesinanassignmentstatement itydetection.Buildingonthisinsight,weproposeanedge- andadditbacktotheoriginalfunction𝑓 𝑣,andtheoperation awareGGNNtoleverageedgeinformationeffectivelyfor romeanstoreordertheconsequentassignmentstatements vulnerabilitydetection. in the original function. By different types of mutation 3.4.1 GraphInitialization. Forbothvulnerableandnon- operators,wecangreatlyincreasetheamountoforiginal vulnerable functions, we utilize Joern [50] to obtain the dataandenrichitsdiversity. CodePropertyGraph(CPG).Inaformalrepresentation,aLCTES’24,June24,2024,Copenhagen,Denmark ShangqingLiu,WeiMa,JianWang,XiaofeiXie,RuitaoFeng,andYangLiu raw function 𝑐 can be expressed as a multi-edged graph 3.5 Training 𝑔(V,E), where V is the set of nodes, and (𝑣,𝑢) ∈ E de- InFGVulDet,foreachtypeofvulnerability(refertoTable1), notes the edge connecting node𝑣 and node𝑢. 
Each node onthecorresponding𝐷 𝑡𝑟𝑎𝑖𝑛𝑖,whichisthetrainingsetcon- possessesitsnodesequence,parsedbyJoernfromtheorigi- tainingvulnerableandnon-vulnerablefunctionsforthevul- nalfunction.Wetokenizethenodesequencebyspacesand nerabilitytype𝑖,wetrainasetofbinaryclassifiers𝐹 = {𝐹 𝑖, punctuation.Additionally,forcompoundwords(tokenscon- ∀𝑖 ∈ Vul},whereVul = CWE−{404,835,120,672,362} is structedbyconcatenatingmultiplewordsaccordingtocamel thevulnerabilitytypelisttodetectwhetherthefunctionis orsnakeconventions),wesplitthemintomultipletokens. vulnerableornot.Thelossfunctionfor𝐹 𝑖 isbinarycross Werepresenteachtokeninthenodesequenceandeachedge entropy. type connected with nodes using the learned embedding 𝑠𝑒𝑞𝑡𝑜𝑘𝑒𝑛 𝑒𝑑𝑔𝑒𝑡𝑦𝑝𝑒 𝑙(𝑦′,𝑦) =−(·𝑦·log(𝑦′)+(1−𝑦)∗log(1−𝑦′)) (7) matrix 𝑬 and 𝑬 ,respectively.Subsequently, thenodesandedgesoftheCodePropertyGraph(CPG)can where𝑦′ isthelogit(SeeEquation6)and𝑦 ∈ {0,1} isthe beencodedas: labelwith1forvulnerableand0otherwise.Totally,wehave 𝒉𝑣 =SUM(𝑬𝑠 𝑣𝑒 ,1𝑞𝑡𝑜𝑘𝑒𝑛 ,...,𝑬𝑠 𝑣𝑒 ,𝑙𝑞𝑡𝑜𝑘𝑒𝑛 ) fiveclassifiersaccordingtodifferentvulnerabilitytypes. (3) 𝑒𝑑𝑔𝑒𝑡𝑦𝑝𝑒 𝒆𝑣,𝑢 =𝑬𝑣,𝑢 𝑖𝑓 (𝑣,𝑢) ∈E𝑒𝑙𝑠𝑒0 3.6 Testing where𝑙 denotesthenumberoftokensinthenode𝑣.Hence, SinceFGVulDet targetsfine-grainedvulnerabilitydetection, giventhecodepropertygraph𝑔(V,E),wehave𝑯 ∈R𝑚×𝑑 , wetraineachtypeofvulnerabilityasasinglebinaryclassifier whichdenotestheinitialnodeembeddingmatrixoftheCPG, 𝐹 𝑖 and vote to give the final prediction for a test sample. where𝑚 isthetotalnumberofnodesintheCPGand𝑑 is Specifically,givenafunction 𝑓 𝑣 (resp. 𝑓 𝑝)fromthetestset, thedimensionofthenodeembedding. eachtypeofclassifier𝐹 𝑖isemployedfordetection𝑦 𝑖′ =𝐹 𝑖(𝑓 𝑣) andthepredictedlabelcanbeexpressedasfollows: 3.4.2 Edge-awareMessagePassing. Foreverynode𝑣 at (cid:40) eachcomputationiteration𝑘 inthegraph,weemployan Predicted_label= 1 𝑦 𝑖′ >0.5 (8) aggregationfunctiontocalculatetheaggregatedvector𝒉𝑘 N(𝑣). 0 𝑦 𝑖′ ≤0.5 Thisisachievedbyconsideringasetofneighboringnode where1forvulnerableand0fornon-vulnerable.Thefinal embeddings,aswellastheconnectededgetypeinformation predictionresultisdeterminedthroughamajorityvoting computedfromtheprevioushop.Astheedgeinformation mechanismfromallclassifiersbasedonthemajorityrule. isalsotakenintoaccountinthemessagepassingprocess,it isspecificallyreferredtoasedge-awaremessagepassing. 4 EvaluationSetup 𝒉𝑘 N(𝑣)
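A minimal PyTorch sketch of this edge-aware message passing, i.e., the neighbor aggregation and GRU state update formalized in Equations 4 and 5 below. Dense node-state, edge-embedding, and adjacency tensors are assumed for readability, and this is an illustration rather than the authors' implementation.

    # One edge-aware propagation step (sketch, not the released implementation).
    import torch
    import torch.nn as nn

    class EdgeAwareGGNNLayer(nn.Module):
        def __init__(self, d, d_e):
            super().__init__()
            self.msg = nn.Linear(d + d_e, d)   # W applied to [h_u ; e_{v,u}] (Eq. 4)
            self.gru = nn.GRUCell(d, d)        # node-state update (Eq. 5)

        def forward(self, h, e, adj):
            # h: (m, d) node states; e: (m, m, d_e) edge-type embeddings (zero where no edge);
            # adj: (m, m) 0/1 mask with adj[v, u] = 1 iff u is a neighbour of v.
            m, d = h.shape
            h_u = h.unsqueeze(0).expand(m, m, d)              # neighbour states per target node
            msgs = torch.relu(self.msg(torch.cat([h_u, e], dim=-1)))
            agg = (msgs * adj.unsqueeze(-1)).sum(dim=1)       # SUM over neighbours (Eq. 4)
            return self.gru(agg, h)                           # h^k_v (Eq. 5)

    # Usage: stack this layer for the chosen number of hops, then max-pool the
    # node states to obtain the graph representation fed to the sigmoid classifier.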
=SUM({Relu(𝑾[𝒉𝑢𝑘−1;𝒆𝑣,𝑢])|∀𝑢 ∈N (𝑣)}) (4) • RQ1: What is the performance of FGVulDet compared whereN (𝑣) isasetoftheneighboringnodeswhicharedi- withbaselinesindetectingvulnerablecode? rectlyconnectedwith𝑣,𝑾 ∈R(𝑑+𝑑′)×𝑑 where𝑑 and𝑑′ are • RQ2:Caneachtypeofthedefinedmutationoperations thedimensionofthenodeandedgeembedding,andReluis bebeneficialtoaugmentthetrainingdatasettoimprove therectifiedlinearunit[37].Foreachnode𝑣,𝒉0 𝑣 istheinitial thedetectionaccuracy? nodeembeddingof𝑣,i.e.,𝒉𝑣 ∈ 𝑯. • RQ3:Whatistheperformanceofourdesignededge-aware AGatedRecurrentUnit(GRU)[8]isusedtoupdatethe GGNNcomparedwithotherGNNvariantsforvulnerabil- nodeembeddingsbyincorporatingtheaggregationinforma- itydetection? tion. 𝒉𝑘 𝑣 =GRU(𝒉𝑘 𝑣−1,𝒉𝑘 N(𝑣)) (5) T4. h1 estD ata ist ta is ce st foD re tt ha eil fis vecommonvulnerabilitytypesonthe After𝑛iterationsofcomputation,weobtainthefinalnode collecteddatasetarepresentedinTable3.Wefirstcollect state 𝒉𝑛 𝑣 for node 𝑣. Subsequently, we apply max-pooling a total of 92,525 commits of the five CWE types, then ex- overallnodes𝒉𝑛 𝑣|∀𝑣 ∈Vtoacquirethe𝑑-dimensionalgraph tractvulnerableandpatchedfunctionsfromeachcommit 𝑔 representation𝒉 . asvulnerableandnon-vulnerablefunctions.Afterthat,we 3.4.3 Classification Layer. After the message passing, extract the code property graph (CPG) for each function wecangetthegraphrepresentation𝒉𝑔 anduseitforpredic- and obtain 165,222 graphs in total, which is less than the tion.Specifically,alinerprojectionwithasigmoidactivation numberoftherawfunctionsduetothecompilationerrors functionisusedtomakethefinalprediction. of some functions with Joern [51]. We further conduct a datapreprocessingtoremovefunctionswhosenumberof 𝑦′ =Sigmoid(𝑾′ 𝒉𝑔 ) (6) graph nodes is greater than 800, and finally obtain a raw where𝑦′ isthelogitproducedbythesigmoidfunctionand datasetwithatotalnumberof99,076functions.Wedivide 𝑾′ ∈R𝑑×1isthelearnedmatrix. therawdatasetintoatrainset,validationset,andtestsetEnhancingCodeVulnerabilityDetectionviaVulnerability-PreservingDataAugmentation LCTES’24,June24,2024,Copenhagen,Denmark Table3.TheStatisticsoftheCollectedDataSet. Rawdataset Mutation CWE Commit Function Graph Preprocess train validation test rn/del/add/ai/ro CWE-404 39,261 78,522 67,860 41,816 25,060 8,400 8,356 12,552 CWE-835 16,584 33,168 29,904 16,105 9,638 3,263 3,204 4,839 CWE-120 10,877 21,754 19,800 11,187 6,710 2,250 2,227 3,370 CWE-672 7,906 15,812 15,006 8,689 5,197 1,741 1,751 2,604 CWE-362 17,897 35,794 32,652 21,279 12,768 4,247 4,264 6,390 Total 92,525 185,050 165,222 99,076 59,373 19,901 19,802 29,755 ataratioof6:2:2.Intheend,weperformfivetypesofmu- Devign[52].Itisatypicalworkinvulnerabilitydetection tationoperations(seeTable2)toaugmentthevulnerable utilizinggraphneuralnetworks.Specifically,itcombinesvar- functionsinthetrainsetandgeneratethemutatedfunctions iedsemanticsofafunctionintoaunifiedgraphstructureto for each type of mutation operations with a nearly equal gleanprogramsemantics.Additionally,itemploysaconvo- amount of vulnerable functions. Note that, our dataset is lutionmoduletocapturefeaturesrelatedtovulnerabilities. more challenging than Devign [52]. In particular, the ex- CodeBERT [14]. 
It is a pre-trained model rooted in the tractednon-vulnerablefunctionsinDevign[52]comefrom Transformerarchitectureforcodemodeling.Leveragingmil- non-vulnerablecommits.However,FGVulDet usesthefixed lionsofcodedata,itundergoespre-trainingandsubsequent versionofthecodefromthevulnerability-relatedcommits fine-tuningfordownstreamcode-relatedtasks.Wehavere- asthenon-vulnerablefunctionsforthemodeltolearn.As produceditsimplementationusingthedefaultsettingspro- thenon-vulnerablefunctionsarehighlysimilartothevul- videdintheofficialcodeonourdataset. nerablefunctionsinthisoperationcomparedwithDevign, henceitisamoredifficultdatasetforDL-basedapproaches 4.3 ExperimentalSettings tolearnvulnerabilityfeaturestodistinguishthem. Weutilizedthecommonwordswherethefrequency≥3from thetrainingset,amountingto90,000,tocreatethevocabu- 4.2 Baselines laryset.Thedimensionsofwordandedgeembeddingswere WeevaluateFGVulDet bycomparingitagainstseveralwell- setto128respectively.Adropoutof0.3wasimplemented knownvulnerabilitydetectionapproaches. afterthewordembeddinglayer.Thehopvaluewassetto4 forCWE-404andCWE-672,1forCWE-835,5forCWE-120, 4.2.1 Static-analysis-basedapproaches. VUDDY[24]. and2forCWE-362toachieveoptimalperformance.Weem- Itfollowsaprocesstoabstractthefunctionandthengener- ployedtheAdamoptimizerwithaninitiallearningrateof atesfingerprintsforeachfunctionbyhashingthenormalized 0.001andabatchsizeof64fortraining.Allexperimentswere code.Atargetfunctionisidentifiedasvulnerableifitsfin- conductedontheDGXserverwiththreeNvidiaGraphics gerprintsmatchthoseofvulnerablefunctions. TeslaV100. MVP[49].ItissimilartoVUDDY,whichextractsvulnera- bilityandpatchsignaturesfromavulnerablefunctionand itspatchedcounterpartusingaproposedprogram-slicing 5 ExperimentalResults algorithm.ThenItidentifiesatargetfunctionasvulnerable 5.1 RQ1:ComparedwithBaselines
ifitmatchesthevulnerabilitysignaturebutdoesnotmatch TheexperimentalresultsarepresentedinTable4.Thefirst thepatchsignature. row presents the results for the static-analysis-based ap- 4.2.2 Deep-learning-based approaches. Vuldeep- proaches, the second row is the results for DL-based ap- ecker[29].Itintroducesamethodologyinvolvingextracting proaches.TheresultsforFGVulDet without/withthedata semantically connected statements associated with an augmentationareprovidedintherowofFGVulDet𝑛𝑜𝑛𝑒 and argument of a library/API function call, forming code FGVulDet respectively. gadgets. Following program normalization, which stan- When comparing the results of FGVulDet with static- dardizesuser-definedvariablenamesandfunctionnames, analysis-based approaches, it is obvious that VUDDY a bidirectional LSTM neural network is employed to achievesmuchhigherprecisionscores.Forinstance,inthe determinethefunction’svulnerability. caseofthevulnerabilityCWE-672(OperationAfterFree), Multi-Head Attention [44]. It has been widely used for VUDDYachievesaprecisionscoreof84.62whichismuch modellingsequences.Inparticular,weleveragedthedocu- higherthanthescoreofFGVulDet50.53.Thehigherprecision mentationfromharvardnlp[26]toconstructamulti-head score indicates that VUDDY has fewer false positive sam- attention layer, setting the number of heads to 4 and the ples,whichisreasonableasVUDDYreliesonexpert-crafted maximumsequencelengthto150forcomparativeanalysis. featurestodetectcodevulnerabilities.Thesehand-craftedLCTES’24,June24,2024,Copenhagen,Denmark ShangqingLiu,WeiMa,JianWang,XiaofeiXie,RuitaoFeng,andYangLiu Table4.Theexperimentalresultsofdifferentapproachesforvulnerabilitydetection. CWE-404 CWE-835 CWE-120 CWE-672 CWE-362 Approach P R F1 P R F1 P R F1 P R F1 P R F1 VUDDY 73.17 0.72 1.42 70.00 0.43 0.86 79.41 2.42 4.70 84.62 1.25 2.47 60.00 0.28 0.56 MVP 49.89 45.30 47.48 50.68 46.15 48.31 50.19 48.21 49.18 50.13 43.17 46.39 49.87 45.60 47.64 Vuldeepecker 67.26 38.19 48.71 50.70 58.52 54.33 50.55 49.51 50.02 51.39 56.95 54.02 51.57 43.64 47.28 Attention 56.82 57.66 57.24 50.94 65.22 57.21 55.1 57.91 56.47 52.79 52.73 52.76 51.38 53.75 52.54 Devign 65.86 40.98 50.52 50.79 61.50 55.64 55.30 63.90 59.29 51.56 43.17 46.99 51.16 44.01 47.32 CodeBERT 64.59 54.46 59.10 51.72 64.29 57.32 60.40 35.03 44.34 52.69 63.55 57.61 52.69 60.32 56.24 GGNN 64.96 46.01 53.87 51.64 59.70 55.38 55.71 60.14 57.84 51.67 60.02 55.53 53.49 55.01 54.24 GCN 65.17 40.07 49.63 50.82 76.88 61.19 55.72 64.88 59.95 52.99 60.59 56.54 52.76 54.68 53.71 FGVulDet𝑛𝑜𝑛𝑒 64.45 46.95 54.32 51.07 66.58 57.80 56.52 70.15 62.60 50.54 69.82 58.63 52.10 72.66 60.69 FGVulDet 53.91 81.15 64.78 50.51 94.92 65.93 52.93 86.33 65.63 50.53 92.26 65.30 50.47 90.87 64.89 FGVulDet𝑎𝑑𝑑 61.95 49.74 55.18 50.60 73.53 59.94 55.25 73.37 63.03 50.90 77.45 61.43 50.97 85.89 63.97 FGVulDet𝑎𝑖 62.58 54.51 58.27 50.77 71.11 59.25 56.71 64.61 62.80 50.72 72.21 59.59 51.47 70.94 61.66 FGVulDet𝑑𝑒𝑙 60.80 58.40 59.57 51.47 62.86 56.60 56.28 66.49 60.96 52.48 55.47 53.93 51.57 67.26 58.38 FGVulDet𝑟𝑛 53.41 78.31 63.51 50.49 91.94 65.19 53.20 83.82 65.09 50.45 96.47 66.25 51.03 87.80 64.54 FGVulDet𝑟𝑜 60.38 58.71 59.54 51.41 76.81 61.60 55.32 67.83 60.94 50.18 77.45 60.90 52.05 63.30 57.12 vulnerabilityfeaturesarehighlyreliablebysecurityexperts. 5.2 RQ2:EffectivenessofMutationOperations Therefore,ifthesamplesbeingdetectedexhibitsimilarfea- Inourwork,weintroducefivetypesofmutationoperations tures,thereisahighprobabilitythattheyarevulnerablecode. to augment the data. 
We assess the effectiveness of each However,wecanalsofindthatVUDDYhasalowerrecall operationindividuallybyconductingexperimentsusingonly thanFGVulDeti.e.,1.25vs92.26.ItindicatesthatVUDDYhas onetypeofmutationoperationatatime,whilemaintaining morefalsenegativesamplesasthesehand-craftedvulnerabil- the hyper-parameters consistent with the original model. ityfeaturescanonlycoveralimitednumberofvulnerability The results of these experiments are outlined in the final types,whichleadstomissingasubstantialnumberofvul- rowofTable4,whereFGVulDet∗denotesthespecifictypeof nerabilitiescomparedtoFGVulDet.Inaddition,wefindthat mutationoperationbeingevaluated.Thecombinedresultsof althoughMVPhaslowerprecisionscoresthanVUDDY,itsre- allfivemutationoperationsarepresentedintherowlabeled callscoresarebetterthanVUDDY,whichindicatesthatMVP FGVulDet. coversmorevulnerabilitytypesbutthedetectionprecision Throughtheanalysisoftheexperimentalresults,itisevi- islowerthanVUDDY.Comparedwiththesestatic-analysis- dentthatincorporatingfivetypesofmutationoperationscan basedapproaches,FGVulDetisabletoachieveamuchhigher significantlyimproverecallandF1scores.Forinstance,in recall,whichleadstoahigherF1-score. thecaseofvulnerabilitytypeCWE-404(indicatingmemory
When comparing the results of FGVulDet with the DL- leaks),FGVulDet improvesrecallandF1from46.95/54.32to basedapproaches,wecanfindthatthepre-trainedmodel 81.15/64.78,respectively.Notably,thernoperationstands CodeBERT performs better than other baselines in terms outasthemosteffectiveinenhancingF1acrossdifferentvul- ofF1.WespeculatethatthemainreasonisthatCodeBERT nerabilitytypes.EvenforthevulnerabilityCWE-672,when uses extensive code-related data for pre-training and the fusingothermutationtypesofdatatorn,F1hasadecrease. model architecture is more powerful than the other base- Weconjuncturethatrnoperationappearstobeefficientin lines.Hence,CodeBERThasastrongerlearningcapability. improvingthediversityofthetrainingsetcomparedtoother However, FGVulDet outperforms it in terms of recall and mutation operations, making the model more robust and F1.Evenwithoutdataaugmentationi.e.,FGVulDet𝑛𝑜𝑛𝑒 in powerful. Table4,wecanfindthatitstillhasbetterperformancethan Additionally,differenttypesofmutationsexhibitincon- CodeBERTintermsofF1forvulnerabilitytypesCWE-835, sistentperformancecomparedtotheoriginalmodelwithout 120,672,362),whichindicatestheeffectivenessofourpro- mutations(FGVulDet𝑛𝑜𝑛𝑒).Forexample,themutationopera- posedapproach. tionsadd,ai,andrnimproveF1foralltypesofvulnerabilities comparedtoFGVulDet𝑛𝑜𝑛𝑒.However,thedeloperationhas anegativeimpactexceptforCWE-404(memoryleak),while roimprovesF1forvulnerabilitytypesCWE-{404,835,672} AnswertoRQ1:Althoughsomestatic-analysis-basedap- buthasanegativeimpactforCWE-{120,362}comparedto proacheshavehigherprecisionscoresthanFGVulDet,they haveextensivefalsenegativesamples.Overall,interms FGVulDet𝑛𝑜𝑛𝑒.Thismaybeattributedtothefactthatthedel androoperationsintroducesomeothertypesofvulnerabil- ofRecallandF1,FGVulDet outperformscurrentbaselines itiesinthemutatedfunctions,addingnoisetothedataset includingstatic-analysis-basedapproachesandDL-based and making it challenging for the model to make correct approachesbyasignificantmargin.EnhancingCodeVulnerabilityDetectionviaVulnerability-PreservingDataAugmentation LCTES’24,June24,2024,Copenhagen,Denmark decisions.Formoredetailsaboutthereasonthatdelandro commits that only modified a single function and whose canintroducenewvulnerabilities,pleaserefertoSection6.2. messageswerematchedbyonevulnerabilitytype(referto Despitethenegativeimpactofdelandroonspecificvulnera- Section 3.2). By employing this method, the functions ex- bilitytypes,combiningthemwithothermutationoperations tractedfromthecommitsaremorelikelytobevulnerabilities inFGVulDet yieldsthebestoverallperformance. withthecorrectvulnerabilitytype.Tovalidatethecollected dataset,ateamofprofessionalsecurityresearchersrandomly AnswertoRQ2:Athoroughanalysisoftheperformance selected400commitsfromeachofthe5vulnerabilitytypes, of different mutation operations leads us to the conclu- totaling 2000 commits, and conducted a two-round cross- sion that vulnerability-preserving data augmentation is verificationonthedatalabeling.Theresultsrevealedthat effectiveforfurtherenhancement. 97.3%ofcommitswerecorrectlyclassified(CWE-404was 97.50%,CWE-835was96.50%,CWE-120was97.75%,CWE- 5.3 RQ3:EffectivenessofEdge-awareGGNN 672was98.50%,andCWE-362was96.25%,respectively).The FGVulDet proposestheedge-awareGGNNwhichextends high precision of our dataset indicates that vulnerability- thecurrentGGNNbyencodingtheedge-typeinformation relatedcommitscanbeeffectivelyidentifiedandcorrectly and using it in the message passing process to learn the classifiedusingkeywordsfromanunlabeleddataset. 
vulnerability-relatedfeatures.Toillustratetheeffectiveness oftheproposedmodel,wealsocompareditwithsomeGNN 6.2 ProofofVulnerability-preserving variantssuchasGatedGraphNetwork(GGNN)andGraph Herewediscusstheproofofvulnerability-preservingaug- ConvolutionNetwork(GCN).Theexperimentalresultsare mentation,whichcanpreservethevulnerabilityofafunction showninTable4. afterbeingmodified.First,wehavethefollowingnotation Tomakeafaircomparison,weonlycomparetheresults 𝑆 𝑣𝑢𝑙 denotesthestatementsthattriggerthevulnerabilityin o cof mdi pff ae rr ee dnt wG itN hN GGva Nr Nia ,n st us pw pi lt eh mF eG nV tiu nl gD te ht𝑛 e𝑜𝑛 e𝑒 d. gW ete yfi pn ed int fh oa rt - afunction,and𝑆 𝑣𝑢𝑙𝑑𝑒𝑝 denotesthedependentstatementsof thestatements𝑆 𝑣𝑢𝑙 inafunction.Wehavetheassumption: mationisbeneficialforthemodeltodetectvulnerabilities.It isreasonableasdifferenttypesofvulnerabilitiesareinvolved Assumption: If all 𝑆 𝑣𝑢𝑙 and 𝑆 𝑣𝑢𝑙𝑑𝑒𝑝 are retained after the mutationoperations,thenthevulnerabilitystillexists. indifferentaspectsoftheprogrami.e.,inthedifferenttypes The assumption is correct since the vulnerabilities are of code property graph. For example, the vulnerability of CWE-120(BufferOverflow)isaboutthedefinitionandusage composed of𝑆 𝑣𝑢𝑙 and𝑆 𝑣𝑢𝑙𝑑𝑒𝑝. Hence, as long as we prove ofavariable,whichcanbecapturedintheprogramdepen- thatourproposedmethodretainsall𝑆 𝑣𝑢𝑙 and𝑆 𝑣𝑢𝑙𝑑𝑒𝑝,itis vulnerability-preserving. dency graph via the edges of “Define/Use”. However, the WeunderstandthatcommitsinGitHubprimarilyserveto vulnerabilityofCWE-672(OperationAfterFree)isrelated recordcodechangesfordifferentversions,andthecommits totheexecutionorderofthestatementsanditisreflected
extractedthroughourDataCollectionprocess(Section3.2) in the control flow graph through the edges of “Flow to”. areidentifiedasvulnerability-relatedthroughkeywordfilter- Thus,weembedtheedgetypeinformationexplicitlyand ing.Thecross-validationresultspresentedinSection6.1con- useitforlearningwilldecreasethedifficultyofthemodel firmthereliabilityofthesecommitsconcerningvulnerabili- inidentifyingthevulnerabilityandimprovethemodel’sca- ties.Consequently,thechangedstatements𝑆 𝑑𝑒𝑙 and𝑆 𝑎𝑑𝑑 in pability.Furthermore,wealsocomparewithGCN,which thevulnerabilitycommitareunequivocallyassociatedwith usesmultiplegraphconvolutionallayersinupdatingnode representations,FGVulDet𝑛𝑜𝑛𝑒 stillhasabetterperformance thevulnerability,i.e.,∃𝑆 ⊆𝑆 𝑑𝑒𝑙 ∪𝑆 𝑎𝑑𝑑 ⇒𝑆 ⊆𝑆 𝑣𝑢𝑙 ∪𝑆 𝑣𝑢𝑙𝑑𝑒𝑝. Thissuggeststhatifweapplythevulnerability-relatedslic- excludingCWE-835intermsofF1score. ing algorithm 1 by iteratively searching for related state- AnswertoRQ3:Edge-awareGGNN,whichencodesthe mentsviaPDGtoeachstatementin𝑆 𝑑𝑒𝑙 and𝑆 𝑎𝑑𝑑 separately, edgetypeinformationexplicitlyforlearningcanincrease thenall𝑆 𝑣𝑢𝑙 ∪𝑆 𝑣𝑢𝑙𝑑𝑒𝑝 canbeencompassed,i.e., themodelcapabilityandreducethedifficultyforthemodel todetectvulnerabilities,thusitcanproduceabetterper- {𝑆 𝑣𝑢𝑙 ∪𝑆 𝑣𝑢𝑙𝑑𝑒𝑝}⊆𝑆 𝑟𝑒𝑙𝑎𝑡𝑒𝑑 (9) formance compared with some other GNN variants i.e., Once we obtain the vulnerability-related statements GGNN,GCN. 𝑆 𝑟𝑒𝑙𝑎𝑡𝑒𝑑, we can proceed with data augmentation by con- sideringthesetdifferencebetweentherawvulnerablefunc- 6 Discussion tion𝑓 𝑣 and𝑆 𝑟𝑒𝑙𝑎𝑡𝑒𝑑.Thisaugmentationprocessensuresthe 6.1 DatasetReliability preservationofvulnerabilityintheaugmentedfunctions. To ensure the reliability of our dataset, we established a However,itisimportanttoacknowledgethatthedefined systematicdatacollectionpipeline.Initially,wecrawleda mutationoperationscannotguaranteetheabsenceofnew substantialnumberofcommitsfrom1614C-languageopen- vulnerabilities.Considerthescenariowhereastatementlike source projects on GitHub. Subsequently, we filtered out "usbDevs=NULL;"isunrelatedtovulnerabilitiesandisusedLCTES’24,June24,2024,Copenhagen,Denmark ShangqingLiu,WeiMa,JianWang,XiaofeiXie,RuitaoFeng,andYangLiu tosetthepointer“usbDevs"toNULLtomitigatethevulnera- suchastokensorgraphs,tofacilitatethelearningofmean- bilityof“useafterfree".Ifwedeletethestatement“usbDevs= ingfulprogramrepresentations.Incontrast,ourworkintro- NULL;"usingthemutationoperation𝑑𝑒𝑙,itcouldintroduce ducesavulnerability-preservingdataaugmentationprocess thevulnerabilityCWE-672(OperationAfterFree).Similarly, toenhancevulnerabilitydatafortrainingpurposes,which inthecaseofasharedvariableamongmultiplethreads,if offersadifferentperspective. itsassignmentstatementisplacedbeforethelockoperation, DataAugmentationItisavaluabletechniqueforexpand- usingthemutatedoperation𝑟𝑜 mightresultinavulnerabil- ingdatasetsduringmodeltraininginvariousdomainssuch ityCWE-362(racecondition).Fortunately,weobservethat ascomputervision(CV),naturallanguageprocessing(NLP), the proposed operations𝑟𝑛,𝑎𝑑𝑑, and𝑎𝑖 do not introduce and automated speech recognition (ASR). This technique newvulnerabilities.Onlytwooperations,𝑑𝑒𝑙,and𝑟𝑜,have is particularly useful in domains with limited datasets to thepotentialtogeneratenewvulnerabilities.Moreover,as preventoverfitting,asseeninmedicalimageanalysis[41]. indicated in Section 5.2, the combination of all five muta- Moreover,dataaugmentationhasbeenemployedtoenhance tionoperationsinFGVulDet yieldsthehighestperformance. 
However, it is important to acknowledge that the defined mutation operations cannot guarantee the absence of new vulnerabilities. Consider the scenario where a statement such as "usbDevs = NULL;" is unrelated to the vulnerability under consideration and is used to set the pointer "usbDevs" to NULL to mitigate a use-after-free. If we delete this statement with the mutation operation $del$, we could introduce the vulnerability CWE-672 (Operation After Free). Similarly, for a variable shared among multiple threads, if the mutation operation $ro$ moves its assignment statement before the lock operation, the result might be a CWE-362 (race condition) vulnerability. Fortunately, we observe that the proposed operations $rn$, $add$, and $ai$ do not introduce new vulnerabilities; only the two operations $del$ and $ro$ have the potential to do so. Moreover, as indicated in Section 5.2, the combination of all five mutation operations in FGVulDet yields the highest performance. In summary, ensuring the absence of new vulnerabilities when modifying the semantics of the original function is a significant challenge, and addressing it remains future work.
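As a complementary illustration of the augmentation step discussed above, the sketch below applies mutation operators only to statements outside $S_{related}$ (the set difference with the raw vulnerable function $f_v$), and shows how one could, if desired, disable the two operators the text identifies as risky ($del$ and $ro$). The statement representation, the operator callables (modelled here as simple line-to-line transformations), and the "safe_only" switch are hypothetical names introduced for this example, not the paper's implementation; FGVulDet itself uses all five operators.

```python
import random

# Operators that, per the discussion above, may introduce new vulnerabilities.
RISKY_OPERATORS = {"del", "ro"}

def augment_function(statements, s_related, operators, safe_only=False, seed=0):
    """Mutate only statements outside S_related, preserving the vulnerability.

    statements: list of (stmt_id, code_line) tuples for the function f_v.
    s_related:  set of statement ids that must stay untouched.
    operators:  dict name -> callable(code_line) -> mutated code_line.
    safe_only:  if True, skip the operators that may add new vulnerabilities.
    """
    rng = random.Random(seed)
    usable = {
        name: fn for name, fn in operators.items()
        if not (safe_only and name in RISKY_OPERATORS)
    }
    augmented = []
    for stmt_id, code in statements:
        if stmt_id in s_related:
            augmented.append((stmt_id, code))            # keep S_vul / S_vuldep intact
            continue
        name = rng.choice(list(usable))
        augmented.append((stmt_id, usable[name](code)))  # mutate an unrelated statement
    return augmented


# Toy usage with a single illustrative operator ("rn": rename an identifier).
ops = {"rn": lambda line: line.replace("tmp", "tmp_renamed")}
func = [(1, "int tmp = read();"), (2, "use(buf);")]
print(augment_function(func, s_related={2}, operators=ops))
```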
6.3 Threats to Validity
The first potential threat concerns the limited scope of the collected dataset, which includes only five types of vulnerabilities in the C programming language. While this does not cover all possible vulnerabilities, we assert that the selected vulnerability types are common and capable of causing significant harm to software systems. Moreover, the proposed model is not restricted to this specific dataset and can be extended to detect other types of vulnerabilities. Another potential threat is the exclusion of functions whose graphs exceed 800 nodes. This practice is consistent with Devign [52] and is due to the GPU memory requirements of GNNs; truncating the graph size is a pragmatic choice to make the experiments feasible, as GNNs require substantial GPU memory.

7 Related Work
Vulnerability Detection. Static analysis plays a crucial role in identifying flaws and errors in a codebase to prevent the introduction of vulnerabilities and security bugs. Tools like Flawfinder [48] and CPPCheck [3] provide extensive static code analysis, helping discover bugs and vulnerabilities early. VUDDY [24] utilizes a clone-based approach by matching the signatures of vulnerable functions with target program signatures. Another approach by Xiao et al. [49] employs novel slicing techniques while incorporating vulnerability signatures. However, these analyses often require significant context and extensive expert effort. With the increasing popularity of deep learning, various works, such as VulDeePecker [29], VulSniper [12], and Devign [52], have utilized deep learning techniques to predict and detect vulnerabilities. Chakraborty et al. [5] systematically study existing deep-learning-based vulnerability detection approaches. Beyond function-level vulnerability detection, LineVD [22] leverages graph neural networks to locate buggy statements, and LineVul [16] employs a transformer to identify vulnerable lines. These methods typically use an intermediate representation of programs, such as tokens or graphs, to facilitate the learning of meaningful program representations. In contrast, our work introduces a vulnerability-preserving data augmentation process to enhance the vulnerability data used for training, which offers a different perspective.

Data Augmentation. Data augmentation is a valuable technique for expanding datasets during model training in various domains such as computer vision (CV), natural language processing (NLP), and automated speech recognition (ASR). It is particularly useful in domains with limited data to prevent overfitting, as seen in medical image analysis [41]. Data augmentation has also been employed to improve model performance: studies [13, 39] introduce new augmentation methods aimed at better image classification performance. In code representation learning, Jain et al. [23] and Liu et al. [34] explore data augmentation for pre-training, while Zhuo et al. [53] and Dong et al. [10] survey source code augmentation. The existing strategies are semantic-preserving at the function granularity, which may not preserve the vulnerability when generating mutations of vulnerable code. In contrast, our work introduces vulnerability-preserving data augmentation: we augment the original dataset by preserving the semantics of vulnerability-related statements while modifying statements unrelated to the vulnerability. This ensures that the generated variants retain the vulnerability information, distinguishing our approach from other semantic-preserving strategies.

8 Conclusion
In this paper, we introduce a fine-grained vulnerability detector named FGVulDet, which employs multiple classifiers to learn the characteristics of various vulnerability types for source code vulnerability detection. To address the scarcity of data for some vulnerability types, we propose a novel vulnerability-preserving data augmentation technique. Furthermore, we extend GGNN to an edge-aware variant to capture edge-type information. Extensive experiments have confirmed the effectiveness of FGVulDet.

9 Acknowledgment
This research is supported by the National Research Foundation, Singapore, and the Cyber Security Agency under its National Cybersecurity R&D Programme (NCRP25-P04-TAICeN), the National Research Foundation, Singapore, and DSO National Laboratories under the AI Singapore Programme (AISG Award No: AISG2-GC-2023-008), and NRF Investigatorship NRF-NRFI06-2020-0001. Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not reflect the views of the National Research Foundation, Singapore, and the Cyber Security Agency of Singapore.

References
[1] Miltiadis Allamanis, Marc Brockschmidt, and Mahmoud Khademi. 2017. Learning to represent programs with graphs. arXiv preprint arXiv:1711.00740 (2017).
[2] Domagoj Babić, Lorenzo Martignoni, Stephen McCamant, and Dawn Song. 2011. Statically-directed dynamic automated test generation. In Proceedings of the 2011 International Symposium on Software Testing and Analysis. 12–22.
[3] CERN. 2007. CPPCheck. http://cppcheck.sourceforge.net/
[4] Sang Kil Cha, Maverick Woo, and David Brumley. 2015. Program-adaptive mutational fuzzing. In 2015 IEEE Symposium on Security and Privacy. IEEE, 725–741.
[5] Saikat Chakraborty, Rahul Krishna, Yangruibo Ding, and Baishakhi Ray. 2021. Deep learning based vulnerability detection: Are we there yet? IEEE Transactions on Software Engineering (2021).
[6] Hongxu Chen, Yinxing Xue, Yuekang Li, Bihuan Chen, Xiaofei Xie, Xiuheng Wu, and Yang Liu. 2018. Hawkeye: Towards a desired directed grey-box fuzzer. In Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. 2095–2108.
[7] Xiao Cheng, Haoyu Wang, Jiayi Hua, Miao Zhang, Guoai Xu, Li Yi, and Yulei Sui. 2019. Static detection of control-flow-related vulnerabilities using graph embedding. In 2019 24th International Conference on Engineering of Complex Computer Systems (ICECCS). IEEE, 41–50.
[8] Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation. In EMNLP. 1724–1734.
[9] Hoa Khanh Dam, Truyen Tran, Trang Pham, Shien Wee Ng, John Grundy, and Aditya Ghose. 2017. Automatic feature learning for vulnerability prediction. arXiv preprint arXiv:1708.02368 (2017).
[10] Zeming Dong, Qiang Hu, Yuejun Guo, Zhenya Zhang, Maxime Cordy, Mike Papadakis, Yves Le Traon, and Jianjun Zhao. 2023. Boosting Source Code Learning with Data Augmentation: An Empirical Study. arXiv preprint arXiv:2303.06808 (2023).
[11] Xiaoning Du, Bihuan Chen, Yuekang Li, Jianmin Guo, Yaqin Zhou, Yang Liu, and Yu Jiang. 2019. Leopard: Identifying vulnerable code for vulnerability assessment through program metrics. In Proceedings of the 41st International Conference on Software Engineering. 60–71.
[12] Xu Duan, Jingzheng Wu, Shouling Ji, Zhiqing Rui, Tianyue Luo, Mutian Yang, and Yanjun Wu. 2019. VulSniper: Focus Your Attention to Shoot Fine-Grained Vulnerabilities. In IJCAI. 4665–4671.
[13] Alhussein Fawzi, Horst Samulowitz, Deepak Turaga, and Pascal Frossard. 2016. Adaptive data augmentation for image classification. In 2016 IEEE International Conference on Image Processing (ICIP). IEEE, 3688–3692.
[14] Zhangyin Feng, Daya Guo, Duyu Tang, Nan Duan, Xiaocheng Feng, Ming Gong, Linjun Shou, Bing Qin, Ting Liu, Daxin Jiang, et al. 2020. CodeBERT: A pre-trained model for programming and natural languages. arXiv preprint arXiv:2002.08155 (2020).
[15] Patrick Fernandes, Miltiadis Allamanis, and Marc Brockschmidt. 2018. Structured neural summarization. arXiv preprint arXiv:1811.01824 (2018).
[16] Michael Fu and Chakkrit Tantithamthavorn. 2022. LineVul: A Transformer-Based Line-Level Vulnerability Prediction. In Proceedings of the 19th International Conference on Mining Software Repositories (Pittsburgh, Pennsylvania) (MSR '22). Association for Computing Machinery, New York, NY, USA, 608–620. https://doi.org/10.1145/3524842.3528452
[17] Justin Gilmer, Samuel S Schoenholz, Patrick F Riley, Oriol Vinyals, and George E Dahl. 2017. Neural message passing for quantum chemistry. In Proceedings of the 34th International Conference on Machine Learning - Volume 70. JMLR.org, 1263–1272.
[18] Chengyue Gong, Tongzheng Ren, Mao Ye, and Qiang Liu. 2020. MaxUp: A simple way to improve generalization of neural network training. arXiv preprint arXiv:2002.09024 (2020).
[19] Will Hamilton, Zhitao Ying, and Jure Leskovec. 2017. Inductive representation learning on large graphs. In Advances in Neural Information Processing Systems. 1024–1034.
[20] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 770–778.
[21] Dan Hendrycks, Norman Mu, Ekin D Cubuk, Barret Zoph, Justin Gilmer, and Balaji Lakshminarayanan. 2019. AugMix: A simple data processing method to improve robustness and uncertainty. arXiv preprint arXiv:1912.02781 (2019).
[22] David Hin, Andrey Kan, Huaming Chen, and M. Ali Babar. 2022. LineVD: Statement-Level Vulnerability Detection Using Graph Neural Networks. In Proceedings of the 19th International Conference on Mining Software Repositories (Pittsburgh, Pennsylvania) (MSR '22). Association for Computing Machinery, New York, NY, USA, 596–607. https://doi.org/10.1145/3524842.3527949
[23] Paras Jain, Ajay Jain, Tianjun Zhang, Pieter Abbeel, Joseph E Gonzalez, and Ion Stoica. 2020. Contrastive Code Representation Learning. arXiv preprint arXiv:2007.04973 (2020).
[24] Seulbae Kim, Seunghoon Woo, Heejo Lee, and Hakjoo Oh. 2017. VUDDY: A scalable approach for vulnerable code clone discovery. In 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 595–614.
[25] Thomas N Kipf and Max Welling. 2016. Semi-supervised classification with graph convolutional networks. arXiv preprint arXiv:1609.02907 (2016).
[26] Guillaume Klein, Yoon Kim, Yuntian Deng, Jean Senellart, and Alexander M. Rush. 2017. OpenNMT: Open-Source Toolkit for Neural Machine Translation. In Proc. ACL. https://doi.org/10.18653/v1/P17-4012
[27] Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. 2015. Gated graph sequence neural networks. arXiv preprint arXiv:1511.05493 (2015).
[28] Zhen Li, Deqing Zou, Shouhuai Xu, Hai Jin, Yawei Zhu, and Zhaoxuan Chen. 2021. SySeVR: A framework for using deep learning to detect software vulnerabilities. IEEE Transactions on Dependable and Secure Computing (2021).
[29] Zhen Li, Deqing Zou, Shouhuai Xu, Xinyu Ou, Hai Jin, Sujuan Wang, Zhijun Deng, and Yuyi Zhong. 2018. VulDeePecker: A deep learning-based system for vulnerability detection. arXiv preprint arXiv:1801.01681 (2018).
[30] Zhen Li, Deqing Zou, Shouhuai Xu, Xinyu Ou, Hai Jin, Sujuan Wang, Zhijun Deng, and Yuyi Zhong. 2018. VulDeePecker: A Deep Learning-Based System for Vulnerability Detection. In 25th Annual Network and Distributed System Security Symposium (NDSS 2018) (San Diego, California, USA).
[31] Guanjun Lin, Jun Zhang, Wei Luo, Lei Pan, Olivier De Vel, Paul Montague, and Yang Xiang. 2019. Software vulnerability discovery via learning multi-domain knowledge bases. IEEE Transactions on Dependable and Secure Computing (2019).
[32] Shangqing Liu. 2020. A Unified Framework to Learn Program Semantics with Graph Neural Networks. In 2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE). IEEE, 1364–1366.
[33] Shangqing Liu, Yu Chen, Xiaofei Xie, Jingkai Siow, and Yang Liu. 2020. Retrieval-augmented generation for code summarization via hybrid GNN. arXiv preprint arXiv:2006.05405 (2020).
[34] Shangqing Liu, Bozhi Wu, Xiaofei Xie, Guozhu Meng, and Yang Liu. 2023. ContraBERT: Enhancing code pre-trained models via contrastive learning. arXiv preprint arXiv:2301.09072 (2023).
[35] Shangqing Liu, Xiaofei Xie, Jingkai Siow, Lei Ma, Guozhu Meng, and Yang Liu. 2023. GraphSearchNet: Enhancing GNNs via capturing global dependencies for semantic code search. IEEE Transactions on Software Engineering (2023).
[36] David Z. Morris. 2017. How Equifax Turned Its Massive Hack Into an Even Worse Dumpster Fire. http://fortune.com/2017/09/09/equifax-hack-crisis/
[37] Vinod Nair and Geoffrey E Hinton. 2010. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10). 807–814.
[38] Van-Anh Nguyen, Dai Quoc Nguyen, Van Nguyen, Trung Le, Quan Hung Tran, and Dinh Phung. 2022. ReGVD: Revisiting graph neural networks for vulnerability detection. In Proceedings of the ACM/IEEE 44th International Conference on Software Engineering: Companion Proceedings. 178–182.
[39] Luis Perez and Jason Wang. 2017. The effectiveness of data augmentation in image classification using deep learning. arXiv preprint arXiv:1712.04621 (2017).
[40] Rebecca Russell, Louis Kim, Lei Hamilton, Tomo Lazovich, Jacob Harer, Onur Ozdemir, Paul Ellingwood, and Marc McConley. 2018. Automated Vulnerability Detection in Source Code Using Deep Representation Learning. In 2018 17th IEEE International Conference on Machine Learning and Applications (ICMLA). IEEE, 757–762.
[41] Hoo-Chang Shin, Neil A Tenenholtz, Jameson K Rogers, Christopher G Schwarz, Matthew L Senjem, Jeffrey L Gunter, Katherine P Andriole, and Mark Michalski. 2018. Medical image synthesis for data augmentation and anonymization using generative adversarial networks. In International Workshop on Simulation and Synthesis in Medical Imaging. Springer, 1–11.
[42] Nick Stephens, John Grosen, Christopher Salls, Andrew Dutcher, Ruoyu Wang, Jacopo Corbetta, Yan Shoshitaishvili, Christopher Kruegel, and Giovanni Vigna. 2016. Driller: Augmenting Fuzzing Through Selective Symbolic Execution. In NDSS, Vol. 16. 1–16.
[43] Julien Vanegue and Shuvendu K Lahiri. 2013. Towards practical reactive security audit using extended static checkers. In 2013 IEEE Symposium on Security and Privacy. IEEE, 33–47.
[44] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. 5998–6008.
[45] John Viega, Jon-Thomas Bloch, Yoshi Kohno, and Gary McGraw. 2000. ITS4: A static vulnerability scanner for C and C++ code. In Proceedings 16th Annual Computer Security Applications Conference (ACSAC '00). IEEE, 257–267.
[46] Huanting Wang, Guixin Ye, Zhanyong Tang, Shin Hwei Tan, Songfang Huang, Dingyi Fang, Yansong Feng, Lizhong Bian, and Zheng Wang. 2020. Combining graph-based learning with automated data collection for code vulnerability detection. IEEE Transactions on Information Forensics and Security 16 (2020), 1943–1958.
[47] Junjie Wang, Bihuan Chen, Lei Wei, and Yang Liu. 2017. Skyfire: Data-driven seed generation for fuzzing. In 2017 IEEE Symposium on Security and Privacy (SP). IEEE, 579–594.
[48] David A. Wheeler. 2017. Flawfinder. https://www.dwheeler.com/flawfinder/
[49] Yang Xiao, Bihuan Chen, Chendong Yu, Zhengzi Xu, Zimu Yuan, Feng Li, Binghong Liu, Yang Liu, Wei Huo, Wei Zou, et al. 2020. MVP: Detecting Vulnerabilities using Patch-Enhanced Vulnerability Signatures. In 29th USENIX Security Symposium (USENIX Security 20). 1165–1182.
[50] Fabian Yamaguchi, Nico Golde, Daniel Arp, and Konrad Rieck. 2014. Modeling and discovering vulnerabilities with code property graphs. In 2014 IEEE Symposium on Security and Privacy. IEEE, 590–604.
[51] Fabian Yamaguchi, Nico Golde, Daniel Arp, and Konrad Rieck. 2014. Modeling and Discovering Vulnerabilities with Code Property Graphs. In Proceedings of the 2014 IEEE Symposium on Security and Privacy (SP '14). IEEE Computer Society, Washington, DC, USA, 590–604. https://doi.org/10.1109/SP.2014.44
[52] Yaqin Zhou, Shangqing Liu, Jingkai Siow, Xiaoning Du, and Yang Liu. 2019. Devign: Effective vulnerability identification by learning comprehensive program semantics via graph neural networks. In Advances in Neural Information Processing Systems. 10197–10207.
[53] Terry Yue Zhuo, Zhou Yang, Zhensu Sun, Yufei Wang, Li Li, Xiaoning Du, Zhenchang Xing, and David Lo. 2023. Data Augmentation Approaches for Source Code Models: A Survey. arXiv preprint arXiv:2305.19915 (2023).
[54] Deqing Zou, Sujuan Wang, Shouhuai Xu, Zhen Li, and Hai Jin. 2019. μVulDeePecker: A deep learning-based system for multiclass vulnerability detection. IEEE Transactions on Dependable and Secure Computing (2019).

Received 2024-02-29; accepted 2024-04-01